Search Results

Search found 14053 results on 563 pages for 'upk pro (knowledge pathwa'.

Page 152/563 | < Previous Page | 148 149 150 151 152 153 154 155 156 157 158 159  | Next Page >

  • Website Suddenly Stopped Showing in Google Search Results

    - by Aman Virk
    I have a design and development blog, http://www.thetutlage.com (1.5 years old), which was doing really well in Google search; I was getting over 70% of my traffic from Google. In the last two days that share suddenly dropped from 70% to 20%, and when I search for the exact posts I created, even after appending my website name, Google shows no results for them. Sample search text: JQuery Game Programming Creating A Ping Pong Game Part 1. I have a post with that exact title, and it does not show up anywhere in Google search. I am totally shocked; I write my own unique content and follow Google's guidelines to the letter. There is also no message under my Webmaster Tools account reporting any problem or error.

    Read the article

  • How does one block unsupported web browsers?

    - by Sn3akyP3t3
    Web browsers that have reached end of life no longer receive security updates, which not only leaves the end user vulnerable, but I imagine is also unsafe for the servers they visit. Is it practical to block such browsers, or to notify the end user that their browser is unsafe and unsupported? If so, how would one achieve that? I don't know of any official or crowd-sourced listing of end-of-life browsers to parse and keep up to date. I'm aware that the practice can be custom-built with user-agent parsing and feature detection for HTML5-enabled browsers.
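
    A minimal sketch of the user-agent approach in PHP, assuming a hand-maintained blocklist (the version cut-off below is illustrative, not an authoritative end-of-life list):

        <?php
        // Map of browser token => highest end-of-life major version (illustrative).
        $unsupported = array(
            'MSIE' => 8, // e.g. treat IE 8 and below as unsupported
        );

        $ua = isset($_SERVER['HTTP_USER_AGENT']) ? $_SERVER['HTTP_USER_AGENT'] : '';

        foreach ($unsupported as $token => $maxVersion) {
            // Match e.g. "MSIE 8" and compare the major version number.
            if (preg_match('/' . preg_quote($token, '/') . '[ \/](\d+)/', $ua, $m)
                && (int) $m[1] <= $maxVersion) {
                header('HTTP/1.1 403 Forbidden');
                exit('Your browser is no longer supported. Please upgrade.');
            }
        }

    Since user-agent strings can be spoofed or missing, client-side feature detection (showing an upgrade notice when an HTML5 API is absent) is the usual complement rather than a replacement.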

    Read the article

  • Contact YouTube

    - by takeshin
    Is there a direct way to contact a "real" person, a YouTube employee? Someone created an account for the company I work for (a previous employee). She entered some password and e-mail. The e-mail she provided was valid, but since the last login (more than two years ago) we have changed our domain, so that e-mail is probably not valid anymore (and we don't even know it), which means we can't use the password reset option. I have used all the options in the YouTube help center, and none of them worked. We also can't contact this previous employee to get any of the data she entered in the registration form. The only data I know is the username. All the videos present our company's products, and there are links to our site in the video descriptions, so there should be no problem proving that the account is ours. This is an urgent case, because the videos contain outdated information.

    Read the article

  • WordPress WooCommerce PHP placeholder usage: %1$s

    - by tech
    I am using WordPress with WooCommerce, and I am trying to modify a copy of myaccount.php. The default code uses placeholders of some sort that I am not familiar with, nor have I been able to find documentation on them. The placeholders in question are %1$s, %2$s and %s:

        <p class="myaccount_user">
            <?php printf( __( 'Hello <strong>%1$s</strong> (not %1$s? <a href="%2$s">Sign out</a>).', 'woocommerce' ) . ' ', $current_user->display_name, wp_logout_url( get_permalink( wc_get_page_id( 'myaccount' ) ) ) ); ?>
            <?php printf( __( 'From this page you can view your recent orders, manage your shipping and billing addresses and <a href="%s">edit your password and account details</a>.', 'woocommerce' ), wc_customer_edit_account_url() ); ?>
        </p>

    How can I identify these placeholders, what they represent, and how to use them? Thank you.
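
    These are not WooCommerce variables but standard PHP printf/sprintf conversion specifications: %s consumes the next argument as a string, while %1$s and %2$s explicitly reference the first and second argument, which lets one argument be reused (as %1$s is above). A small self-contained illustration (the name and URLs are made up):

        <?php
        // %1$s refers to the first argument, %2$s to the second; %1$s may repeat.
        printf(
            'Hello %1$s (not %1$s? <a href="%2$s">Sign out</a>).',
            'Alice',                      // every %1$s becomes "Alice"
            'https://example.com/logout'  // %2$s becomes the logout URL
        );
        // Output: Hello Alice (not Alice? <a href="https://example.com/logout">Sign out</a>).

        // Plain %s placeholders consume the arguments in order:
        printf(
            'From this page you can <a href="%s">edit your account</a>.',
            'https://example.com/edit-account'
        );

    In the WooCommerce snippet the arguments supplied are $current_user->display_name and the logout / edit-account URLs, so the placeholders are filled with the current user's name and those links.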

    Read the article

  • Pagination for product listings: what to use? rel="canonical", rel="prev"/"next", or nothing?

    - by Jayapal Chandran
    My product listing shows 10 products per page. Google's documentation explains how to use canonical or rel="prev" for pagination when one long page has been divided into multiple pages that form a series, but my situation is not quite that: each listing is unique and unrelated to the other listings, and every listing link leads to a product profile page. So let's say my site is all about cars and I have a Used Audi page with 1000 Audis for sale. There are 10 used Audi cars on each page, so there are 100 pages in the series. If I start to utilise rel="prev" and rel="next", should I set page 2 onwards as index,follow or noindex,follow? The content on pages 2 through 100 only changes ever so slightly, as different cars are for sale on different pages, but from a "Panda" point of view the pages are incredibly similar: they hold the same meta data as page 1 in the series, along with duplicate reviews, news, etc. I want page 1 in the series to be the main page Google sends users to, and I don't see the point in Google indexing pages 2 to 100. What's everyone's view on this? Lastly, with the rel="canonical" tag, should pages 2 to 100 all point back to page 1 in the series, or to the individual page itself? E.g.: /used-audi/page-3/.
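
    For reference, the standard head markup for a paginated series looks like this (URLs are hypothetical, following the /used-audi/ example; this version would sit on page 3):

        <link rel="canonical" href="http://www.example.com/used-audi/page-3/" />
        <link rel="prev" href="http://www.example.com/used-audi/page-2/" />
        <link rel="next" href="http://www.example.com/used-audi/page-4/" />

    Google's published guidance on rel="prev"/"next" was that each page's canonical should point to itself (or to a view-all page), not to page 1 of the series; pointing every canonical at page 1 effectively asks Google to drop pages 2 onward entirely.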

    Read the article

  • Restricting crawler activity to certain directories with robots.txt

    - by neimad
    I would like to use robots.txt to prevent indexing of some parts of my website. I want search engines to index only the / directory, and not to crawl inside my controllers. In my robots.txt I have this:

        User-Agent: *
        Disallow: /compagnies/
        Disallow: /floors/
        Disallow: /spaces/
        Disallow: /buildings/
        Disallow: /users/
        Disallow: /

    I put this file in /mysite/public. I tested the file with a robots.txt validator and got no errors. However, Google still returns results from my site. For testing, I added Disallow: /, but again Google indexed all the pages. floors, spaces, buildings, etc. are not physical directories. Is this a bug? How can I work around it?
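
    A side note on the file itself: the final Disallow: / rule blocks every URL, which makes the per-directory rules redundant and blocks the home page as well. A version that keeps / crawlable while excluding the controllers would be (assuming the file is served at http://yoursite/robots.txt, the only location crawlers check):

        User-Agent: *
        Disallow: /compagnies/
        Disallow: /floors/
        Disallow: /spaces/
        Disallow: /buildings/
        Disallow: /users/

    Also note that robots.txt stops crawling, not indexing: URLs Google already knows about can keep appearing in results for a while even after they become disallowed.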

    Read the article

  • .htaccess rewrite: making a domain parked on another site show the proper domain name

    - by Stefano
    I have the domain mysite.co.uk parked in a folder on another domain's webspace, and I need the browser URL to keep reading mysite.co.uk; it currently shows the other domain name while browsing. When parking the domain, the ISP automatically added the following, which works, although it displays the other domain name:

        RewriteCond %{HTTP_HOST} ^mysite.co.uk$ [OR]
        RewriteCond %{HTTP_HOST} ^www.mysite.co.uk$
        RewriteRule ^/?$ "http\://www.otherdomain.eu/myfolder" [R=301,L]
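
    A minimal sketch of one common fix, assuming mod_rewrite is available and the site's files live under /myfolder: replace the external 301 redirect with an internal rewrite (no R flag), so Apache serves the files without changing the address bar:

        RewriteEngine On
        RewriteCond %{HTTP_HOST} ^(www\.)?mysite\.co\.uk$ [NC]
        RewriteCond %{REQUEST_URI} !^/myfolder/
        RewriteRule ^(.*)$ /myfolder/$1 [L]

    Whether this works depends on how the host parks the domain; if the parking mechanism issues its redirect before this .htaccess runs, the change has to be made in the hosting control panel instead.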

    Read the article

  • GWT: Generate more complete crawl error report

    - by Mike
    I'm a developer in charge of managing Webmaster Tools and related issues (including correcting crawl errors) for dozens (hundreds, maybe?) of active sites, and as part of my duties I create a report of every discrepancy, including all pages generating a 404 and all pages that link to those pages. Currently, within Webmaster Tools, I'm able to download a CSV file of all pages with a 404 response, but I then have to manually click on every single one of those links and copy the "linked from" field to paste into my spreadsheet. This is extremely tedious and seems unnecessary; I would expect the ability to download all that data at once. I'm ultimately looking for one CSV file that has every URL with a 404, but also every URL that links to each of them. Am I overlooking this functionality somewhere, or does anyone have a good solution?

    Edit 1 (2/11/2013): Example of what the CSV output looks like now:

        URL,Response Code,News Error,Detected,Category
        http://www.abcdef.com/123.php,404,,11/12/13,Not found
        http://www.abcdef.com/456.php,404,,11/12/13,Not found

    Which is great, but let's say 123.php has 5 pages that link to it. Now I have to duplicate that row in my spreadsheet 4 more times, then go into Webmaster Tools, get all the URLs that link to the page, and add that data to my spreadsheet. The output I would prefer:

        URL,Response Code,Linked From,News Error,Detected,Category
        http://www.abcdef.com/123.php,404,http://www.ghijkl.com/naughtypage1.php,,11/12/13,Not found
        http://www.abcdef.com/123.php,404,http://www.ghijkl.com/naughtypage2.php,,11/12/13,Not found
        http://www.abcdef.com/123.php,404,http://www.ghijkl.com/naughtypage3.php,,11/12/13,Not found
        http://www.abcdef.com/456.php,404,http://www.ghijkl.com/naughtypage1.php,,11/12/13,Not found
        http://www.abcdef.com/456.php,404,http://www.ghijkl.com/naughtypage2.php,,11/12/13,Not found
        http://www.abcdef.com/456.php,404,http://www.ghijkl.com/naughtypage3.php,,11/12/13,Not found

    Note the (hypothetical) addition of a "Linked From" column, and that there are still only 2 unique URLs (as before), but all of the "linked from" pages are shown in one report.

    Edit 2 (2/12/2013): To clarify, my question is less about detecting and correcting 404s and more about generating a report of what Google has listed as errors. Oftentimes these errors aren't even valid anymore, but I still need documentation to show that Google detected a problem and that the problem is now fixed. Many of the "linked from" URLs I find are actually outdated, cached resources. For example, I'll frequently see that the linked-from URL is the sitemap, which is actually an old sitemap cached by Google that points to an old page. Neither the sitemap nor the old page exists, but they still appear in my crawl error reports because they are cached resources.
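
    A sketch of the post-processing step in PHP, assuming the "linked from" data can be exported (or otherwise collected) into a second CSV mapping each 404 URL to its referring pages; both file names and layouts here are hypothetical:

        <?php
        // errors.csv: URL,Response Code,News Error,Detected,Category
        // links.csv:  URL,Linked From  (one row per referring page)
        $links = array();
        $in = fopen('links.csv', 'r');
        fgetcsv($in); // skip header row
        while (($row = fgetcsv($in)) !== false) {
            $links[$row[0]][] = $row[1];
        }
        fclose($in);

        $in  = fopen('errors.csv', 'r');
        $out = fopen('merged.csv', 'w');
        fgetcsv($in); // skip header row
        fputcsv($out, array('URL', 'Response Code', 'Linked From', 'News Error', 'Detected', 'Category'));
        while (($row = fgetcsv($in)) !== false) {
            $from = isset($links[$row[0]]) ? $links[$row[0]] : array('');
            foreach ($from as $src) { // one output row per referring page
                fputcsv($out, array($row[0], $row[1], $src, $row[2], $row[3], $row[4]));
            }
        }
        fclose($in);
        fclose($out);

    The catch, of course, is getting links.csv out of Webmaster Tools in the first place, which is exactly the missing feature being asked about.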

    Read the article

  • What is the difference between development and R&D?

    - by MainMa
    I was asked by a colleague to clearly explain the difference between ordinary development and research and development (R&D), and I was unable to do it. After reading Wikipedia, I still don't have a precise answer. According to Wikipedia (slightly modified): there are two primary models. In one model, the primary function is to develop new products; in the other model, the primary function is to discover and create new knowledge about scientific and technological topics for the purpose of uncovering and enabling the development of valuable new products, processes, and services. The first model is confusing. Does it mean that development (not R&D) consists exclusively of adding new features to a product, solving bugs and doing maintenance? What if something which was previously developed as a new feature becomes a separate product? The second model is less confusing, but still: how do you qualify whether something is new knowledge, or existing knowledge which is merely rediscovered? Later, Wikipedia adds that ordinary development differs from R&D because of its "nearly immediate profit or immediate improvement". That's still not clear enough. How do you qualify "nearly immediate profit"? What if a task has an immediate profit but requires heavy research? Or if it is basic but has uncertain profit, like the enforcement of a common style over the codebase? For example, does it belong to development or to R&D to:

    - Develop an engine which abstracts access to the database, enormously simplifying and shortening the code of other applications (existing ones, or ones which will be written in the future) which access the database?
    - Establish a new service-oriented architecture for the entire organization of company resources, in order to move from a bunch of separate and autonomous applications to a set of well-organized, interconnected web services, like what is used by Amazon?
    - Design a new communication protocol to allow faster replication of data between two data centers of the company?
    - Conceive a new type of software testing while working on a specific product, knowing that this type of testing will improve/simplify the testing process?
    - Prove that functional programming is more appropriate than OOP for a specific application, based on evidence, logic and previous experience?
    - Enhance an existing application by adding gestures on tactile screens, after studies and testing show that those gestures improve the productivity of the users by a ratio of at least 1.4 for a precise set of tasks?
    - Find a way to strongly enhance the power usage effectiveness (PUE) of a data center?
    - Create a Domain-Specific Language (DSL)?

    In short, how can I determine whether I'm doing R&D while working on something?

    Read the article

  • Standard ratio of cookies to "visitors"?

    - by Jeff Atwood
    As noted in a recent blog post: we see a large discrepancy between Google Analytics "visitors" and Quantcast "visitors". Also, for reasons we have never figured out, Google Analytics just gets larger numbers than Quantcast. Right now GA is showing more visitors (15 million) on stackoverflow.com alone than Quantcast sees on the whole network (14 million). Why? I don't know. Either Google Analytics loses cookies sometimes, or Quantcast misses visitors. Counting is an inexact science. We think this is because Quantcast uses a more conservative ratio of cookies to visitors: whereas Google Analytics might consider every cookie a "visitor", Quantcast will only consider every 1.24 cookies a "visitor". This makes sense to me, as people may access our sites from multiple computers, multiple browsers, etcetera. I have two closely related questions: 1) Is there an accepted standard ratio of cookies to visitors? This is obviously an inexact science, but is there any emerging rule of thumb? 2) Is there any more accurate way to count "visitors" to a website than relying on browser cookies? Or is this always going to be a best-effort estimation crapshoot no matter how you measure it?
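
    A quick worked example of the implied ratio, using the figures above: if Google Analytics counts every cookie, then 15,000,000 cookies / 1.24 cookies-per-visitor ≈ 12,100,000 people, so a Quantcast-style count for stackoverflow.com alone would come out well below GA's 15 million. (The 1.24 divisor is our inference from comparing the two tools' numbers, not a figure either vendor documents.)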

    Read the article

  • How can I handle more traffic? My VPS fails!

    - by Vic
    I have a website: a photo gallery with about 400 photos. The site runs Gallery 3 with MySQL, hosted on a VPS from myhosting.com (CPU 1792 MHz, 2048 MB RAM). Everything seems to be OK, but there is one big problem: once traffic reaches ~20 people online, the website starts loading really, really slowly; actually, the website can't be loaded for about 30-60 seconds. What should I do? Buy more RAM/CPU on the same VPS? Move to a dedicated server? Or does myhosting.com just suck? What do you recommend?

    Read the article

  • Does Google AdWords care about duplicate content?

    - by Yarin
    Our site offers several families of products, all of which have a common set of configurations. For simplicity's sake, we'll say we offer products A, B and C, each with configurations 1, 2 and 3. Products: A, B, C. Configurations: 1, 2, 3. We want to create landing page / ad group combinations that reflect each possible combination of product and configuration. Each product and each configuration has its own page, so each landing page would include both the product content and the configuration content: ourproducts.com/A-1 (contains copy for A and 1), ourproducts.com/A-2, ourproducts.com/A-3, ourproducts.com/B-1 ... etc... As you can see, this will lead to duplicate content across our product pages, though in different combinations. My question is: does this matter from an AdWords point of view? Will there be any negative consequence to repeating portions of content this way?

    Read the article

  • Moving from one DNS provider to another

    - by Senthil Kumaran
    I had registered with a particular DNS provider, X, and I have been unhappy with their services; when the time for renewal came, I did not renew and I let the domain expire. I am hoping that once it has expired with this provider, I will be able to sign up for the same domain name with an alternative provider which I have tested and am satisfied with. What kind of precautions should I take? The domain name is not a critical one; it belongs to an NGO, and we would prefer to own it again without any change in the name. The expiry notice says: "Domains can be renewed between 90 days before and 14 days after the expiry date. If domains are not renewed they will be removed from the account and set for deletion." Should I wait until it gets deleted at their end so that I can sign up for it with another provider?

    Read the article

  • Projects to learn web development

    - by David McDavidson
    I'm trying to get a job as a web developer, but the great majority of job offers require previous experience and a portfolio to prove you've got the required skills. Unfortunately, I don't have any real experience or anything to show. The best way to learn is to try to tackle real-world problems, so I'd like to know: what would be some nice projects for learning that will also look good in a portfolio?

    Read the article

  • Safety of purchasing country-specific domains from registrars?

    - by Marc Bollinger
    None of the previous questions tackle the more obscure country registries beyond .co.uk, .it, et al., or else I'd have found an answer myself. Is it safe to buy a domain under a foreign country's TLD from a registrar? (See http://www.iana.org/domains/root/db/ for the full list.) I'm just looking at a vanity domain, so obviously I'm all right without an answer, but it's an unasked (or at least unanswered) question, and I'm not exactly in a hurry to hand my credit card information across country lines, sight unseen.

    Read the article

  • When Googlebot sees a link, will it click it or navigate to it?

    - by FakeRainBrigand
    My site uses pushState and JSON data to display content. So, for example, this might appear on my page:

        <a href="/some/page">some page</a>

    The JavaScript then prevents the default action (following the link) and instead renders a view (using a different API, such as /getjson?some_page):

        $('[href]').click(function(){
            history.pushState(...);
            handleURL(...);
        });

    Assume my server will respond to requests at /some/page with a pre-rendered version. My questions are: 1) Will Googlebot receive the pre-rendered version, or will it let the JavaScript invoke pushState instead? 2) If it doesn't make the direct request, will it wait for the AJAX content to load? 3) Does Googlebot implement pushState, so that it will show the proper URL in search results?

    Read the article

  • What is this chargeback scam from eBooks bought on my website?

    - by Dan Friedman
    We have a scammer who is buying our e-books and then performing chargebacks. Our e-books don't have DRM, so if they wanted to resell them, they would only need to buy each book once. Instead, they keep buying the same books over and over again and then performing hundreds of chargebacks. We have created some additional rules in our fraud protection tools to block certain aspects, even though all the info looks legit, and we are hopeful this will slow them down. But my question is: what is the scam? If they aren't getting any product and they only get chargebacks for something they already purchased, then they can't get additional money from the credit card company, so what's their motivation?

    Read the article

  • Google results show .info domain instead of .com

    - by user481913
    I am currently on shared hosting, and I registered this account with a .info domain as the main domain, say MyDomain.info. However, the site runs from MyDomain.com. This is a cPanel-based shared hosting account. MyDomain.info has nothing hosted at all, i.e. no content files; MyDomain.com is set up as an add-on domain and runs from /public_html/MyDomain under MyDomain.info. The problem is that when I type MyDomain as the search keyword in Google, it shows results for MyDomain.info, although this is not the intended site and hosts no content. I tried to solve the issue by issuing a 301 permanent redirect from MyDomain.info to MyDomain.com, but Google keeps displaying MyDomain.info as the main site even a month after the redirect. I want Google to index MyDomain.com as the main site and remove MyDomain.info from the results. Also, is this harmful from an SEO point of view? How can I improve the SEO if it is?
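
    For reference, a domain-wide 301 of the kind described is typically expressed in .htaccess like this (a sketch assuming Apache with mod_rewrite; the actual mechanism used for the redirect isn't shown in the question):

        RewriteEngine On
        RewriteCond %{HTTP_HOST} ^(www\.)?mydomain\.info$ [NC]
        RewriteRule ^(.*)$ http://mydomain.com/$1 [R=301,L]

    Beyond the redirect itself, verifying both domains in Webmaster Tools and using its Change of Address tool may help Google swap the .com in for the .info faster, though recrawling still takes time.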

    Read the article

  • Google crawler reports a "not found" error for a link inside the <head> tag

    - by inckka
    I've found a crawl error on my site, listed as a "page not found" (404) link. Here's the broken link: http://mydomain.com/blog/comments/feed/. I'm using Google Webmaster Tools, and I found that the broken link comes from the head tag of my website's pages. Here's the actual code where the link is situated:

        <head>
        <link rel="alternate" type="application/rss+xml" title="My Domain Blog &raquo; Feed" href="http://www.my-domain.com/blog/feed/" />
        </head>

    So Google reports this link as not found. The link's target is not an exact page or location, but it is essential for the blog feeds. Anyway, I have to fix this and remove it from the Google crawl errors list, but I have no idea how, because I cannot redirect or serve a 404 header for this link target. Has anyone got an idea of how to fix this?
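
    If the blog is WordPress (which that markup suggests), the feed links in <head> are printed by two wp_head actions, and a common fix is to remove them in the theme's functions.php and re-add only the main feed link by hand. A sketch, assuming a standard theme setup:

        <?php
        // feed_links (priority 2) prints the main and comments feed links;
        // feed_links_extra (priority 3) prints category, tag, etc. feed links.
        remove_action( 'wp_head', 'feed_links', 2 );
        remove_action( 'wp_head', 'feed_links_extra', 3 );

    Then, in the theme's header.php, re-add just the link that should stay:

        <link rel="alternate" type="application/rss+xml"
              title="My Domain Blog &raquo; Feed"
              href="<?php bloginfo( 'rss2_url' ); ?>" />

    Alternatively, leaving the comments feed in place but making sure its URL returns a valid (even if empty) feed would also clear the 404.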

    Read the article

  • Pros and cons of the Google font API

    - by Seamus
    I am currently using a font from the Google font directory on my website. I don't fully understand how it works, but it seems that when someone opens my site, their browser is told to go and fetch the font from Google (correct me if I'm wrong). Now, what I'm wondering is: what are the pros and cons of this over just specifying a font family the old-school way? Presumably the Google font directory way has the advantage that visitors will definitely see the font I want them to (as long as the font directory is up). But does this way have disadvantages? Maybe using fonts that are stored locally speeds up site loading?
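
    For context on the mechanics: the Google Font API embed is a stylesheet link that returns @font-face rules, so the browser downloads a small CSS file from Google and then the font file itself (the family name here is just an illustrative example):

        <link rel="stylesheet" type="text/css"
              href="http://fonts.googleapis.com/css?family=Tangerine">
        <style>
            body { font-family: 'Tangerine', serif; } /* serif is the fallback */
        </style>

    The old-school way, e.g. font-family: Georgia, serif;, relies on fonts already installed on the visitor's machine: no extra requests, but no guarantee beyond the common system fonts.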

    Read the article

  • Why the difference between Google search results from a script and from a browser?

    - by Jayapal Chandran
    I wrote some code to find the position of a domain in the Google results for a search keyword, and I also did the same search in a browser. The two results are different. Let me explain in detail: I have a website, and I wanted to know on which page of Google's results my domain appears for a given search string. For example, when I search for 'code snippets', I want to find on which page of the results a certain domain appears. I wrote a PHP script to search page by page, from page 1 to page n, and did the same task in a browser. The script returned page 4, while in the browser I can see the domain appearing on page 2. Here is the search string I use in my code:

        /search?hl=en&output=search&sclient=psy-ab&q=code+snippets&start=0&btnG=

    For each request I change start=0 to start=1, start=2, etc., and in each response I check whether my domain appears. Any idea why the search results differ?
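
    Two things worth noting about that query loop. First, Google's start parameter is a zero-based result offset, not a page number: with 10 results per page, page 2 begins at start=10, page 3 at start=20, and so on, so stepping it by 1 only shifts the window by one result. Second, browser results are personalized by search history and location, which a script doesn't share; appending &pws=0 was the usual way to request non-personalized results. A rough PHP sketch of the corrected loop under those assumptions (scraping results this way is against Google's terms of service, so treat it purely as illustration):

        <?php
        $domain = 'example.com'; // hypothetical domain to look for
        for ($page = 1; $page <= 10; $page++) {
            $start = ($page - 1) * 10; // offset of the first result on this page
            $url = 'http://www.google.com/search?hl=en&q=code+snippets'
                 . '&start=' . $start . '&pws=0';
            $html = file_get_contents($url);
            if ($html !== false && strpos($html, $domain) !== false) {
                echo "Found on page $page\n";
                break;
            }
            sleep(2); // pause between requests
        }

    In practice Google also blocks obvious script user agents quickly, which is yet another source of divergence from what a signed-in browser sees.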

    Read the article

  • Temporary background image while the big one is loading? [migrated]

    - by Mikhail
    Is there a way, without JavaScript, to load a small background image before the real image has downloaded? (Without JavaScript because I already know how to do it with JavaScript.) I can't test whether the following CSS3 would work, because locally it loads too quickly to tell:

        body { background-image: url('hugefile.jpg'), url('tinypreload.jpg'); }

    If tinypreload.jpg is only, say, 20k and hugefile.jpg is 300k, would this accomplish the task? I assume both downloads would start at the same time rather than being consecutive.
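
    One detail of stacked CSS backgrounds worth knowing here: layers are painted with the first listed on top, so the order in the rule above (huge first, tiny second) is what would let the small image show through until the big one arrives, and both downloads are requested in parallel. A commented version of the same rule:

        body {
            background-image: url('hugefile.jpg'),    /* painted on top once loaded */
                              url('tinypreload.jpg'); /* visible beneath until then */
        }

    Whether a browser actually paints the lower layer while the upper one is still in flight is an implementation detail, so testing over a throttled connection is the only sure check.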

    Read the article

  • Clicks counting and crawler bots

    - by Dennis
    I am currently running a small affiliate program for Facebook users. We use an auto-poster to publish links to fan pages. Every hit is stored in our database, and we have included a 24-hour reload block per IP address. My problem right now is that the PHP script also stores every hit from all the bots that crawl my website. I was thinking of blocking those bots with my site's robots.txt, but I am afraid that this will have a negative effect on my AdSense ads. Does anybody have an idea of how I can work this out?
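
    One common alternative to robots.txt, sketched below, is filtering known crawlers inside the PHP script itself before the hit is written, so crawling (including by AdSense's Mediapartners-Google crawler) continues normally; the substring list is illustrative and needs maintaining:

        <?php
        function is_probable_bot() {
            $ua = isset($_SERVER['HTTP_USER_AGENT']) ? $_SERVER['HTTP_USER_AGENT'] : '';
            if ($ua === '') {
                return true; // real browsers always send a user agent
            }
            // Illustrative substrings; extend as new crawlers show up in your logs.
            foreach (array('bot', 'crawl', 'spider', 'slurp', 'mediapartners') as $needle) {
                if (stripos($ua, $needle) !== false) {
                    return true;
                }
            }
            return false;
        }

        if (!is_probable_bot()) {
            // ... existing code that inserts the hit into the database ...
        }

    This sidesteps the AdSense worry entirely: robots.txt affects what gets crawled, whereas filtering at logging time only affects what gets counted.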

    Read the article

  • Registering a domain during the Christmas holidays

    - by arkascha
    One of the domain names I tried to register previously was grabbed by a domain grabber two days before my own attempt. That was about a year ago. My attempt to buy the domain from that person failed due to a totally exaggerated price, so I dropped the issue and watched the domain (offered at sedo.com). As expected, there were no more offers and the domain was not sold. Now I learn from the whois database that the registration of that domain name ends on 25.12.2012 (a Christmas holiday). This raises two questions for me, and I have failed to find reliable answers on the internet, so maybe someone experienced here can offer a statement or a hint. First: is it reasonable to expect that the domain name in question will really be free again once the expiry date listed in the whois database has passed? I certainly know that the registration can be prolonged; that is not what I mean. I expect (hope) that the domain grabber will not extend the registration, since it costs money and effort and he failed to sell the domain. Provided that is the case and the registration is not prolonged, is the date mentioned reliable? Or might it just be some 'default' date? Second: I would like to register the domain name as soon as it becomes unregistered. Since the domain grabber registered it only two days before my own attempt, I would like to prevent such annoying interference next time. So I ask myself: is it possible to register a domain name on a holiday? I don't mean sending an email to my provider to do so on or before that day, but actually having the registration take place then, so as not to wait 1-2 days after the deregistration. My own provider, which I am very happy with, does not offer such a service on holidays (which is perfectly understandable); they are 'still checking' whether they can offer something automatic. I have researched and not found an answer to whether this is possible at all. Is an automatic registration attempt on a holiday possible? Where can I do that? Is it reliable? Thanks for any reply!

    Read the article

  • AdWords Keyword Planner CPC completely different from the real CPC?

    - by steve
    I'm new to AdWords and trying to figure out the best keywords to use. I went to the AdWords Keyword Planner and typed in an example keyword. It gives me an average CPC of $0.94. But when I go to set up a real campaign and type in the same keyword, I get an error saying 'below first page bid estimate', which is $8.75. What gives? Is there a better way to get more accurate feedback on how much this will cost?

    Read the article

< Previous Page | 148 149 150 151 152 153 154 155 156 157 158 159  | Next Page >