Search Results

Search found 17124 results on 685 pages for 'final cut pro'.

Page 217/685

  • Structured data: Field missing: price [on hold]

    - by Handi Occasion
    I just set up structured data (microdata) on my site to improve it. After finishing the markup, I checked the result in Webmaster Tools and at first my data appeared to be picked up correctly (see here). Today I looked at Webmaster Tools - Structured Data and, to my surprise, around 50 listings are flagged with the error "field missing: price", even though the price is right there in the markup (a sketch of priced Offer markup follows below). Any idea why? Thank you.

    Read the article
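
    For reference, a minimal sketch of a schema.org Product/Offer block with an explicit price, which is the field the Structured Data report checks for. The markup style (microdata) and every name and value here are placeholders, since the question does not show the actual markup:

        <div itemscope itemtype="http://schema.org/Product">
          <span itemprop="name">Example product</span>
          <div itemprop="offers" itemscope itemtype="http://schema.org/Offer">
            <!-- the report flags the Offer when these properties cannot be found -->
            <span itemprop="price">19.99</span>
            <meta itemprop="priceCurrency" content="EUR" />
          </div>
        </div>

    A common cause of this error is a price that sits outside the Offer's itemscope, or one that is only injected by JavaScript, so it is visible on the page but invisible to the markup parser.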

  • Why does googling for keycaptcha give results about reCAPTCHA? [closed]

    - by vgv8
    EDIT: I'd like to change this title to: how do I stop Google's manipulation of the search results it presents to the general public? I google frequently, and more and more often, when I search for a particular software product, I am shown results for Google's own products instead. For example, if I search for the keyword keycaptcha restricted to the "Past 24 hours" (by clicking "Show search tools" and then "Past 24 hours" in the left sidebar), the results show only pages about reCAPTCHA (image uploaded later). If, however, I put keycaptcha in quotes, the results are "correct" (well, sort of, since they are still distorted in comparison with other search engines). I have checked this over a few months, from different domains, different ISPs, different operating systems and a dozen browsers, and the results are the same. Why is this, and how can it be corrected? My related posts: "How Gmail spam filter works?" and "IP addresses blacklisting".

    Update: It is impossible for me to use google.com directly, because I am always redirected from google.com to google.ru by the "auto-detect location" convenience based on my IP address. Google's help says this auto-detection cannot be switched off because it is such a helpful feature. There is a workaround, google.com/ncr, to reach google.com and prevent the redirection (does anybody know what "ncr" means?), but the results are exactly the same. OK, I can search for the quoted "keycaptcha"; I am already accustomed to these quirks of Google. But the question arises: why burn time promoting someone's product if Google shows its own brand (reCAPTCHA) instead of other product brands, and what can be done about it? The general user will not understand that he was misled and will simply pick the first (wrong) results.

    Update 2: Note that this behaviour is independent of whether I am logged in to (or out of) a Google account, which account it is, the browser (I tried Opera, Chrome, Firefox, various versions of IE, and Safari), the OS, and even the domain. There are many such cases, but I deliberately targeted one concrete, restricted example to prevent wandering into unrelated details and peculiarities. @Michael: first, that is not true, and this text contains two links to real and significant results. I also wrote that this is just one concrete example of many, based on many months of experience; these distortions happen when clicking "Past 24 hours", "Past week", "Past month" or "Past year", with many other keywords and search configurations. Second, the absence of results is itself a result, and there is no point in sneakily substituting it with another, unsolicited one; that is the definition of spam and scam. Third, the question is not about workarounds such as how to write search queries or which other search engines to use. The question is how to straighten out Google's results so that the general public stops being disoriented.

    Update: I could not understand: can nobody reproduce the behaviour I described (i.e. when I click the "Past 24 hours" link while searching for keycaptcha, only results about reCAPTCHA are presented)?

    Update: And for the "Past week":

    Read the article

  • Googlebot requesting invalid url

    - by Rob Walker
    I have a web app which emails me exceptions automatically. This morning there was an error relating to the URL /Catalog/LiveCatalog?id=ylwpfqzts. The id is invalid (it should be a GUID) and caused a parsing error. Everything was handled correctly and an error page was returned. What was odd is that the user agent reported itself as Googlebot and the IP is registered to Google. The URL would never have been generated by my web app, but it doesn't look particularly malicious either. Has anyone seen anything like this? (A sketch of an early id check is below.)

    Read the article
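
    The question does not say what stack the web app runs on, so the following is purely an illustrative sketch (Node/Express here, with a made-up route) of the kind of early GUID check that turns probe-style requests into a quiet 404 instead of an exception email:

        const express = require('express');
        const app = express();

        // RFC 4122-style GUID: 8-4-4-4-12 hexadecimal digits.
        const GUID_RE = /^[0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{12}$/i;

        app.get('/Catalog/LiveCatalog', (req, res) => {
          const id = String(req.query.id || '');
          if (!GUID_RE.test(id)) {
            // Ids like "ylwpfqzts" never parse as GUIDs, so fail fast with a 404
            // rather than letting the parser throw and trigger an error report.
            return res.status(404).send('Not found');
          }
          // ...load and render the catalog entry for a valid id...
          res.send(`Catalog entry ${id}`);
        });

        app.listen(3000);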

  • SEO strategy for h1, h2, h3 tags for list of items

    - by Theo G
    On one page of my website I have a list of ALL the products on the site. This list is growing rapidly and I am wondering how to manage it from an SEO point of view. I am shortly adding a title to this section and giving it an h1 tag. Currently the name of each product in the list is not an h1/h2/h3/h4, it's just styled text, but I was looking to make these h2/h3/h4. Questions: Is the use of h2/h3/h4 on these list items bad form, given that headings should be used for content rather than a list of links? I am thinking of limiting this main list to only 8 items and using h2 tags for each name; do you think this will have a negative or positive effect overall? I may create a piece of script which counts the first 8 items in the list; those 8 will get h2 and any after that will get h3 (all styled the same). If I do add heading tags, should I put them just on the name of the product or outside the a tag, so they wrap all the info? Has anyone been in a similar situation, and if so, did you see any significant difference?

    Read the article

  • Google Analytics Visitors drop-off for certain region of site only

    - by crmpicco
    I have an issue with the tracking on my site where I have seen a dramatic drop-off of visitors from a certain region. I have four regions on my site at the moment: UK, EU, US and RoW (Rest of the World). The UK, EU and US regions are unaffected; only the RoW region suffers this drop-off. I have included a screen shot below from my GA account which shows this effect. My GA code, which is included on every page of the site, is below (I have changed the UA account number intentionally for this example). There have been no changes made to the GA account or the tracking code in the live environment for some considerable time, but for some reason I am seeing the drop-off for this region only. In the code below I am not tracking page views on certain pages, as I have event tracking set up for those pages.

        <script type="text/javascript">
          var _gaq = _gaq || [];
          _gaq.push(['_setAccount', 'UA-18721873-5']);
          _gaq.push(['_setCookiePath', '/row/']);
          if ( typeof(p_page) != 'undefined') {
            // do nothing if user is on above pages
            // N.B. there are a series of conditions in this if statement checking that we are not on a particular page
          } else {
            _gaq.push(['_trackPageview']);
          }
        </script>

    Read the article

  • command-line zip not working

    - by ptriek
    I have a WordPress site on a Debian Linux dedicated server, with the BackupBuddy plugin for making automatic backups. The plugin, however, gives the error 'Your server does not support command line ZIP'. My knowledge of Linux commands is very limited, but I managed to install zip with the command sudo apt-get install zip. However, I still get the same error message. The plugin documentation mentions that the problem can also be caused by a disabled exec() or by safe_mode, but exec isn't disabled and safe_mode is off. Any ideas what might be causing this, or how to fix it? The only other thing I can think of is that it might be caused by wrong permissions.

    Read the article

  • Why does Google ignore my links page?

    - by Yaniv
    I have a website where I load all the data via AJAX. Since Google doesn't cope with AJAX, and the ways to make a site AJAX-friendly are a bit odd, I thought that creating a links page which links, from the server side, to everything I load via AJAX would solve the problem. Unfortunately, that doesn't seem to work. Google Webmaster Tools shows that even though my links page has been discovered, its content - the links - is totally ignored. I can only assume that Google tends to ignore links on such pages. My question is: why? And furthermore, how can I overcome this? Thanks.

    Read the article

  • Dashboard to aggregate Google Analytics, Facebook, YouTube etc tracking data?

    - by Richard
    I'd like to see as much tracking data as possible about my online presence, in one single dashboard - so views/conversions from Google Analytics data, the performance of my Facebook campaigns via the Insights API, views/clicks from my YouTube campaigns, etc. This could be as simple as a graph with time on the x-axis, and key indicators from each source on the y-axis (conversions from Analytics, likes on Facebook, views on YouTube, etc). The idea is that I can see customer engagement with each source, over time. I can write my own such dashboard easily enough, but I wondered if there was something off-the-shelf that already did this. Apologies if this isn't the right forum for such a question - would appreciate tips for the best place to ask.

    Read the article

  • Google Analytics: tracking subdomains for a profile defined for a subdomain

    - by Alex G
    Hope you can help. We have set up a single property under our Google Analytics account. That property's default URL is set to subdomain1.example.com. We would now like to track multiple subdomains of example.com under the same property. Seems easy enough: we just need to add _gaq.push(['_setDomainName', 'example.com']); to our tracking code, right? (A sketch of the full snippet is below.) But my question is: does it matter that a) we don't need to track www.example.com (it is tracked under a separate account and property) and b) the default URL for our property is set to subdomain1.example.com? Will either of these have any impact on data collection?

    Read the article
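
    For reference, a minimal sketch of the classic asynchronous ga.js snippet with _setDomainName added; the UA number and domain are placeholders, and the loader function is the standard one Google supplied for ga.js at the time:

        <script type="text/javascript">
          var _gaq = _gaq || [];
          _gaq.push(['_setAccount', 'UA-XXXXXXXX-1']);   // placeholder property ID
          _gaq.push(['_setDomainName', 'example.com']);  // cookie written at the example.com level
          _gaq.push(['_trackPageview']);

          (function() {
            var ga = document.createElement('script'); ga.type = 'text/javascript'; ga.async = true;
            ga.src = ('https:' == document.location.protocol ? 'https://ssl' : 'http://www') + '.google-analytics.com/ga.js';
            var s = document.getElementsByTagName('script')[0]; s.parentNode.insertBefore(ga, s);
          })();
        </script>

    Because _setDomainName writes the tracking cookie at the example.com level, any subdomain carrying this property ID shares it and reports into the same property.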

  • 2 domains 1 host package

    - by sp-1986
    I have a Windows web hosting package and two different domain names. Can I point my first domain to the hosting package and run BlogEngine.NET, and then point the second domain to the same hosting package running the nopCommerce cart? www.domain1.co.uk (blog), www.domain2.co.uk (e-commerce cart). In IIS I would just create a new application within the site and create the bindings for domain2, but does this work for web hosting packages from 123-reg.co.uk?

    Read the article

  • Is there a way to take credit cards on my website without needing a merchant account/payment gateway?

    - by Erik
    I've been looking for a service like this but can't find one - it boggles my mind that such a thing doesn't exist. The ideal service would work something like this:

    1. A user fills out a form on my website.
    2. I submit the data to the service (card number, payment amount).
    3. I get paid by the service, perhaps monthly, for the amounts that were charged (less a fee).

    This is more or less how accepting PayPal payments works, except that PayPal takes my users to its own site and forces them to create a PayPal account, which I'd like to avoid. Does such a service exist?

    Read the article

  • Monitor offline adwords conversions

    - by Frank Meulenaar
    I'm trying to evaluate the usefulness of Google AdWords for a friend's site. I'm trying to count the number of sales per month and see how many customers found her page because of the AdWords campaign. Her site has an online order system, but she also gets customers who buy just via email contact and never use the online order system. There aren't many conversions per month (usually only one to three), so I don't want to miss any when gauging the effectiveness of a campaign. Is there a good way to also include those offline conversions?

    Read the article

  • Multi user issue in Drupal 7

    - by sachin
    I am trying to create a website in which there is a different link for each user. For example, if my site address is example.com and there are three users, u1, u2 and u3: when u1 logs in, the site should redirect to example.com/u1, and if that user creates a link or block on this URL, it should not be visible to the other two users. All of them should also have a different admin panel.

    Read the article

  • Why are 20% of keywords still provided when Google is using HTTPS across the board?

    - by Rajesh Magar
    Most of the searches that appear in my analytics are "not provided" because Google has encrypted all of its searches. However, if all searches are now made over HTTPS, how is Google Analytics still able to report some (20%) of the organic keyword details? There are still some keywords appearing in my organic keywords section. How does Google Analytics manage this tracking? Does it somehow bypass the HTTPS restrictions on the referrer?

    Read the article

  • historical weather data APIs

    - by AJ.
    I am building a web application where I need to display a whole year's weather conditions, month by month, so that users get an idea of what the weather is like and can plan their trips accordingly. I am using Wunderground's history feature, but it does not provide this data for smaller towns and destinations, even some very popular tourist destinations. Are there any alternatives that could provide the same information?

    Read the article

  • Why are the stats for HTML Improvements in Google Webmaster Tools not decreasing?

    - by Kookoriko
    I have read that resolved HTML Improvements in Google Webmaster Tools can take as long as 6 weeks to disappear from the report, but those numbers keep increasing without ever decreasing, even though I've been fixing almost everything Google points out. I have checked some pages with the Fetch as Google tool and the issues are resolved (take "short meta descriptions", for example). Any idea why this is happening?

    Read the article

  • visit counts in advanced segments not consistent

    - by user671201
    My organization has recently noticed an issue when applying advanced segments to visit counts over different time ranges. With no advanced segments turned on, the visit counts for Oct 1st - Oct 4th within the time range Sept 8th - Oct 8th are: Oct 1: 7, Oct 2: 7, Oct 3: 8, Oct 4: 5. Still with no advanced segments turned on, if I change the time range to Oct 1st - Oct 4th, the numbers are, as expected, exactly the same: Oct 1: 7, Oct 2: 7, Oct 3: 8, Oct 4: 5. Now I turn on the "Non-paid search traffic" advanced segment. The visit counts for Oct 1st - Oct 4th within the time range Sept 8th - Oct 8th become: Oct 1: 0, Oct 2: 0, Oct 3: 0, Oct 4: 2. Here is where it gets weird. I keep the advanced segment on and change the time range to Oct 1st - Oct 4th, and for exactly the same dates I get: Oct 1: 4, Oct 2: 2, Oct 3: 6, Oct 4: 5. We've found the same inconsistency in our other GA profiles that get much more traffic (the numbers above come from one of our specialized topic blogs), though it is less pronounced where there are more visits. My question is: why are the visit counts different for different time ranges when an advanced segment is turned on, but exactly the same when no advanced segments are applied? Is this a GA bug, or am I missing something about how advanced segments work?

    Read the article

  • Best way to lay out the website when sections of it are almost identical

    - by Linas
    I have a minisite for a mobile application I developed. The application is a public transport (transit) schedule viewer for a particular city (let's call it Foo), and I'm trying to sell it via that minisite, which I publish at www.myawesomeapplication.com/foo/. It has the usual "standard" subpages, like "About", "Compatible phones", "Contact", etc. Now I have decided to create analogous mobile applications for other cities, Bar and Baz. These applications (products) would be almost identical to the one for Foo, so their minisites would (should) look very similar too, apart from some artwork and Foo-to-Bar replacements. The question is: what would be the most logical way to lay out the website in this situation, from both a business and a search-engine perspective? In other words, should I just duplicate the /foo/ site to /bar/ and /baz/, or would it be better to create a single website under the root path (/)? I don't want search-engine penalties for near-duplicate content under /foo/, /bar/ and /baz/, and I also don't want a messy, non-localized website (I suspect a user is more likely to buy if they see "This-and-that is the application for NYC, the city you live in" rather than "This-and-that is the application for city A, city B, ..., NYC, ..., and city Z").

    Read the article

  • wget not respecting my robots.txt. Is there an interceptor?

    - by Jane Wilkie
    I have a website where I post CSV files as a free service. Recently I have noticed that wget and libwww have been scraping it pretty hard, and I was wondering how to discourage that, even if only a little. I have implemented a robots.txt policy, posted below:

        User-agent: wget
        Disallow: /

        User-agent: libwww
        Disallow: /

        User-agent: *
        Disallow: /

    Issuing wget http://myserver.com/file.csv from my totally independent Ubuntu box shows that these rules just don't seem to have any effect on wget. Anyway, I don't really mind people grabbing the info; I just want to implement some sort of flood control, like a wrapper or an interceptor (a sketch of the idea follows below). Does anyone have a thought about this, or could you point me in the direction of a resource? I realize that it might not even be possible. I'm just after some ideas. Janie

    Read the article
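
    The site's actual stack isn't stated, so purely as a sketch of the "interceptor" idea, here is what a small Node/Express front layer with a user-agent block plus a crude per-IP flood limit could look like; all names, paths and limits below are made up:

        const express = require('express');
        const app = express();

        // Hypothetical flood-control settings; tune to taste.
        const WINDOW_MS = 60 * 1000;  // 1-minute window
        const MAX_HITS = 10;          // max requests per IP per window
        const hits = new Map();       // in-memory counters, keyed by client IP

        app.use((req, res, next) => {
          const ua = (req.get('User-Agent') || '').toLowerCase();
          // Turn away the agents that are already ignoring robots.txt.
          // (wget can trivially spoof its User-Agent, so this only deters polite scrapers.)
          if (ua.includes('wget') || ua.includes('libwww')) {
            return res.status(403).send('Automated downloads are not permitted.');
          }
          // Crude per-IP flood control for everyone else.
          const now = Date.now();
          const entry = hits.get(req.ip) || { count: 0, start: now };
          if (now - entry.start > WINDOW_MS) { entry.count = 0; entry.start = now; }
          entry.count += 1;
          hits.set(req.ip, entry);
          if (entry.count > MAX_HITS) {
            return res.status(429).send('Too many requests - please slow down.');
          }
          next();
        });

        // Example route serving one of the CSV files (path is made up).
        app.get('/file.csv', (req, res) => res.download('./data/file.csv'));

        app.listen(3000);

    In practice this kind of throttling is usually easier to do in front of the application (at the web server or a reverse proxy), but the sketch shows the interceptor shape the question asks about.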

  • Possible / How to render to multiple back buffers, using one as a shader resource when rendering to the other, and vice versa?

    - by Raptormeat
    I'm making a game in Direct3D 10. For several of my rendering passes, I need to change the behaviour of the pass depending on what has already been rendered to the back buffer. (For example, I'd like to do some custom blending: when the destination color is dark, do one thing; when it is light, do another.) It looks like I'll need to create multiple render targets and render back and forth between them. What's the best way to do this?

    1. Create my own render textures, use them, and then copy the final result into the back buffer.
    2. Create multiple back buffers, render between them, and then present the last one that was rendered to.
    3. Create one render texture and one back buffer, render between them, and just ensure that the back buffer is the final target rendered to.

    I'm not sure which of these is possible, and whether there are any performance issues that aren't obvious. Clearly my preference would be to have 2 rather than 3 render targets, if possible.

    Read the article

  • Google indexed site's address by accident. What do I do now?

    - by AndrejaKo
    I was making a site for a friend of mine, and he wanted to be able to see my progress as I worked on it, so I put the site on a server on my own computer and made it accessible via a domain name registered to me. It turns out that I forgot to set up a robots.txt file for the site, and somehow Google indexed it. My question is: what do I do now? As I understand it, Google doesn't like duplicate content, and my friend could have problems when I upload the new site to his server. Right now his current site, which only has a work-in-progress page, comes first on Google when searching for the relevant keywords, and I really don't want to damage that. Is there anything else I need to be concerned about?

    Read the article

  • Keeping rackspace vserver alive

    - by mit
    It appears to me that Rackspace somehow freezes cloud VMs after some idle time. This means the first request for a PHP page takes much longer to respond than the subsequent requests. In some cases this is fine; in other cases it is not acceptable. I am currently querying the machine with wget from a different host to keep it "alive" (a sketch of such a pinger follows below), but I wonder what frequency would be necessary. Does anyone know the idle period after which they send a VM to "sleep"? I would guess it is some minutes. EDIT: There is absolutely no caching involved on the PHP site. It just recently moved from another vhost, and there was never such latency on the first request there.

    Read the article
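
    A hedged sketch of the wget-style pinger as a small Node script; the URL is a placeholder and the five-minute interval is only a guess, since the host's actual idle timeout is unknown:

        // keep-alive.js - ping the server periodically so the first real visitor
        // does not pay the "wake up" cost described in the question.
        const https = require('https');

        const TARGET = 'https://example.com/';   // placeholder URL for the frozen VM
        const INTERVAL_MS = 5 * 60 * 1000;       // guessed interval: every 5 minutes

        function ping() {
          https.get(TARGET, (res) => {
            console.log(`${new Date().toISOString()} -> HTTP ${res.statusCode}`);
            res.resume();  // discard the response body
          }).on('error', (err) => console.error('ping failed:', err.message));
        }

        ping();
        setInterval(ping, INTERVAL_MS);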

  • Google Analytics - include filter not working

    - by gerl
    I just added an include filter this morning for my domain (test.org): Custom Filter, Include, Request URI, with the pattern ^/test-a/46212$|^/test-a/46212|^/test-a/46315. Now when I go to Content > Site Content > All Pages, I still see stats for pages that I didn't include in my filter, for example /somethingelse. I only want to see stats for /test-a/46212 and whatever else is in my filter (a quick check of the pattern is below). Please let me know what I'm doing wrong.

    Read the article
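
    The pattern itself does exclude /somethingelse, as the quick JavaScript check below shows (JavaScript's regex engine is close enough to GA's matching for a pattern this simple). One thing worth checking, offered here as an assumption rather than a diagnosis, is that view filters only apply to hits collected after the filter is created, so pages recorded before this morning will still appear in reports covering earlier dates.

        // The filter's three alternatives, tested against sample request URIs.
        const pattern = /^\/test-a\/46212$|^\/test-a\/46212|^\/test-a\/46315/;

        ['/test-a/46212', '/test-a/46212/sub-page', '/test-a/46315', '/somethingelse']
          .forEach((uri) => console.log(uri, '->', pattern.test(uri)));

        // /test-a/46212          -> true
        // /test-a/46212/sub-page -> true   (the second alternative has no trailing $)
        // /test-a/46315          -> true
        // /somethingelse         -> false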

  • SEO with duplicate content

    - by user16831
    I have a nature photography site with multiple types of photo galleries. Each photo and its associated caption appears in several galleries. For instance, a photo of a goldfinch taken on a trip to New Mexico in 2008 will appear in the "goldfinch.php" gallery, in the "finches.php" gallery, and in the "New_Mexico_2008.php" gallery. This duplication is useful for my site visitors - User A may want to see goldfinch photos, whereas User B wants to see photos from New Mexico - but I am concerned about the SEO implications. The typical suggestions for dealing with duplicate content, such as 301 redirects and canonical tags, probably won't work in this case, because the page content is substantially different (ranging from ~1% to ~90% duplication, depending on the specific example chosen). The obvious solution to me would be to edit robots.txt to only allow search engines to crawl one type of gallery - for instance, if they crawled only the galleries organized by species (e.g. goldfinch.php), all the photos on my site would be found exactly once. However, the Google content guidelines recommend against blocking crawler access to duplicate information. Should I go ahead and use robots.txt anyway? Or is there a better solution?

    Read the article

  • How can IIS 7.5 have the error pages for a site reset to the default configuration?

    - by Sn3akyP3t3
    A mishap occurred with web.config while accommodating a sub-site: I made use of "<location path="." inheritInChildApplications="false">", essentially as a workaround for nested web.config files that were causing a conflict. The result was that error pages were not being handled properly; error 500 was being passed to the client for every type of error encountered. Removing the offending inheritInChildApplications tag from the root web.config restored normal operation for most of the error handling, but for some reason error 503 now returns the correct response header while IIS performs the custom action configured for error 403.4, which is a redirect to HTTPS. I'm looking to restore the defaults for the error pages so that the default behaviour is restored; I can then re-add my customizations for the error pages.

    Read the article
