Search Results

Search found 21930 results on 878 pages for 'google voice'.

  • Awesome Bookmarklet Collection Ready to Select from and Add to Your Browser

    - by Asian Angel
    Bookmarklets are extremely useful additions for your everyday browsing needs, without the hassle (or slowdown) of extensions. With that in mind, tech blog Guiding Tech has put together a terrific collection of 21 bookmarklets ready to add to your favorite browser. Just scroll down, install the ones you like from the blog post, and enjoy the enhanced browsing. You can see the beginning and end results from our sample use of the Search Site Bookmarklet in the screenshots above and below… Note: We altered the bookmarklet slightly to route the search results through Google Singapore.
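
    For reference, a site-search bookmarklet of the kind described generally looks like the sketch below (an illustrative reconstruction, not the exact Guiding Tech code); the whole thing lives on one line in a bookmark's URL field, prefixed with "javascript:".

        (function () {
            // Ask for a query, then search the current site via Google Singapore.
            var q = prompt('Search this site for:');
            if (q) {
                location.href = 'http://www.google.com.sg/search?q=site:'
                    + location.hostname + '+' + encodeURIComponent(q);
            }
        })();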

    Read the article

  • Black & White Video with Youtube

    - by RobertPitt
    Hey'a guys, I have an issue with the YouTube player within Ubuntu: videos play in black & white, no matter whether they're normal or HD. I've kind of figured out how to prevent this by deleting the cookie PREF, which holds a key/value pair like so:

        PREF f1=50000000&fv=10.2.154 .youtube.com / Sat, 12 Mar 2011 11:38:29 GMT

    The culprit is the Flash version key (fv): when I delete it the color comes back, but obviously when I navigate to another page the cookie is set again. Does anyone know why this happens and a possible fix? Using the latest Google Chrome on the latest version of Ubuntu (installed 3 days ago). Thanks
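
    A rough sketch of automating the workaround described above, run from the console while on youtube.com (an assumption, since YouTube may simply rewrite the cookie on the next page load): overwrite PREF keeping only the f1 pair.

        // Rewrite PREF without the fv pair; values copied from the question.
        document.cookie = 'PREF=f1=50000000; domain=.youtube.com; path=/;'
            + ' expires=' + new Date(Date.now() + 365 * 864e5).toUTCString();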

    Read the article

  • User Experience Guidance for Developers: Anti-Patterns

    - by ultan o'broin
    Picked this up from a recent Dublin Google Technology User Group meeting: Android App Mistakes: Avoiding the Anti-Patterns by Mark Murphy, CommonsWare. It's an interesting approach of "anti-patterns" aimed at mobile developers (in this case Android), looking at the best way to use code and what's in the SDK while combining it with UX guidance (the premise being that the developer does the lot). Interestingly, the idea came through that developers need to stop trying to make one O/S behave like another, on UX grounds. It's also pretty clear that a web-based paradigm is being promoted for Android (translators tell me that translating an Android app reminded them of translating web pages too). I haven't seen the "anti" approach before; developer cookbooks and design patterns, sure. Check out the SlideShare presentation.

    Read the article

  • Use virtual pageviews for all goal tracking

    - by Jeff Wu
    I'm new to Google Analytics and I'm wondering if it would be cleaner to use virtual pageviews for all the goal tracking on my website instead of using a mix of regular pageviews and virtual pageviews. I know in most cases this is just semantics, but there are multiple pages where the same goal can be achieved, and I think it would be cleaner to fire the same virtual pageview instead of having two different goal pages. Will this model also give developers more flexibility when they do development? I know we are moving to a CMS and URLs can get hairy, so I think this might be a good way to make the analytics portion of the site "future proof". Any thoughts are appreciated! Thanks.
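
    For reference, with the ga.js-era API a virtual pageview is a one-line push; the path below is illustrative, and the goal in Analytics would then be set to match it, regardless of which real page fired it.

        // Fire the shared virtual pageview wherever the goal is completed.
        _gaq.push(['_trackPageview', '/virtual/signup-complete']);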

    Read the article

  • Custom Theming now Available in Gmail

    - by Asian Angel
    This past November Google unveiled a new look for Gmail with HD themes, but until now you could not set up custom themes. Set up your new custom theme with a Light or Dark look to match your chosen background and enjoy a more personalized experience in your inbox. This is where you will find the new custom settings on the Themes Settings Page… The confirmation screens for the new Light and Dark custom themes…

    Read the article

  • Flash is not working in Chrome (Crossover Linux is installed)

    - by Jim Ford
    I have Google Chrome 8.0.552.237 on Ubuntu 10.10 64-bit and Flash is not working. I have tried a variety of methods to install Flash, including Firefox Flash-Aid and the flashplugin-installer package, and nothing is working for me. I have even uninstalled and reinstalled Chrome, to no avail. I get a "missing plugin" message where the Flash plugin should be on a website. What am I missing? I have a variety of plugins returned by jgbelacqua's command:

        /usr/lib/chromium-browser/plugins/flashplugin-alternative.so
        /usr/lib/firefox/plugins/flashplugin-alternative.so
        /usr/lib/flashplugin-installer/libflashplayer.so
        /usr/lib/iceape/plugins/flashplugin-alternative.so
        /usr/lib/iceweasel/plugins/flashplugin-alternative.so
        /usr/lib/midbrowser/plugins/flashplugin-alternative.so
        /usr/lib/mozilla/plugins/flashplugin-alternative.so
        /usr/lib/xulrunner/plugins/flashplugin-alternative.so
        /usr/lib/xulrunner-addons/plugins/flashplugin-alternative.so
        /usr/share/ubufox/plugins/libflashplayer.so
        /var/cache/flashplugin-installer/libflashplayer.so

    I'm not sure which is necessary and which is not. I should note, though, that my Chromium does have Flash and it does work... just not Chrome or Firefox.

    Read the article

  • Tor check failed though Vidalia shows green onion

    - by Wolter Hellmund
    I have installed Tor successfully and Vidalia shows it running without problems; however, when I check whether I am using Tor on this website, I get an error message saying I am not. I have tried two things to fix this: I installed ProxySwitchy on Google Chrome and created a profile for Tor (address 127.0.0.1, port 8118), but enabling the proxy doesn't change the results on the Tor check website linked before. I also changed my network proxy settings through System Settings > Network from None to Manual, entering 127.0.0.1 as the address and 8118 as the port for every protocol except SOCKS, for which I entered 9050 instead. That makes the internet stop working completely. How can I fix this problem?

    Read the article

  • Ajax site not being crawled - have escaped fragment, what's wrong? [closed]

    - by Harry
    My site is anonkun.com. You can see that it's "ajax" and doesn't load much HTML. Here are some example pages:

        http://anonkun.com
        http://anonkun.com/?_escaped_fragment_=
        http://anonkun.com/stories/Dev-kun---FAQ/6ef881f8-cf48-4f87-a688-c585f23809c5
        http://anonkun.com/stories/Dev-kun---FAQ/6ef881f8-cf48-4f87-a688-c585f23809c5?_escaped_fragment_=

    As you can see, the original page has the meta fragment tag and the escaped-fragment version loads static HTML. Why am I not getting crawled? http://cl.ly/image/2n30212q0K2W Webmaster Tools shows the pages being seen as duplicates, and Fetch as Google shows me the ajax version of the source, not the static escaped-fragment version. What's wrong and how do I make this work? Thanks.

    Read the article

  • Robots.txt never downloaded but some blocked URLs in GWT

    - by Zistoloen
    There is something I don't understand in Google Webmaster Tools (GWT) for my WordPress site. The "Blocked URLs" menu says that my robots.txt has never been downloaded, yet there are some blocked URLs. That seems weird and illogical. Am I missing something?

        User-agent : *
        Disallow: /*?
        Disallow: /wp-login.php
        Disallow: /wp-admin
        Disallow: /wp-includes
        Disallow: /wp-content
        Allow: /wp-content/uploads
        Disallow: */trackback
        Disallow: /*/feed
        Disallow: /*/comments
        Disallow: /cgi-bin
        Disallow: /*.php$
        Disallow: /*.inc$
        Disallow: /*.gz$
        Disallow: /*.cgi$
        Disallow: /author/*

    I'm afraid my robots.txt doesn't block several URLs I want to block.

    Read the article

  • Meta description of my blog post changes

    - by Aadarsh sojitra
    I have a problem with the meta description tags on my Blogger blog. When I update my pages in the search engine with the help of the Fetch as Google feature in GWT, all my blog's results come with the correct meta description, like: Today I am back with a reason that "WHY IS ORIGINAL MEMORY IN HARD-DISK IS LESS THAN PRINTED" on a box. If we buy any hard-disk or a pen drive... But after approximately 5-6 days, it changes back to my blog's default meta description. This also happens after changing the default meta description of my blog. I want to know just one thing: why is this happening? After deleting my blog and creating a new blog with the same name, the problem was solved. Why did that solve it? I am asking so I can avoid the problem in the future.

    Read the article

  • Setting up page goals in Analytics when using progressive enhancement to load content using jquery .load

    - by sam
    I'm using jQuery .load to pull content from other pages into my homepage. So that Google can still see what's going on, I've made the <a> tags point to the real pages but override them in the JavaScript, so instead of going to that page it just loads that page's content into the main page. Normally I would just make the page /contact.html a goal. Can I still get it to work as a goal if the content is being loaded in? Can I do something like: when the user clicks <a href="contact.html" id="load-contact">contact</a>, it logs the click on the <a> tag as a goal, rather than the actual page being visited?
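
    A minimal sketch of the idea, assuming jQuery, the ga.js-era _gaq queue, and an illustrative #content target (not from the original): the virtual pageview stands in for the real /contact.html visit, so the goal in Analytics can match /virtual/contact instead of a real URL.

        $('#load-contact').click(function (e) {
            e.preventDefault();                           // stay on the homepage
            $('#content').load('contact.html #content');  // pull in the contact page fragment
            _gaq.push(['_trackPageview', '/virtual/contact']); // count toward the goal
        });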

    Read the article

  • How can I prevent HTTPS on another domain from wrongly showing on my HTTP-only domain?

    - by Earlz
    So, I have a blog at domain.com. This blog is HTTP-only because I would gain almost nothing from adding SSL support. I now have a web service that I want to enable SSL support on, and it runs on the same server and IP address as my blog. I got it all working pretty easily, but now if I go to https://domain.com I see a huge warning about an SSL certificate error, and if I click "ok" through the warning, I see the web service with SSL support, not my blog. My biggest fear with this scheme is Google indexing an HTTPS version of my blog and penalizing it because the content between the two doesn't match. How can I force my blog's domain to either serve nothing on HTTPS, redirect back to my HTTP blog, or serve my blog, even with an invalid SSL certificate? What can I do, preferably without buying another dedicated IP for my website?

    Read the article

  • Cannot access Adsense funds after switching to third-party partnership program

    - by Clay
    I had a Google Adsense partnership with $80 in it, but then switched to a different partnership and now can't get my money. When I first started YouTube, I joined the Adsense partnership program. After gaining $80 in my Adsense account, I got an offer to join a third party partnership program called Zoomin.tv. I accepted, and it is paying me monthly now. The problem is that my Adsense account still has the $80 in it, and is not gaining more cash. The Zoomin.tv money is going directly to my PayPal. The payment threshold in Adsense is $100, and you can't make it lower. Therefore, my money is stuck in Adsense and I'd love a solution that allows me to access my money.

    Read the article

  • Flash isn't working in Chrome on 64 bit Ubuntu 10.10 fresh install

    - by IanBalisy
    I just installed Ubuntu 10.10 64-bit last night on my laptop and installed Google Chrome ver. 8.0.552.237. So far Flash works in Firefox and Chromium, but not at all in Chrome. I did the sevenmachines install of flashplugin64, and that worked for Firefox and Chromium. Does anyone know how to make it work in Chrome? I really would prefer to use Chrome over Chromium, but if it's not an easy fix I can switch. I'm not too Ubuntu-literate, but I can figure things out if necessary (in short, long explanations are not necessary).

    Read the article

  • Recovery from URL structure change?

    - by Dejan Pelzel
    In July this year, we changed the URL structure of the website from:

        Post: domain.com/blog/post/986/dance/heart-beats-dance-video-by-chinatsu/
        Category: domain.com/blog/index/cosplay/

    to:

        Post: domain.com/dance/heart-beats-dance-video-by-chinatsu-986/
        Category: domain.com/cosplay/

    Everything was (supposedly) properly redirected with 301 redirects, and at first it seemed that the traffic returned after a couple of days, but it has now been close to 2 months and things keep getting worse, although Google is slowly indexing the changes. What worries me even more is that the "pages crawled per day" figure in Webmaster Tools started dropping drastically a few days ago and has just reached a new low in months (from over 2000 to 700). Should I be worried, or will things sort themselves out eventually?

    Read the article

  • What happens differently when you add a task Asynchronously on GAE?

    - by Ben Grunfeld
    Google's documentation on async tasks assumes knowledge of the difference between regularly and asynchronously added tasks:

        add_async(task, transactional=False, rpc=None)
            Asynchronously add a Task or a list of Tasks to this Queue.

    How is adding tasks asynchronously different from adding them regularly? I.e., what is the difference between using add(task, transactional=False) and add_async(task, transactional=False, rpc=None)? I've heard that adding tasks regularly blocks certain things. Any explanation of what it blocks and how, and of how async tasks avoid blocking, would be greatly appreciated.

    Read the article

  • Huge difference between Facebook Ad Click figures and Apache log requests

    - by Gearóid
    We're running a Facebook ad campaign for our business, but there seems to be a huge discrepancy between the number of clicks registered and the number of requests made with "facebook.com" in the HTTP referrer. The difference can be anything between 40-80 clicks/requests. I understand why the Google Analytics figures would be off, and I understand that the figures shouldn't match exactly, but surely if 100 people click the ad then I should see at least 90 requests for the homepage with facebook.com as the referrer? Can anybody provide any insight into why this may be happening?

    Read the article

  • How should calculations be handled in a document database

    - by Morten
    OK, so I have a program that basically logs errors into a NoSQL database. Right now there is just a single model for an error, and it's stored as a document in the NoSQL database. Basically I want to summarize across different errors and produce a summary of the "types" of errors that occurred. Traditionally in a SQL database this aggregation would work with groupings, sums, and averages, but in a NoSQL database I assume I need to use MapReduce. My current model seems unfit for the task; how should I change the way I store "models" in order to make statistical analysis easy? Would a NoSQL database even be the right tool for this type of problem? I'm storing things in Google App Engine's BigTable, so there are some limitations to think of as well.
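
    As a rough JavaScript illustration of the map/reduce-style summary in question (the type field is an illustrative assumption about the error documents, not from the original): the "map" step visits each error document and the "reduce" step folds them into per-type counts.

        function summarizeErrors(errors) {
            var summary = {};
            errors.forEach(function (err) {
                // Map: key each document by its error type;
                // Reduce: accumulate a count per type.
                var s = summary[err.type] || { count: 0 };
                s.count += 1;
                summary[err.type] = s;
            });
            return summary; // e.g. { "TimeoutError": { count: 12 }, ... }
        }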

    Read the article

  • Is it safe to block redirected (but still linked) URLs with robots.txt?

    - by Edgar Quintero
    I have a website that has all URLs optimized and 301-redirected from nasty URLs to clean ones. However, everywhere throughout the site, the unclean URLs are still linked in menus, content, products, etc. Google currently has all the clean URLs indexed, along with a few unclean URLs too. So the old URLs are still linked everywhere on the site (ideally this wouldn't be the case, but this is how it is at the moment). I would like to block the unclean URLs with robots.txt. The question: if I block these unclean URLs with robots.txt while the entire website still links to them (but they all redirect to the clean version), will this affect the indexing status at all?

    Read the article

  • Facebook and Gmail stop working after 10 minutes

    - by Julia
    I have a problem with Facebook and Gmail only: each works fine and lets me log in, view photos and videos, read new messages, etc., but after 5-10 minutes it doesn't load at all:

        This webpage is not available. The webpage at http://www.facebook.com/ might be temporarily down or it may have moved permanently to a new web address. More information on this error: Error 101 (net::ERR_CONNECTION_RESET): Unknown error

    After deleting cookies the problem disappears for 5-10 minutes, then I get the same error. It happens in both Google Chrome and Firefox. Ping works fine. I have checked System > Preferences > Network Proxy; it is set to the default, "Direct Internet Connection". I then ran the tests at chrome://net-internals/#tests and got FAIL results for "Use system proxy settings", "Disable IPv6 host resolving", and "Probe for IPv6 host resolving". IPv6 is disabled.

    Read the article

  • How can I hide some content from Chrome/Chromium browsers?

    - by MIH1406
    I need to put a "Bookmark us" link on my website, but all the results I found via Google conclude that there is no way to do "Bookmark us" for Chrome/Chromium browsers. So I want to either: 1- hide the content from Chrome/Chromium browsers, or at least, 2- show a message if the user's browser is Chrome/Chromium after they click that button. Here is my "Bookmark Us" script:

        /** Bookmark Us */
        function bookmark_us(url, title) {
            if (window.sidebar) {                       // Firefox
                window.sidebar.addPanel(title, url, "");
            } else if (window.opera && window.print) {  // Opera
                var elem = document.createElement('a');
                elem.setAttribute('href', url);
                elem.setAttribute('title', title);
                elem.setAttribute('rel', 'sidebar');
                elem.click();
            } else if (document.all) {                  // IE
                window.external.AddFavorite(url, title);
            } else {
                // Chrome/Chromium and others: nothing to call here
            }
        }

        <a href="javascript:bookmark_us('URL','TITLE')">Bookmark Us!</a>
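
    For option 2, a sketch of what the empty else branch above could do instead; the function name and message are illustrative, since Chrome/Chromium exposes no scriptable bookmarking API to call:

        function bookmark_fallback() {
            // No bookmark API available: point the user at the keyboard shortcut.
            var isMac = navigator.userAgent.indexOf('Mac') !== -1;
            alert('Press ' + (isMac ? 'Cmd' : 'Ctrl') + '+D to bookmark this page.');
        }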

    Read the article

  • How to diagnose a search engine ranking drop?

    - by Itai
    EDIT: Reworded and cleaned up to ease understanding. There has been a significant drop (90%) in traffic from Google to one of my well-established sites (6+ years old) in the last week. Searches show that the top 3 broad keywords all dropped 4 spots. Searches for other keywords do not show ANY results in the first 10 to 25 pages of results, while previously the site appeared on the 1st or 2nd page at most. Since there are 200 ranking factors, the question really is: what steps are necessary to figure out what caused such a drop? There have been no major changes to the site during this period or the last month, and certainly not to the homepage, which has dropped rank. In all the years of running this site and plenty of others, I have never experienced this. There is no duplicate content on my site, and I have rigorously used canonical links for the last few years to ensure it is not misinterpreted as such.

    Read the article

  • How long should my HTML page title really be?

    - by RandomBen
    How long should the text within my <title></title> tags really be? I know Google cuts it off at some point, but when? When I used IIS7's SEO Toolkit 1.0, I got an error stating my title should be under 65 characters. I have a book by Bruce Clay that states I should use 62-70 characters and roughly 9 +/- 3 words. I have also used SenSEO's Firefox add-on, and it states I should use a max of 65 characters or roughly 15 words. What is the max, really? I have two sources saying 65 and one saying up to 70, but Bruce Clay is generally held in high regard.

    Read the article

  • Tracking QR Code referrals

    - by Vince Pettit
    We were using a third-party website to provide QR codes and track them. The only problem was that one week their server went down for some time, which effectively killed the QR code, since it had become a dead link. As far as I could see, their tracking was a simple redirect via their website. I have set up a page with a JavaScript redirect to the destination URL, with our Google Analytics code in the page, but I was wondering if anyone else has had any experience setting up their own tracking/redirect for QR codes this way, or have you done it differently?
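
    For comparison, a self-hosted redirect page of the kind described can be as small as the sketch below (ga.js-era syntax; the account ID, virtual path, and destination URL are placeholders, and the standard asynchronous ga.js loader snippet is assumed to be on the page):

        var _gaq = _gaq || [];
        _gaq.push(['_setAccount', 'UA-XXXXXX-1']);            // your GA property ID
        _gaq.push(['_trackPageview', '/qr/print-campaign']);  // log the scan as a virtual pageview
        // Give the tracking beacon a moment to fire before redirecting.
        setTimeout(function () {
            window.location.replace('http://www.example.com/landing-page');
        }, 300);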

    Read the article
