Search Results

Search found 22027 results on 882 pages for 'google analytics'.

Page 50/882

  • how to get email id from google api response

    - by user1726508
    I am able to get user information from the Google API response using OAuth2, but I don't know how to read the individual fields of that response. The response I am getting from the Google API is:

        Access token: ya29.AHES6ZQ3QxKxnfAzpZasdfd23423NuxJs29gMa39MXV551yMmyM5IgA
        { "id": "112361893525676437860", "name": "Ansuman Singh", "given_name": "Ansuman", "family_name": "Singh", "link": "https://plus.google.com/112361893525676437860", "gender": "male", "birthday": "0000-03-18", "locale": "en" }
        Original Token: ya29.AHES6ZQ3QxKxnfAzpZu0lYHYu8sdfsdafdgMa39MXV551yMmyM5IgA
        New Token: ya29.AHES6ZQ3QxKxnfdsfsdaYHYu8TNuxJs29gMa39MXV551yMmyM5IgA

    But I want only "id" and "name" individually, so I can save them in my database table. How can I do this? I got the output above by using the code below:

        public static void main(String[] args) throws IOException {
            -------------------------
            -------------------------
            -------------------------
            String accessToken = authResponse.accessToken;
            GoogleAccessProtectedResource access = new GoogleAccessProtectedResource(accessToken,
                    TRANSPORT, JSON_FACTORY, CLIENT_ID, CLIENT_SECRET, authResponse.refreshToken);
            HttpRequestFactory rf = TRANSPORT.createRequestFactory(access);
            System.out.println("Access token: " + authResponse.accessToken);
            String url = "https://www.googleapis.com/oauth2/v1/userinfo?alt=json&access_token="
                    + authResponse.accessToken;
            // read the whole userinfo JSON response into r
            final StringBuffer r = new StringBuffer();
            final URL u = new URL(url);
            final URLConnection uc = u.openConnection();
            BufferedReader br = new BufferedReader(new InputStreamReader(uc.getInputStream()));
            String line;
            while ((line = br.readLine()) != null) {
                r.append(line).append('\n');
            }
            System.out.println(" " + r); // this prints the whole response at once, but I want the fields individually
            access.refreshToken();
            System.out.println("Original Token: " + accessToken + " New Token: " + access.getAccessToken());
        }
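    To pull "id" and "name" out individually, the response collected in r can be run through any JSON parser. A minimal sketch, assuming the org.json library is on the classpath (Gson or Jackson would work the same way; r is the response buffer from the code above):

        import org.json.JSONObject;

        // Parse the userinfo response and read single fields from it
        JSONObject userInfo = new JSONObject(r.toString());
        String id = userInfo.getString("id");     // "112361893525676437860"
        String name = userInfo.getString("name"); // "Ansuman Singh"
        // id and name can now be stored in the database individually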

    Read the article

  • Benchmarking the Linux Flash player and Google Chrome's built-in Flash player

    - by Fischer
    I use Xubuntu 14.04 64-bit; I installed Flash Player from the Software Center, and xubuntu-restricted-extras too. Are there any benchmarks comparing the Linux Flash player with Google Chrome's built-in Flash player? I just want to see their performance, because in theory Google's Flash player should be more up to date and perform better than the one we use in Firefox (that's what I read everywhere). I have the latest Chrome version installed and Firefox next, and I found that Flash videos in Chrome are laggy and take a long time to load, while the same videos load much faster in Firefox. I tend to prefer watching Flash videos in Firefox, especially long ones, because it loads them so much faster. I can't believe these results on my PC, so is there any way to benchmark Flash player performance in both browsers? I want to know whether it's the Flash player, the browsers, or something else.

    Read the article

  • Google Storage for Developers…

    - by joelvarty
    I noticed this today, and it seems to be a service that will compete with Amazon S3 and Microsoft's Azure Blob storage. It's only open to US developers for now, but I have one burning question: can we transfer directly from Google Storage to another Google service (like YouTube, Docs, etc.) without incurring any transfer charges? The even bigger question is whether all of the APIs will be updated to include this new service and to better amalgamate the existing app services with it; since storage is so central to everything, it seems to beg the question. via Daring Fireball more later - joel

    Read the article

  • Cannot connect to google talk using empathy/pidgin after upgrading

    - by levesque
    I can't seem to connect to Google Talk using either Pidgin or Empathy after upgrading to Ubuntu 10.10 (fresh install). I do not know why... Empathy just shows "Connecting..." forever and never connects. What can I do from here? Google Talk's web interface works just fine for me, so I doubt it is a firewall issue. EDIT: In fact, sometimes Empathy is able to connect, it just takes a LOOONG time (I'd say about 5 minutes), longer than I am willing to wait. The web interface, on the other hand, connects in seconds.

    Read the article

  • Internal microphone not working in Google talk

    - by Tim
    My laptop is a Lenovo T400, which has an internal microphone. My OS is Ubuntu 10.10 and my browser is Firefox 11.0. The internal microphone works fine, because I can use it to record my voice in Audacity. In Google Talk, however, I have chosen "Internal Audio Analog Stereo" under "Settings - Voice and video chat - Microphone:", but when I click "Verify your settings", it doesn't work, and I can't speak to others using Google Talk. I wonder why. Thanks and regards!

    Read the article

  • google-chrome video application association

    - by Ben Lee
    Is there any way to tell google-chrome to launch video files of particular types in an external application (or even just to bring up the download box as if the type was unhandled), instead of playing the video inside the browser? Searching online, it seems that Chrome is supposed to use xdg-mime for file associations, but apparently it ignores this for video. For example, when I run: xdg-mime query default video/mpeg it returns dragonplayer.desktop. But when I click on an MPEG video link, Chrome displays it internally instead of launching Dragon Player (if I double-click an MPEG file in my file manager, on the other hand, it does open Dragon Player). So is there a way to tell Chrome to respect this setting, or another way to coax Chrome into opening the file externally? If it matters, I'm running the latest version of google-chrome stable (not Chromium) at the time of writing, v. 18.0.1025.151, on Kubuntu 11.10.
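    For reference, a short sketch of the xdg-mime commands involved (the set-default form is standard xdg-utils; whether Chrome honors it for video is exactly what is in question here):

        # query which .desktop file currently handles MPEG video
        xdg-mime query default video/mpeg

        # point the association at Dragon Player explicitly
        xdg-mime default dragonplayer.desktop video/mpeg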

    Read the article

  • Google indexing page with parameters but page is Disallowed in robots.txt

    - by Jakobud
    I have the following in robots.txt:

        User-agent: *
        Disallow: /refer.php

        User-agent: NinjaBot
        Allow: /

        Sitemap: http://www.mysite.com/sitemap.xml

    The refer.php file does various things depending on what GET parameters are passed to it. When I do a Google search, I see tons of results for pages like this:

        http://www.mysite.com/refer.php?o=23945
        http://www.mysite.com/refer.php?o=39858
        http://www.mysite.com/refer.php?o=9683
        http://www.mysite.com/refer.php?o=10569
        http://www.mysite.com/refer.php?o=58304
        http://www.mysite.com/refer.php?o=69604

    Is Google indexing these because I don't have an asterisk * after refer.php in robots.txt? Should changing it to Disallow: /refer.php* fix the problem?
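    For what it's worth, a sketch of why the asterisk should not be needed (robots.txt Disallow rules are prefix matches, so the existing rule already covers the query-string variants):

        User-agent: *
        # prefix match: this already blocks /refer.php?o=23945 and friends
        Disallow: /refer.php

    Note that Disallow only blocks crawling, not indexing; Google can still list a blocked URL that it discovers through links, which may be what is happening here.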

    Read the article

  • SEO - why did my google search rank drop?

    - by Brian McCarthy
    My nutrition shop websites for Tampa and Brandon were coming up on page one of Google search results, and now they have disappeared. The two websites serve different markets, although they are close geographically and have the same products, keywords, and layouts. Brandon, FL is considered a suburb of Tampa, FL and could be grouped into the Tampa Bay area. There's also a mirror nutrition shop site set up for Orlando. Is the Google search ranking drop because: 1) of a Flash banner recently added on the front page, 2) the site is just being re-indexed on all search engines because of new content, or 3) it is seen as duplicate content, since there are separate websites for the cities of Tampa and Brandon but with the same content? What can be done to fix this? Thanks!

    Read the article

  • Bingbot requests from Google IP address

    - by JITHIN JOSE
    We have some suspicious requests to our server:

        74.125.186.46 - - [24/Aug/2014:23:24:11 -0500] "GET <url> HTTP/1.1" 200 16912 "-" "Mozilla/5.0 (compatible; bingbot/2.0; +http://www.bing.com/bingbot.htm)"
        74.125.187.193 - - [24/Aug/2014:23:24:12 -0500] "GET <url> HTTP/1.1" 200 20119 "-" "Mozilla/5.0 (compatible; bingbot/2.0; +http://www.bing.com/bingbot.htm)"

    As this shows, the user-agent says it is bingbot, but whois data for the IP addresses (74.125.186.46 and 74.125.187.193) shows they belong to Google's servers. So is it Google, Bing, or some other content scraper?
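    A common way to check such claims is a reverse DNS lookup followed by a forward lookup of the returned name (both Google and Microsoft document that their genuine crawlers resolve to googlebot.com/google.com and search.msn.com respectively); a sketch:

        # reverse lookup: a genuine bingbot PTR record ends in search.msn.com,
        # a genuine Googlebot PTR record in googlebot.com or google.com
        host 74.125.186.46

        # then forward-resolve the returned hostname (placeholder below) and
        # confirm it maps back to the same IP, to guard against spoofed PTRs
        host hostname-from-previous-step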

    Read the article

  • Is the Google Webmaster Tools verification temporary?

    - by Senseful
    When you add a site to Google Webmaster Tools, it asks you to verify it (e.g. via a <meta> tag). I verified a site a while ago, but when I logged in, I noticed that it isn't verified anymore. The history shows that it was verified 58 days ago, but then 30 days ago it tried and failed, saying "reverification failed". I'm not sure if this is a result of some setting I changed which required a reverification, or if Google Webmaster Tools periodically tries to re-verify the site. I was under the impression that the verification only happens once when you add the site, and then you can delete the <meta> tag. If this is not how it works, and it does re-verify periodically, will it require a different <meta> tag value, or can I keep the original one I used and never have to worry about it again?

    Read the article

  • Retrieving upcoming calendar events from a Google Calendar

    - by brian_ritchie
    Google has a great cloud-based calendar service that is part of their Gmail product. Besides using it as a personal calendar, you can use it to store events for display on your web site. The calendar is accessible through Google's GData API, for which they provide a C# SDK. Here's some code to retrieve the upcoming entries from the calendar:

        public class CalendarEvent
        {
            public string Title { get; set; }
            public DateTime StartTime { get; set; }
        }

        public class CalendarHelper
        {
            public static CalendarEvent[] GetUpcomingCalendarEvents(int numberOfEvents)
            {
                CalendarService service = new CalendarService("youraccount");
                EventQuery query = new EventQuery();
                query.Uri = new Uri("http://www.google.com/calendar/feeds/userid/public/full");
                query.FutureEvents = true;
                query.SingleEvents = true;
                query.SortOrder = CalendarSortOrder.ascending;
                query.NumberToRetrieve = numberOfEvents;
                query.ExtraParameters = "orderby=starttime";
                var events = service.Query(query);
                return (from e in events.Entries
                        select new CalendarEvent()
                        {
                            StartTime = (e as EventEntry).Times[0].StartTime,
                            Title = e.Title.Text
                        }).ToArray();
            }
        }

    There are a few special "tricks" to make this work: the "SingleEvents" flag will flatten out recurring events; "FutureEvents", "SortOrder", and the "orderby" parameter will get the upcoming events; and "NumberToRetrieve" will limit the amount coming back. I then use LINQ to Objects to put the results into my own DTO for use by my model. It is always a good idea to place data into your own DTO for use within your MVC model; this protects the rest of your code from changes to the underlying calendar source or API.

    Read the article

  • Transferring Email to Google Apps - Timing

    - by picus
    I did a site for a client a few months back. Hosting and email were set up through a Dreamhost VPS. Hosting has not been an issue, but email has become increasingly dodgy. Long story short, they want to transfer to Google Apps for Business. They already have the mailboxes set up; they are on Macs, so they will be transferring using the Gmail email importer for Mac. My question is this: should they transfer their domain over first, or their emails? I'm a developer, so I have no problem changing their DNS settings, but I am not an IT manager type by any stretch, so I am a bit in the dark about process. My proposed process was:

        1. Delete any junk/deleted mail from the current environment
        2. Back up email locally
        3. Copy emails to Google Apps via the importer
        4. Switch the domain and update Mac Mail settings

    It seems that doing the domain first would be best, but I don't know if that is possible. I have been trying to find a generic checklist, but I haven't been able to.

    Read the article

  • Best way to prevent Google from indexing a directory [duplicate]

    - by Gkhan14
    I've researched many methods of preventing Google and other search engines from crawling a specific directory. The two most popular ones I've seen are:

        1. Adding it to the robots.txt file: Disallow: /directory/
        2. Adding a meta tag: <meta name="robots" content="noindex, nofollow">

    Which method would work best? I want this directory to remain "invisible" to search engines so it does not affect any of my site's ranking. In other words, I want this directory to be neutral/invisible and "just there."

    Read the article

  • SEO optimisation problems after Google Panda [on hold]

    - by Daniel West
    I am currently trying to improve a website's SEO after it took quite a hit from the Google Panda updates. What are the main things I need to look at improving when trying to improve its ranking in Google? I have already made sure that the pages validate to W3C standards, minimized the CSS and JS, and done the obvious meta tag and header optimization, but this hasn't made any difference yet. It could possibly be a content issue, as the pages currently read much like a brochure, and there were some pages with just a video and no text content on them, which is also an issue. I've added a rel="nofollow" attribute to the links to these pages, although I'm told this doesn't really work anymore. If anyone has any ideas, let me know. Cheers!

    Read the article

  • Discover What Powers Your Favorite Websites

    - by Matthew Guay
    Have you ever wondered if the site you're visiting is powered by WordPress, or if the webapp you're using is powered by Ruby on Rails? With these extensions for Google Chrome, you'll never have to wonder again. Geeks love digging under the hood to see what makes their favorite apps and sites tick. But opening the "View Source" window today doesn't tell you everything there is to know about a website. Even if you can tell what CMS is powering a website from its source, it can be tedious to dig through lines of code to find what you're looking for, and the HTML code never tells you what web server a site is running on or what version of PHP it's using. With three extensions for Google Chrome you'll never have to wonder again. Note that some sites may not give as much information, but still, you'll find enough data from most sites to be interesting.

    Discover Web Frameworks and JavaScript Libraries with Chrome Sniffer. If you want to know what CMS is powering a site, or whether it's using Google Analytics or Quantcast, this is the extension for you. Chrome Sniffer (link below) identifies over 40 different frameworks, and is constantly adding more. It shows the logo of the site's main framework on the left of your address bar. Here we see Chrome Sniffer noticed that How-To Geek is powered by WordPress. Click the logo to see other frameworks on the site; we can see that the site also has Google Analytics and Quantcast. If you want more information about a framework, click its logo and the framework's homepage will open in a new tab. As another example, we can see that the Tumblr Staff blog is powered by Tumblr (of course), the Disqus comment system, Quantcast, and the Prototype JavaScript framework. Or here's a site that's powered by Drupal, Google Analytics, Mollom spam protection, and jQuery. Chrome Sniffer definitely uncovers a lot of neat stuff, so if you're into web frameworks you're sure to enjoy this extension.

    Find Out What Web Server the Site Is Running On. Want to know whether the site you're looking at is running on IIS or Apache? The Web Server Notifier extension for Chrome (link below) lets you easily recognize the web server a site is running on by its favicon on the right of the address bar. Click the icon to see more information. Some web servers will show you a lot of information, including version, operating system, PHP version, OpenSSL version, and more; others will simply tell you their name. If the site is powered by IIS, you can usually tell the version of Windows Server it's running on, since IIS versions are specific to a version of Windows. Here we see that Microsoft.com is running on the latest and greatest: Windows Server 2008 R2 with IIS 7.5.

    Discover Web Technologies Powering Sites. Wondering if a webapp is powered by Ruby on Rails or ASP.NET? The Web Technology Notifier extension for Chrome (link below), from the same developer as the Web Server Notifier, lets you easily discover the backend of a site. You'll see the technology's favicon on the right of your address bar and, as with the other extension, can get more information by clicking the icon. Here we can see that Backpack from 37signals is powered by the Phusion Passenger module to run Ruby on Rails. Microsoft's new Docs.com Office Online apps are powered by ASP.NET... and How-To Geek has PHP running to power WordPress.

    Conclusion: with all these tools at hand, you can find out a lot about your favorite sites.
    For example, with all three extensions we can see that How-To Geek runs on WordPress with PHP, uses Google Analytics and Quantcast, and is served by the LiteSpeed web server. Fun info, huh? Links: Download the Chrome Sniffer extension | Download the Web Server Notifier extension | Download the Web Technology Notifier extension

    Read the article

  • No cache and Google AdSense performance

    - by Luca
    I'm developing a page where I need to prevent the browser from caching its JavaScript. I've added these headers:

        <?php
        header('Cache-Control: no-cache, no-store, must-revalidate');
        header('Pragma: no-cache');
        header('Expires: 0');
        ?>

    After this, browsers no longer cached the JavaScript, which sorted out the issue, but at the same time I noticed a drop in Google AdSense RPM. Then I removed the added code, and now the AdSense RPM is reaching a good value again. So, how can I avoid JavaScript caching without meddling with AdSense performance?
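    One approach worth sketching (this assumes the goal is only to bypass the cache for the site's own script, not for the whole page): version the script URL instead of sending page-wide no-cache headers, so the rest of the page, including AdSense's resources, keeps its normal caching. The file path here is hypothetical:

        <?php
        // cache-bust a single script by versioning its URL; filemtime()
        // changes only when the file changes, forcing a re-fetch then
        $ver = filemtime(__DIR__ . '/js/app.js'); // hypothetical path
        echo '<script src="/js/app.js?v=' . $ver . '"></script>';
        ?>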

    Read the article

  • Moving from http to https - Google webmaster tools | Bing webmaster tools

    - by user2240778
    I'm moving my entire site from http to https. The site is currently added to Google Webmaster Tools as www.example.com, and all the pages are indexed as http. How do I go about moving to the new https URLs in Google Webmaster Tools? Do I just submit an updated sitemap that has the https URLs, or do I add a new site as https://www.example.com and submit the sitemap with the https URLs there? All the http URLs are set to redirect to their https counterparts.

    Read the article

  • What does Avg Position from Google Webmaster really mean

    - by RandomBen
    I have a website I am tracking in Google's Webmaster Tools. When I look at the Search queries page, it shows me a graph of impressions and clicks per day. Underneath that there is a table showing specific results, and one of the columns is Avg Position. Does that column include paid ads, i.e. the ads at the top of the search results? I am asking because the company I work for has a not-so-common name, and whenever I Google it, our site is the #1 result 100% of the time; the only thing above it is one paid ad. When I check Webmaster Tools, I notice that searching for our name returns an Avg Position of 2.0. That seems like it would only be possible if paid top-result ads were included in that position. Does anyone know?

    Read the article

  • Google Open-Sources Their Book Scanner

    - by Jason Fitzpatrick
    Google has released the hardware and software source for their high-speed, non-destructive book scanner. If you're looking to scan a large volume of books, save yourself the design work and check out the Linear Book Scanner project. The design is pretty slick; the scanner uses vacuum pressure to automatically turn the pages as it works. Check out the video above to see a Google Tech Talk about the project, and then hit up the link below to grab the hardware and software files. Linear Book Scanner [via Hack A Day]

    Read the article

  • Google Chrome would not exit fullscreen mode

    - by siberan
    I am running 64-bit Ubuntu 13.10 and have the latest stable google-chrome, version 30.0.1599.114-1. Whenever I enter fullscreen mode by pressing F11, I can never exit it by pressing F11 again: I see it exit, but then fullscreen mode is quickly restored. I searched for a solution, but nothing really helps. I even tried completely re-installing it, with no luck. Any suggestions? Update: I tried completely removing ~/.config/google-chrome; it did not help. Update 2: I am running Cinnamon 2.0.6, maybe that gives some clues. Thanks, Nick.

    Read the article

  • Interview question: How would you implement Google Search?

    - by ripper234
    Suppose you were asked in an interview, "How would you implement Google Search?" How would you answer such a question? There might be resources out there that explain how some pieces of Google are implemented (BigTable, MapReduce, PageRank, ...), but that doesn't exactly fit in an interview. What overall architecture would you use, and how would you explain it in a 15-30 minute time span? I would start by explaining how to build a search engine that handles ~100k documents, then expand this via sharding to around 50M docs, then perhaps another architectural/technical leap. This is the 20,000-foot view. What I'd like is the details: how you would actually answer that in an interview, which data structures you would use, what services/machines your architecture is composed of, what a typical query latency would be, what about failover/split-brain issues, etc.
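    As a concrete starting point for the ~100k-document tier, a minimal sketch of the core data structure, an inverted index, in Java (illustrative only; all names are made up, and real engines add ranking, positional data, and compression on top of this):

        import java.util.*;

        // Minimal in-memory inverted index: term -> postings list of doc ids.
        // Enough for the ~100k-document tier; sharding this map by term or
        // by document is the usual next architectural step.
        class InvertedIndex {
            private final Map<String, List<Integer>> postings = new HashMap<>();

            void add(int docId, String text) {
                for (String term : text.toLowerCase().split("\\W+")) {
                    if (!term.isEmpty()) {
                        postings.computeIfAbsent(term, k -> new ArrayList<>()).add(docId);
                    }
                }
            }

            // AND query: ids of documents containing every query term
            Set<Integer> search(String query) {
                Set<Integer> result = null;
                for (String term : query.toLowerCase().split("\\W+")) {
                    Set<Integer> docs = new HashSet<>(postings.getOrDefault(term, List.of()));
                    if (result == null) result = docs;
                    else result.retainAll(docs);
                }
                return result == null ? Set.of() : result;
            }
        }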

    Read the article

  • GA and Unique visitors again

    - by DDEX
    I take care of a company intranet and measure the traffic with GA. I am absolutely sure that there are no more than 5000 URLs in our company, and it is impossible to reach the intranet from outside the company network. Yet when I check the number of Unique Visitors (UV) over the last year, GA says there were 36,500 of them... How is that possible? I thought UV should measure each URL only once in the given time period. Could anybody explain how this actually works? Can it be that the tracking cookies expire after some time and so are counted more than once?

    Read the article

  • Fixing Google Chrome text antialias for .ttf fonts

    - by 71GA
    I have found a topic which presents a solution for getting antialiasing to work in Google Chrome on Windows, but it uses the .svg format. I have fonts in .ttf format, and at the moment I import all of them like this:

        @font-face {font-family: "t1"; src: url(../fonts/title/circle.ttf);}
        @font-face {font-family: "t2"; src: url(../fonts/title/sanserifing.ttf);}
        @font-face {font-family: "t3"; src: url(../fonts/title/serveroff.ttf);}
        @font-face {font-family: "t4"; src: url(../fonts/title/pupcat.ttf);}

    How can I achieve antialiasing done right in Google Chrome on Windows?
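    The usual pattern (a sketch, not verified against this exact Chrome version; the .woff and .svg files are assumptions and would need to be generated from the .ttf, e.g. with FontSquirrel's webfont generator) is to list several formats in each @font-face rule. The workaround in the linked topic relies on Chrome picking the .svg entry, so it goes first; browsers without SVG font support fall through to woff/ttf:

        /* sketch for the first font; repeat for t2-t4 */
        @font-face {
          font-family: "t1";
          src: url(../fonts/title/circle.svg#circle) format("svg"),
               url(../fonts/title/circle.woff) format("woff"),
               url(../fonts/title/circle.ttf) format("truetype");
        }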

    Read the article

  • Google Webmaster Tools crawl error caused by a URL split into two lines

    - by Shiro
    I am looking into the Google Webmaster Tools Crawl Errors section. How should I handle URLs that are invalid because another system or application mangled them? For example, the URL http://www.example/images/products/s_=enlarge_16gb.jpg was posted to Yahoo Groups, and something there broke the link into two lines, so that only the top part became a hyperlink: http://www.example/images/products/s_= Because of that URL, Google reports a crawl error, and I have a few errors caused by this kind of thing, or by other people's typos. How do I prevent this? I am sure I don't have the right to go and change other people's posts. What is the solution for this? Thanks!

    Read the article
