Search Results

Search found 31372 results on 1255 pages for 'google charts api'.

Page 32/1255

  • Legal Issue: Remove/Hide links on Google Login page

    - by Rowell
    For background: I'm developing a device application which offers a connection to Google Drive. My end users will need to log in to their Google Account and authorize my application to access their Google Drive. I'm using OAuth 2.0 to do this. But my concern is that I don't want users to navigate away from my application using the links on the Google login page. Basically, I don't want them to use my application to browse the internet. Question: Will I violate any terms of service/usage if I hide, or change the href of, the links using GreaseMonkey or TamperMonkey? The changes will only be on the client side and I won't alter any processing at all. I already checked https://developers.google.com/terms/ but I found no item related to modifying the pages on the client side. Thanks in advance.
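
    For illustration only, a minimal TamperMonkey-style sketch of the client-side change described above - hiding anchors that point away from the login page's host. The @match pattern and the selector logic are assumptions, not taken from the original post, and whether doing this is permissible is exactly the question being asked.

      // ==UserScript==
      // @name     Hide outbound links on the login page (illustrative)
      // @match    https://accounts.google.com/*
      // @grant    none
      // ==/UserScript==
      (function () {
        'use strict';
        // Hypothetical rule: hide any anchor that points away from the current host.
        document.querySelectorAll('a[href]').forEach(function (a) {
          if (a.hostname && a.hostname !== location.hostname) {
            a.style.display = 'none';     // hide the link entirely...
            // a.removeAttribute('href'); // ...or keep it visible but inert
          }
        });
      })();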

    Read the article

  • Google tweets – Now search twitter archives using Google

    - by samsudeen
    Google has launched a Twitter archive service which allows you to search tweets in real time as well as in its huge public archive (remember, Twitter crossed its 10 billionth tweet last month). The search results are displayed as tweets with the Twitter logo. To explore the Twitter search, go to the Google.com homepage, select “Show options” on the search results page, then select “Updates.” The search is similar to regular Google search, with options to dig through the tweets by timeframe. You can explore results by zooming through a particular time range or date. In addition to the time chart, it also displays the relative volume of Twitter activity about the topic; as you can see, there is a spike about the GSLV launch after 3 PM today. There is also a shortcut link “Now” in the left corner which displays the latest results for the topics searched. The tweets also get refreshed automatically. Considering the huge volume of activity on Twitter (50 million messages per day), the archive is only going to get bigger. By providing such a feature, Google has once again proved it is way ahead of others in search.

    Read the article

  • Sitemaps - do I need to submit each sitemap in sitemap_index.xml to Google Webmaster tools?

    - by iSumitG
    I have a WordPress blog on my CentOS server. There is no sitemap.xml in the root directory, but there is a sitemap_index.xml file in the root directory which contains the following code: <?xml-stylesheet type="text/xsl" href="http://mywebsite.com/wp-content/plugins/wordpress-seo/css/xml-sitemap-xsl.php"?> <sitemapindex xmlns="http://www.sitemaps.org/schemas/sitemap/0.9"> <sitemap> <loc> http://mywebsite.com/post-sitemap.xml </loc> <lastmod> 2012-12-18T19:47:47+00:00 </lastmod> </sitemap> <sitemap> <loc> http://mywebsite.com/page-sitemap.xml </loc> <lastmod> 2012-12-18T17:32:49+00:00 </lastmod> </sitemap> </sitemapindex> My question: which sitemap should I submit to Google Webmaster Tools? The options are: (1) only sitemap_index.xml; (2) only post-sitemap.xml and page-sitemap.xml; (3) all three (sitemap_index.xml, post-sitemap.xml and page-sitemap.xml). If there is another option, please let me know.

    Read the article

  • Friday Fun: Play MineSweeper in Google Chrome

    - by Asian Angel
    Are you addicted to MineSweeper and love to play it when taking a break from work? Now you can add that mine-sweeping goodness to Google Chrome with the Chrome MineSweeper extension. Find Those Mines! Once the extension has been installed, simply click on the “Toolbar Button” to access the game (opens in a new tab). The “emoticons” at the top of the tab window indicate the difficulty level of game play available. Sometimes you can make quick progress in a short time with this game… only to lose moments later. So you do have to plan your strategy out carefully. You will be surprised (or perhaps alarmed?) at just how quickly you get addicted to playing “just one more round”! Want a bigger challenge? Click on the “middle emoticon” to access a tougher level. The ultimate level…how much mine-sweeping punishment are you up for? Conclusion: If you are a MineSweeper fan, this will be a perfect addition to your browser. If you are new to the game, you have a lot of fun just waiting for you. Links: Download the Chrome MineSweeper extension (Google Chrome Extensions)

    Read the article

  • Google Analytics setting cookies on static content despite being on entirely separate domain

    - by Donald Jenkins
    I recently decided to comply with the YSlow recommendation that static content is hosted on a cookieless domain. As I already use the root of my domain (donaldjenkins.com) to host my website—on which Google Analytics sets a few cookies—that meant I had to move the CNAME URL for the CDN serving the static files from cdn.donaldjenkins.com to an entirely separate, dedicated domain. I purchased cdn.dj (yes, it's a real Djibouti domain name), hosted the files on the root (which contains nothing else, other than a robots.txt file) and set a CNAME of e.cdn.dj for the CDN. This setup works, but I was rather surprised to find that YSlow was still flagging the static files for not being cookie-free. The cdn.dj domain was new, and was never used for anything other than hosting these static files. Running HttpFox on the site shows the _utma and _utmz Google Analytics cookies are being set on the static files—despite their being hosted on an entirely separate, dedicated domain. Here's my Google Analytics code: //Google Analytics tracking code var _gaq=[['_setAccount','UA-5245947-5'],['_trackPageview']]; (function(d,t){var g=d.createElement(t),s=d.getElementsByTagName(t)[0]; g.src=('https:'==location.protocol?'//ssl':'//www')+'.google-analytics.com/ga.js'; s.parentNode.insertBefore(g,s)}(document,'script')); // [END] Google Analytics tracking code I'm not obsessing about this issue—I know it's not really affecting server performance—but I'd like to understand what is causing it and why it won't go away...
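
    As a point of reference (not a confirmed fix), classic ga.js lets you pin its first-party cookies to an explicit host with _setDomainName; a minimal sketch of the snippet above with that call added is shown below. The domain value is illustrative, and whether this changes what YSlow and HttpFox report is exactly the open question here.

      // Google Analytics tracking code with an explicit cookie domain (illustrative)
      var _gaq = [
        ['_setAccount', 'UA-5245947-5'],
        ['_setDomainName', 'donaldjenkins.com'], // pin _utma/_utmz to this host only
        ['_trackPageview']
      ];
      (function (d, t) {
        var g = d.createElement(t), s = d.getElementsByTagName(t)[0];
        g.src = ('https:' == location.protocol ? '//ssl' : '//www') + '.google-analytics.com/ga.js';
        s.parentNode.insertBefore(g, s);
      }(document, 'script'));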

    Read the article

  • Referral traffic not appearing properly in Google Analytics

    - by Crashalot
    We have a partnership arrangement with another site where we pay them for users sent to us. However, they claim the referral numbers we report for them are 50% lower than theirs. They are tracking clicks in Google Analytics (using events) while we are using visits in Google Analytics. Are we doing something wrong with our Google Analytics installation? <!-- Google Analytics BEGIN --> <script type="text/javascript"> var _gaq = _gaq || []; _gaq.push(['_setAccount', 'UA-12345678-1']); _gaq.push(['_setDomainName', 'example.com']); _gaq.push(['_trackPageview']); (function() { var ga = document.createElement('script'); ga.type = 'text/javascript'; ga.async = true; ga.src = ('https:' == document.location.protocol ? 'https://ssl' : 'http://www') + '.google-analytics.com/ga.js'; var s = document.getElementsByTagName('script')[0]; s.parentNode.insertBefore(ga, s); })(); </script> <!-- Google Analytics END -->
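
    For context, the two sides are counting different things: a ga.js event hit is recorded per click, while the pageview snippet above feeds into visits (sessions), and one visit can absorb several clicks from the same user. A minimal sketch of what the partner's click tracking might look like (the category, action and label names are assumptions, not taken from either site):

      // Partner side (illustrative): one event hit pushed per outbound click.
      _gaq.push(['_trackEvent', 'outbound', 'click', 'example.com']);

      // Receiving side (the snippet in the question): one pageview per page load.
      // Google Analytics then groups pageviews into visits, so repeated clicks
      // by the same user within a session count as a single visit here.
      _gaq.push(['_trackPageview']);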

    Read the article

  • Google Analytics checkout page tracking problem

    - by Amir E. Habib
    I am running a multilingual website, each language on a different domain name. I am trying to lead all purchase requests to the checkout process, which has its own domain too. In order to keep Google Analytics tracking working, I've updated the Google Analytics code accordingly. I set the source domain to 'multiple top-level domains'. Everything is going fine so far, except in the E-commerce Overview the "Source / Medium" always shows as (direct) - or the name of the source domain. Since I am redirecting using PHP header(location: ... etc.), the Google _link method doesn't seem to be working properly - I want to focus on two questions: Should I create a new profile for the checkout domain in Google Analytics? (I am now using the profile ID of the source domain even though I move to the checkout domain, is that OK?) When I try to pass the cookies of the source domain to the checkout domain, I notice that the Google cookies are copied to the new domain (the cookie path is .checkout-domain/) and they have the same values as the original cookies - but for some reason another set of cookies is created once I access a page with Google Analytics code in the checkout pages, with different values (same path). It feels like I'm doing something wrong here, so my question is: what am I doing wrong here? Does anyone have an idea how to pass the cookies to the checkout domain?
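
    For reference, the classic ga.js cross-domain setup that the _link method belongs to looks roughly like the sketch below on the source domain (the account ID and domain names are placeholders). _link rewrites the destination URL in the browser and then navigates, which is why a server-side header(location: ...) redirect bypasses it entirely - the linker parameters never get appended.

      // Source-domain tracker configured for cross-domain linking (placeholder IDs/domains)
      var _gaq = _gaq || [];
      _gaq.push(['_setAccount', 'UA-XXXXXXX-1']);
      _gaq.push(['_setDomainName', 'source-domain.com']);
      _gaq.push(['_setAllowLinker', true]); // needed where linker parameters are read (commonly set on both domains)
      _gaq.push(['_trackPageview']);

      // On a link that leaves for the checkout domain, _link appends the GA cookie
      // values to the URL as query parameters and then performs the navigation:
      // <a href="https://checkout-domain.example/cart"
      //    onclick="_gaq.push(['_link', this.href]); return false;">Checkout</a>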

    Read the article

  • Ignore Partial Upgrade -- Google Earth Dependencies

    - by pyraz
    I'm running a 64-bit install of Xubuntu 12.04. It took me a little while to get Google Earth working. The 64-bit Google Earth package requires some 32-bit GTK libraries provided by ia32-libs. However, when I ran a simulation to install ia32-libs and its dependencies, it wanted to remove a ton of programs, including the xubuntu-desktop meta-package. As a work-around, I used getlibs to get the 32-bit libraries I needed, and then installed Google Earth with the deb package and the --ignore-depends option to dpkg. Awesome, Google Earth is installed and is working great! Now, however, Update Manager keeps complaining about a "Partial Upgrade", and apt-get won't let me install any new applications. It wants me to do a fix-broken install, but when I do a simulation of apt-get -f install I get some very bad news: it wants to uninstall the Google Earth I just worked so hard to install! $> apt-get -f -s install Reading package lists... Done Building dependency tree Reading state information... Done Correcting dependencies... Done The following packages will be REMOVED: googleearth 0 upgraded, 0 newly installed, 1 to remove and 0 not upgraded. Remv googleearth [6.0.3.2197+0.7.0-1] TL;DR The --ignore-depends option passed to dpkg is not propagated to apt-get, so now I can't install any new applications until I uninstall Google Earth, because of its missing dependencies (even though it works fine without them). How can I fix this?

    Read the article

  • Google Glasses–A new world in front of your eyes

    - by Gopinath
    Google is getting into a whole new business that could help us see the world in a new dimension and free us from all the gadgets we carry today. Google Glasses is a wearable tiny computer that brings information in front of your eyes and lets you interact with it using voice commands. It's a kind of glasses (spectacles) that you can wear to see and interact with the world in a new way. With Google Glasses, for example, you can look at a beautiful location and, through voice, instruct it to capture a photograph and share it with your friends. You don't need a camera to capture the beautiful scene, and you don't need an app to upload and share it. All you need is Google Glasses. By the way, these glasses are not heavy head-mounted gear; they are very tiny and look beautiful too. Check out the embedded video demo released by Google to see them in action, and for sure you are going to be amazed. Last December, 9to5Google posted details about this secret project, and the NY Times says that these glasses would be available to everyone at an affordable cost, anywhere between $250 and $600. They are powered by Android and contain a GPS, motion sensors, a camera, and voice input & output devices. Check out Project Glass for more details.

    Read the article

  • Why does Google Chrome ignore "last_known_google_url" property in "Local State" file?

    - by Peter Sivák
    I want to force my Google Chrome web browser (version 21.0.1180.89, 64-bit) to use non-localized search (thus Google in English) through the address bar, using the default Google search engine. To achieve that, I have to change the value of the property last_known_google_url to https://www.google.com/?hl=en& in the Local State file (for instance on Linux, the full path to the file is ~/.config/google-chrome/Local State). In that file, there should be the property: "browser": { "last_known_google_url": but it is not there. Even if I add the property there, it has no impact on search - Google Chrome does not use the property and still searches in the localized version. Another option is to put the property in the Preferences file (for instance on Linux, the full path to the file is ~/.config/google-chrome/Default/Preferences) - which works perfectly when I start Google Chrome and do some searching - but just after that, the property (actually the whole Preferences file) is overridden, so "the most important" trailing part ?hl=en& of the property value is removed - and without it, the non-localized search does not work anymore. Why does Google Chrome ignore the last_known_google_url property in the Local State file?

    Read the article

  • Google is not treating two Australian schools as separate sites when both are subdomains of qld.edu.au

    - by LuckySpoon
    My question relates to two websites, each of which is a "Calvary Christian College"; however, they are in two totally different locations and entirely unrelated to each other (except by name, and thus domain). All schools in the state are issued a <school-name>.qld.edu.au subdomain, in this case calvary.qld.edu.au and calvarycc.qld.edu.au. Now what's interesting is that these domains are crossing each other in sitelinks for searches such as calvary christian college townsville. The green data here is for one school (the Townsville school, as per the search term), and the red data is for the other school. I put a demotion in for this 6 months ago (we control calvary.qld.edu.au); however, we're seeing no change on the results page. I have been able to get the owners of calvarycc.qld.edu.au to submit demotions for our domain, which should go in sometime in the next few days. What can we do to tell Google that these websites are not interchangeable, despite both appearing as "subdomains" of qld.edu.au? We can possibly open channels of communication with the administrators of qld.edu.au, but we will need to tell them what to change, and at this point I'm out of ideas.

    Read the article

  • Multilingual sites and Google search results, using sub-folders for language

    - by AWinter
    About three months ago we added an English version of our previously Japanese-only site under the subfolder /en/. We've tried to follow the sometimes incomplete best practices laid out by Google by adding alternate tags to all pages that are currently translated. The top page, for instance, has the following meta tags for language: <link rel="canonical" href="/"> <link rel="alternate" hreflang="ja" href="/"> <link rel="alternate" hreflang="en" href="/en/"> While the English main page under /en/ has: <link rel="canonical" href="/en/"> <link rel="alternate" hreflang="ja" href="/"> <link rel="alternate" hreflang="en" href="/en/"> Alternate languages are set up in the sitemap (as per Google's recommendations). It seems, however, that Google absolutely refuses to show the English top page in results when the user is searching in English at google.com: if you search, you'll, as of this post, get the Japanese description and a title that Google has apparently invented, instead of the title and description in the meta tags for the /en/ index page. Does anyone have any experience with subfolders actually working to affect search results? What are the best practices for ensuring that the correct language version of my website is displayed through Google and other search engines? And how long will it take before the new language version becomes prominent in search engine results?

    Read the article

  • No audio in Google Chrome

    - by Z9iT
    I started with Ubuntu 12.04 Minimal, then installed only these utilities: sudo apt-get install xorg xinit google-chrome-stable alsa-base alsa-utils alsa-oss I have added google-chrome to the .xinitrc file. I used sudo alsamixer to unmute everything using M, and I am able to hear sound when I run this independently in a terminal: sudo aplay /usr/share/sounds/alsa/Front_Center.wav However, Google Chrome is not giving any sound output, be it on YouTube or the same file (/usr/share/sounds/alsa/Front_Center.wav) opened by browsing in Chrome. UPDATE: the moment I install some desktop (display) manager like GNOME or LXDE and launch Chrome, the audio works perfectly. However, if I kill the X session and the desktop manager (LXDE) AND then start by loading only Chrome (without a DM), I lose the sound again. This makes me wonder whether something is preventing sound from being loaded into Chrome directly, but once a session like LXDE loads, it works flawlessly. Perhaps I should rather ask: how do I authorize google-chrome to use the sound system? Miscellaneous: I am surprised that I cannot start google-chrome with sudo (it asks to be run as a normal user) and that I cannot start alsamixer as a normal user (I must use sudo alsamixer). Can someone please help with what I need to do so that Google Chrome speaks?

    Read the article

  • Google Analytics Not tracking data correctly IP-address issue?

    - by PaperThick
    I have developed a small site for a client and the site has been placed inside an <iframe> on the client's site. The GA script I'm using looks like this: <script type="text/javascript"> var _gaq = _gaq || []; _gaq.push( ['_setAccount', 'UA-XXXXXXXX-2'], //My company's GA account ['_trackPageview'], ['b._setAccount', 'UA-XXXXXXXX-1'], // Test GA account ['b._trackPageview'], ['th._setAccount', 'UA-XXXXXXX-3'], ['th._setDomainName', '.clientdomain.se'], // Client GA account ['th._trackPageview'] ); (function () { var ga = document.createElement('script'); ga.type = 'text/javascript'; ga.async = true; ga.src = ('https:' == document.location.protocol ? 'https://ssl' : 'http://www') + '.google-analytics.com/ga.js'; var s = document.getElementsByTagName('script')[0]; s.parentNode.insertBefore(ga, s); })(); </script> </head> As you can see, I report the GA pageviews to the client as well. The GA script is tracking visitors and pageviews at both ends. But the problem is that on my client's side the visitor count is more than double what it is on my end (20,000 vs 5,000). At first I thought that it was being duplicated at some point, but when I checked my Crazy Egg account I saw that it had tracked over 10,000 visits and then stopped tracking because that was my limit on the account. The page my site is on is on an IP address (http://XXX.XXX.XX.X/campaign/) and not on a "valid URL". Could that be a reason why some of the visitors aren't being tracked? Thanks in advance

    Read the article

  • With Google DFP (Small Business) is it possible to disable AdSense in an Ad Slot on a per-request basis?

    - by Daniel Pehrson
    Setup: I run a network of websites that target different hobby niches and have a section dedicated to community classifieds. I serve advertising on these sites through Google DFP for Small Business with AdSense enabled on the slots. Problem: One of the next sites in my network will be targeting the firearms/shooting industry, and as such the classifieds section will not comply with the prohibited content guidelines of AdSense regarding the sale (or coordination of sale) of weapons. I work very hard to comply with the guidelines of my partners even if I don't understand/agree with them, and after talking with many people I have decided that the best option is to disable AdSense serving on that section of that website, while leaving it on for the rest of the network. Solution: Right now my only idea for this is to duplicate all my ad slots and tack a "_sensitive" onto the end of each one (e.g. header and header_sensitive), conditionally registering ad slots based on whether or not I am in the sensitive section of the sensitive site. My hope, however, is that there may be a way to accomplish this without duplicating all my ad slots, possibly with some sort of option to the GA_googleFillSlot() call that allows me to say "load ads from this slot but do not serve AdSense no matter what."
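
    A minimal sketch of the duplicate-slot workaround described above, assuming the legacy GA_googleAddSlot/GA_googleFillSlot tags; the publisher ID, slot names and the page-side flag are hypothetical, and the "_sensitive" twin slot would have AdSense backfill switched off in the DFP UI rather than in this code:

      // Hypothetical flag set by the page template for the firearms classifieds section
      var inSensitiveSection = document.body.classList.contains('classifieds-weapons');
      var slotName = inSensitiveSection ? 'header_sensitive' : 'header';

      // Register the chosen slot (the "_sensitive" variant has AdSense disabled in DFP)
      GA_googleAddSlot('ca-pub-XXXXXXXXXXXXXXXX', slotName);
      GA_googleFetchAds();

      // Later, where the ad should render on the page:
      GA_googleFillSlot(slotName);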

    Read the article

  • Why won't Google Chrome open Google Docs, and why is it slow with GMail?

    - by Philip
    OSX 10.6 Snow Leopard: Google Chrome works like a charm most of the time. I have many extensions installed, and I've tried disabling/re-enabling them one by one to find a culprit, with no luck. Here are the problems: (1) Chrome is slow to load GMail. I am fairly sure that clearing the cache alleviates this problem. But I can open Safari, type in the URL, and log in to GMail in the time it sometimes takes Chrome to open the page. Shouldn't caching help the page load?!? And sometimes, even after a recent cache clear, it is still slow to load... (2) Chrome won't open Google Docs at all. Safari does. Again, I tried disabling extensions one by one, and I allow cookies and even third-party cookies. But I am still told it has a redirect loop. Any help would be much appreciated. Thanks!

    Read the article

  • Google Chrome 'ruined' after someone else logged in

    - by MHJ96
    Google Chrome was the default browser that came with my laptop. I have a Google account which I was logged into Chrome with. Someone else logged into my Chrome using their account, which has resulted in everything being lost. I logged in again to find my theme, bookmarks, history, and most-visited sites were gone, and now, instead of 'piling up' under the icon already pinned to the Windows 7 taskbar, it opens a second icon on the taskbar and 'piles up' under that instead, which has never happened before. I have tried unpinning and repinning, which didn't work. I have tried syncing my account numerous times to no avail, and I have searched high and low on the Chrome forums for any kind of answer. I have tried accessing that thing in my documents to try to recover my bookmarks, but I can't find them (I had hidden files enabled, etc.). I really want it back how it was, as I had a lot of bookmarked sites and quick access to sites and everything was set up how I used it, and I hate that it now opens a new icon on the taskbar.

    Read the article

  • Will the new Twitter API 1.1 allow hashtag/tweet/trend queries without any authentication, i.e. for a client that does not use a user's account at all?

    - by P5music
    I see that, even without being logged in to Twitter with an account, if I google hashtags or Twitter accounts, Twitter shows them. I think it should also be possible to get those tweets programmatically, but I do not know for sure, so I am asking for confirmation here, especially for the future with the new Twitter API restrictions. I mean, will it be possible to get tweets from hashtags or accounts without logging in to a user account - and thus without wanting to access the user's settings, subscriptions, etc. (because I do not need them) - and so not having to respect any per-user token limit? I found these API 1.1 FAQs; should I be concerned? Will an application have to request user authorization just to make public API calls? When API v1.1 is released, user authorization (and access tokens) are required for all API 1.1 requests. In the weeks following release, some methods will require only application-based authentication for certain "userless" contexts. Will the Search API require authentication? The Search API is now part of the official REST API in version 1.1. In addition to serving results in a format consistent with other Tweet resources, usage will also require authentication.
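
    For the "userless" contexts the FAQ mentions, Twitter's API 1.1 later documented application-only authentication: the app exchanges its consumer key/secret for a bearer token and can then call read-only endpoints such as search with no user logged in (subject to per-app rate limits). A minimal Node.js sketch, assuming valid consumer credentials (the key/secret are placeholders) and that these v1.1 endpoints remain available to the app:

      // Application-only (bearer token) flow for Twitter API 1.1 -- no user account involved.
      const key = 'CONSUMER_KEY', secret = 'CONSUMER_SECRET'; // placeholders
      const basic = Buffer.from(`${key}:${secret}`).toString('base64');

      async function searchHashtag(tag) {
        // 1. Exchange the app credentials for a bearer token.
        const tokenRes = await fetch('https://api.twitter.com/oauth2/token', {
          method: 'POST',
          headers: {
            'Authorization': `Basic ${basic}`,
            'Content-Type': 'application/x-www-form-urlencoded;charset=UTF-8',
          },
          body: 'grant_type=client_credentials',
        });
        const { access_token } = await tokenRes.json();

        // 2. Call the Search API (part of REST API 1.1) with the bearer token.
        const res = await fetch(
          'https://api.twitter.com/1.1/search/tweets.json?q=' + encodeURIComponent('#' + tag),
          { headers: { 'Authorization': `Bearer ${access_token}` } });
        return res.json();
      }

      searchHashtag('example').then(data => console.log((data.statuses || []).length, 'tweets'));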

    Read the article

  • More FlipBoard Magazines: Azure, XAML, ASP.NET MVC & Web API

    - by dwahlin
    In a previous post I introduced two new FlipBoard magazines that I put together including The AngularJS Magazine and The JavaScript & HTML5 Magazine. FlipBoard magazines provide a great way to keep content organized using a magazine-style format as opposed to trudging through multiple unorganized bookmarks or boring pages full of links. I think they’re really fun to read through as well. Based on feedback and the surprising popularity of the first two magazines I’ve decided to create some additional magazines on topics I like such as The Azure Magazine, The XAML Magazine and The ASP.NET MVC & Web API Magazine. Click on a cover below to get to the magazines using your browser. To subscribe to a given magazine you’ll need to create a FlipBoard account (not required to read the magazines though) which requires an iOS or Android device (the Windows Phone 8 app is coming soon they say). If you have a post or article that you think would be a good fit for any of the magazines please tweet the link to @DanWahlin and I’ll add it to my queue to review. I plan to be pretty strict about keeping articles “on topic” and focused.   The Azure Magazine   The XAML Magazine   The ASP.NET MVC & Web API Magazine   The AngularJS Magazine   The JavaScript & HTML5 Magazine

    Read the article

  • DB API for shell scripting (any shell)

    - by foampile
    I am faced with some legacy shell scripts that run batch data processing jobs in Oracle using SQL*Plus. For the most part, the data tier does not have to communicate back to the script with retrieved data to be passed on for shell-level processing, but in a few cases it does. The problem is, SQL*Plus is really meant to be an end-user app and not an API that can communicate with other clients programmatically. That is why people have invented APIs such as DBI/DBD for Perl, JDBC for Java, ODBC, etc. The way it is done now is that they invoke SQL*Plus and then parse the output, which is clearly designed for human consumption, using tools like sed and awk. The whole thing is at best a hack and very prone to bugs. Since this client is rather conservative with their technology, they don't want to scale their scripts up to Perl or Python, where there are data access APIs. So I am wondering whether there are similar APIs for shell, e.g. ksh or bash. What I would like is an API that returns data as a 2-dimensional array of strings (for lack of typing) so that I can just read DB data like that. The way they do it now is akin to parsing a regular web page's HTML to get a single stock quote rather than cleanly calling a web service and being done with it. Anybody know of a product I can use? Thanks

    Read the article

  • Balancing dependency injection with public API design

    - by kolektiv
    I've been contemplating how to balance testable design using dependency injection with providing a simple, fixed public API. My dilemma is: people would want to do something like var server = new Server(){ ... } and not have to worry about creating the many dependencies and the graph of dependencies that a Server(,,,,,,) may have. While developing, I don't worry too much, as I use an IoC/DI framework to handle all that (I'm not using the lifecycle management aspects of any container, which would complicate things further). Now, the dependencies are unlikely to be re-implemented. Componentisation in this case is almost purely for testability (and decent design!) rather than creating seams for extension, etc. People will 99.999% of the time wish to use a default configuration. So: I could hardcode the dependencies. I don't want to do that; we lose our testing! I could provide a default constructor with hard-coded dependencies and one which takes dependencies. That's... messy, and likely to be confusing, but viable. I could make the dependency-receiving constructor internal and make my unit tests a friend assembly (assuming C#), which tidies the public API but leaves a nasty hidden trap lurking for maintenance. Having two constructors which are implicitly rather than explicitly connected would be bad design in general in my book. At the moment that's about the least evil I can think of. Opinions? Wisdom?
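
    Sketched below, purely for illustration and in JavaScript rather than the asker's C#, is the "default wiring plus injectable dependencies" option from the list above; the Server, logger and collaborator names are hypothetical.

      // Hypothetical collaborators standing in for the real dependency graph.
      class ConsoleLogger { log(msg) { console.log(msg); } }
      class NullLogger    { log() {} }

      // One public constructor: callers get sensible defaults ("new Server()" just works),
      // while tests can pass fakes through the same parameter.
      class Server {
        constructor({ logger = new ConsoleLogger() } = {}) {
          this.logger = logger;
        }
        start() { this.logger.log('server started'); }
      }

      new Server().start();                             // production: simple fixed API
      new Server({ logger: new NullLogger() }).start(); // tests: inject a fake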

    Read the article

  • Reflective discovery of an inner class in an API

    - by wassup
    Let me ask you, as this has bothered me for quite a while but appears, subjectively, to be the best solution to my problem: is reflective discovery of an inner class for API purposes that bad an idea? First, let me explain what I mean by "reflective discovery" and all that. I am sketching an API for a Java database system that'll be centered around block-based entities (don't ask me what that means - that's a long story), and those entities can be read and returned to the Java code as objects subclassed from the Entity class. I have an Entity.Factory class that, by means of fluent interfaces, takes a Class<? extends Entity> argument and then uses an instance of Section.Builder, Property.Builder, or whatever builder the entity has, to put it into the back-end storage. The idea of registering all entity types and their builders just doesn't appeal to me, so I thought that the closest solution that'd satisfy my design needs would be to discover, using reflection, all inner classes of Entity classes and find one that's called Builder. Looking for some expert insight :) And if I missed some important design details (which could happen as I tried to make this question as concise as possible), just tell me and I'll add them.

    Read the article

  • Yelp, Google's API for restaurants help

    - by chris
    OK, I have looked into this, and I'm not sure if anyone else has experience with it. I'm having tremendous difficulties with the Yelp and Google APIs. To help explain what I am trying to do, here is the concept of the website. We would have to pull restaurants based on the user's distance, and then randomize them based on restaurant quality, using feedback from review websites (Yelp, Google, Urbanspoon, Zagat, OpenTable, Kudzu, Yahoo - it doesn't have to be from all of them) and feedback from our users (on the results page for the random restaurant, users can select good recommendation/bad recommendation). There's a lot we could calculate for our formula. The things that dictate your results will be based on whether you're at home or at work. If you're at home, you will have more time to drive out to the city to grab some dinner or lunch. If you're at work, we would have to recommend restaurants nearby, as lunch is typically 30 minutes to an hour. A 30-minute lunch would most likely require takeout or quick service. With an hour lunch break, you could dine in at a local fine-dining restaurant. So in a nutshell: a user comes to the website, selects whether they're at home or at work, and clicks submit, and we will have a random restaurant selected for them to go to. If they don't like it, they can click retry and a new restaurant will show. The issue I am having is using the APIs to gather all the restaurants in the US. I know it can be done because there are similar websites/apps that pull the restaurants closest to you, such as Ness and Alfred, and I believe there are two more but I can't remember the names. Anyone know if this can be accomplished?
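
    The "randomize by quality" step described above can be illustrated with a small weighted random pick; this is only a sketch with made-up data - the hard part, pulling nearby restaurants and their ratings, still has to come from the Yelp/Google search APIs with their own keys and terms.

      // Hypothetical restaurant records, e.g. already fetched from a places/review API.
      const restaurants = [
        { name: 'Thai Corner',   rating: 4.5 },
        { name: 'Burger Shack',  rating: 3.0 },
        { name: 'Trattoria Blu', rating: 4.0 },
      ];

      // Pick one at random, weighted by rating, so better-reviewed places come up
      // more often but every nearby option still has a chance of being shown.
      function pickRestaurant(list) {
        const total = list.reduce((sum, r) => sum + r.rating, 0);
        let roll = Math.random() * total;
        for (const r of list) {
          roll -= r.rating;
          if (roll <= 0) return r;
        }
        return list[list.length - 1]; // guard against floating-point edge cases
      }

      console.log(pickRestaurant(restaurants).name);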

    Read the article

  • Google App Engine + AdWords API: java.lang.NoClassDefFoundError: javax/xml/soap/SOAPException

    - by Konrad
    Hi all, I know a lot about this exception so far. But I am wondering if any of you have tried to use the AdWords API on GAE. AdWords uses Axis as its underlying web-service library, which does not work on GAE, and unfortunately I cannot find a solution to make it work. I have already tried this: http://dev.bizo.com/2009/04/calling-soap-web-services-on-google-app.html Do any of you know if there is a way to use the AdWords API on GAE with Java? Thanks in advance for any help. Konrad

    Read the article
