Search Results

Search found 31918 results on 1277 pages for 'google maps api'.


  • Why are my Google searches redirected?

    - by Please Help
    This machine was infected with various malware. I scanned the system with Malwarebytes, which found and removed some 600 infected files. Now the machine seems to be running well, with one exception: some Google search results are redirected to shady search engines. If I copy a URL from the Google search results and paste it into the address bar, it goes to the correct site, but if I click the link I am redirected somewhere else. Here is my log file from HijackThis: http://pastebin.com/ZE3wiCrk

    Read the article

  • Google Chrome not using local cache

    - by Steve
    Hi. I've been using Google Chrome as a substitute for Firefox, which couldn't handle having lots of tabs open at the same time. Unfortunately, it looks like Chrome has the same problem. Freaking useless. I had to end Chrome because my whole system had slowed to a crawl. When I restarted it, I opted to restore the tabs that were last open. At that point, every one of the 20+ tabs started downloading the pages it previously had open. My question is: why can't they open a locally stored copy of the web page from cache? Does Google Chrome store pages in a cache? Also: after most of the pages had finished downloading, I clicked on each tab to view the page. Half of them only display a white page, and I have to reload the page manually. What is causing this? Thanks for your help.

    Read the article

  • Split or individually edit repeating Google Calendar events

    - by Steve Crane
    Our company just moved from Microsoft Exchange with Outlook to Google Mail, Calendar, etc., and I am trying to modify a repeating calendar event whose times are not the same on every day. I created an event from 10h00 to 16h30 and made it repeat for two days; now the times need to be adjusted independently for each day. I could just cancel the repeat and book a new appointment for the second day, but the event has a confirmed room booking, and with rooms at a premium I worry that someone else may grab the room before I can create the new second-day event. Outlook had a way to modify repeating events individually, but I'm not seeing anything like that in the Google Calendar web client. So my question is: is there a way to individually edit repeating events, or failing that, to split the repeating event into individual ones?

    Read the article

  • Bringing Google Docs to the Desktop

    - by Jonathan Sampson
    Is there a stable way of accessing Google Docs (the application) from my desktop on Windows without having to use a browser? On my accepted answer: while I did stipulate that I wanted to avoid my browser, I didn't really mean I wanted to avoid browser technology. I meant I didn't want to open my browser, type in the web address for Google Docs, and so on. TheTXI's answer required me to download/install nothing more than what I already had (Chrome) to achieve this. It created a desktop icon (similar to an application) that launches me right into my docs, without extra browser chrome on the screen. This was an excellent suggestion, and won by virtue of parsimony.

    Read the article

  • How does Google search for content? [closed]

    - by Akito
    I am trying to understand how Google searches for content within a page. When we search, it displays relevant results with keywords in the title or other important places. What astonishes me is how they grab the starting area of the text: they show a small snippet with the search results. How do they manage that when there is nothing special in a web page that tells the Googlebot where the actual content starts? Please help me out. Thanks

    Read the article

  • Change Google Chrome's Process model?

    - by mobius42
    See here: http://imgur.com/lKffI.png Does anyone know how to stop Chrome doing this? Chrome seems to group all tabs I open from the same page into one process. If I copy and paste the links individually into separate tabs, it creates new processes, but when I just middle-click links, it groups them into one. I want to force Chrome to create a new process for every tab, because when one page locks up it freezes pretty much all the tabs I have open, and if one tab crashes it takes the rest with it. You can apparently switch Chrome's process model to one called "--process-per-tab", which seems to be what I'm looking for, but when I try to launch Chrome with this argument from the terminal, it doesn't work. It's likely I'm not using the correct command; what I tried was:

        /Applications/"Google Chrome.app"/Contents/MacOS/"Google Chrome" --process-per-tab

    I'm on OS X and using the latest dev build, 5.0.396.0.

    Read the article

  • New .NET Library for Accessing the Survey Monkey API

    - by Ben Emmett
    I've used Survey Monkey's API for a while, and though it's pretty powerful, there's a lot of boilerplate each time it's used in a new project, and the json it returns needs a bunch of processing before you can use the raw information. So I've finally got around to releasing a .NET library you can use to consume the API more easily. The main advantages are:

    - Only ever deal with strongly-typed .NET objects, making everything much more robust and a lot faster to get going
    - Automatically handles things like rate-limiting and paging through results
    - Uses combinations of endpoints to get all relevant data for you, and processes raw response data to map responses to questions

    To start, either install it using NuGet with PM> Install-Package SurveyMonkeyApi (easier option), or grab the source from https://github.com/bcemmett/SurveyMonkeyApi if you prefer to build it yourself. You'll also need to have signed up for a developer account with Survey Monkey, and have both your API key and an OAuth token. A simple usage would be something like:

        string apiKey = "KEY";
        string token = "TOKEN";
        var sm = new SurveyMonkeyApi(apiKey, token);
        List<Survey> surveys = sm.GetSurveyList();

    The surveys object is now a list of surveys with all the information available from the /surveys/get_survey_list API endpoint, including the title, id, date it was created and last modified, language, number of questions / responses, and relevant urls. If there are more than 1000 surveys in your account, the library pages through the results for you, making multiple requests to get a complete list of surveys.

    All the filtering available in the API can be controlled using .NET objects. For example you might only want surveys created in the last year and containing "pineapple" in the title:

        var settings = new GetSurveyListSettings
        {
            Title = "pineapple",
            StartDate = DateTime.Now.AddYears(-1)
        };
        List<Survey> surveys = sm.GetSurveyList(settings);

    By default, whenever optional fields can be requested with a response, they will all be fetched for you. You can change this behaviour if for some reason you explicitly don't want the information, using:

        var settings = new GetSurveyListSettings
        {
            OptionalData = new GetSurveyListSettingsOptionalData
            {
                DateCreated = false,
                AnalysisUrl = false
            }
        };

    Survey Monkey's 7 read-only endpoints are supported; the other 4, which make modifications to data, might be supported in the future. The endpoints are:

        Endpoint                         Method                 Object returned
        /surveys/get_survey_list         GetSurveyList()        List<Survey>
        /surveys/get_survey_details      GetSurveyDetails()     Survey
        /surveys/get_collector_list      GetCollectorList()     List<Collector>
        /surveys/get_respondent_list     GetRespondentList()    List<Respondent>
        /surveys/get_responses           GetResponses()         List<Response>
        /surveys/get_response_counts     GetResponseCounts()    Collector
        /user/get_user_details           GetUserDetails()       UserDetails
        /batch/create_flow               Not supported          Not supported
        /batch/send_flow                 Not supported          Not supported
        /templates/get_template_list     Not supported          Not supported
        /collectors/create_collector     Not supported          Not supported

    The hierarchy of objects the library can return is:

        Survey
            List<Page>
                List<Question>
                    QuestionType
                    List<Answer>
                        List<Item>
            List<Collector>
            List<Response>
                Respondent
                List<ResponseQuestion>
                    List<ResponseAnswer>

    Each of these classes has properties which map directly to the names of properties returned by the API itself (though using PascalCasing, which is more natural for .NET, rather than the snake_casing used by SurveyMonkey).

    For most users, Survey Monkey imposes a rate limit of 2 requests per second, so by default the library leaves at least 500ms between requests. You can request higher limits from them, so if you want to change the delay between requests just use a different constructor:

        var sm = new SurveyMonkeyApi(apiKey, token, 200); // 200ms delay = 5 reqs per sec

    There's a separate cap of 1000 requests per day for each API key, which the library doesn't currently enforce, so if you think you'll be in danger of exceeding that you'll need to handle it yourself for now. To help, you can see how many requests the current instance of the SurveyMonkeyApi object has made by reading its RequestsMade property.

    If the library encounters any errors, including failures communicating with the API, it will throw a SurveyMonkeyException, so be sure to handle that sensibly any time you use it to make calls.

    Finally, if you have a survey (or list of surveys) obtained using GetSurveyList(), the library can automatically fill in all available information using:

        sm.FillMissingSurveyInformation(surveys);

    For each survey in the list, it uses the other endpoints to fill in the missing information about the survey's question structure, respondents, and responses. This results in at least 5 API calls being made per survey, so be careful before passing it a large list. It also joins up the raw response information to the survey's question structure, so that for each question in a respondent's set of replies, you can access a ProcessedAnswer object. For example, a response to a dropdown question (from the /surveys/get_responses endpoint) might be represented in json as:

        {
          "answers": [
            {
              "row": "9384627365",
            }
          ],
          "question_id": "615487516"
        }

    Separately, the question's structure (from the /surveys/get_survey_details endpoint) might have several possible answers, one of which might look like:

        {
          "text": "Fourth item in dropdown list",
          "visible": true,
          "position": 4,
          "type": "row",
          "answer_id": "9384627365"
        }

    The library understands how this mapping works, and uses that to give you a ProcessedAnswer object, which first describes the family and type of question, and secondly gives you the respondent's answers as they relate to the question. Survey Monkey has many different question types, with 11 distinct data structures, each of which is supported by the library. If you have suggestions or spot any bugs, let me know in the comments, or even better submit a pull request.
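
    Pulling those pieces together, a minimal end-to-end usage might look like the sketch below. It only uses the types and members described above (SurveyMonkeyApi, GetSurveyList, FillMissingSurveyInformation, RequestsMade, SurveyMonkeyException); the credentials are placeholders and the namespace in the using directive is an assumption to adjust to whatever the package actually exposes:

        using System;
        using System.Collections.Generic;
        using SurveyMonkey; // assumed namespace; adjust to match the NuGet package

        class Program
        {
            static void Main()
            {
                string apiKey = "KEY";   // placeholder
                string token = "TOKEN";  // placeholder

                // Third argument sets the delay between requests (200ms = 5 requests/sec);
                // only use this if Survey Monkey has granted you a higher rate limit.
                var sm = new SurveyMonkeyApi(apiKey, token, 200);

                try
                {
                    List<Survey> surveys = sm.GetSurveyList();

                    // Fills in question structure, respondents and responses for each survey.
                    // At least 5 API calls per survey, so keep the list small.
                    sm.FillMissingSurveyInformation(surveys);

                    Console.WriteLine("Fetched {0} surveys using {1} requests.",
                        surveys.Count, sm.RequestsMade);
                }
                catch (SurveyMonkeyException ex)
                {
                    // Thrown for any error, including failures communicating with the API.
                    Console.WriteLine("Survey Monkey API error: " + ex.Message);
                }
            }
        }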

    Read the article

  • Publish limit on Facebook's Graph API

    - by Andy
    Hey guys, I've been using the Graph API for a while. One feature of my application is that it allows a user to post a message on their friends' walls (don't worry, it is not spam). Anyway, there is a limit on the API, and it will only allow a certain number of posts before failing. I've read about Facebook's bucket allocation limits, but my app's limit has not moved. It was 26 when I created the app, and it is still 26 even though there are about 20 users. What can I do to increase my publish limit? And I promise this app is not used for anything spam related.
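
    For reference, each successful wall post of the kind described here counts against that publish bucket. A minimal sketch of such a post against the (old) Graph API is below; the friend id, access token, and message are placeholders, and the exact permission and error format are assumptions to check against the current Facebook documentation:

        using System;
        using System.Collections.Generic;
        using System.Net.Http;
        using System.Threading.Tasks;

        class WallPostSketch
        {
            // Posts a message to a friend's wall: POST https://graph.facebook.com/{friendId}/feed
            static async Task PostToWallAsync(string friendId, string accessToken, string message)
            {
                using (var client = new HttpClient())
                {
                    var form = new FormUrlEncodedContent(new Dictionary<string, string>
                    {
                        { "message", message },
                        { "access_token", accessToken }
                    });

                    HttpResponseMessage response = await client.PostAsync(
                        "https://graph.facebook.com/" + friendId + "/feed", form);

                    // Once the publish bucket is exhausted the API starts returning an error
                    // payload instead of a new post id, which is the failure described above.
                    string body = await response.Content.ReadAsStringAsync();
                    Console.WriteLine((int)response.StatusCode + ": " + body);
                }
            }
        }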

    Read the article

  • Does Google use any “Language” flags / tags set within a PDF file when determining its language?

    - by Ally Ak
    When determining the language of an HTML page, I understand that Google looks at any language declarations that the page owner has set, and then also applies its own language detection algorithms. But does Google similarly look at language metadata set in PDF files when determining a PDF file's language? (Authors of PDF files can set document-wide properties describing the language, or languages, contained within it.) Or does Google rely exclusively on language detection algorithms and disregard the language flag set within the PDF file? Can anyone shed any light?

    Read the article

  • django-rest-framework: API versioning

    - by w--
    Googling around, it appears the general consensus is that embedding version numbers in REST URIs is bad practice; even on SO there are strong proponents of this view (e.g. "Best practices for API versioning?"). My question is about how to implement the proposed alternative, using the Accept header / content negotiation, in django-rest-framework. It looks like content negotiation in the framework (http://django-rest-framework.org/api-guide/content-negotiation.html) is already configured to automatically return the intended values based on accepted MIME types. If I start using the Accept header for custom types, I'll lose this benefit of the framework. Is there a better way to accomplish this in the framework?

    Read the article

  • ASP.NET Web API - Screencast series Part 4: Paging and Querying

    - by Jon Galloway
    We're continuing a six part series on ASP.NET Web API that accompanies the getting started screencast series. This is an introductory screencast series that walks through from File / New Project to some more advanced scenarios like Custom Validation and Authorization. The screencast videos are all short (3-5 minutes) and the sample code for the series is both available for download and browsable online. I did the screencasts, but the samples were written by the ASP.NET Web API team.

    In Part 1 we looked at what ASP.NET Web API is, why you'd care, did the File / New Project thing, and did some basic HTTP testing using browser F12 developer tools. In Part 2 we started to build up a sample that returns data from a repository in JSON format via GET methods. In Part 3, we modified data on the server using DELETE and POST methods. In Part 4, we'll extend our simple querying methods from Part 2, adding in support for paging and querying.

    This part shows two approaches to querying data (paging really just being a specific querying case) - you can do it yourself using parameters passed in via querystring (as well as headers, other route parameters, cookies, etc.). You're welcome to do that if you'd like. What I think is more interesting here is that Web API actions that return IQueryable automatically support OData query syntax, making it really easy to support some common query use cases like paging and filtering. A few important things to note:

    - This is just support for OData query syntax - you're not getting back data in OData format. The screencast demonstrates this by showing that the GET methods continue to return the same JSON they did previously. So you don't have to "buy in" to the whole OData thing, you're just able to use the query syntax if you'd like.
    - This isn't full OData query support - full OData query syntax includes a lot of operations and features - but it is a pretty good subset: filter, orderby, skip, and top.
    - All you have to do to enable this OData query syntax is return an IQueryable rather than an IEnumerable. Often, that could be as simple as using the AsQueryable() extension method on your IEnumerable.
    - Query composition support lets you layer queries intelligently. If, for instance, you had an action that showed products by category using a query in your repository, you could also support paging on top of that. The result is an expression tree that's evaluated on-demand and includes both the Web API query and the underlying query.

    So with all those bullet points and big words, you'd think this would be hard to hook up. Nope, all I did was change the return type from IEnumerable<Comment> to IQueryable<Comment> and convert the Get() method's IEnumerable result using the .AsQueryable() extension method.

        public IQueryable<Comment> GetComments()
        {
            return repository.Get().AsQueryable();
        }

    You still need to build up the query to provide the $top and $skip on the client, but you'd need to do that regardless. Here's how that looks:

        $(function () {
            //---------------------------------------------------------
            // Using Queryable to page
            //---------------------------------------------------------
            $("#getCommentsQueryable").click(function () {
                viewModel.comments([]);

                var pageSize = $('#pageSize').val();
                var pageIndex = $('#pageIndex').val();
                var url = "/api/comments?$top=" + pageSize + '&$skip=' + (pageIndex * pageSize);

                $.getJSON(url, function (data) {
                    // Update the Knockout model (and thus the UI) with the comments received back
                    // from the Web API call.
                    viewModel.comments(data);
                });
                return false;
            });
        });

    And the neat thing is that - without any modification to our server-side code - we can modify the above jQuery call to request the comments be sorted by author:

        $(function () {
            //---------------------------------------------------------
            // Using Queryable to page
            //---------------------------------------------------------
            $("#getCommentsQueryable").click(function () {
                viewModel.comments([]);

                var pageSize = $('#pageSize').val();
                var pageIndex = $('#pageIndex').val();
                var url = "/api/comments?$top=" + pageSize + '&$skip=' + (pageIndex * pageSize) + '&$orderby=Author';

                $.getJSON(url, function (data) {
                    // Update the Knockout model (and thus the UI) with the comments received back
                    // from the Web API call.
                    viewModel.comments(data);
                });
                return false;
            });
        });

    So if you want to make use of OData query syntax, you can. If you don't like it, you're free to hook up your filtering and paging however you think is best. Neat. In Part 5, we'll add on support for Data Annotation based validation using an Action Filter.
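
    As a rough sketch of the first, do-it-yourself approach mentioned above (paging via plain querystring parameters bound to action arguments, no OData involved), something like the following would work; the Comment type, the in-memory data, and the parameter names are illustrative assumptions rather than part of the screencast sample:

        using System.Collections.Generic;
        using System.Linq;
        using System.Web.Http;

        public class Comment
        {
            public int Id { get; set; }
            public string Author { get; set; }
            public string Text { get; set; }
        }

        public class CommentsController : ApiController
        {
            // Stand-in for the repository used in the screencast sample.
            private static readonly List<Comment> Comments = new List<Comment>();

            // GET /api/comments?pageIndex=1&pageSize=10
            // Web API binds simple-type action parameters like these from the query string,
            // so paging works without returning IQueryable or using OData query syntax.
            public IEnumerable<Comment> GetComments(int pageIndex = 0, int pageSize = 10)
            {
                return Comments
                    .OrderBy(c => c.Id)
                    .Skip(pageIndex * pageSize)
                    .Take(pageSize);
            }
        }

    The client-side call then passes pageIndex and pageSize instead of $top and $skip; the trade-off is that sorting and filtering have to be added by hand rather than coming along for free with the OData query syntax.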

    Read the article

  • Anyone successfully using the Commission Junction API?

    - by Mauricio Scheffer
    Is anyone successfully using the CJ web services? I just keep getting java.lang.NullPointerExceptions even though my app is .NET (so the errors are clearly on their side). CJ support doesn't even know what a web service is. I googled and found many people getting this or other errors. The question is: is it a temporary problem, or am I doomed to parse manually downloaded reports for eternity? The specific API I'm trying to use is the daily publisher commission service. Here is the WSDL. Links: CJ web services home | API Reference

    Read the article

  • Why does my domain not show up in Google anymore?

    - by Earlz
    So I have had a website since about 2006. It's http://earlz.biz.tm . Recently I've noticed that no results show up for it in Google. I do have a secondary domain (that I plan on getting rid of) pointing to it, but I don't understand why Google would suddenly not show my site. I believe it was showing up a few months ago, and my website is hardly ever down; one or two days in a row is the most it has been down in that time. Is there something wrong with my DNS or other configuration that would make Google not index me? For reference I've tried searching for earlz.biz.tm, site:earlz.biz.tm, and the heading from my site, "Earlz.biz.tm -- The reasoning is bacon". A few results show up for the therusticstone.com domain (the one I plan to point somewhere else), but none show up directly linking to earlz.biz.tm.

    Read the article

  • Netflix OData API iPhone: Accessing more than just the title

    - by Neil Desai
    Netflix just recently announced a new OData API which gives developers access to more of their catalog, and it is exactly what I've been looking for. Also, odata.org has a sample iPhone Objective-C SDK that accesses the Netflix OData API and displays a few movie titles in a table view with a navigation controller: http://odataobjc.codeplex.com/ I'm just messing around right now and would like to access more than just the catalog titles, but I have no idea how to. Preferably, I would like to just push another view controller that implements a page displaying the synopsis, etc. Any suggestions on how to access the other data elements of a movie? Thanks

    Read the article

  • How to capture a page with a Google map?

    - by Max
    I have a UIComponent with a Google map in the container, and I need to capture this container to make a preview. My integration looks like the following:

        <mx:UIComponent id="mapContainer" width="410" height="300" />

        googleMap = new Map();
        mapContainer.addChild(googleMap);

    But if I do this ("this" is my UIComponent):

        var bmd:BitmapData = new BitmapData(this.width, this.height, true, 0x00ffffff);
        bmd.draw(this);

    I see the following:

        An ActionScript error has occurred:
        SecurityError: Error #2123: Security sandbox violation: BitmapData.draw:
        http://localhost/ cannot access http://mt1.google.com/vt/lyrs=m@121&hl=en&src=api&x=1&y=1&z=1&s=Gali.
        No policy files granted access.
        at flash.display::BitmapData/draw()

    I know I can add the host to the allowed list on a particular client, but I need the system to work on any computer. I've tried hiding the map:

        templateGoogleMapRenderer.mapContainer.setVisible(false);
        templateGoogleMapRenderer.mapContainer.includeInLayout = false;

    but that was unsuccessful. Maybe I can override some method in my UIComponent that Flex uses during BitmapData.draw()? Capturing with the map hidden would be a successful result for me.

    Read the article

  • Embed a Google map in a WPF control

    - by Dave Turvey
    Hi, I am trying to create a WPF control that will display a map image using Google Maps. I want to be able to centre the map on a longitude and latitude specified by the application. Ideally, the control will then allow the user to move a map marker and store the latitude/longitude of the marker in the application. The only way I can think of doing this is to use a WebBrowser control and create an HTML string at runtime that shows a map of the desired location. This seems like an awkward solution and won't allow me to easily retrieve the marker location. Does anyone know of a better way to accomplish this?
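
    For what it's worth, the WebBrowser idea described above can be sketched roughly as follows, by generating an HTML page that hosts the Google Maps JavaScript API and handing it to NavigateToString. The control layout, coordinates, and zoom level are assumptions, and pushing the dragged marker position back into .NET would still need something extra (for example window.external together with ObjectForScripting):

        using System.Globalization;
        using System.Windows.Controls;

        public partial class MapView : UserControl
        {
            // Assumes the control's XAML contains: <WebBrowser x:Name="browser" />
            public MapView()
            {
                InitializeComponent();
            }

            // Centres a map with a draggable marker on the given coordinates.
            public void ShowLocation(double latitude, double longitude, int zoom = 14)
            {
                string lat = latitude.ToString(CultureInfo.InvariantCulture);
                string lng = longitude.ToString(CultureInfo.InvariantCulture);

                string html =
                    "<html><head>" +
                    "<script src='http://maps.google.com/maps/api/js?sensor=false'></script>" +
                    "<script>" +
                    "function init() {" +
                    "  var center = new google.maps.LatLng(" + lat + ", " + lng + ");" +
                    "  var map = new google.maps.Map(document.getElementById('map')," +
                    "    { center: center, zoom: " + zoom + ", mapTypeId: google.maps.MapTypeId.ROADMAP });" +
                    "  new google.maps.Marker({ position: center, map: map, draggable: true });" +
                    "}" +
                    "</script></head>" +
                    "<body onload='init()' style='margin:0'>" +
                    "<div id='map' style='width:100%; height:100%'></div>" +
                    "</body></html>";

                browser.NavigateToString(html);
            }
        }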

    Read the article

  • Office Communicator Auto Accept Calls with C# API

    - by Dennis
    Is there a way to automatically accept calls programmatically with the C# API when someone calls 'me' to start a video call? Starting a video call with the API is easy:

        var contactArray = new ArrayList();
        contactArray.Add("[email protected]");
        object[] sipUris = new object[contactArray.Count];
        int currentObject = 0;
        foreach (object contactObject in contactArray)
        {
            sipUris[currentObject] = contactObject;
            currentObject++;
        }

        var communicator = new Messenger();
        communicator.OnIMWindowCreated += new DMessengerEvents_OnIMWindowCreatedEventHandler(communicator_OnIMWindowCreated);

        IMessengerAdvanced msgrAdv = communicator as CommunicatorAPI.IMessengerAdvanced;
        if (msgrAdv != null)
        {
            try
            {
                object obj = msgrAdv.StartConversation(
                    CommunicatorAPI.CONVERSATION_TYPE.CONVERSATION_TYPE_VIDEO,
                    sipUris, null, "Conference Wall CZ - Conversation", "1", null);
            }
            catch (COMException ex)
            {
                Console.WriteLine(ex.Message);
            }
        }

    But on the other side I want to automatically accept this call.

    Read the article

  • Is there a J2SE sensor API?

    - by Martin Strandbygaard
    Does anyone know of a "standardized" Java API for working with sensors, similar to JSR 256 but not tied to J2ME? I'm writing a Java library for interfacing with a sensor network consisting of several different types of sensors (mostly simple stuff such as temperature, humidity, GPS, etc.). So far I've rolled my own interface, and users have to write apps against this. I would like to change this approach and implement a "standard" API so that implementations aren't so closely tied to my library. I've looked at JSR 256, but that really isn't a great solution as it's for J2ME, and my library is mostly used by Android devices or laptops running the full J2SE.

    Read the article

  • How do I ask Google not to index certain parts of my page?

    - by Gavin Mannion
    I was searching for an old review on my site today and noticed that Google is indexing the headline text in my latest-articles list on every page it appears on, which I suppose is to be expected. The problem is that if I search for my Dragon's Lair review restricted to my site, like this: http://www.google.co.za/search?sugexp=chrome,mod=9&sourceid=chrome&ie=UTF-8&q=site%3Alazygamer.net+dragons+lair+review it returns a ton of pages that aren't appropriate, as they aren't related to the review at all. The reason I care is that I have a second Dragon's Lair review that was posted years ago, and now I can't find it. Is there a way to hint to Google that certain text isn't relevant to the actual content of the page? Or is that a terrible idea?

    Read the article

  • Does placing Google Analytics code in an external file affect statistics?

    - by Jacob Hume
    I'm working with an outside software vendor to add Google Analytics code to their web app so that we can track its usage. Their developer suggested that we place the code in an external ".js" file, which he could include in the layout of his application. The Stack Overflow question "Google Analytics: External .js file" covers the technical aspect, so apparently tracking is possible via an external file. However, I'm not quite satisfied that this won't have negative implications. Does including the tracking code as an external file affect the statistics collected by Google?

    Read the article
