Search Results

Search found 25880 results on 1036 pages for 'google safe browsing'.

Page 285/1036 | < Previous Page | 281 282 283 284 285 286 287 288 289 290 291 292  | Next Page >

  • OAuth callback problem

    - by yogsma
    I am using OAuth with the Google Data API. We have a portal that is only for authorized users. When users are logged in and open the calendar page, they are asked whether they want to sync their calendars with Google Calendar. If yes, they are redirected for authentication. Once the user has granted access, Google appends the oauth_token to the callback URL, which is the URL of the calendar page in the portal. That URL has its query-string options encrypted. But when the redirection happens, it takes the user back to the login page. The URL looks like http://aaa.xyz.com/(encrypted part of query string), and after the oauth_token is authorized it becomes http://aaa.xyz.com/(encrypted part of query string)&oauth_token. So after the redirect the user sees the login page instead of the original page. How should I handle this in code?
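
    One common pattern (a sketch only; the handler names and the return_to parameter are hypothetical, and the token plumbing depends on the OAuth library in use) is to register a dedicated callback URL and remember the user's original destination separately, instead of using the encrypted page URL as the callback itself:

        from google.appengine.api import memcache
        from google.appengine.ext import webapp

        class StartCalendarSync(webapp.RequestHandler):
            # Hypothetical handler invoked from the calendar page.
            def get(self):
                request_token = self.request.get('oauth_token')   # obtained from the OAuth library
                # Remember where the user started, keyed by the request token, for ten minutes.
                memcache.set('oauth_dest_' + request_token, self.request.get('return_to'), time=600)
                # ...redirect the user to Google's authorization URL here...

        class CalendarOAuthCallback(webapp.RequestHandler):
            # Hypothetical handler registered as the OAuth callback URL.
            def get(self):
                token = self.request.get('oauth_token')
                destination = memcache.get('oauth_dest_' + token) or '/'
                # Exchange the request token for an access token with the OAuth library,
                # then send the still-logged-in user back to the page they came from.
                self.redirect(destination)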

    Read the article

  • Translate a webpage in PHP

    - by Rob
    I'm looking to translate a webpage in PHP 5 so I can save the translation and make it accessible via mydomain.com/lang/fr/category/article.html rather than sending users through Google Translate. I've found various easy ways to translate text via cURL; however, what I'd really like to do is translate an entire webpage while ignoring the tags. The problem is that Google Translate mangles the HTML tags, class names, etc. Does anyone know of a PHP class that can translate an entire webpage while ignoring the tags? I'm guessing it may be possible with advanced regular expressions or something similar, but I'm not sure. I can't just cURL Google's response, as it will contain all the extra JavaScript they inject. Any ideas?
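
    Whatever the language, the usual approach is to parse the markup and send only the text nodes to the translation service; here is a rough Python 2 sketch of the idea (a hypothetical translate_text() stands in for whatever translation call is used, and the same idea carries over to PHP's DOMDocument):

        from HTMLParser import HTMLParser   # Python 2 stdlib, matching the App Engine code elsewhere on this page

        def translate_text(text):
            # Hypothetical stand-in for a call to a translation API.
            return text

        class TranslatingParser(HTMLParser):
            def __init__(self):
                HTMLParser.__init__(self)
                self.out = []
            def handle_starttag(self, tag, attrs):
                self.out.append(self.get_starttag_text())   # copy markup through untouched
            def handle_endtag(self, tag):
                self.out.append('</%s>' % tag)
            def handle_data(self, data):
                # Only the text between tags is sent to the translator.
                self.out.append(translate_text(data) if data.strip() else data)

        parser = TranslatingParser()
        parser.feed('<p class="intro">Hello world</p>')
        parser.close()
        print ''.join(parser.out)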

    Read the article

  • GAE error when I log in

    - by zjm1126
    I am using http://code.google.com/p/gaema/source/browse/#hg/demos/webapp, and this is my traceback:

        Traceback (most recent call last):
          File "D:\Program Files\Google\google_appengine\google\appengine\ext\webapp\__init__.py", line 510, in __call__
            handler.get(*groups)
          File "D:\gaema\demos\webapp\main.py", line 31, in get
            google_auth.get_authenticated_user(self._on_auth)
          File "D:\gaema\demos\webapp\gaema\auth.py", line 641, in get_authenticated_user
            OpenIdMixin.get_authenticated_user(self, callback)
          File "D:\gaema\demos\webapp\gaema\auth.py", line 83, in get_authenticated_user
            url = self._OPENID_ENDPOINT + "?" + urllib.urlencode(args)
          File "D:\Python25\lib\urllib.py", line 1250, in urlencode
            v = quote_plus(str(v))
        UnicodeEncodeError: 'ascii' codec can't encode characters in position 0-1: ordinal not in range(128)

    How do I fix this? Thanks.
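
    The traceback shows urllib.urlencode failing on a non-ASCII value (quote_plus(str(v)) only handles ASCII). The usual workaround, sketched here outside the gaema code itself, is to UTF-8-encode any unicode values before they reach urlencode:

        import urllib

        def utf8_urlencode(params):
            # Encode unicode values to UTF-8 byte strings so quote_plus(str(v)) cannot fail.
            encoded = {}
            for key, value in params.items():
                if isinstance(value, unicode):
                    value = value.encode('utf-8')
                encoded[key] = value
            return urllib.urlencode(encoded)

        print utf8_urlencode({'openid.sreg.fullname': u'\u5f20\u4e09'})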

    Read the article

  • Sending the variable's content to my mailbox in Python?

    - by brilliant
    I have asked a question here about a Python command that fetches a web page and stores it in a variable. The first thing I wanted to know was whether the variable in this code contains the HTML of the page:

        from google.appengine.api import urlfetch

        url = "http://www.google.com/"
        result = urlfetch.fetch(url)
        if result.status_code == 200:
            doSomethingWithResult(result.content)

    The answer I received was "yes", i.e. the variable "result" in the code does contain the HTML of the page, and the programmer who answered said that I needed to "check the Content-Type header and verify that it's either text/html or application/xhtml+xml". I've looked through several Python tutorials but couldn't find anything about headers. So my question is: where is this Content-Type header located, and how can I check it? Also, could I send the content of that variable directly to my mailbox? Here is where I got this code; it's on Google App Engine.
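
    For reference, the urlfetch result exposes its response headers as a dictionary, and App Engine's mail API can forward the fetched content; a rough sketch (the addresses are placeholders, and the sender has to be an address the application is allowed to send from):

        from google.appengine.api import urlfetch, mail

        result = urlfetch.fetch("http://www.google.com/")
        content_type = result.headers.get('Content-Type', '')     # e.g. "text/html; charset=ISO-8859-1"
        if result.status_code == 200 and content_type.startswith(('text/html', 'application/xhtml+xml')):
            mail.send_mail(sender="admin@example.com",             # placeholder addresses
                           to="me@example.com",
                           subject="Fetched page",
                           body=result.content)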

    Read the article

  • Where to get 1440 free cron jobs a day?

    - by Nok Imchen
    I'm making a program for my own use, and it needs a cron job that runs every minute (24 hr * 60 min = 1440 times a day). I think Google App Engine offers free cron jobs, but I'm very new to it: I downloaded the Java SDK and read the documentation but understood nothing, so I can't use Google App Engine. Is there any other free service like Google App Engine but with an easier interface? All I want is a cron job with a one-minute frequency. Please help/suggest something. Thank you.
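
    For what it's worth, on App Engine's Python runtime the schedule lives in a small cron.yaml next to app.yaml; a minimal example of a one-minute schedule (the /tasks/tick URL is a placeholder for whatever handler does the work):

        cron:
        - description: run my task every minute
          url: /tasks/tick
          schedule: every 1 minutes

    App Engine then issues an HTTP GET to /tasks/tick once a minute, and the handler behind that URL does the actual work.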

    Read the article

  • Saving App Engine mail from spam filters

    - by Fh
    One of my clients uses Trend Micro InterScan Messaging Security to protect their internal mail services. Suddenly InterScan decided to filter out all messages coming from Google App Engine. Unfortunately they haven't been able to whitelist the sender address, as each e-mail gets a different one, for example *3ckihSOVMMHlZHSL.JSMMHlZHSL.JS*@apphosting.bounces.google.com, with everything before the @ being variable.

    Update: I'm including a screenshot of how InterScan sees the incoming e-mail; notice that all the senders are different. If I look at the e-mail headers, the apphosting domain appears in the Return-Path field:

        Return-Path: <...@apphosting.bounces.google.com>

    The "From" field looks fine: it says what I set it to say, but the spam filter only looks at the Return-Path. My client's sysadmin doesn't want to whitelist the whole apphosting domain, since that would whitelist more than just my application. How can I get past this e-mail filter if I can't get a unique sender? Thanks.

    Read the article

  • How to make an NSURL that contains a | (pipe character)?

    - by aks
    Hi all, I am trying to access Google Maps' forward geocoding service from my iPhone app. When I try to make an NSURL from a string with a pipe in it, I just get a nil pointer:

        NSURL *searchURL = [NSURL URLWithString:@"http://maps.google.com/maps/api/geocode/json?address=6th+and+pine&bounds=37.331689,-122.030731|37.331689,-122.030731&sensor=false"];

    I don't see any other way in the Google API to send bounds coordinates without a pipe. Any ideas about how I can do this?
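
    The underlying issue is that | is not a legal URL character and has to be percent-encoded as %7C before the URL is built; a quick Python illustration of the encoding (the fix on the Objective-C side is the same substitution on the string before calling URLWithString:):

        import urllib

        bounds = "37.331689,-122.030731|37.331689,-122.030731"
        print urllib.quote(bounds, safe=",")
        # -> 37.331689,-122.030731%7C37.331689,-122.030731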

    Read the article

  • Is there a performance gain from defining routes in app.yaml versus one large mapping in a WSGIApplication?

    - by jgeewax
    Scenario 1: this involves using one "gateway" route in app.yaml and then choosing the RequestHandler in the WSGIApplication.

    app.yaml:

        - url: /.*
          script: main.py

    main.py:

        import wsgiref.handlers
        from google.appengine.ext import webapp

        class Page1(webapp.RequestHandler):
            def get(self):
                self.response.out.write("Page 1")

        class Page2(webapp.RequestHandler):
            def get(self):
                self.response.out.write("Page 2")

        application = webapp.WSGIApplication([
            ('/page1/', Page1),
            ('/page2/', Page2),
        ], debug=True)

        def main():
            wsgiref.handlers.CGIHandler().run(application)

        if __name__ == '__main__':
            main()

    Scenario 2: this involves defining two routes in app.yaml and then two separate scripts, one for each (page1.py and page2.py).

    app.yaml:

        - url: /page1/
          script: page1.py
        - url: /page2/
          script: page2.py

    page1.py:

        import wsgiref.handlers
        from google.appengine.ext import webapp

        class Page1(webapp.RequestHandler):
            def get(self):
                self.response.out.write("Page 1")

        application = webapp.WSGIApplication([
            ('/page1/', Page1),
        ], debug=True)

        def main():
            wsgiref.handlers.CGIHandler().run(application)

        if __name__ == '__main__':
            main()

    page2.py:

        import wsgiref.handlers
        from google.appengine.ext import webapp

        class Page2(webapp.RequestHandler):
            def get(self):
                self.response.out.write("Page 2")

        application = webapp.WSGIApplication([
            ('/page2/', Page2),
        ], debug=True)

        def main():
            wsgiref.handlers.CGIHandler().run(application)

        if __name__ == '__main__':
            main()

    Question: what are the benefits and drawbacks of each pattern? Is one much faster than the other?

    Read the article

  • GAE datastore - count records between one minute ago and two minutes ago?

    - by Arthur Wulf White
    I am using the GAE datastore with Python and I want to count and display the number of records between two recent dates: for example, how many records in the datastore have a timestamp between two minutes ago and three minutes ago. Thank you.

        #!/usr/bin/env python
        import wsgiref.handlers

        from google.appengine.ext import db
        from google.appengine.ext import webapp
        from google.appengine.ext.webapp import template
        from datetime import datetime

        class Voice(db.Model):
            when = db.DateTimeProperty(auto_now_add=True)

        class MyHandler(webapp.RequestHandler):
            def get(self):
                voices = db.GqlQuery(
                    'SELECT * FROM Voice '
                    'ORDER BY when DESC')
                values = {'voices': voices}
                self.response.out.write(template.render('main.html', values))

            def post(self):
                voice = Voice()
                voice.put()
                self.redirect('/')
                self.response.out.write('posted!')

        def main():
            app = webapp.WSGIApplication([(r'.*', MyHandler)], debug=True)
            wsgiref.handlers.CGIHandler().run(app)

        if __name__ == "__main__":
            main()
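
    One way to get such a count, reusing the Voice model above (a sketch only; on the old db API, count() tops out at 1000 by default):

        from datetime import datetime, timedelta

        now = datetime.now()
        newer = now - timedelta(minutes=2)    # two minutes ago
        older = now - timedelta(minutes=3)    # three minutes ago

        # Both inequality filters are on the same property ('when'), which the datastore allows.
        recent_count = (Voice.all()
                        .filter('when >', older)
                        .filter('when <=', newer)
                        .count())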

    Read the article

  • Send 404 when requesting index.php through .htaccess?

    - by Daniel
    I've recently refactored an existing CodeIgniter application to use URL segments instead of query strings, and I'm using a RewriteRule in .htaccess to rewrite everything to index.php:

        RewriteRule ^(.*)$ /index.php/$1 [L]

    My problem is that a lot of this website's pages are indexed by Google with a link to index.php. Since I made the change to URL segments, I don't care about those Google results anymore and I want to send a 404 (no need for a 301 Moved Permanently; there have been enough changes that Google will just have to recrawl everything). To get to the point: how do I send a 404 for requests to /index.php?whatever? I was thinking of rewriting to a non-existent file so that Apache sends a 404. Would that be an acceptable solution, and what would the RewriteRule look like?

    Edit: currently, the existing Google results just produce the following error:

        An Error Was Encountered
        The URI you submitted has disallowed characters.
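
    One possibility, sketched here and untested: newer Apache versions accept a non-3xx status on mod_rewrite's R flag, so index.php requests that still carry a query string can be answered with a 404 directly, placed above the existing rule:

        # Hypothetical addition, above the existing CodeIgniter rewrite rule;
        # assumes an Apache build whose R flag honours status codes outside the 3xx range.
        RewriteCond %{QUERY_STRING} .
        RewriteRule ^index\.php$ - [R=404,L]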

    Read the article

  • Google Maps address component types: get the country name

    - by gmapsuser
    Hi, I am trying to get the country name using the address component types available in the Google Maps API v3 geocoder (http://code.google.com/apis/maps/documentation/javascript/services.html#GeocodingAddressTypes), but I don't know the right way to do it. I am trying to alert the country name like this:

        alert(results[1].address_component[country]);

    Here is the code; any help is really appreciated, thanks.

        function codeLatLng() {
          var input = document.getElementById("latlng").value;
          var latlngStr = input.split(",", 2);
          var lat = parseFloat(latlngStr[0]);
          var lng = parseFloat(latlngStr[1]);
          var latlng = new google.maps.LatLng(lat, lng);
          if (geocoder) {
            geocoder.geocode({'latLng': latlng}, function(results, status) {
              if (status == google.maps.GeocoderStatus.OK) {
                if (results[1]) {
                  alert(results[1].address_component[country]);
                } else {
                  alert("No results found");
                }
              } else {
                alert("Geocoder failed due to: " + status);
              }
            });
          }
        }

    Read the article

  • What's the JavaScript "var _gaq = _gaq || [];" for?

    - by parvas
    The async tracking snippet in Google Analytics looks like this:

        var _gaq = _gaq || [];
        _gaq.push(['_setAccount', 'UA-XXXXX-X']);
        _gaq.push(['_trackPageview']);

        (function() {
          var ga = document.createElement('script');
          ga.type = 'text/javascript';
          ga.async = true;
          ga.src = ('https:' == document.location.protocol ? 'https://ssl' : 'http://www') + '.google-analytics.com/ga.js';
          var s = document.getElementsByTagName('script')[0];
          s.parentNode.insertBefore(ga, s);
        })();

    About the first line, var _gaq = _gaq || []; I think it ensures that if _gaq is already defined we use it, and otherwise we use an empty array. Can anybody explain what this is for? Also, does it matter if _gaq gets renamed? In other words, does Google Analytics rely on a global object named _gaq? Thanks, Parvas

    Read the article

  • How to disable 3rd party cookies in Chrome?

    - by David Nordvall
    I have both the "stop websites from storing local data" and the "block all third party cookies without exception" settings enabled in Chrome 12 (I'm not sure what the exact names of these settings are in english as I run Chrome with swedish localization). I do however have two problems. My first problem is that when I'm visiting one of my local news paper's site (and surely other), cookies from www.facebook.com is allowed for some reason. I suspect that the reason is that I have added an exception to the www.facebook.com domain but as the setting "block all third party cookies without exception" implies, that shouldn't matter. My second problem is that if I check what cookies are stored on my computer after browsing for a while, I have tons of cookies that are not on my white list. Primarily from ad services. My expectations from enabling the above mentioned settings was that only cookies that fulfill the two folling requirements would be accepted: the cookies must be from the domain in my address bar the cookies must be from a domain on my whitelist Apparently this isn't the case. The question is, have I completely misunderstood the settings or is this a bug? And, either way, is there a way to accomplish my desired behavior?

    Read the article

  • Flash plugin locks up Firefox, Chrome and Safari behind a corporate proxy, IE6 works fine

    - by Shevek
    At work I am forced by corporate policy to use IE6. Obviously this is not so good, so I use Firefox for most of my browsing. However, there is a problem once I have installed the Flash plug-in: Firefox locks up when trying to load Flash media. Looking at the status bar at the time of the lock-up, it appears this happens when the browser tries to fetch cross-domain data. The Flash ActiveX plug-in in IE does not suffer from this issue. I have tried a brand-new Firefox profile with Flash as the only plug-in, and it still locks up. We have two different proxy servers and both exhibit the same problem. I have also tried Chrome and Safari, and both lock up with the plug-in installed. So, has anyone else had this problem and solved it? Or is there any way to disable cross-domain data access in the Flash plug-in? Or is there any way to disable the "This site needs an additional plug-in" ribbon that appears when the plug-in is not installed? Many thanks!

    Read the article

  • Unable to authenticate to Windows Server 2003 for file browsing as non-administrator user.

    - by Fopedush
    I've got a Windows Server 2003 box containing a RAID 5 array I use for mass storage. I want to set up a special non-administrator account that can be used to browse files over the network with read-only access. Ideally I'll map my network drive as this user to avoid accidentally hosing my data, and mount it as an administrator on the occasions where I actually need write access. I've created a non-administrator user on the Windows Server box (called "ReadOnly") and granted it read permissions on the folders I need. However, when I try to browse to the files and authenticate as this user, I'm told "Permission denied". If I put the ReadOnly user into the Administrators group, however, I can authenticate and browse just fine. I am, of course, only attempting to browse to folders for which I have given this user read permissions. Obviously my ReadOnly user is missing some privilege here, but I can't figure out what it is. I've been digging around in the Group Policy editor all day to no avail. What am I missing? Fake edit: I'm doing my browsing from a Windows 7 box, but I don't think that is relevant.

    Read the article

  • Browser history, by window

    - by Alex
    I use Chrome, but I can switch to another browser if the feature I am about to describe is available elsewhere. Browsing history in Chrome is, as far as I know, sorted exclusively chronologically. Very often, however, I will be working on a particular task on my laptop and have many tabs open in a single Chrome window for that task. Before finishing that task, I may need to work on something else, so I will open another window, minimize the first one, and start researching an entirely different issue. Over the course of a day, I may end up with 10-15 windows with many tabs each. This raises two issues: (a) memory usage and (b) quickly switching between the two or three most relevant windows. I solve these two problems like any regular guy probably would: by closing windows. I want to be able to reopen specific windows that I have closed, such that the tabs that were open in that window at the time it was closed reopen too. Ideally, closed windows would be sorted by the time they were closed and identified by the tabs that were open (even better, I would be able to name these windows, either at the time or in the history menu). Now that I describe this, what I am asking is: does any browser offer the ability to "save and close" windows? (This is distinct from an option to auto-restore tabs upon reopening the browser.) Thank you.

    Read the article

  • Can't connect to research.microsoft.com on home Qwest DSL connection

    - by rakingleaves
    I have a puzzling issue regarding accessing research.microsoft.com from my home Qwest DSL connection. By default, I frequently get timeouts when accessing research.microsoft.com from Firefox, Safari, or Chrome on my Mac. I also cannot access the site from Internet Explorer in a Windows VM. However, I am able to access the site through proxify.com, so I know the site is not down. Furthermore, I haven't noticed problems accessing other sites (in particular, www.microsoft.com works fine). Also, I can access research.microsoft.com when I'm connected to networks other than my home Qwest DSL connection. Together, the above make me suspect a problem with either my router (Airport Express) or, more likely, my ISP. Anyone have any thoughts on how I can narrow down the problem further? I could call my ISP and tell them the above, but my feeling is that probably won't get me very far. I can get by browsing research.microsoft.com through a proxy, but it would be nice to figure out what's going on here and fix the problem. Oh, the only relevant discussion I found via Google was here: http://forums.whirlpool.net.au/forum-replies-archive.cfm/1311734.html Update: Thanks to those who have tried to help! I found one other thing while Googling that may be vaguely relevant: http://thedaneshproject.com/posts/supportmicrosoftcom-not-working-behind-squid/ Disabling the Accept-Encoding headers in Firefox actually didn't make a difference for me. I just thought the above might spark some other ideas about how mishandling of HTTP headers somewhere might be causing this problem. Thanks again! Another update: In case anyone is still thinking about this; I've found that I can't surf research.microsoft.com using the links text-based browser, but I can reliably download individual files with wget. Maybe that helps?

    Read the article

  • Connect Chrome to TOR

    - by Jack M
    I'm having difficulty connecting Chrome to Tor. I started trying yesterday: I started Vidalia and the Tor Browser and then followed the advice at http://lifehacker.com/5614732/create-a-tor-button-in-chrome-for-on+demand-anonymous-browsing, downloading Proxy Switchy and setting it up as stated. This resulted in Error 130 (net::ERR_PROXY_CONNECTION_FAILED) in Chrome when I tried to load a webpage. So I looked into Vidalia's settings and noticed that it appeared to be using port 9051, so I set that instead of 8118 as everyone on the internet seems to suggest. Then I got a new error: Error 111 (net::ERR_TUNNEL_CONNECTION_FAILED). Digging a bit, I found that Tor should be set as a SOCKS proxy, not an HTTP proxy, so I unticked "use same settings for all protocols" in Proxy Switchy and set localhost:9051 just for SOCKS. That got me Error 7 (net::ERR_TIMED_OUT), and that's when I came here for help. I typed up the above question, but at the last minute decided to do a bit more reading and found someone here suggesting command-line arguments via a Windows shortcut:

        "C:\snip\chrome.exe" --proxy-server=";socks=127.0.0.1:9051;sock4=127.0.0.1:9051;sock5=127.0.0.1:9051" --incognito check.torproject.org

    And that worked perfectly. Yesterday. Today it doesn't, so I'm having to post this question after all. check.torproject.org gives me a "no" with Chrome, but a "yes" with the default Tor Browser. I tried closing Chrome and restarting it (yes, with the correct shortcut) after Vidalia started, but still nothing. The port number hasn't changed or anything. What gives? EDIT: I realized I had a "non-Tor" instance of Chrome running, and that was possibly causing the command-line args to be ignored when I started the new instance. I closed all instances of Chrome and ran my Chrome Tor shortcut, and it did get rid of the "not using Tor" message, but only because I got another timeout error instead. Vidalia's bandwidth graph didn't even blink.

    Read the article

  • How does Google Web Starter Kit serve adaptive images for mobile?

    - by 5argon
    My website weirdly (in a good way) serves smaller images when viewed on mobile, and I wanted to know what causes this. As far as I know this is not the default behaviour, so I think it must be Google Web Starter Kit's doing. When debugging on the device, all images show up as 231 B, no matter how large they actually are (when debugging on the desktop, the sizes vary). I recently tried Google Web Starter Kit (https://github.com/google/web-starter-kit). Its tooling is built on Ruby, Node.js, Sass and Gulp to help you 'build' a website. Before building you get automatic reload, because the Gulp script watches all files for you; at build time it runs various tools to minify HTML and CSS and compress images. According to this page, https://developers.google.com/web/fundamentals/tools/build/build_site, gulp-imagemin is used. So I guess imagemin is doing the mobile optimization for me? What kind of compression can serve automatically resized images on mobile? And why is the size 231 B? Is it related to my screen size?

    Read the article

  • The Power of Specialization – Google ads for WebLogic Specialized Partners

    - by JuergenKress
    For WebLogic Specialized partners we offer free Google advertising to promote your Oracle WebLogic service offerings on your website or at your WebLogic events. We will host the complete campaign management. To create your Google campaign, please send Jürgen Kress the following:

      - Your campaign text: 3 lines of text, each at most 35 characters (not more!)
      - Your campaign link: a direct link to the website you want to promote

    Ideas for the website we will promote with the Google ads:

      - Your Oracle WebLogic service offerings, with a concrete offering, e.g. a WebLogic Innovation Workshop
      - Your Oracle WebLogic Specialized logo
      - Your Oracle WebLogic references
      - Your WebLogic implementation consultants, with pictures
      - Your WebLogic sales contact persons

    Example of a WebLogic Specialization text ad:

        Oracle SOA Specialized
        plan to become more agile?
        eProseed the Oracle SOA Experts

    An interview with Griffiths Waite's Business Development Director highlighting the benefits of the Oracle Specialization Programme. WebLogic Partner Community: for regular information, become a member of the WebLogic Partner Community by visiting http://www.oracle.com/partners/goto/wls-emea (OPN account required). If you need support with your account, please contact the Oracle Partner Business Center.

    Read the article

  • OAuth in Rails - Google, Twitter, Facebook connect for login, like Stack Overflow's login

    - by Sam
    Rails has the RESTful authentication plugin, which works well, but is there a solution for incorporating Twitter, Facebook, Google, Yahoo, etc.? It seems like each one has its own plugin and its own demands, and mixing them is going to be a mess. This is for logging in users the way Stack Overflow does it, not for using the more advanced features of those APIs. What I want to do is what Stack Overflow did for login, but in Rails.

    Read the article

  • Google Reader-like architecture

    - by islam
    I want to design the architecture for an application like Google Reader that saves users' feeds (RSS, Atom) in a database. What is the best architecture for something like this? I need to know a good database design, whether I have to save anything to files (XML or something), whether I need to archive, etc. I hope to find some hints.
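
    To make the question concrete, here is one possible, deliberately minimal data model, sketched with the App Engine db API that appears elsewhere on this page (the model and property names are hypothetical; a relational schema would mirror the same four tables):

        from google.appengine.ext import db

        class Feed(db.Model):
            url = db.LinkProperty(required=True)          # canonical feed URL (RSS or Atom)
            title = db.StringProperty()
            last_fetched = db.DateTimeProperty()

        class Entry(db.Model):
            feed = db.ReferenceProperty(Feed)             # which feed this item came from
            guid = db.StringProperty(required=True)       # for de-duplication across fetches
            title = db.StringProperty()
            content = db.TextProperty()
            published = db.DateTimeProperty()

        class Subscription(db.Model):
            user = db.UserProperty(required=True)         # who subscribed
            feed = db.ReferenceProperty(Feed)

        class ReadState(db.Model):
            user = db.UserProperty(required=True)
            entry = db.ReferenceProperty(Entry)
            read_at = db.DateTimeProperty(auto_now_add=True)

    The point of the split is that each entry is stored once per feed, while per-user state lives in Subscription and ReadState, so the fetching side stays independent of the number of subscribers.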

    Read the article

  • Thrift / Google Protocol Buffers on Windows

    - by S73417H
    Hi all, I'm looking at Thrift and Google Protocol Buffers to implement some quick RPC code. Thrift would be perfect if the generated C++ code compiled on Windows (which is what I need). And of course, protobuf generates RPC stubs but no implementation. Is there a way to make Thrift Windows-friendly? Or, even better, are there any freely available RPC implementations for the generated C++ protobuf stubs? (A Java counterpart would be nice too, but is not necessary.) Thanks.

    Read the article

  • How do I get the username using DotNetOpenAuth with Google

    - by Vinicius
    I have an ASP.NET MVC project that uses DotNetOpenAuth as the authentication provider. How do I get the username (or email address) when the user logs in using https://www.google.com/accounts/o8/id?

        switch (response.Status)
        {
            case AuthenticationStatus.Authenticated:
                string userOpenId = response.FriendlyIdentifierForDisplay;
                break;
            (...)
        }

    Read the article

  • Google API to check number of indexed pages?

    - by Probocop
    Is there a Google API, similar to Yahoo's and Bing's APIs, to check the number of indexed pages on a specified domain? For example, for Yahoo, if I request the following URL:

        http://search.yahooapis.com/SiteExplorerService/V1/pageData?appid=MTSlade&query=http://www.dave-sellers.co.uk&domain_only=1&results=1

    it returns some XML giving the number of pages indexed as 'totalResultsAvailable'. Any ideas? Thanks.
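
    One rough approximation (an assumption-heavy sketch: it needs a Custom Search JSON API key and a CSE configured to search the whole web, and the figure returned is only Google's estimate for a site: query) looks like this:

        import json
        import urllib
        import urllib2

        # Assumptions: a Custom Search JSON API key and a CSE id (cx) that searches
        # the whole web; totalResults is only Google's estimate for the query.
        API_KEY = 'YOUR_API_KEY'   # placeholder
        CX = 'YOUR_CSE_ID'         # placeholder

        params = urllib.urlencode({'key': API_KEY, 'cx': CX, 'q': 'site:dave-sellers.co.uk'})
        response = urllib2.urlopen('https://www.googleapis.com/customsearch/v1?' + params)
        data = json.load(response)
        print data['searchInformation']['totalResults']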

    Read the article
