Search Results

Search found 7249 results on 290 pages for 'https everywhere'.


  • Getting the autodiscover URL from an Exchange email address

    - by Anthony
    I'm starting with an address for an Exchange 2007 server: [email protected]

    I attempted to send an autodiscover request, as documented at MSDN, using the generic autodiscover address documented in the TechNet white paper. So, using cURL in PHP, I sent the following request:

        <Autodiscover xmlns="http://schemas.microsoft.com/exchange/autodiscover/outlook/requestschema/2006">
          <Request>
            <EMailAddress>[email protected]</EMailAddress>
            <AcceptableResponseSchema>
              http://schemas.microsoft.com/exchange/autodiscover/outlook/responseschema/2006a
            </AcceptableResponseSchema>
          </Request>
        </Autodiscover>

    to the following URL:

        https://domain.exchangeserver.org/autodiscover/autodiscover.xml

    But I got no response, just an eventual timeout. I also tried:

        https://autodiscover.domain.exchangeserver.org/autodiscover/autodiscover.xml

    with the same result. Now, since my larger goal is to use Autodiscover with Exchange Web Services, and since all of the EWS URLs start with the same sub-domain as the Outlook Web Access address, I thought I'd give that a try:

        OWA: https://wmail.domain.exchangeserver.org

    So I tried:

        https://wmail.domain.exchangeserver.org/autodiscover/autodiscover.xml

    And sure enough, I got back the expected response. However, I only knew the OWA sub-domain because it's the server I have access to and am using to test everything. I would not know it for sure, or be able to guess it, if this were a live app and users were entering their own Exchange email addresses. I know that whatever generic autodiscover settings are needed must be turned on, because I can enter [email protected] into Apple Mail on Snow Leopard and it finds everything without trouble.

    So the question is: should https://domain.exchangeserver.org/autodiscover/autodiscover.xml have worked, and I just missed a step when trying to connect to it? Or is there some trick (maybe involving pinging the email address?) that Apple Mail and other clients use to resolve the address to the OWA subdomain before sending the autodiscover request? Thanks to anyone who knows or can take a wild guess.
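    The fallback that usually explains the Apple Mail behaviour is a DNS SRV lookup: when the two well-known URLs fail, clients can query _autodiscover._tcp.<domain> for a record naming the real Autodiscover host (which could well be the wmail sub-domain above). A minimal PHP sketch of that candidate-building logic - the function name and flow are illustrative assumptions, not the asker's code:

        <?php
        // Hypothetical sketch: build the list of Autodiscover URLs the way
        // most clients do, adding a DNS SRV fallback for the case where the
        // two well-known URLs time out.
        function autodiscover_candidates($domain) {
            $candidates = array(
                "https://$domain/autodiscover/autodiscover.xml",
                "https://autodiscover.$domain/autodiscover/autodiscover.xml",
            );
            // SRV fallback: _autodiscover._tcp.<domain> points at the real host
            $srv = dns_get_record("_autodiscover._tcp.$domain", DNS_SRV);
            if (!empty($srv)) {
                $candidates[] = "https://{$srv[0]['target']}/autodiscover/autodiscover.xml";
            }
            return $candidates;
        }

        print_r(autodiscover_candidates('domain.exchangeserver.org'));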


  • Why is the OAuth request_token missing from Google's response when using openid4java?

    - by user454322
    I have succeeded in using OpenID and OAuth separately, but I can't make them work together. Am I doing something incorrect?

        String userSuppliedString = "https://www.google.com/accounts/o8/id";
        ConsumerManager manager = new ConsumerManager();
        String returnToUrl = "http://example.com:8080/isr-calendar-test-1.0-SNAPSHOT/GAuthorize";
        List<DiscoveryInformation> discoveries = manager.discover(userSuppliedString);
        DiscoveryInformation discovered = manager.associate(discoveries);
        AuthRequest authReq = manager.authenticate(discovered, returnToUrl);
        session.put("openID-discoveries", discovered);

        FetchRequest fetch = FetchRequest.createFetchRequest();
        fetch.addAttribute("email", "http://schema.openid.net/contact/email", true);
        fetch.addAttribute("oauth", "http://specs.openid.net/extensions/oauth/1.0", true);
        fetch.addAttribute("consumer", "example.com", true);
        fetch.addAttribute("scope", "http://www.google.com/calendar/feeds/", true);
        authReq.addExtension(fetch);

        destinationUrl = authReq.getDestinationUrl(true);

    destinationUrl is then:

        https://www.google.com/accounts/o8/ud?openid.ns=http%3A%2F%2Fspecs.openid.net%2Fauth%2F2.0&openid.claimed_id=http%3A%2F%2Fspecs.openid.net%2Fauth%2F2.0%2Fidentifier_select&openid.identity=http%3A%2F%2Fspecs.openid.net%2Fauth%2F2.0%2Fidentifier_select&openid.return_to=http%3A%2F%2Fexample.com%3A8080%2FgoogleTest%2Fauthorize&openid.realm=http%3A%2F%2Fexample.com%3A8080%2FgoogleTest%2Fauthorize&openid.assoc_handle=AMlYA9WVkS_oVNWtczp3zr3sS8lxR4DlnDS0fe-zMIhmepQsByLqvGnc8qeJwypiRQAuQvdw&openid.mode=checkid_setup&openid.ns.ext1=http%3A%2F%2Fopenid.net%2Fsrv%2Fax%2F1.0&openid.ext1.mode=fetch_request&openid.ext1.type.email=http%3A%2F%2Fschema.openid.net%2Fcontact%2Femail&openid.ext1.type.oauth=http%3A%2F%2Fspecs.openid.net%2Fextensions%2Foauth%2F1.0&openid.ext1.type.consumer=example.com&openid.ext1.type.scope=http%3A%2F%2Fwww.google.com%2Fcalendar%2Ffeeds%2F&openid.ext1.required=email%2Coauth%2Cconsumer%2Cscope

    but in the response from Google the request_token is missing:

        http://example.com:8080/googleTest/authorize?openid.ns=http%3A%2F%2Fspecs.openid.net%2Fauth%2F2.0&openid.mode=id_res&openid.op_endpoint=https%3A%2F%2Fwww.google.com%2Faccounts%2Fo8%2Fud&openid.response_nonce=2011-11-29T17%3A38%3A39ZEU2iBVXr_zQG5Q&openid.return_to=http%3A%2F%2Fexample.com%3A8080%2FgoogleTest%2Fauthorize&openid.assoc_handle=AMlYA9WVkS_oVNWtczp3zr3sS8lxR4DlnDS0fe-zMIhmepQsByLqvGnc8qeJwypiRQAuQvdw&openid.signed=op_endpoint%2Cclaimed_id%2Cidentity%2Creturn_to%2Cresponse_nonce%2Cassoc_handle%2Cns.ext1%2Cext1.mode%2Cext1.type.email%2Cext1.value.email&openid.sig=5jUnS1jT16hIDCAjv%2BwAL1jopo6YHgfZ3nUUgFpeXlw%3D&openid.identity=https%3A%2F%2Fwww.google.com%2Faccounts%2Fo8%2Fid%3Fid%3DAItOawk8YPjBcnQrqXW8tzK3aFVop63E7q-JrCE&openid.claimed_id=https%3A%2F%2Fwww.google.com%2Faccounts%2Fo8%2Fid%3Fid%3DAItOawk8YPjBcnQrqXW8tzK3aFVop63E7q-JrCE&openid.ns.ext1=http%3A%2F%2Fopenid.net%2Fsrv%2Fax%2F1.0&openid.ext1.mode=fetch_response&openid.ext1.type.email=http%3A%2F%2Fschema.openid.net%2Fcontact%2Femail&openid.ext1.value.email=boxiencosi%40gmail.com

    Why?


  • Error accessing a Web Service with SSL

    - by Elie
    I have a program that is supposed to send a file to a web service, which requires an SSL connection. I run the program as follows:

        SET JAVA_HOME=C:\Program Files\Java\jre1.6.0_07
        SET com.ibm.SSL.ConfigURL=ssl.client.props
        "%JAVA_HOME%\bin\java" -cp ".;Test.jar" ca.mypackage.Main

    This works fine, but when I change the first line to

        SET JAVA_HOME=C:\Program Files\IBM\SDP\runtimes\base_v7\java\jre

    I get the following error:

        com.sun.xml.internal.ws.client.ClientTransportException: HTTP transport error: java.net.SocketException: java.lang.ClassNotFoundException: Cannot find the specified class com.ibm.websphere.ssl.protocol.SSLSocketFactory
            at com.sun.xml.internal.ws.transport.http.client.HttpClientTransport.getOutput(HttpClientTransport.java:119)
            at com.sun.xml.internal.ws.transport.http.client.HttpTransportPipe.process(HttpTransportPipe.java:140)
            at com.sun.xml.internal.ws.transport.http.client.HttpTransportPipe.processRequest(HttpTransportPipe.java:86)
            at com.sun.xml.internal.ws.api.pipe.Fiber.__doRun(Fiber.java:593)
            at com.sun.xml.internal.ws.api.pipe.Fiber._doRun(Fiber.java:552)
            at com.sun.xml.internal.ws.api.pipe.Fiber.doRun(Fiber.java:537)
            at com.sun.xml.internal.ws.api.pipe.Fiber.runSync(Fiber.java:434)
            at com.sun.xml.internal.ws.client.Stub.process(Stub.java:247)
            at com.sun.xml.internal.ws.client.sei.SEIStub.doProcess(SEIStub.java:132)
            at com.sun.xml.internal.ws.client.sei.SyncMethodHandler.invoke(SyncMethodHandler.java:242)
            at com.sun.xml.internal.ws.client.sei.SyncMethodHandler.invoke(SyncMethodHandler.java:222)
            at com.sun.xml.internal.ws.client.sei.SEIStub.invoke(SEIStub.java:115)
            at $Proxy26.fileSubmit(Unknown Source)
            at com.testing.TestingSoapProxy.fileSubmit(TestingSoapProxy.java:81)
            at ca.mypackage.Main.main(Main.java:63)
        Caused by: java.net.SocketException: java.lang.ClassNotFoundException: Cannot find the specified class com.ibm.websphere.ssl.protocol.SSLSocketFactory
            at javax.net.ssl.DefaultSSLSocketFactory.a(SSLSocketFactory.java:7)
            at javax.net.ssl.DefaultSSLSocketFactory.createSocket(SSLSocketFactory.java:1)
            at com.ibm.net.ssl.www2.protocol.https.c.afterConnect(c.java:110)
            at com.ibm.net.ssl.www2.protocol.https.d.connect(d.java:14)
            at sun.net.www.protocol.http.HttpURLConnection.getOutputStream(HttpURLConnection.java:902)
            at com.ibm.net.ssl.www2.protocol.https.b.getOutputStream(b.java:86)
            at com.sun.xml.internal.ws.transport.http.client.HttpClientTransport.getOutput(HttpClientTransport.java:107)
            ... 14 more
        Caused by: java.lang.ClassNotFoundException: Cannot find the specified class com.ibm.websphere.ssl.protocol.SSLSocketFactory
            at javax.net.ssl.SSLJsseUtil.b(SSLJsseUtil.java:20)
            at javax.net.ssl.SSLSocketFactory.getDefault(SSLSocketFactory.java:36)
            at javax.net.ssl.HttpsURLConnection.getDefaultSSLSocketFactory(HttpsURLConnection.java:16)
            at javax.net.ssl.HttpsURLConnection.<init>(HttpsURLConnection.java:36)
            at com.ibm.net.ssl.www2.protocol.https.b.<init>(b.java:1)
            at com.ibm.net.ssl.www2.protocol.https.Handler.openConnection(Handler.java:11)
            at java.net.URL.openConnection(URL.java:995)
            at com.sun.xml.internal.ws.api.EndpointAddress.openConnection(EndpointAddress.java:206)
            at com.sun.xml.internal.ws.transport.http.client.HttpClientTransport.createHttpConnection(HttpClientTransport.java:277)
            at com.sun.xml.internal.ws.transport.http.client.HttpClientTransport.getOutput(HttpClientTransport.java:103)
            ... 14 more

    So it seems that this problem is related to the JRE I'm using, but what doesn't make sense is that the non-IBM JRE works fine while the IBM JRE does not. Any ideas or suggestions?


  • Facebook Authentication Error when using apps.facebook.com as URL

    - by Adi Mathur
    I am trying to log in on my website using Facebook authentication and it works fine. However, when I access the application via https://apps.facebook.com/myApp, I get an error:

        The state does not match. You may be a victim of CSRF.

    Here is the code that I am using from Facebook; I think there is a problem with $my_url:

        <?php
        $app_id = "YOUR_APP_ID";
        $app_secret = "YOUR_APP_SECRET";
        $my_url = "https://www.example.com/login.php";

        session_start();
        $code = $_REQUEST["code"];

        if (empty($code)) {
            $_SESSION['state'] = md5(uniqid(rand(), TRUE)); // CSRF protection
            $dialog_url = "https://www.facebook.com/dialog/oauth?client_id=" . $app_id
                . "&redirect_uri=" . urlencode($my_url) . "&state=" . $_SESSION['state'];
            echo("<script> top.location.href='" . $dialog_url . "'</script>");
        }

        if ($_REQUEST['state'] == $_SESSION['state']) {
            $token_url = "https://graph.facebook.com/oauth/access_token?"
                . "client_id=" . $app_id . "&redirect_uri=" . urlencode($my_url)
                . "&client_secret=" . $app_secret . "&code=" . $code;
            $response = file_get_contents($token_url);
            $params = null;
            parse_str($response, $params);
            $graph_url = "https://graph.facebook.com/me?access_token=" . $params['access_token'];
            $user = json_decode(file_get_contents($graph_url));
            echo("Hello " . $user->name);
        } else {
            echo("The state does not match. You may be a victim of CSRF.");
        }
        ?>
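    If the state check only fails inside the apps.facebook.com iframe, one common culprit (rather than $my_url) is the session cookie being dropped as a third-party cookie, so $_SESSION['state'] is empty when Facebook redirects back. A hedged sketch of the usual workaround for IE, which rejects third-party cookies unless a P3P compact policy is sent - the exact policy string here is a conventional placeholder, not an official value:

        <?php
        // Assumption: the app is served inside the Facebook canvas iframe and
        // the browser treats its cookies as third-party. Sending a P3P header
        // before session_start() is a widely used way to get IE to keep them.
        header('P3P: CP="CAO PSA OUR"');
        session_start();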


  • Problem getting Java Streams in HP Tandem (Non-Stop)

    - by AndreaG
    Hi. We are porting a simple Java application between Tandem NonStop systems, from G-Series to H-Series. The Java version is 1.5.0_02. When performing basic I/O tasks like getting an output stream or opening a client socket, we receive exceptions like:

        java.io.IOException: Value out of range

    or

        java.net.SocketException: Value out of range

    ("value out of range" is Tandem native jargon for, well, quite everything I suppose). Has anybody had similar issues, i.e. I/O corruption while, for example, messing with JNI? I suppose there is something wrong with the system, but where might it be? Thank you.

    EDIT: adding snippets as requested.

    Sample snippet (a) - using Runtime.exec() (adapted):

        Properties envVars = new Properties();
        Process p = r.exec("/bin/env");
        envVars.load(p.getInputStream());

    Stack trace (a):

        java.io.IOException: Value out of range (errno:4034)
            at java.io.FileInputStream.readBytes(Native Method)
            at java.io.FileInputStream.read(FileInputStream.java:194)
            at java.lang.UNIXProcess$DeferredCloseInputStream.read(UNIXProcess.java:221)
            at java.io.BufferedInputStream.read1(BufferedInputStream.java:254)
            at java.io.BufferedInputStream.read(BufferedInputStream.java:313)
            at sun.nio.cs.StreamDecoder$CharsetSD.readBytes(StreamDecoder.java:411)
            at sun.nio.cs.StreamDecoder$CharsetSD.implRead(StreamDecoder.java:453)
            at sun.nio.cs.StreamDecoder.read(StreamDecoder.java:183)
            at java.io.InputStreamReader.read(InputStreamReader.java:167)
            at java.io.BufferedReader.fill(BufferedReader.java:136)
            at java.io.BufferedReader.readLine(BufferedReader.java:299)
            at java.io.BufferedReader.readLine(BufferedReader.java:362)
            at util.Environment.getVariables(Environment.java:39)

    The last line fails, and output gets redirected to the console (!).

    Sample snippet (b) - using HttpURLConnection:

        public WorkerThread(HttpURLConnection conn, String requestData, Logger logger) {
            this.conn = conn;
            ...
        }

        public void run() {
            OutputStream out = conn.getOutputStream();
        }

    Stack trace (b):

        java.net.SocketException: Value out of range (errno:4034)
            at java.net.PlainSocketImpl.socketConnect(Native Method)
            at java.net.PlainSocketImpl.doConnect(PlainSocketImpl.java:333)
            at java.net.PlainSocketImpl.connectToAddress(PlainSocketImpl.java:195)
            at java.net.PlainSocketImpl.connect(PlainSocketImpl.java:182)
            at java.net.Socket.connect(Socket.java:507)
            at sun.net.NetworkClient.doConnect(NetworkClient.java:155)
            at sun.net.www.http.HttpClient.openServer(HttpClient.java:365)
            at sun.net.www.http.HttpClient.openServer(HttpClient.java:477)
            at sun.net.www.protocol.https.HttpsClient.<init>(HttpsClient.java:280)
            at sun.net.www.protocol.https.HttpsClient.New(HttpsClient.java:337)
            at sun.net.www.protocol.https.AbstractDelegateHttpsURLConnection.getNewHttpClient(AbstractDelegateHttpsURLConnection.java:176)
            at sun.net.www.protocol.http.HttpURLConnection.plainConnect(HttpURLConnection.java:736)
            at sun.net.www.protocol.https.AbstractDelegateHttpsURLConnection.connect(AbstractDelegateHttpsURLConnection.java:162)
            at sun.net.www.protocol.http.HttpURLConnection.getOutputStream(HttpURLConnection.java:828)
            at sun.net.www.protocol.https.HttpsURLConnectionImpl.getOutputStream(HttpsURLConnectionImpl.java:230)

    Case (a) can be avoided because it was a workaround for other issues with a previous JRE version (!), but the same behaviour with sockets is really nasty.


  • How to detect links without an anchor element in plain text

    - by dhee
    If a user enters text in the text box and saves it, and later wants to add some more text, he can edit that text and save it again if required. First, if the user enters text containing some links, I detect them and convert any hyperlinks to open in a new tab. Second, if the user wants to add more text and links, he clicks on edit, adds them, and saves; at this point I must ignore the links that are already hyperlinked with an anchor element. Please help and advise.

    For example:

        what = "<span>In the task system, is there a way to automatically have any site / page URL or image URL be hyperlinked in a new window?</span><br><br><span>So If I type or copy http://www.stackoverflow.com/&nbsp; for example anywhere in the description, in any of the internal messages or messages to clients, it automatically is a hyperlink in a new window.</span><br><a href="http://www.stackoverflow.com/">http://www.stackoverflow.com/</a><br> <br><span>Or if I input an image URL anywhere in support description, internal messages or messages to cleints, it automatically is a hyperlink in a new window:</span><br> <span>https://static.doubleclick.net/viewad/4327673/1-728x90.jpg</span><br><br><a href="https://static.doubleclick.net/viewad/4327673/1-728x90.jpg">https://static.doubleclick.net/viewad/4327673/1-728x90.jpg</a><br><br><br><span>This would save us a lot time in task building, reviewing and creating messages.</span>";

    Test URLs:

        http://www.stackoverflow.com/
        http://stackoverflow.com/
        https://stackoverflow.com/
        www.stackoverflow.com
        //stackoverflow.com/
        <a href='http://stackoverflow.com/'>http://stackoverflow.com/</a>

    I've tried this code:

        function Linkify(what) {
            str = what;
            out = "";
            url = "";
            i = 0;
            do {
                url = str.match(/((https?:\/\/)?([a-z\-]+\.)*[\-\w]+(\.[a-z]{2,4})+(\/[\w\_\-\?\=\&\.]*)*(?![a-z]))/i);
                if (url != null) {
                    // get href value
                    href = url[0];
                    if (href.substr(0, 7) != "http://") href = "http://" + href;
                    // where the match occurred
                    where = str.indexOf(url[0]);
                    // add it to the output
                    out += str.substr(0, where);
                    // link it
                    out += '<a href="' + href + '" target="_blank">' + url[0] + '</a>';
                    // prepare str for next round
                    str = str.substr(where + url[0].length);
                } else {
                    out += str;
                    str = "";
                }
            } while (str.length > 0);
            return out;
        }

    Please help. Thanks.


  • Screen Scraping Twitter

    - by BRADINO
    I got an email today asking for help to scrape Twitter - in particular, to be able to log in. So I am going to show everyone, NOT to encourage anyone to violate Twitter's terms of use, but as an educational blog post about how PHP and cURL can be used to post variables and store cookies. Again, I am using the cScrape class I wrote, which you can download.

    Step 1

    First go to twitter.com and look at the source code of the login form to get the form field names and the form post location. You will see that the form posts to https://twitter.com/session and the username and password fields are session[username_or_email] and session[password] respectively.

    Step 2

    Now you are ready to log in. Using the fetch function in the Scrape class, you create an associative array to contain the form values you want to post. The other thing you will need to do is uncomment the lines for CURLOPT_COOKIEFILE and CURLOPT_COOKIEJAR. Cookies will be required to stay logged in and scrape around. The paths to the cookie files need to be writable by your app. You will also need to uncomment the line about CURLOPT_FOLLOWLOCATION.

        $data = array('session[username_or_email]' => "bradino", 'session[password]' => "secret");
        $scrape->fetch('https://twitter.com/sessions', $data);

    Step 1.5

    Oops, that didn't work. All I got back was:

        403 Forbidden: The server understood the request, but is refusing to fulfill it.

    Ahhh, I see another variable called authenticity_token - I bet Twitter was looking for that. So let's back up and first hit twitter.com to get the authenticity_token variable, and then make the login post request with that variable included in our array of parameters.

        $scrape->fetch('https://twitter.com');
        $data = array('session[username_or_email]' => "bradino", 'session[password]' => "secret");
        $data['authenticity_token'] = $scrape->fetchBetween('name="authenticity_token" type="hidden" value="', '"', $scrape->result);
        $scrape->fetch('https://twitter.com/sessions', $data);
        echo $scrape->result;

    So that's basically it. Now you are logged in and can scrape around and request other pages as you normally would. Sorry it wasn't a longer post. I really do enjoy this kind of stuff, so if anyone has a request, hit me up.

    Errors?

    1) Make sure that you are properly parsing the token variable.
    2) Make sure that you uncommented the lines about CURLOPT_COOKIEFILE and CURLOPT_COOKIEJAR; those options need to be enabled, and be sure the path set is writable by your application.
    3) Make sure that the path to the cookie file is writable and that data is getting written to it.
    4) If you get a message about being redirected, you need to uncomment the line about CURLOPT_FOLLOWLOCATION; that option needs to be enabled (true).
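    For readers without the cScrape class, here is a minimal plain-cURL sketch of the same flow. The form fields and token markup mirror the post above; the cookie-jar path is an assumption - it just needs to be writable by PHP:

        <?php
        $jar = '/tmp/twitter_cookies.txt';   // assumption: any writable path works
        $ch = curl_init();
        curl_setopt_array($ch, array(
            CURLOPT_RETURNTRANSFER => true,
            CURLOPT_FOLLOWLOCATION => true,  // follow the post-login redirect
            CURLOPT_COOKIEJAR      => $jar,  // write cookies here...
            CURLOPT_COOKIEFILE     => $jar,  // ...and send them back on later requests
        ));

        // Hit the login page first to pick up the authenticity_token
        curl_setopt($ch, CURLOPT_URL, 'https://twitter.com');
        $page = curl_exec($ch);
        preg_match('/name="authenticity_token" type="hidden" value="([^"]+)"/', $page, $m);

        // Then post the login form with the token included
        curl_setopt($ch, CURLOPT_URL, 'https://twitter.com/sessions');
        curl_setopt($ch, CURLOPT_POST, true);
        curl_setopt($ch, CURLOPT_POSTFIELDS, http_build_query(array(
            'session[username_or_email]' => 'bradino',
            'session[password]'          => 'secret',
            'authenticity_token'         => $m[1],
        )));
        $result = curl_exec($ch);
        curl_close($ch);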


  • CakePHP on IIS: How can I edit the URL Rewrite module rules for SSL redirects?

    - by AdrianB
    I've not dealt much with IIS rewrites, but I was able to import (and edit) the rewrites found throughout the Cake structure (.htaccess files). I'll explain my configuration a little, then get to the meat of the problem.

    My CakePHP framework is working well, made possible by the URL Rewrite module 2.0, which I have successfully installed and configured for the site. The way Cake is set up, the webroot folder (for Cake, not IIS) is set as the default folder for the site and exists inside the following hierarchy:

        inetpub
          wwwroot
            cakePhp root
              application
                models
                views
                controllers
                WEBROOT   // *** HERE ***
              cake core
            SomeOtherSite Folder

    For this implementation, the URL Rewrite module uses the following rules (from the web.config file):

        <rewrite>
          <rules>
            <rule name="Imported Rule 1" stopProcessing="true">
              <match url="^(.*)$" ignoreCase="false" />
              <conditions logicalGrouping="MatchAll">
                <add input="{REQUEST_FILENAME}" matchType="IsDirectory" negate="true" />
                <add input="{REQUEST_FILENAME}" matchType="IsFile" negate="true" />
              </conditions>
              <action type="Rewrite" url="index.php?url={R:1}" appendQueryString="true" />
            </rule>
            <rule name="Imported Rule 2" stopProcessing="true">
              <match url="^$" ignoreCase="false" />
              <action type="Rewrite" url="/" />
            </rule>
            <rule name="Imported Rule 3" stopProcessing="true">
              <match url="(.*)" ignoreCase="false" />
              <action type="Rewrite" url="/{R:1}" />
            </rule>
            <rule name="Imported Rule 4" stopProcessing="true">
              <match url="^(.*)$" ignoreCase="false" />
              <conditions logicalGrouping="MatchAll">
                <add input="{REQUEST_FILENAME}" matchType="IsDirectory" negate="true" />
                <add input="{REQUEST_FILENAME}" matchType="IsFile" negate="true" />
              </conditions>
              <action type="Rewrite" url="index.php?url={R:1}" appendQueryString="true" />
            </rule>
          </rules>
        </rewrite>

    I've installed my SSL certificate and created a site binding, so if I use the https:// protocol everything works fine within the site. I fear that the attempts I have made at creating a redirect rule are too far off base to produce understandable results. The new rules need to switch protocol without affecting the current set of rules, which pass URL components along to index.php (Cake's entry point).

    My goal is this: create a couple of rewrite rules that will [#1] redirect all user pages (in the general form http://domain.com/users/page/param/param/?querystring=value) to use SSL, and [#2] direct all other https requests back to http (is this even necessary?). For example, http://domain.com/users/login, http://domain.com/users/profile/uid:12345 and http://domain.com/users/payments?firsttime=true should all become https://domain.com/users/login, https://domain.com/users/profile/uid:12345 and https://domain.com/users/payments?firsttime=true. A sketch of rule #1 follows below.

    Any help would be greatly appreciated.
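    For goal #1, here is a sketch of what such a rule could look like, placed above the imported Cake rules so the redirect fires before the rewrite to index.php. It assumes the URL Rewrite module's {HTTPS} server variable (ON/OFF); treat it as an illustrative starting point rather than a tested rule for this exact site:

        <!-- Hypothetical rule: force HTTPS for anything under /users -->
        <rule name="Force SSL for users pages" stopProcessing="true">
          <match url="^users/(.*)$" ignoreCase="true" />
          <conditions>
            <add input="{HTTPS}" pattern="^OFF$" />
          </conditions>
          <action type="Redirect" url="https://{HTTP_HOST}/users/{R:1}" redirectType="Permanent" />
        </rule>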


  • Apache reverse proxy POST 403

    - by qkslvrwolf
    I am trying to get Jira and Stash to talk to each other via a Trusted Application link. The setup currently looks like this:

        Jira -http- Jira Proxy -https- Stash Proxy -http- Stash

    Jira and the Jira proxy are on the same machine. The Jira proxy is returning 403 Forbidden for POST requests from the Stash server. It works (or seems to) for everything else. I contend that since we're seeing 403 Forbiddens in Apache's access log, Jira never sees the request. Why is Apache forbidding POSTs, and how do I fix it? Note that the IPs for both Stash and the Stash proxy are in the "trusted host" section.

    My config:

        LogLevel info
        CustomLog "|/usr/sbin/rotatelogs /var/log/apache2/access.log 86400" common
        ServerSignature off
        ServerTokens prod
        Listen 8443

        <VirtualHost *:443>
            ServerName jira.company.com
            SSLEngine on
            SSLOptions +StrictRequire
            SSLCertificateFile /etc/ssl/certs/server.cer
            SSLCertificateKeyFile /etc/ssl/private/server.key
            SSLProtocol +SSLv3 +TLSv1
            SSLCipherSuite DHE-RSA-AES256-SHA:DHE-DSS-AES256-SHA:AES256-SHA:DHE-RSA-AES128-SHA:DHE-DSS-AES128-SHA:AES128-SHA:EDH-RSA-DES-CBC3-SHA:EDH-DSS-DES-CBC3-SHA:DES-CBC3-SHA

            # If context path is not "/wiki", then send to /jira.
            RedirectMatch 301 ^/$ https://jira.company.com/jira
            RedirectMatch 301 ^/gsd(.*)$ https://jira.company.com/jira$1

            ProxyRequests On
            ProxyPreserveHost On
            ProxyVia On
            ProxyPass /jira http://localhost:8080/jira
            ProxyPassReverse /jira http://localhost:8080/jira
            <Proxy *>
                Order deny,allow
                Allow from all
            </Proxy>

            RewriteEngine on
            RewriteLog "/var/log/apache2/rewrite.log"
            RewriteLogLevel 2

            # Disable TRACE/TRACK requests, per security.
            RewriteCond %{REQUEST_METHOD} ^(TRACE|TRACK)
            RewriteRule .* - [F]

            DocumentRoot /var/www
            DirectoryIndex index.html
            <Directory /var/www>
                Options FollowSymLinks
                AllowOverride None
                Order deny,allow
                Allow from all
            </Directory>

            <LocationMatch "/">
                Order deny,allow
                Deny from all
                allow from x.x.71.8
                allow from x.x.8.123
                allow from x.x.120.179
                allow from x.x.120.73
                allow from x.x.120.45
                satisfy any
                SetEnvif Remote_Addr "x.x.71.8" TRUSTED_HOST
                SetEnvif Remote_Addr "x.x.8.123" TRUSTED_HOST
                SetEnvif Remote_Addr "x.x.120.179" TRUSTED_HOST
                SetEnvif Remote_Addr "x.x.120.73" TRUSTED_HOST
                SetEnvif Remote_Addr "x.x.120.45" TRUSTED_HOST
            </LocationMatch>

            <LocationMatch ^>
                SSLRequireSSL
                AuthType CompanyNet
                PubcookieInactiveExpire -1
                PubcookieAppID jira.company.com
                require valid-user
                RequestHeader set userid %{REMOTE_USER}s
            </LocationMatch>
        </VirtualHost>

        # Port open for SSL, non-pubcookie access. Used to access APIs with Basic Auth.
        <VirtualHost *:8443>
            SSLEngine on
            SSLOptions +StrictRequire
            SSLCertificateFile /etc/ssl/certs/server.cer
            SSLCertificateKeyFile /etc/ssl/private/server.key
            SSLProtocol +SSLv3 +TLSv1
            SSLCipherSuite DHE-RSA-AES256-SHA:DHE-DSS-AES256-SHA:AES256-SHA:DHE-RSA-AES128-SHA:DHE-DSS-AES128-SHA:AES128-SHA:EDH-RSA-DES-CBC3-SHA:EDH-DSS-DES-CBC3-SHA:DES-CBC3-SHA

            ProxyRequests On
            ProxyPreserveHost On
            ProxyVia On
            ProxyPass /jira http://localhost:8080/jira
            ProxyPassReverse /jira http://localhost:8080/jira
            <Proxy *>
                Order deny,allow
                Allow from all
            </Proxy>

            RewriteEngine on
            RewriteLog "/var/log/apache2/rewrite.log"
            RewriteLogLevel 2

            # Disable TRACE/TRACK requests, per security.
            RewriteCond %{REQUEST_METHOD} ^(TRACE|TRACK)
            RewriteRule .* - [F]

            DocumentRoot /var/www
            DirectoryIndex index.html
            <Directory /var/www>
                Options FollowSymLinks
                AllowOverride None
                Order deny,allow
                Allow from all
            </Directory>
        </VirtualHost>

        <VirtualHost jira.company.com:80>
            ServerName jira.company.com
            RedirectMatch 301 /(.*)$ https://jira.company.com/$1
            RewriteEngine on
            RewriteCond %{REQUEST_METHOD} ^(TRACE|TRACK)
            RewriteRule .* - [F]
        </VirtualHost>

        <VirtualHost *:80>
            ServerName go.company.com
            RedirectMatch 301 /(.*)$ https://jira.company.com/$1
            RewriteEngine on
            RewriteCond %{REQUEST_METHOD} ^(TRACE|TRACK)
            RewriteRule .* - [F]
        </VirtualHost>


  • Scripts Causing Flash Intro Animation To Stop [migrated]

    - by ubique
    When my Flash website loads, it freezes halfway through the initial animation for 2-3 seconds and then continues. This obviously doesn't look great, and I can't figure out what is causing it. I think it is one of the scripts in index.html, and I have tried all sorts of ways to correct it - what have I done wrong?

        <!DOCTYPE html>
        <html lang="en">
        <head>
            <title>company name</title>
            .
            .
            .
            <link href="style.css" rel="stylesheet" type="text/css" />
            <script type="text/javascript" src="js/flashobject.js"></script>
            <!--[if lt IE 7]>
            <link href="ie6.css" rel="stylesheet" type="text/css" />
            <![endif]-->
        </head>
        <body>
            <header>
                <hgroup>
                    <h1>company</h1>
                    <h2>company</h2>
                </hgroup>
            </header>
            <div id="container">
                <div id="head">
                    <div class="aligncenter"><a href="http://www.adobe.com/go/EN_US-H-GET-FLASH">
                        <img src="http://www.adobe.com/images/shared/download_buttons/get_adobe_flash_player.png" alt="" /></a>
                    </div>
                </div>
            </div>
            <div class="g-plus" data-href="https://plus.google.com/100925740920754223119?rel=publisher" data-width="170" data-height="69" data-theme="light">
        </body>

        <!-- Flash -->
        <script type="text/javascript">
            var fo = new FlashObject("main_v10.swf", "head", "100%", "100%", "8", "");
            fo.addParam("quality", "high");
            fo.addParam("allowFullScreen", "true");
            fo.write("head");
        </script>

        <!-- Hello Bar -->
        <script type="text/javascript" src="//www.hellobar.com/hellobar.js"></script>
        <script type="text/javascript">
            new HelloBar(39040,52484);
        </script>

        <!-- GPlus -->
        <script type="text/javascript">
            window.___gcfg = {lang: 'en'};
            (function() {
                var po = document.createElement("script");
                po.type = "text/javascript";
                po.async = true;
                po.src = "https://apis.google.com/js/plusone.js";
                var s = document.getElementsByTagName("script")[0];
                s.parentNode.insertBefore(po, s);
            })();
        </script>

        <!-- Google -->
        <script type="text/javascript">
            var _gaq = _gaq || [];
            _gaq.push(['_setAccount', 'UA-xxxxxxxx-1']);
            _gaq.push(['_setSiteSpeedSampleRate', 10]);
            _gaq.push(['_trackPageview']);
            (function init() {
                var ga = document.createElement('script');
                ga.type = 'text/javascript';
                ga.async = true;
                ga.src = ('https:' == document.location.protocol ? 'https://ssl' : 'http://www') + '.google-analytics.com/ga.js';
                var s = document.getElementsByTagName('script')[0];
                s.parentNode.insertBefore(ga,s);
            })();
            window.onload = init;
        </script>
        </html>


  • NGINX - CORS error affecting only Firefox

    - by wiherek
    This is an issue with Nginx that affects only Firefox. I have this config (http://pastebin.com/q6Yeqxv9):

        upstream connect {
            server 127.0.0.1:8080;
        }

        server {
            server_name admin.example.com www.admin.example.com;
            listen 80;
            return 301 https://admin.example.com$request_uri;
        }

        server {
            listen 80;
            server_name ankieta.example.com www.ankieta.example.com;

            add_header Access-Control-Allow-Origin $http_origin;
            add_header 'Access-Control-Allow-Methods' 'GET, POST, OPTIONS, PUT, PATCH, DELETE';
            add_header 'Access-Control-Allow-Credentials' 'true';
            add_header 'Access-Control-Allow-Headers' 'Access-Control-Request-Method,Access-Control-Request-Headers,Cache,Pragma,Authorization,Accept,Accept-Encoding,Accept-Language,Host,Referer,Content-Length,Origin,DNT,X-Mx-ReqToken,Keep-Alive,User-Agent,X-Requested-With,If-Modified-Since,Cache-Control,Content-Type';

            return 301 https://ankieta.example.com$request_uri;
        }

        server {
            server_name admin.example.com;
            listen 443 ssl;
            ssl_certificate /srv/ssl/14182263.pem;
            ssl_certificate_key /srv/ssl/admin_i_ankieta.example.com.key;
            ssl_protocols SSLv3 TLSv1;
            ssl_ciphers ALL:!aNULL:!ADH:!eNULL:!LOW:!EXP:RC4+RSA:+HIGH:+MEDIUM;

            location / {
                proxy_pass http://connect;
            }
        }

        server {
            server_name ankieta.example.com;
            listen 443 ssl;
            ssl_certificate /srv/ssl/14182263.pem;
            ssl_certificate_key /srv/ssl/admin_i_ankieta.example.com.key;
            ssl_protocols SSLv3 TLSv1;
            ssl_ciphers ALL:!aNULL:!ADH:!eNULL:!LOW:!EXP:RC4+RSA:+HIGH:+MEDIUM;

            root /srv/limesurvey;
            index index.php;

            add_header 'Access-Control-Allow-Origin' $http_origin;
            add_header 'Access-Control-Allow-Methods' 'GET, POST, OPTIONS, PUT, PATCH, DELETE';
            add_header 'Access-Control-Allow-Credentials' 'true';
            add_header 'Access-Control-Allow-Headers' 'Access-Control-Request-Method,Access-Control-Request-Headers,Cache,Pragma,Authorization,Accept,Accept-Encoding,Accept-Language,Host,Referer,Content-Length,Origin,DNT,X-Mx-ReqToken,Keep-Alive,User-Agent,X-Requested-With,If-Modified-Since,Cache-Control,Content-Type';

            client_max_body_size 4M;

            location / {
                try_files $uri $uri/ /index.php?q=$uri&$args;
            }

            location ~ /*.php$ {
                fastcgi_split_path_info ^(.+\.php)(/.+)$;
                # NOTE: You should have "cgi.fix_pathinfo = 0;" in php.ini
                include fastcgi_params;
                fastcgi_param SCRIPT_FILENAME /srv/limesurvey$fastcgi_script_name;
                # fastcgi_param HTTPS $https;
                fastcgi_intercept_errors on;
                fastcgi_pass 127.0.0.1:9000;
            }

            location ~* \.(js|css|png|jpg|jpeg|gif|ico)$ {
                expires max;
                log_not_found off;
            }
        }

    This is basically an AngularJS app and a PHP app (LimeSurvey), served under two different domains by the same webserver (Nginx). AngularJS is in fact served by ConnectJS, which Nginx proxies to (ConnectJS listens only on localhost). In the Firefox console I get this:

        Cross-Origin Request Blocked: The Same Origin Policy disallows reading the remote resource at https://ankieta.example.com/admin/remotecontrol. This can be fixed by moving the resource to the same domain or enabling CORS.

    which of course is annoying. Other browsers (Chrome, IE) work fine. Any suggestions on this?


  • Access Control Service: Transitioning between Active and Passive Scenarios

    - by Your DisplayName here!
    As I mentioned in my last post, ACS features a number of ways to transition between protocol and token types. One not so widely known transition is between passive sign-ins (browser) and active service consumers. Let's see how this works.

    We all know the usual WS-Federation handshake via passive redirect. But ACS also allows driving the sign-in process yourself via specially crafted WS-Federation query strings. So you can use the following URL to sign in using LiveID via ACS; ACS will then redirect back to the registered reply URL in your application:

        GET /login.srf?
          wa=wsignin1.0&
          wtrealm=https%3a%2f%2faccesscontrol.windows.net%2f&
          wreply=https%3a%2f%2fleastprivilege.accesscontrol.windows.net%3a443%2fv2%2fwsfederation&
          wp=MBI_FED_SSL&
          wctx=pr%3dwsfederation%26rm%3dhttps%253a%252f%252froadie%252facs2rp%252frest%252f

    The wsfederation bit in the wctx parameter indicates that the response to the token request will be transmitted back to the relying party via a POST. So far so good - but how can an active client receive that token? ACS knows an alternative way to send the token request response. Instead of doing the redirect back to the RP, it emits a page that in turn echoes the token response using JavaScript's window.external.notify. The URL would look like this:

        GET /login.srf?
          wa=wsignin1.0&
          wtrealm=https%3a%2f%2faccesscontrol.windows.net%2f&
          wreply=https%3a%2f%2fleastprivilege.accesscontrol.windows.net%3a443%2fv2%2fwsfederation&
          wp=MBI_FED_SSL&
          wctx=pr%3djavascriptnotify%26rm%3dhttps%253a%252f%252froadie%252facs2rp%252frest%252f

    ACS would then render a page that contains the following script block:

        <script type="text/javascript">
            try {
                window.external.Notify('token_response');
            }
            catch(err) {
                alert("Error ACS50021: windows.external.Notify is not registered.");
            }
        </script>

    where token_response is a JSON-encoded string with the following format:

        {
          "appliesTo":"...",
          "context":null,
          "created":123,
          "expires":123,
          "securityToken":"...",
          "tokenType":"..."
        }

    OK - so how does this all come together? As an active client application (Silverlight, WPF, WP7, WinForms etc.), you would host a browser control and use the above URL to trigger the right series of redirects. All the browser controls support one way or another to register a callback whenever the window.external.notify function is called. This way you get the JSON string from ACS back into the hosting application - and voila, you have the security token. When you selected the SWT token format in ACS, you can use that token e.g. for REST services. When you selected SAML, you can use the token e.g. for SOAP services. In the next post I will show how to retrieve these URLs from ACS, along with a practical example using WPF.


  • Google Analytics recording event based on <a> title attribute

    - by rlsaj
    I am declaring:

        var title = (typeof(el.attr('title')) != 'undefined' ) ? el.attr('title') : "";

    and then have the following:

        else if (title.match(/^"Matching Content"\:/i)) {
            elEv.category = "Matching Content Click";
            elEv.action = "click-Matching-Content";
            elEv.label = href.replace(/^https?\:\/\//i, '');
            elEv.non_i = true;
            elEv.loc = href;
        }

    However, using the Google Analytics debugger, this is not being recorded. Any suggestions? The complete function is:

        if (typeof jQuery != 'undefined') {
            jQuery(document).ready(function gLinkTracking($) {
                var filetypes = /\.(avi|csv|dat|dmg|doc.*|exe|flv|gif|jpg|mov|mp3|mp4|msi|pdf|png|ppt.*|rar|swf|txt|wav|wma|wmv|xls.*|zip)$/i;
                var baseHref = '';
                if (jQuery('base').attr('href') != undefined) baseHref = jQuery('base').attr('href');
                jQuery('a').on('click', function (event) {
                    var el = jQuery(this);
                    var track = true;
                    var href = (typeof(el.attr('href')) != 'undefined' ) ? el.attr('href') : "";
                    var title = (typeof(el.attr('title')) != 'undefined' ) ? el.attr('title') : "";
                    var isThisDomain = href.match(document.domain.split('.').reverse()[1] + '.' + document.domain.split('.').reverse()[0]);
                    if (!href.match(/^javascript:/i)) {
                        var elEv = [];
                        elEv.value = 0, elEv.non_i = false;
                        if (href.match(/^mailto\:/i)) {
                            elEv.category = "Email link";
                            elEv.action = "click-email";
                            elEv.label = href.replace(/^mailto\:/i, '');
                            elEv.loc = href;
                        }
                        else if (title.match(/^"Matching Content"\:/i)) {
                            elEv.category = "Matching Content Click";
                            elEv.action = "click-Matching-Content";
                            elEv.label = href.replace(/^https?\:\/\//i, '');
                            elEv.non_i = true;
                            elEv.loc = href;
                        }
                        else if (href.match(filetypes)) {
                            var extension = (/[.]/.exec(href)) ? /[^.]+$/.exec(href) : undefined;
                            elEv.category = "File Downloaded";
                            elEv.action = "click-" + extension[0];
                            elEv.label = href.replace(/ /g, "-");
                            elEv.loc = baseHref + href;
                        }
                        else if (href.match(/^https?\:/i) && !isThisDomain) {
                            elEv.category = "External link";
                            elEv.action = "click-external";
                            elEv.label = href.replace(/^https?\:\/\//i, '');
                            elEv.non_i = true;
                            elEv.loc = href;
                        }
                        else if (href.match(/^tel\:/i)) {
                            elEv.category = "Telephone link";
                            elEv.action = "click-telephone";
                            elEv.label = href.replace(/^tel\:/i, '');
                            elEv.loc = href;
                        }
                        else track = false;
                        if (track) {
                            _gaq.push(['_trackEvent', elEv.category.toLowerCase(), elEv.action.toLowerCase(), elEv.label.toLowerCase(), elEv.value, elEv.non_i]);
                            if (el.attr('target') == undefined || el.attr('target').toLowerCase() != '_blank') {
                                setTimeout(function() { location.href = elEv.loc; }, 400);
                                return false;
                            }
                        }
                    }
                });
            });
        }


  • Using oauth2_access_token to get connections in LinkedIn

    - by Pedro
    I'm trying to get the connections in LinkedIn using their API, but when I try to retrieve the connections I get a 401 Unauthorized error. The official documentation says:

        You must use an access token to make an authenticated call on behalf of a user.

        Make the API calls: You can now use this access_token to make API calls on behalf of this user by appending "oauth2_access_token=access_token" at the end of the API call that you wish to make.

    The API call that I'm trying to make is the following:

        Error -- http://api.linkedin.com/v1/people/~/connections:(id,headline,first-name,last-name)?format=json&oauth2_access_token=access_token

    I have tried the following endpoint without any problems:

        OK -- https://api.linkedin.com/v1/people/~:(id,first-name,last-name,formatted-name,date-of-birth,industry,email-address,location,headline,picture-urls::(original))?format=json&oauth2_access_token=access_token

    The list of endpoints for the Connections API is described at http://developer.linkedin.com/documents/connections-api. I just copied and pasted one endpoint from there, so the question is: what's the problem with the endpoint for getting the connections? What am I missing?

    EDIT: for the preAuth URL I'm using:

        https://www.linkedin.com/uas/oauth2/authorization?response_type=code&client_id=ConsumerKey&scope=r_fullprofile%20r_emailaddress%20r_network&state&state=NewGuid&redirect_uri=Encoded_Url
        https://www.linkedin.com/uas/oauth2/accessToken?grant_type=authorization_code&code=QueryString_Code&redirect_uri=EncodedCallback&client_id=ConsummerKey&client_secret=ConsumerSecret

    (The original post attached a screenshot of the login screen requesting the permissions.)

    EDIT2: Switched to https and it worked like a charm!
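    Given EDIT2 above, a minimal PHP sketch of the working call - the https scheme plus the token appended, with the token value itself an obvious placeholder:

        <?php
        // Assumption: $token came back from the accessToken endpoint above.
        $token = 'ACCESS_TOKEN_HERE';
        $url = 'https://api.linkedin.com/v1/people/~/connections:(id,headline,first-name,last-name)'
             . '?format=json&oauth2_access_token=' . urlencode($token);

        $ch = curl_init($url);
        curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
        $connections = json_decode(curl_exec($ch), true);
        curl_close($ch);
        print_r($connections);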


  • WCF, Metadata and BIGIP - Can I force the correct URL for the WSDL items?

    - by Yossi Dahan
    We have a WCF service hosted on ServerA, which is a server with no direct Internet access and a non-Internet-routable IP address. The service is fronted by BIGIP, which handles SSL encryption and decryption and forwards the unencrypted request to ServerA on a specific port (at the moment it does NOT actually do any load balancing, but that is likely to be added in the future).

    What that means is that our clients call the service through https://www.OurDomain.com/ServiceUrl and get to our service on http://SeverA:85/ServiceUrl through the BIGIP device. When we browse to the WSDL published on https://www.OurDomain.com/ServiceUrl, all the addresses contained in the WSDL are based on the http://SeverA:85/ServiceUrl base address.

    We figured out that we could use the host headers setting to set the domain, but our problem is that while this would sort out the domain, we would still be using the wrong scheme - it would use http://www.OurDomain.com/ServiceUrl while we need it to be https. Also, as we have other services (asmx based) hosted on that server, we had some issues setting the host headers, and so we thought we could get away with creating another site on the server (using, say, port 82) and setting the host header on that. Now, on top of the http/https problem, we have an issue where the WSDL contains the port number in all the URLs, whereas BIGIP works on port 443 (for the SSL).

    Is there a more flexible solution than implementing host headers? Ideally we need to retain flexibility and ease of supportability. Thanks for any help...


  • How do I read a secure rss feed into a SyndicationFeed without providing credentials?

    - by John Kaster
    For whatever reason, IBM uses https (without requiring credentials) for their RSS feeds. I'm trying to consume https://www.ibm.com/developerworks/mydeveloperworks/blogs/roller-ui/rendering/feed/gradybooch/entries/rss?lang=en with a .NET 4 SyndicationFeed. I can open this feed in a browser and it loads just fine. Here's the code:

        using (XmlReader xml = XmlReader.Create("https://www.ibm.com/developerworks/mydeveloperworks/blogs/roller-ui/rendering/feed/gradybooch/entries/rss?lang=en"))
        {
            var items = from item in SyndicationFeed.Load(xml).Items
                        select item;
        }

    Here's the exception:

        System.Net.WebException was unhandled by user code
        Message=The remote server returned an error: (500) Internal Server Error.
        Source=System
        StackTrace:
            at System.Net.HttpWebRequest.GetResponse()
            at System.Xml.XmlDownloadManager.GetNonFileStream(Uri uri, ICredentials credentials, IWebProxy proxy, RequestCachePolicy cachePolicy)
            at System.Xml.XmlDownloadManager.GetStream(Uri uri, ICredentials credentials, IWebProxy proxy, RequestCachePolicy cachePolicy)
            at System.Xml.XmlUrlResolver.GetEntity(Uri absoluteUri, String role, Type ofObjectToReturn)
            at System.Xml.XmlReaderSettings.CreateReader(String inputUri, XmlParserContext inputContext)
            at System.Xml.XmlReader.Create(String inputUri, XmlReaderSettings settings, XmlParserContext inputContext)
            at System.Xml.XmlReader.Create(String inputUri)
            at EDN.Util.Test.FeedAggTest.LoadFeedInfoTest() in D:\cdn\trunk\CDN\Dev\Shared\net\EDN.Util\EDN.Util.Test\FeedAggTest.cs:line 126

    How do I configure the reader to work with an https feed?


  • Hyperlink regex including http(s):// not working in C#

    - by Rory Fitzpatrick
    I think this is sufficiently different from similar questions to warrant a new one. I have the following regex to match opening hyperlink tags in HTML, including the http(s):// part, in order to avoid mailto: links:

        <a[^>]*?href=[""'](?<href>\\b(https?)://[^\[\]""]+?)[""'][^>]*?>

    When I run this through Nregex (with escaping removed), it matches correctly for the following test cases:

        <a href="http://www.bbc.co.uk">
        <a href="http://bbc.co.uk">
        <a href="https://www.bbc.co.uk">
        <a href="mailto:[email protected]">

    However, when I run this in my C# code, it fails. Here is the matching code:

        public static IEnumerable<string> GetUrls(this string input, string matchPattern)
        {
            var matches = Regex.Matches(input, matchPattern, RegexOptions.Compiled | RegexOptions.IgnoreCase);
            foreach (Match match in matches)
            {
                yield return match.Groups["href"].Value;
            }
        }

    And my tests:

        @"<a href=""https://www.bbc.co.uk"">bbc</a>".GetUrls(StringExtensions.HtmlUrlRegexPattern).Count().ShouldEqual(1);
        @"<a href=""mailto:[email protected]"">bbc</a>".GetUrls(StringExtensions.HtmlUrlRegexPattern).Count().ShouldEqual(0);

    The problem seems to be in the \\b(https?):// part which I added; removing this passes the normal URL test but fails the mailto: test. Can anyone shed any light?


  • CakePHP secure link using the HTML helper link method

    - by Aaron
    What's the best way in CakePHP to extend the HTML helper's link function so that I can tell it to output a secure (https) link? Right now, I've added my own secure_link function to app_helpers that's basically a copy of the link function but adds https to the beginning. But it seems like there should be a better way of overriding the HTML link method so that I can specify a secure option.

    http://groups.google.com/group/cake-php/browse%5Fthread/thread/e801b31cd3db809a

    I also started a thread on the Google group, where someone suggested doing something like:

        $html->link('my account', array('base' => 'https://', 'controller' => 'users'));

    but I couldn't get that working. Just to add, this is what is output when I use the above code:

        <a href="/users/index/base:https:/">my account</a>

    I think there's a bug in cake/libs/router.php on line 850. There's a keyword 'bare', and I think it should be 'base' - though changing it to 'base' doesn't seem to fix it. From what I gather, it's telling the router to exclude those keys that are passed in so that they don't get included as parameters. But I'm puzzled as to why there's a 'bare' keyword, and the only reason I can come up with is that it's a typo.
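    One way to avoid copying the whole link() method is a thin wrapper helper that pins the scheme and delegates the markup to the HTML helper. A hedged sketch in CakePHP 1.x style - the class and method names are assumptions, not core API:

        <?php
        // Hypothetical app/views/helpers/secure.php: force https on the
        // resolved URL, then hand off to HtmlHelper::link() for the markup.
        class SecureHelper extends AppHelper {
            var $helpers = array('Html');

            function link($title, $url = null, $options = array()) {
                // Router::url() resolves array/relative URLs; we only pin the scheme
                $full = 'https://' . env('SERVER_NAME') . Router::url($url);
                return $this->Html->link($title, $full, $options);
            }
        }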


  • Broken SSL, what to do

    - by TIT
    I have a site and I implemented SSL there, but when I browse it, the security seals don't appear. I asked GoDaddy and they replied:

        Thank you for contacting online support. I cannot replicate the issue you have described. The error you described is caused by the way your site has been designed. If you receive this error, you have a combination of secure and non-secure objects on the page. For example, if your secure website was https://www.domain.tld and you added an object (an image, script, flash file, etc.) to that page that was located at http://www.domain.tld/image.jpg, you would break the seal. You will need to change your design to link to objects using https (ie https://www.domain.tld/image.jpg) or modify your site design to use relative paths (/image.jpg). This error can only be corrected by modifying your site design. Please contact your web designer or the manufacturer of your web design software if you require additional assistance modifying your site design.

    But the problem is, I've done all of that - all my images and JavaScripts are under https - yet the seal still doesn't appear, and the browser says some content is insecure. What is the problem?
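    One thing worth double-checking is markup that hard-codes http:// for assets. A minimal sketch of the usual fixes - either derive the scheme from the current request (the same $_SERVER['HTTPS'] test used elsewhere on this page) or emit protocol-relative URLs; the domain here is GoDaddy's placeholder, not a real host:

        <?php
        // Option 1: pick the scheme from the request that is being served
        $scheme = (!empty($_SERVER['HTTPS']) && $_SERVER['HTTPS'] !== 'off') ? 'https' : 'http';
        echo '<img src="' . $scheme . '://www.domain.tld/image.jpg" alt="" />';

        // Option 2: protocol-relative URL - the browser reuses the page's scheme
        echo '<img src="//www.domain.tld/image.jpg" alt="" />';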


  • Cookie not renewing/overwriting in IE

    - by deceze
    I have a weird quirk with cookies in IE. When a user logs into the site, I'm generating a new session id and hence need to overwrite the cookie. The flow is basically:

    1. Client goes to the https://secure.example.com/users/login page, automatically receiving a session id.
    2. Client POSTs login credentials to the same address.
    3. Client receives the following headers together with a 302 redirect to https://secure.example.com/users/mypage:

        CAKEPHP=deleted; expires=Sun, 05-Apr-2009 04:50:35 GMT; path=/
        CAKEPHP=98hnIO23...; expires=Mon, 12 Apr 2010 04:50:36 GMT; path=/; secure

    4. Client is supposed to visit https://secure.example.com/users/mypage, presenting the new session id.

    This works in all browsers except IE (tested in 7 & 8). IE retains the old, unauthenticated session id and is redirected back to the login page. It works in my local test environment (using a self-signed certificate at https://localhost:8443/...), but not on the live server. I'm using CakePHP and simply issue a $this->Session->renew(), which produces the above cookie headers. Any ideas how to get IE to accept the new cookie?


  • Share the same cookie between two websites using the PHP cURL extension

    - by powerboy
    I want to get the contents of some emails in my Gmail account, and I would like to use the PHP cURL extension to do this. I followed these steps in my first try:

    1. In the PHP code, output the contents of https://www.google.com/accounts/ServiceLoginAuth.
    2. In the browser, the user inputs a username and password to log in.
    3. In the PHP code, save cookies in a file named cookie.txt.
    4. In the PHP code, send a request to https://mail.google.com/ along with the cookies retrieved from cookie.txt, and output the contents.

    The following code does not work:

        $login_url = 'https://www.google.com/accounts/ServiceLoginAuth';
        $gmail_url = 'https://mail.google.com/';
        $cookie_file = dirname(__FILE__) . '/cookie.txt';

        $ch = curl_init();
        curl_setopt($ch, CURLOPT_SSL_VERIFYPEER, false);
        curl_setopt($ch, CURLOPT_HEADER, false);
        curl_setopt($ch, CURLOPT_POST, true);
        curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
        curl_setopt($ch, CURLOPT_FOLLOWLOCATION, true);
        curl_setopt($ch, CURLOPT_COOKIEJAR, $cookie_file);

        curl_setopt($ch, CURLOPT_URL, $login_url);
        $output = curl_exec($ch);
        echo $output;

        curl_setopt($ch, CURLOPT_URL, $gmail_url);
        curl_setopt($ch, CURLOPT_COOKIEFILE, $cookie_file);
        $output = curl_exec($ch);
        echo $output;

        curl_close($ch);
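    As a side note on the mechanics, the cookie jar only ever holds cookies from requests this PHP process made itself - it cannot see cookies the user's browser holds, which is why step 2 of the flow above cannot feed step 4. The sharing itself works as long as both handles point at the same file; a minimal sketch:

        <?php
        // Two separate cURL handles sharing one cookie file: cookies set by
        // the first request are replayed on the second.
        $jar = dirname(__FILE__) . '/cookie.txt';

        $ch = curl_init('https://www.google.com/accounts/ServiceLoginAuth');
        curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
        curl_setopt($ch, CURLOPT_COOKIEJAR, $jar);   // Set-Cookie headers land here on close
        curl_exec($ch);
        curl_close($ch);

        $ch = curl_init('https://mail.google.com/');
        curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
        curl_setopt($ch, CURLOPT_COOKIEFILE, $jar);  // ...and are sent back from here
        $page = curl_exec($ch);
        curl_close($ch);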


  • XML-RPC over SSL with Ruby: end of file reached (EOFError)

    - by Michael Conigliaro
    Hello, I have some very simple Ruby code that is attempting to do XML-RPC over SSL:

        require 'xmlrpc/client'
        require 'pp'

        server = XMLRPC::Client.new2("https://%s:%d/" % [ 'api.ultradns.net', 8755 ])
        pp server.call2('UDNS_OpenConnection', 'sponsor', 'username', 'password')

    The problem is that it always results in the following EOFError exception:

        /System/Library/Frameworks/Ruby.framework/Versions/1.8/usr/lib/ruby/1.8/net/protocol.rb:135:in `sysread': end of file reached (EOFError)

    So it appears that after doing the POST, I don't get anything back. Interestingly, this is the behavior I would expect if I tried to make an HTTP connection on the HTTPS port (or vice versa), and I actually do get the same exact exception if I change the protocol. Everything I've looked at indicates that using "https://" in the URL is enough to enable SSL, but I'm starting to wonder if I've missed something. Note that even though the credentials I'm using in the RPC are made up, I'm expecting to at least get back an XML error page (similar to if you access https://api.ultradns.net:8755/ with a web browser). I've tried running this code on OS X and Linux with the exact same result, so I have to conclude that I'm just doing something wrong here. Does anyone have any examples of doing XML-RPC over SSL with Ruby?


  • HTTP authentication with Apache HttpComponents

    - by matdan
    Hi, I am trying to develop a Java HTTP client with Apache HttpComponents 4.0.1. This client calls the page "https://myHost/myPage". This page is protected on the server by a JNDIRealm with form-based login authentication, so when I try to get https://myHost/myPage I get a login page. I tried to bypass it, unsuccessfully, with the following code:

        // I set my proxy
        HttpHost proxy = new HttpHost("myProxyHost", myProxyPort);

        // I add supported schemes
        SchemeRegistry supportedSchemes = new SchemeRegistry();
        supportedSchemes.register(new Scheme("http", PlainSocketFactory.getSocketFactory(), 80));
        supportedSchemes.register(new Scheme("https", SSLSocketFactory.getSocketFactory(), 443));

        // prepare parameters
        HttpParams params = new BasicHttpParams();
        HttpProtocolParams.setVersion(params, HttpVersion.HTTP_1_1);
        HttpProtocolParams.setContentCharset(params, "UTF-8");
        HttpProtocolParams.setUseExpectContinue(params, true);

        ClientConnectionManager ccm = new ThreadSafeClientConnManager(params, supportedSchemes);
        DefaultHttpClient httpclient = new DefaultHttpClient(ccm, params);
        httpclient.getParams().setParameter(ConnRoutePNames.DEFAULT_PROXY, proxy);

        // I add my authentication information
        httpclient.getCredentialsProvider().setCredentials(
            new AuthScope("myHost/myPage", 443),
            new UsernamePasswordCredentials("username", "password"));

        HttpHost host = new HttpHost("myHost", 443, "https");
        HttpGet req = new HttpGet("/myPage");

        // show the page
        ResponseHandler<String> responseHandler = new BasicResponseHandler();
        String rsp = httpClient.execute(host, req, responseHandler);
        System.out.println(rsp);

    When I run this code, I always get the login page, not myPage. How can I apply my credential parameters to avoid this login form? Any help would be fantastic.


  • Bitly PHP URL won't work

    - by mathiregister
    Hi guys,

        <?php
        include('bitly.php');
        $bitly = new bitly('myusername', 'myapikey');
        print $bitly->shorten('http://www.google.com');
        ?>

    WORKING!

        $currenturl = (!empty($_SERVER['HTTPS'])) ? "https://".$_SERVER['SERVER_NAME'].$_SERVER['REQUEST_URI']
                                                  : "http://".$_SERVER['SERVER_NAME'].$_SERVER['REQUEST_URI'];
        include('bitly.php');
        $bitly = new bitly('myusername', 'myapikey');
        print $bitly->shorten($currenturl);

    WORKING!

        include('bitly.php');
        $currenturl = (!empty($_SERVER['HTTPS'])) ? "https://".$_SERVER['SERVER_NAME'].$_SERVER['REQUEST_URI']
                                                  : "http://".$_SERVER['SERVER_NAME'].$_SERVER['REQUEST_URI'];
        $url = "somehashtag";
        $shareurl = $currenturl . '#' . $url;
        $bitly = new bitly('myusername', 'myapikey');
        print $bitly->shorten($shareurl);

    NOT WORKING!

    Any idea why? If I print out $shareurl, I can see that it's a completely normal URL that I could paste into the normal bit.ly website. I don't get it! Any ideas? It would be great if you could help me!
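    A likely explanation: everything after the '#' is a URL fragment, and fragments are never sent to a server - so if bitly.php drops $shareurl straight into the API's longUrl query string, the hashtag (and everything after it) is cut off before bit.ly ever sees it. A hedged sketch of the usual fix; whether shorten() expects a raw or pre-encoded URL depends on the bitly.php class, which isn't shown here:

        <?php
        include('bitly.php');

        $currenturl = (!empty($_SERVER['HTTPS'])) ? "https://".$_SERVER['SERVER_NAME'].$_SERVER['REQUEST_URI']
                                                  : "http://".$_SERVER['SERVER_NAME'].$_SERVER['REQUEST_URI'];
        $shareurl = $currenturl . '#' . 'somehashtag';

        $bitly = new bitly('myusername', 'myapikey');
        // urlencode() turns '#' into '%23', so the fragment survives as part
        // of the longUrl parameter instead of being treated as a fragment of
        // the API request itself.
        print $bitly->shorten(urlencode($shareurl));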


  • I've registered my OAuth about 5 times now, but... (twitteR package, R)

    - by user2985989
    I'm attempting to mine Twitter data in R and am having trouble getting started. I created a Twitter account and an app in Twitter Developers, changed the settings to read, write, and access, created my access token, and followed the instructions to the letter when registering it.

    My code:

        > library(twitteR)
        > download.file(url="http://curl.haxx.se/ca/cacert.pem",
        +               destfile="cacert.pem")
        > requestURL <- "https://api.twitter.com/oauth/request_token"
        > accessURL <- "https://api.twitter.com/oauth/access_token"
        > authURL <- "https://api.twitter.com/oauth/authorize"
        > consumerKey <- "my key"        # took this part out for privacy's sake
        > consumerSecret <- "my secret"  # this too
        > twitCred <- OAuthFactory$new(consumerKey=consumerKey,
                                       consumerSecret=consumerSecret,
                                       requestURL=requestURL,
                                       accessURL=accessURL,
                                       authURL=authURL)
        > twitCred$handshake(cainfo="cacert.pem")
        To enable the connection, please direct your web browser to:
        https://api.twitter.com/oauth/authorize?oauth_token=zxgHXJkYAB3wQ2IVAeyJjeyid7WK6EGPfouGmlx1c
        When complete, record the PIN given to you and provide it here: 0010819
        > registerTwitterOAuth(twitCred)
        [1] TRUE
        > save(list="twitCred", file="twitteR_credentials")

    And yet, this:

        > s <- searchTwitter('#United', cainfo="cacert.pem")
        [1] "Unauthorized"
        Error in twInterfaceObj$doAPICall(cmd, params, "GET", ...) :
          Error: Unauthorized

    I'm about to have a temper tantrum. I'd be extremely grateful if someone could explain to me what is going wrong, or, better yet, how to fix it. Thank you.

