Search Results

Search found 5809 results on 233 pages for 'curl multi'.


  • error:14094410:SSL routines:SSL3_READ_BYTES:sslv3 alert handshake failure(35)

    - by ArunS
    Hello there. We run an online shopping site. When I go to the checkout page I get an error like this: "error:14094410:SSL routines:SSL3_READ_BYTES:sslv3 alert handshake failure(35)". In the Apache error log I can see some attempts to connect to api.paypal.com. Here is the relevant part of the log:

      * About to connect() to api.paypal.com port 443 (#0)
      * Trying 66.211.168.123... * connected
      * Connected to api.paypal.com (66.211.168.123) port 443 (#0)
      * successfully set certificate verify locations:
      *   CAfile: none
          CApath: /etc/ssl/certs
      * error:14094410:SSL routines:SSL3_READ_BYTES:sslv3 alert handshake failure
      * Closing connection #0

    When I try to connect to api.paypal.com using curl I get an error like this:

      curl -iv https://api.paypal.com/
      * About to connect() to api.paypal.com port 443 (#0)
      * Trying 66.211.168.91... connected
      * Connected to api.paypal.com (66.211.168.91) port 443 (#0)
      * successfully set certificate verify locations:
      *   CAfile: none
          CApath: /etc/ssl/certs
      * SSLv3, TLS handshake, Client hello (1):
      * SSLv3, TLS handshake, Server hello (2):
      * SSLv3, TLS handshake, CERT (11):
      * SSLv3, TLS handshake, Request CERT (13):
      * SSLv3, TLS handshake, Server finished (14):
      * SSLv3, TLS handshake, CERT (11):
      * SSLv3, TLS handshake, Client key exchange (16):
      * SSLv3, TLS change cipher, Client hello (1):
      * SSLv3, TLS handshake, Finished (20):
      * SSLv3, TLS alert, Server hello (2):
      * error:14094410:SSL routines:SSL3_READ_BYTES:sslv3 alert handshake failure
      * Closing connection #0
      curl: (35) error:14094410:SSL routines:SSL3_READ_BYTES:sslv3 alert handshake failure

    Can anyone help me figure this out? Thanks in advance. Arun S
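
    In the trace above, the server sends "Request CERT (13)" and then aborts: api.paypal.com is asking for a client certificate, and the handshake fails when none is presented. A hedged first check (the .pem paths below are placeholders for the API certificate and key issued for your PayPal account) is to retry the same request with a client certificate attached:

      # Hypothetical file names -- substitute your PayPal API certificate/key.
      curl -iv --cert /path/to/paypal-api-cert.pem --key /path/to/paypal-api-key.pem https://api.paypal.com/

    If that handshake completes, point the application's cURL options (CURLOPT_SSLCERT and CURLOPT_SSLKEY in PHP) at the same files.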

    Read the article

  • Header set Access-Control-Allow-Origin not working with mod_rewrite + mod_jk

    - by tharant
    My first question here on SF, so please forgive me if I manage to bork the post. :) Anyway, I'm using mod_rewrite on one of my machines with a simple rule that redirects to a webapp on another machine. I'm also setting the 'Access-Control-Allow-Origin' header on both machines. The problem is that when I hit the rewrite rule, I lose the 'Access-Control-Allow-Origin' header. Here's an example of the Apache config for the first machine:

      NameVirtualHost 10.0.0.2:80
      <VirtualHost 10.0.0.2:80>
        DocumentRoot /var/www/host.example.com
        ServerName host.example.com
        JkMount /webapp/* jkworker
        Header set Access-Control-Allow-Origin "*"
        RewriteEngine on
        RewriteRule ^/otherhost http://otherhost.example.com/webapp [R,L]
      </VirtualHost>

    And here's an example of the Apache config for the second:

      NameVirtualHost 10.0.1.2:80
      <VirtualHost 10.0.1.2:80>
        DocumentRoot /var/www/otherhost.example.com
        ServerName otherhost.example.com
        JkMount /webapp/* jkworker
        Header set Access-Control-Allow-Origin "*"
      </VirtualHost>

    When I hit host.example.com we see that the header is set:

      $ curl -i http://host.example.com/
      HTTP/1.1 302 Moved Temporarily
      Server: Apache/2.2.11 (FreeBSD) mod_ssl/2.2.11 OpenSSL/0.9.7e-p1 DAV/2 mod_jk/1.2.26
      Content-Length: 0
      Access-Control-Allow-Origin: *
      Content-Type: text/html;charset=ISO-8859-1

    And when I hit otherhost.example.com we see that it too is setting the header:

      $ curl -i http://otherhost.example.com
      HTTP/1.1 200 OK
      Server: Apache/2.0.46 (Red Hat)
      Location: http://otherhost.example.com/index.htm
      Content-Length: 0
      Access-Control-Allow-Origin: *
      Content-Type: text/html;charset=UTF-8

    But when I try to hit the rewrite rule at host.example.com/otherhost we get no love:

      $ curl -i http://host.example.com/otherhost/
      HTTP/1.1 302 Found
      Server: Apache/2.2.11 (FreeBSD) mod_ssl/2.2.11 OpenSSL/0.9.7e-p1 DAV/2 mod_jk/1.2.26
      Location: http://otherhost.example.com/
      Content-Length: 0
      Content-Type: text/html; charset=iso-8859-1

    Can anybody point out what I'm doing wrong here? Could mod_jk be part of the problem?
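
    A hedged observation: in Apache, a plain "Header set" writes to the success (onsuccess) header table, and internally generated responses such as mod_rewrite's 302 can bypass it; "Header always set Access-Control-Allow-Origin "*"" writes to the table that is also applied to non-2xx responses. After that change, the redirect can be re-checked from the shell:

      # Expect Access-Control-Allow-Origin on the 302 after switching the
      # directive to "Header always set":
      curl -i http://host.example.com/otherhost/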

    Read the article

  • SSL Returning Blank Page, No Catalina Errors

    - by Mr.Peabody
    This is my second, maybe third, time configuring SSL with Tomcat. Earlier I had created a self-signed certificate, which worked; now using my signed certificate is proving fruitless. I am using Tomcat, running on the Amazon Linux AMI. With the signed cert/keystore, my server starts normally without errors. However, navigating to the domain gives me an "ERR_SSL_VERSION_OR_CIPHER_MISMATCH" error. My server.xml connector looks as follows:

      <Connector port="8443" maxHttpHeaderSize="8192"
                 maxThreads="150" minSpareThreads="25" maxSpareThreads="75"
                 enableLookups="false" disableUploadTimeout="true"
                 acceptCount="100" scheme="https" secure="true" SSLEnabled="true"
                 clientAuth="false" sslProtocol="TLS"
                 keystoreFile="/home/ec2-user/.keystore/starchild.jks"
                 keystorePass="d6b5385812252f180b961aa3630df504" />

    It couldn't hurt to also mention that I'm using a wildcard certificate. Please let me know if anything looks amiss!
    EDIT: After looking into this more, I've determined that nothing may be wrong with the server.xml or the listening ports. This is looking more like an actual certificate error, as the curl request gives me this:

      curl: (35) Unknown SSL protocol error in connection to jira.mywebsite.com:-9824

    Though I can't seem to figure out what the "-9824" is. Comparing this curl against another similar setup (using the same wildcard certificate) turns up the full handshake, which is to be expected. I believe this now comes down to the protocol/cipher defaults on the JIRA servers.
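
    One way to see what the connector actually offers during the handshake, and to confirm the keystore really holds the wildcard certificate chain, is to probe it directly (hostname as in the post; adjust the port if connecting straight to the 8443 connector):

      # Probe the TLS endpoint; -servername matters for SNI setups:
      openssl s_client -connect jira.mywebsite.com:443 -servername jira.mywebsite.com
      # Inspect the keystore contents (same path/password as server.xml):
      keytool -list -v -keystore /home/ec2-user/.keystore/starchild.jks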

    Read the article

  • Unable to specify parameters to cvlc in a script

    - by VxJasonxV
    I'm creating a script that issues a few curl commands in order to access a time-protected mms stream link, then sets up a relay using cvlc (VLC's command-line interface) for my own use on an unencumbered player. The curl aspect of this is working, as I can run a browser and curl side by side and get the same access URL. (It's time-locked, meaning the stream will work forever, but you have to connect quickly or the URL will time out.) The very end of the script prints the command I will run, which is then followed by "exec $CMD". When I echo $CMD I get:

      cvlc --sout '#standard{access=http,mux=asf,dst=0.0.0.0:58194}' mms://[...]

    Manually copy/pasting this command in, verbatim, works perfectly fine, but as part of a script the cvlc execution output says:

      [0x9743d0] main interface error: no suitable interface module
      [0x962120] main libvlc error: interface "globalhotkeys,none" initialization failed
      [0x9743d0] dummy interface: using the dummy interface module...
      [0xb16e30] stream_out_standard stream out error: no mux specified or found by extension
      [0xb16ad0] main stream output error: stream chain failed for `standard{mux="",access="",dst="'#standard{access=http,mux=asf,dst=0.0.0.0:58194}'"}'
      [0xb11cd0] main input error: cannot start stream output instance, aborting
      [0xb11f70] signals interface error: Caught Interrupt signal, exiting...

    Why is --sout behaving one way in a script (non-interactive shell?) vs. another way in the foreground (interactive shell)?
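
    The failing stream chain in the log shows the single quotes arriving inside the argument (dst="'#standard{...}'"), which points at quoting rather than interactivity: quotes stored in a string variable are not re-parsed when the shell expands $CMD, so cvlc receives them literally. A hedged fix is to build the command as an array instead of a string (bash; the mms URL is elided as in the post):

      # Quotes survive correctly because each array element is one argument:
      cmd=(cvlc --sout '#standard{access=http,mux=asf,dst=0.0.0.0:58194}' 'mms://...')
      exec "${cmd[@]}"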

    Read the article

  • Nginx, as reverse proxy, could not proxy_pass to a domain pointing to the local JBOSS

    - by larryzhao
    My environment is Ubuntu 12.04, Nginx 1.20, and Torquebox 2.0.3, which is actually JBoss AS 7. I have two apps deployed on Torquebox; it listens on 8080 and the apps have different hostnames, app1.mydomain.com and app2.mydomain.com. I added "127.0.0.1 app1.mydomain.com" and "127.0.0.1 app2.mydomain.com" to /etc/hosts, and then curl app1.mydomain.com:8080 and curl app2.mydomain.com:8080 both return correctly. Then I go to my nginx. I would like nginx to pass visits to www.app1.com through to app1.mydomain.com:8080, so I have the following configuration:

      # primary server - proxypass to torquebox
      server {
        listen 80;
        server_name www.app1.com;
        access_log off;
        error_log off;
        # proxy to Torquebox
        location / {
          proxy_pass http://app1.mydomain:8080/;
          proxy_redirect off;
          proxy_set_header Host $host;
          proxy_set_header X-Real-IP $remote_addr;
          proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
          proxy_max_temp_file_size 0;
          client_max_body_size 10m;
          client_body_buffer_size 128k;
          proxy_connect_timeout 90;
          proxy_send_timeout 90;
          proxy_read_timeout 90;
          proxy_buffer_size 4k;
          proxy_buffers 4 32k;
          proxy_busy_buffers_size 64k;
          proxy_temp_file_write_size 64k;
        }
      }

    But it doesn't work. curl www.app1.com returns nothing, and if I visit www.app1.com in Safari the HTTP response code is 404. I don't know why; I need help.
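
    Two details in that config are worth a second look, as a hedged guess: the proxy_pass host reads app1.mydomain:8080 (no .com, unlike everywhere else in the post), and proxy_set_header Host $host forwards the browser's Host, www.app1.com, which Torquebox has no virtual host for -- a plausible source of the 404. The backend's expectation can be tested directly, and if that works, the header can be pinned with proxy_set_header Host app1.mydomain.com; in the location block:

      # Does Torquebox serve the app when handed the Host it expects?
      curl -i -H "Host: app1.mydomain.com" http://127.0.0.1:8080/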

    Read the article

  • TCP dies on a Linux laptop

    - by Roman Cheplyaka
    Once every several days I have the following problem. My laptop (Debian GNU/Linux testing) suddenly becomes unable to work with TCP connections to the internet. The following things continue to work fine:

      - UDP (DNS) and ICMP (ping) — I get instant responses
      - TCP connections to other machines on the local network (e.g. I can ssh to a neighbouring laptop)
      - everything is OK for the other machines on my LAN

    But when I try TCP connections from my laptop, they time out (no response to SYN packets). Here's a typical curl output:

      % curl -v google.com
      * About to connect() to google.com port 80 (#0)
      * Trying 173.194.39.105...
      * Connection timed out
      * Trying 173.194.39.110...
      * Connection timed out
      * Trying 173.194.39.97...
      * Connection timed out
      * Trying 173.194.39.102...
      * Timeout
      * Trying 173.194.39.98...
      * Timeout
      * Trying 173.194.39.96...
      * Timeout
      * Trying 173.194.39.103...
      * Timeout
      * Trying 173.194.39.99...
      * Timeout
      * Trying 173.194.39.101...
      * Timeout
      * Trying 173.194.39.104...
      * Timeout
      * Trying 173.194.39.100...
      * Timeout
      * Trying 2a00:1450:400d:803::1009...
      * Failed to connect to 2a00:1450:400d:803::1009: Network is unreachable
      * Success
      * couldn't connect to host
      * Closing connection #0
      curl: (7) Failed to connect to 2a00:1450:400d:803::1009: Network is unreachable

    Restarting the connection and/or reloading the network card's kernel module doesn't help. The only thing that helps is a reboot. Clearly something is wrong with my system (everything else works fine), but I have no idea what exactly. I don't know how to reproduce this, but as I said, it happens every several days. My setup is a wireless router connected to the ISP via PPPoE. Any advice?
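
    A hedged way to narrow this down the next time it happens is to watch whether the SYNs actually leave the interface and whether anything comes back; that separates a local stack/firewall problem from a router/PPPoE problem (the interface name below is a guess -- check with ip link):

      # Watch outbound SYNs and any replies while reproducing the timeout:
      sudo tcpdump -ni wlan0 'tcp port 80'
      # Also worth capturing for comparison with the healthy state:
      sudo iptables -L -vn
      netstat -s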

    Read the article

  • Wisdom of merging 100s of Oracle instances into one instance

    - by hoytster
    Our application runs on the web, is mostly an inquiry tool, and does some transactions. We host the Oracle database. The app has always had a different instance of Oracle for each customer. A customer is a company which pays us to provide our service to the company's employees, typically 10,000-25,000 employees per customer. We do a major release every few years, and migrating to that new release is challenging: we might have a team at the customer site for a couple of weeks, explaining new functionality and setting up the driving data to suit that customer. We're considering going multi-client, putting all our customers into a single shared Oracle 11g instance on a big honkin' Windows Server 2008 server -- in order to reduce costs. I'm wondering if that's advisable. There are some advantages to having separate instances for each customer. Tell me if these are bogus, please. In my rough guess at decreasing importance:

      1. Our customers MyCorp and YourCo can be migrated separately when breaking changes are made to the schema. (With multi-client, we'd be migrating 300+ customers overnight!?!)
      2. MyCorp's data can be easily backed up and (!!!) restored, without affecting other customers.
      3. MyCorp's data is securely separated from their competitor YourCo's data, without depending on developers to get the code right and/or DBAs getting the configuration right.
      4. Performance is better because the database is smaller (5,000 vs 2,000,000 rows in ~50 tables).
      5. If MyCorp's offices are (mostly) in just one region, then MyCorp's instance can be geographically co-located there, so network lag doesn't hurt performance.
      6. We can provide better service to global clients, for the same reason.
      7. If MyCorp wants to take their database in-house, then we can easily export their instance to get MyCorp their data.
      8. Load-balancing is easier because instances can be placed on different servers (this is with a web farm).
      9. When a DEV or QA instance is needed, it's easier to clone the real instance and anonymize the data, because there's much less data.
      10. Because they're small enough, developers can have their own instance running locally, so they can work on code while waiting at the airport and while in-flight, without fighting VPN hassles.

    Q1: What are other advantages of separate instances?

    We are contemplating changing the database schema and merging all of our customers into one Oracle instance, running on one hefty server. Here are the advantages of the multi-client instance approach, most important first (my WAG). Please snipe if these are bogus:

      1. Less work for the DBAs, since they only need to maintain one instance instead of hundreds. Less DBA work translates to cheaper, our main motive for this change.
      2. With just one instance, the DBAs can do a better job of optimizing performance. They'll have time to add appropriate indexes and review our SQL.
      3. It will be easier for developers to debug & enhance the application, because there is only one schema and one app (there might be dozens of schema versions if there are hundreds of instances, with a different version of the app for each version of the schema). This reduces costs too. The alternative is having to start every debug session with (1) what version is this customer running, and (2) let's struggle to recreate the corresponding development environment, code and database. (We'd need a virtual machine that includes the code AND the database instance for each patch and release!)
      4. Licensing Oracle is cheaper because it's priced per server irrespective of heft (or something -- I don't know anything about the subject).
      5. The database becomes a viable persistent store for web session data, because there is just one instance.
      6. Some database operations are easier with one multi-client instance, like finding a participant when they're hazy about which customer they (or their spouse, maybe) work for: all the names are in one table. Reporting across customers is straightforward.

    Q2: What are other advantages of having multiple clients in one instance?

    Q3: Which approach do you think is better (why)? Instance per customer, or all customers in one instance? I'm concerned that having one multi-client instance makes migration near-impossible, and that's a deal killer... unless there is a compromise solution, like having two multi-client instances, the old and the new. In that case, we would design cross-instance solutions for finding participants, reporting, etc., so customers could go from one multi-client instance to the next without anything breaking. THANKS SO MUCH for your collective advice! This issue is beyond me -- but not beyond the collective you. :) Hoytster

    Read the article

  • hadoop install cluster

    - by chnet
    Is it possible to run Hadoop multi-node when the nodes have Hadoop installed at different paths? I noticed that tutorials such as Michael Noll's (http://www.michael-noll.com/tutorials/running-hadoop-on-ubuntu-linux-multi-node-cluster/) ask for Hadoop to be installed at the same path on every node. I tried to run Hadoop multi-node with each node having Hadoop installed at a different path, but without success; Hadoop seems to assume by default that the installation paths are the same. The problem is that when I execute start-dfs.sh, it tries to start the Hadoop daemons on the slave nodes from the same path as on the master, whereas the installation paths there are different. Which files should I modify to reflect the installed location on each node?
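
    A hedged workaround, rather than a per-file fix: start-dfs.sh reaches the slaves over ssh and launches the daemons from the path it resolved on the master, so the simplest route is often to give every node the same logical path even when the real install directories differ:

      # On each node, point a common path at that node's actual install
      # (paths below are made up for illustration):
      sudo ln -s /opt/local-install/hadoop-1.0.3 /usr/local/hadoop
      # ...then use /usr/local/hadoop as HADOOP_HOME everywhere.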

    Read the article

  • Macports irssi & perl5 installation issues

    - by Dmitri DB
    Long-time reader, first-time poster. Big, appreciative thanks for everyone's collective questioning and answering here and at Stack Overflow; it's helped me quite a lot over the time I've been learning answers through these sites! Apologies in advance if I didn't search hard enough on posts already up on this site to find out what I could do about this issue, but I thought I'd reach out for the sake of trying at least once. I've experienced this issue while starting up my MacPorts-installed version of irssi:

      13:25 -!- Irssi: Error in script dispatch:
      13:25 Can't locate lib.pm in @INC (@INC contains:
            /opt/local/lib/perl5/site_perl/5.12.4/darwin-multi-2level
            /opt/local/lib/perl5/site_perl/5.12.4
            /opt/local/lib/perl5/vendor_perl/5.12.4/darwin-multi-2level
            /opt/local/lib/perl5/vendor_perl/5.12.4
            /opt/local/lib/perl5/5.12.4/darwin-multi-2level
            /opt/local/lib/perl5/5.12.4
            /opt/local/lib/perl5/site_perl/5.12.3/darwin-multi-2level
            /opt/local/lib/perl5/site_perl/5.12.3
            /opt/local/lib/perl5/site_perl
            /opt/local/lib/perl5/vendor_perl .) at (eval 18) line 1.
      13:25 BEGIN failed--compilation aborted at (eval 18) line 1.
      13:25

    Huh, strange. I looked into it a bit:

      [email protected] /opt/local/lib/perl5
      ?- find . -name "lib.pm" -ls
      14673887 16 -r--r--r-- 1 root admin 6853 25 Jun 23:39 ./5.12.4/darwin-thread-multi-2level/lib.pm
      [email protected] /opt/local/lib/perl5
      ?- l 5.12.4/darwin-thread-multi-2level
      total 1864
      drwxr-xr-x  55 root admin   1870 28 Jun 19:28 .
      drwxr-xr-x 158 root admin   5372 28 Jun 19:28 ..
      -rw-r--r--   1 root admin 177814 25 Jun 23:39 .packlist
      drwxr-xr-x   6 root admin    204 28 Jun 19:28 B
      -r--r--r--   1 root admin  25714 25 Jun 23:39 B.pm
      drwxr-xr-x  64 root admin   2176 28 Jun 19:28 CORE
      drwxr-xr-x   3 root admin    102 28 Jun 19:28 Compress
      -r--r--r--   1 root admin   3000 25 Jun 23:39 Config.pm
      -r--r--r--   1 root admin 228094 25 Jun 23:39 Config.pod
      -r--r--r--   1 root admin    409 25 Jun 23:39 Config_git.pl
      -r--r--r--   1 root admin  38759 25 Jun 23:39 Config_heavy.pl
      -r--r--r--   1 root admin  21174 25 Jun 23:39 Cwd.pm
      -r--r--r--   1 root admin  63535 25 Jun 23:39 DB_File.pm
      drwxr-xr-x   3 root admin    102 28 Jun 19:28 Data
      drwxr-xr-x   5 root admin    170 28 Jun 19:28 Devel
      drwxr-xr-x   4 root admin    136 28 Jun 19:28 Digest
      -r--r--r--   1 root admin  25185 25 Jun 23:39 DynaLoader.pm
      drwxr-xr-x  22 root admin    748 28 Jun 19:28 Encode
      -r--r--r--   1 root admin  29731 25 Jun 23:39 Encode.pm
      -r--r--r--   1 root admin   6736 25 Jun 23:39 Errno.pm
      -r--r--r--   1 root admin   5445 25 Jun 23:39 Fcntl.pm
      drwxr-xr-x   5 root admin    170 28 Jun 19:28 File
      drwxr-xr-x   3 root admin    102 28 Jun 19:28 Filter
      -r--r--r--   1 root admin   1819 25 Jun 23:39 GDBM_File.pm
      drwxr-xr-x   4 root admin    136 28 Jun 19:28 Hash
      drwxr-xr-x   3 root admin    102 28 Jun 19:28 I18N
      drwxr-xr-x  11 root admin    374 28 Jun 19:28 IO
      -r--r--r--   1 root admin   1404 25 Jun 23:39 IO.pm
      drwxr-xr-x   6 root admin    204 28 Jun 19:28 IPC
      drwxr-xr-x   4 root admin    136 28 Jun 19:28 List
      drwxr-xr-x   4 root admin    136 28 Jun 19:28 MIME
      drwxr-xr-x   3 root admin    102 28 Jun 19:28 Math
      -r--r--r--   1 root admin   2519 25 Jun 23:39 NDBM_File.pm
      -r--r--r--   1 root admin   4208 25 Jun 23:39 O.pm
      -r--r--r--   1 root admin  15563 25 Jun 23:39 Opcode.pm
      -r--r--r--   1 root admin  21011 25 Jun 23:39 POSIX.pm
      -r--r--r--   1 root admin  58962 25 Jun 23:39 POSIX.pod
      drwxr-xr-x   5 root admin    170 28 Jun 19:28 PerlIO
      -r--r--r--   1 root admin   2515 25 Jun 23:39 SDBM_File.pm
      drwxr-xr-x   4 root admin    136 28 Jun 19:28 Scalar
      -r--r--r--   1 root admin  10837 25 Jun 23:39 Socket.pm
      -r--r--r--   1 root admin  41003 25 Jun 23:39 Storable.pm
      drwxr-xr-x   4 root admin    136 28 Jun 19:28 Sys
      drwxr-xr-x   3 root admin    102 28 Jun 19:28 Text
      drwxr-xr-x   5 root admin    170 28 Jun 19:28 Time
      drwxr-xr-x   3 root admin    102 28 Jun 19:28 Unicode
      -r--r--r--   1 root admin  14462 25 Jun 23:39 attributes.pm
      drwxr-xr-x  38 root admin   1292 28 Jun 19:28 auto
      -r--r--r--   1 root admin  19892 25 Jun 23:39 encoding.pm
      -r--r--r--   1 root admin   6853 25 Jun 23:39 lib.pm
      -r--r--r--   1 root admin  11044 25 Jun 23:39 mro.pm
      -r--r--r--   1 root admin    997 25 Jun 23:39 ops.pm
      -r--r--r--   1 root admin  13945 25 Jun 23:39 re.pm
      drwxr-xr-x   3 root admin    102 28 Jun 19:28 threads
      -r--r--r--   1 root admin  33283 25 Jun 23:39 threads.pm

    So it sort of seems to me that the permissions these modules got installed with have gotten mixed up somehow? I'm not really a Perl user beyond enjoying it for massive directory-recursive find/replace operations within text files, so I haven't much of an idea what the permissions here are supposed to look like, and I can't really see how MacPorts went and installed Perl this way when it has otherwise worked without failure for years now. Does anyone have any recommendations for the sanest path towards rectifying this issue? Also, is there any interesting reason why the MacPorts default for the perl5 port installs 5.12.4, and not 5.16.0, which has to be explicitly installed via the perl5.16 port? Thanks again!
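
    One detail worth noticing above, offered as a hedged read: the @INC paths in the error end in darwin-multi-2level, while lib.pm actually lives under darwin-thread-multi-2level, so this looks less like a permissions problem and more like irssi's embedded Perl disagreeing with the Perl that MacPorts installed (an architecture-name mismatch). A first step consistent with that reading is rebuilding irssi against the current MacPorts Perl:

      # Rebuild irssi so its embedded Perl matches the installed perl5 port:
      sudo port upgrade --force irssi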

    Read the article

  • ASIHTTPRequest - PostData but GET Method

    - by RyanJM
    Is there a way to see what request ASIHTTPRequest is making? My code is:

      ASIHTTPRequest *request = [ASIHTTPRequest requestWithURL:url];
      [request appendPostData:[data dataUsingEncoding:NSUTF8StringEncoding]];
      [request setRequestMethod:@"GET"];
      [request addRequestHeader:@"Content-Type" value:@"application/json"];

    And I'm trying to duplicate:

      curl -X GET "http://www.myurl.com/_api/my/url" -H "Content-Type: application/json" -d {"api_key":"my_special_api_key_123"}

    The curl works fine, but I can't get the ASIHTTPRequest to work properly. Any ideas?
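
    One low-tech way to answer "what is it actually sending", as a hedged sketch: listen on a local port with netcat and point both the app and the curl command at http://localhost:8080/ so the raw request bytes get dumped to the terminal (BSD netcat syntax shown; GNU netcat wants nc -l -p 8080):

      # Dump whatever the client sends: method, headers and body included.
      nc -l 8080

    Note also that curl's -d normally switches the method to POST unless -X GET overrides it, so the target here is a GET carrying a JSON body; comparing the two raw dumps should show where ASIHTTPRequest diverges.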

    Read the article

  • Postfix sends email to spam (gmail, hotmail)

    - by razorxan
    I recently installed a Postfix + Dovecot + DKIM multi-domain, multi-user, multi-alias mail server on my Debian Squeeze system. Everything works except for one big issue that basically makes the whole thing useless: every single email sent by my server goes straight to spam (Gmail, Hotmail). The first thing I did was run the well-known allaboutspam test, and everything checks out green except for the BATV item (yellow):

      Reverse DNS: green
      HELO greeting: green
      RBL: green
      BATV: yellow
      SPF: green
      DKIM: green
      URIBL: green
      SpamAssassin: green
      Greylist: green

    I'm really confused and can't see a way to solve this issue. Ask me for any detail you need.

    Read the article

  • Can't locate RRDs.pm in @INC

    - by User4283
    Hi. If I run any of my Perl scripts without "use lib qw( /opt/rrdtool-1.4.4/lib/perl );" after the interpreter line, I get the following error:

      Can't locate RRDs.pm in @INC (@INC contains: /usr/lib/perl5/site_perl/5.8.8/i386-linux-thread-multi /usr/lib/perl5/site_perl/5.8.8 /usr/lib/perl5/site_perl /usr/lib/perl5/vendor_perl/5.8.8/i386-linux-thread-multi /usr/lib/perl5/vendor_perl/5.8.8 /usr/lib/perl5/vendor_perl /usr/lib/perl5/5.8.8/i386-linux-thread-multi /usr/lib/perl5/5.8.8 .)

    It's hard for me to add "use lib qw( /opt/rrdtool-1.4.4/lib/perl );" to all of my scripts because there are hundreds of them. Can anyone help me resolve this?
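
    Perl also consults the PERL5LIB environment variable when it builds @INC, so the path can be supplied once in the environment that launches the scripts instead of in each file:

      # e.g. in /etc/profile, the service's init script, or the crontab
      # that runs these scripts:
      export PERL5LIB=/opt/rrdtool-1.4.4/lib/perl

    The same thing can be done per invocation with perl -I/opt/rrdtool-1.4.4/lib/perl script.pl.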

    Read the article

  • problem with twitter api friends_timeline

    - by siznax
    I can get my user_timeline fine:

      curl -u user:pwd http://www.twitter.com/statuses/user_timeline/user.json
      {blob of tweets}

    But when I try to get the friends_timeline, I get an auth error:

      curl -u user:pwd http://www.twitter.com/statuses/friends_timeline.json
      {"request":"\/statuses\/friends_timeline.json", "error":"Could not authenticate you."}

    Do I just not understand the documentation? http://apiwiki.twitter.com/REST+API+Documentation#friendstimeline

    Read the article

  • server performance: multiple external connections and performance

    - by websiteguru
    I am creating a PHP script that requires the server to make several cURL requests per run. I'll be running this script through cron every 3 minutes, and I'm looking to maximize the number of cURL requests I can make in a 24-hour period. What I am wondering is whether it would be better, from a performance standpoint, to get a dedicated server or several small shared hosting accounts. With the bottleneck being the number of external connections rather than system resources, I'm wondering which is the best approach.
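
    Worth noting either way: for many small HTTP requests the limiting factor is usually network latency rather than host count, so issuing requests concurrently from one machine goes a long way. PHP exposes this through the curl_multi_* functions; the same idea from the shell, as a hedged sketch (urls.txt is a hypothetical one-URL-per-line file):

      # 10 requests in flight at a time; print just the status codes:
      xargs -P 10 -n 1 curl -s -o /dev/null -w "%{http_code}\n" < urls.txt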

    Read the article

  • Erlang: HTTP Accept Header with Inets

    - by Ted Karmel
    I am trying to do the equivalent of the following curl command:

      curl -H "Accept: text/plain" http://127.0.0.1:8033/stats

    I tried a simple inets HTTP request, but it isn't processed. How can I specify the Accept header requirement in inets (or some other Erlang HTTP client, for that matter)?
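
    As a hedged pointer from the shape of the inets API (a sketch, not a tested snippet): the client accepts a header list inside the request tuple, roughly httpc:request(get, {"http://127.0.0.1:8033/stats", [{"Accept", "text/plain"}]}, [], []); on older OTP releases the same call lives in the http module rather than httpc.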

    Read the article

  • Unexpected error when attempting to delete a facebook story

    - by blueberryfields
    I'm attempting to delete a Facebook story/action, like so:

      curl -F 'access_token=[valid_token]' -X DELETE https://graph.facebook.com/[action_id]

    Facebook is responding with an internal server error, like so:

      {"error": {"message":"An unexpected error has occurred. Please retry your request later.", "type":"OAuthException","code":2}}

    Is this an error caused by my actions, or something on Facebook's end? Additional info: when I run

      curl -X GET https://graph.facebook.com/[action_id]?access_token=[valid_token]

    the result is "false".

    Read the article

  • Correct place to load an extension in WordPress

    - by Quassnoi
    My hosting provider does not have the curl extension enabled by default; however, I can load it using dl(). What would be the correct place in WordPress to load the extension so that WordPress can use curl for the wp_remote_* functions? I'd like it to survive possible upgrades of the WordPress code.
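
    One hedged option: WordPress loads any PHP file placed in wp-content/mu-plugins ("must-use" plugins) before regular plugins, and core upgrades don't touch that directory. A sketch of creating one from the shell (the file name is made up; whether dl() is available at all depends on the host's PHP SAPI settings):

      mkdir -p wp-content/mu-plugins
      printf '%s\n' '<?php' \
        'if (!extension_loaded("curl") && function_exists("dl")) { dl("curl.so"); }' \
        > wp-content/mu-plugins/load-curl.php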

    Read the article

  • Best way to simulate virtual networks

    - by user179529
    I want to have a virtual test network on either Hyper-V or ESXi (I don't care which one), with a multi-network virtual environment. Let's say three networks: 192.168.1.0, 192.168.2.0 and 192.168.3.0. I want to use the networks to simulate a multi-domain or multi-site domain. What's the best way of doing this? I have VMware Workstation, but it won't create more than one NAT virtual network. So how would I go about doing this in either Hyper-V 2012 or ESXi?

    Read the article

  • How to Best Optimize up Model Transforms, Import 3DS Animations Into XNA 4.0?

    - by Jason R. Mick
    Relative beginner to XNA, but trying to build a multi-purpose (3D) game framework in XNA 4. Been using the Reed (O'Reilly) and Cawood/McGee (McGraw Hill) guides. My question is multi-faceted and involves how to most efficiently handle models. I'm using 3DS Max 2010 with kw-Xport to ship out my models as .X files. I solved an early problem by using my depth stencil state. My models are now loading properly (yay!) and I have basic bounding working; I just want to optimize model transforms and get animations working as a next step. My questions on models are:

      1. Do you have any suggestions for good resources on exporting 3DS animations to XNA? I've seen some resources on how to handle animations in XNA, but most skimp on the basic topic of how to convert multi-animation 3DS files. For example, how do I take one big long string of keyframed animations (say running, frames 5-20; climbing, frames 25-45; etc.) and turn them into named XNA animations? To my understanding every XNA animation has to have a name, but I haven't seen any tutorials on creating a new named animation from a subset of frames.
      2. Is it faster to load a model once and animate/transform that base model on the fly at draw time, or to load multiple models? My game will have multiple enemies, and I've already seen some lagginess in XNA, so I want to make my code efficient...
      3. I've heard people on App Hub talking about making custom content processors for models -- what is the benefit of this? Does it speed up transforming or animating the models? If so, can you point me towards any good (model-centric) tutorials? (I've built a custom height map content processor to generate terrain, following Cawood's examples; I'm just a bit confused as to how a model content processor would be implemented.)

    Read the article

  • What characteristic of a software determines its operational scope?

    - by Dark Star1
    How can I classify whether a piece of software is a medium-sized application with the ability to grow into enterprise-level software, or whether it is already there? And how should I use that information to choose the appropriate language/tool to create the software? At first I was asking whether Java or PHP is the best tool to design enterprise-level software; however, I've suddenly realized that I am unable to place the software I'm tasked with redesigning in its proper scope, so I'm lost.
    Edit: I guess I'm looking for telltale signs in software that may tip the favour towards enterprise-level software, in the sense of functional and operational characteristics. Functional: what it does (multi-functional), and architectural characteristics such as being highly modular. Operational: multi-sourced and multi-homed, databases, etc. To be honest, the reason I ask is that I'm skeptical about the use of PHP to design a piece of employee and partial accounting software. I'm tipping more towards the use of JSP and an HMVC framework such as JSF, Wicket, etc., whereas the other guy wants to go the PHP way. I'm not experienced with PHP, and as far as I know it's not an OO language, hence my skepticism towards it.

    Read the article

  • Unnamed Refactoring

    - by Liam McLennan
    This post is a message in a bottle. I cast it into the sea in the hope that it will one day return to me, stuffed to the cork with enlightenment. Yesterday I tweeted: what is the name of the pattern where you replace a multi-way conditional with an associative array? I said 'pattern' but I meant 'refactoring'. Anyway, no one replied, so I will describe the refactoring here. Programmers tend to think imperatively, which leads to code such as:

      public int GetPopulation(string country)
      {
          if (country == "Australia") {
              return 22360793;
          } else if (country == "China") {
              return 1324655000;
          } else if (country == "Switzerland") {
              return 7782900;
          } else {
              throw new Exception("What ain't no country I ever heard of. They speak English in what?");
          }
      }

    which is horrid. We can write a cleaner version, replacing the multi-way conditional with an associative array and treating the conditional as data:

      public int GetPopulation(string country)
      {
          if (!Populations.ContainsKey(country))
              throw new Exception("The population of " + country + " could not be found.");
          return Populations[country];
      }

      private Dictionary<string, int> Populations
      {
          get
          {
              return new Dictionary<string, int>
              {
                  {"Australia", 22360793},
                  {"China", 1324655000},
                  {"Switzerland", 7782900}
              };
          }
      }

    Does this refactoring already have a name? Otherwise, I propose: Replace multi-way conditional with associative array.

    Read the article

  • M-Audio Delta 1010LT on 12.04

    - by user74039
    I have 12.04 64-bit installed; my sound card is a Delta 1010LT, and it seems to be only partially detected. I've been following the steps here: https://help.ubuntu.com/community/SoundTroubleshooting/

    lspci -v | grep -A7 -i "audio" shows this:

      04:07.0 Multimedia audio controller: VIA Technologies Inc. ICE1712 [Envy24] PCI Multi-Channel I/O Controller (rev 02)
        Subsystem: VIA Technologies Inc. M-Audio Delta 1010LT
        Flags: bus master, medium devsel, latency 64, IRQ 22
        I/O ports at ec00 [size=32]
        I/O ports at e880 [size=16]
        I/O ports at e800 [size=16]
        I/O ports at e480 [size=64]
        Capabilities: <access denied>
        Kernel driver in use: snd_ice1712

    aplay shows this:

      **** List of PLAYBACK Hardware Devices ****
      card 0: M1010LT [M Audio Delta 1010LT], device 0: ICE1712 multi [ICE1712 multi]
        Subdevices: 1/1
        Subdevice #0: subdevice #0

    In the sound settings on the desktop, all I see is the ICE1712 S/PDIF, which I don't use; I want to use the individual outputs on the card. I'm not so bothered about inputs -- I just want playback for now. If I open alsamixer in the console, I see all of the output and input channels and I've raised the volume on them, but I don't get anything in the desktop sound settings, and when I play any sound I hear nothing. Can someone help?
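
    A hedged way to test the analog outputs while bypassing the desktop sound settings (and PulseAudio) is to address the ALSA device directly, using the card name from the aplay listing above:

      # Sine test tone on the first two hardware channels:
      speaker-test -D plughw:M1010LT -c 2 -t sine

    If that is audible, the card and driver are fine and the problem sits in the desktop layer (e.g. only the S/PDIF profile being exposed); if not, the channel routing in alsamixer is the next suspect.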

    Read the article
