Search Results

Search found 64048 results on 2562 pages for 'http post'.

Page 108/2562 | < Previous Page | 104 105 106 107 108 109 110 111 112 113 114 115  | Next Page >

  • Architectural advice - web camera remote access

    - by Alan Hollis
    I'm looking for architectural advice. I have a client who I've built a website for which essentially allows users to view their web cameras remotely. The current flow of data is as follows: User opens page to view web camera image. Javascript script polls url on server (appended with unique timestamp) every 1000ms. Ftp connection is enabled for the camera's ftp user. Web camera opens ftp connection to server. Web camera begins taking photos. Web camera sends photo to ftp server. On image url request: Server reads the latest image uploaded via ftp for that camera from the hard drive. Server deletes any older images from the server. This is working okay at the moment for a small number of users/cameras (about 10 users and around the same number of cameras), but we're starting to worry about the scalability of this approach. My original plan was that instead of having the files read from the local disk, the web server would open up an ftp connection to the ftp server and read the latest images directly from there, meaning we should have been able to scale horizontally fairly easily. But ftp connection establishment times were too slow (mainly due to the fact that PHP out of the box is unable to persist ftp connections) and so we abandoned this approach and went straight for reading from the hard drive. The firmware provider for the cameras states they're able to build an http client which, instead of using ftp to upload the image, could post the image to a web server. This seems plausible enough to me, but I'm looking for some architectural advice. My current thought is a simple Nginx/PHP/Redis stack. The web camera issues post requests with the latest image to Nginx/PHP, and the latest image for that camera is stored in Redis. The clients can then pull the latest image from Redis, which should be extremely quick as the images will always be stored in memory. The data flow would then become: User opens page to view web camera image. Javascript script polls url on server (appended with unique timestamp) every 1000ms. Camera is sent an http request to start posting images to a provided url. Web camera begins taking photos. Web camera sends post requests to the server as fast as it can. On image url request: Server reads latest image from Redis. Server tells Redis to delete the older image. My questions are: Are there any greater overheads in transferring images via HTTP instead of FTP? Is there a simple way to calculate how many cameras we could potentially have streaming at once? Is there any way to prevent potentially DoS'ing our own servers with web camera requests? Is Redis a good solution to this problem? Should I abandon the PHP/Nginx combination and go for something else? Is this proposed solution actually any good? Will adding HTTPS to the mix cause posting the image to become too slow? Thanks in advance Alan
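
    For the proposed Redis approach, a minimal sketch of the two PHP endpoints is shown below, assuming the phpredis extension; the file names, key names and camera parameter are illustrative. The camera would POST each frame as the raw request body, and the polling page would serve whatever frame is currently stored.

        <?php
        // upload.php - hypothetical endpoint the camera POSTs frames to,
        // e.g. POST /upload.php?camera=cam42 with the JPEG as the raw body.
        $camera = preg_replace('/[^a-z0-9_-]/i', '', isset($_GET['camera']) ? $_GET['camera'] : '');
        if ($camera === '') {
            http_response_code(400);
            exit;
        }
        $frame = file_get_contents('php://input');

        $redis = new Redis();
        $redis->connect('127.0.0.1', 6379);
        // SET simply overwrites the previous frame, so no explicit
        // "delete older image" step is needed.
        $redis->set("camera:$camera:latest", $frame);
        $redis->expire("camera:$camera:latest", 60); // let stale frames drop out

        <?php
        // image.php - what the polling Javascript requests every second.
        $camera = preg_replace('/[^a-z0-9_-]/i', '', isset($_GET['camera']) ? $_GET['camera'] : '');
        $redis = new Redis();
        $redis->connect('127.0.0.1', 6379);
        $frame = $redis->get("camera:$camera:latest");
        if ($frame === false) {
            http_response_code(404);
            exit;
        }
        header('Content-Type: image/jpeg');
        header('Cache-Control: no-store');
        echo $frame;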

    Read the article

  • Find and Replace using Perl for a dynamic url based on wordpress post

    - by user1068544
    How do you find the following div using Perl? The url and image location will consistently change based on the post url, so I need to use a wildcard. I must use a regular expression because I am limited in what I can use due to the software I am using. http://community.autoblogged.com/entries/344640-common-search-and-replace-patterns <div class="tweetmeme_button" style="float: right; margin-left: 10px;"> <a href="http://api.tweetmeme.com/share?url=http%3A%2F%2Fjumpinblack.com%2F2011%2F11%2F25%2Fdrake-and-rick-ross-you-only-live-once-ep-mixtape-2011-download%2F"><br /> <img src="http://api.tweetmeme.com/imagebutton.gif?url=http%3A%2F%2Fjumpinblack.com%2F2011%2F11%2F25%2Fdrake-and-rick-ross-you-only-live-once-ep-mixtape-2011-download%2F&amp;source=jumpinblack1&amp;style=compact&amp;b=2" height="61" width="50" /><br /> </a> </div> I tried using <div class="tweetmeme_button" style="float: right; margin-left: 10px;">.*<\/div>
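
    The usual reason the attempted pattern fails is that the dot does not match newlines and ".*" is greedy, so it either stops at the line break or runs past the intended closing tag. A non-greedy match plus the "dot matches newline" modifier is the standard fix; the sketch below uses PHP's Perl-compatible regex functions, and the same pattern works in Perl with the /s flag on a substitution.

        <?php
        // A sketch using PCRE (the same regex dialect as Perl): ".*?" is
        // non-greedy so it stops at the first closing </div>, and the /s
        // modifier lets "." span newlines. $html is assumed to hold the
        // post content being cleaned.
        $pattern = '/<div class="tweetmeme_button"[^>]*>.*?<\/div>/s';
        $cleaned = preg_replace($pattern, '', $html);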

    Read the article

  • post values to external page explicitly PHP

    - by JPro
    Hi, I want to post values to a page through a hyperlink in another page. Is it possible to do that? Let's say I have a page results.php which has the starting line: <?php if(isset($_POST['posted_value'])) { echo "expected value"; // do something with the data } else { echo "no the value expected"; } If from another page, say link.php, I place a hyperlink like this: <a href="results.php?posted_value=1">, will this be accepted by the results page? If instead I replace the above starting line with if(isset($_REQUEST['posted_value'])), will this work? I believe the above hyperlink evaluates to GET, and the only visible difference between GET and POST is that you can see the parameters in the address bar with GET. But is there any other way to place a hyperlink which can post values to a page? Or can we use jquery in place of the hyperlink to POST the values? Can anyone please suggest something on this? Thanks.
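
    A plain hyperlink always issues a GET request, so $_POST['posted_value'] will never be set by it; switching the check to $_REQUEST (or $_GET) is what makes the link work. A minimal sketch of results.php under that assumption is below; an actual POST from a link would need a small form or a jQuery $.post call instead.

        <?php
        // results.php - accepts the value whether it arrives via the query
        // string (a hyperlink, i.e. GET) or via a POSTed form.
        if (isset($_REQUEST['posted_value'])) {
            $value = $_REQUEST['posted_value'];
            echo "expected value";
            // do something with $value
        } else {
            echo "no the value expected";
        }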

    Read the article

  • Is it possible to get multiple forms to work with one ajax post function

    - by Scarface
    Hey guys I have a system where there is one form for each friend you have and I used to have an ajax post function for each form, but I want to save code and was wondering if it was possible to get multiple forms to work with just one post function. If anyone has any advice on how to achieve this I would appreciate it. For example <div id="message"> <form id='submit' class='message-form' method='POST' > <input type='hidden' id='to' value='friend1' maxlength='255' > Subject<br><input type='text' id='subject' maxlength='50'><br> Message<br><textarea id='message2' cols='50' rows='15'></textarea> <input type='submit' id='submitmessage' class='responsebutton' value='Send'> </form> </div> $(document).ready(function(){ $(".message-form").submit(function() { $("#submitmessage").attr({ disabled:true, value:\"Sending...\" }); var to = $('#to').attr('value'); var subject = $('#subject').attr('value'); var message = $('#message2').attr('value'); $.ajax({ type: "POST", url: "messageprocess.php", data: 'to='+ to + '&subject=' + subject + '&message=' + message, success: function(response) { if(response == "OK") { $('.message-form').html("<div id='message'></div>"); $('#message').html("<h2>Email has been sent!</h2>") .append("<p>Please wait...</p>") .hide() .fadeIn(1500, function() { $('#message').append(\"<img id='checkmark' src='images/check.png' />\"); });

    Read the article

  • refresh a <ui:composition> when the j_security_check connection is interrupted (http 408)

    - by José Osuna Barrios
    I have a "j_security_check connection interrupted (http code 408)" and proposed solution is <meta http-equiv="refresh" content="#{session.maxInactiveInterval}"/> by http://stackoverflow.com/a/2141274/1852036 but my page structure is a composition using a template.xhtml and a view.xhtml like a <ui:composition: my template.xhtml: <html ... <f:view ... <h:body ... <ui:insert name="content"> ... my view.xhtml to refresh when session.maxInactiveInterval <ui:composition ... <ui:define name="content"> ... may anyone help me to do this? I want to refresh this <ui:composition view, I can't use <meta http-equiv="refresh" content="#{session.maxInactiveInterval}"/> on template.xhtml because it's used by several views

    Read the article

  • jQuery - $.post() data response not consistent with PHP

    - by Sasha
    jQuery code: var code = $('#code'), id = $('input[name=id]').val(), url = '<?php echo base_url() ?>mali_oglasi/mgl_check_paid'; code.on('focusout', function(){ var code_value = $(this).val(); if(code_value.length != 16 ) { if ($('p[role=code_msg]').length != 0 ) $('p[role=code_msg]').remove() ; code.after('<p role=code_msg>Pogrešan kod je unešen.</p>'); } else { if ($('p[role=code_msg]').length != 0 ) $('p[role=code_msg]').remove() ; $.post(url, {id : id, code : code_value}, function(data){ if(data != 'TRUE'){ code.after('<p role=code_msg>Uneti kod je neispravan.</p>'); } else { code.after('<p role=code_msg>Status malog oglasa je promenjen.</p>'); code.after(create_image()); code.remove(); } }); } }); PHP (Codeigniter) code: function mgl_check_paid() { $code = $this->input->post('code'); $id = $this->input->post('id'); echo ($this->mgl->mgl_check_paid($code, $id)) ? 'TRUE' : 'FALSE'; } The problem is the following: when the code is sent and it is correct, the PHP part will echo TRUE and JS should execute the ELSE part (after the post), but for some reason it is not doing that (it is executing the first part of the statement). What is wrong with this code?
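
    A very common cause of data != 'TRUE' matching unexpectedly is stray whitespace in the response (for example a newline after a closing ?> tag elsewhere in the project), so the browser actually receives "TRUE\n". Comparing $.trim(data) on the client side helps, and on the server the controller can emit only the token, as in the hedged sketch below (names taken from the question):

        <?php
        // CodeIgniter controller method from the question, adjusted so the
        // response body contains exactly 'TRUE' or 'FALSE' and nothing else.
        function mgl_check_paid()
        {
            $code = $this->input->post('code');
            $id   = $this->input->post('id');

            echo $this->mgl->mgl_check_paid($code, $id) ? 'TRUE' : 'FALSE';
            // Stop here so no view output or trailing whitespace is appended.
            exit;
        }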

    Read the article

  • Last (I think and hope) problems configuring SSL certificate with Apache and VirtualHosts

    - by user65567
    Finally I set up apache2 to use a single certificate for all subdomains. [...] # Go ahead and accept connections for these vhosts # from non-SNI clients SSLStrictSNIVHostCheck off # Apache setup which will listen for and accept SSL connections on port 443. Listen 443 # Listen for virtual host requests on all IP addresses NameVirtualHost *:443 # Because this virtual host is defined first, it will # be used as the default if the hostname is not received # in the SSL handshake, e.g. if the browser doesn't support # SNI. <VirtualHost *:443> ServerName domain.localhost DocumentRoot "/Users/<my_user_name>/Sites/domain/public" <Directory "/Users/<my_user_name>/Sites/domain/public"> Order allow,deny Allow from all </Directory> # SSL Configuration SSLEngine on ... </VirtualHost> <VirtualHost *:443> ServerName subdomain1.domain.localhost DocumentRoot "/Users/<my_user_name>/Sites/subdomain1/public" <Directory "/Users/<my_user_name>/Sites/subdomain1/public"> Order allow,deny Allow from all </Directory> # SSL Configuration SSLEngine on ... </VirtualHost> <VirtualHost *:443> ServerName subdomain2.domain.localhost DocumentRoot "/Users/<my_user_name>/Sites/subdomain2/public" <Directory "/Users/<my_user_name>/Sites/subdomain2/public"> Order allow,deny Allow from all </Directory> # SSL Configuration SSLEngine on ... </VirtualHost> So, for example, I can correctly access https://subdomain1.domain.localhost https://subdomain2.domain.localhost ... Now, anyway, I have problems accessing http://subdomain1.domain.localhost http://subdomain2.domain.localhost ... Since I use Mac OS, on accessing the "http:" version, I get a default page "Your website." (instead of an error). Why does this happen?

    Read the article

  • SSH over HTTPS with proxytunnel and nginx

    - by Thermionix
    I'm trying to setup an ssh over https connection using nginx. I haven't found any working examples, so any help would be appreciated! ~$ cat .ssh/config Host example.net Hostname example.net ProtocolKeepAlives 30 DynamicForward 8118 ProxyCommand /usr/bin/proxytunnel -p ssh.example.net:443 -d localhost:22 -E -v -H "User-Agent: Mozilla/4.0 (compatible; MSIE 6.0; Win32)" ~$ ssh [email protected] Local proxy ssh.example.net resolves to 115.xxx.xxx.xxx Connected to ssh.example.net:443 (local proxy) Tunneling to localhost:22 (destination) Communication with local proxy: -> CONNECT localhost:22 HTTP/1.0 -> Proxy-Connection: Keep-Alive -> User-Agent: Mozilla/4.0 (compatible; MSIE 6.0; Win32) <- <html> <- <head><title>400 Bad Request</title></head> <- <body bgcolor="white"> <- <center><h1>400 Bad Request</h1></center> <- <hr><center>nginx/1.0.5</center> <- </body> <- </html> analyze_HTTP: readline failed: Connection closed by remote host ssh_exchange_identification: Connection closed by remote host Nginx config on the server; ~$ cat /etc/nginx/sites-enabled/ssh upstream tunnel { server localhost:22; } server { listen 443; server_name ssh.example.net; location / { proxy_pass http://tunnel; proxy_set_header Host $host; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_redirect off; } ssl on; ssl_certificate /etc/ssl/certs/server.cer; ssl_certificate_key /etc/ssl/private/server.key; } ~$ tail /var/log/nginx/access.log 203.xxx.xxx.xxx - - [08/Feb/2012:15:17:39 +1100] "CONNECT localhost:22 HTTP/1.0" 400 173 "-" "-"

    Read the article

  • How to set up tunneling of specific domains to another server

    - by Peter Smit
    I am working at a university as a research assistant. Often I would like to connect from home to university resources over http or ssh, but they are blocked from outside access. Therefore, they have a front-end ssh server which we can ssh into and from there to other hosts. For http access they advise setting up an ssh tunnel like this: ssh -L 1234:proxyserver.university.fi:8080 publicsshserver.university.fi and pointing the proxy settings of your browser to port 1234. All nice and working, but I would not like to let all my other internet traffic go over this proxy server, and every time I want to connect to the university I have to do these steps again. What I would like: - Set up an ssh tunnel every time I log in to my computer. I have a certificate, so no passwords are needed - Have a way to always redirect some wildcard domains through the ssh server first, so that when I type intra.university.fi in my browser, the request transparently goes through the tunnel. The same when I want to ssh into another resource within the university. Is this possible? For the http part I think I should maybe set up my own local transparent proxy to handle this easily. How about the ssh part?

    Read the article

  • Forward real IP through Haproxy => Nginx => Unicorn

    - by Hendrik
    How do I forward the real visitor's IP address to Unicorn? The current setup is: Haproxy => Nginx => Unicorn. How can I forward the real IP address from Haproxy, to Nginx, to Unicorn? Currently it is always only 127.0.0.1. I read that the X- headers are going to be deprecated. http://tools.ietf.org/html/rfc6648 - how will this impact us? Haproxy Config: # haproxy config defaults log global mode http option httplog option dontlognull option httpclose retries 3 option redispatch maxconn 2000 contimeout 5000 clitimeout 50000 srvtimeout 50000 # Rails Backend backend deployer-production reqrep ^([^\ ]*)\ /api/(.*) \1\ /\2 balance roundrobin server deployer-production localhost:9000 check Nginx Config: upstream unicorn-production { server unix:/tmp/unicorn.ordify-backend-production.sock fail_timeout=0; } server { listen 9000 default; server_name manager.ordify.localhost; root /home/deployer/apps/ordify-backend-production/current/public; access_log /var/log/nginx/ordify-backend-production_access.log; rewrite_log on; try_files $uri/index.html $uri @unicorn; location @unicorn { proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_set_header Host $http_host; proxy_set_header X-Real-IP $remote_addr; proxy_redirect off; proxy_pass http://unicorn-production; proxy_connect_timeout 90; proxy_send_timeout 90; proxy_read_timeout 90; } error_page 500 502 503 504 /500.html; client_max_body_size 4G; keepalive_timeout 10; }

    Read the article

  • Unable to browse to apache service, Service is running

    - by Jeff
    Summary I have a very peculiar problem. I am not able to open the "It Works!" page after installing a fresh server with apache. I am able to ssh to the box (from outside the network). Apache seems to be running on my Centos6.4x86_64 box just fine. Nothing useful in /var/logs/httpd/*. What am I missing? The setup I am outside the network right now. The "server" is a VM on my home computer running bridged mode. public ip: A.B.C.D Host: 192.168.1.5 VM: 192.168.1.8 I have a verizon fios router that is forwarding ports 22, 80, and 8888 to the VM. I am able to ssh over port 22, but I am not able to browse to the public URL over port 80. so A.B.C.D:22 is working, but http://A.B.C.D:80 is not. What I've tried nmap to see if it is listening: nmap -sT -O localhost Starting Nmap 5.51 ( http://nmap.org ) at 2013-10-25 11:10 EDT Nmap scan report for localhost (127.0.0.1) Host is up (0.000040s latency). Other addresses for localhost (not scanned): 127.0.0.1 Not shown: 996 closed ports PORT STATE SERVICE 22/tcp open ssh 25/tcp open smtp 80/tcp open http 3306/tcp open mysql I tried going to it locally (lynx) and it does work. So, is the problem in my ports?

    Read the article

  • Firefox takes a really long time to load some sites on Ubuntu

    - by Dave
    Hello guys, I have an issue here. Some sites - just a few - take a really long time to load in Firefox. One example is A List Apart (http://www.alistapart.com/), which takes more than 30 minutes (yes, minutes, not seconds). In Opera, or even through a telnet session, the problematic sites load without problems, fast as expected. I am using Ubuntu 8.04, running Firefox 3.6.3 downloaded from the mozilla site, with a 10M ADSL connection. I tried many tweaks I found googling, like disabling IPv6 and changing the http pipelining settings in FF's about:config. None worked. I also used Firebug to find which phase of the negotiation is the bottleneck. Findings are in the screenshot. Well guys, any idea what the issue is? And how to solve it? I repeat, this only happens with Firefox (3.6.3 and prior versions), for a few sites only (even sites with many more requests, images, javascripts and stylesheets work fine), and the http pipelining and IPv6 tweaks in about:config didn't work. Thanks

    Read the article

  • Unusual HEAD requests to nonsense URLs from Chrome

    - by JeremyDWill
    I have noticed unusual traffic coming from my workstation the last couple of days. I am seeing HEAD requests sent to random character URLs, usually three or four within a second, and they appear to be coming from my Chrome browser. The requests repeat only three or four times a day, but I have not identified a particular pattern. The URL characters are different for each request. Here is an example of the request as recorded by Fiddler 2: HEAD http://xqwvykjfei/ HTTP/1.1 Host: xqwvykjfei Proxy-Connection: keep-alive Content-Length: 0 User-Agent: Mozilla/5.0 (Windows; U; Windows NT 6.1; en-US) AppleWebKit/534.13 (KHTML, like Gecko) Chrome/9.0.597.98 Safari/534.13 Accept-Encoding: gzip,deflate,sdch Accept-Language: en-US,en;q=0.8 Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.3 The response to this request is as follows: HTTP/1.1 502 Fiddler - DNS Lookup Failed Content-Type: text/html Connection: close Timestamp: 08:15:45.283 Fiddler: DNS Lookup for xqwvykjfei failed. No such host is known I have been unable to find any information through Google searches related to this issue. I do not remember seeing this kind of traffic before late last week, but it may be that I just missed it before. The one modification I made to my system last week that was unusual was adding the Delicious add-in/extension to both IE and Chrome. I have since removed both of these, but am still seeing the traffic. I have run virus scan (Trend Micro) and HiJackThis looking for malicious code, but I have not found any. I would appreciate any help tracking down the source of the requests, so I can determine if they are benign, or indicative of a bigger problem. Thanks.

    Read the article

  • How to link a sub taxonomy to products in WordPress

    - by rahul251
    I have created a custom post type called Internal Products. I have a page, page-internal-products.php, which lists all the custom taxonomies for the custom post type Internal Products. Clicking on one of the taxonomies takes me to a page which lists the sub taxonomies for that particular parent taxonomy, for which I have created the page called taxonomy-internalproducts_categories.php. On clicking on a sub taxonomy, I need to go to a page which lists all the products for this sub taxonomy. How can I achieve this?
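
    One hedged approach, sketched below with assumed template code: WordPress already routes the sub taxonomy URLs to the same taxonomy-internalproducts_categories.php template, so that template can check whether the queried term has children and either list links to the sub taxonomies or fall through to the normal loop, which already contains the Internal Products assigned to the sub taxonomy.

        <?php
        // Sketch for taxonomy-internalproducts_categories.php. The taxonomy
        // name comes from the question; everything else is illustrative.
        $term     = get_queried_object();   // the taxonomy term being viewed
        $children = get_terms('internalproducts_categories', array(
            'parent'     => $term->term_id,
            'hide_empty' => false,
        ));

        if (!empty($children) && !is_wp_error($children)) {
            // Parent taxonomy: link to each sub taxonomy (same template again).
            echo '<ul>';
            foreach ($children as $child) {
                printf('<li><a href="%s">%s</a></li>',
                    esc_url(get_term_link($child)), esc_html($child->name));
            }
            echo '</ul>';
        } else {
            // Sub taxonomy: the main query already holds its Internal Products,
            // so the standard loop lists them.
            while (have_posts()) {
                the_post();
                printf('<h2><a href="%s">%s</a></h2>',
                    esc_url(get_permalink()), esc_html(get_the_title()));
            }
        }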

    Read the article

  • how to refresh multiple divs after posting using jquery

    - by oo
    I have a screen with multiple little widgets (all with different divs around them). I have one form, and when I post (using jquery) it currently updates just that single form using ajax. I want two other divs (that are outside the form) to refresh as well. What is the best way to trigger a refresh of multiple different divs in a single jquery ajax post callback?

    Read the article

  • page sends file to curl, I want to get the download link instead

    - by Ben
    There is a page that I need to post a password to, and I then get a file to download. The post goes to the same page address; it loads again and pops up the download manager. Now I want to do the same with curl. I posted the data to the url and it sends me the file back, but I don't want my script to download the whole file; I only want to get a link so I can download it myself. How can I do that?
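
    If the server streams the file straight back in the POST response, there may be no separate link to extract; what a script can do is read the response headers (and the effective, post-redirect URL) and stop before the body is downloaded. A hedged PHP/cURL sketch of that idea, with a made-up URL and field name:

        <?php
        // Hypothetical URL and field name - the real ones come from the page's form.
        $url  = 'http://example.com/download-page';
        $post = http_build_query(array('password' => 'secret'));

        $headers = '';
        $ch = curl_init($url);
        curl_setopt_array($ch, array(
            CURLOPT_POST           => true,
            CURLOPT_POSTFIELDS     => $post,
            CURLOPT_FOLLOWLOCATION => true,
            // Collect the response headers ourselves ...
            CURLOPT_HEADERFUNCTION => function ($ch, $line) use (&$headers) {
                $headers .= $line;
                return strlen($line);
            },
            // ... and abort as soon as body data arrives: returning a length
            // different from the chunk size makes curl stop the transfer.
            CURLOPT_WRITEFUNCTION  => function ($ch, $chunk) {
                return 0;
            },
        ));
        curl_exec($ch);   // aborts with a write error once headers are in - expected
        $finalUrl = curl_getinfo($ch, CURLINFO_EFFECTIVE_URL);
        curl_close($ch);

        echo $headers;    // Content-Disposition etc. describe the offered file
        echo $finalUrl;   // where the file was actually served from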

    Read the article

  • Extra GET Request on META Refresh Redirect (CGI-C)

    - by Koray Alkan
    I have a form (on page form.html) submitting with the POST method to a CGI-C page - let's call it form.cgi - and what form.cgi does is redirect the user back to the previous page (form.html), with query strings appended, using an HTTP-EQUIV Refresh META after 5 seconds. However, when I monitor the Web server's access.log, although I see the appropriate POST request for form.cgi, there is an additional GET request for form.cgi again, after 5 seconds, just before the user is redirected to form.html. Has anyone faced such an issue?

    Read the article

  • Can I create an ASP.NET ImageButton that doesn't postback?

    - by Giffyguy
    I'm trying to use the ImageButton control for client-side script execution only. I can specify the client-side script to execute using the OnClientClick property, but how do I stop it from trying to post every time the user clicks it? There is no reason to post when this button is clicked. I've set CausesValidation to False, but this doesn't stop it from posting.

    Read the article

  • how to write an artificial request.

    - by Bunny Rabbit
    How can I construct an artificial request to log in to Twitter, or any site for that matter that accepts POST forms? What I've been trying is to extract the headers and POST request parameters from the original request (directed at the action attribute of the form) and copy them to the outgoing url object that I am making, but it just won't work. And I am aware of the APIs and I don't want to use them; I am trying this to write a web proxy site.
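
    As a rough illustration of what a scripted login POST needs (session cookies, a plausible User-Agent, and any hidden CSRF token the form carries, which sites like Twitter require), here is a hedged PHP/cURL sketch; the endpoint and field names are invented and would have to be read from the target form's HTML.

        <?php
        // Hypothetical login endpoint and field names - take the real ones from
        // the form's action attribute and its input names, including hidden fields.
        $loginUrl  = 'https://example.com/sessions';
        $csrfToken = 'value-scraped-from-the-login-form'; // most login forms embed one
        $fields    = http_build_query(array(
            'username'           => 'myuser',
            'password'           => 'mypassword',
            'authenticity_token' => $csrfToken,
        ));

        $ch = curl_init($loginUrl);
        curl_setopt_array($ch, array(
            CURLOPT_POST           => true,
            CURLOPT_POSTFIELDS     => $fields,
            CURLOPT_RETURNTRANSFER => true,
            CURLOPT_FOLLOWLOCATION => true,               // follow the post-login redirect
            CURLOPT_COOKIEJAR      => '/tmp/cookies.txt', // keep the session cookie
            CURLOPT_COOKIEFILE     => '/tmp/cookies.txt', // and send it back on later requests
            CURLOPT_USERAGENT      => 'Mozilla/5.0',      // some sites reject empty agents
        ));
        $response = curl_exec($ch);
        curl_close($ch);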

    Read the article
