Search Results

Search found 15272 results on 611 pages for 'request for enhancement'.


  • Apache Web Server character encoding

    - by OBY
    I've recently transferred my webapp from my localhost (LH) to a VPS, and have had Hebrew character-encoding problems since. Whenever I send a request containing a Hebrew character, it is saved to the DB as "?????". My localhost config was Tomcat 6, MySQL, and CentOS 6.2, open to the web. In the VPS environment I'm behind an Apache web server, and the rest is much the same (though I haven't changed anything in its installation). Note that I had this problem once before, on my localhost, when the request was sent from IE/Chrome (not FF!); the solution was to apply a filter on the context and set the character encoding to UTF-8. My webapp content encoding is UTF-8, the MySQL server is set to utf8 using charset utf8;, and my CentOS locale is set to iw_IL.UTF8 via export LANG=iw_IL.UTF8. When I run locale, the bash output looks correct. Any suggestions?
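
    A minimal sketch of the two settings that most often cause this, on the assumption that Tomcat still serves the app behind Apache and that the connector, database name, and JDBC URL below are placeholders: decode request parameters as UTF-8 on the Tomcat connector (the filter already covers POST bodies), and force UTF-8 on the JDBC connection so the driver doesn't transcode Hebrew characters to "?".

        <!-- conf/server.xml: decode URI/query parameters as UTF-8 (adjust port/protocol) -->
        <Connector port="8080" protocol="HTTP/1.1" URIEncoding="UTF-8" />

        # JDBC URL for MySQL Connector/J, forcing UTF-8 on the connection
        jdbc:mysql://localhost:3306/mydb?useUnicode=true&characterEncoding=UTF-8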

    Read the article

  • Ping myself, works with ipv6 not ipv4 in Windows 7

    - by user68546
    Hi! I've tried to solve the following problem with no luck and I need some professional help. The following works: pinging all the computers I tried in the domain without a problem; pinging myself via localhost, which resolves to ::1; pinging myself with my assigned IPv6 address; Internet access. The following does not work: no one can ping me (request timeout) by computer name, IPv4, or IPv6; I cannot ping myself with my assigned IPv4 address or 127.0.0.1 (request timeout). I tried enabling/disabling TCP/IPv4 with the same result, turned off Windows Firewall, and added an inbound rule to allow ICMP (just in case), with no change. Does anyone have an idea what the issue could be? Any help would be most appreciated!
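
    For reference, the inbound rule mentioned above is usually created like this on Windows 7; the rule name is arbitrary, and this is only a sketch on the assumption that Windows Firewall (rather than something lower in the stack or third-party security software) is what drops the IPv4 echo requests:

        netsh advfirewall firewall add rule name="Allow ICMPv4 Echo" protocol=icmpv4:8,any dir=in action=allow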

    Read the article

  • Using command line to connect to a wireless network with an http login

    - by Shane
    I'm trying to connect to a wifi network which hijacks all requests and redirects you to a page where you have to agree to the terms of use before it lets you reach the actual outside world. This is a pretty common practice, and usually doesn't pose much of a problem. However, I've got a computer running Ubuntu 9.10 server with no windowing system. How can I use the command line to agree to the terms of use? I don't have internet access on the computer to download packages via apt-get or anything like that. Sure, I can think of any number of workarounds, but I suspect there's an easy way to use wget or curl or something. Basically, I need a command-line solution for sending an HTTP POST request, essentially clicking a button. For future reference, it'd also be helpful to know how to send a POST request with, say, a username and password, if I ever find myself in that situation in another hotel or airport.
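
    A minimal sketch with curl, assuming the portal's form posts to a URL like the one below; the URL and field names are hypothetical and have to be read out of the login page's HTML first:

        # fetch the page the requests are being redirected to, to find the form action and field names
        curl -s http://192.168.1.1/login.html

        # submit the "I agree" form; the URL and field names are placeholders taken from that HTML
        curl -d "accept=yes" -d "username=guest" -d "password=guest" http://192.168.1.1/login.cgi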

    Read the article

  • Log incoming requests on Ubuntu (ports 80, 443)

    - by Maxim Eliseev
    We have Tomcat running on an Ubuntu server. It runs a web service that is open to the internet. Sometimes it gets a sudden spike of traffic and goes down. There is nothing unusual in the Tomcat access logs; I guess this is because some of the requests are so 'heavy' that they never finish and hence are never recorded in the Tomcat access logs. Is there a way to configure Ubuntu to log incoming requests in the following format? Date, time, URL (with query-string params), IP address (of client). There should be one line per request. Each request should be logged before it is executed. Only incoming requests to ports 80 and 443 should be logged.
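
    Kernel-level logging can't see URLs or query strings (those live in the HTTP payload), but connection-level logging with iptables does record the client IP and a timestamp for every new connection to ports 80 and 443 before the request is processed; a sketch, with an arbitrary log prefix:

        # log every new TCP connection to 80/443 to the kernel log (/var/log/kern.log or dmesg)
        iptables -I INPUT -p tcp -m multiport --dports 80,443 -m state --state NEW -j LOG --log-prefix "HTTP-IN: "

    Capturing the full URL and query string before execution would still need something in front of Tomcat, such as an Apache or nginx reverse proxy writing its own access log, or a packet capture with tcpdump.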

    Read the article

  • How to filter Varnish logs based on XID?

    - by Martijn Heemels
    I'm running into infrequent 503 errors which appear hard to pinpoint. Varnishlog is driving me mad, since I can't seem to get the information I want out of it. I'd like to see both the client- and backend-communications as seen by Varnish. I thought the XID number, which is logged on Varnish's default error page, would allow me to filter the exact request out of the logging buffer. However, no combination of varnishlog parameters gives me the output I need. The following only shows the client-side communication: varnishlog -d -c -m ReqStart:1427305652 while this only shows the resulting backend communication: varnishlog -d -b -m TxHeader:1427305652 Is there a one-liner to show the entire request?
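
    One approach that often works with the Varnish 2/3 varnishlog, where -o groups log records by request and accepts a tag/regex pair to filter on, is to match the XID in a tag that carries it on each side; this is a sketch, and note that the backend transaction has its own XID but receives the client XID in the X-Varnish request header:

        # whole client-side transaction for that XID, grouped by request
        varnishlog -d -c -o ReqStart 1427305652

        # matching backend transaction (Varnish forwards the client XID in its X-Varnish header)
        varnishlog -d -b -o TxHeader 1427305652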

    Read the article

  • Can't set up Usermin correctly to allow users to login outside of local network, what am I missing?

    - by thecraic
    I'm fairly new to running a server, and the biggest problem I'm having at the moment is getting Usermin set up so that it is accessible from outside the LAN. Other people who use it told me that all I need to do is browse to url:20000 to reach the login screen, but that doesn't work. I have also tried ip:20000, which leads nowhere. Instead I get the error message: Error - Bad Request. This web server is running in SSL mode. Try the URL https://hostname:10000/ instead. (where hostname is my server's hostname). I know it must be a configuration issue, but I have checked all my settings and, as far as I can tell, the ports aren't blocked anywhere: the correct ports are forwarded on my router, and my server firewall doesn't block them either. Is there anything I am missing? Any help would be appreciated and I will add more information upon request. Thank you.
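
    The error mentions port 10000, which is Webmin's default, so the request may be answered by Webmin rather than Usermin, or fetched over plain http against an SSL listener. A sketch of what to check on the Usermin side, assuming a standard install where its miniserv reads /etc/usermin/miniserv.conf (the values shown are the usual defaults, not verified against this system):

        # /etc/usermin/miniserv.conf (relevant lines)
        port=20000     # Usermin listens here
        ssl=1          # served over SSL, so browse to https://host:20000/ (not http)

    If those match, the "Bad Request ... running in SSL mode" message usually just means the page was requested with http:// instead of https:// on that port.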

    Read the article

  • wamp server REDIRECT_STATUS

    - by user143039
    I have noticed that my remote server returns $_SERVER[REDIRECT_STATUS] with every request, for example [REDIRECT_STATUS] = 200, but my localhost WAMP server does not. Interestingly, when I manually set a header, e.g. header('HTTP/1.1 302 Redirect This');, the variable $_SERVER[REDIRECT_STATUS] is still not set, even though I can see the headers, including the one I set, with HttpFox. $_SERVER[REDIRECT_STATUS] is set when .htaccess triggers an ErrorDocument, like ErrorDocument 404 for example. Any ideas how to get the WAMP server to set $_SERVER[REDIRECT_STATUS] with every request?
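
    REDIRECT_STATUS is only populated by Apache when an internal redirect or subrequest occurs (mod_rewrite substitutions, ErrorDocument, CGI wrappers), which is why header() in PHP doesn't create it. A hedged PHP-side fallback, assuming a status value is what's actually needed rather than Apache's variable specifically:

        <?php
        // Fall back to the response code PHP will send when Apache performed
        // no internal redirect (so REDIRECT_STATUS was never set).
        $status = isset($_SERVER['REDIRECT_STATUS'])
            ? (int) $_SERVER['REDIRECT_STATUS']
            : http_response_code();   // PHP >= 5.4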

    Read the article

  • mod_rewrite ssl redirect

    - by Thomas
    Hi all, I want to use mod_rewrite to ensure that certain pages are served over SSL and all others normally, but I am having trouble getting it to work. This works (redirect to SSL when the request URI is for users or cart): RewriteCond %{SERVER_PORT} 80 RewriteCond %{REQUEST_URI} users [OR] RewriteCond %{REQUEST_URI} cart RewriteRule ^(.*)$ https://secure.host.tld/$1 [R,L] Then, so that a user doesn't keep browsing the rest of the site over SSL, I tried the following for the other URIs, but it doesn't work (when the port is 443 and the request URI is not one of the URIs that need SSL, redirect back to the normal host): RewriteCond %{SERVER_PORT} 443 RewriteCond %{REQUEST_URI} !^/users [OR] RewriteCond %{REQUEST_URI} !group RewriteRule ^/?(users|groups)(.*)$ http://host.tld/$1 [R,L] Any help? Thanks
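
    A sketch of one likely fix (host names are placeholders): with two negated conditions joined by [OR], at least one of them is true for almost every URI, so the second block fires on every request; the conditions should be ANDed (the default between RewriteCond lines), and the rule pattern should match everything that is not an SSL-only area rather than only users/groups.

        # Send anything that is not an SSL-only area back to plain HTTP.
        RewriteCond %{SERVER_PORT} 443
        RewriteCond %{REQUEST_URI} !^/users
        RewriteCond %{REQUEST_URI} !^/cart
        RewriteRule ^(.*)$ http://host.tld/$1 [R,L]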

    Read the article

  • Odd SVN checkout failures occur frequently on VMware virtual machines

    - by snowballhg
    We've recently been experiencing seemingly random SVN checkout failures on our Hudson build system. Google search has failed me; I'm hoping the super user community can help me out :-) We are occasionally receiving the following SVN error when our Hudson build jobs checkout source via the Hudson Subversion plug-in (which uses svn kit): ERROR: Failed to check out http://server/svnroot/trunk org.tmatesoft.svn.core.SVNException: svn: Processing REPORT request response failed: XML document structures must start and end within the same entity. (/svnroot/!svn/vcc/default) svn: REPORT request failed on '/svnroot/!svn/vcc/default' This issue seems to only occur when checking out from our Virtual Machines (Windows XP, Fedora 9, Fedora 12) using Hudson's SVN Plug-in. Systems that use the traditional SVN client seem to work. SVN Server version: 1.6.6 Hudson version: 1.377 Hudson SVN Plugin Version: 1.17 Has anyone dealt with this issue, or have any suggestions? Thanks

    Read the article

  • Returning "200 OK" in Apache on HTTP OPTIONS requests

    - by i..
    I'm attempting to implement cross-domain HTTP access control without touching any code. I've got my Apache(2) server returning the correct access-control headers with this block: Header set Access-Control-Allow-Origin "*" Header set Access-Control-Allow-Methods "POST, GET, OPTIONS" I now need to prevent Apache from executing my code when the browser sends an HTTP OPTIONS request (it's stored in the REQUEST_METHOD environment variable) and instead return 200 OK. How can I configure Apache to respond "200 OK" when the request method is OPTIONS? I've tried this mod_rewrite block, but the access-control headers are lost: RewriteEngine On RewriteCond %{REQUEST_METHOD} OPTIONS RewriteRule ^(.*)$ $1 [R=200,L]
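
    A commonly suggested variant, sketched here on the assumption that mod_headers and mod_rewrite are both enabled: plain Header set is skipped for some internally generated responses, whereas the always condition applies the headers to every response, including the one produced by the R=200 rule.

        # Apply the CORS headers to every response, including internally generated ones.
        Header always set Access-Control-Allow-Origin "*"
        Header always set Access-Control-Allow-Methods "POST, GET, OPTIONS"

        # Short-circuit OPTIONS requests with a 200 instead of running the application.
        RewriteEngine On
        RewriteCond %{REQUEST_METHOD} OPTIONS
        RewriteRule ^(.*)$ $1 [R=200,L]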

    Read the article

  • Binding services to localhost and using SSH tunnels - can requests be forged?

    - by Martin
    Given a typical webserver with Apache2, common PHP scripts and a DNS server, would it be sufficient from a security perspective to bind administration interfaces like phpMyAdmin to localhost and access them via SSH tunnels? Or could somebody who knew, for example, that phpMyAdmin (or any other commonly available script) is listening on a certain port on localhost easily forge requests that would be executed if no other authentication were present? In other words: could somebody somewhere on the internet easily forge a request such that the webserver would accept it, thinking it originated from 127.0.0.1, if the server is listening on 127.0.0.1 only? If there is a risk, could it be dealt with somehow at a lower level than the application, e.g. with iptables? The idea being that if someone found a weakness in a PHP script or Apache, the network would still block the request because it did not arrive via an SSH tunnel.
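
    Packets arriving from the internet with a 127.0.0.0/8 source address should already be discarded by the kernel's martian/spoofing checks, but an explicit belt-and-braces rule is cheap; a sketch, assuming iptables and that the loopback interface is lo:

        # Accept genuine loopback traffic, then drop anything claiming a
        # 127.0.0.0/8 source that arrives on any other interface.
        iptables -A INPUT -i lo -j ACCEPT
        iptables -A INPUT ! -i lo -s 127.0.0.0/8 -j DROP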

    Read the article

  • How to choose between using a Domain Event, or letting the application layer orchestrate everything

    - by Mr Happy
    I'm taking my first steps into domain-driven design, bought the blue book and all, and I find myself seeing three ways to implement a certain solution. For the record: I'm not using CQRS or Event Sourcing. Let's say a user request comes into the application service layer. The business logic for that request is (for whatever reason) separated into a method on an entity and a method on a domain service. How should I go about calling those methods? The options I have gathered so far are: let the application service call both methods; use method injection/double dispatch to inject the domain service into the entity, letting the entity do its thing and then call the method on the domain service (or the other way around, letting the domain service call the method on the entity); raise a domain event in the entity method, a handler of which calls the domain service. (The kind of domain events I'm talking about are: http://www.udidahan.com/2009/06/14/domain-events-salvation/) I think these are all viable, but I'm unable to choose between them. I've been thinking about this for a long time and I've come to a point where I no longer see the semantic differences between the three. Do you know of some guidelines on when to use what?
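
    For the third option, a minimal PHP sketch (PHP 8.1+, hypothetical class and method names, not tied to any framework) of raising a domain event from the entity and letting a handler call the domain service:

        <?php
        // All names below are hypothetical; this only illustrates option 3.

        final class OrderShipped                 // the domain event: a plain value object
        {
            public function __construct(public readonly int $orderId) {}
        }

        final class DomainEvents                 // tiny static dispatcher, in the spirit of Udi Dahan's article
        {
            private static array $handlers = [];

            public static function listen(string $eventClass, callable $handler): void
            {
                self::$handlers[$eventClass][] = $handler;
            }

            public static function raise(object $event): void
            {
                foreach (self::$handlers[get_class($event)] ?? [] as $handler) {
                    $handler($event);
                }
            }
        }

        final class ShippingService              // the domain service
        {
            public function notifyWarehouse(int $orderId): void
            {
                echo "Warehouse notified for order {$orderId}\n";
            }
        }

        final class Order                        // the entity only raises the event
        {
            public function __construct(private int $id) {}

            public function ship(): void
            {
                // entity-local business logic runs here first
                DomainEvents::raise(new OrderShipped($this->id));
            }
        }

        // Wiring happens once at bootstrap; the entity never references the service.
        $service = new ShippingService();
        DomainEvents::listen(OrderShipped::class, fn (OrderShipped $e) => $service->notifyWarehouse($e->orderId));

        (new Order(42))->ship();                 // prints: Warehouse notified for order 42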

    Read the article

  • How can I make my browser(s) finish AJAX requests instead of stopping them when I switch to another page?

    - by Tom Wijsman
    I usually need to deal with things on a page right before switching to another page; this ranges from liking or upvoting a comment or post up to "an important action", and it doesn't always come with feedback on whether the action actually went through. This is a huge problem! I assume the action will proceed once I start the particular AJAX request, but because I switch to another page it doesn't actually happen, since the AJAX request gets aborted. Several times this has left me coming back to the page and seeing that my action didn't take place at all; to give you an idea how bad this is, it even happened once when commenting on Super User! Is there a way to tell my browser not to drop these AJAX connections but simply let them finish?

    Read the article

  • Setting up a transparent proxy with only one box.

    - by Scott Chamberlain
    I am playing around with transparent proxies; unfortunately I do not have two machines to test with. The current way I am doing things is: the program makes a request to a computer on port 80, and I use iptables -t nat -A OUTPUT -p tcp --destination-port 80 -j REDIRECT --to-port 1234 to redirect it to the proxy I am playing with. The proxy then sends its outgoing request to port 81 (since all outbound port-80 traffic is being fed back into the proxy), so I want to do something like iptables -t nat -A OUTPUT -p tcp --destination-port 81 -j DNAT --to-destination xxxx:80 The problem lies with the xxxx part. How do I change the destination port without changing the destination IP? Or am I doing this setup completely wrong? I am learning, after all, and constructive criticism is definitely appreciated. The machine I am using is pretty low end, so I would like to not have to create a VM as a second box unless absolutely necessary.
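
    A common alternative that avoids the port-81 detour entirely, sketched here on the assumption that the proxy runs as its own user (proxyuser is a placeholder): exempt the proxy's own outbound traffic from the REDIRECT rule with the owner match, so the proxy can connect to port 80 directly without being looped back into itself.

        # Redirect locally generated port-80 traffic into the proxy on 1234,
        # except for connections made by the proxy's own user (avoids the loop).
        iptables -t nat -A OUTPUT -p tcp --dport 80 -m owner ! --uid-owner proxyuser -j REDIRECT --to-port 1234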

    Read the article

  • Automated VLAN creation with residential Wireless devices

    - by Zephyr Pellerin
    We've got a few WRT devices from Linksys here, and the need has arisen to deploy them in a relatively small environment. However, in the interest of manageability we'd like to be able to automatically VLAN (ideally NOT subnet) every user off from one another. It seems obvious to me that the default firmware isn't capable of this - can OpenWRT/Tomato/DD-WRT support any sort of functionality such that new users are automatically VLANed or otherwise logically separated from other users? It seems like there's an easy iptables or PF solution here, but I've been wrong before. (If that seemed a little ambiguous, here's an example.) User 1 sends a DHCP request to the server, a new VLAN (we'll call it VLAN 1) is created, and the user is placed in that VLAN. Then user 2 sends a DHCP request and is placed in VLAN 2, etc.

    Read the article

  • Ping reply not getting to LAN machines but getting in Linux router Gateway

    - by Kevin Parker
    I have configured Ubuntu 12.04 as a gateway machine. It has two interfaces: eth0 with the static IP 192.168.122.39, and eth1 connected to the modem with the address 192.168.2.3 (via DHCP). IP forwarding is enabled on the router box. The client machine is configured with IP address 192.168.122.5 and gateway 192.168.122.39. Client machines can ping the router box (192.168.122.39), but when they ping 8.8.8.8 the reply never reaches them; in the tcpdump output on the gateway I can see the echo request for 8.8.8.8 but never the echo reply. Is this because traffic from 192.168.122.5 is not being forwarded properly to the 192.168.2.0 network? Can you please help me fix this?
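
    Seeing the echo request leave but never the reply is the classic symptom of missing NAT: 8.8.8.8 answers to 192.168.122.5, an address the modem's network cannot route back. A sketch of the usual fix, assuming eth1 really is the modem-facing interface:

        # make sure forwarding is on
        sysctl -w net.ipv4.ip_forward=1

        # masquerade LAN traffic as it leaves through the modem-facing interface
        iptables -t nat -A POSTROUTING -o eth1 -j MASQUERADE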

    Read the article

  • Apache configuration file visualization/testing

    - by Matt Holgate
    Is there a tool available (or a debug mode built into Apache) that will allow me to interactively test and explain an Apache configuration for a given request? In particular, I'd like to be able to see which directives will apply when requesting a specific URL. For example, the output for the URL http://myserver.com/foo/bar/bar.html might look something like: Allow from 192.168.0.3 <-- From <Location /foo/bar> in myserver.com vhost Require valid user <-- From <Directory /var/www/foo> in global configuration Satisfy any <-- From <File bar.html> in global configuration [Background: why do I want this? The apache merging rules for configuration directives are quite complex to get right. It would be great to have a tool which allows you to check that your rules are doing exactly what you want, and would be a good learning tool]. If there isn't such a tool, is there a debug option in Apache that will log such information for each incoming request?
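
    As far as I know there is no built-in "explain this request" mode, but Apache does ship switches that dump the parsed configuration, which at least shows which vhost and sections a URL will fall into; a sketch (the binary may be apachectl, apache2ctl, or httpd depending on the distribution):

        # show how the virtual hosts were parsed and which one a Host header matches
        apachectl -t -D DUMP_VHOSTS

        # list loaded modules and check configuration syntax
        apachectl -t -D DUMP_MODULES
        apachectl configtest

        # for per-request rewrite detail on Apache 2.4+, raise the rewrite log level:
        # LogLevel rewrite:trace3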

    Read the article

  • How to manage SOAP requests to a pool of VM each listening on a HTTP port with a priority value in these requests?

    - by sputnick
    I have a front-end SOAP web server under Linux. It has to communicate, via HTTP POST requests, with Windows Server VMs that each listen on an HTTP port. The chosen VM should return a report of the task to the SOAP client. The SOAP requests carry a special variable: the priority of the request (a kind of SLA). My question: I'm thinking of using HA software (nginx, HAProxy, Heartbeat...) that can manage priority from this point of view. Is that a relevant approach, or do you think I need to implement a queue myself with some specific development? Example: if I have low-priority SOAP requests in the pipe, the weight given to those VMs should be decreased whenever high-priority SOAP requests arrive at the same time. Any clue would be really appreciated.
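
    Load balancers don't inspect SOAP bodies out of the box, so a common compromise, sketched below with HAProxy and entirely hypothetical names, is to expose the priority as an HTTP header on the client side and route or weight on that; genuine queueing and preemption by priority would still need application-level work.

        frontend soap_in
            bind *:80
            # assumes the SOAP client also copies its priority into an X-Priority header
            acl high_prio hdr(X-Priority) -i high
            use_backend vms_high if high_prio
            default_backend vms_low

        backend vms_high
            server vm1 192.168.10.11:8080 weight 20
            server vm2 192.168.10.12:8080 weight 20

        backend vms_low
            server vm1 192.168.10.11:8080 weight 5
            server vm2 192.168.10.12:8080 weight 5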

    Read the article

  • Running Tomcat 7 and Apache 2 on the same server

    - by Thorn
    Part of my site needs to run over HTTPS, and I'm creating a sub-domain for that part. I have Apache httpd 2 AND Tomcat 7 running on the same server with the same IP; Apache is on port 80, of course, while Tomcat is running on port 8080. Right now I am doing domain forwarding for requests that need to run off Tomcat; for example, mathteamhosting.com/mathApp can forward to mathteamhosting.com:8080/mathApp. I would like to have Tomcat handle the HTTPS requests for that subdomain, and I don't think this forwarding technique can work in this case. How do I set it up so that Tomcat receives the requests on port 443 while Apache handles port 80? To be more specific: http://proctinator.com == request goes to the Apache web server; https://private.proctinator.com == request goes to Tomcat.
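
    If Apache is not listening on 443 at all, Tomcat can bind that port itself; a sketch of the connector, assuming a JKS keystore and noting that binding a port below 1024 requires root, jsvc/authbind, or a firewall redirect from 443 to 8443 (all values are placeholders):

        <!-- conf/server.xml: HTTPS handled entirely by Tomcat -->
        <Connector port="443" protocol="HTTP/1.1"
                   SSLEnabled="true" scheme="https" secure="true"
                   keystoreFile="/path/to/keystore.jks" keystorePass="changeit"
                   clientAuth="false" sslProtocol="TLS" />

    The other common layout is to keep 443 on Apache (mod_ssl) and have the private.* virtual host proxy to Tomcat on 8080 with mod_proxy_http or mod_jk.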

    Read the article

  • How do I capture and playback http web requests against multiple web servers?

    - by KevM
    My overall goal is to not interrupt a production system while capturing HTTP Posts to a web application so that I can reverse engineer the telemetry coming from a closed application. I have control over the transmitter of the HTTP Posts but not the receiving web application. It seems like I need a request "forking" proxy. Sort of a reverse proxy that pushes the request to 2 endpoints, a master and slave, only relaying the response from the master endpoint back to the requester. I am not a server geek so something like this may exist but I don't know the term of art for what I am looking for. Another possibility could be a simple logging proxy. Capture a log of the web requests. Rewrite the log to target my "slave" web application. Playback the log with curl or something. Thank you for your assistance.
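
    A low-impact way to get the capture half without touching the production flow, assuming tcpdump can run on the transmitting host or somewhere on the path (interface name and port are placeholders), is to record the traffic and inspect or replay it later against the test endpoint:

        # capture full packets (-s 0) for the web app's port into a pcap file
        tcpdump -i eth0 -s 0 -w telemetry.pcap 'tcp port 80'

        # later: print the captured requests, including the POST bodies
        tcpdump -r telemetry.pcap -A 'tcp port 80' | less

    The "forking" half could then be approximated by replaying each extracted request with curl against the slave application.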

    Read the article

  • Nginx load distribution and multi-domain SSL

    - by Steve Clark
    I'm researching the best methods for two new parts of our infrastructure, hopefully finding a single solution for both. 1) We're currently running a single application server, and we're going to add an additional application server and load balance between the two. 2) We handle a few thousand domains across the application server(s), and we're looking to support SSL. The best method I've come across so far is using nginx, both for load distribution to the application servers and for its SSL support: if a request is using SSL, nginx accepts the request, terminates SSL, and pipes it to Apache (the app servers). That's all good, but I'm yet to figure out how to let nginx handle multiple domains over SSL. We're potentially looking at using UCC SSL certs, so we can support 150 domains on a single certificate, with each cert on a single IP. I'm new to all of this (my experience is just with physical load balancers and a single domain on SSL), so any advice would be very much appreciated.
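
    A minimal sketch of the nginx side under those assumptions (one UCC certificate per listening IP; certificate paths, IPs, and backend addresses are placeholders): terminate SSL in a per-IP server block and pass the plain-HTTP request to an upstream pool of Apache application servers.

        upstream app_servers {
            server 10.0.0.11:80;
            server 10.0.0.12:80;
        }

        server {
            listen 203.0.113.10:443 ssl;
            # UCC cert covering the domains assigned to this IP
            ssl_certificate     /etc/nginx/ssl/ucc-group1.crt;
            ssl_certificate_key /etc/nginx/ssl/ucc-group1.key;

            location / {
                proxy_set_header Host $host;
                proxy_set_header X-Forwarded-Proto https;
                proxy_pass http://app_servers;
            }
        }

    If the clients that matter support SNI, the certificate can instead vary per server_name block, which avoids burning one IP per certificate.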

    Read the article

  • Cannot access my app's dashboard (iTunes Connect)

    - by Skynext
    For more than a week now I haven't been able to access my app's admin page on iTunes Connect. I'm able to log in to iTunes Connect and to open the "Manage My Apps" section, but when I select a specific app I get the following message: Unable to Process Request. Your request could not be processed. For additional help, send an email to [email protected] I have deleted cookies and browsing data in Safari and tried Chrome and Firefox: nothing. I contacted iTunes Connect support more than a week ago, but nothing has changed. If anyone has experienced the same situation and can help me... Thank you!

    Read the article

  • Long domain lookup on .dev domain inside vmware

    - by skelle
    I'm developing on my MacBook, where I normally have a locally running webserver which works just fine. Now I have to use a VMware image where the webserver is running. I set everything up, and my dev site is running under site.dev inside VMware. I can connect to the webserver, but EVERY request takes a very long time. I've already read that this is related to IPv6 and the way OS X handles /etc/hosts. There I added 192.168.155.42 site.dev, and I have already tried this (Resolving to virtual host very slow on Mac OS X Lion), but my lookup still takes ~30 seconds on every request. What can I do to fix this issue?
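
    One frequently suggested line of attack, on the assumption that the delay is the resolver waiting for an IPv6/AAAA or DNS lookup to time out before it falls back to the /etc/hosts entry, is to flush the directory-services cache after editing /etc/hosts and then time the two halves separately to see where the 30 seconds go; this is a diagnostic sketch, not a verified fix:

        # flush the resolver cache after editing /etc/hosts (one of these, depending on OS X version)
        sudo dscacheutil -flushcache
        sudo killall -HUP mDNSResponder

        # check what the resolver returns for the name, and how long the HTTP request itself takes
        dscacheutil -q host -a name site.dev
        time curl -s -o /dev/null http://site.dev/

    If dscacheutil answers instantly but curl still stalls, the delay is in the request path (VMware networking or the webserver) rather than in name resolution.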

    Read the article

  • Why do Apache access logs show a time-to-serve of zero - timer resolution issue?

    - by Rob
    When going through Apache 2.2 access logs, logging with the %D directive (the time taken to serve the request, in microseconds), I've noticed it's very common for a 200 response with a given number of bytes to have a "time to serve" of zero. For example, a given URL might be requested 10 times in a single day, a 200 response is sent for all of them, and all return, say, 1000 bytes; however, 7 of them have a "time to serve" of zero, while the other 3 have a time to serve of 1 second. Is this simply because the request was served faster than the resolution of the timer Apache uses?
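
    For comparison, a LogFormat sketch that records both timing fields side by side makes the distinction obvious: %T has one-second resolution (sub-second requests log as 0), while %D is microseconds and should rarely be exactly zero; the format name is arbitrary.

        # seconds (%T) and microseconds (%D) side by side, e.g. "0/004213"
        LogFormat "%h %l %u %t \"%r\" %>s %b %T/%D" combined_timing
        CustomLog logs/access_log combined_timing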

    Read the article

  • Architectural advice - web camera remote access

    - by Alan Hollis
    I'm looking for architectural advice. I have a client I've built a website for, which essentially allows users to view their web cameras remotely. The current flow of data is as follows: the user opens a page to view the web camera image; a JavaScript script polls a URL on the server (appended with a unique timestamp) every 1000 ms; an FTP connection is enabled for the camera's FTP user; the web camera opens an FTP connection to the server, begins taking photos, and sends each photo to the FTP server; on an image URL request, the server reads the latest image uploaded via FTP for that camera from the hard drive and deletes any older images. This is working okay at the moment for a small number of users/cameras (about 10 users and around the same number of cameras), but we're starting to worry about the scalability of this approach. My original plan was that instead of having the files read from local disk, the web server would open an FTP connection to the FTP server and read the latest images directly from there, meaning we should have been able to scale horizontally fairly easily. But FTP connection establishment times were too slow (mainly because PHP out of the box is unable to persist FTP connections), so we abandoned this approach and went straight for reading from the hard drive. The firmware provider for the cameras states they're able to build an HTTP client which, instead of using FTP to upload the image, could POST the image to a web server. This seems plausible enough to me, but I'm looking for some architectural advice. My current thought is a simple Nginx/PHP/Redis stack: the web camera issues POST requests with its latest image to Nginx/PHP, and the latest image for that camera is stored in Redis; clients can then pull the latest image from Redis, which should be extremely quick as the images will always be stored in memory. The data flow would then become: the user opens a page to view the web camera image; a JavaScript script polls a URL on the server (appended with a unique timestamp) every 1000 ms; the camera is sent an HTTP request to start posting images to a provided URL; the web camera begins taking photos and sends POST requests to the server as fast as it can; on an image URL request, the server reads the latest image from Redis and tells Redis to delete the older image. My questions are: Are there any greater overheads in transferring images via HTTP instead of FTP? Is there a simple way to calculate how many potential cameras we could have streaming at once? Is there any way to prevent potentially DoS'ing our own servers with web camera requests? Is Redis a good solution to this problem? Should I abandon the PHP/Nginx combination and go for something else? Is this proposed solution actually any good? Will adding HTTPS to the mix cause posting the image to become too slow? Thanks in advance, Alan
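
    A minimal sketch of the Redis-backed flow in PHP using the phpredis extension; endpoint names, key names, and the camera-ID parameter are hypothetical placeholders:

        <?php
        // upload.php - the camera POSTs its latest JPEG here
        $redis = new Redis();
        $redis->connect('127.0.0.1', 6379);

        $cameraId = preg_replace('/\W/', '', $_GET['camera'] ?? '');   // crude sanitising
        $image    = file_get_contents('php://input');                  // raw POST body

        if ($cameraId !== '' && $image !== false && $image !== '') {
            // One key per camera: overwriting it means Redis only ever holds the
            // newest frame, so no separate "delete older image" step is needed.
            $redis->set("camera:$cameraId:latest", $image);
            $redis->set("camera:$cameraId:updated", time());
        }

        // image.php - the page's polling JavaScript requests this to display the frame
        // header('Content-Type: image/jpeg');
        // echo $redis->get("camera:$cameraId:latest");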

    Read the article
