Search Results

Search found 17460 results on 699 pages for 'validate request'.


  • nginx + Jetty - thousands of connections stuck in LAST_ACK

    - by virulence
    I have a FreeBSD machine with jails -- two in particular: one that runs nginx, and another that runs a Java program that accepts requests via Jetty (embedded mode). Jetty receives upwards of 500 requests/sec constantly, and there has been an issue lately where I constantly have over 60,000 connections in the LAST_ACK state between nginx and Jetty.

    Distribution of all connections (includes some other services, particularly php-fpm):

        root@host:/root # netstat -an > conns.txt
        root@host:/root # cat conns.txt | awk '{print $6}' | sort | uniq -c | sort -n
           18 LISTEN
          112 CLOSING
          485 ESTABLISHED
          650 FIN_WAIT_2
         1425 FIN_WAIT_1
         3301 TIME_WAIT
        64215 LAST_ACK

    Distribution of nginx-to-Jetty connections:

        root@host:/root # cat conns.txt | grep '10.10.1.57' | awk '{print $6}' | sort | uniq -c | sort -n
            1
            3 CLOSE_WAIT
            3 LISTEN
           18 FIN_WAIT_2
          125 ESTABLISHED
        64193 LAST_ACK

    I'd prefer every request to fully close the connection. Client requests are about 10 minutes apart from each other, so connections must be closed. Some of the connections:

        tcp4  0  0  10.10.1.50.46809  10.10.1.57.9050  LAST_ACK
        tcp4  0  0  10.10.1.50.46805  10.10.1.57.9050  LAST_ACK
        tcp4  0  0  10.10.1.50.46797  10.10.1.57.9050  LAST_ACK
        tcp4  0  0  10.10.1.50.46794  10.10.1.57.9050  LAST_ACK
        tcp4  0  0  10.10.1.50.46790  10.10.1.57.9050  LAST_ACK
        tcp4  0  0  10.10.1.50.46789  10.10.1.57.9050  LAST_ACK
        tcp4  0  0  10.10.1.50.46771  10.10.1.57.9050  LAST_ACK
        etc..

    On Jetty's end I've set maxIdleTime to 2000 -- before this, all connections were in ESTABLISHED, but they are now LAST_ACK. On Jetty's end I've also set Connection: close (i.e. response.setHeader(HttpHeaders.CONNECTION, HttpHeaderValues.CLOSE);). Jetty never reports a lot of open connections -- always very few. PF/IPFW is not currently being used, and reset_timedout_connection is on in nginx. I cannot figure out how to get nginx or Jetty to forcibly close the connection. Is this simply something that needs to be fixed in Jetty so that it fully closes the socket after the request finishes? Thanks a lot in advance.

    EDIT: forgot my nginx config for the proxy setup:

        proxy_pass http://10.10.1.57:9050;
        proxy_set_header HTTP_X_GEOIP $http_x_geoip;
        proxy_set_header GEOIP_COUNTRY_CODE $geoip_country_code;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $http_host;
        proxy_set_header Connection "";
        proxy_http_version 1.1;

    EDIT2: Forcing Jetty to close the connection via request.getConnection().getEndPoint().close() does nothing -- it's obvious the connection IS being closed (as it's in LAST_ACK), but why isn't it getting past this state? Is nginx keeping the connection open to the backend for some reason?
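    For reference, proxy_http_version 1.1 together with an empty Connection header is the usual recipe for backend keepalive, but nginx only actually reuses backend connections when the upstream block also declares a keepalive pool; without one, every request still opens and closes its own backend connection. If reusing connections (rather than closing one per request) were acceptable, a minimal sketch of that variant would look like the following -- the pool size is an arbitrary assumption, and the keepalive directive needs nginx 1.1.4 or newer:

        upstream jetty_backend {
            server 10.10.1.57:9050;
            keepalive 64;    # assumed pool size
        }

        server {
            location / {
                proxy_pass http://jetty_backend;
                proxy_http_version 1.1;
                proxy_set_header Connection "";
            }
        }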

    Read the article

  • Is hierarchical product backlog a good idea in TFS 2012-2013?

    - by Matías Fidemraizer
    I'd like to validate that I'm not on the wrong track. My team project is using Visual Studio Scrum 2.x. Since each area/product has many kinds of requirements (security, user interface, HTTP/REST services...), I tried to manage this by creating "parent backlogs" which are "open forever" and contain generic requirements. Those parent backlogs have other "open forever" backlogs, and/or sprint backlogs. For example:

        HTTP/REST Services (forever)
            Profiles API (forever)
                POST profile (forever)
                    We need a basic HTTP/REST profiles API to register new user profiles (sprint backlog)

    Is this the right way of organizing the product backlog? Note: I know there are different points of view, and what is right for some would be wrong for others. I'm looking for validation of whether this is a possible good practice on TFS with Visual Studio Scrum.

    Read the article

  • How to create a GUI that communicates with USB devices

    - by VINAYAK
    I am doing my project using Win32 programming. I am just learning Win32 and am able to create a UI. I want to communicate with a USB device from that UI. How can I go about that? Are there predefined functions, or do I need to write the code that communicates with the OS, gets the device list, and retrieves the details about each device? My purpose is to:

        1. Create a UI that shows the basic information about the device (I want to send a control request to the device to get the descriptors).
        2. For that, I first want to be told by the OS about device attachment. That will lead to getting information about the device; enumeration takes place, and only then do I request the device information through descriptors using standard requests.
        3. I also want to create the driver for my device, which will also need to communicate with the OS (Windows).

    So, can anyone help me with this? How can I achieve or approach this? Note: I am at the entry level now, so a detailed, step-by-step response would be appreciated.

    Read the article

  • How to REALLY start thinking in terms of objects?

    - by Mr Grieves
    I work with a team of developers who all have several years of experience with languages such as C# and Java. Most of them are young enough to have been shown OOP as a standard way to develop software in university and are very comfortable with concepts such as inheritance, abstraction, encapsulation and polymorphism. Yet many of them, and I have to include myself, still tend to create classes which are meant to be used in a very functional fashion. The resulting software is often several smaller classes which correctly represent business objects, which get passed through larger classes that only supply ways to modify and use those objects (functions). Large, complex, difficult-to-maintain classes named Manager are usually the result of such behaviour. I can see two theoretical reasons why people might write this type of code:

        1. It's easy to start thinking of everything in terms of the database.
        2. Deep down, for me, a computer handling a web request feels more like a functional operation than an object-oriented one when you think about request handlers, threads, processes, CPU cores and CPU operations...

    I want source code which is easy to read and easy to modify. I have seen excellent examples of OO code which meet these objectives. How can I start writing code like this? How can I really start thinking in an object-oriented fashion? How can I share such a mentality with my colleagues?
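    To make the contrast concrete, here is a small refactoring sketch (not from the question; every name is invented, C#). The "Manager style" asks an object for its data and does the work somewhere else, e.g. orderManager.CalculateTotal(order); the object-oriented version moves that behaviour onto the object that owns the data:

        using System.Collections.Generic;
        using System.Linq;

        public class Product
        {
            public decimal Price { get; set; }
        }

        public class OrderLine
        {
            private readonly Product product;
            private readonly int quantity;

            public OrderLine(Product product, int quantity)
            {
                this.product = product;
                this.quantity = quantity;
            }

            // The line knows how to price itself.
            public decimal Subtotal() { return product.Price * quantity; }
        }

        public class Order
        {
            private readonly List<OrderLine> lines = new List<OrderLine>();

            public void AddLine(Product product, int quantity)
            {
                lines.Add(new OrderLine(product, quantity));
            }

            // "Tell, don't ask": callers get the answer, not the raw data.
            public decimal Total() { return lines.Sum(l => l.Subtotal()); }
        }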

    Read the article

  • Bundling in Visual Studio 2012 for web optimization

    - by Jalpesh P. Vadgama
    I have been writing a series of posts about Visual Studio 2012 features. This series describes what is new in Visual Studio 2012, and this post is also part of that series. Nowadays web applications and sites provide more and more features, and because of that we include lots of JavaScript and CSS files in our web applications. So once the site loads, all of the JavaScript (js) and CSS files are loaded in the browser, and if you have lots of JavaScript files it consumes a lot of time when the browser requests them. The following image shows that situation. Here you can see a total of 25 files loaded, with a total size of more than 1MB. As we need our web application or site to be very responsive and to have high performance, this becomes a performance bottleneck for our site. In situations like this, the bundling feature of Visual Studio 2012 and ASP.NET 4.5 comes in very handy. With the help of this feature we can optimize and increase the performance of our application. To enable this feature in Visual Studio 2012, we just set debug="false" in the web.config of our application, like the following. Now once you enable this feature and run the application in the browser, the traffic will have fewer items, like the following. As you can see in the above image, there are only 8 items. So after enabling bundling, the js and css files are automatically combined into bundled requests. Isn't that a cool feature? This feature will surely have a great impact on performance. Hope you like it. Stay tuned for more.. Till then, happy programming!!
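    The post doesn't show the bundle registration itself, so here is a minimal sketch of how bundles are typically declared with System.Web.Optimization in ASP.NET 4.5 (the bundle names and file paths are made up; RegisterBundles would be called from Application_Start with BundleTable.Bundles):

        using System.Web.Optimization;

        public class BundleConfig
        {
            public static void RegisterBundles(BundleCollection bundles)
            {
                // All scripts in one bundle -> one HTTP request when optimizations are on.
                bundles.Add(new ScriptBundle("~/bundles/site").Include(
                    "~/Scripts/jquery-1.8.2.js",
                    "~/Scripts/site.js"));

                bundles.Add(new StyleBundle("~/Content/css").Include(
                    "~/Content/site.css"));

                // Bundling/minification normally kicks in when <compilation debug="false" />;
                // uncommenting this forces it on regardless of the debug flag.
                // BundleTable.EnableOptimizations = true;
            }
        }

    In a Razor view the bundles are then emitted with @Scripts.Render("~/bundles/site") and @Styles.Render("~/Content/css").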

    Read the article

  • wget has a 4 second delay

    - by guisius
    Hello. I have tried to wget a page with Windows/Mac, and the response is instant, while the Linux version needs to wait for 4 seconds before it shows the response. I just hope this can be solved. More information added below.

    In Ubuntu:

        wget xxx://192.168.0.135/test.cgi?cmd= -O test.txt
        --2011-03-04 14:21:17--  xxx://192.168.0.135/test.cgi?cmd=
        Connecting to 192.168.0.135:80... connected.
        HTTP request sent, awaiting response... 200 OK
        Length: unspecified [text/html]
        Saving to: `test.txt'
        [ <=> ] 17 --.-K/s in 0s
        2011-03-04 14:21:22 (1.88 MB/s) - `test.txt' saved [17]

    While in Mac OS:

        wget xxx://192.168.0.135/test.cgi?cmd= -O test.txt
        --2011-03-04 14:22:33--  xxx://192.168.0.135/test.cgi?cmd=
        Connecting to 192.168.0.135:80... connected.
        HTTP request sent, awaiting response... 200 OK
        Length: unspecified [text/html]
        Saving to: `test.txt'
        [ <=> ] 17 --.-K/s in 0s
        2011-03-04 14:22:33 (755 KB/s) - `test.txt' saved [17]

    In Ubuntu it delays 4 seconds, while Windows and Mac do not. I believe it may be related to some setting in the network config, such as packet size or window frame, but I have no idea what to set.

    PS: because the limits of the post do not allow me to post the URL, I have marked it as xxx.

    Read the article

  • Modeling Websites and Native Code

    I've blogged previously about the Architecture tools in Visual Studio 2010. These tools offer a fantastic way to understand an existing application, design some new functionality, and validate an implementation against architectural rules and constraints. Recently, we announced the availability of the Visualization and Modeling Feature Pack for MSDN subscribers, which complements the Architecture tools in Visual Studio 2010 by adding support for: C/C++ code visualization, website visualization, improved...

    Read the article

  • Why is rsync.exe [cwRsync] trying to open a port when in client mode?

    - by hemancuso
    I'm trying to use a Cygwin-compiled version of rsync [the cwRsync package] on Windows, and in seemingly whatever configuration I test, Windows Firewall presents the user with a request to allow inbound traffic. If you deny this request, everything works fine - as expected. I'm doing a vanilla push rsync.exe localpath user@remotepath:/absolutepath and it works just fine. I've also attempted this command having deleted ssh from the path, and using rsync on local paths - still a firewall prompt. Why is this listen() happening, and is there a way I can force the client to not attempt to listen, without recompiling and maintaining a patch?

    Read the article

  • Bridging two sockets

    - by Itehnological
    I wondered if it is possible to bridge two incoming TCP sockets. For example:

        Client A -----> Server <----- Client B

    Then the server sends its magic to both clients and they connect to each other, bypassing the server:

                          Server
        Client A ----------><---------- Client B

    UPDATE: The idea is that when those clients can't bind to ports to listen on, they should still be able to create a connection between each other with the help of the server. For example, Client A and Client B each have a TCP socket open to the server. User A decides to chat with User B and creates a new TCP connection to the server, with a request to bridge it with User B. The server sends that request to Client B, and Client B also opens a new TCP connection to the server for that chat line. Now that the server has both chat connections, from A and from B, it bridges them so they can work without the server; as a result, the server won't have to process all the messages and files the two users share. That's the idea.
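    For what it's worth, two established TCP connections can't literally be fused into one direct connection between the peers; if neither client can accept an inbound connection (no NAT hole punching), the practical fallback is for the server to relay bytes between the two sockets it already holds. A minimal relay sketch, assuming both chat connections have already been accepted (C#; the names are invented):

        using System.Net.Sockets;
        using System.Threading.Tasks;

        static class ChatBridge
        {
            // Copy bytes in both directions until either side closes, then drop both.
            public static async Task RelayAsync(TcpClient clientA, TcpClient clientB)
            {
                using (clientA)
                using (clientB)
                {
                    NetworkStream a = clientA.GetStream();
                    NetworkStream b = clientB.GetStream();
                    Task aToB = a.CopyToAsync(b);
                    Task bToA = b.CopyToAsync(a);
                    await Task.WhenAny(aToB, bToA);
                }
            }
        }

    This of course still costs the server the bandwidth of every message, which is exactly what the question hopes to avoid; avoiding it requires one of the peers to become reachable (port forwarding, UPnP, or hole punching).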

    Read the article

  • thick client migration to web based application

    - by user1151597
    This query is about application design and the technology that I should consider during a migration. The scenario: I have a C#.NET WinForms application which communicates with a device. One of the main features of this application is monitoring cyclic data (at a 200ms rate) sent from the device to the application. The request to start the cyclic data is sent only once, at the beginning, and the application then keeps receiving data from the device until it sends a stop request. Now this same application needs to be deployed over the web, on an intranet. The application is composed of a business logic layer and a communication layer which communicates with the device through UDP ports. I am trying to find a solution which will allow me to have a single instance of the application on the server, so that the device thinks it is connected as usual, and then manage the clients from the business logic layer. I want to reuse the code of the business layer and the communication layer as much as possible. Please let me know whether web services, WCF, etc. are what I should consider for designing the migration. Thanks in advance.
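    One way to keep a single server-side instance in front of the device while exposing it to many intranet clients is a singleton-hosted WCF service that wraps the existing communication layer. This is only a sketch of the idea, not a recommendation; the contract, type and member names are all placeholders:

        using System.ServiceModel;

        [ServiceContract]
        public interface IDeviceMonitor
        {
            [OperationContract]
            byte[] GetLatestSample();   // clients poll for the most recent cyclic frame
        }

        // A single instance serves every client, so the device still sees one consumer.
        [ServiceBehavior(InstanceContextMode = InstanceContextMode.Single,
                         ConcurrencyMode = ConcurrencyMode.Multiple)]
        public class DeviceMonitor : IDeviceMonitor
        {
            private volatile byte[] latestFrame = new byte[0];

            // The existing UDP communication layer would push each 200ms frame in here.
            public void OnFrameReceived(byte[] frame) { latestFrame = frame; }

            public byte[] GetLatestSample() { return latestFrame; }
        }

    Whether the clients poll, use a WCF duplex callback, or something push-based depends on how close to the 200ms rate they actually need to be.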

    Read the article

  • Android From Local DB (DAO) to Server sync (JSON) - Design issue

    - by Taiko
    I sync data between my local DB and a server, and I'm looking for the cleanest way to model all of this. I have a com.something.db package that contains a DataHelper and a couple of DAO classes that represent objects stored in the db (I didn't write that part):

        com.something.db
        --public DataHelper
        --public Employee
            @DatabaseField (e.g. "name" will be an actual column name in the DB)
            -name
            @DatabaseField
            -salary
            etc... (all in all 50 fields)

    I have a com.something.sync package that contains all the implementation detail on how to send data to the server. It boils down to a ConnectionManager that is fed by different classes that implement a 'Request' interface:

        com.something.sync
        --public interface ConnectionManager
        --package ConnectionManagerImpl
        --public interface Request
        --package LoginRequest
        --package GetEmployeesRequest

    My issue is that, at some point in the sync process, I have to JSONise and de-JSONise my data (e.g. the Employee class). But I really don't feel like having the same Employee class be responsible for both its JSONisation and its actual representation inside the local database. It really doesn't feel right, because I carefully decoupled the rest; I am only stuck on this JSON thing. What should I do? Should I write 3 Employee classes?

        EmployeeDB
            @DatabaseField (e.g. "name" will be an actual column name in the DB)
            -name
            @DatabaseField
            -salary
            -etc... 50 fields

        EmployeeInterface
            -getName
            -getSalary
            -etc... 50 fields

        EmployeeJSON
            -JSON_KEY_NAME = "name" (the JSON key happens to be the same as the column name, but it isn't a requirement)
            -name
            -JSON_KEY_SALARY = "salary"
            -salary
            -etc... 50 fields

    It feels like a lot of duplicates. Is there a common pattern I can use here?

    Read the article

  • Ubuntu getting wrong hostname from DHCP

    - by sam
    When provisioning new Ubuntu Precise (12.04) servers, the hostname they're getting seems to be generated from the DNS search path, not a reverse lookup on the hostname. Take the following configuration. BIND is configured with the hostname, and reverse name.

    Normal zone:

        $TTL 600
        $ORIGIN srv.local.net.
        @               IN  SOA ns0.local.net. hostmaster.local.net. (
                            2014082101 10800 3600 604800 600 )
        @               IN  NS  ns0.local.net.
        @               IN  MX  5 mail.local.net.
        my-new-server   IN  A   10.32.2.30

    And reverse:

        @       IN  SOA ns0.local.net. hostmaster.local.net. (
                    2014082101 10800 3600 604800 600 )
        @       IN  NS  ns0.local.net.
        $ORIGIN 32.10.in-addr.arpa.
        30.2    IN  PTR my-new-server.srv.local.net.

    Then DHCPD is configured to hand out static leases based on mac addresses, like so:

        subnet 10.32.2.0 netmask 255.255.254.0 {
            option subnet-mask 255.255.254.0;
            option routers 10.32.2.1;
            option domain-name-servers 10.32.2.1;
            option domain-name "util.of1.local.net of1.local.net srv.local.net";
            site-option-space "pxelinux";
            option pxelinux.magic f1:00:74:7e;
            if exists dhcp-parameter-request-list {
                option dhcp-parameter-request-list = concat(option dhcp-parameter-request-list,d0,d1,d2,d3);
            }
            group {
                option pxelinux.configfile "pxelinux.cfg/pxeboot";
                host my-new-server {
                    fixed-address my-new-server.srv.local.net;
                    hardware ethernet aa:aa:aa:bb:bb:bb;
                }
            }
        }

    So the hostname should be my-new-server.srv.local.net, however when building a Ubuntu 12.04 node, the hostname ends up as my-new-server.util.of1.local.net. When building Lucid (10.04) hosts, the hostname will be correct; it's only on Precise/12.04 nodes we have the problem. Doing a normal and reverse lookup on the host and IP returns the correct result:

        Sams-MacBook-Pro:~ sam$ host my-new-server
        my-new-server.srv.local.net has address 10.32.2.30
        Sams-MacBook-Pro:~ sam$ host my-new-server.srv.local.net
        my-new-server.srv.local.net has address 10.32.2.30
        Sams-MacBook-Pro:~ sam$ host 10.32.2.30
        30.2.32.10.in-addr.arpa domain name pointer my-new-server.srv.local.net.

    The contents of the hosts file is incorrect too:

        127.0.0.1   localhost
        127.0.1.1   my-new-server.util.of1.local.net of1.local.net srv.local.net my-new-server

    So it looks like when it creates the hosts file, it puts the entire contents of the DNS search path into the local address, so the FQDN according to the server is the short hostname as defined, then the first domain in the search path. Is there a way to get around this behaviour, or fix this so it gets the hostname correctly? It's picking up the first part of the hostname, then the rest is wrong.
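    One thing that stands out (an observation, not a confirmed fix): option domain-name is being handed three space-separated domains, and that whole string is what ends up on the 127.0.1.1 line. If the intent is one primary domain plus a search list, the usual split in dhcpd.conf looks something like the following, assuming a dhcpd/dhclient pair that understands the RFC 3397 domain-search option:

        option domain-name "srv.local.net";
        option domain-search "util.of1.local.net", "of1.local.net", "srv.local.net";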

    Read the article

  • Diagramming software with API allowing high customisation of shapes and actions

    - by jenson-button-event
    I am after something like Visio or Lucid: a relatively simple charting/diagramming tool to build tree-like structures from (my) pre-defined nodes (squares), but with a powerful API. Requirements:

        - limit the type of objects allowed to be dropped on the diagram
        - validate a model (e.g. a node of type A must precede a node of type B; a node's Title must be entered)
        - export a model
        - import a model

    Our domain is very specific, and it's a tool we'd want to offer to some of our power users. The $500 Visio licence isn't really within the business model. I'll put no constraints on framework or deployment (web or desktop) - is there anything out there?

    Read the article

  • Generate a Word document from list data

    - by PeterBrunone
    This came up on a discussion list lately, so I threw together some code to meet the need.  In short, a colleague needed to take the results of an InfoPath form survey and give them to the user in Word format.  The form data was already in a list item, so it was a simple matter of using the SharePoint API to get the list item, formatting the data appropriately, and using response headers to make the client machine treat the response as MS Word content.  The following rudimentary code can be run in an ASPX (or an assembly) in the 12 hive.  When you link to the page, send the list name and item ID in the query string and use them to grab the appropriate data.

        // Clear the current response headers and set them up to look like a Word doc.
        HttpContext.Current.Response.Clear();
        HttpContext.Current.Response.Charset = "";
        HttpContext.Current.Response.ContentType = "application/msword";
        string strFileName = "ThatWordFileYouWanted" + ".doc";
        HttpContext.Current.Response.AddHeader("Content-Disposition", "inline;filename=" + strFileName);

        // Using the current site, get the List by name and then the Item by ID (from the URL).
        string myListName = HttpContext.Current.Request.QueryString["listName"];
        int myID = Convert.ToInt32(HttpContext.Current.Request.QueryString["itemID"]);
        SPSite oSite = SPContext.Current.Site;
        SPWeb oWeb = oSite.OpenWeb();
        SPList oList = oWeb.Lists[myListName];
        SPListItem oListItem = oList.Items.GetItemById(myID);

        // Build a string with the data -- format it with HTML if you like.
        StringBuilder strHTMLContent = new StringBuilder();
        // *
        // Here's where you pull individual fields out of the list item.
        // *
        // Once everything is ready, spit it out to the client machine.
        HttpContext.Current.Response.Write(strHTMLContent);
        HttpContext.Current.Response.Flush();
        HttpContext.Current.Response.End();
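    For example, a link of the form http://server/_layouts/GenerateDoc.aspx?listName=Survey%20Results&itemID=42 (the page name, list name and item ID here are hypothetical) would stream that list item back to the browser as a Word document.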

    Read the article

  • Poor backlink profile - search rankings not updated for 2+ months

    - by fistameeny
    I am carrying out some work on a website that is a PR2 with a few good quality, relevant backlinks (PR4-6). It has a presence on Twitter that is updated regularly, a Google Places listing, and listings on some decent directories (Qype etc). The site was rebuilt into Drupal 7 two months ago, with all the basics done - URL rewriting, XML Sitemap submitted to Google, and most importantly, good quality, structured content. I've noticed that Google is still showing "old" URL's from the previous version of the site that was ditched 8 weeks ago. I think the site may be penalised under the Penguin update, as a previous SEO company created many low quality links from link farms/directories. My question is what the correct way to deal with this is. Bing Webmaster Tools can "disavow" links, and I guess I can attempt to contact the link farms to have them removed. I've already submitted a request to Google to request that we have the penalty removed as we're trying to tidy up a bad history. We submit updated sitemaps to Google and Bing daily, and have built some further decent quality, relevant links. Is there anything further I can do?

    Read the article

  • Capturing BizTalk 2004 SQLAdapter failures

    - by DanBedassa
    I was recently working on a BizTalk 2004 project where I encountered an issue with capturing exceptions (inside my orchestration) coming from an external source -- a database server being down, a non-existent stored procedure, and so on. I thought I might write this up in case it helps someone. To reproduce the issue, I just rename the database to something different. The orchestration was failing at the point where I make a SQL request via a Response-Request Port. The exception handlers were bypassed, but I could see a warning in the event log saying: "The adapter failed to transmit message going to send port". After scratching my head for a while (as a newbie to BTS 2004) to find a way to catch the exceptions from the SQLAdapter in an orchestration, here is the solution I had:

        - Put the Send and Receive shapes inside a Scope shape.
        - Set the Scope's transaction type to "Long Running".
        - Add a Catch block expecting type "System.Exception".
        - Set the "Delivery Notification" of the associated Port to "Transmitted".
        - Change the "Retry Count" of the associated port to 0. (This will make sure BizTalk raises the exception, instead of a warning, so you can capture it.)
        - Now capture and do whatever you want with the exception inside the Catch block.

    Read the article

  • Windows Advanced Firewall certificate based IPSEC

    - by Tim Brigham
    I'm working on migrating from the IPSEC settings stored under 'IP Security Policies on Active Directory' to using 'Windows Firewall with Advanced Security' for my 2008+ boxes. I have successfully been able to get this set up using Kerberos authentication; however, my openswan implementation on my Linux boxes is using certificates. Whenever I try changing the authentication method to computer certificate (using RSA and my root CA), the connection is bombing out. I've made this change at both a connection request policy and in the IPSEC settings on the root Windows Firewall with Advanced Security node. The Windows event log shows the authentication request is taking place but failing to negotiate a mode. What am I missing here?

    Read the article

  • Service layer coupling

    - by Justin
    I am working on writing a service layer for an order system in PHP. It's the typical scenario: you have an Order that can have multiple Line Items. So let's say a request is received to store a line item with pictures and comments. I might receive a JSON request such as:

        {
            'type': 'Bike',
            'color': 'Red',
            'commentIds': [3193, 3194],
            'attachmentIds': [123, 413]
        }

    My idea was to have a Service_LineItem_Bike class that knows how to take the JSON data and store an entity for a bike. My question is, the Service_LineItem class now needs to fetch comments and file attachments, and store the relationships. Service_LineItem seems like it should interact with a Service_Comment and a Service_FileUpload. Should instances of these two other services be instantiated and passed to the Service_LineItem constructor, or set by getters and setters? Dependency injection seems like the right solution; allowing a service access to a 'service fetching helper' seems wrong, and that should stay at the application level. I am using Doctrine 2 as an ORM, and I can technically write a DQL query inside Service_LineItem to fetch the comments and file uploads necessary for the association, but this seems like it would be tighter coupling, rather than leaving this up to the right service object.

    Read the article

  • Spreading incoming batched data into a real-time stream

    - by pr1001
    I would like to display some events in 'real-time'. However, I must fetch the data from another source. I can request the last X minutes, though the source is updated approximately every 5 minutes. This means that there will be a delay between the most recent data retrieved and the point in time that I make the request. Second, because I will be receiving a batch of data, I don't want to just fire out all the events down a socket once my fetcher has retrieved it: I would like to spread out the events so that they are both accurately spaced amongst each other and in sync with their original occurrences (e.g. an event is always displayed 6 minutes after it actually happened). My thought is to fetch the data every 5 minutes from the source, knowing that I won't get the very latest data. The original data would be then queued to be sent down the socket 7.5 minutes from its original timestamp – that is, at least ~2.5 minutes from when its batch was fetched and at most 7.5 minutes since then. My question is this: is this the best way to approach the problem? Does this problem have any standard approaches or associated literature related to implementation best-practices and edge cases? I am a bit worried that the frequency of my fetches and the frequency in which the source is updated will get out of sync, leading to points where no data will be retrieved from the source. However, since my socket delay is greater than my fetch frequency, the subsequent fetch should retrieve newer data before the socket queue is empty. Is that correct? Am I missing something? Thanks!
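    The fixed-offset idea described above can be expressed directly: each event is emitted at its original timestamp plus a constant delay that exceeds the fetch interval, which preserves the original spacing and keeps the queue from running dry. A rough sketch of just that scheduling step (C#; the 7.5-minute figure comes from the question, everything else is invented):

        using System;
        using System.Threading.Tasks;

        static class ReplayScheduler
        {
            // Display delay: must exceed the fetch interval (5 min) plus source lag.
            static readonly TimeSpan Offset = TimeSpan.FromMinutes(7.5);

            // Schedule one fetched event so it is emitted Offset after its original timestamp.
            public static void Schedule(DateTime originalUtc, string payload, Action<string> emit)
            {
                TimeSpan wait = originalUtc + Offset - DateTime.UtcNow;
                if (wait < TimeSpan.Zero) wait = TimeSpan.Zero;          // late data goes out immediately
                Task.Delay(wait).ContinueWith(_ => emit(payload));       // relative spacing is preserved
            }
        }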

    Read the article

  • How to collaborate on features using github

    - by Robert Dailey
    GitHub encourages one fork per user, so that the user can work independently on a feature and then request that the feature be accepted into the main repository via a pull request. However, what if 2 developers need to collaborate on that feature? What is the ideal workflow for this? I could see a number of options:

        1. Both developers fork the original repository. Each developer pulls/pushes changes between each other's repository. This seems like a lot of work (tiny micro operations) and also creates a delay between changes, so it increases the window for conflicts.
        2. Developer 1 forks from the main repository, and developer 2 forks from developer 1. Same as #1 mainly, but hopefully it simplifies Developer 2's life a little?
        3. Developer 1 gives Developer 2 permissions to his own fork, so they both work out of the same central repository. Not sure if this is ideal.

    I'm also curious where branches come into this. Obviously there would be a branch for the feature itself, but that branch can't exist in a single place; it would have to exist on multiple forks and be synchronized. Basically I'm really confused about this workflow and would like an approach for how this can best be accomplished.

    Read the article

  • nginx automatic failover load balancing

    - by robinmag
    Hi, I'm using nginx and NginxHttpUpstreamModule for load balancing. My config is very simple:

        upstream lb {
            server 127.0.0.1:8081;
            server 127.0.0.1:8082;
        }

        server {
            listen 89;
            server_name localhost;
            location / {
                proxy_pass http://lb;
                proxy_redirect off;
                proxy_set_header Host $host;
                proxy_set_header X-Real-IP $remote_addr;
                proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            }
        }

    But with this config, when one of the 2 backend servers is down, nginx still routes requests to it, and that results in a timeout half of the time :( Is there any solution to make nginx automatically route the request to another server when it detects a downed server? Thank you.
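    For reference, the upstream module already does passive failover: after a failed attempt a server is marked down for a while, and proxy_next_upstream controls which failures cause the request to be retried on the other server. A hedged sketch of the relevant knobs (the numbers are arbitrary choices, not recommendations):

        upstream lb {
            server 127.0.0.1:8081 max_fails=2 fail_timeout=10s;
            server 127.0.0.1:8082 max_fails=2 fail_timeout=10s;
        }

        server {
            location / {
                proxy_pass http://lb;
                # Retry the next upstream on connection errors and timeouts,
                # and give up on a dead backend quickly instead of waiting
                # for the default 60s connect timeout.
                proxy_next_upstream error timeout;
                proxy_connect_timeout 2s;
            }
        }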

    Read the article

  • HTTP Caching Server that supports POST

    - by Jeroen
    I am hosting a REST service which is sending appropriate cache-control headers. I use Varnish as a caching server in front of my webserver. However, a limitation of varnish is that it doesn't support caching HTTP POST and HTTP PUT. Is there any alternate caching server that will be able to cache these requests? I understand that caching POST is a bit tricky because you cannot just cache based on the url as a key like for GET; it needs to actually inspect the request body. In case of multipart/form-data requests, there should probably be a limit on the size of the request body for it to be cached (so that big file uploads, etc won't be cached). Nevertheless I really want to be able to cache short HTTP POST, or at least the application/x-www-form-urlencoded ones.

    Read the article

  • xinet vs iptables for port forwarding performance

    - by jamie.mccrindle
    I have a requirement to run a Java based web server on port 80. The options are: Web proxy (apache, nginx etc.) xinet iptables setuid The baseline would be running the app using setuid but I'd prefer not to for security reasons. Apache is too slow and nginx doesn't support keep-alives so new connections are made for every proxied request. xinet is easy to set up but creates a new process for every request which I've seen cause problems in a high performance environment. The last option is port forwarding with iptables but I have no experience of how fast it is. Of course, the ideal solution would be to do this on a dedicated hardware firewall / load balancer but that's not an option at present.
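    If the iptables option is chosen, the usual pattern is to keep the Java server on an unprivileged port and redirect port 80 to it in the nat table; the redirect is done in the kernel, so its per-request cost is negligible next to a userspace proxy or a per-connection xinetd fork. A sketch, assuming the backend listens on 8080:

        # external clients hitting port 80
        iptables -t nat -A PREROUTING -p tcp --dport 80 -j REDIRECT --to-ports 8080
        # connections originating on the host itself
        iptables -t nat -A OUTPUT -o lo -p tcp --dport 80 -j REDIRECT --to-ports 8080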

    Read the article

  • Why won't IE let users login to a website unless in In Private mode?

    - by Richard Fawcett
    I'm not entirely sure this belongs on SuperUser.com. I also considered ServerFault.com and StackOverflow.com, but on balance, I think it should belong here. We host a website which has the same code responding to multiple domain names. On 28th December (without any changes deployed to the website) a percentage of users suddenly could not login, and the blank login page was just rendered again even when the correct credentials were entered. The issue is still ongoing. After remote controlling an affected user's PC, we've found the following:

        - The issue affects Internet Explorer 9.
        - The user can login from the same machine on Chrome.
        - The user can login from an In Private browser session using IE9.
        - The user can login if the website is added to the Trusted Sites security zone.
        - The user can NOT login from an IE session in safe mode (started with iexplore -extoff).
        - Only one hostname that the website responds to prevents login; the same user account on the other hostname works fine (note that this is identical code and database running server side), even though that site is not in the trusted sites zone.

    Series of HTTP requests in the failure case:

        1. GET request to protected page, returns a 302 FOUND response to login page.
        2. GET request to login page.
        3. POST to login page, containing credentials, returns redirect to protected page.
        4. GET request to protected page... for some reason auth fails and browser is redirected to login page, as in step 1.

    Other information: Operating system is Windows 7 Ultimate Edition. AV system is AVG Internet Security 2012. I can think of lots of things that could be going wrong, but in every case, one of the findings above is incompatible with the theory. Any ideas what is causing login to fail?

    Update 06-Jan-2012: Enhanced logging has shown that the .ASPXAUTH cookie is being set in step 3. Its expiry date is 28 days in the future, its path is /, the domain is mysite.com, and its value is an encrypted forms ticket, as expected. However, the cookie is not being received by the web server during step 4. Other cookies are being presented to the server during step 4; it's just this one that is missing. I've seen that cookies are usually set with a domain starting with a period, but mine isn't. Should it be .mysite.com instead of mysite.com? However, if this was wrong, it would presumably affect all users?

    Read the article
