Search Results

Search found 6078 results on 244 pages for 'processing'.

  • Is it possible to load balance requests from a single source?

    - by Shawn
    In our application, Server A establishes a TCP connection with Server B, then sends a large number of requests to Server B over that connection. The request messages are XML-based. Server B needs to respond within a very short period, and it takes time to process the requests, so we hope to introduce a load balancer and speed up processing by using multiple Server B instances. This is not a web application. I did some research but failed to find a similar application of a load balancer. Can anyone tell me if there is a load balancer that can help in our application?
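
    A network load balancer typically distributes whole TCP connections, so with a single long-lived connection from Server A the fan-out has to happen at the message level. For illustration only (hostnames and ports are invented, not from the question), here is a minimal Python sketch of such an application-level dispatcher that spreads request messages round-robin over one connection per Server B instance:

        import socket
        from itertools import cycle

        # Hypothetical backend Server B instances.
        BACKENDS = [("serverb-1", 9000), ("serverb-2", 9000), ("serverb-3", 9000)]

        def open_backends():
            # One persistent TCP connection per backend.
            return [socket.create_connection(addr) for addr in BACKENDS]

        def dispatch(requests):
            # Send each XML request message to the next backend in turn.
            backends = cycle(open_backends())
            for req in requests:
                next(backends).sendall(req.encode("utf-8"))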

  • Why does IHttpAsyncHandler leak memory under load?

    - by Anton
    I have noticed that the .NET IHttpAsyncHandler (and the IHttpHandler, to a lesser degree) leaks memory when subjected to concurrent web requests. In my tests, the development web server (Cassini) jumps from 6 MB of memory to over 100 MB, and once the test is finished, none of it is reclaimed. The problem can be reproduced easily. Create a new solution (LeakyHandler) with two projects: an ASP.NET web application (LeakyHandler.WebApp) and a console application (LeakyHandler.ConsoleApp). In LeakyHandler.WebApp: create a class called TestHandler that implements IHttpAsyncHandler; in the request processing, do a brief Sleep and end the response; add the HTTP handler to Web.config as test.ashx. In LeakyHandler.ConsoleApp: generate a large number of HttpWebRequests to test.ashx and execute them asynchronously. As the number of HttpWebRequests (sampleSize) is increased, the memory leak becomes more and more apparent.

    LeakyHandler.WebApp TestHandler.cs:

        namespace LeakyHandler.WebApp
        {
            public class TestHandler : IHttpAsyncHandler
            {
                #region IHttpAsyncHandler Members

                private ProcessRequestDelegate Delegate { get; set; }
                public delegate void ProcessRequestDelegate(HttpContext context);

                public IAsyncResult BeginProcessRequest(HttpContext context, AsyncCallback cb, object extraData)
                {
                    Delegate = ProcessRequest;
                    return Delegate.BeginInvoke(context, cb, extraData);
                }

                public void EndProcessRequest(IAsyncResult result)
                {
                    Delegate.EndInvoke(result);
                }

                #endregion

                #region IHttpHandler Members

                public bool IsReusable
                {
                    get { return true; }
                }

                public void ProcessRequest(HttpContext context)
                {
                    Thread.Sleep(10);
                    context.Response.End();
                }

                #endregion
            }
        }

    LeakyHandler.WebApp Web.config:

        <?xml version="1.0"?>
        <configuration>
          <system.web>
            <compilation debug="false" />
            <httpHandlers>
              <add verb="POST" path="test.ashx" type="LeakyHandler.WebApp.TestHandler" />
            </httpHandlers>
          </system.web>
        </configuration>

    LeakyHandler.ConsoleApp Program.cs:

        namespace LeakyHandler.ConsoleApp
        {
            class Program
            {
                private static int sampleSize = 10000;
                private static int startedCount = 0;
                private static int completedCount = 0;

                static void Main(string[] args)
                {
                    Console.WriteLine("Press any key to start.");
                    Console.ReadKey();

                    string url = "http://localhost:3000/test.ashx";
                    for (int i = 0; i < sampleSize; i++)
                    {
                        HttpWebRequest request = (HttpWebRequest)WebRequest.Create(url);
                        request.Method = "POST";
                        request.BeginGetResponse(GetResponseCallback, request);
                        Console.WriteLine("S: " + Interlocked.Increment(ref startedCount));
                    }
                    Console.ReadKey();
                }

                static void GetResponseCallback(IAsyncResult result)
                {
                    HttpWebRequest request = (HttpWebRequest)result.AsyncState;
                    HttpWebResponse response = (HttpWebResponse)request.EndGetResponse(result);
                    try
                    {
                        using (Stream stream = response.GetResponseStream())
                        {
                            using (StreamReader streamReader = new StreamReader(stream))
                            {
                                streamReader.ReadToEnd();
                                System.Console.WriteLine("C: " + Interlocked.Increment(ref completedCount));
                            }
                        }
                        response.Close();
                    }
                    catch (Exception ex)
                    {
                        System.Console.WriteLine("Error processing response: " + ex.Message);
                    }
                }
            }
        }

  • Apache mod_wsgi error: ImportError: No module named django.core.handlers.wsgi

    - by bigmac
    I am using Python 2.7 with mod_python 3.3.1 and mod_wsgi 3.3. I get an Internal Server Error and this stack trace in the Apache logs:

        [Thu Apr 21 10:25:37 2011] [error] [client 83.244.243.242] import django.core.handlers.wsgi
        [Thu Apr 21 10:25:37 2011] [error] [client 83.244.243.242] ImportError: No module named django.core.handlers.wsgi
        [Thu Apr 21 10:25:37 2011] [error] [client 83.244.243.242] mod_wsgi (pid=4463): Target WSGI script '/home/one/codebase/campman/wsgi_handler.py' cannot be loaded as Python module.
        [Thu Apr 21 10:25:37 2011] [error] [client 83.244.243.242] mod_wsgi (pid=4463): Exception occurred processing WSGI script '/home/one/codebase/campman/wsgi_handler.py'.
        [Thu Apr 21 10:25:37 2011] [error] [client 83.244.243.242] Traceback (most recent call last):
        [Thu Apr 21 10:25:37 2011] [error] [client 83.244.243.242] File "/home/one/codebase/campman/wsgi_handler.py", line 13, in <module>
        [Thu Apr 21 10:25:37 2011] [error] [client 83.244.243.242] import django.core.handlers.wsgi
        [Thu Apr 21 10:25:37 2011] [error] [client 83.244.243.242] ImportError: No module named django.core.handlers.wsgi
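
    One common cause of this error is that the Python interpreter mod_wsgi embeds cannot see Django on its sys.path (for example, Django lives in a virtualenv or under a different Python than the one mod_wsgi was built against). A minimal sketch of a wsgi_handler.py along those lines follows; the extra paths and the settings module name are assumptions for illustration, not taken from the question:

        # Hypothetical wsgi_handler.py sketch: make the embedded interpreter able to
        # see Django and the project before importing them.
        import os
        import sys

        sys.path.insert(0, '/home/one/codebase')          # project parent (from the log path)
        sys.path.insert(0, '/home/one/codebase/campman')  # project itself
        # If Django is installed in a virtualenv, its site-packages must be added too:
        # sys.path.insert(0, '/path/to/venv/lib/python2.7/site-packages')

        os.environ['DJANGO_SETTINGS_MODULE'] = 'campman.settings'  # assumed name

        import django.core.handlers.wsgi
        application = django.core.handlers.wsgi.WSGIHandler()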

  • NAS that supports NZB downloading for around £150 ($220) or less (without hard drive)

    - by Jigs
    I have seen a number of NAS devices that are around that price, but I am worried that they may not be able to handle the processing of .rar files (I know that can be quite CPU-intensive). Does anyone have any experience with SABnzbd or hellanzb, or something similar, on their NAS? In terms of features, the main requirement is NZB downloading; I am quite flexible on the other features. Wi-Fi support would be nice, but not essential. Torrent downloading would also be nice. One disk drive would probably be enough. Easy installation of applications would be nice... but again, I am sure I can follow a tutorial.

  • PHP script timed out, or otherwise killed on Apache under CentOS (shared host)

    - by MarkS
    When trying to run a PHP script (CentOS, Apache, PHP 5.2) that may take a long time, it is apparently killed after 45 minutes. The PHP script is invoked from a web browser and, in certain situations, it will do a lot of work processing a POP3 mailbox and sending emails as part of an automated monitoring system. Running the PHP script from the command line might be a better option, but first I want to understand what is going on. I ran a test script, and it appeared to finally give an internal server error (500?) after 45 minutes. Where is this limit set, and what is killing the script, if that is what is happening? It's running on a shared host at Hostgator.com.

  • Exchange 2007 to 2010 public folder replication error 1129

    - by Keith
    I am currently upgrading from Exchange Server 2007 to 2010. I have moved all mailboxes and the OAB. I am having issues replicating the public folders. This is the error I'm getting in the event log on the 2007 box:

        Error 1129 occurred while processing a replication event.
        Folder: (6-11ED8367F0C) IPM_SUBTREE\Marketing\Marketing

    I have looked online, and everything about these errors seems to relate to an old 2003 server. Well, we never had a 2003 server. I'm really not sure what to do at this point. Any help?

  • MySQL dump, output each table row on a new line whilst using --extended-insert

    - by soopadoubled
    I'm having an issue where, for ease of use, I'd like to be able to format a command-line MySQL dump so that each row of a given table is on a new line when using the --extended-insert option. Usually when using --extended-insert, every row of a given table is output on one line, and as far as I am aware there's no way to change this, other than post-processing the dump with Perl or suchlike. The format I'm looking for is:

        --
        -- Dumping data for table `ww_tbCountry`
        --
        INSERT INTO `ww_tbCountry` (`iCountryId_PK`, `vCountryName`, `vShortName`, `iSortFlag`, `fTax`, `vCountryCode`, `vSageTaxCode`) VALUES
        (22, 'Albania', 'AL', 1, 0.00, '8', 'T9'),
        (33, 'Austria', 'AT', 1, 15.00, '40', 'T9'),
        (40, 'Belarus', 'BY', 1, 0.00, '112', 'T9'),
        (41, 'Belgium', 'BE', 1, 15.00, '56', 'T9'),
        (51, 'Bulgaria', 'BG', 1, 15.00, '100', 'T9'

    However, when I dump a database using phpMyAdmin with --extended-insert, each row is dumped on a new line (as shown by the example above). I've gone through phpMyAdmin and can't find any documentation that would explain this. Is anyone able to shed any light on this? Thanks in advance, Ian
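
    In the post-processing spirit mentioned above ("Perl or suchlike"), a rough Python sketch that rewrites an --extended-insert dump so each row of a VALUES list starts on its own line. A naive split like this can break on string values that themselves contain "),(" so treat it as illustration only:

        import re
        import sys

        def reflow(line):
            # Put every "(...)" row of an extended INSERT on its own line.
            if line.startswith("INSERT INTO"):
                return re.sub(r"\),\(", "),\n(", line)
            return line

        with open(sys.argv[1]) as dump:
            for line in dump:
                sys.stdout.write(reflow(line.rstrip("\n")) + "\n")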

  • In SharePoint, why can I "multiple document upload" a 47,297 byte file, but not a 47,298 byte file?

    - by Jim
    It's strange. I can upload a document named 47k.txt that is 47,297 bytes using the "Multiple Document Upload" feature. However, if I add a single character to the end of the text file, the upload fails. Also, if I rename the file to 47kx.txt and try to upload it, it fails. This is the error I get in the SharePoint logs:

        Category: General
        Event ID: 8jzm
        Level: High
        Message: #90012: An error was encountered while processing files on the server. Try uploading one file at a time by using the single upload page.

    The same error is reported in a message box on the client side. Does anybody know why this would happen?

  • Can I provision half a core as a virtual CPU?

    - by ramdaz
    I am a virtualization newbie. Please advise on these questions. Note that using commercial VM software like Citrix or VMware is not an option for me. I have at my disposal a couple of 2 x 4-core servers with 32 GB RAM. I need to create 16 VMs on each server to test some web applications. Can I provision half a core as a virtual CPU for each VM? To the best of my knowledge I can't do so on Xen. Is it possible on KVM or some other free, open-source VM solution? If it's not possible to assign half a core, how do I ensure that uniform processing power is available for all VMs? Since the job is to create separate instances for hosting 16 web apps on a physical server, do you recommend setting up a private cloud using Ubuntu Enterprise Cloud as a better option? Is there an HA solution under KVM, like Remus for Xen?
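
    Not an answer from the thread, just a sketch of one way KVM/libvirt can approximate "half a core": capping each guest with the CFS quota knobs, where roughly quota/period is the fraction of one core the guest may use. The domain names below are made up, and whether vcpu_quota/vcpu_period are supported depends on the libvirt version and cgroups setup, so treat this as an assumption to verify:

        import subprocess

        def cap_to_half_core(domain):
            # Roughly: 50 ms of CPU time per 100 ms period ~= half of one core.
            subprocess.check_call([
                "virsh", "schedinfo", domain,
                "--set", "vcpu_period=100000",
                "--set", "vcpu_quota=50000",
            ])

        # Hypothetical names for the 16 guests on one host.
        for i in range(1, 17):
            cap_to_half_core("webapp-vm%02d" % i)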

  • Strange requests coming from Korean Site

    - by Jim Jeffers
    Lately I've been finding a lot of strange requests like this coming to my Rails app:

        Processing ApplicationController#index (for 189.30.242.61 at 2009-12-14 07:38:24) [GET]
          Parameters: {"_SERVER"=>{"DOCUMENT_ROOT"=>"http://www.usher.co.kr/bbs/id1.txt???"}}
        ActionController::RoutingError (No route matches "/browse/brand/nike ///" with {:method=>:get}):

    It looks like it's automated, as I get a lot of them, and I notice the strange parameters they're trying to send: _SERVER"=>{"DOCUMENT_ROOT"=>"http://www.usher.co.kr/bbs/id1.txt??? Is this something malicious, and if so, what should I do about it?

  • Increase application performance on Amazon AWS

    - by Honus Wagner
    I've got a client with an MVC v1 (.NET) application running on a micro instance. On this instance, I've got .NET, IIS 7.5, and MS SQL Server 2008 running to handle the application. The client has reported that it is taking nearly 10 seconds to process each request. Even loading the initial login page takes about that long, then logging in takes that long, and so on. The currently running instance specs are as follows: 615 MB RAM, Intel Xeon CPU E5430 @ 2.66 GHz (2.78 GHz), 64-bit. Is the memory availability the issue, or is it the processing power? I foresee two options: change to a larger instance, or set up a 2-tier architecture with two micro instances. Which of these will give the application better performance? Thanks in advance.

  • Is Flash typically slow on Linux?

    - by CSarnia
    Specifically, I'm running Mint 8 (Helena). I'm extremely new to Linux and was searching for a solution that was user-friendly and GUI-oriented. The box won't be used for much other than web browsing and word processing. Anyway, it runs relatively smoothly, except for YouTube videos... especially full-screen, which runs at about 1 FPS and, even after closing, slows Firefox to a crawl until I restart it. I'd seen an xkcd comic on the matter, but regarded it as a joke until now. Is this actually a problem? Are there any remedies I can try to smooth these applications out?

  • Samsung HMX-H100P camcorder and video encoding with mencoder

    - by jskg
    Hi everyone, my background is totally unrelated to video, so pardon my newbie style. I own a Samsung HMX-H100P camcorder and I'm trying to encode videos to be uploaded to YouTube and Vimeo. First problem: videos generated by the camera with no processing appear like this: http://www.youtube.com/watch?v=AANbl_DTuzE when I play them with Totem (Linux) or VideoLAN. Second problem: when I try to encode the videos produced by the camera using mencoder, I get the video at the resolution I chose, but those ugly lines and the lagging are still present. Here's the command I use:

        mencoder $inputFile -aspect 16:9 -of lavf -lavfopts format=psp -oac lavc -ovc lavc -lavcopts aglobal=1:vglobal=1:coder=0:vcodec=libx264:acodec=libfaac:vbitrate=4500:abitrate=128 -vf scale=1280:720 -ofps 25000/1001 -o $outputFile

    Any ideas? Thanks in advance

  • Can Acer Aspire Revo (Atom 330) be used with two monitors simultaneously?

    - by LeeD
    I'm so attracted to the Acer Revo for the price and the look. As long as I can work on two monitors simultaneously, I'll be happy. I'm not planning to do heavy video editing or gaming; occasional movie streaming would be fine. I will mainly use it for trading, lots of word processing, some photo editing, and connecting with friends. Does anyone have experience using the Revo with 2 or more monitors? The spec says it has VGA and HDMI output, but an Acer salesperson over the phone told me it can support only one monitor..??

  • Calculating Utilization in a Stop-And-Wait Protocol

    - by AlanTuring
    So there's this question in my book, and it doesn't state exactly how to go about actually calculating utilization anywhere, and I'm not able to find any substantial information covering everything I need to solve it (my midterm is next week). Anyway, here's the question: The distance from Earth to a distant planet is approximately 9 × 10^10 m. What is the channel utilization if a stop-and-wait protocol is used for frame transmission on a 64 Mbps point-to-point link? Assume that the frame size is 32 KB and the speed of light is 3 × 10^8 m/s. Suppose a sliding window protocol is used instead. For what send window size will the link utilization be 100%? You may ignore the protocol processing times at the sender and the receiver. Thanks to anyone who has any idea.
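
    For what it's worth, the arithmetic can be sanity-checked in a few lines of Python (assuming 32 KB = 32 × 1024 bytes and 64 Mbps = 64 × 10^6 bit/s; a textbook may use different conventions):

        import math

        distance = 9e10          # m, Earth to the distant planet
        c = 3e8                  # m/s, propagation speed
        rate = 64e6              # bit/s, link rate
        frame = 32 * 1024 * 8    # bits per frame

        t_transmit = frame / rate    # ~0.0041 s to clock one frame onto the link
        rtt = 2 * (distance / c)     # 600 s before the ACK can possibly arrive

        utilization = t_transmit / (t_transmit + rtt)
        window = math.ceil((t_transmit + rtt) / t_transmit)

        print("stop-and-wait utilization: %.2e" % utilization)          # on the order of 7e-6
        print("window for 100%% utilization: %d frames in flight" % window)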

  • gpupdate failing when using Samba 4 AD DC

    - by darthfoolish
    I have a Samba 4 AD domain running with 2 DCs on CentOS 6.5, with a named DNS backend. I have multiple Windows 7 machines joined to this domain, which is fine. However, I can't get GPOs to apply. When running gpupdate, I get the following output:

        The processing of Group Policy failed. Windows attempted to read the file
        \\<domain>\sysvol\<domain>\Policies\{31B2F340-016D-11D2-945F-00C04FB984F9}\gpt.ini

    Obviously, you don't normally see what it's trying to connect to when it's successful, but I would have thought that where <domain> shows up I should be seeing my actual domain. So, what governs what data gets put in between those angle brackets? If it is just supposed to be the domain, then what else could be going wrong? Thanks in advance for any help.

  • In SSIS Convert European Currency Format to United States Currency Format

    - by Rob
    I have an interesting problem. I have an SSIS package that processes account data. We are now processing files from Europe. These files are in a CSV format using text qualifiers. An example of the problem: in the United States the currency format is 123456.99 (we purposely leave the thousands separator out). The files sent from Europe are coming in with two formats: one is 123456,99 and the other is 123.456,00. SSIS attempts to parse the text file and place the value into a NUMERIC(20,2) field. This causes a parsing error in SSIS even with the text qualifiers. If I change the field to CURRENCY it throws a conversion error. I would like SSIS to deal with this directly without requiring the data to be in the United States format. Has anyone had this problem? Any help will be greatly appreciated. Rob
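
    This is not SSIS code, just a sketch of the normalisation logic a Script Component or pre-processing step would need: treat "." as a thousands separator and "," as the decimal separator, then emit a US-style value. The sample values come from the question; everything else is illustrative:

        from decimal import Decimal

        def eu_to_us(amount):
            # '123.456,00' -> Decimal('123456.00'); '123456,99' -> Decimal('123456.99')
            return Decimal(amount.replace(".", "").replace(",", "."))

        for sample in ("123456,99", "123.456,00"):
            print(sample, "->", eu_to_us(sample))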

  • Understanding this error: apr_socket_recv: Connection reset by peer (104)

    - by matthewsteiner
    So, if I do some benchmarking with Apache Bench (ab) and use large numbers of requests, sometimes in the middle of a test I get this error. I don't even know what it means, so how can I fix it? Or is it just something that will happen if the server gets too many hits anyway? The problem is, if I run 10,000 hits, it'll all run perfectly. If I run it again, it'll get to 4,000 and get the error: apr_socket_recv: Connection reset by peer (104). A little about my setup: I have nginx taking static requests and passing dynamic ones to Apache. The file in question is served from cache by nginx, so I guess it's probably got to do with how nginx is handling the requests? Ideas?

  • HTTP/1.1 Status Codes 400 and 417, cannot choose which

    - by TheDeadLike
    I have been referred here in the hope of better help. I've got a processing file which handles the user-submitted data; before that, however, it compares the input from the client to the expected values to ensure there has been no client-side data change. I can say I don't know a lot about HTTP status codes, but I have done some research to choose which one is best for handling unexpected input. So I came up with:
    - 400 Bad Request: the request cannot be fulfilled due to bad syntax.
    - 417 Expectation Failed: the server cannot meet the requirements of the Expect request-header field.
    Now, I cannot really be sure which one to use. I have seen 400 Bad Request used a lot; however, what I get from its description is that the error is due to a nonexistent request rather than illegal input. On the other hand, 417 Expectation Failed seems to fit my use, but I have never seen or experimented with this status before. I need your experience and opinions, thanks a lot! For full details, with form/process page drafts and my experiments, follow this link.
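
    Since 417 is defined in terms of the Expect request header (as quoted above), one common reading is that tampered or unexpected form input is a 400-class problem. A minimal WSGI sketch of that approach — the field name and expected value are invented purely for illustration:

        from urllib.parse import parse_qs

        EXPECTED_PLAN = "basic"  # hypothetical expected value

        def app(environ, start_response):
            size = int(environ.get("CONTENT_LENGTH") or 0)
            form = parse_qs(environ["wsgi.input"].read(size).decode("utf-8"))
            if form.get("plan", [None])[0] != EXPECTED_PLAN:
                # Client sent something other than what the server handed out.
                start_response("400 Bad Request", [("Content-Type", "text/plain")])
                return [b"Submitted data does not match what the server expected."]
            start_response("200 OK", [("Content-Type", "text/plain")])
            return [b"OK"]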

  • Hide notification area GPO not applying

    - by Richard
    I have created a GPO to hide the notification area on Windows XP SP3. The GPO must apply to all students, but only in certain rooms, so I've also enabled loopback processing on the GPO and linked it to the OUs the computers are in. I've then added a group to the security filter that contains all student accounts. This is not applying; it doesn't even show up in gpresult. I have also tried linking it in the Students OU, which contains all student accounts, and applying a security filter with a group of the computers I want it to apply to. This didn't work either. It's possible I'm missing something straightforward. Would a WMI filter do the job, and if so, how would I go about writing one so that it'll only apply to computers whose names begin with XX-RT, for example?
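
    A WMI filter is just a WQL query evaluated on the client. As a sketch, this is the sort of query a name-prefix filter would use, testable locally with the third-party Python wmi package on a Windows machine; the XX-RT prefix comes from the question, the rest is illustrative:

        import wmi

        WQL = "SELECT * FROM Win32_ComputerSystem WHERE Name LIKE 'XX-RT%'"

        conn = wmi.WMI()
        matches = conn.query(WQL)
        print("filter would apply to this machine:", bool(matches))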

  • Using Performance Monitor To Get IIS7 Response Turnaround Time

    - by alphadogg
    I have an MVC2 web application on W2K8R2/IIS7 that I'd like to benchmark/monitor. Some XHR requests by a browser-based client are suddenly taking 8-10 seconds when they used to take much less time (as per Chrome Developer Tools timings). The underlying SQL Server queries, using the same params, run in 1.4 s according to the total execution time client statistics from SSMS. I'm assuming that there are various counters that can specifically dissect time taken/waiting/processing between IIS7 itself and the web application? For example, can I check how long it takes to get a response from the IIS7 app and the DB, and how long IIS7 itself takes to serve the request?

  • Predictive vs Least Connection Load Balancing Techniques

    - by Mani
    I have a Windows-based desktop application that communicates via TCP to the application servers (Windows 2003). There are no sticky sessions between client calls. We have exactly 2 servers to load balance, and we are thinking of using an F5 hardware NLB. The application is of a heavy-load type, not doing much business logic in the services but retrieving quite a large amount of data most of the time, maybe 5,000 to 10,000 records on average. It is used mainly for storing and retrieving data, with no special processing of data or calculations running on the server side. I am favouring 'predictive', considering that my services sometimes take a while to return data, and hence tracking that feedback should yield better routing, as predictive does. I am not sure if the given data is sufficient to suggest some ideas, but considering this, what would be some suggestions/things to consider when choosing between Predictive and Least Connections? Thanks.

  • Firefox "auto-complete" is very slow

    - by netvope
    Firefox version: 3.6. My places.sqlite is rather big (114 MB, after being optimized by SpeedyFox). If I turn on auto-complete, it may take 1 or 2 seconds for Firefox to accept a newly typed URL. To reproduce the issue: type a URL into the URL bar and press Enter. Nothing happens, and Firefox consumes 100% CPU (actually 50% of 2 cores) for 1 to 2 seconds; then Firefox starts the network connection and loads the webpage. Since it consumes 100% CPU, I don't think the bottleneck is the disk. I have some experience with SQLite and I know a 100 MB DB is very small. To cause this delay, Firefox must be doing some expensive processing or running inefficient queries. The issue does not appear if auto-complete is turned off, or the URL is frequently used, or a new profile with no history is used. Does anyone have any idea how to solve the problem? Should I file this as a bug? I don't want to give up my 100 MB history, but I don't want to give up auto-complete either :)

  • mysql settings - using the available resources

    - by Christian Payne
    I've got a lot of processing work I need to run on a MySQL server. I've installed MySQL 5.1.45-community on a Win 2007 64-bit box. It's running on a Xeon, 3 GHz, 6 processors, with 8 GB of RAM. It doesn't seem to matter what queries I run (or how many I run at the same time): when I look in Task Manager, I'll see one processor maxed out at 100% while the other 5 are idle. Memory is static at 1.54 GB. When I installed MySQL, I used the wizard and selected the default "server" (not workstation) option. I feel like I should be getting more bang for my buck. Is there something else I should be monitoring, or something I should change to use the other system resources?
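
    One detail worth knowing: MySQL executes a single query on a single thread, so the idle cores only get used when the work arrives over several connections at once. A rough Python illustration of splitting a workload across parallel connections — the credentials, table, and id-range split are placeholders, not from the question:

        import threading
        import mysql.connector

        def process_range(lo, hi):
            # Each thread gets its own connection, so its query can run on its own core.
            conn = mysql.connector.connect(user="app", password="secret",
                                           host="localhost", database="work")
            cur = conn.cursor()
            cur.execute("SELECT SUM(amount) FROM jobs WHERE id BETWEEN %s AND %s", (lo, hi))
            print(lo, hi, cur.fetchone()[0])
            conn.close()

        threads = [threading.Thread(target=process_range,
                                    args=(i * 100000, (i + 1) * 100000 - 1))
                   for i in range(6)]
        for t in threads:
            t.start()
        for t in threads:
            t.join()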

  • automatic IIS worker process recycle fails

    - by Sander Rijken
    The server is set to its default configuration to recycle the app pool every 1740 minutes. When this happens the following message is logged: A worker process with process id of '1234' serving application pool 'XX' has requested a recycle because the worker process reached its allowed processing time limit. Directly after logging this message, the web site is unresponsive. The only way to get it back online is by running iisreset manually. Does anyone know a fix for this behavior, other than turning the recycle feature off? Is it a known problem?
