Search Results

Search found 22993 results on 920 pages for 'global load balancing'.

Page 10/920 | < Previous Page | 6 7 8 9 10 11 12 13 14 15 16 17  | Next Page >

  • Coldfusion 8 Application Crashes Under Heavy Load

    - by KM01
    Hello. We have a CF8 app that runs for 20-25 minutes before crashing under heavy load (~1200 users). This load is generated by our load testing tool: 1200 users ramped up in 5 minutes (the approximate behavior of our users), running for an hour. We have this app on Solaris 10, Apache 2, JRun 4 and Oracle 10g; the Java version is 1.6. During the initial load tests, the thread dumps pointed to monitor deadlocks involving sessions:
        "jrpp-173": waiting to lock monitor 0x019fdc60 (object 0x6b893530, a java.util.Hashtable), which is held by "scheduler-1"
        "scheduler-1": waiting to lock monitor 0x026c3ce0 (object 0x6abe2f20, a coldfusion.monitor.memory.SessionMemoryMonitor$TopMemoryUsedSessions), which is held by "jrpp-167"
        "jrpp-167": waiting to lock monitor 0x019fdc60 (object 0x6b893530, a java.util.Hashtable), which is held by "scheduler-1"
    We increased the number of sessions relative to the number of CPUs (48 simultaneous threads against 32 CPUs), and the deadlock went away. While varying the simultaneous threads helped a little in terms of response time, the CF server still tanked in 20-25 minutes during all of these tests. We ran more thread dumps and saw a thread locking a monitor, e.g.:
        "jrpp-475" prio=3 tid=0x02230800 nid=0x2c5 runnable [0x4397d000]
        java.lang.Thread.State: RUNNABLE
        at java.util.HashMap.getEntry(HashMap.java:347)
        at java.util.HashMap.containsKey(HashMap.java:335)
        at java.util.HashSet.contains(HashSet.java:184)
        at coldfusion.monitor.memory.MemoryTracker.onAddObject(MemoryTracker.java:124)
        at coldfusion.monitor.memory.MemoryTrackerProxy.onReplaceValue(MemoryTrackerProxy.java:598)
        at coldfusion.monitor.memory.MemoryTrackerProxy.onPut(MemoryTrackerProxy.java:510)
        at coldfusion.util.CaseInsensitiveMap.put(CaseInsensitiveMap.java:250)
        at coldfusion.util.FastHashtable.put(FastHashtable.java:43)
        - locked <0x6f7e1a78> (a coldfusion.runtime.Struct)
        at coldfusion.runtime.CfJspPage._arrayset(CfJspPage.java:1027)
        at coldfusion.runtime.CfJspPage._arraySetAt(CfJspPage.java:2117)
        at cfvalidation2ecfc1052964961$funcSETUSERAUDITDATA.runFunction(/app/docs/apply/cfcs/validation.cfc:377)
    As you can see in the last line above, there were several references to CFMs and CFCs, and those lines have "cflock" tags, which were scoped to the "application". We (the dev team) then changed them to be scoped to a "name". After more load tests there is no locking going on and there are no deadlocks, but now the application tanks in 7-10 minutes. We've gotten system, network and DB reports from the respective admins, and those resources are not being taxed; we've also watched the server stats with server monitor, top, prstat, sar reports, etc. So we believe it is an issue with the CF server or maybe the JVM. I am running out of ideas as to what else we can try. Disclaimer: I am not a CF developer or admin. I am just running the load tests, analyzing the reports and threads, sharing the results with the dev and admin teams, trying the next change, and so on. So far no dice. Has anyone run into something similar? How did you go about diagnosing and troubleshooting? All thoughts and pointers welcome. Thank you for your time! KM
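    For readers hitting the same pattern, here is a hedged sketch of the kind of cflock change the post describes: moving from a lock scoped to the whole application to a named lock, so unrelated requests no longer serialize on a single monitor. The lock name and the code inside the tags are made up for illustration; this is not the poster's actual validation.cfc.

        <!--- Before: every request in the app queues on one application-wide lock --->
        <cflock scope="application" type="exclusive" timeout="10">
            <cfset variables.auditData = buildAuditData()> <!--- buildAuditData() is hypothetical --->
        </cflock>

        <!--- After: a named lock; only requests that share this name serialize --->
        <cflock name="userAuditLock" type="exclusive" timeout="10">
            <cfset variables.auditData = buildAuditData()>
        </cflock>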

    Read the article

  • Perform action based on load avg

    - by sfx
    I'm running some web applications on a Debian server and sometimes have to struggle with DDoS attacks. They eat up all my resources and I can no longer SSH into the server. One idea was to drop all connections if the load average is too high, so there are still resources left for me, and to accept new connections again once the load average is low enough. Since this has to work under heavy load, I'm afraid a cron job wouldn't be fast enough or would itself eat too many resources. tl;dr: Is there a way to configure this behavior when the load average is above a specific threshold?
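    As a rough illustration of the idea the poster describes (not a hardened DDoS defence), a small daemon could poll the load average and toggle a firewall rule. The thresholds, port and iptables rule below are assumptions made for the sketch.

        #!/usr/bin/env python
        # Hedged sketch: block new HTTP connections while the 1-minute load average
        # is above HIGH, unblock once it falls below LOW. Run as root.
        import os, subprocess, time

        HIGH, LOW, PORT = 20.0, 5.0, 80          # illustrative thresholds and port
        RULE = ["INPUT", "-p", "tcp", "--dport", str(PORT), "--syn", "-j", "DROP"]
        blocking = False

        while True:
            load1 = os.getloadavg()[0]           # same value that 'uptime' reports
            if load1 > HIGH and not blocking:
                subprocess.call(["iptables", "-I"] + RULE)   # start dropping new connections
                blocking = True
            elif load1 < LOW and blocking:
                subprocess.call(["iptables", "-D"] + RULE)   # accept connections again
                blocking = False
            time.sleep(2)                        # polls faster than cron's 1-minute granularity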

    Read the article

  • No apparent reason for high load average

    - by Oz.
    We have several web servers running on Amazon (EC2) c1.xlarge instances, over Amazon AMI. The servers are duplicates of each other, running the exact same hardware and software. Each server's spec is: 7 GB of memory; 20 EC2 Compute Units (8 virtual cores with 2.5 EC2 Compute Units each); 1690 GB of instance storage; 64-bit platform; I/O Performance: High; API name: c1.xlarge. A couple of weeks ago we ran a yum upgrade on one of the servers. Since that upgrade, the upgraded server has been showing a high load average. Needless to say, we did not update the other servers, and we cannot do so until we understand the reason for this behavior. The strange thing is that when we compare the servers using top or iostat, we cannot find the reason for the high load. Note that we have moved traffic from the "problematic" server to the others, which has made the "problematic" server less crowded in terms of requests, and still its load is higher. Do you have any idea what it could be, or where else we can check? Many thanks for the help! Oz.

        # proper server - w command
        00:42:26 up 2 days, 19:54, 2 users, load average: 0.41, 0.48, 0.49
        USER  TTY    FROM          LOGIN@  IDLE   JCPU   PCPU  WHAT
              pts/1  82.80.137.29  00:28   14:05  0.01s  0.01s -bash
              pts/2  82.80.137.29  00:38   0.00s  0.02s  0.00s w

        # proper server - iostat command
        Linux 3.2.12-3.2.4.amzn1.x86_64 _x86_64_ (8 CPU)
        avg-cpu:  %user  %nice  %system  %iowait  %steal  %idle
                   9.03   0.02     4.26     0.17    0.13  86.39
        Device:   tps  Blk_read/s  Blk_wrtn/s  Blk_read  Blk_wrtn
        xvdap1   1.63        1.50       55.00    367236  13444008
        xvdfp1   4.41       45.93       70.48  11227226  17228552
        xvdfp2   2.61        2.01       59.81    491890  14620104
        xvdfp3   8.16       14.47       94.23   3536522  23034376
        xvdfp4   0.98        0.79       45.86    192818  11209784

        # problematic server - w command
        00:43:26 up 2 days, 21:52, 2 users, load average: 1.35, 1.10, 1.17
        USER  TTY    FROM          LOGIN@  IDLE   JCPU   PCPU  WHAT
              pts/0  82.80.137.29  00:28   15:04  0.02s  0.02s -bash
              pts/1  82.80.137.29  00:38   0.00s  0.05s  0.00s w

        # problematic server - iostat command
        Linux 3.2.20-1.29.6.amzn1.x86_64 _x86_64_ (8 CPU)
        avg-cpu:  %user  %nice  %system  %iowait  %steal  %idle
                   7.97   0.04     3.43     0.19    0.07  88.30
        Device:   tps  Blk_read/s  Blk_wrtn/s  Blk_read  Blk_wrtn
        xvdap1   2.10        1.49       76.54    374660  19253592
        xvdfp1   5.64       40.98       85.92  10308946  21612112
        xvdfp2   3.97        4.32       93.18   1087090  23439488
        xvdfp3  10.87       30.30      115.14   7622474  28961720
        xvdfp4   1.12        0.28       65.54     71034  16487112

    Read the article

  • FTP Load Balancer on EC2

    - by inakiabt
    I need an EC2 instance to balance all incoming FTP connections across a list of FTP servers (EC2 instances too). This list will change dynamically with the load on the FTP servers (launch a new FTP server when the existing ones are overloaded, or shut down an FTP server when the load is low). What do you recommend? An FTP proxy? A DNS-based scheme? A load balancer? Note: the FTP servers must support passive mode.
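    One possible shape for the "FTP proxy / load balancer" option is a plain TCP proxy in front of the control port. The fragment below is only an illustrative sketch: the backend IPs and port ranges are made up, and passive mode needs extra care because the PASV reply carries an address and port that the client must be able to reach.

        # haproxy.cfg fragment (illustrative; IPs and ranges are assumptions)
        listen ftp_control
            bind *:21
            mode tcp
            balance leastconn
            stick-table type ip size 200k expire 30m
            stick on src                    # keep a given client on the same backend
            server ftp1 10.0.1.11:21 check
            server ftp2 10.0.1.12:21 check

        # Passive-mode data connections do not go through port 21: each FTP server
        # would need its own non-overlapping passive port range (e.g. 50000-50999 on
        # ftp1, 51000-51999 on ftp2), with those ranges reachable at the address the
        # server advertises in its PASV replies.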

    Read the article

  • Load testing nginx inside AWS

    - by andy
    I'm trying to load test nginx running on AWS. I need to optimise it to handle 1 Gbps of inbound traffic. Currently I've got it to peak at 85 Mbit/s by running nginx on an m1.large with 4 other machines hitting it using ab with -i (HEAD requests), -k (keepalives), -r (ignore failed requests), -n 500000 -c 20000. I'm struggling to generate more than 85 Mbit/s of traffic from 4 machines, yet when I scp a large file I get nearly 0.25 Gbit/s of traffic going over the network. Are there any tools or approaches I could use to load test nginx that might generate more load? I'm only interested in inbound traffic, so perhaps a DoS tool could help if it chucks away responses? I'm hitting a very small (40 byte) static asset, and have peaked at handling 50K concurrent connections and 25k reqs/s when using just a single load generator machine.

    Read the article

  • global variables in c#.net

    - by scatman
    How can I set a global variable in a C# web application? What I want to do is set a variable on a page (a master page maybe) and access this variable from any page. I want to use neither the cache nor sessions. I think I have to use global.asax... any help?
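    A hedged sketch of two common approaches: a static class, or application state seeded in Global.asax. The names below are illustrative, and either value is shared by every user of the application, so it is not a per-user store.

        using System;
        using System.Web;

        // Option 1: a plain static class, reachable from any page or master page
        // without sessions or cache. (Illustrative name, not a required pattern.)
        public static class AppGlobals
        {
            public static string SiteMessage = "Hello from a static global";
        }

        // Option 2: application state, seeded once when the app starts (Global.asax.cs).
        public class Global : HttpApplication
        {
            protected void Application_Start(object sender, EventArgs e)
            {
                Application["SiteMessage"] = "Hello from application state";
            }
        }

        // Reading either one from any page:
        //   string a = AppGlobals.SiteMessage;
        //   string b = (string)HttpContext.Current.Application["SiteMessage"];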

    Read the article

  • Global variables in Python

    - by rejinacm
    A global variable created in one function cannot be used in another function directly. Instead, I need to store the global variable in a local variable of the function that needs access to it. Am I correct? Why is that so?
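    A small sketch of how Python actually behaves here (the variable name is made up): a name created with the global statement in one function can be read from any other function directly; the global statement is only needed again in functions that assign to it.

        counter = 0            # module-level (global) name

        def create():
            global counter     # needed because this function assigns to the global
            counter = 10

        def read():
            print(counter)     # reading a global needs no declaration at all

        def update():
            global counter     # assigning again, so the declaration is needed again
            counter += 1

        create()
        read()      # prints 10
        update()
        read()      # prints 11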

    Read the article

  • Global menu stopped working for most applications

    - by ltorgo
    I have Ubuntu 12.04 32-bit and I had been enjoying global menus and HUD since the beginning. However, for some strange reason, I stopped getting global menus for most applications and thus no HUD fun anymore. Actually, the only two applications where I still get global menus are Thunderbird and Firefox, so I'm not sure if it has something to do with these two... I've already browsed through some similar reports and tried the following: checking that indicator-appmenu was installed; using Unsettings to switch global menus off and on, with reboots in between; logging in as a different user to see if it was some configuration of my user; also checking, using dconf-editor, that com › canonical › indicator › appmenu › menu-mode = "global". Any hint/help would be appreciated! Thank you in advance.

    Read the article

  • script engine with no global environment (java)

    - by user1886930
    I am curious about how global variables are handled by script engines. I am looking for a script engine that does not preserve the state of global variables between invocations. Are there such engines out there? We are looking for a scripting language we can use under the script engine API for Java. When making multiple invocations of a script engine, top-level calls to the eval() or evaluate() method preserve the state of global variables, meaning that subsequent calls to eval() will see the global variables as they were left by the last invocation. Is there a script engine that does not preserve this state, or that provides the ability to reset the state, so that global variables are at their initial state every time the script engine is invoked?
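    With the standard javax.script API, one way to get this effect without swapping engines is to evaluate each script against a fresh Bindings object, so top-level variables do not leak into the next invocation. A minimal sketch; the engine name and script text are illustrative:

        import javax.script.Bindings;
        import javax.script.ScriptEngine;
        import javax.script.ScriptEngineManager;
        import javax.script.ScriptException;

        public class FreshGlobalsDemo {
            public static void main(String[] args) throws ScriptException {
                ScriptEngine engine = new ScriptEngineManager().getEngineByName("JavaScript");

                String script = "if (typeof counter == 'undefined') counter = 0; counter = counter + 1;";

                // Same engine instance, but each eval gets its own global scope:
                for (int i = 0; i < 3; i++) {
                    Bindings freshGlobals = engine.createBindings();
                    Object result = engine.eval(script, freshGlobals);
                    System.out.println(result);   // prints 1 every time; no state is preserved
                }
                // For contrast, reusing the engine's default bindings would print 1, 2, 3.
            }
        }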

    Read the article

  • `this` in global scope in ECMAScript 6

    - by Nathan Wall
    I've tried looking in the ES6 draft myself, but I'm not sure where to look: can someone tell me if this in ES6 necessarily refers to the global object? Will this object have the same members as the global scope? If you could answer for ES5 that would be helpful as well. I know this in global scope refers to the global object in the browser and in most other ES environments, like Node. I just want to know if that's the behavior defined by the spec or if it's extended behavior that implementers have added (and whether this behavior will continue in ES6 implementations). In addition, is the global object always the same thing as the global scope, or are there distinctions? Update - Why I want to know: I am basically trying to figure out how to get the global object reliably in ES5 and ES6. I can't rely on window because that's specific to the browser, nor can I rely on global because that's specific to environments like Node. I know this in Node can refer to module in module scope, but I think it still refers to global in global scope. I want a cross-environment ES5/ES6 compliant way to get the global object (if possible). It seems like this in global scope does that in all the environments I know of, but I want to know if it's part of the actual spec (and therefore reliable in any environment I may not be familiar with). I also need to know if the global scope and the global object are the same thing per the spec. In other words, will all variables in global scope be the same as globalobject.variable_name?
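    For the practical half of the question, here is a pattern commonly used to grab the global object in both browsers and Node, hedged: the IIFE relies on this being the global object at the top level of a non-strict ES5 script, and the Function fallback will not run under a Content Security Policy that forbids dynamic code.

        // In a non-strict, top-level script, `this` is the global object per ES5.
        var globalObject = (function () {
            if (typeof window !== 'undefined') return window;    // browsers
            if (typeof global !== 'undefined') return global;    // Node.js
            return Function('return this')();                    // last-resort fallback
        })();

        // globalObject.setTimeout, globalObject.String, etc. should now resolve the
        // same way as bare global references in that environment.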

    Read the article

  • ASP.NET Request.ServerVariables["SERVER_PORT_SECURE"] and proxy SSL by load balancer

    - by frankadelic
    We have some legacy ASP.NET code that detects whether a request is secure and redirects to the https version of the page if required. This code uses Request.ServerVariables["SERVER_PORT_SECURE"] to detect if SSL is needed. Our operations team has suggested doing proxy SSL at the load balancer (F5 BIG-IP) instead of on the web servers (assume for the purposes of this question that this is a requirement). The consequence would be that all requests appear as HTTP to the web server. My question: how can we let the web servers know that the incoming connection was secure before it hit the load balancer? Can we continue to use Request.ServerVariables["SERVER_PORT_SECURE"]? Do you know of a load balancer config that will send headers so that no application code changes are needed?
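    If the balancer can be configured to inject a header on connections it terminated over SSL (X-Forwarded-Proto is the usual convention, but the exact header is whatever the F5 admin sets up), the check on the web server can fall back to that header. A hedged sketch, not the poster's existing code:

        using System;
        using System.Web;

        public static class RequestSecurity
        {
            // True when the request arrived over SSL, either directly or at the proxy.
            public static bool IsSecure(HttpRequest request)
            {
                if (request.IsSecureConnection)          // same signal as SERVER_PORT_SECURE
                    return true;

                // Assumed header name: must match whatever the load balancer inserts.
                string forwardedProto = request.Headers["X-Forwarded-Proto"];
                return string.Equals(forwardedProto, "https", StringComparison.OrdinalIgnoreCase);
            }
        }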

    Read the article

  • AWS Load Balancer with a static IP address

    - by user965904
    I have a set-up running on the Amazon cloud with a couple of EC2 instances behind a load balancer. It is important that the site has a unique (static) IP or set of IPs, as I'm plugging in 3rd-party APIs which only accept requests made from IPs that have been added to their whitelist. So basically, unless we can give these 3rd parties a static IP or range of IPs that the requests from the site will always come from, we would be unable to make any calls to them. Does anyone know how to achieve this, given that Elastic IPs are not compatible with load balancers? If I were to look up the IP of the load balancer's DNS name (e.g. dualstack.awseb-BAMobile-ENV-xxxxxxxxx.eu-west-1.elb.amazonaws.com resolves to 200.200.200.200), would that IP be static? Any help/advice is greatly appreciated guys.

    Read the article

  • West Wind WebSurge - an easy way to Load Test Web Applications

    - by Rick Strahl
    A few months ago on a project the subject of load testing came up. We were having some serious issues with a Web application that would start spewing SQL lock errors under somewhat heavy load. These sort of errors can be tough to catch, precisely because they only occur under load and not during typical development testing. To replicate this error more reliably we needed to put a load on the application and run it for a while before these SQL errors would flare up. It’s been a while since I’d looked at load testing tools, so I spent a bit of time looking at different tools and frankly didn’t really find anything that was a good fit. A lot of tools were either a pain to use, didn’t have the basic features I needed, or are extravagantly expensive. In  the end I got frustrated enough to build an initially small custom load test solution that then morphed into a more generic library, then gained a console front end and eventually turned into a full blown Web load testing tool that is now called West Wind WebSurge. I got seriously frustrated looking for tools every time I needed some quick and dirty load testing for an application. If my aim is to just put an application under heavy enough load to find a scalability problem in code, or to simply try and push an application to its limits on the hardware it’s running I shouldn’t have to have to struggle to set up tests. It should be easy enough to get going in a few minutes, so that the testing can be set up quickly so that it can be done on a regular basis without a lot of hassle. And that was the goal when I started to build out my initial custom load tester into a more widely usable tool. If you’re in a hurry and you want to check it out, you can find more information and download links here: West Wind WebSurge Product Page Walk through Video Download link (zip) Install from Chocolatey Source on GitHub For a more detailed discussion of the why’s and how’s and some background continue reading. How did I get here? When I started out on this path, I wasn’t planning on building a tool like this myself – but I got frustrated enough looking at what’s out there to think that I can do better than what’s available for the most common simple load testing scenarios. When we ran into the SQL lock problems I mentioned, I started looking around what’s available for Web load testing solutions that would work for our whole team which consisted of a few developers and a couple of IT guys both of which needed to be able to run the tests. It had been a while since I looked at tools and I figured that by now there should be some good solutions out there, but as it turns out I didn’t really find anything that fit our relatively simple needs without costing an arm and a leg… I spent the better part of a day installing and trying various load testing tools and to be frank most of them were either terrible at what they do, incredibly unfriendly to use, used some terminology I couldn’t even parse, or were extremely expensive (and I mean in the ‘sell your liver’ range of expensive). Pick your poison. There are also a number of online solutions for load testing and they actually looked more promising, but those wouldn’t work well for our scenario as the application is running inside of a private VPN with no outside access into the VPN. Most of those online solutions also ended up being very pricey as well – presumably because of the bandwidth required to test over the open Web can be enormous. 
When I asked around on Twitter what people were using– I got mostly… crickets. Several people mentioned Visual Studio Load Test, and most other suggestions pointed to online solutions. I did get a bunch of responses though with people asking to let them know what I found – apparently I’m not alone when it comes to finding load testing tools that are effective and easy to use. As to Visual Studio, the higher end skus of Visual Studio and the test edition include a Web load testing tool, which is quite powerful, but there are a number of issues with that: First it’s tied to Visual Studio so it’s not very portable – you need a VS install. I also find the test setup and terminology used by the VS test runner extremely confusing. Heck, it’s complicated enough that there’s even a Pluralsight course on using the Visual Studio Web test from Steve Smith. And of course you need to have one of the high end Visual Studio Skus, and those are mucho Dinero ($$$) – just for the load testing that’s rarely an option. Some of the tools are ultra extensive and let you run analysis tools on the target serves which is useful, but in most cases – just plain overkill and only distracts from what I tend to be ultimately interested in: Reproducing problems that occur at high load, and finding the upper limits and ‘what if’ scenarios as load is ramped up increasingly against a site. Yes it’s useful to have Web app instrumentation, but often that’s not what you’re interested in. I still fondly remember early days of Web testing when Microsoft had the WAST (Web Application Stress Tool) tool, which was rather simple – and also somewhat limited – but easily allowed you to create stress tests very quickly. It had some serious limitations (mainly that it didn’t work with SSL),  but the idea behind it was excellent: Create tests quickly and easily and provide a decent engine to run it locally with minimal setup. You could get set up and run tests within a few minutes. Unfortunately, that tool died a quiet death as so many of Microsoft’s tools that probably were built by an intern and then abandoned, even though there was a lot of potential and it was actually fairly widely used. Eventually the tools was no longer downloadable and now it simply doesn’t work anymore on higher end hardware. West Wind Web Surge – Making Load Testing Quick and Easy So I ended up creating West Wind WebSurge out of rebellious frustration… The goal of WebSurge is to make it drop dead simple to create load tests. It’s super easy to capture sessions either using the built in capture tool (big props to Eric Lawrence, Telerik and FiddlerCore which made that piece a snap), using the full version of Fiddler and exporting sessions, or by manually or programmatically creating text files based on plain HTTP headers to create requests. I’ve been using this tool for 4 months now on a regular basis on various projects as a reality check for performance and scalability and it’s worked extremely well for finding small performance issues. I also use it regularly as a simple URL tester, as it allows me to quickly enter a URL plus headers and content and test that URL and its results along with the ability to easily save one or more of those URLs. A few weeks back I made a walk through video that goes over most of the features of WebSurge in some detail: Note that the UI has slightly changed since then, so there are some UI improvements. 
Most notably the test results screen has been updated recently to a different layout and to provide more information about each URL in a session at a glance. The video and the main WebSurge site has a lot of info of basic operations. For the rest of this post I’ll talk about a few deeper aspects that may be of interest while also giving a glance at how WebSurge works. Session Capturing As you would expect, WebSurge works with Sessions of Urls that are played back under load. Here’s what the main Session View looks like: You can create session entries manually by individually adding URLs to test (on the Request tab on the right) and saving them, or you can capture output from Web Browsers, Windows Desktop applications that call services, your own applications using the built in Capture tool. With this tool you can capture anything HTTP -SSL requests and content from Web pages, AJAX calls, SOAP or REST services – again anything that uses Windows or .NET HTTP APIs. Behind the scenes the capture tool uses FiddlerCore so basically anything you can capture with Fiddler you can also capture with Web Surge Session capture tool. Alternately you can actually use Fiddler as well, and then export the captured Fiddler trace to a file, which can then be imported into WebSurge. This is a nice way to let somebody capture session without having to actually install WebSurge or for your customers to provide an exact playback scenario for a given set of URLs that cause a problem perhaps. Note that not all applications work with Fiddler’s proxy unless you configure a proxy. For example, .NET Web applications that make HTTP calls usually don’t show up in Fiddler by default. For those .NET applications you can explicitly override proxy settings to capture those requests to service calls. The capture tool also has handy optional filters that allow you to filter by domain, to help block out noise that you typically don’t want to include in your requests. For example, if your pages include links to CDNs, or Google Analytics or social links you typically don’t want to include those in your load test, so by capturing just from a specific domain you are guaranteed content from only that one domain. Additionally you can provide url filters in the configuration file – filters allow to provide filter strings that if contained in a url will cause requests to be ignored. Again this is useful if you don’t filter by domain but you want to filter out things like static image, css and script files etc. Often you’re not interested in the load characteristics of these static and usually cached resources as they just add noise to tests and often skew the overall url performance results. In my testing I tend to care only about my dynamic requests. SSL Captures require Fiddler Note, that in order to capture SSL requests you’ll have to install the Fiddler’s SSL certificate. The easiest way to do this is to install Fiddler and use its SSL configuration options to get the certificate into the local certificate store. There’s a document on the Telerik site that provides the exact steps to get SSL captures to work with Fiddler and therefore with WebSurge. Session Storage A group of URLs entered or captured make up a Session. Sessions can be saved and restored easily as they use a very simple text format that simply stored on disk. The format is slightly customized HTTP header traces separated by a separator line. 
The headers are standard HTTP headers except that the full URL instead of just the domain relative path is stored as part of the 1st HTTP header line for easier parsing. Because it’s just text and uses the same format that Fiddler uses for exports, it’s super easy to create Sessions by hand manually or under program control writing out to a simple text file. You can see what this format looks like in the Capture window figure above – the raw captured format is also what’s stored to disk and what WebSurge parses from. The only ‘custom’ part of these headers is that 1st line contains the full URL instead of the domain relative path and Host: header. The rest of each header are just plain standard HTTP headers with each individual URL isolated by a separator line. The format used here also uses what Fiddler produces for exports, so it’s easy to exchange or view data either in Fiddler or WebSurge. Urls can also be edited interactively so you can modify the headers easily as well: Again – it’s just plain HTTP headers so anything you can do with HTTP can be added here. Use it for single URL Testing Incidentally I’ve also found this form as an excellent way to test and replay individual URLs for simple non-load testing purposes. Because you can capture a single or many URLs and store them on disk, this also provides a nice HTTP playground where you can record URLs with their headers, and fire them one at a time or as a session and see results immediately. It’s actually an easy way for REST presentations and I find the simple UI flow actually easier than using Fiddler natively. Finally you can save one or more URLs as a session for later retrieval. I’m using this more and more for simple URL checks. Overriding Cookies and Domains Speaking of HTTP headers – you can also overwrite cookies used as part of the options. One thing that happens with modern Web applications is that you have session cookies in use for authorization. These cookies tend to expire at some point which would invalidate a test. Using the Options dialog you can actually override the cookie: which replaces the cookie for all requests with the cookie value specified here. You can capture a valid cookie from a manual HTTP request in your browser and then paste into the cookie field, to replace the existing Cookie with the new one that is now valid. Likewise you can easily replace the domain so if you captured urls on west-wind.com and now you want to test on localhost you can do that easily easily as well. You could even do something like capture on store.west-wind.com and then test on localhost/store which would also work. Running Load Tests Once you’ve created a Session you can specify the length of the test in seconds, and specify the number of simultaneous threads to run each session on. Sessions run through each of the URLs in the session sequentially by default. One option in the options list above is that you can also randomize the URLs so each thread runs requests in a different order. This avoids bunching up URLs initially when tests start as all threads run the same requests simultaneously which can sometimes skew the results of the first few minutes of a test. While sessions run some progress information is displayed: By default there’s a live view of requests displayed in a Console-like window. On the bottom of the window there’s a running total summary that displays where you’re at in the test, how many requests have been processed and what the requests per second count is currently for all requests. 
Note that for tests that run over a thousand requests a second it’s a good idea to turn off the console display. While the console display is nice to see that something is happening and also gives you slight idea what’s happening with actual requests, once a lot of requests are processed, this UI updating actually adds a lot of CPU overhead to the application which may cause the actual load generated to be reduced. If you are running a 1000 requests a second there’s not much to see anyway as requests roll by way too fast to see individual lines anyway. If you look on the options panel, there is a NoProgressEvents option that disables the console display. Note that the summary display is still updated approximately once a second so you can always tell that the test is still running. Test Results When the test is done you get a simple Results display: On the right you get an overall summary as well as breakdown by each URL in the session. Both success and failures are highlighted so it’s easy to see what’s breaking in your load test. The report can be printed or you can also open the HTML document in your default Web Browser for printing to PDF or saving the HTML document to disk. The list on the right shows you a partial list of the URLs that were fired so you can look in detail at the request and response data. The list can be filtered by success and failure requests. Each list is partial only (at the moment) and limited to a max of 1000 items in order to render reasonably quickly. Each item in the list can be clicked to see the full request and response data: This particularly useful for errors so you can quickly see and copy what request data was used and in the case of a GET request you can also just click the link to quickly jump to the page. For non-GET requests you can find the URL in the Session list, and use the context menu to Test the URL as configured including any HTTP content data to send. You get to see the full HTTP request and response as well as a link in the Request header to go visit the actual page. Not so useful for a POST as above, but definitely useful for GET requests. Finally you can also get a few charts. The most useful one is probably the Request per Second chart which can be accessed from the Charts menu or shortcut. Here’s what it looks like:   Results can also be exported to JSON, XML and HTML. Keep in mind that these files can get very large rather quickly though, so exports can end up taking a while to complete. Command Line Interface WebSurge runs with a small core load engine and this engine is plugged into the front end application I’ve shown so far. There’s also a command line interface available to run WebSurge from the Windows command prompt. Using the command line you can run tests for either an individual URL (similar to AB.exe for example) or a full Session file. By default when it runs WebSurgeCli shows progress every second showing total request count, failures and the requests per second for the entire test. A silent option can turn off this progress display and display only the results. The command line interface can be useful for build integration which allows checking for failures perhaps or hitting a specific requests per second count etc. It’s also nice to use this as quick and dirty URL test facility similar to the way you’d use Apache Bench (ab.exe). Unlike ab.exe though, WebSurgeCli supports SSL and makes it much easier to create multi-URL tests using either manual editing or the WebSurge UI. 
    Current Status Currently West Wind WebSurge is still in Beta status. I’m still adding small new features and tweaking the UI in an attempt to make it as easy and self-explanatory as possible to run. Documentation for the UI and specialty features is also still a work in progress. I plan on open-sourcing this product, but it won’t be free. There’s a free version available that provides a limited number of threads and request URLs to run. A relatively low-cost license removes the thread and request limitations. Pricing info can be found on the Web site – there’s an introductory price which is $99 at the moment, which I think is reasonable compared to most other for-pay solutions out there that are exorbitant by comparison… The reason the code is not available yet is – well, the UI portion of the app is a bit embarrassing in its current monolithic state. The UI started as a very simple interface originally that later got a lot more complex – yeah, that never happens, right? Unless there’s a lot of interest I don’t foresee re-writing the UI entirely (which would be ideal), but in the meantime at least some cleanup is required before I dare to publish it :-). The code will likely be released with version 1.0. I’m very interested in feedback. Do you think this could be useful to you and provide value over other tools you may or may not have used before? I hope so – it already has provided a ton of value for me and the work I do, which made the development worthwhile at this point. You can leave a comment below, or for more extensive discussions you can post a message on the West Wind Message Board in the WebSurge section. Microsoft MVPs and Insiders get a free License If you’re a Microsoft MVP or a Microsoft Insider you can get a full license for free. Send me a link to your current, official Microsoft profile and I’ll send you a not-for-resale license. Send any messages to [email protected]. Resources For more info on WebSurge and to download it to try it out, use the following links. West Wind WebSurge Home Download West Wind WebSurge Getting Started with West Wind WebSurge Video © Rick Strahl, West Wind Technologies, 2005-2014. Posted in ASP.NET

    Read the article

  • PHP question about global variables and form requests

    - by user220201
    Hi, this is probably a stupid question but I will ask anyway since I have no idea. I have written basic PHP code which serves forms. Say I have a login page and I serve it using the login.php page; it will be called in the login.html page like this - <form action="login.php" method="post"> By this it is also implied that every POST needs its own PHP file, doesn't it? This feels kind of weird. Is there a way to have a single file, say code.php, and just have each of the forms handled by functions instead? EDIT: Specifically, say I have 5 forms that are used one after the other in my application. Say after login the user does A, B, C and D tasks, each of which is sent to the server as a POST request. So instead of having A.php, B.php, C.php and D.php I would like to have a single code.php and have A(), B(), C() and D() as functions. Is there a way to do this? Also, on the same note, how do I deal with, say, a global array (e.g. an array of currently logged-in users) across multiple forms? I want to do this without writing to a DB. I know it's probably better to write to a DB and query, but is it even possible to do it with a global array? The reason I was thinking about having all the form functions in one file is to use a global array. Thanks, - Pav
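    A hedged sketch of the single-file idea: every form posts to the same script and carries a hidden field naming the action, and code.php dispatches to a function. The names below are illustrative. Note also that a plain PHP global array only lives for the duration of one request (each POST starts with fresh memory), so a shared "logged-in users" list needs some store outside the script: session, file, DB, etc.

        <?php
        // code.php - one entry point for several forms (illustrative sketch)

        function handle_login() { /* check credentials ... */ echo "login handled"; }
        function handle_taskA() { /* ... */ echo "task A handled"; }
        function handle_taskB() { /* ... */ echo "task B handled"; }

        $action = isset($_POST['action']) ? $_POST['action'] : '';

        switch ($action) {
            case 'login': handle_login(); break;
            case 'A':     handle_taskA(); break;
            case 'B':     handle_taskB(); break;
            default:      echo "unknown action";
        }
        ?>

        <!-- Each form then points at the same script and names its action: -->
        <form action="code.php" method="post">
            <input type="hidden" name="action" value="login">
            <!-- login fields ... -->
        </form>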

    Read the article

  • system crash after declaring global object of the class

    - by coming out of void
    Hi, I am very new to C++. I am getting a system crash (not a compilation error) when doing the following: I am declaring a global pointer of a class. BGiftConfigFile *bgiftConfig; class BGiftConfigFile : public EftBarclaysGiftConfig { } In this class I am reading tags from an XML file. The system crashes when this pointer is used to retrieve a value. I am writing code for a Verifone terminal. int referenceSetting = bgiftConfig->getreferencesetting(); //system error getreferencesetting() is a member function of the class EftBarclaysGiftConfig. I am confused about the behavior of the pointer in this case. I know I am doing something wrong but couldn't rectify it. When I declare one object of the class locally it retrieves the value properly. BGiftConfigFile bgiftConfig1; int referenceSetting = bgiftConfig1->getreferencesetting(); //working But if I declare this object globally it also crashes the system. I need to fetch values at different locations in my code, so I am forced to use something global. Please suggest how I can rectify this problem.
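    One very common cause of this pattern is a global pointer that is declared but never pointed at an object, sketched below with made-up names (this is not the poster's real class). The global pointer must be assigned a valid object before any member call; a global object variant can additionally run into static initialization order problems if its constructor relies on other globals that are not built yet.

        #include <cstddef>
        #include <iostream>

        class Config {                       // stand-in for the real configuration class
        public:
            int getReferenceSetting() const { return 42; }
        };

        Config *g_config = NULL;             // a global pointer starts out pointing at nothing

        int main()
        {
            // int v = g_config->getReferenceSetting();   // CRASH: dereferences a null pointer

            g_config = new Config();         // create the object first (e.g. after the XML is read)
            int v = g_config->getReferenceSetting();      // now safe
            std::cout << v << std::endl;

            delete g_config;
            g_config = NULL;
            return 0;
        }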

    Read the article

  • NAT and ISP Subnet when load balancing on pfsense?

    - by dannymcc
    I have a pfSense box that I'm trying to plan the configuration for. I am going to be load balancing two ISPs, each with their own /29 static IP subnet. The question I have is about the way those IPs are associated with workstations on the local network. Currently I have some workstations with local (192.168.1.0/29) IP addresses, and other, more complicated workstation setups have their own public IP address. Some of the more complicated systems have a 1:1 NAT configuration where I forward a public IP address to a local IP address. Others, however, are directly on the ISP subnet and cannot be seen on our local network. Is this configuration possible with pfSense? If so, what terms should I be looking for in the documentation? Here is a simple/brief diagram of what I am trying to achieve.

    Read the article

  • LAMP Server without single failure point + Global Server Load Balancing?

    - by José Nobile
    I want to implement a LAMP server (Linux, Apache, MySQL, PHP) without a single point of failure and with Global Server Load Balancing. I have a server in Cali, Colombia, and another server will be installed in Melbourne, Australia; users in America can use the Cali server, and users in Europe, Asia, Africa or Oceania the Melbourne server. If either server fails (or its load is excessively high), the remaining server must answer all requests. Data in MySQL must be kept in sync, and PHP files and any configuration on both servers must be in sync as well. I have read about Google's DNS servers 8.8.8.8 and 8.8.4.4 and anycast, and also about MySQL semisynchronous replication and MySQL Cluster, but what about other things, such as crontabs and the configuration on each server? The solution can't depend on APNIC or BGP, only open source software running on Linux.

    Read the article

  • Delphi - Why is my global variable "inaccessible" when I debug

    - by Antoine Lpy
    I'm building an application that contains around 30 forms. I need to manage sessions, so I would like to have a global LoggedInUser variable accessible from all forms. I read David Heffernan's post about global variables and how to avoid them, but I thought it would be easier to have a global User variable rather than 30 forms each having their own User variable. So I have a unit, GlobalVars: unit GlobalVars; interface uses User; // I defined my TUser class in a unit called User var LoggedInUser: TUser; implementation initialization LoggedInUser:= TUser.Create; finalization LoggedInUser.Free; end. Then in my LoginForm's LoginBtnClick procedure I do: unit FormLogin; interface uses [...],User; type TForm1 = class(TForm) [...] procedure LoginBtnClick(Sender: TObject); private { Private declarations } public end; var Form1: TForm1; AureliusConnection : IDBConnection; implementation {$R *.fmx} uses [...]GlobalVars; procedure TForm1.LoginBtnClick(Sender: TObject); var Manager : TObjectManager; MyCriteria: TCriteria<TUser>; u : TUser; begin Manager := TObjectManageR.Create(AureliusConnection); MyCriteria :=Manager.Find<TUtilisateur> .Add(TExpression.Eq('login',LoginEdit.Text)) .Add(TExpression.Eq('password',PasswordEdit.Text)); u := MyCriteria.UniqueResult; if u = nil then MessageDlg('Login ou mot de passe incorrect',TMsgDlgType.mtError,[TMsgDlgBtn.mbOK],0) else begin LoggedInUser:=u; //Here I assign my local User data to my global User variable Form1.Destroy; A00Form.visible:=true; end; Manager.Free; end; Then in another form I would like to access this LoggedInUser object in the Menu1BtnClick procedure: Unit C01_Deviations; interface uses System.SysUtils, System.Types, System.UITypes, System.Classes, System.Variants, FMX.Types, FMX.Graphics, FMX.Controls, FMX.Forms, FMX.Dialogs, FMX.StdCtrls, FMX.ListView.Types, FMX.ListView, FMX.Objects, FMX.Layouts, FMX.Edit, FMX.Ani; type TC01Form = class(TForm) [...] Menu1Btn: TButton; [...] procedure Menu1BtnClick(Sender: TObject); private { Private declarations } public { Public declarations } end; var C01Form: TC01Form; implementation uses [...]User,GlobalVars; {$R *.fmx} procedure TC01Form.Menu1BtnClick(Sender: TObject); var Assoc : TUtilisateur_FonctionManagement; ValidationOK : Boolean; util : TUser; begin ValidationOK := False; util := GlobalVars.LoggedInUser; // Here I created a local user variable for debug purposes, as I thought it would let me see the user data; but I get "Inaccessible Value" as its value util.Nom:='test'; for Assoc in util.FonctionManagement do // Here is where my initial "access violation" error occurs begin if Assoc.FonctionManagement.Libelle = 'Reponsable équipe HACCP' then begin ValidationOK := True; break; end; end; [...] end; When I debug I see "Inaccessible Value" in the value column for my user variable. Do you have any idea why? I tried putting an integer in this GlobalVars unit, and I was able to set its value from my login form and read it from my other form. I guess I could store the user's id, which is an integer, and then retrieve the user from the database using that id, but that seems really inefficient.

    Read the article

  • Determining the load on a particular core in a multicore processor

    - by S.Man
    In a multicore processor, there are ways to tell a particular application to run on a single core, 2 cores or 3 cores. In such scenarios, how will the scheduler determine the load (number of threads) on a particular core of a multicore processor and accordingly distribute (balance) the load (allocate threads) across the various cores?
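    On Linux, "telling an application to run on particular cores" usually means setting a CPU affinity mask; the scheduler then balances the process's threads only across the allowed cores, using (roughly) its per-CPU run-queue load as the signal. A hedged sketch of restricting the current process to cores 0-2:

        #define _GNU_SOURCE
        #include <sched.h>
        #include <stdio.h>

        int main(void)
        {
            cpu_set_t mask;
            CPU_ZERO(&mask);            /* start with an empty CPU set */
            CPU_SET(0, &mask);          /* allow core 0 */
            CPU_SET(1, &mask);          /* allow core 1 */
            CPU_SET(2, &mask);          /* allow core 2 */

            /* Restrict this process (pid 0 = calling process) to those cores. */
            if (sched_setaffinity(0, sizeof(mask), &mask) != 0) {
                perror("sched_setaffinity");
                return 1;
            }

            /* From here on, the kernel only balances this process's threads
               across cores 0-2, favouring the least-loaded run queue among them. */
            printf("pinned to cores 0-2\n");
            return 0;
        }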

    Read the article

  • In terms of load handling, which is better: one server or two of equivalent power?

    - by seldary
    My goal is to figure out whether I'm better off with one strong server, or multiple weaker servers behind a load balancer. Does splitting the load between servers have an effect on the total load my website could take? It's hard to single that out, because there are of course a lot of parameters that affect the results, so some assumptions: Putting failover considerations aside - I know it matters, but for the sake of the question's simplicity, let's assume nothing fails. The servers in the multiple-servers option have an accumulated "power" equivalent to the one-server option (about the same number of cores and amount of RAM). If that is too theoretical, here is a concrete question that could help: Suppose I have several instances of exactly the same server - let's call it S. Suppose that server S can serve a load of up to X calls per time unit. Will two S servers behind a load balancer serve 2X calls per time unit? Significantly more? Significantly less?

    Read the article

  • Ubuntu Server 12.04 CPU Load

    - by zertux
    I have a server (2x Hexa-Core Xeon E5649 2.53GHz w/HT, with 32GB RAM and 20000 GB bandwidth) running Ubuntu Server 12.04 LTS. The server runs LAMP and serves one website only; the estimated number of simultaneous users is ~15,000. At the moment I have around 2000 users online, each of them running about 50 MySQL queries (small ones, mostly SELECT and INSERT) from the beginning until the end of the session. The server CPU load is high at this number of connections, while RAM usage is almost 1GB out of 32GB. It's worth mentioning that the server was running very fast with no problems at all, but I am concerned about the load average. http://s12.postimage.org/z7hi6mz3h/photo.png

        top - 03:02:43 up 9 min, 2 users, load average: 50.83, 30.14, 12.83
        Tasks: 432 total, 1 running, 430 sleeping, 0 stopped, 1 zombie
        Cpu(s): 0.1%us, 0.2%sy, 0.0%ni, 66.5%id, 33.1%wa, 0.0%hi, 0.0%si, 0.0%st
        Mem: 32939992k total, 3111604k used, 29828388k free, 84108k buffers
        Swap: 2048280k total, 0k used, 2048280k free, 1621640k cached

        PID   USER      PR  NI  VIRT   RES   SHR   S  %CPU  %MEM  TIME+      COMMAND
        2860  root      20   0  25820  2288  1420  S     3   0.0  0:11.18    htop
        1182  root      20   0      0     0     0  D     2   0.0  0:01.46    kjournald
        1935  mysql     20   0  12.3g  161m  7924  S     1   0.5  102:31.45  mysqld
        11    root      20   0      0     0     0  S     0   0.0  0:00.38    kworker/0:1
        1822  www-data  20   0   247m   25m  4188  D     0   0.1  0:01.81    apache2
        2920  www-data  20   0      0     0     0  Z     0   0.0  0:01.20    apache2 <defunct>
        2942  www-data  20   0   247m   23m  3056  D     0   0.1  0:00.20    apache2
        3516  www-data  20   0   247m   23m  3028  D     0   0.1  0:00.06    apache2
        3521  www-data  20   0   247m   23m  3020  D     0   0.1  0:00.09    apache2
        3664  www-data  20   0   247m   23m  3132  D     0   0.1  0:00.09    apache2
        3674  www-data  20   0   247m   23m  3252  D     0   0.1  0:00.06    apache2
        3713  www-data  20   0   247m   23m  3040  D     0   0.1  0:00.09    apache2
        1     root      20   0  24328  2284  1344  S     0   0.0  0:03.09    init
        2     root      20   0      0     0     0  S     0   0.0  0:00.00    kthreadd
        3     root      20   0      0     0     0  S     0   0.0  0:00.01    ksoftirqd/0
        6     root      RT   0      0     0     0  S     0   0.0  0:00.00    migration/0
        7     root      RT   0      0     0     0  S     0   0.0  0:00.00    watchdog/0
        8     root      RT   0      0     0     0  S     0   0.0  0:00.00    migration/1
        9     root      20   0      0     0     0  S     0   0.0  0:00.00    kworker/1:0

        root@server:~/codes# vmstat 1
        procs -----------memory---------- ---swap-- -----io---- -system-- ----cpu----
         r  b  swpd      free     buff    cache    si  so  bi     bo     in     cs   us sy id wa
        19  0     0  29684012   86112  1689844     0   0  19    590    254    231   48  0 47  5
        23  0     0  29704812   86128  1697672     0   0   4    320  11100   8121   77  1 22  0
        33  0     0  29671044   86156  1705308     0   0   0   5440  13190   9140   95  1  4  0
        33  3     0  29670088   86160  1706288     0   0   0  32932  12275   7297   99  0  1  0
        35  0     0  29693456   86188  1710724     0   0   4    676  12701   7867   98  1  1  0
        ^C

    I have not changed any of the default configurations that come with Ubuntu. Is this load normal for such a powerful server? Is there any optimization I can make to Apache/MySQL to minimize the load? What do you recommend?

    Read the article
