Search Results

Search found 65991 results on 2640 pages for 'run app'.


  • Guide to reduce TFS database growth using the Test Attachment Cleaner

    - by terje
    Recently there have been several reports of TFS databases growing too fast and too big. Notably, this has been observed when one has started to use more features of the Testing system. Also, TFS 2010 handles test results differently from TFS 2008, and this leads to more data being stored in the TFS databases. As a consequence of this, some tools have been released to remove unneeded data from the database, along with some fixes for bugs which have been found and corrected during this process. Further, some preventive practices and maintenance rules should be adopted. A lot of people have blogged about this, among them: Anu’s very important blog post here describes both the problem and solutions to handle it. She describes both the Test Attachment Cleaner tool, and also some QFE/CU releases to fix some underlying bugs which prevented the tool from being fully effective. Brian Harry’s blog post here describes the problem too. This forum thread describes the problem with some solution hints. Ravi Shanker’s blog post here describes best practices on solving this (TBP). Grant Holliday’s blog post here describes strategies to use the Test Attachment Cleaner both to detect space problems and to rectify them.   The problem can be divided into the following areas: publishing of test results from builds, publishing of manual test results and their attachments in particular, publishing of deployment binaries for use during a test run, and bugs in SQL Server preventing total cleanup of data. (All the published data above is published into the TFS database as attachments.) The test results will include all data being collected during the run. Some of this data can grow rather large, like IntelliTrace logs and video recordings. The pushing of binaries, which happens for automated test runs (including tests run during a build using code coverage, which will include all the files in the deployment folder), also contributes a lot to the size of the attached data.   In order to handle this systematically, I have set up a 3-stage process: find out if you have a database space issue; set up your TFS server to minimize potential database issues; and if you have the “problem”, clean up the database and otherwise keep it clean.   Analyze the data. Are your database(s) growing? Are unused test results growing out of proportion? To find out about this you need to query your TFS database for some of the information, and use the Test Attachment Cleaner (TAC) to obtain some more detailed information. If you don’t have too many databases you can use the SQL Server reports from within the Management Studio to analyze the database and table sizes. Or, you can use a set of queries. I often find queries faster to use because I can tweak them the way I want them. But be aware that these queries are undocumented and unsupported and may change when the product team wants to change them. If you have multiple Project Collections, find out which might have problems: (Disclaimer: The queries below work on TFS 2010. They will not work on Dev-11, since the table structure has been changed. I will try to update them for Dev-11 when it is released.) Open a SQL Management Studio session onto the SQL Server where you have your TFS databases. Use the query below to find the Project Collection databases and their sizes, in descending size order.
use master select DB_NAME(database_id) AS DBName, (size/128) SizeInMB FROM sys.master_files where type=0 and substring(db_name(database_id),1,4)='Tfs_' and DB_NAME(database_id)<>'Tfs_Configuration' order by size desc Doing this on one of our SQL servers gives the following results: It is pretty easy to see which collection to start the work on.   Find out which tables are possibly too large. Keep a special watch out for the tbl_AttachmentContent table. Use the script at the bottom of Grant’s blog to find the table sizes in descending size order. In our case we got this result: From Grant’s blog we learnt that tbl_Content is in the Version Control category, so the only major issue we have here is tbl_AttachmentContent.   Find out which team projects may have overly large attachments. In order to use the TAC to find and eventually delete attachment data we need to find out which team projects have these attachments. The team project is a required parameter to the TAC. Use the following query to find this; replace the collection database name with whatever applies in your case:   use Tfs_DefaultCollection select p.projectname, sum(a.compressedlength)/1024/1024 as sizeInMB from dbo.tbl_Attachment as a inner join tbl_testrun as tr on a.testrunid=tr.testrunid inner join tbl_project as p on p.projectid=tr.projectid group by p.projectname order by sum(a.compressedlength) desc In our case we got this result (I had to remove some names), out of more than 100 team projects accumulated over quite some years: As can be seen here, it is pretty obvious that “Byggtjeneste – Projects” is the main team project to take care of, with the ones on lines 2-4 as the next ones.  Check which attachment types take up the most space. It can be nice to know which attachment types take up the space, so run the following query: use Tfs_DefaultCollection select a.attachmenttype, sum(a.compressedlength)/1024/1024 as sizeInMB from dbo.tbl_Attachment as a inner join tbl_testrun as tr on a.testrunid=tr.testrunid inner join tbl_project as p on p.projectid=tr.projectid group by a.attachmenttype order by sum(a.compressedlength) desc We then got this result: From this it is pretty obvious that the problem here is the binary files, as also mentioned in Anu’s blog. Check which file types, by their extension, take up the most space. Run the following query: use Tfs_DefaultCollection select SUBSTRING(filename,len(filename)-CHARINDEX('.',REVERSE(filename))+2,999)as Extension, sum(compressedlength)/1024 as SizeInKB from tbl_Attachment group by SUBSTRING(filename,len(filename)-CHARINDEX('.',REVERSE(filename))+2,999) order by sum(compressedlength) desc This gives a result like this:   Now you should have collected enough information to tell you what to do – if you need to do something at all – and some of the information you need in order to set up your TAC settings file, both for a cleanup and for scheduled maintenance later.    Get your TFS server and environment properly set up. Whether you already have the problem or not, you should ensure the TFS server is set up so that the risk of getting into this problem is minimized. To ensure this you should install the following set of updates and components. The assumption is that your TFS Server is at SP1 level. Install the QFE for KB2608743 – which also contains detailed instructions on its use, download from here. The QFE changes the default settings to not upload deployed binaries, which are used in automated test runs.
Binaries will still be uploaded if: code coverage is enabled in the test settings, or you change UploadDeploymentItem to true in the testsettings file. Be aware that this might be reset back to false by another user who hasn't installed this QFE. The hotfix should be installed on the build servers (the build agents), the machine hosting the Test Controller, local development computers (Visual Studio) and local test computers (MTM). It is not required to install it on the TFS Server, test agents or the build controller – it has no effect on these programs. If you use SQL Server 2008 R2 you should also install CU 10 (or later). This CU fixes a potential problem of hanging “ghost” files. This seems to happen only in certain trigger situations, but to ensure it doesn’t bite you, it is better to make sure this CU is installed. There is no such CU for SQL Server 2008 pre-R2. Workaround: if you suspect hanging ghost files, they can (with some mental effort) be deduced from the ghost counters using the following SQL query: use master SELECT DB_NAME(database_id) as 'database',OBJECT_NAME(object_id) as 'objectname', index_type_desc,ghost_record_count,version_ghost_record_count,record_count,avg_record_size_in_bytes FROM sys.dm_db_index_physical_stats (DB_ID(N'<DatabaseName>'), OBJECT_ID(N'<TableName>'), NULL, NULL , 'DETAILED') The problem is a stalled ghost cleanup process. To resolve it, stop all components that depend on the SQL Server, like the TFS Server and SPS services (that is, all applications that connect to the SQL Server), then restart the SQL Server, and finally start up all dependent processes again. (I would guess a complete server reboot would do the trick too.) After this the ghost cleanup process will run properly again. The fix will come in the next CU cycle for SQL Server R2 SP1. The R2 pre-SP1 and R2 SP1 have separate maintenance cycles and are maintained individually; each has its own set of CUs. When it comes I will add the link to that CU here. The “hanging ghost file” issue came up after one had run the TAC and deleted enormous amounts of data. The SQL Server can get into this hanging state (without the QFE) in certain cases due to this. And of course, install and set up the Test Attachment Cleaner command line power tool. This should be done following some guidelines from Ravi Shanker: “When you run TAC, ensure that you are deleting small chunks of data at regular intervals (say run TAC every night at 3AM to delete data that is between age 730 to 731 days) – this will ensure that small amounts of data are being deleted and SQL ghosted record cleanup can catch up with the number of deletes performed. “ This rule minimizes the risk of the ghost hang problem occurring, and further makes it easier for the SQL Server ghosting process to work smoothly. “Run DBCC SHRINKDB post the ghosted records are cleaned up to physically reclaim the space on the file system” This is the last step in a 3-step process of removing SQL Server data: first the records are logically deleted, then they are cleaned out by the ghosting process, and finally the space is reclaimed using the shrink command. Cleaning out the attachments: the TAC is run from the command line using a set of parameters and controlled by a settings file. The parameters point out a server URI including the team project collection and also point at a specific team project. So in order to run this for multiple team projects regularly, one has to set up a script to run the TAC multiple times, once for each team project.
When you install the TAC there is a very useful readme file in the same directory. When the deployment binaries are published to the TFS server, ALL items are published up from the deployment folder. That often means many more files than you would assume are necessary. This is a brute force technique. It works, but you need to take care when cleaning up. Grant has shown how their settings file looks in his blog post, removing all attachments older than 180 days, as long as there are no active work items connected to them. This setting can be useful to clean out all items, both in a one-off clean-up operation and in a general maintenance schedule. There are two scenarios we need to consider: cleaning up an existing overgrown database, and maintaining a server to avoid an overgrown database using a scheduled TAC.   1. Cleaning up a database which has grown too big due to these attachments. This job is a “once” job. We do this once and then move on to make sure it won’t happen again, by taking the actions in 2) below. In this scenario you should only consider the large files. Your goal should be to simply reduce the size, and not bother about the smaller stuff. That can be left to a scheduled TAC cleanup (2 below). Here you can use a very general settings file and just remove the large attachments, or you can choose to remove any old items. Grant’s settings file is an example of the latter. A settings file to remove only large attachments could look like this: <!-- Scenario : Remove large files --> <DeletionCriteria> <TestRun /> <Attachment> <SizeInMB GreaterThan="10" /> </Attachment> </DeletionCriteria> Or like this: if you want to remove only dll’s and pdb’s above that size, add an Extensions section. Without that section, attachments with any extension will be deleted. <!-- Scenario : Remove large files of type dll's and pdb's --> <DeletionCriteria> <TestRun /> <Attachment> <SizeInMB GreaterThan="10" /> <Extensions> <Include value="dll" /> <Include value="pdb" /> </Extensions> </Attachment> </DeletionCriteria> Before you start up your scheduled maintenance, you should clear out all older items. 2. Scheduled maintenance using the TAC. Here you run on a schedule every night, removing old items in small batches. It is important to run this often, like every night, in order to keep the number of deleted items low; that way the SQL ghost process works better. One approach could be to delete all items older than some number of days, let’s say 180 days. This could be combined with restricting it to keep attachments with active or resolved bugs. Doing this every night ensures that only small amounts of data are deleted. <!-- Scenario : Remove old items except if they have active or resolved bugs --> <DeletionCriteria> <TestRun> <AgeInDays OlderThan="180" /> </TestRun> <Attachment /> <LinkedBugs> <Exclude state="Active" /> <Exclude state="Resolved"/> </LinkedBugs> </DeletionCriteria> In my experience there are projects which are left with active or resolved work items, although no further work is done. It can be wise to have a cleanup process with no restrictions on linked bugs at all. Note that you then have to remove the whole LinkedBugs section. An approach which could work better here is a two-step one: use the schedule above, with no LinkedBugs section, as a sweeper task taking away all data older than you could care about. Then have another scheduled TAC task to more specifically take out attachments that you are not likely to use.
This task could be much more specific, and based on your analysis clean out what you know is troublesome data. <!-- Scenario : Remove specific files early --> <DeletionCriteria> <TestRun > <AgeInDays OlderThan="30" /> </TestRun> <Attachment> <SizeInMB GreaterThan="10" /> <Extensions> <Include value="iTrace"/> <Include value="dll"/> <Include value="pdb"/> <Include value="wmv"/> </Extensions> </Attachment> <LinkedBugs> <Exclude state="Active" /> <Exclude state="Resolved" /> </LinkedBugs> </DeletionCriteria> The readme document for the TAC says that it recognizes “internal” extensions, but it does in fact recognize any extension. To run the tool, use the following command: tcmpt attachmentcleanup /collection:your_tfs_collection_url /teamproject:your_team_project /settingsfile:path_to_settingsfile /outputfile:%temp%/teamproject.tcmpt.log /mode:delete   Shrinking the database: you could run a shrink database command after the TAC has run in cases where a lot of data has been deleted. In this case you SHOULD do it, to free up all that space. But after the shrink operation you should rebuild the indexes, since the shrink operation will leave the database in a very fragmented state, which will reduce performance. Note that you need to rebuild the indexes; reorganizing is not enough. For smaller amounts of data you should NOT shrink the database, since the data will be reused by the SQL Server when it needs to add more records. In fact, it is regarded as bad practice to shrink the database regularly. So on a daily maintenance schedule you should NOT shrink the database. To shrink the database you run a DBCC SHRINKDATABASE command, and then follow up with an index rebuild afterwards. I find the easiest way to do this is to create a SQL maintenance plan including the Shrink Database Task and the Rebuild Index Task, and just execute it when you need to do this.
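Since the TAC takes a single team project per run, the multi-project script mentioned above can be as simple as a loop around the command line just shown. The sketch below is only an illustration: the collection URL, team project names and the settings file path are placeholders, not values from this post.

    @echo off
    rem Hypothetical wrapper: run the Test Attachment Cleaner once per team project.
    set COLLECTION=http://your-tfs-server:8080/tfs/DefaultCollection
    set SETTINGS=C:\TAC\nightly-cleanup.xml
    for %%P in ("TeamProjectA" "TeamProjectB" "TeamProjectC") do (
        tcmpt attachmentcleanup /collection:%COLLECTION% /teamproject:"%%~P" /settingsfile:%SETTINGS% /outputfile:"%temp%\%%~P.tcmpt.log" /mode:delete
    )

Scheduled through Task Scheduler at a quiet hour (as Ravi Shanker suggests), this keeps each night's delete batch small so the SQL ghost cleanup can keep up.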

    Read the article

  • Load and Web Performance Testing using Visual Studio Ultimate 2010-Part 3

    - by Tarun Arora
    Welcome back once again. In Part 1 of Load and Web Performance Testing using Visual Studio 2010 I talked about why performance testing the application is important, the test tools available in Visual Studio Ultimate 2010 and various test rig topologies. In Part 2 of Load and Web Performance Testing using Visual Studio 2010 I discussed the details of web performance and load tests, as well as why it’s important to follow a goal-based pattern while performance testing your application. In Part 3 I’ll be discussing Test Result Analysis, Test Result Drill Through, Test Report Generation, Test Run Comparison, the ASP.NET Profiler and some closing thoughts. Test Results – I see some creepy worms! In Part 2 we put together a web performance test and a load test; let’s run the load test to see how the web site responds to the load simulation. While the load test is running you will be able to see close to real-time analysis in the Load Test Analyser window. You can use the Load Test Analyser to conduct load test analysis in three ways: Monitor a running load test - A condensed set of the performance counter data is maintained in memory. To prevent the results memory requirements from growing unbounded, up to 200 samples for each performance counter are maintained. This includes 100 evenly spaced samples that span the current elapsed time of the run and the most recent 100 samples.         After the load test run is completed - The test controller spools all collected performance counter data to a database while the test is running. Additional data, such as timing details and error details, is loaded into the database when the test completes. The performance data for a completed test is loaded from the database and analysed by the Load Test Analyser. Below you can see a screen shot of the summary view; this provides key results in a format that is compact and easy to read. You can also print the load test summary; this is generated after the test has completed or been stopped.         Analyse the load test results of a previously run load test – We’ll see this in the section where I discuss the comparison between two test runs. The performance counters can be plotted on the graphs. You also have the option to highlight a selected part of the test and view details, and drill down to the user activity chart where you can hover over to see more details of the test run.   Generate Report => Test Run Comparisons The level of reports you can generate using the Load Test Analyser is astonishing. You have the option to create Excel reports and conduct side-by-side analysis of two test results or to track trend analysis. The tool also allows you to export the graph data either to MS Excel or to a CSV file. You can view the ASP.NET profiler report to conduct further analysis as well. View Data and Diagnostic Attachments opens the Choose Diagnostic Data Adapter Attachment dialog box to select an adapter to analyse the result type. For example, you can select an IntelliTrace adapter, click OK and open the IntelliTrace summary for the test agent that was used in the load test.   Compare results This creates a set of reports that compares the data from two load test results using tables and bar charts. I have taken these screen shots from the MSDN documentation; I would highly recommend exploring the wealth of knowledge available on MSDN.
Leaving Thoughts While load testing the application with an excessive load for a longer duration of time, I managed to bring IIS to its knees by piling up a huge queue of requests waiting to be processed. This clearly means that IIS had run out of threads, as all the threads were busy processing existing requests. One easy way of fixing this is by increasing the default number of allocated threads, but this might escalate the problem. The better suggestion is to try and drill down to the actual root cause of the problem. Whenever the garbage collector runs it stops processing any pages, so all requests that come in during that period are queued up; but realistically, garbage collection completes in a fraction of a second. To understand this better let’s look at the .NET heap. It is divided into the large object heap and the small object heap; anything greater than 85 KB in size will be allocated on the large object heap. The large object heap is non-compacting, and remember, large objects are expensive to move around, so if you are allocating something on the large object heap, make sure that you really need it! The small object heap, on the other hand, is divided into generations, so all objects that are supposed to be short-lived live in Gen-0 and the long-living objects eventually move to Gen-2 as garbage collections go through.  As you can see in the picture below, all objects smaller than 85 KB are first assigned to Gen-0. When a new object comes in and finds Gen-0 full, the garbage collection process is started: the process checks for all the dead objects, marks them as valid candidates for deletion to free up memory, and promotes all the remaining objects in Gen-0 to Gen-1. So in the future, whenever you clean up Gen-1 you have to clean up Gen-0 as well. When you fill up Gen-0 again, all of Gen-1’s dead objects are cleaned out and the rest are moved to Gen-2, and Gen-0 objects are moved to Gen-1 to free up Gen-0; but by this time your garbage collection process has started to take much more time than it usually does. Now, as I mentioned earlier, when garbage collection is being run all page requests that come in during that period are queued up. Does this explain why page requests might be getting queued up? Apart from this, it could also be the case that you are waiting for a long-running database process to complete.      Let’s explore the heap a bit more… What is really a case of crisis is when objects live long enough to make it to Gen-2 and then die; this is definitely a high-cost operation. But sometimes you need objects in memory, for example when you cache data: you hold on to the objects because you need to use them right across the user session, which is acceptable. But if you wanted to see what extreme caching can do to your server, then write a simple application that chucks a lot of data into the cache and run a load test over it for about 10-15 minutes, forcing a lot of data into memory and causing the heap to run out of memory. If you get to such a state where you start running out of memory, IIS, as a mode of recovery, restarts the worker process. It is a great way to free up all your memory in the heap, but this would clear the cache. The problem with this is, if the customer had 10 items in their shopping basket and that data was stored in the application cache, the user’s basket will now be empty, forcing them either to get frustrated and go to a competitor website or, if the customer is really patient, to give it another try!
How can you address this? Well, there are two ways of addressing it. 1. Workaround – An x86 processor only allows a maximum of 4 GB of RAM; this means the machine effectively has around 3.4 GB of RAM available. The OS needs about 1.5 GB of RAM to run efficiently, and IIS and the .NET framework also need their share of memory, leaving you a heap of around 800 MB to play with. Because team builds by default build your application in ‘compile as any’ mode, the application is built such that it will run in x86 mode if run on an x86 processor and in x64 mode if run on an x64 processor. The problem with this is that not all applications are really x64 compatible, especially if you are using COM objects or external libraries. So, as a quick win, if you compile your application in x86 mode by changing the ‘compile as any’ selection to ‘compile as x86’ in the team build, you will be able to run your application on an x64 machine in x86 mode (WOW – by running Windows on Windows), and what that means is you could use 8 GB+ worth of RAM; if you take away everything else, your application will roughly get a heap size of at least 4 GB to play with, which is immense. If you need a heap size of more than 4 GB, you have either built software for NASA or there is something fundamentally wrong in your application. 2. Solution – Now that you have put a workaround in place, IIS will not restart the worker process as regularly, which means you can take a breather and start working to get to the root cause of this memory leak. But this begs the question: “How do I identify possible memory leaks in my application?” Well, I won’t say that there is one single tool that can tell you where the memory leak is, but trust me, ‘Performance Profiling’ is a great starting point; it definitely gets you started in the right direction. Let’s have a look at how. Performance Wizard - Start the Performance Wizard and select Instrumentation; this lets you measure function call counts and timings. Before running the performance session, right-click the performance session settings and choose Properties from the context menu to bring up the performance session properties page and, as shown in the screen shot below, check the check boxes in the group ‘.NET memory profiling collection’, namely ‘Collect .NET object allocation information’ and ‘Also collect the .NET object lifetime information’.    Now if you fire off the profiling session on your pages you will notice that the results allow you to view ‘Object Lifetime’, which shows you the number of objects that made it to Gen-0, Gen-1, Gen-2, the large heap, etc. Another great feature of the profiler is that if your application has more than 5% of cases where objects die right after making it to Gen-2 storage, a threshold alert is generated to warn you. You also have the option to view the most expensive methods, and by capturing the IntelliTrace data you can drill in to narrow down to the line of code that is the root cause of the problem. So now we have seen how crucial memory management is and how easy Visual Studio Ultimate 2010 makes it for us to identify and reproduce the problem with the best-of-breed tools in the product. Caching One of the main ways to improve performance is caching, which basically means you tell the web server that instead of going to the database for each request, you keep the data on the web server, and when the user asks for it you serve it from the web server itself. BUT that can have consequences!
Let’s look at some code; trust me, caching code is not very intuitive. I define a cache key for almost all searches made through the common search page and cache the results. The approach works fine: the first time I get the data from the database and the second time the data is served from the cache, a significant performance improvement, EXCEPT when two users try to do the same operation and run into each other. But it is easy to handle this by adding the lock, as you can see in the snippet below. So, as long as a user comes in and finds that the cache is empty, the user takes the lock and starts to populate the cache: no more concurrency issues. But let’s say you are processing 10 requests per second; by the time I have locked the operation to get the results from the database, 9 other users have come in and found that the cache key is null, so after I have come out and populated the cache they will still go in to get the results again. The application will still be faster because the next set of 10 users, and so on, would continue to get data from the cache. BUT if we added another null check after locking to build the cache and before the actual call to the db, then the 9 users who follow me would not make the extra trip to the database at all, and that would really increase the performance. But didn’t I say that the code won’t be very intuitive? Maybe you should leave a comment: you don’t want another developer to come in and think “what a fresher, why is he checking the cache key for null twice?!” The downside of caching is that you are storing the data outside of the database, and the data could be wrong because the updates applied to the database would make the data cached at the web server out of sync. So, how do you invalidate the cache? Well, if you only had one way of updating the data, let’s say only one entry point to the data update, you could write some logic to say that every time new data is entered, set the cache object to null. But this approach will not work as soon as you have several ways of feeding data to the system or your system is scaled out across a farm of web servers. The perfect solution to this is micro caching, which means you cache the query for a set time duration and invalidate the cache after that set duration. The advantage is that every time the user queries for that data within the time span for which you have cached the results, there are no calls made to the database and the data is served right from the server, which makes the response immensely quick. Now, figuring out the appropriate time span for which you micro cache the query results really depends on the application. Let’s say your website gets 10 requests per second; if you retain the cached results for even 1 minute you will have immense performance gains, and you would reduce the hits to the database for searching by 90%. Ever wondered why, when you go to e-bookers.com or xpedia.com or yatra.com to book a flight and you click on the book button because the fare seems too exciting, you get an error message telling you that the fare is not valid any more? Yes, exactly => that is a cache failure! These travel sites or price-comparison engines are not going to hit the database every time you hit the compare button; instead the results will be served from the cache, because the query results are micro cached. It’s a perfect trade-off: by micro caching the results the site gains 100% performance benefits, but every once in a while annoys a customer because the fare has expired.
But the trade-off works in the favour of these sites, as they are still able to process 30+ page requests per second, which means they cater to the site traffic while maybe losing 1 customer every once in a while to a competitor who is also using a similar caching technique; and what are the odds that the user will not come back to their site sooner or later? Recap   Resources Below are some key resources you might like to review. I would highly recommend the documentation, walkthroughs and videos available on MSDN. You can always make use of Fiddler to debug Web Performance Tests. Some community test extensions and plug-ins available on CodePlex might also be of interest to you. The Road Ahead Thank you for taking the time out to read this blog post; you may also want to read Part I and Part II if you haven’t so far. If you enjoyed the post, remember to subscribe to http://feeds.feedburner.com/TarunArora. Questions/feedback/suggestions, etc.: please leave a comment. Next up is ‘Load Testing in the Cloud’, where I’ll be working on exploring the possibilities of running the Test Controller and Agents in the cloud. See you on the other side! Thank You!
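The locking snippet referred to a little earlier is shown as an image in the original post and is not reproduced in this excerpt, so the following is only a reconstruction of the double null-check pattern being described. The class, cache key and QueryDatabase call are invented for illustration; the caching API used is the standard System.Web.Caching one.

    using System;
    using System.Collections.Generic;
    using System.Web;
    using System.Web.Caching;

    public class SearchService
    {
        private static readonly object CacheLock = new object();

        public IList<string> GetSearchResults(string searchTerm)
        {
            string cacheKey = "search:" + searchTerm;
            var results = HttpRuntime.Cache[cacheKey] as IList<string>;
            if (results == null)                  // first check, taken without the lock
            {
                lock (CacheLock)
                {
                    // second check: another request may have filled the cache
                    // while this one was waiting for the lock
                    results = HttpRuntime.Cache[cacheKey] as IList<string>;
                    if (results == null)
                    {
                        results = QueryDatabase(searchTerm);   // the expensive call
                        // micro caching: keep the result for one minute only
                        HttpRuntime.Cache.Insert(cacheKey, results, null,
                            DateTime.UtcNow.AddMinutes(1), Cache.NoSlidingExpiration);
                    }
                }
            }
            return results;
        }

        private IList<string> QueryDatabase(string searchTerm)
        {
            // placeholder for the real data access code
            return new List<string> { searchTerm };
        }
    }

The second null check inside the lock is exactly the non-intuitive part the post warns about, which is why a comment in the code is worth leaving for the next developer.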

    Read the article

  • How to configure a zone cluster on Solaris Cluster 4.0

    - by JuergenS
    This is a short overview on how to configure a zone cluster on Solaris Cluster 4.0. This is a little bit different as in Solaris Cluster 3.2/3.3 because Solaris Cluster 4.0 is only running on Solaris 11. The name of the zone cluster must be unique throughout the global Solaris Cluster and must be configured on a global Solaris Cluster. Please read all the requirements for zone cluster in Solaris Cluster Software Installation Guide for SC4.0. For Solaris Cluster 3.2/3.3 please refer to my previous blog Configuration steps to create a zone cluster in Solaris Cluster 3.2/3.3. A. Configure the zone cluster into the already running global clusterCheck if zone cluster can be created # cluster show-netprops to change number of zone clusters use # cluster set-netprops -p num_zoneclusters=12 Note: 12 zone clusters is the default, values can be customized! Create config file (zc1config) for zone cluster setup e.g: Configure zone cluster # clzc configure -f zc1config zc1 Note: If not using the config file the configuration can also be done manually # clzc configure zc1 Check zone configuration # clzc export zc1 Verify zone cluster # clzc verify zc1 Note: The following message is a notice and comes up on several clzc commands Waiting for zone verify commands to complete on all the nodes of the zone cluster "zc1"... Install the zone cluster # clzc install zc1 Note: Monitor the consoles of the global zone to see how the install proceed! (The output is different on the nodes) It's very important that all global cluster nodes have installed the same set of ha-cluster packages! Boot the zone cluster # clzc boot zc1 Login into non-global-zones of zone cluster zc1 on all nodes and finish Solaris installation. # zlogin -C zc1 Check status of zone cluster # clzc status zc1 Login into non-global-zones of zone cluster zc1 and configure the shell environment for root (for PATH: /usr/cluster/bin, for MANPATH: /usr/cluster/man) # zlogin -C zc1 If using additional name service configure /etc/nsswitch.conf of zone cluster non-global zones. hosts: cluster files netmasks: cluster files Configure /etc/inet/hosts of the zone cluster zones Enter all the logical hosts of non-global zones B. Add resource groups and resources to zone cluster Create a resource group in zone cluster # clrg create -n <zone-hostname-node1>,<zone-hostname-node2> app-rg Note1: Use command # cluster status for zone cluster resource group overview. Note2: You can also run all commands for zone cluster in global cluster by adding the option -Z to the command. e.g: # clrg create -Z zc1 -n <zone-hostname-node1>,<zone-hostname-node2> app-rg Set up the logical host resource for zone cluster In the global zone do: # clzc configure zc1 clzc:zc1 add net clzc:zc1:net set address=<zone-logicalhost-ip> clzc:zc1:net end clzc:zc1 commit clzc:zc1 exit Note: Check that logical host is in /etc/hosts file In zone cluster do: # clrslh create -g app-rg -h <zone-logicalhost> <zone-logicalhost>-rs Set up storage resource for zone cluster Register HAStoragePlus # clrt register SUNW.HAStoragePlus Example1) ZFS storage pool In the global zone do: Configure zpool eg: # zpool create <zdata> mirror cXtXdX cXtXdX and # clzc configure zc1 clzc:zc1 add dataset clzc:zc1:dataset set name=zdata clzc:zc1:dataset end clzc:zc1 verify clzc:zc1 commit clzc:zc1 exit Check setup with # clzc show -v zc1 In the zone cluster do: # clrs create -g app-rg -t SUNW.HAStoragePlus -p zpools=zdata app-hasp-rs Example2) HA filesystem In the global zone do: Configure SVM diskset and SVM devices. 
and # clzc configure zc1 clzc:zc1 add fs clzc:zc1:fs set dir=/data clzc:zc1:fs set special=/dev/md/datads/dsk/d0 clzc:zc1:fs set raw=/dev/md/datads/rdsk/d0 clzc:zc1:fs set type=ufs clzc:zc1:fs add options [logging] clzc:zc1:fs end clzc:zc1 verify clzc:zc1 commit clzc:zc1 exit Check setup with # clzc show -v zc1 In the zone cluster do: # clrs create -g app-rg -t SUNW.HAStoragePlus -p FilesystemMountPoints=/data app-hasp-rs Example3) Global filesystem as loopback file system In the global zone configure global filesystem and it to /etc/vfstab on all global nodes e.g.: /dev/md/datads/dsk/d0 /dev/md/datads/dsk/d0 /global/fs ufs 2 yes global,logging and # clzc configure zc1 clzc:zc1 add fs clzc:zc1:fs set dir=/zone/fs (zc-lofs-mountpoint) clzc:zc1:fs set special=/global/fs (globalcluster-mountpoint) clzc:zc1:fs set type=lofs clzc:zc1:fs end clzc:zc1 verify clzc:zc1 commit clzc:zc1 exit Check setup with # clzc show -v zc1 In the zone cluster do: (Create scalable rg if not already done) # clrg create -p desired_primaries=2 -p maximum_primaries=2 app-scal-rg # clrs create -g app-scal-rg -t SUNW.HAStoragePlus -p FilesystemMountPoints=/zone/fs hasp-rs More details of adding storage available in the Installation Guide for zone cluster Switch resource group and resources online in the zone cluster # clrg online -eM app-rg # clrg online -eM app-scal-rg Test: Switch of the resource group in the zone cluster # clrg switch -n zonehost2 app-rg # clrg switch -n zonehost2 app-scal-rg Add supported dataservice to zone cluster Documentation for SC4.0 is available here Example output: Appendix: To delete a zone cluster do: # clrg delete -Z zc1 -F + Note: Zone cluster uninstall can only be done if all resource groups are removed in the zone cluster. The command 'clrg delete -F +' can be used in zone cluster to delete the resource groups recursively. # clzc halt zc1 # clzc uninstall zc1 Note: If clzc command is not successful to uninstall the zone, then run 'zoneadm -z zc1 uninstall -F' on the nodes where zc1 is configured # clzc delete zc1
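The example zc1config file mentioned near the top of these steps is not included above. As a rough sketch only (the scope commands follow the clzc syntax shown in this post, but node names, paths and addresses are placeholders and should be checked against the SC 4.0 installation guide), such a command file might look like:

    create
    set zonepath=/zones/zc1
    add node
    set physical-host=clusternode1
    set hostname=zc1-host1
    add net
    set address=192.168.10.11
    set physical=net0
    end
    end
    add node
    set physical-host=clusternode2
    set hostname=zc1-host2
    add net
    set address=192.168.10.12
    set physical=net0
    end
    end
    commit

It is then passed to the cluster with # clzc configure -f zc1config zc1, as in step A above.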

    Read the article

  • Configuring ant to run unit tests. Where should libraries be? How should classpath be configured? av

    - by chillitom
    Hi All, I'm trying to run my junit tests using ant. The tests are kicked off using a JUnit 4 test suite. If I run this direct from Eclipse the tests complete without error. However if I run it from ant then many of the tests fail with this error repeated over and over until the junit task crashes. [junit] java.util.zip.ZipException: error in opening zip file [junit] at java.util.zip.ZipFile.open(Native Method) [junit] at java.util.zip.ZipFile.(ZipFile.java:114) [junit] at java.util.zip.ZipFile.(ZipFile.java:131) [junit] at org.apache.tools.ant.AntClassLoader.getResourceURL(AntClassLoader.java:1028) [junit] at org.apache.tools.ant.AntClassLoader$ResourceEnumeration.findNextResource(AntClassLoader.java:147) [junit] at org.apache.tools.ant.AntClassLoader$ResourceEnumeration.nextElement(AntClassLoader.java:130) [junit] at org.apache.tools.ant.util.CollectionUtils$CompoundEnumeration.nextElement(CollectionUtils.java:198) [junit] at sun.misc.CompoundEnumeration.nextElement(CompoundEnumeration.java:43) [junit] at org.apache.tools.ant.taskdefs.optional.junit.JUnitTask.checkForkedPath(JUnitTask.java:1128) [junit] at org.apache.tools.ant.taskdefs.optional.junit.JUnitTask.executeAsForked(JUnitTask.java:1013) [junit] at org.apache.tools.ant.taskdefs.optional.junit.JUnitTask.execute(JUnitTask.java:834) [junit] at org.apache.tools.ant.taskdefs.optional.junit.JUnitTask.executeOrQueue(JUnitTask.java:1785) [junit] at org.apache.tools.ant.taskdefs.optional.junit.JUnitTask.execute(JUnitTask.java:785) [junit] at org.apache.tools.ant.UnknownElement.execute(UnknownElement.java:288) [junit] at sun.reflect.GeneratedMethodAccessor1.invoke(Unknown Source) [junit] at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) [junit] at java.lang.reflect.Method.invoke(Method.java:597) [junit] at org.apache.tools.ant.dispatch.DispatchUtils.execute(DispatchUtils.java:106) [junit] at org.apache.tools.ant.Task.perform(Task.java:348) [junit] at org.apache.tools.ant.Target.execute(Target.java:357) [junit] at org.apache.tools.ant.Target.performTasks(Target.java:385) [junit] at org.apache.tools.ant.Project.executeSortedTargets(Project.java:1337) [junit] at org.apache.tools.ant.Project.executeTarget(Project.java:1306) [junit] at org.apache.tools.ant.helper.DefaultExecutor.executeTargets(DefaultExecutor.java:41) [junit] at org.apache.tools.ant.Project.executeTargets(Project.java:1189) [junit] at org.apache.tools.ant.Main.runBuild(Main.java:758) [junit] at org.apache.tools.ant.Main.startAnt(Main.java:217) [junit] at org.apache.tools.ant.launch.Launcher.run(Launcher.java:257) [junit] at org.apache.tools.ant.launch.Launcher.main(Launcher.java:104) my test running task is as follows: <target name="run-junit-tests" depends="compile-tests,clean-results"> <mkdir dir="${test.results.dir}"/> <junit failureproperty="tests.failed" fork="true" showoutput="yes" includeantruntime="false"> <classpath refid="test.run.path" /> <formatter type="xml" /> <test name="project.AllTests" todir="${basedir}/test-results" /> </junit> <fail if="tests.failed" message="Unit tests failed"/> </target> I've verified that the classpath contains the following as well as all of the program code and libraries: ant-junit.jar ant-launcher.jar ant.jar easymock.jar easymockclassextension.jar junit-4.4.jar I've tried debugging to find out which ZipFile it is trying to open with no luck, I've tried toggling includeantruntime and fork and i've tried running ant with ant -lib test/libs where test/libs contains the ant and junit libraries. 
Any info about what causes this exception or how you've configured ant to successfully run unit tests is gratefully received. ant 1.7.1 (ubuntu), java 1.6.0_10, junit 4.4. Thanks. Update - Fixed: Found my problem. I had included my classes directory in my path using a fileset as opposed to a pathelement; this was causing .class files to be opened as ZipFiles, which of course threw an exception.
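For reference, a path definition along the lines of the fix described in the update might look like the snippet below. The directory names (build/classes, build/test-classes, lib) are assumptions, not taken from the question; the point is that class directories go on the path as pathelements, while only jars are pulled in via a fileset.

    <path id="test.run.path">
        <!-- compiled classes: reference the directory itself, not the individual
             .class files, so Ant does not try to open them as zip archives -->
        <pathelement location="build/classes" />
        <pathelement location="build/test-classes" />
        <!-- jars are the only things that belong in a fileset here -->
        <fileset dir="lib">
            <include name="**/*.jar" />
        </fileset>
    </path>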

    Read the article

  • Configuring Jenkins for running with BitBucket

    - by Claus
    I'm trying to setup Jenkins on my mac mini in order to pull my iOS project source code from BitBucket and build it automatically. I've already gone through the major well know problems generating the ssh keys,uploading them in BitBucket,performing an ssh connection by console for adding the host to the well know list (you can find all my adventure here and here). Now,there are 3 user in my system: A,B and Shared. When I installed Jenkins it automatically placed itself in Shared, but I generated the ssh keys with the user A. So just to be clear In the A home directory there is an .ssh directory with public and private keys. When I try to run by Jenkins job I get this error message: Started by user anonymous Building in workspace /Users/Shared/Jenkins/Home/jobs/myprojectAdHocBuild/workspace Checkout:workspace / /Users/Shared/Jenkins/Home/jobs/myprojectAdHocBuild/workspace - hudson.remoting.LocalChannel@625cb0bb Using strategy: Default Cloning the remote Git repository Cloning repository [email protected]:myuser/myproject.git git --version git version 1.8.0 ERROR: Error cloning remote repo 'origin' : Could not clone [email protected]:myuser/myproject.git hudson.plugins.git.GitException: Could not clone [email protected]:myuser/myproject.git at hudson.plugins.git.GitAPI.clone(GitAPI.java:271) at hudson.plugins.git.GitSCM$2.invoke(GitSCM.java:1036) at hudson.plugins.git.GitSCM$2.invoke(GitSCM.java:978) at hudson.FilePath.act(FilePath.java:851) at hudson.FilePath.act(FilePath.java:824) at hudson.plugins.git.GitSCM.determineRevisionToBuild(GitSCM.java:978) at hudson.plugins.git.GitSCM.checkout(GitSCM.java:1134) at hudson.model.AbstractProject.checkout(AbstractProject.java:1325) at hudson.model.AbstractBuild$AbstractBuildExecution.defaultCheckout(AbstractBuild.java:676) at jenkins.scm.SCMCheckoutStrategy.checkout(SCMCheckoutStrategy.java:88) at hudson.model.AbstractBuild$AbstractBuildExecution.run(AbstractBuild.java:581) at hudson.model.Run.execute(Run.java:1516) at hudson.model.FreeStyleBuild.run(FreeStyleBuild.java:46) at hudson.model.ResourceController.execute(ResourceController.java:88) at hudson.model.Executor.run(Executor.java:236) Caused by: hudson.plugins.git.GitException: Command "/usr/local/git/bin/git clone --progress -o origin [email protected]:myuser/myproject.git /Users/Shared/Jenkins/Home/jobs/myprojectAdHocBuild/workspace" returned status code 128: stdout: Cloning into '/Users/Shared/Jenkins/Home/jobs/myprojectAdHocBuild/workspace'... stderr: Host key verification failed. fatal: Could not read from remote repository. Please make sure you have the correct access rights and the repository exists. at hudson.plugins.git.GitAPI.launchCommandIn(GitAPI.java:885) at hudson.plugins.git.GitAPI.access$000(GitAPI.java:40) at hudson.plugins.git.GitAPI$1.invoke(GitAPI.java:267) at hudson.plugins.git.GitAPI$1.invoke(GitAPI.java:246) at hudson.FilePath.act(FilePath.java:851) at hudson.FilePath.act(FilePath.java:824) at hudson.plugins.git.GitAPI.clone(GitAPI.java:246) ... 
14 more Trying next repository ERROR: Could not clone repository FATAL: Could not clone hudson.plugins.git.GitException: Could not clone at hudson.plugins.git.GitSCM$2.invoke(GitSCM.java:1048) at hudson.plugins.git.GitSCM$2.invoke(GitSCM.java:978) at hudson.FilePath.act(FilePath.java:851) at hudson.FilePath.act(FilePath.java:824) at hudson.plugins.git.GitSCM.determineRevisionToBuild(GitSCM.java:978) at hudson.plugins.git.GitSCM.checkout(GitSCM.java:1134) at hudson.model.AbstractProject.checkout(AbstractProject.java:1325) at hudson.model.AbstractBuild$AbstractBuildExecution.defaultCheckout(AbstractBuild.java:676) at jenkins.scm.SCMCheckoutStrategy.checkout(SCMCheckoutStrategy.java:88) at hudson.model.AbstractBuild$AbstractBuildExecution.run(AbstractBuild.java:581) at hudson.model.Run.execute(Run.java:1516) at hudson.model.FreeStyleBuild.run(FreeStyleBuild.java:46) at hudson.model.ResourceController.execute(ResourceController.java:88) at hudson.model.Executor.run(Executor.java:236) As you can see it fails when Hudson tries to run the GIT command. The odd thing is that if I run /usr/local/git/bin/git clone --progress -o origin [email protected]:myuser/myproject.git /Users/Shared/Jenkins/Home/jobs/myprojectAdHocBuild/workspace in my console, it works fine (after fixing a small problem related to the folder write permissions with chmod). I found a post reporting a similar error which names a number of possible options, but I'm not sure how to perform these operations correctly on my console. It looks like Jenkins is trying to run the command as a user which doesn't have permission to retrieve the appropriate keys from my .ssh directory. Not really sure. Maybe this output can help: MacMini:~ myuser$ ps axu | grep "/jenkins" myuser 11660 0.0 4.6 2918124 97096 ?? S 6:59pm 1:05.63 /usr/bin/java -jar /Users/myuser/Library/Caches/org.jenkins-ci.jenkins/jenkins.war jenkins 9896 0.0 9.0 2939824 188552 ?? Ss 4:06pm 17:55.91 /usr/bin/java -jar /Applications/Jenkins/jenkins.war myuser 11930 0.0 0.0 2432768 588 s000 S+ 10:28am 0:00.00 grep /jenkins MacMini:~ myuser$ ps axu | grep tomcat myuser 11932 0.0 0.0 2432768 588 s000 S+ 10:28am 0:00.00 grep tomcat MacMini:~ myuser$ I really hope to fix this problem, because I would like to write a very detailed tutorial with all the information I found disseminated around the web.
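One common way to approach this (a sketch, based on the assumption visible in the log and ps output that the Jenkins daemon runs as its own 'jenkins' user with its home under /Users/Shared/Jenkins/Home) is to give that user its own key pair and known_hosts entry, rather than relying on the keys in user A's home directory:

    # user name and paths are assumptions taken from the output above
    sudo -u jenkins -H sh -c 'mkdir -p ~/.ssh && ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa'
    # add the new ~/.ssh/id_rsa.pub to the BitBucket account, then pre-accept the host key
    sudo -u jenkins -H sh -c 'ssh-keyscan bitbucket.org >> ~/.ssh/known_hosts'
    # sanity check: this should greet you with your BitBucket user name
    sudo -u jenkins -H ssh -T git@bitbucket.org

If that last check succeeds, the clone step in the job should no longer fail with "Host key verification failed".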

    Read the article

  • This program will not run - Windows did not trust this program because its identity can't be verified.

    - by r0ca
    Hi all, I just installed Windows 7 (MSDN) on an HP Z200. When I install McAfee 8.7i, I get a message in the Action Center saying that I need to turn on McAfee. When I hit "Turn on", I get the following error message (please note that this picture has been taken from Google and is related to another program, not McAfee). McAfee is installed on 4 other Z200s with the same Windows 7, so I'm kinda clueless now. Any takers?

    Read the article

  • VMWare workstation: from command line, how to start a VM in service mode (run in background)?

    - by GenEric35
    Hi, I have tried the vmrun and vmware.exe executables, but both of them start the VMware GUI when starting the VM. What I want to do is start the VM without starting the VMware GUI. The reason I am doing this is that after a few hours of idle time, the guest OS becomes sluggish. It has lots of RAM, but the only way I found to keep its responsiveness optimal is to shut down (which dumps the memory) and then start; a restart of the guest OS doesn't dump the memory, so I need to be able to do a stop of the VM and then a start. So far the commands I use are: C:\Program Files (x86)\VMware\VMware Workstation>vmrun stop F:\VirtualMachines\R2\R2.vmx C:\Program Files (x86)\VMware\VMware Workstation>vmrun start F:\VirtualMachines\R2\R2.vmx But the start command actually starts the VMware Workstation GUI, which I don't need. I'm looking for a solution to start the VM without the VMware Workstation GUI, or a solution to what is causing the VM to become sluggish after a few hours of running idle.
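    If the goal is simply to start the VM without bringing up the Workstation window, one thing worth trying (treat this as a suggestion to verify against your vmrun version rather than a confirmed answer) is vmrun's nogui option, for example:

        "C:\Program Files (x86)\VMware\VMware Workstation\vmrun" -T ws start F:\VirtualMachines\R2\R2.vmx nogui

    With nogui the VM runs in the background and can still be stopped later with the same vmrun stop command shown above.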

    Read the article

  • ubuntu apache subdomains pointing to main domain

    - by Suhail Thakur
    I have an Ubuntu server with Apache set up. The main domain on the server is a subdomain, app.example.com, which is working fine. Now if I set up john.app.example.com, that also displays the web page of app.example.com; the DocumentRoot for john.app.example.com is different, yet it still shows the web page of app.example.com. How can I resolve this, so that john.app.example.com displays the pages that are in its own DocumentRoot?
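    Apache serves the first matching VirtualHost when a request's Host header doesn't match any configured ServerName or ServerAlias, which is why john.app.example.com currently falls through to app.example.com. A sketch of the usual fix is a separate name-based vhost per subdomain (the paths below are made up; on Apache 2.2 you also need a NameVirtualHost *:80 line, which Ubuntu ships in ports.conf):

        <VirtualHost *:80>
            ServerName app.example.com
            DocumentRoot /var/www/app
        </VirtualHost>

        <VirtualHost *:80>
            ServerName john.app.example.com
            DocumentRoot /var/www/john
        </VirtualHost>

    Put each block in its own file under /etc/apache2/sites-available, enable them with a2ensite, and reload Apache.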

    Read the article

  • How do I run a non-service program as a service on Windows 2008 Server?

    - by Lasse V. Karlsen
    I found this page that tells me how to set up Windows Live Sync as a background service on Windows 2003 Server; unfortunately the resource kit tools for 2003 that are mentioned do not work on 2008 server. http://mswhs.freeforums.org/windows-live-sync-as-a-service-on-whs-t623.html Also, apparently there are no resource kit tools downloadable for Windows 2008 Server that I can find. Perhaps someone has a link to the relevant tools? (INSTSRV.EXE and SRVANY.EXE.) Using just plain SC.EXE doesn't work, as I assume the program is then required to be a normal service, and not just any executable. What other options do I have? Can I use the Task Scheduler on 2008 server to start the WindowsLiveSync executable, and will that work? I need the executable to stay running even after I've logged off from the server.
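    One hedged possibility, assuming you can still get hold of SRVANY.EXE (for example by extracting it from the 2003 Resource Kit Tools on another machine and copying it over), is to register srvany as the service with sc.exe and point it at the real executable through the registry. The service name and paths below are placeholders:

        sc create LiveSyncSvc binPath= "C:\Tools\srvany.exe" start= auto DisplayName= "Windows Live Sync (background)"
        reg add HKLM\SYSTEM\CurrentControlSet\Services\LiveSyncSvc\Parameters /v Application /t REG_SZ /d "C:\Path\To\WindowsLiveSync.exe"
        sc start LiveSyncSvc

    Keep in mind that on 2008 services run in session 0, so the program gets no visible UI; whether Live Sync behaves well in that situation is something you would have to test.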

    Read the article

  • Configure mod_wsgi WSGIScriptAlias with mod_rewrite

    - by Lazik
    I want to redirect ex.com to www.ex.com but I still want www.ex.com/ to point to my app.wsgi without it showing up in the url. When I use the conf below and I go to ex.com, I get a 404 error saying can't find www.ex.com/app.wsgi/ If I change the WSGIScriptAlias / /var/www/vhosts/ex/app.wsgi to WSGIScriptAlias /app.wsgi /var/www/vhosts/ex/app.wsgi Then all my url look like www.ex.com/app.wsgi/blabla/... Is it possible to use some kind of rule to redirect ex.com to www.ex.com and still keeping / as the app.wsgi root? my conf file <VirtualHost *:80> ServerName www.ex.com ServerAlias ex.com *.ex.com RewriteEngine On RewriteCond %{HTTP_HOST} !^www\. RewriteRule ^(.*)$ http://www.%{HTTP_HOST}/$1 [R=301,L] WSGIDaemonProcess ex user=www-data group=www-data processes=1 threads=5 WSGIScriptAlias / /var/www/vhosts/ex/app.wsgi <Directory /var/www/vhosts/ex> WSGIProcessGroup ex WSGIApplicationGroup %{GLOBAL} Order deny,allow Allow from all </Directory> </VirtualHost>
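    One sketch that avoids mixing mod_rewrite and WSGIScriptAlias in the same vhost (untested, using the names from your snippet; on Apache 2.2 make sure NameVirtualHost *:80 is set) is to keep the WSGI mount at / in the www vhost only, and handle the bare-domain redirect in a second VirtualHost with mod_alias:

        <VirtualHost *:80>
            ServerName www.ex.com
            WSGIDaemonProcess ex user=www-data group=www-data processes=1 threads=5
            WSGIScriptAlias / /var/www/vhosts/ex/app.wsgi
            <Directory /var/www/vhosts/ex>
                WSGIProcessGroup ex
                WSGIApplicationGroup %{GLOBAL}
                Order deny,allow
                Allow from all
            </Directory>
        </VirtualHost>

        # listed after the www vhost so the exact ServerName above wins the name-based match
        <VirtualHost *:80>
            ServerName ex.com
            ServerAlias *.ex.com
            Redirect permanent / http://www.ex.com/
        </VirtualHost>

    The Redirect preserves the request path, so ex.com/blabla ends up at www.ex.com/blabla, and because the www vhost never sees a rewrite, / keeps mapping straight to app.wsgi without it showing up in the URL.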

    Read the article

  • eMachine W5243 won't POST; fans run but optical drive will not open.

    - by NicciAdonai
    Symptoms are what is described in the title. The machine reacts to the power button being hit by spinning up the two fans: CPU and PSU. The hard drive (SATA) spins up as well. No other reaction. This one symptom is particularly weird, though: the optical drive will not open with the IDE cable attached, but if I unplug it from the mobo it will. I can turn the PC on with it attached, won't open; then unplug IDE while it is still on, WILL open; then plug IDE back in with the PC STILL ON, WON'T open. I have disconnected every peripheral unnecessary to POST. These include: mouse/keyboard, PCI modem, the IDE optical drive (power and data), and the SATA HDD (power and data). Video is onboard. The only two things connected are DB15 video and power cable. There were 2 512 MB DDR2 sticks of RAM in it. I have tried running it with just one of them, then switched the other in. Currently seated is a completely different 1 GB stick that I keep around for troubleshooting purposes, and I have tried it in both slots. I have replaced the CMOS battery with a used one I had lying around, and which worked in the computer it came out of. I have tested the PSU with a tester to confirm it was good, then tried connecting another PSU just in case--same symptoms. I have even tried a suggestion I found elsewhere on this site wherein one disconnects power from the PSU and then presses the PC's power button twice, thereby "resetting" the PSU. Currently I am trying yet another suggestion: turn it on and wait an inordinate amount of time for POST. Any help would be appreciated.

    Read the article

  • Can you run Snow Leopard Server on a VM?

    - by Crash893
    I'm thinking of buying a Mac mini server for my email needs (a small office server for $999 with all the features I want built in). I wanted to give it a test drive before I commit, so I was thinking "hey, this sounds like the perfect job for a VM". However, I haven't been able to find any examples of running Snow Leopard Server in a VM. Does anyone know if it's possible, and/or whether there is a premade VM package to test it out with?

    Read the article

  • apache mod_rewrite

    - by eduard-schnittlauch
    Hi, I want mod_rewrite to do this:
    http://server/* - redirect to http://server/app/*
    http://server/app/* should not be redirected
    http://server.domain/* - redirect to http://server/app/*
    http://server.domain/app* - redirect to http://server/app/*
    It has to work with mod_jk! Edit: this is the final solution:
    # force use of host 'server'
    RewriteCond %{HTTP_HOST} !^server$
    RewriteRule ^(.*)$ http://server$1 [R,NE,L]
    # prepend /app to URL if missing
    RewriteCond %{request_uri} !^/app.*?
    RewriteRule ^(.+?)$ app/$1 [R,NE,L]
    Thanks to you, fahadsadah and Insanity5902! I'm hesitant to flag either one of you as 'correct', as both have provided valuable input that made up the final solution.

    Read the article

  • Run Explorer in SYSTEM account on Windows Vista or 7 using Sysinternal’s psexec tool?

    - by Rob
    Has anyone been successful at launching an instance of Windows Explorer in the SYSTEM account on Windows Vista or 7? It is possible to do this on XP, but I haven't been able to get it to completely work in Vista or 7. Trying to launch Explorer as SYSTEM into session 1 (my user session) results in Explorer exiting immediately and returning an error code of 1. I can launch Explorer as SYSTEM into session 0 with the following command: psexec -i 0 -s explorer That will create an instance of explorer running as SYSTEM with a taskbar and start menu on the hidden session 0 desktop, but won't let you open a file browser window. If you switch to the hidden session 0 desktop and try to open an Explorer window from there to browse files, the following error message appears: "The server process could not be started because the configured identity is incorrect. Check the username and password." I have set the following registry key to 1 for my user account and the SYSTEM account: \Software\Microsoft\Windows\CurrentVersion\Explorer\Advanced\SeparateProcess There has got to be a way to make this work? If it is not possible, can anyone explain why? -Rob

    Read the article

  • How to run Android-x86 project's ISO in VirtualBox with ethernet?

    - by Shiki
    I managed to find a way just days ago, but I had to leave my other PC and now I have no clue how to get it working again. Basically you have to get the image, then install it in a VirtualBox guest. Now the problem is... when you launch your VM, there is no internet connection. Not with NAT or Bridged. Tried all the network cards too. Since an internet connection is crucial for Android development, I have to get this thing working. (As I said, I managed to fix it once.) I'm using:
    - The 4.0 RC1 images from Android-x86
    - VirtualBox
    - Eclipse 4.2 Juno with the latest Android ADT
    - Android SDK v18, upgraded to 19 via the Package manager
    Now I've seen a lot of different builds on the net about different Android builds for VirtualBox. I have checked Buildroid for example, but there is no network connection. I have imported the virtual machine just as the howto said. The extension package is also installed and it's up to date.

    Read the article

  • Need to increase nginx throughput to an upstream unix socket -- linux kernel tuning?

    - by Ben Lee
    I am running an nginx server that acts as a proxy to an upstream unix socket, like this: upstream app_server { server unix:/tmp/app.sock fail_timeout=0; } server { listen ###.###.###.###; server_name whatever.server; root /web/root; try_files $uri @app; location @app { proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_set_header X-Forwarded-Proto $scheme; proxy_set_header Host $http_host; proxy_redirect off; proxy_pass http://app_server; } } Some app server processes, in turn, pull requests off /tmp/app.sock as they become available. The particular app server in use here is Unicorn, but I don't think that's relevant to this question. The issue is, it just seems that past a certain amount of load, nginx can't get requests through the socket at a fast enough rate. It doesn't matter how many app server processes I set up, it doesn't even matter what the app is (tried it with a dummy app with just a single endpoint that returned an empty page with status 404). The bottleneck seems to be the socket, not the app. I'm getting a flood of these messages in the nginx error log: connect() to unix:/tmp/app.sock failed (11: Resource temporarily unavailable) while connecting to upstream Many requests result in status code 502, and those that don't take a long time to complete. The nginx write queue stat hovers around 1000. Anyway, I feel like I'm missing something obvious here, because this particular configuration of nginx and app server is pretty common, especially with Unicorn (it's the recommended method in fact). Are there any linux kernel options that needs to be set, or something in nginx? Any ideas about how to increase the throughput to the upstream socket? Something that I'm clearly doing wrong? Additional information on the environment: $ uname -a Linux app1 3.2.0-24-generic #39-Ubuntu SMP Mon May 21 16:52:17 UTC 2012 x86_64 x86_64 x86_64 GNU/Linux $ ruby -v ruby 1.9.3p194 (2012-04-20 revision 35410) [x86_64-linux] $ unicorn -v unicorn v4.3.1 $ nginx -V nginx version: nginx/1.2.1 built by gcc 4.6.3 (Ubuntu/Linaro 4.6.3-1ubuntu5) TLS SNI support enabled Current kernel tweaks: net.core.rmem_default = 65536 net.core.wmem_default = 65536 net.core.rmem_max = 16777216 net.core.wmem_max = 16777216 net.ipv4.tcp_rmem = 4096 87380 16777216 net.ipv4.tcp_wmem = 4096 65536 16777216 net.ipv4.tcp_mem = 16777216 16777216 16777216 net.ipv4.tcp_window_scaling = 1 net.ipv4.route.flush = 1 net.ipv4.tcp_no_metrics_save = 1 net.ipv4.tcp_moderate_rcvbuf = 1 net.core.somaxconn = 8192 net.netfilter.nf_conntrack_max = 131072
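    Beyond kernel settings, one knob that often matters in this exact nginx + Unicorn + unix socket setup is the listen backlog on the Unicorn side; if the socket's accept queue fills up, connect() fails with exactly the "Resource temporarily unavailable" error above. A hedged sketch of the relevant unicorn.rb lines (the worker count here is arbitrary, and this is a suggestion to test rather than a guaranteed fix):

        # config/unicorn.rb (sketch)
        worker_processes 8
        # raise the accept backlog on the unix socket; Unicorn's default is 1024
        listen "/tmp/app.sock", :backlog => 2048
        timeout 30

    If raising the backlog only delays the errors, that usually points back to the app processes simply not draining requests fast enough, so measuring per-request time in the upstream is the next step.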

    Read the article

  • Is it possible to run two servers in one system ?

    - by srikanth
    I have a small problem while trying to run the WAMP server. At present I am running an Apache server on my system, and I have a PHP application for which I am trying to install the WAMP server, but the WAMP server is not running. I tried to change the port number of the WAMP server: in C:\wamp\bin\apache\Apache2.2.11\conf\ I have the httpd.conf file, and in it I changed the Listen directive and the host name to another port number, but it is still not working.
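    Yes, two web servers can run on one system as long as they listen on different ports (or different IP addresses). For WAMP the usual change is in C:\wamp\bin\apache\Apache2.2.11\conf\httpd.conf, adjusting both the Listen line and the port in ServerName, for example (8080 is just an arbitrary free port, so treat the exact values as an assumption):

        Listen 8080
        ServerName localhost:8080

    After restarting the WAMP services, the second server should answer on http://localhost:8080/ while the existing Apache keeps port 80. If it still refuses to start, check the Apache error log and make sure nothing else already owns the chosen port.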

    Read the article
