Search Results

Search found 35507 results on 1421 pages for 'performance test'.

  • Spring Test / JUnit problem - unable to load application context

    - by HDave
    I am using Spring for the first time and must be doing something wrong. I have a project with several Bean implementations and now I am trying to create a test class with Spring Test and JUnit. I am trying to use Spring Test to inject a customized bean into the test class. Here is my test-applicationContext.xml: <?xml version="1.0" encoding="UTF-8"?> <beans xmlns="............."> <bean id="MyUuidFactory" class="com.myapp.UuidFactory" scope="singleton" > <property name="typeIdentifier" value="CLS" /> </bean> <bean id="ThingyImplTest" class="com.myapp.ThingyImplTest" scope="singleton"> <property name="uuidFactory"> <idref local="MyUuidFactory" /> </property> </bean> </beans> The injection of the MyUuidFactory instance goes along with the following code from within the test class: private UuidFactory uuidFactory; public void setUuidFactory(UuidFactory uuidFactory) { this.uuidFactory = uuidFactory; } However, when I go to run the test (in Eclipse or on the command line) I get the following error (stack trace omitted for brevity): Caused by: org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'MyImplTest' defined in class path resource [test-applicationContext.xml]: Initialization of bean failed; nested exception is org.springframework.beans.ConversionNotSupportedException: Failed to convert property value of type 'java.lang.String' to required type 'com.myapp.UuidFactory' for property 'uuidFactory'; nested exception is java.lang.IllegalStateException: Cannot convert value of type [java.lang.String] to required type [com.myapp.UuidFactory] for property 'uuidFactory': no matching editors or conversion strategy found Funny thing is, the Eclipse/Spring XML editor shows errors if I misspell any of the types or idrefs. If I leave the bean in but comment out the dependency injection, everything works until I get a NullPointerException while running the test... which makes sense.
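
    A likely cause, noted here for context: in Spring XML configuration, idref injects the referenced bean's id as a plain String rather than the bean itself, which is exactly what the ConversionNotSupportedException (String to com.myapp.UuidFactory) is complaining about. A minimal sketch of the usual fix, reusing the bean names from the question, is to use ref instead of idref:

        <bean id="ThingyImplTest" class="com.myapp.ThingyImplTest" scope="singleton">
            <property name="uuidFactory">
                <!-- ref injects the MyUuidFactory bean itself; idref would inject the String "MyUuidFactory" -->
                <ref local="MyUuidFactory"/>
            </property>
        </bean>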

    Read the article

  • How to measure disk performance?

    - by Jakub Šturc
    I am going to "fix" a friend's computer this weekend. From the symptoms he describes, it looks like he has a disk performance problem with his 5400 rpm disk. I want to be sure that the disk is the problem, so I want to "scientifically" measure its performance. Which tools do you recommend for this job? Is there any standard set of numbers I can compare the measurement results with?
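
    If the machine can boot a Linux live CD, one non-destructive first pass (a sketch; /dev/sda is an assumed device name) is to time raw sequential reads, which never write to the disk:

        sudo hdparm -t /dev/sda                            # buffered sequential read speed
        sudo dd if=/dev/sda of=/dev/null bs=1M count=1024  # read 1 GB sequentially, discard it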

    Read the article

  • Question about network topology and routing performance

    - by algorithms
    Hello, I am currently working on a uni project about routing protocols and network performance. One of the criteria I was going to test was the effect of LAN topology, i.e. workstations arranged in mesh, star, ring, etc., but I am having doubts as to whether that would have any effect on routing performance, which would make it useless to test; rather, I'm thinking it would be better to test the topology of the routers themselves, i.e. routers arranged in star, mesh, ring, etc. I would appreciate some feedback on this as I am rather confused. Thank you.

    Read the article

  • The Agile Engineering Rules of Test Code

    - by Malcolm Anderson
    Lots of test code gets written; a lot of it is waste, and some of it is well-engineered waste. Companies hire Agile Engineering Coaches because agile engineering is easy to do wrong. Very easy. So here's a quick tool you can use for self-coaching. It's what I call "The Agile Engineering Rules of Test Code", and it's going to act as a sort of table of contents for some future posts. The Agile Engineering Rules of Test Code (Malcolm Anderson): Test code is not throw-away code. Test code is production code. Eight questions to determine the quality of your test code: Does the test code have appropriate comments? Is the test code executed as part of the build? Every time? Is the test code getting refactored? Does everyone use the same test code? Can the test code be described as "well maintained"? Can a bright six-year-old tell you why any particular test failed? Are the tests independent and infinitely repeatable?

    Read the article

  • VMWare Server - Writing files to virtual hard drive performance

    - by Ardman
    We have just moved our infrastructure from physical servers to virtual machines. Everything is running great and we are happy with the result of the move. We have identified one problem, and that is reading/writing performance. We have an application that compiles files and writes to disk. This is considerably slower on the new virtual machines compared to the physical machines. Is there a performance bottleneck when writing to a virtual hard drive compared to a physical hard drive?

    Read the article

  • After writing SQL statements in MySQL, how to measure the speed / performance of them?

    - by Jian Lin
    I saw something from an "execution plan" article: 10 rows fetched in 0.0003s (0.7344s). How come there are two durations shown? And what if I don't have a large data set yet? For example, if I have only 20, 50, or even just 100 records, can I really measure how two different SQL statements compare in terms of speed in a real-life situation? In other words, does there need to be at least hundreds of thousands of records, or even a million, to accurately compare the performance of two different SQL statements?
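
    For context, one way to time statements directly on the server, regardless of data size, is the session profiler available since MySQL 5.0.37; the statement being measured below is a made-up example:

        SET profiling = 1;                  -- enable per-statement profiling for this session
        SELECT * FROM users WHERE id = 42;  -- hypothetical statement to measure
        SHOW PROFILES;                      -- list recent statements with their durations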

    Read the article

  • Linux RAID-0 performance doesn't scale up over 1 GB/s

    - by wazoox
    I have trouble getting the max throughput out of my setup. The hardware is as follows: dual Quad-Core AMD Opteron(tm) Processor 2376, 16 GB DDR2 ECC RAM, dual Adaptec 52245 RAID controllers, 48 1 TB SATA drives set up as 2 RAID-6 arrays (256KB stripe) + spares. Software: plain vanilla 2.6.32.25 kernel, compiled for AMD-64, optimized for NUMA; Debian Lenny userland. Benchmarks run: disktest, bonnie++, dd, etc. All give the same results; no discrepancy here. IO scheduler used: noop. Yeah, no trick here. Up until now I basically assumed that striping (RAID 0) several physical devices should augment performance roughly linearly. However, this is not the case here: each RAID array achieves about 780 MB/s write, sustained, and 1 GB/s read, sustained. Writing to both RAID arrays simultaneously with two different processes gives 750 + 750 MB/s, and reading from both gives 1 + 1 GB/s. However, when I stripe both arrays together, using either mdadm or lvm, the performance is about 850 MB/s writing and 1.4 GB/s reading: at least 30% less than expected! Running two parallel writer or reader processes against the striped arrays doesn't enhance the figures; in fact, it degrades performance even further. So what's happening here? Basically, I ruled out bus or memory contention, because when I run dd on both drives simultaneously, aggregate write speed actually reaches 1.5 GB/s and reading speed tops 2 GB/s. So it's not the PCIe bus. I suppose it's not the RAM. It's not the filesystem, because I get exactly the same numbers benchmarking against the raw device or using XFS. And I also get exactly the same performance using either LVM striping or md striping. What's wrong? What's preventing a process from going up to the max possible throughput? Is Linux striping defective? What other tests could I run?
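
    For reference, a minimal sketch of the md striping step described above (the member names /dev/md0 and /dev/md1 are assumptions), plus a direct-I/O write test that bypasses the page cache:

        mdadm --create /dev/md2 --level=0 --raid-devices=2 --chunk=256 /dev/md0 /dev/md1
        dd if=/dev/zero of=/dev/md2 bs=1M count=10000 oflag=direct  # sustained write, no page cache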

    Read the article

  • Optimize windows 2008 performance

    - by Giorgi
    Hello, I have Windows Server 2008 SP2 installed as a virtual machine on my personal laptop. I use it only for source control (VisualSVN) and continuous integration (TeamCity). As the virtual machine's resources are limited, I'd like to optimize its performance by disabling services and features that are not necessary for my purposes. Can anyone recommend where to start, or provide tips for getting better performance? Thanks.

    Read the article

  • Randomly poor 2D performance in Linux Mint 11 when using nvidia driver

    - by SDD
    I am using: Linux Mint 11, a GeForce 560 Ti, and the nVidia driver (installed via a helper program, not from the nVidia page). The third-party nVidia drivers randomly cause very poor 2D performance. Randomly, because the performance can be great, but after the next reboot or login it can become very poor. After another reboot or login this might change again, for better or worse. I have no idea why or how, and I need your help. Thank you.

    Read the article

  • rake test:units fails with status ()

    - by ander163
    New user, haven't been building tests as I go, so I'm an idiot. The application is running, but the tests fail. Here is what appears to be relevant: .... ** Execute test:units /usr/local/bin/ruby -I"lib:test" "/usr/local/lib/ruby/gems/1.8/gems/rake-0.8.7/lib/rake/rake_test_loader.rb" "test/unit/event_test.rb" "test/unit/helpers/calendar1_helper_test.rb" "test/unit/helpers/events_helper_test.rb" "test/unit/helpers/homepage_helper_test.rb" "test/unit/helpers/main_helper_test.rb" "test/unit/helpers/mobile_helper_test.rb" "test/unit/helpers/notes_helper_test.rb" "test/unit/helpers/password_resets_helper_test.rb" "test/unit/helpers/projects_helper_test.rb" "test/unit/helpers/search_helper_test.rb" "test/unit/helpers/start_helper_test.rb" "test/unit/helpers/superadmin_helper_test.rb" "test/unit/helpers/tasks_helper_test.rb" "test/unit/helpers/user_sessions_helper_test.rb" "test/unit/helpers/users_helper_test.rb" "test/unit/note_test.rb" "test/unit/notifier_test.rb" "test/unit/project_test.rb" "test/unit/task_test.rb" "test/unit/user_session_test.rb" "test/unit/user_test.rb" /usr/lib/ruby/gems/1.8/gems/rails-2.3.5/lib/rails/gem_dependency.rb:119:Warning: Gem::Dependency#version_requirements is deprecated and will be removed on or after August 2010. Use #requirement /usr/lib/ruby/gems/1.8/gems/hpricot-0.6.164/lib/universal-java1.6/fast_xs.bundle: [BUG] Segmentation fault ruby 1.8.7 (2009-06-12 patchlevel 174) [i686-darwin10.2.0] rake aborted! Command failed with status (): [/usr/local/bin/ruby -I"lib:test" "/usr/loc...] /usr/local/lib/ruby/gems/1.8/gems/rake-0.8.7/lib/rake.rb:995:in `sh' /usr/local/lib/ruby/gems/1.8/gems/rake-0.8.7/lib/rake.rb:1010:in `call'

    Read the article

  • How to measure disk-performance under Windows?

    - by Alphager
    I'm trying to find out why my application is very slow on a certain machine (it runs fine everywhere else). I think I have traced the performance problems to hard-disk reads and writes, and I suspect it's simply a very slow disk. What tool could I use to measure disk read and write performance under Windows 2003 in a non-destructive way (the partitions on the drives have to remain intact)?

    Read the article

  • Copy-and-Pasted Test Code: How Bad is This?

    - by joshin4colours
    My current job is mostly writing GUI test code for various applications that we work on. However, I find that I tend to copy and paste a lot of code within tests. The reason for this is that the areas I'm testing tend to be similar enough to need repetition but not quite similar enough to encapsulate code into methods or objects. I find that when I try to use classes or methods more extensively, tests become more cumbersome to maintain and sometimes outright difficult to write in the first place. Instead, I usually copy a big chunk of test code from one section, paste it into another, and make any minor changes I need. I don't use more structured ways of coding, such as OO principles or functions. Do other coders feel this way when writing test code? Obviously I want to follow DRY and YAGNI principles, but I find that test code (automated test code for GUI testing, anyway) can make these principles tough to follow. Or do I just need more coding practice and a better overall system of doing things? EDIT: The tool I'm using is SilkTest, which uses a proprietary language called 4Test. These tests are mostly for Windows desktop applications, but I have also tested web apps using this setup.

    Read the article

  • VS2012 Coded UI Test closes browser by default

    - by Tarun Arora
    *** Thanks to Steve St. Jean for asking this question and Shubhra Maji for answering it on the ALM champs list *** 01 – Introduction The default behaviour of coded UI tests running in an Internet Explorer browser has changed between MTM 2010 and MTM 2012. When running a Coded UI test recorded in MTM 2012 or VS 2012, at the end of the test execution the instance of the browser is closed by default. 02 – Description Let's take an example. As you can see, the CloseDinnerNowWeb() method is commented out. In VS 2010, upon running this test the browser would be left open after the test execution completes. In VS 2012 RTM the behaviour has changed. At the end of the test run, the IE window is closed even though there is no command from the test to do so. In the example below, when the test runs it opens 2 IE windows to the website. When the test run completes, both windows are closed, even though there is no command in the test to close them. 03 – How to change the CUIT behaviour not to close the IE window after test execution? This change in VS 2012 is by design. It is, however, possible to roll back to the original VS 2010 behaviour, i.e. the IE window will not close after the test execution unless the test commands it to. To go back to the original functionality, set BrowserWindow.CloseOnPlaybackCleanup = false. More details on the CloseOnPlaybackCleanup property can be found here: http://msdn.microsoft.com/en-us/library/microsoft.visualstudio.testtools.uitesting.applicationundertest.closeonplaybackcleanup.aspx HTH
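
    A minimal sketch of where that setting usually goes in a Coded UI test (class, method, and URL here are illustrative, not from the article; CloseOnPlaybackCleanup is an instance property inherited from ApplicationUnderTest):

        using System;
        using Microsoft.VisualStudio.TestTools.UITesting;
        using Microsoft.VisualStudio.TestTools.UnitTesting;

        [CodedUITest]
        public class LeaveBrowserOpenTest
        {
            [TestMethod]
            public void NavigateAndLeaveBrowserOpen()
            {
                // Launch the browser under test (URL is a placeholder)
                BrowserWindow browser = BrowserWindow.Launch(new Uri("http://www.dinnernow.net"));
                // Restore the VS2010 behaviour: this window stays open after playback cleanup
                browser.CloseOnPlaybackCleanup = false;
            }
        }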

    Read the article

  • More CPU cores may not always lead to better performance – MAXDOP and query memory distribution in spotlight

    - by sqlworkshops
    More hardware normally delivers better performance, but there are exceptions where it can hinder performance. Understanding these exceptions and working around them is a major part of SQL Server performance tuning. When a memory-allocating query executes in parallel, SQL Server distributes memory to each task that is executing part of the query in parallel. In our example, the sort operator that executes in parallel divides the memory across all tasks, assuming an even distribution of rows. Common memory-allocating queries are those that perform Sort or Hash Match operations like Hash Join, Hash Aggregation, or Hash Union. In reality, how often are column values evenly distributed? Think about an example: are the employees working for your company distributed evenly across all Zip codes, or mainly concentrated in the headquarters? What happens when you sort a result set based on Zip codes? Do all products in the catalog sell equally, or are a few products hot-selling items? One of my customers tested the below example on a 24-core server with various MAXDOP settings, and here are the results:
    MAXDOP 1: CPU time = 1185 ms, elapsed time = 1188 ms
    MAXDOP 4: CPU time = 1981 ms, elapsed time = 1568 ms
    MAXDOP 8: CPU time = 1918 ms, elapsed time = 1619 ms
    MAXDOP 12: CPU time = 2367 ms, elapsed time = 2258 ms
    MAXDOP 16: CPU time = 2540 ms, elapsed time = 2579 ms
    MAXDOP 20: CPU time = 2470 ms, elapsed time = 2534 ms
    MAXDOP 0: CPU time = 2809 ms, elapsed time = 2721 ms (all 24 cores)
    In the above test, even though the data was evenly distributed, the elapsed time of the parallel query was always higher than that of the serial query. Why does the query get slower and slower with more CPU cores / higher MAXDOP? Maybe you can answer this question after reading the article; let me know: [email protected]. Well, you get the point; let's see an example. The best way to learn is to practice. To create the below tables and reproduce the behavior, join the mailing list by using this link: www.sqlworkshops.com/ml and I will send you the table creation script. Let's update the Employees table with 49 out of 50 employees located in Zip code 2001: update Employees set Zip = EmployeeID / 400 + 1 where EmployeeID % 50 = 1 update Employees set Zip = 2001 where EmployeeID % 50 != 1 go update statistics Employees with fullscan go Let's create the temporary table #FireDrill with all possible Zip codes: drop table #FireDrill go create table #FireDrill (Zip int primary key) insert into #FireDrill select distinct Zip from Employees update statistics #FireDrill with fullscan go Let's execute the query serially with MAXDOP 1: --Example provided by www.sqlworkshops.com --Execute query with uneven Zip code distribution --First serially with MAXDOP 1 set statistics time on go declare @EmployeeID int, @EmployeeName varchar(48),@zip int select @EmployeeName = e.EmployeeName, @zip = e.Zip from Employees e inner join #FireDrill fd on (e.Zip = fd.Zip) order by e.Zip option (maxdop 1) go The query took 1011 ms to complete. The execution plan shows that 77816 KB of memory was granted while the estimated rows were 799624. No Sort Warnings in SQL Server Profiler. Now let's execute the query in parallel with MAXDOP 0.
    --Example provided by www.sqlworkshops.com --Execute query with uneven Zip code distribution --In parallel with MAXDOP 0 set statistics time on go declare @EmployeeID int, @EmployeeName varchar(48),@zip int select @EmployeeName = e.EmployeeName, @zip = e.Zip from Employees e inner join #FireDrill fd on (e.Zip = fd.Zip) order by e.Zip option (maxdop 0) go The query took 1912 ms to complete. The execution plan shows that 79360 KB of memory was granted while the estimated rows were 799624. The estimated number of rows in the serial and parallel plans is the same; the parallel plan has slightly more memory granted due to additional overhead. The sort properties show the rows are unevenly distributed over the 4 threads. Sort Warnings in SQL Server Profiler. Intermediate summary: the reason for the higher duration with the parallel plan was the sort spill. This is due to the uneven distribution of employees over Zip codes, especially the concentration of 49 out of 50 employees in Zip code 2001. Now let's update the Employees table and distribute employees evenly across all Zip codes: update Employees set Zip = EmployeeID / 400 + 1 go update statistics Employees with fullscan go Let's execute the query serially with MAXDOP 1: --Example provided by www.sqlworkshops.com --Execute query with even Zip code distribution --Serially with MAXDOP 1 set statistics time on go declare @EmployeeID int, @EmployeeName varchar(48),@zip int select @EmployeeName = e.EmployeeName, @zip = e.Zip from Employees e inner join #FireDrill fd on (e.Zip = fd.Zip) order by e.Zip option (maxdop 1) go The query took 751 ms to complete. The execution plan shows that 77816 KB of memory was granted while the estimated rows were 784707. No Sort Warnings in SQL Server Profiler. Now let's execute the query in parallel with MAXDOP 0: --Example provided by www.sqlworkshops.com --Execute query with even Zip code distribution --In parallel with MAXDOP 0 set statistics time on go declare @EmployeeID int, @EmployeeName varchar(48),@zip int select @EmployeeName = e.EmployeeName, @zip = e.Zip from Employees e inner join #FireDrill fd on (e.Zip = fd.Zip) order by e.Zip option (maxdop 0) go The query took 661 ms to complete. The execution plan shows that 79360 KB of memory was granted while the estimated rows were 784707. The sort properties show the rows are evenly distributed over the 4 threads. No Sort Warnings in SQL Server Profiler. Intermediate summary: when employees were distributed unevenly, concentrated in one Zip code, the parallel sort spilled while the serial sort performed well without spilling to tempdb. When the employees were distributed evenly across all Zip codes, neither the parallel sort nor the serial sort spilled to tempdb. This shows uneven data distribution may affect the performance of some parallel queries negatively. For a detailed discussion of memory allocation, refer to the webcasts available at www.sqlworkshops.com/webcasts. Some of you might conclude from the above execution times that a parallel query is not faster even when there is no spill. Below you can see that when we are joining a limited set of Zip codes, the parallel query is faster, since it can use Bitmap Filtering. Let's update the Employees table with 49 out of 50 employees located in Zip code 2001:
    update Employees set Zip = EmployeeID / 400 + 1 where EmployeeID % 50 = 1 update Employees set Zip = 2001 where EmployeeID % 50 != 1 go update statistics Employees with fullscan go Let's create the temporary table #FireDrill with a limited set of Zip codes: drop table #FireDrill go create table #FireDrill (Zip int primary key) insert into #FireDrill select distinct Zip from Employees where Zip between 1800 and 2001 update statistics #FireDrill with fullscan go Let's execute the query serially with MAXDOP 1: --Example provided by www.sqlworkshops.com --Execute query with uneven Zip code distribution --Serially with MAXDOP 1 set statistics time on go declare @EmployeeID int, @EmployeeName varchar(48),@zip int select @EmployeeName = e.EmployeeName, @zip = e.Zip from Employees e inner join #FireDrill fd on (e.Zip = fd.Zip) order by e.Zip option (maxdop 1) go The query took 989 ms to complete. The execution plan shows that 77816 KB of memory was granted while the estimated rows were 785594. No Sort Warnings in SQL Server Profiler. Now let's execute the query in parallel with MAXDOP 0: --Example provided by www.sqlworkshops.com --Execute query with uneven Zip code distribution --In parallel with MAXDOP 0 set statistics time on go declare @EmployeeID int, @EmployeeName varchar(48),@zip int select @EmployeeName = e.EmployeeName, @zip = e.Zip from Employees e inner join #FireDrill fd on (e.Zip = fd.Zip) order by e.Zip option (maxdop 0) go The query took 1799 ms to complete. The execution plan shows that 79360 KB of memory was granted while the estimated rows were 785594. Sort Warnings in SQL Server Profiler. The estimated number of rows in the serial and parallel plans is the same; the parallel plan has slightly more memory granted due to additional overhead. Intermediate summary: the reason for the higher duration with the parallel plan, even with a limited set of Zip codes, was the sort spill. This is due to the uneven distribution of employees over Zip codes, especially the concentration of 49 out of 50 employees in Zip code 2001. Now let's update the Employees table and distribute employees evenly across all Zip codes: update Employees set Zip = EmployeeID / 400 + 1 go update statistics Employees with fullscan go Let's execute the query serially with MAXDOP 1: --Example provided by www.sqlworkshops.com --Execute query with even Zip code distribution --Serially with MAXDOP 1 set statistics time on go declare @EmployeeID int, @EmployeeName varchar(48),@zip int select @EmployeeName = e.EmployeeName, @zip = e.Zip from Employees e inner join #FireDrill fd on (e.Zip = fd.Zip) order by e.Zip option (maxdop 1) go The query took 250 ms to complete. The execution plan shows that 9016 KB of memory was granted while the estimated rows were 79973.8. No Sort Warnings in SQL Server Profiler. Now let's execute the query in parallel with MAXDOP 0: --Example provided by www.sqlworkshops.com --Execute query with even Zip code distribution --In parallel with MAXDOP 0 set statistics time on go declare @EmployeeID int, @EmployeeName varchar(48),@zip int select @EmployeeName = e.EmployeeName, @zip = e.Zip from Employees e inner join #FireDrill fd on (e.Zip = fd.Zip) order by e.Zip option (maxdop 0) go The query took 85 ms to complete. The execution plan shows that 13152 KB of memory was granted while the estimated rows were 784707. No Sort Warnings in SQL Server Profiler.
    Here you see the parallel query is much faster than the serial query, since SQL Server uses Bitmap Filtering to eliminate rows before the hash join. Parallel queries are very good for performance, but in some cases they can hinder performance. If one identifies the reason for these hindrances, then it is possible to get the best out of parallelism. I covered many aspects of monitoring and tuning parallel queries in webcasts (www.sqlworkshops.com/webcasts) and articles (www.sqlworkshops.com/articles). I suggest you watch the webcasts and read the articles to better understand how to identify and tune parallel query performance issues. Summary: one has to avoid sort spills to tempdb, and the chances of spills are higher when a query executes in parallel with an uneven data distribution. Parallel query brings its own advantages: reduced elapsed time and reduced work with Bitmap Filtering. So it is important to understand how to avoid spills to tempdb and when to execute a query in parallel. I explain these concepts with detailed examples in my webcasts (www.sqlworkshops.com/webcasts); I recommend you watch them. The best way to learn is to practice. To create the above tables and reproduce the behavior, join the mailing list at www.sqlworkshops.com/ml and I will send you the relevant SQL scripts. Register for the upcoming 3 Day Level 400 Microsoft SQL Server 2008 and SQL Server 2005 Performance Monitoring & Tuning Hands-on Workshop in London, United Kingdom during March 15-17, 2011: click here to register / Microsoft UK TechNet. These are hands-on workshops with a maximum of 12 participants, not lectures. For consulting engagements click here. Disclaimer and copyright information: This article refers to organizations and products that may be the trademarks or registered trademarks of their various owners. Copyright of this article belongs to R Meyyappan / www.sqlworkshops.com. You may freely use the ideas and concepts discussed in this article with acknowledgement (www.sqlworkshops.com), but you may not claim any of it as your own work. This article is for informational purposes only; you use any of the suggestions given here entirely at your own risk. R Meyyappan [email protected] LinkedIn: http://at.linkedin.com/in/rmeyyappan

    Read the article

  • Homoscedasticity test for Two-Way ANOVA

    - by aL3xa
    I've been using var.test and bartlett.test to check basic ANOVA assumptions, among others homoscedasticity (homogeneity, i.e. equality of variances). The procedure is quite simple for One-Way ANOVA: bartlett.test(x ~ g) # where x is numeric and g is a factor var.test(x ~ g) But for 2x2 tables, i.e. Two-Way ANOVAs, I want to do something like this: bartlett.test(x ~ c(g1, g2)) # or with a list: var.test(x ~ list(g1, g2)) Of course, ANOVA assumptions can be checked with graphical procedures, but what about "an arithmetic option"? Is that manageable? How do you test homoscedasticity in a Two-Way ANOVA?
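
    Two commonly used arithmetic options, sketched with the variable names from the question: collapse the two factors into a single grouping with interaction() so bartlett.test sees one factor, or use Levene's test from the car package, which accepts crossed factors directly (assumes car is installed):

        bartlett.test(x ~ interaction(g1, g2))  # one group per cell of the two-way design
        library(car)
        leveneTest(x ~ g1 * g2)                 # Levene's test across both factors and their crossing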

    Read the article

  • Benchmarking a file server

    - by Joel Coel
    I'm working on building a new file server... a simple Windows Server box with a few terabytes of disk space to share on the LAN. Pain of current hard drive prices aside :( -- I would like to get some benchmarks for this device under load, compared to our old server. The old server was installed in 2005 and had 5 136GB 10K disks in RAID 5. The new server has 8 1TB disks in two RAID 10 volumes (plus a hot spare for each volume), but they're only 7.2K rpm, and of course with a much larger cache size. I'd like to get an idea of the performance expectations of the new server relative to the old. Where do I get started? I'd like to know both the raw potential under different kinds of load for each server, as well as an idea of what our real-world load looks like and how it will translate. Will disk load even matter, or will performance be driven more by the network connection? I could probably fumble through some disk I/O and wait counters in Performance Monitor, but I don't really know what to look for, which counters to watch, or for how long and when. FWIW, I'm expecting a nice improvement because of the benefits of having two different volumes and the better RAID 10 performance vs RAID 5, in spite of using slower disks... but I'd like to get an idea of how much.
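
    For the raw-potential side, a hedged starting point is logging the standard PhysicalDisk counters with the built-in typeperf tool while a test load runs (counter names are the stock perfmon set; adjust the instance and sampling to taste):

        typeperf "\PhysicalDisk(_Total)\Avg. Disk Queue Length" "\PhysicalDisk(_Total)\Disk Bytes/sec" -si 5 -sc 120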

    Read the article

  • How can dev teams prevent slow performance in consumer apps?

    - by Crashworks
    When I previously asked what's responsible for slow software, a few of the answers I received suggested it was a social and management problem: "This isn't a technical problem, it's a marketing and management problem.... Ultimately, the product managers are responsible to write the specs for what the user is supposed to get. Lots of things can go wrong: The product manager fails to put button response in the spec ... The QA folks do a mediocre job of testing against the spec ... if the product management and QA staff are all asleep at the wheel, we programmers can't make up for that." —Bob Murphy "People work on good-size apps. As they work, performance problems creep in, just like bugs. The difference is - bugs are "bad" - they cry out "find me, and fix me". Performance problems just sit there and get worse. Programmers often think "Well, my code wouldn't have a performance problem. Rather, management needs to buy me a newer/bigger/faster machine." The fact is, if developers periodically just hunt for performance problems (which is actually very easy) they could simply clean them out." —Mike Dunlavey So, if this is a social problem, what social mechanisms can an organization put into place to avoid shipping slow software to its customers?

    Read the article

  • "Class ref in pre-verified class resolved to unexpected implementation" when running android tests i

    - by Mike
    I have a module that builds an app called MyApp. I have another that builds some testcases for that app, called MyAppTests. They both build their own APKs, and they both work fine from within my IDE. I'd like to build them using ant so that I can take advantage of continuous integration. Building the app module works fine. I'm having difficulty getting the Test module to compile and run. Using Christopher's tip from a previous question, I used android create test-project -p MyAppTests -m ../MyApp -n MyAppTests to create the necessary build files to build and run my test project. This seems to work great (once I remove an unnecessary test case that it constructed for me and revert my AndroidManifest.xml to the one I was using before it got replaced by android create), but I have two problems. The first problem: The project doesn't compile because it's missing libraries. $ ant run-tests Buildfile: build.xml [setup] Project Target: Google APIs [setup] Vendor: Google Inc. [setup] Platform Version: 1.6 [setup] API level: 4 [setup] WARNING: No minSdkVersion value set. Application will install on all Android versions. -install-tested-project: [setup] Project Target: Google APIs [setup] Vendor: Google Inc. [setup] Platform Version: 1.6 [setup] API level: 4 [setup] WARNING: No minSdkVersion value set. Application will install on all Android versions. -compile-tested-if-test: -dirs: [echo] Creating output directories if needed... -resource-src: [echo] Generating R.java / Manifest.java from the resources... -aidl: [echo] Compiling aidl files into Java classes... compile: [javac] Compiling 1 source file to /Users/mike/Projects/myapp/android/MyApp/bin/classes -dex: [echo] Converting compiled files and external libraries into /Users/mike/Projects/myapp/android/MyApp/bin/classes.dex... [echo] -package-resources: [echo] Packaging resources [aaptexec] Creating full resource package... -package-debug-sign: [apkbuilder] Creating MyApp-debug-unaligned.apk and signing it with a debug key... [apkbuilder] Using keystore: /Users/mike/.android/debug.keystore debug: [echo] Running zip align on final apk... [echo] Debug Package: /Users/mike/Projects/myapp/android/MyApp/bin/MyApp-debug.apk install: [echo] Installing /Users/mike/Projects/myapp/android/MyApp/bin/MyApp-debug.apk onto default emulator or device... [exec] 1567 KB/s (288354 bytes in 0.179s) [exec] pkg: /data/local/tmp/MyApp-debug.apk [exec] Success -compile-tested-if-test: -dirs: [echo] Creating output directories if needed... [mkdir] Created dir: /Users/mike/Projects/myapp/android/MyAppTests/gen [mkdir] Created dir: /Users/mike/Projects/myapp/android/MyAppTests/bin [mkdir] Created dir: /Users/mike/Projects/myapp/android/MyAppTests/bin/classes -resource-src: [echo] Generating R.java / Manifest.java from the resources... -aidl: [echo] Compiling aidl files into Java classes... 
compile: [javac] Compiling 5 source files to /Users/mike/Projects/myapp/android/MyAppTests/bin/classes [javac] /Users/mike/Projects/myapp/android/MyAppTests/src/com/myapp/test/GsonTest.java:4: package roboguice.test does not exist [javac] import roboguice.test.RoboUnitTestCase; [javac] ^ [javac] /Users/mike/Projects/myapp/android/MyAppTests/src/com/myapp/test/GsonTest.java:8: package com.google.gson does not exist [javac] import com.google.gson.JsonElement; [javac] ^ [javac] /Users/mike/Projects/myapp/android/MyAppTests/src/com/myapp/test/GsonTest.java:9: package com.google.gson does not exist [javac] import com.google.gson.JsonParser; [javac] ^ [javac] /Users/mike/Projects/myapp/android/MyAppTests/src/com/myapp/test/GsonTest.java:11: cannot find symbol [javac] symbol: class RoboUnitTestCase [javac] public class GsonTest extends RoboUnitTestCase<MyApplication> { [javac] ^ [javac] /Users/mike/Projects/myapp/android/MyAppTests/src/com/myapp/test/HttpTest.java:6: package roboguice.test does not exist [javac] import roboguice.test.RoboUnitTestCase; [javac] ^ [javac] /Users/mike/Projects/myapp/android/MyAppTests/src/com/myapp/test/HttpTest.java:7: package roboguice.util does not exist [javac] import roboguice.util.RoboLooperThread; [javac] ^ [javac] /Users/mike/Projects/myapp/android/MyAppTests/src/com/myapp/test/HttpTest.java:11: package com.google.gson does not exist [javac] import com.google.gson.JsonObject; [javac] ^ [javac] /Users/mike/Projects/myapp/android/MyAppTests/src/com/myapp/test/HttpTest.java:15: cannot find symbol [javac] symbol: class RoboUnitTestCase [javac] public class HttpTest extends RoboUnitTestCase<MyApplication> { [javac] ^ [javac] /Users/mike/Projects/myapp/android/MyAppTests/src/com/myapp/test/LinksTest.java:4: package roboguice.test does not exist [javac] import roboguice.test.RoboUnitTestCase; [javac] ^ [javac] /Users/mike/Projects/myapp/android/MyAppTests/src/com/myapp/test/LinksTest.java:12: cannot find symbol [javac] symbol: class RoboUnitTestCase [javac] public class LinksTest extends RoboUnitTestCase<MyApplication> { [javac] ^ [javac] /Users/mike/Projects/myapp/android/MyAppTests/src/com/myapp/test/SafeAsyncTest.java:4: package roboguice.test does not exist [javac] import roboguice.test.RoboUnitTestCase; [javac] ^ [javac] /Users/mike/Projects/myapp/android/MyAppTests/src/com/myapp/test/SafeAsyncTest.java:5: package roboguice.util does not exist [javac] import roboguice.util.RoboAsyncTask; [javac] ^ [javac] /Users/mike/Projects/myapp/android/MyAppTests/src/com/myapp/test/SafeAsyncTest.java:6: package roboguice.util does not exist [javac] import roboguice.util.RoboLooperThread; [javac] ^ [javac] /Users/mike/Projects/myapp/android/MyAppTests/src/com/myapp/test/SafeAsyncTest.java:12: cannot find symbol [javac] symbol: class RoboUnitTestCase [javac] public class SafeAsyncTest extends RoboUnitTestCase<MyApplication> { [javac] ^ [javac] /Users/mike/Projects/myapp/android/MyApp/bin/classes/com/myapp/activity/Stories.class: warning: Cannot find annotation method 'value()' in type 'roboguice.inject.InjectResource': class file for roboguice.inject.InjectResource not found [javac] /Users/mike/Projects/myapp/android/MyApp/bin/classes/com/myapp/activity/Stories.class: warning: Cannot find annotation method 'value()' in type 'roboguice.inject.InjectResource' [javac] /Users/mike/Projects/myapp/android/MyApp/bin/classes/com/myapp/activity/Stories.class: warning: Cannot find annotation method 'value()' in type 'roboguice.inject.InjectView': class file for 
roboguice.inject.InjectView not found [javac] /Users/mike/Projects/myapp/android/MyApp/bin/classes/com/myapp/activity/Stories.class: warning: Cannot find annotation method 'value()' in type 'roboguice.inject.InjectView' [javac] /Users/mike/Projects/myapp/android/MyApp/bin/classes/com/myapp/activity/Stories.class: warning: Cannot find annotation method 'value()' in type 'roboguice.inject.InjectView' [javac] /Users/mike/Projects/myapp/android/MyApp/bin/classes/com/myapp/activity/Stories.class: warning: Cannot find annotation method 'value()' in type 'roboguice.inject.InjectView' [javac] /Users/mike/Projects/myapp/android/MyAppTests/src/com/myapp/test/GsonTest.java:15: cannot find symbol [javac] symbol : class JsonParser [javac] location: class com.myapp.test.GsonTest [javac] final JsonParser parser = new JsonParser(); [javac] ^ [javac] /Users/mike/Projects/myapp/android/MyAppTests/src/com/myapp/test/GsonTest.java:15: cannot find symbol [javac] symbol : class JsonParser [javac] location: class com.myapp.test.GsonTest [javac] final JsonParser parser = new JsonParser(); [javac] ^ [javac] /Users/mike/Projects/myapp/android/MyAppTests/src/com/myapp/test/GsonTest.java:18: cannot find symbol [javac] symbol : class JsonElement [javac] location: class com.myapp.test.GsonTest [javac] final JsonElement e = parser.parse(s); [javac] ^ [javac] /Users/mike/Projects/myapp/android/MyAppTests/src/com/myapp/test/GsonTest.java:20: cannot find symbol [javac] symbol : class JsonElement [javac] location: class com.myapp.test.GsonTest [javac] final JsonElement e2 = parser.parse(s2); [javac] ^ [javac] /Users/mike/Projects/myapp/android/MyAppTests/src/com/myapp/test/HttpTest.java:19: cannot find symbol [javac] symbol : method getInstrumentation() [javac] location: class com.myapp.test.HttpTest [javac] assertEquals("MyApp", getInstrumentation().getTargetContext().getResources().getString(com.myapp.R.string.app_name)); [javac] ^ [javac] /Users/mike/Projects/myapp/android/MyAppTests/src/com/myapp/test/HttpTest.java:62: cannot find symbol [javac] symbol : class RoboLooperThread [javac] location: class com.myapp.test.HttpTest [javac] new RoboLooperThread() { [javac] ^ [javac] /Users/mike/Projects/myapp/android/MyAppTests/src/com/myapp/test/HttpTest.java:82: cannot find symbol [javac] symbol : method assertTrue(java.lang.String,boolean) [javac] location: class com.myapp.test.HttpTest [javac] assertTrue(result[0], result[0].contains("Search")); [javac] ^ [javac] /Users/mike/Projects/myapp/android/MyAppTests/src/com/myapp/test/HttpTest.java:87: cannot find symbol [javac] symbol : class JsonObject [javac] location: class com.myapp.test.HttpTest [javac] final JsonObject[] result = {null}; [javac] ^ [javac] /Users/mike/Projects/myapp/android/MyAppTests/src/com/myapp/test/HttpTest.java:90: cannot find symbol [javac] symbol : class RoboLooperThread [javac] location: class com.myapp.test.HttpTest [javac] new RoboLooperThread() { [javac] ^ [javac] /Users/mike/Projects/myapp/android/MyAppTests/src/com/myapp/test/HttpTest.java:117: cannot find symbol [javac] symbol : class JsonObject [javac] location: class com.myapp.test.HttpTest [javac] final JsonObject[] result = {null}; [javac] ^ [javac] /Users/mike/Projects/myapp/android/MyAppTests/src/com/myapp/test/HttpTest.java:120: cannot find symbol [javac] symbol : class RoboLooperThread [javac] location: class com.myapp.test.HttpTest [javac] new RoboLooperThread() { [javac] ^ [javac] /Users/mike/Projects/myapp/android/MyAppTests/src/com/myapp/test/LinksTest.java:27: cannot find symbol 
[javac] symbol : method assertTrue(boolean) [javac] location: class com.myapp.test.LinksTest [javac] assertTrue(m.matches()); [javac] ^ [javac] /Users/mike/Projects/myapp/android/MyAppTests/src/com/myapp/test/LinksTest.java:28: cannot find symbol [javac] symbol : method assertEquals(java.lang.String,java.lang.String) [javac] location: class com.myapp.test.LinksTest [javac] assertEquals( map.get(url), m.group(1) ); [javac] ^ [javac] /Users/mike/Projects/myapp/android/MyAppTests/src/com/myapp/test/SafeAsyncTest.java:19: cannot find symbol [javac] symbol : method getInstrumentation() [javac] location: class com.myapp.test.SafeAsyncTest [javac] assertEquals("MyApp", getInstrumentation().getTargetContext().getString(com.myapp.R.string.app_name)); [javac] ^ [javac] /Users/mike/Projects/myapp/android/MyAppTests/src/com/myapp/test/SafeAsyncTest.java:27: cannot find symbol [javac] symbol : class RoboLooperThread [javac] location: class com.myapp.test.SafeAsyncTest [javac] new RoboLooperThread() { [javac] ^ [javac] /Users/mike/Projects/myapp/android/MyAppTests/src/com/myapp/test/SafeAsyncTest.java:65: cannot find symbol [javac] symbol : method assertEquals(com.myapp.test.SafeAsyncTest.State,com.myapp.test.SafeAsyncTest.State) [javac] location: class com.myapp.test.SafeAsyncTest [javac] assertEquals(State.TEST_SUCCESS,state[0]); [javac] ^ [javac] /Users/mike/Projects/myapp/android/MyAppTests/src/com/myapp/test/SafeAsyncTest.java:74: cannot find symbol [javac] symbol : class RoboLooperThread [javac] location: class com.myapp.test.SafeAsyncTest [javac] new RoboLooperThread() { [javac] ^ [javac] /Users/mike/Projects/myapp/android/MyAppTests/src/com/myapp/test/SafeAsyncTest.java:105: cannot find symbol [javac] symbol : method assertEquals(com.myapp.test.SafeAsyncTest.State,com.myapp.test.SafeAsyncTest.State) [javac] location: class com.myapp.test.SafeAsyncTest [javac] assertEquals(State.TEST_SUCCESS,state[0]); [javac] ^ [javac] /Users/mike/Projects/myapp/android/MyAppTests/src/com/myapp/test/SafeAsyncTest.java:113: cannot find symbol [javac] symbol : class RoboLooperThread [javac] location: class com.myapp.test.SafeAsyncTest [javac] new RoboLooperThread() { [javac] ^ [javac] /Users/mike/Projects/myapp/android/MyAppTests/src/com/myapp/test/SafeAsyncTest.java:144: cannot find symbol [javac] symbol : method assertEquals(com.myapp.test.SafeAsyncTest.State,com.myapp.test.SafeAsyncTest.State) [javac] location: class com.myapp.test.SafeAsyncTest [javac] assertEquals(State.TEST_SUCCESS,state[0]); [javac] ^ [javac] /Users/mike/Projects/myapp/android/MyAppTests/src/com/myapp/test/SafeAsyncTest.java:154: cannot find symbol [javac] symbol : class RoboLooperThread [javac] location: class com.myapp.test.SafeAsyncTest [javac] new RoboLooperThread() { [javac] ^ [javac] /Users/mike/Projects/myapp/android/MyAppTests/src/com/myapp/test/SafeAsyncTest.java:187: cannot find symbol [javac] symbol : method assertEquals(com.myapp.test.SafeAsyncTest.State,com.myapp.test.SafeAsyncTest.State) [javac] location: class com.myapp.test.SafeAsyncTest [javac] assertEquals(State.TEST_SUCCESS,state[0]); [javac] ^ [javac] /Users/mike/Projects/myapp/android/MyAppTests/src/com/myapp/test/StoriesTest.java:11: cannot access roboguice.activity.GuiceListActivity [javac] class file for roboguice.activity.GuiceListActivity not found [javac] public class StoriesTest extends ActivityUnitTestCase<Stories> { [javac] ^ [javac] /Users/mike/Projects/myapp/android/MyAppTests/src/com/myapp/test/StoriesTest.java:21: cannot access 
roboguice.application.GuiceApplication [javac] class file for roboguice.application.GuiceApplication not found [javac] setApplication( new MyApplication( getInstrumentation().getTargetContext() ) ); [javac] ^ [javac] /Users/mike/Projects/myapp/android/MyAppTests/src/com/myapp/test/StoriesTest.java:22: incompatible types [javac] found : com.myapp.activity.Stories [javac] required: android.app.Activity [javac] final Activity activity = startActivity(intent, null, null); [javac] ^ [javac] 39 errors [javac] 6 warnings BUILD FAILED /opt/local/android-sdk-mac/platforms/android-1.6/templates/android_rules.xml:248: Compile failed; see the compiler error output for details. Total time: 24 seconds That's not a hard problem to solve. I'm not sure it's the right thing to do, but I copied the missing libraries (roboguice and gson) from the MyApp/libs directory to the MyAppTests/libs directory and everything seems to compile fine. But that leads to the second problem, which I'm currently stuck on. The tests compile fine but they won't run: $ cp ../MyApp/libs/gson-r538.jar libs/ $ cp ../MyApp/libs/roboguice-1.1-SNAPSHOT.jar libs/ 0 10:23 /Users/mike/Projects/myapp/android/MyAppTests $ ant run-testsBuildfile: build.xml [setup] Project Target: Google APIs [setup] Vendor: Google Inc. [setup] Platform Version: 1.6 [setup] API level: 4 [setup] WARNING: No minSdkVersion value set. Application will install on all Android versions. -install-tested-project: [setup] Project Target: Google APIs [setup] Vendor: Google Inc. [setup] Platform Version: 1.6 [setup] API level: 4 [setup] WARNING: No minSdkVersion value set. Application will install on all Android versions. -compile-tested-if-test: -dirs: [echo] Creating output directories if needed... -resource-src: [echo] Generating R.java / Manifest.java from the resources... -aidl: [echo] Compiling aidl files into Java classes... compile: [javac] Compiling 1 source file to /Users/mike/Projects/myapp/android/MyApp/bin/classes -dex: [echo] Converting compiled files and external libraries into /Users/mike/Projects/myapp/android/MyApp/bin/classes.dex... [echo] -package-resources: [echo] Packaging resources [aaptexec] Creating full resource package... -package-debug-sign: [apkbuilder] Creating MyApp-debug-unaligned.apk and signing it with a debug key... [apkbuilder] Using keystore: /Users/mike/.android/debug.keystore debug: [echo] Running zip align on final apk... [echo] Debug Package: /Users/mike/Projects/myapp/android/MyApp/bin/MyApp-debug.apk install: [echo] Installing /Users/mike/Projects/myapp/android/MyApp/bin/MyApp-debug.apk onto default emulator or device... [exec] 1396 KB/s (288354 bytes in 0.201s) [exec] pkg: /data/local/tmp/MyApp-debug.apk [exec] Success -compile-tested-if-test: -dirs: [echo] Creating output directories if needed... -resource-src: [echo] Generating R.java / Manifest.java from the resources... -aidl: [echo] Compiling aidl files into Java classes... compile: [javac] Compiling 5 source files to /Users/mike/Projects/myapp/android/MyAppTests/bin/classes [javac] Note: /Users/mike/Projects/myapp/android/MyAppTests/src/com/myapp/test/SafeAsyncTest.java uses unchecked or unsafe operations. [javac] Note: Recompile with -Xlint:unchecked for details. -dex: [echo] Converting compiled files and external libraries into /Users/mike/Projects/myapp/android/MyAppTests/bin/classes.dex... [echo] -package-resources: [echo] Packaging resources [aaptexec] Creating full resource package... 
-package-debug-sign: [apkbuilder] Creating MyAppTests-debug-unaligned.apk and signing it with a debug key... [apkbuilder] Using keystore: /Users/mike/.android/debug.keystore debug: [echo] Running zip align on final apk... [echo] Debug Package: /Users/mike/Projects/myapp/android/MyAppTests/bin/MyAppTests-debug.apk install: [echo] Installing /Users/mike/Projects/myapp/android/MyAppTests/bin/MyAppTests-debug.apk onto default emulator or device... [exec] 1227 KB/s (94595 bytes in 0.075s) [exec] pkg: /data/local/tmp/MyAppTests-debug.apk [exec] Success run-tests: [echo] Running tests ... [exec] [exec] android.test.suitebuilder.TestSuiteBuilder$FailedToCreateTests:INSTRUMENTATION_RESULT: shortMsg=Class ref in pre-verified class resolved to unexpected implementation [exec] INSTRUMENTATION_RESULT: longMsg=java.lang.IllegalAccessError: Class ref in pre-verified class resolved to unexpected implementation [exec] INSTRUMENTATION_CODE: 0 BUILD SUCCESSFUL Total time: 38 seconds Any idea what's causing the "Class ref in pre-verified class resolved to unexpected implementation" error?
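
    For context, a hedged note: this IllegalAccessError typically means the same classes were packaged into both the application APK and the test APK, so test classes pre-verified against one copy of roboguice/gson resolve against the other copy at runtime. The usual remedy is to compile the test project against those jars without packaging them again, e.g. by undoing the copy made above and keeping the jars on the compile classpath only (paths as in the question):

        # the classes already ship inside MyApp-debug.apk; don't package them twice
        rm libs/gson-r538.jar libs/roboguice-1.1-SNAPSHOT.jar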

    Read the article

  • Openfiler iSCSI performance

    - by Justin
    Hoping someone can point me in the right direction with some iSCSI performance issues I'm having. I'm running Openfiler 2.99 on an older ProLiant DL360 G5: dual Xeon processors, 6GB ECC RAM, an Intel Gigabit Server NIC, and a SAS controller with 3 10K SAS drives in a RAID 5. When I run a simple write test on the box directly, the performance is very good: [root@localhost ~]# dd if=/dev/zero of=tmpfile bs=1M count=1000 1000+0 records in 1000+0 records out 1048576000 bytes (1.0 GB) copied, 4.64468 s, 226 MB/s So I created a LUN, attached it to another box I have running ESXi 5.1 (Core i7 2600k, 16GB RAM, Intel Gigabit Server NIC) and created a new datastore. Once I created the datastore I was able to create and start a VM running CentOS with 2GB of RAM and 16GB of disk space. The OS installed fine and I'm able to use it, but when I run the same test inside the VM I get dramatically different results: [root@localhost ~]# dd if=/dev/zero of=tmpfile bs=1M count=1000 1000+0 records in 1000+0 records out 1048576000 bytes (1.0 GB) copied, 26.8786 s, 39.0 MB/s [root@localhost ~]# Both servers have brand new Intel Server NICs and I have Jumbo Frames enabled on the switch, the Openfiler box, and the VMkernel adapter on the ESXi box. I can confirm this is set up properly by using the vmkping command from the ESXi host: ~ # vmkping 10.0.0.1 -s 9000 PING 10.0.0.1 (10.0.0.1): 9000 data bytes 9008 bytes from 10.0.0.1: icmp_seq=0 ttl=64 time=0.533 ms 9008 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.736 ms 9008 bytes from 10.0.0.1: icmp_seq=2 ttl=64 time=0.570 ms The only thing I haven't tried as far as networking goes is bonding two interfaces together. I'm open to trying that down the road, but for now I am trying to keep things simple. I know this is a pretty modest setup and I'm not expecting top-notch performance, but I would like to see 90-100MB/s. Any ideas?
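
    One caveat worth checking, offered as a hedged suggestion: without oflag=direct, dd inside the guest measures the page cache as much as the iSCSI path, so the two numbers may not be comparable. Forcing direct I/O (the same command as above, plus one flag) gives a more honest figure:

        dd if=/dev/zero of=tmpfile bs=1M count=1000 oflag=direct  # bypass the guest page cache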

    Read the article

  • Terminal server performance over high latency links

    - by holz
    Our datacenter and head office is currently in Brisbane, Australia, and we have a branch office in the UK. We have a private WAN with a 768k link to our UK office, and the latency is about 350ms. The terminal server performance is really bad. Applications that don't have much animation or any images seem to be okay, but as soon as they do, the session is almost unusable. PowerPoint and Internet Explorer are good examples of apps that make it run slowly. And if there is an image in your email signature, Outlook will hang for about 10 seconds each time a new line is inserted, while the image gets moved down a few pixels. We are currently running Server 2003. I have tried Server 2008 R2 RDS, and also a third-party solution called Blaze by a company called Ericom, but it is still not much better. We currently have a 5-level dynamic class of service with priority in the following order: VoIP, video, Terminal Services, printing, everything else. When testing the terminal server performance, the link was monitored using net-flows, and we have plenty of bandwidth available, so I believe it is a latency issue rather than bandwidth. Is there anything that can be done to improve performance? Would Citrix help at all?

    Read the article

  • mysql medium int vs. int performance?

    - by aviv
    Hi, I have a simple users table; I guess the maximum number of users I am going to have is 300,000. Currently I am using: CREATE TABLE users ( id INT UNSIGNED AUTO_INCREMENT PRIMARY KEY, .... Of course I have many other tables in which users(id) is a FOREIGN KEY. I read that since the id is not going to use the full range of INT, it is better to use MEDIUMINT, and that it will give better performance. Is that true? (I am using MySQL on Windows Server 2008.) Thanks.
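
    For reference, a sketch of the narrower definition being weighed: MEDIUMINT UNSIGNED is 3 bytes with a maximum of 16,777,215, comfortably above 300,000 (table and column names as in the question; the space saved per row and per index entry is the usual argument, rather than raw speed):

        CREATE TABLE users (
          id MEDIUMINT UNSIGNED AUTO_INCREMENT PRIMARY KEY
          -- remaining columns as before
        );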

    Read the article

  • Performance monitoring on Linux/Unix

    - by ervingsb
    I run a few Windows servers and (Debian and Ubuntu) Linux and AIX servers. I would like to continuously monitor performance on these systems in order to easily identify bottlenecks, as well as to have an overview of the general activity on the servers. On Windows, I use Windows Performance Monitor (perfmon) for this. I set up these counters. For bottlenecks: processor utilization: System\Processor Queue Length; memory utilization: Memory\Pages Input/Sec; disk utilization: PhysicalDisk\Current Disk Queue Length\driveletter; network problems: Network Interface\Output Queue Length\nic name. For general activity: processor utilization: Processor\% Processor Time_Total; memory utilization: Process\Working Set_Total (or per specific process); memory utilization: Memory\Available MBytes; disk utilization: PhysicalDisk\Bytes/sec_Total (or per process); network utilization: Network Interface\Bytes Total/Sec\nic name. (More information on the choice of these counters: http://itcookbook.net/blog/windows-perfmon-top-ten-counters) This works really well. It allows me to look in one place and identify the most common bottlenecks. So my question is, how can I do something equivalent (or just very similar) on Linux servers? I have looked a bit at nmon (http://www.ibm.com/developerworks/aix/library/au-analyze_aix/), which is a free performance monitoring tool developed for AIX but also available for Linux. However, I am not sure if nmon allows me to set up the above counters. Maybe that is because Linux and AIX do not allow monitoring these exact same measures. If so, which ones should I choose and why? If nmon is not the tool to use for this, then what do you recommend?
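
    As a rough Linux counterpart to the perfmon counters above, a hedged sketch using the stock procps/sysstat tools (package names and exact column names vary slightly by distribution):

        vmstat 5       # 'r' ~ run queue length; 'si'/'so' ~ pages swapped in/out per second
        iostat -dx 5   # 'avgqu-sz' ~ disk queue length; 'rkB/s'/'wkB/s' ~ disk throughput
        sar -n DEV 5   # per-interface 'rxkB/s'/'txkB/s' ~ network throughput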

    Read the article

  • sbt: "test" works "test:run" not

    - by Martin
    I am trying to establish a build pipeline on Jenkins with a Play (2.0.2) project. As I want to build the sources only once and use the classes for downstream builds, I have now created a "compile" job that runs "sbt test:compile". That works so far. The next job should then just run the compiled tests. If I use "sbt test" it works as expected, but it compiles the sources again. But if I try to run "sbt test:run" it says: [info] Loading project definition from ~/myproject/project [info] Set current project to myproject (in build file: ~/myproject/) java.lang.RuntimeException: No main class detected. at scala.sys.package$.error(package.scala:27) [error] {file:~/myproject/}test:run: No main class detected. The same happens locally. I can run "sbt test" but not "sbt test:run". Same error. Is there someone who can point me in the right direction?
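
    For context: in sbt, run (and therefore test:run) launches an object with a main method from the given scope, so test:run fails unless the test sources define one; already-compiled tests are executed with the test task, which only recompiles sources that changed. A hedged sketch of a runnable entry point in the test scope, if one is really wanted (the object name is made up):

        // src/test/scala/TestMain.scala -- hypothetical main class in the test scope
        object TestMain extends App {
          println("running from test:run")
        }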

    Read the article
