Search Results

Search found 10206 results on 409 pages for 'tooling and testing'.

  • What are best practices on virtual lab/test bed architecture?

    - by WooYek
    I am currently preparing a new small virtual environment for development and testing with Windows Server + SQL Server + AD + SharePoint + Exchange + IIS (ASP.NET) + BizTalk + ?, for a small (up to 5) dev team. What are the pros and cons of the different approaches, e.g. splitting services across different machines versus packing everything onto one machine per person? In your experience, what best practices should I follow in terms of architecture and the placement of the various systems/servers? What should be shared and what should be split per person? I would like to achieve some flexibility for the dev and testing process (so team members are not stepping on each other's toes) and limit the administrative effort needed to propagate settings, integrate work items, and revert changes when something breaks. It's not supposed to be an everyday development working environment, more a tier 2 developer testing environment, and not yet an integration or QA testing environment with a formal change process. IMO the two borderline solutions are:
      - creating one all-inclusive machine for each dev team member, giving them the freedom to manage it
      - creating a shared environment managed by one person, with a somewhat formalized change request process
    What golden mean would you recommend, and why?

    Read the article

  • WPF: OnRender and Hit Testing

    - by stefan.at.wpf
    Hello, when using OnRender to draw something on the screen, is there any way to perform hit testing on the drawn graphics? Sample code:

        protected override void OnRender(System.Windows.Media.DrawingContext drawingContext)
        {
            base.OnRender(drawingContext);
            drawingContext.DrawRectangle(Brushes.Black, null, new Rect(50, 50, 100, 100));
        }

    Obviously one has no reference to the drawn rectangle, which would be necessary to perform hit testing, or am I wrong about this? I know I can use DrawingVisual; I'm just curious whether my understanding is correct that you can't perform any hit testing on things drawn via OnRender.
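
    For reference, a minimal sketch (C#) of the DrawingVisual alternative mentioned above: content rendered into a DrawingVisual that is registered as a visual child participates in WPF hit testing, unlike drawings issued directly in OnRender. The HitTestableRect name is illustrative, not from the original post.

        using System.Windows;
        using System.Windows.Input;
        using System.Windows.Media;

        public class HitTestableRect : FrameworkElement
        {
            private readonly DrawingVisual visual = new DrawingVisual();

            public HitTestableRect()
            {
                using (DrawingContext dc = visual.RenderOpen())
                {
                    dc.DrawRectangle(Brushes.Black, null, new Rect(50, 50, 100, 100));
                }
                AddVisualChild(visual);  // registers the visual for hit testing
            }

            // Expose the DrawingVisual to the visual tree.
            protected override int VisualChildrenCount { get { return 1; } }
            protected override Visual GetVisualChild(int index) { return visual; }

            protected override void OnMouseDown(MouseButtonEventArgs e)
            {
                // VisualTreeHelper.HitTest walks the visual children, so the
                // rectangle drawn into the DrawingVisual is found here.
                HitTestResult result = VisualTreeHelper.HitTest(this, e.GetPosition(this));
                if (result != null && result.VisualHit == visual)
                {
                    // the click landed on the drawn rectangle
                }
            }
        }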

    Read the article

  • Your thoughts on Best Practices for Scientific Computing?

    - by John Smith
    A recent paper by Wilson et al (2014) pointed out 24 best practices for scientific programming. It's worth a look. I would like to hear opinions about these points from programmers experienced in scientific data analysis. Do you think this advice is helpful and practical? Or is it good only in an ideal world?

    Wilson G, Aruliah DA, Brown CT, Chue Hong NP, Davis M, Guy RT, Haddock SHD, Huff KD, Mitchell IM, Plumbley MD, Waugh B, White EP, Wilson P (2014) Best Practices for Scientific Computing. PLoS Biol 12:e1001745. http://www.plosbiology.org/article/info%3Adoi%2F10.1371%2Fjournal.pbio.1001745

    Box 1. Summary of Best Practices
    1. Write programs for people, not computers. (a) A program should not require its readers to hold more than a handful of facts in memory at once. (b) Make names consistent, distinctive, and meaningful. (c) Make code style and formatting consistent.
    2. Let the computer do the work. (a) Make the computer repeat tasks. (b) Save recent commands in a file for re-use. (c) Use a build tool to automate workflows.
    3. Make incremental changes. (a) Work in small steps with frequent feedback and course correction. (b) Use a version control system. (c) Put everything that has been created manually in version control.
    4. Don't repeat yourself (or others). (a) Every piece of data must have a single authoritative representation in the system. (b) Modularize code rather than copying and pasting. (c) Re-use code instead of rewriting it.
    5. Plan for mistakes. (a) Add assertions to programs to check their operation. (b) Use an off-the-shelf unit testing library. (c) Turn bugs into test cases. (d) Use a symbolic debugger.
    6. Optimize software only after it works correctly. (a) Use a profiler to identify bottlenecks. (b) Write code in the highest-level language possible.
    7. Document design and purpose, not mechanics. (a) Document interfaces and reasons, not implementations. (b) Refactor code in preference to explaining how it works. (c) Embed the documentation for a piece of software in that software.
    8. Collaborate. (a) Use pre-merge code reviews. (b) Use pair programming when bringing someone new up to speed and when tackling particularly tricky problems. (c) Use an issue tracking tool.

    I'm relatively new to serious programming for scientific data analysis. When I tried to write code for pilot analyses of some of my data last year, I encountered a tremendous number of bugs in both my code and my data. Bugs and errors had been around me all the time, but this time it was somewhat overwhelming. I managed to crunch the numbers at last, but I thought I couldn't put up with this mess any longer. Some action had to be taken. Without a sophisticated guide like the article above, I started to adopt a "defensive style" of programming. A book titled "The Art of Readable Code" helped me a lot. I deployed meticulous input validation and assertions for every function, renamed a lot of variables and functions for better readability, and extracted many subroutines as reusable functions. Recently, I introduced Git and SourceTree for version control. At the moment, because my co-workers are much more reluctant about these issues, the collaboration practices (8a,b,c) have not been introduced. Actually, as the authors admitted, because all of these practices take time and effort to introduce, it may be hard in general to persuade reluctant collaborators to comply with them.

    I think I'm asking for your opinions because I still suffer from many bugs despite all my effort on many of these practices. Bug fixing may be, or should be, faster than before, but I couldn't really measure the improvement. Moreover, much of my time has been invested in defence, meaning that I haven't actually done much data analysis (offence) these days. Where is the point at which I should stop, in terms of productivity?

    I've already deployed: 1a,b,c, 2a, 3a,b,c, 4b,c, 5a,d, 6a,b, 7a,b
    I'm about to have a go at: 5b,c
    Not yet: 2b,c, 4a, 7c, 8a,b,c

    (I could not really see the advantage of using GNU make (2c) for my purpose. Could anyone tell me how it helps my work with MATLAB?)

    Read the article

  • Books or Articles on Using NUnit to Test Entire Features

    - by INTPnerd
    Are there any books or articles that show you how to use NUnit to test entire features of a program? Is there a name for this type of testing? It is different from the typical use of NUnit for unit testing, where you test individual classes. It is similar to acceptance testing, except that it is written by the developer to verify that the program does what the developer understands the customer to want. I don't need it to be readable by non-programmers or to produce a readable specification for non-programmers. The problem I am having is keeping this feature-testing code maintainable. I need help organizing my feature-testing code. I also need help organizing the program code so it can be driven in this way. I am having a hard time being able to issue commands to the program while still having a good code design.
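
    For what it's worth, this style is often called feature, functional, or subcutaneous testing. A hedged sketch (NUnit, C#) of one common way to keep such tests maintainable: route every test through a thin application driver instead of touching internal classes directly. OrderDriver and its methods are hypothetical stand-ins for your program's own entry points.

        using System.Collections.Generic;
        using NUnit.Framework;

        [TestFixture]
        public class PlaceOrderFeatureTests
        {
            private OrderDriver driver;

            [SetUp]
            public void StartApplication()
            {
                // One facade for the whole feature keeps the tests decoupled
                // from internal classes, so refactoring does not break them.
                driver = new OrderDriver();
            }

            [Test]
            public void PlacingAnOrderReducesStock()
            {
                driver.AddProduct("widget", initialStock: 5);
                driver.PlaceOrder("widget", quantity: 2);
                Assert.That(driver.StockOf("widget"), Is.EqualTo(3));
            }
        }

        // Hypothetical application driver: issues commands to the program the
        // way a user or script would, hiding its internal structure.
        public class OrderDriver
        {
            private readonly Dictionary<string, int> stock = new Dictionary<string, int>();
            public void AddProduct(string name, int initialStock) { stock[name] = initialStock; }
            public void PlaceOrder(string name, int quantity) { stock[name] -= quantity; }
            public int StockOf(string name) { return stock[name]; }
        }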

    Read the article

  • Do I have to create a static library to test my application?

    - by Christopher Gateley
    I'm just getting started with TDD and am curious as to what approaches others take to run their tests. For reference, I am using the Google testing framework, but I believe the question is applicable to most other testing frameworks and to languages other than C/C++. My general approach so far has been to do one of three things:
      - Write the majority of the application in a static library, then create two executables. One executable is the application itself, while the other is the test runner with all of the tests. Both link to the static library.
      - Embed the testing code directly into the application itself, and enable or disable the testing code using compiler flags. This is probably the best approach I've used so far, but it clutters up the code a bit.
      - Embed the testing code directly into the application itself, and, given certain command-line switches, either run the application itself or run the tests embedded in the application.
    None of these solutions is particularly elegant... How do you do it?
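
    As an illustration of the third approach: the question is framed around the Google testing framework in C++, but the same single-binary pattern in C# with NUnitLite looks roughly like the hedged sketch below. The --run-tests flag and AppMain split are illustrative, not prescriptive.

        using System;
        using NUnitLite;

        public static class Program
        {
            public static int Main(string[] args)
            {
                if (args.Length > 0 && args[0] == "--run-tests")
                {
                    // NUnitLite's AutoRun discovers and executes all tests
                    // compiled into this assembly.
                    var testArgs = new string[args.Length - 1];
                    Array.Copy(args, 1, testArgs, 0, testArgs.Length);
                    return new AutoRun().Execute(testArgs);
                }
                return AppMain(args);  // the normal application entry point
            }

            private static int AppMain(string[] args)
            {
                Console.WriteLine("running the real application");
                return 0;
            }
        }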

    Read the article

  • Is it feasible and useful to auto-generate some code of unit tests?

    - by skiwi
    Earlier today I came up with an idea, based upon a particular real use case, which I would like to have checked for feasibility and usefulness. This question will feature a fair chunk of Java code, but it can be applied to all languages running inside a VM, and maybe even outside. While there is real code, it uses nothing language-specific, so please read it mostly as pseudo code.

    The idea: make unit testing less cumbersome by adding ways to auto-generate code based on human interaction with the codebase. I understand this goes against the principle of TDD, but I don't think anyone has ever proved that doing TDD is better than first creating the code and then, immediately thereafter, the tests. This may even be adapted to fit into TDD, but that is not my current goal. To show how it is intended to be used, I'll copy one of my classes here, for which I need to write unit tests.

        public class PutMonsterOnFieldAction implements PlayerAction {

            private final int handCardIndex;
            private final int fieldMonsterIndex;

            public PutMonsterOnFieldAction(final int handCardIndex, final int fieldMonsterIndex) {
                this.handCardIndex = Arguments.requirePositiveOrZero(handCardIndex, "handCardIndex");
                this.fieldMonsterIndex = Arguments.requirePositiveOrZero(fieldMonsterIndex, "fieldMonsterIndex");
            }

            @Override
            public boolean isActionAllowed(final Player player) {
                Objects.requireNonNull(player, "player");
                Hand hand = player.getHand();
                Field field = player.getField();
                if (handCardIndex >= hand.getCapacity()) {
                    return false;
                }
                if (fieldMonsterIndex >= field.getMonsterCapacity()) {
                    return false;
                }
                if (field.hasMonster(fieldMonsterIndex)) {
                    return false;
                }
                if (!(hand.get(handCardIndex) instanceof MonsterCard)) {
                    return false;
                }
                return true;
            }

            @Override
            public void performAction(final Player player) {
                Objects.requireNonNull(player);
                if (!isActionAllowed(player)) {
                    throw new PlayerActionNotAllowedException();
                }
                Hand hand = player.getHand();
                Field field = player.getField();
                field.setMonster(fieldMonsterIndex, (MonsterCard) hand.play(handCardIndex));
            }
        }

    We can observe the need for the following tests:
      - Constructor test with valid input
      - Constructor test with invalid inputs
      - isActionAllowed test with valid input
      - isActionAllowed test with invalid inputs
      - performAction test with valid input
      - performAction test with invalid inputs

    My idea mainly focuses on the isActionAllowed test with invalid inputs. Writing these tests is not fun: you need to set up a number of conditions and then check that the method really returns false. This extends to performAction, where an exception needs to be thrown in that case. The goal of my idea is to generate those tests, by indicating (through the GUI of the IDE, hopefully) that you want to generate tests based on a specific branch.

    The implementation, by example:
      - The user clicks on "Generate code for branch if (handCardIndex >= hand.getCapacity())". Now the tool needs to find a case where that condition holds. (I haven't added the relevant code, as that may ultimately clutter the post.)
      - To invalidate the branch, the tool needs to find a handCardIndex and hand.getCapacity() such that the condition >= holds.
      - It needs to construct a Player with a Hand that has a capacity of at least 1. It notices that the private capacity int of Hand needs to be at least 1 and searches for ways to set it to 1. Fortunately, it finds a constructor that takes the capacity as an argument, and uses 1 for it.
      - Some more work needs to be done to successfully construct a Player instance, involving the creation of objects that have constraints that can be seen by inspecting the source code.
      - It has found the Hand with the least capacity possible and is able to construct it. Now, to invalidate the test, it will need to set handCardIndex = 1.
      - It constructs the test and asserts it to be false (the value returned by the branch).

    What does the tool need to work? In order to function properly, it will need the ability to scan through all source code (including JDK code) to figure out all constraints. Optionally this could be done through the Javadoc, but that is not always used to indicate all constraints. It could also do some trial and error, but that pretty much stops when you cannot attach source code to compiled classes. Then it needs some basic knowledge of what the primitive types are, including arrays. And it needs to be able to construct some form of "modification trees": the tool knows that it needs to change a certain variable to a different value in order to get the correct test case, so it will need to list all possible ways to change it, without using reflection obviously.

    What this tool will not replace is the need to create tailored unit tests that check all kinds of conditions when a certain method actually works. It is purely to be used to test methods when they invalidate constraints.

    My questions:
      - Is creating such a tool feasible? Would it ever work, or are there some obvious problems?
      - Would such a tool be useful? Is it even useful to automatically generate these test cases at all?
      - Could it be extended to do even more useful things?
      - Does, by chance, such a project already exist, and would I be reinventing the wheel?

    If not proven useful, but still possible to make such a thing, I will still consider it for fun. If it's considered useful, then I might make an open source project of it, depending on the time. For people wanting more background information about the Player and Hand classes used in my example, please refer to this repository. At the time of writing, the PutMonsterOnFieldAction has not been uploaded to the repo yet, but this will be done once I'm done with the unit tests.

    Read the article

  • Test Driven Development (TDD) with Rails

    - by macek
    I am looking for TDD resources that are specific to Rails. I've seen the Rails Guide: The Basics of Creating a Rails Plugin, which really spurred my interest in the topic. I have the Agile Development with Rails book and I see there's some testing-related information there. However, it seems like the author takes you through the steps of building the app, then adds testing afterward. This isn't really Test Driven Development. Ideally, I'd like a book on this, but a collection of other tutorials or articles would be great if such a book doesn't exist. Things I'd like to learn:
      - Primary goal: Best Practices
      - Unit testing
      - How to utilize Fixtures
          - Possibly using existing development data in place of fixtures
          - What's the community standard here?
      - Writing tests for plugins
      - Testing with session data
          - User is logged in
          - User can access URL /foo/bar
      - Testing success of sending email
    Thanks for any help!

    Read the article

  • Sequence Number in testing Spring application with JUnit (Hibernating, Spring MVC)

    - by MBK
    I am testing a DAO in a Spring application.

        @RunWith(SpringJUnit4ClassRunner.class)
        @ContextConfiguration(locations = "classpath:/applicationContext.xml")
        @TransactionConfiguration(transactionManager = "transactionManager", defaultRollback = true)
        @Transactional
        public class CommentDAOImplTest {
            @Autowired
            // testing methods here
        }

    The tests are running well. I am able to add a comment, and I also have the defaultRollback property set, so the added comment is deleted automatically. Happy! Now the problem is with the sequence number for mcomment. Can I, in any way, roll back the sequence number? Any suggestions on that? I don't want to mess up the sequence number. The business requires the comment ID to be shown. (I still don't know why.) I know an in-memory DB is an option, but I am guessing the purpose of defaultRollback is to eliminate in-memory DB testing and mocking. (Just my opinion.)

    Read the article

  • Code excavations, wishful invocations, perimeters and domain specific unit test frameworks

    - by RoyOsherove
    One of the talks I did at QCon London was about a subject that I've come across fairly recently, when I was building SilverUnit – a "pure" unit test framework for Silverlight objects that depend on the Silverlight runtime to run. It is the concept of "cogs in the machine" – when your piece of code needs to run inside a host framework or runtime that you have little or no control over for testability-related matters. Examples of such cogs and machines can be:
      - your custom control running inside the Silverlight runtime in the browser
      - your plug-in running inside an IDE
      - your activity running inside a Windows Workflow
      - your code running inside a Java EE bean
      - your code inheriting from a COM+ (Enterprise Services) component
      - etc.
    Not all of these are necessarily testability problems. The main testability problem usually comes when your code actually inherits from something inside the system. For example, one of the biggest problems with testing objects like Silverlight controls is the way they depend on the Silverlight runtime – they don't implement some Silverlight interface, they don't just call external static methods against the framework runtime that surrounds them – they actually inherit parts of the framework: they all inherit (in this case) from the Silverlight DependencyObject.

    Wrapping it up? An inheritance dependency is uniquely challenging to bring under test, because "classic" methods such as wrapping the object under test with a framework wrapper will not work, and the only way to do it manually is to create parallel testable objects that get delegated all the possible actions from the dependencies. In Silverlight's case, that would mean creating your own custom logic class that would be called directly from controls that inherit from Silverlight, and would be tested independently of these controls. The pro side is that you get the benefit of understanding the "contract" and the "roles" your system plays against your logic; unfortunately, more often than not, it can be very tedious to create, and may sometimes feel unnecessary or like code duplication.

    About perimeters. A perimeter is that invisible line that you draw around your pieces of logic during a test, separating the code under test from any dependencies that it uses. Most of the time, a test perimeter around an object will be the list of seams (dependencies that can be replaced, such as interfaces, virtual methods etc.) that are actually replaced for that test or for all the tests.

    Role-based perimeters. In the case of creating a wrapper around an object, one really creates a "role-based" perimeter around the logic that is being tested – that wrapper takes on roles that are required by the code under test, and also communicates with the host system to implement those roles and provide any inputs to the logic under test. In the first image from the talk, the code we want to test is represented as a star, with no perimeter drawn yet (we haven't wrapped it up in anything yet). The next image shows what happens when you wrap your logic with a role-based wrapper: you get a role-based perimeter anywhere your code interacts with the system. There's another way to bring that code under test – using isolation frameworks like Typemock, Rhino Mocks and Moq (but if your code inherits from the system, Typemock might be the only way to isolate the code from the system interaction).

    Ad-hoc isolation perimeters. Another image from the talk shows what I call an ad-hoc perimeter, which might be vastly different between different tests. This perimeter's surface is much smaller, because for that specific test, that is all the "change" that is required to the host system behavior.

    The third way of isolating the code from the host system is the main "meat" of this post: subterranean perimeters. Subterranean perimeters are deep-rooted perimeters – "always on" seams that can lie very deep in the heart of the host system, where they are fully invisible even to the test itself, not just to the code under test. Because they lie deep inside a system you can't control, the only way I've found to control them is with runtime (not compile-time) interception of method calls on the system. One way to get such abilities is by using aspect-oriented frameworks – for example, in SilverUnit, I've used the CThru AOP framework, based on Typemock hooks and CLR profilers, to intercept such system-level method calls and effectively turn them into seams that lie deep down at the heart of the Silverlight runtime. As the talk's diagram of such a perimeter shows, the actual seams can be very far away from the actual code under test, and as you'll discover, that's actually a very good thing. Here is only a partial list of examples of such deep-rooted seams:
      - disabling the constructor of a base class five levels below the code under test (this.base.base.base.base)
      - faking static methods of a type that's being called several levels down the stack: method x() calls y() calls z() calls SomeType.StaticMethod()
      - replacing an async mechanism with a synchronous one (replacing all timers with your own timer behavior that always ticks immediately upon calls to start() on the same caller thread, for example)
      - replacing event mechanisms with your own event mechanism (to allow "firing" system events)
      - changing the way the system saves information with your own saving behavior (in SilverUnit, I replaced all Dependency Property set and get calls with calls to an in-memory value store instead of using the one built into Silverlight, which threw exceptions without a browser)
    Several questions could jump in:
      - How do you know what to fake? (How do you discover the perimeter?)
      - How do you fake it?
      - Wouldn't this be problematic – to fake something you don't own? It might change in the future.
    How do you discover the perimeter to fake? To discover a perimeter, all you have to do is start with a wishful invocation. A wishful invocation is the act of trying to invoke a method (or even just create an instance) of an object using "regular" test code. You invoke the thing that you'd like to do in a real unit test, to see what happens:
      - Can I even create an instance of this object without getting an exception?
      - Can I invoke this method on that instance without getting an exception?
      - Can I verify that some call into the system happened?
    You make the invocation, get an exception (because there is a dependency), and look at the stack trace. Choose a location in the stack trace and disable it. Then try the invocation again. If you don't get an exception, the perimeter is good for that invocation, so you can move on to trying out other methods on that object. In a future post I will show the process using CThru, and how you end up with something close to a domain-specific test framework after you're done creating the perimeter you need.
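
    As a concrete illustration of a wishful invocation, a minimal hedged sketch (C#, NUnit); MyControl is a hypothetical class standing in for one that inherits from the host framework. The pattern is simply: write the call you wish you could make, run it, and let the failure's stack trace point at the next dependency to disable.

        using System;
        using NUnit.Framework;

        // Hypothetical stand-in: in the real case this would inherit from the
        // host framework (e.g. a Silverlight DependencyObject).
        public class MyControl
        {
            public MyControl() { /* the real ctor runs host-framework code */ }
            public void DoSomething() { }
        }

        [TestFixture]
        public class WishfulInvocationTests
        {
            [Test]
            public void WishfulInvocation_CreateAndInvoke()
            {
                try
                {
                    var control = new MyControl();  // can I even construct it?
                    control.DoSomething();          // can I invoke this method?
                }
                catch (Exception ex)
                {
                    // The top frames of this trace show the host-system call
                    // to intercept next (e.g. a base constructor several
                    // levels down); disable it and re-run.
                    Console.WriteLine(ex.StackTrace);
                    throw;
                }
            }
        }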

    Read the article

  • “It’s only test code…”

    - by Chris George
    “Let me hack this in, it’s only test code”, “Don’t worry about getting it reviewed, it’s only test code”, “It doesn’t have to be elegant or efficient, it’s only test code”… do these phrases sound familiar? Chances are, if you’ve been working with test automation, at one point or another you will have heard these phrases; you have probably even used them yourself! What is certain is that code written under this “it’s only test code” mantra will come back and bite you in the arse! I’ve recently encountered a case where a test was giving a false positive, thereby hiding a real product bug, because that test code was very badly written. Firstly, it was very difficult to understand what the test was actually trying to achieve, let alone how it was doing it, and this complexity masked a simple logic error. These issues are real and they do happen. Let’s take a step back from this and look at what we are trying to do. We are writing test code that tests product code, and we do this to create a suite of tests that will help protect our software against regressions. This test code is making sure that the product behaves as it should by employing some sort of expected-result verification. The simple cases of these are generally not a problem. However, automation allows us to explore more complex scenarios in many more permutations. As this complexity increases, so does the complexity of the test code. It is at this point that code which has not been architected properly will cause problems.

    Keep your friends close… So, how do we make sure we are doing it right? The development teams I have worked on have always had Test Engineers working very closely with their Software Engineers. This is something that I have always tried to take full advantage of. They are coding experts! So run your ideas past them, ask for advice on how to structure your code, and get their help designing your data structures. This may require a shift in your team's viewpoint, as contrary to this section title and folklore, Software Engineers are not actually the mortal enemy of Test Engineers. As time progresses, and test automation becomes more and more ingrained in what we do, the two roles are converging more than ever. Over the 16 years I have spent as a Test Engineer, I have seen the grey area between the two roles grow significantly larger. This serves to strengthen the relationship and common bond between the two roles, which helps to make test code activities so much easier!

    Pair for the win. Possibly the best thing you could do to write good test code is to pair program on the task. This will serve a few purposes:
      - you will get the benefit of the Software Engineer's knowledge and experience
      - the Software Engineer will gain knowledge of the testing process. Sharing the love is a wonderful thing!
      - two pairs of eyes are always better than one… and so are two brains. Between the two of you, I guarantee you will derive more useful test cases than if it was just one of you.

    Code reviews. Another policy which certainly pays dividends is the practice of code reviews. Having one of your peers review your code before you commit it serves two purposes. Firstly, it forces you to explain your code; just the act of doing this will often pick up errors in your code. Secondly, it gets yet another pair of eyes on your code! I cannot stress enough how important code reviews are. The benefits they offer apply as much to product code as to test code. In short, Software and Test Engineers should all be doing them!
    It can be extended even further by getting test code reviewed by both a Software Engineer and a Test Engineer, and likewise product code. This serves to keep both functions in the loop with changes going on within your code base.

    Learn from your devs. I briefly touched on this earlier, but I’d like to go into more detail here. Pairing with your Software Engineers when writing your test code is an amazing opportunity to improve your coding skills. As I sit here writing this article, waiting to be called into court for jury service, it reminds me that it takes a lot of patience to be a Test Engineer, almost as much as it takes to be a juror! However tempting it is to go rushing in and start writing your automated tests, resist that urge. Discuss what you want to achieve, then talk through the approach you’re going to take. Then code it up together. I find it really enlightening to ask questions like ‘is there a better way to do this?’ or ‘is this how you would code it?’ The latter question, especially, is where I learn the most. I’ve found that most Software Engineers will be reluctant to show you the ‘right way’ to code something when writing tests because they perceive the ‘right way’ to be too complicated for the Test Engineer (e.g. not mentioning LINQ and instead doing something verbose). So by asking how THEY would code it, it unleashes their true dev-ness and advanced code usually ensues! I would like to point out, however, that you don’t have to accept their method as the final answer. On numerous occasions I have opted for the simpler, more verbose solution because I found the code written by the Software Engineer too advanced, and therefore I would find it unreadable when I returned to the code in a month’s time! Always keep the target audience in mind when writing clever code, and in my case that is mostly Test Engineers.

    Read the article

  • Apache keeps resetting while testing on localhost...

    - by Scott
    Hello everyone. I'm getting errors while testing web pages on localhost. I'm running Windows 7 64-bit. I'm not using WAMP or XAMPP. This is what the error.log tells me (the errors in question are the ServerName lines):

        [Sat Mar 06 05:10:55 2010] [notice] Apache/2.2.14 (Win32) PHP/5.2.13 configured -- resuming normal operations
        [Sat Mar 06 05:10:55 2010] [notice] Server built: Sep 28 2009 22:41:08
        [Sat Mar 06 05:10:55 2010] [notice] Parent: Created child process 6588
        httpd.exe: Could not reliably determine the server's fully qualified domain name, using 192.168.2.2 for ServerName
        httpd.exe: Could not reliably determine the server's fully qualified domain name, using 192.168.2.2 for ServerName
        [Sat Mar 06 05:10:55 2010] [notice] Child 6588: Child process is running
        [Sat Mar 06 05:10:55 2010] [notice] Child 6588: Acquired the start mutex.
        [Sat Mar 06 05:10:55 2010] [notice] Child 6588: Starting 1000 worker threads.
        [Sat Mar 06 05:10:55 2010] [notice] Child 6588: Starting thread to listen on port 80.

    Any input would be greatly appreciated. Thanks.

    Read the article

  • testing ssl cert for smtps => "secure connection could not be established with this website"

    - by cc young
    I am testing the SSL certs on a server using a web service. HTTPS, IMAPS and POP3S all check out, but SMTPS yields the message "we advise you not to submit any confidential or personal data to this website because a secure connection could not be established with this website." I am running Postfix; the TLS log shows:

        connect from s097.networking4all.com[213.249.64.242]
        lost connection after UNKNOWN from s097.networking4all.com[213.249.64.242]
        disconnect from s097.networking4all.com[213.249.64.242]

    These work correctly:

        telnet mydomain.net 587
        openssl s_client -starttls smtp -crlf -connect mydomain.net:587

    But I cannot get email clients to log in using SSL on either 587 or 564; I get the same "UNKNOWN" problem. Email over SMTP without SSL works fine. The test site is http://www.networking4all.com/en/support/tools/site+check/

    Read the article

  • Methodologies for performance-testing a WAN link

    - by Chopper3
    We have a pair of new diversely-routed 1Gbps Ethernet links between locations about 200 miles apart. The 'client' is a new reasonably-powerful machine (HP DL380 G6, dual E56xx Xeons, 48GB DDR3, R1 pair of 300GB 10krpm SAS disks, W2K8R2-x64) and the 'server' is a decent enough machine too (HP BL460c G6, dual E55xx Xeons, 72GB, R1 pair of 146GB 10krpm SAS disks, dual-port Emulex 4Gbps FC HBA linked to dual Cisco MDS9509s then onto dedicated HP EVA 8400 with 128 x 450GB 15krpm FC disks, RHEL 5.3-x64). Using SFTP from the client we're only seeing about 40Kbps of throughput using large (2GB) files. We've performed server to 'other local server' tests and see around 500Mbps through the local switches (Cat 6509s), we're going to do the same on the client side but that's a day or so away. What other testing methods would you use to prove to the link providers that the problem is theirs?

    Read the article

  • gtk2/mate apps are choppy on debian testing [migrated]

    - by b0ti
    I have recently upgraded to a Core i7 system and now all the MATE apps (GTK2-based) are very slow. Basically, when I switch to a workspace that has a couple of mate-terminals open, they are redrawn as if they were being sent over the network. GTK3 apps and Firefox work properly and are refreshed instantly. I'm running Debian testing. I have tried reinstalling xorg and related packages, and everything seems to work fine except for this. Here is my xorg.log. Any hints?

    Read the article

  • Install package from debian stable unavailable in testing/unstable repositories

    - by overprescribed
    I'm currently running Debian testing and would like to install a package only available in the stable repositories. (I'm surprised I haven't come across this issue before) I could download the .deb directly and use dpkg to manually install it, but installing packages from one release into another is usually frowned upon. What's the best course of action? EDIT: Zoredache is right, I didn't realize this package has been removed from future versions of Debian as it no longer has a maintainer. It is of course, also pointed out by Zoredache, important to find out why a particular package has been removed before attempting to install it. I've altered the title slightly to reflect the actual issue.

    Read the article

  • Missing kernel on debian-testing-amd64-DVD-1

    - by Kyrol
    I want to install the Debian testing version, but at a certain point in the installation an error occurs:

        Impossible to install kernel. A compatible kernel version is missing...

    ... or something like that. I see that the ISO image of the DVD is less than 4GB, but it seems impossible that there is no compatible kernel version for the system. I have an Asus X53Sc with an Intel Core i7-2630QM and a GeForce GT 520MX.

    Read the article

  • ipv6 ssh tunnel service for testing?

    - by Geuis
    I need to do some testing on a service that I run to make sure that it can handle ipv6 addresses. Basically, I need to connect to it from an ipv6 address. I've created a tunnel via tunnelbroker.net, but I'm finding the steps required to get a tunnel configured on my machine and router to be a lot of trouble. Given that I'm not a networking specialist and that I haven't had to dig into routing configuration in years, I'd like to know if there's an existing service that I can just ssh into and use it as my ipv6 endpoint. Simply being able to curl or wget from such an endpoint to my service would be more than enough to test what I need. Thanks!

    Read the article

  • What are various methods for discovering test cases

    - by NativeByte
    All, I am a developer, but I would like to know more about the testing process and its methods. I believe this helps me write more solid code, as it improves the cases I can cover with my unit tests before delivering the product to the test team. I have recently started looking at Test Driven Development and the exploratory testing approach to software projects. Now it's easier for me to find test cases for the code that I have written. But I am curious to know how to discover test cases when I am not the developer of the functionality under test. Say, for example, we have a basic user registration form like those we see on various websites. Assuming the person testing it is not the developer of the form, how should one go about testing the input fields on the form? What would be your strategy? How would you discover test cases? I believe this kind of testing benefits from an exploratory testing approach; I may be wrong here, though. I would appreciate your views on this. Thanks, Byte
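
    One common way to record the cases such exploration discovers is as data-driven tests. A hedged sketch (NUnit, C#), where Validator.IsValidEmail is a hypothetical stand-in for the form's real validation; the rows illustrate equivalence classes and boundary values for a single input field:

        using NUnit.Framework;

        [TestFixture]
        public class EmailFieldTests
        {
            // Equivalence classes and boundary values discovered by exploring
            // the input domain: a valid shape, empty input, missing parts.
            [TestCase("user@example.com", true)]
            [TestCase("", false)]                        // empty input
            [TestCase("no-at-sign.example.com", false)]  // no @ at all
            [TestCase("user@", false)]                   // missing domain
            [TestCase("@example.com", false)]            // missing local part
            public void EmailValidation(string input, bool expected)
            {
                Assert.That(Validator.IsValidEmail(input), Is.EqualTo(expected));
            }
        }

        // Hypothetical validator so the sketch is self-contained; a real form
        // would have far stricter rules.
        public static class Validator
        {
            public static bool IsValidEmail(string s)
            {
                return !string.IsNullOrEmpty(s)
                    && s.IndexOf('@') > 0
                    && s.IndexOf('@') < s.Length - 1;
            }
        }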

    Read the article

  • Test SSL Certificate for MQ SSL Testing

    - by user171523
    I am in the process of testing MQ calls over SSL. I would like to know where I can get some demo SSL certificates that I can use for testing. I would also like to know if there is any code sample I can use to establish this kind of SSL connection. The example I am looking for is in C#.
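
    For the certificate, a throwaway self-signed one (e.g. generated with OpenSSL's "req -x509 -newkey" command or Windows' makecert tool) is the usual stand-in for testing. As for code, a minimal hedged sketch (C#) of an SSL client connection using SslStream; the host name is a placeholder, 1414 is the conventional MQ listener port, and a real MQ client would layer its own protocol on top:

        using System;
        using System.Net.Security;
        using System.Net.Sockets;
        using System.Security.Cryptography.X509Certificates;

        class SslProbe
        {
            static void Main()
            {
                using (var client = new TcpClient("mq.example.test", 1414))
                using (var ssl = new SslStream(client.GetStream(), false, AcceptTestCert))
                {
                    // Performs the TLS handshake against the server.
                    ssl.AuthenticateAsClient("mq.example.test");
                    Console.WriteLine("Negotiated protocol: " + ssl.SslProtocol);
                }
            }

            // TEST ONLY: trusts any certificate so a self-signed demo cert is
            // accepted. Never use a callback like this in production code.
            static bool AcceptTestCert(object sender, X509Certificate cert,
                                       X509Chain chain, SslPolicyErrors errors)
            {
                return true;
            }
        }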

    Read the article

  • Testing in-app purchases on iPhone

    - by hemant
    While testing my application, I bought a product it offers on my phone through the test account. Now I have deleted the application and reinstalled it, but the application still shows that the product has already been bought. When we buy through in-app purchase, does the product I bought, or its ID, get stored on the iPhone filesystem? I am just testing the application, so I don't know much about what could be wrong with it.

    Read the article

  • Stop CassiniDev after testing finishes

    - by senzacionale
    I run CassiniDev from cmd:

        C:\CruiseControl.NET-1.5.0.6237\cassinidev.3.5.0.5.src-repack\CassiniDev\bin\Debug\CassiniDev.exe /a:D:\_CCNET\proj /pm:Specific /p:3811

    and then start debugging and testing. How can I stop CassiniDev from cmd after I have finished testing? I tried cassiniDev_console, but the console version is not working, so I am using CassiniDev from the console.

    Read the article

  • Good hosted sites to manage quality testing?

    - by Chirag Patel
    Basically, I would like to manage quality testing with an issue management system focused on quality testing. I can't use a typical issue management system such as Lighthouse or FogBugz, because each test is written as a ticket, and 20 to 30 tickets need to be duplicated (with no history) every time we start a quality cycle. Do any hosted sites for this exist? We're currently using a Google Spreadsheet so it can be collaboratively edited.

    Read the article

  • Suggestions for open source testing tool for cloud computing

    - by vikraman
    Hi, I want to know if there is any open source testing tool for cloud computing. We have built a cloud framework with Xen, Eucalyptus, Hadoop and HBase as different layers. I am not looking at testing each of these tools separately; I want to test them from the perspective of fitting into a cloud environment (for example, the scalability of the Xen hypervisor to handle multiple VMs). It would be great if you could suggest some (open source) tool for the above.

    Read the article
