Search Results

Search found 14797 results on 592 pages for 'gui testing'.

Page 5 of 592

  • GUI is not displayed anymore

    - by mdobrinin
    I am running Ubuntu 12.04 LTS on a VM. I ran into a problem where the screen went black, with the familiar orange background showing for a fraction of a second when clicking or resizing a window. I had to shut down the machine, and now I have no GUI at all; I only get a black command-line interface. As some other posts suggest, I attempted:

        sudo service lightdm restart

    This doesn't work for me because it gets stuck at this point:

        Skipping profile in /etc/apparmor.d/disable: usr.bin.firefox
        Skipping profile in /etc/apparmor.d/disable: usr.sbin.rsyslogd
         * Starting AppArmor profiles       [ OK ]

    Any ideas?

    Read the article

  • Function testing on Netbeans 6.8

    - by ron
    Though not a torrent, some articles about functional testing can be found on the net (particularly http://blogs.sun.com/geertjan/entry/gui_testing_on_the_netbeans). However, the tools they mention do not seem to be maintained, or don't have a plugin that works with the most recent version of NetBeans (6.8). Do you have any functional test setup for GUIs? What is your level of integration into the development process (IDE integration, Ant, etc.)? An additional wrinkle is that NetBeans is not only the IDE: the GUI app is also developed for the NetBeans 6.8 Platform (so I'm mainly interested in GUI testing NB Platform apps, but tips for Swing apps in general would be a help too).
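    For reference, one library that was still maintained at the time is Jemmy, which the NetBeans team itself uses for GUI testing and which drives plain Swing apps as well as Platform apps. A minimal sketch of the idea; the main class, window title, and button caption here are hypothetical:

        import org.netbeans.jemmy.ClassReference;
        import org.netbeans.jemmy.operators.JButtonOperator;
        import org.netbeans.jemmy.operators.JFrameOperator;
        import org.netbeans.jemmy.operators.JTextFieldOperator;

        public class LoginPanelTest {
            public static void main(String[] args) throws Exception {
                // Launch the application under test by its main class.
                new ClassReference("com.example.MainApp").startApplication();

                // Attach to the main window by title, then drive widgets by caption/index.
                JFrameOperator mainFrame = new JFrameOperator("My Application");
                new JTextFieldOperator(mainFrame, 0).typeText("demo-user");
                new JButtonOperator(mainFrame, "Log in").push();

                // A real test would now assert on the resulting UI state.
            }
        }

    Such a test can be driven from an Ant target, which speaks to the process-integration part of the question.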

    Read the article

  • GUI based backup utility [closed]

    - by Chethan S.
    Possible Duplicate: Comparison of backup tools

    I have read favourable reviews of 'Back In Time' for the purpose stated above. Still, I am posting this question because I have some specific demands in mind. A few years back I was using ThinkVantage Rescue and Recovery by IBM on my Lenovo PC under Windows. That gave me nice features like compressed backups and boot-time options: OS repair, restore the entire OS, restore the entire system to an earlier date, restore individual files, etc. Of these, the feature I liked most was compressed backups. Similar features are available in software like Norton Ghost too. In Back In Time I was surprised to see that a snapshot takes up the same amount of space as the original contents; no compression at all. Furthermore, I was not able to find options under Settings to change the compression ratio or the like. To my mind, compression of backups is a must-have feature. Can anyone therefore suggest another utility which can serve the purpose? I insist on a GUI-based tool since I don't want to mess up my backups!

    Read the article

  • Mock Objects for Unit Testing

    - by user9009
    Hello. How often are QA engineers responsible for developing mock objects for unit testing, or is dealing with mock objects strictly a developer's job? The reason I ask is that I'm interested in QA as a career and am learning tools like JUnit, TestNG, and a couple of frameworks. I just want to know up to what level unit testing is done by the developer, and at what point the QA engineer takes over testing for better test coverage. Thanks
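    For context, a mock object stands in for a real collaborator so a unit can be tested in isolation, whoever ends up writing it. A minimal sketch with JUnit 4 and Mockito; the QuoteService interface is hypothetical:

        import static org.junit.Assert.assertEquals;
        import static org.mockito.Mockito.*;

        import org.junit.Test;

        public class QuoteClientTest {

            // Hypothetical collaborator that would normally hit a remote service.
            interface QuoteService {
                double quoteFor(String customerId);
            }

            @Test
            public void clientUsesServiceQuote() {
                // Create a mock and script its behaviour.
                QuoteService service = mock(QuoteService.class);
                when(service.quoteFor("c-42")).thenReturn(99.5);

                // Exercise the collaborator, then check both the value and the interaction.
                assertEquals(99.5, service.quoteFor("c-42"), 0.001);
                verify(service, times(1)).quoteFor("c-42");
            }
        }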

    Read the article

  • Development processes, the use of version control, and unit-testing

    - by ct01
    Preface

    I've worked at quite a few "flat" organizations in my time. Most of the version control policy/process has been "only commit after it's been tested". At each place we were constantly committing to "trunk" (CVS/SVN). The same was true of unit testing: it has always been a "we need to do this" mentality, but it never really materializes in a substantive form, because there is no institutional knowledge base for it and no mentorship.

    Version control

    The emphasis for version control management at one place was a very strict protocol for commit messages (format and content). The other places let employees just do "whatever". The branching, tagging, committing, rolling back, and merging aspects of things were always ill-defined and almost never used. This seems to leave the version control system in the position of being a fancy file-storage mechanism with a metadata component that never really gets accessed or utilized. (The same was true for unit testing and committing code to the source tree.)

    Unit tests

    There seems to be a prevailing "we must/should do this" mentality in most places I've worked. As a policy or standard operating procedure it never gets implemented, because there is a very ill-defined understanding of what that means, what is going to be tested, and how to do it.

    Summary

    Most places I've been seem to think version control and unit testing are "important" because the trendy trade journals say so, but if there is very little mentorship in using these tools, and no real business policies, then the full power of version control and unit testing is never really expressed. So grunts like myself never gain a complete understanding of the point beyond "it's a good thing" and "we should do it".

    Question

    I was wondering if there are blogs, books, white papers, or online journals about what one could call the business processes, "standard operating procedures", or use cases for version control and unit testing? I want to know more than the trade journals tell me, and to get serious about doing these things.

    PS: @Henrik Hansen had a great comment about the lack of definition in the question. I'm not interested in a specific unit-testing/versioning product or methodology (like XP); my interest is more about workflow at the individual team/developer level than evangelism. This is more or less a by-product of the management situations I've operated under, rather than a lack of reading software engineering books or magazines about development processes. A lot of what I've seen and read is more marketing-oriented material than any specifically enumerated description of "well, this is how our shop operates".

    Read the article

  • Advancing Code Review and Unit Testing Practice

    - by Graviton
    As a team lead managing a group of developers with no experience in (and who see no need for) code review and unit testing, how can you advance the practice of both? How do you create a way for code review and unit testing to fit naturally into the developers' flow? One source of resistance to both is "we are always tight on deadlines, so there is no time for code review and unit testing". Another source of resistance to code review is that we currently don't know how to do it: should we review the code upon every check-in, or review it on a fixed schedule?

    Read the article

  • unit testing variable state explicit tests in dynamically typed languages

    - by kris welsh
    I have heard that a desirable quality of unit tests is that they test for each scenario independently. While writing tests today, I realised that when you compare a variable with another value in a statement like:

        assertEquals("foo", otherObject.stringFoo);

    you are really testing three things:

    1. The variable you are testing exists and is within scope.
    2. The variable you are testing is the expected type.
    3. The variable you are testing's value is what you expect it to be.

    This raises the question of whether you should test each of these explicitly, so that a test failure occurs on the specific line that checks for that problem:

        assertTrue(stringFoo);
        assertTrue(stringFoo.typeOf() == "String");
        assertEquals("foo", otherObject.stringFoo);

    For example, if the variable were an integer instead of a string, the failure would be on line 2, which would give you more feedback on what went wrong. Should you test for this kind of thing explicitly, or am I overthinking this?
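    Worth noting in an answer: in a statically typed language the compiler already guarantees existence and type, so only a null check adds signal there; the three-way split really matters in dynamic languages. A minimal JUnit sketch of splitting out what remains; the Other class is hypothetical:

        import static org.junit.Assert.assertEquals;
        import static org.junit.Assert.assertNotNull;

        import org.junit.Test;

        public class ExplicitAssertionTest {

            // Hypothetical object under test.
            static class Other {
                String stringFoo = "foo";
            }

            @Test
            public void valueIsFoo() {
                Other otherObject = new Other();

                // Fails with a targeted message if the field was never assigned...
                assertNotNull("stringFoo was never assigned", otherObject.stringFoo);
                // ...and only then checks the actual value.
                assertEquals("foo", otherObject.stringFoo);
            }
        }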

    Read the article

  • Testing loses its effectiveness if all programmers don't use them

    - by Jeff O
    Let's assume you are convinced that the extra time spent on unit testing has merit and improves production code. Does that still hold up when not everyone working on the same code uses the tests? It makes me wonder whether fixing tests that not everyone runs is a waste of time. If you correct a test so the new code will pass, you're assuming the new code is correct. The person updating the test had better have a firm understanding of the reasoning behind the code change, so they can decide whether the test or the new code needs to be fixed. This much inconsistency in a team's approach to testing is probably an indication of other problems as well. There is a certain amount of risk that someone else on the team will alter code that is covered by tests. Is this the point where testing becomes counter-productive?

    Read the article

  • Is verification and validation part of testing process?

    - by user970696
    Based on many sources, I do not believe the simple definition that the aim of testing is to find as many bugs as possible; we test to show that the software works, or that it does not. E.g., the following are the goals of testing according to the ISTQB:

    - Determine that (software products) satisfy specified requirements (I think this is verification)
    - Demonstrate that (software products) are fit for purpose (I think this is validation)
    - Detect defects

    I would agree that testing is verification, validation, and defect detection. Is that correct?

    Read the article

  • JUnit Testing in Multithread Application

    - by e2bady
    This is a problem my team and I face in almost all of our projects. Testing certain parts of the application with JUnit is not easy, and you need to start early and stick to it, but that's not the question I'm asking. The actual problem is that with n threads, locking, possible exceptions within the threads, and shared objects, the task is not simply testing the class, but testing it under the endless possible interleavings of those threads.

    To be more precise, let me tell you about the design of one of our applications. When a user makes a request, several threads are started, each of which analyses a part of the data to complete the analysis. These threads run for a certain time depending on the size of the chunk of data to analyse (the chunks are endless and of uncertain quality), or they may fail if the data is insufficient or lacking in quality. After each completes its analysis, it calls a handler, which decides as each thread terminates whether the collected analysis data is sufficient to deliver an answer to the request. All of these analysers share certain parts of the application (some parts because the instances are very big, only a certain number can be loaded into memory, and those instances are reusable; some parts because they hold a standing connection, where connecting takes time, e.g. SQL connections), so locking is very common (done with reentrant locks).

    While the application runs very efficiently and fast, it's not very easy to test under real-world conditions. What we do right now is test each class and its predefined conditions, but there are no automated tests for interlocking and synchronization, which in my opinion is not very good for quality assurance. Given this example, how would you handle testing the threading, interlocking, and synchronization?
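    One tactic that helps is to make the interleaving deterministic inside the test itself, for example by gating all worker threads on latches, so that maximum contention is created on purpose and the assertion only runs after every analyser has finished. A minimal JUnit sketch of the pattern; the counter stands in for one hypothetical analysis step:

        import static org.junit.Assert.assertEquals;

        import java.util.concurrent.CountDownLatch;
        import java.util.concurrent.atomic.AtomicInteger;

        import org.junit.Test;

        public class AnalyzerConcurrencyTest {

            @Test
            public void allAnalyzersReportBeforeHandlerDecides() throws InterruptedException {
                final int threads = 8;
                final CountDownLatch start = new CountDownLatch(1);      // releases all workers at once
                final CountDownLatch done = new CountDownLatch(threads); // handler-style barrier
                final AtomicInteger completed = new AtomicInteger();

                for (int i = 0; i < threads; i++) {
                    new Thread(new Runnable() {
                        public void run() {
                            try {
                                start.await();               // line up, then contend deliberately
                                completed.incrementAndGet(); // stands in for one analysis step
                            } catch (InterruptedException ignored) {
                            } finally {
                                done.countDown();
                            }
                        }
                    }).start();
                }

                start.countDown(); // go
                done.await();      // wait for every analyser, as the handler would

                // Only now is it safe to assert on the shared result.
                assertEquals(threads, completed.get());
            }
        }

    Running such a test repeatedly, or under a stress harness, won't prove the absence of races, but it turns "it usually works" into a reproducible check.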

    Read the article

  • Database unit testing is now available for SSDT

    - by jamiet
    Good news was announced yesterday for those that are using SSDT and want to write unit tests: unit testing functionality is now available. The announcement was made on the SSDT team blog in the post Available Today: SSDT—December 2012. Here are a few thoughts about this news.

    Firstly, there seems to be a general impression that database unit testing was not previously available for SSDT; that's not entirely true. Database unit testing was most recently delivered in Visual Studio 2010, and any database unit tests written therein work perfectly well against SQL Server databases created using SSDT (why wouldn't they? It's just a database, after all). In other words, if you're running SSDT inside Visual Studio 2010 then you could carry on freely writing database unit tests; some of the tight integration between the two (e.g. right-click on an object in SQL Server Object Explorer and choose to create a unit test) was not there, but I've never found that to be a problem. I am currently working on a project that uses SSDT for database development and have been happily running VS2010 database unit tests for a few months now.

    All that being said, delivery of database unit testing for SSDT is now with us, and that is good news, not least because we now have the ability to create unit tests in VS2012. We also get tight integration with SSDT itself, the like of which I mentioned above.

    Having now had a look at the new features, I was delighted to find that one of my big complaints about database unit testing has been solved. As I reported here on Connect, a refactor operation would cause unit test code to get completely mangled; for example, a rename turned one of my queries into this:

        SELECT *
        FROM bi.ProcessMessageLog pml
        INNER JOIN bi.[LogMessageType] lmt
            ON pml.[LogMessageTypeId] = lmt.[LogMessageTypeId]
        WHERE pml.[LogMessage] = 'Ski[LogMessageTypeName]of message: IApplicationCanceled'
        AND lmt.[LogMessageType] = 'Warning';

    which is obviously not ideal. Thankfully that seems to have been solved with this latest release.

    One disappointment about this new release is that the process for running tests as part of a CI build has not changed from the horrendously complicated process required previously. Check out my blog post Setting up database unit testing as part of a Continuous Integration build process [VS2010 DB Tools - Datadude] for instructions on how to do it. In that blog post I describe the process as "fiddly"; I was being kind when I said that! @Jamiet

    Read the article

  • Mock Objects for Testing - Test Automation Engineer Perspective

    - by user9009
    Hello. How often are QA engineers responsible for developing mock objects for unit testing, or is dealing with mock objects strictly a developer's job? The reason I ask is that I'm interested in QA as a career and am learning tools like JUnit, TestNG, and a couple of frameworks. I just want to know up to what level unit testing is done by the developer, and at what point the QA engineer takes over testing for better test coverage. Thanks

    Edit: Based on the answers below, I'm providing more details about the kind of QA I was referring to. I'm interested in test automation rather than the simple QA of record-and-playback scripting. So are test automation engineers responsible for developing frameworks themselves, or do they have a team of developers dedicated to framework development? And yes, I was asking about the usage of mock objects for testing from a test automation engineer's perspective.

    Read the article

  • CppUnit for unit-testing executable files?

    - by hagubear
    I am not sure if anyone has done this. I am trying to do something that is, in general, uncommon: unit-testing executables (Windows) or ELF binaries (Linux). I know that CppUnit provides a good unit-testing facility, but I have never used it for unit testing (I have used UnitTest++). I hear rumours that you can unit-test executables too. Does anyone have experience with this? A relevant post regarding the philosophy of it is here.
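    Whatever the framework, testing a finished binary is necessarily black-box: launch it with known inputs and assert on its exit code and output rather than on internal functions. A sketch of the idea, shown here in Java purely for illustration; the binary path, flag, and expected output are hypothetical:

        import static org.junit.Assert.assertEquals;

        import java.io.BufferedReader;
        import java.io.InputStreamReader;

        import org.junit.Test;

        public class ExecutableBlackBoxTest {

            @Test
            public void printsExpectedGreeting() throws Exception {
                // Launch the executable under test with a known argument.
                Process p = new ProcessBuilder("./myapp", "--greet").start();

                BufferedReader out = new BufferedReader(new InputStreamReader(p.getInputStream()));
                String firstLine = out.readLine();

                assertEquals(0, p.waitFor());     // exit code signals success
                assertEquals("Hello", firstLine); // assert on observable behaviour only
            }
        }

    The same shape works from a CppUnit or UnitTest++ fixture, using popen() or CreateProcess() in place of ProcessBuilder.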

    Read the article

  • Scenario to illustrate how unit testing leads to better design

    - by Cocowalla
    For an internal training session, I'm trying to come up with a simple scenario that illustrates how unit testing leads to better design, by forcing you to think about things like coupling before you start coding. The idea is that I get the participants to code something first without considering unit testing, then we do it again, this time considering unit testing. Hopefully the code produced the second time round will be more decoupled and maintainable. I'm struggling to come up with a scenario that can be coded quickly, yet still demonstrates how unit testing can lead to better overall design.
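    A classic candidate is anything that touches a clock, the file system, or the network: the first version hard-wires the dependency, and the pressure to make it testable forces injection. A minimal sketch of the "after" version, under the assumption the exercise is in Java; all names are hypothetical:

        import java.time.Clock;
        import java.time.LocalTime;

        // Before: LocalTime.now() buried inside the method makes the output
        // depend on the wall clock, so no assertion can be deterministic.
        // After: the clock is injected, so a test can pin the time.
        public class Greeter {
            private final Clock clock;

            public Greeter(Clock clock) {
                this.clock = clock;
            }

            public String greeting() {
                return LocalTime.now(clock).getHour() < 12
                        ? "Good morning"
                        : "Good afternoon";
            }
        }

    A test can pass Clock.fixed(...) to exercise both branches; the point for the session is that writing that test is what forces the constructor parameter, and therefore the decoupling, into existence.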

    Read the article

  • What is the aim of software testing?

    - by user970696
    Having read many books, I see a basic contradiction: some say "the goal of testing is to find bugs", while others say "the goal of testing is to assess the quality of the product", meaning that bugs are a by-product. I would also argue that if testing were aimed primarily at a bug hunt, who would do the actual verification and provide the information that the software is ready? Even Kaner, for example, changed his original definition of the goal of testing from bug hunting to the provision of a quality assessment, but I still cannot see a clear difference. I perceive both as equally important. I can verify software against its specification to make sure it works, and in that case the bugs found are just by-products. But I also run tests just to break things. So which definition is more accurate?

    Read the article

  • Code testing practice

    - by Robin Castlin
    So now I have come to the conclusion, like many others, that having some way of constantly testing your code is good practice, since it means fewer people (colleagues and customers alike) get involved, by simply knowing what's wrong before someone else finds out the hard way. I've heard and read some about unit testing and understand what it's supposed to do. But there are so many different types of bugs: everything from the web browser not sending correct values, JavaScript failing, or a global function messing up a piece of code somewhere, to a change that looked good when testing it but fails in some special case that was hard to anticipate. By finding these errors I learn to rarely repeat them, but there always seem to be new bugs to find and learn from.

    I would guess the obvious practice would be to run every page and its functions a couple of times, witness the result, and repeat this in Firefox, Chrome, and Internet Explorer (and all smartphones, apparently) to make sure everything works as intended. However, this would take quite some time, considering I don't work with patches/versions but do little fixes here and there a couple of times per week. What I would prefer is some kind of page I can just load that tests as many things as possible to make sure the site works as intended: basically, run a lot of cURL requests with POST values and see if I get the expected results. But how would I avoid incrementing the auto-increment IDs of my MySQL tables every time I delete the testing rows? It feels silly to be at ID 1000 with maybe 50 rows in total. If I could build a new project from scratch I would probably implement some smooth way to return a "TRUE" on testing instead of rendering the actual page, but for the moment this solution would have to be bolted onto existing projects.

    My question: What would you recommend as the best way to test my site, to make sure existing functions do their job when I edit the code? Should I make a batch of edits first and then manually test the entire site to make sure it still works? Is there any nice way of testing without "hurting" the ID columns?

    Extra thoughts: Would it be a good idea to associate each of my files with the parts of the site they affect? For instance, if I edit home.php, I would, via that documentation, test whether my homepage still works as intended, since it's the only part of the site the file should affect.
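    The cURL-with-POST idea can be automated as a suite of small HTTP checks; a minimal sketch of one such check is below, where the URL, form field, and expected marker text are all hypothetical. As for the ID worry: pointing the tests at a separate test database (or resetting it from a fixture before each run) keeps test rows, and their auto-increment consumption, out of the production tables entirely.

        import static org.junit.Assert.assertEquals;
        import static org.junit.Assert.assertTrue;

        import java.io.BufferedReader;
        import java.io.InputStreamReader;
        import java.io.OutputStream;
        import java.net.HttpURLConnection;
        import java.net.URL;

        import org.junit.Test;

        public class HomePageSmokeTest {

            @Test
            public void homePageRespondsWithExpectedMarker() throws Exception {
                HttpURLConnection conn =
                        (HttpURLConnection) new URL("http://localhost/home.php").openConnection();
                conn.setRequestMethod("POST");
                conn.setDoOutput(true);

                // POST the same values a browser form would send.
                OutputStream body = conn.getOutputStream();
                body.write("action=start".getBytes("UTF-8"));
                body.close();

                assertEquals(200, conn.getResponseCode());

                // Look for a marker the working page is known to emit.
                BufferedReader in =
                        new BufferedReader(new InputStreamReader(conn.getInputStream()));
                StringBuilder page = new StringBuilder();
                String line;
                while ((line = in.readLine()) != null) {
                    page.append(line);
                }
                in.close();
                assertTrue(page.toString().contains("Welcome"));
            }
        }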

    Read the article

  • Performance Testing Versus Unit Testing

    - by Mystagogue
    I'm reading Osherove's "The Art of Unit Testing", and though I've not yet seen him say anything about performance testing, two thoughts still cross my mind:

    1. Performance tests generally can't be unit tests, because performance tests generally need to run for long periods of time.
    2. Performance tests generally can't be unit tests, because performance issues too often manifest at an integration or system level (or, at least, the logic of a single unit test needed to re-create the performance of the integration environment would be too involved to be a unit test).

    Particularly for the first reason, I doubt it makes sense for performance tests to be handled by a unit testing framework (such as NUnit). My question is: do my findings/leanings correspond with the thoughts of the community?
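    One nuance worth adding: even when a performance test lives in the same framework, it does not have to run with the fast suite. Most frameworks can tag and filter tests; NUnit has a [Category] attribute, and the JUnit 4 equivalent is sketched below (the marker interface and test body are hypothetical):

        import org.junit.Test;
        import org.junit.experimental.categories.Category;

        public class QuoteEnginePerfTest {

            // Marker interface used only to tag slow tests.
            public interface Performance {}

            @Category(Performance.class)
            @Test
            public void sustainsExpectedThroughputForFiveMinutes() throws Exception {
                // long-running measurement loop would go here
            }
        }

    The per-commit build can then exclude the Performance category and a nightly job can include it, which sidesteps the "too long for a unit test" objection without needing a second framework.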

    Read the article

  • Automating GUI testing using C#

    - by ladar
    I am working on a project to build automated GUI testing for a graphical application in .NET. I will use C#, but I am reading around to get some ideas. I don't yet have any idea how to record user input and play it back, so I would welcome your suggestions.
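    Conceptually, record/replay splits into two halves: a hook that logs low-level input events with timestamps, and a player that re-injects them. The replay half is sketched below with java.awt.Robot purely to illustrate the shape (in .NET the analogous injection route would be the Win32 SendInput API or the UI Automation library); the coordinates and delay are hypothetical recorded values:

        import java.awt.Robot;
        import java.awt.event.InputEvent;

        public class ReplayDemo {
            public static void main(String[] args) throws Exception {
                Robot robot = new Robot();

                // A recorded script is just a timed list of primitive events.
                // Here: move to a button's recorded position and click it.
                robot.mouseMove(200, 150);
                robot.mousePress(InputEvent.BUTTON1_MASK);
                robot.mouseRelease(InputEvent.BUTTON1_MASK);
                robot.delay(500); // honour the recorded pause before the next event
            }
        }

    Replaying raw coordinates is brittle when a window moves or the layout changes, which is why most mature tools record against widget identities instead of screen positions.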

    Read the article

  • Intermittent temporary GUI freeze in Ubuntu 11.10

    - by Oscar
    I've been using Ubuntu 11.10 for a month or so. In the last week it has started freezing randomly (every few hours or minutes). I can still move the mouse and switch to other terminals with Ctrl+Alt. I thought this was purely a GUI issue, as I could continue entering commands (mouse clicks and keys) which seem to be processed once the system resumes (generally 30 seconds to a few minutes). I'm using GNOME and Metacity. I can't identify anything in particular that triggers the freezes; saving a file in LibreOffice causes the system to hang. I tried disabling most of the services I've installed (Dropbox, AutoKey, etc.), but that doesn't help. Switching to another terminal and running top, the CPU column is shared equally among all of my processes (i.e. non-root); I have no idea what that signifies. My PC is unusable in this state.

    CPU model name: Pentium(R) Dual-Core CPU E6700 @ 3.20GHz

    An excerpt from top during a freeze (terminal escape-code debris stripped; kernel threads, all idle at 0% CPU, trimmed):

        PID   USER  PR NI VIRT  RES  SHR  S %CPU %MEM   TIME+ COMMAND
        1499  ogga  20  0 404m  32m  13m  R   10  0.8 0:28.19 python
        1501  ogga  20  0 216m  13m 6224  R   10  0.3 0:18.28 ibus-x11
        1679  ogga  20  0 449m  34m  15m  R   10  0.9 0:41.10 gnome-panel
        1710  ogga  20  0 350m  15m 8324  R   10  0.4 0:18.25 bluetooth-apple
        1752  ogga  20  0 458m  37m  13m  R   10  0.9 0:22.62 autokey-gtk
        2081  ogga  20  0 354m  17m 9800  R   10  0.5 0:16.36 update-notifier
        5439  ogga  20  0 640m 104m  38m  R   10  2.6 0:45.17 chromium-browse
        5586  ogga  20  0 381m  42m  21m  R   10  1.1 0:20.17 chromium-browse
        6422  ogga  20  0 529m  59m  18m  R   10  1.5 0:28.15 sublime_text
        1362  ogga  20  0 264m  14m 7884  R    8  0.4 0:18.29 gnome-session
        1673  ogga  20  0 351m  17m 9768  R    8  0.4 0:21.78 metacity
        1709  ogga  20  0 572m  28m  15m  R    8  0.7 0:18.37 nautilus
        1722  ogga  20  0 467m  18m   9m  R    8  0.5 0:18.43 nm-applet
        4213  ogga  20  0 263m  73m  15m  S    2  1.9 3:57.44 skype

    A second capture taken at another time shows the same pattern: every user-owned process sits in state R at roughly 11-13% CPU while root-owned processes are at 0%.

    Thanks for any suggestions...

    Edit: I notice that my virtual machine (Windows 7 64-bit on VirtualBox) continues to respond most of the time during these 'freezes'.

    Edit 2: I suspect this has something to do with UI priority being too low, but I don't know enough about Linux to know how to address that.

    Read the article

  • GUI testing with Instrumentation in Android

    - by Sara
    I want to test my Android application's UI, with key events, button presses, and so on. I've read some documentation saying that Instrumentation can be used for this purpose. Does anyone have experience using Instrumentation for UI testing?
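    For reference, the stock approach is to subclass ActivityInstrumentationTestCase2 and drive the activity through the Instrumentation object, which can inject key events synchronously. A minimal sketch; MainActivity, R.id.ok_button, and the asserted behaviour are hypothetical:

        import android.test.ActivityInstrumentationTestCase2;
        import android.view.KeyEvent;
        import android.widget.Button;

        public class MainActivityTest extends ActivityInstrumentationTestCase2<MainActivity> {

            public MainActivityTest() {
                super(MainActivity.class);
            }

            public void testDpadCenterActivatesOkButton() {
                MainActivity activity = getActivity();
                final Button ok = (Button) activity.findViewById(R.id.ok_button);

                // Give the button focus on the UI thread, then inject a key press.
                activity.runOnUiThread(new Runnable() {
                    public void run() {
                        ok.requestFocus();
                    }
                });
                getInstrumentation().waitForIdleSync();
                getInstrumentation().sendKeyDownUpSync(KeyEvent.KEYCODE_DPAD_CENTER);
                getInstrumentation().waitForIdleSync();

                // Assert on whatever state the press should have changed.
                assertTrue(ok.isEnabled());
            }
        }

    The test runs on a device or emulator via the InstrumentationTestRunner declared in the test project's manifest.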

    Read the article

  • Can it be useful to build an application starting with the GUI?

    - by Grant Palin
    The trend in application design and development seems to be to start with the "guts": the domain, then data access, then infrastructure, and so on. The GUI usually comes later in the process. I wonder if it could ever be useful to build the GUI first. My rationale is that by building at least a prototype GUI, you gain a better idea of what needs to happen behind the scenes, and so are in a better position to start work on the domain and supporting code. I can see an issue with this practice: if the supporting code is not yet written, there won't be much for the GUI layer to actually do. Perhaps building mock objects or throwaway classes (somewhat as is done in unit testing) would provide just enough of a foundation to build the GUI on initially. Might this be a feasible idea for a real project? Maybe we could add GDD (GUI-Driven Development) to the acronym stable...
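    The mock-object idea can be as small as a single interface: the GUI is written against the contract, and the throwaway stub is swapped for the real domain implementation later. A minimal Swing sketch; every name in it is hypothetical:

        import javax.swing.JFrame;
        import javax.swing.JLabel;

        public class GuiFirstDemo {

            // The contract the eventual domain layer must satisfy.
            interface CustomerLookup {
                String nameFor(int id);
            }

            // Throwaway stub: just enough behaviour to get the GUI on screen.
            static class StubLookup implements CustomerLookup {
                public String nameFor(int id) {
                    return "Jane Example (#" + id + ")";
                }
            }

            public static void main(String[] args) {
                CustomerLookup lookup = new StubLookup(); // later: the real implementation

                JFrame frame = new JFrame("Customer");
                frame.add(new JLabel(lookup.nameFor(42)));
                frame.setSize(240, 80);
                frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
                frame.setVisible(true);
            }
        }

    A pleasant side effect is that the interface the GUI forced into existence is exactly the seam the unit tests will later need.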

    Read the article

  • Testing Workflows &ndash; Test-After

    - by Timothy Klenke
    Originally posted on: http://geekswithblogs.net/TimothyK/archive/2014/05/30/testing-workflows-ndash-test-after.aspx

    In this post I'm going to outline a few common methods that can be used to increase the coverage of your test suite. This won't be yet another post on why you should be doing testing; there are plenty of those types of posts already out there. Assuming you know you should be testing, then comes the problem of how you actually fit that into your day job. When the opportunity to automate testing comes, do you take it, or do you even recognize it?

    There are a lot of ways (workflows) to go about creating automated tests, just like there are many workflows to writing a program. When writing a program you can take a top-down approach, where you write the main skeleton of the algorithm and call out to dummy stub functions, or a bottom-up approach, where the low-level functionality is fully implemented before it is quickly wired together at the end. Both approaches are perfectly valid in certain contexts. Each approach you are skilled at applying is another tool in your tool belt. The more vectors of attack you have on a problem, the better. So here is a short, incomplete list of some of the workflows that can be applied to increasing the amount of automation in your testing, and the level of quality in general. Think of each workflow as an opportunity that is available for you to take.

    Test workflows basically fall into two categories: test-first or test-after. Test-first is the best approach. However, this post isn't about the one and only best approach. I want to focus more on the lesser-known, less ideal approaches that still provide an opportunity for adding tests. In this post I'll enumerate some test-after workflows. In my next post I'll cover test-first.

    Bug Reporting

    When someone calls you up or forwards you an email with a vague description of a bug, it's usually standard procedure to create or verify a reproduction plan for the bug via manual testing, and to log that in a bug tracking system. This can be problematic. Reproduction plans, when written down, often skip a step that seemed obvious to the tester at the time, or they might be missing some crucial environment setting. Instead of data entry into a bug tracking system, try opening up the test project and adding a failing unit test that proves the bug (a concrete sketch of such a test follows at the end of this post). The test project guarantees that all aspects of the environment are set up properly and no steps are missing. The language in the test project is much more precise than the English that goes into a bug tracking system. This workflow can easily be extended to enhancement requests as well as bug reports.

    Exploratory Testing

    Exploratory testing comes in when you aren't sure how the system will behave in a new scenario. The scenario wasn't planned for in the initial system requirements and there isn't an existing test for it. By definition the system behaviour is "undefined". So write a new unit test to define that behaviour. Add assertions to the test to confirm your assumptions. The new test becomes part of the living system specification that is kept up to date with the test suite.

    Examples

    This workflow is especially good when developing APIs. When you are finally done with your production API, then comes the job of writing documentation on how to consume it. Good documentation will include code examples. Don't let these code examples merely exist in some accompanying manual; implement them in a test suite. Example tests and documentation do not have to be created after the production API is complete. It is best to write the example code (tests) as you go, just before the production code.

    Smoke Tests

    Every system has a typical use case. This represents the basic, core functionality of the system. If this fails after an upgrade, the end users will be hosed, and they will be scratching their heads as to how an update could possibly be released with this core functionality broken. The tests for this core functionality are referred to as "smoke tests". It is a good idea to have them automated and run with each build, in order to avoid extreme embarrassment and angry customers.

    Coverage Analysis

    Code coverage analysis is a tool that reports how much of the production code base is exercised by the test suite. In Visual Studio this can be found under the Test main menu item. The tool reports a total number for the code coverage, anywhere between 0 and 100%. Coverage analysis shouldn't be used strictly for numbers reporting. Companies shouldn't set minimum coverage targets mandating that all projects have at least 80% or 100% test coverage; these arbitrary requirements just invite gaming of the coverage analysis, which makes the numbers useless. The analysis tool breaks the coverage down by the various classes and methods in projects. Instead of focusing on the total number, drill down into this view and see which classes have high or low coverage. If you are surprised by a low number on a class, that is an opportunity to add tests. When drilling through the classes there will generally be two types of reaction to a surprisingly low test coverage number. The first reaction type is a recognition that there is low-hanging fruit to be picked: there may be classes or methods that aren't being tested which easily could be. The other reaction type is "OMG": this is where you find a critical piece of code that isn't under test. In both cases, go and add the missing tests.

    Test Refactoring

    The general theme of this post up to now has been how to add more and more tests to a test suite. I'll step back from that a bit and remind you that every line of code is a liability. Each line of code has to be read and maintained, which costs money. This is true regardless of whether the code is production code or test code. Remember that the primary goal of the test suite is that it be easy to read, so that people can easily determine the specifications of the system. Make sure that adding more and more tests doesn't interfere with this primary goal.

    Perform code reviews on the test suite as often as on production code. Hold the test code to the same high readability standards as the production code. If the tests are hard to read, change them. Look to remove duplication: duplicate setup code shared between two or more test methods can be moved to a shared function. Entire test methods can be removed if it is found that the scenarios they test are covered by other tests. It's OK to delete a test that isn't pulling its own weight anymore.

    Remember to start refactoring only when all the tests are green, and don't refactor the tests and the production code at the same time. An automated test suite can be thought of as a double-entry bookkeeping system: the unchanging, passing production code serves as the tests for the test suite while you refactor the tests. As with all refactoring, it is best to fit this into your regular work rather than asking for time later to get it done. Fit it into the standard red-green-refactor cycle; the refactor step applies not only to production code but also to the tests, just not at the same time. Perhaps the cycle should be called red-green-refactor production-refactor tests (not quite as catchy).

    That about covers most of the test-after workflows I can think of. In my next post I'll get into test-first workflows.
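    As a concrete illustration of the bug-reporting workflow above, here is what a reproduction logged as a failing test might look like. A minimal JUnit sketch; the ticket number, the parser, and the repro input are all hypothetical, with the unit under test inlined so the example stands alone:

        import static org.junit.Assert.assertEquals;

        import org.junit.Test;

        public class Bug1234RegressionTest {

            // Hypothetical unit under test, inlined to keep the sketch self-contained.
            static class OrderIdParser {
                static String parse(String raw) {
                    return raw.trim(); // the eventual fix; before it, raw was returned unchanged
                }
            }

            // Repro straight from the bug report: trailing whitespace broke equality checks.
            @Test
            public void parseIgnoresTrailingWhitespace() {
                assertEquals("42", OrderIdParser.parse("42 \n"));
            }
        }

    Unlike a prose reproduction plan, this version cannot silently omit a step: if the environment or input is wrong, the test simply fails.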

    Read the article

  • Cloud Based Load Testing Using TF Service &amp; VS 2013

    - by Tarun Arora [Microsoft MVP]
    Originally posted on: http://geekswithblogs.net/TarunArora/archive/2013/06/30/cloud-based-load-testing-using-tf-service-amp-vs-2013.aspx

    One of the new features announced as part of the Visual Studio 2013 Ultimate Preview is 'Cloud Based Load Testing'. In this blog post I'll walk you through:

    - What is Cloud Based Load Testing?
    - How have I been using this feature? A success story!
    - Where can you find more resources on this feature?

    What is Cloud Based Load Testing?

    It goes without saying that performance testing your application not only gives you the confidence that the application will work under heavy levels of stress, but also gives you the ability to test how scalable the architecture of your application is. It is important to know how much is too much for your application! Working with various clients in the industry, I have realized that the biggest barriers to load testing and performance testing adoption are:

    - the high infrastructure and administration cost that comes with this phase of testing
    - the time taken to procure and set up the test infrastructure
    - finding a use for this infrastructure investment after completion of testing

    Is cloud the answer? The offering is 100% Visual Studio compatible, scalable and realistic, lets you start testing in under 2 minutes, is intuitive, charges only for what you need, and lets you use existing on-premise tests in the cloud. There are a lot of vendors out there offering cloud-based load testing; to name a few: LoadStorm, SOASTA, BlazeMeter, Blitz, and others.

    The question you may want to ask is why you should go with Microsoft's cloud-based load test offering. If you are a Microsoft shop or already have investments in Microsoft technologies, you'll see great benefit in the natural integration this offers with existing Microsoft products such as Visual Studio and Windows Azure. For example, your existing web tests authored in Visual Studio 2010 or Visual Studio 2012 will run in the cloud without requiring any modifications whatsoever. Microsoft's cloud test rig also supports API-based testing: for example, if you are building a WPF application which consumes WCF services, you can write unit tests that invoke the WCF service, and these tests can be run on the cloud test rig and loaded with N concurrent users for performance testing. If your assets are already hosted in Azure, possibly in the same data centre as the cloud test rig, your Azure app will not incur a usage cost from the generated traffic, since the traffic comes from the same data centre. The licensing or pricing information on Microsoft's cloud-based load test service is yet to be announced, but I would expect it to be priced attractively to match the market competition.

    The only additional configuration required for running load tests on the service is to set the test run location to "Run tests using Visual Studio Team Foundation Service".

    How have I been using Microsoft's cloud-based load test service?

    I have been part of the Microsoft cloud-based load test service advisory council for the last 7 months. This gave me the opportunity to see the product shape up from concept to working solution. I was also the first person outside of Microsoft to try this offering out, which gave me the opportunity to test real-world applications at various clients using the service and provide real-world feedback to the Microsoft product team. One of the most recent systems I tested using the service was an insurance quote generation engine. This engine is:

    - hosted in Windows Azure
    - expected to get quote requests from across the globe
    - expected to handle 5 million quote requests in a day (it is not clear how this load will be distributed across the day)

    There was no way I could simulate that kind of load from on-premise without standing up additional hardware, but the service allowed me to test my key performance scenarios: simulating the expected load, endurance testing, threshold testing, and testing for latency.

    Simulating expected load: an approach to devising a load pattern

    My approach to devising a load test pattern has been to run the test scenario with 1 user to figure out the response time, then work out how many users are required to reach the target load. For example, generating 1 quote from the quote engine takes 0.5 seconds, i.e. one user generates 2 quotes per second. Now if you do the math:

        quotes generated by 1 user in 24 hours   = ((2 * 60) * 60) * 24 = 172,800
        quotes generated by 30 users in 24 hours = 172,800 * 30         = 5,184,000

    This was a very simple example; if your application requires more concurrent users to test scenarios such as caching, you can devise your own load pattern. Some examples of load test patterns can be found here.

    Endurance testing

    To test for endurance, I loaded the quote generation engine with an expected fixed user load and ran the test for a very long duration (over 48 hours), observing the effect of the long-running test on the Azure infrastructure. Currently the service does not capture metrics from the machine under test. I used Azure diagnostics to begin with, but later started using Cerebrata Azure Diagnostics Manager to capture the metrics of the machine under test.

    Threshold testing

    To figure out how much user load the application could cope with before falling on its belly, I opted to step-load the quote generation engine, incrementing the user load with different variations of incremental users per minute, until the application crashed and forced an IIS reset.

    Testing for latency

    Currently the service does not support generating geographically distributed load. I did, however, deploy the insurance quote generation engine in different Azure data centres and ran the same set of performance tests to measure latency. Because I could compare load test results from different runs by exporting the results to Excel (a feature provided out of the box right from Visual Studio 2010), I could see the difference in response times.

    More resources on the Microsoft cloud-based load test service

    A few important links to get you started:

    - Download Visual Studio Ultimate 2013 Preview
    - Getting started guide for load testing using Team Foundation Service
    - Troubleshooting guide for FAQs and known issues
    - Team Foundation Service forum for questions and support
    - Detailed demo and presentation (link to Tech-Ed session recording)
    - Detailed demo and presentation (link to Build session recording)

    There are a few limits on the usage of the service that you can read about here. If you have any feedback, feel free to share it with the product team via the Visual Studio User Voice forum. I hope you found this useful. Thank you for taking the time out to read this blog post. If you enjoyed the post, remember to subscribe to http://feeds.feedburner.com/TarunArora. Stay tuned!

    Read the article

  • Keeping crosshairs & GUI onscreen - SFML

    - by nihohit
    I read this question, but didn't understand the implementation suggestions for SFML in C#. For example, right now I'm just trying to make sure that the mouse crosshairs stay onscreen constantly. I tried using this code:

        View lastView = this._mainWindow.GetView();
        this._mainWindow.SetView(this._mainWindow.DefaultView);
        this._mainWindow.Draw(crosshair);
        this._mainWindow.SetView(lastView);

    after drawing all the other sprites and before calling this._mainWindow.Display(), where beforehand I set crosshair.Position based on its position relative to the window, not the view. This just keeps the screen locked and prevents screen scrolling. Any suggestions?

    Read the article

  • ltsp: install in chroot with GUI-Installer (lirc)

    - by Roberto
    I am trying to install lirc into the chroot. Installing lirc requires answering two questions during setup. I am on Ubuntu Desktop 12.04 (in VirtualBox). I tried this approach: https://help.ubuntu.com/community/UbuntuLTSP/GuiInstallLocalApp, but I got errors and then the server would hang on reboot. No big surprise; the guide says it is valid for 9.04. Maybe I could create "debconf.seeds" (https://help.ubuntu.com/community/UbuntuLTSP/FatClients), but I don't know how to. Could somebody point me in the right direction? Thanks, Roberto

    Read the article
