Search Results

Search found 65503 results on 2621 pages for 'real application testing'.


  • What is the value of checking in failing unit tests?

    - by Adam W.
    While there are ways of keeping unit tests from being executed, what is the value of checking in failing unit tests? I will use a simple example: case sensitivity. The current code is case sensitive. A valid input into the method is "Cat", and it would return an enum of Animal.Cat. However, the desired functionality of the method should not be case sensitive, so if the method were passed "cat" it could possibly return something like Animal.Null instead of Animal.Cat, and the unit test would fail. Though a simple code change would make this work, a more complex issue may take weeks to fix, while identifying the bug with a unit test could be a much less complex task. The application currently being analyzed has 4 years of code that "works". However, recent discussions regarding unit tests have found flaws in the code. Some flaws just need explicit implementation documentation (e.g., case sensitive or not), or involve code that does not trigger the bug based on how it is currently called. But unit tests can be created that execute specific scenarios, using valid inputs, that will cause the bug to show up. What is the value of checking in unit tests that exercise the bug until someone can get around to fixing the code? Should such a unit test be flagged with ignore, priority, category, etc., to determine whether a build was successful based on the tests executed? Eventually the unit test will be needed anyway, once someone fixes the code. On one hand it shows that identified bugs have not been fixed. On the other, there could be hundreds of failed unit tests showing up in the logs, and weeding out the ones that are expected to fail from failures caused by a new code check-in would be difficult.
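
    For illustration, most test frameworks let a known-bug test be checked in without breaking the build. A minimal JUnit 4 sketch (the Animal enum and parse method here are hypothetical stand-ins for the example above; @Ignore keeps the test out of the pass/fail count while still documenting the bug in code):

        import static org.junit.Assert.assertEquals;

        import org.junit.Ignore;
        import org.junit.Test;

        public class AnimalParserTest {

            enum Animal { Cat, Null }

            // hypothetical stand-in for the case-sensitive production method
            private Animal parse(String name) {
                return "Cat".equals(name) ? Animal.Cat : Animal.Null;
            }

            @Ignore("Known bug: parsing should be case-insensitive; see the issue tracker")
            @Test
            public void parseIsCaseInsensitive() {
                // fails today because the implementation is case sensitive
                assertEquals(Animal.Cat, parse("cat"));
            }
        }

    Ignored tests show up as "skipped" rather than "failed" in most runners, which keeps the known-bug list visible without drowning out genuine regressions.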

    Read the article

  • Test Driven Development (TDD) with Rails

    - by macek
    I am looking for TDD resources that are specific to Rails. I've seen the Rails Guide: The Basics of Creating a Rails Plugin, which really spurred my interest in the topic. I have the Agile Development with Rails book, and I see there's some testing-related information in it. However, it seems like the author takes you through the steps of building the app, then adds testing afterward. That isn't really Test Driven Development. Ideally, I'd like a book on this, but a collection of other tutorials or articles would be great if such a book doesn't exist. Things I'd like to learn (best practices being the primary goal):

    - Unit testing
    - How to utilize fixtures
    - Possibly using existing development data in place of fixtures (what's the community standard here?)
    - Writing tests for plugins
    - Testing with session data (user is logged in; user can access URL /foo/bar)
    - Testing the success of sending email

    Thanks for any help!

    Read the article

  • Sequence Number in testing Spring application with JUnit (Hibernate, Spring MVC)

    - by MBK
    I am testing a DAO in a Spring application:

        @RunWith(SpringJUnit4ClassRunner.class)
        @ContextConfiguration(locations = "classpath:/applicationContext.xml")
        @TransactionConfiguration(transactionManager = "transactionManager", defaultRollback = true)
        @Transactional
        public class CommentDAOImplTest {
            @Autowired
            // testing methods here
        }

    The tests are running well. I am able to add a comment, and since I have the defaultRollback property set, the added comment is deleted automatically. Happy! Now the problem is with the sequence number for my comment. Can I roll back the sequence number in any way? Any suggestions on that? I don't want to mess up the sequence numbers, and the business requires the comment ID to be shown. (I still don't know why.) I know an in-memory DB is an option, but I am guessing the purpose of defaultRollback is to eliminate in-memory DB testing and mocking. (Just my opinion.)
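
    Worth noting: sequences increment outside the transaction, so defaultRollback cannot restore them, and gaps are normal anyway since a sequence only guarantees uniqueness, not continuity. If a test truly must leave the sequence untouched, one hedged workaround (assuming a JdbcTemplate is wired into the test and the sequence is named COMMENT_SEQ, both assumptions) is to reset it once the transaction has ended:

        import org.springframework.beans.factory.annotation.Autowired;
        import org.springframework.jdbc.core.JdbcTemplate;
        import org.springframework.test.context.transaction.AfterTransaction;

        public class CommentDAOImplTest {

            @Autowired
            private JdbcTemplate jdbcTemplate;

            @AfterTransaction
            public void resetCommentSequence() {
                // the rollback does not undo sequence increments, so restart the
                // counter explicitly; COMMENT_SEQ is a hypothetical name and the
                // RESTART syntax varies by database (this is the PostgreSQL form)
                jdbcTemplate.execute("ALTER SEQUENCE COMMENT_SEQ RESTART WITH 1");
            }
        }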

    Read the article

  • Are there any Application Server Frameworks for languages/platforms other than JavaEE and .NET?

    - by Jonas
    I'm a CS student and have little experience of the enterprise software industry. When I'm reading about enterprise software platforms, I mostly read about these two:

    - Java Enterprise Edition (JavaEE)
    - .NET and Windows Communication Foundation

    By "enterprise software platforms" I mean frameworks and application servers with support for the same characteristics that J2EE and WCF have: "[JavaEE] provide[s] functionality to deploy fault-tolerant, distributed, multi-tier Java software, based largely on modular components running on an application server." "WCF is designed in accordance with service oriented architecture principles to support distributed computing where services are consumed by consumers. Clients can consume multiple services and services can be consumed by multiple clients. Services are loosely coupled to each other." Are there any alternatives to these two "enterprise software platforms"? Aren't other programming languages used at any significant rate for this problem area? E.g., why aren't there any popular application servers for C++/Qt?

    Read the article

  • Do you have to call .Save() when modifying an application setting that is bound to a control property

    - by Jordan S
    I am programming in .NET. I have an application setting of type string. On my form I have a textbox, and I bound the Text property of the textbox to my application setting. If I type something in the textbox, it changes the value held in the application setting, but the next time I start the program it goes back to the default value. Do I need to call Properties.Settings.Default.Save(); after the text is entered for the new value to be saved? Shouldn't it do this automatically? Is there a way I can make it do that automatically?

    Read the article

  • Set Context User Principal for Customized Authentication in SignalR

    - by Shaun
    Originally posted on: http://geekswithblogs.net/shaunxu/archive/2014/05/27/set-context-user-principal-for-customized-authentication-in-signalr.aspx

    Currently I'm working on a single page application project which is built on AngularJS and ASP.NET WebAPI. When I needed to implement some features that require real-time communication and push notifications from the server side, I decided to use SignalR. SignalR is a project currently developed by Microsoft to build web-based, real-time communication applications. You can find it here. With a lot of introductions and guides it's not a difficult task to use SignalR with ASP.NET WebAPI and AngularJS. I followed this and this, even though they are based on SignalR 1. But when I tried to implement the authentication for my SignalR I struggled for two days, and finally I got a solution by myself. It might not be the best one, but it actually solved all my problems.

    In many articles it's said that you don't need to worry about authentication in SignalR, since it uses the web application's authentication. For example, if your web application utilizes forms authentication, SignalR will use the user principal your web application's authentication module resolved and check whether the principal exists and is authenticated. But in my solution the ASP.NET WebAPI, which is hosting SignalR as well, utilizes OAuth Bearer authentication, so when the SignalR connection was established the context user principal was empty. So I need to authenticate and pass the principal myself.

    First, I need to create a class derived from "AuthorizeAttribute" that takes responsibility for authentication when a SignalR connection is established and when any method is invoked.

        public class QueryStringBearerAuthorizeAttribute : AuthorizeAttribute
        {
            public override bool AuthorizeHubConnection(HubDescriptor hubDescriptor, IRequest request)
            {
            }

            public override bool AuthorizeHubMethodInvocation(IHubIncomingInvokerContext hubIncomingInvokerContext, bool appliesToMethod)
            {
            }
        }

    The method "AuthorizeHubConnection" will be invoked when any SignalR connection is established. Here I'm going to retrieve the Bearer token from the query string, then try to decrypt it and recover the login user's claims.

        public override bool AuthorizeHubConnection(HubDescriptor hubDescriptor, IRequest request)
        {
            var dataProtectionProvider = new DpapiDataProtectionProvider();
            var secureDataFormat = new TicketDataFormat(dataProtectionProvider.Create());
            // authenticate by using the bearer token in the query string
            var token = request.QueryString.Get(WebApiConfig.AuthenticationType);
            var ticket = secureDataFormat.Unprotect(token);
            if (ticket != null && ticket.Identity != null && ticket.Identity.IsAuthenticated)
            {
                // set the authenticated user principal into the environment
                // so that it can be used later
                request.Environment["server.User"] = new ClaimsPrincipal(ticket.Identity);
                return true;
            }
            else
            {
                return false;
            }
        }

    In the code above I created a "TicketDataFormat" instance, which must be the same as the one I used to generate the Bearer token when the user logged in. Then I retrieve the token from the request query string and unprotect it. If I get a valid ticket whose identity is authenticated, this means it's a valid token, so I pass the user principal into the request's environment property, where it can be used later.

    Since my website is built on AngularJS, the SignalR client is pure JavaScript, and the SignalR JavaScript client does not support setting customized HTTP headers, I have to pass the Bearer token through the request query string. This is not a restriction of SignalR, but of WebSocket: for security reasons, WebSocket doesn't allow the client to set customized HTTP headers from the browser.

    Next, I need to implement the authentication logic in the method "AuthorizeHubMethodInvocation", which will be invoked when any SignalR method is invoked.

        public override bool AuthorizeHubMethodInvocation(IHubIncomingInvokerContext hubIncomingInvokerContext, bool appliesToMethod)
        {
            var connectionId = hubIncomingInvokerContext.Hub.Context.ConnectionId;
            // check the authenticated user principal from the environment
            var environment = hubIncomingInvokerContext.Hub.Context.Request.Environment;
            var principal = environment["server.User"] as ClaimsPrincipal;
            if (principal != null && principal.Identity != null && principal.Identity.IsAuthenticated)
            {
                // create a new HubCallerContext instance with the principal generated
                // from the token and replace the current context so that hubs can
                // retrieve the current user identity
                hubIncomingInvokerContext.Hub.Context = new HubCallerContext(new ServerRequest(environment), connectionId);
                return true;
            }
            else
            {
                return false;
            }
        }

    Since I passed the user principal into the request environment in the previous method, I can simply check whether it exists and is valid. If so, all I need is to pass the principal into the context so that the SignalR hub can use it. Since the "User" property is read-only in "hubIncomingInvokerContext", I have to create a new "ServerRequest" instance with the principal assigned and set it to "hubIncomingInvokerContext.Hub.Context". After that, we can retrieve the principal in my hubs through "Context.User", as below.

        public class DefaultHub : Hub
        {
            public object Initialize(string host, string service, JObject payload)
            {
                var connectionId = Context.ConnectionId;
                ... ...
                var domain = string.Empty;
                var identity = Context.User.Identity as ClaimsIdentity;
                if (identity != null)
                {
                    var claim = identity.FindFirst("Domain");
                    if (claim != null)
                    {
                        domain = claim.Value;
                    }
                }
                ... ...
            }
        }

    Finally, I just need to add my "QueryStringBearerAuthorizeAttribute" into the SignalR pipeline.

        app.Map("/signalr", map =>
            {
                // Setup the CORS middleware to run before SignalR.
                // By default this will allow all origins. You can
                // configure the set of origins and/or http verbs by
                // providing a cors options with a different policy.
                map.UseCors(CorsOptions.AllowAll);
                var hubConfiguration = new HubConfiguration
                {
                    // You can enable JSONP by uncommenting the line below.
                    // JSONP requests are insecure but some older browsers (and some
                    // versions of IE) require JSONP to work cross domain
                    // EnableJSONP = true
                    EnableJavaScriptProxies = false
                };
                // Require authentication for all hubs
                var authorizer = new QueryStringBearerAuthorizeAttribute();
                var module = new AuthorizeModule(authorizer, authorizer);
                GlobalHost.HubPipeline.AddModule(module);
                // Run the SignalR pipeline. We're not using MapSignalR
                // since this branch already runs under the "/signalr" path.
                map.RunSignalR(hubConfiguration);
            });

    On the client side I should pass the Bearer token through the query string before starting the connection, as below.

        self.connection = $.hubConnection(signalrEndpoint);
        self.proxy = self.connection.createHubProxy(hubName);
        self.proxy.on(notifyEventName, function (event, payload) {
            options.handler(event, payload);
        });
        // add the authentication token to the query string
        // (we cannot use HTTP headers since the WebSocket protocol doesn't support them)
        self.connection.qs = { Bearer: AuthService.getToken() };
        // connect to the hub
        self.connection.start();

    Hope this helps, Shaun. All documents and related graphics and code are provided "AS IS" without warranty of any kind. Copyright © Shaun Ziyan Xu. This work is licensed under the Creative Commons License.

    Read the article

  • Android RestTemplate OK on emulator but fails on real device

    - by Hossein
    I'm using Spring RestTemplate and it works perfectly on the emulator, but if I run my app on a real device I get HttpMessageNotWritableException ............ nested exception is java.net.SocketException: Broken pipe. Here are some lines of my code (keep in mind my app works perfectly on the emulator):

        ............
        LoggerUtil.logToFile(TAG, "url is [" + url + "]");
        LoggerUtil.logToFile(TAG, "NetworkInfo - " + connectivityManager.getActiveNetworkInfo());
        ResponseEntity<T> responseEntity = restTemplate.exchange(url, HttpMethod.POST, requestEntity, clazz);
        .............

    I know my device's network works fine because all the other applications on my device are working, and using the device browser I'm able to connect to my server, so my server is available. My server is up and running and my device is able to connect to it, so why do I get java.net.SocketException: Broken pipe? Before I call restTemplate.exchange() I log NetworkInfo and it looks OK:

        -type: WIFI
        -status: CONNECTED/CONNECTED
        -isAvailable: true

    Thanks in advance.

    Update: It is really weird. Even if I use HttpURLConnection, it works perfectly on the emulator, but on a real device I get 400 Bad Request. Here is my code:

        HttpURLConnection con = null;
        try {
            String url = ....;
            LoggerUtil.logToFile(TAG, "url [" + url + "]");
            con = (HttpURLConnection) new URL(url).openConnection();
            con.setRequestMethod("POST");
            con.setRequestProperty("Connection", "Keep-Alive");
            con.setDoInput(true);
            con.setDoOutput(true);
            con.setUseCaches(false);
            con.connect();
            LoggerUtil.logToFile(TAG, "con.getResponseCode is " + con.getResponseCode());
            LoggerUtil.logToFile(TAG, "con.getResponseMessage is " + con.getResponseMessage());
        } catch (Throwable t) {
            LoggerUtil.logToFile(TAG, "*** failed [" + t + "]");
        }

    In the log file I see:

        con.getResponseCode is 400
        con.getResponseMessage is Bad Request
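
    One thing worth ruling out is a request that is only well-formed by accident on the emulator's network path: some carrier and proxy setups reject POSTs without explicit headers. A hedged sketch (assuming a JSON payload; url and body here are placeholders) that sets Content-Type and Accept explicitly on the RestTemplate call:

        import java.util.Collections;

        import org.springframework.http.HttpEntity;
        import org.springframework.http.HttpHeaders;
        import org.springframework.http.HttpMethod;
        import org.springframework.http.MediaType;
        import org.springframework.http.ResponseEntity;
        import org.springframework.web.client.RestTemplate;

        public class PostHelper {
            public static ResponseEntity<String> post(RestTemplate restTemplate, String url, String body) {
                // declare the payload type explicitly so the server (and any proxy
                // in between) does not have to guess what is being sent
                HttpHeaders headers = new HttpHeaders();
                headers.setContentType(MediaType.APPLICATION_JSON);
                headers.setAccept(Collections.singletonList(MediaType.APPLICATION_JSON));
                HttpEntity<String> requestEntity = new HttpEntity<String>(body, headers);
                return restTemplate.exchange(url, HttpMethod.POST, requestEntity, String.class);
            }
        }

    Comparing the exact bytes sent from the emulator and from the device (e.g. against a logging endpoint) would then show whether the 400 comes from a header or body difference.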

    Read the article

  • Flash movies in inactive browser tabs pause or don't execute in real time

    - by ZenBlender
    I'm noticing some unexpected behavior. Some time in the last few months, a change in either Firefox, the Flash player, or both, has made it so that Flash movies in inactive browser tabs no longer execute in real time. They appear to still execute, but only in bursts, and not in a predictable way. This is a problem because I develop a Flash-based (ActionScript 2.0, Flash CS3) multiplayer game that maintains a network connection and allows players to chat, etc. Many of our players complain about Firefox crashing while playing the game. I have noticed it too; not too frequently, but it crashes several times a week. (Firefox crashes; I do not get a message from the Flash player indicating an infinite loop or a problem in my code.) My theory is that this new behavior is causing crashes when there is a lot of activity in my game, leading to lots of unhandled network traffic for my game getting buffered before Firefox/Flash will give it a chance to execute. Maybe this leads to a buffer overflow or missing packets, and as a result, something crashes. At times I will switch back to the tab that is running my game and discover a display bug, which looks as though Flash has simply failed to execute something that it was supposed to. I would assume this new behavior is on purpose, for example to prevent all the Flash-based advertisements in inactive tabs from executing and therefore killing performance. In a quick test on Chrome (5.0.342.9 beta), this "pausing" of Flash seems to be there as well, but somehow it seems much less of a problem. My users have only complained about Firefox crashing, not other browsers.

    My machine: Windows 7 x64, Firefox 3.6.3, Flash Player 10.1.50.426
    My game: triplejack.com

    Any ideas? Ideally I'd like to disable this behavior for my Flash game so it can execute in real time even when in an inactive tab. Thanks for any help!

    Read the article

  • Change the playback rate of a track in real time on Android

    - by android_dev
    Hello, I would like to know if somebody knows a library for changing the playback rate of a track in real time. My idea is to load a track and change its playback rate to half or double. First I tried with MusicPlayer, but it was not possible at all, and then I tried with SoundPool. The problem is that with SoundPool I can't change the rate once the track is loaded. Here is the code I am using (proof of concept):

        float j = 1;

        @Override
        public void onCreate(Bundle savedInstanceState) {
            super.onCreate(savedInstanceState);
            setContentView(R.layout.main);
            Button b = (Button) findViewById(R.id.Button01);
            b.setOnClickListener(new OnClickListener() {
                @Override
                public void onClick(View v) {
                    j = (float) (j + .5);
                }
            });
            AssetFileDescriptor afd;
            try {
                SoundPool sp = new SoundPool(1, AudioManager.STREAM_MUSIC, 0);
                afd = getAssets().openFd("wav/sample.wav");
                int id = sp.load(afd, 1);
                sp.play(id, 1, 1, 1, 0, j);
            } catch (IOException e) {
                // TODO Auto-generated catch block
                e.printStackTrace();
            }
        }

    When I press the button, the playback rate should increase, but that does not happen. Any idea how to change the rate in real time? Thanks in advance.
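
    For what it's worth, SoundPool can change the rate of a stream that is already playing: play() returns a stream ID, and setRate() accepts that ID with a rate between 0.5 and 2.0. A sketch of how the code above could be reworked (note also that load() is asynchronous, so calling play() immediately after load() can silently do nothing; on API 8+ you can wait via setOnLoadCompleteListener):

        // fields, so the click handler can reach them
        private SoundPool sp;
        private int streamId;
        private float j = 1f;

        // inside onCreate, once the sample has finished loading:
        streamId = sp.play(id, 1f, 1f, 1, -1, j); // returns a stream ID, 0 on failure

        Button b = (Button) findViewById(R.id.Button01);
        b.setOnClickListener(new OnClickListener() {
            @Override
            public void onClick(View v) {
                j = j + 0.5f;
                // adjust the live stream; SoundPool clamps the rate to 0.5..2.0
                sp.setRate(streamId, j);
            }
        });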

    Read the article

  • Remove application entry from switch-app menu

    - by bdhar
    What am I trying to achieve?
    I am writing a Windows Forms application in C# (.NET 2.0). The application should behave like this: no form should be visible; just a system tray icon is the entire application. So, I have to hide the form during startup and make a NotifyIcon available in the system tray with a ContextMenuStrip attached to it.

    What have I done so far?
    I have created a Windows application with the default form's properties WindowState = Minimized and ShowInTaskbar = false, and added a NotifyIcon with a ContextMenuStrip attached to it.

    What's happening?
    The application starts as a system tray icon and the form is hidden. So far so good. But when I am working with other applications and switch between them using the Alt-Tab combination in Windows, the application icon appears in the switch-application menu, and when I select my application, the form appears.

    What's expected?
    The application should not be available in the switch-application menu, because the form is empty and has no functionality attached to it. All that is needed is the system tray icon. How do I hide the application entry from the switch-application menu? Thanks.

    Read the article

  • Group vs role (Any real difference?)

    - by Ondrej
    Can anyone tell me what the real difference is between a group and a role? I've been trying to figure this out for some time now, and the more information I read, the more I get the sense that this distinction is brought up just to confuse people and that there is no proper difference: both can do the other one's job. I've always used a group to manage users and their access rights. Recently, I've come across administration software where there is a bunch of users. Each user can be assigned a module (the whole system is split into a few parts called modules, i.e. an Administration module, Survey module, Orders module, and Customer module). On top of that, each module has a list of functionalities that can be allowed or denied for each user. So let's say a user John Smith can access the Orders module and can edit any order, but has not been given the right to delete any of them. If there were more users with the same competencies, I would use a group to manage that: I would aggregate such users into the same group and assign the access rights to the modules and their functions to the group. All users in the same group would have the same access rights. Why call it a group and not a role? I don't know, I just feel it that way. It seems to me that it simply doesn't really matter :] But I would still like to know the real difference. What about you? Any suggestions why this should be called a role rather than a group, or the other way round? Thanks to everyone.

    Read the article

  • mod_rewrite: no access to real files and directories

    - by tshabalala
    Hello. I use mod_rewrite/.htaccess for pretty URLs. I forward all requests to my index.php, like this:

        RewriteRule ^/?([a-zA-Z0-9/-]+)/?$ /index.php [NC,L]

    The index.php then handles the requests. I'm also using this condition/rule to eliminate trailing slashes (or rather rewrite them to the URL without a trailing slash, with a 301 redirect; I'm doing this to avoid duplicate content, and because I like no trailing slashes better):

        RewriteCond %{HTTP_HOST} !^\.localhost$ [NC]
        RewriteRule ^(.+)/$ http://%{HTTP_HOST}/$1 [R=301,L]

    This works well, except that I now get an infinite loop when trying to access a (real) directory: the rewrite rule removes the trailing slash, the server adds it again, and so on. I solved this by setting the DirectorySlash directive to Off:

        DirectorySlash Off

    I don't know how good this solution is; I don't feel too confident about it, to be honest. Anyway, what I'd like to do is completely ignore "real" files and directories, since I don't need them and I only use pretty URLs with "virtual" files/directories anyway. This would allow me to avoid the DirectorySlash workaround/hack too. Is this possible? Thanks!
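
    If the goal really is to route absolutely everything through index.php, one hedged approach is to stop matching on the pretty-URL pattern and instead rewrite every request that is not already the front controller; real files and directories are then never served directly, and the trailing-slash machinery for real directories has nothing to fight with (a sketch; per-directory .htaccess and server-config contexts strip the tested path differently, so test in yours):

        # send every request to the front controller, regardless of whether the
        # path maps to a real file or directory; exclude index.php itself to
        # avoid a rewrite loop
        RewriteCond %{REQUEST_URI} !^/index\.php$
        RewriteRule ^ /index.php [L]

    (The more common pattern is the inverse, skipping the rewrite for real files with RewriteCond %{REQUEST_FILENAME} !-f, but that is exactly what this setup wants to avoid.)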

    Read the article

  • Testing for disk write

    - by Montecristo
    I'm writing an application for storing lots of images (size <5MB) on an ext3 filesystem, and this is what I have for now. After some searching here on Server Fault I have decided on a structure of directories like this:

        000/000/000000001.jpg
        ...
        236/519/236519107.jpg

    This structure will allow me to save up to 1'000'000'000 images, as I'll store a max of 1'000 images in each leaf. I've created it; from a theoretical point of view it seems OK to me (though I've no experience with this), but I want to find out what will happen when there are directories full of files in there. A question about creating this structure: is it better to create it all in one go (takes approx 50 minutes on my PC) or should I create directories as they are needed? From a developer point of view I think the first option is better (no extra waiting time for the user), but from a sysadmin point of view, is this OK? I thought I could act as if the filesystem were already under the running application: I'll make a script that saves images as fast as it can, monitoring the following:

    - How much time does it take for an image to be saved when there is no or little space used?
    - How does this change when the space starts to be used up?
    - How much time does it take for an image to be read from a random leaf? Does this change a lot when there are lots of files?

    Does launching this command:

        sync; echo 3 | sudo tee /proc/sys/vm/drop_caches

    make any sense at all? Is this the only thing I have to do to have a clean start if I want to start over again with my tests? Do you have any suggestions or corrections?
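
    On the drop_caches question: a hedged answer is that it matters only for read tests. It flushes dirty pages (via sync) and then empties the page cache, dentries and inodes, so the next read hits the disk, but it does not undo writes or reset the filesystem; a clean start for write tests means deleting the test files. A minimal timing sketch with dd (the paths reuse the example leaves above; oflag=direct bypasses the page cache so write timings reflect the disk):

        # cold-cache read test: flush dirty pages, drop caches, then read
        sync; echo 3 | sudo tee /proc/sys/vm/drop_caches
        time dd if=236/519/236519107.jpg of=/dev/null bs=4M

        # write test: O_DIRECT keeps the page cache out of the measurement
        time dd if=/dev/zero of=000/000/test.bin bs=4M count=1 oflag=direct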

    Read the article

  • What is causing sudden freezing while running a real-time program?

    - by Trevor Boyd Smith
    So I run a highly intensive (CPU/GPU) real-time program. During normal execution, suddenly everything freezes for 1-4 seconds. I opened "Process Explorer" in the background to help gain insight and maybe identify something. Here is what the CPU/GPU graphs look like when I align them in time: notice the 4 distinct drops in both the CPU and GPU. You can see that usage goes from some positive CPU/GPU level to almost zero. These drops in the graph align with when the real-time program suddenly freezes. How do I find what is causing these sudden drops? NOTE: When you put your mouse over the graph it tells you the time, accurate to the second, for where your cursor is. Maybe this mouse-over feature could be helpful in some way (e.g. what if you had a log of all processes every 100ms). EDIT: The real-time program is a video game, so I can't watch some sort of instrumentation while the video game is running. I need a solution that lets you look back in time somehow to see what was happening when the slowdown occurred. EDIT: Re recording data vs. using a real-time monitor: the Windows Performance Recorder is for some reason not recording what I expect it to record, so I switched to using "perfmon" and then opening its "Resource Monitor". Re setting it up so I can view the real-time monitor: in the video game I set it to spectate and then put the video game in "windowed" mode so that I can view the real-time display that Resource Monitor has. Now that I can get semi-real-time data (only once per second... how do you get more than once per second?), I started looking at the various real-time data readouts. Getting to the cause: I noticed a strong correlation between high disk IO and low CPU usage (which is also when the in-game freezing happens). How do you use Resource Monitor to find out who is doing all this offending disk IO?

    Read the article

  • Application that will identify the percentage of your system disk bandwidth used, on a user-application by user-application basis?

    - by Warren P
    I always (subjectively) feel my computer is far too slow (however fast it is), so I'm always looking for ways to measure and understand what my computer is actually doing that makes it seem "slow" to me. It has been my observation that my software-developer workload is most often disk-bound (waiting for disk I/O) rather than CPU-bound. What has made it worse is that I am using a corporate PC that has in-memory active-scanning anti-virus software that I do not have control over, and also some IT-department-mandated services that seem to suck up a lot of the available hard-disk bandwidth. The best tool I have seen (in Windows 7) is the Resource Monitor, which I usually access from the button in the Task Manager. The disk I/O page, however, seems to report disk activity at a very low level: for example, it shows Volume Shadow Storage flushing information that was obviously written by something else other than VSS itself, and writes to Pagefile.sys that are obviously due to virtual memory faults in some application. What I would like to know is if a utility exists that can add up all direct disk input and output by user-level process, or find the process or service that caused VM or VSS activity. In that way, I hope, you could establish a real idea of how much of your computer's precious disk subsystem bandwidth is attributable to a particular application. Here's a scenario:

    - MyApp.exe writes 100k/s and reads 100k/s directly.
    - VSS ends up writing another 100k/s.
    - Page faults caused inside MyApp.exe cause another 100k/s of writes.

    So the total "cost" of MyApp.exe running, during a period of time (let's say 1 second), is 400k/s, whereas you can only directly observe half of that in Resource Monitor. Is there a smarter disk-I/O-watching piece of software I can use?

    Read the article

  • Code excavations, wishful invocations, perimeters and domain specific unit test frameworks

    - by RoyOsherove
    One of the talks I did at QCon London was about a subject that I've come across fairly recently, when I was building SilverUnit, a "pure" unit test framework for Silverlight objects that depend on the Silverlight runtime to run. It is the concept of "cogs in the machine": when your piece of code needs to run inside a host framework or runtime that you have little or no control over for testability-related matters. Examples of such cogs and machines can be:

    - your custom control running inside the Silverlight runtime in the browser
    - your plug-in running inside an IDE
    - your activity running inside a Windows Workflow
    - your code running inside a Java EE bean
    - your code inheriting from a COM+ (Enterprise Services) component
    - etc.

    Not all of these are necessarily testability problems. The main testability problem usually comes when your code actually inherits from something inside the system. For example, one of the biggest problems with testing objects like Silverlight controls is the way they depend on the Silverlight runtime: they don't implement some Silverlight interface, they don't just call external static methods against the framework runtime that surrounds them, they actually inherit parts of the framework: they all inherit (in this case) from the Silverlight DependencyObject.

    Wrapping it up?
    An inheritance dependency is uniquely challenging to bring under test, because "classic" methods such as wrapping the object under test with a framework wrapper will not work, and the only way to do it manually is to create parallel testable objects that get delegated all the possible actions from the dependencies. In Silverlight's case, that would mean creating your own custom logic class that would be called directly from controls that inherit from Silverlight, and would be tested independently of these controls. The pro side is that you get the benefit of understanding the "contract" and the "roles" your system plays against your logic, but unfortunately, more often than not, it can be very tedious to create, and may sometimes feel unnecessary or like code duplication.

    About perimeters
    A perimeter is that invisible line that you draw around your pieces of logic during a test, that separates the code under test from any dependencies that it uses. Most of the time, a test perimeter around an object will be the list of seams (dependencies that can be replaced, such as interfaces, virtual methods, etc.) that are actually replaced for that test or for all the tests.

    Role-based perimeters
    In the case of creating a wrapper around an object, one really creates a "role-based" perimeter around the logic that is being tested: that wrapper takes on roles that are required by the code under test, and also communicates with the host system to implement those roles and provide any inputs to the logic under test. In the image below, we have the code we want to test represented as a star. No perimeter is drawn yet (we haven't wrapped it up in anything yet). The next image shows what happens when you wrap your logic with a role-based wrapper: you get a role-based perimeter anywhere your code interacts with the system. There's another way to bring that code under test: using isolation frameworks like Typemock, Rhino Mocks and Moq (but if your code inherits from the system, Typemock might be the only way to isolate the code from the system interaction).

    Ad-hoc isolation perimeters
    The image below shows what I call an ad-hoc perimeter, which might be vastly different between different tests. This perimeter's surface is much smaller, because for that specific test, that is all the "change" that is required to the host system behavior. The third way of isolating the code from the host system is the main "meat" of this post:

    Subterranean perimeters
    Subterranean perimeters are deep-rooted perimeters: "always on" seams that can lie very deep in the heart of the host system, where they are fully invisible even to the test itself, not just to the code under test. Because they lie deep inside a system you can't control, the only way I've found to control them is with runtime (not compile-time) interception of method calls on the system. One way to get such abilities is by using aspect-oriented frameworks; for example, in SilverUnit, I've used the CThru AOP framework, based on Typemock hooks and CLR profilers, to intercept such system-level method calls and effectively turn them into seams that lie deep down at the heart of the Silverlight runtime. The image below depicts an example of what such a perimeter could look like. As you can see, the actual seams can be very far away from the actual code under test, and as you'll discover, that's actually a very good thing. Here is only a partial list of examples of such deep-rooted seams:

    - disabling the constructor of a base class five levels below the code under test (this.base.base.base.base)
    - faking static methods of a type that's being called several levels down the stack: method x() calls y() calls z() calls SomeType.StaticMethod()
    - replacing an async mechanism with a synchronous one (for example, replacing all timers with your own timer behavior that always ticks immediately upon calls to start(), on the same caller thread)
    - replacing event mechanisms with your own event mechanism (to allow "firing" system events)
    - changing the way the system saves information with your own saving behavior (in SilverUnit, I replaced all DependencyProperty set and get calls with calls to an in-memory value store, instead of using the one built into Silverlight, which threw exceptions without a browser)

    Several questions could jump in:

    - How do you know what to fake? (How do you discover the perimeter?)
    - How do you fake it?
    - Wouldn't it be problematic to fake something you don't own? It might change in the future.

    How do you discover the perimeter to fake?
    To discover a perimeter, all you have to do is start with a wishful invocation. A wishful invocation is the act of trying to invoke a method (or even just create an instance) of an object using "regular" test code. You invoke the thing that you'd like to do in a real unit test, to see what happens:

    - Can I even create an instance of this object without getting an exception?
    - Can I invoke this method on that instance without getting an exception?
    - Can I verify that some call into the system happened?

    You make the invocation, get an exception (because there is a dependency) and look at the stack trace. Choose a location in the stack trace and disable it. Then try the invocation again. If you don't get an exception, the perimeter is good for that invocation, so you can move on to trying out other methods on that object. In a future post I will show the process using CThru, and how you end up with something close to a domain-specific test framework after you're done creating the perimeter you need.

    Read the article

  • How do I test how customers use my Cocoa application?

    - by John Gallagher
    I'm interested in finding out how customers use features in my Cocoa application. I want to build up statistics on which features people use and how they use them, so that I can measure the value of features I'm implementing. This feedback of course will be off by default and anonymous. Does anyone know of any frameworks that have been developed that can achieve this without me having to write stuff from scratch?

    Read the article

  • Software to manage "application profiles", which services are running, etc., on Windows?

    - by Lasse V. Karlsen
    I am looking for a particular type of program. I use my computers in various "modes", like gaming, programming, just plain surfing, etc. What I'd like to find is a program that can help me manage these modes. For instance, while programming I might use SQL Server, but while gaming I don't want those services running; instead, perhaps I'd like Steam to run. Basically, the program type I'm looking for is a visual program that allows me to quickly switch modes, and when I do, the program would start and stop the necessary services and applications in order to leave one mode and enter another. I've looked at programs related to startup management, and I haven't found one that lets me do what I want. At the moment I have batch files, but they're not very good at conveying problems or other things; I'd like a more visual program.

    Read the article

  • Deploy EAR with Websphere Application Server wsadmin.bat without losing security role-mapping?

    - by Tommy
    We're running CI against our WAS with wsadmin.bat. The applications are updated with this command:

        $AdminApp update ${projectName}EAR app {-operation update -update.ignore.new -contents {${artifactsDir}/${projectName}-${buildVersion}.ear}}

    This causes all the "Security role to user/group mapping" settings to reset, even though all the other settings are preserved by -update.ignore.new. Anyone know how to fix this?
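
    One hedged workaround, if the update itself cannot be made to keep the mappings, is to re-apply them right after the update with the MapRolesToUsers task of $AdminApp edit. This is a sketch from memory: the role name, the Everyone/AllAuthenticated flags, and the user/group values below are placeholders, and the exact tuple format should be verified against the AdminApp documentation for your WAS version:

        $AdminApp edit ${projectName}EAR {-MapRolesToUsers {
            {"AdminRole" No No "deployUser" "deployGroup"}
        }}
        $AdminConfig save

    Scripting the mappings alongside the update at least makes them reproducible, which is what a CI pipeline needs anyway.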

    Read the article

  • Apache keeps resetting while testing on localhost...

    - by Scott
    Hello everyone. I'm getting errors while testing web pages on localhost. I'm running Windows 7 64-bit. I'm not using WAMP or XAMPP. This is what the error.log tells me (the two httpd.exe lines are the ones in question):

        [Sat Mar 06 05:10:55 2010] [notice] Apache/2.2.14 (Win32) PHP/5.2.13 configured -- resuming normal operations
        [Sat Mar 06 05:10:55 2010] [notice] Server built: Sep 28 2009 22:41:08
        [Sat Mar 06 05:10:55 2010] [notice] Parent: Created child process 6588
        httpd.exe: Could not reliably determine the server's fully qualified domain name, using 192.168.2.2 for ServerName
        httpd.exe: Could not reliably determine the server's fully qualified domain name, using 192.168.2.2 for ServerName
        [Sat Mar 06 05:10:55 2010] [notice] Child 6588: Child process is running
        [Sat Mar 06 05:10:55 2010] [notice] Child 6588: Acquired the start mutex.
        [Sat Mar 06 05:10:55 2010] [notice] Child 6588: Starting 1000 worker threads.
        [Sat Mar 06 05:10:55 2010] [notice] Child 6588: Starting thread to listen on port 80.

    Any input would be greatly appreciated. Thanks.
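
    For what it's worth, the two httpd.exe lines are a well-known warning, not a crash: Apache could not work out a fully qualified domain name and fell back to the machine's IP. The usual fix is to set ServerName explicitly in httpd.conf; everything else in this log is a normal startup notice:

        # httpd.conf - give Apache an explicit name so it stops guessing from the IP
        ServerName localhost:80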

    Read the article
