Search Results

Search found 10206 results on 409 pages for 'tooling and testing'.

Page 41/409 | < Previous Page | 37 38 39 40 41 42 43 44 45 46 47 48  | Next Page >

  • MEF, IServiceProvider and Testing Visual Studio Extensions

    - by Daniel Cazzulino
    In the latest and greatest version of Visual Studio, MEF plays a critical role, one that makes extending VS much more fun than it ever was. So typically, you just [Export] something, and then someone [Import]s it and that's it. MEF in all its glory kicks in and gets all your dependencies satisfied (a minimal sketch of that pattern follows this entry). Cool, you say, so let's now import ITextTemplating and have some T4-based codegen going! Ah, if only it were that easy. Turns out that by default, none of the VS built-in services are exposed to MEF, apparently because there wasn't enough time to analyze the lifetime, initialization, dependencies, etc. for each one before launch, which makes perfect sense. You don't want to blindly export everything just in case. There's also the whole VS package initialization thing, which in this version of VS is not so transparently integrated with the MEF publishing side (i.e. a MEF export from a package can get instantiated before its owning package, and in fact, the package can remain unloaded forever while the export continues to be visible to anyone).

    Read the article
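
    For readers unfamiliar with the MEF attributes mentioned above, here is a minimal sketch of the [Export]/[Import] pattern. The IGreetingService contract and the classes around it are hypothetical, purely for illustration; they are not taken from the article.

        using System.ComponentModel.Composition;

        // Hypothetical service contract and implementation. MEF discovers the
        // [Export] and satisfies any matching [Import] during composition.
        public interface IGreetingService
        {
            string Greet(string name);
        }

        [Export(typeof(IGreetingService))]
        public class GreetingService : IGreetingService
        {
            public string Greet(string name)
            {
                return "Hello, " + name;
            }
        }

        public class GreetingConsumer
        {
            // MEF injects the exported implementation here when the part is composed.
            [Import]
            public IGreetingService Greeting { get; set; }
        }

    The article's point is that VS built-in services such as ITextTemplating are not exported this way by default, so a plain [Import] of them will not be satisfied.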

  • Returning JsonResult From ASP.NET MVC 2.0 Controller and Unit Testing

    This post will show how to return a simple Json result from an ASP.NET MVC 2.0 web project. It will show how to test that result inside a unit test and essentially pick apart the Json, just like a JavaScript (or other client) would do. It seems like it should be very simple (and indeed, [...]). An illustrative sketch of the scenario follows this entry.

    Read the article
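
    As an illustration of the scenario described in this entry (not code from the article), here is a sketch of an ASP.NET MVC controller action returning a JsonResult and a unit test that picks the result apart. The PersonController type is hypothetical, and NUnit is used only because it is a common choice; the article may use a different test framework.

        using System.Web.Mvc;
        using NUnit.Framework;

        // Hypothetical controller returning an anonymous object serialized as JSON.
        public class PersonController : Controller
        {
            public JsonResult GetPerson()
            {
                return Json(new { Name = "Ada", Age = 36 }, JsonRequestBehavior.AllowGet);
            }
        }

        [TestFixture]
        public class PersonControllerTests
        {
            [Test]
            public void GetPerson_ReturnsExpectedName()
            {
                var controller = new PersonController();

                JsonResult result = controller.GetPerson();

                // JsonResult.Data holds the object before it is serialized, so the
                // test can pick it apart with reflection, much as a JavaScript
                // client would inspect the parsed JSON.
                object data = result.Data;
                object name = data.GetType().GetProperty("Name").GetValue(data, null);

                Assert.AreEqual("Ada", name);
            }
        }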

  • QA & Testing with UPK

    - by dan.gallo(at)oracle.com
    Most customers know that UPK produces both Word- and Excel-based test scripts for UAT. Did you know that you can also use UPK for QA review and bug tracking? To use UPK for QA, create content and assign it appropriately to authorized reviewers. Then have them open the Developer, use customized views to quickly find the content assigned to them, and check out the topics. They can then use the topic editor to review the content and provide comments right in the bubbles, or use explanation frames. QA-ing content this way is easier than publishing and sending out .tpcs or docs for people to review. How about UPK for bug tracking? The hardest part about fixing bugs in software is reproducing the error! When you use UPK for bug tracking, it captures the exact steps the user took that gave them the error. Development can then easily walk through the process in a simulated environment to see what might have caused it, they have a documented procedure for what generated the error, and they are able to communicate better with the LOB. They can also update or attach the simulation/documentation to any defect management software, like Bugzilla or something similar - all thanks to UPK.

    Read the article

  • Long delays in Unity3D substance generation

    - by Josh Buhler
    Currently working on an iOS/Android project in Unity3D, and we're seeing some incredibly long times for generating substances between testing runs. We can run the game, but once we shut down the playback, Unity begins to re-import all of the substances built using Substance Designer. As we've got a lot of these in our game, it's starting to lead to 5-minute delays between testing runs just to test a small change. Any suggestions or parameters we should check that could prevent Unity from needing to regenerate these substances every time? Shouldn't it be caching these things somewhere?

    Read the article

  • How do I explain the importance of NUnit test cases to my colleagues [duplicate]

    - by JNL
    This question already has an answer here: How to explain the value of unit testing (6 answers). I am currently working in software development on applications involving a lot of mathematical calculations. As a result, there are a lot of test cases that we need to consider. We do not have any NUnit test cases at the moment, and I am wondering how I should present the advantages of introducing NUnit testing to my colleagues and my boss (a small example of what such a test could look like follows this entry). I am pretty sure it would be of great help for our team. Any help regarding this will be highly appreciated.

    Read the article
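
    One concrete way to make the case is to show how many input combinations a single parameterised NUnit test can cover. The Calculator class below is a hypothetical stand-in for the kind of mathematical code described in the question; it is a sketch, not anything from the original post.

        using NUnit.Framework;

        // Hypothetical calculation code standing in for the real application logic.
        public static class Calculator
        {
            public static double CompoundInterest(double principal, double rate, int years)
            {
                return principal * System.Math.Pow(1.0 + rate, years);
            }
        }

        [TestFixture]
        public class CalculatorTests
        {
            // One test method covers many input combinations, which is where
            // NUnit pays off for calculation-heavy code.
            [TestCase(1000.0, 0.05, 1, 1050.0)]
            [TestCase(1000.0, 0.05, 2, 1102.5)]
            [TestCase(1000.0, 0.00, 10, 1000.0)]
            public void CompoundInterest_ReturnsExpectedValue(double principal, double rate, int years, double expected)
            {
                double actual = Calculator.CompoundInterest(principal, rate, years);

                Assert.AreEqual(expected, actual, 0.0001);
            }
        }

    Each [TestCase] row is a cheap, repeatable check that a colleague can run after any change to the formula, which is usually a more persuasive argument than a description of the tooling.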

  • Complete RESTful API debugging/testing tool

    - by vartec
    I'm looking for the most complete tool, preferably a portable GUI or browser plugin, to test a RESTful API. What I need is: GET/POST/DELETE/PUT support; multiple file uploads as fields (multipart/form-data); file uploads as the request body. Extra points for: the ability to save multiple configurations and use them to pre-fill parameters; OAuth support; nice JSON response formatting. Currently I'm using 3 tools: the Chrome REST Console extension — my favorite, very nicely done and it has OAuth, but it cannot send a file as the body of the request and cannot send multiple files; the Firefox Poster add-on — quite nice, but it's missing support for files as POST field parameters and also cannot send multiple files; and cURL — it can do anything, but it's quite tedious to use from the command line. (A sketch showing the difference between files as fields and a file as the body follows this entry.)

    Read the article
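
    For readers unsure how "multiple file uploads as fields" differs from "file uploads as body", here is a hedged C# sketch using HttpClient (not one of the tools discussed) that shows both request shapes. The URL, field name, and file paths are hypothetical.

        using System.IO;
        using System.Net.Http;
        using System.Threading.Tasks;

        public static class UploadSketch
        {
            // Several files sent as named fields in a single multipart/form-data request.
            public static async Task PostFilesAsFieldsAsync(string url, params string[] paths)
            {
                using (var client = new HttpClient())
                using (var form = new MultipartFormDataContent())
                {
                    foreach (var path in paths)
                    {
                        form.Add(new StreamContent(File.OpenRead(path)), "files", Path.GetFileName(path));
                    }
                    await client.PostAsync(url, form);
                }
            }

            // A single file streamed as the raw body of the request.
            public static async Task PostFileAsBodyAsync(string url, string path)
            {
                using (var client = new HttpClient())
                using (var body = new StreamContent(File.OpenRead(path)))
                {
                    await client.PostAsync(url, body);
                }
            }
        }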

  • Testing Reference Data Mappings

    - by Michael Stephenson
    Background: Mapping reference data is one of the common scenarios in BizTalk development, and it's usually a bit of a pain when you need to manage a lot of reference data, whether it be through the BizTalk cross-referencing features or some kind of custom solution. I have seen many cases where only a couple of the mapping conditions are ever tested. Approach: As usual, I like to see these things tested in isolation before you start using them in your BizTalk maps, so you know your mapping functions are working as expected. This approach can be used for almost all of your reference data type mapping functions, where you can take advantage of MSTest's data-driven tests to test lots of conditions without having to write millions of tests (a sketch of such a data-driven test follows this entry). Walk Through: Rather than go into the details here, I'm going to call out to one of my colleagues, who wrote a nice little walk-through about using data-driven tests a while back. Check out Callum's blog: http://callumhibbert.blogspot.com/2009/07/data-driven-tests-with-mstest.html

    Read the article
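
    For illustration, here is a hedged sketch of the kind of MSTest data-driven test the post refers to. The CountryCodeMapper class, the CSV file name, and its Source/Expected columns are all hypothetical; the real mapping function would be whatever wraps the BizTalk cross-referencing lookup.

        using Microsoft.VisualStudio.TestTools.UnitTesting;

        // Hypothetical mapping function standing in for the reference data lookup under test.
        public static class CountryCodeMapper
        {
            public static string Map(string source)
            {
                if (source == "UK") return "GBR";
                if (source == "US") return "USA";
                return "UNKNOWN";
            }
        }

        [TestClass]
        public class CountryCodeMapperTests
        {
            // Populated by the MSTest runner; DataRow exposes the current CSV row.
            public TestContext TestContext { get; set; }

            // Each row of ReferenceDataMappings.csv supplies one Source/Expected pair,
            // so many mapping conditions are covered by a single test method.
            [TestMethod]
            [DeploymentItem("ReferenceDataMappings.csv")]
            [DataSource("Microsoft.VisualStudio.TestTools.DataSource.CSV",
                        "|DataDirectory|\\ReferenceDataMappings.csv",
                        "ReferenceDataMappings#csv",
                        DataAccessMethod.Sequential)]
            public void Map_ReturnsExpectedTargetValue()
            {
                string source = TestContext.DataRow["Source"].ToString();
                string expected = TestContext.DataRow["Expected"].ToString();

                Assert.AreEqual(expected, CountryCodeMapper.Map(source));
            }
        }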

  • How do you handle measuring Code Coverage in JavaScript

    - by Dancrumb
    In order to measure code coverage for JavaScript unit tests, one needs to instrument the code, run the tests, and then perform post-processing. My concern is that, as a result, you are unit testing code that will never be run in production. Since JavaScript isn't compiled, what you test should be precisely what you execute. So here's my question: how do you handle this? One thought I had was to run the unit tests on the production code and use that for my pass/fail result. I would then create a shadow of my production code with instrumentation and run my unit tests again; this would give me my code coverage stats. Has anyone come across a method that is a little more graceful than this?

    Read the article

  • Rebuilding CoasterBuzz, Part IV: Dependency injection, it's what's for breakfast

    - by Jeff
    (Repost from my personal blog.) This is another post in a series about rebuilding one of my Web sites, which has been around for 12 years. I hope to relaunch soon. More: Part I: Evolution, and death to WCF; Part II: Hot data objects; Part III: The architecture using the "Web stack of love".
    If anything generally good for the craft has come out of the rise of ASP.NET MVC, it's that people are more likely to use dependency injection, and loosely couple the pieces parts of their applications. A lot of the emphasis on coding this way has been to facilitate unit testing, and that's awesome. Unit testing makes me feel a lot less like a hack, and a lot more confident in what I'm doing.
    Dependency injection is pretty straightforward. It says, "Given an instance of this class, I need instances of other classes, defined not by their concrete implementations, but their interfaces." Probably the first place a developer exercises this is when having a class talk to some kind of data repository. For a very simple example, pretend the FooService has to get some Foo. It looks like this:

        public class FooService
        {
            public FooService(IFooRepository fooRepo)
            {
                _fooRepo = fooRepo;
            }

            private readonly IFooRepository _fooRepo;

            public Foo GetMeFoo()
            {
                return _fooRepo.FooFromDatabase();
            }
        }

    When we need the FooService, we ask the dependency container to get it for us. It says, "You'll need an IFooRepository in that, so let me see what that's mapped to, and put it in there for you." Why is this good for you? It's good because your FooService doesn't know or care about how you get some foo. You can stub out what the methods and properties on a fake IFooRepository might return, and test just the FooService. I don't want to get too far into unit testing, but it's the most commonly cited reason to use DI containers in MVC.
    What I wanted to mention is how there's another benefit in a project like mine, where I have to glue together a bunch of stuff. For example, when I have someone sign up for a new account on CoasterBuzz, I'm actually using POP Forums' new account mailer, which composes a bunch of text that includes a link to verify your account. The thing is, I want to use custom text and some other logic that's specific to CoasterBuzz. To accomplish this, I make a new class that inherits from the forum's NewAccountMailer, and override some stuff. Easy enough. Then I use Ninject, the DI container I'm using, to unbind the forum's implementation and substitute my own. Ninject uses something called a NinjectModule to bind interfaces to concrete implementations (a sketch of such a module follows this entry). The forum has its own module, and then the CoasterBuzz module is loaded second. The CB module has two lines of code to swap out the mailer implementation:

        Unbind<PopForums.Email.INewAccountMailer>();
        Bind<PopForums.Email.INewAccountMailer>().To<CbNewAccountMailer>();

    Piece of cake! Now, when code asks the DI container for an INewAccountMailer, it gets my custom implementation instead. This is a lot easier to deal with than some of the alternatives. I could do some copy-paste, but then I'm not using well-tested code from the forum. I could write stuff from scratch, but then I'm throwing away a bunch of logic I've already written (in this case, stuff around e-mail, e-mail settings, mail delivery failures). There are other places where the DI container comes in handy. For example, CoasterBuzz does a number of custom things with user profiles, and special content for paid members. It uses the forum as the core piece for managing users, so I can ask the container to get me instances of classes that do user lookups, for example, and have zero care about how the forum handles database calls, configuration, etc. What a great world to live in, compared to ten years ago. Sure, the primary interest in DI is around the "separation of concerns" and facilitating unit testing, but as your library grows and you use more open source, it starts to be the glue that pulls everything together.

    Read the article
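
    For context, here is a sketch of the kind of NinjectModule described above. The CoasterBuzzModule class name and shape are illustrative assumptions; only the Unbind/Bind pair is quoted from the post.

        using Ninject.Modules;

        public class CoasterBuzzModule : NinjectModule
        {
            public override void Load()
            {
                // Drop the forum's default mailer and substitute the CoasterBuzz-specific one.
                Unbind<PopForums.Email.INewAccountMailer>();
                Bind<PopForums.Email.INewAccountMailer>().To<CbNewAccountMailer>();
            }
        }

    Because the forum's own module is loaded first and this one second, the container hands out CbNewAccountMailer whenever an INewAccountMailer is requested.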

  • Has test driven development (TDD) actually benefited a real world project?

    - by James
    I am not new to coding. I have been coding (seriously) for over 15 years now. I have always had some testing for my code. However, over the last few months I have been learning test-driven design/development (TDD) using Ruby on Rails. So far, I'm not seeing the benefit. I see some benefit to writing tests for some things, but very few. And while I like the idea of writing the test first, I find I spend substantially more time trying to debug my tests to get them to say what I really mean than I do debugging actual code. This is probably because the test code is often substantially more complicated than the code it tests. I hope this is just inexperience with the available tools (RSpec in this case). I must say though, at this point, the level of frustration mixed with the disappointing lack of performance is beyond unacceptable. So far, the only value I'm seeing from TDD is a growing library of RSpec files that serve as templates for other projects/files. Which is not much more useful, maybe less useful, than the actual project code files. In reading the available literature, I notice that TDD seems to be a massive time sink up front, but pays off in the end. I'm just wondering, are there any real-world examples? Does this massive frustration ever pay off in the real world? I really hope I did not miss this question somewhere else on here. I searched, but all the questions/answers are several years old at this point. It was a rare occasion when I found a developer who would say anything bad about TDD, which is why I have spent as much time on this as I have. However, I noticed that nobody seems to point to specific real-world examples. I did read one answer that said the guy debugging the code in 2011 would thank you for having a complete unit testing suite (I think that comment was made in 2008). So, I'm just wondering, after all these years, do we finally have any examples showing the payoff is real? Has anybody actually inherited or gone back to code that was designed/developed with TDD and has a complete set of unit tests and actually felt a payoff? Or did you find that you were spending so much time trying to figure out what the test was testing (and why it was important) that you just tossed out the whole mess and dug into the code?

    Read the article

  • Non-public site for testing on shared-hosting site

    - by ptpaterson
    Is it possible, as a developer using a shared hosting site such as Bluehost, HostGator, and the like, to view your site without making it public? Or do the files you upload always go live immediately? Is the best way to test a site (if using shared hosting) to just set up an Apache/MySQL/PHP stack on my machine? I am considering putting together a site with shared hosting, and trying to see what all my options are. Thanks.

    Read the article

  • Docker vs ESXi for Startup Projects - Deploying Code for Dev Testing

    - by JasonG
    Why hello there little programmer dude! I have a question for you and all of your experience and knowledge. I have an ESXi whitebox that I built, which is an 8 dude that sits in the corner. I made a mistake recently and took the key that had ESXi on it, formatted it, and used it for something else. No big deal, because the last project I worked on had stalled out. I'm about to pick up another project, and now I need to spin up a whole bunch of stuff for CI, QA + DB, a ticket tracker, wikis, etc. I've been hearing a lot about Docker recently, and as this is just a consumer-grade machine, I'm wondering if it may make more sense for me to use Docker on CoreOS and then put everything there - Bamboo or Hudson, JIRA, Confluence, Postgres for the tools to use, then a QA env. I can't really seem to find any documents that directly compare traditional VM infrastructure vs Docker solutions, and I'm wondering if it is fair to compare them. Is there any reason why CoreOS w/ containers would be a strictly worse solution? Or do you have any insight into why I may want to stick with ESXi? I've looked on multiple occasions and can't find a good reason not to. I'm not going to run a production env on the server, so I don't need HA for things like security or OS updates, where ESXi would allow me to restart one VM at a time. I can just shut the thing down and bring it back up if I need a reboot, no problem. So what's up with this container stuff? Is it a fair replacement for ESXi? I'm guessing the Atlassian products would run much better and my RAM would go a lot farther using Docker. Probably the CPU would run much cooler too, and my expensive HDD space would be better utilized.

    Read the article

  • News From EAP Testing

    - by Fatherjack
    There is a phrase that goes something like “Watch the pennies and the pounds/dollars will take care of themselves”, meaning that if you pay attention to the small things then the larger things are going to fare well too. I am lucky enough to be a Friend of Red Gate, and once in a while I get told about new features in their tools and get a test copy of the software to trial. I got one of those emails a week or so ago, and I have been exploring the SQL Prompt 6 EAP since then. One really useful feature of long standing in SQL Prompt is the idea of a code snippet that is automatically pasted into the SSMS editor when you type a few key letters. For example, I can type “ssf” and then press the Tab key and the text is expanded to SELECT * FROM. There are lots of these combinations and it is possible to create your own really easily. To create your own, you use the Snippet Manager interface to define the shortcut letters and the code that you want to have put in their place. Let’s look at an example. Say I am writing a blog about something and want the demo code to create a temporary table. It might look like this: the first time you run the code everything is fine, a lovely set of dates fills the results grid, but run it a second time and this happens. Yep, we didn’t destroy the temporary table, so the CREATE statement fails when it finds the table already exists. No matter, I have a snippet created that takes care of this. Nothing too technical here, but you will see that in the Code section there is $CURSOR$; this isn’t a TSQL keyword but a marker for SQL Prompt to place the cursor in that position when the Code is pasted into the SSMS Editor. I just place my cursor above the CREATE statement and type “ifobj” – the shortcut for my code to DROP the temporary table – which has been defined in the Snippet Manager as below. This means I am right away ready to type the name of the offending table. Pretty neat, and it’s been very useful in saving me lots of time over many years. The news for SQL Prompt 6 is that Red Gate have added a new snippet command of $PASTE$. Let’s alter our snippet to the following and try it out. Once again, we will type “ifobj” in the SSMS Editor, but first of all, highlight the name of the table #TestTable and copy it to your clipboard. Now type “ifobj” and press Tab… Wherever the string $PASTE$ is placed in the snippet, the contents of your clipboard are merged into the pasted TSQL. This means I don’t need to type the table name into the code snippet, it’s already there, and I am seeing a fully functioning piece of TSQL ready to run. This makes it even easier to write TSQL quickly and consistently. Attention to detail like this from Red Gate means that their developer tools stay on track to keep winning awards year after year and help take the hard work out of writing neat, accurate TSQL. If you want to try out SQL Prompt, all the details are at http://www.red-gate.com/products/sql-development/sql-prompt/.

    Read the article

  • Severity and relation to occurrence - priority?

    - by user970696
    I have been browsing through some webpages related to testing and found one dealing with the metrics of testing. It says: the severity level of a defect indicates the potential business impact for the end user (business impact = effect on the end user x frequency of occurrence). I do not think this is correct - or what am I missing? Usually it is the priority that is the result of such a calculation (a severe bug that occurs rarely is still severe, but does not have to be fixed immediately). Also, from this description, what is the difference between the effect on the end user and the business impact?

    Read the article

  • SQL Server v.Next (Denali) : Why you should start testing early

    - by AaronBertrand
    Denali is coming, whether you like it or not. You may not be an early adopter and you may not have plans on your current calendar, but at some point you will need to move your apps and databases to this release - or one very much like it. There are a lot of great new features you will be able to take advantage of, but not everything is a double rainbow. There are some changes that will break your spirit if you let them. What does it mean? I go over several breaking changes in my presentation that...(read more)

    Read the article

  • Testing Git competence

    - by David
    I hire a lot of programmers for tiny tasks. I very clearly specify that the tasks can only be completed by making a pull request on GitHub. Unfortunately, many programmers do not know Git, and often they cannot complete the project due to not understanding, or not being willing to learn, Git, even after they have undertaken the programming of the task. This is bad both for me and for the programmers. Sometimes I end up arguing about why it is inefficient for them to just send me a zip file containing the code. Therefore, I am looking for an online service to certify that a programmer knows how to make a pull request, so that I do not waste their time or mine. The certificate should be free for the coders, but may cost me. It is important that the course focuses on exactly what is needed to make a clean pull request, so it should not take more than 5 minutes to go through. Does such a thing exist?

    Read the article

  • Software requirements for replicating a Windows application

    - by gpuguy
    I developed an application using Windows Forms in C++ (IDE: MSVC 2010). Some parts of the application also use MFC and OpenCV. I want to send the application to my client for interim testing on his own machine. I have not developed any installer for it, so I will be sending him the .EXE file. I want the client to not face any difficulty in replicating the experiment, and thus to save his time. Can somebody suggest what software (such as MSVC, the .NET Framework, the Windows SDK, etc.) should already be installed on the client's machine for successful testing of the application? Note: the OS (Windows 7) and hardware are exactly the same on both sides.

    Read the article

  • Why to let / not let developers test their own work

    - by pyvi
    I want to gather some arguments as to why letting a developer test his/her own work as the last step before the product goes into production is a bad idea, because unfortunately, my place of work sometimes does this (the last time this came up, the argument boiled down to most people being too busy with other things and not having the time to get another person familiar with that part of the program - it's very specialised software). There are test plans in this case (though not always), but I am very much in favor of having a person who didn't make the changes being tested actually do the final testing. So I am asking if you could provide me with a good and solid list of arguments I can bring up the next time this is discussed. Or provide counter-arguments, in case you think this is perfectly fine, especially when there are formal test cases to run.

    Read the article

  • Determining an application's dependencies

    - by gpuguy
    I have developed an application using Windows Forms in C++ (IDE: MS VC++ 2010). Some parts of the application also use MFC and OpenCV. I want to send the application to my client for interim testing on his own machine. I have not developed any installer for the application, so I will be sending him an .EXE file. I want the client to not face any difficulties in replicating the environment, and therefore not lose any time. Can somebody suggest what software (such as the MS VC++ runtime, the .NET Framework, the Windows SDK, etc.) should be installed on the client's machine for successful testing of the application? Note: the OS (Windows 7) and hardware are exactly the same on both sides.

    Read the article

  • Consecutive versus Parallel NUnit Testing

    - by Jacobm001
    My office has roughly 300 webpages that should be tested on a fairly regular basis. I'm working with NUnit, Selenium, and C# in Visual Studio 2010. I used this framework as a basis, and I do have a few working tests. The problem I'm running into is that when I run the entire suite, a random test (or tests) will fail on each run. If the tests are run individually, they all pass. My guess is that NUnit is trying to run all 7 tests at the same time and the browser can't support this, for obvious reasons. Watching the browser visually, this does seem to be the case. Looking at the screenshot below, I need to figure out a way in which the tests under Index_Tests are run sequentially, not in parallel.

    errors:

        Selenium2.OfficeClass.Tests.Index_Tests.index_4:
        OpenQA.Selenium.NoSuchElementException : Unable to locate element: {"method":"id","selector":"textSelectorName"}

        Selenium2.OfficeClass.Tests.Index_Tests.index_7:
        OpenQA.Selenium.NoSuchElementException : Unable to locate element: {"method":"id","selector":"textSelectorName"}

    example with one test:

        using OpenQA.Selenium;
        using NUnit.Framework;

        namespace Selenium2.OfficeClass.Tests
        {
            [TestFixture]
            public class Index_Tests : TestBase
            {
                public IWebDriver driver;

                [TestFixtureSetUp]
                public void TestFixtureSetUp()
                {
                    driver = StartBrowser();
                }

                [TestFixtureTearDown]
                public void TestFixtureTearDown()
                {
                    driver.Quit();
                }

                [Test]
                public void index_1()
                {
                    OfficeClass index = new OfficeClass(driver);
                    index.Navigate("http://url_goeshere");
                    index.SendKeyID("txtFiscalYear", "input");
                    index.SendKeyID("txtIndex", "");
                    index.SendKeyID("txtActivity", "input");
                    index.ClickID("btnDisplay");
                }
            }
        }

    Read the article

  • Testing HTML5 and javascript code for iPhone and Android devices

    - by Pankaj Upadhyay
    I have developed a simple HTML5 webpage that uses a JavaScript file. This is a fun learning page, so I wanted to know how it will show up on mobile devices like iPhone and Android smartphones. The page is hosted on a server and I have tested it on my desktop. But how can I test the same for these mobile devices, i.e. how the page will look on a mobile screen and so on? I don't have an iPhone or an Android phone. There is no serious development going on here, so I was thinking there might be some free website or tool that acts as an iPhone or Android browser. The main aim is just to see how the webpage will show up on an Android phone.

    Read the article
