Search Results

Search found 603 results on 25 pages for 'qa'.

  • hg access control to central repository

    - by andreas buykx
    We come from a Subversion background where we have a QA manager who gives commit rights to the central repository once he has verified that all QC activities have been done. A couple of colleagues and I are starting to use Mercurial, and we want to have a shared repository that would contain our QC-ed changes. Each of the developers hg clones the repository and pushes his changes back to the shared repository. I've read the Hg Init tutorial and skimmed through the red bean book, but could not find how to control who is allowed to push changes to the shared repository. How would our existing model of QA-manager-controlled commits translate to a Mercurial 'central' repository?
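
    One possible direction, sketched below on the assumption that developers push to the shared repository over ssh or HTTP: Mercurial's bundled acl extension can reject pushes from anyone other than a designated user. The account name here is an assumption, not anything from the question.

        # .hg/hgrc of the shared repository -- a minimal sketch, not the only
        # option (ssh deployments often use hg-ssh with per-user keys instead)
        [extensions]
        acl =

        [hooks]
        # check every incoming changegroup (push) against the acl sections below
        pretxnchangegroup.acl = python:hgext.acl.hook

        [acl]
        # only apply the check to changes arriving via push / hg serve
        sources = serve push

        [acl.allow]
        # hypothetical account: with an acl.allow section present, only the
        # users listed here may push; "**" covers every path in the repo
        ** = qamanager

    With that in place, developers can still clone and pull freely, but pushes from anyone except the QA manager are rejected at the shared repository.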

    Read the article

  • Issues with LINQ (to Entity) [adding records]

    - by Mario
    I am using LINQ to Entities in a project where I pull a bunch of data from the database, organize it into a bunch of objects, and save those to the database. I have not had problems writing to the db with LINQ to Entities before, but I have run into a snag with this particular one. Here's the error I get (this is the "InnerException"; the exception itself is useless!): New transaction is not allowed because there are other threads running in the session. I have seen that before, when I was trying to save my changes inside a loop. In this case, the loop finishes, and it tries to make that call, only to give me the exception. Here's the current code:

        try
        {
            //finalResult is a list of the keys to match on for the records being pulled
            foreach (int i in finalResult)
            {
                var queryEff = (from eff in dbMRI.MemberEligibility
                                where eff.Member_Key == i && eff.EffDate >= DateTime.Now
                                select eff.EffDate).Min();
                if (queryEff != null)
                {
                    //Add a record to the Process table
                    Process prRecord = new Process();
                    prRecord.GroupData = qa;
                    prRecord.Member_Key = i;
                    prRecord.ProcessDate = DateTime.Now;
                    prRecord.RecordType = "F";
                    prRecord.UsernameMarkedBy = "Autocard";
                    prRecord.GroupsId = qa.GroupsID;
                    prRecord.Quantity = 2;
                    prRecord.EffectiveDate = queryEff;
                    dbMRI.AddObject("Process", prRecord);
                }
            }
            dbMRI.SaveChanges(); //<-- Crashes here

            foreach (int i in finalResult)
            {
                var queryProc = from pro in dbMRI.Process
                                where pro.Member_Key == i && pro.UsernameMarkedBy == "Autocard"
                                select pro;
                foreach (var qp in queryProc)
                {
                    Audit aud = new Audit();
                    aud.Member_Key = i;
                    aud.ProcessId = qp.ProcessId;
                    aud.MarkDate = DateTime.Now;
                    aud.MarkedByUsername = "Autocard";
                    aud.GroupData = qa;
                    dbMRI.AddObject("Audit", aud);
                }
            }
            dbMRI.SaveChanges(); //<-- AND here (if the first one is commented out)
        }
        catch (Exception e)
        {
            //Do Something here
        }

    Basically, I need it to insert a record, get the identity for that inserted record, and insert a record into another table with the identity from the first record. Given some other constraints, it is not possible to create a FK relationship between the two (I've tried, but some other parts of the app won't allow it, AND my DBA team for whatever reason hates FKs, but that's for a different topic :)). Any ideas what might be causing this? Thanks!
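
    A common workaround for this specific exception, sketched below, is to make sure the ObjectContext is not still streaming results from an open query when SaveChanges (or any other command) runs, e.g. by materializing reads with ToList() before touching the context. Whether that is the actual cause here depends on how dbMRI and qa are created, so treat this as an assumption rather than a diagnosis.

        // Sketch only: force the query to execute and close its data reader
        // (ToList) before adding entities and saving. Names such as dbMRI,
        // Process and Audit come from the question; everything else is assumed.
        foreach (int i in finalResult)
        {
            var processRows = (from pro in dbMRI.Process
                               where pro.Member_Key == i
                                     && pro.UsernameMarkedBy == "Autocard"
                               select pro).ToList();   // executes here, reader closed

            foreach (var qp in processRows)
            {
                Audit aud = new Audit();
                aud.Member_Key = i;
                aud.ProcessId = qp.ProcessId;
                aud.MarkDate = DateTime.Now;
                aud.MarkedByUsername = "Autocard";
                aud.GroupData = qa;
                dbMRI.AddObject("Audit", aud);
            }
        }
        dbMRI.SaveChanges();

    The other change commonly suggested for this error message is enabling MultipleActiveResultSets=True in the connection string, which lets a reader stay open while another command runs.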

    Read the article

  • Why does my PDF ask for a password after being retrieved from Visual SourceSafe?

    - by Schnapple
    PREFACE: Yes we're moving away from VSS in the next few months. One of my web projects contains, as one of its files, a PDF. The PDF on our QA site is being pulled from VSS. A QA tester recently told me he's being prompted for a password when he tries to open it. VSS says the file I have on disk is different than the one it has, so I updated it, but afterwards it's still being shown as different. So basically VSS is mangling my PDF and the results are so wobbly that Adobe Acrobat Reader is confused and thinks it has a password. I've tried adding it as Auto-Detect and as Binary. Same results. Why does my PDF ask for a password after being retrieved from Visual SourceSafe and how can I prevent it?

    Read the article

  • Facebook application domain settings

    - by user887961
    We were working on our sandbox trying to get the Facebook Like button set up for our site. Like an idiot, I set our sandbox URL as the domain. Here's the question: is there any going back once I've done this? I tried to reset the domain but it doesn't seem to have taken. And here's a related question: if I just go and make a Like button (iFrame version), it spits out an app_id as part of the code. If that app_id isn't hooked up to an actual application w/ domain, will it work after we move it to our QA server and then on to production? Or will it, once we've tested on our sandbox, establish the sandbox as its domain and then we're back where we started? We can't change code once it moves on to QA, it's just bad form... so what do I do?

    Read the article

  • Can I improve this regex check for valid domain names?

    - by Josh
    So, I have been working on this domain name regular expression. So far, it seems to pick up domain names with SLDs and TLDs (with the optional ccTLD), but there is duplication of the TLD listing. Can this be refactored any further? params[:domain_name].downcase.strip.match(/^[a-z0-9\-]{2,63} \.((a[cdefgilmnoqrstuwxz]|aero|arpa)|(b[abdefghijmnorstvwyz]|biz)| (c[acdfghiklmnorsuvxyz]|cat|com|coop)|d[ejkmoz]|(e[ceghrstu]|edu)|f[ijkmor]| (g[abdefghilmnpqrstuwy]|gov)|h[kmnrtu]|(i[delmnoqrst]|info|int)| (j[emop]|jobs)|k[eghimnprwyz]|l[abcikrstuvy]| (m[acdghklmnopqrstuvwxyz]|me|mil|mobi|museum)|(n[acefgilopruz]|name|net)|(om|org)| (p[aefghklmnrstwy]|pro)|qa|r[eouw]|s[abcdeghijklmnortvyz]| (t[cdfghjklmnoprtvwz]|travel)|u[agkmsyz]|v[aceginu]|w[fs]|y[etu]|z[amw]) (\.((a[cdefgilmnoqrstuwxz]|aero|arpa)|(b[abdefghijmnorstvwyz]|biz)| (c[acdfghiklmnorsuvxyz]|cat|com|coop)|d[ejkmoz]|(e[ceghrstu]|edu)|f[ijkmor]| (g[abdefghilmnpqrstuwy]|gov)|h[kmnrtu]|(i[delmnoqrst]|info|int)| (j[emop]|jobs)|k[eghimnprwyz]|l[abcikrstuvy]| m[acdghklmnopqrstuvwxyz]|mil|mobi|museum)| (n[acefgilopruz]|name|net)|(om|org)| (p[aefghklmnrstwy]|pro)|qa|r[eouw]|s[abcdeghijklmnortvyz]| (t[cdfghjklmnoprtvwz]|travel)|u[agkmsyz]|v[aceginu]|w[fs]|y[etu]|z[amw]))?$/)
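
    One way to remove the duplication, sketched below on the assumption that this is Ruby (the params[...] call suggests Rails): define the TLD alternation once and interpolate it in both places. The TLDS constant is abbreviated here and stands in for the full list in the question.

        # Sketch: build the TLD alternation once, reuse it for the optional ccTLD part.
        TLDS = /(?:a[cdefgilmnoqrstuwxz]|aero|arpa|biz|com|edu|gov|info|
                 mil|mobi|museum|net|org|qa|travel)/x   # abbreviated list

        DOMAIN_NAME = /^[a-z0-9\-]{2,63}\.#{TLDS}(?:\.#{TLDS})?$/

        params[:domain_name].downcase.strip.match(DOMAIN_NAME)

    Interpolating a Regexp object into another literal preserves its options, so the single TLDS definition is the only place the list has to be maintained.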

    Read the article

  • IIS 6 session timing out a lot quicker than expected

    - by Echiban
    I am working with a web application whose sessions are timing out a lot quicker than expected. We expected a timeout of 15 minutes but it's timing out at 3-4 minutes. Info about the environment:
    - IIS 6
    - classic ASP / COM+ app
    - timeout OK on current PROD, much quicker in dev / QA environments
    - we already disabled app pool recycling, and even put IIS in isolation mode - no effect
    - the HTTP err log doesn't show any lines when a session times out
    We've done a close comparison of the PROD and DEV / QA environments, and given we use virtual machines on all of them, settings should be preserved. I tried to find IIS blog notes from David Wang but many of them now return HTTP 404 errors, and I don't know what else to do. Please help! At the very least, is there a way to get IIS to log every time a session expires? Some means of logging / debugging IIS would be useful. Thanks in advance.
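
    On the "log every time a session expires" question: in classic ASP the usual hook is Session_OnEnd in global.asa. A minimal sketch follows; the log path is hypothetical, and note that Server.MapPath and the Request/Response objects are not available inside Session_OnEnd.

        <SCRIPT LANGUAGE="VBScript" RUNAT="Server">
        Sub Session_OnStart
            ' remember when the session began so the log shows its real lifetime
            Session("StartedAt") = Now()
        End Sub

        Sub Session_OnEnd
            ' only the Application, Server and Session objects are usable here
            Dim fso, logFile
            Set fso = Server.CreateObject("Scripting.FileSystemObject")
            Set logFile = fso.OpenTextFile("D:\logs\session-end.log", 8, True) ' 8 = ForAppending
            logFile.WriteLine Now() & " session " & Session.SessionID & _
                " ended; started " & Session("StartedAt") & _
                "; Session.Timeout=" & Session.Timeout & " min"
            logFile.Close
        End Sub
        </SCRIPT>

    Comparing the logged lifetimes with the Session.Timeout value in each environment should at least show whether the configured timeout itself differs between PROD and the dev / QA boxes.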

    Read the article

  • How to allow for modular development while still running in same JVM?

    - by Marcus
    Our current app runs in a single JVM. We are now splitting up the app into separate logical services where each service runs in its own JVM. The split is being done to allow a single service to be modified and deployed without impacting the entire system. This reduces the need to QA the entire system - we just need to QA the interaction with the service being changed. For interservice communication we use a combination of REST, an MQ system bus, and database views. What I don't like about this:
    - REST means we have to marshal data to/from XML
    - DB views couple the systems together, which defeats the whole concept of separate services
    - the MQ / system bus is added complexity
    - there is inevitably some code duplication between services
    - you have to set up n JBoss server configurations, do n deployments, n set-up scripts, etc., etc.
    Is there a better way to structure an internal application to allow modular development and deployment while allowing the app to run in a single JVM (and achieving the associated benefits)?
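
    For the single-JVM question, one commonly used shape (a sketch, not a recommendation specific to this system; every name below is invented): keep each service behind a plain interface in its own jar and let java.util.ServiceLoader wire up the implementation, so modules are built and versioned separately but calls stay in-process.

        // --- billing-api.jar: the only artifact other services depend on ---
        package example.billing;

        public interface BillingService {
            void charge(String customerId, long amountCents);
        }

        // --- billing-impl.jar: ships the implementation plus a provider file
        //     META-INF/services/example.billing.BillingService containing the
        //     single line "example.billing.DefaultBillingService" ---
        package example.billing;

        public class DefaultBillingService implements BillingService {
            @Override
            public void charge(String customerId, long amountCents) {
                // real work would go here
            }
        }

        // --- any caller in the same JVM ---
        import java.util.ServiceLoader;
        import example.billing.BillingService;

        public class Checkout {
            public static void main(String[] args) {
                BillingService billing =
                    ServiceLoader.load(BillingService.class).iterator().next();
                billing.charge("customer-42", 1999L);
            }
        }

    OSGi (or JBoss's own module system) is the heavier-weight version of the same idea if classloader isolation between modules also matters.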

    Read the article

  • Determine IP# of domain from client browser

    - by Tim W.
    Greetings all, I would very much like to determine the IP# of a domain from client script. It's for use in a testing application, to determine whether or not a certain domain is set to a QA address as opposed to the live, public address. The testing machine will have its hosts file set to resolve the domain to the QA address. Pinging from the server won't help since the server is getting the public DNS address. Is this possible in JavaScript? Maybe Flash could do the trick?

    Read the article

  • How to study for MCTS 70-433 exam (SQL Server 2008 Database Development) and is it worth it?

    - by Mugen
    Hi, I'm working in QA (software tester with 3 years of experience) and was thinking of getting some certifications for my career. I already have the ISTQB certification and was thinking of doing SCJP along with the MCTS 70-433 certification (SQL Server 2008 Database Development) as the next move. So my questions are: 1) Who should go for the 70-433 certification, and is it worth going for, for a career in QA? 2) What would be a good book to study from? I'm just looking for a simple book that is written to the point and not bloated like the technical bibles. Not that they aren't good, but they just take up too much time. Maybe something similar to the one written by Kathy Sierra & Bert Bates for SCJP: it is written in very simple language and is enough to pass the SCJP. Edit: I'm still searching for answers to this. Thanks.

    Read the article

  • How do I test switching compilers from MSVS 6 to MSVS 2008?

    - by Leif
    When switching from MSVS 6 to MSVS 2008, what major differences should I look for when testing the software? I'm coming from more of a QA perspective. We have two programs that work closely together that were originally compiled in Visual C++ 6. Now one of the programs has been compiled in Visual C++ 2008 in order to use a specific CD writing routine. The other program is still compiled under MSVS 6. My manager is very concerned with this change and wants me to run tests specific to this change. Since I deal more with QA and less with development, I really have no idea where to start. I've looked for differences between the two, but nothing has given me a clear direction as far as testing is concerned. Any suggestions would be helpful.

    Read the article

  • Continuous integration with multiple branch development

    - by ryanprayogo
    In the project that I'm working on, we are using SVN with a 'stable trunk' strategy. What that means is that for each bug that is found, QA opens a bug ticket and assigns it to a developer. The developer then fixes that bug and checks it into a branch off trunk (let's call this the bug branch), and that branch will only contain fixes for that particular bug ticket. When we decide to do a release, for each bug fix that we want to release to the customer, a developer merges the fixes from the relevant bug branches to trunk and we proceed with the normal QA cycle. The problem is that we use trunk as the codebase for our CI job (Hudson, specifically), and therefore all commits to a bug branch miss the daily build until they get merged to trunk when we decide to release the new version of the software. Obviously, that defeats the purpose of having CI. What is the proper way to fix this issue?

    Read the article

  • Advice about testing an application before release?

    - by Troy
    I would like to get some tips from fellow developers about how you go about testing an application you developed, prior to releasing it to QA. Keep in mind this is a small-scale application (the requirements are verbal), so formal testing processes won't work, especially since your boss told you to develop this app quickly and push it out the door. Despite the time constraints, I would like to make sure it is bug free; however, numerous times in the past I have had an app sent back to me because clicking the "Reset" button messes up the other controls' alignment, etc. I know there are people out there who develop small-scale apps fast and send them out with minimal bugs. How can I achieve that? I researched this post, but it didn't quite answer my question: Testing your code before releasing to QA

    Read the article

  • WCF server component getting outdated user name

    - by JoelFan
    I am overriding System.IdentityModel.Policy.IAuthorizationPolicy.Evaluate as follows:

        public bool Evaluate(EvaluationContext evaluationContext, ref object state)
        {
            var ids = (IList<IIdentity>)evaluationContext.Properties["Identities"];
            var userName = ids[0].Name;
            // look up "userName" in a database to check for app. permissions
        }

    Recently one of the users had her user name changed in Active Directory. She is able to log in to her Windows box fine with her new user name, but when she tries to run the client side of our application, the server gets her old user name in the "userName" variable in the code above, which messes up our authentication (since her old user name is no longer in our database). Another piece of info: this only happens when she connects to the server code on the Production server. We have the same server code running on a QA server, and it does not have this issue (the QA server code gets her correct (new) user name). Any ideas what could be going on?

    Read the article

  • Is it possible to build this type of program in PHP?

    - by Steven
    I want to build a QA program that will crawl all the pages of a site (all files under a specified domain name) and return all external links on the site that don't open in a new window (links whose <a> tag does not have the target="_blank" attribute). I can write PHP or JavaScript to open external links in new windows, or to report all problem links that don't open in new windows on a single page (the same page the script is in), but what I want is for the QA tool to go and search all pages of a website and report back to me what it finds. This "spidering" is what I have no idea how to do, and I am not sure if it's even possible to do with a language like PHP. If it's possible, how can I go about it?
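
    It is possible in plain PHP; a minimal sketch of the idea follows (not production code). The starting URL is hypothetical, it assumes allow_url_fopen is enabled, and real code should also respect robots.txt, timeouts, and non-HTML content types.

        <?php
        // Breadth-first crawl of one domain, reporting external links
        // that lack target="_blank".
        $start = 'http://www.example.com/';          // hypothetical site
        $host  = parse_url($start, PHP_URL_HOST);
        $queue = array($start);
        $seen  = array($start => true);

        while ($queue) {
            $url  = array_shift($queue);
            $html = @file_get_contents($url);
            if ($html === false) { continue; }

            $doc = new DOMDocument();
            @$doc->loadHTML($html);                  // silence bad-markup warnings

            foreach ($doc->getElementsByTagName('a') as $a) {
                $href = $a->getAttribute('href');
                if ($href === '' || $href[0] === '#') { continue; }

                // crude absolute-URL resolution, good enough for a sketch
                $abs      = parse_url($href, PHP_URL_HOST) ? $href : rtrim($start, '/') . '/' . ltrim($href, '/');
                $linkHost = parse_url($abs, PHP_URL_HOST);

                if ($linkHost !== null && $linkHost !== $host) {
                    // external link: flag it if it will not open in a new window
                    if (strtolower($a->getAttribute('target')) !== '_blank') {
                        echo "On $url: external link $abs has no target=\"_blank\"\n";
                    }
                } elseif (!isset($seen[$abs])) {
                    // internal link: queue it for crawling
                    $seen[$abs] = true;
                    $queue[]    = $abs;
                }
            }
        }

    The same reporting logic could also sit on top of an existing crawler library or a curl-based fetcher; the core of the tool is just "fetch, parse anchors, classify by host, queue internal links".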

    Read the article

  • is it a good idea to write tests for environments other than development?

    - by jcollum
    Let's say I have a (fairly typical) set of environments: PROD, UAT, QA, DEV. Is it a good idea to run your tests across all environments? Here's what I'm thinking of. I have a stored procedure in SQL that my code depends on; I'll call it proc_getActiveCustomers. If that proc isn't present, my app will go south real fast. So I write a test that checks for the existence of this proc in the database. Nothing new here. But when I then deploy my app to the QA environment, would I also want a test that checks that environment for the existence of proc_getActiveCustomers? I think this is a good idea, but I've never heard much about testing in environments outside of development, which makes me wonder if there's some downside I'm not aware of. The direction I'm going is to have a list of environments in code and then pass each environment into my unit test.
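
    A sketch of that "list of environments" direction, assuming NUnit and ADO.NET (the connection strings and names are hypothetical placeholders). Strictly speaking these are environment smoke checks rather than unit tests, which is worth keeping separate from the fast developer-run suite.

        using System.Data.SqlClient;
        using NUnit.Framework;

        [TestFixture]
        public class EnvironmentSmokeTests
        {
            // Hypothetical connection strings, one per environment.
            [TestCase("Server=devsql;Database=App;Integrated Security=true", TestName = "DEV")]
            [TestCase("Server=qasql;Database=App;Integrated Security=true",  TestName = "QA")]
            [TestCase("Server=uatsql;Database=App;Integrated Security=true", TestName = "UAT")]
            public void proc_getActiveCustomers_exists(string connectionString)
            {
                using (var conn = new SqlConnection(connectionString))
                using (var cmd = new SqlCommand(
                    "SELECT OBJECT_ID('dbo.proc_getActiveCustomers', 'P')", conn))
                {
                    conn.Open();
                    var id = cmd.ExecuteScalar(); // DBNull when the proc is missing
                    Assert.That(id, Is.Not.EqualTo(System.DBNull.Value),
                        "proc_getActiveCustomers is missing in this environment");
                }
            }
        }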

    Read the article

  • Can't figure out what's wrong with my php/sql statement

    - by Olegious
    So this is probably a dumb beginner question, but I've been looking at it and can't figure it out. A bit of background: just practicing making a web app, a form on page 1 takes in some values from the user and posts them to the next page, which contains the code to connect to the DB and populate the relevant tables. I establish the DB connection successfully; here's the code that contains the query:

        $conn->query("SET NAMES 'utf9'");
        $query_str = "INSERT INTO 'qa'.'users' ('id', 'user_name', 'password', 'email', 'dob', 'sx')
                      VALUES (NULL, $username, $password, $email, $dob, $sx);";
        $result = @$conn->query($query_str);

    Here's the error that is returned: Insert query failed: You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near ''qa'.'users' ('id', 'user_name' ,'password' ,'email' ,'dob' ,'s' at line 1. Thanks in advance!
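
    For comparison, a sketch of the same insert with backtick-quoted identifiers and bound values, assuming $conn is a mysqli connection. The single-quoted identifiers ('qa'.'users', 'id', ...) are what trigger that syntax error, since MySQL expects backticks (or nothing) around table and column names, and 'utf9' is presumably meant to be 'utf8'.

        <?php
        // Sketch only: identifiers in backticks, values bound as parameters
        // instead of being interpolated into the query string.
        $conn->set_charset('utf8');

        $stmt = $conn->prepare(
            "INSERT INTO `qa`.`users` (`id`, `user_name`, `password`, `email`, `dob`, `sx`)
             VALUES (NULL, ?, ?, ?, ?, ?)"
        );
        $stmt->bind_param('sssss', $username, $password, $email, $dob, $sx);

        if (!$stmt->execute()) {
            // surface the real error instead of suppressing it with @
            die('Insert query failed: ' . $stmt->error);
        }

    Binding the values also avoids the follow-on problem in the original query, where unquoted string values would either break the statement or open it to SQL injection.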

    Read the article

  • More New JDeveloper/ADF Blogs - Dec 2010 Edition

    - by shay.shmeltzer
    It's only been a month since my last new-bloggers update, but over this month I came across several other new blogs, so here are a few more to add to your RSS reader: JDev and ADF QA Team, ADF Code Corner, Code Harvest, JDeveloper PMs Blog, Don Kleppinger, Amit Seth, Kishore, Amir Hossein Khanof, Oracle ADF Notebook, Gerry O'D, Muhammed Soyer. Thanks to all the developers who are sharing their experience and helping advance the ADF community. As always, we are trying to keep tracking these blogs for entries, and you can find those on the JDeveloper Twitter, Facebook and blog roll.

    Read the article

  • IT Admin for Thrill Seekers

    - by Tony Davis
    A developer suggested to me recently that the life of the DBA was, surely, a dull one. My first reaction was indignation, quickly followed by the thought that for many people excitement isn't necessarily the most desirable aspect of their job. It's true that some aspects of the DBA role seem guaranteed to quieten the pulse; in the days of tape backups, time must have slowed to eternity for the person whose job it was to oversee this process, placing tapes into secure containers, ensuring correct labeling, and... sorry, I drifted off there for a second. On the other hand, if you follow the adventures of the likes of Brent Ozar or Tom LaRock, you'd be forgiven for thinking that much of a database guy's time is spent, metaphorically, diving through plate glass windows in tight-fitting underwear in order to extract grateful occupants from burning database applications. Alas, it isn't true of the majority, but it isn't as dull as some people imagine, and is a helter-skelter ride compared with some other IT roles. Every IT department has people who toil away in shadowy corners doing quiet but mysterious tasks. When you ask them to explain what they do, you almost immediately want them to stop, but you hear enough to appreciate that these tasks are often absolutely vital to the smooth functioning of an IT organization. Compared with them, the DBAs are prima donnas. Here are a few nominations:
    - Installation engineer - installs all of the company's laptops, workstations and software, and deals with licensing, shipping and data entry. Many organizations, especially those subject to tight regulation, would simply grind to a halt without their efforts.
    - Localization engineer - not quite software engineering, not quite translation; the job is to rebuild a product in a different language and make sure everything still works.
    - QA Tester - firstly, I should say that the testers at Red Gate seem to me some of the most fulfilled in the company. I refer here to the QA tester whose job is more or less entirely to read a script, click some buttons and make sure the actual and expected values match.
    - Configuration manager - for example, someone whose main job is to configure build environments so that devs can access their source code; assuredly necessary for the smooth functioning and productivity of the team, and hopefully well-paid.
    So what other sort of job in IT should one choose if the work of a DBA proves to be too exciting? Or are these roles secretly more exciting than many imagine? I invite you all to put forward your own suggestions. Cheers, Tony.

    Read the article

  • Quality Assurance & Quality Control = verification & validation?

    - by user970696
    According to a book (page below), reviewing e.g. a design (a verification activity) is quality assurance. I would not agree; I would say it's quality control, because we are checking conformance to the specification and plans and detecting deviations (defects), just as we do in quality control. But what would be an example of QA then? Could you give me a clear example that proves/disproves what this book is saying? Software Testing: Srinivasan Desikan, Gopalaswamy Ramesh

    Read the article

  • Various roles in an Organization and their respective tasks.

    - by balu
    In various organizations (software companies) there are various designations having different roles. I would like to know the industry-accepted and followed trend in the organization hierarchy (e.g. DBA, System Architect, Project Manager, Senior Developer, Developer, QA, Design Team, Delivery Manager, etc.), and the various roles played by each of them in the various stages of the Software Development Life Cycle. Who all could possibly be sharing responsibility mutually?

    Read the article

  • Getting from a user-story to code while using TDD (scrum)

    - by Ittai
    I'm getting into Scrum and TDD and I think I have some confusion which I'd like to get your feedback about. Let's assume I have a user story in my backlog; in order for me to start developing it as part of TDD I need to have requirements, right so far? Is it true to say that the product manager and QA should be responsible for taking the user story and breaking it down into acceptance tests? I think the above is true since the acceptance tests need to be formal, so they can be used as tests, but also human readable, so that the product manager can approve that they are the requirements, right? Is it also true that I later take these acceptance tests and use them as my requirements, i.e. they are a set of use cases which I implement (through TDD)? I hope I'm not making too much of a mess, but that's the current flow I have in mind right now. Update: I think my initial intentions were unclear, so I'll try to rephrase. I want to know more details about the Scrum flow of turning a user story into code while using TDD. The starting point is obvious: a user (or the user's representative, such as the product manager) surfaces a need, which is a short 1-2 line description in the known format, and that is added to the product backlog. When there is a sprint planning meeting, user stories are taken from the backlog and assigned to developers. In order for a developer to write code they need requirements (especially in TDD, since the requirements are what the tests are derived from). When, by whom, and in which format are the requirements compiled? What I had in mind was that the product manager and QA define the requirements via acceptance tests (I'm thinking of automating them using FitNesse or the like, but that's not the core issue), which serve two purposes at the same time: they define "done" properly, and they give the developer something to derive tests from. I wasn't sure when these should be written (if before the sprint in which the story is picked, that might be a waste, since additional information will arrive or the story might not be picked; if during the iteration, the developer might get stuck waiting for them...)

    Read the article

  • Advanced Continuous Delivery to Azure from TFS, Part 1: Good Enough Is Not Great

    - by jasont
    The folks over on the TFS / Visual Studio team have been working hard at releasing a steady stream of new features for their new hosted Team Foundation Service in the cloud. One of the most significant features released was simple continuous delivery of your solution into your Azure deployments. The original announcement from Brian Harry can be found here.

    Team Foundation Service is a great platform for .Net developers who are used to working with TFS on-premises. I’ve been using it since it became available at the //BUILD conference in 2011, and when I recently came to work at Stackify, it was one of the first changes I made. Managing work items is much easier than the tool we were using previously, although there are some limitations (more on that in another blog post). However, when continuous deployment was made available, it blew my mind. It was the killer feature I didn’t know I needed. Not to say that I wasn’t previously an advocate for continuous delivery; just that it was always a pain to set up and configure. Having it hosted - and a one-click setup – well, that’s just the best thing since sliced bread. It made perfect sense: my source code is in the cloud, and my deployment is in the cloud. Great! I can queue up a build from my iPad or phone and just let it go! I quickly tore through the quick setup and saw it all work… sort of.

    This will be the first in a three part series on how to take the building block of Team Foundation Service continuous delivery and build a CD model that will actually work for any team deploying something more advanced than a “Hello World” example.
    - Part 1: Good Enough Is Not Great
    - Part 2: A Model That Works: Branching and Multiple Deployment Environments
    - Part 3: Other Considerations: SQL, Custom Tasks, Etc

    Good Enough Is Not Great

    There. I’ve said it. I certainly hope no one on the TFS team is offended, but it’s the truth. Let’s take a look under the hood and understand how it works, and then why it’s not enough to handle real world CD as-is.

    How it works. (note that I’ve skipped a couple of steps; I already have my accounts set up and something deployed to Azure)

    The first step is to establish some oAuth magic between your Azure management portal and your TFS instance. You do this via the management portal. Once it’s done, you have a new build process template in your TFS instance. (Image lifted from the documentation.) From here, you’ll get the usual prompts for security, allowing access, etc. But you’ll also get to pick which solution in your source control to build. Here’s what the bulk of the build definition looks like. All I’ve had to do is add in the solution to build (notice that mine is from a specific branch – Release – more on that later) and I’ve changed the configuration. I trigger the build, and voila! I have an Azure deployment a few minutes later. The beauty of this is that it’s all in the cloud and I’m not waiting for my machine to compile and upload the package. (I also had to enable the build definition first – by default it is created in disabled state, probably a good thing since it will trigger on every.single.checkin by default.) I get to see a history of deployments from the Azure portal, and can link into TFS to see the associated changesets and work items. You’ll notice also that this build definition automatically put my code in the Staging slot of my Azure deployment – more on this soon. For now, I can VIP swap and be in production. (P.S. I hate VIP swap and “production” and “staging” in Azure. More on that later too.)

    That’s it. That’s the default out-of-box experience. Easy, right? But it’s full of room for improvement, so let’s get into that…

    The Problems

    Nothing is perfect (except my code – it’s always perfect), and neither is continuous deployment without a bit of work to help it fit your dev team’s process. So what are the issues?

    Issue 1: Staging vs QA vs Prod vs whatever other environments your team may have. This, for me, is the big hairy one. Remember how this automatically deployed to staging rather than prod for us? There are a couple of issues with this model:
    - If I want to deliver to prod, it requires intervention on my part after deployment (via a VIP swap).
    - If I truly want to promote between environments (i.e. Nightly Build –> Stable QA –> Production) I likely have configuration changes between each environment, such as database connection strings, and this process (and the VIP swap) doesn’t account for this. Yet.

    Issue 2: Branching and delivering on every check-in. As I mentioned above, I have set this up to target a specific branch – Release – of my code. For the purposes of this example, I have adopted the “basic” branching strategy as defined by the ALM Rangers. This basically establishes a “Main” trunk where you branch off Dev and Release branches. Granted, the Release branch is usually the only thing you will deploy to production, but you certainly don’t want to roll to production automatically when you merge to the Release branch and check in (unless you like the thrill of it, and in that case, I like your style, cowboy…). Rather, you have nightly build and QA environments, or if you’ve adopted the feature-branch model you have environments for those. Those are the environments you want to continuously deploy to. But that takes us back to Issue 1: we currently have a 1:1 solution-to-Azure-deployment target.

    Issue 3: SQL and other custom tasks. Let’s be honest and address the elephant in the room: I need to get some sleep because I see an elephant in the room. But seriously, I can’t think of an application I have touched in the last 10 years that doesn’t need to consider SQL changes when deploying code and upgrading an environment. Microsoft seems perfectly content to ignore this elephant for now: yes, they’ve added Data Tier Applications. But let’s be honest with ourselves again: no one really uses it, and it’s not suitable for anything more complex than a Hello World sample project database. Why? Because it doesn’t fit well into a great source control story. Developers make stored procedure and table changes all day long while coding complex applications, and if someone forgets to go update the DACPAC before the automated deployment, you have a broken build until it’s completed. Developers – not just DBAs – also like to work with SQL in SQL tools, not in Visual Studio. I’m really picking on SQL because that’s generally the biggest concern that I hear. But we need to account for any custom tasks as well in the build process.

    The Solutions… ?

    We’ve taken a look at how this all works, and addressed the shortcomings. In my next post (which I promise will be very, very soon), I will detail how I’ve overcome these shortcomings and used this foundation to create a mature, flexible model for deploying my app – any version, any time, to any environment.
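
    On the Issue 1 point about per-environment connection strings, one common approach is a config transform per target environment; with Azure cloud services you may instead keep one ServiceConfiguration.*.cscfg per environment. The sketch below shows a hypothetical Web.QA.config applied on top of Web.config when the QA package is built; server and database names are made up.

        <!-- Web.QA.config -- applied over Web.config for QA builds.
             Connection details here are hypothetical placeholders. -->
        <configuration xmlns:xdt="http://schemas.microsoft.com/XML-Document-Transform">
          <connectionStrings>
            <add name="AppDb"
                 connectionString="Server=qa-sql;Database=App;Integrated Security=True"
                 xdt:Transform="SetAttributes" xdt:Locator="Match(name)" />
          </connectionStrings>
        </configuration>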

    Read the article

  • TestRail 1.0 Test Management Software released

    Gurock Software just released its new test management solution TestRail. TestRail is a web-based test case management software that helps software development teams and QA departments to efficiently manage, track and organize software testing efforts.

    Read the article
