Search Results

Search found 78 results on 4 pages for 'uat'.

Page 2 of 4

  • C# sends SQL data 4 times less from one box than from another

    - by Bobb
    W2003, .NET 3.5, SQL 2008. I have prod and UAT app servers deployed in 2 different data centres. I have a C# app which reads a text file, parses the text and sends the data to SQL in bulk. The SQL server is in the US and the app servers are in London (but in different places). All POPs have dedicated network connections; there is no public internet involved. When the app runs on the UAT server I can see in Perfmon that the send bytes/sec is 4x higher than from the production server. My estimate is that one server outputs at a 1 MB/s rate and the other at 250 KB/s. My immediate suspicion is that there is a router in one of the DCs which shapes traffic or applies a QoS limit on traffic from London to the US. However support, the Windows team and the networking team all say that there are no differences in either the networking config of the 2 DCs or the NIC config of the 2 app servers... How can I find out why the network bottleneck is 4 times tighter in one place than in the other? What can I do about it?
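
    For context, a minimal sketch of the kind of bulk send described (this is not the poster's code; the connection string, file path, table and column names are made up, and it assumes the app uses SqlBulkCopy, which .NET 3.5 provides):

        using System;
        using System.Data;
        using System.Data.SqlClient;
        using System.IO;

        class BulkSender
        {
            static void Main()
            {
                // Parse the text file into an in-memory table.
                var table = new DataTable();
                table.Columns.Add("TradeId", typeof(string));
                table.Columns.Add("Amount", typeof(decimal));

                foreach (var line in File.ReadAllLines(@"C:\feeds\input.txt"))
                {
                    var parts = line.Split('|');
                    table.Rows.Add(parts[0], decimal.Parse(parts[1]));
                }

                // Bulk-send the rows to the remote SQL Server.
                using (var bulk = new SqlBulkCopy(
                    "Data Source=usdbserver;Initial Catalog=Feeds;Integrated Security=SSPI"))
                {
                    bulk.DestinationTableName = "dbo.TradeFeed";
                    bulk.BatchSize = 5000;       // fewer round trips over a high-latency WAN link
                    bulk.BulkCopyTimeout = 600;  // generous timeout for a transatlantic transfer
                    bulk.WriteToServer(table);
                }
            }
        }

    On a high-latency link, throughput is often governed by TCP window size and batching rather than raw bandwidth, so comparing the TCP window and NIC offload/duplex settings on the two app servers is one avenue worth checking.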

    Read the article

  • Code promotion: Enforcing the rules

    - by jbarker7
    So here is our problem: we have a small team of developers with their own ways of doing things -- I am trying to formalize a process in which we are required to promote our code in the following order: local sandbox -> Dev -> UAT -> Staging -> Live. Developers develop/test as they go on their own sandbox; Dev is its own box that we would use for continuous integration; UAT is another site in IIS on the dev box, which uses our dev database. We then promote to Staging, which is a site in IIS on the Live box using live data (just like Live, hence staging). Then, finally, we promote to Live. Here are a few of my questions: 1) Does this seem to be best practice? If not, what needs to be done differently? 2) How do I enforce the rules on the developers? Developers often skip steps in order to save time; this should not be tolerated, and it would be great if it could be physically enforced. 3) How do I enforce these rules on the business group? The business group just wants to get features out FAST. Do we promote only on certain days? Thanks! Josh

    Read the article

  • Invalid Argument javascript error only on certain computers

    - by Jen
    We get an error whenever we click a particular button/link on our site - it generates a JavaScript "Invalid Argument" error. I know from other posts that this is typically caused by a syntax error in the JavaScript, however it only just seems to have started happening and it doesn't happen on all PCs. For example, in our client's environment, if I remote onto their web server and view the UAT website I get the JavaScript error; if I remote onto their SQL server and view the UAT website I don't get the error. If it were a syntax error I would always get the error, wouldn't I? Both browsers are the same version of IE6 (yeah, I know...) :) I have tried deleting temporary internet files - including viewing the files and deleting them myself - but no joy. The client uses Citrix... and they're all getting the error :( Any ideas would be appreciated - thanks! :) Update - sorry I haven't posted specific code, as there is too much to post (and I'm not sure where the error is occurring). The "button" launches a new window which in turn opens a couple of aspx pages and calls lots of JavaScript. The window opens OK, and there's a function that gets called to resize the window - but before it resizes the window/content it throws the invalid argument error. I am busy adding alerts to see if I can spot where it's falling over, but so far no luck. Again, I'm not sure why this error doesn't occur when I use a particular PC (same browser version).

    Read the article

  • How to know the source of certain TCP traffic on AIX

    - by A.Rashad
    We have two AIX boxes, one for the production system and another for testing. Both systems are running ATM machine switches, where the ATM device is connected via a TCP socket. We had an issue on the production system where the machine would power off or get disconnected, but netstat -na | grep <IP of machine> would still show the socket as up. When we simulated that case in the UAT environment the problem did not happen; the socket would terminate in 3 to 5 minutes. When we sniffed the traffic between the machine and the ATM we found that no traffic takes place on production, while there is some sort of heartbeat on UAT - but it is not initiated by the application.

        $>tcpdump | grep -v "10.2.2.71" | grep -v "HSRP" | grep "10.3.1.30"
        tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
        listening on en6, link-type 1, capture size 96 bytes
        09:08:13.323421 IP server073.afs3-callback > 10.3.1.30.impera: . 278204201:278204202(1) ack 3307884029 win 164
        09:08:13.335334 IP 10.3.1.30.impera > server073.afs3-callback: . ack 1 win 64180
        09:08:23.425771 IP 10.3.1.30.impera > server073.afs3-callback: . 1:2(1) ack 1 win 64180
        09:08:23.425789 IP server073.afs3-callback > 10.3.1.30.impera: . ack 2 win 65535
        09:09:13.628985 IP server073.afs3-callback > 10.3.1.30.impera: . 0:1(1) ack 1 win 164
        09:09:13.633900 IP 10.3.1.30.impera > server073.afs3-callback: . ack 1 win 64180
        09:09:23.373634 IP 10.3.1.30.impera > server073.afs3-callback: . 1:2(1) ack 1 win 64180
        09:09:23.373647 IP server073.afs3-callback > 10.3.1.30.impera: . ack 2 win 65535

    On production that traffic is not there. We want to know where this traffic is initiated from, so we can implement it on production to sense disconnection. Our comms parameters are: tcp_keepcnt = 2, tcp_keepidle = 100, tcp_keepinit = 150, tcp_keepintvl = 150, tcp_finwait2 = 1200. Can anyone help?
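
    One note in passing: the 1-byte probe/ack pattern in the capture is consistent with ordinary TCP keepalive, which is enabled per socket by the application (the tcp_keep* kernel parameters only tune the timing once a socket opts in). As an illustration only - the AIX-side application would call the equivalent setsockopt(SO_KEEPALIVE) - a C# sketch of what the per-socket opt-in looks like:

        using System.Net.Sockets;

        class KeepAliveExample
        {
            static void Main()
            {
                // Illustration of the per-socket keepalive opt-in; the kernel's keepidle /
                // keepintvl settings only apply to sockets that have this option enabled.
                var socket = new Socket(AddressFamily.InterNetwork, SocketType.Stream, ProtocolType.Tcp);
                socket.SetSocketOption(SocketOptionLevel.Socket, SocketOptionName.KeepAlive, true);
                // ... connect and use the socket as normal; probes start after the idle time.
            }
        }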

    Read the article

  • Cannot Delete Application Pool

    - by redsquare
    I am trying to tidy up an IIS server. I have removed some test/uat virtual directories however I am not able to remove the application pools. I get the following error message. Any hints on how I go about resolving this?

    Read the article

  • How can I force mail clients like Outlook or Thunderbird to load SSL images with invalid certificate?

    - by Abel
    Essentially, we only encounter this issue with our testing environments, where we send mail that contains HTTPS links that point to test servers with self-signed SSL certificates. In a browser I can select to accept the certificate. Is there something similar for mail clients like Thunderbird or Outlook that I can use? Currently, we don't see any images in our test mail, which greatly confuses the UAT team.

    Read the article

  • Should devs, testers and business users have one unified test script?

    - by Carlos Jaime C. De Leon
    In development, I would normally have my own test scripts that document the data, scenarios and execution steps that I plan to test; this is my dev test plan. When the functionality has been deployed to Test, testers test it using their own test script that they wrote. In UAT, the business user then tests using their own test plan. In retrospect, it looks like this provides better coverage, with dev tests having a mix of black and white box testing, while testers and business users focus on black box testing. But on the other hand, this brings up distinct test cases that are only executed at a particular stage (i.e. some cases that testers thought of are only executed at the Test stage), and it would look like the dev missed it, which makes it a finding/bug. Is it worth consolidating the test scripts from the start, thus using one unified test script, or is it a bit difficult to do this upfront?

    Read the article

  • How do I justify upgrading to Windows Server 2008?

    - by thebunk
    We're just about to start a new greenfield project - it's a highly functional web application using ASP.NET MVC3, SQL Server, etc. We're also going to be using Windows Workflow Foundation for the first time. Our client only wants to use his existing Windows Server 2003 web servers. My main issue (other than the OS being 8 years old) is that we don't have much experience of WF development, but we understand that using AppFabric (Server 2008 only) will improve it. It's a significant cost to the client, as we need fail-over servers and a UAT environment as well. Am I correct in my understanding, and what methodologies can I use to justify the cost of upgrading?

    Read the article

  • ExecuteNonQuery on a stored proc causes it to be deleted

    - by FinancialRadDeveloper
    This is a strange one. I have a Dev SQL Server which has the stored proc on it, and the same stored proc, when used with the same code against the UAT DB, causes it to delete itself! Has anyone heard of this behaviour?

    SQL code:

        -- Check if user is registered with the system
        IF OBJECT_ID('dbo.sp_is_valid_user') IS NOT NULL
        BEGIN
            DROP PROCEDURE dbo.sp_is_valid_user
            IF OBJECT_ID('dbo.sp_is_valid_user') IS NOT NULL
                PRINT '<<< FAILED DROPPING PROCEDURE dbo.sp_is_valid_user >>>'
            ELSE
                PRINT '<<< DROPPED PROCEDURE dbo.sp_is_valid_user >>>'
        END
        go

        create procedure dbo.sp_is_valid_user
            @username as varchar(20),
            @isvalid as int OUTPUT
        AS
        BEGIN
            declare @tmpuser as varchar(20)
            select @tmpuser = username from CPUserData where username = @username
            if @tmpuser = @username
            BEGIN
                select @isvalid = 1
            END
            else
            BEGIN
                select @isvalid = 0
            END
        END
        GO

    Usage example:

        DECLARE @isvalid int
        exec dbo.sp_is_valid_user 'username', @isvalid OUTPUT
        SELECT valid = @isvalid

    The usage example works all day... but when I access it via C# it deletes itself on the UAT SQL DB and not the Dev one!!

    C# code:

        public bool IsValidUser(string sUsername, ref string sErrMsg)
        {
            string sDBConn = ConfigurationSettings.AppSettings["StoredProcDBConnection"];
            SqlCommand sqlcmd = new SqlCommand();
            SqlDataAdapter sqlAdapter = new SqlDataAdapter();

            try
            {
                SqlConnection conn = new SqlConnection(sDBConn);
                sqlcmd.Connection = conn;
                conn.Open();

                sqlcmd.CommandType = CommandType.StoredProcedure;
                sqlcmd.CommandText = "sp_is_valid_user";

                // params to pass in
                sqlcmd.Parameters.AddWithValue("@username", sUsername);

                // param for checking success passed back out
                sqlcmd.Parameters.Add("@isvalid", SqlDbType.Int);
                sqlcmd.Parameters["@isvalid"].Direction = ParameterDirection.Output;

                sqlcmd.ExecuteNonQuery();

                int nIsValid = (int)sqlcmd.Parameters["@isvalid"].Value;

                if (nIsValid == 1)
                {
                    conn.Close();
                    sErrMsg = "User Valid";
                    return true;
                }
                else
                {
                    conn.Close();
                    sErrMsg = "Username : " + sUsername + " not found.";
                    return false;
                }
            }
            catch (Exception e)
            {
                sErrMsg = "Error :" + e.Source + " msg: " + e.Message;
                return false;
            }
        }

    Read the article

  • SSIS - Multiple Configurations

    - by Mick Walker
    I have inherited an SSIS project. I have never worked with SSIS before, and the one thing that seems strange to me is that there is no way to manage multiple configurations. For each SSIS package we have 3 deployment environments: DEV, UAT and PRODUCTION. At the moment I am having to edit the configuration manually for every package we deploy, for each change (and there are a lot of packages). Does anyone know of a more graceful way to handle these configuration changes?

    Read the article

  • ASP.NET High CPU Bringing Servers to their Knees

    - by user880954
    OK, our new build is having 100% CPU spikes on each server at random intervals. For long durations this makes the site totally unresponsive - and it happens at peak times, as people in different countries log on to the site etc. We've looked at Perfmon, memory profilers, the CLR profiler, SQL profilers, Red Gate ANTS profiler, and tried load testing in UAT - but we cannot even reproduce the problem. This could mean only thousands of users hitting the live site causes it to happen. One pattern we did notice was that the new code - the broken build - actually uses noticeably fewer threads. We are also using Spring for IoC - does this have a bad reputation? To make things worse, we cannot deploy to live due to the business impact - so we cannot narrow the problem down to a subset of the new features we've added. We truly are destroyed - has anyone got any battle scars that may save us a few lives?

    Read the article

  • How to view multiple log files as one file in unix/linux

    - by user42679
    Hi, I was wondering if there is a convenient way in Linux/Unix to read multiple log files as one. More specifically, I would like to view a sequence of log files (app.log, app.log.1, app.log.2, etc.) as one big file using normal Unix tools (vi, less, etc.). When EOF is reached, the tool should automatically move to the beginning of the next file. During my work I have to analyse UAT/prod logs to investigate and solve problems. The fact that I need to traverse many log files disrupts my work and causes delays. Any ideas?

    Read the article

  • release management system - architectural question

    - by Sonic Soul
    Every place I've worked has created its own release process, and all of them worked pretty well, however it took a fair amount of effort (and often a dedicated team) to manage releases. I am currently at a new place and about to design such a system, however this time the team is very lean and we won't have dedicated resources for releasing. It will be up to the development manager until the system is proven enough for other developers to use. We're using Subversion as the code repository, TeamCity as the build server, Jira as the issue tracker, and an Oracle DB. I was thinking about writing a basic workflow app that will let developers create a new manifest specifying the following items:

    - release details (who, Jira issues, etc.)
    - workflow step (dev, test, UAT, prod approved, prod released)
    - source files

    That last item is where it can get hairy, especially with database scripts. I figured I'd ask if there is a good pattern, or an off-the-shelf product, that could help with the database part, or perhaps the whole process. I briefly tested the Red Gate Oracle deployment tool, but it didn't work out as well as I had hoped (from 1 day of testing at least). Questions: I think I could get around releasing our code with something like Octopus Deploy straight from TeamCity. I am not clear, however, how I could create a simple database deployment part that will track which version of which script (from Subversion) has been deployed where. Is there already some utility I could use for navigating Subversion to choose which scripts should be released, instead of writing one from scratch? I'd just need it to produce some manifest of paths + versions.

    Read the article

  • How to change the Target URL of EditForm.aspx, DispForm.aspx and NewForm.aspx

    - by Jayant Sharma
    Hi all, for changing the URL of list forms we have very limited options. If you search on the Internet you will find lots of articles about customizing list forms using SharePoint Designer, but the disadvantage of SharePoint Designer is that you cannot create a WSP of the customization, which means whatever changes you have made in the UAT environment you have to repeat in Production. This is the main reason I do not like SharePoint Designer much; I always prefer to create a WSP file, and deployment of the WSP file will do all of my work. Now, if you want to change the list form URL using Visual Studio, you have to either create a new list definition or create a new content type. I found some very good articles and want to share them:

    http://www.codeproject.com/Articles/223431/Custom-SharePoint-List-Forms
    http://community.bamboosolutions.com/blogs/sharepoint-2010/archive/2011/05/12/sharepoint-2010-cookbook-how-to-create-a-customized-list-edit-form-for-development-in-visual-studio-2010.aspx

    Whenever you create either a list definition or a content type and specify the URL for the list forms, it will automatically add the ListID and ItemID (no ItemID in the case of NewForm.aspx) to the URL as a query string, so if you want to redirect or apply some logic you can, as you have the ItemID as well as the ListID.

    Read the article

  • User Acceptance Testing Defect Classification when developing for an outside client

    - by DannyC
    I am involved in a large development project in which we (a very small start-up) are developing for an outside client (a very large company). We recently received their first output from UAT testing of a fairly small iteration, which listed 12 'defects', triaged into three categories: Low, Medium and High. The issue we have is whether everything in this list should be recorded as a 'defect' - some of the issues they found would be better described as refinements, or even 'nice-to-haves', and some we think are not defects at all. The client's QA lead says that it is standard for them to label every issue they identify as a defect; however, we are a bit uncomfortable about this. Whilst the relationship is good we don't see a huge problem, but we are concerned that, if the relationship suffers in the future, these lists of 'defects' could prove costly for us. We don't want to come across as being difficult, or as taking things too personally here, and we are happy to make all of the changes identified; however we are a bit concerned, especially as there is an uneven power balance at play in our relationship. Are we being paranoid here? Or could we be setting ourselves up for problems down the line by agreeing to this classification?

    Read the article

  • QA & Testing with UPK

    - by dan.gallo(at)oracle.com
    Most customers know that UPK produces both the Word and Excel based test scripts for UAT. Did you know that you can also use UPK for QA review and bug tracking? To use UPK for QA, create content and assign it appropriately to authorized reviewers. Then have them open the Developer, use customized views to quickly find the content assigned to them, and check out the topics. They can then use the topic editor to review the content and provide comments right in the bubbles, or use explanation frames. It makes QA-ing content this way easier than publishing and sending out .tpcs or docs for people to review. How about UPK for bug tracking? The hardest part about fixing bugs in software is reproducing the error! When you use UPK for bug tracking, it captures the exact steps the user took that gave them the error. Now development can easily walk through the process in a simulated environment to see what might have caused it, they have a documented procedure for what generated the error, and they are able to communicate better with the LOB. Also, they can update or attach the simulation/documentation to any defect management software, like Bugzilla or something similar - all thanks to UPK.

    Read the article

  • Functional testing in the verification

    - by user970696
    Yesterday my question "How come verification does not include actual testing?" created a lot of controversy, yet did not reveal the answer to a related and very important question: does black-box functional testing done by testers belong to verification or validation? ISO 12207:2008 mentions testing explicitly only as a validation activity; however, it speaks about validation of the requirements of the intended use. To me that is more high level, like UAT test cases written by business users. The ISO standard mentioned above does not mention any specific verification (7.2.4.3.2) except for requirement verification, design verification, document verification and code & integration verification. The last two can probably be thought of as unit and integration testing. But where, then, is the regular testing done by testers at the end of the phase? The book I mentioned in the original question says that verification is done by static techniques, yet on the V-model graph it describes system testing against the high-level description as verification, mentioning that it includes all kinds of testing, like functional, load etc. In the IEEE standard for V&V, you can read this: "Even though the tests and evaluations are not part of the V&V processes, the techniques described in this standard may be useful in performing them." So that is different from ISO, where validation mentions testing as the activity. Not to mention a lot of contradictory information on the net. I would really appreciate a reference to e.g. a standard in the answer, or an explanation of what I missed in the ISO standard. As it stands, I am unable to tell where the testers' work belongs.

    Read the article

  • Please, tell us how you made Agile work for you?

    - by Paul
    I've been seeing many questions related to Agile. There seems to be confusion between the people who are doing Agile successfully and those of us who don't understand it. So I'm wondering if some of the successful teams would be willing to give the rest of us some examples of how you succeeded. Some of the things I wonder about:

    - What steps did you use? (i.e. talk to users, mock up, tests, code, testing, whatever)
    - What tools helped you?
    - Did you generate any artifacts other than a working implementation?
    - How did you prevent spaghetti architecture / code?
    - How do you pass knowledge along to new team members, or is the team stable for the project?
    - How did you determine exit criteria, or was it open ended? (Scope of project?)
    - Did you do this as contracting? How did you develop a contract up front?
    - Did the business do any up-front work, or did they come to the table with "we want to implement a 'bleh bleh blah'"?
    - What types of tests did you use? Unit, integration, UAT? Or did the process make some/all of those unnecessary?

    Bonus: do you have any situations / links to "how to" Agile articles, books, etc.? The wiki describes what, but not how (to the uninitiated). At least to me, this is not a duplicate.

    Read the article

  • ISO 12207 - testing being only validation activity? [closed]

    - by user970696
    Possible duplicate: How come verification does not include actual testing? The ISO 12207 standard states that testing is only a validation activity, while all static inspections are verification (checking that a requirement, code, etc. is complete and correct). I did find some articles saying this is not correct but, you know, they are not "official". I would like to understand, because there are two different concepts (in books and articles): 1) Verification is all testing except for UAT (because only the user can really validate the use), e.g. here; OR 2) Verification is everything but testing; all testing is validation, e.g. here. The definitions are mostly the same as Sommerville's: "The aim of verification is to check that the software meets its stated functional and non-functional requirements. Validation, however, is a more general process. The aim of validation is to ensure that the software meets the customer's expectations. It goes beyond simply checking conformance with the specification to demonstrating that the software does what the customer expects it to do." It is really bugging me, because I tend to agree that functional testing done on a product (SIT) is still verification, because I just follow the requirements. But ISO does not agree...

    Read the article

  • From the Coalface - 3 - Work as hard as you can to be as lazy as you can!

    - by TATWORTH
    The saga of the Change Log. A recent conversation reminded me of the need for change logs within a database, to record when various change scripts were run. Creating the required table is simple. A typical table for this consists of:

    - Id - identity integer primary key
    - ChangeFileName - NVARCHAR(128), to hold the name of the file run
    - DateAdded - DATETIME, non-null, with a default value of getutcdate()
    - Purpose - NVARCHAR(128)
    - Rerunnable - BIT, non-null, default 0

    With good design of the table, only two data values normally need to be supplied. Two stored procedures - one for inserting data and one for listing the log in reverse sequence - complete the database essentials. The complete implementation can be found in the CommonData solution at http://CommonData.CodePlex.Com. By including a call to the "add change log" stored procedure, each script can log its name and purpose for posterity. The scripts that were applied to, say, the UAT system, and their sequence of application, can then be readily identified for running on the Live system.

    Formatting XML: XML is often produced as one continuous string with no embedded CR/LF. To get it into human-readable form, open it in Visual Studio, swap to another tab and back, and click the Format Document button. The XML will then be nicely formatted!

    Read the article

  • MS Build Script accepting a list or collection of project names to build and deploy

    - by Darryn Parker
    I want to create an MSBuild script (executed by way of TFS Build Server) that will accept a list of project names that need to be built, unit tested and deployed. The reason is that I want one script to serve many similarly structured solutions. All the solutions are SOA in nature and share the same framework, project structure and deployment requirements. I've done this in NAnt and achieved great success out of having one build file serving 4 solutions - it just accepted a list of web project names to compile and deploy to our test/UAT environments. What is the syntax to cycle through a collection of project names supplied as parameters via MSBuild?

    Read the article

  • HttpWebRequest to different IP than the domain resolves to

    - by fyjham
    Hey, long story short: the different environments (dev/staging/UAT/live) of an API I'm calling are set up by putting a hosts record on the server, so the live domain resolves to one of their other servers for the HTTP request. The problem is that they've done this with so many different environments that we don't have enough servers to keep using server-wide hosts files for it (we've got some environments running off the same servers - luckily not dev and live, though :P). I'm wondering if there's a way to make a WebRequest to a domain but explicitly specify the IP of the server it should connect to. Or is there any way of doing this short of going all the way down to socket connections (which I'd really prefer to avoid - I don't want to waste time or create bugs by re-implementing the HTTP protocol)? PS: I've tried, and we can't just get a new sub-domain for each environment.
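
    A minimal sketch of one approach, assuming .NET 4.0 or later is available (the IP, path and host name below are made up): address the request to the target server's IP and put the real domain in the Host header via HttpWebRequest.Host, so the remote virtual host still routes it correctly.

        using System;
        using System.Net;

        class HostOverrideExample
        {
            static void Main()
            {
                // Connect to an explicit server IP while sending the original domain
                // in the HTTP Host header (the settable Host property needs .NET 4.0+).
                var request = (HttpWebRequest)WebRequest.Create("http://10.20.30.40/api/orders");
                request.Host = "api.example.com";

                using (var response = (HttpWebResponse)request.GetResponse())
                {
                    Console.WriteLine(response.StatusCode);
                }
            }
        }

    The HTTPS case is trickier, since certificate validation is tied to the host name; on older framework versions a per-machine hosts-file entry remains the usual workaround.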

    Read the article

  • How can I run NUnit(Selenium Grid) tests in parallel?

    - by Benjamin Lee
    My current project uses NUnit for unit tests and to drive UATs written with Selenium. Developers normally run tests using ReSharper's test runner in VS.NET 2003, and our build box kicks them off via NAnt. We would like to run the UAT tests in parallel so that we can take advantage of Selenium Grid/RCs and they will run much faster. Does anyone have any thoughts on how this might be achieved, and/or best practices for running Selenium tests against multiple browser environments without writing duplicate tests? Thank you.
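
    For reference, newer NUnit releases (3.x) support parallel execution natively; a sketch under that assumption, using the Selenium .NET bindings against a Grid hub (the hub address and site URL are made up):

        using System;
        using NUnit.Framework;
        using OpenQA.Selenium;
        using OpenQA.Selenium.Chrome;
        using OpenQA.Selenium.Remote;

        // Allow up to 4 fixtures (i.e. 4 browser sessions) to run at once.
        [assembly: LevelOfParallelism(4)]

        [TestFixture]
        [Parallelizable(ParallelScope.Self)]   // this fixture may run alongside others
        public class LoginPageTests
        {
            private IWebDriver driver;

            [SetUp]
            public void StartBrowser()
            {
                // The Grid hub decides which node and browser serve the session.
                driver = new RemoteWebDriver(new Uri("http://gridhub:4444/wd/hub"),
                                             new ChromeOptions().ToCapabilities());
            }

            [Test]
            public void HomePageHasTitle()
            {
                driver.Navigate().GoToUrl("http://uat.example.com/");
                Assert.That(driver.Title, Is.Not.Empty);
            }

            [TearDown]
            public void StopBrowser()
            {
                driver.Quit();
            }
        }

    The same fixture can be parameterized by browser (e.g. via TestFixture arguments that select different DriverOptions), which avoids writing duplicate tests per browser.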

    Read the article

  • Jrun Server crashes when a page has cfform,cfgrid,cflayout etc..

    - by kayteen
    Hi, I am having a weird problem. I have an application that works perfectly on my development machine and on a UAT machine, both of which are Windows 2003 Server / CF8. When I uploaded the same application to a Solaris box with CF8 and access the site, it works fine until I hit a page that has CFFORM, CFLAYOUT, CFGRID, etc. The JRun server just crashes [jrpp-2 unexpected constant #48...]. There is nothing available in any of the logs. Please help me work out how to resolve this! Thanks, Bittoo. More info: http://forums.adobe.com/thread/605411?tstart=0

    Read the article

  • Agile and code release

    - by ring bearer
    Do you know of any agile process that is designed for code releases? One of the main themes of agile is frequent releases, and each company/client has its own test/approval processes that control code releases; most of the time these slow down the pace of "frequent releases". Currently we have a proprietary tool-based workflow. The team that needs a code promotion has to create a promotion request to one of the final UAT servers. Once this is complete, and once tests are done, certain customers and technical/non-technical managers need to approve, and then it goes into the production deploy stage. Meanwhile there is no sprint planning meeting or anything of that sort. What code release process (that is still agile) has worked for you?

    Read the article
