Search Results

Search found 13653 results on 547 pages for 'integration testing'.

Page 61/547 | < Previous Page | 57 58 59 60 61 62 63 64 65 66 67 68  | Next Page >

  • What build tools do not depend on java (or Ruby)?

    - by Mohamed Meligy
    I'm wondering which generic build tools ship with their own runtime and do not depend on another environment that isn't bundled with them. For example, Ant requires Java, Rake requires Ruby, etc. It would also be great to hear about target-platform-agnostic tools, where I'd just supply whatever command builds, whatever command tests, and so on, and could then define my artifacts in CI. I'd find something like that useful for building .NET projects (say, on both Windows .NET and Mono) and especially Node.js projects. I do not want to install Java and/or Ruby when all I want is a .NET build or a Node.js build. This is a general-awareness question rather than an exact problem I'm facing, which is why it's here and not on StackOverflow. Update: To explain a bit more, what I'm after is a build script that would run MSBuild for compiling (in .NET; in Node it might be several Node/npm commands, etc.) and then handle the remaining build/test steps, instead of defining all of that in MSBuild (again, that's the .NET case; I'm also wondering whether there is an equivalent story for Node).

    Read the article

  • Tomcat + Spring + CI workflow

    - by ex3v
    We're starting our very first project with Spring and the Java web stack. This project will mainly be about rewriting a quite large ERP/CRM from Zend Framework to Java. An important factor in my question is that I come from PHP territory, where things (in terms of quality) tend to look different than in the Java world. Facts: there will be 2-3 developers; at least one of the developers uses Windows, the rest use Linux; there is one remote Linux-based machine, which should handle the test and production instances; after struggling with buggy legacy code, we want to introduce good programming and development practices (CI, tests, clean code and so on); client: internal, frequent business logic changes, scrum, daily deployments. What I want to achieve is a good workflow across as many development stages as possible (coding - committing - testing - deploying). The problem is that I've never done this before, so I don't know what the best practices are. What I have so far is: developers code locally; there is a Vagrant instance on every development machine, managed by Puppet, containing the same Linux, Jenkins and Tomcat versions as the production machine; while coding, the developer deploys to the Vagrant machine; after a local merge to the test branch, Jenkins on Vagrant runs the tests; when everything is fine, the developer pushes commits and merges; Jenkins on the remote machine pulls the commit from the test branch, runs tests and so on; if everything looks green, Jenkins deploys to the test Tomcat instance. Deployment to production is manual (although it can be done using helper scripts) once the business logic has been tested by other divisions and everything looks fine to the client. Now, the real question: does the above make any sense? Things that I'm not sure about: Remote machine: won't there be any problems with two (or even three, as Jenkins might need one) instances of the same app on Tomcat? Using Vagrant to develop in a PHP environment is just wise, but isn't it overkill with Tomcat? I mean, isn't there a higher probability that Tomcat will act the same on every machine anyway? Is there any sense in having a local Jenkins on Vagrant?

    Read the article

  • Unit Testing... a data provider?

    - by TomTom
    Given problem: I like unit tests. I develop connectivity software to external systems that pretty much always uses a C++ library, and the output of these systems is nondeterministic. Data is received while running, but making sure it is all correctly interpreted is hard. How can I test this properly? I can run a unit test that does a connect. Sadly, it will then process a live data stream. I can say I run the test for 30 or 60 seconds before disconnecting, but getting code coverage is impossible - I simply don't come close to hitting all code paths even once per day (error code paths are rarely run). I also cannot really assert every result. Depending on the time of day we are talking about 20,000 data callbacks per second - none of which are deterministic enough to validate individually for consistency. Mocking? Well, that would leave me testing an empty shell of myself, because the code handling the events is basically the case to be tested, and in many cases we are talking about a COMPLEX C-level structure - it's hard to find mocking frameworks that integrate from C# to C++. Anyone have any ideas? I am close to giving up on unit tests for this part of the application.
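
    One way to get deterministic coverage here (a sketch, not the poster's actual code - every type name below is invented for illustration, and the test uses MSTest) is to keep the native-callback glue as thin and dumb as possible, push the interpretation logic behind a plain managed interface, and then drive that logic in tests by replaying a recorded slice of the live stream instead of connecting:

        using System.Collections.Generic;
        using Microsoft.VisualStudio.TestTools.UnitTesting;

        // Illustrative names only (Tick, ITickHandler, RunningHighHandler) - not part of
        // any vendor API. The native C++ callback adapter stays as untested glue that merely
        // builds a Tick and forwards it; everything below that seam is testable managed code.
        public sealed class Tick
        {
            public string Symbol;
            public decimal Price;
        }

        public interface ITickHandler
        {
            void OnTick(Tick tick);
        }

        // The code we actually want coverage on: pure managed logic, no live feed needed.
        public sealed class RunningHighHandler : ITickHandler
        {
            private readonly Dictionary<string, decimal> _highs = new Dictionary<string, decimal>();

            public void OnTick(Tick tick)
            {
                decimal current;
                if (!_highs.TryGetValue(tick.Symbol, out current) || tick.Price > current)
                    _highs[tick.Symbol] = tick.Price;
            }

            public decimal HighFor(string symbol)
            {
                return _highs[symbol];
            }
        }

        [TestClass]
        public class RunningHighHandlerTests
        {
            [TestMethod]
            public void Computes_running_high_from_a_replayed_stream()
            {
                // In practice this array would be loaded from a capture taken once against
                // the live feed, including fabricated error events for the rare code paths.
                var recorded = new[]
                {
                    new Tick { Symbol = "ABC", Price = 100.0m },
                    new Tick { Symbol = "ABC", Price = 102.5m },
                    new Tick { Symbol = "ABC", Price = 101.0m },
                };

                var handler = new RunningHighHandler();
                foreach (var tick in recorded)
                    handler.OnTick(tick);

                Assert.AreEqual(102.5m, handler.HighFor("ABC"));
            }
        }

    The connect-and-wait test can then be demoted to a thin smoke/integration check, while coverage of the interpretation logic (including error paths, which can be fabricated in fixtures) comes from replayed data.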

    Read the article

  • GH-Unit for unit testing Objective-C code, why am I getting linking errors?

    - by djhworld
    Hi there, I'm trying to dive into the quite frankly terrible world of unit testing using Xcode (such a convoluted process, it seems). Basically I have this test class, attempting to test my Show class: #import <GHUnit/GHUnit.h> #import "Show.h" @interface ShowTest : GHTestCase { } @end @implementation ShowTest - (void)testShowCreate { Show *s = [[Show alloc] init]; GHAssertNotNil(s,@"Was nil."); } @end However, when I try to build and run my tests it moans with this error: - Undefined symbols: "_OBJC_CLASS_$_Show", referenced from: __objc_classrefs__DATA@0 in ShowTest.o ld: symbol(s) not found collect2: ld returned 1 exit status Now I'm presuming this is a linking error. I tried following every step in the instructions located here: - http://github.com/gabriel/gh-unit/blob/master/README.md And step 2 of these instructions confused me: - In the Target 'Tests' Info window, General tab: Add a linked library, under Mac OS X 10.5 SDK section, select GHUnit.framework Add a linked library, select your project. Add a direct dependency, and select your project. (This will cause your application or framework to build before the test target.) How am I supposed to add my project to the linked library list when all it accepts is .dylib, .framework and .o files? I'm confused! Thanks for any help.

    Read the article

  • Silverlight unit testing. Error while running tests.

    - by 1gn1ter
    I'm using VS2010, Silverlight 4, NUnit 2.5.5, and TypeMock (TypemockIsolatorSetup6.0.3.619.msi). The test project implements MVVM; PeopleViewModel is the ViewModel I want to test. Please advise if you use other products for unit testing MVVM Silverlight, or please help me win this fight with TypeMock. TIA. This is the code of the test: [Test] [SilverlightUnitTest] public void SomeTestAgainstSilverlight() { PeopleViewModel o = new PeopleViewModel(); var res = o.People; Assert.AreEqual(15, res.Count()); } While running the test in ReSharper I get the following error: TestA.SomeTestAgainstSilverlight : Failed****************************************** *Loading Silverlight Isolation Aspects...* ****************************************** TEST RESULTS: --------------------------------------------- System.MissingMethodException : Method not found: 'hv TypeMock.ArrangeActAssert.Isolate.a(System.Delegate)'. at a4.a(ref Delegate A_0) at a4.a(Boolean A_0) at il.b() at CThru.Silverlight.SilverlightUnitTestAttribute.Init() at CThru.Silverlight.SilverlightUnitTestAttribute.Execute() at TypeMock.MockManager.a(String A_0, String A_1, Object A_2, Object A_3, Boolean A_4, Object[] A_5) at TypeMock.InternalMockManager.getReturn(Object that, String typeName, String methodName, Object methodParameters, Boolean isInjected) at Tests.TestA.SomeTestAgainstSilverlight() in TestA.cs: line 21 While running the test in NUnit I get: Tests.TestA.SomeTestAgainstSilverlight: System.DllNotFoundException : Unable to load DLL 'agcore': The specified module could not be found. (Exception from HRESULT: 0x8007007E) at MS.Internal.XcpImports.Application_GetCurrentNative(IntPtr context, IntPtr& obj) at MS.Internal.XcpImports.Application_GetCurrent(IntPtr& pApp) at System.Windows.Application.get_Current() at ViewModelExample.ViewModel.ViewModelBase.get_IsDesignTime() in C:\Documents and Settings\USER\Desktop\ViewModelExample\ViewModelExample\ViewModel\ViewModelBase.cs:line 20 at ViewModelExample.ViewModel.PeopleViewModel..ctor(IServiceAgent serviceAgent) in C:\Documents and Settings\USER\Desktop\ViewModelExample\ViewModelExample\ViewModel\PeopleViewModel.cs:line 28 at ViewModelExample.ViewModel.PeopleViewModel..ctor() in C:\Documents and Settings\USER\Desktop\ViewModelExample\ViewModelExample\ViewModel\PeopleViewModel.cs:line 24 at Tests.TestA.SomeTestAgainstSilverlight() in C:\Documents and Settings\USER\Desktop\ViewModelExample\Tests\TestA.cs:line 22

    Read the article

  • What is the best testing pattern for checking that parameters are being used properly?

    - by Joseph
    I'm using Rhino Mocks to try to verify that when I call a certain method, the method in turn will properly group items and then call another method. Something like this: //Arrange var bucketsOfFun = new BucketGame(); var balls = new List<IBall> { new Ball { Color = Color.Red }, new Ball { Color = Color.Blue }, new Ball { Color = Color.Yellow }, new Ball { Color = Color.Orange }, new Ball { Color = Color.Orange } }; //Act bucketsOfFun.HaveFunWithBucketsAndBalls(balls); //Assert ??? Here is where the trouble begins for me. My method is doing something like this: public void HaveFunWithBucketsAndBalls(IList<IBall> balls) { //group all the balls together according to color var blueBalls = GetBlueBalls(balls); var redBalls = GetRedBalls(balls); // you get the idea HaveFunWithABucketOfBalls(blueBalls); HaveFunWithABucketOfBalls(redBalls); // etc etc with all the different colors } public void HaveFunWithABucketOfBalls(IList<IBall> colorSpecificBalls) { //doing some stuff here that i don't care about //for the test i'm writing right now } What I want to assert is that each time I call HaveFunWithABucketOfBalls I'm calling it with a group of 1 red ball, then 1 blue ball, then 1 yellow ball, then 2 orange balls. If I can assert that behavior then I can verify that the method is doing what I want it to do, which is grouping the balls properly. Any ideas on what the best testing pattern for this would be?
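
    One low-ceremony pattern (a sketch that assumes HaveFunWithABucketOfBalls can be marked virtual, reuses the post's IBall/Ball/BucketGame types, and assumes Color comes from System.Drawing; the recording subclass is invented for illustration) is to capture each bucket the method receives and assert on the captured groups - the same effect a Rhino Mocks partial mock with argument matchers gives you, without relying on interception:

        using System.Collections.Generic;
        using System.Drawing;
        using System.Linq;
        using Microsoft.VisualStudio.TestTools.UnitTesting;

        // Hand-rolled test double: records every bucket instead of "having fun" with it.
        public class RecordingBucketGame : BucketGame
        {
            public readonly List<List<IBall>> Buckets = new List<List<IBall>>();

            public override void HaveFunWithABucketOfBalls(IList<IBall> colorSpecificBalls)
            {
                Buckets.Add(colorSpecificBalls.ToList());
            }
        }

        [TestClass]
        public class BucketGameTests
        {
            [TestMethod]
            public void HaveFunWithBucketsAndBalls_groups_balls_by_color()
            {
                var game = new RecordingBucketGame();
                var balls = new List<IBall>
                {
                    new Ball { Color = Color.Red },
                    new Ball { Color = Color.Blue },
                    new Ball { Color = Color.Yellow },
                    new Ball { Color = Color.Orange },
                    new Ball { Color = Color.Orange },
                };

                game.HaveFunWithBucketsAndBalls(balls);

                // One bucket per colour, each bucket containing only that colour,
                // and the orange bucket containing both orange balls.
                Assert.AreEqual(4, game.Buckets.Count);
                Assert.IsTrue(game.Buckets.All(b => b.Select(x => x.Color).Distinct().Count() == 1));
                Assert.AreEqual(2, game.Buckets.Single(b => b.First().Color == Color.Orange).Count);
            }
        }

    This asserts the observable grouping behaviour rather than the exact call sequence, so the test survives refactorings of how the buckets are built.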

    Read the article

  • With Google Website Optimizer's multivariate testing, can I vary multiple CSS classes on a single div?

    - by brahn
    I would like to use Google Website Optimizer (GWO)'s multivariate tests to test some different versions of a web page. I can change from version to version just by varying some class names on a div, i.e. the different versions are of this form: <div id="testing" class="foo1 bar1">content</div> <div id="testing" class="foo1 bar2">content</div> <div id="testing" class="foo2 bar1">content</div> <div id="testing" class="foo2 bar2">content</div> Ideally, I would be able to use GWO section code in place of each class, and Google would just swap in the appropriate tags (foo1 or foo2, bar1 or bar2). However, naively doing this results in horribly malformed code because I would be trying to put <script> tags inside the div's class attribute: <div id="testing" class=" <script>utmx_section("foo-class")</script>foo1</noscript> <script>utmx_section("bar-class")</script>bar1</noscript> "> content </div> And indeed, the browser chokes all over it. My current best approach is just to use a different div for each variable in the test, as follows: <script>utmx_section("foo-class-div")</script> <div class="foo1"> </noscript> <script>utmx_section("bar-class-div")</script> <div class="bar1"> </noscript> content </div> </div> So testing multiple variables requires a layer of div-nesting per variable, and it all seems rather awkward. Is there a better approach that I could use in which I just vary the classes on a single div?

    Read the article

  • What are the best practices for unit testing properties with code in the setter?

    - by nportelli
    I'm fairly new to unit testing and we are actually attempting to use it on a project. There is a property like this: public TimeSpan CountDown { get { return _countDown; } set { long fraction = value.Ticks % 10000000; value -= TimeSpan.FromTicks(fraction); if(fraction > 5000000) value += TimeSpan.FromSeconds(1); if(_countDown != value) { _countDown = value; NotifyChanged("CountDown"); } } } My test looks like this: [TestMethod] public void CountDownTest_GetSet_PropChangedShouldFire() { ManualRafflePresenter target = new ManualRafflePresenter(); bool fired = false; string name = null; target.PropertyChanged += new PropertyChangedEventHandler((o, a) => { fired = true; name = a.PropertyName; }); TimeSpan expected = new TimeSpan(0, 1, 25); TimeSpan actual; target.CountDown = expected; actual = target.CountDown; Assert.AreEqual(expected, actual); Assert.IsTrue(fired); Assert.AreEqual("CountDown", name); } The question is: how do I test the code in the setter? Do I break it out into a method? If I do, it would probably be private since no one else needs to use it, but they say not to test private methods. Do I make a class if this is the only case? Would two uses of this code make a class worthwhile? What is wrong with this code from a design standpoint, and what is correct?
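
    A common answer (a sketch, assuming you are willing to expose the value-shaping rule as its own small unit - the TimeSpanRounding class below is invented for illustration) is to pull the rounding logic out of the setter into a pure method, test that method directly with plain inputs and outputs, and keep the property-level test focused on change notification:

        using System;
        using Microsoft.VisualStudio.TestTools.UnitTesting;

        // The setter's rule in isolation: round a TimeSpan to the nearest whole second.
        public static class TimeSpanRounding
        {
            public static TimeSpan RoundToSeconds(TimeSpan value)
            {
                long fraction = value.Ticks % TimeSpan.TicksPerSecond;   // TicksPerSecond == 10000000
                value -= TimeSpan.FromTicks(fraction);
                if (fraction > TimeSpan.TicksPerSecond / 2)
                    value += TimeSpan.FromSeconds(1);
                return value;
            }
        }

        [TestClass]
        public class TimeSpanRoundingTests
        {
            [TestMethod]
            public void Rounds_down_when_fraction_is_half_a_second_or_less()
            {
                Assert.AreEqual(TimeSpan.FromSeconds(1),
                    TimeSpanRounding.RoundToSeconds(TimeSpan.FromMilliseconds(1400)));
            }

            [TestMethod]
            public void Rounds_up_when_fraction_is_more_than_half_a_second()
            {
                Assert.AreEqual(TimeSpan.FromSeconds(2),
                    TimeSpanRounding.RoundToSeconds(TimeSpan.FromMilliseconds(1600)));
            }
        }

    The CountDown setter then shrinks to value = TimeSpanRounding.RoundToSeconds(value) followed by the change check; the rounding no longer hides inside a setter, and it does not need to be private because it is a meaningful rule in its own right.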

    Read the article

  • How to set up and tear down a temporary Django db for unit testing?

    - by blokeley
    I would like to have a Python module containing some unit tests that I can pass to hg bisect --command. The unit tests are testing some functionality of a Django app, but I don't think I can use hg bisect --command manage.py test mytestapp because mytestapp would have to be enabled in settings.py, and the edits to settings.py would be clobbered when hg bisect updates the working directory. Therefore, I would like to know if something like the following is the best way to go: import functools, os, sys, unittest sys.path.append(path_to_myproject) os.environ['DJANGO_SETTINGS_MODULE'] = 'myapp.settings' def with_test_db(func): """Decorator to setup and teardown test db.""" @functools.wraps def wrapper(*args, **kwargs): try: # Set up temporary django db func(*args, **kwargs) finally: # Tear down temporary django db class TestCase(unittest.TestCase): @with_test_db def test(self): # Do some tests using the temporary django db self.fail('Mark this revision as bad.') if '__main__' == __name__: unittest.main() I should be most grateful if you could advise either (a) whether there is a simpler way, perhaps subclassing django.test.TestCase without editing settings.py, or, if not, (b) what the lines above that say "Set up temporary django db" and "Tear down temporary django db" should be.

    Read the article

  • Testing a Hibernate DAO without building the universe around it.

    - by Varun Mehta
    We have an application built using Spring/Hibernate/MySQL, and now we want to test the DAO layer, but here are a few shortcomings we face. Consider the use case of multiple objects connected to one another, e.g. Book has Pages. The Page object cannot exist without the Book, as book_id is a mandatory FK in Page. So to test a Page I have to create a Book. This simple use case is easy to manage, but once you start building a Library, you cannot test anything until you have created the whole universe surrounding the Book and Page! So to test Page: Create Library, Create Section, Create Genre, Create Author, Create Book, Create Page, now test Page. Is there an easy way to bypass this "universe creation" and just test the Page object in isolation? I also want to be able to test HQL related to Page, e.g.: SELECT new com.test.BookPage (book.id, page.name) FROM Book book, Page page. JUnit is supposed to run in isolation, so I have to write the whole test case to create the Page. Any tips will be useful.

    Read the article

  • SSIS Catalog, Windows updates and deployment failures due to System.Core mismatch

    - by jamiet
    This is a heads-up for anyone doing development on SSIS. On my current project where we are implementing a SQL Server Integration Services (SSIS) 2012 solution we recently encountered a situation where we were unable to deploy any of our projects even though we had successfully deployed in the past. Any attempt to use the deployment wizard resulted in this error dialog: The text of the error (for all you search engine crawlers out there) was: A .NET Framework error occurred during execution of user-defined routine or aggregate "create_key_information": System.IO.FileLoadException: Could not load file or assembly 'System.Core, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089' or one of its dependencies. The located assembly's manifest definition does not match the assembly reference. (Exception from HRESULT: 0x80131040) ---> System.IO.FileLoadException: The located assembly's manifest definition does not match the assembly reference. (Exception from HRESULT: 0x80131040) System.IO.FileLoadException: System.IO.FileLoadException:     at Microsoft.SqlServer.IntegrationServices.Server.Security.CryptoGraphy.CreateSymmetricKey(String algorithm)    at Microsoft.SqlServer.IntegrationServices.Server.Security.CryptoGraphy.CreateKeyInformation(SqlString algorithmName, SqlBytes& key, SqlBytes& IV) . (Microsoft SQL Server, Error: 6522) After some investigation and a bit of back and forth with some very helpful members of the SSIS product team (hey Matt, Wee Hyong) it transpired that this was due to a .Net Framework fix that had been delivered via Windows Update. I took a look at the server update history and indeed there have been some recently applied .Net Framework updates: This fix had (in the words of Matt Masson) “somehow caused a mismatch on System.Core for SQLCLR” and, as you may know, SQLCLR is used heavily within the SSIS Catalog. The fix was pretty simple – restart SQL Server. This causes the assemblies to be upgraded automatically. If you are using Data Quality Services (DQS) you may have experienced similar problems which are documented at Upgrade SQLCLR Assemblies After .NET Framework Update. I am hoping the SSIS team will follow up with a more thorough explanation on their blog soon. You DBAs out there may be questioning why Windows Update is set to automatically apply updates on our production servers. We’re checking that out with our hosting provider right now. You have been warned! @Jamiet

    Read the article

  • Extensible Metadata in Oracle IRM 11g

    - by martin.abrahams
    Another significant change in Oracle IRM 11g is that we now use XML to create the tamperproof header for each sealed document. This article explains what this means, and what benefit it offers. So, every sealed file has a metadata header that contains information about the document - its classification, its format, the user who sealed it, the name and URL of the IRM Server, and much more. The IRM Desktop and other IRM applications use this information to formulate the request for rights, as well as to enhance the user experience by exposing some of the metadata in the user interface. For example, in Windows Explorer you can see some metadata exposed as properties of a sealed file and in the mouse-over tooltip. The following image shows 10g and 11g metadata side by side. As you can see, the 11g metadata is written as XML as opposed to the simple delimited text format used in 10g. So why does this matter? The key benefit of using XML is that it creates the opportunity for sealing applications to use custom metadata. This in turn creates the opportunity for custom classification models to be defined and enforced. Out of the box, the solution uses the context classification model, in which two particular pieces of metadata form the basis of rights evaluation - the context name and the document's item code. But a custom sealing application could use some other model entirely, enabling rights decisions to be evaluated on some other basis. The integration with Oracle Beehive is a great example of this. When a user adds a document to a Beehive workspace, that document can be automatically sealed with metadata that represents the Beehive security model rather than the context model. As a consequence, IRM can enforce the Beehive security model precisely and all rights configuration can actually be managed through the Beehive UI rather than the IRM UI. In this scenario, IRM simply supports the Beehive application, seamlessly extending Beehive security to all copies of workspace documents without any additional administration. Finally, I mentioned that the metadata header is tamperproof. This is obviously to stop a rogue user modifying the metadata with a view to gaining unauthorised access - reclassifying a board document to a less sensitive classification, for example. To prevent this, the header is digitally signed and can only be manipulated by a suitably authorised sealing application.

    Read the article

  • Export all SSIS packages from msdb using Powershell

    - by jamiet
    Have you ever wanted to dump all the SSIS packages stored in msdb out to files? Of course you have, who wouldn’t? Right? Well, at least one person does because this was the subject of a thread (save all ssis packages to file) on the SSIS forum earlier today. Some of you may have already figured out a way of doing this but for those that haven’t here is a nifty little script that will do it for you and it uses our favourite jack-of-all tools … Powershell!! Imagine I have the following package folder structure on my Integration Services server (i.e. in [msdb]): There are two packages in there called “20110111 Chaining Expression components” & “Package”, I want to export those two packages into a folder structure that mirrors that in [msdb]. Here is the Powershell script that will do that: Param($SQLInstance = "localhost") #####Add all the SQL goodies (including Invoke-Sqlcmd)##### add-pssnapin sqlserverprovidersnapin100 -ErrorAction SilentlyContinue add-pssnapin sqlservercmdletsnapin100 -ErrorAction SilentlyContinue cls $Packages = Invoke-Sqlcmd -MaxCharLength 10000000 -ServerInstance $SQLInstance -Query "WITH cte AS ( SELECT cast(foldername as varchar(max)) as folderpath, folderid FROM msdb..sysssispackagefolders WHERE parentfolderid = '00000000-0000-0000-0000-000000000000' UNION ALL SELECT cast(c.folderpath + '\' + f.foldername as varchar(max)), f.folderid FROM msdb..sysssispackagefolders f INNER JOIN cte c ON c.folderid = f.parentfolderid ) SELECT c.folderpath,p.name,CAST(CAST(packagedata AS VARBINARY(MAX)) AS VARCHAR(MAX)) as pkg FROM cte c INNER JOIN msdb..sysssispackages p ON c.folderid = p.folderid WHERE c.folderpath NOT LIKE 'Data Collector%'" Foreach ($pkg in $Packages) { $pkgName = $Pkg.name $folderPath = $Pkg.folderpath $fullfolderPath = "c:\temp\$folderPath\" if(!(test-path -path $fullfolderPath)) { mkdir $fullfolderPath | Out-Null } $pkg.pkg | Out-File -Force -encoding ascii -FilePath "$fullfolderPath\$pkgName.dtsx" } To run it simply change the “localhost” parameter to the server you want to connect to, either by editing the script or passing it in when the script is executed. It will create the folder structure in C:\Temp (which you can also easily change if you so wish – just edit the script accordingly). Here’s the folder structure that it created for me: Notice how it is a mirror of the folder structure in [msdb]. Hope this is useful! @Jamiet UPDATE: This post prompted Chad Miller to write a post describing his Powershell add-in that utilises an SSIS API to do exporting of packages. Go take a read here: http://sev17.com/2011/02/importing-and-exporting-ssis-packages-using-powershell/

    Read the article

  • DTLoggedExec 1.1.2008.4 Released!

    - by Davide Mauri
    Today I've released the latest version of my DTExec replacement tool, DTLoggedExec. The main changes are the following: a new strategy for version numbers, which will now follow the pattern Major.Minor.TargetSQLServerVersion.Revision; added support for Auto Configurations; fixed a bug that reported an incorrect number of errors and warnings to Log Providers; fixed a bug that prevented correct casting of values when using the /Set and /Param options; errors and warnings are now counted more precisely; updated database and log import scripts to categorize logs by projects and sections (e.g. Project: MyBIProject; Sections: Staging, Datawarehouse); removed unused report stored procedures from the database; updated samples, with 12 samples now available to show ALL DTLoggedExec features; and from this version only SSIS 2008 will be supported. http://dtloggedexec.codeplex.com/releases/view/62218 It is useful to say something more on a couple of specific points: From this version only SSIS 2008 will be supported: Yes, Integration Services 2005 is not supported anymore. The latest version capable of running SSIS 2005 packages is 1.0.0.2. Updated database and log import scripts to categorize logs by projects and sections: When you import a log file, you can now assign it to a Project and to a Section of that project. In this way it's easier to gather statistical information for an entire project or a subsection of it. This also allows you to store logged data of packages belonging to different projects in the same database. Updated samples: A complete set of samples that shows how to use all DTLoggedExec features is now shipped with the product. Enjoy! Added support for Auto Configurations: This point will have a post of its own, since it's quite important and is by far the biggest new feature introduced in this release. To explain it in a few words: you don't need to waste time with complex DTS configuration files or options, since a package will configure itself automatically. You just need to write a single statement as a parameter for DTLoggedExec. This feature can simplify deployment *a lot* :) In the next few days I'll write the mentioned post on Auto-Configurations and I'll update the documentation available on the DTLoggedExec website: http://dtloggedexec.davidemauri.it/MainPage.ashx

    Read the article

  • How to install 32-bit libraries using Debian Testing

    - by bgoodr
    Question: What is the way to determine, ahead of time and without doing a full install of 64-bit Debian Testing NETINST, when Debian Testing has 32-bit libraries available, fully working and installable, so that the following command works without broken package errors?: apt-get install ia32-libs ia32-libs-gtk The errors that occur when the 32-bit libraries are not available (or are in some broken state) are detailed below. I have already concluded that "Just install Stable" is my stop-gap measure for now, but I would like to know the answer to the above question so as to avoid a lengthy installation process only to run into these problems at the very end. Details: I downloaded the 64-bit Debian Testing netinst a couple of days ago. This was "Jessie" built 20131014-06:07 via http://tinyurl.com/lejpa. This is a weekly testing build. Yes, I know I should expect problems, and I did. I managed to get it completely installed and was able to get into GNOME, but not get past the 32-bit library problem. The problem starts when I attempt to install the 32-bit libraries via: apt-get install ia32-libs ia32-libs-gtk which returns: root@breath:~# apt-get install ia32-libs ia32-libs-gtk Reading package lists... Done Building dependency tree Reading state information... Done Some packages could not be installed. This may mean that you have requested an impossible situation or if you are using the unstable distribution that some required packages have not yet been created or been moved out of Incoming. The following information may help to resolve the situation: The following packages have unmet dependencies: ia32-libs : Depends: ia32-libs-i386 but it is not installable ia32-libs-gtk : Depends: ia32-libs-i386 but it is not installable Depends: ia32-libs-gtk-i386 but it is not installable E: Unable to correct problems, you have held broken packages. I then found an old (2012 is old to me) answer at ia32-libs : Depends: ia32-libs-i386 but it is not installable and even tried what they suggested there, which was dpkg --add-architecture i386 apt-get update After executing the above, I tried again but got: root@breath:~# apt-get install ia32-libs ia32-libs-gtk Reading package lists... Done Building dependency tree Reading state information... Done Some packages could not be installed. This may mean that you have requested an impossible situation or if you are using the unstable distribution that some required packages have not yet been created or been moved out of Incoming. The following information may help to resolve the situation: The following packages have unmet dependencies: ia32-libs : Depends: ia32-libs-i386 ia32-libs-gtk : Depends: ia32-libs-i386 E: Unable to correct problems, you have held broken packages. root@breath:~# And then tried this: root@breath:~# dpkg --get-selections | grep hold And that returned nothing. Not only are there broken packages, the system doesn't even know which packages are broken, so Debian Stable is the only solution I know of right now. Hence my question above.

    Read the article

  • How to manage two separate testing teams using different test tracking tools

    - by newuser
    I have two independent testing teams currently testing the same application. One team is using ClearQuest, and the other is using Mantis. It has been a huge effort to manage all of the duplicate bug reports. What options would improve this situation? My constraint is that the ClearQuest team will not change test reporting tools, and migrating the other team to ClearQuest would also come with a large training effort.

    Read the article

  • Inequality joins, Asynchronous transformations and Lookups : SSIS

    - by jamiet
    It is pretty much accepted by SQL Server Integration Services (SSIS) developers that synchronous transformations are generally quicker than asynchronous transformations (for a description of synchronous and asynchronous transformations go read Asynchronous and synchronous data flow components). Notice I said “generally” and not “always”; there are circumstances where using asynchronous transformations can be beneficial and in this blog post I’ll demonstrate such a scenario, one that is pretty common when building data warehouses. Imagine I have a [Customer] dimension table that manages information about all of my customers as a slowly-changing dimension. If that is a type 2 slowly changing dimension then you will likely have multiple rows per customer in that table. Furthermore you might also have datetime fields that indicate the effective time period of each member record. Here is such a table that contains data for four dimension members {Terry, Max, Henry, Horace}: Notice that we have multiple records per customer and that the [SCDStartDate] of a record is equivalent to the [SCDEndDate] of the record that preceded it (if there was one). (Note that I am on record as saying I am not a fan of this technique of storing an [SCDEndDate] but for the purposes of clarity I have included it here.) Anyway, the idea here is that we will have some incoming data containing [CustomerName] & [EffectiveDate] and we need to use those values to lookup [Customer].[CustomerId]. The logic will be: Lookup a [CustomerId] WHERE [CustomerName]=[CustomerName] AND [SCDStartDate] <= [EffectiveDate] AND [EffectiveDate] <= [SCDEndDate] The conventional approach to this would be to use a full cached lookup but that isn’t an option here because we are using inequality conditions. The obvious next step then is to use a non-cached lookup which enables us to change the SQL statement to use inequality operators: Let’s take a look at the dataflow: Notice these are all synchronous components. This approach works just fine however it does have the limitation that it has to issue a SQL statement against your lookup set for every row thus we can expect the execution time of our dataflow to increase linearly in line with the number of rows in our dataflow; that’s not good. OK, that’s the obvious method. Let’s now look at a different way of achieving this using an asynchronous Merge Join transform coupled with a Conditional Split. I’ve shown it post-execution so that I can include the row counts which help to illustrate what is going on here: Notice that there are more rows output from our Merge Join component than on the input. That is because we are joining on [CustomerName] and, as we know, we have multiple records per [CustomerName] in our lookup set. Notice also that there are two asynchronous components in here (the Sort and the Merge Join). I have embedded a video below that compares the execution times for each of these two methods. The video is just over 8minutes long. View on Vimeo  For those that can’t be bothered watching the video I’ll tell you the results here. The dataflow that used the Lookup transform took 36 seconds whereas the dataflow that used the Merge Join took less than two seconds. An illustration in case it is needed: Pretty conclusive proof that in some scenarios it may be quicker to use an asynchronous component than a synchronous one. Your mileage may of course vary. The scenario outlined here is analogous to performance tuning procedural SQL that uses cursors. 
It is common to eliminate cursors by converting them to set-based operations and that is effectively what we have done here. Our non-cached lookup is performing a discrete operation for every single row of data, exactly like a cursor does. By eliminating this cursor-in-disguise we have dramatically sped up our dataflow. I hope all of that proves useful. You can download the package that I demonstrated in the video from my SkyDrive at http://cid-550f681dad532637.skydrive.live.com/self.aspx/Public/BlogShare/20100514/20100514%20Lookups%20and%20Merge%20Joins.zip Comments are welcome as always. @Jamiet

    Read the article

  • Required Parameters [SSIS Denali]

    - by jamiet
    SQL Server Integration Services (SSIS) in its 2005 and 2008 incarnations expects you to set property values within your package at runtime using Configurations. SSIS developers tend to have rather a lot of issues with SSIS configurations; in this blog post I am going to highlight one of those problems and how it has been alleviated in SQL Server code-named Denali.   A configuration is a property path/value pair that exists outside of a package, typically within SQL Server or in a collection of one or more configurations in a file called a .dtsConfig file. Within the package one defines a pointer to a configuration that says to the package “When you execute, go and get a configuration value from this location” and if all goes well the package will fetch that configuration value as it starts to execute and you will see something like the following in your output log: Information: 0x40016041 at Package: The package is attempting to configure from the XML file "C:\Configs\MyConfig.dtsConfig". Unfortunately things DON’T always go well, perhaps the .dtsConfig file is unreachable or the name of the SQL Server holding the configuration value has been defined incorrectly – any one of a number of things can go wrong. In this circumstance you might see something like the following in your log output instead: Warning: 0x80012014 at Package: The configuration file "C:\Configs\MyConfig.dtsConfig" cannot be found. Check the directory and file name. The problem that I want to draw attention to here though is that your package will ignore the fact it can’t find the configuration and execute anyway. This is really really bad because the package will not be doing what it is supposed to do and worse, if you have not isolated your environments you might not even know about it. Can you imagine a package executing for months and all the while inserting data into the wrong server? Sounds ridiculous but I have absolutely seen this happen and the root cause was that no-one picked up on configuration warnings like the one above. Happily in SSIS code-named Denali this problem has gone away as configurations have been replaced with parameters. Each parameter has a property called ‘Required’: Any parameter with Required=True must have a value passed to it when the package executes. Any attempt to execute the package without supplying a value will result in an error. Here we see that error when attempting to execute using the SSMS UI: and similarly when executing using T-SQL: Error is: Msg 27184, Level 16, State 1, Procedure prepare_execution, Line 112 In order to execute this package, you need to specify values for the required parameters.   As you can see, SSIS code-named Denali has mechanisms built in to prevent the problem I described at the top of this blog post. Specifying a parameter as Required means that any packages in that project cannot execute until a value for the parameter has been supplied. This is a very good thing. I am loath to make recommendations so early in the development cycle but right now I’m thinking that all Project Parameters should have Required=True, certainly any that are used to define external locations should anyway. @Jamiet

    Read the article

  • How do you test a command object in a grails controller integration test?

    - by egervari
    I'm new to grails. How do I test a form command object to make sure that it's working? Here's some setup code in a test. When I try to do it, I get the following exceptions: Error occurred creating command object. org.codehaus.groovy.grails.web.servlet.mvc.exceptions.ControllerExecutionException: Error occurred creating command object. at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) .... Caused by: groovy.lang.MissingPropertyException: No such property: password for class: project.user.RegistrationForm Possible solutions: password Here is my test case. As you can see, I set "password" on the params map... void testSaveWhenDataIsCorrect() { controller.params.emailAddress = "[email protected]" controller.params.password = "secret" controller.params.confirmPassword = "secret" controller.save() assertEquals "success", redirectArgs.view ... } Here's the controller action, that adds the command object as a closure parameter: def save = { RegistrationForm form -> if(form.hasErrors()) { render view: "create", model: [form: form] } else { def user = new User(form.properties) user.password = form.encryptedPassword if(user.save()) { redirect(action: "success") } else { render view: "create", model: [form: form] } } } Here's the command object itself... and note that it DOES have a "password" field... class RegistrationForm { def springSecurityService String emailAddress String password String confirmPassword String getEncryptedPassword() { springSecurityService.encodePassword(password) } static constraints = { emailAddress(blank: false, email: true) password(blank: false, size:4..10) confirmPassword(blank: false, validator: { password != confirmPassword }) } } I'm totally lost in the non-intuitive way to do controllers... Please help.

    Read the article

  • TFS and Sharepoint integration

    - by pho3nix
    I am using SharePoint 2007 and TFS 2010. I installed everything with reports and SharePoint integration, but when I try to create a Team Project Collection it always returns this warning: Server was unable to process request. --- TF250029: No user was found within the context of a Web site. Verify that the site does not allow anonymous access. And in the SharePoint Web Applications window of the TFS console, when I check my SharePoint portal /sites it returns an error saying it is blocked by a firewall or the server extensions are not installed, but everything looks OK to me. Can anyone help me, please?

    Read the article

  • Integration of Tomcat 7 with IIS 7

    - by priya
    After following all the steps for integrating Tomcat 7 and IIS 7, I am getting the error below. Any idea what might be the cause? The first time I did all the steps as described in the tutorial my site came up, then suddenly it stopped coming up. I removed my site from IIS again and repeated the steps, but now the error below appears every time: HTTP Error 500.0 - Internal Server Error. The page cannot be displayed because an internal server error has occurred. Detailed Error Information: Module IsapiFilterModule, Notification AuthenticateRequest, Handler StaticFile, Error Code 0x80070001, Physical Path D:\New\IISROOT, Logon Method Anonymous, Logon User Anonymous, Failed Request Tracing Log Directory D:\New\Tomcat\logs. I have checked the logs, but the trace log is not created, nor is isapi_redirect.log.

    Read the article

  • Changing LDAP schema renders Confluence AD integration inoperable

    - by Maxim V. Pavlov
    We had our instance of Atlassian Confluence configured to integrate with our Active Directory. In AD, all the users were being created under the default Users folder in Active Directory Users and Computers. We decided to introduce cleaner separation and created an Organizational Unit structure in AD: under the root we created a Managed OU, under it a Users OU, and all user accounts were moved into that Users OU. Now, I thought that to let the Confluence AD integration engine "know" where to look for user accounts, I only needed to adjust the BaseDN and prepend ou=Managed to it, so that it would look for cn=Users but under ou=Managed. That didn't work. How should I adjust the LDAP search root in a client application so that it can find users in an OU rather than in the default folder?
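
    A quick way to confirm which base DN the directory actually expects (a sketch using System.DirectoryServices from any domain-joined Windows machine; the DC components below are placeholders for your own domain) is to search the candidate DN and see whether the moved accounts come back. Note in particular that the default Users folder is a container (CN=Users,...), whereas the new structure is made of organizational units (OU=Users,OU=Managed,...), so a BaseDN that still references cn=Users under ou=Managed will not match anything:

        using System;
        using System.DirectoryServices;   // add a reference to System.DirectoryServices.dll

        class SearchBaseCheck
        {
            static void Main()
            {
                // Candidate search base after the reorganisation (placeholder domain).
                var searchBase = new DirectoryEntry("LDAP://OU=Users,OU=Managed,DC=example,DC=com");

                var searcher = new DirectorySearcher(searchBase)
                {
                    Filter = "(&(objectCategory=person)(objectClass=user))",
                    SearchScope = SearchScope.Subtree
                };
                searcher.PropertiesToLoad.Add("sAMAccountName");

                // If this prints the moved accounts, this DN is the one the client
                // application's user search should use as its base DN.
                foreach (SearchResult result in searcher.FindAll())
                    Console.WriteLine(result.Properties["sAMAccountName"][0]);
            }
        }

    The same check can be done with any LDAP browser; the point is to verify the exact DN (and the CN-versus-OU distinction) before adjusting the Confluence directory settings.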

    Read the article

  • Tomcat SSL integration issue

    - by small_ticket
    Hi all, I've bought a wildcard SSL certificate from a company; I sent them the CSR file and they sent me two certificate files, namely CA.txt and com_sertificate. I've searched the web and found some tutorials about Tomcat and SSL, but I cannot get anywhere with these two files - all of those tutorials mention different files that I don't have. (I asked the company I bought the certificates from about this process, but they said they have no knowledge of Tomcat integration.) Does anyone have an idea about this? P.S. I'm using Ubuntu 8.04 Server, Java 1.6 and Tomcat 6.

    Read the article

  • T-4 Templates for ASP.NET Web Form Databound Control Friendly Logical Layers

    - by joycsharp
    I just released an open source project at CodePlex, which includes a set of T-4 templates to enable you to build logical layers (i.e. DAL/BLL) with just a few clicks! The logical layers implemented here are based on Entity Framework 4.0, are friendly to ASP.NET Web Form data-bound controls, and are fully unit testable. In this open source project you will get Entity Framework 4.0-based T-4 templates for the following types of logical layers: Data Access Layer: Entity Framework 4.0 provides an excellent ORM data access layer. It also includes support for T-4 templates as a built-in code generation strategy in Visual Studio 2010, where we can customize the default structure of the Entity Framework-based data access layer. That default structure has been enhanced to support mock testing of the Entity Framework 4.0 object model. Business Logic Layer: an ASP.NET Web Form data-bound-control-friendly business logic layer, which will enable you to build data-bound web applications on top of ASP.NET Web Forms and Entity Framework 4.0 quickly, in a few clicks, with great support for mock testing. Download it to make your web development productive. Enjoy!
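
    To make "data-bound-control friendly and fully unit testable" concrete, here is a minimal sketch of the shape such a layer typically takes (illustrative names only; this is not the templates' actual generated output): the business logic class exposes simple list-returning methods that an ObjectDataSource or code-behind can call, while the data access sits behind an interface so tests can substitute an in-memory fake for the Entity Framework-backed implementation.

        using System.Collections.Generic;
        using System.Linq;

        public class Customer
        {
            public int Id { get; set; }
            public string Name { get; set; }
        }

        // Seam for mock testing: in production an Entity Framework 4.0
        // ObjectContext-backed repository implements this; tests supply a fake.
        public interface ICustomerRepository
        {
            IQueryable<Customer> Query();
            void Add(Customer customer);
            void Save();
        }

        // Business logic layer class that Web Form data-bound controls can consume.
        public class CustomerLogic
        {
            private readonly ICustomerRepository _repository;

            public CustomerLogic(ICustomerRepository repository)
            {
                _repository = repository;
            }

            public List<Customer> GetCustomers()
            {
                return _repository.Query().OrderBy(c => c.Name).ToList();
            }

            public void AddCustomer(Customer customer)
            {
                _repository.Add(customer);
                _repository.Save();
            }
        }

    A unit test constructs CustomerLogic with a fake repository backed by a List<Customer>, while the ASP.NET page binds the same class to its controls with the real Entity Framework repository behind it.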

    Read the article
