Search Results

Search found 39118 results on 1565 pages for 'boost unit test framework'.

  • How do big teams work with a database?

    - by Michael Riva
    Let's say I have a team of 20 developers, and we are building a project on .NET. Each team member can easily create the tables for the module they are working on, and we are thinking of using an ORM. Can you tell me which ORM tools work well for a team, and how to use them? Is there a proven way of doing this? I'm asking because I have never worked on a team before, so I don't know the best practices. What kind of patterns do you use? I really wonder. The team members can write their unit tests and apply the necessary design patterns. What kind of approach do I need to manage the team, and which ORM tools should we use?

  • Unit Testing Example on a class with required fields?

    - by Mastro
    I'm new to writing unit tests and I can't find an example of how to test an existing class I have. The class has a Save method which does an insert or update in the database when the user clicks Save in the UI. Some of the fields are required and others are not. So how can I test this properly? I was trying to write it out in a given/when/then style:

        Given a user
        When the user saves the object
        Then Field1 is required
        Then Field2 is required
        Then Field3 is required

        WhenUserSavesObject()
            object = new object
            object.field1 IsNot Nothing

    Something like that, right? And what about the fields that are optional? How would I test that the Save method takes all those values properly? I was trying to use BDD but I'm not sure if I should. I can't find any examples of testing a class with many properties that are needed when calling the method under test.
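    A minimal sketch of how such tests might look in NUnit (the Order class, its field names, and the exception type are invented for illustration; the real class and its validation behaviour would differ):

        using System;
        using NUnit.Framework;

        [TestFixture]
        public class OrderSaveTests
        {
            [Test]
            public void Save_WithAllRequiredFieldsPopulated_Succeeds()
            {
                // Arrange: populate every required field.
                var order = new Order { CustomerName = "Acme", Quantity = 1 };

                // Act + Assert: a fully populated object saves without error.
                Assert.DoesNotThrow(() => order.Save());
            }

            [Test]
            public void Save_WithMissingRequiredField_Throws()
            {
                // CustomerName (required) is deliberately left unset.
                var order = new Order { Quantity = 1 };

                Assert.Throws<InvalidOperationException>(() => order.Save());
            }
        }

    Note that if Save() really hits the database, these are integration tests; to keep them unit tests, the data access behind Save() would normally be stubbed out behind an interface.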

  • Guide to reduce TFS database growth using the Test Attachment Cleaner

    - by terje
    Recently there have been several reports of TFS databases growing too fast and too big. Notably, this has been observed when one starts to use more features of the testing system. Also, TFS 2010 handles test results differently from TFS 2008, and this leads to more data being stored in the TFS databases. As a consequence, some tools have been released to remove unneeded data from the database, along with some fixes to correct bugs which were found during this process. Further, some preventive practices and maintenance rules should be adopted. A lot of people have blogged about this, among them:
    - Anu's very important blog post here describes both the problem and solutions to handle it. She describes both the Test Attachment Cleaner tool and some QFE/CU releases that fix underlying bugs which prevented the tool from being fully effective.
    - Brian Harry's blog post here describes the problem too.
    - This forum thread describes the problem with some solution hints.
    - Ravi Shanker's blog post here describes best practices on solving this (TBP).
    - Grant Holliday's blog post here describes strategies for using the Test Attachment Cleaner, both to detect space problems and to rectify them.

    The problem can be divided into the following areas:
    - Publishing of test results from builds
    - Publishing of manual test results, and their attachments in particular
    - Publishing of deployment binaries for use during a test run
    - Bugs in SQL Server preventing total cleanup of data

    (All the published data above goes into the TFS database as attachments.) The test results include all data collected during the run. Some of this data can grow rather large, like IntelliTrace logs and video recordings. The pushing of binaries, which happens for automated test runs – including tests run during a build using code coverage, which will include all the files in the deployment folder – also contributes a lot to the size of the attached data.

    In order to handle this systematically, I have set up a 3-stage process:
    1. Find out if you have a database space issue.
    2. Set up your TFS server to minimize potential database issues.
    3. If you have the problem, clean up the database, and otherwise keep it clean.

    Analyze the data. Are your database(s) growing? Are unused test results growing out of proportion? To find out, you need to query your TFS database for some of the information, and use the Test Attachment Cleaner (TAC) to obtain some more detailed information. If you don't have too many databases, you can use the SQL Server reports from within Management Studio to analyze the database and table sizes. Or you can use a set of queries. I often find queries faster to use because I can tweak them the way I want, but be aware that these queries are undocumented and unsupported and may change when the product team wants to change them. If you have multiple project collections, find out which might have problems. (Disclaimer: the queries below work on TFS 2010. They will not work on Dev-11, since the table structure has been changed. I will try to update them for Dev-11 when it is released.) Open a SQL Management Studio session on the SQL Server hosting your TFS databases, and use the query below to find the project collection databases and their sizes, in descending size order.
    use master
    select DB_NAME(database_id) AS DBName, (size/128) SizeInMB
    FROM sys.master_files
    where type=0 and substring(db_name(database_id),1,4)='Tfs_' and DB_NAME(database_id)<>'Tfs_Configuration'
    order by size desc

    Doing this on one of our SQL servers gives the following results: it is pretty easy to see which collection to start the work on.

    Find out which tables are possibly too large. Keep a special watch out for the Tfs_Attachment table. Use the script at the bottom of Grant's blog to find the table sizes in descending size order. In our case we got this result: from Grant's blog we learnt that tbl_Content is in the version control category, so the only big issue we have here is tbl_AttachmentContent.

    Find out which team projects have possibly too large attachments. In order to use the TAC to find and eventually delete attachment data, we need to find out which team projects have these attachments; the team project is a required parameter to the TAC. Use the following query, replacing the collection database name with whatever applies in your case:

    use Tfs_DefaultCollection
    select p.projectname, sum(a.compressedlength)/1024/1024 as sizeInMB
    from dbo.tbl_Attachment as a
    inner join tbl_testrun as tr on a.testrunid=tr.testrunid
    inner join tbl_project as p on p.projectid=tr.projectid
    group by p.projectname
    order by sum(a.compressedlength) desc

    In our case we got this result (I had to remove some names), out of more than 100 team projects accumulated over quite some years: as can be seen, it is pretty obvious that "Byggtjeneste – Projects" is the main team project to take care of, with the ones on lines 2-4 as the next ones.

    Check which attachment types take up the most space. It can be nice to know which attachment types take up the space, so run the following query:

    use Tfs_DefaultCollection
    select a.attachmenttype, sum(a.compressedlength)/1024/1024 as sizeInMB
    from dbo.tbl_Attachment as a
    inner join tbl_testrun as tr on a.testrunid=tr.testrunid
    inner join tbl_project as p on p.projectid=tr.projectid
    group by a.attachmenttype
    order by sum(a.compressedlength) desc

    We then got this result: from this it is pretty obvious that the problem here is the binary files, as also mentioned in Anu's blog.

    Check which file types, by their extension, take up the most space. Run the following query:

    use Tfs_DefaultCollection
    select SUBSTRING(filename,len(filename)-CHARINDEX('.',REVERSE(filename))+2,999) as Extension, sum(compressedlength)/1024 as SizeInKB
    from tbl_Attachment
    group by SUBSTRING(filename,len(filename)-CHARINDEX('.',REVERSE(filename))+2,999)
    order by sum(compressedlength) desc

    This gives a result like this: now you should have collected enough information to tell you what to do – if you have to do something at all – and some of the information you need in order to set up your TAC settings file, both for a clean-up and for scheduled maintenance later.

    Get your TFS server and environment properly set up. Whether or not you already have the problem, you should ensure the TFS server is set up so that the risk of getting into this problem is minimized. To ensure this you should install the following set of updates and components. The assumption is that your TFS server is at SP1 level. Install the QFE for KB2608743 – which also contains detailed instructions on its use; download it from here. The QFE changes the default settings to not upload deployed binaries, which are used in automated test runs.
    Binaries will still be uploaded if:
    - Code coverage is enabled in the test settings.
    - You change UploadDeploymentItem to true in the testsettings file. Be aware that this might be reset back to false by another user who hasn't installed this QFE.

    The hotfix should be installed on:
    - the build servers (the build agents)
    - the machine hosting the Test Controller
    - local development computers (Visual Studio)
    - local test computers (MTM)

    It is not required to install it on the TFS server, the test agents or the build controller – it has no effect on these programs.

    If you use SQL Server 2008 R2 you should also install CU 10 (or later). This CU fixes a potential problem of hanging "ghost" files. This seems to happen only in certain trigger situations, but to ensure it doesn't bite you, it is better to make sure this CU is installed. There is no such CU for SQL Server 2008 pre-R2. Workaround: if you suspect hanging ghost files, they can – with some mental effort – be deduced from the ghost counters using the following SQL query:

    use master
    SELECT DB_NAME(database_id) as 'database', OBJECT_NAME(object_id) as 'objectname',
           index_type_desc, ghost_record_count, version_ghost_record_count,
           record_count, avg_record_size_in_bytes
    FROM sys.dm_db_index_physical_stats (DB_ID(N'<DatabaseName>'), OBJECT_ID(N'<TableName>'), NULL, NULL, 'DETAILED')

    The problem is a stalled ghost cleanup process. The fix is to stop all components that depend on the SQL Server, like the TFS Server and SPS services – that is, all applications that connect to it – then restart the SQL Server, and finally start up all dependent processes again. (I would guess a complete server reboot would do the trick too.) After this the ghost cleanup process will run properly again. The permanent fix will come in the next CU cycle for SQL Server R2 SP1. The R2 pre-SP1 and R2 SP1 releases have separate maintenance cycles and are maintained individually; each has its own set of CUs. When it comes I will add the link to that CU here. The "hanging ghost file" issue came up after one had run the TAC and deleted enormous amounts of data; the SQL Server can get into this hanging state (without the QFE) in certain cases due to this.

    And of course, install and set up the Test Attachment Cleaner command line power tool. This should be done following some guidelines from Ravi Shanker: "When you run TAC, ensure that you are deleting small chunks of data at regular intervals (say run TAC every night at 3AM to delete data that is between age 730 to 731 days) – this will ensure that small amounts of data are being deleted and SQL ghosted record cleanup can catch up with the number of deletes performed." This rule minimizes the risk of the ghosted-hang problem occurring, and further makes it easier for the SQL Server ghosting process to work smoothly. "Run DBCC SHRINKDB post the ghosted records are cleaned up to physically reclaim the space on the file system." This is the last step in a 3-step process of removing SQL Server data: first the records are logically deleted, then they are cleaned out by the ghosting process, and finally the space is reclaimed using the shrink command.

    Cleaning out the attachments. The TAC is run from the command line using a set of parameters and controlled by a settings file. The parameters point to a server URI, including the team project collection, and also to a specific team project. So in order to run this for multiple team projects regularly, one has to set up a script that runs the TAC multiple times, once for each team project.
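    As a rough illustration of such a script, the hypothetical C# runner below shells out to the TAC once per team project (the collection URL, project names, and paths are placeholders; the arguments follow the command line shown later in this post):

        using System.Diagnostics;
        using System.IO;

        class TacRunner
        {
            static void Main()
            {
                // Placeholders - substitute your own collection and team projects.
                var collection = "http://tfsserver:8080/tfs/DefaultCollection";
                var projects = new[] { "ProjectA", "ProjectB", "ProjectC" };

                foreach (var project in projects)
                {
                    var log = Path.Combine(Path.GetTempPath(), project + ".tcmpt.log");
                    var psi = new ProcessStartInfo
                    {
                        FileName = @"C:\Tools\TAC\tcmpt.exe",
                        Arguments = "attachmentcleanup" +
                                    " /collection:" + collection +
                                    " /teamproject:\"" + project + "\"" +
                                    @" /settingsfile:C:\Tools\TAC\settings.xml" +
                                    " /outputfile:" + log +
                                    " /mode:delete",
                        UseShellExecute = false
                    };
                    using (var p = Process.Start(psi))
                    {
                        p.WaitForExit(); // run one team project at a time
                    }
                }
            }
        }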
    When you install the TAC there is a very useful readme file in the same directory. When the deployment binaries are published to the TFS server, ALL items are published up from the deployment folder. That often means many more files than you would assume are necessary. This is a brute-force technique: it works, but you need to take care when cleaning up. Grant has shown what his settings file looks like in his blog post, removing all attachments older than 180 days, as long as there are no active work items connected to them. This setting can be useful to clean out all items, both in a one-off clean-up operation and in general scheduled maintenance. There are two scenarios we need to consider:
    1. Cleaning up an existing overgrown database.
    2. Maintaining a server to avoid an overgrown database, using a scheduled TAC.

    1. Cleaning up a database which has grown too big due to these attachments. This job is a "once" job: we do it once and then move on to make sure it won't happen again, by taking the actions in 2) below. In this scenario you should only consider the large files. Your goal should be simply to reduce the size; don't bother about the smaller stuff, which can be left to a scheduled TAC clean-up (2 below). Here you can use a very general settings file and just remove the large attachments, or you can choose to remove any old items – Grant's settings file is an example of the latter. A settings file to remove only large attachments could look like this:

    <!-- Scenario : Remove large files -->
    <DeletionCriteria>
      <TestRun />
      <Attachment>
        <SizeInMB GreaterThan="10" />
      </Attachment>
    </DeletionCriteria>

    Or, if you want to remove only dll's and pdb's above that size, add an Extensions section – without that section, attachments with any extension will be deleted:

    <!-- Scenario : Remove large files of type dll's and pdb's -->
    <DeletionCriteria>
      <TestRun />
      <Attachment>
        <SizeInMB GreaterThan="10" />
        <Extensions>
          <Include value="dll" />
          <Include value="pdb" />
        </Extensions>
      </Attachment>
    </DeletionCriteria>

    Before you start up your scheduled maintenance, you should clear out all older items.

    2. Scheduled maintenance using the TAC. Run a schedule every night that removes old items in small batches. It is important to run this often, like every night, in order to keep the number of deleted items low; that way the SQL ghost process works better. One approach could be to delete all items older than some number of days, let's say 180. This could be combined with a restriction to keep attachments that have active or resolved bugs linked to them. Doing this every night ensures that only small amounts of data are deleted.

    <!-- Scenario : Remove old items except if they have active or resolved bugs -->
    <DeletionCriteria>
      <TestRun>
        <AgeInDays OlderThan="180" />
      </TestRun>
      <Attachment />
      <LinkedBugs>
        <Exclude state="Active" />
        <Exclude state="Resolved"/>
      </LinkedBugs>
    </DeletionCriteria>

    In my experience there are projects which are left with active or resolved work items although no further work is being done. It can be wise to have a clean-up process with no restrictions on linked bugs at all; note that you then have to remove the whole LinkedBugs section. An approach which could work better here is a two-step one: use the schedule above with no LinkedBugs section as a sweeper task, taking away all data older than you could possibly care about, and then have another scheduled TAC task to more specifically take out the attachments that you are not likely to use.
    This task could be much more specific and, based on your analysis, clean out what you know is troublesome data:

    <!-- Scenario : Remove specific files early -->
    <DeletionCriteria>
      <TestRun>
        <AgeInDays OlderThan="30" />
      </TestRun>
      <Attachment>
        <SizeInMB GreaterThan="10" />
        <Extensions>
          <Include value="iTrace"/>
          <Include value="dll"/>
          <Include value="pdb"/>
          <Include value="wmv"/>
        </Extensions>
      </Attachment>
      <LinkedBugs>
        <Exclude state="Active" />
        <Exclude state="Resolved" />
      </LinkedBugs>
    </DeletionCriteria>

    The readme document for the TAC says that it recognizes "internal" extensions, but in fact it accepts any extension. To run the tool, use the following command:

    tcmpt attachmentcleanup /collection:your_tfs_collection_url /teamproject:your_team_project /settingsfile:path_to_settingsfile /outputfile:%temp%/teamproject.tcmpt.log /mode:delete

    Shrinking the database. You could run a shrink database command after the TAC has run in cases where a lot of data has been deleted. In that case you SHOULD do it, to free up all that space. But after the shrink operation you should rebuild the indexes, since the shrink operation will leave the database in a very fragmented state, which will reduce performance. Note that you need to rebuild the indexes; reorganizing is not enough. For smaller amounts of data you should NOT shrink the database, since the space will be reused by the SQL Server when it needs to add more records. In fact, it is regarded as a bad practice to shrink the database regularly, so on a daily maintenance schedule you should NOT shrink the database. To shrink the database you run a DBCC SHRINKDATABASE command and then follow up by rebuilding the indexes. I find the easiest way to do this is to create a SQL maintenance plan including the Shrink Database Task and the Rebuild Index Task, and just execute it when you need to.

  • What should be tested in JavaScript?

    - by Nathan Hoad
    At work, we've just started on a heavily JavaScript-based application (actually using CoffeeScript, but still), for which I've been implementing an automated test system using JsTestDriver and fabric. We've never written something with this much JavaScript, so up until now we've never done any JavaScript testing. I'm unsure what exactly we should be testing in our unit tests. We've written jQuery plugins for various things, so it's quite obvious that they should be verified for correctness as much as possible with JsTestDriver, but everyone else on my team seems to think that we should be testing the page-level JavaScript as well. I don't think we should be testing page-level JavaScript with unit tests, but instead using a system like Selenium to verify everything works as expected. My main reasoning for this is that at the moment, page-level JavaScript tests are guaranteed to fail through JsTestDriver, because they're trying to access elements in the DOM that can't possibly exist. So, what should be unit tested in JavaScript?

  • TestDriven.NET - Free unit test tool for Visual Studio

    - by Guilherme Cardoso
    Developers who use unit testing are familiar with ReSharper and its plugin for unit testing. For those who, like me, don't have great PC hardware (Pentium 4, 3 GHz, 1 GB RAM), ReSharper can be really slow and affect the performance of the PC. That's why I use TestDriven.NET. TestDriven.NET is a tool with a free license (there are other licenses for this product) that gives us the ability to run unit tests with a plugin integrated into Visual Studio. You can check some screenshots here: http://www.testdriven.net/Screenshots.aspx It's compatible with: NUnit, MbUnit, MSTest, NCover, Reflector, TypeMock, dotTrace and MSBee. More information and free download here: http://www.testdriven.net

  • Next generation Three MiFi unit - call for questions to put to Three

    - by Liam Westley
    I've been invited to a preview of the next generation Three mobile MiFi unit at their London offices this week. If you've got feedback on the current MiFi unit – niggles, wish-list items or general comments – or you've got any questions about what the next generation MiFi unit might be, drop me an e-mail or post a comment with your question on this blog. I'll be taking any questions from my blog or my twitter account @westleyl to Three, and if I get an answer I can publish, I'll add the details to this blog post. Thanks, Liam

  • Best approach for unit enemy "awareness" in RTS?

    - by Phil
    I'm using Unity3d to develop an RTS/TD hybrid prototype game. What is the best approach to have "awareness" between units and their enemies? Is it sane to have every unit check the distance to every enemy and engage if within range? The approach I'm going for right now is to have a trigger sphere on every unit. If an enemy enters the trigger, the unit becomes aware of the enemy and starts distance checking. I'm imagining that this would save some unnecessary checks? What's the best practice here (if there's such a thing)? Thanks for reading.
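    For reference, a minimal sketch of the trigger-sphere approach in Unity (C#); the class name and "Enemy" tag are illustrative, and it assumes each unit carries a SphereCollider marked as a trigger, sized to its awareness range:

        using System.Collections.Generic;
        using UnityEngine;

        public class UnitAwareness : MonoBehaviour
        {
            public float engageRange = 10f;   // distance at which the unit actually engages
            private List<Transform> nearbyEnemies = new List<Transform>();

            // Unity calls this when something enters the awareness trigger sphere.
            void OnTriggerEnter(Collider other)
            {
                if (other.CompareTag("Enemy"))
                    nearbyEnemies.Add(other.transform);
            }

            // Forget enemies that leave the sphere so we stop distance-checking them.
            void OnTriggerExit(Collider other)
            {
                if (other.CompareTag("Enemy"))
                    nearbyEnemies.Remove(other.transform);
            }

            void Update()
            {
                // Distance checks only run against units already inside the trigger,
                // not against every enemy on the map.
                foreach (var enemy in nearbyEnemies)
                {
                    if (enemy != null &&
                        Vector3.Distance(transform.position, enemy.position) <= engageRange)
                    {
                        // Engage(enemy) would go here.
                        break;
                    }
                }
            }
        }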

  • Platform for DS/Gameboy Dev - Managed Memory, Tools, and Unit Testing

    - by ashes999
    I'm interested in dabbling in Nintendo DS, 3DS, or GBA development. I would like to know what my (legal) options for development tools and IDEs are. In particular, I would not consider moving in this direction unless I can find: A programming language that has managed memory (garbage collection) A unit testing tool akin to JUnit, NUnit, etc. for unit tests I would also prefer if other tools exist, like code-coverage, etc. for that platform. But the main thing is managed memory and unit testing. What options are out there?

  • Making Separate Assemblies For Different Types Of Tests For The Same Component?

    - by sooprise
    I was told by a few members here that splitting up my unit tests into different assemblies for different components is the best way to structure unit tests. Now, I have a few questions about that idea. What are the advantages of this? Organization, and isolation of errors? Let's say I have a component named "calculator", and I create an assembly for the unit tests on "calculator". Would I create a separate assembly for the integration tests I want to run on "calculator"? Or is the definition of an integration test a test across multiple components, like "calculator" and whatever else, which would require a separate assembly to test both of them together? In that case, would I have one assembly to do all of the integration testing for every component combination?

  • NHibernate with .NET Framework 2

    - by Daniel Dolz
    Hi. I cannot make NHibernate 2.1 work on machines without .NET Framework 3.X (basically Windows 2000 SP4, although it happens with XP too). The NHibernate docs don't mention this. Maybe you can help? I NEED to make NHibernate 2.1 work on Windows 2000 PCs – do you think this can be done? Thanks! PS: the database is SQL 2000/2005. The error is (messages translated from Spanish):

    NHibernate.MappingException: Could not compile the mapping document: Datos.NH_VEN_ComprobanteBF.hbm.xml ---> NHibernate.HibernateException: Could not instantiate dialect class NHibernate.Dialect.MsSql2000Dialect ---> System.Reflection.TargetInvocationException: Exception has been thrown by the target of an invocation. ---> System.TypeInitializationException: The type initializer for 'NHibernate.NHibernateUtil' threw an exception. ---> System.TypeLoadException: Could not load type 'System.DateTimeOffset' from assembly 'mscorlib, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089'.
       at NHibernate.Type.DateTimeOffsetType.get_ReturnedClass()
       at NHibernate.NHibernateUtil..cctor()
       --- End of inner exception stack trace ---
       at NHibernate.Dialect.Dialect..ctor()
       at NHibernate.Dialect.MsSql2000Dialect..ctor()
       --- End of inner exception stack trace ---
       at System.RuntimeTypeHandle.CreateInstance(RuntimeType type, Boolean publicOnly, Boolean noCheck, Boolean& canBeCached, RuntimeMethodHandle& ctor, Boolean& bNeedSecurityCheck)
       at System.RuntimeType.CreateInstanceSlow(Boolean publicOnly, Boolean fillCache)
       at System.RuntimeType.CreateInstanceImpl(Boolean publicOnly, Boolean skipVisibilityChecks, Boolean fillCache)
       at System.Activator.CreateInstance(Type type, Boolean nonPublic)
       at NHibernate.Bytecode.ActivatorObjectsFactory.CreateInstance(Type type)
       at NHibernate.Dialect.Dialect.InstantiateDialect(String dialectName)
       --- End of inner exception stack trace ---
       at NHibernate.Dialect.Dialect.InstantiateDialect(String dialectName)
       at NHibernate.Dialect.Dialect.GetDialect(IDictionary`2 props)
       at NHibernate.Cfg.Configuration.AddValidatedDocument(NamedXmlDocument doc)
       --- End of inner exception stack trace ---
       at NHibernate.Cfg.Configuration.LogAndThrow(Exception exception)
       at NHibernate.Cfg.Configuration.AddValidatedDocument(NamedXmlDocument doc)
       at NHibernate.Cfg.Configuration.ProcessMappingsQueue()

    and it continues...

  • vim does not preserve symlink over sshfs

    - by HighCommander4
    I'm having some trouble with symlinks and sshfs. I use the '-o follow_symlinks' option to follow symlinks on the server side, but whenever I edit a symlinked file on the client side with vim, a copy of it is made on the server side, i.e. it's no longer a symlink.

    Set up a symlink on the server side:

    me@machine1:~$ echo foo > test.txt
    me@machine1:~$ mkdir test
    me@machine1:~$ cd test
    me@machine1:~/test$ ln -s ../test.txt test.txt
    me@machine1:~/test$ ls -al test.txt
    lrwxrwxrwx 1 me me 11 Jan 5 21:13 test.txt -> ../test.txt
    me@machine1:~/test$ cat test.txt
    foo
    me@machine1:~/test$ cat ../test.txt
    foo

    So far so good. Now:

    me@machine2:~$ mkdir test
    me@machine2:~$ sshfs me@machine1:test test -o follow_symlinks
    me@machine2:~$ cd test
    me@machine2:~/test$ vim test.txt
    [in vim, add a new line "bar" to the file]
    me@machine2:~/test$ cat test.txt
    foo
    bar

    Now observe what this does to the file on the server side:

    me@machine1:~/test$ ls -al test.txt
    -rw-r--r-- 1 me me 19 Jan 5 21:24 test.txt
    me@machine1:~/test$ cat test.txt
    foo
    bar
    me@machine1:~/test$ cat ../test.txt
    foo

    As you can see, it made a copy and only edited the copy. How can I get it to actually follow the symlink when editing the file?

  • Recent resources on Entity Framework 4

    - by Eric Nelson
    I just posted on the bits you need to install to explore all the features of Entity Framework 4 with the Visual Studio 2010 RC. I've also had a quick look (March 12th 2010) to see what new resources are out there on EF4. They appear a little thin on the ground – but there are some gems. The following all caught my attention:
    - Julie Lerman has published 2 how-to videos on EF4 on pluralsight.com. You need to create a free guest pass to watch them.
    - Getting Started with Entity Framework 4.0 – session given at Cairo CodeCamp 2010. This includes ppt and demos.
    - Entity Framework 4 providers – read through the comments.
    - What's new with Entity Framework in Visual Studio 2010 RC.
    - Extending the design surface of EF4 using the Extension Starter Kit.
    - Persistence Ignorance and EF4 on geekSpeak on Channel 9 (poor audio IMHO – I gave up).
    - First of a series of posts on EF4.
    - How to stop your DBA having a heart attack with EF4, from Simon Sabin in the UK. This includes ppt and demos.
    And the biggy: you no longer have to depend on SQL Profiler to keep an eye on the generated SQL. There is now a commercial profiler for Entity Framework. I have yet to try it – but I listened to a .NET Rocks podcast which made it sound great. It is "hidden" in a session on DSLs in Boo –> Oren Eini on creating DSLs in Boo. This is a much richer experience than you would get from SQL Profiler – matching the SQL to the .NET code. And finally a momentous #fail to … drum roll … the Visual Studio 2010 and .NET Framework 4 Training Kit Feb release. This contains just one ppt on EF4 – and it is not even a good one. Real shame. P.S. I will update the 101 EF4 Resources with the above … but post-DevWeek, in case I find some more goodies. Related links: 101 EF4 Resources

  • What is the best way to test Grails using IDEA?

    - by egervari
    I am seriously having a very unpleasant time testing with Grails. I will describe my experience, and I'd like to know if there's a better way. The first problem I have with testing is that Grails doesn't give immediate feedback to the developer when .save() fails inside of an integration test. So let's say you have a domain class with 12 fields, and 1 of them is violating a constraint and you don't know it when you create the instance... it just doesn't save. Naturally, the test code afterward is going to fail. This is most troublesome because the thing under test is probably fine... the real risk and pain is the setup code for the test itself. So, I've tried to develop the habit of using .save(failOnError: true) to avoid this problem, but that's not something that can be easily enforced by everyone working on the project... and it's kind of bloaty. It'd be nice to turn this on automatically for code that is running as part of a unit test. Integration tests run slow. I cannot understand how 1 integration test that saves 1 object takes 15-20 seconds to run. With some careful test planning, I've been able to get 1000 tests talking to an actual database and doing dbunit dumps after every test to happen in about the same time! This is dumb. It is hard to run all the unit tests and not the integration tests in IDEA. Integration tests are a massive pain. IDEA actually shows a GREEN BAR when integration tests fail. The output given by Grails indicates that something failed, but it doesn't say what it was. It says to look in the test reports... which forces the developer to launch up their file system to hunt the stupid html file down. What a pain. Then once you've got the html file and clicked through to the failing test, it'll tell you a line number. Since these reports are not in the IDE, you can't just click the stack trace to go to that line of code... you've got to go back and find it yourself. ARGGH!@!@! Maybe people put up with this, but I refuse. Testing should not be this painful. It should be fast and painless, or people won't do it. Please help. What is the solution? Rails instead of Grails? Something else entirely? I love the Grails framework, but they never demo their testing for a reason. They have a snazzy framework, but the testing is painful. After having used Scala for the last 1.5 months, and being totally spoiled by ScalaTest... I can't go back to this.

  • Using jQuery.UI CSS Framework for DIV Styling

    - by Gordon
    Hi folks, a project I am currently working on uses the jQuery UI framework for some of its widgets. To give the user a consistent global look and feel I would like to use the framework's CSS as well. At the moment I am implementing a dashboard-like homepage where the user can see an overall status of their data. This dashboard is built of some divs that should be aligned in a grid layout. I am trying to style the divs as follows:

    <div class="ui-widget">
      <div class="ui-widget-header">Box Header</div>
      <div class="ui-widget-content">
        Content of the Box
      </div>
    </div>

    Later I would like to implement some draggable-and-sortable functionality. The problem I am facing right now is that the boxes aren't properly aligned. Does anyone have a hint on using jQuery UI for this kind of CSS work? I was studying the CSS framework documentation on jqueryui.com but there isn't that much information. Best regards, Gordon

  • Converting a Linq expression tree that relies on SqlMethods.Like() for use with the Entity Framework

    - by JohnnyO
    I recently switched from LINQ to SQL to the Entity Framework. One of the things that I've been really struggling with is getting a general-purpose IQueryable extension method that was built for LINQ to SQL to work with the Entity Framework. This extension method depends on the Like() method of SqlMethods, which is LINQ to SQL specific. What I really like about this extension method is that it allows me to dynamically construct a SQL LIKE statement on any object at runtime, by simply passing in a property name (as string) and a query clause (also as string). Such an extension method is very convenient when using grids like Flexigrid or jqGrid. Here is the LINQ to SQL version (taken from this tutorial: http://www.codeproject.com/KB/aspnet/MVCFlexigrid.aspx):

    public static IQueryable<T> Like<T>(this IQueryable<T> source, string propertyName, string keyword)
    {
        var type = typeof(T);
        var property = type.GetProperty(propertyName);
        var parameter = Expression.Parameter(type, "p");
        var propertyAccess = Expression.MakeMemberAccess(parameter, property);
        var constant = Expression.Constant("%" + keyword + "%");
        var like = typeof(SqlMethods).GetMethod("Like", new Type[] { typeof(string), typeof(string) });
        MethodCallExpression methodExp = Expression.Call(null, like, propertyAccess, constant);
        Expression<Func<T, bool>> lambda = Expression.Lambda<Func<T, bool>>(methodExp, parameter);
        return source.Where(lambda);
    }

    With this extension method, I can simply do the following:

    someList.Like("FirstName", "mike");

    or

    anotherList.Like("ProductName", "widget");

    Is there an equivalent way to do this with the Entity Framework? Thanks in advance.
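    For context, one commonly suggested workaround (a sketch, not the accepted answer from this thread): LINQ to Entities translates String.Contains into a SQL LIKE '%…%' comparison, so the same extension method can be made provider-agnostic by calling Contains on the property instead of SqlMethods.Like – note the wildcards are no longer needed in the constant:

    using System;
    using System.Linq;
    using System.Linq.Expressions;

    public static class QueryableLikeExtensions
    {
        public static IQueryable<T> Like<T>(this IQueryable<T> source, string propertyName, string keyword)
        {
            var type = typeof(T);
            var parameter = Expression.Parameter(type, "p");
            var propertyAccess = Expression.MakeMemberAccess(parameter, type.GetProperty(propertyName));

            // string.Contains(string) is understood by both LINQ to SQL and
            // the Entity Framework, and is translated to a LIKE comparison.
            var contains = typeof(string).GetMethod("Contains", new[] { typeof(string) });
            var body = Expression.Call(propertyAccess, contains, Expression.Constant(keyword));

            return source.Where(Expression.Lambda<Func<T, bool>>(body, parameter));
        }
    }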

  • What Java web application framework to use?

    - by frohiky
    One of the main products of my company is an Oracle Forms (and Reports) based application that "needs" to be re-written in another technology. Why? Users want a richer interface experience, and we want, preferably, to reduce costs with an open source application server. For this (HUGE) project we intend to use a Java web application framework. Keep these points in mind:

    We have:
    - hundreds of tables in our database (the ORM must be as flexible as possible);
    - some logic which is (and will still be) based on PL/SQL procedures/functions/packages;
    - a lot of CRUDs (the application itself is of a considerable size);
    - a demand to work with/generate documents and workflows;
    - an intranet-based user environment.

    We want:
    - to offer a RIA interface experience;
    - to use (if possible) an open source app server;
    - a rapid (as possible) development framework;
    - a somewhat mature framework with a "wise" roadmap (and considerable community support);
    - an MVC approach combined with JS or GWT widgets (e.g. Vaadin or SmartGWT).

    Well, in the past weeks I've read a lot of posts and Q&As on stackoverflow, and much more: Wicket, JSF, Tapestry, Grails, GWT, Struts2, Play, Spring, Seam, Echo, .... the list goes on! I've even researched Alfresco..! The obvious question: which one to use? At this time, any insight, recommendation, shared experience or advice will be more than welcome!

  • What is the PastryKit Framework?

    - by Kerrick
    I'm trying to find any information I can on the PastryKit Javascript Framework. It appears to be in use on the iPhone User Guide that is displayed on the iPhone itself in Mobile Safari, but I cannot find any documentation or API. If you want to see it in action, open Safari 4, set your user agent to iPhone 3 (In the Develop menu) and check out the guide. Overall, it seems to be a way to write an HTML/CSS/Javascript application that acts like a native iPhone app. When it comes to Javascript, I used the JS Beautifier on (what I assume to be) the framework file and it was over 3,400 lines! Beautified, (again what I assume to be) their implementation of it was over 1,200 lines. On the CSS side, I used Clean CSS on (again what I assume to be) the framework CSS, and it came out to over 700 lines. Their implementation was shy of 500. Does anybody have, or know where to find, any information, documentation, or APIs on PastryKit? Or, can anybody figure out how to implement it?

  • Advice Please: SQL Server Identity vs Unique Identifier keys when using Entity Framework

    - by c.batt
    I'm in the process of designing a fairly complex system. One of our primary concerns is supporting SQL Server peer-to-peer replication. The idea is to support several geographically separated nodes. A secondary concern has been using a modern ORM in the middle tier. Our first choice has always been Entity Framework, mainly because the developers like to work with it. (They love the LINQ support.) So here's the problem: with peer-to-peer replication in mind, I settled on using uniqueidentifier with a default value of newsequentialid() for the primary key of every table. This seemed to provide a good balance between avoiding key collisions and reducing index fragmentation. However, it turns out that the current version of Entity Framework has a very strange limitation: if an entity's key column is a uniqueidentifier (GUID), then it cannot be configured to use the default value (newsequentialid()) provided by the database. The application layer must generate the GUID and populate the key value. So here's the debate:
    1. abandon Entity Framework and use another ORM: either NHibernate (and give up LINQ support) or linq2sql (and give up future support, not to mention being bound to SQL Server);
    2. abandon GUIDs and go with another PK strategy;
    3. devise a method to generate sequential GUIDs (COMBs?) at the application layer.
    I'm leaning towards option 1 with linq2sql (my developers really like linq2[stuff]) and 3. That's mainly because I'm somewhat ignorant of alternate key strategies that support the replication scheme we're aiming for while also keeping things sane from a developer's perspective. Any insight or opinion would be greatly appreciated.
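    For what it's worth, option 3 is usually implemented with Jimmy Nilsson-style COMB GUIDs, where the trailing six bytes of a random GUID are overwritten with a timestamp so that successive values sort roughly sequentially. A minimal sketch (not production-hardened; the byte layout assumes SQL Server's uniqueidentifier ordering):

    using System;

    public static class CombGuid
    {
        // Returns a GUID whose trailing bytes encode the current time, so
        // SQL Server sorts successive values near each other and clustered
        // index fragmentation stays low.
        public static Guid NewComb()
        {
            byte[] guidBytes = Guid.NewGuid().ToByteArray();

            DateTime now = DateTime.UtcNow;
            DateTime baseDate = new DateTime(1900, 1, 1);

            // Days since the base date, plus time of day in 1/300-second
            // ticks, mirroring how SQL Server stores DATETIME values.
            byte[] days = BitConverter.GetBytes((short)(now - baseDate).Days);
            byte[] msecs = BitConverter.GetBytes(
                (int)(now.TimeOfDay.TotalMilliseconds / 3.333333));

            // SQL Server weighs the last byte group highest when comparing
            // uniqueidentifiers, so the timestamp goes at the end, big-endian.
            guidBytes[10] = days[1];
            guidBytes[11] = days[0];
            guidBytes[12] = msecs[3];
            guidBytes[13] = msecs[2];
            guidBytes[14] = msecs[1];
            guidBytes[15] = msecs[0];

            return new Guid(guidBytes);
        }
    }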

  • Alternatives to the Entity Framework for Serving/Consuming an OData Interface

    - by Egahn
    I'm researching how to set up an OData interface to our database. I would like to be able to pull/query data from our DB into Excel, as a start. Eventually I would like to have Excel run queries and pull data over HTTP from a remote client, including authentication, etc. I've set up a working (rickety) prototype so far, using the ADO.NET Entity Data Model wizard in Visual Studio, and VSTO to create a test Excel worksheet with a button to pull from that ADO.NET interface. This works OK so far, and I can query the DB using Linq through the entities/objects that are created by the ADO.NET EDM wizard. However, I have started to run into some problems with this approach. I've been finding the Entity Framework difficult to work with (and in fact, also difficult to research solutions to, as there's a lot of chaff out there regarding it and older versions of it). An example of this is my being unable to figure out how to set the SQL command timeout (as opposed to the HTTP request timeout) on the DataServiceContext object that the wizard generates for my schema, but that's not the point of my question. The real question I have is, if I want to use OData as my interface standard, am I stuck with the Entity Framework? Are there any other solutions out there (preferably open source) which can set up, serve and consume an OData interface, and are easier to work with and less bloated than the Entity Framework? I have seen mention of NHibernate as an alternative, but most of the comparison threads I've seen are a few years old. Are there any other alternatives out there now? Thanks very much!
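    One note that may help frame the research (a sketch under the assumption that WCF Data Services is the OData host): the DataService<T> class does not strictly require the Entity Framework. Its reflection provider will serve any class whose public properties return IQueryable<T>, so an NHibernate session or even in-memory collections can back the service. All type names below are illustrative:

    using System.Collections.Generic;
    using System.Data.Services;
    using System.Data.Services.Common;
    using System.Linq;

    // The reflection provider needs an explicit key on each entity type.
    [DataServiceKey("ID")]
    public class Product
    {
        public int ID { get; set; }
        public string Name { get; set; }
    }

    // Any class exposing IQueryable<T> properties can be the data source -
    // no Entity Framework involved.
    public class ProductSource
    {
        private static readonly List<Product> _products = new List<Product>
        {
            new Product { ID = 1, Name = "Widget" }
        };

        public IQueryable<Product> Products
        {
            get { return _products.AsQueryable(); }
        }
    }

    public class ProductService : DataService<ProductSource>
    {
        public static void InitializeService(DataServiceConfiguration config)
        {
            // Read-only OData over every entity set.
            config.SetEntitySetAccessRule("*", EntitySetRights.AllRead);
            config.DataServiceBehavior.MaxProtocolVersion =
                DataServiceProtocolVersion.V2;
        }
    }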

  • Entity Framework and associations between string keys

    - by fredrik
    Hi, I am new to Entity Framework, and to ORMs for that matter. In the project I'm involved in, we have a legacy database with all of its keys as strings, case-insensitive. We are converting to MSSQL and want to use EF as the ORM, but we have run into a problem. Here is an example that illustrates it: TableA has a primary string key, and TableB has a reference to this primary key. In LINQ we write something like:

    var result = from t in context.TableB
                 select t.TableA;
    foreach( var r in result )
        Console.WriteLine( r.someFieldInTableA );

    Say TableA contains a primary key that reads "A", and TableB contains two rows that reference TableA but with different cases in the referencing field, "a" and "A". In our project we want both of the rows to end up in the result, but only the one with the matching case does. Using the SQL Profiler, I have noticed that both of the rows are selected. Is there a way to tell Entity Framework that the keys are case-insensitive?

    Edit: We have now tested this with NHibernate and come to the conclusion that NHibernate works with case-insensitive keys, so NHibernate might be a better choice for us. I am, however, still interested in finding out if there is any way to change the behaviour of Entity Framework.

  • SIMPLE PHP MVC Framework [closed]

    - by Allen
    I need a simple and basic MVC example to get me started. I don't want to use any of the available packaged frameworks. What I need is a simple example of a simple PHP MVC framework that would allow, at most, the basic creation of a simple multi-page site. I am asking for a simple example because I learn best from simple real-world examples. Big popular frameworks (such as CodeIgniter) are too much for me to even try to understand, and any other "simple" examples I have found are not well explained or seem a little sketchy in general. I should add that most examples of simple MVC frameworks I see use mod_rewrite (for URL routing) or some other Apache-only method; I run PHP on IIS. I need to be able to understand a basic MVC framework, so that I could develop my own that would allow me to easily extend functionality with classes. I am at the point where I understand basic design patterns and MVC pretty well. I understand them in theory, but when it comes down to actually building a real-world, simple, well-designed MVC framework in PHP, I'm stuck. Edit: I just want to note that I am looking for a simple example that an experienced programmer could whip up in under an hour. I mean simple as in bare-bones simple. I don't want to use any huge frameworks; I am trying to roll my own. I just need a decent SIMPLE example to get me going.

  • Entity Framework POCO objects

    - by Dan
    I'm struggling with understanding Entity Framework and POCO objects. Here's what I'm trying to achieve. 1) Separate the DAL from the Business Layer by having my business layer use an interface to my DAL. Maybe use Unity to create my context. 2) Use Entity Framework inside my DAL. I have a Domain model with objects that reside in my business layer. I also have a database full of tables that doesn't really represent my domain model. I setup Entity Framework and generated POCO objects by using the ADO.NET POCO Generator extension. This gave me an object for each table in my database. Now I want to be able to say context.GetAll<User>(); and have it return a list of my User objects. The User object is in my business layer. Is that possible? Does that make sense or am I totally off and should start over? I'm guessing I need to use the repository pattern to achieve this, but I'm not sure. Can anyone help?
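    A rough sketch of the usual shape of this, assuming EF4's ObjectContext and POCO generation (the interface and class names are illustrative, and any mapping from the generated POCOs to separate business-layer objects would still need to be added): the business layer depends only on the interface, and the EF-backed implementation lives in the DAL.

    using System.Collections.Generic;
    using System.Data.Objects;
    using System.Linq;

    // Business layer: sees only this abstraction.
    public interface IDataContext
    {
        IList<T> GetAll<T>() where T : class;
    }

    // DAL: Entity Framework implementation.
    public class EfDataContext : IDataContext
    {
        private readonly ObjectContext _context;

        public EfDataContext(ObjectContext context)
        {
            _context = context;
        }

        public IList<T> GetAll<T>() where T : class
        {
            // Works when T is one of the entity types known to the context.
            return _context.CreateObjectSet<T>().ToList();
        }
    }

    With something like this, Unity can be configured to hand the business layer an IDataContext without it ever referencing the EF assemblies directly.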

  • Binding not writing to datasource on .NET Compact Framework Form -- works on Full Framework

    - by Dave Welling
    I have a problem with a bound user control writing back to its data source in a NetCF Forms application. The application is too complex to post, so I made a toy version to show you. I create a form, a user control with a combobox, a class (testBind) and another class (TestLookup). I bind a property of the user control ("Value") to a property ("SelectedID") on the testBind class. The testBind class implements INotifyPropertyChanged. I create a few facade methods on the user control to bind the contained combobox to a BindingList(Of TestLookup). I create a button to show the value of the testBind bound property (in a MessageBox). The MessageBox returns "-1" every time, regardless of the combobox entry selected. I can take the EXACT same code, paste it into a full framework Forms app, and it will return the correct value of the selected combobox entry.

    Imports System.ComponentModel

    Public Class Form2
        Inherits Form

        Private _testBind1 As testBind
        Private _testUserControlX As UserControlX
        Friend WithEvents _buttonX As System.Windows.Forms.Button

        Public Sub New()
            _buttonX = New System.Windows.Forms.Button
            _buttonX.Location = New System.Drawing.Point(126, 228)
            _buttonX.Size = New System.Drawing.Size(70, 21)
            _testBind1 = New testBind
            _testUserControlX = New UserControlX()
            Dim _lookup As New System.ComponentModel.BindingList(Of TestLookup)()
            _lookup.Add(New TestLookup(1, "text1"))
            _lookup.Add(New TestLookup(2, "text2"))
            _testUserControlX.DataSource = _lookup
            _testUserControlX.DisplayMember = "Text"
            _testUserControlX.ValueMember = "ID"
            _testUserControlX.DataBindings.Add("Value", _testBind1, "SelectedID", False, DataSourceUpdateMode.OnValidation)
            MinimizeBox = False
            Controls.Add(_testUserControlX)
            Controls.Add(_buttonX)
        End Sub

        Private Sub ButtonX_Click(ByVal sender As System.Object, ByVal e As System.EventArgs) Handles _buttonX.Click
            MessageBox.Show(_testBind1.SelectedID.ToString())
        End Sub

        Public Class testBind
            Implements System.ComponentModel.INotifyPropertyChanged

            Private _selectedRow As Integer = -1

            Public Event PropertyChanged(ByVal sender As Object, ByVal e As System.ComponentModel.PropertyChangedEventArgs) Implements System.ComponentModel.INotifyPropertyChanged.PropertyChanged

            Protected Sub OnPropertyChanged(ByVal PropertyName As String)
                RaiseEvent PropertyChanged(Me, New PropertyChangedEventArgs(PropertyName))
            End Sub

            Public Property SelectedID() As Integer
                Get
                    Return _selectedRow
                End Get
                Set(ByVal value As Integer)
                    _selectedRow = value
                    OnPropertyChanged("SelectedID")
                End Set
            End Property
        End Class

        Public Class TestLookup
            Private _text As String
            Private _id As Integer

            Public Sub New(ByVal id As Integer, ByVal text As String)
                _text = text
                _id = id
            End Sub

            Public Property ID() As Integer
                Get
                    Return _id
                End Get
                Set(ByVal value As Integer)
                    _id = value
                End Set
            End Property

            Public Property Text() As String
                Get
                    Return _text
                End Get
                Set(ByVal value As String)
                    _text = value
                End Set
            End Property
        End Class
    End Class

    Public Class UserControlX
        Inherits System.Windows.Forms.UserControl

        Friend WithEvents ComboBox1 As System.Windows.Forms.ComboBox

        Public Sub New()
            Me.ComboBox1 = New System.Windows.Forms.ComboBox
            Me.Controls.Add(Me.ComboBox1)
        End Sub

        Public Property Value() As Integer
            Get
                Return ComboBox1.SelectedValue
            End Get
            Set(ByVal value As Integer)
                ComboBox1.SelectedValue = value
            End Set
        End Property

        Public Property DataSource() As Object
            Get
                Return ComboBox1.DataSource
            End Get
            Set(ByVal value As Object)
                ComboBox1.DataSource = value
            End Set
        End Property

        Public Property ValueMember() As String
            Get
                Return ComboBox1.ValueMember
            End Get
            Set(ByVal value As String)
                ComboBox1.ValueMember = value
            End Set
        End Property

        Public Property DisplayMember() As String
            Get
                Return ComboBox1.DisplayMember
            End Get
            Set(ByVal value As String)
                ComboBox1.DisplayMember = value
            End Set
        End Property
    End Class

  • Get all model details in Zend

    - by Prasanth P
    Hi, I have a question about Zend Framework. I need the details of all the models in a project I have built with Zend Framework. Is there any way to get the details of all models in a Zend Framework project? Please help me. Thanks and regards, Prasanth P

  • Eclipse JUnit Plugin Test very slow to re-execute Test Suite on Windows

    - by soundasleepful
    I'm having an odd and stressful problem running a large JUnit plugin test suite in Eclipse. When I try to re-run a JUnit plugin suite that has just been executed, Eclipse hangs for quite some time before it eventually wakes up and launches. It can take up to 5 minutes sometimes, and the time increases with the size of the suite. Visually, it looks like a GC cleanup, except that I have plenty of GC space available (400 MB freely allocated). The size of the workspace that it has to delete is well under 1 GB, and there are not too many files – definitely fewer than 20,000. While I was waiting for a new run to start, I decided to manually kill explorer.exe to see if it had any effect. Surprisingly, Eclipse instantly fell out of its freeze and ran as normal. This makes me think that Windows is somehow interfering with the deletion of these workspace files. They're not being put into the Recycle Bin, though. The workspace is in C:, which I think is out of the range of any workspace/domain stuff. Any ideas?
