Search Results

Search found 10414 results on 417 pages for 'business caliber'.

Page 229/417

  • WCF: Exposed Object Model - stuck in a loop

    - by Mark
    Hi, I'm working on a pretty big WSSF project. I have a normal object model in the business layer, e.g. a customer has an orders collection property; when it is accessed, it loads from the data layer (lazy loading). An order has a productCollection property, and so on. The bit I'm finding tricky is exposing this via WCF. I want to expose a collection of orders, and the client app will also need information about the customers. Using the WSSF data contract designer I have set it up so that customers have an "order collection" property. This is fine if you have a customer object and want to look at its orders, but an order object has no customer property, so you can't navigate back up the hierarchy. I've tried adding a customer property to the order object, but then the code gets stuck in a loop when it loads up the data contracts. This is because the contracts don't load on demand like the business layer does; I need to populate every property before the objects can be sent out via WCF. It ends up loading an order, then the customer for that order, then the orders for that customer, then the customer for each of those orders, and so on. I'm sure I've got all this wrong. Help!!
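
    A minimal sketch of one way out, assuming the generated data contracts can be edited and .NET 3.5 SP1 or later (the types below are illustrative, not the project's actual contracts): marking a data contract with IsReference = true makes the DataContractSerializer track object references, so the customer/order cycle is serialized once instead of recursing forever.

        using System.Collections.Generic;
        using System.Runtime.Serialization;

        [DataContract(IsReference = true)]   // references are tracked, so cycles don't recurse
        public class CustomerContract
        {
            [DataMember] public int Id { get; set; }
            [DataMember] public List<OrderContract> Orders { get; set; }
        }

        [DataContract(IsReference = true)]
        public class OrderContract
        {
            [DataMember] public int Id { get; set; }
            [DataMember] public CustomerContract Customer { get; set; }   // back-reference is now safe to keep
        }

    If IsReference isn't an option, the simpler alternative is to leave the back-reference off the [DataMember] list and let the client navigate from customer down to orders only.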

    Read the article

  • Entity Framework 4 + POCO with custom classes and WCF contracts (serialization problem)

    - by eman
    Yesterday I worked on a project where I upgraded to Entity Framework 4 with the repository pattern. In one post I read that it is necessary to turn off the custom tool that generates the classes and then write the classes (the same as the entities) by hand. To do that, I used the POCO Entity Generator, then deleted the newly generated .tt file and all of its subordinate .cs classes and wrote the "entity classes" myself. I added the repository pattern, implemented it in the business layer, and then implemented a WCF layer which calls the methods from the business layer. Calling an Insert (Add) method from the presentation layer works fine. But if I call any method that should return a class, I get an error like "the connection was interrupted by the server". I suppose there is a problem with the serialization, or am I wrong? How can this problem be solved? I'm using Visual Studio 2010, Entity Framework 4 and C#. UPDATE: I have uploaded the project and hope somebody can help me! link text UPDATE 2: My questions: Why is POCO good (pros/cons)? When should POCO be used? Is POCO + the repository pattern a good choice? Should POCO classes be written by myself, or could I use auto-generated POCO classes?
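
    One common cause in this setup is EF 4 handing back lazy-loading proxy types that the DataContractSerializer cannot serialize. A minimal sketch, assuming an ObjectContext-based POCO model (the Customer entity and repository name are illustrative): turn off proxy creation and lazy loading before materializing anything that will cross the WCF boundary.

        using System.Collections.Generic;
        using System.Data.Objects;
        using System.Linq;

        public class CustomerRepository
        {
            private readonly ObjectContext _ctx;   // the context generated from the .edmx
            public CustomerRepository(ObjectContext ctx) { _ctx = ctx; }

            public List<Customer> GetCustomersForService()
            {
                _ctx.ContextOptions.ProxyCreationEnabled = false;  // return plain POCOs, not proxy subclasses
                _ctx.ContextOptions.LazyLoadingEnabled = false;    // nothing lazy-loads while WCF serializes
                return _ctx.CreateObjectSet<Customer>().ToList();
            }
        }

        public class Customer { public int Id { get; set; } public string Name { get; set; } }

    With both options off, whatever is returned is a plain, fully-loaded graph, which is usually enough to stop the connection being dropped mid-serialization.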

    Read the article

  • Code promotion: Enforcing the rules

    - by jbarker7
    So here is our problem: we have a small team of developers, each with their own way of doing things. I am trying to formalize a process in which we are required to promote our code in the following order: local sandbox -> Dev -> UAT -> Staging -> Live. Developers develop and test as they go on their own sandbox. Dev is its own box that we would use for continuous integration. UAT is another site in IIS on the dev box which uses our dev database. We then promote to Staging, which is a site in IIS on the live box using live data (just like Live, hence "staging"). Finally, we promote to Live. Here are a few of my questions: 1) Does this seem to be best practice? If not, what needs to be done differently? 2) How do I enforce the rules on the developers? Developers often skip steps to save time; this should not be tolerated, and it would be great if it could be physically enforced. 3) How do I enforce these rules on the business group? The business group just wants to get features out FAST. Do we promote only on certain days? Thanks! Josh

    Read the article

  • Telerik RadGrid: grid clientside pagination

    - by ram
    I have a web service which returns me some data. I am massaging this data and using it as the datasource for my RadGrid (Telerik). The datasource is quite large, and I would like to paginate it. I found a couple of problems when I paginate on the server side: 1) I have to bind the grid again for pagination, which essentially means I have to call the web service again to get the data. This is an expensive call for me; I would rather forgo the benefits of pagination and display all the results on one page, even if that would be a bit clumsy. 2) During the postback, RadGrid1.Items.Count is the number of items on the current page (25 in my case), which is expected since not all the items in the datasource get bound. That in itself is not an issue. The real issue is that we have some checkboxes which get checked based on some business condition, and we add these to our business object/DB later. So if the user has not navigated through all the pages, the "checked" items on unvisited pages never get added, because pagination limits the grid's Items to those bound for that particular page index. My thoughts: I would rather have some sort of client-side pagination, where we can hide/show contents, than go to the server and do a databind every time. The UI would not be clumsy, and the grid would still have all the items during postback. Is there a way to do it? If it were a regular ASP.NET GridView, can someone point me to a good article that would serve my purpose? Ram PS: who else thinks RadGrid is crazy? (Unfortunately I did not make this choice.)
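
    One workaround, sketched under the assumption that the massaged data can be held per user for the life of the page (the page, entity and service names are illustrative): cache the web-service result in Session on the first hit and have every rebind, including paging postbacks, read from that cache, so the expensive call is made only once.

        using System;
        using System.Collections.Generic;
        using System.Web.UI;
        using System.Web.UI.WebControls;

        public partial class OrdersPage : Page
        {
            protected GridView OrdersGrid;            // normally declared by the designer file
            private const string CacheKey = "OrdersGridData";

            private List<OrderRow> GetGridData()
            {
                var data = Session[CacheKey] as List<OrderRow>;
                if (data == null)
                {
                    data = OrderService.GetOrders();  // the expensive web-service call, made once per session
                    Session[CacheKey] = data;
                }
                return data;
            }

            // Call this from whichever rebind event the grid exposes
            // (for RadGrid that would be its NeedDataSource event); paging then reuses the cache.
            protected void BindGrid()
            {
                OrdersGrid.DataSource = GetGridData();
                OrdersGrid.DataBind();
            }
        }

        public class OrderRow { public int Id { get; set; } public bool Checked { get; set; } }

        public static class OrderService
        {
            public static List<OrderRow> GetOrders()
            {
                // stand-in for the real web-service call
                return new List<OrderRow>();
            }
        }

    This keeps server-side paging (so the checkbox state can be collected per page from the cached list) without paying for the web service on every page change.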

    Read the article

  • Refactoring an ASP.NET 2.0 app to be more "modern"

    - by Wayne M
    This is a hypothetical scenario. Let's say you've just been hired at a company with a small development team. The company uses an internal CRM/ERP-type system written in .NET 2.0 to manage all of its day-to-day operations (let's simplify and say customer accounts and records). The app was written a couple of years ago when .NET 2.0 was just out and uses the following architectural designs: Webforms; a data layer that is a thin wrapper around SqlCommand calling stored procedures; rudimentary DTO-style business objects populated via the sprocs; and a "business logic" layer that acts as a gateway between the webform and the database (i.e. the code-behind calls that layer). Let's say that as more changes and requirements are added to the application, you start to feel that the old architecture is showing its age and changes are increasingly difficult to make. How would you go about introducing refactoring steps to A) modernize the app (i.e. proper separation of concerns) and B) make sure that the app can readily adapt to change in the organization? IMO the changes would involve: introducing an ORM like LINQ to SQL and getting rid of the sprocs for CRUD; assuming you can't just throw out Webforms, introducing the M-V-P pattern to the forms; making sure the gateway classes conform to SRP and the other SOLID principles; and exposing re-used logic as web service methods instead of duplicating code. What are your thoughts? Again, this is a totally hypothetical scenario that many of us have faced in the past, or may end up facing.
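
    A minimal sketch of the M-V-P step mentioned above (all names here are illustrative, not from the original app): the Webforms code-behind shrinks to a passive view behind an interface, and the presenter becomes plain, testable C# that talks to the gateway/repository.

        public interface ICustomerAccountView
        {
            int CustomerId { get; }
            string CustomerName { set; }
            void ShowError(string message);
        }

        public class CustomerAccountPresenter
        {
            private readonly ICustomerAccountView _view;
            private readonly ICustomerRepository _repository;

            public CustomerAccountPresenter(ICustomerAccountView view, ICustomerRepository repository)
            {
                _view = view;
                _repository = repository;
            }

            public void Load()
            {
                var customer = _repository.GetById(_view.CustomerId);
                if (customer == null)
                    _view.ShowError("Customer not found");
                else
                    _view.CustomerName = customer.Name;
            }
        }

        public interface ICustomerRepository { Customer GetById(int id); }
        public class Customer { public int Id { get; set; } public string Name { get; set; } }

    The existing .aspx code-behind would implement ICustomerAccountView and forward its events to the presenter, which keeps the refactoring incremental: one form at a time can be moved over while the rest stays as-is.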

    Read the article

  • Unit Testing the Use of TransactionScope

    - by Randolpho
    The preamble: I have designed a strongly interfaced and fully mockable data layer class that expects the business layer to create a TransactionScope when multiple calls should be included in a single transaction. The problem: I would like to unit test that my business layer makes use of a TransactionScope object when I expect it to. Unfortunately, the standard pattern for using TransactionScope is as follows:

        using (var scope = new TransactionScope())
        {
            // transactional methods
            datalayer.InsertFoo();
            datalayer.InsertBar();
            scope.Complete();
        }

    While this is a really great pattern in terms of usability for the programmer, testing that it's done seems... unpossible to me. I cannot detect that a transient object has been instantiated, let alone mock it to determine that a method was called on it. Yet my goal for coverage implies that I must. The question: how can I go about building unit tests that ensure TransactionScope is used appropriately according to the standard pattern? Final thoughts: I've considered a solution that would certainly provide the coverage I need, but have rejected it as overly complex and as not conforming to the standard TransactionScope pattern. It involves adding a CreateTransactionScope method on my data layer object that returns an instance of TransactionScope. But because TransactionScope contains constructor logic and non-virtual methods, and is therefore difficult if not impossible to mock, CreateTransactionScope would return an instance of DataLayerTransactionScope, which would be a mockable facade over TransactionScope. While this might do the job, it's complex and I would prefer to use the standard pattern. Is there a better way?
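
    One approach that keeps the standard pattern, sketched here with Moq and MSTest (the FooBarService class and IDataLayer interface are stand-ins included only to make the sketch self-contained): the mocked data layer records whether System.Transactions.Transaction.Current was non-null at the moment it was called, which is observable evidence that the call ran inside a TransactionScope.

        using System.Transactions;
        using Microsoft.VisualStudio.TestTools.UnitTesting;
        using Moq;

        public interface IDataLayer { void InsertFoo(); void InsertBar(); }

        // Stand-in for the business layer under test.
        public class FooBarService
        {
            private readonly IDataLayer _data;
            public FooBarService(IDataLayer data) { _data = data; }

            public void SaveFooAndBar()
            {
                using (var scope = new TransactionScope())
                {
                    _data.InsertFoo();
                    _data.InsertBar();
                    scope.Complete();
                }
            }
        }

        [TestClass]
        public class FooBarServiceTests
        {
            [TestMethod]
            public void SaveFooAndBar_RunsInsideAmbientTransaction()
            {
                bool fooInTx = false, barInTx = false;
                var data = new Mock<IDataLayer>();
                data.Setup(d => d.InsertFoo()).Callback(() => fooInTx = Transaction.Current != null);
                data.Setup(d => d.InsertBar()).Callback(() => barInTx = Transaction.Current != null);

                new FooBarService(data.Object).SaveFooAndBar();

                Assert.IsTrue(fooInTx && barInTx, "both inserts should see an ambient TransactionScope");
            }
        }

    The test never needs a handle on the TransactionScope instance itself; it only asserts on the ambient transaction the scope creates, so the production code keeps the idiomatic using block unchanged.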

    Read the article

  • Authentication Problem in Silverlight - Cannot connect to SQL Server

    - by Johann
    Hi all, at the moment, when I try to register a user using the default Business Template in Silverlight, I am getting an error. Basically I am following a tutorial found on Channel 9 on how to build a business app with Silverlight. "An error has occurred while establishing a connection to the server. (provider: Named Pipes Provider, error: 40 – Could not open a connection to SQL Server) (Microsoft SQL Server, Error: 5) An error has occurred while establishing a connection to the server. When connecting to SQL Server 2005, this failure may be caused by the fact that under the default settings SQL Server does not allow remote connections. (provider: Named Pipes Provider, error: 40 – Could not open a connection to SQL Server) (Microsoft SQL Server, Error: 1326)". I found a blog that identifies the problem and followed every step, but I am still getting the same error. Here is my connection string:

        <add name="SIEventManagerEntities" connectionString="metadata=res://*/EventManagerDBModel.csdl|res://*/EventManagerDBModel.ssdl|res://*/EventManagerDBModel.msl;provider=System.Data.SqlClient;provider connection string=&quot;Data Source=MONFU-PC;Initial Catalog=SIEventManager;Integrated Security=SSPI;MultipleActiveResultSets=True&quot;" providerName="System.Data.EntityClient" />

    Any help would really be very much appreciated! Thanks

    Read the article

  • How to implement Administrator rights in Java Application?

    - by Yatendra Goel
    I am developing data-modeling software implemented in Java. The application converts textual data (stored in a database) to graphical form so that users can interpret the data more efficiently. The application will be accessed by three kinds of people: 1) Managers (who can fill the database with data and can also view the visual form of the data after entering it), 2) Viewers (who can only view the visual form of the data that has been entered by managers), and 3) Administrators (who can create and manage other administrators, managers and viewers). Now, how do I implement three different views of the same application? Note: managers, viewers and administrators can be located in any part of the world and should access the application over the internet. One idea that came to my mind is as follows. Step 1: code all the business logic in EJBs so that it can be used in a distributed environment (i.e. accessed by several users over the internet). Step 2: code three Swing GUI clients: one for administrators, one for managers and one for viewers. These three GUI clients can access the business logic written in the EJBs. Step 3: distribute the clients to their corresponding users, for instance the manager client to managers. Questions: Q1. Is the above approach correct? Q2. This is very common functionality in many kinds of software, so is it usually implemented this way or some other way? Q3. If another approach would be better, what is that approach?

    Read the article

  • Mutually beneficial IP/copyright clauses for contract-based freelance work

    - by Nathan de Vries
    I have a copyright section in the contract I give to my clients stating that I retain copyright on any works produced during my work for them as an independent contractor. This is most definitely not intended to place arbitrary restrictions on my clients, but rather to maintain my ability to decide on how the software I create is licensed and distributed. Almost every project I work on results in at least one part of it being released as open source. Every project I work on makes use of third-party software released in the same fashion, so returning the favour is something I would like to continue doing. Unfortunately, the contract is not so clear when it comes to defining the rights of the client in the use of said software. I mention that the code will be licensed to them, but do not mention specifics about exclusivity, ability to produce derivatives etc. As such, a client has raised concerns about the copyright section of my contract, and has suggested that I reword it such that all copyrights are transferred entirely to the client on final payment for the project. This will almost certainly reduce my ability to distribute the software I have created; I would much prefer to find a more mutually beneficial agreement where both our concerns are appeased. Are there any tried and true approaches to licensing software in this kind of situation? To summarise: I want to maintain the ability to license (parts of) the software under my own terms, independently of my relationship with the client; with some guarantee to the client that no trade-secrets or critical business logic will be shared; giving them the ability to re-use my code in their future projects; but not necessarily letting them sell it (I'm not sure about this, though...what happens if they sell their business and the software along with it?) I realise that everyone's feedback is going to be prefixed with "IANAL", however I appreciate any thoughts you might have on the matter.

    Read the article

  • Efficient database access when dealing with multiple abstracted repositories

    - by Nathan Ridley
    I want to know how most people are dealing with the repository pattern when it involves hitting the same database multiple times (sometimes transactionally) and trying to do so efficiently while maintaining database agnosticism and using multiple repositories together. Let's say we have repositories for three different entities: Widget, Thing and Whatsit. Each repository is abstracted via a base interface as per normal decoupling design processes. The base interfaces would then be IWidgetRepository, IThingRepository and IWhatsitRepository. Now we have our business layer or equivalent (whatever you want to call it). In this layer we have classes that access the various repositories. Often the methods in these classes need to do batch/combined operations where multiple repositories are involved. Sometimes one method may make use of another method internally, while that method can still be called independently. And what about when the operation needs to be transactional? Example:

        class Bob
        {
            private IWidgetRepository _widgetRepo;
            private IThingRepository _thingRepo;
            private IWhatsitRepository _whatsitRepo;

            public Bob(IWidgetRepository widgetRepo, IThingRepository thingRepo, IWhatsitRepository whatsitRepo)
            {
                _widgetRepo = widgetRepo;
                _thingRepo = thingRepo;
                _whatsitRepo = whatsitRepo;
            }

            public void DoStuff()
            {
                _widgetRepo.StoreSomeStuff();
                _thingRepo.ReadSomeStuff();
                _whatsitRepo.SaveSomething();
            }

            public void DoOtherThing()
            {
                _widgetRepo.UpdateSomething();
                DoStuff();
            }
        }

    How do I keep my access to the database efficient and avoid a constant stream of open-close-open-close on connections and inadvertent escalation to MSDTC and whatnot? If my database is something like SQLite, standard mechanisms like creating nested transactions are going to inherently fail, yet the business layer should not have to concern itself with such things. How do you handle such issues? Does ADO.NET provide simple mechanisms to handle this, or do most people end up wrapping their own custom bits of code around ADO.NET to solve these types of problems?
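
    One common answer is a unit-of-work object that owns a single connection and transaction and hands them to every repository it creates, so Bob composes repositories without ever touching connections. A minimal sketch, assuming plain ADO.NET and illustrative repository names (only the Widget repository is shown):

        using System;
        using System.Data;
        using System.Data.SqlClient;

        public interface IWidgetRepository { void StoreSomeStuff(); }

        // The repository receives the shared connection/transaction instead of opening its own.
        public class WidgetRepository : IWidgetRepository
        {
            private readonly IDbConnection _conn;
            private readonly IDbTransaction _tx;
            public WidgetRepository(IDbConnection conn, IDbTransaction tx) { _conn = conn; _tx = tx; }

            public void StoreSomeStuff()
            {
                using (var cmd = _conn.CreateCommand())
                {
                    cmd.Transaction = _tx;
                    cmd.CommandText = "INSERT INTO Widget (Name) VALUES ('x')";
                    cmd.ExecuteNonQuery();
                }
            }
        }

        public class UnitOfWork : IDisposable
        {
            private readonly IDbConnection _conn;
            private readonly IDbTransaction _tx;

            public UnitOfWork(string connectionString)
            {
                _conn = new SqlConnection(connectionString); // or any other IDbConnection (SQLite etc.)
                _conn.Open();
                _tx = _conn.BeginTransaction();
                Widgets = new WidgetRepository(_conn, _tx);
            }

            public IWidgetRepository Widgets { get; private set; }

            public void Commit() { _tx.Commit(); }
            public void Dispose() { _tx.Dispose(); _conn.Dispose(); }
        }

    Bob would take the unit of work (or a factory for one) instead of the three repositories, call Commit() when the batch succeeds, and let Dispose() roll back otherwise; because everything shares one connection and one local transaction, nothing gets promoted to a distributed transaction.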

    Read the article

  • How does Crystal Reports Runtime Licensing work?

    - by GluedHands
    I am designing an application in C# and I want to use some Crystal Reports in it. I am selling this application freelance to a small business; it is the first program I have ever sold. I have Crystal Reports 2008, which I am using to design the reports. Do I need any kind of licensing from Business Objects to include the Crystal Reports runtime for report printing in my application, or do I not need to worry about it as long as I have a licensed version of Crystal Reports 2008 on my development machine? The client would only need to print the reports that I have designed on my machine, not design their own. The reports would be saved as files; the application will load a saved report and print it with the provided data. I did see this article, which answers most of my question, but it does not say whether that covers loading saved report documents. Any additional information for a commercial-product newbie is gladly appreciated.

    Read the article

  • Multiple Solution Layout for ASP.NET Web Portal?

    - by Jared S
    At work, we've developed a custom ASP.NET web portal (that's very similar to iGoogle). We have "Apps" (self-contained, large web forms) and "Modules" (similar to Google Gadgets). Currently we use a single-solution model. Right now we have 3 core projects, 60 application projects and 80 module projects. To reduce copying and pasting between projects, we're going to factor common functionality (data access, business logic) out into separate projects. I'd also like to introduce unit tests, which is going to increase the number of projects even more. We've already reached the point where Visual Studio is choking on the number of projects; we generally load only the 3 core projects plus whichever app's or module's project we're working on. Would a different solution structure help us out? Our number of projects is only going to increase. In general, an app or module references only the 3 core projects. Soon, apps/modules may start referencing the Data Access/Business Logic projects, but in general apps and modules do not reference one another. So, to recap: what is the best practice for solution structure when there are MANY projects that use a small number of core projects?

    Read the article

  • asp.net, wcf authentication and caching

    - by andrew
    I need to place my app's business logic into a WCF service. The service shouldn't depend on ASP.NET, and there is a lot of data about the authenticated user which is frequently used in the business logic, hence it's supposed to be cached (probably using a distributed cache). As for authentication, I'm going to use two-level authentication: forms authentication on the front end, and message username authentication on the back end (the WCF service). The same custom membership provider is supposed to be used for both. To cache the authenticated user data, I'm going to implement two service methods: 1) Authenticate, which will retrieve the needed data and place it into the cache (with the username used as the key), and 2) SignOut, which will remove the data from the cache. Question 1: is it correct to perform authentication this way (in two places)? Question 2: is this caching strategy worth using, or should I look at using an ASP.NET-compatible service and ASP.NET session state? Maybe these questions are too general, but I'd like to get any suggestions or recommendations. Any ideas?
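
    A minimal sketch of the two operations described above, assuming the .NET 4 in-process MemoryCache keyed by username (the service and profile type names are illustrative); a distributed cache could later be swapped in behind the same pair of calls.

        using System;
        using System.Runtime.Caching;

        public class UserProfile { public string Username; /* frequently used user data */ }

        public class AuthenticationService
        {
            private static readonly ObjectCache Cache = MemoryCache.Default;

            public UserProfile Authenticate(string username)
            {
                // The custom membership provider has already validated the credentials by this point.
                var profile = LoadProfileFromStore(username);             // hypothetical data access
                Cache.Set(username, profile, DateTimeOffset.Now.AddMinutes(30));
                return profile;
            }

            public void SignOut(string username)
            {
                Cache.Remove(username);
            }

            private UserProfile LoadProfileFromStore(string username)
            {
                return new UserProfile { Username = username };
            }
        }

    The sliding question of staleness is handled here with a simple absolute expiration; whatever the business rules are for how long user data may be cached would replace the 30-minute value.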

    Read the article

  • Entity Framework Validation & usage

    - by kmsellers
    I'm aware there is an AssociationChanged event; however, this event fires after the association is made, and there is no AssociationChanging event. So if I want to throw an exception for some validation reason, how do I do this and get back to my original value? Also, I would like to default values on my entity based on information from other entities, but only when I know the entity is being instantiated for insertion into the database. How do I tell the difference between that and the object being instantiated because it is about to be populated from existing data? Am I even supposed to know? Is that considered business logic that should live outside my entity? If that's the case, should I be designing controller classes to wrap all these entities? My concern is that if I deliver back an entity, I want the client to get access to the properties, but I want to retain tight control over how they are set, defaulted, validated, etc. Every example I've seen references the context, which is outside of my entity partial-class validation, right? BTW, I looked at the EFPocoAdapter and for the life of me cannot determine how to populate lists from within my POCO class... does anyone know how I get to the context from an EFPoco class?

    Read the article

  • Remote interface lookup-problem in Glassfish3

    - by andersmo
    I have deployed a WAR file (with action classes and a facade) and a JAR file with EJB components (a stateless bean, a couple of entities and a persistence.xml) on GlassFish 3. My problem is that I can't look up the remote interface of the stateless bean from my facade. My bean and interface look like:

        @Remote
        public interface RecordService { ...

        @Stateless(name="RecordServiceBean", mappedName="ejb/RecordServiceJNDI")
        public class RecordServiceImpl implements RecordService {
            @PersistenceContext(unitName="record_persistence_ctx")
            private EntityManager em; ...

    In server.log the portable JNDI names look like:

        Portable JNDI names for EJB RecordServiceBean : [java:global/recordEjb/RecordServiceBean, java:global/recordEjb/RecordServiceBean!domain.service.RecordService]|#]

    and my facade:

        ... InitialContext ctx = new InitialContext();
        try {
            recordService = (RecordService) ctx.lookup("java:global/recordEjb/RecordServiceBean!domain.service.RecordService");
        } catch (Throwable t) {
            System.out.println("ooops");
            try {
                recordService = (RecordService) ctx.lookup("java:global/recordEjb/RecordServiceImpl");
            } catch (Throwable t2) {
                System.out.println("noooo!");
            } ...
        }

    When the facade makes the first call this exception occurs:

        javax.naming.NamingException: Lookup failed for 'java:global/recordEjb/RecordServiceBean!domain.service.RecordService' in SerialContext [Root exception is javax.naming.NamingException: ejb ref resolution error for remote business interfacedomain.service.RecordService [Root exception is java.lang.ClassNotFoundException: domain.service.RecordService]]

    and the second call:

        javax.naming.NamingException: Lookup failed for 'java:global/recordEjb/RecordServiceBean' in SerialContext [Root exception is javax.naming.NamingException: ejb ref resolution error for remote business interfacedomain.service.RecordService [Root exception is java.lang.ClassNotFoundException: domain.service.RecordService]]

    I have also tested injecting the bean with the @EJB annotation:

        @EJB(name="RecordServiceBean")
        private RecordService recordService;

    but that doesn't work either. What have I missed? I tried with an ejb-jar.xml, but that shouldn't be necessary. Can anyone tell me how to fix this problem?

    Read the article

  • calling same function on different buttons not loaded yet

    - by Jordan Faust
    I cannot get this to work for every button, and I cannot find anything explaining why. I'm guessing it is something small that I am missing.

        $(document).ready(function() {
            // delete the selected row from the database
            $(document).on('click', '#business-area-delete-button', { model: "BusinessArea" }, deleteRow);
            $(document).on('click', '#business-type-delete-button', { model: "BusinessType" }, deleteRow);
            $(document).on('click', '#client-delete-button', { model: "Client" }, deleteRow);
            $(document).on('click', '#client-type-delete-button', { model: "ClientType" }, deleteRow);
            $(document).on('click', '#communication-channel-type', { model: "CommunicationChannelType" }, deleteRow);
            $(document).on('click', '#parameter-type-delete-button', { model: "ParameterType" }, deleteRow);
            $(document).on('click', '#validation-method-delete-button', { model: "ValidationMethod" }, deleteRow);
        });

    The event handler:

        function deleteRow(event) {
            $.ajax({
                type: 'POST',
                data: { id: $(".delete-row").attr("id") },
                url: "/mysite/admin/delete" + event.data.model,
                success: function(data, textStatus) {
                    $('#main-content').html(data);
                },
                error: function(XMLHttpRequest, textStatus, errorThrown) {
                    jQuery('#alerts').html(XMLHttpRequest.responseText);
                },
                complete: function(XMLHttpRequest, textStatus) {
                    placeAlerts();
                }
            });
            return false;
        }

    This works only for the button with id validation-method-delete-button. I use document and not the button itself because the buttons are contained in a template that is loaded later via AJAX. I have this working for a similar function that selects a row in a table; however, I am not attempting to pass data in that scenario.

    Read the article

  • Free cross-platform library to convert numbers (money amounts) to words?

    - by bialix
    I'm looking for a cross-platform library which I can use in my C application to convert money amounts (e.g. $123.50) to words (one hundred twenty-three dollars and fifty cents). I need support for multiple currencies: dollars, euros, UK pounds, etc. I understand it is not hard at all to write my own implementation, but I'd like to avoid reinventing the wheel. I've tried to google it, but there is too much noise related to MS Word converters. Can anybody suggest something? UPDATE: numerous comments suggest writing my own implementation because it's really an easy task, and I agree. My point was about supporting multiple currencies at the same time, and the different business rules for spelling the amounts (should the fractional part be written as text or as numbers? etc.). As I understand it, serious business applications have such a library inside, but there seems to be nothing open source available, maybe because it looks like a very easy task. I'm going to write my own library and then open-source it. Thanks to all.
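
    As a rough illustration of the "write it yourself" route mentioned in the update (sketched here in C# for brevity; the same table-driven structure ports directly to C), this handles en-US dollars only. Other currencies, plural/gender rules, and the text-vs-numbers choice for the fractional part would be driven by per-currency tables and flags rather than hard-coded strings.

        using System;

        static class MoneyWords
        {
            static readonly string[] Ones =
            {
                "zero","one","two","three","four","five","six","seven","eight","nine","ten",
                "eleven","twelve","thirteen","fourteen","fifteen","sixteen","seventeen","eighteen","nineteen"
            };
            static readonly string[] Tens = { "", "", "twenty","thirty","forty","fifty","sixty","seventy","eighty","ninety" };

            static string Spell(long n)
            {
                if (n < 20) return Ones[n];
                if (n < 100) return Tens[n / 10] + (n % 10 > 0 ? "-" + Ones[n % 10] : "");
                if (n < 1000) return Ones[n / 100] + " hundred" + (n % 100 > 0 ? " " + Spell(n % 100) : "");
                if (n < 1000000) return Spell(n / 1000) + " thousand" + (n % 1000 > 0 ? " " + Spell(n % 1000) : "");
                return Spell(n / 1000000) + " million" + (n % 1000000 > 0 ? " " + Spell(n % 1000000) : "");
            }

            public static string ToWords(decimal amount)
            {
                long dollars = (long)decimal.Truncate(amount);
                int cents = (int)((amount - dollars) * 100);
                return Spell(dollars) + " dollars and " + Spell(cents) + " cents";
            }
        }

        // MoneyWords.ToWords(123.50m) -> "one hundred twenty-three dollars and fifty cents"

    The hard part, as the update says, is not the digit-grouping recursion but externalizing the currency names and spelling rules per locale.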

    Read the article

  • Why Can't Businesses Upgrade their Browsers from IE6/IE7?

    - by viatropos
    I have read a lot these past few weeks about IE6, trying to see whether it is really that bad to make pages look right in it. I only learned HTML and CSS this past year, so I've been spoiled to start with basically CSS3 and HTML5, and I can do some really cool stuff super fast. I'm no IE6 master and I don't have years of experience with IE, so I thought it would take a little time to figure out all the known IE6/7 hacks and just implement them. But it's way harder than that (or maybe just way too much work). I'd have to either completely rebuild my design using "Internet Explorer principles", or cut out a lot of the neat things I can do with more recent technologies. For a million and one other reasons, everyone who builds things online seems to think IE should die. My question is: why can't businesses upgrade their browsers? When I work with businesses, they almost always resist at first, but five seconds later I'll show them what a site looks like on my computer and talk about how great the latest stuff is (how much more secure newer browsers are, the famous IE security cases, how much smoother and faster the new browsers are, how the IE team has basically missed the boat entirely, how much more smoothly business processes run, etc.), and they get excited! Within a few seconds they're up and running with Chrome or something. So can businesses really not upgrade, and for what reasons? The main reason I can think of is that they have an old version of Windows. But a) wasn't there a legal case about this? and b) somebody must have figured out how to install Chrome or Firefox on ancient versions of Windows by now.

    Read the article

  • Help Needed Finding a Programmer

    - by ssean
    Good morning, I am trying to find a programmer to write a piece of custom software for my business. I plan on using this software to manage my business, and possibly sell it to other companies (in the same industry) at a later date. I've never hired a programmer before, so I'm not sure what to expect or where to begin. I know exactly what features I need and how I want it laid out; I just need someone who can take my ideas and make them happen. The software will be used to manage customer information and keep track of orders. What I think I need: a SQL Server or similar database located at our office; a desktop application that connects to the database server over the LAN (it cannot be browser based); multiple-user support (simultaneous users accessing the system); scalability (currently we have 5 employees, but who knows what the future will bring); and multi-platform support (Windows, Linux). I posted a job offer through Elance, which seems to have raised more questions than answers. How do I decide which language(s) will work best for my situation? (I have received offers for C#, Eclipse, .NET, PowerBuilder, etc. - I want to make sure that I choose the best one now so I don't run into problems later.) Does the programmer hold any rights to the software? (I plan to offer the software for sale at a later date.) Any help or insight would be appreciated, and I'd be happy to clarify anything if it helps. Thanks in advance!

    Read the article

  • Building a structure/object in a place other than the constructor

    - by Vishal Naidu
    I have different types of objects representing the same business entity: UIObject, PowershellObject, DevCodeModelObject and WMIObject are all different representations of the same entity. So if the entity is Animal, then I have AnimalUIObject, AnimalPSObject, AnimalModelObject, AnimalWMIObject, etc. The implementations of AnimalUIObject, AnimalPSObject and AnimalModelObject all live in separate assemblies. My scenario is that I want to verify the contents of the business entity Animal irrespective of the assembly it came from, so I created a GenericAnimal class to represent the Animal entity. To GenericAnimal I added the following constructors: GenericAnimal(AnimalUIObject), GenericAnimal(AnimalPSObject), GenericAnimal(AnimalModelObject). This makes GenericAnimal depend on all the underlying assemblies, so that when verifying I deal only with this abstraction. The other way to do this is to give GenericAnimal an empty constructor and have each underlying assembly provide a Transform() method which builds the GenericAnimal. Both approaches have pros and cons. The first approach: pro - all construction logic is in one place, the GenericAnimal class; con - GenericAnimal must be touched every time there is a new representation form. The second approach: pro - construction responsibility is delegated to the underlying assembly; con - as construction logic is spread across assemblies, if tomorrow I need to add a property X to GenericAnimal I have to touch all the assemblies to change each Transform method. Which approach looks better, or which would you consider the lesser evil? Is there an alternative that is better than both?
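
    A minimal sketch of a middle ground (the types and properties below are illustrative): put GenericAnimal and a small conversion interface in a shared assembly, and have each representation assembly implement the interface. Adding a new representation then never touches GenericAnimal, and the conversion logic still lives next to the type that knows its own fields.

        // Shared assembly: the verification-facing type plus the conversion contract.
        public class GenericAnimal
        {
            public string Name { get; set; }
            public int LegCount { get; set; }
        }

        public interface IGenericAnimalConvertible
        {
            GenericAnimal ToGenericAnimal();
        }

        // UI-object assembly: owns the mapping from its own shape to the generic one.
        public class AnimalUIObject : IGenericAnimalConvertible
        {
            public string DisplayName { get; set; }
            public int Legs { get; set; }

            public GenericAnimal ToGenericAnimal()
            {
                return new GenericAnimal { Name = DisplayName, LegCount = Legs };
            }
        }

    The trade-off from the question doesn't disappear: adding property X to GenericAnimal still means revisiting each ToGenericAnimal implementation, but the compiler at least points at every place that needs the change, and GenericAnimal itself stays free of references to the representation assemblies.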

    Read the article

  • Mixing stored procedures and ORM

    - by Jason
    The company I work for develops a large application which is almost entirely based on stored procedures. We use classic ASP and SQL Server, and the major part of the business logic is contained inside those stored procedures. For example (I know, this is bad...), a single stored procedure can be used for different purposes (insert, update, delete, some calculations, ...). Most of the time a stored procedure operates on related tables, but this is not always the case. We are planning to move to ASP.NET in the near future. I have read a lot of posts on Stack Overflow recommending that I move the business logic out of the database. The thing is, I have tried to convince the people who make the decisions at our company, and there is nothing I can do to change their minds. Since I want to be able to use the advantages of object-oriented programming, I want to map the tables to actual classes. So far, my solution is to use an ORM (Entity Framework 4 or NHibernate) to avoid mapping the objects manually (mostly for retrieving data) and use some kind of data access layer to call the existing stored procedures (for saving). I want your advice on this. Do you think it is a good solution? Any ideas?
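
    A minimal sketch of that split, assuming EF 4's ObjectContext and an illustrative Customer entity and stored procedure name: reads go through the ORM so the code works with objects, while saves keep calling the existing sproc via ExecuteStoreCommand.

        using System.Data.Objects;   // EF 4
        using System.Linq;

        public class CustomerRepository
        {
            private readonly ObjectContext _ctx;
            public CustomerRepository(ObjectContext ctx) { _ctx = ctx; }

            public Customer GetById(int id)
            {
                // Mapped query: no hand-written DataReader plumbing.
                return _ctx.CreateObjectSet<Customer>().Single(c => c.Id == id);
            }

            public void Save(Customer c)
            {
                // The business-critical stored procedure stays exactly where it is.
                _ctx.ExecuteStoreCommand(
                    "EXEC dbo.Customer_Save @Id = {0}, @Name = {1}", c.Id, c.Name);
            }
        }

        public class Customer { public int Id { get; set; } public string Name { get; set; } }

    EF can also import stored procedures as function mappings on the model, which amounts to the same split with a little more designer work; either way the rest of the application only ever sees the repository.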

    Read the article

  • Is it okay to violate the principle that collection properties should be readonly for performance?

    - by uriDium
    I used FxCop to analyze some code I had written. I had exposed a collection via a setter, and I understand why this is not good: having the backing store change when I don't expect it is a very bad idea. Here is my problem, though. I retrieve a list of business objects from a data access object (DAO). I then need to add that collection to another business class, and I was doing it with the setter. The reason I did this was that a straight assignment is going to be faster than inserting hundreds of thousands of objects into the collection again, one at a time, via an addElement method. Is it okay to have a setter for a collection in some scenarios? I thought of instead having a constructor which takes a collection. I also thought maybe I could pass the object into the DAO and let the DAO populate it directly. Are there any other, better ideas?
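
    A minimal sketch of the constructor/bulk-load options (class names are illustrative): the property stays read-only, but the DAO result is handed over in one call rather than element by element, so only the references are copied once.

        using System.Collections.Generic;
        using System.Collections.ObjectModel;

        public class BusinessClass
        {
            private readonly List<BusinessObject> _items = new List<BusinessObject>();

            public BusinessClass() { }

            // Constructor form: hand the DAO result in at creation time.
            public BusinessClass(IEnumerable<BusinessObject> items)
            {
                _items.AddRange(items);
            }

            public ReadOnlyCollection<BusinessObject> Items
            {
                get { return _items.AsReadOnly(); }
            }

            // Bulk-load form: one call, no per-element Add loop in the caller.
            public void AddItems(IEnumerable<BusinessObject> items)
            {
                _items.AddRange(items);
            }
        }

        public class BusinessObject { }

    AddRange over a few hundred thousand references is cheap compared with re-fetching or cloning the objects, so the performance concern that motivated the public setter largely goes away while the FxCop rule stays satisfied.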

    Read the article

  • what's a good technique for building and running many similar unit tests?

    - by jcollum
    I have a test setup where I have many very similar unit tests that I need to run. For example, there are about 40 stored procedures that need to be checked for existence in the target environment. However, I'd like all the tests to be grouped by their business unit, so there'd be 40 instances of a very similar TestMethod in 40 separate classes. Kinda lame. One other thing: each group of tests needs to be in its own solution, so Business Unit A will have a solution called Tests.BusinessUnitA. I'm thinking that I can set this all up by passing a configuration object (with the name of the stored proc to check, among other things) to a TestRunner class. The problem is that I'm losing the atomicity of my unit tests: I wouldn't be able to run just one of the tests, I'd have to run all the tests in the TestRunner class. This is what the code looks like at the moment. Sure, it's nice and compact, but if test 8 fails, I have no way of running just test 8.

        TestRunner runner = new TestRunner(config, this.TestContext);
        var runnerType = typeof(TestRunner);
        var methods = runnerType.GetMethods()
            .Where(x => x.GetCustomAttributes(typeof(TestMethodAttribute), false)
                         .Count() > 0).ToArray();

        foreach (var method in methods)
        {
            method.Invoke(runner, null);
        }

    So I'm looking for suggestions for writing a group of unit tests that take in a configuration object but don't require me to generate many, many TestMethods. This looks like it might require code generation, but I'd like to solve it without that.
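
    One option, sketched with NUnit's TestCaseSource (a framework switch rather than the MSTest code above; the sproc list and checker class are stand-ins): each stored procedure becomes its own named, individually runnable test case, while the check itself is written once per business-unit solution.

        using NUnit.Framework;

        [TestFixture]
        public class BusinessUnitASprocTests
        {
            // Hypothetical list; in practice it could come from the configuration object.
            static readonly string[] Sprocs = { "usp_GetCustomer", "usp_SaveOrder", "usp_DeleteWidget" };

            [Test, TestCaseSource("Sprocs")]
            public void StoredProcedureExists(string sprocName)
            {
                Assert.IsTrue(SprocChecker.Exists(sprocName),
                              sprocName + " is missing in the target environment");
            }
        }

        // Stand-in for whatever queries the target database's catalog views.
        public static class SprocChecker
        {
            public static bool Exists(string name)
            {
                // e.g. SELECT COUNT(*) FROM INFORMATION_SCHEMA.ROUTINES WHERE ROUTINE_NAME = @name
                return true;
            }
        }

    Because the runner materializes one test per element of the source, test 8 can fail and be re-run on its own, which restores the atomicity the reflective TestRunner loses.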

    Read the article

  • Can it be done?

    - by bzarah
    We are in the design phase of a project whose goal is replatforming a classic ASP application to ASP.NET 4.0. The system needs to be entirely web based. There are several new requirements that make this a challenging project: (1) the system needs to be database independent and must, at version 1.0, support MS SQL Server, Oracle, MySQL, Postgres and DB2; (2) it must allow easy reporting from the database by third-party reporting packages; (3) it must allow an administrative end user to create their own tables in the database through the web-based interface; (4) it must allow an administrative end user to design/configure a (web-based) user interface where they can select tables and fields in the system (either our system's core tables or their own custom tables created in #3); (5) it must allow an administrative end user to create and maintain relationships between these custom tables, and between these tables and our system's core tables; and (6) it must allow an administrative end user to create business rules that enforce validation, show/hide UI elements, and block certain actions based on the identity of specific users, user groups or privileges. Essentially it's a system that has some core ticket-tracking functionality but lets the end user extend the interface, the business rules and the database. Is this possible to build in a .NET, web-based environment? If so, what do you think the level of effort would be to get this done? We are currently a 6-person shop with 2.5 full-time developers.
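
    For requirement (1) alone, a minimal sketch of the usual starting point (the provider names in the comment are just examples): code against the ADO.NET provider-factory abstractions so the concrete database driver is a configuration choice rather than a compile-time one. The user-defined tables and rules in (3) through (6) are a much larger, metadata-driven design question that this does not touch.

        using System.Data;
        using System.Data.Common;

        public static class Db
        {
            public static IDbConnection Open(string providerInvariantName, string connectionString)
            {
                DbProviderFactory factory = DbProviderFactories.GetFactory(providerInvariantName);
                DbConnection conn = factory.CreateConnection();
                conn.ConnectionString = connectionString;
                conn.Open();
                return conn;
            }
        }

        // e.g. Db.Open("System.Data.SqlClient", "...") or Db.Open("Oracle.DataAccess.Client", "...")
        // with the provider name and connection string read from configuration.

    The same idea extends to commands and parameters via the factory, or you can let an ORM/data-access library provide the dialect abstraction; either way, keeping SQL generation behind one seam is what makes five databases at version 1.0 plausible.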

    Read the article

  • Is there a point to have multiple VS projects for an ASP.NET MVC application?

    - by mare
    I'm developing an MVC application where I currently have 3 projects in the solution: Core (intended for repositories, business classes, models, HttpModules, HttpFilters, settings, etc.); Data Access (data providers, for instance SqlDataProvider for working with a SQL Server datastore, which implements the repository interfaces, and XmlDataProvider, which also implements the repository interfaces but for local XML files as the datastore); and the ASP.NET MVC project (all the typical stuff: UI, controllers, content, scripts, resources and helpers). I have no models in my ASP.NET MVC project, and I've just run into a problem because of that: I want to use the new DataAnnotations feature in MVC 2 on my business classes, which are, as I said, in Core, but I want to be able to localize the error messages. This is where my problem starts: I cannot use the Resources from the MVC project in Core. The MVC project references Core, and it cannot be the other way around. My options as I see them are: 1) move the Resources out, but this would require correcting a whole bunch of views and controllers where I reference them, or 2) completely restructure my app. What are your thoughts on this? Also, should I just move everything business related into the Models folder in the MVC project? Does it even make sense to have it structured like this, given that we could just make subfolders for everything under the MVC project? The whole Core library is not intended to ever be used for anything else, so there is actually no point in compiling it to a separate DLL. Suggestions appreciated.
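
    A minimal sketch of why the resources only need to be somewhere Core can reach (ValidationMessages is a hypothetical stand-in for a .resx-generated class in a shared or Core-owned assembly): DataAnnotations attributes can point at a resource type, so the business classes stay localizable without Core referencing the MVC project.

        using System.ComponentModel.DataAnnotations;

        public class Customer
        {
            [Required(ErrorMessageResourceType = typeof(ValidationMessages),
                      ErrorMessageResourceName = "CustomerNameRequired")]
            [StringLength(100, ErrorMessageResourceType = typeof(ValidationMessages),
                               ErrorMessageResourceName = "CustomerNameTooLong")]
            public string Name { get; set; }
        }

        // Stub standing in for the .resx-generated class; the real one would live in a
        // resources assembly (or in Core itself) that both Core and the MVC project reference.
        public static class ValidationMessages
        {
            public static string CustomerNameRequired { get { return "Name is required"; } }
            public static string CustomerNameTooLong  { get { return "Name is too long"; } }
        }

    That makes option 1 less painful than it sounds: only the validation-message resources need to move next to (or into) Core, while view-specific resources can stay in the MVC project.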

    Read the article
