Search Results

Search found 8467 results on 339 pages for 'map layers'.

Page 215/339

  • N-Tier Architecture - Structure with multiple projects in VB.NET

    - by focus.nz
    I would like some advice on the best approach to use in the following situation. I will have a Windows application and a web application (presentation layers), and these will both access a common business layer. The business layer will look at a configuration file to find the name of the DLL (data layer) to which it will create a reference at runtime (is this the best approach?). The reason for creating the reference to the data access layer at runtime is that the application will interface with a different 3rd-party accounting system depending on what the client is using, so I would have a separate data access layer to support each accounting system. These could be separate setup projects; each client would use one or the other, and wouldn't need to switch between the two.

    Projects:

    MyCompany.Common.dll - contains the interfaces; all other projects have a reference to this one
    MyCompany.Windows.dll - Windows Forms project, references MyCompany.Business.dll
    MyCompany.Web.dll - website project, references MyCompany.Business.dll
    MyCompany.Business.dll - business layer, references MyCompany.Data.* (at runtime)
    MyCompany.Data.AccountingSys1.dll - data layer for accounting system 1
    MyCompany.Data.AccountingSys2.dll - data layer for accounting system 2

    The project MyCompany.Common.dll would contain all the interfaces:

        Public Interface ICompany
            ReadOnly Property Id() As Integer
            Property Name() As String
            Sub Save()
        End Interface

        Public Interface ICompanyFactory
            Function CreateCompany() As ICompany
        End Interface

    The projects MyCompany.Data.AccountingSys1.dll and MyCompany.Data.AccountingSys2.dll would contain classes like the following:

        Public Class Company
            Implements ICompany

            Protected _id As Integer
            Protected _name As String

            Public ReadOnly Property Id As Integer Implements MyCompany.Common.ICompany.Id
                Get
                    Return _id
                End Get
            End Property

            Public Property Name As String Implements MyCompany.Common.ICompany.Name
                Get
                    Return _name
                End Get
                Set(ByVal value As String)
                    _name = value
                End Set
            End Property

            Public Sub Save() Implements MyCompany.Common.ICompany.Save
                Throw New NotImplementedException()
            End Sub
        End Class

        Public Class CompanyFactory
            Implements ICompanyFactory

            Public Function CreateCompany() As ICompany Implements MyCompany.Common.ICompanyFactory.CreateCompany
                Return New Company()
            End Function
        End Class

    The project MyCompany.Business.dll would provide the business rules and retrieve data from the data layer:

        Public Class Companies
            Public Shared Function CreateCompany() As ICompany
                Dim factory As New MyCompany.Data.CompanyFactory
                Return factory.CreateCompany()
            End Function
        End Class

    Any opinions/suggestions would be greatly appreciated.
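    A minimal sketch of the runtime lookup described above, written in C# for brevity (the "DataLayerAssembly" setting name and the factory type name are assumptions, not part of the question):

        using System;
        using System.Configuration;
        using System.Reflection;

        public static class DataLayerResolver
        {
            // Reads the configured data layer assembly name and instantiates its
            // factory through the shared ICompanyFactory interface.
            public static ICompanyFactory CreateFactory()
            {
                string assemblyName = ConfigurationManager.AppSettings["DataLayerAssembly"];
                Assembly dataLayer = Assembly.Load(assemblyName); // e.g. "MyCompany.Data.AccountingSys1"
                Type factoryType = dataLayer.GetType(assemblyName + ".CompanyFactory", true);
                return (ICompanyFactory)Activator.CreateInstance(factoryType);
            }
        }

    With something like this in place, MyCompany.Business.dll compiles only against MyCompany.Common.dll, and swapping accounting systems becomes purely a configuration change.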

    Read the article

  • Multithreading in C#

    - by Lalit Dhake
    Hi, I have a console application. In it, a process fetches data from the database through different layers (business and data access) and stores the fetched data in the corresponding objects: if data is fetched for a student, it is assigned to a Student object, and the same goes for a school. Then a delegate calls a certain method that generates output as required. This process will execute many times, say 10 times, and I want those runs to execute simultaneously, not one starting only after the previous one finishes. After the 1st process starts, the 2nd, 3rd ... 10th should start as well; in other words, it should be multithreaded. How can I achieve this? Will it give me errors while opening and closing the database connection? I have tried this concept, but when the 1st thread starts, the data fetched for thread 1 is stored in its respective objects (Student, School), and when the 2nd thread starts simultaneously, the data in the 1st thread's objects gets changed while control flows through the program. What do I have to do?
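    For what it's worth, a hedged sketch of one way to do this, assuming .NET 4's Task Parallel Library (Student, School, and FetchAndProcess are stand-ins for the asker's types, not real APIs):

        using System.Collections.Generic;
        using System.Threading.Tasks;

        public class BatchRunner
        {
            // Starts all ten runs at once; each run gets its OWN Student/School
            // instances, so parallel runs cannot overwrite each other's data.
            public static void RunAll()
            {
                var tasks = new List<Task>();
                for (int i = 0; i < 10; i++)
                {
                    int runId = i; // copy the loop variable; closures must not share it
                    tasks.Add(Task.Factory.StartNew(() =>
                    {
                        var student = new Student(); // per-task state, never a shared field
                        var school = new School();
                        FetchAndProcess(runId, student, school);
                    }));
                }
                Task.WaitAll(tasks.ToArray()); // block until every run finishes
            }
        }

    Each run should also open and close its own database connection rather than sharing one across threads; ADO.NET connection pooling makes that cheap.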

    Read the article

  • Conditional Drag and Drop Operations in Flex/AS3 Tree

    - by user163757
    Good day everyone. I am currently working with a hierarchical tree structure in AS3/Flex and want to enable drag-and-drop capabilities under certain conditions:

    1. Only parent/top-level nodes can be moved.
    2. Parent/top-level nodes must remain at this level; they cannot be moved to become children of other parent nodes.

    Using the dragEnter event of the tree, I am able to handle condition 1 easily:

        private function onDragEnter(event:DragEvent):void {
            // only parent nodes (map layers) are moveable
            event.preventDefault();
            if (toc.selectedItem.hasOwnProperty("layer"))
                DragManager.acceptDragDrop(event.target as UIComponent);
            else
                DragManager.showFeedback(DragManager.NONE);
        }

    Handling the second condition is proving to be a bit more difficult. I am pretty sure the dragOver event is the place for this logic. I have been experimenting with calculateDropIndex, but that always gives me the index of the parent node, which doesn't help check whether the potential drop location is acceptable or not. Below is some pseudocode of what I am looking to accomplish:

        private function onDragOver(e:DragEvent):void {
            // if the potential drop location has parents,
            //     don't allow the drop
            // else
            //     allow the drop
        }

    Can anyone provide advice on how to implement this?

    Read the article

  • VB.Net Custom Object Master-Detail Data Binding

    - by clawson
    Since beginning to use VB.Net some years ago I have slowly become familiar with the data binding features of .Net. However, I often find myself bewildered by its behavior, and instead of discovering the correct way it should work I find some dirty workaround to suit my needs and continue on. Needless to say, my problems continue to arise. I am using custom objects as the data sources for controls and often for entire forms, and I find it frustrating trying to separate business logic from the graphical interface (that could be a new question entirely). So for a lot of objects I generate a form which holds the BindingSource for the object. When I create each form with the New constructor I explicitly pass it the object to which it should be bound, and then set this passed object as the DataSource of the BindingSource (that's a mouthful!). Now, the master object (say, bound to each form) often contains a list of objects which I would like displayed in a DataGridView. I (sometimes) create and modify these child objects in their own form (again creating a databinding the same way as the master form), but when I add them to the list in the master object, the DataGridView won't update with the new items. So my question really has a few layers:

    1. How can I easily/efficiently/correctly update this DataGridView with the list of detail objects when I add them to the list of the master object?
    2. Is this approach to data binding good/viable?
    3. What's the best way to separate business logic from the graphical interface?

    Thanks for the help!
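    One detail worth noting for question 1 (hedged, since the question doesn't show the list type): a plain List(Of T) raises no change notifications, so a bound grid never learns about Add calls, whereas BindingList(Of T) raises ListChanged. A C# sketch of the idea, with hypothetical master/detail names:

        using System.ComponentModel;

        public class DetailRecord
        {
            public string Description { get; set; }
        }

        public class MasterRecord
        {
            // BindingList<T> raises ListChanged, so a DataGridView bound to it
            // refreshes automatically when items are added or removed.
            private readonly BindingList<DetailRecord> _details = new BindingList<DetailRecord>();
            public BindingList<DetailRecord> Details { get { return _details; } }
        }

        // Usage: detailGrid.DataSource = master.Details;
        // master.Details.Add(new DetailRecord()); // grid updates without manual refresh

    If changing the list type isn't an option, calling ResetBindings(False) on the detail BindingSource after each add forces the same refresh.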

    Read the article

  • Feasibility of using Silverlight for web and windows client with common code base for data intensive

    - by Kabeer
    Hello. Recently in a conversation, someone suggested that I make use of Silverlight if I am targeting a web client and a Windows client for the same application, since this would cut down my effort in supporting the contrast between the two presentation layers. Mine is a product that will be deployed in enterprises, and both web and Windows clients are desirable. With the above context, I have a few queries:

    1. Is it advisable to adopt the recommended approach, and is this approach becoming a trend?
    2. Besides some configuration and deployment tweaking, will this significantly reduce effort on the presentation layer?
    3. Is there a possibility that my future prospects (for this product) will resist a Silverlight footprint?
    4. Will I be able to make use of the ASP.Net MVC pattern?
    5. Will there be any performance implication for the web client?
    6. Will Silverlight support incremental loading of controls?
    7. If my back end includes SSRS, will I be able to harness all its front-end features with Silverlight?
    8. Will I be able to support additional devices with the same code base in the future? Mine is a very data-intensive application, from both a data entry and a reporting perspective.
    9. Is it advisable to use 3rd-party controls (like Telerik) for improved user experience and developer productivity?
    10. Are there any professional-quality open source Silverlight controls (libraries) available?

    Further, I seek information on best practices in the context I shared above.

    Read the article

  • Time to start returning IQueryable<T> instead of IList<T> to my Web UI / Web API Layer?

    - by JohnnyO
    I've got a multi-layer application that starts with the repository pattern for all data access, returning IQueryable to the services layer. The services layer, which includes all of the business logic, returns IList to the controllers (note: I'm using ASP.NET MVC for the UI layer). The benefit of returning IQueryable in the data access layer is that it keeps my repositories extremely simple and defers the database queries. However, I'm triggering the database queries in my services layer so that my unit tests are more reliable and I don't give the controllers the flexibility to reshape my queries. That said, I've recently encountered several situations where deferring the execution of queries down to the controllers would have been significantly more performant, because the controllers had to do some UI-specific projections on the data. Additionally, with the emergence of things like OData, I was starting to wonder if end points (e.g. web UIs or web APIs) should be working directly with IQueryable. What are your thoughts? Is it time to start returning IQueryable from the services layer to the UI layer, or stick with IList? This thread here: http://stackoverflow.com/questions/718624/to-return-iqueryablet-or-not-return-iqueryablet seems to vouch for returning IList to the UI layers, but I was wondering if things are changing because of newly emerging technologies and techniques.
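    To make the trade-off concrete, a small sketch (the types and names here are illustrative, not from the question):

        using System.Collections.Generic;
        using System.Linq;

        public class Order
        {
            public int Id { get; set; }
            public bool IsOpen { get; set; }
        }

        public interface IOrderRepository
        {
            IQueryable<Order> Orders { get; } // deferred; no SQL has run yet
        }

        public class OrderService
        {
            private readonly IOrderRepository _repository;
            public OrderService(IOrderRepository repository) { _repository = repository; }

            // Option A: materialize in the service. Queries are predictable, unit
            // tests are reliable, and controllers cannot trigger extra round trips.
            public IList<Order> GetOpenOrders()
            {
                return _repository.Orders.Where(o => o.IsOpen).ToList();
            }

            // Option B: stay deferred. A controller (or an OData endpoint) can
            // project and page the query so only the rows/columns it needs are fetched.
            public IQueryable<Order> QueryOpenOrders()
            {
                return _repository.Orders.Where(o => o.IsOpen);
            }
        }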

    Read the article

  • Extract <name> attribute from KML

    - by Ozaki
    I am using OpenLayers for a mapping service, in which I have several KML layers that use KML feeds from the server to populate data on the map. It currently plots images, points, and vector lines & shapes, but on the points it will not add a label with the value of the <name> element for the placemark in the KML. What I have currently tried is:

        //////////////////// KML Feed for * Layer //
        var surveylinelayer = new OpenLayers.Layer.Vector("First KML Layer", {
            projection: new OpenLayers.Projection("EPSG:4326"),
            strategies: [new OpenLayers.Strategy.Fixed()],
            protocol: new OpenLayers.Protocol.HTTP({
                url: firstKMLURL,
                format: new OpenLayers.Format.KML({
                    extractStyles: true,
                    extractAttributes: true
                })
            }),
            styleMap: new OpenLayers.StyleMap({ "default": KMLStyle })
        });

    Then the style, as follows:

        var KMLStyle = new OpenLayers.Style({
            //label: "${name}", // This method will display nothing
            fillOpacity: 1,
            pointRadius: 10,
            fontColor: "#7E3C1C",
            fontSize: "13px",
            fontFamily: "Courier New, monospace",
            fontWeight: "strong",
            labelXOffset: "0",
            labelYOffset: "-15"
        }, {
            // dynamic label
            context: {
                label: function(feature) {
                    return "Feature Name: " + feature.attributes.name; // also displays nothing
                }
            }
        });

    Example of the KML:

        <Placemark>
            <name>POI1</name>
            <Style>
                <LabelStyle>
                    <color>ffffffff</color>
                </LabelStyle>
            </Style>
            <Point>
                <coordinates>0.000,0.000</coordinates>
            </Point>
        </Placemark>

    When debugging I just hit "feature is undefined", and I am unsure why it would be undefined in this instance.

    Read the article

  • Dropdown menu disappears in IE7

    - by Justine
    A weird problem with a dropdown menu in IE7: http://screenr.com/SNM - the dropdown disappears when the mouse moves to a part that hovers above other layers. The HTML structure looks like this:

        <div class="header">
            <ul class="nav">
                <li><a href="">item</a>
                    <ul><li><a href="">sub-item</a></li></ul>
                </li>
            </ul>
        </div><!-- /header -->
        <div class="featured"></div>
        <div class="content"></div>

    The sub-menu is positioned absolutely and has visibility: hidden, which is then set to visible using jQuery, like so:

        $(".header ul.nav li").hover(function(){
            $(this).addClass("hover");
            $('ul:first', this).css('visibility', 'visible');
        }, function(){
            $(this).removeClass("hover");
            $('ul:first', this).css('visibility', 'hidden');
        });

    I had a problem with the dropdown hiding under other content in IE7, fixed easily by giving a z-index to its parent and the other divs:

        *:first-child+html .header {
            position: relative;
            z-index: 2 !important;
        }
        *:first-child+html .content,
        *:first-child+html .main,
        *:first-child+html .primary,
        *:first-child+html .featured {
            position: relative;
            z-index: 1 !important;
        }

    Now, I have no idea why the menu disappears when it is hovered where it overlaps the other divs. You can view the site live here: http://dev.gentlecode.net/ama/ubezpieczenia.html - I would love any help, I've been staring at this code for ages now without any solution. I guess it's just me tunnel-visioning already... Thanks in advance for any help!

    Read the article

  • General N-Tier Architecture Question

    - by whatispunk
    In an N-Tier app you're supposed to have a business logic layer and a data access layer. Is it bad to simply have two assemblies, BusinessLogicLayer.dll and DataAccessLayer.dll, to handle all this logic? How do you actually represent these layers? It seems silly, the way I've seen it, to have a BusinessLogic class library containing classes like CustomerBusinessLogic.cs, OrderBusinessLogic.cs, etc., each calling their appropriately named cousin in the DataAccessLayer class library, i.e. CustomerDataAccess.cs, OrderDataAccess.cs. I want to create a web app using MVP, and it doesn't seem as cut and dried as this. There are lots of opinions about where the business logic is supposed to go in MVP, and I'm not sure I've found a really great answer yet. I want this project to be easily testable, and I am trying to adhere to TDD methodologies as best I can. I intend to use MSTest and Rhino Mocks for testing. I was thinking of something like the following for my architecture: I'd use LINQ-to-SQL to talk to the database, WCF services to define data contract interfaces for the business logic layer, and then MVP with ASP.NET Forms for the UI/BLL. Now, this isn't the start of this project; most of the LINQ stuff is already done, so it's staying. The WCF service would replace the existing DataAccessLayer assembly and the UI/BLL would replace the BusinessLogicLayer assembly, etc. This sort of makes sense in my head, but it's getting really late. Anyone that's traveled down this path have any guidance? Good links? Warnings? Thanks!

    Read the article

  • Optimizing PHP require_once's for low disk i/o?

    - by buggedcom
    Q1) I'm designing a CMS (who isn't!), but priority is being given to caching. Literally everything is cached: DB rows, DB id queries, configuration data, processed data, compiled templates. Currently it has two layers of caching. The first is an opcode cache or memory cache such as APC, eAccelerator, XCache or memcached. If an entry is not found there, it is then searched for in the secondary slow cache, i.e. PHP includes. Are the opcode caches actually faster than doing a require_once on a PHP file containing a var_export'd array of data? My tests are inconclusive, as my development box (XAMPP with PHP 5.3) keeps throwing errors installing any of the aforementioned programs.

    Q2) The CMS has numerous helper classes that are autoloaded on demand instead of loading all files. Mostly each has a require before it, so no autoloading needs to take place; however, this is not the question. Because a page script can have up to 50/60 helper files included, I have a feeling that if the site were under pressure it would buckle because of all the I/O this incurs. Ignore for the moment that there is an output cache in place that would remove the need for what I am about to suggest, and also that opcode caches would render this moot. What I have tried to do is join all the helper files required for the script's execution into one single file. This is achievable and works well; however, it has the side effect of increasing memory usage dramatically, even though technically the same code is being used. What are your thoughts and opinions on this?

    Read the article

  • Design: Website calling a webservice on the same machine

    - by Chris L
    More of a design/conceptual question. At work the decision was made to have our data access layer be called through web services, so our website would call the web services for any/all data to and from the database. Both the website and the web services will be on the same machine (so no trip across the wire), but the database is on a separate machine (so that requires a trip across the wire regardless). This is all in-house; the website, web service, and database are all within the same company (AFAIK, the web services won't be reused by any other party). To the best of my knowledge, the website will open a port to the web services, and the web services will in turn open another port and go across the wire to the database server to get/submit the data. The trip across the wire can't be avoided, but I'm concerned about the web services standing in the middle. I do agree there need to be distinct layers for the functionality (such as a business layer, data access layer, etc.), but this seems overly complex to me, and I'm also sensing there will be some performance problems down the line. It seems to me it would be better to have the (DAL) assemblies referenced directly within the solution, thus negating the first port-to-port connection. Any thoughts (or links) both for and against this idea would be appreciated. P.S. We're a .NET shop (migrating from VB to C# 3.5).

    Read the article

  • WPF image cropping

    - by Califer
    I'm loading layers of images to make a single image, currently by stacking them all onto a canvas. I've set it up so the user can specify the final dimensions of the single image, but even when I change the size of the canvas the images retain their original sizes. I tried to modify the image sizes as I was loading them in, but their Width/Height were NaN and their ActualWidth/ActualHeight were 0, so I couldn't change them there. I'm starting to think that a canvas might not be the way to go. Any suggestions on how I could clip images to fit a specific size?

        canvas1.Children.Clear();
        int totalImages = Window1.GetNumberOfImages();
        if (drawBackground)
            canvas1.Background = new SolidColorBrush(Color.FromArgb(a, r, g, b));
        else
            canvas1.Background = new SolidColorBrush(Color.FromArgb(0, 0, 0, 0));
        for (int i = 0; i < totalImages; i++)
        {
            Image image = new Image();
            image.Source = Window1.GetNextImage(i);
            canvas1.Children.Add(image);
        }
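    A hedged sketch of one clipping approach (targetWidth/targetHeight are assumed stand-ins for the user-specified dimensions): a Canvas never clips its children by default, so the sketch sizes each Image explicitly and clips the Canvas to the requested bounds.

        // Scale each layer to the requested output size as it is added...
        Image image = new Image();
        image.Source = Window1.GetNextImage(i);
        image.Width = targetWidth;             // explicit size instead of the bitmap's own size
        image.Height = targetHeight;
        image.Stretch = Stretch.UniformToFill; // fill the frame, cropping any overflow
        canvas1.Children.Add(image);

        // ...and clip the canvas so nothing draws outside the final image bounds.
        canvas1.Width = targetWidth;
        canvas1.Height = targetHeight;
        canvas1.Clip = new RectangleGeometry(new Rect(0, 0, targetWidth, targetHeight));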

    Read the article

  • Flash "visible" issue

    - by justkevin
    I'm writing a tool in Flex that lets me design composite sprites using layered bitmaps and then "bake" them into a low-overhead single BitmapData. I've discovered a strange behavior I can't explain: toggling the "visible" property of my layers works twice for each layer (i.e., I can turn it off, then on again) and then never again for that layer - the layer stays visible from that point on. If I override "set visible" on the layer as such:

        override public function set visible(value:Boolean):void {
            if (value == false)
                this.alpha = 0;
            else
                this.alpha = 1;
        }

    the problem goes away and I can toggle "visibility" as much as I want. Any ideas what might be causing this? Edit: here is the code that makes the call:

        private function onVisibleChange():void {
            _layer.visible = layerVisible.selected;
            changed();
        }

    The changed() method "bakes" the bitmap:

        public function getBaked():BitmapData {
            var w:int = _composite.width + (_atmosphereOuterBlur * 2);
            var h:int = _composite.height + (_atmosphereOuterBlur * 2);
            var bmpData:BitmapData = new BitmapData(w, h, true, 0x00000000);
            var matrix:Matrix = new Matrix();
            var bounds:Rectangle = this.getBounds(this);
            matrix.translate(w / 2, h / 2);
            bmpData.draw(this, matrix, null, null, new Rectangle(0, 0, w, h), true);
            return bmpData;
        }

    Incidentally, while the layer is still visible, using the Flex debugger I can verify that the layer's visible value is "false".

    Read the article

  • Why should I bother with unit testing if I can just use integration tests?

    - by CodeGrue
    Ok, I know I am going out on a limb making a statement like that, so my question is for everyone to convince me I am wrong. Take this scenario: I have method A, which calls method B, and they are in different layers. So I unit test B, which delivers null as a result. I test that null is returned, and the unit test passes. Nice. Then I unit test A, which expects an empty string to be returned from B. So I mock the layer B is in, an empty string is returned, and the test passes. Nice again. (Assume I don't realize the relationship between A and B, or that maybe two different people are building these methods.) My concern is that we don't find the real problem until we test A and B together, i.e. integration testing. Since an integration test provides coverage over the unit test area, it seems like a waste of effort to build all these unit tests that really don't tell us anything (or very much) meaningful. Why am I wrong?
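    The A/B scenario can be made concrete in a few lines (hypothetical types, sketched in C#): each unit test passes on its own, and only wiring the real pieces together exposes the mismatch.

        public interface ILayerB
        {
            string Get();
        }

        public class LayerB : ILayerB
        {
            public string Get() { return null; } // its unit test asserts null and passes
        }

        public class LayerA
        {
            private readonly ILayerB _b;
            public LayerA(ILayerB b) { _b = b; }

            // Unit-tested against a mock ILayerB returning "" -- also passes.
            public int NameLength() { return _b.Get().Length; }
        }

        // new LayerA(new LayerB()).NameLength() throws NullReferenceException:
        // the contract mismatch neither unit test could see.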

    Read the article

  • RIA Services for transmitting non DB object-graph

    - by Mike Gates
    I have been getting into RIA Services because I thought it would simplify dealing with the services layer of the web applications I wish to build. I see lots of examples out there showing how to create DomainService classes that expose and consume entities that have some kind of relational database backing, and therefore have foreign-key relationships. However, I would like to know how to expose and consume normal object graphs: objects that contain references to each other but don't have foreign keys. For example, say I want a service operation called "GetFolderInformation(string pathToFolder)". I want this to return a custom object called FolderInformation, structured as:

        string Name
        IEnumerable<FileInformation> Files

    I cannot get this to work, because it seems that RIA wants to deal with entities that have foreign-key relationships. Why? Why can't the serializer just see my object references and recreate them in the proxy on the other side? Data exists behind service layers that doesn't necessarily have foreign-key relationships - like folder/file, for example. EDIT: I realized I hadn't asked my question! My question is: is there a way to do what I am trying to do?
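    One hedged observation: WCF RIA Services insists that every type it transfers declare a unique key, even when no database row backs it, which is likely why plain object graphs are rejected. A sketch of the kind of synthetic key that may satisfy it (the names follow the question; the attributes are the standard DataAnnotations ones, and the association note is an assumption):

        using System.Collections.Generic;
        using System.ComponentModel.DataAnnotations;

        public class FileInformation
        {
            [Key]
            public string Path { get; set; } // synthetic key: the file's full path
            public string Name { get; set; }
        }

        public class FolderInformation
        {
            [Key]
            public string Path { get; set; } // synthetic key: the folder's full path
            public string Name { get; set; }

            // RIA Services also wants object-graph links described explicitly;
            // an [Include]/[Association] pair on this property may be required.
            public IEnumerable<FileInformation> Files { get; set; }
        }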

    Read the article

  • Rendering Swing Components to an Offscreen buffer

    - by Nick C
    I have a Java (Swing) application, running on a 32-bit Windows 2008 Server, which needs to render its output to an off-screen image (which is then picked up by another C++ application for rendering elsewhere). Most of the components render correctly, except in the odd case where a component which has just lost focus is occluded by another component - for example, where there are two JComboBoxes close to each other, if the user interacts with the lower one and then clicks on the upper one so that its pull-down overlaps the other box. In this situation, the component which has just lost focus is rendered after the one occluding it, and so appears on top in the output. It renders correctly in the normal Java display (running full-screen on the primary display), and attempting to change the layers of the components in question does not help. I am using a custom RepaintManager to paint the components to the off-screen image, and I assume the problem lies with the order in which addDirtyRegion() is called for each of the components in question, but I can't think of a good way of identifying when this particular state occurs in order to prevent it. Hacking it so that the object which has just lost focus is not repainted stops the problem, but obviously causes the bigger problem that it is not repainted in all other, normal circumstances. Is there any way of programmatically identifying this state, or of reordering things so that it does not occur? Many thanks, Nick

    Read the article

  • Which Code Should Go Where in MVC Structure

    - by Oguz
    My problem is somewhere between the model and the controller. Everything works perfectly for me when I use MVC just for CRUD (create, read, update, delete). I have separate models for each database table, and I access these models from the controller to perform CRUD on them. For example, in a contacts application I have actions (create, read, update, delete) in the controller (contact) that use the model's (contact) methods (create, read, update, delete). The problem starts when I try to do something more complicated. There are some complex processes that I do not know where to put. For example, in the user registration process, I cannot just finish the process in the user model, because I have to use other models too (sending mails, creating other records for the user via other models) and do lots of complex validation via other models. In some complex search processes, I have to access lots of models (articles, videos, images, etc.). And sometimes I have to use APIs to decide what I will do next or which database model I will use to record data. So where is the place to do these complicated processes? I do not want to do them in controllers, because sometimes I use these processes in other controllers too. And I do not want to put these processes in models, because I use models as database access layers. Maybe I am wrong; I want to know. Thank you for your answer.

    Read the article

  • How to parse out base file name using Script-Fu

    - by ongle
    Using GIMP 2.6.6 for Mac OS X (under X11) as downloaded from gimp.org, I'm trying to automate a boring manual process with Script-Fu. I needed to parse the image file name to save off various layers as new files using a suffix on the original file name. My original attempt went like this, but failed because (string-search ...) doesn't seem to be available under 2.6 (a change to the scripting engine?):

        (set! basefilename (substring filename 0 (string-search "." filename)))

    Then I tried to parse out the base file name using regex, but (re-match-nth ...) is not recognized either:

        (if (re-match "^(.*)[.]([^.]+)$" filename buffer)
            (set! basefilename (re-match-nth orig-name buffer 1))
        )

    And while pulling the value out of the vector ran without error, the resulting value is not considered a string when it is passed into (string-append ...):

        (if (re-match "^(.*)[.]([^.]+)$" filename buffer)
            (set! basefilename (vector-ref buffer 1))
        )

    So I guess my question is: how would I parse out the base file name?

    Read the article

  • How do I fix a broken connection to DB2 from a web application?

    - by Eddie White
    I support some old web applications: VBScript-based ASP for the UI and VB6 COM modules for the business and data access layers. Last weekend, I installed DB2 Connect Enterprise Edition v8 fixpack 14 on several Windows 2000 servers, and one of the web apps now errors out on null data when it calls the built-in VBScript function FormatNumber. This numeric data is retrieved by a SQL Server query, but the only way the SQL Server column is populated is with the calculated results returned from a DB2 query earlier in a progression through several pages. When I installed DB2 Connect EE, one of the components loaded was MDAC 2.7. I followed corporate instructions and had the installation save an ODBC System Data Source, which reported a good connection when I tested it after the install. For what it's worth, the project references in the production VB6 modules pointed to MDAC 2.5. I have tried recompiling and deploying to COM on my test server new versions of the VB6 modules referencing MDAC 2.7. My development environment is Windows XP Pro, with MDAC 2.8 and DB2 Connect EE v9.5 installed. When I deployed the updated VB6 DLLs, CreateObject fails to instantiate the classes with the error message "The class does not support automation or the requested interface". I've rolled the DB2 Connect install back and reinstalled v8 of the DB2 runtime client, which was the previous environment. The problem, however, persists.

    Read the article

  • Alternatives to using web.config to store settings (for complex solutions)

    - by Brian MacKay
    In our web applications, we separate our data access layers out into their own projects. This creates some problems related to settings. Because the DAL will eventually need to be consumed from perhaps more than one application, web.config does not seem like a good place to keep the connection strings and some of the other DAL-related settings. To solve this, on some of our recent projects we introduced a third project just for settings, putting the settings in a system of .settings files. With a simple wrapper, the ability to have different settings for various environments (Dev, QA, Staging, Production, etc.) was easy to achieve. The only problem there is that the settings project (including the .Settings class) compiles into an assembly, so you can't change it without doing a build/deployment, and some of our customers want to be able to configure their projects without Visual Studio. So, is there a best practice for this? I have the sense that I'm reinventing the wheel. Some solutions, such as storing settings in a fixed directory on the server in, say, our own XML format, occurred to us. But again, I would rather avoid having to re-create encryption for sensitive values and so on, and I would rather keep the solution self-contained if possible. EDIT: The original question did not contain the really penetrating reason that we can't (I think) use web.config... That puts a few (very good) answers out of context, my bad.
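    For the connection strings specifically, a hedged sketch: System.Configuration can be pointed at a standalone file on disk, so the DAL reads the same editable file no matter which application hosts it (the path and section name below are hypothetical):

        using System.Configuration;

        public static class DalSettings
        {
            public static string GetConnectionString(string name)
            {
                // Load an arbitrary .config file instead of the host's web.config.
                var fileMap = new ExeConfigurationFileMap
                {
                    ExeConfigFilename = @"D:\Settings\DataAccess.config"
                };
                Configuration config = ConfigurationManager.OpenMappedExeConfiguration(
                    fileMap, ConfigurationUserLevel.None);

                return config.ConnectionStrings.ConnectionStrings[name].ConnectionString;
            }
        }

    The same Configuration object can also encrypt sensitive sections in place via SectionInformation.ProtectSection("DataProtectionConfigurationProvider"), which avoids hand-rolling encryption for sensitive values.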

    Read the article

  • Can I have two separate projects, 1 WebForms and 1 ASP.NET MVC, to both point to the same domain?

    - by Hamman359
    Is it possible to set up two separate projects, one WebForms and one ASP.NET MVC, so that both point to the same domain, i.e. both point to different pages within www.somesite.com? Here's some background on the application and why I'm asking. This is a brownfield application that is currently 2.0 WebForms and is full of WebFormy 'goodness' (i.e. ObjectDataSources, FormView controls, UpdatePanels, etc...). There are lots of other 'fun' things in the code base, like 600+ stored procedures and 200+ line methods in the business layer code that get data from the DB via stored procs, do some processing on the data, build an HTML string using string concatenation and then return that string to the UI layer. What we are planning on doing is developing new features in MVC and slowly converting the existing features over to MVC one at a time. As part of this transition, we will also be rewriting the layers below the UI to clean up the mess there and to do things like replace the stored procedures with NHibernate and introduce an IoC container. I know that you can run WebForms and MVC side by side in the same project; however, because we will be making wholesale changes to the way we do many things throughout our entire development stack, I'd like the new stuff to be a completely separate project within the solution. This should help serve as a very visual reminder that this is a different way of doing things than before, and make it easier to remove the old code as it is no longer needed. What I don't know is: is this even possible? Can two separate projects point to the same domain? Here's a quick example of what I'm thinking:

        www.somesite.com/orders.aspx?id=123 (Orders page from existing WebForms project)
        www.somesite.com/customer/987 (Customer page from new MVC project)

    Read the article

  • Best strategy for moving data between physical tiers in ASP.net

    - by Pete Lunenfeld
    Building a new ASP.net application, and planning to separate the DB, 'service' tier and web/UI tier onto separate physical layers. What is the best/easiest strategy to move serialized objects between the service tier and the UI tier? I was considering serializing POCOs into JSON, using simple ASP.net pages to serve the middle tier - meaning that the UI/web tier will request data from a (hidden to the outside user) web server that will return a JSON string. This kind of JSON 'emitter' seems easily testable. It also seems easily compressible for efficiently moving data over the WAN between tiers. I know that some folks use .asmx web services for this kind of task, but that seems to carry excess overhead with SOAP, and the package is not as human-readable (testable) as POCOs serialized as JSON. Others are using more complex technology like WCF, which we have never used. Does anyone have advice for choosing a method for moving data/objects between the data (DB) tier and the web (UI) tier over the WAN using .NET technologies? Thanks!
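    A minimal sketch of the 'JSON emitter' page described above, using the .NET 3.5-era JavaScriptSerializer (the Order type and the service call are assumptions, not a prescribed design):

        using System;
        using System.Web.Script.Serialization;

        public partial class OrdersEmitter : System.Web.UI.Page
        {
            protected void Page_Load(object sender, EventArgs e)
            {
                var orders = OrderService.GetRecentOrders(); // hypothetical service-tier call

                // Emit plain JSON; IIS compression can then handle the WAN hop efficiently.
                Response.ContentType = "application/json";
                Response.Write(new JavaScriptSerializer().Serialize(orders));
                Response.End();
            }
        }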

    Read the article

  • svnserve not strictly required?

    - by Kev
    I was reading the Red Bean book and noticed this paragraph:

        Do not be seduced by the simple idea of having all of your users access a repository directly via file:// URLs. Even if the repository is readily available to everyone via a network share, this is a bad idea. It removes any layers of protection between the users and the repository: users can accidentally (or intentionally) corrupt the repository database, it becomes hard to take the repository offline for inspection or upgrade, and it can lead to a mess of file permission problems (see the section called "Supporting Multiple Repository Access Methods"). Note that this is also one of the reasons we warn against accessing repositories via svn+ssh:// URLs - from a security standpoint, it's effectively the same as local users accessing via file://, and it can entail all the same problems if the administrator isn't careful.

    I realized that, since I'm the only one accessing the repository, ever, none of these caveats seem to apply. Can I safely shut svnserve down, then, and only ever have to worry about upgrading my TortoiseSVN client, not both the client and the server, whenever there's a new version out? (I've tried it already - I just needed to use the Relocate feature to switch from svn:// to file:// - but I wanted to make sure something wouldn't be sneaking up on me if I left it this way.)

    Read the article

  • .Net Entity objectcontext thread error

    - by Chris Klepeis
    I have an n-layered ASP.NET application which returns an object from my DAL to the BAL like so:

        public IEnumerable<SourceKey> Get(SourceKey sk)
        {
            var query = from SourceKey in _dataContext.SourceKeys
                        select SourceKey;

            if (sk.sourceKey1 != null)
            {
                query = from SourceKey in query
                        where SourceKey.sourceKey1 == sk.sourceKey1
                        select SourceKey;
            }

            return query.AsEnumerable();
        }

    This result passes through my business layer and hits the UI layer to be displayed to the end users. I do not lazy load, to prevent query execution in other layers of my application. I created another function in my DAL to delete objects:

        public void Delete(SourceKey sk)
        {
            try
            {
                _dataContext.DeleteObject(sk);
                _dataContext.SaveChanges();
            }
            catch (Exception ex)
            {
                Debug.WriteLine(ex.Message + " " + ex.StackTrace + " " + ex.InnerException);
            }
        }

    When I try to call Delete after calling the Get function, I receive this error: "New transaction is not allowed because there are other threads running in the session". This is an ASP.NET app. My DAL contains an Entity Data Model. The class in which I have the above functions shares the same _dataContext, which is instantiated in my constructor. My guess is that the reader from the Get function is still open and was not closed. How can I close it?
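    One hedged observation: AsEnumerable() does not execute the query; it only changes the compile-time type, so the deferred query can still be holding its data reader open when Delete runs on the same context. Materializing inside Get usually clears this error:

        // Force execution inside the DAL so the data reader is closed before any
        // later SaveChanges() runs on the same shared ObjectContext.
        return query.ToList(); // instead of query.AsEnumerable()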

    Read the article

  • PHP: Aggregate Model Classes or Uber Model Classes?

    - by sunwukung
    In many of the discussions regarding the M in MVC (sidestepping ORM controversies for a moment), I commonly see Model classes described as object representations of table data (be that an Active Record, Table Gateway, Row Gateway or Domain Model/Mapper). Martin Fowler warns against the development of an anemic domain model, i.e. a class that is nothing more than a wrapper for CRUD functionality. I've been working on an MVC application for a couple of months now. The DBAL in the application started out simple (on account of my understanding - oh, the benefits of hindsight) and is organised so that controllers invoke business logic classes, which in turn access the database via DAO/transaction scripts pertinent to the task at hand. There are a few "Entity" classes that aggregate these DAO objects to provide a convenient CRUD wrapper, but also embody some of the "behaviour" of that domain concept (for example, a user - since it's easy to isolate). Taking a look at some of the code, and thinking about refactoring some of it into a rich domain model, it occurred to me that were I to try and wrap the CRUD routines and behaviour of, say, a Company into a single "Model" class, that would be a sizeable class. So, my question is this: do Models represent domain objects, business logic, service layers, or all of the above combined? How do you go about defining the responsibilities of these components?

    Read the article
