Search Results

Search found 109760 results on 4391 pages for 'ado net entity data model'.


  • LINQ to Entity: using Contains in the "select" portion throws unexpected error

    - by Chu
    I've got a LINQ query going against an Entity Framework object. Here's a summary of the query:

        // a list of my allies
        List<int> allianceMembers = new List<int>() { 1, 5, 10 };

        // query for fleets in my area, including any allies (and mark them as such)
        var fleets = from af in FleetSource
                     select new Fleet
                     {
                         fleetID = af.fleetID,
                         fleetName = af.fleetName,
                         isAllied = (allianceMembers.Contains(af.userID) ? true : false)
                     };

    Basically, what I'm doing is getting a set of fleets. The allianceMembers list contains the int IDs of all users who are allied with me. I want to set isAllied = true if the fleet's owner is part of that list, and false otherwise. When I do this, I see an exception: "LINQ to Entities does not recognize the method 'Boolean Contains(Int32)' method". I could understand getting this error if I had used Contains in the where portion of the query, but why would I get it in the select? By this point I would have assumed the query had executed and returned the results; this little ditty of code does nothing to constrain my data at all. Any tips on how else I can accomplish setting the isAllied flag? Thanks
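    One workaround, sketched below on the assumption that Fleet is a plain class you can construct in memory, is to let the database return just the raw columns and compute the flag on the client side after AsEnumerable():

        var fleets = FleetSource
            .Select(af => new { af.fleetID, af.fleetName, af.userID })
            .AsEnumerable() // switch from LINQ to Entities to LINQ to Objects here
            .Select(x => new Fleet
            {
                fleetID = x.fleetID,
                fleetName = x.fleetName,
                isAllied = allianceMembers.Contains(x.userID)
            });

    The anonymous-type projection is all the provider has to translate; Contains then runs in memory, where List<int>.Contains is just a method call.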

    Read the article

  • Foreign key relationships in Entity Framework

    - by Anders Svensson
    I'm trying to add an object created from Entity Data Model classes. I have a table called Users, which has turned into a User EDM class, and I also have a table Pages, which has become a Page EDM class. These tables have a foreign key relationship, so that each page is associated with many users. Now I want to be able to add a page, but I can't figure out how to do it: I get a NullReferenceException on Users below. I'm still rather confused by all this, so I'm sure it's a simple error, but I just can't see how to do it. Also, by the way, the compiler requires that I set PageID in the object initializer, even though this field is set to be an automatic ID in the table. Am I doing it right just setting it to 0, expecting it to be updated automatically in the table when saved, or how should I do that? Any help appreciated! The method in question:

        private Page GetPage(User currentUser)
        {
            string url = _request.ServerVariables["url"].ToLower();
            var userPages = from p in _context.PageSet
                            where p.Users.UserID == currentUser.UserID
                            select p;
            // Could be combined with the query above, but hard to read?
            var existingPage = userPages.FirstOrDefault(e => e.Url == url);
            if (existingPage != null)
                return existingPage;

            Page page = new Page()
            {
                Count = 0,
                Url = _request.ServerVariables["url"].ToLower(),
                PageID = 0, // Only initial value, changed later?
            };
            _context.AddToPageSet(page);
            page.Users.UserID = currentUser.UserID; // Here's the problem...
            return page;
        }
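    The NullReferenceException happens because the Users navigation reference on a brand-new Page is null until you point it at an entity; you can't set a property through it. A minimal sketch of the usual fix (assuming Users is a reference navigation property from Page to User, as p.Users.UserID in the query implies):

        Page page = new Page()
        {
            Count = 0,
            Url = url,
        };
        page.Users = currentUser;   // assign the reference itself, not a property on it
        _context.AddToPageSet(page);
        _context.SaveChanges();     // PageID is populated from the identity column here

    As for the key: leaving PageID at 0 is fine as long as the column is an identity and the model marks it StoreGeneratedPattern="Identity"; Entity Framework writes the database-generated value back into the object after SaveChanges.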

    Read the article

  • Entity Framework autoincrement key

    - by Tommy Ong
    I'm facing an issue of a duplicated incremental field in a concurrency scenario. I'm using EF as the ORM tool, attempting to insert an entity with a field that acts as an incremental int field. Basically this field is called "SequenceNumber": for each new record, before insert, the code reads the database using MAX to get the last SequenceNumber, appends +1 to it, and saves the changes. Between getting the last SequenceNumber and saving, that's where the concurrency is happening. I'm not using the ID for SequenceNumber, as it is not a unique constraint and may reset on certain conditions, such as monthly, yearly, etc.

        InvoiceNumber  | SequenceNumber | DateCreated
        INV00001_08_14 | 1              | 25/08/2014
        INV00001_08_14 | 1              | 25/08/2014  <= (concurrency is creating two SeqNo 1)
        INV00002_08_14 | 2              | 25/08/2014
        INV00003_08_14 | 3              | 26/08/2014
        INV00004_08_14 | 4              | 27/08/2014
        INV00005_08_14 | 5              | 29/08/2014
        INV00001_09_14 | 1              | 01/09/2014  <= (sequence number reset)

    The invoice number is formatted based on the SequenceNumber. After some research I've ended up with these possible solutions, but want to know the best practice:

    1) Pessimistic concurrency: locking the table against any reads until the current transaction is completed (I'm not fond of this idea, as I guess performance will take a great impact?).

    2) Create a stored procedure solely for this purpose, doing the select and insert in a single statement so the concurrency window is at a minimum (though I would prefer an EF-based approach if possible).
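    For what it's worth, a sketch of one EF-friendly pattern (assuming EF6's DbContext; the Invoices set and Period property are illustrative names): keep the MAX-read and the insert inside a single serializable transaction, back it with a unique index on (Period, SequenceNumber), and retry if a collision or deadlock slips through:

        using (var tx = context.Database.BeginTransaction(IsolationLevel.Serializable))
        {
            int next = (context.Invoices
                            .Where(i => i.Period == period)
                            .Max(i => (int?)i.SequenceNumber) ?? 0) + 1;

            invoice.SequenceNumber = next;
            context.Invoices.Add(invoice);
            context.SaveChanges();   // unique index rejects a duplicate if one sneaks in
            tx.Commit();
        }

    The serializable range lock makes concurrent callers queue up rather than both reading the same MAX; the unique index is the actual guarantee, and a small retry loop around the block handles the occasional deadlock victim. The single-statement stored procedure you describe is the other legitimate route; this just keeps it in EF.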

    Read the article

  • How to map a 0..1 to 1 relationship in Entity Framework 3

    - by sako73
    I have two tables, Users and Address. The user table has a field that maps to the primary key of the address table; this field can be null. In plain English: Addresses exist independent of other objects, and a user may be associated with one address. In the database, I have this set up as a foreign key relationship. I am attempting to map this relationship in the Entity Framework, and I am getting errors on the following code:

        <Association Name="fk_UserAddress">
          <End Role="User" Type="GenesisEntityModel.Store.User" Multiplicity="1" />
          <End Role="Address" Type="GenesisEntityModel.Store.Address" Multiplicity="0..1" />
          <ReferentialConstraint>
            <Principal Role="Address">
              <PropertyRef Name="addressId" />
            </Principal>
            <Dependent Role="User">
              <PropertyRef Name="addressId" />
            </Dependent>
          </ReferentialConstraint>
        </Association>

    It is giving a "The Lower Bound of the multiplicity must be 0" error. I would appreciate it if anyone could explain the error and the best way to solve it. Thanks for any help.
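    My reading of EF's referential-constraint validation (an interpretation, so verify against your model): because the dependent column (User.addressId) is nullable and is not User's key, the constraint requires the principal end to allow absence (0..1, which Address already is) and the dependent end to have an upper bound of many. A sketch of the association that should validate:

        <Association Name="fk_UserAddress">
          <End Role="Address" Type="GenesisEntityModel.Store.Address" Multiplicity="0..1" />
          <End Role="User" Type="GenesisEntityModel.Store.User" Multiplicity="*" />
          <ReferentialConstraint>
            <Principal Role="Address">
              <PropertyRef Name="addressId" />
            </Principal>
            <Dependent Role="User">
              <PropertyRef Name="addressId" />
            </Dependent>
          </ReferentialConstraint>
        </Association>

    Conceptually it still reads as "a user has zero or one address"; the "*" just concedes that nothing in the schema stops two users from pointing at the same address row. If you need that prevented, a unique constraint on User.addressId does it at the database level.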

    Read the article

  • Selecting row in SSMS causes Entity Framework 4 to Fail

    - by Eric J.
    I have a simple Entity Framework 4 unit test that creates a new record, saves it, attempts to find it, then deletes it. All works great, unless ... I open up SQL Server Management Studio while stopped at a breakpoint in the unit test and execute a SELECT statement that returns the row I just created (not SELECT FOR UPDATE, not WITH (updlock), no transaction, just a plain SELECT). If I do that before attempting to find the row I just created, I don't find the row. If I instead do that after finding the row but before deleting it, I do find the row but get an OptimisticConcurrencyException. This is consistently repeatable. Unit test:

        [TestMethod()]
        public void CreateFindDeleteActiveParticipantsTest()
        {
            // Setup this test
            Participant utPart = CreateUTParticipant();
            ctx.Participants.AddObject(utPart);
            ctx.SaveChanges();

            // External SELECT Point #1: part is null

            // Find participant
            Participant part = ParticipantRepository.Find(UT_SURVEY_ID, UT_TOKEN);
            Assert.IsNotNull(part, "Expected to find a participant");

            // External SELECT Point #2: SaveChanges throws OptimisticConcurrencyException

            // Cleanup this test
            ctx.Participants.DeleteObject(utPart);
            ctx.SaveChanges();
        }

    Read the article

  • Entity Framework Decorator Pattern

    - by Anthony Compton
    In my line of business we have Products. These products can be modified by a user by adding Modifications to them. Modifications can do things such as alter the price and alter properties of the Product. This, to me, seems to fit the Decorator pattern perfectly. Now, envision a database in which Products exist in one table and Modifications exist in another table and the database is hooked up to my app through the Entity Framework. How would I go about getting the Product objects and the Modification objects to implement the same interface so that I could use them interchangeably? For instance, the kind of things I would like to be able to do: Given a Modification object, call .GetNumThings(), which would then return the number of things in the original object, plus or minus the number of things added by the modification. This question may be stemming from a pretty serious lack of exposure to the nitty-gritty of EF (all of my experience so far has been pretty straight-forward LOB Silverlight apps), and if that's the case, please feel free to tell me to RTFM. Thanks in advance!
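    A sketch of one way to line this up, leaning on the fact that EF generates its entity classes as partial (IProduct, GetNumThings, and the delta properties are illustrative names, not anything EF provides):

        public interface IProduct
        {
            decimal GetPrice();
            int GetNumThings();
        }

        // sits alongside the EF-generated Product class
        public partial class Product : IProduct
        {
            public decimal GetPrice() { return BasePrice; }
            public int GetNumThings() { return NumThings; }
        }

        // the decorator: wraps any IProduct, usually its own Product
        public partial class Modification : IProduct
        {
            public IProduct Inner { get; set; }

            public decimal GetPrice() { return Inner.GetPrice() + PriceDelta; }
            public int GetNumThings() { return Inner.GetNumThings() + NumThingsDelta; }
        }

    The wrinkle is that EF materializes Modification rows without knowing about the wrapping, so you wire up Inner yourself after a query (for instance, set mod.Inner = mod.Product while iterating, or in the ObjectContext's ObjectMaterialized event). After that, any code written against IProduct can take a bare Product or a chain of Modifications interchangeably.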

    Read the article

  • Entity Framework: Detect DBSchema for licensing

    - by Program.X
    We're working on a product that may or may not have differing license schemes for different databases. In particular, a lower-tier product would run on SQL Express, but we don't want the end user to be able to use a "full-fat" SQL install, benefiting from the price cut. Clearly this must also be the case for other DBs, so Oracle may command a higher price than SQL Server, for instance (hypothetically). We're using Entity Framework, which obviously hides all the neatness of accessing the core schema and using sp_version or whatever it is. We'd rather not pre-load the condition by running a series of SQL commands (one for each platform) and seeing what comes back, as this would limit our DB options, but if necessary we're prepared to do it. So, is it possible to get this using EF itself? DbContext.Connection.ServerVersion only returns something like "9.00.1234" (for SQL Server 2005). I would assume (though I haven't yet checked; I need to install an instance) SQL Express would return something similar, "pretending" it is full-fat. Obviously, we have no Oracle/MySQL/etc. instance, so we can't establish whether that returns the text "Oracle" or whatever.
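    When the store is SQL Server, one way to answer the Express question definitively (a sketch; it assumes an ObjectContext, and SERVERPROPERTY('EngineEdition') is a documented SQL Server property where 4 means Express) is to drop down from EF to the store connection:

        using System.Data.EntityClient;
        using System.Data.SqlClient;

        var entityConnection = (EntityConnection)context.Connection;
        var sqlConnection = entityConnection.StoreConnection as SqlConnection;
        if (sqlConnection != null)   // store is SQL Server
        {
            sqlConnection.Open();
            using (var cmd = new SqlCommand(
                "SELECT SERVERPROPERTY('EngineEdition'), SERVERPROPERTY('Edition')",
                sqlConnection))
            using (var reader = cmd.ExecuteReader())
            {
                reader.Read();
                int engineEdition = Convert.ToInt32(reader.GetValue(0)); // 4 = Express
                string edition = Convert.ToString(reader.GetValue(1));   // "Express Edition" etc.
            }
        }

    The type test on StoreConnection doubles as your database detection: an Oracle or MySQL provider hands back its own connection type, so you can branch to a per-engine probe without ever sending SQL Server syntax to the wrong server.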

    Read the article

  • How to enable .NET Extensibility, ASP.NET, ISAPI Extensions, and ISAPI Filters on Windows Small Business Server 2008

    - by ruslander
    Today, wanting to deploy an ASP.NET MVC application, I had to enable .NET Extensibility, ASP.NET, ISAPI Extensions, and ISAPI Filters on Windows Small Business Server 2008. I was surprised to spend an hour and still not be able to find out how to do it. This was the closest hint, but ... still nothing: http://blogs.dovetailsoftware.com/blogs/kmiller/archive/2008/10/07/deploying-an-asp-net-mvc-web-application-to-iis7.aspx
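    For the record, these are IIS7 role services, so they live under the Web Server (IIS) role in Server Manager (Roles > Web Server (IIS) > Add Role Services). The command-line equivalent below is a sketch from memory; the role-service IDs are worth confirming with servermanagercmd -query before running it:

        servermanagercmd -install Web-Net-Ext Web-Asp-Net Web-ISAPI-Ext Web-ISAPI-Filter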

    Read the article

  • pass data from parent page to popup

    - by d daly
    Hi, I have an ASP.NET page which launches a child page in another browser window, to look like a popup. I want to pass two pieces of data to it from the parent page. I know it's possible with JavaScript, but is there a way to do it using C#? Thanks again
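    One common server-side approach (a sketch; the page and value names are made up) is to put the two values on the popup's query string, with C# building the URL and registering the window.open call:

        // parent page code-behind
        string url = string.Format("Child.aspx?id={0}&name={1}",
            HttpUtility.UrlEncode(customerId),
            HttpUtility.UrlEncode(customerName));
        ClientScript.RegisterStartupScript(GetType(), "popup",
            string.Format("window.open('{0}', 'child', 'width=400,height=300');", url), true);

        // child page code-behind
        string id = Request.QueryString["id"];
        string name = Request.QueryString["name"];

    JavaScript still opens the window (there's no way around that for a new browser window), but all the data handling stays in C#. For values too large or too sensitive for a URL, stash them in Session and pass only a key on the query string.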

    Read the article

  • Meet the New Windows Azure

    - by ScottGu
    Today we are releasing a major set of improvements to Windows Azure. Below is a short summary of just a few of them:

    New Admin Portal and Command Line Tools

    Today's release comes with a new Windows Azure portal that will enable you to manage all features and services offered on Windows Azure in a seamless, integrated way. It is very fast and fluid, supports filtering and sorting (making it much easier to use for large deployments), works on all browsers, and offers a lot of great new features – including built-in VM, Web site, Storage, and Cloud Service monitoring support. The new portal is built on top of a REST-based management API within Windows Azure – and everything you can do through the portal can also be programmed directly against this Web API. We are also today releasing command-line tools (which like the portal call the REST Management APIs) to make it even easier to script and automate your administration tasks. We are offering both a Powershell (for Windows) and Bash (for Mac and Linux) set of tools to download. Like our SDKs, the code for these tools is hosted on GitHub under an Apache 2 license.

    Virtual Machines

    Windows Azure now supports the ability to deploy and run durable VMs in the cloud. You can easily create these VMs using a new Image Gallery built into the new Windows Azure Portal, or alternatively upload and run your own custom-built VHD images. Virtual Machines are durable (meaning anything you install within them persists across reboots) and you can use any OS with them. Our built-in image gallery includes both Windows Server images (including the new Windows Server 2012 RC) as well as Linux images (including Ubuntu, CentOS, and SUSE distributions). Once you create a VM instance you can easily Terminal Server or SSH into it in order to configure and customize the VM however you want (and optionally capture your own image snapshot of it to use when creating new VM instances). This provides you with the flexibility to run pretty much any workload within Windows Azure.

    The new Windows Azure Portal provides a rich set of management features for Virtual Machines – including the ability to monitor and track resource utilization within them. Our new Virtual Machine support also enables the ability to easily attach multiple data-disks to VMs (which you can then mount and format as drives). You can optionally enable geo-replication support on these – which will cause Windows Azure to continuously replicate your storage to a secondary data-center at least 400 miles away from your primary data-center as a backup.

    We use the same VHD format that is supported with Windows virtualization today (and which we've released as an open spec), which enables you to easily migrate existing workloads you might already have virtualized into Windows Azure. We also make it easy to download VHDs from Windows Azure, which also provides the flexibility to easily migrate cloud-based VM workloads to an on-premise environment. All you need to do is download the VHD file and boot it up locally, no import/export steps required.

    Web Sites

    Windows Azure now supports the ability to quickly and easily deploy ASP.NET, Node.js and PHP web-sites to a highly scalable cloud environment that allows you to start small (and for free) and then scale up as your traffic grows. You can create a new web site in Azure and have it ready to deploy to in under 10 seconds. The new Windows Azure Portal provides built-in administration support for Web sites – including the ability to monitor and track resource utilization in real-time.

    You can deploy to web-sites in seconds using FTP, Git, TFS and Web Deploy. We are also releasing tooling updates today for both Visual Studio and Web Matrix that enable developers to seamlessly deploy ASP.NET applications to this new offering. The VS and Web Matrix publishing support includes the ability to deploy SQL databases as part of web site deployment – as well as the ability to incrementally update database schema with a later deployment.

    You can integrate web application publishing with source control by selecting the "Set up TFS publishing" or "Set up Git publishing" links on a web-site's dashboard. Doing so will enable integration with our new TFS online service (which enables a full TFS workflow – including elastic build and testing support), or create a Git repository that you can reference as a remote and push deployments to. Once you push a deployment using TFS or Git, the deployments tab will keep track of the deployments you make, and enable you to select an older (or newer) deployment and quickly redeploy your site to that snapshot of the code. This provides a very powerful DevOps workflow experience.

    Windows Azure now allows you to deploy up to 10 web-sites into a free, shared/multi-tenant hosting environment (where a site you deploy will be one of multiple sites running on a shared set of server resources). This provides an easy way to get started on projects at no cost. You can then optionally upgrade your sites to run in a "reserved mode" that isolates them so that you are the only customer within a virtual machine. And you can elastically scale the amount of resources your sites use – allowing you to increase your reserved instance capacity as your traffic scales.

    Windows Azure automatically handles load balancing traffic across VM instances, and you get the same, super fast, deployment options (FTP, Git, TFS and Web Deploy) regardless of how many reserved instances you use. With Windows Azure you pay for compute capacity on a per-hour basis – which allows you to scale up and down your resources to match only what you need.

    Cloud Services and Distributed Caching

    Windows Azure also supports the ability to build cloud services that support rich multi-tier architectures, automated application management, and scale to extremely large deployments. Previously we referred to this capability as "hosted services" – with this week's release we are now referring to this capability as "cloud services". We are also enabling a bunch of new features with them.

    Distributed Cache

    One of the really cool new features being enabled with cloud services is a new distributed cache capability that enables you to use and setup a low-latency, in-memory distributed cache within your applications. This cache is isolated for use just by your applications, and does not have any throttling limits. This cache can dynamically grow and shrink elastically (without you having to redeploy your app or make code changes), and supports the full richness of the AppFabric Cache Server API (including regions, high availability, notifications, local cache and more).

    In addition to supporting the AppFabric Cache Server API, it also now supports the Memcached protocol – allowing you to point code written against Memcached at it (no code changes required). The new distributed cache can be setup to run in one of two ways:

    1) Using a co-located approach. In this option you allocate a percentage of memory in your existing web and worker roles to be used by the cache, and then the cache joins the memory into one large distributed cache. Any data put into the cache by one role instance can be accessed by other role instances in your application – regardless of whether the cached data is stored on it or another role. The big benefit with the "co-located" option is that it is free (you don't have to pay anything to enable it) and it allows you to use what might have been otherwise unused memory within your application VMs.

    2) Alternatively, you can add "cache worker roles" to your cloud service that are used solely for caching. These will also be joined into one large distributed cache ring that other roles within your application can access. You can use these roles to cache 10s or 100s of GBs of data in-memory very effectively – and the cache can be elastically increased or decreased at runtime within your application.

    New SDKs and Tooling Support

    We have updated all of the Windows Azure SDKs with today's release to include new features and capabilities. Our SDKs are now available for multiple languages, and all of the source in them is published under an Apache 2 license and maintained in GitHub repositories. The .NET SDK for Azure has in particular seen a bunch of great improvements with today's release, and now includes tooling support for both VS 2010 and the VS 2012 RC. We are also now shipping Windows, Mac and Linux SDK downloads for languages that are offered on all of these systems – allowing developers to develop Windows Azure applications using any development operating system.

    Much, Much More

    The above is just a short list of some of the improvements that are shipping in either preview or final form today – there is a LOT more in today's release. These include new Virtual Private Networking capabilities, new Service Bus runtime and tooling support, the public preview of the new Azure Media Services, new Data Centers, significantly upgraded network and storage hardware, SQL Reporting Services, new Identity features, support within 40+ new countries and territories, and much, much more.

    You can learn more about Windows Azure and sign-up to try it for free at http://windowsazure.com. You can also watch a live keynote I'm giving at 1pm June 7th (later today) where I'll walk through all of the new features. We will be opening up the new features I discussed above for public usage a few hours after the keynote concludes. We are really excited to see the great applications you build with them.

    Hope this helps,

    Scott

    Read the article

  • Asp.Net MVC2 RenderAction changes page mime type?

    - by Gabe Moothart
    It appears that calling Html.RenderAction in ASP.NET MVC2 apps can alter the mime type of the page if the child action's type is different from the parent action's. The code below (tested in MVC2 RTM), which seems sensible to me, will return a result of type application/json when calling Home/Index. Instead of displaying the page, the browser will barf and ask you if you want to download it. My question: am I missing something? Is this a bug? If so, what's the best workaround?

    Controller:

        public class HomeController : Controller
        {
            public ActionResult Index()
            {
                ViewData["Message"] = "Welcome to ASP.NET MVC!";
                return View();
            }

            [ChildActionOnly]
            public JsonResult States()
            {
                string[] states = new[] { "AK", "AL", "AR", "AZ" };
                return Json(states, JsonRequestBehavior.AllowGet);
            }
        }

    View:

        <h2><%= Html.Encode(ViewData["Message"]) %></h2>
        <p>
            To learn more about ASP.NET MVC visit
            <a href="http://asp.net/mvc" title="ASP.NET MVC Website">http://asp.net/mvc</a>.
        </p>
        <script>
            var states = <% Html.RenderAction("States"); %>;
        </script>
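    As I understand it (an interpretation, not documentation), the child action executes against the same HttpResponse as the page, so JsonResult's assignment of Response.ContentType = "application/json" survives into the parent response. One workaround sketch is to keep the child action away from ContentType entirely by serializing to a plain string:

        using System.Web.Script.Serialization;

        [ChildActionOnly]
        public ContentResult States()
        {
            string[] states = new[] { "AK", "AL", "AR", "AZ" };
            // Content(...) with no content type leaves the page's text/html untouched
            return Content(new JavaScriptSerializer().Serialize(states));
        }

    The rendered output is the same JSON array, but the page keeps its text/html mime type.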

    Read the article

  • Deploying an MVC 2.0 .Net 4.0 application to Windows Server 2003

    - by Brett Rigby
    I've spent the past couple of nights on this, and have come to the conclusion that I'm doing something wrong! Basically, I have my .NET 4.0 MVC 2.0 application deployed to my 2003 server, and it's constantly giving me the following error:

        The value for the 'compilerVersion' attribute in the provider options must be 'v4.0'
        or later if you are compiling for version 4.0 or later of the .NET Framework. To
        compile this Web application for version 3.5 or earlier of the .NET Framework, remove
        the 'targetFramework' attribute from the <compilation> element of the Web.config file.

    I also have the .NET 3.5 MVC 1.0 application deployed to the same server (from before the upgrade to .NET 4.0), and it works absolutely fine. I have re-run through Phil Haack's post on IIS6, making the necessary changes for .NET 4.0, but no luck. I have been through the steps in the British Developer's post too, but no joy. I have also tried the steps outlined in Johan's blog post, but the settings he mentions were already enabled. Obviously, I can't simply upgrade to a 2008 box, nor do I want to downgrade my software back to < 4.0 - any other ideas?
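    Before chasing anything IIS6-specific: in my experience that exact message usually means a <system.codedom> section carried over from the 3.5 project is still pinning the compiler to v3.5 while <compilation targetFramework="4.0"> is present. A sketch of what the 4.0 version should look like (or simply delete the whole section; on .NET 4 the defaults are already correct):

        <system.codedom>
          <compilers>
            <compiler language="c#;cs;csharp" extension=".cs" warningLevel="4"
                      type="Microsoft.CSharp.CSharpCodeProvider, System,
                            Version=4.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089">
              <providerOption name="CompilerVersion" value="v4.0" />
            </compiler>
          </compilers>
        </system.codedom>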

    Read the article

  • Is ASP.NET MVC destined to replace Webforms?

    - by johnny
    I found these questions, but a couple of them were a little old: http://stackoverflow.com/questions/191556/should-i-pursue-asp-net-webforms-or-asp-net-mvc http://stackoverflow.com/questions/88787/do-you-think-asp-net-mvc-will-compete-with-asp-net-webforms http://stackoverflow.com/questions/722637/asp-net-mvc-asp-net-webforms-why I do not believe these are duplicates, and they might be old enough that new light can be shed. If not, please close this. I know that no one framework or language is necessarily the only tool for every job. But do you see MVC eclipsing Webforms, or Webforms going lower on the priority list for Microsoft? They will have to keep Webforms for a long time because so many have invested in it, but they don't have to keep adding new functionality for it. I don't know if this is a good example, but it reminds me of Web Parts. I never saw much improvement in it from Microsoft. It works, and I thought it was great until I started to really try and get a lot out of it. Then, from what I could see, it just wasn't being pursued by Microsoft that much, though it stayed in Visual Studio. Maybe that's a bad example; just what I remembered. EDIT: Also, if anyone has any statements from Microsoft on this subject, it is appreciated. No offense to anyone. I was only hoping for something official.

    Read the article

  • Bazaar + CruiseControl.Net

    - by Chris Gill
    I want to set up CruiseControl.NET at my company. We currently have several .NET solutions stored in a Bazaar repository, and I want to use MSBuild to build each solution. This didn't seem too controversial, but I can't see an easy way of binding CruiseControl.NET to Bazaar. There seems to have been a plugin to do this at http://www.sorn.net/projects/bazaar-ccnet, but this link no longer works and I can't seem to find the plugin anywhere else. I was going to use the External source control type, but Bazaar seems to bork at the GETMODS parameter being passed to it. My current thought is to create a separate project to pull modifications from Bazaar using an Exec task, then create another project to run a FileSystem source control check on that directory. I'm moderately sure I can get this to work, but it seems a bit hacky. I don't mind writing a new Bazaar plugin for CruiseControl.NET, but I can't find where to start with this. My questions are: do you run these two in combination, and if so, how do you do it? If you don't run these together, do you have any recommendations on a good approach? Is there any documentation or good starting point that I could use to write a Bazaar plugin? Am I an idiot for trying to use CruiseControl.NET? Should I be using something else?
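    A sketch of the single-project variant of that "hacky" plan: an Exec task runs bzr pull, MSBuild follows, and a forced-interval trigger stands in for real change detection (element names are from CCNet's standard task set as I recall them; paths, the project name, and the bzr location are placeholders):

        <project name="MySolution">
          <triggers>
            <!-- no Bazaar support, so force a build every 5 minutes -->
            <intervalTrigger seconds="300" buildCondition="ForceBuild" />
          </triggers>
          <tasks>
            <exec>
              <executable>C:\Program Files\Bazaar\bzr.exe</executable>
              <buildArgs>pull</buildArgs>
              <baseDirectory>C:\builds\MySolution</baseDirectory>
            </exec>
            <msbuild>
              <projectFile>C:\builds\MySolution\MySolution.sln</projectFile>
              <targets>Build</targets>
            </msbuild>
          </tasks>
        </project>

    This trades change detection for simplicity (every interval rebuilds). Your two-project Exec + FileSystem approach gets real modification checks at the cost of the extra project, so neither is obviously wrong.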

    Read the article

  • Custom Membership in ASP.NET MVC 2

    - by Interfector
    Hello, I'm trying to make my custom membership starting from ASP.NET's. I have a users table created by me which has as primary key a Guid UserId, exactly the same as ASP.NET's default user and membership tables, and I have all the foreign key relationships built. However, I cannot insert the user into my custom users table; the data gets inserted correctly into the default ASP.NET tables. I have tried the following scenarios.

    First version:

    1) Receive the user model from POST.
    2) Call the CreateUser method of the Membership class (stuff gets inserted).
    3) Without modifying the user object, try to insert it using EF's AddObject.

    I receive the following error (System.Data.SqlClient.SqlException):

        Cannot insert the value NULL into column 'LoweredUserName', table 'asp.dbo.aspnet_Users';
        column does not allow nulls. INSERT fails. The statement has been terminated.

    The error actually makes sense, as EF's AddObject doesn't know how to create the LoweredUserName value, and I don't want to do it; this is ASP.NET's job.

    Second version:

    1) Receive the user model from POST.
    2) Call the CreateUser method of the Membership class (stuff gets inserted).
    3) Get the Guid of the freshly inserted system user.
    4) Get the ASP.NET User object and overwrite that of user.User.
    5) Get the ASP.NET Membership object and overwrite that of user.Membership (remember the FKs).
    6) Try EF's AddObject again.

    New error (System.InvalidOperationException):

        The relationship between the two objects cannot be defined because they are attached
        to different ObjectContext objects.

    I don't know what this is, but it certainly doesn't make any sense to me. After some googling I think that it has something to do with the context, but as I'm a beginner I don't know how to fix it. Any ideas on how to accomplish this task are more than welcome, especially if they follow some good practices. Thx
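    That last exception is EF telling you the aspnet entities were loaded through one ObjectContext instance while the new custom user is being added to another. A sketch of a flow that avoids it (the entity and set names are guesses at your generated model, not a real API): create the membership user first, take its Guid, then do all the EF work inside a single context:

        MembershipUser mu = Membership.CreateUser(userName, password, email);
        Guid userId = (Guid)mu.ProviderUserKey;

        using (var ctx = new MyEntities())   // one context for the whole operation
        {
            var aspnetUser = ctx.aspnet_Users.First(u => u.UserId == userId);
            var customUser = new User { UserId = userId };
            customUser.aspnet_Users = aspnetUser;   // both ends came from ctx
            ctx.AddToUsers(customUser);
            ctx.SaveChanges();
        }

    And since your table shares the Guid primary key, you may be able to skip the relationship fix-up entirely: just insert the row with UserId set to ProviderUserKey, and the foreign key is satisfied because Membership already created the parent rows.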

    Read the article

  • How to use bll, dal and model?

    - by bruno
    Dear all, in my company I must use a BLL, DAL and model layer for creating applications with a database. I've learned at school that every database table should be an object in my model, so I create the whole model of my database. Also, I've learned that for every table (or model object) there should be a DAO created in the DAL, so I do this too. Now I'm stuck with the BLL classes. I can write a BLL class for each DAO/model object, or I can write a BLL class that combines some (logically related) DAOs, or I can write just one BLL class that manages everything (this last one, I'm sure, ain't the best way). What is the best practice to handle this BLL 'problem'? And a second question: if a BLL is in need of table content from another table that it isn't responsible for, what's the best way to get that content? Go ask the responsible BLL, or go directly to the DAO? I'm struggling with these questions the last 2 months, and I don't know what is the best way to handle it.
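    A middle ground that tends to work (opinion, offered as a sketch; every type name here is illustrative) is one BLL class per functional area rather than per table, where a BLL may use several DAOs, and other areas' data is reached through that data's DAO rather than through its BLL:

        public class OrderService                       // BLL for the "orders" area
        {
            private readonly OrderDao _orders;          // its own DAO
            private readonly CustomerDao _customers;    // another table's DAO

            public OrderService(OrderDao orders, CustomerDao customers)
            {
                _orders = orders;
                _customers = customers;
            }

            public Order PlaceOrder(int customerId, Order order)
            {
                // read-only lookup: go straight to the DAO, not to a CustomerService
                Customer customer = _customers.GetById(customerId);
                if (customer == null)
                    throw new InvalidOperationException("Unknown customer");

                order.CustomerId = customer.Id;
                _orders.Insert(order);
                return order;
            }
        }

    As a rule of thumb: go to the other DAO when you only need that area's rows; go to the other BLL when you need its rules to run too. One BLL class for everything fails for the reason you suspect, and strictly one BLL per DAO turns every cross-table rule into an awkward dance.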

    Read the article

  • A few questions about a Rails model for a simple addressbook app

    - by user284194
    I have a Rails application that lists information about local services. My objectives for this model are as follows:

    1. Require the name and tag_list fields.
    2. Require one or more of the tollfreephone, phone, phone2, mobile, fax, email or website fields.
    3. If the paddress field has a value, then encode it with the Geokit plugin.

    Here is my entry.rb model:

        class Entry < ActiveRecord::Base
          validates_presence_of :name, :tag_list
          validates_presence_of :tollfreephone or :phone or :phone2 or :mobile or :fax or :email or :website
          acts_as_taggable_on :tags
          acts_as_mappable :auto_geocode => { :field => :paddress, :error_message => 'Could not geocode physical address' }
          before_save :geocode_paddress
          validate :required_info

          def required_info
            unless phone or phone2 or tollfreephone or mobile or fax or email or website
              errors.add_to_base "Please have at least one form of contact information."
            end
          end

          private

          def geocode_paddress
            #if paddress_changed?
            geo = Geokit::Geocoders::MultiGeocoder.geocode(paddress)
            errors.add(:paddress, "Could not Geocode address") if !geo.success
            self.lat, self.lng = geo.lat, geo.lng if geo.success
            #end
          end
        end

    Requiring name and tag_list works, but requiring one (or more) of the tollfreephone, phone, phone2, mobile, fax, email or website fields does not. As for encoding with Geokit: in order to save a record with the model, I have to enter an address, which is not the behavior I want. I would like it to not require the paddress field, but if the paddress field does have a value, it should geocode it. As it stands, it always tries to geocode the incoming entry. The commented-out "if paddress_changed?" was not working, and I could not find something like "if paddress_exists?" that would work. Help with any of these issues would be greatly appreciated. I posted three questions pertaining to my model because I'm not sure if they are preventing each other from working. Thank you for reading my questions.

    Read the article

  • OO Design / Patterns - Fat Model Vs Transaction Script?

    - by ben
    OK, 'Fat' Model and Transaction Script both solve design problems associated with where to keep business logic. I've done some research, and popular thought says having all business logic encapsulated within the model is the way to go (mainly since Transaction Script can become really complex and often results in code duplication). However, how does this work if I want to use the TDG of a second model in my business logic? Surely Transaction Script presents a neater, less coupled solution than using one model inside the business logic of another? A practical example: I have two classes, User and Alert. When pushing User instances to the database (e.g., creating new user accounts), there is a business rule that requires inserting some default Alerts records too (e.g., a default 'welcome to the system' message, etc.). I see two options here:

    1) Add this rule as a User method, and in the process create a dependency between User and Alert (or, at least, Alert's Table Data Gateway).

    2) Use a Transaction Script, which avoids the dependency between models. (Also, it means the business logic is kept in a 'neutral' class and is easily accessible by Alert. That probably isn't too important here, though.)

    User takes responsibility for its own validation etc., but because we're talking about a business rule involving two models, Transaction Script seems like a better choice to me. Anyone spot flaws with this approach?

    Read the article

  • Creating an Excel file using .NET (C#) - Problems with columns headers!

    - by tsocks
    Hello, I want to create and fill a .xls file using ADO.NET or LINQ, but I do not want to have the column names in the first row; I just want to insert rows starting at row no. 1. I know I have to insert columns first, but ... is there a way to 'hide' those column headers? The problem is that, in the first row of my spreadsheet, I must have only two values (one in A1 and the other in B1), but in the remaining rows I'll be inserting more than just two values (a maximum of 15 columns). I'm open to suggestions/hacks/tricks even if that's not the best way of doing this. Thanks!
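    One hack that may fit, sketched with heavy hedging: it relies on the Jet provider's HDR=NO behavior and on starting from a pre-made .xls that already contains a blank Sheet1, since creating the sheet with CREATE TABLE writes the column names into row 1. With HDR=NO the provider names the columns F1..Fn itself and treats row 1 as data, so inserts land from the top:

        using System.Data.OleDb;

        string connStr = @"Provider=Microsoft.Jet.OLEDB.4.0;" +
                         @"Data Source=C:\temp\out.xls;" +
                         @"Extended Properties=""Excel 8.0;HDR=NO""";
        using (var conn = new OleDbConnection(connStr))
        {
            conn.Open();
            // row 1: only the two values, in A1 and B1
            new OleDbCommand("INSERT INTO [Sheet1$] (F1, F2) VALUES ('a1', 'b1')", conn)
                .ExecuteNonQuery();
            // later rows: up to F15 for the fifteen-column case
            new OleDbCommand("INSERT INTO [Sheet1$] (F1, F2, F3) VALUES ('x', 'y', 'z')", conn)
                .ExecuteNonQuery();
        }

    Worth testing against your exact provider version; the alternative is to write the file with an Excel library and skip the ADO.NET layer for output.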

    Read the article

  • Pagination in a Rich Domain Model

    - by user246790
    I use a rich domain model in my app; the basic ideas were taken from there. For example, I have User and Comment entities. They are defined as follows:

        <?php
        class Model_User extends Model_Abstract
        {
            public function getComments()
            {
                /** @var Model_Mapper_Db_Comment */
                $mapper = $this->getMapper();
                $commentsBlob = $mapper->getUserComments($this->getId());
                return new Model_Collection_Comments($commentsBlob);
            }
        }

        class Model_Mapper_Db_Comment extends Model_Mapper_Db_Abstract
        {
            const TABLE_NAME = 'comments';
            protected $_mapperTableName = self::TABLE_NAME;

            public function getUserComments($user_id)
            {
                $commentsBlob = $this->_getTable()->fetchAllByUserId((int)$user_id);
                return $commentsBlob->toArray();
            }
        }

        class Model_Comment extends Model_Abstract
        {
        }
        ?>

    The mapper's getUserComments function simply returns something like return $this->getTable->fetchAllByUserId($user_id), which is an array. fetchAllByUserId accepts $count and $offset params, but I don't know how to pass them from my controller to this function through the model without rewriting all the model code. So the question is: how can I organize pagination through model data (getComments)? Is there a "beautiful" method to get comments from 5 to 10, not all of them, as getComments returns by default?

    Read the article

  • Formating a table date field from the Model in Codeigniter

    - by Landitus
    Hi, I'm trying to re-format a date from a table in CodeIgniter. The controller is for a blog. I was successful when the date conversion happens in the view, but I was hoping to convert the date in the model to have things in order. Here's the date conversion as it happens in the view; this is inside the posts loop:

        <?php foreach($records as $row) : ?>
        <?php
            $fdate = "%d <abbr>%M</abbr> %Y";
            $dateConv = mdate($fdate, mysql_to_unix($row->date));
        ?>
        <div class="article section">
            <span class="date"><?php echo $dateConv; ?></span>
            ... Keeps going ...

    This is the model:

        class Novedades_model extends Model {

            function getAll() {
                $this->db->order_by('date', 'desc');
                $query = $this->db->get('novedades');
                if ($query->num_rows() > 0) {
                    foreach ($query->result() as $row) {
                        $data[] = $row;
                    }
                }
                return $data;
            }
        }

    How can I convert the date in the model? Can I access the date key and refactor it?

    Read the article

  • In Rails, a Sweeper isn't getting called in a Model-only setup

    - by charliepark
    I'm working on a Rails app, where I'm using page caching to store static html output. The caching works fine. I'm having trouble expiring the caches, though. I believe my problem is, in part, because I'm not expiring the cache from my controller. All of the actions necessary for this are being handled within the model. This seems like it should be doable, but all of the references to model-based cache expiration that I'm finding seem to be out of date, or are otherwise not working. In my environment.rb file, I'm calling:

        config.load_paths += %W( #{RAILS_ROOT}/app/sweepers )

    And I have, in the /sweepers folder, a LinkSweeper file:

        class LinkSweeper < ActionController::Caching::Sweeper
          observe Link

          def after_update(link)
            clear_links_cache(link)
          end

          def clear_links_cache(link)
            # expire_page :controller => 'links', :action => 'show', :md5 => link.md5
            expire_page '/l/' + link.md5 + '.html'
          end
        end

    So ... why isn't it deleting the cached page when I update the model? (Process: using script/console, I'm selecting items from the database and saving them, but their corresponding pages aren't deleting from the cache.) I'm also calling the specific method in the Link model that would normally invoke the sweeper. Neither works. If it matters, the cached file is an md5 hash off a key value in the Links table. The cached page is getting stored as something like /l/45ed4aade64d427...99919cba2bd90f.html. Essentially, it seems as though the Sweeper isn't actually observing the Link. I also read (here) that it might be possible to simply add the sweeper to config.active_record.observers in environment.rb, but that didn't seem to do it (and I wasn't sure if the load_path of app/sweepers in environment.rb obviated that).

    Read the article

  • C# 5 Async, Part 1: Simplifying Asynchrony – That for which we await

    - by Reed
    Today’s announcement at PDC of the future directions C# is taking excites me greatly. The new Visual Studio Async CTP is amazing. Asynchronous code, code which frustrates and demoralizes even the most advanced of developers, is taking a huge leap forward in terms of usability. This is handled by building on the Task functionality in .NET 4, as well as the addition of two new keywords being added to the C# language: async and await.

    The core of the new asynchronous functionality is built upon three key features. First is the Task functionality in .NET 4, based on Task and Task<TResult>. While Task was intended to be the primary means of asynchronous programming with .NET 4, the .NET Framework was still based mainly on the Asynchronous Pattern and the Event-based Asynchronous Pattern. The .NET Framework added functionality and guidance for wrapping existing APIs into a Task-based API, but the framework itself didn’t really adopt Task or Task<TResult> in any meaningful way. The CTP shows that, going forward, this is changing.

    One of the three key new features coming in C# is actually a .NET Framework feature. Nearly every asynchronous API in the .NET Framework has been wrapped into new, Task-based method calls. In the CTP, this is done via an external assembly (AsyncCtpLibrary.dll) which uses extension methods to wrap the existing APIs. However, going forward, this will be handled directly within the Framework. This will have a unifying effect throughout the .NET Framework. This is the first building block of the new features for asynchronous programming: going forward, all asynchronous operations will work via a method that returns Task or Task<TResult>.

    The second key feature is the new async contextual keyword being added to the language. The async keyword is used to declare an asynchronous function, which is a method that either returns void, a Task, or a Task<T>. Inside the asynchronous function, there must be at least one await expression. await is a new C# keyword that is used to automatically take a series of statements and break it up to potentially use discontinuous evaluation. This is done by using await on any expression that evaluates to a Task or Task<T>.

    For example, suppose we want to download a webpage as a string. There is a new method added to WebClient: Task<string> WebClient.DownloadStringTaskAsync(Uri). Since this returns a Task<string>, we can use it within an asynchronous function. Suppose, for example, that we wanted to do something similar to my asynchronous Task example: download a web page asynchronously, check to see if it supports XHTML 1.0, then report this into a TextBox.
    This could be done like so:

        private async void button1_Click(object sender, RoutedEventArgs e)
        {
            string url = "http://reedcopsey.com";
            string content = await new WebClient().DownloadStringTaskAsync(url);
            this.textBox1.Text = string.Format("Page {0} supports XHTML 1.0: {1}",
                url, content.Contains("XHTML 1.0"));
        }

    Let’s walk through what’s happening here, step by step. By adding the async contextual keyword to the method definition, we are able to use the await keyword on our WebClient.DownloadStringTaskAsync method call.

    When the user clicks this button, the new method (Task<string> WebClient.DownloadStringTaskAsync(string)) is called, which returns a Task<string>. By adding the await keyword, the runtime will call this method that returns Task<string>, and execution will return to the caller at this point. This means that our UI is not blocked while the webpage is downloaded. Instead, the UI thread will “await” at this point, and let the WebClient do its thing asynchronously.

    When the WebClient finishes downloading the string, the user interface’s synchronization context will automatically be used to “pick up” where it left off, and the Task<string> returned from DownloadStringTaskAsync is automatically unwrapped and set into the content variable. At this point, we can use that and set our text box content.

    There are a couple of key points here: asynchronous functions are declared with the async keyword, and contain one or more await expressions.

    In addition to the obvious benefits of shorter, simpler code, there are some subtle but tremendous benefits in this approach. When the execution of this asynchronous function continues after the first await statement, the initial synchronization context is used to continue the execution of this function. That means that we don’t have to explicitly marshal the call that sets textbox1.Text back to the UI thread; it’s handled automatically by the language and framework! Exception handling around asynchronous method calls also just works.

    I’d recommend every C# developer take a look at the documentation on the new Asynchronous Programming for C# and Visual Basic page, download the Visual Studio Async CTP, and try it out.

    Read the article
