Search Results

Search found 12909 results on 517 pages for 'domain'.


  • What is the best way to test Grails using IDEA?

    - by egervari
    I am seriously having a very unpleasant time testing with Grails. I will describe my experience, and I'd like to know if there's a better way.

    The first problem I have with testing is that Grails doesn't give immediate feedback to the developer when .save() fails inside an integration test. So let's say you have a domain class with 12 fields, and 1 of them violates a constraint you don't know about when you create the instance... it just doesn't save. Naturally, the test code afterward is going to fail. This is most troublesome because the thing under test is probably fine... the real risk and pain is the setup code for the test itself. So, I've tried to develop the habit of using .save(failOnError: true) to avoid this problem, but that's not something that can be easily enforced on everyone working on the project... and it's kind of bloaty. It'd be nice to turn this on automatically for code that runs as part of a unit test.

    Integration tests run slow. I cannot understand how 1 integration test that saves 1 object takes 15-20 seconds to run. With some careful test planning, I've been able to get 1000 tests talking to an actual database, doing dbunit dumps after every test, to run in about the same time! This is dumb. It is also hard to run all the unit tests and not the integration tests in IDEA.

    Integration tests are a massive pain. IDEA actually shows a GREEN BAR when integration tests fail. The output given by Grails indicates that something failed, but it doesn't say what it was. It says to look in the test reports... which forces the developer to open their file explorer to hunt the stupid html file down. What a pain. Then once you've got the html file and clicked through to the failing test, it'll tell you a line number. Since these reports are not in the IDE, you can't just click the stack trace to go to that line of code... you've got to go back and find it yourself. ARGGH!@!@!

    Maybe people put up with this, but I refuse. Testing should not be this painful. It should be fast and painless, or people won't do it. Please help. What is the solution? Rails instead of Grails? Something else entirely? I love the Grails framework, but they never demo their testing for a reason. They have a snazzy framework, but the testing is painful. After having used Scala for the last 1.5 months, and being totally spoiled by ScalaTest... I can't go back to this.

  • How to remove a model object from an EMF model and its GEF Editor via Adapter

    - by s.d
    This question is principally a follow-up to my question about EMF listening mechanisms.

    So, I have a third-party EMF model (uneditable) which is based on a generic graph model. The structure is as follows:

    ```
          Project
             |
         ItemGraph
             |
            Item
             |
          Document
             |
       DocumentGraph
        /    |    \
    Tokens Nodes Relations (Edges)
    ```

    I have a GEF editor which works on the DocumentGraph (i.e., not the root object; perhaps this is a problem?): getGraphicalViewer().setContents(documentGraph). The editor has the following edit part structure:

    ```
            DocumentGraphEP
             /           \
       Primary         Connection
       LayerEP          LayerEP
        /    \             |
    TokenEP  NodeEP    RelationEP
    ```

    PrimaryLayerEP and ConnectionLayerEP both have simple Strings as model, which are not represented in the EMF (domain) model. They are simply used to add a primary (i.e., node) layer and a connection layer (with ShortestPathConnectionRouter) to the editor.

    Problem: I am trying to get myself into the workings of EMF adapters, and have tried to make use of the available tutorials, mainly the EMF-GEF Eclipse tutorial, vainolo's blog, and vogella's tutorial. I thought I'd start with an easy thing, so I tried to remove a node from the graph and see if I could get it to work. Which I didn't, and I don't see where the problem is. I can select a node, and have the generic delete Action in my toolbar, but when I click it, nothing happens.

    Here is the respective source code for the different responsible parts. Please be so kind as to point me to any errors (of thinking, coding errors, what have you) you can find.

    NodeEditPart:

    ```java
    public class NodeEditPart extends AbstractGraphicalEditPart implements Adapter {

        private Notifier target;

        protected IFigure createFigure() {
            return new NodeFigure();
        }

        protected void createEditPolicies() {
            ....
            installEditPolicy(EditPolicy.COMPONENT_ROLE, new NodeComponentEditPolicy());
        }

        protected void refreshVisuals() {
            NodeFigure figure = (NodeFigure) getFigure();
            SNode model = (SNode) getModel();
            PrimaryLayerEditPart parent = (PrimaryLayerEditPart) getParent();
            // Set text
            figure.getLabel().setText(model.getSName());
            ....
        }

        public void activate() {
            if (isActive())
                return;
            // start listening for changes in the model
            ((Notifier) getModel()).eAdapters().add(this);
            super.activate();
        }

        public void deactivate() {
            if (!isActive())
                return;
            // stop listening for changes in the model
            ((Notifier) getModel()).eAdapters().remove(this);
            super.deactivate();
        }

        private Notifier getSDocumentGraph() {
            return ((SNode) getModel()).getSDocumentGraph();
        }

        @Override
        public void notifyChanged(Notification notification) {
            int type = notification.getEventType();
            switch (type) {
                case Notification.ADD:
                case Notification.ADD_MANY:
                case Notification.REMOVE:
                case Notification.REMOVE_MANY:
                    refreshChildren();
                    break;
                case Notification.SET:
                    refreshVisuals();
                    break;
            }
        }

        @Override
        public Notifier getTarget() {
            return target;
        }

        @Override
        public void setTarget(Notifier newTarget) {
            this.target = newTarget;
        }

        @Override
        public boolean isAdapterForType(Object type) {
            return type.equals(getModel().getClass());
        }
    }
    ```

    NodeComponentEditPolicy:

    ```java
    public class NodeComponentEditPolicy extends ComponentEditPolicy {

        public NodeComponentEditPolicy() {
            super();
        }

        protected Command createDeleteCommand(GroupRequest deleteRequest) {
            DeleteNodeCommand cmd = new DeleteNodeCommand();
            cmd.setSNode((SNode) getHost().getModel());
            return cmd;
        }
    }
    ```

    DeleteNodeCommand:

    ```java
    public class DeleteNodeCommand extends Command {

        private SNode node;
        private SDocumentGraph graph;

        @Override
        public void execute() {
            node.setSDocumentGraph(null);
        }

        @Override
        public void undo() {
            node.setSDocumentGraph(graph);
        }

        public void setSNode(SNode node) {
            this.node = node;
            this.graph = node.getSDocumentGraph();
        }
    }
    ```

    All seems to work fine: when a node is selected in the editor, the delete symbol is activated in the toolbar, but when it is clicked, nothing happens in the editor. I'd be very thankful for any pointers :).

  • MVC SiteMap - when different nodes point to the same action, SiteMap.CurrentNode does not map to the correct route

    - by awrigley
    Setup: I am using ASP.NET MVC 4, with MvcSiteMapProvider to manage my menus. I have a custom menu builder that evaluates whether a node is on the current branch (i.e., whether the SiteMap.CurrentNode is either that node itself or is nested under it). The code is included below, but essentially it checks the url of each node and compares it with the url of the CurrentNode, up through the CurrentNode's "family tree". The CurrentBranch is used by my custom menu builder to add a class that highlights menu items on the CurrentBranch.

    The Problem: My custom menu works fine, but I have found that MvcSiteMapProvider does not seem to evaluate the url of the CurrentNode in a consistent manner: when two nodes point to the same action and are distinguished only by a parameter of the action, SiteMap.CurrentNode does not seem to use the correct route (it ignores the distinguishing parameter and defaults to the first route that maps to the action defined in the node).

    Example of the Problem: In an app I have Members. A Member has a MemberStatus field that can be "Unprocessed", "Active" or "Inactive". To change the MemberStatus, I have a ProcessMemberController in an Area called Admin. The processing is done using the Process action on the ProcessMemberController. My mvcSiteMap has two nodes that BOTH map to the Process action. The only difference between them is the alternate parameter (such are my client's domain semantics), which in one case has a value of "Processed" and in the other "Unprocessed".

    Nodes:

    ```xml
    <mvcSiteMapNode title="Process" area="Admin" controller="ProcessMembers" action="Process" alternate="Unprocessed" />
    <mvcSiteMapNode title="Change Status" area="Admin" controller="ProcessMembers" action="Process" alternate="Processed" />
    ```

    Routes: The corresponding routes to these two nodes are (again, the only thing that distinguishes them is the value of the alternate parameter):

    ```csharp
    context.MapRoute(
        "Process_New_Members",
        "Admin/Unprocessed/Process/{MemberId}",
        new { controller = "ProcessMembers", action = "Process", alternate = "Unprocessed", MemberId = UrlParameter.Optional }
    );

    context.MapRoute(
        "Change_Status_Old_Members",
        "Admin/Members/Status/Change/{MemberId}",
        new { controller = "ProcessMembers", action = "Process", alternate = "Processed", MemberId = UrlParameter.Optional }
    );
    ```

    What works: The Html.ActionLink helper uses the routes and produces the urls I expect:

    ```csharp
    @Html.ActionLink("Process", MVC.Admin.ProcessMembers.Process(item.MemberId, "Unprocessed"))
    // Output (alternate="Unprocessed" and item.MemberId = 12): Admin/Unprocessed/Process/12

    @Html.ActionLink("Status", MVC.Admin.ProcessMembers.Process(item.MemberId, "Processed"))
    // Output (alternate="Processed" and item.MemberId = 23): Admin/Members/Status/Change/23
    ```

    In both cases the output is correct and as I expect.

    What doesn't work: Let's say my request involves the second option, i.e., /Admin/Members/Status/Change/47, corresponding to alternate = "Processed" and a MemberId of 47. Debugging my static CurrentBranch property (see below), I find that SiteMap.CurrentNode shows:

    ```
    PreviousSibling: null
    Provider: {MvcSiteMapProvider.DefaultSiteMapProvider}
    ReadOnly: false
    ResourceKey: ""
    Roles: Count = 0
    RootNode: {Home}
    Title: "Process"
    Url: "/Admin/Unprocessed/Process/47"
    ```

    I.e., for a request url of /Admin/Members/Status/Change/47, SiteMap.CurrentNode.Url evaluates to /Admin/Unprocessed/Process/47. That is, it is ignoring the alternate parameter and using the wrong route.

    CurrentBranch static property:

    ```csharp
    /// <summary>
    /// ReadOnly. Gets the Branch of the Site Map that holds the SiteMap.CurrentNode
    /// </summary>
    public static List<SiteMapNode> CurrentBranch
    {
        get
        {
            List<SiteMapNode> currentBranch = null;
            if (currentBranch == null)
            {
                SiteMapNode cn = SiteMap.CurrentNode;
                SiteMapNode n = cn;
                List<SiteMapNode> ln = new List<SiteMapNode>();
                if (cn != null)
                {
                    while (n != null && n.Url != SiteMap.RootNode.Url)
                    {
                        // I don't need to check for n.ParentNode == null
                        // because cn != null && n != SiteMap.RootNode
                        ln.Add(n);
                        n = n.ParentNode;
                    }
                    // the while loop excludes the root node, so add it here
                    // I could add n, which should now be equal to SiteMap.RootNode, but this is clearer
                    ln.Add(SiteMap.RootNode);
                    // The nodes were added in reverse order, from the CurrentNode up, so reverse them.
                    ln.Reverse();
                }
                currentBranch = ln;
            }
            return currentBranch;
        }
    }
    ```

    The Question: What am I doing wrong? The routes are interpreted by Html.ActionLink as I expect, but are not evaluated by SiteMap.CurrentNode as I expect. In other words, in evaluating my routes, SiteMap.CurrentNode ignores the distinguishing alternate parameter.

  • N-tier Repository POCOs - Aggregates?

    - by Sam
    Assume the following simple POCOs, Country and State:

    ```csharp
    public partial class Country
    {
        public Country()
        {
            States = new List<State>();
        }

        public virtual int CountryId { get; set; }
        public virtual string Name { get; set; }
        public virtual string CountryCode { get; set; }
        public virtual ICollection<State> States { get; set; }
    }

    public partial class State
    {
        public virtual int StateId { get; set; }
        public virtual int CountryId { get; set; }
        public virtual Country Country { get; set; }
        public virtual string Name { get; set; }
        public virtual string Abbreviation { get; set; }
    }
    ```

    Now assume I have a simple repository that looks something like this:

    ```csharp
    public partial class CountryRepository : IDisposable
    {
        protected internal IDatabase _db;

        public CountryRepository()
        {
            _db = new Database(System.Configuration.ConfigurationManager.AppSettings["DbConnName"]);
        }

        public IEnumerable<Country> GetAll()
        {
            return _db.Query<Country>("SELECT * FROM Countries ORDER BY Name", null);
        }

        public Country Get(object id)
        {
            return _db.SingleById(id);
        }

        public void Add(Country c)
        {
            _db.Insert(c);
        }

        /* ...And So On... */
    }
    ```

    Typically in my UI I do not display all of the children (states), but I do display an aggregate count. So my country list view model might look like this:

    ```csharp
    public partial class CountryListVM
    {
        [Key]
        public int CountryId { get; set; }
        public string Name { get; set; }
        public string CountryCode { get; set; }
        public int StateCount { get; set; }
    }
    ```

    When I'm using the underlying data provider (Entity Framework, NHibernate, PetaPoco, etc.) directly in my UI layer, I can easily do something like this:

    ```csharp
    IList<CountryListVM> list = db.Countries
        .OrderBy(c => c.Name)
        .Select(c => new CountryListVM()
        {
            CountryId = c.CountryId,
            Name = c.Name,
            CountryCode = c.CountryCode,
            StateCount = c.States.Count
        })
        .ToList();
    ```

    But when I'm using a repository or service pattern, I abstract away direct access to the data layer. It seems as though my options are to:

    - Return the Country with a populated States collection, then map over in the UI layer. The downside to this approach is that I'm returning a lot more data than is actually needed.
    - Put all my view models into my Common dll library (as opposed to having them in the Models directory in my MVC app) and expand my repository to return specific view models instead of just the domain POCOs. The downside to this approach is that I'm leaking UI-specific stuff (MVC data validation annotations) into my previously clean POCOs.

    Are there other options? How are you handling these types of things?
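
    A third option worth sketching (a hedged sketch under assumptions, not part of the original question): keep a plain, annotation-free read model in the Common library and give the repository a projection method, so neither the full States collection nor MVC validation attributes cross layers. This assumes the same PetaPoco-style IDatabase.Query<T> used in the repository above; the SQL and the GetAllSummaries name are illustrative.

    ```csharp
    // Sketch: project straight into a summary read-model so the repository
    // returns exactly what the list view needs -- no States collection is
    // ever materialized. Assumes a PetaPoco-style Query<T>(sql, args).
    public partial class CountryRepository
    {
        public IEnumerable<CountryListVM> GetAllSummaries()
        {
            return _db.Query<CountryListVM>(
                @"SELECT c.CountryId, c.Name, c.CountryCode,
                         COUNT(s.StateId) AS StateCount
                  FROM Countries c
                  LEFT JOIN States s ON s.CountryId = c.CountryId
                  GROUP BY c.CountryId, c.Name, c.CountryCode
                  ORDER BY c.Name", null);
        }
    }
    ```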

  • Deleting unreferenced child records with nhibernate

    - by Chev
    Hi there, I am working on an MVC app using NHibernate as the ORM (NCommon framework). I have parent/child entities Product, Vendor & ProductVendors, with a one-to-many relationship between them: Product has a ProductVendors collection, Product.ProductVendors. I currently retrieve a Product object, eager-load the children, and send these down the wire to my ASP.NET MVC client. A user will then modify the list of Vendors and post the updated Product back. I am using a custom model binder to generate the modified Product entity. I am able to update the Product fine and insert new ProductVendors. My problem is that dereferenced ProductVendors are not cascade-deleted when specifying Product.ProductVendors.Clear() and calling _productRepository.Save(product). The problem seems to be with attaching the detached instance.

    Here are my mapping files:

    Product:

    ```xml
    <?xml version="1.0" encoding="utf-8" ?>
    <id name="Id">
      <generator class="guid.comb" />
    </id>
    <version name="LastModified" unsaved-value="0" column="LastModified" />
    <property name="Name" type="String" length="250" />
    ```

    ProductVendors:

    ```xml
    <?xml version="1.0" encoding="utf-8" ?>
    <id name="Id">
      <generator class="guid.comb" />
    </id>
    <version name="LastModified" unsaved-value="0" column="LastModified" />
    <property name="Price" />
    <many-to-one name="Product" class="Product" column="ProductId" lazy="false" not-null="true" />
    <many-to-one name="Vendor" class="Vendor" column="VendorId" lazy="false" not-null="true" />
    ```

    Custom model binder:

    ```csharp
    using System;
    using Test.Web.Mvc;
    using Test.Domain;

    namespace Spoked.MVC
    {
        public class ProductUpdateModelBinder : DefaultModelBinder
        {
            private readonly ProductSystem ProductSystem;

            public ProductUpdateModelBinder(ProductSystem productSystem)
            {
                ProductSystem = productSystem;
            }

            protected override void OnModelUpdated(ControllerContext controllerContext, ModelBindingContext bindingContext)
            {
                var product = bindingContext.Model as Product;
                if (product != null)
                {
                    product.Category = ProductSystem.GetCategory(new Guid(bindingContext.ValueProvider["Category"].AttemptedValue));
                    product.Brand = ProductSystem.GetBrand(new Guid(bindingContext.ValueProvider["Brand"].AttemptedValue));
                    product.ProductVendors.Clear();

                    if (bindingContext.ValueProvider["ProductVendors"] != null)
                    {
                        string[] productVendorIds = bindingContext.ValueProvider["ProductVendors"].AttemptedValue.Split(',');
                        foreach (string id in productVendorIds)
                        {
                            product.AddProductVendor(ProductSystem.GetVendor(new Guid(id)), 90m);
                        }
                    }
                }
            }
        }
    }
    ```

    Controller:

    ```csharp
    [AcceptVerbs(HttpVerbs.Post)]
    public ActionResult Update(Product product)
    {
        using (var scope = new UnitOfWorkScope())
        {
            //product.ProductVendors.Clear();
            _productRepository.Save(product);
            scope.Commit();
        }

        using (new UnitOfWorkScope())
        {
            IList<Vendor> availableVendors = _productSystem.GetAvailableVendors(product);
            productDetailEditViewModel = new ProductDetailEditViewModel(product,
                _categoryRepository.Select(x => x).ToList(),
                _brandRepository.Select(x => x).ToList(),
                availableVendors);
        }

        return RedirectToAction("Edit", "Products", new { id = product.Id.ToString() });
    }
    ```

    The following test does pass, though:

    ```csharp
    [Test]
    [NUnit.Framework.Category("ProductTests")]
    public void Can_Delete_Product_Vendors_By_Dereferencing()
    {
        Product product;

        using (UnitOfWorkScope scope = new UnitOfWorkScope())
        {
            Console.Out.WriteLine("Selecting...");
            product = _productRepository.First();
            Console.Out.WriteLine("Adding Product Vendor...");
            product.AddProductVendor(_vendorRepository.First(), 0m);
            scope.Commit();
        }

        Console.Out.WriteLine("About to delete Product Vendors...");

        using (UnitOfWorkScope scope = new UnitOfWorkScope())
        {
            Console.Out.WriteLine("Clearing Product Vendor...");
            _productRepository.Save(product); // seems to be needed to attach entity to the persistence manager
            product.ProductVendors.Clear();
            scope.Commit();
        }
    }
    ```

    Going nuts here, as I almost have a very nice solution between MVC, custom model binders and NHibernate. I'm just not seeing my deletes cascaded. Any help greatly appreciated. Chev
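
    One thing worth noting: the Product mapping shown above contains no <set> element for the ProductVendors collection, and NHibernate only deletes orphaned children when the collection is mapped with an all-delete-orphan cascade. Below is a hedged sketch of what such a collection mapping might look like — it is not taken from the post, and the column and class names are assumed from the classes above.

    ```xml
    <!-- Hypothetical sketch: collection mapping for Product.ProductVendors.
         cascade="all-delete-orphan" asks NHibernate to delete children removed
         from the collection; inverse="true" leaves the foreign key to the
         ProductVendor side. Names are assumptions, not from the original post. -->
    <set name="ProductVendors" inverse="true" cascade="all-delete-orphan">
      <key column="ProductId" />
      <one-to-many class="ProductVendor" />
    </set>
    ```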

  • Simple Constructor With Initializer List? - C++

    - by Alex
    Hi all, below I've included my .h file, and my problem is that the compiler does not accept my simple exception classes' constructors with initializer lists. It is also saying that string is an undeclared identifier, even though I have #include <string> at the top of the .h file. Do you see something I am doing wrong? For further explanation, this is one of my domain classes that I'm integrating into a wxWidgets GUI application on Windows. Thanks!

    Time.h:

    ```cpp
    #pragma once
    #include <string>
    #include <iostream>

    // global constants for use in calculation
    const int HOURS_TO_MINUTES = 60;
    const int MINUTES_TO_HOURS = 100;

    class Time
    {
    public:
        // default Time class constructor
        // initializes all vars to default values
        Time(void);

        // ComputeEndTime computes the new delivery end time
        // params - none
        // preconditions - vars will be error-free
        // postconditions - the correct end time will be returned as an int
        // returns an int
        int ComputeEndTime();

        // GetStartTime is the getter for var startTime
        // params - none
        // returns an int
        int GetStartTime() { return startTime; }

        // GetEndTime is the getter for var endTime
        // params - none
        // returns an int
        int GetEndTime() { return endTime; }

        // GetTimeDiff is the getter for var timeDifference
        // params - none
        // returns a double
        double GetTimeDiff() { return timeDifference; }

        // SetStartTime is the setter for var startTime
        // params - an int
        // returns void
        void SetStartTime(int s) { startTime = s; }

        // SetEndTime is the setter for var endTime
        // params - an int
        // returns void
        void SetEndTime(int e) { endTime = e; }

        // SetTimeDiff is the setter for var timeDifference
        // params - a double
        // returns void
        void SetTimeDiff(double t) { timeDifference = t; }

        // destructor for Time class
        ~Time(void);

    private:
        int startTime;
        int endTime;
        double timeDifference;
    };

    class HourOutOfRangeException
    {
    public:
        // param constructor
        // initializes message to passed parameter
        // preconditions - param will be a string
        // postconditions - message will be initialized
        // params a string
        // no return type
        HourOutOfRangeException(string pMessage) : message(pMessage) {}

        // GetMessage is getter for var message
        // params none
        // preconditions - none
        // postconditions - none
        // returns string
        string GetMessage() { return message; }

        // destructor
        ~HourOutOfRangeException() {}

    private:
        string message;
    };

    class MinuteOutOfRangeException
    {
    public:
        // param constructor
        // initializes message to passed parameter
        // preconditions - param will be a string
        // postconditions - message will be initialized
        // params a string
        // no return type
        MinuteOutOfRangeException(string pMessage) : message(pMessage) {}

        // GetMessage is getter for var message
        // params none
        // preconditions - none
        // postconditions - none
        // returns string
        string GetMessage() { return message; }

        // destructor
        ~MinuteOutOfRangeException() {}

    private:
        string message;
    };

    class PercentageOutOfRangeException
    {
    public:
        // param constructor
        // initializes message to passed parameter
        // preconditions - param will be a string
        // postconditions - message will be initialized
        // params a string
        // no return type
        PercentageOutOfRangeException(string pMessage) : message(pMessage) {}

        // GetMessage is getter for var message
        // params none
        // preconditions - none
        // postconditions - none
        // returns string
        string GetMessage() { return message; }

        // destructor
        ~PercentageOutOfRangeException() {}

    private:
        string message;
    };

    class StartEndException
    {
    public:
        // param constructor
        // initializes message to passed parameter
        // preconditions - param will be a string
        // postconditions - message will be initialized
        // params a string
        // no return type
        StartEndException(string pMessage) : message(pMessage) {}

        // GetMessage is getter for var message
        // params none
        // preconditions - none
        // postconditions - none
        // returns string
        string GetMessage() { return message; }

        // destructor
        ~StartEndException() {}

    private:
        string message;
    };
    ```

  • Advice on Factory Method

    - by heath
    Using PHP 5.2, I'm trying to use a factory to return a service to the controller. My request URI would be of the format www.mydomain.com/service/method/param1/param2/etc. My controller would then call a service factory using the token sent in the URI. From what I've seen, there are two main routes I could go with my factory.

    Single method:

    ```php
    class ServiceFactory {
        public static function getInstance($token) {
            switch ($token) {
                case 'location':
                    return new StaticPageTemplateService('location');
                    break;
                case 'product':
                    return new DynamicPageTemplateService('product');
                    break;
                case 'user':
                    return new UserService();
                    break;
                default:
                    return new StaticPageTemplateService($token);
            }
        }
    }
    ```

    or multiple methods:

    ```php
    class ServiceFactory {
        public static function getLocationService() {
            return new StaticPageTemplateService('location');
        }

        public static function getProductService() {
            return new DynamicPageTemplateService('product');
        }

        public static function getUserService() {
            return new UserService();
        }

        public static function getDefaultService($token) {
            return new StaticPageTemplateService($token);
        }
    }
    ```

    So, given this, I will have a handful of generic services in which I will pass that token (for example, StaticPageTemplateService and DynamicPageTemplateService) that will probably implement another factory method just like this to grab templates, domain objects, etc. And some will be specific services (for example, UserService) which will be 1:1 to that token and not reused.

    So, this seems to be an OK approach (please give suggestions if it is not) for a small number of services. But what about when, over time, my site grows and I end up with hundreds of possibilities? This no longer seems like a good approach. Am I just way off to begin with, or is there another design pattern that would be a better fit? Thanks.

    UPDATE: @JSprang - the token is actually sent in the URI, like mydomain.com/location wanting a service specific to location and mydomain.com/news wanting a service specific to news. Now, for a lot of these, the service will be generic. For instance, a lot of pages will call a StaticTemplatePageService, in which the token is passed in to the service. That service in turn will grab the "location" template or "links" template and just spit it back out. Some will need DynamicTemplatePageService, in which the token gets passed in, like "news", and that service will grab a NewsDomainObject, determine how to present it, and spit that back out. Others, like "user", will be specific to a UserService, which will have methods like Login, Logout, etc. So basically, the token will be used to determine which service is needed AND, if it is a generic service, that token will be passed to that service. Maybe token isn't the correct terminology, but I hope you get the purpose. I wanted to use the factory so I can easily swap out which service I need in case my needs change. I just worry that after the site grows larger (both pages and functionality), the factory will become rather bloated. But I'm starting to feel like I just can't get away from storing the mappings in an array (like Stephen's solution). That just doesn't feel OOP to me, and I was hoping to find something more elegant.

  • Spring hibernate ehcache setup

    - by Johan Sjöberg
    I have some problems getting the Hibernate second-level cache to work for caching domain objects. According to the ehcache documentation it shouldn't be too complicated to add caching to my existing, working application. I have the following setup (only relevant snippets are outlined):

    ```java
    @Entity
    @Cache(usage = CacheConcurrencyStrategy.NONSTRICT_READ_WRITE)
    public class Entity {
        // ...
    }
    ```

    ehcache-entity.xml:

    ```xml
    <cache name="com.company.Entity"
           eternal="false"
           maxElementsInMemory="10000"
           overflowToDisk="true"
           diskPersistent="false"
           timeToIdleSeconds="0"
           timeToLiveSeconds="300"
           memoryStoreEvictionPolicy="LRU" />
    ```

    ApplicationContext.xml:

    ```xml
    <bean class="org.springframework.orm.hibernate3.annotation.AnnotationSessionFactoryBean">
      <property name="dataSource" ref="ds" />
      <property name="annotatedClasses">
        <list>
          <value>com.company.Entity</value>
        </list>
      </property>
      <property name="hibernateProperties">
        <props>
          <prop key="hibernate.generate_statistics">true</prop>
          <prop key="hibernate.cache.use_second_level_cache">true</prop>
          <prop key="net.sf.ehcache.configurationResourceName">/ehcache-entity.xml</prop>
          <prop key="hibernate.cache.region.factory_class">net.sf.ehcache.hibernate.SingletonEhCacheRegionFactory</prop>
          ....
        </props>
      </property>
    </bean>
    ```

    Maven dependencies:

    ```xml
    <dependency>
      <groupId>org.hibernate</groupId>
      <artifactId>hibernate-annotations</artifactId>
      <version>3.4.0.GA</version>
    </dependency>
    <dependency>
      <groupId>org.springframework</groupId>
      <artifactId>spring-hibernate3</artifactId>
      <version>2.0.8</version>
      <exclusions>
        <exclusion>
          <artifactId>hibernate</artifactId>
          <groupId>org.hibernate</groupId>
        </exclusion>
      </exclusions>
    </dependency>
    <dependency>
      <groupId>net.sf.ehcache</groupId>
      <artifactId>ehcache-core</artifactId>
      <version>2.3.2</version>
    </dependency>
    ```

    A test class is used which enables cache statistics:

    ```java
    Cache cache = cacheManager.getCache("com.company.Entity");
    cache.setStatisticsAccuracy(Statistics.STATISTICS_ACCURACY_GUARANTEED);
    cache.setStatisticsEnabled(true);

    // store, read etc ...

    cache.getStatistics().getMemoryStoreObjectCount(); // returns 0
    ```

    No operation seems to trigger any cache changes. What am I missing? Currently I'm using HibernateTemplate in the DAO; perhaps that has some impact.

    [EDIT] The only ehcache log output when set to DEBUG is:

    ```
    SettingsFactory: Cache region factory : net.sf.ehcache.hibernate.SingletonEhCacheRegionFactory
    ```

  • Where should I create my DbCommand instances?

    - by Domenic
    I seemingly have two choices:

    1. Make my class implement IDisposable. Create my DbCommand instances as private readonly fields and, in the constructor, add the parameters that they use. Whenever I want to write to the database, bind to these parameters (reusing the same command instances), set the Connection and Transaction properties, then call ExecuteNonQuery. In the Dispose method, call Dispose on each of these fields.

    2. Each time I want to write to the database, write using (var cmd = new DbCommand("...", connection, transaction)) around the usage of the command, and add parameters and bind to them every time as well, before calling ExecuteNonQuery. I assume I don't need a new command for each query, just a new command for each time I open the database (right?).

    Both of these seem somewhat inelegant and possibly incorrect. For #1, it is annoying for my users that this class is now IDisposable just because I have used a few DbCommands (which should be an implementation detail they don't care about). I am also somewhat suspicious that keeping a DbCommand instance around might inadvertently lock the database or something. For #2, it feels like I'm doing a lot of work (in terms of .NET objects) each time I want to write to the database, especially with the parameter-adding. It seems like I create the same object every time, which just feels like bad practice.

    For reference, here is my current code, using #1:

    ```csharp
    using System;
    using System.Net;
    using System.Data.SQLite;

    public class Class1 : IDisposable
    {
        private readonly SQLiteCommand updateCookie = new SQLiteCommand(
            "UPDATE moz_cookies SET value = @value, expiry = @expiry, isSecure = @isSecure, " +
            "isHttpOnly = @isHttpOnly WHERE name = @name AND host = @host AND path = @path");

        public Class1()
        {
            this.updateCookie.Parameters.AddRange(new[]
            {
                new SQLiteParameter("@name"),
                new SQLiteParameter("@value"),
                new SQLiteParameter("@host"),
                new SQLiteParameter("@path"),
                new SQLiteParameter("@expiry"),
                new SQLiteParameter("@isSecure"),
                new SQLiteParameter("@isHttpOnly")
            });
        }

        private static void BindDbCommandToMozillaCookie(DbCommand command, Cookie cookie)
        {
            long expiresSeconds = (long)cookie.Expires.TotalSeconds;

            command.Parameters["@name"].Value = cookie.Name;
            command.Parameters["@value"].Value = cookie.Value;
            command.Parameters["@host"].Value = cookie.Domain;
            command.Parameters["@path"].Value = cookie.Path;
            command.Parameters["@expiry"].Value = expiresSeconds;
            command.Parameters["@isSecure"].Value = cookie.Secure;
            command.Parameters["@isHttpOnly"].Value = cookie.HttpOnly;
        }

        public void WriteCurrentCookiesToMozillaBasedBrowserSqlite(string databaseFilename)
        {
            using (SQLiteConnection connection = new SQLiteConnection("Data Source=" + databaseFilename))
            {
                connection.Open();
                using (SQLiteTransaction transaction = connection.BeginTransaction())
                {
                    this.updateCookie.Connection = connection;
                    this.updateCookie.Transaction = transaction;

                    foreach (Cookie cookie in SomeOtherClass.GetCookieArray())
                    {
                        Class1.BindDbCommandToMozillaCookie(this.updateCookie, cookie);
                        this.updateCookie.ExecuteNonQuery();
                    }

                    transaction.Commit();
                }
            }
        }

        #region IDisposable implementation
        protected virtual void Dispose(bool disposing)
        {
            if (!this.disposed && disposing)
            {
                this.updateCookie.Dispose();
            }
            this.disposed = true;
        }

        public void Dispose()
        {
            this.Dispose(true);
            GC.SuppressFinalize(this);
        }

        ~Class1()
        {
            this.Dispose(false);
        }

        private bool disposed;
        #endregion
    }
    ```
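
    For comparison, here is a minimal sketch of option #2 as described above: the command lives only for the duration of one write, so the class keeps no disposable state. It reuses the SQL and the expiry computation from the code above; the WriteCookie helper name is invented for illustration, and AddWithValue is the System.Data.SQLite convenience overload.

    ```csharp
    // Sketch of option #2: a short-lived command per write, disposed by the
    // using block. Mirrors the schema and expiry computation from the post.
    private static void WriteCookie(SQLiteConnection connection, SQLiteTransaction transaction, Cookie cookie)
    {
        using (var cmd = new SQLiteCommand(
            "UPDATE moz_cookies SET value = @value, expiry = @expiry, isSecure = @isSecure, " +
            "isHttpOnly = @isHttpOnly WHERE name = @name AND host = @host AND path = @path",
            connection, transaction))
        {
            cmd.Parameters.AddWithValue("@value", cookie.Value);
            cmd.Parameters.AddWithValue("@expiry", (long)cookie.Expires.TotalSeconds);
            cmd.Parameters.AddWithValue("@isSecure", cookie.Secure);
            cmd.Parameters.AddWithValue("@isHttpOnly", cookie.HttpOnly);
            cmd.Parameters.AddWithValue("@name", cookie.Name);
            cmd.Parameters.AddWithValue("@host", cookie.Domain);
            cmd.Parameters.AddWithValue("@path", cookie.Path);
            cmd.ExecuteNonQuery(); // the command is disposed as the using block exits
        }
    }
    ```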

  • ADO program to list members of a large group.

    - by AlexGomez
    Hi everyone, I'm attempting to list all the members in an Active Directory group using ADO. The problem I have is that many of these groups have over 1500 members, and ADSI cannot handle more than 1500 items in a multi-valued attribute. Fortunately I came across Richard Mueller's wonderful VBScript that handles more than 1500 members, at http://www.rlmueller.net/DocumentLargeGroup.htm. I modified his code as shown below so that I can list ALL the groups and their memberships in a certain OU. However, I keep getting the exception shown below:

    "ADODB.Recordset: Item cannot be found in the collection corresponding to the requested name or ordinal."

    My program appears to get stuck at:

    ```vbscript
    strPath = adoRecordset.Fields("ADsPath").Value
    Set objGroup = GetObject(strPath)
    ```

    All I am doing above is issuing the query to get back a recordset consisting of the ADsPath for each group in the OU. It then walks through the recordset, grabs the ADsPath for the first group and stores it in a variable named strPath; we then use the value of that variable to bind to the group account for that group. It really should work! Any idea why the code below doesn't work for me? Any pointers will be greatly appreciated. Thanks.

    ```vbscript
    Option Explicit

    Dim objRootDSE, strDNSDomain, adoCommand
    Dim adoConnection, strBase, strAttributes
    Dim strFilter, strQuery, adoRecordset
    Dim strDN, intCount, blnLast, intLowRange
    Dim intHighRange, intRangeStep, objField
    Dim objGroup, objMember, strName

    ' Determine DNS domain name.
    Set objRootDSE = GetObject("LDAP://RootDSE")
    'strDNSDomain = objRootDSE.Get("DefaultNamingContext")
    strDNSDomain = "XXXXXXXX"

    ' Use ADO to search Active Directory.
    Set adoCommand = CreateObject("ADODB.Command")
    Set adoConnection = CreateObject("ADODB.Connection")
    adoConnection.Provider = "ADsDSOObject"
    adoConnection.Open = "Active Directory Provider"
    adoCommand.ActiveConnection = adoConnection
    adoCommand.Properties("Page Size") = 100
    adoCommand.Properties("Timeout") = 30
    adoCommand.Properties("Cache Results") = False

    ' Specify base of search.
    strBase = "<LDAP://" & strDNSDomain & ">"

    ' Specify the attribute values to retrieve.
    strAttributes = "member"

    ' Filter on objects of class "group"
    strFilter = "(&(objectClass=group)(samAccountName=*))"

    ' Enumerate direct group members.
    ' Use range limits to handle more than 1000/1500 members.
    ' Setup to retrieve 1000 members at a time.
    blnLast = False
    intRangeStep = 999
    intLowRange = 0
    IntHighRange = intLowRange + intRangeStep

    Do While True
        If (blnLast = True) Then
            ' If last query, retrieve remaining members.
            strQuery = strBase & ";" & strFilter & ";" _
                & strAttributes & ";range=" & intLowRange _
                & "-*;subtree"
        Else
            ' If not last query, retrieve 1000 members.
            strQuery = strBase & ";" & strFilter & ";" _
                & strAttributes & ";range=" & intLowRange & "-" _
                & intHighRange & ";subtree"
        End If

        adoCommand.CommandText = strQuery
        Set adoRecordset = adoCommand.Execute
        adoRecordset.MoveFirst

        intCount = 0
        Do Until adoRecordset.EOF
            strPath = adoRecordset.Fields("ADsPath").Value
            Set objGroup = GetObject(strPath)
            For Each objField In adoRecordset.Fields
                If (VarType(objField) = (vbArray + vbVariant)) Then
                    For Each strDN In objField.Value
                        ' Escape any forward slash characters, "/", with the backslash
                        ' escape character. All other characters that should be escaped are.
                        strDN = Replace(strDN, "/", "\/")
                        ' Check dictionary object for duplicates.
                        'If (objGroupList.Exists(strDN) = False) Then
                            ' Add to dictionary object.
                            'objGroupList.Add strDN, True
                            ' Bind to each group member, to find member's samAccountName
                            Set objMember = GetObject("LDAP://" & strDN)
                            ' Output group cn, group samAccountName and group member's samAccountName.
                            Wscript.Echo objMember.samAccountName
                            intCount = intCount + 1
                        'End If
                    Next
                End If
            Next
            adoRecordset.MoveNext
        Loop
        adoRecordset.Close

        ' If this is the last query, exit the Do While loop.
        If (blnLast = True) Then
            Exit Do
        End If

        ' If the previous query returned no members, then the previous
        ' query for the next 1000 members failed. Perform one more
        ' query to retrieve remaining members (less than 1000).
        If (intCount = 0) Then
            blnLast = True
        Else
            ' Setup to retrieve next 1000 members.
            intLowRange = intHighRange + 1
            intHighRange = intLowRange + intRangeStep
        End If
    Loop
    ```

  • Survey Data Model - How to avoid EAV and excessive denormalization?

    - by AlexDPC
    Hi everyone, my database skills are mediocre at best and I have to design a data model for survey data. I have spent some thought on this, and right now I feel that I am stuck between some kind of EAV model and a design involving hundreds of tables, each with hundreds of columns (and thousands of records). There must be a better way to do this, and I hope that the wise folks on this forum can help me. I have already searched various forums, but I couldn't really find a solution. If it has already been given elsewhere, please excuse me and provide me with a link so I can read it up.

    Some assumptions about the data I have to deal with:

    - Each survey consists of 1 to n questionnaires.
    - Each questionnaire consists of 100-2,000 questions (please ignore that 2,000 questions really sounds like a lot to answer...).
    - Questions can be of various types: multiple-choice, free text, a number (like age, income, percentages, ...).
    - Each survey involves 10-200 countries. (These are not the respondents. The respondents are actually people in the countries.)
    - Depending on the type of questionnaire, each questionnaire is answered by 100-20,000 respondents per country.
    - A country can adapt the questionnaires for a survey, i.e. add, remove or edit questions.
    - The data for one country is gathered in a separate database in that country. There is no possibility of online integration from the start. The data for all countries has to be integrated later. This means, for example, that if a country has deleted a question, that data must somehow be derived from what they sent in order to achieve a uniform design across all countries.
    - I will have to write the integration and cleaning software, which will need to work with every country's data.
    - In the end the data needs to be exported to flat files, one rectangular grid per country and questionnaire.

    I have already discussed this topic with people from various backgrounds and have not come to a good solution yet. I mainly got two kinds of opinions.

    The domain experts, who are used to working with flat files (spreadsheet-style) for data processing and analysis, vote for a denormalized structure with loads of tables and columns, as I described above (1 table per country and questionnaire). This sounds terrible to me, because I learned that wide tables are to be avoided, it will be annoying to determine which columns are actually in a table when working with it, the database will become cluttered with hundreds of tables (or I would even need to set up multiple databases, each with a similar yet slightly different design), etc.

    The O-O programmers vote for a strongly "normalized" design, which would effectively lead to a central table containing all the answers from all respondents to all questions. This table would either need to contain a column of type sql_variant, or multiple answer columns with different types to store answers of different types (multiple choice, free text, ...). The former would essentially be an EAV model. I tend to follow Joe Celko here, who strongly discourages its use (he calls it OTLT or "One True Lookup Table"). The latter would imply that each row would contain null cells for the not-applicable types by design.

    Another alternative I could think of would be to create one table per answer type, i.e., one for multiple-choice questions, one for free-text questions, etc. That's not so generic, it would lead to a lot of union joins, I think, and I would have to add a table if a new answer type is invented.

    Sorry for boring you with all this text, and thank you for your input!

    Cheers, Alex

    PS: I asked the same question here: http://www.eggheadcafe.com/community/aspnet/13/10242616/survey-data-model--how-to-avoid-eav-and-excessive-denormalization.aspx

  • Database Table Schema and Aggregate Roots

    - by bretddog
    Hi, the application is single-user, 1-tier (1 PC), with a SQL CE database. The data service layer will be (I think): a repository returning domain objects and querying the database with LINQ to SQL (dbml). There are obviously a lot more columns; this is a simplified view: http://img573.imageshack.us/img573/3612/ss20110115171817w.png

    This is my first attempt at creating a 2-table database. I think the table schema makes sense, but I need some reassurance or critique, because the table relations look quite scary, to be honest. I'm hoping you could look at the table schema and respond if there are clear signs of trouble or errors that you spot right away, and, if you have time, look at the Program Summary/Questions below and see if the table layout makes sense for those points. Please be brutal, I will try to defend :)

    Program summary:

    a) A set of categories, each having a set of strategies (1:m).
    b) Each day a number of items will be produced, and each strategy MAY reference them. (So there can be 50 items, and a strategy may reference 23 of them.)
    c) An item can be referenced by more than one strategy. So I think it's an m:m relation.
    d) Status values will be logged at fixed time-fractions through the day, for: each Strategy, each StrategyItem, each Item.
    e) An action on an item may be executed by a strategy that references it. This is logged as ItemAction (could have called it StrategyItemAction).

    User requests: b) - e) describe the main activity mode of the program: to work with only today's DayLog, for each category. The 2nd-priority activity is retrieval of history, which typically will be: from all categories, from day x to day y, get all StrategyDailyLog.

    Questions:

    First, does the overall layout look sound? I'm worried to see that there are so many relationships in all directions, connecting everything. Is this normal, or does it look like trouble?

    StrategyItem is made to represent an m:m relationship. Is it correct as I noted 1:m / 1:1 (marked red)?

    StrategyItemTimeLog and ItemTimeLog log values that both need to be retrieved together when retrieving a StrategyItem. The reason I separated them is that the first one is strategy-specific, and several strategies can reference the same item. So I thought not to duplicate those values that do not depend on the strategy, but only on the item. Hence I also dragged out the LogTime, as it seems to be the only parameter to unite the logs. But this all looks quite disturbing with those 3 tables. Does it make sense at all? Or do you have a suggestion?

    The pink circles show my vague attempt at aggregate root paths. I've been thinking in terms of "what entity is responsible for delete", though I'm unsure about the actual root. I think it's Category. Does it make sense related to the user requests described above?

  • JQuery form sticks with the ajax indicator on and won't submit

    - by Steven Buick
    Hi, I'm using jQuery 1.3 to validate and submit a form to a PHP page, which JSON-encodes a server response to display on the original form page. I've tried submitting the form without the jQuery part and everything seems to work fine, but when I add jQuery it doesn't submit and constantly displays the ajax indicator. Here's my code:

    ```javascript
    $(document).ready(function() {
        var options = {
            target: '#messagebox',
            url: 'updateregistration.php',
            type: 'POST',
            beforeSubmit: validatePassword,
            success: processJson,
            dataType: 'json'
        };

        $("form:not(.filter) :input:visible:enabled:first").focus();

        $("#webmailForm").validate({
            errorLabelContainer: "#messagebox",
            rules: {
                forename: "required",
                surname: "required",
                currentpassword: "required",
                directemail: {
                    required: true,
                    email: true
                },
                directtelephone: "required"
            },
            messages: {
                forename: {
                    required: "Please enter your forename"
                },
                directemail: {
                    required: "Please enter your direct e-mail address",
                    email: "Your e-mail address does not appear to be valid (Example: [email protected])"
                },
                surname: {
                    required: "Please enter your surname"
                },
                directtelephone: {
                    required: "Please enter your direct telephone number"
                },
                currentpassword: {
                    required: "Please enter your current password"
                }
            }
        });

        $('#webmailForm').submit(function() {
            $('#ajaxindicator').show();
            $(this).ajaxSubmit(options);
            return false;
        });
    });

    function processJson(data) {
        $("#webmailForm").fadeOut("fast");
        $("#messagebox").fadeIn("fast");
        $("#messagebox").css({
            'background-image': 'url(../images/messageboxbackgroundgreen.png)',
            'border-color': '#009900',
            'border-width': '1px',
            'border-style': 'solid'
        });

        var forename = data.forename;
        var surname = data.surname;
        var directemail = data.directemail;
        var directphone = data.directphone;
        var dateofbirth = data.dateofbirth;
        var companyname = data.companyname;
        var fulladdress = data.fulladdress;
        var telephone = data.telephone;
        var fax = data.fax;
        var email = data.email;
        var website = data.website;
        var fsanumber = data.fsanumber;
        var membertype = data.membertype;
        var network = data.network;

        $("#messagebox").html('<h3>Registration Update successful!</h3>' +
            '<p><strong>Member Type:</strong> ' + membertype + '<br>' +
            '<strong>Forename:</strong> ' + forename +
            '<br><strong>Surname:</strong> ' + surname +
            '<br><strong>Direct E-mail:</strong> ' + directemail +
            '<br><strong>Direct Phone:</strong> ' + directphone +
            '<br><strong>Date of Birth:</strong> ' + dateofbirth +
            '<br><strong>Company:</strong> ' + companyname +
            '<br><strong>Address:</strong> ' + fulladdress +
            '<br><strong>Telephone:</strong> ' + telephone +
            '<br><strong>Fax:</strong> ' + fax +
            '<br><strong>E-mail:</strong> ' + email +
            '<br><strong>Website:</strong> ' + website +
            '<br><strong>FSA Number:</strong> ' + fsanumber +
            '<br><strong>Network:</strong> ' + network + '</p>');

        $('#ajaxindicator').hide();
    }

    function validatePassword() {
        var clientpassword = $("#clientpassword").val();
        var currentpassword = $("#currentpassword").val();
        var currentpasswordmd5 = hex_md5(currentpassword);
        if (currentpasswordmd5 != clientpassword) {
            $("#messagebox").html("You input the wrong current password, please try again.");
            $('#ajaxindicator').hide();
            return false;
        }
    }
    ```

    I have a disabled textbox and some hidden ones. Could this be the problem?

  • How to figure out who owns a worker thread that is still running when my app exits?

    - by Dave
    Not long after upgrading to VS2010, my application won't shut down cleanly. If I close the app and then hit pause in the IDE, I see this:

    The problem is, there's no context. The call stack just says [External Code], which isn't too helpful. Here's what I've done so far to try to narrow down the problem:

    - deleted all extraneous plugins to minimize the number of worker threads launched
    - set breakpoints in my code anywhere I create worker threads (and delegates + BeginInvoke, since I think they are labeled "Worker Thread" in the debugger anyway). None were hit.
    - set IsBackground = true for all threads

    While I could do the next brute-force step, which is to roll my code back to a point where this didn't happen and then look over all of the change logs, this isn't terribly efficient. Can anyone recommend a better way to figure this out, given the notable lack of information presented by the debugger? The only other things I can think of include:

    - read up on WinDbg and try to use it to stop any time a thread is started. At least, I thought that was possible... :)
    - comment out huge blocks of code until the app closes properly, then start uncommenting until it doesn't.

    UPDATE

    Perhaps this information will be of use. I decided to use WinDbg and attach to my application. I then closed it, switched to thread 0, and dumped the stack contents. Here's what I have:

    ```
    ThreadCount: 6
    UnstartedThread: 0
    BackgroundThread: 1
    PendingThread: 0
    DeadThread: 4
    Hosted Runtime: no

                                       PreEmptive GC Alloc            Lock
     ID  OSID  ThreadOBJ  State   GC   Context             Domain   Count  APT  Exception
    0  1 1c70  005a65c8   6020  Enabled 02dac6e0:02dad7f8  005a03c0   0    STA
    2  2 1b20  005b1980   b220  Enabled 00000000:00000000  005a03c0   0    MTA (Finalizer)
    XXXX 3     08504048  19820  Enabled 00000000:00000000  005a03c0   0    Ukn
    XXXX 4     08504540  19820  Enabled 00000000:00000000  005a03c0   0    Ukn
    XXXX 5     08516a90  19820  Enabled 00000000:00000000  005a03c0   0    Ukn
    XXXX 6     08517260  19820  Enabled 00000000:00000000  005a03c0   0    Ukn

    0:008> ~0s
    eax=c0674960 ebx=00000000 ecx=00000000 edx=00000000 esi=0040f320 edi=005a65c8
    eip=76c37e47 esp=0040f23c ebp=0040f258 iopl=0  nv up ei pl nz na po nc
    cs=0023 ss=002b ds=002b es=002b fs=0053 gs=002b             efl=00000202
    USER32!NtUserGetMessage+0x15:
    76c37e47 83c404 add esp,4

    0:000> !clrstack
    OS Thread Id: 0x1c70 (0)
    Child SP IP       Call Site
    0040f274 76c37e47 [InlinedCallFrame: 0040f274]
    0040f270 6baa8976 DomainBoundILStubClass.IL_STUB_PInvoke(System.Windows.Interop.MSG ByRef, System.Runtime.InteropServices.HandleRef, Int32, Int32)
    *** WARNING: Unable to verify checksum for C:\Windows\assembly\NativeImages_v4.0.30319_32\WindowsBase\d17606e813f01376bd0def23726ecc62\WindowsBase.ni.dll
    0040f274 6ba924c5 [InlinedCallFrame: 0040f274] MS.Win32.UnsafeNativeMethods.IntGetMessageW(System.Windows.Interop.MSG ByRef, System.Runtime.InteropServices.HandleRef, Int32, Int32)
    0040f2c4 6ba924c5 MS.Win32.UnsafeNativeMethods.GetMessageW(System.Windows.Interop.MSG ByRef, System.Runtime.InteropServices.HandleRef, Int32, Int32)
    0040f2dc 6ba8e5f8 System.Windows.Threading.Dispatcher.GetMessage(System.Windows.Interop.MSG ByRef, IntPtr, Int32, Int32)
    0040f318 6ba8d579 System.Windows.Threading.Dispatcher.PushFrameImpl(System.Windows.Threading.DispatcherFrame)
    0040f368 6ba8d2a1 System.Windows.Threading.Dispatcher.PushFrame(System.Windows.Threading.DispatcherFrame)
    0040f374 6ba7fba0 System.Windows.Threading.Dispatcher.Run()
    0040f380 62e6ccbb System.Windows.Application.RunDispatcher(System.Object)
    *** WARNING: Unable to verify checksum for C:\Windows\assembly\NativeImages_v4.0.30319_32\PresentationFramewo#\7f91eecda3ff7ce478146b6458580c98\PresentationFramework.ni.dll
    0040f38c 62e6c8ff System.Windows.Application.RunInternal(System.Windows.Window)
    0040f3b0 62e6c682 System.Windows.Application.Run(System.Windows.Window)
    0040f3c0 62e6c30b System.Windows.Application.Run()
    0040f3cc 001f00bc MyApplication.App.Main() [C:\code\trunk\MyApplication\obj\Debug\GeneratedInternalTypeHelper.g.cs @ 24]
    0040f608 66c421db [GCFrame: 0040f608]
    ```

    EDIT -- not sure if this helps, but the main thread's call stack looks like this:

    ```
    [Managed to Native Transition]
    > WindowsBase.dll!MS.Win32.UnsafeNativeMethods.GetMessageW(ref System.Windows.Interop.MSG msg, System.Runtime.InteropServices.HandleRef hWnd, int uMsgFilterMin, int uMsgFilterMax) + 0x15 bytes
    WindowsBase.dll!System.Windows.Threading.Dispatcher.GetMessage(ref System.Windows.Interop.MSG msg, System.IntPtr hwnd, int minMessage, int maxMessage) + 0x48 bytes
    WindowsBase.dll!System.Windows.Threading.Dispatcher.PushFrameImpl(System.Windows.Threading.DispatcherFrame frame = {System.Windows.Threading.DispatcherFrame}) + 0x85 bytes
    WindowsBase.dll!System.Windows.Threading.Dispatcher.PushFrame(System.Windows.Threading.DispatcherFrame frame) + 0x49 bytes
    WindowsBase.dll!System.Windows.Threading.Dispatcher.Run() + 0x4c bytes
    PresentationFramework.dll!System.Windows.Application.RunDispatcher(object ignore) + 0x17 bytes
    PresentationFramework.dll!System.Windows.Application.RunInternal(System.Windows.Window window) + 0x6f bytes
    PresentationFramework.dll!System.Windows.Application.Run(System.Windows.Window window) + 0x26 bytes
    PresentationFramework.dll!System.Windows.Application.Run() + 0x1b bytes
    ```

    I did a search on it and found some posts related to WPF GUIs hanging, and maybe that'll give me some more clues.
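
    One low-tech aid that may help narrow this down, sketched under the assumption that every thread-creation site can be touched: give each thread a name, so the IDE's Threads window (and WinDbg's thread list) shows an owner instead of an anonymous "Worker Thread". The names and the ProcessQueue body below are illustrative, not from the original app.

    ```csharp
    using System.Threading;

    static class PluginHostThreads
    {
        // Illustrative worker body; stands in for whatever a plugin does.
        static void ProcessQueue() { /* ... */ }

        static Thread StartNamedWorker()
        {
            var worker = new Thread(ProcessQueue)
            {
                // The name appears in the debugger, making a hung thread traceable.
                Name = "PluginHost.ProcessQueue",
                // Background threads will not keep the process alive on exit.
                IsBackground = true
            };
            worker.Start();
            return worker;
        }
    }
    ```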

  • Object validator - is this good design?

    - by neo2862
    I'm working on a project where the API methods I write have to return different "views" of domain objects, like this:

    ```csharp
    namespace View.Product
    {
        public class SearchResult : View
        {
            public string Name { get; set; }
            public decimal Price { get; set; }
        }

        public class Profile : View
        {
            public string Name { get; set; }
            public decimal Price { get; set; }

            [UseValidationRuleset("FreeText")]
            public string Description { get; set; }

            [SuppressValidation]
            public string Comment { get; set; }
        }
    }
    ```

    These are also the arguments of setter methods in the API, which have to be validated before storing them in the DB. I wrote an object validator that lets the user define validation rulesets in an XML file and checks whether an object conforms to those rules:

    ```csharp
    [Validatable]
    public class View
    {
        [SuppressValidation]
        public ValidationError[] ValidationErrors
        {
            get { return Validator.Validate(this); }
        }
    }

    public static class Validator
    {
        private static Dictionary<string, Ruleset> Rulesets;

        static Validator()
        {
            // read rulesets from xml
        }

        public static ValidationError[] Validate(object obj)
        {
            // check if obj is decorated with ValidatableAttribute
            // if not, return an empty array (successful validation)
            // iterate over the properties of obj
            // - if the property is decorated with SuppressValidationAttribute,
            //   continue
            // - if it is decorated with UseValidationRulesetAttribute,
            //   use the ruleset specified to call
            //   Validate(object value, string rulesetName, string fieldName)
            // - otherwise, get the name of the property using reflection and
            //   use that as the ruleset name
        }

        private static List<ValidationError> Validate(object obj, string fieldName, string rulesetName)
        {
            // check if the ruleset exists, if not, throw exception
            // call the ruleset's Validate method and return the results
        }
    }

    public class Ruleset
    {
        public Type Type { get; set; }
        public Rule[] Rules { get; set; }

        public List<ValidationError> Validate(object property, string propertyName)
        {
            // check if property is of type Type
            // if not, throw exception
            // iterate over the Rules and call their Validate methods
            // return a list of their return values
        }
    }

    public abstract class Rule
    {
        public Type Type { get; protected set; }
        public abstract ValidationError Validate(object value, string propertyName);
    }

    public class StringRegexRule : Rule
    {
        public string Regex { get; set; }

        public StringRegexRule()
        {
            Type = typeof(string);
        }

        public override ValidationError Validate(object value, string propertyName)
        {
            // see if Regex matches value and return
            // null or a ValidationError
        }
    }
    ```

    Phew... Thanks for reading all of this. I've already implemented it and it works nicely, and I'm planning to extend it to validate the contents of IEnumerable fields and other fields that are Validatable.

    What I'm particularly concerned about is that if no ruleset is specified, the validator tries to use the name of the property as the ruleset name. (If you don't want that behavior, you can use [SuppressValidation].) This makes the code much less cluttered (no need to use [UseValidationRuleset("something")] on every single property), but it somehow doesn't feel right. I can't decide if it's awful or awesome. What do you think? Any suggestions on the other parts of this design are welcome too. I'm not very experienced and I'm grateful for any help.

    Also, is "Validatable" a good name? To me it sounds pretty weird, but I'm not a native English speaker.
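
    To make the name-fallback behaviour concrete, here is a short usage sketch based only on the classes above; the property values and ruleset names are invented for illustration.

    ```csharp
    // Hypothetical usage of the validator described above: property names
    // double as ruleset names unless overridden or suppressed.
    var profile = new View.Product.Profile
    {
        Name = "Widget",              // no attribute: falls back to a ruleset named "Name"
        Price = 9.99m,                // no attribute: falls back to a ruleset named "Price"
        Description = "Free text...", // [UseValidationRuleset("FreeText")]: uses "FreeText"
        Comment = "anything goes"     // [SuppressValidation]: never validated
    };

    // Walks the properties via Validator.Validate(this), as defined on View.
    ValidationError[] errors = profile.ValidationErrors;
    ```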

  • RavenDB - Can I create/query an index, from an application that doesn't share the CLR objects that were persisted?

    - by Jaz Lalli
    This post goes some way to answering this question (I'll include the answer later), but I was hoping for some further details.

    We have a number of applications that each need to access/manipulate data from Raven in their own way. Data is only written via the main web application. Other apps include batch-style tasks, reporting, etc. In an attempt to keep each of these as de-coupled as possible, they are separate solutions. That being the case, how can I, from the reporting application, create indexes over the existing data, using my locally defined types?

    The answer from the linked question states:

    "As long as the structure of the classes you are deserializing into partially matches the structure of the data, it shouldn't make a difference. The RavenDB server doesn't care at all what classes you use in the client. You certainly could share a dll, or even share a portable dll if you are targeting a different platform. But you are correct that it is not necessary. However, you should be aware of the Raven-Clr-Type metadata value. The RavenDB client sets this when storing the original document. It is consumed back by the client to assist with deserialization, but it is not fully enforced."

    It's the first part of that that I wanted clarification on. Do the object graphs for the docs on the server and the types in my application have to match exactly? If the Click document on the server is

    ```json
    {
        "Visit": {
            "Version": "0",
            "Domain": "www.mydomain.com",
            "Page": "/index",
            "QueryString": "",
            "IPAddress": "127.0.0.1",
            "Guid": "10cb6886-cb5c-46f8-94ed-4b0d45a5e9ca",
            "MetaData": {
                "Version": "1",
                "CreatedDate": "2012-11-09T15:11:03.5669038Z",
                "UpdatedDate": "2012-11-09T15:11:03.5669038Z",
                "DeletedDate": null
            }
        },
        "ResultId": "Results/1",
        "ProductCode": "280",
        "MetaData": {
            "Version": "1",
            "CreatedDate": "2012-11-09T15:14:26.1332596Z",
            "UpdatedDate": "2012-11-09T15:14:26.1332596Z",
            "DeletedDate": null
        }
    }
    ```

    is it possible (and if so, how?) to create a Map index from my application, which defines the Click class as follows?

    ```csharp
    class Click
    {
        public Guid Guid { get; set; }
        public int ProductCode { get; set; }
        public DateTime CreatedDate { get; set; }
    }
    ```

    Or would my class have to look like this? (where the custom types are defined as a sub-set of the properties on the document above, with matching property names)

    ```csharp
    class Click
    {
        public Visit Visit { get; set; }
        public int ProductCode { get; set; }
        public MetaData MetaData { get; set; }
    }
    ```

    UPDATE

    Following on from the answer below, here's the code I managed to get working.

    Index:

    ```csharp
    public class Clicks_ByVisitGuidAndProductCode : AbstractIndexCreationTask
    {
        public override IndexDefinition CreateIndexDefinition()
        {
            return new IndexDefinition
            {
                Map = "from click in docs.Clicks select new { Guid = click.Visit.Guid, ProductCode = click.ProductCode, CreatedDate = click.MetaData.CreatedDate }",
                TransformResults = "results.Select(click => new { Guid = click.Visit.Guid, ProductCode = click.ProductCode, CreatedDate = click.MetaData.CreatedDate })"
            };
        }
    }
    ```

    Query:

    ```csharp
    var query = _documentSession.Query<ReportClick, Clicks_ByVisitGuidAndProductCode>()
        .Customize(x => x.WaitForNonStaleResultsAsOfNow())
        .Where(x => x.CreatedDate >= start.Date && x.CreatedDate < end.Date);
    ```

    where Click is:

    ```csharp
    public class Click
    {
        public Guid Guid { get; set; }
        public int ProductCode { get; set; }
        public DateTime CreatedDate { get; set; }
    }
    ```

    Many thanks @MattJohnson.

    Read the article

  • jQuery $.ajax response empty, but only in Chrome

    - by roguepixel
    I've exhausted every avenue of research to solve this one, so hopefully someone else will think of something I just didn't. Relatively straightforward setup: I have an html page with some javascript that makes an ajax request to a URL (in the same domain); the Java web app in the background does its stuff and returns a partial html page (no html, head or body tags, just the content) which should be inserted at a particular point in the page. All sounds pretty easy, and the code I have works in IE, Firefox and Safari, but not in Chrome. In Chrome the target element just ends up empty, and if I look at the resource request in Chrome's developer tools the response content is also empty. All very confusing; I've tried a myriad of things to solve it and I'm just out of ideas. Any help would be greatly appreciated. var container = $('#container'); $.ajax({ type: 'GET', url: '/path/to/local/url', data: data('parameters=value&another=value2'), dataType: 'html', cache: false, beforeSend: requestBefore, complete: requestComplete, success: requestSuccess, error: requestError }); function data(parameters) { var dictionary = {}; var pairs = parameters.split('&'); for (var i = 0; i < pairs.length; i++) { var keyValuePair = pairs[i].split('='); dictionary[keyValuePair[0]] = keyValuePair[1]; } return dictionary; } function requestBefore() { container.find('.message.error').hide(); container.prepend('<div class="modal"><div class="indicator">Loading...</div></div>'); } function requestComplete() { container.find('.modal').remove(); } function requestSuccess(response) { container.empty(); container.html(response); } function requestError(response) { if (response.status == 200 && response.responseText == 'OK') { requestSuccess(response); } else { container.find('.message.error').fadeIn('slow'); } } All of this is executed in a $(document).ready(function() {}); Cheers, Jim @Oleg - Additional information requested: an example of the response that the ajax call might receive. <p class="message error hidden">An unknown error occurred while trying to retrieve data, please try again shortly.</p> <div class="timeline"> <a class="icon shuttle-previous" rel="max_id=16470650733&page=1&q=something">Newer Data</a> <a class="icon shuttle-next" rel="max_id=16470650733&page=3&q=something">Older Data</a> </div> <ol class="social"> <li class="even"> <div class="avatar"> <img src="sphere_normal.gif"/> </div> <p> Some Content<br/> <span class="published">Jun 18, 2010 11:29:05 AM</span> - <a target="_blank" href="">Direct Link</a> </p> </li> <li class="odd"> <div class="avatar"> <img src="sphere_normal.gif"/> </div> <p> Some Content<br/> <span class="published">Jun 18, 2010 11:29:05 AM</span> - <a target="_blank" href="">Direct Link</a> </p> </li> </ol> <div class="timeline"> <a class="icon shuttle-previous" rel="max_id=16470650733&page=1&q=something">Newer Data</a> <a class="icon shuttle-next" rel="max_id=16470650733&page=3&q=something">Older Data</a> </div>

    Read the article

  • Applying Unity in dynamic menu

    - by Rajarshi
    I was going through Unity 2.0 to check if it has an effective use in our new application. My application is a Windows Forms application and currently uses a traditional bar menu (at the top). My UIs (Windows Forms) more or less support the Dependency Injection pattern, since they all work with a class (Presentation Model Class) supplied to them via the constructor. The form then binds to the properties of the supplied P Model class and calls methods on the P Model class to perform its duties. Pretty simple and straightforward. How the P Model reacts to the UI actions and responds to them by co-ordinating with the Domain Class (Business Logic/Model) is irrelevant here and thus not mentioned. The object creation sequence to show one UI from the menu then goes like this - Create Business Model instance Create Presentation Model instance with the Business Model instance passed to the P Model constructor. Create UI instance with the Presentation Model instance passed to the UI constructor. My present solution: To show a UI with the method above from my menu, I would have to reference all assemblies (Business, PModel, UI) from my Menu class. Considering I have split the modules into a number of physical assemblies, it would be a difficult task to add references to about 60 different assemblies. Also the approach is not very scalable, since I would certainly need to release more modules, and with this approach I would have to change the source code every time I release a new module. So, primarily to avoid referencing so many assemblies from my Menu class (assembly), I did as below - Stored all the dependencies described above in a database table (SQL Server), e.g. ModuleShortCode | BModelAssembly | BModelFullTypeName | PModelAssembly | PModelFullTypeName | UIAssembly | UIFullTypeName Now used a static class named "Launcher" with a method "Launch" as below - Launcher.Launch("Discount") Launcher.Launch("Customers") The Launcher internally uses data from the dependency table and uses Activator.CreateInstance() to create each of the objects, and uses each instance as a constructor parameter for the next object being created, till the UI is built. The UI is then shown as a modal dialog. The code inside Launcher is somewhat like - Form frm = ResolveForm("Discount"); frm.ShowDialog(); The ResolveForm does the trick of building the chain of objects. Can Unity help me here? When I did all that I did not have enough information on Unity, and now that I have studied Unity I think I have been doing more or less the same thing. So I tried to replace my code with Unity. However, as soon as I started I hit a block. If I try to resolve UI forms in my Menu as Form customers = myUnityContainer.Resolve<Customers>(); or Form customers = myUnityContainer.Resolve(typeof(Customers)); then either way I need to reference my UI assembly from my Menu assembly, since the target type "Customers" needs to be known for Unity to resolve it. So I am back to the same place, since I would have to reference all UI assemblies from the Menu assembly. I understand that with Unity I would have to reference fewer assemblies (only the UI assemblies), but those references are still needed, which defeats my objectives below - Create the chain of objects dynamically without any assembly reference from the Menu assembly. This is to avoid the Menu source code changing every time I release a new module. My Menu also is built dynamically from a table. Be able to supply new modules just by supplying the new assemblies and inserting the new Dependency row in the table by a database patch.
At this stage, I have a feeling that I have to do it the way I was doing it, i.e. Activator.CreateInstance(), to fulfil all my objectives. I need to verify whether the community thinks the same way as me or has a better suggestion to solve the problem. The post is really long, and I sincerely thank you if you have read this far. Waiting for your valuable suggestions. Rajarshi
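    For what it's worth, Unity can still participate in this table-driven design if the launcher resolves a Type loaded by name at runtime. A hedged sketch follows; ModuleRow and LoadDependencyRow are hypothetical stand-ins for reading the dependency table described above, and it assumes each class in the chain has exactly the constructor shape the post describes:

        using System;
        using System.Windows.Forms;
        using Microsoft.Practices.Unity;

        // Hypothetical row shape for the dependency table.
        public class ModuleRow
        {
            public string UIAssembly { get; set; }
            public string UIFullTypeName { get; set; }
        }

        public Form ResolveForm(string moduleShortCode)
        {
            ModuleRow row = LoadDependencyRow(moduleShortCode); // hypothetical table lookup

            // "FullTypeName, AssemblyName" makes the runtime load the module assembly
            // on demand, so the Menu assembly keeps zero compile-time module references.
            Type uiType = Type.GetType(row.UIFullTypeName + ", " + row.UIAssembly, true);

            // Unity resolves unregistered concrete types by picking the constructor with
            // the most parameters and recursively resolving each parameter, so the
            // UI -> PModel -> BModel chain is built without hand-rolled Activator calls.
            IUnityContainer container = new UnityContainer();
            return (Form)container.Resolve(uiType);
        }

    Whether this beats Activator.CreateInstance() depends on what else the container is asked to do (lifetime management, interception, named registrations); for pure chain construction the two approaches are roughly equivalent, which matches the feeling expressed above.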

    Read the article

  • Learning to implement DIC in MVC

    - by Tom
    I am learning to apply a DI container (DIC) to an MVC project. So, I have sketched this DDD-ish DIC-ready-ish layout to my best understanding. I have read many blogs, articles and wikis over the last few days. However, I am not confident about implementing it correctly. Could you please demonstrate to me how to put them into a DIC the proper way? I prefer Ninject or Windsor after all the reading, but any DIC will do as long as I can get the correct idea of how to do it. Web controller... public class AccountBriefingController { //create private IAccountServices accountServices { get; set; } public AccountBriefingController(IAccountServices accsrv) { accountServices = accsrv; } //do work public ActionResult AccountBriefing(string userid, int days) { //get days of transaction records for this user BriefingViewModel model = accountServices.GetBriefing(userid, days); return View(model); } } View model ... public class BriefingViewModel { //from user repository public string UserId { get; set; } public string AccountNumber {get; set;} public string FirstName { get; set; } public string LastName { get; set; } //from account repository public string Credits { get; set; } public List<string> Transactions { get; set; } } Service layer ... public interface IAccountServices { BriefingViewModel GetBriefing(string userid, int days); } public class AccountServices : IAccountServices { //create private IUserRepository userRepo {get; set;} private IAccountRepository accRepo {get; set;} public AccountServices(IUserRepository ur, IAccountRepository ar) { userRepo = ur; accRepo = ar; } //do work public BriefingViewModel GetBriefing(string userid, int days) { var model = new BriefingViewModel(); //<---is that okay to new a model here?? var user = userRepo.GetUser(userid); if(user != null) { model.UserId = userid; model.AccountNumber = user.AccountNumber; model.FirstName = user.FirstName; model.LastName = user.LastName; //account records model.Credits = accRepo.GetUserCredits(userid).ToString(); model.Transactions = accRepo.GetUserTransactions(userid, days); } return model; } } Domain layer and data models... public interface IUserRepository { UserDataModel GetUser(string userid); } public interface IAccountRepository { List<string> GetUserTransactions(string userid, int days); int GetUserCredits(string userid); } // Entity Framework DBContext goes under here Please point out if my implementation is wrong, e.g. I can feel that in AccountServices.GetBriefing the new BriefingViewModel() seems wrong to me, but I don't know how to fit that piece into the DIC. Thank you very much for your help!
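    A minimal composition-root sketch with Ninject (one of the two containers named above). The bindings follow the layout in the post; UserRepository and AccountRepository are assumed to be the (not shown) Entity Framework implementations of the two repository interfaces:

        using Ninject;
        using Ninject.Modules;

        public class ServiceModule : NinjectModule
        {
            public override void Load()
            {
                // Each abstraction maps to one concrete type; the container then
                // satisfies AccountBriefingController -> AccountServices -> repositories.
                Bind<IAccountServices>().To<AccountServices>();
                Bind<IUserRepository>().To<UserRepository>();
                Bind<IAccountRepository>().To<AccountRepository>();
            }
        }

        // At application start-up, build the kernel once:
        //   var kernel = new StandardKernel(new ServiceModule());
        // then let MVC create controllers through it, e.g. via the Ninject MVC
        // extension's NinjectHttpApplication or a custom controller factory.

    As for newing up the view model inside the service: that is generally fine. View models are plain data carriers, not services, so they don't need to come out of the container; the container is for resolving behavior (services, repositories), not DTOs.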

    Read the article

  • Getting the constructor of an Interface Type through reflection?

    - by Will Marcouiller
    I have written a generic type, IDirectorySource<T> where T : IDirectoryEntry, which I'm using to manage Active Directory entries through my interface objects: IGroup, IOrganizationalUnit, IUser. That way I can write the following: IDirectorySource<IGroup> groups = new DirectorySource<IGroup>(); // Where IGroup implements IDirectoryEntry, of course. foreach (IGroup g in groups.ToList()) { listView1.Items.Add(g.Name).SubItems.Add(g.Description); } From the IDirectorySource<T>.ToList() method, I use reflection to find out the appropriate constructor for the type parameter T. However, since T is given an interface type, it cannot find any constructor at all! Of course, I have an internal class Group : IGroup which implements the IGroup interface. No matter how hard I have tried, I can't figure out how to get the constructor of my implementing class through the interface. [DirectorySchemaAttribute("group")] public interface IGroup { } internal class Group : IGroup { internal Group(DirectoryEntry entry) { NativeEntry = entry; Domain = NativeEntry.Path; } // Implementing IGroup interface... } Within the ToList() method of my IDirectorySource<T> interface implementation, I look for the constructor of T as follows: internal class DirectorySource<T> : IDirectorySource<T> where T : IDirectoryEntry { // Implementing properties... // Methods implementations... public IList<T> ToList() { Type t = typeof(T); // Let's assume we're always working with the IGroup interface as T here to keep it simple. // So, my DirectorySchema property is already set to "group". // My DirectorySearcher is already instantiated here, as I do it within the DirectorySource<T> constructor. Searcher.Filter = string.Format("(&(objectClass={0}))", DirectorySchema); ConstructorInfo ctor = null; ParameterInfo[] ctorParams = null; // This is where I get stuck for now... Please see the helper method. GetConstructor(out ctor, out ctorParams, new Type[] { typeof(DirectoryEntry) }); SearchResultCollection results = null; try { results = Searcher.FindAll(); } catch (DirectoryServicesCOMException ex) { // Handling exception here... } var entities = new List<T>(); foreach (SearchResult entry in results) entities.Add((T)ctor.Invoke(new object[] { entry.GetDirectoryEntry() })); return entities; } private void GetConstructor(out ConstructorInfo constructor, out ParameterInfo[] parameters, Type[] paramsTypes) { Type t = typeof(T); ConstructorInfo[] ctors = t.GetConstructors(BindingFlags.Instance | BindingFlags.Public | BindingFlags.NonPublic); foreach (ConstructorInfo c in ctors) { parameters = c.GetParameters(); if (parameters.GetLength(0) == paramsTypes.GetLength(0)) { bool found = true; for (int index = 0; index < parameters.GetLength(0); ++index) { if (parameters[index].ParameterType != paramsTypes[index]) found = false; } if (found) { constructor = c; return; } } } // Processing constructor not found message here... } } My problem is that T will always be an interface, so it never finds a constructor. Might somebody guide me to the right path to follow in this situation?
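    Since typeof(T) here is always the interface, and interfaces never have constructors, the lookup has to be redirected to the implementing class first. One hedged way to do that without exposing the internal implementation types is a mapping attribute; the [ImplementedBy] attribute below is invented for this sketch, not an existing BCL type:

        using System;
        using System.DirectoryServices;
        using System.Reflection;

        // Hypothetical mapping attribute: [ImplementedBy(typeof(Group))] goes on IGroup.
        [AttributeUsage(AttributeTargets.Interface)]
        public sealed class ImplementedByAttribute : Attribute
        {
            public Type ImplementationType { get; private set; }
            public ImplementedByAttribute(Type implementationType)
            {
                ImplementationType = implementationType;
            }
        }

        // Declared inside DirectorySource<T>, so typeof(T) is the decorated interface:
        private static ConstructorInfo GetEntryConstructor()
        {
            var map = (ImplementedByAttribute)Attribute.GetCustomAttribute(
                typeof(T), typeof(ImplementedByAttribute));
            if (map == null)
                throw new InvalidOperationException(
                    typeof(T).Name + " has no [ImplementedBy] mapping.");

            // NonPublic is needed because the implementing constructor is internal.
            return map.ImplementationType.GetConstructor(
                BindingFlags.Instance | BindingFlags.Public | BindingFlags.NonPublic,
                null, new[] { typeof(DirectoryEntry) }, null);
        }

    With the attribute in place, ToList() can call GetEntryConstructor() once and invoke it per SearchResult exactly as before; the only change is that reflection now runs against Group instead of the constructor-less IGroup.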

    Read the article

  • checking virtual sub domains

    - by Persian.
    I created a project that checks the subdomain and redirects to the existing subdomain (username), but I can't work out why it can't show the page even when the username is in the database. On my local system it works fine, but when I upload it to the server it doesn't work. Of course, for testing I swapped the commented lines below, but it's still not working; it shows this error: Object reference not set to an instance of an object. This is my code in Page_Load: //Uri MyUrl = new Uri(Request.Url.ToString()); //string Url = MyUrl.Host.ToString(); Uri MyUrl = new Uri("http://Subdomain.Mydomain.com/"); string Url = MyUrl.Host.ToString(); string St1 = Url.Split('.')[0]; if ((St1.ToLower() == "mydomain") || (St1.ToLower() == "mydomain")) { Response.Redirect("Intro.aspx"); } else if (St1.ToLower() == "www") { string St2 = Url.Split('.')[1]; if ((St2.ToLower() == "mydomain") || (St2.ToLower() == "mydomain")) { Response.Redirect("Intro.aspx"); } else { object Blogger = ClsPublic.GetBlogger(St2); if (Blogger != null) { lblBloger.Text = Blogger.ToString(); if (Request.QueryString["id"] != null) { GvImage.DataSourceID = "SqlDataSourceImageId"; GvComments.DataSourceID = "SqlDataSourceCommentsId"; this.BindItemsList(); GetSubComments(); } else { SqlConnection scn = new SqlConnection(ClsPublic.GetConnectionString()); SqlCommand scm = new SqlCommand("SELECT TOP (1) fId FROM tblImages WHERE (fxAccepted = 1) AND (fBloging = 1) AND (fxSender = @fxSender) ORDER BY fId DESC", scn); scm.Parameters.AddWithValue("@fxSender", lblBloger.Text); scn.Open(); lblLastNo.Text = scm.ExecuteScalar().ToString(); scn.Close(); GvImage.DataSourceID = "SqlDataSourceLastImage"; GvComments.DataSourceID = "SqlDataSourceCommentsWId"; this.BindItemsList(); GetSubComments(); } if (Session["User"] != null) { MultiViewCommenting.ActiveViewIndex = 0; } else { MultiViewCommenting.ActiveViewIndex = 1; } } else { Response.Redirect("Intro.aspx"); } } } else { object Blogger = ClsPublic.GetBlogger(St1); if (Blogger != null) { lblBloger.Text = Blogger.ToString(); if (Request.QueryString["id"] != null) { GvImage.DataSourceID = "SqlDataSourceImageId"; GvComments.DataSourceID = "SqlDataSourceCommentsId"; this.BindItemsList(); GetSubComments(); } else { SqlConnection scn = new SqlConnection(ClsPublic.GetConnectionString()); SqlCommand scm = new SqlCommand("SELECT TOP (1) fId FROM tblImages WHERE (fxAccepted = 1) AND (fBloging = 1) AND (fxSender = @fxSender) ORDER BY fId DESC", scn); scm.Parameters.AddWithValue("@fxSender", lblBloger.Text); scn.Open(); lblLastNo.Text = scm.ExecuteScalar().ToString(); scn.Close(); GvImage.DataSourceID = "SqlDataSourceLastImage"; GvComments.DataSourceID = "SqlDataSourceCommentsWId"; this.BindItemsList(); GetSubComments(); } if (Session["User"] != null) { MultiViewCommenting.ActiveViewIndex = 0; } else { MultiViewCommenting.ActiveViewIndex = 1; } } else { Response.Redirect("Intro.aspx"); } }
and my class: public static object GetBlogger(string User) { SqlConnection scn = new SqlConnection(ClsPublic.GetConnectionString()); SqlCommand scm = new SqlCommand("SELECT fUsername FROM tblMembers WHERE fUsername = @fUsername", scn); scm.Parameters.AddWithValue("@fUsername", User); scn.Open(); object Blogger = scm.ExecuteScalar(); if (Blogger != null) { SqlCommand sccm = new SqlCommand("SELECT COUNT(fId) AS Exp1 FROM tblImages WHERE (fxSender = @fxSender) AND (fxAccepted = 1)", scn); sccm.Parameters.AddWithValue("@fxSender", Blogger); object HasQuty = sccm.ExecuteScalar(); scn.Close(); if (HasQuty != null) { int Count = Int32.Parse(HasQuty.ToString()); if (Count < 10) { Blogger = null; } } } return Blogger; } Which part of my code has the problem?
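    One likely source of the reported NullReferenceException: ExecuteScalar() returns null when no row matches, so lblLastNo.Text = scm.ExecuteScalar().ToString() throws on the server whenever the user has no accepted images. Guarding it (object last = scm.ExecuteScalar(); lblLastNo.Text = last == null ? "0" : last.ToString();) would confirm or rule that out. Beyond that, the host parsing is easier to test if it lives in one helper; a sketch, where "mydomain" stands in for the real second-level domain name:

        // Returns the blogger subdomain, or null when the request is for the bare
        // domain or the www host, so the caller can redirect to Intro.aspx.
        private static string GetSubdomain(Uri url)
        {
            string[] parts = url.Host.ToLowerInvariant().Split('.');
            string first = (parts[0] == "www" && parts.Length > 1) ? parts[1] : parts[0];
            return first == "mydomain" ? null : first;
        }

        // Page_Load then reduces to:
        //   string sub = GetSubdomain(Request.Url);
        //   if (sub == null) { Response.Redirect("Intro.aspx"); return; }
        //   object blogger = ClsPublic.GetBlogger(sub);

    Note too that comparing ToLower() output against a mixed-case literal such as "Mydomain" can never match, which is why the comparisons use lowercase literals here.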

    Read the article

  • Form gets submitted Multiple times

    - by rasika vijay
    When I click inside the form, the form automatically tries to resubmit, and I cannot figure out which part of the code causes it to behave like this. In the createTable function, a table is created after the domain is given, but I am unable to select any of the controls in the output. I have attached the jsfiddle link here: http://jsfiddle.net/rasikaceg/S7kWM/ function createTable() { document.getElementById("table_container").innerHTML = ""; var input_domain = document.forms["form1"]["DomainName"].value; if (input_domain == null || input_domain == "") return; var table = document.createElement("table"), tablehead = document.createElement("thead"), theadrow = document.createElement("tr"), th1 = document.createElement("th"), th2 = document.createElement("th"), th3 = document.createElement("th"), th4 = document.createElement("th"); th1.appendChild(document.createTextNode("Website")); th2.appendChild(document.createTextNode("Enable/Disable Live Update for LM and CBD")); th3.appendChild(document.createTextNode("From Date")); th4.appendChild(document.createTextNode("To Date")); theadrow.appendChild(th1); theadrow.appendChild(th2); theadrow.appendChild(th3); theadrow.appendChild(th4); tablehead.appendChild(theadrow); table.appendChild(tablehead); var names = ["website1", "website2"]; var container = document.getElementById("table_container"); var tablebody = document.createElement("tbody"); for (var i = 0, len = names.length; i < len; ++i) { var row = document.createElement("tr"), column1 = document.createElement("td"), column2 = document.createElement("td"), column3 = document.createElement("td"), column4 = document.createElement("td"), checkbox = document.createElement('input'); checkbox.type = "checkbox"; checkbox.name = names[i]; checkbox.value = names[i]; checkbox.id = names[i]; var label = document.createElement('label'); label.htmlFor = names[i]; label.appendChild(document.createTextNode(names[i])); column1.appendChild(checkbox); column1.appendChild(label); var dropdown = document.createElement("select"); dropdown.name = names[i] + "_select"; var op1 = new Option(); op1.value = "enable"; op1.text = "enable"; var op2 = new Option(); op2.value = "disable"; op2.text = "disable"; dropdown.options.add(op1); dropdown.options.add(op2); column2.appendChild(dropdown); var datetime_from = document.createElement('input'); datetime_from.type = "datetime-local"; datetime_from.name = names[i] + "_from"; column3.appendChild(datetime_from); var datetime_to = document.createElement('input'); datetime_to.type = "datetime-local"; datetime_to.name = names[i] + "_to"; column4.appendChild(datetime_to); row.appendChild(column1); row.appendChild(column2); row.appendChild(column3); row.appendChild(column4); tablebody.appendChild(row); } table.appendChild(tablebody); document.getElementById("table_container").appendChild(table); }

    Read the article

  • What language/framework (technology) to use for website (flash games portal)

    - by cripox
    Hello, I know there are a lot of similar questions on the net, but because I am a newbie in web development I didn't find the solution for my specific problem. I am planning on creating a flash games portal from scratch. There is a big chance that there will be big traffic from the beginning (millions of pageviews). I want to reduce the server costs as much as possible, but at the same time not be tied to an expensive contract, as there is a chance that the project will not be as successful as I want, and in that case the money would be very little. The question is: what technology should I use? I don't know any web dev technology yet, so it doesn't matter what I will learn. My web dev experience is a little php 8 years ago, and since then I've programmed in C++ / Java - game and mobile development. I like Java and C syntax and the languages very much, and I tend to dislike dynamic typing and non-robust scripting (like PHP) - but I can get along if these are the best choices. The candidates are now: - Grails (my best for now) Ruby on Rails Cake PHP Other technologies (Google App Engine, Python/Django etc...) I was considering at first using pure C and compiling the web app on the server - just to squeeze more from the servers - but I soon understood that this is overkill. Next my eyes fell on Ruby, as there is a lot of buzz about its ease of use. Next I discovered Grails and looked at Java because it is said to be "faster". But I don't know what this "faster" really means for my needs, so here comes the first question: 1) What will be my biggest consumption on the server, other than bandwidth, for a lot of flash content requests? Is it memory? I heard that Java needs a lot of memory, but is faster. Is it CPU? I am planning to take some daily VPS.NET nodes at first, to see if there is demand, and if the "spike" is permanent, to move to a dedicated server (serverloft.com has some good offers), otherwise to remain with fewer nodes. I was also considering developing on Google App Engine - cheap or free hosting to use at first, so I can test my assumption, and also very easy to use (no need for sys administration) - but the costs become high with heavier use (3 million games played / month at x MB each). And the issue with Google is that it locks me into its technology. My other concern is scalability (not only for traffic/users, but also for adding functionality). My plans are to release a functional site in just 4 weeks (just a basic frontend and some quick basic backend, so I am able to modify some things and add games manually) - but then to grow it and add more things to it. I am planning to take a somewhat different approach than other portals, so I need to write it from scratch (a script will not do). 
I am learning the ropes and have been searching the web for about half a year, but I don't have any real practical experience, so I know that I must have some naive thinking and other issues that I don't know about yet. So please give me any advice you want regarding anything, not just the specific questions asked. And thank you so much for such a great community!
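    To make the cost comparison concrete, here is a rough worked estimate; the 5 MB average game size is an assumed placeholder (the post leaves it as "x MB"), so substitute a real figure:

        3,000,000 plays/month x 5 MB = 15,000,000 MB ~= 15,000 GB transferred per month
        at the quoted CDN rate of 5c/GB:      15,000 GB x $0.050  = $750 per month
        at the quoted 1.79 cents/GB:          15,000 GB x $0.0179 = $268.50 per month

    That is before counting the bandwidth already bundled with the server, and it suggests that at these volumes the per-GB transfer rate will matter more to cost than the choice between Grails, Rails and PHP.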

    Read the article

  • Processing Email in Outlook

    - by Daniel Moth
A. Why: Goal 1 = Help others: Have at most a 24-hour response turnaround to internal (from colleague) emails, typically achieving same day response. Goal 2 = Help projects: Not to implicitly pass/miss an opportunity to have impact on electronic discussions around any project on the radar. Not achieving goals 1 & 2 = Colleagues stop relying on you, drop you off conversations, don't see you as a contributing resource or someone that cares, you are perceived as someone with no peripheral vision. Note this is perfect if all you are doing is cruising at your job, trying to fly under the radar, with no ambitions of having impact beyond your absolute minimum 'day job'. B. DON'T: Leave unread email lurking around. Don't: Receive or process all incoming emails in a single folder ('inbox' or 'unread mail'). This is actually possible if you receive a small number of emails (e.g. new to the job, not working at a company like Microsoft). Even so, with (your future) success at any level (company, community) comes large incoming email, so learn to deal with it. With large volumes, it is best to let the system help you by doing some categorization and filtering on your behalf (instead of trying to do that in your head as you process the single folder). See later section on how to achieve this. Don't: Leave emails as 'unread' (or worse: read them, then mark them as unread). Often done by individuals who think they possess super powers ("I can mentally cache and distinguish between the emails I chose not to read, the ones that are actually new, and the ones I decided to revisit in the future; the fact that they all show up the same (bold = unread) does not confuse me"). Interactions with these super-powered individuals typically end up with them saying stuff like "I must have missed that email you are talking about (from 2 weeks ago)" or "I am a bit behind, so I haven't read your email, can you remind me". TIP: The only place where you are "allowed" unread email is in your Deleted Items folder. Don't: Interpret a read email as an email that has been processed. Doing that means you will always end up with fake unread email (that you have actually read, but haven't dealt with completely so you then marked it as unread) lurking between actual unread email. Another side effect is reading the email and making a 'mental' note to action it, then leaving the email as read, so the only thing left to remind you to carry out the action is… you. You are not superhuman, you will forget. This is a key distinction. Reading (or even scanning) a new email means you now know what needs to be done with it, in order for it to be truly considered processed. Truly processing an email is to, for example, write an email of your own (e.g. to reply or forward), or take a non-email related action (e.g. create calendar entry, do something on some website), or read it carefully to gain some knowledge (e.g. it had a spec as an attachment), or keep it around as reference etc. 'Reading' means that you know what to do, not that you have done it. An email that is read is an email that is triaged, not an email that is resolved. Sometimes the thing that needs to be done based on receiving the email is something you can (and want to) do immediately after reading the email. That is fine, you read the email and you processed it (typically when it takes no longer than X minutes, where X is your personal tolerance – mine is roughly 2 minutes). 
Other times, you decide that you don't want to spend X minutes at that moment, so after reading the email you need a quick system for "marking" the email as to be processed later (and you still leave it as 'read' in Outlook). See later section for how. C. DO: Use Outlook rules and have multiple folders where incoming email is automatically moved to. Outlook email rules are very powerful and easy to configure. Use them to automatically file email into folders. Here are mine (note that if a rule catches an email message then no further rules get processed): "personal" Email is either personal or business related. Almost all personal email goes to my gmail account. The personal emails that end up on my work email account go to a dedicated folder – that is achieved via a rule that looks at the email's 'From' field. For those that slip through, I use the new Outlook 2010 quick step "Conversation To Folder" to let the slippage only occur once per conversation, and then update my rules. "External" and "ViaBlog" The remaining external emails either come from my blog (rule on the subject line) or are unsolicited (rule on the domain name not being microsoft) and they are filed accordingly. "invites" I may do a separate blog post on calendar management, but suffice to say it should be kept up to date. All invite requests end up in this folder, so that even if mail gets out of control, the calendar can stay under control (only 1 folder to check). I.e. so I can let the organizer know why I won't be attending their meeting (or that I will be). Note: This folder is the only one that shows the total number of items in it, instead of the total unread. "Inbox" The only email that ends up here is email sent TO me and me only. Note that this is also the only email that shows up above the systray icon in the notification toast – all other emails cannot interrupt. "ToMe++" Email where I am on the TO line, but there are other recipients as well (on the TO or CC line). "CC" Email where I am on the CC line. I need to read these, but nobody is expecting a response or action from me so they are not as urgent (and if they are and follow up with me, they'll receive a link to this). "@ XYZ" Emails to aliases that are about projects that I directly work on (and I wasn't on the TO or CC line, of course). Test: these projects are in my commitments that I get measured on at the end of the year. "Z Mass" and subfolders under it per distribution list (DL) Emails to aliases that are about topics that I am interested in, but not that I formally own/contribute to. Test: if I unsubscribed from these aliases, nobody could rightfully complain. "Admin" folder, which resides under "Z Mass" folder Emails to aliases that I was added to, typically by an admin, e.g. broad emails to the floor/group/org/building/division/company that I am a member of. "BCC" folder, which resides under "Z Mass" Emails where I was not on the TO or the CC line explicitly and the alias it was sent to is not one I explicitly subscribed to (or I have been added to the BCC line, which I briefly touched on in another post). When there are only a few quick minutes to catch up on email, read as much as possible from these folders, in this order: Invites, Inbox, ToMe++. Only when these folders are all read (remember that doesn't mean that each email in them has been fully dealt with) can we move on to the @XYZ and then the CC folders. Only when those are read can we go on to the remaining folders. 
Note that the typical flow in the "Z Mass" subfolders is to scan subject lines and use the new Ctrl+Delete Outlook 2010 feature to ignore conversations. D. DO: Use Outlook Search folders in combination with categories. As you process each folder, when you open a new email (i.e. click on it and read it in the preview pane) the email becomes read and stays read, and you have to decide whether: It can take 2 minutes to deal with for good, right now, or It will take longer than 2 minutes, so it needs to be postponed with a clear next step, which is one of ToReply – there may be intermediate action steps, but ultimately someone else needs to receive email about this Action – no email is required, but I need to do something ReadLater – no email is required from the quick scan, but this is too long to fully read now, so it needs to be read later WaitingFor – the email is informing of an intermediate status and 'promising' a future email update. Need to track. SomedayMaybe – interesting but not important, non-urgent, non-time-bound information. I may want to spend part of one of my weekends reading it. For all these 'next steps' use Outlook categories (right click on the email and assign a category, or use a shortcut key). Note that I also use category 'WaitingFor' for email that I send where I am expecting a response and need to track it. Create a new search folder for each category (I dragged the search folders into my favorites at the top left of Outlook, above my inboxes). So after the activity of reading/triaging email in the normal folders (where the email arrived) is done, the result is a bunch of emails appearing in the search folders (configure them to show the total items, not the total unread items). To actually process email (anything that takes more than 2 minutes to deal with), process the search folders, starting with ToReply and Action. E. DO: Get into a Routine. Now you have a system in place, get into a routine of using it. Here is how I personally use mine, but this part I keep tweaking: Spend short bursts of time (between meetings, during boring but mandatory meetings and, in general, 2-4 times a day) aiming to have no unread emails (and in the process deal with some emails that take less than 2 minutes). Spend around 30 minutes at the end of each day processing the most urgent items in the search folders. Spend as long as it takes each Friday (or even the weekend) ensuring there is no unnecessary email baggage carried forward to the following week. F. Other resources: Official Outlook help on: Create custom actions rules, Manage e-mail messages with rules, creating a search folder. Video on ignoring conversations (Ctrl+Del). Official blog post on Quick Steps and in particular the Move Conversation to folder. If you've read "Getting Things Done" it is very obvious that my approach to email management is driven by GTD. A very similar approach was described previously by ScottHa (also influenced by GTD), worth reading here. He also described how he sets up 2 Outlook rules ('invites' and 'external') which I also use – worth reading that too. Comments about this post welcome at the original blog.

    Read the article
