Search Results

Search found 19635 results on 786 pages for 'materialized path pattern'.


  • Database and logic layer for ASP.NET MVC application

    - by Ismail
    I'm going to start a new project which is going to be small initially but may grow big over the years. I'm strongly convinced that I'm going to use ASP.NET MVC with jQuery for the UI. I want to go with MySQL as the database for a few reasons, but I'm worried about a few things. I have a good few years of experience working on SQL Server databases, and on one project I had a bad experience creating and managing stored procedures on a MySQL database. I'm totally new to LINQ, but I see that it is easier to use once you are familiar with it. The first thing is that accessing data should be easy. So I thought I should use LINQ against MySQL, but somewhere I read that this is not directly supported, although the MySQL .NET connector adds support for Entity Framework. I don't know the pros and cons of that. I would love to implement the repository pattern, as it allows applying filters in the logic layer rather than in the data access layer. Will that be possible if I use Entity Framework? I'm not clear on how I should go about all this, or whether I should just forget everything and use LINQ to SQL on SQL Server. I'm also concerned about performance: someone told me that Entity Framework fetches a lot of data and then filters it. Is that right? So the questions basically are:

    - Is MySQL with LINQ possible? If yes, where can I get more details on it?
    - What are the pros and cons of using Entity Framework with MySQL?
    - Will it be easy to access data using Entity Framework with MySQL?
    - Will I be able to implement the repository pattern, which allows applying filters in the logic layer rather than the data access layer (when I use Entity Framework with MySQL)?
    - Does it fetch a huge amount of data from the database and then apply the filter on it?

    If that sounds like too many questions, then just let me know what you, as a person experienced in this area, would do in this situation (with your reasoning); that would answer my question.
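
    A minimal sketch of the repository idea described above, assuming an Entity Framework context (MyDbContext, Customer and the Customers set are illustrative names, not part of the question). Because the repository returns IQueryable<T>, a Where clause added in the logic layer is still composed into the query before it runs, so the provider translates it to SQL and the database, not the application, does the filtering:

        using System.Linq;

        public class Customer
        {
            public int Id { get; set; }
            public int Age { get; set; }
        }

        public interface IRepository<T>
        {
            IQueryable<T> Query();
        }

        public class CustomerRepository : IRepository<Customer>
        {
            private readonly MyDbContext _context; // hypothetical EF context

            public CustomerRepository(MyDbContext context)
            {
                _context = context;
            }

            public IQueryable<Customer> Query()
            {
                return _context.Customers; // no filtering here
            }
        }

        // Logic layer: the filter composes onto the query before execution,
        // so only the matching rows are fetched from MySQL.
        //   var adults = repository.Query().Where(c => c.Age >= 18).ToList();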


  • actionscript-3: refactor interface inheritance to get rid of ambiguous reference error

    - by maxmc
    Hi! Imagine there are two interfaces arranged via the composite pattern, one of which has a dispose method among other methods: interface IComponent extends ILeaf { ... function dispose() : void; } interface ILeaf { ... } Some implementations have more things in common (say an id), so there are two more interfaces: interface ICommonLeaf extends ILeaf { function get id() : String; } interface ICommonComponent extends ICommonLeaf, IComponent { } So far so good. But there is another interface which also has a dispose method: interface ISomething { ... function dispose() : void; } and ISomething is inherited by ICommonLeaf: interface ICommonLeaf extends ILeaf, ISomething { function get id() : String; } As soon as the dispose method is invoked on an instance which implements the ICommonComponent interface, the compiler fails with an ambiguous reference error, because a dispose method lives in two different interfaces (IComponent and ISomething) within the inheritance tree of ICommonComponent. I wonder how to deal with this situation, given that IComponent, ILeaf and ISomething can't change, the composite structure must also work for the ICommonLeaf & ICommonComponent implementations, and ICommonLeaf & ICommonComponent must conform to the ISomething type. This might be an ActionScript 3 specific issue; I haven't tested how other languages (for instance Java) handle stuff like this.


  • Should a setter return immediately if assigned the same value?

    - by Andrei Rinea
    In classes that implement INotifyPropertyChanged I often see this pattern: public string FirstName { get { return _customer.FirstName; } set { if (value == _customer.FirstName) return; _customer.FirstName = value; base.OnPropertyChanged("FirstName"); } } Precisely the lines if (value == _customer.FirstName) return; are bothering me. I've often done this, but I'm not sure it's needed or good. After all, if a caller assigns the very same value, I don't want to reassign the field and, especially, notify my subscribers that the property has changed when, semantically, it didn't. Other than saving some CPU/RAM/etc. by freeing the UI from updating something that will probably look the same on the screen/whatever medium, what do we gain? Could some people force a refresh by reassigning the same value to a property (NOT THAT THIS WOULD BE A GOOD PRACTICE HOWEVER)? 1. Should we do it or shouldn't we? 2. Why?
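
    For field-backed properties, one common way to centralize the check is a protected helper on a base class that raises the event only on a real change and reports back whether anything changed (a sketch, not from the question; with a property-backed store like _customer.FirstName you would inline the same comparison, as the question's snippet does):

        using System.Collections.Generic;
        using System.ComponentModel;

        public abstract class ViewModelBase : INotifyPropertyChanged
        {
            public event PropertyChangedEventHandler PropertyChanged;

            protected void OnPropertyChanged(string name)
            {
                var handler = PropertyChanged;
                if (handler != null) handler(this, new PropertyChangedEventArgs(name));
            }

            // Raises PropertyChanged only on a real change; the return value
            // lets callers chain dependent updates.
            protected bool SetProperty<T>(ref T field, T value, string name)
            {
                if (EqualityComparer<T>.Default.Equals(field, value))
                    return false;
                field = value;
                OnPropertyChanged(name);
                return true;
            }
        }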


  • Testing for interface implementation in WCF/SOA

    - by rabidpebble
    I have a reporting service that implements a number of reports. Each report requires certain parameters. Groups of logically related parameters are placed in an interface, which the report then implements: [ServiceContract] [ServiceKnownType(typeof(ExampleReport))] public interface IService1 { [OperationContract] void Process(IReport report); } public interface IReport { string PrintedBy { get; set; } } public interface IApplicableDateRangeParameter { DateTime StartDate { get; set; } DateTime EndDate { get; set; } } [DataContract] public abstract class Report : IReport { [DataMember] public string PrintedBy { get; set; } } [DataContract] public class ExampleReport : Report, IApplicableDateRangeParameter { [DataMember] public DateTime StartDate { get; set; } [DataMember] public DateTime EndDate { get; set; } } The problem is that the WCF DataContractSerializer does not expose these interfaces in my client library, so I can't write the generic report-generating front-end that I'm planning. Can WCF expose these interfaces, or is this a limitation of the serializer? If the latter, what is the canonical approach to this OO pattern? I've looked into NetDataContractSerializer, but it doesn't seem to be an officially supported implementation (which means it's not an option in my project). Currently I've resigned myself to including the interfaces in a library that is shared between the service and the client application, but this seems like an unnecessary extra dependency to me. Surely there is a more straightforward way to do this? I was under the impression that WCF was supposed to replace .NET Remoting, and checking whether an object implements an interface seems like one of the most basic features required of a remoting framework.
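
    A sketch of the shared-assembly approach mentioned above, which is also the usual answer: generated proxies only reproduce data contracts, so the interfaces have to travel in a contracts assembly referenced by both sides. The client can then test a deserialized report for a parameter group (C# 3 style, matching the question's era; RenderParameters is an illustrative name):

        using System;

        // Contracts.dll is referenced by both the service and the client,
        // so the client can inspect a deserialized report directly:
        static void RenderParameters(IReport report)
        {
            var ranged = report as IApplicableDateRangeParameter;
            if (ranged != null)
            {
                // This report takes a date range; render those inputs.
                Console.WriteLine("{0:d} - {1:d}", ranged.StartDate, ranged.EndDate);
            }
        }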


  • Project management: Implementing custom errors in VS compilation process

    - by David Lively
    Like many architects, I've developed coding standards through years of experience which I expect my developers to adhere to. This is especially a problem with the crowd that believes that three or four years of experience makes you a senior-level developer. Approaching this as a training and code review issue has generated limited success. So, I was thinking that it would be great to be able to add custom compile-time errors to the build process to more strictly enforce this and other guidelines. For instance, we use stored procedures for ALL database access, which provides procedure-level security, db encapsulation (table structure is hidden from the app), and other benefits. (Note: I am not interested in starting a debate about this.) Some developers prefer inline SQL or parametrized queries, and that's fine - on their own time and their own projects. I'd like a way to add a compilation check that finds, say, anything that looks like string sql = "insert into some_table (col1,col2) values (@col1, @col2);" and generates an error or, in certain circumstances, a warning, with a message like: Inline SQL and parametrized queries are not permitted. Or, if they use the var keyword var x = new MyClass(); Variable definitions must be explicitly typed. Do Visual Studio and MSBuild provide a way to add this functionality? I'm thinking that I could use a regular expression to find unacceptable code and generate the correct error, but I'm not sure what, from a performance standpoint, is the best way to integrate this into the build process. We could add a pre- or post-build step to run a custom EXE, but how can I return line- and file-specific errors? Also, I'd like this to run after compilation of each file, rather than post-link. Is a regex the best way to perform this type of pattern matching, or should I go crazy and run the code through a C# parser, which would allow node-level validation via the parse tree? I'd appreciate suggestions and tales of prior experience.
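
    One hedged possibility for the pre-/post-build-step route: MSBuild and the Visual Studio Error List recognize console output in the canonical form "path(line,col): error CODE: message", so a custom checker EXE only needs to print its findings in that shape and exit non-zero to fail the build. A minimal regex-based sketch (the SQL pattern and the SQL9001 code are illustrative conventions, not shipped ones):

        using System;
        using System.IO;
        using System.Text.RegularExpressions;

        static class StyleChecker
        {
            static int Main(string[] args)
            {
                string root = args.Length > 0 ? args[0] : ".";

                // Naive detector for string literals starting with a SQL verb.
                var inlineSql = new Regex("\"\\s*(insert|select|update|delete)\\s",
                                          RegexOptions.IgnoreCase);
                int errorCount = 0;

                foreach (var file in Directory.GetFiles(root, "*.cs",
                                                        SearchOption.AllDirectories))
                {
                    var lines = File.ReadAllLines(file);
                    for (int i = 0; i < lines.Length; i++)
                    {
                        if (inlineSql.IsMatch(lines[i]))
                        {
                            // Canonical format: VS shows this in the Error List
                            // with click-through to the file and line.
                            Console.WriteLine(
                                "{0}({1},1): error SQL9001: Inline SQL and parametrized queries are not permitted.",
                                file, i + 1);
                            errorCount++;
                        }
                    }
                }
                return errorCount == 0 ? 0 : 1; // non-zero exit fails the build
            }
        }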


  • Is there a Scheduler/Calendar JS Widget library?

    - by Ravi Chhabra
    I am looking for some JavaScript-based component to be used as a course scheduler, which would be a cross between Google Calendar and the login time. I do not know if the right term for this is "course scheduler", but I shall describe it in more detail here. Course Scheduler: the widget would be used to enter the dates and times of a course. As an example, if I run a programming course 3 days a week on Mon, Tue and Wed, from 7:00 am to 9:00 am, 2 hours every day from 1st September to 30th November, I could answer various questions and the course data would be displayed in the calendar. It would also allow for non-pattern-based timings where each week is different from the other weeks, etc. Question: would I end up creating something from scratch? Would it be sensible to use the Google Calendar API for this? I did a Google search for some widgets, but I believe I need better keywords, as I could not find anything close to what I am looking for. Any tips? Commercial libraries would also work for me. Thanks.


  • How do I tell if an action is a lambda expression?

    - by Keith
    I am using the EventAggregator pattern to subscribe and publish events. If a user subscribes to the event using a lambda expression, they must use a strong reference, not a weak reference, otherwise the expression can be garbage collected before the publish executes. I wanted to add a simple check in the DelegateReference so that if a programmer passes in a lambda expression while using a weak reference, I throw an argument exception. This is to help "police" the code. Example:

        eventAggregator.GetEvent<RuleScheduler.JobExecutedEvent>().Subscribe
        (
            e => resetEvent.Set(),
            ThreadOption.PublisherThread,
            false,
            // filter event, only interested in the job that this object started
            e => e.Value1.JobDetail.Name == jobName
        );

        public DelegateReference(Delegate @delegate, bool keepReferenceAlive)
        {
            if (@delegate == null)
                throw new ArgumentNullException("delegate");

            if (keepReferenceAlive)
            {
                this._delegate = @delegate;
            }
            else
            {
                //TODO: throw exception if target is a lambda expression
                _weakReference = new WeakReference(@delegate.Target);
                _method = @delegate.Method;
                _delegateType = @delegate.GetType();
            }
        }

    Any ideas? I thought I could check for @delegate.Method.IsStatic, but I don't believe that works... (is every lambda expression compiled to a static method?)
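
    For what it's worth, a heuristic sketch rather than a guaranteed test: the C# compiler emits a lambda either as a static method (when it captures nothing) or as an instance method on a compiler-generated closure class, and that class carries [CompilerGenerated]. A check could look at both:

        using System;
        using System.Runtime.CompilerServices;

        static class LambdaDetection
        {
            // Heuristic: static lambdas have no target to keep alive anyway;
            // capturing lambdas live on a [CompilerGenerated] closure class.
            public static bool LooksCompilerGenerated(Delegate d)
            {
                return d.Method.IsStatic
                    || d.Method.DeclaringType.IsDefined(
                           typeof(CompilerGeneratedAttribute), false);
            }
        }

    This also flags anonymous delegates and polices by convention rather than giving an airtight guarantee, but that may be enough for a debug-time argument check.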


  • Setting javascript prototype function within object class declaration

    - by Tauren
    Normally, I've seen prototype functions declared outside the class definition, like this: function Container(param) { this.member = param; } Container.prototype.stamp = function (string) { return this.member + string; } var container1 = new Container('A'); alert(container1.member); alert(container1.stamp('X')); This code produces two alerts with the values "A" and "AX". I'd like to define the prototype function INSIDE of the class definition. Is there anything wrong with doing something like this? function Container(param) { this.member = param; if (!Container.prototype.stamp) { Container.prototype.stamp = function(string) { return this.member + string; } } } I was trying this so that I could access a private variable in the class. But I've discovered that if my prototype function references a private var, the value of the private var is always the value that was used when the prototype function was INITIALLY created, not the value in the object instance: Container = function(param) { this.member = param; var privateVar = param; if (!Container.prototype.stamp) { Container.prototype.stamp = function(string) { return privateVar + this.member + string; } } } var container1 = new Container('A'); var container2 = new Container('B'); alert(container1.stamp('X')); alert(container2.stamp('X')); This code produces two alerts with the values "AAX" and "ABX". I was hoping the output would be "AAX" and "BBX". I'm curious why this doesn't work, and whether there is some other pattern that I could use instead.


  • Should you declare methods using overloads or optional parameters in C# 4.0?

    - by Greg Beech
    I was watching Anders' talk about C# 4.0 and the sneak preview of C# 5.0, and it got me thinking: when optional parameters are available in C#, what is going to be the recommended way to declare methods that do not need all parameters specified? For example, something like the FileStream class has about fifteen different constructors, which can be divided into logical 'families', e.g. the ones below from a string, the ones from an IntPtr and the ones from a SafeFileHandle. FileStream(string,FileMode); FileStream(string,FileMode,FileAccess); FileStream(string,FileMode,FileAccess,FileShare); FileStream(string,FileMode,FileAccess,FileShare,int); FileStream(string,FileMode,FileAccess,FileShare,int,bool); It seems to me that this type of pattern could be simplified by having three constructors instead, and using optional parameters for the ones that can be defaulted, which would make the different families of constructors more distinct [note: I know this change will not be made in the BCL; I'm talking hypothetically for this type of situation]. What do you think? From C# 4.0, will it make more sense to make closely related groups of constructors and methods a single method with optional parameters, or is there a good reason to stick with the traditional many-overload mechanism?
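
    To make the trade-off concrete, a hedged sketch of what the string 'family' collapses to with C# 4.0 optional parameters (LogStream and the default values are illustrative, not the BCL's):

        using System.IO;

        public class LogStream
        {
            public LogStream(string path,
                             FileMode mode,
                             FileAccess access = FileAccess.ReadWrite,
                             FileShare share = FileShare.Read,
                             int bufferSize = 4096,
                             bool useAsync = false)
            {
                // one body replaces five chained overloads
            }
        }

        // Call sites name only what they need:
        //   new LogStream("app.log", FileMode.Append);
        //   new LogStream("app.log", FileMode.Append, bufferSize: 65536);

    One caveat worth weighing: optional-parameter defaults are baked into the call site at compile time, so changing a default later only affects recompiled callers, whereas overloads keep the default value inside the library.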


  • Resolve php "deadlock"

    - by Matt
    Hey all, I'm currently running into a situation that I guess would be called deadlock. I'm implementing a message service that calls various object methods; it's quite similar to the observer pattern. Here's what's going on in Dispatcher.php:

        <?
        class Dispatcher {
            ...
            public function message($name, $method) {
                // Find the object based on the name
                $object = $this->findObjectByName($name);
                // Slight pseudocode, for ease of example
                if ($this->not_initialized($object)) {
                    $object = new $object(); // This is where it locks up.
                }
                return $object->$method();
            }
            ...
        }

        class A {
            function __construct() {
                $Dispatcher->message("B", "getName");
            }
            public function getName() {
                return "Class A";
            }
        }

        class B {
            function __construct() {
                // Assume $Dispatcher is available to the classes
                $Dispatcher->message("A", "getName");
            }
            public function getName() {
                return "Class B";
            }
        }
        ?>

    It locks up when neither object is initialized. They just go back and forth messaging each other and neither one can ever be initialized. Any help would be immensely helpful. Thanks! Matt Mueller


  • Rolling back file moves, folder deletes and mysql queries

    - by Workoholic
    This has been bugging me all day and there is no end in sight. When the user of my PHP application adds a new update and something goes wrong, I need to be able to undo a complex batch of mixed commands. They can be MySQL update and insert queries, file deletes, and folder renamings and creations. I can track the status of all insert commands and undo them if an error is thrown. But how do I do this with the update statements? Is there a smart way (some design pattern?) to keep track of such changes, both in the file structure and the database? My database tables are MyISAM. It would be easy to just convert everything to InnoDB so that I can use transactions; that way I would only have to deal with the file and folder operations. Unfortunately, I cannot assume that all clients have InnoDB support. It would also require me to convert many tables in my database to InnoDB, which I am hesitant to do.
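
    One answer-shaped sketch of the command pattern with undo (written in C# for brevity; a PHP version would be structurally identical, and all names here are illustrative): run the steps in order and roll back the completed ones in reverse if any step fails. For UPDATE statements, the command would capture the previous row values with a SELECT before executing, so that Undo can restore them.

        using System.Collections.Generic;
        using System.IO;

        interface IUndoableCommand
        {
            void Execute();
            void Undo();
        }

        class MoveFileCommand : IUndoableCommand
        {
            private readonly string _from, _to;
            public MoveFileCommand(string from, string to) { _from = from; _to = to; }
            public void Execute() { File.Move(_from, _to); }
            public void Undo()    { File.Move(_to, _from); }
        }

        static class Batch
        {
            public static void Run(IEnumerable<IUndoableCommand> commands)
            {
                var done = new Stack<IUndoableCommand>();
                try
                {
                    foreach (var cmd in commands)
                    {
                        cmd.Execute();
                        done.Push(cmd);
                    }
                }
                catch
                {
                    // Undo completed steps in reverse order, then rethrow.
                    while (done.Count > 0)
                        done.Pop().Undo();
                    throw;
                }
            }
        }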


  • Processing command-line arguments in prefix notation in Python

    - by ejm
    I'm trying to parse a command line in Python which looks like the following:

        $ ./command -o option1 arg1 -o option2 arg2 arg3

    In other words, the command takes an unlimited number of arguments, and each argument may optionally be preceded by an -o option, which relates specifically to that argument. I think this is called a "prefix notation". In the Bourne shell I would do something like the following:

        while test -n "$1"
        do
            if test "$1" = '-o'
            then
                option="$2"
                shift 2
            fi
            # Work with $1 (the argument) and $option (the option)
            # ...
            shift
        done

    Looking around at the Bash tutorials, etc., this seems to be the accepted idiom, so I'm guessing Bash is optimized to work with command-line arguments this way. Trying to implement this pattern in Python, my first guess was to use pop(), as this is basically a stack operation. But I'm guessing this won't work as well in Python, because the list of arguments in sys.argv is in the wrong order and would have to be processed like a queue (i.e. pop from the left). I've read that lists are not optimized for use as queues in Python. So, my ideas are: convert argv to a collections.deque and use popleft(), reverse argv using reverse() and use pop(), or maybe just work with the int list indices themselves. Does anyone know of a better way to do this, or otherwise which of my ideas would be best practice in Python?


  • Is there a way to use Linq projections with extension methods

    - by Acoustic
    I'm trying to use AutoMapper and a repository pattern along with a fluent interface, and running into difficulty with the LINQ projection. For what it's worth, this code works fine when simply using in-memory objects. When using a database provider, however, it breaks when constructing the query graph. I've tried both SubSonic and LINQ to SQL with the same result. Thanks for your ideas. Here's an extension method used in all scenarios - it's the source of the problem, since everything works fine without using extension methods:

        public static IQueryable<MyUser> ByName(this IQueryable<MyUser> users, string firstName)
        {
            return from u in users
                   where u.FirstName == firstName
                   select u;
        }

    Here's the in-memory code that works fine:

        var userlist = new List<User> { new User { FirstName = "Test", LastName = "User" } };
        Mapper.CreateMap<User, MyUser>();
        var result = (from u in userlist
                      select Mapper.Map<User, MyUser>(u))
                     .AsQueryable()
                     .ByName("Test");
        foreach (var x in result)
        {
            Console.WriteLine(x.FirstName);
        }

    Here's the same thing using SubSonic (or LINQ to SQL, or whatever) that fails. This is what I'd like to make work somehow with extension methods...

        Mapper.CreateMap<User, MyUser>();
        var result = from u in new DataClasses1DataContext().Users
                     select Mapper.Map<User, MyUser>(u);
        var final = result.ByName("Test");
        foreach (var x in final) // Fails here when the query graph is built.
        {
            Console.WriteLine(x.FirstName);
        }

    The goal here is to avoid having to manually map the generated "User" object to the "MyUser" domain object - in other words, I'm trying to find a way to use AutoMapper so I don't have this kind of mapping code everywhere a database read operation is needed:

        var result = from u in new DataClasses1DataContext().Users
                     select new MyUser // Can this be avoided with AutoMapper AND extension methods?
                     {
                         FirstName = u.FirstName,
                         LastName = u.LastName
                     };
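
    A hedged workaround sketch: keep the filter on the entity IQueryable so the provider can translate it to SQL, and only map to the domain type after leaving queryable-land. This assumes a ByName variant written against IQueryable<User> rather than IQueryable<MyUser>:

        // Filter runs in the database...
        var users = new DataClasses1DataContext().Users.ByName("Test");

        // ...mapping runs in memory, where Mapper.Map is just a method call
        // instead of an expression the provider must translate to SQL.
        var result = users.AsEnumerable()
                          .Select(u => Mapper.Map<User, MyUser>(u));

        foreach (var x in result)
        {
            Console.WriteLine(x.FirstName);
        }

    The underlying issue is that the LINQ-to-SQL/SubSonic provider receives Mapper.Map as an opaque method call inside the expression tree and cannot convert it to SQL, which is why the in-memory version works but the database version fails while building the query graph.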


  • Use continue or Checked Exceptions when checking and processing objects

    - by Johan Pelgrim
    I'm processing, let's say, a list of "Document" objects. Before I record the processing of a document as successful, I first want to check a couple of things. Let's say the file referred to by the document should be present, and something in the document should be present. Just two simple checks for the example, but think of 8 more checks before I have successfully processed my document. What would your preference be?

        for (Document document : documents) {
            if (!fileIsPresent(document)) {
                doSomethingWithThisResult("File is not present");
                continue;
            }
            if (!isSomethingInTheDocumentPresent(document)) {
                doSomethingWithThisResult("Something is not in the document");
                continue;
            }
            doSomethingWithTheSucces();
        }

    Or:

        for (Document document : documents) {
            try {
                fileIsPresent(document);
                isSomethingInTheDocumentPresent(document);
                doSomethingWithTheSucces();
            } catch (ProcessingException e) {
                doSomethingWithTheExceptionalCase(e.getMessage());
            }
        }

        public boolean fileIsPresent(Document document) throws ProcessingException {
            ...
            throw new ProcessingException("File is not present");
        }

        public boolean isSomethingInTheDocumentPresent(Document document) throws ProcessingException {
            ...
            throw new ProcessingException("Something is not in the document");
        }

    Which is more readable? Which is best? Is there even a better approach (maybe using a design pattern of some sort)? As far as readability goes, my preference currently is the exception variant... What is yours?


  • Resolve php endless recursion issue

    - by Matt
    Hey all, I'm currently running into an endless recursion situation. I'm implementing a message service that calls various object methods; it's quite similar to the observer pattern. Here's what's going on in Dispatcher.php:

        class Dispatcher {
            ...
            public function message($name, $method) {
                // Find the object based on the name
                $object = $this->findObjectByName($name);
                // Slight pseudocode, for ease of example
                if ($this->not_initialized($object)) {
                    $object = new $object(); // This is where it locks up.
                }
                return $object->$method();
            }
            ...
        }

        class A {
            function __construct() {
                $Dispatcher->message("B", "getName");
            }
            public function getName() {
                return "Class A";
            }
        }

        class B {
            function __construct() {
                // Assume $Dispatcher is available to the classes
                $Dispatcher->message("A", "getName");
            }
            public function getName() {
                return "Class B";
            }
        }

    It locks up when neither object is initialized. They just go back and forth messaging each other and neither one can be initialized. I'm looking for some kind of queue implementation that will make messages wait for each other, one where the return values still get set, and I'm looking to have as little boilerplate code in class A and class B as possible. Any help would be immensely helpful. Thanks! Matt Mueller


  • JSF: how to temporarily disable validators to save a draft

    - by Swiety
    I have a pretty complex form with lots of inputs and validators. It takes the user a pretty long time (even over an hour) to complete it, so they would like to be able to save the draft data, even if it violates rules like mandatory fields not being filled in. I believe this problem is common to many web applications, but I can't find any well-recognised pattern for how it should be implemented. Can you please advise how to achieve that? For now I can see the following options: (1) Use immediate=true on the "Save draft" button. This doesn't work, as the UI data would not be stored on the bean, so I wouldn't be able to access it. Technically I could find the data in the UI component tree, but traversing that doesn't seem to be a good idea. (2) Remove all the field validation from the page and validate the data programmatically in the action listener defined for the form. Again, not a good idea; the form is really complex and there are plenty of fields, so validation implemented this way would be very messy. (3) Implement my own validators, controlled by some request attribute that would be set for a standard form submission (with full validation expected) and unset for a "save as draft" submission (when validation should be skipped). Again, not a good solution; I would need to provide my own wrappers for all the validators I am using. But as you can see, none of these is really reasonable. Is there really no simple solution to the problem?


  • REST web service keeps first POST parameters

    - by Diego
    I have a REST web service, designed with Java and deployed on Tomcat. This is the web service structure:

        @Path("Personas")
        public class Personas {

            @Context
            private UriInfo context;

            /**
             * Creates a new instance of ServiceResource
             */
            public Personas() {
            }

            @GET
            @Produces("text/html")
            public String consultarEdad(@QueryParam("nombre") String nombre) {
                ConectorCliente c = new ConectorCliente("root", "cafe.sql", "test");
                int edad = c.consultarEdad(nombre);
                if (edad == Integer.MIN_VALUE)
                    return "-1";
                return String.valueOf(edad);
            }

            @POST
            @Produces("text/html")
            public String insertarPersona(@QueryParam("nombre") String msg, @QueryParam("edad") int edad) {
                ConectorCliente c = new ConectorCliente("usr", "passwd", "dbname");
                c.agregar(msg, edad);
                return "listo";
            }
        }

    where ConectorCliente is the class that connects to MySQL and runs the queries. At first I tested this with the @GET method actually doing the POST's work: whatever the user entered in my JavaFX app went directly into the web service's database. However, I then changed it so the CREATE operation was performed through the web service responding to an actual POST HTTP request. Now, when I run the client and add some data, the parameters go in OK the first time, but the next time I input different parameters it inserts the same ones again. I've run this several times and I can't see the reason for it. This is the client's code:

        public class WebServicePersonasConsumer {

            private WebTarget webTarget;
            private Client client;
            private static final String BASE_URI = "http://localhost:8080/GetSomeRest/serviciosweb/";

            public WebServicePersonasConsumer() {
                client = javax.ws.rs.client.ClientBuilder.newClient();
                webTarget = client.target(BASE_URI).path("Personas");
            }

            public <T> T insertarPersona(Class<T> responseType, String nombre, String edad) throws ClientErrorException {
                String[] queryParamNames = new String[]{"nombre", "edad"};
                String[] queryParamValues = new String[]{nombre, edad};
                javax.ws.rs.core.Form form = getQueryOrFormParams(queryParamNames, queryParamValues);
                javax.ws.rs.core.MultivaluedMap<String, String> map = form.asMap();
                for (java.util.Map.Entry<String, java.util.List<String>> entry : map.entrySet()) {
                    java.util.List<String> list = entry.getValue();
                    String[] values = list.toArray(new String[list.size()]);
                    webTarget = webTarget.queryParam(entry.getKey(), (Object[]) values);
                }
                return webTarget.request().post(null, responseType);
            }

            public <T> T consultarEdad(Class<T> responseType, String nombre) throws ClientErrorException {
                String[] queryParamNames = new String[]{"nombre"};
                String[] queryParamValues = new String[]{nombre};
                javax.ws.rs.core.Form form = getQueryOrFormParams(queryParamNames, queryParamValues);
                javax.ws.rs.core.MultivaluedMap<String, String> map = form.asMap();
                for (java.util.Map.Entry<String, java.util.List<String>> entry : map.entrySet()) {
                    java.util.List<String> list = entry.getValue();
                    String[] values = list.toArray(new String[list.size()]);
                    webTarget = webTarget.queryParam(entry.getKey(), (Object[]) values);
                }
                return webTarget.request(javax.ws.rs.core.MediaType.TEXT_HTML).get(responseType);
            }

            private Form getQueryOrFormParams(String[] paramNames, String[] paramValues) {
                Form form = new javax.ws.rs.core.Form();
                for (int i = 0; i < paramNames.length; i++) {
                    if (paramValues[i] != null) {
                        form = form.param(paramNames[i], paramValues[i]);
                    }
                }
                return form;
            }

            public void close() {
                client.close();
            }
        }

    And this is the code in the JavaFX app that performs the operation:

        String nombre = nombreTextField.getText();
        String edad = edadTextField.getText();
        String insertToDatabase = consumidor.insertarPersona(String.class, nombre, edad);

    So, as the parameters are taken from TextFields, it's quite odd that the second, third, fourth and subsequent POSTs post the SAME values.


  • How To Edit XML File

    - by thebourneid
    I have a movie collection catalogue with local links to folders and files for easy access. Recently I reorganized my entire hard disk space, and I need to update the links; I'm trying to do that automatically with Perl. I can export the data to an XML file and import it again, and I can extract the new filepaths with the use of File::Find, but I'm stuck with two problems. I have no idea how to connect the $title from the new filepath with the corresponding $title from the XML file. And I'm dealing with such files for the first time, so I don't know how to proceed with the replacement process. Here is what I've done till now:

        use strict;
        use warnings;
        use File::Basename;
        use File::Find;
        use File::Spec;
        use XML::Simple;
        use Data::Dumper;

        my $dir_target = 'somepath';
        find(\&a, $dir_target);

        sub a {
            /\.iso$/ or return;
            my $fn = $File::Find::name;
            $fn =~ s/\//\\/g;
            $fn =~ /(.*\\)(.*)/;
            my $path = $1;
            my $filename = $2;
            my $title = (File::Spec->splitdir($fn))[2];
            $title =~ s/(.*?)\s\(\d+\)$/$1/;
            $title =~ s/~/:/;
            $title =~ s/`/?/;
            my $link_local = '<link><description>Folder</description><url>'
                . $path . '</url><urltype>Movie</urltype></link><link><description>'
                . $filename . '</description><url>' . $fn
                . '</url><urltype>Movie</urltype></link>' unless $title eq '';
            my $txt = 'somepath/log.txt';
            my $xml_in = XMLin('somepath/test.xml', ForceArray => 1, KeepRoot => 1);
            my $xml_out = XMLout($xml_in, OutputFile => 'somepath/test_out.xml', KeepRoot => 1);
            open F, ">>", $txt;
            print F $link_local . "\n\n";
            close F;
        }

    And here is a snippet of the data I need to edit. If the imdb and dvdempire links are found, do not touch them; if local links are found, replace them, otherwise insert them. I'm willing to complete the code myself but need some directions on how to proceed further. Thanks.

        <title>$title</title>
        .......
        <links>
            <link>
                <description>IMDB</description>
                <url>http://www.imdb.com/title/VARIABLE</url>
                <urltype>URL</urltype>
            </link>
            <link>
                <description>DVD Empire</description>
                <url>http://www.dvdempire.com/VARIABLE</url>
                <urltype>URL</urltype>
            </link>
            <link>
                <description>Folder</description>
                <url>OLD_FOLDERPATH</url>
                <urltype>Movie</urltype>
            </link>
            <link>
                <description>OLD_FILENAME</description>
                <url>OLD_FILENAMEPATH</url>
                <urltype>Movie</urltype>
            </link>
        </links>


  • Decoupling the view, presentation and ASP.NET Web Forms

    - by John Leidegren
    I have an ASP.NET Web Forms page which the presenter needs to populate with controls. This interaction is somewhat sensitive to the page life cycle, and I was wondering if there's a trick to it that I don't know about. I want to be practical about the whole thing but not compromise testability. Currently I have this: public interface ISomeContract { void InstantiateIn(System.Web.UI.Control container); } This contract has a dependency on System.Web.UI.Control, and I need that to be able to do things with the ASP.NET Web Forms programming model. But neither the view nor the presenter may have knowledge of ASP.NET server controls. How do I get around this? How can I work with the ASP.NET Web Forms programming model in my concrete views without taking a System.Web.UI.Control dependency in my contract assemblies? To clarify things a bit, this type of interface is all about UI composition (using MEF). It's known throughout the framework, but it's really only called from within the concrete view. The concrete view is still the only thing that knows about ASP.NET Web Forms. However, public methods like InstantiateIn(System.Web.UI.Control) exist in my contract assemblies, and that implies a dependency on ASP.NET Web Forms. I've been thinking about some double-dispatch mechanism, or even the visitor pattern, to try and work around this.
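
    One hedged sketch of the double-dispatch idea: put a small container abstraction in the contracts assembly and let only the Web Forms assembly know how to adapt it to System.Web.UI.Control (IControlContainer and WebFormsContainer are illustrative names, not from the question):

        // Contracts assembly - no reference to System.Web:
        public interface IControlContainer
        {
            void AddView(object view);
        }

        public interface ISomeContract
        {
            void InstantiateIn(IControlContainer container);
        }

        // Web Forms assembly - the only place that sees System.Web.UI:
        public class WebFormsContainer : IControlContainer
        {
            private readonly System.Web.UI.Control _control;

            public WebFormsContainer(System.Web.UI.Control control)
            {
                _control = control;
            }

            public void AddView(object view)
            {
                // Concrete views hand over real server controls; the cast
                // is the price of keeping the contract assembly clean.
                _control.Controls.Add((System.Web.UI.Control)view);
            }
        }

    The page-life-cycle sensitivity then reduces to calling InstantiateIn at the right moment, typically during Init, so that view state is wired up for the dynamically added controls.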


  • Add child to existing parent record in entity framework.

    - by Shawn Mclean
    The relationship between the parent and child is that they are connected by an edge; it is similar to a directed graph structure.

    DAL:

        public void SaveResource(Resource resource)
        {
            context.AddToResources(resource); // Should also add children.
            context.SaveChanges();
        }

        public Resource GetResource(int resourceId)
        {
            var resource = (from r in context.Resources
                                             .Include("ToEdges")
                                             .Include("FromEdges")
                            where r.ResourceId == resourceId
                            select r).SingleOrDefault();
            return resource;
        }

    Service:

        public void AddChildResource(int parentResourceId, Resource childResource)
        {
            Resource parentResource = repository.GetResource(parentResourceId);
            ResourceEdge inEdge = new ResourceEdge();
            inEdge.ToResource = childResource;
            parentResource.ToEdges.Add(inEdge);
            repository.SaveResource(parentResource);
        }

    Error:

        An object with the same key already exists in the ObjectStateManager. The existing object is in the Unchanged state. An object can only be added to the ObjectStateManager again if it is in the added state.

    I have been told the sequence for submitting a child to an already existing parent is: get parent, attach child to parent, submit parent. That is the sequence I used. The code above is extracted from an ASP.NET MVC 2 application using the repository pattern.
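
    A hedged guess at the fix, for the common case where GetResource and SaveResource share the same ObjectContext: the parent returned by GetResource is already attached in the Unchanged state, so calling AddToResources on it marks the whole graph, parent included, as Added, which is exactly what the error complains about. For an existing parent it is enough to modify the graph and save; a sketch (SaveExistingResource is an illustrative name):

        public void SaveExistingResource(Resource resource)
        {
            // 'resource' is already attached: the context is tracking it,
            // and the new ResourceEdge/child added to its collection is
            // detected as an insert on SaveChanges. Only call
            // AddToResources for brand-new, detached resources.
            context.SaveChanges();
        }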


  • Using git filter-branch to remove commits by their commit message

    - by machineghost
    In our repository we have a convention where every commit message starts with a certain pattern: Redmine #555: SOME_MESSAGE We also do a bit of rebasing to bring the potential release branch's changes into a specific issue's branch. In other words, I might have branch "foo-555", but before I merge it into branch "pre-release" I need to get any commits that pre-release has that foo-555 doesn't (so that foo-555 can fast-forward merge into pre-release). However, because pre-release sometimes changes, we sometimes wind up with situations where you bring in a commit from pre-release, but then that commit later gets removed from pre-release. It's easy to identify commits that came from pre-release, because the number in their commit message won't match the branch number; for instance, if I see "Redmine #123: ..." in my foo-555 branch, I know that it's not a commit from my branch. So now the question: I'd like to remove all of the commits that "don't belong" to a branch; in other words, any commit that (1) is in my foo-555 branch, but not in the pre-release branch (pre-release..foo-555), and (2) has a commit message that doesn't start with "Redmine #555" - but of course "555" will vary from branch to branch. Is there any way to use filter-branch (or any other tool) to accomplish this? Currently the only way I can see to do it is to do an interactive rebase ("git rebase -i") and manually remove all the "bad" commits.


  • Is it important to dispose SolidBrush and Pen?

    - by Joe
    I recently came across this VerticalLabel control on CodeProject. I notice that the OnPaint method creates but doesn't dispose Pen and SolidBrush objects. Does this matter, and if so, how can I demonstrate whatever problems it can cause? EDIT: This isn't a question about the IDisposable pattern in general. I understand that callers should normally call Dispose on any class that implements IDisposable. What I want to know is what problems (if any) can be expected when GDI+ objects are not disposed, as in the above example. It's clear that, in the linked example, OnPaint may be called many times before the garbage collector kicks in, so there's the potential to run out of handles. However, I suspect that GDI+ internally reuses handles in some circumstances (for example, if you use a pen of a specific color from the Pens class, it is cached and reused). What I'm trying to understand is whether code like that in the linked example will be able to get away with neglecting to call Dispose, and if not, to see a sample that demonstrates what problems it can cause. I should add that I have very often (including in the OnPaint documentation on MSDN) seen WinForms code samples that fail to dispose GDI+ objects.
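
    For reference, the conventional leak-free shape of OnPaint, as a sketch for a custom control (the drawing itself is illustrative):

        using System.Drawing;
        using System.Windows.Forms;

        public class VerticalLabelFixed : Control
        {
            protected override void OnPaint(PaintEventArgs e)
            {
                base.OnPaint(e);

                // Created per paint, so released per paint: 'using' frees the
                // underlying GDI handles deterministically instead of leaving
                // them to the finalizer thread.
                using (var pen = new Pen(Color.Red, 2f))
                using (var brush = new SolidBrush(ForeColor))
                {
                    e.Graphics.DrawRectangle(pen, ClientRectangle);
                    e.Graphics.DrawString(Text, Font, brush, 2f, 2f);
                }

                // Framework-cached objects (Pens.Red, Brushes.Black,
                // SystemBrushes.Control) are shared and must NOT be disposed.
            }
        }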


  • MKAnnotations are being made successfully, however they sometimes fail to render on MKMapView

    - by jtkendall
    I'm working on an iPhone app using the 3.1.3 SDK. My app finds the user's current location, displays it on an MKMapView, and then finds nearby locations and renders them as MKAnnotations. My code is working; however, sometimes the nearby annotations do not appear on the map. They are still being made, as I see the correct data in the console (from an NSLog which runs just after the annotations are made). When it fails is completely random: it could be the 5th time I've hit "Build and Run" for the day, or the 500th. It doesn't appear to have any pattern and doesn't throw any type of error; it just doesn't add the annotations to the MapView. This is the method called for each nearby location to add the MKAnnotation:

        - (void)addPinsWithLocation:(NSDictionary *)spot {
            CLLocationCoordinate2D location;
            location.longitude = [[spot objectForKey:@"spot_longitude"] doubleValue];
            location.latitude = [[spot objectForKey:@"spot_latitude"] doubleValue];
            PlaceMarks *placemark = [[PlaceMarks alloc] initWithCoordinate:location
                                                                     title:[spot objectForKey:@"spot_name"]
                                                                  subtitle:@""];
            NSLog(@"Adding Pin for Location: '%@' at %f, %f", [spot objectForKey:@"spot_name"], location.latitude, location.longitude);
            [mapView addAnnotation:placemark];
        }

    Any ideas on how to get MKAnnotations to always show?


  • Extending the .NET type system so the compiler enforces semantic meaning of primitive values in cert

    - by Drew Noakes
    I'm working with geometry a bit at the moment and am converting a lot between degrees and radians. Unfortunately, both of these are represented by double, so there's no compile-time warning/error if I try to pass a value in degrees where radians are expected. I believe F# has a compile-time solution for this (called units of measure). I'd like to do something similar in C#. As another example, imagine a SQL library that accepts various query parameters as strings. It'd be good to have a way of enforcing that only clean strings were allowed to be passed in at runtime, where the only way to get a clean string is to pass through some SQL-injection-prevention logic. The obvious solution is to wrap the double/string/whatever in a new type to give it the type information the compiler needs. I'm curious whether anyone has an alternative solution. If you do think wrapping is the only/best way, then please go into some of the downsides of the pattern (and any upsides I haven't mentioned too). I'm especially concerned about the performance impact of abstracted primitive numeric types on my calculations at runtime.
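
    A sketch of the wrapper approach under discussion, using small structs so the values stay on the stack (Degrees, Radians and Trig are illustrative names):

        using System;

        public struct Radians
        {
            public readonly double Value;
            public Radians(double value) { Value = value; }
        }

        public struct Degrees
        {
            public readonly double Value;
            public Degrees(double value) { Value = value; }

            public Radians ToRadians()
            {
                return new Radians(Value * Math.PI / 180.0);
            }
        }

        public static class Trig
        {
            // Passing a Degrees here is now a compile-time error.
            public static double Sin(Radians angle)
            {
                return Math.Sin(angle.Value);
            }
        }

    On the performance worry: a one-field struct involves no heap allocation, and in release builds the JIT can usually inline the constructor and field access, so the cost over a raw double is typically small; still, it's worth benchmarking the hot loops before committing to the pattern.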


  • How to receive Email in JEE application

    - by Hank
    Obviously it's not so difficult to send out emails from a JEE application via JavaMail. What I am interested in is the best pattern to receive emails (notification bounces, mostly). I am not interested in IMAP/POP3-based approaches (polling the inbox); my application must react to inbound emails. One approach I could think of would be:

    - Keep the existing MTA (Postfix on Linux in my case); the ops team already knows how to configure and operate it.
    - For every mail that arrives, spawn a Java app that receives the data and sends it off via JMS. I could do this via an entry in /etc/aliases like myuser: "|/path/to/javahelper", with javahelper calling the Java app, passing STDIN along.
    - An MDB (part of the JEE application) receives the JMS message, parses it, detects the bounce message and acts accordingly.

    Another approach could be:

    - Open a listening network socket on port 25 on the JEE application container.
    - Associate a SessionBean with the socket. The bean is part of the JEE application and can parse/detect bounces/handle the messages directly.
    - Keep the existing MTA as an inbound relay, let it do all its security/spam filtering, but have it forward emails to myuser (those that pass the filter) to the JEE application container, port 25.

    The first approach I have done before (albeit in a different language/setup). From a performance and (perceived) cleanliness point of view, I think the second approach is better, but it would require me to provide a proper SMTP transport implementation. Also, I don't know if it's at all possible to connect a network socket with a bean... What is your recommendation? Do you have details about the second approach?

