Search Results

Search found 16764 results on 671 pages for 'provider model'.

Page 100/671 | < Previous Page | 96 97 98 99 100 101 102 103 104 105 106 107  | Next Page >

  • How can I introspect properties and model fields in Django?

    - by shreddd
    I am trying to get a list of all existing model fields and properties for a given object. Is there a clean way to introspect an object so that I can get a dict of fields and properties?

        class MyModel(Model):
            url = models.TextField()

            def _get_location(self):
                return "%s/jobs/%d" % (self.url, self.id)
            location = property(_get_location)

    What I want is something that returns a dict that looks like this:

        { 'id': 1, 'url': 'http://foo', 'location': 'http://foo/jobs/1' }

    I can use model._meta.fields to get the model fields, but this doesn't give me things that are properties but not real DB fields.
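
    A minimal sketch of one way to do this (the helper name and the use of inspect are my own, not from the question):

        import inspect

        def fields_and_properties(instance):
            # Concrete DB fields, via Django's model metadata.
            data = dict((f.attname, getattr(instance, f.attname))
                        for f in instance._meta.fields)
            # Anything defined as a Python property on the class.
            for name, attr in inspect.getmembers(type(instance)):
                if isinstance(attr, property):
                    data[name] = getattr(instance, name)
            return data

    Called on the example above, this should yield {'id': 1, 'url': 'http://foo', 'location': 'http://foo/jobs/1'}: id and url come from _meta.fields, while location is picked up as a property object on the class.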

    Read the article

  • CQRS - Benefits

    - by Dylan Smith
    Thanks to all the comments and feedback from the last post, I think I have a better understanding now of the benefits of CQRS (separate from the benefits of Event Sourcing). I'm going to try and sum it up here, and point out some areas where I could still use some advice.

    CQRS Benefits

    Sounds like the primary benefit of CQRS as an architecture is that it allows you to create a simpler domain model by sucking out everything related to queries. I can definitely see the benefit of this: in general, the domain logic related to commands is the high-value behavior in the software, but the logic required to service the queries would add a lot of low-value "noise" to the domain model that would dilute the high-value (command) behavior – sorting, paging, filtering, pre-fetch paths, etc. Also, the most appropriate domain structure for implementing commands might not be the most optimal for implementing queries. To paraphrase Greg, this usually results in a domain model that is mediocre at both, piss-poor at one, or more likely piss-poor at both commands and queries.

    Not only will you be able to simplify your domain model by pulling out all the query logic, but at least a handful of commands in most systems will probably be "pass-through" type commands with little to no logic that just generate events. If these can be implemented directly in the command handler and never touch the domain model, this allows you to slim down the domain model even more.

    Also, if you were to do event sourcing without CQRS, you would no longer have a database containing the current state (only the domain model would), which makes it difficult (or impossible) to support the ad-hoc querying and/or reporting that is common in most business software.

    Of course CQRS provides some great scalability benefits. And not only scalability: I have to assume that it provides extremely low latency for most operations, especially if you have an asynchronous event bus. I know Greg says that you get a 3x scaling (Commands, Queries, Client) of your ability to perform parallel development, but IMHO it seems like it only provides 1.5x scaling, since even without CQRS you're going to have your client loosely coupled to your domain – which is still a great benefit to be able to realize.

    Questions / Concerns

    If all the queries against an aggregate get pulled out to the Query layer, what if the only commands for that aggregate can be handled in a "pass-through" manner, with the command handler directly generating events? Is it possible to have an aggregate that isn't modeled in the domain model? Are there any issues or downsides to this?

    I know in the feedback from my previous posts it was suggested that having one domain model handling both commands and queries requires implementing a lot of traversals between objects that wouldn't be necessary if it was only servicing commands. My question is: do you include traversals in your domain model based on the needs of the code, or based on the conceptual domain model? If none of my commands require a Customer.Orders traversal, but the conceptual domain includes the concept of a set of orders belonging to a customer – should I model that in my domain model or not?

    I like the idea of using the Query side of the architecture as a place to put junior devs where the risk of them screwing something up has minimal impact. But I'm not sold on the idea that you can actually outsource it. Like I said in one of my comments on my previous post, the code to handle a query and generate DTOs is going to be dead simple, but the code to process events and apply them to the tables on the query side is going to require a significant amount of domain knowledge to know which events to listen for to update each of the de-normalized tables (and what changes need to be made when each event is processed). I don't know about everybody else, but having outsourced developers do anything that requires significant domain knowledge has never been successful in my experience. And if you need to spec out for each new query which events to listen to and what to do with each one, well, that's probably going to be just as much work to document as it would be to just implement it.

    Greg made the point in a comment that doing an aggregate query like "Total Sales By Customer" is going to be inefficient if you use event sourcing but not CQRS. I don't understand why that would be the case. I imagine in that case you'd simply have a method/property on the Customer object that calculated total sales for that customer by enumerating over the Orders collection. Then the application services layer would generate DTOs off of the Customers collection that included, say, the CustomerID, CustomerName, and TotalSales, or whatever the case may be. As long as you use a snapshotting implementation, I don't see why that would be any more inefficient in a DDD+Event Sourcing implementation than in a typical DDD implementation.

    Like I mentioned in my last post, I still have some questions about query logic that haven't been answered yet, but before I start asking those I want to make sure I have a strong grasp on what benefits CQRS provides. My main concern with the query logic was that I know I could just toss it all into the query side, but I was concerned that I would be losing the benefits of using CQRS in the first place if I did that. I want to elaborate more on this with some example situations in an upcoming post.
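
    As a rough illustration of the "pass-through" idea discussed above, here is a sketch of a command handler that publishes an event directly without ever loading an aggregate. All of the type names (IEventBus, the command, the event) are invented for the example; they are not from the post:

        using System;

        public interface IEventBus
        {
            void Publish(object @event);
        }

        public class RenameCustomerCommand
        {
            public Guid CustomerId { get; set; }
            public string NewName { get; set; }
        }

        public class CustomerRenamed
        {
            public Guid CustomerId { get; private set; }
            public string NewName { get; private set; }

            public CustomerRenamed(Guid customerId, string newName)
            {
                CustomerId = customerId;
                NewName = newName;
            }
        }

        public class RenameCustomerHandler
        {
            private readonly IEventBus _bus;

            public RenameCustomerHandler(IEventBus bus) { _bus = bus; }

            public void Handle(RenameCustomerCommand cmd)
            {
                // No aggregate is loaded and no domain model is touched:
                // trivial validation, then the event goes straight onto the bus.
                if (string.IsNullOrEmpty(cmd.NewName))
                    throw new ArgumentException("Name is required");

                _bus.Publish(new CustomerRenamed(cmd.CustomerId, cmd.NewName));
            }
        }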

    Read the article

  • MVC: Why put the business logic in the model? What happens when I have multiple types of storage?

    - by Steffen Winkler
    I always thought that the business logic has to be in the controller, and that the controller, since it is the 'middle' part, stays static, while the model and view have to be encapsulated via interfaces. That way you could change the business logic without affecting anything else, program multiple models (one for each database or type of storage) and dozens of views (for different platforms, for example). Now I read in this question that you should always put the business logic into the model, and that the controller is deeply connected with the view. To me, that doesn't really make sense: it implies that each time I want to support another database or type of storage, I have to rewrite my whole model, including the business logic. And if I want another view, I have to rewrite both the view and the controller. Can someone explain why that is, or where I went wrong? Currently, the whole thing doesn't really make sense to me.
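
    A rough sketch (all names invented) of how "business logic in the model" coexists with multiple storage types: the rules live on the model class, and each storage technology supplies its own repository implementation, so swapping databases means writing another repository, not rewriting the model:

        using System;
        using System.Collections.Generic;

        public class Order
        {
            public int Id { get; set; }
            public decimal Total { get; set; }

            // Business rule: lives in the model, knows nothing about storage.
            public void ApplyDiscount(decimal percent)
            {
                if (percent < 0 || percent > 100)
                    throw new ArgumentOutOfRangeException("percent");
                Total -= Total * percent / 100m;
            }
        }

        public interface IOrderRepository
        {
            Order Load(int id);
            void Save(Order order);
        }

        // One implementation per storage technology; the model is untouched.
        public class InMemoryOrderRepository : IOrderRepository
        {
            private readonly Dictionary<int, Order> _store = new Dictionary<int, Order>();

            public Order Load(int id) { return _store[id]; }
            public void Save(Order order) { _store[order.Id] = order; }
        }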

    Read the article

  • StreamInsight 2.1, meet LINQ

    - by Roman Schindlauer
    Someone recently called LINQ "magic" in my hearing. I leapt to LINQ's defense immediately. Turns out some people don't realize "magic" can be a pejorative term. I thought LINQ needed demystification. Here's your best demystification resource: http://blogs.msdn.com/b/mattwar/archive/2008/11/18/linq-links.aspx. I won't repeat much of what Matt Warren says in his excellent series, but will talk about some core ideas and how they affect the 2.1 release of StreamInsight. Let's tell the story of a LINQ query.

    Compile time

    It begins with some code:

        IQueryable<Product> products = ...;
        var query = from p in products
                    where p.Name == "Widget"
                    select p.ProductID;
        foreach (int id in query) { ...

    When the code is compiled, the C# compiler (among other things) de-sugars the query expression (see C# spec section 7.16):

        var query = products.Where(p => p.Name == "Widget").Select(p => p.ProductID);

    Overload resolution subsequently binds the Queryable.Where<Product> and Queryable.Select<Product, int> extension methods (see C# spec sections 7.5 and 7.6.5). After overload resolution, the compiler knows something interesting about the anonymous functions (lambda syntax) in the de-sugared code: they must be converted to expression trees, i.e., "an object structure that represents the structure of the anonymous function itself" (see C# spec section 6.5). The conversion is equivalent to the following rewrite:

        var prm1 = Expression.Parameter(typeof(Product), "p");
        var prm2 = Expression.Parameter(typeof(Product), "p");
        var query = Queryable.Select<Product, int>(
            Queryable.Where<Product>(
                products,
                Expression.Lambda<Func<Product, bool>>(
                    Expression.Equal(Expression.Property(prm1, "Name"), Expression.Constant("Widget")), prm1)),
            Expression.Lambda<Func<Product, int>>(Expression.Property(prm2, "ProductID"), prm2));

    If the "products" expression had type IEnumerable<Product>, the compiler would have chosen the Enumerable.Where and Enumerable.Select extension methods instead, in which case the anonymous functions would have been converted to delegates. At this point, we've reduced the LINQ query to familiar code that will compile in C# 2.0. (Note that I'm using C# snippets to illustrate transformations that occur in the compiler, not to suggest a viable compiler design!)

    Runtime

    When the above program is executed, the Queryable.Where method is invoked. It takes two arguments. The first is an IQueryable<> instance that exposes an Expression property and a Provider property. The second is an expression tree. The Queryable.Where method implementation looks something like this:

        public static IQueryable<T> Where<T>(this IQueryable<T> source, Expression<Func<T, bool>> predicate)
        {
            // 'thisMethod' stands for the MethodInfo of this Where method.
            return source.Provider.CreateQuery<T>(
                Expression.Call(thisMethod, source.Expression, Expression.Quote(predicate)));
        }

    Notice that the method is really just composing a new expression tree that calls itself with arguments derived from the source and predicate arguments. Also notice that the query object returned from the method is associated with the same provider as the source query. By invoking operator methods, we're constructing an expression tree that describes a query. Interestingly, the compiler and operator methods are colluding to construct a query expression tree.
    The important takeaway is that expression trees are built in one of two ways: (1) by the compiler, when it sees an anonymous function that needs to be converted to an expression tree, and (2) by a query operator method that constructs a new queryable object with an expression tree rooted in a call to the operator method (self-referential).

    Next we hit the foreach block. At this point, the power of LINQ queries becomes apparent: the provider is able to determine how the query expression tree is evaluated! The code that began our story was intentionally vague about the definition of the "products" collection. Maybe it is a queryable in-memory collection of products:

        var products = new[]
            { new Product { Name = "Widget", ProductID = 1 } }.AsQueryable();

    The in-memory LINQ provider works by rewriting Queryable method calls to Enumerable method calls in the query expression tree. It then compiles the expression tree and evaluates it. It should be mentioned that the provider does not blindly rewrite all Queryable calls. It only rewrites a call when its arguments have been rewritten in a way that introduces a type mismatch, e.g. the first argument to Queryable.Where<Product> being rewritten as an expression of type IEnumerable<Product> from IQueryable<Product>. The type mismatch is triggered initially by a "leaf" expression like the one associated with the AsQueryable query: when the provider recognizes one of its own leaf expressions, it replaces the expression with the original IEnumerable<> constant expression. I like to think of this rewrite process as "type irritation", because the rewritten leaf expression is like a foreign body that triggers an immune response (further rewrites) in the tree. The technique ensures that only those portions of the expression tree constructed by a particular provider are rewritten by that provider: no type irritation, no rewrite.

    Let's consider the behavior of an alternative LINQ provider. If "products" is a collection created by a LINQ to SQL provider:

        var products = new NorthwindDataContext().Products;

    the provider rewrites the expression tree as a SQL query that is then evaluated by your favorite RDBMS. The predicate may ultimately be evaluated using an index! In this example, the expression associated with the Products property is the "leaf" expression.

    StreamInsight 2.1

    For the in-memory LINQ to Objects provider, a leaf is an in-memory collection. For LINQ to SQL, a leaf is a table or view. When defining a "process" in StreamInsight 2.1, what is a leaf? To StreamInsight, a leaf is logic: an adapter, a sequence, or even a query targeting an entirely different LINQ provider! How do we represent the logic? Remember that a standing query may outlive the client that provisioned it. A reference to a sequence object in the client application is therefore not terribly useful. But if we instead represent the code constructing the sequence as an expression, we can host the sequence in the server:

        using (var server = Server.Connect(...))
        {
            var app = server.Applications["my application"];
            var source = app.DefineObservable(() => Observable.Range(0, 10, Scheduler.NewThread));
            var query = from i in source where i % 2 == 0 select i;
        }

        Example 1: defining a source and composing a query

    Let's look in more detail at what's happening in example 1. We first connect to the remote server and retrieve an existing app. Next, we define a simple Reactive sequence using the Observable.Range method.
    Notice that the call to the Range method is in the body of an anonymous function. This is important because it means the source sequence definition is in the form of an expression, rather than simply an opaque reference to an IObservable<int> object. The variation in Example 2 fails. Although it looks similar, the sequence is now a reference to an in-memory observable collection:

        var local = Observable.Range(0, 10, Scheduler.NewThread);
        var source = app.DefineObservable(() => local); // can't serialize 'local'!

        Example 2: error referencing unserializable local object

    The Define* methods support definitions of operator tree leaves that target the StreamInsight server. These methods all have the same basic structure. The definition argument is a lambda expression taking between 0 and 16 arguments and returning a source or sink. The method returns a proxy for the source or sink that can then be used for the usual style of LINQ query composition. The "define" methods exploit the compile-time C# feature that converts anonymous functions into translatable expression trees! Query composition exploits the runtime pattern that allows expression trees to be constructed by operators taking queryable and expression (Expression<>) arguments.

    The practical upshot: once you've defined a source, you can compose LINQ queries in the familiar way using query expressions and operator combinators. Notably, queries can be composed using pull sequences (LINQ to Objects IQueryable<> inputs), push sequences (Reactive IQbservable<> inputs), and temporal sequences (StreamInsight IQStreamable<> inputs). You can even construct processes that span these three domains using "bridge" method overloads (ToEnumerable, ToObservable and To*Streamable).

    Finally, the targeted rewrite via the type irritation pattern is used to ensure that StreamInsight computations can leverage other LINQ providers as well. Consider the following example (this example depends on Interactive Extensions):

        var source = app.DefineEnumerable((int id) =>
            EnumerableEx.Using(() =>
                new NorthwindDataContext(), context =>
                    from p in context.Products
                    where p.ProductID == id
                    select p.ProductName));

    Within the definition, StreamInsight has no reason to suspect that it 'owns' the Queryable.Where and Queryable.Select calls, and it can therefore defer to LINQ to SQL! Let's use this source in the context of a StreamInsight process:

        var sink = app.DefineObserver(() => Observer.Create<string>(Console.WriteLine));

        var query = from name in source(1).ToObservable()
                    where name == "Widget"
                    select name;

        using (query.Bind(sink).Run("process"))
        {
            ...
        }

    When we run the binding, the source portion, which filters on product ID and projects the product name, is evaluated by SQL Server. Outside of the definition, responsibility for evaluation shifts to the StreamInsight server, where we create a bridge to the Reactive Framework (using ToObservable) and evaluate an additional predicate. It's incredibly easy to define computations that span multiple domains using these new features in StreamInsight 2.1!

    Regards,
    The StreamInsight Team
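
    A tiny self-contained illustration of the compile-time distinction the whole story rests on: the same lambda text becomes either an opaque delegate or an inspectable expression tree, depending only on the target type.

        using System;
        using System.Linq.Expressions;

        class Demo
        {
            static void Main()
            {
                // Compiled to IL: a directly invokable delegate.
                Func<int, bool> asDelegate = x => x > 2;

                // Compiled to construction code: an object model of the lambda.
                Expression<Func<int, bool>> asTree = x => x > 2;

                Console.WriteLine(asDelegate(3));        // True
                Console.WriteLine(asTree.Body);          // (x > 2) - the tree is inspectable
                Console.WriteLine(asTree.Compile()(3));  // True - compiled on demand
            }
        }

    A provider that receives the Expression<> form is free to inspect, rewrite, or translate it (to SQL, to a hosted StreamInsight process, etc.) instead of executing it as-is.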

    Read the article

  • Web Services and code lists

    - by 0x0me
    Our team has been discussing heavily how to handle code lists in a web service definition. The design goal is to describe a provider API for querying a system using various values, some of which are catalogs, or code lists. A catalog or code list is a set of key/value pairs. There are different systems (at least 3) maintaining possibly different code lists. Each system should implement the provider API, but each system might have a different code list for the same business entity, e.g. think of colors: one system knows [(1,'red'),(2,'green')] and another knows [(1,'lightgreen'),(2,'darkgreen'),(3,'red')], etc. Access to the different provider API implementations will be encapsulated by a query service, but there is already one candidate which might use at least one provider API directly. The options currently being discussed for the API design are:

    1. Use an abstract code list in the interface definition: the web service interface defines a well-known set of code lists which are expected to be used for querying and returning data. Each provider API implementation has to map the request and response values from those abstract code lists to its system-specific ones.

    2. Let the query component handle the code lists: the encapsulating query service knows the code list set of each provider API implementation and takes care of mapping the input and output to the system-specific code lists of the queried system.

    3. Do not use code lists in the query definition at all: just query code lists by a plain string and let the provider API implementation figure out the right value. This might lead to a loss of information and possibly many false positives, due to the fact that the input string may have no canonical mapping to a code list value (e.g. green - lightgreen, or green - darkgreen, or both).

    What are your experiences with, or solutions to, such a problem? Could you give any recommendation?
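
    To make option 1 concrete, a sketch of a per-system mapper (values invented to mirror the color example above). Note the deliberate information loss: both of the second system's greens collapse onto a single canonical Green, which is the trade-off option 1 accepts up front rather than at query time:

        using System.Collections.Generic;

        public enum CanonicalColor { Red, Green }

        public class SystemBColorMapper
        {
            private static readonly Dictionary<int, CanonicalColor> ToCanonical =
                new Dictionary<int, CanonicalColor>
                {
                    { 1, CanonicalColor.Green },  // 'lightgreen'
                    { 2, CanonicalColor.Green },  // 'darkgreen' - collapsed; information lost here
                    { 3, CanonicalColor.Red }     // 'red'
                };

            public CanonicalColor Map(int systemBCode)
            {
                return ToCanonical[systemBCode];
            }
        }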

    Read the article

  • Configuring Jenkins for running with BitBucket

    - by Claus
    I'm trying to set up Jenkins on my Mac mini in order to pull my iOS project source code from BitBucket and build it automatically. I've already gone through the major well-known problems: generating the SSH keys, uploading them to BitBucket, and performing an SSH connection from the console so the host is added to the known hosts list (you can find all my adventures here and here). Now, there are 3 users on my system: A, B and Shared. When I installed Jenkins it automatically placed itself in Shared, but I generated the SSH keys with user A. So, just to be clear, in A's home directory there is an .ssh directory with the public and private keys. When I try to run my Jenkins job I get this error message:

        Started by user anonymous
        Building in workspace /Users/Shared/Jenkins/Home/jobs/myprojectAdHocBuild/workspace
        Checkout:workspace / /Users/Shared/Jenkins/Home/jobs/myprojectAdHocBuild/workspace - hudson.remoting.LocalChannel@625cb0bb
        Using strategy: Default
        Cloning the remote Git repository
        Cloning repository git@bitbucket.org:myuser/myproject.git
        git --version
        git version 1.8.0
        ERROR: Error cloning remote repo 'origin' : Could not clone git@bitbucket.org:myuser/myproject.git
        hudson.plugins.git.GitException: Could not clone git@bitbucket.org:myuser/myproject.git
            at hudson.plugins.git.GitAPI.clone(GitAPI.java:271)
            at hudson.plugins.git.GitSCM$2.invoke(GitSCM.java:1036)
            at hudson.plugins.git.GitSCM$2.invoke(GitSCM.java:978)
            at hudson.FilePath.act(FilePath.java:851)
            at hudson.FilePath.act(FilePath.java:824)
            at hudson.plugins.git.GitSCM.determineRevisionToBuild(GitSCM.java:978)
            at hudson.plugins.git.GitSCM.checkout(GitSCM.java:1134)
            at hudson.model.AbstractProject.checkout(AbstractProject.java:1325)
            at hudson.model.AbstractBuild$AbstractBuildExecution.defaultCheckout(AbstractBuild.java:676)
            at jenkins.scm.SCMCheckoutStrategy.checkout(SCMCheckoutStrategy.java:88)
            at hudson.model.AbstractBuild$AbstractBuildExecution.run(AbstractBuild.java:581)
            at hudson.model.Run.execute(Run.java:1516)
            at hudson.model.FreeStyleBuild.run(FreeStyleBuild.java:46)
            at hudson.model.ResourceController.execute(ResourceController.java:88)
            at hudson.model.Executor.run(Executor.java:236)
        Caused by: hudson.plugins.git.GitException: Command "/usr/local/git/bin/git clone --progress -o origin git@bitbucket.org:myuser/myproject.git /Users/Shared/Jenkins/Home/jobs/myprojectAdHocBuild/workspace" returned status code 128:
        stdout: Cloning into '/Users/Shared/Jenkins/Home/jobs/myprojectAdHocBuild/workspace'...
        stderr: Host key verification failed.
        fatal: Could not read from remote repository.
        Please make sure you have the correct access rights and the repository exists.
            at hudson.plugins.git.GitAPI.launchCommandIn(GitAPI.java:885)
            at hudson.plugins.git.GitAPI.access$000(GitAPI.java:40)
            at hudson.plugins.git.GitAPI$1.invoke(GitAPI.java:267)
            at hudson.plugins.git.GitAPI$1.invoke(GitAPI.java:246)
            at hudson.FilePath.act(FilePath.java:851)
            at hudson.FilePath.act(FilePath.java:824)
            at hudson.plugins.git.GitAPI.clone(GitAPI.java:246)
            ... 14 more
        Trying next repository
        ERROR: Could not clone repository
        FATAL: Could not clone
        hudson.plugins.git.GitException: Could not clone
            at hudson.plugins.git.GitSCM$2.invoke(GitSCM.java:1048)
            at hudson.plugins.git.GitSCM$2.invoke(GitSCM.java:978)
            at hudson.FilePath.act(FilePath.java:851)
            at hudson.FilePath.act(FilePath.java:824)
            at hudson.plugins.git.GitSCM.determineRevisionToBuild(GitSCM.java:978)
            at hudson.plugins.git.GitSCM.checkout(GitSCM.java:1134)
            at hudson.model.AbstractProject.checkout(AbstractProject.java:1325)
            at hudson.model.AbstractBuild$AbstractBuildExecution.defaultCheckout(AbstractBuild.java:676)
            at jenkins.scm.SCMCheckoutStrategy.checkout(SCMCheckoutStrategy.java:88)
            at hudson.model.AbstractBuild$AbstractBuildExecution.run(AbstractBuild.java:581)
            at hudson.model.Run.execute(Run.java:1516)
            at hudson.model.FreeStyleBuild.run(FreeStyleBuild.java:46)
            at hudson.model.ResourceController.execute(ResourceController.java:88)
            at hudson.model.Executor.run(Executor.java:236)

    As you can see, it fails when Hudson tries to run the Git command. The odd thing is that if I run

        /usr/local/git/bin/git clone --progress -o origin git@bitbucket.org:myuser/myproject.git /Users/Shared/Jenkins/Home/jobs/myprojectAdHocBuild/workspace

    in my console, it works fine (after fixing a small problem with the folder write permissions using chmod). I found a post reporting a similar error which names a number of possible options, but I'm not sure how to perform these operations correctly on my console. It looks like Jenkins is trying to run the command as a user which doesn't have permission to retrieve the appropriate keys from my .ssh directory. Not really sure. Maybe this output can help:

        MacMini:~ myuser$ ps axu | grep "/jenkins"
        myuser  11660  0.0  4.6 2918124  97096   ?? S  6:59pm  1:05.63 /usr/bin/java -jar /Users/myuser/Library/Caches/org.jenkins-ci.jenkins/jenkins.war
        jenkins  9896  0.0  9.0 2939824 188552   ?? Ss 4:06pm 17:55.91 /usr/bin/java -jar /Applications/Jenkins/jenkins.war
        myuser  11930  0.0  0.0 2432768    588 s000 S+ 10:28am  0:00.00 grep /jenkins
        MacMini:~ myuser$ ps axu | grep tomcat
        myuser  11932  0.0  0.0 2432768    588 s000 S+ 10:28am  0:00.00 grep tomcat
        MacMini:~ myuser$

    I really hope to fix this problem, because I would like to write a very detailed tutorial with all the information I found scattered around the web.
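
    One hedged way to test that theory (the exact user name and paths depend on the install; the ps output above suggests a 'jenkins' daemon user): the "Host key verification failed" error points at the Jenkins user having no known_hosts entry of its own, since the keys and host acceptance were done as user A. Giving the Jenkins user its own keys and host entry would look roughly like this:

        # Check which user the Jenkins daemon actually runs as
        ps aux | grep jenkins

        # Generate keys owned by that user, then accept BitBucket's host key into its known_hosts
        sudo -u jenkins ssh-keygen -t rsa
        sudo -u jenkins ssh -T git@bitbucket.org

        # Finally, add the new public key (~jenkins/.ssh/id_rsa.pub) to the BitBucket account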

    Read the article

  • ASP.NET MVC 2: Model encapsulated within ViewModel validation

    - by Program.X
    I am trying to get validation to work in ASP.NET MVC 2, but without much success. I have a complex class containing a large number of fields. (Don't ask: this is one of those real-world situations best practices can't touch.) This would normally be my model, and it is a LINQ-to-SQL generated class. Because this is generated code, I have created a metadata class as per http://davidhayden.com/blog/dave/archive/2009/08/10/AspNetMvc20BuddyClassesMetadataType.aspx:

        public class ConsultantRegistrationMetadata
        {
            [DisplayName("Title")]
            [Required(ErrorMessage = "Title is required")]
            [StringLength(10, ErrorMessage = "Title cannot contain more than 10 characters")]
            string Title { get; set; }

            [Required(ErrorMessage = "Forename(s) is required")]
            [StringLength(128, ErrorMessage = "Forename(s) cannot contain more than 128 characters")]
            [DisplayName("Forename(s)")]
            string Forenames { get; set; }

            // ...
        }

    I've attached this to the partial class of my generated class:

        [MetadataType(typeof(ConsultantRegistrationMetadata))]
        public partial class ConsultantRegistration
        {
            // ...
        }

    Because my form is complex, it has a number of dependencies, such as SelectLists, which I have encapsulated in a ViewModel pattern, with the ConsultantRegistration model included as a property:

        public class ConsultantRegistrationFormViewModel
        {
            public Data.ConsultantRegistration ConsultantRegistration { get; private set; }
            public SelectList Titles { get; private set; }
            public SelectList Countries { get; private set; }
            // ...
        }

    So it is essentially ViewModel=Model. My view then has:

        <p>
            <%: Html.LabelFor(model => model.ConsultantRegistration.Title) %>
            <%: Html.DropDownListFor(model => model.ConsultantRegistration.Title, Model.Titles, "(select a Title)") %>
            <%: Html.ValidationMessage("Title", "*") %>
        </p>
        <p>
            <%: Html.LabelFor(model => model.ConsultantRegistration.Forenames) %>
            <%: Html.TextBoxFor(model => model.ConsultantRegistration.Forenames) %>
            <%: Html.ValidationMessageFor(model => model.ConsultantRegistration.Forenames) %>
        </p>

    The problem is that the validation attributes on the metadata class are having no effect. I tried doing it via an interface, but that also had no effect. I'm beginning to think the reason is that I am encapsulating my model within a ViewModel. My controller's Create action is as follows:

        [HttpPost]
        public ActionResult Create(Data.ConsultantRegistration consultantRegistration)
        {
            if (ModelState.IsValid) // this is always true - which is wrong!!
            {
                try
                {
                    consultantRegistration = ConsultantRegistrationRepository.SaveConsultantRegistration(consultantRegistration);
                    return RedirectToAction("Edit", new { id = consultantRegistration.ID, sectionIndex = 2 });
                }
                catch (Exception ex)
                {
                    ModelState.AddModelError("CreateException", ex);
                }
            }
            return View(new ConsultantRegistrationFormViewModel(consultantRegistration));
        }

    As outlined in the comment, the ModelState.IsValid property always returns true, despite fields with the validation annotations not being valid (Forenames being a key example). Am I missing something obvious, considering I am an MVC newbie? I'm after the mechanism demoed by Jon Galloway at http://www.asp.net/learn/mvc-videos/video-10082.aspx. (I am aware it is similar to http://stackoverflow.com/questions/1260562/asp-net-mvc-model-viewmodel-validation, but that post seems to talk about xVal. I have no idea what that is and suspect it is for MVC 1.)
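
    Two things worth checking (educated guesses from the snippets above, not a confirmed diagnosis). First, buddy-class members are matched by public property name, and the members in the metadata class shown have no access modifier, so they default to private. Second, the view renders its fields under the "ConsultantRegistration" prefix, while the action binds an unprefixed parameter, so the attributes may never be exercised against the posted values. A sketch of both adjustments (Data.ConsultantRegistration and the action body are assumed from the question):

        using System.ComponentModel;
        using System.ComponentModel.DataAnnotations;
        using System.Web.Mvc;

        public class ConsultantRegistrationMetadata
        {
            [DisplayName("Title")]
            [Required(ErrorMessage = "Title is required")]
            [StringLength(10, ErrorMessage = "Title cannot contain more than 10 characters")]
            public string Title { get; set; }  // public, unlike the snippet above
        }

        // In the controller, bind with the prefix the view actually renders:
        [HttpPost]
        public ActionResult Create(
            [Bind(Prefix = "ConsultantRegistration")] Data.ConsultantRegistration consultantRegistration)
        {
            // body as in the question
            return View();
        }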

    Read the article

  • NullPointerException in com.sun.tools.jxc.SchemaGenTask

    - by David Collie
    Given this Ant script:

        <?xml version="1.0" encoding="UTF-8"?>
        <project name="projectname" default="generate-schema" basedir=".">
            <taskdef name="schemagen" classname="com.sun.tools.jxc.SchemaGenTask">
                <classpath>
                    <fileset dir="../BuildJars/lib" includes="*.jar" />
                </classpath>
            </taskdef>
            <target name="generate-schema">
                <schemagen srcdir="src/gb/informaticasystems/messages" destdir="schema">
                    <schema namespace="http://www.informatica-systems.co.uk/aquarius/messages/1.0" file="messages-1.0.xsd" />
                </schemagen>
            </target>
        </project>

    I am getting this error:

        Buildfile: C:\Users\davidcollie\workspace\aquarius-feature\AquariusServerLibrary\schemagen.xml
        generate-schema:
        [schemagen] Generating schema from 4 source files
        [schemagen] Problem encountered during annotation processing;
        [schemagen] see stacktrace below for more information.
        [schemagen] java.lang.NullPointerException
        [schemagen]     at com.sun.tools.jxc.model.nav.APTNavigator$2.onDeclaredType(APTNavigator.java:428)
        [schemagen]     at com.sun.tools.jxc.model.nav.APTNavigator$2.onClassType(APTNavigator.java:402)
        [schemagen]     at com.sun.tools.jxc.model.nav.APTNavigator$2.onClassType(APTNavigator.java:456)
        [schemagen]     at com.sun.istack.tools.APTTypeVisitor.apply(APTTypeVisitor.java:27)
        [schemagen]     at com.sun.tools.jxc.model.nav.APTNavigator.getBaseClass(APTNavigator.java:109)
        [schemagen]     at com.sun.tools.jxc.model.nav.APTNavigator.getBaseClass(APTNavigator.java:85)
        [schemagen]     at com.sun.xml.bind.v2.model.impl.PropertyInfoImpl.getIndividualType(PropertyInfoImpl.java:190)
        [schemagen]     at com.sun.xml.bind.v2.model.impl.PropertyInfoImpl.<init>(PropertyInfoImpl.java:132)
        [schemagen]     at com.sun.xml.bind.v2.model.impl.MapPropertyInfoImpl.<init>(MapPropertyInfoImpl.java:67)
        [schemagen]     at com.sun.xml.bind.v2.model.impl.ClassInfoImpl.createMapProperty(ClassInfoImpl.java:917)
        [schemagen]     at com.sun.xml.bind.v2.model.impl.ClassInfoImpl.addProperty(ClassInfoImpl.java:874)
        [schemagen]     at com.sun.xml.bind.v2.model.impl.ClassInfoImpl.findGetterSetterProperties(ClassInfoImpl.java:993)
        [schemagen]     at com.sun.xml.bind.v2.model.impl.ClassInfoImpl.getProperties(ClassInfoImpl.java:303)
        [schemagen]     at com.sun.xml.bind.v2.model.impl.ModelBuilder.getClassInfo(ModelBuilder.java:243)
        [schemagen]     at com.sun.xml.bind.v2.model.impl.ModelBuilder.getClassInfo(ModelBuilder.java:209)
        [schemagen]     at com.sun.xml.bind.v2.model.impl.ModelBuilder.getTypeInfo(ModelBuilder.java:315)
        [schemagen]     at com.sun.xml.bind.v2.model.impl.ModelBuilder.getTypeInfo(ModelBuilder.java:330)
        [schemagen]     at com.sun.tools.xjc.api.impl.j2s.JavaCompilerImpl.bind(JavaCompilerImpl.java:90)
        [schemagen]     at com.sun.tools.jxc.apt.SchemaGenerator$1.process(SchemaGenerator.java:115)
        [schemagen]     at com.sun.mirror.apt.AnnotationProcessors$CompositeAnnotationProcessor.process(AnnotationProcessors.java:60)
        [schemagen]     at com.sun.tools.apt.comp.Apt.main(Apt.java:454)
        [schemagen]     at com.sun.tools.apt.main.JavaCompiler.compile(JavaCompiler.java:258)
        [schemagen]     at com.sun.tools.apt.main.Main.compile(Main.java:1102)
        [schemagen]     at com.sun.tools.apt.main.Main.compile(Main.java:964)
        [schemagen]     at com.sun.tools.apt.Main.processing(Main.java:95)
        [schemagen]     at com.sun.tools.apt.Main.process(Main.java:85)
        [schemagen]     at com.sun.tools.apt.Main.process(Main.java:67)
        [schemagen]     at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        [schemagen]     at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
        [schemagen]     at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
        [schemagen]     at java.lang.reflect.Method.invoke(Method.java:597)
        [schemagen]     at com.sun.tools.jxc.AptBasedTask$InternalAptAdapter.execute(AptBasedTask.java:97)
        [schemagen]     at com.sun.tools.jxc.AptBasedTask.compile(AptBasedTask.java:144)
        [schemagen]     at org.apache.tools.ant.taskdefs.Javac.execute(Javac.java:820)
        [schemagen]     at org.apache.tools.ant.UnknownElement.execute(UnknownElement.java:288)
        [schemagen]     at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        [schemagen]     at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
        [schemagen]     at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
        [schemagen]     at java.lang.reflect.Method.invoke(Method.java:597)
        [schemagen]     at org.apache.tools.ant.dispatch.DispatchUtils.execute(DispatchUtils.java:105)
        [schemagen]     at org.apache.tools.ant.Task.perform(Task.java:348)
        [schemagen]     at org.apache.tools.ant.Target.execute(Target.java:357)
        [schemagen]     at org.apache.tools.ant.Target.performTasks(Target.java:385)
        [schemagen]     at org.apache.tools.ant.Project.executeSortedTargets(Project.java:1329)
        [schemagen]     at org.apache.tools.ant.Project.executeTarget(Project.java:1298)
        [schemagen]     at org.apache.tools.ant.helper.DefaultExecutor.executeTargets(DefaultExecutor.java:41)
        [schemagen]     at org.eclipse.ant.internal.ui.antsupport.EclipseDefaultExecutor.executeTargets(EclipseDefaultExecutor.java:32)
        [schemagen]     at org.apache.tools.ant.Project.executeTargets(Project.java:1181)
        [schemagen]     at org.eclipse.ant.internal.ui.antsupport.InternalAntRunner.run(InternalAntRunner.java:423)
        [schemagen]     at org.eclipse.ant.internal.ui.antsupport.InternalAntRunner.main(InternalAntRunner.java:137)

        BUILD FAILED
        C:\Users\davidcollie\workspace\aquarius-feature\AquariusServerLibrary\schemagen.xml:11: schema generation failed

        Total time: 1 second

    Any ideas?
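
    One hedged observation, not a confirmed fix: the NPE originates in MapPropertyInfoImpl, which suggests schemagen is choking on a java.util.Map property in one of the four source files. While diagnosing, a workaround could be to hide such a property from JAXB (the class below is an invented stand-in for whatever your message classes look like):

        import javax.xml.bind.annotation.XmlTransient;
        import java.util.Map;

        public class Message {
            private Map<String, String> headers;

            @XmlTransient  // keep schemagen away from the Map while diagnosing
            public Map<String, String> getHeaders() { return headers; }
            public void setHeaders(Map<String, String> headers) { this.headers = headers; }
        }

    If the build succeeds with the Map excluded, the fix is then about how that property is mapped (e.g. via an XmlAdapter), rather than about the Ant script itself.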

    Read the article

  • Partial view is not rendering within the main view that contains it (instead it's rendered as its own page)?

    - by JaJ
    I have a partial view that is contained in a simple Index view. When I try to add a new object to my model and update my partial view to display that new object along with the existing objects, the partial view is rendered outside the page that contains it. I'm using AJAX to update the partial view, but what is wrong with the following code?

    Model:

        public class Product
        {
            public int ID { get; set; }
            public string Name { get; set; }
            [DataType(DataType.Currency)]
            public decimal Price { get; set; }
        }

        public class BoringStoreContext
        {
            List<Product> results = new List<Product>();

            public BoringStoreContext()
            {
                Products = new List<Product>();
                Products.Add(new Product() { ID = 1, Name = "Sure", Price = (decimal)(1.10) });
                Products.Add(new Product() { ID = 2, Name = "Sure2", Price = (decimal)(2.10) });
            }

            public List<Product> Products { get; set; }
        }

        public class ProductIndexViewModel
        {
            public Product NewProduct { get; set; }
            public IEnumerable<Product> Products { get; set; }
        }

    Index.cshtml view:

        @model AjaxPartialPageUpdates.Models.ProductIndexViewModel

        @using (Ajax.BeginForm("Index_AddItem", new AjaxOptions { UpdateTargetId = "productList" }))
        {
            <div>
                @Html.LabelFor(model => model.NewProduct.Name)
                @Html.EditorFor(model => model.NewProduct.Name)
            </div>
            <div>
                @Html.LabelFor(model => model.NewProduct.Price)
                @Html.EditorFor(model => model.NewProduct.Price)
            </div>
            <div>
                <input type="submit" value="Add Product" />
            </div>
        }

        <div id='productList'>
            @{ Html.RenderPartial("ProductListControl", Model.Products); }
        </div>

    ProductListControl.cshtml partial view:

        @model IEnumerable<AjaxPartialPageUpdates.Models.Product>

        <table>
            <!-- Render the table headers. -->
            <tr>
                <th>Name</th>
                <th>Price</th>
            </tr>
            <!-- Render the name and price of each product. -->
            @foreach (var item in Model)
            {
                <tr>
                    <td>@Html.DisplayFor(model => item.Name)</td>
                    <td>@Html.DisplayFor(model => item.Price)</td>
                </tr>
            }
        </table>

    Controller:

        public class HomeController : Controller
        {
            public ActionResult Index()
            {
                BoringStoreContext db = new BoringStoreContext();
                ProductIndexViewModel viewModel = new ProductIndexViewModel
                {
                    NewProduct = new Product(),
                    Products = db.Products
                };
                return View(viewModel);
            }

            public ActionResult Index_AddItem(ProductIndexViewModel viewModel)
            {
                BoringStoreContext db = new BoringStoreContext();
                db.Products.Add(viewModel.NewProduct);
                return PartialView("ProductListControl", db.Products);
            }
        }
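
    A common cause of exactly this symptom (a hedged guess, since the layout isn't shown): if jquery.unobtrusive-ajax.js isn't loaded, Ajax.BeginForm silently degrades to a normal form post, and the action's bare partial result replaces the whole page. Making sure the scripts are included and the setting is on usually restores the in-place update (script file names/versions as shipped with the default MVC project template):

        <!-- In _Layout.cshtml or the view -->
        <script src="@Url.Content("~/Scripts/jquery-1.7.1.min.js")" type="text/javascript"></script>
        <script src="@Url.Content("~/Scripts/jquery.unobtrusive-ajax.min.js")" type="text/javascript"></script>

        <!-- And in web.config, under appSettings -->
        <add key="UnobtrusiveJavaScriptEnabled" value="true" />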

    Read the article

  • Alienware M17x R3: Possible downclock

    - by Ywen
    I recently installed Kubuntu 11.10 32-bit (I had graphics driver issues and wanted to try the 32-bit version) on my new Alienware M17x with a Core i7-2670QM CPU. The cores are supposed to be clocked at 2.2 GHz; however, the output of

        $ cat /proc/cpuinfo | grep -i "hz"

    gives me:

        model name : Intel(R) Core(TM) i7-2670QM CPU @ 2.20GHz
        cpu MHz : 800.000
        model name : Intel(R) Core(TM) i7-2670QM CPU @ 2.20GHz
        cpu MHz : 800.000
        model name : Intel(R) Core(TM) i7-2670QM CPU @ 2.20GHz
        cpu MHz : 800.000
        model name : Intel(R) Core(TM) i7-2670QM CPU @ 2.20GHz
        cpu MHz : 800.000
        model name : Intel(R) Core(TM) i7-2670QM CPU @ 2.20GHz
        cpu MHz : 800.000
        model name : Intel(R) Core(TM) i7-2670QM CPU @ 2.20GHz
        cpu MHz : 800.000
        model name : Intel(R) Core(TM) i7-2670QM CPU @ 2.20GHz
        cpu MHz : 800.000
        model name : Intel(R) Core(TM) i7-2670QM CPU @ 2.20GHz
        cpu MHz : 800.000

    If it's useful: the AC adapter is plugged in (yet the output is the same when the computer is powered only by the battery), and I have Firefox and Eclipse running. Does /proc/cpuinfo reflect an automatic downclock made to save power when processor load is low, or is this output abnormal?

    EDIT: OK, I checked, and yes, the output does vary with the load; I reach 2.2 GHz when needed. But my underlying problem remains. I was checking my CPU clocking because I experienced poor performance when playing 720p video files on Ubuntu with VLC or mplayer while on battery (and I believe VLC by default only uses the CPU, not the GPU, to decode), whereas I haven't had such problems with VLC on Windows. That made me think it wasn't coming from a BIOS option; besides, every option in the BIOS regarding the CPU is turned ON.
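
    To see what frequency scaling is actually doing (rather than sampling /proc/cpuinfo), the cpufreq sysfs interface shows the governor and the allowed range per core:

        # Current governor and frequency range for core 0
        cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_governor
        cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_min_freq
        cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_max_freq

        # Or, if the cpufrequtils package is installed, a one-shot summary:
        cpufreq-info

    If the governor switches to something conservative on battery, that would be consistent with the 720p playback stutter described in the EDIT.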

    Read the article

  • Xna GS 4 Animation Sample bone transforms not copying correctly

    - by annonymously
    I have a person model that is animated and a suitcase model that is not. The person can pick up the suitcase, and it should move to the location of the hand bone of the person model. Unfortunately the suitcase doesn't follow the animation correctly: it moves with the hand's animation, but its position is under the ground and way too far to the right. I haven't scaled any of the models myself. Thank you. The source code (forgive the rough prototype code):

        Matrix[] tran = new Matrix[man.model.Bones.Count]; // the absolute transforms from the animation player
        man.model.CopyAbsoluteBoneTransformsTo(tran);

        Vector3 suitcasePos, suitcaseScale, tempSuitcasePos = new Vector3(); // placeholders for the Matrix.Decompose
        Quaternion suitcaseRot = new Quaternion();

        // The transformation of the right-hand bone is decomposed.
        tran[man.model.Bones["HPF_RightHand"].Index].Decompose(out suitcaseScale, out suitcaseRot, out tempSuitcasePos);

        suitcasePos = new Vector3();
        suitcasePos.X = tempSuitcasePos.Z; // the axes are inverted for some reason
        suitcasePos.Y = -tempSuitcasePos.Y;
        suitcasePos.Z = -tempSuitcasePos.X;

        suitcase.Position = man.Position + suitcasePos; // the actual suitcase properties
        suitcase.Rotation = man.Rotation + new Vector3(suitcaseRot.X, suitcaseRot.Y, suitcaseRot.Z);

    I am also copying the bone transforms from the animation player in the Person class, like so:

        // The transformations from the AnimationPlayer.
        Matrix[] skinTrans = new Matrix[model.Bones.Count];
        skinTrans = player.GetBoneTransforms();

        // Copy each transformation to its corresponding bone.
        for (int i = 0; i < skinTrans.Length; i++)
        {
            model.Bones[i].Transform = skinTrans[i];
        }
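
    A hedged suggestion rather than a verified fix: hand-flipping the axes of a decomposed translation is usually a sign that the model's world matrix was left out of the chain. Composing bone space into world space directly avoids the inversions entirely (the Position/Rotation properties on man and suitcase are assumed from the snippet above):

        // Absolute transform of the hand bone, in the man's model space.
        Matrix handBone = tran[man.model.Bones["HPF_RightHand"].Index];

        // The man's world matrix, built from the same Position/Rotation his model is drawn with.
        Matrix manWorld = Matrix.CreateFromYawPitchRoll(man.Rotation.Y, man.Rotation.X, man.Rotation.Z)
                        * Matrix.CreateTranslation(man.Position);

        // Bone space -> model space -> world space, in one matrix.
        Matrix suitcaseWorld = handBone * manWorld;

        // If the suitcase API needs a position rather than a matrix:
        suitcase.Position = suitcaseWorld.Translation;

    If the person model is drawn with any extra scale or axis correction, that same correction has to appear in manWorld, which is the usual source of the "under the ground and too far to the right" offset.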

    Read the article

  • How do I generate mipmap .png and model .obj files for LibGDX?

    - by John Murdoch
    I'm playing a bit with LibGDX (an OpenGL ES 2.0 wrapper for Android). I found some sample code which uses prepared files to load models and mipmapped textures for them; e.g., https://github.com/libgdx/libgdx/blob/master/demos/invaders/gdx-invaders/src/com/badlogic/gdxinvaders/RendererGL20.java reads an .obj file for the model and an RGB565-format .png file to apply a mipmapped texture to it. What is the best / easiest way for me to create these files? I understand .obj files are generated by a bunch of tools (I installed Blender, Wings3D and Kerkythea so far), but which ones will be the most user-friendly for someone unfamiliar with 3D modelling? But more importantly, how do I produce a .png file with the mipmapped texture? The .png I looked at (https://github.com/libgdx/libgdx/blob/master/demos/invaders/gdx-invaders/data/ship.png) seems to include different textures for each of the faces, probably created with some tool. I browsed through the menus of the 3 tools I have installed but didn't find an option to export such a .png file. What am I missing?
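
    For the mipmap half of the question, a hedged note: in libGDX the mipmap chain is normally generated at load time from an ordinary .png rather than stored in the file, and the ship.png linked above looks like a UV-mapped texture painted in the modelling tool, not a set of mipmap levels. A minimal loading sketch (inside create(); the asset path is assumed):

        import com.badlogic.gdx.Gdx;
        import com.badlogic.gdx.graphics.Texture;
        import com.badlogic.gdx.graphics.Texture.TextureFilter;

        // The second constructor argument asks libGDX to generate mipmaps for you.
        Texture shipTexture = new Texture(Gdx.files.internal("data/ship.png"), true);
        shipTexture.setFilter(TextureFilter.MipMapLinearLinear, TextureFilter.Linear);

    So the workflow is: UV-unwrap and paint the texture in the modelling tool (that produces the per-face layout seen in ship.png), export a plain .png, and let the loader build the mipmaps.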

    Read the article

  • What is the best database design and/or software to model a thesaurus?

    - by Miles O'Keefe
    I would like to design a web app that functions as a simple thesaurus: a long list of words with attributes, all of which are linked to each other. Wikipedia defines it as: "In Information Science, Library Science, and Information Technology, specialized thesauri are designed for information retrieval. They are a type of controlled vocabulary, for indexing or tagging purposes. Such a thesaurus can be used as the basis of an index for online material. The Art and Architecture Thesaurus, for example, is used to index the Canadian… Information retrieval thesauri are formally organized so that existing relationships between concepts are made explicit." What database software, design or model would best fit this? Are PHP and MySQL good technologies to handle it?

    Read the article

  • Is it a good idea to add robots "noindex" meta tags to deep, low-content pages, e.g. product model data?

    - by Cognize
    I'm considering adding robots "noindex, follow" tags to the very numerous product data pages that are linked from the product style pages in our online store. For example, each product style has a page with full text content on the product:

        http://www.shop.example/Product/Category/Style/SOME-STYLE-CODE

    Then many data pages with technical data for each model code are linked from the product style page:

        http://www.shop.example/Product/Category/Style/SOME-STYLE-CODE-1
        http://www.shop.example/Product/Category/Style/SOME-STYLE-CODE-2
        http://www.shop.example/Product/Category/Style/SOME-STYLE-CODE-3

    It is these technical data pages that I intend to add the noindex tag to, as I imagine this might stop them from cannibalizing keyword authority from more important content-rich pages on the site. Any advice appreciated.
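
    For reference, the tag under discussion, placed in the <head> of each technical data page:

        <meta name="robots" content="noindex, follow" />

    "noindex" keeps the page out of the index, while "follow" still lets crawlers pass link equity through to the pages it links to.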

    Read the article

  • Small projects using the cathedral model: does open-source lower security?

    - by Anto
    We know of Linus' law: "with enough eyeballs, all bugs are shallow". In general, people seem to say that open-source software is more secure because of that very thing, but... there are many small OSS projects with just 1 or 2 developers (the cathedral model, as described by ESR). For these projects, does releasing the source code actually lower security? For projects like the Linux kernel there are thousands of developers, and security vulnerabilities are quite likely going to be found, but when just a few people look through the source code, while allowing crackers (black-hat hackers) to see the source as well, is security lowered instead of increased? I know that the security advantage closed-source software has over OSS is security through obscurity, which isn't good (at all), but it could help to some degree, at least by giving those few devs some more time (security through obscurity doesn't help with the if, but with the when). EDIT: The question isn't whether OSS is more secure than non-OSS software, but whether the advantages for crackers are greater than the advantages for the developers who want to prevent security vulnerabilities from being exploited.

    Read the article

  • Blocking row navigation in af:table , synchronize row selection with model in case of validation failure- Oracle ADF by Ashish Awasthi

    - by JuergenKress
    In ADF we often work on an editable af:table, and when we use an af:table to insert, update or delete data it is normal to use some validation. The problem is that when a validation failure occurs on the page (in the af:table), the user can still select another row, and it shows as the currently selected row. This is a bit confusing for the user, because the row selection of the af:table is not synchronized with the model or binding layer. To see the problem: I have an editable table on a page... Read the complete article here.

    Read the article

  • What resources will help me understand the data model for QC 10.0 in order to write my SQL queries?

    - by srihari
    I am a fresher with HP's Quality Center 10.0 software testing tool. As I understand it, in order to generate reports from QC and to troubleshoot scenarios, we need to write SQL queries against the QC back-end database; in my case it is a SQL Server database. I downloaded the database reference help file, but I could not understand where to start; it just gives each table's name and its information. For a starter like me, are there any online tutorials, helpful websites, hands-on exercises, or scenarios where I can better understand how to write queries for the QC data model? I am very confident about the SQL coding itself; what I want to know is how to query the QC database tables based on the scenarios that occur in the QC tool. Please suggest. Thanks, Srihari
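
    As a hedged starter (the table and column names below follow the common QC project schema, a BUG table with BG_* columns, but verify them against the database reference for your own project before relying on them):

        -- Count non-closed defects per severity in one QC project schema.
        SELECT BG_SEVERITY, COUNT(*) AS defect_count
        FROM BUG
        WHERE BG_STATUS <> 'Closed'
        GROUP BY BG_SEVERITY
        ORDER BY BG_SEVERITY;

    Running small aggregate queries like this against scenarios you create in the QC UI (file a defect, change its status, re-run the query) is a practical way to learn how the tool's actions map onto the tables.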

    Read the article

  • How can an SQL relational database be used to model a thesaurus? [closed]

    - by Miles O'Keefe
    I would like to design a web app that functions as a simple thesaurus: a long list of words with attributes, all of which are linked to each other. This thesaurus data model can be defined as: a controlled vocabulary arranged in a known order in which equivalence, hierarchical, and associative relationships among terms are clearly displayed and identified by standardized relationship indicators. My idea so far is to have one database in which every word is a table, and every table contains all the words related to that word, e.g.:

        Thesaurus (database)
            happy (table)
                excited (row) | cheerful (row) | lively (row)

    Is there a more efficient way to store words and their relationships to other words in a relational SQL database?
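
    A minimal relational sketch (illustrative names, MySQL flavor) that avoids a table per word: one table of terms, and one junction table holding typed relationships between term pairs, matching the equivalence/hierarchical/associative relationships in the definition above:

        CREATE TABLE term (
            term_id INT PRIMARY KEY AUTO_INCREMENT,
            word    VARCHAR(100) NOT NULL UNIQUE
        );

        CREATE TABLE term_relation (
            from_term_id  INT NOT NULL,
            to_term_id    INT NOT NULL,
            relation_type VARCHAR(20) NOT NULL,  -- e.g. 'synonym', 'broader', 'narrower', 'related'
            PRIMARY KEY (from_term_id, to_term_id, relation_type),
            FOREIGN KEY (from_term_id) REFERENCES term (term_id),
            FOREIGN KEY (to_term_id)  REFERENCES term (term_id)
        );

        -- All synonyms of 'happy':
        SELECT t2.word
        FROM term t1
        JOIN term_relation r ON r.from_term_id = t1.term_id
                            AND r.relation_type = 'synonym'
        JOIN term t2 ON t2.term_id = r.to_term_id
        WHERE t1.word = 'happy';

    Adding a new word or relationship is then an INSERT rather than a schema change, which is the main efficiency win over the table-per-word design.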

    Read the article

  • Why does my model render differently than its preview?

    - by Raven Dreamer
    I've been importing .fbx files that I made with 3DS Max 2012 into Unity, and it's quite neat to see my models running around in-game. However, I can't help but notice that the models, as they're rendered in-game, vary substantially from what they look like in the preview (and also from what they looked like in 3DS Max). Observe:

        [Screenshots: In-Game | Unity Preview | 3DS Max]

    My gut tells me that I'm not setting up Unity's lighting system properly. What, then, do I need to do to either my scene or my model in order to get the left-most picture to look like the middle one?

    Read the article

  • exporting a maya model to work with Playstation mobile Studio?

    - by spartan2417
    I've just started to program with PlayStation Mobile Studio for the PS Vita. The models available for use in PSM development have to be .mdk models. The studio comes with a model converter that converts from DAE, FBX, XSI and X files. However, I've tried converting a Maya scene to both DAE and FBX files, but the converter returns an error with no explanation. I've even tried to use Blender and the tutorial here: http://www.gamefromscratch.com/post/2012/04/30/Exporting-from-Blender-to-PS-Suite-Blender-to-Vita-in-under-5-minutes.aspx, but this fails at the converter too. With regard to Maya, all I do is export the file with default settings. Does anyone know any good tutorials on this, or have a guess as to why this is happening?

    Read the article

  • What software development model has worked best for software teams with heavy dependancy on hardware teams?

    - by MasterDIB
    So, let me explain more. There are a number of competing best practices for software development. Many teams have benefited from Agile practices in some cases; in other cases, using the Unified Process has been championed by large companies like IBM. The common theme I find is that these seem to work well for teams that mainly develop software. I am interested to know what has worked best for people in shops where there is a team on the other side that produces the hardware your software runs on. For example, one team puts together a crate with several pieces of custom hardware on it, while you need to develop the software that will run on those crates. I can't find a development model (agile, spiral, ...) that works best in this case. Any wisdom in this area will be much appreciated.

    Read the article

  • SharePoint 2010 BDC Model Deployment Issue: The default web application could not be determined.

    Yesterday I tried to deploy a Business Data Connectivity Model project created in Visual Studio 2010 to my SharePoint 2010 test server (all RTM versions), but during deployment of the solution, SharePoint threw the following error:

        Add Solution:
        Adding solution 'BCSDemo2.wsp'...
        Deploying solution 'BCSDemo2.wsp'...
        Error occurred in deployment step 'Add Solution': The default web application could not be determined. Set the SiteUrl property in feature BCSDemo2_Feature1 to the URL of...
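
    The error message itself names the likely fix: give the feature an explicit SiteUrl. A hedged sketch of the feature template addition (the URL is a placeholder for your own web application; in Visual Studio 2010 the same thing can be set via the feature's properties):

        <?xml version="1.0" encoding="utf-8"?>
        <Feature xmlns="http://schemas.microsoft.com/sharepoint/">
          <Properties>
            <Property Key="SiteUrl" Value="http://myserver/" />
          </Properties>
        </Feature>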

    Read the article
