Search Results

Search found 12952 results on 519 pages for 'model'.

Page 118 of 519

  • I Admit I Misspoke

    - by Patrick Liekhus
    OK. I admit it. In my last post I mentioned that we moved the XAF DSL to the Entity Framework. This has caused a lot of confusion. I meant to say that we have used the ADO.NET Entity Data Model extensions. This is the design surface that can be tailored to create Entity Framework models. We leveraged the code generation within the ADO.NET Entity Data Model (EDMX) file to generate XAF/XPO classes. This allows you to visually create the entity model, set a few XAF properties and then generate the business objects from there. I am presenting all of these topics at the Kansas City Developers Conference on June 19th and will post the presentation after the conference. The full presentation will demonstrate the power of the ADO.NET Entity Data Model extensions, create a small project and then add the OData layer to XAF to connect to PowerPivot in Excel 2010. The latest code can be found at http://efxaf.codeplex.com. More details to come soon. Sorry for the confusion in the last post. Thanks again.

    Read the article

  • Push or Pull Mobile Coupons?

    - by David Dorf
    Mobile phones allow consumers to receive coupons in context, which increases their relevance and therefore redemption rates. Using your current location, you can get coupons that can be redeemed nearby for the things you want now. Receiving a coupon for something you wanted last week or something you might buy next month just isn't as valuable. I previously talked about Placecast and their concept of pushing offers to mobile phones that cross "geo-fences" around points of interest, like store locations. This push model is an automatic reminder that there are good deals just up ahead. This model works well in dense cities where people walk, but I question how effective it will be in the suburbs where people are driving. McDonald's recently ran a campaign in Finland where they pushed offers to GPS devices when cars neared their restaurants. Amazingly, they achieved a 7% click-through rate. But 8coupons.com sees things differently. They prefer the pull model that requires customers to initiate a search for nearby coupons, and they've done some studies to better understand what "nearby" means. It turns out that there are concentric search circles that emanate from your home and work. From inner to outer, people search for food, drink, shopping, and entertainment. Intuitively, that feels about right. So the question is, do consumers prefer the push or pull model for offers? No doubt the market is big enough for both. These days it's not good enough to just know who your customers are -- you also need to know where they are so you can catch them in the right moment. According to Borrell Associates, redemption rates of mobile coupons are 10x those of traditional mail and newspaper coupons. One thing is for sure: assuming 85% of consumers regularly spend money within 5 miles of home and work, location-based coupons make tons of sense.

    Read the article

  • Permission based Authorization vs. Role based Authorization - Best Practices - 11g

    - by Prakash Yamuna
    In previous blog posts here and here I have alluded to the support in OWSM for permission-based authorization and role-based authorization. Recently I was having a conversation with an internal team in Oracle looking to use OWSM for their Web Services security needs, and one of the topics was: when to use permission-based authorization vs. role-based authorization? As in most scenarios, the answer is: it depends! There are trade-offs involved in the two approaches, and you need to understand them and which trade-offs are acceptable for your scenario. Role-based authorization: Simple to use. Just create a new custom OWSM policy and specify the role in the policy (using EM Fusion Middleware Control). Inconsistent if you have multiple types of resources in an application (e.g. EJBs, Web Apps, Web Services) - for example, the model for securing EJBs with roles and the model for securing Web App roles differ. Since the model is inconsistent, tooling is also fairly inconsistent. Achieving this use case using JDeveloper is slightly complex, since JDeveloper does not directly support creating OWSM custom policies. Permission-based authorization: More complex. You need to both attach an OWSM policy and create OPSS permission authorization policies. (Note: OWSM leverages the OPSS permission-based authorization support.) More appropriate if you have multiple types of resources in an application (e.g. EJBs, Web Apps, Web Services) and want a consistent authorization model. Consistent tooling for managing authorization across different resources (e.g. EM Fusion Middleware Control). Better lifecycle support in terms of T2P, etc. Achieving this use case using JDeveloper is slightly complex, since JDeveloper does not directly support creating/editing OPSS permission-based authorization policies.

    Read the article

  • WPF and Composite Application Library &ndash; Missing The Point

    - by David Totzke
    I have a headache and it’s not even 9AM yet.  Well, ok, it’s nearly ten here now in GMT –5 but it’s before nine somewhere still. Sometimes people will miss the point of something so utterly and completely that one is left wondering how such a person can even dress themselves. Writing an application using WPF and the Composite Application Library (Prism) means that one must learn the various programming idioms common to these frameworks.  The Windows Forms event driven model simply will not suffice.  You need to come to grips with the idea of a very loosely coupled application.  Concepts that must be absorbed and internalized include Data Binding, Control and Data Templates, Commands, Dependency Injection, and Inversion of Control, as well as the Supervising Controller, Presentation Model and Model-View-View-Model patterns. It is as simple as that.  Not to embrace these concepts is to invite pain.  It is to invite noodles; and not the holy kind. Someone actually said to me that “just because it’s not WPF, doesn’t mean it’s wrong.”  And he’s right.  Unless, of course, you are writing a WPF application and especially if you are using the Composite Application Library. In simple terms then; YOU’RE DOING IT WRONG!   Dave Just because I can…

    Read the article

  • Entity Framework with large systems - how to divide models?

    - by jkohlhepp
    I'm working with a SQL Server database with 1000+ tables, another few hundred views, and several thousand stored procedures. We are looking to start using Entity Framework for our newer projects, and we are working on our strategy for doing so. The thing I'm hung up on is how best to split the tables into different models (EDMX or DbContext if we go code first). I can think of a few strategies right off the bat: Split by schema: We have our tables split across probably a dozen schemas. We could do one model per schema. This isn't perfect, though, because dbo still ends up being very large, with 500+ tables / views. Another problem is that certain units of work will end up having to do transactions that span multiple models, which adds to complexity, although I assume EF makes this fairly straightforward. Split by intent: Instead of worrying about schemas, split the models by intent. So we'll have different models for each application, or project, or module, or screen, depending on how granular we want to get. The problem I see with this is that there are certain tables that inevitably have to be used in every case, such as User or AuditHistory. Do we add those to every model (which violates DRY, I think), or do those live in a separate model that is used by every project? Don't split at all - one giant model: This is obviously simple from a development perspective, but from my research and my intuition it seems like it could perform terribly at design time, at compile time, and possibly at run time. What is the best practice for using EF against such a large database? Specifically, what strategies do people use in designing models against this volume of DB objects? Are there options that I'm not thinking of that work better than what I have above? Also, is this a problem in other ORMs such as NHibernate? If so, have they come up with any better solutions than EF?
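
    A minimal code-first sketch of the "split by intent" option, assuming the EF 4.1+ DbContext API; all class, table and property names here are hypothetical, and the shared User table is simply mapped by more than one context:

        using System;
        using System.Data.Entity;        // EF 4.1+ DbContext API
        using System.Transactions;

        // Hypothetical entities; User is the table shared by every module.
        public class User { public int Id { get; set; } public string Name { get; set; } }
        public class Order { public int Id { get; set; } public int UserId { get; set; } }
        public class AuditHistory { public int Id { get; set; } public string Entry { get; set; } }

        // One context per module ("split by intent"); each maps only what it needs.
        public class OrderingContext : DbContext
        {
            public DbSet<Order> Orders { get; set; }
            public DbSet<User> Users { get; set; }          // shared table, mapped here for lookups
        }

        public class AuditingContext : DbContext
        {
            public DbSet<AuditHistory> AuditEntries { get; set; }
            public DbSet<User> Users { get; set; }          // same table mapped again on purpose
        }

        class Demo
        {
            static void Main()
            {
                // A unit of work spanning both contexts can be wrapped in a single transaction.
                using (var scope = new TransactionScope())
                using (var ordering = new OrderingContext())
                using (var auditing = new AuditingContext())
                {
                    ordering.Orders.Add(new Order { UserId = 1 });
                    auditing.AuditEntries.Add(new AuditHistory { Entry = "order created" });
                    ordering.SaveChanges();
                    auditing.SaveChanges();
                    scope.Complete();
                }
            }
        }

    Whether the possible escalation of that wrapping transaction to a distributed one is acceptable is part of the trade-off the question raises.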

    Read the article

  • My processor is not detected (Intel Core 2 Duo)

    - by walid
    My Intel Core 2 Duo processor is not detected. When I type $ uname -m -p I get i686 unknown. I have Ubuntu 10.10 Netbook Remix, but cat /proc/cpuinfo gives the right identification of two processors, as below: processor : 0 vendor_id : GenuineIntel cpu family : 6 model : 15 model name : Intel(R) Core(TM)2 CPU T5600 @ 1.83GHz stepping : 6 cpu MHz : 1826.000 cache size : 2048 KB physical id : 0 siblings : 2 core id : 0 cpu cores : 2 apicid : 0 initial apicid : 0 fdiv_bug : no hlt_bug : no f00f_bug : no coma_bug : no fpu : yes fpu_exception : yes cpuid level : 10 wp : yes flags : fpu vme de pse tsc msr pae mce cx8 apic mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe nx lm constant_tsc arch_perfmon pebs bts aperfmperf pni dtes64 monitor ds_cpl vmx est tm2 ssse3 cx16 xtpr pdcm lahf_lm dts tpr_shadow bogomips : 3657.99 clflush size : 64 cache_alignment : 64 address sizes : 36 bits physical, 48 bits virtual power management: processor : 1 vendor_id : GenuineIntel cpu family : 6 model : 15 model name : Intel(R) Core(TM)2 CPU T5600 @ 1.83GHz stepping : 6 cpu MHz : 1826.000 cache size : 2048 KB physical id : 0 siblings : 2 core id : 1 cpu cores : 2 apicid : 1 initial apicid : 1 fdiv_bug : no hlt_bug : no f00f_bug : no coma_bug : no fpu : yes fpu_exception : yes cpuid level : 10 wp : yes flags : fpu vme de pse tsc msr pae mce cx8 apic mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe nx lm constant_tsc arch_perfmon pebs bts aperfmperf pni dtes64 monitor ds_cpl vmx est tm2 ssse3 cx16 xtpr pdcm lahf_lm dts tpr_shadow bogomips : 3657.53 clflush size : 64 cache_alignment : 64 address sizes : 36 bits physical, 48 bits virtual power management: The problem is with programs that use more than one core, like VirtualBox and Bitcoin, which refuse to use more than one core. Is there anything wrong, or anything that I can do? My installation is from a live ISO on a USB stick.

    Read the article

  • does my js replace view?

    - by Milla Well
    I am writing a web application which is based on CodeIgniter and jQuery. I primarily use Ajax to call my controller functions, and it turned out that there are just 4 view*.php files, because most of my controller functions return JSON data, which is processed in my jQuery. So my actual code is divided into a kind of MVCC model: CodeIgniter model (db, computations), CodeIgniter controller (filtering, xss-cleaning, checking permissions, calling model functions), jQuery controller (callback functions), jQuery view (adding/removing classes, appending elements, ...). So I violate the paradigm of not using the echo function in my CodeIgniter controller and simply call echo json_encode($result);, because it doesn't make any sense to me to create a view*.php file for one line of code - especially because all the regular view*.php stuff is covered in my jQuery view. I was wondering if I am missing something, or if there is a way to integrate this jQuery controller into my CodeIgniter. I found some words on this topic, but the solutions seem pretty hand-made. Are there some neat solutions? Does an MVCC model make sense?

    Read the article

  • What is the rationale behind Apache Jena's *everything is an interface if possible* design philosophy?

    - by David Cowden
    If you are familiar with the Java RDF and OWL engine Jena, then you have run across their philosophy that everything should be specified as an interface when possible. This means that a Resource, Statement, RDFNode, Property, and even the RDF Model, etc., are, contrary to what you might first think, interfaces instead of concrete classes. This leads to the use of factories quite often. Since you can't instantiate a Property or Model, you must have something else do it for you -- the Factory design pattern. My question, then, is: what is the reasoning behind using this pattern as opposed to a traditional class hierarchy system? It is often perfectly viable to use either one. For example, if I want a memory-backed Model instead of a database-backed Model, I could just instantiate those classes; I don't need to ask a Factory to give me one. As an aside, I'm in the process of writing a library for manipulating Pearltrees data, which is exported from their website in the form of an RDF/XML document. As I write this library, I have many options for defining the relationships present in the Pearltrees data. What is nice about the Pearltrees data is that it has a very logical class system: a tree is made up of pearls, which can be either Page, Reference, Alias, or Root pearls. My question comes from trying to figure out whether I should adopt the Jena philosophy in my library, which uses Jena, or whether I should disregard it, pick my own design philosophy, and stick with it.
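
    For illustration only (in C# rather than Jena's Java, and with hypothetical names), the interface-plus-factory style keeps the choice of backing implementation in one place, whereas instantiating concrete classes fixes that choice at every call site:

        using System;

        // Client code programs against the interface only.
        public interface IModel
        {
            void Add(string subject, string predicate, string obj);
        }

        // Two interchangeable implementations (hypothetical).
        internal class MemoryModel : IModel
        {
            public void Add(string s, string p, string o) => Console.WriteLine($"mem: {s} {p} {o}");
        }

        internal class DatabaseModel : IModel
        {
            public void Add(string s, string p, string o) => Console.WriteLine($"db: {s} {p} {o}");
        }

        // The factory is the single place that decides which implementation to hand out.
        public static class ModelFactory
        {
            public static IModel CreateDefaultModel() => new MemoryModel();
            public static IModel CreateDatabaseModel() => new DatabaseModel();
        }

        class Demo
        {
            static void Main()
            {
                IModel model = ModelFactory.CreateDefaultModel();  // swap factories, not callers
                model.Add("pearl:1", "rdf:type", "pearltrees:Page");
            }
        }

    Whether that flexibility is worth the indirection for a small, stable class system like the Pearltrees one is exactly the trade-off the question is about.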

    Read the article

  • MVVM - child windows and data contexts

    - by GlenH7
    Should a child window have its own data context (View-Model) or use the data context of the parent? More broadly, should each View have its own View-Model? Are there any rules to guide making that decision? What if the various View-Models will be accessing the same Model? I haven't been able to find any consistent guidance on my question. The MS definition of MVVM appears to be silent on child windows. For one example, I have created a warning message notification View. It really didn't need a data context, since it was passed the message to display. But if I needed to fancy it up a bit, I would have tapped the parent's data context. I have run into another scenario that needs a child window and is more complicated than the notification box. The parent's View-Model is already getting cluttered, so I had planned on generating a dedicated VM for the child window. But I can't find any guidance on whether this is a good idea or what the potential consequences may be. FWIW, I happen to be working in Silverlight, but I don't know that this question is strictly a Silverlight issue.
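
    One common arrangement, sketched here with hypothetical WPF names (in Silverlight the window would typically be a ChildWindow), gives the child window its own small View-Model while both View-Models wrap the same Model instance:

        using System.Windows;

        // The shared Model; INotifyPropertyChanged plumbing is omitted for brevity.
        public class WarningModel
        {
            public string Message { get; set; }
        }

        // Hypothetical child View.
        public class WarningWindow : Window { }

        // Dedicated, lightweight View-Model for the child window.
        public class WarningViewModel
        {
            private readonly WarningModel _model;
            public WarningViewModel(WarningModel model) { _model = model; }

            public string DisplayText
            {
                get { return "Warning: " + _model.Message; }
            }
        }

        public class MainViewModel
        {
            public WarningModel SharedModel = new WarningModel();

            public void ShowWarning(string text)
            {
                SharedModel.Message = text;
                var child = new WarningWindow
                {
                    // The child gets its own VM, but that VM wraps the same Model instance.
                    DataContext = new WarningViewModel(SharedModel)
                };
                child.Show();
            }
        }

    The parent's View-Model stays uncluttered, while a trivial child (like the message-box case) can still just reuse the parent's DataContext.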

    Read the article

  • Load order in XNA?

    - by marc wellman
    I am wondering whether there is a mechanism to manually control the call order of void Game.LoadContent(), the way int DrawableGameComponent.DrawOrder controls void Game.Draw(GameTime gt) - apart from the order that results from adding components to the Game.Components container. And maybe something similar exists for Game.Update(GameTime gt)? UPDATE To exemplify my issue, consider that you have several game components which depend on each other for their instantiation. All inherit from DrawableGameComponent. Now suppose that in one of these components you load a Model from the game's content pipeline and add it to some static container in order to provide access to it for other game components. protected override void LoadContent() { // ... Model m = _contentManager.Load<Model>(@"content/myModel"); // GameComponents is a static class with an accessible list where game components reside. GameComponents.AddComponent(m); // ... } Now it's easy to imagine that this component's load method has to run before those of other game components that want to access the model m in their own load methods.
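
    There is no LoadOrder counterpart to DrawOrder or UpdateOrder, so one workable pattern (a sketch with hypothetical names, not the only option) is to give components an explicit, public load step and have the Game call those steps in whatever order it needs, while Draw and Update stay framework-managed:

        using Microsoft.Xna.Framework;
        using Microsoft.Xna.Framework.Content;
        using Microsoft.Xna.Framework.Graphics;

        public class ModelProviderComponent : DrawableGameComponent
        {
            public ModelProviderComponent(Game game) : base(game) { }

            public Model SharedModel { get; private set; }

            // Called explicitly by the Game, not by the framework.
            public void LoadSharedContent(ContentManager content)
            {
                SharedModel = content.Load<Model>("content/myModel");
            }
        }

        public class ModelConsumerComponent : DrawableGameComponent
        {
            private readonly ModelProviderComponent _provider;

            public ModelConsumerComponent(Game game, ModelProviderComponent provider) : base(game)
            {
                _provider = provider;
            }

            public void LoadSharedContent(ContentManager content)
            {
                // Safe: the Game guarantees the provider's load step already ran.
                Model m = _provider.SharedModel;
            }
        }

        public class MyGame : Game
        {
            private readonly GraphicsDeviceManager _graphics;
            private readonly ModelProviderComponent _provider;
            private readonly ModelConsumerComponent _consumer;

            public MyGame()
            {
                _graphics = new GraphicsDeviceManager(this);
                Content.RootDirectory = "Content";

                _provider = new ModelProviderComponent(this);
                _consumer = new ModelConsumerComponent(this, _provider);
                Components.Add(_provider);   // Draw/Update ordering still via DrawOrder/UpdateOrder
                Components.Add(_consumer);
            }

            protected override void LoadContent()
            {
                // Explicit, ordered loading - independent of the Components collection order.
                _provider.LoadSharedContent(Content);
                _consumer.LoadSharedContent(Content);
            }
        }

    The components' own protected LoadContent overrides are left alone, so whatever order the framework calls them in no longer matters.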

    Read the article

  • Automated texture mapping

    - by brandon
    I have a set of seamless tiling textures. I want to be able to take an arbitrary model and create a UV map with these properties: no stretching (all textures tile appropriately, so there is no stretching or shearing of the texture); the textures display on the correct axis relative to the model they are mapped to (if you look at the example, you can see some of the letters on the front are tilted; the y axis of the texture should match up with the y axis of the object, and some other faces have upside-down letters too); the texture is as continuous as possible on the surface of the model (if two faces are adjacent, the texture continues on the adjacent face where it left off); the model is closed (all faces are completely enclosed by other faces). A few notes: this mapping will occur before triangulation. I realize there are ways to do this by hand, and it's probably a hard problem to automatically map textures in general, but since these textures are seamless and I just need uniform coverage it seems like an easier problem. I'm looking for an algorithmic approach to this that I can apply in general, not a tool that does it. What approach would work for this, and is there an existing one? (I assume so.)

    Read the article

  • Review: Data Modeling 101

    I just recently read “Data Modeling 101” by Scott W. Ambler, in which he gives an overview of fundamental data modeling skills. I think this article is excellent for anyone who is just starting to learn, or refreshing their skills in, the modeling of data. Scott defines data modeling as the act of exploring data-oriented structures. He goes on to explain how data models are actually used by defining three different types of models: Conceptual Data Models, Logical Data Models (LDMs), and Physical Data Models (PDMs). He further expands on modeling by exploring common data modeling notations, because there are no industry standards for the practice of data modeling. Scott then describes how to actually model data by expanding on entities, attributes, identities, and relationships, which are the basic building blocks of data models. In addition, he discusses the value of normalization for reducing redundancy and denormalization for performance. Finally, he discusses ways in which developers and DBAs can become better data modelers through practice and by seeking guidance from more experienced data modelers.

    Read the article

  • How Service Component Architecture (SCA) Can Be Incorporated Into Existing Enterprise Systems

    After viewing Rob High’s presentation “The SOA Component Model” hosted on InfoQ.com, I can foresee how Service Component Architecture (SCA) can be incorporated into an existing enterprise. According to IBM’s DeveloperWorks website, SCA is a set of specifications which outline a model for constructing applications/systems using a Service-Oriented Architecture (SOA). In addition, SCA builds on open standards such as Web services. In the future, I can easily see how some large IT shops could potentially divide development teams or work groups into Component/Data Object Groups and Standard Development Groups. The Component/Data Object Group would only work on creating and maintaining components that are reused throughout the entire enterprise. The Standard Development Group would work on new and existing projects that incorporate the use of various components to accomplish various business tasks. In my opinion, the incorporation of SCA into any IT department will initially slow down the number of new features developed, due to the time needed to create the new, loosely coupled components. However, once a company becomes more mature in its SCA process, the number of program features developed will greatly increase. I feel this is because the loosely coupled components needed to add new features will already be built and ready to incorporate into any new development feature request. References: BEA Systems, Cape Clear Software, IBM, Interface21, IONA Technologies PLC, Oracle, Primeton Technologies Ltd, Progress Software, Red Hat Inc., Rogue Wave Software, SAP AG, Siebel Systems, Software AG, Sun Microsystems, Sybase, TIBCO Software Inc. (2006). Service Component Architecture. Retrieved 11 27, 2011, from DeveloperWorks: http://www.ibm.com/developerworks/library/specification/ws-sca/ High, R. (2007). The SOA Component Model. Retrieved 11 26, 2011, from InfoQ: http://www.infoq.com/presentations/rob-high-sca-sdo-soa-programming-model

    Read the article

  • Dependency injection: what belongs in the constructor?

    - by Adam Backstrom
    I'm evaluating my current PHP practices in an effort to write more testable code. Generally speaking, I'm fishing for opinions on what types of actions belong in the constructor. Should I limit things to dependency injection? If I do have some data to populate, should that happen via a factory rather than as constructor arguments? (Here, I'm thinking about my User class that takes a user ID and populates user data from the database during construction, which obviously needs to change in some way.) I've heard it said that "initialization" methods are bad, but I'm sure that depends on what exactly is being done during initialization. At the risk of getting too specific, I'll also piggyback a more detailed example onto my question. For a previous project, I built a FormField class (which handled field value setting, validation, and output as HTML) and a Model class to contain these fields and do a bit of magic to ease working with fields. FormField had some prebuilt subclasses, e.g. FormText (<input type="text">) and FormSelect (<select>). Model would be subclassed so that a specific implementation (say, a Widget) had its own fields, such as a name and date of manufacture: class Widget extends Model { public function __construct( $data = null ) { $this->name = new FormField('length=20&label=Name:'); $this->manufactured = new FormDate; parent::__construct( $data ); // set above fields using incoming array } } Now, this does violate some rules that I have read, such as "avoid new in the constructor," but to my eyes this does not seem untestable. These are properties of the object, not some black box data generator reading from an external source. Unit tests would progressively build up to any test of Widget-specific functionality, so I could be confident that the underlying FormFields were working correctly during the Widget test. In theory I could provide the Model with a FieldFactory() which could supply custom field objects, but I don't believe I would gain anything from this approach. Is this a poor assumption?

    Read the article

  • Displaying Exceptions Thrown or Caught in Managed Beans

    - by Frank Nimphius
    Just came across a sample written by Steve Muench which, somewhere deep in its implementation details, uses the following code to route exceptions to the ADF binding layer so they are handled by the ADF model error handler (which can be customized by overriding the DCErrorHandlerImpl class and configuring the custom class in the DataBindings.cpx file). To route an exception to the ADFm error handler, Steve used the following code: ((DCBindingContainer)BindingContext.getCurrent().getCurrentBindingsEntry()).reportException(ex); The same code, however, can be used in managed beans as well to enforce consistent error handling in ADF. As an example, let's assume a managed bean method hits an exception. To simulate this, let's use the following code: public void onToolBarButtonAction(ActionEvent actionEvent) { throw new JboException("Just to tease you !!!!!"); } The exception shows at runtime as displayed in the following image: Assuming a try-catch block is used to intercept the exception caused by a managed bean action, you can route the error message display to the ADF model error handler. Again, let's simulate the code that would need to go into a try-catch block: public void onToolBarButtonAction(ActionEvent actionEvent) { JboException ex = new JboException("Just to tease you !!!!!"); BindingContext bctx = BindingContext.getCurrent(); ((DCBindingContainer)bctx.getCurrentBindingsEntry()).reportException(ex); } The error now displays as shown in the image below. As you can see, the error is now handled by the ADFm error handler, which - as mentioned before - could be a custom error handler. Using the ADF model error handling to display exceptions thrown in managed beans requires the current ADF Faces page to have an associated PageDef file (which is the case if the page or view contains ADF-bound components). Note that to invoke methods exposed on the business service, it is recommended to always work through the binding layer (method binding) so that in case of an error the ADF model error handler is automatically used.

    Read the article

  • XNA Skinned Animated Mesh Rendering Exported from Maya

    - by Devin Garner
    I am working on translating an old RTS game engine I wrote from DirectX 9 to XNA. My old models didn't have animation and are in an old format, so I'm testing with an FBX file. I temporarily "borrowed" a model from League of Legends just to test whether my rendering is working correctly. I imported the mesh/bones/skin/animation into Maya 2012 using an "unnamed" 3rd-party import tool. (Obviously I'll have to get legitimate models later, but I just want to test that my programming is correct.) Everything looks correct in Maya and it renders the animations flawlessly. I exported everything into a single FBX file (with only a single animation). I then tried to load this model using the example at the following site: http://create.msdn.com/en-US/education/catalog/sample/skinned_model With my exported FBX, the animation looks correct for most of the frames; however, at random times it screws up for a split second. Basically, the body/arms/head will look right, but the leg/foot will shoot out to a random point in space for a second and then go back to the normal position. The original FBX from the sample looks correct in my program. It seems odd that my model would have been imported into Maya wrong, since it displays fine in Maya. So I'm thinking either I'm exporting it wrong, or the sample code is bad and the model from the sample caters to the sample's bad code. I'm new to 3D programming and Maya, so chances are I'm doing something wrong in the export. I'm using mostly the defaults, but I've tried all 3 interpolation modes (quaternion, euler, resample). Thanks

    Read the article

  • MVC + 3 tier; where do ViewModels come into play?

    - by mikhairu
    I'm designing a 3-tiered application using ASP.NET MVC 4. I used the following resources as references: CodeProject: MVC + N-tier + Entity Framework, and Separating data access in ASP.NET MVC. I have the following design so far. Presentation Layer (PL) (the main MVC project, where the M of MVC was moved to the Data Access Layer): MyProjectName.Main Views/ Controllers/ ... Business Logic Layer (BLL): MyProjectName.BLL ViewModels/ ProjectServices/ ... Data Access Layer (DAL): MyProjectName.DAL Models/ Repositories.EF/ Repositories.Dapper/ ... Now, the PL references the BLL and the BLL references the DAL. This way a lower layer does not depend on the one above it. In this design the PL invokes a service of the BLL. The PL can pass a View Model to the BLL and the BLL can pass a View Model back to the PL. Also, the BLL invokes the DAL, and the DAL can return a Model back to the BLL. The BLL can in turn build a View Model and return it to the PL. Up to now this pattern was working for me. However, I've run into a problem where some of my ViewModels require joins on several entities. In the plain MVC approach, in the controller I used a LINQ query to do the joins and then select new MyViewModel() { ... }. But now, in the DAL, I do not have access to where the ViewModels are defined (in the BLL). This means I cannot do joins in the DAL and return the results to the BLL. It seems I have to do separate queries in the DAL (instead of joins in one query), and the BLL would then use their results to build a ViewModel. This is very inconvenient, but I don't think I should be exposing the DAL to ViewModels. Any ideas how I can solve this dilemma? Thanks.
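
    One way out of the dilemma, sketched below with hypothetical names, is to keep the join in the DAL but have it return a plain projection DTO that lives with the DAL Models rather than a ViewModel; the BLL then maps that DTO onto the ViewModel it hands to the PL:

        using System.Collections.Generic;
        using System.Data.Entity;
        using System.Linq;

        // DAL: entities and a flat projection type that knows nothing about the UI.
        public class Customer { public int Id { get; set; } public string Name { get; set; } }
        public class Order { public int Id { get; set; } public int CustomerId { get; set; } public decimal Total { get; set; } }
        public class OrderSummaryDto { public int OrderId { get; set; } public string CustomerName { get; set; } public decimal Total { get; set; } }

        public class MyDbContext : DbContext
        {
            public DbSet<Order> Orders { get; set; }
            public DbSet<Customer> Customers { get; set; }
        }

        // DAL repository keeps the join in a single query.
        public class OrderRepository
        {
            private readonly MyDbContext _db;
            public OrderRepository(MyDbContext db) { _db = db; }

            public List<OrderSummaryDto> GetOrderSummaries()
            {
                return (from o in _db.Orders
                        join c in _db.Customers on o.CustomerId equals c.Id
                        select new OrderSummaryDto
                        {
                            OrderId = o.Id,
                            CustomerName = c.Name,
                            Total = o.Total
                        }).ToList();
            }
        }

        // BLL: maps the DAL projection onto the ViewModel the PL consumes.
        public class OrderSummaryViewModel { public string Title { get; set; } public decimal Total { get; set; } }

        public class OrderService
        {
            private readonly OrderRepository _repository;
            public OrderService(OrderRepository repository) { _repository = repository; }

            public List<OrderSummaryViewModel> GetOrderSummaries()
            {
                return _repository.GetOrderSummaries()
                    .Select(dto => new OrderSummaryViewModel
                    {
                        Title = dto.OrderId + " - " + dto.CustomerName,
                        Total = dto.Total
                    })
                    .ToList();
            }
        }

    The DAL never sees ViewModels, the BLL never writes query-shaped code, and the join still executes as one database query.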

    Read the article

  • How to draw an E-R diagram?

    - by Appy
    I am learning DBMS in my college. Recently I was given an assignment to draw an E-R model of a Bus Reservation system which handles reservations, ticketing and cancellations. I understand the theory of the E-R model when I study from a book, but it gets confusing when I try to draw one from scratch. How should one proceed? There seem to be a lot of ways to model an E-R diagram for a particular requirement. It's really confusing. Can anyone explain, taking a Bus Reservation System as an example? Here is the model I made (but I am not confident in it because at every step I could think of many more alternatives!): Entity_Set Passenger(passengerID, name, age, gender) Entity_Set Ticket(ticketID, status) //status is either WaitingList, Confirmed or Cancelled Entity_Set Bus(busID, MaxSeats, Type) //Type is AC or Non-AC Entity_Set Route(routeID, ArrivalTime, DepartureTime, Source, Destination) And a ternary relationship between Passenger, Ticket and Bus with attributes passengerID, ticketID, busID, plus a binary relationship between Bus and Route with attributes busID, routeID. I have a few doubts: 1. Should we take Time as a composite attribute with Arrival and Departure as its attributes (what's the difference if we model it that way)? 2. The same with Source and Destination - should they be made into a composite attribute "Place" or something like "Location"? 3. Are there any weak entity sets here? Can you please 'create' a weak entity set and explain? Because I have no idea at all what to take as a weak entity set.

    Read the article

  • XNA shield effect with a Primative sphere problem

    - by Sparky41
    I'm having issue with a shield effect i'm trying to develop. I want to do a shield effect that surrounds part of a model like this: http://i.imgur.com/jPvrf.png I currently got this: http://i.imgur.com/Jdin7.png (The red likes are a simple texture a black background with a red cross in it, for testing purposes: http://i.imgur.com/ODtzk.png where the smaller cross in the middle shows the contact point) This sphere is drawn via a primitive (DrawIndexedPrimitives) This is how i calculate the pieces of the sphere using a class i've called Sphere (this class is based off the code here: http://xbox.create.msdn.com/en-US/education/catalog/sample/primitives_3d) public class Sphere { // During the process of constructing a primitive model, vertex // and index data is stored on the CPU in these managed lists. List vertices = new List(); List indices = new List(); // Once all the geometry has been specified, the InitializePrimitive // method copies the vertex and index data into these buffers, which // store it on the GPU ready for efficient rendering. VertexBuffer vertexBuffer; IndexBuffer indexBuffer; BasicEffect basicEffect; public Vector3 position = Vector3.Zero; public Matrix RotationMatrix = Matrix.Identity; public Texture2D texture; /// <summary> /// Constructs a new sphere primitive, /// with the specified size and tessellation level. /// </summary> public Sphere(float diameter, int tessellation, Texture2D text, float up, float down, float portstar, float frontback) { texture = text; if (tessellation < 3) throw new ArgumentOutOfRangeException("tessellation"); int verticalSegments = tessellation; int horizontalSegments = tessellation * 2; float radius = diameter / 2; // Start with a single vertex at the bottom of the sphere. AddVertex(Vector3.Down * ((radius / up) + 1), Vector3.Down, Vector2.Zero);//bottom position5 // Create rings of vertices at progressively higher latitudes. for (int i = 0; i < verticalSegments - 1; i++) { float latitude = ((i + 1) * MathHelper.Pi / verticalSegments) - MathHelper.PiOver2; float dy = (float)Math.Sin(latitude / up);//(up)5 float dxz = (float)Math.Cos(latitude); // Create a single ring of vertices at this latitude. for (int j = 0; j < horizontalSegments; j++) { float longitude = j * MathHelper.TwoPi / horizontalSegments; float dx = (float)(Math.Cos(longitude) * dxz) / portstar;//port and starboard (right)2 float dz = (float)(Math.Sin(longitude) * dxz) * frontback;//front and back1.4 Vector3 normal = new Vector3(dx, dy, dz); AddVertex(normal * radius, normal, new Vector2(j, i)); } } // Finish with a single vertex at the top of the sphere. AddVertex(Vector3.Up * ((radius / down) + 1), Vector3.Up, Vector2.One);//top position5 // Create a fan connecting the bottom vertex to the bottom latitude ring. for (int i = 0; i < horizontalSegments; i++) { AddIndex(0); AddIndex(1 + (i + 1) % horizontalSegments); AddIndex(1 + i); } // Fill the sphere body with triangles joining each pair of latitude rings. for (int i = 0; i < verticalSegments - 2; i++) { for (int j = 0; j < horizontalSegments; j++) { int nextI = i + 1; int nextJ = (j + 1) % horizontalSegments; AddIndex(1 + i * horizontalSegments + j); AddIndex(1 + i * horizontalSegments + nextJ); AddIndex(1 + nextI * horizontalSegments + j); AddIndex(1 + i * horizontalSegments + nextJ); AddIndex(1 + nextI * horizontalSegments + nextJ); AddIndex(1 + nextI * horizontalSegments + j); } } // Create a fan connecting the top vertex to the top latitude ring. 
for (int i = 0; i < horizontalSegments; i++) { AddIndex(CurrentVertex - 1); AddIndex(CurrentVertex - 2 - (i + 1) % horizontalSegments); AddIndex(CurrentVertex - 2 - i); } //InitializePrimitive(graphicsDevice); } /// <summary> /// Adds a new vertex to the primitive model. This should only be called /// during the initialization process, before InitializePrimitive. /// </summary> protected void AddVertex(Vector3 position, Vector3 normal, Vector2 texturecoordinate) { vertices.Add(new VertexPositionNormal(position, normal, texturecoordinate)); } /// <summary> /// Adds a new index to the primitive model. This should only be called /// during the initialization process, before InitializePrimitive. /// </summary> protected void AddIndex(int index) { if (index > ushort.MaxValue) throw new ArgumentOutOfRangeException("index"); indices.Add((ushort)index); } /// <summary> /// Queries the index of the current vertex. This starts at /// zero, and increments every time AddVertex is called. /// </summary> protected int CurrentVertex { get { return vertices.Count; } } public void InitializePrimitive(GraphicsDevice graphicsDevice) { // Create a vertex declaration, describing the format of our vertex data. // Create a vertex buffer, and copy our vertex data into it. vertexBuffer = new VertexBuffer(graphicsDevice, typeof(VertexPositionNormal), vertices.Count, BufferUsage.None); vertexBuffer.SetData(vertices.ToArray()); // Create an index buffer, and copy our index data into it. indexBuffer = new IndexBuffer(graphicsDevice, typeof(ushort), indices.Count, BufferUsage.None); indexBuffer.SetData(indices.ToArray()); // Create a BasicEffect, which will be used to render the primitive. basicEffect = new BasicEffect(graphicsDevice); //basicEffect.EnableDefaultLighting(); } /// <summary> /// Draws the primitive model, using the specified effect. Unlike the other /// Draw overload where you just specify the world/view/projection matrices /// and color, this method does not set any renderstates, so you must make /// sure all states are set to sensible values before you call it. /// </summary> public void Draw(Effect effect) { GraphicsDevice graphicsDevice = effect.GraphicsDevice; // Set our vertex declaration, vertex buffer, and index buffer. graphicsDevice.SetVertexBuffer(vertexBuffer); graphicsDevice.Indices = indexBuffer; graphicsDevice.BlendState = BlendState.Additive; foreach (EffectPass effectPass in effect.CurrentTechnique.Passes) { effectPass.Apply(); int primitiveCount = indices.Count / 3; graphicsDevice.DrawIndexedPrimitives(PrimitiveType.TriangleList, 0, 0, vertices.Count, 0, primitiveCount); } graphicsDevice.BlendState = BlendState.Opaque; } /// <summary> /// Draws the primitive model, using a BasicEffect shader with default /// lighting. Unlike the other Draw overload where you specify a custom /// effect, this method sets important renderstates to sensible values /// for 3D model rendering, so you do not need to set these states before /// you call it. /// </summary> public void Draw(Camera camera, Color color) { // Set BasicEffect parameters. basicEffect.World = GetWorld(); basicEffect.View = camera.view; basicEffect.Projection = camera.projection; basicEffect.DiffuseColor = color.ToVector3(); basicEffect.TextureEnabled = true; basicEffect.Texture = texture; GraphicsDevice device = basicEffect.GraphicsDevice; device.DepthStencilState = DepthStencilState.Default; if (color.A < 255) { // Set renderstates for alpha blended rendering. 
device.BlendState = BlendState.AlphaBlend; } else { // Set renderstates for opaque rendering. device.BlendState = BlendState.Opaque; } // Draw the model, using BasicEffect. Draw(basicEffect); } public virtual Matrix GetWorld() { return /*world */ Matrix.CreateScale(1f) * RotationMatrix * Matrix.CreateTranslation(position); } } public struct VertexPositionNormal : IVertexType { public Vector3 Position; public Vector3 Normal; public Vector2 TextureCoordinate; /// <summary> /// Constructor. /// </summary> public VertexPositionNormal(Vector3 position, Vector3 normal, Vector2 textCoor) { Position = position; Normal = normal; TextureCoordinate = textCoor; } /// <summary> /// A VertexDeclaration object, which contains information about the vertex /// elements contained within this struct. /// </summary> public static readonly VertexDeclaration VertexDeclaration = new VertexDeclaration ( new VertexElement(0, VertexElementFormat.Vector3, VertexElementUsage.Position, 0), new VertexElement(12, VertexElementFormat.Vector3, VertexElementUsage.Normal, 0), new VertexElement(24, VertexElementFormat.Vector2, VertexElementUsage.TextureCoordinate, 0) ); VertexDeclaration IVertexType.VertexDeclaration { get { return VertexPositionNormal.VertexDeclaration; } } } A simple call to the class to initialise it. The Draw method is called in the master draw method in the Gamecomponent. My current thoughts on this are: The direction of the weapon hitting the ship is used to get the middle position for the texture Wrap a texture around the drawn sphere based on this point of contact Problem is i'm not sure how to do this. Can anyone help or if you have a better idea please tell me i'm open for opinion? :-) Thanks.

    Read the article

  • Unable To Get Sound Working to External Speaker on HP TouchSmart 320 on 11.04 or 11.10

    - by Schof
    This is an HP TouchSmart 320, model number 320-1200m. I'm using Ubuntu 11.04. Hardware information: root@hp320:/home/mpower# aplay -l **** List of PLAYBACK Hardware Devices **** card 0: Generic [HD-Audio Generic], device 0: STAC92xx Analog [STAC92xx Analog] Subdevices: 1/1 Subdevice #0: subdevice #0 root@hp320:~$ cat /proc/asound/card0/codec#0 | grep Codec Codec: IDT 92HD91BXX Sound to the headphone jack works properly, but sound to the built-in speakers does not work. I have installed Windows, and with Windows 7 installed all audio hardware works properly, so this isn't a hardware fault. I've looked at https://help.ubuntu.com/community/HdaIntelSoundHowto and have been unable to find my card in http://www.kernel.org/doc/Documentation/sound/alsa/HD-Audio-Models.txt. I have tried almost every conceivable model in the "options snd-hda-intel model=MODEL" line I added to /etc/modprobe.d/alsa-base.conf. Update 2011-11-09 2:31 PM PST: I've gone to Control Center - Sound Preferences to try settings that make sound work. The "Hardware" tab shows one device: "Internal Audio 1 Output / 1 Input Analog Stereo Duplex." There are two output profiles listed in the selection box at the bottom of the tab: Analog Stereo Duplex and Analog Stereo Output. Neither causes sound to come from the speakers. I've also run alsamixer on the command line and ensured that everything is set to maximum and nothing is muted. Update 2011-11-09 5:15 PM PST: I've replicated the exact same symptoms in 11.10. Update 2011-11-10 11:31 AM PST: I've filed a bug on Launchpad: https://launchpad.net/ubuntu/+source/alsa-driver/+bug/888703

    Read the article

  • How to use mount points in MilkShape models?

    - by vividos
    I have bought the Warriors & Commoners model pack from Frogames; the pack contains (among other formats) two animated models and several non-animated objects (axe, shield, pilosities, etc.) in MilkShape3D format. I looked at the official "MilkShape 3D Viewer v2.0" (msViewer2.zip at http://www.chumba.ch/chumbalum-soft/ms3d/download.html) source code and implemented loading the model and calculating the joint matrices, and everything looks fine. In the model there are several joints that are designated as the "mount points" for static objects like the axe and shield. I now want to "put" the axe into the hand of the animated model, and I can't quite figure out how. I put the animated vertices in a VBO that gets updated every frame (I know I should do this with a shader, but I haven't had time for that yet). I put the static vertices in another VBO that I want to keep static and not update every frame. I then tried to render the animated vertices first, then use the joint matrix of the "mount joint" to calculate the location of the static object. I tried many things, and what seems about right is to transpose the joint matrix and then use glMultMatrix() to transform the modelview matrix. For some objects like the axe this works, but not for others, e.g. the pilosities. Now my question: how is this generally implemented when using bone/joint models, and especially with MilkShape3D models? Am I on the right track?

    Read the article

  • Javascript MVC in principle

    - by Michael
    Suppose I want to implement MVC in JavaScript. I am not asking about MVC frameworks; I am asking how to build it in principle. Let's consider a search page of an e-commerce site, which works as follows: the user chooses a product and its attributes and presses a "search" button; the application sends the search request to the server and receives a list of products; the application displays the list in the web page. I think the Model holds a Query and a list of Product objects and "publishes" such events as "query updated", "list of products updated", etc. The Model is not aware of the DOM or the server, of course. The View holds the entire DOM tree and "subscribes" to the Model events to update the DOM. Besides, it "publishes" such events as "user chose a product", "user pressed the search button", etc. The Controller does not hold any data. It "subscribes" to the View events, calls the server and updates the Model. Does it make sense?

    Read the article

  • What is the disadvantage of using an abstract class for database connectivity in Zend Framework 2 instead of the service locator?

    - by arslaan ejaz
    Suppose I use the database by creating an adapter with drivers, initializing it in an abstract class, and extending that abstract class in the required model, then using simple query statements, like this: namespace MyModel\Model\DB; abstract class MysqliDB { protected $adapter; public function __construct(){ $this->adapter = new \Zend\Db\Adapter\Adapter(array( 'driver' => 'Mysqli', 'database' => 'my-database', 'username' => 'root', 'password' => '' )); } } And I use the abstract database class like this in my models: class States extends DB\MysqliDB{ public function __construct(){ parent::__construct(); } protected $states = array(); public function select_all_states(){ $data = $this->adapter->query('select * from states'); foreach ($data->execute() as $row){ $this->states[] = $row; } return $this->states; } } I am new to Zend Framework; before this I had experience working in Yii and CodeIgniter. I like the object-oriented style in Zend, so I want to use it like this, and I don't want to use it through the service locator, something like this: public function getServiceConfig(){ return array( 'factories' => array( 'addserver-mysqli' => new Model\MyAdapterFactory('addserver-mysqli'), 'loginDB' => function ($sm){ $adapter = $sm->get('addserver-mysqli'); return new LoginDB($adapter); } ) ); } in the Module class. Am I OK with this approach?

    Read the article

  • Architecture: am I doing things right?

    - by Jeremy D
    I'm trying to use a '~classic' layered architecture with .NET and Entity Framework. We are starting from a legacy database which is a little bit crappy: inconsistent naming, unneeded views (views referencing other views, select * views, etc.), aggregated columns, potatoes and carrots in the same table, and so on. So I ended up fully isolating my database structure from my domain model; to do so, EF entities are hidden from the presentation layer. The goal is to permit easier database refactoring while lowering its impact on applications. I'm now facing a lot of challenges and I'm starting to ask myself if I'm doing things right. My domain model is highly volatile: it keeps evolving with the apps as new field needs arise. Its complexity keeps rising, and the classes it contains are starting to get a lot of properties. Creating an include strategy and reprojecting to EF is very tricky (my domain objects don't have any kind of lazy/eager-loading relationship properties): DomainInclude<Domain.Model.Bar>.Include("Customers").Include("Customers.Friends") // To... IFooContext.Bars.Include(...).Include(...).Where(...) Some frameworks also break the isolation (DevExpress grids, which need either XPO or IQueryable for filtering and paging large data sets). I'm starting to ask myself whether the isolation of EF auto-generated entities is an unneeded cost - should I allow frameworks to hit IQueryable, or is that a slippery slope? (It's really hard to isolate the DevExpress framework - any successful experience?) And is the high volatility of my domain model normal? Did you have similar difficulties? Any advice based on experience?
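
    For the grid problem specifically, one compromise (a sketch with hypothetical names) is to expose an IQueryable of a flat read-model projected from the EF entities: the grid gets the IQueryable it needs for server-side filtering and paging, while the EF entity types never leave the data-access assembly:

        using System.Collections.Generic;
        using System.Data.Entity;
        using System.Linq;

        // EF entities (normally auto-generated and kept internal to the data-access layer).
        public class Bar { public int Id { get; set; } public string Name { get; set; } public ICollection<Customer> Customers { get; set; } }
        public class Customer { public int Id { get; set; } }

        public class FooContext : DbContext
        {
            public DbSet<Bar> Bars { get; set; }
        }

        // Flat read model for display; not an EF entity and not part of the rich domain model.
        public class BarRow
        {
            public int Id { get; set; }
            public string Name { get; set; }
            public int CustomerCount { get; set; }
        }

        public class BarQueryService
        {
            private readonly FooContext _db;
            public BarQueryService(FooContext db) { _db = db; }

            // The grid can compose Where/OrderBy/Skip/Take on this; it all runs in SQL,
            // but the generated entities stay hidden behind the projection.
            public IQueryable<BarRow> QueryBars()
            {
                return _db.Bars.Select(b => new BarRow
                {
                    Id = b.Id,
                    Name = b.Name,
                    CustomerCount = b.Customers.Count()
                });
            }
        }

    Whether that counts as "allowing frameworks to hit IQueryable" or as an acceptable read-side shortcut is the judgment call the post is wrestling with.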

    Read the article

  • Creating an update method in a different class

    - by Sweta Dwivedi
    I have created a class (called "3D model") which animates my 3D model by changing the model position according to the values read from a .txt file into a list. Since I'm using a loop to read the point values, when it reaches the end of the file XNA throws an out-of-bounds exception (which is obvious), but if I add the same code in my Game.cs Update(gameTime) method then I don't have this problem. Any idea how to make my 3D model update work the same as the Update in Game.cs? Here is the code to give an idea: public void patterns(GameTime gameTime) { motion_z = new List<Point3D>(); if (pattern == 1) { f = "E:/Motion_Track-output/Output1.txt"; } if (pattern == 2) { f = "E:/Motion_Track-output/cruse.txt"; } // TODO: Add your update logic here using (StreamReader r = new StreamReader(f)) { string line; //Viewport view = graphics.GraphicsDevice.Viewport; int maxWidth = view.Width; int maxHeight = view.Height; while ((line = r.ReadLine()) != null) { string[] temp = line.Split(','); int x = (int)Math.Floor(((float.Parse(temp[0]) * 0.5f) + 0.5f) * maxWidth); int y = (int)Math.Floor(((float.Parse(temp[1]) * -0.5f) + 0.5f) * maxHeight); int z = (int)Math.Floor(((float.Parse(temp[2]) / 4 * 20000))); motion_z.Add(new Point3D(x, y, z)); } modelPosition.X = (float)(motion_z[i].X); modelPosition.Y = (float)(motion_z[i].Y); modelPosition.Z = (float)(motion_z[i].Z); i++; } //Console.WriteLine("modelposX:" + modelPosition.X + "," + "motionzX:" + motion_z[i].X); }
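
    A sketch of one possible restructuring (reusing the field names from the question where they appear; everything else is hypothetical): parse the file once when the pattern is chosen, then let the per-frame method only advance an index and wrap it so it can never run past the end of the list:

        // Assumes the surrounding class already declares modelPosition (Vector3),
        // motion_z (List<Point3D>), i (int) and the same using directives as the original code.
        public void LoadPattern(int pattern, Viewport view)
        {
            string f = (pattern == 1)
                ? "E:/Motion_Track-output/Output1.txt"
                : "E:/Motion_Track-output/cruse.txt";

            motion_z = new List<Point3D>();
            i = 0;
            foreach (string line in File.ReadAllLines(f))
            {
                string[] temp = line.Split(',');
                int x = (int)Math.Floor(((float.Parse(temp[0]) * 0.5f) + 0.5f) * view.Width);
                int y = (int)Math.Floor(((float.Parse(temp[1]) * -0.5f) + 0.5f) * view.Height);
                int z = (int)Math.Floor(float.Parse(temp[2]) / 4 * 20000);
                motion_z.Add(new Point3D(x, y, z));
            }
        }

        // Called from Game.Update(gameTime); it only steps through the already-loaded points.
        public void patterns(GameTime gameTime)
        {
            if (motion_z == null || motion_z.Count == 0)
                return;

            modelPosition.X = (float)motion_z[i].X;
            modelPosition.Y = (float)motion_z[i].Y;
            modelPosition.Z = (float)motion_z[i].Z;

            i = (i + 1) % motion_z.Count;   // wrap instead of running past the end
        }

    LoadPattern would be called once (for example, when the pattern selection changes), which also avoids re-reading and re-parsing the file every frame.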

    Read the article
