Search Results

Search found 16051 results on 643 pages for 'schema design'.


  • Replacing ORM schema without dropping the entire data

    - by Udi
     Hey, I'm using OpenJPA as a JPA provider. Is there a way to update the database tables when an entity changes, without dropping all of the existing data? At the moment, whenever an entity changes I drop and recreate every table in the store, and obviously lose the data in them. Is there a tool or product that can preserve the data somehow? Thanks, Udi
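
     A minimal sketch of one possible approach, assuming the documented OpenJPA property openjpa.jdbc.SynchronizeMappings is available in the version in use: buildSchema asks OpenJPA to add missing tables and columns so the schema matches the current mappings, instead of dropping and recreating everything. The persistence unit name below is a placeholder.

     import java.util.HashMap;
     import java.util.Map;
     import javax.persistence.EntityManagerFactory;
     import javax.persistence.Persistence;

     public class SchemaSync {
         public static void main(String[] args) {
             // Alter the schema to match the current mappings instead of
             // dropping and recreating the tables (existing data is kept).
             Map<String, String> props = new HashMap<String, String>();
             props.put("openjpa.jdbc.SynchronizeMappings", "buildSchema(ForeignKeys=true)");

             // "shop-unit" is a hypothetical persistence unit name.
             EntityManagerFactory emf = Persistence.createEntityManagerFactory("shop-unit", props);
             emf.close();
         }
     }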

    Read the article

  • UML or design tools for websites

    - by Malfist
     What is a good tool for designing the implementation of websites? I typically use UML to design applications, but I feel it does not apply well to websites, specifically because of the heavy emphasis websites place on the UI. What would be a good tool to use to plan a web page?

    Read the article

  • How is architectural design done in an agile environment?

    - by B?????
     I have read Principles for the Agile Architect, which defines the following principles:

     Principle #1: The teams that code the system design the system.
     Principle #2: Build the simplest architecture that can possibly work.
     Principle #3: When in doubt, code it out.
     Principle #4: They build it, they test it.
     Principle #5: The bigger the system, the longer the runway.
     Principle #6: System architecture is a role collaboration.
     Principle #7: There is no monopoly on innovation.

     The paper says that most of the architectural design is done during the coding phase, and only the system design before that. That is fine. So how is the system design done? Using UML? A document that defines interfaces and major blocks? Maybe something else?

    Read the article

  • User Management: Managing users in user-defined "groups", database schema and logistics

    - by Kevin Brown
     I'm a noob, development-wise and logistically-wise. I'm developing a site that lets people take a test. My client wants a user with the role/privilege "admin" (a step below a super-admin) to be able to create users and to see/edit only the users that they create. The users created in that "category" or group need some information that their superior provides. For example, I log in as a "manager": I can invite people to take the test and manage those people, and before adding those people I will have filled out a short survey about myself. Right now, the users that are invited are asked some of the same questions as the manager. I'd like to cut down on the redundancy by reusing the information the manager put into the database and applying it to the invited users. How do I set up my database to work with these requirements? I'm a little confused about how to do this! Let me know if I can add more details... (This is a MySQL and PHP app.)
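
     A minimal sketch of one way to model this in MySQL (all table and column names here are hypothetical): invited users point at the manager who created them, and the survey answers are stored once against the manager's row, so invited users inherit them instead of re-entering them.

     CREATE TABLE users (
         id         INT UNSIGNED AUTO_INCREMENT PRIMARY KEY,
         role       ENUM('superadmin', 'admin', 'user') NOT NULL,
         manager_id INT UNSIGNED NULL,      -- set for invited users, NULL for managers
         email      VARCHAR(255) NOT NULL,
         FOREIGN KEY (manager_id) REFERENCES users(id)
     ) ENGINE=InnoDB;

     CREATE TABLE survey_answers (
         user_id     INT UNSIGNED NOT NULL, -- the manager who answered
         question_id INT UNSIGNED NOT NULL,
         answer      TEXT,
         PRIMARY KEY (user_id, question_id),
         FOREIGN KEY (user_id) REFERENCES users(id)
     ) ENGINE=InnoDB;

     An invited user's shared answers are then looked up through users.manager_id, and an admin's "my users" list is simply WHERE manager_id = their own id.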

    Read the article

  • ASMX schema varies when using WCF Service

    - by Lijo
     Hi, I have a client (created using ASMX "Add Web Reference") talking to a WCF service. The signatures of the methods differ between the client and the service: I get some unwanted parameters in the client-side methods. Note: I have used IsRequired = true for the DataMember.

     Service: [OperationContract] int GetInt();
     Client: proxy.GetInt(out requiredResult, out resultBool);

     Could you please help me make the schema consistent for both WCF and non-WCF clients? Are there any best practices for that?

     Service:

     using System.ServiceModel;
     using System.Runtime.Serialization;

     namespace SimpleLibraryService
     {
         [ServiceContract(Namespace = "http://Lijo.Samples")]
         public interface IElementaryService
         {
             [OperationContract]
             int GetInt();

             [OperationContract]
             int SecondTestInt();
         }

         public class NameDecorator : IElementaryService
         {
             [DataMember(IsRequired = true)]
             int resultIntVal = 1;
             int firstVal = 1;

             public int GetInt()
             {
                 return firstVal;
             }

             public int SecondTestInt()
             {
                 return resultIntVal;
             }
         }
     }

     Binding = "basicHttpBinding"

     Client:

     using System;
     using NonWCFClient.WebServiceTEST;

     namespace NonWCFClient
     {
         class Program
         {
             static void Main(string[] args)
             {
                 NonWCFClient.WebServiceTEST.NameDecorator proxy = new NameDecorator();
                 int requiredResult = 0;
                 bool resultBool = false;
                 proxy.GetInt(out requiredResult, out resultBool);
                 Console.WriteLine("GetInt___" + requiredResult.ToString() + "__" + resultBool.ToString());

                 int secondResult = 0;
                 bool secondBool = false;
                 proxy.SecondTestInt(out secondResult, out secondBool);
                 Console.WriteLine("SecondTestInt___" + secondResult.ToString() + "__" + secondBool.ToString());

                 Console.ReadLine();
             }
         }
     }

     Please help. Thanks, Lijo
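
     A hedged sketch of one direction to try (an assumption on my part, not something confirmed in the post): the extra out bool parameter is what an old-style ASMX proxy generates for a value-type element that the exported schema marks as optional. Decorating the contract with [XmlSerializerFormat] makes WCF export XmlSerializer-style metadata, where the int result element is required, so the generated proxy should keep the plain int GetInt() shape.

     using System.ServiceModel;

     namespace SimpleLibraryService
     {
         // Same contract as above, but exported with XmlSerializer-friendly metadata.
         [ServiceContract(Namespace = "http://Lijo.Samples")]
         [XmlSerializerFormat]
         public interface IElementaryService
         {
             [OperationContract]
             int GetInt();

             [OperationContract]
             int SecondTestInt();
         }
     }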

    Read the article

  • Web Shop Schema - Document Db

    - by Maxem
     I'd like to evaluate a document db, probably MongoDB, in an ASP.NET MVC web shop. A little reasoning at the beginning: there are about 2 million products, and the product model would be pretty bad for an RDBMS because there'd be many different kinds of products with unique attributes. For example, there'd be books with ISBN, authors, title, pages, etc., as well as DVDs with play time, directors, artists, etc., and quite a few more types. In the end I'd have about 9 different product types with a combined column count (counting common columns like title only once) of about 70 to 100, whereas each individual product has 15 columns at most. The three commonly used ways in an RDBMS would be:

     - EAV model, which would have pretty bad performance characteristics and would make it either impractical, or perform even worse, if I'd like to display the author of a book in a list of mixed products (think start page, recommended products, etc.).
     - Ignore the column count and put it all in the product table: although I deal with somewhat bigger databases (row-wise), I don't have any experience with tables of more than 20 columns as far as performance is concerned, but I guess 100 columns would have some implications.
     - Create a table for each product type: I personally don't like this approach, as it complicates everything else.

     C# driver / classes: I'd like to use the NoRM driver, and so far I think I'll try to create a product DTO that contains all properties (grouped within detail classes like book details, except for those properties that should be displayed on list views etc.). In the app I'll use BookBehavior / DvdBehaviour, which are wrappers around a product DTO that only expose the relevant properties. My questions now:

     - Are my performance concerns with the many-columns approach valid?
     - Did I overlook something, and is there a much better way to do it in an RDBMS?
     - Is MongoDB on Windows stable enough?
     - Does my approach with the different behaviour wrappers make sense?
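
     A minimal sketch of the document/DTO shape described above (class and property names are illustrative, not taken from the post): each product is stored as a single document, common list-view fields live at the top level, and only the detail block that applies to the product's type is populated.

     public class Product
     {
         public string Id { get; set; }
         public string Title { get; set; }          // common field, used in list views
         public decimal Price { get; set; }
         public string ProductType { get; set; }    // "Book", "Dvd", ...

         // Exactly one of these is non-null for a given product.
         public BookDetails Book { get; set; }
         public DvdDetails Dvd { get; set; }
     }

     public class BookDetails
     {
         public string Isbn { get; set; }
         public string[] Authors { get; set; }
         public int Pages { get; set; }
     }

     public class DvdDetails
     {
         public int PlayTimeMinutes { get; set; }
         public string[] Directors { get; set; }
         public string[] Artists { get; set; }
     }

     A behaviour wrapper can then take a Product and expose only the common fields plus the matching detail block, keeping list views free of type-specific properties.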

    Read the article

  • sql server: import operation won't copy full schema

    - by P a u l
     I recall that the import tool in SQL Server 2000 would copy indexes, relationships, etc. In SQL Server 2005/2008, the import tool in SSMS only creates the tables and copies the data; the keys, indexes, and relationships are missing. I can find no option in the import wizard to enable this. What am I missing here? Is this no longer possible, and if so, is there any good reason?

    Read the article

  • Duplicate partitioning key performance impact

    - by Anshul
     I've read in some posts that having a duplicate partitioning key can have a performance impact. I have two tables like:

     CREATE TABLE "Test1" (
         key text,
         column1 text,
         value text,
         PRIMARY KEY (key, column1)
     )

     CREATE TABLE "Test2" (
         key text,
         name text,
         age text,
         ...
         PRIMARY KEY (key, name, age)
     )

     In Test1, column1 contains a column name and value contains its corresponding value. The main advantage of Test1 is that I can add any number of column/value pairs to it without altering the table, just by providing the same partitioning key each time. Now my question is: how will each of these table schemas impact read/write performance if I have millions of rows and up to 50 columns in each row? How will it impact compaction/repair time if I'm writing duplicate entries frequently?

    Read the article

  • Fact table with multiple facts

    - by Jeff Meatball Yang
     I have a dimension (SiteItem) that has two important facts: perUserClicks and perBrowserClicks. However, within this dimension I have groups of dimension members based on an attribute column (let's call the groups AboveFoldItems, LeftNavItems, OnTheFlyItems, etc.), and each group has additional facts that are specific to that group:

     - AboveFoldItems: eyeTime, loadTime
     - LeftNavItems: mouseOverTime
     - OnTheFlyItems: no extra facts yet, but there may be some in the future

     Is the following fact table schema OK?

     DateKey, SessionKey, SiteItemKey, perUserClicks, perBrowserClicks, eyeTime, loadTime, mouseOverTime

     It seems a little wasteful, since only some columns pertain to some dimension keys (the irrelevant facts are left NULL). But this seems like it would be a common problem, so there should be a common solution for it, right?
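
     For clarity, the proposed fact table written out as DDL (the table name and column types are assumed here); measures that don't apply to a given SiteItem group are simply left NULL.

     CREATE TABLE FactSiteItemActivity (
         DateKey          INT NOT NULL,
         SessionKey       INT NOT NULL,
         SiteItemKey      INT NOT NULL,
         perUserClicks    INT NULL,
         perBrowserClicks INT NULL,
         eyeTime          INT NULL,   -- AboveFoldItems only
         loadTime         INT NULL,   -- AboveFoldItems only
         mouseOverTime    INT NULL    -- LeftNavItems only
     );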

    Read the article

  • How do we provide valid time estimates during Sprint Planning without doing "too much" design?

    - by Michael Edenfield
     My team is getting up to speed with Scrum, but most of us are more familiar with non-agile or "pseudo-"agile methodologies. The part that is the biggest hurdle for us is running an efficient Sprint Planning meeting where we break our backlog items into tasks and estimate hours. (I'm using the terminology from the VS2010 Scrum Template; apologies if I use the wrong word somewhere.) When we try to figure out how long a task is going to take, we often fall into the trap of designing the feature at the code level -- table layout, interfaces, etc. -- in order to figure out how long that's going to take. I'm pretty sure this is not the appropriate place to be doing that kind of design. We should be scheduling tasks for these design meetings during the sprint. However, we are having trouble figuring out how else to come up with meaningful estimates for the tasks. Are there any practical habits/techniques/etc. for making a judgement call about how long a feature is going to take, without knowing how you plan to implement it? If our time estimates are going to change significantly once the design has been completed, how can we properly budget our Sprint backlog ahead of time?

     EDIT: Just to clarify, since some of the comments/answers are very valid but I think are addressing the wrong question. We know that what we're doing is not right and that we should be building time into the sprint for this design; conceptually, all of the developers understand that. We are also bringing in a team member with Scrum experience to keep us on track if we start going off into the weeds. The problem is that, without going through this design process, we are finding it difficult to provide concrete time estimates for anything. We are constantly saying things like "well, if we design it this way it might take 8 hours, but if we end up having to do it this other way instead that will take about 32, but it might not be as bad once we start trying to write it...". I also assume that this process will get better once we have some historical velocity to work from, but many of the technologies and architectural patterns we are using are new to us. But if potentially-wildly-wrong estimates are just a natural part of adopting this process, then we will just need to recondition ourselves to accept that :)

    Read the article

  • Correct way to give users access to additional schemas in Oracle

    - by Jacob
     I have two users, Bob and Alice, in Oracle, both created by running the following commands as sysdba from sqlplus:

     create user $blah identified by $password;
     grant resource, connect, create view to $blah;

     I want Bob to have complete access to Alice's schema (that is, all of her tables), but I'm not sure which grants to run, and whether to run them as sysdba or as Alice. Happy to hear about any good pointers to reference material as well -- I don't seem to be able to get a good answer to this from either the Internet or "Oracle Database 10g: The Complete Reference", which is sitting on my desk.
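
     A minimal sketch of one common approach, run while connected as Alice: Oracle 10g has no single whole-schema grant, so object privileges are granted table by table (adjust the privilege list as needed). Bob then refers to the tables as alice.<table_name>, or through synonyms.

     BEGIN
       -- Grant Bob DML access on every table Alice owns.
       FOR t IN (SELECT table_name FROM user_tables) LOOP
         EXECUTE IMMEDIATE
           'GRANT SELECT, INSERT, UPDATE, DELETE ON "' || t.table_name || '" TO bob';
       END LOOP;
     END;
     /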

    Read the article

  • Validating XML tag by tag

    - by Greatful
     Hi all, I'm having some issues validating XML against a schema using .NET and C#. I am using XmlReaderSettings with a ValidationEventHandler. However, it seems to stop reporting errors after it has encountered the first error at a particular level of the XML file, instead of checking the next tag at the same level; in other words, it does not check each and every tag in the file, and skips the rest of a level once it has found an error there. I was hoping to get some advice from somebody who has successfully accomplished this type of validation. Thanks very much.
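
     For reference, a minimal validation loop of the kind described (the file names and the lambda handler are placeholders, not taken from the post). With ValidationType.Schema and a handler that only records the error, the reader carries on and reports every violation it can recover from; note, though, that the validator does skip the content of an element it could not match against the schema, which may be what is being observed.

     using System;
     using System.Xml;
     using System.Xml.Schema;

     class Validator
     {
         static void Main()
         {
             var settings = new XmlReaderSettings();
             settings.ValidationType = ValidationType.Schema;
             settings.Schemas.Add(null, "order.xsd");   // placeholder schema file
             settings.ValidationEventHandler += (sender, e) =>
                 Console.WriteLine("{0} at line {1}: {2}",
                     e.Severity, e.Exception.LineNumber, e.Message);

             using (var reader = XmlReader.Create("order.xml", settings))  // placeholder document
             {
                 while (reader.Read()) { }   // validation happens as the reader advances
             }
         }
     }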

    Read the article

  • What xsd will let an element have itself as a sub element infinitely?

    - by David Basarab
     How can I create an xsd that gives me this type of xml structure, where the nesting can go on infinitely?

     <?xml version="1.0" encoding="utf-8" ?>
     <SampleXml>
       <Items>
         <Item name="SomeName" type="string">
           This would be the value
         </Item>
         <Item name="SecondName" type="string">
           This is the next string
         </Item>
         <Item name="AnotherName" type="list">
           <Items>
             <Item name="SubName" type="string">
               A string in a sub list
             </Item>
             <Item name="SubSubName" type="list">
               <Items>
                 <Item name="HowDoI" type="string">
                   How do I keep this going infinitely?
                 </Item>
               </Items>
             </Item>
           </Items>
         </Item>
       </Items>
     </SampleXml>

     The only solution I have found has been to repeat the structure in the xsd as many times as I am willing to copy it, like below:

     <?xml version="1.0" encoding="utf-8"?>
     <xs:schema attributeFormDefault="unqualified" elementFormDefault="qualified"
                xmlns:xs="http://www.w3.org/2001/XMLSchema">
       <xs:element name="SampleXml">
         <xs:complexType>
           <xs:sequence>
             <xs:element name="Items">
               <xs:complexType>
                 <xs:sequence>
                   <xs:element maxOccurs="unbounded" name="Item">
                     <xs:complexType mixed="true">
                       <xs:sequence minOccurs="0">
                         <xs:element name="Items">
                           <xs:complexType>
                             <xs:sequence>
                               <xs:element maxOccurs="unbounded" name="Item">
                                 <xs:complexType mixed="true">
                                   <xs:sequence minOccurs="0">
                                     <xs:element name="Items">
                                       <xs:complexType>
                                         <xs:sequence>
                                           <xs:element name="Item">
                                             <xs:complexType>
                                               <xs:simpleContent>
                                                 <xs:extension base="xs:string">
                                                   <xs:attribute name="name" type="xs:string" use="required" />
                                                   <xs:attribute name="type" type="xs:string" use="required" />
                                                 </xs:extension>
                                               </xs:simpleContent>
                                             </xs:complexType>
                                           </xs:element>
                                         </xs:sequence>
                                       </xs:complexType>
                                     </xs:element>
                                   </xs:sequence>
                                   <xs:attribute name="name" type="xs:string" use="required" />
                                   <xs:attribute name="type" type="xs:string" use="required" />
                                 </xs:complexType>
                               </xs:element>
                             </xs:sequence>
                           </xs:complexType>
                         </xs:element>
                       </xs:sequence>
                       <xs:attribute name="name" type="xs:string" use="required" />
                       <xs:attribute name="type" type="xs:string" use="required" />
                     </xs:complexType>
                   </xs:element>
                 </xs:sequence>
               </xs:complexType>
             </xs:element>
           </xs:sequence>
         </xs:complexType>
       </xs:element>
     </xs:schema>
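
     A hedged sketch of the usual way to avoid the repetition (the type name ItemsType is my own invention, not from the post): move the Items content model into a named complexType that refers back to itself, so the nesting can recurse to any depth.

     <?xml version="1.0" encoding="utf-8"?>
     <xs:schema attributeFormDefault="unqualified" elementFormDefault="qualified"
                xmlns:xs="http://www.w3.org/2001/XMLSchema">

       <xs:element name="SampleXml">
         <xs:complexType>
           <xs:sequence>
             <xs:element name="Items" type="ItemsType" />
           </xs:sequence>
         </xs:complexType>
       </xs:element>

       <!-- A named type that references itself: each Item may contain another Items. -->
       <xs:complexType name="ItemsType">
         <xs:sequence>
           <xs:element name="Item" maxOccurs="unbounded">
             <xs:complexType mixed="true">
               <xs:sequence minOccurs="0">
                 <xs:element name="Items" type="ItemsType" />
               </xs:sequence>
               <xs:attribute name="name" type="xs:string" use="required" />
               <xs:attribute name="type" type="xs:string" use="required" />
             </xs:complexType>
           </xs:element>
         </xs:sequence>
       </xs:complexType>
     </xs:schema>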

    Read the article

  • Migrate Data and Schema from MySQL to MSSQL

    - by colithium
     Are there any free solutions for automatically migrating a database from MySQL to MS SQL Server that "just work"? I've been attempting this simple (at least I thought so) task all day now. I've tried SQL Server Management Studio's Import Data feature:

     - Create an empty database
     - Tasks - Import Data...
     - .NET Framework Data Provider for Odbc
     - Valid DSN (verified it connects)
     - Copy data from one or more tables or views
     - Check 1 VERY simple table
     - Click Preview
     - Get the error: "The preview data could not be retrieved. ADDITIONAL INFORMATION: ERROR [42000] [MySQL][ODBC 5.1 Driver][mysqld-5.1.45-community]You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near '"table_name"' at line 1 (myodbc5.dll)"

     A similar error occurs if I go through the rest of the wizard and perform the operation. The failing step is "Setting Source Connection"; the error refers to retrieving column information and then lists the above error. It can retrieve column information just fine when I modify the column mappings, so I really don't know what the issue is. I've also tried getting various MySQL tools to output DDL statements that MSSQL understands, but haven't succeeded. I've tried with MySQL v5.1.11 to SQL Server 2005 and with MySQL v5.1.45 to SQL Server 2008 (with ODBC drivers 3.51.27.00 and 5.01.06.00 respectively).
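
     One hedged guess at that particular syntax error (my assumption, not confirmed in the post): the wizard quotes identifiers with double quotes, which MySQL only accepts when the ANSI_QUOTES SQL mode is enabled, so turning it on for the source server (or in the DSN's initial statement) may at least get the preview past this point.

     -- Run on the MySQL source server before starting the import wizard.
     -- Note: this replaces the current global sql_mode list.
     SET GLOBAL sql_mode = 'ANSI_QUOTES';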

    Read the article

  • Scalable Database Tagging Schema

    - by Longpoke
     EDIT: To people building tagging systems: don't read this. It is not what you are looking for. I asked this when I wasn't aware that RDBMSs all have their own optimization methods; just use a simple many-to-many scheme.

     I have a posting system with millions of posts. Each post can have an unlimited number of tags associated with it. Users can create tags, which have notes, a creation date, an owner, etc. A tag is almost like a post itself, because people can post notes about the tag. Each tag association has an owner and a date, so we can see who added the tag and when. My question is: how can I implement this? It has to be fast to search posts by tag, or tags by post. Also, users can add tags to posts by typing the name into a field, kind of like the Google search bar: it has to fill in the rest of the tag name for you. I have 3 solutions at the moment, but I'm not sure which is best, or if there is a better way. Note that I'm not showing the layout of notes, since it will be trivial once I get a proper solution for tags.

     Method 1: Linked list. tagId in post points to a linked list in tag_assoc; the application must traverse the list until flink = 0.
       post: id, content, ownerId, date, tagId, notesId
       tag_assoc: id, tagId, ownerId, flink
       tag: id, name, notesId

     Method 2: Denormalization. tags is simply a VARCHAR or TEXT field containing a tab-delimited array of tagId:ownerId pairs. It cannot be a fixed size.
       post: id, content, ownerId, date, tags, notesId
       tag: id, name, notesId

     Method 3: Toxi (from http://www.pui.ch/phred/archives/2005/04/tags-database-schemas.html, also the same approach as http://stackoverflow.com/questions/20856/how-do-you-recommend-implementing-tags-or-tagging).
       post: id, content, ownerId, date, notesId
       tag_assoc: ownerId, tagId, postId
       tag: id, name, notesId

     Method 3 raises the question: how fast will it be to iterate through every single row in tag_assoc? Methods 1 and 2 should be fast for returning tags by post, but for posts by tag another lookup table must be made. The last thing I have to worry about is optimizing searching tags by name, which I have not worked out yet. I made an ASCII diagram here: http://pastebin.com/f1c4e0e53
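
     For reference, a minimal sketch of the Method 3 layout in MySQL (column types are assumed): with a composite primary key and a reversed secondary index, both "tags by post" and "posts by tag" are index lookups rather than scans of tag_assoc, and a unique index on tag.name also serves prefix autocomplete (WHERE name LIKE 'foo%').

     CREATE TABLE tag (
         id      INT UNSIGNED AUTO_INCREMENT PRIMARY KEY,
         name    VARCHAR(64)  NOT NULL,
         notesId INT UNSIGNED NULL,
         UNIQUE KEY idx_name (name)            -- exact lookups and LIKE 'foo%' autocomplete
     ) ENGINE=InnoDB;

     CREATE TABLE tag_assoc (
         postId  INT UNSIGNED NOT NULL,
         tagId   INT UNSIGNED NOT NULL,
         ownerId INT UNSIGNED NOT NULL,
         added   DATETIME     NOT NULL,
         PRIMARY KEY (postId, tagId),          -- tags for a given post
         KEY idx_tag_post (tagId, postId)      -- posts for a given tag
     ) ENGINE=InnoDB;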

    Read the article

  • Google I/O 2012 - So You've Read the Design Guide; Now What?

    Speakers: Daniel Lehmann, Tor Norbye, Richard Ngo. The Android Design Guide describes how to design beautiful Android apps, but not how to build them. In this talk we'll give practical tips for how to apply fit & finish as you are implementing your design, we'll show you how to avoid some common pitfalls, we'll describe some useful patterns, and show you how tools can help. For all I/O 2012 sessions, go to developers.google.com. From: GoogleDevelopers. Time: 56:31.

    Read the article

  • migrate database from sybase to mysql

    - by jindalsyogesh
     I have been trying to migrate a database from Sybase to MySQL. This is my approach:

     - Generate POJO classes from my Sybase database using Hibernate in Eclipse
     - Use these POJO classes to generate the schema in the MySQL database
     - Then somehow migrate the data from Sybase to MySQL

     I guess this approach should work? Please let me know if there is a better or easier approach. The thing is, I am not even able to get the first step done. I added the Hibernate plugin in Eclipse from this link: http://download.jboss.org/jbosstools/updates/stable/ and added the Sybase jar file to my project classpath. Then I added a Hibernate console configuration file, then a Hibernate configuration file, then a Hibernate code generation configuration. When I try to run the code generation configuration, I get a java.lang.NullPointerException and I have no idea how to fix it. I have searched a lot of forums and tried to google it, but I'm not able to find any solution. Can anybody tell me what mistake I am making here, or point me to a Hibernate tutorial for Eclipse?

    Read the article

  • how to import other schema jars when using the scomp tool

    - by MikeJiang
     There is a huge number of XML schemas for the business. Some of them are common types like Money.xsd, Address.xsd, etc., while others are business-specific like Customer.xsd, ShippingOrder.xsd, etc. So I decided to compile these schemas into 2 jars: one is commonbeans.jar, the other is businessbeans.jar. I've separated them into different folders. Building the commonbeans.jar is simple, just run "scomp -out commonbeans.jar ....\common*.xsd"; running "scomp -out businessbeans.jar ....\business*.xsd" is a different story: there are errors saying the common types can't be found, and running "scomp -out businessbeans.jar ....\business*.xsd ....\business*.xsd" will blindly duplicate all the common types into businessbeans.jar. So is there any way to link commonbeans.jar when compiling the business schemas, maybe something like "scomp -out businessbeans.jar ....\business*.xsd commonbeans.jar"? I hope my poor English has expressed my issue!

    Read the article
