Search Results

Search found 3888 results on 156 pages for 'star schema'.

  • How to create AROs with DbAcl from the console

    - by Praveen kalal
    Hi, I am using CakePHP for my project, but I am having trouble creating ACLs from the command prompt. When I run the command "cake schema run create DbAcl" it generates three tables in the database. But after putting the following code in users_controller.php and running "cake acl view aro", no AROs are created.

        function index() {
            $aro =& $this->Acl->Aro;
            //pr($aro); exit;
            // Here's all of our group info in an array we can iterate through
            $groups = array(
                0 => array('alias' => 'admins'),
                1 => array('alias' => 'guests'),
                2 => array('alias' => 'managers')
            );
            // Iterate and create ARO groups
            foreach ($groups as $data) {
                // Remember to call create() when saving in loops...
                $aro->create();
                // Save data
                $aro->save($data);
            }
        }
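
    A quick way to see why nothing is saved is to check the return value of save(); a minimal sketch (debug() and the model's validationErrors property are standard CakePHP, the specific failure cause is only a guess):

        foreach ($groups as $data) {
            $aro->create();
            if (!$aro->save($data)) {
                // Prints validation errors, e.g. a missing parent_id or model field
                debug($aro->validationErrors);
            }
        }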

    Read the article

  • How to solve the "Digg" problem in MongoDB

    - by user193116
    A while back, a Digg developer posted this blog entry, "http://about.digg.com/blog/looking-future-cassandra", where he described one of the issues that was not optimally solved in MySQL. This was cited as one of the reasons for their move to Cassandra. I have been playing with MongoDB and I would like to understand how to implement the MongoDB collections for this problem. From the article, the schema for this information in MySQL:

        CREATE TABLE Diggs (
          id      INT(11),
          itemid  INT(11),
          userid  INT(11),
          digdate DATETIME,
          PRIMARY KEY (id),
          KEY user (userid),
          KEY item (itemid)
        ) ENGINE=InnoDB DEFAULT CHARSET=utf8;

        CREATE TABLE Friends (
          id           INT(10) AUTO_INCREMENT,
          userid       INT(10),
          username     VARCHAR(15),
          friendid     INT(10),
          friendname   VARCHAR(15),
          mutual       TINYINT(1),
          date_created DATETIME,
          PRIMARY KEY (id),
          UNIQUE KEY Friend_unique (userid,friendid),
          KEY Friend_friend (friendid)
        ) ENGINE=InnoDB DEFAULT CHARSET=utf8;

    This problem is ubiquitous in social networking implementations: people befriend a lot of people, and those friends in turn digg a lot of things. Quickly showing a user what his/her friends are up to is very critical. I understand that several blogs have since provided a pure RDBMS solution with indexes for this issue; however, I am curious as to how this could be solved in MongoDB.
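
    One possible document design, offered as a sketch rather than a canonical answer (collection and field names are illustrative): keep each digg as its own document and answer "what are my friends up to" with a single indexed $in query.

        db.diggs.insert({ userid: 1, itemid: 42, digdate: new Date() });
        db.diggs.ensureIndex({ userid: 1, digdate: -1 });
        // friend ids stored as an array on a per-user document
        db.friends.insert({ userid: 1, friendids: [2, 3, 5] });
        var f = db.friends.findOne({ userid: 1 });
        db.diggs.find({ userid: { $in: f.friendids } }).sort({ digdate: -1 }).limit(20);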

    Read the article

  • Ignoring element order in XML validation against an XSD

    - by segolas
    Hi, I am processing an email and saving some headers inside an XML document. I also need to validate the document against an XML schema. As the subject suggests, I need to validate while ignoring the order of the elements but, as far as I have read, this seems to be impossible. Am I correct? If I put the headers in a <xsd:sequence>, the order obviously matters. If I use <xsd:all>, the order is ignored, but for some strange reason this implies that the elements must occur at least once. My XML is something like this:

        <headers>
          <subject>bla bla bla</subject>
          <recipient>rcp01@domain.com</recipient>
          <recipient>rcp02@domain.com</recipient>
          <recipient>rcp03@domain.com</recipient>
        </headers>

    but I think the final document is valid even if the subject and recipient elements are swapped. Is there really nothing to do?
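
    For what it's worth, in XSD 1.0 <xsd:all> cannot hold a repeating element at all, so a common workaround, sketched here with illustrative types, is a repeating <xsd:choice>: it ignores order but gives up counting occurrences.

        <xsd:element name="headers">
          <xsd:complexType>
            <xsd:choice minOccurs="0" maxOccurs="unbounded">
              <xsd:element name="subject" type="xsd:string"/>
              <xsd:element name="recipient" type="xsd:string"/>
            </xsd:choice>
          </xsd:complexType>
        </xsd:element>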

    Read the article

  • How can I build a generic dataset-handling Perl library?

    - by Pep.
    Hello, I want to build a generic Perl module for handling and analysing biomedical character-separated datasets, one which can, most certainly, be used on any kind of dataset containing a mixture of categorical (A,B,C,...), continuous (1.2, 3, 881...) and identifier (XXX1, XXX2...) variables. The plan is to have people initialize the module and then use some arguments to point to the data file(s), the place where the analysis reports should be placed, and the structure of the data. By structure of the data I mean which variable is in which place, and its name/type. And this is where I need some enlightenment: I am baffled as to how to do this in a clean way. Obviously, having people create a simple schema file, be it XML or some other format, would be the cleanest, but maybe not all people enjoy doing something like that. The solutions I can think of are:

    1. Create a configuration file in XML or similar, with a prespecified format.
    2. Pass the information during initialization of the module.
    3. Use the first row of the data as headers and try to guess types (ouch).

    Surely there must be a "canonical" way of doing this that is also usable and efficient. Thanks, p.
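
    Option 2 might look like the following; the module name and interface are entirely hypothetical, sketched only to show the shape of the metadata argument:

        package BioDataset;
        use strict;
        use warnings;

        sub new {
            my ($class, %args) = @_;
            my $self = {
                file    => $args{file},
                sep     => $args{sep} || "\t",
                columns => $args{columns},   # arrayref of { name, type } hashes
            };
            return bless $self, $class;
        }
        1;

        # Caller supplies the structure up front:
        my $ds = BioDataset->new(
            file    => 'samples.tsv',
            columns => [
                { name => 'sample_id', type => 'identifier'  },
                { name => 'diagnosis', type => 'categorical' },
                { name => 'bmi',       type => 'continuous'  },
            ],
        );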

    Read the article

  • MSBuild: automate the collection of db migration scripts?

    - by P Dub
    Summary of environment:

    - ASP.NET web application (source stored in svn)
    - SQL Server database (database schema, tables/sprocs, stored in svn)
    - db version is synced with the web application assembly version (stored in table 'CurrentVersion')
    - CI Hudson server that checks out the web app from the repo and runs a custom msbuild file to publish/package the app

    My msbuild script updates the assembly version of the web app (Major.Minor.Revision.Build) on each build. The 'Revision' is set to the currently checked-out svn revision and the 'Build' to the Hudson build number (incremented on each automated build). This way I can match the app to a specific trunk revision and also get other build stats from the Hudson build number. I'd like to automate the collecting of migration scripts (updated sprocs etc.) to add to the zip package. I guess by comparing the svn revision of the db that has yet to be deployed to, to the revision being deployed, I can find which db files have changed in the trunk since the last deployment to that database/environment. This could easily be achieved by manually calling the svn diff -r REVNO:REVNO command to list the changed .sql files, which would then have to be added to the package by hand. It would be great if this could be automated. Firstly, I imagine I'll have to write a custom task to check the version of the db that has yet to be deployed to. After that I'm quite unsure. Does anyone have any suggestions on how this could be achieved through an msbuild task, either existing or custom? Finally, I'll have to autogenerate a script to add to the package that updates the database version table so as to be in sync with the application.
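
    A rough sketch of the middle step, assuming the two revision numbers are already in properties (target, property and item names here are all illustrative): shell out to svn diff --summarize and read the changed paths back in as items.

        <Target Name="CollectMigrationScripts">
          <!-- DeployedRevision and BuildRevision are assumed to be set by earlier targets -->
          <Exec Command="svn diff --summarize -r $(DeployedRevision):$(BuildRevision) $(TrunkUrl)/db > changed.txt" />
          <ReadLinesFromFile File="changed.txt">
            <Output TaskParameter="Lines" ItemName="ChangedDbFiles" />
          </ReadLinesFromFile>
          <!-- Each line looks like "M      db/sprocs/foo.sql"; a small custom task
               can strip the status column and filter to .sql before packaging -->
        </Target>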

    Read the article

  • Table not created by Hibernate

    - by User1
    I annotated a bunch of POJOs so JPA can use them to create tables in Hibernate. It appears that all of the tables are created except one very central table called "Revision". The Revision class has an @Entity(name="RevisionT") annotation so it will be renamed to RevisionT and not conflict with any reserved words in MySQL (the target database). I delete the entire database, recreate it, and basically open and close a JPA session. All the tables seem to get recreated without a problem. Why would a single table be missing from the created schema? What instrumentation can be used to see what Hibernate is producing, and which errors occur? Thanks.

    UPDATE: I tried to create the schema as a Derby DB and it was successful. However, one of the fields has a name of "index". I use @org.hibernate.annotations.IndexColumn to specify a name that is not a reserved word, but the column is always called "index" when it is created. Here's a sample of the suspect annotations:

        @ManyToOne
        @JoinColumn(name="MasterTopID")
        @IndexColumn(name="Cx3tHApe")
        protected MasterTop masterTop;

    Instead of creating MasterTop.Cx3tHApe as a field, it creates MasterTop.Index. Why is the name ignored?
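
    On the instrumentation question, Hibernate's standard settings will echo the DDL it generates; a sketch of the usual knobs (property names are standard Hibernate/log4j, values illustrative):

        hibernate.show_sql=true
        hibernate.format_sql=true
        hibernate.hbm2ddl.auto=create
        # log the schema export itself, including any per-table errors:
        log4j.logger.org.hibernate.tool.hbm2ddl=DEBUG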

    Read the article

  • Connect Rails model to non-rails database

    - by the_snitch
    I'm creating a new web application (Rails 3 beta), pieces of which will access data from a legacy MySQL database that a current PHP application is using. I do not wish to modify the legacy db schema; I just want to be able to read/write to it, with the Rails application also having its own database, using ActiveRecord for the newer stuff. I'm using MySQL for the Rails app, so I have the adapter installed. What is the best way to do this? For example, I want contacts to come from the old database. Should I create a contacts controller and manually call SQL to get the variables for the views? Or should I create a Contact model and define attributes that match the fields in the database, and am I able to use it like Contact.mail_address to have it call "SELECT mailaddr FROM contacts WHERE id=Contact.id"? Sorry, I've never done much in Rails outside of the standard stuff that is documented well, so I'm not sure what the best approach would be. Ideally, I want the contacts to be presented to my Rails application as natively as possible, so that I can expose them RESTfully for API access. Any suggestions and code examples would be much appreciated.
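
    A sketch of the model route (Rails 2/3-era API; the :legacy key is an entry you would add to database.yml, and the column mapping is illustrative):

        class Contact < ActiveRecord::Base
          establish_connection :legacy   # second database, no schema changes needed
          set_table_name 'contacts'

          # map the legacy column to a friendlier attribute:
          def mail_address
            read_attribute(:mailaddr)
          end
        end

        # Contact.find(1).mail_address then queries the legacy db natively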

    Read the article

  • Java-Hibernate: How can I translate these tables to hibernate annotations?

    - by penas
    I need to create a simple application using these tables: http://stackoverflow.com/questions/2612848/are-these-tables-respect-the-3nf-database-normalization
    I have created the application using plain old JDBC, but I would like to see what the application would look like using Hibernate. I don't know how to translate the SQL into Java, and although I have found LOTS of examples, I'm still pretty confused about using Hibernate and I don't know if I did a good job. For example, for the first three tables:

        AUTHOR table
          * Author_ID, PK
          * First_Name
          * Last_Name

        TITLES table
          * TITLE_ID, PK
          * NAME
          * Author_ID, FK

        DOMAIN table
          * DOMAIN_ID, PK
          * NAME
          * TITLE_ID, FK

    The code in Java. Table 1:

        @Entity
        @Table(name = "AUTHORS", schema = "LIBRARY")
        public class Author {
            @Id
            @GeneratedValue(strategy = GenerationType.AUTO)
            @Column(name = "Author_ID")
            private int authorId;

            @Column(name = "First_Name", nullable = false, length = 50)
            private String firstName;

            @Column(name = "Last_Name", nullable = false, length = 40)
            private String lastName;

            @OneToMany
            @JoinColumn(name = "Title_ID")
            private List<Title> titles;

    Table 2:

        @Entity
        @Table(name = "TITLES")
        public class Title {
            @Id
            @Column(name = "Title_ID")
            private int titleID;

            @Column(name = "Name", nullable = false, length = 50)
            private String name;

            @ManyToOne
            @JoinColumn(name = "Domain_ID")
            private Domain domains;

    Table 3:

        @Entity
        @Table(name = "DOMAINS")
        public class Domain {
            @Id
            @GeneratedValue(strategy = GenerationType.AUTO)
            @Column(name = "Domain_ID")
            private int Domain_ID;

            @Column(name = "Name", nullable = false, length = 50)
            private String name;

            @OneToOne(mappedBy = "domains")
            private Title title;
        }

    Any good? :)
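
    One observation, offered as a sketch rather than a definitive review: since TITLES carries the Author_ID foreign key, the collection side normally uses mappedBy instead of a @JoinColumn on AUTHORS:

        // In Author: the inverse side just names the owning field
        @OneToMany(mappedBy = "author")
        private List<Title> titles;

        // In Title: the owning side maps the actual FK column
        @ManyToOne
        @JoinColumn(name = "Author_ID")
        private Author author;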

    Read the article

  • SQL problem - select across multiple tables (user groups)

    - by morpheous
    I have a db schema which looks something like this:

        create table user (id int, name varchar(32));
        create table group (id int, name varchar(32));
        create table group_member (foobar_id int, user_id int, flag int);

    I want to write a query that allows me to do the following: given a valid user id (UID), fetch the ids of all users that are in the same group as the specified user id (UID) AND have group_member.flag=3. Rather than just have the SQL, I want to learn how to think like a db programmer. As a coder, SQL is my weakest link (since I am far more comfortable with imperative languages than declarative ones), but I want to change that. Anyway, here are the steps I have identified as necessary to break down the task. I would be grateful if some SQL guru could demonstrate the simple SQL statements, i.e. atomic SQL statements, one for each of the identified subtasks below, and then finally, how I can combine those statements to make the ONE statement that implements the required functionality. Here goes (assume the specified user_id [UID] = 1):

        -- Subtask #1. Fetch list of all groups of which I am a member
        Select group.id from user
        inner join group_member
        where user.id=group_member.user_id and user.id=1

        -- Subtask #2. Fetch a list of all members of the groups I am a member of
        -- (i.e. groups from subtask #1). Not sure about this ...
        select user.id from user, group_member gm1, group_member gm2, ... [Stuck]

        -- Subtask #3. Get list of users that satisfy the criterion group_member.flag=3
        Select user.id from user
        inner join group_member
        where user.id=group_member.user_id and user.id=1 and group_member.flag=3

    Once I have the SQL for subtask #2, I'd then like to see how the complete SQL statement is built from these subtasks. (You don't have to use the SQL in the subtasks; it is just a way of explaining the steps involved. Also, my SQL may be incorrect/inefficient. If so, please feel free to correct it and point out what was wrong with it.) Thanks
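
    A sketch of how the pieces can combine, assuming foobar_id in group_member is the group id: self-join group_member, with one alias anchoring the given user's groups and the other pulling fellow members that carry the flag.

        SELECT DISTINCT gm2.user_id
        FROM group_member gm1
        JOIN group_member gm2
          ON gm2.foobar_id = gm1.foobar_id  -- same group (subtask #2)
        WHERE gm1.user_id = 1               -- groups I belong to (subtask #1)
          AND gm2.user_id <> 1              -- other members only
          AND gm2.flag = 3;                 -- flag criterion (subtask #3)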

    Read the article

  • Need some advice before starting to code my next iPhone app...

    - by Tom
    Hi! I need some advice about how I should start coding something. So here is the context: I've just finished building a CMS that manages an SQLite database. My application will pick up this database and use its content as the application's content. So far it's pretty simple. The application will have a navigation that browses through various workflows and, once at the end of a workflow, shows content from the database; a consultation kind of thing. Example: Liquids - Juice - Orange Juice - Information about Orange Juice. For my SQLite transactions, so far I believe I'll be using fmdb. It looks like a great wrapper. Here's a simple schema from the database:

        Workflow:
          id:          { type: integer(3), primary: true, autoincrement: true }
          workflow_id: { type: integer(1) }
          name:        { type: string(255) }

    That table's rows will be my navigation. Do you believe I should use a navigation controller? If so, how could I generate the navigation tree from it? I have a good working knowledge of Objective-C and the Foundation framework, but I've never gone very far with it, which is why I'm asking before starting in the wrong direction :) Thanks a lot.
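
    If you do go with a navigation controller, one common pattern is to push the same table controller class recursively, one level per workflow_id query. A sketch (WorkflowViewController, initWithParentWorkflowId: and self.rows are hypothetical names):

        - (void)tableView:(UITableView *)tableView didSelectRowAtIndexPath:(NSIndexPath *)indexPath {
            NSDictionary *row = [self.rows objectAtIndex:indexPath.row];
            // each child controller queries rows whose workflow_id is this row's id
            WorkflowViewController *child = [[WorkflowViewController alloc]
                initWithParentWorkflowId:[[row objectForKey:@"id"] intValue]];
            [self.navigationController pushViewController:child animated:YES];
            [child release]; // pre-ARC
        }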

    Read the article

  • Is there a reason why SSIS significantly slows down after a few minutes?

    - by Mark
    I'm running a fairly substantial SSIS package against SQL 2008, and I'm getting the same results both in my dev environment (Win7-x64 + SQL-x64-Developer) and the production environment (Server 2008 x64 + SQL Std x64). The symptom is that initial data loading screams along at between 50K - 500K records per second, but after a few minutes the speed drops off dramatically and eventually crawls embarrassingly slowly. The database is in the Simple recovery model, the target tables are empty, and all of the prerequisites for minimally logged bulk inserts are being met. The data flow is a simple load from a RAW input file to a schema-matched table (i.e. no complex transforms of data, no sorting, no lookups, no SCDs, etc.). The problem has the following qualities and resiliences:

    - Problem persists no matter what the target table is.
    - RAM usage is lowish (45%); there's plenty of spare RAM available for SSIS buffers or SQL Server to use.
    - Perfmon shows buffers are not spooling, disk response times are normal, disk availability is high.
    - CPU usage is low (hovers around 25%, shared between sqlserver.exe and DtsDebugHost.exe).
    - Disk activity is primarily on TempDB.mdf, but I/O is very low (< 600 Kb/s).
    - OLE DB Destination and SQL Server Destination both exhibit this problem.

    To sum it up, I'd expect either disk, CPU or RAM to be exhausted before the package slows down, but instead it's as if the SSIS package is taking an afternoon nap. SQL Server remains responsive to other queries, and I can't find any performance counters or logged events that betray the cause of the problem. I'll gratefully reward any reasonable answers / suggestions.

    Read the article

  • Why is using a common-lookup table to restrict the status of an entity wrong?

    - by FreshCode
    According to "Five Simple Database Design Errors You Should Avoid" by Anith Sen, using a common-lookup table to store the possible statuses for an entity is a common mistake. Why is this wrong? I disagree that it's wrong, citing the example of jobs at a repair service with many possible statuses that generally follow a natural flow, e.g.:

    - Booked In
    - Assigned to Technician
    - Diagnosing problem
    - Waiting for Client Confirmation
    - Repaired & Ready for Pickup
    - Repaired & Couriered
    - Irreparable & Ready for Pickup
    - Quote Rejected

    Arguably, some of these statuses could be normalised to tables like Couriered Items, Completed Jobs and Quotes (with Pending/Accepted/Rejected statuses), but that feels like unnecessary schema complication. Another common example would be order statuses that restrict the status of an order, e.g.:

    - Pending
    - Completed
    - Shipped
    - Cancelled
    - Refunded

    The status titles and descriptions are in one place for editing, and they are easy to scaffold as a drop-down with a foreign key for dynamic data applications. This has worked well for me in the past. If the business rules dictate the creation of a new order status, I can just add it to the OrderStatus table without rebuilding my code.
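
    For concreteness, a minimal sketch of the pattern being defended (table and column names illustrative):

        CREATE TABLE OrderStatus (
          status_id   INT PRIMARY KEY,
          title       VARCHAR(50) NOT NULL,
          description VARCHAR(255)
        );

        CREATE TABLE Orders (
          order_id  INT PRIMARY KEY,
          status_id INT NOT NULL,
          FOREIGN KEY (status_id) REFERENCES OrderStatus (status_id)
        );

        -- Adding a new business status is one row, not a schema change:
        INSERT INTO OrderStatus (status_id, title) VALUES (6, 'On Hold');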

    Read the article

  • .NET XML serialization gotchas?

    - by kurious
    I've run into a few gotchas when doing C# XML serialization that I thought I'd share:

    - You can't serialize items that are read-only (like KeyValuePairs).
    - You can't serialize a generic dictionary. Instead, try this wrapper class (from http://weblogs.asp.net/pwelter34/archive/2006/05/03/444961.aspx):

        using System;
        using System.Collections.Generic;
        using System.Text;
        using System.Xml.Serialization;

        [XmlRoot("dictionary")]
        public class SerializableDictionary<TKey, TValue>
            : Dictionary<TKey, TValue>, IXmlSerializable
        {
            public System.Xml.Schema.XmlSchema GetSchema()
            {
                return null;
            }

            public void ReadXml(System.Xml.XmlReader reader)
            {
                XmlSerializer keySerializer = new XmlSerializer(typeof(TKey));
                XmlSerializer valueSerializer = new XmlSerializer(typeof(TValue));

                bool wasEmpty = reader.IsEmptyElement;
                reader.Read();
                if (wasEmpty)
                    return;

                while (reader.NodeType != System.Xml.XmlNodeType.EndElement)
                {
                    reader.ReadStartElement("item");

                    reader.ReadStartElement("key");
                    TKey key = (TKey)keySerializer.Deserialize(reader);
                    reader.ReadEndElement();

                    reader.ReadStartElement("value");
                    TValue value = (TValue)valueSerializer.Deserialize(reader);
                    reader.ReadEndElement();

                    this.Add(key, value);
                    reader.ReadEndElement();
                    reader.MoveToContent();
                }
                reader.ReadEndElement();
            }

            public void WriteXml(System.Xml.XmlWriter writer)
            {
                XmlSerializer keySerializer = new XmlSerializer(typeof(TKey));
                XmlSerializer valueSerializer = new XmlSerializer(typeof(TValue));

                foreach (TKey key in this.Keys)
                {
                    writer.WriteStartElement("item");

                    writer.WriteStartElement("key");
                    keySerializer.Serialize(writer, key);
                    writer.WriteEndElement();

                    writer.WriteStartElement("value");
                    TValue value = this[key];
                    valueSerializer.Serialize(writer, value);
                    writer.WriteEndElement();

                    writer.WriteEndElement();
                }
            }
        }

    Any other XML gotchas out there?
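
    A quick usage sketch for the wrapper above (XmlWriter lives in System.Xml; the file name is illustrative):

        var dict = new SerializableDictionary<string, int> { { "answer", 42 } };
        var serializer = new XmlSerializer(typeof(SerializableDictionary<string, int>));
        using (var writer = System.Xml.XmlWriter.Create("dict.xml"))
        {
            serializer.Serialize(writer, dict);
        }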

    Read the article

  • XSLT, worth investing time in, any actual alternatives?

    - by Keeno
    I realise there have been a few other questions on this topic, and people say "use your language of choice to manipulate the XML" etc., but none quite fit my question exactly. Firstly, the scope of the project: we want to develop platform-independent e-learning. Currently it's a bunch of HTML pages, but as they grow and develop they become hard to maintain. The idea: generate an XML file + schema, then produce some XSLT files that process the XML into the e-learning modules. XML to HTML via XSLT. Why:

    - We would like the flexibility to easily reformat the content (I realise CSS is a viable alternative here).
    - If we decide to alter the pages' layout or functionality in any way, I'm guessing altering the "shared" XSLT files would be easier than updating the HTML files. So far we have about 30 modules, with up to 10-30 pages each.
    - Depending on some "parameters" we could output drastically different page layouts/structures, above and beyond what CSS can do.

    Now, all this has to be platform-independent and has to run "offline", i.e. without a server powering the HTML. Negatives I've read so far for XSLT:

    - Overhead? I'm not exactly sure why... is it the compute power needed to convert to HTML?
    - Difficult to learn.
    - Better alternatives exist.

    Now, what I would like to know exactly is: are there actually any viable alternatives for this "offline"? Am I going about it in the correct manner? Do you have any advice or alternatives? Thanks!
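
    For what it's worth, browsers can apply XSLT 1.0 client-side via an xml-stylesheet processing instruction, which fits the "offline, no server" constraint. A trivial sketch (element names illustrative) of one shared stylesheet turning a module page into HTML:

        <xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
          <xsl:template match="/page">
            <html>
              <body>
                <h1><xsl:value-of select="title"/></h1>
                <xsl:apply-templates select="section"/>
              </body>
            </html>
          </xsl:template>
          <xsl:template match="section">
            <div class="section"><xsl:value-of select="."/></div>
          </xsl:template>
        </xsl:stylesheet>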

    Read the article

  • Concept: Is Mongo right for applying schemas?

    - by Jan
    I am currently in charge of checking whether it is worthwhile for one of our upcoming products to be developed on Mongo. Without going into too much detail, I'll try to explain what the app does. The app simply has "entities". These entities are technical items, like cell phones, TVs, laptops, tablet PCs, and so forth. Of course, a cell phone has different attributes from a tablet PC, and a laptop has yet other attributes, like RAM, CPU, display size and so on. Now I want to have something that we want to call a schema: we define that for tablet PCs we need to save the display size, amount of RAM, size of flash storage, processor type, processor speed and so on. For cell phones we might save display size, GSM, EDGE, 3G, 4G, processor, RAM, touch-screen technology, bla bla bla. I think you get it :) What I want to realize is that each "category" has a schema, and when one of the system's users enters a new product (let's say the new iPhone 4), the app constructs the form to be filled out with the appropriate attributes. So far it sounds nice and should not be a problem with Mongo. But now the tough part, for which I could not find a clean solution... An attribute modeled in Mongo looks like:

        { _id: 1234456, name: "Attribute name", type: 0, description: "..." }

    But what do I do if I need this attribute in several languages, like:

        {
          en: { name: "Attribute name", type: 0, description: "..." },
          de: { name: "Name des Attributs", type: 0, description: "Beschreibung" }
        }

    I also need to ensure that the German attribute gets updated as soon as the English one is updated, for instance when type changes from 0 to 1. Any ideas on that?
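
    One possible shape, sketched only: keep language-independent fields (like type) once per attribute and nest just the translated strings, so there is nothing to keep in sync across languages:

        {
          _id: 1234456,
          type: 0,
          translations: {
            en: { name: "Attribute name", description: "..." },
            de: { name: "Name des Attributs", description: "Beschreibung" }
          }
        }

        // Changing the type then touches a single field for all languages:
        db.attributes.update({ _id: 1234456 }, { $set: { type: 1 } });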

    Read the article

  • Compound Primary Key in Hibernate using Annotations

    - by Rich
    Hi, I have a table which uses two columns to represent its primary key: a transaction id and a sequence number. I tried what is recommended in section 2.2.3.2.2 of http://docs.jboss.org/hibernate/stable/annotations/reference/en/html_single/#entity-mapping, but when I use the Hibernate session to commit this entity object, it leaves the TXN_ID field out of the insert statement and only includes the BA_SEQ field! What's going wrong? Here's the related code excerpt:

        @Id
        @Column(name="TXN_ID")
        private long txn_id;
        public long getTxnId(){return txn_id;}
        public void setTxnId(long t){this.txn_id=t;}

        @Id
        @Column(name="BA_SEQ")
        private int seq;
        public int getSeq(){return seq;}
        public void setSeq(int s){this.seq=s;}

    And here are some log statements to show what exactly happens:

        In createKeepTxnId of DAO base class: about to commit
        Transaction :: txn_id->90625 seq->0
        ...<Snip>...
        Hibernate: insert into TBL (BA_ACCT_TXN_ID, BA_AUTH_SRVC_TXN_ID, BILL_SRVC_ID,
            BA_BILL_SRVC_TXN_ID, BA_CAUSE_TXN_ID, BA_CHANNEL, CUSTOMER_ID,
            BA_MERCHANT_FREETEXT, MERCHANT_ID, MERCHANT_PARENT_ID, MERCHANT_ROOT_ID,
            BA_MERCHANT_TXN_ID, BA_PRICE, BA_PRICE_CRNCY, BA_PROP_REQS, BA_PROP_VALS,
            BA_REFERENCE, RESERVED_1, RESERVED_2, RESERVED_3, SRVC_PROD_ID, BA_STATUS,
            BA_TAX_NAME, BA_TAX_RATE, BA_TIMESTAMP, BA_SEQ)
        values (?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?)
        [WARN] util.JDBCExceptionReporter SQL Error: 1400, SQLState: 23000
        [ERROR] util.JDBCExceptionReporter ORA-01400: cannot insert NULL into ("SCHEMA"."TBL"."TXN_ID")

    The important thing to note is that I print out the entity object, which has a txn_id set, and then the following insert statement does not include TXN_ID in the column list, so the NOT NULL table constraint rejects the query.
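
    One hedged observation: the JPA-standard variant of the multiple-@Id mapping pairs the two @Id fields with a serializable key class referenced via @IdClass on the entity. A sketch mirroring the fields above:

        import java.io.Serializable;

        public class TxnKey implements Serializable {
            private long txn_id;
            private int seq;

            public TxnKey() {}
            public TxnKey(long txn_id, int seq) { this.txn_id = txn_id; this.seq = seq; }

            @Override
            public boolean equals(Object o) {
                if (!(o instanceof TxnKey)) return false;
                TxnKey k = (TxnKey) o;
                return txn_id == k.txn_id && seq == k.seq;
            }

            @Override
            public int hashCode() {
                return (int) (31 * txn_id + seq);
            }
        }

        // On the entity: @IdClass(TxnKey.class), with the two @Id fields as shown above.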

    Read the article

  • Advanced All In One .NET Framework (should I go for a software factory?)

    - by alfredo dobrekk
    Hi, I'm starting a new project that would basically take input from the user and save it to a database, across about 30 screens. I would like to find a framework that provides the maximum number of these features out of the box:

    - .NET, C#, Windows Forms
    - unit testing
    - continuous integration
    - logging
    - screens with lists, combo boxes, text boxes, add, delete, save and cancel actions that are easy to update when you add a property to your classes or a field to your database
    - auto-completion on controls to help the user find their way
    - use of an ORM like NHibernate
    - easy multithreading and display of wait screens for the user
    - easy undo/redo
    - tabbed child windows
    - search forms
    - ability to grant access to some functionality according to user profiles
    - MVP/MVVM or whatever design patterns
    - either code generation from the database to C# classes, or generation of the database schema from C# classes
    - some kind of database versioning/upgrade to easily update the database when I release patches to the application once in production
    - automatic control resizing
    - code metrics analysis
    - some code generator I can use against my entities that would generate a rough form I can rearrange afterwards
    - code documentation generator
    - ...

    At this point I have three options:

    1. Build from scratch on top of the CLR :(
    2. Find the functionality among several open source frameworks and use them as an infrastructure stack.
    3. Find a "software factory".

    I know it's a lot, but I would really like to use existing code to build upon so I can focus on business rules. What open source tools would you use to achieve this?

    Read the article

  • Connection Pool Strategy: Good, Bad or Ugly?

    - by Drew
    I'm in charge of developing and maintaining a group of web applications that are centered around similar data. The architecture I decided on at the time was that each application would have its own database and web-root application. Each application maintains a connection pool to its own database and a central database for shared data (logins, etc.).

    A co-worker has been positing that this strategy will not scale, because having so many different connection pools will not be scalable, and that we should refactor the database so that all of the different applications use a single central database. Any modifications that may be unique to a system would need to be reflected from that one database, which would then use a single pool powered by Tomcat. He has posited that there is a lot of "meta data" that goes back and forth across the network to maintain a connection pool.

    My understanding is that, with proper tuning so as to use only as many connections as necessary across the different pools (low-volume apps getting fewer connections, high-volume apps getting more, etc.), the number of pools doesn't matter compared to the number of connections; or, more formally, that the difference in overhead required to maintain 3 pools of 10 connections is negligible compared to 1 pool of 30 connections.

    The reasoning behind initially breaking the systems into a one-app-one-database design was that there are likely going to be differences between the apps, and that each system could make modifications to its schema as needed. Similarly, it eliminated the possibility of system data bleeding through to other apps.

    Unfortunately there is no strong leadership in the company to make a hard decision. Although my co-worker is backing up his worries only with vagueness, I want to make sure I understand the ramifications of multiple small databases/connections versus one large database/connection pool.

    Read the article

  • How to exclude rows where matching join is in an SQL tree

    - by Greg K
    Sorry for the poor title, I couldn't think how to concisely describe this problem. I have a set of items that should have a 1-to-1 relationship with an attribute. I have a query to return those rows where the data is wrong and this relationship has been broken (1-to-many). I'm gathering these rows to fix them and restore this 1-to-1 relationship. This is a theoretical simplification of my actual problem, but I'll post the example table schema here as it was requested.

    item table:

        +------------+------------+-----------+
        | item_id    | name       | attr_id   |
        +------------+------------+-----------+
        | 1          | BMW 320d   | 20        |
        | 1          | BMW 320d   | 21        |
        | 2          | BMW 335i   | 23        |
        | 2          | BMW 335i   | 34        |
        +------------+------------+-----------+

    attribute table:

        +---------+-----------------+------------+
        | attr_id | value           | parent_id  |
        +---------+-----------------+------------+
        | 20      | SE              | 21         |
        | 21      | M Sport         | 0          |
        | 23      | AC              | 24         |
        | 24      | Climate control | 0          |
        ....
        | 34      | Leather seats   | 0          |
        +---------+-----------------+------------+

    A simple query to return items with more than one attribute:

        SELECT item_id, COUNT(DISTINCT(attr_id)) AS attributes
        FROM item
        GROUP BY item_id
        HAVING attributes > 1

    This gets me a result set like so:

        +-----------+------------+
        | item_id   | attributes |
        +-----------+------------+
        | 1         | 2          |
        | 2         | 2          |
        | 3         | 2          |
        -- etc. --

    However, there's an exception. The attribute table can hold a tree structure, via parent links in the table. For certain rows, parent_id can hold the ID of another attribute. There's only one level to this tree. Example:

        +---------+-----------------+------------+
        | attr_id | value           | parent_id  |
        +---------+-----------------+------------+
        | 20      | SE              | 21         |
        | 21      | M Sport         | 0          |
        ....

    I do not want to retrieve items in my original query where, for a pair of associated attributes, they are related like attributes 20 & 21. I do want to retrieve items where:

    - the attributes have no parent, and
    - for two or more attributes, they are not related (e.g. attributes 23 & 34)

    Example result desired, just the item ID:

        +------------+
        | item_id    |
        +------------+
        | 2          |
        +------------+

    How can I join against attributes from items and exclude these rows? Do I use a temporary table or can I achieve this from a single query? Thanks.
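
    A sketch of one single-query approach, using the simplified tables above: self-join item to pair up each item's attributes, then keep only pairs where neither attribute is the other's parent.

        SELECT DISTINCT i1.item_id
        FROM item i1
        JOIN item i2
          ON i2.item_id = i1.item_id AND i2.attr_id <> i1.attr_id
        JOIN attribute a1 ON a1.attr_id = i1.attr_id
        JOIN attribute a2 ON a2.attr_id = i2.attr_id
        WHERE a1.parent_id <> i2.attr_id   -- i1's attribute is not a child of i2's
          AND a2.parent_id <> i1.attr_id;  -- and vice versa

        -- With the sample data this returns item_id 2 only: attributes 23 & 34 are
        -- unrelated, while 20 is a child of 21, so item 1 is excluded.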

    Read the article

  • Symfony 1.4 - Don't save a blank password on an executeUpdate action

    - by Twelve47
    I have a form to edit a UserProfile, which is stored in a MySQL db and which includes the following custom configuration:

        public function configure()
        {
          $this->widgetSchema['password'] = new sfWidgetFormInputPassword();
          // you don't need to specify a new password if you are editing a user
          $this->validatorSchema['password']->setOption('required', false);
        }

    When the user tries to save, the executeUpdate method is called to commit the changes. If the password is left blank, the password field is set to '', but I want it to retain the old password instead of overwriting it. What is the best (/most in the Symfony ethos) way of doing this? My solution was to override the setter method on the model (which I had done anyway for password encryption) and ignore blank values:

        public function setPassword($password)
        {
          if ($password == '') return false; // if password is blank, don't save it
          return $this->_set('password', UserProfile::encryptPassword($password));
        }

    It seems to work fine like this, but is there a better way? If you're wondering, I cannot use sfDoctrineGuard for this project, as I am dealing with a legacy database and cannot change the schema.
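
    An alternative that keeps the rule in the form rather than the model is overriding sfFormObject's doUpdateObject() hook; a sketch (Symfony 1.3/1.4 API):

        protected function doUpdateObject($values)
        {
          if (isset($values['password']) && '' === trim($values['password']))
          {
            unset($values['password']); // leave the stored password untouched
          }
          parent::doUpdateObject($values);
        }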

    Read the article

  • Currently using a view. Should I use a hard table instead?

    - by 1001010101
    I am currently debating whether my view, mapping_uGroups_uProducts, should be replaced by a hard table. The view is defined as follows:

        CREATE ALGORITHM=UNDEFINED DEFINER=`root`@`localhost` SQL SECURITY DEFINER
        VIEW `db`.`mapping_uGroups_uProducts` AS
        select distinct `X`.`upID` AS `upID`, `Z`.`ugID` AS `ugID`
        from ((`db`.`mapping_uProducts_Products` `X`
          join `db`.`productsInfo` `Y` on ((`X`.`pID` = `Y`.`pID`)))
          join `db`.`mapping_uGroups_Groups` `Z` on ((`Y`.`gID` = `Z`.`gID`)));

    My current query is:

        SELECT upID FROM uProductsInfo
        JOIN fs_uProducts USING (upID)
        JOIN mapping_uGroups_uProducts USING (upID) -- could be faster with a hard table and index
        JOIN mapping_fs_key USING (fsKeyID)
        WHERE fsName="OVERALL"
        AND ugID=1
        ORDER BY score DESC
        LIMIT 0,30;

    which is pretty slow (about 10 seconds for 30 results). I think the reason my query is so slow is that it relies on a VIEW, which has no index to speed things up:

        +----+-------------+----------------+--------+----------------+---------+---------+--------------------------------+-------+---------------------------------+
        | id | select_type | table          | type   | possible_keys  | key     | key_len | ref                            | rows  | Extra                           |
        +----+-------------+----------------+--------+----------------+---------+---------+--------------------------------+-------+---------------------------------+
        |  1 | PRIMARY     | mapping_fs_key | const  | PRIMARY,fsName | fsName  | 386     | const                          |     1 | Using temporary; Using filesort |
        |  1 | PRIMARY     | <derived2>     | ALL    | NULL           | NULL    | NULL    | NULL                           | 19706 | Using where                     |
        |  1 | PRIMARY     | uProductsInfo  | eq_ref | PRIMARY        | PRIMARY | 4       | mapping_uGroups_uProducts.upID |     1 | Using index                     |
        |  1 | PRIMARY     | fs_uProducts   | ref    | upID           | upID    | 4       | db.uProductsInfo.upID          |   221 | Using where                     |
        |  2 | DERIVED     | X              | ALL    | PRIMARY        | NULL    | NULL    | NULL                           | 40772 | Using temporary                 |
        |  2 | DERIVED     | Y              | eq_ref | PRIMARY        | PRIMARY | 4       | db.X.pID                       |     1 | Distinct                        |
        |  2 | DERIVED     | Z              | ref    | PRIMARY        | PRIMARY | 4       | db.Y.gID                       |     2 | Using index; Distinct           |
        +----+-------------+----------------+--------+----------------+---------+---------+--------------------------------+-------+---------------------------------+
        7 rows in set (0.48 sec)

    The EXPLAIN here looks pretty cryptic, and I don't know whether I should drop the view and write a script to just insert everything from the view into a hard table (obviously losing the flexibility of the view, since the mapping changes quite frequently). Does anyone have any ideas on how I can optimize my schema better?
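
    A sketch of the materialization route, derived directly from the view definition above (column types assumed; a script would refresh the table whenever the mapping changes):

        CREATE TABLE mapping_uGroups_uProducts_mat (
          upID INT NOT NULL,
          ugID INT NOT NULL,
          PRIMARY KEY (upID, ugID),
          KEY idx_ugID (ugID)
        );

        INSERT INTO mapping_uGroups_uProducts_mat (upID, ugID)
        SELECT DISTINCT X.upID, Z.ugID
        FROM mapping_uProducts_Products X
        JOIN productsInfo Y ON X.pID = Y.pID
        JOIN mapping_uGroups_Groups Z ON Y.gID = Z.gID;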

    Read the article

  • How to Map Two Tables To One Class in Fluent NHibernate

    - by Richard Nagle
    I am having a problem with Fluent NHibernate: mapping two tables to one class. I have the following database schema:

        TABLE dbo.LocationName
        (
          LocationId INT PRIMARY KEY,
          LanguageId INT PRIMARY KEY,
          Name       VARCHAR(200)
        )

        TABLE dbo.Language
        (
          LanguageId INT PRIMARY KEY,
          Locale     CHAR(5)
        )

    and want to build the following class definition:

        public class LocationName
        {
            public virtual int LocationId { get; private set; }
            public virtual int LanguageId { get; private set; }
            public virtual string Name { get; set; }
            public virtual string Locale { get; set; }
        }

    Here is my mapping class:

        public LocalisedNameMap()
        {
            WithTable("LocationName");
            UseCompositeId()
                .WithKeyProperty(x => x.LanguageId)
                .WithKeyProperty(x => x.LocationId);
            Map(x => x.Name);
            WithTable("Language", lang =>
            {
                lang.WithKeyColumn("LanguageId");
                lang.Map(x => x.Locale);
            });
        }

    The problem is with the mapping of the Locale field being from another table, and in particular that the keys between those tables don't match. Whenever I run the application with this mapping, I get the following error on startup:

        Foreign key (FK7FC009CCEEA10EEE:Language [LanguageId])) must have same number
        of columns as the referenced primary key (LocationName [LanguageId, LocationId])

    How do I tell NHibernate to map from LocationName to Language using only the LanguageId field?

    Read the article

  • How to fix this simple SQL query?

    - by morpheous
    I have a database with three tables:

    - user_table
    - country_table
    - city_table

    I want to write ANSI SQL which will allow me to fetch all the user data (i.e. user details, including the name of the country of their last school and the name of the city they live in now). The problem I am having is that I have to join against the country table twice, and I am getting slightly confused. The schema is shown below:

        CREATE TABLE user_table (id int, first_name varchar(16), last_school_country_id int, city_id int);
        CREATE TABLE country_table (id int, name varchar(32));
        CREATE TABLE city_table (id int, country_id int, name varchar(32));

    This is the query I have come up with so far, but the results are wrong, and sometimes the db engine (MySQL) asks me if I want to show all [HUGE NUMBER HERE] results, which makes me suspect that I am unintentionally creating a cartesian product somewhere. Can someone explain what is wrong with this SQL statement, and what I need to do to fix it?

        SELECT usr.id AS id,
               usr.first_name,
               ctry1.name AS loc_country_name,
               ctry2.name AS school_country_name,
               city.name  AS loc_city_name
        FROM user_table usr, country_table ctry1, country_table ctry2, city_table city
        WHERE usr.last_school_country_id=ctry2.id
        AND usr.city_id=city.id
        AND city.country_id=ctry1.id
        AND ctry1.id=ctry2.id;
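
    One hedged observation: the final predicate, ctry1.id=ctry2.id, forces the school country to equal the current country, which gives wrong results whenever they differ. A sketch with that predicate dropped and the joins made explicit:

        SELECT usr.id AS id,
               usr.first_name,
               ctry1.name AS loc_country_name,
               ctry2.name AS school_country_name,
               city.name  AS loc_city_name
        FROM user_table usr
        JOIN city_table city     ON usr.city_id = city.id
        JOIN country_table ctry1 ON city.country_id = ctry1.id
        JOIN country_table ctry2 ON usr.last_school_country_id = ctry2.id;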

    Read the article

  • Multiple "ObjectChangeTracker" getting created, can it be avoided?

    - by user555937
    Hi, we are working on a POC with the following architecture (MVVM):

        WPF (client) + WCF + Model (data access) + ADO.NET Entity Framework 4.0
        (with SQL Server 2008 R2 as the DB)

    All are different projects. In the data access layer we have created different entity models (edmx) based on functionality: the tables under a particular flow are grouped into their own entity model. We are using self-tracking entities to communicate back and forth with the WPF client through the WCF service. For a single model everything works fine, but when we created multiple models, a few issues started coming up. The multiple models share a few duplicate tables/entities. The two problems are:

    1. When we try to access entities from different models, multiple "ObjectChangeTracker" objects get created. E.g.:

        CompanyModel (edmx) - Company (entity)  - ObjectChangeTracker, ObjectState
        ProductModel (edmx) - Customer (entity) - ObjectChangeTracker1, ObjectState1
        OrderModel (edmx)   - Order (entity)    - ObjectChangeTracker2, ObjectState2

    Is there any way to avoid this?

    2. There are a few tables which are shared across the models; e.g. Company (entity) is used in all the models above. At compile time this does not throw any error, but at run time it gives an error saying "Schema specified is not valid. Errors: The mapping of CLR type to EDM type is ambiguous because multiple CLR types match the EDM type "Company"". To resolve this, we renamed the entities with a prefix to make them unique. Is there any other way we can resolve this without changing the name of the entity in the same assembly?

    Thanks in advance, and we'd appreciate it if anyone has an approach for these issues. Thanks, Kiran

    Read the article

  • Flexible forms and supporting database structure

    - by sunwukung
    I have been tasked with creating an application that allows administrators to alter the content of the user input form (i.e. add arbitrary fields), the contents of which get stored in a database. Think Modx/Wordpress/Expression Engine template variables. The approach I've been looking at is implementing concrete tables where the specification is consistent (i.e. user profiles, user content etc.) and some generic field data tables (i.e. text, boolean) to store non-specific values. Forms (and model fields) would be generated by first querying the tables and retrieving the relevant columns, although I've yet to think about how I would set up validation. I've taken a look at this problem, and it seems to be pointing toward an EAV-type approach, which, from my brief research, looks like it could be a greater burden than the blessings its flexibility would bring. I've read a couple of posts here, however, which suggest this is a dangerous route:

    - How to design a generic database whose layout may change over time?
    - Dynamic Database Schema

    I'd appreciate some advice on this matter if anyone has some to give. Regards, SWK
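
    For concreteness, a sketch of the "concrete core plus typed value tables" layout described above (all names illustrative):

        CREATE TABLE form_field (
          field_id   INT PRIMARY KEY,
          form_id    INT NOT NULL,
          name       VARCHAR(64) NOT NULL,
          field_type VARCHAR(16) NOT NULL   -- 'text', 'boolean', ...
        );

        CREATE TABLE field_value_text (
          entry_id INT NOT NULL,
          field_id INT NOT NULL,
          value    TEXT,
          PRIMARY KEY (entry_id, field_id),
          FOREIGN KEY (field_id) REFERENCES form_field (field_id)
        );

        CREATE TABLE field_value_boolean (
          entry_id INT NOT NULL,
          field_id INT NOT NULL,
          value    BOOLEAN,
          PRIMARY KEY (entry_id, field_id),
          FOREIGN KEY (field_id) REFERENCES form_field (field_id)
        );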

    Read the article
