Search Results

Search found 389 results on 16 pages for 'batosai fk'.

Page 5/16 | < Previous Page | 1 2 3 4 5 6 7 8 9 10 11 12  | Next Page >

  • Using ASP.NET SQL Membership Provider, how do I store my own per-user data?

    - by Gary McGill
    I'm using the ASP.NET SQL Membership Provider. So, there's an aspnet_Users table that has details of each of my users. (Actually, the aspnet_Membership table seems to contain most of the actual data.) I now want to store some per-user information in my database, so I thought I'd just create a new table with a UserId (GUID) column and an FK relationship to aspnet_Users.

    However, I then discovered that I can't easily get access to the UserId, since it's not exposed via the membership API. (I know I can access it via the ProviderUserKey, but it seems like the API is abstracting away the internal UserID in favor of the UserName, and I don't want to go too far against the grain.)

    So, I thought I should instead put a LoweredUserName column in my table, and create an FK relationship to aspnet_Users using that. Bzzzt. Wrong again, because while there is a unique index in aspnet_Users that includes the LoweredUserName, it also includes the ApplicationId - so in order to create my FK relationship, I'd need to have an ApplicationId column in my table too.

    At first I thought: fine, I'm only dealing with a single application, so I'll just add such a column and give it a default value. Then I realised that the ApplicationId is a GUID, so it'd be a pain to do this. Not hard exactly, but until I roll out my DB I can't predict what the GUID is going to be.

    I feel like I'm missing something, or going about things the wrong way. What am I supposed to do?
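    A minimal sketch of one approach: key the new table directly on aspnet_Users.UserId, which is unique on its own, so the FK needs no ApplicationId; the table and extra column below are illustrative, and the GUID is still reachable at runtime via Membership.GetUser().ProviderUserKey.

        -- Hypothetical per-user profile table; the FK targets the provider's PK directly.
        CREATE TABLE dbo.UserProfile (
            UserId      uniqueidentifier NOT NULL
                CONSTRAINT PK_UserProfile PRIMARY KEY
                CONSTRAINT FK_UserProfile_aspnet_Users
                    REFERENCES dbo.aspnet_Users (UserId),
            DisplayName nvarchar(100) NULL  -- example per-user column
        );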

    Read the article

  • Generic version control strategy for select table data within a heavily normalized database

    - by leppie
    Hi. Sorry for the long-winded title, but the requirement/problem is rather specific. With reference to the following sample (but very simplified) structure (in pseudo SQL), I hope to explain it a bit better.

        TABLE StructureName {
            Id   GUID PK,
            Name varchar(50) NOT NULL
        }

        TABLE Structure {
            Id       GUID PK,
            ParentId GUID (FK to Structure),
            NameId   GUID (FK to StructureName) NOT NULL
        }

        TABLE Something {
            Id              GUID PK,
            RootStructureId GUID (FK to Structure) NOT NULL
        }

    As one can see, Structure is a simple tree structure (not worried about ordering of children for the problem). StructureName is a simplification of a translation system. Finally, 'Something' is simply something referencing the tree's root structure. This is just one of many tables that need to be versioned, but this one serves as a good example for most cases.

    There is a requirement to version any changes to the name and/or the tree 'layout' of the Structure table. Previous versions should always be available. There seem to be a few possibilities to tackle this issue, like copying the entire structure, but most approaches cause one to 'lose' referential integrity. For example, if one followed this approach, one would have to make a duplicate of the 'Something' record, given that the root structure would be a new record and have a new ID.

    Other avenues of possible solutions are looking into how wikis handle this, or going a lot further and looking at how proper version control systems work. Currently, I feel a bit clueless about how to proceed on this in a generic way. Any ideas will be greatly appreciated.

    Thanks
    leppie
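    One family of solutions, sketched below under loud assumptions (snapshot-per-version, with hypothetical table names): leave the live tables untouched, so 'Something' keeps pointing at a stable root ID, and copy each saved revision of the tree into history tables keyed by a version row.

        -- Hypothetical history tables: live rows keep their IDs and FKs;
        -- each revision snapshots the whole tree, names included.
        CREATE TABLE StructureVersion (
            StructureVersionId uniqueidentifier NOT NULL PRIMARY KEY,
            RootStructureId    uniqueidentifier NOT NULL REFERENCES Structure (Id),
            VersionNumber      int NOT NULL,
            CreatedAt          datetime NOT NULL
        );

        CREATE TABLE StructureHistory (
            StructureVersionId uniqueidentifier NOT NULL REFERENCES StructureVersion (StructureVersionId),
            Id                 uniqueidentifier NOT NULL,  -- the original Structure.Id
            ParentId           uniqueidentifier NULL,
            Name               varchar(50) NOT NULL,       -- denormalized from StructureName
            PRIMARY KEY (StructureVersionId, Id)
        );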

    Read the article

  • JPA + EJB + JSF: how can I design a complicated query?

    - by Harry Pham
    I am using NetBeans 6.8, by the way. Let's say that I have 4 different tables: Company, Facility, Project, and Document. The relationship is this: a company can have multiple facilities, a facility can have multiple projects, and a project can have multiple documents.

        Company:  companyNum  PK
                  facilityNum FK
        Facility: facilityNum PK
                  projectNum  FK
        Project:  projectNum  PK
                  drawingNum  FK

    So when I create Entity Class From Database in NetBeans 6.8, I have 4 entity classes named after the above 4 tables. If I want to see all the Documents in the database, then it is easy. In my session bean, I would do this:

        @PersistenceContext
        private EntityManager em;

        List<Document> documents = em.createNamedQuery("Document.findAll").getResultList();

    However, that is not all I need. Let's say that I want to know all the Documents from a particular Company, or all the Documents from a particular Project from a particular Facility from a particular Company. I am very new to JPA + EJB + JSF as a whole. Please help me out.
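    For reference, the underlying SQL for the deepest case is just a chain of joins; a sketch only, assuming the conventional layout the described one-to-many relationships imply (each child table carrying an FK to its parent). A JPQL query would express the same thing by navigating the mapped associations.

        -- All documents for one company, walking company -> facility -> project -> document.
        SELECT d.*
        FROM Company  c
        JOIN Facility f ON f.companyNum  = c.companyNum
        JOIN Project  p ON p.facilityNum = f.facilityNum
        JOIN Document d ON d.projectNum  = p.projectNum
        WHERE c.companyNum = 42;  -- illustrative company key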

    Read the article

  • How to emulate a BEFORE DELETE trigger in SQL Server 2005

    - by Mark
    Let's say I have three tables, [ONE], [ONE_TWO], and [TWO]. [ONE_TWO] is a many-to-many join table with only [ONE_ID] and [TWO_ID] columns. There are foreign keys set up to link [ONE] to [ONE_TWO] and [TWO] to [ONE_TWO]. The FKs use the ON DELETE CASCADE option so that if either a [ONE] or [TWO] record is deleted, the associated [ONE_TWO] records will be automatically deleted as well.

    I want to have a trigger on the [TWO] table such that when a [TWO] record is deleted, it executes a stored procedure that takes a [ONE_ID] as a parameter, passing the [ONE_ID] values that were linked to the [TWO_ID] before the delete occurred:

        DECLARE @Statement NVARCHAR(max)
        SET @Statement = ''

        SELECT @Statement = @Statement + N'EXEC [MyProc] ''' + CAST([one_two].[one_id] AS VARCHAR(36)) + '''; '
        FROM deleted
        JOIN [one_two] ON deleted.[two_id] = [one_two].[two_id]

        EXEC (@Statement)

    Clearly, I need a BEFORE DELETE trigger, but there is no such thing in SQL Server 2005. I can't use an INSTEAD OF trigger because of the cascading FK. I get the impression that if I use a FOR DELETE trigger, when I join [deleted] to [ONE_TWO] to find the list of [ONE_ID] values, the FK cascade will have already deleted the associated [ONE_TWO] records, so I will never find any [ONE_ID] values. Is this true? If so, how can I achieve my objective?

    I'm thinking that I'd need to change the FK joining [TWO] to [ONE_TWO] to not use cascades, and to do the delete from [ONE_TWO] manually in the trigger just before I manually delete the [TWO] records. But I'd rather not go through all that if there is a simpler way.
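    A sketch of the workaround described in the last paragraph, assuming the cascade has been removed from the [TWO]-to-[ONE_TWO] FK (which also lifts the INSTEAD OF restriction mentioned above); the procedure call is from the question, everything else is illustrative:

        -- INSTEAD OF DELETE fires before anything is removed, so the join rows still exist.
        CREATE TRIGGER trg_TWO_InsteadOfDelete ON [two]
        INSTEAD OF DELETE
        AS
        BEGIN
            SET NOCOUNT ON;

            -- Capture the [ONE_ID] values while [ONE_TWO] is still intact.
            DECLARE @Statement NVARCHAR(MAX);
            SET @Statement = N'';

            SELECT @Statement = @Statement
                + N'EXEC [MyProc] ''' + CAST(ot.[one_id] AS VARCHAR(36)) + N'''; '
            FROM deleted d
            JOIN [one_two] ot ON ot.[two_id] = d.[two_id];

            -- Do the cascade manually, then the real delete.
            DELETE ot FROM [one_two] ot JOIN deleted d ON ot.[two_id] = d.[two_id];
            DELETE t  FROM [two] t     JOIN deleted d ON t.[two_id]  = d.[two_id];

            EXEC (@Statement);
        END;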

    Read the article

  • EF Query with conditional include that uses Joins

    - by makerofthings7
    This is a follow-up to another user's question. I have 5 tables:

        CompanyDetail
        CompanyContacts          FK to CompanyDetail
        CompanyContactsSecurity  FK to CompanyContact
        UserDetail
        UserGroupMembership      FK to UserDetail

    How do I return all companies and include the contacts in the same query? I would like to include companies that contain zero contacts. Companies have a 1-to-many association to Contacts; however, not every user is permitted to see every Contact. My goal is to get a list of every Company regardless of the count of Contacts, but include contact data.

    Right now I have this working query:

        var userGroupsQueryable = _entities.UserGroupMembership
            .Where(ug => ug.UserID == UserID)
            .Select(a => a.GroupMembership);

        var contactsGroupsQueryable = _entities.CompanyContactsSecurity; //.Where(c => c.CompanyID == companyID);

        /// OLD Query that shows permitted contacts
        /// ... I want to "use this query inside "listOfCompany"
        ///
        //var permittedContacts = from c in userGroupsQueryable
        //                        join p in contactsGroupsQueryable on c equals p.GroupID
        //                        select p;

    However, this is inefficient when I need to get all contacts for all companies, since I use a For..Each loop, query each company individually, and update my viewmodel.

    Question: How do I shoehorn the permittedContacts variable above into this query:

        var listOfCompany = from company in _entities.CompanyDetail.Include("CompanyContacts").Include("CompanyContactsSecurity")
                            where company.CompanyContacts.Any(
                                // Insert Query here....
                                // b => b.CompanyContactsSecurity.Join(/*inner*/,/*OuterKey*/,/*innerKey*/,/*ResultSelector*/)
                            )
                            select company;

    My attempt at doing this resulted in:

        var listOfCompany = from company in _entities.CompanyDetail.Include("CompanyContacts").Include("CompanyContactsSecurity")
                            where company.CompanyContacts.Any(
                                // This is concept only... doesn't work...
                                from grps in userGroupsQueryable
                                join p in company.CompanyContactsSecurity on grps equals p.GroupID
                                select p
                            )
                            select company;
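    For orientation, the shape of the result the LINQ is reaching for looks roughly like this in SQL; a sketch only, with the key column names (CompanyID, ContactID) guessed from the description:

        -- Every company; contact rows only where one of the user's groups grants access.
        SELECT c.*, ct.*
        FROM CompanyDetail c
        LEFT JOIN CompanyContacts ct
               ON ct.CompanyID = c.CompanyID
              AND EXISTS (SELECT 1
                          FROM CompanyContactsSecurity s
                          JOIN UserGroupMembership g
                            ON g.GroupMembership = s.GroupID
                          WHERE s.ContactID = ct.ContactID
                            AND g.UserID    = @UserID);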

    Read the article

  • Oracle Data Integrator 11.1.1.5 Complex Files as Sources and Targets

    - by Alex Kotopoulis
    Overview

    ODI 11.1.1.5 adds the new Complex File technology for use with file sources and targets. The goal is to read or write file structures that are too complex to be parsed using the existing ODI File technology. This includes:

        - Different record types in one list that use different parsing rules
        - Hierarchical lists, for example customers with nested orders
        - Parsing instructions in the file data, such as delimiter types, field lengths, type identifiers
        - Complex headers such as multiple header lines or parseable information in the header
        - Skipping of lines
        - Conditional or choice fields

    Similar to the ODI File and XML File technologies, the complex file parsing is done through a JDBC driver that exposes the flat file as relational table structures. Complex files are mapped to one or more table structures, as opposed to the (simple) file technology, which always has a one-to-one relationship between file and table. The resulting set of tables follows the same concept as the ODI XML driver: table rows have additional PK-FK relationships to express hierarchy, as well as order values to maintain the file order in the resulting tables.

    The parsing instruction format used for complex files is the nXSD (native XSD) format that is already in use with Oracle BPEL. This format extends the XML Schema standard by adding additional parsing instructions to each element. Using nXSD parsing technology, the native file is converted into an internal XML format. It is important to understand that the XML is streamed to improve performance; there is no size limitation on the native file based on memory size, and the XML data is never fully materialized. The internal XML is then converted to a relational schema using the same mapping rules as the ODI XML driver.

    How to Create an nXSD file

    Complex file models depend on the nXSD schema for the given file. This nXSD file has to be created using a text editor or the Native Format Builder Wizard that is part of Oracle BPEL. BPEL is included in the ODI Suite, but not in standalone ODI Enterprise Edition. The nXSD format extends the standard XSD format through nxsd attributes. nXSD is a valid XML Schema, since the XSD standard allows extra attributes with their own namespaces.
    The following is a sample nXSD schema:

        <?xml version="1.0"?>
        <xsd:schema xmlns:xsd="http://www.w3.org/2001/XMLSchema"
                    xmlns:nxsd="http://xmlns.oracle.com/pcbpel/nxsd"
                    xmlns:tns="http://xmlns.oracle.com/pcbpel/demoSchema/csv"
                    targetNamespace="http://xmlns.oracle.com/pcbpel/demoSchema/csv"
                    elementFormDefault="qualified"
                    attributeFormDefault="unqualified"
                    nxsd:encoding="US-ASCII" nxsd:stream="chars" nxsd:version="NXSD">
          <xsd:element name="Root">
            <xsd:complexType><xsd:sequence>
              <xsd:element name="Header">
                <xsd:complexType><xsd:sequence>
                  <xsd:element name="Branch" type="xsd:string" nxsd:style="terminated" nxsd:terminatedBy=","/>
                  <xsd:element name="ListDate" type="xsd:string" nxsd:style="terminated" nxsd:terminatedBy="${eol}"/>
                </xsd:sequence></xsd:complexType>
              </xsd:element>
              <xsd:element name="Customer" maxOccurs="unbounded">
                <xsd:complexType><xsd:sequence>
                  <xsd:element name="Name" type="xsd:string" nxsd:style="terminated" nxsd:terminatedBy=","/>
                  <xsd:element name="Street" type="xsd:string" nxsd:style="terminated" nxsd:terminatedBy=","/>
                  <xsd:element name="City" type="xsd:string" nxsd:style="terminated" nxsd:terminatedBy="${eol}"/>
                </xsd:sequence></xsd:complexType>
              </xsd:element>
            </xsd:sequence></xsd:complexType>
          </xsd:element>
        </xsd:schema>

    The nXSD schema annotates elements to describe their position and delimiters within the flat text file. The schema above uses almost exclusively the nxsd:terminatedBy instruction to look for the next terminator characters. There are various constructs in nXSD to parse fixed-length fields, look ahead in the document for string occurrences, perform conditional logic, use variables to remember state, and many more.

    nXSD files can either be written manually using an XML Schema editor or created using the Native Format Builder Wizard. Both the Native Format Builder Wizard and the nXSD language are described in the Application Server Adapter Users Guide. The way to start the Native Format Builder in BPEL is to create a new File Adapter; in step 8 of the Adapter Configuration Wizard, a new Schema for Native Format can be created.

    The Native Format Builder guides you through a number of steps to generate the nXSD based on a sample native file. If the format is complex, it is often a good idea to "approximate" it with a similar simple format and then add the complex components manually. The resulting *.xsd file can be copied and used as the format for ODI; other BPEL constructs, such as the file adapter definition, are not relevant for ODI. Using this technique it is also possible to parse the same file format in SOA Suite and ODI, for example using SOA for small real-time messages and ODI for large batches.

    The nXSD schema in this example describes a file with a header row containing data, and 3 string fields per row delimited by commas, for example:

        Redwood City Downtown Branch, 06/01/2011
        Ebeneezer Scrooge, Sandy Lane, Atherton
        Tiny Tim, Winton Terrace, Menlo Park

    The ODI Complex File JDBC driver exposes the file structure through a set of relational tables with PK-FK relationships.
    The tables for this example are:

        Table ROOT (1 row):
            ROOTPK        Primary key for the root element
            SNPSFILENAME  Name of the file
            SNPSFILEPATH  Path of the file
            SNPSLOADDATE  Date of load

        Table HEADER (1 row):
            ROOTFK         Foreign key to the ROOT record
            ROWORDER       Order of the row in the native document
            BRANCH         Data
            BRANCHORDER    Order of Branch within the row
            LISTDATE       Data
            LISTDATEORDER  Order of ListDate within the row

        Table ADDRESS (2 rows):
            ROOTFK       Foreign key to the ROOT record
            ROWORDER     Order of the row in the native document
            NAME         Data
            NAMEORDER    Order of Name within the row
            STREET       Data
            STREETORDER  Order of Street within the row
            CITY         Data
            CITYORDER    Order of City within the row

    Every table has PK and/or FK fields to reflect the document hierarchy through relationships. In this example this is trivial, since the HEADER and all CUSTOMER records point back to the PK of ROOT. Deeper nested documents require this to identify parent elements. All tables also have a ROWORDER field to define the order of rows, as well as order fields for each column, in case the order of columns varies in the original document and needs to be maintained. If order is not relevant, these fields can be ignored. (A query sketch over these tables appears at the end of this article.)

    How to Create a Complex File Data Server in ODI

    After creating the nXSD file and a test data file, and storing it on the local file system accessible to ODI, you can go to the ODI Topology Navigator to create a Data Server and Physical Schema under the Complex File technology. This technology follows the conventions of other ODI technologies and is very similar to the XML technology. The parsing settings, such as the source native file, the nXSD schema file, the root element, as well as the external database, can be set in the JDBC URL.

    The use of an external database defined by dbprops is optional, but is strongly recommended for production use. Ideally, the staging database should be used for this. Also, when using a complex file exclusively for read purposes, it is recommended to use the ro=true property to ensure the file is not unnecessarily synchronized back from the database when the connection is closed. A data file is always required to be present at the filename path during design time. Without this file, operations like testing the connection, reading the model data, or reverse engineering the model will fail. All properties of the Complex File JDBC Driver are documented in the Oracle Fusion Middleware Connectivity and Knowledge Modules Guide for Oracle Data Integrator, in Appendix C: Oracle Data Integrator Driver for Complex Files Reference.

    David Allan has created a great viewlet, Complex File Processing - 0 to 60, which shows the creation of a Complex File data server as well as a model based on this server.

    How to Create Models Based on a Complex File Schema

    Once the physical schema and logical schema have been created, the Complex File can be used to create a Model as if it were based on a database. When reverse-engineering the Model, data stores (tables) for each XSD element of complex type will be created. Use of complex files as sources is straightforward; when using them as targets, make sure that all dependent tables have matching PK-FK pairs; the same applies to the XML driver as well.

    Debugging and Error Handling

    There are different ways to test an nXSD file. The Native Format Builder Wizard can be used even if the nXSD wasn't created in it; it will show issues related to the schema and/or test data. In ODI, the nXSD will be parsed and run against the existing test file when testing a connection in the Data Server.
    If either the nXSD has an error or the data is non-compliant with the schema, an error will be displayed. Sample error message:

        Error while reading native data. [Line=1, Col=5] Not enough data available in the input,
        when trying to read data of length "19" for "element with name D1" from the specified
        position, using "style" as "fixedLength" and "length" as "". Ensure that there is enough
        data from the specified position in the input.

    Complex File FAQ

    Is the size of the native file limited by available memory?
    No. Since the native data is streamed through the driver, only the available space in the staging database limits the size of the data. There are limits on individual field sizes, though; a single large object field needs to fit in memory.

    Should I always use the complex file driver instead of the file driver in ODI now?
    No. Use the file technology for all simple file parsing tasks, for example any fixed-length or delimited files that just have one row format and can be mapped into a simple table. Because of its narrow assumptions, the ODI file driver is easy to configure within ODI and can stream file data without writing it into a database. The complex file driver should be used whenever the use case cannot be handled through the file driver.

    Are we generating XML out of flat files before we write it into a database?
    We don't materialize any XML as part of parsing a flat file, either in memory or on disk. The data produced by the XML parser is streamed in Java objects that just use the XSD-derived nXSD schema as their type system. We use the nXSD schema because it is the standard for describing complex flat file metadata in Oracle Fusion Middleware, and it enables users to share schemas across products.

    Is the nXSD file interchangeable with SOA Suite?
    Yes, ODI can use the same nXSD files as SOA Suite, allowing mixed use cases with the same data format.

    Can I start the Native Format Builder from the ODI Studio?
    No, the Native Format Builder has to be started from a JDeveloper instance with BPEL. You can get BPEL as part of the SOA Suite bundle. Users without SOA Suite can manually develop nXSD files using XSD editors.

    When is the database data written back to the native file?
    Data is synchronized using the SYNCHRONIZE and CREATE FILE commands, and when the JDBC connection is closed. It is recommended to set the ro or read_only property to true when a file is exclusively used for reading, so that no unnecessary write-backs occur.

    Is the nXSD metadata part of the ODI Master or Work Repository?
    No, the data server definition in the master repository only contains the JDBC URL with file paths; the nXSD files have to be accessible on the file systems where the JDBC driver is executed during production, either by copying or by using a network file system.

    Where can I find sample nXSD files?
    The Application Server Adapter Users Guide contains nXSD samples for various different use cases.
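    To make the relational mapping concrete, reading the parsed example back out of the driver's tables is plain SQL; a sketch against the tables described earlier (the join shape is the general pattern, with column names as listed):

        -- Header and customer rows for the parsed file, in original file order.
        SELECT r.SNPSFILENAME, h.BRANCH, h.LISTDATE,
               a.NAME, a.STREET, a.CITY
        FROM ROOT r
        JOIN HEADER  h ON h.ROOTFK = r.ROOTPK
        JOIN ADDRESS a ON a.ROOTFK = r.ROOTPK
        ORDER BY a.ROWORDER;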

    Read the article

  • Asp.Net MVC Create/Update/Delete with composite key

    - by nubm
    Hi, I'm not sure how to use a composite key. My Categories table has CategoryId (PK, FK), LanguageId (PK, FK), and CategoryName:

        CategoryId | LanguageId | CategoryName
        1          | 1          | Car
        1          | 2          | Auto
        1          | 3          | Automobile

    etc. I'm following this design. The default action looks like

        //
        // GET: /Category/Edit/5
        public ActionResult Edit(int id)
        {
            return View();
        }

    and the ActionLink

        <%= Html.ActionLink("Edit", "Edit", new { id = item.CategoryId }) %>

    Should I use something like

        <%= Html.ActionLink("Edit", "Edit", new { id = (item.CategoryId + "-" + item.LanguageId) }) %>

    so the URL is /Category/Edit/5-5, and

        //
        // GET: /Category/Edit/5-5
        public ActionResult Edit(string id)
        {
            // parse id
            return View();
        }

    or change the route to something like /Category/Edit/5/5? Or is there some better way?

    Read the article

  • Fluent NHibernate - How to map a non-nullable foreign key that exists in two joined tables

    - by vakman
    I'm mapping a set of membership classes for my application using Fluent NHibernate. I'm mapping the classes to the ASP.NET membership database structure. The database schema relevant to the problem looks like this:

        ASPNET_USERS
            UserId        PK
            ApplicationId FK NOT NULL
            other user columns ...

        ASPNET_MEMBERSHIP
            UserId        PK, FK
            ApplicationId FK NOT NULL
            other membership columns ...

    There is a one-to-one relationship between these two tables. I'm attempting to join the two tables together and map data from both tables to a single 'User' entity, which looks like this:

        public class User
        {
            public virtual Guid Id { get; set; }
            public virtual Guid ApplicationId { get; set; }
            // other properties to be mapped from aspnetuser/membership tables
            ...

    My mapping file is as follows:

        public class UserMap : ClassMap<User>
        {
            public UserMap()
            {
                Table("aspnet_Users");
                Id(user => user.Id).Column("UserId").GeneratedBy.GuidComb();
                Map(user => user.ApplicationId);
                // other user mappings

                Join("aspnet_Membership", join =>
                {
                    join.KeyColumn("UserId");
                    join.Map(user => user.ApplicationId);
                    // Map other things from membership to 'User' class
                });
            }
        }

    If I try to run with the code above, I get a FluentConfiguration exception: "Tried to add property 'ApplicationId' when already added." If I remove the line "Map(user => user.ApplicationId);" or change it to "Map(user => user.ApplicationId).Not.Update().Not.Insert();" then the application runs, but I get the following exception when trying to insert a new user:

        Cannot insert the value NULL into column 'ApplicationId', table 'ASPNETUsers_Dev.dbo.aspnet_Users';
        column does not allow nulls. INSERT fails. The statement has been terminated.

    And if I leave the .Map(user => user.ApplicationId) as it originally was and make either of those changes to the join.Map(user => user.ApplicationId), then I get the same exception as above, except of course the exception is related to an insert into the aspnet_Membership table.

    So... how do I do this kind of mapping, assuming I can't change my database schema?

    Read the article

  • SQL query: how to check that all records from a three-table join share the same value

    - by Stefano
    Hello. Since I'm a poor SQL developer, I need support writing a SQL query for the following scenario (just a simplified example of my situation).

    I've got 3 tables, say an employee table, a department table, and a companybranch table. The dept column on the employee table is an FK to the department table; the branch column on the department table is an FK to the companybranch table. Finally, several employees are "marked" with the same value.

    Is there a way to select all employees with the same "mark" and, in the same query, check that they work in the same company branch?

    Thank you in advance,
    Stefano
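    A sketch of one way to phrase it, with table and column names guessed from the description: group the employees by mark and count how many distinct branches each group touches.

        -- A branch count of 1 means everyone with that mark works in the same branch.
        SELECT e.mark,
               COUNT(DISTINCT d.branch) AS branch_count
        FROM employee e
        JOIN department d ON d.dept_id = e.dept
        GROUP BY e.mark
        HAVING COUNT(DISTINCT d.branch) = 1;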

    Read the article

  • MySQL fast insert dependent on flag from a separate table

    - by Stuart P
    Hi all. For work I'm dealing with a large database (160 million + rows a year, 10 years of data) and have a quandary: a large percentage of the data we upload is null data, and I'd like to stop it from being uploaded. The data in question is spatial in nature, so I have one table like so:

        idLocations  (auto-increment int, PK)
        X            (float)
        Y            (float)
        Alwaysignore (bool)

    which is used as a reference in a second table like so:

        idLocations (int, PK, "FK")
        idDates     (int, PK, "FK")
        DATA1       (float)
        DATA2       (float)
        ...
        DATA7       (float)

    So, ideally I'd like to find a method where I can do something like:

        INSERT INTO tblData(idLocations, idDates, DATA1, ..., DATA7)
        VALUES (...), ..., (...)
        WHERE VALUES(idLocations) NOT LIKE (SELECT FROM tblLocation WHERE alwaysignore=TRUE)
        ON DUPLICATE KEY UPDATE DATA1=VALUES(DATA1)

    So, for my large batch of input data (250 values in a block), ignore the inserts where the idLocations value matches an idLocations value flagged with alwaysignore.

    Anyone have any suggestions? Cheers.
    -Stuart

    Other details: running MySQL on a semi-dedicated machine, MyISAM engine for the tables.
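    One hedged way to get that effect, sketched below: land each 250-value block in a staging table first (the staging table name is illustrative), then do the real insert with the flag filtered in a join; INSERT ... VALUES has no WHERE clause, but INSERT ... SELECT does.

        -- Real insert filters out flagged locations; duplicates still update.
        INSERT INTO tblData (idLocations, idDates, DATA1, DATA2, DATA3, DATA4, DATA5, DATA6, DATA7)
        SELECT s.idLocations, s.idDates, s.DATA1, s.DATA2, s.DATA3, s.DATA4, s.DATA5, s.DATA6, s.DATA7
        FROM tblStaging s
        JOIN tblLocation l ON l.idLocations = s.idLocations
        WHERE l.Alwaysignore = FALSE
        ON DUPLICATE KEY UPDATE
            DATA1 = VALUES(DATA1),
            DATA2 = VALUES(DATA2),
            DATA3 = VALUES(DATA3),
            DATA4 = VALUES(DATA4),
            DATA5 = VALUES(DATA5),
            DATA6 = VALUES(DATA6),
            DATA7 = VALUES(DATA7);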

    Read the article

  • mysql query performance help

    - by Stefano
    Hi, I have a quite large table storing words contained in email messages:

        mysql> explain t_message_words;
        +----------------+---------+------+-----+---------+----------------+
        | Field          | Type    | Null | Key | Default | Extra          |
        +----------------+---------+------+-----+---------+----------------+
        | mwr_key        | int(11) | NO   | PRI | NULL    | auto_increment |
        | mwr_message_id | int(11) | NO   | MUL | NULL    |                |
        | mwr_word_id    | int(11) | NO   | MUL | NULL    |                |
        | mwr_count      | int(11) | NO   |     | 0       |                |
        +----------------+---------+------+-----+---------+----------------+

    The table contains about 100M rows; mwr_message_id is an FK to the messages table, mwr_word_id is an FK to the words table, and mwr_count is the number of occurrences of word mwr_word_id in message mwr_message_id.

    To calculate the most used words, I use the following query:

        SELECT SUM(mwr_count) AS word_count, mwr_word_id
        FROM t_message_words
        GROUP BY mwr_word_id
        ORDER BY word_count DESC
        LIMIT 100;

    That runs almost forever (more than half an hour on the test server); SHOW PROCESSLIST shows the query stuck in the state "Copying to tmp table" (Time: 1955 seconds and counting).

    Is there anything I can do to "speed up" the query (apart from adding more RAM, more CPU, faster disks)?

    Thank you in advance,
    stefano
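    One thing worth trying before hardware, offered as a hedged suggestion: a composite index covering the grouped and summed columns lets MySQL read the aggregate straight off the index instead of copying 100M rows to a temporary table.

        -- Covering index: the GROUP BY column first, then the summed column.
        ALTER TABLE t_message_words ADD INDEX ix_word_count (mwr_word_id, mwr_count);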

    Read the article

  • named_scope or find_by_sql?

    - by keruilin
    I have three models: User, Award, and Trophy. The associations are:

        User   has many awards
        Trophy has many awards
        Award  belongs to user
        Award  belongs to trophy
        User   has many trophies through awards

    Therefore, user_id is an FK in awards, and trophy_id is an FK in awards. In the Trophy model, which is an STI model, there's a trophy_type column.

    I want to return a list of users who have been awarded a specific trophy (trophy_type = 'GoldTrophy'). Users can be awarded the same trophy more than once. (I don't want distinct results.) Can I use a named_scope? How about chaining them? Or do I need to use find_by_sql? Either way, how would I code it?

    Read the article

  • SQL Server: export data via SQL query?

    - by rlb.usa
    I have FKs and PKs all over my DB, and table data needs to be transferred in a certain order or else I get FK/PK insertion errors. I'm tired of executing the wizard again and again to transfer data one table at a time. In the SQL Server export data wizard there is an option to "Write a query to specify the data to transfer". I'd like to write the query myself and specify the correct order. Will this solve my problem? How do I do this? Can you provide a sample query (or a link to one)?

    The databases are on two different servers, SQL Server 2008 on each; the database names and permissions are the same; each table name and column is the same; and I need Identity Insert for each table.
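    A hedged illustration of the manual alternative: the ordering problem is between tables (parents before children), not within a query, so one script over a linked server can do the whole transfer in FK order. Server, database, table, and column names below are placeholders.

        -- Run on the target server; copy parents first so FK targets exist.
        SET IDENTITY_INSERT TargetDb.dbo.Parent ON;
        INSERT INTO TargetDb.dbo.Parent (ParentId, Name)
        SELECT ParentId, Name
        FROM SourceServer.SourceDb.dbo.Parent;
        SET IDENTITY_INSERT TargetDb.dbo.Parent OFF;

        SET IDENTITY_INSERT TargetDb.dbo.Child ON;
        INSERT INTO TargetDb.dbo.Child (ChildId, ParentId, Detail)
        SELECT ChildId, ParentId, Detail
        FROM SourceServer.SourceDb.dbo.Child;
        SET IDENTITY_INSERT TargetDb.dbo.Child OFF;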

    Read the article

  • group inlines in django admin

    - by pablo
    Hi, I have two models, Model1 and Model2. Model2 has an FK to Model1 and an FK to itself. In the admin I show Model2 as inlines in the Model1 change_form.

    I want to modify the way the inlines are shown in the admin. I need to group all the instances that have the same parent_model2 and display them as a readonly field with a string of 'children' in the parent Model2 instance. I know how to use itertools.groupby (or the Django version) but don't know how to do it in the admin.

    What should I override to be able to iterate over all the Model2 instances, group them by parent, add children to the parent, and remove children from the inlines?

        class Model1(models.Model):
            name = models.CharField()

        class Model2(models.Model):
            name = models.CharField()
            fk_model1 = models.ForeignKey(Model1, blank=True, null=True)
            parent_model2 = models.ForeignKey('self', blank=True, null=True)

    Thanks

    Read the article

  • Many to Many Association Tables - Is it customary to put additional columns in these tables?

    - by Randy Minder
    We've encountered the following situation in our database. We have table 'A' and table 'B', which have an M2M relationship. The association table is named 'AB' and contains an FK column to table 'A' and an FK column to table 'B'. Now we've identified a need to store additional data about this association: for example, a date when the association occurred, who made the association, etc. We've decided to put these additional columns in the 'AB' association table. However, something tells me this is frowned upon by database purists. On the other hand, it makes no sense to us to create yet another table to store this associated data. What's the prevailing thought on this?
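    For reference, a minimal sketch of the table being described, with the extra association-level columns in place (names illustrative):

        -- Junction table that carries data about the association itself.
        CREATE TABLE AB (
            A_Id         int      NOT NULL REFERENCES A (Id),
            B_Id         int      NOT NULL REFERENCES B (Id),
            AssociatedAt datetime NOT NULL,
            AssociatedBy int      NOT NULL,  -- e.g. FK to a Users table
            PRIMARY KEY (A_Id, B_Id)
        );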

    Read the article

  • PostgreSQL - disabling constraints

    - by azp74
    I have a table with approx 5 million rows which has an FK constraint referencing the primary key of another table (also approx 5 million rows). I need to delete about 75000 rows from both tables. I know that if I try doing this with the FK constraint enabled, it's going to take an unacceptable amount of time.

    Coming from an Oracle background, my first thought was to disable the constraint, do the delete, and then re-enable the constraint. Postgres appears to let me disable constraint triggers if I am a superuser (I'm not, but I am logging in as the user that owns/created the objects), but that doesn't seem to be quite what I want.

    The other option is to drop the constraint and then reinstate it. I'm worried that rebuilding the constraint is going to take ages given the size of my tables. Any thoughts?
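    A sketch of the drop-and-recreate route (constraint, table, and predicate names are placeholders). On newer PostgreSQL versions the re-add can be split into ADD CONSTRAINT ... NOT VALID followed by a later VALIDATE CONSTRAINT, which defers the full-table recheck:

        BEGIN;

        ALTER TABLE child_table DROP CONSTRAINT child_table_parent_id_fkey;

        DELETE FROM child_table  WHERE batch_id = 42;  -- illustrative predicate
        DELETE FROM parent_table WHERE batch_id = 42;

        ALTER TABLE child_table
            ADD CONSTRAINT child_table_parent_id_fkey
            FOREIGN KEY (parent_id) REFERENCES parent_table (id) NOT VALID;

        COMMIT;

        -- Later, at a quiet time: recheck the existing rows.
        ALTER TABLE child_table VALIDATE CONSTRAINT child_table_parent_id_fkey;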

    Read the article

  • LINQ DataLoadOptions - Retrieval of data via Fulltext/Broken Foreign Key Relationship

    - by Alex
    I've hit a brick wall. I have a SQL function:

        FUNCTION [dbo].[ContactsFTS] (@searchtext nvarchar(4000))
        RETURNS TABLE
        AS
        RETURN
            SELECT *
            FROM Contacts
            INNER JOIN CONTAINSTABLE(Contacts, *, @searchtext) AS KEY_TBL
                    ON Contacts.Id = KEY_TBL.[KEY]

    which I am calling via LINQ:

        public IQueryable<ContactsFTSResult> SearchByFullText(String searchText)
        {
            return db.ContactsFTS(searchText);
        }

    I am projecting the ContactsFTSResult collection into a List<Contact> which is then given to my viewmodel. Here is the problem: my Contacts table (and therefore the Contact object created via LINQ to SQL) has multiple FK relationships to other information, such as Contact.BillingAddressId being an FK to an Address.Id. That information is missing after I do the fulltext search (e.g. if I try to access Contact.BillingAddress it is null).

    Can I add this information somehow via DataLoadOptions? I tried LoadWith<Contact>(c => c.BillingAddress) but this doesn't work, I assume because of the fact that I'm calling the function instead of doing the whole query via LINQ.
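    If LoadWith really is bypassed for function results, one hedged workaround lives on the SQL side: widen the function so the related columns come back with each hit. A sketch (the Address join and column aliases are illustrative):

        -- Variant of the function that carries billing-address columns along.
        ALTER FUNCTION [dbo].[ContactsFTS] (@searchtext nvarchar(4000))
        RETURNS TABLE
        AS
        RETURN
            SELECT c.*,
                   a.Street AS BillingStreet,  -- illustrative Address columns
                   a.City   AS BillingCity
            FROM Contacts c
            INNER JOIN CONTAINSTABLE(Contacts, *, @searchtext) AS KEY_TBL
                    ON c.Id = KEY_TBL.[KEY]
            LEFT JOIN Address a ON a.Id = c.BillingAddressId;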

    Read the article

  • Where is the CleanUp method?

    - by Apocatastasis
    Hi. I was working fine with Subsonic 3 and the templates until November 2009. Now I'm getting the source directly from Git, but the ActiveRecord template doesn't generate correctly anymore. In line 334 of the ActiveRecord template for SQLite:

        return from items in repo.GetAll()
               where items.<#=CleanUp(fk.OtherColumn)#> == <#=CleanUp(fk.ThisColumn)#>

    Error: The name 'CleanUp' doesn't exist in the current context.

    But I can't find the CleanUp method in the Subsonic.Core project. Does anyone know anything about this?

    Read the article

  • schema for storing different varchar fields over time?

    - by Henry
    This app I'm working on needs to store some metadata fields about an entity. The problem is that we can already foresee that these fields are going to change a lot in the future. I'm using Hibernate (in ColdFusion) and each entity's properties are translated to one column in the entity table, but altering table columns later down the road will be costly and error-prone, right? Should I go for something like this?

        MetaDataField
        -------------
        metaDataFieldID (PK), name

        FieldValue
        ----------
        EntityID (PK, FK), metaDataFieldID (PK, FK), value [varchar(255)]

    Is this common? Anything to watch out for?

    P.S. I also thought of using XML on SQL Server 05+. After talking to some people, it seems like that is not a viable solution, because it will be too slow for certain queries needed for reporting purposes.
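    To see the main trade-off of this name/value design, here is a hedged sketch of what reading fields back out looks like; the 'color' and 'size' field names are hypothetical:

        -- Conditional aggregation pivots the rows back into columns, one CASE per field.
        SELECT fv.EntityID,
               MAX(CASE WHEN f.name = 'color' THEN fv.value END) AS color,
               MAX(CASE WHEN f.name = 'size'  THEN fv.value END) AS size
        FROM FieldValue fv
        JOIN MetaDataField f ON f.metaDataFieldID = fv.metaDataFieldID
        GROUP BY fv.EntityID;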

    Read the article

  • Fluent NHibernate: Entity from one table, but reference maps to another table?

    - by Andy
    Given the following tables:

        Product
        -----------
        ProductId      : int (PK)
        ProductVersion : int

        ProductHistory
        -----------
        ProductId      : int (PK)
        ProductVersion : int (PK)

        Item
        -----------
        ItemId         : int (PK)
        ProductId      : int (FK) -- ProductId + ProductVersion relates to ProductHistory
        ProductVersion : int (FK)

    And the following classes:

        public class Product { }

        public class Item
        {
            public Product Product { get; set; }
        }

    What I want to happen is this: we get a Product from the Product table and assign it to the Item.Product property, but that Item.Product property should map to ProductHistory. The idea is that only the latest version of a product is in the main Product table, so we allow customers to search against that table (so that if each product has 4 versions and there are 1000 products, we only need to query 1000 products, not 1000 products * 4 versions of each).

    Any idea how to accomplish this? Thanks, Andy

    Read the article

  • Prepare and import data into existing database

    - by Álvaro G. Vicario
    I maintain a PHP application with a SQL Server backend. The DB structure is roughly this:

        lot
        ===
        lot_id (pk, identity)
        lot_code

        building
        ========
        building_id (pk, identity)
        lot_id (fk)

        inspection
        ==========
        inspection_id (pk, identity)
        building_id (fk)
        date
        inspector
        result

    The database already has lots and buildings, and I need to import some inspections. Key points are:

        - It's a one-time initial load.
        - Data comes in an Excel file.
        - The Excel data is unaware of DB autogenerated IDs: inspections must be linked to buildings through their lot_code.

    What are my options to do such a data load?

        date       inspector  result lot_code
        ========== ========== ====== ========
        31/12/2009 John Smith Pass   987654X
        28/02/2010 Bill Jones Fail   123456B
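    A hedged sketch of one route: bulk-load the spreadsheet into a staging table (name and types illustrative), then resolve lot_code to the generated IDs with an INSERT ... SELECT. It assumes one building per lot; if a lot can have several buildings, the join needs a rule for picking one.

        -- Staging table mirrors the Excel columns.
        CREATE TABLE staging_inspection (
            date      datetime     NOT NULL,
            inspector varchar(100) NOT NULL,
            result    varchar(50)  NOT NULL,
            lot_code  varchar(20)  NOT NULL
        );

        -- Resolve lot_code -> lot_id -> building_id, then insert.
        INSERT INTO inspection (building_id, date, inspector, result)
        SELECT b.building_id, s.date, s.inspector, s.result
        FROM staging_inspection s
        JOIN lot      l ON l.lot_code = s.lot_code
        JOIN building b ON b.lot_id   = l.lot_id;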

    Read the article

  • how to run foreach loop with ternary condition

    - by I Like PHP
    I have two foreach loops to display some data, but I want to use a single foreach based on a database result. That is, if the database returns any rows, then foreach ($first as $fk => $fv) should execute; otherwise foreach ($other as $ok) should execute. I'm using the ternary operator below, which gives a parse error:

        $n = $db->numRows($taskData); // database results
        <?php ($n) ? foreach ($first as $fk => $fv) : foreach ($other as $ok) { ?>
            <table><tr><td>......some data...</td></tr></table>
        <?php } ?>

    Please suggest how to handle such a condition via the ternary operator, or any other idea. Thanks.

    Read the article

  • JPA One to Many using JoinTable Error

    - by user553015
    I am trying to model a 1:N (Person and Address) relationship using a junction table (Person_Address):

        1. Person (personId PK)
        2. Address (addressId PK)
        3. PersonAddress (personId, addressId composite PK;
           personId FK references Person, addressId FK references Address)

        @Entity
        public class Person {
            @OneToMany
            @JoinTable(
                name = "PersonAddress",
                joinColumns = @JoinColumn(name = "personId"),
                inverseJoinColumns = @JoinColumn(name = "addressId")
            )
            public Set<Address> getAddresses() { ... }
            ...
        }

    I encounter the following error and am not able to find any solution:

        Caused by: org.hibernate.MappingException: Could not determine type for:
        com.realestate.details.Address, at table: Person, for columns:
        [org.hibernate.mapping.Column(address)]
            at org.hibernate.mapping.SimpleValue.getType(SimpleValue.java:269)
            at org.hibernate.mapping.SimpleValue.isValid(SimpleValue.java:253)
            at org.hibernate.mapping.Property.isValid(Property.java:185)
            at org.hibernate.mapping.PersistentClass.validate(PersistentClass.java:440)
            at org.hibernate.mapping.RootClass.validate(RootClass.java:192)
            at org.hibernate.cfg.Configuration.validate(Configuration.java:1108)
            at org.hibernate.cfg.Configuration.buildSessionFactory(Configuration.java:1293)

    Read the article

  • Possible to get PrimaryKey IDs back after a SQL BulkCopy?

    - by chobo2
    Hi, I am using C# and SqlBulkCopy. I have a problem, though: I need to do a mass insert into one table, then another mass insert into another table. These two have a PK/FK relationship.

        Table A
            Field1 - PK, auto-incrementing (easy to do SqlBulkCopy, as it's straightforward)

        Table B
            Field1 - PK/FK - this field makes the relationship and is also the PK of this table.
            It is not auto-incrementing and needs to have the same id as the row in Table A.

    So these tables have a one-to-one relationship, but I am unsure how to get back all those PK ids that the mass insert made, since I need them for Table B.
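    SqlBulkCopy itself doesn't report the generated keys; one hedged pattern keeps the work in SQL: bulk copy into a staging table, then move the rows with a statement that captures the new identities. The MERGE form below (SQL Server 2008+) can OUTPUT both the staging key and the new identity in one pass; all table and column names are placeholders.

        -- Map each staging row to the identity generated for Table A.
        DECLARE @Map TABLE (StagingId int, NewField1 int);

        MERGE INTO TableA AS t
        USING StagingA AS s ON 1 = 0          -- never matches: every row is inserted
        WHEN NOT MATCHED THEN
            INSERT (SomeColumn) VALUES (s.SomeColumn)
        OUTPUT s.StagingId, inserted.Field1 INTO @Map (StagingId, NewField1);

        -- Use the map to build the matching Table B rows.
        INSERT INTO TableB (Field1, OtherData)
        SELECT m.NewField1, s.OtherData
        FROM @Map m
        JOIN StagingA s ON s.StagingId = m.StagingId;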

    Read the article

  • Linq query joining with a subquery

    - by Alan Fisher
    I am trying to reproduce a SQL query using a LINQ to Entities query. The following SQL works fine; I just don't see how to do it in LINQ. I have tried for a few hours today but I'm just missing something.

        SELECT h.ReqID, rs.RoutingSection
        FROM ReqHeader h
        JOIN ReqRoutings rr
          ON rr.ReqRoutingID = (SELECT TOP 1 r1.ReqRoutingID
                                FROM ReqRoutings r1
                                WHERE r1.ReqID = h.ReqID
                                ORDER BY r1.ReqRoutingID DESC)
        JOIN ReqRoutingSections rs
          ON rs.RoutingSectionID = rr.RoutingSectionID

    Edit: Here is my table schema:

        Requisitions:
            ReqID PK string
            ReqDate datetime
            etc...

        ReqRoutings:
            ID PK int
            ReqID FK
            RoutingSection FK int
            RoutingDate

        ReqRoutingSections:
            Id PK int
            RoutingSection string

    The idea is that each Requisition can be routed many times; for my query I need the last RoutingSection to be returned along with the Requisition info.

    Sample data:

        Requisitions: -- 1 record
            ReqID 123456, ReqDate '12/1/2012'

        ReqRoutings: -- 3 records
            id 1, ReqID 123456, RoutingSection 3, RoutingDate '12/2/2012'
            id 2, ReqID 123456, RoutingSection 2, RoutingDate '12/3/2012'
            id 3, ReqID 123456, RoutingSection 4, RoutingDate '12/4/2012'

        ReqRoutingSections: -- 3 records
            id 2, Supervision
            id 3, Safety
            id 4, Quality Control

    The results of the query would be:

        ReqID = '123456'
        RoutingSection = 'Quality Control'  -- last RoutingSection the requisition was routed to

    Read the article
