Search Results

Search found 6628 results on 266 pages for 'foreign keys'.

Page 240/266 | < Previous Page | 236 237 238 239 240 241 242 243 244 245 246 247  | Next Page >

  • Django Multi-Table Inheritance VS Specifying Explicit OneToOne Relationship in Models

    - by chefsmart
    Hope all this makes sense :) I'll clarify via comments if necessary. With that out of the way...

    Using django.contrib.auth gives us User and Group, among other useful things that I can't do without (like basic messaging). In my app I have several different types of users. A user can be of only one type. That would easily be handled by groups, with a little extra care. However, these different users are related to each other in hierarchies / relationships. Let's take a look at these users:

    - Principals - "top level" users
    - Administrators - each administrator reports to a Principal
    - Coordinators - each coordinator reports to an Administrator

    Apart from these there are other user types that are not directly related, but may get related later on. For example, "Company" is another type of user, and can have various "Products", and products may be supervised by a "Coordinator". "Buyer" is another kind of user that may buy products.

    Now all these users have various other attributes, some of which are common to all types of users and some of which are distinct to only one user type. For example, all types of users have to have an address. On the other hand, only the Principal user belongs to a "BranchOffice". Another point, stated above, is that a User can only ever be of one type. The app also needs to keep track of who created and/or modified Principals, Administrators, Coordinators, Companies, Products etc. (so that's two more links to the User model).

    In this scenario, is it a good idea to use Django's multi-table inheritance, as follows:

        from django.contrib.auth.models import User

        class Principal(User):
            branchoffice = models.ForeignKey(BranchOffice)
            landline = models.CharField(blank=True, max_length=20)
            mobile = models.CharField(blank=True, max_length=20)
            created_by = models.ForeignKey(User, editable=False, blank=True, related_name="principalcreator")
            modified_by = models.ForeignKey(User, editable=False, blank=True, related_name="principalmodifier")

    Or should I go about doing it like this:

        class Principal(models.Model):
            user = models.OneToOneField(User, blank=True)
            branchoffice = models.ForeignKey(BranchOffice)
            landline = models.CharField(blank=True, max_length=20)
            mobile = models.CharField(blank=True, max_length=20)
            created_by = models.ForeignKey(User, editable=False, blank=True, related_name="principalcreator")
            modified_by = models.ForeignKey(User, editable=False, blank=True, related_name="principalmodifier")

    Please keep in mind that there are other user types that are related via foreign keys, for example:

        class Administrator(models.Model):
            principal = models.ForeignKey(Principal, help_text="The supervising principal for this Administrator")
            user = models.OneToOneField(User, blank=True)
            province = models.ForeignKey(Province)
            landline = models.CharField(blank=True, max_length=20)
            mobile = models.CharField(blank=True, max_length=20)
            created_by = models.ForeignKey(User, editable=False, blank=True, related_name="administratorcreator")
            modified_by = models.ForeignKey(User, editable=False, blank=True, related_name="administratormodifier")

    I am aware that Django uses a one-to-one relationship for multi-table inheritance behind the scenes. I am just not qualified enough to decide which is the sounder approach.
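
    A sketch of how the two designs differ at the call site may help frame the trade-off. This is illustrative, not from the question; it assumes the Django 1.x-era APIs used above, and that `bo` is a saved BranchOffice instance:

        # Multi-table inheritance: a Principal IS a User, so auth fields are
        # inherited and the auth machinery sees the subclass row directly.
        p = Principal.objects.create(username="alice", branchoffice=bo)
        p.username      # inherited from User
        p.user_ptr      # the implicit OneToOne link Django created

        # Explicit OneToOneField: a Principal HAS a User, so you create both
        # objects and traverse the link yourself.
        u = User.objects.create(username="bob")
        p = Principal.objects.create(user=u, branchoffice=bo)
        p.user.username  # one explicit hop
        u.principal      # reverse accessor; raises DoesNotExist if absent

    Either way the schema is nearly identical; the difference is whether Django hides the join or your code spells it out.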

    Read the article

  • perl dancer: passing database info to template

    - by Bubnoff
    Following the Dancer tutorial here: http://search.cpan.org/dist/Dancer/lib/Dancer/Tutorial.pod

    I'm using my own sqlite3 database with this schema:

        CREATE TABLE if not exists location (
            location_code TEXT PRIMARY KEY,
            name TEXT,
            stations INTEGER);
        CREATE TABLE if not exists session (
            id INTEGER PRIMARY KEY,
            date TEXT,
            sessions INTEGER,
            location_code TEXT,
            FOREIGN KEY(location_code) REFERENCES location(location_code));

    My Dancer code (helloWorld.pm) for the database:

        package helloWorld;
        use Dancer;
        use DBI;
        use File::Spec;
        use File::Slurp;
        use Template;
        our $VERSION = '0.1';

        set 'template' => 'template_toolkit';
        set 'logger' => 'console';
        my $base_dir = qq(/home/automation/scripts/Area51/perl/dancer);

        # database crap
        sub connect_db {
            my $db = qw(/home/automation/scripts/Area51/perl/dancer/sessions.sqlite);
            my $dbh = DBI->connect("dbi:SQLite:dbname=$db", "", "",
                                   { RaiseError => 1, AutoCommit => 1 });
            return $dbh;
        }

        sub init_db {
            my $db = connect_db();
            my $file = qq($base_dir/schema.sql);
            my $schema = read_file($file);
            $db->do($schema) or die $db->errstr;
        }

        get '/' => sub {
            my $branch_code = qq(BPT);
            my $dbh = connect_db();
            my $sql = q(SELECT * FROM session);
            my $sth = $dbh->prepare($sql) or die $dbh->errstr;
            $sth->execute or die $dbh->errstr;
            my $key_field = q(id);
            template 'show_entries.tt', {
                'branch' => $branch_code,
                'data'   => $sth->fetchall_hashref($key_field),
            };
        };

        init_db();
        true;

    I tried the example template on the site, but it doesn't work:

        <% FOREACH id IN data.keys.nsort %>
        <li>Date is: <% data.$id.sessions %> </li>
        <% END %>

    It produces a page, but with no data. How do I troubleshoot this, given that no clues come up in the console/CLI? Thanks, Bubnoff

    Read the article

  • What is a good platform for building a game framework targeting both web and native languages?

    - by fuzzyTew
    I would like to develop (or find, if one is already in development) a framework with support for accelerated graphics and sound, built on a system flexible enough to compile to the following:

    - native ppc/x86/x86_64/arm binaries, or a language which compiles to them
    - javascript
    - actionscript bytecode, or a language which compiles to it (actionscript 3, haxe)
    - optionally, java

    I imagine, for example, creating an API where I can open windows and make OpenGL-like calls, and the framework maps this in a relatively efficient manner to either WebGL with a canvas object, 3d graphics in Flash, OpenGL ES 2 with EGL, or desktop OpenGL in an X11, Windows, or Cocoa window. I have so far looked into these avenues:

    Building the game library in haXe
    - Pros: Targets exist for php, javascript, actionscript bytecode, c++. High-level, object-oriented language.
    - Cons: No support for finally{} blocks or destructors, making resource cleanup difficult. The C++ target does not allow room for producing highly optimized libraries -- the foreign function interface requires all primitive types to be boxed in a wrapper object, as if writing bindings for a scripting language; this feels unideal for real-time graphics and audio, especially for exporting low-level functions. Doesn't seem quite mature yet.

    Using the C preprocessor to create a translator, writing programs entirely with macros
    - Pros: CPP is widespread and simple to use.
    - Cons: This is an arduous task and probably the wrong tool for the job. CPP implementations differ widely in support for features (e.g. xcode cpp has no variadic macros despite claiming C99 compliance). There is little-to-no room for optimization in this route.

    Using llvm's support for multiple backends to target c/c++ to web languages
    - Pros: Can code in c/c++. LLVM is a very mature, highly optimizing compiler, performing e.g. global inlining. Targets exist for actionscript (alchemy) and javascript (emscripten).
    - Cons: The actionscript target is closed source, unmaintained, and buggy. The javascript targets do not use features of HTML5 for appropriate optimization (e.g. linear memory with typed arrays) and are immature. An LLVM target must convert from low-level bytecode, so high-level constructs are lost, and bloated, unreadable code is created from translating individual instructions, which may be more difficult for an unprepared JIT to optimize. "jump" instructions cause problems for languages with no "goto" statement.

    Using libclang to write a translator from C/C++ to web languages
    - Pros: A beautiful parsing library providing easy access to the code structure. Can code in C/C++. Has sponsored developer effort from Apple.
    - Cons: Incomplete; the current feature set targets IDEs. Basic operators are unexposed and must be manually parsed from the returned AST element to be identified. Translating code prior to compilation may forgo optimizations assumed in c/c++, such as inlining.

    Creating new code generators for clang to translate into web languages
    - Pros: Can code in C/C++, as with libclang.
    - Cons: There is no API; the code structure is unstable. A much larger job than using libclang; the innards of clang are complex.

    Building the game library in Common Lisp
    - Pros: Flexible, ancient, well-developed language. Extensive introspection should ease writing translators. Translators exist for at least javascript.
    - Cons: Unfamiliar language. No standardized library functions; widely varying implementations.

    Which of these avenues should I pursue? Do you know of any others, or any systems that might be useful? Does a general project like this exist somewhere already? Thank you for any input.

    Read the article

  • Complex SQL query with group by and two rows in one

    - by Ricket
    Okay, I need help. I'm usually pretty good at SQL queries, but this one baffles me. By the way, this is not a homework assignment; it's a real situation in an Access database, and I've written the requirements below myself. Here is my table layout. It's in Access 2007 if that matters; I'm writing the query using SQL.

    - Id (primary key)
    - PersonID (foreign key)
    - EventDate
    - NumberOfCredits
    - SuperCredits (boolean)

    There are events that people go to. They can earn normal credits, or super credits, or both at one event. The SuperCredits column is true if the row represents a number of super credits earned at the event, or false if it represents normal credits. So for example, if there is an event which person 174 attends, and they earn 3 normal credits and 1 super credit at the event, the following two rows would be added to the table:

        ID  PersonID  EventDate  NumberOfCredits  SuperCredits
        1   174       1/1/2010   3                false
        2   174       1/1/2010   1                true

    It is also possible that the person could have done two separate things at the event, so there might be more than two rows for one event, and it might look like this:

        ID  PersonID  EventDate  NumberOfCredits  SuperCredits
        1   174       1/1/2010   1                false
        2   174       1/1/2010   2                false
        3   174       1/1/2010   1                true

    Now we want to print out a report with the following columns:

    - PersonID
    - LastEventDate
    - NumberOfNormalCredits
    - NumberOfSuperCredits

    The report will have one row per person. The row will show the latest event that the person attended, and the normal and super credits that the person earned at that event. What I am asking of you is to write, or help me write, the SQL query to SELECT the data and GROUP BY and SUM() and whatnot. Or, let me know if this is for some reason not possible, and how to organize my data to make it possible. This is extremely confusing and I understand if you do not take the time to puzzle through it. I've tried to simplify it as much as possible, but definitely ask any questions if you give it a shot and need clarification. I'll be trying to figure it out, but I'm having a real hard time with it; this grouping is beyond my experience...
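
    A sketch of one common shape for this query, shown in SQLite from Python rather than Access (Access SQL would need IIf(...) in place of CASE and its own date literals, and the table name Credits is invented here, so treat this as the shape of the answer, not a drop-in one): find each person's latest date in a subquery, join back, then split the credits with conditional sums.

        import sqlite3

        conn = sqlite3.connect(":memory:")
        conn.executescript("""
            CREATE TABLE Credits (
                Id INTEGER PRIMARY KEY,
                PersonID INTEGER,
                EventDate TEXT,          -- ISO dates so MAX() orders correctly
                NumberOfCredits INTEGER,
                SuperCredits INTEGER     -- 0 = normal, 1 = super
            );
            INSERT INTO Credits (PersonID, EventDate, NumberOfCredits, SuperCredits) VALUES
                (174, '2010-01-01', 1, 0),
                (174, '2010-01-01', 2, 0),
                (174, '2010-01-01', 1, 1),
                (174, '2009-06-15', 5, 0);   -- older event, must be ignored
        """)

        # One row per person: latest event date, then conditional sums split the
        # credits earned at that event into normal vs. super.
        query = """
            SELECT c.PersonID,
                   c.EventDate AS LastEventDate,
                   SUM(CASE WHEN c.SuperCredits = 0 THEN c.NumberOfCredits ELSE 0 END) AS NormalCredits,
                   SUM(CASE WHEN c.SuperCredits = 1 THEN c.NumberOfCredits ELSE 0 END) AS SuperCredits
            FROM Credits c
            JOIN (SELECT PersonID, MAX(EventDate) AS LastDate
                  FROM Credits GROUP BY PersonID) latest
              ON latest.PersonID = c.PersonID AND latest.LastDate = c.EventDate
            GROUP BY c.PersonID, c.EventDate
        """
        for row in conn.execute(query):
            print(row)   # (174, '2010-01-01', 3, 1)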

    Read the article

  • How to optimize this SQL query for a rectangular region?

    - by Andrew B.
    I'm trying to optimize the following query, but it's not clear to me what index or indexes would be best. I'm storing tiles in a two-dimensional plane and querying for rectangular regions of that plane. The table has, for the purposes of this question, the following columns:

    - id: a primary key integer
    - world_id: an integer foreign key which acts as a namespace for a subset of tiles
    - tileY: the Y-coordinate integer
    - tileX: the X-coordinate integer
    - value: the contents of this tile, a varchar if it matters

    I have the following indexes:

        "ywot_tile_pkey" PRIMARY KEY, btree (id)
        "ywot_tile_world_id_key" UNIQUE, btree (world_id, "tileY", "tileX")
        "ywot_tile_world_id" btree (world_id)

    And this is the query I'm trying to optimize:

        ywot=> EXPLAIN ANALYZE SELECT * FROM "ywot_tile"
               WHERE ("world_id" = 27685 AND "tileY" <= 6 AND "tileX" <= 9 AND "tileX" >= -2 AND "tileY" >= -1);

        Bitmap Heap Scan on ywot_tile  (cost=11384.13..149421.27 rows=65989 width=168) (actual time=79.646..80.075 rows=96 loops=1)
          Recheck Cond: ((world_id = 27685) AND ("tileY" <= 6) AND ("tileY" >= (-1)) AND ("tileX" <= 9) AND ("tileX" >= (-2)))
          ->  Bitmap Index Scan on ywot_tile_world_id_key  (cost=0.00..11367.63 rows=65989 width=0) (actual time=79.615..79.615 rows=125 loops=1)
                Index Cond: ((world_id = 27685) AND ("tileY" <= 6) AND ("tileY" >= (-1)) AND ("tileX" <= 9) AND ("tileX" >= (-2)))
        Total runtime: 80.194 ms

    So the world is fixed, and we are querying for a rectangular region of tiles. Some more information that might be relevant:

    - All the tiles for a queried region may or may not be present.
    - The height and width of a queried rectangle are typically about 10x10 to 20x20.
    - For any given (world, X) or (world, Y) pair, there may be an unbounded number of matching tiles, but the worst case is currently around 10,000, and typically there are far fewer.
    - New tiles are created far less frequently than existing ones are updated (changing the 'value'), and that itself is far less frequent than just reading, as in the query above.

    The only thing I can think of would be to index on (world, X) and (world, Y). My guess is that the database would be able to take those two sets and intersect them. The problem is that there is a potentially unbounded number of matches for either of those. Is there some other kind of index that would be more appropriate?
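
    One cheap way to reason about the composite-index options is to watch a planner react to them. A small sketch using SQLite's EXPLAIN QUERY PLAN (an assumption: SQLite's planner, not Postgres's, so the wording differs, but the way a b-tree serves the leading columns and merely filters on the trailing one is the same idea):

        import sqlite3

        conn = sqlite3.connect(":memory:")
        conn.executescript("""
            CREATE TABLE tile (
                id INTEGER PRIMARY KEY,
                world_id INTEGER,
                tileY INTEGER,
                tileX INTEGER,
                value TEXT
            );
            -- Mirrors ywot_tile_world_id_key: equality on world_id narrows the
            -- scan, then the range on tileY; tileX is only a residual filter.
            CREATE UNIQUE INDEX tile_world_y_x ON tile (world_id, tileY, tileX);
        """)
        plan = conn.execute("""
            EXPLAIN QUERY PLAN
            SELECT * FROM tile
            WHERE world_id = 27685
              AND tileY BETWEEN -1 AND 6
              AND tileX BETWEEN -2 AND 9
        """).fetchall()
        for row in plan:
            print(row)
        # Typically reports a SEARCH on tile_world_y_x with
        # (world_id=? AND tileY>? AND tileY<?), confirming tileX is not seekable.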

    Read the article

  • Many-To-Many Query with Linq-To-NHibernate

    - by rjygraham
    Ok guys (and gals), this one has been driving me nuts all night and I'm turning to your collective wisdom for help. I'm using Fluent NHibernate and Linq-To-NHibernate as my data access story, and I have the following simplified DB structure:

        CREATE TABLE [dbo].[Classes](
            [Id] [bigint] IDENTITY(1,1) NOT NULL,
            [Name] [nvarchar](100) NOT NULL,
            [StartDate] [datetime2](7) NOT NULL,
            [EndDate] [datetime2](7) NOT NULL,
            CONSTRAINT [PK_Classes] PRIMARY KEY CLUSTERED ( [Id] ASC )

        CREATE TABLE [dbo].[Sections](
            [Id] [bigint] IDENTITY(1,1) NOT NULL,
            [ClassId] [bigint] NOT NULL,
            [InternalCode] [varchar](10) NOT NULL,
            CONSTRAINT [PK_Sections] PRIMARY KEY CLUSTERED ( [Id] ASC )

        CREATE TABLE [dbo].[SectionStudents](
            [SectionId] [bigint] NOT NULL,
            [UserId] [uniqueidentifier] NOT NULL,
            CONSTRAINT [PK_SectionStudents] PRIMARY KEY CLUSTERED ( [SectionId] ASC, [UserId] ASC )

        CREATE TABLE [dbo].[aspnet_Users](
            [ApplicationId] [uniqueidentifier] NOT NULL,
            [UserId] [uniqueidentifier] NOT NULL,
            [UserName] [nvarchar](256) NOT NULL,
            [LoweredUserName] [nvarchar](256) NOT NULL,
            [MobileAlias] [nvarchar](16) NULL,
            [IsAnonymous] [bit] NOT NULL,
            [LastActivityDate] [datetime] NOT NULL,
            PRIMARY KEY NONCLUSTERED ( [UserId] ASC )

    I omitted the foreign keys for brevity, but essentially this boils down to:

    - A Class can have many Sections.
    - A Section can belong to only 1 Class, but can have many Students.
    - A Student (aspnet_Users) can belong to many Sections.

    I've set up the corresponding Model classes and Fluent NHibernate mapping classes; all that is working fine. Here's where I'm getting stuck: I need to write a query which will return the sections a student is enrolled in, based on the student's UserId and the dates of the class. Here's what I've tried so far:

        1. var sections = (from s in this.Session.Linq<Sections>()
                           where s.Class.StartDate <= DateTime.UtcNow &&
                                 s.Class.EndDate > DateTime.UtcNow &&
                                 s.Students.First(f => f.UserId == userId) != null
                           select s);

        2. var sections = (from s in this.Session.Linq<Sections>()
                           where s.Class.StartDate <= DateTime.UtcNow &&
                                 s.Class.EndDate > DateTime.UtcNow &&
                                 s.Students.Where(w => w.UserId == userId).FirstOrDefault().Id == userId
                           select s);

    Obviously, 2 above will fail miserably if there are no students matching userId for classes with the current date between their start and end dates... but I just wanted to try. The filters for the Class StartDate and EndDate work fine, but the many-to-many relation with Students is proving to be difficult. Every time I try running the query I get an ArgumentNullException with the message:

        Value cannot be null. Parameter name: session

    I've considered going down the path of making the SectionStudents relation a Model class with a reference to Section and a reference to Student, instead of a many-to-many. I'd like to avoid that if I can, and I'm not even sure it would work that way. Thanks in advance to anyone who can help. Ryan

    Read the article

  • Refactoring an Entity Framework transaction for readability and best performance

    - by programmerist
    I am trying to use a transaction with Entity Framework. I have 3 tables: Personel, Prim and Finans. In the Prim table, SatisTutari is an int; if a float value is typed into SatisTutari.Text instead of an int, the transaction must kick in and roll everything back. Everything works, but how can I refactor this, or write the transaction code for the best readability and performance? I have 3 tables, so I have 3 entities:

        CREATE TABLE Personel (
            PersonelID integer PRIMARY KEY identity not null,
            Ad varchar(30),
            Soyad varchar(30),
            Meslek varchar(100),
            DogumTarihi datetime,
            DogumYeri nvarchar(100),
            PirimToplami float);
        GO
        CREATE TABLE Prim (
            PrimID integer PRIMARY KEY identity not null,
            PersonelID integer FOREIGN KEY references Personel(PersonelID),
            SatisTutari int,
            Prim float,
            SatisTarihi Datetime);
        GO
        CREATE TABLE Finans (
            ID integer PRIMARY KEY identity not null,
            Tutar float);

    Personel, Prim and Finans are my tables. Here is the code:

        protected void btnSave_Click(object sender, EventArgs e)
        {
            using (TestEntities testCtx = new TestEntities())
            {
                using (TransactionScope scope = new TransactionScope())
                {
                    Personel personel = new Personel();
                    Prim prim = new Prim();
                    Finans finans = new Finans();

                    // Step 1
                    personel.Ad = txtName.Text;
                    personel.Soyad = txtSurName.Text;
                    personel.Meslek = txtMeslek.Text;
                    personel.DogumTarihi = DateTime.Parse(txtSatisTarihi.Text);
                    personel.DogumYeri = txtDogumYeri.Text;
                    personel.PirimToplami = float.Parse(txtPrimToplami.Text);
                    testCtx.AddToPersonel(personel);
                    testCtx.SaveChanges();

                    // Step 2
                    prim.PersonelID = personel.PersonelID;
                    prim.SatisTutari = int.Parse(txtSatisTutari.Text);
                    prim.SatisTarihi = DateTime.Parse(txtSatisTarihi.Text);
                    prim.Prim1 = double.Parse(txtPrim.Text);
                    finans.Tutar = prim.SatisTutari * prim.Prim1;
                    testCtx.AddToPrim(prim);
                    testCtx.SaveChanges();

                    // Step 3
                    lblTutar.Text = finans.Tutar.Value.ToString();
                    testCtx.AddToFinans(finans);
                    testCtx.SaveChanges();

                    scope.Complete();
                }
            }
        }

    How can I rearrange this code? I need a best-practice refactoring and the best solution for reading easily and for performance!

    Read the article

  • Optimising speeds in HDF5 using Pytables

    - by Sree Aurovindh
    The problem concerns the write speed of the rented computers (ten 32-bit machines) and PostgreSQL query performance. I will explain the scenario in detail. I have about 80 GB of data (along with appropriate database indexes in place). I am trying to read it from the PostgreSQL database and write it into HDF5 using PyTables. I have 1 table and 5 variable arrays in one HDF5 file. The implementation of HDF5 is not multithreaded or enabled for symmetric multi-processing. I have rented about 10 computers for a day and am trying to write with them in order to speed up my data handling.

    As for the PostgreSQL table, the overall record count is 140 million, and I have 5 primary/foreign-key referring tables. I am not using joins, as they are not scalable here. So for a single lookup I do 6 lookups without joins and write the results into HDF5 format. For each lookup I do 6 inserts, one into the table and one into each of its corresponding arrays. The queries are really simple:

        select * from x.train where tr_id=1    (primary key & indexed)
        select q_t from x.qt where q_id=2      (non-primary key but indexed)

    (and similarly for the five other queries). Each computer writes two HDF5 files, so the total count comes to around 20 files.

    Some calculations and statistics:

    - Total number of records: 143,700,000
    - Total number of records per file: 143,700,000 / 20 = 7,185,000
    - Total number of rows written per file (table plus 5 arrays): 7,185,000 * 5 = 35,925,000

    Current PostgreSQL database config: my current machine has 8 GB RAM with an i7 2nd-generation processor. I made the following changes to the postgresql configuration file: shared_buffers = 2 GB, effective_cache_size = 4 GB.

    Note on current performance: I have run it for about ten hours, and the performance is as follows: the total number of rows written per file is about 621,000 * 5 = 3,105,000. The bottleneck is that I can only rent the machines for 10 hours per day (overnight), and if it processes at this speed it will take about 11 days, which is too long for my experiments. Please suggest how I can improve this.

    Questions:
    1. Should I use symmetric multi-processing on those desktops (each has 2 cores with about 2 GB of RAM)? In that case, what is suggested or preferable?
    2. If I change my PostgreSQL configuration file and increase the RAM, will it enhance my process?
    3. Should I use multithreading? In that case, any links or pointers would be of great help.

    Thanks, Sree Aurovindh V
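
    On question 1, a rough sketch of using both cores: partition the key range across worker processes, each holding its own Postgres connection and writing its own HDF5 file (PyTables files should not be shared across processes). The DSN, the column names, the assumption that q_id lines up with tr_id, and the PyTables 3.x-style names (open_file/create_table) are all illustrative here, not taken from the question:

        import multiprocessing as mp

        import psycopg2
        import tables

        DSN = "dbname=x user=postgres"         # assumed connection string

        def dump_range(lo, hi, path):
            """Copy records with lo <= tr_id < hi into one HDF5 file."""
            conn = psycopg2.connect(DSN)
            outer = conn.cursor("train_scan")  # server-side cursor: streams rows
            outer.itersize = 10000             # fetch in chunks, not one by one
            outer.execute("SELECT tr_id FROM x.train WHERE tr_id >= %s AND tr_id < %s",
                          (lo, hi))
            inner = conn.cursor()              # plain cursor for per-record lookups

            h5 = tables.open_file(path, mode="w")
            tbl = h5.create_table("/", "train",
                                  {"tr_id": tables.Int64Col(), "q_t": tables.Float64Col()})
            row = tbl.row
            for (tr_id,) in outer:
                # One of the six indexed lookups; the other five follow the same pattern.
                inner.execute("SELECT q_t FROM x.qt WHERE q_id = %s", (tr_id,))
                row["tr_id"] = tr_id
                row["q_t"] = inner.fetchone()[0]
                row.append()
            tbl.flush()
            h5.close()
            conn.close()

        if __name__ == "__main__":
            total, workers = 143700000, 2      # 2 cores per rented desktop
            step = total // workers
            jobs = [(i, i + step, "train_%d.h5" % i) for i in range(0, total, step)]
            with mp.Pool(processes=workers) as pool:
                pool.starmap(dump_range, jobs)

    Batching the inner lookups (or folding them into the outer SELECT) would cut round trips further; the per-process split is what lets both cores work at once.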

    Read the article

  • JPA Inheritance and Relations - Clarification question

    - by Michael
    Here is the scenario: I have a unidirectional 1:N relation from a Person entity to an Address entity, and a bidirectional 1:N relation from a User entity to a Vehicle entity.

    Here is the Address class:

        @Entity
        public class Address implements Serializable {
            private static final long serialVersionUID = 1L;
            @Id
            @GeneratedValue(strategy = GenerationType.AUTO)
            private Long id;
            ...

    The Vehicle class:

        @Entity
        public class Vehicle implements Serializable {
            @Id
            @GeneratedValue(strategy = GenerationType.AUTO)
            private Long id;
            @ManyToOne
            private User owner;
            ...
            @PreRemove
            protected void preRemove() {
                //this.owner.removeVehicle(this);
            }
            public Vehicle(User owner) {
                this.owner = owner;
                ...

    The Person class:

        @Entity
        @Inheritance(strategy = InheritanceType.JOINED)
        @DiscriminatorColumn(name="PERSON_TYP")
        public class Person implements Serializable {
            @Id
            protected String username;
            @OneToMany(cascade = CascadeType.ALL, orphanRemoval=true)
            @JoinTable(name = "USER_ADDRESS",
                       joinColumns = @JoinColumn(name = "USERNAME"),
                       inverseJoinColumns = @JoinColumn(name = "ADDRESS_ID"))
            protected List<Address> addresses;
            ...
            @PreRemove
            protected void prePersonRemove(){
                this.addresses = null;
            }
            ...

    The User class, which inherits from the Person class:

        @Entity
        @Table(name = "Users")
        @DiscriminatorValue("USER")
        public class User extends Person {
            @OneToMany(mappedBy = "owner", cascade = {CascadeType.PERSIST, CascadeType.REMOVE})
            private List<Vehicle> vehicles;
            ...

    When I try to delete a user who has an address, I have to use orphanRemoval=true on the corresponding relation (see above) and the preRemove function where the address list is set to null. Otherwise (no orphanRemoval, and the address list not set to null) a foreign key constraint fails. When I try to delete a user who has a vehicle, a ConcurrentModificationException is thrown when I do not uncomment the "this.owner.removeVehicle(this);" line in the preRemove function of the vehicle.

    The thing I do not understand is that before I used this inheritance, there was only a User class which had all the relations:

        @Entity
        @Table(name = "Users")
        public class User implements Serializable {
            @Id
            protected String username;
            @OneToMany(mappedBy = "owner", cascade = {CascadeType.PERSIST, CascadeType.REMOVE})
            private List<Vehicle> vehicles;
            @OneToMany(cascade = CascadeType.ALL)
            @JoinTable(name = "USER_ADDRESS",
                       joinColumns = @JoinColumn(name = "USERNAME"),
                       inverseJoinColumns = @JoinColumn(name = "ADDRESS_ID"))
            private List<Address> addresses;
            ...

    No orphanRemoval, and the Vehicle class used the statement shown commented out above in its preRemove function. And I could delete a user who has an address, and I could delete a user who has a vehicle. So why doesn't everything work without changes when I use inheritance? I use JPA 2.0, EclipseLink 2.0.2, MySQL 5.1.x and NetBeans 6.8.

    Read the article

  • Stored procedure to remove FK of a given table

    - by Nicole
    I need to create a stored procedure that:

    1. Accepts a table name as a parameter
    2. Finds its dependencies (FKs)
    3. Removes them
    4. Truncates the table

    I created the following so far, based on http://www.mssqltips.com/sqlservertip/1376/disable-enable-drop-and-recreate-sql-server-foreign-keys/ . My problem is that the following script successfully does 1 and 2 and generates queries to alter tables, but does not actually execute them. In other words, how can I execute the resulting "ALTER TABLE ..." queries to actually remove the FKs?

        CREATE PROCEDURE DropDependencies(@TableName VARCHAR(50))
        AS
        BEGIN
            SELECT 'ALTER TABLE ' + OBJECT_SCHEMA_NAME(parent_object_id) + '.[' +
                   OBJECT_NAME(parent_object_id) + '] DROP CONSTRAINT ' + name
            FROM sys.foreign_keys
            WHERE referenced_object_id = object_id(@TableName)
        END

        EXEC DropDependencies 'TableName'

    Any idea is appreciated!

    Update: I added the cursor to the SP, but I still get an error:

        Msg 203, Level 16, State 2, Procedure DropRestoreDependencies, Line 75
        The name 'ALTER TABLE [dbo].[ChildTable] DROP CONSTRAINT [FK__ChileTable__ParentTable__745C7C5D]' is not a valid identifier.

    Here is the updated SP:

        CREATE PROCEDURE DropRestoreDependencies(@schemaName sysname, @tableName sysname)
        AS
        BEGIN
            SET NOCOUNT ON
            DECLARE @operation VARCHAR(10)
            SET @operation = 'DROP' --ENABLE, DISABLE, DROP
            DECLARE @cmd NVARCHAR(1000)
            DECLARE @FK_NAME sysname, @FK_OBJECTID INT, @FK_DISABLED INT,
                    @FK_NOT_FOR_REPLICATION INT, @DELETE_RULE smallint, @UPDATE_RULE smallint,
                    @FKTABLE_NAME sysname, @FKTABLE_OWNER sysname,
                    @PKTABLE_NAME sysname, @PKTABLE_OWNER sysname,
                    @FKCOLUMN_NAME sysname, @PKCOLUMN_NAME sysname, @CONSTRAINT_COLID INT

            DECLARE cursor_fkeys CURSOR FOR
                SELECT Fk.name, Fk.OBJECT_ID, Fk.is_disabled, Fk.is_not_for_replication,
                       Fk.delete_referential_action, Fk.update_referential_action,
                       OBJECT_NAME(Fk.parent_object_id) AS Fk_table_name,
                       schema_name(Fk.schema_id) AS Fk_table_schema,
                       TbR.name AS Pk_table_name,
                       schema_name(TbR.schema_id) Pk_table_schema
                FROM sys.foreign_keys Fk
                LEFT OUTER JOIN sys.tables TbR ON TbR.OBJECT_ID = Fk.referenced_object_id --inner join
                WHERE TbR.name = @tableName
                  AND schema_name(TbR.schema_id) = @schemaName
            OPEN cursor_fkeys
            FETCH NEXT FROM cursor_fkeys INTO @FK_NAME, @FK_OBJECTID, @FK_DISABLED,
                @FK_NOT_FOR_REPLICATION, @DELETE_RULE, @UPDATE_RULE,
                @FKTABLE_NAME, @FKTABLE_OWNER, @PKTABLE_NAME, @PKTABLE_OWNER
            WHILE @@FETCH_STATUS = 0
            BEGIN
                -- create statement for dropping FK and also for recreating FK
                IF @operation = 'DROP'
                BEGIN
                    -- drop statement
                    SET @cmd = 'ALTER TABLE [' + @FKTABLE_OWNER + '].[' + @FKTABLE_NAME +
                               '] DROP CONSTRAINT [' + @FK_NAME + ']'
                    EXEC @cmd

                    -- create process
                    DECLARE @FKCOLUMNS VARCHAR(1000), @PKCOLUMNS VARCHAR(1000), @COUNTER INT
                    -- create cursor to get FK columns
                    DECLARE cursor_fkeyCols CURSOR FOR
                        SELECT COL_NAME(Fk.parent_object_id, Fk_Cl.parent_column_id) AS Fk_col_name,
                               COL_NAME(Fk.referenced_object_id, Fk_Cl.referenced_column_id) AS Pk_col_name
                        FROM sys.foreign_keys Fk
                        LEFT OUTER JOIN sys.tables TbR ON TbR.OBJECT_ID = Fk.referenced_object_id
                        INNER JOIN sys.foreign_key_columns Fk_Cl ON Fk_Cl.constraint_object_id = Fk.OBJECT_ID
                        WHERE TbR.name = @tableName
                          AND schema_name(TbR.schema_id) = @schemaName
                          AND Fk_Cl.constraint_object_id = @FK_OBJECTID -- added 6/12/2008
                        ORDER BY Fk_Cl.constraint_column_id
                    OPEN cursor_fkeyCols
                    FETCH NEXT FROM cursor_fkeyCols INTO @FKCOLUMN_NAME, @PKCOLUMN_NAME
                    SET @COUNTER = 1
                    SET @FKCOLUMNS = ''
                    SET @PKCOLUMNS = ''
                    WHILE @@FETCH_STATUS = 0
                    BEGIN
                        IF @COUNTER > 1
                        BEGIN
                            SET @FKCOLUMNS = @FKCOLUMNS + ','
                            SET @PKCOLUMNS = @PKCOLUMNS + ','
                        END
                        SET @FKCOLUMNS = @FKCOLUMNS + '[' + @FKCOLUMN_NAME + ']'
                        SET @PKCOLUMNS = @PKCOLUMNS + '[' + @PKCOLUMN_NAME + ']'
                        SET @COUNTER = @COUNTER + 1
                        FETCH NEXT FROM cursor_fkeyCols INTO @FKCOLUMN_NAME, @PKCOLUMN_NAME
                    END
                    CLOSE cursor_fkeyCols
                    DEALLOCATE cursor_fkeyCols
                END
                FETCH NEXT FROM cursor_fkeys INTO @FK_NAME, @FK_OBJECTID, @FK_DISABLED,
                    @FK_NOT_FOR_REPLICATION, @DELETE_RULE, @UPDATE_RULE,
                    @FKTABLE_NAME, @FKTABLE_OWNER, @PKTABLE_NAME, @PKTABLE_OWNER
            END
            CLOSE cursor_fkeys
            DEALLOCATE cursor_fkeys
        END

    For running, use:

        EXEC DropRestoreDependencies dbo, ParentTable

    Read the article

  • Implementing a logging library in .NET with a database as the storage medium

    - by Dave
    I'm just starting to work on a logging library that everyone can use to keep track of any sort of system information while the user is running our application. The simplest example so far is to track Info, Warnings, and Errors.

    I want all plugins to be able to use this feature, but since each developer might have a different idea of what's important to report, I want to keep this as generic as possible. In the C++ world, I would normally use something like a std::pair<string,string> to act as a key-value-pair structure, and have a std::list of these to act as a "row" in the log. The log cache would then be a list<list<pair<string,string>>> (ugh!). This way, the developers can use a const string key like INFO, WARNING, or ERROR to get consistent naming for a column in the database (for SELECTing specific types of information). I'd like the database to be able to deal with any number of distinct column names. For example, John might have an INFO row with a column called USER, and Bill might have an INFO row with a column called FILENAME. I want the log viewer to be able to display all information, and if one report doesn't have a value for INFO / FILENAME, those fields should just appear blank.

    So one option is to use List<List<KeyValuePair<String,String>>>, and another is to have the log library consumer somehow "register" its schema and then have the database do an ALTER TABLE to handle this situation. Yet another idea is to have a table that's just for key-value pairs, with a foreign key that maps the key-value pairs back to the original log entry.

    I obviously don't want logging to bog down the system, so I only lock the log cache to make a copy of the data (and remove the already-copied data); then a background thread dumps the information to the database. My specific questions regarding this are:

    1. Do you see any performance issues? In other words, have you ever tried something like this and found that certain things just don't work well in practice?
    2. Is there a more .NET-ish way to implement the key-value pairs, other than List<List<KeyValuePair<String,String>>>?
    3. Even if there is a way to do #2 better, is the ALTER TABLE idea I proposed above a Bad Thing?
    4. Would you recommend multiple databases over a single one? I don't yet have an idea of how frequently the log would get written to, but ideally we would like to have lots of low-level information. Perhaps there should be a DB with a fixed schema only for the low-level stuff, and then another DB that's more flexible for reporting information back to users.
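
    A sketch of the write path being described: callers append rows of key/value pairs to a cache, and a background thread drains the cache and inserts into a key/value table with a foreign key back to the entry. SQLite, the table names, and the queue-based cache are stand-ins for illustration, not the library's actual design:

        import queue
        import sqlite3
        import threading

        log_q = queue.Queue()   # the "log cache": (level, [(key, value), ...]) items

        def log(level, pairs):
            """Called from anywhere; never blocks on the database."""
            log_q.put((level, pairs))

        def flush_worker(db_path):
            conn = sqlite3.connect(db_path)
            conn.executescript("""
                CREATE TABLE IF NOT EXISTS log_entry (
                    id INTEGER PRIMARY KEY, level TEXT,
                    at TIMESTAMP DEFAULT CURRENT_TIMESTAMP);
                CREATE TABLE IF NOT EXISTS log_field (
                    entry_id INTEGER REFERENCES log_entry(id),
                    key TEXT, value TEXT);
            """)
            while True:
                level, pairs = log_q.get()       # blocks until data arrives
                cur = conn.execute("INSERT INTO log_entry (level) VALUES (?)", (level,))
                conn.executemany(
                    "INSERT INTO log_field (entry_id, key, value) VALUES (?, ?, ?)",
                    [(cur.lastrowid, k, v) for k, v in pairs])
                conn.commit()
                log_q.task_done()

        threading.Thread(target=flush_worker, args=("log.db",), daemon=True).start()
        log("INFO", [("USER", "john"), ("FILENAME", "report.pdf")])
        log_q.join()   # wait until the background thread has flushed

    Missing keys simply have no log_field row, so the viewer renders them blank, which is exactly the sparse-column behavior described above, with no ALTER TABLE required.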

    Read the article

  • Mysterious constraints problem with SQL Server 2000

    - by Ramon
    Hi all. I'm getting the following error from a VB.NET web application written in VS 2003, on framework 1.1. The web app is running on Windows 2000 Server, IIS 5, and is reading from a SQL Server 2000 database running on the same machine.

        System.Data.ConstraintException: Failed to enable constraints. One or more rows contain values violating non-null, unique, or foreign-key constraints.
            at System.Data.DataSet.FailedEnableConstraints()
            at System.Data.DataSet.EnableConstraints()
            at System.Data.DataSet.set_EnforceConstraints(Boolean value)
            at System.Data.DataTable.EndLoadData()
            at System.Data.Common.DbDataAdapter.FillFromReader(Object data, String srcTable, IDataReader dataReader, Int32 startRecord, Int32 maxRecords, DataColumn parentChapterColumn, Object parentChapterValue)
            at System.Data.Common.DbDataAdapter.Fill(DataSet dataSet, String srcTable, IDataReader dataReader, Int32 startRecord, Int32 maxRecords)
            at System.Data.Common.DbDataAdapter.FillFromCommand(Object data, Int32 startRecord, Int32 maxRecords, String srcTable, IDbCommand command, CommandBehavior behavior)
            at System.Data.Common.DbDataAdapter.Fill(DataSet dataSet, Int32 startRecord, Int32 maxRecords, String srcTable, IDbCommand command, CommandBehavior behavior)
            at System.Data.Common.DbDataAdapter.Fill(DataSet dataSet)

    The problem appears when the web app is under high load. The system runs fine when volume is low, but when the number of requests becomes high, the system starts rejecting incoming requests with the above exception message. Once the problem appears, very few requests actually make it through and get processed normally, about 2 in every 30. The vast majority of requests fail until a SQL Server restart or IIS reset is performed. The system then starts processing requests normally, and after some time it starts throwing the same error.

    The error occurs when a data adapter runs the Fill() method against a SELECT statement to populate a strongly-typed dataset. It appears that the dataset does not like the data it is given and throws this exception. This error occurs on various SELECT statements, acting on different tables. I have regenerated the dataset and checked the relevant constraints, as well as the tables from which the data is read. Both the dataset definition and the data in the tables are fine.

    Admittedly, the hardware running both the web app and SQL Server 2000 is seriously outdated, considering the number of incoming requests it currently receives. The amount of RAM consumed by SQL Server is dynamically allocated, and at peak times SQL Server can consume up to 2.8 GB out of a total of 3.5 GB on the server. At first I suspected some sort of index or database corruption, but after running DBCC CHECKDB, no errors were found in the database. So now I'm wondering whether this error is a result of the hardware limitations of the system. Is it possible for SQL Server to somehow mess up the data it's supposed to pass to the dataset, resulting in constraint violations due to, say, a data type/length mismatch? I tried accessing the RowError messages of the data rows in the retrieved dataset tables, but I kept getting empty strings. I know that HasErrors = true for the datatables in question. I have not set EnforceConstraints = false, and I don't want to do that. Thanks in advance. Ray

    Read the article

  • How to add a new object to an IList mapped as a one-to-many with NHibernate?

    - by Jørn Schou-Rode
    My model contains a class Section, which has an ordered list of Statics that are part of this section. Leaving all the other properties out, the implementation of the model looks like this:

        public class Section
        {
            public virtual int Id { get; private set; }
            public virtual IList<Static> Statics { get; private set; }
        }

        public class Static
        {
            public virtual int Id { get; private set; }
        }

    In the database, the relationship is implemented as a one-to-many, where the table Static has a foreign key pointing to Section and an integer column Position to store its index position in the list it is part of. The mapping is done in Fluent NHibernate like this:

        public SectionMap()
        {
            Id(x => x.Id);
            HasMany(x => x.Statics).Cascade.All().LazyLoad()
                .AsList(x => x.WithColumn("Position"));
        }

        public StaticMap()
        {
            Id(x => x.Id);
            References(x => x.Section);
        }

    Now I am able to load existing Statics, and I am also able to update the details of those. However, I cannot seem to find a way to add new Statics to a Section and have this change persisted to the database. I have tried several combinations of:

        mySection.Statics.Add(myStatic);
        session.Update(mySection);
        session.Save(myStatic);

    but the closest I have gotten (using the first two statements) is an SQL exception reading: "Cannot insert the value NULL into column 'Position'". Clearly an INSERT is attempted here, but NHibernate does not seem to automatically append the index position to the SQL statement. What am I doing wrong? Am I missing something in my mappings? Do I need to expose the Position column as a property and assign a value to it myself?

    EDIT: Apparently everything works as expected if I remove the NOT NULL constraint on the Static.Position column in the database. I guess NHibernate makes the insert and immediately afterwards updates the row with a Position value. While this is an answer to the question, I am not sure it is the best one. I would prefer the Position column to be not nullable, so I still hope there is some way to make NHibernate provide a value for that column directly in the INSERT statement. Thus, the question is still open. Any other solutions?

    Read the article

  • Count the number of businesses that belong to a subcategory in Doctrine

    - by Ibrahim Azhar Armar
    This is a follow-up to this question: count number of foreign keys. I am using Doctrine 1.2, and I want to count the number of businesses that belong to a subcategory. Following are the MySQL tables:

        1. fi_category
        +----+-----------------+-----------------+
        | id | name            | slug            |
        +----+-----------------+-----------------+

        2. fi_subcategory
        +----+-----------------+-----------------+-------------+
        | id | name            | slug            | category_id |
        +----+-----------------+-----------------+-------------+

        3. fi_business_subcategory
        +----+-------------+----------------+
        | id | business_id | subcategory_id |
        +----+-------------+----------------+

    I am using this DQL:

        $q = Doctrine_Query::create()
            ->select('c.name, c.slug, sc.name, sc.slug')
            ->from('Model_Category c')
            ->leftJoin('c.Subcategory sc')
            ->leftJoin('sc.BusinessSubcategory bsc');

    which gives me something like this:

        Array
        (
            [0] => Array
                (
                    [id] => 1
                    [name] => Entertainment & Lifestyle
                    [slug] => entertainment-lifestyle
                    [Subcategory] => Array
                        (
                            [0] => Array ( [id] => 1 [name] => Arts and Crafts [slug] => arts-and-crafts )
                            [1] => Array ( [id] => 2 [name] => Family [slug] => family )
                            [2] => Array ( [id] => 3 [name] => Fashion [slug] => fashion )
                        )
                )
        )

    I am looking to fetch the number of businesses, i.e. the returned result should be something like this, depending on the businesses each subcategory has:

        Array
        (
            [0] => Array
                (
                    [id] => 1
                    [name] => Entertainment & Lifestyle
                    [slug] => entertainment-lifestyle
                    [Subcategory] => Array
                        (
                            [0] => Array ( [id] => 1 [name] => Arts and Crafts [slug] => arts-and-crafts [business_count] => 35 )
                            [1] => Array ( [id] => 2 [name] => Family [slug] => family [business_count] => 10 )
                            [2] => Array ( [id] => 3 [name] => Fashion [slug] => fashion [business_count] => 27 )
                        )
                )
        )

    I have tried various ways using DQL, but nothing seems to work out. Any idea how I should go about getting what I want?
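
    The raw SQL shape of the count is a LEFT JOIN plus GROUP BY, sketched here against SQLite from Python (in Doctrine 1.2 the equivalent would presumably be adding a COUNT to the select with a groupBy on the subcategory, which is an assumption here, not something tested):

        import sqlite3

        conn = sqlite3.connect(":memory:")
        conn.executescript("""
            CREATE TABLE fi_subcategory (id INTEGER PRIMARY KEY, name TEXT, slug TEXT, category_id INTEGER);
            CREATE TABLE fi_business_subcategory (id INTEGER PRIMARY KEY, business_id INTEGER, subcategory_id INTEGER);
            INSERT INTO fi_subcategory VALUES (1, 'Arts and Crafts', 'arts-and-crafts', 1),
                                              (2, 'Family', 'family', 1);
            INSERT INTO fi_business_subcategory (business_id, subcategory_id)
                VALUES (10, 1), (11, 1), (12, 2);
        """)
        # LEFT JOIN keeps subcategories with zero businesses; COUNT over the join
        # column counts only real matches (NULLs from the outer join are skipped).
        rows = conn.execute("""
            SELECT sc.id, sc.name, sc.slug, COUNT(bsc.business_id) AS business_count
            FROM fi_subcategory sc
            LEFT JOIN fi_business_subcategory bsc ON bsc.subcategory_id = sc.id
            GROUP BY sc.id, sc.name, sc.slug
        """).fetchall()
        print(rows)   # [(1, 'Arts and Crafts', 'arts-and-crafts', 2), (2, 'Family', 'family', 1)]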

    Read the article

  • In what circumstances are instance variables declared as '_var' in 'use fields' private?

    - by Pedro Silva
    I'm trying to understand the behavior of the fields pragma, which I find poorly documented, regarding fields prefixed with underscores. This is what the documentation has to say about it:

        Field names that start with an underscore character are made private to the class and are not visible to subclasses. Inherited fields can be overridden but will generate a warning if used together with the -w switch.

    This is not consistent with its actual behavior, according to my test below. Not only are _-prefixed fields visible within a subclass, they are visible within foreign classes as well (unless I don't get what 'visible' means). Also, directly accessing the restricted hash works fine. Where can I find more about the behavior of the fields pragma, short of going at the source code?

        {
            package Foo;
            use strict;
            use warnings;
            use fields qw/a _b __c/;

            sub new {
                my ( $class ) = @_;
                my Foo $self = fields::new($class);
                $self->a = 1;
                $self->b = 2;
                $self->c = 3;
                return $self;
            }

            sub a : lvalue { shift->{a} }
            sub b : lvalue { shift->{_b} }
            sub c : lvalue { shift->{__c} }
        }

        {
            package Bar;
            use base 'Foo';
            use strict;
            use warnings;
            use Data::Dumper;

            my $o = Bar->new;
            print Dumper $o;
            ##$VAR1 = bless({'_b' => 2, '__c' => 3, 'a' => 1}, 'Foo');

            $o->a = 4;
            $o->b = 5;
            $o->c = 6;
            print Dumper $o;
            ##$VAR1 = bless({'_b' => 5, '__c' => 6, 'a' => 4}, 'Foo');

            $o->{a} = 7;
            $o->{_b} = 8;
            $o->{__c} = 9;
            print Dumper $o;
            ##$VAR1 = bless({'_b' => 8, '__c' => 9, 'a' => 7}, 'Foo');
        }

    Read the article

  • What to Expect in Rails 4

    - by mikhailov
    Rails 4 is nearly there, and we should be ready before it's released. Most developers are trying hard to keep their applications on the edge. Must-see resources:

    1) @sikachu's talk: What to Expect in Rails 4.0 - YouTube
    2) Rails Guides release notes: http://edgeguides.rubyonrails.org/4_0_release_notes.html

    Here is a mix of all the major changes.

    ActionMailer changes (excerpt):
    - Asynchronously send messages via the Rails queue
    - Raise an ActionView::MissingTemplate exception when no implicit template could be found

    ActionPack changes (excerpt):
    - Added controller-level etag additions that will be part of the action etag computation
    - Add automatic template digests to all CacheHelper#cache calls (originally spiked in the cache_digests plugin)
    - Add Routing Concerns to declare common routes that can be reused inside other resources and routes
    - Added ActionController::Live. Mix it in to your controller and you can stream data to the client live
    - truncate now always returns an escaped HTML-safe string. The option :escape can be set to false to not escape the result
    - Added ActionDispatch::SSL middleware that, when included, forces all requests to use the HTTPS protocol

    ActiveModel changes (excerpt):
    - AM::Validation#validates: ability to pass a custom exception to the :strict option
    - Changed the AM::Serializers::JSON.include_root_in_json default value to false. Now AM Serializers and AR objects have the same default behaviour
    - Added ActiveModel::Model, a mixin to make Ruby objects work with AP out of the box
    - Trim down the Active Model API by removing valid? and errors.full_messages

    ActiveRecord changes (excerpt):
    - Use the native mysqldump command instead of the structure_dump method when dumping the database structure to a sql file
    - Attribute predicate methods, such as article.title?, will now raise ActiveModel::MissingAttributeError if the attribute being queried for truthiness was not read from the database, instead of just returning false
    - ActiveRecord::SessionStore has been extracted from Active Record as the activerecord-session_store gem. Please read the README.md file in the gem for usage
    - Fix reset_counters when there are multiple belongs_to associations with the same foreign key and one of them has a counter cache
    - Raise ArgumentError if the list of attributes to change is empty in update_all
    - Add Relation#load. This method explicitly loads the records and then returns self
    - Deprecated most of the 'dynamic finder' methods. All dynamic methods except for find_by_... and find_by_...! are deprecated
    - Added the ability for ActiveRecord::Relation#from to accept other ActiveRecord::Relation objects
    - Remove IdentityMap

    ActiveSupport changes (excerpt):
    - ERB::Util.html_escape now escapes single quotes
    - ActiveSupport::Callbacks: deprecate the monkey patch of object callbacks
    - Replace the deprecated memcache-client gem with dalli in ActiveSupport::Cache::MemCacheStore
    - Object#try will now return nil instead of raising a NoMethodError if the receiving object does not implement the method, but you can still get the old behavior by using the new Object#try!
    - Object#try can't call private methods
    - Add ActiveSupport::Deprecation.behavior = :silence to completely ignore Rails runtime deprecations

    What are the most important changes for you?

    Read the article

  • BULK INSERT from one table to another all on the server

    - by steve_d
    I have to copy a bunch of data from one database table into another. I can't use SELECT ... INTO because one of the columns is an identity column. Also, I have some changes to make to the schema. I was able to use the export data wizard to create an SSIS package, which I then edited in Visual Studio 2005 to make the desired changes and whatnot. It's certainly faster than an INSERT INTO, but it seems silly to me to download the data to a different computer just to upload it back again (assuming that I am correct that that's what the SSIS package is doing). Is there an equivalent to BULK INSERT that runs directly on the server, allows keeping identity values, and pulls data from a table? (As far as I can tell, BULK INSERT can only pull data from a file.)

    Edit: I do know about IDENTITY_INSERT, but because there is a fair amount of data involved, INSERT INTO ... SELECT is kind of slow. SSIS/BULK INSERT dumps the data into the table without regard to indexes and logging and whatnot, so it's faster. (Of course, creating the clustered index on the table once it's populated is not fast, but it's still faster than the INSERT INTO ... SELECT that I tried in my first attempt.)

    Edit 2: The schema changes include (but are not limited to) the following:

    1. Splitting one table into two new tables. In the future each will have its own IDENTITY column, but for the migration I think it will be simplest to use the identity from the original table as the identity for both new tables. Once the migration is over, one of the tables will have a one-to-many relationship to the other.
    2. Moving columns from one table to another.
    3. Deleting some cross-reference tables that only cross-referenced 1-to-1. Instead, the reference will be a foreign key in one of the two tables.
    4. Some new columns will be created with default values.
    5. Some tables aren't changing at all, but I have to copy them over due to the "put it all in a new DB" request.

    Read the article

  • Using Entity Framework 4.0 with Code-First and POCO: How to Get Parent Object with All its Children

    - by SirEel
    I'm new to EF 4.0, so maybe this is an easy question. I've got VS2010 RC and the latest EF CTP. I'm trying to implement the "Foreign Keys" code-first example on the EF team's design blog, http://blogs.msdn.com/efdesign/archive/2009/10/12/code-only-further-enhancements.aspx:

        public class Customer
        {
            public int Id { get; set; }
            public string CustomerDescription { get; set; }
            public IList<PurchaseOrder> PurchaseOrders { get; set; }
        }

        public class PurchaseOrder
        {
            public int Id { get; set; }
            public int CustomerId { get; set; }
            public Customer Customer { get; set; }
            public DateTime DateReceived { get; set; }
        }

        public class MyContext : ObjectContext
        {
            public MyContext(EntityConnection connection) : base(connection) {}

            public IObjectSet<Customer> Customers
            {
                get { return base.CreateObjectSet<Customer>(); }
            }
        }

    I use a ContextBuilder to configure MyContext:

        {
            var builder = new ContextBuilder<MyContext>();

            var customerConfig = builder.Entity<Customer>();
            customerConfig.Property(c => c.Id).IsIdentity();

            var poConfig = builder.Entity<PurchaseOrder>();
            poConfig.Property(po => po.Id).IsIdentity();
            poConfig.Relationship(po => po.Customer)
                .FromProperty(c => c.PurchaseOrders)
                .HasConstraint((po, c) => po.CustomerId == c.Id);
            ...
        }

    This works correctly when I'm adding new Customers, but not when I try to retrieve existing Customers. This code successfully saves a new Customer and all its child PurchaseOrders:

        using (var context = builder.Create(connection))
        {
            context.Customers.AddObject(customer);
            context.SaveChanges();
        }

    But this code only retrieves Customer objects; their PurchaseOrders lists are always empty:

        using (var context = builder.Create(connection))
        {
            var customers = context.Customers.ToList();
        }

    What else do I need to do to the ContextBuilder to make MyContext always retrieve all the PurchaseOrders with each Customer?

    Read the article

  • SQL design question regarding schema, and whether a name-value pair is the best solution

    - by Aur
    I am having a small problem trying to decide on a database schema for a current project. I am by no means a DBA. The application parses through a file based on user input and enters that data in the database. The number of fields that can be parsed is between 1 and 42 at the current moment.

    The current design of the database is entirely flat, with 42 columns; some are repeated columns, such as address1, address2, address3, etc. That says I should normalize the data. However, data integrity is not needed at this moment, and the way the data is shaped, I'm looking at several joins. Not a bad thing, but the data is still in a 1-to-1 relationship, and I still see a lot of empty fields per row. So my concern is that this does not allow the database or the application to be very extensible. If they want to add more fields to be parsed (which they do), then I'd need to create another table and add another foreign key to the linking table.

    The third option is to have a table where the fields are defined, and a table for each record. So what I was thinking is to make a table that stores the values and then links to those two tables. The problem is I can picture the size of that table growing large depending on the input size. If someone gives me a file with 300,000 records, then 300,000 x 40 = 12 million rows, so I have some reservations. However, I think if I get to that point, then I should be happy it is being used.

    So the problem boils down to:

    1. The current design is a flat file, which makes extending it hard, and it is not normalized.
    2. Normalizing the tables brings no real benefit for the moment, but requirements change.
    3. Normalize it down into name-value pairs and hope the size doesn't hurt.

    There are a large number of inserts, updates, and selects against that table, so performance is a worry, but I believe the saying is: design now, performance-test later? I'm probably just missing something practical, so any comments would be appreciated, even if it's a quick sanity check. Thank you for your time.
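
    A sketch of the third option's moving parts: a field-definition table plus a value table, with a read query that pivots a record back into one flat row. SQLite from Python, with all names invented for illustration:

        import sqlite3

        conn = sqlite3.connect(":memory:")
        conn.executescript("""
            CREATE TABLE field_def (id INTEGER PRIMARY KEY, name TEXT UNIQUE);
            CREATE TABLE record (id INTEGER PRIMARY KEY);
            CREATE TABLE record_value (
                record_id INTEGER REFERENCES record(id),
                field_id  INTEGER REFERENCES field_def(id),
                value     TEXT,
                PRIMARY KEY (record_id, field_id));
            INSERT INTO field_def (name) VALUES ('address1'), ('city');
            INSERT INTO record DEFAULT VALUES;
            INSERT INTO record_value VALUES (1, 1, '1 Main St'), (1, 2, 'Springfield');
        """)
        # Pivot a record back into one row: one MAX(CASE ...) per field.
        # With 42 fields this SELECT list would be generated, not hand-written.
        row = conn.execute("""
            SELECT r.id,
                   MAX(CASE WHEN f.name = 'address1' THEN v.value END) AS address1,
                   MAX(CASE WHEN f.name = 'city'     THEN v.value END) AS city
            FROM record r
            JOIN record_value v ON v.record_id = r.id
            JOIN field_def f    ON f.id = v.field_id
            GROUP BY r.id
        """).fetchone()
        print(row)   # (1, '1 Main St', 'Springfield')

    The upside is that adding field 43 is just one field_def row; the cost is that every read pays for the pivot, which is where the 12-million-row reservation above really bites.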

    Read the article

  • Authenticating users in iPhone app

    - by Myron
    I'm developing an HTTP API for our web application. Initially, the primary consumer of the API will be an iPhone app we're developing, but I'm designing this with future uses in mind (such as mobile apps for other platforms). I'm trying to decide on the best way to authenticate users so they can access their accounts from the iPhone. I've got a design that I think works well, but I'm no security expert, so I figured it would be good to ask for feedback here. The design of the user authentication has 3 primary goals:

    1. Good user experience: We want to allow users to enter their credentials once and remain logged in indefinitely, until they explicitly log out. I would have considered OAuth if not for the fact that the experience from an iPhone app is pretty awful, from what I've heard (i.e. it launches the login form in Safari, then tells the user to return to the app when authentication succeeds).
    2. No need to store the user creds with the app: I always hate the idea of having the user's password stored either in plain text or symmetrically encrypted anywhere, so I don't want the app to have to store the password to pass it to the API for future API requests.
    3. Security: We definitely don't need the intense security of a banking app, but I'd obviously like this to be secure.

    Overall, the API is REST-inspired (i.e. treating URLs as resources, and using the HTTP methods and status codes semantically). Each request to the API must include two custom HTTP headers: an API key (unique to each client app) and a unique device ID. The API requires all requests to be made using HTTPS, so that the headers and body are encrypted.

    My plan is to have an api_sessions table in my database. It has a unique constraint on the API key and unique device ID (so that a device may only be logged into a single user account through a given app), as well as a foreign key to the users table. The API will have a login endpoint, which receives the username/password and, if they match an account, logs the user in, creating an api_sessions record for the given API key and device ID. Future API requests will look up the api_session using the API key and device ID and, if a record is found, treat the request as being logged in under the user account referenced by the api_session record. There will also be a logout API endpoint, which deletes the record from the api_sessions table. Does anyone see any obvious security holes in this?
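
    A sketch of the table and the login path as described, in Python with the standard library only. The hashing scheme, schema, and function names are illustrative, not a vetted recommendation (a real deployment would likely reach for bcrypt or similar):

        import hashlib
        import os
        import secrets
        import sqlite3

        conn = sqlite3.connect(":memory:")
        conn.executescript("""
            CREATE TABLE users (id INTEGER PRIMARY KEY, username TEXT UNIQUE,
                                salt BLOB, pw_hash BLOB);
            CREATE TABLE api_sessions (
                api_key   TEXT,
                device_id TEXT,
                user_id   INTEGER REFERENCES users(id),
                token     TEXT,
                UNIQUE (api_key, device_id));   -- one account per (app, device)
        """)

        def hash_pw(password, salt):
            return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100000)

        def create_user(username, password):
            salt = os.urandom(16)
            conn.execute("INSERT INTO users (username, salt, pw_hash) VALUES (?, ?, ?)",
                         (username, salt, hash_pw(password, salt)))

        def login(api_key, device_id, username, password):
            """Verify credentials once; afterwards the device authenticates by session."""
            row = conn.execute("SELECT id, salt, pw_hash FROM users WHERE username = ?",
                               (username,)).fetchone()
            if row is None or hash_pw(password, row[1]) != row[2]:
                return None
            token = secrets.token_hex(32)
            # INSERT OR REPLACE enforces the single-login-per-device constraint.
            conn.execute("INSERT OR REPLACE INTO api_sessions VALUES (?, ?, ?, ?)",
                         (api_key, device_id, row[0], token))
            return token

        create_user("alice", "s3cret")
        print(login("app-key-1", "device-42", "alice", "s3cret"))

    Logout is then a single DELETE on (api_key, device_id), matching the endpoint described above.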

    Read the article

  • Issue with SQL query for activity stream/feed

    - by blabus
    I'm building an application that allows users to recommend music to each other, and am having trouble building a query that would return a 'stream' of recommendations involving both the user themselves and any of the user's friends. This is my table structure:

        Recommendations
        ID   Sender   Recipient   [other columns...]
        --   ------   ---------   ------------------
        r1   u1       u3          ...
        r2   u3       u2          ...
        r3   u4       u3          ...

        Users
        ID   Email   First Name   Last Name   [other columns...]
        --   -----   ----------   ---------   ------------------
        u1   ...     ...          ...         ...
        u2   ...     ...          ...         ...
        u3   ...     ...          ...         ...
        u4   ...     ...          ...         ...

        Relationships
        ID    Sender   Recipient   Status     [other columns...]
        ---   ------   ---------   --------   ------------------
        rl1   u1       u2          accepted   ...
        rl2   u3       u1          accepted   ...
        rl3   u1       u4          accepted   ...
        rl4   u3       u2          accepted   ...

    So for user 'u4' (who is friends with 'u1'), I want to query for a 'stream' of recommendations relevant to u4. This stream would include all recommendations in which either the sender or recipient is u4, as well as all recommendations in which the sender or recipient is u1 (the friend). This is what I have for the query so far:

        SELECT * FROM recommendations
        WHERE recommendations.sender IN (
                SELECT sender FROM relationships
                WHERE recipient='u4' AND status='accepted'
                UNION
                SELECT recipient FROM relationships
                WHERE sender='u4' AND status='accepted')
           OR recommendations.recipient IN (
                SELECT sender FROM relationships
                WHERE recipient='u4' AND status='accepted'
                UNION
                SELECT recipient FROM relationships
                WHERE sender='u4' AND status='accepted')
        UNION
        SELECT * FROM recommendations
        WHERE recommendations.sender='u4' OR recommendations.recipient='u4'
        GROUP BY recommendations.id
        ORDER BY datecreated DESC

    This seems to work, as far as I can see (I'm no SQL expert). It returns all the records from the Recommendations table that would be 'relevant' to a given user. However, I'm now having trouble also getting data from the Users table. The Recommendations table has the sender's and recipient's IDs (foreign keys), but I'd also like to get the first and last name of each as well. I think I need some sort of JOIN, but I'm lost on how to proceed and was looking for help on that. (And also, if anyone sees any areas for improvement in my current query, I'm all ears.) Thanks!
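
    On the JOIN part, the usual trick is joining the users table twice under different aliases, once per role, so each recommendation row carries both names. A minimal sketch (SQLite from Python, with the columns pared down; the existing WHERE/UNION machinery would stay as-is, the two extra JOINs just decorate each row):

        import sqlite3

        conn = sqlite3.connect(":memory:")
        conn.executescript("""
            CREATE TABLE users (id TEXT PRIMARY KEY, first_name TEXT, last_name TEXT);
            CREATE TABLE recommendations (id TEXT PRIMARY KEY, sender TEXT, recipient TEXT);
            INSERT INTO users VALUES ('u1', 'Ann', 'Ames'), ('u3', 'Cal', 'Cole');
            INSERT INTO recommendations VALUES ('r1', 'u1', 'u3');
        """)
        # Two joins against users: alias s resolves the sender, alias t the recipient.
        rows = conn.execute("""
            SELECT r.id,
                   s.first_name || ' ' || s.last_name AS sender_name,
                   t.first_name || ' ' || t.last_name AS recipient_name
            FROM recommendations r
            JOIN users s ON s.id = r.sender
            JOIN users t ON t.id = r.recipient
        """).fetchall()
        print(rows)   # [('r1', 'Ann Ames', 'Cal Cole')]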

    Read the article

  • Manipulating values from a database table with PHP

    - by charliecodex23
    I currently have 5 tables in a MySQL database. Some of them share foreign keys and are interdependent. I am displaying classes according to their majors. Each class is taught during the fall, the spring, or all year. In my database I have a table named semester, which has id, year, and semester fields. The semester field in particular is a tinyint with three values: 0, 1, and 2, signifying fall, spring, or all-year. When I display the query results, instead of showing 0, 1, or 2, can I have it show Fall, Spring, etc.?

    Extra: How can I add space to the end of each loop so the data doesn't look clustered?

    Key:

        0   Fall
        1   Spring
        2   All-year

    PHP:

        <?php
        try {
            $pdo = new PDO("mysql:host=$hostname;dbname=$dbname", "$username", "$pw");
        } catch (PDOException $e) {
            echo "Failed to get DB handle: " . $e->getMessage() . "\n";
            exit;
        }

        $query = $pdo->prepare("SELECT course.name, course.code, course.description, course.hours,
                                       semester.semester, semester.year
                                FROM course
                                LEFT JOIN major_course_xref ON course.id = major_course_xref.course_id
                                LEFT JOIN major ON major.id = major_course_xref.major_id
                                LEFT JOIN course_semester_xref ON course.id = course_semester_xref.course_id
                                LEFT JOIN semester ON course_semester_xref.semester_id = semester.id");

        if ($query->execute()) {
            while ($row = $query->fetch(PDO::FETCH_ASSOC)) {
                print $row['name'] . "<br>";
                print $row['code'] . "<br>";
                print $row['description'] . "<br>";
                print $row['hours'] . " hrs.<br>";
                print $row['semester'] . "<br>";
                print $row['year'] . "<br>";
            }
        } else {
            echo 'Could not fetch results.';
        }

        unset($pdo);
        unset($query);
        ?>

    Current display:

        Computer Programming I
        CPSC1400
        Introduction to disciplined, object-oriented program development.
        4 hrs.
        0
        2013

    Desired display:

        Computer Programming I
        CPSC1400
        Introduction to disciplined, object-oriented program development.
        4 hrs.
        Fall
        2013
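
    One way is to do the mapping in the SQL itself with a CASE expression, so the query returns the label directly. Sketched with SQLite from Python for brevity (MySQL supports the same CASE syntax in the SELECT list); the blank-line print also answers the "Extra" spacing question:

        import sqlite3

        conn = sqlite3.connect(":memory:")
        conn.execute("CREATE TABLE semester (id INTEGER PRIMARY KEY, year INTEGER, semester INTEGER)")
        conn.executemany("INSERT INTO semester (year, semester) VALUES (?, ?)",
                         [(2013, 0), (2013, 1), (2014, 2)])
        rows = conn.execute("""
            SELECT year,
                   CASE semester
                        WHEN 0 THEN 'Fall'
                        WHEN 1 THEN 'Spring'
                        ELSE 'All-year'
                   END AS semester_label
            FROM semester
        """).fetchall()
        for year, label in rows:
            print(label, year)
            print()            # blank separator so rows don't look clustered

    The alternative is a lookup map in application code (e.g. an array indexed by 0/1/2), which keeps the SQL untouched; either works.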

    Read the article

  • Optimal template for changing content via XMLHttpRequest with jQuery, PHP and SQL [closed]

    - by B.F.
    This is my method for handling XMLHttpRequests. It avoids needless MySQL requests, foreign access, a nervous user clicking twice, and double requests.

    jQuery:

        var allow = true;
        var is_loaded = "";
        $(document).ready(function(){
            ....
            $(".xx").on("click", function(){
                if (allow) {
                    allow = false;
                    if (is_loaded != "that") {
                        $.post("job.php", {job: "that", word: "aaa", number: "123"}, function(data){
                            $(".aaa").html(data);
                            is_loaded = "that";
                        });
                    }
                    setTimeout(function(){ allow = true; }, 500);
                }
            ....
        });

    job.php:

        <?PHP
        ob_start('ob_gzhandler');
        if (!isset($_SERVER['HTTP_X_REQUESTED_WITH'])
            or strtolower($_SERVER['HTTP_X_REQUESTED_WITH']) != 'xmlhttprequest') exit("bad boy!");
        if ($_POST['job'] == "that") {
            include "includes/that.inc";
        }
        elseif ($_POST['job'] == ....
        ob_end_flush();
        ?>

    that.inc:

        if (!preg_match("/\w/", $_POST['word'])) exit("bad boy!");
        if (!is_numeric($_POST['number'])) exit("bad boy!");
        // exclude more.
        $path = "temp/that_".$_POST['word'].".txt";
        if (file_exists($path) and filemtime("includes/that.inc") < filemtime($path)) {
            readfile($path);
        }
        else {
            include "includes/openSql.inc";
            $call = sql_query("SELECT * FROM that WHERE name='".mysql_real_escape_string($_POST['word'])."'");
            if (!$call) exit("ups");
            $out = "";
            while ($row = mysql_fetch_assoc($call)) {
                $out .= $_POST['word']." loves the color ".$row['color'].".<br/>";
            }
            echo $out;
            $fn = fopen($path, "wb");
            fputs($fn, $out);
            fclose($fn);
        }

    If something changes in the database, you just have to delete the involved cache files. Hope it was English.
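
    The cache-freshness check at the core of this pattern, restated in Python for clarity (the file names are invented; the idea is the same: serve the cached file unless the generating code is newer than it):

        import os

        CACHE = "temp/that_aaa.txt"
        SOURCE = "includes/that.inc"

        def cached_or_rebuild(build):
            """Return cached bytes unless the source file is newer than the cache."""
            try:
                fresh = os.path.getmtime(SOURCE) < os.path.getmtime(CACHE)
            except FileNotFoundError:      # cache or source missing: rebuild
                fresh = False
            if fresh:
                with open(CACHE, "rb") as f:
                    return f.read()
            data = build()                 # e.g. render the fragment from the database
            os.makedirs(os.path.dirname(CACHE), exist_ok=True)
            with open(CACHE, "wb") as f:
                f.write(data)
            return data

        print(cached_or_rebuild(lambda: b"aaa loves the color red.<br/>"))

    Deleting the cache file (or touching the source) invalidates the entry, exactly as described above.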

    Read the article

  • Information not getting into the controller from the view (Authlogic model)

    - by Gotjosh
    Right now I'm building a project management app in Rails. Here is some background info: I have 2 models, User and Client. Clients and Users have a one-to-one relationship (Client has_one, User belongs_to, which means the foreign key is in the users table).

    What I'm trying to do is: once you add a client, you can create credentials (add a user) for that client. To that end, all the clients are displayed with a link next to each client's name for creating credentials for that client. I'm using the link_to helper like this:

        <%= link_to "Credentials", {:controller => 'user', :action => 'new', :client_id => client.id} %>

    so the URL is constructed like this, creating the user for the client with ID 2:

        http://localhost:3000/clients/2/user/new

    The info is then captured in the controller like this:

        @user = User.new(:client_id => params[:client_id])

    The weird thing is that EVERY other piece of information BUT the client id is getting passed, and the client id should be passed in params[:client_id]. Any ideas? Perhaps it has something to do with the fact that the User model has acts_as_authentic, because I'm using Authlogic for it?

    Model:

        class User < ActiveRecord::Base
          acts_as_authentic
          belongs_to :client
        end

    Controller:

        def create
          @user = User.new(:client_id => params[:client_id])
          if @user.save
            flash[:notice] = "Credentials created"
          else
            flash[:error] = "Credentials failed"
          end
        end

    View:

        <% form_for @user do |f| %>
          <p>
            <%= f.label :login, "Username" %>
            <%= f.text_field :login %>
          </p>
          <p>
            <%= f.label :password, "Password" %>
            <%= f.password_field :password %>
          </p>
          <p>
            <%= f.label :password_confirmation, "Password Confirmation" %>
            <%= f.password_field :password_confirmation %>
          </p>
          <%= f.submit "Create", :disable_with => 'Please Wait...' %>
        <% end %>

    Let me know if this is sufficient or if you need more.

    Read the article
