Search Results

Search found 31421 results on 1257 pages for 'entity sql'.

Page 459/1257 | < Previous Page | 455 456 457 458 459 460 461 462 463 464 465 466  | Next Page >

  • In SQL, why is "select *, count(*) from sentGifts group by whenSent;" OK, but swapping "*" and "count(*)" a syntax error?

    - by Jian Lin
    In SQL, using the table:

        mysql> select * from sentGifts;
        +--------+------------+--------+------+---------------------+--------+
        | sentID | whenSent   | fromID | toID | trytryWhen          | giftID |
        +--------+------------+--------+------+---------------------+--------+
        |      1 | 2010-04-24 |    123 |  456 | 2010-04-24 01:52:20 |    100 |
        |      2 | 2010-04-24 |    123 | 4568 | 2010-04-24 01:56:04 |    100 |
        |      3 | 2010-04-24 |    123 | NULL | NULL                |      1 |
        |      4 | 2010-04-24 |   NULL |  111 | 2010-04-24 03:10:42 |      2 |
        |      5 | 2010-03-03 |     11 |   22 | 2010-03-03 00:00:00 |      6 |
        |      6 | 2010-04-24 |     11 |  222 | 2010-04-24 03:54:49 |      6 |
        |      7 | 2010-04-24 |      1 |    2 | 2010-04-24 03:58:45 |      6 |
        +--------+------------+--------+------+---------------------+--------+
        7 rows in set (0.00 sec)

    The following is OK:

        mysql> select *, count(*) from sentGifts group by whenSent;
        +--------+------------+--------+------+---------------------+--------+----------+
        | sentID | whenSent   | fromID | toID | trytryWhen          | giftID | count(*) |
        +--------+------------+--------+------+---------------------+--------+----------+
        |      5 | 2010-03-03 |     11 |   22 | 2010-03-03 00:00:00 |      6 |        1 |
        |      1 | 2010-04-24 |    123 |  456 | 2010-04-24 01:52:20 |    100 |        6 |
        +--------+------------+--------+------+---------------------+--------+----------+
        2 rows in set (0.00 sec)

    But suppose we want the count(*) to appear as the first column:

        mysql> select count(*), * from sentGifts group by whenSent;
        ERROR 1064 (42000): You have an error in your SQL syntax; check the manual that
        corresponds to your MySQL server version for the right syntax to use near
        '* from sentGifts group by whenSent' at line 1

    It gave an error. Why is it so, and what is a way to fix it? I realized that this is OK:

        mysql> select count(*), whenSent from sentGifts group by whenSent;
        +----------+------------+
        | count(*) | whenSent   |
        +----------+------------+
        |        1 | 2010-03-03 |
        |        6 | 2010-04-24 |
        +----------+------------+
        2 rows in set (0.00 sec)

    But what about the one above that gave an error? Thanks.
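
    A common workaround, not taken from the question itself and offered here only as an untested sketch: MySQL rejects an unqualified * that follows other items in the select list, but a table-qualified wildcard (or an explicit column list) is accepted, so count(*) can come first.

        -- qualified wildcard where a bare * would be a syntax error
        SELECT count(*), sentGifts.*
        FROM sentGifts
        GROUP BY whenSent;

        -- or name the columns explicitly
        SELECT count(*), whenSent
        FROM sentGifts
        GROUP BY whenSent;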

    Read the article

  • Accessing CoreData relationships

    - by georgeliquor
    I have the following Core Data model with two entities: entity "item", which holds name, date, description and an optional to-many relationship "image"; and entity "image", which holds url, name and a to-one relationship back to item. I load the results of the executed fetch request into the NSArray "entityArray". This is what I do to display my data in a UITableView, for example to display the title in the main cell:

        NSManagedObject *object = (NSManagedObject *)[entityArray objectAtIndex:indexPath.row];
        cell.textLabel.text = [object valueForKey:@"title"];

    Now I have no clue how to access my relationship image ([object valueForKey:@"image"]), because it contains more than just a string.

    Read the article

  • Risky Business with LINQ to SQL and OR Designer?

    - by Toadmyster
    I have two tables with a one-to-many relationship in SQL 2008. The first table (BBD) describes the data common to all buildings:

        PK | BBDataID       | int
           | Floor_Qty      | tinyint
           | Construct_Year | char(4)
           | etc, etc

    and the second (BBDCerts) is a collection of certifications for a particular building:

        PK | BBDCertsID         | int
           | BBDataID           | int
           | Certification_Type | varchar(20)
           | etc, etc

    Thus, the primary key in BBD (BBDataID) is mapped to the corresponding field in BBDCerts via an FK relationship, but BBDCertsID is the second table's primary key and BBDataID is not, because it will not be unique. My problem is that I want to be able to use the OR-generated data context to get at the list of certs when I access a particular record in the BBD table. For instance:

        Dim vals = (From q In db.BBD
                    Where q.BBDataID = x
                    Select q.Floor_Qty, q.Construct_Year, q.BBDCerts).SingleOrDefault

    and later be able to access a particular certification like this:

        vals.BBDCerts.Certification_Type.First

    Now, the automatic associations created when the SQL tables are dropped on the design surface don't generate the EntityRef associations that are needed to access the other table using dot notation. So I have to use the OR designer to make the BBDCerts BBDataID a primary key (this doesn't affect the actual database), and then manually change the association properties to the appropriate OneToMany settings. There might be a better way to approach this solution, but my question is: is the way I've done it safe? I've done a barrage of tests and the correct cert is referenced or updated every time. Frankly, the whole thing makes me nervous.
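
    For readers trying to picture the schema, a minimal DDL sketch of the two tables as described above; the column list is abridged and the IDENTITY settings and inline constraint style are assumptions, not taken from the post.

        CREATE TABLE BBD (
            BBDataID       INT IDENTITY PRIMARY KEY,   -- PK of the parent table
            Floor_Qty      TINYINT,
            Construct_Year CHAR(4)
            -- etc, etc
        );

        CREATE TABLE BBDCerts (
            BBDCertsID         INT IDENTITY PRIMARY KEY,   -- PK of the child table
            BBDataID           INT NOT NULL
                REFERENCES BBD (BBDataID),                 -- FK only; deliberately not unique
            Certification_Type VARCHAR(20)
            -- etc, etc
        );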

    Read the article

  • AppEngine: Can I write a Dynamic property (db.Expando) with a name chosen at runtime?

    - by MarcoB
    If I have an entity derived from db.Expando I can write a dynamic property by just assigning a value to a new property, e.g. "y" in this example:

        class MyEntity(db.Expando):
            x = db.IntegerProperty()

        my_entity = MyEntity(x=1)
        my_entity.y = 2

    But suppose I have the name of the dynamic property in a variable... how can I (1) read and write to it, and (2) check if the dynamic variable exists in the entity's instance? e.g.

        class MyEntity(db.Expando):
            x = db.IntegerProperty()

        my_entity = MyEntity(x=1)

        # choose a var name:
        var_name = "z"
        # assign a value to the dynamic variable whose name is in var_name:
        my_entity.property_by_name[var_name] = 2
        # also, check if such a property exists
        if my_entity.property_exists(var_name):
            # read the value of the dynamic property whose name is in var_name
            print my_entity.property_by_name[var_name]

    Thanks...

    Read the article

  • A first look at SQL Server 2012 Availability Group Wait Statistics

    If you are trouble-shooting an AlwaysOn Availability Group topology, a study of the wait statistics will give a pointer to many of the causes of problems. Although several wait types are documented, there is nothing like practical experiment to familiarize yourself with new wait stats, and Joe Sack demonstrates a way of testing the sort of waits generated by an availability group under various circumstances.

    Read the article

  • Which approach to create the data access layer has the highest performance?

    - by pooyakhamooshi
    I have to create a very high performance application. Currently I am using Entity Framework for my data access layer. My application has to insert some communication data almost every second, and I found that Entity Framework is slow: it takes about 2 seconds to finish the SaveChanges() method. I was thinking I have the following options:

    1. Create the data access layer myself using ADO.NET, with stored procedures or ad-hoc queries
    2. Use the Enterprise Library data access layer
    3. Use NHibernate
    4. Use Repository Factory: http://pooyakhamooshi.blogspot.com/search?q=repository

    What do you think? Which one is quicker for inserting data? Which one is quicker to set up?
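
    As a rough illustration of option 1, the SQL side of a plain stored-procedure insert is sketched below; the table, procedure and parameter names are hypothetical, since the question doesn't show the actual schema.

        -- hypothetical procedure for inserting one communication record
        CREATE PROCEDURE dbo.InsertCommunicationData
            @DeviceId   INT,
            @Payload    NVARCHAR(4000),
            @ReceivedAt DATETIME
        AS
        BEGIN
            SET NOCOUNT ON;
            INSERT INTO dbo.CommunicationData (DeviceId, Payload, ReceivedAt)
            VALUES (@DeviceId, @Payload, @ReceivedAt);
        END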

    Read the article

  • IQueryable<> dynamic ordering/filtering with GetValue fails

    - by MyNameIsJob
    I'm attempting to filter results from a database using Entity Framework CTP5. Here is my current method:

        IQueryable<Form> Forms = DataContext.CreateFormContext().Forms;
        foreach (string header in Headers)
        {
            Forms = Forms.Where(f => f.GetType()
                                      .GetProperty(header)
                                      .GetValue(f, null)
                                      .ToString()
                                      .IndexOf(filter, StringComparison.InvariantCultureIgnoreCase) >= 0);
        }

    However, I found that GetValue doesn't work with Entity Framework. It does when the type is IEnumerable<T>, but not IQueryable<T>. Is there an alternative I can use to produce the same effect?

    Read the article

  • PowerShell the SQL Server Way

    Although Windows PowerShell has been available to IT professionals for going on seven years, there are still many IT pros who are just now deciding to see what the fuss is all about. Depending on your job, you might find PowerShell an invaluable tool. Microsoft's plan is that PowerShell will be the management tool for all of its servers and platforms. For most IT pros, it's not a matter of if you'll be using PowerShell, only a matter of when.

    Read the article

  • Steps to rollback database changes without impacting SQL Server Log Shipping

    When pushing a major release to a large production database, you want to know that you'll be able to roll back changes if the need arises. These are some simple steps we can follow so that we don't have to reconfigure log shipping all over again, saving time and ensuring systems are not affected when rolling back changes.

    Read the article

  • LINQ with Include and criteria

    - by JMarsch
    How would I translate this into LINQ? Say I have a parent table (say, Customers) and a child table (Addresses). I want to return all of the parents who have addresses in California, and just the California address (but I want to do it in LINQ and get an object graph of Entity objects). Here's the old-fashioned way:

        SELECT c.blah, a.blah
        FROM Customer c
        INNER JOIN Address a ON c.CustomerId = a.CustomerId
        WHERE a.State = 'CA'

    The problem I'm having with LINQ is that I need an object graph of concrete Entity types (and it can't be lazy loaded). Here's what I've tried so far:

        // this one doesn't filter the addresses -- I get the right customers,
        // but I get all of their addresses, and not just the CA address object.
        from c in Customer.Include(c => c.Addresses)
        where c.Addresses.Any(a => a.State == "CA")
        select c

        // this one seems to work, but the Addresses collection on Customers is always null
        from c in Customer.Include(c => c.Addresses)
        from a in c.Addresses
        where a.State == "CA"
        select c;

    Any ideas?

    Read the article

  • Reliable Storage Systems for SQL Server

    By validating the IO path before commissioning the production database system, and performing ongoing validation through page checksums and DBCC checks, you can hopefully avoid data corruption altogether, or at least nip it in the bud. If corruption occurs, then you have to take the right decisions fast to deal with it. Rod Colledge explains how a pessimistic mindset can be an advantage.

    Read the article

  • Restoring a Publisher Database in SQL Server

    Introduction: Restoring any database is a critical task, and it is further complicated when the database being restored is a publisher database. For the purposes of this article, I will assume familiarity with the different types ...

    Read the article

  • SQL: Getting the full record with the highest count.

    - by sqlnoob
    I'm trying to write SQL that produces the desired result from the data below.

    data:

        ID Num  Opt1  Opt2  Opt3  Count
        1       A     A     E     1
        1       A     B     J     4
        2       A     A     E     9
        3       B     A     F     1
        3       B     C     K     14
        4       A     A     M     3
        5       B     D     G     5
        6       C     C     E     13
        6       C     C     M     1

    desired result:

        ID Num  Opt1  Opt2  Opt3  Count
        1       A     B     J     4
        2       A     A     E     9
        3       B     C     K     14
        4       A     A     M     3
        5       B     D     G     5
        6       C     C     E     13

    Essentially I want, for each ID Num, the full record with the highest count. I tried doing a group by, but if I group by Opt1, Opt2, Opt3 this doesn't work, because it returns the highest count for each (ID Num, Opt1, Opt2, Opt3) combination, which is not what I want. If I only group by ID Num, I can get the max for each ID Num but I lose the information as to which (Opt1, Opt2, Opt3) combination gives this count. I feel like I've done this before, but I don't often work with SQL and I can't remember how. Is there an easy way to do this?
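
    One standard shape for this kind of "row with the highest count per group" query, offered as a sketch rather than a definitive answer: join the table back to a grouped subquery on the per-group maximum. It assumes the table is called data, that "ID Num" is stored as a single column (written IDNum here), that the Count column name is legal without quoting in the target database, and that counts are unique within each IDNum; ties would return more than one row.

        SELECT t.IDNum, t.Opt1, t.Opt2, t.Opt3, t.Count
        FROM data t
        JOIN (
            SELECT IDNum, MAX(Count) AS MaxCount
            FROM data
            GROUP BY IDNum
        ) m ON m.IDNum = t.IDNum
           AND t.Count = m.MaxCount;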

    Read the article

  • Stepping into a LINQ query

    - by MyNameIsJob
    Is it possible to step into a LINQ query? I have a LINQ to Entity Framework 4 query in its simplest form:

        List = List.Where(f => f.Value.ToString().ToLowerInvariant().Contains(filter.ToLowerInvariant()));

    It's a query against an Entity Framework DbContext and I'm having trouble seeing why it behaves the way it does: searching for 001 yields no results against the following list:

        Test001
        Test002
        Test003
        Test004

    However, any other search (such as t00 or Test) yields results. I was hoping to figure out a way to see the resulting SQL, but it appears that I need IntelliTrace, which I don't currently have, or possibly to step through the query itself and see each iteration build itself.

    Read the article

  • Null Values And The T-SQL IN Operator

    - by Jesse
    I came across some unexpected behavior while troubleshooting a failing test the other day that took me long enough to figure out that I thought it was worth sharing here. I finally traced the failing test back to a SELECT statement in a stored procedure that was using the IN T-SQL operator to exclude a certain set of values. Here's a very simple example table to illustrate the issue:

        Customers
            CustomerId    INT, NOT NULL, Primary Key
            CustomerName  NVARCHAR(100), NOT NULL
            SalesRegionId INT, NULL

    The 'SalesRegionId' column contains a number representing the sales region that the customer belongs to. This column is nullable because new customers get created all the time, but assigning them to sales regions is a process that is handled by a regional manager on a periodic basis. For the purposes of this example, the Customers table currently has the following rows:

        CustomerId  CustomerName  SalesRegionId
        1           Customer A    1
        2           Customer B    NULL
        3           Customer C    4
        4           Customer D    2
        5           Customer E    3

    How could we write a query against this table for all customers that are NOT in sales regions 2 or 4? You might try something like this:

        SELECT
            CustomerId,
            CustomerName,
            SalesRegionId
        FROM Customers
        WHERE SalesRegionId NOT IN (2,4)

    Will this work? In short, no; at least not in the way that you might expect. Here's what this query will return given the example data we're working with:

        CustomerId  CustomerName  SalesRegionId
        1           Customer A    1
        5           Customer E    3

    I was expecting that this query would also return 'Customer B', since that customer has a NULL SalesRegionId. In my mind, having a customer with no sales region should be included in a set of customers that are not in sales regions 2 or 4. When I first started troubleshooting my issue I made note of the fact that this query should probably be re-written without the NOT IN clause, but I didn't suspect that the NOT IN clause was actually the source of the issue. This particular query was only one minor piece in a much larger process that was being exercised via an automated integration test, and I simply made a poor assumption that the NOT IN would work the way that I thought it should. So why doesn't this work the way that I thought it should? From the MSDN documentation on the T-SQL IN operator:

        If the value of test_expression is equal to any value returned by subquery or is equal to any expression from the comma-separated list, the result value is TRUE; otherwise, the result value is FALSE. Using NOT IN negates the subquery value or expression.

    The key phrase out of that quote is, "... is equal to any expression from the comma-separated list...". The NULL SalesRegionId isn't included in the NOT IN because of how NULL values are handled in equality comparisons. From the MSDN documentation on ANSI_NULLS:

        The SQL-92 standard requires that an equals (=) or not equal to (<>) comparison against a null value evaluates to FALSE. When SET ANSI_NULLS is ON, a SELECT statement using WHERE column_name = NULL returns zero rows even if there are null values in column_name. A SELECT statement using WHERE column_name <> NULL returns zero rows even if there are nonnull values in column_name.

    In fact, the MSDN documentation on the IN operator includes the following blurb about using NULL values in sub-queries or expressions that are used with the IN operator:

        Any null values returned by subquery or expression that are compared to test_expression using IN or NOT IN return UNKNOWN. Using null values together with IN or NOT IN can produce unexpected results.

    If I were to include a 'SET ANSI_NULLS OFF' command right above my SELECT statement I would get 'Customer B' returned in the results, but that's definitely not the right way to deal with this. We could re-write the query to explicitly include the NULL value in the WHERE clause:

        SELECT
            CustomerId,
            CustomerName,
            SalesRegionId
        FROM Customers
        WHERE (SalesRegionId NOT IN (2,4) OR SalesRegionId IS NULL)

    This query works and properly includes 'Customer B' in the results, but I ultimately opted to re-write the query using a LEFT OUTER JOIN against a table variable containing all of the values that I wanted to exclude because, in my case, there could potentially be several hundred values to be excluded. If we were to apply the same refactoring to our simple sales region example we'd end up with:

        DECLARE @regionsToIgnore TABLE (IgnoredRegionId INT)
        INSERT @regionsToIgnore VALUES (2),(4)

        SELECT
            c.CustomerId,
            c.CustomerName,
            c.SalesRegionId
        FROM Customers c
        LEFT OUTER JOIN @regionsToIgnore r ON r.IgnoredRegionId = c.SalesRegionId
        WHERE r.IgnoredRegionId IS NULL

    By performing a LEFT OUTER JOIN from Customers to the @regionsToIgnore table variable we can simply exclude any rows where the IgnoredRegionId is null, as those represent customers that DO NOT appear in the ignored regions list. This approach will likely perform better if the number of sales regions to ignore gets very large, and it also will correctly include any customers that do not yet have a sales region.
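
    Another rewrite that also keeps the NULL SalesRegionId rows, not covered above but worth noting as a sketch against the same example table and @regionsToIgnore table variable: NOT EXISTS behaves intuitively here, because the correlated comparison simply finds no match when SalesRegionId is NULL.

        SELECT
            c.CustomerId,
            c.CustomerName,
            c.SalesRegionId
        FROM Customers c
        WHERE NOT EXISTS (
            SELECT 1
            FROM @regionsToIgnore r
            WHERE r.IgnoredRegionId = c.SalesRegionId
        );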

    Read the article

  • SQL Server - Rebuilding Indexes

    - by Renso
    Goal: Rebuild indexes in SQL Server. This can be done one at a time, or with the example script below to rebuild all indexes for a specified table or for all tables in a given database.

    Why?
    The data in indexes gets fragmented over time. That means that as the index grows, the newly added rows are physically stored in other sections of the allocated database storage space. Kind of like when you load your Christmas shopping into the trunk of your car: once it is full, you continue to load some onto the back seat. In the same way, some storage buffer is created for your index, but once that runs out the data is stored in other storage space, and the data in your index is no longer stored in contiguous physical pages. To access the index, the database manager has to "string together" disparate fragments to create the full index and produce one contiguous set of pages for that index. Defragmentation fixes that.

    What does the fragmentation affect?
    Depending of course on how large the table is and how fragmented the data is, fragmentation can cause SQL Server to perform unnecessary data reads, slowing down SQL Server's performance.

    Which index to rebuild?
    As a rule, consider that when you reorganize a table's clustered index, all other non-clustered indexes on that same table will automatically be rebuilt. A table can only have one clustered index.

    How to rebuild all the indexes for one table:
    The DBCC DBREINDEX command will not automatically rebuild all of the indexes on a given table in a database.

    How to rebuild all indexes for all tables in a given database:

        USE [myDB]   -- enter your database name here
        DECLARE @tableName varchar(255)
        DECLARE TableCursor CURSOR FOR
            SELECT table_name FROM information_schema.tables
            WHERE table_type = 'base table'
        OPEN TableCursor
        FETCH NEXT FROM TableCursor INTO @tableName
        WHILE @@FETCH_STATUS = 0
        BEGIN
            DBCC DBREINDEX(@tableName, ' ', 90)   -- a fill factor of 90%
            FETCH NEXT FROM TableCursor INTO @tableName
        END
        CLOSE TableCursor
        DEALLOCATE TableCursor

    What does this script do?
    It reindexes all indexes in all tables of the given database. Each index is filled with a fill factor of 90%. While the command DBCC DBREINDEX runs and rebuilds the indexes, the table becomes temporarily unavailable for use by your users until the rebuild has completed, so don't do this during production hours, as it will create a shared lock on the tables, although it will allow for read-only uncommitted data reads, i.e. SELECTs.

    What is the fill factor?
    It is the percentage of space on each index page for storing data when the index is created or rebuilt. It replaces the fill factor from when the index was created, becoming the new default for the index and for any other nonclustered indexes rebuilt because a clustered index is rebuilt. When fillfactor is 0, DBCC DBREINDEX uses the fill factor value last specified for the index. This value is stored in the sys.indexes catalog view. If fillfactor is specified, table_name and index_name must be specified. If fillfactor is not specified, the default fill factor, 100, is used.

    How do I determine the level of fragmentation?
    Run the DBCC SHOWCONTIG command. However, this requires you to specify the ID of both the table and the index. To make it a lot easier, by only requiring you to specify the table name and/or index name, you can run this script:

        DECLARE
            @ID int,
            @IndexID int,
            @IndexName varchar(128)

        -- Specify the table and index names
        SELECT @IndexName = 'index_name'       -- name of the index
        SET @ID = OBJECT_ID('table_name')      -- name of the table

        SELECT @IndexID = IndID
        FROM sysindexes
        WHERE id = @ID AND name = @IndexName

        -- Show the level of fragmentation
        DBCC SHOWCONTIG (@ID, @IndexID)

    Here is an example:

        DBCC SHOWCONTIG scanning 'Tickets' table...
        Table: 'Tickets' (1829581556); index ID: 1, database ID: 13
        TABLE level scan performed.
        - Pages Scanned................................: 915
        - Extents Scanned..............................: 119
        - Extent Switches..............................: 281
        - Avg. Pages per Extent........................: 7.7
        - Scan Density [Best Count:Actual Count].......: 40.78% [115:282]
        - Logical Scan Fragmentation ..................: 16.28%
        - Extent Scan Fragmentation ...................: 99.16%
        - Avg. Bytes Free per Page.....................: 2457.0
        - Avg. Page Density (full).....................: 69.64%
        DBCC execution completed. If DBCC printed error messages, contact your system administrator.

    What's important here?
    The Scan Density: ideally it should be 100%. As time goes by it drops as fragmentation occurs. When the level drops below 75%, you should consider re-indexing.

    Here are the results for the same table and clustered index after running the script:

        DBCC SHOWCONTIG scanning 'Tickets' table...
        Table: 'Tickets' (1829581556); index ID: 1, database ID: 13
        TABLE level scan performed.
        - Pages Scanned................................: 692
        - Extents Scanned..............................: 87
        - Extent Switches..............................: 86
        - Avg. Pages per Extent........................: 8.0
        - Scan Density [Best Count:Actual Count].......: 100.00% [87:87]
        - Logical Scan Fragmentation ..................: 0.00%
        - Extent Scan Fragmentation ...................: 22.99%
        - Avg. Bytes Free per Page.....................: 639.8
        - Avg. Page Density (full).....................: 92.10%
        DBCC execution completed. If DBCC printed error messages, contact your system administrator.

    What's different?
    The Scan Density has increased from 40.78% to 100%; there is no fragmentation on the clustered index. Note that since we rebuilt the clustered index, all other indexes were also rebuilt.
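
    As a side note not covered in the article: on SQL Server 2005 and later, the same rebuild can be expressed with ALTER INDEX, which Microsoft recommends over the deprecated DBCC DBREINDEX, and fragmentation can be read from a DMV instead of DBCC SHOWCONTIG. A minimal sketch, reusing the article's Tickets table as the example:

        -- rebuild every index on one table with a 90% fill factor
        ALTER INDEX ALL ON dbo.Tickets
        REBUILD WITH (FILLFACTOR = 90);

        -- check fragmentation per index without DBCC SHOWCONTIG
        SELECT index_id, avg_fragmentation_in_percent, page_count
        FROM sys.dm_db_index_physical_stats(DB_ID(), OBJECT_ID('dbo.Tickets'), NULL, NULL, 'LIMITED');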

    Read the article

  • MySQL query works in PHPMyAdmin but not PHP

    - by Su4p
    I do not understand what's happening. I have a query in PHP that crashes with a strange error. When I copy/paste the exact same request into phpMyAdmin it works as expected. What am I doing wrong here?

        SELECT oms_patient.id, oms_patient.date, oms_patient.date_modif, date_modif,
        AES_DECRYPT(nom,"xxxxx") AS "Nom", AES_DECRYPT(prenom,"xxxxx") AS "Prénom usuel",
        DATE_FORMAT(ddn, "%d/%m/%Y") AS "Date de naissance", villeNaissance AS "Lieu de naissance (ville)",
        CONCAT(oms_departement.libelle,"(",id_departement,")") AS "Lieu de vie",
        CONCAT(oms_pays.libelle,"(",id_pays,")") AS "Pays",
        CONCAT(patientsexe.libelle,"(",id_sexe,")") AS "Sexe",
        CONCAT(patientprofession.libelle,"(",id_profession,")") AS "Profession",
        IF(asthme>0,"Oui","Non") AS "Asthme", IF(rhinite>0,"Oui","Non") AS "Rhinite",
        IF(bcpo>0,"Oui","Non") AS "BPCO", IF(insuffisanceResp>0,"Oui","Non") AS "Insuffisance respiratoire chronique",
        IF(chirurgieOrl>0,"Oui","Non") AS "Chirurgie ORL du ronflement", IF(autreChirurgie>0,"Oui","Non") AS "Autre chirurgie ORL",
        IF(allergies>0,"Oui","Non") AS "Allergies", IF(OLD>0,"Oui","Non") AS "OLD",
        IF(hypertensionArterielle>0,"Oui","Non") AS "Hypertension artérielle", IF(infarctusMyocarde>0,"Oui","Non") AS "Infarctus du myocarde",
        IF(insuffisanceCoronaire>0,"Oui","Non") AS "Insuffisance coronaire", IF(troubleRythme>0,"Oui","Non") AS "Trouble du rythme",
        IF(accidentVasculaireCerebral>0,"Oui","Non") AS "Accident vasculaire cérébral", IF(insuffisanceCardiaque>0,"Oui","Non") AS "Insuffisance cardiaque",
        IF(arteriopathie>0,"Oui","Non") AS "Artériopathie", IF(tabagismeActuel>0,"Oui","Non") AS "Tabagisme actuel",
        CONCAT(nbPaquetsActuel," ","PA") AS "", IF(tabagismeAncien>0,"Oui","Non") AS "Tabagisme ancien",
        CONCAT(nbPaquetsAncien," ","PA") AS "", IF(alcool>0,"Oui","Non") AS "Alcool (conso régulière)",
        IF(refluxGastro>0,"Oui","Non") AS "Reflux gastro-oesophagien", IF(glaucome>0,"Oui","Non") AS "Glaucome",
        IF(diabete>0,"Oui","Non") AS "Diabète", CONCAT(patienttypeDiabete.libelle,"(",id_typeDiabete,")") AS "",
        IF(hypercholesterolemie>0,"Oui","Non") AS "Hypercholestérolémie", IF(hypertriglyceridemie>0,"Oui","Non") AS "Hypertriglycéridémie",
        IF(dysthyroidie>0,"Oui","Non") AS "Dysthyroïdie", IF(depression>0,"Oui","Non") AS "Dépression",
        IF(sedentarite>0,"Oui","Non") AS "Sédentarité", IF(syndromeDApneesSommeil>0,"Oui","Non") AS "SAS",
        IF(obesite>0,"Oui","Non") AS "Obésité", IF(dysmorphieFaciale>0,"Oui","Non") AS "Dysmorphie faciale",
        TextObservations AS "", id_user
        FROM oms_patient
        LEFT JOIN oms_departement ON oms_departement.id = id_departement
        LEFT JOIN oms_pays ON oms_pays.id = id_pays
        LEFT JOIN patientsexe ON patientsexe.id = id_sexe
        LEFT JOIN patientprofession ON patientprofession.id = id_profession
        LEFT JOIN patienttypeDiabete ON patienttypeDiabete.id = id_typeDiabete
        WHERE oms_patient.id=1

    The error:

        You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version
        for the right syntax to use near 'small"(conso régulière)", IF(refluxGastro0,"Oui","Non") as "Reflux ga' at line 1

    "near 'small" <-- where is small o_O

    The PHP code isn't really relevant because you won't see a lot:

        $db = mysql_connect();
        mysql_select_db(); //TODO SWITCH TO PDO
        mysql_query("SET NAMES UTF8");
        $fields = $form->getFields($form);
        $settingsForm = $form->getSettings();
        $sql = 'SELECT oms_patient.id,oms_patient.date,oms_patient.date_modif,';
        foreach ($fields as $field) {
            if (!$field->isMultiSelect()) {
                $field->select_full(&$sql, 'oms_patient', null);
            }
        }
        if (isset($settingsForm['linkTo'])) {
            $idLinkTo = 'id_' . str_replace('oms_', '', $settingsForm['linkTo']);
            $sql .= $idLinkTo;
        }
        $sql .= ' FROM oms_patient';
        foreach ($fields as $field) {
            if (!$field->isMultiSelect() && $field->getTable('oms_patient')) {
                $sql .= ' LEFT JOIN ' . $field->getTable('oms_patient') . ' ON ' . $field->getTable('oms_patient') . '.id = ' . $field->getFieldName() . ' ';
            }
        }
        $sql .= ' where oms_patient.id=' . $this->m_settings['e'];
        $result = mysql_query($sql) or die('Erreur SQL !<br>' . $sql . '<br>' . mysql_error());
        $data = mysql_fetch_assoc($result);

    var_dump of $sql:

        string(2663) "SELECT oms_patient.id,oms_patient.date,oms_patient.date_modif,date_modif,AES_DECRYPT(nom,"xxxxx") as "Nom",AES_DECRYPT("prenom","xxxxx") as "Prénom usuel",DATE_FORMAT(ddn, "%d/%m/%Y") as "Date de naissance",villeNaissance as "Lieu de naissance (ville)",CONCAT(oms_departement.libelle,"(",id_departement,")") as "Lieu de vie",CONCAT(oms_pays.libelle,"(",id_pays,")") as "Pays",CONCAT(patientsexe.libelle,"(",id_sexe,")") as "Sexe",CONCAT(patientprofession.libelle,"(",id_profession,")") as "Profession", IF"...

    I can't go further to see what is in the output after the "..." <-- if you have an idea.

    Read the article

  • NHibernate HiLo - new column per entity and HiLo catches

    - by Gareth
    I'm currently using the hilo id generator for my classes, but have just been using the minimum of settings, e.g.

        <class name="ClassA">
          <id name="Id" column="id" unsaved-value="0">
            <generator class="hilo" />
          </id>
          ...

    But should I really be specifying a new column for NHibernate to use for each entity, and providing it with a max_lo?

        <class name="ClassA">
          <id name="Id" column="id" unsaved-value="0">
            <generator class="hilo">
              <param name="table">hibernate_unique_key</param>
              <param name="column">classA_nexthi</param>
              <param name="max_lo">20</param>
            </generator>
          </id>
          ...

        <class name="ClassB">
          <id name="Id" column="id" unsaved-value="0">
            <generator class="hilo">
              <param name="table">hibernate_unique_key</param>
              <param name="column">classB_nexthi</param>
              <param name="max_lo">20</param>
            </generator>
          </id>
          ...

    Also, I've noticed that when I do the above, SchemaExport will not create all the columns - only classB_nexthi. Is there something else I'm doing wrong? Thanks

    Read the article

  • Using Lite Version of Entity in nHibernate Relations?

    - by Amitabh
    Is it a good idea to create a lighter version of an entity in some cases, just for performance reasons, pointing to the same table but with fewer columns mapped? E.g. if I have a Contact table which has 50 columns, and in a few of the related entities I might only be interested in the FirstName and LastName properties, is it a good idea to create a lightweight version of the Contact table? E.g.

        public class ContactLite
        {
            public int Id { get; set; }
            public string FirstName { get; set; }
            public string LastName { get; set; }
        }

    Also, is it possible to map multiple classes to the same table?

    Read the article
