Search Results

Search found 25408 results on 1017 pages for 'table relationships'.

Page 54/1017 | < Previous Page | 50 51 52 53 54 55 56 57 58 59 60 61  | Next Page >

  • Copy data from one SQL Server database table to the other

    - by lucky
    I don't know how to copy table data from one server/database/table to another server/database/table. Within the same server, across different databases, I have used the following:

        SELECT * INTO DB1..TBL1 FROM DB2..TBL1                         -- copies table structure and data
        INSERT INTO DB1..TBL1 (F1, F2) SELECT F1, F2 FROM DB2..TBL1    -- copies data only

    Now my question is how to copy the data from SERVER1 -- DB1 -- TBL1 to SERVER2 -- DB2 -- TBL2.
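
    A minimal sketch of one common approach, assuming SERVER2 has been registered as a linked server on SERVER1 (e.g. via sp_addlinkedserver) and the login has rights on both sides; four-part names then work in plain T-SQL:

        -- Copy data only, across servers, via a linked server named SERVER2
        INSERT INTO SERVER2.DB2.dbo.TBL2 (F1, F2)
        SELECT F1, F2
        FROM DB1.dbo.TBL1;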

    Read the article

  • mysql join table - selecting the newest row

    - by cmancre
    Hi, I have the following two MySQL tables:

        TABLE NAMES
        NAME_ID  NAME
        1        name1
        2        name2
        3        name3

        TABLE STATUS
        STATUS_ID  NAME_ID  TIMESTAMP
        1          1        2010-12-20 12:00
        2          2        2010-12-20 10:00
        3          3        2010-12-20 10:30
        4          3        2010-12-20 14:00

    I would like to select all info from table NAMES and add the most recent corresponding TIMESTAMP column from table STATUS:

        RESULT
        NAME_ID  NAME   TIMESTAMP
        1        name1  2010-12-20 12:00
        2        name2  2010-12-20 10:00
        3        name3  2010-12-20 14:00

    I am stuck on this one. How do I left join only on the newest timestamp?
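
    A sketch of the standard grouping approach, using the column names above; MAX() picks the newest timestamp per name:

        SELECT n.NAME_ID, n.NAME, MAX(s.TIMESTAMP) AS TIMESTAMP
        FROM NAMES n
        LEFT JOIN STATUS s ON s.NAME_ID = n.NAME_ID
        GROUP BY n.NAME_ID, n.NAME;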

    Read the article

  • stored procedure to transfer data from a table in one database to a table in another database in ora

    - by ranju
    There are two databases, A and B. I want to transfer data from a table in A to a table in B, using a cursor. Any duplicate rows encountered during the transfer should go to a separate duplicates table. I want a stored procedure to do the above. First I need to connect database A with database B using a db link. Can anyone help?
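
    A rough sketch of the shape such a procedure could take, assuming a database link named B_LINK toward database B has already been created, and using hypothetical table names SRC_TBL, DEST_TBL and DUP_TBL with matching ID/VAL columns:

        CREATE OR REPLACE PROCEDURE transfer_data IS
        BEGIN
          -- cursor FOR loop over the source table in database A
          FOR rec IN (SELECT id, val FROM src_tbl) LOOP
            BEGIN
              -- insert into the remote table in B over the db link
              INSERT INTO dest_tbl@b_link (id, val) VALUES (rec.id, rec.val);
            EXCEPTION
              -- rows that violate the unique key go to the duplicates table
              WHEN DUP_VAL_ON_INDEX THEN
                INSERT INTO dup_tbl (id, val) VALUES (rec.id, rec.val);
            END;
          END LOOP;
          COMMIT;
        END;
        /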

    Read the article

  • MSSQL + Copy data from one server database table to the other

    - by lucky
    Hello All, I don't know how to copy table data from one server/database/table to another server/database/table. Within the same server, across different databases, I have used the following:

        SELECT * INTO DB1..TBL1 FROM DB2..TBL1                         -- copies table structure and data
        INSERT INTO DB1..TBL1 (F1, F2) SELECT F1, F2 FROM DB2..TBL1    -- copies data only

    Now my question is how to copy the data from SERVER1 -- DB1 -- TBL1 to SERVER2 -- DB2 -- TBL2. Thanks in advance!

    Read the article

  • How efficient is a details table?

    - by Jeffrey Lott
    At my job, we have a pseudo-standard of creating one table to hold the "standard" information for an entity, and a second table, named like 'TableNameDetails', which holds optional data elements. On average, every row in the main table has about 8-10 detail rows. My question is: what kind of performance impact does this have compared to adding these details as additional nullable columns on the main table?
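
    For concreteness, a sketch of the two designs being compared, with hypothetical names; the first keys each optional element as a detail row, the second widens the main table:

        -- One detail row per optional element
        CREATE TABLE Widget (
            WidgetId INT PRIMARY KEY,
            Name VARCHAR(100) NOT NULL
        );
        CREATE TABLE WidgetDetails (
            WidgetId    INT REFERENCES Widget(WidgetId),
            DetailName  VARCHAR(50),
            DetailValue VARCHAR(255)
        );

        -- Versus folding the optional elements into nullable columns
        CREATE TABLE Widget2 (
            WidgetId INT PRIMARY KEY,
            Name     VARCHAR(100) NOT NULL,
            Color    VARCHAR(50) NULL,
            Weight   DECIMAL(10,2) NULL
            -- ...one nullable column per optional element
        );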

    Read the article

  • Is a payment table needed when you have an invoice table like this?

    - by EBAGHAKI
    This is my invoice table:

        Invoice table:
            invoice_id
            creation_date
            due_date
            payment_date
            status ENUM('not paid', 'paid', 'expired')
            user_id
            total_price

    I wonder if it's useful to have a payment table in order to record user payments for invoices. The payment table could look like this:

        Payment table:
            payment_id
            payment_date
            invoice_id
            price_paid
            status ENUM('successful', 'not successful')

    Read the article

  • How to calculate unweighted averages in Excel PivotTable?

    - by yonatron
    I often make PivotTables in which each row contains a number of per-person average measures. I then want to look at the unweighted column average for each measure, and usually make some kind of chart from these. Because my individual cells are often averaged from different numbers of data points, the Grand Total row ends up being a weighted average, which I'm not interested in. So I usually make my own average row a few rows above the table to use for my charts. That's not too much work, but there's another problem. I often add a few more people's worth of data to the PivotTables' source, then refresh the tables. This means my average row needs to be updated to encompass more rows from the PivotTable. Not a huge deal with one table, but when I have lots of them across lots of sheets, I have to do find/replace on a whole bunch of formulas. So: is there a way to automatically get unweighted column averages in a PivotTable, such that when the table is refreshed, the averages don't change location and encompass the newly added (or removed) data? Thanks.

    Read the article

  • MySQL : table organisation for very large sets with high update frequency

    - by Remiz
    I'm facing a dilemma in the choice of schema for my MySQL application. Before I start, here is an extremely simplified picture of my database (schema here: http://i43.tinypic.com/2wp5lxz.png). In one sentence: for each customer, the application harvests text data and attaches tags to each piece of data collected. As an approximation of the usage of each table, here is what I expect:

        customer : ~5000, shouldn't grow fast
        data     : 5 million per customer, could double or triple for big customers
        tag      : ~1000, quite fixed in size
        data_tag : easily hundreds of millions per customer; each data row can carry many tags

    The harvesting process is permanent: roughly every 15 minutes new data arrives and is tagged, which requires constant index refreshing. Most of my queries are a SELECT COUNT of data between specific dates, tagged with a specific tag, for a specific customer (very rarely will they involve several customers). You can imagine that with this volume of data I'm facing a challenge in terms of data organization and indexing. Again, this is a very minimalistic and simplified version of my structure. My question is, is it better:

        - to stick with this model and manage heavy index optimization (which potentially means billions of rows in the data_tag table), or
        - to change the schema and use one data table and one data_tag table per customer (which means having 5000 tables in my database)?

    I'm running all of this on a replicated, dedicated MySQL 5.0 server (quad-core, 8 GB of RAM). I only use InnoDB; I also have another server running Sphinx. Knowing all of this, I can't wait to hear your opinion. Thanks.
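
    A sketch of the query shape described above, with hypothetical column names; this is the query any indexing strategy here has to serve:

        SELECT COUNT(*)
        FROM data d
        JOIN data_tag dt ON dt.data_id = d.id
        WHERE d.customer_id = ?
          AND dt.tag_id = ?
          AND d.harvested_at BETWEEN ? AND ?;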

    Read the article

  • Reread partition table without rebooting?

    - by Teddy
    Sometimes, when resizing or otherwise mucking about with partitions on a disk, cfdisk will say:

        Wrote partition table, but re-read table failed. Reboot to update table.

    (This also happens with other partitioning tools, so I'm thinking this is a Linux issue rather than a cfdisk issue.) Why is this, why does it only happen sometimes, and what can I do to avoid it? Note: please assume that none of the partitions I am actually editing are opened, mounted or otherwise in use. Update: cfdisk uses ioctl(fd, BLKRRPART, NULL) to tell Linux to reread the partition table. Two of the other tools recommended so far (hdparm -z DEVICE, sfdisk -R DEVICE) do exactly the same thing. The partprobe DEVICE command, on the other hand, seems to use a newer ioctl called BLKPG, which might be better; I don't know. (It also falls back on BLKRRPART if BLKPG fails.) BLKPG seems to be a "this partition has changed; here is the new size" operation, and it looks like partprobe calls it individually on all the partitions on the device passed, so it should work if the individual partitions are unused. However, I have not had the opportunity to try it.

    Read the article

  • Open XML SDK 2.0 - Split table to a new PowerPoint slide when content flows off the current slide

    - by amurra
    I have a bunch of data that I need to export from a website to a PowerPoint presentation and have been using Open XML SDK 2.0 to perform this task. I have a PowerPoint presentation that I am putting through the Open XML SDK 2.0 Productivity Tool to generate the template code that I can use to recreate the export. On one of those slides I have a table, and the requirement is to add data to that table and break it across multiple slides when the table exceeds the bottom of the slide. The approach I have taken is to determine the height of the table and, if it exceeds the height of the slide, move the new content onto the next slide. I have read Bryan and Jones' blog on adding repeating data to a PowerPoint slide, but my scenario is a little different. They use the following code:

        A.Table tbl = current.Slide.Descendants<A.Table>().First();
        A.TableRow tr = new A.TableRow();
        tr.Height = heightInEmu;
        tr.Append(CreateDrawingCell(imageRel + imageRelId));
        tr.Append(CreateTextCell(category));
        tr.Append(CreateTextCell(subcategory));
        tr.Append(CreateTextCell(model));
        tr.Append(CreateTextCell(price.ToString()));
        tbl.Append(tr);
        imageRelId++;

    This won't work for me, since they know what height to set the table row to (the height of the image), but when adding in different amounts of text I do not know the height ahead of time, so I just set tr.Height to a default value. Here is my attempt at figuring out the table height:

        A.Table tbl = tableSlide.Slide.Descendants<A.Table>().First();
        A.TableRow tr = new A.TableRow();
        tr.Height = 370840L;
        tr.Append(PowerPointUtilities.CreateTextCell("This"));
        tr.Append(PowerPointUtilities.CreateTextCell("is"));
        tr.Append(PowerPointUtilities.CreateTextCell("a"));
        tr.Append(PowerPointUtilities.CreateTextCell("test"));
        tr.Append(PowerPointUtilities.CreateTextCell("Test"));
        tbl.Append(tr);
        tableSlide.Slide.Save();
        long tableHeight = PowerPointUtilities.TableHeight(tbl);

    Here are the helper methods:

        public static A.TableCell CreateTextCell(string text)
        {
            A.TableCell tableCell = new A.TableCell(
                new A.TextBody(new A.BodyProperties(),
                    new A.Paragraph(new A.Run(new A.Text(text)))),
                new A.TableCellProperties());
            return tableCell;
        }

        public static Int64Value TableHeight(A.Table table)
        {
            long height = 0;
            foreach (var row in table.Descendants<A.TableRow>()
                                     .Where(h => h.Height.HasValue))
            {
                height += row.Height.Value;
            }
            return height;
        }

    This correctly adds the new table row to the existing table, but when I try to get the height of the table, it returns the original height rather than the new one; that is, it returns the default height I initially set, not the height after a large amount of text has been inserted. It seems the height only gets readjusted when the file is opened in PowerPoint. I have also tried accessing the height of the largest table cell in the row, but can't seem to find the right property for that. My question is: how do you determine the height of a dynamically added table row, given that the row height doesn't seem to update until the file is opened in PowerPoint? Are there other ways to determine when to split content to another slide while using Open XML SDK 2.0? I'm open to any suggestions on a better approach someone might have taken, since there isn't much documentation on this subject.

    Read the article

  • Need help displaying a dynamic table

    - by Gideon
    I am designing a job rota planner for a company and need help displaying a dynamic table containing the staff details. I have the following tables in a MySQL database: Staff, Event, and Job. The Staff table holds staff details (staffid, name, address, etc.), the Event table holds (eventid, eventName, fromDate, toDate, etc.), and the Job table holds (jobid, jobDate, eventid (fk), staffid (fk)). I need to dynamically display the available staff list from the Staff table when the user selects the EVENT and the DATE (three drop-downs: day, month, and year) from a PHP form. I need to display staff members that have not been assigned work on the selected date by checking the jobDate in the Job table. I have been at this all day and can't get around it. I am still learning PHP and would surely appreciate any help I can get. My current code displays all staff members when an event is selected:

        if (isset($_POST['submit'])) {
            $eventId = $_POST['eventradio'];
        }
        $timePeriod = $_POST['timeperiod'];
        $Day = $_POST['day'];
        $Month = $_POST['month'];
        $Year = $_POST['year'];
        $dateValue = $Year . "-" . $Month . "-" . $Day;
        $selectedDate = date("Y-m-d", strtotime($dateValue));

        // construct the available staff list
        if ($selectedDate) {
            $staffsql = "SELECT s.StaffId, s.LastName, s.FirstName
                         FROM Staff s
                         WHERE s.StaffId NOT IN
                             (SELECT J.StaffId FROM Job J
                              WHERE J.JobDate != " . $selectedDate . ")";
            $staffResult = mysql_query($staffsql) or die(mysql_error());
        }

        if ($staffResult) {
            echo "<p><table cellspacing='1' cellpadding='3'>";
            echo "<th colspan=6>List of Available Staff</th>";
            echo "</tr><tr><th> Select</th><th>Id</th><th></th><th>Last Name </th><th></th><th>First Name </th></tr>";
            while ($staffarray = mysql_fetch_array($staffResult)) {
                echo "<tr onMouseOver=this.bgColor='red' onMouseOut=this.bgColor='white' bgcolor='#FFFFFF'>
                      <td align=center><input type='checkbox' name='selectbox[]' id='selectbox[]' value=" . $staffarray['StaffId'] . ">
                      </td><td align=left>" . $staffarray['StaffId'] . "
                      </td><td>&nbsp;&nbsp;</td><td align=center>" . $staffarray['LastName'] . "
                      </td><td>&nbsp;&nbsp;</td><td align=center>" . $staffarray['FirstName'] . "
                      </td></tr>";
            }
            echo "</table>";
        } else {
            echo "<br> The Staff list can not be displayed!";
        }
        echo "</td></tr>";
        echo "<tr><td></td>";
        echo "<td align=center><input type='submit' name='Submit' value='Assign Staff'>&nbsp;&nbsp;";
        echo "<input type='reset' value='Start Over'>";
        echo "</td></tr>";
        echo "</table>";
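
    For what it's worth, a hedged sketch of how the subquery might instead be written, given the stated goal (exclude staff who already have a job on the selected date); note the = instead of != and the quoted date literal (the date shown is a hypothetical example):

        SELECT s.StaffId, s.LastName, s.FirstName
        FROM Staff s
        WHERE s.StaffId NOT IN
            (SELECT j.StaffId FROM Job j WHERE j.JobDate = '2010-06-15');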

    Read the article

  • Get the Dynamic table data from gui in selenium webDriver

    - by Rabindra
    I am working on a web-based application that I am testing with Selenium. On one page the content is dynamically loaded into a table. I want to get the table data, but I am getting an org.openqa.selenium.NullPointerElementException on this line:

        WebElement table = log.driver.findElement(By.xpath(tableXpath));

    I tried the following complete code:

        public int selectfromtable(String tableXpath, String CompareValue, int columnnumber)
                throws Exception {
            WebElement table = log.driver.findElement(By.xpath(tableXpath));
            List<WebElement> rows = table.findElements(By.tagName("tr"));
            int flag = 0;
            for (WebElement row : rows) {
                List<WebElement> cells = row.findElements(By.tagName("td"));
                if (!cells.isEmpty() && cells.get(columnnumber).getText().equals(CompareValue)) {
                    flag = 1;
                    Thread.sleep(1000);
                    break;
                } else {
                    Thread.sleep(2000);
                    flag = 0;
                }
            }
            return flag;
        }

    I am calling the above method like this:

        String tableXpath = ".//*[@id='event_list']/form/div[1]/table/tbody/tr/td/div/table";
        selectfromtable(tableXpath, eventType, 3);

    My HTML page looks like this:

        <table width="100%">
          <tbody style="overflow: auto; background-color: #FFFFFF">
            <tr class="trOdd">
              <td width="2%" align="center">
              <td width="20%" align="center"> Account </td>
              <td width="20%" align="center"> Enter Collection </td>
              <td width="20%" align="center">
              <td width="20%" align="center"> 10 </td>
              <td width="20%" align="center"> 1 </td>
            </tr>
          </tbody>
          <tbody style="overflow: auto; background-color: #FFFFFF">
            <tr class="trEven">
              <td width="2%" align="center">
              <td width="20%" align="center"> Account </td>
              <td width="20%" align="center"> Resolved From Collection </td>
              <td width="20%" align="center">
              <td width="20%" align="center"> 10 </td>
              <td width="20%" align="center"> 1 </td>
            </tr>
          </tbody>
        </table>

    Read the article

  • Inheritance Mapping Strategies with Entity Framework Code First CTP5: Part 3 – Table per Concrete Type (TPC) and Choosing Strategy Guidelines

    - by mortezam
    This is the third (and last) post in a series that explains different approaches to map an inheritance hierarchy with EF Code First. I've described these strategies in previous posts:

        Part 1 – Table per Hierarchy (TPH)
        Part 2 – Table per Type (TPT)

    In today's blog post I am going to discuss Table per Concrete Type (TPC), which completes the inheritance mapping strategies supported by EF Code First. At the end of this post I will provide some guidelines for choosing an inheritance strategy, based mainly on what we've learned in this series.

    TPC and Entity Framework in the Past

    Table per Concrete type is in some ways the simplest approach suggested, yet using TPC with EF is one of those concepts that has not been covered very well so far, and some resources have even discouraged it. The reason is simply that the Entity Data Model Designer in VS2010 doesn't support TPC (even though the EF runtime does). That basically means that if you are following EF's Database-First or Model-First approaches, configuring TPC requires manually writing XML in the EDMX file, which is not considered a fun practice. Well, no more. You'll see that with Code First, creating TPC is perfectly possible with the fluent API, just like the other strategies, and you don't need to avoid TPC due to the lack of designer support as you probably would in the other EF approaches.

    Table per Concrete Type (TPC)

    In Table per Concrete type (aka Table per Concrete class) we use exactly one table for each (nonabstract) class. All properties of a class, including inherited properties, are mapped to columns of this table. The SQL schema is not aware of the inheritance; effectively, we've mapped two unrelated tables to a more expressive class structure. If the base class were concrete, an additional table would be needed to hold instances of that class. I have to emphasize that there is no relationship between the database tables, except for the fact that they share some similar columns.

    TPC Implementation in Code First

    Just like the TPT implementation, we need to specify a separate table for each of the subclasses. We also need to tell Code First that we want all of the inherited properties to be mapped as part of this table. In CTP5, there is a new helper method on the EntityMappingConfiguration class called MapInheritedProperties that does exactly this for us. Here is the complete object model as well as the fluent API to create a TPC mapping:

        public abstract class BillingDetail
        {
            public int BillingDetailId { get; set; }
            public string Owner { get; set; }
            public string Number { get; set; }
        }

        public class BankAccount : BillingDetail
        {
            public string BankName { get; set; }
            public string Swift { get; set; }
        }

        public class CreditCard : BillingDetail
        {
            public int CardType { get; set; }
            public string ExpiryMonth { get; set; }
            public string ExpiryYear { get; set; }
        }

        public class InheritanceMappingContext : DbContext
        {
            public DbSet<BillingDetail> BillingDetails { get; set; }

            protected override void OnModelCreating(ModelBuilder modelBuilder)
            {
                modelBuilder.Entity<BankAccount>().Map(m =>
                {
                    m.MapInheritedProperties();
                    m.ToTable("BankAccounts");
                });
                modelBuilder.Entity<CreditCard>().Map(m =>
                {
                    m.MapInheritedProperties();
                    m.ToTable("CreditCards");
                });
            }
        }

    The Importance of the EntityMappingConfiguration Class

    As a side note, it is worth mentioning that EntityMappingConfiguration turns out to be a key type for inheritance mapping in Code First. Here is a snapshot of this class:

        namespace System.Data.Entity.ModelConfiguration.Configuration.Mapping
        {
            public class EntityMappingConfiguration<TEntityType> where TEntityType : class
            {
                public ValueConditionConfiguration Requires(string discriminator);
                public void ToTable(string tableName);
                public void MapInheritedProperties();
            }
        }

    As you have seen so far, we used its Requires method to customize TPH. We also used its ToTable method to create a TPT mapping, and now we are using its MapInheritedProperties along with ToTable to create our TPC mapping.

    TPC Configuration is Not Done Yet!

    We are not quite done with our TPC configuration, and there is more to this story, even though the fluent API above perfectly created a TPC mapping in the database. To see why, let's start working with our object model. For example, the following code creates two new objects of the BankAccount and CreditCard types and tries to add them to the database:

        using (var context = new InheritanceMappingContext())
        {
            BankAccount bankAccount = new BankAccount();
            CreditCard creditCard = new CreditCard() { CardType = 1 };

            context.BillingDetails.Add(bankAccount);
            context.BillingDetails.Add(creditCard);
            context.SaveChanges();
        }

    Running this code throws an InvalidOperationException with this message:

        The changes to the database were committed successfully, but an error occurred while updating the object context. The ObjectContext might be in an inconsistent state. Inner exception message: AcceptChanges cannot continue because the object's key values conflict with another object in the ObjectStateManager. Make sure that the key values are unique before calling AcceptChanges.

    The reason we got this exception is that DbContext.SaveChanges() internally invokes the SaveChanges method of its internal ObjectContext. ObjectContext's SaveChanges method in turn, by default, calls AcceptAllChanges after it has performed the database modifications. AcceptAllChanges merely iterates over all entries in the ObjectStateManager and invokes AcceptChanges on each of them. Since the entities are in the Added state, AcceptChanges replaces their temporary EntityKey with a regular EntityKey based on the primary key values (i.e. BillingDetailId) that come back from the database, and that's where the problem occurs: the database assigned both entities the same primary key value (BillingDetailId = 1 on both), and the ObjectStateManager cannot track objects of the same type (i.e. BillingDetail) with the same EntityKey value, hence it throws. If you take a closer look at the TPC SQL schema above, you'll see why the database generated the same values for the primary keys: the BillingDetailId column in both the BankAccounts and CreditCards tables is marked as identity.

    How to Solve the Identity Problem in TPC

    As you saw, using SQL Server's int identity columns doesn't work very well together with TPC, since there will be duplicate entity keys when inserting into subclass tables that all have the same identity seed. Therefore, to solve this, either a spread seed (where each table has its own initial seed value) is needed, or a mechanism other than SQL Server's int identity should be used. Some other RDBMSes have mechanisms that allow a sequence (identity) to be shared by multiple tables, and something similar can be achieved with GUID keys in SQL Server. Using GUID keys, or int identity keys with different starting seeds, would solve the problem, but yet another solution is to completely switch off identity on the primary key property. As a result, we take on the responsibility of providing unique keys when inserting records into the database. We will go with this solution since it works regardless of which database engine is used.

    Switching Off Identity in Code First

    We can switch off identity simply by placing the DatabaseGenerated attribute on the primary key property and passing DatabaseGenerationOption.None to its constructor. DatabaseGenerated is a new data annotation which has been added to the System.ComponentModel.DataAnnotations namespace in CTP5:

        public abstract class BillingDetail
        {
            [DatabaseGenerated(DatabaseGenerationOption.None)]
            public int BillingDetailId { get; set; }
            public string Owner { get; set; }
            public string Number { get; set; }
        }

    As always, we can achieve the same result with the fluent API, if you prefer that:

        modelBuilder.Entity<BillingDetail>()
                    .Property(p => p.BillingDetailId)
                    .HasDatabaseGenerationOption(DatabaseGenerationOption.None);

    Working With the Object Model

    Our TPC mapping is ready and we can try adding new records to the database. But, like I said, now we need to take care of providing unique keys when creating new objects:

        using (var context = new InheritanceMappingContext())
        {
            BankAccount bankAccount = new BankAccount()
            {
                BillingDetailId = 1
            };
            CreditCard creditCard = new CreditCard()
            {
                BillingDetailId = 2,
                CardType = 1
            };

            context.BillingDetails.Add(bankAccount);
            context.BillingDetails.Add(creditCard);
            context.SaveChanges();
        }

    Polymorphic Associations with TPC are Problematic

    The main problem with this approach is that it doesn't support polymorphic associations very well. After all, in the database, associations are represented as foreign key relationships, and in TPC the subclasses are all mapped to different tables, so a polymorphic association to their base class (the abstract BillingDetail in our example) cannot be represented as a simple foreign key relationship. For example, consider the domain model we introduced here, where User has a polymorphic association with BillingDetail. This would be problematic in our TPC schema: if User has a many-to-one relationship with BillingDetail, the Users table would need a single foreign key column which would have to refer to both concrete subclass tables. This isn't possible with regular foreign key constraints.

    Schema Evolution with TPC is Complex

    A further conceptual problem with this mapping strategy is that several different columns, in different tables, share exactly the same semantics. This makes schema evolution more complex. For example, a change to a base class property results in changes to multiple columns. It also makes it much more difficult to implement database integrity constraints that apply to all subclasses.

    Generated SQL

    Let's examine the SQL output for polymorphic queries under the TPC mapping. For example, consider this polymorphic query for all BillingDetails and the resulting SQL statements executed in the database:

        var query = from b in context.BillingDetails
                    select b;

    Just like the SQL query generated by the TPT mapping, the CASE statements at the beginning of the query merely ensure that columns which are irrelevant for a particular row have NULL values in the returned flattened table (e.g. BankName for a row that represents a CreditCard type).

    TPC's SQL Queries are Union Based

    In the generated SQL, the first SELECT uses a FROM-clause subquery to retrieve all instances of BillingDetails from all concrete class tables. The tables are combined with a UNION operator, and a literal (in this case, 0 and 1) is inserted into the intermediate result; EF reads this literal to instantiate the correct class given the data from a particular row. A union requires that the queries being combined project over the same columns; hence, EF has to pad and fill up nonexistent columns with NULL. This query will actually perform well, since the database optimizer can find the best execution plan to combine rows from several tables. There are also no joins involved, so it has better performance than the SQL generated by TPT, where a join is required between the base and subclass tables.

    Choosing Strategy Guidelines

    Before we get into this discussion, I want to emphasize that there is no single best strategy that fits all scenarios. As you saw, each of the approaches has its own advantages and drawbacks. Here are some rules of thumb for identifying the best strategy in a particular scenario:

        - If you don't require polymorphic associations or queries, lean toward TPC; in other words, if you never or rarely query for BillingDetails and you have no class with an association to the BillingDetail base class. I recommend TPC only for the top level of your class hierarchy, where polymorphism isn't usually required, and when modification of the base class in the future is unlikely.
        - If you do require polymorphic associations or queries, and subclasses declare relatively few properties (particularly if the main difference between subclasses is in their behavior), lean toward TPH. Your goal is to minimize the number of nullable columns and to convince yourself (and your DBA) that a denormalized schema won't create problems in the long run.
        - If you do require polymorphic associations or queries, and subclasses declare many properties (subclasses differ mainly by the data they hold), lean toward TPT. Or, depending on the width and depth of your inheritance hierarchy and the possible cost of joins versus unions, use TPC.
        - By default, choose TPH only for simple problems. For more complex cases (or when you're overruled by a data modeler insisting on the importance of nullability constraints and normalization), you should consider the TPT strategy. But at that point, ask yourself whether it may not be better to remodel inheritance as delegation in the object model (delegation is a way of making composition as powerful for reuse as inheritance). Complex inheritance is often best avoided for all sorts of reasons unrelated to persistence or ORM. EF acts as a buffer between the domain and relational models, but that doesn't mean you can ignore persistence concerns when designing your classes.

    Summary

    In this series we focused on one of the main structural aspects of the object/relational paradigm mismatch, inheritance, and discussed how EF solves this problem as an ORM solution. We learned about the three well-known inheritance mapping strategies and their implementations in EF Code First. Hopefully this gives you better insight into the mapping of inheritance hierarchies as well as choosing the best strategy for your particular scenario. Happy New Year and Happy Code-Firsting!

    References

        ADO.NET team blog
        Java Persistence with Hibernate book

    Read the article

  • SQL SERVER – Updating Data in A Columnstore Index

    - by pinaldave
    So far I have written two articles on Columnstore indexes, and both of them got a very interesting readership. In fact, just recently I got a query on my previous article on the Columnstore index. Read the following two articles to get familiar with the Columnstore index; they provide the context for the question a certain reader asked:

        SQL SERVER – Fundamentals of Columnstore Index
        SQL SERVER – How to Ignore Columnstore Index Usage in Query

    Here is the reader's question: "When I tried to update my table after creating the Columnstore index, it gives me an error. What should I do?" When a Columnstore index is created on a table, the table becomes read-only and does not allow any insert/update/delete. The basic understanding is that a Columnstore index is created on a table that is very large and holds lots of data; if a table is small enough, there is no need for one, as a regular index should serve it. The reason the Columnstore index was needed in the first place is that the table was so big that retrieving the data was taking a really, really long time. Now, updating such a huge table is always a challenge in itself. If a Columnstore index exists on a table that needs to be updated, you should know that there are various ways to update it. The easiest way is to disable the index and re-enable it. Consider the following code:

        USE AdventureWorks
        GO

        -- Create new table
        CREATE TABLE [dbo].[MySalesOrderDetail](
            [SalesOrderID] [int] NOT NULL,
            [SalesOrderDetailID] [int] NOT NULL,
            [CarrierTrackingNumber] [nvarchar](25) NULL,
            [OrderQty] [smallint] NOT NULL,
            [ProductID] [int] NOT NULL,
            [SpecialOfferID] [int] NOT NULL,
            [UnitPrice] [money] NOT NULL,
            [UnitPriceDiscount] [money] NOT NULL,
            [LineTotal] [numeric](38, 6) NOT NULL,
            [rowguid] [uniqueidentifier] NOT NULL,
            [ModifiedDate] [datetime] NOT NULL
        ) ON [PRIMARY]
        GO

        -- Create clustered index
        CREATE CLUSTERED INDEX [CL_MySalesOrderDetail]
        ON [dbo].[MySalesOrderDetail] ([SalesOrderDetailID])
        GO

        -- Create sample data
        -- WARNING: this query may run for 2-10 minutes depending on your system's resources
        INSERT INTO [dbo].[MySalesOrderDetail]
        SELECT S1.*
        FROM Sales.SalesOrderDetail S1
        GO 100

        -- Create Columnstore index
        CREATE NONCLUSTERED COLUMNSTORE INDEX [IX_MySalesOrderDetail_ColumnStore]
        ON [MySalesOrderDetail] (UnitPrice, OrderQty, ProductID)
        GO

        -- Attempt to update the table
        UPDATE [dbo].[MySalesOrderDetail]
        SET OrderQty = OrderQty + 1
        WHERE [SalesOrderID] = 43659
        GO

        /* It will throw the following error:
           Msg 35330, Level 15, State 1, Line 2
           UPDATE statement failed because data cannot be updated
           in a table with a columnstore index. Consider disabling the
           columnstore index before issuing the UPDATE statement, then
           rebuilding the columnstore index after UPDATE is complete. */

    A similar error also shows up for INSERT/DELETE operations. Here is the workaround: disable the Columnstore index, perform the update, then rebuild the Columnstore index:

        -- Disable the Columnstore index
        ALTER INDEX [IX_MySalesOrderDetail_ColumnStore]
        ON [dbo].[MySalesOrderDetail] DISABLE
        GO

        -- Attempt to update the table
        UPDATE [dbo].[MySalesOrderDetail]
        SET OrderQty = OrderQty + 1
        WHERE [SalesOrderID] = 43659
        GO

        -- Rebuild the Columnstore index
        ALTER INDEX [IX_MySalesOrderDetail_ColumnStore]
        ON [dbo].[MySalesOrderDetail] REBUILD
        GO

    This time no error is thrown and the update of the table goes through successfully. Let us clean up our tables using this code:

        -- Cleanup
        DROP INDEX [IX_MySalesOrderDetail_ColumnStore] ON [dbo].[MySalesOrderDetail]
        GO
        TRUNCATE TABLE dbo.MySalesOrderDetail
        GO
        DROP TABLE dbo.MySalesOrderDetail
        GO

    In the next post we will see how we can use partitioning to update a Columnstore index. Reference: Pinal Dave (http://blog.SQLAuthority.com)

    Read the article

  • What are other ideologies to establish relationships between distinct users besides followers/following and friends?

    - by user784637
    Websites like Myspace and Facebook establish relationships between distinct users using the "friending" ideology, where one user sends a request that must be accepted by another user in order for them to have mutual permission to do things like post messages on each other's walls. Less restrictive than the "friending" ideology, Twitter and Instagram use the followers/following ideology, where you can subscribe to the tweets or posts of another user without their permission. Less restrictive still, email and calling someone on the phone allow you to directly contact anyone. Are there other ideologies that have been successfully implemented, either in social networking sites or in other real-world constructs, to establish relations between users?

    Read the article

  • MySQL dump, output each table row on a new line whilst using --extended-insert

    - by soopadoubled
    I'm having an issue where, for ease of use, I'd like to be able to format a command-line MySQL dump so that each row of a given table is on a new line when using the --extended-insert option. Usually when using --extended-insert, every row of a given table is output on one line, and as far as I am aware there's no way to change this, other than post-processing the dump with Perl or suchlike. The format I'm looking for is:

        --
        -- Dumping data for table `ww_tbCountry`
        --

        INSERT INTO `ww_tbCountry` (`iCountryId_PK`, `vCountryName`, `vShortName`, `iSortFlag`, `fTax`, `vCountryCode`, `vSageTaxCode`) VALUES
        (22, 'Albania', 'AL', 1, 0.00, '8', 'T9'),
        (33, 'Austria', 'AT', 1, 15.00, '40', 'T9'),
        (40, 'Belarus', 'BY', 1, 0.00, '112', 'T9'),
        (41, 'Belgium', 'BE', 1, 15.00, '56', 'T9'),
        (51, 'Bulgaria', 'BG', 1, 15.00, '100', 'T9'

    However, when I dump a database using phpMyAdmin with --extended-insert, each row is dumped on a new line (as shown in the example above). I've gone through phpMyAdmin and can't find any documentation that would explain this. Is anyone able to shed any light on this? Thanks in advance, Ian

    Read the article

  • Excel 2010: if( , , "") not treated the same as blank for pivot table group by date

    - by Confused
    I'm trying to group by date in an Excel 2010 pivot table. The column with dates (i.e., the one I want to group by) should hold the latest date of two other columns if neither is null or blank, i.e., with a formula like:

        =IF(AND(A4 <> "", B4 <> ""), MAX(A4,B4), "")

    Normally, this "" in the IF() formula acts the same as an empty cell. In this case, it is preventing me from grouping by date in the pivot table. If I filter the date column by (Blanks), then clear the contents of all those cells, the pivot table does group by date OK; i.e., "" is not being treated the same as an empty cell.

    Read the article

  • Static Routes and the Routing Table

    - by TheD
    This is very much a learning question, if someone would be happy to explain a couple of concepts. My question is: what does each of the routes in the default routing table do? In my case this is a default Windows 7 install. Here is a screenshot (the 10.128.4.0 entry is just a route I've added while messing around). I understand from a question I posted on Superuser that the first route is just a default route that sends all traffic for any IP to my default gateway on the interface in use. But what about the others? And how would the routing table handle a machine with multiple NICs, perhaps connected to two different networks, or maybe even two NICs on the same network so a VM can have a physical network card instead of each VM sharing the host's? Thanks!

    Read the article

  • How To Speed Up Adding Column To Large Table In Sql Server

    - by Chris
    I want to add a column to a SQL Server table with about 10M rows. I think this query would eventually finish adding the column I want:

        alter table T add mycol bit not null default 0

    but it's been going for several hours already. Is there any shortcut to get a "not null default 0" column inserted into a large table? Or is this inherently really slow? This is SQL Server 2000. Later on I have to do something similar on SQL Server 2008.
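
    A sketch of a common workaround, using the table name T from above; adding the column as nullable first is a near-instant metadata-only change, after which the backfill can be batched (SET ROWCOUNT is the SQL Server 2000 batching idiom):

        -- 1. Add the column as NULL: metadata-only, near-instant
        ALTER TABLE T ADD mycol BIT NULL

        -- 2. Backfill in small batches to keep transactions and locks short
        SET ROWCOUNT 10000
        WHILE 1 = 1
        BEGIN
            UPDATE T SET mycol = 0 WHERE mycol IS NULL
            IF @@ROWCOUNT = 0 BREAK
        END
        SET ROWCOUNT 0

        -- 3. Enforce the constraint and the default once the data is in place
        ALTER TABLE T ALTER COLUMN mycol BIT NOT NULL
        ALTER TABLE T ADD CONSTRAINT DF_T_mycol DEFAULT 0 FOR mycol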

    Read the article

  • Accidentally replaced the partition table using GParted in UBUNTU

    - by claws
    Hello, this machine has Ubuntu and Windows XP. I'm currently logged into Ubuntu. I was just checking the features of GParted and accidentally clicked Device > Create Partition Table, and a default MS-DOS partition table was created. Now if I restart GParted there is nothing; it shows the entire disk as UNALLOCATED space. The lucky thing is that all the drives (C:, D:, E:) are currently mounted and I'm in Ubuntu. I guess it's possible to re-create the partition table from the current state, but I don't know how. Can anyone kindly tell me how to do this? This is a lab computer. If it's not recoverable, I'm completely screwed!

    Read the article

  • Deleting MySQL rows causes lock table error

    - by Dave L
    I had a couple million rows to delete, but they can't be deleted at once without this error:

        ERROR 1206 (HY000): The total number of locks exceeds the lock table size

    So I wrote a script to delete 100,000 rows, 10,000 at a time. It ran once, but when I run it a second time I get the error on the first attempt to delete 10,000. The way I'm trying to delete the 10,000 rows is to use a DELETE statement that refers to all 2 million rows but has a LIMIT clause so it affects only 10,000. I've tried adding an "UNLOCK TABLES;" statement to the script before the first delete, but that doesn't help; I still get the lock table error on the first delete. Any ideas how I can do this? Is there a way I can tell it NOT to lock records? I can make sure nothing else is accessing the table.
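
    A sketch of the batched-delete pattern described above, with hypothetical table and column names; if autocommit is off, committing between batches releases each batch's row locks instead of letting them accumulate:

        -- repeat until no rows remain to delete
        DELETE FROM big_table
        WHERE created_at < '2010-01-01'
        LIMIT 10000;
        COMMIT;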

    Read the article
