Search Results

Search found 24675 results on 987 pages for 'table'.

  • SQL Server 2008 Running trigger after Insert, Update locks original table

    - by Polity
    Hi folks, I have a serious performance problem. My database has (as far as this problem is concerned) two tables. The first table contains strings along with some global information. The second table contains those strings stripped down to their individual words, so each string is effectively indexed in the second table, word by word. The validity of the data in the second table is less important than the validity of the data in the first. Since the first table can grow towards 1*10^6 records, and the second table (with an average of about 10 words per string) can grow towards 1*10^7 records, I read the second table with NOLOCK; that leaves me free to insert new records without locking it (expect many reads on both tables). A script keeps adding and updating rows in the first table through a MERGE statement; on average about 20 strings are merged at a time, and the script runs roughly once every 5 seconds. On the first table I have a trigger that fires on INSERT or UPDATE, takes the newly inserted or updated data, and calls a stored procedure on it that makes sure the data is indexed in the second table (this takes significant time). The problem: with the trigger disabled, reading the first table takes a few ms. With the trigger enabled, if you are unlucky enough to read the first table while it is being updated, our webserver gives you a timeout after 10 seconds (which is way too long anyway). From this I can guess that while the trigger runs, the first table is kept (partially) locked until the trigger completes. If I'm right, is there an easy way around this? Thanks in advance! Cheers, Koen
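
    A common way out of this, sketched below, is to make the trigger itself cheap: record only the changed keys in a small queue table and let a scheduled job do the slow word-splitting, so the lock taken by the trigger is released almost immediately. The table and column names here (Strings, StringId, WordIndexQueue) are assumptions, not the poster's actual schema:

        CREATE TABLE dbo.WordIndexQueue (
            StringId BIGINT NOT NULL,
            QueuedAt DATETIME NOT NULL DEFAULT GETDATE()
        );
        GO
        CREATE TRIGGER trg_Strings_QueueIndexing ON dbo.Strings
        AFTER INSERT, UPDATE
        AS
        BEGIN
            SET NOCOUNT ON;
            -- Only record which rows changed; the expensive indexing
            -- procedure runs later, outside the user's transaction.
            INSERT INTO dbo.WordIndexQueue (StringId)
            SELECT StringId FROM inserted;
        END

    A SQL Agent job can then drain the queue and call the existing indexing procedure. Independently, enabling READ_COMMITTED_SNAPSHOT on the database lets readers see the last committed version of a row instead of blocking behind the trigger's transaction.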

  • Can SQL Server 2005 PIVOT have ntext passed into it?

    - by manemawanna
    Right, a bit of a simple question: can I put ntext through a PIVOT (SQL Server 2005)? What I have is a table which records the answers to a questionnaire, consisting of the following elements, for example:

        UserID  QuestionNumber  Answer
        Mic     1               Yes
        Mic     2               No
        Mic     3               Yes
        Ste     1               Yes
        Ste     2               No
        Ste     3               Yes
        Bob     1               Yes
        Bob     2               No
        Bob     3               Yes

    with the answers held in ntext. What I'd like a pivot to produce is:

        UserID  1    2    3
        Mic     Yes  No   Yes
        Ste     Yes  No   Yes
        Bob     Yes  No   Yes

    I have some test code that creates a pivot, but at the moment it just shows the number of answers in each column (code below). So I just want to know: is it possible to put ntext into a pivot? When I've tried, it brings up errors, and someone stated on another site that it wasn't possible, so I would like to check whether that is the case. For reference, I don't have the opportunity to change the database, as it's linked to other systems that I haven't created and don't have access to. Here's the SQL code I have at present:

        DECLARE @query NVARCHAR(4000)
        DECLARE @count INT
        DECLARE @concatcolumns NVARCHAR(4000)

        SET @count = 1
        SET @concatcolumns = ''
        WHILE (@count <= 52)
        BEGIN
            IF @count > 1 AND @count <= 52
                SET @concatcolumns = (@concatcolumns + ' + ')
            SET @concatcolumns = (@concatcolumns + 'CAST ([' + CAST(@count AS NVARCHAR) + '] AS NVARCHAR)')
            SET @count = (@count + 1)
        END

        DECLARE @columns NVARCHAR(4000)
        SET @count = 1
        SET @columns = ''
        WHILE (@count <= 52)
        BEGIN
            IF @count > 1 AND @count <= 52
                SET @columns = (@columns + ',')
            SET @columns = (@columns + '[' + CAST(@count AS NVARCHAR) + '] ')
            SET @count = (@count + 1)
        END

        SET @query = '
        SELECT UserID, ' + @concatcolumns + '
        FROM (
            SELECT UserID, QuestionNumber AS qNum
            FROM QuestionnaireAnswers
            WHERE QuestionnaireID = 7
        ) AS t
        PIVOT (
            COUNT (qNum)
            FOR qNum IN (' + @columns + ')
        ) AS PivotTable'

        SELECT @query
        EXEC (@query)
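
    For what it's worth, PIVOT cannot aggregate ntext directly (ntext supports almost no operators or aggregates), but casting the column to nvarchar in the source query and pivoting with MAX instead of COUNT normally works. A minimal sketch against the table described above, shortened to three questions:

        SELECT UserID, [1], [2], [3]
        FROM (
            SELECT UserID,
                   QuestionNumber,
                   -- ntext itself can't be aggregated, so cast it first
                   CAST(Answer AS NVARCHAR(MAX)) AS AnswerText
            FROM QuestionnaireAnswers
            WHERE QuestionnaireID = 7
        ) AS src
        PIVOT (
            MAX(AnswerText)
            FOR QuestionNumber IN ([1], [2], [3])
        ) AS pvt;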

  • How can I work around SQL Server - Inline Table Value Function execution plan variation based on parameters?

    - by Ovidiu Pacurar
    Here is the situation: I have a table value function with a datetime parameter, let's say tdf(p_date), that filters about two million rows, selecting those with column Date smaller than p_date, and computes some aggregate values on other columns. It works great, but if p_date is a call to a custom scalar value function (returning the end of day, in my case), the execution plan is altered and the query goes from 1 second to 1 minute of execution time.

    A proof-of-concept table - 1K products, 2M rows:

        CREATE TABLE [dbo].[POC](
            [Date] [datetime] NOT NULL,
            [idProduct] [int] NOT NULL,
            [Quantity] [int] NOT NULL
        ) ON [PRIMARY]

    The inline table value function:

        CREATE FUNCTION tdf (@p_date datetime)
        RETURNS TABLE
        AS
        RETURN (
            SELECT idProduct, SUM(Quantity) AS TotalQuantity, MAX(Date) AS LastDate
            FROM POC
            WHERE (Date < @p_date)
            GROUP BY idProduct
        )

    The scalar value function:

        CREATE FUNCTION [dbo].[EndOfDay] (@date datetime)
        RETURNS datetime
        AS
        BEGIN
            DECLARE @res datetime
            SET @res = dateadd(second, -1, dateadd(day, 1,
                       dateadd(ms, -datepart(ms, @date),
                       dateadd(ss, -datepart(ss, @date),
                       dateadd(mi, -datepart(mi, @date),
                       dateadd(hh, -datepart(hh, @date), @date))))))
            RETURN @res
        END

    Query 1 - working great:

        SELECT * FROM [dbo].[tdf] (getdate())

    The end of the execution plan: Stream Aggregate (cost 13%) <--- Clustered Index Scan (cost 86%).

    Query 2 - not so great:

        SELECT * FROM [dbo].[tdf] (dbo.EndOfDay(getdate()))

    The end of the execution plan: Stream Aggregate (cost 4%) <--- Filter (cost 12%) <--- Clustered Index Scan (cost 86%).
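
    One workaround that often restores the fast plan is to evaluate the scalar function once, up front, and pass the result to the TVF as a plain variable, so the inlined query no longer contains the embedded function call. A minimal sketch, untested against this exact schema:

        DECLARE @p_date datetime
        SET @p_date = dbo.EndOfDay(getdate())

        -- The TVF now receives an ordinary datetime value, so the optimizer
        -- can inline it without wrapping the scan in an extra Filter step.
        SELECT * FROM [dbo].[tdf] (@p_date)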

  • How can I make a web-based "table" select like a spreadsheet? (rectangular area vs row-wrap selection)

    - by Dax
    I want to make a table that displays on a webpage, but one requirement is to make it easy to copy and paste into a spreadsheet. A normal HTML table's selection behavior is obviously different from how a spreadsheet like Excel selects: when you select multiple rows, the selection wraps around instead of covering a rectangular area. Is there any way to make an HTML table behave like a spreadsheet in this regard, or is the only way to resort to a Flash table or something similar?

  • SQL help - find the table that has 'somefieldId' as the primary key?

    - by Kettenbach
    Hello all, how can I search my SQL database for a table that contains a field 'tiEntityId'? This field is referenced in a stored procedure, but I am unable to identify which table this id is a primary key for. Any suggestions? I currently look through stored procedure definitions for references to text by using something like this:

        DECLARE @Search varchar(255)
        SET @Search = '[10.10.100.50]'

        SELECT DISTINCT o.name AS Object_Name, o.type_desc
        FROM sys.sql_modules m
        INNER JOIN sys.objects o ON m.object_id = o.object_id
        WHERE m.definition LIKE '%' + @Search + '%'
        ORDER BY 2, 1

    Any SQL gurus out there know what I need to use to find the table that contains the field, preferably the table where that field is the primary key? Thanks so much for any tips. Cheers, ~ck in San Diego
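
    The standard catalog views can answer both halves of this; nothing below is specific to the poster's database. The first query lists every table containing the column, the second narrows it down to tables where the column is part of the primary key:

        -- All tables that contain the column
        SELECT c.TABLE_SCHEMA, c.TABLE_NAME
        FROM INFORMATION_SCHEMA.COLUMNS c
        WHERE c.COLUMN_NAME = 'tiEntityId';

        -- Only tables where that column participates in the primary key
        SELECT k.TABLE_SCHEMA, k.TABLE_NAME
        FROM INFORMATION_SCHEMA.KEY_COLUMN_USAGE k
        INNER JOIN INFORMATION_SCHEMA.TABLE_CONSTRAINTS tc
            ON tc.CONSTRAINT_NAME = k.CONSTRAINT_NAME
           AND tc.TABLE_NAME = k.TABLE_NAME
        WHERE k.COLUMN_NAME = 'tiEntityId'
          AND tc.CONSTRAINT_TYPE = 'PRIMARY KEY';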

  • Dataset holds a table called "Table", not the table I pass in?

    - by dotnetdev
    Hi, I have the code below:

        string SQL = "select * from " + TableName;

        using (DS = new DataSet())
        using (SqlDataAdapter adapter = new SqlDataAdapter())
        using (SqlConnection sqlconn = new SqlConnection(connectionStringBuilder.ToString()))
        using (SqlCommand objCommand = new SqlCommand(SQL, sqlconn))
        {
            sqlconn.Open();
            adapter.SelectCommand = objCommand;
            adapter.Fill(DS);
        }
        System.Windows.Forms.MessageBox.Show(DS.Tables[0].TableName);
        return DS;

    However, every time I run this code, the dataset (DS) is filled with one table called "Table". It does not carry the table name I pass in as the TableName parameter, and that parameter never gets mutated, so I don't know where the name "Table" comes from. I'd expect the table to be named after the tableName parameter I pass in. Any idea why this is not so? EDIT: Important fact: this code needs to return a DataSet, because I use the DataRelation object in another method that depends on this one, and without a DataSet that method throws an exception. The code for that method is:

        DataRelation PartToIntersection = new DataRelation("XYZ",
            this.LoadDataToTable(tableName).Tables[tableName].Columns[0],            // Treating the PartStat table as the parent - .N
            this.LoadDataToTable("PartProducts").Tables["PartProducts"].Columns[0]); // 1

        // PartsProducts (intersection) to ProductMaterial
        DataRelation ProductMaterialToIntersection = new DataRelation("",
            ds.Tables["ProductMaterial"].Columns[0],
            ds.Tables["PartsProducts"].Columns[1]);

    Thanks

  • Limit a user in SQL*Plus to a single record in a table

    - by BFK
    I have one employee table. This table has 5 columns (empname, empgsm, empsal, empaddr, empdep) and 10 records. I've created 10 users matching the empname column in the table. When a user logs in with his empname as username & password, he should be able to see only his own record from the table. E.g. Smith is an employee, and a user called smith was created; when this user is in session and types "SELECT * FROM Employee_table", he only gets the record that belongs to him, where empname is smith. How do I do this using privileges? Thanks in advance
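
    Object privileges alone stop at the table level, so the usual approach is to grant each user access to a view that filters on the USER pseudo-column, rather than granting anything on the table itself. A minimal sketch in Oracle SQL, assuming the empname values can be matched (case-insensitively) to the Oracle account names:

        CREATE OR REPLACE VIEW employee_self AS
            SELECT empname, empgsm, empsal, empaddr, empdep
            FROM employee_table
            WHERE UPPER(empname) = USER;   -- USER returns the logged-in account name, in uppercase

        GRANT SELECT ON employee_self TO smith;
        -- Repeat the grant for each employee account; grant nothing on employee_table.

    For stricter or more flexible row-level rules, Oracle's Virtual Private Database (fine-grained access control) is the heavier-duty alternative.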

  • Shrink a cell (in an absolutely-positioned ASP.NET table) to fit its contents?

    - by Giffyguy
    My webpage currently looks like this:

        <asp:Table runat="server"
                   style="position: absolute; left: 0%; top: 82%; right: 0%; bottom: 0%; width: 100%; height: 18%"
                   CellPadding="0" CellSpacing="0" GridLines="Both">
            <asp:TableRow>
                <asp:TableCell>
                    Content1
                </asp:TableCell>
                <asp:TableCell Width="2.5%">
                </asp:TableCell>
                <asp:TableCell>
                    Content2
                </asp:TableCell>
            </asp:TableRow>
        </asp:Table>

    But I need it to look like this: "Content1" is of unknown size, and the table will have to adjust to fit it in, but without taking any unnecessary space away from "Content2." I can't use "display: table" because it isn't supported in IE7 and such, so I'm pretty much stuck using a regular table element unless there is something better out there that is supported in older browsers. Does anyone know how this can be accomplished?

  • How to make the header row of a table (with horizontal and vertical scrollbars) fixed?

    - by understack
    I have this sample table, and I want to make the header row of the table visible all the time. The header row should scroll with the horizontal scrollbar but shouldn't scroll with the vertical scrollbar. The table:

        <div style="width:800px; height:150px; overflow:scroll; margin:50px auto;">
            <table style="width:1600px" border="1">
                <thead>
                    <tr>
                        <th style="width:800px">id_1</th>
                        <th style="width:800px">id_2</th>
                    </tr>
                </thead>
                <tbody>
                    <tr><td>1200</td><td>1200</td></tr>
                    <tr><td>1200</td><td>1200</td></tr>
                    <tr><td>1200</td><td>1200</td></tr>
                    <tr><td>1200</td><td>1200</td></tr>
                    <tr><td>1200</td><td>1200</td></tr>
                    <tr><td>1200</td><td>1200</td></tr>
                    <tr><td>1200</td><td>1200</td></tr>
                    <tr><td>1200</td><td>1200</td></tr>
                </tbody>
            </table>
        </div>

    How can I do this with CSS only? Suggestions in this and this thread didn't seem to work, possibly due to the presence of scrollbars.

  • Is it possible to inspect the contents of a Table Value Parameter via the debugger?

    - by Stephen Edmonds
    Does anyone know if it is possible to use the Visual Studio / SQL Server Management Studio debugger to inspect the contents of a table value parameter passed to a stored procedure? To give a trivial example:

        CREATE TYPE [dbo].[ControllerId] AS TABLE(
            [id] [nvarchar](max) NOT NULL
        )
        GO

        CREATE PROCEDURE [dbo].[test]
            @controllerData [dbo].[ControllerId] READONLY
        AS
        BEGIN
            SELECT COUNT(*) FROM @controllerData;
        END
        GO

        DECLARE @SampleData as [dbo].[ControllerId];
        INSERT INTO @SampleData ([id]) VALUES ('test'), ('test2');
        exec [dbo].[test] @SampleData;

    Using the above with a breakpoint on the exec statement, I am able to step into the stored procedure without any trouble. The debugger shows that the @controllerData local has a value of '(table)', but I have not found any tool that would allow me to actually view the rows that make up that table.
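
    As a workaround while the debugger only shows '(table)', the TVP's rows can be made visible from inside the procedure itself: return them as an extra result set, or copy them into a scratch table to inspect after the call. A sketch along those lines (the DebugControllerData table is invented for illustration):

        ALTER PROCEDURE [dbo].[test]
            @controllerData [dbo].[ControllerId] READONLY
        AS
        BEGIN
            -- Temporary debugging aid: surface the TVP contents as a result set...
            SELECT * FROM @controllerData;

            -- ...or persist them so they can be queried after execution.
            INSERT INTO dbo.DebugControllerData (id)
            SELECT id FROM @controllerData;

            SELECT COUNT(*) FROM @controllerData;
        END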

  • Oracle remains the market leader in databases in 2011

    - by Anne Manke
    With the publication of the latest edition of "Market Share: All Software Markets, Worldwide 2011", the world's leading market research firm Gartner confirms Oracle's market leadership in relational database management systems (RDBMS). Over the past year, Oracle even extended its lead over its competitors in the RDBMS market, with stable growth of 18%: its market share rose from 48.2% in 2010 to 48.8% in 2011. That puts the gap to Oracle's strongest pursuer, IBM, at 28.6 percentage points.

                     Revenue 2010  Revenue 2011  Growth  Growth  Share  Share
                     ($USM)        ($USM)        2010    2011    2010   2011
        Oracle        9,990.5      11,787.0      10.9%   18.0%   48.2%  48.8%
        IBM           4,300.4       4,870.4       5.4%   13.3%   20.7%  20.2%
        Microsoft     3,641.2       4,098.9      10.1%   12.6%   17.6%  17.0%
        SAP/Sybase      744.4       1,101.1      12.8%   47.9%    3.6%   4.6%
        Teradata        754.7         882.3      16.9%   16.9%    3.6%   3.7%

    Source: Gartner's "Market Share: All Software Markets, Worldwide 2011," March 29, 2012, by Colleen Graham, Joanne Correia, David Coyle, Fabrizio Biscotti, Matthew Cheung, Ruggero Contu, Yanna Dharmasthira, Tom Eid, Chad Eschinger, Bianca Granetto, Hai Hong Swinehart, Sharon Mertz, Chris Pang, Asheesh Raina, Dan Sommer, Bhavish Sood, Marianne D'Aquila, Laurie Wurster and Jie

  • How to add a footer to a table in Microsoft Word?

    - by dewalla
    I have a table that is longer than one page. I have found the option to make the header of the table repeat on the second portion of the table after the page break. Is there a way to do the same thing with a footer on the table? I want to add a footer so that if my table were 1000 entries long (12 pages), the first and last row on each page would be consistent: a header and footer for the table. If I edit the rest of the document (above the table), the table will shift up/down, and I want the header and footer of the table to remain at the page breaks. Any ideas?

        PAGE BREAK
        HEADER OF TABLE
        TBL  TBL  TBL
        TBL  TBL  TBL
        TBL  TBL  TBL
        TBL  TBL  TBL
        FOOTER OF TABLE
        PAGE BREAK
        HEADER OF TABLE
        TBL  TBL  TBL
        TBL  TBL  TBL
        FOOTER OF TABLE
        TEXT TEXT TEXT
        TEXT TEXT TEXT
        PAGE BREAK

  • Styling specific columns and rows

    - by hattenn
    I'm trying to style some specific parts of a 5x4 table that I create. It should be like this: every even-numbered row and every odd-numbered row get a different color, and text in the second, third, and fourth columns is centered. I have this table:

        <table>
            <caption>Some caption</caption>
            <colgroup>
                <col>
                <col class="value">
                <col class="value">
                <col class="value">
            </colgroup>
            <thead>
                <tr>
                    <th id="year">Year</th>
                    <th>1999</th>
                    <th>2000</th>
                    <th>2001</th>
                </tr>
            </thead>
            <tbody>
                <tr class="oddLine">
                    <td>Berlin</td><td>3,3</td><td>1,9</td><td>2,3</td>
                </tr>
                <tr class="evenLine">
                    <td>Hamburg</td><td>1,5</td><td>1,3</td><td>2,0</td>
                </tr>
                <tr class="oddLine">
                    <td>München</td><td>0,6</td><td>1,1</td><td>1,0</td>
                </tr>
                <tr class="evenLine">
                    <td>Frankfurt</td><td>1,3</td><td>1,6</td><td>1,9</td>
                </tr>
            </tbody>
            <tfoot>
                <tr class="oddLine">
                    <td>Total</td><td>6,7</td><td>5,9</td><td>7,2</td>
                </tr>
            </tfoot>
        </table>

    And I have this CSS file:

        table, th, td {
            border: 1px solid black;
            border-collapse: collapse;
            padding: 0px 5px;
        }

        #year {
            text-align: left;
        }

        .oddLine {
            background-color: #DDDDDD;
        }

        .evenLine {
            background-color: #BBBBBB;
        }

        .value {
            text-align: center;
        }

    And this doesn't work: the text in the columns is not centered. What is the problem here? And is there a way to solve it (other than changing the class of all the cells that I want centered)? P.S.: I think there's some interference with the .evenLine and .oddLine classes, because when I put "background: black" in the "value" class, it changes the background color of the columns in the first row. The thing is, if I delete those two classes, text-align still doesn't work, but the background attribute works perfectly. Argh...

  • Is SELECT INTO able to affect data from its original table during UPDATE?

    - by driveby
    Whilst asking this question (asp.net scheduling timed events), user murph posted some insightful information:

        Point about this is that it's very, very simple - you have a process for exchange that is performing a clearly defined task, and you have a high-frequency task that is not doing anything particularly complex: it's a straightforward query (select from table where sent = false and send at < value) - probably into a temporary table, so that you can run a single query update after you've done the sends - that you can optimise the index for. You're not trying to queue up a huge pile of event triggers, just one that fires once a minute and processes things that are due.

    Is it possible to SELECT data from table X INTO table Y and have the UPDATEs that are performed on table Y pushed into table X? I guess the alternative would be that the data gets updated in table Y, and then an UPDATE command is run on table X based on the data in table Y. What would be the advantage of selecting into another table? Thank you,
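
    For context, SELECT ... INTO only copies rows: the new table is an independent snapshot with no link back to the source, so updates to it never propagate automatically. The pattern murph describes is copy, process, then write back in one set-based UPDATE; a rough sketch with invented table and column names:

        -- Snapshot the due rows; #work has no live connection to the source.
        SELECT id, recipient
        INTO #work
        FROM dbo.scheduled_events
        WHERE sent = 0 AND send_at < GETDATE();

        -- ... process / send everything captured in #work ...

        -- Push the outcome back to the source table in a single UPDATE.
        UPDATE e
        SET e.sent = 1
        FROM dbo.scheduled_events AS e
        INNER JOIN #work AS w ON w.id = e.id;

    The advantage is that the busy source table is touched only twice: once for the read and once for one set-based update, instead of being locked row by row while each send happens.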

  • Insert Record by Drag & Drop from ADF Tree to ADF Tree Table

    - by arul.wilson(at)oracle.com
    If you want to create a record based on values dragged from an ADF Tree and dropped on an ADF Tree Table, then here you go.

    Use case description: the user drags a tree node from an ADF Tree and drops it on an ADF Tree Table node. A new row gets added to the Tree Table based on the source tree node; subsequently, a record gets added to the database table the Tree Table is based on. The following steps achieve this using ADF BC.

    1. Run DragDropSchema.sql to create the required tables.
    2. Create Business Components from the tables created above (PRODUCTS, COMPONENTS, SUB_COMPONENTS, USERS, USER_COMPONENTS).
    3. Add a custom method to the App Module Impl; this method will be used to insert the record from the view layer:

        public String createUserComponents(String p_bugdbId, String p_productId,
                                           String p_componentId, String p_subComponentId) {
            Row newUserComponentsRow = this.getUserComponentsView1().createRow();
            try {
                newUserComponentsRow.setAttribute("Bugdbid", p_bugdbId);
                newUserComponentsRow.setAttribute("ProductId", new oracle.jbo.domain.Number(p_productId));
                newUserComponentsRow.setAttribute("Component1", p_componentId);
                newUserComponentsRow.setAttribute("SubComponent", p_subComponentId);
            } catch (Exception e) {
                e.printStackTrace();
                return "Failure";
            }
            return "Success";
        }

    4. Expose this method to the client interface.
    5. To display the root node we need a custom VO, which can be achieved using the query below:

        SELECT Users.ACTIVE, Users.BUGDB_ID, Users.EMAIL, Users.FIRSTNAME,
               Users.GLOBAL_ID, Users.LASTNAME, Users.MANAGER_ID, Users.MANAGER_PRIVILEGE
        FROM USERS Users
        WHERE Users.MANAGER_ID IS NULL

    6. Create a view link between the UsersView and UsersRootNodeView VOs.
    7. Drop ProductsView from the Data Controls panel onto the jspx page as an ADF Tree, and add tree level rules based on ComponentsView and SubComponentsView.
    8. Drop UsersRootNodeView as an ADF Tree Table, and add tree level rules based on UserComponentsView and UsersView.
    9. Add a DragSource to the ADF Tree and a CollectionDropTarget to the ADF Tree Table.
    10. Bind the CollectionDropTarget's DropTarget to a backing bean and implement a method with the signature DnDAction (DropEvent). This method gets invoked when the Tree Table receives a drop; here the details required to create the new record are captured from the drag source and passed to the createUserComponents method:
        public DnDAction onTreeDrop(DropEvent dropEvent) {
            String newBugdbId = "";
            String msgtxt = "";
            try {
                // Getting the target node bugdb id
                Object serverRowKey = dropEvent.getDropSite();
                if (serverRowKey != null) {
                    // Code for Tree Table as target
                    String dropcomponent = dropEvent.getDropComponent().toString();
                    dropcomponent = (String)dropcomponent.subSequence(0, dropcomponent.indexOf("["));
                    if (dropcomponent.equals("RichTreeTable")) {
                        RichTreeTable richTreeTable = (RichTreeTable)dropEvent.getDropComponent();
                        richTreeTable.setRowKey(serverRowKey);
                        int rowIndexTreeTable = richTreeTable.getRowIndex();
                        // Drop target logic
                        if (((JUCtrlHierNodeBinding)richTreeTable.getRowData(rowIndexTreeTable)).getAttributeValue() == null) {
                            // Get parent
                            newBugdbId = (String)((JUCtrlHierNodeBinding)richTreeTable.getRowData(rowIndexTreeTable)).getParent().getAttributeValue();
                        } else {
                            if (isNum(((JUCtrlHierNodeBinding)richTreeTable.getRowData(rowIndexTreeTable)).getAttributeValue().toString())) {
                                // Get parent's parent
                                newBugdbId = (String)((JUCtrlHierNodeBinding)richTreeTable.getRowData(rowIndexTreeTable)).getParent().getParent().getAttributeValue();
                            } else {
                                // Dropped on USER
                                newBugdbId = (String)((JUCtrlHierNodeBinding)richTreeTable.getRowData(rowIndexTreeTable)).getAttributeValue();
                            }
                        }
                    }
                }

                DataFlavor<RowKeySet> df = DataFlavor.getDataFlavor(RowKeySet.class);
                RowKeySet droppedValue = dropEvent.getTransferable().getData(df);
                Object[] keys = droppedValue.toArray();
                Key componentKey = null;
                Key subComponentKey = null;

                // Binding for the createUserComponents method defined in the AppModuleImpl class to insert the record in the database.
                operationBinding = bindings.getOperationBinding("createUserComponents");

                // Get the Product, Component, SubComponent details and insert them into the UserComponents table.
                // Loop through the keys in case more than one component/subcomponent is selected.
                for (int i = 0; i < keys.length; i++) {
                    System.out.println("in for :" + i);
                    List list = (List)keys[i];
                    System.out.println("list " + i + " : " + list);
                    System.out.println("list size " + list.size());
                    if (list.size() == 1) {
                        // We cannot drag and drop the highest node!
                        msgtxt = "You cannot drop Products, please drop Component or SubComponent from the Tree.";
                        System.out.println(msgtxt);
                        this.showInfoMessage(msgtxt);
                    } else {
                        if (list.size() == 2) {
                            // We're on the first branch, in this case all components.
                            componentKey = (Key)list.get(1);
                            Object[] droppedProdCompValues = componentKey.getAttributeValues();
                            operationBinding.getParamsMap().put("p_bugdbId", newBugdbId);
                            operationBinding.getParamsMap().put("p_productId", droppedProdCompValues[0]);
                            operationBinding.getParamsMap().put("p_componentId", droppedProdCompValues[1]);
                            operationBinding.getParamsMap().put("p_subComponentId", "ALL");
                            Object result = operationBinding.execute();
                        } else {
                            subComponentKey = (Key)list.get(2);
                            Object[] droppedProdCompSubCompValues = subComponentKey.getAttributeValues();
                            operationBinding.getParamsMap().put("p_bugdbId", newBugdbId);
                            operationBinding.getParamsMap().put("p_productId", droppedProdCompSubCompValues[0]);
                            operationBinding.getParamsMap().put("p_componentId", droppedProdCompSubCompValues[1]);
                            operationBinding.getParamsMap().put("p_subComponentId", droppedProdCompSubCompValues[2]);
                            Object result = operationBinding.execute();
                        }
                    }
                }

                /* this.getCil1().setDisabled(false);
                   this.getCil1().setPartialSubmit(true); */
                return DnDAction.MOVE;
            } catch (Exception ex) {
                System.out.println("drop failed with : " + ex.getMessage());
                ex.printStackTrace();
                /* this.getCil1().setDisabled(true); */
                return DnDAction.NONE;
            }
        }

    Run the jspx page and drop a Component or SubComponent from the Products Tree onto the UserComponents Tree Table.

  • Problem creating a database with PHP PDO

    - by Leandro Alonso
    Hello guys, I'm having a problem with a SQL query in my PHP application. When the user accesses it for the first time, the app executes this query to create the whole database:

        CREATE TABLE `databases` (
          `id` bigint(20) NOT NULL auto_increment,
          `driver` varchar(45) NOT NULL,
          `server` text NOT NULL,
          `user` text NOT NULL,
          `password` text NOT NULL,
          `database` varchar(200) NOT NULL,
          PRIMARY KEY (`id`)
        ) ENGINE=InnoDB DEFAULT CHARSET=latin1 AUTO_INCREMENT=2;

        -- Table structure for table `modules`

        CREATE TABLE `modules` (
          `id` bigint(20) unsigned NOT NULL auto_increment,
          `title` varchar(100) NOT NULL,
          `type` varchar(150) NOT NULL,
          PRIMARY KEY (`id`)
        ) ENGINE=InnoDB DEFAULT CHARSET=latin1 AUTO_INCREMENT=29;

        -- Table structure for table `modules_data`

        CREATE TABLE `modules_data` (
          `id` bigint(20) NOT NULL auto_increment,
          `module_id` bigint(20) unsigned NOT NULL,
          `key` varchar(150) NOT NULL,
          `value` tinytext,
          PRIMARY KEY (`id`),
          KEY `fk_modules_data_modules` (`module_id`)
        ) ENGINE=InnoDB DEFAULT CHARSET=latin1 AUTO_INCREMENT=184;

        -- Table structure for table `modules_position`

        CREATE TABLE `modules_position` (
          `user_id` bigint(20) unsigned NOT NULL,
          `tab_id` bigint(20) unsigned NOT NULL,
          `module_id` bigint(20) unsigned NOT NULL,
          `column` smallint(1) default NULL,
          `line` smallint(1) default NULL,
          PRIMARY KEY (`user_id`,`tab_id`,`module_id`),
          KEY `fk_modules_order_users` (`user_id`),
          KEY `fk_modules_order_tabs` (`tab_id`),
          KEY `fk_modules_order_modules` (`module_id`)
        ) ENGINE=InnoDB DEFAULT CHARSET=latin1;

        -- Table structure for table `tabs`

        CREATE TABLE `tabs` (
          `id` bigint(20) unsigned NOT NULL auto_increment,
          `title` varchar(60) NOT NULL,
          `columns` smallint(1) NOT NULL,
          PRIMARY KEY (`id`)
        ) ENGINE=InnoDB DEFAULT CHARSET=latin1 AUTO_INCREMENT=12;

        -- Table structure for table `tabs_has_modules`

        CREATE TABLE `tabs_has_modules` (
          `tab_id` bigint(20) unsigned NOT NULL,
          `module_id` bigint(20) unsigned NOT NULL,
          PRIMARY KEY (`tab_id`,`module_id`),
          KEY `fk_tabs_has_modules_tabs` (`tab_id`),
          KEY `fk_tabs_has_modules_modules` (`module_id`)
        ) ENGINE=InnoDB DEFAULT CHARSET=latin1;

        -- Table structure for table `users`

        CREATE TABLE `users` (
          `id` bigint(20) unsigned NOT NULL auto_increment,
          `login` varchar(60) NOT NULL,
          `password` varchar(64) NOT NULL,
          `email` varchar(100) NOT NULL,
          `name` varchar(250) default NULL,
          `user_level` bigint(20) unsigned NOT NULL,
          PRIMARY KEY (`id`),
          KEY `fk_users_user_levels` (`user_level`)
        ) ENGINE=InnoDB DEFAULT CHARSET=latin1 AUTO_INCREMENT=4;

        -- Table structure for table `users_has_tabs`

        CREATE TABLE `users_has_tabs` (
          `user_id` bigint(20) unsigned NOT NULL,
          `tab_id` bigint(20) unsigned NOT NULL,
          `order` smallint(2) NOT NULL,
          `columns_width` varchar(255) default NULL,
          PRIMARY KEY (`user_id`,`tab_id`),
          KEY `fk_users_has_tabs_users` (`user_id`),
          KEY `fk_users_has_tabs_tabs` (`tab_id`)
        ) ENGINE=InnoDB DEFAULT CHARSET=latin1;

        -- Table structure for table `user_levels`

        CREATE TABLE `user_levels` (
          `id` bigint(20) unsigned NOT NULL auto_increment,
          `level` smallint(2) NOT NULL,
          PRIMARY KEY (`id`)
        ) ENGINE=InnoDB DEFAULT CHARSET=latin1 AUTO_INCREMENT=3;

        -- Table structure for table `user_meta`

        CREATE TABLE `user_meta` (
          `id` bigint(20) unsigned NOT NULL auto_increment,
          `user_id` bigint(20) unsigned default NULL,
          `key` varchar(150) NOT NULL,
          `value` longtext NOT NULL,
          PRIMARY KEY (`id`),
          KEY `fk_user_meta_users` (`user_id`)
        ) ENGINE=InnoDB DEFAULT CHARSET=latin1 AUTO_INCREMENT=4;

        -- Constraints for dumped tables

        ALTER TABLE `modules_data`
          ADD CONSTRAINT `fk_modules_data_modules` FOREIGN KEY (`module_id`) REFERENCES `modules` (`id`) ON DELETE CASCADE ON UPDATE NO ACTION;

        ALTER TABLE `modules_position`
          ADD CONSTRAINT `fk_modules_order_modules` FOREIGN KEY (`module_id`) REFERENCES `modules` (`id`) ON DELETE CASCADE ON UPDATE NO ACTION,
          ADD CONSTRAINT `fk_modules_order_tabs` FOREIGN KEY (`tab_id`) REFERENCES `tabs` (`id`) ON DELETE CASCADE ON UPDATE NO ACTION,
          ADD CONSTRAINT `fk_modules_order_users` FOREIGN KEY (`user_id`) REFERENCES `users` (`id`) ON DELETE CASCADE ON UPDATE NO ACTION;

        ALTER TABLE `users`
          ADD CONSTRAINT `fk_users_user_levels` FOREIGN KEY (`user_level`) REFERENCES `user_levels` (`id`) ON DELETE NO ACTION ON UPDATE NO ACTION;

        ALTER TABLE `user_meta`
          ADD CONSTRAINT `fk_user_meta_users` FOREIGN KEY (`user_id`) REFERENCES `users` (`id`) ON DELETE CASCADE ON UPDATE NO ACTION;

        INSERT INTO `user_levels` VALUES(1, 10);
        INSERT INTO `user_levels` VALUES(2, 1);
        INSERT INTO `users` VALUES(1, 'admin', 'password', '[email protected]', NULL, 1);
        INSERT INTO `user_meta` VALUES (NULL, 1, 'last_tab', 1);

    In some environments I get this error:

        SQLSTATE[HY000]: General error: 1005 Can't create table 'dms.databases' (errno: 150)

    I tried everything I could find on Google, but nothing works. The strange part is that if I run this query in phpMyAdmin, it creates my database without any error.
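
    For reference, MySQL's errno 150 means a foreign key definition was rejected: typically the referenced column has a different type (signed vs. unsigned included), lacks an index, or the referenced table doesn't exist yet when the constraint is created. InnoDB will report the exact constraint it refused if you ask it right after the failure:

        SHOW ENGINE INNODB STATUS;
        -- Look for the "LATEST FOREIGN KEY ERROR" section in the output;
        -- it names the failing constraint and the mismatched columns.

    That output usually makes it obvious which of the ALTER TABLE ... ADD CONSTRAINT statements is the one to fix.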

  • How does the number of indexes built on a table impact performance?

    - by Davide Mauri
    We all know that putting too many indexes (I'm talking about non-clustered indexes only, of course) on a table may produce performance problems, due to the overhead that each index brings to all insert/update/delete operations on that table. But how much? I mean, we all agree - I think - that, generally speaking, having many indexes on a table is "bad". But how bad can it be? How much will performance degrade? And on a concurrent system, how much can this situation also hurt SELECT performance? If SQL Server takes more time to update a row in a table due to the number of indexes it also has to update, locks will be held longer, slowing down the perceived performance of all queries involved.

    I was quite curious to measure this, also because when teaching, it's by far more impressive and effective to show attendees a chart with the measured impact, so that they can really "feel" what it means!

    To run the tests, I've created a script that creates a table (with a clustered index on the primary key, which is an identity column), loads 1,000 rows into it (using a single insert of 1,000 rows instead of 1,000 one-row inserts, in order to minimize the transaction-handling overhead that separate inserts would otherwise add), and measures the time taken to do it. The process is then repeated 16 times, each time adding a new index on the table, using columns from the table in a round-robin fashion. Tests are run against different row sizes, so that it's possible to check whether performance changes depending on row size.

    The results are interesting, although expected. The chart shows how much time it takes to insert 1,000 rows into a table that has from 0 to 16 non-clustered indexes. Each test was run 20 times in order to have an average value, cleaned of outliers caused by unpredictable machine activity. The test shows that in a table with a row size of 80 bytes, 1,000 rows can be inserted in 9.05 msec if no indexes are present on the table, and the value grows up to 88 (!!!) msec when you have 16 indexes on it. That means a 975% impact on performance. That's *huge*!

    Now, what happens if we have a bigger row size? Say we have a table with a row size of 1,520 bytes. Here's the data, from 0 to 16 indexes on that table: in this case we need nearly 22 msec to insert 1,000 rows into a table with no indexes, but more than 500 msec if the table has 16 active indexes! Now we're talking about a 2,410% impact on performance!

    Now we have a tangible idea of the impact of having (too?) many indexes on a table, and also of how row size affects performance. That's why the golden rule of OLTP databases - "few indexes, but good" - is so true! (And in fact, last week I saw a database with tables of 1,700-byte row size and 23 (!!!) indexes on them!) This also means that a too-heavy denormalization is really not a good idea (we're always talking about OLTP systems, keep that in mind), since performance gets worse as row size increases. So be careful out there, and keep in mind that "equilibrium" is the key word for a database professional: equilibrium between read and write performance, between normalization and denormalization, between too few and too many indexes.

    PS: Tests were done on a VMware Workstation 7 VM with 2 CPUs and 4 GB of memory. The host machine is a Dell Precision M6500 with an i7 Extreme X920 quad-core HT 2.0 GHz and 16 GB of RAM. The database is stored on an Intel X-25E SSD, Simple recovery model, running on SQL Server 2008 R2. If you want to run the tests on your own, you can download the test script here: Open TestIndexPerformance.sql
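
    The original script is linked above; purely as an illustration of the shape of such a harness (simplified, and not the author's actual code), a test along these lines adds one index per round and times a single 1,000-row insert:

        -- Simplified sketch: measure insert time as nonclustered indexes accumulate.
        CREATE TABLE dbo.IndexTest (
            id  INT IDENTITY PRIMARY KEY,       -- clustered index on the identity PK
            c01 CHAR(20) DEFAULT '', c02 CHAR(20) DEFAULT '',
            c03 CHAR(20) DEFAULT '', c04 CHAR(20) DEFAULT ''   -- pad columns set the row size
        );
        GO
        DECLARE @i INT = 0, @sql NVARCHAR(200), @t0 DATETIME2;
        WHILE @i <= 4
        BEGIN
            IF @i > 0
            BEGIN
                -- Add one more nonclustered index each round (round-robin over the columns)
                SET @sql = N'CREATE INDEX ix_' + CAST(@i AS NVARCHAR(2))
                         + N' ON dbo.IndexTest (c0' + CAST(@i AS NVARCHAR(2)) + N')';
                EXEC (@sql);
            END;
            SET @t0 = SYSDATETIME();
            INSERT INTO dbo.IndexTest (c01, c02, c03, c04)
            SELECT TOP (1000) 'x', 'x', 'x', 'x' FROM sys.all_objects;   -- one 1,000-row insert
            PRINT CAST(@i AS VARCHAR(2)) + ' index(es): '
                + CAST(DATEDIFF(MICROSECOND, @t0, SYSDATETIME()) / 1000.0 AS VARCHAR(20)) + ' ms';
            SET @i += 1;
        END;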

  • Database - Designing an "Events" Table

    - by Alix Axel
    After reading the tips from this great Nettuts+ article, I've come up with a table schema that would separate highly volatile data from tables subjected to heavy reads, and at the same time lower the number of tables needed in the whole database schema. However, I'm not sure this is a good idea, since it doesn't follow the rules of normalization, and I would like to hear your advice. Here is the general idea:

    I have four types of users modeled in a Class Table Inheritance structure. In the main "user" table I store data common to all the users (id, username, password, several flags, ...) along with some TIMESTAMP fields (date_created, date_updated, date_activated, date_lastLogin, ...). To quote tip #16 from the Nettuts+ article mentioned above:

        Example 2: You have a "last_login" field in your table. It updates every time a user logs in to the website. But every update on a table causes the query cache for that table to be flushed. You can put that field into another table to keep updates to your users table to a minimum.

    Now it gets even trickier. I need to keep track of some user statistics, like

    - how many unique times a user profile was seen
    - how many unique times an ad from a specific type of user was clicked
    - how many unique times a post from a specific type of user was seen

    and so on. In my fully normalized database this adds up to about 8 to 10 additional tables. It's not a lot, but I would like to keep things simple if I could, so I've come up with the following "events" table:

        |------|----------|-----------|--------------|-----------|
        | ID   | TABLE    | EVENT     | DATE         | IP        |
        |------|----------|-----------|--------------|-----------|
        | 1    | user     | login     | 201004190030 | 127.0.0.1 |
        | 1    | user     | login     | 201004190230 | 127.0.0.1 |
        | 2    | user     | created   | 201004190031 | 127.0.0.2 |
        | 2    | user     | activated | 201004190234 | 127.0.0.2 |
        | 2    | user     | approved  | 201004190930 | 217.0.0.1 |
        | 2    | user     | login     | 201004191200 | 127.0.0.2 |
        | 15   | user_ads | created   | 201004191230 | 127.0.0.1 |
        | 15   | user_ads | impressed | 201004191231 | 127.0.0.2 |
        | 15   | user_ads | clicked   | 201004191231 | 127.0.0.2 |
        | 15   | user_ads | clicked   | 201004191231 | 127.0.0.2 |
        | 15   | user_ads | clicked   | 201004191231 | 127.0.0.2 |
        | 15   | user_ads | clicked   | 201004191231 | 127.0.0.2 |
        | 15   | user_ads | clicked   | 201004191231 | 127.0.0.2 |
        | 2    | user     | blocked   | 201004200319 | 217.0.0.1 |
        | 2    | user     | deleted   | 201004200320 | 217.0.0.1 |
        |------|----------|-----------|--------------|-----------|

    Basically the ID refers to the primary key (id) field in the TABLE table; I believe the rest should be pretty straightforward. One thing that I've come to like in this design is that I can keep track of all the user logins instead of just the last one, and thus generate some interesting metrics with that data. Due to the nature of the events table I also thought of making some optimizations, such as:

    - #9: Since there is only a finite number of tables and a finite (and predetermined) number of events, the TABLE and EVENT columns could be set up as ENUMs instead of VARCHARs to save some space.
    - #14: Store IPs as UNSIGNED INTs with INET_ATON() instead of VARCHARs.
    - Store DATEs as TIMESTAMPs instead of DATETIMEs.
    - Use the ARCHIVE (or the CSV?) engine instead of InnoDB / MyISAM.

    Overall, each event would only consume 14 bytes, which is okay for my traffic I guess.

    Pros:

    - Ability to store more detailed data (such as logins).
    - No need to design (and code for) almost a dozen additional tables (dates and statistics).
    - Reduces a few columns per table and keeps volatile data separated.

    Cons:

    - Non-relational (still not as bad as EAV): SELECT * FROM events WHERE id = 2 AND table = 'user' ORDER BY date DESC;
    - 6 bytes of overhead per event (ID, TABLE and EVENT).

    I'm more inclined to go with this approach since the pros seem to far outweigh the cons, but I'm still a little bit reluctant... Am I missing something? What are your thoughts on this? Thanks!
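
    As a sketch of what the proposed table might look like with those optimizations applied (MySQL syntax; only the column and value names come from the question, the rest is assumption):

        CREATE TABLE events (
            id      INT UNSIGNED NOT NULL,              -- PK value in the referenced table
            `table` ENUM('user', 'user_ads') NOT NULL,  -- finite set of tables (tip #9)
            event   ENUM('login', 'created', 'activated', 'approved',
                         'impressed', 'clicked', 'blocked', 'deleted') NOT NULL,
            `date`  TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP,
            ip      INT UNSIGNED NOT NULL,              -- INET_ATON('127.0.0.2') (tip #14)
            KEY idx_lookup (id, `table`, `date`)
        ) ENGINE=InnoDB;

        INSERT INTO events (id, `table`, event, ip)
        VALUES (2, 'user', 'login', INET_ATON('127.0.0.2'));

    One caveat on the ARCHIVE engine mentioned in the question: it compresses rows very well but supports no secondary indexes, so the id + table lookup in the "Cons" query would always scan; InnoDB with a composite key is the safer default if that query matters.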

  • Normalizing a table

    - by Alex
    I have a legacy table which I can't change. The values in it can be modified from a legacy application (the application also can't be changed). Due to a lot of access to the table from a new application (a new requirement), I'd like to create a temporary table which would hopefully speed up the queries. The actual requirement is to calculate the number of business days from X to Y - for example, give me all business days from Jan 1st 2001 until Dec 24th 2004. (The table is used to mark which days are off; different companies may have different days off, so it isn't just Saturday + Sunday.) The temporary table would be created from a .NET program each time the user enters the screen for this query (the user may run the query multiple times with different values; the table is created once), so I'd like it to be as fast as possible. The approach below runs in under a second, but I only tested it with a small dataset, and it still takes probably close to half a second, which isn't great for UI - even though it's just the overhead for the first query. The legacy table looks like this:

        CREATE TABLE [business_days](
            [country_code] [char](3),
            [state_code] [varchar](4),
            [calendar_year] [int],
            [calendar_month] [varchar](31),
            [calendar_month2] [varchar](31),
            [calendar_month3] [varchar](31),
            [calendar_month4] [varchar](31),
            [calendar_month5] [varchar](31),
            [calendar_month6] [varchar](31),
            [calendar_month7] [varchar](31),
            [calendar_month8] [varchar](31),
            [calendar_month9] [varchar](31),
            [calendar_month10] [varchar](31),
            [calendar_month11] [varchar](31),
            [calendar_month12] [varchar](31),
            misc.
        )

    Each month has 31 characters, and any day off (Saturday + Sunday + holiday) is marked with an X. Each half day is marked with an H. For example, if a month starts on a Thursday, it will look like (Thursday + Friday workdays, Saturday + Sunday marked with X): ' XX XX ..'. I'd like the new table to look like this:

        create table #Temp (country varchar(3), state varchar(4), date datetime, hours int)

    And I'd like it to only have rows for days which are off (marked with X or H in the previous format). What I ended up doing so far is this: create a temporary intermediate table that looks like this:

        create table #Temp_2 (
            country_code varchar(3),
            state_code varchar(4),
            calendar_year int,
            calendar_month varchar(31),
            month_code int
        )

    To populate it, I have a UNION which basically unions calendar_month, calendar_month2, calendar_month3, etc. Then I have a loop which goes through all the rows in #Temp_2; after each row is processed, it is removed from #Temp_2. To process a row, there is a loop from 1 to 31, and substring(calendar_month, counter, 1) is checked for either X or H, in which case there is an insert into the #Temp table. [edit: added code]

        Declare @country_code char(3)
        Declare @state_code varchar(4)
        Declare @calendar_year int
        Declare @calendar_month varchar(31)
        Declare @month_code int
        Declare @calendar_date datetime
        Declare @day_code int

        WHILE EXISTS (SELECT * FROM #Temp_2) -- where processed = 0
        BEGIN
            SELECT TOP 1
                @country_code = t2.country_code,
                @state_code = t2.state_code,
                @calendar_year = t2.calendar_year,
                @calendar_month = t2.calendar_month,
                @month_code = t2.month_code
            FROM #Temp_2 t2 -- where processed = 0

            SET @day_code = 1
            WHILE @day_code <= 31
            BEGIN
                IF substring(@calendar_month, @day_code, 1) = 'X'
                BEGIN
                    SET @calendar_date = convert(datetime, (cast(@month_code as varchar) + '/' + cast(@day_code as varchar) + '/' + cast(@calendar_year as varchar)))
                    INSERT INTO #Temp (country, state, date, hours)
                    VALUES (@country_code, @state_code, @calendar_date, 8)
                END
                IF substring(@calendar_month, @day_code, 1) = 'H'
                BEGIN
                    SET @calendar_date = convert(datetime, (cast(@month_code as varchar) + '/' + cast(@day_code as varchar) + '/' + cast(@calendar_year as varchar)))
                    INSERT INTO #Temp (country, state, date, hours)
                    VALUES (@country_code, @state_code, @calendar_date, 4)
                END
                SET @day_code = @day_code + 1
            END

            DELETE FROM #Temp_2
            WHERE @country_code = country_code
              AND @state_code = state_code
              AND @calendar_year = calendar_year
              AND @calendar_month = calendar_month
              AND @month_code = month_code

            --update #Temp_2 set processed = 1 where @country_code = country_code AND @state_code = state_code AND @calendar_year = calendar_year AND @calendar_month = calendar_month AND @month_code = month_code
        END

    I am not an expert in SQL, so I'd like to get some input on my approach, and maybe even a much better suggestion. After building the temp table, I'm planning to do this (the dates would be coming from a table):

        select cast(convert(datetime, ('01/31/2012'), 101)
                  - convert(datetime, ('01/17/2012'), 101) as int)
             - ((select sum(hours) from #Temp
                 where date between convert(datetime, ('01/17/2012'), 101)
                                and convert(datetime, ('01/31/2012'), 101)) / 8)

    Besides the solution of normalizing the table, the other solution I implemented for now is a function which does all this logic of getting the business days by scanning the current table. It runs pretty fast, but I'm hesitant to call a function if I can instead add a simpler query to get the result. (I'm currently trying this on MSSQL, but I will need to do the same for Sybase ASE and Oracle.)
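
    One set-based alternative to the nested WHILE loops: join #Temp_2 to a 1-to-31 numbers source, so every day position of every row is unpivoted in a single INSERT. A sketch, untested, reusing the temp tables above:

        INSERT INTO #Temp (country, state, date, hours)
        SELECT t2.country_code,
               t2.state_code,
               CONVERT(datetime, CAST(t2.month_code AS varchar) + '/'
                               + CAST(n.number AS varchar) + '/'
                               + CAST(t2.calendar_year AS varchar)),
               CASE SUBSTRING(t2.calendar_month, n.number, 1)
                    WHEN 'X' THEN 8 ELSE 4 END        -- 'H' (half day) gets 4 hours
        FROM #Temp_2 t2
        JOIN master..spt_values n
          ON n.type = 'P' AND n.number BETWEEN 1 AND 31
        WHERE SUBSTRING(t2.calendar_month, n.number, 1) IN ('X', 'H');

    master..spt_values is an undocumented but long-standing SQL Server numbers source; since the same logic is needed on Sybase ASE and Oracle, a small dedicated numbers table (1..31) is the portable replacement.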

  • On the asp:Table control, how do we create a thead?

    - by balexandre
    From the MSDN article on the subject, we can see that we create a TableHeaderRow that contains TableHeaderCells, but they add the table header like this:

        myTable.Rows.AddAt(0, headerRow);

    which outputs the HTML:

        <table id="Table1" ... >
            <tr>
                <th scope="column" abbr="Col 1 Head">Column 1 Header</th>
                <th scope="column" abbr="Col 2 Head">Column 2 Header</th>
                <th scope="column" abbr="Col 3 Head">Column 3 Header</th>
            </tr>
            <tr>
                <td>(0,0)</td>
                <td>(0,1)</td>
                <td>(0,2)</td>
            </tr>
            ...

    when it should have <thead> and <tbody> (so it works seamlessly with tablesorter):

        <table id="Table1" ... >
            <thead>
                <tr>
                    <th scope="column" abbr="Col 1 Head">Column 1 Header</th>
                    <th scope="column" abbr="Col 2 Head">Column 2 Header</th>
                    <th scope="column" abbr="Col 3 Head">Column 3 Header</th>
                </tr>
            </thead>
            <tbody>
                <tr>
                    <td>(0,0)</td>
                    <td>(0,1)</td>
                    <td>(0,2)</td>
                </tr>
                ...
            </tbody>

    The aspx markup is:

        <asp:Table ID="Table1" runat="server" />

    How can I output the correct syntax? Just as information, the GridView control has this built in; we just need to set the accessibility properties and use the HeaderRow:

        gv.UseAccessibleHeader = true;
        gv.HeaderRow.TableSection = TableRowSection.TableHeader;
        gv.HeaderRow.CssClass = "myclass";

    But the question is for the Table control.

  • css display:table first column too wide

    - by Mestore
    I have a CSS table set up like this:

        <div class='table'>
            <div>
                <span>name</span>
                <span>details</span>
            </div>
        </div>

    The CSS for the table is:

        .table {
            display: table;
            width: 100%;
        }

        .table div {
            text-align: right;
            display: table-row;
            border-collapse: separate;
            border-spacing: 0px;
        }

        .table div span:first-child {
            text-align: right;
        }

        .table div span {
            vertical-align: top;
            text-align: left;
            display: table-cell;
            padding: 2px 10px;
        }

    As it stands, the two columns split the table's width evenly between them. I'm trying to get the first column to be only as wide as the text occupying its cells requires. The table is an unknown width, as are the columns/cells.

  • Entire Table is pushed to the next page when rendering a SSRS 2005 Report (as .pdf) in SSRS 2008

    - by Pwninstein
    I have a SSRS 2005 report that I'm rendering in SSRS 2008 as a .pdf. The report contains (among other things) a table that's very simple: header row, details, no footer, no aggregation, no grouping, keep together = false, pageBreakAtStart = false, pageBreakAtEnd = false, repeatHeaderOnNewPage = true. I resized the table to be much narrower than the body of the report, just to be sure it wasn't extending beyond the bounds of the report and pushing everything down. But no matter what I try, if some of the detail rows in that table would need to be pushed to the next page, then the ENTIRE TABLE is pushed to the next page, not just the extra rows. So my question is: is there a workaround for this problem, is this a known issue, or is it even possible to get this 2005 report to render properly in 2008? NOTE: this is related to a question that I previously asked here, and is based on this MSDN forum post started by a coworker. This question is not the same as my previous question, as I'd like to see things work properly with a 2005 report. If that's not possible, it would be good to know, as it would indicate that we need to upgrade one of our servers to SQL 2008. Thanks!
