Search Results

Search found 10424 results on 417 pages for 'persisted column'.

  • Two column layout, navigation div on the right, solution from previous thread didn't seem to work

    - by Tom
    I tried the solution from this thread, but I must be missing something because it doesn't work:

        <div style="float:left;margin-right:200px">
            <p>Lorem ipsum dolor sit amet, consectetur adipisicing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua. Ut enim ad minim veniam, quis nostrud exercitation ullamco laboris nisi ut aliquip ex ea commodo consequat. Duis aute irure dolor in reprehenderit in voluptate velit esse cillum dolore eu fugiat nulla pariatur. Excepteur sint occaecat cupidatat non proident, sunt in culpa qui officia deserunt mollit anim id est laborum.</p>
        </div>
        <div style="float:right;width:200px">
            <p>navigation</p>
        </div>

    It works when the text in the content div (the left one) is short, but when it is long the div takes up the whole width of the browser; the margin is still there, but the right div is nevertheless pushed below the first one. What am I missing? Edit: The goal is to have a fixed-size navigation column on the right of the browser window, with the left div getting all the space left over by the navigation column (a liquid layout).
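
    A sketch of one common fix, keeping the 200px width from the question: a left-floated block with no width expands to the full browser width, so instead put the fixed-width, right-floated navigation first in the source and leave the content div unfloated, with a right margin that clears the navigation column.

        <div style="float:right;width:200px">
            <p>navigation</p>
        </div>
        <!-- Not floated: the block keeps a 220px right margin, leaving room for the nav -->
        <div style="margin-right:220px">
            <p>content goes here and wraps to whatever width is left</p>
        </div>

    The 220px margin is simply the 200px column plus a 20px gutter; any value of at least 200px keeps the two columns from overlapping.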

  • Use Javascript RegEx to extract column names from SQLite Create Table SQL

    - by NimbusSoftware
    I'm trying to extract column names from a SQLite result set, from sqlite_master's sql column. I get hosed up in the regular expressions in the match() and split() functions.

        t1.executeSql('SELECT name, sql FROM sqlite_master WHERE type="table" and name!="__WebKitDatabaseInfoTable__";', [],
            function(t1, result) {
                for(i = 0; i < result.rows.length; i++){
                    var tbl = result.rows.item(i).name;
                    var dbSchema = result.rows.item(i).sql;
                    // errors out on next line
                    var columns = dbSchema.match(/.*CREATE\s+TABLE\s+(\S+)\s+\((.*)\).*/)[2].split(/\s+[^,]+,?\s*/);
                }
            },
            function(){console.log('err1');}
        );

    I want to parse SQL statements like these...

        CREATE TABLE sqlite_sequence(name,seq);
        CREATE TABLE tblConfig (Key TEXT NOT NULL,Value TEXT NOT NULL);
        CREATE TABLE tblIcon (IconID INTEGER NOT NULL PRIMARY KEY,png TEXT NOT NULL,img32 TEXT NOT NULL,img64 TEXT NOT NULL,Version TEXT NOT NULL)

    ...into strings like these...

        name,seq
        Key,Value
        IconID,png,img32,img64,Version

    Any help with a RegEx would be greatly appreciated.
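
    A minimal sketch of one way to do it, assuming only simple CREATE TABLE statements like the samples above (no nested parentheses such as VARCHAR(20) and no table-level constraints): capture everything between the outer parentheses, split on commas, and keep the first token of each column definition.

        // Hypothetical helper; not part of the Web SQL API.
        function columnNames(createSql) {
            // Keep only the text between the first "(" and the last ")".
            var match = createSql.match(/\(([\s\S]*)\)/);
            if (!match) return [];
            var defs = match[1].split(',');
            var names = [];
            for (var i = 0; i < defs.length; i++) {
                // The column name is the first whitespace-delimited token of each definition.
                names.push(defs[i].replace(/^\s+/, '').split(/\s+/)[0]);
            }
            return names;
        }

        // columnNames('CREATE TABLE tblConfig (Key TEXT NOT NULL,Value TEXT NOT NULL)')
        // returns ["Key", "Value"]; join with "," for the comma-separated form.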

  • How to determine one to many column name from entity type

    - by snicker
    I need a way to determine, from the collected entity's type, the name of the column used by NHibernate to join one-to-many collections. I need to be able to determine this at runtime. Here is an example. I have some entities:

        namespace Entities
        {
            public class Stable
            {
                public virtual int Id { get; set; }
                public virtual string StableName { get; set; }
                public virtual IList<Pony> Ponies { get; set; }
            }

            public class Dude
            {
                public virtual int Id { get; set; }
                public virtual string DudesName { get; set; }
                public virtual IList<Pony> PoniesThatBelongToDude { get; set; }
            }

            public class Pony
            {
                public virtual int Id { get; set; }
                public virtual string Name { get; set; }
                public virtual string Color { get; set; }
            }
        }

    I am using NHibernate to generate the database schema, which comes out looking like this:

        create table "Stable" (Id integer, StableName TEXT, primary key (Id))
        create table "Dude" (Id integer, DudesName TEXT, primary key (Id))
        create table "Pony" (Id integer, Name TEXT, Color TEXT, Stable_id INTEGER, Dude_id INTEGER, primary key (Id))

    Given that I have a Pony entity in my code, I need to be able to find out: A. Does Pony even belong to a collection in the mapping? B. If it does, what are the column names in the database table that pertain to collections? In the above instance, I would like to see that Pony has two collection columns, Stable_id and Dude_id.

  • A GUID as the MySQL table's Primary Key or as a separate column

    - by Ben
    I have a multi-process program that performs, in a 2-hour period, 5-10 million inserts into a 34GB table within a single Master/Slave MySQL setup (plus an equal number of reads in that period). The table in question has only 5 fields and 3 (single-field) indexes. The primary key is auto-incrementing. I am far from a DBA, but the database appears to be crippled during this two-hour period. So, I have a couple of general questions.

    1) How much bang will I get out of batching these writes into units of 10? Currently I am writing each insert serially because, after writing, I immediately need to know, in my program, the resulting primary key of each insert. The PK is the only unique field at present, and approximating the order of insertion with something like a datetime field or a multi-column value is not acceptable. If I perform a bulk insert, I won't know these IDs, which is a problem.

    2) So I've been thinking about turning the auto-increment primary key into a GUID and enforcing uniqueness. I've also been kicking around the idea of creating a new column just for the GUID, but I don't really see what that achieves that the PK approach doesn't already offer. As far as I can tell, the big downside to making the PK a randomly generated number is that the index would take a long time to update on each insert (since insertion order would not be sequential). Is that an acceptable approach for a table that is taking this number of writes?

    Thanks, Ben
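
    A minimal sketch of the client-generated-key idea (the table and values below are hypothetical, not from the question): if the application generates the UUIDs itself, it already knows every key before the batch insert, so nothing has to be read back. Storing them as BINARY(16) keeps the index smaller than CHAR(36), though random keys still scatter index updates compared with an auto-increment.

        -- Hypothetical table with a client-generated GUID key
        CREATE TABLE events (
            id      BINARY(16)   NOT NULL PRIMARY KEY,
            payload VARCHAR(255) NOT NULL
        );

        -- Batch insert: the UUIDs were generated in the application,
        -- so their values are known without consulting the database.
        INSERT INTO events (id, payload) VALUES
            (UNHEX(REPLACE('3f06af63-a93c-11e4-9797-00505690773f', '-', '')), 'first'),
            (UNHEX(REPLACE('b8c0bd92-a93c-11e4-9797-00505690773f', '-', '')), 'second');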

  • Replace some column values depending on a condition, MATLAB

    - by darkcminor
    I have a matrix like

        A =  4.0000   120.0000    92.0000         0          0    37.6000    0.1910   30.0000
            10.0000   168.0000    74.0000         0          0    38.0000    0.5370   34.0000
            10.0000   139.0000    80.0000         0          0    27.1000    1.4410   57.0000
             1.0000   139.0000    60.0000   23.0000   846.0000    30.1000    0.3980   59.0000
             5.0000   136.0000    72.0000   19.0000   175.0000    25.8000    0.5870   51.0000
             7.0000   121.0000          0         0          0    30.0000    0.4840   32.0000

    I want to replace the values of the first column that are greater than 5 with 0. Also, in the second column, if the values are in the range 121-130, replace them with 0; if they are in the range 131-140, replace them with 1; 141-150 with 2; 151-160 with 3; and so on. So the result matrix would be

        A =  4.0000     0.0000    92.0000         0          0    37.6000    0.1910   30.0000
             0.0000     4.0000    74.0000         0          0    38.0000    0.5370   34.0000
             0.0000     1.0000    80.0000         0          0    27.1000    1.4410   57.0000
             1.0000     1.0000    60.0000   23.0000   846.0000    30.1000    0.3980   59.0000
             5.0000     1.0000    72.0000   19.0000   175.0000    25.8000    0.5870   51.0000
             0.0000     0.0000          0         0          0    30.0000    0.4840   32.0000

    How do I accomplish this? I was trying something like

        counter = 1;
        for i = 1:rows
            if A(i,1) > 5
                A(i,1) = 0;
            end
            if A(i,2) > 120 && A(i,2) < 130
                A(i,2) = 0;
            end
            counter = counter + 1;
        end

    Could using a case do the trick?
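
    A vectorized sketch based on the banding rule as stated (no loop or switch/case needed). Note that the question's own example also maps 120 to 0, so the lower bound of the first band may need adjusting.

        % Column 1: values greater than 5 become 0
        A(A(:,1) > 5, 1) = 0;

        % Column 2: 121-130 -> 0, 131-140 -> 1, 141-150 -> 2, ...
        band = A(:,2) >= 121;
        A(band, 2) = floor((A(band, 2) - 121) / 10);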

  • MS SQL - Multi-Column substring matching

    - by hamlin11
    One of my clients is hooked on multi-column substring matching. I understand that CONTAINS and FREETEXT search for words (and, at least in the case of CONTAINS, word prefixes). However, based upon my understanding of this MSDN book, neither of these nor their variants are capable of searching substrings. I have used LIKE rather extensively (SELECT * FROM A WHERE A.B LIKE '%substr%').

    Sample table A:

        ID | Col1     | Col2     | Col3     |
        -------------------------------------
        1  | oklahoma | colorado | Utah     |
        2  | arkansas | colorado | oklahoma |
        3  | florida  | michigan | florida  |
        -------------------------------------

    The following code will give us row 1 and row 2:

        select * from A
        where Col1 like '%klah%'
           or Col2 like '%klah%'
           or Col3 like '%klah%'

    This is rather ugly, probably slow, and I just don't like it very much, probably because the implementations I'm dealing with have 10+ columns that need to be searched. The following may be a slight improvement as far as code readability goes, but performance-wise we're still in the same ballpark:

        select * from A
        where (Col1 + ' ' + Col2 + ' ' + Col3) like '%klah%'

    I have thought about simply adding insert, update, and delete triggers that add the concatenated version of the above columns into a separate table that shadows this table.

    Sample Shadow_Table:

        ID | searchtext                 |
        --------------------------------
        1  | oklahoma colorado Utah     |
        2  | arkansas colorado oklahoma |
        3  | florida michigan florida   |
        --------------------------------

    This would allow us to perform the following query to search for '%klah%':

        select * from Shadow_Table
        where searchtext like '%klah%'

    I really don't like having to remember that this shadow table exists and that I'm supposed to use it when performing multi-column substring matching, but it probably yields pretty quick reads at the expense of writes and storage space. My gut feeling tells me there is an existing solution built into SQL Server 2008; however, I don't seem to be able to find anything other than research papers on the subject. Any help would be appreciated.
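
    One sketch of a built-in alternative to the trigger-maintained shadow table, assuming the three-column sample above: a persisted computed column keeps the concatenation up to date automatically, so the LIKE stays against the base table. It does not make '%substr%' seekable, but it removes the triggers and the extra table.

        -- Hypothetical column name; the expression would grow to the real 10+ columns.
        ALTER TABLE A
            ADD SearchText AS (Col1 + ' ' + Col2 + ' ' + Col3) PERSISTED;

        SELECT *
        FROM A
        WHERE SearchText LIKE '%klah%';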

  • Dynamic evaluation of a table column within an insert before trigger

    - by Tim Garver
    Hi all, I have 3 tables: main, types and linked. main has an id column and 32 type columns. types has id and type. linked has id, main_id and type_id. I want to create a before-insert trigger on the main table. It needs to compare its 32 type columns to the values in the types table and, where the main table column has an 'X' for its value, insert the main_id and type_id into the linked table. I have done a lot of searching, and it looks like a prepared statement would be the way to go, but I wanted to ask the experts. The issue is that I don't want to write 32 IF statements, and even if I did, I would need to query the types table to get the ID for that type, which seems like a huge waste of resources. Ideally I want to do this inside of my trigger:

        BEGIN
            DECLARE @types results_set -- (not sure if this is a valid type);
            -- (I am sure my loop syntax is all wrong here)...
            SET @types = (select * from types)
            for i=0; i<types.records; i++
            {
                IF NEW.[i.type] = 'X' THEN
                    insert into linked (main_id, type_id) values (new.ID, i.id);
                END IF;
            }
        END;

    Anyway, this is what I was hoping to do. Maybe there is a way to dynamically set the field name inside of a results loop, but I can't find a good example of this. Thanks in advance, Tim
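
    For what it's worth, MySQL does not allow prepared (dynamic) statements inside triggers, so the column names cannot be generated in a loop there; some repetition is hard to avoid. A sketch of one repetitive-but-working shape, with hypothetical column names (type_alpha, type_beta) and type values ('alpha', 'beta'), using AFTER INSERT because an auto-increment id is not yet assigned to NEW.id in a BEFORE INSERT trigger:

        DELIMITER //
        CREATE TRIGGER main_after_insert
        AFTER INSERT ON main
        FOR EACH ROW
        BEGIN
            -- One block per type column; the lookup on types supplies the type_id.
            IF NEW.type_alpha = 'X' THEN
                INSERT INTO linked (main_id, type_id)
                SELECT NEW.id, t.id FROM types t WHERE t.type = 'alpha';
            END IF;
            IF NEW.type_beta = 'X' THEN
                INSERT INTO linked (main_id, type_id)
                SELECT NEW.id, t.id FROM types t WHERE t.type = 'beta';
            END IF;
            -- ... repeat for the remaining type columns ...
        END//
        DELIMITER ;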

  • ASP.NET javascript embed in template column

    - by Mahesh
    Hi, I am developing a web page in which a rad grid displays the list of exams. I included a template column which shows count down timer when the exam is going to expire. Code is as given below: <telerik:RadGrid ID="radGrid" runat="server" AutoGenerateColumns="false"> <MasterTableView> <Columns> <telerik:GridTemplateColumn HeaderText="template" DataField="Date"> <ItemTemplate> <script language="JavaScript" type="text/javascript"> TargetDate = '<%# Eval("Date") %>'; BackColor = "white"; ForeColor = "black"; CountActive = true; CountStepper = -1; LeadingZero = true; DisplayFormat = "%%D%% Days, %%H%% Hours, %%M%% Minutes, %%S%% Seconds."; FinishMessage = "It is finally here!"; </script> <script language="JavaScript" src="http://scripts.hashemian.com/js/countdown.js" type="text/javascript"></script> </ItemTemplate> </telerik:GridTemplateColumn> </Columns> </MasterTableView> </telerik:RadGrid> I am giving DataTable as datasource to this grid. But my problem is , the template column is showing data only for the first record and the value taken is from the last row in the DataTable. For Ex: If I give data as given below, I can see 3 records but with only the first record displaying the counter with last value(10/10/2010 05:43 PM). 02/02/2011 01:00 AM 08/09/2010 11:00 PM 10/10/2010 05:43 PM Could you please help in this?? Thanks, Mahesh

  • Return the Column Name for row with last non-null value "Ms Access 2007"

    - by bri1969
    I have a table which contains a list of league players. Each season, we record their points per dart (PPD). Their total PPD for each season is stored in other tables and extracted through other queries, which in turn are imported into the master table "Player History" at the end of the season for use as historical data. The current query retrieves each player's PPD for each season they played, when they played last, and how many seasons they played. The code for Last Season Played has become too long and unstable to use. It was originally split into two separate columns (LSP1 and LSP2) because a single SQL expression was too long; these work, but as I add seasons, Access does not like the length of the code. In short, I need simpler code that will look at each row, find the last non-null cell in that row, and report which column that last non-null value is in. So if a player played seasons 30 and 31, did not play 32, but did play 33, the column holding this result should be titled Last Season Played, and for that player it would show "33", indicating that this player last played season 33. I can provide both tables and the query. Please help.
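
    A minimal VBA sketch of one approach, assuming (hypothetically) that the per-season PPD columns are named PPD30 through PPD33; adjust the names and extend the chain as seasons are added. Put the function in a standard module and call it from a query column, for example: Last Season Played: LastSeasonPlayed([PPD30],[PPD31],[PPD32],[PPD33]).

        Public Function LastSeasonPlayed(ppd30 As Variant, ppd31 As Variant, _
                                         ppd32 As Variant, ppd33 As Variant) As Variant
            ' Check the most recent season first and fall back one season at a time.
            If Not IsNull(ppd33) Then
                LastSeasonPlayed = 33
            ElseIf Not IsNull(ppd32) Then
                LastSeasonPlayed = 32
            ElseIf Not IsNull(ppd31) Then
                LastSeasonPlayed = 31
            ElseIf Not IsNull(ppd30) Then
                LastSeasonPlayed = 30
            Else
                LastSeasonPlayed = Null   ' never played
            End If
        End Function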

  • Excel how to get an average for column for rows that meet multiple criteria

    - by Jess
    I would like to know the average days between open and close dates for items with a close date in a particular month. So from the example below, in Jan 2013 items 2, 5 and 6 were closed (closed can be RESOLVED or CANCELLED status), and each was open for 26, 9 and 6 days respectively. So of the jobs that have a close date in Jan 2013 (between 01/01/2013 and 13/02/13), they have an average open time (between open and close date) of 13.67 days to 2 dp. I have tried a few ways to get this to work, and I think the issue I am having is with the AVERAGE function. First time using a forum, so apologies if my question is unclear. I was unable to post an image, so the data is comma-separated below:

        Item_ID,Open_Date,Status,Close_Date
        1,1/06/2012,RESOLVED,16/07/2012
        2,20/12/2012,RESOLVED,16/01/2013
        3,2/01/2013,IN PROGRESS,
        4,3/01/2013,CANCELLED,7/05/2013
        5,3/01/2013,RESOLVED,12/01/2013
        6,4/01/2013,RESOLVED,10/01/2013
        7,1/02/2013,RESOLVED,15/02/2013
        8,2/02/2013,OPEN,
        9,7/02/2013,CANCELLED,26/02/2013
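
    A sketch using a helper column, assuming the data sits in A1:D10 with headers in row 1, Open_Date in B and Close_Date in D stored as real Excel dates, and a blank close date meaning the item is still open. Column E holds the days each closed item was open, and AVERAGEIFS restricts the average to items closed in January 2013 (whether to add 1 for an inclusive day count is a convention to settle).

        E2:  =IF(D2="", "", D2-B2)        (fill down to E10)

        Average open days for items closed in Jan 2013:
        =AVERAGEIFS(E2:E10, D2:D10, ">="&DATE(2013,1,1), D2:D10, "<"&DATE(2013,2,1))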

  • Column locking in innodb?

    - by mingyeow
    I know this sounds weird, but apparently one of my columns is locked.

        select * from table where type_id = 1 and updated_at < '2010-03-14' limit 1;
        select * from table where type_id = 3 and updated_at < '2010-03-14' limit 10;

    The first one will not finish running, while the second one completes smoothly. The only difference is the type_id. Thanks in advance for your help; I have an urgent data job to finish, and this problem is driving me crazy.
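
    InnoDB locks rows (index records), not columns, so a sketch of how to check whether the type_id = 1 rows are blocked by another transaction, or whether the query is simply scanning without a useful index (standard MySQL statements; the table name is kept from the question):

        -- Is the statement waiting on locks or doing work? Look at the State column.
        SHOW PROCESSLIST;

        -- Lock waits and the blocking transaction appear in the TRANSACTIONS section.
        SHOW ENGINE INNODB STATUS;

        -- Would an index on (type_id, updated_at) be used here?
        EXPLAIN SELECT * FROM `table`
        WHERE type_id = 1 AND updated_at < '2010-03-14' LIMIT 1;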

  • TIME column in TOP command for mysql

    - by michael
    When I run top on my database server, it shows mysqld with a TIME of 4:00.51, and it continues to go up. From other posts on here, I assume this means that one MySQL process has been running this long. It's not set to cumulative mode as best I can tell, since the heading would change to CTIME if that were the case. What I'm wondering is whether this is normal for a site that makes a lot of individual connections using PHP. I shouldn't have any long-running processes that would hold on to a MySQL connection for this long; a few seconds at most. Am I wrong to assume that this time relates to one connection or process running? Usually I see entries flash up and away in top, not just sit there with this number increasing.
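
    For what it's worth, the TIME column in top is the total CPU time the mysqld server process has accumulated since it started, not the age of any single connection. Per-connection ages are visible from inside MySQL instead, for example:

        -- The Time column here is per connection: seconds spent in the current state.
        SHOW FULL PROCESSLIST;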

  • Outlook: Move 'flag status' index column to the left

    - by ripper234
    In 'Messages' (inbox) view in Outlook 2007, there is a list of all messages (one-liners) with several fields. The rightmost field is the 'Flag Status' field. I'm trying to move this icon to the left. All other columns are movable (via several methods), but this status icon is not. How can I move it to the left end of the header line?

  • Move every 3 rows into a column in excel

    - by Eliane El Asmr
    Please, I need your help. I need to move every 3 rows into a new column. Let's suppose I have this (one value per row):

        Ambassade de France
        S.E. M. Patrice PAOLI
        01-420000-420150
        Ambassade de France
        Mme. Jamilé Anan
        01-420000-420150
        Ambassade de France
        Mme . Marie Maamari
        01-420000-420150

    I need them to be like this (three columns per row):

        Ambassade de France    S.E. M. Patrice PAOLI    01-420000-420150
        Ambassade de France    Mme. Jamilé Anan         01-420000-420150
        Ambassade de France    Mme . Marie Maamari      01-420000-420150

    I have this code. Can you help me, please? It's giving me an "out of range" error. What should I change? It's urgent. (The code is for every 7; I need it for every 3.)

        Sub Every7()
            Dim i As Integer, j As Integer, cl As Range
            Dim myarray(100, 6) As Integer
            'I don't know what your data is. Mine is integer data
            'Change 100 to however many rows you have in your original data, divided by seven, round up
            'remember arrays start at zero, so 6 really is 7
            If MsgBox("Is your entire data selected?", vbYesNo, "Data selected?") <> vbYes Then
                MsgBox ("First select all your data")
            End If
            'Read data into array
            For Each cl In Selection.Cells
                Debug.Print cl.Value
                myarray(i, j) = cl.Value
                If j = 6 Then
                    i = i + 1
                    j = 0
                Else
                    j = j + 1
                End If
            Next
            'Now paste the array for your data into a new worksheet
            Worksheets.Add
            Range(Cells(1, 1), Cells(101, 7)) = myarray
        End Sub

    Thank you.
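
    A minimal sketch of a version written for groups of three, assuming the source values are a single column of text selected before running it. (The original declares a fixed-size Integer array, which fails on text values such as "Ambassade de France" and can run past its bounds; writing cell by cell with Variant values avoids both problems.)

        Sub EveryThreeToColumns()
            Dim src As Range, out As Worksheet
            Dim i As Long, r As Long, c As Long

            Set src = Selection                  ' one column, 3 rows per record
            Set out = Worksheets.Add
            r = 0
            For i = 1 To src.Cells.Count
                c = (i - 1) Mod 3 + 1            ' 1, 2, 3, 1, 2, 3, ...
                If c = 1 Then r = r + 1          ' start a new output row
                out.Cells(r, c).Value = src.Cells(i).Value
            Next i
        End Sub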

  • Show Excel column filter information in cells

    - by Alex
    We have a sheet with a huge number of columns, and filtering is often used to navigate to the correct data. The problem is that sometimes it's not obvious that a filter has been applied; the visual cue is very subtle. Is it possible to show some information about the filter in another cell, via a formula or VBA? Something like this: just knowing whether a filter is active would be a good help; knowing which columns have active filters applied to them would be icing on the cake. Ideally it would update automatically. I don't have ownership of the spreadsheet, so I can't make major changes to its structure or anything, but VBA is fine. Any ideas?
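
    A sketch of a worksheet-function approach (VBA in a standard module; the function name is hypothetical): entering =FilterInfo() in any cell reports whether an AutoFilter is active on that cell's sheet and lists the filtered column headers. One caveat: changing a filter does not by itself trigger recalculation, so the cell refreshes on the next calculation (F9 or any edit) even though the function is marked volatile.

        Public Function FilterInfo() As String
            Application.Volatile
            Dim ws As Worksheet, f As Filter
            Dim i As Long, cols As String

            ' Report on the sheet the formula lives on, not whichever sheet is active.
            Set ws = Application.Caller.Worksheet
            If ws.AutoFilterMode = False Or Not ws.FilterMode Then
                FilterInfo = "No filter active"
                Exit Function
            End If

            ' Walk the AutoFilter's Filters collection and note the filtered columns.
            For Each f In ws.AutoFilter.Filters
                i = i + 1
                If f.On Then
                    cols = cols & ws.AutoFilter.Range.Cells(1, i).Value & ", "
                End If
            Next f
            If Len(cols) > 0 Then cols = Left(cols, Len(cols) - 2)
            FilterInfo = "Filtered on: " & cols
        End Function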

  • Excel Subtotal if adjacent column is not blank

    - by Head of Catering
    I'm trying to create a subtotal for a range that excludes rows that don't have a wholesale price. I have a range of products, prices and units that have subtotals by brand, although the brand subtotal is a sum and not a subtotal because the total needs to be displayed regardless of what the user chooses to filter. These subtotal rows do not have wholesale prices. Here is the sumif formula I'm using to calculate totals in the summary area above the range: =SUMIF(B5:B12, "", D5:D12) I need to have a subtotal formula that works the same way. Is there an equivalent to the sumif formula for subtotals? Or maybe a worksheet function I can use? I need to be able to do this without using VBA.
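
    One formula-only sketch, assuming the same B5:D12 layout as the SUMIF above: SUBTOTAL(109, ...) sums only visible cells, and wrapping it in SUMPRODUCT with OFFSET applies it row by row so it can be combined with the blank-column-B condition.

        =SUMPRODUCT(SUBTOTAL(109, OFFSET(D5, ROW(D5:D12)-ROW(D5), 0)), --(B5:B12=""))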

  • Excel formula to compare single value in one cell with multiple values in other cell

    - by Raw
    I have a value in column A which I want to compare with the multiple values in the corresponding cell in column B and, depending on that comparison, put the answer in column C. For example, using the table below, the first row checks column B for values that are less than or equal to 12 and puts the answers, in the same order, in column C.

        Column A   Column B          Column C
        12         0,12,13,14        Yes, Yes, No, No
        101        101,102,103,104   Yes, No, No, No

    How can I do this in Excel?
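
    Splitting a comma-separated cell item by item is awkward with worksheet formulas alone, so a sketch of a small VBA user-defined function (hypothetical name, assuming well-formed numeric lists like the samples): with it in a standard module, C1 would be =CompareList(A1, B1), copied down.

        Public Function CompareList(threshold As Double, listText As String) As String
            Dim parts() As String
            Dim i As Long

            parts = Split(listText, ",")
            For i = LBound(parts) To UBound(parts)
                ' "Yes" when the list value is less than or equal to the column A value.
                If CDbl(Trim(parts(i))) <= threshold Then
                    CompareList = CompareList & "Yes"
                Else
                    CompareList = CompareList & "No"
                End If
                If i < UBound(parts) Then CompareList = CompareList & ", "
            Next i
        End Function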

  • Include most recent non empty column value in filter

    - by Domenic
    If my data looks like this:

        Category   Sub Category
        1          a
                   b
        2          c
                   d

    which shows that there are two categories: "1", which has sub-categories "a" and "b", and "2", which has sub-categories "c" and "d". What can I do in Excel (for filtering/sorting) to keep rows 1 and 2 together as category "1", instead of the first row being category "1" and the second being category ""? I'm trying to avoid having to do this:

        Category   Sub Category
        1          a
        1          b
        2          c
        2          d
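
    A sketch of one workaround: add a helper column that carries the last non-blank category down, then sort or filter on the helper instead of on Category. Assuming Category is in column A with headers in row 1 and the helper is column C:

        C2:  =IF(A2="", C1, A2)        (fill down)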

  • Excel - Possible to create a sorted view of a column in one sheet on another sheet?

    - by Cumbayah
    Hi; I'm trying, in Excel 2007, to populate a column in one sheet with the data contained in a column on another sheet, so that I may provide another sorting on the data, related to that sheet only. I've tried to boil it down to being able to have a column on sheet2 automatically being populated with all rows from a column in sheet1, but I can't seem to do so. Any suggestions? Thanks in advance.
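
    A formula sketch for the numeric case, assuming the source values are numbers in Sheet1!A2:A100 with no blanks: entered in A2 of the second sheet and filled down, this gives an automatically updated ascending view of the column; use LARGE instead of SMALL for descending. Text values would need a different approach (for example a pivot table).

        Sheet2!A2:  =SMALL(Sheet1!$A$2:$A$100, ROWS(A$2:A2))        (fill down)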

  • Excel: How to Compare Column Values in a Row

    - by spazzie
    I have a bunch of comparison data and a lot of entries being compared. As an example, say my sheet looks like this, give or take a few columns:

        Item   Price1   Quantity1   Price2   Quantity2   Price3   Quantity3
        001    $123     12          $456     24          $789     48
        002    $100     95          $200     5           $300     51

    For each item (row), I want to be able to look at all of the Quantity columns and find which one has the highest quantity. Ideally I'd be able to apply a condition of some sort to the entire Excel sheet at once, and it would highlight the highest quantity in red. So the results would be a red "48" (Quantity3) for item 001 and a red "95" (Quantity1) for item 002. Only the color would change, not any data, and no new rows would need to be created. Let me know if you need more info.
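
    A conditional-formatting sketch, assuming the table above occupies A1:G3 with headers in row 1, so the quantities sit in the odd-numbered columns C, E and G: select C2:G3, add a rule of type "Use a formula to determine which cells to format" with a red font, and enter the formula below. The MOD test skips the price columns; with a different layout, the column references and the test would need adjusting.

        =AND(MOD(COLUMN(),2)=1, C2=MAX($C2,$E2,$G2))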

  • SP Gridview link button column not working

    - by Dilse Naaz
    Hi I have one sharepoint custom page application which is rendering from a user control. In the user control page, i had used SPGridview for displaying data. My first column is Title Column (link button column), when the user click on the link, then one popup window will open with corresponding data. But the problem is the link button is not working properly. But this application is working as fine in asp.net application. My code is shown below.. <asp:UpdatePanel runat="server" ID="UpdatePanel2"> <ContentTemplate> <SharePoint:SPGridView ID="dgApplicationBox" CellPadding="0" Height="100%" runat="server" ForeColor="Black" Font-Size="10px" Font-Names="Verdana" AutoGenerateColumns="False" AllowPaging="True" Width="100%" BorderStyle="None" BorderWidth="0px" PageSize="10" BorderColor="White" BackColor="White" OnRowDataBound="dgApplicationBox_RowDataBound" DataKeyNames="ApplicationID" OnSelectedIndexChanged="dgApplicationBox_SelectedIndexChanged" OnPageIndexChanging="dgApplicationBox_PageIndexChanging" CssClass="ms-listviewtable" AlternatingRowStyle-CssClass="ms-alternating"> <SelectedRowStyle Font-Bold="True" ForeColor="Black" BackColor="#CE5D5A"></SelectedRowStyle> <EditRowStyle Font-Size="10px" Font-Names="Verdana,Arial,Helvetica,sans-serif"></EditRowStyle> <HeaderStyle Font-Size="11px" Height="20px" Font-Bold="True" ForeColor="Black" BackColor="#E7E8EC"> </HeaderStyle> <PagerStyle HorizontalAlign="Center" ForeColor="#414E61" Font-Size="5px" Font-Names="arial" Height="10px" BackColor="#EBF3FF"></PagerStyle> <RowStyle /> <Columns> <asp:TemplateField HeaderText="Title" HeaderStyle-CssClass="ms-vb"> <ItemTemplate> <asp:LinkButton ID="lbtnSubject" Text='<%# Bind("UDF5") %>' runat="server" OnClick="lbtnSubject_Click"></asp:LinkButton> </ItemTemplate> <HeaderStyle HorizontalAlign="Left" CssClass="ms-vh2" Font-Bold="true" /> <ItemStyle HorizontalAlign="Left" CssClass="ms-vb2" /> </asp:TemplateField> <asp:TemplateField HeaderText="Request No."> <ItemTemplate> <asp:Label ID="lblReqNo" Text='<%# Bind("UDF1") %>' runat="server" /> </ItemTemplate> <HeaderStyle HorizontalAlign="Left" CssClass="ms-vh2" Font-Bold="true" /> <ItemStyle HorizontalAlign="Left" CssClass="ms-vb2" /> </asp:TemplateField> <asp:BoundField DataField="CreatedOn" HeaderText="Created On" DataFormatString="{0:MM/dd/yyyy}" HeaderStyle-HorizontalAlign="Left" ItemStyle-HorizontalAlign="Left"> <HeaderStyle CssClass="ms-vh2" Font-Bold="true"></HeaderStyle> <ItemStyle CssClass="ms-vb2"></ItemStyle> </asp:BoundField> <asp:BoundField DataField="Name" HeaderText="Form Type" HeaderStyle-HorizontalAlign="Left" ItemStyle-HorizontalAlign="Left"> <HeaderStyle CssClass="ms-vh2" Font-Bold="true"></HeaderStyle> <ItemStyle CssClass="ms-vb2"></ItemStyle> </asp:BoundField> <asp:TemplateField HeaderText="History"> <HeaderStyle CssClass="ms-vh2" Font-Bold="true"></HeaderStyle> <ItemStyle HorizontalAlign="Center" VerticalAlign="Middle" Width="21px" CssClass="ms-vb2"> </ItemStyle> <ItemTemplate> <asp:LinkButton ID="lbtnView" runat="server" OnClick="lbtnView_Click" >View</asp:LinkButton> </ItemTemplate> </asp:TemplateField> <asp:TemplateField HeaderText="Application Id" Visible="False"> <ItemTemplate> <asp:Label ID="lblApplicationId" runat="server" Text='<%# Bind("ApplicationId") %>'></asp:Label> </ItemTemplate> <HeaderStyle HorizontalAlign="Left" CssClass="ms-vh2" Font-Bold="true" /> <ItemStyle HorizontalAlign="Left" CssClass="ms-vb2" /> </asp:TemplateField> </Columns> </SharePoint:SPGridView> </ContentTemplate> </asp:UpdatePanel> when the user click on the 
link button, this code will works.. try { clearSession(); Session["DigitalSignature"] = null; Button btnDetails = sender as Button; DataTable dt = (DataTable)dgApplicationBox.DataSource; GridViewRow gvRow = (GridViewRow)(sender as LinkButton).Parent.Parent; Session["AppId"] = ((Label)gvRow.FindControl("lblApplicationId")).Text; string subject = ((LinkButton)gvRow.FindControl("lbtnSubject")).Text; WFInfo objWFInfo = new WFInfo(); objWFInfo.InitWorkflowProperty(Convert.ToInt32(Session["AppId"].ToString()), Session["CurrentUser"].ToString()); Session["FormId"] = objWFInfo.FormID.ToString(); string strFilname = objWFInfo.GetFormName(objWFInfo.ApplicationCategoryID.ToString()); string WindowName = strFilname; strFilname += ".aspx"; Session["CategoryId"] = objWFInfo.ApplicationCategoryID.ToString(); //pnlSubmitModal_ModalPopupExtender.Show(); ScriptManager.RegisterStartupScript(this, this.GetType(), "starScript", "popUpWindow('" + strFilname + "?tittle=" + subject + "', 800, 690,'" + WindowName + "');", true); this.Controls.Add(new LiteralControl("<script>alert('hi');</script>")); if (Session["CurrentUser"] != null) { ApplicationForm objApplication = new ApplicationForm(); objApplication.markRead(Convert.ToInt32(Session["AppId"].ToString()), Session["CurrentUser"].ToString()); } bindFolderData(); } If i click on the link button, there will be only post back occuring. but not the popup window open.. Please help me for resolving this problem. thanks in advance..

  • SQL SERVER – Fundamentals of Columnstore Index

    - by pinaldave
    There are two kind of storage in database. Row Store and Column Store. Row store does exactly as the name suggests – stores rows of data on a page – and column store stores all the data in a column on the same page. These columns are much easier to search – instead of a query searching all the data in an entire row whether the data is relevant or not, column store queries need only to search much lesser number of the columns. This means major increases in search speed and hard drive use. Additionally, the column store indexes are heavily compressed, which translates to even greater memory and faster searches. I am sure this looks very exciting and it does not mean that you convert every single index from row store to column store index. One has to understand the proper places where to use row store or column store indexes. Let us understand in this article what is the difference in Columnstore type of index. Column store indexes are run by Microsoft’s VertiPaq technology. However, all you really need to know is that this method of storing data is columns on a single page is much faster and more efficient. Creating a column store index is very easy, and you don’t have to learn new syntax to create them. You just need to specify the keyword “COLUMNSTORE” and enter the data as you normally would. Keep in mind that once you add a column store to a table, though, you cannot delete, insert or update the data – it is READ ONLY. However, since column store will be mainly used for data warehousing, this should not be a big problem. You can always use partitioning to avoid rebuilding the index. A columnstore index stores each column in a separate set of disk pages, rather than storing multiple rows per page as data traditionally has been stored. The difference between column store and row store approaches is illustrated below: In case of the row store indexes multiple pages will contain multiple rows of the columns spanning across multiple pages. In case of column store indexes multiple pages will contain multiple single columns. This will lead only the columns needed to solve a query will be fetched from disk. Additionally there is good chance that there will be redundant data in a single column which will further help to compress the data, this will have positive effect on buffer hit rate as most of the data will be in memory and due to same it will not need to be retrieved. Let us see small example of how columnstore index improves the performance of the query on a large table. As a first step let us create databaseset which is large enough to show performance impact of columnstore index. The time taken to create sample database may vary on different computer based on the resources. 
USE AdventureWorks GO -- Create New Table CREATE TABLE [dbo].[MySalesOrderDetail]( [SalesOrderID] [int] NOT NULL, [SalesOrderDetailID] [int] NOT NULL, [CarrierTrackingNumber] [nvarchar](25) NULL, [OrderQty] [smallint] NOT NULL, [ProductID] [int] NOT NULL, [SpecialOfferID] [int] NOT NULL, [UnitPrice] [money] NOT NULL, [UnitPriceDiscount] [money] NOT NULL, [LineTotal] [numeric](38, 6) NOT NULL, [rowguid] [uniqueidentifier] NOT NULL, [ModifiedDate] [datetime] NOT NULL ) ON [PRIMARY] GO -- Create clustered index CREATE CLUSTERED INDEX [CL_MySalesOrderDetail] ON [dbo].[MySalesOrderDetail] ( [SalesOrderDetailID]) GO -- Create Sample Data Table -- WARNING: This Query may run upto 2-10 minutes based on your systems resources INSERT INTO [dbo].[MySalesOrderDetail] SELECT S1.* FROM Sales.SalesOrderDetail S1 GO 100 Now let us do quick performance test. I have kept STATISTICS IO ON for measuring how much IO following queries take. In my test first I will run query which will use regular index. We will note the IO usage of the query. After that we will create columnstore index and will measure the IO of the same. -- Performance Test -- Comparing Regular Index with ColumnStore Index USE AdventureWorks GO SET STATISTICS IO ON GO -- Select Table with regular Index SELECT ProductID, SUM(UnitPrice) SumUnitPrice, AVG(UnitPrice) AvgUnitPrice, SUM(OrderQty) SumOrderQty, AVG(OrderQty) AvgOrderQty FROM [dbo].[MySalesOrderDetail] GROUP BY ProductID ORDER BY ProductID GO -- Table 'MySalesOrderDetail'. Scan count 1, logical reads 342261, physical reads 0, read-ahead reads 0. -- Create ColumnStore Index CREATE NONCLUSTERED COLUMNSTORE INDEX [IX_MySalesOrderDetail_ColumnStore] ON [MySalesOrderDetail] (UnitPrice, OrderQty, ProductID) GO -- Select Table with Columnstore Index SELECT ProductID, SUM(UnitPrice) SumUnitPrice, AVG(UnitPrice) AvgUnitPrice, SUM(OrderQty) SumOrderQty, AVG(OrderQty) AvgOrderQty FROM [dbo].[MySalesOrderDetail] GROUP BY ProductID ORDER BY ProductID GO It is very clear from the results that query is performance extremely fast after creating ColumnStore Index. The amount of the pages it has to read to run query is drastically reduced as the column which are needed in the query are stored in the same page and query does not have to go through every single page to read those columns. If we enable execution plan and compare we can see that column store index performance way better than regular index in this case. Let us clean up the database. -- Cleanup DROP INDEX [IX_MySalesOrderDetail_ColumnStore] ON [dbo].[MySalesOrderDetail] GO TRUNCATE TABLE dbo.MySalesOrderDetail GO DROP TABLE dbo.MySalesOrderDetail GO In future posts we will see cases where Columnstore index is not appropriate solution as well few other tricks and tips of the columnstore index. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, PostADay, SQL, SQL Authority, SQL Index, SQL Optimization, SQL Performance, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, T SQL, Technology

  • I see no LOBs!

    - by Paul White
    Is it possible to see LOB (large object) logical reads from STATISTICS IO output on a table with no LOB columns? I was asked this question today by someone who had spent a good fraction of their afternoon trying to work out why this was occurring – even going so far as to re-run DBCC CHECKDB to see if any corruption had taken place.  The table in question wasn’t particularly pretty – it had grown somewhat organically over time, with new columns being added every so often as the need arose.  Nevertheless, it remained a simple structure with no LOB columns – no TEXT or IMAGE, no XML, no MAX types – nothing aside from ordinary INT, MONEY, VARCHAR, and DATETIME types.  To add to the air of mystery, not every query that ran against the table would report LOB logical reads – just sometimes – but when it did, the query often took much longer to execute. Ok, enough of the pre-amble.  I can’t reproduce the exact structure here, but the following script creates a table that will serve to demonstrate the effect: IF OBJECT_ID(N'dbo.Test', N'U') IS NOT NULL DROP TABLE dbo.Test GO CREATE TABLE dbo.Test ( row_id NUMERIC IDENTITY NOT NULL,   col01 NVARCHAR(450) NOT NULL, col02 NVARCHAR(450) NOT NULL, col03 NVARCHAR(450) NOT NULL, col04 NVARCHAR(450) NOT NULL, col05 NVARCHAR(450) NOT NULL, col06 NVARCHAR(450) NOT NULL, col07 NVARCHAR(450) NOT NULL, col08 NVARCHAR(450) NOT NULL, col09 NVARCHAR(450) NOT NULL, col10 NVARCHAR(450) NOT NULL, CONSTRAINT [PK dbo.Test row_id] PRIMARY KEY CLUSTERED (row_id) ) ; The next script loads the ten variable-length character columns with one-character strings in the first row, two-character strings in the second row, and so on down to the 450th row: WITH Numbers AS ( -- Generates numbers 1 - 450 inclusive SELECT TOP (450) n = ROW_NUMBER() OVER (ORDER BY (SELECT 0)) FROM master.sys.columns C1, master.sys.columns C2, master.sys.columns C3 ORDER BY n ASC ) INSERT dbo.Test WITH (TABLOCKX) SELECT REPLICATE(N'A', N.n), REPLICATE(N'B', N.n), REPLICATE(N'C', N.n), REPLICATE(N'D', N.n), REPLICATE(N'E', N.n), REPLICATE(N'F', N.n), REPLICATE(N'G', N.n), REPLICATE(N'H', N.n), REPLICATE(N'I', N.n), REPLICATE(N'J', N.n) FROM Numbers AS N ORDER BY N.n ASC ; Once those two scripts have run, the table contains 450 rows and 10 columns of data like this: Most of the time, when we query data from this table, we don’t see any LOB logical reads, for example: -- Find the maximum length of the data in -- column 5 for a range of rows SELECT result = MAX(DATALENGTH(T.col05)) FROM dbo.Test AS T WHERE row_id BETWEEN 50 AND 100 ; But with a different query… -- Read all the data in column 1 SELECT result = MAX(DATALENGTH(T.col01)) FROM dbo.Test AS T ; …suddenly we have 49 LOB logical reads, as well as the ‘normal’ logical reads we would expect. The Explanation If we had tried to create this table in SQL Server 2000, we would have received a warning message to say that future INSERT or UPDATE operations on the table might fail if the resulting row exceeded the in-row storage limit of 8060 bytes.  If we needed to store more data than would fit in an 8060 byte row (including internal overhead) we had to use a LOB column – TEXT, NTEXT, or IMAGE.  These special data types store the large data values in a separate structure, with just a small pointer left in the original row. Row Overflow SQL Server 2005 introduced a feature called row overflow, which allows one or more variable-length columns in a row to move to off-row storage if the data in a particular row would otherwise exceed 8060 bytes.  
You no longer receive a warning when creating (or altering) a table that might need more than 8060 bytes of in-row storage; if SQL Server finds that it can no longer fit a variable-length column in a particular row, it will silently move one or more of these columns off the row into a separate allocation unit. Only variable-length columns can be moved in this way (for example the (N)VARCHAR, VARBINARY, and SQL_VARIANT types).  Fixed-length columns (like INTEGER and DATETIME for example) never move into ‘row overflow’ storage.  The decision to move a column off-row is done on a row-by-row basis – so data in a particular column might be stored in-row for some table records, and off-row for others. In general, if SQL Server finds that it needs to move a column into row-overflow storage, it moves the largest variable-length column record for that row.  Note that in the case of an UPDATE statement that results in the 8060 byte limit being exceeded, it might not be the column that grew that is moved! Sneaky LOBs Anyway, that’s all very interesting but I don’t want to get too carried away with the intricacies of row-overflow storage internals.  The point is that it is now possible to define a table with non-LOB columns that will silently exceed the old row-size limit and result in ordinary variable-length columns being moved to off-row storage.  Adding new columns to a table, expanding an existing column definition, or simply storing more data in a column than you used to – all these things can result in one or more variable-length columns being moved off the row. Note that row-overflow storage is logically quite different from old-style LOB and new-style MAX data type storage – individual variable-length columns are still limited to 8000 bytes each – you can just have more of them now.  Having said that, the physical mechanisms involved are very similar to full LOB storage – a column moved to row-overflow leaves a 24-byte pointer record in the row, and the ‘separate storage’ I have been talking about is structured very similarly to both old-style LOBs and new-style MAX types.  The disadvantages are also the same: when SQL Server needs a row-overflow column value it needs to follow the in-row pointer a navigate another chain of pages, just like retrieving a traditional LOB. And Finally… In the example script presented above, the rows with row_id values from 402 to 450 inclusive all exceed the total in-row storage limit of 8060 bytes.  A SELECT that references a column in one of those rows that has moved to off-row storage will incur one or more lob logical reads as the storage engine locates the data.  The results on your system might vary slightly depending on your settings, of course; but in my tests only column 1 in rows 402-450 moved off-row.  You might like to play around with the script – updating columns, changing data type lengths, and so on – to see the effect on lob logical reads and which columns get moved when.  You might even see row-overflow columns moving back in-row if they are updated to be smaller (hint: reduce the size of a column entry by at least 1000 bytes if you hope to see this). Be aware that SQL Server will not warn you when it moves ‘ordinary’ variable-length columns into overflow storage, and it can have dramatic effects on performance.  It makes more sense than ever to choose column data types sensibly.  
If you make every column a VARCHAR(8000) or NVARCHAR(4000), and someone stores data that results in a row needing more than 8060 bytes, SQL Server might turn some of your column data into pseudo-LOBs – all without saying a word. Finally, some people make a distinction between ordinary LOBs (those that can hold up to 2GB of data) and the LOB-like structures created by row-overflow (where columns are still limited to 8000 bytes) by referring to row-overflow LOBs as SLOBs.  I find that quite appealing, but the ‘S’ stands for ‘small’, which makes expanding the whole acronym a little daft-sounding…small large objects anyone? © Paul White 2011 email: [email protected] twitter: @SQL_Kiwi
