Search Results

Search found 155 results on 7 pages for 'columnname'.

Page 2/7 | < Previous Page | 1 2 3 4 5 6 7  | Next Page >

  • sqlalchemy dynamic mapping

    - by adancu
    Hi, I have the following problem: I have the class: class Word(object): def __init__(self): self.id = None self.columns = {} def __str__(self): return "(%s, %s)" % (str(self.id), str(self.columns)) self.columns is a dict which will hold (columnName:columnValue) values. The names of the columns are known at runtime and they are loaded in a wordColumns list, for example wordColumns = ['english', 'korean', 'romanian'] wordTable = Table('word', metadata, Column('id', Integer, primary_key = True) ) for columnName in wordColumns: wordTable.append_column(Column(columnName, String(255), nullable = False)) I even created explicit mapper properties to "force" the table columns to be mapped on word.columns[columnName], instead of word.columnName. I don't get any error on mapping, but it seems that doesn't work. mapperProperties = {} for column in wordColumns: mapperProperties["columns['%s']" % column] = wordTable.columns[column] mapper(Word, wordTable, mapperProperties) When I load a word object, SQLAlchemy creates an object which has the word.columns['english'], word.columns['korean'] etc. properties instead of loading them into the word.columns dict. So for each column, it creates a new property. Moreover, the word.columns dictionary doesn't even exist. The same way, when I try to persist a word, SQLAlchemy expects to find the column values in properties named like word.columns['english'] (string type) instead of in the dictionary word.columns. I have to say that my experience with Python and SQLAlchemy is quite limited; maybe it isn't possible to do what I'm trying to do. Any help appreciated, thanks in advance.

    Read the article

  • Transaction within IF THEN ELSE doesn't commit

    - by boris callens
    In my TSQL script I have an IF THEN ELSE structure that checks if a column already exists. If not, it creates the column and updates it. IF NOT EXISTS( SELECT 1 FROM INFORMATION_SCHEMA.COLUMNS WHERE TABLE_NAME = 'tableName' AND COLUMN_NAME = 'columnName') BEGIN BEGIN TRANSACTION ALTER TABLE tableName ADD columnName int NULL COMMIT BEGIN TRANSACTION update tableName set columnName = [something] from [subquery] COMMIT END This doesn't work because the column doesn't exist after the commit. Why doesn't the COMMIT commit?
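    The root cause here is batch compilation rather than the transaction: the whole IF block is parsed and bound before anything executes, so the UPDATE is rejected because the column does not yet exist at that point; the COMMIT itself is fine. Below is a minimal C# sketch of one workaround, sending the ALTER and the UPDATE as two separate batches so the second is compiled only after the column exists (the connection string, table and column names are placeholders):

        using System;
        using System.Data.SqlClient;

        class AddColumnThenUpdate
        {
            static void Main()
            {
                // Placeholder connection string, table and column names.
                using (var conn = new SqlConnection("Server=.;Database=MyDb;Integrated Security=true"))
                {
                    conn.Open();

                    // Batch 1: add the column if it is missing.
                    using (var cmd = new SqlCommand(@"
                        IF NOT EXISTS (SELECT 1 FROM INFORMATION_SCHEMA.COLUMNS
                                       WHERE TABLE_NAME = 'tableName' AND COLUMN_NAME = 'columnName')
                            ALTER TABLE tableName ADD columnName int NULL", conn))
                    {
                        cmd.ExecuteNonQuery();
                    }

                    // Batch 2: compiled separately, so the new column is visible here.
                    using (var cmd = new SqlCommand(
                        "UPDATE tableName SET columnName = 0 WHERE columnName IS NULL", conn))
                    {
                        cmd.ExecuteNonQuery();
                    }
                }
            }
        }

    Inside a plain T-SQL script the same effect comes from splitting the statements into separate batches with GO, or wrapping the UPDATE in EXEC('...') so it is compiled at run time.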

    Read the article

  • PHP variable question

    - by Kyle Parisi
    This works: $customerBox = mysql_query("MY SQL STATEMENT HERE"); $boxRow = mysql_fetch_array($customerBox); $customerBox = mysql_query("MY SQL STATEMENT AGAIN"); while($item = mysql_fetch_assoc($customerBox)) { foreach ($item as $columnName => $value) { if (empty($value)) { print $columnName; } } } This does not: $customerBox = mysql_query("MY SQL STATEMENT HERE"); $boxRow = mysql_fetch_array($customerBox); while($item = mysql_fetch_assoc($customerBox)) { foreach ($item as $columnName => $value) { if (empty($value)) { print $columnName; } } } Why? I guess I don't understand how variables work yet.

    Read the article

  • Grafting LINQ onto C# 2 library

    - by P Daddy
    I'm writing a data access layer. It will have C# 2 and C# 3 clients, so I'm compiling against the 2.0 framework. Although encouraging the use of stored procedures, I'm still trying to provide a fairly complete ability to perform ad-hoc queries. I have this working fairly well, already. For the convenience of C# 3 clients, I'm trying to provide as much compatibility with LINQ query syntax as I can. Jon Skeet noticed that LINQ query expressions are duck typed, so I don't have to have an IQueryable and IQueryProvider (or IEnumerable<T>) to use them. I just have to provide methods with the correct signatures. So I got Select, Where, OrderBy, OrderByDescending, ThenBy, and ThenByDescending working. Where I need help are with Join and GroupJoin. I've got them working, but only for one join. A brief compilable example of what I have is this: // .NET 2.0 doesn't define the Func<...> delegates, so let's define some workalikes delegate TResult FakeFunc<T, TResult>(T arg); delegate TResult FakeFunc<T1, T2, TResult>(T1 arg1, T2 arg2); abstract class Projection{ public static Condition operator==(Projection a, Projection b){ return new EqualsCondition(a, b); } public static Condition operator!=(Projection a, Projection b){ throw new NotImplementedException(); } } class ColumnProjection : Projection{ readonly Table table; readonly string columnName; public ColumnProjection(Table table, string columnName){ this.table = table; this.columnName = columnName; } } abstract class Condition{} class EqualsCondition : Condition{ readonly Projection a; readonly Projection b; public EqualsCondition(Projection a, Projection b){ this.a = a; this.b = b; } } class TableView{ readonly Table table; readonly Projection[] projections; public TableView(Table table, Projection[] projections){ this.table = table; this.projections = projections; } } class Table{ public Projection this[string columnName]{ get{return new ColumnProjection(this, columnName);} } public TableView Select(params Projection[] projections){ return new TableView(this, projections); } public TableView Select(FakeFunc<Table, Projection[]> projections){ return new TableView(this, projections(this)); } public Table Join(Table other, Condition condition){ return new JoinedTable(this, other, condition); } public TableView Join(Table inner, FakeFunc<Table, Projection> outerKeySelector, FakeFunc<Table, Projection> innerKeySelector, FakeFunc<Table, Table, Projection[]> resultSelector){ Table join = new JoinedTable(this, inner, new EqualsCondition(outerKeySelector(this), innerKeySelector(inner))); return join.Select(resultSelector(this, inner)); } } class JoinedTable : Table{ readonly Table left; readonly Table right; readonly Condition condition; public JoinedTable(Table left, Table right, Condition condition){ this.left = left; this.right = right; this.condition = condition; } } This allows me to use a fairly decent syntax in C# 2: Table table1 = new Table(); Table table2 = new Table(); TableView result = table1 .Join(table2, table1["ID"] == table2["ID"]) .Select(table1["ID"], table2["Description"]); But an even nicer syntax in C# 3: TableView result = from t1 in table1 join t2 in table2 on t1["ID"] equals t2["ID"] select new[]{t1["ID"], t2["Description"]}; This works well and gives me identical results to the first case. The problem is if I want to join in a third table. 
TableView result = from t1 in table1 join t2 in table2 on t1["ID"] equals t2["ID"] join t3 in table3 on t1["ID"] equals t3["ID"] select new[]{t1["ID"], t2["Description"], t3["Foo"]}; Now I get an error (Cannot implicitly convert type 'AnonymousType#1' to 'Projection[]'), presumably because the second join is trying to join the third table to an anonymous type containing the first two tables. This anonymous type, of course, doesn't have a Join method. Any hints on how I can do this?
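    For reference, the two-join query expression translates into roughly the method calls below, which is why the compiler complains: the first Join is now asked to return an anonymous pair of the two tables rather than a Projection[], and that pair type has no Join method of its own. This is only a sketch against the Table/TableView types from the question, with table3 assumed to be another Table; making it compile would need Join (and Select) overloads that are generic in the result selector's return type:

        // Approximate compiler translation of:
        //   from t1 in table1
        //   join t2 in table2 on t1["ID"] equals t2["ID"]
        //   join t3 in table3 on t1["ID"] equals t3["ID"]
        //   select new[]{ t1["ID"], t2["Description"], t3["Foo"] };
        var result = table1
            .Join(table2,
                  t1 => t1["ID"],                  // outer key
                  t2 => t2["ID"],                  // inner key
                  (t1, t2) => new { t1, t2 })      // transparent identifier, not a Projection[]
            .Join(table3,
                  pair => pair.t1["ID"],
                  t3 => t3["ID"],
                  (pair, t3) => new[] { pair.t1["ID"], pair.t2["Description"], t3["Foo"] });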

    Read the article

  • Some problems with GridView in webpart with multiple filters.

    - by NF_81
    Hello, I'm currently working on a highly configurable Database Viewer webpart for WSS 3.0 which we are going to need for several customized sharepoint sites. Sorry in advance for the large wall of text, but i fear it's necessary to recap the whole issue. As background information and to describe my problem as good as possible, I'll start by telling you what the webpart shall do: Basically the webpart contains an UpdatePanel, which contains a GridView and an SqlDataSource. The select-query the Datasource uses can be set via webbrowseable properties or received from a consumer method from another webpart. Now i wanted to add a filtering feature to the webpart, so i want a dropdownlist in the headerrow for each column that should be filterable. As the select-query is completely dynamic and i don't know at design time which columns shall be filterable, i decided to add a webbrowseable property to contain an xml-formed string with filter information. So i added the following into OnRowCreated of the gridview: void gridView_RowCreated(object sender, GridViewRowEventArgs e) { if (e.Row.RowType == DataControlRowType.Header) { for (int i = 0; i < e.Row.Cells.Count; i++) { if (e.Row.Cells[i].GetType() == typeof(DataControlFieldHeaderCell)) { string headerText = ((DataControlFieldHeaderCell)e.Row.Cells[i]).ContainingField.HeaderText; // add sorting functionality if (_allowSorting && !String.IsNullOrEmpty(headerText)) { Label l = new Label(); l.Text = headerText; l.ForeColor = Color.Blue; l.Font.Bold = true; l.ID = "Header" + i; l.Attributes["title"] = "Sort by " + headerText; l.Attributes["onmouseover"] = "this.style.cursor = 'pointer'; this.style.color = 'red'"; l.Attributes["onmouseout"] = "this.style.color = 'blue'"; l.Attributes["onclick"] = "__doPostBack('" + panel.UniqueID + "','SortBy$" + headerText + "');"; e.Row.Cells[i].Controls.Add(l); } // check if this column shall be filterable if (!String.IsNullOrEmpty(filterXmlData)) { XmlNode columnNode = GetColumnNode(headerText); if (columnNode != null) { string dataValueField = columnNode.Attributes["DataValueField"] == null ? "" : columnNode.Attributes["DataValueField"].Value; string filterQuery = columnNode.Attributes["FilterQuery"] == null ? "" : columnNode.Attributes["FilterQuery"].Value; if (!String.IsNullOrEmpty(dataValueField) && !String.IsNullOrEmpty(filterQuery)) { SqlDataSource ds = new SqlDataSource(_conStr, filterQuery); DropDownList cbx = new DropDownList(); cbx.ID = "FilterCbx" + i; cbx.Attributes["onchange"] = "__doPostBack('" + panel.UniqueID + "','SelectionChange$" + headerText + "$' + this.options[this.selectedIndex].value);"; cbx.Width = 150; cbx.DataValueField = dataValueField; cbx.DataSource = ds; cbx.DataBound += new EventHandler(cbx_DataBound); cbx.PreRender += new EventHandler(cbx_PreRender); cbx.DataBind(); e.Row.Cells[i].Controls.Add(cbx); } } } } } } } GetColumnNode() checks in the filter property, if there is a node for the current column, which contains information about the Field the DropDownList should bind to, and the query for filling in the items. In cbx_PreRender() i check ViewState and select an item in case of a postback. In cbx_DataBound() i just add tooltips to the list items as the dropdownlist has a fixed width. Previously, I used AutoPostback and SelectedIndexChanged of the DDL to filter the grid, but to my disappointment it was not always fired. 
Now i check __EVENTTARGET and __EVENTARGUMENT in OnLoad and call a function when the postback event was due to a selection change in a DDL: private void FilterSelectionChanged(string columnName, string selectedValue) { columnName = "[" + columnName + "]"; if (selectedValue.IndexOf("--") < 0 ) // "-- All --" selected { if (filter.ContainsKey(columnName)) filter[columnName] = "='" + selectedValue + "'"; else filter.Add(columnName, "='" + selectedValue + "'"); } else { filter.Remove(columnName); } gridView.PageIndex = 0; } "filter" is a HashTable which is stored in ViewState for persisting the filters (got this sample somewhere on the web, don't remember where). In OnPreRender of the webpart, i call a function which reads the ViewState and apply the filterExpression to the datasource if there is one. I assume i had to place it here, because if there is another postback (e.g. for sorting) the filters are not applied any more. private void ApplyGridFilter() { string args = " "; int i = 0; foreach (object key in filter.Keys) { if (i == 0) args = key.ToString() + filter[key].ToString(); else args += " AND " + key.ToString() + filter[key].ToString(); i++; } dataSource.FilterExpression = args; ViewState.Add("FilterArgs", filter); } protected override void OnPreRender(EventArgs e) { EnsureChildControls(); if (WebPartManager.DisplayMode.Name == "Edit") { errMsg = "Webpart in Edit mode..."; return; } if (useWebPartConnection == true) // get select-query from consumer webpart { if (provider != null) { dataSource.SelectCommand = provider.strQuery; } } try { int currentPageIndex = gridView.PageIndex; if (!String.IsNullOrEmpty(m_SortExpression)) { gridView.Sort("[" + m_SortExpression + "]", m_SortDirection); } gridView.PageIndex = currentPageIndex; // for some reason, the current pageindex resets after sorting ApplyGridFilter(); gridView.DataBind(); } catch (Exception ex) { Functions.ShowJavaScriptAlert(Page, ex.Message); } base.OnPreRender(e); } So i set the filterExpression and the call DataBind(). I don't know if this is ok on this late stage.. don't have a lot of asp.net experience after all. If anyone can suggest a better solution, please give me a hint. This all works great so far, except when i have two or more filters and set them to a combination that returns zero records. Bam ... gridview gone, completely - without a possiblity of changing the filters back. So i googled and found out that i have to subclass gridview in order to always show the headerrow. I found this solution and implemented it with some modifications. The headerrow get's displayed and i can change the filters even if the returned result contains no rows. But finally to my current problem: When i have two or more filters set which return zero rows, and i change back one filter to something that should return rows, the gridview remains empty (although the pager is rendered). I have to completly refresh the page to reset the filters. When debugging, i can see in the overridden CreateChildControls of the grid, that the base method indeed returns 0, but anyway... the gridView.RowCount remains 0 after databinding. Anyone have an idea what's going wrong here?

    Read the article

  • C# Counter requires 2 button clicks to update

    - by marko.ivanovski.nz
    Hi, I have a problem that has been bugging me all day. In my code I have the following: private int rowCount { get { return (int)ViewState["rowCount"]; } set { ViewState["rowCount"] = value; } } and a button event protected void addRow_Click(object sender, EventArgs e) { rowCount = rowCount + 1; } Then on Page_Load I read that value and create controls accordingly. I understand the button event fires AFTER the Page_Load fires so the value isn't updated until the next postback. Real nightmare. Here's the entire code: protected void Page_Load(object sender, EventArgs e) { string xmlValue = ""; //To read a value from a database if (xmlValue.Length > 0) { if (!Page.IsPostBack) { DataSet ds = XMLToDataSet(xmlValue); Table dimensionsTable = DataSetToTable(ds); tablePanel.Controls.Add(dimensionsTable); DataTable dt = ds.Tables["Dimensions"]; rowCount = dt.Rows.Count; colCount = dt.Columns.Count; } else { tablePanel.Controls.Add(DataSetToTable(DefaultDataSet(rowCount, colCount))); } } else { if (!Page.IsPostBack) { rowCount = 2; colCount = 4; } tablePanel.Controls.Add(DataSetToTable(DefaultDataSet(rowCount, colCount))); } } protected void submit_Click(object sender, EventArgs e) { resultsLabel.Text = Server.HtmlEncode(DataSetToStringXML(TableToDataSet((Table)tablePanel.Controls[0]))); } protected void addColumn_Click(object sender, EventArgs e) { colCount = colCount + 1; } protected void addRow_Click(object sender, EventArgs e) { rowCount = rowCount + 1; } public DataSet TableToDataSet(Table table) { DataSet ds = new DataSet(); DataTable dt = new DataTable("Dimensions"); ds.Tables.Add(dt); //Add headers for (int i = 0; i < table.Rows[0].Cells.Count; i++) { DataColumn col = new DataColumn(); TextBox headerTxtBox = (TextBox)table.Rows[0].Cells[i].Controls[0]; col.ColumnName = headerTxtBox.Text; col.Caption = headerTxtBox.Text; dt.Columns.Add(col); } for (int i = 0; i < table.Rows.Count; i++) { DataRow valueRow = dt.NewRow(); for (int x = 0; x < table.Rows[i].Cells.Count; x++) { TextBox valueTextBox = (TextBox)table.Rows[i].Cells[x].Controls[0]; valueRow[x] = valueTextBox.Text; } dt.Rows.Add(valueRow); } return ds; } public Table DataSetToTable(DataSet ds) { DataTable dt = ds.Tables["Dimensions"]; Table newTable = new Table(); //Add headers TableRow headerRow = new TableRow(); for (int i = 0; i < dt.Columns.Count; i++) { TableCell headerCell = new TableCell(); TextBox headerTxtBox = new TextBox(); headerTxtBox.ID = "HeadersTxtBox" + i.ToString(); headerTxtBox.Font.Bold = true; headerTxtBox.Text = dt.Columns[i].ColumnName; headerCell.Controls.Add(headerTxtBox); headerRow.Cells.Add(headerCell); } newTable.Rows.Add(headerRow); //Add value rows for (int i = 0; i < dt.Rows.Count; i++) { TableRow valueRow = new TableRow(); for (int x = 0; x < dt.Columns.Count; x++) { TableCell valueCell = new TableCell(); TextBox valueTxtBox = new TextBox(); valueTxtBox.ID = "ValueTxtBox" + i.ToString() + i + x + x.ToString(); valueTxtBox.Text = dt.Rows[i][x].ToString(); valueCell.Controls.Add(valueTxtBox); valueRow.Cells.Add(valueCell); } newTable.Rows.Add(valueRow); } return newTable; } public DataSet DefaultDataSet(int rows, int cols) { DataSet ds = new DataSet(); DataTable dt = new DataTable("Dimensions"); ds.Tables.Add(dt); DataColumn nameCol = new DataColumn(); nameCol.Caption = "Name"; nameCol.ColumnName = "Name"; nameCol.DataType = System.Type.GetType("System.String"); dt.Columns.Add(nameCol); DataColumn widthCol = new DataColumn(); widthCol.Caption = "Width"; widthCol.ColumnName = "Width"; widthCol.DataType = 
System.Type.GetType("System.String"); dt.Columns.Add(widthCol); if (cols > 2) { DataColumn heightCol = new DataColumn(); heightCol.Caption = "Height"; heightCol.ColumnName = "Height"; heightCol.DataType = System.Type.GetType("System.String"); dt.Columns.Add(heightCol); } if (cols > 3) { DataColumn depthCol = new DataColumn(); depthCol.Caption = "Depth"; depthCol.ColumnName = "Depth"; depthCol.DataType = System.Type.GetType("System.String"); dt.Columns.Add(depthCol); } if (cols > 4) { int newColCount = cols - 4; for (int i = 0; i < newColCount; i++) { DataColumn newCol = new DataColumn(); newCol.Caption = "New " + i.ToString(); newCol.ColumnName = "New " + i.ToString(); newCol.DataType = System.Type.GetType("System.String"); dt.Columns.Add(newCol); } } for (int i = 0; i < rows; i++) { DataRow newRow = dt.NewRow(); newRow["Name"] = "Name " + i.ToString(); newRow["Width"] = "Width " + i.ToString(); if (cols > 2) { newRow["Height"] = "Height " + i.ToString(); } if (cols > 3) { newRow["Depth"] = "Depth " + i.ToString(); } dt.Rows.Add(newRow); } return ds; } public DataSet XMLToDataSet(string xml) { StringReader sr = new StringReader(xml); DataSet ds = new DataSet(); ds.ReadXml(sr); return ds; } public string DataSetToStringXML(DataSet ds) { XmlDocument _XMLDoc = new XmlDocument(); _XMLDoc.LoadXml(ds.GetXml()); StringWriter sw = new StringWriter(); XmlTextWriter xw = new XmlTextWriter(sw); XmlDocument xml = _XMLDoc; xml.WriteTo(xw); return sw.ToString(); } private int rowCount { get { return (int)ViewState["rowCount"]; } set { ViewState["rowCount"] = value; } } private int colCount { get { return (int)ViewState["colCount"]; } set { ViewState["colCount"] = value; } } Thanks in advance, Marko
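    Since the click handler runs after Page_Load has already rebuilt the table from the old counter value, one straightforward workaround is to rebuild the table inside the handler itself once the counter has been bumped. Here is a minimal sketch that reuses the page's own helper methods; note that, as written, it does not carry over values typed before the click (capturing them with TableToDataSet first would be needed for that):

        protected void addRow_Click(object sender, EventArgs e)
        {
            rowCount = rowCount + 1;

            // Page_Load already added a table sized with the old rowCount, so
            // replace it with one built from the updated counters; the new row
            // then shows up on this postback instead of the next one.
            tablePanel.Controls.Clear();
            tablePanel.Controls.Add(DataSetToTable(DefaultDataSet(rowCount, colCount)));
        }

    The same pattern applies to addColumn_Click.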

    Read the article

  • Linq to Datarow, Select multiple columns as distinct?

    - by Beta033
    Basically I'm trying to reproduce the following mssql query as LINQ SELECT DISTINCT [TABLENAME], [COLUMNNAME] FROM [DATATABLE] the closest I've got is Dim query = (From row As DataRow In ds.Tables("DATATABLE").Rows _ Select row("COLUMNNAME"), row("TABLENAME")).Distinct when I do the above I get the error Range variable name can be inferred only from a simple or qualified name with no arguments. I was sort of expecting it to return a collection that I could then iterate through and perform actions for each entry. Maybe a datarow collection? As a complete LINQ newb, I'm not sure what I'm missing. I've tried variations on Select new with { row("COLUMNNAME"), row("TABLENAME") } and get: Anonymous type member name can be inferred only from a simple or qualified name with no arguments. Also, does anyone know of any good books/resources to get fluent?
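    The usual fix is to give the projected members explicit names so the compiler can infer them; anonymous types then compare by value, which lets Distinct do the de-duplication. For illustration, the same query in C# (this assumes both columns are strings and a reference to System.Data.DataSetExtensions for AsEnumerable and Field):

        var query = ds.Tables["DATATABLE"].AsEnumerable()
            .Select(row => new
            {
                TableName = row.Field<string>("TABLENAME"),
                ColumnName = row.Field<string>("COLUMNNAME")
            })
            .Distinct();   // value equality of the anonymous type removes duplicate pairs

        foreach (var pair in query)
            Console.WriteLine("{0}.{1}", pair.TableName, pair.ColumnName);

    The VB equivalent is Select New With {Key .TableName = row("TABLENAME"), Key .ColumnName = row("COLUMNNAME")} followed by .Distinct(); the Key modifier is what gives the members value equality.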

    Read the article

  • WPF Validation & IDataErrorInfo

    - by Jefim
    A note - the classes I have are EntityObject classes! I have the following class: public class Foo { public Bar Bar { get; set; } } public class Bar : IDataErrorInfo { public string Name { get; set; } #region IDataErrorInfo Members string IDataErrorInfo.Error { get { return null; } } string IDataErrorInfo.this[string columnName] { get { if (columnName == "Name") { return "Hello error!"; } Console.WriteLine("Validate: " + columnName); return null; } } #endregion } XAML goes as follows: <StackPanel Orientation="Horizontal" DataContext="{Binding Foo.Bar}"> <TextBox Text="{Binding Path=Name, ValidatesOnDataErrors=true}"/> </StackPanel> I put a breakpoint and a Console.Writeline on the validation there - I get no breaks. The validation is not executed. Can anybody just press me against the place where my error lies?

    Read the article

  • System.IndexOutOfRangeException Sending GridView Values to DataTable

    - by SidC
    Hello, I am writing an ASP.NET 3.5 application and need to send gridview values to a datatable for use in a listbox control as part of a quote process. I have written the following VB code in my Page_Load: Protected Sub Page_Load(ByVal sender As Object, ByVal e As System.EventArgs) Handles Me.Load Dim dtSelParts As DataTable = New DataTable("dtSelParts") Dim column As DataColumn = New DataColumn column.ColumnName = "PartName" column.ColumnName = "PartNumber" column.ColumnName = "Quantity" dtSelParts.Columns.Add(column) For Each row As GridViewRow In MySearch.Rows Dim drSelParts As DataRow drSelParts = dtSelParts.NewRow() For i As Integer = 0 To row.Cells.Count - 1 drSelParts(i) = row.Cells(i).Text Next Next End Sub When I run the partsearch.aspx page, I enter values in the row textbox for parts I want included in the listbox (to be included in quote). However, I receive the error message System.Index.OutOfRangeException: Cannot find column 1 which occurs on the line drSelParts(i) = row.Cells(i).Text. How might I correct the code and resolve the error? Thanks, Sid
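    The exception comes from the column setup: a single DataColumn is created and its ColumnName is reassigned three times, so dtSelParts ends up with exactly one column and drSelParts(1) cannot be found. The rows are also never added to the table. Here is a sketch of the intended shape, shown in C# for illustration:

        var dtSelParts = new DataTable("dtSelParts");
        dtSelParts.Columns.Add("PartName", typeof(string));
        dtSelParts.Columns.Add("PartNumber", typeof(string));
        dtSelParts.Columns.Add("Quantity", typeof(string));

        foreach (GridViewRow row in MySearch.Rows)
        {
            DataRow drSelParts = dtSelParts.NewRow();
            for (int i = 0; i < row.Cells.Count && i < dtSelParts.Columns.Count; i++)
            {
                drSelParts[i] = row.Cells[i].Text;
            }
            dtSelParts.Rows.Add(drSelParts);   // the posted code never adds the row
        }

    If the quantities are typed into TextBox controls inside the grid rather than shown as plain text, row.Cells[i].Text will be empty and the value has to be read from the control instead (for example via FindControl).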

    Read the article

  • ASP.NET Chart control - how to make it Smaller or add a scrollbar

    - by nCdy
    How to make the table column width inside the chart smaller so I can see more values and how to add some scrollbar to see the values that I can't see on right sight . There is no changes in ASP (just added this element) here is a method how I drawing this line : if (dt != null) // dt - my DataTable { string seriesName = "Graph"; Chart1.Series.Add(seriesName); Chart1.Series[seriesName].ChartType = SeriesChartType.Line; Chart1.Series[seriesName].BorderWidth = 3; foreach (DataRow row in dt.Rows) { string columnName = row[0].ToString(); try { double YVal = Convert.ToDouble(row[1]); Chart1.Series[seriesName].Points.AddXY(columnName, YVal); } catch (Exception) { Chart1.Series[seriesName].Points.AddXY(columnName, 0); } } }

    Read the article

  • set Enabled = "false" to radiobuttonlist then can not toggle enable/disable

    - by cindy
    I am able to use jquery toenable/disable radiobuttonlist based on the checkbox value. But the problem is that I want to disable radiobuttonlist at the first time. Then toggle its enable/disable by checkbox later. So I have But after I add: Enabled = "false" to my radiobuttonlist, the toggle of checkbox does not work. Here is my function to toggle : $(function() { function checkBoxClicked() { var isChecked = $(this).is(":checked"); var columnName = "rblColumn" + $(this).parent().attr("alt"); if (isChecked) { $("#" + columnName).removeAttr("disabled"); } else { $("#" + columnName).attr("disabled", "disabled"); } } //intercept any check box click event inside the #list Div $(":checkbox").click(checkBoxClicked); });

    Read the article

  • Sniffing out SQL Code Smells: Inconsistent use of Symbolic names and Datatypes

    - by Phil Factor
    It is an awkward feeling. You’ve just delivered a database application that seems to be working fine in production, and you just run a few checks on it. You discover that there is a potential bug that, out of sheer good chance, hasn’t kicked in to produce an error; but it lurks, like a smoking bomb. Worse, maybe you find that the bug has started its evil work of corrupting the data, but in ways that nobody has, so far detected. You investigate, and find the damage. You are somehow going to have to repair it. Yes, it still very occasionally happens to me. It is not a nice feeling, and I do anything I can to prevent it happening. That’s why I’m interested in SQL code smells. SQL Code Smells aren’t necessarily bad practices, but just show you where to focus your attention when checking an application. Sometimes with databases the bugs can be subtle. SQL is rather like HTML: the language does its best to try to carry out your wishes, rather than to be picky about your bugs. Most of the time, this is a great benefit, but not always. One particular place where this can be detrimental is where you have implicit conversion between different data types. Most of the time it is completely harmless but we’re  concerned about the occasional time it isn’t. Let’s give an example: String truncation. Let’s give another even more frightening one, rounding errors on assignment to a number of different precision. Each requires a blog-post to explain in detail and I’m not now going to try. Just remember that it is not always a good idea to assign data to variables, parameters or even columns when they aren’t the same datatype, especially if you are relying on implicit conversion to work its magic.For details of the problem and the consequences, see here:  SR0014: Data loss might occur when casting from {Type1} to {Type2} . For any experienced Database Developer, this is a more frightening read than a Vampire Story. This is why one of the SQL Code Smells that makes me edgy, in my own or other peoples’ code, is to see parameters, variables and columns that have the same names and different datatypes. Whereas quite a lot of this is perfectly normal and natural, you need to check in case one of two things have gone wrong. Either sloppy naming, or mixed datatypes. Sure it is hard to remember whether you decided that the length of a log entry was 80 or 100 characters long, or the precision of a number. That is why a little check like this I’m going to show you is excellent for tidying up your code before you check it back into source Control! 1/ Checking Parameters only If you were just going to check parameters, you might just do this. It simply groups all the parameters, either input or output, of all the routines (e.g. stored procedures or functions) by their name and checks to see, in the HAVING clause, whether their data types are all the same. If not, it lists all the examples and their origin (the routine) Even this little check can occasionally be scarily revealing. ;WITH userParameter AS  ( SELECT   c.NAME AS ParameterName,  OBJECT_SCHEMA_NAME(c.object_ID) + '.' 
+ OBJECT_NAME(c.object_ID) AS ObjectName,  t.name + ' '     + CASE     --we may have to put in the length            WHEN t.name IN ('char', 'varchar', 'nchar', 'nvarchar')             THEN '('               + CASE WHEN c.max_length = -1 THEN 'MAX'                ELSE CONVERT(VARCHAR(4),                    CASE WHEN t.name IN ('nchar', 'nvarchar')                      THEN c.max_length / 2 ELSE c.max_length                    END)                END + ')'         WHEN t.name IN ('decimal', 'numeric')             THEN '(' + CONVERT(VARCHAR(4), c.precision)                   + ',' + CONVERT(VARCHAR(4), c.Scale) + ')'         ELSE ''      END  --we've done with putting in the length      + CASE WHEN XML_collection_ID <> 0         THEN --deal with object schema names             '(' + CASE WHEN is_XML_Document = 1                    THEN 'DOCUMENT '                    ELSE 'CONTENT '                   END              + COALESCE(               (SELECT QUOTENAME(ss.name) + '.' + QUOTENAME(sc.name)                FROM sys.xml_schema_collections sc                INNER JOIN Sys.Schemas ss ON sc.schema_ID = ss.schema_ID                WHERE sc.xml_collection_ID = c.XML_collection_ID),'NULL') + ')'          ELSE ''         END        AS [DataType]  FROM sys.parameters c  INNER JOIN sys.types t ON c.user_Type_ID = t.user_Type_ID  WHERE OBJECT_SCHEMA_NAME(c.object_ID) <> 'sys'   AND parameter_id>0)SELECT CONVERT(CHAR(80),objectName+'.'+ParameterName),DataType FROM UserParameterWHERE ParameterName IN   (SELECT ParameterName FROM UserParameter    GROUP BY ParameterName    HAVING MIN(Datatype)<>MAX(DataType))ORDER BY ParameterName   so, in a very small example here, we have a @ClosingDelimiter variable that is only CHAR(1) when, by the looks of it, it should be up to ten characters long, or even worse, a function that should be a char(1) and seems to let in a string of ten characters. Worth investigating. Then we have a @Comment variable that can't decide whether it is a VARCHAR(2000) or a VARCHAR(MAX) 2/ Columns and Parameters Actually, once we’ve cleared up the mess we’ve made of our parameter-naming in the database we’re inspecting, we’re going to be more interested in listing both columns and parameters. We can do this by modifying the routine to list columns as well as parameters. Because of the slight complexity of creating the string version of the datatypes, we will create a fake table of both columns and parameters so that they can both be processed the same way. After all, we want the datatypes to match Unfortunately, parameters do not expose all the attributes we are interested in, such as whether they are nullable (oh yes, subtle bugs happen if this isn’t consistent for a datatype). We’ll have to leave them out for this check. Voila! A slight modification of the first routine ;WITH userObject AS  ( SELECT   Name AS DataName,--the actual name of the parameter or column ('@' removed)  --and the qualified object name of the routine  OBJECT_SCHEMA_NAME(ObjectID) + '.' + OBJECT_NAME(ObjectID) AS ObjectName,  --now the harder bit: the definition of the datatype.  TypeName + ' '     + CASE     --we may have to put in the length. e.g. 
CHAR (10)           WHEN TypeName IN ('char', 'varchar', 'nchar', 'nvarchar')             THEN '('               + CASE WHEN MaxLength = -1 THEN 'MAX'                ELSE CONVERT(VARCHAR(4),                    CASE WHEN TypeName IN ('nchar', 'nvarchar')                      THEN MaxLength / 2 ELSE MaxLength                    END)                END + ')'         WHEN TypeName IN ('decimal', 'numeric')--a BCD number!             THEN '(' + CONVERT(VARCHAR(4), Precision)                   + ',' + CONVERT(VARCHAR(4), Scale) + ')'         ELSE ''      END  --we've done with putting in the length      + CASE WHEN XML_collection_ID <> 0 --tush tush. XML         THEN --deal with object schema names             '(' + CASE WHEN is_XML_Document = 1                    THEN 'DOCUMENT '                    ELSE 'CONTENT '                   END              + COALESCE(               (SELECT TOP 1 QUOTENAME(ss.name) + '.' + QUOTENAME(sc.Name)                FROM sys.xml_schema_collections sc                INNER JOIN Sys.Schemas ss ON sc.schema_ID = ss.schema_ID                WHERE sc.xml_collection_ID = XML_collection_ID),'NULL') + ')'          ELSE ''         END        AS [DataType],       DataObjectType  FROM   (Select t.name AS TypeName, REPLACE(c.name,'@','') AS Name,          c.max_length AS MaxLength, c.precision AS [Precision],           c.scale AS [Scale], c.[Object_id] AS ObjectID, XML_collection_ID,          is_XML_Document,'P' AS DataobjectType  FROM sys.parameters c  INNER JOIN sys.types t ON c.user_Type_ID = t.user_Type_ID  AND parameter_id>0  UNION all  Select t.name AS TypeName, c.name AS Name, c.max_length AS MaxLength,          c.precision AS [Precision], c.scale AS [Scale],          c.[Object_id] AS ObjectID, XML_collection_ID,is_XML_Document,          'C' AS DataobjectType            FROM sys.columns c  INNER JOIN sys.types t ON c.user_Type_ID = t.user_Type_ID   WHERE OBJECT_SCHEMA_NAME(c.object_ID) <> 'sys'  )f)SELECT CONVERT(CHAR(80),objectName+'.'   + CASE WHEN DataobjectType ='P' THEN '@' ELSE '' END + DataName),DataType FROM UserObjectWHERE DataName IN   (SELECT DataName FROM UserObject   GROUP BY DataName    HAVING MIN(Datatype)<>MAX(DataType))ORDER BY DataName     Hmm. I can tell you I found quite a few minor issues with the various tabases I tested this on, and found some potential bugs that really leap out at you from the results. Here is the start of the result for AdventureWorks. Yes, AccountNumber is, for some reason, a Varchar(10) in the Customer table. Hmm. odd. Why is a city fifty characters long in that view?  The idea of the description of a colour being 256 characters long seems over-ambitious. Go down the list and you'll spot other mistakes. There are no bugs, but just mess. We started out with a listing to examine parameters, then we mixed parameters and columns. Our last listing is for a slightly more in-depth look at table columns. You’ll notice that we’ve delibarately removed the indication of whether a column is persisted, or is an identity column because that gives us false positives for our code smells. If you just want to browse your metadata for other reasons (and it can quite help in some circumstances) then uncomment them! ;WITH userColumns AS  ( SELECT   c.NAME AS columnName,  OBJECT_SCHEMA_NAME(c.object_ID) + '.' 
+ OBJECT_NAME(c.object_ID) AS ObjectName,  REPLACE(t.name + ' '   + CASE WHEN is_computed = 1 THEN ' AS ' + --do DDL for a computed column          (SELECT definition FROM sys.computed_columns cc           WHERE cc.object_id = c.object_id AND cc.column_ID = c.column_ID)     --we may have to put in the length            WHEN t.Name IN ('char', 'varchar', 'nchar', 'nvarchar')             THEN '('               + CASE WHEN c.Max_Length = -1 THEN 'MAX'                ELSE CONVERT(VARCHAR(4),                    CASE WHEN t.Name IN ('nchar', 'nvarchar')                      THEN c.Max_Length / 2 ELSE c.Max_Length                    END)                END + ')'       WHEN t.name IN ('decimal', 'numeric')       THEN '(' + CONVERT(VARCHAR(4), c.precision) + ',' + CONVERT(VARCHAR(4), c.Scale) + ')'       ELSE ''      END + CASE WHEN c.is_rowguidcol = 1          THEN ' ROWGUIDCOL'          ELSE ''         END + CASE WHEN XML_collection_ID <> 0            THEN --deal with object schema names             '(' + CASE WHEN is_XML_Document = 1                THEN 'DOCUMENT '                ELSE 'CONTENT '               END + COALESCE((SELECT                QUOTENAME(ss.name) + '.' + QUOTENAME(sc.name)                FROM                sys.xml_schema_collections sc                INNER JOIN Sys.Schemas ss ON sc.schema_ID = ss.schema_ID                WHERE                sc.xml_collection_ID = c.XML_collection_ID),                'NULL') + ')'            ELSE ''           END + CASE WHEN is_identity = 1             THEN CASE WHEN OBJECTPROPERTY(object_id,                'IsUserTable') = 1 AND COLUMNPROPERTY(object_id,                c.name,                'IsIDNotForRepl') = 0 AND OBJECTPROPERTY(object_id,                'IsMSShipped') = 0                THEN ''                ELSE ' NOT FOR REPLICATION '               END             ELSE ''            END + CASE WHEN c.is_nullable = 0               THEN ' NOT NULL'               ELSE ' NULL'              END + CASE                WHEN c.default_object_id <> 0                THEN ' DEFAULT ' + object_Definition(c.default_object_id)                ELSE ''               END + CASE                WHEN c.collation_name IS NULL                THEN ''                WHEN c.collation_name <> (SELECT                collation_name                FROM                sys.databases                WHERE                name = DB_NAME()) COLLATE Latin1_General_CI_AS                THEN COALESCE(' COLLATE ' + c.collation_name,                '')                ELSE ''                END,'  ',' ') AS [DataType]FROM sys.columns c  INNER JOIN sys.types t ON c.user_Type_ID = t.user_Type_ID  WHERE OBJECT_SCHEMA_NAME(c.object_ID) <> 'sys')SELECT CONVERT(CHAR(80),objectName+'.'+columnName),DataType FROM UserColumnsWHERE columnName IN (SELECT columnName FROM UserColumns  GROUP BY columnName  HAVING MIN(Datatype)<>MAX(DataType))ORDER BY columnName If you take a look down the results against Adventureworks, you'll see once again that there are things to investigate, mostly, in the illustration, discrepancies between null and non-null datatypes So I here you ask, what about temporary variables within routines? If ever there was a source of elusive bugs, you'll find it there. Sadly, these temporary variables are not stored in the metadata so we'll have to find a more subtle way of flushing these out, and that will, I'm afraid, have to wait!

    Read the article

  • jquery get radiobuttonlist by name dynamically

    - by Cindy
    I have two radiobuttonlist and one checkboxlist on the page. Ideally based on the checkbox selected value, I want to enable/disable corresponding radibuttonlist with jquery function. But some how $("input[name*=" + columnName + "]") always return null. It can not find the radiobuttonlist by its name? $(function() { function checkBoxClicked() { var isChecked = $(this).is(":checked"); var columnName = "rblColumn" + $(this).parent().attr("alt"); if (isChecked) { $("input[name*=" + columnName + "]").removeAttr("disabled"); } else { $("input[name*=" + columnName + "]").attr("disabled", "disabled"); $("input[name*=" + columnName + "] input").each(function() { $(this).attr("checked", "") }); } } //intercept any check box click event inside the #list Div $(":checkbox").click(checkBoxClicked); }); <asp:Panel ID="TestPanel" runat="server"> <asp:CheckBoxList ID = "chkColumn" runat="server" RepeatDirection="Horizontal"> <asp:ListItem id = "Column1" runat="server" Text="Column 1" Value="1" alt="1" class="HeadColumn" /> <asp:ListItem id = "Column2" runat="server" Text="Column 2" Value="2" alt="2" class="HeadColumn"/> </asp:CheckBoxList> <table> <tr> <td> <asp:RadioButtonList ID = "rblColumn1" runat="server" RepeatDirection="Vertical" disabled="disabled"> <asp:ListItem id="liColumn1p" runat="server" /> <asp:ListItem id="liColumn1n" runat="server" /> </asp:RadioButtonList> </td> <td> <asp:RadioButtonList ID = "rblColumn2" runat="server" RepeatDirection="Vertical" disabled="disabled"> <asp:ListItem id="liColumn2p" runat="server" /> <asp:ListItem id="liColumn2n" runat="server" /> </asp:RadioButtonList> </td> </tr> </table> </asp:Panel> source: <div id="TestPanel"> <table id="chkColumn" border="0"> <tr> <td><span id="Column1" alt="1" class="HeadColumn"><input id="chkColumn_0" type="checkbox" name="chkColumn$0" /><label for="chkColumn_0">Column 1</label></span></td><td><span id="Column2" alt="2" class="HeadColumn"><input id="chkColumn_1" type="checkbox" name="chkColumn$1" /><label for="chkColumn_1">Column 2</label></span></td> </tr> </table> <table> <tr> <td> <table id="rblColumn1" class="myRadioButtonList" disabled="disabled" border="0"> <tr> <td><span id="liColumn1p"><input id="rblColumn1_0" type="radio" name="rblColumn1" value="" /></span></td> </tr><tr> <td><span id="liColumn1n"><input id="rblColumn1_1" type="radio" name="rblColumn1" value="" /></span></td> </tr> </table> </td> <td> <table id="rblColumn2" class="myRadioButtonList" disabled="disabled" border="0"> <tr> <td><span id="liColumn2p"><input id="rblColumn2_0" type="radio" name="rblColumn2" value="" /></span></td> </tr><tr> <td><span id="liColumn2n"><input id="rblColumn2_1" type="radio" name="rblColumn2" value="" /></span></td> </tr> </table> </td> </tr> </table> </div>

    Read the article

  • Defining an Entity Framework 1:1 association

    - by Craig Fisher
    I'm trying to define a 1:1 association between two entities (one maps to a table and the other to a view - using DefinedQuery) in an Entity Framework model. When trying to define the mapping for this in the designer, it makes me choose the (1) table or view to map the association to. What am I supposed to choose? I can choose either of the two tables but then I am forced to choose a column from that table (or view) for each end of the relationship. I would expect to be able to choose a column from one table for one end of the association and a column from the other table for the other end of the association, but there's no way to do this. Here I've chosen to map to the "DW_ WF_ClaimInfo" view and it is forcing me to choose two columns from that view - one for each end of the relationship. I've also tried defining the mapping manually in the XML as follows: <AssociationSetMapping Name="Entity1Entity2" TypeName="ClaimsModel.Entity1Entity2" StoreEntitySet="Entity1"> <EndProperty Name="Entity2"> <ScalarProperty Name="DOCUMENT" ColumnName="DOCUMENT" /> </EndProperty> <EndProperty Name="Entity1"> <ScalarProperty Name="PK_DocumentId" ColumnName="PK_DocumentId" /> </EndProperty> </AssociationSetMapping> But this gives: Error 2010: The Column 'DOCUMENT' specified as part of this MSL does not exist in MetadataWorkspace. Seems like it still expects both columns to come from the same table, which doesn't make sense to me. Furthermore, if I select the same key for each end, e.g.: <AssociationSetMapping Name="Entity1Entity2" TypeName="ClaimsModel.Entity1Entity2" StoreEntitySet="Entity1"> <EndProperty Name="Entity2"> <ScalarProperty Name="DOCUMENT" ColumnName="PK_DocumentId" /> </EndProperty> <EndProperty Name="Entity1"> <ScalarProperty Name="PK_DocumentId" ColumnName="PK_DocumentId" /> </EndProperty> </AssociationSetMapping> I then get: Error 3021: Problem in Mapping Fragment starting at line 675: Each of the following columns in table AssignedClaims is mapped to multiple conceptual side properties: AssignedClaims.PK_DocumentId is mapped to <AssignedClaimDW_WF_ClaimInfo.DW_WF_ClaimInfo.DOCUMENT, AssignedClaimDW_WF_ClaimInfo.AssignedClaim.PK_DocumentId> What am I not getting?

    Read the article

  • MySQL: Which is faster — INSTR or LIKE?

    - by Grekker
    If your goal is to test if a string exists in a MySQL column (of type 'varchar', 'text', 'blob', etc) which of the following is faster / more efficient / better to use, and why? Or, is there some other method that tops either of these? INSTR( columnname, 'mystring' ) > 0 vs columnname LIKE '%mystring%'

    Read the article

  • Why was this T-SQL Syntax never implemented?

    - by ChrisA
    Why did they never let us do this sort of thing: Create Proc RunParameterisedSelect @tableName varchar(100), @columnName varchar(100), @value varchar(100) as select * from @tableName where @columnName = @value You can use @value as a parameter, obviously, and you can achieve the whole thing with dynamic SQL, but creating it is invariably a pain. So why didn't they make it part of the language in some way, rather than forcing you to EXEC(@sql)?
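    Part of the reason is that parameters can only ever stand in for values: a plan is compiled against known tables and columns, so identifiers cannot be late-bound the way @value can. From client code the usual compromise is to validate and quote the identifiers and parameterize only the value. A sketch follows; it assumes tableName and columnName have already been checked against a whitelist of known objects:

        using System.Data;
        using System.Data.SqlClient;

        static DataTable RunParameterisedSelect(
            SqlConnection conn, string tableName, string columnName, string value)
        {
            // Bracket-quote the identifiers; only the value is a real parameter.
            var builder = new SqlCommandBuilder { QuotePrefix = "[", QuoteSuffix = "]" };
            string sql = "SELECT * FROM " + builder.QuoteIdentifier(tableName) +
                         " WHERE " + builder.QuoteIdentifier(columnName) + " = @value";

            using (var cmd = new SqlCommand(sql, conn))
            {
                cmd.Parameters.AddWithValue("@value", value);
                var result = new DataTable();
                using (var adapter = new SqlDataAdapter(cmd))
                {
                    adapter.Fill(result);
                }
                return result;
            }
        }

    Server side, the same idea is QUOTENAME() around the identifiers plus sp_executesql with @value passed as a genuine parameter, which is what the EXEC(@sql) approach usually boils down to anyway.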

    Read the article

  • Grid View Button Passing Data Via On Click

    - by flyersun
    Hi, I'm pretty new to C# and asp.net so aplogies if this is a really stupid question. I'm using a grid view to display a number of records from a database. Each row has an Edit Button. When the button is clicked I want an ID to be passed back to a funtion in my .cs file. How do I bind the rowID to the Button field? I've tired using a hyper link instead but this doens't seem to work because I'm posting back to the same page which already has a Permanter on the URL. asp.net <asp:GridView ID="gvAddresses" runat="server" onrowcommand="Edit_Row"> <Columns> <asp:ButtonField runat="server" ButtonType="Button" Text="Edit"> </Columns> </asp:GridView> c# int ImplantID = Convert.ToInt32(Request.QueryString["ImplantID"]); Session.Add("ImplantID", ImplantID); List<GetImplantDetails> DataObject = ImplantDetails(ImplantID); System.Data.DataSet DSImplant = new DataSet(); System.Data.DataTable DTImplant = new DataTable("Implant"); DSImplant.Tables.Add(DTImplant); DataColumn ColPostCode = new DataColumn(); ColPostCode.ColumnName = "PostCode"; ColPostCode.DataType = typeof(string); DTImplant.Columns.Add(ColPostCode); DataColumn ColConsigneeName = new DataColumn(); ColConsigneeName.ColumnName = "Consignee Name"; ColConsigneeName.DataType = typeof(string); DTImplant.Columns.Add(ColConsigneeName); DataColumn ColIsPrimaryAddress = new DataColumn(); ColIsPrimaryAddress.ColumnName = "Primary"; ColIsPrimaryAddress.DataType = typeof(int); DTImplant.Columns.Add(ColIsPrimaryAddress); DataColumn ColImplantCustomerDetailsID = new DataColumn(); ColImplantCustomerDetailsID.ColumnName = "Implant ID"; ColImplantCustomerDetailsID.DataType = typeof(int); DTImplant.Columns.Add(ColImplantCustomerDetailsID); foreach (GetImplantDetails Object in DataObject) { DataRow DRImplant = DTImplant.NewRow(); DRImplant["PostCode"] = Object.GetPostCode(); DRImplant["Consignee Name"] = Object.GetConsigneeName(); DRImplant["Primary"] = Object.GetIsPrimaryAddress(); DRImplant["Implant ID"] = Object.GeTImplantCustomerDetailsID(); DTImplant.Rows.Add(DRImplant); <--- this is what I need to be added to the button } gvAddresses.DataSource = DTImplant; gvAddresses.DataBind();
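    One common way to get the ID back out without a hyperlink is to let the grid carry it. The sketch below assumes DataKeyNames="Implant ID" is set on gvAddresses and the ButtonField is given CommandName="EditRow"; a ButtonField automatically passes the row index as CommandArgument, and DataKeys turns that index back into the key value:

        protected void Edit_Row(object sender, GridViewCommandEventArgs e)
        {
            if (e.CommandName == "EditRow")
            {
                // For a ButtonField the CommandArgument is the row index.
                int rowIndex = Convert.ToInt32(e.CommandArgument);

                // DataKeys holds the "Implant ID" value for that row.
                int implantId = Convert.ToInt32(gvAddresses.DataKeys[rowIndex].Value);

                // ...load or edit the record for implantId here.
            }
        }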

    Read the article

  • Using Table-Valued Parameters in SQL Server

    - by Jesse
    I work with stored procedures in SQL Server pretty frequently and have often found myself with a need to pass in a list of values at run-time. Quite often this list contains a set of ids on which the stored procedure needs to operate the size and contents of which are not known at design time. In the past I’ve taken the collection of ids (which are usually integers), converted them to a string representation where each value is separated by a comma and passed that string into a VARCHAR parameter of a stored procedure. The body of the stored procedure would then need to parse that string into a table variable which could be easily consumed with set-based logic within the rest of the stored procedure. This approach works pretty well but the VARCHAR variable has always felt like an un-wanted “middle man” in this scenario. Of course, I could use a BULK INSERT operation to load the list of ids into a temporary table that the stored procedure could use, but that approach seems heavy-handed in situations where the list of values is usually going to contain only a few dozen values. Fortunately SQL Server 2008 introduced the concept of table-valued parameters which effectively eliminates the need for the clumsy middle man VARCHAR parameter. Example: Customer Transaction Summary Report Let’s say we have a report that can summarize the the transactions that we’ve conducted with customers over a period of time. The report returns a pretty simple dataset containing one row per customer with some key metrics about how much business that customer has conducted over the date range for which the report is being run. Sometimes the report is run for a single customer, sometimes it’s run for all customers, and sometimes it’s run for a handful of customers (i.e. a salesman runs it for the customers that fall into his sales territory). This report can be invoked from a website on-demand, or it can be scheduled for periodic delivery to certain users via SQL Server Reporting Services. Because the report can be created from different places and the query to generate the report is complex it’s been packed into a stored procedure that accepts three parameters: @startDate – The beginning of the date range for which the report should be run. @endDate – The end of the date range for which the report should be run. @customerIds – The customer Ids for which the report should be run. Obviously, the @startDate and @endDate parameters are DATETIME variables. The @customerIds parameter, however, needs to contain a list of the identity values (primary key) from the Customers table representing the customers that were selected for this particular run of the report. In prior versions of SQL Server we might have made this parameter a VARCHAR variable, but with SQL Server 2008 we can make it into a table-valued parameter. Defining And Using The Table Type In order to use a table-valued parameter, we first need to tell SQL Server about what the table will look like. We do this by creating a user defined type. For the purposes of this stored procedure we need a very simple type to model a table variable with a single integer column. We can create a generic type called ‘IntegerListTableType’ like this: CREATE TYPE IntegerListTableType AS TABLE (Value INT NOT NULL) Once defined, we can use this new type to define the @customerIds parameter in the signature of our stored procedure. 
The parameter list for the stored procedure definition might look like: 1: CREATE PROCEDURE dbo.rpt_CustomerTransactionSummary 2: @starDate datetime, 3: @endDate datetime, 4: @customerIds IntegerListTableTableType READONLY   Note the ‘READONLY’ statement following the declaration of the @customerIds parameter. SQL Server requires any table-valued parameter be marked as ‘READONLY’ and no DML (INSERT/UPDATE/DELETE) statements can be performed on a table-valued parameter within the routine in which it’s used. Aside from the DML restriction, however, you can do pretty much anything with a table-valued parameter as you could with a normal TABLE variable. With the user defined type and stored procedure defined as above, we could invoke like this: 1: DECLARE @cusomterIdList IntegerListTableType 2: INSERT @customerIdList VALUES (1) 3: INSERT @customerIdList VALUES (2) 4: INSERT @customerIdList VALUES (3) 5:  6: EXEC dbo.rpt_CustomerTransationSummary 7: @startDate = '2012-05-01', 8: @endDate = '2012-06-01' 9: @customerIds = @customerIdList   Note that we can simply declare a variable of type ‘IntegerListTableType’ just like any other normal variable and insert values into it just like a TABLE variable. We could also populate the variable with a SELECT … INTO or INSERT … SELECT statement if desired. Using The Table-Valued Parameter With ADO .NET Invoking a stored procedure with a table-valued parameter from ADO .NET is as simple as building a DataTable and passing it in as the Value of a SqlParameter. Here’s some example code for how we would construct the SqlParameter for the @customerIds parameter in our stored procedure: 1: var customerIdsParameter = new SqlParameter(); 2: customerIdParameter.Direction = ParameterDirection.Input; 3: customerIdParameter.TypeName = "IntegerListTableType"; 4: customerIdParameter.Value = selectedCustomerIds.ToIntegerListDataTable("Value");   All we’re doing here is new’ing up an instance of SqlParameter, setting the pamameters direction, specifying the name of the User Defined Type that this parameter uses, and setting its value. We’re assuming here that we have an IEnumerable<int> variable called ‘selectedCustomerIds’ containing all of the customer Ids for which the report should be run. The ‘ToIntegerListDataTable’ method is an extension method of the IEnumerable<int> type that looks like this: 1: public static DataTable ToIntegerListDataTable(this IEnumerable<int> intValues, string columnName) 2: { 3: var intergerListDataTable = new DataTable(); 4: intergerListDataTable.Columns.Add(columnName); 5: foreach(var intValue in intValues) 6: { 7: var nextRow = intergerListDataTable.NewRow(); 8: nextRow[columnName] = intValue; 9: intergerListDataTable.Rows.Add(nextRow); 10: } 11:  12: return intergerListDataTable; 13: }   Since the ‘IntegerListTableType’ has a single int column called ‘Value’, we pass that in for the ‘columnName’ parameter to the extension method. The method creates a new single-columned DataTable using the provided column name then iterates over the items in the IEnumerable<int> instance adding one row for each value. We can then use this SqlParameter instance when invoking the stored procedure just like we would use any other parameter. Advanced Functionality Using passing a list of integers into a stored procedure is a very simple usage scenario for the table-valued parameters feature, but I’ve found that it covers the majority of situations where I’ve needed to pass a collection of data for use in a query at run-time. 
I should note that BULK INSERT feature still makes sense for passing large amounts of data to SQL Server for processing. MSDN seems to suggest that 1000 rows of data is the tipping point where the overhead of a BULK INSERT operation can pay dividends. I should also note here that table-valued parameters can be used to deal with more complex data structures than single-columned tables of integers. A User Defined Type that backs a table-valued parameter can use things like identities and computed columns. That said, using some of these more advanced features might require the use the SqlDataRecord and SqlMetaData classes instead of a simple DataTable. Erland Sommarskog has a great article on his website that describes when and how to use these classes for table-valued parameters. What About Reporting Services? Earlier in the post I referenced the fact that our example stored procedure would be called from both a web application and a SQL Server Reporting Services report. Unfortunately, using table-valued parameters from SSRS reports can be a bit tricky and warrants its own blog post which I’ll be putting together and posting sometime in the near future.
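    Coming back to the ADO .NET snippet above, a fuller invocation sketch may help tie the pieces together; it assumes an open SqlConnection named conn and the ToIntegerListDataTable extension method from the article, and it is explicit about CommandType and SqlDbType.Structured:

        using (var cmd = new SqlCommand("dbo.rpt_CustomerTransactionSummary", conn))
        {
            cmd.CommandType = CommandType.StoredProcedure;
            cmd.Parameters.AddWithValue("@startDate", new DateTime(2012, 5, 1));
            cmd.Parameters.AddWithValue("@endDate", new DateTime(2012, 6, 1));

            var customerIdsParameter = cmd.Parameters.AddWithValue(
                "@customerIds", selectedCustomerIds.ToIntegerListDataTable("Value"));
            customerIdsParameter.SqlDbType = SqlDbType.Structured;      // table-valued parameter
            customerIdsParameter.TypeName = "dbo.IntegerListTableType"; // the user defined type

            using (var reader = cmd.ExecuteReader())
            {
                while (reader.Read())
                {
                    // consume one summary row per customer...
                }
            }
        }

    If the user defined type lives in a schema other than dbo, the TypeName needs to be schema-qualified accordingly.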

    Read the article

  • What is the best way to provide an AutoMappingOverride for an interface in FluentNHibernate automapping

    - by Tom
    In my quest for a version-wide database filter for an application, I have written the following code:

        using System;
        using System.Collections.Generic;
        using System.Linq;
        using System.Text;
        using FluentNHibernate.Automapping;
        using FluentNHibernate.Automapping.Alterations;
        using FluentNHibernate.Mapping;
        using MvcExtensions.Model;
        using NHibernate;

        namespace MvcExtensions.Services.Impl.FluentNHibernate
        {
            public interface IVersionAware
            {
                string Version { get; set; }
            }

            public class VersionFilter : FilterDefinition
            {
                const string FILTERNAME = "MyVersionFilter";
                const string COLUMNNAME = "Version";

                public VersionFilter()
                {
                    this.WithName(FILTERNAME)
                        .WithCondition("Version = :" + COLUMNNAME)
                        .AddParameter(COLUMNNAME, NHibernateUtil.String);
                }

                public static void EnableVersionFilter(ISession session, string version)
                {
                    session.EnableFilter(FILTERNAME).SetParameter(COLUMNNAME, version);
                }

                public static void DisableVersionFilter(ISession session)
                {
                    session.DisableFilter(FILTERNAME);
                }
            }

            public class VersionAwareOverride : IAutoMappingOverride<IVersionAware>
            {
                #region IAutoMappingOverride<IVersionAware> Members

                public void Override(AutoMapping<IVersionAware> mapping)
                {
                    mapping.ApplyFilter<VersionFilter>();
                }

                #endregion
            }
        }

    But, since overrides do not work on interfaces, I am looking for a way to implement this. Currently I'm using this (rather cumbersome) way for each class that implements the interface:

        public class SomeVersionedEntity : IModelId, IVersionAware
        {
            public virtual int Id { get; set; }
            public virtual string Version { get; set; }
        }

        public class SomeVersionedEntityOverride : IAutoMappingOverride<SomeVersionedEntity>
        {
            #region IAutoMappingOverride<SomeVersionedEntity> Members

            public void Override(AutoMapping<SomeVersionedEntity> mapping)
            {
                mapping.ApplyFilter<VersionFilter>();
            }

            #endregion
        }

    I have been looking at the IClassMap interfaces etc., but they do not seem to provide a way to access the ApplyFilter method, so I have not got a clue here. Since I am probably not the first one who has this problem, I am quite sure that it should be possible; I am just not quite sure how this works.

    EDIT: I have gotten a bit closer to a generic solution. This is the way I tried to solve it: using a generic class to implement alterations to classes implementing an interface:

        public abstract class AutomappingInterfaceAlteration<I> : IAutoMappingAlteration
        {
            public void Alter(AutoPersistenceModel model)
            {
                model.OverrideAll(map =>
                {
                    var recordType = map.GetType().GetGenericArguments().Single();
                    if (typeof(I).IsAssignableFrom(recordType))
                    {
                        this.GetType().GetMethod("overrideStuff").MakeGenericMethod(recordType).Invoke(this, new object[] { model });
                    }
                });
            }

            public void overrideStuff<T>(AutoPersistenceModel pm) where T : I
            {
                pm.Override<T>(a => Override(a));
            }

            public abstract void Override<T>(AutoMapping<T> am) where T : I;
        }

    And a specific implementation:

        public class VersionAwareAlteration : AutomappingInterfaceAlteration<IVersionAware>
        {
            public override void Override<T>(AutoMapping<T> am)
            {
                am.Map(x => x.Version).Column("VersionTest");
                am.ApplyFilter<VersionFilter>();
            }
        }

    Unfortunately I get the following error now:

        [InvalidOperationException: Collection was modified; enumeration operation may not execute.]
        System.ThrowHelper.ThrowInvalidOperationException(ExceptionResource resource) +51
        System.Collections.Generic.Enumerator.MoveNextRare() +7661017
        System.Collections.Generic.Enumerator.MoveNext() +61
        System.Linq.WhereListIterator`1.MoveNext() +156
        FluentNHibernate.Utils.CollectionExtensions.Each(IEnumerable`1 enumerable, Action`1 each) +239
        FluentNHibernate.Automapping.AutoMapper.ApplyOverrides(Type classType, IList`1 mappedProperties, ClassMappingBase mapping) +345
        FluentNHibernate.Automapping.AutoMapper.MergeMap(Type classType, ClassMappingBase mapping, IList`1 mappedProperties) +43
        FluentNHibernate.Automapping.AutoMapper.Map(Type classType, List`1 types) +566
        FluentNHibernate.Automapping.AutoPersistenceModel.AddMapping(Type type) +85
        FluentNHibernate.Automapping.AutoPersistenceModel.CompileMappings() +746

    EDIT 2: I managed to get a bit further; I now invoke "Override" using reflection for each class that implements the interface:

        public abstract class PersistenceOverride<I>
        {
            public void DoOverrides(AutoPersistenceModel model, IEnumerable<Type> Mytypes)
            {
                foreach (var t in Mytypes.Where(x => typeof(I).IsAssignableFrom(x)))
                    ManualOverride(t, model);
            }

            private void ManualOverride(Type recordType, AutoPersistenceModel model)
            {
                var t_amt = typeof(AutoMapping<>).MakeGenericType(recordType);
                var t_act = typeof(Action<>).MakeGenericType(t_amt);
                var m = typeof(PersistenceOverride<I>)
                    .GetMethod("MyOverride")
                    .MakeGenericMethod(recordType)
                    .Invoke(this, null);
                model.GetType().GetMethod("Override").MakeGenericMethod(recordType).Invoke(model, new object[] { m });
            }

            public abstract Action<AutoMapping<T>> MyOverride<T>() where T : I;
        }

        public class VersionAwareOverride : PersistenceOverride<IVersionAware>
        {
            public override Action<AutoMapping<T>> MyOverride<T>()
            {
                return am =>
                {
                    am.Map(x => x.Version).Column(VersionFilter.COLUMNNAME);
                    am.ApplyFilter<VersionFilter>();
                };
            }
        }

    However, for one reason or another my generated hbm files do not contain any "filter" fields. Maybe somebody could help me a bit further now?
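    A minimal, untested sketch of a lower-tech alternative to the reflection-based approaches above, reusing only the types already shown in the question (IAutoMappingOverride<T>, AutoMapping<T>, IVersionAware, VersionFilter): push the duplicated override body into an abstract generic base and keep one empty override class per versioned entity.

        // Sketch only: assumes overrides are discovered the same way the existing
        // per-class overrides are (e.g. via UseOverridesFromAssemblyOf<...>).
        public abstract class VersionFilterOverride<T> : IAutoMappingOverride<T>
            where T : IVersionAware
        {
            public void Override(AutoMapping<T> mapping)
            {
                // Same body the hand-written overrides use today.
                mapping.ApplyFilter<VersionFilter>();
            }
        }

        // One empty subclass per entity; the filter wiring lives in the base class.
        public class SomeVersionedEntityOverride : VersionFilterOverride<SomeVersionedEntity> { }

    This does not remove the need for one class per entity, but it keeps the filter logic in a single place.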

    Read the article

  • Ado.Net Entity produces "namespace cannot be found"

    - by Dave
    I've seen several possible solutions to this, but none have worked for me. After adding an ADO.NET Entity Data Model to my .NET Forms C# web project, I am unable to use it. Perhaps I made a mistake adding it? The name of the file added is QcFormData.edmx. In my code, perhaps I'm instantiating it incorrectly? I tried adding the line:

        QcFormDataContainer db = new QcFormDataContainer();

    It appears in IntelliSense, but when compiling I get the error:

        Error 13 The type or namespace name 'QcFormDataContainer' could not be found (are you missing a using directive or an assembly reference?)

    I've followed the suggestions that I found online, but they did not help:
    1) made sure there is "using System.Data.Entity";
    2) made sure the dll exists;
    3) made sure the reference exists;
    4) one post said to use "using System.Web.Data.Entity", but I do not see that available.

    What am I missing?

    QcFormData.edmx:

        <?xml version="1.0" encoding="utf-8"?>
        <edmx:Edmx Version="3.0" xmlns:edmx="http://schemas.microsoft.com/ado/2009/11/edmx">
          <!-- EF Runtime content -->
          <edmx:Runtime>
            <!-- SSDL content -->
            <edmx:StorageModels>
              <Schema Namespace="MyCocoModel.Store" Alias="Self" Provider="System.Data.SqlClient" ProviderManifestToken="2008" xmlns:store="http://schemas.microsoft.com/ado/2007/12/edm/EntityStoreSchemaGenerator" xmlns="http://schemas.microsoft.com/ado/2009/11/edm/ssdl">
                <EntityContainer Name="MyCocoModelStoreContainer">
                  <EntitySet Name="QcFieldValues" EntityType="MyCocoModel.Store.QcFieldValues" store:Type="Tables" Schema="dbo" />
                </EntityContainer>
                <EntityType Name="QcFieldValues">
                  <Key>
                    <PropertyRef Name="ID" />
                  </Key>
                  <Property Name="ID" Type="int" Nullable="false" StoreGeneratedPattern="Identity" />
                  <Property Name="FieldID" Type="nvarchar" MaxLength="100" />
                  <Property Name="FieldValue" Type="nvarchar" MaxLength="100" />
                  <Property Name="DateTimeAdded" Type="datetime" />
                  <Property Name="OrderReserveNumber" Type="nvarchar" MaxLength="50" />
                </EntityType>
              </Schema>
            </edmx:StorageModels>
            <!-- CSDL content -->
            <edmx:ConceptualModels>
              <Schema Namespace="MyCocoModel" Alias="Self" p1:UseStrongSpatialTypes="false" xmlns:annotation="http://schemas.microsoft.com/ado/2009/02/edm/annotation" xmlns:p1="http://schemas.microsoft.com/ado/2009/02/edm/annotation" xmlns="http://schemas.microsoft.com/ado/2009/11/edm">
                <EntityContainer Name="MyCocoEntities" p1:LazyLoadingEnabled="true">
                  <EntitySet Name="QcFieldValues" EntityType="MyCocoModel.QcFieldValue" />
                </EntityContainer>
                <EntityType Name="QcFieldValue">
                  <Key>
                    <PropertyRef Name="ID" />
                  </Key>
                  <Property Name="ID" Type="Int32" Nullable="false" p1:StoreGeneratedPattern="Identity" />
                  <Property Name="FieldID" Type="String" MaxLength="100" Unicode="true" FixedLength="false" />
                  <Property Name="FieldValue" Type="String" MaxLength="100" Unicode="true" FixedLength="false" />
                  <Property Name="DateTimeAdded" Type="DateTime" Precision="3" />
                  <Property Name="OrderReserveNumber" Type="String" MaxLength="50" Unicode="true" FixedLength="false" />
                </EntityType>
              </Schema>
            </edmx:ConceptualModels>
            <!-- C-S mapping content -->
            <edmx:Mappings>
              <Mapping Space="C-S" xmlns="http://schemas.microsoft.com/ado/2009/11/mapping/cs">
                <EntityContainerMapping StorageEntityContainer="MyCocoModelStoreContainer" CdmEntityContainer="MyCocoEntities">
                  <EntitySetMapping Name="QcFieldValues">
                    <EntityTypeMapping TypeName="MyCocoModel.QcFieldValue">
                      <MappingFragment StoreEntitySet="QcFieldValues">
                        <ScalarProperty Name="ID" ColumnName="ID" />
                        <ScalarProperty Name="FieldID" ColumnName="FieldID" />
                        <ScalarProperty Name="FieldValue" ColumnName="FieldValue" />
                        <ScalarProperty Name="DateTimeAdded" ColumnName="DateTimeAdded" />
                        <ScalarProperty Name="OrderReserveNumber" ColumnName="OrderReserveNumber" />
                      </MappingFragment>
                    </EntityTypeMapping>
                  </EntitySetMapping>
                </EntityContainerMapping>
              </Mapping>
            </edmx:Mappings>
          </edmx:Runtime>
          <!-- EF Designer content (DO NOT EDIT MANUALLY BELOW HERE) -->
          <Designer xmlns="http://schemas.microsoft.com/ado/2009/11/edmx">
            <Connection>
              <DesignerInfoPropertySet>
                <DesignerProperty Name="MetadataArtifactProcessing" Value="EmbedInOutputAssembly" />
              </DesignerInfoPropertySet>
            </Connection>
            <Options>
              <DesignerInfoPropertySet>
                <DesignerProperty Name="ValidateOnBuild" Value="true" />
                <DesignerProperty Name="EnablePluralization" Value="True" />
                <DesignerProperty Name="IncludeForeignKeysInModel" Value="True" />
                <DesignerProperty Name="CodeGenerationStrategy" Value="None" />
              </DesignerInfoPropertySet>
            </Options>
            <!-- Diagram content (shape and connector positions) -->
            <Diagrams></Diagrams>
          </Designer>
        </edmx:Edmx>

    Read the article

  • how to design a schema where the columns of a table are not fixed

    - by hIpPy
    I am trying to design a schema where the columns of a table are not fixed. Ex: I have an Employee table where the columns of the table are not fixed and vary (attributes of Employee are not fixed and vary). The options I see are:

    1. Nullable columns in the Employee table itself, i.e. no normalization.
    2. Instead of adding nullable columns, separate those columns out into their individual tables. Ex: if Address is a column to be added, then create table Address[EmployeeId, AddressValue].
    3. Create tables ExtensionColumnName[EmployeeId, ColumnName] and ExtensionColumnValue[EmployeeId, ColumnValue]. ExtensionColumnName would have ColumnName as "Address" and ExtensionColumnValue would have ColumnValue as the address value.

    For option 3, the tables would look like this:

        Employee table:             EmployeeId, Name
        ExtensionColumnName table:  ColumnNameId, EmployeeId, ColumnName
        ExtensionColumnValue table: EmployeeId, ColumnNameId, ColumnValue

    There is a drawback in the first two ways, as the schema changes with every new attribute. Note that adding a new attribute is frequent. I am not sure if this is a good or bad design. If someone has had a similar decision to make, please give an insight on things like foreign keys / data integrity, indexing, performance, reporting etc.
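    To make option 3 concrete, here is one possible T-SQL reading of the table sketch above; the data types, identity columns, and key/foreign-key choices are assumptions added for illustration, not part of the original question:

        CREATE TABLE Employee (
            EmployeeId   INT IDENTITY(1,1) PRIMARY KEY,
            Name         NVARCHAR(100) NOT NULL
        );

        CREATE TABLE ExtensionColumnName (
            ColumnNameId INT IDENTITY(1,1) PRIMARY KEY,
            EmployeeId   INT NOT NULL REFERENCES Employee(EmployeeId),
            ColumnName   NVARCHAR(100) NOT NULL
        );

        CREATE TABLE ExtensionColumnValue (
            EmployeeId   INT NOT NULL REFERENCES Employee(EmployeeId),
            ColumnNameId INT NOT NULL REFERENCES ExtensionColumnName(ColumnNameId),
            ColumnValue  NVARCHAR(255) NULL,
            PRIMARY KEY (EmployeeId, ColumnNameId)
        );

    Under this reading, every new attribute becomes a row in ExtensionColumnName rather than a schema change, which is the trade-off the question is weighing.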

    Read the article

  • SQL SERVER – Difference between COUNT(DISTINCT) vs COUNT(ALL)

    - by pinaldave
    This blog post is written in response to the T-SQL Tuesday hosted by Jes Schultz Borland. Earlier today, I was presenting a 45-minute session at the Community College about “The Beginning SQL Server Database”. One of the students asked me the following question: what is the difference between COUNT(DISTINCT) vs COUNT(ALL)? I found this question from the student very interesting. He seems to have read the documentation (Books Online) and was then asking me this question. I always carry a laptop which has SQL Server installed. I quickly opened it and ran the following script. After looking at the result, I think it was clear to everybody. Here is the script:

        SELECT COUNT([Title]) Value
        FROM [AdventureWorks].[Person].[Contact]
        GO
        SELECT COUNT(ALL [Title]) ALLValue
        FROM [AdventureWorks].[Person].[Contact]
        GO
        SELECT COUNT(DISTINCT [Title]) DistinctValue
        FROM [AdventureWorks].[Person].[Contact]
        GO

    The above script will give me the following results. You can clearly notice from the result set that COUNT(ALL ColumnName) is the same as COUNT(ColumnName). The reality is that ALL is actually the default option and need not be specified; the ALL keyword includes all the non-NULL values. I know this is very simple and maybe it does not change how we work; however, looking at the whole picture, I really enjoyed the question. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, PostADay, SQL, SQL Authority, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, SQLAuthority News, T SQL, Technology
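    As an added illustration (with made-up sample values, not the AdventureWorks result set from the post), a tiny self-contained script makes the NULL and duplicate handling explicit:

        DECLARE @t TABLE (Title NVARCHAR(10) NULL);
        INSERT INTO @t (Title) VALUES (N'Mr.'), (N'Mr.'), (N'Ms.'), (NULL);

        SELECT COUNT(Title)          AS CountColumn,   -- 3: non-NULL values only
               COUNT(ALL Title)      AS CountAll,      -- 3: identical, ALL is the default
               COUNT(DISTINCT Title) AS CountDistinct, -- 2: distinct non-NULL values
               COUNT(*)              AS CountRows      -- 4: every row, NULL included
        FROM @t;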

    Read the article

< Previous Page | 1 2 3 4 5 6 7  | Next Page >