Search Results

Search found 28900 results on 1156 pages for 'sql 2005'.

Page 580/1156 | < Previous Page | 576 577 578 579 580 581 582 583 584 585 586 587  | Next Page >

  • C# - InvalidCastException when fetching double from sqlite

    - by Irro
    I keep getting an InvalidCastException whenever I fetch a double from my SQLite database in C#. The exception says "Specified cast is not valid." I can see the value in a SQL manager, so I know it exists, and fetching Strings (VARCHARs) and ints from the database works fine. I can also fetch the value as an object, but then I get "66.0" when it is supposed to be "66,8558604947586" (a latitude coordinate). Does anyone know how to solve this? My code:

        using System.Data.SQLite;
        ...
        SQLiteConnection conn = new SQLiteConnection(@"Data Source=C:\\database.sqlite; Version=3;");
        conn.Open();
        SQLiteDataReader reader = getReader(conn, "SELECT * FROM table");

        // These are working
        String name = reader.GetString(1);
        Int32 value = reader.GetInt32(2);

        // This is not working
        Double latitude = reader.GetDouble(3);

        // This gives me the wrong value
        Object o = reader[3]; // or reader["latitude"] or reader.GetValue(3)
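    One direction worth trying - a hedged sketch, not a confirmed fix: read the column as a raw object and convert it yourself, so a TEXT-affinity column or a culture-specific decimal separator does not break the strongly typed GetDouble call. The "sv-SE" culture below is only a placeholder for whichever culture wrote the comma-separated value.

        object raw = reader["latitude"];          // bypasses the typed GetDouble cast
        double latitude;
        if (raw is double)
        {
            latitude = (double)raw;
        }
        else
        {
            // Assumption: the value was stored as text with a comma decimal separator.
            latitude = double.Parse(Convert.ToString(raw),
                System.Globalization.CultureInfo.GetCultureInfo("sv-SE"));
        }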

  • How to determine if DataGridView contains uncommitted changes when bound to a SqlDataAdapter

    - by JYelton
    I have a form which contains a DataGridView, a BindingSource, a DataTable, and a SqlDataAdapter. I populate the grid and data bindings as follows:

        private BindingSource bindingSource = new BindingSource();
        private DataTable table = new DataTable();
        private SqlDataAdapter dataAdapter = new SqlDataAdapter(
            "SELECT * FROM table ORDER BY id ASC;", ClassSql.SqlConn());

        private void LoadData()
        {
            table.Clear();
            dataGridView1.AutoGenerateColumns = false;
            dataGridView1.DataSource = bindingSource;
            SqlCommandBuilder commandBuilder = new SqlCommandBuilder(dataAdapter);
            table.Locale = System.Globalization.CultureInfo.InvariantCulture;
            dataAdapter.Fill(table);
            bindingSource.DataSource = table;
        }

    The user can then make changes to the data, and commit those changes or discard them by clicking either a save or cancel button, respectively.

        private void btnSave_Click(object sender, EventArgs e)
        {
            // save everything to the displays table
            dataAdapter.Update(table);
        }

        private void btnCancel_Click(object sender, EventArgs e)
        {
            // alert user if unsaved changes, otherwise close form
        }

    I would like to add a dialog if cancel is clicked that warns the user of unsaved changes, if unsaved changes exist. Question: How can I determine if the user has modified data in the DataGridView but not committed it to the database? Is there an easy way to compare the current DataGridView data to the last-retrieved query? (Note that there would not be any other threads or users altering data in SQL at the same time.)
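    A hedged sketch of one way to check this: since the grid is bound to a DataTable, the table's own ADO.NET change tracking can answer the question without comparing anything against the database. This assumes the binding shown above is the only way data reaches the table.

        private bool HasUncommittedChanges()
        {
            // Push any in-progress cell edit from the grid into the bound DataTable,
            // then ask the table whether any rows are added, modified, or deleted.
            dataGridView1.EndEdit();
            bindingSource.EndEdit();
            return table.GetChanges() != null;
        }

        private void btnCancel_Click(object sender, EventArgs e)
        {
            if (HasUncommittedChanges())
            {
                // warn the user before discarding / closing
            }
        }

    dataAdapter.Update(table) accepts the changes it writes, so GetChanges() returns null again after a successful save.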

  • Joining on a common table, how do you get a FULL OUTER JOIN to expand on another table?

    - by stimpy77
    I've scoured StackOverflow and Google for an answer to this problem. I'm trying to create a Microsoft SQL Server 2008 view. Not a stored procedure. Not a function. Just a query (i.e. a view). I have three tables. The first table defines a common key, let's say "CompanyID". The other two tables have a sometimes-common field, let's say "EmployeeName". I want a single table result that, when my WHERE clause says "WHERE CompanyID = 12", looks like this:

        CompanyID | TableA     | TableB
        12        | John Doe   | John Doe
        12        | Betty Sue  | NULL
        12        | NULL       | Billy Bob

    I've tried a FULL OUTER JOIN that looks like this:

        SELECT Company.CompanyID, TableA.EmployeeName, TableB.EmployeeName
        FROM Company
        FULL OUTER JOIN TableA ON Company.CompanyID = TableA.CompanyID
        FULL OUTER JOIN TableB ON Company.CompanyID = TableB.CompanyID
            AND (TableA.EmployeeName IS NULL
                 OR TableB.EmployeeName IS NULL
                 OR TableB.EmployeeName = TableA.EmployeeName)

    I'm only getting the NULL from one matched table; I'm not getting the expansion for the other table. In the above sample, I'm basically only getting the first and third rows and not the second. Can someone help me create this query and show me how this is done correctly? BTW I already have a stored procedure that looks very clean and populates an in-memory table, but that isn't what I want. Thanks.

  • How can I prevent 'objects you are adding to the designer use a different data connection...'?

    - by Timothy Khouri
    I am using Visual Studio 2010, and I have a LINQ-to-SQL DBML file that my colleagues and I are using for this project. We have a connection string in the web.config file that the DBML is using. However, when I drag a new table from my "Server Explorer" onto the DBML file, I get presented with a dialog that demands I do one of two things: allow Visual Studio to change the connection string to match the one in my Solution Explorer, or cancel the operation (meaning I don't get my table). I don't really care too much about the debate as to why the PMs/devs who made this tool didn't allow a third option - "Create the object anyway - don't worry, I'm a developer!" What I think would be a good solution is if I could create a connection in the Server Explorer - WITHOUT A WIZARD. If I could just paste a connection string, that would be awesome, because then the DBML designer won't freak out on me :O) If anyone knows the answer to this question, or how to do the above, please lemme know!

  • Automatic Adjusting Range Table

    - by Bradford
    I have a table with a start date range, an end date range, and a few other additional columns. On input of a new record, I want to automatically adjust any overlapping date ranges (shrinking them to allow for the new input). I also want to ensure that no overlapping records can accidentally be inserted into this table. I'm using Oracle and Java for my application code. How should I enforce the prevention of overlapping date ranges and also allow for automatically adjusting overlapping ranges? Should I create an AFTER INSERT trigger, with a dbms_lock to serialize access, to prevent the overlapping data, and then apply the logic in Java to auto-adjust everything? Or should that part be in a PL/SQL stored procedure call? This is something that we need for a couple of other tables, so it'd be nice to abstract. If anyone has something like this already written, please share :) I did find this reference: http://asktom.oracle.com/pls/asktom/f?p=100:11:0::::P11_QUESTION_ID:474221407101

    Here's an example of how each of the 4 overlapping cases should be handled for adjustment on insert. Rows are (Start, End, Value); ** marks the rows involved:

        Example 1
        In DB:  (0, 10, 'X')  **(30, 100, 'Z')  (200, 500, 'Y')
        Input:  (20, 50, 'A')
        Gives:  (0, 10, 'X')  **(20, 50, 'A')  **(51, 100, 'Z')  (200, 500, 'Y')

        Example 2
        In DB:  (0, 10, 'X')  **(30, 100, 'Z')  (200, 500, 'Y')
        Input:  (40, 80, 'A')
        Gives:  (0, 10, 'X')  **(30, 39, 'Z')  **(40, 80, 'A')  **(81, 100, 'Z')  (200, 500, 'Y')

        Example 3
        In DB:  (0, 10, 'X')  **(30, 100, 'Z')  (200, 500, 'Y')
        Input:  (50, 120, 'A')
        Gives:  (0, 10, 'X')  **(30, 49, 'Z')  **(50, 120, 'A')  (200, 500, 'Y')

        Example 4
        In DB:  (0, 10, 'X')  **(30, 100, 'Z')  (200, 500, 'Y')
        Input:  (20, 120, 'A')
        Gives:  (0, 10, 'X')  **(20, 120, 'A')  (200, 500, 'Y')

    The algorithm is as follows: given range = g; input range = i; output range set = o

        if i.start <= g.start
            if i.end >= g.end
                o_1 = i
            else
                o_1 = i
                o_2 = (i.end + 1, g.end)
        else
            if i.end >= g.end
                o_1 = (g.start, i.start - 1)
                o_2 = i
            else
                o_1 = (g.start, i.start - 1)
                o_2 = i
                o_3 = (i.end + 1, g.end)

  • JPA ManyToMany referencing issue

    - by dharga
    I have three tables: AvailableOptions and PlanTypeRef, with a ManyToMany association table called AvailOptionPlanTypeAssoc. The trimmed-down schemas look like this:

        CREATE TABLE [dbo].[AvailableOptions](
            [SourceApplication] [char](8) NOT NULL,
            [OptionId] [int] IDENTITY(1,1) NOT NULL,
            ...
        )

        CREATE TABLE [dbo].[AvailOptionPlanTypeAssoc](
            [SourceApplication] [char](8) NOT NULL,
            [OptionId] [int] NOT NULL,
            [PlanTypeCd] [char](2) NOT NULL,
        )

        CREATE TABLE [dbo].[PlanTypeRef](
            [PlanTypeCd] [char](2) NOT NULL,
            [PlanTypeDesc] [varchar](32) NOT NULL,
        )

    And the Java code looks like this:

        //AvailableOption.java
        @ManyToMany(cascade={CascadeType.ALL}, fetch=FetchType.EAGER)
        @JoinTable(
            name = "AvailOptionPlanTypeAssoc",
            joinColumns = { @JoinColumn(name = "OptionId"), @JoinColumn(name="SourceApplication")},
            inverseJoinColumns=@JoinColumn(name="PlanTypeCd"))
        List<PlanType> planTypes = new ArrayList<PlanType>();

        //PlanType.java
        @ManyToMany
        @JoinTable(
            name = "AvailOptionPlanTypeAssoc",
            joinColumns = { @JoinColumn(name = "PlanTypeCd")},
            inverseJoinColumns={@JoinColumn(name="OptionId"), @JoinColumn(name="SourceApplication")})
        List<AvailableOption> options = new ArrayList<AvailableOption>();

    The problem arises when making a select on AvailableOptions: it joins back onto itself. Note the following SQL from the backtrace; the second inner join should be on PlanTypeRef.

        SELECT t0.OptionId, t0.SourceApplication, t2.PlanTypeCd, t2.EffectiveDate,
               t2.PlanTypeDesc, t2.SysLstTrxDtm, t2.SysLstUpdtUserId, t2.TermDate
        FROM dbo.AvailableOptions t0
        INNER JOIN dbo.AvailOptionPlanTypeAssoc t1
            ON t0.OptionId = t1.OptionId AND t0.SourceApplication = t1.SourceApplication
        INNER JOIN dbo.AvailableOptions t2
            ON t1.PlanTypeCd = t2.PlanTypeCd
        WHERE (t0.SourceApplication = ? AND t0.OptionType = ?)
        ORDER BY t0.OptionId ASC, t0.SourceApplication ASC
        [params=(String) testApp, (String) junit0]

  • Converting Complicated Oracle Join Syntax

    - by Grasper
    I have asked for help before on porting joins of this nature, but nothing this complex. I am porting a bunch of old SQL from Oracle to Postgres, which includes a lot of (+) style left joins. I need this in a format that pg will understand. I am having trouble deciphering this join hierarchy:

        SELECT *
        FROM PLANNED_MISSION PM_CTRL, CONTROL_AGENCY CA, MISSION_CONTROL MC,
             MISSION_OBJECTIVE MOR, REQUEST_OBJECTIVE RO, MISSION_REQUEST_PAIRING MRP,
             FRIENDLY_UNIT FU, PACKAGE_MISSION PKM, MISSION_AIRCRAFT MA,
             MISSION_OBJECTIVE MO, PLANNED_MISSION PM
        WHERE PM.MSN_TASKED_UNIT_TYPE != 'EAM'
          AND PM.MSN_INT_ID = MO.MSN_INT_ID
          AND PM.MSN_INT_ID = PKM.MSN_INT_ID (+)
          AND PM.MSN_INT_ID = MA.MSN_INT_ID (+)
          AND COALESCE(MA.MA_RESOURCE_INT_ID,0) =
              (SELECT COALESCE(MIN(MA1.MA_RESOURCE_INT_ID),0)
               FROM MISSION_AIRCRAFT MA1
               WHERE MA.MSN_INT_ID = MA1.MSN_INT_ID)
          AND MA.FU_UNIT_ID = FU.FU_UNIT_ID (+)
          AND MA.CC_COUNTRY_CD = FU.CC_COUNTRY_CD (+)
          AND MO.MSN_INT_ID = MC.MSN_INT_ID (+)
          AND MO.MO_INT_ID = MC.MO_INT_ID (+)
          AND MC.CAG_CALLSIGN = CA.CAG_CALLSIGN (+)
          AND MC.CTRL_MSN_INT_ID = PM_CTRL.MSN_INT_ID (+)
          AND MO.MSN_INT_ID = MRP.MSN_INT_ID (+)
          AND MO.MO_INT_ID = MRP.MO_INT_ID (+)
          AND MRP.REQ_INT_ID = RO.REQ_INT_ID (+)
          AND RO.MSN_INT_ID = MOR.MSN_INT_ID (+)
          AND RO.MO_INT_ID = MOR.MO_INT_ID (+)
          AND MO.MSN_INT_ID = :msn_int_id
          AND MO.MO_INT_ID = :obj_int_id
          AND COALESCE(PM.MSN_MISSION_NUM, ' ') LIKE '%'
          AND COALESCE(PKM.PKG_NM,' ') LIKE '%'
          AND COALESCE(MA.FU_UNIT_ID, ' ') LIKE '%'
          AND COALESCE(MA.CC_COUNTRY_CD, ' ') LIKE '%'
          AND COALESCE(FU.FU_COMPONENT, ' ') LIKE '%'
          AND COALESCE(MA.ACT_AC_TYPE,' ') LIKE '%'
          AND MO.MO_MSN_CLASS_CD LIKE '%'
          AND COALESCE(MO.MO_MSN_TYPE, ' ') LIKE '%'
          AND COALESCE(MO.MO_OBJ_LOCATION, COALESCE(MOR.MO_OBJ_LOCATION, ' ')) LIKE '%'
          AND COALESCE(CA.CAG_TYPE_OF_CONTROL, ' ') LIKE '%'
          AND COALESCE(MC.CAG_CALLSIGN,' ') LIKE '%'
          AND COALESCE(MC.ASP_AIRSPACE_NM, ' ') LIKE '%'
          AND COALESCE(MC.CTRL_MSN_INT_ID, 0) LIKE '%'
          AND COALESCE(MC.CTRL_MO_INT_ID, 0) LIKE '%'
          AND COALESCE(PM_CTRL.MSN_MISSION_NUM,' ') LIKE '%'

    Any help is appreciated.

  • Retrieve 2 last posts for each category.

    - by Savageman
    Hello. Let's say I have 2 tables: blog_posts and categories. Each blog post belongs to only ONE category, so there is basically a foreign key between the 2 tables here. I would like to retrieve the last 2 posts from each category; is it possible to achieve this in a single request? GROUP BY would group everything and leave me with only one row in each category, but I want 2 of them. It would be easy to perform 1 + N queries (N = number of categories): first retrieve the categories, and then retrieve 2 posts from each category. I believe it would also be quite easy to perform M queries (M = number of posts I want from each category): the first query selects the first post for each category (with a group by), the second query retrieves the second post for each category, and so on. I'm just wondering if someone has a better solution for this. I don't really mind doing 1+N queries for that, but for curiosity and general SQL knowledge, it would be appreciated! Thanks in advance to whoever can help me with this.

  • Problem with oracle stored procedure - parameters

    - by Nicole
    I have this stored procedure:

        CREATE OR REPLACE PROCEDURE "LIQUIDACION_OBTENER"
        (
            p_Cuenta IN NUMBER,
            p_Fecha IN DATE,
            p_Detalle OUT LIQUIDACION.FILADETALLE%TYPE
        )
        IS
        BEGIN
            SELECT FILADETALLE
            INTO p_Detalle
            FROM Liquidacion
            WHERE (FILACUENTA = p_Cuenta) AND (FILAFECHA = p_Fecha);
        END;
        /

    ...and my C# code:

        string liquidacion = string.Empty;
        OracleCommand command = new OracleCommand("Liquidacion_Obtener");
        command.BindByName = true;
        command.Parameters.Add(new OracleParameter("p_Cuenta", OracleDbType.Int64));
        command.Parameters["p_Cuenta"].Value = cuenta;
        command.Parameters.Add(new OracleParameter("p_Fecha", OracleDbType.Date));
        command.Parameters["p_Fecha"].Value = fecha;
        command.Parameters.Add("p_Detalle", OracleDbType.Varchar2, ParameterDirection.Output);
        OracleConnectionHolder connection = null;
        connection = this.GetConnection();
        command.Connection = connection.Connection;
        command.CommandTimeout = 30;
        command.CommandType = CommandType.StoredProcedure;
        OracleDataReader lector = command.ExecuteReader();
        while (lector.Read())
        {
            liquidacion += ((OracleString)command.Parameters["p_Detalle"].Value).Value;
        }

    The thing is that when I try to put a value into the parameter "Fecha" (which is a date), the code gives me this error when the line command.ExecuteReader(); is executed:

        Oracle.DataAccess.Client.OracleException : ORA-06502: PL/SQL: numeric or value error
        ORA-06512: at "SYSTEM.LIQUIDACION_OBTENER", line 9
        ORA-06512: at line 1

    I tried with the datetime and that was not the problem; I even tried with no input parameters and just the output and still got the same error. Apparently the problem is with the output parameter. I already tried putting p_Detalle OUT VARCHAR2 instead of p_Detalle OUT LIQUIDACION.FILADETALLE%TYPE but it didn't work either. I hope my post is understandable.. thanks!!!!!!!!!!
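    A hedged guess at the cause, sketched below and not a confirmed diagnosis: with ODP.NET, a VARCHAR2 output parameter needs an explicit Size (the buffer the procedure writes into), and since the value comes back through the parameter rather than a result set, ExecuteNonQuery is sufficient. The 4000 below is an assumed maximum length.

        // Sketch only: give the output parameter a buffer size and read it after execution.
        OracleParameter detalle = new OracleParameter("p_Detalle", OracleDbType.Varchar2, 4000);
        detalle.Direction = ParameterDirection.Output;
        command.Parameters.Add(detalle);

        command.CommandType = CommandType.StoredProcedure;
        command.ExecuteNonQuery();   // no result set; the value is returned in the parameter

        string liquidacion = ((OracleString)detalle.Value).Value;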

  • Linq PredicateBuilder with conditional AND, OR and NOT filters.

    - by richeym
    We have a project using LINQ to SQL, for which I need to rewrite a couple of search pages to allow the client to select whether they wish to perform an "and" or an "or" search. I thought about redoing the LINQ queries using PredicateBuilder and have got this working pretty well, I think. I effectively have a class containing my predicates, e.g.:

        internal static Expression<Func<Job, bool>> Description(string term)
        {
            return p => p.Description.Contains(term);
        }

    To perform the search I'm doing this (some code omitted for brevity):

        public Expression<Func<Job, bool>> ToLinqExpression()
        {
            var predicates = new List<Expression<Func<Job, bool>>>();
            // build up predicates here
            if (SearchType == SearchType.And)
            {
                query = PredicateBuilder.True<Job>();
            }
            else
            {
                query = PredicateBuilder.False<Job>();
            }
            foreach (var predicate in predicates)
            {
                if (SearchType == SearchType.And)
                {
                    query = query.And(predicate);
                }
                else
                {
                    query = query.Or(predicate);
                }
            }
            return query;
        }

    While I'm reasonably happy with this, I have two concerns:

    1. The if/else blocks that evaluate a SearchType property feel like they could be a potential code smell.
    2. The client is now insisting on being able to perform 'and not' / 'or not' searches.

    To address point 2, I think I could do this by simply rewriting my expressions, e.g.:

        internal static Expression<Func<Job, bool>> Description(string term, bool invert)
        {
            if (invert)
            {
                return p => !p.Description.Contains(term);
            }
            else
            {
                return p => p.Description.Contains(term);
            }
        }

    However this feels like a bit of a kludge, which usually means there's a better solution out there. Can anyone recommend how this could be improved? I'm aware of dynamic LINQ, but I don't really want to lose LINQ's strong typing.
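    One way to avoid duplicating every predicate for the negated case - a sketch, not taken from the original thread: wrap an existing expression in a generic Not helper built with Expression.Not, and compose it the same way And/Or are composed.

        using System;
        using System.Linq.Expressions;

        public static class PredicateExtensions
        {
            // Negates an existing predicate expression without rewriting it.
            public static Expression<Func<T, bool>> Not<T>(this Expression<Func<T, bool>> expr)
            {
                return Expression.Lambda<Func<T, bool>>(
                    Expression.Not(expr.Body), expr.Parameters);
            }
        }

        // Usage sketch (JobPredicates.Description is assumed to be the class shown above):
        //   var predicate = JobPredicates.Description(term);
        //   if (invert) predicate = predicate.Not();
        //   query = SearchType == SearchType.And ? query.And(predicate) : query.Or(predicate);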

  • Specifying ASP.NET MVC attributes for auto-generated data models

    - by Lyubomyr Shaydariv
    Hello to everyone. I'm very new to ASP.NET MVC (as well as ASP.NET in general) and am going to gain some knowledge of this technology, so I'm sorry if I ask some trivial questions. I have installed ASP.NET MVC 3 RC1 and I'm trying to do the following. Let's consider that I have a model that's completely auto-generated from a table using the "LINQ to SQL Classes" template in VS2010. The template generates 3 files (two .cs files and one .layout file), and the generated partial class is expected to be used as an MVC model. Let's also consider a single DB column that's mapped into the model; it may look like this:

        [global::System.Data.Linq.Mapping.ColumnAttribute(Storage = "_Name", DbType = "VarChar(128)")]
        public string Name
        {
            get
            {
                return this._Name;
            }
            set
            {
                if ((this._Name != value))
                {
                    // ... generated stuff goes here
                }
            }
        }

    The ASP.NET MVC engine also provides a beautiful declarative way to specify some additional stuff, like RequiredAttribute, DisplayNameAttribute and other nice attributes. But since the mapped model is a purely auto-generated model, I've realized that I should not change the model manually and specify the fields like:

        [Required]
        [DisplayName("Project name")]
        [StringLength(128)]
        [global::System.Data.Linq.Mapping.ColumnAttribute(Storage = "_Name", DbType = "VarChar(128)")]
        public string Name { ...

    ... though this approach works perfectly - until I change the model in the DBML designer, which removes the ASP.NET MVC attributes automatically. So, how do I specify ASP.NET MVC attributes for the DBML models and their fields safely? Thanks in advance, and Merry Christmas.
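    A common pattern for this situation, sketched here with a hypothetical Project class standing in for the generated type: keep the data annotations in a metadata "buddy" class inside a separate partial-class file that the DBML designer never regenerates, and point MVC at it with MetadataType.

        using System.ComponentModel;
        using System.ComponentModel.DataAnnotations;

        // The DBML designer emits "partial class Project", so this declaration can live
        // in its own file that the designer never touches or overwrites.
        [MetadataType(typeof(ProjectMetadata))]
        public partial class Project
        {
        }

        // Hypothetical metadata class: property names must match the generated ones.
        public class ProjectMetadata
        {
            [Required]
            [DisplayName("Project name")]
            [StringLength(128)]
            public string Name { get; set; }
        }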

  • Locking a table for getting MAX in LINQ

    - by Hossein Margani
    Hi everyone! I have a query in LINQ. I want to get the MAX of Code in my table, increase it, and insert a new record with the new Code - just like the IDENTITY feature of SQL Server, but here my Code column is char(5), which can contain letters and digits. My problem is that when inserting a new row, two concurrent processes get the same max and insert an equal Code for the record. My command is:

        var maxCode = db.Customers.Select(c => c.Code).Max();
        var anotherCustomer = db.Customers.Where(...).SingleOrDefault();
        anotherCustomer.Code = GenerateNextCode(maxCode);
        db.SubmitChanges();

    I ran this command across 1000 threads, each updating 200 customers, and used a Transaction with IsolationLevel.Serializable; after two or three executions an error occurred:

        using (var db = new DBModelDataContext())
        {
            DbTransaction tran = null;
            try
            {
                db.Connection.Open();
                tran = db.Connection.BeginTransaction(IsolationLevel.Serializable);
                db.Transaction = tran;
                . . . .
                tran.Commit();
            }
            catch
            {
                tran.Rollback();
            }
            finally
            {
                db.Connection.Close();
            }
        }

    Error:

        Transaction (Process ID 60) was deadlocked on lock resources with another process and has
        been chosen as the deadlock victim. Rerun the transaction.

    Other IsolationLevels generate this error: "Row not found or changed." Please help me, thank you.

  • Proper DateTime Format for a Web Service

    - by user48408
    I have a webservice with a method which is called via an XMLHttpRequest object in my JavaScript. The method accepts a DateTime parameter which is subsequently converted to a string and run against the database to perform a calculation. I get the value from m_txtDateAdd and send off the XMLHttpRequest.

        <asp:textbox id=m_txtDateAdd tabIndex=4 runat="server" Width="96px" Text="<%# Today %>">
        </asp:textbox>

    The textbox has a validator attached to it:

        <asp:CustomValidator id="m_DateAddValidator" runat="server"
            ErrorMessage="Please Enter a Valid Date"
            ControlToValidate="m_txtDateAdd">&#x25CF;</asp:CustomValidator>

    My webmethod looks something like this:

        [WebMethod]
        public decimal GetTotalCost(DateTime transactionDate)
        {
            String sqlDateString = transactionDate.Year + "/" + transactionDate.Month + "/" + transactionDate.Day;

    I use sqlDateString as part of the command text I send off to the database. It's a legacy application with inline SQL, so I don't have the freedom to set up a stored procedure and create and assign parameters in my code-behind. This works 90% of the time. The webservice is called on the onchange event of m_txtDateAdd. Every now and again the response I get from the server is:

        System.ArgumentException: Cannot convert 25/06/2009 to System.DateTime.
        Parameter name: type ---> System.FormatException: String was not recognized as a valid DateTime.
           at System.DateTimeParse.Parse(String s, DateTimeFormatInfo dtfi, DateTimeStyles styles)
           at System.DateTime.Parse(String s, IFormatProvider provider)
           at System.Convert.ToDateTime(String value, IFormatProvider provider)
           at System.String.System.IConvertible.ToDateTime(IFormatProvider provider)
           at System.Convert.ChangeType(Object value, Type conversionType, IFormatProvider provider)
           at System.Web.Services.Protocols.ScalarFormatter.FromString(String value, Type type)
           --- End of inner exception stack trace ---
           at System.Web.Services.Protocols.ScalarFormatter.FromString(String value, Type type)
           at System.Web.Services.Protocols.ValueCollectionParameterReader.Read(NameValueCollection collection)
           at System.Web.Services.Protocols.HtmlFormParameterReader.Read(HttpRequest request)
           at System.Web.Services.Protocols.HttpServerProtocol.ReadParameters()
           at System.Web.Services.Protocols.WebServiceHandler.CoreProcessRequest()
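    A hedged sketch of one way to make the parsing deterministic. It assumes the page always posts dates as dd/MM/yyyy, which is an assumption rather than something stated above; the signature change to string is also hypothetical. The idea is to accept the raw text, parse it with an explicit format, and format the SQL literal unambiguously:

        using System.Globalization;

        [WebMethod]
        public decimal GetTotalCost(string transactionDate)   // hypothetical signature change
        {
            DateTime parsed;
            if (!DateTime.TryParseExact(transactionDate, "dd/MM/yyyy",
                    CultureInfo.InvariantCulture, DateTimeStyles.None, out parsed))
            {
                throw new ArgumentException("Expected a dd/MM/yyyy date", "transactionDate");
            }

            // Unambiguous literal for the legacy inline SQL.
            string sqlDateString = parsed.ToString("yyyy/MM/dd", CultureInfo.InvariantCulture);
            // ... run the calculation as before ...
            return 0m; // placeholder
        }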

  • Difficulty setting ArrayList to java.sql.Blob to save in DB using hibernate

    - by me_here
    I'm trying to save a Java ArrayList in a database (H2) by setting it as a blob, for retrieval later. If this is a bad approach, please say so - I haven't been able to find much information on this area. I have a column of type Blob in the database, and Hibernate maps to this with java.sql.Blob. The code I'm struggling with is:

        Drawings drawing = new Drawings();
        try {
            ByteArrayOutputStream bos = new ByteArrayOutputStream();
            ObjectOutputStream oos = null;
            oos = new ObjectOutputStream(bos);
            oos.writeObject(plan.drawingPane21.pointList);
            byte[] buff = bos.toByteArray();
            Blob drawingBlob = null;
            drawingBlob.setBytes(0, buff);
            drawing.setDrawingObject(drawingBlob);
        } catch (Exception e) {
            System.err.println(e);
        }

    The object I'm trying to save into a blob (plan.drawingPane21.pointList) is of type ArrayList<DrawingDot>, DrawingDot being a custom class implementing Serializable. My code is failing on the line drawingBlob.setBytes(0, buff); with a NullPointerException. Help appreciated.

  • Practical size limitations for RDBMS

    - by grenade
    I am working on a project that must store very large datasets and associated reference data. I have never come across a project that required tables quite this large. I have proved that at least one development environment cannot cope at the database tier with the processing required by the complex queries against views that the application layer generates (views with multiple inner and outer joins, grouping, summing and averaging against tables with 90 million rows). The RDBMS that I have tested against is DB2 on AIX. The dev environment that failed was loaded with 1/20th of the volume that will be processed in production. I am assured that the production hardware is superior to the dev and staging hardware but I just don't believe that it will cope with the sheer volume of data and complexity of queries. Before the dev environment failed, it was taking in excess of 5 minutes to return a small dataset (several hundred rows) that was produced by a complex query (many joins, lots of grouping, summing and averaging) against the large tables. My gut feeling is that the db architecture must change so that the aggregations currently provided by the views are performed as part of an off-peak batch process. Now for my question. I am assured by people who claim to have experience of this sort of thing (which I do not) that my fears are unfounded. Are they? Can a modern RDBMS (SQL Server 2008, Oracle, DB2) cope with the volume and complexity I have described (given an appropriate amount of hardware) or are we in the realm of technologies like Google's BigTable? I'm hoping for answers from folks who have actually had to work with this sort of volume at a non-theoretical level.

  • Passing a LINQ DataRow Reference in a GridView's ItemTemplate

    - by Bob Kaufman
    Given the following GridView:

        <asp:GridView runat="server" ID="GridView1" AutoGenerateColumns="false"
            DataKeyNames="UniqueID" OnSelectedIndexChanging="GridView1_SelectedIndexChanging">
            <Columns>
                <asp:BoundField HeaderText="Remarks" DataField="Remarks" />
                <asp:TemplateField HeaderText="Listing">
                    <ItemTemplate>
                        <%# ShowListingTitle( ( ( System.Data.DataRowView ) ( Container.DataItem ) ).Row ) %>
                    </ItemTemplate>
                </asp:TemplateField>
                <asp:BoundField HeaderText="Amount" DataField="Amount" DataFormatString="{0:C}" />
            </Columns>
        </asp:GridView>

    which refers to the following code-behind method:

        protected String ShowListingTitle(DataRow row)
        {
            Listing listing = (Listing)row;
            return NicelyFormattedString(listing.field1, listing.field2, ...);
        }

    The cast from DataRow to Listing is failing (cannot convert from DataRow to Listing). I'm certain the problem lies in what I'm passing from within the ItemTemplate, which is simply not the right reference to the current record from the LINQ to SQL data set that I've created, which looks like this:

        private void PopulateGrid()
        {
            using (MyDataContext context = new MyDataContext())
            {
                IQueryable<Listing> listings = from l in context.Listings
                                               where l.AccountID == myAccountID
                                               select l;
                GridView1.DataSource = listings;
                GridView1.DataBind();
            }
        }
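    A hedged observation, sketched below: because the grid is bound to IQueryable<Listing> rather than a DataTable, each Container.DataItem should already be a Listing, so no DataRowView/DataRow is involved and the template can pass the item straight through. This assumes the binding shown in PopulateGrid is the one in effect.

        // Code-behind sketch: accept the bound item directly instead of a DataRow.
        protected String ShowListingTitle(object dataItem)
        {
            Listing listing = (Listing)dataItem;
            return NicelyFormattedString(listing.field1, listing.field2);
        }

        // And in the ItemTemplate markup, the call would become:
        //   <%# ShowListingTitle(Container.DataItem) %>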

  • LINQ Normalizing data

    - by Brennan Mann
    I am using an OMS that stores up to three line items per record in the database. Below is an example of an order containing five line items:

        Order Header
            Order Detail
                Prod 1
                Prod 2
                Prod 3
            Order Detail
                Prod 4
                Prod 5

    That is one order header record and two detail records. My goal is to have a one-to-one relation for detail records (i.e., one detail record per line item). In the past, I used a UNION ALL SQL statement to extract the data. Is there a better approach to this problem using LINQ? Below is my first attempt at using LINQ. Any feedback, suggestions or recommendations would be greatly appreciated. From what I have read, a UNION statement can tax the process?

        var orderdetail =
            (from o in context.ORDERSUBHEADs
             select new { edpNo = o.EDPNOS_001, price = o.EXTPRICES_001, qty = o.ITEMQTYS_001 })
            .Union(from o in context.ORDERSUBHEADs
                   select new { edpNo = o.EDPNOS_002, price = o.EXTPRICES_002, qty = o.ITEMQTYS_002 })
            .Union(from o in context.ORDERSUBHEADs
                   select new { edpNo = o.EDPNOS_003, price = o.EXTPRICES_003, qty = o.ITEMQTYS_003 });
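    One alternative worth sketching - an assumption-laden example, not a tested replacement: enumerate each header row once and flatten its three slots with SelectMany, which avoids scanning the table three times. The Where filter assumes an empty slot stores 0, which may not match the actual data.

        var orderDetail = context.ORDERSUBHEADs
            .AsEnumerable()   // flatten in memory; all three slots come back in one scan
            .SelectMany(o => new[]
            {
                new { edpNo = o.EDPNOS_001, price = o.EXTPRICES_001, qty = o.ITEMQTYS_001 },
                new { edpNo = o.EDPNOS_002, price = o.EXTPRICES_002, qty = o.ITEMQTYS_002 },
                new { edpNo = o.EDPNOS_003, price = o.EXTPRICES_003, qty = o.ITEMQTYS_003 }
            })
            .Where(d => d.edpNo != 0)   // assumed sentinel for "no line item"
            .ToList();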

  • PHP/SQL/Wordpress: Group a user list by alphabet

    - by rayne
    I want to create a (fairly big) WordPress user index with the users categorized alphabetically, like this:

        A
            Amy
            Adam
        B
            Bernard
            Bianca

    and so on. I've created a custom WordPress query which works fine for this, except for one problem: it also displays "empty" letters - letters where there aren't any users whose name begins with that letter. I'd be glad if you could help me fix this code so that it only displays the letter if there's actually a user with a name starting with that letter :) I've tried my luck by checking how many results there are for that letter, but somehow that's not working. (FYI, I use the user photo plugin and only want to show users in the list who have an approved picture, hence the extra conditions in the SQL query.)

        <?php
        $alphabet = range('A', 'Z');
        foreach ($alphabet as $letter) {
            $user_count = $wpdb->get_results("SELECT COUNT(*) FROM wp_users
                WHERE display_name LIKE '".$letter."%' ORDER BY display_name ASC");
            if ($user_count > 0) {
                $user_row = $wpdb->get_results("SELECT wp_users.user_login, wp_users.display_name
                    FROM wp_users, wp_usermeta
                    WHERE wp_users.display_name LIKE '".$letter."%'
                      AND wp_usermeta.meta_key = 'userphoto_approvalstatus'
                      AND wp_usermeta.meta_value = '2'
                      AND wp_usermeta.user_id = wp_users.ID
                    ORDER BY wp_users.display_name ASC");
                echo '<li class="letter">'.$letter.'';
                echo '<ul>';
                foreach ($user_row as $user) {
                    echo '<li><a href="/author/'.$user->user_login.'">'.$user->display_name.'</a></li>';
                }
                echo '</ul></li>';
            }
        }
        ?>

    Thanks in advance!

  • LinqToSQL Double Insert issue

    - by Vaccano
    I have a WCF service with an object structure similar to this:

        public class MyClass
        {
            public List<MySubItem> SubItems { get; set; }
        }

        public class MySubItem
        {
            public List<MySubSubItem> SubSubItems { get; set; }
        }

        public class MySubSubItem
        {
            public string DataValue { get; set; }
        }

        public class MyClassDAL
        {
            public void InsertMyClass(MyClass myClass)
            {
                ctx.MyClasses.InsertOnSubmit(myClass);
                ctx.SubmitChanges();
            }
        }

    Sometimes my client will call in with a MyClass that submits only half of the values it has in the SubSubItems list. Later it calls the insert with the rest of the list. The problem is that when it does this, I get a primary key violation. The reason is that it is trying to insert the MySubItem again (because there are more items in SubSubItems owned by the same MySubItem object). How do I deal with this? Do I just call an Update? Do I have to try to separate them out (updates from inserts)? SQL Server 2008 has a really cool Merge functionality. Is there some way to access that from LINQ to SQL?

  • SQL (mySQL) update some value in all records processed by a select

    - by jdmuys
    I am using mySQL from their C API, but that shouldn't be relevant. My code must process records from a table that match some criteria, and then update the said records to flag them as processed. The lines in the table are modified/inserted/deleted by another process I don't control. I am afraid that in the following, the UPDATE might flag some records erroneously, since the set of records matching might have changed between step 1 and step 3.

        SELECT * FROM myTable WHERE <CONDITION>;                             # step 1
        <iterate over the selected set of lines. This may take some time.>   # step 2
        UPDATE myTable SET processed=1 WHERE <CONDITION>                     # step 3

    What's the smart way to ensure that the UPDATE updates all the lines processed, and only them? A transaction doesn't seem to fit the bill, as it doesn't provide isolation of that sort: a recently modified record not in the originally selected set might still be targeted by the UPDATE statement. For the same reason, SELECT ... FOR UPDATE doesn't seem to help, though it sounds promising :-) The only way I can see is to use a temporary table to memorize the set of rows to be processed, doing something like:

        CREATE TEMPORARY TABLE workOrder (jobId INT(11));
        INSERT INTO workOrder SELECT myID as jobId FROM myTable WHERE <CONDITION>;
        SELECT * FROM myTable WHERE myID IN (SELECT * FROM workOrder);
        <iterate over the selected set of lines. This may take some time.>
        UPDATE myTable SET processed=1 WHERE myID IN (SELECT * FROM workOrder);
        DROP TABLE workOrder;

    But this seems wasteful and not very efficient. Is there anything smarter? Many thanks from a SQL newbie.

  • Calling SubmitChanges on DataContext does not update database.

    - by drasto
    In a C# ASP.NET MVC application I use LINQ to SQL to provide data for my application. I have got a simple database schema (shown as a picture in the original post). In my controller class I reference this data context, called Model (as you can see on the right side of the picture, in the properties), like this:

        private Model model = new Model();

    I've got a table (List) of Series rendered on my page. It renders properly, and I was able to add delete functionality to delete Series like this:

        public ActionResult Delete(int id)
        {
            model.Series.DeleteOnSubmit(model.Series.SingleOrDefault(s => s.ID == id));
            model.SubmitChanges();
            return RedirectToAction("Index");
        }

    where the appropriate action link looks like this:

        <%: Html.ActionLink("Delete", "Delete", new { id=item.ID })%>

    Create (implemented in a similar way) also works fine. However, edit does not work. My edit looks like this:

        public ActionResult Edit(int id)
        {
            return View(model.Series.SingleOrDefault(s => s.ID == id));
        }

        [HttpPost]
        public ActionResult Edit(Series series)
        {
            if (ModelState.IsValid)
            {
                UpdateModel(series);
                series.Title = series.Title + " some string to ensure title has changed";
                model.SubmitChanges();
                return RedirectToAction("Index");
            }

    I have confirmed that my database has a primary key set up correctly. I debugged my application and found out that everything works as expected until the line with model.SubmitChanges();. This command does not apply the changes of the Title property (or any other) to the database. Please help.
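    A hedged sketch of the usual explanation, offered as an assumption rather than a confirmed diagnosis: the Series built by model binding in the POST action was never loaded through this DataContext, so the context has no original values to compare against and SubmitChanges finds nothing to write. Loading the tracked row first and copying the posted values onto it would look roughly like this:

        [HttpPost]
        public ActionResult Edit(int id, FormCollection form)
        {
            // Load the entity through the same DataContext so change tracking applies.
            Series existing = model.Series.SingleOrDefault(s => s.ID == id);

            if (existing != null && TryUpdateModel(existing))
            {
                model.SubmitChanges();   // now sees the modified Title (and other fields)
                return RedirectToAction("Index");
            }
            return View(existing);
        }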

  • Fuzzy Search on Material Descriptions including numerical sizes & general descriptions of material t

    - by Kyle
    We're looking to provide a fuzzy search on an electrical materials database (i.e. conduit, cable, etc.). The problem is that, because of a lack of consistency across all material types, we could not split sizes into separate fields from the text description, because some materials are rated by things other than size. I've attempted a combination of a full-text search and a SQL CLR implementation of the Levenshtein search algorithm (for assistance in ranking), but my results are a little funky (i.e. they are not sorting correctly due to improper ranking). For example, if the search term is 3/4" ABCD Conduit, I might get back several irrelevant results in the following order:

        1/2" Conduit
        1/4" X 3/4" Cable
        1/4" Cable Ties
        3/4" DFC Conduit Tees
        3/4" ABCD Conduit
        3/4" Conduit

    I believe I've nailed the problem down to the fact that these two search algorithms do not factor in the relevance of punctuation and numerics. That is, in such a search, I'd expect the size to take precedence over any fuzzy match on the rest of the description, but my results don't reflect that. My question is: can anyone recommend better search algorithms or different approaches that may be better suited to searching a combination of alphanumerics and punctuation characters?

  • New Perl user: using a hash of arrays

    - by Zach H
    I'm doing a little datamining project where a Perl script grabs info from a SQL database and parses it. The data consists of several timestamps. I want to find how many of a particular type of timestamp exist on any particular day. Unfortunately, this is my first Perl script, and the nature of Perl when it comes to hashes and arrays is confusing me quite a bit. Code segment:

        my %values = ();  # A hash of the total values of each type of data of each day.
                          # The key is the day, and each key stores an array of each of the values I need.
        my @proposal;     # [drafted timestamp(0), submitted timestamp(1), attny approved timestamp(2),
                          #  Organiziation approved timestamp(3), Other approval timestamp(4), Approved Timestamp(5)]

        while (@proposal = $sqlresults->fetchrow_array()) {
            # TODO: check to make sure proposal is valid
            # Increment the number of timestamps of each type on each particular date
            my $i;
            for ($i = 0; $i <= 5; $i++)
                $values{$proposal[$i]}[$i]++;

            # Update rolling average of daily
            # TODO: To check total load, increment total load on all dates between attourney approve date and accepted date
            for ($i = $proposal[1]; $i <= $proposal[2]; $i++)
                $values{$i}[6]++;
        }

    I keep getting syntax errors inside the for loops incrementing values. Also, considering that I'm using strict and warnings, will Perl auto-create arrays of the right values when I'm accessing them inside the hash, or will I get out-of-bounds errors everywhere? Thanks for any help, Zach

  • Table Design For SystemSettings, Best Model

    - by Chris L
    Someone suggested changing a table full of settings, where each column is a setting name (or type) and the rows are the customers and their respective settings for each setting:

        ID | IsAdmin | ImagePath
        ---+---------+----------------
        12 | 1       | \path\to\images
        34 | 0       | \path\to\images

    The downside to this is that every time we want a new setting name (or type), we alter the table (via SQL) to add the new setting as a column, then update the rows so that each customer now has a value for that setting. The new table design proposal is to have a column for the setting name and another column for the setting value:

        ID | SettingName | SettingValue
        ---+-------------+----------------
        12 | IsAdmin     | 1
        12 | ImagePath   | \path\to\images
        34 | IsAdmin     | 0
        34 | ImagePath   | \path\to\images

    The point they made was that adding a new setting is as easy as a simple insert statement for the row - no added column. But something doesn't feel right about the second design; it looks bad, but I can't come up with any arguments against it. Am I wrong?

  • CLR Stored Procedures

    - by Paul Hatcherian
    In an ASP.NET application, I have a small number of fairly complex, frequently used operations to execute against a database. In these operations, one or more of several tables needs updates or inserts based on a logical evaluation of both input parameters and the values of certain tables. I've maintained a separation of logic and data access, so the operation currently looks like this:

        1. Request received from client
        2. Business layer invokes data layer to retrieve data from database
        3. Business layer processes result and determines which operation to execute
        4. Business layer invokes appropriate data operation
        5. Response sent to client

    As you can see, the client is kept waiting while two separate requests are made to the database. In searching for a solution to this, I've found CLR stored procedures, but I'm not sure if I have the right idea about what they are useful for. I have written a replacement for the code above which essentially places steps 2-4 in a CLR SP. My understanding is that the SP will be executed locally by SQL Server and result in only one call being made to the server. My initial benchmark tests show this is actually orders of magnitude slower than my original code, but I attribute that to recompilation of the code and/or some flaw in my environment that I have not worked out yet. My question is basically: is this the intended use of CLR SPs, or am I missing something? I realize this is a bit of a compromise structurally, so if there's a better way to do it I'd love to hear it.
