Search Results

Search found 63598 results on 2544 pages for 'sql add on'.

  • Sql Server Maintenance Plan Tasks & Completion

    - by Ben
    Hi All, I have a maintenance plan that looks like this:

        Client 1: Import Data (Success) -> Process Data (Success) -> Post Process (Completion) -> Next Client
        Client 2: Import Data (Success) -> Process Data (Success) -> Post Process (Completion) -> Next Client
        Client N: ...

    Import Data and Process Data call jobs, and Post Process is an Execute SQL task. If Import Data or Process Data fails, the plan goes to the next client's Import Data. Both Import Data and Process Data are jobs that contain SSIS packages using the built-in SQL logging provider. My expectation with the configuration as it stands is:

        Client 1 Import Data runs: on Failure -> Client 2 Import Data | on Success -> Process Data
        Process Data runs:         on Failure -> Client 2 Import Data | on Success -> Post Process
        Post Process runs:         on Completion (Success or Failure) -> next client's Import Data

    This isn't what I'm seeing in my logs, though. I see several Client Import Data SSIS log entries, then several Post Process log entries, then back to Client Import Data! Arg!! What am I doing wrong? I didn't think the "success" piece of Client 1 Import Data would kick off until it... well... succeeded, aka finished! The logs seem to indicate otherwise, though. I really need these tasks to run consecutively, not concurrently. Is this possible? Thanks!

  • SQL Server Compact timed out waiting for a lock

    - by jankhana
    Hi all, I have an application in which I use SQL Compact 3.5 with VS2008. I'm running multiple threads that contact the compact database and access its rows: each thread selects five rows, hands them to the application, and deletes those rows from the table. It works great with a single thread, but if three or more threads are running I very often get a timeout error. I have increased the timeout property in the connection string, but it didn't give the expected result. The error log is as follows:

        SQL Server Compact timed out waiting for a lock. The default lock time is 2000ms for devices
        and 5000ms for desktops. The default lock timeout can be increased in the connection string
        using the ssce: default lock timeout property.
        [ Session id = 5, Thread id = 4204, Process id = 4808, Table name = XXX,
          Conflict type = x lock (s blocks), Resource = TAB ]

    The query that I use to retrieve is as follows:

        SELECT TOP(5) * FROM TableName ORDER BY id;
        DELETE FROM TableName WHERE id IN (SELECT TOP(5) id FROM TableName ORDER BY id);

    I run the above as a transaction in VS2008, one statement using SqlCeCommand and the other using SqlCeDataAdapter. Is there any way to avoid this timeout exception? Any ideas?
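
    One detail worth checking (a sketch; the file name and value are invented): the error message points at the "default lock timeout" connection-string property, which is a different knob from the ADO.NET command/connect timeout. The unit is milliseconds:

        Data Source=MyDatabase.sdf;Default Lock Timeout=20000;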

  • Images in database vs file system

    - by Jesse
    We have a project coming up where we will be building a whole backend CMS system that will power our entire extranet and intranet with one package. The question I have been trying to find an answer to is which is better: storing images in the database (SQL Server 2005), so we get integrity, a single replication plan, etc., or storing them on the file system?

    One issue we have is multiple load-balanced servers that need the same data at all times. As of now, SQL replication takes care of that, but file replication seems to be a little tougher. Another concern is that we would like multiple resolutions of the same image; we are not sure whether creating and storing each version on the file system would be best, or dynamically creating the requested resolution on demand.

    Our concerns are with the following:

    - Data integrity
    - Data replication
    - Multiple resolutions
    - Speed of database vs file system
    - Overhead load of database vs file system
    - Data management and backup

    Does anyone have a similar situation, or any input on what would be recommended? Thanks in advance for the help!

  • Improving SQL Code

    - by jeremib
    I'm using Pervasive SQL. I have the following UNION of multiple SQL statements. Is there a way to clean this up, especially the Pay Date and Loc No fields that are repeated in each statement? Is there a way to pull them out so there is only one place to change those two values?

        ( SELECT '23400' as Gl_Number, y.Plan as Description, 0 as Hours,
                 ROUND(SUM(Ee_Curr),2) as Debit, 0 as Credit
          FROM "PR_YLOC" y
          LEFT JOIN PR_SUMM s ON (s.Summ_No = y.Summ_No)
          WHERE y.Loc_No = 1041 AND s.Pay_Date = '2010-04-02'
            AND y.Code IN (100, 105, 110) AND y.Type = 3
          GROUP BY y.Plan )
        UNION
        ( SELECT '72000' as Gl_Number, y.Plan, 0, ROUND(SUM(Er_Curr),2), 0
          FROM "PR_YLOC" y
          LEFT JOIN PR_SUMM s ON (s.Summ_No = y.Summ_No)
          WHERE y.Loc_No = 1041 AND s.Pay_Date = '2010-04-02'
            AND y.Code IN (100, 105, 110) AND y.Type = 3
          GROUP BY y.Plan )
        UNION
        ( SELECT '24800', c.Plan, 0, ROUND(SUM(Ee_Amt),2), 0
          FROM "PR_CDED" c
          WHERE Pay_Date = '2010-04-02' AND Loc_No = 1041 AND Code = 100
          GROUP BY c.Plan )
        UNION
        ( SELECT '24800', c.Plan, 0, 0, ROUND(SUM(Ee_Amt),2)
          FROM "PR_CDED" c
          WHERE Pay_Date = '2010-04-02' AND Loc_No = 1041 AND Code = 115
          GROUP BY c.Plan )
        UNION
        ( SELECT '24150', c.Plan, 0, 0, ROUND(SUM(Ee_Amt),2)
          FROM "PR_CDED" c
          WHERE Pay_Date = '2010-04-02' AND Loc_No = 1041 AND Code = 241
          GROUP BY c.Plan )
        UNION
        ( SELECT '24150', c.Plan, 0, ROUND(SUM(Ee_Amt),2), 0
          FROM "PR_CDED" c
          WHERE Pay_Date = '2010-04-02' AND Loc_No = 1041 AND Code = 239
          GROUP BY c.Plan )
        UNION
        ( SELECT '24120', c.Plan, 0, ROUND(SUM(Ee_Amt),2), 0
          FROM "PR_CDED" c
          WHERE Pay_Date = '2010-04-02' AND Loc_No = 1041 AND Code = 230
          GROUP BY c.Plan )
        UNION
        ( SELECT '24100', c.Plan, 0, ROUND(SUM(Ee_Amt),2), 0
          FROM "PR_CDED" c
          WHERE Pay_Date = '2010-04-02' AND Loc_No = 1041 AND Code = 225
          GROUP BY c.Plan )
        UNION
        ( SELECT '23800', c.Plan, 0, ROUND(SUM(Ee_Amt),2), 0
          FROM "PR_CDED" c
          WHERE Pay_Date = '2010-04-02' AND Loc_No = 1041 AND Code = 245
          GROUP BY c.Plan )
        UNION
        ( SELECT m.Def_Dept as Gl_Number, t.Short_Desc,
                 (SELECT SUM(Hours) FROM pr_earn en
                  WHERE en.Loc_No = e.Loc_No AND en.Emp_No = e.Emp_No
                    AND en.Pay_Date = e.Pay_Date AND en.Pay_Code = e.Pay_Code) as Hours,
                 (SELECT SUM(Pay_Amt) FROM pr_earn en
                  WHERE en.Loc_No = e.Loc_No AND en.Emp_No = e.Emp_No
                    AND en.Pay_Date = e.Pay_Date AND en.Pay_Code = e.Pay_Code) as Debit,
                 0
          FROM pr_earn e
          LEFT JOIN pr_mast m ON (e.Loc_No = m.Loc_No AND e.Emp_No = m.Emp_No)
          LEFT JOIN pr_ptype t ON (t.Code = e.Pay_Code)
          WHERE e.loc_no = 1041 AND e.pay_date = '2010-04-02'
          GROUP BY m.Def_Dept, t.Short_Desc )

    Thanks
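
    One possible cleanup, sketched in generic SQL (it assumes your Pervasive version supports CASE expressions; treat that as an assumption): the eight PR_CDED branches collapse into a single pass, so Pay_Date and Loc_No appear exactly once for them. Grouping by Code keeps the same one-row-per-code-and-plan shape the UNION produces; the two PR_YLOC branches and the pr_earn branch would remain as separate legs:

        SELECT CASE c.Code WHEN 100 THEN '24800' WHEN 115 THEN '24800'
                           WHEN 241 THEN '24150' WHEN 239 THEN '24150'
                           WHEN 230 THEN '24120' WHEN 225 THEN '24100'
                           WHEN 245 THEN '23800' END AS Gl_Number,
               c.Plan,
               0 AS Hours,
               -- codes that post as debits
               ROUND(SUM(CASE WHEN c.Code IN (100, 239, 230, 225, 245) THEN Ee_Amt ELSE 0 END), 2) AS Debit,
               -- codes that post as credits
               ROUND(SUM(CASE WHEN c.Code IN (115, 241) THEN Ee_Amt ELSE 0 END), 2) AS Credit
        FROM "PR_CDED" c
        WHERE c.Pay_Date = '2010-04-02'   -- the two shared values now live in one place
          AND c.Loc_No = 1041
          AND c.Code IN (100, 115, 241, 239, 230, 225, 245)
        GROUP BY c.Code, c.Plan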

  • Tool for generating flat files from SQL objects dynamically

    - by Fabio Gouw
    Hello! I'm looking for a tool or component that generates flat files from a SQL Server query result (a stored procedure, or a SELECT * over a table or view). This will be a batch process that runs every day, and every day a new file is created.

    I could use SQL Server Integration Services (DTS), but I have a mandatory requirement: the layout of the file needs to be dynamic. If a new column is added to my query result, the file must have the new column too, without my having to modify the SSIS package. If a column is removed, then the flat file no longer has it. I've tried to do this with SSIS, but when I create a new package I need to specify the number of columns.

    Another requirement is configuring the output format depending on the data type of the column. If it's a datetime, the format needs to be YYYY-MM-DD. If it's a float, then I need to use 2 decimal digits, and so on.

    Does anyone know a tool that does this job? Thanks
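
    Not a full tool, but a sketch of one common workaround (server, file, and view names invented): the bcp utility in character mode writes whatever columns the query returns, so schema changes flow through with no package to edit. Per-type formatting then lives in the query or view itself, e.g. CONVERT(varchar(10), dt_col, 120) for YYYY-MM-DD dates and CAST(f_col AS decimal(18,2)) for two decimal digits:

        bcp "SELECT * FROM MyDb.dbo.DailyExtractView" queryout daily_extract.txt -c -t"," -S myserver -T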

  • Count rows against a SQL Server (2005) table?

    - by David.Chu.ca
    I have a simple question about two options for getting the count of rows in a SQL Server (2005) table; I am using VS 2005. The first option is:

        SELECT id FROM Table1 WHERE dt >= startDt AND dt < endDt;

    I get the list of ids from the above call into a cache, then get the count with List.Count. The other option is:

        SELECT COUNT(*) FROM Table1 WHERE dt >= startDt AND dt < endDt;

    This call gets the count directly. The issue is that I had several cases of exceptions with the second method: timeouts. What I found is that Table1 is too big, with millions of rows. When I use the first option, it seems OK. I am confused by the fact that COUNT() takes more time than getting all the rows (is that true?). Could the aggregate COUNT() cause SQL Server to create a temporary table or cache on the server side, resulting in slow performance when the table is too big? I am not sure what the best way to get the count is.
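
    A hedged aside for anyone with the same symptom (the index name is invented, and this assumes no suitable index on dt exists yet, which the timeout pattern suggests): COUNT(*) with only a range predicate on dt can be answered entirely from a narrow nonclustered index, which is usually far faster than shipping millions of ids to the client just to call List.Count:

        CREATE NONCLUSTERED INDEX IX_Table1_dt ON Table1 (dt);

        SELECT COUNT(*) FROM Table1 WHERE dt >= @startDt AND dt < @endDt;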

  • How to store a list in a column of a database table.

    - by John Berryman
    Howdy! So, per Mehrdad's answer to a related question, I get it that a "proper" database table column doesn't store a list. Rather, you should create another table that effectively holds the elements of said list and then link to it directly or through a junction table.

    However, the type of list I want to create will be composed of unique items (unlike the linked question's fruit example). Furthermore, the items in my list are explicitly sorted - which means that if I stored the elements in another table, I'd have to sort them every time I accessed them. Finally, the list is basically atomic in that any time I wish to access the list, I will want to access the entire list rather than just a piece of it - so it seems silly to have to issue a database query to gather together pieces of the list.

    AKX's solution (linked above) is to serialize the list and store it in a binary column. But this also seems inconvenient because it means that I have to worry about serialization and deserialization. Is there any better solution? If there is no better solution, then why? It seems that this problem should come up from time to time.

    ... just a little more info to let you know where I'm coming from. As soon as I had just begun understanding SQL and databases in general, I was turned on to LINQ to SQL, and so now I'm a little spoiled because I expect to deal with my programming object model without having to think about how the objects are queried or stored in the database. Thanks All! John
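
    For reference, a minimal sketch (all names invented) of the usual relational answer to the sorting concern: store each element's position explicitly, so reading the whole list back in order is one indexed query rather than a re-sort:

        CREATE TABLE list_item (
            list_id  INT NOT NULL,
            position INT NOT NULL,
            item_id  INT NOT NULL,
            PRIMARY KEY (list_id, position),  -- index already in list order
            UNIQUE (list_id, item_id)         -- enforces the uniqueness requirement
        );

        SELECT item_id FROM list_item WHERE list_id = @list_id ORDER BY position;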

  • Selecting a good SQL Server 2008 spatial index with large polygons

    - by andynormancx
    I'm having some fun trying to pick a decent SQL Server 2008 spatial index setup for a data set I am dealing with. The dataset is polygons representing contours over the whole globe. There are 106,000 rows in the table; the polygons are stored in a geometry field.

    The issue I have is that many of the polygons cover a large portion of the globe. This seems to make it very hard to get a spatial index that will eliminate many rows in the primary filter. For example, look at the following query:

        SELECT "ID", "CODE", "geom".STAsBinary() AS "geom"
        FROM "dbo"."ContA"
        WHERE "geom".Filter(
            geometry::STGeomFromText('POLYGON ((-142.03193662573682 59.53396984952896,
                -142.03193662573682 59.88928136451884, -141.32743833481925 59.88928136451884,
                -141.32743833481925 59.53396984952896, -142.03193662573682 59.53396984952896))', 4326)
        ) = 1

    This queries an area which intersects with only two of the polygons in the table. No matter what combination of spatial index settings I choose, that Filter() always returns around 60,000 rows. Replacing Filter() with STIntersects() of course returns just the two polygons I want, but takes much longer (Filter() is 6 seconds, STIntersects() is 12 seconds).

    Can anyone give me any hints on whether there is a spatial index setup that is likely to improve on 60,000 rows, or is my dataset just not a good match for SQL Server's spatial indexing?
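
    For anyone tuning the same workload, these are the knobs in play (a sketch with invented settings, not a recommendation): the grid densities and CELLS_PER_OBJECT of the index. Polygons that cover much of the globe exhaust their cell budget and get tessellated very coarsely, which is one reason a primary Filter() pass can stay this unselective regardless of settings:

        CREATE SPATIAL INDEX SIdx_ContA_geom ON dbo.ContA (geom)
        USING GEOMETRY_GRID
        WITH (
            BOUNDING_BOX = (-180, -90, 180, 90),
            GRIDS = (LEVEL_1 = LOW, LEVEL_2 = LOW, LEVEL_3 = MEDIUM, LEVEL_4 = HIGH),
            CELLS_PER_OBJECT = 64
        );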

  • DataGridView not displaying a row after it is created

    - by joslinm
    Hi, I'm using Visual Studio 10 and I just created a database using SQL Server CE. Within it, I made a table CSLDataTable, and that automatically created a CSLDataSet & CSLDataTableTableAdapter. Three variables were automatically created in my MainWindow.cs class:

        cSLDataSet
        cSLDataTableTableAdapter
        cSLDataTableBindingSource

    I have added a DataGridView to my form, called dataGridView, with data source cSLDataTableBindingSource. In my MainWindow(), I tried adding a row as a test:

        public MainWindow()
        {
            InitializeComponent();
            CSLDataSet.CSLDataTableRow row = cSLDataSet.CSLDataTable.NewCSLDataTableRow();
            row.File_ = "file";
            row.Artist = "artist11";
            row.Album = "album";
            row.Save_Structure = "save";
            row.Sent = false;
            row.Error = true;
            row.Release_Format = "release";
            row.Bit_Rate = "bitrate..";
            row.Year = "year";
            row.Physical_Format = "format";
            row.Bit_Format = "bitformat";
            row.File_Path = "File!!path";
            row.Site_Origin = "what";
            cSLDataSet.CSLDataTable.Rows.Add(row);
            cSLDataSet.AcceptChanges();
            cSLDataTableTableAdapter.Fill(cSLDataSet.CSLDataTable);
            cSLDataTableTableAdapter.Update(cSLDataSet);
            dataGridView.Refresh();
            dataGridView.Update();
        }

    In regards to the DataSet methods I tried calling, I had been trying to find a "correct" way to interact with the adapter, dataset, and datatable to successfully show the row, but to no avail. I'm rather new to using a SQL Server CE database; I've read a lot of the MSDN pages and thought I was on the right track, but I've had no luck. The DataGridView shows the headers correctly, but that new row does not show up.

  • Optimal two variable linear regression SQL statement (censoring outliers)

    - by Dave Jarvis
    Problem

    I am looking to apply the y = mx + b equation (where m is SLOPE and b is INTERCEPT) to a data set, which is retrieved as shown in the SQL code. The values from the (MySQL) query are:

        SLOPE     = 0.0276653965651912
        INTERCEPT = -57.2338357550468

    SQL Code

        SELECT
          ((sum(t.YEAR) * sum(t.AMOUNT)) - (count(1) * sum(t.YEAR * t.AMOUNT))) /
          (power(sum(t.YEAR), 2) - count(1) * sum(power(t.YEAR, 2))) as SLOPE,
          ((sum(t.YEAR) * sum(t.YEAR * t.AMOUNT)) - (sum(t.AMOUNT) * sum(power(t.YEAR, 2)))) /
          (power(sum(t.YEAR), 2) - count(1) * sum(power(t.YEAR, 2))) as INTERCEPT
        FROM (
          SELECT D.AMOUNT, Y.YEAR
          FROM CITY C, STATION S, YEAR_REF Y, MONTH_REF M, DAILY D
          WHERE
            -- For a specific city ... --
            C.ID = 8590 AND
            -- Find all the stations within a 15 unit radius ... --
            SQRT( POW( C.LATITUDE - S.LATITUDE, 2 ) + POW( C.LONGITUDE - S.LONGITUDE, 2 ) ) < 15 AND
            -- Gather all known years for that station ... --
            S.STATION_DISTRICT_ID = Y.STATION_DISTRICT_ID AND
            -- The data before 1900 is shaky; insufficient after 2009. --
            Y.YEAR BETWEEN 1900 AND 2009 AND
            -- Filtered by all known months ... --
            M.YEAR_REF_ID = Y.ID AND
            -- Whittled down by category ... --
            M.CATEGORY_ID = '001' AND
            -- Into the valid daily climate data. --
            M.ID = D.MONTH_REF_ID AND
            D.DAILY_FLAG_ID <> 'M'
          GROUP BY Y.YEAR
          ORDER BY Y.YEAR
        ) t

    Data

    The data was visualized in a chart (with five outliers highlighted), not reproduced here.

    Questions

    1. How do I return the y value against all rows without repeating the same query to collect and collate the data? That is, how do I "reuse" the list of t values?
    2. How would you change the query to eliminate outliers (at an 85% confidence interval)?
    3. The following results (to calculate the start and end points of the line) appear incorrect. Why are the results off by ~10 degrees (e.g., outliers skewing the data)?

        (1900 * 0.0276653965651912) + (-57.2338357550468) = -4.66958228
        (2009 * 0.0276653965651912) + (-57.2338357550468) = -1.65405406

    I would have expected the 1900 result to be around 10 (not -4.67) and the 2009 result to be around 11.50 (not -1.65). Thank you!
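
    On question 1, a sketch of the standard answer in MySQL of that era (no CTEs): materialize the aggregated rows once in a temporary table, compute the coefficients from it into a second one-row table, then join back for the fitted values. The toy table below stands in for the big derived table t:

        -- Stand-in for the derived table t above (x = YEAR, y = AMOUNT)
        CREATE TEMPORARY TABLE t (x INT, y DOUBLE);
        INSERT INTO t VALUES (1900, 10.1), (1901, 10.3), (1902, 10.0);

        -- Least-squares coefficients, computed once
        CREATE TEMPORARY TABLE coef AS
        SELECT (COUNT(*) * SUM(x * y) - SUM(x) * SUM(y)) /
               (COUNT(*) * SUM(x * x) - SUM(x) * SUM(x)) AS m,
               (SUM(y) * SUM(x * x) - SUM(x) * SUM(x * y)) /
               (COUNT(*) * SUM(x * x) - SUM(x) * SUM(x)) AS b
        FROM t;

        -- Fitted y = mx + b for every row, reusing t instead of re-running the big query
        SELECT t.x, t.y, coef.m * t.x + coef.b AS fitted
        FROM t CROSS JOIN coef;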

  • SQL command to get field of a maximum value, without making two select

    - by António Capelo
    I'm starting to learn SQL and I'm working on this exercise: I have a "books" table which holds the info on every book (including price and genre ID), and I need to get the name of the genre which has the highest average price. I suppose that I first need to group the prices by genre and then retrieve the name of the highest. I know that I can get genre-vs-cost results with the following:

        SELECT b.genre, ROUND(AVG(b.price), 2) AS cost
        FROM books b
        GROUP BY b.genre;

    My question is: to get the genre with the highest average price from that result, do I have to write:

        SELECT aux.genre
        FROM (
            SELECT b.genre, ROUND(AVG(b.price), 2) AS cost
            FROM books b
            GROUP BY b.genre
        ) aux
        WHERE aux.cost = (
            SELECT MAX(aux.cost)
            FROM (
                SELECT b.genre, ROUND(AVG(b.price), 2) AS cost
                FROM books b
                GROUP BY b.genre
            ) aux
        );

    Is this bad practice, or isn't there another way? I get the correct result, but I'm not comfortable with writing the same selection twice. I'm not using PL/SQL, so I can't use variables or anything like that. Any help will be appreciated. Thanks in advance!
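
    A common shortcut, sketched on the same schema (syntax varies by engine: LIMIT in MySQL/PostgreSQL, FETCH FIRST 1 ROWS ONLY in standard SQL, TOP 1 in SQL Server; note that ties for the highest average are silently dropped):

        SELECT b.genre
        FROM books b
        GROUP BY b.genre
        ORDER BY AVG(b.price) DESC
        LIMIT 1;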

  • Linq to sql C# updating reference Tables

    - by Laurence Burke
    OK, re-clarification: I am adding a new address, and I know the structure as AddressID = PK with all other entities non-nullable. On insert of a new row, the AddressID PK is auto-generated, and I am wondering whether I have to fetch it to create a new row in the referencing table, or whether that row also gets generated automatically. I also want to repopulate the dropdown list of the current employee's addresses with the newly created address.

        static uint _curEmpID;

        protected void btnAdd_Click(object sender, EventArgs e)
        {
            if (txtZip.Text != "" && txtAdd1.Text != "" && txtCity.Text != "")
            {
                TestDataClassDataContext dc = new TestDataClassDataContext();
                Address addr = new Address()
                {
                    AddressLine1 = txtAdd1.Text,
                    AddressLine2 = txtAdd2.Text,
                    City = txtCity.Text,
                    PostalCode = txtZip.Text,
                    StateProvinceID = Convert.ToInt32(ddlState.SelectedValue)
                };
                dc.Addresses.InsertOnSubmit(addr);
                lblSuccess.Visible = true;
                lblErrMsg.Visible = false;
                dc.SubmitChanges();
                //
                // TODO: add reference from new address to CurEmp Table
                //
                SetAddrList();
            }
            else
            {
                lblErrMsg.Text = "Invalid Input";
                lblErrMsg.Visible = true;
            }
        }

        protected void ddlAddList_SelectedIndexChanged(object sender, EventArgs e)
        {
            lblErrMsg.Visible = false;
            lblSuccess.Visible = false;
            TestDataClassDataContext dc = new TestDataClassDataContext();
            dc.ObjectTrackingEnabled = false;
            if (ddlAddList.SelectedValue != "-1")
            {
                var addr = (from a in dc.Addresses
                            where a.AddressID == Convert.ToInt32(ddlAddList.SelectedValue)
                            select a).FirstOrDefault();
                txtAdd1.Text = addr.AddressLine1;
                txtAdd2.Text = addr.AddressLine2;
                txtCity.Text = addr.City;
                txtZip.Text = addr.PostalCode;
                ddlState.SelectedValue = addr.StateProvinceID.ToString();
                btnSubmit.Visible = true;
                btnAdd.Visible = false;
            }
            else
            {
                txtAdd1.Text = "";
                txtAdd2.Text = "";
                txtCity.Text = "";
                txtZip.Text = "";
                btnAdd.Visible = true;
                btnSubmit.Visible = false;
            }
        }

        protected void SetAddrList()
        {
            TestDataClassDataContext dc = new TestDataClassDataContext();
            dc.ObjectTrackingEnabled = false;
            var addList = from addr in dc.Addresses
                          from eaddr in dc.EmployeeAddresses
                          where eaddr.EmployeeID == _curEmpID && addr.AddressID == eaddr.AddressID
                          select new
                          {
                              AddValue = addr.AddressID,
                              AddText = addr.AddressID,
                          };
            ddlAddList.DataSource = addList;
            ddlAddList.DataValueField = "AddValue";
            ddlAddList.DataTextField = "AddText";
            ddlAddList.DataBind();
            ddlAddList.Items.Add(new ListItem("<Add Address>", "-1"));
        }

    OK, I am hoping that I did not include too much code. I would really appreciate any comments on how I could otherwise improve this code as well.

  • Sql Exception: Error converting data type numeric to numeric

    - by Lucifer
    Hello. We have a very strange issue with a database that has been moved from staging to production. The first time the database was moved, it was by detaching, copying, and reattaching; the second time we tried restoring from a backup of staging. Both SQL Servers are the same version of MS SQL 2008, running on 64-bit hardware. The code accessing the database is the same build, built against the .NET 2.0 framework. Here is the error message and some of the stack trace:

        Exception Details: System.Data.SqlClient.SqlException: Error converting data type numeric to numeric.

        Stack Trace:
        [SqlException (0x80131904): Error converting data type numeric to numeric.]
        System.Data.SqlClient.SqlConnection.OnError(SqlException exception, Boolean breakConnection) +1953274
        System.Data.SqlClient.SqlInternalConnection.OnError(SqlException exception, Boolean breakConnection) +4849707
        System.Data.SqlClient.TdsParser.ThrowExceptionAndWarning(TdsParserStateObject stateObj) +194
        System.Data.SqlClient.TdsParser.Run(RunBehavior runBehavior, SqlCommand cmdHandler, SqlDataReader dataStream, BulkCopySimpleResultSet bulkCopyHandler, TdsParserStateObject stateObj) +2392
        System.Data.SqlClient.SqlCommand.FinishExecuteReader(SqlDataReader ds, RunBehavior runBehavior, String resetOptionsString) +204
        System.Data.SqlClient.SqlCommand.RunExecuteReaderTds(CommandBehavior cmdBehavior, RunBehavior runBehavior, Boolean returnStream, Boolean async) +954
        System.Data.SqlClient.SqlCommand.RunExecuteReader(CommandBehavior cmdBehavior, RunBehavior runBehavior, Boolean returnStream, String method, DbAsyncResult result) +162
        System.Data.SqlClient.SqlCommand.InternalExecuteNonQuery(DbAsyncResult result, String methodName, Boolean sendToPipe) +175
        System.Data.SqlClient.SqlCommand.ExecuteNonQuery() +137

        Version Information: Microsoft .NET Framework Version:2.0.50727.4200; ASP.NET Version:2.0.50727.4016
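
    A hedged diagnostic sketch (plain INFORMATION_SCHEMA, so it should run unchanged on both servers): capture this on staging and on production and diff the output to spot a numeric/decimal column or stored procedure parameter whose precision or scale changed in the move:

        SELECT TABLE_NAME, COLUMN_NAME, NUMERIC_PRECISION, NUMERIC_SCALE
        FROM INFORMATION_SCHEMA.COLUMNS
        WHERE DATA_TYPE IN ('numeric', 'decimal')
        ORDER BY TABLE_NAME, COLUMN_NAME;

        -- Same idea for stored procedure parameters
        SELECT SPECIFIC_NAME, PARAMETER_NAME, NUMERIC_PRECISION, NUMERIC_SCALE
        FROM INFORMATION_SCHEMA.PARAMETERS
        WHERE DATA_TYPE IN ('numeric', 'decimal')
        ORDER BY SPECIFIC_NAME, ORDINAL_POSITION;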

  • Need advice on comparing the performance of 2 equivalent linq to sql queries

    - by uvita
    I am working on a tool to optimize LINQ to SQL queries. Basically, it intercepts the LINQ execution pipeline and makes some optimizations, for example removing a redundant join from a query. Of course, there is overhead in the execution time before the query gets executed in the DBMS, but the query should then be processed faster. I don't want to use a SQL profiler, because I know that the generated query will perform better in the DBMS than the original one; I am looking for a correct way of measuring the global time between the creation of the query in LINQ and the end of its execution. Currently, I am using the Stopwatch class, and my code looks something like this:

        var sw = new Stopwatch();
        sw.Start();
        const int amount = 100;
        for (var i = 0; i < amount; i++)
        {
            ExecuteNonOptimizedQuery();
        }
        sw.Stop();
        Console.WriteLine("Executing the query {2} times took: {0}ms. On average, each query took: {1}ms",
            sw.ElapsedMilliseconds, sw.ElapsedMilliseconds / amount, amount);

    Basically, the ExecuteNonOptimizedQuery() method creates a new DataContext, creates a query, and then iterates over the results. I did this for both versions of the query, the normal one and the optimized one. I took the idea from this post from Frans Bouma. Are there any other approaches/considerations I should take? Thanks in advance!

  • sql locking on silverlight app

    - by immuner
    Hi, I am not sure if this is the correct term, but this is what I'd like to do: I have an application that uses an MSSQL database. This application can operate in three modes:

        mode 1) the user does not alter, but only reads the database
        mode 2) the user can add rows (one at a time) to a table in the database
        mode 3) the user can alter several tables in the database (one person at a time)

    Question 1) How can I ensure that when a user is in mode 3, the database will "lock", and all logged-in users operating in mode 2 or mode 3 will not be able to change the database until he finishes?

    Question 2) How can I ensure that while there are several users in mode 2, there will be no conflict while they all update the table? My guess here is that before adding a new row, you make a server query for the table's current unique keys and then add the new entry. Will this be safe enough, though? Thanks
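
    On question 1, a sketch of one standard approach (sp_getapplock is a real SQL Server procedure; the resource name and timeout here are invented): every writer takes an application lock first, shared for mode 2 and exclusive for mode 3, so an exclusive holder blocks all other writers without touching mode-1 readers:

        BEGIN TRANSACTION;
        -- Mode 3 takes the lock exclusively; mode 2 would use @LockMode = 'Shared'.
        EXEC sp_getapplock @Resource = 'app_write_lock',
                           @LockMode = 'Exclusive',
                           @LockOwner = 'Transaction',
                           @LockTimeout = 5000;

        -- ... perform the mode-3 table changes here ...

        COMMIT;  -- transaction-owned app locks are released on commit/rollback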

  • Excel and SQL, order by help

    - by perlnoob
    I'm stuck in Excel 2007, running a query. It worked until I wanted to add a second column containing "field 2":

        SELECT "Site Updates"."Posted By", "Site Uploaded"."Site Upload Date"
        FROM site_info.dbo."Site Updates"
        WHERE ("Site Updates"."Posted By") AND "Site Uploaded"."Site Upload Date">={ts '2010-05-01 00:00:00'}), ("Site Location"='Chicago')
        UNION ALL
        SELECT "Site Updates"."Posted By", "Site Uploaded"."Site Upload Date"
        FROM site_info.dbo."Site Updates"
        WHERE ("Site Updates"."Posted By") AND "Site Uploaded"."Site Upload Date">={ts '2010-05-01 00:00:00'}), ("Site Location"='Denver')
        ORDER BY "Site Location" ASC;

    Basically, I want two different columns for the locations, for example:

        name   Chicago  Denver
        user1      100      20
        user2       34    1002

    Right now, for some odd reason, it's combining them like:

        name   Chicago
        user1      120
        user2     1036

    Please note that updating to the 2010 beta is not a viable option for me at this point. Any and all input is greatly appreciated. I have read over http://www.techonthenet.com/sql/order_by.php, but it hasn't gotten me very far with this question. If you have another SQL resource you recommend for people trying to get their feet wet, I'd greatly appreciate it. If it helps, all the info is in the same table.
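
    The desired shape is usually produced with conditional aggregation rather than a UNION; here is a sketch against the question's own names (I'm guessing at what is being counted, since the posted query doesn't show an aggregate):

        SELECT "Posted By",
               SUM(CASE WHEN "Site Location" = 'Chicago' THEN 1 ELSE 0 END) AS Chicago,
               SUM(CASE WHEN "Site Location" = 'Denver'  THEN 1 ELSE 0 END) AS Denver
        FROM site_info.dbo."Site Updates"
        WHERE "Site Upload Date" >= {ts '2010-05-01 00:00:00'}
        GROUP BY "Posted By";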

  • Optimal two variable linear regression SQL statement

    - by Dave Jarvis
    Problem

    I am looking to apply the y = mx + b equation (where m is SLOPE and b is INTERCEPT) to a data set, which is retrieved as shown in the SQL code. The values from the (MySQL) query are:

        SLOPE     = 0.0276653965651912
        INTERCEPT = -57.2338357550468

    SQL Code

        SELECT
          ((sum(t.YEAR) * sum(t.AMOUNT)) - (count(1) * sum(t.YEAR * t.AMOUNT))) /
          (power(sum(t.YEAR), 2) - count(1) * sum(power(t.YEAR, 2))) as SLOPE,
          ((sum(t.YEAR) * sum(t.YEAR * t.AMOUNT)) - (sum(t.AMOUNT) * sum(power(t.YEAR, 2)))) /
          (power(sum(t.YEAR), 2) - count(1) * sum(power(t.YEAR, 2))) as INTERCEPT
        FROM (
          SELECT D.AMOUNT, Y.YEAR
          FROM CITY C, STATION S, YEAR_REF Y, MONTH_REF M, DAILY D
          WHERE
            -- For a specific city ... --
            C.ID = 8590 AND
            -- Find all the stations within a 15 unit radius ... --
            SQRT( POW( C.LATITUDE - S.LATITUDE, 2 ) + POW( C.LONGITUDE - S.LONGITUDE, 2 ) ) < 15 AND
            -- Gather all known years for that station ... --
            S.STATION_DISTRICT_ID = Y.STATION_DISTRICT_ID AND
            -- The data before 1900 is shaky; insufficient after 2009. --
            Y.YEAR BETWEEN 1900 AND 2009 AND
            -- Filtered by all known months ... --
            M.YEAR_REF_ID = Y.ID AND
            -- Whittled down by category ... --
            M.CATEGORY_ID = '001' AND
            -- Into the valid daily climate data. --
            M.ID = D.MONTH_REF_ID AND
            D.DAILY_FLAG_ID <> 'M'
          GROUP BY Y.YEAR
          ORDER BY Y.YEAR
        ) t

    Data

    The data was visualized in a chart, not reproduced here.

    Questions

    1. How do I return the y value against all rows without repeating the same query to collect and collate the data? That is, how do I "reuse" the list of t values?
    2. How would you change the query to eliminate outliers (at an 85% confidence interval)?
    3. The following results (to calculate the start and end points of the line) appear incorrect. Why are the results off by ~10 degrees (e.g., outliers skewing the data)?

        (1900 * 0.0276653965651912) + (-57.2338357550468) = -4.66958228
        (2009 * 0.0276653965651912) + (-57.2338357550468) = -1.65405406

    I would have expected the 1900 result to be around 10 (not -4.67) and the 2009 result to be around 11.50 (not -1.65). Thank you!

  • Should I write more SQL to be more efficient, or less SQL to be less buggy?

    - by RenderIn
    I've been writing a lot of one-off SQL queries that return exactly what a certain page needs and no more. I could instead reuse existing queries and issue a number of SQL requests linear in the number of records on the page. As an example, I have a query to return People and a query to return Job Details for a person. To return a list of people with their job details, I could query once for the people and then once per person for their job details (see the sketch below). I've found that in most cases that solution returns things in a reasonable amount of time, but I don't know how well it will scale in my environment. Instead, I've been writing queries to join people + job details, or people + salary history, etc.

    I'm looking at my models and I see how I could shave off maybe 30% of my code if I were to reuse existing queries. This is a big temptation. Is it a bad thing to go for reuse over efficiency in general, or does it all come down to the specific situation? Should I first do it the easy way and then optimize later, or is it best to get the code knocked out while everything is fresh in my mind? Thoughts, experiences?
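
    For concreteness, the two shapes being compared in the people/job-details example (schema invented): the reusable single-row query issued once per person versus the one-off join issued once per page:

        -- Reuse: one extra round trip per person on the page (1 + N queries)
        SELECT * FROM job_details WHERE person_id = @person_id;

        -- One-off: a single round trip for the whole page
        SELECT p.*, j.*
        FROM people p
        LEFT JOIN job_details j ON j.person_id = p.id;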

  • Fastest method for SQL Server inserts, updates, selects from C# ASP.Net 2.0+

    - by Ian
    Hi All, long time listener, first time caller. I use SPs, and this isn't an SP vs. code-behind "build your SQL command" question. I'm looking for a high-throughput method for a backend app that handles many small transactions. I use SqlDataReader for most of the returns, since forward-only works in most cases for me. I've seen it done many ways, and have used most of them myself:

    - Methods that define and accept the stored procedure parameters as parameters themselves and build using cmd.Parameters.Add (with or without specifying the DB value type and/or length)
    - Assembling your SP params and their values into an array or hashtable, then passing them to a more abstract method that parses the collection and then runs cmd.Parameters.Add
    - Classes that represent tables, initializing the class upon need, setting the public properties that represent the table fields, and calling methods like Save, Load, etc.

    I'm sure there are others I've seen but can't think of at the moment. I'm open to all suggestions.

  • Calculate differences between rows while grouping with SQL

    - by Guido
    I have a PostgreSQL table containing movements of different items (models) between warehouses. For example, the following record means that 5 units of model 1 have been sent from warehouse 1 to warehouse 2:

        source  target  model  units
        ------  ------  -----  -----
             1       2      1      5

    I am trying to build a SQL query to obtain the difference between units sent and received, grouped by model. Again with an example:

        source  target  model  units
        ------  ------  -----  -----
             1       2      1      5   -- 5 sent from 1 to 2
             1       2      2      1
             2       1      1      2   -- 2 sent from 2 to 1
             2       1      1      1   -- 1 more sent from 2 to 1

    The result should be:

        source  target  model  diff
        ------  ------  -----  ----
             1       2      1     2   -- 5 sent minus 3 received
             1       2      2     1

    I wonder if this is possible with a single SQL query. Here is the table creation script and some data, just in case anyone wants to try it:

        CREATE TEMP TABLE movements (
            source INTEGER,
            target INTEGER,
            model  INTEGER,
            units  INTEGER
        );

        INSERT INTO movements VALUES (1,2,1,5);
        INSERT INTO movements VALUES (1,2,2,1);
        INSERT INTO movements VALUES (2,1,1,2);
        INSERT INTO movements VALUES (2,1,1,1);
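
    It is doable in one pass; here is a sketch that runs against the script above: canonicalize each warehouse pair with LEAST/GREATEST, then sign the units by direction, so units sent add and units received subtract:

        SELECT LEAST(source, target)    AS source,
               GREATEST(source, target) AS target,
               model,
               SUM(CASE WHEN source < target THEN units ELSE -units END) AS diff
        FROM movements
        GROUP BY LEAST(source, target), GREATEST(source, target), model
        ORDER BY source, target, model;

    With the sample data this returns (1, 2, 1, 2) and (1, 2, 2, 1), as required; a negative diff would mean the net flow ran from the higher-numbered warehouse to the lower.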

  • injection attack (I thought I was protected!) <?php /**/eval(base64_decode( everywhere

    - by Cyprus106
    I've got a fully custom PHP site with a lot of database calls, and I just got injection hacked. This little chunk of code showed up in dozens of my PHP pages:

        <?php /**/ eval(base64_decode(big string of code....

    I've been pretty careful about my SQL calls and such; they're all in this format:

        $query = sprintf("UPDATE Sales SET `Shipped`='1', `Tracking_Number`='%s' WHERE ID='%s' LIMIT 1;",
            mysql_real_escape_string($trackNo),
            mysql_real_escape_string($id));
        $result = mysql_query($query);
        mysql_close();

    For the record, I rarely use mysql_close() at the end; that just happened to be in the code I grabbed. I can't think of any places where I don't use mysql_real_escape_string(), although I'm sure there are probably a couple (I'll be grepping soon to find out). There are also no places where users can put in custom HTML or anything. In fact, most of the user-accessible pages, if they use SQL calls at all, are almost inevitably "SELECT * FROM" pages that use a GET or POST, depending.

    Obviously I need to beef up my security, but I've never had an attack like this and I'm not positive what I should do. I've decided to put limits on all my inputs and go through looking to see if I missed a mysql_real_escape_string somewhere. Anybody else have any suggestions? Also, what does this type of code do? Why is it there?

  • Linq to sql Repository pattern, some doubts

    - by MindlessProgrammer
    I am using the repository pattern with LINQ to SQL, with one repository class per table. I want to know whether I am doing it in a good/standard way:

        ContactRepository
            Contact GetByID()
            Contact GetAll()

        ContactTagRepository
            List<ContactTag> Get(long contactID)
            List<ContactTag> GetAll()
            List<ContactTagDetail> GetAllDetails()

        class ContactTagDetail
        {
            public Contact Contact { get; set; }
            public ContactTag ContactTag { get; set; }
        }

    When I need a contact, I call the method in ContactRepository, and the same for a contact tag. But when I need a contact and its tags together, I call GetAllDetails() in ContactTagRepository. It does not return the ContactTag entity generated by the ORM; instead it returns a ContactTagDetail entity containing both Contact and ContactTag (the entities generated by the ORM). I know I could simply call GetAll() in ContactTagRepository and access Contact.ContactTag, but as it's LINQ to SQL, there is no option to defer loading at the query level, so whenever I need an entity with a related entity I create a projection class.

    Another doubt is where I really need to write the method. I could do it in either the Contact or the ContactTag repository, e.g. a GetAllWithTags() or something in ContactRepository, but I am doing it in the ContactTag repository. What are your suggestions?

  • SQL to CodeIgniter Array Missing Data Issue

    - by SamD
        $query = $this->db->query("
            SELECT t1.numberofbets, t1.profit, t2.seven_profit, t3.28profit,
                   user.user_id, username, password, email, balance, user.date_added,
                   activation_code, activated
            FROM user
            LEFT JOIN (SELECT user_id, SUM(amount_won) AS profit, COUNT(tip_id) AS numberofbets
                       FROM tip GROUP BY user_id) AS t1
                   ON user.user_id = t1.user_id
            LEFT JOIN (SELECT user_id, SUM(amount_won) AS seven_profit
                       FROM tip WHERE date_settled > '$seven_daystime' GROUP BY user_id) AS t2
                   ON user.user_id = t2.user_id
            LEFT JOIN (SELECT user_id, SUM(amount_won) AS 28profit
                       FROM tip WHERE date_settled > '$twoeight_daystime' GROUP BY user_id) AS t3
                   ON user.user_id = t3.user_id
            WHERE activated = 1
            GROUP BY user.user_id
            ORDER BY user.date_added DESC");

        return $query->result_array();

    The query works fine when run in phpMyAdmin and returns complete results (shown in the attached image). However, when printing the array in CodeIgniter, it has no value for one field, seven_profit, even though the value is there when the SQL query is run in phpMyAdmin. It is just the discrepancy in this one field, from the SQL result to the PHP array; I just can't see why, when printing the array, that one field, which should have a value of 26, contains nothing. Any ideas? I changed the field name so it doesn't start with a number in an attempt to fix it, but it made no difference. I know this is complex and looks horrible; any help, or just hearing from people who have come across something similar, would be great. Sam

  • Fastest method for SQL Server inserts, updates, selects

    - by Ian
    I use SPs, and this isn't an SP vs. code-behind "build your SQL command" question. I'm looking for a high-throughput method for a backend app that handles many small transactions. I use SqlDataReader for most of the returns, since forward-only works in most cases for me. I've seen it done many ways, and have used most of them myself:

    - Methods that define and accept the stored procedure parameters as parameters themselves and build using cmd.Parameters.Add (with or without specifying the DB value type and/or length)
    - Assembling your SP params and their values into an array or hashtable, then passing them to a more abstract method that parses the collection and then runs cmd.Parameters.Add
    - Classes that represent tables, initializing the class upon need, setting the public properties that represent the table fields, and calling methods like Save, Load, etc.

    I'm sure there are others I've seen but can't think of at the moment. I'm open to all suggestions.

  • How to use a variable to specify filegroup in MSSQL

    - by gt
    I want to alter a table to add a constraint during an upgrade on an MSSQL database. This table is normally indexed on a filegroup called 'MY_INDEX', but it may also be on a database without this filegroup, in which case I want the indexing to be done on the 'PRIMARY' filegroup. I tried the following code to achieve this:

        DECLARE @fgName AS VARCHAR(10)

        SET @fgName = CASE WHEN EXISTS (SELECT groupname
                                        FROM sysfilegroups
                                        WHERE groupname = 'MY_INDEX')
                           THEN QUOTENAME('MY_INDEX')
                           ELSE QUOTENAME('PRIMARY')
                      END

        ALTER TABLE [dbo].[mytable]
        ADD CONSTRAINT [PK_mytable] PRIMARY KEY ( [myGuid] ASC )
        ON @fgName -- fails: 'incorrect syntax'

    However, the last line fails with a syntax error, as it appears a filegroup cannot be specified by variable. Is this possible?
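
    For the record, the ON clause of ALTER TABLE does not accept a variable; the usual workaround (a sketch building on the @fgName variable above, which is already QUOTENAME'd) is dynamic SQL:

        DECLARE @sql NVARCHAR(MAX);
        SET @sql = N'ALTER TABLE [dbo].[mytable] '
                 + N'ADD CONSTRAINT [PK_mytable] PRIMARY KEY ([myGuid] ASC) '
                 + N'ON ' + @fgName + N';';
        EXEC sp_executesql @sql;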
