Search Results

Search found 28297 results on 1132 pages for 'sql azure'.


  • User has many computers, computers have many attributes in different tables, best way to JOIN?

    - by krismeld
    I have a table for users:

        USERS:
        ID | NAME  |
        ------------
        1  | JOHN  |
        2  | STEVE |

    a table for computers:

        COMPUTERS:
        ID | USER_ID |
        ---------------
        13 | 1       |
        14 | 1       |

    a table for processors:

        PROCESSORS:
        ID | NAME             |
        ------------------------
        27 | PROCESSOR TYPE 1 |
        28 | PROCESSOR TYPE 2 |

    and a table for harddrives:

        HARDDRIVES:
        ID | NAME              |
        -------------------------
        35 | HARDDRIVE TYPE 25 |
        36 | HARDDRIVE TYPE 90 |

    Each computer can have many attributes from the different attribute tables (processors, harddrives, etc.), so I have intersection tables like this to link the attributes to the computers:

        COMPUTER_PROCESSORS:
        C_ID | P_ID |
        --------------
        13   | 27   |
        13   | 28   |
        14   | 27   |

        COMPUTER_HARDDRIVES:
        C_ID | H_ID |
        --------------
        13   | 35   |

    So user JOHN, with ID 1, owns computers 13 and 14. Computer 13 has processors 27 and 28, and harddrive 35. Computer 14 has processor 27 and no harddrive. Given a user's ID, I would like to retrieve a list of that user's computers with each computer's attributes. I have figured out a query that gives me somewhat of a result:

        SELECT computers.id, processors.id AS p_id, processors.name AS p_name,
               harddrives.id AS h_id, harddrives.name AS h_name
        FROM computers
        JOIN computer_processors ON (computer_processors.c_id = computers.id)
        JOIN processors ON (processors.id = computer_processors.p_id)
        JOIN computer_harddrives ON (computer_harddrives.c_id = computers.id)
        JOIN harddrives ON (harddrives.id = computer_harddrives.h_id)
        WHERE computers.user_id = 1

    Result:

        ID | P_ID | P_NAME           | H_ID | H_NAME            |
        ----------------------------------------------------------
        13 | 27   | PROCESSOR TYPE 1 | 35   | HARDDRIVE TYPE 25 |
        13 | 28   | PROCESSOR TYPE 2 | 35   | HARDDRIVE TYPE 25 |

    But this has several problems. Computer 14 doesn't show up, because it has no harddrive. Can I somehow make an OUTER JOIN to make sure that all computers show up, even if there are some attributes they don't have? Computer 13 shows up twice, with the same harddrive listed for both rows. When more attributes are added to a computer (like 3 blocks of RAM), the number of rows returned for that computer gets pretty big, and it makes it hard to sort the result out in application code. Can I somehow make a query that groups the two returned rows together? Or a query that returns NULL in the h_name column in the second row, so that all values returned are unique? EDIT: What I would like returned is something like this:

        ID | P_ID | P_NAME           | H_ID | H_NAME            |
        ----------------------------------------------------------
        13 | 27   | PROCESSOR TYPE 1 | 35   | HARDDRIVE TYPE 25 |
        13 | 28   | PROCESSOR TYPE 2 | 35   | NULL              |
        14 | 27   | PROCESSOR TYPE 1 | NULL | NULL              |

    Or whatever result makes it easy to turn into an array like this:

        [13] =>
            [P_NAME] =>
                [0] => PROCESSOR TYPE 1
                [1] => PROCESSOR TYPE 2
            [H_NAME] =>
                [0] => HARDDRIVE TYPE 25
        [14] =>
            [P_NAME] =>
                [0] => PROCESSOR TYPE 1
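
    A hedged sketch of one way out of both problems (assuming MySQL, since the question doesn't name the engine): LEFT JOINs keep computers that lack a given attribute, and GROUP_CONCAT collapses each computer's attributes into a single row, which sidesteps the row multiplication entirely. Illustrative, not a drop-in answer:

        -- Sketch, assuming MySQL: LEFT JOIN keeps computer 14 (no harddrive),
        -- GROUP_CONCAT folds the attribute rows so each computer appears once.
        SELECT c.id,
               GROUP_CONCAT(DISTINCT p.name) AS processors,
               GROUP_CONCAT(DISTINCT h.name) AS harddrives
        FROM computers c
        LEFT JOIN computer_processors cp ON cp.c_id = c.id
        LEFT JOIN processors p           ON p.id = cp.p_id
        LEFT JOIN computer_harddrives ch ON ch.c_id = c.id
        LEFT JOIN harddrives h           ON h.id = ch.h_id
        WHERE c.user_id = 1
        GROUP BY c.id;

    Splitting the concatenated strings in application code then yields exactly the per-computer arrays shown in the edit.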


  • Run SSIS Package from T-SQL

    - by Dr. Zim
    I noticed you can use the following stored procedures (in order) to schedule an SSIS package:

        msdb.dbo.sp_add_category @class=N'JOB', @type=N'LOCAL', @name=N'[Uncategorized (Local)]'
        msdb.dbo.sp_add_job ...
        msdb.dbo.sp_add_jobstep ...
        msdb.dbo.sp_update_job ...
        msdb.dbo.sp_add_jobschedule ...
        msdb.dbo.sp_add_jobserver ...

    (You can see an example by right-clicking a scheduled job and selecting "Script Job as > CREATE To".) AND you can use sp_start_job to execute the job immediately, effectively running SSIS packages on demand. Question: does anyone know of any msdb.dbo.[...] stored procedures that simply allow you to run SSIS packages on the fly, without using xp_cmdshell directly, or some easier approach?
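
    One commonly used workaround, sketched below with hypothetical job and package names: create a minimal one-step Agent job on the fly and start it with sp_start_job. Note that sp_start_job returns immediately rather than waiting for the package to finish:

        -- A minimal sketch: a throwaway job with a single SSIS step,
        -- started immediately. The package path is a placeholder.
        EXEC msdb.dbo.sp_add_job @job_name = N'RunMyPackageNow';
        EXEC msdb.dbo.sp_add_jobstep
             @job_name  = N'RunMyPackageNow',
             @step_name = N'Run package',
             @subsystem = N'SSIS',
             @command   = N'/FILE "C:\Packages\MyPackage.dtsx"';
        EXEC msdb.dbo.sp_add_jobserver @job_name = N'RunMyPackageNow';
        EXEC msdb.dbo.sp_start_job @job_name = N'RunMyPackageNow';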


  • Any way to optimize this MySQL query?

    - by manyxcxi
    My table looks like this:

        `MyDB`.`Details` (
          `id` bigint(20) NOT NULL,
          `run_id` int(11) NOT NULL,
          `element_name` varchar(255) NOT NULL,
          `value` text,
          `line_order` int(11) default NULL,
          `column_order` int(11) default NULL
        );

    I have the following SELECT statement in a stored procedure:

        SELECT RULE
              ,TITLE
              ,SUM(IF(t.PASSED='Y',1,0)) AS PASS
              ,SUM(IF(t.PASSED='N',1,0)) AS FAIL
        FROM (
          SELECT a.line_order
                ,MAX(CASE WHEN a.element_name = 'PASSED' THEN a.`value` END) AS PASSED
                ,MAX(CASE WHEN a.element_name = 'RULE' THEN a.`value` END) AS RULE
                ,MAX(CASE WHEN a.element_name = 'TITLE' THEN a.`value` END) AS TITLE
          FROM Details a
          WHERE run_id = runId
          GROUP BY line_order
        ) t
        GROUP BY RULE, TITLE;

    (runId is an input parameter to the stored procedure.) This query takes about 14 seconds to run. The table has 214856 rows, and the particular run_id I am filtering on has 162204 records. It's not on a super-high-powered machine, but I feel like I could be doing this more efficiently. My main goal is to summarize by RULE and TITLE and show PASS and FAIL count columns.
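
    A hedged suggestion rather than a guaranteed fix: the inner query scans and sorts ~162k rows for one run_id, so a composite index matching the WHERE and GROUP BY lets MySQL read the rows in group order instead of sorting them. The index name is illustrative, and the TEXT `value` column itself can't be covered:

        -- Sketch: supports WHERE run_id = ? plus GROUP BY line_order;
        -- element_name is included to narrow the CASE lookups.
        ALTER TABLE `MyDB`.`Details`
          ADD INDEX `idx_details_run_line_elem` (`run_id`, `line_order`, `element_name`);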


  • HQL equivalent of SQL query

    - by kash
        String SQL_QUERY = "SELECT count(*) FROM (SELECT * FROM Url as U where U.pageType="
                + 1 + " group by U.pageId having count(U.pageId) = 1)";
        query = session.createQuery(SQL_QUERY);

    I am getting an error:

        org.hibernate.hql.ast.QuerySyntaxException: unexpected token: ( near line 1, column 23
        [SELECT count(*) FROM (SELECT * FROM Url as U where U.pageType = 2 group by U.pageId having count(U.pageId) = 1)]
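
    The parser is choking on the subquery: HQL does not allow subqueries in the FROM clause, which is why the opening parenthesis after FROM is an "unexpected token". A hedged sketch of a workaround is to run it as a native query instead (e.g. via session.createSQLQuery, assuming the table is also named Url), giving the derived table the alias most databases require:

        -- Native SQL sketch (not HQL); the derived-table alias `t` is mandatory
        -- on most engines, and only pageId needs to be selected inside.
        SELECT count(*)
        FROM (SELECT U.pageId
              FROM Url U
              WHERE U.pageType = 1
              GROUP BY U.pageId
              HAVING count(U.pageId) = 1) t;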


  • select from multiple tables but ordering by a datetime field

    - by Chris Mccabe
    I have 3 tables that are unrelated (related only in that each contains data for a different social network). Each has a datetime field, dated. I'm already grouping by hour, as you can see below (this one is for LinkedIn):

        SELECT count(*), date_format(dated, '%Y:%m:%d %H') as hour
        FROM upd8r_linked_in_accts
        WHERE CAST(dated AS DATE) = '".$start_date."'
        GROUP BY hour

    I would like to know how to do a total across all 3 networks. The tables for the three are:

        CREATE TABLE IF NOT EXISTS `upd8r_facebook_accts` (
          `id` int(11) NOT NULL AUTO_INCREMENT,
          `owner_id` varchar(50) NOT NULL,
          `user_id` int(11) NOT NULL,
          `fb_id` bigint(30) NOT NULL,
          `dated` datetime NOT NULL,
          PRIMARY KEY (`id`)
        ) ENGINE=MyISAM DEFAULT CHARSET=latin1 AUTO_INCREMENT=80 ;

        CREATE TABLE IF NOT EXISTS `upd8r_linked_in_accts` (
          `id` int(11) NOT NULL AUTO_INCREMENT,
          `owner_id` varchar(50) NOT NULL,
          `user_id` int(11) NOT NULL,
          `linked_in` varchar(200) NOT NULL,
          `oauth_secret` varchar(100) NOT NULL,
          `first_count` int(11) NOT NULL,
          `second_count` int(11) NOT NULL,
          `dated` datetime NOT NULL,
          PRIMARY KEY (`id`)
        ) ENGINE=MyISAM DEFAULT CHARSET=latin1 AUTO_INCREMENT=200 ;

        CREATE TABLE IF NOT EXISTS `upd8r_twitter_accts` (
          `id` int(11) NOT NULL AUTO_INCREMENT,
          `owner_id` varchar(50) NOT NULL,
          `user_id` int(11) NOT NULL,
          `twitter` varchar(200) NOT NULL,
          `twitter_secret` varchar(100) NOT NULL,
          `dated` datetime NOT NULL,
          PRIMARY KEY (`id`)
        ) ENGINE=MyISAM DEFAULT CHARSET=latin1 AUTO_INCREMENT=9 ;

    Something like this?

        (SELECT count(*), date_format(dated, '%Y:%m:%d %H') as hour
         FROM upd8r_linked_in_accts
         WHERE CAST(dated AS DATE) = '".$start_date."')
        UNION ALL
        (SELECT count(*), date_format(dated, '%Y:%m:%d %H') as hour
         FROM upd8r_facebook_accts
         WHERE CAST(dated AS DATE) = '".$start_date."')
        UNION ALL
        (SELECT count(*), date_format(dated, '%Y:%m:%d %H') as hour
         FROM upd8r_twitter_accts
         WHERE CAST(dated AS DATE) = '".$start_date."')
        UNION ALL
        GROUP BY hour
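
    A hedged sketch of the combined query: GROUP BY can't follow a UNION ALL directly, so the union goes inside a derived table and the counting happens once on the outside. The literal date stands in for the interpolated $start_date, which would be better passed as a bound parameter than spliced into the string:

        -- Sketch: count per hour across all three tables at once.
        SELECT hour, COUNT(*) AS total
        FROM (
          SELECT date_format(dated, '%Y:%m:%d %H') AS hour
          FROM upd8r_linked_in_accts WHERE CAST(dated AS DATE) = '2013-01-01'
          UNION ALL
          SELECT date_format(dated, '%Y:%m:%d %H')
          FROM upd8r_facebook_accts  WHERE CAST(dated AS DATE) = '2013-01-01'
          UNION ALL
          SELECT date_format(dated, '%Y:%m:%d %H')
          FROM upd8r_twitter_accts   WHERE CAST(dated AS DATE) = '2013-01-01'
        ) AS all_networks
        GROUP BY hour;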


  • Why does DateTime.Now not work when I don't use ToList()?

    - by MemoryLeak
    When I use

        datacontext.News
            .Where(p => p.status == true)
            .Where(p => p.date <= DateTime.Now)
            .ToList();

    the system returns no results. When I use

        datacontext.News
            .Where(p => p.status == true)
            .ToList()
            .Where(p => p.date <= DateTime.Now)
            .ToList();

    the system returns the expected results. Can anyone tell me what's going on? Thanks in advance!


  • Foreign Keys and Primary Keys at the same time

    - by Bader
    Hello, I am trying to create a table (OrderDetails2) that has two FKs and a composite PK on the same keys. Here is my code:

        create table OrderDetails2 (
          PFOrder_ID Number(3)
          PFProduct_ID Number(3)
          CONSTRAINT PF PRIMARY KEY (PFOrder_ID, PFProduct_ID),
          CONSTRAINT FK_1 FOREIGN KEY (PFProudct_ID) REFERENCES Product(Product_ID),
          CONSTRAINT FK_2 FOREIGN KEY (PFOrder_ID) REFERENCES Orderr(Order_ID)
        );

    I am using Oracle Express, and a problem pops up when I run the code:

        ORA-00907: missing right parenthesis

    What is the problem?
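
    The ORA-00907 here is almost certainly the missing commas after the two column definitions; Oracle reads "PFOrder_ID Number(3) PFProduct_ID ..." as one malformed definition. A hedged sketch of the corrected DDL (note the original FK_1 also misspells PFProduct_ID as PFProudct_ID):

        -- Sketch: commas added after each column definition,
        -- and the FK_1 column name spelling aligned with the column.
        CREATE TABLE OrderDetails2 (
          PFOrder_ID   NUMBER(3),
          PFProduct_ID NUMBER(3),
          CONSTRAINT PF PRIMARY KEY (PFOrder_ID, PFProduct_ID),
          CONSTRAINT FK_1 FOREIGN KEY (PFProduct_ID) REFERENCES Product(Product_ID),
          CONSTRAINT FK_2 FOREIGN KEY (PFOrder_ID) REFERENCES Orderr(Order_ID)
        );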


  • Multiple Foreign Keys from One Table linking to a single Primary Key in a second Table

    - by croker10
    Hi all, I have a database with three tables: a Household table, an Adults table and a Users table. The Household table contains two foreign keys, iAdult1ID and iAdult2ID. The Users table has an iUserID primary key, and the Adults table has a corresponding iUserID foreign key. One of the columns in the Users table is strUsername, an e-mail address. I am trying to write a query that will let me search for an e-mail address belonging to either adult related to a household. So I have two questions. First, assuming that all the values are not null, how can I do this? And second, in reality iAdult2ID can be null; is it still possible to write a query to do this? Thanks for your help. Let me know if you need any more information.
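
    A hedged sketch under the column names described (the Adults primary key is assumed to be iAdultID, which isn't stated): join each adult slot out to Users and match the e-mail on either branch. The LEFT JOINs also answer the second question, since a NULL iAdult2ID simply produces no u2 match:

        -- Sketch: find households where either adult has the given e-mail.
        SELECT h.*
        FROM Household h
        LEFT JOIN Adults a1 ON a1.iAdultID = h.iAdult1ID
        LEFT JOIN Users  u1 ON u1.iUserID  = a1.iUserID
        LEFT JOIN Adults a2 ON a2.iAdultID = h.iAdult2ID
        LEFT JOIN Users  u2 ON u2.iUserID  = a2.iUserID
        WHERE u1.strUsername = 'someone@example.com'
           OR u2.strUsername = 'someone@example.com';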


  • Incorrect syntax inserting data into table

    - by SelectDistinct
    I am having some trouble with my updatedata() method. The idea is that the user provides a recipe name, ingredients and instructions, and then selects an image using a FileStream. Once the user clicks 'Add Recipe', this calls the updatedata() method; however, as things stand, I am getting an error that mentions the contents of the text box. Here is the updatedata() method code:

        private void updatedata()
        {
            // filestream object to read the image
            // full length of image to a byte array
            try
            {
                // try to see if the image has a valid path
                if (imagename != "")
                {
                    FileStream fs;
                    fs = new FileStream(@imagename, FileMode.Open, FileAccess.Read);
                    // a byte array to read the image
                    byte[] picbyte = new byte[fs.Length];
                    fs.Read(picbyte, 0, System.Convert.ToInt32(fs.Length));
                    fs.Close();
                    // open the database and insert the row
                    string connstr = @"Server=mypcname\SQLEXPRESS;Database=RecipeOrganiser;Trusted_Connection=True";
                    SqlConnection conn = new SqlConnection(connstr);
                    conn.Open();
                    string query;
                    query = "insert into Recipes(RecipeName,RecipeImage,RecipeIngredients,RecipeInstructions) values ("
                          + textBox1.Text + "," + " @pic" + "," + textBox2.Text + "," + textBox3.Text + ")";
                    SqlParameter picparameter = new SqlParameter();
                    picparameter.SqlDbType = SqlDbType.Image;
                    picparameter.ParameterName = "pic";
                    picparameter.Value = picbyte;
                    SqlCommand cmd = new SqlCommand(query, conn);
                    cmd.Parameters.Add(picparameter);
                    cmd.ExecuteNonQuery();
                    MessageBox.Show("Image successfully saved");
                    cmd.Dispose();
                    conn.Close();
                    conn.Dispose();
                    Connection();
                }
            }
            catch (Exception ex)
            {
                MessageBox.Show(ex.Message);
            }
        }

    Can anyone see where I have gone wrong with the insert into Recipes query, or suggest an alternative approach to this part of the code?
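
    The likely culprit is that textBox1/2/3 are concatenated into the VALUES list with no quotes, so the recipe text itself becomes part of the SQL and any comma or apostrophe breaks the syntax (it is also a SQL-injection hole). A hedged sketch of the statement with every value parameterized; each named parameter would be added via cmd.Parameters just as @pic already is:

        -- Sketch: no user text is spliced into the SQL string.
        INSERT INTO Recipes (RecipeName, RecipeImage, RecipeIngredients, RecipeInstructions)
        VALUES (@name, @pic, @ingredients, @instructions);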


  • MD5 hash validation failing for unknown reason in PHP

    - by Sennheiser
    I'm writing a login form, and it converts the given password to an MD5 hash with md5($password), then matches it against an already-hashed record in my database. I know for sure that the database record is correct in this case. However, it doesn't log me in, and claims the password is incorrect. Here's my code:

        $password = mysql_real_escape_string($_POST["password"]);
        // ...more code...
        $passwordQuery = mysql_fetch_row(mysql_query(("SELECT password FROM users WHERE email = '$userEmail'")));
        // ...some code...
        elseif (md5($password) != $passwordQuery) {
            $_SESSION["noPass"] = "That password is incorrect.";
        }
        // ...more code after...

    I tried pulling just the value of md5($password), and that matched up when I visually compared it. However, I can't get the comparison to work in PHP. Perhaps it is because the MySQL record is stored as text, and the MD5 is something else?
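
    One detail worth checking first: mysql_fetch_row() returns an array, so the comparison should be against $passwordQuery[0], not $passwordQuery itself. A hedged alternative, sketched with placeholder values, is to push the comparison into MySQL so PHP only tests whether a row came back (the inputs still need escaping or, better, bound parameters):

        -- Sketch: MD5() is computed by MySQL; a returned row means a match.
        SELECT 1
        FROM users
        WHERE email = 'user@example.com'
          AND password = MD5('password-from-form');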


  • Postgresql count+sort performance

    - by invictus
    I have built a small inventory system using PostgreSQL and psycopg2. Everything works great, except when I want to create aggregated summaries/reports of the content, where I get really bad performance due to count()'ing and sorting. The DB schema is as follows:

        CREATE TABLE hosts (
          id SERIAL PRIMARY KEY,
          name VARCHAR(255)
        );
        CREATE TABLE items (
          id SERIAL PRIMARY KEY,
          description TEXT
        );
        CREATE TABLE host_item (
          id SERIAL PRIMARY KEY,
          host INTEGER REFERENCES hosts(id) ON DELETE CASCADE ON UPDATE CASCADE,
          item INTEGER REFERENCES items(id) ON DELETE CASCADE ON UPDATE CASCADE
        );

    There are some other fields as well, but those are not relevant. I want to extract 2 different reports:

    - List of all hosts with the number of items per host, ordered from highest to lowest count
    - List of all items with the number of hosts per item, ordered from highest to lowest count

    I have used 2 queries for the purpose. Items with host count:

        SELECT i.id, i.description, COUNT(hi.id) AS count
        FROM items AS i
        LEFT JOIN host_item AS hi ON (i.id = hi.item)
        GROUP BY i.id
        ORDER BY count DESC
        LIMIT 10;

    Hosts with item count:

        SELECT h.id, h.name, COUNT(hi.id) AS count
        FROM hosts AS h
        LEFT JOIN host_item AS hi ON (h.id = hi.host)
        GROUP BY h.id
        ORDER BY count DESC
        LIMIT 10;

    Problem is: the queries run for 5-6 seconds before returning any data. As this is a web-based application, 6 seconds is just not acceptable. The database is heavily populated, with approximately 50k hosts, 1000 items and 400,000 host/item relations, and it will likely grow significantly when (or perhaps if) the application is used. After playing around, I found that by removing the "ORDER BY count DESC" part, both queries would execute instantly, without any delay whatsoever (less than 20 ms to finish). Is there any way I can optimize these queries so that I can get the result sorted without the delay? I have tried different indexes, but seeing as the count is computed, it isn't possible to use an index for it directly. I have read that count()'ing in PostgreSQL is slow, but it's the sorting that is causing me problems... My current workaround is to run the queries above as an hourly job, putting the result into a new table with an index on the count column for quick lookup. I use PostgreSQL 9.2.
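
    One hedged option that formalizes the hourly-job workaround: on PostgreSQL 9.3 or later (the question says 9.2, so this would require an upgrade), a materialized view with an index on the count column gives instant sorted reads at the cost of periodic refreshes. A sketch for the items report; the hosts report is symmetric:

        -- Sketch (PostgreSQL 9.3+): precompute the counts and index them.
        CREATE MATERIALIZED VIEW item_host_counts AS
          SELECT i.id, i.description, COUNT(hi.id) AS count
          FROM items i
          LEFT JOIN host_item hi ON hi.item = i.id
          GROUP BY i.id;
        CREATE INDEX ON item_host_counts (count DESC);
        -- Re-run periodically (e.g. from cron):
        REFRESH MATERIALIZED VIEW item_host_counts;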


  • How to map combinations of things to a relational database?

    - by Space_C0wb0y
    I have a table whose records represent certain objects. For the sake of simplicity I am going to assume that the table only has one column, and that is the unique ObjectId. Now I need a way to store combinations of objects from that table. The combinations have to be unique, but can be of arbitrary length. For example, if I have the ObjectIds 1, 2, 3, 4, I want to store the following combinations: {1,2}, {1,3,4}, {2,4}, {1,2,3,4}. The ordering is not necessary. My current implementation is to have a table Combinations that maps ObjectIds to CombinationIds, so every combination receives a unique Id:

        ObjectId | CombinationId
        ------------------------
        1        | 1
        2        | 1
        1        | 2
        3        | 2
        4        | 2

    This is the mapping for the first two combinations of the example above. The problem is that the query for finding the CombinationId of a specific combination seems to be very complex. The two main usage scenarios for this table will be to iterate over all combinations, and to retrieve a specific combination. The table will be created once and never be updated. I am using SQLite through JDBC. Is there any simpler way, or a best practice, to implement such a mapping?
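
    For the exact-match lookup, the standard trick is relational division: group by CombinationId, require that every member is in the wanted set, and that the group size equals the set size. A hedged sketch for the combination {1,3,4}, which runs as-is in SQLite:

        -- Sketch: the CombinationId whose members are exactly {1,3,4}.
        SELECT CombinationId
        FROM Combinations
        GROUP BY CombinationId
        HAVING SUM(CASE WHEN ObjectId IN (1, 3, 4) THEN 1 ELSE 0 END) = 3
           AND COUNT(*) = 3;

    Since the table is write-once, an alternative worth weighing is storing a canonical signature per combination (the sorted ids joined into a string) in a second, uniquely indexed column, which turns the lookup into a single equality test.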


  • How to run an .exe application on another computer?

    - by ADAM
    I am working on a C# application in Visual Studio 2013. When I run the .exe file on my computer, the application runs very well and all the features work. When I try to run the .exe on another computer, the database side doesn't work and the connection with the database can't be opened. The SqlConnection is constructed as follows:

        SqlConnection cn = new SqlConnection("Data Source=ADAM-PC;Initial Catalog=integrationdatabase;Integrated Security=True");

    I don't know how to change the data source so that the connection to the database can be established from another computer. How can I solve this problem?


  • How do I replace NOT EXISTS with JOIN?

    - by YelizavetaYR
    I've got the following query:

        select distinct a.id, a.name
        from Employee a
        join Dependencies b on a.id = b.eid
        where not exists (
            select * from Dependencies d
            where b.id = d.id and d.name = 'Apple'
        )
        and exists (
            select * from Dependencies c
            where b.id = c.id and c.name = 'Orange'
        );

    I have two tables, relatively simple. The first, Employee, has an id column and a name column. The second, Dependencies, has 3 columns: an id, an eid (the employee id to link on), and a name (Apple, Orange, etc.). The data looks like this:

        Employee
        id | name
        -----------
        1  | Pat
        2  | Tom
        3  | Rob
        4  | Sam

        Dependencies
        id | eid | Name
        --------------------
        1  | 1   | Orange
        2  | 1   | Apple
        3  | 2   | Strawberry
        4  | 2   | Apple
        5  | 3   | Orange
        6  | 3   | Banana

    As you can see, Pat has both Orange and Apple, so he needs to be excluded, and it has to be done via joins; I can't seem to get it to work. Ultimately the query should return only Rob.
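
    A hedged sketch of the rewrite: the NOT EXISTS becomes a LEFT JOIN anti-join (keep employees whose 'Apple' join found nothing), and the EXISTS becomes a plain inner join on 'Orange'. Against the sample data this returns only Rob:

        -- Sketch: inner join requires an Orange row; the LEFT JOIN plus
        -- IS NULL filter rejects anyone who also has an Apple row.
        SELECT DISTINCT a.id, a.name
        FROM Employee a
        JOIN Dependencies o  ON o.eid  = a.id AND o.name  = 'Orange'
        LEFT JOIN Dependencies ap ON ap.eid = a.id AND ap.name = 'Apple'
        WHERE ap.eid IS NULL;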


  • Does normalization really hurt performance in high traffic sites?

    - by Luke101
    I am designing a database and I would like to normalize it. In one query I will be joining about 30-40 tables. Will this hurt the website's performance if it ever becomes extremely popular? This will be the main query, and it will be getting called 50% of the time. In the other queries I will be joining about 2 tables. I have a choice right now to normalize or not to normalize, but if the normalization becomes a problem in the future I may have to rewrite 40% of the software, and that could take me a long time. Does normalization really hurt in this case? Should I denormalize now, while I have the time?


  • Does anybody have any suggestions on which of these two approaches is better for a large delete?

    - by RPS
    Approach #1:

        DECLARE @count int
        SET @count = 2000

        DECLARE @rowcount int
        SET @rowcount = @count

        WHILE @rowcount = @count
        BEGIN
            DELETE TOP (@count) FROM ProductOrderInfo
            WHERE ProductId = @product_id
              AND bCopied = 1
              AND FileNameCRC = @localNameCrc

            SELECT @rowcount = @@ROWCOUNT
            WAITFOR DELAY '000:00:00.400'
        END

    Approach #2:

        DECLARE @count int
        SET @count = 2000

        DECLARE @rowcount int
        SET @rowcount = @count

        WHILE @rowcount = @count
        BEGIN
            DELETE FROM ProductOrderInfo
            WHERE ProductId = @product_id
              AND FileNameCRC IN (
                SELECT TOP (@count) FileNameCRC
                FROM ProductOrderInfo WITH (NOLOCK)
                WHERE bCopied = 1
                  AND FileNameCRC = @localNameCrc
              )

            SELECT @rowcount = @@ROWCOUNT
            WAITFOR DELAY '000:00:00.400'
        END


  • Speed up a web service for autocomplete and avoid too many method calls

    - by jphenow
    So I've got my jQuery autocomplete 'working,' but it's a little fidgety, since I call the web service method each time a keydown() fires. I get lots of methods hanging, and sometimes, to get the "auto" to work, I have to type it out and backspace a bit, because I'm assuming the return value arrives a little slowly. I've limited the query results to 8 to minimize time. Is there anything I can do to make this a little snappier? This thing seems near useless if I don't get it a little more responsive.

    javascript:

        $("#clientAutoNames").keydown(function () {
            $.ajax({
                type: "POST",
                url: "WebService.asmx/LoadData",
                data: "{'input':" + JSON.stringify($("#clientAutoNames").val()) + "}",
                contentType: "application/json; charset=utf-8",
                dataType: "json",
                success: function (data) {
                    if (data.d != null) {
                        var serviceScript = data.d;
                    }
                    $("#autoNames").html(serviceScript);
                    $('#clientAutoNames').autocomplete({
                        minLength: 2,
                        source: autoNames,
                        delay: 100,
                        focus: function (event, ui) {
                            $('#project').val(ui.item.label);
                            return false;
                        },
                        select: function (event, ui) {
                            $('#clientAutoNames').val(ui.item.label);
                            $('#projectid').val(ui.item.value);
                            $('#project-description').html(ui.item.desc);
                            pkey = $('#project-id').val;
                            return false;
                        }
                    })
                    .data("autocomplete")._renderItem = function (ul, item) {
                        return $("<li></li>")
                            .data("item.autocomplete", item)
                            .append("<a>" + item.label + "<br>" + item.desc + "</a>")
                            .appendTo(ul);
                    }
                }
            });
        });

    WebService.asmx:

        <WebMethod()> _
        Public Function LoadData(ByVal input As String) As String
            Dim result As String = "<script>var autoNames = ["
            Dim sqlOut As Data.SqlClient.SqlDataReader
            Dim connstring As String = *Datasource*
            Dim strSql As String = "SELECT TOP 2 * FROM v_Clients WHERE (SearchName Like '" + input + "%') ORDER BY SearchName"
            Dim cnn As Data.SqlClient.SqlConnection = New Data.SqlClient.SqlConnection(connstring)
            Dim cmd As Data.SqlClient.SqlCommand = New Data.SqlClient.SqlCommand(strSql, cnn)
            cnn.Open()
            sqlOut = cmd.ExecuteReader()
            Dim c As Integer = 0
            While sqlOut.Read()
                result = result + "{"
                result = result + "value: '" + sqlOut("ContactID").ToString() + "',"
                result = result + "label: '" + sqlOut("SearchName").ToString() + "',"
                'result = result + "desc: '" + title + " from " + company + "',"
                result = result + "},"
            End While
            result = result + "];</script>"
            sqlOut.Close()
            cnn.Close()
            Return result
        End Function

    I'm sure I'm just going about this slightly wrong, or not doing a good balance of calls or something. Greatly appreciated!
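
    Beyond debouncing the keydown handler, the server side can be made snappier too. A hedged sketch (assuming the view's underlying table and column can be indexed; names are illustrative): return only the columns the widget uses, bind the prefix as a parameter instead of concatenating it (which also closes the SQL-injection hole in LoadData), and index the searched column so the prefix LIKE can seek:

        -- Sketch: an index on the searched column lets LIKE 'abc%' seek
        -- instead of scan. dbo.Clients stands in for the view's base table.
        CREATE INDEX IX_Clients_SearchName ON dbo.Clients (SearchName);

        SELECT TOP 8 ContactID, SearchName
        FROM v_Clients
        WHERE SearchName LIKE @prefix + '%'
        ORDER BY SearchName;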


  • alter mysqldump file before import

    - by julio
    Hi -- I have a mysqldump file, created from an earlier version of a product, that can't be imported into a new version of the product, since the DB structure has changed slightly (mainly a column that was NOT NULL DEFAULT 0 is now UNIQUE KEY DEFAULT NULL). If I just import the old dump file, it will error out, since the column that has default values of 0 now breaks the UNIQUE constraint. It would be easy enough to either manually alter the mysqldump file, or import into a temp table, change it, then copy to the new table. However, is there a way to do this programmatically, so it will be repeatable and not manual? (This will need to happen for many instances of this product.) I'm thinking something like disabling key constraints for the import, then setting all values that = 0 to NULL, then re-enabling the key constraints? Is this possible? Any help appreciated.
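
    A hedged sketch of a repeatable, script-only approach along exactly those lines: load the dump with constraint checks relaxed, normalize the zeros to NULL, and only then apply the new column definition and UNIQUE key. Table and column names below are placeholders for the real ones (a sed pass over the dump file would also work, but editing the data in SQL is less fragile than regex-editing INSERT statements):

        -- Sketch: run from the mysql client; SOURCE is a client command.
        SET FOREIGN_KEY_CHECKS = 0;
        SOURCE old_dump.sql;            -- restores the table with the old definition
        UPDATE mytable SET mycol = NULL WHERE mycol = 0;
        ALTER TABLE mytable
          MODIFY mycol INT NULL DEFAULT NULL,
          ADD UNIQUE KEY uq_mycol (mycol);
        SET FOREIGN_KEY_CHECKS = 1;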

