Search Results

Search found 33316 results on 1333 pages for 'sql team'.


  • How to apply GROUP_CONCAT in a MySQL query

    - by Query Master
    How can I apply GROUP_CONCAT to this query? If you have any idea or an alternate solution, please share it. Any help is appreciated (see the query and the required result below).

    Query:

        SELECT WEEK(cpd.added_date) AS week_no,
               COUNT(cpd.result)    AS death_count
        FROM cron_players_data cpd
        WHERE cpd.player_id = 81
          AND cpd.result = 2
          AND cpd.status = 1
        GROUP BY WEEK(cpd.added_date);

    [query output screenshot]

    Required result: 23,24,25 AS week_no and 2,3,1 AS death_count.
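
    A minimal sketch of one way to do this, wrapping the grouped query in a derived table and applying GROUP_CONCAT to each column (table and column names taken from the question):

        SELECT GROUP_CONCAT(week_no)     AS week_no,      -- e.g. '23,24,25'
               GROUP_CONCAT(death_count) AS death_count   -- e.g. '2,3,1'
        FROM (
            SELECT WEEK(cpd.added_date) AS week_no,
                   COUNT(cpd.result)    AS death_count
            FROM cron_players_data cpd
            WHERE cpd.player_id = 81
              AND cpd.result = 2
              AND cpd.status = 1
            GROUP BY WEEK(cpd.added_date)
        ) AS weekly;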

  • MySQL join problem

    - by David
    Table1 has u_name; Table2 has u_name, u_type and u_admin. Table1.u_name is unique, but none of the three fields in Table2 is unique. For any value of Table1.u_name there are 0 to many rows in Table2 where Table2.u_name equals that value, and 0 or 1 rows where Table2.u_name equals that value AND Table2.u_type = 'S'. What I want: use Table1.u_name to get Table1.*, Table2.u_admin where Table1.u_name = Table2.u_name and Table2.u_type = 'S'. If there is no such row in Table2, I still need to get Table1.*. Please give me some hints. Thank you so much!
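
    A sketch of the usual approach: a LEFT JOIN with the u_type filter in the join condition, so Table1 rows without a matching 'S' row survive with a NULL u_admin:

        SELECT t1.*, t2.u_admin
        FROM Table1 t1
        LEFT JOIN Table2 t2
               ON t2.u_name = t1.u_name
              AND t2.u_type = 'S';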

  • When should we use Views, Temporary Tables and Direct Queries? What are the performance issues…

    - by Shantanu Gupta
    I want to compare the performance of views, temp tables, and direct queries used in a stored procedure. I have a table that gets created every time a certain trigger fires. I know this trigger fires very rarely, and only once at setup time. Now I have to use that trigger-created table in many places to fetch data, and I can confirm that no one makes any changes to it, i.e. it is a read-only table. I have to join its data with multiple other tables for further queries, say:

        SELECT * FROM triggertable

    Using a temp table:

        SELECT ... INTO #tx FROM triggertable JOIN t2 JOIN t3 -- and so on
        SELECT a, b, c FROM #tx   -- do something
        SELECT d, e, f FROM #tx   -- do something
        -- and so on: around 6-7 queries in a row in a stored procedure

    Using a view:

        CREATE VIEW viewname AS
            SELECT ... FROM triggertable JOIN t2 JOIN t3 -- and so on
        SELECT a, b, c FROM viewname   -- do something
        SELECT d, e, f FROM viewname   -- do something
        -- and so on: around 6-7 queries in a row in a stored procedure

    This view could be used in other places as well, so I would create it in the database rather than in the stored procedure.

    Using a direct query:

        SELECT a, b, c FROM (SELECT ... FROM triggertable JOIN t2 JOIN t3 JOIN ...)   -- do something
        SELECT d, e, f FROM (SELECT ... FROM triggertable JOIN t2 JOIN t3 JOIN ...)   -- do something
        -- and so on: around 6-7 queries in a row in a stored procedure

    Now I could use a view, a temporary table, or a direct query in all the upcoming queries. Which would be the best to use in this case?

  • Loading Dimension Tables - Methodologies

    - by Nev_Rahd
    Hello, I've recently been working on a project where I need to populate dimension tables from EDW tables. The EDW tables are Type II, i.e. they maintain historical data. Loading a dim table may draw from multiple EDW tables, or from a single table with multi-level pivoting on attributes. Meaning: there could be 10 records, one per attribute, that need to be pivoted on domain_code to make a single dim row. Among those 10 records there could be attributes that share a domain_code but have different sub_domain_code values, which need a further pivot on the subdomain code. Example: domain codes 01, 02, 03 are a straight pivot on domain code, but domain code 10 has subdomain codes / versions 2006, 2007, 2008, 2009. That means I need to split my source table into two sets: one pivoted on domain_code alone and one on domain_code + version. So far so good. When it comes to loading the dim table: per the design specs for the dimensions (originally written by a third party), for every single attribute change in the EDW, the load should assemble all the related records for that natural key (NK), meaning the new record plus the current values of the other attributes, then build a new dim record from them and insert it. That means if a single extract contains 100 changed records (one per NK), it should assemble 100 + (100 * 9) records to insert into / update the dim table. How good is this approach? The other way I tried is to simply look up the most recent dim record for that NK to get the values of the unchanged attributes, insert the new record, and update the current one. Which is the better approach: assembling records on the source side for each attribute change, or looking at the dim table's most recent record and processing from there? If this doesn't make sense, I'd be happy to elaborate. Thanks
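
    As a sketch of the kind of pivot involved (all table and column names here are hypothetical, since the question doesn't give them), the classic conditional-aggregation form would be:

        SELECT nk,
               MAX(CASE WHEN domain_code = '01' THEN attribute_value END) AS attr_01,
               MAX(CASE WHEN domain_code = '02' THEN attribute_value END) AS attr_02,
               MAX(CASE WHEN domain_code = '03' THEN attribute_value END) AS attr_03,
               -- second-level pivot for the domain code that also carries a version
               MAX(CASE WHEN domain_code = '10' AND sub_domain_code = '2009'
                        THEN attribute_value END) AS attr_10_2009
        FROM edw_source
        GROUP BY nk;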

  • Customizing Mail Message in SSIS Event Handler

    - by Eric Ness
    I want to add an email notification to an SSIS 2005 package event handler. I've added a Send Mail task to the event handler, and I'd like to customize the email body to include things like the error description. I've tried including @[System::ErrorDescription] in the MessageSource field, but the mail message contains only the name of the variable, not the value of ErrorDescription.

  • select from multiple tables but ordering by a datetime field

    - by Chris Mccabe
    I have 3 tables that are unrelated (related only in that each contains data for a different social network). Each has a datetime field, dated. I'm already grouping by hour, as you can see below (this one is for LinkedIn):

        SELECT COUNT(*), DATE_FORMAT(dated, '%Y:%m:%d %H') AS hour
        FROM upd8r_linked_in_accts
        WHERE CAST(dated AS DATE) = '".$start_date."'
        GROUP BY hour

    I would like to know how to get a total across all 3 networks. The tables for the three are:

        CREATE TABLE IF NOT EXISTS `upd8r_facebook_accts` (
          `id` int(11) NOT NULL AUTO_INCREMENT,
          `owner_id` varchar(50) NOT NULL,
          `user_id` int(11) NOT NULL,
          `fb_id` bigint(30) NOT NULL,
          `dated` datetime NOT NULL,
          PRIMARY KEY (`id`)
        ) ENGINE=MyISAM DEFAULT CHARSET=latin1 AUTO_INCREMENT=80 ;

        CREATE TABLE IF NOT EXISTS `upd8r_linked_in_accts` (
          `id` int(11) NOT NULL AUTO_INCREMENT,
          `owner_id` varchar(50) NOT NULL,
          `user_id` int(11) NOT NULL,
          `linked_in` varchar(200) NOT NULL,
          `oauth_secret` varchar(100) NOT NULL,
          `first_count` int(11) NOT NULL,
          `second_count` int(11) NOT NULL,
          `dated` datetime NOT NULL,
          PRIMARY KEY (`id`)
        ) ENGINE=MyISAM DEFAULT CHARSET=latin1 AUTO_INCREMENT=200 ;

        CREATE TABLE IF NOT EXISTS `upd8r_twitter_accts` (
          `id` int(11) NOT NULL AUTO_INCREMENT,
          `owner_id` varchar(50) NOT NULL,
          `user_id` int(11) NOT NULL,
          `twitter` varchar(200) NOT NULL,
          `twitter_secret` varchar(100) NOT NULL,
          `dated` datetime NOT NULL,
          PRIMARY KEY (`id`)
        ) ENGINE=MyISAM DEFAULT CHARSET=latin1 AUTO_INCREMENT=9 ;

    Something like this?

        (SELECT COUNT(*), DATE_FORMAT(dated, '%Y:%m:%d %H') AS hour
         FROM upd8r_linked_in_accts
         WHERE CAST(dated AS DATE) = '".$start_date."')
        UNION ALL
        (SELECT COUNT(*), DATE_FORMAT(dated, '%Y:%m:%d %H') AS hour
         FROM upd8r_facebook_accts
         WHERE CAST(dated AS DATE) = '".$start_date."')
        UNION ALL
        (SELECT COUNT(*), DATE_FORMAT(dated, '%Y:%m:%d %H') AS hour
         FROM upd8r_twitter_accts
         WHERE CAST(dated AS DATE) = '".$start_date."')
        UNION ALL
        GROUP BY hour
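
    A sketch of one way to get a combined per-hour total: UNION ALL the dated values from all three tables inside a derived table, then group the combined rows by hour (keeping the interpolated $start_date from the question, although a bound parameter would be safer):

        SELECT DATE_FORMAT(dated, '%Y:%m:%d %H') AS hour, COUNT(*) AS total
        FROM (
            SELECT dated FROM upd8r_linked_in_accts WHERE CAST(dated AS DATE) = '".$start_date."'
            UNION ALL
            SELECT dated FROM upd8r_facebook_accts  WHERE CAST(dated AS DATE) = '".$start_date."'
            UNION ALL
            SELECT dated FROM upd8r_twitter_accts   WHERE CAST(dated AS DATE) = '".$start_date."'
        ) AS all_networks
        GROUP BY hour;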

  • How to retrieve the ordered list of best articles having a minimum number of votes by using HQL?

    - by fabien7474
    I have a Vote domain class in my Grails application, with properties such as article_id and note. I want to write an HQL query against the Vote domain class to retrieve the 5 best-rated articles having at least 10 votes. I tried:

        SELECT v.article_id, AVG(v.note), COUNT(*)
        FROM vote v
        WHERE COUNT(*) >= 10
        GROUP BY v.article_id
        ORDER BY AVG(v.note) DESC
        LIMIT 5;

    But unfortunately the WHERE COUNT(*) >= 10 clause throws an error. How can I do this in a simple way? Thank you for your help.
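
    A hedged sketch of the usual fix: an aggregate cannot appear in WHERE, so the condition moves to HAVING, and the 5-row limit is applied through the query API (e.g. Hibernate's setMaxResults(5) or a Grails max parameter) rather than a LIMIT keyword inside the HQL:

        SELECT v.article_id, AVG(v.note)
        FROM Vote v
        GROUP BY v.article_id
        HAVING COUNT(*) >= 10
        ORDER BY AVG(v.note) DESC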

  • ASP.Net delete record audit trigger

    - by Germ
    Suppose you have the following: an ASP.NET web application that calls a stored procedure to delete a record. The table has a trigger on it that inserts an audit entry each time a record is deleted. I want the audit entry to record the person who deleted the record. What would be the best way to achieve this? I know I could remove the trigger and have the delete stored procedure insert the audit entry prior to deleting, but are there other recommended alternatives?
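
    A minimal sketch of the alternative mentioned above (the procedure writes the audit row itself before deleting); the table and column names here are hypothetical, since the question doesn't give a schema:

        CREATE PROCEDURE dbo.DeleteWidget
            @WidgetId  INT,
            @DeletedBy NVARCHAR(100)   -- the logged-in user, passed in by the ASP.NET app
        AS
        BEGIN
            SET NOCOUNT ON;

            -- write the audit row first, while the record still exists
            INSERT INTO dbo.WidgetAudit (WidgetId, Action, ChangedBy, ChangedAt)
            SELECT Id, 'DELETE', @DeletedBy, GETDATE()
            FROM dbo.Widget
            WHERE Id = @WidgetId;

            DELETE FROM dbo.Widget WHERE Id = @WidgetId;
        END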

  • LINQ + Find count of consecutive null values

    - by Ashutosh
    I have a table with the below structure:

        ID  VALUE
        1   3.2
        2   NULL
        4   NULL
        5   NULL
        7   NULL
        10  1.8
        11  NULL
        12  3.2
        15  4.7
        17  NULL
        22  NULL
        24  NULL
        25  NULL
        27  NULL
        28  7

    I would like to get the maximum count of consecutive NULL values in the table. Any help would be greatly appreciated. Thanks, Ashutosh

  • SQL only row mapping record fetching

    - by Prasanna
    I have a customer call detail (CDR) table that stores the call details of all customers. I have to find the distinct A-party numbers (meaning our customers) that only call our customers (meaning the B-party numbers are also our numbers). In other words, the A-party (our customer) makes no other domestic or international calls. Could you please help me find this data?

    Sample CDR table columns:

        ANUMBER: any mobile number (domestic or international); for our customers it must start with 70, 070, 0070 or 9370
        BNUMBER: any mobile number (domestic or international); for our customers it must start with 70, 070, 0070 or 9370
        CALLTRANSACTION: e.g. 91, 92, 93, etc.
        CALLTRANSACTIONTYPEC: e.g. MOC, MTC
        FILENAME: MCS_01, etc.
        TIME: any time value

    Required output: the distinct ANUMBERs starting with 70, 070, 0070 or 9370 whose BNUMBERs all start with 70, 070, 0070 or 9370 as well, i.e. customers who only call other customers on our network (no other domestic or international calls).
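
    A sketch of one way to express this, assuming the table is called cdr (the question doesn't name it) and using LIKE prefixes for the in-network test:

        SELECT DISTINCT c.ANUMBER
        FROM cdr c
        WHERE (c.ANUMBER LIKE '70%' OR c.ANUMBER LIKE '070%'
               OR c.ANUMBER LIKE '0070%' OR c.ANUMBER LIKE '9370%')
          -- exclude any A-party that has at least one call to an off-network B-party
          AND NOT EXISTS (
              SELECT 1
              FROM cdr x
              WHERE x.ANUMBER = c.ANUMBER
                AND NOT (x.BNUMBER LIKE '70%' OR x.BNUMBER LIKE '070%'
                         OR x.BNUMBER LIKE '0070%' OR x.BNUMBER LIKE '9370%')
          );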

  • Moving from NHibernate to FluentNHibernate: assembly error (related to versions)?

    - by goober
    Not sure where to start, but I had gotten the most recent version of NHibernate, successfully mapped the simplest of business objects, etc. When trying to move to FluentNHibernate and do the same thing, I got this error message on build:

        System.IO.FileLoadException: Could not load file or assembly 'NHibernate, Version=2.1.0.4000, Culture=neutral, PublicKeyToken=aa95f207798dfdb4' or one of its dependencies. The located assembly's manifest definition does not match the assembly reference.

    Background: I'm new to Hibernate, NHibernate, and FluentNHibernate -- but not to .NET, C#, etc.

    Database: I have a table called Category with:

        (PK) CategoryID (type: int), unique, auto-incrementing
        UserID (type: uniqueidentifier) -- given the value of the user Guid in the ASP.NET database
        Title (type: varchar(50)) -- the title of the category

    Components involved:

        a SessionProviderClass which creates the mapping to the database
        a Category class which has all the virtual methods for FluentNHibernate to override
        a CategoryMap : ClassMap<Category> class, which does the fluent mappings for the entity
        a CategoryRepository class that contains the methods to add and save a category
        the TestCatAdd.aspx file which uses the CategoryRepository class

    I'd be happy to post code for any of those, but I'm not sure it's necessary: I think the issue is that somewhere there's a version conflict between what FluentNHibernate references and the NHibernate version I have installed. Thanks in advance for any help you can give!

  • Database abstraction/adapters for ruby

    - by Stiivi
    What database abstractions/adapters are you using in Ruby? I am mainly interested in data-oriented features, not in object mapping (like ActiveRecord or DataMapper). I am currently using Sequel. Are there any other options? I am mostly interested in:

        a simple, clean and unambiguous API
        data selection (obviously), filtering and aggregation
        raw value selection without field mapping: SELECT col1, col2, col3 => [val1, val2, val3], not a hash of { :col1 => val1, ... }
        an API that takes table schemas ('some_schema.some_table') into account in a consistent (and working) way, plus reflection for this (get the schema from a table)
        database reflection: get the list of table columns, their database storage types, and perhaps the adapter's abstracted types
        table creation and deletion
        being able to work with other tables (insert, update) in a loop enumerating a selection from another table, without having to fetch all records of the enumerated table first

    The purpose is to manipulate data whose structure is unknown at the time the code is written, which is the opposite of object mapping, where the structure (or most of it) is usually well known. I do not need the object-mapping overhead. What are the options, including back-ends for object-mapping libraries?

  • INSERT data from a TextBox into PostgreSQL

    - by user1479013
    I've just learned how to connect C# and PostgreSQL. I want to INSERT data from tb1 and tb2 (TextBoxes) into the database, but I don't know how to write the code; my previous code only SELECTs from the database. This is my code:

        private void button1_Click(object sender, EventArgs e)
        {
            bool blnfound = false;
            NpgsqlConnection conn = new NpgsqlConnection("Server=127.0.0.1;Port=5432;User Id=postgres;Password=admin123;Database=Login");
            conn.Open();
            NpgsqlCommand cmd = new NpgsqlCommand("SELECT * FROM login WHERE name='" + tb1.Text + "' and password = '" + tb2.Text + "'", conn);
            NpgsqlDataReader dr = cmd.ExecuteReader();
            if (dr.Read())
            {
                blnfound = true;
                Form2 f5 = new Form2();
                f5.Show();
                this.Hide();
            }
            if (blnfound == false)
            {
                MessageBox.Show("Name or password is incorrect", "Message Box", MessageBoxButtons.OK, MessageBoxIcon.Exclamation, MessageBoxDefaultButton.Button1);
                dr.Close();
                conn.Close();
            }
        }

    So please help me with the code.
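
    The SQL itself would be an INSERT of the form below, sketched against the login table from the question with placeholder values; in the C# code the TextBox values should be supplied through Npgsql command parameters rather than string concatenation, which is open to SQL injection:

        INSERT INTO login (name, password)
        VALUES ('some_name', 'some_password');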

  • Can MySQL reasonably perform queries on billions of rows?

    - by haxney
    I am planning on storing scans from a mass spectrometer in a MySQL database and would like to know whether storing and analyzing this amount of data is remotely feasible. I know performance varies wildly depending on the environment, but I'm looking for the rough order of magnitude: will queries take 5 days or 5 milliseconds?

    Input format: each input file contains a single run of the spectrometer; each run is comprised of a set of scans, and each scan has an ordered array of datapoints. There is a bit of metadata, but the majority of the file is comprised of arrays of 32- or 64-bit ints or floats.

    Host system:

        | OS             | Windows 2008 64-bit           |
        | MySQL version  | 5.5.24 (x86_64)               |
        | CPU            | 2x Xeon E5420 (8 cores total) |
        | RAM            | 8GB                           |
        | SSD filesystem | 500 GiB                       |
        | HDD RAID       | 12 TiB                        |

    There are some other services running on the server using negligible processor time.

    File statistics:

        | number of files  | ~16,000      |
        | total size       | 1.3 TiB      |
        | min size         | 0 bytes      |
        | max size         | 12 GiB       |
        | mean             | 800 MiB      |
        | median           | 500 MiB      |
        | total datapoints | ~200 billion |

    The total number of datapoints is a very rough estimate.

    Proposed schema: I'm planning on doing things "right" (i.e. normalizing the data like crazy) and so would have a runs table, a spectra table with a foreign key to runs, and a datapoints table with a foreign key to spectra.

    The 200 billion datapoint question: I am going to be analyzing across multiple spectra and possibly even multiple runs, resulting in queries which could touch millions of rows. Assuming I index everything properly (which is a topic for another question) and am not trying to shuffle hundreds of MiB across the network, is it remotely plausible for MySQL to handle this?

    UPDATE, additional info: the scan data will be coming from files in the XML-based mzML format. The meat of this format is in the <binaryDataArrayList> elements where the data is stored. Each scan produces two or more <binaryDataArray> elements which, taken together, form a 2-dimensional (or more) array of the form [[123.456, 234.567, ...], ...]. These data are write-once, so update performance and transaction safety are not concerns.

    My naive plan for a database schema is:

    runs table:

        | column name | type        |
        | id          | PRIMARY KEY |
        | start_time  | TIMESTAMP   |
        | name        | VARCHAR     |

    spectra table:

        | column name    | type        |
        | id             | PRIMARY KEY |
        | name           | VARCHAR     |
        | index          | INT         |
        | spectrum_type  | INT         |
        | representation | INT         |
        | run_id         | FOREIGN KEY |

    datapoints table:

        | column name | type        |
        | id          | PRIMARY KEY |
        | spectrum_id | FOREIGN KEY |
        | mz          | DOUBLE      |
        | num_counts  | DOUBLE      |
        | index       | INT         |

    Is this reasonable?
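
    A sketch of that schema as MySQL DDL; the concrete types are assumptions (the question only specifies PRIMARY KEY / FOREIGN KEY, and BIGINT keys are assumed given ~200 billion rows), and `index` needs backticks because INDEX is a reserved word:

        CREATE TABLE runs (
            id         BIGINT AUTO_INCREMENT PRIMARY KEY,
            start_time TIMESTAMP,
            name       VARCHAR(255)
        );

        CREATE TABLE spectra (
            id             BIGINT AUTO_INCREMENT PRIMARY KEY,
            name           VARCHAR(255),
            `index`        INT,
            spectrum_type  INT,
            representation INT,
            run_id         BIGINT,
            FOREIGN KEY (run_id) REFERENCES runs (id)
        );

        CREATE TABLE datapoints (
            id          BIGINT AUTO_INCREMENT PRIMARY KEY,
            spectrum_id BIGINT,
            mz          DOUBLE,
            num_counts  DOUBLE,
            `index`     INT,
            FOREIGN KEY (spectrum_id) REFERENCES spectra (id)
        );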

  • MD5 hash validation failing for unknown reason in PHP

    - by Sennheiser
    I'm writing a login form that converts the given password to an MD5 hash with md5($password), then matches it against an already-hashed record in my database. I know for sure that the database record is correct in this case. However, it doesn't log me in and claims the password is incorrect. Here's my code:

        $password = mysql_real_escape_string($_POST["password"]);
        // ...more code...
        $passwordQuery = mysql_fetch_row(mysql_query(("SELECT password FROM users WHERE email = '$userEmail'")));
        // ...some code...
        elseif (md5($password) != $passwordQuery) {
            $_SESSION["noPass"] = "That password is incorrect.";
        }
        // ...more code after...

    I tried pulling just the value of md5($password), and it matched up when I compared it visually. However, I can't get the comparison to work in PHP. Perhaps it is because the MySQL record is stored as text, and the MD5 is something else?

  • MySQL SUM query for daily values of a week returns nothing

    - by davykiash
    I'm trying to return the sum for each day of a week in MySQL, but this returns nothing despite there being values for the third week of March 2010:

        SELECT SUM(expense_details_amount) AS total
        FROM expense_details
        WHERE YEAR(expense_details_date) = '2010'
          AND MONTH(expense_details_date) = '03'
          AND WEEK(expense_details_date) = '3'
        GROUP BY DAY(expense_details_date)

    How do I go about this?
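
    One likely culprit, offered as a hedged sketch: MySQL's WEEK() returns the week of the year (0-53), not the week of the month, so the third week of March 2010 falls around week 11 of the year and WEEK(...) = '3' matches nothing. Something like this uses the week-of-year value instead:

        SELECT SUM(expense_details_amount) AS total
        FROM expense_details
        WHERE YEAR(expense_details_date) = 2010
          AND WEEK(expense_details_date) = 11   -- week of the year, not of the month
        GROUP BY DAY(expense_details_date);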

  • LINQ query with multiple where clauses

    - by Dan
    I am trying to query my StatusUpdates repository using the following:

        var result = (from s in _dataContext.StatusUpdates
                      where s.Username == "friend1" && s.Username == "friend2" // etc...
                      select s).ToList();

    Instead of writing s.Username == "friendN" continuously, is there any way I can pass a list or an array or something like that rather than specifying each one, or can I use a foreach loop in the middle of the query? Thanks

  • Search sort by parameter match count in the query? PostgreSQL

    - by Ben Dauphinee
    I am working on a search query in PostgreSQL, and one of the things I need to do is sort my query results by the number of parameters matched. I have no clue how this can be done. Does anyone have a suggestion or solution?

    Table:

        brand     color   type    engine
        Ford      Blue    4-door  V8
        Maserati  Blue    2-door  V12
        Saturn    Green   4-door  V8
        GM        Yellow  1-door  V4

    Current query:

        SELECT brand FROM table
        WHERE color = 'Blue' OR type = '4-door' OR engine = 'V8'

    The result should be:

        Ford     (3 matches)
        Saturn   (2 matches)
        Maserati (1 match)
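
    A sketch of one common PostgreSQL approach: cast each boolean test to an integer and sort by the sum (cars stands in for the unnamed table, since table itself is a reserved word):

        SELECT brand,
               ((color  = 'Blue')::int
                + (type   = '4-door')::int
                + (engine = 'V8')::int) AS matches
        FROM cars
        WHERE color = 'Blue' OR type = '4-door' OR engine = 'V8'
        ORDER BY matches DESC;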

  • Drawbacks of dynamic queries in SQL Server 2005?

    - by KuldipMCA
    I use many dynamic queries in my database procedures, because my filter is not fixed, so I take @Filter as a parameter and pass it into the procedure:

        DECLARE @query AS varchar(8000)
        DECLARE @Filter AS varchar(1000)
        SET @query = 'SELECT * FROM Person.Address WHERE 1=1 AND ' + @Filter
        EXEC(@query)

    The filter can compare against any field of the table. Will this affect my performance? Is there an alternate way to achieve this kind of thing?
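
    For comparison, a hedged sketch of the commonly recommended alternative: when the filter values (rather than the whole predicate) can be parameterized, sp_executesql allows plan reuse and keeps the values out of the SQL string (the City predicate here is just an example against Person.Address):

        DECLARE @query nvarchar(4000);
        SET @query = N'SELECT * FROM Person.Address WHERE City = @City';
        EXEC sp_executesql @query,
                           N'@City nvarchar(30)',
                           @City = N'Seattle';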

  • Get the column name of a MySQL entry

    - by Xelluloid
    Is there a way to get the name of the column a database entry belongs to? Say I have three columns with names col1, col2 and col3. For every row, I want to select the name of the column holding the maximum entry, something like:

        SELECT name_of_column(max(col1, col2, col3)) ...

    I know I can ask for the name of a column by its ordinal position in the information_schema.COLUMNS table, but how do I get the ordinal position of a database entry within a table?
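
    A sketch of one way to get that effect without information_schema, using MySQL's GREATEST() inside a CASE expression (t is a hypothetical table name):

        SELECT CASE GREATEST(col1, col2, col3)
                   WHEN col1 THEN 'col1'
                   WHEN col2 THEN 'col2'
                   ELSE 'col3'
               END AS name_of_max_column
        FROM t;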

  • MySQL stored procedure updating more than one row

    - by Both FM
    In MySQL,

        UPDATE `inventoryentry` SET `Status` = 1
        WHERE `InventoryID` = 92 AND `ItemID` = 28;

    successfully updates only one row (where InventoryID = 92 and ItemID = 28), and the following message is displayed:

        1 row(s) affected

    When I put this in a stored procedure, as follows:

        CREATE DEFINER=`root`@`localhost` PROCEDURE `Sample`(IN itemId INT, IN itemQnty DOUBLE, IN invID INT)
        BEGIN
            DECLARE crntQnty DOUBLE;
            DECLARE nwQnty DOUBLE;
            SET crntQnty = (SELECT `QuantityOnHand` FROM `item` WHERE id = itemId);
            SET nwQnty = itemQnty + crntQnty;
            UPDATE `item` SET `QuantityOnHand` = nwQnty WHERE `Id` = itemId;
            UPDATE `inventoryentry` SET `Status` = 1 WHERE `InventoryID` = invID AND `ItemID` = itemId;
        END$$

    and call it with:

        CALL Sample(28, 10, 92)

    it sets Status = 1 on every row in inventoryentry with InventoryID = 92, ignoring ItemID, instead of updating only one row. The following message is displayed:

        5 row(s) affected

    Why does the stored procedure ignore itemId in the UPDATE statement, i.e. why does it update more than one row? Without the stored procedure it works fine.
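
    A hedged note on the likely cause: MySQL resolves identifiers case-insensitively here, so inside the procedure the parameter itemId collides with the column ItemID, and `ItemID` = itemId effectively compares the column with itself, which is true for every row with InventoryID = 92. The usual fix is to rename the parameters so they cannot shadow column names, e.g.:

        CREATE DEFINER=`root`@`localhost` PROCEDURE `Sample`(IN p_item_id INT, IN p_item_qnty DOUBLE, IN p_inv_id INT)
        BEGIN
            -- body as before, using the renamed parameters
            UPDATE `inventoryentry` SET `Status` = 1
            WHERE `InventoryID` = p_inv_id AND `ItemID` = p_item_id;
        END$$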
