Search Results

Search found 1805 results on 73 pages for 'varchar'.


  • How can I optimize this subqueried and Joined MySQL Query?

    - by kevzettler
    I'm pretty green on MySQL and I need some tips on cleaning up a query. It is used in several variations throughout a site. It's got some subqueries, derived tables, and fun going on. Here's the query:

    # Query_time: 2  Lock_time: 0  Rows_sent: 0  Rows_examined: 0
    SELECT *
    FROM (
        SELECT products.*,
               categories.category_name AS category,
               (SELECT COUNT(*) FROM distros
                 WHERE distros.product_id = products.product_id) AS distro_count,
               (SELECT COUNT(*) FROM downloads
                 WHERE downloads.product_id = products.product_id
                   AND WEEK(downloads.date) = WEEK(curdate())) AS true_downloads,
               (SELECT COUNT(*) FROM views
                 WHERE views.product_id = products.product_id
                   AND WEEK(views.date) = WEEK(curdate())) AS true_views
        FROM products
        INNER JOIN categories ON products.category_id = categories.category_id
        ORDER BY created_date DESC, true_views DESC
    ) AS count_table
    WHERE count_table.distro_count > 0
      AND count_table.status = 'published'
      AND count_table.active = 1
    LIMIT 0, 8

    Here's the EXPLAIN:

    | id | select_type | table | type | possible_keys | key | key_len | ref | rows | Extra |
    | 1 | PRIMARY | <derived2> | ALL | NULL | NULL | NULL | NULL | 232 | Using where |
    | 2 | DERIVED | categories | index | PRIMARY | idx_name | 47 | NULL | 13 | Using index; Using temporary; Using filesort |
    | 2 | DERIVED | products | ref | category_id | category_id | 4 | digizald_db.categories.category_id | 9 | |
    | 5 | DEPENDENT SUBQUERY | views | ref | product_id | product_id | 4 | digizald_db.products.product_id | 46 | Using where |
    | 4 | DEPENDENT SUBQUERY | downloads | ref | product_id | product_id | 4 | digizald_db.products.product_id | 14 | Using where |
    | 3 | DEPENDENT SUBQUERY | distros | ref | product_id | product_id | 4 | digizald_db.products.product_id | 1 | Using index |
    6 rows in set (0.04 sec)

    And the tables:

    mysql> describe products;
    | Field | Type | Null | Key | Default | Extra |
    | product_id | int(10) unsigned | NO | PRI | NULL | auto_increment |
    | product_key | char(32) | NO | | NULL | |
    | title | varchar(150) | NO | | NULL | |
    | company | varchar(150) | NO | | NULL | |
    | user_id | int(10) unsigned | NO | MUL | NULL | |
    | description | text | NO | | NULL | |
    | video_code | text | NO | | NULL | |
    | category_id | int(10) unsigned | NO | MUL | NULL | |
    | price | decimal(10,2) | NO | | NULL | |
    | quantity | int(10) unsigned | NO | | NULL | |
    | downloads | int(10) unsigned | NO | | NULL | |
    | views | int(10) unsigned | NO | | NULL | |
    | status | enum('pending','published','rejected','removed') | NO | | NULL | |
    | active | tinyint(1) | NO | | NULL | |
    | deleted | tinyint(1) | NO | | NULL | |
    | created_date | datetime | NO | | NULL | |
    | modified_date | timestamp | NO | | CURRENT_TIMESTAMP | |
    | scrape_source | varchar(215) | YES | | NULL | |
    18 rows in set (0.00 sec)

    mysql> describe categories;
    | Field | Type | Null | Key | Default | Extra |
    | category_id | int(10) unsigned | NO | PRI | NULL | auto_increment |
    | category_name | varchar(45) | NO | MUL | NULL | |
    | parent_id | int(10) unsigned | YES | MUL | NULL | |
    | category_type_id | int(10) unsigned | NO | | NULL | |
    4 rows in set (0.00 sec)

    mysql> describe compatibilities;
    | Field | Type | Null | Key | Default | Extra |
    | compatibility_id | int(10) unsigned | NO | PRI | NULL | auto_increment |
    | name | varchar(45) | NO | | NULL | |
    | code_name | varchar(45) | NO | | NULL | |
    | description | varchar(128) | NO | | NULL | |
    | position | int(10) unsigned | NO | | NULL | |
    5 rows in set (0.01 sec)

    mysql> describe distros;
    | Field | Type | Null | Key | Default | Extra |
    | id | int(10) unsigned | NO | PRI | NULL | auto_increment |
    | product_id | int(10) unsigned | NO | MUL | NULL | |
    | compatibility_id | int(10) unsigned | NO | MUL | NULL | |
    | user_id | int(10) unsigned | NO | | NULL | |
    | status | enum('pending','published','rejected','removed') | NO | | NULL | |
    | distro_type | enum('file','url') | NO | | NULL | |
    | version | varchar(150) | NO | | NULL | |
    | filename | varchar(50) | YES | | NULL | |
    | url | varchar(250) | YES | | NULL | |
    | virus | enum('READY','PASS','FAIL') | YES | | NULL | |
    | downloads | int(10) unsigned | NO | | 0 | |
    11 rows in set (0.01 sec)

    mysql> describe downloads;
    | Field | Type | Null | Key | Default | Extra |
    | id | int(10) unsigned | NO | PRI | NULL | auto_increment |
    | product_id | int(10) unsigned | NO | MUL | NULL | |
    | distro_id | int(10) unsigned | NO | MUL | NULL | |
    | user_id | int(10) unsigned | NO | MUL | NULL | |
    | ip_address | varchar(15) | NO | | NULL | |
    | date | datetime | NO | | NULL | |
    6 rows in set (0.01 sec)

    mysql> describe views;
    | Field | Type | Null | Key | Default | Extra |
    | id | int(10) unsigned | NO | PRI | NULL | auto_increment |
    | product_id | int(10) unsigned | NO | MUL | NULL | |
    | user_id | int(10) unsigned | NO | MUL | NULL | |
    | ip_address | varchar(15) | NO | | NULL | |
    | date | datetime | NO | | NULL | |
    5 rows in set (0.00 sec)
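    One possible direction, sketched rather than tested: push the status/active filters into the inner query so fewer rows are examined before the LIMIT, and let an INNER JOIN to distros replace the distro_count subquery (the join already guarantees distro_count > 0). Table and column names come from the DESCRIBE output above; whether this helps depends on the data, so compare EXPLAIN output before and after.

    -- Sketch of a rewrite: filter early, join instead of counting distros
    -- in a correlated subquery. The INNER JOIN to distros drops products
    -- with no distro, so the old "distro_count > 0" test is implicit.
    SELECT p.*,
           c.category_name AS category,
           COUNT(DISTINCT d.id) AS distro_count,
           (SELECT COUNT(*) FROM downloads dl
             WHERE dl.product_id = p.product_id
               AND WEEK(dl.date) = WEEK(curdate())) AS true_downloads,
           (SELECT COUNT(*) FROM views v
             WHERE v.product_id = p.product_id
               AND WEEK(v.date) = WEEK(curdate())) AS true_views
    FROM products p
    INNER JOIN categories c ON c.category_id = p.category_id
    INNER JOIN distros d ON d.product_id = p.product_id
    WHERE p.status = 'published'
      AND p.active = 1
    GROUP BY p.product_id
    ORDER BY p.created_date DESC, true_views DESC
    LIMIT 0, 8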


  • PHP-MySQL: Arranging rows from separate tables together/Expression to determine row origin

    - by Koroviev
    I'm new to PHP and have a two-part question. I need to take rows from two separate tables and arrange them in descending order by their date. The rows do not correspond in order or number and have no relationship with each other.
    ---EDIT---
    They each contain updates on a site: one table holds text, links, dates, titles, etc. from a blog; the other has titles, links, specifications, etc. from images. I want to arrange some basic information (title, date, small description) in an updates section on the main page of the site, and for it to be in order of date. Merging them into one table and modifying it to suit both types isn't what I'd like to do here; the blog table is WordPress' standard wp_posts and I don't feel comfortable adding columns to make it suit the image table too. I'm afraid it could clash with upgrading later on, and it seems like a clumsy solution (but that doesn't mean I'll object if people here advise me it's the best solution).
    ------EDIT 2------
    Here are the DESCRIBEs of each table:

    mysql> describe images;
    | Field | Type | Null | Key | Default | Extra |
    | id | int(11) | NO | PRI | NULL | auto_increment |
    | project | varchar(255) | NO | | NULL | |
    | title | varchar(255) | NO | | NULL | |
    | time | timestamp | NO | | CURRENT_TIMESTAMP | |
    | img_url | varchar(255) | NO | | NULL | |
    | alt_txt | varchar(255) | YES | | NULL | |
    | text | text | YES | | NULL | |
    | text_id | int(11) | YES | | NULL | |

    mysql> DESCRIBE wp_posts;
    | Field | Type | Null | Key | Default | Extra |
    | ID | bigint(20) unsigned | NO | PRI | NULL | auto_increment |
    | post_author | bigint(20) unsigned | NO | | 0 | |
    | post_date | datetime | NO | | 0000-00-00 00:00:00 | |
    | post_date_gmt | datetime | NO | | 0000-00-00 00:00:00 | |
    | post_content | longtext | NO | | NULL | |
    | post_title | text | NO | | NULL | |
    | post_excerpt | text | NO | | NULL | |
    | post_status | varchar(20) | NO | | publish | |
    | comment_status | varchar(20) | NO | | open | |
    | ping_status | varchar(20) | NO | | open | |
    | post_password | varchar(20) | NO | | | |
    | post_name | varchar(200) | NO | MUL | | |
    | to_ping | text | NO | | NULL | |
    | pinged | text | NO | | NULL | |
    | post_modified | datetime | NO | | 0000-00-00 00:00:00 | |
    | post_modified_gmt | datetime | NO | | 0000-00-00 00:00:00 | |
    | post_content_filtered | text | NO | | NULL | |
    | post_parent | bigint(20) unsigned | NO | MUL | 0 | |
    | guid | varchar(255) | NO | | | |
    | menu_order | int(11) | NO | | 0 | |
    | post_type | varchar(20) | NO | MUL | post | |
    | post_mime_type | varchar(100) | NO | | | |
    | comment_count | bigint(20) | NO | | 0 | |
    ---END EDIT---
    I can do this easily with a single table like this (I include it here in case I'm using an over-elaborate method without knowing it):

    $content = mysql_query("SELECT post_title, post_text, post_date FROM posts ORDER BY post_date DESC");
    while ($row = mysql_fetch_array($content)) {
        echo $row['post_date'], $row['post_title'], $row['post_text'];
    }

    But how is it possible to call both tables into the same array to arrange them correctly? By correctly, I mean that they will intermix their echoed results based on their date. Maybe I'm looking at this from the wrong perspective, and calling them to a single array isn't the answer? Additionally, I need a way to form a conditional expression based on which table a row came from, so that rows from table 1 get echoed differently than rows from table 2. I want results from table 1 to be echoed differently (with different strings concatenated around them, I mean) for the purpose of styling them differently than those from table 2, and vice versa. I know an if...else statement would work here, but I have no idea how I can write the expression that would determine which table the row is from. All and any help is appreciated, thanks.
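    One common answer, sketched under the assumption that images.time and wp_posts.post_date are the two dates to merge on: UNION ALL the two tables into one result set with matching column aliases, add a literal column recording each row's origin, and let MySQL sort the combined set. The column pairings (post_excerpt with text, for instance) and the WordPress filter on post_status/post_type are assumptions for illustration.

    SELECT post_title   AS title,
           post_date    AS date,
           post_excerpt AS summary,
           'blog'       AS origin      -- literal marker: this row came from wp_posts
    FROM wp_posts
    WHERE post_status = 'publish' AND post_type = 'post'
    UNION ALL
    SELECT title,
           time AS date,
           text AS summary,
           'image' AS origin           -- literal marker: this row came from images
    FROM images
    ORDER BY date DESC

    In the PHP loop, the origin column then drives the conditional styling: if ($row['origin'] == 'blog') { ... } else { ... }.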


  • Why to avoid SELECT * from tables in your Views

    - by Jeff Smith
    -- clean up any messes left over from before:
    if OBJECT_ID('AllTeams') is not null
        drop view AllTeams
    go
    if OBJECT_ID('Teams') is not null
        drop table Teams
    go
    -- sample table:
    create table Teams
    (
        id int primary key,
        City varchar(20),
        TeamName varchar(20)
    )
    go
    -- sample data:
    insert into Teams (id, City, TeamName)
    select 1, 'Boston', 'Red Sox'
    union all
    select 2, 'New York', 'Yankees'
    go
    create view AllTeams
    as
        select * from Teams
    go
    select * from AllTeams

    --Results:
    --
    --id          City                 TeamName
    ------------- -------------------- --------------------
    --1           Boston               Red Sox
    --2           New York             Yankees

    -- Now, add a new column to the Teams table:
    alter table Teams add League varchar(10)
    go
    -- put some data in there:
    update Teams set League = 'AL'

    -- run it again
    select * from AllTeams

    --Results:
    --
    --id          City                 TeamName
    ------------- -------------------- --------------------
    --1           Boston               Red Sox
    --2           New York             Yankees

    -- Notice that League is not displayed!

    -- Here's an even worse scenario, when the table gets altered in ways beyond adding columns:
    drop table Teams
    go
    -- recreate table putting the League column before the City:
    -- (i.e., simulate re-ordering and/or inserting a column)
    create table Teams
    (
        id int primary key,
        League varchar(10),
        City varchar(20),
        TeamName varchar(20)
    )
    go
    -- put in some data:
    insert into Teams (id, League, City, TeamName)
    select 1, 'AL', 'Boston', 'Red Sox'
    union all
    select 2, 'AL', 'New York', 'Yankees'

    -- Now, select again from our view:
    select * from AllTeams

    --Results:
    --
    --id          City       TeamName
    ------------- ---------- --------------------
    --1           AL         Boston
    --2           AL         New York

    -- The column labeled "City" in the view is actually the League, and the column labelled TeamName is actually the City!
    go
    -- clean up:
    drop view AllTeams
    drop table Teams
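    Two standard remedies, sketched here rather than taken from the post itself: list the columns explicitly in the view, or, if a SELECT * view must stay, refresh its cached column metadata with the built-in sp_refreshview system procedure every time the base table changes.

    -- Remedy 1: an explicit column list; new or reordered base-table
    -- columns can no longer silently shift the view's output.
    create view AllTeams
    as
        select id, City, TeamName, League from Teams
    go

    -- Remedy 2: keep SELECT *, but refresh the view's cached metadata
    -- after every ALTER TABLE on the base table:
    exec sp_refreshview 'AllTeams'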


  • Query a Log4Net-database

    - by pinhack
    So if you use Log4Net to log into a database (i.e. using the AdoNetAppender), how can you conveniently get an overview of what has happened? Well, you could try the following query (T-SQL):

    SELECT convert(varchar(10), LogDB.Date, 121) as Datum,
           LogDB.Level,
           LogDB.Logger,
           COUNT(LogDB.Logger) as Counter
    FROM Log4Net.dbo.Log as LogDB
    WHERE Level <> 'DEBUG'
      AND convert(varchar(10), LogDB.Date, 121) like '2010-03-25'
    GROUP BY convert(varchar(10), LogDB.Date, 121), LogDB.Level, LogDB.Logger
    ORDER BY Counter desc

    This query will give you the number of events by logger at a specified date, and it's easy to customize: just adjust the Date and the Level to your needs. You need a bit more information than that? How about this query:

    SELECT convert(varchar(10), LogDB.Date, 121) as Datum,
           LogDB.Level,
           LogDB.Message,
           LogDB.Logger,
           count(LogDB.Message) as Counter
    FROM Log4Net.dbo.Log as LogDB
    WHERE Level <> 'DEBUG'
      AND convert(varchar(10), LogDB.Date, 121) like '2010-03-25'
    GROUP BY convert(varchar(10), LogDB.Date, 121), LogDB.Level, LogDB.Message, LogDB.Logger
    ORDER BY Counter desc

    Similar to the first one, but including the Message, which will return a much larger result set.
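    A side note, not from the original post: wrapping LogDB.Date in convert() keeps SQL Server from using any index on that column. If the Log table has (or gets) an index on Date, a half-open date range is the sargable way to express the same one-day filter:

    -- Sketch: same filter, expressed so an index on Date can be used.
    SELECT LogDB.Level,
           LogDB.Logger,
           COUNT(*) as Counter
    FROM Log4Net.dbo.Log as LogDB
    WHERE Level <> 'DEBUG'
      AND LogDB.Date >= '2010-03-25'
      AND LogDB.Date <  '2010-03-26'   -- half-open range covers the whole day
    GROUP BY LogDB.Level, LogDB.Logger
    ORDER BY Counter desc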


  • SQLiteOpenHelper.getWriteableDatabase() null pointer exception on Android

    - by Drew Dara-Abrams
    I've had fine luck using SQLite with straight, direct SQL in Android, but this is the first time I'm wrapping a DB in a ContentProvider. I keep getting a null pointer exception when calling getWritableDatabase() or getReadableDatabase(). Is this just a stupid mistake I've made with initializations in my code, or is there a bigger issue?

    public class DatabaseProvider extends ContentProvider {
        ...
        private DatabaseHelper databaseHelper;
        private SQLiteDatabase db;
        ...
        @Override
        public boolean onCreate() {
            databaseHelper = new DatabaseProvider.DatabaseHelper(getContext());
            return (databaseHelper == null) ? false : true;
        }
        ...
        @Override
        public Uri insert(Uri uri, ContentValues values) {
            db = databaseHelper.getWritableDatabase(); // NULL POINTER EXCEPTION HERE
            ...
        }

        private static class DatabaseHelper extends SQLiteOpenHelper {
            public static final String DATABASE_NAME = "cogsurv.db";
            public static final int DATABASE_VERSION = 1;
            public static final String[] TABLES = {
                "people", "travel_logs", "travel_fixes", "landmarks",
                "landmark_visits", "direction_distance_estimates"
            };
            // people._id does not AUTOINCREMENT, because it's set based on server's people.id
            public static final String[] CREATE_TABLE_SQL = {
                "CREATE TABLE people (_id INTEGER PRIMARY KEY,"
                    + "server_id INTEGER,"
                    + "name VARCHAR(255),"
                    + "email_address VARCHAR(255))",
                "CREATE TABLE travel_logs (_id INTEGER PRIMARY KEY AUTOINCREMENT,"
                    + "server_id INTEGER,"
                    + "person_local_id INTEGER,"
                    + "person_server_id INTEGER,"
                    + "start DATE,"
                    + "stop DATE,"
                    + "type VARCHAR(15),"
                    + "uploaded VARCHAR(1))",
                "CREATE TABLE travel_fixes (_id INTEGER PRIMARY KEY AUTOINCREMENT,"
                    + "datetime DATE, "
                    + "latitude DOUBLE, "
                    + "longitude DOUBLE, "
                    + "altitude DOUBLE,"
                    + "speed DOUBLE,"
                    + "accuracy DOUBLE,"
                    + "travel_mode VARCHAR(50), "
                    + "person_local_id INTEGER,"
                    + "person_server_id INTEGER,"
                    + "travel_log_local_id INTEGER,"
                    + "travel_log_server_id INTEGER,"
                    + "uploaded VARCHAR(1))",
                "CREATE TABLE landmarks (_id INTEGER PRIMARY KEY AUTOINCREMENT,"
                    + "server_id INTEGER,"
                    + "name VARCHAR(150),"
                    + "latitude DOUBLE,"
                    + "longitude DOUBLE,"
                    + "person_local_id INTEGER,"
                    + "person_server_id INTEGER,"
                    + "uploaded VARCHAR(1))",
                "CREATE TABLE landmark_visits (_id INTEGER PRIMARY KEY AUTOINCREMENT,"
                    + "server_id INTEGER,"
                    + "person_local_id INTEGER,"
                    + "person_server_id INTEGER,"
                    + "landmark_local_id INTEGER,"
                    + "landmark_server_id INTEGER,"
                    + "travel_log_local_id INTEGER,"
                    + "travel_log_server_id INTEGER,"
                    + "datetime DATE,"
                    + "number_of_questions_asked INTEGER,"
                    + "uploaded VARCHAR(1))",
                "CREATE TABLE direction_distance_estimates (_id INTEGER PRIMARY KEY AUTOINCREMENT,"
                    + "server_id INTEGER,"
                    + "person_local_id INTEGER,"
                    + "person_server_id INTEGER,"
                    + "travel_log_local_id INTEGER,"
                    + "travel_log_server_id INTEGER,"
                    + "landmark_visit_local_id INTEGER,"
                    + "landmark_visit_server_id INTEGER,"
                    + "start_landmark_local_id INTEGER,"
                    + "start_landmark_server_id INTEGER,"
                    + "target_landmark_local_id INTEGER,"
                    + "target_landmark_server_id INTEGER,"
                    + "datetime DATE,"
                    + "direction_estimate DOUBLE,"
                    + "distance_estimate DOUBLE,"
                    + "uploaded VARCHAR(1))"
            };

            public DatabaseHelper(Context context) {
                super(context, DATABASE_NAME, null, DATABASE_VERSION);
                Log.v(Constants.TAG, "DatabaseHelper()");
            }

            @Override
            public void onCreate(SQLiteDatabase db) {
                Log.v(Constants.TAG, "DatabaseHelper.onCreate() starting");
                // create the tables
                int length = CREATE_TABLE_SQL.length;
                for (int i = 0; i < length; i++) {
                    db.execSQL(CREATE_TABLE_SQL[i]);
                }
                Log.v(Constants.TAG, "DatabaseHelper.onCreate() finished");
            }

            @Override
            public void onUpgrade(SQLiteDatabase db, int oldVersion, int newVersion) {
                for (String tableName : TABLES) {
                    // note: the trailing space after EXISTS is required; without it
                    // the statement concatenates into "...EXISTSpeople"
                    db.execSQL("DROP TABLE IF EXISTS " + tableName);
                }
                onCreate(db);
            }
        }
    }

    As always, thanks for the assistance! -- Not sure if this detail helps, but here's LogCat showing the exception:
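    A note on the likely cause, offered as a guess since the LogCat output didn't make it into the post: a null pointer exception on the databaseHelper.getWritableDatabase() line usually means databaseHelper itself is null, i.e. insert() ran before the provider's onCreate() did. That typically points at the provider not being declared (or declared with a mismatched authority) in AndroidManifest.xml, or at code instantiating DatabaseProvider directly with new instead of going through a ContentResolver. Incidentally, new never returns null in Java, so the (databaseHelper == null) ? false : true check in onCreate() can never report a failure.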


  • Restoring multiple database backups in a transaction

    - by Raghu Dodda
    I wrote a stored procedure that restores a set of database backups. It takes two parameters: a source directory and a restore directory. The procedure looks for all .bak files in the source directory (recursively) and restores all the databases. The stored procedure works as expected, but it has one issue: if I uncomment the try-catch statements, the procedure terminates with the following error:

    error_number = 3013
    error_severity = 16
    error_state = 1
    error_message = DATABASE is terminating abnormally.

    The weird part is that sometimes (it is not consistent) the restore is done even if the error occurs. The procedure:

    create proc usp_restore_databases
    (
        @source_directory varchar(1000),
        @restore_directory varchar(1000)
    )
    as
    begin
        declare @number_of_backup_files int
        -- begin transaction
        -- begin try

        -- step 0: Initial validation
        if (right(@source_directory, 1) <> '\') set @source_directory = @source_directory + '\'
        if (right(@restore_directory, 1) <> '\') set @restore_directory = @restore_directory + '\'

        -- step 1: Put all the backup files in the specified directory in a table --
        declare @backup_files table ( file_path varchar(1000) )
        declare @dos_command varchar(1000)
        set @dos_command = 'dir ' + '"' + @source_directory + '*.bak" /s/b'
        /* DEBUG */ print @dos_command
        insert into @backup_files(file_path) exec xp_cmdshell @dos_command
        delete from @backup_files where file_path IS NULL
        select @number_of_backup_files = count(1) from @backup_files
        /* DEBUG */ select * from @backup_files
        /* DEBUG */ print @number_of_backup_files

        -- step 2: restore each backup file --
        declare backup_file_cursor cursor for select file_path from @backup_files
        open backup_file_cursor

        declare @index int; set @index = 0
        while (@index < @number_of_backup_files)
        begin
            declare @backup_file_path varchar(1000)
            fetch next from backup_file_cursor into @backup_file_path
            /* DEBUG */ print @backup_file_path

            -- step 2a: parse the full backup file name to get the DB file name.
            declare @db_name varchar(100)
            set @db_name = right(@backup_file_path, charindex('\', reverse(@backup_file_path)) - 1) -- still has the .bak extension
            /* DEBUG */ print @db_name
            set @db_name = left(@db_name, charindex('.', @db_name) - 1)
            /* DEBUG */ print @db_name
            set @db_name = lower(@db_name)
            /* DEBUG */ print @db_name

            -- step 2b: find out the logical names of the mdf and ldf files
            declare @mdf_logical_name varchar(100), @ldf_logical_name varchar(100)
            declare @backup_file_contents table
            (
                LogicalName nvarchar(128),
                PhysicalName nvarchar(260),
                [Type] char(1),
                FileGroupName nvarchar(128),
                [Size] numeric(20,0),
                [MaxSize] numeric(20,0),
                FileID bigint,
                CreateLSN numeric(25,0),
                DropLSN numeric(25,0) NULL,
                UniqueID uniqueidentifier,
                ReadOnlyLSN numeric(25,0) NULL,
                ReadWriteLSN numeric(25,0) NULL,
                BackupSizeInBytes bigint,
                SourceBlockSize int,
                FileGroupID int,
                LogGroupGUID uniqueidentifier NULL,
                DifferentialBaseLSN numeric(25,0) NULL,
                DifferentialBaseGUID uniqueidentifier,
                IsReadOnly bit,
                IsPresent bit
            )
            insert into @backup_file_contents
                exec ('restore filelistonly from disk=' + '''' + @backup_file_path + '''')
            select @mdf_logical_name = LogicalName from @backup_file_contents where [Type] = 'D'
            select @ldf_logical_name = LogicalName from @backup_file_contents where [Type] = 'L'
            /* DEBUG */ print @mdf_logical_name + ', ' + @ldf_logical_name

            -- step 2c: restore
            declare @mdf_file_name varchar(1000), @ldf_file_name varchar(1000)
            set @mdf_file_name = @restore_directory + @db_name + '.mdf'
            set @ldf_file_name = @restore_directory + @db_name + '.ldf'
            /* DEBUG */ print 'mdf_logical_name = ' + @mdf_logical_name + '|'
                + 'ldf_logical_name = ' + @ldf_logical_name + '|'
                + 'db_name = ' + @db_name + '|'
                + 'backup_file_path = ' + @backup_file_path + '|'
                + 'restore_directory = ' + @restore_directory + '|'
                + 'mdf_file_name = ' + @mdf_file_name + '|'
                + 'ldf_file_name = ' + @ldf_file_name
            restore database @db_name from disk = @backup_file_path
                with move @mdf_logical_name to @mdf_file_name,
                     move @ldf_logical_name to @ldf_file_name

            -- step 2d: iterate
            set @index = @index + 1
        end
        close backup_file_cursor
        deallocate backup_file_cursor

        -- end try
        -- begin catch
        --     print error_message()
        --     rollback transaction
        --     return
        -- end catch
        -- commit transaction
    end

    Does anybody have any ideas why this might be happening? Another question: is the transaction code useful? I.e., if there are two databases to be restored, will SQL Server undo the restore of one database if the second restore fails?
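    Two observations, based on SQL Server's documented behavior rather than anything in the post: RESTORE DATABASE is not allowed inside an explicit or implicit transaction, which is why re-enabling BEGIN TRANSACTION around the loop produces error 3013; and restores are not transactional as a group, so a failed second restore will not undo a completed first one. Error handling therefore has to live per-restore, without a wrapping transaction; a sketch:

    begin try
        restore database @db_name from disk = @backup_file_path
            with move @mdf_logical_name to @mdf_file_name,
                 move @ldf_logical_name to @ldf_file_name
    end try
    begin catch
        -- log and continue with the next file; a completed restore
        -- of an earlier database cannot be rolled back here
        print 'restore of ' + @db_name + ' failed: ' + error_message()
    end catch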


  • Foreign Key Relationships and "belongs to many"

    - by jan
    I have the following model:

    S belongs to T
    T has many S
    A, B, C, D, E (etc.) have 1 T each, so the T should belong to each of A, B, C, D, E (etc.)

    At first I set up my foreign keys so that in A, fk_a_t would be the foreign key on A.t to T(id); in B it'd be fk_b_t, etc. Everything looks fine in my UML (using MySQL Workbench), but generating the Yii models results in it thinking that T has many A, B, C, D (etc.), which to me is the reverse. It sounds to me like I need to have A_T, B_T, C_T (etc.) tables, but this would be a pain as there are a lot of tables that have this relationship. I've also googled that the better way to do this would be some sort of behavior, such that A, B, C, D (etc.) can behave as a T, but I'm not clear on exactly how to do this (I will continue to google more on this). What do you think is the better solution? Here's the DDL (auto generated). Just pretend that there are more than 3 tables referencing T.

    -- Table `mydb`.`T`
    CREATE TABLE IF NOT EXISTS `mydb`.`T` (
        `id` INT NOT NULL AUTO_INCREMENT,
        PRIMARY KEY (`id`)
    ) ENGINE = InnoDB;

    -- Table `mydb`.`S`
    CREATE TABLE IF NOT EXISTS `mydb`.`S` (
        `id` INT NOT NULL AUTO_INCREMENT,
        `thing` VARCHAR(45) NULL,
        `t` INT NOT NULL,
        PRIMARY KEY (`id`),
        INDEX `fk_S_T` (`id` ASC),
        CONSTRAINT `fk_S_T`
            FOREIGN KEY (`id`)
            REFERENCES `mydb`.`T` (`id`)
            ON DELETE NO ACTION
            ON UPDATE NO ACTION
    ) ENGINE = InnoDB;

    -- Table `mydb`.`A`
    CREATE TABLE IF NOT EXISTS `mydb`.`A` (
        `id` INT NOT NULL AUTO_INCREMENT,
        `T` INT NOT NULL,
        `stuff` VARCHAR(45) NULL,
        `bar` VARCHAR(45) NULL,
        `foo` VARCHAR(45) NULL,
        PRIMARY KEY (`id`),
        INDEX `fk_A_T` (`T` ASC),
        CONSTRAINT `fk_A_T`
            FOREIGN KEY (`T`)
            REFERENCES `mydb`.`T` (`id`)
            ON DELETE NO ACTION
            ON UPDATE NO ACTION
    ) ENGINE = InnoDB;

    -- Table `mydb`.`B`
    CREATE TABLE IF NOT EXISTS `mydb`.`B` (
        `id` INT NOT NULL AUTO_INCREMENT,
        `T` INT NOT NULL,
        `stuff2` VARCHAR(45) NULL,
        `foobar` VARCHAR(45) NULL,
        `other` VARCHAR(45) NULL,
        PRIMARY KEY (`id`),
        INDEX `fk_A_T` (`T` ASC),
        CONSTRAINT `fk_A_T`
            FOREIGN KEY (`T`)
            REFERENCES `mydb`.`T` (`id`)
            ON DELETE NO ACTION
            ON UPDATE NO ACTION
    ) ENGINE = InnoDB;

    -- Table `mydb`.`C`
    CREATE TABLE IF NOT EXISTS `mydb`.`C` (
        `id` INT NOT NULL AUTO_INCREMENT,
        `T` INT NOT NULL,
        `stuff3` VARCHAR(45) NULL,
        `foobar2` VARCHAR(45) NULL,
        `other4` VARCHAR(45) NULL,
        PRIMARY KEY (`id`),
        INDEX `fk_A_T` (`T` ASC),
        CONSTRAINT `fk_A_T`
            FOREIGN KEY (`T`)
            REFERENCES `mydb`.`T` (`id`)
            ON DELETE NO ACTION
            ON UPDATE NO ACTION
    ) ENGINE = InnoDB;
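    A few observations, offered as suggestions rather than definitive fixes: in relational terms the table holding the foreign key is the one that "belongs to" the other, so with A.T referencing T(id), Yii's "T has many A" is technically the correct reading of this schema; the reverse semantics have to be declared in the model classes (or modeled with a polymorphic owner column on T, enforced in application code). Separately, two things in the pasted DDL look like generator slips worth checking, sketched below with illustrative names.

    -- 1. S's index and foreign key are declared on S.id; given "S belongs
    --    to T" via the `t` column, they presumably belong on `t`:
    --      INDEX `fk_S_T` (`t` ASC),
    --      CONSTRAINT `fk_S_T` FOREIGN KEY (`t`) REFERENCES `mydb`.`T` (`id`)

    -- 2. B and C reuse the constraint name `fk_A_T`; InnoDB requires
    --    foreign key constraint names to be unique within a database:
    --      CONSTRAINT `fk_B_T` FOREIGN KEY (`T`) REFERENCES `mydb`.`T` (`id`)
    --      CONSTRAINT `fk_C_T` FOREIGN KEY (`T`) REFERENCES `mydb`.`T` (`id`)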


  • Mysql query question

    - by brux
    I have 2 tables:

    Customer:
        customerid - int, pri-key, auto
        fname - varchar
        sname - varchar
        housenum - varchar
        street - varchar

    Items:
        itemid - int, pri-key, auto
        type - varchar
        collectiondate - date
        releasedate - date
        customerid - int

    I need a query which will get me all items that have a releasedate 3 days prior to (and including) the current date, i.e. the query should return customerid, fname, sname, street, housenum, type, and releasedate for all items which have a releasedate within (and including) 3 days prior to today. Thanks in advance.
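    A sketch of one way to write it, using the column names given above; BETWEEN is inclusive on both ends, which matches the "prior to and including today" requirement:

    -- Join Items to Customer and keep release dates from the last
    -- three days up to and including today.
    SELECT c.customerid, c.fname, c.sname, c.street, c.housenum,
           i.type, i.releasedate
    FROM Items i
    INNER JOIN Customer c ON c.customerid = i.customerid
    WHERE i.releasedate BETWEEN CURDATE() - INTERVAL 3 DAY AND CURDATE()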


  • Cannot add or update a child row: a foreign key constraint fails

    - by Tom
    table1:
    | Field | Type | Null | Key | Default | Extra |
    | UserID | int(11) | NO | PRI | NULL | auto_increment |
    | Password | varchar(20) | NO | | | |
    | Username | varchar(25) | NO | | | |
    | Email | varchar(60) | NO | | | |

    table2:
    | Field | Type | Null | Key | Default | Extra |
    | UserID | int(11) | NO | MUL | | |
    | PostID | int(11) | NO | PRI | NULL | auto_increment |
    | Title | varchar(50) | NO | | | |
    | Summary | varchar(500) | NO | | | |

    com.mysql.jdbc.exceptions.MySQLIntegrityConstraintViolationException: Cannot add or update a child row: a foreign key constraint fails (myapp/table2, CONSTRAINT table2_ibfk_1 FOREIGN KEY (UserID) REFERENCES table1 (UserID))

    What have I done wrong? I read this: http://www.w3schools.com/Sql/sql_foreignkey.asp and I don't see what's wrong. :S
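    The error itself says what went wrong, stated here as the standard reading rather than anything specific to this app: a row was inserted into (or updated in) table2 with a UserID value that has no matching row in table1. The parent row has to exist first; a sketch follows, with illustrative values.

    -- Insert the parent row first:
    INSERT INTO table1 (Password, Username, Email)
    VALUES ('secret', 'tom', 'tom@example.com');

    -- The child row can then reference the key generated above:
    INSERT INTO table2 (UserID, Title, Summary)
    VALUES (LAST_INSERT_ID(), 'First post', 'Hello world');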


  • Advanced TSQL Tuning: Why Internals Knowledge Matters

    - by Paul White
    There is much more to query tuning than reducing logical reads and adding covering nonclustered indexes.  Query tuning is not complete as soon as the query returns results quickly in the development or test environments.  In production, your query will compete for memory, CPU, locks, I/O and other resources on the server.  Today’s entry looks at some tuning considerations that are often overlooked, and shows how deep internals knowledge can help you write better TSQL. As always, we’ll need some example data.  In fact, we are going to use three tables today, each of which is structured like this: Each table has 50,000 rows made up of an INTEGER id column and a padding column containing 3,999 characters in every row.  The only difference between the three tables is in the type of the padding column: the first table uses CHAR(3999), the second uses VARCHAR(MAX), and the third uses the deprecated TEXT type.  A script to create a database with the three tables and load the sample data follows:

    USE master;
    GO
    IF DB_ID('SortTest') IS NOT NULL
        DROP DATABASE SortTest;
    GO
    CREATE DATABASE SortTest COLLATE LATIN1_GENERAL_BIN;
    GO
    ALTER DATABASE SortTest MODIFY FILE (NAME = 'SortTest', SIZE = 3GB, MAXSIZE = 3GB);
    GO
    ALTER DATABASE SortTest MODIFY FILE (NAME = 'SortTest_log', SIZE = 256MB, MAXSIZE = 1GB, FILEGROWTH = 128MB);
    GO
    ALTER DATABASE SortTest SET ALLOW_SNAPSHOT_ISOLATION OFF;
    ALTER DATABASE SortTest SET AUTO_CLOSE OFF;
    ALTER DATABASE SortTest SET AUTO_CREATE_STATISTICS ON;
    ALTER DATABASE SortTest SET AUTO_SHRINK OFF;
    ALTER DATABASE SortTest SET AUTO_UPDATE_STATISTICS ON;
    ALTER DATABASE SortTest SET AUTO_UPDATE_STATISTICS_ASYNC ON;
    ALTER DATABASE SortTest SET PARAMETERIZATION SIMPLE;
    ALTER DATABASE SortTest SET READ_COMMITTED_SNAPSHOT OFF;
    ALTER DATABASE SortTest SET MULTI_USER;
    ALTER DATABASE SortTest SET RECOVERY SIMPLE;

    USE SortTest;
    GO
    CREATE TABLE dbo.TestCHAR
    (
        id INTEGER IDENTITY (1,1) NOT NULL,
        padding CHAR(3999) NOT NULL,
        CONSTRAINT [PK dbo.TestCHAR (id)] PRIMARY KEY CLUSTERED (id)
    );
    CREATE TABLE dbo.TestMAX
    (
        id INTEGER IDENTITY (1,1) NOT NULL,
        padding VARCHAR(MAX) NOT NULL,
        CONSTRAINT [PK dbo.TestMAX (id)] PRIMARY KEY CLUSTERED (id)
    );
    CREATE TABLE dbo.TestTEXT
    (
        id INTEGER IDENTITY (1,1) NOT NULL,
        padding TEXT NOT NULL,
        CONSTRAINT [PK dbo.TestTEXT (id)] PRIMARY KEY CLUSTERED (id)
    );

    -- =============
    -- Load TestCHAR (about 3s)
    -- =============
    INSERT INTO dbo.TestCHAR WITH (TABLOCKX) (padding)
    SELECT padding = REPLICATE(CHAR(65 + (Data.n % 26)), 3999)
    FROM
    (
        SELECT TOP (50000) n = ROW_NUMBER() OVER (ORDER BY (SELECT 0)) - 1
        FROM master.sys.columns C1, master.sys.columns C2, master.sys.columns C3
        ORDER BY n ASC
    ) AS Data
    ORDER BY Data.n ASC;

    -- ============
    -- Load TestMAX (about 3s)
    -- ============
    INSERT INTO dbo.TestMAX WITH (TABLOCKX) (padding)
    SELECT CONVERT(VARCHAR(MAX), padding)
    FROM dbo.TestCHAR
    ORDER BY id;

    -- =============
    -- Load TestTEXT (about 5s)
    -- =============
    INSERT INTO dbo.TestTEXT WITH (TABLOCKX) (padding)
    SELECT CONVERT(TEXT, padding)
    FROM dbo.TestCHAR
    ORDER BY id;

    -- ==========
    -- Space used
    -- ==========
    EXECUTE sys.sp_spaceused @objname = 'dbo.TestCHAR';
    EXECUTE sys.sp_spaceused @objname = 'dbo.TestMAX';
    EXECUTE sys.sp_spaceused @objname = 'dbo.TestTEXT';

    CHECKPOINT;

    That takes around 15 seconds to run, and shows the space allocated to each table in its output: To illustrate the points I want to make today, the example task we are going to set ourselves is to return a random set of 150 rows from each table.
    The basic shape of the test query is the same for each of the three test tables:

    SELECT TOP (150)
        T.id,
        T.padding
    FROM dbo.Test AS T
    ORDER BY NEWID()
    OPTION (MAXDOP 1);

    Test 1 – CHAR(3999)

    Running the template query shown above using the TestCHAR table as the target, we find that the query takes around 5 seconds to return its results.  This seems slow, considering that the table only has 50,000 rows.  Working on the assumption that generating a GUID for each row is a CPU-intensive operation, we might try enabling parallelism to see if that speeds up the response time.  Running the query again (but without the MAXDOP 1 hint) on a machine with eight logical processors, the query now takes 10 seconds to execute – twice as long as when run serially. Rather than attempting further guesses at the cause of the slowness, let’s go back to serial execution and add some monitoring.  The script below monitors STATISTICS IO output and the amount of tempdb used by the test query.  We will also run a Profiler trace to capture any warnings generated during query execution.

    DECLARE @read BIGINT, @write BIGINT;

    SELECT @read = SUM(num_of_bytes_read),
           @write = SUM(num_of_bytes_written)
    FROM tempdb.sys.database_files AS DBF
    JOIN sys.dm_io_virtual_file_stats(2, NULL) AS FS ON FS.file_id = DBF.file_id
    WHERE DBF.type_desc = 'ROWS';

    SET STATISTICS IO ON;

    SELECT TOP (150)
        TC.id,
        TC.padding
    FROM dbo.TestCHAR AS TC
    ORDER BY NEWID()
    OPTION (MAXDOP 1);

    SET STATISTICS IO OFF;

    SELECT tempdb_read_MB = (SUM(num_of_bytes_read) - @read) / 1024. / 1024.,
           tempdb_write_MB = (SUM(num_of_bytes_written) - @write) / 1024. / 1024.,
           internal_use_MB =
           (
               SELECT internal_objects_alloc_page_count / 128.0
               FROM sys.dm_db_task_space_usage
               WHERE session_id = @@SPID
           )
    FROM tempdb.sys.database_files AS DBF
    JOIN sys.dm_io_virtual_file_stats(2, NULL) AS FS ON FS.file_id = DBF.file_id
    WHERE DBF.type_desc = 'ROWS';

    Let’s take a closer look at the statistics and query plan generated from this: Following the flow of the data from right to left, we see the expected 50,000 rows emerging from the Clustered Index Scan, with a total estimated size of around 191MB.  The Compute Scalar adds a column containing a random GUID (generated from the NEWID() function call) for each row.  With this extra column in place, the size of the data arriving at the Sort operator is estimated to be 192MB. Sort is a blocking operator – it has to examine all of the rows on its input before it can produce its first row of output (the last row received might sort first).  This characteristic means that Sort requires a memory grant – memory allocated for the query’s use by SQL Server just before execution starts.  In this case, the Sort is the only memory-consuming operator in the plan, so it has access to the full 243MB (248,696KB) of memory reserved by SQL Server for this query execution. Notice that the memory grant is significantly larger than the expected size of the data to be sorted.  SQL Server uses a number of techniques to speed up sorting, some of which sacrifice size for comparison speed.  Sorts typically require a very large number of comparisons, so this is usually a very effective optimization.  One of the drawbacks is that it is not possible to exactly predict the sort space needed, as it depends on the data itself.  SQL Server takes an educated guess based on data types, sizes, and the number of rows expected, but the algorithm is not perfect.
    In spite of the large memory grant, the Profiler trace shows a Sort Warning event (indicating that the sort ran out of memory), and the tempdb usage monitor shows that 195MB of tempdb space was used – all of that for system use.  The 195MB represents physical write activity on tempdb, because SQL Server strictly enforces memory grants – a query cannot ‘cheat’ and effectively gain extra memory by spilling to tempdb pages that reside in memory.  Anyway, the key point here is that it takes a while to write 195MB to disk, and this is the main reason that the query takes 5 seconds overall. If you are wondering why using parallelism made the problem worse, consider that eight threads of execution result in eight concurrent partial sorts, each receiving one eighth of the memory grant.  The eight sorts all spilled to tempdb, resulting in inefficiencies as the spilled sorts competed for disk resources.  More importantly, there are specific problems at the point where the eight partial results are combined, but I’ll cover that in a future post.

    CHAR(3999) Performance Summary:
    - 5 seconds elapsed time
    - 243MB memory grant
    - 195MB tempdb usage
    - 192MB estimated sort set
    - 25,043 logical reads
    - Sort Warning

    Test 2 – VARCHAR(MAX)

    We’ll now run exactly the same test (with the additional monitoring) on the table using a VARCHAR(MAX) padding column:

    DECLARE @read BIGINT, @write BIGINT;

    SELECT @read = SUM(num_of_bytes_read),
           @write = SUM(num_of_bytes_written)
    FROM tempdb.sys.database_files AS DBF
    JOIN sys.dm_io_virtual_file_stats(2, NULL) AS FS ON FS.file_id = DBF.file_id
    WHERE DBF.type_desc = 'ROWS';

    SET STATISTICS IO ON;

    SELECT TOP (150)
        TM.id,
        TM.padding
    FROM dbo.TestMAX AS TM
    ORDER BY NEWID()
    OPTION (MAXDOP 1);

    SET STATISTICS IO OFF;

    SELECT tempdb_read_MB = (SUM(num_of_bytes_read) - @read) / 1024. / 1024.,
           tempdb_write_MB = (SUM(num_of_bytes_written) - @write) / 1024. / 1024.,
           internal_use_MB =
           (
               SELECT internal_objects_alloc_page_count / 128.0
               FROM sys.dm_db_task_space_usage
               WHERE session_id = @@SPID
           )
    FROM tempdb.sys.database_files AS DBF
    JOIN sys.dm_io_virtual_file_stats(2, NULL) AS FS ON FS.file_id = DBF.file_id
    WHERE DBF.type_desc = 'ROWS';

    This time the query takes around 8 seconds to complete (3 seconds longer than Test 1).  Notice that the estimated row and data sizes are very slightly larger, and the overall memory grant has also increased very slightly to 245MB.  The most marked difference is in the amount of tempdb space used – this query wrote almost 391MB of sort run data to the physical tempdb file.  Don’t draw any general conclusions about VARCHAR(MAX) versus CHAR from this – I chose the length of the data specifically to expose this edge case.  In most cases, VARCHAR(MAX) performs very similarly to CHAR – I just wanted to make test 2 a bit more exciting.

    MAX Performance Summary:
    - 8 seconds elapsed time
    - 245MB memory grant
    - 391MB tempdb usage
    - 193MB estimated sort set
    - 25,043 logical reads
    - Sort warning

    Test 3 – TEXT

    The same test again, but using the deprecated TEXT data type for the padding column:

    DECLARE @read BIGINT, @write BIGINT;

    SELECT @read = SUM(num_of_bytes_read),
           @write = SUM(num_of_bytes_written)
    FROM tempdb.sys.database_files AS DBF
    JOIN sys.dm_io_virtual_file_stats(2, NULL) AS FS ON FS.file_id = DBF.file_id
    WHERE DBF.type_desc = 'ROWS';

    SET STATISTICS IO ON;

    SELECT TOP (150)
        TT.id,
        TT.padding
    FROM dbo.TestTEXT AS TT
    ORDER BY NEWID()
    OPTION (MAXDOP 1, RECOMPILE);

    SET STATISTICS IO OFF;

    SELECT tempdb_read_MB = (SUM(num_of_bytes_read) - @read) / 1024. / 1024.,
           tempdb_write_MB = (SUM(num_of_bytes_written) - @write) / 1024. / 1024.,
           internal_use_MB =
           (
               SELECT internal_objects_alloc_page_count / 128.0
               FROM sys.dm_db_task_space_usage
               WHERE session_id = @@SPID
           )
    FROM tempdb.sys.database_files AS DBF
    JOIN sys.dm_io_virtual_file_stats(2, NULL) AS FS ON FS.file_id = DBF.file_id
    WHERE DBF.type_desc = 'ROWS';

    This time the query runs in 500ms.  If you look at the metrics we have been checking so far, it’s not hard to understand why:

    TEXT Performance Summary:
    - 0.5 seconds elapsed time
    - 9MB memory grant
    - 5MB tempdb usage
    - 5MB estimated sort set
    - 207 logical reads
    - 596 LOB logical reads
    - Sort warning

    SQL Server’s memory grant algorithm still underestimates the memory needed to perform the sorting operation, but the size of the data to sort is so much smaller (5MB versus 193MB previously) that the spilled sort doesn’t matter very much.  Why is the data size so much smaller?  The query still produces the correct results – including the large amount of data held in the padding column – so what magic is being performed here?

    TEXT versus MAX Storage

    The answer lies in how columns of the TEXT data type are stored.  By default, TEXT data is stored off-row in separate LOB pages – which explains why this is the first query we have seen that records LOB logical reads in its STATISTICS IO output.  You may recall from my last post that LOB data leaves an in-row pointer to the separate storage structure holding the LOB data. SQL Server can see that the full LOB value is not required by the query plan until results are returned, so instead of passing the full LOB value down the plan from the Clustered Index Scan, it passes the small in-row structure instead.  SQL Server estimates that each row coming from the scan will be 79 bytes long – 11 bytes for row overhead, 4 bytes for the integer id column, and 64 bytes for the LOB pointer (in fact the pointer is rather smaller – usually 16 bytes – but the details of that don’t really matter right now). OK, so this query is much more efficient because it is sorting a very much smaller data set – SQL Server delays retrieving the LOB data itself until after the Sort starts producing its 150 rows.  The question that normally arises at this point is: Why doesn’t SQL Server use the same trick when the padding column is defined as VARCHAR(MAX)? The answer is connected with the fact that if the actual size of the VARCHAR(MAX) data is 8000 bytes or less, it is usually stored in-row in exactly the same way as for a VARCHAR(8000) column – MAX data only moves off-row into LOB storage when it exceeds 8000 bytes.  The default behaviour of the TEXT type is to be stored off-row by default, unless the ‘text in row’ table option is set suitably and there is room on the page.  There is an analogous (but opposite) setting to control the storage of MAX data – the ‘large value types out of row’ table option.  By enabling this option for a table, MAX data will be stored off-row (in a LOB structure) instead of in-row.  SQL Server Books Online has good coverage of both options in the topic In Row Data.

    The MAXOOR Table

    The essential difference, then, is that MAX defaults to in-row storage, and TEXT defaults to off-row (LOB) storage.  You might be thinking that we could get the same benefits seen for the TEXT data type by storing the VARCHAR(MAX) values off row – so let’s look at that option now.
    This script creates a fourth table, with the VARCHAR(MAX) data stored off-row in LOB pages:

    CREATE TABLE dbo.TestMAXOOR
    (
        id INTEGER IDENTITY (1,1) NOT NULL,
        padding VARCHAR(MAX) NOT NULL,
        CONSTRAINT [PK dbo.TestMAXOOR (id)] PRIMARY KEY CLUSTERED (id)
    );

    EXECUTE sys.sp_tableoption
        @TableNamePattern = N'dbo.TestMAXOOR',
        @OptionName = 'large value types out of row',
        @OptionValue = 'true';

    SELECT large_value_types_out_of_row
    FROM sys.tables
    WHERE [schema_id] = SCHEMA_ID(N'dbo')
    AND name = N'TestMAXOOR';

    INSERT INTO dbo.TestMAXOOR WITH (TABLOCKX) (padding)
    SELECT SPACE(0)
    FROM dbo.TestCHAR
    ORDER BY id;

    UPDATE TM WITH (TABLOCK)
    SET padding.WRITE (TC.padding, NULL, NULL)
    FROM dbo.TestMAXOOR AS TM
    JOIN dbo.TestCHAR AS TC ON TC.id = TM.id;

    EXECUTE sys.sp_spaceused @objname = 'dbo.TestMAXOOR';

    CHECKPOINT;

    Test 4 – MAXOOR

    We can now re-run our test on the MAXOOR (MAX out of row) table:

    DECLARE @read BIGINT, @write BIGINT;

    SELECT @read = SUM(num_of_bytes_read),
           @write = SUM(num_of_bytes_written)
    FROM tempdb.sys.database_files AS DBF
    JOIN sys.dm_io_virtual_file_stats(2, NULL) AS FS ON FS.file_id = DBF.file_id
    WHERE DBF.type_desc = 'ROWS';

    SET STATISTICS IO ON;

    SELECT TOP (150)
        MO.id,
        MO.padding
    FROM dbo.TestMAXOOR AS MO
    ORDER BY NEWID()
    OPTION (MAXDOP 1, RECOMPILE);

    SET STATISTICS IO OFF;

    SELECT tempdb_read_MB = (SUM(num_of_bytes_read) - @read) / 1024. / 1024.,
           tempdb_write_MB = (SUM(num_of_bytes_written) - @write) / 1024. / 1024.,
           internal_use_MB =
           (
               SELECT internal_objects_alloc_page_count / 128.0
               FROM sys.dm_db_task_space_usage
               WHERE session_id = @@SPID
           )
    FROM tempdb.sys.database_files AS DBF
    JOIN sys.dm_io_virtual_file_stats(2, NULL) AS FS ON FS.file_id = DBF.file_id
    WHERE DBF.type_desc = 'ROWS';

    MAXOOR Performance Summary:
    - 0.3 seconds elapsed time
    - 245MB memory grant
    - 0MB tempdb usage
    - 193MB estimated sort set
    - 207 logical reads
    - 446 LOB logical reads
    - No sort warning

    The query runs very quickly – slightly faster than Test 3, and without spilling the sort to tempdb (there is no sort warning in the trace, and the monitoring query shows zero tempdb usage by this query).  SQL Server is passing the in-row pointer structure down the plan and only looking up the LOB value on the output side of the sort.

    The Hidden Problem

    There is still a huge problem with this query though – it requires a 245MB memory grant.  No wonder the sort doesn’t spill to tempdb now – 245MB is about 20 times more memory than this query actually requires to sort 50,000 records containing LOB data pointers.  Notice that the estimated row and data sizes in the plan are the same as in test 2 (where the MAX data was stored in-row). The optimizer assumes that MAX data is stored in-row, regardless of the sp_tableoption setting ‘large value types out of row’.  Why?  Because this option is dynamic – changing it does not immediately force all MAX data in the table in-row or off-row, only when data is added or actually changed.  SQL Server does not keep statistics to show how much MAX or TEXT data is currently in-row, and how much is stored in LOB pages.  This is an annoying limitation, and one which I hope will be addressed in a future version of the product. So why should we worry about this?  Excessive memory grants reduce concurrency and may result in queries waiting on the RESOURCE_SEMAPHORE wait type while they wait for memory they do not need.  245MB is an awful lot of memory, especially on 32-bit versions where memory grants cannot use AWE-mapped memory.
    Even on a 64-bit server with plenty of memory, do you really want a single query to consume 0.25GB of memory unnecessarily?  That’s 32,000 8KB pages that might be put to much better use.

    The Solution

    The answer is not to use the TEXT data type for the padding column.  That solution happens to have better performance characteristics for this specific query, but it still results in a spilled sort, and it is hard to recommend the use of a data type which is scheduled for removal.  I hope it is clear to you that the fundamental problem here is that SQL Server sorts the whole set arriving at a Sort operator.  Clearly, it is not efficient to sort the whole table in memory just to return 150 rows in a random order. The TEXT example was more efficient because it dramatically reduced the size of the set that needed to be sorted.  We can do the same thing by selecting 150 unique keys from the table at random (sorting by NEWID() for example) and only then retrieving the large padding column values for just the 150 rows we need.  The following script implements that idea for all four tables:

    SET STATISTICS IO ON;

    WITH TestTable AS ( SELECT * FROM dbo.TestCHAR ),
    TopKeys AS ( SELECT TOP (150) id FROM TestTable ORDER BY NEWID() )
    SELECT T1.id, T1.padding
    FROM TestTable AS T1
    WHERE T1.id = ANY (SELECT id FROM TopKeys)
    OPTION (MAXDOP 1);

    WITH TestTable AS ( SELECT * FROM dbo.TestMAX ),
    TopKeys AS ( SELECT TOP (150) id FROM TestTable ORDER BY NEWID() )
    SELECT T1.id, T1.padding
    FROM TestTable AS T1
    WHERE T1.id IN (SELECT id FROM TopKeys)
    OPTION (MAXDOP 1);

    WITH TestTable AS ( SELECT * FROM dbo.TestTEXT ),
    TopKeys AS ( SELECT TOP (150) id FROM TestTable ORDER BY NEWID() )
    SELECT T1.id, T1.padding
    FROM TestTable AS T1
    WHERE T1.id IN (SELECT id FROM TopKeys)
    OPTION (MAXDOP 1);

    WITH TestTable AS ( SELECT * FROM dbo.TestMAXOOR ),
    TopKeys AS ( SELECT TOP (150) id FROM TestTable ORDER BY NEWID() )
    SELECT T1.id, T1.padding
    FROM TestTable AS T1
    WHERE T1.id IN (SELECT id FROM TopKeys)
    OPTION (MAXDOP 1);

    SET STATISTICS IO OFF;

    All four queries now return results in much less than a second, with memory grants between 6 and 12MB, and without spilling to tempdb.  The small remaining inefficiency is in reading the id column values from the clustered primary key index.  As a clustered index, it contains all the in-row data at its leaf.  The CHAR and VARCHAR(MAX) tables store the padding column in-row, so id values are separated by a 3999-character column, plus row overhead.  The TEXT and MAXOOR tables store the padding values off-row, so id values in the clustered index leaf are separated by the much-smaller off-row pointer structure.  This difference is reflected in the number of logical page reads performed by the four queries:

    Table 'TestCHAR'   logical reads 25511  lob logical reads 0
    Table 'TestMAX'    logical reads 25511  lob logical reads 0
    Table 'TestTEXT'   logical reads 412    lob logical reads 597
    Table 'TestMAXOOR' logical reads 413    lob logical reads 446

    We can increase the density of the id values by creating a separate nonclustered index on the id column only.  This is the same key as the clustered index, of course, but the nonclustered index will not include the rest of the in-row column data.
    CREATE UNIQUE NONCLUSTERED INDEX uq1 ON dbo.TestCHAR (id);
    CREATE UNIQUE NONCLUSTERED INDEX uq1 ON dbo.TestMAX (id);
    CREATE UNIQUE NONCLUSTERED INDEX uq1 ON dbo.TestTEXT (id);
    CREATE UNIQUE NONCLUSTERED INDEX uq1 ON dbo.TestMAXOOR (id);

    The four queries can now use the very dense nonclustered index to quickly scan the id values, sort them by NEWID(), select the 150 ids we want, and then look up the padding data.  The logical reads with the new indexes in place are:

    Table 'TestCHAR'   logical reads 835  lob logical reads 0
    Table 'TestMAX'    logical reads 835  lob logical reads 0
    Table 'TestTEXT'   logical reads 686  lob logical reads 597
    Table 'TestMAXOOR' logical reads 686  lob logical reads 448

    With the new index, all four queries use the same query plan:

    Performance Summary:
    - 0.3 seconds elapsed time
    - 6MB memory grant
    - 0MB tempdb usage
    - 1MB sort set
    - 835 logical reads (CHAR, MAX)
    - 686 logical reads (TEXT, MAXOOR)
    - 597 LOB logical reads (TEXT)
    - 448 LOB logical reads (MAXOOR)
    - No sort warning

    I’ll leave it as an exercise for the reader to work out why trying to eliminate the Key Lookup by adding the padding column to the new nonclustered indexes would be a daft idea.

    Conclusion

    This post is not about tuning queries that access columns containing big strings.  It isn’t about the internal differences between TEXT and MAX data types either.  It isn’t even about the cool use of UPDATE .WRITE used in the MAXOOR table load.  No, this post is about something else: Many developers might not have tuned our starting example query at all – 5 seconds isn’t that bad, and the original query plan looks reasonable at first glance.  Perhaps the NEWID() function would have been blamed for ‘just being slow’ – who knows.  5 seconds isn’t awful – unless your users expect sub-second responses – but using 250MB of memory and writing 200MB to tempdb certainly is!  If ten sessions ran that query at the same time in production that’s 2.5GB of memory usage and 2GB hitting tempdb.  Of course, not all queries can be rewritten to avoid large memory grants and sort spills using the key-lookup technique in this post, but that’s not the point either. The point of this post is that a basic understanding of execution plans is not enough.  Tuning for logical reads and adding covering indexes is not enough.  If you want to produce high-quality, scalable TSQL that won’t get you paged as soon as it hits production, you need a deep understanding of execution plans, and as much accurate, deep knowledge about SQL Server as you can lay your hands on.  The advanced database developer has a wide range of tools to use in writing queries that perform well in a range of circumstances. By the way, the examples in this post were written for SQL Server 2008.  They will run on 2005 and demonstrate the same principles, but you won’t get the same figures I did because 2005 had a rather nasty bug in the Top N Sort operator.  Fair warning: if you do decide to run the scripts on a 2005 instance (particularly the parallel query) do it before you head out for lunch… This post is dedicated to the people of Christchurch, New Zealand. © 2011 Paul White twitter: @SQL_Kiwi


  • SQL SERVER – Parsing SSIS Catalog Messages – Notes from the Field #030

    - by Pinal Dave
    [Note from Pinal]: This is a new episode of the Notes from the Field series. SQL Server Integration Services (SSIS) is one of the most essential parts of the entire Business Intelligence (BI) story. It is a platform for data integration and workflow applications. The tool may also be used to automate maintenance of SQL Server databases and updates to multidimensional cube data. In this episode of the Notes from the Field series I requested SSIS expert Andy Leonard to discuss one of the most interesting concepts of SSIS: Catalog Messages. There is plenty of interesting and useful information captured in the SSIS catalog, and we will learn together how to explore the same.

    The SSIS Catalog captures a lot of cool information by default. Here’s a query I use to parse messages from the catalog.operation_messages table in the SSISDB database, where the logged messages are stored. This query is set up to parse a default message transmitted by the Lookup Transformation. It’s one of my favorite messages in the SSIS log because it gives me excellent information when I’m tuning SSIS data flows. The message reads similar to:

    Data Flow Task:Information: The Lookup processed 4485 rows in the cache. The processing time was 0.015 seconds. The cache used 1376895 bytes of memory.

    The query:

    USE SSISDB
    GO

    DECLARE @MessageSourceType INT = 60
    DECLARE @StartOfIDString VARCHAR(100) = 'The Lookup processed '
    DECLARE @ProcessingTimeString VARCHAR(100) = 'The processing time was '
    DECLARE @CacheUsedString VARCHAR(100) = 'The cache used '
    DECLARE @StartOfIDSearchString VARCHAR(100) = '%' + @StartOfIDString + '%'
    DECLARE @ProcessingTimeSearchString VARCHAR(100) = '%' + @ProcessingTimeString + '%'
    DECLARE @CacheUsedSearchString VARCHAR(100) = '%' + @CacheUsedString + '%'

    SELECT operation_id
         , SUBSTRING(MESSAGE,
               (PATINDEX(@StartOfIDSearchString, MESSAGE) + LEN(@StartOfIDString) + 1),
               ((CHARINDEX(' ', MESSAGE, PATINDEX(@StartOfIDSearchString, MESSAGE) + LEN(@StartOfIDString) + 1))
                - (PATINDEX(@StartOfIDSearchString, MESSAGE) + LEN(@StartOfIDString) + 1))) AS LookupRowsCount
         , SUBSTRING(MESSAGE,
               (PATINDEX(@ProcessingTimeSearchString, MESSAGE) + LEN(@ProcessingTimeString) + 1),
               ((CHARINDEX(' ', MESSAGE, PATINDEX(@ProcessingTimeSearchString, MESSAGE) + LEN(@ProcessingTimeString) + 1))
                - (PATINDEX(@ProcessingTimeSearchString, MESSAGE) + LEN(@ProcessingTimeString) + 1))) AS LookupProcessingTime
         , CASE WHEN (CONVERT(numeric(3,3), SUBSTRING(MESSAGE,
                    (PATINDEX(@ProcessingTimeSearchString, MESSAGE) + LEN(@ProcessingTimeString) + 1),
                    ((CHARINDEX(' ', MESSAGE, PATINDEX(@ProcessingTimeSearchString, MESSAGE) + LEN(@ProcessingTimeString) + 1))
                     - (PATINDEX(@ProcessingTimeSearchString, MESSAGE) + LEN(@ProcessingTimeString) + 1))))) = 0
                THEN 0
                ELSE CONVERT(bigint, SUBSTRING(MESSAGE,
                        (PATINDEX(@StartOfIDSearchString, MESSAGE) + LEN(@StartOfIDString) + 1),
                        ((CHARINDEX(' ', MESSAGE, PATINDEX(@StartOfIDSearchString, MESSAGE) + LEN(@StartOfIDString) + 1))
                         - (PATINDEX(@StartOfIDSearchString, MESSAGE) + LEN(@StartOfIDString) + 1))))
                     / CONVERT(numeric(3,3), SUBSTRING(MESSAGE,
                        (PATINDEX(@ProcessingTimeSearchString, MESSAGE) + LEN(@ProcessingTimeString) + 1),
                        ((CHARINDEX(' ', MESSAGE, PATINDEX(@ProcessingTimeSearchString, MESSAGE) + LEN(@ProcessingTimeString) + 1))
                         - (PATINDEX(@ProcessingTimeSearchString, MESSAGE) + LEN(@ProcessingTimeString) + 1))))
           END AS LookupRowsPerSecond
         , SUBSTRING(MESSAGE,
               (PATINDEX(@CacheUsedSearchString, MESSAGE) + LEN(@CacheUsedString) + 1),
               ((CHARINDEX(' ', MESSAGE, PATINDEX(@CacheUsedSearchString, MESSAGE) + LEN(@CacheUsedString) + 1))
                - (PATINDEX(@CacheUsedSearchString, MESSAGE) + LEN(@CacheUsedString) + 1))) AS LookupBytesUsed
         , CASE WHEN (CONVERT(bigint, SUBSTRING(MESSAGE,
                    (PATINDEX(@StartOfIDSearchString, MESSAGE) + LEN(@StartOfIDString) + 1),
                    ((CHARINDEX(' ', MESSAGE, PATINDEX(@StartOfIDSearchString, MESSAGE) + LEN(@StartOfIDString) + 1))
                     - (PATINDEX(@StartOfIDSearchString, MESSAGE) + LEN(@StartOfIDString) + 1))))) = 0
                THEN 0
                ELSE CONVERT(bigint, SUBSTRING(MESSAGE,
                        (PATINDEX(@CacheUsedSearchString, MESSAGE) + LEN(@CacheUsedString) + 1),
                        ((CHARINDEX(' ', MESSAGE, PATINDEX(@CacheUsedSearchString, MESSAGE) + LEN(@CacheUsedString) + 1))
                         - (PATINDEX(@CacheUsedSearchString, MESSAGE) + LEN(@CacheUsedString) + 1))))
                     / CONVERT(bigint, SUBSTRING(MESSAGE,
                        (PATINDEX(@StartOfIDSearchString, MESSAGE) + LEN(@StartOfIDString) + 1),
                        ((CHARINDEX(' ', MESSAGE, PATINDEX(@StartOfIDSearchString, MESSAGE) + LEN(@StartOfIDString) + 1))
                         - (PATINDEX(@StartOfIDSearchString, MESSAGE) + LEN(@StartOfIDString) + 1))))
           END AS LookupBytesPerRow
    FROM [catalog].[operation_messages]
    WHERE message_source_type = @MessageSourceType
      AND MESSAGE LIKE @StartOfIDSearchString
    GO

    Note that you have to set some parameter values:

    - @MessageSourceType [int] – represents the message source type value from the following results:

          Value   Description
          10      Entry APIs, such as T-SQL and CLR Stored procedures
          20      External process used to run package (ISServerExec.exe)
          30      Package-level objects
          40      Control Flow tasks
          50      Control Flow containers
          60      Data Flow task
          70      Custom execution message

      Note: Taken from Reza Rad’s (excellent!) helper.MessageSourceType table found here.

    - @StartOfIDString [VarChar(100)] – use this to uniquely identify the message field value you wish to parse. In this case, the string ‘The Lookup processed ‘ identifies all the Lookup Transformation messages I desire to parse.

    - @ProcessingTimeString [VarChar(100)] – this parameter is message-specific. I use this parameter to specifically search the message field value for the beginning of the Lookup Processing Time value. For this execution, I use the string ‘The processing time was ‘.

    - @CacheUsedString [VarChar(100)] – this parameter is also message-specific. I use this parameter to specifically search the message field value for the beginning of the Lookup Cache Used value. It returns the memory used, in bytes. For this execution, I use the string ‘The cache used ‘.

    The other parameters are built from variations of the parameters listed above. The query parses the values into text. The string values are converted to numeric values for the ratio calculations, LookupRowsPerSecond and LookupBytesPerRow. Since ratios involve division, CASE statements check for denominators that equal 0. Here are the results in an SSMS grid: This is not the only way to retrieve this information. And much of the code lends itself to conversion to functions. If there is interest, I will share the functions in an upcoming post. If you want to get started with SSIS with the help of experts, read more over at Fix Your SQL Server.

    Reference: Pinal Dave (http://blog.sqlauthority.com)
    Filed under: Notes from the Field, PostADay, SQL, SQL Authority, SQL Backup and Restore, SQL Query, SQL Server, SQL Tips and Tricks, T SQL Tagged: SSIS
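    The post mentions that much of this code lends itself to conversion to functions. A sketch of what that might look like, with an illustrative name (dbo.ParseTokenAfter is not part of the catalog) and the same PATINDEX/LEN arithmetic as the query above, including its reliance on LEN() ignoring the marker's trailing space:

    CREATE FUNCTION dbo.ParseTokenAfter
    (
        @Message NVARCHAR(MAX),
        @Marker  VARCHAR(100)
    )
    RETURNS NVARCHAR(100)
    AS
    BEGIN
        -- position just past the marker (LEN ignores the trailing space,
        -- so +1 lands on the first character of the token)
        DECLARE @start INT = PATINDEX('%' + @Marker + '%', @Message) + LEN(@Marker) + 1;
        -- the token runs up to the next space
        RETURN SUBSTRING(@Message, @start, CHARINDEX(' ', @Message, @start) - @start);
    END
    GO

    With that in place, each output column reduces to something like dbo.ParseTokenAfter(MESSAGE, @StartOfIDString) AS LookupRowsCount, at the usual cost of a scalar function call per row.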

    Read the article

  • mysql - query group/concat question??

    - by tom smith
    I have a situation where I return results with multiple rows. I'm looking for a way to return a single row, with the multiple items in separate cols within the row. My initial query:

    SELECT a.name, a.city, a.address, a.abbrv, b.urltype, b.url
    FROM jos__universityTBL as a
    LEFT JOIN jos__university_urlTBL as b on b.universityID = a.ID
    WHERE a.stateVAL = 'CA'

    ...my output:

    | University Of Southern Califor | Los Angeles | | usc | 2 | http://web-app.usc.edu/ws/soc/api/ |
    | University Of Southern Califor | Los Angeles | | usc | 4 | http://web-app.usc.edu/ws/soc/api/ |
    | University Of Southern Califor | Los Angeles | | usc | 1 | www.usc.edu |
    | San Jose State University | San Jose | | sjsu | 2 | http://info.sjsu.edu/home/schedules.html |
    | San Jose State University | San Jose | | sjsu | 4 | https://cmshr.sjsu.edu/psp/HSJPRDF/EMPLOYEE/HSJPRD/c/COMMUNITY_ACCESS.CLASS_SEARCH.GBL?FolderPath=PORTAL_ROOT_OBJECT.PA_HC_CLASS_SEARCH |
    | San Jose State University | San Jose | | sjsu | 1 | www.sjsu.edu |

    My table schema...

    mysql> describe jos_universityTBL;
    +----------------+--------------+------+-----+---------+----------------+
    | Field          | Type         | Null | Key | Default | Extra          |
    +----------------+--------------+------+-----+---------+----------------+
    | name           | varchar(50)  | NO   | UNI |         |                |
    | repos_dir_name | varchar(50)  | NO   |     |         |                |
    | city           | varchar(20)  | YES  |     |         |                |
    | stateVAL       | varchar(5)   | NO   |     |         |                |
    | address        | varchar(50)  | NO   |     |         |                |
    | abbrv          | varchar(20)  | NO   |     |         |                |
    | childtbl       | varchar(200) | NO   |     |         |                |
    | userID         | int(10)      | NO   |     | 0       |                |
    | ID             | int(10)      | NO   | PRI | NULL    | auto_increment |
    +----------------+--------------+------+-----+---------+----------------+
    9 rows in set (0.00 sec)

    mysql> describe jos_university_urlTBL;
    +--------------+--------------+------+-----+---------+----------------+
    | Field        | Type         | Null | Key | Default | Extra          |
    +--------------+--------------+------+-----+---------+----------------+
    | universityID | int(10)      | NO   |     | 0       |                |
    | urltype      | int(5)       | NO   |     | 0       |                |
    | url          | varchar(200) | NO   | MUL |         |                |
    | actionID     | int(5)       | YES  |     | 0       |                |
    | status       | int(5)       | YES  |     | 0       |                |
    | ID           | int(10)      | NO   | PRI | NULL    | auto_increment |
    +--------------+--------------+------+-----+---------+----------------+
    6 rows in set (0.00 sec)

    I'm really trying to get something like this, with the concatenated urltype-url values:

    | usc | losangeles | usc.edu | 1-u1 | 2-u2 | 3-u2 |

    Any pointers would be helpful!
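
    One direction worth sketching (an editorial suggestion, assuming MySQL's GROUP_CONCAT is available, which it is from 4.1 on): collapse the url rows per university into one concatenated column, which yields one row per school:

    SELECT a.name, a.city, a.address, a.abbrv,
           GROUP_CONCAT(CONCAT(b.urltype, '-', b.url)
                        ORDER BY b.urltype SEPARATOR ' | ') AS urls  -- e.g. "1-www.usc.edu | 2-http://..."
    FROM jos__universityTBL AS a
    LEFT JOIN jos__university_urlTBL AS b ON b.universityID = a.ID
    WHERE a.stateVAL = 'CA'
    GROUP BY a.ID;

    Getting genuinely separate columns per url would require a pivot-style query or splitting the concatenated value in the application, since plain SQL needs a fixed column list.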

    Read the article

  • Mysql german accents not-sensitive search in full-text searches

    - by lukaszsadowski
    Let's take an example hotels table:

    CREATE TABLE `hotels` (
      `HotelNo` varchar(4) character set latin1 NOT NULL default '0000',
      `Hotel` varchar(80) character set latin1 NOT NULL default '',
      `City` varchar(100) character set latin1 default NULL,
      `CityFR` varchar(100) character set latin1 default NULL,
      `Region` varchar(50) character set latin1 default NULL,
      `RegionFR` varchar(100) character set latin1 default NULL,
      `Country` varchar(50) character set latin1 default NULL,
      `CountryFR` varchar(50) character set latin1 default NULL,
      `HotelText` text character set latin1,
      `HotelTextFR` text character set latin1,
      `tagsforsearch` text character set latin1,
      `tagsforsearchFR` text character set latin1,
      PRIMARY KEY (`HotelNo`),
      FULLTEXT KEY `fulltextHotelSearch` (`HotelNo`,`Hotel`,`City`,`CityFR`,`Region`,`RegionFR`,`Country`,`CountryFR`,`HotelText`,`HotelTextFR`,`tagsforsearch`,`tagsforsearchFR`)
    ) ENGINE=MyISAM DEFAULT CHARSET=latin1 COLLATE=latin1_german1_ci;

    In this table we have, for example, only one hotel with Region name = "Graubünden" (please note the umlaut ü character). Now I want both of these phrases to match it: 'graubunden' and 'graubünden'. This is simple with the use of MySQL's built-in collations in regular searches, as follows:

    SELECT * FROM `hotels`
    WHERE `Region` LIKE CONVERT(_utf8 '%graubunden%' USING latin1) COLLATE latin1_german1_ci

    This works fine for both 'graubunden' and 'graubünden', and as a result I receive the proper rows. The problem is when I make a MySQL full-text search. What's wrong with this SQL statement?

    SELECT * FROM hotels
    WHERE MATCH (`HotelNo`,`Hotel`,`Address`,`City`,`CityFR`,`Region`,`RegionFR`,`Country`,`CountryFR`,`HotelText`,`HotelTextFR`,`tagsforsearch`,`tagsforsearchFR`)
    AGAINST( CONVERT('+graubunden' USING latin1) COLLATE latin1_german1_ci IN BOOLEAN MODE)
    ORDER BY Country ASC, Region ASC, City ASC

    This doesn't return any result. Any ideas where the dog is buried?
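
    One thing worth ruling out first (an editorial guess, not established in the question): the columns declare an explicit character set but no explicit collation, so the FULLTEXT index may have been built with latin1's default collation rather than the table-level latin1_german1_ci, and the two collations fold ü differently. Rebuilding the searched columns with the German collation makes the index itself treat u and ü as equal, for example:

    ALTER TABLE hotels
      MODIFY `Hotel` varchar(80) CHARACTER SET latin1 COLLATE latin1_german1_ci NOT NULL default '',
      MODIFY `Region` varchar(50) CHARACTER SET latin1 COLLATE latin1_german1_ci,
      MODIFY `Country` varchar(50) CHARACTER SET latin1 COLLATE latin1_german1_ci;
    -- repeat for the remaining FULLTEXT columns; the ALTER rebuilds the index with the new collation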

    Read the article

  • Newbie T-SQL dynamic stored procedure -- how can I improve it?

    - by Andy Jones
    I'm new to T-SQL; all my experience is in a completely different database environment (Openedge). I've learned enough to write the procedure below -- but also enough to know that I don't know enough! This routine will have to go into a live environment soon, and it works, but I'm quite certain there are a number of c**k-ups and gotchas in it that I know nothing about. The routine copies data from table A to table B, replacing the data in table B. The tables could be in any database. I plan to call this routine multiple times from another stored procedure. Permissions aren't a problem: the routine will be run by the dba as a timed job. Could I have your suggestions as to how to make it fit best-practice? To bullet-proof it?

    ALTER PROCEDURE [dbo].[copyTable2Table]
        @sdb varchar(30),
        @stable varchar(30),
        @tdb varchar(30),
        @ttable varchar(30),
        @raiseerror bit = 1,
        @debug bit = 0
    as
    begin
        set nocount on

        declare @source varchar(65)
        declare @target varchar(65)
        declare @dropstmt varchar(100)
        declare @insstmt varchar(100)
        declare @ErrMsg nvarchar(4000)
        declare @ErrSeverity int

        set @source = '[' + @sdb + '].[dbo].[' + @stable + ']'
        set @target = '[' + @tdb + '].[dbo].[' + @ttable + ']'
        set @dropStmt = 'drop table ' + @target
        set @insStmt = 'select * into ' + @target + ' from ' + @source
        set @errMsg = ''
        set @errSeverity = 0

        if @debug = 1
            print('Drop:' + @dropStmt + ' Insert:' + @insStmt)

        -- drop the target table, copy the source table to the target
        begin try
            begin transaction
            exec(@dropStmt)
            exec(@insStmt)
            commit
        end try
        begin catch
            if @@trancount > 0
                rollback
            select @errMsg = error_message(), @errSeverity = error_severity()
        end catch

        -- update the log table
        insert into HHG_system.dbo.copyaudit
            (copytime, copyuser, source, target, errmsg, errseverity)
        values(getdate(), user_name(user_id()), @source, @target, @errMsg, @errSeverity)

        if @debug = 1
            print('Message:' + @errMsg + ' Severity:' + convert(Char, @errSeverity))

        -- handle errors, return value
        if @errMsg <> ''
        begin
            if @raiseError = 1
                raiserror(@errMsg, @errSeverity, 1)
            return 1
        end
        return 0
    END

    Thanks!
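
    One hardening step worth adding to any review of this routine (an editorial sketch, assuming SQL Server 2005 or later): build the object names with QUOTENAME instead of concatenating brackets by hand, so a ] character in a database or table name cannot break out of the dynamic SQL:

    declare @source nvarchar(300), @target nvarchar(300)
    -- QUOTENAME doubles any embedded ] characters, which manual '[' + ... + ']' concatenation does not
    set @source = quotename(@sdb) + N'.dbo.' + quotename(@stable)
    set @target = quotename(@tdb) + N'.dbo.' + quotename(@ttable)

    Declaring the name parameters as sysname (nvarchar(128)) rather than varchar(30) would also avoid silent truncation of long object names.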

    Read the article

  • convert MsSql Stored Procedure to MySql

    - by karthik
    I need to convert the following SQL Server stored procedure to MySQL. I am new to MySQL; help needed.

    CREATE PROC InsertGenerator
    (@tableName varchar(100)) as

    --Declare a cursor to retrieve column specific information
    --for the specified table
    DECLARE cursCol CURSOR FAST_FORWARD FOR
    SELECT column_name,data_type FROM information_schema.columns
    WHERE table_name = @tableName
    OPEN cursCol

    DECLARE @string nvarchar(3000)     --for storing the first half of INSERT statement
    DECLARE @stringData nvarchar(3000) --for storing the data (VALUES) related statement
    DECLARE @dataType nvarchar(1000)   --data types returned for respective columns

    SET @string='INSERT '+@tableName+'('
    SET @stringData=''

    DECLARE @colName nvarchar(50)

    FETCH NEXT FROM cursCol INTO @colName,@dataType
    IF @@fetch_status<>0
    begin
        print 'Table '+@tableName+' not found, processing skipped.'
        close curscol
        deallocate curscol
        return
    END

    WHILE @@FETCH_STATUS=0
    BEGIN
        IF @dataType in ('varchar','char','nchar','nvarchar')
        BEGIN
            SET @stringData=@stringData+'''''''''+ isnull('+@colName+','''')+'''''',''+'
        END
        ELSE if @dataType in ('text','ntext') --if the datatype is text or something else
        BEGIN
            SET @stringData=@stringData+'''''''''+ isnull(cast('+@colName+' as varchar(2000)),'''')+'''''',''+'
        END
        ELSE IF @dataType = 'money' --because money doesn't get converted from varchar implicitly
        BEGIN
            SET @stringData=@stringData+'''convert(money,''''''+ isnull(cast('+@colName+' as varchar(200)),''0.0000'')+''''''),''+'
        END
        ELSE IF @dataType='datetime'
        BEGIN
            SET @stringData=@stringData+'''convert(datetime,''''''+ isnull(cast('+@colName+' as varchar(200)),''0'')+''''''),''+'
        END
        ELSE IF @dataType='image'
        BEGIN
            SET @stringData=@stringData+'''''''''+ isnull(cast(convert(varbinary,'+@colName+') as varchar(6)),''0'')+'''''',''+'
        END
        ELSE --presuming the data type is int,bit,numeric,decimal
        BEGIN
            SET @stringData=@stringData+'''''''''+ isnull(cast('+@colName+' as varchar(200)),''0'')+'''''',''+'
        END
        SET @string=@string+@colName+','
        FETCH NEXT FROM cursCol INTO @colName,@dataType
    END
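
    As a starting point, here is a minimal sketch of how the cursor skeleton translates to MySQL 5.x stored-procedure syntax; the handler-based loop replaces @@FETCH_STATUS, and the string building would be done with CONCAT. Names and details are illustrative, not a finished port:

    DELIMITER $$
    CREATE PROCEDURE InsertGenerator(IN p_tableName VARCHAR(100))
    BEGIN
        DECLARE v_done INT DEFAULT 0;
        DECLARE v_colName VARCHAR(64);
        DECLARE v_dataType VARCHAR(64);
        DECLARE cursCol CURSOR FOR
            SELECT column_name, data_type
            FROM information_schema.columns
            WHERE table_name = p_tableName AND table_schema = DATABASE();
        -- MySQL signals cursor exhaustion through a NOT FOUND handler
        DECLARE CONTINUE HANDLER FOR NOT FOUND SET v_done = 1;

        OPEN cursCol;
        read_loop: LOOP
            FETCH cursCol INTO v_colName, v_dataType;
            IF v_done = 1 THEN
                LEAVE read_loop;
            END IF;
            -- build the column list and VALUES fragments here with CONCAT(),
            -- branching on v_dataType as the T-SQL version does
        END LOOP;
        CLOSE cursCol;
    END$$
    DELIMITER ;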

    Read the article

  • Modeling objects with multiple table relationships in Zend Framework

    - by andybaird
    I'm toying with Zend Framework and trying to use the "QuickStart" guide against a website I'm making, just to see how the process would work. Forgive me if the answer is obvious; hopefully someone experienced can shed some light on this. I have three database tables:

    CREATE TABLE `users` (
      `id` int(11) NOT NULL auto_increment,
      `email` varchar(255) NOT NULL,
      `username` varchar(255) NOT NULL default '',
      `first` varchar(128) NOT NULL default '',
      `last` varchar(128) NOT NULL default '',
      `gender` enum('M','F') default NULL,
      `birthyear` year(4) default NULL,
      `postal` varchar(16) default NULL,
      `auth_method` enum('Default','OpenID','Facebook','Disabled') NOT NULL default 'Default',
      PRIMARY KEY (`id`),
      UNIQUE KEY `email` (`email`),
      UNIQUE KEY `username` (`username`)
    ) ENGINE=MyISAM DEFAULT CHARSET=latin1

    CREATE TABLE `user_password` (
      `user_id` int(11) NOT NULL,
      `password` varchar(16) NOT NULL default '',
      PRIMARY KEY (`user_id`),
      UNIQUE KEY `user_id` (`user_id`)
    ) ENGINE=MyISAM DEFAULT CHARSET=latin1

    CREATE TABLE `user_metadata` (
      `user_id` int(11) NOT NULL default '0',
      `signup_date` datetime default NULL,
      `signup_ip` varchar(15) default NULL,
      `last_login_date` datetime default NULL,
      `last_login_ip` varchar(15) default NULL,
      PRIMARY KEY (`user_id`),
      UNIQUE KEY `user_id` (`user_id`)
    ) ENGINE=MyISAM DEFAULT CHARSET=latin1

    I want to create a User model that uses all three tables in certain situations. E.g., the metadata table is accessed if/when the metadata is needed. The user_password table is accessed only if the 'Default' auth_method is set. I'll likely be adding a profile table later on that I would like to be able to access from the user model. What is the best way to do this with ZF, and why?

    Read the article

  • How to display multiple categories and products underneath each category?

    - by shin
    Generally there is a category menu, and each link leads to a category page that shows all the items under that category. Now I need to show all the categories and the products underneath each one, with PHP/MySQL, on the same page. So it will be like this:

    Category 1
    description of category 1
    item 1
    item 2
    ..

    Category 2
    description of category 2
    item 5
    item 6
    ..

    Category 3
    description of category 3
    item 8
    item 9
    ...
    ...

    I have category and product tables in my database, but I am not sure how to proceed.

    CREATE TABLE IF NOT EXISTS `omc_product` (
      `id` int(11) NOT NULL AUTO_INCREMENT,
      `name` varchar(255) NOT NULL,
      `shortdesc` varchar(255) NOT NULL,
      `longdesc` text NOT NULL,
      `thumbnail` varchar(255) NOT NULL,
      `image` varchar(255) NOT NULL,
      `product_order` int(11) DEFAULT NULL,
      `class` varchar(255) DEFAULT NULL,
      `grouping` varchar(16) DEFAULT NULL,
      `status` enum('active','inactive') NOT NULL,
      `category_id` int(11) NOT NULL,
      `featured` enum('none','front','webshop') NOT NULL,
      `other_feature` enum('none','most sold','new product') NOT NULL,
      `price` float(7,2) NOT NULL,
      PRIMARY KEY (`id`)
    ) ENGINE=MyISAM DEFAULT CHARSET=latin1 AUTO_INCREMENT=2 ;

    CREATE TABLE IF NOT EXISTS `omc_category` (
      `id` int(11) NOT NULL AUTO_INCREMENT,
      `name` varchar(255) NOT NULL,
      `shortdesc` varchar(255) NOT NULL,
      `longdesc` text NOT NULL,
      `status` enum('active','inactive') NOT NULL,
      `parentid` int(11) NOT NULL,
      PRIMARY KEY (`id`)
    ) ENGINE=MyISAM DEFAULT CHARSET=latin1 AUTO_INCREMENT=2 ;

    I will appreciate your help. Thanks in advance.
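
    A sketch of one common approach (an editorial suggestion, assuming the column names above): fetch everything in one query ordered by category, then loop in the application and start a new section whenever the category id changes:

    SELECT c.id   AS category_id,
           c.name AS category_name,
           c.shortdesc,
           p.id   AS product_id,
           p.name AS product_name
    FROM omc_category c
    LEFT JOIN omc_product p
           ON p.category_id = c.id AND p.status = 'active'
    WHERE c.status = 'active'
    ORDER BY c.name, p.product_order;  -- LEFT JOIN keeps categories that have no products

    In the display loop, emit the category name and description on the first row of each group and an item line for every product row after it.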

    Read the article

  • Further filter SQL results

    - by eric
    I've got a query that returns a proper result set, using SQL 2005. It is as follows:

    select case when convert(varchar(4),datepart(yyyy,bug.datecreated),101)+ ' Q' +convert(varchar(2),datepart(qq,bug.datecreated),101) = '1969 Q4'
                then '2009 Q2'
                else convert(varchar(4),datepart(yyyy,bug.datecreated),101)+ ' Q' +convert(varchar(2),datepart(qq,bug.datecreated),101)
           end as [Quarter],
           bugtypes.bugtypename,
           count(bug.bugid) as [Total]
    from bug
    left outer join bugtypes on bug.crntbugtypeid = bugtypes.bugtypeid
                            and bug.projectid = bugtypes.projectid
    where (bug.projectid = 44
           and bug.currentowner in (-1000000031,-1000000045)
           and bug.crntplatformid in (42,37,25,14))
       or (bug.projectid = 44
           and bug.currentowner in (select memberid from groupmembers where projectid = 44 and groupid in (87,88))
           and bug.crntplatformid in (42,37,25,14))
    group by case when convert(varchar(4),datepart(yyyy,bug.datecreated),101)+ ' Q' +convert(varchar(2),datepart(qq,bug.datecreated),101) = '1969 Q4'
                  then '2009 Q2'
                  else convert(varchar(4),datepart(yyyy,bug.datecreated),101)+ ' Q' +convert(varchar(2),datepart(qq,bug.datecreated),101)
             end,
             bugtypes.bugtypename
    order by 1,3 desc

    It produces a nicely grouped list of years and quarters, an associated descriptor, and a count of incidents in descending count order. What I'd like to do is further filter this so it shows only the 10 most submitted incidents per quarter. What I'm struggling with is how to take this result set and achieve that.
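
    Since this is SQL Server 2005, one route worth sketching (an editorial suggestion, with the original query abbreviated to a placeholder comment): wrap the grouped result in a derived table and rank within each quarter using ROW_NUMBER:

    SELECT [Quarter], bugtypename, Total
    FROM (
        SELECT [Quarter], bugtypename, Total,
               ROW_NUMBER() OVER (PARTITION BY [Quarter]
                                  ORDER BY Total DESC) AS rn  -- rank 1 = most submitted in that quarter
        FROM ( /* the original grouped query, minus its ORDER BY, goes here */ ) AS q
    ) AS ranked
    WHERE rn <= 10
    ORDER BY [Quarter], Total DESC;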

    Read the article

  • Need help to debug application using .Net and MySQL

    - by Tim Murphy
    What advice can you give me on how to track down a bug I have while inserting data into a MySQL database with .NET? The error message is:

    MySql.Data.MySqlClient.MySqlException: Duplicate entry '26012' for key 'StockNumber_Number_UNIQUE'

    Reviewing the log proves that a StockNumber_Number of 26012 has not been inserted yet. Products in use:

    - Visual Studio 2008
    - mysql.data.dll 6.0.4.0
    - Windows 7 Ultimate 64 bit and Windows 2003 32 bit
    - Custom-built ORM framework (have source code)
    - Importing data from an Access 2003 database

    The code works fine for 3000 - 5000 imports. The record that causes the problem in a full run works fine if imported by itself. I've also seen the error on other records if I sort the data to be imported a different way. I have tried the import with and without transactions, and have logged the hell out of the system.

    The SQL command to create the table:

    CREATE TABLE `RareItems_RareItems` (
      `RareItemKey` CHAR(36) NOT NULL PRIMARY KEY,
      `StockNumber_Text` VARCHAR(7) NOT NULL,
      `StockNumber_Number` INT NOT NULL AUTO_INCREMENT,
      UNIQUE INDEX `StockNumber_Number_UNIQUE` (`StockNumber_Number` ASC),
      `OurPercentage` NUMERIC ,
      `SellPrice` NUMERIC(19, 2) ,
      `Author` VARCHAR(250) ,
      `CatchWord` VARCHAR(250) ,
      `Title` TEXT ,
      `Publisher` VARCHAR(250) ,
      `InternalNote` VARCHAR(250) ,
      `DateOfPublishing` VARCHAR(250) ,
      `ExternalNote` LONGTEXT ,
      `Description` LONGTEXT ,
      `Scrap` LONGTEXT ,
      `SuppressionKey` CHAR(36) NOT NULL,
      `TypeKey` CHAR(36) NOT NULL,
      `CatalogueStatusKey` CHAR(36) NOT NULL,
      `CatalogueRevisedDate` DATETIME ,
      `CatalogueRevisedByKey` CHAR(36) NOT NULL,
      `CatalogueToBeRevisedByKey` CHAR(36) NOT NULL,
      `DontInsure` BIT NOT NULL,
      `ExtraCosts` NUMERIC(19, 2) ,
      `IsWebReady` BIT NOT NULL,
      `LocationKey` CHAR(36) NOT NULL,
      `LanguageKey` CHAR(36) NOT NULL,
      `CatalogueDescription` VARCHAR(250) ,
      `PlacePublished` VARCHAR(250) ,
      `ToDo` LONGTEXT ,
      `Headline` VARCHAR(250) ,
      `DepartmentKey` CHAR(36) NOT NULL,
      `Temp1` INT ,
      `Temp2` INT ,
      `Temp3` VARCHAR(250) ,
      `Temp4` VARCHAR(250) ,
      `InternetStatusKey` CHAR(36) NOT NULL,
      `InternetStatusInfo` LONGTEXT ,
      `PurchaseKey` CHAR(36) NOT NULL,
      `ConsignmentKey` CHAR(36) ,
      `IsSold` BIT NOT NULL,
      `RowCreated` DATETIME NOT NULL,
      `RowModified` DATETIME NOT NULL
    );

    The SQL command and parameters to insert the record:

    INSERT INTO `RareItems_RareItems`
      (`RareItemKey`, `StockNumber_Text`, `StockNumber_Number`, `OurPercentage`, `SellPrice`, `Author`, `CatchWord`, `Title`, `Publisher`, `InternalNote`, `DateOfPublishing`, `ExternalNote`, `Description`, `Scrap`, `SuppressionKey`, `TypeKey`, `CatalogueStatusKey`, `CatalogueRevisedDate`, `CatalogueRevisedByKey`, `CatalogueToBeRevisedByKey`, `DontInsure`, `ExtraCosts`, `IsWebReady`, `LocationKey`, `LanguageKey`, `CatalogueDescription`, `PlacePublished`, `ToDo`, `Headline`, `DepartmentKey`, `Temp1`, `Temp2`, `Temp3`, `Temp4`, `InternetStatusKey`, `InternetStatusInfo`, `PurchaseKey`, `ConsignmentKey`, `IsSold`, `RowCreated`, `RowModified`)
    VALUES
      (@RareItemKey, @StockNumber_Text, @StockNumber_Number, @OurPercentage, @SellPrice, @Author, @CatchWord, @Title, @Publisher, @InternalNote, @DateOfPublishing, @ExternalNote, @Description, @Scrap, @SuppressionKey, @TypeKey, @CatalogueStatusKey, @CatalogueRevisedDate, @CatalogueRevisedByKey, @CatalogueToBeRevisedByKey, @DontInsure, @ExtraCosts, @IsWebReady, @LocationKey, @LanguageKey, @CatalogueDescription, @PlacePublished, @ToDo, @Headline, @DepartmentKey, @Temp1, @Temp2, @Temp3, @Temp4, @InternetStatusKey, @InternetStatusInfo, @PurchaseKey, @ConsignmentKey, @IsSold, @RowCreated, @RowModified)

    The parameter values:

    @RareItemKey = 0b625bd6-776d-43d6-9405-e97159d172a6
    @StockNumber_Text = 199305
    @StockNumber_Number = 26012
    @OurPercentage = 22.5
    @SellPrice = 1250
    @Author = SPARRMAN, Anders.
    @CatchWord = COOK: SECOND VOYAGE
    @Title = A Voyage Round the World with Captain James Cook in H.M.S. Resolution… Introduction and notes by Owen Rutter, wood engravings by Peter Barker-Mill.
    @Publisher =
    @InternalNote =
    @DateOfPublishing = 1944
    @ExternalNote = The first English translation of Sparrman’s narrative, which had originally been published in Sweden in 1802-1818, and the only complete version of his account to appear in English. The eighteenth-century translation had appeared some time before the Swedish publication of the final sections of his account. Sparrman’s observant and well-written narrative of the second voyage contains much that appears nowhere else, emphasising naturally his interests in medicine, health, and natural history.<br><br>One of 350 numbered copies: a handsomely produced and beautifully illustrated work.
    @Description = Small folio, wood-engravings in the text; original olive glazed cloth, top edges gilt, a very good copy. London, Golden Cockerel Press, 1944.
    @Scrap =
    @SuppressionKey = 00000000-0000-0000-0000-000000000000
    @TypeKey = 93f58155-7471-46ad-84c5-262ab9dd37e8
    @CatalogueStatusKey = 00000000-0000-0000-0000-000000000003
    @CatalogueRevisedDate =
    @CatalogueRevisedByKey = c4f6fc06-956d-44c4-b393-0d5462cbffec
    @CatalogueToBeRevisedByKey = 00000000-0000-0000-0000-000000000000
    @DontInsure = False
    @ExtraCosts =
    @IsWebReady = False
    @LocationKey = 00000000-0000-0000-0000-000000000000
    @LanguageKey = 00000000-0000-0000-0000-000000000000
    @CatalogueDescription =
    @PlacePublished = Golden Cockerel Press
    @ToDo =
    @Headline =
    @DepartmentKey = 529578a3-9189-40de-b656-eef9039d00b8
    @Temp1 =
    @Temp2 =
    @Temp3 =
    @Temp4 = v
    @InternetStatusKey = 00000000-0000-0000-0000-000000000000
    @InternetStatusInfo =
    @PurchaseKey = 00000000-0000-0000-0000-000000000000
    @ConsignmentKey =
    @IsSold = True
    @RowCreated = 8/04/2010 8:49:16 PM
    @RowModified = 8/04/2010 8:49:16 PM

    Suggestions on what is causing the error and/or how to track down what is causing the problem?
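
    One comparison worth making (an editorial guess, not from the thread: the INSERT supplies an explicit value for StockNumber_Number even though the column is AUTO_INCREMENT, and mixing explicit values with generated ones can leave the counter out of step with the data):

    -- what MySQL thinks the next auto-generated value would be
    SELECT AUTO_INCREMENT
    FROM information_schema.TABLES
    WHERE TABLE_SCHEMA = DATABASE() AND TABLE_NAME = 'RareItems_RareItems';

    -- the highest value actually stored
    SELECT MAX(StockNumber_Number) FROM RareItems_RareItems;

    If the two disagree at the moment of the failure, logging both around each insert should show whether the collision comes from the counter or from the imported data itself.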

    Read the article

  • Combine config parameters with parameters passed from the command line

    - by Frederik
    I have created an SSIS package that imports a file into a table (simple enough). I have some variables; a few are set in a config file, such as server, database, and import folder. At runtime I want to pass the filename. This is done through a stored procedure using dtexec. When setting the parameters through the config file it works fine, and it also works when setting all parameters in the procedure and passing them with the \Set statement (see below). When I try to combine the config version with setting parameters on the fly, I get an error referring to the config file's path that was set at design time. Has anybody come across this and found a solution for it? Regards, Frederik

    DECLARE @SSISSTR VARCHAR(8000),
            @DataBaseServer VARCHAR(100),
            @DataBaseName VARCHAR(100),
            @PackageFilePath VARCHAR(200),
            @ImportFolder VARCHAR(200),
            @HandledFolder VARCHAR(200),
            @ConfigFilePath VARCHAR(200),
            @SSISreturncode INT;

    /* DEBUGGING
    DECLARE @FileName VARCHAR(100), @SelectionId INT
    SET @FileName = 'Test.csv';
    SET @SelectionId = 366;
    */

    SET @PackageFilePath = '/FILE "Y:\SSIS\Packages\PostalCodeSelectionImport\ImportPackage.dtsx" ';
    SET @DataBaseServer = 'STOSWVUTVDB01\DEV_BSE';
    SET @DataBaseName = 'BSE_ODR';
    SET @ImportFolder = '\\Stoswvutvbse01\Application\FileLoadArea\ODR\\';
    SET @HandledFolder = '\\Stoswvutvbse01\Application\FileLoadArea\ODR\Handled\\';
    --SET @ConfigFilePath = '/CONFIGFILE "Y:\SSIS\Packages\PostalCodeSelectionImport\Configuration\DEV_BSE.dtsConfig" ';

    -- now making "dtexec" SQL from dynamic values
    SET @SSISSTR = 'DTEXEC ' + @PackageFilePath; -- + @ConfigFilePath;
    SET @SSISSTR = @SSISSTR + ' /SET \Package.Variables[User::SelectionId].Properties[Value];' + CAST(@SelectionId AS VARCHAR(12));
    SET @SSISSTR = @SSISSTR + ' /SET \Package.Variables[User::DataBaseServer].Properties[Value];"' + @DataBaseServer + '"';
    SET @SSISSTR = @SSISSTR + ' /SET \Package.Variables[User::ImportFolder].Properties[Value];"' + @ImportFolder + '" ';
    SET @SSISSTR = @SSISSTR + ' /SET \Package.Variables[User::DataBaseName].Properties[Value];"' + @DataBaseName + '" ';
    SET @SSISSTR = @SSISSTR + ' /SET \Package.Variables[User::ImportFileName].Properties[Value];"' + @FileName + '" ';
    SET @SSISSTR = @SSISSTR + ' /SET \Package.Variables[User::HandledFolder].Properties[Value];"' + @HandledFolder + '" ';

    -- Now execute dynamic SQL by using EXEC.
    EXEC @SSISreturncode = xp_cmdshell @SSISSTR;
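
    For reference, the combined form to test looks like this (an editorial sketch resting on the assumption that, on this SSIS version, /SET options are applied after /CONFIGFILE and therefore override it; the ordering behavior differs between SSIS releases):

    -- re-enable the config file, then pass only the per-run value via /SET
    SET @ConfigFilePath = '/CONFIGFILE "Y:\SSIS\Packages\PostalCodeSelectionImport\Configuration\DEV_BSE.dtsConfig" ';
    SET @SSISSTR = 'DTEXEC ' + @PackageFilePath + @ConfigFilePath;
    SET @SSISSTR = @SSISSTR + ' /SET \Package.Variables[User::ImportFileName].Properties[Value];"' + @FileName + '" ';

    If the error still names the design-time config path, the package itself is likely carrying an enabled package configuration that points at that path, independent of the command line.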

    Read the article

  • getting rid of filesort on WordPress MySQL query

    - by Hans
    An instance of WordPress that I manage goes down about once a day due to this monster MySQL query taking far too long:

    SELECT SQL_CALC_FOUND_ROWS distinct wp_posts.*
    FROM wp_posts
    LEFT JOIN wp_term_relationships ON (wp_posts.ID = wp_term_relationships.object_id)
    LEFT JOIN wp_term_taxonomy ON wp_term_taxonomy.term_taxonomy_id = wp_term_relationships.term_taxonomy_id
    LEFT JOIN wp_ec3_schedule ec3_sch ON ec3_sch.post_id=id
    WHERE 1=1
    AND wp_posts.ID NOT IN (
        SELECT tr.object_id
        FROM wp_term_relationships AS tr
        INNER JOIN wp_term_taxonomy AS tt ON tr.term_taxonomy_id = tt.term_taxonomy_id
        WHERE tt.taxonomy = 'category'
        AND tt.term_id IN ('1050')
    )
    AND wp_posts.post_type = 'post'
    AND (wp_posts.post_status = 'publish')
    AND NOT EXISTS (
        SELECT * FROM wp_term_relationships
        JOIN wp_term_taxonomy ON wp_term_taxonomy.term_taxonomy_id = wp_term_relationships.term_taxonomy_id
        WHERE wp_term_relationships.object_id = wp_posts.ID
        AND wp_term_taxonomy.taxonomy = 'category'
        AND wp_term_taxonomy.term_id IN (533,3567)
    )
    AND ec3_sch.post_id IS NULL
    GROUP BY wp_posts.ID
    ORDER BY wp_posts.post_date DESC
    LIMIT 0, 10;

    What do I have to do to get rid of the very slow filesort? I would think that the multicolumn type_status_date index would be fast enough. The EXPLAIN EXTENDED output is below.

    | id | select_type | table | type | possible_keys | key | key_len | ref | rows | Extra |
    | 1 | PRIMARY | wp_posts | ref | type_status_date | type_status_date | 124 | const,const | 7034 | Using where; Using temporary; Using filesort |
    | 1 | PRIMARY | wp_term_relationships | ref | PRIMARY | PRIMARY | 8 | bwog_wordpress_w.wp_posts.ID | 373 | Using index |
    | 1 | PRIMARY | wp_term_taxonomy | eq_ref | PRIMARY | PRIMARY | 8 | bwog_wordpress_w.wp_term_relationships.term_taxonomy_id | 1 | Using index |
    | 1 | PRIMARY | ec3_sch | ref | post_id_index | post_id_index | 9 | bwog_wordpress_w.wp_posts.ID | 1 | Using where; Using index |
    | 3 | DEPENDENT SUBQUERY | wp_term_taxonomy | range | PRIMARY,term_id_taxonomy,taxonomy | term_id_taxonomy | 106 | NULL | 2 | Using where |
    | 3 | DEPENDENT SUBQUERY | wp_term_relationships | eq_ref | PRIMARY,term_taxonomy_id | PRIMARY | 16 | bwog_wordpress_w.wp_posts.ID,bwog_wordpress_w.wp_term_taxonomy.term_taxonomy_id | 1 | Using index |
    | 2 | DEPENDENT SUBQUERY | tt | const | PRIMARY,term_id_taxonomy,taxonomy | term_id_taxonomy | 106 | const,const | 1 | |
    | 2 | DEPENDENT SUBQUERY | tr | eq_ref | PRIMARY,term_taxonomy_id | PRIMARY | 16 | func,const | 1 | Using index |

    8 rows in set, 2 warnings (0.05 sec)

    And the CREATE TABLE:

    CREATE TABLE `wp_posts` (
      `ID` bigint(20) unsigned NOT NULL auto_increment,
      `post_author` bigint(20) unsigned NOT NULL default '0',
      `post_date` datetime NOT NULL default '0000-00-00 00:00:00',
      `post_date_gmt` datetime NOT NULL default '0000-00-00 00:00:00',
      `post_content` longtext NOT NULL,
      `post_title` text NOT NULL,
      `post_excerpt` text NOT NULL,
      `post_status` varchar(20) NOT NULL default 'publish',
      `comment_status` varchar(20) NOT NULL default 'open',
      `ping_status` varchar(20) NOT NULL default 'open',
      `post_password` varchar(20) NOT NULL default '',
      `post_name` varchar(200) NOT NULL default '',
      `to_ping` text NOT NULL,
      `pinged` text NOT NULL,
      `post_modified` datetime NOT NULL default '0000-00-00 00:00:00',
      `post_modified_gmt` datetime NOT NULL default '0000-00-00 00:00:00',
      `post_content_filtered` text NOT NULL,
      `post_parent` bigint(20) unsigned NOT NULL default '0',
      `guid` varchar(255) NOT NULL default '',
      `menu_order` int(11) NOT NULL default '0',
      `post_type` varchar(20) NOT NULL default 'post',
      `post_mime_type` varchar(100) NOT NULL default '',
      `comment_count` bigint(20) NOT NULL default '0',
      `robotsmeta` varchar(64) default NULL,
      PRIMARY KEY (`ID`),
      KEY `post_name` (`post_name`),
      KEY `type_status_date` (`post_type`,`post_status`,`post_date`,`ID`),
      KEY `post_parent` (`post_parent`),
      KEY `post_date` (`post_date`),
      FULLTEXT KEY `post_related` (`post_title`,`post_content`)
    )
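
    One low-cost experiment (an editorial suggestion resting on the assumption that in a stock WordPress schema wp_posts.ID ascends with post_date): sort on the column the query already groups by, so MySQL can read rows in index order instead of materializing a temporary table and sorting it:

    -- last lines of the same query, with only the ORDER BY changed
    GROUP BY wp_posts.ID
    ORDER BY wp_posts.ID DESC
    LIMIT 0, 10;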

    Read the article

  • Mybatis nested collection doesn't work correctly with column prefix

    - by Shikarn-O
    I need to populate a collection on an object that itself lives in another collection, using MyBatis mappings. It works for me without columnPrefix, but I need it since there are a lot of repeated column names.

    <resultMap id="ParentMap" type="org.example.mybatis.Parent">
      ...
      <collection property="childs" javaType="ArrayList" ofType="org.example.mybatis.Child"
          resultMap="ChildMap" columnPrefix="c_"/>
    </resultMap>

    <resultMap id="ChildMap" type="org.example.mybatis.Parent">
      <id column="Id" jdbcType="VARCHAR" property="id" />
      <id column="ParentId" jdbcType="VARCHAR" property="parentId" />
      <id column="Name" jdbcType="VARCHAR" property="name" />
      <id column="SurName" jdbcType="VARCHAR" property="surName" />
      <id column="Age" jdbcType="INTEGER" property="age" />
      <collection property="toys" javaType="ArrayList" ofType="org.example.mybatis.Toy"
          resultMap="ToyMap" columnPrefix="t_"/>
    </resultMap>

    <resultMap id="ToyMap" type="org.example.mybatis.Toy">
      <id column="Id" jdbcType="VARCHAR" property="id" />
      <id column="ChildId" jdbcType="VARCHAR" property="childId" />
      <id column="Name" jdbcType="VARCHAR" property="name" />
      <id column="Color" jdbcType="VARCHAR" property="color" />
    </resultMap>

    <sql id="Parent_Column_List">
      p.Id, p.Name, p.SurName,
    </sql>
    <sql id="Child_Column_List">
      c.Id as c_Id, c.ParentId as c_ParentId, c.Name as c_Name, c.SurName as c_Surname, c.Age as c_Age,
    </sql>
    <sql id="Toy_Column_List">
      t.Id as t_Id, t.Name as t_Name, t.Color as t_Color
    </sql>

    <select id="getParent" parameterType="java.lang.String" resultMap="ParentMap">
      select
      <include refid="Parent_Column_List"/>
      <include refid="Child_Column_List" />
      <include refid="Toy_Column_List" />
      from Parent p
      left outer join Child c on p.Id = c.ParentId
      left outer join Toy t on c.Id = t.ChildId
      where p.id = #{id,jdbcType=VARCHAR}
    </select>

    With columnPrefix everything works fine, except that the nested toys collection is empty. The SQL query run against the database works correctly and all toys are joined. Maybe I missed something, or is this a bug in MyBatis?
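
    One detail worth testing (an assumption on my part about how MyBatis composes columnPrefix values on nested result maps): the outer collection's c_ prefix may need to be prepended to the inner collection's aliases, so the Toy_Column_List would select:

    t.Id as c_t_Id, t.Name as c_t_Name, t.Color as c_t_Color

    If that is the cause, the ToyMap definition itself stays unchanged; only the SQL aliases move under the composed c_t_ prefix.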

    Read the article

  • Hibernate: how to call a stored function returning a varchar?

    - by Péter Török
    I am trying to call a legacy stored function in an Oracle 9i DB from Java using Hibernate. The function is declared like this:

    create or replace FUNCTION Transferlocation_Fix (mnemonic_code IN VARCHAR2)
    RETURN VARCHAR2

    After several failed tries and extensive googling, I found a thread on the Hibernate forums which suggested a mapping like this:

    <sql-query name="TransferLocationFix" callable="true">
      <return-scalar column="retVal" type="string"/>
      select Transferlocation_Fix(:mnemonic) as retVal from dual
    </sql-query>

    My code to execute it is:

    Query query = session.getNamedQuery("TransferLocationFix");
    query.setParameter("mnemonic", "FC3");
    String result = (String) query.uniqueResult();

    and the resulting log is:

    DEBUG (org.hibernate.jdbc.AbstractBatcher:366) - about to open PreparedStatement (open PreparedStatements: 0, globally: 0)
    DEBUG (org.hibernate.SQL:401) - select Transferlocation_Fix(?) as retVal from dual
    TRACE (org.hibernate.jdbc.AbstractBatcher:484) - preparing statement
    TRACE (org.hibernate.type.StringType:133) - binding 'FC3' to parameter: 2
    TRACE (org.hibernate.type.StringType:133) - binding 'FC3' to parameter: 2
    java.lang.NullPointerException
        at oracle.jdbc.ttc7.TTCAdapter.newTTCType(TTCAdapter.java:300)
        at oracle.jdbc.ttc7.TTCAdapter.createNonPlsqlTTCColumnArray(TTCAdapter.java:270)
        at oracle.jdbc.ttc7.TTCAdapter.createNonPlsqlTTCDataSet(TTCAdapter.java:231)
        at oracle.jdbc.ttc7.TTC7Protocol.doOall7(TTC7Protocol.java:1924)
        at oracle.jdbc.ttc7.TTC7Protocol.parseExecuteDescribe(TTC7Protocol.java:850)
        at oracle.jdbc.driver.OracleStatement.doExecuteQuery(OracleStatement.java:2599)
        at oracle.jdbc.driver.OracleStatement.doExecuteWithTimeout(OracleStatement.java:2963)
        at oracle.jdbc.driver.OraclePreparedStatement.executeUpdate(OraclePreparedStatement.java:658)
        at oracle.jdbc.driver.OraclePreparedStatement.execute(OraclePreparedStatement.java:736)
        at com.mchange.v2.c3p0.impl.NewProxyCallableStatement.execute(NewProxyCallableStatement.java:3044)
        at org.hibernate.dialect.Oracle8iDialect.getResultSet(Oracle8iDialect.java:379)
        at org.hibernate.jdbc.AbstractBatcher.getResultSet(AbstractBatcher.java:193)
        at org.hibernate.loader.Loader.getResultSet(Loader.java:1784)
        at org.hibernate.loader.Loader.doQuery(Loader.java:674)
        at org.hibernate.loader.Loader.doQueryAndInitializeNonLazyCollections(Loader.java:236)
        at org.hibernate.loader.Loader.doList(Loader.java:2220)
        at org.hibernate.loader.Loader.listIgnoreQueryCache(Loader.java:2104)
        at org.hibernate.loader.Loader.list(Loader.java:2099)
        at org.hibernate.loader.custom.CustomLoader.list(CustomLoader.java:289)
        at org.hibernate.impl.SessionImpl.listCustomQuery(SessionImpl.java:1695)
        at org.hibernate.impl.AbstractSessionImpl.list(AbstractSessionImpl.java:142)
        at org.hibernate.impl.SQLQueryImpl.list(SQLQueryImpl.java:152)
        at org.hibernate.impl.AbstractQueryImpl.uniqueResult(AbstractQueryImpl.java:811)
        at com.my.project.SomeClass.method(SomeClass.java:202)
        ...

    Any clues as to what I am doing wrong? Or are there any better ways to call this stored function?
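
    One variant worth a try (hedged: it relies on the JDBC escape syntax for function calls, which Hibernate's callable named queries are documented to use with Oracle; whether it works against this driver is an assumption): replace the SELECT ... FROM dual body of the mapping with the function-call form, keeping callable="true":

    { ? = call Transferlocation_Fix(:mnemonic) }

    Separately, the NullPointerException surfaces deep inside the very old oracle.jdbc.ttc7 classes, so testing with a newer Oracle JDBC driver would help distinguish a mapping problem from a driver problem.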

    Read the article

  • error in GP installation

    - by Rahul khanna
    Hi all, I know this is not the forum to post installation problems, but I am posting this type of problem because I am helpless; please, somebody, give me some idea. After installing GP 10.0, I ran the GP Utility for server installation. In the middle of the installation I got the following error. I have a SQL 2000 database, and I created the System DSN for it and tested it successfully. Please, somebody, help me understand what this error is about.

    The following SQL statement produced an error:

    create procedure smRuleTestSendMail
      @emailid varchar(255),
      @ccids varchar(255),
      @bccids varchar(255),
      @emailmsg varchar(255)
    as
    if substring(CONVERT(varchar(128), SERVERPROPERTY('ProductVersion')), 1,
       PATINDEX('%.%', CONVERT(varchar(128), SERVERPROPERTY('ProductVersion'))) - 1) < 9
    begin
      exec master.dbo.xp_sendmail @recipients=@emailid, @copy_recipients=@ccids,
           @blind_copy_recipients=@bccids, @message=@emailmsg,
           @Subject='TEST message from SQL Server'
    end
    else
    begin
      if PATINDEX('%64-bit%', @@version) > 0
      begin
        exec msdb.dbo.sp_send_dbmail @recipients=@emailid, @copy_recipients=@ccids,
             @blind_copy_recipients=@bccids, @body=@emailmsg,
             @subject='TEST message from SQL Server'
      end
      else
      begin
        exec master.dbo.xp_sendmail @recipients=@emailid, @copy_recipients=@ccids,
             @blind_copy_recipients=@bccids, @message=@emailmsg,
             @Subject='TEST message from SQL Server'
      end
    end
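
    Before digging further, a quick check worth running (an editorial suggestion, not from a knowledge base article): see which version string the failing procedure branches on, since it selects xp_sendmail for SQL Server 2000 (major version 8) and sp_send_dbmail for 2005 and later:

    SELECT CONVERT(varchar(128), SERVERPROPERTY('ProductVersion')) AS ProductVersion,
           @@VERSION AS FullVersion;

    If this reports an 8.x build, it is worth confirming that GP 10.0 supports that SQL Server release before troubleshooting the statement itself.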

    Read the article

< Previous Page | 6 7 8 9 10 11 12 13 14 15 16 17  | Next Page >