Search Results

Search found 130 results on 6 pages for 'collate'.


  • rails, mysql charsets & encoding: binary

    - by Benjamin Vetter
    Hi, I have a Rails app that runs using UTF-8. It uses a MySQL database in which all tables use MySQL's default charset and collation (i.e. latin1), so the latin1 tables actually contain UTF-8 data. Sure, that's not nice, but I'm not really interested in changing it. Everything works fine, because the connection encoding is latin1 as well and therefore MySQL does not convert between charsets. Only one problem: I need a UTF-8 fulltext index for one table: mysql> show create table autocompletephrases; ... AUTO_INCREMENT=310095 DEFAULT CHARSET=utf8 COLLATE=utf8_unicode_ci But: I don't want to convert between charsets in my Rails app. Therefore I would like to know if I could just set config/database.yml production: adapter: mysql >>>> encoding: binary ... which just calls SET NAMES 'binary' when connecting to MySQL. It looks like it works for my case, because I guess it forces MySQL to -not- convert between charsets (MySQL docs). Does anyone know of problems with doing this? Any side effects? Or do you have any other suggestions? I'd like to avoid converting my whole database to UTF-8. Many thanks! Benjamin
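
    For what it's worth, the setting in question boils down to the statements below at the MySQL level (a sketch for illustration only; the variable listing is just a way to verify the effect on the session):

    ```sql
    -- What `encoding: binary` in database.yml amounts to on connect,
    -- as the question itself notes:
    SET NAMES 'binary';

    -- Inspect the resulting session character-set settings; with the binary
    -- charset MySQL performs no conversion between the client and the column
    -- charsets, which is exactly the behaviour being relied on here.
    SHOW VARIABLES LIKE 'character_set%';
    ```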

    Read the article

  • SQL deadlock on delete then bulk insert

    - by StarLite
    I have an issue with a deadlock in SQL Server that I haven't been able to resolve. Basically I have a large number of concurrent connections (from many machines) that are executing transactions where they first delete a range of entries and then re-insert entries within the same range with a bulk insert. Essentially, the transaction looks like this: BEGIN TRANSACTION T1 DELETE FROM [TableName] WITH( XLOCK HOLDLOCK ) WHERE [Id]=@Id AND [SubId]=@SubId INSERT BULK [TableName] ( [Id] Int , [SubId] Int , [Text] VarChar(max) COLLATE SQL_Latin1_General_CP1_CI_AS ) WITH(CHECK_CONSTRAINTS, FIRE_TRIGGERS) COMMIT TRANSACTION T1 The bulk insert only inserts items matching the Id and SubId of the deletion in the same transaction. Furthermore, these Id and SubId entries should never overlap. When I have enough concurrent transactions of this form, I start to see a significant number of deadlocks between these statements. I added the locking hints XLOCK HOLDLOCK to attempt to deal with the issue, but they don't seem to be helping. The canonical deadlock graph for this error shows: Connection 1: Holds RangeX-X on PK_TableName Holds IX Page lock on the table Requesting X Page lock on the table Connection 2: Holds IX Page lock on the table Requests RangeX-X lock on the table What do I need to do in order to ensure that these deadlocks don't occur? I have been doing some reading on the RangeX-X locks and I'm not sure I fully understand what is going on with these. Do I have any options short of locking the entire table here?
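
    One mitigation often suggested for this delete-then-reload pattern (my own suggestion, not something from the post) is to serialize writers per (Id, SubId) pair with an application lock, so that two transactions can never interleave their range operations on the same key pair. A rough sketch, reusing the question's placeholder table and parameter names:

    ```sql
    -- Hypothetical sketch; [TableName], @Id and @SubId stand in for the real names.
    DECLARE @Id INT = 1, @SubId INT = 2;
    DECLARE @resource NVARCHAR(255) =
        N'TableName_' + CONVERT(NVARCHAR(20), @Id) + N'_' + CONVERT(NVARCHAR(20), @SubId);

    BEGIN TRANSACTION T1;

    -- Exclusive, transaction-owned app lock: released automatically at COMMIT/ROLLBACK.
    -- (The return value should be checked; >= 0 means the lock was granted.)
    EXEC sp_getapplock @Resource = @resource, @LockMode = 'Exclusive',
                       @LockOwner = 'Transaction', @LockTimeout = 10000;

    DELETE FROM [TableName] WHERE [Id] = @Id AND [SubId] = @SubId;
    -- ... bulk insert of the replacement rows for the same (@Id, @SubId) range here ...

    COMMIT TRANSACTION T1;
    ```

    With writers on the same key pair serialized up front, the key-range locks produced by XLOCK HOLDLOCK no longer have to carry the serialization burden, which is where the reported deadlock arises.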

    Read the article

  • Get percentiles of data-set with group by month

    - by Cylindric
    Hello, I have a SQL table with a whole load of records that look like this: | Date | Score | + -----------+-------+ | 01/01/2010 | 4 | | 02/01/2010 | 6 | | 03/01/2010 | 10 | ... | 16/03/2010 | 2 | I'm plotting this on a chart, so I get a nice line across the graph indicating score-over-time. Lovely. Now, what I need to do is include the average score on the chart, so we can see how that changes over time, so I can simply add this to the mix: SELECT YEAR(SCOREDATE) 'Year', MONTH(SCOREDATE) 'Month', MIN(SCORE) MinScore, AVG(SCORE) AverageScore, MAX(SCORE) MaxScore FROM SCORES GROUP BY YEAR(SCOREDATE), MONTH(SCOREDATE) ORDER BY YEAR(SCOREDATE), MONTH(SCOREDATE) That's no problem so far. The problem is, how can I easily calculate the percentiles at each time-period? I'm not sure that's the correct phrase. What I need in total is: A line on the chart for the score (easy) A line on the chart for the average (easy) A line on the chart showing the band that 95% of the scores occupy (stumped) It's the third one that I don't get. I need to calculate the 5% percentile figures, which I can do singly: SELECT MAX(SubQ.SCORE) FROM (SELECT TOP 45 PERCENT SCORE FROM SCORES WHERE YEAR(SCOREDATE) = 2010 AND MONTH(SCOREDATE) = 1 ORDER BY SCORE ASC) AS SubQ SELECT MIN(SubQ.SCORE) FROM (SELECT TOP 45 PERCENT SCORE FROM SCORES WHERE YEAR(SCOREDATE) = 2010 AND MONTH(SCOREDATE) = 1 ORDER BY SCORE DESC) AS SubQ But I can't work out how to get a table of all the months. | Date | Average | 45% | 55% | + -----------+---------+-----+-----+ | 01/01/2010 | 13 | 11 | 15 | | 02/01/2010 | 10 | 8 | 12 | | 03/01/2010 | 5 | 4 | 10 | ... | 16/03/2010 | 7 | 7 | 9 | At the moment I'm going to have to load this lot up into my app, and calculate the figures myself. Or run a larger number of individual queries and collate the results.
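
    One way to get every month from a single statement (a sketch that assumes SQL Server 2005 or later, since it relies on ranking functions; it mirrors the TOP 45 PERCENT logic above rather than using a true percentile function):

    ```sql
    WITH Ranked AS (
        SELECT YEAR(SCOREDATE)  AS ScoreYear,
               MONTH(SCOREDATE) AS ScoreMonth,
               SCORE,
               ROW_NUMBER() OVER (PARTITION BY YEAR(SCOREDATE), MONTH(SCOREDATE)
                                  ORDER BY SCORE) AS rn,
               COUNT(*)     OVER (PARTITION BY YEAR(SCOREDATE), MONTH(SCOREDATE)) AS cnt
        FROM SCORES
    )
    SELECT ScoreYear, ScoreMonth,
           AVG(SCORE) AS AverageScore,
           -- highest score in the bottom 45% of the month (cf. the TOP 45 PERCENT ... ASC query)
           MAX(CASE WHEN rn <= 0.45 * cnt THEN SCORE END) AS Pct45,
           -- lowest score in the top 45% of the month (cf. the TOP 45 PERCENT ... DESC query)
           MIN(CASE WHEN rn >  0.55 * cnt THEN SCORE END) AS Pct55
    FROM Ranked
    GROUP BY ScoreYear, ScoreMonth
    ORDER BY ScoreYear, ScoreMonth;
    ```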

    Read the article

  • Optimal two variable linear regression SQL statement (censoring outliers)

    - by Dave Jarvis
    Problem Am looking to apply the y = mx + b equation (where m is SLOPE, b is INTERCEPT) to a data set, which is retrieved as shown in the SQL code. The values from the (MySQL) query are: SLOPE = 0.0276653965651912 INTERCEPT = -57.2338357550468 SQL Code SELECT ((sum(t.YEAR) * sum(t.AMOUNT)) - (count(1) * sum(t.YEAR * t.AMOUNT))) / (power(sum(t.YEAR), 2) - count(1) * sum(power(t.YEAR, 2))) as SLOPE, ((sum( t.YEAR ) * sum( t.YEAR * t.AMOUNT )) - (sum( t.AMOUNT ) * sum(power(t.YEAR, 2)))) / (power(sum(t.YEAR), 2) - count(1) * sum(power(t.YEAR, 2))) as INTERCEPT FROM (SELECT D.AMOUNT, Y.YEAR FROM CITY C, STATION S, YEAR_REF Y, MONTH_REF M, DAILY D WHERE -- For a specific city ... -- C.ID = 8590 AND -- Find all the stations within a 15 unit radius ... -- SQRT( POW( C.LATITUDE - S.LATITUDE, 2 ) + POW( C.LONGITUDE - S.LONGITUDE, 2 ) ) <15 AND -- Gather all known years for that station ... -- S.STATION_DISTRICT_ID = Y.STATION_DISTRICT_ID AND -- The data before 1900 is shaky; insufficient after 2009. -- Y.YEAR BETWEEN 1900 AND 2009 AND -- Filtered by all known months ... -- M.YEAR_REF_ID = Y.ID AND -- Whittled down by category ... -- M.CATEGORY_ID = '001' AND -- Into the valid daily climate data. -- M.ID = D.MONTH_REF_ID AND D.DAILY_FLAG_ID <> 'M' GROUP BY Y.YEAR ORDER BY Y.YEAR ) t Data The data is visualized here (with five outliers highlighted): Questions How do I return the y value against all rows without repeating the same query to collect and collate the data? That is, how do I "reuse" the list of t values? How would you change the query to eliminate outliers (at an 85% confidence interval)? The following results (to calculate the start and end points of the line) appear incorrect. Why are the results off by ~10 degrees (e.g., outliers skewing the data)? (1900 * 0.0276653965651912) + (-57.2338357550468) = -4.66958228 (2009 * 0.0276653965651912) + (-57.2338357550468) = -1.65405406 I would have expected the 1900 result to be around 10 (not -4.67) and the 2009 result to be around 11.50 (not -1.65). Thank you!
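
    For the first question (reusing the list of t values), one option is to materialize the per-year aggregates once and then join the computed coefficients back onto every row. This is only a sketch: it assumes a helper table, here called year_amount(yr, amount), that has been filled from the inner query shown above.

    ```sql
    -- Hypothetical helper table holding the output of the inner query above
    -- (one row per year with its aggregated amount).
    CREATE TEMPORARY TABLE year_amount (yr INT, amount DOUBLE);
    -- INSERT INTO year_amount SELECT Y.YEAR, ... ;  -- the original inner query goes here

    SELECT ya.yr,
           ya.amount,
           r.SLOPE * ya.yr + r.INTERCEPT AS fitted_y
    FROM year_amount ya
    CROSS JOIN (
        SELECT ((SUM(yr) * SUM(amount)) - (COUNT(*) * SUM(yr * amount))) /
               (POW(SUM(yr), 2) - COUNT(*) * SUM(POW(yr, 2)))                       AS SLOPE,
               ((SUM(yr) * SUM(yr * amount)) - (SUM(amount) * SUM(POW(yr, 2)))) /
               (POW(SUM(yr), 2) - COUNT(*) * SUM(POW(yr, 2)))                       AS INTERCEPT
        FROM year_amount
    ) AS r;
    ```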

    Read the article

  • Optimal two variable linear regression SQL statement

    - by Dave Jarvis
    Problem Am looking to apply the y = mx + b equation (where m is SLOPE, b is INTERCEPT) to a data set, which is retrieved as shown in the SQL code. The values from the (MySQL) query are: SLOPE = 0.0276653965651912 INTERCEPT = -57.2338357550468 SQL Code SELECT ((sum(t.YEAR) * sum(t.AMOUNT)) - (count(1) * sum(t.YEAR * t.AMOUNT))) / (power(sum(t.YEAR), 2) - count(1) * sum(power(t.YEAR, 2))) as SLOPE, ((sum( t.YEAR ) * sum( t.YEAR * t.AMOUNT )) - (sum( t.AMOUNT ) * sum(power(t.YEAR, 2)))) / (power(sum(t.YEAR), 2) - count(1) * sum(power(t.YEAR, 2))) as INTERCEPT FROM (SELECT D.AMOUNT, Y.YEAR FROM CITY C, STATION S, YEAR_REF Y, MONTH_REF M, DAILY D WHERE -- For a specific city ... -- C.ID = 8590 AND -- Find all the stations within a 5 unit radius ... -- SQRT( POW( C.LATITUDE - S.LATITUDE, 2 ) + POW( C.LONGITUDE - S.LONGITUDE, 2 ) ) <15 AND -- Gather all known years for that station ... -- S.STATION_DISTRICT_ID = Y.STATION_DISTRICT_ID AND -- The data before 1900 is shaky; and insufficient after 2009. -- Y.YEAR BETWEEN 1900 AND 2009 AND -- Filtered by all known months ... -- M.YEAR_REF_ID = Y.ID AND -- Whittled down by category ... -- M.CATEGORY_ID = '001' AND -- Into the valid daily climate data. -- M.ID = D.MONTH_REF_ID AND D.DAILY_FLAG_ID <> 'M' GROUP BY Y.YEAR ORDER BY Y.YEAR ) t Data The data is visualized here: Questions How do I return the y value against all rows without repeating the same query to collect and collate the data? That is, how do I "reuse" the list of t values? How would you change the query to eliminate outliers (at an 85% confidence interval)? The following results (to calculate the start and end points of the line) appear incorrect. Why are the results off by ~10 degrees (e.g., outliers skewing the data)? (1900 * 0.0276653965651912) + (-57.2338357550468) = -4.66958228 (2009 * 0.0276653965651912) + (-57.2338357550468) = -1.65405406 I would have expected the 1900 result to be around 10 (not -4.67) and the 2009 result to be around 11.50 (not -1.65). Thank you!

    Read the article

  • save html-formatted text to database

    - by yozhik
    Hi all! I want to save HTML-formatted text to the database, but when I do that it doesn't save HTML symbols like < / ' and others. This is how I read an article from the database for editing: <p class="Title">??????????? ???????:</p> <textarea name="EN" cols="90" rows="20" value="<?php echo $articleArr['EN']; ?>" ></textarea> And this is how I save it to the database: function UpdateArticle($ArticleID, $ParentName, $Title, $RU, $EN, $UKR) { //fetch data from database for dropdown lists //connect to db or die) $db = mysql_connect($GLOBALS["gl_dbName"], $GLOBALS["gl_UserName"], $GLOBALS["gl_Password"] ) or die ("Unable to connect"); //to prevent ????? symbols in unicode - utf-8 coding mysql_query("SET NAMES 'UTF8'"); mysql_set_charset('utf8'); mysql_query("SET NAMES 'utf8' COLLATE 'utf8_general_ci'"); //select database mysql_select_db($GLOBALS["gl_adminDatabase"], $db); $sql = "UPDATE Articles SET AParentName='".$ParentName."', ATitle='".$Title."', RU='".$RU."', EN='".$EN."', UKR='".$UKR."' WHERE ArticleID='".$ArticleID."';"; //execute SQL-query $result = mysql_query($sql, $db); if (!$result) { die('Invalid query: ' . mysql_error()); } //close database = very important mysql_close($db); } Please help me: how can I save such text properly? Thanks!

    Read the article

  • Corrupt UTF-8 Characters with PHP 5.2.10 and MySQL 5.0.81

    - by jkndrkn
    We have an application hosted on both a local development server and a live site. We are experiencing UTF-8 corruption issues and are looking to figure out how to resolve them. The system is run using symfony 1.0 with Propel. On our development server, we are running PHP 5.2.0 and MySQL 5.0.32. We do not experience corrupted UTF-8 characters there. On our live site, PHP 5.2.10 and MySQL 5.0.81 is running. On that server, certain characters such as ô´ and S are corrupted once they are stored in the database. The corrupted characters are showing up as either question marks or approximations of the original character with adjacent question marks. Examples of corruption: Uncorrupted: ô´ Corrupted: ô? Uncorrupted: S Corrupted: ? We are currently using the following techniques on both development and live servers: Executing the following queries prior to execution of any other queries: SET NAMES 'utf8' COLLATE 'utf8_unicode_ci' SET CHARSET 'utf8' Setting the <meta> Content-Type value to: <meta http-equiv="Content-Type" content="text/html; charset=utf-8" /> Adding the following to our .htaccess file: AddDefaultCharset utf-8 Using mb_* (multibyte) PHP functions where necessary. Being sure to set database columns to use utf8_unicode_ci collation. These techniques are sufficient for our development site, but do not work on the live site. On the live site I've also tried adding mysql_set_encoding('ut8', $mysql_connection) but this does not help either. I have found some evidence that newer versions of PHP and MySQL are mishandling UTF-8 character encodings.
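
    A quick diagnostic that often narrows this kind of issue down (my suggestion, not something from the post) is to compare the effective character-set and collation variables on the development and live servers over the same kind of connection the application uses; a difference in connection or server defaults between the two environments is a common cause of this corruption pattern:

    ```sql
    -- Run on both servers, connected the same way the application connects.
    SHOW VARIABLES LIKE 'character_set%';
    SHOW VARIABLES LIKE 'collation%';

    -- Confirm the column-level charsets really match between the two databases.
    -- (`some_table` is a placeholder for one of the affected tables.)
    SHOW FULL COLUMNS FROM some_table;
    ```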

    Read the article

  • gridview image column problem

    - by jame
    I am working in VS 2005 with C# ASP.NET. My SQL syntax is: if exists (select * from dbo.sysobjects where id = object_id(N'[dbo].[Images]') and OBJECTPROPERTY(id, N'IsUserTable') = 1) drop table [dbo].[Images] GO CREATE TABLE [dbo].[Images] ( [ID] [numeric](18, 0) IDENTITY (1, 1) NOT NULL , [ImageName] [varchar] (50) COLLATE SQL_Latin1_General_CP1_CI_AS NULL , [Image] [image] NULL ) ON [PRIMARY] TEXTIMAGE_ON [PRIMARY] GO I want to show the Images table values in a GridView. I do that, but the image value cannot be shown. The ASP.NET syntax for the GridView is <asp:GridView ID="GridView2" runat="server" AutoGenerateColumns="False"> <Columns> <asp:BoundField DataField="ImageName" HeaderText="ImageName" /> <asp:TemplateField HeaderText="Image"> <EditItemTemplate> <asp:TextBox ID="TextBox1" runat="server" Text='<%# Eval("Image") %>'></asp:TextBox> </EditItemTemplate> <ItemTemplate> <asp:Image ID="Image1" runat="server" ImageUrl='<%# Eval("Image") %>' /> </ItemTemplate> </asp:TemplateField> </Columns> </asp:GridView> I write the code below in the page load event; I want the Images table values to be shown when the page loads: string strSQL = "Select * From Images"; DataTable dt = clsDB.getDataTable(strSQL); this.GridView2.DataSource = dt; this.GridView2.DataBind(); Why do I not get the image in the image column of the GridView? What is the problem, and how do I solve it?

    Read the article

  • Mysql slow query: INNER JOIN + ORDER BY causes filesort

    - by Alexander
    Hello! I'm trying to optimize this query: SELECT `posts`.* FROM `posts` INNER JOIN `posts_tags` ON `posts`.id = `posts_tags`.post_id WHERE (((`posts_tags`.tag_id = 1))) ORDER BY posts.created_at DESC; The size of tables is 38k rows, and 31k and mysql uses "filesort" so it gets pretty slow. I tried to use different indexes, no luck. CREATE TABLE `posts` ( `id` int(11) NOT NULL auto_increment, `created_at` datetime default NULL, PRIMARY KEY (`id`), KEY `index_posts_on_created_at` (`created_at`), KEY `for_tags` (`trashed`,`published`,`clan_private`,`created_at`) ) ENGINE=InnoDB AUTO_INCREMENT=44390 DEFAULT CHARSET=utf8 COLLATE=utf8_unicode_ci CREATE TABLE `posts_tags` ( `id` int(11) NOT NULL auto_increment, `post_id` int(11) default NULL, `tag_id` int(11) default NULL, `created_at` datetime default NULL, `updated_at` datetime default NULL, PRIMARY KEY (`id`), KEY `index_posts_tags_on_post_id_and_tag_id` (`post_id`,`tag_id`) ) ENGINE=InnoDB AUTO_INCREMENT=63175 DEFAULT CHARSET=utf8 +----+-------------+------------+--------+--------------------------+--------------------------+---------+---------------------+-------+-----------------------------------------------------------+ | id | select_type | table | type | possible_keys | key | key_len | ref | rows | Extra | +----+-------------+------------+--------+--------------------------+--------------------------+---------+---------------------+-------+-----------------------------------------------------------+ | 1 | SIMPLE | posts_tags | index | index_post_id_and_tag_id | index_post_id_and_tag_id | 10 | NULL | 24159 | Using where; Using index; Using temporary; Using filesort | | 1 | SIMPLE | posts | eq_ref | PRIMARY | PRIMARY | 4 | .posts_tags.post_id | 1 | | +----+-------------+------------+--------+--------------------------+--------------------------+---------+---------------------+-------+-----------------------------------------------------------+ 2 rows in set (0.00 sec) What kind of index I need to define to avoid mysql using filesort? Is it possible when order field is not in where clause?
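
    Two commonly suggested directions for this join-plus-ORDER-BY pattern, sketched below as assumptions rather than a verified fix:

    ```sql
    -- 1) Let the optimizer resolve the WHERE on posts_tags directly by tag_id
    --    instead of scanning the (post_id, tag_id) index:
    ALTER TABLE posts_tags ADD KEY idx_tag_post (tag_id, post_id);

    -- 2) To remove the filesort entirely, denormalize the sort column onto
    --    posts_tags so that one index covers both the filter and the ordering:
    ALTER TABLE posts_tags ADD COLUMN post_created_at DATETIME;
    ALTER TABLE posts_tags ADD KEY idx_tag_created (tag_id, post_created_at);
    -- ...and then ORDER BY posts_tags.post_created_at DESC in the query.
    ```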

    Read the article

  • Symfony generating database from model

    - by Sergej Jevsejev
    Hello, I am having troubles generating a simple database form model. I am using: Doctrine on Symfony 1.4.4 MySQL Workbench 5.2.16 with Doctrine Export 0.4.2dev So my ERL Model is: http://img708.imageshack.us/img708/1716/tmg.png Genereted YAML file: --- detect_relations: true options: collate: utf8_unicode_ci charset: utf8 type: InnoDB Course: columns: id: type: integer(4) primary: true notnull: true autoincrement: true name: type: string(255) notnull: true keywords: type: string(255) notnull: true summary: type: clob(65535) notnull: true Lecture: columns: id: type: integer(4) primary: true notnull: true autoincrement: true course_id: type: integer(4) primary: true notnull: true name: type: string(255) notnull: true description: type: string(255) notnull: true url: type: string(255) relations: Course: class: Course local: course_id foreign: id foreignAlias: Lectures foreignType: many owningSide: true User: columns: id: type: integer(4) primary: true unique: true notnull: true autoincrement: true firstName: type: string(255) notnull: true lastName: type: string(255) notnull: true email: type: string(255) unique: true notnull: true designation: type: string(1024) personalHeadline: type: string(1024) shortBio: type: clob(65535) UserCourse: tableName: user_has_course columns: user_id: type: integer(4) primary: true notnull: true course_id: type: integer(4) primary: true notnull: true relations: User: class: User local: user_id foreign: id foreignAlias: UserCourses foreignType: many owningSide: true Course: class: Course local: course_id foreign: id foreignAlias: UserCourses foreignType: many owningSide: true And no matter what I try this error occurs after: symfony doctrine:build --all --no-confirmation SQLSTATE[42000]: Syntax error or access violation: 1072 Key column 'user_userid' doesn't exist in table. Failing Query: "ALTER TABLE user_has_course ADD CONSTRAINT user_has_course_user_userid_user_id FOREIGN KEY (user_userid) REFERENCES user(id)". Failing Query: ALTER TABLE user_has_course ADD CONSTRAINT user_has_cou rse_user_userid_user_id FOREIGN KEY (user_userid) REFERENCES user(id) Currently I am studying Symfony, and stuck with this error. Please help.
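
    For reference, the failing statement refers to a user_userid column that the generated user_has_course table does not have; the constraint the build step presumably intends (a hypothetical sketch, since the real fix lies in the exported schema and relations) would look like:

    ```sql
    -- Hypothetical corrected form of the failing ALTER TABLE: the local column
    -- defined in the YAML schema is user_id, not user_userid.
    ALTER TABLE user_has_course
        ADD CONSTRAINT user_has_course_user_id_user_id
        FOREIGN KEY (user_id) REFERENCES user (id);
    ```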

    Read the article

  • How much does an InnoDB table benefit from having fixed-length rows?

    - by Philip Eve
    I know that dependent on the database storage engine in use, a performance benefit can be found if all of the rows in the table can be guaranteed to be the same length (by avoiding nullable columns and not using any VARCHAR, TEXT or BLOB columns). I'm not clear on how far this applies to InnoDB, with its funny table arrangements. Let's give an example: I have the following table CREATE TABLE `PlayerGameRcd` ( `User` SMALLINT UNSIGNED NOT NULL, `Game` MEDIUMINT UNSIGNED NOT NULL, `GameResult` ENUM('Quit', 'Kicked by Vote', 'Kicked by Admin', 'Kicked by System', 'Finished 5th', 'Finished 4th', 'Finished 3rd', 'Finished 2nd', 'Finished 1st', 'Game Aborted', 'Playing', 'Hide' ) NOT NULL DEFAULT 'Playing', `Inherited` TINYINT NOT NULL, `GameCounts` TINYINT NOT NULL, `Colour` TINYINT UNSIGNED NOT NULL, `Score` SMALLINT UNSIGNED NOT NULL DEFAULT 0, `NumLongTurns` TINYINT UNSIGNED NOT NULL DEFAULT 0, `Notes` MEDIUMTEXT, `CurrentOccupant` TINYINT UNSIGNED NOT NULL DEFAULT 0, PRIMARY KEY (`Game`, `User`), UNIQUE KEY `PGR_multi_uk` (`Game`, `CurrentOccupant`, `Colour`), INDEX `Stats_ind_PGR` (`GameCounts`, `GameResult`, `Score`, `User`), INDEX `GameList_ind_PGR` (`User`, `CurrentOccupant`, `Game`, `Colour`), CONSTRAINT `Constr_PlayerGameRcd_User_fk` FOREIGN KEY `User_fk` (`User`) REFERENCES `User` (`UserID`) ON DELETE CASCADE ON UPDATE CASCADE, CONSTRAINT `Constr_PlayerGameRcd_Game_fk` FOREIGN KEY `Game_fk` (`Game`) REFERENCES `Game` (`GameID`) ON DELETE CASCADE ON UPDATE CASCADE ) ENGINE=INNODB CHARACTER SET utf8 COLLATE utf8_general_ci The only column that is nullable is Notes, which is MEDIUMTEXT. This table presently has 33097 rows (which I appreciate is small as yet). Of these rows, only 61 have values in Notes. How much of an improvement might I see from, say, adding a new table to store the Notes column in and performing LEFT JOINs when necessary?
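
    A sketch of the split described in the closing question (names follow the original definition; this is an illustration of the idea, not a recommendation either way):

    ```sql
    -- Side table holding only the rare Notes values.
    CREATE TABLE `PlayerGameRcdNotes` (
      `User`  SMALLINT UNSIGNED  NOT NULL,
      `Game`  MEDIUMINT UNSIGNED NOT NULL,
      `Notes` MEDIUMTEXT         NOT NULL,
      PRIMARY KEY (`Game`, `User`)
    ) ENGINE=INNODB CHARACTER SET utf8 COLLATE utf8_general_ci;

    -- Fetch the notes only when they are actually needed.
    SELECT p.*, n.Notes
    FROM PlayerGameRcd p
    LEFT JOIN PlayerGameRcdNotes n
           ON n.Game = p.Game AND n.User = p.User;
    ```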

    Read the article

  • SQL SERVER – SSMS: Memory Usage By Memory Optimized Objects Report

    - by Pinal Dave
    At conferences and at speaking engagements at the local UG, there is one question that keeps on coming which I wish were never asked. The question around, “Why is SQL Server using up all the memory and not releasing even when idle?” Well, the answer can be long and with the release of SQL Server 2014, this got even more complicated. This release of SQL Server 2014 has the option of introducing In-Memory OLTP which is completely new concept and our dependency on memory has increased multifold. In reality, nothing much changes but we have memory optimized objects (Tables and Stored Procedures) additional which are residing completely in memory and improving performance. As a DBA, it is humanly impossible to get a hang of all the innovations and the new features introduced in the next version. So today’s blog is around the report added to SSMS which gives a high level view of this new feature addition. This reports is available only from SQL Server 2014 onwards because the feature was introduced in SQL Server 2014. Earlier versions of SQL Server Management Studio would not show the report in the list. If we try to launch the report on the database which is not having In-Memory File group defined, then we would see the message in report. To demonstrate, I have created new fresh database called MemoryOptimizedDB with no special file group. Here is the query used to identify whether a database has memory-optimized file group or not. SELECT TOP(1) 1 FROM sys.filegroups FG WHERE FG.[type] = 'FX' Once we add filegroup using below command, we would see different version of report. USE [master] GO ALTER DATABASE [MemoryOptimizedDB] ADD FILEGROUP [IMO_FG] CONTAINS MEMORY_OPTIMIZED_DATA GO The report is still empty because we have not defined any Memory Optimized table in the database.  Total allocated size is shown as 0 MB. Now, let’s add the folder location into the filegroup and also created few in-memory tables. We have used the nomenclature of IMO to denote “InMemory Optimized” objects. USE [master] GO ALTER DATABASE [MemoryOptimizedDB] ADD FILE ( NAME = N'MemoryOptimizedDB_IMO', FILENAME = N'E:\Program Files\Microsoft SQL Server\MSSQL12.SQL2014\MSSQL\DATA\MemoryOptimizedDB_IMO') TO FILEGROUP [IMO_FG] GO You may have to change the path based on your SQL Server configuration. Below is the script to create the table. USE MemoryOptimizedDB GO --Drop table if it already exists. IF OBJECT_ID('dbo.SQLAuthority','U') IS NOT NULL DROP TABLE dbo.SQLAuthority GO CREATE TABLE dbo.SQLAuthority ( ID INT IDENTITY NOT NULL, Name CHAR(500)  COLLATE Latin1_General_100_BIN2 NOT NULL DEFAULT 'Pinal', CONSTRAINT PK_SQLAuthority_ID PRIMARY KEY NONCLUSTERED (ID), INDEX hash_index_sample_memoryoptimizedtable_c2 HASH (Name) WITH (BUCKET_COUNT = 131072) ) WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA) GO As soon as above script is executed, table and index both are created. If we run the report again, we would see something like below. Notice that table memory is zero but index is using memory. This is due to the fact that hash index needs memory to manage the buckets created. So even if table is empty, index would consume memory. More about the internals of how In-Memory indexes and tables work will be reserved for future posts. Now, use below script to populate the table with 10000 rows INSERT INTO SQLAuthority VALUES (DEFAULT) GO 10000 Here is the same report after inserting 1000 rows into our InMemory table.    There are total three sections in the whole report. 
Total Memory consumed by In-Memory Objects Pie chart showing memory distribution based on type of consumer – table, index and system. Details of memory usage by each table. The information about all three is taken from one single DMV, sys.dm_db_xtp_table_memory_stats This DMV contains memory usage statistics for both user and system In-Memory tables. If we query the DMV and look at data, we can easily notice that the system tables have negative object IDs.  So, to look at user table memory usage, below is the over-simplified version of query. USE MemoryOptimizedDB GO SELECT OBJECT_NAME(OBJECT_ID), * FROM sys.dm_db_xtp_table_memory_stats WHERE OBJECT_ID > 0 GO This report would help DBA to identify which in-memory object taking lot of memory which can be used as a pointer for designing solution. I am sure in future we will discuss at lengths the whole concept of In-Memory tables in detail over this blog. To read more about In-Memory OLTP, have a look at In-Memory OLTP Series at Balmukund’s Blog. Reference: Pinal Dave (http://blog.sqlauthority.com)Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Server Management Studio, SQL Tips and Tricks, T SQL Tagged: SQL Memory, SQL Reports
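
    Building on the DMV query in the post, a per-table memory summary can also be produced (a sketch; the column names used are those documented for sys.dm_db_xtp_table_memory_stats):

    ```sql
    USE MemoryOptimizedDB
    GO
    SELECT OBJECT_NAME(object_id) AS table_name,
           (memory_used_by_table_kb       + memory_used_by_indexes_kb)       / 1024.0 AS used_mb,
           (memory_allocated_for_table_kb + memory_allocated_for_indexes_kb) / 1024.0 AS allocated_mb
    FROM sys.dm_db_xtp_table_memory_stats
    WHERE object_id > 0            -- user tables only; system tables have negative IDs
    ORDER BY used_mb DESC
    GO
    ```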

    Read the article

  • Visual Studio Load Testing using Windows Azure

    - by Tarun Arora
    In my opinion the biggest adoption barrier in performance testing on smaller projects is not the tooling but the high infrastructure and administration cost that comes with this phase of testing. Only if a reusable solution was possible and infrastructure management wasn’t as expensive, adoption would certainly spike. It certainly is possible if you bring Visual Studio and Windows Azure into the equation. It is possible to run your test rig in the cloud without getting tangled in SCVMM or Lab Management. All you need is an active Azure subscription, Windows Azure endpoint enabled developer workstation running visual studio ultimate on premise, windows azure endpoint enabled worker roles on azure compute instances set up to run as test controllers and test agents. My test rig is running SQL server 2012 and Visual Studio 2012 RC agents. The beauty is that the solution is reusable, you can open the azure project, change the subscription and certificate, click publish and *BOOM* in less than 15 minutes you could have your own test rig running in the cloud. In this blog post I intend to show you how you can use the power of Windows Azure to effectively abstract the administration cost of infrastructure management and lower the total cost of Load & Performance Testing. As a bonus, I will share a reusable solution that you can use to automate test rig creation for both VS 2010 agents as well as VS 2012 agents. Introduction The slide show below should help you under the high level details of what we are trying to achive... Leveraging Azure for Performance Testing View more PowerPoint from Avanade Scenario 1 – Running a Test Rig in Windows Azure To start off with the basics, in the first scenario I plan to discuss how to, - Automate deployment & configuration of Windows Azure Worker Roles for Test Controller and Test Agent - Automate deployment & configuration of SQL database on Test Controller on the Test Controller Worker Role - Scaling Test Agents on demand - Creating a Web Performance Test and a simple Load Test - Managing Test Controllers right from Visual Studio on Premise Developer Workstation - Viewing results of the Load Test - Cleaning up - Have the above work in the shape of a reusable solution for both VS2010 and VS2012 Test Rig Scenario 2 – The scaled out Test Rig and sharing data using SQL Azure A scaled out version of this implementation would involve running multiple test rigs running in the cloud, in this scenario I will show you how to sync the load test database from these distributed test rigs into one SQL Azure database using Azure sync. The selling point for this scenario is being able to collate the load test efforts from across the organization into one data store. - Deploy multiple test rigs using the reusable solution from scenario 1 - Set up and configure Windows Azure Sync - Test SQL Azure Load Test result database created as a result of Windows Azure Sync - Cleaning up - Have the above work in the shape of a reusable solution for both VS2010 and VS2012 Test Rig The Ingredients Though with an active MSDN ultimate subscription you would already have access to everything and more, you will essentially need the below to try out the scenarios, 1. Windows Azure Subscription 2. Windows Azure Storage – Blob Storage 3. Windows Azure Compute – Worker Role 4. SQL Azure Database 5. SQL Data Sync 6. Windows Azure Connect – End points 7. SQL 2012 Express or SQL 2008 R2 Express 8. Visual Studio All Agents 2012 or Visual Studio All Agents 2010 9. 
A developer workstation set up with Visual Studio 2012 – Ultimate or Visual Studio 2010 – Ultimate 10. Visual Studio Load Test Unlimited Virtual User Pack. Walkthrough To set up the test rig in the cloud, the test controller, test agent and SQL express installers need to be available when the worker role set up starts, the easiest and most efficient way is to pre upload the required software into Windows Azure Blob storage. SQL express, test controller and test agent expose various switches which we can take advantage of including the quiet install switch. Once all the 3 have been installed the test controller needs to be registered with the test agents and the SQL database needs to be associated to the test controller. By enabling Windows Azure connect on the machines in the cloud and the developer workstation on premise we successfully create a virtual network amongst the machines enabling 2 way communication. All of the above can be done programmatically, let’s see step by step how… Scenario 1 Video Walkthrough–Leveraging Windows Azure for performance Testing Scenario 2 Work in progress, watch this space for more… Solution If you are still reading and are interested in the solution, drop me an email with your windows live id. I’ll add you to my TFS preview project which has a re-usable solution for both VS 2010 and VS 2012 test rigs as well as guidance and demo performance tests.   Conclusion Other posts and resources available here. Possibilities…. Endless!

    Read the article

  • Default Database Collations got messed up

    - by dominicdinada
    I am using Ubuntu 9.10 with XAMPP (LAMPP: MySQL 5.1.45, phpMyAdmin 3.3.1, PHP 5.3.2). My problem is that I set up my testing environment to debug my scripts locally, and when I did so a problem arose. I used the Firefox addon SQL Inject Me to test for weaknesses, and doing so caused MySQL to change the default local collations: character sets dir /opt/lampp/share/mysql/charsets/ collation connection latin1_general_ci (Global value) latin1_swedish_ci collation database latin1_swedish_ci collation server latin1_swedish_ci I have searched for quite some time for a solution to this problem and ended up looking for the db.opt file which stores this information, without success. Not finding a solution, I removed LAMPP with the "sudo rm -fR /opt" command and reinstalled, and the problem still persists. I have tried to change the collations manually and still find the database displaying latin1_swedish_ci as the default language. Why is this a problem? Why is it a problem with MySQL? Because the application I am testing and debugging locally is built on CodeIgniter with the Smarty framework, and since this combination of frameworks detects the LOCALES, or rather what the database defaults are, I keep getting errors saying there is no language file for Swedish. Of course I could get the Swedish language file to work around this problem, but I do not feel the need to make this workaround a permanent solution, as in time, when I move on to other projects, I will run into similar problems every time: A) when importing database files, backups, etc., they will default to the Swedish locale; B) as time passes I might completely forget about this error and be back to square one. I have found this code in searches for a fix, which seems to alter the tables to a desired collation: $value) { mysql_query("ALTER TABLE $value COLLATE latin1_general_ci"); }} echo "The collation of your database has been successfully changed!"; ? This is handy for switching collations one schema at a time, but it is not a fix when a framework doesn't care which language the database in question uses; it tests for the default of the entire server. I would greatly appreciate help from anyone who knows of a purge or fix for this. One final note: when I was testing, I only thought to back up the application's database and not the entire schema of the install. No matter whether I uninstall or reinstall, the database still seems to carry these problems.
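
    Two things that may help here (suggestions of mine, not from the post): check what the server actually reports for its defaults, and reset the per-database default so newly created or imported tables stop inheriting latin1_swedish_ci. The server-wide values (collation_server, character_set_server) are normally controlled by my.cnf options such as collation-server under [mysqld], rather than by anything stored in the data directory.

    ```sql
    -- See the current defaults the framework is reacting to.
    SHOW VARIABLES LIKE 'collation%';
    SHOW VARIABLES LIKE 'character_set%';

    -- Reset the default for one database; new tables then inherit it.
    -- (`your_database` is a placeholder.)
    ALTER DATABASE your_database CHARACTER SET latin1 COLLATE latin1_general_ci;
    ```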

    Read the article

  • Regular exp to validate email in C

    - by Liju Mathew
    Hi, We need to write a email validation program in C. We are planning to use GNU Cregex.h) regular expression. The regular expression we prepared is [a-z0-9!#$%&'*+/=?^_`{|}~-]+(?:\.[a-z0-9!#$%&'*+/=?^_`{|}~-]+)*@(?:[a-z0-9](?:[a-z0-9-]*[a-z0-9])?\.)+[a-z0-9](?:[a-z0-9-]*[a-z0-9])? But the below code is failing while compiling the regex. #include <stdio.h> #include <regex.h> int main(const char *argv, int argc) { const char *reg_exp = "[a-z0-9!#$%&'*+/=?^_`{|}~-]+(?:.[a-z0-9!#$%&'*+/=?^_`{|}~-]+)*@(?:[a-z0-9](?:[a-z0-9-]*[a-z0-9])?.)+[a-z0-9](?:[a-z0-9-]*[a-z0-9])?"; int status = 1; char email[71]; regex_t preg; int rc; printf("The regex = %s\n", reg_exp); rc = regcomp(&preg, reg_exp, REG_EXTENDED|REG_NOSUB); if (rc != 0) { if (rc == REG_BADPAT || rc == REG_ECOLLATE) fprintf(stderr, "Bad Regex/Collate\n"); if (rc == REG_ECTYPE) fprintf(stderr, "Invalid Char\n"); if (rc == REG_EESCAPE) fprintf(stderr, "Trailing \\\n"); if (rc == REG_ESUBREG || rc == REG_EBRACK) fprintf(stderr, "Invalid number/[] error\n"); if (rc == REG_EPAREN || rc == REG_EBRACE) fprintf(stderr, "Paren/Bracket error\n"); if (rc == REG_BADBR || rc == REG_ERANGE) fprintf(stderr, "{} content invalid/Invalid endpoint\n"); if (rc == REG_ESPACE) fprintf(stderr, "Memory error\n"); if (rc == REG_BADRPT) fprintf(stderr, "Invalid regex\n"); fprintf(stderr, "%s: Failed to compile the regular expression:%d\n", __func__, rc); return 1; } while (status) { fgets(email, sizeof(email), stdin); status = email[0]-48; rc = regexec(&preg, email, (size_t)0, NULL, 0); if (rc == 0) { fprintf(stderr, "%s: The regular expression is a match\n", __func__); } else { fprintf(stderr, "%s: The regular expression is not a match: %d\n", __func__, rc); } } regfree(&preg); return 0; } The regex compilation is failing with the below error. The regex = [a-z0-9!#$%&'*+/=?^_`{|}~-]+(?:.[a-z0-9!#$%&'*+/=?^_`{|}~-]+)*@(?:[a-z0-9](?:[a-z0-9-]*[a-z0-9])?.)+[a-z0-9](?:[a-z0-9-]*[a-z0-9])? Invalid regex main: Failed to compile the regular expression:13 What is the cause of this error? Whether the regex need to be modified? Thanks, Mathew Liju

    Read the article

  • Default Database Collations PenTesting Env

    - by dominicdinada
    I am using Ubuntu 9.10 with XAMPP (LAMPP: MySQL 5.1.45, phpMyAdmin 3.3.1, PHP 5.3.2). My problem is that I set up my testing environment to debug my scripts locally, and when I did so a problem arose. I used the Firefox addon SQL Inject Me to test for weaknesses, and doing so caused MySQL to change the default local collations: character sets dir /opt/lampp/share/mysql/charsets/ collation connection latin1_general_ci (Global value) latin1_swedish_ci collation database latin1_swedish_ci collation server latin1_swedish_ci I have searched for quite some time for a solution to this problem and ended up looking for the db.opt file which stores this information, without success. Not finding a solution, I removed LAMPP with the "sudo rm -fR /opt" command and reinstalled, and the problem still persists. I have tried to change the collations manually and still find the database displaying latin1_swedish_ci as the default language. Why is this a problem? Why is it a problem with MySQL? Because the application I am testing and debugging locally is built on CodeIgniter with the Smarty framework, and since this combination of frameworks detects the LOCALES, or rather what the database defaults are, I keep getting errors saying there is no language file for Swedish. Of course I could get the Swedish language file to work around this problem, but I do not feel the need to make this workaround a permanent solution, as in time, when I move on to other projects, I will run into similar problems every time: A) when importing database files, backups, etc., they will default to the Swedish locale; B) as time passes I might completely forget about this error and be back to square one. I have found this code in searches for a fix, which seems to alter the tables to a desired collation: $value) { mysql_query("ALTER TABLE $value COLLATE latin1_general_ci"); }} echo "The collation of your database has been successfully changed!"; ? This is handy for switching collations one schema at a time, but it is not a fix when a framework doesn't care which language the database in question uses; it tests for the default of the entire server. I would greatly appreciate help from anyone who knows of a purge or fix for this. One final note: when I was testing, I only thought to back up the application's database and not the entire schema of the install. No matter whether I uninstall or reinstall, the database still seems to carry these problems.

    Read the article

  • Storing and displaying unicode string (??????) using PHP and MySQL

    - by Anirudh Goel
    I have to store hindi text in a MySQL database, fetch it using a PHP script and display it on a webpage. I did the following: I created a database and set its encoding to UTF-8 and also the collation to utf8_bin. I added a varchar field in the table and set it to accept UTF-8 text in the charset property. Then I set about adding data to it. Here I had to copy data from an existing site. The hindi text looks like this: ????????:05:30 I directly copied this text into my database and used the PHP code echo(utf8_encode($string)) to display the data. Upon doing so the browser showed me "??????". When I inserted the UTF equivalent of the text by going to "view source" in the browser, however, ???????? translates into &#2360;&#2370;&#2352;&#2381;&#2351;&#2379;&#2342;&#2351;. If I enter and store &#2360;&#2370;&#2352;&#2381;&#2351;&#2379;&#2342;&#2351; in the database, it converts perfectly. So what I want to know is how I can directly store ???????? into my database and fetch it and display it in my webpage using PHP. Also, can anyone help me understand if there's a script which when I type in ????????, gives me &#2360;&#2370;&#2352;&#2381;&#2351;&#2379;&#2342;&#2351;? Solution Found I wrote the following sample script which worked for me. Hope it helps someone else too <html> <head> <title>Hindi</title></head> <body> <?php include("connection.php"); //simple connection setting $result = mysql_query("SET NAMES utf8"); //the main trick $cmd = "select * from hindi"; $result = mysql_query($cmd); while ($myrow = mysql_fetch_row($result)) { echo ($myrow[0]); } ?> </body> </html> The dump for my database storing hindi utf strings is CREATE TABLE `hindi` ( `data` varchar(1000) character set utf8 collate utf8_bin default NULL ) ENGINE=InnoDB DEFAULT CHARSET=latin1; INSERT INTO `hindi` VALUES ('????????'); Now my question is, how did it work without specifying "META" or header info? Thanks!

    Read the article

  • BNF – how to read syntax?

    - by Piotr Rodak
    A few days ago I read post of Jen McCown (blog) about her idea of blogging about random articles from Books Online. I think this is a great idea, even if Jen says that it’s not exciting or sexy. I noticed that many of the questions that appear on forums and other media arise from pure fact that people asking questions didn’t bother to read and understand the manual – Books Online. Jen came up with a brilliant, concise acronym that describes very well the category of posts about Books Online – RTFM365. I take liberty of tagging this post with the same acronym. I often come across questions of type – ‘Hey, i am trying to create a table, but I am getting an error’. The error often says that the syntax is invalid. 1 CREATE TABLE dbo.Employees 2 (guid uniqueidentifier CONSTRAINT DEFAULT Guid_Default NEWSEQUENTIALID() ROWGUIDCOL, 3 Employee_Name varchar(60) 4 CONSTRAINT Guid_PK PRIMARY KEY (guid) ); 5 The answer is usually(1), ‘Ok, let me check it out.. Ah yes – you have to put name of the DEFAULT constraint before the type of constraint: 1 CREATE TABLE dbo.Employees 2 (guid uniqueidentifier CONSTRAINT Guid_Default DEFAULT NEWSEQUENTIALID() ROWGUIDCOL, 3 Employee_Name varchar(60) 4 CONSTRAINT Guid_PK PRIMARY KEY (guid) ); Why many people stumble on syntax errors? Is the syntax poorly documented? No, the issue is, that correct syntax of the CREATE TABLE statement is documented very well in Books Online and is.. intimidating. Many people can be taken aback by the rather complex block of code that describes all intricacies of the statement. However, I don’t know better way of defining syntax of the statement or command. The notation that is used to describe syntax in Books Online is a form of Backus-Naur notatiion, called BNF for short sometimes. This is a notation that was invented around 50 years ago, and some say that even earlier, around 400 BC – would you believe? Originally it was used to define syntax of, rather ancient now, ALGOL programming language (in 1950’s, not in ancient India). If you look closer at the definition of the BNF, it turns out that the principles of this syntax are pretty simple. Here are a few bullet points: italic_text is a placeholder for your identifier <italic_text_in_angle_brackets> is a definition which is described further. [everything in square brackets] is optional {everything in curly brackets} is obligatory everything | separated | by | operator is an alternative ::= “assigns” definition to an identifier Yes, it looks like these six simple points give you the key to understand even the most complicated syntax definitions in Books Online. Books Online contain an article about syntax conventions – have you ever read it? Let’s have a look at fragment of the CREATE TABLE statement: 1 CREATE TABLE 2 [ database_name . [ schema_name ] . | schema_name . ] table_name 3 ( { <column_definition> | <computed_column_definition> 4 | <column_set_definition> } 5 [ <table_constraint> ] [ ,...n ] ) 6 [ ON { partition_scheme_name ( partition_column_name ) | filegroup 7 | "default" } ] 8 [ { TEXTIMAGE_ON { filegroup | "default" } ] 9 [ FILESTREAM_ON { partition_scheme_name | filegroup 10 | "default" } ] 11 [ WITH ( <table_option> [ ,...n ] ) ] 12 [ ; ] Let’s look at line 2 of the above snippet: This line uses rules 3 and 5 from the list. So you know that you can create table which has specified one of the following. 
just name – table will be created in default user schema schema name and table name – table will be created in specified schema database name, schema name and table name – table will be created in specified database, in specified schema database name, .., table name – table will be created in specified database, in default schema of the user. Note that this single line of the notation describes each of the naming schemes in deterministic way. The ‘optionality’ of the schema_name element is nested within database_name.. section. You can use either database_name and optional schema name, or just schema name – this is specified by the pipe character ‘|’. The error that user gets with execution of the first script fragment in this post is as follows: Msg 156, Level 15, State 1, Line 2 Incorrect syntax near the keyword 'DEFAULT'. Ok, let’s have a look how to find out the correct syntax. Line number 3 of the BNF fragment above contains reference to <column_definition>. Since column_definition is in angle brackets, we know that this is a reference to notion described further in the code. And indeed, the very next fragment of BNF contains syntax of the column definition. 1 <column_definition> ::= 2 column_name <data_type> 3 [ FILESTREAM ] 4 [ COLLATE collation_name ] 5 [ NULL | NOT NULL ] 6 [ 7 [ CONSTRAINT constraint_name ] DEFAULT constant_expression ] 8 | [ IDENTITY [ ( seed ,increment ) ] [ NOT FOR REPLICATION ] 9 ] 10 [ ROWGUIDCOL ] [ <column_constraint> [ ...n ] ] 11 [ SPARSE ] Look at line 7 in the above fragment. It says, that the column can have a DEFAULT constraint which, if you want to name it, has to be prepended with [CONSTRAINT constraint_name] sequence. The name of the constraint is optional, but I strongly recommend you to make the effort of coming up with some meaningful name yourself. So the correct syntax of the CREATE TABLE statement from the beginning of the article is like this: 1 CREATE TABLE dbo.Employees 2 (guid uniqueidentifier CONSTRAINT Guid_Default DEFAULT NEWSEQUENTIALID() ROWGUIDCOL, 3 Employee_Name varchar(60) 4 CONSTRAINT Guid_PK PRIMARY KEY (guid) ); That is practically everything you should know about BNF. I encourage you to study the syntax definitions for various statements and commands in Books Online, you can find really interesting things hidden there. Technorati Tags: SQL Server,t-sql,BNF,syntax   (1) No, my answer usually is a question – ‘What error message? What does it say?’. You’d be surprised to know how many people think I can go through time and space and look at their screen at the moment they received the error.

    Read the article

  • SQL SERVER – How to Recover SQL Database Data Deleted by Accident

    - by Pinal Dave
    In Repair a SQL Server database using a transaction log explorer, I showed how to use ApexSQL Log, a SQL Server transaction log viewer, to recover a SQL Server database after a disaster. In this blog, I’ll show you how to use another SQL Server disaster recovery tool from ApexSQL in a situation when data is accidentally deleted. You can download ApexSQL Recover here, install, and play along. With a good SQL Server disaster recovery strategy, data recovery is not a problem. You have a reliable full database backup with valid data, a full database backup and subsequent differential database backups, or a full database backup and a chain of transaction log backups. But not all situations are ideal. Here we’ll address some sub-optimal scenarios, where you can still successfully recover data. If you have only a full database backup This is the least optimal SQL Server disaster recovery strategy, as it doesn’t ensure minimal data loss. For example, data was deleted on Wednesday. Your last full database backup was created on Sunday, three days before the records were deleted. By using the full database backup created on Sunday, you will be able to recover SQL database records that existed in the table on Sunday. If there were any records inserted into the table on Monday or Tuesday, they will be lost forever. The same goes for records modified in this period. This method will not bring back modified records, only the old records that existed on Sunday. If you restore this full database backup, all your changes (intentional and accidental) will be lost and the database will be reverted to the state it had on Sunday. What you have to do is compare the records that were in the table on Sunday to the records on Wednesday, create a synchronization script, and execute it against the Wednesday database. If you have a full database backup followed by differential database backups Let’s say the situation is the same as in the example above, only you create a differential database backup every night. Use the full database backup created on Sunday, and the last differential database backup (created on Tuesday). In this scenario, you will lose only the data inserted and updated after the differential backup created on Tuesday. If you have a full database backup and a chain of transaction log backups This is the SQL Server disaster recovery strategy that provides minimal data loss. With a full chain of transaction logs, you can recover the SQL database to an exact point in time. To provide optimal results, you have to know exactly when the records were deleted, because restoring to a later point will not bring back the records. This method requires restoring the full database backup first. If you have any differential log backup created after the last full database backup, restore the most recent one. Then, restore transaction log backups, one by one, it the order they were created starting with the first created after the restored differential database backup. Now, the table will be in the state before the records were deleted. You have to identify the deleted records, script them and run the script against the original database. Although this method is reliable, it is time-consuming and requires a lot of space on disk. How to easily recover deleted records? The following solution enables you to recover SQL database records even if you have no full or differential database backups and no transaction log backups. 
To understand how ApexSQL Recover works, I’ll explain what happens when table data is deleted. Table data is stored in data pages. When you delete table records, they are not immediately deleted from the data pages, but marked to be overwritten by new records. Such records are not shown as existing anymore, but ApexSQL Recover can read them and create undo script for them. How long will deleted records stay in the MDF file? It depends on many factors, as time passes it’s less likely that the records will not be overwritten. The more transactions occur after the deletion, the more chances the records will be overwritten and permanently lost. Therefore, it’s recommended to create a copy of the database MDF and LDF files immediately (if you cannot take your database offline until the issue is solved) and run ApexSQL Recover on them. Note that a full database backup will not help here, as the records marked for overwriting are not included in the backup. First, I’ll delete some records from the Person.EmailAddress table in the AdventureWorks database.   I can delete these records in SQL Server Management Studio, or execute a script such as DELETE FROM Person.EmailAddress WHERE BusinessEntityID BETWEEN 70 AND 80 Then, I’ll start ApexSQL Recover and select From DELETE operation in the Recovery tab.   In the Select the database to recover step, first select the SQL Server instance. If it’s not shown in the drop-down list, click the Server icon right to the Server drop-down list and browse for the SQL Server instance, or type the instance name manually. Specify the authentication type and select the database in the Database drop-down list.   In the next step, you’re prompted to add additional data sources. As this can be a tricky step, especially for new users, ApexSQL Recover offers help via the Help me decide option.   The Help me decide option guides you through a series of questions about the database transaction log and advises what files to add. If you know that you have no transaction log backups or detached transaction logs, or the online transaction log file has been truncated after the data was deleted, select No additional transaction logs are available. If you know that you have transaction log backups that contain the delete transactions you want to recover, click Add transaction logs. The online transaction log is listed and selected automatically.   Click Add if to add transaction log backups. It would be best if you have a full transaction log chain, as explained above. The next step for this option is to specify the time range.   Selecting a small time range for the time of deletion will create the recovery script just for the accidentally deleted records. A wide time range might script the records deleted on purpose, and you don’t want that. If needed, you can check the script generated and manually remove such records. After that, for all data sources options, the next step is to select the tables. Be careful here, if you deleted some data from other tables on purpose, and don’t want to recover them, don’t select all tables, as ApexSQL Recover will create the INSERT script for them too.   The next step offers two options: to create a recovery script that will insert the deleted records back into the Person.EmailAddress table, or to create a new database, create the Person.EmailAddress table in it, and insert the deleted records. I’ll select the first one.   The recovery process is completed and 11 records are found and scripted, as expected.   
To see the script, click View script. ApexSQL Recover has its own script editor, where you can review, modify, and execute the recovery script. The insert into statements look like: INSERT INTO Person.EmailAddress( BusinessEntityID, EmailAddressID, EmailAddress, rowguid, ModifiedDate) VALUES( 70, 70, N'[email protected]' COLLATE SQL_Latin1_General_CP1_CI_AS, 'd62c5b4e-c91f-403f-b630-7b7e0fda70ce', '20030109 00:00:00.000' ); To execute the script, click Execute in the menu.   If you want to check whether the records are really back, execute SELECT * FROM Person.EmailAddress WHERE BusinessEntityID BETWEEN 70 AND 80 As shown, ApexSQL Recover recovers SQL database data after accidental deletes even without the database backup that contains the deleted data and relevant transaction log backups. ApexSQL Recover reads the deleted data from the database data file, so this method can be used even for databases in the Simple recovery model. Besides recovering SQL database records from a DELETE statement, ApexSQL Recover can help when the records are lost due to a DROP TABLE, or TRUNCATE statement, as well as repair a corrupted MDF file that cannot be attached to as SQL Server instance. You can find more information about how to recover SQL database lost data and repair a SQL Server database on ApexSQL Solution center. There are solutions for various situations when data needs to be recovered. Reference: Pinal Dave (http://blog.sqlauthority.com)Filed under: PostADay, SQL, SQL Authority, SQL Backup and Restore, SQL Query, SQL Server, SQL Tips and Tricks, T SQL
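
    As a companion to the full-backup-plus-log-chain strategy described above, a minimal point-in-time restore sequence looks roughly like this (the database name, file paths, and STOPAT time are assumptions for illustration):

    ```sql
    -- Restore the last full backup without recovering yet.
    RESTORE DATABASE AdventureWorks
        FROM DISK = N'D:\Backup\AdventureWorks_full.bak'
        WITH NORECOVERY, REPLACE;

    -- Apply the transaction log backups in the order they were taken.
    RESTORE LOG AdventureWorks
        FROM DISK = N'D:\Backup\AdventureWorks_log_1.trn'
        WITH NORECOVERY;

    -- Stop just before the moment the records were deleted, then recover.
    RESTORE LOG AdventureWorks
        FROM DISK = N'D:\Backup\AdventureWorks_log_2.trn'
        WITH STOPAT = N'2014-06-11 10:15:00', RECOVERY;
    ```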

    Read the article

  • SQL SERVER – Introduction to SQL Server 2014 In-Memory OLTP

    - by Pinal Dave
    In SQL Server 2014 Microsoft has introduced a new database engine component called In-Memory OLTP aka project “Hekaton” which is fully integrated into the SQL Server Database Engine. It is optimized for OLTP workloads accessing memory resident data. In-memory OLTP helps us create memory optimized tables which in turn offer significant performance improvement for our typical OLTP workload. The main objective of memory optimized table is to ensure that highly transactional tables could live in memory and remain in memory forever without even losing out a single record. The most significant part is that it still supports majority of our Transact-SQL statement. Transact-SQL stored procedures can be compiled to machine code for further performance improvements on memory-optimized tables. This engine is designed to ensure higher concurrency and minimal blocking. In-Memory OLTP alleviates the issue of locking, using a new type of multi-version optimistic concurrency control. It also substantially reduces waiting for log writes by generating far less log data and needing fewer log writes. Points to remember Memory-optimized tables refer to tables using the new data structures and key words added as part of In-Memory OLTP. Disk-based tables refer to your normal tables which we used to create in SQL Server since its inception. These tables use a fixed size 8 KB pages that need to be read from and written to disk as a unit. Natively compiled stored procedures refer to an object Type which is new and is supported by in-memory OLTP engine which convert it into machine code, which can further improve the data access performance for memory –optimized tables. Natively compiled stored procedures can only reference memory-optimized tables, they can’t be used to reference any disk –based table. Interpreted Transact-SQL stored procedures, which is what SQL Server has always used. Cross-container transactions refer to transactions that reference both memory-optimized tables and disk-based tables. Interop refers to interpreted Transact-SQL that references memory-optimized tables. Using In-Memory OLTP In-Memory OLTP engine has been available as part of SQL Server 2014 since June 2013 CTPs. Installation of In-Memory OLTP is part of the SQL Server setup application. The In-Memory OLTP components can only be installed with a 64-bit edition of SQL Server 2014 hence they are not available with 32-bit editions. Creating Databases Any database that will store memory-optimized tables must have a MEMORY_OPTIMIZED_DATA filegroup. This filegroup is specifically designed to store the checkpoint files needed by SQL Server to recover the memory-optimized tables, and although the syntax for creating the filegroup is almost the same as for creating a regular filestream filegroup, it must also specify the option CONTAINS MEMORY_OPTIMIZED_DATA. 
Here is an example of a CREATE DATABASE statement for a database that can support memory-optimized tables: CREATE DATABASE InMemoryDB ON PRIMARY(NAME = [InMemoryDB_data], FILENAME = 'D:\data\InMemoryDB_data.mdf', size=500MB), FILEGROUP [SampleDB_mod_fg] CONTAINS MEMORY_OPTIMIZED_DATA (NAME = [InMemoryDB_mod_dir], FILENAME = 'S:\data\InMemoryDB_mod_dir'), (NAME = [InMemoryDB_mod_dir2], FILENAME = 'R:\data\InMemoryDB_mod_dir') LOG ON (name = [SampleDB_log], Filename='L:\log\InMemoryDB_log.ldf', size=500MB) COLLATE Latin1_General_100_BIN2; The above example code creates files on three different drives (D:, S:, and R:) for the data files and in-memory storage, so if you would like to run this code kindly change the drive and folder locations as per your convenience. Also notice that a Windows (non-SQL) binary collation was specified; a BIN2 collation is the only collation supported at this point for any indexes on memory-optimized tables. It is also possible to add a MEMORY_OPTIMIZED_DATA filegroup to an existing database; use the command below to achieve the same. ALTER DATABASE AdventureWorks2012 ADD FILEGROUP hekaton_mod CONTAINS MEMORY_OPTIMIZED_DATA; GO ALTER DATABASE AdventureWorks2012 ADD FILE (NAME='hekaton_mod', FILENAME='S:\data\hekaton_mod') TO FILEGROUP hekaton_mod; GO Creating Tables There is no major syntactical difference between creating a disk-based table and a memory-optimized table, but there are a few restrictions and a few new essential extensions. Essentially any memory-optimized table should use the MEMORY_OPTIMIZED = ON clause, as shown in the CREATE TABLE query example. DURABILITY clause (SCHEMA_AND_DATA or SCHEMA_ONLY) A memory-optimized table should always be defined with a DURABILITY value, which can be either SCHEMA_AND_DATA or SCHEMA_ONLY, the former being the default. A memory-optimized table defined with DURABILITY = SCHEMA_ONLY will not persist the data to disk, which means the data durability is compromised, whereas DURABILITY = SCHEMA_AND_DATA ensures that data is also persisted along with the schema. Indexing Memory-Optimized Tables A memory-optimized table created with DURABILITY = SCHEMA_AND_DATA must always have an index, and this can be achieved by declaring a PRIMARY KEY constraint at the time of creating the table. The following example shows a PRIMARY KEY index created as a HASH index, for which a bucket count must also be specified. CREATE TABLE Mem_Table ( [Name] VARCHAR(32) NOT NULL PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 100000), [City] VARCHAR(32) NULL, [State_Province] VARCHAR(32) NULL, [LastModified] DATETIME NOT NULL ) WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA); Now as you can see in the above query example, we have used the clause MEMORY_OPTIMIZED = ON to make sure that it is treated as a memory-optimized table and not just a normal table, and also used the DURABILITY = SCHEMA_AND_DATA clause, which means it will persist data along with the schema, and you can also notice that this table has a PRIMARY KEY declared upfront, which is mandatory for memory-optimized tables. We will talk more about HASH indexes and BUCKET_COUNT in later articles on this topic, which will focus more on row and index storage for memory-optimized tables. So stay tuned for that as well. Now that we have covered the basics of memory-optimized tables and understood the key things to remember while using them, let’s explore some examples to understand the performance gains of memory-optimized tables.
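Natively compiled stored procedures are mentioned above but never shown in this excerpt. The sketch below is an illustration only, not part of the original article: it assumes the Mem_Table definition just shown lives in the dbo schema of the InMemoryDB database created earlier, and the procedure name and language setting are placeholders.

-- Illustrative sketch (not from the article): a natively compiled procedure
-- that inserts a row into the memory-optimized table defined above.
CREATE PROCEDURE dbo.usp_AddMemRow
    @Name VARCHAR(32),
    @City VARCHAR(32)
WITH NATIVE_COMPILATION, SCHEMABINDING, EXECUTE AS OWNER
AS
BEGIN ATOMIC WITH (TRANSACTION ISOLATION LEVEL = SNAPSHOT, LANGUAGE = N'us_english')
    INSERT INTO dbo.Mem_Table ([Name], [City], [State_Province], [LastModified])
    VALUES (@Name, @City, NULL, GETDATE());
END;
GO
-- Usage:
EXEC dbo.usp_AddMemRow @Name = 'Pinal', @City = 'Bangalore';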
I will be using the database which I created earlier in this article, i.e. InMemoryDB, in the demo exercise below. USE InMemoryDB GO -- Creating a disk based table CREATE TABLE dbo.Disktable ( Id INT IDENTITY, Name CHAR(40) ) GO CREATE NONCLUSTERED INDEX IX_ID ON dbo.Disktable (Id) GO -- Creating a memory optimized table with similar structure and DURABILITY = SCHEMA_AND_DATA CREATE TABLE dbo.Memorytable_durable ( Id INT NOT NULL PRIMARY KEY NONCLUSTERED Hash WITH (bucket_count =1000000), Name CHAR(40) ) WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA) GO -- Creating another memory optimized table with similar structure but DURABILITY = SCHEMA_Only CREATE TABLE dbo.Memorytable_nondurable ( Id INT NOT NULL PRIMARY KEY NONCLUSTERED Hash WITH (bucket_count =1000000), Name CHAR(40) ) WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_only) GO -- Now insert 100000 records in dbo.Disktable and observe the Time Taken DECLARE @i_t bigint SET @i_t =1 WHILE @i_t<= 100000 BEGIN INSERT INTO dbo.Disktable(Name) VALUES('sachin' + CONVERT(VARCHAR,@i_t)) SET @i_t+=1 END GO -- Do the same inserts for Memory table dbo.Memorytable_durable and observe the Time Taken DECLARE @i_t bigint SET @i_t =1 WHILE @i_t<= 100000 BEGIN INSERT INTO dbo.Memorytable_durable VALUES(@i_t, 'sachin' + CONVERT(VARCHAR,@i_t)) SET @i_t+=1 END GO -- Now finally do the same inserts for Memory table dbo.Memorytable_nondurable and observe the Time Taken DECLARE @i_t bigint SET @i_t =1 WHILE @i_t<= 100000 BEGIN INSERT INTO dbo.Memorytable_nondurable VALUES(@i_t, 'sachin' + CONVERT(VARCHAR,@i_t)) SET @i_t+=1 END The above 3 inserts took 1.20 minutes, 54 secs, and 2 secs respectively to insert 100000 records on my machine with 8 GB RAM. This proves the point that memory-optimized tables can definitely help businesses achieve better performance for their highly transactional tables, and memory-optimized tables with DURABILITY = SCHEMA_ONLY are even faster, as they do not bother persisting data to disk at all. Koenig Solutions is one of the few organizations which offer IT training on SQL Server 2014 and all its updates. Now, I leave the decision on using memory-optimized tables to you. I hope you like this article and that it helped you understand the fundamentals of In-Memory OLTP. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Performance, SQL Query, SQL Server, SQL Tips and Tricks, T SQL Tagged: Koenig
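The demo asks you to observe the time each loop takes but never shows a measurement. One simple way to capture it, added here only as an illustration and not part of the original demo, is to bracket a loop with SYSDATETIME(), for example for the durable memory-optimized table:

-- Illustrative timing wrapper (not part of the original demo); run against an
-- empty table, since re-running it would violate the primary key.
DECLARE @start DATETIME2 = SYSDATETIME();
DECLARE @i_t BIGINT = 1;
WHILE @i_t <= 100000
BEGIN
    INSERT INTO dbo.Memorytable_durable VALUES (@i_t, 'sachin' + CONVERT(VARCHAR, @i_t));
    SET @i_t += 1;
END
SELECT DATEDIFF(MILLISECOND, @start, SYSDATETIME()) AS elapsed_ms;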

    Read the article

  • Brighton Rocks: UA Europe 2011

    - by ultan o'broin
    User Assistance Europe 2011 was held in Brighton, UK. Having seen Quadrophenia a dozen times, I just had to go along (OK, I wanted to talk about messages in enterprise applications). Sadly, it rained a lot, though that was still eminently more tolerable than being stuck home in Dublin during Bloomsday. So, here are my somewhat selective highlights and observations from the conference, massively skewed towards my own interests, as usual. Enjoyed Leah Guren's (Cow TC) great start ‘keynote’ on the Cultural Dimensions of Software Help Usage. Starting out by revisiting Hofstede's and Hall's work on culture (how many times I have done this for Multilingual magazine?) and then Neilsen’s findings on age as an indicator of performance, Leah showed how it is the expertise of the user that user assistance (UA) needs to be designed for (especially for high-end users), with some considerations made for age, while the gender and culture of users are not major factors. Help also needs to be contextual and concise, embedded close to the action. That users are saying things like “If I want help on Office, I go to Google ” isn't all that profound at this stage, but it is always worth reiterating how search can be optimized to return better results for users. Interestingly, regardless of user education level, the issue of information quality--hinging on the lynchpin of terminology reflecting that of the user--is critical. Major takeaway for me there. Matthew Ellison’s sessions on embedded help and demos were also impressive. Embedded help that is concise and contextual is definitely a powerful UX enabler, and I’m pleased to say that in Oracle Fusion Applications we have embraced the concept fully. Matthew also mentioned in his session about successful software demos that the principle of modality with demos is a must. Look no further than Oracle User Productivity Kit demos See It!, Try It!, Know It, and Do It! modes, for example. I also found some key takeaways in the presentation by Marie-Louise Flacke on notes and warnings. Here, legal considerations seemed to take precedence over providing any real information to users. I was delighted when Marie-Louise called out the Oracle JDeveloper documentation as an exemplar of how to use notes and instructions instead of trying to scare the bejaysus out of people and not providing them with any real information they’d find useful instead. My own session on designing messages for enterprise applications was well attended. Knowing your user profiles (remember user expertise is the king maker for UA so write for each audience involved), how users really work, the required application business and UI rules, what your application technology supports, and how messages integrate with the enterprise help desk and support policies and you will go much further than relying solely on the guideline of "writing messages in plain language". And, remember the value in warnings and confirmation messages too, and how you can use them smartly. I hope y’all got something from my presentation and from my answers to questions afterwards. Ellis Pratt stole the show with his presentation on applying game theory to software UA, using plenty of colorful, relevant examples (check out the Atlassian and DropBox approaches, for example), and striking just the right balance between theory and practice. 
Completely agree that the approach to take here is not to make UA itself a game, but to invoke UA as part of a bigger game dynamic (time-to-task completion, personal and communal goals, personal achievement and status, and so on). Sure there are gotchas and limitations to gamification, and we need to do more research. However, we'll hear a lot more about this subject in coming years, particularly in the enterprise space. I hope. I also heard good things about the different sessions about DITA usage (including one by Sonja Fuga that clearly opens the door for major innovation in the community content space using WordPress), the progressive disclosure of information (Cerys Willoughby), an overview of controlled language (or "information quality", as I like to position it) solutions and rationale by Dave Gash, and others. I also spent time chatting with Mike Hamilton of MadCap Software, who showed me a cool demo of their Flare product, and the Lingo translation solution. I liked the idea of their licensing model for workers-on-the-go; that’s smart UX-awareness in itself. Also chatted with Julian Murfitt of Mekon about uptake of DITA in the enterprise space. In all, it's worth attending UA Europe. I was surprised, however, not to see conference topics about mobile UA, community conversation and content, and search in its own right. These are unstoppable forces now, and the latter is pretty central to providing assistance now to all but the most irredentist of hard-copy fetishists or advanced technical or functional users working away on the back end of applications and systems. Only saw one iPad too (says the guy who carries three laptops). Tweeting was pretty much nonexistent during the event, so no community energy there. Perhaps all this can be addressed next year. I would love to see the next UA Europe event come to Dublin (despite Bloomsday, it's not a bad place, really) now that hotels are so cheap and all. So, what is my overall impression of the state of user assistance in Europe? Clearly, there are still many people in the industry who feel there is something broken with the traditional forms of user assistance (particularly printed doc) and something needs to be done about it. I would suggest they move on and try and embrace change, instead. Many others see new possibilities, offered by UX and technology, as well as the reality of online user behavior in an increasingly connected world, and that is encouraging. Such thought leaders need to be listened to. As Ellis Pratt says in his great book, Trends in Technical Communication - Rethinking Help: “To stay relevant means taking a new perspective on the role (of technical writer), and delivering “products” over and above the traditional manual and online Help file... there are a number of new trends in this field - some complementary, some conflicting. Whatever trends emerge as the norm, it’s likely the status quo will change.” It already has, IMO. I hear similar debates in the professional translation world about the onset of translation crowd sourcing (the Facebook model) and machine translation (trust me, that battle is over). Neither of these initiatives has put anyone out of a job and probably won't, though the nature of the work might change. If anything, such innovations have increased the overall need for professional translators as user expectations rise, new audiences emerge, and organizations need to collate and curate user-generated content, combining it with their own.
Perhaps user assistance professionals can learn from other professions and grow accordingly.

    Read the article

  • Using SQL Execution Plans to discover the Swedish alphabet

    - by Rob Farley
    SQL Server is quite remarkable in a bunch of ways. In this post, I’m using the way that the Query Optimizer handles LIKE to keep it SARGable, the Execution Plans that result, Collations, and PowerShell to come up with the Swedish alphabet. SARGability is the ability to seek for items in an index according to a particular set of criteria. If you don’t have SARGability in play, you need to scan the whole index (or table if you don’t have an index). For example, I can find myself in the phonebook easily, because it’s sorted by LastName and I can find Farley in there by moving to the Fs, and so on. I can’t find everyone in my suburb easily, because the phonebook isn’t sorted that way. I can’t even find people who have six letters in their last name, because although the book is sorted by LastName, it’s not sorted by LEN(LastName). This is all stuff I’ve looked at before, including in the talk I gave at SQLBits in October 2010. If I try to find everyone whose names start with F, I can do that using a query a bit like: SELECT LastName FROM dbo.PhoneBook WHERE LEFT(LastName,1) = 'F'; Unfortunately, the Query Optimizer doesn’t realise that all the entries that satisfy LEFT(LastName,1) = 'F' will be together, and it has to scan the whole table to find them. But if I write: SELECT LastName FROM dbo.PhoneBook WHERE LastName LIKE 'F%'; then SQL is smart enough to understand this, and performs an Index Seek instead. To see why, I look further into the plan, in particular, the properties of the Index Seek operator. The ToolTip shows me what I’m after: You’ll see that it does a Seek to find any entries that are at least F, but not yet G. There’s an extra Predicate in there (a Residual Predicate if you like), which checks that each LastName is really LIKE F% – I suppose it doesn’t consider that the Seek Predicate is quite enough – but most of the benefit is seen by its working out the Seek Predicate, filtering to just the “at least F but not yet G” section of the data. This got me curious though, particularly about where the G comes from, and whether I could leverage it to create the Swedish alphabet. I know that in the Swedish language, there are three extra letters that appear at the end of the alphabet. One of them is ä that appears in the word Västerås. It turns out that Västerås is quite hard to find in an index when you’re looking it up in a Swedish map. I talked about this briefly in my five-minute talk on Collation from SQLPASS (the one which was slightly less than serious). So by looking at the plan, I can work out what the next letter is in the alphabet of the collation used by the column. In other words, if my alphabet were Swedish, I’d be able to tell what the next letter after F is – just in case it’s not G. It turns out it is… Yes, the Swedish letter after F is G. But I worked this out by using a copy of my PhoneBook table that used the Finnish_Swedish_CI_AI collation. I couldn’t find how the Query Optimizer calculates the G, and my friend Paul White (@SQL_Kiwi) tells me that it’s frustratingly internal to the QO. He’s particularly smart, even if he is from New Zealand. To investigate further, I decided to do some PowerShell, leveraging the Get-SqlPlan function that I blogged about recently (make sure you also have the SqlServerCmdletSnapin100 snap-in added). I started by indicating that I was going to use Finnish_Swedish_CI_AI as my collation of choice, and that I’d start with whichever letter came straight after the number 9.
I figure that this is a cheat’s way of guessing the first letter of the alphabet (but it doesn’t actually work in Unicode – luckily I’m using varchar not nvarchar. Actually, there are a few aspects of this code that only work using ASCII, so apologies if you were wanting to apply it to Greek, Japanese, etc). I also initialised my $alphabet variable. $collation = 'Finnish_Swedish_CI_AI'; $firstletter = '9'; $alphabet = ''; Now I created the table for my test. A single field would do, and putting a Clustered Index on it would suffice for the Seeks. Invoke-Sqlcmd -server . -data tempdb -query "create table dbo.collation_test (col varchar(10) collate $collation primary key);" Now I get into the looping. $c = $firstletter; $stillgoing = $true; while ($stillgoing) { I construct the query I want, seeking for entries which start with whatever $c has reached, and get the plan for it: $query = "select col from dbo.collation_test where col like '$($c)%';"; [xml] $pl = get-sqlplan $query "." "tempdb"; At this point, my $pl variable is a scary piece of XML, representing the execution plan. A bit of hunting through it showed me that the EndRange element contained what I was after, and that if it contained NULL, then I was done. $stillgoing = ($pl.ShowPlanXML.BatchSequence.Batch.Statements.StmtSimple.QueryPlan.RelOp.IndexScan.SeekPredicates.SeekPredicateNew.SeekKeys.EndRange -ne $null); Now I could grab the value out of it (which came with apostrophes that needed stripping), and append that to my $alphabet variable.   if ($stillgoing)   {  $c=$pl.ShowPlanXML.BatchSequence.Batch.Statements.StmtSimple.QueryPlan.RelOp.IndexScan.SeekPredicates.SeekPredicateNew.SeekKeys.EndRange.RangeExpressions.ScalarOperator.ScalarString.Replace("'","");     $alphabet += $c;   } Finally, finishing the loop, dropping the table, and showing my alphabet! } Invoke-Sqlcmd -server . -data tempdb -query "drop table dbo.collation_test;"; $alphabet; When I run all this, I see that the Swedish alphabet is ABCDEFGHIJKLMNOPQRSTUVXYZÅÄÖ, which matches what I see at Wikipedia. Interesting to see that the letters on the end are still there, even with Case Insensitivity. Turns out they’re not just “letters with accents”, they’re letters in their own right. I’m sure you gave up reading long ago, and really aren’t that fazed about the idea of doing this using PowerShell. I chose PowerShell because I’d already come up with an easy way of grabbing the estimated plan for a query, and PowerShell does allow for easy navigation of XML. I find the most interesting aspect of this to be the fact that the Query Optimizer uses the next letter of the alphabet to maintain the SARGability of LIKE. I’m hoping they do something similar for a whole bunch of operations. Oh, and now you know how to find stuff in the IKEA catalogue. Footnote: If you are interested in whether this works in other languages, you might want to consider the following screenshot, which shows that in principle, it should work with Japanese. It might be a bit harder to run this in PowerShell though, as I’m not sure how it translates. In Hiragana, the Japanese alphabet starts あ, い, う, え, お, ...
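For readers who prefer to stay in T-SQL rather than PowerShell, the same estimated plan XML that Get-SqlPlan returns can be fetched with SHOWPLAN_XML and inspected for the EndRange element by hand. This batch is an added illustration, not part of Rob's post, and assumes the dbo.collation_test table from the script above already exists:

-- Added illustration: fetch the estimated plan XML for the same seek in T-SQL,
-- then look for the EndRange element it contains.
SET SHOWPLAN_XML ON;
GO
SELECT col FROM dbo.collation_test WHERE col LIKE 'F%';
GO
SET SHOWPLAN_XML OFF;
GO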

    Read the article

  • MySQL Database Query Problem

    - by moustafa
    I need your help!!!. I need to query a table in my database that has record of goods sold. I want the query to detect a particular product and also calculate the quantity sold. The product are 300 now, but it would increase in the future. Below is a sample of my DB Table #---------------------------- # Table structure for litorder #---------------------------- CREATE TABLE `litorder` ( `id` int(10) NOT NULL auto_increment, `name` varchar(50) NOT NULL default '', `address` varchar(50) NOT NULL default '', `xdate` date NOT NULL default '0000-00-00', `ref` varchar(20) NOT NULL default '', `code1` varchar(50) NOT NULL default '', `code2` varchar(50) NOT NULL default '', `code3` varchar(50) NOT NULL default '', `code4` varchar(50) NOT NULL default '', `code5` varchar(50) NOT NULL default '', `code6` varchar(50) NOT NULL default '', `code7` varchar(50) NOT NULL default '', `code8` varchar(50) NOT NULL default '', `code9` varchar(50) NOT NULL default '', `code10` varchar(50) NOT NULL default '', `code11` varchar(50) character set latin1 collate latin1_bin NOT NULL default '', `code12` varchar(50) NOT NULL default '', `code13` varchar(50) NOT NULL default '', `code14` varchar(50) NOT NULL default '', `code15` varchar(50) NOT NULL default '', `product1` varchar(100) NOT NULL default '0', `product2` varchar(100) NOT NULL default '0', `product3` varchar(100) NOT NULL default '0', `product4` varchar(100) NOT NULL default '0', `product5` varchar(100) NOT NULL default '0', `product6` varchar(100) NOT NULL default '0', `product7` varchar(100) NOT NULL default '0', `product8` varchar(100) NOT NULL default '0', `product9` varchar(100) NOT NULL default '0', `product10` varchar(100) NOT NULL default '0', `product11` varchar(100) NOT NULL default '0', `product12` varchar(100) NOT NULL default '0', `product13` varchar(100) NOT NULL default '0', `product14` varchar(100) NOT NULL default '0', `product15` varchar(100) NOT NULL default '0', `price1` int(10) NOT NULL default '0', `price2` int(10) NOT NULL default '0', `price3` int(10) NOT NULL default '0', `price4` int(10) NOT NULL default '0', `price5` int(10) NOT NULL default '0', `price6` int(10) NOT NULL default '0', `price7` int(10) NOT NULL default '0', `price8` int(10) NOT NULL default '0', `price9` int(10) NOT NULL default '0', `price10` int(10) NOT NULL default '0', `price11` int(10) NOT NULL default '0', `price12` int(10) NOT NULL default '0', `price13` int(10) NOT NULL default '0', `price14` int(10) NOT NULL default '0', `price15` int(10) NOT NULL default '0', `quantity1` int(10) NOT NULL default '0', `quantity2` int(10) NOT NULL default '0', `quantity3` int(10) NOT NULL default '0', `quantity4` int(10) NOT NULL default '0', `quantity5` int(10) NOT NULL default '0', `quantity6` int(10) NOT NULL default '0', `quantity7` int(10) NOT NULL default '0', `quantity8` int(10) NOT NULL default '0', `quantity9` int(10) NOT NULL default '0', `quantity10` int(10) NOT NULL default '0', `quantity11` int(10) NOT NULL default '0', `quantity12` int(10) NOT NULL default '0', `quantity13` int(10) NOT NULL default '0', `quantity14` int(10) NOT NULL default '0', `quantity15` int(10) NOT NULL default '0', `amount1` int(10) NOT NULL default '0', `amount2` int(10) NOT NULL default '0', `amount3` int(10) NOT NULL default '0', `amount4` int(10) NOT NULL default '0', `amount5` int(10) NOT NULL default '0', `amount6` int(10) NOT NULL default '0', `amount7` int(10) NOT NULL default '0', `amount8` int(10) NOT NULL default '0', `amount9` int(10) NOT NULL default '0', `amount10` 
int(10) NOT NULL default '0', `amount11` int(10) NOT NULL default '0', `amount12` int(10) NOT NULL default '0', `amount13` int(10) NOT NULL default '0', `amount14` int(10) NOT NULL default '0', `amount15` int(10) NOT NULL default '0', `totalNaira` double(20,0) NOT NULL default '0', `totalDollar` int(20) NOT NULL default '0', PRIMARY KEY (`id`) ) ENGINE=InnoDB DEFAULT CHARSET=latin1 COMMENT='InnoDB free: 4096 kB; InnoDB free: 4096 kB; InnoDB free: 409'; #---------------------------- # Records for table litorder #---------------------------- insert into litorder values (27, 'Sanyaolu Fisayo', '14 Adegboyega Street Palmgrove Lagos', '2010-05-31', '', 'DL 001', 'DL 002', 'DL 003', '', '', '', '', '', '', '', '', '', '', '', '', 'AILMENT & PREVENTION DVD- ENGLISH', 'AILMENT & PREVENTION DVD- HAUSA', 'BEAUTY CD', '', '', '', '', '', '', '', '', '', '', '', '', 800, 800, 3000, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 16, 16, 20, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 12800, 12800, 60000, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, '85600', 563), (28, 'Irenonse Esther', 'Lagos,Nigeria', '2010-06-01', '', 'DL 005', 'DL 008', 'FC 004', '', '', '', '', '', '', '', '', '', '', '', '', 'GET HEALTHY DVD', 'YOUR FUTURE DVD', 'FOREVER FACE CAP (YELLOW)', '', '', '', '', '', '', '', '', '', '', '', '', 1000, 900, 2000, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 2, 2, 3, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 2000, 1800, 6000, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, '9800', 64), (29, 'Kalu Lekway', 'Lagos, Nigeria', '2010-06-01', '', 'DL 001', 'DL 003', '', '', '', '', '', '', '', '', '', '', '', '', '', 'AILMENT & PREVENTION DVD- ENGLISH', 'BEAUTY CD', '', '', '', '', '', '', '', '', '', '', '', '', '', 800, 3000, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 3, 6, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 2400, 18000, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, '20400', 133), (30, 'Dele', 'Ilupeju', '2010-06-02', '', 'DL 001', 'DL 003', '', '', '', '', '', '', '', '', '', '', '', '', '', 'AILMENT & PREVENTION DVD- ENGLISH', 'BEAUTY CD', '', '', '', '', '', '', '', '', '', '', '', '', '', 800, 3000, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 10, 10, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 8000, 30000, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, '38000', 250);
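One possible shape for the query being asked about, sketched here as a suggestion rather than a tested answer: unpivot the fifteen productN/quantityN column pairs with UNION ALL and total the quantities per product. Only the first three pairs are written out; the remaining pairs follow the same pattern.

-- Suggestion only: total quantity sold per product across the productN/quantityN
-- pairs. Extend the UNION ALL list up to product15/quantity15.
SELECT product, SUM(quantity) AS total_quantity_sold
FROM (
    SELECT product1 AS product, quantity1 AS quantity FROM litorder
    UNION ALL
    SELECT product2, quantity2 FROM litorder
    UNION ALL
    SELECT product3, quantity3 FROM litorder
) AS sales
WHERE product <> '' AND product <> '0'
GROUP BY product;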

    Read the article

  • Stored proc running 30% slower through Java versus running directly on database

    - by James B
    Hi All, I'm using Java 1.6, JTDS 1.2.2 (also just tried 1.2.4 to no avail) and SQL Server 2005 to create a CallableStatement to run a stored procedure (with no parameters). I am seeing the Java wrapper running the same stored procedure 30% slower than using SQL Server Management Studio. I've run the MS SQL profiler and there is little difference in I/O between the two processes, so I don't think it's related to query plan caching. The stored proc takes no arguments and returns no data. It uses a server-side cursor to calculate the values that are needed to populate a table. I can't see how the calling a stored proc from Java should add a 30% overhead, surely it's just a pipe to the database that SQL is sent down and then the database executes it....Could the database be giving the Java app a different query plan?? I've posted to both the MSDN forums, and the sourceforge JTDS forums (topic: "stored proc slower in JTDS than direct in DB") I was wondering if anyone has any suggestions as to why this might be happening? Thanks in advance, -James (N.B. Fear not, I will collate any answers I get in other forums together here once I find the solution) Java code snippet: sLogger.info("Preparing call..."); stmt = mCon.prepareCall("SP_WB200_POPULATE_TABLE_limited_rows"); sLogger.info("Call prepared. Executing procedure..."); stmt.executeQuery(); sLogger.info("Procedure complete."); I have run sql profiler, and found the following: Java app : CPU: 466,514 Reads: 142,478,387 Writes: 284,078 Duration: 983,796 SSMS : CPU: 466,973 Reads: 142,440,401 Writes: 280,244 Duration: 769,851 (Both with DBCC DROPCLEANBUFFERS run prior to profiling, and both produce the correct number of rows) So my conclusion is that they both execute the same reads and writes, it's just that the way they are doing it is different, what do you guys think? It turns out that the query plans are significantly different for the different clients (the Java client is updating an index during an insert that isn't in the faster SQL client, also, the way it is executing joins is different (nested loops Vs. gather streams, nested loops Vs index scans, argh!)). Quite why this is, I don't know yet (I'll re-post when I do get to the bottom of it) Epilogue I couldn't get this to work properly. I tried homogenising the connection properties (arithabort, ansi_nulls etc) between the Java and Mgmt studio clients. It ended up the two different clients had very similar query/execution plans (but still with different actual plan_ids). I posted a summary of what I found to the MSDN SQL Server forums as I found differing performance not just between a JDBC client and management studio, but also between Microsoft's own command line client, SQLCMD, I also checked some more radical things like network traffic too, or wrapping the stored proc inside another stored proc, just for grins. I have a feeling the problem lies somewhere in the way the cursor was being executed, and it was somehow giving rise to the Java process being suspended, but why a different client should give rise to this different locking/waiting behaviour when nothing else is running and the same execution plan is in operation is a little beyond my skills (I'm no DBA!). As a result, I have decided that 4 days is enough of anyone's time to waste on something like this, so I will grudgingly code around it (if I'm honest, the stored procedure needed re-coding to be more incremental instead of re-calculating all data each week anyway), and chalk this one down to experience. 
I'll leave the question open. Big thanks to everyone who put their hat in the ring; it was all useful, and if anyone comes up with anything further, I'd love to hear some more options... And if anyone finds this post as a result of seeing this behaviour in their own environments, then hopefully there are some pointers here that you can try yourself, and hopefully see further than we did. I'm ready for my weekend now! -James
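Given the epilogue's point about differing connection properties (arithabort, ansi_nulls and friends), one quick server-side check, offered here as a suggestion rather than something from the original post, is to compare the SET options of the JTDS session and the Management Studio session, since mismatches there are a classic cause of different cached plans for the same statement:

-- Suggested diagnostic (not from the original post): compare session SET options
-- for the JDBC and SSMS connections; a difference such as ARITHABORT often leads
-- to a different query plan for the same stored procedure.
SELECT session_id, program_name, arithabort, ansi_nulls,
       ansi_padding, quoted_identifier, transaction_isolation_level
FROM sys.dm_exec_sessions
WHERE is_user_process = 1;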

    Read the article

  • Trying to import SQL file in a xampp server returns error

    - by Victor_J_Martin
    I have done a ER diagram in Mysql Workbench, and I am trying load in my server with phpMyAdmin, but it returns me the next error: Error SQL Query: -- ----------------------------------------------------- -- Table `BDA`.`UG` -- ----------------------------------------------------- CREATE TABLE IF NOT EXISTS `BDA`.`UG` ( `numero_ug` INT NOT NULL, `nombre` VARCHAR(45) NOT NULL, `segunda_firma_autorizada` VARCHAR(45) NOT NULL, `fecha_creacion` DATE NOT NULL, `nombre_depto` VARCHAR(140) NOT NULL, `dni` INT NOT NULL, `anho_contable` INT NOT NULL, PRIMARY KEY (`numero_ug`), INDEX `nombre_depto_idx` (`nombre_depto` ASC), INDEX `dni_idx` (`dni` ASC), INDEX `anho_contable_idx` (`anho_contable` ASC), CONSTRAINT `nombre_depto` FOREIGN KEY (`nombre_depto`) REFERENCES `BDA`.`Departamento` (`nombre_depto`) ON DELETE NO ACTION ON UPDATE NO ACTION, CONSTRAINT `dni` FOREIGN KEY (`dni`) REFERENCES `BDA`.`Trabajador` (`dni`) ON DELETE NO ACTION ON UPDATE NO ACTION, CONSTRAINT `anho_contable` FOREIGN KEY (`anho_contable`) REFERENCES `BDA`.`Capitulo_Contable` (`anho_contable`) [...] MySQL said: Documentation #1022 - Can't write; duplicate key in table 'ug' I export the result of the diagram from Mysql Workbench to a SQL file, and this file is what I'm trying to upload. This is the file. I can not find the duplicate key. SET @OLD_UNIQUE_CHECKS=@@UNIQUE_CHECKS, UNIQUE_CHECKS=0; SET @OLD_FOREIGN_KEY_CHECKS=@@FOREIGN_KEY_CHECKS, FOREIGN_KEY_CHECKS=0; SET @OLD_SQL_MODE=@@SQL_MODE, SQL_MODE='TRADITIONAL,ALLOW_INVALID_DATES'; CREATE SCHEMA IF NOT EXISTS `BDA` DEFAULT CHARACTER SET utf8 COLLATE utf8_general_ci ; USE `BDA` ; -- ----------------------------------------------------- -- Table `BDA`.`Departamento` -- ----------------------------------------------------- CREATE TABLE IF NOT EXISTS `BDA`.`Departamento` ( `nombre_depto` VARCHAR(140) NOT NULL, `area_depto` VARCHAR(140) NOT NULL, PRIMARY KEY (`nombre_depto`)) ENGINE = InnoDB; -- ----------------------------------------------------- -- Table `BDA`.`Trabajador` -- ----------------------------------------------------- CREATE TABLE IF NOT EXISTS `BDA`.`Trabajador` ( `dni` INT NOT NULL, `direccion` VARCHAR(140) NOT NULL, `nombre` VARCHAR(45) NOT NULL, `apellidos` VARCHAR(140) NOT NULL, `fecha_nacimiento` DATE NOT NULL, `fecha_contrato` DATE NOT NULL, `titulacion` VARCHAR(140) NULL, `nombre_depto` VARCHAR(45) NOT NULL, PRIMARY KEY (`dni`), INDEX `nombre_depto_idx` (`nombre_depto` ASC), CONSTRAINT `nombre_depto` FOREIGN KEY (`nombre_depto`) REFERENCES `BDA`.`Departamento` (`nombre_depto`) ON DELETE NO ACTION ON UPDATE NO ACTION) ENGINE = InnoDB; -- ----------------------------------------------------- -- Table `BDA`.`Capitulo_Contable` -- ----------------------------------------------------- CREATE TABLE IF NOT EXISTS `BDA`.`Capitulo_Contable` ( `anho_contable` INT NOT NULL, `numero_ug` INT NOT NULL, `debe` DOUBLE NOT NULL, `haber` DOUBLE NOT NULL, PRIMARY KEY (`anho_contable`), INDEX `numero_ug_idx` (`numero_ug` ASC), CONSTRAINT `numero_ug` FOREIGN KEY (`numero_ug`) REFERENCES `BDA`.`UG` (`numero_ug`) ON DELETE NO ACTION ON UPDATE NO ACTION) ENGINE = InnoDB; -- ----------------------------------------------------- -- Table `BDA`.`UG` -- ----------------------------------------------------- CREATE TABLE IF NOT EXISTS `BDA`.`UG` ( `numero_ug` INT NOT NULL, `nombre` VARCHAR(45) NOT NULL, `segunda_firma_autorizada` VARCHAR(45) NOT NULL, `fecha_creacion` DATE NOT NULL, `nombre_depto` VARCHAR(140) NOT NULL, `dni` INT NOT NULL, `anho_contable` INT NOT NULL, PRIMARY KEY 
(`numero_ug`), INDEX `nombre_depto_idx` (`nombre_depto` ASC), INDEX `dni_idx` (`dni` ASC), INDEX `anho_contable_idx` (`anho_contable` ASC), CONSTRAINT `nombre_depto` FOREIGN KEY (`nombre_depto`) REFERENCES `BDA`.`Departamento` (`nombre_depto`) ON DELETE NO ACTION ON UPDATE NO ACTION, CONSTRAINT `dni` FOREIGN KEY (`dni`) REFERENCES `BDA`.`Trabajador` (`dni`) ON DELETE NO ACTION ON UPDATE NO ACTION, CONSTRAINT `anho_contable` FOREIGN KEY (`anho_contable`) REFERENCES `BDA`.`Capitulo_Contable` (`anho_contable`) ON DELETE NO ACTION ON UPDATE NO ACTION) ENGINE = InnoDB; -- ----------------------------------------------------- -- Table `BDA`.`Cliente` -- ----------------------------------------------------- CREATE TABLE IF NOT EXISTS `BDA`.`Cliente` ( `cif_cliente` INT NOT NULL, `nombre_cliente` VARCHAR(140) NOT NULL, PRIMARY KEY (`cif_cliente`)) ENGINE = InnoDB; -- ----------------------------------------------------- -- Table `BDA`.`Ingreso` -- ----------------------------------------------------- CREATE TABLE IF NOT EXISTS `BDA`.`Ingreso` ( `id` INT NOT NULL, `concepto` VARCHAR(45) NOT NULL, `importe` DOUBLE NOT NULL, `fecha` DATE NOT NULL, `cif_cliente` INT NOT NULL, `numero_ug` INT NOT NULL, PRIMARY KEY (`id`), INDEX `cif_cliente_idx` (`cif_cliente` ASC), INDEX `numero_ug_idx` (`numero_ug` ASC), CONSTRAINT `cif_cliente` FOREIGN KEY (`cif_cliente`) REFERENCES `BDA`.`Cliente` (`cif_cliente`) ON DELETE NO ACTION ON UPDATE NO ACTION, CONSTRAINT `numero_ug` FOREIGN KEY (`numero_ug`) REFERENCES `BDA`.`UG` (`numero_ug`) ON DELETE NO ACTION ON UPDATE NO ACTION) ENGINE = InnoDB; -- ----------------------------------------------------- -- Table `BDA`.`Proveedor` -- ----------------------------------------------------- CREATE TABLE IF NOT EXISTS `BDA`.`Proveedor` ( `cif_proveedor` INT NOT NULL, `nombre_proveedor` VARCHAR(140) NOT NULL, PRIMARY KEY (`cif_proveedor`)) ENGINE = InnoDB; -- ----------------------------------------------------- -- Table `BDA`.`Gasto` -- ----------------------------------------------------- CREATE TABLE IF NOT EXISTS `BDA`.`Gasto` ( `id` INT NOT NULL, `concepto` VARCHAR(45) NOT NULL, `importe` DOUBLE NOT NULL, `fecha` DATE NOT NULL, `factura` INT NOT NULL, `cif_proveedor` INT NOT NULL, `numero_ug` INT NOT NULL, PRIMARY KEY (`id`), INDEX `cif_proveedor_idx` (`cif_proveedor` ASC), INDEX `numero_ug_idx` (`numero_ug` ASC), CONSTRAINT `cif_proveedor` FOREIGN KEY (`cif_proveedor`) REFERENCES `BDA`.`Proveedor` (`cif_proveedor`) ON DELETE NO ACTION ON UPDATE NO ACTION, CONSTRAINT `numero_ug` FOREIGN KEY (`numero_ug`) REFERENCES `BDA`.`UG` (`numero_ug`) ON DELETE NO ACTION ON UPDATE NO ACTION) ENGINE = InnoDB; SET SQL_MODE=@OLD_SQL_MODE; SET FOREIGN_KEY_CHECKS=@OLD_FOREIGN_KEY_CHECKS; SET UNIQUE_CHECKS=@OLD_UNIQUE_CHECKS; Thanks for your advices.
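The "duplicate key in table" message here is most likely about constraint names rather than data: the script reuses foreign-key names such as `nombre_depto`, `dni` and `numero_ug` across several tables, and InnoDB requires constraint names to be unique within a database. A sketch of the `UG` table with uniquely named constraints, offered as a suggestion rather than a verified fix, would look like:

-- Suggested fix (unverified): give each foreign key a name that is unique in the
-- schema, e.g. prefixed with the table name, instead of reusing the same names
-- in several tables.
CREATE TABLE IF NOT EXISTS `BDA`.`UG` (
  `numero_ug` INT NOT NULL,
  `nombre` VARCHAR(45) NOT NULL,
  `segunda_firma_autorizada` VARCHAR(45) NOT NULL,
  `fecha_creacion` DATE NOT NULL,
  `nombre_depto` VARCHAR(140) NOT NULL,
  `dni` INT NOT NULL,
  `anho_contable` INT NOT NULL,
  PRIMARY KEY (`numero_ug`),
  INDEX `nombre_depto_idx` (`nombre_depto` ASC),
  INDEX `dni_idx` (`dni` ASC),
  INDEX `anho_contable_idx` (`anho_contable` ASC),
  CONSTRAINT `fk_ug_nombre_depto` FOREIGN KEY (`nombre_depto`)
    REFERENCES `BDA`.`Departamento` (`nombre_depto`)
    ON DELETE NO ACTION ON UPDATE NO ACTION,
  CONSTRAINT `fk_ug_dni` FOREIGN KEY (`dni`)
    REFERENCES `BDA`.`Trabajador` (`dni`)
    ON DELETE NO ACTION ON UPDATE NO ACTION,
  CONSTRAINT `fk_ug_anho_contable` FOREIGN KEY (`anho_contable`)
    REFERENCES `BDA`.`Capitulo_Contable` (`anho_contable`)
    ON DELETE NO ACTION ON UPDATE NO ACTION
) ENGINE = InnoDB;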

    Read the article

< Previous Page | 1 2 3 4 5 6  | Next Page >