Search Results

Search found 1805 results on 73 pages for 'varchar'.

  • Post method in Servlet is not being called again after being executed once

    - by SaurabhCsIITKgp
    I am implementing a database-backed web application using servlets. When I submit a parameter through a form on the JSP page, the form posts to a servlet, which then adds the value to the database (the servlet creates a new table if the table doesn't already exist). Creating the table and adding the value work fine when the table doesn't already exist. Once the table has been created, however, and the parameter is entered in the form again, the submit button no longer reaches the servlet, nor is the value added to the database. Kindly advise me as to where I am going wrong. Following are the snippets of my code.

    From the JSP page (/showmanager is the url-pattern of the servlet):

        <form action="showmanager" method="post">
          <h3>Enter name of the show: </h3>
          <input type="text" name="showname" value="">
          <input type="hidden" name="task" value="addshow" />
          <input type="button" value="Add Show">
        </form>

    From the servlet (POST method):

        protected void doPost(HttpServletRequest request, HttpServletResponse response)
                throws ServletException, IOException {
            if (request.getParameter("task").equals("addshow")) {
                this.addShow(request.getParameter("showname"));
                response.sendRedirect("showmanager.jsp");
            }
        }

    Method to add to the database:

        protected boolean addShow(String showname) {
            try {
                statement = con.prepareStatement("INSERT INTO showdb10(name) VALUES ('" + showname + "')");
                if (statement.executeUpdate() > 0) {
                    return true;
                }
            } catch (Exception e) {
                try {
                    statement = con.prepareStatement("create table showdb10 (id int NOT NULL AUTO_INCREMENT, name varchar(255) NOT NULL, date varchar(20), time varchar(20), b_total int, o_total int, b_avbl int, o_avbl int, b_price double(10,2), o_price double(10,2), seat_no varchar(20), transaction_id varchar(255), total_sales double(10,2), paymnt_artists double(10,2), paymnt_othr double(10,2), flag varchar(20), PRIMARY KEY(id))");
                    statement.executeUpdate();
                    statement = con.prepareStatement("INSERT INTO showdb10(name) VALUES ('" + showname + "')");
                    if (statement.executeUpdate() > 0) {
                        return true;
                    }
                } catch (Exception e2) {}
            }
            return false;
        }

  • How can I create a SQL table using excel columns?

    - by Phsika
    I need help generating the column names from Excel automatically. I know I can run something like the following via C#:

        CREATE TABLE [dbo].[Addresses_Temp] (
            [FirstName] VARCHAR(20),
            [LastName]  VARCHAR(20),
            [Address]   VARCHAR(50),
            [City]      VARCHAR(30),
            [State]     VARCHAR(2),
            [ZIP]       VARCHAR(10)
        )

    How can I read the column names from Excel?

        private void Form1_Load(object sender, EventArgs e)
        {
            ExcelToSql();
        }

        void ExcelToSql()
        {
            string connectionString = @"Provider=Microsoft.ACE.OLEDB.12.0;Data Source=C:\Source\MPD.xlsm;Extended Properties=""Excel 12.0;HDR=YES;""";
            // if you don't want to show the header row (first row)
            // use 'HDR=NO' in the string
            string strSQL = "SELECT * FROM [Sheet1$]";
            OleDbConnection excelConnection = new OleDbConnection(connectionString);
            excelConnection.Open(); // opens the excel file
            OleDbCommand dbCommand = new OleDbCommand(strSQL, excelConnection);
            OleDbDataAdapter dataAdapter = new OleDbDataAdapter(dbCommand);

            // create data table
            DataTable dTable = new DataTable();
            dataAdapter.Fill(dTable);

            // bind the datasource
            // dataBingingSrc.DataSource = dTable;
            // assign the dataBindingSrc to the DataGridView
            // dgvExcelList.DataSource = dataBingingSrc;

            // dispose used objects
            if (dTable.Rows.Count > 0)
                MessageBox.Show("Count:" + dTable.Rows.Count.ToString());
            dTable.Dispose();
            dataAdapter.Dispose();
            dbCommand.Dispose();
            excelConnection.Close();
            excelConnection.Dispose();
        }
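
    A server-side alternative worth noting: SELECT ... INTO with OPENROWSET can create the table in one step, taking the column names straight from the Excel header row. A sketch, assuming the ACE OLE DB provider is installed and 'Ad Hoc Distributed Queries' is enabled (file path and sheet name taken from the question):

        SELECT *
        INTO [dbo].[Addresses_Temp]
        FROM OPENROWSET('Microsoft.ACE.OLEDB.12.0',
                        'Excel 12.0;Database=C:\Source\MPD.xlsm;HDR=YES',
                        'SELECT * FROM [Sheet1$]');

    The columns arrive with provider-guessed types rather than the explicit VARCHAR sizes above, so a follow-up ALTER TABLE may still be wanted.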

  • How do I join three tables with SQLAlchemy and keep all of the columns in one of the tables?

    - by jimka
    So, I have three tables. The class definitions:

        engine = create_engine('sqlite://test.db', echo=False)
        SQLSession = sessionmaker(bind=engine)
        Base = declarative_base()

        class Channel(Base):
            __tablename__ = 'channel'
            id = Column(Integer, primary_key = True)
            title = Column(String)
            description = Column(String)
            link = Column(String)
            pubDate = Column(DateTime)

        class User(Base):
            __tablename__ = 'user'
            id = Column(Integer, primary_key = True)
            username = Column(String)
            password = Column(String)
            sessionId = Column(String)

        class Subscription(Base):
            __tablename__ = 'subscription'
            userId = Column(Integer, ForeignKey('user.id'), primary_key=True)
            channelId = Column(Integer, ForeignKey('channel.id'), primary_key=True)

    And the SQL commands that are executed to create them:

        CREATE TABLE subscription (
            "userId" INTEGER NOT NULL,
            "channelId" INTEGER NOT NULL,
            PRIMARY KEY ("userId", "channelId"),
            FOREIGN KEY("userId") REFERENCES user (id),
            FOREIGN KEY("channelId") REFERENCES channel (id)
        );

        CREATE TABLE user (
            id INTEGER NOT NULL,
            username VARCHAR,
            password VARCHAR,
            "sessionId" VARCHAR,
            PRIMARY KEY (id)
        );

        CREATE TABLE channel (
            id INTEGER NOT NULL,
            title VARCHAR,
            description VARCHAR,
            link VARCHAR,
            "pubDate" TIMESTAMP,
            PRIMARY KEY (id)
        );

    NOTE: I know user.username should be unique; I need to fix that. I'm also not sure why SQLAlchemy wraps some column names in double quotes.

    I'm trying to come up with a way to retrieve all of the channels, plus an indication of which channels one particular user (identified by user.sessionId together with user.id) has a subscription to. For example, say we have four channels: channel1, channel2, channel3, channel4, and a user, user1, who has a subscription to channel1 and channel4. The query for user1 would return something like:

        channel.id | channel.title | subscribed
        ---------------------------------------
        1            channel1        True
        2            channel2        False
        3            channel3        False
        4            channel4        True

    This is the ideal result, but since I have absolutely no clue how to produce the subscribed column, I've instead been trying to select the user's id in the rows where the user has a subscription, leaving it blank where a subscription is missing. The database engine I'm using together with SQLAlchemy at the moment is sqlite3. I've been scratching my head over this for two days now. I have no problem joining all three tables by way of the subscription table, but then all of the channels where the user does not have a subscription get omitted. I hope I've managed to describe my problem sufficiently; thanks in advance.
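
    In raw SQL, the usual shape for this is a LEFT JOIN from channel to the user's subscription rows, so that unmatched channels survive with NULLs. A minimal sketch against the tables above (the literal user id 1 is a placeholder):

        SELECT channel.id,
               channel.title,
               -- a subscription row exists only for subscribed channels
               subscription."channelId" IS NOT NULL AS subscribed
        FROM channel
        LEFT JOIN subscription
               ON subscription."channelId" = channel.id
              AND subscription."userId" = 1
        ORDER BY channel.id;

    In SQLite the IS NOT NULL expression evaluates to 0 or 1, which maps onto the False/True column sketched above; the same join can be expressed in SQLAlchemy with an outerjoin().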

  • atk4 advanced crud?

    - by thindery
    I have the following tables:

        -- -----------------------------------------------------
        -- Table `product`
        -- -----------------------------------------------------
        CREATE TABLE IF NOT EXISTS `product` (
            `id` INT NOT NULL AUTO_INCREMENT ,
            `productName` VARCHAR(255) NULL ,
            `s7location` VARCHAR(255) NULL ,
            PRIMARY KEY (`id`) )
        ENGINE = InnoDB;

        -- -----------------------------------------------------
        -- Table `pages`
        -- -----------------------------------------------------
        CREATE TABLE IF NOT EXISTS `pages` (
            `id` INT NOT NULL AUTO_INCREMENT ,
            `productID` INT NULL ,
            `pageName` VARCHAR(255) NOT NULL ,
            `isBlank` TINYINT(1) NULL ,
            `pageOrder` INT(11) NULL ,
            `s7page` INT(11) NULL ,
            PRIMARY KEY (`id`) ,
            INDEX `productID` (`productID` ASC) ,
            CONSTRAINT `productID`
                FOREIGN KEY (`productID` )
                REFERENCES `product` (`id` )
                ON DELETE NO ACTION
                ON UPDATE NO ACTION)
        ENGINE = InnoDB;

        -- -----------------------------------------------------
        -- Table `field`
        -- -----------------------------------------------------
        CREATE TABLE IF NOT EXISTS `field` (
            `id` INT NOT NULL AUTO_INCREMENT ,
            `pagesID` INT NULL ,
            `fieldName` VARCHAR(255) NOT NULL ,
            `fieldType` VARCHAR(255) NOT NULL ,
            `fieldDefaultValue` VARCHAR(255) NULL ,
            PRIMARY KEY (`id`) ,
            INDEX `id` (`pagesID` ASC) ,
            CONSTRAINT `pagesID`
                FOREIGN KEY (`pagesID` )
                REFERENCES `pages` (`id` )
                ON DELETE NO ACTION
                ON UPDATE NO ACTION)
        ENGINE = InnoDB;

    I have gotten CRUD to work on the `product` table:

        //addproduct.php
        class page_addproduct extends Page {
            function init(){
                parent::init();
                $crud=$this->add('CRUD')->setModel('Product');
            }
        }

    This works, but I need it so that when a new product is created it also lets me add new rows to the `pages` and `field` tables. For example, the products in the tables are print products (like a greeting card) that have multiple pages to render. Page 1 may have 2 text fields that can be customized; page 2 may have 3 text fields, a slider to define text size, and a drop-down list to pick a color; and page 3 may have five text fields that can all be customized. All three pages (and all form elements, 12 in this example) are associated with 1 product. So when I create the product, could I add a button to create a page for that product, and then within the page a button to add a new form-element field? I'm still somewhat new to this, so my db structure may not be ideal; I'd appreciate any suggestions and feedback. Could someone point me toward some information, tutorials, documentation, or ideas on how I can implement this?

  • SQL Server - Complex Dynamic Pivot columns

    - by user972255
    I have two tables, "Controls" and "ControlChilds".

    Parent table structure:

        Create table Controls(
            ProjectID Varchar(20) NOT NULL,
            ControlID INT NOT NULL,
            ControlCode Varchar(2) NOT NULL,
            ControlPoint Decimal NULL,
            ControlScore Decimal NULL,
            ControlValue Varchar(50)
        )

    Sample data:

        ProjectID | ControlID | ControlCode | ControlPoint | ControlScore | ControlValue
        P001        1           A             30.44          65             Invalid
        P001        2           C             45.30          85             Valid

    Child table structure:

        Create table ControlChilds(
            ControlID INT NOT NULL,
            ControlChildID INT NOT NULL,
            ControlChildValue Varchar(200) NULL
        )

    Sample data:

        ControlID | ControlChildID | ControlChildValue
        1           100              Yes
        1           101              No
        1           102              NA
        1           103              Others
        2           104              Yes
        2           105              SomeValue

    The output should be a single row for a given ProjectID, with all its Controls values first, followed by the child control values, named after the ControlCode (i.e. ControlCode_Child1, ControlCode_Child2, ...); a screenshot in the original post shows the expected layout.

    Also, I tried this PIVOT query, and I am able to get the ControlChilds table values, but I don't know how to get the Controls table values:

        DECLARE @cols AS NVARCHAR(MAX);
        DECLARE @query AS NVARCHAR(MAX);

        select @cols = STUFF((SELECT distinct ',' +
            QUOTENAME(ControlCode + '_Child' + CAST(ROW_NUMBER() over(PARTITION BY ControlCode ORDER BY ControlChildID) AS Varchar(25)))
            FROM Controls C
            INNER JOIN ControlChilds CC ON C.ControlID = CC.ControlID
            FOR XML PATH(''), TYPE ).value('.', 'NVARCHAR(MAX)')
            , 1, 1, '');

        SELECT @query = 'SELECT * FROM (
            SELECT
                (ControlCode + ''_Child'' + CAST(ROW_NUMBER() over(PARTITION BY ControlCode ORDER BY ControlChildID) AS Varchar(25))) As Code,
                ControlChildValue
            FROM Controls AS C
            INNER JOIN ControlChilds AS CC ON C.ControlID = CC.ControlID
        ) AS t
        PIVOT
        (
            MAX(ControlChildValue)
            FOR Code IN( ' + @cols + ' )' + '
        ) AS p ; ';

        execute(@query);

    The output I am getting is shown in a second screenshot in the original post. Can anyone please help me with how to get the Controls table values in front of each set of ControlChilds table values?

  • Find records IN BETWEEN Date Range

    - by Muhammad Kashif Nadeem
    Please see the attached image. I have a table which has FromDate and ToDate columns. FromDate is the start of some event and ToDate is the end of that event. I need to find a record if the search criteria fall anywhere within the range of dates. E.g. if a record has FromDate 2010/5/15 and ToDate 2010/5/25, and my criteria are FromDate 2010/5/18 and ToDate 2010/5/21, then this record should be in the search results, because it is in the range of 15 to 25.

    Following is (a chunk of) my search query:

        SELECT m.EventId
        FROM MajorEvents
        WHERE
            ((m.LocationID = @locationID OR @locationID IS NULL) OR M.LocationID IS NULL)
        AND
            (
                CONVERT(VARCHAR(10), M.EventDateFrom, 23)
                    BETWEEN CONVERT(VARCHAR(10), @DateTimeFrom, 23) AND CONVERT(VARCHAR(10), @DateTimeTo, 23)
                OR
                CONVERT(VARCHAR(10), M.EventDateTo, 23)
                    BETWEEN CONVERT(VARCHAR(10), @DateTimeFrom, 23) AND CONVERT(VARCHAR(10), @DateTimeTo, 23)
            )

    If the search criteria equal FromDate or ToDate, the results are OK. E.g. if the search criteria are DateFrom = 2010/5/15 AND DateTo = 2010/5/18, the record is returned because DateFrom is exactly the FromDate in the db. Or if the search criteria are DateFrom = 2010/5/22 AND DateTo = 2010/5/25, the record is returned because DateTo is exactly the ToDate in the db. But if the criteria fall anywhere strictly inside the range, it does not work. Thanks for the help.
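
    Two date ranges overlap exactly when each one starts before the other ends, so the test can be a single pair of comparisons instead of a BETWEEN on each endpoint; a sketch against the columns above (same parameter names as in the question):

        SELECT m.EventId
        FROM MajorEvents m
        WHERE m.EventDateFrom <= @DateTimeTo
          AND m.EventDateTo   >= @DateTimeFrom

    Unlike the two BETWEEN tests, this also matches the case in the question where the search window (2010/5/18 to 2010/5/21) lies entirely inside the stored range, and it avoids the VARCHAR conversions.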

  • Getting the most recent post based on date

    - by camcim
    Hi guys, how do I go about displaying the most recent post when I have two tables, both containing a creation-date column (created_on)? This would be simple if all I had to do was get the most recent post based on the posts' created_on value; however, if a post has replies I need to factor those into the equation. If a post has a more recent reply, I want to use the reply's created_on value but still get the post's post_id and subject.

    The posts table structure:

        CREATE TABLE `posts` (
            `post_id` bigint(20) unsigned NOT NULL auto_increment,
            `cat_id` bigint(20) NOT NULL,
            `user_id` bigint(20) NOT NULL,
            `subject` tinytext NOT NULL,
            `comments` text NOT NULL,
            `created_on` datetime NOT NULL,
            `status` varchar(10) NOT NULL default 'INACTIVE',
            `private_post` varchar(10) NOT NULL default 'PUBLIC',
            `db_location` varchar(10) NOT NULL,
            PRIMARY KEY (`post_id`)
        ) ENGINE=MyISAM DEFAULT CHARSET=latin1 AUTO_INCREMENT=7 ;

    The replies table structure:

        CREATE TABLE `replies` (
            `reply_id` bigint(20) unsigned NOT NULL auto_increment,
            `post_id` bigint(20) NOT NULL,
            `user_id` bigint(20) NOT NULL,
            `comments` text NOT NULL,
            `created_on` datetime NOT NULL,
            `notify` varchar(5) NOT NULL default 'YES',
            `status` varchar(10) NOT NULL default 'INACTIVE',
            `db_location` varchar(10) NOT NULL,
            PRIMARY KEY (`reply_id`)
        ) ENGINE=MyISAM DEFAULT CHARSET=latin1 AUTO_INCREMENT=5 ;

    Here is my query so far. I've removed my attempt at extracting the dates:

        $strQuery = "SELECT posts.post_id, posts.created_on, replies.created_on, posts.subject ";
        $strQuery = $strQuery."FROM posts, replies ";
        $strQuery = $strQuery."WHERE posts.post_id = replies.post_id ";
        $strQuery = $strQuery."AND posts.cat_id = '".$row->cat_id."'";
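
    One way to fold replies into the ordering is to take, for each post, the later of the post's own created_on and its newest reply, then sort by that; a sketch using the two tables above:

        SELECT p.post_id,
               p.subject,
               GREATEST(p.created_on, COALESCE(MAX(r.created_on), p.created_on)) AS last_activity
        FROM posts p
        LEFT JOIN replies r ON r.post_id = p.post_id
        GROUP BY p.post_id, p.subject, p.created_on
        ORDER BY last_activity DESC
        LIMIT 1

    The LEFT JOIN keeps posts that have no replies at all, with COALESCE falling back to the post's own date; any extra filters (such as the cat_id test from the question) would go in a WHERE clause before the GROUP BY.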

  • Mysql many to many problem (leaderboard/scoreboard)

    - by zoko2902
    Hi all! I'm working on a small project for the upcoming World Cup. I'm building a roster/leaderboard/scoreboard based on groups with national teams. The idea is to hold information on all upcoming matches within the group or in the knockout phase (scores, time of the match, match stats, etc.). Currently I'm stuck on the DB, in that I can't come up with a query that returns the paired teams of a match in one row. I have these 3 tables (simplified so we don't go in the wrong direction):

        CREATE TABLE IF NOT EXISTS `wc_team` (
            `id` INT NOT NULL AUTO_INCREMENT ,
            `name` VARCHAR(45) NULL ,
            `description` VARCHAR(250) NULL ,
            `flag` VARCHAR(45) NULL ,
            `image` VARCHAR(45) NULL ,
            `added` TIMESTAMP NULL DEFAULT CURRENT_TIMESTAMP ,
            PRIMARY KEY (`id`) ,

        CREATE TABLE IF NOT EXISTS `wc_match` (
            `id` INT NOT NULL AUTO_INCREMENT ,
            `score` VARCHAR(6) NULL ,
            `date` DATE NULL ,
            `time` VARCHAR(45) NULL ,
            `added` TIMESTAMP NULL DEFAULT CURRENT_TIMESTAMP ,
            PRIMARY KEY (`id`) ,

        CREATE TABLE IF NOT EXISTS `wc_team_has_match` (
            `wc_team_id` INT NOT NULL ,
            `wc_match_id` INT NOT NULL ,
            PRIMARY KEY (`wc_team_id`, `wc_match_id`) ,

    I've tried all kinds of joins and groupings I could think of, but I never seem to get it. Example query:

        SELECT t.wc_team_id, t.wc_match_id, c.id, c.name, d.id, d.name
        FROM wc_team_has_match AS t
        LEFT JOIN wc_match AS s ON t.wc_match_id = s.id
        LEFT JOIN wc_team AS c ON t.wc_team_id = c.id
        LEFT JOIN wc_team AS d ON t.wc_team_id = d.id

    Which returns:

        wc_team_id  wc_match_id  id  name       id  name
        16          5            16  Brazil     16  Brazil
        18          5            18  Argentina  18  Argentina

    But what I really want is:

        wc_team_id  wc_match_id  id  name       id  name
        16          5            16  Brazil     18  Argentina

    Keep in mind that a group has more matches, and I want to see all those matches, not only one. Any pointer or suggestion would be extremely appreciated since I'm stuck like a duck on this one :).
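
    A self-join on the link table, keeping only one ordering of each pair, is the usual way to put both teams of a match on one row; a sketch against the tables above:

        SELECT m1.wc_match_id,
               t1.id AS team1_id, t1.name AS team1_name,
               t2.id AS team2_id, t2.name AS team2_name
        FROM wc_team_has_match m1
        JOIN wc_team_has_match m2
          ON m1.wc_match_id = m2.wc_match_id
         AND m1.wc_team_id < m2.wc_team_id   -- keep each pair once, not reversed
        JOIN wc_team t1 ON t1.id = m1.wc_team_id
        JOIN wc_team t2 ON t2.id = m2.wc_team_id

    Because it selects every qualifying pair per match, all matches in a group come back, one row each; joining wc_match on m1.wc_match_id would add the score and date columns.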

  • flush XML tags having certain value

    - by azathoth
    Hello all, I have a webservice that returns an XML like this:

        <?xml version="1.0" encoding="UTF-8"?>
        <contractInformation>
            <contract>
                <idContract type="int">87191827</idContract>
                <region type="varchar">null</region>
                <dueDate type="varchar">null</dueDate>
                <contactName type="varchar">John Smith</contactName>
            </contract>
        </contractInformation>

    I want to empty every tag that contains the string null, so it would look like this:

        <?xml version="1.0" encoding="UTF-8"?>
        <contractInformation>
            <contract>
                <idContract type="int">87191827</idContract>
                <region type="varchar"/>
                <dueDate type="varchar"/>
                <contactName type="varchar">John Smith</contactName>
            </contract>
        </contractInformation>

    How do I accomplish that using XSLT?

  • GridView edit problem if primary key is editable (design problem)

    - by Nassign
    I would like to ask about the design of a table based on its editability in a GridView. Let me explain. For example, I have a table named ProductCustomerRel.

    Method 1:

        CustomerCode varchar PK
        ProductCode  varchar PK
        StoreCode    varchar PK
        Quantity     int
        Note         text

    The combination of CustomerCode, ProductCode and StoreCode must be unique. The records are displayed in a GridView. The requirement is that you can edit the customer, product and store code, but when the data is saved, the PK constraint must still hold. The problem is that while it feels natural for a grid to edit the 3 primary key columns, you can only achieve the grid's update operation by first deleting the row and then inserting a row with the updated data. An alternative is to add a SeqNo surrogate key and just enforce the unique constraint on the 3 columns when inserting and updating in the GridView.

    Method 2:

        SeqNo        int PK
        CustomerCode varchar
        ProductCode  varchar
        StoreCode    varchar
        Quantity     int
        Note         text

    My question is: which of the two methods is better? Or is there another way to do this?
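
    For Method 2, the three-column uniqueness can live in the table itself rather than only in the grid's insert/update code; a sketch (the constraint name is illustrative):

        ALTER TABLE ProductCustomerRel
            ADD CONSTRAINT UQ_ProductCustomerRel_CustProdStore
            UNIQUE (CustomerCode, ProductCode, StoreCode);

    With the constraint in place, the GridView can freely update the three codes against the stable SeqNo key, and the database rejects any edit that would create a duplicate combination.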

  • How do I make a function in SQL Server that accepts a column of data?

    - by brandon k
    I made the following function in SQL Server 2008 earlier this week. It takes two parameters, uses them to select a column of "detail" records, and returns them as a single varchar list of comma-separated values. Now that I think about it, I would like to take this table- and application-specific function and make it more generic. I am not well versed in defining SQL functions, as this is my first. How can I change this function to accept a single "column" worth of data, so that I can use it in a more generic way?

    Instead of calling:

        SELECT ejc_concatFormDetails(formuid, categoryName)

    I would like to make it work like:

        SELECT concatColumnValues(SELECT someColumn FROM SomeTable)

    Here is my function definition:

        FUNCTION [DNet].[ejc_concatFormDetails](@formuid AS int, @category as VARCHAR(75))
        RETURNS VARCHAR(1000) AS
        BEGIN
            DECLARE @returnData VARCHAR(1000)
            DECLARE @currentData VARCHAR(75)
            DECLARE dataCursor CURSOR FAST_FORWARD FOR
                SELECT data FROM DNet.ejc_FormDetails
                WHERE formuid = @formuid AND category = @category
            SET @returnData = ''
            OPEN dataCursor
            FETCH NEXT FROM dataCursor INTO @currentData
            WHILE (@@FETCH_STATUS = 0)
            BEGIN
                SET @returnData = @returnData + ', ' + @currentData
                FETCH NEXT FROM dataCursor INTO @currentData
            END
            CLOSE dataCursor
            DEALLOCATE dataCursor
            RETURN SUBSTRING(@returnData, 3, 1000)
        END

    As you can see, I am selecting the column data within my function and then looping over the results with a cursor to build my comma-separated varchar. How can I alter this to accept a result set as a single parameter and then access that result set with a cursor?
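
    SQL Server 2008 allows a result set to be handed to a function through a table-valued parameter, which removes the table-specific SELECT from the function body. A minimal sketch, with illustrative type and function names:

        CREATE TYPE dbo.VarcharList AS TABLE (Value VARCHAR(75));
        GO
        CREATE FUNCTION dbo.concatColumnValues (@values dbo.VarcharList READONLY)
        RETURNS VARCHAR(1000) AS
        BEGIN
            DECLARE @returnData VARCHAR(1000);
            -- string-building SELECT replaces the cursor loop
            SELECT @returnData = COALESCE(@returnData + ', ', '') + Value FROM @values;
            RETURN @returnData;
        END
        GO
        -- usage: fill a variable of the table type, then pass it in
        DECLARE @v dbo.VarcharList;
        INSERT INTO @v SELECT someColumn FROM SomeTable;
        SELECT dbo.concatColumnValues(@v);

    The caller can't embed the SELECT directly in the function call, but any query can populate the table variable first, which gives the generic behavior the question asks for.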

  • MySQL foreign key creation with alter table command

    - by user313338
    I created some tables using MySQL Workbench and then used 'forward engineer' to generate the scripts that create these tables. But the scripts led me to a number of problems, one of which involves the foreign keys. So I tried adding the foreign keys separately with ALTER TABLE-style definitions and I am still getting problems. The code is below (I left in the SET statements and the drop/create statements, though I don't think they matter here):

        SET @OLD_UNIQUE_CHECKS=@@UNIQUE_CHECKS, UNIQUE_CHECKS=0;
        SET @OLD_FOREIGN_KEY_CHECKS=@@FOREIGN_KEY_CHECKS, FOREIGN_KEY_CHECKS=0;
        SET @OLD_SQL_MODE=@@SQL_MODE, SQL_MODE='TRADITIONAL';

        DROP SCHEMA IF EXISTS `mydb` ;
        CREATE SCHEMA IF NOT EXISTS `mydb` DEFAULT CHARACTER SET latin1 COLLATE latin1_swedish_ci ;

        -- -----------------------------------------------------
        -- Table `mydb`.`User`
        -- -----------------------------------------------------
        DROP TABLE IF EXISTS `mydb`.`User` ;
        CREATE TABLE IF NOT EXISTS `mydb`.`User` (
            `UserName` VARCHAR(35) NOT NULL ,
            `Num_Accts` INT NOT NULL ,
            `Password` VARCHAR(45) NULL ,
            `Email` VARCHAR(45) NULL ,
            `User_ID` INT NOT NULL AUTO_INCREMENT ,
            PRIMARY KEY (`User_ID`) )
        ENGINE = InnoDB;

        -- -----------------------------------------------------
        -- Table `mydb`.`User_Space`
        -- -----------------------------------------------------
        DROP TABLE IF EXISTS `mydb`.`User_Space` ;
        CREATE TABLE IF NOT EXISTS `mydb`.`User_Space` (
            `User_UserName` VARCHAR(35) NOT NULL ,
            `User_Space_ID` INT NOT NULL AUTO_INCREMENT ,
            PRIMARY KEY (`User_Space_ID`),
            FOREIGN KEY (`User_UserName`)
                REFERENCES `mydb`.`User` (`UserName`)
                ON UPDATE CASCADE
                ON DELETE CASCADE)
        ENGINE = InnoDB;

        SET SQL_MODE=@OLD_SQL_MODE;
        SET FOREIGN_KEY_CHECKS=@OLD_FOREIGN_KEY_CHECKS;
        SET UNIQUE_CHECKS=@OLD_UNIQUE_CHECKS;

    The error this produces is:

        Error Code: 1005 Can't create table 'mydb.user_space' (errno: 150)

    Does anybody know what I'm doing wrong? And has anybody else had problems with the script generation done by MySQL Workbench? It's a nice tool, but it's annoying that it pumps out scripts that don't work for me. As an FYI, here's the `User_Space` portion of the script it auto-generates (the SET statements and the `User` table are identical to the above); note the empty column lists in the INDEX and FOREIGN KEY clauses:

        DROP TABLE IF EXISTS `mydb`.`User_Space` ;
        CREATE TABLE IF NOT EXISTS `mydb`.`User_Space` (
            `User_Space_ID` INT NOT NULL AUTO_INCREMENT ,
            PRIMARY KEY (`User_Space_ID`) ,
            INDEX `User_ID` () ,
            CONSTRAINT `User_ID`
                FOREIGN KEY ()
                REFERENCES `mydb`.`User` ()
                ON DELETE NO ACTION
                ON UPDATE NO ACTION)
        ENGINE = InnoDB;

    Thanks!
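
    errno 150 generally means InnoDB can't find a usable index on the referenced column: a foreign key target must be indexed, and User.UserName here is neither the primary key nor indexed. One fix for the hand-written script, as a sketch, is to index the referenced column before creating User_Space (uniquely, since user names presumably shouldn't repeat; the index name is illustrative):

        ALTER TABLE `mydb`.`User`
            ADD UNIQUE INDEX `idx_User_UserName` (`UserName`);

    The same effect could be had by declaring UNIQUE on UserName in the CREATE TABLE. The auto-generated script fails for a different reason: its INDEX and FOREIGN KEY column lists are empty.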

  • Export mysql database tables to php code to create same tables in other database?

    - by chefnelone
    How do I export MySQL database tables to PHP code that creates and populates the same tables in another database? I have a local database; I exported it to SQL syntax and got something like:

        CREATE TABLE `boletinSuscritos` (
            `id` int(11) NOT NULL AUTO_INCREMENT,
            `name` varchar(120) NOT NULL,
            `email` varchar(120) NOT NULL,
            `date` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP,
            PRIMARY KEY (`id`)
        ) ENGINE=MyISAM DEFAULT CHARSET=utf8 AUTO_INCREMENT=3 ;

        INSERT INTO `boletinSuscritos` VALUES(1, 'walter', '[email protected]', '2010-03-24 12:53:12');
        INSERT INTO `boletinSuscritos` VALUES(2, 'Paco', '[email protected]', '2010-03-24 12:56:56');

    But I need it to be like this (is there any way to export the tables in this form?):

        $sql = "CREATE TABLE boletinSuscritos (
            id int(11) NOT NULL AUTO_INCREMENT,
            name varchar(120) NOT NULL,
            email varchar(120) NOT NULL,
            date timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP,
            PRIMARY KEY ( id )
        ) ENGINE=MyISAM DEFAULT CHARSET=utf8 AUTO_INCREMENT=3";

        mysql_query($sql, $conexion);
        mysql_query("INSERT INTO boletinSuscritos VALUES(1, 'walter', '[email protected]', '2010-03-24 12:53:12')");
        mysql_query("INSERT INTO boletinSuscritos VALUES(2, 'Paco', '[email protected]', '2010-03-24 12:56:56')");

  • SQL Query to update parent record with child record values

    - by Wells
    I need to create a trigger that fires when a child record (Codes) is added, updated or deleted. The trigger stuffs a string of comma-separated Code values from all child records into a single field in the parent record (Projects) of the added, updated or deleted child record. I am stuck on writing a correct query to retrieve the Code values from just those child records that belong to a single parent record.

        -- Create the test tables
        CREATE TABLE projects (
            ProjectId varchar(16) PRIMARY KEY,
            ProjectName varchar(100),
            Codestring nvarchar(100)
        )
        GO
        CREATE TABLE prcodes (
            CodeId varchar(16) PRIMARY KEY,
            Code varchar (4),
            ProjectId varchar(16)
        )
        GO
        -- Add sample data: two projects records, one with 3 child records, the other with 2.
        INSERT INTO projects (ProjectId, ProjectName)
        SELECT '101','Smith' UNION ALL
        SELECT '102','Jones'
        GO
        INSERT INTO prcodes (CodeId, Code, ProjectId)
        SELECT 'A1','Blue', '101' UNION ALL
        SELECT 'A2','Pink', '101' UNION ALL
        SELECT 'A3','Gray', '101' UNION ALL
        SELECT 'A4','Blue', '102' UNION ALL
        SELECT 'A5','Gray', '102'
        GO

    I am stuck on how to create a correct UPDATE query. Can you help fix this one?

        -- Partially working, but stuffs in ALL values, not just the values from
        -- the child (prcodes) records of each parent (projects) row
        UPDATE proj
        SET proj.Codestring = (SELECT STUFF((SELECT ',' + prc.Code
            FROM projects proj
            INNER JOIN prcodes prc ON proj.ProjectId = prc.ProjectId
            ORDER BY 1 ASC
            FOR XML PATH('')), 1, 1, ''))

    The result I get for the Codestring field in projects is:

        ProjectId  ProjectName  Codestring
        101        Smith        Blue,Blue,Gray,Gray,Pink ...

    But the result I need is:

        ProjectId  ProjectName  Codestring
        101        Smith        Blue,Pink,Gray ...

    Here is my start on the trigger; the UPDATE query above will be added to it. Can you help me complete the trigger creation query?

        CREATE TRIGGER Update_Codestring ON prcodes
        AFTER INSERT, UPDATE, DELETE
        AS
        WITH CTE AS (
            select ProjectId from inserted
            union
            select ProjectId from deleted
        )
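
    The usual fix is to correlate the inner STUFF query on ProjectId instead of re-joining both tables, so each parent concatenates only its own children; a sketch:

        UPDATE proj
        SET proj.Codestring = STUFF(
            (SELECT ',' + prc.Code
             FROM prcodes prc
             WHERE prc.ProjectId = proj.ProjectId   -- correlate to the outer row
             ORDER BY prc.CodeId
             FOR XML PATH('')), 1, 1, '')
        FROM projects proj;

    Ordering by CodeId reproduces the Blue,Pink,Gray order from the sample data; inside the trigger, a WHERE proj.ProjectId IN (SELECT ProjectId FROM CTE) clause would restrict the update to the parents actually touched.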

  • How can I clear a SQLite database each time I start my app

    - by AndroidUser99
    Hi, I want my SQLite database to be cleaned each time I start my app. For this, I need to add a method to my MyDbAdapter.java class. Code examples are welcome; I have no idea how to do it. This is the db adapter/helper I'm using:

        public class MyDbAdapter {
            private static final String TAG = "NotesDbAdapter";
            private DatabaseHelper mDbHelper;
            private SQLiteDatabase mDb;

            private static final String DATABASE_NAME = "gpslocdb";
            private static final String PERMISSION_TABLE_CREATE =
                "CREATE TABLE permission ( fk_email1 varchar, fk_email2 varchar, validated tinyint, " +
                "hour1 time default '08:00:00', hour2 time default '20:00:00', date1 date, date2 date, " +
                "weekend tinyint default '0', fk_type varchar, PRIMARY KEY (fk_email1,fk_email2))";
            private static final String USER_TABLE_CREATE =
                "CREATE TABLE user ( email varchar, password varchar, fullName varchar, " +
                "mobilePhone varchar, mobileOperatingSystem varchar, PRIMARY KEY (email))";
            private static final int DATABASE_VERSION = 2;

            private final Context mCtx;

            private static class DatabaseHelper extends SQLiteOpenHelper {
                DatabaseHelper(Context context) {
                    super(context, DATABASE_NAME, null, DATABASE_VERSION);
                }

                @Override
                public void onCreate(SQLiteDatabase db) {
                    db.execSQL(PERMISSION_TABLE_CREATE);
                    db.execSQL(USER_TABLE_CREATE);
                }

                @Override
                public void onUpgrade(SQLiteDatabase db, int oldVersion, int newVersion) {
                    db.execSQL("DROP TABLE IF EXISTS user");
                    db.execSQL("DROP TABLE IF EXISTS permission");
                    onCreate(db);
                }
            }

            /**
             * Constructor - takes the context to allow the database to be opened/created
             * @param ctx the Context within which to work
             */
            public MyDbAdapter(Context ctx) {
                this.mCtx = ctx;
            }

            /**
             * Open the database. If it cannot be opened, try to create a new instance of
             * the database. If it cannot be created, throw an exception to signal the failure.
             * @return this (self reference, allowing this to be chained in an initialization call)
             * @throws SQLException if the database could be neither opened or created
             */
            public MyDbAdapter open() throws SQLException {
                mDbHelper = new DatabaseHelper(mCtx);
                mDb = mDbHelper.getWritableDatabase();
                return this;
            }

            public void close() {
                mDbHelper.close();
            }

            public long createUser(String email, String password, String fullName, String mobilePhone, String mobileOperatingSystem) {
                ContentValues initialValues = new ContentValues();
                initialValues.put("email", email);
                initialValues.put("password", password);
                initialValues.put("fullName", fullName);
                initialValues.put("mobilePhone", mobilePhone);
                initialValues.put("mobileOperatingSystem", mobileOperatingSystem);
                return mDb.insert("user", null, initialValues);
            }

            public Cursor fetchAllUsers() {
                return mDb.query("user", new String[] {"email", "password", "fullName", "mobilePhone", "mobileOperatingSystem"}, null, null, null, null, null);
            }

            public Cursor fetchUser(String email) throws SQLException {
                Cursor mCursor = mDb.query(true, "user", new String[] {"email", "password", "fullName", "mobilePhone", "mobileOperatingSystem"}, "email" + "=" + email, null, null, null, null, null);
                if (mCursor != null) {
                    mCursor.moveToFirst();
                }
                return mCursor;
            }

            public List<Friend> retrieveAllUsers() {
                List<Friend> friends = new ArrayList<Friend>();
                Cursor result = fetchAllUsers();
                if (result.moveToFirst()) {
                    do {
                        //note.getString(note.getColumnIndexOrThrow(NotesDbAdapter.KEY_TITLE)));
                        friends.add(new Friend(result.getString(result.getColumnIndexOrThrow("email")), "", "", "", ""));
                    } while (result.moveToNext());
                }
                return friends;
            }
        }
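
    The SQL such a clear method needs to run is just a pair of deletes; a sketch of the statements that a hypothetical clearDatabase() method on the adapter could issue, one db.execSQL() call each:

        -- empty both tables while keeping the schema
        DELETE FROM permission;
        DELETE FROM user;

    Running the deletes right after open() empties the tables at startup, whereas dropping and recreating the tables (as onUpgrade does) also resets any schema changes.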

  • Reading a large SQL Errorlog

    - by steveh99999
    I came across an interesting situation recently where a SQL instance had been configured with auditing of successful and failed logins written to the errorlog, i.e. every time a user or the application connected to the SQL instance, an entry was written to the errorlog. This meant huge SQL Server errorlogs. Opening an errorlog in the usual way, using SQL Server Management Studio, was extremely slow… Luckily, I was able to use xp_readerrorlog to work around this; here are some example queries.

    To show errorlog entries from the currently active log, just for today:

        DECLARE @now DATETIME
        DECLARE @midnight DATETIME
        SET @now = GETDATE()
        SET @midnight = DATEADD(d, DATEDIFF(d, 0, getdate()), 0)
        EXEC xp_readerrorlog 0,1,NULL,NULL,@midnight,@now

    To find out how big the current errorlog actually is, and what the earliest and most recent entries in it are:

        CREATE TABLE #temp_errorlog (Logdate DATETIME, ProcessInfo VARCHAR(20), Text VARCHAR(4000))
        INSERT INTO #temp_errorlog EXEC xp_readerrorlog 0 -- for current errorlog
        SELECT COUNT(*) AS 'Number of entries in errorlog',
               MIN(logdate) AS 'ErrorLog Starts',
               MAX(logdate) AS 'ErrorLog Ends'
        FROM #temp_errorlog
        DROP TABLE #temp_errorlog

    To show just DBCC history information in the current errorlog:

        EXEC xp_readerrorlog 0,1,'dbcc'

    To show backup entries in the current errorlog:

        CREATE TABLE #temp_errorlog (Logdate DATETIME, ProcessInfo VARCHAR(20), Text VARCHAR(4000))
        INSERT INTO #temp_errorlog EXEC xp_readerrorlog 0 -- for current errorlog
        SELECT * FROM #temp_errorlog WHERE ProcessInfo = 'Backup' ORDER BY Logdate
        DROP TABLE #temp_errorlog

    xp_readerrorlog is an undocumented system stored procedure, so there is no official Microsoft link describing the parameters it takes; however, there's a good blog on this here. And if you do have a problem with huge errorlogs, please consider running the system stored procedure sp_cycle_errorlog on a nightly or regular basis. But if you do this, remember to change the number of errorlogs you retain; the default of 6 might not be sufficient for you….

  • SyFy Channel Original Movie Title Generator

    - by Most Valuable Yak (Rob Volk)
    Saw this linked on reddit today and couldn't resist going through all the combinations:

        create table #pre(name varchar(20))
        create table #post(name varchar(20), pre varchar(10))

        insert #pre
        select 'Dino' union all select 'Alien' union all select 'Shark' union all
        select 'Raptor' union all select 'Tractor' union all select 'Arachno' union all
        select 'Cyber' union all select 'Robo' union all select 'Choco' union all
        select 'Chupa' union all select 'Grizzly' union all select 'Mega' union all
        select 'Were' union all select 'Sabre' union all select 'Man'

        insert #post
        select 'dactyl','a' union all select 'pus','to' union all
        select 'conda','a' union all select 'droid',null union all
        select 'dile','o' union all select 'bear',null union all
        select 'vampire',null union all select 'squito',null union all
        select 'saurus','a' union all select 'wolf',null union all
        select 'ghost',null union all select 'viper',null union all
        select 'cabra','a' union all select 'yeti',null union all
        select 'shark',null

        select a.name
               + case when right(a.name,1) not like '[aeiouy]' and b.pre is not null
                      then b.pre else '' end
               + b.name
        from #pre a cross join #post b
        where a.name <> b.name -- optional, to eliminate the "SharkShark" option
        order by 1

    Which one is your favorite? I like most of the -squito versions, especially Chupasquito and Grizzlysquito.

  • SQL SERVER – Convert Seconds to Hour : Minute : Seconds Format

    - by Pinal Dave
    Here is another question I received via email. "Hi Pinal, I have a unique requirement. We measure time spent on any webpage in seconds. I recently had to build a report over it, and I did a few summations based on groups of web pages. Now my manager wants to convert the time, which is in seconds, to the format Hour : Minute : Second. I researched online and found a solution on Stack Overflow for converting seconds to Minute : Second, but could not find a solution for Hour : Minute : Second. Would you please help?"

    Of course, the logic is very simple. Here is the script:

        DECLARE @TimeinSecond INT
        SET @TimeinSecond = 86399 -- Change the seconds
        SELECT RIGHT('0' + CAST(@TimeinSecond / 3600 AS VARCHAR), 2) + ':' +
               RIGHT('0' + CAST((@TimeinSecond / 60) % 60 AS VARCHAR), 2) + ':' +
               RIGHT('0' + CAST(@TimeinSecond % 60 AS VARCHAR), 2)

    The original post includes a screenshot of the result. Reference: Pinal Dave (http://blog.SQLAuthority.com)

  • Better than dynamic SQL - How to pass a list of comma separated IDs into a stored proc

    - by Rodney Vinyard
    Better than dynamic SQL: how to pass a list of comma-separated IDs into a stored proc. Derived from "Method 6" of a great article, "How to pass a list of values or array to SQL Server stored procedure" (http://vyaskn.tripod.com/passing_arrays_to_stored_procedures.htm).

        Create PROCEDURE [dbo].[GetMyTable_ListByCommaSepReqIds]
            (@CommaSepReqIds varchar(500))
        AS
        BEGIN
            SELECT *
            FROM MyTable q
            JOIN dbo.SplitStringToNumberTable(@CommaSepReqIds) AS s
                ON q.MyTableId = s.ID
        END

        ALTER FUNCTION [dbo].[SplitStringToNumberTable]
        (
            @commaSeparatedList varchar(500)
        )
        RETURNS @outTable table
        (
            ID int
        )
        AS
        BEGIN
            DECLARE @parsedItem varchar(10), @Pos int

            SET @commaSeparatedList = LTRIM(RTRIM(@commaSeparatedList)) + ','
            SET @commaSeparatedList = REPLACE(@commaSeparatedList, ' ', '')
            SET @Pos = CHARINDEX(',', @commaSeparatedList, 1)

            IF REPLACE(@commaSeparatedList, ',', '') <> ''
            BEGIN
                WHILE @Pos > 0
                BEGIN
                    SET @parsedItem = LTRIM(RTRIM(LEFT(@commaSeparatedList, @Pos - 1)))
                    IF @parsedItem <> ''
                    BEGIN
                        INSERT INTO @outTable (ID)
                        VALUES (CAST(@parsedItem AS int)) -- use the appropriate conversion
                    END
                    SET @commaSeparatedList = RIGHT(@commaSeparatedList, LEN(@commaSeparatedList) - @Pos)
                    SET @Pos = CHARINDEX(',', @commaSeparatedList, 1)
                END
            END

            RETURN
        END
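
    A quick smoke test of the pattern, assuming a MyTable keyed by an int MyTableId as in the procedure above:

        -- the split function alone returns one ID per row
        SELECT * FROM dbo.SplitStringToNumberTable('1, 2, 3');
        -- the proc joins those IDs against MyTable
        EXEC dbo.GetMyTable_ListByCommaSepReqIds @CommaSepReqIds = '1,2,3';

    The function tolerates spaces after commas because it strips them before parsing.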

  • More CPU cores may not always lead to better performance – MAXDOP and query memory distribution in spotlight

    - by sqlworkshops
    More hardware normally delivers better performance, but there are exceptions where it can hinder performance. Understanding these exceptions and working around them is a major part of SQL Server performance tuning.

    When a memory-allocating query executes in parallel, SQL Server distributes memory to each task that is executing part of the query in parallel. In our example, the sort operator that executes in parallel divides the memory across all tasks assuming an even distribution of rows. Common memory-allocating queries are those that perform Sort or Hash Match operations like Hash Join, Hash Aggregation or Hash Union.

    In reality, how often are column values evenly distributed? Think about an example: are the employees working for your company distributed evenly across all Zip codes, or mainly concentrated at the headquarters? What happens when you sort a result set based on Zip codes? Do all products in the catalog sell equally, or are a few products hot-selling items?

    One of my customers tested the below example on a 24-core server with various MAXDOP settings and here are the results:

        MAXDOP 1:  CPU time = 1185 ms, elapsed time = 1188 ms
        MAXDOP 4:  CPU time = 1981 ms, elapsed time = 1568 ms
        MAXDOP 8:  CPU time = 1918 ms, elapsed time = 1619 ms
        MAXDOP 12: CPU time = 2367 ms, elapsed time = 2258 ms
        MAXDOP 16: CPU time = 2540 ms, elapsed time = 2579 ms
        MAXDOP 20: CPU time = 2470 ms, elapsed time = 2534 ms
        MAXDOP 0:  CPU time = 2809 ms, elapsed time = 2721 ms - all 24 cores

    In the above test, when the data was evenly distributed, the elapsed time of the parallel query was always lower than the serial query.

    Why does the query get slower and slower with more CPU cores / higher MAXDOP? Maybe you can answer this question after reading the article; let me know: [email protected].

    Well, you get the point; let's see an example. The best way to learn is to practice. To create the below tables and reproduce the behavior, join the mailing list by using this link: www.sqlworkshops.com/ml and I will send you the table creation script.

    Let's update the Employees table with 49 out of 50 employees located in Zip code 2001:

        update Employees set Zip = EmployeeID / 400 + 1 where EmployeeID % 50 = 1
        update Employees set Zip = 2001 where EmployeeID % 50 != 1
        go
        update statistics Employees with fullscan
        go

    Let's create the temporary table #FireDrill with all possible Zip codes:

        drop table #FireDrill
        go
        create table #FireDrill (Zip int primary key)
        insert into #FireDrill select distinct Zip from Employees
        update statistics #FireDrill with fullscan
        go

    Let's execute the query serially with MAXDOP 1:

        --Example provided by www.sqlworkshops.com
        --Execute query with uneven Zip code distribution
        --First serially with MAXDOP 1
        set statistics time on
        go
        declare @EmployeeID int, @EmployeeName varchar(48), @zip int
        select @EmployeeName = e.EmployeeName, @zip = e.Zip
        from Employees e
            inner join #FireDrill fd on (e.Zip = fd.Zip)
        order by e.Zip
        option (maxdop 1)
        go

    The query took 1011 ms to complete. The execution plan shows that 77816 KB of memory was granted while the estimated rows were 799624. No Sort Warnings in SQL Server Profiler.

    Now let's execute the query in parallel with MAXDOP 0:

        --Example provided by www.sqlworkshops.com
        --Execute query with uneven Zip code distribution
        --In parallel with MAXDOP 0
        set statistics time on
        go
        declare @EmployeeID int, @EmployeeName varchar(48), @zip int
        select @EmployeeName = e.EmployeeName, @zip = e.Zip
        from Employees e
            inner join #FireDrill fd on (e.Zip = fd.Zip)
        order by e.Zip
        option (maxdop 0)
        go

    The query took 1912 ms to complete. The execution plan shows that 79360 KB of memory was granted while the estimated rows were 799624. The estimated number of rows is the same for the serial and parallel plans; the parallel plan has slightly more memory granted due to additional overhead. The Sort properties show the rows are unevenly distributed over the 4 threads. Sort Warnings appear in SQL Server Profiler.

    Intermediate summary: the reason for the higher duration with the parallel plan was a sort spill. This is due to the uneven distribution of employees over Zip codes, especially the concentration of 49 out of 50 employees in Zip code 2001.

    Now let's update the Employees table and distribute employees evenly across all Zip codes:

        update Employees set Zip = EmployeeID / 400 + 1
        go
        update statistics Employees with fullscan
        go

    Let's execute the query serially with MAXDOP 1:

        --Example provided by www.sqlworkshops.com
        --Execute query with even Zip code distribution
        --Serially with MAXDOP 1
        set statistics time on
        go
        declare @EmployeeID int, @EmployeeName varchar(48), @zip int
        select @EmployeeName = e.EmployeeName, @zip = e.Zip
        from Employees e
            inner join #FireDrill fd on (e.Zip = fd.Zip)
        order by e.Zip
        option (maxdop 1)
        go

    The query took 751 ms to complete. The execution plan shows that 77816 KB of memory was granted while the estimated rows were 784707. No Sort Warnings in SQL Server Profiler.

    Now let's execute the query in parallel with MAXDOP 0:

        --Example provided by www.sqlworkshops.com
        --Execute query with even Zip code distribution
        --In parallel with MAXDOP 0
        set statistics time on
        go
        declare @EmployeeID int, @EmployeeName varchar(48), @zip int
        select @EmployeeName = e.EmployeeName, @zip = e.Zip
        from Employees e
            inner join #FireDrill fd on (e.Zip = fd.Zip)
        order by e.Zip
        option (maxdop 0)
        go

    The query took 661 ms to complete. The execution plan shows that 79360 KB of memory was granted while the estimated rows were 784707. The Sort properties show the rows are evenly distributed over the 4 threads. No Sort Warnings in SQL Server Profiler.

    Intermediate summary: when employees were distributed unevenly, concentrated in 1 Zip code, the parallel sort spilled while the serial sort performed well without spilling to tempdb. When the employees were distributed evenly across all Zip codes, neither the parallel sort nor the serial sort spilled to tempdb. This shows uneven data distribution may affect the performance of some parallel queries negatively. For a detailed discussion of memory allocation, refer to the webcasts available at www.sqlworkshops.com/webcasts.

    Some of you might conclude from the above execution times that parallel queries are not faster even when there is no spill. Below you can see that when we join a limited set of Zip codes, the parallel query will be fastest, since it can use Bitmap Filtering.

    Let's update the Employees table with 49 out of 50 employees located in Zip code 2001:

        update Employees set Zip = EmployeeID / 400 + 1 where EmployeeID % 50 = 1
        update Employees set Zip = 2001 where EmployeeID % 50 != 1
        go
        update statistics Employees with fullscan
        go

    Let's create the temporary table #FireDrill with a limited set of Zip codes:

        drop table #FireDrill
        go
        create table #FireDrill (Zip int primary key)
        insert into #FireDrill select distinct Zip
            from Employees where Zip between 1800 and 2001
        update statistics #FireDrill with fullscan
        go

    Let's execute the query serially with MAXDOP 1:

        --Example provided by www.sqlworkshops.com
        --Execute query with uneven Zip code distribution
        --Serially with MAXDOP 1
        set statistics time on
        go
        declare @EmployeeID int, @EmployeeName varchar(48), @zip int
        select @EmployeeName = e.EmployeeName, @zip = e.Zip
        from Employees e
            inner join #FireDrill fd on (e.Zip = fd.Zip)
        order by e.Zip
        option (maxdop 1)
        go

    The query took 989 ms to complete. The execution plan shows that 77816 KB of memory was granted while the estimated rows were 785594. No Sort Warnings in SQL Server Profiler.

    Now let's execute the query in parallel with MAXDOP 0:

        --Example provided by www.sqlworkshops.com
        --Execute query with uneven Zip code distribution
        --In parallel with MAXDOP 0
        set statistics time on
        go
        declare @EmployeeID int, @EmployeeName varchar(48), @zip int
        select @EmployeeName = e.EmployeeName, @zip = e.Zip
        from Employees e
            inner join #FireDrill fd on (e.Zip = fd.Zip)
        order by e.Zip
        option (maxdop 0)
        go

    The query took 1799 ms to complete. The execution plan shows that 79360 KB of memory was granted while the estimated rows were 785594. Sort Warnings in SQL Server Profiler. The estimated number of rows is the same for the serial and parallel plans; the parallel plan has slightly more memory granted due to additional overhead.

    Intermediate summary: the reason for the higher duration with the parallel plan, even with a limited set of Zip codes, was a sort spill. This is again due to the uneven distribution of employees over Zip codes, especially the concentration of 49 out of 50 employees in Zip code 2001.

    Now let's update the Employees table and distribute employees evenly across all Zip codes:

        update Employees set Zip = EmployeeID / 400 + 1
        go
        update statistics Employees with fullscan
        go

    Let's execute the query serially with MAXDOP 1:

        --Example provided by www.sqlworkshops.com
        --Execute query with even Zip code distribution
        --Serially with MAXDOP 1
        set statistics time on
        go
        declare @EmployeeID int, @EmployeeName varchar(48), @zip int
        select @EmployeeName = e.EmployeeName, @zip = e.Zip
        from Employees e
            inner join #FireDrill fd on (e.Zip = fd.Zip)
        order by e.Zip
        option (maxdop 1)
        go

    The query took 250 ms to complete. The execution plan shows that 9016 KB of memory was granted while the estimated rows were 79973.8. No Sort Warnings in SQL Server Profiler.

    Now let's execute the query in parallel with MAXDOP 0:

        --Example provided by www.sqlworkshops.com
        --Execute query with even Zip code distribution
        --In parallel with MAXDOP 0
        set statistics time on
        go
        declare @EmployeeID int, @EmployeeName varchar(48), @zip int
        select @EmployeeName = e.EmployeeName, @zip = e.Zip
        from Employees e
            inner join #FireDrill fd on (e.Zip = fd.Zip)
        order by e.Zip
        option (maxdop 0)
        go

    The query took 85 ms to complete. The execution plan shows that 13152 KB of memory was granted while the estimated rows were 784707. No Sort Warnings in SQL Server Profiler.

    Here you see the parallel query is much faster than the serial query, since SQL Server is using Bitmap Filtering to eliminate rows before the hash join.

    Parallel queries are very good for performance, but in some cases they can hinder performance. If one identifies the reason for these hindrances, then it is possible to get the best out of parallelism. I covered many aspects of monitoring and tuning parallel queries in webcasts (www.sqlworkshops.com/webcasts) and articles (www.sqlworkshops.com/articles). I suggest you watch the webcasts and read the articles to better understand how to identify and tune parallel query performance issues.

    Summary: one has to avoid sort spills over tempdb, and the chances of spills are higher when a query executes in parallel with uneven data distribution. Parallel query brings its own advantages: reduced elapsed time, and reduced work with Bitmap Filtering. So it is important to understand how to avoid spills over tempdb and when to execute a query in parallel.

    I explain these concepts with detailed examples in my webcasts (www.sqlworkshops.com/webcasts); I recommend you watch them. The best way to learn is to practice. To create the above tables and reproduce the behavior, join the mailing list at www.sqlworkshops.com/ml and I will send you the relevant SQL scripts.

    Register for the upcoming 3 Day Level 400 Microsoft SQL Server 2008 and SQL Server 2005 Performance Monitoring & Tuning Hands-on Workshop in London, United Kingdom during March 15-17, 2011: click here to register / Microsoft UK TechNet. These are hands-on workshops with a maximum of 12 participants, not lectures. For consulting engagements click here.

    Disclaimer and copyright information: this article refers to organizations and products that may be the trademarks or registered trademarks of their various owners. Copyright of this article belongs to R Meyyappan / www.sqlworkshops.com. You may freely use the ideas and concepts discussed in this article with acknowledgement (www.sqlworkshops.com), but you may not claim any of it as your own work. This article is for informational purposes only; you use any of the suggestions given here entirely at your own risk.

    R Meyyappan [email protected]
    LinkedIn: http://at.linkedin.com/in/rmeyyappan

  • SQL SERVER – Introduction to Extended Events – Finding Long Running Queries

    - by pinaldave
    The job of a SQL consultant is always interesting. The month before, I was busy doing query optimization and performance tuning projects for our clients, and this month I am busy delivering my Microsoft SQL Server 2005/2008 Query Optimization and Performance Tuning course. I recently read a white paper about Extended Events by SQL Server MVP Jonathan Kehayias: Using SQL Server 2008 Extended Events. I also read another appealing chapter by Jonathan in the book Professional SQL Server 2008 Internals and Troubleshooting (see the SQLAuthority book review). After reading these excellent notes by Jonathan, I decided to upgrade my course and include Extended Events as one of the modules. This week, I delivered the Extended Events session twice, and the attendees really liked it. They consider Extended Events one of the most powerful tools available.

    Extended Events can do many things. I suggest that you read the white paper I mentioned to learn more about this tool. Instead of writing a long theory, I am going to present a very quick script for Extended Events. This event session captures all the longest-running queries ever since the event session was started. One of the many advantages of Extended Events is that it can be configured very easily, and it is a robust method to collect the information necessary for troubleshooting. There are many targets where you can store the information, including the XML file target, which I really like. In the following event session, we write the details of the event to two locations: 1) the ring buffer; and 2) an XML file. It is not necessary to write to both places; either of the two will do.

        -- Extended Event for finding *long running query*
        IF EXISTS(SELECT * FROM sys.server_event_sessions WHERE name='LongRunningQuery')
            DROP EVENT SESSION LongRunningQuery ON SERVER
        GO
        -- Create Event
        CREATE EVENT SESSION LongRunningQuery ON SERVER
        -- Add event to capture event
        ADD EVENT sqlserver.sql_statement_completed
        (
            -- Add action - event property
            ACTION (sqlserver.sql_text, sqlserver.tsql_stack)
            -- Predicate - time 1000 milisecond
            WHERE sqlserver.sql_statement_completed.duration > 1000
        )
        -- Add target for capturing the data - XML File
        ADD TARGET package0.asynchronous_file_target(
            SET filename='c:\LongRunningQuery.xet', metadatafile='c:\LongRunningQuery.xem'),
        -- Add target for capturing the data - Ring Buffer
        ADD TARGET package0.ring_buffer
            (SET max_memory = 4096)
        WITH (max_dispatch_latency = 1 seconds)
        GO
        -- Enable Event
        ALTER EVENT SESSION LongRunningQuery ON SERVER STATE=START
        GO
        -- Run long query (longer than 1000 ms)
        SELECT * FROM AdventureWorks.Sales.SalesOrderDetail
        ORDER BY UnitPriceDiscount DESC
        GO
        -- Stop the event
        ALTER EVENT SESSION LongRunningQuery ON SERVER STATE=STOP
        GO
        -- Read the data from Ring Buffer
        SELECT CAST(dt.target_data AS XML) AS xmlLockData
        FROM sys.dm_xe_session_targets dt
        JOIN sys.dm_xe_sessions ds ON ds.Address = dt.event_session_address
        JOIN sys.server_event_sessions ss ON ds.Name = ss.Name
        WHERE dt.target_name = 'ring_buffer' AND ds.Name = 'LongRunningQuery'
        GO
        -- Read the data from XML File
        SELECT event_data_XML.value('(event/data[1])[1]','VARCHAR(100)') AS Database_ID,
            event_data_XML.value('(event/data[2])[1]','INT') AS OBJECT_ID,
            event_data_XML.value('(event/data[3])[1]','INT') AS object_type,
            event_data_XML.value('(event/data[4])[1]','INT') AS cpu,
            event_data_XML.value('(event/data[5])[1]','INT') AS duration,
            event_data_XML.value('(event/data[6])[1]','INT') AS reads,
            event_data_XML.value('(event/data[7])[1]','INT') AS writes,
            event_data_XML.value('(event/action[1])[1]','VARCHAR(512)') AS sql_text,
            event_data_XML.value('(event/action[2])[1]','VARCHAR(512)') AS tsql_stack,
            CAST(event_data_XML.value('(event/action[2])[1]','VARCHAR(512)') AS XML).value('(frame/@handle)[1]','VARCHAR(50)') AS handle
        FROM (
            SELECT CAST(event_data AS XML) event_data_XML, *
            FROM sys.fn_xe_file_target_read_file ('c:\LongRunningQuery*.xet', 'c:\LongRunningQuery*.xem', NULL, NULL)) T
        GO
        -- Clean up. Drop the event
        DROP EVENT SESSION LongRunningQuery ON SERVER
        GO

    Just run the above script; afterwards you will find the following result set. This result set contains the query that ran over 1000 ms. In our example I used the XML file target, and it does not reset when SQL services or the computer restarts (if you are using the DMV, it resets when SQL services restart). This event session can be very helpful for troubleshooting. Let me know if you want me to write more about Extended Events. I am totally fascinated with this feature, so I'm planning to acquire more knowledge about it so I can determine its other usages.

    Reference: Pinal Dave (http://blog.SQLAuthority.com)

  • Script to UPDATE STATISTICS with time window

    - by Bill Graziano
    I recently spent some time troubleshooting odd query plans and came to the conclusion that we needed better statistics. We've been running sp_updatestats, but apparently it wasn't sampling enough of each table to get us what we needed. I have a pretty limited window at night where I can hammer the disks while this runs. The script below just calls UPDATE STATISTICS on all tables that "need" updating. It defines "need" as any table whose statistics are older than the number of days you specify (30 by default). It also has a throttle so it breaks out of the loop after a set amount of time (60 minutes by default). That means it won't start processing a new table after this time, but it might take longer than this to finish what it's doing. It always processes the oldest statistics first, so it will eventually get to all of them. It defaults to sampling 25% of the table. I'm not sure that's a good default, but it works for now. I've tested this in SQL Server 2005 and SQL Server 2008. I liked the way Michelle parameterized her re-index script and I took the same approach.

    CREATE PROCEDURE dbo.UpdateStatistics (
        @timeLimit     smallint = 60
       ,@debug         bit      = 0
       ,@executeSQL    bit      = 1
       ,@samplePercent tinyint  = 25
       ,@printSQL      bit      = 1
       ,@minDays       tinyint  = 30
    )
    AS
    /*******************************************************************
    Copyright Bill Graziano 2010
    *******************************************************************/
    SET NOCOUNT ON;
    PRINT '[ ' + CAST(GETDATE() AS VARCHAR(100)) + ' ] ' + 'Launching...'

    IF OBJECT_ID('tempdb..#status') IS NOT NULL
        DROP TABLE #status;

    CREATE TABLE #status (
          databaseID     INT
        , databaseName   NVARCHAR(128)
        , objectID       INT
        , page_count     INT
        , schemaName     NVARCHAR(128) NULL
        , objectName     NVARCHAR(128) NULL
        , lastUpdateDate DATETIME
        , scanDate       DATETIME
        CONSTRAINT PK_status_tmp PRIMARY KEY CLUSTERED (databaseID, objectID)
    );

    DECLARE @SQL            NVARCHAR(MAX);
    DECLARE @dbName         NVARCHAR(128);
    DECLARE @databaseID     INT;
    DECLARE @objectID       INT;
    DECLARE @schemaName     NVARCHAR(128);
    DECLARE @objectName     NVARCHAR(128);
    DECLARE @lastUpdateDate DATETIME;
    DECLARE @startTime      DATETIME;
    SELECT @startTime = GETDATE();

    DECLARE cDB CURSOR READ_ONLY FOR
        select [name] from master.sys.databases where database_id > 4

    OPEN cDB
    FETCH NEXT FROM cDB INTO @dbName
    WHILE (@@fetch_status <> -1)
    BEGIN
        IF (@@fetch_status <> -2)
        BEGIN
            SELECT @SQL = '
                use ' + QUOTENAME(@dbName) + '
                select DB_ID() as databaseID
                    , DB_NAME() as databaseName
                    , t.object_id
                    , sum(used_page_count) as page_count
                    , s.[name] as schemaName
                    , t.[name] as objectName
                    , COALESCE(d.stats_date, ''1900-01-01'')
                    , GETDATE() as scanDate
                from sys.dm_db_partition_stats ps
                join sys.tables  t on t.object_id = ps.object_id
                join sys.schemas s on s.schema_id = t.schema_id
                join (
                    SELECT object_id, MIN(stats_date) as stats_date
                    FROM (
                        select object_id, stats_date(object_id, stats_id) as stats_date
                        from sys.stats
                    ) as d
                    GROUP BY object_id
                ) as d on d.object_id = t.object_id
                where ps.row_count > 0
                group by s.[name], t.[name], t.object_id, COALESCE(d.stats_date, ''1900-01-01'')
            '
            SET ANSI_WARNINGS OFF;
            INSERT #status EXEC (@SQL);
            SET ANSI_WARNINGS ON;
        END
        FETCH NEXT FROM cDB INTO @dbName
    END
    CLOSE cDB
    DEALLOCATE cDB

    DECLARE cStats CURSOR READ_ONLY FOR
        SELECT databaseID
             , databaseName
             , objectID
             , schemaName
             , objectName
             , lastUpdateDate
        FROM #status
        WHERE DATEDIFF(dd, lastUpdateDate, GETDATE()) >= @minDays
        ORDER BY lastUpdateDate ASC, page_count DESC, [objectName] ASC

    OPEN cStats
    FETCH NEXT FROM cStats INTO @databaseID, @dbName, @objectID, @schemaName, @objectName, @lastUpdateDate
    WHILE (@@fetch_status <> -1)
    BEGIN
        IF (@@fetch_status <> -2)
        BEGIN
            IF DATEDIFF(mi, @startTime, GETDATE()) > @timeLimit
            BEGIN
                PRINT '[ ' + CAST(GETDATE() AS VARCHAR(100)) + ' ] ' + '*** Time Limit Reached ***';
                GOTO __DONE;
            END
            SELECT @SQL = 'UPDATE STATISTICS '
                + QUOTENAME(@dbName) + '.' + QUOTENAME(@schemaName) + '.' + QUOTENAME(@objectName)
                + ' WITH SAMPLE ' + CAST(@samplePercent AS NVARCHAR(100)) + ' PERCENT;';
            IF @printSQL = 1
                PRINT '[ ' + CAST(GETDATE() AS VARCHAR(100)) + ' ] ' + @SQL
                    + ' (Last Updated: ' + CAST(@lastUpdateDate AS VARCHAR(100)) + ')'
            IF @executeSQL = 1
            BEGIN
                EXEC (@SQL);
            END
        END
        FETCH NEXT FROM cStats INTO @databaseID, @dbName, @objectID, @schemaName, @objectName, @lastUpdateDate
    END

    __DONE:
    CLOSE cStats
    DEALLOCATE cStats
    PRINT '[ ' + CAST(GETDATE() AS VARCHAR(100)) + ' ] ' + 'Completed.'
    GO
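
    For reference, a minimal call might look like the sketch below. The parameter values (a 90-minute window, a 50 percent sample) are only illustrative, not the author's recommendations; they simply exercise the parameters the procedure defines.

    -- Hedged usage sketch: a longer window and heavier sample than the
    -- defaults; adjust both to fit your own maintenance window.
    EXEC dbo.UpdateStatistics
         @timeLimit     = 90   -- stop starting new tables after 90 minutes
        ,@samplePercent = 50   -- sample half of each table
        ,@printSQL      = 1    -- echo each UPDATE STATISTICS statement
        ,@executeSQL    = 1;   -- set to 0 for a dry run that only prints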

    Read the article

  • sp_send_dbmail attach files stored as varbinary in database

    - by Mindstorm Interactive
    I have a two-part question relating to sending query results as attachments using sp_send_dbmail.

    Problem 1: Only basic .txt files will open. Any other format, like .pdf or .jpg, arrives corrupted.

    Problem 2: When attempting to send multiple attachments, I receive one file with all the file names glued together.

    I'm running SQL Server 2005 and I have a table storing uploaded documents:

    CREATE TABLE [dbo].[EmailAttachment](
        [EmailAttachmentID] [int] IDENTITY(1,1) NOT NULL,
        [MassEmailID] [int] NULL, -- foreign key
        [FileData] [varbinary](max) NOT NULL,
        [FileName] [varchar](100) NOT NULL,
        [MimeType] [varchar](100) NOT NULL
    )

    I also have a MassEmail table with standard email stuff. Here is the SQL send mail script. For brevity, I've excluded the DECLARE statements.

    while ((select count(MassEmailID) from MassEmail where status = 20) > 0)
    begin
        select @MassEmailID = Min(MassEmailID) from MassEmail where status = 20
        select @Subject = [Subject] from MassEmail where MassEmailID = @MassEmailID
        select @Body = Body from MassEmail where MassEmailID = @MassEmailID

        set @query = 'set nocount on;
            select cast(FileData as varchar(max))
            from Mydatabase.dbo.EmailAttachment
            where MassEmailID = ' + CAST(@MassEmailID as varchar(100))

        select @filename = ''
        select @filename = COALESCE(@filename + ',', '') + FileName
        from EmailAttachment
        where MassEmailID = @MassEmailID

        exec msdb.dbo.sp_send_dbmail
            @profile_name = 'MASS_EMAIL',
            @recipients = '[email protected]',
            @subject = @Subject,
            @body = @Body,
            @body_format = 'HTML',
            @query = @query,
            @query_attachment_filename = @filename,
            @attach_query_result_as_file = 1,
            @query_result_separator = '; ',
            @query_no_truncate = 1,
            @query_result_header = 0;

        update MassEmail
        set status = 30, SendDate = GetDate()
        where MassEmailID = @MassEmailID
    end

    I am able to successfully read files from the database, so I know the binary data is not corrupted. .txt files only open when I cast FileData to varchar, but clearly the original headers are lost. It's also worth noting that the attachment file sizes differ from the original files, most likely due to improper encoding as well. So I'm hoping there's a way to create file headers using the stored MIME type, or some way to include file headers in the binary data? I'm also not confident in the values of the last few parameters, and I know the COALESCE is not quite right, because it prepends the first file name with a comma. But good documentation is nearly impossible to find. Please help!
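
    On the COALESCE point specifically, the leading comma comes from seeding @filename with an empty string instead of NULL. Below is a minimal sketch of the usual list-building idiom; it only fixes the delimiter problem the asker describes, and the ';' separator is an assumption chosen to echo the script's @query_result_separator. It does not address whether sp_send_dbmail will actually split the query result into separate attachments.

    -- Hedged sketch: seed with NULL so COALESCE drops the separator
    -- before the first name (NULL + ';' is NULL, so COALESCE yields '').
    DECLARE @filename varchar(max);
    SELECT @filename = NULL;
    SELECT @filename = COALESCE(@filename + ';', '') + [FileName]
    FROM EmailAttachment
    WHERE MassEmailID = @MassEmailID;
    -- @filename is now 'a.pdf;b.jpg;...' with no leading delimiter.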

    Read the article

  • Need help with NHibernate / Fluent NHibernate mapping

    - by Mark Boltuc
    Let's say you have the following table structure:

                            ==============================
                            | Case                       |
                            ==============================
                            | Id           | int         |
                            | ReferralType | varchar(10) |
               +------------| ReferralId   | int         |------------+
               |            ==============================            |
               |                          |                           |
    ======================  ======================  ======================
    | SourceA            |  | SourceB            |  | SourceC            |
    ======================  ======================  ======================
    | Id   | int         |  | Id   | int         |  | Id   | int         |
    | Name | varchar(50) |  | Name | varchar(50) |  | Name | varchar(50) |
    ======================  ======================  ======================

    Based on the ReferralType, the ReferralId contains the id of a row in SourceA, SourceB, or SourceC. I'm trying to figure out how to map this using Fluent NHibernate or just plain NHibernate into an object model. I've tried a bunch of different things but I haven't been successful. Any ideas? The object model might be something like:

    public class Case
    {
        public int Id { get; set; }
        public Referral Referral { get; set; }
    }

    public class Referral
    {
        public string Type { get; set; }
        public int Id { get; set; }
        public string Name { get; set; }
    }

    Read the article

  • How do I eliminate Error 3002?

    - by Andrew
    Say I have the following table definitions in SQL Server 2008:

    CREATE TABLE Person (
        PersonId INT IDENTITY NOT NULL PRIMARY KEY,
        Name VARCHAR(50) NOT NULL,
        ManyMoreIrrelevantColumns VARCHAR(MAX) NOT NULL
    )

    CREATE TABLE Model (
        ModelId INT IDENTITY NOT NULL PRIMARY KEY,
        ModelName VARCHAR(50) NOT NULL,
        Description VARCHAR(200) NULL
    )

    CREATE TABLE ModelScore (
        ModelId INT NOT NULL REFERENCES Model (ModelId),
        Score INT NOT NULL,
        Definition VARCHAR(100) NULL,
        PRIMARY KEY (ModelId, Score)
    )

    CREATE TABLE PersonModelScore (
        PersonId INT NOT NULL REFERENCES Person (PersonId),
        ModelId INT NOT NULL,
        Score INT NOT NULL,
        PRIMARY KEY (PersonId, ModelId),
        FOREIGN KEY (ModelId, Score) REFERENCES ModelScore (ModelId, Score)
    )

    The idea here is that each Person may have only one ModelScore per Model, but each Person may have a score for any number of defined Models. As far as I can tell, this SQL should enforce these constraints naturally. The ModelScore has a particular "meaning," which is contained in the Definition. Nothing earth-shattering there.

    Now, I try translating this into Entity Framework using the designer. After updating the model from the database and doing some editing, I have a Person object, a Model object, and a ModelScore object. PersonModelScore, being a join table, is not an object but rather is included as an association with some other name (let's say ModelScorePersonAssociation). The mapping details for the association are as follows:

    - Association
      - Maps to PersonModelScore
      - ModelScore
          ModelId : Int32 <=> ModelId : int
          Score   : Int32 <=> Score   : int
      - Person
          PersonId : Int32 <=> PersonId : int

    On the right-hand side, the ModelId and PersonId values have primary key symbols, but the Score value does not. Upon compilation, I get:

    Error 3002: Problem in Mapping Fragment starting at line 5190: Potential runtime violation of table PersonModelScore's keys (PersonModelScore.ModelId, PersonModelScore.PersonId): Columns (PersonModelScore.PersonId, PersonModelScore.ModelId) are mapped to EntitySet ModelScorePersonAssociation's properties (ModelScorePersonAssociation.Person.PersonId, ModelScorePersonAssociation.ModelScore.ModelId) on the conceptual side but they do not form the EntitySet's key properties (ModelScorePersonAssociation.ModelScore.ModelId, ModelScorePersonAssociation.ModelScore.Score, ModelScorePersonAssociation.Person.PersonId).

    What have I done wrong in the designer or otherwise, and how can I fix the error? Many thanks!
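
    One direction to experiment with, purely as a hedged sketch: the error says the store key (PersonId, ModelId) is narrower than the association's conceptual key (ModelId, Score, PersonId). Widening the table's primary key to include Score makes the two sides agree, and the original one-score-per-model rule can then be re-imposed with a unique constraint. (The other common route is to surface PersonModelScore as its own entity rather than a pure association.) The constraint names below are hypothetical, since the originals are system-generated:

    -- Hedged workaround sketch, not the only fix: align the store key
    -- with the association's conceptual key, then restore the business
    -- rule with a UNIQUE constraint. 'PK_PersonModelScore' is a
    -- hypothetical name; look up the real generated name before running.
    ALTER TABLE PersonModelScore DROP CONSTRAINT PK_PersonModelScore;
    ALTER TABLE PersonModelScore
        ADD CONSTRAINT PK_PersonModelScore PRIMARY KEY (PersonId, ModelId, Score);
    ALTER TABLE PersonModelScore
        ADD CONSTRAINT UQ_PersonModelScore_OneScorePerModel UNIQUE (PersonId, ModelId);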

    Read the article
