Search Results

Search found 28900 results on 1156 pages for 'sql 2005'.


  • SQL Binary Microsoft Access - Combining two tables if specific field values are equal

    - by Jordan
    I am new to Microsoft Access and SQL but have a decent programming background and I believe this problem should be relatively simple. I have two tables that I have imported into Access. I will give you a little context. One table is huge and contains generic, global data. The other table is still big but contains specific, regional data. There is only one common field (or column) between the two tables. Let’s call this common field CF. The other fields in both tables are different. I’ll take you through one iteration of what I need to do. I need to take each CF value in the regional, smaller table and find the common CF value in the larger, global table. After finding the match, I need to take the whole “record” or “row” from the global data and copy it over to the corresponding record in the smaller regional table (This should involve creating the new fields). I need to do this for all CF values in the regional, smaller table. I was recommended to use SQL and a binary search, but I am unfamiliar. Let me know if you have any questions. I appreciate the help!
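    In SQL this lookup is expressed as a join rather than a binary search; the database engine handles the matching for you. A minimal sketch, assuming the tables are named Regional and Global (hypothetical names) and CF is the shared column; SELECT ... INTO creates a new combined table, fields and all:

        SELECT Regional.*, Global.*
        INTO RegionalCombined
        FROM Regional
        INNER JOIN Global ON Regional.CF = Global.CF;

    The Access query designer can build the same thing visually as a make-table query, which avoids adding the global fields to the regional table by hand.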

    Read the article

  • Get a unique data in a SQL query

    - by Jensen
    Hi, I have a database that contains data in this form: icon(name, size, tag)

        (myicon.png, 16, 'twitter')
        (myicon.png, 32, 'twitter')
        (myicon.png, 128, 'twitter')
        (myicon.png, 256, 'twitter')
        (anothericon.png, 32, 'facebook')
        (anothericon.png, 128, 'facebook')
        (anothericon.png, 256, 'facebook')

    As you can see, the name field is not unique: I can have multiple icons with the same name, distinguished by the size field. Now in PHP I have a query that gets ONE icon per set, for example:

        $dbQueryIcons = mysql_query("SELECT * FROM pl_icon WHERE tag LIKE '%".$SEARCH_QUERY."%' GROUP BY name ORDER BY id DESC LIMIT ".$firstEntry.", ".$CONFIG['icon_per_page']."") or die(mysql_error());

    With this example, if $tag contains 'twitter' it will show ONLY the first entry with the tag 'twitter', i.e. (myicon.png, 16, 'twitter'). This is what I want, but I would prefer the 128 size by default. Is it possible to tell SQL to send me only the 128 size when it exists, and another size otherwise? In another question someone gave me a solution with GROUP BY, but that doesn't work here because we already GROUP BY name, and if I delete the GROUP BY it shows me all sizes of the same icons. Thanks!
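    One portable way is to pick the preferred size inside an aggregate, rather than relying on which row GROUP BY happens to keep. A sketch, assuming the pl_icon table from the question; COALESCE falls back to the largest available size when no 128 row exists:

        SELECT name,
               COALESCE(MAX(CASE WHEN size = 128 THEN size END), MAX(size)) AS size
        FROM pl_icon
        WHERE tag LIKE '%twitter%'
        GROUP BY name;

    Joining this result back to pl_icon on (name, size) then retrieves the full row for each chosen icon.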

    Read the article

  • Optimize SQL connection?

    - by user1484035
    I am building a multi-page web project in HTML and JavaScript that is constantly reading from AND writing to a SQL database. I can connect to the database and successfully run my project with this type of connection:

        var connection = new ActiveXObject("ADODB.Connection");
        var connectionstring = "Data Source=<server>;Initial Catalog=<catalog>;User ID=<user>;Password=<password>;Provider=SQLOLEDB";
        connection.Open(connectionstring);
        var rs = new ActiveXObject("ADODB.Recordset");
        rs.Open("SELECT * FROM table", connection);
        rs.MoveFirst
        while(!rs.eof) {
            document.write(rs.fields(1));
            rs.movenext;
        }
        rs.close;
        connection.close;

    Works great and runs fine. BUT, the first five lines (from var connection = to var rs =) cause the whole browser to freeze for a few seconds while the connection is established. I need to speed that up, since I am constantly connecting to the database throughout my project. Is there a more effective way of connecting to a SQL database, or is my computer just bad and this should run faster?

    Read the article

  • SQL return ORDER BY result as an array

    - by EarthMind
    Is it possible to return groups as an associative array? I'd like to know if a pure SQL solution is possible. Note that I realise I could be making things more complex than necessary, but this is mainly to give me an idea of the power of SQL. My problem: I have a list of words in the database that should be sorted alphabetically and grouped into separate groups according to the first letter of the word. For example:

        ape
        broom
        coconut
        banana
        apple

    should be returned as

        array(
            'a' => 'ape', 'apple',
            'b' => 'banana', 'broom',
            'c' => 'coconut'
        )

    so I can easily create sorted lists by first letter (i.e. clicking "A" only shows words starting with a, "B" with b, etc.). This should make it easier for me to load everything in one query and make the sorted list JavaScript-based, i.e. without having to reload the page (or use AJAX). Side notes: I'm using PostgreSQL, but a solution for MySQL would be fine too so I can try to port it to PostgreSQL. Scripting language is PHP.
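    SQL returns flat rows rather than nested arrays, so the usual approach is to select the grouping key alongside each word and fold the rows into an array in PHP. A sketch, assuming a hypothetical table words(word); SUBSTR and LOWER work in both PostgreSQL and MySQL:

        SELECT LOWER(SUBSTR(word, 1, 1)) AS first_letter, word
        FROM words
        ORDER BY first_letter, word;

    In the PHP fetch loop, appending each row with $result[$row['first_letter']][] = $row['word'] (hypothetical variable names) builds exactly the structure sketched above, in one query.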

    Read the article

  • SQL aggregate query question

    - by Phil
    Hi, Can anyone help me with a SQL query in Apache Derby to get a "simple" count? Given a table ABC that looks like this:

        id  a  b  c
        1   1  1  1
        2   1  1  2
        3   2  1  3
        4   2  1  1
        5   2  1  2  **
        6   2  2  1  **
        7   3  1  2
        8   3  1  3
        9   3  1  1

    How can I write a query to count how many distinct values of 'a' have both (b=1 and c=2) AND (b=2 and c=1), to get the correct result of 1? (The two rows marked match the criteria and both have a value of a=2; there is only 1 distinct value of a in this table that matches the criteria.) The tricky bit is that (b=1 and c=2) AND (b=2 and c=1) are obviously mutually exclusive when applied to a single row, so how do I apply that expression across multiple rows of distinct values for a? These queries are wrong, but they illustrate what I'm trying to do:

        "SELECT DISTINCT COUNT(a) WHERE b=1 AND c=2 AND b=2 AND c=1 ..."
        -- (0) no go, as the conditions are mutually exclusive

        "SELECT DISTINCT COUNT(a) WHERE b=1 AND c=2 OR b=2 AND c=1 ..."
        -- (3) gets me the wrong result

        SELECT COUNT(a) (CASE WHEN b=1 AND c=10 THEN 1 END) FROM ABC WHERE b=2 AND c=1
        -- (0) no go, as the conditions are mutually exclusive

    Cheers, Phil.
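    One approach that works in Derby is to group by a and demand that each pattern occurs at least once within the group, then count the surviving groups. A sketch using the ABC table from the question:

        SELECT COUNT(*) AS matching_a
        FROM (
            SELECT a
            FROM ABC
            GROUP BY a
            HAVING SUM(CASE WHEN b = 1 AND c = 2 THEN 1 ELSE 0 END) > 0
               AND SUM(CASE WHEN b = 2 AND c = 1 THEN 1 ELSE 0 END) > 0
        ) AS matches

    Each SUM counts the rows in the group matching one pattern, so the HAVING clause keeps only groups where both patterns appear. Against the sample data only a = 2 satisfies both conditions, so the outer count returns 1.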

    Read the article

  • ASP.net MVC Linq-To-SQL Extended Class Field Binding

    - by user336858
    Hi there, The short version of this question is "Is there a way to get automatic View Object binding for fields defined in a partial class for a Linq-To-SQL generated class?" Apologies if it's been asked before. Example: suppose I have a typical MVC setup with the tables Posts {PostID, ...} and Categories {CategoryID, ...}. A post can have more than one category, and a category can identify more than one post. Thus suppose further that I need an extra table, PostCategories {PostID, CategoryID, ...}, to handle the many-to-many relationship between posts and categories. As far as I know, there's no way to do this in Linq-to-SQL right now, so I have to shoehorn it in by adding a partial Post class to the project to add that functionality. Something like:

        public partial class Post
        {
            public IEnumerable<Category> Categories { get { ... } set { ... } }
        }

    So here's my question: if a user is accessing my MVC application front-end and begins editing a Post object, they might enter an invalid category. When the server recognizes the invalid input, the usual practice is to return the faulty object to the original view for re-editing, along with some error messages. The fields in the edit page are re-populated with the provided values. However, I don't know how to get this mechanism to work with the properties I created with the partial class as shown above. Any terminology, links, or tips you can provide would be tremendously helpful!

    Read the article

  • SQL: find entries in 1:n relation that don't comply with condition spanning multiple rows

    - by milianw
    I'm trying to optimize SQL queries in Akonadi and came across the following problem, which is apparently not easy to solve with SQL, at least for me. Assume the following table structure (should work in SQLite, PostgreSQL, MySQL):

        CREATE TABLE a (
            a_id INT PRIMARY KEY
        );
        INSERT INTO a (a_id) VALUES (1), (2), (3), (4);

        CREATE TABLE b (
            b_id INT PRIMARY KEY,
            a_id INT,
            name VARCHAR(255) NOT NULL
        );
        INSERT INTO b (b_id, a_id, name) VALUES
            (1, 1, 'foo'), (2, 1, 'bar'), (3, 1, 'asdf'),
            (4, 2, 'foo'), (5, 2, 'bar'),
            (6, 3, 'foo');

    Now my problem is to find entries in a that are missing name entries in table b. E.g. I need to make sure each entry in a has at least the name entries "foo" and "bar" in table b. Hence the query should return something similar to:

        a_id = 3 is missing name "bar"
        a_id = 4 is missing names "foo" and "bar"

    Since both tables are potentially huge in Akonadi, performance is of utmost importance. One solution in MySQL would be:

        SELECT a.a_id, CONCAT('|', GROUP_CONCAT(name ORDER BY NAME ASC SEPARATOR '|'), '|') as names
        FROM a
        LEFT JOIN b USING( a_id )
        GROUP BY a.a_id
        HAVING names IS NULL OR names NOT LIKE '%|bar|foo|%';

    I have yet to measure the performance tomorrow, but I severely doubt it's any fast for tens of thousands of entries in a and thrice as many in b. Furthermore we want to support SQLite and PostgreSQL, where to my knowledge the GROUP_CONCAT function is not available. Thanks, good night.
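    A portable alternative (SQLite, PostgreSQL and MySQL all support it) is to build the required names as a derived table, cross join it with a, and keep only the combinations that have no match in b. A sketch using the tables above:

        SELECT a.a_id, req.name AS missing_name
        FROM a
        CROSS JOIN (SELECT 'foo' AS name UNION ALL SELECT 'bar') AS req
        LEFT JOIN b ON b.a_id = a.a_id AND =
        WHERE b.b_id IS NULL;

    This returns one row per missing (a_id, name) pair - (3, 'bar'), (4, 'foo') and (4, 'bar') for the sample data - and, unlike the GROUP_CONCAT approach, it can use an index on b(a_id, name) instead of aggregating every row of b.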

    Read the article

  • ASP.net MVC3 howto edit sql database

    - by user1662380
    MovieStoreEntities MovieDb = new MovieStoreEntities();

        public ActionResult Edit(int id)
        {
            //var EditMovie1 = MovieDb
            AddMovieModel EditMovie = (from M in MovieDb.Movies
                                       from C in MovieDb.Categories
                                       where M.CategoryId == C.Id
                                       where M.Id == id
                                       select new AddMovieModel
                                       {
                                           Name = M.Name,
                                           Director = M.Director,
                                           Country = M.Country,
                                           categorie = C.CategoryName,
                                           Category = M.CategoryId
                                       }).FirstOrDefault();

            //AddMovieModel EditMovie1 = MovieDb.Movies.Where(m => m.Id == id).Select(m => new AddMovieModel { m.Id }).First();

            List<CategoryModel> categories = MovieDb.Categories
                .Select(category => new CategoryModel { Category = category.CategoryName, id = category.Id })
                .ToList();
            ViewBag.Category = new SelectList(categories, "Id", "Category");
            return View(EditMovie);
        }

        // POST: /Default1/Edit/5
        [HttpPost]
        public ActionResult Edit(AddMovieModel Model2)
        {
            List<CategoryModel> categories = MovieDb.Categories
                .Select(category => new CategoryModel { Category = category.CategoryName, id = category.Id })
                .ToList();
            ViewBag.Category = new SelectList(categories, "Id", "Category");

            if (ModelState.IsValid)
            {
                //MovieStoreEntities model = new MovieStoreEntities();
                MovieDb.SaveChanges();
                return View("Thanks2", Model2);
            }
            else
                return View();
        }

    This is the code I wrote to edit movie details and update the database on the SQL server. It doesn't produce any compile errors, but it doesn't update the SQL Server database.

    Read the article

  • How to add "missing" columns in a column group in reporting services?

    - by Gimly
    Hello, I'm trying to create a report that displays, for each month of the year, the quantity of goods sold. I have a query that returns the list of goods sold for each month; it looks something like this:

        SELECT Seller.FirstName, Seller.LastName, SellingHistory.Month, SUM(SellingHistory.QuantitySold)
        FROM SellingHistory
        JOIN Seller ON SellingHistory.SellerId = Seller.SellerId
        WHERE SellingHistory.Year = @Year
        GROUP BY Seller.FirstName, Seller.LastName, SellingHistory.Month

    What I want to do is display a report that has a column for each month, plus a total column, showing for each Seller the quantity sold:

        Seller Name | Jan | Feb | Mar | Apr | May | Jun | Jul | Aug | Sep | Oct | Nov | Dec | Total

    What I managed to do, using a matrix and a column group (grouped on Month), is display the columns for existing data: if I have data from January to March, it displays the first 3 columns and the total. What I would like to do is always display all the columns. I thought about doing that by adding the missing months in the SQL query, but that feels a bit clumsy, and I'm sure there must be a cleaner solution, as this must be a fairly common need. Thanks. PS: I'm using SQL Server Express 2008.
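    Padding the result set on the SQL side is one workable option. A sketch for SQL Server 2008, using the tables from the query above: cross join sellers to a fixed list of month numbers so every month appears even when nothing was sold (this assumes Month is stored as a number from 1 to 12):

        SELECT s.FirstName, s.LastName, m.MonthNumber,
               COALESCE(SUM(sh.QuantitySold), 0) AS QuantitySold
        FROM Seller s
        CROSS JOIN (VALUES (1),(2),(3),(4),(5),(6),(7),(8),(9),(10),(11),(12)) AS m(MonthNumber)
        LEFT JOIN SellingHistory sh
            ON sh.SellerId = s.SellerId
           AND sh.Month = m.MonthNumber
           AND sh.Year = @Year
        GROUP BY s.FirstName, s.LastName, m.MonthNumber;

    The matrix column group then always sees all twelve month values, so all twelve columns render regardless of the data.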

    Read the article

  • sql query selecting one name no matter how many rows it was mentioned in

    - by Baruch
    Basically what I'm trying to do is get each value from column x no matter how many times it is mentioned. That means if I have this kind of table:

        x     | y     | z
        ------+-------+--------
        hello | one   | bye
        hello | two   | goodbye
        hi    | three | see you

    I'm trying to create a query that gets all of the names mentioned in the x column, without duplicates, and puts them into a select list. My goal is a select list with TWO, not THREE, options: hello and hi. This is what I have so far, which isn't working. Hope you guys know the answer:

        function getList() {
            $options = "<select id='names' style='margin-right:40px;'>";
            $c_id = $_SESSION['id'];
            $sql = "SELECT * FROM names";
            $result = mysql_query($sql);
            $options .= "<option value='blank'>-- Select something --</option>";
            while ($row = mysql_fetch_array($result)) {
                $name = $row["x"];
                $options .= "<option value='$name'>$name</option>";
            }
            $options .= "</SELECT>";
            return "$options";
        }

    Sorry for confusing... I edited my source.
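    DISTINCT is the standard way to collapse repeated values. A minimal sketch against the names table from the question:

        SELECT DISTINCT x FROM names ORDER BY x;

    Swapping this in for "SELECT * FROM names" in the function above means the loop emits each name once, giving the two-option list: hello and hi.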

    Read the article

  • Beginner having difficulty with SQL query

    - by Vulcanizer
    Hi, I've been studying SQL for 2 weeks now and I'm preparing for an SQL test. Anyway, I'm trying to do this question. For the table:

        create table data (
            id int,
            n1 int not null,
            n2 int not null,
            n3 int not null,
            n4 int not null,
            primary key (id)
        )

    I need to return the relation with tuples (n1, n2, n3) where all the corresponding values for n4 are 0. The problem asks me to solve it WITHOUT using subqueries (nested selects/views). It also gives me an example table and the expected output from my query:

        insert into data (id, n1, n2, n3, n4)
        values (1, 2,4,7,0),
               (2, 2,4,7,0),
               (3, 3,6,9,8),
               (4, 1,1,2,1),
               (5, 1,1,2,0),
               (6, 1,1,2,0),
               (7, 5,3,8,0),
               (8, 5,3,8,0),
               (9, 5,3,8,0);

    This expects (2,4,7) and (5,3,8), but not (1,1,2), since that has a 1 in n4 in one of the cases. The best I could come up with was:

        SELECT DISTINCT n1, n2, n3
        FROM data a, data b
        WHERE a.ID <> b.ID
          AND a.n1 = b.n1
          AND a.n2 = b.n2
          AND a.n3 = b.n3
          AND a.n4 = b.n4
          AND a.n4 = 0

    but I found out that this also prints (1,1,2), since in the example (1,1,2,0) happens twice, for IDs 5 and 6. Any suggestions would be really appreciated.
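    A self-join can express "no row in this (n1, n2, n3) group has a nonzero n4" without any subquery: left join each row to any row in the same group whose n4 is nonzero, and keep only the rows that find none. A sketch:

        SELECT DISTINCT a.n1, a.n2, a.n3
        FROM data a
        LEFT JOIN data b
          ON  a.n1 = b.n1
          AND a.n2 = b.n2
          AND a.n3 = b.n3
          AND b.n4 <> 0
        WHERE b.id IS NULL;

    Against the sample data this returns (2,4,7) and (5,3,8); the (1,1,2) group is eliminated because id 4 has n4 = 1, so every (1,1,2) row finds a nonzero match.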

    Read the article

  • Write out to text file using T-SQL

    - by sasfrog
    I am creating a basic data transfer task using TSQL where I am retrieving certain records from one database that are more recent than a given datetime value, and loading them into another database. This will happen periodically throughout the day. It's such a small task that SSIS seems like overkill - I want to just use a scheduled task which runs a .sql file. Where I need guidance is that I need to persist the datetime from the last run of this task, then use this to filter the records next time the task runs. My initial thought is to just store the datetime in a text file, and update (overwrite) it as part of the task each time it runs. I can read the file in without problems using T-SQL, but writing back out has got me stuck. I've seen plenty of examples which make use of a dynamically-built bcp command, which is then executed using xp_cmdshell. Trouble is, security on the server I'm deploying to precludes the use of xp_cmdshell. So, my question is, are there other ways to simply write a datetime value to a file using TSQL, or should I be thinking about a different approach? EDIT: happy to be corrected about SSIS being "overkill"...
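    One alternative when writing to the file system is blocked: persist the watermark in a small table instead of a text file. A sketch, with hypothetical object names:

        CREATE TABLE dbo.TransferWatermark (LastRun datetime NOT NULL);
        INSERT INTO dbo.TransferWatermark (LastRun) VALUES ('20100101');

        -- in the scheduled .sql file:
        DECLARE @LastRun datetime, @ThisRun datetime;
        SELECT @LastRun = LastRun FROM dbo.TransferWatermark;
        SET @ThisRun = GETDATE();

        -- ... transfer rows with a timestamp newer than @LastRun ...

        UPDATE dbo.TransferWatermark SET LastRun = @ThisRun;

    Capturing @ThisRun before the transfer and writing that value back, rather than calling GETDATE() at the end, avoids missing rows that arrive while the task is running.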

    Read the article

  • Stop SQL returning the same result twice in a JOIN

    - by nbs189
    I have joined together several tables to get the data I want, but since I am new to SQL I can't figure out how to stop data being returned more than once. Here's the SQL statement:

        SELECT T.url, T.ID, S.status, S.ID, E.action, E.ID, E.timestamp
        FROM tracks T, status S, events E
        WHERE S.ID AND T.ID = E.ID
        ORDER BY E.timestamp DESC

    The data that is returned is something like this:

        +------------------------------------------------------------------+
        | URL | ID | Status | ID | action                | ID | timestamp  |
        +------------------------------------------------------------------+
        | T.1 | 4  | hello  | 4  | has uploaded a track  | 4  | time       |
        | T.2 | 3  | bye    | 3  | has some news         | 3  | time       |
        | t.1 | 4  | more   | 4  | has some news         | 4  | time       |
        +------------------------------------------------------------------+

    That's a very basic example, but it does outline what happens. If you look at the third row, the URL is repeated when there is a different status. This is what I want to happen:

        +--------------------------------------------------------+
        | URL or Status | ID | action                | timestamp |
        +--------------------------------------------------------+
        | T.1           | 4  | has uploaded a track  | time      |
        | hello         | 3  | has some news         | time      |
        | bye           | 4  | has some news         | time      |
        +--------------------------------------------------------+

    Please notice that the URL (in this case the mock one is T.1) is shown when the action is 'has uploaded a track'. This is very important. The action in the events table is inserted by a trigger when a status or track is inserted: if a new track is inserted the action is 'has uploaded a track', and you can guess what it is for a status. The ID and timestamp are also inserted into the events table at that point. Note: three more tables actually go into the query, but I have left them out for simplicity.
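    Since each event row needs either a track URL or a status text depending on its action, one way is a UNION ALL of two properly joined queries instead of a single three-table join. A sketch, assuming (as in the question) that events.ID matches the ID of the track or status that raised it:

        SELECT T.url AS item, E.ID, E.action, E.timestamp
        FROM events E
        JOIN tracks T ON T.ID = E.ID
        WHERE E.action = 'has uploaded a track'

        UNION ALL

        SELECT S.status, E.ID, E.action, E.timestamp
        FROM events E
        JOIN status S ON S.ID = E.ID
        WHERE E.action <> 'has uploaded a track'

        ORDER BY timestamp DESC;

    Note that the original condition "WHERE S.ID AND T.ID = E.ID" only tests that S.ID is nonzero rather than joining status to events, which is what produces the duplicated combinations.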

    Read the article

  • Manipulating data in sql / asp.net / c# - how?

    - by SLC
    Not sure how to word the question... Basically, so far all my SQL stuff has been stored procedures dumped into a GridView. The odd case where I had to perform an action based on a value (such as highlighting a row green if a certain value was true) was handled as the GridView was rendering, in one of the overrides. Now, however, I have to do something far more complicated - pull three sets of data down, run a series of checks on all three plus some date-related checks and such, then populate a GridView with some of the items. In logic terms, I want to run three queries, store the lists of results (presumably in Lists?), then run some logic, then populate the GridView. Specifically, what I don't know how to do is: the best way of pulling the data and putting it into a List or other data structure that lets me easily run through it and retrieve data by column (myList.age, or more likely, myList["Age"]). Once I have compared the data, I assume I create a new list that will be put into the GridView... how do I put the contents of a list INTO a GridView? How would I add other things such as buttons or checkboxes at the same time? Any nudge in the right direction would be appreciated! Particularly doing cool stuff with lists and SQL (if there is anything cool you can do with them).

    Read the article

  • Why could "insert (...) values (...)" not insert a new row?

    - by nang
    Hi, I have a simple SQL insert statement of the form: insert into MyTable (...) values (...) It is used repeatedly to insert rows and usually works as expected. It inserts exactly 1 row into MyTable, which is also the value returned by the Delphi statement AffectedRows:= myInsertADOQuery.ExecSQL. After some time there was a temporary network connectivity problem. As a result, other threads of the same application perceived EOleExceptions (Connection failure, -2147467259 = unspecified error). Later, the network connection was reestablished, these threads reconnected and were fine. The thread responsible for executing the insert statement described above, however, did not perceive the connectivity problems (no exceptions) - probably it was simply not executed while the network was down. But after the network connectivity problems, myInsertADOQuery.ExecSQL always returned 0 and no rows were inserted into MyTable anymore. After a restart of the application the insert statement worked again as expected. For SQL Server, is there any defined case where an insert statement like the one above would not insert a row and return 0 as the number of affected rows? The primary key is an autogenerated GUID. There are no unique or check constraints (which would result in an exception anyway, rather than not inserting a row). Are there any known ADO bugs (Provider=SQLOLEDB.1)? Any other explanations for this behaviour? Thanks, Nang.

    Read the article

  • Row Number Transformation

    The Row Number Transformation calculates a row number for each row, and adds this as a new output column to the data flow. The row number is a sequential number, based on a seed value. Each row receives the next number in the sequence, based on the defined increment value. The final row number can be stored in a variable for later analysis, and can be used as part of a process to validate the integrity of the data movement. The Row Number transform has a variety of uses, such as generating surrogate keys, or as the basis for a data partitioning scheme when combined with the Conditional Split transformation.

    Properties

        Property         Data Type   Description
        Seed             Int32       The first row number, or seed value.
        Increment        Int32       The value added to the previous row number to make the next row number.
        OutputVariable   String      The name of the variable into which the final row number is written post execution. (Optional)

    The three properties have been configured to support expressions, or they can be set directly in the normal manner. Expressions on components are only visible on the hosting Data Flow task, not at the individual component level. Sometimes the data type of a property is incorrectly set when the properties are created; see the Troubleshooting section below for details on how to fix this.

    Installation

    The component is provided as an MSI file which you can download and run to install it. This simply places the files on disk in the correct locations and also installs the assemblies in the Global Assembly Cache as per Microsoft's recommendations. You may need to restart the SQL Server Integration Services service, as this caches information about what components are installed, as well as restarting any open instances of Business Intelligence Development Studio (BIDS) / Visual Studio that you may be using to build your SSIS packages.

    For 2005/2008 only - finally, you will have to add the transformation to the Visual Studio toolbox manually. Right-click the toolbox, and select Choose Items.... Select the SSIS Data Flow Items tab, and then check the Row Number transformation in the Choose Toolbox Items window. This process has been described in detail in the related FAQ entry for How do I install a task or transform component?

    We recommend you follow best practice and apply the current Microsoft SQL Server Service Pack to your SQL Server servers and workstations; this component requires a minimum of SQL Server 2005 Service Pack 1.

    Downloads

    The Row Number Transformation is available for SQL Server 2005, SQL Server 2008 (includes R2) and SQL Server 2012. Please choose the version to match your SQL Server version, or you can install multiple versions and use them side by side if you have more than one version of SQL Server installed.

        Row Number Transformation for SQL Server 2005
        Row Number Transformation for SQL Server 2008
        Row Number Transformation for SQL Server 2012

    Version History

    SQL Server 2012
    Version 3.0.0.6 - SQL Server 2012 release. Includes upgrade support for both 2005 and 2008 packages to 2012. (5 Jun 2012)

    SQL Server 2008
    Version 2.0.0.5 - SQL Server 2008 release. (15 Oct 2008)

    SQL Server 2005
    Version 1.2.0.34 - Updated installer. (25 Jun 2008)
    Version 1.2.0.7 - SQL Server 2005 RTM Refresh. SP1 Compatibility Testing. Added the ability to reuse an existing column to hold the generated row number, as an alternative to the default of adding a new column to the output. (18 Jun 2006)
    Version 1.0.0.0 - Public Release for SQL Server 2005 IDW 15 June CTP (29 Aug 2005)

    Code Sample

    The following code sample demonstrates using the Data Generator Source and Row Number Transformation programmatically in a very simple package.

        Package package = new Package();
        package.Name = "Data Generator & Row Number";

        // Add the Data Flow Task
        Executable taskExecutable = package.Executables.Add("STOCK:PipelineTask");

        // Get the task host wrapper, and the Data Flow task
        TaskHost taskHost = taskExecutable as TaskHost;
        MainPipe dataFlowTask = (MainPipe)taskHost.InnerObject;

        // Add Data Generator Source
        IDTSComponentMetaData100 componentSource = dataFlowTask.ComponentMetaDataCollection.New();
        componentSource.Name = "Data Generator";
        componentSource.ComponentClassID = "Konesans.Dts.Pipeline.DataGenerator.DataGenerator, Konesans.Dts.Pipeline.DataGenerator, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b2ab4a111192992b";
        CManagedComponentWrapper instanceSource = componentSource.Instantiate();
        instanceSource.ProvideComponentProperties();
        instanceSource.SetComponentProperty("RowCount", 10000);

        // Add Row Number Tx
        IDTSComponentMetaData100 componentRowNumber = dataFlowTask.ComponentMetaDataCollection.New();
        componentRowNumber.Name = "RowNumberTransform";
        componentRowNumber.ComponentClassID = "Konesans.Dts.Pipeline.RowNumberTransform.RowNumberTransform, Konesans.Dts.Pipeline.RowNumberTransform, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b2ab4a111192992b";
        CManagedComponentWrapper instanceRowNumber = componentRowNumber.Instantiate();
        instanceRowNumber.ProvideComponentProperties();
        instanceRowNumber.SetComponentProperty("Increment", 10);

        // Connect the two components together
        IDTSPath100 path = dataFlowTask.PathCollection.New();
        path.AttachPathAndPropagateNotifications(componentSource.OutputCollection[0], componentRowNumber.InputCollection[0]);

        #if DEBUG
        // Save package to disk, DEBUG only
        new Application().SaveToXml(String.Format(@"C:\Temp\{0}.dtsx", package.Name), package, null);
        #endif

        package.Execute();

        foreach (DtsError error in package.Errors)
        {
            Console.WriteLine("ErrorCode : {0}", error.ErrorCode);
            Console.WriteLine(" SubComponent : {0}", error.SubComponent);
            Console.WriteLine(" Description : {0}", error.Description);
        }

        package.Dispose();

    Troubleshooting

    Make sure you have downloaded the version that matches your version of SQL Server. We offer separate downloads for SQL Server 2005, SQL Server 2008 and SQL Server 2012.

    If you get an error when you try and use the component, along the lines of The component could not be added to the Data Flow task. Please verify that this component is properly installed. ... The data flow object "Konesans ..." is not installed correctly on this computer, this usually indicates that the internal cache of SSIS components needs to be updated. This cache is held by the SSIS service, so you need to restart the SQL Server Integration Services service. You can do this from the Services applet in Control Panel or Administrative Tools in Windows. You can also restart the computer if you prefer. You may also need to restart any current instances of Business Intelligence Development Studio (BIDS) / Visual Studio that you may be using to build your SSIS packages.
    Once installation is complete you need to manually add the task to the toolbox before you will see it and be able to add it to packages - How do I install a task or transform component?

    Please also make sure you have installed a minimum of SP1 for SQL Server 2005. The IDtsPipelineEnvironmentService was added in SQL Server 2005 Service Pack 1 (SP1) (see http://support.microsoft.com/kb/916940). If you get an error Could not load type 'Microsoft.SqlServer.Dts.Design.IDtsPipelineEnvironmentService' from assembly 'Microsoft.SqlServer.Dts.Design, Version=9.0.242.0, Culture=neutral, PublicKeyToken=89845dcd8080cc91'. when trying to open the user interface, it implies that your development machine has not had SP1 applied.

    Very occasionally we get a problem to do with the properties not being created with the correct data type. Since there is no way to programmatically define the data type of a pipeline component property, SSIS can only infer it. Whilst we set an integer value as we create the property, sometimes SSIS decides to define it as a decimal. This is often highlighted when you use a property expression against the property and get an error similar to Cannot convert System.Int32 to System.Decimal. Unfortunately this is beyond our control, and there appears to be no pattern as to when it happens. If you do have more information we would be happy to hear it.

    To fix this issue you can manually edit the package file. In Visual Studio, right-click the package file in the Solution Explorer and select View Code, which will open the package as raw XML. You can now search for the properties by name or by the component name. You can then change the incorrect property data types highlighted below from Decimal to Int32.

        <component id="37" name="Row Number Transformation" componentClassID="{BF01D463-7089-41EE-8F05-0A6DC17CE633}" … >
            <properties>
                <property id="38" name="UserComponentTypeName" …>
                <property id="41" name="Seed" dataType="System.Int32" ...>10</property>
                <property id="42" name="Increment" dataType="System.Decimal" ...>10</property>
                ...

    If you are still having issues then contact us, but please provide as much detail as possible about the error, as well as which version of the task you are using and details of the SSIS tools installed.

    Read the article

  • Fixing Robocopy for SQL Server Jobs

    - by Most Valuable Yak (Rob Volk)
    Robocopy is one of, if not the, best life-saving/greatest-thing-since-sliced-bread command line utilities ever to come from Microsoft.  If you're not using it already, what are you waiting for? Of course, being a Microsoft product, it's not exactly perfect. ;)  Specifically, it sets the ERRORLEVEL to a non-zero value even if the copy is successful.  This causes a problem in SQL Server job steps, since non-zero ERRORLEVELs report as failed. You can work around this by having your SQL job go to the next step on failure, but then you can't determine if there was a genuine error.  Plus you still see annoying red X's in your job history.  One way I've found to avoid this is to use a batch file that runs Robocopy, with some commands added after it:

        robocopy d:\backups \\BackupServer\BackupFolder *.bak

        rem suppress successful robocopy exit statuses, only report genuine errors (bitmask 16 and 8 settings)
        set /A errlev="%ERRORLEVEL% & 24"

        rem exit batch file with errorlevel so SQL job can succeed or fail appropriately
        exit /B %errlev%

    (The REM statements are simply comments and don't need to be included in the batch file.) The SET command lets you use expressions when you use the /A switch.  So I set an environment variable "errlev" to a bitwise AND with the ERRORLEVEL value. Robocopy's exit codes use a bitmap/bitmask to specify its exit status.  The bits for 1, 2, and 4 do not indicate any kind of failure, but 8 and 16 do.  So by adding 16 + 8 to get 24, and doing a bitwise AND, I suppress any of the other bits that might be set, and allow either or both of the error bits to pass. The next step is to use the EXIT command with the /B switch to set a new ERRORLEVEL value, using the "errlev" variable.  This will now return zero (unless Robocopy had real errors) and allow your SQL job step to report success. This technique should also work for other command-line utilities.  The only issue I've found is that it requires the commands to be part of a batch file, so if you use Robocopy directly in your SQL job step you'd need to place it in a batch.  If you also have multiple Robocopy calls, you'll need to place the SET /A command ONLY after the last one.  You'd therefore lose any errors from previous calls, unless you use multiple "errlev" variables and AND them together. (I'll leave this as an exercise for the reader.) The SET /A syntax also permits other kinds of expressions to be calculated.  You can get a full list by running "SET /?" at a command prompt.

    Read the article

  • Using linked servers, OPENROWSET and OPENQUERY

    - by BuckWoody
    SQL Server has a few mechanisms to reach out to another server (even another server type) and query data from within a Transact-SQL statement. Among them are a set of stored credentials and information (called a Linked Server), a statement that uses a linked server called OPENQUERY, another called OPENROWSET, and one called OPENDATASOURCE. This post isn't about those particular functions or statements - hit the links for more if you're new to those topics. I'm actually more concerned about where I see these used than the particular method. In many cases, a Linked Server isn't another Relational Database Management System (RDBMS) like Oracle or DB2 (which is possible with a linked server), but another SQL Server. My concern is that linked servers are the new Data Transformation Services (DTS) from SQL Server 2000 - something that was designed for one purpose but which is being morphed into something much more. In the case of DTS, most of us turned that feature into a full-fledged job system. What was designed as a simple data import and export system was pressed into service doing logic, routing and timing. And of course we all know how painful it was to move off of a complex DTS system onto SQL Server Integration Services. In the case of linked servers, what should be used as a method of running a simple query or two on another server where you have occasional connection, or need a quick import of a small data set, is morphing into a full federation strategy. In some cases I've seen a complex web of linked servers, and when credentials, names or anything else changes there are huge problems. Now don't get me wrong - linked servers and other forms of distributing queries are a fantastic set of tools that we have to move data around. I'm just saying that when you start having lots of workarounds and when things get really complicated, you might want to step back a little and ask if there's a better way. Are you able to tolerate some latency? Perhaps you're able to use Service Broker. Would you like to be platform-independent on the data source? Perhaps a middle tier might make more sense, abstracting the queries there and sending them to the proper server. Designed properly, I've seen these systems scale further and be more resilient than loading up on linked servers.
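    For reference, a minimal pass-through query against a linked server looks like this; a sketch only, assuming a linked server named REMOTESRV (hypothetical name):

        -- the quoted query executes on the remote server; results come back locally
        SELECT *
        FROM OPENQUERY(REMOTESRV, 'SELECT name, create_date FROM sys.databases');

    Used occasionally, this is exactly what the feature is for; the trouble described above starts when dozens of these become the backbone of a federation strategy.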

    Read the article

  • SQL Saturday #162 Cambridge

    - by Most Valuable Yak (Rob Volk)
    Despite the efforts of American Airlines, this past weekend I attended the first SQL Saturday in the UK!  Hosted by the SQLCambs Chapter of PASS and organized by Mark (b|t) & Lorraine Broadbent, ably assisted by John Martin (b|t), Mark Pryce-Maher (b|t) and other folks whose names I've unfortunately forgotten, it was held at the Crowne Plaza Hotel, which is completely surrounded by Cambridge University. On Friday, they presented 3 pre-conference sessions given by the brilliant American Cloud & DBA Guru, Buck Woody (b|t), the brilliant Danish SQL Server Internals Guru, Mark Rasmussen (b|t), and the brilliant Scottish Business Intelligence Guru and recent Outstanding PASS Volunteer, Jen Stirrup (b|t).  While I would have loved to attend any of their pre-cons (having seen them present several times already), finances and American Airlines ultimately made that impossible.  But not to worry, I caught up with them during the regular sessions and at the speaker dinner.  And I got back the money they all owed me.  (Actually I owed Mark some money.) The schedule was jam-packed: even with only 4 tracks, there were 8 regular slots, a lunch session for sponsor presentations, and a 15-minute keynote given by Buck Woody, who besides giving an excellent history of SQL Server at Microsoft (and before), also explained the source of the "unknown contact" image that appears in Outlook.  Hint: it's not Buck himself. Amazingly, and against all better judgment, I even got to present at SQL Saturday 162!  I did a 5-minute Lightning Talk on Regular Expressions in SSMS.  I then did a regular 50-minute session on Constraints.  You can download the content for the regular session at that link, and for the regular expression presentation here. I had a great time and had a great audience for both of my sessions.  You would never have guessed this was the first event for the organizers; everything went very smoothly, especially for the number of attendees and the relative smallness of the space.  The event sponsors also deserve a lot of credit for making themselves fit in a small area and for staying through the entire event until the giveaways at the very end. Overall this was one of the best SQL Saturdays I've ever attended, and I have to congratulate Mark B, Lorraine, John, Mark P-M, and all the volunteers and speakers for making this an astoundingly hard act to follow!  Well done!

    Read the article

  • Create Outlook Appointments from PowerShell

    - by BuckWoody
    I've been toying around with a script to create a special set of calendar objects in Outlook that show when my SQL Server Agent Jobs are scheduled to run. I haven't finished yet, but I thought I would share the part that creates the Outlook appointments. I have yet to fill a variable with the start and end times, and then loop through that to create the appointments. I'm thinking I'll make the script below into a function, and feed it those variables in a loop. The script below creates a whole new calendar folder in Outlook called "SQL Server Agent Jobs". I also use categories quite a bit, so you'll see that too. Caution: If you plan to play with this script, do it on an isolated workstation, not on your "regular" Outlook calendar. Otherwise, you'll have lots of appointments in there that you don't care about!

        # Add a new calendar item to a new Outlook folder called "SQL Server Agent Jobs"
        $outlook = new-object -com Outlook.Application
        $calendar = $outlook.Session.folders.Item(1).Folders.Item("SQL Server Agent Jobs")
        $appt = $calendar.Items.Add(1) # == olAppointmentItem
        $appt.Start = [datetime]"03/11/2010 11:00"
        $appt.End = [datetime]"03/11/2010 12:00"
        $appt.Subject = "JobName"
        $appt.Location = "ServerName"
        $appt.Body = "Job Details"
        $appt.Categories = "SQL server Agent Job"
        $appt.Save()

    Script disclaimer, for people who need to be told this sort of thing: Never trust any script, including those that you find here, until you understand exactly what it does and how it will act on your systems. Always check the script on a test system or Virtual Machine, not a production system. All scripts on this site are performed by a professional stunt driver on a closed course. Your mileage may vary. Void where prohibited. Offer good for a limited time only. Keep out of reach of small children. Do not operate heavy machinery while using this script. If you experience blurry vision, indigestion or diarrhea during the operation of this script, see a physician immediately.

    Read the article

  • SQL Server Memory Manager Changes in Denali

    - by SQLOS Team
    The next version of SQL Server will contain significant changes to the memory manager component, which has been rewritten for Denali.  In previous versions of SQL Server there were two distinct memory managers: one handled allocation sizes of 8k or less, and another handled allocations greater than 8k.  In Denali there will be one memory manager for all allocation sizes.

    The majority of the changes will be transparent to the end user.  However, some changes will be visible:

    · The 'max server memory' configuration option has new lower limits.  Specifically, 32-bit versions of SQL Server will have a lower limit of 64 MB; the 64-bit versions will have a lower limit of 128 MB.
    · All memory allocations by SQL Server components will observe the 'max server memory' configuration option.  In previous SQL versions only the 8k allocations were limited by the 'max server memory' configuration option; allocations larger than 8k weren't constrained.
    · DMVs which refer to memory manager internals have been modified.  This includes adding or removing columns and changing column names.
    · The memory manager configuration messages in the error log have minor changes.
    · DBCC memorystatus output has been changed.
    · Address Windowing Extensions (AWE) has been deprecated.

    In the next blog post I will discuss the changes to the memory manager DMVs in greater detail.  In future blog posts I will discuss the other changes in greater detail.
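    As a small illustration of the DMV changes, here is a quick look at memory clerk usage; a sketch only, since in Denali sys.dm_os_memory_clerks exposes a single pages_kb column where earlier versions had separate single-page and multi-page columns:

        SELECT type, SUM(pages_kb) AS pages_kb
        FROM sys.dm_os_memory_clerks
        GROUP BY type
        ORDER BY pages_kb DESC;

    Queries like this written against the pre-Denali column names will need to be updated.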

    Read the article
