
  • SSIS Intermittent variable error: The system cannot find the file specified

    - by Ben
    Our SSIS packages are structured as one Control package and about 30 child packages that are invoked from the Control package, one Execute Package Task per child package. Each Execute Package Task uses a File Connection Manager to specify the path to the child package's .dtsx file (one File Connection Manager per child package), and each File Connection Manager has an expression defined for its ConnectionString property. The expression looks like this (the file name differs per package):

        @[Template::FolderPackages] + "MyPackage.dtsx"

    The variable (FolderPackages) is specified in the SSIS package configuration file. The error that is generated during run time is "Error 0x80070002 while loading package file "MyPackage.dtsx". The system cannot find the file specified." The package that fails differs from run to run, and sometimes no packages fail at all - even when run on exactly the same environment and data. I ran FileMon during this error and found that when the error happens, SSIS tries to read the .dtsx file from the wrong place, namely from system32. I checked that this is identical to what would happen if the @[Template::FolderPackages] variable were empty, but because the very same variable is used for every child package and works for some while sometimes failing for others, I have no explanation for this. Anything obvious, or is it time to raise a support call with Microsoft?

  • MySQL stored procedure, handling multiple cursors and query results

    - by Tirithen
    How can I use two cursors in the same routine? If I remove the second cursor declaration and fetch loop, everything works fine. The routine is used for adding a friend in my web app. It takes the id of the current user and the email of the friend we want to add, then checks whether the email has a corresponding user id and, if no friend relation exists, creates one. Any solution other than this one would be great as well.

        DROP PROCEDURE IF EXISTS addNewFriend;
        DELIMITER //
        CREATE PROCEDURE addNewFriend(IN inUserId INT UNSIGNED, IN inFriendEmail VARCHAR(80))
        BEGIN
            DECLARE tempFriendId INT UNSIGNED DEFAULT 0;
            DECLARE tempId INT UNSIGNED DEFAULT 0;
            DECLARE done INT DEFAULT 0;
            DECLARE cur CURSOR FOR SELECT id FROM users WHERE email = inFriendEmail;
            DECLARE CONTINUE HANDLER FOR NOT FOUND SET done = 1;

            OPEN cur;
            REPEAT FETCH cur INTO tempFriendId; UNTIL done = 1 END REPEAT;
            CLOSE cur;

            DECLARE cur CURSOR FOR SELECT user_id FROM users_friends
                WHERE user_id = tempFriendId OR friend_id = tempFriendId;
            DECLARE CONTINUE HANDLER FOR NOT FOUND SET done = 1;

            OPEN cur;
            REPEAT FETCH cur INTO tempId; UNTIL done = 1 END REPEAT;
            CLOSE cur;

            IF tempFriendId != 0 AND tempId != 0 THEN
                INSERT INTO users_friends (user_id, friend_id) VALUES (inUserId, tempFriendId);
            END IF;

            SELECT tempFriendId AS friendId;
        END //
        DELIMITER ;
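
    MySQL only allows DECLARE statements at the top of a BEGIN ... END block, so a second cursor either needs a distinct name declared up front or its own nested block. Since each lookup here expects at most one row, a minimal sketch without any cursors - assuming the same table layout and the stated intent of inserting only when no relation exists yet - could look like this:

        DROP PROCEDURE IF EXISTS addNewFriend;
        DELIMITER //
        CREATE PROCEDURE addNewFriend(IN inUserId INT UNSIGNED, IN inFriendEmail VARCHAR(80))
        BEGIN
            DECLARE tempFriendId INT UNSIGNED DEFAULT 0;
            DECLARE relationCount INT DEFAULT 0;

            -- Look up the friend's id; SELECT ... INTO replaces the first cursor,
            -- since at most one row is expected.
            SELECT id INTO tempFriendId FROM users WHERE email = inFriendEmail LIMIT 1;

            -- Count existing relations in either direction; replaces the second cursor.
            SELECT COUNT(*) INTO relationCount
            FROM users_friends
            WHERE (user_id = inUserId AND friend_id = tempFriendId)
               OR (user_id = tempFriendId AND friend_id = inUserId);

            IF tempFriendId != 0 AND relationCount = 0 THEN
                INSERT INTO users_friends (user_id, friend_id) VALUES (inUserId, tempFriendId);
            END IF;

            SELECT tempFriendId AS friendId;
        END //
        DELIMITER ;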

  • Atomic INSERT/SELECT in HSQLDB

    - by PartlyCloudy
    Hello, I have the following HSQLDB table, in which I map UUIDs to auto-incremented IDs:

        SHORT_ID (BIGINT, PK, auto-incremented) | UUID (VARCHAR, unique)

    Create command:

        CREATE TABLE table (SHORT_ID BIGINT GENERATED BY DEFAULT AS IDENTITY PRIMARY KEY, UUID VARCHAR(36) UNIQUE)

    In order to add new pairs concurrently, I want to use the atomic MERGE INTO statement. So my (prepared) statement looks like this:

        MERGE INTO table
        USING (VALUES(CAST(? AS VARCHAR(36)))) AS v(x)
        ON ID_MAP.UUID = v.x
        WHEN NOT MATCHED THEN INSERT VALUES v.x

    When I execute the statement (setting the placeholder correctly), I always get:

        Caused by: org.hsqldb.HsqlException: row column count mismatch

    Could you please give me a hint as to what is going wrong here? Thanks in advance.
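
    A hedged guess at the mismatch: the table has two columns (SHORT_ID and UUID) but the INSERT branch supplies only one value, so the inserted row's column count doesn't match the table's. Naming the target column (and parenthesizing the VALUES list) should let the identity fill in SHORT_ID - a sketch, assuming the table is actually named ID_MAP as the ON clause suggests:

        MERGE INTO ID_MAP
        USING (VALUES(CAST(? AS VARCHAR(36)))) AS v(x)
        ON ID_MAP.UUID = v.x
        WHEN NOT MATCHED THEN INSERT (UUID) VALUES (v.x)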

  • How to avoid using duplicate savepoint names in nested transactions in nested stored procs?

    - by Gary McGill
    I have a pattern that I almost always follow, where if I need to wrap up an operation in a transaction, I do this:

        BEGIN TRANSACTION
        SAVE TRANSACTION TX
        -- Stuff
        IF @error <> 0
            ROLLBACK TRANSACTION TX
        COMMIT TRANSACTION

    That's served me well enough in the past, but after years of using this pattern (and copy-pasting the above code), I've suddenly discovered a flaw, which comes as a complete shock. Quite often I'll have a stored procedure calling other stored procedures, all of which use this same pattern. What I've discovered (to my cost) is that because I'm using the same savepoint name everywhere, I can get into a situation where my outer transaction is partially committed - precisely the opposite of the atomicity that I'm trying to achieve. I've put together an example that exhibits the problem. This is a single batch (no nested stored procs), so it looks a little odd in that you probably wouldn't use the same savepoint name twice in the same batch, but my real-world scenario would be too confusing to post.

        CREATE TABLE Test (test INTEGER NOT NULL)

        BEGIN TRAN
        SAVE TRAN TX

        BEGIN TRAN
        SAVE TRAN TX
        INSERT INTO Test(test) VALUES (1)
        COMMIT TRAN TX

        BEGIN TRAN
        SAVE TRAN TX
        INSERT INTO Test(test) VALUES (2)
        COMMIT TRAN TX

        DELETE FROM Test

        ROLLBACK TRAN TX
        COMMIT TRAN TX

        SELECT * FROM Test
        DROP TABLE Test

    When I execute this, it lists one record, with value "1". In other words, even though I rolled back my outer transaction, a record was added to the table. What's happening is that the ROLLBACK TRANSACTION TX at the outer level rolls back only as far as the last SAVE TRANSACTION TX at the inner level. Now that I write this all out, I can see the logic behind it: the server looks back through the log, treating it as a linear stream of transactions; it doesn't understand the nesting/hierarchy implied by the nesting of the transactions (or, in my real-world scenario, by the calls to other stored procedures). So, clearly, I need to start using unique savepoint names instead of blindly using "TX" everywhere. But - and this is where I finally get to the point - is there a way to do this in a copy-pastable way so that I can still use the same code everywhere? Can I auto-generate the savepoint name on the fly somehow? Is there a convention or best practice for doing this sort of thing? It's not exactly hard to come up with a unique name every time you start a transaction (you could base it off the SP name, or some such), but I do worry that eventually there would be a conflict - and you wouldn't know about it, because rather than causing an error it just silently destroys your data... :-(
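
    One copy-pastable possibility - a sketch, not an established convention - relies on the fact that SAVE TRANSACTION and ROLLBACK TRANSACTION both accept a variable as the savepoint name, and derives that name from @@NESTLEVEL, which differs at each level of nested procedure calls:

        DECLARE @sp varchar(32)
        SET @sp = 'TX_' + CAST(@@NESTLEVEL AS varchar(10))

        BEGIN TRANSACTION
        SAVE TRANSACTION @sp
        -- Stuff
        IF @error <> 0
            ROLLBACK TRANSACTION @sp
        COMMIT TRANSACTION

    Two savepoints at the same nesting level in one batch would still collide, so appending the procedure name (e.g. OBJECT_NAME(@@PROCID)) to the variable may be safer.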

  • Relative path reference in WebConfig.ConnectionString

    - by Ngu Soon Hui
    Is it possible to specify a relative path reference in the connection string's AttachDbFileName property in web.config? For example, if my database is located in the App_Data folder, I can easily specify the AttachDbFileName as |DataDirectory|\mydb.mdf, and |DataDirectory| will automatically resolve to the correct path. Now suppose that the web.config file is located in folder A, but the database is located in B\App_Data, where A and B are located in the same parent folder. Is there any way to use a relative path reference to resolve to the correct path?
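
    |DataDirectory| is just an AppDomain property, so one option is to repoint it at application startup rather than changing the connection string. A minimal sketch, assuming the sibling-folder layout described above (the ..\B\App_Data path is an assumption):

        // In Global.asax.cs - remap |DataDirectory| from A's App_Data to B\App_Data.
        using System;
        using System.IO;
        using System.Web;

        public class Global : HttpApplication
        {
            protected void Application_Start(object sender, EventArgs e)
            {
                // |DataDirectory| resolves to whatever this AppDomain property holds.
                string dataDir = Path.GetFullPath(
                    Path.Combine(HttpRuntime.AppDomainAppPath, @"..\B\App_Data"));
                AppDomain.CurrentDomain.SetData("DataDirectory", dataDir);
            }
        }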

  • Get most left|right|top|bottom point contained in box

    - by skyman
    I'm storing Points Of Interest (POI) in a PostgreSQL database and retrieving them via a PHP script to an Android application. To reduce internet usage I want my mobile app to know whether there are any points in the neighborhood of the currently displayed area. My idea is to store the bounds of the rectangle containing all points already retrieved (in other words: the nearest point to the West of the most western point already retrieved, the nearest point to the North of the most northern point already retrieved, etc.), and to make the next query when any edge of the screen goes outside of these bounds. Currently I can retrieve points which are in a "single screen" (the area covered by the currently displayed map) using:

        SELECT * FROM ch
        WHERE loc <@ (box '((".-$latSpan.", ".$lonSpan."),(".$latSpan.", ".-$lonSpan."))' + point '".$loc."')

    Now I need to know the four most remote points in each direction, so that I can then retrieve the next four "more remote" points. Is there any way to get those points (or box) directly from PostgreSQL (maybe using some "aggregate points to box" function)?
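
    The coordinates of a PostgreSQL point can be addressed as loc[0] and loc[1], so the bounding box of everything already retrieved can be aggregated with plain min/max. A sketch, assuming ch.loc is of type point:

        -- Bounding box of all stored points (loc[0] = x, loc[1] = y):
        SELECT box(point(min(loc[0]), min(loc[1])),
                   point(max(loc[0]), max(loc[1]))) AS bounds
        FROM ch;

    Finding the nearest point outside each edge of that box would still take four separate ORDER BY ... LIMIT 1 queries, one per direction.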

  • TSQL Query: Escaping Special Characters

    - by Abs
    Hello all, I am trying to escape special characters in a TSQL query. I have done this before:

        SELECT columns FROM table WHERE column LIKE '%\%%' ESCAPE '\'

    And it has worked. Now I have tried to do this:

        UPDATE match SET rule_name='31' ESCAPE '\'

    But it has failed. I know none of the values have a \, but it should still work. I am guessing it's because it needs a LIKE statement, but how else can I escape characters that I am adding to a database? In addition, does anyone have a link to all the special characters that should be escaped? I couldn't find any documentation on this! Thanks all for any help.
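
    For context: ESCAPE is part of the LIKE predicate only, which is why it fails in an UPDATE. In an ordinary T-SQL string literal the only character that needs escaping is the single quote, which is doubled - a short illustration:

        -- ESCAPE belongs to LIKE only; a plain literal needs no escape clause:
        UPDATE match SET rule_name = '31';
        -- The single quote is the one character to escape, by doubling it:
        UPDATE match SET rule_name = 'O''Reilly''s';   -- stores: O'Reilly's

    Parameterized queries sidestep manual escaping entirely and are generally the safer choice.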

  • LINQ error when deployed - Security Exception - cannot create DataContext

    - by aximili
    The code below works locally, but when I deploy it to the server it gives the following error:

        Security Exception
        Description: The application attempted to perform an operation not allowed by the security policy. To grant this application the required permission please contact your system administrator or change the application's trust level in the configuration file.
        Exception Details: System.Security.SecurityException: Request for the permission of type 'System.Security.Permissions.FileIOPermission, mscorlib, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089' failed.

    The code is:

        protected void Page_Load(object sender, EventArgs e)
        {
            DataContext context = new DataContext(Global.ConnectionString); // <-- throws the exception
            //Table<Group> _kindergartensTable = context.GetTable<Group>();
            Response.Write("ok");
        }

    I have set full write permissions on all files and folders on the server. Any suggestions how to solve this problem? Thanks in advance.
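
    The FileIOPermission failure suggests the site runs under partial trust (e.g. Medium) on the server, which file permissions alone won't fix. If the host allows overriding it, a sketch of the usual web.config change:

        <!-- Only works if the hosting provider permits overriding the trust level. -->
        <configuration>
          <system.web>
            <trust level="Full" />
          </system.web>
        </configuration>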

  • Remove Duplicates from LEFT OUTER JOIN

    - by Kaushik Gopal
    Hey folks, my question is quite similar to http://stackoverflow.com/questions/757957/restricting-a-left-join. I have a variation on that request, though, and the comment didn't allow enough characters, hence posting as a new question. I hope this doesn't go against the posting rules/etiquette. Assume I have a table SHOP and another table LOCATION. LOCATION is a sort of child table of SHOP, and it has two columns of interest: a Division Key (calling it just KEY) and a "SHOP" number. The latter matches the number "NO" in table SHOP. I tried this left outer join:

        SELECT S.NO, L.KEY
        FROM SHOP S
        LEFT OUTER JOIN LOCATN L ON S.NO = L.SHOP

    But I'm getting a lot of duplicates, since there are many locations that belong to a single shop. I want to eliminate them and just get a list of "shop, key" entries without duplicates. Any ideas how? (edit: Oracle 10g database)
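
    If the duplicates are full-row duplicates (the same NO/KEY pair repeated once per location), DISTINCT is enough - a sketch:

        SELECT DISTINCT S.NO, L.KEY
        FROM SHOP S
        LEFT OUTER JOIN LOCATN L ON S.NO = L.SHOP;

    If a single shop can instead map to several distinct keys and only one row per shop is wanted, an aggregate picks one deterministically:

        SELECT S.NO, MIN(L.KEY) AS KEY
        FROM SHOP S
        LEFT OUTER JOIN LOCATN L ON S.NO = L.SHOP
        GROUP BY S.NO;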

  • Calculate Percentage help ORACLE L@@K

    - by DAVID
    Hi, this code gives me employee salaries and manager salaries:

        SELECT E.EMP_FNAME AS MANAGER, E.EMP_SALARY, D.DEPT_NO, A.EMP_FNAME AS EMPLOYEE, A.EMP_SALARY
        FROM EMPLOYEE E, EMPLOYEE A, DEPARTMENT D
        WHERE E.EMP_NIN = A.EMP_MANAGER
        AND A.EMP_MANAGER = D.EMP_MANAGER;

    How can I show only the employees that have a salary within 10% of their manager's salary?
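
    A sketch, reading "within 10%" as between 90% and 110% of the manager's salary - add one condition to the WHERE clause:

        AND A.EMP_SALARY BETWEEN E.EMP_SALARY * 0.9 AND E.EMP_SALARY * 1.1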

  • How to optimize this MySQL query

    - by James Simpson
    This query was working fine when the database was small, but now that there are millions of rows in the database, I am realizing I should have looked at optimizing this earlier. It looks at over 600,000 rows and is Using where; Using temporary; Using filesort (which leads to an execution time of 5-10 seconds). It is using an index on the field battle_type.

        SELECT username,
               SUM(outcome) AS wins,
               COUNT(*) - SUM(outcome) AS losses
        FROM tblBattleHistory
        WHERE battle_type = '0' && outcome < '2'
        GROUP BY username
        ORDER BY wins DESC, losses ASC, username ASC
        LIMIT 0, 50
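
    One avenue worth trying - a sketch, not a guaranteed fix - is a composite index with the equality column first, the GROUP BY column second (so rows arrive in grouping order and the temporary table may be skipped), and outcome last so the index covers the whole query:

        ALTER TABLE tblBattleHistory
            ADD INDEX idx_type_user_outcome (battle_type, username, outcome);

    The ORDER BY on the computed aggregates (wins, losses) will still need a filesort, but only over the grouped result rather than 600,000+ base rows.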

  • Problem with writing a query

    - by phenevo
    Hi, I've got a collection of geo objects in a database. There are four tables: Countries, Regions, Provinces, Cities. Cities has, inter alia, a ProvinceCode; Provinces has a RegionCode; Regions has a CountryCode. And there is a fifth table, Descriptions, with the columns: ObjectCode, ObjectType (country, region, province, city), Description. How do I get, from the Descriptions table, all descriptions for objects which are in a given country?
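
    A sketch of one way to express this, with assumed key column names (RegionCode, ProvinceCode, CityCode as each table's own key; @CountryCode as the parameter) since the question doesn't give them:

        SELECT d.Description
        FROM Descriptions d
        WHERE (d.ObjectType = 'country' AND d.ObjectCode = @CountryCode)
           OR (d.ObjectType = 'region' AND d.ObjectCode IN
                 (SELECT RegionCode FROM Regions WHERE CountryCode = @CountryCode))
           OR (d.ObjectType = 'province' AND d.ObjectCode IN
                 (SELECT p.ProvinceCode FROM Provinces p
                  JOIN Regions r ON p.RegionCode = r.RegionCode
                  WHERE r.CountryCode = @CountryCode))
           OR (d.ObjectType = 'city' AND d.ObjectCode IN
                 (SELECT c.CityCode FROM Cities c
                  JOIN Provinces p ON c.ProvinceCode = p.ProvinceCode
                  JOIN Regions r ON p.RegionCode = r.RegionCode
                  WHERE r.CountryCode = @CountryCode));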

  • SQL Server, temporary tables with truncate vs table variable with delete

    - by Richard
    I have a stored procedure inside which I create a temporary table that typically contains between 1 and 10 rows. This table is truncated and refilled many times during the stored procedure; it is truncated because that is faster than DELETE. Would I get any performance increase by replacing this temporary table with a table variable, given that I'd suffer a penalty for having to use DELETE (TRUNCATE does not work on table variables)? While table variables live mainly in memory and are generally faster than temp tables, do I lose that benefit by having to delete rather than truncate?

  • How to get identities of inserted data records using SQL bulk copy

    - by Olga
    Hello, I have an ADO.NET DataTable with about 100,000 records. In this table there is a column xyID which has no values in it, because they are generated by insertion into my MSSQL database. Now I have the problem that I need these IDs for other processes. I am looking for a way to bulk copy this DataTable into the MSSQL database and, within the same "step", to fill my DataTable with the generated IDs. Thank you for your answers!
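
    SqlBulkCopy itself never reports the identity values it generates. A common workaround - sketched here with hypothetical table and column names (#Staging, TargetTable, SomeKey, Payload) - is to bulk copy into a staging table and then move the rows with INSERT ... OUTPUT, which does return the generated IDs:

        -- Stage the rows first; SqlBulkCopy targets #Staging on the same connection.
        CREATE TABLE #Staging (SomeKey INT, Payload NVARCHAR(100));

        -- ... SqlBulkCopy with DestinationTableName = "#Staging" runs here ...

        -- Move the rows and capture the generated identities:
        INSERT INTO TargetTable (SomeKey, Payload)
        OUTPUT inserted.xyID, inserted.SomeKey
        SELECT SomeKey, Payload FROM #Staging;

    The OUTPUT rows can be read back with ExecuteReader and matched to the DataTable rows via a natural key carried through the staging table.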

  • Primary Key Identity Value Increments On Unique Key Constraint Violation

    - by Jed
    I have a SQL Server 2008 table which has a primary key (IsIdentity=Yes) and three other fields that make up a unique key constraint. In addition I have a stored procedure that inserts a record into the table, and I call the sproc via C# using a SqlConnection object. The C# sproc call works fine; however, I have noticed interesting results when the call violates the unique key constraint... When the sproc call violates the unique key constraint, a SqlException is thrown - which is no surprise and cool. However, I notice that the next record that is successfully added to the table has a PK value that is not exactly one more than the previous record. For example: say the table has five records where the PK values are 1, 2, 3, 4, and 5. The sproc attempts to insert a sixth record, but the unique key constraint is violated, and so the sixth record is not inserted. Then the sproc attempts to insert another record, and this time it is successful. This new record is given a PK value of 7 instead of 6. Is this normal behavior? If so, can you give me a reason why this is so? (If a record fails to insert, why is the identity incremented?) If this is not normal behavior, can you give me any hints as to why I am seeing these symptoms?

  • SSIS - Wizard vs manual vs programming

    - by alchemical
    I'd like to move 26 tables from one DB to another. I see I can do this in the SSIS Import and Export Wizard. I believe the other approach would be to select tools from the toolbar in Data Flow and then configure them all. When is it better to use the wizard and when is it best to create the package manually (with the visual tools) or programmatically? One thing I noticed with the Wizard is that it lets me select multiple tables at once, but I could not find a way to get back to that screen once the package is created, so that I could edit the various tables all in one place.

  • MissingMethodException (Can't find PInvoke DLL 'sqlceme30.dll') for Windows Mobile

    - by anyinfonet
    Hello. I have developed a Windows Mobile (v5.0) application, and I use only one database, SQLite, with these references: System.Data.SQLite.dll (assembly version & product version: 1.0.65.0) and SQLite.Interop.065.DLL (product version: 1.0; a C++ lib for the first DLL). After 5 weeks of using this application, today I got a weird exception and I don't understand it:

        MissingMethodException
        Can't find PInvoke DLL 'sqlceme30.dll'
        at System.Data.SqlServerCe.SqlCeCommand.ReleaseNativeInterfaces()
        at System.Data.SqlServerCe.SqlCeCommand.Dispose(Boolean disposing)
        ......

    What's wrong? Can anyone explain this to me, please? By the way: until now I developed 3-4 applications (1 year ago) using these references and everything worked fine.

  • Mysql Sub Select Query Optimization

    - by Matt
    I'm running a query daily to compile stats, but it seems really inefficient. This is the query:

        SELECT a.id, tstamp, label_id,
               (SELECT author_id FROM b
                WHERE b.tid = a.id
                ORDER BY b.tstamp DESC
                LIMIT 1) AS author_id
        FROM a, b
        WHERE (status = '2' OR status = '3')
          AND category != 6
          AND a.id = b.tid
          AND (b.type = 'C' OR b.type = 'R')
          AND a.tstamp1 BETWEEN {$timestamp_start} AND {$timestamp_end}
        ORDER BY b.tstamp DESC
        LIMIT 500

    This query seems to run really slowly. Apologies for the crap naming - I've been asked not to reveal the actual table names. The reason there is a subselect is that the outer select gets one row from table a and one row from table b, but I also need to know the latest author_id from table b as well, so I run a subselect to return it. I don't want to run another select inside a PHP loop, as that is also inefficient. It works correctly - I just need to find a much faster way of getting this data set.
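
    One sketch of a rewrite replaces the correlated subselect (which re-runs per result row) with a derived table that finds the latest b row per tid in a single pass. This assumes status, category, label_id, and tstamp1 live on table a, which the original doesn't make explicit; an index on b(tid, tstamp) would support both the derived table and the join back:

        SELECT a.id, b.tstamp, a.label_id, latest.author_id
        FROM a
        JOIN b ON a.id = b.tid AND (b.type = 'C' OR b.type = 'R')
        JOIN (SELECT tid, MAX(tstamp) AS max_ts
              FROM b
              GROUP BY tid) m ON m.tid = a.id
        JOIN b AS latest ON latest.tid = m.tid AND latest.tstamp = m.max_ts
        WHERE (a.status = '2' OR a.status = '3')
          AND a.category != 6
          AND a.tstamp1 BETWEEN {$timestamp_start} AND {$timestamp_end}
        ORDER BY b.tstamp DESC
        LIMIT 500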

  • SQLServer: Namespaces preventing access to query data

    - by Brian
    Hi, a beginner's question, hopefully easily answered. I've got an XML file I want to load into SQL Server 2008 and extract the useful information from. I'm starting simple and just trying to extract the name (/gpx/name). The code I have is:

        DECLARE @x xml;
        SELECT @x = xCol.BulkColumn
        FROM OPENROWSET (BULK 'C:\Data\EM.gpx', SINGLE_BLOB) AS xCol;

        -- confirm the xml data is in @x
        SELECT @x AS XML_Data

        -- try and get the name of the gpx section
        SELECT c.value('name[1]', 'varchar(200)') AS Name
        FROM @x.nodes('gpx') x(c)

    Below is a heavily shortened version of the XML file:

        <?xml version="1.0" encoding="utf-8"?>
        <gpx xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:xsd="http://www.w3.org/2001/XMLSchema" version="1.0" creator="Groundspeak Pocket Query" xsi:schemaLocation="http://www.topografix.com/GPX/1/0 http://www.topografix.com/GPX/1/0/gpx.xsd http://www.groundspeak.com/cache/1/0 http://www.groundspeak.com/cache/1/0/cache.xsd" xmlns="http://www.topografix.com/GPX/1/0">
          <name>EM</name>
          <desc>Geocache file generated by Groundspeak</desc>
          <author>Groundspeak</author>
          <email>[email protected]</email>
          <time>2010-03-24T14:01:36.4931342Z</time>
          <keywords>cache, geocache, groundspeak</keywords>
          <wpt lat="51.2586" lon="-2.213067">
            <time>2008-03-30T07:00:00Z</time>
            <name>GC1APHM</name>
            <desc>Sandman's Noble Hoard by Sandman1973, Unknown Cache (2/3)</desc>
            <groundspeak:cache id="832000" available="True" archived="False" xmlns:groundspeak="http://www.groundspeak.com/cache/1/0">
              <groundspeak:name>Sandman's Noble Hoard</groundspeak:name>
              <groundspeak:placed_by>Sandman1973</groundspeak:placed_by>
            </groundspeak:cache>
          </wpt>
        </gpx>

    If the first two lines are replaced with just <gpx>, the above example works correctly. However, I then can't access groundspeak:name (/gpx/wpt/groundspeak:cache/groundspeak:name), so my guess is that it's a problem with the namespace. Any help would be appreciated.
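
    The diagnosis looks right: the gpx element sits in a default namespace, so unprefixed XQuery paths don't match it. T-SQL's WITH XMLNAMESPACES clause declares the namespaces for the statement it prefixes - a sketch against the file above:

        -- Declare the GPX default namespace and a prefix for the groundspeak one.
        WITH XMLNAMESPACES (
            DEFAULT 'http://www.topografix.com/GPX/1/0',
            'http://www.groundspeak.com/cache/1/0' AS gs)
        SELECT c.value('name[1]', 'varchar(200)') AS Name
        FROM @x.nodes('/gpx') x(c);

        -- Cache names, reaching into the groundspeak namespace:
        WITH XMLNAMESPACES (
            DEFAULT 'http://www.topografix.com/GPX/1/0',
            'http://www.groundspeak.com/cache/1/0' AS gs)
        SELECT c.value('(gs:cache/gs:name)[1]', 'varchar(200)') AS CacheName
        FROM @x.nodes('/gpx/wpt') x(c);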

  • LINQ weak relation between tables

    - by cleric
    I have two tables with a weak relation between them. I need to get a text value from one table using a key from the other. I am using the following C# LINQ code:

        City = rea.tRealEstateContact.tPostnumre != null
            ? rea.tRealEstateContact.tPostnumre.Bynavn
            : string.Empty

    But when the key cannot be found in the first table (tPostnumre), an exception is thrown. How should I do this?

  • SqlBulkCopy is slow, doesn't utilize full network speed

    - by Alex
    Hi, for the past couple of weeks I have been creating a generic script that is able to copy databases. The goal is to be able to specify any database on some server, copy it to some other location, and have only the specified content copied over. The exact content to be copied is specified in a configuration file. This script is going to be used on some 10 different databases and run weekly. In the end we are copying only about 3%-20% of databases which are as large as 500 GB. I have been using the SMO assemblies to achieve this. This is my first time working with SMO, and it took a while to create a generic way to copy the schema objects, filegroups, etc. (it actually helped find some bad stored procs). Overall I have a working script which is lacking in performance (and at times times out), and I was hoping you guys would be able to help. When executing the WriteToServer command to copy a large amount of data (6 GB), it reaches my timeout period of 1 hour. Here is the core code for copying table data; the script is written in PowerShell:

        $query = ("SELECT * FROM $selectedTable " + $global:selectiveTables.Get_Item($selectedTable)).Trim()
        Write-LogOutput "Copying $selectedTable : '$query'"
        $cmd = New-Object Data.SqlClient.SqlCommand -argumentList $query, $source
        $cmd.CommandTimeout = 120;
        $bulkData = ([Data.SqlClient.SqlBulkCopy]$destination)
        $bulkData.DestinationTableName = $selectedTable;
        $bulkData.BulkCopyTimeout = $global:tableCopyDataTimeout # = 3600
        $reader = $cmd.ExecuteReader();
        $bulkData.WriteToServer($reader); # Takes forever here on large tables

    The source and target databases are located on different servers, so I kept track of the network speed as well. The network utilization never went over 1%, which was quite surprising to me; when I just transfer some large files between the servers, the network utilization spikes up to 10%. I have tried setting $bulkData.BatchSize to 5000, but nothing really changed. Increasing the BulkCopyTimeout to an even greater amount would only solve the timeout. I really would like to know why the network is not being used fully. Has anyone else had this problem? Any suggestions on networking or bulk copy would be appreciated. And please let me know if you need more information. Thanks.

    UPDATE: I have tweaked several options that increase the performance of SqlBulkCopy, such as setting the transaction logging to simple and providing a table lock to SqlBulkCopy instead of the default row lock. Also, some tables are better optimized for certain batch sizes. Overall, the duration of the copy was decreased by some 15%. What we will also do is execute the copy of each database simultaneously on different servers. But I am still having a timeout issue when copying one of the databases. When copying one of the larger databases, there is a table for which I consistently get the following exception:

        System.Data.SqlClient.SqlException: Timeout expired. The timeout period elapsed prior to completion of the operation or the server is not responding.

    It is thrown about 16 after it starts copying the table, which is nowhere near my BulkCopyTimeout. Even though I get the exception, the table is fully copied in the end. Also, if I truncate that table and restart my process for that table only, the table is copied over without any issues. But going through the process of copying the entire database always fails for that one table. I have tried executing the entire process and resetting the connection before copying that faulty table, but it still errored out. My SqlBulkCopy and Reader are closed after each table. Any suggestions as to what else could be causing the script to fail at that point each time?
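
    For reference, a sketch of the tweaks mentioned in the update (table lock plus an explicit batch size), assuming $destination holds a connection string rather than an open connection:

        # SqlBulkCopy(connectionString, options) with a table lock instead of row locks.
        $options = [Data.SqlClient.SqlBulkCopyOptions]::TableLock
        $bulkData = New-Object Data.SqlClient.SqlBulkCopy($destination, $options)
        $bulkData.DestinationTableName = $selectedTable
        $bulkData.BatchSize = 5000
        $bulkData.BulkCopyTimeout = $global:tableCopyDataTimeout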
