Search Results

Search found 1114 results on 45 pages for 'sp executesql'.

Page 14/45

  • SharePoint 2010 MySites - Host on separate servers

    - by Chris W
    We're playing with the SP 2010 Beta ahead of a planned deployment later this year in an academic environment. We anticipate that the majority of traffic will be through MySites once everything is provisioned, so we're looking at how to plan our SP topology to scale nicely. An initial thought is to run the main portal on one server, host "Student" MySites on a second server and "Staff" MySites on a third. Is it actually possible to do this easily, or are we going down a bad path? Specifically: can we have two different MySites site collections, each hosted on a dedicated server? If so, can we configure SharePoint to work out from the user's logon account which type of user they are and route them to the correct server?

    Read the article

  • IE attachEvent on object tag causes memory corruption

    - by larswa
    I have an ActiveX control inside an embedded IE8 HTML page that exposes the event MessageReceived([in] BSTR srcWindowId, [in] BSTR json). On Windows the event is registered with OCX.attachEvent("MessageReceived", onMessageReceivedFunc). The following code fires the event in the HTML page:

        HRESULT Fire_MessageReceived(BSTR id, BSTR json)
        {
            CComVariant varResult;
            T* pT = static_cast<T*>(this);
            int nConnectionIndex;
            CComVariant* pvars = new CComVariant[2];
            int nConnections = m_vec.GetSize();
            for (nConnectionIndex = 0; nConnectionIndex < nConnections; nConnectionIndex++)
            {
                pT->Lock();
                CComPtr<IUnknown> sp = m_vec.GetAt(nConnectionIndex);
                pT->Unlock();
                IDispatch* pDispatch = reinterpret_cast<IDispatch*>(sp.p);
                if (pDispatch != NULL)
                {
                    VariantClear(&varResult);
                    pvars[1] = id;
                    pvars[0] = json;
                    DISPPARAMS disp = { pvars, NULL, 2, 0 };
                    pDispatch->Invoke(0x1, IID_NULL, LOCALE_USER_DEFAULT, DISPATCH_METHOD,
                                      &disp, &varResult, NULL, NULL);
                }
            }
            delete[] pvars; // -> memory corruption here!
            return varResult.scode;
        }

    After I enabled gflags.exe with Application Verifier, the following strange behaviour occurs: after the Invoke() that executes the JavaScript callback, the BSTR from pvars[1] has been copied into pvars[0] for some unknown reason. The delete[] of pvars then causes a double free of the same string, which ends in a heap corruption. Does anybody have an idea what's going on here? Is this an IE bug, or is there a trick within the OCX implementation that I'm missing? If I wire the event up with a <script for> tag instead:

        <script for="OCX" event="MessageReceived(id, json)" language="JavaScript" type="text/javascript">
            window.onMessageReceivedFunc(windowId, json);
        </script>

    ...the strange copy operation does not occur. The following variant also seems to be OK, since the caller of Fire_MessageReceived() is responsible for freeing the BSTRs:

        HRESULT Fire_MessageReceived(BSTR srcWindowId, BSTR json)
        {
            CComVariant varResult;
            T* pT = static_cast<T*>(this);
            int nConnectionIndex;
            VARIANT pvars[2];
            int nConnections = m_vec.GetSize();
            for (nConnectionIndex = 0; nConnectionIndex < nConnections; nConnectionIndex++)
            {
                pT->Lock();
                CComPtr<IUnknown> sp = m_vec.GetAt(nConnectionIndex);
                pT->Unlock();
                IDispatch* pDispatch = reinterpret_cast<IDispatch*>(sp.p);
                if (pDispatch != NULL)
                {
                    VariantClear(&varResult);
                    pvars[1].vt = VT_BSTR;
                    pvars[1].bstrVal = srcWindowId;
                    pvars[0].vt = VT_BSTR;
                    pvars[0].bstrVal = json;
                    DISPPARAMS disp = { pvars, NULL, 2, 0 };
                    pDispatch->Invoke(0x1, IID_NULL, LOCALE_USER_DEFAULT, DISPATCH_METHOD,
                                      &disp, &varResult, NULL, NULL);
                }
            }
            // pvars lives on the stack here, so there is nothing to delete
            return varResult.scode;
        }

    Thanks!

    Read the article

  • Iterative Conversion

    - by stuart ramage
    Question Received: I am toying with the idea of migrating the current information first and the remainder of the history at a later date. I have heard that the conversion tool copes with this, but I haven't found any information on how it does. Answer: The Toolkit will support iterative conversions as long as the original master data key tables (the CK_* tables) are not cleared down from Staging (the already-converted transactional data would need to be cleared down), and the Production instance being migrated into is actually Production (we have migrated into a pre-prod instance in the past and then unloaded this and loaded it into the real PROD instance, but this will not work for your situation: you need to be migrating directly into your intended environment). In this case the migration tool will still know all about the original keys and the generated keys for the primary objects (Account, SA, etc.), and as such it will be able to link the data converted as part of a second pass onto these entities. It should be noted that this may result in the original opening balances potentially being displayed with an incorrect value (if we are talking about Financial Transactions), and also that care will have to be taken to ensure that all related objects are aligned (e.g. a Bill must have a set of bill segments, meter reads and financial transactions, and these entities cannot exist independently). It should also be noted that subsequent runs of the conversion tool would need to be 'trimmed' to ensure that they only do work on the objects affected. You would not want to revalidate and migrate all Person, Account, SA, SA/SP, SP and Premise details, since this information has already been processed, but you would definitely want to run the affected transactional record validation and keygen processes. There is no real hard-and-fast rule around this processing, since it is specific to each implementation's needs, but the majority of the effort required should be detailed in the Conversion Tool section of the online help (under Administration / The Conversion Tool). The major rule is to ensure that you only run the steps and validation/keygen steps that you need, and do not do a complete rerun for your subsequent conversion.

    Read the article

  • SQL SERVER – Weekly Series – Memory Lane – #038

    - by Pinal Dave
    Here is the list of selected articles from SQLAuthority.com across all these years. Instead of just listing all the articles, I have selected a few of my favorites and listed them here with additional notes. Let me know which of the following is your favorite article from memory lane.

    2007

    CASE Statement in ORDER BY Clause – ORDER BY using Variable
    This article was written at the request of the Application Development Team Leader of my company. His team encountered code where the application was preparing a string for the ORDER BY clause of the SELECT statement. The application was passing this string as a variable to a Stored Procedure (SP), and the SP was using EXEC to execute the SQL string. This is not good for performance, as the Stored Procedure has to recompile every time due to EXEC. sp_executesql can do the same task, but it is still not the best option for performance.

    SSMS – View/Send Query Results to Text/Grid/Files
    Results to Text – CTRL + T. Results to Grid – CTRL + D. Results to File – CTRL + SHIFT + F.

    2008

    Introduction to SPARSE Columns Part 2
    I wrote about Introduction to SPARSE Columns Part 1. Let us understand the concept of the SPARSE column in more detail; I suggest you read the first part before continuing with this article. All SPARSE columns are stored as one XML column in the database. Let us see some of the advantages and disadvantages of SPARSE columns.

    Deferred Name Resolution
    How come an SP can be created successfully when the table name is incorrect, but cannot be created when an incorrect column is used?

    2009

    Backup Timeline and Understanding of Database Restore Process in Full Recovery Model
    In general, backups of a database in full recovery model are taken as three different kinds of database files: Full Database Backup, Differential Database Backup and Log Backup.

    Restore Sequence and Understanding NORECOVERY and RECOVERY
    While performing a RESTORE operation, if you are restoring database files, always use the NORECOVERY option, as it keeps the database in a state where more backup files can be restored. It also keeps the database offline to prevent any changes that could create integrity issues. Once all backup files are restored, run the RESTORE command with the RECOVERY option to bring the database online and operational.

    Four Different Ways to Find Recovery Model for Database
    Perhaps the best thing about the technical domain is that most things can be done in more than one way. It is always useful to know the various methods of performing a single task.

    Two Methods to Retrieve List of Primary Keys and Foreign Keys of Database
    When Information Schema is used, we are not able to discern between primary keys and foreign keys; we get both kinds of keys together. In the case of the sys schema, we can query the data in our preferred way and can join the tables involved to retrieve additional data.

    Get Last Running Query Based on SPID
    SPID returns the session ID of the current user process. The acronym SPID comes from the name of its earlier version, Server Process ID.

    2010

    SELECT * FROM dual – Dual Equivalent
    Dual is a table created by Oracle together with the data dictionary. It consists of exactly one column named "dummy" and one record, whose value is X. You can check the content of the DUAL table using the following syntax: SELECT * FROM dual

    Identifying Statistics Used by Query
    Someone asked this question in my training class on query optimization and performance tuning: "Can I know which statistics were used by my query?"

    2011

    SQL SERVER – Interview Questions and Answers – Frequently Asked Questions – Day 14 of 31
    What are the Basic Functions for master, msdb, model, tempdb and resource Databases? What is the Maximum Number of Indexes per Table? Explain a Few of the New Features of SQL Server 2008 Management Studio. Explain IntelliSense for Query Editing. Explain MultiServer Query. Explain Query Editor Regions. Explain Object Explorer Enhancements. Explain Activity Monitor.

    SQL SERVER – Interview Questions and Answers – Frequently Asked Questions – Day 15 of 31
    What is Service Broker? Where are SQL Server Usernames and Passwords Stored in SQL Server? What is Policy Management? What is Database Mirroring? What are Sparse Columns? What does the TOP Operator Do? What is CTE? What is the MERGE Statement? What is a Filtered Index? Which New Data Types were Introduced in SQL Server 2008?

    SQL SERVER – Interview Questions and Answers – Frequently Asked Questions – Day 16 of 31
    What are the Advantages of Using CTE? How can we Rewrite Sub-Queries into Simple Select Statements or with Joins? What is CLR? What are Synonyms? What is LINQ? What are Isolation Levels? What is the Use of the EXCEPT Clause? What is XPath? What is NOLOCK? What is the Difference between an Update Lock and an Exclusive Lock?

    SQL SERVER – Interview Questions and Answers – Frequently Asked Questions – Day 17 of 31
    How will you Handle Errors in SQL Server 2008? What is RAISERROR? How to Rebuild the Master Database? What is the XML Datatype? What is Data Compression? What is the Use of DBCC Commands? How to Copy Tables, Schemas and Views from one SQL Server to Another? How to Find Tables without Indexes?

    SQL SERVER – Interview Questions and Answers – Frequently Asked Questions – Day 18 of 31
    How to Copy Data from One Table to Another Table? What are Catalog Views? What are PIVOT and UNPIVOT? What is a Filestream? What is SQLCMD? What do you mean by TABLESAMPLE? What is ROW_NUMBER()? What are Ranking Functions? What is Change Data Capture (CDC) in SQL Server 2008?

    SQL SERVER – Interview Questions and Answers – Frequently Asked Questions – Day 19 of 31
    How can I Track Changes or Identify the Latest Insert, Update or Delete on a Table? What is CPU Pressure? How can I Get Data from a Database on Another Server? What are Bookmark Lookups and RID Lookups? What is the Difference between ROLLBACK IMMEDIATE and WITH NO_WAIT during ALTER DATABASE? What is the Difference between GETDATE and SYSDATETIME in SQL Server 2008? How can I Check whether Automatic Statistics Update is Enabled or not? How to Find the Index Size for Each Index on a Table? What is the Difference between a Seek Predicate and a Predicate? What are the Basics of Policy Management? What are the Advantages of Policy Management?

    SQL SERVER – Interview Questions and Answers – Frequently Asked Questions – Day 20 of 31
    What are the Policy Management Terms? What is FILLFACTOR? Where in MS SQL Server is 100 Equal to 0? What are Points to Remember while Using the FILLFACTOR Argument? What is a ROLLUP Clause? What are the Various Limitations of Views? What is a Covered Index? When I Delete Data from a Table, does SQL Server Reduce the Size of that Table? What are Wait Types? How to Stop a Log File from Growing too Big? If a Stored Procedure is Encrypted, can we See its Definition in Activity Monitor?

    2012

    Example of Width Sensitive and Width Insensitive Collation
    Width Sensitive Collation: when a single-byte character (half-width) and the same character represented as a double-byte character (full-width) compare as not equal, the collation is width sensitive. In this example we have one table with two columns; one column has a width sensitive collation and the second column has a width insensitive collation.

    Find Column Used in Stored Procedure – Search Stored Procedure for Column Name
    A very interesting conversation about how to find a column used in a stored procedure. There are two different characters in the story, both having a conversation about how to find a column in a stored procedure. Here is the two-part story: Part 1 | Part 2

    SQL SERVER – 2012 Functions – FORMAT() and CONCAT() – An Interesting Usage

    Generate Script for Schema and Data – SQL in Sixty Seconds #021 – Video
    In simple words: in many cases a database has to move from one place to another, and it is not always possible to back up and restore it. There are cases when only part of the database (schema and data) has to be moved. In this video we learn that we can easily generate a script for schema and data and move it from one server to another.

    INFORMATION_SCHEMA.COLUMNS and Value Character Maximum Length -1
    I often see the value -1 in the CHARACTER_MAXIMUM_LENGTH column of the INFORMATION_SCHEMA.COLUMNS table. I understand that the length of any column can be between 0 and some large number, but I do not get it when I see a negative value (i.e. -1). Any insight on this subject?

    Reference: Pinal Dave (http://blog.sqlauthority.com)
    Filed under: Memory Lane, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology
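
    Revisiting the 2007 entry above: the dynamic ORDER BY scenario is exactly where sp_executesql beats EXEC, because the statement text plus parameter declaration can reuse a cached plan. Below is a rough .NET-side sketch of invoking sp_executesql directly; the Orders table, OrderDate/CustomerId columns and connection string are invented for illustration, and the sort column is validated against a whitelist since identifiers cannot be parameterized:

        // Hedged sketch: parameterized statement via sp_executesql from ADO.NET.
        using System;
        using System.Data;
        using System.Data.SqlClient;

        class SortedQuery
        {
            static readonly string[] AllowedSortColumns = { "OrderDate", "CustomerId" };

            static DataTable GetOrders(string connStr, int customerId, string sortColumn)
            {
                // Identifiers cannot be passed as parameters, so whitelist them.
                if (Array.IndexOf(AllowedSortColumns, sortColumn) < 0)
                    throw new ArgumentException("Unsupported sort column", "sortColumn");

                using (SqlConnection con = new SqlConnection(connStr))
                using (SqlCommand cmd = new SqlCommand("sp_executesql", con))
                {
                    cmd.CommandType = CommandType.StoredProcedure;
                    cmd.Parameters.AddWithValue("@stmt",
                        "SELECT * FROM Orders WHERE CustomerId = @id ORDER BY " + sortColumn);
                    cmd.Parameters.AddWithValue("@params", "@id int");
                    cmd.Parameters.AddWithValue("@id", customerId);

                    DataTable dt = new DataTable();
                    con.Open();
                    dt.Load(cmd.ExecuteReader());
                    return dt;
                }
            }
        }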

    Read the article

  • Procedure or function has too many arguments specified

    - by bullpit
    This error took me a while to figure out. I was using a SqlDataSource with stored procedures for the SELECT and UPDATE commands. The SELECT worked fine. Then I added the UPDATE command, and while trying to update I would get this error: "Procedure or function has too many arguments specified". Apparently the good guys at .NET decided it is necessary to send the SELECT parameters with the UPDATE command as well. So when I was sending the required parameters to the UPDATE sp, in reality it was also getting my SELECT parameters, and thus failing. I had to add the extra parameters to the UPDATE stored procedure and make them NULLABLE so that they are not required... phew. Here is a piece of the SP with the unused parameters:

        ALTER PROCEDURE [dbo].[UpdateMaintenanceRecord]
            @RecordID INT
            ,@District VARCHAR(255)
            ,@Location VARCHAR(255)
            -- UNUSED PARAMETERS
            ,@MTBillTo VARCHAR(255) = NULL
            ,@SerialNumber VARCHAR(255) = NULL
            ,@PartNumber VARCHAR(255) = NULL

    Update: I was getting that error because, unknowingly, I had bound the extra fields in my GridView with Bind() calls. I changed all the extra ones, which did not need to be in the update, to Eval() and everything works fine.
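
    If changing the stored procedure (or the markup) is not an option, another route is to strip the unwanted parameters in the SqlDataSource's Updating event before the command runs. A hedged sketch, assuming the extra parameters are known by name (the names below are taken from the proc above):

        // Hedged sketch: remove SELECT-only parameters before the UPDATE executes.
        // Wire up via <asp:SqlDataSource ... OnUpdating="SqlDataSource1_Updating">.
        protected void SqlDataSource1_Updating(object sender, SqlDataSourceCommandEventArgs e)
        {
            string[] selectOnly = { "@MTBillTo", "@SerialNumber", "@PartNumber" };
            foreach (string name in selectOnly)
            {
                if (e.Command.Parameters.Contains(name))
                    e.Command.Parameters.RemoveAt(name);
            }
        }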

    Read the article

  • Jittery Movement, Uncontrollably Rotating + Front of Sprite?

    - by Vipar
    So I've been looking around to try and figure out how to make my sprite face my mouse. So far the sprite moves to where my mouse is via some vector math; now I'd like it to rotate and face the mouse as it moves. From what I've found, this calculation keeps reappearing: Sprite Rotation = Atan2(Direction Vector's Y, Direction Vector's X). I express it like so: sp.Rotation = (float)Math.Atan2(directionV.Y, directionV.X); If I just go with the above, the sprite seems to jitter left and right ever so slightly but never rotates out of that position. Seeing as Atan2 returns the rotation in radians, I found another piece of calculation to add to the above, which turns it into degrees: sp.Rotation = (float)Math.Atan2(directionV.Y, directionV.X) * 180 / PI; Now the sprite rotates. The problem is that it spins uncontrollably the closer it comes to the mouse. One of the problems with the above calculation is that it assumes +y goes up rather than down on the screen. As I recorded in these two videos, the first part shows the slightly jittery movement (a lot more visible when not recording) and then the added rotation: Jittery Movement. So my questions are: How do I fix that weird jittery movement when the sprite stands still? Some have suggested a "snap" where I set the position of the sprite directly to the mouse position when it's really close, but no matter what I do the snapping is noticeable. How do I make the sprite stop spinning uncontrollably? Is it possible to simply define the front of the sprite and use that to make it "face" the right way?
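
    For what it's worth, in XNA-style APIs the rotation passed to the draw call is expected in radians, so the *180/PI conversion is the usual culprit behind the spinning; the jitter near the cursor typically comes from normalizing a near-zero direction vector. A hedged sketch along those lines (the Speed and DeadZone constants are assumptions to tune):

        // Hedged sketch: face-the-mouse update assuming an XNA-style sprite
        // whose Rotation is in radians (0 = facing right as drawn).
        using System;
        using Microsoft.Xna.Framework;

        class FaceMouseSprite
        {
            public Vector2 Position;
            public float Rotation;       // radians
            const float Speed = 3f;      // px per update (assumption)
            const float DeadZone = 4f;   // px; suppresses jitter near the cursor (assumption)

            public void Update(Vector2 mouse)
            {
                Vector2 directionV = mouse - Position;

                if (directionV.Length() > DeadZone)
                {
                    // Math.Atan2 already returns radians, which is what the
                    // rotation expects; multiplying by 180/PI causes the spin.
                    // Screen Y grows downward, which this convention absorbs.
                    Rotation = (float)Math.Atan2(directionV.Y, directionV.X);

                    directionV.Normalize();
                    Position += directionV * Speed;
                }
                // Inside the dead zone: keep the last rotation and position,
                // so the sprite neither jitters nor visibly snaps.
            }
        }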

    Read the article

  • How can I get the following compiled on UVA?

    - by Michael Tsang
    Note the comment below: the code cannot be compiled on UVA because of a bug in GCC.

        #include <cstdio>
        #include <cstring>
        #include <cctype>
        #include <map>
        #include <stdexcept>

        class Board {
        public:
            bool read(FILE *);
            enum Colour {none, white, black};
            Colour check() const;
        private:
            struct Index {
                size_t x;
                size_t y;
                Index &operator+=(const Index &) throw(std::range_error);
                Index operator+(const Index &) const throw(std::range_error);
            };
            const static std::size_t size = 8;
            char data[size][size];

            // Cannot be compiled on GCC 4.1.2 due to GCC bug 29993
            // http://gcc.gnu.org/bugzilla/show_bug.cgi?id=29993
            typedef bool CheckFunction(Colour, const Index &) const;
            CheckFunction pawn, knight, bishop, king, rook;

            bool queen(const Colour c, const Index &location) const {
                return rook(c, location) || bishop(c, location);
            }
            static char get_king(Colour c) {
                return c == white ? 'k' : 'K';
            }
            template<std::size_t n>
            bool check_consecutive(Colour c, const Index &location, const Index (&offsets)[n]) const {
                for(const Index *p = offsets; p != (&offsets)[1]; ++p) {
                    try {
                        Index target = location + *p;
                        for(; data[target.x][target.y] == '.'; target += *p) {
                        }
                        if(data[target.x][target.y] == get_king(c))
                            return true;
                    } catch(std::range_error &) {
                    }
                }
                return false;
            }
            template<std::size_t n>
            bool check_distinct(Colour c, const Index &location, const Index (&offsets)[n]) const {
                for(const Index *p = offsets; p != (&offsets)[1]; ++p) {
                    try {
                        Index target = location + *p;
                        if(data[target.x][target.y] == get_king(c))
                            return true;
                    } catch(std::range_error &) {
                    }
                }
                return false;
            }
        };

        int main()
        {
            Board board;
            for(int d = 1; board.read(stdin); ++d) {
                Board::Colour c = board.check();
                const char *sp;
                switch(c) {
                case Board::black: sp = "white"; break;
                case Board::white: sp = "black"; break;
                case Board::none: sp = "no"; break;
                }
                std::printf("Game #%d: %s king is in check.\n", d, sp);
                std::getchar(); // discard empty line
            }
        }

        bool Board::read(FILE *f)
        {
            static const char empty[] = "........" "........" "........" "........"
                                        "........" "........" "........" "........"; // 64 dots
            for(char (*p)[size] = data; p != (&data)[1]; ++p) {
                std::fread(*p, size, 1, f);
                std::fgetc(f); // discard new-line
            }
            return std::memcmp(empty, data, sizeof data);
        }

        Board::Colour Board::check() const
        {
            std::map<char, CheckFunction Board::*> fp;
            fp['P'] = &Board::pawn;
            fp['N'] = &Board::knight;
            fp['B'] = &Board::bishop;
            fp['Q'] = &Board::queen;
            fp['K'] = &Board::king;
            fp['R'] = &Board::rook;
            for(std::size_t i = 0; i != size; ++i) {
                for(std::size_t j = 0; j != size; ++j) {
                    CheckFunction Board::* p = fp[std::toupper(data[i][j])];
                    if(p) {
                        Colour ret;
                        if(std::isupper(data[i][j]))
                            ret = white;
                        else
                            ret = black;
                        if((this->*p)(ret, (Index){i, j} /* C99 extension */))
                            return ret;
                    }
                }
            }
            return none;
        }

        bool Board::pawn(const Colour c, const Index &location) const
        {
            const std::ptrdiff_t sh = c == white ? -1 : 1;
            const Index offsets[] = { {sh, 1}, {sh, -1} };
            return check_distinct(c, location, offsets);
        }

        bool Board::knight(const Colour c, const Index &location) const
        {
            static const Index offsets[] = { {1, 2}, {2, 1}, {2, -1}, {1, -2},
                                             {-1, -2}, {-2, -1}, {-2, 1}, {-1, 2} };
            return check_distinct(c, location, offsets);
        }

        bool Board::bishop(const Colour c, const Index &location) const
        {
            static const Index offsets[] = { {1, 1}, {1, -1}, {-1, -1}, {-1, 1} };
            return check_consecutive(c, location, offsets);
        }

        bool Board::rook(const Colour c, const Index &location) const
        {
            static const Index offsets[] = { {1, 0}, {0, -1}, {0, 1}, {-1, 0} };
            return check_consecutive(c, location, offsets);
        }

        bool Board::king(const Colour c, const Index &location) const
        {
            static const Index offsets[] = { {-1, -1}, {-1, 0}, {-1, 1}, {0, 1},
                                             {1, 1}, {1, 0}, {1, -1}, {0, -1} };
            return check_distinct(c, location, offsets);
        }

        Board::Index &Board::Index::operator+=(const Index &rhs) throw(std::range_error)
        {
            if(x + rhs.x >= size || y + rhs.y >= size)
                throw std::range_error("result is larger than size");
            x += rhs.x;
            y += rhs.y;
            return *this;
        }

        Board::Index Board::Index::operator+(const Index &rhs) const throw(std::range_error)
        {
            Index ret = *this;
            return ret += rhs;
        }

    Read the article

  • Performing a clean database creation using msbuild

    - by Robert May
    So I’m taking a break from writing about other Agile stuff for a post. :) I’m still going to get back to the other subjects, but this is fun too. Something I’ve done quite a bit of is MSBuild and CI work, and I’m experimenting with ways to improve what I’ve done in the past, particularly around database CI.

    Today I developed a mechanism for starting from scratch with your database. By scratch, I mean blowing away the existing database and creating it again from a single command line call. I’m a firm believer that developers should be able to get to a known clean state at the database level with a single command, and that they should be operating off of their own isolated database to improve productivity. These scripts will help with that.

    Here’s how I did it. First, we have to disconnect users. I did so using the help of a script from SQL Server Central. Note that I’m using sqlcmd variable replacement.

        -- kills all the users in a particular database
        -- dlhatheway/3M, 11-Jun-2000
        declare @arg_dbname sysname
        declare @a_spid smallint
        declare @msg varchar(255)
        declare @a_dbid int

        set @arg_dbname = '$(DatabaseName)'

        select @a_dbid = sdb.dbid
        from master..sysdatabases sdb
        where sdb.name = @arg_dbname

        declare db_users insensitive cursor for
            select sp.spid
            from master..sysprocesses sp
            where sp.dbid = @a_dbid

        open db_users

        fetch next from db_users into @a_spid
        while @@fetch_status = 0
        begin
            select @msg = 'kill ' + convert(char(5), @a_spid)
            print @msg
            execute (@msg)
            fetch next from db_users into @a_spid
        end

        close db_users
        deallocate db_users
        GO

    Once all users are booted from the database, we can commence with recreating it. I generated the script that creates a database from SQL Server Management Studio, so I’m only going to show the bits that weren’t generated and are important (there are a bunch of ALTER DATABASE statements that aren’t shown). First, I had to find the default location of the database files in the install, since they can be in many different locations. I used Method 1 from a TechNet blog and then modified it a bit to do what I needed. I ended up using dynamic SQL because, for the life of me, I couldn’t get the "Filename" property to not return an error when I used anything besides a string. I’m dropping the database first, if it exists. Here’s the code:

        IF EXISTS(SELECT 1 FROM [master].[sys].[databases] WHERE [name] = N'$(DatabaseName)')
        BEGIN
            DROP DATABASE $(DatabaseName)
        END;
        GO

        IF EXISTS(SELECT 1 FROM [master].[sys].[databases] WHERE [name] = 'zzTempDBForDefaultPath')
        BEGIN
            DROP DATABASE zzTempDBForDefaultPath
        END;

        -- Create temp database. Because no options are given, the default data and
        -- log path locations are used
        CREATE DATABASE zzTempDBForDefaultPath;

        DECLARE @Default_Data_Path VARCHAR(512),
                @Default_Log_Path VARCHAR(512);

        --Get the default data path
        SELECT @Default_Data_Path =
        (
            SELECT LEFT(physical_name, LEN(physical_name) - CHARINDEX('\', REVERSE(physical_name)) + 1)
            FROM sys.master_files mf
            INNER JOIN sys.[databases] d ON mf.[database_id] = d.[database_id]
            WHERE d.[name] = 'zzTempDBForDefaultPath' AND type = 0
        );

        --Get the default log path
        SELECT @Default_Log_Path =
        (
            SELECT LEFT(physical_name, LEN(physical_name) - CHARINDEX('\', REVERSE(physical_name)) + 1)
            FROM sys.master_files mf
            INNER JOIN sys.[databases] d ON mf.[database_id] = d.[database_id]
            WHERE d.[name] = 'zzTempDBForDefaultPath' AND type = 1
        );

        --Clean up.
        IF EXISTS(SELECT 1 FROM [master].[sys].[databases] WHERE [name] = 'zzTempDBForDefaultPath')
        BEGIN
            DROP DATABASE zzTempDBForDefaultPath
        END;

        DECLARE @SQL nvarchar(max)
        SET @SQL =
            'CREATE DATABASE $(DatabaseName) ON PRIMARY
             ( NAME = N''$(DatabaseName)'',
               FILENAME = N''' + @Default_Data_Path + N'$(DatabaseName)' + '.mdf' + ''',
               SIZE = 2048KB, FILEGROWTH = 1024KB )
             LOG ON
             ( NAME = N''$(DatabaseName)Log'',
               FILENAME = N''' + @Default_Log_Path + N'$(DatabaseName)' + '.ldf' + ''',
               SIZE = 1024KB, FILEGROWTH = 10%)'
        exec (@SQL)
        GO

    And with that, your database is created. You can run these scripts on any server and with any database name. To do that, I created an MSBuild script that looks like this:

        <Project xmlns="http://schemas.microsoft.com/developer/msbuild/2003" ToolsVersion="4.0">
          <PropertyGroup>
            <DatabaseName>MyDatabase</DatabaseName>
            <Server>localhost</Server>
            <SqlCmd>sqlcmd -v DatabaseName=$(DatabaseName) -S $(Server) -i </SqlCmd>
            <ScriptDirectory>.\Scripts</ScriptDirectory>
          </PropertyGroup>
          <Target Name="Rebuild">
            <ItemGroup>
              <ScriptFiles Include="$(ScriptDirectory)\*.sql"/>
            </ItemGroup>
            <Exec Command="$(SqlCmd) &quot;%(ScriptFiles.Identity)&quot;" ContinueOnError="false"/>
          </Target>
        </Project>

    Note that the Scripts directory is underneath the directory where I’m running the msbuild command and is relative to that directory. Note also that the target uses batching to run each script in the Scripts subdirectory, one after the other; each script is passed to the sqlcmd command line using the .Identity property on the item group that is created. This target file is saved as "Database.target". To make this work, you’ll need msbuild in your path, and then run the following command:

        msbuild database.target /target:Rebuild

    Once you’ve got your virgin database set up, you’d then need to use a tool like dbdeploy.net to determine that it is a virgin database, build a change script based on the change scripts, and then make another sqlcmd call to update the database with the appropriate scripts. I’m doing that next, so I’ll post a blog update when I’ve got it working.

    Technorati Tags: MSBuild, Agile, CI, Database

    Read the article

  • User-Defined Customer Events & their impact (FA Type Profile)

    - by Rajesh Sharma
    CC&B automatically creates field activities when a specific Customer Event takes place. This depends on the way you have set up your Field Activity Type Profiles, the templates within, and the associated SP Condition(s) on the template. CC&B uses the service point type, its state and the referenced customer event to determine which field activity type to generate.

    Customer events available in the base product include: Cut for Non-payment (CNP), Disconnect Warning (DIWA), Reconnect for Payment (REPY), Reread (RERD), Stop Service (STOP), Start Service (STRT) and Start/Stop (STSP). Note the field values/codes defined for each event.

    CC&B comes with the flexibility to define a new set of customer events. These can be defined in the Look Up CUST_EVT_FLG; values from the Look Up are used on the Field Activity Type Profile Template page.

    So what's the use of having user-defined Customer Events, and how will the system detect such events in order to create field activity(s)? Well, the system can only detect such events when you reference a user-defined customer event on a Severance Event Type for an event type of Create Field Activities. This way you can create additional field activities of a specific field activity type for user-defined customer events.

    One of our customers adopted this feature and created a user-defined customer event, CNPW - Cut for Non-payment for Water Services. This event was then linked on a Field Activity Type Profile and referenced on a Severance Event, CUT FOR NON PAY-W. The associated Severance Process was configured to trigger a reconnection process if it was cancelled (done by defining a Post Cancel Algorithm). Whenever this Severance Event was executed, a specific type of Field Activity was generated for disconnection purposes; the Field Activity type was determined by the system from the Field Activity Type Profile referenced for the SP Type, the SP's state and the referenced user-defined customer event. All was working well until they realized that, in spite of the Severance Process getting cancelled (when a payment was made), the Post Cancel Algorithm was not executed to start a Reconnection Severance Process for the purpose of generating a reconnection field activity and reconnecting the service.

    Basically, the Post Cancel algorithm (if specified on a Severance Process Template) is triggered when a Severance Process gets cancelled because a credit transaction has affected/relieved a Service Agreement's debt.

    So what exactly was happening? Now we come to the actual question: what is the impact of having a user-defined customer event?

    System-defined/base customer events are hard-coded across the entire system; there is an impact even if you remove a customer event entry from the Look Up. User-defined customer events, by contrast, are not recognized by the system anywhere else except in the severance process, as described above.

    There are a few programs with routines that first validate the completion of disconnection field activities raised as a result of the customer event CNP - Cut for Non-payment, in order to perform other associated actions. One such program is the Post Cancel Algorithm, referenced on a Severance Process Template, generally used to reconnect services which were disconnected by another Severance Event, specifically CNP - Cut for Non-Payment.
    The post cancel algorithm provided by the product, SEV POST CAN, does the following (below is the algorithm's description): This algorithm is called after a severance process has been cancelled (typically because the debt was paid and the SA is no longer eligible to be on the severance process). It checks to see if the process has a completed 'disconnect' event and, if so, starts a reconnect process using the Reconnect Severance Process Template defined in the parameter. Notice the underlined text: this algorithm implicitly checks for field activities in Completed status that were generated from Severance Events as a result of the CNP - Cut for Non-payment customer event.

    Now if we look back at the customer's issue, we can see that the Post Cancel algorithm was triggered, but was not able to find any completed CNP - Cut for Non-payment related field activity, and hence was not able to start a reconnection severance process. This was because a field activity had been generated and completed for the customer event CNPW - Cut for Non-payment of Water Services instead.

    To conclude: if you introduce new customer events that extend or simulate base customer events (the ones included in the base product), ensure that there is no other impact, direct or indirect, on the other business functions that the application has to offer.

    Read the article

  • Limitations of User-Defined Customer Events (FA Type Profile)

    - by Rajesh Sharma
    CC&B automatically creates field activities when a specific Customer Event takes place. This depends on the way you have set up your Field Activity Type Profiles, the templates within, and the associated SP Condition(s) on the template. CC&B uses the service point type, its state and the referenced customer event to determine which field activity type to generate.

    Customer events available in the base product include: Cut for Non-payment (CNP), Disconnect Warning (DIWA), Reconnect for Payment (REPY), Reread (RERD), Stop Service (STOP), Start Service (STRT) and Start/Stop (STSP). Note the field values/codes defined for each event.

    CC&B comes with the flexibility to define a new set of customer events. These can be defined in the Look Up CUST_EVT_FLG; values from the Look Up are used on the Field Activity Type Profile Template page.

    So what's the use of having user-defined Customer Events, and how will the system detect such events in order to create field activity(s)? Well, the system can only detect such events when you reference a user-defined customer event on a Severance Event Type for an event type of Create Field Activities. This way you can create additional field activities of a specific field activity type for user-defined customer events.

    One of our customers adopted this feature and created a user-defined customer event, CNPW - Cut for Non-payment for Water Services. This event was then linked on a Field Activity Type Profile and referenced on a Severance Event, CUT FOR NON PAY-W. The associated Severance Process was configured to trigger a reconnection process if it was cancelled (done by defining a Post Cancel Algorithm). Whenever this Severance Event was executed, a specific type of Field Activity was generated for disconnection purposes; the Field Activity type was determined by the system from the Field Activity Type Profile referenced for the SP Type, the SP's state and the referenced user-defined customer event. All was working well until they realized that, in spite of the Severance Process getting cancelled (when a payment was made), the Post Cancel Algorithm was not executed to start a Reconnection Severance Process for the purpose of generating a reconnection field activity and reconnecting the service.

    Basically, the Post Cancel algorithm (if specified on a Severance Process Template) is triggered when a Severance Process gets cancelled because a credit transaction has affected/relieved a Service Agreement's debt.

    So what exactly was happening? Now we come to the actual question: what are the limitations of having a user-defined customer event?

    System-defined/base customer events are hard-coded across the entire system; there is an impact even if you remove a customer event entry from the Look Up. User-defined customer events, by contrast, are not recognized by the system anywhere else except in the severance process, as described above.

    There are a few programs with routines that first validate the completion of disconnection field activities raised as a result of the customer event CNP - Cut for Non-payment, in order to perform other associated actions. One such program is the Post Cancel Algorithm, referenced on a Severance Process Template, generally used to reconnect services which were disconnected by another Severance Event, specifically CNP - Cut for Non-Payment.
    The post cancel algorithm provided by the product, SEV POST CAN, does the following (below is the algorithm's description): This algorithm is called after a severance process has been cancelled (typically because the debt was paid and the SA is no longer eligible to be on the severance process). It checks to see if the process has a completed 'disconnect' event and, if so, starts a reconnect process using the Reconnect Severance Process Template defined in the parameter. Notice the underlined text: this algorithm implicitly checks for field activities in Completed status that were generated from Severance Events as a result of the CNP - Cut for Non-payment customer event.

    Now if we look back at the customer's issue, we can see that the Post Cancel algorithm was triggered, but was not able to find any completed CNP - Cut for Non-payment related field activity, and hence was not able to start a reconnection severance process. This was because a field activity had been generated and completed for the customer event CNPW - Cut for Non-payment of Water Services instead.

    To conclude: if you introduce new customer events, you should be aware that you must not extend or simulate base customer events (the ones included in the base product), as they are further used to provide/validate additional business functions.

    Read the article

  • Can I use a parameter multiple times in the same query?

    - by djerry
    Hey guys, I was wondering: can a parameter be used more than once in the same query, like this?

        MySqlParameter oPar0 = new MySqlParameter("e164", MySqlDbType.String);
        oPar0.Value = user.E164;
        string sSQL0 = "Delete from callmone.call where (caller=?e164 or called=?e164);";
        clsDatabase.ExecuteSQL(sSQL0, oPar0);

    Is this possible, or should I write two parameters?
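
    For what it's worth, MySQL Connector/NET binds parameters by name, so a single parameter referenced twice in the statement text generally resolves both placeholders; if a data-access wrapper (like clsDatabase here) binds positionally instead, the safe fallback is two parameters carrying the same value. A hedged sketch of the by-name case, with connection handling left out:

        // Hedged sketch: one named parameter used in two places in the SQL text.
        using MySql.Data.MySqlClient;

        static int DeleteCalls(MySqlConnection con, string e164)
        {
            string sql = "DELETE FROM callmone.call WHERE caller = ?e164 OR called = ?e164;";
            using (MySqlCommand cmd = new MySqlCommand(sql, con))
            {
                // Bound by name: both ?e164 placeholders resolve to this parameter.
                cmd.Parameters.AddWithValue("?e164", e164);
                return cmd.ExecuteNonQuery();
            }
        }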

    Read the article

  • How do I solve the "must declare a body because it is not marked abstract, extern, or partial" error?

    - by programmerist
    How can I solve this "must declare a body because it is not marked abstract, extern, or partial" problem? Can you offer some advice? The full error message concerns the Save, Update, Delete and Select methods. A sample: 'GenoTip.DAL._AccessorForSQL.Save(string, System.Collections.Specialized.ListDictionary, System.Data.CommandType)' must declare a body because it is not marked abstract, extern, or partial. The same error is also reported for Update, Delete and Select.

        public abstract class _AccessorForSQL
        {
            public virtual bool Save(string sp, ListDictionary ld, CommandType cmdType);
            public virtual bool Update();
            public virtual bool Delete();
            public virtual DataSet Select();
        }

        class GenAccessor : _AccessorForSQL
        {
            DataSet ds;
            DataTable dt;

            public override bool Save(string sp, ListDictionary ld, CommandType cmdType)
            {
                SqlConnection con = null;
                SqlCommand cmd = null;
                SqlDataReader dr = null;
                try
                {
                    con = GetConnection();
                    cmd = new SqlCommand(sp, con);
                    con.Open();
                    cmd.CommandType = cmdType;
                    foreach (string ky in ld.Keys)
                    {
                        cmd.Parameters.AddWithValue(ky, ld[ky]);
                    }
                    dr = cmd.ExecuteReader();
                    ds = new DataSet();
                    dt = new DataTable();
                    ds.Tables.Add(dt);
                    ds.Load(dr, LoadOption.OverwriteChanges, dt);
                }
                catch (Exception exp)
                {
                    HttpContext.Current.Trace.Warn("Error in GetCustomerByID()", exp.Message, exp);
                }
                finally
                {
                    if (dr != null) dr.Close();
                    if (con != null) con.Close();
                }
                return (ds.Tables[0].Rows.Count > 0) ? true : false;
            }

            public override bool Update() { return true; }
            public override bool Delete() { return true; }
            public override DataSet Select()
            {
                DataSet dst = new DataSet();
                return dst;
            }

            private static SqlConnection GetConnection()
            {
                string connStr = WebConfigurationManager.ConnectionStrings["ConnectionString"].ConnectionString;
                SqlConnection conn = new SqlConnection(connStr);
                return conn;
            }
        }
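
    The compiler complains because a virtual member promises an implementation: virtual methods must have bodies, while only abstract (or extern/partial) members may omit them. A minimal sketch of the two ways out, using the same ListDictionary/DataSet types as the question (the second class name is invented to show both options side by side):

        using System.Collections.Specialized;
        using System.Data;

        // Option 1: declare the members abstract - no body, and every concrete
        // derived class (like GenAccessor above) is forced to override them.
        public abstract class _AccessorForSQL
        {
            public abstract bool Save(string sp, ListDictionary ld, CommandType cmdType);
            public abstract bool Update();
            public abstract bool Delete();
            public abstract DataSet Select();
        }

        // Option 2: keep them virtual but give each a default body that
        // derived classes may override selectively.
        public abstract class _AccessorForSQLWithDefaults
        {
            public virtual bool Save(string sp, ListDictionary ld, CommandType cmdType) { return false; }
            public virtual bool Update() { return false; }
            public virtual bool Delete() { return false; }
            public virtual DataSet Select() { return new DataSet(); }
        }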

    Read the article

  • Call stored procedure using Spring SimpleJdbcCall with callback

    - by MFIhsan
    I have an Oracle Stored procedure that takes CLOB input and REFCURSOR output. I invoke the SP via Spring SimpleJdbcCall passing in a RowMapper to map the results. However, since the result set is large, I need to provide callback feature to the client. I can't quite figure out how to add callback for an SP call using Spring - both with and without SimpleJdbcCall. One thought I have is to pass-in a RowCallbackHandler. Will this work or is there a better way to solve this problem? Any help here is appreciated.

        private Map<String, Object> arguments = ...;

        SimpleJdbcCall jdbcCall = new SimpleJdbcCall(this.jdbcTemplate)
            .withCatalogName(this.packageName)
            .withProcedureName(this.storedProcName)
            .withoutProcedureColumnMetaDataAccess()
            .declareParameters(this.outputParameters.toArray(new SqlOutParameter[]{}));

        if (!isEmpty(inputParameters)) {
            jdbcCall.declareParameters(inputParameters.toArray(new SqlParameter[]{}));
        }

        this.outputParameters.add(new SqlOutParameter(outputParamName, VARCHAR, rowMapper));

        jdbcCall.execute(arguments);

    Read the article

  • ScrollPane has type of "movieclip" in attached movieclip

    - by Chris Porter
        var spw:MovieClip = contentsLayer.attachMovie("ScrollPaneWrapper", "ScrollPaneWrapper123", contentsLayer.getNextHighestDepth());
        var sp_:ScrollPane = spw.sp;

    Here typeof(sp_) == "movieclip" and I can't set any content to it. I've tried exporting it for ActionScript, exporting the wrapper movieclip "ScrollPaneWrapper", "Export in Frame 1", and all combinations of these options. What's more weird is that I have another Flash project in which I can access the ScrollPane as expected, and I can't tell any differences between the two projects. Casting spw.sp to ScrollPane results in null.

    Read the article

  • Constructing dynamic columns from parameters in Sybase

    - by Chapax
    Hi, I'm trying to write a stored proc (SP) in Sybase. The SP takes 5 varchar parameters. Based on the parameters passed, I want to construct the column names to be selected from a particular table. The below works:

        DECLARE @TEST VARCHAR(50)
        SELECT @TEST = "country"
        --print @TEST
        execute("SELECT DISTINCT id_country AS id_level, Country AS nm_level
                 FROM tempdb..tbl_books
                 INNER JOIN (tbl_ch2_bespoke_report
                 INNER JOIN tbl_ch2_bespoke_rpt_mapping
                 ON tbl_ch2_bespoke_report.id_report = tbl_ch2_bespoke_rpt_mapping.id_report)
                 ON id_" + @TEST + " = tbl_ch2_bespoke_rpt_mapping.id_pnl_level
                 WHERE tbl_ch2_bespoke_report.id_report = 14")

    but gives me multiple results:

        1 row(s) affected.
        id_level  nm_level
        1  4376   XYZ
        2  4340   ABC

    I would, however, like to obtain only the 2nd result. Do I necessarily need to use dynamic SQL to achieve this? Many thanks for your help. --Chapax

    Read the article

  • Convert XML attributes to a Dictionary in Linq to XML

    - by NateD
    I've got a program that needs to convert two attributes of a particular tag into the key and value of a Dictionary<int,string>. The XML looks like this (fragment): <startingPoint coordinates="1,1" player="1" /> and so far my LINQ looks something like this:

        XNamespace ns = "http://the_namespace";
        var startingpoints = from sp in xml.Elements(ns + "startingPoint")
                             from el in sp.Attributes()
                             select el.Value;

    which gets me a nice IEnumerable full of things like "1,1" and "1", but there should be a way to adapt something like this answer to do attributes instead of elements. A little help, please? Thank you!
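
    One way this is commonly done is with ToDictionary over the elements themselves, casting each attribute to the target type. A hedged sketch, assuming player is meant to be the int key and coordinates the string value (swap the two lambdas if the mapping runs the other way):

        // Hedged sketch: one dictionary entry per <startingPoint> element.
        using System;
        using System.Collections.Generic;
        using System.Linq;
        using System.Xml.Linq;

        static class StartingPoints
        {
            public static Dictionary<int, string> Parse(XElement xml)
            {
                XNamespace ns = "http://the_namespace";
                return xml.Elements(ns + "startingPoint")
                          .ToDictionary(
                              sp => (int)sp.Attribute("player"),        // key
                              sp => (string)sp.Attribute("coordinates")); // value
            }
        }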

    Read the article

  • Regression tests for T-SQL stored procedures

    - by Achim
    Hi, I would like to regression test T-SQL stored procedures. My idea is to specify, for each SP, multiple input parameter sets. The SP should be executed with these parameters and the results written to disk; on the next run, the new results should be compared with the results stored before. Does anybody know a good tool for something like that? It should not be that hard to implement, but in practice you will need functionality like "ignore that column". And I would assume that such a tool already exists!? cheers, Achim
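
    In case no ready-made tool surfaces, a bare-bones harness along these lines shows how little is actually needed; this is a rough sketch only (proc names and the pipe-delimited serialization are arbitrary choices, and the "ignore that column" feature is left out):

        // Hedged sketch: run an SP, serialize the result set, diff against a baseline file.
        using System;
        using System.Data;
        using System.Data.SqlClient;
        using System.IO;
        using System.Text;

        static class SpRegression
        {
            public static string Capture(string connStr, string proc, params SqlParameter[] args)
            {
                using (SqlConnection con = new SqlConnection(connStr))
                using (SqlCommand cmd = new SqlCommand(proc, con))
                {
                    cmd.CommandType = CommandType.StoredProcedure;
                    cmd.Parameters.AddRange(args);

                    DataTable dt = new DataTable();
                    con.Open();
                    dt.Load(cmd.ExecuteReader());

                    StringBuilder sb = new StringBuilder();
                    foreach (DataRow row in dt.Rows)
                        sb.AppendLine(string.Join("|",
                            Array.ConvertAll(row.ItemArray, o => Convert.ToString(o))));
                    return sb.ToString();
                }
            }

            public static bool Check(string testName, string actual)
            {
                string baseline = testName + ".baseline.txt";
                if (!File.Exists(baseline))          // first run: record the baseline
                {
                    File.WriteAllText(baseline, actual);
                    return true;
                }
                return File.ReadAllText(baseline) == actual;  // false => regression
            }
        }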

    Read the article

  • similarity between strings - sql server 2005

    - by csetzkorn
    Hi, I am looking for a simple way (a UDF?) to establish the similarity between strings. The SOUNDEX and DIFFERENCE functions do not seem to do the job. Similarity should be based on the number of characters in common (order matters). For example, Spiruroidea sp. AM-2008 and Spiruroidea gen. sp. AM-2008 should be recognised as similar. Any pointers would be very much appreciated. Thanks. Christian
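
    Edit-distance metrics fit that description: the classic Levenshtein distance counts the insertions, deletions and substitutions needed to turn one string into the other, so order matters. A sketch in C# (which could be wrapped as a SQL Server 2005 CLR UDF; a pure T-SQL version is also feasible but slower):

        // Hedged sketch: dynamic-programming Levenshtein distance.
        // Lower result = more similar; divide by the longer length to normalize.
        using System;

        static class StringSimilarity
        {
            public static int Levenshtein(string a, string b)
            {
                int[,] d = new int[a.Length + 1, b.Length + 1];

                for (int i = 0; i <= a.Length; i++) d[i, 0] = i;
                for (int j = 0; j <= b.Length; j++) d[0, j] = j;

                for (int i = 1; i <= a.Length; i++)
                    for (int j = 1; j <= b.Length; j++)
                    {
                        int cost = a[i - 1] == b[j - 1] ? 0 : 1;
                        d[i, j] = Math.Min(Math.Min(
                            d[i - 1, j] + 1,           // deletion
                            d[i, j - 1] + 1),          // insertion
                            d[i - 1, j - 1] + cost);   // substitution
                    }
                return d[a.Length, b.Length];
            }
        }

    On the example pair it returns 5 (the inserted "gen. "), which is small relative to the 23- and 28-character inputs, so the two strings score as similar.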

    Read the article

  • Subsonic connections and dataset

    - by ajk
    Hi, if using SubSonic, I know you have to close a DataReader, but what if you get a DataSet back from a stored procedure? What can you close? So if it's like this:

        SubSonic.StoredProcedure sp = ...;
        DataSet ds = sp.GetDataSet();

    what do I close when done? I don't think a DataSet can be closed, right? This is SubSonic 2.x. Sorry if this was posted already; I tried to post earlier but got an error message and then couldn't find it, so am trying again.
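
    Right: a DataSet is a purely in-memory, disconnected object, so there is no Close() on it; by the time GetDataSet() returns, SubSonic is expected to have closed the underlying reader and connection. At most you can Dispose it (inherited from MarshalByValueComponent), which frees no connection state. A hedged sketch, where SPs.GetCustomers() stands in for whatever your generated SP wrapper is called:

        // Hedged sketch: nothing to Close on a DataSet; Dispose is optional.
        SubSonic.StoredProcedure sp = SPs.GetCustomers(); // hypothetical generated wrapper
        using (DataSet ds = sp.GetDataSet())
        {
            // Work with ds.Tables[0] here; the provider's reader/connection
            // were already released when the DataSet was filled.
        }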

    Read the article

  • How do I call a base abstract class or interface from the DAL in the BLL?

    - by programmerist
    How can I access the abstract class in the BLL? I shouldn't see GenAccessor in the BLL - it must stay a private class. I should access the Save method through _AccessorForSQL, OK? My BLL class:

        public class AccessorForSQL : GenoTip.DAL._AccessorForSQL
        {
            public bool Save(string Name, string SurName, string Adress)
            {
                ListDictionary ld = new ListDictionary();
                ld.Add("@Name", Name);
                ld.Add("@SurName", SurName);
                ld.Add("@Adress", Adress);
                return base.Save("sp_InsertCustomers", ld, CommandType.StoredProcedure);
            }
        }

    I cannot access base.Save. Why? This is my DAL layer:

        namespace GenoTip.DAL
        {
            public abstract class _AccessorForSQL
            {
                public abstract bool Save(string sp, ListDictionary ld, CommandType cmdType);
                public abstract bool Update();
                public abstract bool Delete();
                public abstract DataSet Select();
            }

            private class GenAccessor : _AccessorForSQL
            {
                DataSet ds;
                DataTable dt;

                public override bool Save(string sp, ListDictionary ld, CommandType cmdType)
                {
                    SqlConnection con = null;
                    SqlCommand cmd = null;
                    SqlDataReader dr = null;
                    try
                    {
                        con = GetConnection();
                        cmd = new SqlCommand(sp, con);
                        con.Open();
                        cmd.CommandType = cmdType;
                        foreach (string ky in ld.Keys)
                        {
                            cmd.Parameters.AddWithValue(ky, ld[ky]);
                        }
                        dr = cmd.ExecuteReader();
                        ds = new DataSet();
                        dt = new DataTable();
                        ds.Tables.Add(dt);
                        ds.Load(dr, LoadOption.OverwriteChanges, dt);
                    }
                    catch (Exception exp)
                    {
                        HttpContext.Current.Trace.Warn("Error in GetCustomerByID()", exp.Message, exp);
                    }
                    finally
                    {
                        if (dr != null) dr.Close();
                        if (con != null) con.Close();
                    }
                    return (ds.Tables[0].Rows.Count > 0) ? true : false;
                }

                public override bool Update() { return true; }
                public override bool Delete() { return true; }
                public override DataSet Select()
                {
                    DataSet dst = new DataSet();
                    return dst;
                }

                private static SqlConnection GetConnection()
                {
                    string connStr = WebConfigurationManager.ConnectionStrings["ConnectionString"].ConnectionString;
                    SqlConnection conn = new SqlConnection(connStr);
                    return conn;
                }
            }
        }

    Read the article

  • Pear MDB2 class and raiserror exceptions in MSSQL

    - by drholzmichl
    Hi, in MSSQL it's possible to raise an error with raiserror(). I want to use a severity which doesn't interrupt the connection. This error is raised in a stored procedure. In SQL Management Studio all is fine and I get my error code when executing this SP. But when trying to execute this SP via MDB2 in PHP5, this doesn't work: all I get is an empty array. The MDB2 object is created via (including the needed options):

        $db =& MDB2::connect($dsn);
        $db->setFetchMode(MDB2_FETCHMODE_ASSOC);
        $db->setOption('portability', MDB2_PORTABILITY_ALL ^ MDB2_PORTABILITY_EMPTY_TO_NULL);

    The following works (I get a PEAR error):

        $db->query("RAISERROR('test',11,0);");

    But when calling a stored procedure which raises this error via

        $db->query("EXEC sp_raise_error");

    there is no output. What's wrong?

    Read the article

  • Git pre-receive hook

    - by Ralphz
    Hi, when you enable the pre-receive hook for a git repository: it takes no arguments, but for each ref to be updated it receives on standard input a line of the format:

        <old-value> SP <new-value> SP <ref-name> LF

    where <old-value> is the old object name stored in the ref, <new-value> is the new object name to be stored in the ref, and <ref-name> is the full name of the ref. When creating a new ref, <old-value> is 40 zeroes. Can anyone explain how I examine all the files that will be changed in the repository if I allow this commit? I'd like to run those files through some scripts to check syntax and so on. Thanks.

    Read the article

  • How to clean sys.conversation_endpoints

    - by Manjoor
    I have a table with a trigger on it, implemented using Service Broker. More than half a million records are inserted into the table daily. The asynchronous SP is used to check several conditions against the inserted data and to update other tables. It ran fine for the last month, and the SP used to execute within 2-3 seconds of a record's insertion, but now it takes more than 90 minutes. At present sys.conversation_endpoints has far too many records. (Note that all the tables are truncated daily, as I do not need those records the day after.) Other database activity is normal (average 60% CPU utilization). Now, where do I need to look? I can re-create the database without any problem, but I don't think that is a good way to resolve this.
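
    A pile-up in sys.conversation_endpoints usually means dialogs are never ended, so each message leaks an endpoint. The durable fix is to END CONVERSATION on both sides of the dialog; as a recovery stop-gap, leaked endpoints can be purged like this hedged sketch (the state codes filtered on are an assumption about which endpoints are safe to kill):

        // Hedged sketch: end stale conversations WITH CLEANUP.
        // WITH CLEANUP skips the normal end-dialog protocol - recovery use only.
        using System;
        using System.Collections.Generic;
        using System.Data.SqlClient;

        static class BrokerCleanup
        {
            public static void Purge(string connStr)
            {
                var handles = new List<Guid>();
                using (var con = new SqlConnection(connStr))
                {
                    con.Open();
                    using (var cmd = new SqlCommand(
                        "SELECT conversation_handle FROM sys.conversation_endpoints " +
                        "WHERE state IN ('CD', 'DI', 'ER')", con)) // closed/disconnected/error (assumption)
                    using (var rdr = cmd.ExecuteReader())
                        while (rdr.Read()) handles.Add(rdr.GetGuid(0));

                    foreach (Guid h in handles)
                        using (var end = new SqlCommand("END CONVERSATION @h WITH CLEANUP", con))
                        {
                            end.Parameters.AddWithValue("@h", h);
                            end.ExecuteNonQuery();
                        }
                }
            }
        }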

    Read the article

  • SoundPlayer causing Memory Leaks?

    - by Nick Udell
    I'm writing a basic writing app in C# and I wanted the program to make typewriter sounds as you type. I've hooked the KeyPress event of my RichTextBox to a function that uses a SoundPlayer to play a short WAV file every time a key is pressed. However, I've noticed that after a while my computer slows to a crawl, and checking my processes, audiodg.exe was using 5 gigabytes of RAM. The code I'm using is as follows: I initialise the SoundPlayer as a global variable on program start with SoundPlayer sp = new SoundPlayer("typewriter.wav"), then on the KeyPress event I simply call sp.Play(). Does anybody know what's causing the heavy memory usage? The file is less than a second long, so it shouldn't be clogging things up this much.
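
    One thing worth ruling out: a SoundPlayer constructed from a path does not actually load the WAV until playback is requested, and Play() kicks off an asynchronous load each time if the stream was never marked loaded. Preloading once with Load() is the usual hygiene; whether it cures the audiodg.exe growth depends on the audio stack, so treat this as a hedged sketch:

        // Hedged sketch: load the WAV once, then replay the cached buffer per keypress.
        using System.Media;
        using System.Windows.Forms;

        class TypewriterSounds
        {
            readonly SoundPlayer sp = new SoundPlayer("typewriter.wav");

            public TypewriterSounds(RichTextBox box)
            {
                sp.Load();                 // synchronous one-time load of the stream
                box.KeyPress += OnKeyPress;
            }

            void OnKeyPress(object sender, KeyPressEventArgs e)
            {
                sp.Stop();                 // cut off any still-playing click
                sp.Play();                 // async playback of the already-loaded buffer
            }
        }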

    Read the article
