Search Results

Search found 1740 results on 70 pages for 'procedures'.

Page 3/70 | < Previous Page | 1 2 3 4 5 6 7 8 9 10 11 12  | Next Page >

  • MySQL Procedures : Standard/Best Place For Code Documentation?

    - by rlb.usa
    I'd like to put a paragraph or so of documentation on each of my MySQL procedures: date, author, purpose, etc. Is there a standard, accepted, or best place to put this kind of documentation? For example, in MSSQL, some IDEs will generate a template for you to start coding your procedure; it comes with a little place for code documentation. USE [MyDatabase] GO /****** Object: StoredProcedure [dbo].[StoreRecord] Script Date: 04/29/2010 09:21:57 ******/ SET ANSI_NULLS ON GO SET QUOTED_IDENTIFIER ON GO -- ============================================= -- Author: -- Create date: -- Description: -- ============================================= ALTER PROCEDURE [dbo].[StoreRecord] -- ... more code ...
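
    Besides a leading comment block, MySQL itself offers a COMMENT characteristic on CREATE PROCEDURE; the string is stored with the routine and shows up in SHOW CREATE PROCEDURE and information_schema.ROUTINES. A minimal sketch (the procedure name, author, and date are illustrative):

        DELIMITER //
        CREATE PROCEDURE store_record(IN p_id INT)
            COMMENT 'Author: <name> | Created: 2010-04-29 | Purpose: stores a record'
        BEGIN
            -- =============================================
            -- Author:       <name>
            -- Create date:  2010-04-29
            -- Description:  Stores a record (illustrative body)
            -- =============================================
            SELECT p_id;
        END //
        DELIMITER ;

        -- The COMMENT characteristic travels with the routine definition:
        SELECT ROUTINE_NAME, ROUTINE_COMMENT
        FROM information_schema.ROUTINES
        WHERE ROUTINE_TYPE = 'PROCEDURE';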

    Read the article

  • LINQ-to-SQL vs stored procedures?

    - by scottmarlowe
    I took a look at the "Beginner's Guide to LINQ" post here on StackOverflow (http://stackoverflow.com/questions/8050/beginners-guide-to-linq), but had a follow-up question: We're about to ramp up a new project where nearly all of our database ops will be fairly simple data retrievals (there's another segment of the project which already writes the data). Most of our other projects up to this point make use of stored procedures for such things. However, I'd like to leverage LINQ-to-SQL if it makes more sense. So, the question is this: For simple data retrievals, which approach is better, LINQ-to-SQL or stored procs? Any specific pros or cons? Thanks.

    Read the article

  • MySQL Prepared Statements vs Stored Procedures Performance

    - by amardilo
    Hi there, I have an old MySQL 4.1 database with a table that has a few million rows, and an old Java application that connects to this database and returns several thousand rows from this table on a frequent basis via a simple SQL query (e.g. SELECT * FROM people WHERE first_name = 'Bob', where the value for first_name varies depending on what the user enters). I think the Java application uses client-side prepared statements, but I was looking at switching this to the server. I would like to speed up performance on the select query and was wondering if I should switch to prepared statements or stored procedures. Is there a general rule of thumb for which is quicker/less resource intensive (or whether a combination of both is better)?
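
    For concreteness, a sketch of the two server-side options being weighed, using the table and column from the question. Note that stored procedures only arrived in MySQL 5.0, so on a 4.1 server the prepared-statement route is the one that actually applies:

        -- Server-side prepared statement: parsed once per session, re-executed with new values.
        PREPARE find_people FROM 'SELECT * FROM people WHERE first_name = ?';
        SET @fname = 'Bob';
        EXECUTE find_people USING @fname;
        DEALLOCATE PREPARE find_people;

        -- Equivalent stored procedure (requires MySQL 5.0 or later).
        DELIMITER //
        CREATE PROCEDURE find_people_by_first_name(IN fname VARCHAR(50))
        BEGIN
            SELECT * FROM people WHERE first_name = fname;
        END //
        DELIMITER ;
        CALL find_people_by_first_name('Bob');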

    Read the article

  • Maintaining stored procedures in source control

    - by dub
    How do you guys maintain your stored procedures? I'd like to keep versions of them for a few different reasons. I also will be setting up cruisecontrol.net and nant this weekend to automate builds. I was thinking about coding something that would generate the create scripts for all tables/sprocs/udf/xml schemas in my development database. Then it would take those scripts and update them in source control every couple hours.... Ideally, I'd like to make this some sort of plugin/module for cruisecontrol.net. Any other ideas?
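
    One lightweight starting point, assuming a SQL Server development database, is to script the procedure definitions straight from the catalog views and let the build task write each row out to a file for check-in; a sketch:

        -- Dump the CREATE text of every stored procedure in the current database.
        -- OBJECT_DEFINITION returns the full body as stored in the catalog.
        SELECT
            s.name AS schema_name,
            p.name AS procedure_name,
            OBJECT_DEFINITION(p.object_id) AS create_script
        FROM sys.procedures AS p
        JOIN sys.schemas AS s ON s.schema_id = p.schema_id
        ORDER BY s.name, p.name;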

    Read the article

  • Using stored procedures to generate RDLC report in C#

    - by NDraskovic
    I'm making an application that generates reports for my client. I'm using his database, which contains stored procedures that return the data needed for the reports. The problem is that I don't know how to execute them from the application (more specifically, from the TableAdapter in my dataset). When I use the visual designer to create the TableAdapter, it shows the error "Invalid object named #table1". This is weird because there is a temporary table called #table1 in the stored procedure. When I try to do the whole job programmatically, I get the exception "Incorrect syntax near '.'. An object or column name is missing or empty. For SELECT INTO statements, verify each column has a name. For other statements, look for empty alias names. Aliases defined as "" or [] are not allowed. Change the alias to a valid name." I created a DataTable that has an identical structure to the result of the stored procedure, but I still get the same exception.
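
    The second exception usually means an expression column inside a SELECT ... INTO has no name. A before/after sketch of that pattern (the table and column names here are hypothetical, not taken from the client's database):

        -- Fails: the computed column has no name, so SELECT INTO cannot create it.
        SELECT CustomerId,
               Quantity * UnitPrice            -- unnamed expression
        INTO   #table1
        FROM   OrderLines;

        -- Works: every expression carries an explicit alias.
        SELECT CustomerId,
               Quantity * UnitPrice AS LineTotal
        INTO   #table1
        FROM   OrderLines;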

    Read the article

  • Permissions for Large Variables to Be Sent Via Stored Procedures (SQL Server)

    - by Joe Majewski
    I can't figure out a way to allow more than 4000 bytes to be received at once via a call to a stored procedure. I am storing images in the table that are around 15 - 20 kilobytes each, but upon getting them and displaying them to the page, they are always exactly 3.91 KB in size (or 4000 bytes). Do stored procedures have a limit on how much data can be sent at once? I double-checked my data, and I am indeed only receiving the first 4000 characters from the varbinary(MAX) field. Is there a permission setting to allow more than 4k bytes at once?
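
    There is no 4000-byte cap on stored procedure results as such, so an exact 4000-byte cutoff is usually introduced somewhere along the way; one pattern worth ruling out is an intermediate variable or parameter declared without MAX (SET TEXTSIZE on the connection is also worth checking). A hedged sketch with a hypothetical dbo.Pictures table:

        DECLARE @id int;
        SET @id = 1;

        -- Truncates: the intermediate variable is capped below the stored image size.
        DECLARE @img varbinary(4000);
        SELECT @img = ImageData FROM dbo.Pictures WHERE PictureId = @id;
        SELECT @img AS ImageData;

        -- Preserves the full value: declare intermediates and parameters as varbinary(MAX).
        DECLARE @imgFull varbinary(MAX);
        SELECT @imgFull = ImageData FROM dbo.Pictures WHERE PictureId = @id;
        SELECT @imgFull AS ImageData;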

    Read the article

  • Making flexible C# code in MVC2 for Stored Procedures

    - by cc0
    Thanks to Darin Dimitrov's suggestion I got a big step further in understanding good MVC code, but I'm having some problems making it flexible. I implemented Darin's suggested solution, and it works perfectly for single controllers. However, I'm having some trouble implementing it with some flexibility. What I'm looking for is this: (1) to be able to make dynamic column names in the JSON — instead of using "Column1: 'value', ..." and "Column2: 'value', ..." inside the JSON, I'd like to use, for example, "id: 'value', ..." and "place: 'value', ..." for one stored procedure, and "animal" and "type" in another; (2) to be able to return a dynamic number of columns depending on which stored procedure is called — some stored procedures I'll want to read more than 2 rows from; is there a smart way of accomplishing that?; (3) to be able to have numeric (float and integer) values from the database presented inside the JSON without quotes, like this (name and age): { Column1: "John", Column2: 53 }. I would be very grateful for any feedback and suggestions / code examples I can get here, even imperfect ones.

    Read the article

  • SubSonic missing stored procedures in StoredProcedures.cs when generated with SubCommander sonic.exe

    - by Mark
    We have been using SubSonic to generate our DAL with a lot of success on VS2005 and SubSonic Tools 2.0.3. SubSonic Tools do not work on VS2008 (as far as we can work out) so we have tried to use SubCommander\sonic.exe and are now hitting some problems. When we regenerate the project using SubCommander\sonic.exe and try to compile we get some errors reporting missing members (which should have been automatically generated based on the stored procedures we have). On closer inspection it looks like my StoredProcedures.cs file is missing some (not all) automatically generated methods for my classes. As an example, I have 2 procs: [dbo]._ClassA_Func1 [dbo]._ClassA_Func2 Only one of these is being generated in the StoredProcedures.cs file. These methods generate fine using the SubSonic Tools plugin. We have tried now with versions 2.1 and 2.2 of SubSonic with the same issue. We are still on .NET 2.0 so cannot use SubSonic 3.0. I have checked the permissions of both procs using fn_my_permissions and they seem identical. Does anyone have any ideas on what I can check? Thanks -- Mark

    Read the article

  • temporary tables within stored procedures on slave servers with readonly set

    - by lau
    Hi, We have set up a master/slave replication scheme and we've had problems lately because some users wrote directly to the slave instead of the master, making the whole setup inconsistent. To prevent these problems from happening again, we've decided to remove the insert, delete, update, etc. rights from the users accessing the slave. The problem is that some stored procedures (for reading) require temporary tables. I read that changing the global variable read_only to true would do what I want and allow the stored procedures to work correctly ( http://dev.mysql.com/doc/refman/5.0/en/server-system-variables.html#sysvar_read_only ), but I keep getting the error: The MySQL server is running with the --read-only option so it cannot execute this statement (1290). The stored procedure that I used (for testing purposes) is this one: DELIMITER $$ DROP PROCEDURE IF EXISTS test_readonly $$ CREATE DEFINER=`dbuser`@`%` PROCEDURE test_readonly() BEGIN CREATE TEMPORARY TABLE IF NOT EXISTS temp ( BT_INDEX int(11), BT_DESC VARCHAR(10) ); INSERT INTO temp (BT_INDEX, BT_DESC) VALUES (222,'walou'), (111,'bidouille'); DROP TABLE temp; END $$ DELIMITER ; The CREATE TEMPORARY TABLE and the DROP TABLE work fine with the read_only flag (if I comment out the INSERT line, it runs fine), but whenever I want to insert or delete from that temporary table, I get the error message. I use MySQL 5.1.29-rc. My default storage engine is InnoDB. Thanks in advance, this problem is really driving me crazy.
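
    An alternative that sidesteps the global flag is to rely on privileges instead: grant the slave accounts only SELECT, EXECUTE, and CREATE TEMPORARY TABLES. MySQL stops privilege-checking a temporary table for the session that created it, so the procedure's INSERT and DROP on temp keep working while the real tables stay read-only. A sketch with placeholder account and schema names:

        -- Read-only access plus temporary tables ('report_user' and 'mydb' are placeholders).
        GRANT SELECT, EXECUTE, CREATE TEMPORARY TABLES
            ON mydb.* TO 'report_user'@'%';

        -- Only needed if write privileges had been granted earlier.
        REVOKE INSERT, UPDATE, DELETE ON mydb.* FROM 'report_user'@'%';
        FLUSH PRIVILEGES;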

    Read the article

  • MS SQL - High performance data inserting with stored procedures

    - by Marks
    Hi. I'm searching for a very high-performance way to insert data into an MS SQL database. The data is a (relatively big) construct of objects with relations. For security reasons I want to use stored procedures instead of direct table access. Let's say I have a structure like this: Document MetaData User Device Content ContentItem[0] SubItem[0] SubItem[1] SubItem[2] ContentItem[1] ... ContentItem[2] ... Right now I am thinking of creating one big query, doing something like this (just pseudo-code): EXEC @DeviceID = CreateDevice ...; EXEC @UserID = CreateUser ...; EXEC @DocID = CreateDocument @DeviceID, @UserID, ...; EXEC @ItemID = CreateItem @DocID, ... EXEC CreateSubItem @ItemID, ... EXEC CreateSubItem @ItemID, ... EXEC CreateSubItem @ItemID, ... ... But is this the best solution for performance? If not, what would be better? Split it into more queries? Give all the data to one big stored procedure to reduce the size of the query? Any other performance clue? I also thought of giving multiple items to one stored procedure, but I don't think it's possible to pass a non-static amount of items to a stored procedure. Since 'INSERT INTO A VALUES (B,C),(C,D),(E,F)' is more performant than 3 single inserts, I thought I could get some performance here. Thanks for any hints, Marks
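
    On SQL Server 2008 and later, a table-valued parameter is one way to pass a variable number of items to a single stored procedure, which is exactly the "non-static amount of items" case; a hedged sketch with illustrative type, table, and column names:

        -- A table type describing one sub-item.
        CREATE TYPE dbo.SubItemList AS TABLE
        (
            ItemId  int           NOT NULL,
            Payload nvarchar(200) NOT NULL
        );
        GO

        -- The procedure accepts any number of rows in one call (assumes a dbo.SubItem table exists).
        CREATE PROCEDURE dbo.CreateSubItems
            @SubItems dbo.SubItemList READONLY
        AS
        BEGIN
            INSERT INTO dbo.SubItem (ItemId, Payload)
            SELECT ItemId, Payload FROM @SubItems;
        END
        GO

        -- Caller side: fill a table variable and send it in a single round trip.
        DECLARE @batch dbo.SubItemList;
        INSERT INTO @batch (ItemId, Payload) VALUES (1, N'a'), (1, N'b'), (1, N'c');
        EXEC dbo.CreateSubItems @SubItems = @batch;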

    Read the article

  • Oracle Tutor: Top 10 to Implement Sustainable Policies and Procedures

    - by emily.chorba(at)oracle.com
    Overview

    Your organization (executives, managers, and employees) understands the value of having written business process documents (process maps, procedures, instructions, reference documents, and form abstracts). Policies and procedures should be documented because they help to reduce the range of individual decisions and encourage management by exception: the manager only needs to give special attention to unusual problems not covered by a specific policy or procedure. As more and more procedures are written to cover recurring situations, managers will begin to make decisions which will be consistent from one functional area to the next.

    Companies should take a project management approach when implementing an environment for a sustainable documentation program and do the following:

    1. Identify an Executive Champion
    2. Put together a winning team
    3. Assign ownership
    4. Centralize publishing
    5. Establish the Document Maintenance Process Up Front
    6. Document critical activities only
    7. Document actual practice
    8. Minimize documentation
    9. Support continuous improvement
    10. Keep it simple

    1. Identify an Executive Champion

    Appoint a top-down driver. Select one key individual to be a mentor for the procedure planning team. The individual should be a senior manager, such as your company president, CIO, CFO, or the vice-president of quality, manufacturing, or engineering. Written policies and procedures can be important supportive aids when they are known to express the thinking of the chief executive officer and/or the president and to have his or her full support.

    2. Put Together a Winning Team

    Choose a strong Project Management Leader and staff the procedure planning team with management members from cross-functional groups. Make sure team members have the responsibility, and the authority, to make things happen. The winning team should consist of the Documentation Project Manager, Document Owners (one for each functional area), a Document Controller, and Document Specialists (as needed). The Tutor Implementation Guide has complete job descriptions for these roles.

    3. Assign Ownership

    It is virtually impossible to keep process documentation simple and meaningful if employees who are far removed from the activity itself create it. It is impossible to keep documentation up-to-date when responsibility for the document is not clearly understood. Key to the Tutor methodology, therefore, is the concept of ownership. Each document has a single owner, who is responsible for ensuring that the document is necessary and that it reflects actual practice. The owner must be a person who is knowledgeable about the activity and who has the authority to build consensus among the persons who participate in the activity as well as the authority to define or change the way an activity is performed. The owner must be an advocate of the performers and negotiate, not dictate, practices. In the Tutor environment, a document's owner is the only person with the authority to approve an update to that document.

    4. Centralize Publishing

    Although it is tempting (especially in a networked environment and with document management software solutions) to decentralize the control of all documents, with each owner updating and distributing his own, Tutor promotes centralized publishing by assigning the Document Administrator (gatekeeper) to manage the updates and distribution of the procedures library.

    5. Establish a Document Maintenance Process Up Front (and stick to it)

    Everyone in your organization should know they are invited to suggest changes to procedures and should understand exactly what steps to take to do so. Tutor provides a set of procedures to help your company set up a healthy document control system. There are many document management products available to automate some of the document change and maintenance steps. Depending on the size of your organization, a simple document management system can reduce the effort it takes to track and distribute document changes and updates. Whether your company decides to store the written policies and procedures on a file server or in a database, the essential tasks for maintaining documents are the same, though some tasks are automated.

    6. Document Critical Activities Only

    The best way to keep your documentation simple is to reduce the number of process documents to a bare minimum and to include in those documents only as much detail as is absolutely necessary. The first step to reducing process documentation is to document only those activities that are deemed critical. Not all activities require documentation. In fact, some critical activities cannot and should not be standardized. Others may be sufficiently documented with an instruction or a checklist and may not require a procedure. A document should only be created when it enhances the performance of the employee performing the activity. If it does not help the employee, then there is no reason to maintain the document. Activities that represent little risk (such as project status), activities that cannot be defined in terms of specific tasks (such as product research), and activities that can be performed in a variety of ways (such as advertising) often do not require documentation. Sometimes, an activity will evolve to the point where documentation is necessary. For example, an activity performed by a single employee may be straightforward and uncomplicated -- that is, until the activity is performed by multiple employees. Sometimes it is the interaction between co-workers that necessitates documentation; sometimes it is the complexity or the diversity of the activity.

    7. Document Actual Practices

    The only reason to maintain process documentation is to enhance the performance of the employee performing the activity. And documentation can only enhance performance if it reflects reality -- that is, current best practice. Documentation that reflects an unattainable ideal or outdated practices will end up on the shelf, unused and forgotten. Documenting actual practice means (1) auditing the activity to understand how the work is really performed, (2) identifying best practices with employees who are involved in the activity, (3) building consensus so that everyone agrees on a common method, and (4) recording that consensus.

    8. Minimize Documentation

    One way to keep it simple is to document at the highest level possible. That is, include in your documents only as much detail as is absolutely necessary. When writing a document, you should ask yourself: What is the purpose of this document? That is, what problem will it solve? By focusing on this question, you can target the critical information.

    • What questions are the end users likely to have?
    • What level of detail is required?
    • Is any of this information extraneous to the document's purpose?

    Short, concise documents are user friendly and they are easier to keep up to date.

    9. Support Continuous Improvement

    Employees who perform an activity are often in the best position to identify improvements to the process. In other words, continuous improvement is a natural byproduct of the work itself -- but only if the improvements are communicated to all employees who are involved in the process, and only if there is consensus among those employees. Traditionally, process documentation has been used to dictate performance, to limit employees' actions. In the Tutor environment, process documents are used to communicate improvements identified by employees. How does this work? The Tutor methodology requires a process document to reflect actual practice, so the owner of a document must routinely audit its content -- does the document match what the employees are doing? If it doesn't, the owner has the responsibility to evaluate the process, to build consensus among the employees, to identify "best practices," and to communicate these improvements via a document update. Continuous improvement can also be an outgrowth of corrective action -- but only if the solutions to problems are communicated effectively. The goal should be to solve a problem once and only once, which means not only identifying the solution, but ensuring that the solution becomes part of the process. The Tutor system provides the method through which improvements and solutions are documented and communicated to all affected employees in a cost-effective, timely manner; it ensures that improvements are not lost or confined to a single employee.

    10. Keep it Simple

    Process documents don't have to be complex and unfriendly. In fact, the simpler the format and organization, the more likely the documents will be used. And the simpler the method of maintenance, the more likely the documents will be kept up-to-date. Keep it simple by:

    • Minimizing the skills and training required
    • Following the established Tutor document format and layout
    • Avoiding technology just for technology's sake

    No other rule has as major an impact on the success of your internal documentation as this one: keep it simple.

    Learn More

    For more information about Tutor, visit Oracle.com or the Tutor Blog. Post your questions at the Tutor Forum.

    Emily Chorba
    Principal Product Manager, Oracle Tutor & BPM

    Read the article

  • Unit Testing, IDataContext and Stored Procedures via Linq

    - by Terry_Brown
    hey folks, I'm currently using Stephen Walther's approach to unit testing Linq to SQL and the datacontext (http://stephenwalther.com/blog/archive/2008/08/17/asp-net-mvc-tip-33-unit-test-linq-to-sql.aspx) which is working a treat for all things linq. My Dal layer takes in an IDataContext, I use DI (via Unity) to concrete that up as the correct DataContext object and all works well. But - we have a company policy here of writes going via Stored Procs. Has anyone come up with a solution for mocking/faking the data context and still allowing stored procs to effectively be unit tested? I've got no real experience of any of the mocking frameworks (Rhino etc.) so if these are the correct means of doing this, I'd love some pointers on guides for them. Many thanks, Terry

    Read the article

  • LINQ to SQL stored procedures with multiple results in Visual Studio 2008

    - by Jeremy
    I'm using visual studio 2008 and I've created a stored procedure that selects back two different result sets. I drag the stored proc on to a linq to sql dbml datacontext class, causing visual studio to create the following code in the cs file: [Function(Name="dbo.List_MultiSelect")] public ISingleResult<DataAccessLayer.DataEntities.List_MultiSelectResult> List_MultiSelect() { IExecuteResult result = this.ExecuteMethodCall(this, ((MethodInfo)(MethodInfo.GetCurrentMethod()))); return ((ISingleResult<DataAccessLayer.DataEntities.List_MultiSelectResult>)(result.ReturnValue)); } Shouldn't the designer generate the code to use IMultipleResults? Or do I have to hand code that?

    Read the article

  • ASP Function that returns result from stored procedures

    - by Brad
    I am working on a project that requires me to hop into two separate DBs, so I have figured that I need to have multiple functions inside of my VB page. The only problem I am having is that I am not too sure how to get this all accomplished. So far I have figured out the overall structure, I just need help implementing that structure. Here is my idea: The main function would call two other functions. We can call them Sub Function 1 and Sub Function 2. The main function takes the saved session information for the e-mail address and dumps it into Sub Function 1. It needs to open up a new connection to the db/stored procedure, run the following procedure, and then return the result. Here is the stored procedure and what I think is correct: CREATE PROCEDURE WEB_User ( @EMAIL_ADDRESS varchar(80) = [EMAIL_ADDRESS] ) AS SELECT MEMBER_NUMBER FROM WEB_LOGIN WHERE EMAIL_ADDRESS = @EMAIL_ADDRESS So my question is: what is the function supposed to look like? How do I send the session information to the procedure? And finally, how do I return the stored procedure results and push them back into the main function so they can be carried into Sub Function 2? Thank you in advance for your help... I really appreciate it!

    Read the article

  • MySQL Stored Procedures : Use a variable as the database name in a cursor declaration

    - by Justin
    I need to use a variable to indicate what database to query in the declaration of a cursor. Here is a short snippet of the code : CREATE PROCEDURE `update_cdrs_lnp_data`(IN dbName VARCHAR(25), OUT returnCode SMALLINT) cdr_records:BEGIN DECLARE cdr_record_cursor CURSOR FOR SELECT cdrs_id, called, calling FROM dbName.cdrs WHERE lrn_checked = 'N'; # Setup logging DECLARE EXIT HANDLER FOR SQLEXCEPTION BEGIN #call log_debug('Got exception in update_cdrs_lnp_data'); SET returnCode = -1; END; As you can see, I'm TRYING to use the variable dbName to indicate in which database the query should occur within. However, MySQL will NOT allow that. I also tried things such as : CREATE PROCEDURE `update_cdrs_lnp_data`(IN dbName VARCHAR(25), OUT returnCode SMALLINT) cdr_records:BEGIN DECLARE cdr_record_cursor CURSOR FOR SET @query = CONCAT("SELECT cdrs_id, called, calling FROM " ,dbName, ".cdrs WHERE lrn_checked = 'N' "); PREPARE STMT FROM @query; EXECUTE STMT; # Setup logging DECLARE EXIT HANDLER FOR SQLEXCEPTION BEGIN #call log_debug('Got exception in update_cdrs_lnp_data'); SET returnCode = -1; END; Of course this doesn't work either as MySQL only allows a standard SQL statement in the cursor declaration. Can anyone think of a way to use the same stored procedure in multiple databases by passing in the name of the db that should be affected?
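
    One widely used workaround is to stage the rows into a temporary table via a prepared statement (whose text can embed dbName) and then declare the cursor over that fixed-name temporary table. A hedged sketch of the pattern, trimmed to fetch a single column and untested against the original schema:

        DELIMITER //
        CREATE PROCEDURE update_cdrs_lnp_data(IN dbName VARCHAR(25), OUT returnCode SMALLINT)
        BEGIN
            DECLARE done INT DEFAULT 0;
            DECLARE v_id INT;
            -- Cursor declarations must name a fixed object, so point it at the temp table.
            DECLARE cdr_record_cursor CURSOR FOR SELECT cdrs_id FROM tmp_cdrs;
            DECLARE CONTINUE HANDLER FOR NOT FOUND SET done = 1;

            DROP TEMPORARY TABLE IF EXISTS tmp_cdrs;
            SET @sql = CONCAT('CREATE TEMPORARY TABLE tmp_cdrs AS ',
                              'SELECT cdrs_id, called, calling FROM ', dbName,
                              '.cdrs WHERE lrn_checked = ''N''');
            PREPARE stmt FROM @sql;
            EXECUTE stmt;
            DEALLOCATE PREPARE stmt;

            OPEN cdr_record_cursor;
            read_loop: LOOP
                FETCH cdr_record_cursor INTO v_id;
                IF done = 1 THEN LEAVE read_loop; END IF;
                -- ... per-row work goes here ...
            END LOOP;
            CLOSE cdr_record_cursor;

            DROP TEMPORARY TABLE IF EXISTS tmp_cdrs;
            SET returnCode = 0;
        END //
        DELIMITER ;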

    Read the article

  • Dynamic sql vs stored procedures - pros and cons?

    - by skyeagle
    I have read many strong views (both for and against) on SPs versus dynamic SQL. I am writing a query engine in C++ (MySQL backend for now, though I may decide to go with a C++ ORM). I can't decide whether to write a SP, or to dynamically create the SQL and send the query to the db engine. Any tips on how to decide?

    Read the article

  • Return value mapping on Stored Procedures in Entity Framework

    - by Yucel
    Hi, I am calling a stored procedure with Entity Framework, but a custom property that I set in a partial entity class is null. I have entities in my edmx (I call it the edmx; I don't know what else to call it). For example, I have a "User" table in my database and so I have a "User" class as my entity. I have a stored procedure called GetUserById(@userId), and in this stored procedure I am writing a basic SQL statement like this: "SELECT * FROM Users WHERE Id=@userId". In my edmx I make a function import to call this stored procedure and set its return value to Entities (and also select User from the dropdown list). It works perfectly when I call my stored procedure like this: User user = Context.SP_GetUserById(123456); But then I add a custom new column to the stored procedure to return one more column, like this: SELECT *, dbo.ConcatRoles(U.Id) AS RolesAsString FROM membership.[User] U WHERE Id = @id Now when I execute it from SSMS, the new column called RolesAsString appears in the result. To make this work in Entity Framework I added a new property called RolesAsString to my User class, like this: public partial class User { public string RolesAsString { get; set; } } But this field isn't filled by the stored procedure when I call it. I looked at the Mapping Details window of my SP_GetUserById and there isn't a mapping in this window. I want to add one but the window is read-only, so I can't map it. I looked at the source of the edmx and can't find anything about the mapping of the SP. How can I map this custom field?

    Read the article

  • mysql stored procedures using php

    - by neo skosana
    I have a stored procedure: delimiter // create procedure userlogin(in eml varchar(50)) begin select * from users where email = eml; end// delimiter ; And the PHP: $db = new mysqli("localhost","root","","houseDB"); $eml = "[email protected]"; $sql = $db->query("CALL userlogin('$eml')"); $result = $sql->fetch_array(); The error that I get from the browser when I run the PHP script: Fatal error: Call to a member function fetch_array() on a non-object... I am using phpMyAdmin version 3.2.4 and MySQL client version 5.1.41. Please help. Thank you.

    Read the article

  • Setting parameters after obtaining their values in stored procedures

    - by user1260028
    Right now I have an upload field which uploads files to the server. The prefix is saved so that it can later be obtained for retrieval. For this I need to attach the ID of the form to the prefix. I would like to be able to do this as such: @filePrefix = SCOPE_IDENTITY() + @filePrefix; However, I am not so sure this would work because the record has not been created yet. If anything, I could call an update function which obtains the ID and then injects it into the row after it has been created. To speed things up, I don't want to do this on the server but rather on the database. Regardless of the approach, I would still like to know if something like the above is possible (at least for future reference). So if we replace that with @filePrefix = 5 + @filePrefix; would that be possible? SQL doesn't seem to like the current syntax very much...
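
    In T-SQL the concatenation itself is the usual stumbling block: adding an int to a varchar makes SQL Server try to convert the string to a number, so 5 + @filePrefix fails; cast the number to a string instead. SCOPE_IDENTITY() also only has a value once the INSERT in the same scope has run. A sketch against a hypothetical dbo.Uploads table (identity column UploadId):

        DECLARE @filePrefix varchar(100);
        DECLARE @newId int;
        SET @filePrefix = 'upload_';

        INSERT INTO dbo.Uploads (FilePrefix) VALUES (@filePrefix);

        -- SCOPE_IDENTITY() is populated by the INSERT above; cast it before concatenating.
        SET @newId = SCOPE_IDENTITY();
        SET @filePrefix = CAST(@newId AS varchar(20)) + @filePrefix;

        UPDATE dbo.Uploads SET FilePrefix = @filePrefix WHERE UploadId = @newId;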

    Read the article

  • MySQL Can't Handle Parameters for Stored Procedures

    - by Takkun
    I'm trying to make a stored procedure but it doesn't seem to be recognizing the parameters I've given it. Procedure create procedure test_pro(IN searchTable VARCHAR(55)) begin select * from searchTable limit 10; end // Trying to execute mysql> call test_pro('exampleTable'); ERROR 1146 (42S02): Table 'db.searchTable' doesn't exist It isn't replacing the searchTable with the parameter that is passed in.
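
    Parameters are substituted as values, never as identifiers, so a table name has to go through dynamic SQL; a minimal sketch of the same procedure rewritten with PREPARE/EXECUTE (only pass trusted table names, since the identifier is concatenated into the statement text):

        DELIMITER //
        CREATE PROCEDURE test_pro(IN searchTable VARCHAR(55))
        BEGIN
            SET @sql = CONCAT('SELECT * FROM ', searchTable, ' LIMIT 10');
            PREPARE stmt FROM @sql;
            EXECUTE stmt;
            DEALLOCATE PREPARE stmt;
        END //
        DELIMITER ;

        CALL test_pro('exampleTable');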

    Read the article

  • Using Stored Procedures in SSIS

    - by dataintegration
    The SSIS Data Flow components, the source task and the destination task, are the easiest way to transfer data in SSIS. Some data transactions do not fit this model; they are procedural tasks modeled as stored procedures. In this article we show how you can call stored procedures available in RSSBus ADO.NET Providers from SSIS. We will use the CreateJob and CreateBatch stored procedures available in the RSSBus ADO.NET Provider for Salesforce, but the same steps can be used to call a stored procedure in any of our data providers.

    Step 1: Open Visual Studio and create a new Integration Services Project.

    Step 2: Add a new Data Flow Task to the Control Flow window.

    Step 3: Open the Data Flow Task and add a Script Component to the data flow pane. A dialog box will pop up allowing you to select the Script Component Type: pick the source type, as we will be outputting columns from our stored procedure.

    Step 4: Double-click the Script Component to open the editor.

    Step 5: In the "Inputs and Outputs" settings, enter all the columns you want to output to the data flow. Ensure the correct data type has been set for each output. You can check the data type by selecting the output and then changing the "DataType" property in the property editor. In our example, we'll add the column JobID of type String.

    Step 6: Select the "Script" option in the left-hand pane and click the "Edit Script" button. This will open a new Visual Studio window with some boilerplate code in it.

    Step 7: In the CreateOutputRows() function you can add code that executes the stored procedures included with the Salesforce Component. In this example we will be using the CreateJob and CreateBatch stored procedures. You can find a list of the available stored procedures along with their inputs and outputs in the product help.

        //Configure the connection string to your credentials
        String connectionString = "Offline=False;user=myusername;password=mypassword;access token=mytoken;";

        using (SalesforceConnection conn = new SalesforceConnection(connectionString))
        {
            //Create the command to call the stored procedure CreateJob
            SalesforceCommand cmd = new SalesforceCommand("CreateJob", conn);
            cmd.CommandType = CommandType.StoredProcedure;
            cmd.Parameters.Add(new SalesforceParameter("ObjectName", "Contact"));
            cmd.Parameters.Add(new SalesforceParameter("Action", "insert"));

            //Execute CreateJob
            //CreateBatch requires JobID as input so we store this value for later
            SalesforceDataReader rdr = cmd.ExecuteReader();
            String JobID = "";
            while (rdr.Read())
            {
                JobID = (String)rdr["JobID"];
            }

            //Create the command for CreateBatch, for this example we are adding two new rows
            SalesforceCommand batCmd = new SalesforceCommand("CreateBatch", conn);
            batCmd.CommandType = CommandType.StoredProcedure;
            batCmd.Parameters.Add(new SalesforceParameter("JobID", JobID));
            batCmd.Parameters.Add(new SalesforceParameter("Aggregate", "<Contact><Row><FirstName>Bill</FirstName>" +
                "<LastName>White</LastName></Row><Row><FirstName>Bob</FirstName><LastName>Black</LastName></Row></Contact>"));

            //Execute CreateBatch
            SalesforceDataReader batRdr = batCmd.ExecuteReader();
        }

    Step 7b: If you had specified output columns earlier, you can now add data into them using the UserComponent Output0Buffer. For example, we had set an output column called JobID of type String, so now we can set a value for it. We will modify the DataReader that contains the output of CreateJob like so:

        while (rdr.Read())
        {
            Output0Buffer.AddRow();
            JobID = (String)rdr["JobID"];
            Output0Buffer.JobID = JobID;
        }

    Step 8: Note: you will need to modify the connection string to include your credentials. Also ensure that the System.Data.RSSBus.Salesforce assembly is referenced, and include the following using statements at the top of the class: using System.Data; using System.Data.RSSBus.Salesforce;

    Step 9: Once you are done editing your script, save it and close the window. Click OK in the Script Transformation window to go back to the main pane.

    Step 10: If you had any outputs from the Script Component you can use them in your data flow. For example, we will use a Flat File Destination. Configure the Flat File Destination to output the results to a file, and you should see the JobID in the file.

    Step 11: Your project should be ready to run.

    Read the article

< Previous Page | 1 2 3 4 5 6 7 8 9 10 11 12  | Next Page >