Search Results

Search found 4874 results on 195 pages for 'ssis integration servic'.


  • Specify artifact version outside of pom

    - by Adam B
    Is there a way to specify the artifact version outside of the POM file? I have two CI projects that build an artifact. One builds a "stable" development version from a 'develop' branch and the other builds an unstable version which is the result of merging all active feature branches into the develop branch. I want the stable version to build as xyz-1.0.jar and the integration build to go in as xyz-1.0-SNAPSHOT.jar. Is there a way for the CI job to run a Maven task, or to specify via the command line, whether a release or snapshot JAR should be built without manually modifying the POM? Currently I have the version specified as 1.0 in the POM. I considered using the release plugin, but I don't want the automatic version-number increase and tagging that it does.

    Read the article

  • Maven overlay with a WAR not generated by Maven

    - by dnc253
    I'm pretty new to Maven, so I apologize if this is a newbie question. We are trying to integrate a 3rd-party app into ours. This 3rd-party functionality is delivered to us as a WAR file. As part of this integration, I want to add some extra JARs and a properties file. Googling around, I discovered overlays. However, in all the examples I've seen, it looks like the WARs being overlaid were themselves generated by Maven, and thus Maven can figure out dependencies, conflicts, etc. between the two. I'm just wondering if there's a way for me to overlay this 3rd-party WAR with the extra stuff I want. If so, what would that look like in the pom.xml?

    Read the article

  • Why does assert_equal() in Ruby on Rails sometimes seem to compare by identity and sometimes by value?

    - by Jian Lin
    Something very weird happened yesterday: I was doing an integration test in Rails with assert_equal array_of_obj1, array_of_obj2 # obj1 from db, obj2 created in test and it failed. The values shown inside the arrays and objects were identical. If I change the test to assert array_of_obj1 == array_of_obj2 then it passes. But today, the first test actually passed. What could the reason be? Does assert_equal always use == or .equal? in Rails 2.2 or 2.3.5?

    Read the article

  • Happy Day! VS2010 SP1, Project Server Integration, Load Test Feature Pack

    - by Aaron Kowall
    Microsoft released a PILE of Visual Studio goodness today:
    Visual Studio 2010 SP1 (including TFS SP1): Finally done with remembering which GDR packs, KB patches, etc. need to be installed with a new VS/TFS 2010 deployment. Just grab SP1. It’s available today for MSDN Subscribers and March 10th for public download.
    TFS-Project Server Integration Feature Pack: MSDN Subscribers got another little treat today with the TFS-Project Server integration feature pack. We can now get project rollups and portfolio-level management with Project Server yet still have the tight developer interaction with TFS. Finally we can make the PMO happy without duplicate entry or MS Project gymnastics.
    Visual Studio Load Test Feature Pack: This is a new benefit for Visual Studio 2010 Ultimate subscribers. Previously there was a limit to Ultimate load testing of 250 virtual users. If you needed more, you had to buy virtual user license packs. No more. Now your Visual Studio Ultimate license allows you to simulate as many virtual users as you need!! This is HUGE in improving adoption of regular load testing for development projects.
    All the details are available from Soma’s blog. Technorati Tags: VS2010,TFS,Load Test

    Read the article

  • Releasing software/Using Continuous Integration - What do most companies seem to use?

    - by Sagar
    I've set up our continuous integration system, and it has been working for about a year now. We have finally reached a point where we want to do releases using the same. Before our CI system, the process(es) used were:
    (Develop) -> Ready for release -> Create a branch -> (Build -> Fix bugs as QA finds them) Loop -> Final build -> Tag
    (Develop) -> Ready for release -> (Build -> Fix bugs) Loop -> Tag
    Our CI setup: 1 server for development (DEV), 1 server for QA/release (QA). The second process has integrated into CI perfectly. I create a branch when the software is ready for release, and the branch never changes thereafter, which means the build is reproducible without having to change the CI job. Any future development takes place on HEAD, and even maintenance releases get a completely new branch and a completely new job, which remains on the CI system forever, and then some. The first method is harder to adapt. If the branch changes, the build is not reproducible unless I use the tag to build [jobs on the CI server use the branch for QA/RELEASE, and HEAD for development builds]. However, if I use the tag to build, I have to create a new CI job to build from the tag (lose the changelog on the server), or change the existing job (lose the original job configuration). I know this sounds complicated, and if required, I will rewrite/edit to explain the situation better. However, my question: [if at all] what process does your company use to release software using continuous integration systems? Is it even done using the CI system, or manually?

    Read the article

  • Gmail Now Supports Google Drive Integration; Share Files Up to 10GB

    - by Jason Fitzpatrick
    Gmail users can now easily send large files thanks to Google Drive’s increased integration with Gmail–blow through the 25MB in-email attachment limit and share files up to 10GB. From the official Gmail announcement: Have you ever tried to attach a file to an email only to find out it’s too large to send? Now with Drive, you can insert files up to 10GB – 400 times larger than what you can send as a traditional attachment. Also, because you’re sending a file stored in the cloud, all your recipients will have access to the same, most up-to-date version. Like a smart assistant, Gmail will also double-check that your recipients all have access to any files you’re sending. This works like Gmail’s forgotten attachment detector: whenever you send a file from Drive that isn’t shared with everyone, you’ll be prompted with the option to change the file’s sharing settings without leaving your email. It’ll even work with Drive links pasted directly into emails. The new Gmail/Drive integration is rolling out in waves to users over the next few days and is accessible via the new Gmail compose window.

    Read the article

  • How does the process of updating code with Continuous Integration work?

    - by BleakCabalist
    I want to draw a model of the process of updating source code with the use of Continuous Integration. The main issue is I don't really understand how it works when there are several programmers working on various aspects of the code at the same time. I can't visualize it in my mind. Here's what I know, but I might be wrong:
    New code is sent to the repository. The Continuous Integration server (CIS) asks the Version Control System (VCS) if there is new code in the repository. If there is, then CIS executes tests on the code. If the tests show there are problems, then CIS orders VCS to revert to the working version of the code and communicates this to the programmer. If the tests pass, it compiles the repository code and makes a new build of the game? A new build is made not after every single change, but at the end of the day, I believe?
    Are my assumptions above correct? If yes, does it also work when there are several programmers updating the repository at once? Is this enough to draw a model of the process, in your opinion, or did I miss something? Also, what software would I need for the above process? Can you guys give examples of CIS software and VCS software and whatever else I need? Does CIS software perform code tests, or do I need another tool for that and integrate it with CIS? Is there repository software?

    Read the article

  • What special issues are at play when loading a config file from the command prompt with DTExec?

    - by Ralph Shillington
    If I run a package from Management Studio and specify a configuration file, everything works as expected. However, if I try to run the package from the command prompt with DTExec, I get the error: Cannot load the XML configuration file. The XML configuration file may be malformed or not valid. The command I'm using to execute the package is: dtexec /conf ConfigurationDemo.dtsConfig /f Package.dtsx I am running dtexec from the folder where these two files reside. Is there an additional switch or something that must be used to get dtexec to behave the same way as Management Studio when launching a package?

    Read the article

  • Derived Column Editor

    - by Rob Bowman
    Hi, I need to assign a formatted date to a column in a data flow. I have added a Derived Column editor and entered the following expression:

        "BBD" + SUBSTRING((DT_WSTR,4)DATEADD("Day",30,GETDATE()),1,4) + SUBSTRING((DT_WSTR,2)DATEADD("Day",30,GETDATE()),6,2) + SUBSTRING((DT_WSTR,2)DATEADD("Day",30,GETDATE()),9,2)

    The problem is that the "Derived Column Transformation Editor" automatically assigns a data type of "Unicode string [DT_WSTR]" and a length of 7. However, the length of the string is 11, so the following exception is thrown each time:

        [Best Before Date [112]] Error: The "component "Best Before Date" (112)" failed because truncation occurred, and the truncation row disposition on "output column "Comments" (132)" specifies failure on truncation. A truncation error occurred on the specified object of the specified component.

    Does anyone know why the editor is insisting on a length of 7? I don't seem to be able to change this. Many thanks, Rob.

    Read the article

  • How to get the SQL connection

    - by sweetsecret
    FS_Setting is a VB class which has all the details of the connections, i.e.:

        Public Class FS_Setting
            Public Function Get_RS_Connection() As SqlConnection
                Try
                    Get_RS_Connection = New SqlConnection("Data Source=***********;User ID=sa;Password=*****;database=*********")
                Catch ex As System.Exception
                    Throw New System.Exception("Get_RS_Connection Error:" + ex.Message)
                End Try
            End Function
        End Class

    I need to call the function Get_RS_Connection() in a different class instead of getting the connection all the way again and hard-coding it. I want to call the above class where the SQL connection is declared:

        Namespace FS_Library
        Public Class FS_Errorlog
            Inherits FS_BaseClass
            Try
                cn = New SqlConnection("Data Source=***********;UserID=sa;Password=*****;database=*********")
                cmd = New SqlCommand("dbo.FS_ErrorLog_ADD", cn)
                cmd.CommandType = CommandType.StoredProcedure
                cmd.CommandTimeout = Convert.ToInt32(ConfigurationSettings.AppSettings("Command_Timeout"))
                Me.AddParameter(cmd, "@p_tableKey", SqlDbType.Int, tableKey)
                Me.AddParameter(cmd, "@p_FunctionCode", SqlDbType.Int, FunctionCode)
                Me.AddParameter(cmd, "@p_TableAlias", SqlDbType.VarChar, TableAlias)
                Me.AddParameter(cmd, "@p_ValidationCode", SqlDbType.Int, ValidationCode)
                If Filename = "" Then
                    Filename = "N/A"
                End If
                Me.AddParameter(cmd, "@p_FileName", SqlDbType.VarChar, Filename)
                Me.AddParameter(cmd, "@p_Message", SqlDbType.VarChar, Message)
                Me.AddParameter(cmd, "@p_CreateUser", SqlDbType.VarChar, userID)
                Me.AddParameter(cmd, "@p_UserActionID", SqlDbType.UniqueIdentifier, UserActionID)
                cn.Open()

    Read the article

  • Newbie T-SQL dynamic stored procedure -- how can I improve it?

    - by Andy Jones
    I'm new to T-SQL; all my experience is in a completely different database environment (Openedge). I've learned enough to write the procedure below -- but also enough to know that I don't know enough! This routine will have to go into a live environment soon, and it works, but I'm quite certain there are a number of c**k-ups and gotchas in it that I know nothing about. The routine copies data from table A to table B, replacing the data in table B. The tables could be in any database. I plan to call this routine multiple times from another stored procedure. Permissions aren't a problem: the routine will be run by the dba as a timed job. Could I have your suggestions as to how to make it fit best-practice? To bullet-proof it?

        ALTER PROCEDURE [dbo].[copyTable2Table]
            @sdb varchar(30),
            @stable varchar(30),
            @tdb varchar(30),
            @ttable varchar(30),
            @raiseerror bit = 1,
            @debug bit = 0
        as
        begin
            set nocount on

            declare @source varchar(65)
            declare @target varchar(65)
            declare @dropstmt varchar(100)
            declare @insstmt varchar(100)
            declare @ErrMsg nvarchar(4000)
            declare @ErrSeverity int

            set @source = '[' + @sdb + '].[dbo].[' + @stable + ']'
            set @target = '[' + @tdb + '].[dbo].[' + @ttable + ']'
            set @dropStmt = 'drop table ' + @target
            set @insStmt = 'select * into ' + @target + ' from ' + @source
            set @errMsg = ''
            set @errSeverity = 0

            if @debug = 1
                print('Drop:' + @dropStmt + ' Insert:' + @insStmt)

            -- drop the target table, copy the source table to the target
            begin try
                begin transaction
                exec(@dropStmt)
                exec(@insStmt)
                commit
            end try
            begin catch
                if @@trancount > 0 rollback
                select @errMsg = error_message(), @errSeverity = error_severity()
            end catch

            -- update the log table
            insert into HHG_system.dbo.copyaudit
                (copytime, copyuser, source, target, errmsg, errseverity)
            values(getdate(), user_name(user_id()), @source, @target, @errMsg, @errSeverity)

            if @debug = 1
                print('Message:' + @errMsg + ' Severity:' + convert(Char, @errSeverity))

            -- handle errors, return value
            if @errMsg <> ''
            begin
                if @raiseError = 1
                    raiserror(@errMsg, @errSeverity, 1)
                return 1
            end
            return 0
        END

    Thanks!
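    A minimal sketch of one possible hardening, assuming the same @sdb and @stable parameters as above: building the bracketed names with QUOTENAME instead of plain concatenation escapes closing brackets inside object names, which makes the dynamic drop/insert statements less fragile.

        -- assumes @sdb and @stable as declared in the procedure above
        declare @safeSource nvarchar(300)
        set @safeSource = quotename(@sdb) + N'.[dbo].' + quotename(@stable)  -- e.g. [SourceDb].[dbo].[SourceTable]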

    Read the article

  • SQL Server - CAST AND DIVIDE

    - by rs
        DECLARE @table table(XYZ VARCHAR(8), id int)

        INSERT INTO @table
        SELECT '4000', 1 UNION ALL
        SELECT '3.123', 2 UNION ALL
        SELECT '7.0', 3 UNION ALL
        SELECT '80000', 4 UNION ALL
        SELECT NULL, 5

        SELECT CASE
                   WHEN PATINDEX('^[0-9]{1,5}[\.][0-9]{1,3}$', XYZ) = 0 THEN XYZ
                   WHEN PATINDEX('^[0-9]{1,8}$', XYZ) = 0 THEN CAST(XYZ AS decimal(18,3))/1000
                   ELSE NULL
               END
        FROM @table

    This part - CAST(XYZ AS decimal(18,3))/1000 - doesn't divide the value; it gives me more zeros after the decimal point instead of dividing it. (I even enclosed it in brackets and tried, but same result.) Am I doing something wrong here?
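    A minimal check of the division on its own (a sketch, separate from the query above): when a decimal(18,3) value is divided by the integer literal 1000, SQL Server widens the scale of the result, so the value is divided correctly but comes back padded with trailing zeros.

        -- returns 0.003123 padded with trailing zeros (the division rules widen the result scale),
        -- not an "undivided" value
        SELECT CAST('3.123' AS decimal(18,3)) / 1000 AS divided;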

    Read the article

  • SQL Server Agent 2005 job runs but no output

    - by alimack
    Essentially I have a job which runs in BIDS and as a standalone package, but while it runs under the SQL Server Agent it doesn't complete properly (no error messages, though). The job steps are: 1) Delete all rows from the table; 2) Use a Foreach Loop to fill up the table from Excel spreadsheets; 3) Clean up the table. I've tried this [MS page][1] (steps 1 & 2); I didn't see any need to start changing from Server-side security. Also SQLServerCentral.com for [this page][2], no resolution. How can I get error logging or a fix? Note I've reposted this from Server Fault as it's one of those questions that's not pure admin or programming. I have logged in as the proxy account I'm running this under, and the job runs standalone but complains that the Excel tables are empty?

    Read the article

  • How to get an external variable value in a dtsx package

    - by Rishabh
    Hi, I am executing a .dtsx package from C#, and it executes fine. If I am passing one variable value from the C# code, how can I get it in the .dtsx package for my OLE DB source query? Here is my C# code:

        string file = @"D:\CYNCZFuzzy\CYNCZFuzzy\Contact.dtsx";
        package = app.LoadPackage(file, null);
        Variables vars = package.Variables;
        vars["User::parentContactID"].Value = 1028203;
        pkgResults = package.Execute();
        string result = pkgResults.ToString();

    I need this 1028203 value in my OLE DB source query; here is my query:

        select cr.MasterContactID as ParentContactID, c.ID, C.FirstName, C.MiddleName,
               c.LastName, c.ID as FieldID
        from Contact c
        inner join ContactRelation cr on cr.SlaveContactID = c.ID
        where RelationshipID = 1
          AND cr.MasterContactID = ?

    What should I write at the ? to get the 1028203 value from the C# page? Thanks in advance...

    Read the article

  • Return dataset in dataflow

    - by praveen
    Hi all, could I get some ideas on retrieving a dataset using the lookup method? Basically, my scenario is that I have source data that needs to look up against another source table, and on the matching column I need to get all the records from the other source; it's a one-to-many relation. I tried the Lookup, but it gives only one record per match, and the OLE DB Command doesn't retrieve any data as it will only do Insert/Update operations. Thanks, prav
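    A minimal sketch of one workaround, with illustrative table and column names (SourceA, SourceB and KeyCol are assumptions, not from the question): doing the one-to-many lookup as a JOIN in the source query returns every matching row, whereas the Lookup gives only one record per match as described above.

        SELECT a.KeyCol, a.SomeColumn, b.OtherColumn
        FROM SourceA AS a
        INNER JOIN SourceB AS b
            ON b.KeyCol = a.KeyCol;  -- one output row per match, so the one-to-many relation fans out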

    Read the article

  • Syntax error in showing the Error Description

    - by Sreejesh Kumar
    What is the correct syntax for using "@[System::ErrorDescription]" inside a query such as an "INSERT"? I am unable to retrieve the actual error description in the table; the value stored in the table is the literal text "@[System::ErrorDescription]", not the result.
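    A minimal sketch of one common arrangement, assuming the INSERT runs from an Execute SQL Task over an OLE DB connection (the ErrorLog table and column are illustrative, not from the question): the variable is supplied through the task's Parameter Mapping page rather than written into the statement text, where it would be treated as a literal string.

        -- @[System::ErrorDescription] is mapped to parameter 0 on the task's Parameter Mapping page
        INSERT INTO ErrorLog (ErrorDescription) VALUES (?);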

    Read the article

  • How to choose which rows to insert for the same ID in SQL?

    - by user1429595
    So basically I have a table called "table_1":

        ID  Index  STATUS    TIME  DESCRIPTION
        1   15     pending   1:00  Started Pending
        1   16     pending   1:05  still in request
        1   17     pending   1:10  still in request
        1   18     complete  1:20  Transaction has been completed
        2   19     pending   2:25  request has been started
        2   20     pending   2:30  in progress
        2   21     pending   2:35  in progess still
        2   22     pending   2:40  still pending
        2   23     complete  2:45  Transaction Compeleted

    I need to insert these data into my second table "table_2", where only the start and complete times are included, so "table_2" should look like this:

        ID  Index  STATUS    TIME  DESCRIPTION
        1   15     pending   1:00  Started Pending
        1   18     complete  1:20  Transaction has been completed
        2   19     pending   2:25  request has been started
        2   23     complete  2:45  Transaction Compeleted

    If anyone can help me write the SQL query for this, I would highly appreciate it. Thanks in advance
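    A minimal sketch of one way to do this, using the table and column names from the question and assuming that Index orders the rows within each ID and that each ID has exactly one 'complete' row: the first row per ID plus the 'complete' row are copied over.

        WITH ranked AS (
            SELECT [ID], [Index], [STATUS], [TIME], [DESCRIPTION],
                   ROW_NUMBER() OVER (PARTITION BY [ID] ORDER BY [Index]) AS rn
            FROM table_1
        )
        INSERT INTO table_2 ([ID], [Index], [STATUS], [TIME], [DESCRIPTION])
        SELECT [ID], [Index], [STATUS], [TIME], [DESCRIPTION]
        FROM ranked
        WHERE rn = 1 OR [STATUS] = 'complete';  -- first pending row plus the completion row per ID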

    Read the article

  • Handling primary key duplicates in a data warehouse load

    - by Meff
    I'm currently building an ETL system to load a data warehouse from a transactional system. The grain of my fact table is the transaction level. In order to ensure I don't load duplicate rows I've put a primary key on the fact table, which is the transaction ID. I've encountered a problem with transactions being reversed - In the transactional database this is done via a status, which I pick up and I can work out if the transaction is being done, or rolled back so I can load a reversal row in the warehouse. However, the reversal row will have the same transaction ID and so I get a primary key violation. I've solved this for now by negating the primary key, so transaction ID 1 would be a payment, and transaction ID -1 (In the warehouse only) would be the reversal. I have considered an alternative of generating a BIT column, where 0 is normal and 1 is reversal, then making the PK the transaction ID and the BIT column. My question is, is this a good practice, and has anyone else encountered anything like this? For reference, this is a payment processing system, so values will not be modified, so there will only ever be transactions and reversals.
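    A minimal sketch of the BIT-column alternative mentioned above (FactPayment and Amount are illustrative names, not from the question): the composite key allows exactly one original and one reversal row per transaction while keeping the natural transaction ID intact for joins back to the source system, rather than negating it.

        CREATE TABLE FactPayment (
            TransactionID  int           NOT NULL,
            IsReversal     bit           NOT NULL DEFAULT (0),  -- 0 = original posting, 1 = reversal
            Amount         decimal(18,2) NOT NULL,
            CONSTRAINT PK_FactPayment PRIMARY KEY (TransactionID, IsReversal)
        );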

    Read the article

  • Package start and end times

    - by abkl
    Hi, I wanted to know if there is any system variable which I can use to access the package execution start and end times. My requirement is that I need to store these in 2 relevant fields in a table in the database. Thank you. Abhi

    Read the article

  • Package Configurations

    - by Sreejesh Kumar
    I had created a package with the necessary connections and enabled Package Configurations, but I am unable to execute the package. An error is shown, and it's related to a connection failure, I think.

    Read the article

  • My Sessions at Oracle OpenWorld 2012

    - by Ravi Sankaran
    I have 2 sessions at Oracle OpenWorld 2012.

    Oracle Fusion Applications: Customizing and Extending Business Processes (CON8719)
    Rajesh Raheja, Senior Director, Product Management for Oracle Fusion Middleware Business Integration, joins me to talk about the approaches to customizing and extending Oracle Fusion Applications with Oracle SOA Suite.
    When: Monday, Oct 1, 4:45 PM – 5:45 PM
    Where: Palace Hotel – Twin Peaks North

    Oracle Fusion Applications: Best Practices in Integration Design Patterns (CON8685)
    I will join Rajesh Raheja to provide a high-level view of the Oracle Fusion Applications integration strategy and show the best-practice integration design patterns. You will learn how to discover integration assets, invoke web services and use cloud data integration. The session is not just limited to SaaS deployments, but will be useful for on-premises customers as well.
    When: Tuesday, Oct 2, 1:15 PM – 2:15 PM
    Where: Palace Hotel – Telegraph

    I will also be at the SOA Customer Advisory Board on Thursday. Here is another session that I would strongly recommend. It discusses how Oracle SOA Suite can be used to integrate applications with the ones on the cloud.

    How to Integrate Cloud Applications with Oracle SOA Suite (CON8968)
    Rajesh Raheja will be joined by Geeta Pyne (Director, Middleware at BMC Software) to address cloud integration challenges and how Oracle SOA Suite can help with a consistent approach to integration, whether on-premises or cloud. I am quite excited about this session as we will tackle the hype and myth of “simple” cloud integrations and share real-life application integration experiences. Don’t miss this one!
    When: Tuesday, Oct 2, 11:45 AM – 12:45 PM
    Where: Moscone West – 3003

    See you at Oracle OpenWorld!

    Read the article
