Search Results

Search found 4853 results on 195 pages for 'continuous integration'.


  • Updates to Stairway to Integration Services

    - by andyleonard
    The Stairway to Integration Services has been updated! I added content to Step 1 to provide more detail about creating a first SSIS project, and corrected a typo in Step 2 that referred to an older name for the Step 1 article. I also made the corrected Step 1 article name a link, to help readers find it. Thanks to Steve Jones ( blog | @way0utwest ) for all his hard work editing and corralling trifling authors. :{>...(read more)

    Read the article

  • Oracle SRM increases enterprise footprint with Eloqua integration: an Ovum report

    - by Richard Lefebvre
    At Oracle OpenWorld in September, Oracle announced that its Social Relationship Management (SRM) suite is now further integrated with Oracle Eloqua, its newly acquired marketing automation platform. "Oracle is the only leading vendor to date to have fully integrated social with a sales lead management platform within the context of marketing automation," writes Gerry Brown in this Ovum report, in which you can read about all the benefits of this integration.

    Read the article

  • SQL Server 2008 and 2008 R2 Integration Services - Managing Local Processes Using Script Task

    SQL Server 2008 R2 Integration Services includes a number of predefined tasks that implement common administrative actions to help with data extraction, transformation and loading (ETL). While in a majority of cases they are sufficient to deliver required functionality, there might be situations where an extra level of flexibility is desired.

    Read the article

  • ZionTech blogging about integration of OUD and EUS

    - by Sylvain Duloutre
    Here are some good posts about OUD and EUS integration:
    http://ziontech.com/blog/integrating-oud-and-eus/
    The EUS/OUD related posts as of now are:
    http://ziontech.com/blog/preparing_database/
    http://ziontech.com/blog/integrating-oud-eus-users-groups-mapping/
    http://ziontech.com/blog/integrating-oud-eus-oudproxy/
    http://ziontech.com/blog/integrating-oud-eus-troubleshooting/

    Read the article

  • SQL Server 2012 Integration Services- Using Environments in Package Execution

    SQL Server 2012 Integration Services offers several different options for deploying and storing SSIS packages along with their associated projects, two of which are directly related to the two deployment models available in the SQL Server Data Tools console. Marcin Policht presents one of these methods, which deals with packages deployed using the Project Deployment Model and leverages the newly introduced Environments.
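
    To make the mechanism concrete, here is a minimal T-SQL sketch of executing a deployed package with an environment reference supplying its parameter values; the folder, project, package, and environment names are hypothetical, and the calls are SSISDB catalog stored procedures:

        DECLARE @reference_id BIGINT =
            (SELECT er.reference_id
             FROM SSISDB.catalog.environment_references AS er
             JOIN SSISDB.catalog.projects AS p ON p.project_id = er.project_id
             WHERE p.name = N'MyProject' AND er.environment_name = N'Test');

        DECLARE @execution_id BIGINT;

        -- Create an execution bound to the environment reference, then start it
        EXEC SSISDB.catalog.create_execution
            @folder_name     = N'MyFolder',
            @project_name    = N'MyProject',
            @package_name    = N'LoadDW.dtsx',
            @use32bitruntime = 0,
            @reference_id    = @reference_id,
            @execution_id    = @execution_id OUTPUT;

        EXEC SSISDB.catalog.start_execution @execution_id;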

    Read the article

  • SQL Server 2012 Integration Services - Implementing Package Security using Access Control

    SQL Server 2012 Integration Services offers a wide range of powerful features that allow you to streamline and automate tasks involving data extraction, transformation, and loading. However, incorporating these features into your existing business intelligence framework frequently necessitates additional security measures to ensure that the data being processed remains protected from unauthorized access.
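
    As a rough sketch of what such a measure can look like in the 2012 SSISDB catalog, the following grants a database user read access to a deployed project; the project and user names are hypothetical, and the numeric type codes follow the catalog.grant_permission documentation:

        USE SSISDB;

        DECLARE @project_id   BIGINT = (SELECT project_id
                                        FROM catalog.projects
                                        WHERE name = N'MyProject');
        DECLARE @principal_id INT    = DATABASE_PRINCIPAL_ID(N'etl_operator');

        -- object_type 2 = project, permission_type 1 = READ
        EXEC catalog.grant_permission
            @object_type     = 2,
            @object_id       = @project_id,
            @principal_type  = 1,             -- database principal
            @principal_id    = @principal_id,
            @permission_type = 1;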

    Read the article

  • Application Integration Architecture – Bringing It All Together - Part 2

    Oracle's Application Integration Architecture (AIA) provides Oracle customers, prospects, and partners with the capability to more easily integrate and orchestrate information and transactions across multiple systems. Learn more about Oracle AIA and get an update on new and planned integrations from Jose Lazares, Vice President, Oracle Applications Development.

    Read the article

  • Application Integration Architecture – Bringing It All Together - Part 1

    Oracle's Application Integration Architecture (AIA) provides Oracle customers, prospects, and partners with the capability to more easily integrate and orchestrate information and transactions across multiple systems. Learn more about Oracle AIA and get an update on new and planned integrations from Jose Lazares, Vice President, Oracle Applications Development.

    Read the article

  • SQL Server 2012 Integration Services - Project Deployment

    SQL Server 2012 Integration Services parameters introduce a new way of dealing with package development, deployment, and execution. In order to truly appreciate their relevance, it is necessary to take a look at the new Project Deployment Model.
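
    For a taste of the Project Deployment Model, a build output (.ispac) can be deployed to the catalog entirely in T-SQL; a minimal sketch, assuming a hypothetical folder, project, and file path, and that the folder already exists:

        USE SSISDB;

        -- Read the compiled project (.ispac) produced by the build
        DECLARE @project_stream VARBINARY(MAX) =
            (SELECT BulkColumn
             FROM OPENROWSET(BULK 'C:\build\MyProject.ispac', SINGLE_BLOB) AS ispac);

        -- Deploy into an existing folder (create one first with catalog.create_folder)
        EXEC catalog.deploy_project
            @folder_name    = N'MyFolder',
            @project_name   = N'MyProject',
            @project_stream = @project_stream;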

    Read the article

  • Data Mining: Part 14 - Export DMX results with Integration Services

    In this chapter we will explain how to work with Data Mining models in Integration Services. Specifically, we will talk about the Data Mining Query Task in SSIS.
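
    To give a flavor of the DMX such a task executes, here is a minimal, hypothetical prediction query; it is wrapped in OPENQUERY so it could also be run from the relational engine through a linked server to Analysis Services (the linked server name SSAS_DM, the mining model, and the column names are all assumptions):

        -- SSAS_DM is an assumed linked server pointing at the Analysis Services instance
        SELECT *
        FROM OPENQUERY(SSAS_DM, '
            SELECT
                t.CustomerKey,
                Predict([Bike Buyer])            AS PredictedBuyer,
                PredictProbability([Bike Buyer]) AS Probability
            FROM [TM Decision Tree]
            PREDICTION JOIN
                OPENQUERY([Adventure Works DW],
                          ''SELECT CustomerKey, Age, YearlyIncome
                            FROM dbo.ProspectiveBuyer'') AS t
            ON  [TM Decision Tree].Age             = t.Age
            AND [TM Decision Tree].[Yearly Income] = t.YearlyIncome
        ');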

    Read the article

  • Can an internally developed, fast-evolving, agile, short-sprint web application lend itself to offshoring?

    - by Gavin Howden
    I have recently been set a target: within 12 months, achieve readiness to successfully manage and deliver results through the use of offshore teams on our mainline development project. Our mainline is a multi-thousand-user, highly available web application, plus various related SaaS components delivered through it. We work agile on the mainline with rapid one-week sprints and continuous integration. Our delivery platform is a bespoke PHP framework, although we have some .NET services and components in the mix. My view is that an offshore team could work if we either ship out an entire isolated project for offshore development, or specify a component of our system in huge detail up front. But we don't currently work like that; it would conflict with the in-house method, and unless the offshore team is working within our team, with our development/deployment chain, it could be an integration nightmare. So my question is: given that we have a closed-source bespoke framework (private IP) which we train our developers to use, and we work agile, minimising documentation, maximising communication, and responding to rapidly changing requirements, with much of the quality control done via team skills building and peer review, how can I make offshoring work on our mainline development?

    Read the article

  • CI - How long is continuous?

    - by Andy
    We currently use CCNet as our continuous integration server. Most projects check for changes every 30 seconds (the default) and, if needed, perform a build (unit tests, StyleCop, FxCop, etc.). We've gotten quite a few projects now, and the server spends most of its time near 100% CPU utilization. This has alarmed some of the development team, even though the server is responsive and builds still take about the same time they always have. It's been suggested that we increase the check interval to about five minutes. To me that seems too long, and we risk people committing code and then going home for the weekend, leaving a broken build that possibly holds up others. In response, the suggestion is that if someone needs to know the results they can force a build. But that seems to defeat the purpose of CI, as I thought it was supposed to be automated. My proposed solution is just to get another build server and split the builds amongst the servers. Am I thinking about this the wrong way, or is there a point where, if integration isn't happening often enough, you're not really doing CI anymore?

    Read the article

  • Should integration testing of DAOs be done in an application server?

    - by HDave
    I have a three-tier application under development and am creating integration tests for the DAOs in the persistence layer. When the application runs in WebSphere or JBoss, I expect to use the connection pooling and transaction manager of those application servers. When the application runs in Tomcat or Jetty, we'll be using C3P0 for pooling and Atomikos for transactions. Because of these different subsystems, should the DAOs be tested in a fully configured application server environment, or should we handle those concerns when integration testing the service layer? Currently we plan on setting up a simple JDBC data source with non-JTA (i.e. resource-local) transactions for DAO integration testing, thus no application server is involved... but this leaves me wondering about environmental problems we won't uncover.

    Read the article

  • Fraud Detection with the SQL Server Suite Part 1

    - by Dejan Sarka
    While working on different fraud detection projects, I developed my own approach to solving this problem, which I am introducing in my PASS Summit 2013 session. I also wrote a whitepaper on the same topic, which was generously reviewed by my friend Matija Lah. In order to spread this knowledge faster, I am starting a series of blog posts which will, taken together, make up the whole whitepaper.

    Abstract

    With the massive usage of credit cards and web applications for banking and payment processing, the number of fraudulent transactions is growing rapidly and on a global scale. Several fraud detection algorithms are available within a variety of different products. In this paper, we focus on using the Microsoft SQL Server suite for this purpose. In addition, we explain our original approach to solving the problem by introducing a continuous learning procedure. Our preferred type of service is mentoring; it allows us to perform the work and consulting together with transferring the knowledge onto the customer, thus making it possible for the customer to continue learning independently. This paper is based on practical experience with different projects covering online banking and credit card usage.

    Introduction

    A fraud is a criminal or deceptive activity undertaken with the intention of achieving financial or some other gain. Fraud can appear in multiple business areas. You can find a detailed overview of the business domains where fraud can take place in Sahin Y. & Duman E. (2011), Detecting Credit Card Fraud by Decision Trees and Support Vector Machines, Proceedings of the International MultiConference of Engineers and Computer Scientists 2011 Vol 1. Hong Kong: IMECS.

    Dealing with fraud includes fraud prevention and fraud detection. Fraud prevention is a proactive mechanism which tries to disable frauds by using previous knowledge. Fraud detection is a reactive mechanism with the goal of detecting suspicious behavior when a fraudster surpasses the fraud prevention mechanism. A fraud detection mechanism checks every transaction and assigns it a probability between 0 and 1 that serves as a score for evaluating whether the transaction is fraudulent. A fraud detection mechanism cannot detect frauds with 100% certainty; therefore, manual transaction checking must also be available. With fraud detection, this manual part can focus on the most suspicious transactions. This way, an unchanged number of supervisors can detect significantly more frauds than could be achieved with traditional methods of selecting which transactions to check, for example random sampling.

    There are two principal data mining techniques available, both in general data mining and in specific fraud detection techniques: supervised (directed) and unsupervised (undirected). Supervised techniques, or data mining models, use previous knowledge. Typically, existing transactions are marked with a flag denoting whether a particular transaction is fraudulent. Customers do report frauds at some point in time, and the transactional system should be capable of accepting such a flag. Supervised data mining algorithms try to explain the value of this flag by using different input variables. When the patterns and rules that lead to frauds are learned through the model training process, they can be used to predict the fraud flag on new incoming transactions. Unsupervised techniques analyze data without prior knowledge, i.e. without the fraud flag; they try to find transactions which do not resemble other transactions, i.e. outliers. In both cases, there should be more frauds in the data set selected for checking by using the data mining knowledge than in a data set selected with simpler methods; this is known as the lift of a model. Typically, we compare the lift with random sampling. The supervised methods typically give a much better lift than the unsupervised ones. However, we must use the unsupervised ones when we do not have any previous knowledge. Furthermore, unsupervised methods are useful for checking whether the supervised models are still efficient.

    Accuracy of the predictions drops over time. Patterns of credit card usage, for example, change over time. In addition, fraudsters continuously learn as well. Therefore, it is important to check the efficiency of the predictive models with the undirected ones. When the difference between the lift of the supervised models and the lift of the unsupervised models drops, it is time to refine the supervised models. However, the unsupervised models can become obsolete as well. It is also important to measure the overall efficiency of both supervised and unsupervised models over time. We can compare the number of predicted frauds with the total number of frauds, which includes predicted and reported occurrences. For measuring behavior across time, specific analytical databases called data warehouses (DW) and on-line analytical processing (OLAP) systems can be employed. By controlling the supervised models with unsupervised ones, and by using an OLAP system or DW reports to control both, a continuous learning infrastructure can be established.

    There are many difficulties in developing a fraud detection system. As has already been mentioned, fraudsters continuously learn, and the patterns change. The exchange of experiences and ideas can be very limited due to privacy concerns. In addition, both data sets and results might be censored, as companies generally do not want to publicly expose actual fraudulent behaviors. Therefore it can be quite difficult, if not impossible, to cross-evaluate the models using data from different companies and different business areas. This fact stresses the importance of continuous learning even more. Finally, the number of frauds within the total number of transactions is small: typically much less than 1% of transactions are fraudulent. Some predictive data mining algorithms do not give good results when the target state is represented with a very low frequency. Data preparation techniques like oversampling and undersampling can help overcome the shortcomings of many algorithms (see the sketch at the end of this excerpt).

    The SQL Server suite includes all of the software required to create, deploy, and maintain a fraud detection infrastructure. The Database Engine is the relational database management system (RDBMS), which supports all activity needed for data preparation and for data warehouses. SQL Server Analysis Services (SSAS) supports OLAP and data mining (in version 2012, you need to install SSAS in multidimensional and data mining mode; this was the only mode in previous versions of SSAS, while SSAS 2012 also supports the tabular mode, which does not include data mining). Additional products from the suite can be useful as well. SQL Server Integration Services (SSIS) is a tool for developing extract, transform, and load (ETL) applications. SSIS is typically used for loading a DW, and in addition, it can use SSAS data mining models for building intelligent data flows. SQL Server Reporting Services (SSRS) is useful for presenting the results in a variety of reports. Data Quality Services (DQS) supports the occasional data cleansing process by maintaining a knowledge base. Master Data Services (MDS) is an application that helps companies maintain a central, authoritative source of their master data, i.e. the most important data for any organization.

    For an overview of the SQL Server business intelligence (BI) part of the suite, which includes the Database Engine, SSAS, and SSRS, please refer to Veerman E., Lachev T. & Sarka D. (2009), MCTS Self-Paced Training Kit (Exam 70-448): Microsoft® SQL Server® 2008 Business Intelligence Development and Maintenance, MS Press. For an overview of the enterprise information management (EIM) part, which includes SSIS, DQS, and MDS, please refer to Sarka D., Lah M. & Jerkic G. (2012), Training Kit (Exam 70-463): Implementing a Data Warehouse with Microsoft® SQL Server® 2012, O'Reilly. For details about SSAS data mining, please refer to MacLennan J., Tang Z. & Crivat B. (2009), Data Mining with Microsoft SQL Server 2008, Wiley.

    SQL Server Data Mining Add-ins for Office, a free download for Office versions 2007, 2010, and 2013, bring the power of data mining to Excel, enabling advanced analytics there. Together with PowerPivot for Excel, which is freely downloadable for Excel 2010 and already included in Excel 2013 and which brings OLAP functionality directly into Excel, they make it possible for an advanced analyst to build a complete learning infrastructure using a familiar tool. This way, many more people, including employees in subsidiaries, can contribute to the learning process by examining local transactions and quickly identifying new patterns.
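
    As a small illustration of the data preparation step mentioned in the excerpt, the following T-SQL sketch undersamples the non-fraudulent majority class to raise the share of the rare target state in a training set; the dbo.Transactions table and its IsFraud flag are hypothetical:

        -- Keep every fraudulent transaction plus a small random sample of the rest,
        -- raising the share of the rare target state in the training set
        SELECT *
        INTO dbo.TrainingSample
        FROM dbo.Transactions
        WHERE IsFraud = 1
           OR (IsFraud = 0 AND ABS(CHECKSUM(NEWID())) % 100 < 2);  -- ~2% of non-frauds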

    Read the article

  • Bulletproof way to DROP and CREATE a database under Continuous Integration

    - by H. Abraham Chavez
    I am attempting to drop and recreate a database from my CI setup, but I'm finding it difficult to automate the dropping and creation of the database, which is to be expected given the complexities of the db being in use. Sometimes the process hangs, errors out with "db is currently in use", or just takes too long. I don't care if the db is in use; I want to kill it and create it again. Does someone have a straight-shot method to do this? Alternatively, does anyone have experience dropping all objects in the db instead of dropping the db itself?

        USE master
        -- Drop the database if it exists, then recreate it
        IF EXISTS (SELECT name FROM sys.databases WHERE name = 'mydb')
        BEGIN
            ALTER DATABASE mydb SET SINGLE_USER --or RESTRICTED_USER
            --WITH ROLLBACK IMMEDIATE
            DROP DATABASE mydb
        END
        CREATE DATABASE mydb;
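
    For reference, the variant commonly used to get past the "database is currently in use" error forces open connections to roll back before the drop; a minimal sketch, assuming the CI account has sufficient rights:

        USE master;
        IF DB_ID(N'mydb') IS NOT NULL
        BEGIN
            -- Disconnect all sessions and roll back their open transactions
            ALTER DATABASE mydb SET SINGLE_USER WITH ROLLBACK IMMEDIATE;
            DROP DATABASE mydb;
        END
        CREATE DATABASE mydb;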

    Read the article

  • Typical SVN repo structure seems to be sub-optimal for continuous integration...

    - by Dave
    I've set up our SVN repository as the Subversion book suggests, and this is also how my previous companies have done it. It looks something like this: /trunk /branches /tags /extlibs /docs, where the first three are pretty obvious and extlibs is for 3rd-party assemblies that we wouldn't typically recompile ourselves. All of this works great for the daily development stuff. Now I've installed TeamCity and have builds, unit tests, code coverage, and code analysis running. Everything is great, except for the fact that this code structure results in too much code getting downloaded. So here's the catch-22, in my opinion: it's silly to download all of the aforementioned folders from the SVN repo when I only need /trunk and /extlibs, but I can only specify one repo folder to download in the TeamCity VCS settings. The other possibility is to put the /extlibs folder into /trunk, but in order to compile branches, /extlibs would have to go into all of those as well (since I usually branch the trunk, and not individual subfolders)... and this would seem infinitely more evil, since /extlibs could actually be larger than /trunk and /branches, with all of the binaries stored there. Do you guys have any suggestions for me? Thanks!

    Read the article

  • Is there a pre-made Continuous Integration solution for .NET applications?

    - by Brett Rigby
    From my perspective, we're constructing our own 'flavour' of NAnt/Ivy/CruiseControl.NET in-house, and I can't help but get the feeling that other dev shops are doing exactly the same work, and then everybody is finding out the same problems and pitfalls with it. I'm not complaining about NAnt, Ivy, or CruiseControl at all, as they've been brilliant in helping our team of developers become more sure of the quality of their code, but it just seems strange that these tools are very popular, yet we're all reinventing the CI wheel. Is there a pre-made solution for building .NET applications using the tools mentioned above?

    Read the article

  • Hell and Diplomacy: Notes on Software Integration

    - by ericajanine
    Well, I'm getting cabin fever and short-timer's ADD all at the same time. I haven't been anywhere outside of my greater city area in FOREVER, and I'm only days away from my vacation. I have brainlock because the last few days have been non-stop defusing amazingly hostile conversations. I think I'll write about that. So then, I "do" software. At the end of the day, software is pretty straightforward. Software is that thing we love and try to make do things not currently in play, in existence. If a process around getting software to do something is broken (like most actually are), then we should acknowledge it and move on. We are professional. We are helpful beyond the normal call of duty. We live and breathe making life better for those apps being active in the world. But above all, the shocker: we are SERVICE. In a service frame of mind, all perspectives shift to what is best overall for system stabilization vs. what must be in production to meet business objectives. It doesn't matter how much you like or dislike the creator of said software. It doesn't matter what time you went to bed last night or if your mate appreciates your Death March attitude. Getting a product in, and when, is an age-old dilemma in a software environment where more than, say, 3 people are involved. We know this. Taking a servant's perspective eliminates the drama surrounding what a group of half-baked developers forgot to tell each other in the 11th hour about their trampling changes before check-in. We, my counterparts in society, get paid to deal with that drama. I get paid to defuse that drama and make everything integrate as smoothly as possible. At the end of the day, attacking someone over a minor detail not only makes things worse, it's against the whole point of our real existence. Being in support or software integration means you are to keep your eyes on the end game. That end game? It's making a solution work for all stakeholders, not just you or your immediate superior. Development and technology groups exist because business groups need them to exist and to solve their issues. The end game? Doing what is best for those business groups, ultimately. Period. Note: that does not mean you let your business users solely dictate when and if something gets changed in an environment you ultimately own. That's just crazy. Software and its environments are legitimately owned by those who manage them directly, no matter how important a business group believes it is to the existence of mankind. So, you negotiate the terms of changing that environment and only do so upon that negotiation. Diplomacy is in order. So, to finish my thoughts: if you have no ability to keep your mouth shut in a situation where a business or development group truly needs your help to make something work, even beyond a deadline, find another profession. Beating someone up verbally because they screwed up means a service attitude is not at the forefront of your motivation for doing what is ultimately their work and their product. Software, especially integration, requires a strong will and a soft touch to keep it on track. Not a hammer covered in broken glass.

    Read the article
