Search Results

Search found 9326 results on 374 pages for 'enterprise integration'.

Page 76/374 | < Previous Page | 72 73 74 75 76 77 78 79 80 81 82 83  | Next Page >

  • How do I manage dependencies for automated builds on my build server?

    - by Tom Pickles
    I'm trying to implement continuous integration into our day-to-day work. In our team, we're moving from just building our code in Visual Studio on our workstations and deploying, to using MSBuild.exe and automating on our build server (which is Jenkins) without the use of Visual Studio. We have external dependencies referenced in our projects, such as Automap. Because the Automap (for example) DLL isn't on the build server, the MSBuild execution fails, for obvious reasons. There are other DLLs that I need to be part of the build; I'm just using Automap as an example. So what's the best way to get any dependencies onto the build server as part of the automated build? I've seen references to using a 'lib' folder, but I don't really understand where I should be putting it (in my project, filesystem, SVN...?), and how the build server will get to it. I've also read that NuGet can do something with dependencies, but my build server isn't connected to the internet, and I don't understand how I can get my build to pull a NuGet package I may have created, and how it works together. Edit: I'm using Subversion, and we cannot use TeamCity as we would have to buy it and there's zero chance of funding.
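    A minimal sketch of the 'lib' folder approach, assuming the third-party DLLs are committed to Subversion under a shared \Trunk\Lib folder (the relative path and assembly name below are placeholders):

      <!-- In the .csproj: point the reference at the checked-in copy of the DLL -->
      <ItemGroup>
        <Reference Include="Automap">
          <!-- relative path from the project folder to the shared lib folder in the working copy -->
          <HintPath>..\..\Lib\Automap.dll</HintPath>
          <!-- copy the DLL to the output folder so nothing needs to be installed on the build server -->
          <Private>True</Private>
        </Reference>
      </ItemGroup>

    Because the folder is part of the repository, Jenkins gets the DLLs with every checkout and MSBuild resolves them from the HintPath.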

    Read the article

  • Uncommitted reads in SSIS

    - by OldBoy
    I'm trying to debug some legacy Integration Services code, and really want some confirmation on what I think the problem is: We have a very large data task inside a control flow container. This control flow container is set up with TransactionOption = Supported - i.e. it will 'inherit' transactions from parent containers, but none are set up here. Inside the data flow there is a call to a stored proc that writes to a table, with pseudocode something like: "If a record doesn't exist that matches these parameters then write it". Now, the issue is that there are three records being passed into this proc, all with the same parameters, so logically the first record doesn't find a match and a record is created. The second record (with the same parameters) also doesn't find a match and another record is created. My understanding is that the first 'record' passed to the proc in the data flow is uncommitted and therefore can't be 'read' by the second call. The upshot is that all three records create a row, when logically only the first should. In this scenario am I right in thinking that it is the uncommitted transaction that stops the second call from seeing the first? Even setting the isolation level on the container doesn't help, because it's not being wrapped in a transaction anyway... Hope that makes sense, and any advice gratefully received. Workarounds confer god-like status on you.
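    For reference, the stored proc described follows the familiar "insert if not exists" pattern; a minimal T-SQL sketch with hypothetical table and column names:

      IF NOT EXISTS (SELECT 1 FROM dbo.TargetTable WHERE KeyCol = @KeyCol)
      BEGIN
          -- Without a unique constraint on KeyCol (or a locking hint such as
          -- WITH (UPDLOCK, HOLDLOCK) on the SELECT), two calls carrying the same
          -- key can both pass the existence check and both insert a row.
          INSERT INTO dbo.TargetTable (KeyCol, ValueCol)
          VALUES (@KeyCol, @ValueCol);
      END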

    Read the article

  • Live Webcast, Dec. 6: Enterprise Clouds with Oracle VM

    - by Monica Kumar
    Mark your calendar! On Tuesday, Dec. 6th at 9am PT, we are hosting a live webcast with Oracle VM experts. Enterprise Clouds with Oracle VM, Tuesday, Dec. 6 at 9 AM US PT. The ability to create a cloud leveraging public or private infrastructure has been hampered by the lack of availability of practical, cost-effective choices for server virtualization. In this session, you will learn how Oracle provides a single virtualization solution for your entire infrastructure, and how Oracle Enterprise Manager and Oracle Virtual Assembly Builder help you manage Oracle Applications across the cloud. Also find out how virtualization was leveraged to transform IT for Oracle University and support more than 350,000 students in more than 40,000 classes each year. Those lessons have paved the path to private cloud computing inside Oracle. Speakers: Adam Hawley, Senior Director of Product Management, Oracle; Dan Herrup, Principal Systems Engineer, Oracle Corporate Citizenship. Register Now.

    Read the article

  • Oracle's Vision for the Social-Enabled Enterprise - Partner Webcast, September 10th

    - by Richard Lefebvre
    Smart companies are developing social media strategies to engage customers, gain brand insights, and transform employee collaboration and recruitment. Oracle is powering this transformation with the most comprehensive enterprise social platform, one that lets you: monitor and engage in social conversations; collect and analyze social data; build and grow brands through social media; integrate enterprise-wide social functionality into a single system; and create rich social applications. Join Oracle President Mark Hurd and senior Oracle executives to learn more about Oracle’s vision for the social-enabled enterprise. Register now for this Webcast - Mon., Sept. 10, 2012, 10 a.m. PT / 19:00 CET.

    Read the article

  • Building Enterprise Smartphone App – Part 2: Platforms and Features

    - by Tim Murphy
    This is part 2 in a series of posts based on a talk I gave recently at the Chicago Information Technology Architects Group. Feel free to leave feedback. In the previous post I discussed what reasons a company might have for creating a smartphone application. In this installment I will cover some of the history and state of the different platforms, as well as features that can be leveraged for building enterprise smartphone applications. Platforms Before you start choosing a platform to develop your solutions on, it is good to understand how we got here and what features you can leverage. History To my memory we owe all of this to a product called the Apple Newton that came out in 1987. It was the first PDA, and back then I was much more of an Apple fan. I was very impressed with this device even though it never really went anywhere. The Palm Pilot by US Robotics was the next major advancement in PDAs. It had a simple shorthand window that allowed for quick stylus entry. Later, Windows CE came out and started the broadening of the PDA market. After that it was the Palm and CE operating systems that started showing up on cell phones, and for some time these were the two dominant operating systems distributed with devices from multiple hardware vendors. Current The iPhone was the first smartphone to take away the stylus and give us a multi-touch interface. It was a revolution in usability and really changed the attractiveness of smartphones for the general public. This brought us to the beginning of the current state of the market with the concept of an online store that makes it easy for customers to get new features and functionality on demand. With Android, Google made this more than a one-horse race. Not only did they come to compete; their low cost actually made Android the leading OS. Of course, what makes Android so attractive is also its major fault. It is so open that it has been a target for malware, which leaves consumers exposed. Fortunately for Google, though, most consumers aren't aware of the threat that they are under. Although Microsoft had put out one of the first smartphone operating systems with CE, it had to play catch-up and finally came out with the Windows Phone. They have gone for a market approach between those of iOS and Android. They support multiple hardware vendors like Google does, but they kept a certification process for applications that is similar to Apple's. They also created a user interface that was different enough to give it a clear separation from the other two platforms. The result of all this is hundreds of millions of smartphones being sold monthly across all three platforms, giving us a wide range of choices and challenges when it comes to developing solutions. Features So what are the features that make these devices flexible enough to be considered for use in the enterprise? The biggest advantage of today's devices is network connectivity. The ability to access information from multiple sources at a moment's notice is critical for businesses. Add to that the ability to communicate over a variety of text, voice and video modes and we have a powerful starting point. Every smartphone has a camera, and they are not just useful for posting to Instagram. We are seeing more applications, such as Bing Vision, that allow us to scan just about any printed code or text to find information. These capabilities have been made available to developers in the form of standard libraries for reading barcodes of just about any flavor and for optical character recognition (OCR) interpretation. Bluetooth gives us the ability to communicate with multiple devices. Whether these are headsets, keyboards or printers, the wireless communication capabilities are just starting to evolve. The more these wireless communication protocols grow, the more opportunities we will see to transfer data between users and a variety of devices. Local storage of information that can be called up even when the device cannot reach the network is the other big capability. This gives users the ability to work offline as well as transmit information when connections are restored. These are the tools that we have to work with to build applications that can be leveraged to gain a competitive advantage for companies that implement them. Coming Up In the third installment I will cover key concerns that you face when building enterprise smartphone apps. del.icio.us Tags: smartphones, enterprise smartphone apps, architecture, iOS, Android, Windows Phone

    Read the article

  • SAP and Microsoft announce Duet Enterprise, a collaboration solution that connects SharePoint 2010 and SAP applications

    SAP and Microsoft announce Duet Enterprise, a collaboration solution that connects SharePoint 2010 and SAP applications. SAP and Microsoft have teamed up in a joint program to develop a portfolio of solutions and "reach more customers". The joint program has been named "SAP-Microsoft Unite Partner Connection". Kurt DelBene, President of the Microsoft Office Division, and Vishal Sikka, member of SAP's Executive Board, announced Duet Enterprise and detailed how the two companies are converging their strategies in the software space. The goal is to offer greater added value...

    Read the article

  • How do I log back into a Windows Server 2003 guest OS after Hyper-V integration services installs and breaks my domain logins?

    - by Warren P
    After installing Hyper-V integration services, I appear to have a problem with logging in to my Windows Server 2003 virtual machine. Incorrect passwords and logins give the usual error message, but a correct login/password gives me this message: Windows cannot connect to the domain, either because the domain controller is down or otherwise unavailable, or because your computer account was not found. Please try again later. If this message continues to appear, contact your system administrator for assistance. Nothing pleases me more than Microsoft telling me (the ersatz system administrator) to contact my system administrator for help, when I suspect that I'm hooped. The virtual machine has a valid network connection, and has decided to invalidate all my previous logins on this account, so I can't log in and remotely fix anything, and I can't remotely connect to it from outside either. This appears to be a catch-22. Unfortunately I don't know any non-domain local logins for this virtual machine, so I suspect I am basically hooped, or that I need Ophcrack. Is there any alternative to Ophcrack? Second, related question: I used Disk2VHD to do the conversion, and I could log in fine several times, until after the Hyper-V integration services were installed; then suddenly this happened and I can't log in now - was there something I did wrong? I can't get networking working inside the VM BEFORE I install integration services, and at the very moment that integration services is being installed, I'm getting locked out like this. I probably should always know the local login of any machine I'm upgrading so I don't get stuck like this in the future... great. Now I am reminded again of this.

    Read the article

  • Clouds Everywhere But not a Drop of Rain – Part 3

    - by sxkumar
    I was sharing with you how a broad-based transformation such as cloud will increase agility and efficiency of an organization if process re-engineering is part of the plan. I have also stressed the key enterprise requirements such as "broad and deep solutions", "running your mission critical applications" and "automated and integrated set of capabilities". Let me walk you through some key cloud attributes such as "elasticity" and "self-service" and what they mean for an enterprise class cloud. I will also talk about how we at Oracle have taken a very enterprise centric view to developing cloud solutions and how our products have been specifically engineered to address enterprise cloud needs. Cloud Elasticity and Enterprise Applications Requirements Easy and quick scalability for a short period of time is the signature of cloud based solutions. It is this elasticity that allows you to dynamically redistribute your resources according to business priorities, helps increase your overall resource utilization, and reduces operational costs by allowing you to get the most out of your existing investment. Most public clouds offer an instant provisioning mechanism for compute power (CPU, RAM, disk); customers pay for the instance-hours (and bandwidth) they use, adding computing resources at peak times and removing them when they are no longer needed. This type of "just-in-time" serving of compute resources is well known for mid-tier "stateless" servers such as web application servers and web servers that just need another machine to start and run on, but what does it really mean for an enterprise application and its underlying data? Most enterprise applications are not quite as "stateless", and justifiably so. As such, how do you take advantage of cloud elasticity and make it relevant for your enterprise apps? This is where Cloud meets Grid Computing. At Oracle, we have invested an enormous amount of time, energy and resources in creating enterprise grid solutions. All our technology products offer built-in elasticity via clustering and dynamic scaling. With products like Real Application Clusters (RAC), Automatic Storage Management, WebLogic Clustering, and Coherence In-Memory Grid, we allow all your enterprise applications to benefit from cloud elasticity - both vertically and horizontally - without requiring any application changes. A number of technology vendors take a rather simplistic route of starting up additional VMs, or removing unneeded ones, as the "Cloud Scale-Out" solution. While this may work for stateless mid-tier servers, where load balancers can handle the addition and removal of instances transparently, following a similar approach for the database tier - often called "database sharding" - requires significant application modification and typically does not work with off-the-shelf packaged applications. Technologies like Oracle Database Real Application Clusters, Automatic Storage Management, etc., on the other hand, bring the benefits of incremental scalability and on-demand elasticity to ANY application by providing a simplified abstraction layer where the application does not need to deal with data spread over multiple database instances. Rather, they just talk to a single database and the database software takes care of aggregating resources across multiple hardware components. It is technologies like these that truly make a cloud solution relevant for enterprises.
    For customers who are looking for a next generation hardware consolidation platform, our engineered systems (e.g. Exadata, Exalogic) not only provide an incredible amount of performance and capacity, they also reduce data center complexity and simplify operations. Assemble, Deploy and Manage Enterprise Applications for Cloud Products like Oracle Virtual Assembly Builder (OVAB) resolve the complex problem of bringing cloud speed to complex multi-tier applications. With assemblies, you can not only provision all components of a multi-tier application and wire them together at the push of a button; other aspects of the application lifecycle, such as real-time application testing, scale-up/scale-down, performance and availability monitoring, etc., are also automated using Oracle Enterprise Manager. An essential criterion for an enterprise cloud to succeed is the ability to ensure business service levels, especially when business users have full visibility into usage costs via a "showback" or a "chargeback". With Oracle Enterprise Manager 12c, we have created the most comprehensive cloud management solution in the industry, one that is capable of managing business service levels "applications-to-disk" in an enterprise private cloud - all from a single console. It is the only cloud management platform in the industry that allows you to deliver infrastructure, platform and application cloud services out of the box. Moreover, it offers integrated and complete lifecycle management of the cloud - including planning and set up, service delivery, operations management, metering and chargeback, etc. Sounds unbelievable? Well, just watch this space for more details on how Oracle Enterprise Manager 12c is the nerve center of Oracle Cloud! Our cloud solution portfolio is also the broadest and deepest in the industry - covering public, private and hybrid clouds, and infrastructure, platform and applications clouds. It is no coincidence, therefore, that the Oracle Cloud today offers the most comprehensive set of public cloud services in the industry. And in large part, this has been made possible thanks to our years of investment in creating cloud-enabling technologies. Summary But the intent of this blog post isn't to dwell on how great our solutions are (these are just some examples to illustrate how we at Oracle have approached this problem space). Rather, it is to help you ask the right questions before you embark on your cloud journey. So to summarize, here are the key takeaways. It is critical that you are clear on why you are building the cloud. Successful organizations keep business benefits as the first and foremost cloud objective. On the other hand, those who approach this purely as a technology project are more likely to fail. Think about where you want to be in 3-5 years before you get started. Your long-term objectives should determine what your first step ought to be. As obvious as it may seem, more people than not make the first move without knowing where they are headed. Don't make the mistake of equating cloud to virtualization and Infrastructure-as-a-Service (IaaS). Spinning up a VM on demand will give some short-term relief to your IT staff but is unlikely to solve your larger business problems. As such, even if IaaS is your first step towards a more comprehensive cloud, plan the roadmap around those higher-level services before you begin. And ask your vendors how they are going to be your partners in this journey.
    Capabilities like self-service access and chargeback/showback are absolutely critical if you really expect your cloud to be transformational. Your business won't see the full benefits of the cloud until it empowers them with the same kind of control and transparency that they are used to when using a public cloud service. Evaluate the benefits of integration, as opposed to blindly following a best-of-breed strategy. Integration is a huge challenge, and more so in a cloud environment. There are enormous costs associated with stitching a solution out of disparate components, and even more in maintaining it. Hope you found these ideas helpful. Looking forward to hearing your thoughts and experiences.

    Read the article

  • System.IO.FileLoadException

    - by Raj G
    Hi All, I have got this error when using Enterprise Library 3.1, May 2007 version. We are developing a product and have a common lib directory beneath the Subversion Trunk directory (\Trunk\Lib\) into which we put all the third-party DLLs. Inside this we have Microsoft\EnterpriseLibrary\v3.1, in which we have copied all the DLLs from \Program Files\Microsoft Enterprise Library May2007\bin. Everything was working properly until one of the developers installed the source code on his machine. There were some DLLs copied at the end of the source code installation, and once that was done, he is not able to run the project anymore. He always gets this error: 'Microsoft.Practices.EnterpriseLibrary.Data, Version=3.1.0.0, Culture=neutral, PublicKeyToken=null' or one of its dependencies. The located assembly's manifest definition does not match the assembly reference. (Exception from HRESULT: 0x80131040). What is the problem here? I thought that when the source code was installed it was just supposed to build everything and copy it into the bin directory within the source code parent directory. Also, we have copied the DLLs from the Microsoft Enterprise Library May 2007\bin directory into our product development directory and referenced them in our project with the Copy Local flag set to true. Can anyone help me out here? RK

    Read the article

  • Problems upgrading VB.Net 2008 project into VS2010

    - by Brett Rigby
    Hi there, I have been upgrading several different VS2008 projects into VS2010 and have found a problem with VB.Net projects when they are converted. Once converted, the .vbproj files have changed from this in VS2008: <PropertyGroup Condition=" '$(Configuration)|$(Platform)' == 'Debug|AnyCPU' "> <DebugSymbols>true</DebugSymbols> <DebugType>full</DebugType> <DefineDebug>true</DefineDebug> <DefineTrace>true</DefineTrace> <OutputPath>bin\Debug\</OutputPath> <DocumentationFile>CustomerManager.xml</DocumentationFile> <WarningsAsErrors>41999,42016,42017,42018,42019,42020,42021,42022,42032,42036</WarningsAsErrors> </PropertyGroup> To this in VS2010: <PropertyGroup Condition=" '$(Configuration)|$(Platform)' == 'Debug|AnyCPU' "> <DebugSymbols>true</DebugSymbols> <DebugType>full</DebugType> <DefineDebug>true</DefineDebug> <DefineTrace>true</DefineTrace> <OutputPath>bin\Debug\</OutputPath> <DocumentationFile>CustomerManager.xml</DocumentationFile> <NoWarn>42353,42354,42355</NoWarn> <WarningsAsErrors>41999,42016,42017,42018,42019,42020,42021,42022,42032,42036</WarningsAsErrors> </PropertyGroup> The main difference is that in the VS2010 version, the <NoWarn> element with the values 42353,42354,42355 has been added; inside the IDE, this manifests itself as the following setting in the Project Properties | Compile section: "Function returning intrinsic value type without return value" = None. This isn't a problem when building code inside Visual Studio 2010, but when trying to build the code through our continuous integration scripts, it fails with the following errors: [msbuild] vbc : Command line error BC2026: warning number '42353' for the option 'nowarn' is either not configurable or not valid [msbuild] vbc : Command line error BC2026: warning number '42354' for the option 'nowarn' is either not configurable or not valid [msbuild] vbc : Command line error BC2026: warning number '42355' for the option 'nowarn' is either not configurable or not valid I couldn't find anything on Google for these messages, which is strange, as I am trying to find out why this is happening. Any suggestions as to why Visual Studio 2010's conversion wizard is doing this?
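    If the build server is running an older VB compiler that does not recognise warning IDs 42353-42355, one possible workaround (a sketch, not a confirmed fix) is to strip the element from the converted project, or to override the property on the MSBuild command line; the project file name below is inferred from the DocumentationFile above and may not match your solution:

      rem clear the NoWarn property for this build only (global properties override project properties)
      msbuild CustomerManager.vbproj /p:NoWarn=""

    Alternatively, delete the <NoWarn>42353,42354,42355</NoWarn> line from the .vbproj so the IDE and the CI build compile with the same settings.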

    Read the article

  • Mocking a concrete class: templates and avoiding conditional compilation

    - by AshirusNW
    I'm trying to test a concrete object with this sort of structure. class Database { public: Database(Server server) : server_(server) {} int Query(const char* expression) { server_.Connect(); return server_.ExecuteQuery(); } private: Server server_; }; i.e. it has no virtual functions, let alone a well-defined interface. I want a fake database that calls mock services for testing. Even worse, I want the same code to be either built against the real version or the fake, so that the same testing code can both: Test the real Database implementation - for integration tests Test the fake implementation, which calls mock services To solve this, I'm using a templated fake, like this: #ifndef INTEGRATION_TESTS class FakeDatabase { public: FakeDatabase() : realDb_(mockServer_) {} int Query(const char* expression) { MOCK_EXPECT_CALL(mockServer_, Query, 3); return realDb_.Query(); } private: // in non-INTEGRATION_TESTS builds, Server is a mock Server with // extra testing methods that allows mocking Server mockServer_; Database realDb_; }; #endif template <class T> class TestDatabaseContainer { public: int Query(const char* expression) { int result = database_.Query(expression); std::cout << "LOG: " << result << endl; return result; } private: T database_; }; Edit: Note the fake Database must call the real Database (but with a mock Server). Now to switch between them I'm planning the following test framework: class DatabaseTests { public: #ifdef INTEGRATION_TESTS typedef TestDatabaseContainer<Database> TestDatabase ; #else typedef TestDatabaseContainer<FakeDatabase> TestDatabase ; #endif TestDatabase& GetDb() { return _testDatabase; } private: TestDatabase _testDatabase; }; class QueryTestCase : public DatabaseTests { public: void TestStep1() { ASSERT(GetDb().Query(static_cast<const char *>("")) == 3); return; } }; I'm not a big fan of that compile-time switching between the real and the fake. So, my question is: is there a better way of switching between Database and FakeDatabase? For instance, is it possible to do it at runtime in a clean fashion? I'd like to avoid #ifdefs. Also, if anyone has a better way of making a fake class that mimics a concrete class, I'd appreciate it. I don't want to have templated code all over the actual test code (QueryTestCase class). Feel free to critique the code style itself, too. You can see a compiled version of this code on codepad.
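    On the runtime-switching question, one possible direction (a sketch only - the Server/Database/FakeDatabase classes below are simplified stand-ins for the ones above, and the mock assertions are omitted) is to put a tiny interface in front of the templated implementations and pick one from a factory at runtime:

      #include <iostream>
      #include <memory>
      #include <utility>

      struct Server { void Connect() {} int ExecuteQuery() { return 3; } };

      class Database {                       // simplified stand-in for the real class
      public:
          explicit Database(Server server) : server_(server) {}
          int Query(const char*) { server_.Connect(); return server_.ExecuteQuery(); }
      private:
          Server server_;
      };

      class FakeDatabase {                   // simplified stand-in for the fake
      public:
          int Query(const char*) { return 3; /* mock expectations would go here */ }
      };

      // Small interface the test code talks to, so the choice can happen at runtime.
      struct IDatabase {
          virtual ~IDatabase() {}
          virtual int Query(const char* expression) = 0;
      };

      // Adapts any concrete database type without it having to share a base class.
      template <class T>
      class DatabaseAdapter : public IDatabase {
      public:
          explicit DatabaseAdapter(T db) : db_(std::move(db)) {}
          int Query(const char* expression) { return db_.Query(expression); }
      private:
          T db_;
      };

      // Runtime factory: a flag (command line, environment, config file) picks the implementation.
      std::unique_ptr<IDatabase> MakeDatabase(bool integrationTests) {
          if (integrationTests)
              return std::unique_ptr<IDatabase>(new DatabaseAdapter<Database>(Database(Server())));
          return std::unique_ptr<IDatabase>(new DatabaseAdapter<FakeDatabase>(FakeDatabase()));
      }

      int main(int argc, char**) {
          std::unique_ptr<IDatabase> db = MakeDatabase(argc > 1);   // same test code either way
          std::cout << "LOG: " << db->Query("") << std::endl;
          return 0;
      }

    The test fixture then holds a std::unique_ptr<IDatabase> instead of a TestDatabaseContainer<T>, so choosing real versus fake becomes a command-line or configuration switch rather than an #ifdef.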

    Read the article

  • New Book! SQL Server 2012 Integration Services Design Patterns!

    - by andyleonard
    SQL Server 2012 Integration Services Design Patterns has been released! The book is done and available thanks to the hard work and dedication of a great crew: Michelle Ufford ( Blog | @sqlfool ) – co-author Jessica M. Moss ( Blog | @jessicammoss ) – co-author Tim Mitchell ( Blog | @tim_mitchell ) – co-author Matt Masson ( Blog | @mattmasson ) – co-author Donald Farmer ( Blog | @donalddotfarmer ) – foreword David Stein ( Blog | @made2mentor ) – technical editing Mark Powers – editing Jonathan Gennick...(read more)

    Read the article

  • Node.js: native Windows integration at last; the event-driven JavaScript framework arrives on the Azure cloud

    Node.js: full native Windows integration at last; the event-driven JavaScript framework arrives on the Azure cloud. Update of November 9, 2011, by Idelways. Last June, Microsoft announced its support for the Node.js project, the open-source, event-driven JavaScript framework (see previous coverage). The company's collaboration with Joyent, which sponsors the framework's development team, has just produced version 0.6.0 of Node, which gains full native support on the Windows platform. This third stable release of Node.js uses the Windows "I/O Completion Ports" API to ...

    Read the article

  • Visual Studio 2010 Service Pack 1 is available, with the Team Foundation Server-Project Server Integration Feature Pack

    Visual Studio 2010 Service Pack 1 is available, with the Team Foundation Server - Project Server Integration Feature Pack. Update of March 10, 2011. Available as a beta since December 2010, Visual Studio 2010 Service Pack 1 (SP1) is now available to developers in its final version. This release fixes several bugs found in the beta and adds features for better support, as well as IntelliTrace for SharePoint. IntelliTrace improves debugging by letting developers follow events during the process, instead of having to ...

    Read the article

  • FreeBSD 8.2 improves data encryption and ZFS file system integration; FreeBSD 7.4 also available

    FreeBSD 8.2 improves data encryption and ZFS file system integration; FreeBSD 7.4 is also available. Update of February 28, 2011. Two new versions of the free UNIX operating system FreeBSD have just been released. The first is FreeBSD 8.2, a release that brings two major new features: improved ZFS support and new disk encryption capabilities. As a reminder, ZFS is a file system that originated in the now-defunct OpenSolaris project and is today under Oracle's stewardship. Note, however, that this particularly powerful file system is ...

    Read the article

  • SOFTEAM hosts two free seminars on continuous integration and shares its experience with these practices on April 17 and 18

    SOFTEAM is hosting two free seminars on continuous integration; the software technology consulting firm shares its experience with these practices on April 17 and 18. SOFTEAM, a specialized consulting and services company, is one of the voting members of the OMG (Object Management Group). It is recognized for its expertise in new software technologies (object technologies, service-oriented architectures and agile development) and works on projects, applications and information systems for major accounts in fields as varied as banking, finance, media, the web and services. As part of this work, the teams...

    Read the article

  • Best Method to SFTP or FTPS Files via SSIS

    - by Registered User
    What is the best method using SSIS (SQL Server Integration Services) to upload a file to either a remote SFTP (secure FTP with the SSH2 protocol) or FTPS (FTP over SSL) site? I've used the following methods, but each has shortcomings I would like to avoid:
    COZYROC LIBRARY Method: Install the CozyRoc library on each development and production server and use the SFTP task to upload the files. Pros: Easy to use. It looks, smells, and feels like a normal SSIS task. SSIS also recognizes the password as sensitive information and allows you all the normal options for protecting the sensitive information instead of just storing it in clear text in a non-secure manner. Works well with other SSIS tasks such as ForEach Loop Containers. Errors out when uploads and downloads fail. Works well when you don't know the names of the files on the remote FTP site to download, or when you won't know the name of the file to upload until run-time. Cons: Costs money to license in a production environment. Makes you dependent upon the vendor to update their libraries between each version. Although they already have a 2008 version, this caused me a problem during the CTPs of 2008. Requires installing the libraries on each development and production machine.
    COMMAND LINE SFTP PROGRAM Method: Install a free command-line SFTP application such as PuTTY and execute it either by running a batch file or an operating system process task. Pros: Free, free, and free. You can be sure it is secure if you are using PuTTY, since numerous GUI FTP clients appear to use PuTTY under the covers. You definitely know you are using SSH2 and not SSH. Cons: The two command-line utilities I tried (PuTTY and Cygwin) required storing the SFTP password in a non-secure location. I haven't found a good way to capture failures or errors when uploading files. The process doesn't look and smell like SSIS. Most of the code is encapsulated in text files instead of SSIS itself. Difficult to use if you don't know the exact name of the file you are uploading or downloading.
    A 3RD PARTY C# or VB.NET LIBRARY Method: Install an SFTP or FTPS library and use a Script Task that references the library to upload the files. (I've never tried this, so I'm going to guess at the pros and cons.) Pros: Probably easy to capture errors. Should work well with variables, so it would probably be easy to use even when you don't know the exact name of the file you are uploading or downloading. Cons: It's a script task combined with .NET libraries. If you are using SSIS, then you are probably more comfortable with SSIS tasks than .NET code. Script tasks are also difficult to troubleshoot since they don't have the same debugging tools and features as regular .NET projects. Creates a dependency on 3rd-party code that may not work between different versions of SQL Server. To be fair, it is probably MORE likely to work between different versions of SQL Server than a 3rd-party SSIS task library. Another huge con -- I haven't found a free C# or VB.NET library that does this as of yet. So if anyone knows of one, then please let me know!
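    For the command-line route, a minimal sketch of driving PuTTY's psftp from a batch file or an Execute Process task (the host, credentials and file names are made up, and the password still sits in a readable script, which is exactly the weakness noted above):

      rem upload.bat - run psftp non-interactively with a scripted command file
      "C:\Program Files\PuTTY\psftp.exe" etluser@sftp.example.com -pw s3cret -b upload.scr -batch

      rem upload.scr - commands psftp executes
      cd /inbound
      put C:\exports\daily_extract.csv
      quit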

    Read the article

  • Any tool to make git build every commit to a branch in a separate repository?

    - by Wayne
    A git tool that meets the specs below is needed. Does one already exist? If not, I will create a script and make it available on GitHub for others to use or contribute to. Is there a completely different and better way to solve the need to build/test every commit to a branch in a git repository? Not just the latest, but each one back to a certain starting point. Background: Our development environment uses a separate continuous integration server, which is wonderful. However, it is still necessary to do full builds locally on each developer's PC to make sure the commit won't "break the build" when pushed to the CI server. Unfortunately, with auto unit tests, those builds force the developer to wait 10 or 15 minutes for a build every time. To solve this we have set up a "mirror" git repository on each developer PC. So we develop in the main repository, but any time a local full build is needed we run a couple of commands in the mirror repository to fetch, check out the commit we want to build, and build. It works extremely well, so we can continue working in the main one with the build going in parallel. There's only one main concern now. We want to make sure every single commit builds and tests fine. But we often get busy and neglect to build several fresh commits. Then, if the build fails, you have to do a bisect or manually build each interim commit to figure out which one broke. Requirements for this tool: The tool will look at another repo, origin by default, fetch, and compare all commits that are in branches to 2 lists of commits. One list must hold successfully built commits and the other lists commits that failed. It identifies any commit or commits not yet in either list and begins to build them in a loop in the order that they were committed. It stops on the first one that fails. The tool appropriately adds each commit to either the successful or failed list after it has attempted to build each one. The tool will ignore any "legacy" commits which are prior to the oldest commit in the success list. This logic makes the starting point possible in the next point. Starting point: The tool builds a specific commit so that, if successful, it gets added to the success list. If it is the earliest commit in the success list, it becomes the "starting point" so that none of the commits prior to it are examined for builds. Only linear tree support? Much like bisect, this tool works best on a commit tree which is, at least from its starting point, linear without any merges. That is, it should be a tree which was built and updated entirely via rebase and fast-forward commits. If it fails on one commit in a branch it will stop without building the rest that followed after that one. Instead it will just move on to another branch, if any. The tool must do these steps once by default but allow a parameter to loop, with an option to set how many seconds between loops. Other tools like Hudson or CruiseControl could do more fancy scheduling options. The tool must have good defaults but allow optional control. Which repo? origin by default. Which branches? All of them by default. What tool? By default, an executable file to be provided by the user named "buildtest", "buildtest.sh", "buildtest.cmd", or "buildtest.exe" in the root folder of the repository. Loop delay? Run once by default, with an option to loop after a number of seconds between iterations.
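    A rough sketch of the core loop such a script could use, assuming a linear history and the "buildtest" convention described above (the starting-point ref and branch name are placeholders, and the pass/fail bookkeeping lists are left out):

      #!/bin/sh
      # Build every commit after START up to the tip of BRANCH, oldest first,
      # stopping at the first failure.
      START=last-known-good        # tag or SHA marking the starting point
      BRANCH=origin/master

      git fetch origin
      for c in $(git rev-list --reverse "$START..$BRANCH"); do
          git checkout --quiet "$c" || exit 1
          if ./buildtest; then
              echo "PASS $c"       # would be appended to the success list
          else
              echo "FAIL $c"       # would be appended to the failed list
              exit 1
          fi
      done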

    Read the article

  • Configuring CruiseControl.NET with SourceSafe - Unable to load array item 'executable'

    - by albert
    Hi all, I'm trying to create a continuous integration environment. To do so, I've used a guide that can be found at http://www.15seconds.com/issue/040621.htm. In this step-by-step guide, the goal is to create a CI environment with CCNet, NAnt, NUnit, NDoc, FxCop and SourceSafe. I've been able to create my build by using the command prompt (despite the different version issues). The problem has come with the configuration of ccnet.config. I've made some changes because of the new versions, but I'm still getting errors when starting the CCNet server. Can anyone help me fix this issue, or point me to a guide covering this scenario? The error that I'm getting: Unable to instantiate CruiseControl projects from configuration document. Configuration document is likely missing Xml nodes required for properly populating CruiseControl configuration. Unable to load array item 'executable' - Cannot convert from type System.String to ThoughtWorks.CruiseControl.Core.ITask for object with value: "\DevTools\nant\bin\NAnt.exe" Xml: E:\DevTools\nant\bin\NAnt.exe My CCNet config file is below: <cruisecontrol> <project name="BuildingSolution"> <webURL>http://localhost/ccnet</webURL> <modificationDelaySeconds>10</modificationDelaySeconds> <triggers> <intervaltrigger name="continuous" seconds="60" /> </triggers> <sourcecontrol type="vss" autoGetSource="true"> <ssdir>E:\VSS\</ssdir> <executable>C:\Program Files\Microsoft Visual SourceSafe\SS.EXE</executable> <project>$/CCNet/slnCCNet.root/slnCCNet</project> <username>Albert</username> <password></password> </sourcecontrol> <prebuild type="nant"> <executable>E:\DevTools\nant\bin\NAnt.exe</executable> <buildFile>E:\Builds\buildingsolution\WebForm.build</buildFile> <logger>NAnt.Core.XmlLogger</logger> <buildTimeoutSeconds>300</buildTimeoutSeconds> </prebuild> <tasks> <nant> <executable>E:\DevTools\nant\bin\nant.exe</executable> <nologo>true</nologo> <buildFile>E:\Builds\buildingsolution\WebForm.build</buildFile> <logger>NAnt.Core.XmlLogger</logger> <targetList> <target>build</target> </targetList> <buildTimeoutSeconds>6000</buildTimeoutSeconds> </nant> </tasks> <publishers> <merge> <files> <file>E:\Builds\buildingsolution\latest\*-results.xml</file> </files> </merge> <xmllogger /> </publishers> </project> </cruisecontrol>
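    One thing worth checking (a guess based on the error text, not a verified fix): in later CruiseControl.NET releases the <prebuild> element holds a list of task elements, just like <tasks>, rather than taking type="nant" with bare child settings, which would explain why the raw <executable> string cannot be converted to an ITask. A sketch of the nested form, reusing the paths from the config above:

      <prebuild>
        <nant>
          <executable>E:\DevTools\nant\bin\NAnt.exe</executable>
          <buildFile>E:\Builds\buildingsolution\WebForm.build</buildFile>
          <logger>NAnt.Core.XmlLogger</logger>
          <buildTimeoutSeconds>300</buildTimeoutSeconds>
        </nant>
      </prebuild>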

    Read the article

  • Request header is too large

    - by stck777
    I found several IllegalStateException exceptions in the logs: [#|2009-01-28T14:10:16.050+0100|SEVERE|sun-appserver2.1|javax.enterprise.system.container.web|_ThreadID=26;_ThreadName=httpSSLWorkerThread-80-53;_RequestID=871b8812-7bc5-4ed7-85f1-ea48f760b51e;|WEB0777: Unblocking keep-alive exception java.lang.IllegalStateException: PWC4662: Request header is too large at org.apache.coyote.http11.InternalInputBuffer.fill(InternalInputBuffer.java:740) at org.apache.coyote.http11.InternalInputBuffer.parseHeader(InternalInputBuffer.java:657) at org.apache.coyote.http11.InternalInputBuffer.parseHeaders(InternalInputBuffer.java:543) at com.sun.enterprise.web.connector.grizzly.DefaultProcessorTask.parseRequest(DefaultProcessorTask.java:712) at com.sun.enterprise.web.connector.grizzly.DefaultProcessorTask.doProcess(DefaultProcessorTask.java:577) at com.sun.enterprise.web.connector.grizzly.DefaultProcessorTask.process(DefaultProcessorTask.java:831) at com.sun.enterprise.web.connector.grizzly.DefaultReadTask.executeProcessorTask(DefaultReadTask.java:341) at com.sun.enterprise.web.connector.grizzly.DefaultReadTask.doTask(DefaultReadTask.java:263) at com.sun.enterprise.web.connector.grizzly.DefaultReadTask.doTask(DefaultReadTask.java:214) at com.sun.enterprise.web.portunif.PortUnificationPipeline$PUTask.doTask(PortUnificationPipeline.java:380) at com.sun.enterprise.web.connector.grizzly.TaskBase.run(TaskBase.java:265) at com.sun.enterprise.web.connector.grizzly.ssl.SSLWorkerThread.run(SSLWorkerThread.java:106) |#] Does anybody know of configuration changes to fix this?

    Read the article

  • What label of tests are BizUnit tests?

    - by charlie.mott
    BizUnit is defined as a "Framework for Automated Testing of Distributed Systems". However, I've never seen a catchy label to describe what sort of tests we create using this framework. They are not really “Unit Tests”, that's for sure. "Integration Tests" might be a good definition, but I want a label that clearly separates it from the manual "System Integration Testing" phase of a project where real instances of the integrated systems are used. Among some colleagues, we brainstormed some suggestions: Automated Integration Tests, Stubbed Integration Tests, Sandbox Integration Tests, Localised Integration Tests. All give a good view of the sorts of tests that are being done. I think "Stubbed Integration Tests" is the most catchy and descriptive, so I will use that until someone comes up with a better idea.

    Read the article

  • Trying to get dynamic content hole-punched through Magento's Full Page Cache

    - by rlflow
    I am using Magento Enterprise 1.10.1.1 and need to get some dynamic content on our product pages. I am inserting the current time in a block to quickly see if it is working, but can't seem to get through full page cache. I have tried a variety of implementations found here: http://tweetorials.tumblr.com/post/10160075026/ee-full-page-cache-hole-punching http://oggettoweb.com/blog/customizations-compatible-magento-full-page-cache/ http://magentophp.blogspot.com/2011/02/magento-enterprise-full-page-caching.html (http://www.exploremagento.com/magento/simple-custom-module.php - custom module) Any solutions, thoughts, comments, advice is welcome. here is my code: app/code/local/Fido/Example/etc/config.xml <?xml version="1.0"?> <config> <modules> <Fido_Example> <version>0.1.0</version> </Fido_Example> </modules> <global> <blocks> <fido_example> <class>Fido_Example_Block</class> </fido_example> </blocks> </global> </config> app/code/local/Fido/Example/etc/cache.xml <?xml version="1.0" encoding="UTF-8"?> <config> <placeholders> <fido_example> <block>fido_example/view</block> <name>example</name> <placeholder>CACHE_TEST</placeholder> <container>Fido_Example_Model_Container_Cachetest</container> <cache_lifetime>86400</cache_lifetime> </fido_example> </placeholders> </config> app/code/local/Fido/Example/Block/View.php <?php /** * Example View block * * @codepool Local * @category Fido * @package Fido_Example * @module Example */ class Fido_Example_Block_View extends Mage_Core_Block_Template { private $message; private $att; protected function createMessage($msg) { $this->message = $msg; } public function receiveMessage() { if($this->message != '') { return $this->message; } else { $this->createMessage('Hello World'); return $this->message; } } protected function _toHtml() { $html = parent::_toHtml(); if($this->att = $this->getMyCustom() && $this->getMyCustom() != '') { $html .= '<br />'.$this->att; } else { $now = date('m-d-Y h:i:s A'); $html .= $now; $html .= '<br />' ; } return $html; } } app/code/local/Fido/Example/Model/Container/Cachetest.php <?php class Fido_Example_Model_Container_Cachetest extends Enterprise_PageCache_Model_Container_Abstract { protected function _getCacheId() { return 'HOMEPAGE_PRODUCTS' . md5($this->_placeholder->getAttribute('cache_id') . $this->_getIdentifier()); } protected function _renderBlock() { $blockClass = $this->_placeholder->getAttribute('block'); $template = $this->_placeholder->getAttribute('template'); $block = new $blockClass; $block->setTemplate($template); return $block->toHtml(); } protected function _saveCache($data, $id, $tags = array(), $lifetime = null) { return false; } } app/design/frontend/enterprise/[mytheme]/template/example/view.phtml <?php /** * Fido view template * * @see Fido_Example_Block_View * */ ?> <div> <?php echo $this->receiveMessage(); ?> </span> </div> snippet from app/design/frontend/enterprise/[mytheme]/layout/catalog.xml <reference name="content"> <block type="catalog/product_view" name="product.info" template="catalog/product/view.phtml"> <block type="fido_example/view" name="product.info.example" as="example" template="example/view.phtml" />

    Read the article

  • jms unresolved message-destination-ref

    - by portoalet
    Hi, I am using NetBeans 6.8 and GlassFish v3, and am trying to get a simple JMS application to work. I got this: com.sun.enterprise.container.common.spi.util.InjectionException: Exception attempting to inject Unresolved Message-Destination-Ref jms/[email protected]@null into class enterpriseapplication4.Main Code: public class Main { @Resource(name = "jms/myQueue") private static Topic myQueue; @Resource(name = "jms/myFactory") private static ConnectionFactory myFactory; ... // the rest is just boilerplate created by NetBeans } In my GlassFish v3 admin console, I have jms/myFactory as my ConnectionFactory and jms/myQueue as my Destination Resource. What am I missing? Full stack: WARNING: enterprise.deployment.backend.invalidDescriptorMappingFailure com.sun.enterprise.container.common.spi.util.InjectionException: Exception attempting to inject Unresolved Message-Destination-Ref jms/[email protected]@null into class enterpriseapplication4.Main at com.sun.enterprise.container.common.impl.util.InjectionManagerImpl._inject(InjectionManagerImpl.java:614) at com.sun.enterprise.container.common.impl.util.InjectionManagerImpl.inject(InjectionManagerImpl.java:384) at com.sun.enterprise.container.common.impl.util.InjectionManagerImpl.injectClass(InjectionManagerImpl.java:210) at com.sun.enterprise.container.common.impl.util.InjectionManagerImpl.injectClass(InjectionManagerImpl.java:202) at org.glassfish.appclient.client.acc.AppClientContainer$ClientMainClassSetting.getClientMainClass(AppClientContainer.java:599) at org.glassfish.appclient.client.acc.AppClientContainer.getMainMethod(AppClientContainer.java:498) at org.glassfish.appclient.client.acc.AppClientContainer.completePreparation(AppClientContainer.java:397) at org.glassfish.appclient.client.acc.AppClientContainer.prepare(AppClientContainer.java:311) at org.glassfish.appclient.client.AppClientFacade.prepareACC(AppClientFacade.java:264) at org.glassfish.appclient.client.acc.agent.AppClientContainerAgent.premain(AppClientContainerAgent.java:75) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) at java.lang.reflect.Method.invoke(Method.java:597) at sun.instrument.InstrumentationImpl.loadClassAndStartAgent(InstrumentationImpl.java:323) at sun.instrument.InstrumentationImpl.loadClassAndCallPremain(InstrumentationImpl.java:338) Caused by: javax.naming.NamingException: Lookup failed for 'java:comp/env/jms/myQueue' in SerialContext targetHost=localhost,targetPort=3700 [Root exception is javax.naming.NameNotFoundException: No object bound for java:comp/env/jms/myQueue [Root exception is java.lang.NullPointerException]] at com.sun.enterprise.naming.impl.SerialContext.lookup(SerialContext.java:442) at javax.naming.InitialContext.lookup(InitialContext.java:392) at com.sun.enterprise.container.common.impl.util.InjectionManagerImpl._inject(InjectionManagerImpl.java:513) ... 15 more Caused by: javax.naming.NameNotFoundException: No object bound for java:comp/env/jms/myQueue [Root exception is java.lang.NullPointerException] at com.sun.enterprise.naming.impl.JavaURLContext.lookup(JavaURLContext.java:218) at com.sun.enterprise.naming.impl.SerialContext.lookup(SerialContext.java:428) ... 
17 more Caused by: java.lang.NullPointerException at javax.naming.InitialContext.getURLScheme(InitialContext.java:269) at javax.naming.InitialContext.getURLOrDefaultInitCtx(InitialContext.java:318) at javax.naming.InitialContext.lookup(InitialContext.java:392) at com.sun.enterprise.naming.util.JndiNamingObjectFactory.create(JndiNamingObjectFactory.java:75) at com.sun.enterprise.naming.impl.GlassfishNamingManagerImpl.lookup(GlassfishNamingManagerImpl.java:688) at com.sun.enterprise.naming.impl.GlassfishNamingManagerImpl.lookup(GlassfishNamingManagerImpl.java:657) at com.sun.enterprise.naming.impl.JavaURLContext.lookup(JavaURLContext.java:148) ... 18 more Regards
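    One detail worth checking (a guess based on the stack trace, not a confirmed fix): @Resource(name = "...") only declares a java:comp/env entry that still has to be mapped to the physical resource, whereas mappedName points straight at the JNDI name created in the admin console; the field is also typed Topic while the resource is named like a queue. A sketch of the injection under those assumptions:

      import javax.annotation.Resource;
      import javax.jms.ConnectionFactory;
      import javax.jms.Queue;

      public class Main {
          // mappedName refers directly to the JNDI names of the admin objects
          @Resource(mappedName = "jms/myFactory")
          private static ConnectionFactory myFactory;

          @Resource(mappedName = "jms/myQueue")
          private static Queue myQueue;   // must match the admin object's type (Queue vs Topic)
          // ...
      }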

    Read the article
