Search Results

Search found 33074 results on 1323 pages for 'database deployment'.


  • Oracle 10g Failover Database - How to fail back?

    - by rrkwells
    I want to know how the failover database concept works after recovery. We have defined our application to connect to a backup database in case the production database fails. If this happens, then all the transactions will be happening on that backup database. Once the production db server is running again, then how do we make sure the changes made in the backup database will be reflected on the production database? We want to make sure that any changes made while failed over are not lost. We are using Oracle 10g.

    Read the article

  • Event on SQL Server 2008 Disk IO and the new Complex Event Processing (StreamInsight) feature in R2

    - by tonyrogerson
    Allan Mitchell and myself are doing a double act. Allan is becoming one of the leading guys in the UK on StreamInsight and will give an introduction to this new and exciting technology; on top of that I'll be talking about SQL Server Disk IO - well, "Disk" might not be relevant anymore because I'll be talking about SSD and Fusion IO - basically I'll be talking about the underpinnings - making sure you understand and get it right, how to monitor, etc. If you've any specific problems or questions just ping me an email: [email protected]. To register for the event see: http://sqlserverfaq.com/events/217/SQL-Server-and-Disk-IO-File-GroupsFiles-SSDs-FusionIO-InRAM-DBs-Fragmentation-Tony-Rogerson-Complex-Event-Processing-Allan-Mitchell.aspx
    18:15 - SQL Server and Disk IO (Tony Rogerson, SQL Server MVP - Tony's Blog; Tony on Twitter). In this session Tony will talk about RAID levels, how SQL Server writes to and reads from disk, the effect SSD has, and other options for throughput enhancement like Fusion IO. He will look at the effect fragmentation has and how to minimise its impact, look at the file structure of a database, and talk about the benefits multiple files and file groups bring. We will also touch on Database Mirroring, the effect it has on throughput, and how to get a feel for the throughput you should expect.
    19:15 - Break
    19:45 - Complex Event Processing (CEP) (Allan Mitchell, SQL Server MVP - http://sqlis.com/sqlis). StreamInsight is Microsoft's first foray into the world of Complex Event Processing (CEP) and Event Stream Processing (ESP). In this session I want to give an introduction to this technology. I will show how and why it is useful, I will get us used to some new terminology, but best of all I will show just how easy it is to start building your first CEP/ESP application.

    Read the article

  • VSDB to SSDT part 3 : command-line deployment with SqlPackage.exe, replacement for Vsdbcmd.exe

    - by Etienne Giust
    For our continuous integration needs, we use a PowerShell script to handle deployment. A simpler approach would be to have a deployment task embedded within the build process; see the solution provided by Jakob Ehn (a most interesting read which also dives into the “deploying from Visual Studio” specifics): http://geekswithblogs.net/jakob/archive/2012/04/25/deploying-ssdt-projects-with-tfs-build.aspx For our needs, though, clearly separating our build phase from our deployment phase is important. It allows us to instantly deploy old versions, and it is more convenient for continuous integration. So we stick with the PowerShell script approach. With VSDB projects, that script used to call the following command (the vsdbcmd executable was locally available, along with the needed libraries):
    vsdbcmd.exe /a:Deploy /dd /cs:<CONNECTIONSTRING TO TARGET DB> /dsp:SQL /manifest:<PATH TO .deploymanifest FILE>
    To do approximately the same thing with an SSDT-produced file (dacpac), you would call this command on a machine which has VS2012 installed (or SSDT installed, see here: http://msdn.microsoft.com/en-us/library/hh500335%28v=vs.103%29):
    C:\Program Files (x86)\Microsoft SQL Server\110\DAC\bin\SqlPackage.exe /Action:Publish /SourceFile:<PATH TO Database.dacpac FILE> /Profile:<PATH TO .publish.xml FILE>
    And from within a PowerShell script:
    & "C:\Program Files (x86)\Microsoft SQL Server\110\DAC\bin\SqlPackage.exe" /Action:Publish /SourceFile:<PATH TO Database.dacpac FILE> /Profile:<PATH TO .publish.xml FILE>
    The command will consume a publish.xml file where the connection string and the deployment options are specified. You will already be familiar with it if you have done some deployments from Visual Studio; if not, please refer to the above-mentioned article by Jakob Ehn. It is also possible to pass those parameters on the command line. The complete SqlPackage.exe syntax is detailed here: http://msdn.microsoft.com/en-us/library/hh550080%28v=vs.103%29.aspx
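    If the build orchestration is not PowerShell-based, the same call can be wrapped from any scripting language. Below is a minimal Python sketch of such a wrapper; the SqlPackage.exe path matches the one above, while the dacpac and publish-profile paths are placeholder assumptions to adapt to your own build output.
    import subprocess
    # Assumed locations - adjust to your build output and SQL Server / SSDT install.
    SQLPACKAGE = r"C:\Program Files (x86)\Microsoft SQL Server\110\DAC\bin\SqlPackage.exe"
    DACPAC = r"build\Database.dacpac"          # hypothetical path
    PROFILE = r"build\Production.publish.xml"  # hypothetical path
    def deploy_dacpac(sqlpackage=SQLPACKAGE, dacpac=DACPAC, profile=PROFILE):
        """Publish a dacpac with SqlPackage.exe and fail the build on a non-zero exit code."""
        result = subprocess.run(
            [sqlpackage, "/Action:Publish", f"/SourceFile:{dacpac}", f"/Profile:{profile}"],
            capture_output=True, text=True)
        print(result.stdout)
        if result.returncode != 0:
            raise RuntimeError(f"SqlPackage failed:\n{result.stderr}")
    if __name__ == "__main__":
        deploy_dacpac()
    The one design point worth keeping, whatever the wrapper language, is failing the build loudly on a non-zero exit code so a broken publish never goes unnoticed.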

    Read the article

  • 10gR2 Transportable Tablespaces Certified for EBS 11i

    - by Steven Chan
    Database migration across platforms of different "endian" (byte ordering) formats using the Cross Platform Transportable Tablespaces (XTTS) process is now certified for Oracle E-Business Suite Release 11i (11.5.10.2) with Oracle Database 10g Release 2. This process is sometimes also referred to as transportable tablespaces (TTS).
    What is the Cross-Platform Transportable Tablespace Feature?
    The Cross-Platform Transportable Tablespace feature allows users to move a user tablespace across Oracle databases. It's an efficient way to move bulk data between databases. If the source platform and the target platform are of different endianness, then an additional conversion step must be done on either the source or target platform to convert the tablespace being transported to the target format. If they are of the same endianness, then no conversion is necessary and tablespaces can be transported as if they were on the same platform.
    Moving data using transportable tablespaces can be much faster than performing either an export/import or unload/load of the same data. This is because transporting a tablespace only requires the copying of datafiles from source to the destination and then integrating the tablespace structural information. You can also use transportable tablespaces to move both table and index data, thereby avoiding the index rebuilds you would have to perform when importing or loading table data.

    Read the article

  • How Mature is Your Database Change Management Process?

    - by Ben Rees
    How do you get your database schema changes live, onto your production system? As your team of developers and DBAs are working on the changes to the database to support your business-critical applications, how do these updates wend their way through from dev environments, possibly to QA, hopefully through pre-production and eventually to production in a controlled, reliable and repeatable way? In this article, I describe a model we use to try and understand the different stages that customers go through as their database change management processes mature, from the very basic and manual, through to advanced continuous delivery practices. I also provide a simple chart that will help you determine “How mature is our database change management process?”
    This process of managing changes to the database – which all of us who have worked in application/database development have had to deal with in one form or another – is sometimes known as Database Change Management (even if we’ve never used the term ourselves). And it’s a difficult process, often painfully so. Some developers take the approach of “I’ve no idea how my changes get live – I just write the stored procedures and add columns to the tables. It’s someone else’s problem to get this stuff live. I think we’ve got a DBA somewhere who deals with it – I don’t know, I’ve never met him/her”. I know I used to work that way. I worked that way because I assumed that making the updates to production was a trivial task – how hard can it be? Pause the application for half an hour in the middle of the night, copy over the changes to the app and the database, and switch it back on again? Voila! But somehow it never seemed that easy. And it certainly was never that easy for database changes. Why? Because you can’t just overwrite the old database with the new version. Databases have a state – more specifically 4Tb of critical data built up over the last 12 years of running your business, and if your quick hotfix happened to accidentally delete that 4Tb of data, then you’re “Looking for a new role” pretty quickly after the failed release.
    There are a lot of other reasons why a managed database change management process is important for organisations, besides job security, not least:
    - Frequency of releases. Many business managers are feeling the pressure to get functionality out to their users sooner, quicker and more reliably. The new book (which I highly recommend) Lean Enterprise by Jez Humble, Barry O’Reilly and Joanne Molesky provides a great discussion on how many enterprises are having to move towards a leaner, more frequent release cycle to maintain their competitive advantage. It’s no longer acceptable to release once per year, leaving your customers waiting all year for changes they desperately need (and expect).
    - Auditing and compliance. SOX, HIPAA and other compliance frameworks have demanded that companies implement proper processes for managing changes to their databases, whether managing schema changes, making sure that the data itself is being looked after correctly, or other mechanisms that provide an audit trail of changes.
    We’ve found, at Red Gate, that we have a very wide range of customers using every possible form of database change management imaginable. Everything from “Nothing – I just fix the schema on production from my laptop when things go wrong, and write it down in my notebook” to “A full Continuous Delivery process – any change made by a dev gets checked in and recorded, fully tested (including performance tests) before a (tested) release is made available to our Release Management system, ready for live deployment!”. And everything in between, of course.
    Because of the vast number of customers using so many different approaches we found ourselves struggling to keep on top of what everyone was doing – struggling to identify patterns in customers’ behavior. This is useful for us, because we want to try and fit the products we have to different needs – different products are relevant to different customers and we waste everyone’s time (most notably, our customers’) if we’re suggesting products that aren’t appropriate for them. If someone visited a sports store looking to embark on a new fitness program, and the store assistant suggested the latest $10,000 multi-gym, complete with multiple weights mechanisms, dumb-bells, pull-up bars and so on, then he’s likely to lose that customer. All he needed was a pair of running shoes!
    To solve this issue – in an attempt to simplify how we understand our customers and our offerings – we built a model. This is an attempt to classify our customers into some sort of model, or “Customer Maturity Framework” as we rather grandly term it, which somehow simplifies our understanding of what our customers are doing. The great statistician George Box (amongst other things, the “Box” in the Box-Jenkins time series model) gave us the famous quote: “Essentially all models are wrong, but some are useful”. We’ve taken this quote to heart – we know it’s a gross over-simplification of the real world of how users work with complex legacy and new database developments. Almost nobody precisely fits into one of our categories. But we hope it’s useful and interesting.
    There are actually a number of similar models that exist for more general application delivery. We’ve found these from ThoughtWorks/Forrester, from InfoQ and others, and initially we tried just taking these models and replacing the word “application” with “database”. However, we hit a problem. From talking to our customers we know that users are far further behind in mature database change management than they are in application development. As a simple example, no application developer who wants to keep his/her job would develop an application for an organisation without source controlling that code. Sure, he/she might not be using an advanced Gitflow branching methodology, but they’ll certainly be making sure their code gets managed in a repo somewhere with all the benefits of history, auditing and so on. But this certainly isn’t the case (yet) for the database – a very large segment of the people we speak to have no source control set up for their databases whatsoever, even at the most basic level (for example, keeping change scripts in a source control system somewhere). By the way, if this is you, Red Gate has a great whitepaper here, on the barriers people face getting a source control process implemented at their organisations.
    This difference in maturity is the same as you move into areas such as continuous integration (common amongst app developers, relatively rare for database developers) and automated release management (growing amongst app developers, very rare for the database). So, when we created the model we started from scratch and biased the levels of maturity towards what we actually see amongst our customers. But what are these stages? And what level are you? The table below describes our definitions for four levels of maturity – Baseline, Beginner, Intermediate and Advanced. As I say, this is a model – you won’t fit any of these categories perfectly, but hopefully one will ring true more than others. We’ve also created a PDF with a flow chart to help you find which of these groups most closely matches your team: Download the Database Delivery Maturity Framework PDF here
    Level D1 – Baseline
    - Work directly on live databases
    - Sometimes work directly in production
    - Generate manual scripts for releases; sometimes use a product like SQL Compare or similar to do this
    - Any tests that we might have are run manually
    Level D2 – Beginner
    - Have some ad-hoc DB version control, such as manually adding upgrade scripts to a version control system
    - An attempt is made to keep production in sync with development environments
    - There is some documentation and planning of manual deployments
    - Some basic automated DB testing is in process
    Level D3 – Intermediate
    - The database is fully version-controlled with a product like Red Gate SQL Source Control or SSDT
    - Database environments are managed
    - The production environment schema is reproducible from the source control system
    - There are some automated tests
    - Have looked at using migration scripts for difficult database refactoring cases
    Level D4 – Advanced
    - Using continuous integration for database changes
    - Build, testing and deployment of DB changes carried out through a proper database release process
    - Fully automated tests
    - The production system is monitored for fast feedback to developers
    Does this model reflect your team at all? Where are you on this journey? We’d be very interested in knowing how you get on. We’re doing a lot of work at the moment, at Red Gate, trying to help people progress through these stages. For example, if you’re currently not source controlling your database, then this is a natural next step. If you are already source controlling your database, what about the next stage – continuous integration and automated release management? To help understand these issues, there’s a summary of the Red Gate Database Delivery learning program on our site, alongside a Patterns and Practices library here on Simple-Talk and a Training Academy section on our documentation site to help you get up and running with the tools you need to progress. All feedback is welcome and it would be great to hear where you find yourself on this journey! This article is part of our database delivery patterns & practices series on Simple Talk. Find more articles on version control, automated testing, continuous integration & deployment.
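    As an illustration of the kind of step that takes a team from Level D1 towards D2 and D3, here is a minimal, hypothetical Python sketch of applying version-controlled upgrade scripts in order and recording them in a schema_version table. The table name, the migrations folder and the use of SQLite are assumptions made purely for the example; they are not part of the framework described above.
    import pathlib
    import sqlite3
    # Minimal sketch: apply pending SQL upgrade scripts (001_xxx.sql, 002_xxx.sql, ...)
    # in filename order, and remember which ones have already been run on this database.
    def migrate(db_path="app.db", scripts_dir="migrations"):
        conn = sqlite3.connect(db_path)
        conn.execute("CREATE TABLE IF NOT EXISTS schema_version (script TEXT PRIMARY KEY)")
        applied = {row[0] for row in conn.execute("SELECT script FROM schema_version")}
        for script in sorted(pathlib.Path(scripts_dir).glob("*.sql")):
            if script.name in applied:
                continue  # already run against this database
            print("applying", script.name)
            conn.executescript(script.read_text())
            conn.execute("INSERT INTO schema_version (script) VALUES (?)", (script.name,))
            conn.commit()
        conn.close()
    if __name__ == "__main__":
        migrate()
    The point is simply that once change scripts live in source control and are applied by a script rather than by hand, the production schema becomes reproducible, which is the jump the model describes between the Baseline and Intermediate levels.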

    Read the article

  • Best approach to accessing multiple data sources in a web application

    - by ced
    I have a base web application developed with .NET technologies (ASP.NET), used on our LAN by 30 simultaneous users. From this web application I've developed two verticalizations used by online users. In the future I expect hundreds of simultaneous users. Our company has different locations, and each site uses its own database. The web application needs to retrieve information from all existing databases. Currently there are 3 databases, but future expansion to new offices is not excluded. My question then is: what is the best strategy for a web application to retrieve information from different databases (which have the same schema), where the main objectives are data-access performance and high fault tolerance? Are there case studies in the literature that I can take as an example? Do you know some good documents to study? Do you have any tips to implement this task efficiently? Intuitively I would say that two possible strategies are: perform queries against the different sources in real time and aggregate the data on the fly; or create a repository that contains the union of the entities of interest and perform queries directly on the repository.
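    As a sketch of the first strategy above (query each source in real time and aggregate on the fly), here is a minimal Python example. The connection strings, site names and use of pyodbc are assumptions for illustration only; the important parts are the per-site timeout and the decision to tolerate an unreachable office, which is where the fault-tolerance requirement shows up.
    import concurrent.futures
    import pyodbc  # assumption: the per-site databases are reachable via ODBC
    # Hypothetical connection strings, one per office; all share the same schema.
    SITES = {
        "rome":  "DRIVER={SQL Server};SERVER=rome-db;DATABASE=app;Trusted_Connection=yes",
        "milan": "DRIVER={SQL Server};SERVER=milan-db;DATABASE=app;Trusted_Connection=yes",
        "turin": "DRIVER={SQL Server};SERVER=turin-db;DATABASE=app;Trusted_Connection=yes",
    }
    def query_site(name, conn_str, sql, params=None):
        """Run one query against one site; return (site, rows) or (site, None) on failure."""
        conn = None
        try:
            conn = pyodbc.connect(conn_str, timeout=5)
            cur = conn.cursor()
            rows = (cur.execute(sql, params) if params else cur.execute(sql)).fetchall()
            return name, rows
        except pyodbc.Error:
            return name, None  # tolerate a single unreachable office
        finally:
            if conn is not None:
                conn.close()
    def query_all(sql, params=None):
        """Fan the query out to every site in parallel and aggregate the results."""
        results = []
        with concurrent.futures.ThreadPoolExecutor(max_workers=len(SITES)) as pool:
            futures = [pool.submit(query_site, n, cs, sql, params) for n, cs in SITES.items()]
            for f in concurrent.futures.as_completed(futures):
                site, rows = f.result()
                if rows is not None:
                    results.extend((site,) + tuple(r) for r in rows)
        return results
    The second strategy (a consolidated repository) trades this per-request fan-out for a replication or ETL step, which generally gives better query performance at the cost of some data freshness.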

    Read the article

  • Oracle Exadata customer adoptions, Part 1

    - by takashi.hitomi
    Customers who adopted Oracle Exadata between July 2009 and May 2010:
    2010/4/15 - customer adoption of Oracle Exadata V2
    2010/4/13 - customer adoption announcement
    2010/4/6 - customer adoption announcement
    2010/3/1 - customer adoption announcement
    2010/2/2 - deployment of the Intel-based Sun Oracle Database Machine
    2010/1/26 - customer adoption of Oracle Exadata
    2009/7/14 - deployment of the HP Oracle Database Machine

    Read the article

  • Conflict resolution for two-way sync

    - by K.Steff
    How do you manage two-way synchronization between a 'main' database server and many 'secondary' servers, in particular conflict resolution, assuming a connection is not always available? For example, I have a mobile app that uses CoreData as the 'database' on iOS, and I'd like to allow users to edit the contents without an Internet connection. At the same time, this information is available on a website the devices will connect to. What do I do if/when the data on the two DB servers is in conflict? (I refer to CoreData as a DB server, though I am aware it is something slightly different.) Are there any general strategies for dealing with this sort of issue? These are the options I can think of:
    1. Always use the client-side data as higher priority.
    2. Same for the server side.
    3. Try to resolve conflicts by marking each field's edit timestamp and taking the latest edit.
    Though I'm certain the 3rd option will leave room for some devastating data corruption. I'm aware that the CAP theorem concerns this, but I only want eventual consistency, so it doesn't rule it out completely, right? Related question: Best practice patterns for two-way data synchronization. The second answer to this question says it probably can't be done.
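    As a sketch of option 3 above (field-level last-write-wins), assuming every field edit carries a timestamp, a minimal Python version of the merge might look like this; the record layout is hypothetical, and clock skew between devices is exactly how this strategy silently drops edits, which is the corruption risk mentioned above.
    from datetime import datetime, timezone
    # Each side describes a record as {field: (value, last_edited_at)}.
    # Hypothetical layout - real CoreData/server rows would need mapping into this shape.
    def merge_last_write_wins(client_record, server_record):
        """Field-level last-write-wins merge; returns the merged record."""
        merged = {}
        for field in set(client_record) | set(server_record):
            client = client_record.get(field)
            server = server_record.get(field)
            if client is None:
                merged[field] = server
            elif server is None:
                merged[field] = client
            else:
                # Later timestamp wins; ties fall back to the server copy.
                merged[field] = client if client[1] > server[1] else server
        return merged
    # Example: the client edited 'title' offline after the server edited 'status'.
    t1 = datetime(2012, 6, 1, 10, 0, tzinfo=timezone.utc)
    t2 = datetime(2012, 6, 1, 11, 0, tzinfo=timezone.utc)
    client = {"title": ("Draft v2", t2), "status": ("open", t1)}
    server = {"title": ("Draft v1", t1), "status": ("closed", t2)}
    print(merge_last_write_wins(client, server))
    # -> 'title' taken from the client (newer), 'status' taken from the server (newer)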

    Read the article

  • Storing revisions of a document

    - by dev.e.loper
    This is a follow-up question to my original question. I'm thinking of going with generating diffs and storing those diffs in a 'History' database table. I'm using the diff-match-patch library to generate what is called a 'patch'. On every save, I compare the previous and new version and generate this patch. The patch can be used to generate the document at a specific point in time. My dilemma is how to store this data. Should I:
    a. Insert a new database record for every patch?
    b. Store these patches in a JavaScript array and store that array in the History table, so there is only one History record per document, with an array of all the patches?
    Concerns: with (a), too many DB records are generated, which will be slow and CPU-intensive to query; with (b), there is only one record, and if that record is somehow corrupted or deleted, the entire revision history is gone. I'm looking for suggestions and concerns with either approach.
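    A minimal sketch of approach (a), one row per patch, using the Python port of diff-match-patch and an in-memory SQLite table purely for illustration; rebuilding a revision means replaying every patch up to that version, which is the query cost referred to above.
    import sqlite3
    from diff_match_patch import diff_match_patch  # assumption: Python port of the library
    dmp = diff_match_patch()
    db = sqlite3.connect(":memory:")
    db.execute("""CREATE TABLE history (
                      doc_id INTEGER, version INTEGER, patch TEXT,
                      PRIMARY KEY (doc_id, version))""")
    def save_revision(doc_id, old_text, new_text):
        """Store one patch row per save (approach a)."""
        patch_text = dmp.patch_toText(dmp.patch_make(old_text, new_text))
        version = db.execute("SELECT COALESCE(MAX(version), 0) + 1 FROM history WHERE doc_id = ?",
                             (doc_id,)).fetchone()[0]
        db.execute("INSERT INTO history VALUES (?, ?, ?)", (doc_id, version, patch_text))
        db.commit()
    def rebuild(doc_id, up_to_version):
        """Replay patches 1..up_to_version to reconstruct that revision."""
        text = ""
        rows = db.execute("SELECT patch FROM history WHERE doc_id = ? AND version <= ? ORDER BY version",
                          (doc_id, up_to_version))
        for (patch_text,) in rows:
            text, _ = dmp.patch_apply(dmp.patch_fromText(patch_text), text)
        return text
    save_revision(1, "", "Hello world")
    save_revision(1, "Hello world", "Hello, brave new world")
    print(rebuild(1, 2))  # "Hello, brave new world"
    A common middle ground between the two approaches is to store a full snapshot every N revisions, so a rebuild only replays the patches since the most recent snapshot.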

    Read the article

  • Deployment Error: Silverlight 4.0 w/WCF RIA Services in ASP.NET MVC 2 App

    - by Dennis Ward
    I've got an MVC 2 app with an RIA Services link to a Silverlight application. On my local machine, all is well, but when I deploy to Discount ASP servers, neither the MVC controller nor the WCF RIA services called from Silverlight function. A Silverlight datagrid gets a load error: System.ServiceModel.DomainServices.Client.DomainOperationException: Load operation failed for query... The remote server returned an error NotFound. In the MVC page where I had a simple table that worked prior to adding an EF model and DomainDataSource, I now get the error: Unable to load one or more of the requested types. Retrieve the LoaderExceptions property for more information. This is very similar to an issue I had before, after upgrading from the betas of WCF RIA Services/Silverlight 4, but the fix I added there doesn't seem to work any longer. The link for that issue is: SL4/MVC2/WCF RIA Services = Load Error. I'm really struggling with deploying, and could use some help if anybody can shed any light on this. Thanks! Dennis

    Read the article

  • JBoss 6 deployment of message-driven bean error

    - by AntonioP
    Hello, I have a Java EE application which has one message-driven bean. It runs fine on JBoss 4; however, when I configure the project for JBoss 6 and deploy on it, I get this error:
    WARN [org.jboss.ejb.deployers.EjbDeployer.verifier] EJB spec violation: ... The message driven bean must declare one onMessage() method. ... org.jboss.deployers.spi.DeploymentException: Verification of Enterprise Beans failed, see above for error messages.
    But my bean HAS the onMessage method! It would not have worked on JBoss 4 either, then. Why do I get this error!?
    Edit: The class in question looks like this:
    package ...
    imports ...
    public class MyMDB implements MessageDrivenBean, MessageListener {
        AnotherSessionBean a;
        OneMoreSessionBean b;
        public MyMDB() {}
        public void onMessage(Message message) {
            if (message instanceof TextMessage) {
                try {
                    // Lookup sessionBeans by jndi, create them
                    lookupABean();
                    // check message-type, then invoke
                    a.handle(message);
                    // else b.handle(message);
                } catch (SomeException e) {
                    // handling it
                }
            }
        }
        public void lookupABean() {
            try {
                // code to lookup session beans and create.
            } catch (CreateException e) {
                // handling it and catching NamingException too
            }
        }
    }
    Edit 2: And these are the relevant parts of jboss.xml:
    <message-driven>
        <ejb-name>MyMDB</ejb-name>
        <destination-jndi-name>topic/A_Topic</destination-jndi-name>
        <local-jndi-name>A_Topic</local-jndi-name>
        <mdb-user>user</mdb-user>
        <mdb-passwd>pass</mdb-passwd>
        <mdb-client-id>MyMessageBean</mdb-client-id>
        <mdb-subscription-id>subid</mdb-subscription-id>
        <resource-ref>
            <res-ref-name>jms/TopicFactory</res-ref-name>
            <jndi-name>jms/TopicFactory</jndi-name>
        </resource-ref>
    </message-driven>

    Read the article

  • Sitefinity deployment

    - by Jim Snyder
    I recently stepped into a project that is using Telerik Sitefinity CMS with custom user controls. I would like to get the developers off of the production server. Does anyone have any experience with deploying a Sitefinity site by means of publication (precompiled .DLL)? Any discussion of benefits, disadvantages, or potential issues would be welcome.

    Read the article

  • Sinatra 1.0 fastcgi deployment

    - by TheMoonMaster
    I am trying to deploy my Sinatra app to my (shared) hosting and I keep getting this error:
    /usr/lib/ruby/gems/1.8/gems/rack-1.1.0/lib/rack/handler/fastcgi.rb:23:in `initialize': Address family not supported by protocol - socket(2) (Errno::EAFNOSUPPORT)
        from /usr/lib/ruby/gems/1.8/gems/rack-1.1.0/lib/rack/handler/fastcgi.rb:23:in `new'
        from /usr/lib/ruby/gems/1.8/gems/rack-1.1.0/lib/rack/handler/fastcgi.rb:23:in `run'
        from /usr/lib/ruby/gems/1.8/gems/sinatra-1.0/lib/sinatra/base.rb:946:in `run!'
        from /usr/lib/ruby/gems/1.8/gems/sinatra-1.0/lib/sinatra/main.rb:25
        from dispatch.fcgi:17
    I have no idea what this means and I have tried many different things to fix it, but nothing I tried seemed to work. My dispatch.fcgi is the following:
    #!/usr/bin/ruby
    require 'rubygems'
    require 'sinatra'
    fastcgi_log = File.open("fastcgi.log", "a")
    STDOUT.reopen fastcgi_log
    STDERR.reopen fastcgi_log
    STDOUT.sync = true
    set :logging, false
    set :server, "FastCGI"
    load 'simple.rb'
    And finally, my .htaccess (fcgid is how my host told me to set it up):
    RewriteEngine on
    AddHandler fcgid-script .fcgi
    Options +FollowSymLinks +ExecCGI
    RewriteRule ^(.*)$ dispatch.fcgi [QSA,L]

    Read the article

  • addins deployment

    - by user326198
    Hello everyone, we have a product that works both standalone and via ClickOnce. We created some components as add-ins in the system (based on Microsoft System.AddIn), and we need a mechanism to update these add-ins at customer sites in both cases, standalone and ClickOnce. For the standalone case I'm thinking we just send the customer a CD to update the add-ins. I'm also thinking of deploying the add-ins as packages (like System.IO.Packaging) so I can know the version of an add-in and update or delete it. But how will I achieve this with ClickOnce, where the user will just press Update in the add-in manager in the application? How can I manage the versioning and updating of these add-ins? I hope you can help me architect this add-in update structure.
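    One way to make the packaging idea above concrete is a version manifest shipped next to the add-ins, compared against what is installed. The sketch below shows only that comparison logic, in Python and with invented add-in names; the real implementation would sit on top of System.AddIn and the ClickOnce update mechanism rather than Python.
    # Hypothetical version manifests: {add-in name: version tuple}.
    installed = {"ReportViewer": (1, 2, 0), "Exporter": (1, 0, 0), "LegacyImport": (0, 9, 1)}
    available = {"ReportViewer": (1, 3, 0), "Exporter": (1, 0, 0), "NewDashboard": (1, 0, 0)}
    def plan_update(installed, available):
        """Return which add-ins to install, update or remove, given two version manifests."""
        install = [name for name in available if name not in installed]
        update = [name for name in available
                  if name in installed and available[name] > installed[name]]
        remove = [name for name in installed if name not in available]
        return install, update, remove
    install, update, remove = plan_update(installed, available)
    print("install:", install)   # ['NewDashboard']
    print("update:", update)     # ['ReportViewer']
    print("remove:", remove)     # ['LegacyImport']
    With a manifest like this, the CD and the ClickOnce cases only differ in where the "available" manifest and the packages are read from.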

    Read the article

  • WCF RIA Silverlight deployment issues

    - by Handleman
    It seems the world is awash with people having problems deploying WCF RIA services, and now I'm one too. I've already tried a bunch of things, but to no avail. I need WCF RIA to support a Silverlight 3 application I've built. The short story is: using the new WCF RIA Services (Nov 09?), I open VS 2008, create a new project (Silverlight application), enabling ".NET RIA services". I add a new item to the web project - a Linq2SQL dbml file (from a SQL 2005 DB prepared earlier) - and compile. I add a new item to the web project - a domain service (linking the tables I need) - and compile. Using the domain context I "Load" data with a standard RIA get query in the MainPage and add a TextBlock to display the returned data. Build & run (Cassini) - success. Using VS to publish to IIS on my local PC - success. Using VS to publish to the test server (IIS6) - browse to the location and the Silverlight app loads, but Fiddler tells me I've got a 404 on all the WCF .svc requests. Use Fiddler to "launch IE" on the service request and it's true - 404. I have already run aspnet_regiis, ServiceModelReg and added MIME types for .xap, .xaml, .xbap and .svc. I have included the System.Web.Ria and System.Web.DomainServices DLLs with Copy Local set to true. I need help with either a) a solution or b) an approach to find a solution.

    Read the article

  • Windows Azure WebRole stuck in a deployment loop

    - by Rob G
    I've been struggling with this one for a couple of days now. My current Windows Azure WebRole is stuck in a loop where the status keeps changing between Initializing, Busy, Stopping and Stopped. It never goes live, and as a result I can never see the website. The WebRole is an "out of the box" MVC 2 application with Copy Local set to true on the MVC DLL. I haven't even tried hooking up storage or a WorkerRole yet, and there is nothing really happening inside the Start method that I can see would crash. I've really tried going back to basics to ensure nothing can complicate the process, and the website launches without a problem on the Dev Fabric - and yes, it looks just like the standard "Home"/"About" MVC app - I just can't get it running in the cloud! Funny thing is, a few days ago this exact package worked on the staging area in the cloud, and I could even see it in the browser - but I could never get it swapped over to production, so I deleted everything and started from scratch, and now I can't even get it running on staging... Does anyone have any ideas on what I could do to diagnose this problem myself? Since logging this problem on the forums 2 days ago, there has been no improvement or feedback. Any help appreciated. Regards, Rob G

    Read the article

  • Best deployment strategy for Python google app engine

    - by sushant
    I wonder if there are any best practices/patterns for deploying Python apps on Google App Engine, specifically Django. The best practice should be a combination of existing tools, viz. Fabric, Paver, Buildout, etc. Please also share best-practice patterns for developing (I could not get virtualenv running with Django and the Django App Engine helper).

    Read the article

  • Temporary "Backup" of SharePoint Content During Feature and Solution Deployment

    - by ccomet
    I need to decide on a method for storing a subset of the content in a SharePoint site, so that when I delete and recreate certain lists as part of a feature activation, I can re-insert all of this content back where it should belong. I have an idea myself, but I don't know if it's the only method and, more importantly, the right method.
    My client has me creating a SharePoint system for them to communicate with their clients. The business process has maybe 5 stages in it (maybe it's more, I don't even know because they don't tell me everything), and the current system I've written over the past months is maybe 2 stages through. This meets our deadline of completing those systems by Monday next week... but at that point my client is planning on making the site live. In effect, their work with their clients will be running parallel with my work for them. As I complete my own work on a separate test server, I'll push each following stage of the process onto the live server. Scheduled downtimes during non-business times (like a weekend) will be available for me to perform these pushes. Keeping pace so that my development is faster than the actual business process is my own problem and off-topic... so let's get back to the problem I stated at the start of this post.
    In this system, we have sets of features which will create lists for their associated content types and field types when activated, and delete these lists when the feature is deactivated. Most updates don't need to deactivate and reactivate these features, such as workflow changes, custom actions, custom forms, and similar ilk. But there are some parts which do require this. On my test server, it's okay for me to obliterate lists, but once the site is live and there's real correspondence data, it's absolutely unacceptable to do this. So when I need to implement a new change in functionality, I need to be able to store the currently present data in several lists, deactivate the feature, reactivate the feature, and restore all of this data. Perhaps I have hoist myself by my own petard with the feature system I implemented. Unfortunately, the necessity to later on make several of these "project sites" meant I had to do a lot of my code with the concept of "Can be deployed repeatedly" in mind.
    My current plan is to run through lists and libraries which will be affected by the particular feature that is to be reset. Files and all of their versions will be saved in a directory on the server. Then, a set of text files will be used to store all of the important field values for the items. This includes a lot of cross-list reference lookups that will need to be maintained, but that's simple enough. Then, I deactivate the feature, deploy the new solution, and reactivate the feature. We upload all of the files in the order specified by their versions and update them with the stored fields for those versions, so that we retain the version structure. As each one is first uploaded, the new ID is picked out, and all relevant lookups in the rest of the files are updated (in some manner that I make sure I don't re-update it later with an incorrect value, of course). After that, we run through all the rest of the items in the order most conducive to keeping the relational data correct. This roughly summarizes what my current plan is.
To my advantage, there are no long running workflows in the system that will be affected by this, so there's nothing I will have to worry about making sure nothing is "still running" when I do this stuff. I don't really know all the cons of this approach... I can imagine they're quite hefty. But I'm unsure what other choices I even have, and my searches haven't turned up anything. Is there anyone who can think of a better idea? Or will anyone just tell me that I really have no other choice? Thanks in advance!
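    The riskiest step in the plan above is keeping cross-list lookups correct once items come back with new IDs. The following is a purely illustrative Python sketch of that two-pass remapping; the item layout and the add_item callback are invented for the example, and the real code would go through the SharePoint object model.
    # Exported items captured before feature deactivation: old ID, plain fields, and
    # lookup fields that reference old IDs in another list. Hypothetical layout.
    exported = [
        {"old_id": 3, "fields": {"Title": "Contract A"}, "lookups": {}},
        {"old_id": 7, "fields": {"Title": "Reply to A"}, "lookups": {"ParentContract": 3}},
    ]
    def restore(exported, add_item):
        """Re-insert exported items, remapping lookup fields from old IDs to new IDs.
        add_item(fields) stands in for whatever call creates the list item and returns its new ID."""
        id_map = {}
        # First pass: create every item and record old ID -> new ID.
        for item in exported:
            id_map[item["old_id"]] = add_item(item["fields"])
        # Second pass: rewrite lookups only once every new ID is known,
        # so no lookup is ever re-updated with a stale value.
        for item in exported:
            for field, old_target in item["lookups"].items():
                print(f"item {id_map[item['old_id']]}: set {field} -> {id_map[old_target]}")
        return id_map
    counter = iter(range(100, 200))
    restore(exported, add_item=lambda fields: next(counter))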

    Read the article

  • Rails Deployment: moving static files to S3

    - by Joseph Silvashy
    Someone posted something similar, but it didn't really solve the problem. I want to move all my static files (images, JavaScript, CSS) to an Amazon S3 bucket when I deploy my app, and rewrite those paths in my app. Is there a simple way to accomplish this, or am I in for a huge amount of work here?

    Read the article

  • PHP Deployment to Live Server

    - by zx
    Hello, I am new to this; I was just reading about how I should not edit code on the live production server. I don't know anything about source control or SVN. I would like to start coding on a test server and then, once everything is confirmed working, send all the files over to the production server. How should I go about this? I am on Mac OS X and was looking into apps like http://versionsapp.com/, but I am not sure if this is the right solution. What do you suggest?

    Read the article

  • Java Applet Deployment, ClassNotFoundException (primary class)

    - by Matt
    This is driving me up the wall. I have checked and rechecked spelling and paths. I have tried just about every combination of paths, including relative, absolute, and full http paths. I continue to get the following error when trying to load a Java applet:
    java.lang.ClassNotFoundException: AppletClient.class
        at sun.plugin2.applet.Applet2ClassLoader.findClass(Unknown Source)
        at java.lang.ClassLoader.loadClass(Unknown Source)
        at java.lang.ClassLoader.loadClass(Unknown Source)
        at sun.plugin2.applet.Plugin2ClassLoader.loadCode(Unknown Source)
        at sun.plugin2.applet.Plugin2Manager.createApplet(Unknown Source)
        at sun.plugin2.applet.Plugin2Manager$AppletExecutionRunnable.run(Unknown Source)
        at java.lang.Thread.run(Unknown Source)
    Exception: java.lang.ClassNotFoundException: AppletClient.class
    The HTML used to load the applet:
    <applet width="100" height="100" archive="applet/myapplet.jar, applet/applet_dependency.jar" code="AppletClient.class">
        <param value="blahblah" name="username">
        <param value="false" name="codebase_lookup">
    </applet>
    The applet is in a relative directory, "applet", from the path of the current page. I have unzipped the jar file and can see AppletClient.class. Also, in the source of the project, it is spelled that way (casing and all). I have tried with/without the parameters. I have changed the names of the archive jars in the applet include tag just to see if I get a different error for bad file names (same error). I have manually done GETs on the jars to make sure the server is responding to the requests (it is). I have tried with and without the codebase tag, with all different varieties of paths (I start getting bad "magic number" errors on those). I know that this error sometimes pops up when a dependency fails to load, so it can be misleading, but all dependencies are present, accounted for, and are fetchable via manual GETs. Between each and every attempt I always clear my cache in Firefox. These problems are reproduced in IE8 and Chrome as well. Per my Java console from the browser, I am running Java Plug-in 1.6.0_20. This is from the same machine that I develop the applet on, which runs fine via Eclipse.
    Finally, I kicked on Fiddler2, and I don't see a single request for the jar files anywhere. The host site is running from my Visual Studio debugger, so it's running on localhost, but I see the requests for all the other resources in Fiddler. Just... no jars. ANYWHERE. I cleared the log, cleared my browser cache, and did a ctrl-R refresh. And still, not a single jar request in the Fiddler log. I even did a delayed write (with JS) of the applet tag after the page loaded, once all the Fiddler activity slowed down. The element gets written to the document (and I can see the 100x100 Java error window), but not a single request shows up in Fiddler. Any suggestions, before I go crawl into the corner and cry myself to sleep?
    EDIT: From the Java console, if I hit "l" (el) to "dump classloader list", I see something that looks like this:
    Live entry: key=http://localhost:55446/BaseWebSite/,http://localhost:55446/BaseWebSite/applet/myappliet.jar, http://localhost:55446/BaseWebSite/applet/applet_dependency.jar, refCount=1, threadGroup=sun.plugin2.applet.Applet2ThreadGroup[name=http://localhost:55446/BaseWebSite/-threadGroup,maxpri=4]

    Read the article

  • How to control webapp deployment order when restarting tomcat

    - by artejera
    Hi, I have a number of WAR projects deployed in a single Tomcat 5.5 container. They consume each other's services over HTTP, and thus I need to make sure that, when Tomcat is restarted, they are deployed in a specific order. After a couple of hours googlin' around, no luck. Does anyone know how to set up Tomcat 5.5 to deploy the WARs in a specific order on restart? Thanks in advance

    Read the article
