Search Results

Search found 1818 results on 73 pages for 'migration'.

Page 34/73

  • Is there a rake task for advancing or retreating your schema version by exactly one?

    - by user30997
    Back when migration version numbers were simply incremented as you created migrations, it was easy enough to do:

        rake migrate VERSION=097
        rake migrate VERSION=098
        rake migrate VERSION=099
        rake migrate VERSION=100

    ...but we now have migration numbers that are something like YYYYMMDDtimeofday. Not that this is a bad thing - it keeps the migration version collisions to a minimum - but when I have 50 migrations and want to step through them one at a time, it is a hassle:

        rake migrate VERSION=20090129215142
        rake migrate VERSION=20090129219783
        ...etc.

    I have to have a list of all the migrations open in front of me, typing out the version numbers to advance by one. Is there anything that would have an easier syntax, like rake migrate VERSION=NEXT or rake migrate VERSION=PREV?
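    A minimal sketch of what such a task could look like, assuming a Rails 2.x/3.x-era ActiveRecord::Migrator API (the task names db:forward_one and db:back_one are made up for illustration; later Rails versions ship the equivalent built-ins rake db:forward and rake db:rollback, which accept a STEP argument):

        # lib/tasks/step_migrations.rake (hypothetical file and task names)
        namespace :db do
          desc "Migrate the schema forward by exactly one version"
          task :forward_one => :environment do
            # Migrator.forward applies the next pending migration found in db/migrate
            ActiveRecord::Migrator.forward("db/migrate", 1)
          end

          desc "Roll the schema back by exactly one version"
          task :back_one => :environment do
            ActiveRecord::Migrator.rollback("db/migrate", 1)
          end
        end

    With something like that in place, stepping through the 50 migrations becomes repeated calls to rake db:forward_one (or rake db:back_one), with no timestamps to type.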

    Read the article

  • Plural to Singular conversion trouble in Rails Migrations?

    - by Earlz
    Hi, I'm a beginner at Ruby on Rails and am trying to get a migration to work with the name Priorities. So, here is the code I use in my migration:

        class Priorities < ActiveRecord::Migration
          def self.up
            create_table :priorities do |t|
              t.column :name, :string, :null => false, :limit => 32
            end
            Priority.create :name => "Critical"
            Priority.create :name => "Major"
            Priority.create :name => "Minor"
          end

          def self.down
            drop_table :priorities
          end
        end

    This results in the following error though:

        NOTICE: CREATE TABLE will create implicit sequence "priorities_id_seq" for serial column "priorities.id"
        NOTICE: CREATE TABLE / PRIMARY KEY will create implicit index "priorities_pkey" for table "priorities"
        rake aborted!
        An error has occurred, this and all later migrations canceled:
        uninitialized constant Priorities::Priority

    Is this some problem with turning "ies" to "y" when converting something plural to singular?
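    The pluralization is not the problem: the error simply means that no Priority constant exists when the migration runs (the migration class being named Priorities has no bearing on it). A minimal sketch of one common fix, assuming there is no app/models/priority.rb yet, is to define a throwaway model inside the migration so the seed rows don't depend on the application's models:

        class Priorities < ActiveRecord::Migration
          # Lightweight stand-in model; ActiveRecord infers the "priorities"
          # table from the class name, so it maps to the table created below.
          class Priority < ActiveRecord::Base; end

          def self.up
            create_table :priorities do |t|
              t.column :name, :string, :null => false, :limit => 32
            end
            ["Critical", "Major", "Minor"].each { |n| Priority.create :name => n }
          end

          def self.down
            drop_table :priorities
          end
        end

    Creating a regular Priority model under app/models works just as well; either way, Rails' plural-to-singular inflection of "priorities" to "priority" is behaving correctly here.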

    Read the article

  • Inheritance in Ruby on Rails: setting the base class type

    - by Régis B.
    I am implementing single table inheritance inside Rails. Here is the corresponding migration:

        class CreateA < ActiveRecord::Migration
          def self.up
            create_table :a do |t|
              t.string :type
            end
          end
        end

    Class B inherits from A:

        class B < A
        end

    Now, it's easy to get all instances of class B: B.find(:all) or A.find_all_by_type("B"). But how do I find all instances of class A (those that are not of type B)? Is this bad organization? I tried A.find_all_by_type("A"), but instances of class A have a nil type. I could do A.find_all_by_type(nil), but this doesn't feel right, somehow. In particular, it would stop working if I decided to make A inherit from another class. Would it be more appropriate to define a default value for :type in the migration? Something like:

        t.string :type, :default => "A"

    Am I doing something wrong here?
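    A short sketch of the usual answers, using the class names and the Rails 2.x-era finder syntax from the question. By default, rows created through the base class are stored with a NULL type, so "plain A" records can either be matched explicitly or the column can be given a default as suggested above:

        # Fetch only base-class rows, tolerating either NULL or an explicit "A" type:
        plain_as = A.find(:all, :conditions => ["type IS NULL OR type = ?", "A"])

        # With the default from the question in the migration...
        #   t.string :type, :default => "A"
        # ...newly created A records carry the type "A", so this also works:
        plain_as = A.find_all_by_type("A")

    The default-value approach also keeps the query meaningful if A later gains its own superclass, which is the concern raised above.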

    Read the article

  • What happens if a user jumps over 10 versions before updating, and every version had a new data model?

    - by dontWatchMyProfile
    Example: a user installs app v.1.0 and adds data. Then the dev submits 10 updates in 10 weeks. After 11 weeks, the user wants v.11.0 and grabs a copy from the app store. Assuming that the app has 11 .xcdatamodel versions inside, where the 11th .xcdatamodel is the current one, what would happen now, since the persistent store of the user is ages old? Would the migration happen 10 times, step-by-step through every migration iteration? Or does the actual migration of data (let's assume gigabytes of data) happen exactly once, after Core Data (or the persistent store coordinator) has figured out precisely what to do to go from v.1.0 to v.11.0?

    Read the article

  • Replication - between pools in the same system

    - by Steve Tunstall
    OK, I fully understand that it's been a LONG time since I've blogged with any tips or tricks on the ZFSSA, and I'm way behind. Hey, I just wrote TWO BLOGS ON THE SAME DAY!!! Make sure you keep scrolling down to see the next one too, or you may have missed it. To celebrate, for the one or two of you out there who are still reading this, I got something for you. The first TWO people who make any comment below, with your real name and email so I can contact you, will get some cool Oracle SWAG that I have to give away. Don't get excited, it's not an iPad, but it's pretty good stuff. Only the first two, so if you already see two below, then settle down.

    Now, let's talk about Replication and Migration. I have talked before about Shadow Migration here: https://blogs.oracle.com/7000tips/entry/shadow_migration. Shadow Migration lets one take an NFS or CIFS share in one pool on a system and migrate that data over to another pool in the same system. That's handy, but right now it's only for file systems like NFS and CIFS. It will not work for LUNs. LUN shadow migration is a roadmap item, however.

    So... what if you have a ZFSSA cluster with multiple pools, and you have a LUN in one pool but later you decide it's best if it was in the other pool? No problem. Replication to the rescue. What's that? Replication is only for replicating data between two different systems? Who told you that? We've been able to replicate to the same system for a few code updates now. These instructions below will also work just fine if you're setting up replication between two different systems. After replication is complete, you can easily break replication, change the new LUN into a primary LUN and then delete the source LUN. Bam.

    Step 1. Set up a target system. In our case, the target system is ourself, but you still have to set it up like it's far away. Go to Configuration-->Services-->Remote Replication. Click the plus sign and set up the target, which is the ZFSSA you're on now.

    Step 2. Now you can go to the LUN you want to replicate. Take note which Pool and Project you're in. In my case, I have a LUN in Pool2 called LUNp2 that I wish to replicate to Pool1.

    Step 3. In my case, I made a Project called "Luns" and it has LUNp2 inside of it. I am going to replicate the Project, which will automatically replicate all of the LUNs and/or Filesystems inside of it. Now, you can also replicate from the Share level instead of the Project. That will only replicate the share, and not all the other shares of a project. If someone tells you that if you replicate a share, it always replicates all the other shares also in that Project, don't listen to them. Note below how I can choose not only the Target (which is myself), but I can also choose which Pool to replicate it to. So I choose Pool1.

    Step 4. I did not choose a schedule or pick the "Continuous" button, which means my replication will be manual only. I can now push the Manual Replicate button on my Actions list and you will see it start. You will see both a barber pole animation and also an update in the status bar on the top of the screen that a replication event has begun. This also goes into the event log.

    Step 5. The status bar will also log an event when it's done.

    Step 6. If you go back to Configuration-->Services-->Remote Replication, you will see your event.

    Step 7. Done. To see your new replica, go to the other Pool (Pool1 for me), and click the "Replica" area below the words "Filesystems | LUNs". Here, you will see any replicas that have come in from any of your sources. It's a simple matter from here to break the replication, which will change this to a "Local" LUN, and then delete the original LUN back in Pool2.

    OK, that's all for now, but I promise to give out more tricks sometime in November!!! There's very exciting stuff coming down the pipe for the ZFSSA - both new hardware and new software features that I'm just drooling over. That's all I can say, but contact your local sales SC to get an NDA roadmap talk if you want to hear more.

    Happy Halloween,
    Steve

    Read the article

  • HADOPI would like to monitor streaming platforms - what repressive measures could follow?

    HADOPI would like to monitor streaming platforms - what repressive measures could follow? While Hadopi has already made plenty of people unhappy, that number could grow further. The Haute Autorité is well aware that its arrival has pushed a great number of Internet users toward streaming, yet it only has enforcement power over P2P networks. That is enough to drive much of the online community away from those platforms, and to raise new concerns for the government. "For now, what is being said is that there is a migration. Have we actually observed it? No. Saying there is a migration does not mean there is a Hadopi effect on illegal downloaders. It means ...

    Read the article

  • Calculated Columns in Entity Framework Code First Migrations

    - by David Paquette
    I had a couple of people ask me about calculated properties / columns in Entity Framework this week. The question was: is there a way to specify a property in my C# class that is the result of some calculation involving two properties of the same class? For example, in my database I store a FirstName and a LastName column and I would like a FullName property that is computed from the FirstName and LastName columns. My initial answer was:

        public string FullName
        {
            get { return string.Format("{0} {1}", FirstName, LastName); }
        }

    Of course, this works fine, but this does not give us the ability to write queries using the FullName property. For example, this query:

        var users = context.Users.Where(u => u.FullName.Contains("anan"));

    would result in the following NotSupportedException: The specified type member 'FullName' is not supported in LINQ to Entities. Only initializers, entity members, and entity navigation properties are supported.

    It turns out there is a way to support this type of behavior with Entity Framework Code First Migrations by making use of computed columns in SQL Server. While there is no native support for computed columns in Code First Migrations, we can manually configure our migration to use computed columns. Let's start by defining our C# classes and DbContext:

        public class UserProfile
        {
            public int Id { get; set; }

            public string FirstName { get; set; }
            public string LastName { get; set; }

            [DatabaseGenerated(DatabaseGeneratedOption.Computed)]
            public string FullName { get; private set; }
        }

        public class UserContext : DbContext
        {
            public DbSet<UserProfile> Users { get; set; }
        }

    The DatabaseGenerated attribute is needed on our FullName property. This is a hint to let Entity Framework Code First know that the database will be computing this property for us. Next, we need to run two commands in the Package Manager Console. First, run Enable-Migrations to enable Code First Migrations for the UserContext. Next, run Add-Migration Initial to create an initial migration. This will create a migration that creates the UserProfile table with three columns: FirstName, LastName, and FullName. This is where we need to make a small change. Instead of allowing Code First Migrations to create the FullName column, we will manually add that column as a computed column.

        public partial class Initial : DbMigration
        {
            public override void Up()
            {
                CreateTable(
                    "dbo.UserProfiles",
                    c => new
                    {
                        Id = c.Int(nullable: false, identity: true),
                        FirstName = c.String(),
                        LastName = c.String(),
                        //FullName = c.String(),
                    })
                    .PrimaryKey(t => t.Id);
                Sql("ALTER TABLE dbo.UserProfiles ADD FullName AS FirstName + ' ' + LastName");
            }

            public override void Down()
            {
                DropTable("dbo.UserProfiles");
            }
        }

    Finally, run the Update-Database command. Now we can query for Users using the FullName property and that query will be executed on the database server. However, we encounter another potential problem: since the FullName property is calculated by the database, it will get out of sync on the object side as soon as we make a change to the FirstName or LastName property.
    Luckily, we can have the best of both worlds here by also adding the calculation back to the getter on the FullName property:

        [DatabaseGenerated(DatabaseGeneratedOption.Computed)]
        public string FullName
        {
            get { return FirstName + " " + LastName; }
            private set
            {
                //Just need this here to trick EF
            }
        }

    Now we can both query for Users using the FullName property and we also won't need to worry about the FullName property being out of sync with the FirstName and LastName properties. When we run this code:

        using (UserContext context = new UserContext())
        {
            UserProfile userProfile = new UserProfile { FirstName = "Chanandler", LastName = "Bong" };

            Console.WriteLine("Before saving: " + userProfile.FullName);

            context.Users.Add(userProfile);
            context.SaveChanges();

            Console.WriteLine("After saving: " + userProfile.FullName);

            UserProfile chanandler = context.Users.First(u => u.FullName == "Chanandler Bong");
            Console.WriteLine("After reading: " + chanandler.FullName);

            chanandler.FirstName = "Chandler";
            chanandler.LastName = "Bing";

            Console.WriteLine("After changing: " + chanandler.FullName);
        }

    We get this output: It took a bit of work, but finally Chandler's TV Guide can be delivered to the right person. The obvious downside to this implementation is that the FullName calculation is duplicated in the database and in the UserProfile class.

    This sample was written using Visual Studio 2012 and Entity Framework 5. Download the source code here.

    Read the article

  • Oracle announces Brand New Tuxedo 11g Release

    - by ruma.sanyal
    Today Oracle introduced two brand new products within the Tuxedo product line of its application grid portfolio. Oracle Tuxedo Application Runtime for CICS and Batch and Oracle Application Rehosting Workbench provide the ability to automate rehosting of mainframe Online and Batch applications to open systems running under Oracle Tuxedo. Oracle Application Rehosting Workbench automates adaptation of COBOL programs, JCL conversion for batch applications, and migration of VSAM files and DB2 data schema. Migration cost, risk, and project length and complexity are dramatically reduced, with over 90% of application assets re-hosted on open systems 'as-is'. Impact on the organization is minimized - users are protected from change by support for 3270 green screens, and developers continue to use familiar CICS APIs, batch functions, and common utilities.

    Other major features of this release are as follows:
    - Hot-pluggability through introduction of the Oracle Tuxedo JCA Adapter
    - Metadata-driven application development using the SCA programming model
    - Support for Python and Ruby languages to develop business services
    - Improved scalability and availability, TSAM enhancements

    Register for a live webinar with Oracle Fusion Middleware Senior VP Hasan Rizvi. Read the press release. Find more details on these exciting new products.

    Read the article

  • Oracle Database Machine: customer case at OOW2010

    - by rene.kundersma
    I proudly announce that at OpenWorld 2010, together with TUI, I will be co-presenting the customer case on their Database Machine implementation. Our session number is S314935. The session will cover the business case, the choices made for the setup, how we did the migration to v1 and the migration to v2, and also how we implemented backup/restore and disaster recovery solutions. It will be a very interesting case for everyone interested in customer implementations of the DBM! Hope to see you there.

    Rene Kundersma
    Technical Architect, Oracle Technology Services

    Read the article

  • Sql Server Data Tools & Entity Framework - is there any synergy here?

    - by Benjol
    Coming out of a project using Linq2Sql, I suspect that the next (bigger) one might push me into the arms of Entity Framework. I've done some reading up on the subject, but what I haven't managed to find is a coherent story about how SQL Server Data Tools and Entity Framework should/could/might be used together. Were they conceived totally separately, so that using them together is rubbing them the wrong way? Are they somehow totally orthogonal and I'm missing the point? Some reasons why I think I might want both:
    - SSDT is great for having 'compiled' (checked) and easily versionable SQL and schema.
    - But the SSDT 'migration/update' story is not convincing (to me): "Update anything" works OK for schema, but there's no way (AFAIK) that it can ever work for data. On the other hand, I haven't tried EF migrations to know if they present similar problems, but the Up/Down bits look quite handy.

    Read the article

  • Forms & Reports upgrade characterset issues

    - by Lukasz Romaszewski
    Hello! This quick post is based on my findings during recent IMC workshops, especially those related to upgrading Forms 6i/9i/10g applications to the Forms 11g platform. The upgrade process itself is pretty straightforward and basically requires recompiling your Forms application with the latest version of the frmcmp tool. For some cases though, especially when you migrate from Forms 6i, which is a client-server architecture, to a 3-tier web solution (Forms 11g), you need to rewrite some parts of your code to make it run on the new platform. The things you need to change range from reimplementing (using the webutil library) typical client-side functionality like local IO operations, access to WinAPI, invoking DLLs, etc., to changing deprecated or obsolete APIs like RUN_PRODUCT to RUN_REPORT_OBJECT. To automate those changes Oracle provides a complete Java API which allows you to manipulate the code and structure of your modules (JDAPI). To make it even easier we can use the Forms Migration Assistant tool (written in Java using JDAPI), which is able to replace all occurrences of old API entries with their 11g equivalents or warn you when a replacement is not possible. You can also add your own replacement definitions in the search_replace.properties file.

    But you need to be aware of some issues that can be encountered using this tool. First of all, if you are using some hard-coded text inside your triggers, you may notice that after processing them with the Migration Assistant tool the national characters may be lost. This is because you need to explicitly tell the Java application (which MA really is) what characterset it should use to read those texts properly. In order to do that, just add the following line to the script calling MA:

        export JAVA_TOOL_OPTIONS=-Dfile.encoding=<JAVA_ISO_ENCODING>

    where the particular encoding must match the NLS_LANG in your Forms Builder environment (for example, for the Polish characterset you need to use ISO-8859-2).

    The second issue you can encounter related to national charactersets is a lack of national symbols in your reports after migration. This can be solved by adding an appropriate NLS_LANG entry in your reports environment. Sometimes, instead of the particular characterset, you see "Greek characters" in your reports. This is just the default font used by the reports engine instead of the one defined in your report. To solve it you must copy the font definitions from your old environment (e.g. a Forms 10g installation) to the appropriate directory in the new installation (usually the AFM folder).

    For more information about this and other issues please refer to https://support.oracle.com/CSP/main/article?cmd=show&type=NOT&doctype=BULLETIN&id=1297012.1 at the My Oracle Support site. That's all for today, stay tuned for more posts on this topic!

    Lukasz

    Read the article

  • Migrating from GlassFish 2.x to 3.1.x

    - by alexismp
    With clustering now available in GlassFish since version 3.1 (our Spring 2011 release), a good number of folks have been looking at migrating their existing GlassFish 2.x-based clustered environments to a more recent version to take advantage of Java EE 6, our modular design, improved SSH-based provisioning and enhanced HA performance. The GlassFish documentation set is quite extensive and has a dedicated Upgrade Guide. It obviously lists a number of small changes such as file layout on disk (mostly due to modularity), some option changes (grizzly, shoal), the removal of node agents (using SSH instead), the new JPA default provider name, etc. There is even a migration tool (glassfish/bin/asupgrade) to upgrade existing domains. But really the only thing you need to know is that each module in GlassFish 3 and beyond is responsible for doing its part of the upgrade job, which means that the migration is as simple as copying a 2.x domain directory to the domains/ directory and starting the server with asadmin start-domain --upgrade. Binary-compatible products eligible for such upgrades include Sun Java System Application Server 9.1 Update 2 as well as versions 2.1 and 2.1.1 of Sun GlassFish Enterprise Server.

    Read the article

  • VSDB to SSDT Series : Introduction

    - by Etienne Giust
    At the office, we extensively use VS2010 SQL Server 2008 Database Projects and SQL Server 2008 Server Projects in our Visual Studio 2010 solutions. With Visual Studio 2012, those types of projects are replaced by the SQL Server Database Project using the SSDT (SQL Server Data Tools) technology. I started investigating the shift from Visual Studio 2010 to Visual Studio 2012 and specifically what needs to be done concerning those database projects in terms of painless migration, continuous integration and standalone deployment. I will write my findings in a series of 4 short articles:
    - Part 1 will be about the database projects migration process and the cleaning up that ensues
    - Part 2 will be about creating SQL Server 2008 Server Projects equivalents with the new SSDT project type
    - Part 3 will introduce a replacement for the vsdbcmd.exe command used for deployment in our continuous integration process
    - Part 4 will explain how to create standalone packages of SSDT projects for deployment on non-accessible servers (such as a production server)

    Read the article

  • links for 2011-02-15

    - by Bob Rhubart
    Why the hybrid cloud model is the best approach | Cloud Computing - InfoWorld
    Although some cloud providers look at the hybrid model as blasphemy, there are strong reasons for them to adopt it, says David Linthicum. (tags: davidlinthicum cloud)

    Exadata Part V: Monitoring with Database Control | The Oracle Instructor
    Uwe Hesse shows how "we can use Oracle Enterprise Manager Database Control to monitor an Exadata Database Machine, especially the Storage Servers (Cells)." (tags: oracle exadata)

    ATG Live Webcast Feb. 24th: Using the EBS 12 SOA Adapter (Oracle E-Business Suite Technology)
    "This live one-hour webcast will offer a review of the Service Oriented Architecture (SOA) capabilities within E-Business Suite R12 focusing on the E-Business Suite Adapter." (tags: oracle soa)

    Oracle Forms Migration to ADF - webinar from ORACLE Partner PITSS (Oracle Fusion Middleware für den Finanzsektor)
    "Join Oracle's Grant Ronald and PITSS to see a software architecture comparison of Oracle Forms and ADF and a live step-by-step presentation on how to achieve a successful migration." (tags: oracle adf)

    Read the article

  • Reading the tea leaves from Windows Azure support

    - by jamiet
    A few idle thoughts… Three months ago I had an issue regarding Windows Azure where I was unable to login to the management portal. At the time I contacted Azure support, the issue was soon resolved and I thought no more about it. Until today, that is, when I received an email from Azure support providing a detailed analysis of the root cause, the fix, and moreover precise details about when and where things occurred. The email itself is interesting and I have included the entirety of it below. A few things were interesting to me:

    The level of detail and the diligence in investigating and reporting the issue I found really rather impressive. They even outline the number of users that were affected (127, in case you can't be bothered reading). Compare this to the quite pathetic support that another division within Microsoft, Skype, provided to Greg Low recently: Skype support and dead parrot sketches.

    This line I also found to be particularly interesting: "Windows Azure performed a planned change from using the Microsoft account service (formerly Windows Live ID) to the Azure Active Directory (AAD) as its primary authentication mechanism on August 24th. This change was made to enable future innovation in the area of authentication – particularly for organizationally owned identities, identity federation, stronger authentication methods and compliance certification." I have long thought that one of the reasons Microsoft has proved to be such a money-making machine in the enterprise is because they provide the infrastructure and then upsell on top of that – and nothing is more infrastructural than Active Directory. It has struck me of late that they are trying to make the same play in the cloud by tying all their services into Azure Active Directory, and here we see a clear indication of that by making AAD the authentication mechanism for anyone using Windows Azure. I get the feeling that we're going to hear much, much more about AAD in the future; isn't it about time we could log on to SQL Azure (now Windows Azure SQL Database) without resorting to SQL authentication, for example? And why do Microsoft have two identity providers – Microsoft Account (aka Windows Live ID) and AAD – isn't it about time those things were combined?

    As I said, just some idle thoughts. Below is the transcript of the email if you are interested.

    @Jamiet
    This is regarding the support request <redacted> wherein you were not able to log in to the Windows Azure management portal with your Live ID. We are providing you with the summary, root cause analysis and information about the permanent fix:

    Incident Title: You were unable to access the Windows Azure Portal after the Microsoft Account to Azure Active Directory account migration.
    Service Impacted: Management Portal
    Incident Start Date and Time: 8/24/2012 4:30:00 PM
    Date and Time Service was Restored: 10/17/2012 12:00:00 AM

    Summary: Windows Azure performed a planned change from using the Microsoft account service (formerly Windows Live ID) to the Azure Active Directory (AAD) as its primary authentication mechanism on August 24th. This change was made to enable future innovation in the area of authentication – particularly for organizationally owned identities, identity federation, stronger authentication methods and compliance certification. While this migration was largely transparent to Windows Azure users, a small number of users whose sign-in names were part of a Windows Live Custom Domain were unable to login. This incompatibility was not discovered during the Quality Assurance testing phase prior to the migration.

    Customer Impact: Customers whose sign-in names were part of a Windows Live Custom Domain were unable to sign in to the Management Portal after ~4:00 p.m. PST on August 24th, 2012. We determined that the issue did impact at least 127 users in 98 of these Windows Live Custom Domains and had a maximum potential impact of 1,110 users in total.

    Root Cause: The root cause of the issue was an incompatibility in the AAD authentication service to handle logins from Microsoft accounts whose sign-in names were part of a Windows Live Custom Domain. This issue was not discovered during the Quality Assurance testing phase prior to the migration from Microsoft Account (MSA) to AAD.

    Mitigations: The issue was mitigated for the majority of affected users by 8:20 a.m. PST on August 25th, 2012 by running some internal scripts to correct many known Windows Live Custom Domains. The remaining affected domains fell into two categories:
    1. Windows Live Custom Domains that were not corrected by 8/25/2012. An additional 48 Windows Live Custom Domains were fixed in the weeks following the incident, within 2 business days after the AAD team received an escalation from product support regarding those accounts.
    2. Windows Live Custom Domains that were also provisioned in Office365. Some of the affected Windows Live Custom Domains had already been provisioned in AAD because their owners signed up for Office365, which is a service that also uses AAD. In these cases the Azure customers had to work around the issue by renaming their Microsoft Account or using a different Microsoft Account to administer their Azure subscription.

    Permanent Fix: The Azure Active Directory team permanently fixed the issue for all customers on 10/17/2012 in an upgraded release of the AAD service.

    Read the article

  • Advisor Webcast: Hyperion Planning: Migrating Business Rules to Calc Manager

    - by inowodwo
    As you may be aware, EPM 11.1.2.1 was the terminal release of Hyperion Business Rules (see Hyperion Business Rules Statement of Direction, Doc ID 1448421.1). This webcast aims to help you migrate from Business Rules to Calc Manager.

    Date: January 10, 2013 at 3:00 pm GMT (London) / 4:00 pm (Berlin, GMT+01:00) / 7:00 am Pacific / 8:00 am Mountain / 10:00 am Eastern

    Topics will include:
    - Calculation Manager in 11.1.2.2
    - Migration considerations
    - How to migrate the HBR rules from 11.1.2.1 to Calculation Manager 11.1.2.2
    - How to migrate the security of the Business Rules
    - How to approach troubleshooting and known issues with migration

    For registration details please go to Migrating Business Rules to Calc Manager (Doc ID 1506296.1). Alternatively, to view all upcoming webcasts go to Advisor Webcasts: Current Schedule and Archived Recordings [ID 740966.1] and choose Oracle Business Analytics from the drop-down menu.

    Read the article

  • Is Azure Compatible with JPEG XR?

    - by Shawn Eary
    I just put an F#/MVC app into a Windows Azure solution as a Web Role. Before migration, my JPEG XR (*.WDP) files were getting displayed on the client in IE9 without issue via my local and hosted sites. Now, after migration into Windows Azure, my JPEG XR files neither get displayed in my local Windows Azure compute emulator nor do they get displayed when they are deployed to http://*.cloudapp.net. Is there some sort of conflict between Windows Azure and (JPEG XR) *.wdp files? If so, what is the accepted best practice for overcoming this conflict?

    Read the article

  • EXADATA & GoldenGate - the perfect combination for thetrainline.com

    - by maria costanzo
    Exadata & GoldenGate: the perfect combination. thetrainline.com enhanced the customer experience, sustaining rapid search and booking times for hundreds of millions of journey requests per annum. The company used Oracle GoldenGate to migrate data from its legacy system to two Oracle Exadata Database Machine X2-2 HC Quarter Rack instances to reduce downtime, avoid risk of data loss, and eliminate the need for complex programming. "Oracle GoldenGate enabled us to complete the migration of three terabytes to Oracle Exadata within a single 30-minute system outage," East said. "Without Oracle GoldenGate, we would have required a 20-hour outage window to complete the migration, something that was completely unacceptable." Discover more at the following link.

    Read the article

  • Java.net Reborn

    - by Tori Wieldt
    Java.net, the home of Java community projects, has been re-launched with a new look and new tools for developers. The move from CollabNet to the Kenai infrastructure offers more flexibility for developers who want to host or contribute to community projects. Instead of the large, fixed infrastructure per project (for example, several mailing lists per project), Kenai's à la carte features allow users to take only what they need. "We will continue to have the great mix of blogs, forums, and editorial content as well as new tools on the project side, including Mercurial, Git, and JIRA for developers," Java.net Community Manager Sonya Barry explains. The migration was a huge effort. Over 1400 projects were migrated (and some 30 projects are left to go). A large part of the migration was a big cleanup of abandoned projects. With the high abandonment rate of open source projects, there was a lot to remove. The new java.net site is smaller, faster, and the percentage of good, current content is now much higher. Check it out at http://home.java.net/

    Read the article

  • Migrating Spring to Java EE 6 Article Series at OTN - Part 3

    - by arungupta
    The spring season is characterized by the migration of birds, whales, butterflies, frogs, and other animals, each for different reasons. If you use the Spring framework and are interested in migrating to a standards-based Java EE platform, for whatever reason, then we have a solution for you. David Heffelfinger, a renowned author and an ardent Java EE fan, has published the third part of the Spring to Java EE migration series at OTN. The article series takes a typical Spring application and shows how to migrate it to Java EE 6 using NetBeans. This new part builds upon part 1 and part 2 and also compares the generated WAR files and the lines of XML configuration in the two environments. There is an interesting discussion on "Why Java EE 6 over Spring?" as well.

    Read the article

  • PHP: Symfony 2.1 final released - dependency management with Composer, more efficient forms, and a faster Mailer

    Symfony2 has been a very community-driven project from the start (hundreds of bundles were available well before the first RCs of 2.0), a trend that keeps being confirmed: 250 contributors and 1,000 pull requests on GitHub for the first beta of Symfony 2.1! After the migration difficulties with symfony 1.x, the team has tried as much as possible to limit changes that could break backward compatibility; likewise, the refactoring of the forms module pushed the final 2.1 release to August, so as to concentrate the changes as much as possible and ensure that more and more code will not have to be modified when migrating from one version to the next. So don't hesitate to try migrating your applications to this beta,...

    Read the article

  • PHP: Symfony 2.1 final released - dependency management with Composer, faster forms and Mailer

    Symfony2 has been a very community-driven project from the start (hundreds of bundles were available well before the first RCs of 2.0), a trend that keeps being confirmed: 250 contributors and 1,000 pull requests on GitHub for the first beta of Symfony 2.1! After the migration difficulties with symfony 1.x, the team has tried as much as possible to limit changes that could break backward compatibility; likewise, the refactoring of the forms module pushed the final 2.1 release to August, so as to concentrate the changes as much as possible and ensure that more and more code will not have to be modified when migrating from one version to the next. So don't hesitate to try migrating your applications to this beta,...

    Read the article

  • First beta of Symfony 2.1 released: Composer, class autoloading, and adoption of the community's coding standards

    Symfony2 has been a very community-driven project from the start (hundreds of bundles were available well before the first RCs of 2.0), a trend that keeps being confirmed: 250 contributors and 1,000 pull requests on GitHub for the first beta of Symfony 2.1! After the migration difficulties with symfony 1.x, the team has tried as much as possible to limit changes that could break backward compatibility; likewise, the refactoring of the forms module pushed the final 2.1 release to August, so as to concentrate the changes as much as possible and ensure that more and more code will not have to be modified when migrating from one version to the next. So don't hesitate to try migrating your applications to this beta,...

    Read the article
