Search Results

Search found 2384 results on 96 pages for 'vb6 migration'.


  • Calculated Columns in Entity Framework Code First Migrations

    - by David Paquette
    I had a couple of people ask me about calculated properties / columns in Entity Framework this week. The question was: is there a way to specify a property in my C# class that is the result of some calculation involving 2 properties of the same class? For example, in my database I store a FirstName and a LastName column, and I would like a FullName property that is computed from the FirstName and LastName columns. My initial answer was:

        public string FullName
        {
            get { return string.Format("{0} {1}", FirstName, LastName); }
        }

    Of course, this works fine, but it does not give us the ability to write queries using the FullName property. For example, this query:

        var users = context.Users.Where(u => u.FullName.Contains("anan"));

    would result in the following NotSupportedException:

        The specified type member 'FullName' is not supported in LINQ to Entities. Only initializers, entity members, and entity navigation properties are supported.

    It turns out there is a way to support this type of behavior with Entity Framework Code First Migrations by making use of computed columns in SQL Server. While there is no native support for computed columns in Code First Migrations, we can manually configure our migration to use computed columns. Let's start by defining our C# classes and DbContext:

        public class UserProfile
        {
            public int Id { get; set; }

            public string FirstName { get; set; }
            public string LastName { get; set; }

            [DatabaseGenerated(DatabaseGeneratedOption.Computed)]
            public string FullName { get; private set; }
        }

        public class UserContext : DbContext
        {
            public DbSet<UserProfile> Users { get; set; }
        }

    The DatabaseGenerated attribute is needed on our FullName property. This is a hint to let Entity Framework Code First know that the database will be computing this property for us. Next, we need to run 2 commands in the Package Manager Console. First, run Enable-Migrations to enable Code First Migrations for the UserContext. Next, run Add-Migration Initial to create an initial migration. This will create a migration that creates the UserProfile table with 3 columns: FirstName, LastName, and FullName. This is where we need to make a small change. Instead of allowing Code First Migrations to create the FullName column, we will manually add that column as a computed column:

        public partial class Initial : DbMigration
        {
            public override void Up()
            {
                CreateTable(
                    "dbo.UserProfiles",
                    c => new
                        {
                            Id = c.Int(nullable: false, identity: true),
                            FirstName = c.String(),
                            LastName = c.String(),
                            //FullName = c.String(),
                        })
                    .PrimaryKey(t => t.Id);

                Sql("ALTER TABLE dbo.UserProfiles ADD FullName AS FirstName + ' ' + LastName");
            }

            public override void Down()
            {
                DropTable("dbo.UserProfiles");
            }
        }

    Finally, run the Update-Database command. Now we can query for Users using the FullName property, and that query will be executed on the database server. However, we encounter another potential problem: since the FullName property is calculated by the database, it will get out of sync on the object side as soon as we make a change to the FirstName or LastName property.
    Luckily, we can have the best of both worlds here by also adding the calculation back to the getter on the FullName property:

        [DatabaseGenerated(DatabaseGeneratedOption.Computed)]
        public string FullName
        {
            get { return FirstName + " " + LastName; }
            private set
            {
                //Just need this here to trick EF
            }
        }

    Now we can query for Users using the FullName property, and we also won't need to worry about the FullName property being out of sync with the FirstName and LastName properties. When we run this code:

        using (UserContext context = new UserContext())
        {
            UserProfile userProfile = new UserProfile { FirstName = "Chanandler", LastName = "Bong" };

            Console.WriteLine("Before saving: " + userProfile.FullName);

            context.Users.Add(userProfile);
            context.SaveChanges();

            Console.WriteLine("After saving: " + userProfile.FullName);

            UserProfile chanandler = context.Users.First(u => u.FullName == "Chanandler Bong");
            Console.WriteLine("After reading: " + chanandler.FullName);

            chanandler.FirstName = "Chandler";
            chanandler.LastName = "Bing";

            Console.WriteLine("After changing: " + chanandler.FullName);
        }

    we get the expected output. It took a bit of work, but finally Chandler's TV Guide can be delivered to the right person. The obvious downside to this implementation is that the FullName calculation is duplicated in the database and in the UserProfile class. This sample was written using Visual Studio 2012 and Entity Framework 5. Download the source code here.
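
    As a quick sanity check, here is a minimal sketch (my own addition, reusing the UserContext and UserProfile classes defined above) of the query that originally threw the NotSupportedException; with the computed column in place it should be translated to SQL and evaluated on the database server:

        using System;
        using System.Linq;

        class FullNameQuerySample
        {
            static void Main()
            {
                using (UserContext context = new UserContext())
                {
                    // Previously this predicate threw NotSupportedException; with the
                    // computed FullName column it translates into SQL and is evaluated
                    // by SQL Server rather than in memory.
                    var users = context.Users
                        .Where(u => u.FullName.Contains("anan"))
                        .ToList();

                    foreach (UserProfile user in users)
                    {
                        Console.WriteLine(user.FullName);
                    }
                }
            }
        }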

    Read the article

  • Oracle announces Brand New Tuxedo 11g Release

    - by ruma.sanyal
    Today Oracle introduced two brand new products within the Tuxedo product line of its application grid portfolio. Oracle Tuxedo Application Runtime for CICS and Batch and Oracle Application Rehosting Workbench provide the ability to automate rehosting of mainframe online and batch applications to open systems running under Oracle Tuxedo. Oracle Application Rehosting Workbench automates adaptation of COBOL programs, JCL conversion for batch applications, and migration of VSAM files and DB2 data schemas. Migration cost, risk, and project length and complexity are dramatically reduced, with over 90% of application assets re-hosted on open systems 'as-is'. Impact on the organization is minimized: users are protected from change by support for 3270 green screens, and developers continue to use familiar CICS APIs, batch functions, and common utilities. Other major features of this release are as follows:

    - Hotpluggability through introduction of the Oracle Tuxedo JCA Adapter
    - Metadata-driven application development using the SCA programming model
    - Support for the Python and Ruby languages to develop business services
    - Improved scalability and availability, TSAM enhancements

    Register for a live webinar with Oracle Fusion Middleware Senior VP Hasan Rizvi. Read the press release. Find more details on these exciting new products.

    Read the article

  • Oracle Database Machine: customer case at OOW2010

    - by rene.kundersma
    I am proud to announce that at OpenWorld 2010 I will be co-presenting, together with TUI, the customer case on their Database Machine implementation. Our session number is S314935. The session will cover the business case, the choices made for the setup, how we did the migration to v1 and then to v2, and how we implemented the backup/restore and disaster recovery solutions. It will be a very interesting case for everyone interested in customer implementations of the DBM! Hope to see you there.

    Rene Kundersma
    Technical Architect, Oracle Technology Services

    Read the article

  • Sql Server Data Tools & Entity Framework - is there any synergy here?

    - by Benjol
    Coming out of a project using Linq2Sql, I suspect that the next (bigger) one might push me into the arms of Entity Framework. I've done some reading up on the subject, but what I haven't managed to find is a coherent story about how SQL Server Data Tools and Entity Framework should/could/might be used together. Were they conceived totally separately, so that using them together is rubbing them up the wrong way? Are they somehow totally orthogonal and I'm missing the point? Some reasons why I think I might want both: SSDT is great for having 'compiled' (checked) and easily versionable SQL and schema, but the SSDT 'migration/update' story is not convincing (to me): "update anything" works OK for schema, but there's no way (AFAIK) that it can ever work for data. On the other hand, I haven't tried EF migrations enough to know if they present similar problems, but the Up/Down bits look quite handy.
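
    For reference, the Up/Down bits mentioned above are just method overrides on an EF Code First DbMigration class. Below is a minimal sketch; the migration name, table, and column are hypothetical and chosen purely for illustration:

        public partial class AddCustomerEmail : DbMigration
        {
            public override void Up()
            {
                // Applied when migrating the database forward.
                // "dbo.Customers" and "Email" are made-up names for this sketch.
                AddColumn("dbo.Customers", "Email", c => c.String(maxLength: 256));
            }

            public override void Down()
            {
                // Applied when rolling the migration back.
                DropColumn("dbo.Customers", "Email");
            }
        }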

    Read the article

  • Forms&Reports upgrade characterset issues

    - by Lukasz Romaszewski
    Hello! This quick post is based on my findings during recent IMC workshops, especially those related to upgrading Forms 6i/9i/10g applications to the Forms 11g platform. The upgrade process itself is pretty straightforward and basically requires recompiling your Forms application with the latest version of the frmcmp tool. In some cases though, especially when you migrate from Forms 6i, which is a client-server architecture, to a 3-tier web solution (Forms 11g), you need to rewrite some parts of your code to make it run on the new platform. The things you need to change range from reimplementing (using the webutil library) typical client-side functionality like local IO operations, access to WinAPI, invoking DLLs, etc., to changing deprecated or obsolete APIs like RUN_PRODUCT to RUN_REPORT_OBJECT. To automate those changes Oracle provides a complete Java API (JDAPI) which allows you to manipulate the code and structure of your modules. To make it even easier we can use the Forms Migration Assistant tool (written in Java using JDAPI), which is able to replace all occurrences of old API entries with their 11g equivalents or warn you when the replacement is not possible. You can also add your own replacement definitions in the search_replace.properties file. But you need to be aware of some issues that can be encountered using this tool. First of all, if you are using some hard-coded text inside your triggers, you may notice that after processing them with the Migration Assistant tool the national characters may be lost. This is due to the fact that you need to explicitly tell the Java application (which the MA really is) what characterset it should use to read those texts properly. In order to do that, just add the following line to the script calling the MA:

        export JAVA_TOOL_OPTIONS=-Dfile.encoding=<JAVA_ISO_ENCODING>

    where the particular encoding must match the NLS_LANG in your Forms Builder environment (for example, for the Polish characterset you need to use ISO-8859-2). The second issue you can encounter related to national charactersets is a lack of national symbols in your reports after migration. This can be solved by adding an appropriate NLS_LANG entry in your reports environment. Sometimes, instead of a particular characterset, you see "Greek characters" in your reports. This is just the default font used by the reports engine instead of the one defined in your report. To solve it you must copy the font definitions from your old environment (e.g. a Forms 10g installation) to the appropriate directory in the new installation (usually the AFM folder). For more information about this and other issues please refer to https://support.oracle.com/CSP/main/article?cmd=show&type=NOT&doctype=BULLETIN&id=1297012.1 at the My Oracle Support site. That's all for today, stay tuned for more posts on this topic! Lukasz

    Read the article

  • Migrating from GlassFish 2.x to 3.1.x

    - by alexismp
    With clustering now available in GlassFish since version 3.1 (our Spring 2011 release), a good number of folks have been looking at migrating their existing GlassFish 2.x-based clustered environments to a more recent version to take advantage of Java EE 6, our modular design, improved SSH-based provisioning and enhanced HA performance. The GlassFish documentation set is quite extensive and has a dedicated Upgrade Guide. It obviously lists a number of small changes such as file layout on disk (mostly due to modularity), some option changes (grizzly, shoal), the removal of node agents (using SSH instead), new JPA default provider name, etc... There is even a migration tool (glassfish/bin/asupgrade) to upgrade existing domains. But really the only thing you need to know is that each module in GlassFish 3 and beyond is responsible for doing its part of the upgrade job which means that the migration is as simple as copying a 2.x domain directory to the domains/ directory and starting the server with asadmin start-domain --upgrade. Binary-compatible products eligible for such upgrades include Sun Java System Application Server 9.1 Update 2 as well as version 2.1 and 2.1.1 of Sun GlassFish Enterprise Server.

    Read the article

  • VSDB to SSDT Series: Introduction

    - by Etienne Giust
    At the office, we extensively use VS2010 SQL Server 2008 Database Projects and SQL Server 2008 Server Projects in our Visual Studio 2010 solutions. With Visual Studio 2012, those types of projects are replaced by the SQL Server Database Project using the SSDT (SQL Server Data Tools) technology. I started investigating the shift from Visual Studio 2010 to Visual Studio 2012 and specifically what needs to be done concerning those database projects in terms of painless migration, continuous integration and standalone deployment. I will write up my findings in a series of 4 short articles:

    - Part 1 will be about the database project migration process and the cleaning up that ensues
    - Part 2 will be about creating SQL Server 2008 Server Project equivalents with the new SSDT project type
    - Part 3 will introduce a replacement for the vsdbcmd.exe command used for deployment in our continuous integration process
    - Part 4 will explain how to create standalone packages of SSDT projects for deployment on non-accessible servers (such as a production server)

    Read the article

  • links for 2011-02-15

    - by Bob Rhubart
    Why the hybrid cloud model is the best approach | Cloud Computing - InfoWorld
    Although some cloud providers look at the hybrid model as blasphemy, there are strong reasons for them to adopt it, says David Linthicum. (tags: davidlinthicum cloud)

    Exadata Part V: Monitoring with Database Control | The Oracle Instructor
    Uwe Hesse shows how "we can use Oracle Enterprise Manager Database Control to monitor an Exadata Database Machine, especially the Storage Servers (Cells)." (tags: oracle exadata)

    ATG Live Webcast Feb. 24th: Using the EBS 12 SOA Adapter (Oracle E-Business Suite Technology)
    "This live one-hour webcast will offer a review of the Service Oriented Architecture (SOA) capabilities within E-Business Suite R12, focusing on the E-Business Suite Adapter." (tags: oracle soa)

    Oracle Forms Migration to ADF - webinar from Oracle partner PITSS (Oracle Fusion Middleware for the financial sector)
    "Join Oracle's Grant Ronald and PITSS to see a software architecture comparison of Oracle Forms and ADF and a live step-by-step presentation on how to achieve a successful migration." (tags: oracle adf)

    Read the article

  • Reading the tea leaves from Windows Azure support

    - by jamiet
    A few idle thoughts… Three months ago I had an issue with Windows Azure where I was unable to log in to the management portal. At the time I contacted Azure support, the issue was soon resolved, and I thought no more about it. Until today, that is, when I received an email from Azure support providing a detailed analysis of the root cause, the fix, and moreover precise details about when and where things occurred. The email itself is interesting and I have included the entirety of it below. A few things were interesting to me:

    The level of detail and the diligence in investigating and reporting the issue I found really rather impressive. They even outline the number of users that were affected (127, in case you can't be bothered reading). Compare this to the quite pathetic support that another division within Microsoft, Skype, provided to Greg Low recently: Skype support and dead parrot sketches.

    This line: "Windows Azure performed a planned change from using the Microsoft account service (formerly Windows Live ID) to the Azure Active Directory (AAD) as its primary authentication mechanism on August 24th. This change was made to enable future innovation in the area of authentication – particularly for organizationally owned identities, identity federation, stronger authentication methods and compliance certification." I also found to be particularly interesting. I have long thought that one of the reasons Microsoft has proved to be such a money-making machine in the enterprise is because they provide the infrastructure and then upsell on top of that – and nothing is more infrastructural than Active Directory. It has struck me of late that they are trying to make the same play in the cloud by tying all their services into Azure Active Directory, and here we see a clear indication of that in making AAD the authentication mechanism for anyone using Windows Azure. I get the feeling that we're going to hear much, much more about AAD in the future; isn't it about time we could log on to Windows Azure SQL Database (formerly SQL Azure) without resorting to SQL authentication, for example? And why do Microsoft have two identity providers – Microsoft Account (aka Windows Live ID) and AAD – isn't it about time those things were combined? As I said, just some idle thoughts. Below is the transcript of the email if you are interested.

    @Jamiet

    This is regarding the support request <redacted> wherein you were not able to log in to the Windows Azure management portal with your Live ID. We are providing you with the summary, root cause analysis and information about the permanent fix.

    Incident Title: You were unable to access the Windows Azure Portal after the Microsoft Account to Azure Active Directory account migration.
    Service Impacted: Management Portal
    Incident Start Date and Time: 8/24/2012 4:30:00 PM
    Date and Time Service was Restored: 10/17/2012 12:00:00 AM

    Summary: Windows Azure performed a planned change from using the Microsoft account service (formerly Windows Live ID) to the Azure Active Directory (AAD) as its primary authentication mechanism on August 24th. This change was made to enable future innovation in the area of authentication – particularly for organizationally owned identities, identity federation, stronger authentication methods and compliance certification. While this migration was largely transparent to Windows Azure users, a small number of users whose sign-in names were part of a Windows Live Custom Domain were unable to log in. This incompatibility was not discovered during the Quality Assurance testing phase prior to the migration.

    Customer Impact: Customers whose sign-in names were part of a Windows Live Custom Domain were unable to sign in to the Management Portal after ~4:00 p.m. PST on August 24th, 2012. We determined that the issue did impact at least 127 users in 98 of these Windows Live Custom Domains and had a maximum potential impact of 1,110 users in total.

    Root Cause: The root cause of the issue was an incompatibility in the AAD authentication service in handling logins from Microsoft accounts whose sign-in names were part of a Windows Live Custom Domain. This issue was not discovered during the Quality Assurance testing phase prior to the migration from Microsoft Account (MSA) to AAD.

    Mitigations: The issue was mitigated for the majority of affected users by 8:20 a.m. PST on August 25th, 2012 by running some internal scripts to correct many known Windows Live Custom Domains. The remaining affected domains fell into two categories:

    - Windows Live Custom Domains that were not corrected by 8/25/2012. An additional 48 Windows Live Custom Domains were fixed in the weeks following the incident, within 2 business days after the AAD team received an escalation from product support regarding those accounts.
    - Windows Live Custom Domains that were also provisioned in Office 365. Some of the affected Windows Live Custom Domains had already been provisioned in AAD because their owners signed up for Office 365, which is a service that also uses AAD. In these cases the Azure customers had to work around the issue by renaming their Microsoft Account or using a different Microsoft Account to administer their Azure subscription.

    Permanent Fix: The Azure Active Directory team permanently fixed the issue for all customers on 10/17/2012 in an upgraded release of the AAD service.

    Read the article

  • Advisor Webcast: Hyperion Planning: Migrating Business Rules to Calc Manager

    - by inowodwo
    As you may be aware, EPM 11.1.2.1 was the terminal release of Hyperion Business Rules (see the Hyperion Business Rules Statement of Direction, Doc ID 1448421.1). This webcast aims to help you migrate from Business Rules to Calc Manager.

    Date: January 10, 2013 at 3:00 pm GMT (London) / 4:00 pm Europe Time (Berlin, GMT+01:00) / 7:00 am Pacific / 8:00 am Mountain / 10:00 am Eastern

    Topics will include:

    - Calculation Manager in 11.1.2.2
    - Migration considerations
    - How to migrate the HBR rules from 11.1.2.1 to Calculation Manager 11.1.2.2
    - How to migrate the security of the Business Rules
    - How to approach troubleshooting and known issues with migration

    For registration details please go to Migrating Business Rules to Calc Manager (Doc ID 1506296.1). Alternatively, to view all upcoming webcasts go to Advisor Webcasts: Current Schedule and Archived Recordings (Doc ID 740966.1) and choose Oracle Business Analytics from the drop-down menu.

    Read the article

  • EXADATA & GoldenGate - the perfect combination for thetrainline.com

    - by maria costanzo
    thetrainline.com enhanced the customer experience, sustaining rapid search and booking times for hundreds of millions of journey requests per annum. thetrainline.com used Oracle GoldenGate to migrate data from its legacy system to two Oracle Exadata Database Machine X2-2 HC Quarter Rack instances to reduce downtime, avoid the risk of data loss, and eliminate the need for complex programming. "Oracle GoldenGate enabled us to complete the migration of three terabytes to Oracle Exadata within a single 30-minute system outage," East said. "Without Oracle GoldenGate, we would have required a 20-hour outage window to complete the migration, something that was completely unacceptable." Discover more at the following link.

    Read the article

  • Is Azure Compatible with JPEG XR?

    - by Shawn Eary
    I just put an F#/MVC app into a Windows Azure solution as a Web Role. Before migration, my JPEG XR (*.wdp) files were displayed on the client in IE9 without issue, via both my local and hosted sites. Now, after migration into Windows Azure, my JPEG XR files are neither displayed in my local Windows Azure compute emulator nor when they are deployed to http://*.cloudapp.net. Is there some sort of conflict between Windows Azure and (JPEG XR) *.wdp files? If so, what is the accepted best practice for overcoming this conflict?
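
    A hedged guess at the cause (an assumption, not something confirmed in the question): IIS serves a Web Role's static content and returns 404 for file extensions without a registered MIME type, and .wdp is not in the default list. If that is what is happening, a web.config fragment along these lines, added to the Web Role, would be worth trying:

        <!-- Sketch only: assumes the missing MIME mapping for JPEG XR is the culprit. -->
        <configuration>
          <system.webServer>
            <staticContent>
              <mimeMap fileExtension=".wdp" mimeType="image/vnd.ms-photo" />
              <mimeMap fileExtension=".jxr" mimeType="image/vnd.ms-photo" />
            </staticContent>
          </system.webServer>
        </configuration>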

    Read the article

  • Java.net Reborn

    - by Tori Wieldt
    Java.net, the home of Java community projects, has been re-launched with a new look and new tools for developers. The move from CollabNet to the Kenai infrastructure offers more flexibility for developers who want to host or contribute to community projects. Instead of the large, fixed infrastructure per project (for example, several mailing lists per project), Kenai's à la carte features allow users to take only what they need. "We will continue to have the great mix of blogs, forums, and editorial content as well as new tools on the project side, including Mercurial, Git, and JIRA for developers," Java.net Community Manager Sonya Barry explains. The migration was a huge effort. Over 1,400 projects were migrated (and some 30 projects are left to go). A large part of the migration was a big cleanup of abandoned projects. With the high abandonment rate of open source projects, there was a lot to remove. The new java.net site is smaller and faster, and the percentage of good, current content is now much higher. Check it out at http://home.java.net/

    Read the article

  • Migrating Spring to Java EE 6 Article Series at OTN - Part 3

    - by arungupta
    The spring season is characterized by the migration of birds, whales, butterflies, frogs, and other animals, for different reasons. If you use the Spring framework and are interested in migrating to a standards-based Java EE platform, for whatever reason, then we have a solution for you. David Heffelfinger, a renowned author and an ardent Java EE fan, has published the third part of his Spring to Java EE migration series at OTN. The article series takes a typical Spring application and shows how to migrate it to Java EE 6 using NetBeans. This new part builds upon part 1 and part 2 and also compares the generated WAR files and the LoC of XML configuration in the two environments. There is an interesting discussion on "Why Java EE 6 over Spring?" as well.

    Read the article

  • PHP: Symfony 2.1 released in its final version, with dependency management via Composer, more efficient forms and a faster Mailer

    Symfony2 has been a very community-driven project from the start (hundreds of bundles were available well before the first RCs of 2.0), and that trend keeps being confirmed: 250 contributors and 1,000 pull requests on GitHub for the first beta of Symfony 2.1! After the migration difficulties with symfony 1.x, the team has tried as much as possible to limit changes that could break backward compatibility; likewise, the refactoring of the form module means the final 2.1 release should ship in August, so as to concentrate the changes as much as possible and ensure that more and more code will not have to be modified when migrating from one version to the next. So don't hesitate to try migrating your applications to this beta...

    Read the article

  • PHP: Symfony 2.1 released in its final version, with dependency management via Composer and more efficient forms and Mailer

    Symfony2 has been a very community-driven project from the start (hundreds of bundles were available well before the first RCs of 2.0), and that trend keeps being confirmed: 250 contributors and 1,000 pull requests on GitHub for the first beta of Symfony 2.1! After the migration difficulties with symfony 1.x, the team has tried as much as possible to limit changes that could break backward compatibility; likewise, the refactoring of the form module means the final 2.1 release should ship in August, so as to concentrate the changes as much as possible and ensure that more and more code will not have to be modified when migrating from one version to the next. So don't hesitate to try migrating your applications to this beta...

    Read the article

  • First beta of Symfony 2.1 released: Composer, automatic class loading, and adoption of the community's coding standards

    Symfony2 has been a very community-driven project from the start (hundreds of bundles were available well before the first RCs of 2.0), and that trend keeps being confirmed: 250 contributors and 1,000 pull requests on GitHub for the first beta of Symfony 2.1! After the migration difficulties with symfony 1.x, the team has tried as much as possible to limit changes that could break backward compatibility; likewise, the refactoring of the form module means the final 2.1 release should ship in August, so as to concentrate the changes as much as possible and ensure that more and more code will not have to be modified when migrating from one version to the next. So don't hesitate to try migrating your applications to this beta...

    Read the article

  • 'undefined method init for Mysql:Class'

    - by sscirrus
    I've been having problems with a MySQL Server installation that got messed up after a power outage.

    Configuration:
    - Intel i5 Mac running OS X 10.6.5
    - Ruby 1.9.2 installed
    - Rails 3.0.1 installed
    - MySQL Server (finally) installed and running

    I completely reinstalled MySQL, which deleted the local development/test/production databases. So, I have run create database development; in MySQL to get the dev database ready for a migration.

    Current goal: run rake db:migrate to get my databases back again. (I cannot currently access my databases or MySQL at all from Rails.)

    Error: using the gem 'mysql', '2.8.1' and running rake db:migrate, I get the error:

        rake aborted!
        undefined method 'init' for Mysql:Class

    Stack trace:

        /Users/sscirrus/.rvm/gems/ruby-1.9.2-p0/gems/activerecord-3.0.1/lib/active_record/connection_adapters/mysql_adapter.rb:30:in 'mysql_connection'
        /Users/sscirrus/.rvm/gems/ruby-1.9.2-p0/gems/activerecord-3.0.1/lib/active_record/connection_adapters/abstract/connection_pool.rb:230:in 'new_connection'
        /Users/sscirrus/.rvm/gems/ruby-1.9.2-p0/gems/activerecord-3.0.1/lib/active_record/connection_adapters/abstract/connection_pool.rb:238:in 'checkout_new_connection'
        /Users/sscirrus/.rvm/gems/ruby-1.9.2-p0/gems/activerecord-3.0.1/lib/active_record/connection_adapters/abstract/connection_pool.rb:194:in 'block (2 levels) in checkout'
        /Users/sscirrus/.rvm/gems/ruby-1.9.2-p0/gems/activerecord-3.0.1/lib/active_record/connection_adapters/abstract/connection_pool.rb:190:in 'loop'
        /Users/sscirrus/.rvm/gems/ruby-1.9.2-p0/gems/activerecord-3.0.1/lib/active_record/connection_adapters/abstract/connection_pool.rb:190:in 'block in checkout'
        /Users/sscirrus/.rvm/rubies/ruby-1.9.2-p0/lib/ruby/1.9.1/monitor.rb:201:in 'mon_synchronize'
        /Users/sscirrus/.rvm/gems/ruby-1.9.2-p0/gems/activerecord-3.0.1/lib/active_record/connection_adapters/abstract/connection_pool.rb:189:in 'checkout'
        /Users/sscirrus/.rvm/gems/ruby-1.9.2-p0/gems/activerecord-3.0.1/lib/active_record/connection_adapters/abstract/connection_pool.rb:96:in 'connection'
        /Users/sscirrus/.rvm/gems/ruby-1.9.2-p0/gems/activerecord-3.0.1/lib/active_record/connection_adapters/abstract/connection_pool.rb:318:in 'retrieve_connection'
        /Users/sscirrus/.rvm/gems/ruby-1.9.2-p0/gems/activerecord-3.0.1/lib/active_record/connection_adapters/abstract/connection_specification.rb:97:in 'retrieve_connection'
        /Users/sscirrus/.rvm/gems/ruby-1.9.2-p0/gems/activerecord-3.0.1/lib/active_record/connection_adapters/abstract/connection_specification.rb:89:in 'connection'
        /Users/sscirrus/.rvm/gems/ruby-1.9.2-p0/gems/activerecord-3.0.1/lib/active_record/migration.rb:486:in 'initialize'
        /Users/sscirrus/.rvm/gems/ruby-1.9.2-p0/gems/activerecord-3.0.1/lib/active_record/migration.rb:433:in 'new'
        /Users/sscirrus/.rvm/gems/ruby-1.9.2-p0/gems/activerecord-3.0.1/lib/active_record/migration.rb:433:in 'up'
        /Users/sscirrus/.rvm/gems/ruby-1.9.2-p0/gems/activerecord-3.0.1/lib/active_record/migration.rb:415:in 'migrate'
        /Users/sscirrus/.rvm/gems/ruby-1.9.2-p0/gems/activerecord-3.0.1/lib/active_record/railties/databases.rake:142:in 'block (2 levels) in <top (required)>'
        /Users/sscirrus/.rvm/gems/ruby-1.9.2-p0/gems/rake-0.8.7/lib/rake.rb:636:in 'call'
        /Users/sscirrus/.rvm/gems/ruby-1.9.2-p0/gems/rake-0.8.7/lib/rake.rb:636:in 'block in execute'
        /Users/sscirrus/.rvm/gems/ruby-1.9.2-p0/gems/rake-0.8.7/lib/rake.rb:631:in 'each'
        /Users/sscirrus/.rvm/gems/ruby-1.9.2-p0/gems/rake-0.8.7/lib/rake.rb:631:in 'execute'
        /Users/sscirrus/.rvm/gems/ruby-1.9.2-p0/gems/rake-0.8.7/lib/rake.rb:597:in 'block in invoke_with_call_chain'
        /Users/sscirrus/.rvm/rubies/ruby-1.9.2-p0/lib/ruby/1.9.1/monitor.rb:201:in 'mon_synchronize'
        /Users/sscirrus/.rvm/gems/ruby-1.9.2-p0/gems/rake-0.8.7/lib/rake.rb:590:in 'invoke_with_call_chain'
        /Users/sscirrus/.rvm/gems/ruby-1.9.2-p0/gems/rake-0.8.7/lib/rake.rb:583:in 'invoke'
        /Users/sscirrus/.rvm/gems/ruby-1.9.2-p0/gems/rake-0.8.7/lib/rake.rb:2051:in 'invoke_task'
        /Users/sscirrus/.rvm/gems/ruby-1.9.2-p0/gems/rake-0.8.7/lib/rake.rb:2029:in 'block (2 levels) in top_level'
        /Users/sscirrus/.rvm/gems/ruby-1.9.2-p0/gems/rake-0.8.7/lib/rake.rb:2029:in 'each'
        /Users/sscirrus/.rvm/gems/ruby-1.9.2-p0/gems/rake-0.8.7/lib/rake.rb:2029:in 'block in top_level'
        /Users/sscirrus/.rvm/gems/ruby-1.9.2-p0/gems/rake-0.8.7/lib/rake.rb:2068:in 'standard_exception_handling'
        /Users/sscirrus/.rvm/gems/ruby-1.9.2-p0/gems/rake-0.8.7/lib/rake.rb:2023:in 'top_level'
        /Users/sscirrus/.rvm/gems/ruby-1.9.2-p0/gems/rake-0.8.7/lib/rake.rb:2001:in 'block in run'
        /Users/sscirrus/.rvm/gems/ruby-1.9.2-p0/gems/rake-0.8.7/lib/rake.rb:2068:in 'standard_exception_handling'
        /Users/sscirrus/.rvm/gems/ruby-1.9.2-p0/gems/rake-0.8.7/lib/rake.rb:1998:in 'run'
        /Users/sscirrus/.rvm/gems/ruby-1.9.2-p0/gems/rake-0.8.7/bin/rake:31:in '<top (required)>'
        /Users/sscirrus/.rvm/gems/ruby-1.9.2-p0/bin/rake:19:in 'load'
        /Users/sscirrus/.rvm/gems/ruby-1.9.2-p0/bin/rake:19:in '<main>'

    Read the article

  • How to configure Hyper-V failover cluster to live migrate when dynamic memory runs out?

    - by Matt Johnson
    Apologies in advance that this is not a direct programming question, but I have a feeling that the solution involves custom PowerShell scripts (maybe), so this is as good a place to ask as any. I maintain a website that has a large Hyper-V cluster for SQL Servers. We are using Windows 2008 R2 SP1 and the new "dynamic memory" feature. I've already reviewed the Best Practices Guide and implemented its suggested configuration. Everything works well, except that when SQL demand increases memory pressure to expand to more memory than is available on the physical machine, the memory status goes into the "Warning" state and stays there. I assume the hypervisor is using a swap file on the host to fulfill the memory requirement, thus slowing the virtual machine down. When this happens, there are plenty of other nodes in the cluster that have available resources. I can live-migrate the virtual server over there and everything works, and the warnings go away. Now, how can I automate this? I see no menu options in either Hyper-V or the Failover Cluster Manager for performing a migration or shutdown when dynamic memory goes into the warning state. Any ideas about how to script this, or monitor it and invoke the action directly, would be helpful. If the solution involves coding, PowerShell would be ideal, but I could envision this as a .NET service that monitors for this state and kicks off the migration request. I just don't know what objects are involved in doing the monitoring or kicking off the live migration. Thanks in advance.

    Read the article

  • How to migrate a running KVM (with full disk copy) to another node?

    - by klipz
    I'm doing tests on KVM, and I'd like to see if I can do a hot migration, meaning the virtual machine won't stop running during the migration (but a few seconds of freeze is OK). I use a small cluster for my test: kvm1, kvm2, and kvmnfs. kvm1 and kvm2 run the virtual machines; kvmnfs is an NFS server, and it's mounted on /KVM on both kvm1 and kvm2. To migrate a VM (only RAM, in fact) from kvm1 to kvm2, I run the same kvm command on kvm2 (with -incoming tcp:0:4444) as on kvm1, then I use "migrate -d tcp:kvm2:4444": it works great, since the VM file is common to both machines. Now, I want to make a full migration (RAM + disk) of a local VM file (no more NFS) from kvm1 to kvm2. I tried to create an empty file with touch on kvm2 and use the same kvm command line + the "-incoming ...". Then on kvm1 I use "migrate -d tcp:kvm2:4444": it copies everything, then... the VM fails (any disk I/O gives an I/O error)! And my VM file on kvm2, the one I created with touch, still has a size of 0 bytes. What am I doing wrong? What is the exact command to use on kvm2? And what is the command to launch, in the monitoring mode, on kvm1?

    Read the article

  • Opening an existing process

    - by Grasper
    I am using Eclipse in Linux through a remote connection (xrdp). My internet got disconnected, so I got disconnected from the server while Eclipse was running. Now that I have logged in again, when I run the "top" command I can see that Eclipse is running and still under my user name. Is there some way I can bring that process back into my view (I do not want to kill it because I am in the middle of checking in a large swath of code)? It doesn't show up on the bottom panel after I logged in again. Here is the "top" output:

        /home/mclouti% top
        top - 08:32:31 up 43 days, 13:06, 29 users, load average: 0.56, 0.79, 0.82
        Tasks: 447 total, 1 running, 446 sleeping, 0 stopped, 0 zombie
        Cpu(s): 6.0%us, 0.7%sy, 0.0%ni, 92.1%id, 1.1%wa, 0.1%hi, 0.1%si, 0.0%st
        Mem:  3107364k total, 2975852k used,  131512k free,   35756k buffers
        Swap: 2031608k total,   59860k used, 1971748k free,  817816k cached

          PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND
        13415 mclouti   15   0  964m 333m  31m S 21.2 11.0  83:12.96 eclipse
        16040 mclouti   15   0  2608 1348  888 R  0.7  0.0   0:00.12 top
        31395 mclouti   15   0 29072  20m 8524 S  0.7  0.7 611:08.08 Xvnc
         2583 root      20   0  898m 2652 1056 S  0.3  0.1 139:26.82 automount
        28990 postgres  15   0 13564  868  304 S  0.3  0.0  26:33.36 postgres
        28995 postgres  16   0 13808 1248  300 S  0.3  0.0   6:54.95 postgres
        31440 mclouti   15   0  3072 1592 1036 S  0.3  0.1   6:01.54 gam_server
            1 root      15   0  2072  524  496 S  0.0  0.0   0:03.00 init
            2 root      RT  -5     0    0    0 S  0.0  0.0   0:04.53 migration/0
            3 root      34  19     0    0    0 S  0.0  0.0   0:00.04 ksoftirqd/0
            4 root      RT  -5     0    0    0 S  0.0  0.0   0:00.00 watchdog/0
            5 root      RT  -5     0    0    0 S  0.0  0.0   0:01.72 migration/1
            6 root      34  19     0    0    0 S  0.0  0.0   0:00.07 ksoftirqd/1
            7 root      RT  -5     0    0    0 S  0.0  0.0   0:00.00 watchdog/1
            8 root      RT  -5     0    0    0 S  0.0  0.0   0:04.33 migration/2
            9 root      34  19     0    0    0 S  0.0  0.0   0:00.05 ksoftirqd/2

    Read the article

  • 0% CPU in top for all processes, but load average > 1

    - by chrisdew
    On two different servers (with Ubuntu 12.04 LTS AMD64) I have seen the following behaviour:

        top - 10:50:05 up 305 days, 21:17, 1 user, load average: 1.94, 2.52, 2.97
        Tasks: 141 total, 2 running, 139 sleeping, 0 stopped, 0 zombie
        Cpu(s): 41.5%us, 6.5%sy, 0.0%ni, 51.8%id, 0.0%wa, 0.2%hi, 0.1%si, 0.0%st
        Mem:   8178432k total, 5753740k used, 2424692k free,  159480k buffers
        Swap: 15625208k total,       0k used, 15625208k free, 4905292k cached

          PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND
            1 root      20   0 23928 2072 1216 S    0  0.0   0:56.42 init
            2 root      20   0     0    0    0 S    0  0.0   0:00.01 kthreadd
            3 root      RT   0     0    0    0 S    0  0.0   0:01.23 migration/0
            4 root      20   0     0    0    0 S    0  0.0   2:39.82 ksoftirqd/0
            5 root      RT   0     0    0    0 S    0  0.0   0:00.00 watchdog/0
            6 root      RT   0     0    0    0 S    0  0.0   0:02.99 migration/1
            7 root      20   0     0    0    0 S    0  0.0   2:32.15 ksoftirqd/1
            8 root      RT   0     0    0    0 S    0  0.0   0:00.00 watchdog/1
            9 root      RT   0     0    0    0 S    0  0.0   0:11.67 migration/2
           10 root      20   0     0    0    0 S    0  0.0  29:00.34 ksoftirqd/2

    The server is working fine, but top shows all processes as using 0% CPU. A reboot fixed this on an earlier machine, but I haven't yet tried it on this one. I have tried top several times, and so am sure that I haven't accidentally pressed '<' or '>' to sort by a different column. Sorting the process list by each of the available columns still shows 0% CPU for all displayed processes. What is going on? Is this a kernel bug?

    Update: If I use top -p <PID> for a known, busy process, top still displays 0% CPU for that process.

    Read the article

  • RPC Server Unavailable on Hyper-V cluster when moving resources after the host adapter has failed

    - by Doug Luxem
    On a Windows 2008 R2 SP1 cluster running Hyper-V, one node (Node A) lost network connectivity on its primary host interface. The interface was rapidly flapping up and down, and this was later determined to be caused by a faulty switch port. As this was a clustered server, the host interface was not fault tolerant (seeing as how the whole server was fault tolerant), so connectivity to the host was going up and down. The Hyper-V guests were completely unaffected by the network outage as they used a dedicated trunk on the server, separate from the host interface. Additionally, the dedicated interfaces for the cluster and live migration networks were fine. In order to diagnose the server, I tried to move all resources (Hyper-V guests) to other nodes through Failover Cluster Manager. These moves failed with the error "RPC Server Unavailable". The only way to move resources was by shutting down the guests, stopping the cluster service on Node A, allowing other nodes to take ownership of the resources, and restarting the guests. A few other notes: all nodes have Client for MS Networks and File & Printer Sharing enabled on the cluster and LM networks. Node A was accessible over the cluster and LM networks from other nodes (these are private, cluster-only networks): pingable, CIFS, etc. Accessing \\NODEA is done over the host adapters, as you would expect in this case, which is the reason for the RPC Server Unavailable error with that adapter being down. My questions here are: Is there a way to still use live migration in a failure scenario such as this, to avoid shutting down the Hyper-V guests? How can the network be reconfigured in the future so that the cluster service attempts to use the cluster and/or live migration networks to issue the RPC requests?

    Read the article

  • How to detect UTF-8-based encoded strings [closed]

    - by Diego Sendra
    A customer asked us to build him a multi-language VB6 scraper, for which we needed to detect UTF-8 encoded strings so they could be decoded later for proper display in the application UI. It's necessary to point out that this need arises from VB6's limitations in natively supporting UTF-8 in its controls, contrary to what happens in .NET, where you can tell a control that it should expect UTF-8 encoding. VB6 natively supports only the ISO 8859-1 and/or Windows-1252 encodings, so textboxes, dropdowns, listview controls and others can't be set up to natively support/expect UTF-8 the way you can in .NET; as a result we would see weird symbols such as é or è, among others, making a whole mess of the display. So, the function below contains the UTF-8 encoded punctuation marks and symbols of languages like Spanish, Italian, German, Portuguese, French and others, based on an excellent UTF-8 list we got from this link - Ref. http://home.telfort.nl/~t876506/utf8tbl.html

    Basically, the function checks whether each of the listed UTF-8 encoded sequences, separated by | (pipe), is found in the passed string, doing a substring search first. If it is not found, it makes an alternative ASCII-value-based search to get a match. Say, a string like "Societé" (Society in English) would return FALSE when calling isUTF8("Societé"), while it would return TRUE when calling isUTF8("SocietÈ"), since È is the UTF-8 encoded representation of é. Once you have TRUE or FALSE, you can decode the string with the DecodeUTF8() function for proper display - a function we found somewhere else a while ago and have also included in this post.

        'Returns True if the passed string appears to contain UTF-8 encoded sequences
        Function isUTF8(ByVal ptstr As String)
            Dim tUTFencoded As String
            Dim tUTFencodedaux
            Dim tUTFencodedASCII As String
            Dim ptstrASCII As String
            Dim iaux, iaux2 As Integer
            Dim ffound As Boolean

            ffound = False

            'Build a pipe-separated list of the ASCII codes of the passed string
            ptstrASCII = ""
            For iaux = 1 To Len(ptstr)
                ptstrASCII = ptstrASCII & Asc(Mid(ptstr, iaux, 1)) & "|"
            Next

            tUTFencoded = "Ä|Ã…|Ç|É|Ñ|Ö|ÃŒ|á|Ã|â|ä|ã|Ã¥|ç|é|è|ê|ë|í|ì|î|ï|ñ|ó|ò|ô|ö|õ|ú|ù|û|ü|â€|°|¢|£|§|•|¶|ß|®|©|â„¢|´|¨|â‰|Æ|Ø|∞|±|≤|≥|Â¥|µ|∂|∑|âˆ|Ï€|∫|ª|º|Ω|æ|ø|¿|¡|¬|√|Æ’|≈|∆|«|»|…|Â|À|Ã|Õ|Å’|Å“|–|—|“|â€|‘|’|÷|â—Š|ÿ|Ÿ|â„|€|‹|›|ï¬|fl|‡|·|‚|„|‰|Â|Ú|Ã|Ë|È|Ã|ÃŽ|Ã|ÃŒ|Ó|Ô||Ã’|Ú|Û|Ù|ı|ˆ|Ëœ|¯|˘|Ë™|Ëš|¸|Ë|Ë›|ˇ" & _
                          "Å|Å¡|¦|²|³|¹|¼|½|¾|Ã|×|Ã|Þ|ð|ý|þ" & _
                          "â‰|∞|≤|≥|∂|∑|âˆ|Ï€|∫|Ω|√|≈|∆|â—Š|â„|ï¬|fl||ı|˘|Ë™|Ëš|Ë|Ë›|ˇ"

            tUTFencodedaux = Split(tUTFencoded, "|")

            If UBound(tUTFencodedaux) > 0 Then
                iaux = 0
                Do While Not ffound And Not iaux > UBound(tUTFencodedaux)
                    If InStr(1, ptstr, tUTFencodedaux(iaux), vbTextCompare) > 0 Then
                        ffound = True
                    End If
                    If Not ffound Then
                        'ASCII numeric search
                        tUTFencodedASCII = ""
                        For iaux2 = 1 To Len(tUTFencodedaux(iaux))
                            'gets ASCII numeric sequence
                            tUTFencodedASCII = tUTFencodedASCII & Asc(Mid(tUTFencodedaux(iaux), iaux2, 1)) & "|"
                        Next
                        'tUTFencodedASCII = Left(tUTFencodedASCII, Len(tUTFencodedASCII) - 1)
                        'compares numeric sequences
                        If InStr(1, ptstrASCII, tUTFencodedASCII) > 0 Then
                            ffound = True
                        End If
                    End If
                    iaux = iaux + 1
                Loop
            End If

            isUTF8 = ffound
        End Function

        'Decodes two-byte UTF-8 sequences in s back to single ANSI characters
        Function DecodeUTF8(s)
            Dim i
            Dim c
            Dim n

            s = s & " "
            i = 1
            Do While i <= Len(s)
                c = Asc(Mid(s, i, 1))
                If c And &H80 Then
                    n = 1
                    Do While i + n < Len(s)
                        If (Asc(Mid(s, i + n, 1)) And &HC0) <> &H80 Then
                            Exit Do
                        End If
                        n = n + 1
                    Loop
                    If n = 2 And ((c And &HE0) = &HC0) Then
                        c = Asc(Mid(s, i + 1, 1)) + &H40 * (c And &H1)
                    Else
                        c = 191
                    End If
                    s = Left(s, i - 1) + Chr(c) + Mid(s, i + n)
                End If
                i = i + 1
            Loop
            DecodeUTF8 = s
        End Function

    Read the article
