Search Results

Search found 9469 results on 379 pages for 'live migration'.


  • ATG Live Webcast Event - EBS 12 OAF Rich UI Enhancements

    - by Bill Sawyer
    The E-Business Suite Applications Technology Group (ATG) participates in several conferences a year, including Oracle OpenWorld in San Francisco and OAUG/Collaborate.   We announce new releases, roadmaps, updates, and other news at these events.  These events are exciting, drawing thousands of attendees, but it's clear that only a fraction of our EBS users are able to participate. We touch upon many of the same announcements here on this blog, but a blog article is necessarily different than an hour-long conference session.  We're very interested in offering more in-depth technical content and the chance to interact directly with senior ATG Development staff.  New ATG Live Webcast series -- free of charge As part of that initiative, I'm very pleased to announce that we're launching a new series of free ATG Live Webcasts jointly with Oracle University.  Our goal is to provide solid, authoritative coverage of some of the latest ATG technologies, broadcasting live from our development labs to you. Our first event is titled: The Latest E-Business Suite R12.x OA Framework Rich User Interface Enhancements This live one-hour webcast will offer a comprehensive review of the latest user interface enhancements and updates to OA Framework in EBS 12. Developers will get a detailed look at new features designed to enhance usability, offer more capabilities for personalization and extensions, and support the development and use of dashboards and web services. Topics will include new rich user interface (UI) capabilities such as: 

    Read the article

  • Install and upgrade strategies for Oracle Enterprise Manager 12c - Upcoming Webcasts with live demos

    - by Anand Akela
    At Oracle OpenWorld 2011, we launched Oracle Enterprise Manager 12c, the only complete cloud management solution for your enterprise cloud. With the new release of Oracle Enterprise Manager Cloud Control 12c, the installation and upgrade process has been enhanced to provide a fast and smooth install experience. In the upcoming webcasts, Oracle Enterprise Manager experts will discuss the installation and upgrade strategies for Oracle Enterprise Manager Cloud Control 12c. These webcasts will include live demonstrations of the install and upgrade processes.

    In the webcast on November 17th, we will cover the installation steps and provide recommendations for setting up a new Oracle Enterprise Manager Cloud Control 12c environment. We'll also provide a live demonstration of the complete installation process.

    Upgrading your Oracle Enterprise Manager environment can be a challenging and complex task, especially with large environments consisting of hundreds or thousands of targets. In the webcast on November 18th, we'll describe key facts that administrators must know before upgrading their Enterprise Manager system, as well as introduce the different approaches for an upgrade. We'll also walk you through the key steps for upgrading an existing Enterprise Manager 11g (or 10g) Grid Control to Oracle Enterprise Manager Cloud Control 12c.

    In addition to the live webcasts on the Oracle Enterprise Manager Cloud Control 12c install and upgrade processes, please consider attending the replay of the Oracle Enterprise Manager Ops Center webcast with live Q&A.

    Schedule and registration links for the upcoming webcasts:
    - Oracle Enterprise Manager Ops Center: Global Systems Management Made Easy (replay): November 17, 10 a.m. PT; December 1, 10 a.m. PT
    - Oracle Enterprise Manager Cloud Control 12c Installation Overview: November 17, 8 a.m. PT
    - Upgrade Smoothly to Oracle Enterprise Manager Cloud Control 12c: November 18, 8 a.m. PT

    For more information, please go to the Oracle Enterprise Manager web page or follow us on Twitter, Facebook, YouTube, or LinkedIn.

    Read the article

  • Are You Using Windows Live Mesh?

    - by Ben Griswold
    Most of the time, I’m the guy who authors the show notes for the Herding Code Podcast.  The workflow is relatively straightforward: Jon shares the pre-production audio with me, I complete my write-up and then ship the notes back to Jon for publishing with the edited audio.  All file sharing is done with shared folders in Windows Live Mesh. The director of my kid’s preschool was looking for a way to access her work computer from her home office.  VPN connection?  Remote desktop?  FTP?  Nope. I installed Windows Live Mesh in a matter of minutes, synchronized a number of folders and she was off and running.  (The neat thing is she’s running a PC in the office and a Mac at home.) I was using Dropbox before discovering Mesh. Dropbox is still very cool, but I’m in and out of Mesh enough that it’s taken over.  Actually, I still have a Dropbox folder – it’s just being synced by Mesh now. If you’re interested in giving Live Mesh a whirl, here are the notable links as found on the product’s site: What you need Create your mesh Sync folders Share folders Use your Live Desktop Connect to a remote computer Use a mobile phone Good luck!

    Read the article

  • ReSharper C# Live Template for Declaring Routed Event

    - by Bart Read
    Here's another WPF ReSharper Live Template for you. This one is for declaring standalone routed events of any type. Again, it's pretty simple:

        #region $EVENTNAME$ Routed Event

        public static readonly RoutedEvent $EVENTNAME$Event = EventManager.RegisterRoutedEvent(
            "$EVENTNAME$",
            RoutingStrategy.$ROUTINGSTRATEGY$,
            typeof( $EVENTHANDLERDELEGATE$ ),
            typeof( $DECLARINGTYPE$ ) );

        public event $EVENTHANDLERDELEGATE$ $EVENTNAME$
        {
            add { AddHandler( $EVENTNAME$Event, value ); }
            remove { RemoveHandler( $EVENTNAME$Event, value ); }
        }

        protected virtual void On$EVENTNAME$()
        {
            RaiseEvent( new $EVENTARGSTYPE$( $EVENTNAME$Event, this ) );
            $END$
        }

        #endregion

    Here are my previous posts along the same lines: ReSharper C# Live Template for Read-Only Dependency Property and Routed Event Boilerplate; ReSharper C# Live Template for Dependency Property and Property Change Routed Event Boilerplate Code. Enjoy!

    Read the article

  • Database Migration Scripts: Getting from place A to place B

    - by Phil Factor
    We’ll be looking at a typical database ‘migration’ script which uses an unusual technique to migrate existing ‘de-normalised’ data into a more correct form. So, the book-distribution business that uses the PUBS database has gradually grown organically, and has slipped into ‘de-normalisation’ habits. What’s this? A new column with a list of tags or ‘types’ assigned to books. Because books aren’t really in just one category, someone has ‘cured’ the mismatch between the database and the business requirements. This is fine, but it is now proving difficult for their new website that allows searches by tags. Any request for a history book really has to look in the entire list of associated tags rather than the ‘Type’ field that only keeps the primary tag. We have other problems. The TypeList column has duplicates in there which will be affecting the reporting, and there is the danger of mis-spellings getting in there. The reporting system can’t be persuaded to do reports based on the tags, and the database developers are complaining about the unCoddly things going on in their database. In your version of PUBS, this extra column doesn’t exist, so we’ve added it and put in 10,000 titles using SQL Data Generator.

        /* So how do we refactor this database? Firstly, we create a table of all the tags. */
        IF OBJECT_ID('TagName') IS NULL OR OBJECT_ID('TagTitle') IS NULL
        BEGIN
          CREATE TABLE TagName (
            TagName_ID INT IDENTITY(1,1) PRIMARY KEY,
            Tag VARCHAR(20) NOT NULL UNIQUE)

          /* ...and we insert into it all the tags from the list (remembering to take out any leading spaces) */
          INSERT INTO TagName (Tag)
            SELECT DISTINCT LTRIM(x.y.value('.', 'Varchar(80)')) AS [Tag]
            FROM (SELECT Title_ID,
                    CONVERT(XML, '<list><i>' + REPLACE(TypeList, ',', '</i><i>') + '</i></list>')
                    AS XMLkeywords
                  FROM dbo.titles) g
            CROSS APPLY XMLkeywords.nodes('/list/i/text()') AS x (y)

          /* we can then use this table to provide a table that relates tags to articles */
          CREATE TABLE TagTitle (
            TagTitle_ID INT IDENTITY(1, 1),
            [title_id] [dbo].[tid] NOT NULL REFERENCES titles (Title_ID),
            TagName_ID INT NOT NULL REFERENCES TagName (TagName_ID)
            CONSTRAINT [PK_TagTitle]
              PRIMARY KEY CLUSTERED ([title_id] ASC, TagName_ID)
              ON [PRIMARY])

          CREATE NONCLUSTERED INDEX idxTagName_ID ON TagTitle (TagName_ID)
            INCLUDE (TagTitle_ID, title_id)

          /* ...and it is easy to fill this with the tags for each title ... */
          INSERT INTO TagTitle (Title_ID, TagName_ID)
            SELECT DISTINCT Title_ID, TagName_ID
            FROM (SELECT Title_ID,
                    CONVERT(XML, '<list><i>' + REPLACE(TypeList, ',', '</i><i>') + '</i></list>')
                    AS XMLkeywords
                  FROM dbo.titles) g
            CROSS APPLY XMLkeywords.nodes('/list/i/text()') AS x (y)
            INNER JOIN TagName ON TagName.Tag = LTRIM(x.y.value('.', 'Varchar(80)'))
        END

        /* That's all there was to it. Now we can select all titles that have the military tag, just to try things out */
        SELECT Title FROM titles
          INNER JOIN TagTitle ON titles.title_ID = TagTitle.Title_ID
          INNER JOIN TagName ON TagName.TagName_ID = TagTitle.TagName_ID
          WHERE TagName.Tag = 'Military'

        /* and see the top ten most popular tags for titles */
        SELECT Tag, COUNT(*) FROM titles
          INNER JOIN TagTitle ON titles.title_ID = TagTitle.Title_ID
          INNER JOIN TagName ON TagName.TagName_ID = TagTitle.TagName_ID
          GROUP BY Tag ORDER BY COUNT(*) DESC

        /* and if you still want your list of tags for each title, then here they are */
        SELECT title_ID, title, STUFF(
          (SELECT ',' + TagName.Tag FROM titles thisTitle
             INNER JOIN TagTitle ON titles.title_ID = TagTitle.Title_ID
             INNER JOIN TagName ON TagName.TagName_ID = TagTitle.TagName_ID
           WHERE thisTitle.title_id = titles.title_ID
           FOR XML PATH(''), TYPE).value('.', 'varchar(max)')
          , 1, 1, '')
        FROM titles
        ORDER BY title_ID

    So we’ve refactored our PUBS database without pain. We’ve even put in a check to prevent it being re-run once the new tables are created. (The original post includes a diagram of the new tag relationship.) We’ve done both the DDL to create the tables and their associated components, and the DML to put the data in them. I could also have included the script to remove the de-normalised TypeList column, but I’d do a whole lot of tests first before doing that. Yes, I’ve left out the assertion tests too, which should check the edge cases and make sure the result is what you’d expect. One thing I can’t quite figure out is how to deal with an ordered list using this simple XML-based technique. We can ensure that, if we have to produce a list of tags, we can get the primary ‘type’ to be first in the list, but what if the entire order is significant? Thank goodness it isn’t in this case. If it were, we might have to revisit a string-splitter function that returns the ordinal position of each component in the sequence.

    You’ll see immediately that we can create a synchronisation script for deployment from a comparison tool such as SQL Compare, to change the schema (DDL). On the other hand, no tool could do the DML to stuff the data into the new table, since there is no way that any tool will be able to work out where the data should go. We used some pretty hairy code to deal with a slightly untypical problem. We would have to do this migration by hand, and it has to go into source control as a batch. If most of your database changes are to be deployed by an automated process, then there must be a way of overriding the data synchronisation process for this step: taking the part of the script that fills the tables, checking that the tables have not already been filled, and executing it as part of the transaction. Of course, you might prefer the approach I’ve taken with this script of creating the tables in the same batch as the data conversion process, and then using the presence of the tables to prevent the script from being re-run.

    The problem with scripting a refactoring change to a database is that it has to work both ways. If we install the new system and then have to roll back the changes, several books may have been added, or had their tags changed, in the meantime. Yes, you have to script any rollback! These have to be mercilessly tested, and put in source control just in case the deployment has to be rolled back after it has been in place for any length of time. I’ve shown you how to do this with the part of the script...

        /* and if you still want your list of tags for each title, then here they are */
        SELECT title_ID, title, STUFF(
          (SELECT ',' + TagName.Tag FROM titles thisTitle
             INNER JOIN TagTitle ON titles.title_ID = TagTitle.Title_ID
             INNER JOIN TagName ON TagName.TagName_ID = TagTitle.TagName_ID
           WHERE thisTitle.title_id = titles.title_ID
           FOR XML PATH(''), TYPE).value('.', 'varchar(max)')
          , 1, 1, '')
        FROM titles
        ORDER BY title_ID

    …which would be turned into an UPDATE … FROM script:

        UPDATE titles SET typelist = ThisTagList
        FROM
          (SELECT title_ID, title, STUFF(
             (SELECT ',' + TagName.Tag FROM titles thisTitle
                INNER JOIN TagTitle ON titles.title_ID = TagTitle.Title_ID
                INNER JOIN TagName ON TagName.TagName_ID = TagTitle.TagName_ID
              WHERE thisTitle.title_id = titles.title_ID
              ORDER BY CASE WHEN TagName.Tag = titles.[type] THEN 1 ELSE 0 END DESC
              FOR XML PATH(''), TYPE).value('.', 'varchar(max)')
             , 1, 1, '') AS ThisTagList
           FROM titles) f
        INNER JOIN Titles ON f.title_ID = Titles.title_ID

    You’ll notice that it isn’t quite a round trip, because the tags come back in a different order, though we’ve managed to make sure that the primary tag is the first one, as originally. So, we’ve improved the database for the poor book distributors using PUBS. It is not a major deal, but you’ve got to be prepared to provide a migration script that will go both forwards and backwards. Ideally, database refactoring scripts should be able to go from any version to any other. Schema synchronisation scripts can do this pretty easily, but no data synchronisation script can deal with serious refactoring jobs without the developers being able to specify how to deal with cases like this.
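    As an aside on the ordering question above: if the entire tag order were significant, the splitter would need to return each element's ordinal position as well as its value. A minimal sketch of such a function (the name and shape are my illustration, not from the article), assuming SQL Server 2005 or later and simple comma-separated input:

        CREATE FUNCTION dbo.SplitWithOrdinal (@list VARCHAR(MAX), @delim CHAR(1))
        RETURNS @result TABLE (Ordinal INT, Item VARCHAR(80))
        AS
        BEGIN
          /* Walk the string delimiter by delimiter, numbering each element
             so the caller can preserve (or restore) the original order */
          DECLARE @pos INT = 1, @next INT, @ord INT = 1;
          WHILE @pos <= LEN(@list) + 1
          BEGIN
            SET @next = CHARINDEX(@delim, @list, @pos);
            IF @next = 0 SET @next = LEN(@list) + 1;
            INSERT INTO @result (Ordinal, Item)
              VALUES (@ord, LTRIM(SUBSTRING(@list, @pos, @next - @pos)));
            SET @pos = @next + 1;
            SET @ord = @ord + 1;
          END;
          RETURN;
        END

        /* Usage: returns (1, 'history'), (2, 'military'), (3, 'naval') */
        SELECT Ordinal, Item FROM dbo.SplitWithOrdinal('history,military,naval', ',');

    Storing that Ordinal alongside TagName_ID in TagTitle would make the round trip exact, at the cost of maintaining the ordering column on every change.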

    Read the article

  • Partner Webcast - Migration to Weblogic Server 11g

    - by lukasz.romaszewski(at)oracle.com
    Partner Webcast - Migration to Weblogic Server 11g
    March 25th, 12 noon CET (1pm EET / 11am GMT)

    Description:
    The Oracle WebLogic 11g application server product line is the industry's most comprehensive Java platform for developing, deploying, and integrating enterprise applications. It provides the foundation for application grid, an architecture that enables enterprises to outperform their competitors while minimizing operational costs.

    Agenda:
    1. Introduction to the Oracle WebLogic Server 11g
    2. Migration process overview
    3. Migrating from iAS 10g
       a. SmartUpgrade utility introduction
       b. SmartUpgrade demo
    4. Migrating from other JEE application servers
       a. Understanding potential caveats
       b. Using the WebLogic classloader mechanism to isolate applications
       c. Shared libraries overview
    5. Migrating Oracle Fusion Middleware components (Forms & Reports, ADF, SOA, etc.)
    6. Summary
    7. Q&A

    Delivery Format:
    This FREE online LIVE eSeminar will be delivered over the Web and conference call. To register, click here. For any questions please contact [email protected]. Registrations received less than 24 hours prior to start time may not receive confirmation to attend.

    Read the article

  • Hyper-V Live Migration across Sites!

    - by Ryan Roussel
    One of the great sessions I sat in on at Tech Ed this week was on stretching a Windows 2008 R2 Hyper-V Failover Cluster across sites. With this ability, you could actually implement a Hyper-V cluster where you could migrate, or even Live Migrate, VMs across sites. With this area’s propensity for hurricanes, this will be a very popular topic for me over the next few months. While this technology is possible today, it’s also very complicated and can be very expensive to implement.

    First, your WAN connection has to support the ability to trunk your VLAN across both sites in order to Live Migrate. This means you can’t use a Layer 3 routed connection like MPLS. It has to be a Metro Ethernet connection or “dark fiber” (unused fiber already in the ground that can be leased from various providers). Both of these connections would allow you to trunk Layer 2 across your WAN. Cisco does have the ability to trunk Layer 2 across a routed connection by muxing the traffic, but this is only available in their Nexus product line, which has a very steep price tag. If you are stuck with MPLS or the like and Nexus switching is not a realistic possibility, you will have to implement a multi-subnet cluster, in which case Live Migration won’t be possible. However, you can still fail over VMs to the remote site with some planning and manual intervention. The consideration here is that the VMs will be on a different subnet once migrated, so you will have to change the IP addressing of your VMs. This also has ramifications for DNS and name resolution if you want to control your downtime. DHCP with reservations for your VMs is the preferred method to achieve the IP changes, as this will automate that part of the process.

    Secondly, you will have to have a mechanism to replicate your storage across both sites. Many SAN vendors natively support hardware-based synchronous and asynchronous replication. Some even support Cluster Shared Volumes, which were introduced in 2008 R2. If your SANs do not support this natively, there are alternative file-based replication products, either software-based like Double Take or hardware appliances like EMC’s. Be sure to check with your vendor on the support of Disk Majority if you’re replicating your quorum disk between SANs.

    The last consideration is the ability to maintain quorum for your cluster. If your replication provider does not support Disk Majority through replication, you will have to explore Node Majority with File Share Witness. This will affect your design, as a 3-node cluster with 1 node at the remote site and the FSW at the production site would not have the ability to maintain quorum if the production site were lost. MS best practice for this would be to implement an even-node cluster with 2 nodes at each site and the FSW at a third site.

    And there you have it. While some considerations and research go into implementing this solution, even a multi-subnet solution would be invaluable to organizations in the implementation of “warm” DR sites.
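    To make that last point concrete: the quorum model can be switched to Node Majority with a File Share Witness from PowerShell once the FailoverClusters module is available. A minimal sketch, assuming Windows Server 2008 R2; the cluster name and the witness share at the third site are placeholders:

        # Load the failover clustering cmdlets (built in on Server 2008 R2)
        Import-Module FailoverClusters

        # Switch the cluster to Node and File Share Majority, pointing at a
        # witness share hosted at a third site so neither main site is a
        # single point of failure for quorum
        Set-ClusterQuorum -Cluster "HVCluster" -NodeAndFileShareMajority "\\site3fs\fsw"

        # Confirm the new quorum configuration
        Get-ClusterQuorum -Cluster "HVCluster"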

    Read the article

  • Watch the Live Broadcast of the Silverlight 4 Launch Event

     Want to be at DevConnections for the Silverlight 4 launch but can't make it? No worries, you can watch as Scott Guthrie launches Silverlight 4. Following the keynote, you can watch Scott in a special one-hour edition of "Ask the Gu", along with other Silverlight folks like me, answering your questions on Channel 9 Live. To watch the keynotes and Channel 9 Live coverage, head to http://live.ch9.ms on April 12th and 13th. Silverlight required, of course :)

    Read the article

  • How can I compress a movie to a specific file size in Windows 7's Live Movie Maker?

    - by Nathan Fellman
    In previous versions of Windows Movie Maker, I could take a raw video file and specify the file size to compress it to, and Movie Maker would compress it accordingly (with the appropriate loss in quality). Live Movie Maker, which comes with Windows 7, doesn't seem to have this option. I can only specify the requested quality. Is there any way to specify the size of the target file in Windows Live Movie Maker?

    Read the article

  • How to Join N live MP3 streams into one using FFMPEG?

    - by Ole Jak
    How do I join N live MP3 streams (radio streams, like the live KCDX MP3 stream at http://mp3.kcdx.com:8000/stream) into one using FFMPEG? I have N incoming live MP3 streams and want to mix them into a single outgoing live MP3 stream, so that it sounds as though N speakers are talking at the same time (and, incidentally, mixing N stereo inputs down to 1 mono output). My problem is mainly how to make FFMPEG read from a stream rather than from a file. Would you mind giving some code examples, please?
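    FFmpeg reads HTTP streams the same way it reads files: pass the URL to -i. A minimal sketch for two stations, assuming a build recent enough to include the amix filter (the second URL is a placeholder):

        # Pull two live MP3 streams over HTTP, mix them into one mono stream,
        # and re-encode the result as MP3 on stdout
        ffmpeg -i http://mp3.kcdx.com:8000/stream \
               -i http://example.com:8000/otherstream \
               -filter_complex "amix=inputs=2:duration=longest" \
               -ac 1 -b:a 128k -f mp3 -

    Piping the stdout to a streaming source client (an Icecast source, say) would re-broadcast the mix; FFmpeg itself only does the decoding, mixing, and re-encoding here. For N inputs, add more -i flags and set amix=inputs=N.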

    Read the article

  • Enterprise IPv6 Migration - End of proxy.pac? Start of Point-to-Point? +10K users

    - by Yohann
    Let's start with a diagram: we can see a "typical" IPv4 company network with:
    - Internet access through a proxy
    - Access to other companies through a dedicated proxy
    - Direct access to local resources

    All computers have a proxy.pac file that indicates which proxy to use, or whether to connect directly. Computers have access only to a local DNS (no name resolution for google.com, for example). By the way, the company does not respect RFC 1918 internally and uses public addresses (for historical reasons)! The explicit use of an Internet proxy is what keeps this from causing problems. What if we were to migrate to IPv6?

    Step 1: IPv6 Internet access. Internet access over IPv6 is easy: just connect the proxy to the Internet over both IPv4 and IPv6. There is nothing to do in the internal network.

    Step 2: IPv6 AND IPv4 in the internal network. And why not a full IPv6 network directly? Because there are always old servers that are not IPv6-compatible.

    Option 1: Same architecture as in IPv4, with a proxy.pac. This is probably the easiest solution. But is it the best? I think the transition to IPv6 is an opportunity to stop bothering with this proxy.pac!

    Option 2: New architecture with transparent proxies, no proxy.pac, and a recursive DNS. Oh yes! In this new architecture, we have:
    - The explicit Internet proxy becomes a transparent Internet proxy
    - The local DNS becomes a normal recursive DNS, authoritative for the local domains
    - No proxy.pac
    - The explicit company proxy becomes a transparent company proxy
    - Routing: internal routers redirect the IP of appx.ext.example.com to the company proxy, and the default gateway is the transparent Internet proxy

    Questions: What do you think of this IPv6 architecture? It will reveal the IP addresses of our internal network, but the network is protected by firewalls; is this a real problem? Should we keep the explicit use of a proxy? How would you approach this migration scenario? And how do you do it in your company? Thanks! Feel free to edit my post to make it better.
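    For reference, the proxy.pac the question wants to eliminate is just a JavaScript function the browser evaluates for every request. A minimal sketch of the kind of file described (the proxy host names and internal domains are invented for illustration):

        // Minimal proxy.pac: pick a proxy per destination, or go direct
        function FindProxyForURL(url, host) {
            // Local resources: connect directly
            if (dnsDomainIs(host, ".internal.example.com"))
                return "DIRECT";
            // Partner-company traffic goes through the dedicated proxy
            if (dnsDomainIs(host, ".partner.example.com"))
                return "PROXY partnerproxy.internal.example.com:8080";
            // Everything else goes through the Internet proxy
            return "PROXY webproxy.internal.example.com:8080";
        }

    In Option 2, that per-client decision logic moves into the network itself (transparent proxies plus routing), which is exactly why the file can disappear.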

    Read the article

  • Delphi dbExpress and Interbase: UTF8 migration steps and risks?

    - by mjustin
    Currently, our database uses Win1252 as the only character encoding. We will have to support Unicode in the database tables soon, which means we have to perform this migration for four databases and around 80 Delphi applications which run in-house in a 24/7 environment. Are there recommendations for database migrations to UTF-8 (or UNICODE_FSS) for Delphi applications? Some questions are listed below. Many thanks in advance for your answers!
    - Are there tools which help with the migration of the existing databases (sizes between 250 MB and 2 GB, no Blob fields), by dumping the data, recreating the database with UNICODE_FSS or UTF-8, and loading the data back?
    - Are there known problems with Delphi 2009, dbExpress and Interbase 7.5 related to Unicode character sets?
    - Would you recommend upgrading the databases to Interbase 2009 first? (This upgrade is planned but does not have a high priority.)
    - Can we simply migrate the database and Delphi will handle the Unicode character sets automatically, or will we have to change all character field types in every Datamodule (dfm and source code) too?
    - Which strategy would you recommend for working on the migration in parallel with the normal development and maintenance of the existing application? The application runs in-house, so development and database administration are done internally.
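    One small building block for such a migration is verifying what each database actually declares before and after the dump-and-reload. A sketch against the InterBase/Firebird-style system tables (shown as an assumption about the 7.5 dialect; verify the column names on your version):

        /* Default character set declared for the database itself */
        SELECT RDB$CHARACTER_SET_NAME FROM RDB$DATABASE;

        /* Character set of every CHAR/VARCHAR column, via the field definitions */
        SELECT f.RDB$RELATION_NAME, f.RDB$FIELD_NAME, cs.RDB$CHARACTER_SET_NAME
        FROM RDB$RELATION_FIELDS f
        JOIN RDB$FIELDS fld ON fld.RDB$FIELD_NAME = f.RDB$FIELD_SOURCE
        JOIN RDB$CHARACTER_SETS cs
          ON cs.RDB$CHARACTER_SET_ID = fld.RDB$CHARACTER_SET_ID
        WHERE fld.RDB$CHARACTER_SET_ID IS NOT NULL
        ORDER BY f.RDB$RELATION_NAME, f.RDB$FIELD_POSITION;

    Running the second query on each of the four databases gives a checklist of columns that still carry WIN1252 after a partial migration.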

    Read the article

  • How do you deal with breaking changes in a Rails migration?

    - by Adam Lassek
    Let's say I'm starting out with this model:

        class Location < ActiveRecord::Base
          attr_accessible :company_name, :location_name
        end

    Now I want to refactor one of the values into an associated model.

        class CreateCompanies < ActiveRecord::Migration
          def self.up
            create_table :companies do |t|
              t.string :name, :null => false
              t.timestamps
            end
            add_column :locations, :company_id, :integer, :null => false
          end

          def self.down
            drop_table :companies
            remove_column :locations, :company_id
          end
        end

        class Location < ActiveRecord::Base
          attr_accessible :location_name
          belongs_to :company
        end

        class Company < ActiveRecord::Base
          has_many :locations
        end

    This all works fine during development, since I'm doing everything a step at a time; but if I try deploying this to my staging environment, I run into trouble. The problem is that since my code has already changed to reflect the migration, it causes the environment to crash when it attempts to run the migration. Has anyone else dealt with this problem? Am I resigned to splitting my deployment up into multiple steps?
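    One common way out (a sketch under the question's own schema, not the only answer) is to make the migration self-contained so it never depends on the new model code: add the column as nullable, backfill with raw SQL, then tighten the constraint. The backfill statements assume a database that understands NOW(), such as MySQL or PostgreSQL:

        class CreateCompanies < ActiveRecord::Migration
          def self.up
            create_table :companies do |t|
              t.string :name, :null => false
              t.timestamps
            end

            # Add the column as nullable first, so existing rows don't violate it
            add_column :locations, :company_id, :integer

            # Backfill with raw SQL so the migration doesn't depend on
            # model code that has already changed
            execute <<-SQL
              INSERT INTO companies (name, created_at, updated_at)
                SELECT DISTINCT company_name, NOW(), NOW() FROM locations
            SQL
            execute <<-SQL
              UPDATE locations SET company_id =
                (SELECT id FROM companies WHERE companies.name = locations.company_name)
            SQL

            # Only now enforce the NOT NULL constraint
            change_column :locations, :company_id, :integer, :null => false
          end

          def self.down
            remove_column :locations, :company_id
            drop_table :companies
          end
        end

    Because the migration only touches SQL, the new model code can ship in the same release and the deploy stays a single step.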

    Read the article

  • JDK bug migration: components and subcomponents

    - by darcy
    One subtask of the JDK migration from the legacy bug tracking system to JIRA was reclassifying bugs from a three-level taxonomy in the legacy system, (product, category, subcategory), to a fundamentally two-level scheme in our customized JIRA instance, (component, subcomponent). In the JDK JIRA system, there is technically a third project-level classification, but by design a large majority of JDK-related bugs were migrated into a single "JDK" project. In the end, over 450 legacy subcategories were simplified into about 120 subcomponents in JIRA. The 120 subcomponents are distributed among 17 components. A rule of thumb used was that a subcategory had to have at least 50 bugs in it for it to be retained.

    Below is a listing of the component / subcomponent classification of the JDK JIRA project, along with some notes and guidance on which OpenJDK email addresses cover different areas. Eventually, a separate incidents project to host new issues filed at bugs.sun.com will use a slightly simplified version of this scheme.

    The preponderance of bugs and subcomponents for the JDK are in library-related areas, with components named foo-libs and subcomponents primarily named after packages. While there was an overall condensation of subcomponents in the migration, in some cases long-standing informal divisions in core libraries based on naming conventions in the description were promoted to formal subcomponents. For example, hundreds of bugs in the java.util subcomponent whose descriptions started with "(coll)" were moved into java.util:collections. Likewise, java.lang bugs starting with "(reflect)" and "(proxy)" were moved into java.lang:reflect.

    client-libs (Predominantly discussed on 2d-dev, awt-dev, and swing-dev.): 2d, demo, java.awt, java.awt:i18n, java.beans (see beans-dev), javax.accessibility, javax.imageio, javax.sound (see sound-dev), javax.swing

    core-libs (See core-libs-dev.): java.io, java.io:serialization, java.lang, java.lang.invoke, java.lang:class_loading, java.lang:reflect, java.math, java.net, java.nio (discussed on nio-dev), java.nio.charsets, java.rmi, java.sql, java.sql:bridge, java.text, java.util, java.util.concurrent, java.util.jar, java.util.logging, java.util.regex, java.util:collections, java.util:i18n, javax.annotation.processing, javax.lang.model, javax.naming (JNDI), javax.script, javax.script:javascript, javax.sql, org.openjdk.jigsaw (see jigsaw-dev)

    security-libs (See security-dev.): java.security, javax.crypto (JCE: includes SunJCE/MSCAPI/UCRYPTO/ECC), javax.crypto:pkcs11 (JCE: PKCS11 only), javax.net.ssl (JSSE, includes javax.security.cert), javax.security, javax.smartcardio, javax.xml.crypto, org.ietf.jgss, org.ietf.jgss:krb5

    other-libs: corba, corba:idl, corba:orb, corba:rmi-iiop, javadb, other (when no other subcomponent is more appropriate; use judiciously)

    Most of the subcomponents in the xml component are related to JAXP.

    xml: jax-ws, jaxb, javax.xml.parsers (JAXP), javax.xml.stream (JAXP), javax.xml.transform (JAXP), javax.xml.validation (JAXP), javax.xml.xpath (JAXP), jaxp (JAXP), org.w3c.dom (JAXP), org.xml.sax (JAXP)

    For OpenJDK, most JVM-related bugs are connected to the HotSpot Java virtual machine.

    hotspot (See hotspot-dev.): build, compiler (see hotspot-compiler-dev), gc (garbage collection; see hotspot-gc-dev), jfr (Java Flight Recorder), jni (Java Native Interface), jvmti (JVM Tool Interface), mvm (Multi-Tasking Virtual Machine), runtime (see hotspot-runtime-dev), svc (serviceability), test

    core-svc (See serviceability-dev.): debugger, java.lang.instrument, java.lang.management, javax.management, tools

    The full JDK bug database contains entries related to legacy virtual machines that predate HotSpot, as well as retired APIs.

    vm-legacy: jit (Sun Exact VM), jit_symantec (Symantec VM, before Exact VM), jvmdi (JVM Debug Interface), jvmpi (JVM Profiler Interface), runtime (Exact VM runtime)

    Notable command-line tools in the $JDK/bin directory have corresponding subcomponents.

    tools: appletviewer, apt (see compiler-dev), hprof, jar, javac (see compiler-dev), javadoc(tool) (see compiler-dev), javah (see compiler-dev), javap (see compiler-dev), jconsole, launcher, updaters (timezone updaters, etc.), visualvm

    Some aspects of JDK infrastructure directly affect JDK Hg repositories, but others do not.

    infrastructure: build (see build-dev and build-infra-dev), licensing (covers updates to the third-party readme, licenses, and similar files), release_eng (release engineering), staging (staging of web pages related to JDK releases)

    The specification component encompasses the formal language and virtual machine specifications.

    specification: language (The Java Language Specification), vm (The Java Virtual Machine Specification)

    The code for the deploy and install areas is not currently included in OpenJDK.

    deploy: deployment_toolkit, plugin, webstart

    install: auto_update, install, servicetags

    In the JDK, there are a number of cross-cutting concerns whose organization is essentially orthogonal to other areas. Since these areas generally have dedicated teams working on them, it is easier to find bugs of interest if these bugs are grouped first by their cross-cutting component rather than by the affected technology.

    docs: doclet, guides, hotspot, release_notes, tools, tutorial

    embedded: build, hotspot, libraries

    globalization: locale-data, translation

    performance: hotspot, libraries

    The list of subcomponents will no doubt grow over time, but my inclination is to resist that growth, since the addition of each subcomponent makes the system as a whole more complicated and harder to use. When the system gets closer to being externalized, I plan to post more blog entries describing recommended use of various custom fields in the JDK project.

    Read the article

  • Slides and Scripts from Metalogix Webcast Master Your SharePoint Migration With PowerShell

    - by Brian Jackett
    Thanks to everyone who attended the Metalogix webcast “Master Your SharePoint Migration with PowerShell” that I guest-presented today.  We had great attendance and no technical hitches, which is always a plus.  A number of attendees asked for my slide deck, which you can find at the link below.  As a bonus, I am including a set of demo scripts that I typically use with the longer version of this presentation.  If you have any questions or comments, please feel free to reach out to me.  A big thanks once again to Metalogix for giving me the opportunity to work with them. Scripts and Slidedeck: Click Here         -Frog Out

    Read the article

  • Static HTML to Wordpress Migration SEO Implications?

    - by Kayle
    Recently, I migrated a client's site to a new server and a new home within WordPress so they could more easily edit their website and start a blog section. The static site was 10 years old and had been showing up at place #3 for its primary keyword, consistently, according to my client; it has dropped to rank #6-8 following the migration. At launch, we made sure the URLs were identical (save the removal of ".htm", which we used 301 redirects to compensate for) and we generated a new XML sitemap and pinged Google with the new site. We keep a 404 log to make sure we're not losing any incoming links. We also have Google Webmaster Tools on this site and have zero errors/suggestions; everything seems OK. I was told by numerous sources that Google would not penalize us for the use of 301s, but it's the only thing I can think of right now that is different about the site, other than the platform. Any ideas about what we could be getting docked for?
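    For context, the ".htm" removal described above is typically done with a rewrite rule on Apache. A minimal sketch of the kind of rule involved (assuming Apache with mod_rewrite in an .htaccess file; the client's actual rules aren't given):

        # Permanently redirect old static URLs like /about.htm to /about
        RewriteEngine On
        RewriteRule ^(.+)\.htm$ /$1 [R=301,L]

    The R=301 flag is what signals a permanent move to search engines, so link equity should follow the redirect rather than being lost.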

    Read the article

  • Application Migration: Windows/VB6 Apps to ASP.NET HTML5

    - by Webgui
    I would like to invite you to a fascinating webinar on extending applications to HTML5 and mobile that we are doing in collaboration with Jeffrey S. Hammond, Principal Analyst serving Application Development & Delivery Professionals at Forrester Research. The webinar is free, and it will introduce the substantial changes brought on by the move to web applications and open web architectures, and the challenges this places on application development shops. We'll also introduce how we at Gizmox are helping clients navigate this mobile shift and evolve existing Windows applications with a new set of transposition tools called Instant CloudMove. We will discuss the alternatives in the market for evolving your existing applications, and focus on our transposition tools that reduce migration risk, minimize costs, and accelerate your time to market. So if you have locally installed Windows, VB6 or ASP applications that you are looking to enable as SaaS, offer over private or public cloud platforms, or make accessible to end users on mobile devices, then you shouldn't miss this webinar. Extending Windows Applications to HTML5 and Mobile Has Never Been Easier. Tuesday, April 24, 2012, 1:00 PM - 2:00 PM EST. Free registration: http://www.visualwebgui.com/Gizmox/Landing/tabid/674/articleType/ArticleView/articleId/987/Extending-Windows-Applications-to-HTML5-and-Mobile-Has-Never-Been-Easier.aspx

    Read the article

  • Compiz setting migration from 12.04 to 12.10

    - by Maksim
    I just migrated my Ubuntu 12.04 home folder to a freshly installed Ubuntu 12.10, and once again I have a problem with Compiz: it has none of my custom settings! How is it possible that a full copy of the home directory doesn't transfer the settings? The trouble began when Compiz started playing with digits in its config folder names, like .compiz-1 versus .compiz. I have no idea when I'm supposed to use the -1 variant and when not. I've renamed my Compiz folders in .gconf and .compiz, which didn't help. How do I get my settings back, and why is this mess happening?

    Read the article

  • Using git (or any version control) regarding migration from one language to another [on hold]

    - by Max Benin
    I'm polishing an old project that I released some years ago; the main purpose is to rearrange some folder structures and port the entire code from ActionScript to Haxe. All the game features, assets and design will remain the same. I have some doubts about versioning the project in this circumstance. Assuming that the only thing that will drastically change is the code migration, is it correct to maintain the new project's changes in the same repository? I was thinking of tagging it something like v1.1, or branching the entire project, but I'm afraid of deviating from standard versioning practice. What is the best-practice way to handle version control in this situation? Thanks.
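    Staying in one repository is the usual choice when everything but the implementation language is unchanged, and the two ideas mentioned combine naturally. A minimal sketch (the tag and branch names are illustrative):

        # Mark the last ActionScript release so it stays easy to find
        git tag -a v1.0-as3 -m "Final ActionScript version"

        # Do the Haxe port on its own branch, keeping all history in one repo
        git checkout -b haxe-port
        # ... port the code, committing as you go ...

        # When the port is stable, merge it back and tag the new release
        git checkout master
        git merge haxe-port
        git tag -a v1.1 -m "First Haxe version"

    This way the shared assets and design keep a single history, and either version can be checked out or patched at any time.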

    Read the article

  • How to install Grub2 under several common scenarios

    - by Huckle
    I feel the community has long needed a clean guide on how to install Grub2 under a few extremely common scenarios. I will accept the answer as solved when it has one section per scenario and assumes nothing other than what is specified. Please add to the existing answer, wiki-style, keeping to the original assumptions.

    Rules:
    1. You cannot, at any point in the answer, invoke Ubiquity (the Ubuntu installer).
    2. I strongly recommend not using any automatic boot-repair tools, as they're not very educational.

    Scenario 1: Non-booting Linux OS, no boot partition, fix from Live CD
    Setup: /dev/sda1 is formatted ext*; /dev/sda2 is formatted linux_swap. /dev/sda1 doesn't boot because the MBR is scrambled and /boot/* was erased.
    Explain: How to boot to a Live CD/USB and restore Grub2 to the MBR and /boot of /dev/sda1.

    Scenario 2: Non-booting Linux OS, boot partition, fix from Live CD
    Setup: /dev/sda1 is formatted fat; /dev/sda2 is formatted ext*; /dev/sda3 is formatted linux_swap. /dev/sda2 doesn't boot because the MBR is scrambled and /dev/sda1 was formatted.
    Explain: How to boot to a Live CD/USB, restore Grub2 to the MBR and /dev/sda1, and then update the fstab on /dev/sda2.

    Scenario 3: Install on to thumb drive, booting various OSes, from Linux OS
    Setup: /dev/sdb is removable media; /dev/sdb1 is formatted fat; /dev/sdb2 is formatted ext*; /dev/sdb3 is formatted fat; the MBR of /dev/sdb is otherwise not initialized. You are executing from a Linux-based OS installed on /dev/sda.
    Explain: How to install Grub2 on to /dev/sdb1, mark /dev/sdb1 active, and be able to choose between /dev/sdb2 and /dev/sdb3 on boot.

    Scenario 4: (Bonus) Install on to thumb drive, booting ISO, from Linux OS
    Setup: /dev/sdb is removable media; /dev/sdb1 is formatted fat; /dev/sdb1 contains /iso/live.iso; /dev/sdb2 is formatted ext*; /dev/sdb3 is formatted fat; the MBR of /dev/sdb is otherwise not initialized. You are executing from a Linux-based OS installed on /dev/sda.
    Explain: How to install Grub2 on to /dev/sdb1, mark /dev/sdb1 active, and be able to choose between /dev/sdb2, /dev/sdb3, and /iso/live.iso on boot.
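    For Scenario 1, the usual outline is to mount the installed system from the live session, bind-mount the kernel's virtual filesystems, chroot in, and reinstall. A minimal sketch, assuming a BIOS/MBR machine and a stock Ubuntu Live CD (a starting point, not a definitive recipe):

        # From the Live CD: mount the broken install and the virtual filesystems
        sudo mount /dev/sda1 /mnt
        for fs in /dev /dev/pts /proc /sys; do sudo mount --bind $fs /mnt$fs; done

        # Work inside the installed system
        sudo chroot /mnt

        # Restore the erased /boot files, write Grub2 to the MBR, and rebuild the menu
        apt-get install --reinstall grub-pc
        grub-install /dev/sda
        update-grub

        # Leave the chroot and unmount in reverse order
        exit
        for fs in /sys /proc /dev/pts /dev; do sudo umount /mnt$fs; done
        sudo umount /mnt

    Scenario 2 follows the same pattern with /dev/sda1 mounted at /mnt/boot before the chroot; grub-install still targets the disk (/dev/sda), not the partition.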

    Read the article

  • Recover Data Like a Forensics Expert Using an Ubuntu Live CD

    - by Trevor Bekolay
    There are lots of utilities to recover deleted files, but what if you can’t boot up your computer, or the whole drive has been formatted? We’ll show you some tools that will dig deep and recover the most elusive deleted files, or even whole hard drive partitions. We’ve shown you simple ways to recover accidentally deleted files, even a simple method that can be done from an Ubuntu Live CD, but for hard disks that have been heavily corrupted, those methods aren’t going to cut it. In this article, we’ll examine four tools that can recover data from the most messed up hard drives, regardless of whether they were formatted for a Windows, Linux, or Mac computer, or even if the partition table is wiped out entirely.

    Note: These tools cannot recover data that has been overwritten on a hard disk. Whether a deleted file has been overwritten depends on many factors – the quicker you realize that you want to recover a file, the more likely you will be able to do so.

    Our setup
    To show these tools, we’ve set up a small 1 GB hard drive, with half of the space partitioned as ext2, a file system used in Linux, and half the space partitioned as FAT32, a file system used in older Windows systems. We stored ten random pictures on each partition. We then wiped the partition table from the hard drive by deleting the partitions in GParted. Is our data lost forever?

    Installing the tools
    All of the tools we’re going to use are in Ubuntu’s universe repository. To enable the repository, open Synaptic Package Manager by clicking on System in the top-left, then Administration > Synaptic Package Manager. Click on Settings > Repositories and add a check in the box labelled “Community-maintained Open Source software (universe)”. Click Close, and then in the main Synaptic Package Manager window, click the Reload button. Once the package list has reloaded, and the search index rebuilt, search for and mark for installation one or all of the following packages: testdisk, foremost, and scalpel. Testdisk includes TestDisk, which can recover lost partitions and repair boot sectors, and PhotoRec, which can recover many different types of files from tons of different file systems. Foremost, originally developed by the US Air Force Office of Special Investigations, recovers files based on their headers and other internal structures. Foremost operates on hard drives or drive image files generated by various tools. Finally, scalpel performs the same functions as foremost, but is focused on enhanced performance and lower memory usage. Scalpel may run better if you have an older machine with less RAM.

    Recover hard drive partitions
    If you can’t mount your hard drive, then its partition table might be corrupted. Before you start trying to recover your important files, it may be possible to recover one or more partitions on your drive, recovering all of your files with one step. TestDisk is the tool for the job. Start it by opening a terminal (Applications > Accessories > Terminal) and typing in:

        sudo testdisk

    If you’d like, you can create a log file, though it won’t affect how much data you recover. Once you make your choice, you’re greeted with a list of the storage media on your machine. You should be able to identify the hard drive you want to recover partitions from by its size and label. TestDisk asks you to select the type of partition table to search for. In most cases (ext2/3, NTFS, FAT32, etc.) you should select Intel and press Enter. Highlight Analyse and press Enter.
    In our case, our small hard drive had previously been formatted as NTFS. Amazingly, TestDisk finds this partition, though it is unable to recover it. It also finds the two partitions we just deleted. We are able to change their attributes, or add more partitions, but we’ll just recover them by pressing Enter. If TestDisk hasn’t found all of your partitions, you can try doing a deeper search by selecting that option with the left and right arrow keys. We only had these two partitions, so we’ll recover them by selecting Write and pressing Enter. TestDisk informs us that we will have to reboot.

    Note: If your Ubuntu Live CD is not persistent, then when you reboot you will have to reinstall any tools that you installed earlier.

    After restarting, both of our partitions are back to their original states, pictures and all.

    Recover files of certain types
    For the following examples, we deleted the 10 pictures from both partitions and then reformatted them.

    PhotoRec
    Of the three tools we’ll show, PhotoRec is the most user-friendly, despite being a console-based utility. To start recovering files, open a terminal (Applications > Accessories > Terminal) and type in:

        sudo photorec

    To begin, you are asked to select a storage device to search. You should be able to identify the right device by its size and label. Select the right device, and then hit Enter. PhotoRec asks you to select the type of partition to search. In most cases (ext2/3, NTFS, FAT, etc.) you should select Intel and press Enter. You are given a list of the partitions on your selected hard drive. If you want to recover all of the files on a partition, then select Search and hit Enter. However, this process can be very slow, and in our case we only want to search for picture files, so instead we use the right arrow key to select File Opt and press Enter. PhotoRec can recover many different types of files, and deselecting each one would take a long time. Instead, we press “s” to clear all of the selections, and then find the appropriate file types – jpg, gif, and png – and select them by pressing the right arrow key. Once we’ve selected these three, we press “b” to save these selections. Press Enter to return to the list of hard drive partitions. We want to search both of our partitions, so we highlight “No partition” and “Search” and then press Enter. PhotoRec prompts for a location to store the recovered files. If you have a different healthy hard drive, then we recommend storing the recovered files there. Since we’re not recovering very much, we’ll store it on the Ubuntu Live CD’s desktop.

    Note: Do not recover files to the hard drive you’re recovering from.

    PhotoRec is able to recover the 20 pictures from the partitions on our hard drive! A quick look in the recup_dir.1 directory that it creates confirms that PhotoRec has recovered all of our pictures, save for the file names.

    Foremost
    Foremost is a command-line program with no interactive interface like PhotoRec's, but it offers a number of command-line options to get as much data out of your hard drive as possible. For a full list of options that can be tweaked via the command line, open up a terminal (Applications > Accessories > Terminal) and type in:

        foremost -h

    In our case, the command-line options that we are going to use are:
    - -t, a comma-separated list of types of files to search for. In our case, this is "jpeg,png,gif".
    - -v, enabling verbose mode, giving us more information about what foremost is doing.
    - -o, the output folder to store recovered files in.
      In our case, we created a directory called "foremost" on the desktop.
    - -i, the input that will be searched for files. This can be a disk image in several different formats; however, we will use a hard disk, /dev/sda.

    Our foremost invocation is:

        sudo foremost -t jpeg,png,gif -o foremost -v -i /dev/sda

    Your invocation will differ depending on what you’re searching for and where you’re searching for it. Foremost is able to recover 17 of the 20 files stored on the hard drive. Looking at the files, we can confirm that these files were recovered relatively well, though we can see some errors in the thumbnail for 00622449.jpg. Part of this may be due to the ext2 filesystem; Foremost recommends using the -d command-line option for Linux file systems like ext2. We’ll run foremost again, adding the -d command-line option to our invocation:

        sudo foremost -t jpeg,png,gif -d -o foremost -v -i /dev/sda

    This time, foremost is able to recover all 20 images! A final look at the pictures reveals that they were recovered with no problems.

    Scalpel
    Scalpel is another powerful program that, like Foremost, is heavily configurable. Unlike Foremost, Scalpel requires you to edit a configuration file before attempting any data recovery. Any text editor will do, but we’ll use gedit to change the configuration file. In a terminal window (Applications > Accessories > Terminal), type in:

        sudo gedit /etc/scalpel/scalpel.conf

    scalpel.conf contains information about a number of different file types. Scroll through this file and uncomment lines that start with a file type that you want to recover (i.e. remove the “#” character at the start of those lines). Save the file and close it. Return to the terminal window. Scalpel also has a ton of command-line options that can help you search quickly and effectively; however, we’ll just define the input device (/dev/sda) and the output folder (a folder called “scalpel” that we created on the desktop). Our invocation is:

        sudo scalpel /dev/sda -o scalpel

    Scalpel is able to recover 18 of our 20 files. A quick look at the files scalpel recovered reveals that most of our files were recovered successfully, though there were some problems (e.g. 00000012.jpg).

    Conclusion
    In our quick toy example, TestDisk was able to recover two deleted partitions, and PhotoRec and Foremost were able to recover all 20 deleted images. Scalpel recovered most of the files, but it’s very likely that playing with its command-line options would have enabled us to recover all 20 images. These tools are lifesavers when something goes wrong with your hard drive. If your data is on the hard drive somewhere, then one of these tools will track it down!

    Read the article
