Search Results

Search found 21539 results on 862 pages for 'hanging process'.

  • Lost all privileges since upgrading to 13.10

    - by Chris Poole
    Since upgrading to 13.10, I no longer have the privileges to do the following things:
    - Mount USB/CDROM drives
    - Run Software Centre or Software Updater
    - Press the GUI shut down or restart buttons
    - Unlock my account in the Settings > User Accounts section (the padlock is greyed out)
    Also, when logging on as a guest user I get error messages relating to Compiz crashing with SIGSEGV, and it hangs on a blank wallpaper screen. However, I am still able to use sudo in the terminal. Output of 'groups':
      jenchris adm dialout cdrom sudo audio video plugdev lpadmin admin pulse pulse-access sambashare
    'sudo usermod -U username' doesn't have any effect. Output of 'sudo dpkg-reconfigure -phigh -a':
      acpid stop/waiting
      acpid start/running, process 30454
      * Starting AppArmor profiles
      Skipping profile in /etc/apparmor.d/disable: usr.bin.firefox
      Skipping profile in /etc/apparmor.d/disable: usr.sbin.rsyslogd
      [ OK ]
      * Reloading AppArmor profiles
      Skipping profile in /etc/apparmor.d/disable: usr.bin.firefox
      Skipping profile in /etc/apparmor.d/disable: usr.sbin.rsyslogd
      [ OK ]
      apport stop/waiting
      apport start/running
      gpg: key 437D05B5: "Ubuntu Archive Automatic Signing Key <[email protected]>" not changed
      gpg: key FBB75451: "Ubuntu CD Image Automatic Signing Key <[email protected]>" not changed
      gpg: key C0B21F32: "Ubuntu Archive Automatic Signing Key (2012) <[email protected]>" not changed
      gpg: key EFE21092: "Ubuntu CD Image Automatic Signing Key (2012) <[email protected]>" not changed
      gpg: Total number processed: 4
      gpg: unchanged: 4
      atd stop/waiting
      atd start/running, process 1388
      avahi-daemon stop/waiting
      avahi-daemon start/running, process 1521
      Rebuilding /usr/share/applications/bamf-2.index...
      update-alternatives: using /usr/share/man/man7/bash-builtins.7.gz to provide /usr/share/man/man7/builtins.7.gz (builtins.7.gz) in auto mode
      update-binfmts: warning: current package is openjdk-7, but binary format already installed by openjdk-6
      binfmt-support stop/waiting
      bluetooth stop/waiting
      bluetooth start/running, process 4255
      update-initramfs: deferring update (trigger activated)
      /var/lib/dpkg/info/compiz.config: 1: /var/lib/dpkg/info/compiz.config: [general]: not found
      /var/lib/dpkg/info/compiz.config: 2: /var/lib/dpkg/info/compiz.config: backend: not found
      /var/lib/dpkg/info/compiz.config: 3: /var/lib/dpkg/info/compiz.config: plugin_list_autosort: not found
      /var/lib/dpkg/info/compiz.config: 5: /var/lib/dpkg/info/compiz.config: [gnome_session]: not found
      /var/lib/dpkg/info/compiz.config: 6: /var/lib/dpkg/info/compiz.config: backend: not found
      /var/lib/dpkg/info/compiz.config: 7: /var/lib/dpkg/info/compiz.config: integration: not found
      /var/lib/dpkg/info/compiz.config: 8: /var/lib/dpkg/info/compiz.config: plugin_list_autosort: not found
      /var/lib/dpkg/info/compiz.config: 9: /var/lib/dpkg/info/compiz.config: profile: not found
      /var/lib/dpkg/info/compiz.config: 11: /var/lib/dpkg/info/compiz.config: [general_ubuntu]: not found
      /var/lib/dpkg/info/compiz.config: 12: /var/lib/dpkg/info/compiz.config: backend: not found
      /var/lib/dpkg/info/compiz.config: 13: /var/lib/dpkg/info/compiz.config: integration: not found
      /var/lib/dpkg/info/compiz.config: 14: /var/lib/dpkg/info/compiz.config: plugin_list_autosort: not found
      /var/lib/dpkg/info/compiz.config: 15: /var/lib/dpkg/info/compiz.config: profile: not found

    Read the article

  • Upcoming Webcast on June 17: Gain Control Over Your Financial Close

    - by Theresa Hickman
    Accenture and Oracle EPM (Enterprise Performance Management) and GRC (Governance, Risk, and Compliance) will be hosting a live webcast called "Gain Control Over Your Financial Close - Confidence in the Process, Trust in the Numbers." When: Thursday, June 17, 2010. Time: 9:00am PST (Noon EST). Don't miss this chance to find out how you can optimize the financial close process and transform the speed, quality, and integrity of your financial reporting. For more information and to register for this event, see this webpage.

    Read the article

  • Using the BackgroundWorker in a Silverlight MVVM Application

    - by axshon
    With Silverlight 4 and the Entity Framework you get a lot of work done on your behalf in terms of standard UI CRUD-style operations. Validations and I/O are pretty easy to accommodate out of the box. But sometimes you need to perform some long running tasks either on the client or on the server via service calls. To prevent your UI from hanging and annoying your users, you should consider placing these operations on a background thread. The BackgroundWorker object is the perfect solution for this...(read more)
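    To make the pattern concrete, here is a minimal sketch (not from the article itself) of a BackgroundWorker wrapped inside a view model; the class and member names (MainViewModel, DoExpensiveWork, Status) are hypothetical:

      // A minimal sketch, assuming a Silverlight/MVVM-style view model.
      using System.ComponentModel;

      public class MainViewModel
      {
          private readonly BackgroundWorker _worker = new BackgroundWorker();

          public MainViewModel()
          {
              // Runs on a thread-pool thread: do the long-running work here,
              // but do not touch UI-bound state.
              _worker.DoWork += (s, e) => { e.Result = DoExpensiveWork(); };

              // Raised back on the UI thread: safe to update bound properties.
              _worker.RunWorkerCompleted += (s, e) =>
              {
                  Status = e.Error == null ? (string)e.Result : e.Error.Message;
                  // In a real view model you would raise PropertyChanged for Status here.
              };
          }

          public string Status { get; private set; }

          public void Start()
          {
              if (!_worker.IsBusy) _worker.RunWorkerAsync(); // ignore re-entrant starts
          }

          private static string DoExpensiveWork()
          {
              System.Threading.Thread.Sleep(3000); // stand-in for a slow client or service call
              return "Done";
          }
      }

    The design point is that DoWork runs on a thread-pool thread while RunWorkerCompleted is marshaled back to the UI thread, so only the completion handler should touch data-bound state.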

    Read the article

  • « BPMN2 : L'essentiel », a free downloadable e-book for learning the key concepts of BPMN and modeling your business processes

    « BPMN2 : L'essentiel » is a free downloadable e-book for learning the key concepts of BPMN and modeling your business processes. BPM (Business Process Management) is a discipline that treats process management as a means of improving operational performance. Business processes are represented as graphical models using the BPMN (Business Process Model and Notation) set of graphical conventions. BPMN is a standard that makes it possible to...

    Read the article

  • Product Development Investment: A Measure of Vendor Performance

    - by Jim Mcglothlin
    The relationship between a large, complex organization and its key suppliers of information technology is normally more than just "strategic". Expectations about the duration of the relationship typically exceed 20 years. Enterprise applications and technology infrastructure are not expected to be changed out like petunias. So how would you rate the due diligence processes performed in Higher Education when selecting critical, transformational information technology? My observation: I see a lot of effort put into elaborate demonstrations of basic software functionality. I see a lot of attention paid to the cost element of technology acquisition, including the contracted cost of implementation consulting services. But the factor that receives only cursory analysis and due diligence is long-term performance--the ability of a vendor to grow, expand, and develop, and bring its customers along with it. So what should you look for in a long-term IT supplier? Oracle has a public track record for product development. The annual investment has run at almost $3 billion in organic product development. Oracle's well-publicized acquisitions and mergers have been supplemental to its R&D. This is important for Higher Education. Another meaningful way to evaluate a company is to look at its tangible track record of enhancement. Consider the PeopleSoft enterprise business platform in the six years since Oracle acquired it (each product or technology enhancement is followed by its customer or user impact):
    - Service Oriented Architecture (SOA): 300+ new web services delivered in versions 9.0 and 9.1 provide flexibility, so that customers can integrate PeopleSoft with other applications. Campus Solutions has added Admissions and Constituent Web Services.
    - Constituent Relationship Management: PeopleSoft CRM 9.1 for Higher Education introduced new process flows for student recruiting and retention to support "Student Success" initiatives. A 360-degree view of the constituent is now delivered, and the concept of a single-stop Student Services Center is now in CRM 9.1 with tight integration to PeopleSoft Campus Solutions.
    - Human Capital Management: Contract Pay for Education, with flexibility for configuration and calculation, has been extended in HCM 9.1. New chartfield integration among Project Costing, Time & Labor, and Payroll serves the labor distribution requirements for Grants / Sponsored Research.
    - Talent Management: PeopleSoft 9.0 and 9.1 feature an integrated talent management approach centered on definitions in "Profile Manager", with all-new usability improvements. Internal and external candidate pools, and the entire recruitment process, are driven by delivered configurable selection and on-boarding processes. Interview scheduling and online job offers are newly delivered processes.
    - Performance Management: PeopleSoft HCM ePerformance 9.1 will include significant new functionality designed to help organizations more effectively align business objectives with employee goals. Using an Organization Chart view, your business goals can flow down to become tangible objectives per employee.
    - Succession Planning / Workforce Development: New in HCM 9.0, and enhanced in 9.1, is a planning capability for regular or unusual (major organizational change) succession of internal or external candidates. PeopleSoft supports employee-based career planning (employees identify their career needs, plans, preferences, and interests), which ultimately increases the integrity of the succession planning process.
    - Dashboards / Oracle Business Intelligence Application Suite: Oracle Human Resources Analytics provides the workforce information foundation that integrates data from HR functional areas and Finance, delivering 9 dashboards and over 200 reports. It gives your HR professionals and front-line managers the tools to analyze workforce staffing, retention, and productivity, to better source high-quality applicants, and to reduce absence costs.
    - Multi-year Planning and Commitment Control: External funding sources, especially Grants, require a multi-year encumbrance business process. PeopleSoft HCM 9.1 adds multi-year funding and commitment control, including budget checking. The newly designed Real Time Budget Checking provides the customer with an updated snapshot of their budget and encumbrances at any given time.
    - Position Budgeting with Hyperion: Hyperion Planning products now include delivered integration to PeopleSoft HCM. Position Budgeting is available in the new Public Sector Planning module of Hyperion.
    - Web 2.0 features for the latest in usability: PeopleSoft 9.1 features a contemporary internet user experience: partial-page refreshing, drag-and-drop pagelets, a new menu structure, navigation pagelets, modal popup message windows, favorites and recently used links, type-ahead, and drag-and-drop grid columns with pop-out grids.
    - Portal Workspaces: Enterprise 2.0 for your collaborative web communities, using new content management along with wikis, blogs, and discussion forums in PeopleSoft Portal 9.1.
    - PeopleTools enhanced by Oracle Fusion Middleware: Standards-based tools have been added to the PeopleTools application infrastructure: BI (XML) Publisher and Java tools. Certified for use with PeopleSoft: Oracle Business Intelligence (OBIEE), Oracle Enterprise Manager, Oracle WebLogic Server, and Oracle SOA Suite.
    - Hosting for PeopleSoft applications: A solid new deployment option: the Oracle On Demand remote hosting center, for high scalability, security, and continuity of operations.
    - Business Process Outsourcing (BPO) for HCM / Payroll functions: A partnership with AT&T provides hosting of the HR/Payroll application along with payroll business process operations and subscription-based service fees (SaaS). The AT&T BPO full service includes pay sheet processing, bank and third-party file transfer, payroll tax handling, etc.
    - Continuous Delivery Model: Feature Packs provide faster time-to-benefit; new features become available in PeopleSoft 9.1 (or Campus Solutions 9.0) without the need to perform an upgrade.
    - Golden person data model across all campus applications: The Oracle Higher Education Constituent Hub provides synchronization and data governance of person data across any application, e.g. HR/Payroll, Student Information System, Housing, Emergency Contact, LMS, CRM.
    Oracle's aggressive enhancement plans within the "Applications Unlimited" program continue, as new functionality is under development for a new version of a PeopleSoft release planned for 2012. Meanwhile, new capabilities are planned on an annual basis in Feature Packs. PeopleSoft just delivered the HCM 2010 Feature Pack and another is planned for 2011. In February we plan to have over 100 customers from our Customer Advisory Boards at our PeopleSoft Development Center in California to review designs for all of these releases. For those of you near New York City: the investment and progressive development story described above is the subject of an Oracle road show event on February 9, 2011.
Charting Your Course with Oracle Applications is a global event series designed to help business and IT executives assess the impact of new inflection points on their business and applications roadmap: changing workforces, shifting customer and constituent bases, and increased volatility. Learn how innovations ranging from new deployment models like cloud computing to the introduction of social applications and smart devices are delivering results across all areas of business and industry. THIS DOCUMENT IS FOR INFORMATIONAL PURPOSES ONLY AND MAY NOT BE INCORPORATED INTO A CONTRACT OR AGREEMENT.

    Read the article

  • Replication Services as ETL extraction tool

    - by jorg
    In my last blog post I explained the principles of Replication Services and the possibilities it offers in a BI environment. One of the possibilities I described was the use of snapshot replication as an ETL extraction tool: "Snapshot Replication can also be useful in BI environments. If you don't need a near real-time copy of the database, you can choose to use this form of replication. Besides being an alternative to Transactional Replication, it can be used to stage data so it can be transformed and moved into the data warehousing environment afterwards. In many solutions I have seen developers create multiple SSIS packages that simply copy data from one or more source systems to a staging database that acts as the source for the ETL process. The creation of these packages takes a lot of (boring) time, while Replication Services can do the same in minutes. It is possible to filter out columns and/or records, and it can even apply schema changes automatically, so I think it offers enough features here. I don't know how the performance will be and whether it really works as well for this purpose as I expect, but I want to try this out soon!"
    Well, I have tried it out and I must say it worked well. I was able to let Replication Services do the work in a fraction of the time it would have cost me to do the same in SSIS. What I did was the following:
    - Configure snapshot replication for some AdventureWorks tables; this was quite simple and straightforward.
    - Create an SSIS package that executes the snapshot replication on demand and waits for its completion. This is something that you can't do with out-of-the-box functionality.
    While configuring the snapshot replication, two SQL Agent jobs are created: one for the creation of the snapshot and one for the distribution of the snapshot. Unfortunately these jobs are asynchronous, which means that when you execute them they immediately report back whether the job started successfully or not; they do not wait for completion and report the result afterwards. So I had to create an SSIS package that executes the jobs and waits for their completion before the rest of the ETL process continues. Fortunately I was able to create the SSIS package with the desired functionality. I have made a step-by-step guide that will help you configure the snapshot replication, and I have uploaded the SSIS package you need to execute it.
    Configure snapshot replication:
    - The first step is to create a publication on the database you want to replicate. Connect to SQL Server Management Studio, right-click Replication, and choose New > Publication...
    - The New Publication Wizard appears; click Next.
    - Choose your "source" database and click Next.
    - Choose Snapshot publication and click Next.
    - You can now select the tables and other objects that you want to publish. Expand Tables and select the tables that are needed in your ETL process.
    - On the next screen you can add filters on the selected tables, which can be very useful: think about selecting only the last x days of data, for example. It is possible to filter out rows and/or columns. In this example I did not apply any filters.
    - Schedule the Snapshot Agent to run at a desired time. By doing this, a SQL Agent job is created, which we need to execute from an SSIS package later on.
    - Next you need to set the security settings for the Snapshot Agent; click on the Security Settings button. In this example I ran the agent under the SQL Server Agent service account. This is not recommended as a security best practice. Fortunately there is an excellent article on TechNet which tells you exactly how to set up the security for Replication Services. Read it and make sure you follow the guidelines!
    - On the next screen, choose to create the publication at the end of the wizard.
    - Give the publication a name (SnapshotTest) and complete the wizard. The publication is created and the articles (the tables, in this case) are added.
    Now that the publication is created successfully, it is time to create a new subscription for it:
    - Expand the Replication folder in SSMS and right-click Local Subscriptions; choose New Subscriptions.
    - The New Subscription Wizard appears. Select the publisher on which you just created your publication, and select the database and publication (SnapshotTest).
    - You can now choose where the Distribution Agent should run. If it runs at the distributor (a push subscription), it causes extra processing overhead there. If you use a separate server for your ETL process and databases, choose to run each agent at its subscriber (a pull subscription) to reduce the processing overhead at the distributor.
    - Of course we need a database for the subscription, and fortunately the wizard can create it for you. Choose New database, give the database the desired name, set the desired options, and click OK.
    - You can now add multiple SQL Server subscribers, which is not necessary in this case but can be very useful.
    - You now need to set the security settings for the Distribution Agent; click on the "..." button. Again, in this example I ran the agent under the SQL Server Agent service account; read the security best practices first. Click Next.
    - Make sure you create a synchronization job schedule. This job is also needed in the SSIS package later on. Initialize the subscription at first synchronization: select the first box to create the subscription when finishing this wizard.
    - Complete the wizard by clicking Finish. The subscription will be created.
    In SSMS you will see that a new database, the subscriber, has been created. There are no tables or other objects in it yet, because the replication jobs have not run yet. Now expand the SQL Server Agent, go to Jobs, and search for the job that creates the snapshot; rename this job to "CreateSnapshot". Then search for the job that distributes the snapshot and rename it to "DistributeSnapshot".
    Create an SSIS package that executes the snapshot replication:
    We now need an SSIS package that will take care of the execution of both jobs. The CreateSnapshot job needs to execute and finish before the DistributeSnapshot job runs, and after the DistributeSnapshot job has started, the package needs to wait until it has finished before the package execution completes. The Execute SQL Server Agent Job Task is designed to execute SQL Agent jobs from SSIS. Unfortunately this SSIS task only executes the job and reports back whether the job started successfully or not; it does not report whether the job actually completed with success or failure, because these jobs are asynchronous. The SSIS package I have created does the following:
    - It runs the CreateSnapshot job.
    - It checks every 5 seconds, in a For Loop, whether the job has completed.
    - When the CreateSnapshot job is completed, it starts the DistributeSnapshot job.
    - Again, it waits until the snapshot is delivered before the package finishes successfully.
    Quite simple, and the package is ready to use as a standalone extract mechanism. After executing the package, the replicated tables are added to the subscriber database and are filled with data. Download the SSIS package here (SSIS 2008).
    Conclusion: In this example I only replicated 5 tables, and I could have created an SSIS package that does the same in approximately the same amount of time. But if I replicated all of the 70+ AdventureWorks tables, I would save a lot of time and boring work! With Replication Services you also benefit from the fact that schema changes are applied automatically, which means your extract phase won't break. Because a snapshot is created using the bcp (bulk copy) utility, it is also quite fast, so performance should be good. The disadvantage of using snapshot replication as an extraction tool is the limitation on source systems: you can only choose SQL Server or Oracle databases to act as a publisher. So if you plan to build an extract phase for your ETL process that involves a lot of tables, think about Replication Services; it could save you a lot of time, and thanks to the extract SSIS package I have created, you can fit it neatly into your usual SSIS ETL process.
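    As an aside, the "start a job, then poll until it finishes" logic that the package implements can also be sketched in a few lines of C#, for example inside an SSIS script task. This is a minimal sketch under my own assumptions (a reachable msdb and the "CreateSnapshot" job created above), not the uploaded package itself:

      // Minimal sketch: start a SQL Agent job and block until it finishes.
      using System.Data;
      using System.Data.SqlClient;
      using System.Threading;

      public static class AgentJobRunner
      {
          public static void RunJobAndWait(string connectionString, string jobName)
          {
              using (var conn = new SqlConnection(connectionString))
              {
                  conn.Open();

                  // sp_start_job returns as soon as the job is queued (asynchronous).
                  using (var start = new SqlCommand("msdb.dbo.sp_start_job", conn))
                  {
                      start.CommandType = CommandType.StoredProcedure;
                      start.Parameters.AddWithValue("@job_name", jobName);
                      start.ExecuteNonQuery();
                  }

                  // Poll every 5 seconds until no execution of this job is in flight.
                  var poll = new SqlCommand(
                      "SELECT COUNT(*) FROM msdb.dbo.sysjobactivity ja " +
                      "JOIN msdb.dbo.sysjobs j ON j.job_id = ja.job_id " +
                      "WHERE j.name = @job_name " +
                      "AND ja.start_execution_date IS NOT NULL " +
                      "AND ja.stop_execution_date IS NULL", conn);
                  poll.Parameters.AddWithValue("@job_name", jobName);

                  while ((int)poll.ExecuteScalar() > 0)
                  {
                      Thread.Sleep(5000);
                  }
              }
          }
      }

    Usage would be a pair of calls such as AgentJobRunner.RunJobAndWait(connStr, "CreateSnapshot") followed by the same for "DistributeSnapshot", mirroring the For Loop approach described above.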

    Read the article

  • Engagement: Don’t Forget Your Employees!

    - by Kellsey Ruppel
    By Mark Brown, Sr. Director, Oracle WebCenter. This week we want to focus on employee engagement and how it is critical to your business. Today we hear and read a great deal about "customer engagement" - and rightly so; it is those customers, whether they be traditional paying customers, citizens, students, club members, or whomever, that are "paying the bills". A more engaged customer is more likely to make it easier to pay those bills by buying more, giving good reviews, or spreading the word of how wonderful their experience was. But what about those who provide those services, those who design and make those goods; why is it that all too often they are left out of conversations concerning engagement? In fact, it is critical that we consider our employees as customers, since they use the internal systems that run your organization the same way customers use external systems. Studies have shown that an organization in which the employees feel "engaged" - better able to make decisions, do their jobs, and connect with their peers - delivers better returns to its stakeholders (shareholders). On the surface this seems obvious: happy employees are more productive employees. But it leads to the question - how many of our existing policies, systems, and processes are actually reducing that level of engagement? Let's look at a couple of examples. If posting new information that may be of great value to everyone in the larger organization is hard to do because we use an antiquated system, then we're making it hard to share and increasing the potential for duplicate work. If it is not trivially obvious how to create and publish this post, then chances are very high that I'll put it at the bottom of my queue. And finally, when critical information is spread across various systems, intranet sites, workgroups, and people's inboxes, it is very hard to learn and grow from that information. These may sound trivial, but how often do we push things off not because they are intellectually challenging - we may have the answer at our fingertips - but because it is hard to make that information readily available? If an engaged employee is a productive employee, then what can we do to increase their level of engagement? We can start by looking for opportunities to provide self-documenting, self-service solutions. Our newer employees grew up using simplified web interfaces every day, and they loathe calling a help desk unless it is the last resort. Sadly, many of our enterprise applications have not kept pace, and we all still have processes that are based on sending an email -- like discount approvals, vacation requests, or even offer-letter approvals. My suggestion is to pick one highly visible, high-impact process where employees are either reticent to execute the process or openly complain about how cumbersome it is, and look at the mechanism for that process. If there are better ways - streamlined steps, better UIs - then you have a candidate to reconfigure that process and make it more engaging. Looking to better engage your employees? Start here!

    Read the article

  • Using Keyword Analysis to Write Articles and Blogs

    Keyword analysis is a process by which you can discover what search phrases users enter at search engines to find information. Keywords are nothing but the search words or phrases entered by users at search engines like Google, Yahoo and Bing. For article, blog and web content writers, keyword research is the most important part of the process.

    Read the article

  • A Technique for Performing Cross-host Upgrades to FMW 11gR1

    - by reza.shafii
    The main tool used for upgrading iAS 10g mid-tier environments (where data is not stored in the 10g metadata repository schemas) to Fusion Middleware (FMW) 11gR1 is the FMW Upgrade Assistant (UA). This tool performs what we call an out-of-place upgrade, which in a nutshell means the following:
    - The upgrade is performed by pointing the UA to a 10g source topology as well as an 11g destination topology.
    - The destination topology must be created, using the standard FMW 11g installation and configuration process, prior to the execution of the UA.
    - The UA carries over all of the required changes from the source environment to the destination.
    This approach has a number of advantages rooted in the fact that the source environment - which is presumably working well and serving its needs - is not disturbed during the upgrade process, as the UA only performs read-only operations on it. Today, the UA can only perform such out-of-place upgrades when the source and destination topologies reside on the same machine. This can sometimes be an issue when the host on which the iAS 10g environment is installed is running at full capacity and installing new hardware for the purpose of the upgrade (in most cases what would be needed is extra memory) is completely infeasible. In such cases, an upgrade across hosts is still possible by using the following technique:
    - Back up your source environment and restore it onto a target machine. The backup and restore procedures for the iAS 10.1.2 components are described within this section of the release's Administration Guide. As described in the docs, the Oracle Application Server Backup and Recovery Tool provides capabilities for backing up the installation on one machine and restoring it on another, which is exactly what you want to do for the purpose of a cross-host upgrade.
    - Ensure that the restored environment on your target host is fully functional.
    - Go through the upgrade steps on the target machine to perform the out-of-place upgrade using the UA.
    Although this process does add another big step to the overall upgrade, it does make it possible to perform a cross-host upgrade to 11gR1 when necessary. The easiest approach would of course be to find a way of ensuring that the required hardware capacity for the upgrade is available on the original 10g host. Techniques such as scheduling the upgrade at low-traffic times and/or temporarily stopping other processes running on the machine to clear up some memory might provide the sufficient memory needed to perform the out-of-place upgrade and save you the need for the backup/restore technique I have described in this post.

    Read the article

  • CI Deployment Of Azure Web Roles Using TeamCity

    - by srkirkland
    After recently migrating an important new website to Windows Azure "Web Roles", I wanted an easier way to deploy new versions to the Azure staging environment, as well as a reliable process to roll back deployments to a certain "known good" source control commit checkpoint. By configuring our JetBrains TeamCity CI server to utilize the Windows Azure PowerShell cmdlets to create new automated deployments, I'll show you how to take control of your Azure publish process.
    Step 0: Configuring your Azure Project in Visual Studio
    Before we can start looking at automating the deployment, we should make sure manual deployments from Visual Studio are working properly. Detailed information for setting up deployments can be found at http://msdn.microsoft.com/en-us/library/windowsazure/ff683672.aspx#PublishAzure or by doing some quick Googling, but the basics are as follows:
    1. Install the prerequisite Windows Azure SDK.
    2. Create an Azure project by right-clicking on your web project and choosing "Add Windows Azure Cloud Service Project" (or by manually adding that project type).
    3. Configure your Role and Service Configuration/Definition as desired.
    4. Right-click on your Azure project and choose "Publish," create a publish profile, and push to your web role.
    You don't actually have to do step 4 and create a publish profile, but it's a good exercise to make sure everything is working properly. Once your Windows Azure project is set up correctly, we are ready to move on to understanding the Azure publish process.
    Understanding the Azure Publish Process
    The actual Windows Azure project is fairly simple at its core: it builds your dependent roles (in our case, a web role) against a specific service and build configuration, and outputs two files:
    - ServiceConfiguration.Cloud.cscfg: the file containing your package configuration info, for example Instance Count, OsFamily, ConnectionString and other Setting information.
    - ProjectName.Azure.cspkg: the package file that contains the guts of your deployment, including all deployable files.
    When you package your Azure project, these two files will be created within the directory ./[ProjectName].Azure/bin/[ConfigName]/app.publish/. If you want to build your Azure project from the command line, it's as simple as calling MSBuild on the "Publish" target: msbuild.exe /target:Publish
    Windows Azure PowerShell Cmdlets
    The last pieces of the puzzle that make CI automation possible are the Azure PowerShell cmdlets (http://msdn.microsoft.com/en-us/library/windowsazure/jj156055.aspx). These cmdlets are what will let us create deployments without Visual Studio or other user intervention.
    Preparing TeamCity for Azure Deployments
    Now we are ready to get our TeamCity server set up so it can build and deploy Windows Azure projects, which, as we now know, requires the Azure SDK and the Windows Azure PowerShell cmdlets. Installing the Azure SDK is easy enough: just go to https://www.windowsazure.com/en-us/develop/net/ and click "Install". Once the SDK is installed, I recommend running a test build to make sure your project is building correctly. You'll want to set up your build step using MSBuild with the "Publish" target against your solution file. Assuming the build was successful, you will now have the two *.cspkg and *.cscfg files within your build directory. If the build was red (failed), take a look at the build logs and keep an eye out for "unsupported project type" or other build errors, which will need to be addressed before the CI deployment can be completed.
    With a successful build we are now ready to install and configure the Windows Azure PowerShell cmdlets:
    1. Follow the instructions at http://msdn.microsoft.com/en-us/library/windowsazure/jj554332 to install the cmdlets and configure PowerShell.
    2. After installing the cmdlets, you'll need to get your Azure subscription info using the Get-AzurePublishSettingsFile command.
    3. Store the resulting *.publishsettings file somewhere you can get to easily, like C:\TeamCity, because you will need to reference it later from your deploy script.
    Scripting the CI Deploy Process
    Now that the cmdlets are installed on our TeamCity server, we are ready to script the actual deployment using a TeamCity "PowerShell" build runner. Before we look at any code, here's a breakdown of our deployment algorithm:
    1. Set up your variables, including the location of the *.cspkg and *.cscfg files produced in the earlier MSBuild step (remember, the folder is something like [ProjectName].Azure/bin/[ConfigName]/app.publish/).
    2. Import the Windows Azure PowerShell cmdlets.
    3. Import and set your Azure subscription information (this is basically your authentication/authorization step, so protect your settings file).
    4. Look for a current deployment; if you find one, upgrade it, else create a new deployment.
    Pretty simple and straightforward. Now let's look at the code (also available as a gist here: https://gist.github.com/3694398):

      $subscription = "[Your Subscription Name]"
      $service = "[Your Azure Service Name]"
      $slot = "staging" # staging or production
      $package = "[ProjectName]\bin\[BuildConfigName]\app.publish\[ProjectName].cspkg"
      $configuration = "[ProjectName]\bin\[BuildConfigName]\app.publish\ServiceConfiguration.Cloud.cscfg"
      $timeStampFormat = "g"
      $deploymentLabel = "ContinuousDeploy to $service v%build.number%"

      Write-Output "Running Azure Imports"
      Import-Module "C:\Program Files (x86)\Microsoft SDKs\Windows Azure\PowerShell\Azure\*.psd1"
      Import-AzurePublishSettingsFile "C:\TeamCity\[PSFileName].publishsettings"
      Set-AzureSubscription -CurrentStorageAccount $service -SubscriptionName $subscription

      function Publish() {
          $deployment = Get-AzureDeployment -ServiceName $service -Slot $slot -ErrorVariable a -ErrorAction silentlycontinue

          if ($a[0] -ne $null) {
              Write-Output "$(Get-Date -f $timeStampFormat) - No deployment is detected. Creating a new deployment."
          }

          if ($deployment.Name -ne $null) {
              # Update deployment in place (usually faster, cheaper, won't destroy VIP)
              Write-Output "$(Get-Date -f $timeStampFormat) - Deployment exists in $servicename. Upgrading deployment."
              UpgradeDeployment
          } else {
              CreateNewDeployment
          }
      }

      function CreateNewDeployment() {
          write-progress -id 3 -activity "Creating New Deployment" -Status "In progress"
          Write-Output "$(Get-Date -f $timeStampFormat) - Creating New Deployment: In progress"

          $opstat = New-AzureDeployment -Slot $slot -Package $package -Configuration $configuration -label $deploymentLabel -ServiceName $service

          $completeDeployment = Get-AzureDeployment -ServiceName $service -Slot $slot
          $completeDeploymentID = $completeDeployment.deploymentid

          write-progress -id 3 -activity "Creating New Deployment" -completed -Status "Complete"
          Write-Output "$(Get-Date -f $timeStampFormat) - Creating New Deployment: Complete, Deployment ID: $completeDeploymentID"
      }

      function UpgradeDeployment() {
          write-progress -id 3 -activity "Upgrading Deployment" -Status "In progress"
          Write-Output "$(Get-Date -f $timeStampFormat) - Upgrading Deployment: In progress"

          # perform Update-Deployment
          $setdeployment = Set-AzureDeployment -Upgrade -Slot $slot -Package $package -Configuration $configuration -label $deploymentLabel -ServiceName $service -Force

          $completeDeployment = Get-AzureDeployment -ServiceName $service -Slot $slot
          $completeDeploymentID = $completeDeployment.deploymentid

          write-progress -id 3 -activity "Upgrading Deployment" -completed -Status "Complete"
          Write-Output "$(Get-Date -f $timeStampFormat) - Upgrading Deployment: Complete, Deployment ID: $completeDeploymentID"
      }

      Write-Output "Create Azure Deployment"
      Publish

    Creating the TeamCity Build Step
    The only thing left is to create a second build step, after your MSBuild "Publish" step, with the build runner type "PowerShell". Then set your script to "Source Code," set the script execution mode to "Put script into PowerShell stdin with -Command arguments", and copy/paste in the above script (replacing the placeholder sections with your values).
    Wrap Up
    After combining the MSBuild /target:Publish step (which creates the necessary Windows Azure *.cspkg and *.cscfg files) and a PowerShell script step which utilizes the Azure PowerShell cmdlets, we have a fully deployable build configuration in TeamCity. You can configure this step to run whenever you'd like using build triggers - for example, you could even deploy whenever a new commit to the master branch comes in and passes all required tests. In the script I've hardcoded that every deployment goes to the staging environment on Azure, but you could deploy straight to production if you want to, or even set up a deployment configuration variable and set it as desired. Once your TeamCity build configuration is complete, whenever you click the "Run" button, all of your code will be compiled, published, and deployed to Windows Azure! One additional enormous benefit of automating the process this way is that you can easily deploy any specific source control changeset by clicking the little ellipsis button next to "Run": this brings up a dialog where you can select the last change to use for your deployment. Since Azure Web Role deployments don't have any rollback functionality, this is a critical feature. Enjoy!

    Read the article

  • How you can extend Tasklists in Fusion Applications

    - by Elie Wazen
    In this post we describe the process of modifying and extending a tasklist available in the Regional Area of a Fusion Applications UI Shell. This is particularly useful to customers who would like to expose setup tasks (generally available in the Fusion Setup Manager application) in the workareas of the various functional pillars. Oracle Composer, the tool used to implement such extensions, allows changes to be made at runtime. The example provided in this document is for an Oracle Fusion Financials page.
    Let us examine the case of a customer role that requires access both to a workarea and its associated functional tasks, and to an FSM (setup) task. Both of these tasks represent ADF taskflows, but each is accessible from a different page. We will show how an FSM task is added to a functional tasklist and made accessible to a user from within a single workarea, eliminating the need to navigate between the FSM application and the functional workarea where transactions are conducted.
    In general, tasks in Fusion Applications are grouped in two ways:
    - Setup tasks are grouped in tasklists available to implementers in the Functional Setup Manager (FSM). These tasks are accessed by implementation users and in general do not represent daily operational tasks that fit into a functional business process; they were consequently included in the FSM application. For these tasks, the primary organizing principle is precedence between tasks: if the task "Manage Suppliers" has prerequisites, those tasks must precede it in a tasklist. Tasklists are organized to efficiently implement an offering.
    - Tasks frequently performed as part of business process flows are made available as links in the tasklist of their corresponding menu workarea. The primary organizing principle in the menu and task pane entries is to group tasks that are generally accessed together.
    Customizing a tasklist thus becomes necessary in business scenarios where a task packaged under FSM as a setup task is, for a particular customer, a regular maintenance task that is accessed for record updates or creation as part of normal operational activities, and where the frequency of this access merits the inclusion of that task in the related operational tasklist.
    A user with the role of maintaining journals in General Ledger is also responsible for maintaining Chart of Accounts Mappings. In the Fusion Financials product family, Manage Journals is a task available from within the Journals menu, whereas Chart of Accounts Mapping is available via FSM under the Define Chart of Accounts tasklist.
    Figure 1. The Manage Chart of Accounts Mapping task in FSM
    Figure 2. The Manage Journals task in the task pane of the Journals workarea
    Our goal is to simplify cross-task navigation and allow the user to access both tasks from a single tasklist on a single page, without having to navigate to FSM for the mapping task and to the Journals workarea for the manage task. To accomplish that, we use Oracle Composer to customize the Journals tasklist by adding the mapping task to it.
    Identify the taskflow name and path of the FSM task
    The first step in our process is to identify the underlying taskflow for the Manage Chart of Accounts Mappings task. We select Setup and Maintenance from the Navigator to launch the FSM application, and we query the task from Manage Tasklists and Tasks.
    Figure 3. Task details, including the taskflow path
    The Manage Chart of Accounts Mapping task taskflow is:
    /WEB-INF/oracle/apps/financials/generalLedger/sharedSetup/coaMappings/ui/flow/CoaMappingsMainAreaFlow.xml#CoaMappingsMainAreaFlow
    We copy that value and use it later as a parameter to our new task in the customized Journals tasklist.
    Customize the Journals page
    A user with administration privileges can start the runtime customization directly from the Administration menu of the Global Area. This customization is done at the Site level and, once implemented, becomes available to all users with access to the Journals workarea.
    Figure 4. Customization menu
    The Oracle Composer window is displayed in the same browser, and the hierarchy of the page components is displayed and available for modification.
    Figure 5. Oracle Composer
    In the Composer window, select the PanelFormLayout node and click on the Edit button. Note that the selected component is simultaneously highlighted in the lower pane in the browser. In the Properties popup window, select the Tasks List and Task Properties tab, where you will find the hierarchy of the tasklist and can edit nodes or create new ones.
    Figure 6. The tasklist in edit mode
    Add a child task to the tasklist
    In the edit window, create a child node at the desired level in the hierarchy by selecting the immediate parent node and clicking on the insert node button. This process requires four values to be set, listed below as parameter, value, and how to determine the value:
    - Focus View Id: /JournalEntryPage. This is the Focus View ID of the UI Shell where the tasklist we want to customize resides. A simple way to determine this value is to copy it from any of the standard tasks on the tasklist.
    - Label: COA Mapping. This is the display name of the task as it will appear in the tasklist.
    - Task Type: dynamicMain. With this value, the page contains a new link in the Regional Area; when you click the link, a new tab with the loaded task opens.
    - Taskflowid: /WEB-INF/oracle/apps/financials/generalLedger/sharedSetup/coaMappings/ui/flow/CoaMappingsMainAreaFlow.xml#CoaMappingsMainAreaFlow. This is the taskflow path we retrieved from the task definition in FSM earlier in the process.
    Figure 7. The parameters window of the newly added task
    Access the FSM task from the Journals workarea
    Once the FSM task is added and its parameters are defined, save the record and close the Composer; the new task is immediately available to users with access to the Journals workarea (refer to Figure 8 below).
    Figure 8. The COA Mapping task is now visible and can be invoked from the Journals workarea
    Additional considerations
    If a taskflow is part of a product that is deployed on the same app server as the tasklist workarea, then that taskflow can be added to a customized tasklist in that workarea. Otherwise, that taskflow can be invoked from its parent product's workarea tasklist by selecting that workarea from the Navigator menu. For example, the following taskflows belong respectively to the Subledger Accounting and General Ledger products:
    /WEB-INF/oracle/apps/financials/subledgerAccounting/accountingMethodSetup/mappingSets/ui/flow/MappingSetFlow.xml#MappingSetFlow
    /WEB-INF/oracle/apps/financials/generalLedger/sharedSetup/coaMappings/ui/flow/CoaMappingsMainAreaFlow.xml#CoaMappingsMainAreaFlow
    Since both the Subledger Accounting and General Ledger products are part of the LedgerApp J2EE application and are both deployed on the General Ledger cluster server (Figure 9 below), the user can add both of the above taskflows to the tasklist in the /JournalEntryPage FocusViewID workarea. Note: both FSM taskflows and functional taskflows can be added to tasklists as described in this document.
    Figure 9. The topology of the Fusion Financials product family. Note that Subledger Accounting and General Ledger are both deployed on the Ledger App.
    Conclusion
    In this document we have shown how an administrative user can edit the tasklist in the Regional Area of a Fusion Apps page using Oracle Composer. This is useful for cases where tasks packaged in different workareas are frequently accessed by the same user. By making these tasks available from the same page, we minimize the number of navigation steps the user has to perform for their transactions and queries in Fusion Apps. The example explained above showed that tasks classified as setup tasks, i.e. made accessible to implementation users from the FSM module, can be added to the workarea of their respective Fusion application. This eliminates the need to navigate to FSM to access tasks that are both setup and regular maintenance tasks.
    References
    - Oracle Fusion Applications Extensibility Guide 11g Release 1 (11.1.1.5), Part Number E16691-02, Section 3.2
    - Oracle Fusion Applications Developer's Guide 11g Release 1 (11.1.4), Part Number E15524-05

    Read the article

  • nfs-kernel-server installation : file does not exist

    - by Stuti Rastogi
    I am extremely new to Ubuntu and need to work on the EdX platform. I need to install the NFS client on Ubuntu 12.04 for the same. I used the following command:
      stuti@stuti:/$ sudo apt-get install nfs-kernel-server
    However this gives me an error as follows:
      Reading package lists... Done
      Building dependency tree
      Reading state information... Done
      The following extra packages will be installed:
        nfs-common
      The following NEW packages will be installed:
        nfs-common nfs-kernel-server
      0 upgraded, 2 newly installed, 0 to remove and 0 not upgraded.
      Need to get 0 B/355 kB of archives.
      After this operation, 1,222 kB of additional disk space will be used.
      Do you want to continue [Y/n]? y
      Selecting previously unselected package nfs-common.
      (Reading database ... 200367 files and directories currently installed.)
      Unpacking nfs-common (from .../nfs-common_1%3a1.2.5-3ubuntu3.1_i386.deb) ...
      Selecting previously unselected package nfs-kernel-server.
      Unpacking nfs-kernel-server (from .../nfs-kernel-server_1%3a1.2.5-3ubuntu3.1_i386.deb) ...
      Processing triggers for ureadahead ...
      Processing triggers for man-db ...
      Setting up nfs-common (1:1.2.5-3ubuntu3.1) ...
      statd start/running, process 4574
      gssd stop/pre-start, process 4603
      idmapd start/running, process 4643
      Setting up nfs-kernel-server (1:1.2.5-3ubuntu3.1) ...
      update-rc.d: /etc/init.d/nfs-kernel-server: file does not exist
      dpkg: error processing nfs-kernel-server (--configure):
      subprocess installed post-installation script returned error exit status 1
      No apport report written because MaxReports is reached already
      Errors were encountered while processing:
      nfs-kernel-server
      E: Sub-process /usr/bin/dpkg returned an error code (1)
    I have tried:
      sudo apt-get autoremove nfs-kernel-server
      sudo apt-get autoremove nfs-common
    After these, I tried to install again, but I keep getting the same error. apt-get update and upgrade also do not help and give the same error. I am clueless as to where I can find the missing file stated in the output. I tried to Google this problem, but none of the solutions I came across have helped, or I have not been able to understand some of them. Any help would really be appreciated. Thanks in advance for your time and attention.

    Read the article

  • SQL Server data platform upgrade - Why upgrade and how best you can reduce pre & post upgrade problems?

    - by ssqa.net
    Whether it involves database(s), instance(s), or both, a SQL Server upgrade must follow best-practice processes and procedures in order to reduce any problems that may occur even after the platform is upgraded. The success of any project relies upon simple methods of implementation and a process that reduces the complexity of testing to ensure a successful outcome. This has also been a popular topic that .... read more from here ......(read more)

    Read the article

  • What is Database Continuous Integration?

    - by David Atkinson
    Although not everyone is practicing continuous integration, many have at least heard of the concept. A recent poll on www.simple-talk.com indicates that 40% of respondents are employing the technique. It is widely accepted that the earlier issues are identified in the development process, the lower their cost to the project. The worst-case scenario, of course, is for a bug to be found by the customer following the product release. A number of Agile development best practices have evolved to combat this problem early in the development process, including pair programming, code inspections and unit testing. Continuous integration is one such Agile concept that tackles the problem at the point of committing a change to source control (alternatively, it can be run on a regular schedule). This triggers a sequence of events that compiles the code and performs a variety of tests. Often the continuous integration process is regarded as a build validation test, and if issues were to be identified at this stage, the testers would simply not "waste their time" and would not touch the build at all. Such a "broken build" will trigger an alert, and the development team's number one priority should be to resolve the issue. How application code is compiled and tested as part of continuous integration is well understood. However, this isn't so clear for databases. Indeed, before I cover the mechanics of implementation, we need to decide what we mean by database continuous integration. For me, database continuous integration can be implemented as one or more of the following:
    1. Your application code is being compiled and tested. You therefore need a database to be maintained at the corresponding version.
    2. Just as a valid application should compile, so should the database. It should therefore be possible to build a new database from scratch (see the sketch below).
    3. Likewise, it should be possible to generate an upgrade script to take your already deployed databases to the latest version.
    I will be covering these in further detail in future blogs. In the meantime, more information can be found in the whitepaper linked off www.red-gate.com/ci. If you have any questions, feel free to contact me directly or post a comment to this blog post.
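    As a concrete illustration of point 2, here is a minimal sketch (my own, not from the whitepaper) of a CI step that rebuilds a database from a creation script and fails the build if the script does not run. The database name ci_scratch, the script path create_database.sql, and the connection string are all hypothetical:

      // A minimal sketch of a "build the database from scratch" CI check.
      using System;
      using System.Data.SqlClient;
      using System.IO;
      using System.Text.RegularExpressions;

      class DbBuildCheck
      {
          static int Main()
          {
              // Hypothetical connection string; a real CI agent would read this from configuration.
              const string master = "Server=localhost;Database=master;Integrated Security=true";
              try
              {
                  using (var conn = new SqlConnection(master))
                  {
                      conn.Open();
                      new SqlCommand(
                          "IF DB_ID('ci_scratch') IS NOT NULL DROP DATABASE ci_scratch; " +
                          "CREATE DATABASE ci_scratch", conn).ExecuteNonQuery();
                      conn.ChangeDatabase("ci_scratch");

                      // GO is a batch separator understood by client tools, not by the
                      // engine, so split the script on it before executing each batch.
                      var script = File.ReadAllText("create_database.sql");
                      foreach (var batch in Regex.Split(script, @"^\s*GO\s*$", RegexOptions.Multiline))
                      {
                          if (batch.Trim().Length == 0) continue;
                          new SqlCommand(batch, conn).ExecuteNonQuery();
                      }
                  }
                  return 0; // the build server treats exit code 0 as success
              }
              catch (Exception ex)
              {
                  Console.Error.WriteLine("Database build failed: " + ex.Message);
                  return 1; // a non-zero exit code fails the CI step, i.e. a "broken build"
              }
          }
      }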

    Read the article

  • SSIS: Deploying OLAP cubes using C# script tasks and AMO

    - by DrJohn
    As part of the continuing series on building dynamic OLAP data marts on-the-fly, this blog entry will focus on how to automate the deployment of OLAP cubes using SQL Server Integration Services (SSIS) and Analysis Management Objects (AMO). OLAP cube deployment is usually done using the Analysis Services Deployment Wizard. However, this option was dismissed for a variety of reasons. Firstly, invoking external processes from SSIS is fraught with problems, as (a) it is not always possible to ensure SSIS waits for the external program to terminate; (b) we cannot log the outcome properly; and (c) it is not always possible to control the server's configuration to ensure the executable works correctly. Another reason for rejecting the Deployment Wizard is that it requires the 'answers' to be written into four XML files. These XML files record the three things we need to change: the name of the server, the name of the OLAP database, and the connection string to the data mart. Although it would be reasonably straightforward to change the content of the XML files programmatically, this adds another set of complications and a level of obscurity to the overall process. When I first investigated the possibility of using C# to deploy a cube, I was surprised to find that there are no other blog entries about the topic. I can only assume everyone else is happy with the Deployment Wizard!
    SSIS "forgets" assembly references
    If you build your script task from scratch, you will have to remember how to overcome one of the major annoyances of working with SSIS script tasks: the forgetful nature of SSIS when it comes to assembly references. Basically, you can go through the process of adding an assembly reference using the Add Reference dialog, but when you close the script window, SSIS "forgets" the assembly reference, so the script will not compile. After repeating the operation several times, you will find that SSIS only remembers the assembly reference when you specifically press the Save All icon in the script window. This problem is not unique to the AMO assembly and has certainly been a "feature" since SQL Server 2005, so I am not amazed it is still present in SQL Server 2008 R2!
    Sample package
    So let's take a look at the sample SSIS package I have provided, which can be downloaded from here: DeployOlapCubeExample.zip
    Connection managers
    The package has three connection managers:
    - AsDatabaseDefinitionFile is a file connection manager pointing to the .asdatabase file you wish to deploy. Note that this can be found in the bin directory of your OLAP database project once you have clicked the "Build" button in Visual Studio.
    - TargetOlapServerCS is an Analysis Services connection manager which identifies both the deployment server and the target database name.
    - SourceDataMart is an OLEDB connection manager pointing to the data mart which is to act as the source of data for your cube. This will be used to replace the connection string found in your .asdatabase file.
    Once you have configured the connection managers, the sample should run and deploy your OLAP database in a few seconds. Of course, in a production environment, these connection managers would be associated with package configurations or set at runtime. When you run the sample, you should see that the script logs its activity to the output screen. If you configure logging for the package, then these messages will also appear in your SSIS logging.
    Sample Code Walkthrough
    Next let's walk through the code. The first step is to parse the connection string provided by the TargetOlapServerCS connection manager and obtain the names of both the target OLAP server and the OLAP database. Note that the target database does not have to exist to be referenced in an AS connection manager, so I am using this as a convenient way to define both properties. We now connect to the server and check for the existence of the OLAP database. If it exists, we drop the database so we can re-deploy.

    svr.Connect(olapServerName);
    if (svr.Connected)
    {
        // Drop the OLAP database if it already exists
        Database db = svr.Databases.FindByName(olapDatabaseName);
        if (db != null)
        {
            db.Drop();
        }
        // rest of script
    }

    Next we start building the XMLA command that will actually perform the deployment. Basically this is a small chunk of XML which we need to wrap around the large .asdatabase file generated by the Visual Studio build process.

    // Start generating the main part of the XMLA command
    XmlDocument xmlaCommand = new XmlDocument();
    xmlaCommand.LoadXml(string.Format("<Batch Transaction='false' xmlns='http://schemas.microsoft.com/analysisservices/2003/engine'><Alter AllowCreate='true' ObjectExpansion='ExpandFull'><Object><DatabaseID>{0}</DatabaseID></Object><ObjectDefinition/></Alter></Batch>", olapDatabaseName));

    Next we need to merge the two XML files, which we can do by simply setting the InnerXml property of the ObjectDefinition node as follows:

    // load OLAP Database definition from .asdatabase file identified by connection manager
    XmlDocument olapCubeDef = new XmlDocument();
    olapCubeDef.Load(Dts.Connections["AsDatabaseDefinitionFile"].ConnectionString);
    // merge the two XML files by obtaining a reference to the ObjectDefinition node
    oaRootNode.InnerXml = olapCubeDef.InnerXml;

    One hurdle I had to overcome was removing detritus from the .asdatabase file left by the Visual Studio build. Through an iterative process, I found I needed to remove several nodes as they caused the deployment to fail. The XMLA error message read "Cannot set read-only node: CreatedTimestamp" or similar. In comparing the XMLA generated by the Deployment Wizard with that generated by my code, these read-only nodes were missing, so clearly I just needed to strip them out. This was easily achieved using XPath to find the relevant XML nodes, of which I show one example below:

    foreach (XmlNode node in rootNode.SelectNodes("//ns1:CreatedTimestamp", nsManager))
    {
        node.ParentNode.RemoveChild(node);
    }

    Now we need to change the database name in both the ID and Name nodes using code such as:

    XmlNode databaseID = xmlaCommand.SelectSingleNode("//ns1:Database/ns1:ID", nsManager);
    if (databaseID != null)
        databaseID.InnerText = olapDatabaseName;

    Finally we need to change the connection string to point at the relevant data mart. Again this is easily achieved using XPath to search for the relevant nodes and then replace the content of the node with the new name or connection string.

    XmlNode connectionStringNode = xmlaCommand.SelectSingleNode("//ns1:DataSources/ns1:DataSource/ns1:ConnectionString", nsManager);
    if (connectionStringNode != null)
    {
        connectionStringNode.InnerText = Dts.Connections["SourceDataMart"].ConnectionString;
    }

    Finally we need to perform the deployment using the Execute XMLA command and check the returned XmlaResultCollection for errors before setting the Dts.TaskResult.
    XmlaResultCollection oResults = svr.Execute(xmlaCommand.InnerXml);

    // check for errors during deployment
    foreach (Microsoft.AnalysisServices.XmlaResult oResult in oResults)
    {
        foreach (Microsoft.AnalysisServices.XmlaMessage oMessage in oResult.Messages)
        {
            if ((oMessage.GetType().Name == "XmlaError"))
            {
                FireError(oMessage.Description);
                HadError = true;
            }
        }
    }

    If you are not familiar with XML programming, all this may seem a bit daunting, but persevere as the sample code is pretty short. If you would like the script to process the OLAP database, simply uncomment the lines in the vicinity of the Process method. Of course, you can extend the script to perform your own custom processing and even to synchronize the database to a front-end server. Personally, I like to keep the deployment and processing separate, as the code can become overly complex for support staff. If you want to know more, come see my session at the forthcoming SQLBits conference.
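    Two pieces of plumbing the walkthrough relies on but does not show in full are the parsing of the Analysis Services connection string and the namespace manager behind the ns1 XPath prefix. Below is a minimal sketch of both, assuming the connection string follows the usual "Data Source=...;Initial Catalog=..." form; the variable names mirror those used above, and the snippet is intended to live inside the script task alongside the rest of the code, so it is a hedged illustration rather than part of the published sample.

    // Hypothetical sketch: derive the server and database names from the
    // TargetOlapServerCS connection manager (assumes the standard
    // "Data Source=...;Initial Catalog=..." key/value form).
    var csb = new System.Data.Common.DbConnectionStringBuilder();
    csb.ConnectionString = Dts.Connections["TargetOlapServerCS"].ConnectionString;
    string olapServerName = csb["Data Source"].ToString();
    string olapDatabaseName = csb["Initial Catalog"].ToString();

    // The XPath queries above use the ns1 prefix, which must be registered
    // against the XMLA engine namespace before SelectNodes/SelectSingleNode
    // will match anything.
    XmlNamespaceManager nsManager = new XmlNamespaceManager(xmlaCommand.NameTable);
    nsManager.AddNamespace("ns1", "http://schemas.microsoft.com/analysisservices/2003/engine");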

    Read the article

  • Oracle Tutor: Create Accessible Content for the Disabled Community

    - by emily.chorba(at)oracle.com
    For many reasons--legal, business, and ethical--Oracle recognizes the need for its applications, and our customers' and partners' products built with our tools, to be usable by the disabled community. The following features of Tutor Author and Publisher software facilitate the creation of accessible HTML content for the disabled community.
    Tables
    The following formatting guidelines will ensure that Tutor documents containing tables will be accessible once they are converted to HTML.
    • Determine whether a table is a "data table" or whether you are using a table simply for formatting. If it's a data table, you must use a heading for each column, and you should format this heading row as "table heading" style and select Table > Heading Rows Repeat.
    • For non-data tables, it is not necessary to include a heading row.
    Graphics
    To create accessible graphics, add a caption to the graphic. In Microsoft Office 2000 and greater, right-click on the graphic and select Format Picture > Web (tab) > Alternative Text, or select the graphic and then Format > Picture > Web (tab) > Alternative Text. Enter the appropriate information in the dialog box. When a document containing a graphic with alternative text is converted to HTML by Tutor, the HTML document will contain the appropriate accessibility information.
    JavaScript elements
    The tabbed format and other JavaScript elements in the HTML version of the Tutor documents may not be accessible to all users. A link to an accessible/printable version of the document is available in the upper right corner of all Tutor documents.
    Repetitive data
    If repetitive data such as the distribution section and the ownership section are causing accessibility issues with your Tutor documents, you can insert a bookmark in the appropriate location of the document, and, when the document is converted to HTML, the bookmark will be converted to an A NAME reference (also known as an internal link). With this reference, you can create a link in Header.txt that can be prepended to each Tutor document, allowing the user to bypass repetitive sections.
    Tutor and Oracle Applications
    Regarding accessibility, please check Oracle's website on accessibility http://www.oracle.com/accessibility/ to find out what version of E-Business Suite is certified to work with screen readers. Oracle Tutor 11.5.6A and greater works with screen readers such as JAWS. There is no certification between Oracle Tutor and Oracle Applications because there are no related dependencies. It doesn't matter which version of the Oracle Applications you are running, so it is possible to use Oracle Tutor with earlier versions of Oracle Applications.
    Oracle Business Process Converter and Oracle Applications
    Oracle Business Process Converter (OBPC) converts Visio, XPDL, and Tutor models to Oracle Business Process Architect and Oracle Business Process Management. The OBPC is one of a collection of plugins to Oracle JDeveloper. Please see the VPAT, as the same considerations apply.
    Learn More
    For more information about Tutor, visit Oracle.com or the Tutor Blog. Post your questions at the Tutor Forum.
    Emily Chorba
    Principal Product Manager, Oracle Tutor & BPM

    Read the article

  • Thoughts on the Nomination Committee Campaign 2014

    - by Testas
    Congratulations to Erin, Andy and Allen on making the Nomination Committee for 2014. As Mark Broadbent (@retracement) stated in his tweet, there's a great set of individuals for the Nom Com, and I could not agree more. I know Erin and Allen, and I know how much value they will bring to the process. I don't know Andy as well, but I am sure he will do a great job, and I hope I can meet him at PASS soon. The final candidate appointed by the PASS board is Rick Bolesta, who brings a wealth of experience to the process. I also want to take the opportunity to thank all who have voted -- not just for me, but for all the candidates during the election. Your contribution is greatly appreciated. Would I apply for the Nom Com again? Yes, I would. My first election experience has been a learning experience in itself, so I accept the result and look forward to applying next year.
    Moving on from this, I do want to express my opinion about the lack of international representation in the election process. One of the tweets that I saw after the result was from Adam Machanic (@AdamMachanic), who commented on the lack of international members on the Nom Com. If truth be told, I was disappointed -- when the candidate list was released -- that for the second time in recent elections there was a lack of international candidates on the candidate list. It feels as though only Brits and Americans partake in such elections. This is a real shame, and I can't help thinking why this is the case. Hugo Kornelis (@Hugo_Kornelis) wrote a blog here to express his thoughts, and he did raise some valid points. I don't know why there is an absence of international candidates. I know that the team at PASS are looking to improve the situation, so I do not want to give the impression that PASS are doing nothing. For reference, please see Bill Graziano's article here on how PASS are addressing the situation. There is a clear direction to change the rules within PASS to give greater inclusion of international members. In addition to this, I wanted to explore a couple of potential approaches to address the situation. I am not saying that they are the right answer, but when I see challenges, I like to bring potential solutions to the table.
    1. Use the PASS mission statement to define tactical objectives that engage community leaders in the election process.
    If you are not familiar with the PASS mission statement, let me provide it here as laid out on the PASS website: "Empower data professionals who leverage Microsoft technologies to connect, share, and learn through networking, knowledge sharing, and peer-based learning." PASS fulfils this mission statement regularly, whether you attend SQL Saturday, SQLRally, or SQLPASS and the BA conference itself. The biggest value of PASS is the ability to bring our profession together, and the 24 Hours of PASS allows you to learn from the comfort of your own office or home. This mission should be extended to define tactical objectives that bring greater networking and knowledge sharing between PASS Chapter leaders/Regional Mentors and PASS HQ. It should help educate the leaders about the opportunities of elections and how leaders can become involved. I know PASS engage with Chapter leaders on a regular basis to discuss community matters for the benefit of PASS members. How could this be achieved? Perhaps PASS could hold a quarterly virtual meeting that specifically looks at helping leaders become more involved with the election process.
    2. Evolve the Global Growth Strategy into a Global Engagement Strategy.
    One of the remits of the PASS board over the last couple of years has been the Global Growth strategy. This has been very successful, as we have seen massive growth of events across the world, and for that I congratulate the board. Perhaps the time is now right to look at solidifying this success through a Global Engagement Strategy that starts with the collaboration of Chapter Leaders, Regional Mentors and Evangelists in their respective countries or regions. The engagement strategy should look at increasing collaboration between community leaders for the benefit of their respective communities. It should also provide a channel for encouraging leaders to put themselves forward for the elections. How could this be achieved? In the UK, there has been such big growth in PASS Chapters and SQL Server events that it was approaching saturation point. The introduction of the Community Engagement Day -- channelled through the SQLBits conference -- has enabled Chapter Leaders to collaborate, connect and share with PASS, sponsors and Microsoft. It also provides the ability for Chapter Leaders to speak directly to the PASS representatives from PASS HQ, which brings with it the ability for PASS community evangelists to communicate PASS objectives. It has also been the event where we have discovered, and/or encouraged, Chapter Leaders to put themselves forward for elections. People like encouragement and validation when going for something like an election, and being able to discuss this with peers at a dedicated event provides a useful platform. PASS has the people in place already to facilitate such an event. Regional Mentors could potentially help organise such events on an annual basis, with PASS HQ providing support such as a room or Lync access for the event to take place. It would be really good if a PASS HQ representative could attend in person as well.
    3. Restrict candidates to serving only a limited number of terms.
    A frequent comment I saw on social networking was that the elections can be seen by some as a popularity contest. Perhaps by limiting the number of terms that an individual can serve on either the Nom Com or the Board of Directors, other candidates may be encouraged to become more actively involved in the PASS election process. I don't think that the current byelaws deal with this particular suggestion. I also saw a couple of tweets stating that more active community members did not apply for the Nom Com. I struggled to understand how the authors of those tweets measured "more active"; it just further underlined the subjective nature of elections. In the absence of a clear process for how candidates are put forward for the elections, a restriction on terms would enable the opportunity to be extended to others. How could this be achieved? Set a resolution that is put to a community vote on the viability of such a solution. For example, the questions for the vote could be: Should individuals on the Nom Com and BoD be limited to a certain number of terms? Yes/No. What is the maximum number of terms a candidate could serve? It would be simple to execute such a vote, and the community would have an opportunity to have a say in an important aspect of the PASS organisation. And if the change is successful, then add it as a byelaw.
    So there are some of my thoughts. I am not saying they are right or wrong.
    But I do hope that there is a concerted effort to encourage more candidates from other reaches of the globe to become involved with future elections. It would be good to hear your thoughts. Thanks, Chris

    Read the article

  • How to design a scalable notification system?

    - by Trent
    I need to write a notification system manager. Here are my requirements:
    • I need to be able to send a notification on different platforms, which may be totally different (for example, I need to be able to send either an SMS or an e-mail).
    • Sometimes the notification may be the same for all recipients on a given platform, but sometimes it may be one notification per recipient (or several) per platform.
    • Each notification can contain platform-specific payload (for example, an MMS can contain a sound or an image).
    • The system needs to be scalable: I need to be able to send a very large number of notifications without crashing either the application or the server.
    • It is a two-step process: first a customer may type a message and choose a platform to send to, and the notification(s) should be created to be processed either in real time or later. Then the system needs to send the notification to the platform provider.
    For now, I have ended up with some thoughts, but I don't know how scalable the design will be or whether it is a good one. I have thought of the following objects (in a pseudo-language):

    class Notification {
        String $message;
        Payload $payload;
        Collection<Recipient> $recipients;
    }

    The problem with this object is: what if I have 1,000,000 recipients? Even if the Recipient object is very small, it'll take too much memory. I could also create one Notification per recipient, but some platform providers require me to send in batch, meaning I need to define one Notification with several Recipients. Each created notification could be stored in persistent storage like a DB or Redis. Would it be a good idea to aggregate these later to make sure the system scales? On the second step, I need to process the notifications. But how could I route each notification to the right platform provider? Should I use an object like MMSNotification extending an abstract Notification, or something like Notification.setType('MMS')? To allow processing of many notifications at the same time, I think a message queue system like RabbitMQ may be the right tool. Is it? It would allow me to queue a lot of notifications and have several workers pop notifications and process them. But what if I need to batch the recipients as seen above? Then I imagine a NotificationProcessor object to which I could add NotificationHandler objects; each NotificationHandler would be in charge of connecting to the platform provider and performing the notification. I can also use an EventManager to allow pluggable behavior. Any feedback or ideas? Thanks for giving your time. Note: I'm used to working in PHP, and it is likely to be the language of my choice.
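    Since the question sketches its objects in pseudo-language, here is one minimal, hedged sketch of the subclass-plus-handler approach in C# (the same shape translates directly to PHP). All class and member names are illustrative assumptions, not an established API; the two ideas it shows are routing by platform through a handler registry and slicing recipients into provider-sized batches so a million-recipient notification never has to sit in memory as one object.

    using System;
    using System.Collections.Generic;
    using System.Linq;

    // An abstract notification with platform-specific subclasses carrying payload.
    abstract class Notification
    {
        public string Message { get; set; }
        public List<string> Recipients { get; } = new List<string>();
        public abstract string Platform { get; }
    }

    class SmsNotification : Notification
    {
        public override string Platform => "SMS";
    }

    class MmsNotification : Notification
    {
        public byte[] Image { get; set; }   // platform-specific payload
        public override string Platform => "MMS";
    }

    interface INotificationHandler
    {
        string Platform { get; }
        int BatchSize { get; }              // provider-imposed batch limit
        void Send(Notification notification, IReadOnlyList<string> recipientBatch);
    }

    // A toy handler that just logs; a real one would call the provider's API.
    class ConsoleSmsHandler : INotificationHandler
    {
        public string Platform => "SMS";
        public int BatchSize => 100;
        public void Send(Notification notification, IReadOnlyList<string> recipientBatch) =>
            Console.WriteLine($"SMS '{notification.Message}' to {recipientBatch.Count} recipients");
    }

    class NotificationProcessor
    {
        private readonly Dictionary<string, INotificationHandler> handlers =
            new Dictionary<string, INotificationHandler>();

        public void Register(INotificationHandler handler) =>
            handlers[handler.Platform] = handler;

        // In production this would be a pool of queue workers (e.g. consuming
        // from RabbitMQ) rather than an in-process call.
        public void Process(Notification notification)
        {
            var handler = handlers[notification.Platform];
            // Slice recipients into provider-sized batches instead of holding
            // one giant send in memory.
            for (int i = 0; i < notification.Recipients.Count; i += handler.BatchSize)
            {
                var batch = notification.Recipients
                    .Skip(i).Take(handler.BatchSize).ToList();
                handler.Send(notification, batch);
            }
        }
    }

    class Program
    {
        static void Main()
        {
            var processor = new NotificationProcessor();
            processor.Register(new ConsoleSmsHandler());

            var sms = new SmsNotification { Message = "Hello" };
            sms.Recipients.AddRange(Enumerable.Range(0, 250).Select(i => "+1555000" + i));
            processor.Process(sms);   // three batches: 100, 100, 50
        }
    }

    In the same spirit, the recipient list would normally be streamed in pages from the persistent store (DB or Redis) rather than loaded into the Notification up front; the batching loop above is where that paging would plug in.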

    Read the article

  • Oracle E-Business Suite Release 12: Upgrading Customizations

    - by Oracle_EBS
    Please consider reviewing the following NEW whitepaper to help with the process of upgrading customizations to Release 12: Upgrading your Customizations to Oracle E-Business Suite Release 12.1 (Note 1435894.1). This document discusses upgrading Oracle E-Business Suite customizations in the context of the following process:
    • Creating an Inventory of Your Existing Customizations
    • Comparing Customizations to Release 12
    • Upgrading Customizations
    • Reimplementing Customizations
    • Creating Future Customizations

    Read the article

  • SQL Server MCM is too easy, is it?

    - by simonsabin
    We all know that Brent Ozar did the MCM training/certification over the past few weeks. He wrote an interesting article on Friday about the bad bits ( http://www.brentozar.com/archive/2010/04/sql-mcm-now-bad-stuff/ ) of the training, and it led me to thinking about the certification process again (I often think about it, and it often appears in response to something from Brent http://sqlblogcasts.com/blogs/simons/archive/2010/02/12/Whats-missing-in-the-SQL-Certification-process-.aspx ) This time what...(read more)

    Read the article

  • Documenting Business Processes and Capturing Organizational Knowledge with Oracle Tutor 12.2

    Organizations can master the challenges of documenting business processes and capturing organizational knowledge with Oracle Tutor. They can also solve the documentation challenges they face during an implementation/upgrade and satisfy business process regulatory compliance initiatives. Oracle Tutor can help project teams lay the foundation for a successful application rollout or compliance audit by quickly and consistently creating and sustaining employee process documentation throughout the business lifecycle.

    Read the article

  • The Road to Professional Database Development: Database Normalization

    Not only is the process of normalization valuable for increasing data quality and simplifying the process of modifying data, but it actually makes the database perform much faster. To prove the point, Peter Larsson takes a large unnormalised database and subjects it to successive stages of normalisation.

    Read the article

  • Easy iOS 6.1.4/6.1.3 Unlock/Jailbreak iPhone 5/4S/4/3GS Untethered System

    - by user171772
    Popular jailbreak tool Unlock-Jailbreak.net – compiled by the iPhone Team – has just been updated with full support for unlocking/jailbreaking the iPhone 5/4S/4/3GS on iOS 6.1.4 and 6.1.3/6.0.1 untethered. You may have caught our tutorial, which detailed how one could jailbreak their device tethered using Redsn0w, although since it was a pre-iOS 6.1.1 release, users needed to "point" the tool to the older firmware. Team Unlock-Jailbreak, established a few years ago, combines some of the jailbreak and unlock community's most talented developers, all known for producing reliable jailbreaks in the past. This team was assembled in order to develop a reliable untethered jailbreak and unlock for the iPhone 5/4S/4 on iOS 6.1 for post-A5 devices, including the iPhone 5, the iPad mini and the latest-generation iPad. This has now been achieved with the just-released userland jailbreak tool, known as Unlock-Jailbreak.net. To jailbreak and unlock your iPhone 5/4S/4/3GS on iOS 6.1.4 and 6.1.3, visit the official website http://www.Unlock-Jailbreak.net. Unlock-Jailbreak.net was formed in mid-2008 and has successfully jailbroken over 250,000 iPhones worldwide. This is unparalleled by any other service in the industry. They have achieved this by combining a very simple solution with a fantastic customer service department that is available 24/7 through many forms of contact, including telephone. Unlock-Jailbreak from Unlock-Jailbreak.net has been downloaded by over 250,000 customers located in over 145 countries. To further assure customers of its product's usability, Unlock-Jailbreak offers a 100% full money-back guarantee on all orders. Customers dissatisfied with the company's product will be given a full refund, no questions asked. One good advantage of the software is that the jailbreaking and unlocking process is completely reversible, and there will be no evidence that the iPhone has been jailbroken and unlocked. iOS 6.1/6.1.4 and 6.1.3 come with many new features and updates for multitasking and storage. By unlocking and jailbreaking the iPhone, you unleash unlimited possibilities to improve this already fantastic experience and realise the iPhone's full potential. Before going through any jailbreak process with Unlock-Jailbreak, it is always good housekeeping to perform a full backup of all information on the device. It is unlikely that anything will go wrong during the process, but when undertaking any process that modifies the internals of a file system, it is always prudent to err on the side of caution.

    Read the article
