Search Results

Search found 10764 results on 431 pages for 'extending ruby'.


  • MySQL my.cnf file not being read, Ubuntu 10.04 64bit

    - by reallyordinary
    I've been researching this for a few hours with no luck. Basically it looks like my server's my.cnf file isn't being read at all. I've searched my server, and there's only one my.cnf file on it, located at /etc/mysql/my.cnf. Its ownership is root:root. I'm running Ubuntu 10.04 64bit on a Linode.com server. I have the latest versions of MySQL and PHP installed.

    I've edited the my.cnf file, commented out "skip-innodb", and have set InnoDB to be the default storage engine using default-storage-engine = innodb, and then restarted MySQL. But when I do show engines, MyISAM is still coming up as the default engine. Also - none of the InnoDB settings I've added to the my.cnf file are being read. For example, I have this in my.cnf: innodb_buffer_pool_size=4G. But in phpMyAdmin, InnoDB is showing as having a buffer pool size of 8,192 KiB. Similarly, I have this in the my.cnf: innodb_data-file_path = ibdata1:500M:autoextend. But in phpMyAdmin, it's reading as ibdata1:10M:autoextend.

    It doesn't look like MyISAM info is being read from the my.cnf file either. The my.cnf file has skip-external-locking commented out, but it's showing as "on" in phpMyAdmin. So - yeah, it looks like nothing in the my.cnf file is being read at all. But the server still works. I'm running a Drupal site on it and it seems to operate fine. So MySQL seems to be drawing default settings from... some mysterious secret location. Any idea how I can make MySQL see and use this my.cnf file?

    Actually, wait - it looks like it may be being read, not sure. I checked the error.log and found this:

    101128 4:28:52 [ERROR] Cannot find or open table databasename/cache_apachesolr from the internal data dictionary of InnoDB though the .frm file for the table exists. Maybe you have deleted and recreated InnoDB data files but have forgotten to delete the corresponding .frm files of InnoDB tables, or you have moved .frm files to another database? or, the table contains indexes that this version of the engine doesn't support. See http://dev.mysql.com/doc/refman/5.1/en/innodb-troubleshooting.html how you can resolve the problem.
    InnoDB: Error: auto-extending data file ./ibdata1 is of a different size
    InnoDB: 640 pages (rounded down to MB) than specified in the .cnf file:
    InnoDB: initial 32000 pages, max 0 (relevant if non-zero) pages!
    InnoDB: Could not open or create data files.
    InnoDB: If you tried to add new data files, and it failed here,
    InnoDB: you should now edit innodb_data_file_path in my.cnf back
    InnoDB: to what it was, and remove the new ibdata files InnoDB created
    InnoDB: in this failed attempt. InnoDB only wrote those files full of
    InnoDB: zeros, but did not yet use them in any way. But be careful: do not
    InnoDB: remove old data files which contain your precious data!
    101128 4:28:52 [ERROR] Plugin 'InnoDB' init function returned error.
    101128 4:28:52 [ERROR] Plugin 'InnoDB' registration as a STORAGE ENGINE failed.
    101128 4:28:52 [ERROR] /usr/sbin/mysqld: unknown variable 'innodb_lock_wait_timout=50'
    101128 4:28:52 [ERROR] Aborting
    101128 4:28:52 [Note] /usr/sbin/mysqld: Shutdown complete
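    The error log above actually points to the likely culprit: mysqld stops at the unknown (misspelled) variable 'innodb_lock_wait_timout' and aborts, so none of the settings in /etc/mysql/my.cnf take effect and the running server falls back to built-in defaults. As a hedged illustration only (option names per MySQL 5.1, values taken from the question), the corrected entries would look something like this:

        # /etc/mysql/my.cnf -- corrected [mysqld] entries (illustrative)
        [mysqld]
        default-storage-engine   = innodb
        innodb_buffer_pool_size  = 4G
        innodb_data_file_path    = ibdata1:500M:autoextend   # underscores, not "innodb_data-file_path"
        innodb_lock_wait_timeout = 50                        # was misspelled "timout", which aborted startup

    Note that the log also complains that the existing ./ibdata1 file (640 pages) does not match the 500M specification, so the data file path either needs to describe the file already on disk or the stray ibdata files need to be handled as the log message describes before InnoDB will start.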

    Read the article

  • FFmpeg Video Hosting for Linux and Windows Server

    - by Aditi
    FFmpeg hosting is a special type of web hosting where the host servers have video transcoding software loaded on them, which allows the automatic conversion of videos from one format to another. FFmpeg is a cross-platform solution for recording, converting, transcoding and streaming audio and video. It includes libavcodec - the leading audio/video codec library. FFmpeg hosting gets its name from a set of server-side programs (modules) called FFmpeg. There are a number of applications and web scripts available which allow webmasters to create their own video sharing websites. Video hosting typically requires: PHP 4.3 and above (including support for CLI), Mencoder and also Mplayer, FFmpeg-PHP, a MySQL database server, the LAME MP3 encoder, Libogg + Libvorbis, GD Library 2 or higher, and CGI-BIN access. A number of web service providers offer FFmpeg hosting; the following is a list of some of the best FFmpeg hosting providers for both Linux and Windows servers.

    Dream Host - Dreamhost provides web-based email access, mail filtering, spam filtering, unlimited email IDs, a vacation autoresponder, Python support, full CGI access and many more services. Price: $7.95 View Details

    Micfo - It offers unlimited disk space and bandwidth. Other services include a free domain for life and free website transfer, with many more services. All in all, one of the best options to consider. Price: $5 View Details

    Host Upon - HostUpon offers FFmpeg hosting on all their hosting packages, with readily installed modules to start a video website or social network with video uploading. Scripts such as Boonex Dolphin / PHPMotion / Social Engine / ABKsoft Scripts / Joomla Video Plugin / Clipshare / ClipBucket / Social Media / Rayzz / Vidi Script work with their FFmpeg. Their FFmpeg hosting plan offers 24/7/365 support with a typical response time of 15 minutes or less. Price: $5.95 View Details

    DownTown Host - DownTown Host provides full and exceptional support by live chat and telephone. It has high-powered, modern servers and the finest web server technology. It offers free search engine submission and continuous data backup protection, with free email forwarding and site moves. There are many more services too.

    Site5 - This FFmpeg service provider offers an uptime guarantee, real-time stats on each server and many more attractive services. Price: $4.95 View Details

    Cirtex Hosting - Cirtex Hosting allows you to host 7 websites & domains and provides unlimited storage space and monthly bandwidth. It also offers FTP and email accounts and many more services. Price: $2.49 View Details

    FLV Hosting - FLV Hosting supplies RTMP server streaming for large-size video streaming and server-side recording. It is flexible and costs less. They customize to the client's requirements. Price: $9.95 View Details

    AptHost - This hosting service provides 24x7x365 premium support and fully FFmpeg-enabled services. Price: $4.95 View Details

    HostMDS - Great support, priced low. It provides SSH access, CGI, Ruby on Rails, Perl, PHP, MySQL, FrontPage extensions, 24/7 support, free domain transfer and spam filtering. It offers instant account setup, low-latency fast bandwidth & much more! They were formerly known as Vistapages. Price: $4.95 View Details

    Related posts: Best WordPress Video Themes for a Video Blog | Free Web Based Applications | 24+ Coda Alternatives for Windows and Linux

    Read the article

  • Granular Clipboard Control in Oracle IRM

    - by martin.abrahams
    One of the main leak prevention controls that customers are looking for is clipboard control. After all, there is little point in controlling access to a document if authorised users can simply make unprotected copies by use of the cut and paste mechanism. Oddly, for such a fundamental requirement, many solutions only offer very simplistic clipboard control - and require the customer to make an awkward choice between usability and security. In many cases, clipboard control is simply an ON-OFF option.

    By turning the clipboard OFF, you disable one of the most valuable edit functions known to man. Try working for any length of time without copying and pasting, and you'll soon appreciate how valuable that function is. Worse, some solutions disable the clipboard completely - not just for the protected document but for all of the various applications you have open at the time. Normal service is only resumed when you close the protected document. In this way, policy enforcement bleeds out of the particular assets you need to protect and interferes with the entire user experience.

    On the other hand, turning the clipboard ON satisfies a fundamental usability requirement - but also makes it really easy for users to create unprotected copies of sensitive information, maliciously or otherwise. All they need to do is paste into another document. If creating unprotected copies is this simple, you have to question how much you are really gaining by applying protection at all. You may not be allowed to edit, forward, or print the protected asset, but all you need to do is create a copy and work with that instead. And that activity would not be tracked in any way. So, a simple ON-OFF control creates a real tension between usability and security. If you are only using IRM on a small scale, perhaps security can outweigh usability - the business can put up with the restriction if it only applies to a handful of important documents. But try extending protection to large numbers of documents and large user communities, and the restriction rapidly becomes really unwelcome.

    I am aware of one solution that takes a different tack. Rather than disable the clipboard, pasting is always permitted, but protection is automatically applied to any document that you paste into. At first glance, this sounds great - protection travels with the content. However, at any scale this model may not be so appealing once you've had to deal with support calls from users who have accidentally applied protection to documents that really don't need it - which would be all too easily done. This may help control leakage, but it also pollutes the system with documents that have policies applied with no obvious rhyme or reason, and it can seriously inconvenience the business by making non-sensitive documents difficult to access. And if you paste some protected content into an already protected document, which policy applies?

    There are no prizes for guessing that Oracle IRM takes a rather different approach.

    The Oracle IRM Approach

    Oracle IRM offers a spectrum of clipboard controls between the extremes of ON and OFF, and it leverages the classification-based rights model to give granular control that satisfies both security and usability needs. Firstly, we take it for granted that if you have EDIT rights, of course you can use the clipboard within a given document. Why would we force you to retype a piece of content that you want to move from HERE... to HERE...?
If the pasted content remains in the same document, it is equally well protected whether it be at the beginning, middle, or end - or all three. So, the first point is that Oracle IRM always enables the clipboard if you have the right to edit the file. Secondly, whether we enable or disable the clipboard, we only affect the protected document. That is, you can continue to use the clipboard in the usual way for unprotected documents and applications regardless of whether the clipboard is enabled or disabled for the protected document(s). And if you have multiple protected documents open, each may have the clipboard enabled or disabled independently, according to whether you have Edit rights for each. So, even for the simplest cases - the ON-OFF cases - Oracle IRM adds value by containing the effect to the protected documents rather than to the whole desktop environment. Now to the granular options between ON and OFF. Thanks to our classification model, we can define rights that enable pasting between documents in the same classification - ie. between documents that are protected by the same policy. So, if you are working on this month's financial report and you want to pull some data from last month's report, you can simply cut and paste between the two documents. The two documents are classified the same way, subject to the same policy, so the content is equally safe in both documents. However, if you try to paste the same data into an unprotected document or a document in a different classification, you can be prevented. Thus, the control balances legitimate user requirements to allow pasting with legitimate information security concerns to keep data protected. We can take this further. You may have the right to paste between related classifications of document. So, the CFO might want to copy some financial data into a board document, where the two documents are sealed to different classifications. The CFO's rights may well allow this, as it is a reasonable thing for a CFO to want to do. But policy might prevent the CFO from copying the same data into a classification that is accessible to external parties. The above option, to copy between classifications, may be for specific classifications or open-ended. That is, your rights might enable you to go from A to B but not to C, or you might be allowed to paste to any classification subject to your EDIT rights. As for so many features of Oracle IRM, our classification-based rights model makes this type of granular control really easy to manage - you simply define that pasting is permitted between classifications A and B, but omit C. Or you might define that pasting is permitted between all classifications, but not to unprotected locations. The classification model enables millions of documents to be controlled by a few such rules. Finally, you MIGHT have the option to paste anywhere - such that unprotected copies may be created. This is rare, but a legitimate configuration for some users, some use cases, and some classifications - but not something that you have to permit simply because the alternative is too restrictive. As always, these rights are defined in user roles - so different users are subject to different clipboard controls as required in different classifications. So, where most solutions offer just two clipboard options - ON-OFF or ON-but-encrypt-everything-you-touch - Oracle IRM offers real granularity that leverages our classification model. 
Indeed, I believe it is the lack of a classification model that makes such granularity impractical for other IRM solutions, because the matrix of rules for controlling pasting would be impossible to manage - there are so many documents to consider, and more are being created all the time.

    Read the article

  • SQLAuthority News – Amazon Gift Card Raffle for Beta Tester Feedback for NuoDB

    - by pinaldave
    As regular readers know I've been spending some time working with the NuoDB beta software. They contacted me last week and asked if I would give you a chance to try their new web-based console for their scalable, SQL-compliant database. They have just put out their final beta release, Beta 9. It contains a preview of a new web-based "NuoConsole" that will replace and extend the functionality of their current desktop version. I haven't spent any time with the new console yet but a really quick look tells me it should make it easier to do deeper monitoring than the older one. It also looks like they have added query-level reporting through the console. I will try to play with it soon. NuoDB is doing a last, big push to get some more feedback from developers before they release their 1.0 product sometime in the next several weeks. Since the console is new, they are especially interested in some quick feedback on it before general availability. For SQLAuthority readers only, NuoDB will raffle off three $50 Amazon gift cards in exchange for your feedback on the NuoConsole preview.

    Here's how to enter:
    1. Download NuoDB Beta 9 here. You must build a domain before you can start the console.
    2. Launch the Web Console.
       Windows Code: start java -jar jar\nuodbwebconsole.jar
       Mac, Linux, Solaris, Unix Code: java -jar jar/nuodbwebconsole.jar
    3. Access the Web Console: Code: http://localhost:8080
    4. When you have tried it out, go to a short (8-question) survey to enter the raffle. Click here for the survey. You must complete the survey before midnight EDT on October 17, 2012.

    Here's what else they are saying about this last beta before general availability:
    - Beta 9 now supports the Zend PHP framework so that PHP developers can directly integrate web applications with NuoDB.
    - Multi-threaded HDFS support - NuoDB Storage Managers can now be configured to persist data to the high-performance Hadoop distributed file system (HDFS). Beta 9 optimizes for multi-threaded I/O streams at maximum performance. This enhancement allows users to make Hadoop their core storage with no extra effort, which is a pretty cool idea.
    - Improved performance - On a single transaction node, Beta 9 offers performance comparable with MySQL and MariaDB. As additional nodes are added, NuoDB performance improves significantly at near-linear scale.
    - Query & explain plan logging - Beta 9 introduces SQL explain plans for your queries. Qualify queries with the word "EXPLAIN" and NuoDB will respond with the details of the execution plan, allowing performance optimization of SQL. Through the NuoConsole, you can now kill hung or long-running queries.
    - Java app server support - Beta 9 now supports leading JEE web app servers including JBoss, Tomcat, and ColdFusion.

    They've also reported: improved PHP/PDO drivers, support for Drupal, a faster Ruby on Rails driver, a Hibernate Dialect that supports version 4.1, and good news for my readers: numerous SQL enhancements. They will share the results of the web console feedback with me. I'll let you know how it goes. Also the winner of their last contest was Jaime Martínez Lafargue! Do leave a comment here once you complete the survey. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: SQL Authority Tagged: NuoDB

    Read the article

  • Taking the training wheels off: Accelerating the Business with Oracle IAM by Brian Mozinski (Accenture)

    - by Greg Jensen
    Today, technical requirements for IAM are evolving rapidly, and the bar is continuously raised for high-performance IAM solutions as organizations look to roll out high-volume use cases on the back of legacy systems. Existing solutions were often designed and architected to support offline transactions and manual processes, while business owners today demand globally scalable infrastructure to support the growth their business cases are expected to deliver. To help IAM practitioners address these challenges and make their organizations and themselves more successful, in this series we will cover:
    • Taking the training wheels off: Accelerating the Business with Oracle IAM - The explosive growth in expectations for IAM infrastructure, and the business cases they support to gain investment in new security programs.
    • "Necessity is the mother of invention": Technical solutions developed in the field - Well-proven tricks of the trade, used by IAM gurus to maximize your solution while addressing the requirements of global organizations.
    • The Art & Science of Performance Tuning of Oracle IAM 11gR2 - Real-world examples of performance tuning with Oracle IAM.
    • Nowhere to go but up: Extending the benefits of accelerated IAM - Anything is possible; compelling new solutions organizations are unlocking with accelerated Oracle IAM.

    Let's get started by talking about the changing dynamics driving these discussions. Big companies are getting bigger every day, and increasingly organizations operate across state lines, multiple time zones, and in many countries or continents at the same time. No longer is midnight to 6am a safe time to take down the system for upgrades, to run recons and import or update user accounts and attributes. Further, IT organizations are operating as shared services with SLAs similar to the telephone-carrier levels expected by their "clients". Workers are moved in and out of roles at a weekly, daily, or even hourly rate, and IAM is expected to support those rapid changes. End users registering for services during business hours in Singapore expect their access to be green-lighted in custom apps hosted in Portugal within the hour. Many of the expectations built around asynchronous systems and batched updates are no longer adequate, and the number and types of users keep growing.

    When organizations acted more like independent teams at functional or geographic levels, it was manageable to have processes that relied on a handful of people who knew how to make things work - knew how to get you access to the key systems to get your job done. Today everyone is expected to do more with less: the finance administrator previously supporting their local Atlanta sales office might now be asked to help close the books for the Johannesburg team, and the access certification process once completed monthly by Joan on the 3rd floor is now done by a shared pool of resources in Sao Paulo. Fragmented processes that rely on institutional knowledge to get access to systems and get work done quickly break down in these scenarios. Highly robust processes that have automated workflows for connected or disconnected systems give organizations the dynamic flexibility to share work across these lines and cut costs or increase productivity. As the IT industry's computing paradigms continue to change with the passing of time, and as mature or proven approaches become clear, it is normal for organizations to adjust accordingly.

    Businesses must manage identity in an increasingly hybrid world in which legacy on-premises IAM infrastructures are extended or replaced to support more and more interconnected and interdependent services for a wider range of users. The old legacy IAM implementation models we had relied on to manage identities no longer apply. End users expect to self-request access to services from their tablet, get supervisor approval over mobile devices and email, and launch the application even if it is hosted on the cloud, or run by a partner, vendor, or service provider. While user expectations are higher, they are also simpler: logging into custom desktop apps to request approvals, or going through email- or paper-based processes for certification, is unacceptable. Users expect security to operate within the paradigm of the application - i.e. to feel like the application they are using. Citizen- and customer-facing applications have evolved from everywhere, with custom applications, 3rd-party tools, and applications merged in from acquired entities or 3rd-party OEMs resold to expand your portfolio of services. These all have their own user stores, authentication models, user lifecycles, session management, etc. Often the designers/developers are no longer accessible and the documentation is limited. Bringing together the underlying directories to scale for growth and improve user experience is critical for revenue, but also for operations.

    Job functions are more dynamic - take the Olympics, for example. Endless organizations, from corporations broadcasting, endorsing, or marketing through the event, to non-profit athletic foundations and public/government entities for athletes and public safety, all operate simultaneously on the world stage. Each organization needs to spin up short-term teams, often dealing with proprietary information from hot ads to racing strategies or security plans. IAM is expected to enable teams to spin up, enable new applications, protect privacy, and secure critical infrastructure. Then it needs to be disabled just as quickly as users go back to their previous responsibilities.

    On a more technical level, businesses today need optimized system directories and tuning guidelines and parameters. They need to make the right choices (for example, virtual directories) and to assess and choose the correct architectural patterns (centralized, virtualized, or distributed; virtual, direct, or replicated) and tune them accordingly. Today's business organizations have very complex, heterogeneous enterprises that contain diverse and multifaceted information. With today's ever-changing global landscape, the strategic end goal for business in challenging times is business agility. The business of identity management requires enterprises to be more agile and more responsive than ever before. The continued proliferation of networked devices (PCs, tablets, PDAs, notebooks, etc.) has caused the number of devices, and the number of users granted access to them, to grow exponentially. Business needs to deploy an IAM system that can account for the demands for authentication and authorization across these devices. Increased innovation is forcing businesses and organizations to centralize their identity management services.

    Access management needs to handle traditional web-based access as well as new innovations around mobile, and it also needs to address insufficient governance processes, which can lead to rogue identity accounts that become a source of vulnerabilities within a business's identity platform. Risk-based decisions present their own challenges: an adaptive risk model has to make proper access decisions via standard web single sign-on for internal and external customers. Organizations have to move beyond simple logins and passwords to address trusted-relationship questions such as: Is this a trusted customer, client, or citizen? Is this a trusted employee, vendor, or partner? Is this a trusted device? Without a solid technological foundation, organizational performance, collaboration, constituent services, and other organizational processes will languish. A single server location presents not only network concerns for a distributed user base, but identity challenges as well. The network risks center on the latency of the long trip the traffic has to take; other risks concern performance and availability - if the single identity server is lost, all access is lost. As you can see, there are many reasons why performance tuning IAM will have a substantial impact on the success of your organization. In our next installment in the series we roll up our sleeves and get into detailed tuning techniques used every day by thought leaders in the field implementing Oracle Identity & Access Management solutions.

    Read the article

  • IIS Logfile Visualization with XNA

    - by BobPalmer
    In my office, I have a wall-mounted monitor whose whole purpose in life is to display perfmon stats from our various servers. And on a fairly regular basis, I have folks walk by asking what the lines mean. After providing the requisite explanation about CPU utilization, disk I/O bottlenecks, etc., this is usually followed by some blank stares from the user in question, and a distillation of all of our engineering wizardry down to the phrase 'So when the red line goes up that's bad then?' This of course would not do. So I talked to my friends and our network admin about an option to show something more eye-catching and visual, with which we could catch at a glance a feel for what was up with our site. He initially pointed me to a video showing GLTail and Chipmunk done in Ruby. Realizing this was both awesome, and that I needed an excuse to do something in XNA, I decided to knock out a proof of concept for something very similar, but with a few tweaks.

    Here's a link to a video of the current prototype: http://www.youtube.com/watch?v=jM_PWZbtH2I

    Essentially this app opens up a log file (even an active one) and begins pulling out the lines of text. (Here's a good Code Project link that covers how to do tail reading from an active text file: http://www.codeproject.com/KB/files/tail.aspx). As new data is added, a bubble is generated in the application - a GET statement comes from the left, and a POST from the right. I then run it through a series of expression checkers, and based on the kind of statement and the pattern, a bubble of an appropriate color is generated. For example, if I get a 500, a huge red bubble pops out. Others are based on the part of the system the page is from - i.e. green bubbles are from our claims management subsystem, and blue bubbles are from the pages our scheduling staff use to schedule patients. Others include the purple bubbles for security and login, and yellow bubbles for some miscellaneous pages. The little grey bubbles represent things like images, JS, CSS, etc. - and their small size makes them work like grease to keep the larger page bubbles moving. The app is also smart enough that if it is starting to bog down with handling the physics and interactions, it will suspend new bubbles until enough have dropped off that performance can resume (you can see this slight stuttering in the sample video).

    The net result is that anyone will be able to look up at the wall monitor and instantly get a quick feel for how things are going on the floor. Website slow? You can get a feel for both volume and utilized modules with one glance. Website crashing? Look for a wall of giant red bubbles. No activity at all? Maybe the site is down. Now couple this with utilization within a farm, and cross-referenced with a second app showing the same kind of data from your SQL database...

    As for the app itself, it's a Windows XNA project with the code in C#. The physics are handled by the Farseer physics engine for XNA (http://www.codeplex.com/FarseerPhysics), which is just pure goodness. The samples are great, and I had the app up and working in two evenings (half of that was fine tuning, and the other was me coding with a kid in my lap). My next steps include wiring this to SQL (I have some ideas...) and adding a nice configuration module. For example, you could use polygons, etc. to tie to your regex - or more entertaining things like having a little human ragdoll to represent a user login.
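    To make the "expression checker" step a little more concrete, here is a rough, hypothetical C# sketch of that classification pass - the regex patterns, URL fragments, and colour/size values are illustrative stand-ins, not the actual rules from the prototype:

        using System.Text.RegularExpressions;

        // Hypothetical sketch of the log-line classification described above.
        public static class LogLineClassifier
        {
            // Each rule pairs a pattern against the raw IIS log line with a bubble colour and relative size.
            private static readonly (Regex Pattern, string Colour, float Size)[] Rules =
            {
                (new Regex(@" 50\d "),                                           "Red",    3.0f), // 500-class errors: huge red bubble
                (new Regex(@"\.(gif|png|jpg|js|css) ", RegexOptions.IgnoreCase), "Grey",   0.3f), // images/JS/CSS: small grey "grease"
                (new Regex(@"/claims/",   RegexOptions.IgnoreCase),              "Green",  1.0f), // claims management subsystem
                (new Regex(@"/schedule/", RegexOptions.IgnoreCase),              "Blue",   1.0f), // patient scheduling pages
                (new Regex(@"/(login|security)/", RegexOptions.IgnoreCase),      "Purple", 1.0f), // security and login
            };

            // Returns the colour and size of the bubble to spawn for one log line;
            // GETs enter from the left of the screen, POSTs from the right.
            public static (string Colour, float Size, bool FromLeft) Classify(string logLine)
            {
                bool fromLeft = logLine.Contains(" GET ");

                foreach (var (pattern, colour, size) in Rules)
                    if (pattern.IsMatch(logLine))
                        return (colour, size, fromLeft);

                return ("Yellow", 1.0f, fromLeft); // miscellaneous pages
            }
        }

    Each classified line would then simply be handed to the physics engine as a new bubble body with the corresponding colour and radius.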
    Once that's wrapped up and I have a chance to complete some hardening, I will be releasing the whole thing into the wild as open source. Feel free to ping me if you have any questions! -Bob

    Read the article

  • How you can extend Tasklists in Fusion Applications

    - by Elie Wazen
    In this post we describe the process of modifying and extending a Tasklist available in the Regional Area of a Fusion Applications UI Shell. This is particularly useful to Customers who would like to expose Setup Tasks (generally available in the Fusion Setup Manager application) in the various functional pillars workareas. Oracle Composer, the tool used to implement such extensions allows changes to be made at runtime. The example provided in this document is for an Oracle Fusion Financials page. Let us examine the case of a customer role who requires access to both, a workarea and its associated functional tasks, and to an FSM (setup) task.  Both of these tasks represent ADF Taskflows but each is accessible from a different page.  We will show how an FSM task is added to a Functional tasklist and made accessible to a user from within a single workarea, eliminating the need to navigate between the FSM application and the Functional workarea where transactions are conducted. In general, tasks in Fusion Applications are grouped in two ways: Setup tasks are grouped in tasklists available to implementers in the Functional Setup Manager (FSM). These Tasks are accessed by implementation users and in general do not represent daily operational tasks that fit into a functional business process and were consequently included in the FSM application. For these tasks, the primary organizing principle is precedence between tasks. If task "Manage Suppliers" has prerequisites, those tasks must precede it in a tasklist. Task Lists are organized to efficiently implement an offering. Tasks frequently performed as part of business process flows are made available as links in the tasklist of their corresponding menu workarea. The primary organizing principle in the menu and task pane entries is to group tasks that are generally accessed together. Customizing a tasklist thus becomes required for business scenarios where a task packaged under FSM as a setup task, is for a particular customer a regular maintenance task that is accessed for record updates or creation as part of normal operational activities and where the frequency of this access merits the inclusion of that task in the related operational tasklist A user with the role of maintaining Journals in General Ledger is also responsible for maintaining Chart of Accounts Mappings.  In the Fusion Financials Product Family, Manage Journals is a task available from within the Journals Menu whereas Chart of Accounts Mapping is available via FSM under the Define Chart of Accounts tasklist Figure 1. The Manage Chart of Accounts Mapping Task in FSM Figure 2. The Manage Journals Task in the Task Pane of the Journals Workarea Our goal is to simplify cross task navigation and allow the user to access both tasks from a single tasklist on a single page without having to navigate to FSM for the Mapping task and to the Journals workarea for the Manage task. To accomplish that, we use Oracle Composer to customize  the Journals tasklist by adding to it the Mapping task. Identify the Taskflow name and path of the FSM Task The first step in our process is to identify the underlying taskflow for the Manage Chart of Accounts Mappings task. We select to Setup and Maintenance from the Navigator to launch the FSM Application, and we query the task from Manage Tasklists and Tasks Figure 3. 
    Task Details including Taskflow path

    The Manage Chart of Accounts Mapping Task Taskflow is: /WEB-INF/oracle/apps/financials/generalLedger/sharedSetup/coaMappings/ui/flow/CoaMappingsMainAreaFlow.xml#CoaMappingsMainAreaFlow

    We copy that value and use it later as a parameter to our new task in the customized Journals Tasklist.

    Customize the Journals Page

    A user with Administration privileges can start the runtime customization directly from the Administration Menu of the Global Area. This customization is done at the Site level and, once implemented, becomes available to all users with access to the Journals Workarea. Figure 4. Customization Menu

    The Oracle Composer Window is displayed in the same browser and the hierarchy of the page components is displayed and available for modification. Figure 5. Oracle Composer

    In the Composer Window select the PanelFormLayout node and click on the Edit Button. Note that the selected component is simultaneously highlighted in the lower pane in the browser. In the Properties popup window, select the Tasks List and Task Properties Tab, where the user finds the hierarchy of the Tasklist and is able to edit nodes or create new ones. Figure 6. The Tasklist in edit mode

    Add a Child Task to the Tasklist

    In the Edit Window the user will now create a child node at the desired level in the hierarchy by selecting the immediate parent node and clicking on the insert node button. This process requires four values to be set, as described in Table 1 below.

    Table 1. Parameters and Values for the Task to be added to the customized Tasklist
    - Parameter: Focus View Id | Value: /JournalEntryPage | How to determine the value: This is the Focus View ID of the UI Shell where the Tasklist we want to customize is. A simple way to determine this value is to copy it from any of the standard tasks on the Tasklist.
    - Parameter: Label | Value: COA Mapping | How to determine the value: This is the display name of the Task as it will appear in the Tasklist.
    - Parameter: Task Type | Value: dynamicMain | How to determine the value: If the value is dynamicMain, the page contains a new link in the Regional Area. When you click the link, a new tab with the loaded task opens.
    - Parameter: Taskflowid | Value: /WEB-INF/oracle/apps/financials/generalLedger/sharedSetup/coaMappings/ui/flow/CoaMappingsMainAreaFlow.xml#CoaMappingsMainAreaFlow | How to determine the value: This is the Taskflow path we retrieved from the Task Definition in FSM earlier in the process.

    Figure 7. The parameters window of the newly added Task

    Access the FSM Task from the Journals Workarea

    Once the FSM task is added and its parameters defined, the user saves the record and closes the Composer, making the new task immediately available to users with access to the Journals workarea (refer to Figure 8 below). Figure 8. The COA Mapping Task is now visible and can be invoked from the Journals Workarea

    Additional Considerations

    If a Task Flow is part of a product that is deployed on the same app server as the Tasklist workarea, then that task flow can be added to a customized tasklist in that workarea. Otherwise that task flow can be invoked from its parent product's workarea tasklist by selecting that workarea from the Navigator menu. For example, the following Taskflows belong respectively to the Subledger Accounting and the General Ledger products:
    /WEB-INF/oracle/apps/financials/subledgerAccounting/accountingMethodSetup/mappingSets/ui/flow/MappingSetFlow.xml#MappingSetFlow
    /WEB-INF/oracle/apps/financials/generalLedger/sharedSetup/coaMappings/ui/flow/CoaMappingsMainAreaFlow.xml#CoaMappingsMainAreaFlow

    Since both the Subledger Accounting and General Ledger products are part of the LedgerApp J2EE Application and are both deployed on the General Ledger Cluster Server (Figure 8 below), the user can add both of the above taskflows to the tasklist in the /JournalEntryPage FocusViewID Workarea. Note: both FSM Taskflows and Functional Taskflows can be added to the Tasklists as described in this document. Figure 8. The Topology of the Fusion Financials Product Family. Note that Subledger Accounting and General Ledger are both deployed on the Ledger App.

    Conclusion

    In this document we have shown how an administrative user can edit the Tasklist in the Regional Area of a Fusion Apps page using Oracle Composer. This is useful for cases where tasks packaged in different workareas are frequently accessed by the same user. By making these tasks available from the same page, we minimize the number of navigation steps the user has to take to perform their transactions and queries in Fusion Apps. The example explained above showed that tasks classified as Setup tasks, meaning those made accessible to implementation users from the FSM module, can be added to the workarea of their respective Fusion application. This eliminates the need to navigate to FSM to access tasks that are both setup and regular maintenance tasks.

    References
    Oracle Fusion Applications Extensibility Guide 11g Release 1 (11.1.1.5), Part Number E16691-02 (Section 3.2)
    Oracle Fusion Applications Developer's Guide 11g Release 1 (11.1.4), Part Number E15524-05

    Read the article

  • C# Extension Methods - To Extend or Not To Extend...

    - by James Michael Hare
    I've been thinking a lot about extension methods lately, and I must admit I both love them and hate them. They are a lot like sugar: they taste so nice and sweet, but they'll rot your teeth if you eat them too much. I can't deny that they are useful and very handy. One of the major components of the Shared Component library where I work is a set of useful extension methods. But, I also can't deny that they tend to be overused and abused to willy-nilly extend every living type.

    So what constitutes a good extension method? Obviously, you can write an extension method for nearly anything, whether it is a good idea or not. Many times, in fact, an idea seems like a good extension method but in retrospect really doesn't fit. So what's the litmus test? To me, an extension method should be like in the movies when a person runs into their twin, separated at birth. You just know you're related. Obviously, that's hard to quantify, so let's try to put a few rules-of-thumb around them.

    A good extension method should:
    1. Apply to any possible instance of the type it extends.
    2. Simplify logic and improve readability/maintainability.
    3. Apply to the most specific type or interface applicable.
    4. Be isolated in a namespace so that it does not pollute IntelliSense.

    So let's look at a few examples in relation to these rules. The first rule, to me, is the most important of all. Once again, it bears repeating, a good extension method should apply to all possible instances of the type it extends. It should feel like the long-lost relative that should have been included in the original class but somehow was missing from the family tree.

    Take this nifty little int extension I saw once in a blog. At first I really thought it was pretty cool, but then I started noticing a code smell I couldn't quite put my finger on. So let's look:

        public static class IntExtensions
        {
            public static int Seconds(this int num)
            {
                return num * 1000;
            }

            public static int Minutes(this int num)
            {
                return num * 60000;
            }
        }

    This is so you could do things like:

        ...
        Thread.Sleep(5.Seconds());
        ...
        proxy.Timeout = 1.Minutes();
        ...

    Awww, you say, that's cute! Well, that's the problem, it's kitschy and it doesn't always apply (and incidentally you could achieve the same thing with TimeSpan.FromSeconds(5)). It's syntactical candy that looks cool, but tends to rot and pollute the code. It would allow things like:

        total += numberOfTodaysOrders.Seconds();

    which makes no sense and should never be allowed. The problem is you're applying an extension method to a logical domain, not a type domain. That is, the extension method Seconds() doesn't really apply to ALL ints, it applies to ints that are representative of time that you want to convert to milliseconds. Do you see what I mean? The two problems, in a nutshell, are that a) Seconds() called off a non-time value makes no sense and b) calling Seconds() off something to pass to something that does not take milliseconds will be off by a factor of 1000 or worse. Thus, in my mind, you should only ever have an extension method that applies to the whole domain of that type.
For example, this is one of my personal favorites:       public static bool IsBetween<T>(this T value, T low, T high)         where T : IComparable<T>     {         return value.CompareTo(low) >= 0 && value.CompareTo(high) <= 0;     }   This allows you to check if any IComparable<T> is within an upper and lower bound. Think of how many times you type something like:       if (response.Employee.Address.YearsAt >= 2         && response.Employee.Address.YearsAt <= 10)     {     ...     }     Now, you can instead type:       if(response.Employee.Address.YearsAt.IsBetween(2, 10))     {     ...     }     Note that this applies to all IComparable<T> -- that's ints, chars, strings, DateTime, etc -- and does not depend on any logical domain. In addition, it satisfies the second point and actually makes the code more readable and maintainable.   Let's look at the third point. In it we said that an extension method should fit the most specific interface or type possible. Now, I'm not saying if you have something that applies to enumerables, you create an extension for List, Array, Dictionary, etc (though you may have reasons for doing so), but that you should beware of making things TOO general.   For example, let's say we had an extension method like this:       public static T ConvertTo<T>(this object value)     {         return (T)Convert.ChangeType(value, typeof(T));     }         This lets you do more fluent conversions like:       double d = "5.0".ConvertTo<double>();     However, if you dig into Reflector (LOVE that tool) you will see that if the type you are calling on does not implement IConvertible, what you convert to MUST be the exact type or it will throw an InvalidCastException. Now this may or may not be what you want in this situation, and I leave that up to you. Things like this would fail:       object value = new Employee();     ...     // class cast exception because typeof(IEmployee) != typeof(Employee)     IEmployee emp = value.ConvertTo<IEmployee>();       Yes, that's a downfall of working with Convertible in general, but if you wanted your fluent interface to be more type-safe so that ConvertTo were only callable on IConvertibles (and let casting be a manual task), you could easily make it:         public static T ConvertTo<T>(this IConvertible value)     {         return (T)Convert.ChangeType(value, typeof(T));     }         This is what I mean by choosing the best type to extend. Consider that if we used the previous (object) version, every time we typed a dot ('.') on an instance we'd pull up ConvertTo() whether it was applicable or not. By filtering our extension method down to only valid types (those that implement IConvertible) we greatly reduce our IntelliSense pollution and apply a good level of compile-time correctness.   Now my fourth rule is just my general rule-of-thumb. Obviously, you can make extension methods as in-your-face as you want. I included all mine in my work libraries in its own sub-namespace, something akin to:       namespace Shared.Core.Extensions { ... }     This is in a library called Shared.Core, so just referencing the Core library doesn't pollute your IntelliSense, you have to actually do a using on Shared.Core.Extensions to bring the methods in. This is very similar to the way Microsoft puts its extension methods in System.Linq. This way, if you want 'em, you use the appropriate namespace. If you don't want 'em, they won't pollute your namespace.   
To really make this work, however, that namespace should only include extension methods and subordinate types those extensions themselves may use. If you plant other useful classes in those namespaces, once a user includes it, they get all the extensions too.   Also, just as a personal preference, extension methods that aren't simply syntactical shortcuts, I like to put in a static utility class and then have extension methods for syntactical candy. For instance, I think it imaginable that any object could be converted to XML:       namespace Shared.Core     {         // A collection of XML Utility classes         public static class XmlUtility         {             ...             // Serialize an object into an xml string             public static string ToXml(object input)             {                 var xs = new XmlSerializer(input.GetType());                   // use new UTF8Encoding here, not Encoding.UTF8. The later includes                 // the BOM which screws up subsequent reads, the former does not.                 using (var memoryStream = new MemoryStream())                 using (var xmlTextWriter = new XmlTextWriter(memoryStream, new UTF8Encoding()))                 {                     xs.Serialize(xmlTextWriter, input);                     return Encoding.UTF8.GetString(memoryStream.ToArray());                 }             }             ...         }     }   I also wanted to be able to call this from an object like:       value.ToXml();     But here's the problem, if i made this an extension method from the start with that one little keyword "this", it would pop into IntelliSense for all objects which could be very polluting. Instead, I put the logic into a utility class so that users have the choice of whether or not they want to use it as just a class and not pollute IntelliSense, then in my extensions namespace, I add the syntactical candy:       namespace Shared.Core.Extensions     {         public static class XmlExtensions         {             public static string ToXml(this object value)             {                 return XmlUtility.ToXml(value);             }         }     }   So now it's the best of both worlds. On one hand, they can use the utility class if they don't want to pollute IntelliSense, and on the other hand they can include the Extensions namespace and use as an extension if they want. The neat thing is it also adheres to the Single Responsibility Principle. The XmlUtility is responsible for converting objects to XML, and the XmlExtensions is responsible for extending object's interface for ToXml().
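    To round this out, here is a small, hypothetical usage sketch (the Order type is invented purely for illustration) showing the opt-in behaviour described above, assuming the XmlUtility and XmlExtensions classes exactly as shown:

        using Shared.Core;               // XmlUtility is available; IntelliSense stays clean
        // using Shared.Core.Extensions; // uncomment to opt in to the extension-method syntax

        public class Order { public int Id { get; set; } }

        public static class Demo
        {
            public static void Main()
            {
                var order = new Order { Id = 42 };

                // Always available via the utility class:
                string xml = XmlUtility.ToXml(order);

                // Only compiles once Shared.Core.Extensions is imported:
                // string xml2 = order.ToXml();

                System.Console.WriteLine(xml);
            }
        }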

    Read the article

  • Workshops, online content show how Oracle infuses simplicity, mobility, extensibility into user experience

    - by mvaughan
    By Kathy Miedema & Misha Vaughan, Oracle Applications User Experience Oracle has made a huge investment into the user experience of its many different software product families, and recent releases showcase big changes and features that aim to promote end user engagement and efficiency by streamlining navigation and simplifying the user interface. But making Oracle’s enterprise software great-looking and usable doesn’t stop when Oracle products go out the door. The Applications User Experience (UX) team recognizes that our customers may need to customize software to fit their work processes. And that’s why we provide tools such as user experience design patterns to help you maintain the Oracle user experience as you tailor your application to fit your business needs. Often, however, customers may need some context around user experience. How has the Oracle user experience been designed and constructed? Why is a good user experience important for users? How does understanding what goes into the user experience benefit the people who purchase the software for users? There’s a short answer to these questions, and you can read about it on Usable Apps. But truly understanding Oracle’s investment and seeing how it applies across product families occasionally requires a deeper dive into the Oracle user experience, especially if you’re an influencer or decision-maker about Oracle products. To help frame these decisions, the Communications & Outreach team has developed several targeted workshops that explore what Oracle means when it talks about user experience, and provides a roadmap into where the Oracle user experience is going. These workshops require non-disclosure agreements, and have been delivered to Oracle sales folks, Oracle partners, Oracle ACE Directors and ACEs, and a few customers. Some of these audience members have been developers or have a technical background; just as many did not. Here’s a breakdown of the kind of training you can get around the Oracle user experience from the OAUX Communications & Outreach team.For Partners: George Papazzian, Principal, Naviscent with Joyce Ohgi, Oracle Oracle Fusion Applications HCM Pre-Sales Seminar:  In concert with Worldwide Alliances  and  Channels under Applications Partner Enablement Director Jonathan Vinoskey’s guidance, the Applications User Experience team delivers a two-day workshop.  Day one focuses on Oracle Fusion Applications HCM and pre-sales strategy, and Day two focuses on positioning and leveraging Oracle’s investment in the Oracle Fusion Applications user experience.  The next workshops will occur on the following dates: December 4-5, 2013 @ Manchester, UK January 29-30, 2014 @ Reston, Virginia February 2014 @ Guadalajara, Mexico (email: Shannon Whiteman) March 11-12, 2014 @ Dubai, United Arab Emirates April 1-2, 2014 @ Chicago, Illinois Partner Advisory Board: A two-day board meeting in the U.S. and U.K. to discuss four main user experience areas for Oracle Fusion Applications: simplicity, visualization & analytics, mobility, & futures. This event is limited to Oracle Diamond Partners, UX bloggers, and key UX influencers and requires legal documentation.  We will be talking about the Oracle applications UX strategy and roadmap. 
    Partner Implementation Training on User Interface: How to Build Great-Looking, Usable Apps: In this two-day, hands-on workshop built around Oracle's Application Development Framework, learn how to build desktop and mobile user interfaces based on Oracle's experience with Fusion Applications. This workshop is for partners with a technology background who are looking for ways to tailor Fusion Applications using ADF, or who have built their own custom solutions using ADF. It includes an introduction to UX design patterns and provides tools to build usability-tested UX designs. Nov 5-6, 2013 @ Redwood Shores, CA, USA; January 28-29, 2014 @ Reston, Virginia, USA; February 25-26, 2014 @ Guadalajara, Mexico; March 9-10, 2014 @ Dubai, United Arab Emirates. To register, contact [email protected]

    Simplified UI Customization & Extensibility: Pilot workshop: We will be reviewing the proposed content for communicating the user experience toolkit available with the next release of Oracle Fusion Applications. Our core focus will be on what toolkit components our system implementors and independent software vendors will need to respond to customer demand, whether they are extending Fusion Applications or building custom applications that will need to leverage the simplified UI. Dec 11, 2013 @ Reading, UK. For information: contact [email protected]

    Private lab tour and demos: Interested in seeing what's going on in the Apps UX Labs? If you are headed to the San Francisco Bay Area, let us know. We can arrange a spin through our usability labs at headquarters.

    OAUX Expo: This open-house forum gives partners a look at what the UX team is working on, and showcases the next-generation user experiences in a demo environment where attendees can see and touch the applications.

    UX Direct: Use the same methods that Oracle uses to develop its own user experiences. We help you define your users and their needs, and then provide direction on how to tailor the best user experience you can for them.

    For Customers
    Angela Johnston, Gozel Aamoth, Teena Singh, and Yen Chan, Oracle

    Lab tours: See demos of soon-to-be-released products, and take a spin on usability research equipment such as our eye-tracker. Watch this video to get an idea of what you'll see.

    Get our newsletter: Learn about newly released products and see where you can meet us at user group conferences.

    Participate in a feedback session: Join a focus group or customer feedback session to get an early look at user experience designs for the next generation of software, and provide your thoughts on how well it will work.

    Join the OUAB: The Oracle Usability Advisory Board meets several times a year to discuss trends in the workforce and provide direction on user experience designs.

    UX Direct: Use the same methods that Oracle uses to develop its own user experiences. We help you define your users and their needs, and then provide direction on how to tailor the best user experience you can for them.

    For Developers (customers, partners, and consultants)
    Plinio Arbizu, SP Solutions; Richard Bingham, Oracle; Balaji Kamepalli, EiSTechnologies; Praveen Pillalamarri, EiSTechnologies

    How to Build Great-Looking, Usable Apps: This workshop is for attendees with a strong technology background who are looking for ways to tailor customer software using ADF. It includes an introduction to UX design patterns and provides tools to build usability-tested UX designs. See above for dates and times.
UX design patterns web site: Cut the length of your project down by months. Use these patterns to build out the task flow you need to develop for your users. The patterns have already been usability-tested and represent the best practices that the Oracle UX research team has found in its studies. UX Direct: Use the same methods that Oracle uses to develop its own user experiences. We help you define your users and their needs, and then provide direction on how to tailor the best user experience you can for them. For Oracle Sales Mike Klein, Jeremy Ashley, Brent White, Oracle Contact your local sales person for more information about the Oracle user experience and the training available from the Applications User Experience Communications & Outreach team. See customer-friendly user experience collateral ranging from the new simplified UI in Oracle Fusion Applications Release 7, to E-Business Suite user experience highlights, to Siebel, PeopleSoft, and JD Edwards user experience highlights.   Receive access to the same pre-sales and implementation training we provide to partners. For Oracle Sales only: Oracle-only training on the Oracle Fusion Applications UX Innovation Sales Kit.

    Read the article

  • In Case You Weren’t There: Blogwell NYC

    - by Mike Stiles
    Your roving reporter roved out to another one of Socialmedia.org's fantastic Blogwell events, this time in NYC. As Central Park and incredible weather beckoned, some of the biggest brand names in the world gathered to talk about how they're incorporating social into marketing and CRM, as well as extending social across their entire organizations internally. Below we present a collection of the live tweets from many of the key sessions.

    GE - @generalelectric - Jon Lombardo, Leader of Social Media COE. How GE builds and extends emotional connections with consumers around health and reaps the benefits of increased brand equity in the process. GE has a social platform around Healthymagination to create better health for people. If you and a friend are trying to get healthy together, you'll do better. Health is inherently social. Get health challenges via Facebook and share with friends to achieve goals together. They're creating an emotional connection around the health context. You don't influence people at large. Your sphere of real influence is around 5-10 people. They find relevant conversations about health on Twitter and engage sounding like a friend, not a brand. Why would people share on behalf of a brand? Because you tapped into an activity and emotion they're already having. To create better habits in health, GE gave away inexpensive, relevant gifts related to their goals. Create the context, give the relevant gift, get social acknowledgment for giving it. What you get when you get acknowledgment for your engagement and gift is user-generated microcontent. GE got 12,000 unique users engaged and 1400 organic posts with the healthy gift campaign.

    The Dow Chemical Company - @DowChemical - Abby Klanecky, Director of Digital & Social Media. Learn how Dow Chemical is finding, training, and empowering their scientists to be their storytellers in social media. There are 1m jobs coming open in science. Only 200k are qualified for them. Dow Chemical wanted to use social to attract and talk to scientists. Dow Chemical decided to use real scientists as their storytellers. Scientists are incredibly passionate, the key ingredient of a great storyteller. Step 1 was getting scientists to focus on a few platforms: blog, Twitter, LinkedIn. Dow Chemical's social flow is Core Digital Team - #CMs - ambassadors - advocates. The scientists were trained in social etiquette via practice scenarios. It's not just about sales. It's about growing influence and the business. Dow Chemical trained about 100 scientists, 55 are active and there's a waiting list for the next sessions. In-person social training produced faster results and better participation. Sometimes you have to tell pieces of the story instead of selling your execs on the whole vision.

    Social Media Ethics Briefing: Staying Out of Trouble - Andy Sernovitz, CEO @SocialMediaOrg. How do we get people to share our message for us? We have to have their trust. The difference between being honest and being sleazy is disclosure. Disclosure does not hurt the effectiveness of your marketing.
No one will get mad if you tell them up front you're a paid spokesperson for a company. It's a legal requirement by the FTC, it's the law, to disclose if you're being paid for an endorsement. Require disclosure and truthfulness in all your social media outreach. Don't lie to people. Monitor the conversation and correct misstatements. Create social media policies and training programs. If you want to stay safe, never pay cash for social media. Money changes everything. As soon as you pay, it's not social media, it's advertising. Disclosure, to the feds, means clear, conspicuous, and understandable to the average reader. This phrase will keep you in the clear: "I work for ___ and this is my personal opinion." Who are you? Were you paid? Are you giving an honest opinion based on a real experience? You as a brand are responsible for what an agency or employee or contractor does on your behalf. SocialMedia.org makes available a Disclosure Best Practices Toolkit: Socialmedia.org/disclosure. The point is to not ethically mess up and taint social media as happened to e-mail. Not only is the FTC cracking down, so are Google and Facebook.

Visa @VisaNews
Lucas Mast, Senior Business Leader, Global Corporate Social Media
Visa built a mobile studio for the Olympics for execs and athletes. They wanted to do postcard-style, real-time coverage of Visa's Olympics sponsorships, and on a shoestring. Challenges included Olympic rules, difficulty getting interviews, time zone trouble, and resourcing. Another problem was they got bogged down with their own internal approval processes. Despite all the restrictions, they created and published a fair amount and variety of content. They amassed 1,000+ views of videos posted to the Visa Communication YouTube channel. Less corporate content yields more interest from media outlets and bloggers. They did real-world video demos of how their products work in the field vs. an exec doing a demo in a studio. Don't make exec interview videos dull and corporate. Keep answers short, shoot it in an interesting place, do takes until they're comfortable and natural. Not everything will work. Not everything will get a retweet. But like the lottery, you can't win if you don't play. Promoting content is as important as creating it.

McGraw-Hill Companies @McGrawHillCos
Patrick Durando, Senior Director of Global New Media
McGraw-Hill has 26,000 employees. McGraw-Hill created a social intranet called Buzz. Intranets create operational efficiency, help product dev, facilitate crowdsourcing, and break down geo silos. Intranets help with talent development, acquisition, retention. They replaced the corporate directory with their own version of LinkedIn. The company intranet has really cut down on the use of email. Long email threads become organized, permanent social discussions. The intranet is particularly useful in HR for researching and getting answers surrounding benefits and policies. Using a profile on your company intranet can establish and promote your internal professional brand. If you're going to make an intranet, it has to look great, work great, and employees are going to have to want to go there. You can't order them to like it.

    Read the article

  • Microsoft and the open source community

    - by Charles Young
    For the last decade, I have repeatedly, in my imitable Microsoft fan boy style, offered an alternative view to commonly held beliefs about Microsoft's stance on open source licensing.  In earlier times, leading figures in Microsoft were very vocal in resisting the idea that commercial licensing is outmoded or morally reprehensible.  Many people interpreted this as all-out corporate opposition to open source licensing.  I never read it that way. It is true that I've met individual employees of Microsoft who are antagonistic towards FOSS (free and open source software), but I've met more who are supportive or at least neutral on the subject.  In any case, individual attitudes of employees don't necessarily reflect a corporate stance.  The strongest opposition I've encountered has actually come from outside the company.  It's not a charitable thought, but I sometimes wonder if there are people in the .NET community who are opposed to FOSS simply because they believe, erroneously, that Microsoft is opposed. Here, for what it is worth, are the points I've repeated endlessly over the years and which have often been received with quizzical scepticism. a)  A decade ago, Microsoft's big problem was not FOSS per se, or even with copyleft.  The thing which really kept them awake at night was the fear that one day, someone might find, deep in the heart of the Windows code base, some code that should not be there and which was published under GPL.  The likelihood of this ever happening has long since faded away, but there was a time when MS was running scared.  I suspect this is why they held out for a while from making Windows source code open to inspection.  Nowadays, as an MVP, I am positively encouraged to ask to see Windows source. b)  Microsoft has never opposed the open source community.  They have had problems with specific people and organisations in the FOSS community.  Back in the 1990s, Richard Stallman gave time and energy to a successful campaign to launch antitrust proceedings against Microsoft.  In more recent times, the negative attitude of certain people to Microsoft's submission of two FOSS licences to the OSI (both of which have long since been accepted), and the mad scramble to try to find any argument, however tenuous, to block their submission was not, let us say, edifying. c) Microsoft has never, to my knowledge, written off the FOSS model.  They certainly don't agree that more traditional forms of licensing are inappropriate or immoral, and they've always been prepared to say so.  One reason why it was so hard to convince people that Microsoft is not rabidly antagonistic towards FOSS licensing is that so many people think they have no involvement in open source.  A decade ago, there was virtually no evidence of any such involvement.  However, that was a long time ago.  Quietly over the years, Microsoft has got on with the job of working out how to make use of FOSS licensing and how to support the FOSS community.  For example, as well as making increasingly extensive use of Github, they run an important FOSS forge (CodePlex) on which they, themselves, host many hundreds of distinct projects.  The total count may even be in the thousands now.  I suspect there is a limit of about 500 records on CodePlex searches because, for the past few years, whenever I search for Microsoft-specific projects on CodePlex, I always get approx. 500 hits.  
Admittedly, a large volume of the stuff they publish under FOSS licences amounts to code samples, but many of those 'samples' have grown into useful and fully featured frameworks, libraries and tools. All this is leading up to the observation that yesterday's announcement by Scott Guthrie marks a significant milestone and should not go unnoticed.  If you missed it, let me summarise.   From the first release of .NET, Microsoft has offered a web development framework called ASP.NET.  The core libraries are included in the .NET framework which is released free of charge, but which is not open source.   However, in recent years, the number of libraries that constitute ASP.NET have grown considerably.  Today, most professional ASP.NET web development exploits the ASP.NET MVC framework.  This, together with several other important parts of the ASP.NET technology stack, is released on CodePlex under the Apache 2.0 licence.   Hence, today, a huge swathe of web development on the .NET/Azure platform relies four-square on the use of FOSS frameworks and libraries. Yesterday, Scott Guthrie announced the next stage of ASP.NET's journey towards FOSS nirvana.  This involves extending ASP.NET's FOSS stack to include Web API and the MVC Razor view engine which is rapidly becoming the de facto 'standard' for building web pages in ASP.NET.  However, perhaps the more important announcement is that the ASP.NET team will now accept and review contributions from the community.  Scott points out that this model is already in place elsewhere in Microsoft, and specifically draws attention to development of the Windows Azure SDKs.  These SDKs are central to Azure development.   The .NET and Java SDKs are published under Apache 2.0 on Github and Microsoft is open to community contributions.  Accepting contributions is a more profound move than simply releasing code under FOSS licensing.  It means that Microsoft is wholeheartedly moving towards a full-blooded open source approach for future evolution of some of their central and most widely used .NET and Azure frameworks and libraries.  In conjunction with Scott's announcement, Microsoft has also released Git support for CodePlex (at long last!) and, perhaps more importantly, announced significant new investment in their own FOSS forge. Here at Solidsoft we have several reasons to be very interested in Scott's announcement. I'll draw attention to one of them.  Earlier this year we wrote the initial version of a new UK Government web application called CloudStore.  CloudStore provides a way for local and central government to discover and purchase applications and services. We wrote the web site using ASP.NET MVC which is FOSS.  However, this point has been lost on the ladies and gentlemen of the press and, I suspect, on some of the decision makers on the government side.  They announced a few weeks ago that future versions of CloudStore will move to a FOSS framework, clearly oblivious of the fact that it is already built on a FOSS framework.  We are, it is fair to say, mildly irked by the uninformed and badly out-of-date assumption that “if it is Microsoft, it can't be FOSS”.  Old prejudices live on.

    Read the article

  • Oracle OpenWorld 2012: The Best Just Gets Better

    - by kellsey.ruppel
For almost 30 years, Oracle OpenWorld has been the world's premier learning event for Oracle customers, developers, and partners. With more than 2,000 sessions providing best practices; demos; tips and tricks; and product insight from Oracle, customers, partners, and industry experts, Oracle OpenWorld provides more educational and networking opportunities than any other event in the world.

2011 Facts
Attendees from 117 Countries
Used Filtered Tap Water to Eliminate 22 Tons of Plastic Bottles
Diverted Enough Trash to Fill 37 Dump Trucks
45,000+ Total Registered Attendees

Oracle OpenWorld 2012: The Best Just Gets Better
What's New? What's Different? This year Oracle OpenWorld will include the Executive Edge @ OpenWorld (replacing Leaders Circle), the Customer Experience Summit @ OpenWorld, JavaOne, MySQL Connect, and the expanded Oracle PartnerNetwork Exchange @ OpenWorld. More than 50,000 customers and partners will attend OpenWorld to see Oracle's newest hardware and software products at work, and learn more about our server and storage, database, middleware, industry, and applications solutions.

New This Year: The Executive Edge @ Oracle OpenWorld (Oct 1 - 2)
New at Oracle OpenWorld this year, the Executive Edge @ OpenWorld (replacing Leaders Circle) will bring together customer, partner and Oracle executives for two days of keynote presentations, summits targeted to customer industries and organizational roles, roundtable discussions, and great new networking opportunities.

The Customer Experience Revolution Is Here! Customer Experience Summit @ Oracle OpenWorld (Oct 3 - 5)
This dynamic new program offers more than 60 keynotes, roundtables and networking sessions exploring trends, innovations and best practices to help companies succeed with a customer experience-driven business strategy.

All Things Java -- JavaOne (Sep 30 - Oct 4)
JavaOne is the world's most important event for the Java developer community. Technical sessions cover topics that span the breadth of the Java universe, with keynotes from the foremost Java visionaries and expert-led hands-on learning opportunities.

Are you innovating with Oracle Fusion Middleware? If you are, then you need to know that the Call for Nominations for the 2012 Oracle Fusion Middleware Innovation Awards is open now through July 17, 2012. Jointly sponsored by Oracle, AUSOUG, IOUG, OAUG, ODTUG, QUEST, and UKOUG, the Oracle Fusion Middleware Innovation Awards honor organizations creatively using Oracle Fusion Middleware to deliver unique value to their enterprise. Winning customers and partners will be hosted at Oracle OpenWorld 2012, where they can connect with Oracle executives, network with peers, and be featured in an upcoming edition of Oracle Magazine. Be sure to submit your WebCenter use case today!

Oracle Music Festival
This year, the first-ever Oracle Music Festival will debut, running from September 30 to October 4. In the tradition of great live music events like Coachella and SXSW, the streets of San Francisco, from 7:00 p.m. to 1:00 a.m. for five nights-into-days, will vibrate with the music of some of today's hottest name acts, emerging and local bands, and scratching DJs. Outdoor venues and clubs near Moscone Center and the Zone (including 111 Minna, DNA, Mezzanine, Roe, Ruby Skye, Slim's, the Taylor Street Café, Temple, Union Square, and Yerba Buena Gardens) will showcase acts that range from reggae to rock, punk to ska, R&B to country, indie to honky-tonk.
After a full day of sessions and networking, you'll be primed for some late-night relaxation and rocking out at one or more of these sets.  Please note that with awesome acts, thousands of music devotees, and a limited number of venues each night, access to Festival events is on a first-come, first-served basis. Join us at the Oracle Music Festival--it's going to be epic! Save $500 on Registration with Early Bird Pricing Early Bird pricing ends July 13! Save up to $500 on registration fees by registering by Friday. Will you be attending Oracle OpenWorld 2012? We hope to see you there! Be sure to follow @oraclewebcenter on Twitter for more information and use hashtags #webcenter and #oow!

    Read the article

  • NetBeans PHP Community Council

    - by Tomas Mysik
Hi all, today we would like to inform all of you that you now have a chance to improve NetBeans via the NetBeans PHP Community Council. The author of this activity is Timur Poperecinii, and he would like to tell you a few words about it.

Hello passionate technical people, first of all let me introduce myself: my name is Timur, I'm a developer from Moldova (that little country between Romania and Ukraine). I develop mostly in .NET and jQuery, but I love to learn more; while not an expert, I am familiar with Java (Struts2, Play), PHP (Symfony2), Ruby (Rails), Sencha Touch 2 and other technologies. I was "introduced" to PHP recently by a client of mine who requested that the work be done specifically in PHP.

Let me tell you a little story about my experience with open source and IDEs: when I was studying at university, in 2007 I think, I did a simple little application in PHP and thought "Damn, if only there was a good IDE for PHP so I could relax and not have to remember all the function names", but when I searched on the internet pretty much everyone was using Vim or Emacs on Linux, and there was no autocomplete anyway, just syntax highlighting. I remember using some tool like Notepad++, I think. Nowadays everything has changed: we have highlighting and autocomplete for about all standard things in PHP in many IDEs. I use NetBeans for PHP, and I really am happy with the experience I have there with standard PHP code, but for frameworks I still think there is lots of room for improvement. For example, we have some Symfony 2 and Twig support, but I'd love to see more of that coming. I'm a big fan of file templates, where the main goal is to not waste time writing over and over again something that can be generated, and it counts even more when you don't have a lot of autocomplete.

So what I thought was, "Hey, I know Java a little, and NetBeans has plugins, so maybe it's worth trying to do a file templates plugin", and so I did. You can find details about my Unified Udevi Symfony2 Plugin for NetBeans 7.2 on my blog. It wasn't hard, and it even was fun!

Give back to open source
Now think a little: NetBeans is an open source project and PHP support is just a part of it, so the resources are pretty limited in this area. But we, as a community that uses this product, want to have the best possible experience with PHP and frameworks(!!!). So why don't we GIVE BACK TO OPEN SOURCE? Imagine an IDE that can do all the things you want + it is free. Now how far is NetBeans from that point? I guess not so far – you might miss a little niche thing that you use on a daily basis, but then the question appears: why don't you make it happen on your own?

NetBeans PHP Community Council
What I proposed is to create a NetBeans PHP Community Council that will be formed of people willing to change something, willing to create plugins for their own needs and for the needs of the community, test the plugins created by them too, and basically evolve NetBeans in the direction they want it to go. I already talked with the NetBeans PHP team. They are only happy to help this Council, with technical advice, by opening some APIs we might need access to, and other things. One important thing to mention is that this Council is a community project, so though we'll have direct discussions with the NetBeans PHP dev team, NetBeans is not the leading force here, it is the community. You can see more details about the goals and structure I proposed at the NetBeans PHP Community Council wiki page.
We use this mail list: [email protected] for discussions and topics related to the Council.

How can I join
To join the NetBeans PHP Community Council, please send an email to [email protected] with the subject of the mail starting with [Council New Member]. You can subscribe to this mail list here: http://netbeans.org/projects/php/lists. In your mail, please indicate your location, age and experience in both Java and PHP. I need these details to assign you to a team. A response will be sent to you with your next assignment and some people to contact. I really hope that you'll take a step forward and try to make your everyday use of NetBeans even more fun.

    Read the article

  • How to reduce RAM consumption when my server is idle

    - by Julien Genestoux
    We use Slicehost, with 512MB instances. We run Ubuntu 9.10 on them. I installed a few packages, and I'm now trying to optimize RAM consumption before running anything on there. A simple ps gives me the list of running processes : # ps faux USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND root 2 0.0 0.0 0 0 ? S< Jan04 0:00 [kthreadd] root 3 0.0 0.0 0 0 ? S< Jan04 0:15 \_ [migration/0] root 4 0.0 0.0 0 0 ? S< Jan04 0:01 \_ [ksoftirqd/0] root 5 0.0 0.0 0 0 ? S< Jan04 0:00 \_ [watchdog/0] root 6 0.0 0.0 0 0 ? S< Jan04 0:04 \_ [events/0] root 7 0.0 0.0 0 0 ? S< Jan04 0:00 \_ [cpuset] root 8 0.0 0.0 0 0 ? S< Jan04 0:00 \_ [khelper] root 9 0.0 0.0 0 0 ? S< Jan04 0:00 \_ [async/mgr] root 10 0.0 0.0 0 0 ? S< Jan04 0:00 \_ [xenwatch] root 11 0.0 0.0 0 0 ? S< Jan04 0:00 \_ [xenbus] root 13 0.0 0.0 0 0 ? S< Jan04 0:02 \_ [migration/1] root 14 0.0 0.0 0 0 ? S< Jan04 0:00 \_ [ksoftirqd/1] root 15 0.0 0.0 0 0 ? S< Jan04 0:00 \_ [watchdog/1] root 16 0.0 0.0 0 0 ? S< Jan04 0:07 \_ [events/1] root 17 0.0 0.0 0 0 ? S< Jan04 0:02 \_ [migration/2] root 18 0.0 0.0 0 0 ? S< Jan04 0:00 \_ [ksoftirqd/2] root 19 0.0 0.0 0 0 ? S< Jan04 0:00 \_ [watchdog/2] root 20 0.0 0.0 0 0 ? R< Jan04 0:07 \_ [events/2] root 21 0.0 0.0 0 0 ? S< Jan04 0:04 \_ [migration/3] root 22 0.0 0.0 0 0 ? S< Jan04 0:00 \_ [ksoftirqd/3] root 23 0.0 0.0 0 0 ? S< Jan04 0:00 \_ [watchdog/3] root 24 0.0 0.0 0 0 ? S< Jan04 0:00 \_ [events/3] root 25 0.0 0.0 0 0 ? S< Jan04 0:00 \_ [kintegrityd/0] root 26 0.0 0.0 0 0 ? S< Jan04 0:00 \_ [kintegrityd/1] root 27 0.0 0.0 0 0 ? S< Jan04 0:00 \_ [kintegrityd/2] root 28 0.0 0.0 0 0 ? S< Jan04 0:00 \_ [kintegrityd/3] root 29 0.0 0.0 0 0 ? S< Jan04 0:01 \_ [kblockd/0] root 30 0.0 0.0 0 0 ? S< Jan04 0:00 \_ [kblockd/1] root 31 0.0 0.0 0 0 ? S< Jan04 0:00 \_ [kblockd/2] root 32 0.0 0.0 0 0 ? S< Jan04 0:00 \_ [kblockd/3] root 33 0.0 0.0 0 0 ? S< Jan04 0:00 \_ [kseriod] root 34 0.0 0.0 0 0 ? S Jan04 0:00 \_ [khungtaskd] root 35 0.0 0.0 0 0 ? S Jan04 0:05 \_ [pdflush] root 36 0.0 0.0 0 0 ? S Jan04 0:06 \_ [pdflush] root 37 0.0 0.0 0 0 ? S< Jan04 1:02 \_ [kswapd0] root 38 0.0 0.0 0 0 ? S< Jan04 0:00 \_ [aio/0] root 39 0.0 0.0 0 0 ? S< Jan04 0:00 \_ [aio/1] root 40 0.0 0.0 0 0 ? S< Jan04 0:00 \_ [aio/2] root 41 0.0 0.0 0 0 ? S< Jan04 0:00 \_ [aio/3] root 42 0.0 0.0 0 0 ? S< Jan04 0:00 \_ [jfsIO] root 43 0.0 0.0 0 0 ? S< Jan04 0:00 \_ [jfsCommit] root 44 0.0 0.0 0 0 ? S< Jan04 0:00 \_ [jfsCommit] root 45 0.0 0.0 0 0 ? S< Jan04 0:00 \_ [jfsCommit] root 46 0.0 0.0 0 0 ? S< Jan04 0:00 \_ [jfsCommit] root 47 0.0 0.0 0 0 ? S< Jan04 0:00 \_ [jfsSync] root 48 0.0 0.0 0 0 ? S< Jan04 0:00 \_ [xfs_mru_cache] root 49 0.0 0.0 0 0 ? S< Jan04 0:00 \_ [xfslogd/0] root 50 0.0 0.0 0 0 ? S< Jan04 0:00 \_ [xfslogd/1] root 51 0.0 0.0 0 0 ? S< Jan04 0:00 \_ [xfslogd/2] root 52 0.0 0.0 0 0 ? S< Jan04 0:00 \_ [xfslogd/3] root 53 0.0 0.0 0 0 ? S< Jan04 0:00 \_ [xfsdatad/0] root 54 0.0 0.0 0 0 ? S< Jan04 0:00 \_ [xfsdatad/1] root 55 0.0 0.0 0 0 ? S< Jan04 0:00 \_ [xfsdatad/2] root 56 0.0 0.0 0 0 ? S< Jan04 0:00 \_ [xfsdatad/3] root 57 0.0 0.0 0 0 ? S< Jan04 0:00 \_ [xfsconvertd/0] root 58 0.0 0.0 0 0 ? S< Jan04 0:00 \_ [xfsconvertd/1] root 59 0.0 0.0 0 0 ? S< Jan04 0:00 \_ [xfsconvertd/2] root 60 0.0 0.0 0 0 ? S< Jan04 0:00 \_ [xfsconvertd/3] root 61 0.0 0.0 0 0 ? S< Jan04 0:00 \_ [glock_workqueue] root 62 0.0 0.0 0 0 ? S< Jan04 0:00 \_ [glock_workqueue] root 63 0.0 0.0 0 0 ? S< Jan04 0:00 \_ [glock_workqueue] root 64 0.0 0.0 0 0 ? S< Jan04 0:00 \_ [glock_workqueue] root 65 0.0 0.0 0 0 ? 
S< Jan04 0:00 \_ [delete_workqueu] root 66 0.0 0.0 0 0 ? S< Jan04 0:00 \_ [delete_workqueu] root 67 0.0 0.0 0 0 ? S< Jan04 0:00 \_ [delete_workqueu] root 68 0.0 0.0 0 0 ? S< Jan04 0:00 \_ [delete_workqueu] root 69 0.0 0.0 0 0 ? S< Jan04 0:00 \_ [kslowd] root 70 0.0 0.0 0 0 ? S< Jan04 0:00 \_ [kslowd] root 71 0.0 0.0 0 0 ? S< Jan04 0:00 \_ [crypto/0] root 72 0.0 0.0 0 0 ? S< Jan04 0:00 \_ [crypto/1] root 73 0.0 0.0 0 0 ? S< Jan04 0:00 \_ [crypto/2] root 74 0.0 0.0 0 0 ? S< Jan04 0:00 \_ [crypto/3] root 77 0.0 0.0 0 0 ? S< Jan04 0:00 \_ [net_accel/0] root 78 0.0 0.0 0 0 ? S< Jan04 0:00 \_ [net_accel/1] root 79 0.0 0.0 0 0 ? S< Jan04 0:00 \_ [net_accel/2] root 80 0.0 0.0 0 0 ? S< Jan04 0:00 \_ [net_accel/3] root 81 0.0 0.0 0 0 ? S< Jan04 0:00 \_ [sfc_netfront/0] root 82 0.0 0.0 0 0 ? S< Jan04 0:00 \_ [sfc_netfront/1] root 83 0.0 0.0 0 0 ? S< Jan04 0:00 \_ [sfc_netfront/2] root 84 0.0 0.0 0 0 ? S< Jan04 0:00 \_ [sfc_netfront/3] root 310 0.0 0.0 0 0 ? S< Jan04 0:00 \_ [kstriped] root 315 0.0 0.0 0 0 ? S< Jan04 0:00 \_ [ksnapd] root 1452 0.0 0.0 0 0 ? S< Jan04 4:31 \_ [kjournald] root 1 0.0 0.1 19292 948 ? Ss Jan04 0:15 /sbin/init root 1545 0.0 0.1 13164 1064 ? S Jan04 0:00 upstart-udev-bridge --daemon root 1547 0.0 0.1 17196 996 ? S<s Jan04 0:00 udevd --daemon root 1728 0.0 0.2 20284 1468 ? S< Jan04 0:00 \_ udevd --daemon root 1729 0.0 0.1 17192 792 ? S< Jan04 0:00 \_ udevd --daemon root 1881 0.0 0.0 8192 152 ? Ss Jan04 0:00 dd bs=1 if=/proc/kmsg of=/var/run/rsyslog/kmsg syslog 1884 0.0 0.2 185252 1200 ? Sl Jan04 1:00 rsyslogd -c4 103 1894 0.0 0.1 23328 700 ? Ss Jan04 1:08 dbus-daemon --system --fork root 2046 0.0 0.0 136 32 ? Ss Jan04 4:05 runsvdir -P /etc/service log: gems/custom_require.rb:31:in `require'??from /mnt/app/superfeedr-firehoser/current/script/component:52?/opt/ruby-enterprise/lib/ruby/si root 2055 0.0 0.0 112 32 ? Ss Jan04 0:00 \_ runsv chef-client root 2060 0.0 0.0 132 40 ? S Jan04 0:02 | \_ svlogd -tt ./main root 2056 0.0 0.0 112 28 ? Ss Jan04 0:20 \_ runsv superfeedr-firehoser_2 root 2059 0.0 0.0 132 40 ? S Jan04 0:29 | \_ svlogd /var/log/superfeedr-firehoser_2 root 2057 0.0 0.0 112 28 ? Ss Jan04 0:20 \_ runsv superfeedr-firehoser_1 root 2062 0.0 0.0 132 44 ? S Jan04 0:26 \_ svlogd /var/log/superfeedr-firehoser_1 root 2058 0.0 0.0 18708 316 ? Ss Jan04 0:01 cron root 2095 0.0 0.1 49072 764 ? Ss Jan04 0:06 /usr/sbin/sshd root 9832 0.0 0.5 78916 3500 ? Ss 00:37 0:00 \_ sshd: root@pts/0 root 9846 0.0 0.3 17900 2036 pts/0 Ss 00:37 0:00 \_ -bash root 10132 0.0 0.1 15020 1064 pts/0 R+ 09:51 0:00 \_ ps faux root 2180 0.0 0.0 5988 140 tty1 Ss+ Jan04 0:00 /sbin/getty -8 38400 tty1 root 27610 0.0 1.4 47060 8436 ? S Apr04 2:21 python /usr/sbin/denyhosts --daemon --purge --config=/etc/denyhosts.conf --config=/etc/denyhosts.conf root 22640 0.0 0.7 119244 4164 ? Ssl Apr05 0:05 /usr/sbin/console-kit-daemon root 10113 0.0 0.0 3904 316 ? Ss 09:46 0:00 /usr/sbin/collectdmon -P /var/run/collectdmon.pid -- -C /etc/collectd/collectd.conf root 10114 0.0 0.2 201084 1464 ? Sl 09:46 0:00 \_ collectd -C /etc/collectd/collectd.conf -f As you can see there is nothing serious here. If I sum up the RSS line on all this, I get the following : # ps -aeo rss | awk '{sum+=$1} END {print sum}' 30096 Which makes sense. However, I have a pretty big surprise when I do a free: # free total used free shared buffers cached Mem: 591180 343684 247496 0 25432 161256 -/+ buffers/cache: 156996 434184 Swap: 1048568 0 1048568 As you can see 60% of the available memory is already consumed... 
which leaves me with only 40% to run my own applications if I want to avoid swapping. Quite disappointing! Two questions arise: Where is all this memory? How can I take some of it back for my own apps?
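(The "-/+ buffers/cache" line above is the key figure: on Linux, memory held by the kernel's buffer and page cache is reported as "used" but is reclaimed automatically when applications need it. A quick way to confirm this, using standard interfaces only and with root required for the second command; the numbers printed will of course differ per machine:

# free -m
# sync; echo 3 > /proc/sys/vm/drop_caches

After dropping the caches, the "cached" figure shrinks and "free" grows correspondingly; this only demonstrates where the memory went, it is not a tuning step.)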

    Read the article

  • The SPARC SuperCluster

    - by Karoly Vegh
Oracle has been providing a lead in the Engineered Systems business for quite a while now, in accordance with the motto "Hardware and Software Engineered to Work Together." Indeed it is hard to find a better definition of these systems. Allow me to summarize the idea. It is:
- Build a compute platform optimized to run your technologies
- Develop application-aware, intelligently caching storage components
- Take an impressively fast network technology interconnecting it with the compute nodes
- Tune the application to scale with the nodes to yet unseen performance
- Reduce the amount of data moving via compression
- Provide this all in a pre-integrated single product with a single-pane management interface
All these ideas have been around in IT for quite some time now. The real Oracle advantage is adding the last one to put these all together. Oracle has built quite a portfolio of Engineered Systems to run its technologies - and run those like they never ran before. In this post I'll focus on one of them that serves as a consolidation demigod, a multi-purpose engineered system. As you probably have guessed, I am talking about the SPARC SuperCluster. It has many great features inherited from its predecessors, and it adds several new ones. Allow me to pick out and elaborate on some of the most interesting ones from a technological point of view.

I. It is the SPARC SuperCluster T4-4. That is, as compute nodes, it includes SPARC T4-4 servers that we learned to appreciate and respect for their features:
- The SPARC T4 CPUs: each CPU has 8 cores, and each core runs 8 threads. The SPARC T4-4 servers have 4 sockets. That is, a single compute node can simultaneously execute 256 threads in parallel. Now, a full-rack SPARC SuperCluster has 4 of these servers on board. Remember the keyword demigod.
- While retaining the forerunner SPARC T3's exceptional throughput, the SPARC T4 CPUs raise the bar with single-threaded performance too - a humble 5x better than their ancestors. Actually, the SPARC T4 CPU cores run in both single-threaded and multi-threaded mode, and switch between the two on the fly, fulfilling not only single-threaded OR multi-threaded applications' needs, but even mixed requirements (like in database workloads!).
- Data security, anyone? Every SPARC T4 CPU core has a built-in encryption engine, that is, encryption algorithms cast into silicon.
- A PCI controller right on the chip for customers who need I/O performance.
- Built-in, no-cost virtualization: Oracle VM for SPARC (the former LDoms, or Logical Domains) is not a server-emulation virtualization technology but rather a server-partitioning one. The hypervisor runs in the server firmware, and all the VMs' HW resources (I/O, CPU, memory) are accessed natively, without performance overhead. This enables customers to run a number of Solaris 10 and Solaris 11 VMs separate and independent of each other within a physical server.

II. For database performance, it includes Exadata Storage Cells - one of the main reasons why the Exadata Database Machine performs at diabolic speed. What makes them important? They provide DB backend storage for the Oracle Databases running on the SPARC SuperCluster, and that is what they are built and tuned for: DB performance. These storage cells are SQL-aware.
That is, if a SPARC T4 database compute node executes a query, it doesn't simply request tons of raw data blocks from the storage, filter the received data, and throw away most of it where the statement doesn't apply; it provides the SQL query to the storage node too. The storage cell software speaks SQL, that is, it is able to prefilter and thereby transfer only the relevant data. With this, the traffic between database nodes and storage cells is reduced immensely. Less I/O is a good thing - as they say, all the CPUs of the world do one thing just as fast as any other, and that is waiting for I/O. They don't only pre-filter, but also provide data preprocessing features - e.g. if a DB node requests an aggregate of data, they can calculate it and hand over only the results, not the whole set. Again, less data to transfer. They support the magical HCC (Hybrid Columnar Compression). That is, data can be stored in a precompressed form on the storage. Less data to transfer. Of course one can't simply rely on disks for performance; there is Flash storage included for caching.

III. The low-latency, high-speed backbone network: InfiniBand, which interconnects all the members with:
- Real high speed: 40 Gbit/s. Full duplex, of course. Oh, and a really low latency.
- RDMA, Remote Direct Memory Access. This technology allows the DB nodes to do exactly that: remotely, directly placing SQL commands into the memory of the storage cells, dodging all the network-stack bottlenecks, avoiding overhead, and placing requests directly into the process queue.
- You can also run IP over InfiniBand if you please - that's the way the compute nodes can communicate with each other.

IV. It includes general-purpose storage too: the ZFSSA, a unified storage appliance providing both NAS and SAN access, with the following features:
- NFS over RDMA over InfiniBand. Nothing is faster network-filesystem-wise.
- All the ZFS features onboard: hybrid storage pools, compression, deduplication, snapshots, replication, NFS and CIFS shares.
- Storage heads in an HA cluster configuration providing availability of the data.
- DTrace Live Analytics in a web-based administration UI.
- It serves as general-purpose application data storage for your non-database applications running on the SPARC SuperCluster, over whichever protocol they prefer, easily replicating, snapshotting, and cloning data for them.

There's a lot of great technology included in Oracle's SPARC SuperCluster; we have talked its interior through. As for external scalability: you can start with a half- or full-rack SPARC SuperCluster and scale out to several racks - that is, not stacking separate full-rack SPARC SuperClusters, but always extending one large instance to the size of several full racks. Yes, over the InfiniBand network. Add racks as you grow.

What technologies shall run on it? SPARC SuperCluster is a general-purpose scale-out consolidation/cloud environment. You can run Oracle Databases with RAC scaling, or Oracle WebLogic (and enjoy the SPARC T4's advantages for running Java). Remember, Oracle technologies have been integrated with the Oracle Engineered Systems - this is the Oracle-on-Oracle advantage. But you can run other software environments such as SAP if you please too. Run any application that runs on Oracle Solaris 10 or Solaris 11. Separate them in virtual machines, or even Oracle Solaris Zones, and monitor and manage those from a central UI.
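To make that consolidation point a little more concrete, here is a minimal, illustrative sketch of carving out a guest domain with Oracle VM for SPARC on a compute node. The domain name, CPU and memory sizes, and the virtual switch and disk service names are assumptions for the example (and the backing disk volume is assumed to exist already); this is not a recommended configuration:

# ldm add-domain appdom1
# ldm add-vcpu 32 appdom1
# ldm add-memory 64G appdom1
# ldm add-vnet vnet0 primary-vsw0 appdom1
# ldm add-vdisk vdisk0 appdom1-vol@primary-vds0 appdom1
# ldm bind appdom1
# ldm start appdom1

Inside such a domain you could then install Solaris 11 (or run Solaris 10), or subdivide it further with Solaris Zones - exactly the kind of layered consolidation the SuperCluster is aimed at.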
Here are the key takeaways once again. The SPARC SuperCluster:
- Is a pre-integrated Engineered System
- Contains SPARC T4-4 servers with built-in virtualization, cryptography, and dynamic threading
- Contains the Exadata storage cells that intelligently offload the burden of the DB nodes
- Contains a highly available ZFS Storage Appliance that provides SAN/NAS storage in a unified way
- Combines all these elements over a high-speed, low-latency backbone network implemented with InfiniBand
- Can grow from a single half-rack to several full racks in size
- Supports the consolidation of hundreds of applications
To summarize: all these technologies are great by themselves, but the real value is, like in every other Oracle Engineered System, integration. All these technologies are tuned to perform together. Together they are way more than the sum of their parts - and a careful and actually very time-consuming integration process is necessary to orchestrate them all for performance. The SPARC SuperCluster's goal is to enable infrastructure operations and offer a pre-integrated solution that can be architected and delivered in hours instead of months of evaluations and tests. The tedious and, most importantly, time- and resource-consuming part of the work - testing and evaluating - has been done. Now go, provide services.

-- charlie

    Read the article

  • NHibernate 3.0 and FluentNHibernate, how to get up and running&hellip;.

    - by DesigningCode
First up: it's actually really easy. I'm not very religious about my DB tech, I don't really care, I just want something that works. So I'm happy to consider all options if they provide an advantage, and recently I was considering jumping from NHibernate to EF 4.0. However, before ditching NHibernate and jumping to EF 4.0, I thought I should try the head version of NHibernate's trunk and the head version of FluentNHibernate. I currently have a "Repository / Unit of Work" framework built up around these two techs. All up it makes my life pretty simple for dealing with databases. The problem is the current release of NHibernate + the Linq provider wasn't too hot for our purposes, especially trying to plug it into older VB.NET code. The Linq provider spat the dummy with VB.NET lambdas. Mainly because in C#

Query().Where(l => l.Name.Contains("x") || l.Name.Contains("y")).ToList();

is not the same as the VB.NET

Query().Where(Function(l) l.Name.Contains("x") Or l.Name.Contains("y")).ToList

VB.NET seems to spit out… well…. something different :-)

So anyways… compiling your own version of NHibernate and FluentNHibernate: it's actually pretty easy! First you'll need to install TortoiseSVN, NAnt and Git if you don't already have them.

NHibernate
First step, get the subversion trunk https://nhibernate.svn.sourceforge.net/svnroot/nhibernate/trunk/ into a directory somewhere, e.g. \thirdparty\nhibernate. Then use NAnt to build it. (If you open the .sln it will show errors, in that AssemblyInfo.cs doesn't exist.) To build it, there is a .txt document with sample command line build instructions; I simply used:

NAnt -D:project.config=release clean build >output-release-build.log

*wait* *wait* *wait* and ta da, you will have a bin directory with all the release dlls.

FluentNHibernate
This was pretty simple. There are instructions here: http://wiki.fluentnhibernate.org/Getting_started#Installation. Basically, with git, create a directory and issue the command

git clone git://github.com/jagregory/fluent-nhibernate.git

and wait, and soon enough you have the source. Now, from the bin directory that NHibernate spat out, take everything and dump it into the subdirectory "fluent-nhibernate\tools\NHibernate". Now, to build, you can use rake, which is a Ruby build system; however, you can also just open the solution and build, which is what I did. I had a few problems with the references, which I simply re-added using the new ones. Once built, I just took all the NHibernate dlls and the Fluent ones, replaced my existing NHibernate / Fluent assemblies, and killed off the old Linq project. All I had to change were the places that used .Linq<T>, replacing them with .Query<T> (which was easy, as I had wrapped it already to isolate my code from such changes), and hey presto, everything worked. Even the VB.NET Linq calls. I need to do some more testing as I've only done basic smoke tests, but it's all looking pretty good, so for now, I will stick to NHibernate!
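For anyone wanting to kick the tyres on the rebuilt assemblies, a minimal bootstrap looks roughly like this; the connection string and the Customer / CustomerMap class names are placeholders for the example, not from my actual project:

using System.Linq;
using FluentNHibernate.Cfg;
using FluentNHibernate.Cfg.Db;
using NHibernate;
using NHibernate.Linq;   // brings in the Query<T>() extension method

public static class SessionFactoryBuilder
{
    public static ISessionFactory Build()
    {
        // Wire the FluentNHibernate mappings on top of the freshly built NHibernate 3.0 dlls
        return Fluently.Configure()
            .Database(MsSqlConfiguration.MsSql2008
                .ConnectionString("Server=.;Database=Demo;Trusted_Connection=True;"))
            .Mappings(m => m.FluentMappings.AddFromAssemblyOf<CustomerMap>())
            .BuildSessionFactory();
    }
}

// Somewhere behind the repository / unit of work wrapper:
// var names = session.Query<Customer>()
//     .Where(c => c.Name.Contains("x") || c.Name.Contains("y"))
//     .ToList();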

    Read the article

  • ASP.NET MVC: Using ProfileRequiredAttribute to restrict access to pages

    - by DigiMortal
If you are using AppFabric Access Control Services to authenticate users when they log in to your community site using Live ID, Google or some other popular identity provider, you need more than AuthorizeAttribute to make sure that users can access the content that is there for authenticated users only. In this posting I will show you how to extend the AuthorizeAttribute so users must also have their user profile filled in.

Semi-authorized users
When a user is authenticated through an external identity provider, not all identity providers give us the user name or other information we ask users for when they join our site. What all identity providers have in common is a unique ID that helps you identify the user. Example: users authenticated through Windows Live ID by AppFabric ACS have no name specified. Google's identity provider is able to provide you with the user name and e-mail address if the user agrees to publish this information to you. Both give you the unique ID of the user when the user is successfully authenticated in their service. There is a logical shift between ASP.NET and my site when considering a user as authorized. For ASP.NET MVC, a user is authorized when the user has an identity. For my site, a user is authorized when the user has a profile and a row in my users table. Having a profile means that the user has a unique username in my system and he or she is always identified by this username by other users. My solution is simple: I created my own action filter attribute that makes sure the user has a profile before accessing a given method, and if the user has no profile then the browser is redirected to the join page.

Illustrating the problem
Usually we restrict access to a page using AuthorizeAttribute. Code is something like this.

[Authorize]
public ActionResult Details(string id)
{
    var profile = _userRepository.GetUserByUserName(id);
    return View(profile);
}

If this page is only for site users and we have user profiles, then all users – the ones that have a profile and all the others that are just authenticated – can access the information. It is okay because all these users have successfully logged in to some service that is supported by AppFabric ACS. In my site, the users with no profile are in a grey spot. They are halfway to being users because they have no username and profile on my site yet. So looking at the image above again, we need something that adds a profile existence condition to user-only content.

[ProfileRequired]
public ActionResult Details(string id)
{
    var profile = _userRepository.GetUserByUserName(id);
    return View(profile);
}

Now, this attribute will solve our problem as soon as we implement it.

ProfileRequiredAttribute: profiles are required to be fully authorized
Here is my implementation of ProfileRequiredAttribute. It is pretty new and right now it is more like a working draft, but you can already play with it.
public class ProfileRequiredAttribute : AuthorizeAttribute
{
    private readonly string _redirectUrl;

    public ProfileRequiredAttribute()
    {
        _redirectUrl = ConfigurationManager.AppSettings["JoinUrl"];
        if (string.IsNullOrWhiteSpace(_redirectUrl))
            _redirectUrl = "~/";
    }

    public override void OnAuthorization(AuthorizationContext filterContext)
    {
        base.OnAuthorization(filterContext);

        var httpContext = filterContext.HttpContext;
        var identity = httpContext.User.Identity;

        if (!identity.IsAuthenticated || identity.GetProfile() == null)
            if (filterContext.Result == null)
                httpContext.Response.Redirect(_redirectUrl);
    }
}

All methods with this attribute work as follows: if the user is not authenticated then he or she is redirected to the AppFabric ACS identity provider selection page; if the user is authenticated but has no profile then the user is by default redirected to the main page of the site, but if you have an application setting with the name JoinUrl then the user is redirected to that URL. The first case is handled by AuthorizeAttribute and the second one is handled by the custom logic in the ProfileRequiredAttribute class.

GetProfile() extension method
To get the user profile using less code in places where profiles are needed, I wrote a GetProfile() extension method for the IIdentity interface. There are some more extension methods that read the user and identity provider identifiers from claims, and based on this information the user profile is read from the database. If you take this code with copy and paste, I am sure it won't work for you, but you get the idea.

public static User GetProfile(this IIdentity identity)
{
    if (identity == null)
        return null;

    var context = HttpContext.Current;
    if (context.Items["UserProfile"] != null)
        return context.Items["UserProfile"] as User;

    var provider = identity.GetIdentityProvider();
    var nameId = identity.GetNameIdentifier();

    var rep = ObjectFactory.GetInstance<IUserRepository>();
    var profile = rep.GetUserByProviderAndNameId(provider, nameId);

    context.Items["UserProfile"] = profile;

    return profile;
}

To avoid round trips to the database I cache the user profile for the current request, because the chance that the profile gets changed meanwhile is minimal. The other reason is maybe more tricky – profile objects come from the Entity Framework context, and the context also has the HTTP request as its lifecycle.

Conclusion
This posting gave you some ideas on how to finish the user profiles work when you use AppFabric ACS as an external authentication provider. Although there was a little shift between us and ASP.NET MVC in the interpretation of "authorized", we were easily able to solve the problem by extending AuthorizeAttribute to get all our requirements fulfilled. We also wrote an extension method for IIdentity that returns the user profile based on the username and caches the profile in the HTTP request scope.
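If you wanted the profile requirement to apply across the whole site rather than action by action, one option is to register the attribute as a global filter. This is only a sketch, assuming a standard ASP.NET MVC 3 Global.asax and that any pages meant for anonymous visitors are excluded some other way:

public class MvcApplication : System.Web.HttpApplication
{
    public static void RegisterGlobalFilters(GlobalFilterCollection filters)
    {
        filters.Add(new HandleErrorAttribute());
        // Hypothetical: require an existing profile for every action in the application
        filters.Add(new ProfileRequiredAttribute());
    }

    protected void Application_Start()
    {
        RegisterGlobalFilters(GlobalFilters.Filters);
        // routes, areas, etc. registered as usual
    }
}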

    Read the article

  • Free and Open Source Software in Oracle Solaris 11.1

    - by user13277799
    Oracle Solaris 11.1 contains number of Free and Open Source packages. The following table contains important FOSS packages with their versions available in this latest Oracle Solaris release. a2ps 4.14 aalib 1.4.0 pmtools 20071116 apache-ant 1.7.1 httpd 2.2.22 mod_dtrace 0.3.1 mod_fcgid 2.3.6 tomcat-connectors 1.2.28 mod_perl 2.0.4 mod_proxy_html 3.1.1 modsecurity-apache 2.5.9 mod_wsgi 3.3 apr 1.3.9 apr-util 1.3.9 areca 7.1 autoconf 2.68 autogen 5.9 automake 1.10 automake 1.11.2 automake 1.9.6 bash 4.1 bcc 0.16.17 beanshell 2.0b4 db 5.1.25 bind 9.6-ESV-R7-P2 binutils 2.21.1 bison 2.3 bzip2 1.0.6 cdrtools 3.00 clisp 2.47 cmake 2.8.6 gnu 0.5.11 conflict 20100627 convmv 1.15 coreutils 8.5 cups 1.4.5 curl 7.21.2 cvs 1.12.13 diffutils 2.8.7 doxygen 1.7.6.1 ejabberd 2.1.8 elinks 0.11.7 emacs 23.4 otp_src R12B-5 fcgi 2.4.0 fetchmail 6.3.22 flex 2.5.35 foomatic-db 20080903 foomatic-db-engine 3.0-20080903 foomatic-filters 4.0.15 foomatic-filters-ppds 20080818 fping 2.4b2_to gawk 3.1.8 gcc 3.4.3 gcc 4.5.2 gd 2.0.35 gdb 6.8 gdbm 1.8.3 gettext 0.16.1 grep 2.10 ghostscript 9.00 git 1.7.9.2 gnu-gs-fonts-other 6.0 gnu-gs-fonts-std 6.0 gmp 4.3.2 gnupg 2.0.17 gnuplot 4.6.0 pth 2.0.7 gocr 0.48 gperf 3.0.3 gpgme 1.1.8 grails 1.0.3 graphviz 2.28.0 tar 1.26 guile 1.8.6 gutenprint 5.2.7 gzip 1.4 hal-cups-utils 0.6.19 hexedit 1.2.12 hplip 3.10.9 httping 1.4.4 hwdata 0.5.11 iftop 0.17 ilmbase 1.0.1 ImageMagick 6.3.4 iperf 2.0.4 ipmitool 1.8.11 ircii 20060725 dhcp 4.1-ESV-R7 junit 4.10 INIT 2011-02-08 lcms 1.19 less 436 lftp 4.3.1 libassuan 2.0.1 confuse 2.6 libedit 20110802-3.0 libee 0.3.2 libestr 0.1.2 libevent 1.4.14b expat 2.1.0 libidn 1.19 libksba 1.1.0 libmcrypt 2.5.8 libmemcached 0.16 libmng 1.0.10 neon 0.29.5 libnet 1.1.5 libpcap 1.1.1 librsync 0.9.7 libsigsegv 2.6 libsndfile 1.0.23 libtecla 1.6.1 libtool 2.4.2 libtorrent 0.12.2 libusbugen 0.1.8 libusb 0.1.8 libxml2 2.7.6 libxslt 1.1.26 lighttpd 1.4.23 links 1.03 logilab-astng 0.19.0 logilab-common 0.40.0 lua 5.1.4 m4 1.4.12 make 3.82 mc 4.7.5.2 meld 1.4.0 memcached 1.4.5 memcached-java 2.0.1 mercurial 2.2.1 mpc 0.9 mpfr 2.4.2 mutt 1.5.21 mysql 5.1.37 ncftp 3.2.3 net-snmp 5.4.1 nethack 3.4.3 nmap 5.51 ntp-dev 4.2.5 open-fabrics 1.5.3 openexr 1.6.1 openldap 2.4.30 openscap 0.8.1 openssl 0.9.8q openssl 1.0.0j libopenusb 1.0.1 p7zip 9.20.1 pam_pkcs11 0.6.0 patch 2.5.9 pconsole 1.0 pcre 8.21 perl 5.12.4 DBI 1.58 Net-SSLeay 1.36 pmtools 1.10 XML-Parser 2.36 XML-Simple 2.18 PHP 5.2.17 PHP 5.3.14 pinentry 0.7.6 privoxy 3.0.17 proftpd 1.3.3 psutils p17 pv 1.2.0 pwgen 2.06 pylint 0.18.0 CherryPy 3.1.2 coverage 3.5 jsonrpclib 0.1.3 ldtp 2.1.1 M2Crypto 0.21.1 Mako 0.4.1 nose 1.1.2 ply 3.1 pybonjour 1.1.1 pycups 1.9.46 pycurl 7.19.0 lxml 2.3.3 pyOpenSSL 0.11 Python 2.6.8 Python 2.7.3 setuptools 0.6 quagga 0.99.19 quilt 0.60 rdiff-backup 1.3.3 readline 5.2 rpm2cpio 0.5.11 rsync 3.0.8 rsyslog 6.2.0 rtorrent 0.8.2 ruby 1.8.7 samba 3.6.6 sane-backends 1.0.19 sane-frontends 1.0.14 screen 4.0.3 sed 4.2.1 sendmail 8.14.5 slang 2.2.4 slib 3b1 slrn 0.9.9 snort 2.8.4.1 sox 14.3.2 spawn-fcgi 1.6.3 squid 3.1.18 stdcxx 4.2.1 subversion 1.7.5 sudo 1.8.4.5 swig 1.3.35 expect 5.45 tcl 8.5.9 tk 8.5.9 tls 1.6 tcpdump 4.1.1 tcsh 6.17.00 texinfo 4.7 tidy 1.0.0 timezone apache-tomcat 6.0.35 top 3.8beta1 trousers 0.3.6 unixODBC 2.3.0 unrar 4.1.4 unzip 6.0 vim 7.3 visual-panels wget 1.12 which 2.16 wireshark 1.8.2 wxGTK 2.8.12 xorriso 0.6.0 xz 5.0.1 zip 3.0 zlib 1.2.3 zsh 4.3.17

    Read the article

  • Orchestrating the Virtual Enterprise

    - by John Murphy
    During the American Industrial Revolution, the Ford Motor Company did it all. It turned raw materials into a showroom full of Model Ts. It owned a steel mill, a glass factory, and an automobile assembly line. The company was both self-sufficient and innovative and went on to become one of the largest and most profitable companies in the world. Nowadays, it's unusual for any business to follow this vertical integration model because its much harder to be best in class across such a wide a range of capabilities and services. Instead, businesses focus on their core competencies and outsource other business functions to specialized suppliers. They exchange vertical integration for collaboration. When done well, all parties benefit from this arrangement and the collaboration leads to the creation of an agile, lean and successful "virtual enterprise." Case in point: For Sun hardware, Oracle outsources most of its manufacturing and all of its logistics to third parties. These are vital activities, but ones where Oracle doesn't have a core competency, so we shift them to business partners who do. Within our enterprise, we always retain the core functions of product development, support, and most of the sales function, because that's what constitutes our core value to our customers. This is a perfect example of a virtual enterprise.  What are the implications of this? It means that we must exchange direct internal control for indirect external collaboration. This fundamentally changes the relative importance of different business processes, the boundaries of security and information sharing, and the relationship of the supply chain systems to the ERP. The challenge is that the systems required to support this virtual paradigm are still mired in "island enterprise" thinking. But help is at hand. Developments such as the Web, social networks, collaboration, and rules-based orchestration offer great potential to fundamentally re-architect supply chain systems to better support the virtual enterprise.  Supply Chain Management Systems in a Virtual Enterprise Historically enterprise software was constructed to automate the ERP - and then the supply chain systems extended the ERP. They were joined at the hip. In virtual enterprises, the supply chain system needs to be ERP agnostic, sitting above each of the ERPs that are distributed across the virtual enterprise - most of which are operating in other businesses. This is vital so that the supply chain system can manage the flow of material and the related information through the multiple enterprises. It has to have strong collaboration tools. It needs to be highly flexible. Users need to be able to see information that's coming from multiple sources and be able to react and respond to events across those sources.  Oracle Fusion Distributed Order Orchestration (DOO) is a perfect example of a supply chain system designed to operate in this virtual way. DOO embraces the idea that a company's fulfillment challenge is a distributed, multi-enterprise problem. It enables users to manage the process and the trading partners in a uniform way and deliver a consistent user experience while operating over a heterogeneous, virtual enterprise. This is a fundamental shift at the core of managing supply chains. It forces virtual enterprises to think architecturally about how best to construct their supply chain systems.  Case in point, almost everyone has ordered from Amazon.com at one time or another. 
Our orders are as likely to be fulfilled by third parties as they are by Amazon itself. To deliver the order promptly and efficiently, Amazon has to send it to the right fulfillment location and know the availability in that location. It needs to be able to track status of the fulfillment and deal with exceptions. As a virtual enterprise, Amazon's operations, using thousands of trading partners, requires a very different approach to fulfillment than the traditional 'take an order and ship it from your own warehouse' model. Amazon had no choice but to develop a complex, expensive and custom solution to tackle this problem as there used to be no product solution available. Now, other companies who want to follow similar models have a better off-the-shelf choice -- Oracle Distributed Order Orchestration (DOO).  Consider how another of our customers is using our distributed orchestration solution. This major airplane manufacturer has a highly complex business and interacts regularly with the U.S. Government and major airlines. It sits in the middle of an intricate supply chain and needed to improve visibility across its many different entities. Oracle Fusion DOO gives the company an orchestration mechanism so it could improve quality, speed, flexibility, and consistency without requiring an organ transplant of these highly complex legacy systems. Many retailers face the challenge of dealing with brick and mortar, Web, and reseller channels. They all need to be knitted together into a virtual enterprise experience that is consistent for their customers. When a large U.K. grocer with a strong brick and mortar retail operation added an online business, they turned to Oracle Fusion DOO to bring these entities together. Disturbing the Peace with Acquisitions Quite often a company's ERP system is disrupted when it acquires a new company. An acquisition can inject a new set of processes and systems -- or even introduce an entirely new business like Sun's hardware did at Oracle. This challenge has been a driver for some of our DOO customers. A large power management company is using Oracle Fusion DOO to provide the flexibility to rapidly integrate additional products and services into its central fulfillment operation. The Flip Side of Fulfillment Meanwhile, we haven't ignored similar challenges on the supply side of the equation. Specifically, how to manage complex supply in a flexible way when there are multiple trading parties involved? How to manage the supply to suppliers? How to manage critical components that need to merge in a tier two or tier three supply chain? By investing in supply orchestration solutions for the virtual enterprise, we plan to give users better visibility into their network of suppliers to help them drive down costs. We also think this technology and full orchestration process can be applied to the financial side of organizations. An example is transactions that flow through complex internal structures to minimize tax exposure. We can help companies manage those transactions effectively by thinking about the internal organization as a virtual enterprise and bringing the same solution set to this internal challenge.  The Clear Front Runner No other company is investing in solving the virtual enterprise supply chain issues like Oracle is. Oracle is in a unique position to become the gold standard in this market space. We have the infrastructure of Oracle technology. We already have an Oracle Fusion DOO application which embraces the best of what's required in this area. 
And we're absolutely committed to extending our Fusion solution to other use cases and delivering even more business value.

    Read the article

  • elffile: ELF Specific File Identification Utility

    - by user9154181
    Solaris 11 has a new standard user level command, /usr/bin/elffile. elffile is a variant of the file utility that is focused exclusively on linker related files: ELF objects, archives, and runtime linker configuration files. All other files are simply identified as "non-ELF". The primary advantage of elffile over the existing file utility is in the area of archives — elffile examines the archive members and can produce a summary of the contents, or per-member details. The impetus to add elffile to Solaris came from the effort to extend the format of Solaris archives so that they could grow beyond their previous 32-bit file limits. That work introduced a new archive symbol table format. Now that there was more than one possible format, I thought it would be useful if the file utility could identify which format a given archive is using, leading me to extend the file utility: % cc -c ~/hello.c % ar r foo.a hello.o % file foo.a foo.a: current ar archive, 32-bit symbol table % ar r -S foo.a hello.o % file foo.a foo.a: current ar archive, 64-bit symbol table In turn, this caused me to think about all the things that I would like the file utility to be able to tell me about an archive. In particular, I'd like to be able to know what's inside without having to unpack it. The end result of that train of thought was elffile. Much of the discussion in this article is adapted from the PSARC case I filed for elffile in December 2010: PSARC 2010/432 elffile Why file Is No Good For Archives And Yet Should Not Be Fixed The standard /usr/bin/file utility is not very useful when applied to archives. When identifying an archive, a user typically wants to know 2 things: Is this an archive? Presupposing that the archive contains objects, which is by far the most common use for archives, what platform are the objects for? Are they for sparc or x86? 32 or 64-bit? Some confusing combination from varying platforms? The file utility provides a quick answer to question (1), as it identifies all archives as "current ar archive". It does nothing to answer the more interesting question (2). To answer that question, requires a multi-step process: Extract all archive members Use the file utility on the extracted files, examine the output for each file in turn, and compare the results to generate a suitable summary description. Remove the extracted files It should be easier and more efficient to answer such an obvious question. It would be reasonable to extend the file utility to examine archive contents in place and produce a description. However, there are several reasons why I decided not to do so: The correct design for this feature within the file utility would have file examine each archive member in turn, applying its full abilities to each member. This would be elegant, but also represents a rather dramatic redesign and re-implementation of file. Archives nearly always contain nothing but ELF objects for a single platform, so such generality in the file utility would be of little practical benefit. It is best to avoid adding new options to standard utilities for which other implementations of interest exist. In the case of the file utility, one concern is that we might add an option which later appears in the GNU version of file with a different and incompatible meaning. Indeed, there have been discussions about replacing the Solaris file with the GNU version in the past. This may or may not be desirable, and may or may not ever happen. Either way, I don't want to preclude it. 
        - Examining archive members is an O(n) operation, and can be relatively slow with large archives. The file utility is supposed to be a very fast operation.

    I decided that extending file in this way is overkill, and that an investment in the file utility for better archive support would not be worth the cost. A solution that is more narrowly focused on ELF and other linker related files is really all that we need. The necessary code for doing this already exists within libelf. All that is missing is a small user-level wrapper to make that functionality available at the command line.

    In that vein, I considered adding an option for this to the elfdump utility. I examined elfdump carefully, and even wrote a prototype implementation. The added code is small and simple, but the conceptual fit with the rest of elfdump is poor. The result complicates elfdump syntax and documentation, definite signs that this functionality does not belong there. And so, I added this functionality as a new user level command.

    The elffile Command

    The syntax for this new command is

        elffile [-s basic | detail | summary] filename...

    Please see the elffile(1) manpage for additional details. To demonstrate how output from elffile looks, I will use the following files:

        File          Description
        config        A runtime linker configuration file produced with crle
        dwarf.o       An ELF object
        /etc/passwd   A text file
        mixed.a       Archive containing a mixture of ELF and non-ELF members
        mixed_elf.a   Archive containing ELF objects for different machines
        not_elf.a     Archive containing no ELF objects
        same_elf.a    Archive containing a collection of ELF objects for the same machine. This is the most common type of archive.

    The file utility identifies these files as follows:

        % file config dwarf.o /etc/passwd mixed.a mixed_elf.a not_elf.a same_elf.a
        config:      Runtime Linking Configuration 64-bit MSB SPARCV9
        dwarf.o:     ELF 64-bit LSB relocatable AMD64 Version 1
        /etc/passwd: ascii text
        mixed.a:     current ar archive, 32-bit symbol table
        mixed_elf.a: current ar archive, 32-bit symbol table
        not_elf.a:   current ar archive
        same_elf.a:  current ar archive, 32-bit symbol table

    By default, elffile uses its "summary" output style. This output differs from the output from the file utility in 2 significant ways:

        - Files that are not an ELF object, archive, or runtime linker configuration file are identified as "non-ELF", whereas the file utility attempts further identification for such files.
        - When applied to an archive, the elffile output includes a description of the archive's contents, without requiring member extraction or other additional steps.

    Applying elffile to the above files:

        % elffile config dwarf.o /etc/passwd mixed.a mixed_elf.a not_elf.a same_elf.a
        config:      Runtime Linking Configuration 64-bit MSB SPARCV9
        dwarf.o:     ELF 64-bit LSB relocatable AMD64 Version 1
        /etc/passwd: non-ELF
        mixed.a:     current ar archive, 32-bit symbol table, mixed ELF and non-ELF content
        mixed_elf.a: current ar archive, 32-bit symbol table, mixed ELF content
        not_elf.a:   current ar archive, non-ELF content
        same_elf.a:  current ar archive, 32-bit symbol table, ELF 64-bit LSB relocatable AMD64 Version 1

    The output for same_elf.a is of particular interest: The vast majority of archives contain only ELF objects for a single platform, and in this case, the default output from elffile answers both of the questions about archives posed at the beginning of this discussion, in a single efficient step. This makes elffile considerably more useful than file, within the realm of linker-related files.
    elffile can produce output in two other styles, "basic" and "detail". The basic style produces output that is the same as that from 'file', for linker-related files. The detail style produces per-member identification of archive contents. This can be useful when the archive contents are not homogeneous ELF objects, and more information is desired than the summary output provides:

        % elffile -s detail mixed.a
        mixed.a: current ar archive, 32-bit symbol table
        mixed.a(dwarf.o): ELF 32-bit LSB relocatable 80386 Version 1
        mixed.a(main.c): non-ELF content
        mixed.a(main.o): ELF 64-bit LSB relocatable AMD64 Version 1 [SSE]
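    As a rough illustration of the kind of identification discussed above (this is not elffile's actual implementation, which sits on top of libelf), the following minimal Java sketch classifies a file as an ELF object, an ar archive, or non-ELF purely by its leading magic bytes; the ELF class and data-encoding bytes supply the 32/64-bit and LSB/MSB distinction. The class name is made up for the example.

        import java.io.IOException;
        import java.io.RandomAccessFile;
        import java.nio.charset.StandardCharsets;

        // Classifies a file as an ELF object, an ar archive, or non-ELF by its magic bytes.
        // This mirrors only the first step a tool like elffile performs, with none of its
        // deeper per-member analysis.
        public class MagicSniff {
            private static final byte[] ELF_MAGIC = {0x7f, 'E', 'L', 'F'};
            private static final byte[] AR_MAGIC = "!<arch>\n".getBytes(StandardCharsets.US_ASCII);

            public static String identify(String path) throws IOException {
                try (RandomAccessFile f = new RandomAccessFile(path, "r")) {
                    byte[] head = new byte[8];
                    int n = f.read(head);
                    if (n >= 8 && startsWith(head, AR_MAGIC)) {
                        return "ar archive";
                    }
                    if (n >= 6 && startsWith(head, ELF_MAGIC)) {
                        // Byte 4 (EI_CLASS) distinguishes 32/64-bit, byte 5 (EI_DATA) the byte order.
                        String width = head[4] == 2 ? "64-bit" : "32-bit";
                        String order = head[5] == 2 ? "MSB" : "LSB";
                        return "ELF " + width + " " + order;
                    }
                    return "non-ELF";
                }
            }

            private static boolean startsWith(byte[] data, byte[] prefix) {
                for (int i = 0; i < prefix.length; i++) {
                    if (data[i] != prefix[i]) return false;
                }
                return true;
            }

            public static void main(String[] args) throws IOException {
                for (String path : args) {
                    System.out.println(path + ": " + identify(path));
                }
            }
        }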

    Read the article

  • Translating Your Customizations

    - by Richard Bingham
    This blog post explains the basics of translating the customizations you can make to Fusion Applications products, covering both composer-based customizations and the generic design-time customizations done via JDeveloper.

    Introduction

    Like most Oracle Applications, Fusion Applications installs on-premise with a US-English base language that is, in Release 7, supported by the option to add up to a total of 22 additional language packs (in Oracle Cloud production environments languages are pre-installed already). As such many organizations offer their users the option of working with their local language, and logically that should also apply for any customizations as well.

    Composer-based UI Customizations

    Customizations made in Page Composer take into consideration the session LOCALE, as set in the user preferences screen, during all customization work, and are stored in the MDS repository accordingly. As such the actual new or changed values used will only apply for the same language under which the customization was made, and text for any other languages requires a separate upload. See the Resource Bundles section below, which incidentally also applies to custom UI changes done in JDeveloper. You may have noticed this when you select the "Select Text Resource" menu option when editing the text on a page. Using this ensures that the resource bundles are used, whereas if you define a static value in Expression Builder it will never be available for translation. Notice in the screenshot below that the "What's New" custom value I have already defined using the 'Select Text Resource' feature is internally using the adfBundle groovy function to pull the custom value for my key (RT_S_1) from the ComposerOverrideBundle.

    Figure 1 – Page Composer showing the override bundle being used.

    Business Objects

    Customizing the Business Objects available in the Applications Composer tool for the CRM products, such as adding additional fields, also operates using the session language. Translating these additional values for these fields into other installed languages requires loading additional resource bundles, again as described below.

    Reports and Analytics

    Most customizations to Reports and BI Analytics are essentially reorganizations and visualizations of existing number and text data from the system, and as such will use the appropriate values based on the user's session language. Where a translated value or string exists for that session language, it will be used without the need for additional work. Extending through the addition of brand new reports and analytics requires another method of loading the translated strings, as part of what is known as 'Localizing' the BI Catalog and Metadata. This time it is via an export/import of XML data through the BI Administrators console, and is described in the OBIEE Admin Guide. Fusion Applications reports based on BI Publisher are already defined in template-per-locale, and in addition provide an extra process for getting the data for translation and reloading. This again uses the standard resource bundle format. Loading a custom report is illustrated in this video from our YouTube channel, which shows the screen for both setting the template locale and running an export for translation.
    Fusion Applications Menus

    Whilst the seeded Navigator and Global Menu values are fully translated when the additional language is installed, if they are customized then the change or new menu item will apply universally, not currently per language. This is set to change in a future release with the new UI Text Editor feature described below.

    More on Resource Bundles

    As mentioned above, to provide translations for most of your customizations you need to add values to a resource bundle. This is an industry open standard (OASIS) format XML file with the extension .xliff, which stores translated values for the strings used by ADF at run-time. The general process is that these values are exported from the MDS repository, manually edited, and then imported back in again. This needs to be done by an administrator, via either WLST commands or through Enterprise Manager as per the screenshot below. This is detailed out in the Fusion Applications Extensibility Guide. For SaaS environments the Cloud Operations team can assist.

    Figure 2 – Enterprise Manager's MDS export, used for getting resource bundles for manual translation and re-importing them on the same screen.

    All customized strings are stored in an override bundle (xliff file) for each locale, suffixed with the language initials, with English ones being saved to the default. As such each language bundle can be easily identified and updated. Similarly if you used JDeveloper to create your own applications as extensions to Fusion Applications you would use the native support for resource bundles, and add them into the faces-config.xml file for inclusion in your application. An example is this ADF customization video from our YouTube channel. JDeveloper also supports automatic synchronization between your underlying resource bundles and any translatable strings you add – very handy. For more information see the chapters on "Using Automatic Resource Bundle Integration in JDeveloper" and "Manually Defining Resource Bundles and Locales" in the Oracle Fusion Middleware Web User Interface Developer's Guide for Oracle Application Development Framework.

    FND Messages and Look-ups

    FND Messages, as defined here, are not used for UI labels (those are known as 'strings'), but are the responses back to users as a result of an action, such as from a page submit. Each 'message' is defined and stored in the related database table (FND_MESSAGES_B), with another (FND_MESSAGES_TL) holding any language-specific values. These come seeded with the additional language installs; however, if you customize the messages via the "Manage Messages" task in Functional Setup Manager, or add new ones, then currently (in Release 7) you'll need to repeat it for each language.

    Figure 3 – An FND Message defined in an English user session.

    Similarly Look-ups are stored in a translation table (FND_LOOKUP_VALUES_TL) where appropriate, and can be customized by setting the user's session language and making the change in the Setup and Maintenance task entitled "Manage [Standard|Common] Look-ups".

    Online Help

    Yes, in fact all the seeded help is applied as part of each language pack install during the post-install provisioning process. If you are editing or adding custom online help then the Create Help screen provides a drop-down of which language your help customization will apply to. This is shown in the video below from our YouTube channel, and obviously you'll need to do it for each language in use.

    What is Coming for Translations?
    Currently planned for Release 8 is something called the User Interface (UI) Text Editor. This tool will allow the editing of all the text shown on the pages and forms of Fusion Applications. It will provide a search based on a particular term or word, say "Worker", and will allow it to be adjusted, say to "Employee", which then updates all the Resource Bundles that contain it. In the case of multi-language environments, it will use the user's session language (locale) to know which Resource Bundles to apply the change to. This capability will also support customization sandboxes, to help ensure changes can be tested and approved. It is also interesting to note that the design currently allows any page-specific customizations done using Page Composer or Application Composer to overwrite the global changes done via the UI Text Editor, allowing for special context-sensitive values to still be used.

    Further Reading and Resources

    The following short list provides the main resources for digging into more detail on translation support for both Composer and JDeveloper customization projects.

        - There is a dedicated chapter entitled "Translating Custom Text" in the Fusion Applications Extensibility Guide. This has good examples and steps for many tasks, especially administering resource bundles.
        - Using localization formatting (numbers, dates etc) for design-time changes is well documented in the Fusion Applications Developer Guide.
        - For more guidelines on general design-time globalization, see either the 'Internationalizing and Localizing Pages' chapter in the Oracle Fusion Middleware Web User Interface Developer's Guide for Oracle Application Development Framework (Oracle Fusion Applications Edition) or the general Oracle Database Globalization Support Guide.
        - The Oracle Architecture 'A-Team' provided a recent post on customizing the user session timeout popup, using design-time changes to resource bundles. It has detailed step-by-step examples which can be a useful illustration.
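    For readers less familiar with resource bundles in general, the per-locale lookup they provide can be illustrated with plain java.util.ResourceBundle. This is only a generic sketch of the mechanism, not the ADF xliff override bundles themselves (which live in MDS rather than in .properties files), and the bundle name and key below are made up for the example.

        import java.util.Locale;
        import java.util.ResourceBundle;

        // Demonstrates per-locale string lookup: the same key resolves to a
        // different translated value depending on the locale requested.
        public class BundleDemo {
            public static void main(String[] args) {
                // Assumes MyToolBundle.properties and MyToolBundle_fr.properties
                // are on the classpath.
                ResourceBundle en = ResourceBundle.getBundle("MyToolBundle", Locale.ENGLISH);
                ResourceBundle fr = ResourceBundle.getBundle("MyToolBundle", Locale.FRENCH);

                System.out.println(en.getString("WHATS_NEW")); // e.g. "What's New"
                System.out.println(fr.getString("WHATS_NEW")); // e.g. "Nouveautés"
            }
        }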

    Read the article

  • jrunscript as a cross platform scripting environment

    - by user12798506
    When writing small tools, the natural choice on UNIX-like systems is a sh script, but sh and the commands such scripts typically rely on (find, grep, sed, awk and so on) are not available on a standard Windows installation. Installing Cygwin makes sh scripts run on Windows, but then every machine the tool is used on needs Cygwin. An alternative that needs nothing beyond the JDK is jrunscript, the JavaScript script runner bundled with the JDK, which lets a single script cover both platforms:

        - The same script runs on Windows and on UNIX-like systems.
        - There is no dependency on find, grep, sed, awk, sh, or Windows Script Host.
        - The full Java class libraries are available from the script.
        - Nothing extra has to be installed: jrunscript has shipped with the JDK since JDK 6, so any machine with JDK 6 or later already has it.

    The rest of this post shows some examples of scripting with jrunscript and JavaScript.

    1) Running the same tool on Windows and UNIX

    Put the tool's logic in a JavaScript file, say mytool.js, and invoke it through jrunscript from a small platform-specific wrapper: a sh script on UNIX and a bat file on Windows.

    mytool.sh (UNIX):

        #!/bin/sh
        bindir=$(cd $(dirname $0) && pwd)
        case "`uname`" in
        CYGWIN*) bindir=`cygpath -w "$bindir"` ;;
        esac
        jrunscript "${bindir}/mytool.js" $*

    mytool.bat (Windows):

        @echo off
        set bindir=%~dp0
        jrunscript "%bindir%mytool.js" %*

    The sh wrapper also handles the Cygwin case by converting the path with cygpath. All of the real logic lives in the .js file, which is shared unchanged between UNIX and Windows.

    2) jrunscript's built-in cat, cp, find and grep functions

    jrunscript provides JavaScript built-in functions modelled on familiar UNIX commands (see "jrunscript JavaScript built-in functions"), so many things normally done with a sh one-liner can be done directly in the script. For example, to search the Java source files under src for the word "enum":

        find('src', '.*.java', function(f) { grep('enum', f); });

    The built-in cp(from, to) copies only a single file, so there is no direct equivalent of a recursive copy such as:

        $ cp -r src/* tmp/

    A recursive copy can, however, be put together from find():

        function cpr(fromdir, todir, pattern) {
            if (pattern == undefined) {
                pattern = ".*";
            }
            var frdir = pathToFile(fromdir).getCanonicalPath();
            find(fromdir, pattern, function(f) {
                // relative dir of file f from 'fromdir'.
                var relative = f.getParentFile().getCanonicalPath().substring(frdir.length() + 1);
                var dstdir = pathToFile(todir + "/" + relative);
                if (!dstdir.exists()) {
                    // Create the destination dir for file f.
                    mkdirs(dstdir);
                }
                // Copy file f to 'dstdir'.
                cp(f, dstdir + "/" + f.getName());
            });
        }

    Because Java's I/O APIs accept "/" as a path separator on Windows as well, the same code works on both UNIX and Windows.

    External commands can be run with the built-in exec(cmd), for example extracting a jar file:

        $ jrunscript
        js> exec("jar xvf example.jar")
          created: META-INF/
         inflated: META-INF/MANIFEST.MF
          created: com/
          created: com/example/
         inflated: com/example/Bar.class
          created: com/example/dummy/
        extracted: com/example/dummy/dummy.txt
        extracted: com/example/dummy.properties
         inflated: com/example/Foo.class

    exec() does have a pitfall, though: a command that writes a lot to its standard error can make the script hang. On Windows this is easy to reproduce with a small BAT file:

    errmsg.bat:

        for /L %%i in (1,1,50) do echo "Error Message count = %%i" 1>&2

    Run through jrunscript's exec(), it prints 18 messages and then stops, because the unread stderr pipe buffer fills up:

        C:\tmp>jrunscript -e "exec('errmsg.bat')"
        C:\tmp>for /L %i in (1 1 100) do echo "Error Message count = %i" 1>&2
        C:\tmp>echo "Error Message count = 1"  1>&2
        :
        C:\tmp>echo "Error Message count = 18"  1>&2
        (execution hangs here)

    The reason is visible in the implementation of exec(), which reads only the child process's standard output:

        $ jrunscript
        js> this["exec"].toString()
        function exec(cmd) {
            var process = java.lang.Runtime.getRuntime().exec(cmd);
            var inp = new DataInputStream(process.getInputStream());
            var line = null;
            while ((line = inp.readLine()) != null) {
                println(line);
            }
            process.waitFor();
            $exit = process.exitValue();
        }

    Standard error is never consumed, so once its buffer is full the child blocks. A replacement exec() that drains both streams on separate threads, using the built-in cat() function (which accepts an InputStream), avoids the problem:

        function exec(cmd) {
            var process = java.lang.Runtime.getRuntime().exec(cmd);
            var stdworker = new java.lang.Runnable(
                {run: function() { cat(process.getInputStream()); }});
            var errworker = new java.lang.Runnable(
                {run: function() { cat(process.getErrorStream()); }});
            new java.lang.Thread(stdworker).start();
            new java.lang.Thread(errworker).start();
            return process.waitFor();
        }

    3) No extra runtime to install

    Other JVM scripting languages such as JRuby, Groovy and Scala are also candidates for this kind of cross-platform scripting, but each of them means downloading and carrying around its own runtime JARs, which run to several megabytes (on the order of 10MB). The JavaScript engine used by jrunscript is part of the JRE/JDK itself, so a script needs nothing beyond an installed JDK.
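    The same pitfall applies when calling Runtime.exec directly from Java code rather than from jrunscript. As a rough illustration (not part of jrunscript itself), one common fix is to merge stderr into stdout with ProcessBuilder and drain a single stream; the class name below is made up for the example.

        import java.io.BufferedReader;
        import java.io.IOException;
        import java.io.InputStreamReader;

        // Runs an external command and continuously drains its output so the
        // child can never block on a full pipe buffer. Merging stderr into
        // stdout keeps it to one reader loop; the alternative is one reader
        // thread per stream, as in the replacement exec() above.
        public class Exec {
            public static int run(String... command) throws IOException, InterruptedException {
                ProcessBuilder pb = new ProcessBuilder(command);
                pb.redirectErrorStream(true);        // merge stderr into stdout
                Process process = pb.start();
                try (BufferedReader reader = new BufferedReader(
                        new InputStreamReader(process.getInputStream()))) {
                    String line;
                    while ((line = reader.readLine()) != null) {
                        System.out.println(line);    // drain while the child runs
                    }
                }
                return process.waitFor();
            }

            public static void main(String[] args) throws Exception {
                // e.g. java Exec errmsg.bat  (the batch file from the example above)
                System.exit(run(args));
            }
        }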

    Read the article

  • A Reusable Builder Class for .NET testing

    - by Liam McLennan
    When writing tests, other than end-to-end integration tests, we often need to construct test data objects. Of course this can be done using the class's constructor and manually configuring the object, but getting many objects into a valid state soon becomes a large percentage of the testing effort. After many years of painstakingly creating builders for each of my domain objects I have finally become lazy enough to bother to write a generic, reusable builder class for .NET. To use it you instantiate an instance of the builder and configure it with a builder method for each class you wish it to be able to build. The builder method should require no parameters and should return a new instance of the type in a default, valid state. In other words the builder method should be a Func<TypeToBeBuilt>. The best way to make this clear is with an example. In my application I have the following domain classes that I want to be able to use in my tests:

        public class Person
        {
            public string Name { get; set; }
            public int Age { get; set; }
            public bool IsAndroid { get; set; }
        }

        public class Building
        {
            public string Street { get; set; }
            public Person Manager { get; set; }
        }

    The builder for this domain is created like so:

        build = new Builder();
        build.Configure(new Dictionary<Type, Func<object>>
        {
            {typeof(Building), () => new Building {Street = "Queen St", Manager = build.A<Person>()}},
            {typeof(Person), () => new Person {Name = "Eugene", Age = 21}}
        });

    Note how Building depends on Person, even though the person builder method is not defined yet. Now in a test I can retrieve a valid object from the builder:

        var person = build.A<Person>();

    If I need a class in a customised state I can supply an Action<TypeToBeBuilt> to mutate the object post construction:

        var person = build.A<Person>(p => p.Age = 99);

    The power and efficiency of this approach becomes apparent when your tests require larger and more complex objects than Person and Building. When I get some time I intend to implement the same functionality in Javascript and Ruby. Here is the full source of the Builder class:

        public class Builder
        {
            private Dictionary<Type, Func<object>> defaults;

            public void Configure(Dictionary<Type, Func<object>> defaults)
            {
                this.defaults = defaults;
            }

            public T A<T>()
            {
                if (!defaults.ContainsKey(typeof(T)))
                    throw new ArgumentException("No object of type " + typeof(T).Name + " has been configured with the builder.");

                T o = (T)defaults[typeof(T)]();
                return o;
            }

            public T A<T>(Action<T> customisation)
            {
                T o = A<T>();
                customisation(o);
                return o;
            }
        }
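    The same pattern ports fairly directly to other languages. As a rough Java sketch of the idea (this is not the author's planned JavaScript or Ruby port, and the class and method names are made up), each type is registered with a no-argument factory and retrieved in a default valid state, optionally mutated afterwards:

        import java.util.HashMap;
        import java.util.Map;
        import java.util.function.Consumer;
        import java.util.function.Supplier;

        // A generic test-data builder: each class is registered with a factory
        // that returns an instance in a default, valid state.
        public class Builder {
            private final Map<Class<?>, Supplier<?>> defaults = new HashMap<>();

            public <T> void configure(Class<T> type, Supplier<T> factory) {
                defaults.put(type, factory);
            }

            public <T> T a(Class<T> type) {
                Supplier<?> factory = defaults.get(type);
                if (factory == null) {
                    throw new IllegalArgumentException(
                        "No object of type " + type.getSimpleName() + " has been configured with the builder.");
                }
                return type.cast(factory.get());
            }

            // Build a default instance, then mutate it into a customised state.
            public <T> T a(Class<T> type, Consumer<T> customisation) {
                T o = a(type);
                customisation.accept(o);
                return o;
            }
        }

    Usage mirrors the C# version, for example build.a(Person.class, p -> p.age = 99) for a hypothetical Person class with a public age field.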

    Read the article

  • Consuming ASP.NET Web API services from PHP script

    - by DigiMortal
    I introduced ASP.NET Web API in some of my previous posts. Although Web API is easy to use in ASP.NET web applications, you can also use Web API from other platforms. This post shows you how to consume ASP.NET Web API from PHP scripts. Here are my previous posts about Web API:

        - How content negotiation works?
        - ASP.NET Web API: Extending content negotiation with new formats
        - Query string based content formatting

    Although these posts cover content negotiation they give you some idea about how Web API works.

    Test application

    On the Web API side I use the same sample application as in previous Web API posts – a very primitive web application to manage contacts.

    Listing contacts

    On the other machine I will run the following PHP script that works against my Web API application:

        <?php
        // request list of contacts from Web API
        $json = file_get_contents('http://vs2010dev:3613/api/contacts/');

        // deserialize data from JSON
        $contacts = json_decode($json);
        ?>
        <html>
        <head>
            <meta http-equiv="Content-Type" content="text/html; charset=utf-8" />
        </head>
        <body>
            <table>
            <?php
            foreach($contacts as $contact)
            {
                ?>
                <tr>
                    <td valign="top">
                        <?php echo $contact->FirstName ?>
                    </td>
                    <td valign="top">
                        <?php echo $contact->LastName ?>
                    </td>
                    <td valign="middle">
                        <form method="POST">
                            <input type="hidden" name="id"
                                value="<?php echo $contact->Id ?>" />
                            <input type="submit" name="cmd"
                                value="Delete"/>
                        </form>
                    </td>
                </tr>
                <?php
            }
            ?>
            </table>
        </body>
        </html>

    Notice how easy it is to handle JSON data in PHP! My PHP script produces the following output: Looks like data is here as it should be.

    Deleting contacts

    Now let's write code to delete contacts. Add this block of code before any other code in the PHP script:

        if(@$_POST['cmd'] == 'Delete')
        {
            $errno = 0;
            $errstr = '';
            $id = @$_POST['id'];

            $params = array('http' => array(
                        'method' => 'DELETE',
                        'content' => ""
                    ));
            $url = 'http://vs2010dev:3613/api/contacts/'.$id;
            $ctx = stream_context_create($params);
            $fp = fopen($url, 'rb', false, $ctx);

            if (!$fp) {
                $res = false;
            } else {
                $res = stream_get_contents($fp);
                fclose($fp);
            }

            header('Location: /json.php');
            exit;
        }

    Again simple code. If we also write insert and update methods we may want to bundle those operations into a single class.

    Conclusion

    ASP.NET Web API is not only ASP.NET fun. It is available to all other platforms too. In this posting we wrote a simple PHP client that is able to communicate with our Web API application. We wrote only some simple code, nothing complex. In the same way we can also use platforms like Java, Perl and Ruby.
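    As the conclusion notes, other platforms can consume the same service. A hedged Java sketch of the list and delete calls is shown below; it assumes the same endpoint URL and JSON shape as the PHP example, leaves JSON parsing to whatever library the project already uses, and the class name is made up for the example.

        import java.io.BufferedReader;
        import java.io.IOException;
        import java.io.InputStreamReader;
        import java.net.HttpURLConnection;
        import java.net.URL;
        import java.nio.charset.StandardCharsets;

        // Talks to the contacts API from the post: GET the contact list as JSON,
        // and issue a DELETE for a single contact by id.
        public class ContactsClient {
            private static final String BASE_URL = "http://vs2010dev:3613/api/contacts/";

            public static String listContacts() throws IOException {
                HttpURLConnection conn = (HttpURLConnection) new URL(BASE_URL).openConnection();
                conn.setRequestProperty("Accept", "application/json");
                try (BufferedReader in = new BufferedReader(
                        new InputStreamReader(conn.getInputStream(), StandardCharsets.UTF_8))) {
                    StringBuilder json = new StringBuilder();
                    String line;
                    while ((line = in.readLine()) != null) {
                        json.append(line);
                    }
                    return json.toString();
                }
            }

            public static int deleteContact(String id) throws IOException {
                HttpURLConnection conn = (HttpURLConnection) new URL(BASE_URL + id).openConnection();
                conn.setRequestMethod("DELETE");
                return conn.getResponseCode();   // e.g. 200 or 204 on success
            }

            public static void main(String[] args) throws IOException {
                System.out.println(listContacts());
            }
        }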

    Read the article

  • Getting Started With Tailoring Business Processes

    - by Richard Bingham
    In this article, and for the sake of simplicity, we will use the term “On-Premise” to mean a deployment where you have design-time development access to the instance, including administration of the technology components, the applications filesystem, and the database. In reality this might be a local development instance that is then supported by a team who can deploy your customizations to the restricted production instance equivalents.

    Tools Overview

    Firstly let’s look at the design-time tools within JDeveloper for customizing and extending the artifacts of a Business Process. In essence this falls into two buckets: the SOA Composite Editor for working with BPEL processes, and the BPM Studio.

    The SOA Composite Editor

    As a standard extension to JDeveloper, this graphical design tool should be familiar to anyone who has previously worked with Oracle SOA Server. With easy-to-use modeling capability, backed up by full XML source-view (read-only), it provides everything that is needed to implement the technical design. In simple terms, once deployed to the remote SOA Server the composite components (like Mediator) leverage the Event Delivery Network (EDN) for interaction with the application logic.

    If you are customizing an existing Fusion Applications BPEL process then be aware that it does support MDS-based customization layers just like Page Composer, where different customizations are used based on the run-time context, like for a specific Product or Business Unit. This also makes them safe from patching and upgrades, although only a single active version of the composite is available at run-time. This is defined by a field on the composite record, available in Enterprise Manager. Obviously if you wish to fire different activities and tasks based on the user context then you should include switches to fork the flows in your custom BPEL process.

    Figure 1 – A BPEL process in Composite Editor

    The following describes the simplified steps for making customizations to BPEL processes. This is the most common method of changing the business processes of Fusion Applications, as over 400 BPEL-based composite applications are provided out-of-the-box.

        1. Set up your local Fusion Applications JDeveloper environment. The SOA Composite Editor should be installed as part of the Fusion Applications extension. If there are problems you can also find it under the ‘Check for Updates’ help menu option.
        2. Since SOA Server is not part of the JDeveloper integrated WebLogic Server, set up a standalone WebLogic environment for deploying and testing. Obviously you might use a Fusion Applications development instance also.
        3. Package the existing standard Fusion Applications SOA Composite using Enterprise Manager and export it as a complete SOA Archive (SAR) file, resulting in a local .jar file. You may need to ask your system administrator for this.
        4. Import the exported SAR .jar file into JDeveloper using the File menu, under the option ‘SOA Archive into SOA Project’.
        5. In JDeveloper set the appropriate customization layer values, and then change from the default role to the Fusion Applications Customization Developer role.
        6. Make the customizations and save the application project.
        7. Finally redeploy the composite application, either to a direct Application Server connection, or as a fresh SAR (jar) file that can then be re-imported and deployed via Enterprise Manager.
    The Business Process Management (BPM) Suite

    In addition to the relatively low-level development environment associated with BPEL process creation, Oracle provides a suite of products that allow business process adjustments to be made without the need for some of the programming skills. The aim is to abstract much of the technical implementation and to provide Business Analysts with tools for immediately implementing organization changes. Obviously there are some limitations on what they can do; however, the BPM Suite functionality increases with each release and for the majority of cases the tool remains as applicable as its developer-orientated sister. At the current time business processes must be explicitly coded to support just one of these use-cases, either BPEL for developer use or BPM for business analyst use. That said, they both run on the same SOA Server in much the same way. The components bundled in each SOA Composite Application can be verified by inspection through Enterprise Manager.

    Figure 2 – A BPM Process in JDeveloper BPM Suite.

    BPM processes are written in a standard notation (BPMN) and the modeling tools are very similar to those for BPEL. The steps to deploy a custom BPM process are also essentially much the same, since the BPM process is bundled into a SOA Composite just like a BPEL process. As such the SOA Composite Editor actually has support for both artifacts and even allows use of them together, such as calling a BPM process as a partnerlink from a BPEL process. For more details see the references below.

    Business Analyst Tooling

    In addition to using JDeveloper extensions for BPM development, there are run-time tools that Business Analysts can use to make adjustments, so that the system can be tuned to match changes to the business operation without the high costs of an IT project. The first tool to consider is the BPM Composer, deployed with the middleware SOA Server and accessible online; for Fusion Applications it is under the Business Process icon on the homepage of the Application Composer.

    Figure 3 – Business Process Composer showing a CRM process flow.

    The key difference between this and using JDeveloper is that the BPM Composer has a Business Catalog prepopulated with features and functions that can be used, mostly through registered WebServices. This means no coding or complex interface development is required, simply drag-drop-configure. The items in the business catalog are seeded either by Oracle (as a BPM Template) or added to by your own custom development. You cannot create or generate catalog content from BPM Composer directly. As per the screenshot you can see the Business Catalog content in the BPM Project browser region.

    In addition, other online tools for use by Business Analysts include the BPM Worklist application for editing business rules and approval management configuration, plus the SOA Composer which focuses on non-approval business rules and domain value maps.

    At the current time there are only a handful of BPM processes shipped with Fusion Applications HCM and CRM, including on-boarding workers and processing customer registrations. This also means a limited number of associated BPM Templates provided out-of-the-box, and therefore a limited Business Catalog. That said, BPM-based extension is a powerful capability to leverage and will most likely develop going forwards, especially for use in SaaS deployments where full design-time JDeveloper access is not available.
    Further Reading

        - For BPEL – Fusion Applications Extensibility Guide – Section 12
        - For BPM – Fusion Applications Extensibility Guide – Section 7
        - The product-specific documentation and implementation guides for Fusion Applications
        - Fusion Middleware Developers Guide for SOA Suite
        - Modeling and Implementation Guide for Oracle Business Process Management
        - User’s Guide for Oracle Business Process Composer
        - Oracle University courses on BPM Suite and SOA Development

    Read the article

< Previous Page | 414 415 416 417 418 419 420 421 422 423 424 425  | Next Page >