Search Results

Search found 16971 results on 679 pages for 'blogs'.


  • Yet another ADF book - Oracle ADF Real World Developer’s Guide

    - by Chris Muir
    I'm happy to report that the number of published ADF books is expanding yet again, this time with Oracle's own Jobinesh Purushothaman publishing the Oracle ADF Real World Developer’s Guide. I can remember the dim dark days when there was just one Oracle ADF book besides the documentation, so today it's great to have what I think might be the 7th or 8th ADF book publicly available, not to forget all our other technical docs too. Jobinesh has even published some extra chapters online that will give you a good taste of what to expect. If you're interested in positive reviews, the ADF EMG already has its first happy customer. Now to see if I can get Oracle to expense me a copy.

    Read the article

  • Oracle Linux 6 Implementation Essentials Certification Exam Now Available

    - by Antoinette O'Sullivan
    Get proof of your Linux system administration skills by taking the Oracle Linux 6 Implementation Essentials Certification exam. This certification is available to all candidates. Oracle Partner Members earning this certification will be recognized as OPN Certified Specialists. The exam takes under 3 hours and asks between 120 and 150 questions on areas including:
    - Introduction to Oracle Linux
    - Installing Oracle Linux 6
    - Linux Boot Process
    - Oracle Linux System Configuration and Process Management
    - Oracle Linux Package Management
    - Ksplice Zero Downtime Updates
    - Automate Tasks and System Logging
    - User and Group Administration
    - Oracle Linux File Systems and Storage Administration
    - Network Administration
    - Oracle Linux System Monitoring and Troubleshooting
    Oracle Certifications are among the most sought-after badges of credibility for expertise in the Information Technology marketplace. See Benefits of Oracle Certification for more information. To prepare for this exam, you can take the Oracle Linux System Administration training.

    Read the article

  • Oracle Solaris 11 ZFS Lab for Openworld 2012

    - by user12626122
    Preface
    This is the content from the Oracle Openworld 2012 ZFS lab. It was well attended - the feedback was that it was a little short - that's probably because in writing it I became very time-conscious after the ASM/ACFS on Solaris extravaganza I ran last year, which was almost too long for mortal man to finish in the 1 hour session. Enjoy.

    Table of Contents
    Exercise Z.1: ZFS Pools
    Exercise Z.2: ZFS File Systems
    Exercise Z.3: ZFS Compression
    Exercise Z.4: ZFS Deduplication
    Exercise Z.5: ZFS Encryption
    Exercise Z.6: Solaris 11 Shadow Migration

    Introduction
    This set of exercises is designed to briefly demonstrate new features in the Solaris 11 ZFS file system: Deduplication, Encryption and Shadow Migration. Also included are the creation of zpools and zfs file systems - the basic building blocks of the technology - and Compression, which is the complement of Deduplication. The exercises are just introductions - you are referred to the ZFS Administration Manual for further information. From Solaris 11 onward the online manual pages consist of zpool(1M) and zfs(1M), with further feature-specific information in zfs_allow(1M), zfs_encrypt(1M) and zfs_share(1M). The lab is easily carried out in VirtualBox running Solaris 11 with 6 virtual 3 GB disks to play with.

    Exercise Z.1: ZFS Pools
    Task: You have several disks to use for your new file system. Create a new zpool and a file system within it.
    Lab: You will check the status of existing zpools, create your own pool and expand it.
    Your Solaris 11 installation already has a root ZFS pool. It contains the root file system. Check this:

        root@solaris:~# zpool list
        NAME    SIZE  ALLOC   FREE  CAP  DEDUP  HEALTH  ALTROOT
        rpool  15.9G  6.62G  9.25G  41%  1.00x  ONLINE  -

        root@solaris:~# zpool status
          pool: rpool
         state: ONLINE
          scan: none requested
        config:
                NAME        STATE     READ WRITE CKSUM
                rpool       ONLINE       0     0     0
                  c3t0d0s0  ONLINE       0     0     0
        errors: No known data errors

    Note the disk device the root pool is on - c3t0d0s0. Now you will create your own ZFS pool. First you will check what disks are available:

        root@solaris:~# echo | format
        Searching for disks...done
        AVAILABLE DISK SELECTIONS:
               0. c3t0d0 <ATA-VBOX HARDDISK-1.0 cyl 2085 alt 2 hd 255 sec 63>
                  /pci@0,0/pci8086,2829@d/disk@0,0
               1. c3t2d0 <ATA-VBOX HARDDISK-1.0 cyl 1534 alt 2 hd 128 sec 32>
                  /pci@0,0/pci8086,2829@d/disk@2,0
               2. c3t3d0 <ATA-VBOX HARDDISK-1.0 cyl 1534 alt 2 hd 128 sec 32>
                  /pci@0,0/pci8086,2829@d/disk@3,0
               3. c3t4d0 <ATA-VBOX HARDDISK-1.0 cyl 1534 alt 2 hd 128 sec 32>
                  /pci@0,0/pci8086,2829@d/disk@4,0
               4. c3t5d0 <ATA-VBOX HARDDISK-1.0 cyl 1534 alt 2 hd 128 sec 32>
                  /pci@0,0/pci8086,2829@d/disk@5,0
               5. c3t6d0 <ATA-VBOX HARDDISK-1.0 cyl 1534 alt 2 hd 128 sec 32>
                  /pci@0,0/pci8086,2829@d/disk@6,0
               6. c3t7d0 <ATA-VBOX HARDDISK-1.0 cyl 1534 alt 2 hd 128 sec 32>
                  /pci@0,0/pci8086,2829@d/disk@7,0
        Specify disk (enter its number): Specify disk (enter its number):

    The root disk is numbered 0. The others are free for use. Try creating a simple pool and observe the error message:

        root@solaris:~# zpool create mypool c3t2d0 c3t3d0
        'mypool' successfully created, but with no redundancy; failure of one device will cause loss of the pool

    So destroy that pool and create a mirrored pool instead:

        root@solaris:~# zpool destroy mypool
        root@solaris:~# zpool create mypool mirror c3t2d0 c3t3d0
        root@solaris:~# zpool status mypool
          pool: mypool
         state: ONLINE
          scan: none requested
        config:
                NAME        STATE     READ WRITE CKSUM
                mypool      ONLINE       0     0     0
                  mirror-0  ONLINE       0     0     0
                    c3t2d0  ONLINE       0     0     0
                    c3t3d0  ONLINE       0     0     0
        errors: No known data errors
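    The lab description above says you will also expand the pool, a step this excerpt doesn't show. Here is a minimal sketch of what that step could look like, attaching a second mirrored pair built from two disks the format output listed as free (assuming c3t6d0 and c3t7d0 are still unused at this point):

        root@solaris:~# zpool add mypool mirror c3t6d0 c3t7d0
        root@solaris:~# zpool status mypool   # should now show both mirror-0 and mirror-1 vdevs
        root@solaris:~# zpool list mypool     # the pool SIZE roughly doubles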
    Exercise Z.2: ZFS File Systems
    Task: You have to create file systems for later exercises.
    You can see that when a pool is created, a file system of the same name is created:

        root@solaris:~# zfs list
        NAME    USED   AVAIL  REFER  MOUNTPOINT
        mypool  86.5K  2.94G  31K    /mypool

    Create your filesystems and mountpoints as follows:

        root@solaris:~# zfs create -o mountpoint=/data1 mypool/mydata1

    The -o option sets the mount point and automatically creates the necessary directory.

        root@solaris:~# zfs list mypool/mydata1
        NAME            USED  AVAIL  REFER  MOUNTPOINT
        mypool/mydata1  31K   2.94G  31K    /data1

    Exercise Z.3: ZFS Compression
    Task: Try out different forms of compression available in ZFS.
    Lab: Create a 2nd filesystem with compression, fill both file systems with the same data, observe results.
    You can see from the zfs(1) manual page that there are several types of compression available to you, set with the property=value syntax:

        compression=on | off | lzjb | gzip | gzip-N | zle
            Controls the compression algorithm used for this dataset. The lzjb compression algorithm is optimized for performance while providing decent data compression. Setting compression to on uses the lzjb compression algorithm. The gzip compression algorithm uses the same compression as the gzip(1) command. You can specify the gzip level by using the value gzip-N where N is an integer from 1 (fastest) to 9 (best compression ratio). Currently, gzip is equivalent to gzip-6 (which is also the default for gzip(1)).

    Create a second filesystem with compression turned on. Note how you set and get your values separately:

        root@solaris:~# zfs create -o mountpoint=/data2 mypool/mydata2
        root@solaris:~# zfs set compression=gzip-9 mypool/mydata2
        root@solaris:~# zfs get compression mypool/mydata1
        NAME            PROPERTY     VALUE  SOURCE
        mypool/mydata1  compression  off    default
        root@solaris:~# zfs get compression mypool/mydata2
        NAME            PROPERTY     VALUE   SOURCE
        mypool/mydata2  compression  gzip-9  local

    Now you can copy the contents of /usr/lib into both your normal and compressing filesystem and observe the results. Don't forget the dot or period (".") in the find(1) command below:

        root@solaris:~# cd /usr/lib
        root@solaris:/usr/lib# find . -print | cpio -pdv /data1
        root@solaris:/usr/lib# find . -print | cpio -pdv /data2

    The copy into the compressing file system takes longer - as it has to perform the compression - but the results show the effect:

        root@solaris:/usr/lib# zfs list
        NAME            USED   AVAIL  REFER  MOUNTPOINT
        mypool          1.35G  1.59G  31K    /mypool
        mypool/mydata1  1.01G  1.59G  1.01G  /data1
        mypool/mydata2  341M   1.59G  341M   /data2

    Note that the available space in the pool is shared amongst the file systems. This behavior can be modified using quotas and reservations, which are not covered in this lab but are covered extensively in the ZFS Administrators Guide.
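    Quotas and reservations aren't part of this lab, but since they're mentioned above, here is a minimal sketch of the idea on the datasets just created; the values are arbitrary:

        root@solaris:~# zfs set quota=1g mypool/mydata1          # mydata1 can never consume more than 1 GB of the pool
        root@solaris:~# zfs set reservation=500m mypool/mydata2  # 500 MB of pool space is guaranteed to mydata2
        root@solaris:~# zfs get quota,reservation mypool/mydata1 mypool/mydata2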
    Exercise Z.4: ZFS Deduplication
    The deduplication property is used to remove redundant data from a ZFS file system. With the property enabled, duplicate data blocks are removed synchronously. The result is that only unique data is stored and common components are shared.
    Task: See how to implement deduplication and its effects.
    Lab: You will create a ZFS file system with deduplication turned on and see if it reduces the amount of physical storage needed when we again fill it with a copy of /usr/lib.

        root@solaris:/usr/lib# zfs destroy mypool/mydata2
        root@solaris:/usr/lib# zfs set dedup=on mypool/mydata1
        root@solaris:/usr/lib# rm -rf /data1/*
        root@solaris:/usr/lib# mkdir /data1/2nd-copy
        root@solaris:/usr/lib# zfs list
        NAME            USED   AVAIL  REFER  MOUNTPOINT
        mypool          1.02M  2.94G  31K    /mypool
        mypool/mydata1  43K    2.94G  43K    /data1
        root@solaris:/usr/lib# find . -print | cpio -pd /data1
        2142768 blocks
        root@solaris:/usr/lib# zfs list
        NAME            USED   AVAIL  REFER  MOUNTPOINT
        mypool          1.02G  1.99G  31K    /mypool
        mypool/mydata1  1.01G  1.99G  1.01G  /data1
        root@solaris:/usr/lib# find . -print | cpio -pd /data1/2nd-copy
        2142768 blocks
        root@solaris:/usr/lib# zfs list
        NAME            USED   AVAIL  REFER  MOUNTPOINT
        mypool          1.99G  1.96G  31K    /mypool
        mypool/mydata1  1.98G  1.96G  1.98G  /data1

    You could go on creating copies for quite a while... but you get the idea. Note that deduplication and compression can be combined: deduplication operates on the blocks as they are stored, after any compression. Deduplication works across file systems in a pool, and there is a zpool-wide property dedupratio:

        root@solaris:/usr/lib# zpool get dedupratio mypool
        NAME    PROPERTY    VALUE  SOURCE
        mypool  dedupratio  4.30x  -

    Deduplication can also be checked using "zpool list":

        root@solaris:/usr/lib# zpool list
        NAME    SIZE   ALLOC  FREE   CAP  DEDUP  HEALTH  ALTROOT
        mypool  2.98G  1001M  2.01G  32%  4.30x  ONLINE  -
        rpool   15.9G  6.66G  9.21G  41%  1.00x  ONLINE  -

    Before moving on to the next topic, destroy that dataset and free up some space:

        root@solaris:~# zfs destroy mypool/mydata1
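    As an aside that isn't part of the original lab: if you want an estimate of how much deduplication would save on a pool before enabling the property, zdb(1M) can simulate it against the existing data. The simulation is read-only, but it can take a while on a large pool:

        root@solaris:~# zdb -S mypool   # prints a simulated dedup table histogram and an estimated dedup ratio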
    Exercise Z.5: ZFS Encryption
    Task: Encrypt sensitive data.
    Lab: Explore basic ZFS encryption.
    This lab only covers the basics of ZFS Encryption. In particular, it does not cover various aspects of key management. Please see the ZFS Administration Manual and the zfs_encrypt(1M) manual page for more detail on this functionality.

        root@solaris:~# zfs create -o encryption=on mypool/data2
        Enter passphrase for 'mypool/data2': ********
        Enter again: ********
        root@solaris:~#

    Creation of a descendent dataset shows that encryption is inherited from the parent:

        root@solaris:~# zfs create mypool/data2/data3
        root@solaris:~# zfs get -r encryption,keysource,keystatus,checksum mypool/data2
        NAME                PROPERTY    VALUE              SOURCE
        mypool/data2        encryption  on                 local
        mypool/data2        keysource   passphrase,prompt  local
        mypool/data2        keystatus   available          -
        mypool/data2        checksum    sha256-mac         local
        mypool/data2/data3  encryption  on                 inherited from mypool/data2
        mypool/data2/data3  keysource   passphrase,prompt  inherited from mypool/data2
        mypool/data2/data3  keystatus   available          -
        mypool/data2/data3  checksum    sha256-mac         inherited from mypool/data2

    You will find the online manual page zfs_encrypt(1M) contains examples. In particular, if time permits during this lab session, you may wish to explore the changing of a key using "zfs key -c mypool/data2".

    Exercise Z.6: Shadow Migration
    Shadow Migration allows you to migrate data from an old file system to a new file system while simultaneously allowing access and modification of the new file system during the process. You can use Shadow Migration to migrate a local or remote UFS or ZFS file system to a local file system.
    Task: You wish to migrate data from one file system (UFS, ZFS, VxFS) to ZFS while maintaining access to it.
    Lab: Create the infrastructure for shadow migration and transfer one file system into another.
    First create the file system you want to migrate:

        root@solaris:~# zpool create oldstuff c3t4d0
        root@solaris:~# zfs create oldstuff/forgotten

    Then populate it with some files:

        root@solaris:~# cd /var/adm
        root@solaris:/var/adm# find . -print | cpio -pdv /oldstuff/forgotten

    You need the shadow-migration package installed:

        root@solaris:~# pkg install shadow-migration
        Packages to install:            1
        Create boot environment:        No
        Create backup boot environment: No
        Services to change:             1

        DOWNLOAD    PKGS  FILES  XFER (MB)
        Completed   1/1   14/14  0.2/0.2

        PHASE          ACTIONS
        Install Phase  39/39

        PHASE                       ITEMS
        Package State Update Phase  1/1
        Image State Update Phase    2/2

    You then enable the shadowd service:

        root@solaris:~# svcadm enable shadowd
        root@solaris:~# svcs shadowd
        STATE   STIME    FMRI
        online  7:16:09  svc:/system/filesystem/shadowd:default

    Set the filesystem to be migrated to read-only:

        root@solaris:~# zfs set readonly=on oldstuff/forgotten

    Create a new zfs file system with the shadow property set to the file system to be migrated:

        root@solaris:~# zfs create -o shadow=file:///oldstuff/forgotten mypool/remembered

    Use the shadowstat(1M) command to see the progress of the migration:

        root@solaris:~# shadowstat
                                    EST
                           BYTES    BYTES           ELAPSED
        DATASET            XFRD     LEFT    ERRORS  TIME
        mypool/remembered  92.5M    -       -       00:00:59
        mypool/remembered  99.1M    302M    -       00:01:09
        mypool/remembered  109M     260M    -       00:01:19
        mypool/remembered  133M     304M    -       00:01:29
        mypool/remembered  149M     339M    -       00:01:39
        mypool/remembered  156M     86.4M   -       00:01:49
        mypool/remembered  156M     8E      29      (completed)

    Note that if you had created /mypool/remembered as encrypted, this would be the preferred method of encrypting existing data. Similarly for compressing or deduplicating existing data. The procedure for migrating a file system over NFS is similar - see the ZFS Administration manual.

    That concludes this lab session.

    Read the article

  • Silverlight Cream for March 13, 2011 -- #1059

    - by Dave Campbell
    In this Issue: András Velvárt, WindowsPhoneGeek(-2-), Jesse Liberty(-2-), Victor Gaudioso, Kunal Chowdhury, Jeremy Likness, Michael Crump, and Dhananjay Kumar.
    Above the Fold:
    Silverlight: "Application Library Caching in Silverlight 4" - Kunal Chowdhury
    WP7: "Handling WP7 orientation changes via Visual States" - András Velvárt
    Shoutouts: Joe McBride gave a MEF Head User Group presentation and has posted How to Become a MEF Head – Slides & Code.
    From SilverlightCream.com:
    Handling WP7 orientation changes via Visual States - András Velvárt has an Expression Blend/WP7 post up discussing WP7 orientation changes and handling them via Visual States... see an example from his SurfCube app, and a behavior to handle the control... with source.
    WP7 PerformanceProgressBar in depth - WindowsPhoneGeek has a post up discussing the WP7 PerformanceProgressBar from the Windows Phone Toolkit. This is an update on the Toolkit based on the Feb 2011 release. Great explanation of the PerformanceProgressBar, external links, and sample code.
    Getting data out of WP7 WMAppManifest is easy with Coding4Fun PhoneHelper - Next, WindowsPhoneGeek has a post up about the PhoneHelper in the Coding4Fun Toolkit, and using it to get data out of the WMAppManifest easily. Good discussion, links, and code as always.
    Silverlight Unit Test For Phone - In Jesse Liberty's "Windows Phone From Scratch" number 41, he's discussing unit testing for WP7... he gives some good external links and some good examples.
    Yet Another Podcast #27 – Paul Betts - Jesse Liberty's next post is his "Yet Another Podcast" number 27, an interview with Paul Betts, the creator of Reactive UI... check out the podcast and also the good links listed.
    New Silverlight Video Tutorial: How to use the Fluid Move Behavior - Victor Gaudioso has a new video tutorial up on using the Fluid Move Behavior... making a selected item animate from a ListBox to a Master/Details Grid.
    Application Library Caching in Silverlight 4 - Kunal Chowdhury takes a break from SilverlightZone long enough to write a post about Application Library Caching... for example, on-demand loading of a 3rd-party XAP.
    Jounce Part 13: Navigation Parameters - Jeremy Likness has the 13th post in his series on understanding his Jounce MVVM framework up. This episode surrounds a new release and what it contains, the primary focus being navigation parameters... that is, you can raise a navigation event with a payload.
    Profiling Silverlight Applications after installing Visual Studio 2010 Service Pack 1 - Michael Crump digs into the performance wizard for Silverlight that we get with VS2010 SP1. He shows how to get and read a profile... a great intro to a new tool.
    Binding XML File to Data Grid in Silverlight - Dhananjay Kumar demonstrates reading an XML file using LINQ to XML and binding the result to a Silverlight DataGrid.
    Stay in the 'Light!
    Twitter SilverlightNews | Twitter WynApse | WynApse.com | Tagged Posts | SilverlightCream
    Join me @ SilverlightCream | Phoenix Silverlight User Group
    Technorati Tags: Silverlight, Silverlight 3, Silverlight 4, Windows Phone, MIX10

    Read the article

  • Visual Studio 2010 Service Pack 1, now available for download

    - by Harish Ranganathan
    Visual Studio 2010 Service Pack 1 (SP1) has now been available for general download for almost a week. The Beta of SP1 came a couple of months back; it delivered a lot of performance enhancements, added support for HTML5 tags, and a few other things related to web development. Now, the final release of SP1 is available. The good part is that if you had installed the SP1 Beta, you don't have to remove the Beta and start all over again. You can apply the final release on top of the Beta and it works like a charm. So, in simplified terms, what is new in Visual Studio 2010 SP1? Before I start listing it down, I was checking whether there was an MSDN article available on this and found http://msdn.microsoft.com/en-us/library/gg442059.aspx  While it reads (Beta), the same holds good for the final release as well. Unlike VS 2008 SP1 and .NET 3.5 SP1 (which came together), this release doesn't add any new project templates/item templates. However, there are a lot of enhancements related to Web Deployment, Debugging, and Unit Testing for .NET 3.5 applications. So, how does one find out if you are running the correct version of the SP1 final release? While the SP1 Beta (Help – About Visual Studio) reads Microsoft Visual Studio 2010 Version 10.0.3118.1 SP1 Rel, once you install the SP1 RTM release, it should read as below. The download link for SP1 Beta is here. Cheers!!!

    Read the article

  • ODI 11g - Scripting a Reverse Engineer

    - by David Allan
    A common question relates to how to script the reverse engineer using the ODI SDK. This follows on from some of my posts on scripting in general and accelerated model and topology setup. Check out this viewlet here to see how to define a reverse-engineering process using ODI's package. Using the ODI SDK, you can script this up using the OdiPackage and StepOdiCommand classes as follows:

        OdiPackage pkg = new OdiPackage(folder, "Pkg_Rev" + modName);

        StepOdiCommand step1 = new StepOdiCommand(pkg, "step1_cmd_reset");
        step1.setCommandExpression(new Expression(
            "OdiReverseResetTable \"-MODEL=" + mod.getModelId() + "\"",
            null, Expression.SqlGroupType.NONE));

        StepOdiCommand step2 = new StepOdiCommand(pkg, "step2_cmd_reset");
        step2.setCommandExpression(new Expression(
            "OdiReverseGetMetaData \"-MODEL=" + mod.getModelId() + "\"",
            null, Expression.SqlGroupType.NONE));

        StepOdiCommand step3 = new StepOdiCommand(pkg, "step3_cmd_reset");
        step3.setCommandExpression(new Expression(
            "OdiReverseSetMetaData \"-MODEL=" + mod.getModelId() + "\"",
            null, Expression.SqlGroupType.NONE));

        pkg.setFirstStep(step1);
        step1.setNextStepAfterSuccess(step2);
        step2.setNextStepAfterSuccess(step3);

    The biggest leap of faith for users is getting to know which SDK classes have to be used to build the objects in the design; using StepOdiCommand isn't necessarily obvious, but once you see it in action it is very simple to use. The above snippet uses an OdiModel variable named mod; it's a snippet I added to the accelerated model creation script in the post linked above.

    Read the article

  • Keyboard Shortcuts in Oracle SQL Developer

    - by thatjeffsmith
    The CTRL key, which stands for ConTRoL... aw, the good ole days.
    What keyboard shortcuts should EVERY Oracle SQL Developer user know? How do you find new shortcuts to master, and how do you change them to match ones you've already learned in other tools? These are the driving questions for today's post. While some of us may be keyboard ninjas, and others are more driven to use the mouse - everyone has probably picked up a few strategic keyboard shortcuts over the years. For example, I've personally JUST memorized the Cmd-Shift-4 'trick' in Mac OS X. And of course we all know what F1 does, right? Right?!? Here are a few more keyboard shortcuts to commit to memory.
    My Favorite SQL Developer Shortcuts
    - ctrl-enter: executes the current statement(s)
    - F5: executes the current code as a script (think SQL*Plus)
    - ctrl-space: invokes code insight on demand (Code Editor – Completion Insight – Enable Completion Auto-Popup; keyword being Auto)
    - ctrl-Up/Dn: replaces worksheet with previous/next SQL from SQL History
    - ctrl-shift-Up/Dn: same as above but appends instead of replaces
    - shift-F4: opens a Describe window for the current object at cursor
    - ctrl-F7: format SQL
    - ctrl-/: toggles line commenting
    - ctrl-e: incremental search
    Configuring Keyboard Shortcuts in SQL Developer
    Tools > Preferences > Shortcut Keys. Search by command name OR the keystroke itself.
    Some tips...
    - Sort by category. Pay special attention to the 'Code Editor' and 'Other' categories.
    - Mind the conflicts when you change the defaults.
    - Be nice - share! You can save your new mappings with your co-workers using the Export and Import buttons. Click on 'More Actions' to expose the Import and Export buttons.
    When I get 'bored' or if I think I might be missing something, I peruse the Code Editor and Other categories, again! I've picked up quite a few cool editor tricks here. Then I blog about them, like they're 'magic.' #EvilLaugh But the main tip is this - don't let your previously memorized keyboard shortcuts SHORTCUT your usage of SQL Developer. If your fingers have already memorized some keystrokes, just re-program SQL Developer to match! What's your favorite shortcut? I'll use the most popular shortcut mentioned in the comments to round out my Top 10 list above!

    Read the article

  • Backpacks and Booth Paint: TechEd 2012

    - by The Un-T Guy
    Arriving in the parking lot of the Orange County Convention Center, I immediately knew I was in the right place. As far as the eye could see, the acres of asphalt were awash in backpacks, quirky (to be kind) outfits, and bad haircuts. This was the place. This was Microsoft Mecca v2012 for geeks and nerds, the Central Florida event of the year, a gathering of high-tech professionals whose skills I both greatly respect and, frankly, fear a little. I was wholly and completely out of my element, a dork in a vast sea of geek jumbo. It was like wearing Dockers and a golf shirt into a RenFaire, but one with really crappy costumes and no turkey legs... save those attached to some of the attendees.
    Of course the corporate whores... errrr, vendors were in place, ready to parlay the convention's fre-nerd-ic energy into millions of dollars by convincing the big-brained and under-sexed in the crowd (i.e., virtually all of them... present company excluded, of course) that their product or service was the only thing standing between them and professional success, industry fame, and clear skin. "With KramTech 2012," they seemed to scream, "you will be THE ROCK STAR of your company's IT department!"
    As car shows and tattoo parlors learned long ago, tech companies seem to believe that the best way to attract the attention of this crowd is through the hint of the promise of sex. They recruit and deploy an army of "sales reps" whose primary qualifications appear to be long hair, short skirts, high heels, and a vagina. Unlike their distant cousins in the car and body art industries, however, this sub-species of booth paint (semi-gloss decoration that adds nothing to the substance of the product) seems torn between committing to being all-out sex objects and recognition that they are in the presence of intelligent, discerning people. People who are smart enough to know exactly what these vendors are doing.
    Also unlike their distant car show and tattoo shop cousins, these young women (what... are there no gay tech professionals who could use some eye candy?) seem to realize that while IT remains a male-dominated field, there are ever-increasing numbers of intelligent, capable, strong professional women – women who've battled to make it in this field through hard work and work performance rather than a hard body and performing after work.
    This is not to say that all of the young female sales reps are there only because of their physical attributes. Many are competent, intelligent, and driven -- not to mention attractive. They're working hard on the front lines of delivering the next generation of technology. The distinction is pretty clear, however, between these young professionals and the booth paint. The former enthusiastically deliver credible information about the products they're hawking. The latter are positioned in the aisles, uncomfortably avoiding eye contact as they struggle to operate the badge readers.
    Surprisingly, not all of the women in attendance seemed to object to the objectification of their younger sisters. One IT professional woman who came of age in the industry (mostly in IT marketing) said, "I have no problem with it. I was a 'booth babe' for years and it doesn't bother me at all." Others, however, weren't quite so gracious. One woman I spoke with, an IT manager from Cheyenne, Wyoming, said it was demeaning and frankly, as more and more women grow into IT management positions, not a great marketing idea.
    "Using these young women is, to me, no different than vendors giving out t-shirts to attract attention. It's sad because it's still hard for a woman to be respected in the IT field and this just perpetuates the outdated notion that IT is a male-dominated field." She went on to say that decisions by vendors to employ these young women in this "inappropriate way" could impact her purchasing decisions. "I might be swayed toward a vendor who has women on staff who are intelligent and dynamic rather than the vendors who use the 'decoration' girls."
    So in many ways, the IT industry is no different than most other industries as it struggles to maximize performance by finding and developing talent – all of the talent, not just the 50% with a penis. Women in IT, like their brethren, struggle to find their niche in the field, to grow professionally, and to reach for the brass ring, struggling to overcome obstacles as they climb the mountain of professional success in a never-ending cycle of economic uncertainty. But as (generally) well-educated and highly-trained professionals, they are probably better positioned than those in many other industries. Besides, they've got one other advantage over their non-IT counterparts as they attempt their ascent to the summit: they've already got the backpacks.

    Read the article

  • Changing the BizTalk message output file name

    - by Bill Osuch
    By default, BizTalk creates the filename of a message dropped to a send port as %MessageID%, which is the unique identifier (GUID) of the message. What if you want to create your own filename? To start, create a simple schema, and a basic orchestration that will receive the message and send it right back out, like this: If you deploy this and wire up the ports, you can drop an xml file into your receive port and have it come out at your send port named something like {7A63CAF8-317B-49D5-871F-9FD57910C3A0}.xml. Now we'll create a new message with a custom filename. First, create a new orchestration variable called NewFileName, of the type System.String. Next, create a second message using the same schema as the message you're receiving in the Receive shape. Now, drag a Construct Message shape onto the orchestration. In the shape's properties, set Messages Constructed to be the new message you just created. Double-click the Message Assignment shape (inside the Construct shape...) and paste in the following code:

        Message_2 = Message_1;

        NewFileName = Message_1(FILE.ReceivedFileName);
        NewFileName = NewFileName.Replace(".xml", "_");
        NewFileName = NewFileName + "output_" + System.DateTime.Now.Year.ToString() + "-" + System.DateTime.Now.Month.ToString();

        Message_2(FILE.ReceivedFileName) = NewFileName;

    Here we make a copy of the received message, get its original file name (ReceivedFileName), replace its extension with an underscore, and date-stamp it. Finally, add a Send shape and a Port to the surface, and configure them to send the message you just created. You should wind up with an orchestration like this: Deploy it, and create a new send port. It should be just about identical to the first send port, except this time the file name will be "%SourceFileName%.xml" (without the quotes, of course). Fire up the application, drop in a test file, and you should now get both the xml file named with a GUID, and a second file named something along the lines of "MySchemaTestFile_output_2011-6.xml".

    Read the article

  • Repeat row headers after Page Break

    - by klaus.fabian
    The lead developer of the FO engine sent me, by chance, an email about a REALLY nice feature I did not know about. Did you ever encounter a long table with merged cells, where the merged cell went on to the next page? While column headers are by default repeated on the next page, row headers are not. Tables with a group-left column and pivot tables are prime examples of where this problem occurs. I have seen reports where merged cells could go over multiple pages and you would need to go back to find the row header on previous pages. The BI Publisher RTF templates have a special tag you can add to a merged cell to repeat its contents after each page break. You just need to add the following (wordy) tag to the next merged table cell: true
    Example:
    2nd page of report before adding the tag
    2nd page of report after adding the tag
    Thought you might want to know. Klaus

    Read the article

  • JavaOne Latin America Underway

    - by Tori Wieldt
    JavaOne Latin America started officially today, but lots of networking has already happened. Last night some JUG leaders, Java Champions, and members of the Oracle Java development and marketing teams had dinner together. The conversation ranged from the new direction of JavaFX to how to improve JUG attendance. Maricio Leal shared the idea some Brazilian JUGs have of putting Java evangelists and experts on a boat and having them visit JUGs in cities along the Amazon river. We discussed ideas, and shared dessert pizza. It was the perfect community get-together! If you see Brazilian Java Man Bruno Souza, ask him what he is bringing to the party.
    Today at JavaOne Latin America, all the sessions were full and developers were spilling into the hallways. Session content was selected with the help of 14 Java thought leaders from Latin America. JavaOne Program Committee Chair Sharat Chander said, "I'm thrilled that at this JavaOne over half of the content is coming from the community." Between sessions, developers take advantage of the Oracle Technology Network lounge to grab a snack and use their laptops.
    OTN Lounge
    It promises to be a great JavaOne.

    Read the article

  • Migrating an LDOM from a T4 to a T5

    - by Owen Allen
    I got a question about LDoms: "Is there any restriction against migrating LDoms between the T4 and T5 platforms?" The only restriction is that, at present, you can't do a live migration. However, with Ops Center 12.1.4, you can put T4s and T5s together in a Server Pool and either manually migrate the LDoms to a new host or configure them for automated cold-migration failover. Take a look at the Server Pool and Oracle VM Server for SPARC chapters for more information.

    Read the article

  • Speaking at Dog Food Conference 2013

    - by Brian T. Jackett
    Originally posted on: http://geekswithblogs.net/bjackett/archive/2013/10/22/speaking-at-dog-food-conference-2013.aspx
    It has been a couple of years since I last attended / spoke at Dog Food Conference, but on Nov 21-22, 2013 I'll be speaking at Dog Food Conference 2013 here in Columbus, OH. For those of you confused by the name of the conference (no, it's not about dog food), read up on the concept of dogfooding. This conference has a history of great sessions from local and regional speakers and I look forward to being a part of it once again. Registration is now open (registration link) and is expected to sell out quickly. Reserve your spot today.
    Title: The Evolution of Social in SharePoint
    Audience and Level: IT Pro / Architect, Intermediate
    Abstract: Activities, newsfeed, community sites, following... these are just some of the big changes introduced to the social experience in SharePoint 2013. This class will discuss the evolution of the social components since SharePoint 2010; the architecture (distributed cache, microfeed, etc.) that supports the social experience; Yammer integration; and proper planning considerations when deploying social capabilities (personal sites, SkyDrive Pro and distributed cache). This session will include demos of the social newsfeed, community sites, and mentions. Attendees should have an intermediate knowledge of SharePoint 2010 or 2013 administration.
    -Frog Out

    Read the article

  • Silverlight Cream for February 05, 2011 -- #1041

    - by Dave Campbell
    In this Issue: Peter Kuhn, Mike Ormond(-2-, -3-), WindowsPhoneGeek, Daniel N. Egan, Phil Middlemiss(-2-), Max Paulousky, and Michael Washington.
    Above the Fold:
    Silverlight: "Designing for Browser-Zoom: Part 2" - Phil Middlemiss
    WP7: "Talking about Converters in WP7 | Coding4fun toolkit converters in depth" - WindowsPhoneGeek
    LightSwitch: "LightSwitch: Can We Handle The Truth?" - Michael Washington
    Shoutouts: András Velvárt has a video up of some awesome changes he has planned for SurfCube, check it out: SurfCube V2 - 3D Web Browser for Windows Phone 7, now with tabs!
    From SilverlightCream.com:
    Silverlight for keyboard junkies - Peter Kuhn has a post up talking about the issues surrounding trying to use the tab key to navigate between controls... and follows it up with a behavior that resolves it.
    Windows Phone 7 Content On Demand - Mike Ormond has a batch of WP7 videos up... the first is "Windows Phone 7: A Different Kind of Phone" with Andrej Radinger.
    Windows Phone 7 Content on Demand Pt 2 - Mike Ormond's 2nd WP7 video is "Understanding the Windows Phone 7 Development Tools and Getting Started" with Maarten Struys.
    Windows Phone 7 Content on Demand Pt 3 - Mike Ormond's 3rd WP7 Content on Demand is "Games Programming on Windows Phone 7 with Silverlight and XNA" with Rob Miles.
    Talking about Converters in WP7 | Coding4fun toolkit converters in depth - WindowsPhoneGeek is discussing value converters in his latest post... value converters for WP7, the ones in the Coding4Fun toolkit to be exact... everything you wanted to know about them but didn't know to ask :)
    WP7 Developer Tools – Jan Update - Daniel N. Egan has information up about the new WP7 Developer Tools release.
    Designing for Browser-Zoom: Part 1 - Phil Middlemiss has both parts of a series on browser zoom up... this first part covers the zoom and the different pieces involved.
    Designing for Browser-Zoom: Part 2 - Phil Middlemiss's part 2 shows us some design considerations and visual states, including an attached behavior you can use in Blend to respond to the zoom event.
    Windows Phone Copy-Paste: How It Looks and Works - Max Paulousky has the first post I've seen on WP7 copy/paste up... of course it's still in the emulator, but hey... that's better than nothing, right?
    LightSwitch: Can We Handle The Truth? - Have you been playing with LightSwitch? Well... Michael Washington has, and it's got his interest up far enough that he's waving the flags trying to attract everyone else over there as well... see if you agree.
    Stay in the 'Light!
    Twitter SilverlightNews | Twitter WynApse | WynApse.com | Tagged Posts | SilverlightCream
    Join me @ SilverlightCream | Phoenix Silverlight User Group
    Technorati Tags: Silverlight, Silverlight 3, Silverlight 4, Windows Phone, MIX10

    Read the article

  • DBCC CHECKDB (BatmanDb, REPAIR_ALLOW_DATA_LOSS) – Are you Feeling Lucky?

    - by David Totzke
    I'm currently working for a client on a PowerBuilder to WPF migration. It's one of those "I could tell you, but I'd have to kill you" kinds of clients, and the quick-lime pits are currently occupied by the EMC tech... but I've said too much already.
    At approximately 3 or 4 PM that day, users of the Batman[1] application here in Gotham[1] started to experience problems accessing the application. Batman[2] is a document management system here that also integrates with the ERP system. Very little goes on here that doesn't involve Batman in some way. The errors being received seemed to point to network issues (TCP protocol error, connection forcibly closed by the remote host, etc...) but the real issue was much more insidious.
    Connecting to the database via SSMS and performing selects on certain tables underlying the application areas that were having problems started to reveal the issue. You couldn't do a SELECT * FROM MyTable without it bombing and giving the same error noted above. A run of DBCC CHECKDB revealed 14 tables with corruption. One of the tables with issues was the Document table. Pretty central to a "document management" system.
    Information was obtained from IT that a single drive in the SAN went bad in the night. A new drive was in place and was working fine. The partition that held the Batman database is configured for RAID Level 5, so a single drive failure shouldn't have caused any trouble, and yet the database is corrupted. They do hourly incremental backups here, so the first thing done was to try a restore. A restore of the most recent backup failed, so they worked backwards until they hit a good point. This successful restore was for a backup at 3 AM – a full day behind. This time also roughly corresponds with the time the SAN started to report the drive failure. The plot thickens...
    I got my hands on the output from DBCC CHECKDB and noticed a pattern. What's sad is that nobody who should have noticed the pattern in the DBCC output actually did. There was a rush to do things to try and recover the data before anybody really understood what was wrong with it in the first place. Cooler heads must prevail in these circumstances; some investigation should be done and a plan of action laid out, or you could end up making things worse[3]. DBCC CHECKDB also told us that:
        repair_allow_data_loss is the minimum repair level for the errors found by DBCC CHECKDB
    Yikes. That means that the database is so messed up that you're definitely going to lose some stuff when you repair it to get it back to a consistent state. All the more reason to do a little more investigation into the problem. Rescuing this database is preferable to having to export all of the data possible from this database into a new one. This is a fifteen-year-old application with about seven hundred tables. There are TRIGGERS everywhere, not to mention the referential integrity constraints to deal with. Only fourteen of the tables have an issue. We have a good backup that is missing the last 24 hours of business, which means we could have a "do-over" of yesterday, but that's not a very palatable option either.
    All of the affected tables had TEXT columns and all of the errors were about LOB data types and orphaned off-row data, which basically means TEXT, IMAGE or NTEXT columns. If we did a SELECT on an affected table and excluded those columns, we got all of the rows. We exported that data into a separate database. Things are looking up.
    Working on a copy of the production database, we then ran DBCC CHECKDB with REPAIR_ALLOW_DATA_LOSS and that "fixed" everything up. The allow-data-loss option will delete the bad rows. This isn't too horrible, as we have all of those rows minus the text fields from our earlier export. Now I could LEFT JOIN to the exported data to find the missing rows and INSERT them minus the TEXT column data. We had the restored data from the good 3 AM backup that we could now JOIN to and, with fingers crossed, recover the missing TEXT column information.
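    To make the recovery join described above concrete, here is a rough sketch of the kind of statement involved, driven through sqlcmd. The server, database, table, and column names are hypothetical stand-ins - the client's real schema obviously isn't shown here - and the actual work also has to respect the triggers and referential integrity constraints mentioned earlier:

        # re-insert the rows the repair deleted, pulling the TEXT column from the good 3 AM restore
        sqlcmd -S myserver -d RepairedCopy -Q "
        INSERT INTO dbo.Document (DocId, DocText)
        SELECT e.DocId, b.DocText
        FROM ExportedNoText.dbo.Document AS e
        JOIN Restored3AM.dbo.Document AS b ON b.DocId = e.DocId
        LEFT JOIN dbo.Document AS d ON d.DocId = e.DocId
        WHERE d.DocId IS NULL;"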
    We got lucky in that all of the affected rows were old, and in the end we didn't lose anything. :O All of the row counts along the way worked out and it looks like we dodged a major bullet here.
    We've heard back from EMC and it turns out the SAN firmware that they were running here is apparently buggy. This thing is only a couple of months old. Grrr... They dispatched a technician that night to update it. That explains why RAID didn't save us.
    All-in-all, this could have been a lot worse. Given the root cause here, they basically won the lottery in not losing anything.
    Here are a few links to some helpful posts on the SQL Server Engine blog. I love the title of the first one:
    Which part of 'REPAIR_ALLOW_DATA_LOSS' isn't clear?
    CHECKDB (Part 8): Can repair fix everything? (in fact, read the whole series)
    Ta da! Emergency mode repair (we didn't have to resort to this one, thank goodness)
    Dave
    Just because I can...
    [1] Names have been changed to protect the guilty.
    [2] I'm Batman.
    [3] And if I'm the coolest head in the room, you've got even bigger problems...

    Read the article

  • Demantra USA Based Companies and SOX Compliance

    - by user702295
    A USA-based company is assessing Demantra Trade Promotion Management (TPM) capability. It appears that SOX is necessary in their case due to the nature of what TPM does and the necessity for auditability. Do we have any detail on SOX compliance for Demantra?
    Answer
    SOX compliance with regards to IT:
    1. Requires auditing of data changes: done by whom, what, and when.
       a. Audit trail profiles can be set up for key financial series, and you can view them in audit trail reports.
       b. One functionality we do not have, which typically is asked for, is user login history. We have only active sessions; history is not available.
    2. Segregation of duties.
       a. With respect to TPM, you could have the deduction and financial analyst for settlement be different from the promotion creator, promotion approver, or sales team.
       b. The budget approver for funds can be different from the funds consumer.
       c. The promotion creator can be different from the promotion approver.
       d. For a US customer you may have to write some custom scripts to capture promotion status changes and produce an external report as part of compliance.
    One additional requirement is transparency of forward commitments entered into with retailers/distributors for trade spending and promotions. This is outside of Demantra - see Consumer Goods Trade Funds Analytics.

    Read the article

  • OWSM Policy Repository in JDeveloper - Tips & Tricks - 11g

    - by Prakash Yamuna
    In this blog post I discussed the OWSM Policy Repository that is embedded in JDeveloper. However, sometimes people may run into issues with the embedded repository. Here is a screen snapshot that shows the error you may run into (click on the image for a larger image): If you run into "java.lang.IllegalArgumentException: WSM-04694 : An invalid directory was provided to connect to a file-base MDS repository.", this is caused by spaces in the folder name. Here is a quick way to work around this issue: run "Jdeveloper.exe - su". Hope people find this useful!

    Read the article

  • Oracle WebCenter: Extending Oracle Applications & Oracle Fusion Applications

    - by kellsey.ruppel(at)oracle.com
    We've talked in previous weeks about how the key goals of the new release of WebCenter are providing a Modern User Experience, unparalleled Application Integration, converging all the best of the existing portal platforms into WebCenter, and delivering a Common User Experience Architecture. We've provided an overview of Oracle WebCenter and discussed some of the other key goals in previous weeks, and this week we'll focus on how the new release of Oracle WebCenter extends Oracle Applications and Fusion Applications.
    When we talk about the new release of Oracle WebCenter, we really emphasize to customers that they can leverage their existing investments and benefit from WebCenter's Complete, Open and Integrated platform. To summarize what we mean here, Oracle WebCenter is:
    COMPLETE - A comprehensive platform for Portals/Websites and Composite Applications with integrated Social/Collaboration services and Content Management infrastructure.
    OPEN - Standards support improves reuse of existing resources and extends the value of existing systems.
    INTEGRATED - Implicit integration with Oracle Applications, Oracle Fusion Applications and other enterprise applications.
    With all the existing enterprise applications in Oracle's application portfolio, the new release of WebCenter includes a set of pre-built catalogs that customers can use directly to get at all the portlet resources certified and available from Oracle. It provides customers with a ready-to-use view of their application resources. And since WebCenter provides seamless support for building these portlets/components in a professional IDE like JDeveloper or from within a browser, developers and business analysts can quickly assemble the information they require for their existing application investment. In addition, we've taken all the user flows and patterns that we've learned in building Fusion Applications and focused on making it dramatically easier to use tools to create reusable application UI components. In this way, one team in the organization using an application can share their components with other teams. And more importantly, the new team can make changes to the component without breaking the original component. When tied to enterprise applications, this capability is extremely powerful. This is what Oracle means when they talk about Enterprise Mashups. And finally, we've provided an innovative way to go well beyond traditional "on the glass" integration by enabling direct integration with business transactions in existing applications using activity streams.
    This delivers aggregated, "on time" information to business users based on what's happening in the enterprise that is relevant to their particular job function. Most importantly, it ties into the personalization interactions discussed earlier so that it can help target information to you directly based on past interactions. Application integration is key to making businesses function more efficiently with these new Enterprise 2.0 technologies.
    Keep checking back this week as we share more information on how WebCenter is the most complete, open and integrated modern user experience platform, and as we show key ways WebCenter can extend Oracle Applications and Oracle Fusion Applications.

    Read the article

  • EclipseCon 2011

    - by Marcus Hirt
    I sadly could not make it to EclipseCon last year. It was sad for so many reasons, not the least being that Sweden during that part of the year is cold and dark. ;) This year, however, I will be contributing two talks:
    HotRockit – What to Expect from Oracle's Converged JVM
    Oracle is converging the HotSpot and JRockit JVMs to produce a "best of breed JVM". Internally the project is sometimes referred to as the HotRockit project. There is already a large influx of ideas and solutions provided by the JRockit JVM into the OpenJDK. Examples of improvements include:
    - Better monitoring and profiling
    - Improved performance
    - Better ergonomics
    This talk will discuss what to expect from the converged JVM over the next two years, and how this will benefit the Eclipse community.
    Production-time Problem Solving in Eclipse
    This session will look at some common problems and pitfalls in Java applications. The focus will be on non-invasive profiling and diagnostics of running production systems. Problems tackled will be:
    - Excessive GC
    - Finding hotspots and optimizing them
    - Optimizing the choice of data structures
    - Synchronization problems
    - Finding out where exceptions are thrown
    - Finding memory leaks
    All problems will be demonstrated and solved by running both the badly behaving applications and the tools to analyze them from within the Eclipse Java IDE.
    I hope to meet you there!

    Read the article

  • Alaska Airlines Takes Off with Siebel Loyalty and Marketing

    - by tony.berk
    Who likes junk mail? Not me! But I don't mind targeted messages that are relevant to me. Alaska Airlines greatly improved their ability to be more personal with their customers by replacing a legacy mainframe loyalty system with Siebel Loyalty and Siebel Marketing. Which means, as an Alaska Airlines customer, I get less junk mail! With improved access to customer profile information in Siebel, Alaska Airlines presents targeted, relevant offers on their website and via email. At the same time, Alaska Airlines has improved their speed-to-market with promotions by 150 percent and can now implement new partner marketing programs twice as fast. Finally, as Steve Jarvis, VP of Marketing, Sales and Customer Experience at Alaska Airlines, points out in the video, Alaska Airlines can now reach all 22 million of their annual passengers, not just the 10% who were in the legacy loyalty system. To see other customer success stories, visit Siebel CRM Success. Click here to learn more about Oracle's CRM products.

    Read the article

  • Making Room for Innovation — Oracle Interactive eBook

    - by Javier Puerta
    Innovation and complexity are two critical topics on the minds of business leaders. Innovation is what gives them a competitive edge; increased complexity is their greatest challenge. Learn how Oracle is helping customers change the game and make room for innovation by simplifying IT. Access the new Oracle interactive e-book, “Simplify IT and Unleash Innovation”. You can download it here.

    Read the article

  • Making Room for Innovation — Oracle Interactive eBOOK

    - by Cinzia Mascanzoni
    Innovation and complexity are two critical topics on the minds of business leaders. Innovation is what gives them a competitive edge; increased complexity is their greatest challenge. Learn how Oracle is helping customers change the game and make room for innovation by simplifying IT. Access the new Oracle interactive e-book, “Simplify IT and Unleash Innovation” by inviting partners to download it here.

    Read the article

  • Enterprise Architecture IS (should not be) Arbitrary

    - by pat.shepherd
    I took a look at a blog entry today by Jordan Braunstein where he comments on another blog entry titled "Yes, Enterprise Architecture is Relative BUT it is not Arbitrary." The blog makes some good points, such as the following:
    "Lock 10 architects in 10 separate rooms; provide them all an identical copy of the same business, technical, process, and system requirements; have them design an architecture under the same rules and perspectives; and I guarantee your result will be 10 different architectures of varying degrees." - SOA Today: Enterprise Architecture IS Arbitrary
    Agreed... to a degree... but less so if all 10 truly followed one of the widely accepted EA frameworks. My thinking is that EA frameworks all focus on getting the business goals/vision locked down first as the primary drivers for decisions made lower down the architecture stack. Many people I talk to know about frameworks such as TOGAF, FEA, etc., but seldom apply the tenets to the architecture at hand. We all seem to want to get right into the Visio diagrams and boxes and arrows and connecting protocols and implementation details and lions and tigers and bears (Oh, my!) too early. If done properly, the Business, Application and Information architectures are nailed down BEFORE any technological direction (SOA or otherwise) is set. Those 3 layers and Governance (people and processes), IMHO, are layers that should not vary much, as they have everything to do with understanding the business -- from which technological conclusions can later be drawn.
    I really like what he went on to say later in the post about the fact that architecture attempts to remove the amount of variance between the 10 different architects' work. That is the real heart of what EA is about: REMOVING THE ARBITRARINESS.

    Read the article

  • Optimizing AES modes on Solaris for Intel Westmere

    - by danx
    Review
    AES is a strong method of symmetric (secret-key) encryption. It is a U.S. FIPS-approved cryptographic algorithm (FIPS 197) that operates on 16-byte blocks. AES has been available since 2001 and is widely used. However, AES by itself has a weakness. AES encryption isn't usually used by itself because identical blocks of plaintext are always encrypted into identical blocks of ciphertext. This encryption can be easily attacked with "dictionaries" of common blocks of text, which allows one to more easily discern the content of the unknown cryptotext. This mode of encryption is called "Electronic Code Book" (ECB), because in theory one could keep a "code book" of all known cryptotext and plaintext results to cipher and decipher AES. In practice a complete "code book" is not practical, even in electronic form, but large dictionaries of common plaintext blocks are still possible. Here's a diagram of encrypting input data using AES ECB mode:

            Block 1                              Block 2
          PlainTextInput                       PlainTextInput
                |                                    |
                v                                    v
        AESKey-->(AES Encryption)            AESKey-->(AES Encryption)
                |                                    |
                v                                    v
         CipherTextOutput                     CipherTextOutput
            Block 1                              Block 2

    What's the solution to the same cleartext input producing the same ciphertext output? The solution is to further process the encrypted or decrypted text in such a way that the same text produces different output. This usually involves an Initialization Vector (IV) and XORing the decrypted or encrypted text. As an example, I'll illustrate CBC mode encryption:

            Block 1                              Block 2
          PlainTextInput                       PlainTextInput
                |                                    |
                v                                    v
        IV >--(XOR)          +------------------->(XOR)          +---> . . . .
                |            |                       |           |
                v            |                       v           |
        AESKey-->(AES Encryption)            AESKey-->(AES Encryption)
                |            |                       |           |
                v            |                       v           |
         CipherTextOutput ---+                CipherTextOutput --+
            Block 1                              Block 2

    The steps for CBC encryption are:
    1. Start with a 16-byte Initialization Vector (IV), chosen randomly.
    2. XOR the IV with the first block of input plaintext.
    3. Encrypt the result with AES using a user-provided key. The result is the first 16 bytes of output cryptotext.
    4. Use the cryptotext (instead of the IV) of the previous block to XOR with the next input block of plaintext.

    Another mode besides CBC is Counter Mode (CTR). As with CBC mode, it also starts with a 16-byte IV. However, for subsequent blocks, the IV is just incremented by one. Also, the IV is XORed with the AES encryption result (not the plaintext input). Here's an illustration:

            Block 1                              Block 2
          PlainTextInput                       PlainTextInput
                |                                    |
                v                                    v
        AESKey-->(AES Encryption)            AESKey-->(AES Encryption)
                |                                    |
                v                                    v
        IV >--(XOR)                   IV + 1 >--(XOR)            IV + 2 ---> . . . .
                |                                    |
                v                                    v
         CipherTextOutput                     CipherTextOutput
            Block 1                              Block 2

    Optimization
    Which of these modes can be parallelized?
    - ECB encryption/decryption can be parallelized because it does no more than plain AES encryption and decryption of independent blocks, as mentioned above.
    - CBC encryption can't be parallelized because it depends on the output of the previous block. However, CBC decryption can be parallelized because all the encrypted blocks are known at the beginning.
    - CTR encryption and decryption can be parallelized because the input to each block is known - it's just the IV incremented by one for each subsequent block.
    So, in summary, for ECB, CBC, and CTR modes, encryption and decryption can be parallelized with the exception of CBC encryption. How do we parallelize encryption? By interleaving.
    Usually when reading and writing data there are pipeline "stalls" (idle processor cycles) that result from waiting for memory to be loaded or stored to or from CPU registers. Since the software encrypts/decrypts the next data block during the cycles where pipeline stalls would usually occur, we can avoid the stalls and crypt with fewer cycles. This software processes 4 blocks at a time, which ensures virtually no waiting ("stalling") for reading or writing data in memory.

    Other Optimizations
    Besides interleaving, the other optimizations performed are:
    - Loading the entire key schedule into the 128-bit %xmm registers. This is done once per 4 blocks of data (when present). The following is loaded: the entire "key schedule" (the user input key, preprocessed for encryption and decryption), which takes 11, 13, or 15 registers for AES-128, AES-192, and AES-256, respectively. The input data is loaded into another %xmm register, and the same register contains the output result after encrypting/decrypting.
    - Using SSE4 instructions (AESNI). Besides the aesenc, aesenclast, aesdec, aesdeclast, aeskeygenassist, and aesimc AESNI instructions, Intel has several other instructions that operate on the 128-bit %xmm registers. Some common instructions for encryption are: pxor (exclusive or - very useful), movdqu (load/store a %xmm register from/to memory), pshufb (shuffle bytes, for byte swapping), and pclmulqdq (carry-less multiply, for GCM mode).
    - Combining AES encryption/decryption with CBC or CTR mode processing. Instead of loading input data twice (once for AES encryption/decryption, and again for mode (CTR or CBC, for example) processing), the input data is loaded once, and both the AES and mode operations occur in the same function.

    Performance
    Everyone likes pretty color charts, so here they are. I ran these on Solaris 11 running on a Piketon Platform system with a 4-core Intel Clarkdale processor @3.20GHz. Clarkdale is part of the Westmere processor architecture family. The "before" case is Solaris 11, unmodified. Keep in mind that the "before" case has already been optimized with hand-coded Intel AESNI assembly. The "after" case has combined AES-NI and mode instructions, interleaved 4 blocks at a time.
    For the first table, lower is better (milliseconds). The first table shows the performance improvement using the Solaris encrypt(1) and decrypt(1) CLI commands. I encrypted and decrypted a 1/2 GByte file on /tmp (swap tmpfs). Encryption improved by about 40% and decryption improved by about 80%. AES-128 is slightly faster than AES-256, as expected. The second table shows more detailed timings for CBC, CTR, and ECB modes for the 3 AES key sizes and different data lengths. The results shown are the percentage improvement as shown by an internal PKCS#11 microbenchmark. And keep in mind the previous baseline code already had optimized AESNI assembly! The key size (AES-128, 192, or 256) makes little difference in relative percentage improvement (although, of course, AES-128 is faster than AES-256). Larger data sizes show better improvement than 128-byte data.

    Availability
    This software is in Solaris 11 FCS. It is available in the 64-bit libcrypto library and the "aes" Solaris kernel module. You must be running hardware that supports AESNI (for example, the Intel Westmere and Sandy Bridge microprocessor architectures). The easiest way to determine if AES-NI is available is with the isainfo(1) command. For example:

        $ isainfo -v
        64-bit amd64 applications
                pclmulqdq aes sse4.2 sse4.1 ssse3 popcnt tscp ahf cx16 sse3 sse2 sse fxsr mmx cmov amd_sysc cx8 tsc fpu
        32-bit i386 applications
                pclmulqdq aes sse4.2 sse4.1 ssse3 popcnt tscp ahf cx16 sse3 sse2 sse fxsr mmx cmov sep cx8 tsc fpu

    No special configuration or setup is needed to take advantage of this software. The Solaris libraries and kernel automatically determine whether they're running on an AESNI-capable machine and execute the correctly-tuned software for the current microprocessor.
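    If you want to reproduce the encrypt(1)/decrypt(1) measurement described above on your own machine, here is a rough sketch; the key length, file size, and paths are arbitrary choices here, and your timings will vary with hardware:

        $ pktool genkey keystore=file outkey=/tmp/aes.key keytype=aes keylen=128  # raw AES-128 key in a file
        $ mkfile 512m /tmp/plain                                                  # ~1/2 GB test file on tmpfs
        $ time encrypt -a aes -k /tmp/aes.key -i /tmp/plain -o /tmp/cipher        # time AES encryption
        $ time decrypt -a aes -k /tmp/aes.key -i /tmp/cipher -o /tmp/plain.out    # time AES decryption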
    Summary
    Maximum throughput of the AES cipher modes can be achieved by combining AES encryption with mode processing, interleaving encryption of 4 blocks at a time, and using Intel's wide 128-bit %xmm registers and instructions.

    References
    - "Block cipher modes of operation", Wikipedia - a good overview of AES modes (ECB, CBC, CTR, etc.)
    - "Advanced Encryption Standard", Wikipedia
    - "Current Modes" - describes NIST-approved block cipher modes (ECB, CBC, CFB, OFB, CCM, GCM)

    Read the article
