Search Results

Search found 10492 results on 420 pages for 'online backups'.


  • links for 2011-03-17

    - by Bob Rhubart
    Siba Prasad: Oracle Database on Amazon RDS - Siba Prasad shares an analysis of the pros and cons. (tags: oracle database cloud amazon)
    LIVE WEBCAST March 24 2pm PT - Why Switch from Red Hat and SUSE Linux to Oracle Linux? (Oracle's Linux Blog) Featuring Oracle's Monica Kumar, Sr. Director of Linux, Oracle VM and MySQL, and Avi Miller, Principal Sales Consultant, Linux and Virtualization. (tags: oracle linux)
    Webcast: IBM SOA vs. Oracle SOA, March 24, 1pm ET / 10am PT - Maneesh Joshi and Bruce Tierney guide you to a solid understanding of the differences between the Oracle and IBM approaches to comprehensive SOA. (tags: oracle soa bpm)
    Finding the Right Solution to Source and Manage Your Contractors (PeopleSoft Apps Strategy) - "Talent has become a primary competitive advantage for most organizations. Contingent labor offers talent on flexible terms; it offers the ability to scale up operations, close skill gaps, and manage risk in the process of delivering services." - Mark Rosenberg (tags: oracle peoplesoft enterprisearchitecture)
    Oracle Business Intelligence Customers: Have Your Voice Heard in the "2011 Wisdom of the Crowds Business Intelligence Market Survey" (BI & Analytics Pulse) - "The Wisdom of the Crowds survey combines social media, crowd sourcing, and good old-fashioned market research to provide vendors and customers alike an unvarnished and insightful snapshot of what's top of mind with business intelligence professionals." (tags: oracle businessintelligence)
    Martin Bach: Troubleshooting Grid Infrastructure startup - Martin Bach hunts down the problem that caused one of his blades to reboot after an EXT3 journal error. (tags: oracle grid rac)
    Oracle WebCenter: Social Networking & Collaboration (Oracle Enterprise 2.0 Blog) - Kelley Ruppel with information on "how the new release of Oracle WebCenter provides unprecedented Social Networking and Collaboration." (tags: oracle webcenter enterprise2.0 collaboration)
    VirtaThon: 100% Virtual Java/Oracle/MySQL Conference! | Bex Huff - "The goal is simple," says Oracle ACE Director Bex Huff. "Because it's all online, the conference is very cheap. Pricing is not yet announced... but it should be around $300. Also, unlike other conferences, every speaker gets paid a small fee depending on the popularity of his or her session." (tags: oracle oracleace java mysqql)
    Griffiths Waite Blog: BPM 11g PS3 - GW's Ian Heathcock shares a link to "a most interesting article on Oracle's recent release discussing the new features and how PS3 adds value to the whole SOA message." (tags: oracle soa)
    The Buttso Blathers: Tutorial: JSF 2.0 and JPA 2.0 with WebLogic Server using NetBeans - Should you take application architecture advice from a man named Buttso? In this case, yes. (tags: oracle jsf jpa weblogic)
    Setting-up a High Available Tuned SOA Environment (Middleware Magic) (tags: ping.fm)
    How to Configure Weblogic Messaging Bridge with JBoss (Middleware Magic) (tags: ping.fm Weblogic JBoss)
    Richard Veryard on Architecture: Emergent Architecture (tags: ping.fm entarch emergence)

    Read the article

  • How To: Spell Check InfoPath web form in SharePoint 2010

    - by Jeremy Ramos
    Originally posted on: http://geekswithblogs.net/JeremyRamos/archive/2013/11/07/how-to-spell-check-infopath-web-form-in-sharepoint-2010.aspx

    This is a sequel to my 2011 post about How To: Spell Check InfoPath Web Form in SharePoint. This time I will share how I managed to achieve spell checking in SharePoint 2010. This time round, we have changed our online forms strategy to use custom lists instead of form libraries. I thought everything would be smooth sailing, as we are using all OOTB features. So, we customised a custom list form using InfoPath and added a few Rich Text Boxes (spell check is a requirement for this specific project). All was good in the InfoPath client, including the spell checker, so, happy days, I published straight away.

    Here come the surprises. I browsed to my custom list and clicked Add New Item. This launched my form in a modal dialog. I went to my Rich Text Boxes to check the spell checker and, voila, it was disabled! I tried hacking the FormServer.aspx and the CustomSpellCheckEntirePage.js again, but the new FormServer.aspx behaves differently from MOSS 2007's. I searched for answers in many blogs to no avail, often ending up being linked to my old blog post. I also tried placing the spell check JavaScript into a Content Editor Web Part on the item's New form and Edit form. It launched the Spell Check dialog, but it did not spell-check the page correctly. At this point, I decided I needed to get my project across ASAP, so enough with experimentation: I logged a ticket with Microsoft Premier Support.

    On a call with the support engineer, I browsed through the custom list and to the item to demonstrate my problem. Suddenly, the Spell Check tab in the toolbar was enabled! Surprised? Not much, it's Microsoft! Anyway, to cut my story short, here is a summary of my solution:

    1. Navigate to your custom list.
    2. In the ribbon toolbar, navigate to List > Customize List > Form Web Parts > Content Type Forms > (Item) New Form. This displays newifs.aspx, the page shown when Add New Item is clicked. This page, just like any other SharePoint page, contains web parts; in this case, we have the InfoPath Form Web Part.
    3. Add a Content Editor Web Part (CEWP) on top of the InfoPath Form Web Part. (A blank CEWP will do for this example.)
    4. Navigate to Page and click Stop Editing.
    5. Click Add New Item again and navigate to a Rich Text Box. Tadah! The Spell Check tab is now enabled!
    6. Do the same steps for the (Item) Edit Form to enable spell checking when editing an item.

    This "no code" solution was discovered purely by accident!

    Read the article

  • Ubuntu 10.10 forgets desktop theme.

    - by Marcelo Cantos
    (I posted this question on superuser.com and haven't received any answers or comments; then I came across this site, so my apologies to anyone who has seen this already.)

    I am running Ubuntu in VirtualBox (on a Windows 7 host). Several times now, the top-level menu bar, the task bar — and seemingly every system dialog — have forgotten the out-of-the-box "Ambiance" theme they conformed to when I first installed the system. Window captions still preserve the theme, but pretty much nothing else does. I have searched high and low on Google for assistance with this problem. Everything I've found suggests either running some gconf reset or deleting .gconf*, .gnome* and other similar directories. I have followed all this advice and nothing works. I still get a boring Windows-95-style gray 3D look and feel. On previous occasions, after much messing around I've given up and rebooted the VM instance, and been pleasantly surprised to see the original "Ambiance" theme restored throughout the UI, but invariably it disappears again some time later, usually after a reboot, so I can never figure out what I did that broke it. Here's a sample from Ubuntu's site of what I want it to look like. And here's a screenshot of my system as it currently looks. Also note that my GNOME Terminals normally have a nice purple semi-translucent look; as can be seen from the screenshot, they are now just a solid matte white.

    This last time (just yesterday), trying numerous combinations of all the usual tricks and rebooting several times hasn't fixed it, so here I am on SU wondering: how do I recover the out-of-the-box theme for my GNOME/Ubuntu desktop, noting that blowing away all config files — as suggested in many places online — fails to achieve this? It might help to know that it seems to fail either after I resize the VM instance, forcing the Ubuntu desktop to resize itself, or after I play around with Compiz settings. I haven't been able to figure out which of these it is, and it could be neither. Given the amount of pain I have had to go through to get things back to normal (and given that I am at a loss as to how to do so), it has proven difficult to definitively isolate the cause.
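    For reference, the "gconf reset" the advice above alludes to usually amounts to something like the sketch below, done per-key instead of deleting all of ~/.gconf. This is a best-effort sketch for GNOME 2 on Ubuntu 10.10: the key paths are the standard ones, but the exact theme and icon names, and the settings-daemon path, are assumptions that may vary by release.

        # Re-apply the stock Ubuntu 10.10 look key by key (theme/icon names assumed):
        gconftool-2 --type string --set /desktop/gnome/interface/gtk_theme "Ambiance"
        gconftool-2 --type string --set /desktop/gnome/interface/icon_theme "ubuntu-mono-dark"
        gconftool-2 --type string --set /apps/metacity/general/theme "Ambiance"

        # A crashed gnome-settings-daemon produces exactly this gray "Windows 95" look;
        # restarting it re-applies the theme engine (binary path may differ on your system):
        /usr/lib/gnome-settings-daemon/gnome-settings-daemon &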

    Read the article

  • SharePoint 2010, Cloud, and the Constitution

    - by Michael Van Cleave
    The other evening an article on the Red Tape Chronicles caught my eye. The article, written by Bob Sullivan and titled "The Constitutional Issues of Cloud Computing," was very interesting in regards to the direction most of the technical world is going. We have all been inundated with reasons to utilize cloud computing: price, availability, even scalability. But what Bob brings up is a whole separate view of why a business might not want to move toward the cloud for services or applications.

    The overall point of the article is pretty simple. It boils down to the summation that hosting "things" in the cloud (email, documents, etc.) is interpreted differently under the law, with regard to constitutional search and seizure, than a document or item kept in physical form at a business or home. If you physically have something stored, someone would have to get a warrant to search for it or seize it; but if it is stored off in the cloud and the ISV or provider is subpoenaed for the item, they will usually give access to the information. Obviously this is a big difference in the interpretation of the law and the constitution due to technology.

    So you might ask, "Where does this fit in with SharePoint?" Well, the overall push for this next version of SharePoint is one that gives a business ultimate flexibility to utilize the cloud. For one example, the upcoming version gracefully lends itself to multi-tenancy, so that online or "cloud" hosting would be possible by service providers. Another aspect of the upcoming version is its updated ability to store content outside of the database in a cheaper, commoditized storage facility. This is called Remote Blob Storage (RBS), which is the next evolution of External Blob Storage (EBS). With this new functionality that businesses might look forward to, it is extremely important for them to understand that they might be opening themselves up to laws that do not require a warrant to search or seize their information stored in the cloud.

    It will be interesting to see how this all plays out in the next few months. Laws usually change slowly in comparison to technology, so it might be a while until we see whether it is actually constitutional to treat someone's content in the cloud differently than it would be treated in their possession. Until some type of parity happens, or more concrete laws emerge regarding the differences, be very careful about what you put in the cloud.

    Michael

    Read the article

  • Ubuntu Dual Screen Using Virtual Machine - AMD GPU

    - by Chris
    I've been searching online and reading tutorials etc. about how to make my Ubuntu VM dual screen (x86_64). I first tried to run this command:

        sudo aticonfig --initial -f

    which gave me the output:

        sudo: aitconfig: command not found

    I then googled the output and followed instructions telling me to install the ATI drivers onto my Ubuntu system:

        wget http://www2.ati.com/drivers/linux/ati-driver-installer-11-5-x86.x86_64.run
        sudo sh ati-driver-installer-11-5-x86.x86_64.run --buildpkg Ubuntu/natty
        sudo dpkg -i *.deb
        sudo apt-get -f install
        sudo aticonfig -f --initial --adapter=all
        sudo reboot

    It all works well until I input sudo apt-get -f install, which gives me the following output:

        sudo apt-get -f install
        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        0 upgraded, 0 newly installed, 0 to remove and 25 not upgraded.
        3 not fully installed or removed.
        After this operation, 0 B of additional disk space will be used.
        Setting up fglrx (2:8.850-0ubuntu1) ...
        update-alternatives: error: alternative link /usr/bin/aticonfig is already managed by x86_64-linux-gnu_gl_conf.
        dpkg: error processing fglrx (--configure):
         subprocess installed post-installation script returned error exit status 2
        dpkg: dependency problems prevent configuration of fglrx-amdcccle:
         fglrx-amdcccle depends on fglrx; however:
          Package fglrx is not configured yet.
        dpkg: error processing fglrx-amdcccle (--configure):
         dependency problems - leaving unconfigured
        dpkg: dependency problems prevent configuration of fglrx-dev:
         fglrx-dev depends on fglrx; however:
          Package fglrx is not configured yet.
        dpkg: error processing fglrx-dev (--configure):
         dependency problems - leaving unconfigured
        No apport report written because the error message indicates its a followup error from a previous failure.
        No apport report written because the error message indicates its a followup error from a previous failure.
        Errors were encountered while processing:
         fglrx
         fglrx-amdcccle
         fglrx-dev
        E: Sub-process /usr/bin/dpkg returned an error code (1)

    At this point, I don't know what to do, since running:

        gksudo amdcccle

    For the record, I have 3D acceleration turned on. The following is the GPU for my VM:

        lspci | grep VGA
        00:02.0 VGA compatible controller: InnoTek Systemberatung GmbH VirtualBox Graphics Adapter

    Any help on how I can make my VM dual screen with Ubuntu would be great. Thank you in advance.
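    For what it's worth, the update-alternatives error above usually means the fglrx post-install script is colliding with an existing alternatives entry, and the commonly suggested way out of this half-configured dpkg state is to purge the packages and start clean. A rough sketch, untested on this exact setup:

        # Purge the three half-configured packages named in the error:
        sudo apt-get remove --purge fglrx fglrx-amdcccle fglrx-dev
        # Remove any xorg.conf that aticonfig generated, so X autodetects on next boot:
        sudo rm -f /etc/X11/xorg.conf
        # Let apt repair whatever is left half-installed, then reboot before retrying:
        sudo apt-get -f install
        sudo reboot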

    Read the article

  • Oracle's Integrated Systems Management and Support Experience

    - by Scott McNeil
    With its recent launch, Oracle Enterprise Manager 11g introduced a new approach to integrated systems management and support: taking the two areas of IT management and vendor support and combining them into one comprehensive, centralized platform.

    Traditional Ways
    Under the traditional method, IT operations teams would often focus on running their systems using management tools that weren't connected to their vendor's support systems. If you needed support with a product, administrators would contact the vendor by phone or visit the vendor website, then log a service request in order to fix the issue. This method was very time consuming, as administrators had to collect their software configurations, operating system and hardware settings, then manually enter them into an online form or recite them to a support analyst on the phone. The vendor, in turn, had to analyze all the configuration data to recreate the problem in order to solve it. This approach was manual, uncoordinated and error-prone, and duplication of effort between the customer and vendor frequently occurred.

    A Better Support Experience
    By removing the boundaries between support, IT management tools and the customer's IT infrastructure, Oracle paved the way for a better support experience. This was achieved through integration between Oracle Enterprise Manager 11g and My Oracle Support. Administrators can not only manage their IT infrastructure and applications through Oracle Enterprise Manager's centralized console, but can also receive proactive alerts and patch recommendations right within the console they use day in, day out. Having one single source of information saves time and potentially prevents unforeseen problems down the road.

    All for One, and One for All
    The first step is to allow Oracle Enterprise Manager to upload configuration data into Oracle's secure configuration repository, where it can be analyzed for potential issues or conflicts for all customers. A fix to a problem encountered by one customer may actually be relevant to many more. The integration between My Oracle Support and Oracle Enterprise Manager allows all customers who may be impacted by a problem to receive a notification about the fix. Once the alert appears in Oracle Enterprise Manager's console, the administrator can do further investigation using the automated workflows provided in Oracle Enterprise Manager to analyze potential conflicts. Finally, administrators can schedule a time to test and automatically apply the fix to all the systems that need it. In the end, this helps customers maintain their service levels without compromise and avoid the unplanned downtime that may result from potential issues or conflicts.

    This new paradigm of integrated systems management and support helps customers keep their systems secure, compliant, and up to date, while eliminating the traditional silos between IT management and vendor support. Oracle's next-generation platform also works hand in hand to provide a higher quality of service to business users while at the same time making life for administrators less complicated. For more information on Oracle's integrated systems management and support experience, be sure to visit our Oracle Enterprise Manager 11g Resource Center for the latest customer videos, webcasts, and white papers.

    Read the article

  • TDC: The Developer's Conference Day One

    - by Tori Wieldt
    The Developer's Conference (TDC) kicked off Wednesday in São Paulo, Brazil. With over 3000 developers in attendance over five days, it is the premier multi-community developer conference in Brazil, organized by Globalcode. Yara Senger, one of the organizers, said, "We like to say multi-community rather than multi-technology because it is interesting and beneficial when various communities get together. They learn so much from each other!" TDC includes tracks on Java and several other technologies, including SOA, Python, Ruby, mobile and digital TV. In the mobile track, developers who create a Java ME app will get a Nokia S40 phone!

    New this year at TDC is the Java University track, sponsored by Oracle. It is aimed at university students and professionals who are new to Java. The lectures are introductory level, with an educational focus and practical exercises. The Java track and other tracks, such as SOA, mobile and digital TV, are getting lots of help from the expertise of Brazilian JUG members. Thanks to GoJava, JavaBahia, JavaNoroeste and SouJava!

    Carlos Fernando, one of the coordinators of the digital TV track, said, "My goal is to teach developers the basics of digital TV, and show them the tools used to build interactive TV applications." Fernando explained the concept of "the second screen": many people watch TV with a second smart device (tablet or smartphone) at hand, and this creates many opportunities for developers. For example, while watching TV, a viewer can get extra content (interviews, behind the scenes) on their tablet. More interestingly, if while watching their favorite TV show a viewer likes an outfit one of the actors is wearing, their smartphone can tell them where they can buy it nearby, or they can order it online immediately. Fernando exclaimed, "The opportunities for developers are nearly infinite in the area of digital TV!"

    At the TDC opening keynote, Debora Palermo, Oracle University country manager for Brazil, reminded attendees that Java is present in many devices, from simple to complex, and knowledge of this platform can open many doors in the labor market. She explained Oracle's Workforce Development Program (WDP), managed by Oracle University, which allows educational institutions to deliver Oracle training. WDP allows for easy and low-cost access to Oracle training in local communities across the world. "Oracle University is committed to creating the next generation of Java developers, and WDP can make that happen," Palermo said. As of March 2012, Oracle University is partnering with Globalcode to offer WDP. Students can earn official Oracle Course Certifications, a great way to learn Java.

    Brazilian developers who cannot attend TDC can watch the live stream.

    Read the article

  • Backup Azure Tables with the Enzo Backup API

    - by Herve Roggero
    In case you missed it, you can now back up (and restore) Azure Tables and SQL Databases using an API directly. The features available through the API can be found here: http://www.bluesyntax.net/backup20api.aspx, and the online help for the API is here: http://www.bluesyntax.net/EnzoCloudBackup20/APIIntro.aspx. Backing up Azure Tables can't be any easier than with the Enzo Backup API. Here is a sample that does the trick:

        // Create the backup helper class. The constructor automatically sets the SourceStorageAccount property
        StorageBackupHelper backup = new StorageBackupHelper(
            "storageaccountname", "storageaccountkey",
            "sourceStorageaccountname", "sourceStorageaccountkey",
            true, "apilicensekey");

        // Now set some properties...
        backup.UseCloudAgent = false;                      // backup locally
        backup.DeviceURI = @"c:\TMP\azuretablebackup.bkp"; // to this file
        backup.Override = true;
        backup.Location = DeviceLocation.LocalFile;

        // Set optional performance options
        backup.PKTableStrategy.Mode = BSC.Backup.API.TableStrategyMode.GUID; // Set GUID strategy by default
        backup.MaxRESTPerSec = 200; // Attempt to stay below 200 REST calls per second

        // Start the backup now...
        string taskId = backup.Backup();

        // Use the Environment class to get the final status of the operation
        EnvironmentHelper env = new EnvironmentHelper("storageaccountname", "storageaccountkey", "apilicensekey");
        string status = env.GetOperationStatus(taskId);

    As you can see above, the code is straightforward. You provide connection settings in the constructor, set a few options indicating where the backup device will be located, set optional performance parameters and start the backup. The performance options are designed to help you back up your Azure Tables quickly, while attempting to stay under a specific threshold to prevent Storage Account throttling. For example, the MaxRESTPerSec property will attempt to keep the overall backup operation under 200 REST calls per second. Another performance option is the Backup Strategy for Azure Tables. By default, all tables are simply scanned. While this works best for smaller Azure Tables, larger tables can use the GUID strategy, which will issue requests against an Azure Table in parallel, assuming the PartitionKey stores GUID values. Your PartitionKey doesn't have to contain GUIDs for this strategy to work; the backup algorithm is simply tuned for that condition. Other options are available as well, such as filtering which columns, entities or tables are backed up. Check out more on the Blue Syntax website at http://www.bluesyntax.net.

    Read the article

  • Source-control 'wet-work'?

    - by Phil Factor
    When a design or creative work is flawed beyond remedy, it is often best to destroy it and start again. The other day, I lost the code to a long and intricate SQL batch I was working on. I'd thought it was impossible, but it happened. With all the technology around that is designed to prevent this occurring, this sort of accident has become a rare event. If it weren't for a deranged laptop, and my distraction, the code wouldn't have been lost this time. As always, I sighed, had a soothing cup of tea, and typed it all in again. The new code I hastily tapped in was much better: I'd held in my head the essence of how the code should work rather than the details: I now knew for certain the start point, the end, and how it should be achieved. Instantly the detritus of half-baked thoughts fell away and I was able to write logical code that performed better. Because I could work so quickly, I was able to hold the details of all the columns and variables in my head, and the dynamics of the flow of data. It was, in fact, easier and quicker to start from scratch than to tidy up and refactor the existing code with its inevitable fumbling and half-baked ideas.

    What a shame that technology is now so good that developers rarely experience the cleansing shock of losing their code and having to rewrite it from scratch. If you've never accidentally lost your code, then it is worth doing it deliberately once for the experience. Creative people have, until technology mistakenly prevented it, torn up their drafts or sketches, thrown them in the bin, and started again from scratch. Leonardo's obsessive reworking of the Mona Lisa was renowned because it was so unusual: most artists have been utterly ruthless in destroying work that didn't quite make it. Authors are particularly keen on writing afresh, and the results are generally positive. Lawrence of Arabia actually lost the entire 250,000-word manuscript of 'The Seven Pillars of Wisdom' by accidentally leaving it on a train at Reading station, before rewriting a much better version. Now, any writer or artist is seduced by technology into altering or refining their work rather than casting it dramatically in the bin or setting a light to it on a bonfire, and rewriting it from the blank page. It is easy to pick away at a flawed work, but the real creative process is far more brutal.

    Once, many years ago, whilst running a software house that supplied commercial software to local businesses, I'd been supervising an accounting system for a farming cooperative. No packaged system met their needs, and it was all hand-cut code. For us, it represented a breakthrough, as it was for a government organisation, and success would guarantee more contracts. As you've probably guessed, the code got mangled in a disk crash just a week before the deadline for delivery, and the many backups all proved to be entirely corrupted by a faulty tape drive. There were some fragments left on individual machines, but they were all of different versions. The developers were in despair. Strangely, I managed to re-write the bulk of a three-month project in a manic and caffeine-soaked weekend. Sure, that elegant universally-applicable input-form routine wasn't quite so elegant, but it didn't really need to be, as we knew what forms it needed to support. Yes, the code lacked architectural elegance and reusability. By dawn on Monday, the application passed its integration tests.

    The developers rose to the occasion after I'd collapsed, and tidied up what I'd done, though they were reproachful that some of the style and elegance had gone out of the application. By the delivery date, we were able to install it. It was a smaller, faster application than the beta they'd seen, and the user interface had a new, rather Spartan, appearance that we swore was done to conform to the latest in user-interface guidelines. (We switched to the Helvetica font to look more 'Bauhaus'.) The client was so delighted that he forgave the new bugs that had crept in. I still have the disk that crashed, up in the attic.

    In IT, we have had mixed experiences with complete rewrites. Lotus 123 never really recovered from a complete rewrite from assembler into C, Borland made the mistake with Arago and Quattro Pro, and Netscape's complete rewrite of their Navigator 4 browser was a white-knuckle ride. In all cases, the decision to rewrite was a result of extreme circumstances where no other course of action seemed possible; the rewrite didn't come out of the blue. I prefer to remember the rewrite of Minix by young Linus Torvalds, or the rewrite of BitKeeper by a slightly older Linus. The rewrite of CP/M didn't do too badly either, did it? Come to think of it, the guy who decided to rewrite the windowing system of the Xerox Star never regretted the decision.

    I'll agree that one should often resist calls for a rewrite. One of the worst habits of the more inexperienced programmer is to denigrate whatever code he or she inherits, and then call loudly for a complete rewrite. They are buoyed up by the mistaken belief that they can do better. This, however, is a different psychological phenomenon, more related to the idea of some motorcyclists that they are operating on infinite lives, or of the occasional squaddie that if he charges the machine-guns determinedly enough, all will be well. Grim experience brings out the humility in any experienced programmer. I'm referring to quite different circumstances here. Where a team knows the requirements perfectly, is of one mind on methodology and coding standards, and already has a solution, then what is wrong with considering a complete rewrite?

    Rewrites are so painful in the early stages, until that point where one realises the payoff, that even I quail at the thought. One needs a natural disaster to push one over the edge. The trouble is that source-control systems, and disaster-recovery systems, are just too good nowadays. If I were to lose this draft of this very blog post, I know I'd rewrite it much better. However, if you read this, you'll know I didn't have the nerve to delete it and start again. There was a time that one prayed that unreliable hardware would deliver you from an unmaintainable mess of a codebase, but now technology has made us almost entirely immune to such a merciful act of God. An old friend of mine with long experience in the software industry has long had the idea of the 'source-control wet-work', where one hires a malicious hacker in some wild eastern country to hack into one's own source-control system and destroy all trace of the source to an application. Alas, backup systems are just too good to make this any more than a pipedream. Somehow, it would be difficult to promote the idea. As an alternative, could one construct a source-control system that, on running all the code-quality metrics, would systematically destroy all trace of source code that failed the quality test?

    Alas, I can't see many managers buying into the idea. Reading the full story of the near-loss of Toy Story 2 set me thinking. It turned out that the lucky restoration of the code wasn't the happy ending one first imagined it to be, because they eventually came to the conclusion that the plot was fundamentally flawed and it all had to be rewritten anyway. Was this an early case of the 'source-control wet-work'? It is very hard nowadays to do a rapid U-turn in a development project because we are far too prone to cling to our existing source code.

    Read the article

  • Reviewing Retail Predictions for 2011

    - by David Dorf
    I've been busy thinking about what 2012 and beyond will look like for retail, and I have some interesting predictions to share. But before I go there, let's first review this year's predictions before making new ones for 2012.

    1. Alternate Payments
    We've seen several alternate payment schemes emerge over the last two years, and 2011 may be the year one of them takes hold. Any competition that can drive down fees will be good for everyone. I'm betting that Apple will add NFC chips to their next version of the iPhone, then enable payments in stores using iTunes accounts on the backend. PayPal will continue to make inroads, and Isis will announce a pilot.
    The iPhone 4S did not contain an NFC chip, so we'll have to continue waiting for the iPhone 5. PayPal announced it's moving into in-store payments, and Google launched its wallet in selected cities. Overall I think the payment scene is heating up and that trend will continue.

    2. Engineered Systems
    The industry is moving toward purpose-built appliances that are optimized across the entire stack. Oracle calls these "engineered systems" and the first two examples are Exadata and Exalogic, but there are other examples from other vendors. These are particularly important to the retail industry because of the volume of data that must be processed. There should be continued adoption in 2011.
    Oracle reports that Exadata is its fastest growing product, and at the recent OpenWorld it announced the SuperCluster and Exalytics products, both continuing the engineered systems trend. SAP's HANA continues to receive attention, and IBM also seems to be moving in this direction.

    3. Social Analytics
    There are lots of tools that provide insight into how a brand is perceived across popular internet sites, but as far as I know, these tools are not industry specific. The next step needs to mine the data and determine how it should influence retail operations. The data needs to help retailers determine how they create promotions, which products to stock, and how to keep consumers engaged. Social data alone does not provide the answers, but it's one more data point that will help retailers make better decisions. Look for some vendor consolidation to help make this happen.
    In March, Salesforce.com acquired leading social monitoring vendor Radian6 and followed up with acquisitions of Heroku and Model Metrics. The notion of Social CRM seems to be going more mainstream now.

    4. 2-D Barcodes
    Look for more QRCodes on shelf tags, in newspaper circulars, and on billboards. It's a great portal from the physical world into the digital one that buys us time until augmented reality matures further. Nobody wants to type "www", backslash, and ".com" on their phones.
    QRCodes are everywhere. 'Nuff said.

    5. In the words of Microsoft, "To the Cloud!"
    My favorite "cloud application" is Evernote. If you take notes on your work laptop, you will inevitably need those notes on your home PC. And if you manage to solve that problem, you'll need to access them from your mobile phone. Evernote stores your notes in the cloud and provides easy ways to access them. Being able to access a service from anywhere, and not having to worry about backups, upgrades, etc., is great. Retailers will start to rely on cloud services, both public and private, in the coming year.
    There was no shortage of announcements in this area: Amazon's cloud-based Kindle Fire, Apple's iCloud, Oracle's Public Cloud, etc. I saw an interesting presentation showing how BevMo moved their systems to the cloud. Seems like retailers are starting to consider the cloud for specific uses.

    6. F-Commerce
    Move over "E" and "M" so we can introduce "F-Commerce," which should go mainstream in 2011. Already several retailers have created small stores on Facebook, and it won't be long before Facebook becomes a full-fledged channel in the omni-channel world of retail. The battle between Facebook and Google will heat up over retail, where both stand to make lots of money.
    JCPenney and ASOS both put their entire catalogs on Facebook, and lots of other retailers have connected Facebook to their e-commerce sites. I still think selling from the newsfeed is the best approach, and several retailers are trying that approach as well. I just don't see Google+ as a threat to Facebook, so I think that battle is over. I called 2011 The Year of F-Commerce, and that was probably accurate.

    It's good to look back at predictions, but we also have to think about what was missed. I didn't see Amazon entering the tablet business with such a splash, although in hindsight it was obvious. Nor did I think HP would fall so far so fast. Look for my 2012 predictions coming soon.

    Read the article

  • String contains trailing zeroes when converted from decimal [migrated]

    - by Locke
    I've run into an unusual quirk in a program I'm writing, and I was trying to figure out if anyone knew the cause. Note that fixing the issue is easy enough; I just can't figure out why it is happening in the first place.

    I have a WinForms program written in VB.NET that displays a subset of data. It contains a few labels that show numeric values (the .Text property of the labels is assigned directly from the Decimal values). These numbers are returned by a DLL I wrote in C#. The DLL calls a webservice which initially returns the values in question. It returns one as a string, the other as a decimal (I don't have any control over the webservice, I just consume it). The DLL assigns these to properties on an object (both of which are decimals), then returns that object back to the WinForms program that called the DLL. Obviously, there's a lot of other data being consumed from the webservice, but no other operations are happening which could modify these properties. So, the short version is:

    1. WinForm requests a new Foo from the DLL.
    2. DLL creates object Foo. (Both Foo.Bar1 and Foo.Bar2 are decimals.)
    3. DLL calls webservice, which returns SomeOtherFoo.
    4. Foo.Bar1 = decimal.Parse(SomeOtherFoo.Bar1); // SomeOtherFoo.Bar1 is a string equal to "2.9000"
    5. Foo.Bar2 = SomeOtherFoo.Bar2; // SomeOtherFoo.Bar2 is a decimal equal to 2.9D
    6. DLL returns Foo to WinForm.
    7. WinForm.lblMockLabelName1.Text = Foo.Bar1 // Inspecting Foo.Bar1 indicates my value is 2.9D
    8. WinForm.lblMockLabelName2.Text = Foo.Bar2 // Inspecting Foo.Bar2 also indicates I'm 2.9D

    So, what's the quirk? WinForm.lblMockLabelName1.Text displays as "2.9000", whereas WinForm.lblMockLabelName2.Text displays as "2.9". Now, everything I know about C# and VB indicates that the format of the string which was initially parsed into the decimal should have no bearing on the outcome of a later decimal.ToString() operation called on the same decimal. I would expect that decimal.Parse(someDecimalString).ToString() would return the string without any trailing zeroes. Everything I find online seems to corroborate this (there are countless Stack Overflow questions asking exactly the opposite: how to keep the formatting from the initial parsing). At the moment, I've just removed the trailing zeroes from the initial string that gets parsed, which has hidden the quirk. However, I'd love to know why it happens in the first place.
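    The effect is easy to reproduce outside WinForms, for instance at a PowerShell prompt, which at least narrows it down to System.Decimal itself rather than the labels or the DLL: a decimal parsed from a string keeps that string's scale, and ToString() faithfully renders the trailing zeros.

        PS> [decimal]::Parse("2.9000").ToString()   # parsed from a string: scale 4 is kept
        2.9000
        PS> (2.9d).ToString()                       # a plain decimal literal: scale 1
        2.9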

    Read the article

  • Virtual Developer Day: Oracle Fusion Development

    - by mseika
    Get up to date and learn everything you wanted to know about Oracle ADF & Fusion Development, plus live Q&A chats with Oracle technical staff. Oracle Application Development Framework (ADF) is the standards-based, strategic framework for Oracle Fusion Applications and Oracle Fusion Middleware. Oracle ADF's integration with the Oracle SOA Suite, Oracle WebCenter and Oracle BI creates a complete, productive development platform for your custom applications.

    Join us at this FREE virtual event and learn the latest in Fusion Development, including:

    - Is Oracle ADF development faster and simpler than Forms, Apex or .Net?
    - Mobile Application Development with ADF Mobile
    - Oracle ADF development with Eclipse
    - Oracle WebCenter Portal and ADF Development
    - Application Lifecycle Management with ADF
    - Building Process Centric Applications with ADF and BPM
    - Oracle Business Intelligence and ADF Integration
    - Live Q&A chats with Oracle technical staff

    Developer lead, manager or architect: this event has something for everyone. Don't miss this opportunity.

    December 11th, 2012: 9:00 – 13:00 GMT / 10:00 – 14:00 CET / 12:00 – 16:00 AST / 13:00 – 17:00 MSK / 14:30 – 18:30 IST

    Register online now for this FREE event!

    Agenda (four tracks: Track 1, Introduction to Fusion Development; Track 2, What's New in Fusion Development; Track 3, Fusion Development in the Enterprise; Track 4, Hands On Lab - WebCenter Portal and ADF Lab w/ JDeveloper):

    - 9:00 a.m. – 9:30 a.m.: Opening
    - 9:30 a.m. – 10:00 a.m.: Keynote: Oracle Fusion Development
    - 10:00 a.m. – 11:00 a.m.: Is Oracle ADF development faster and simpler than Forms, Apex or .Net? / Mobile Application Development with ADF Mobile / Oracle WebCenter Portal and ADF Development / Hands On Lab (lab materials can be found on the event wiki; Q&A about the lab is available throughout the event)
    - 11:00 a.m. – 12:00 p.m.: Rich Web UI made simple – an ADF Faces Overview / Oracle Enterprise Pack for Eclipse - ADF Development / Building Process Centric Applications with ADF and BPM
    - 12:00 p.m. – 1:00 p.m.: Next Generation Controller for JSF / Application Lifecycle Management for ADF / Oracle Business Intelligence and ADF Integration

    View Session Abstracts. We look forward to welcoming you at this free event!

    Read the article

  • Oracle Gave Me a Chance - ECEMEA Internship Programme

    - by FelixWehmeyer
    My name is Mohammad Raad and I started in the One Year Training program with Oracle on March 1, 2012. I graduated in September 2011 and started searching for a job. Starting your career, as you all know, is hard, because some companies see you as a fresh graduate lacking experience, and no one is willing to invest in you. I used to check the recruitment websites daily to see if there were openings to apply to, but unfortunately no one wanted to hire a fresh graduate with zero years of experience. One day I saw Oracle's one-year internship program advertised, and that was what I needed. I applied but expected nothing to happen, because I was used to applying and getting no replies, but they called, and that was the start! I had my first interview over the phone and decided to go to Qatar and continue to search for a job. Two weeks after arriving in Qatar, Oracle called me for a second interview in Lebanon, so I booked a ticket on the same day because my interview was the next day. I had my interview and went back to Qatar. In January 2012, I heard from Oracle that I was accepted and they chose me for this program; it was a day I will not forget! Starting on 1st March 2012, I was full of energy, willing to do anything to gain experience and prove that I can do it. What really gives me a big push is my colleagues' motivation and support, especially from Youssef Halawi, my mentor, and Rami Mattar, because they believe in me and track my progress day by day. At Oracle, I meet customers, attend meetings, demos and presentations, partners' events and online trainings. Now I'm focusing on a specific product that I really liked and aim to master by the end of my internship. So, dear readers, wish me good luck! I know that I will get the experience that I want, because from Oracle, a leader in its industry, you have the chance to grab as much experience as you can handle, simply because there are no limits to excellence. Do you want to find out more about the open roles within Oracle? Follow us on https://campus.oracle.com.

    Read the article

  • Five hours of Task Flow Overview Recordings Available

    - by Frank Nimphius
    In addition to the ADF Controller task flow documentation in the Oracle Fusion Middleware Fusion Developer's Guide for Oracle Application Development Framework 11g Release 1 (http://download.oracle.com/docs/cd/E21764_01/web.1111/b31974/partpage3.htm#BABHIIAI), the ADF Insider website (http://www.oracle.com/technetwork/developer-tools/adf/learnmore/adfinsider-093342.html) hosts five online videos that explain how to build and work with ADF Controller task flows in Oracle ADF.

    ADF Task Flow - Overview (Part 1)
    This 90-minute recording introduces the concept of ADF unbounded and bounded task flows, as well as other ADF Controller features. The session starts with an overview of unbounded task flows, bounded task flows and the different activities that exist for developers to build complex application flows. Exception handling and the train navigation model are also covered in this first part of a two-part series. By example of developing a sample application, the recording guides viewers through building unbounded and bounded task flows. This session is continued in a second part.
    http://download.oracle.com/otn_hosted_doc/jdeveloper/11gdemos/taskflow-overview-p1/taskflow-overview-p1.html

    ADF Task Flow - Overview (Part 2)
    This 75-minute session continues where part 1 ended and completes the sample application that guides viewers through different aspects of unbounded and bounded task flow development. In this recording, memory scopes, save-for-later, task flows opening in dialogs and remote task flow calls are explained and demonstrated. If you are new to ADF task flows, it is recommended to watch part 1 of this series first, to be able to follow the explanation guided by the sample application.
    http://download.oracle.com/otn_hosted_doc/jdeveloper/11gdemos/taskflow-overview-p2/taskflow-overview-p2.html

    ADF Region Interaction - An Overview
    This session covers most of the options that exist for communicating between regions. It briefly discusses what it takes to build regions from bounded task flows before going into details using slides and samples. The following interactions are explained: contextual events, queue action in region, input parameters and PPR, drag and drop, shared Data Controls, parent action and region navigation listener.
    http://download.oracle.com/otn_hosted_doc/jdeveloper/11gdemos/adf-region-interaction/adf-region-interaction.html

    ADF Region Interaction - Contextual Events
    Contextual events are used as a communication channel between a parent view and its contained regions, as well as between regions. By example, this session explains how to set up contextual events, how to define producers and event listeners and how to define the payload message.
    http://download.oracle.com/otn_hosted_doc/jdeveloper/11gdemos/AdfInsiderContextualEvents/AdfInsiderContextualEvents.html

    Read the article

  • Three Master Data Management Deployment Tips

    - by david.butler(at)oracle.com
    MDM is all about data quality and data governance. We now know that improved data quality raises all operational and analytical boats. But it's not just about deploying data quality tools. It's about deploying data quality tools within and across the IT landscape, from a thousand points of data entry to a single version of the truth. Here are three tips for deploying MDM across your applications and enterprise.

    #1: Identify a tactical, high-value business problem where MDM can materially help.
    - Support a customer acquisition and retention program with a 'customer' master data solution.
    - Accelerate new products and services to market with a 'product' master data solution.
    - Reduce supplier exceptions or support spend control initiatives with a 'supplier' master data solution.
    - Support new store (branch, campus, restaurant, hospital, office, well head) location analysis with a 'site' master data solution.
    - Fix long-standing Chart of Accounts and Cost Center problems with a 'financial' master data solution.
    - Support M&A activity, application upgrades, an SOA initiative, a cloud computing program, or a new business intelligence deployment by implementing a mix of master data solutions.

    #2: Incrementally expand to a full information architecture.
    Quite often, the measurable return on investment from tactical MDM initiatives will fund future deployments. Over time, the MDM solution expands into its full architecture to cover the entire IT landscape. Operations and analytics are united, IT flexibility is restored, and sustainable competitive advantage is achieved.

    #3: Bring business into every MDM deployment.
    To be successful, MDM must work hand in hand with data governance. In fact, Oracle MDM incorporates data governance tools for business users. IT can ensure data quality, but only after the business side has defined what quality means. The business establishes the rules for governing the master data, and then IT enforces the rules via the MDM applications. Without this business/IT collaboration, MDM initiatives seldom achieve their full potential.

    It is not very often that a technology comes along that can measurably assist organizations across a wide variety of top IT initiatives. Reducing costs, increasing flexibility, getting more out of existing assets, and aligning business and IT are not easy tasks for any CIO. But with MDM, success is achievable. IT can regain its place as a center for innovation.

    For more information on this topic, take a look at my article Master Data Management Deployment Tips in the Opinion Section of Oracle's Profit Online magazine.

    Read the article

  • Ubuntu Sluggish and Graphics Problem after Nvidia Driver Update

    - by iam
    I started using Ubuntu (12.04) a few weeks ago and noticed that the interface is very slow and sluggish:

    - On Dash, I have to type the entire app name and wait a few seconds before it shows up in the search box, and a bit longer before it displays search results.
    - Opening new files or applications also takes quite long and feels awkward.
    - Dragging icons or moving app windows around is not very responsive either: I have to take extra care in moving the mouse, otherwise Ubuntu makes the wrong movement or does something incorrect instead, e.g. opening the window to full screen or moving the file to a different folder, which is frustrating.

    My PC is a few years old (1.7 GB RAM), so this could be a reason too, but when I check in System Monitor it's hardly ever consuming much memory. Plus, web surfing in Firefox is actually lightning fast (much faster than on Windows), so I suspect there might be something wrong with the graphics driver (mine is a GeForce 7050). I checked around System Settings and found an option to update the Nvidia driver. So I tried it and restarted, as instructed.

    Now, I got into a big problem upon restart... the login-screen window (where I have to type in the password) took several attempts to display and finally did not manage to (it would freeze for several seconds before there was any movement again). The background screen also kept reloading several times, and at some point the screen turned black with pixelated color strips running on the bottom third of the screen; after a long while the background screen would come up again. Eventually I managed to reach the desktop, but the launcher, top menu bar and app window borders would not appear.

    I searched around and found that many other people have this same problem after updating the Nvidia driver, and on some threads the suggestion is to run "killall -u $USER" on the command line (the only one of the various online suggestions I could try, as at that point I could not access the Terminal without the launcher; Ctrl-Alt-T doesn't work for me). So I did that, and was able to access the desktop correctly again by creating a new account. But I would still have the same problem when logging into my original account. So I finally tried upgrading to 12.10, and now I can access my original account with a fully functional desktop: the launcher, menu and window borders are all back.

    However, the problem with sluggishness still remains, and now I am scared of ever having to update the Nvidia driver again! I wonder if anyone knows why updating the Nvidia driver causes this problem, and whether there is a way I can update it safely in the future? I'm also still not sure how to solve the problem with the sluggishness, and not sure where else to look for a solution.
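    If the driver does have to be touched again, doing it from a text console makes the broken-desktop scenario above less painful. A rough sketch, assuming the stock nvidia-current packaging on 12.04/12.10:

        # Switch to a text console first (Ctrl+Alt+F1) so a broken X session can't interfere:
        sudo apt-get purge nvidia-current
        sudo apt-get install nvidia-current
        # Regenerate a clean X configuration (nvidia-xconfig ships with the driver;
        # skip this step if the tool is not present), then reboot:
        sudo nvidia-xconfig
        sudo reboot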

    Read the article

  • md/raid:md2: cannot start dirty degraded array, kernel panic

    - by nl-x
    After having made use of a remote power switch, my server did not come back online. When I went to the datacenter and rebooted the computer on the spot, I saw the server booting (the CentOS progress bar runs almost all the way to the end) and eventually giving the following messages:

        md/raid:md2: cannot start dirty degraded array.
        md/raid:md2: failed to run raid set.
        md: pers->run() failed ...
        md/raid:md2: cannot start dirty degraded array.
        md/raid:md2: failed to run raid set.
        md: pers->run() failed ...
        Kernel panic - not syncing: Attempted to kill init!
        Pid: 1, comm: init not tainted 2.6.32-279.1.1.el6.i686 #1
        Call Trace:
        [<c083bfbc>] ? panic+0x68/0x11c
        [<c045a501>] ? do_exit+0x741/0x750
        [<c045a54c>] ? do_group_exit+0x3c/0xa0
        [<c045a5c1>] ? sys_exit_group+0x11/0x20
        [<c083eba4>] ? syscall_call+0x7/0xb
        [<c083007b>] ? cmos_wake_setup+0x62/0x112

    The server runs CentOS and has software RAID, and I don't have backups of the RAID settings. The only backup I have is of /home and the database dumps. (Glad to at least have those, though.) Since the server is an old Dell PowerEdge 1750 with no CD-ROM drive, I have no way of booting the machine from a boot disk. I also remember from the past that the server wouldn't boot from a bootable USB disk either. So the only way I know how to boot the server is to go to the datacenter, pick up the server and take it to the office, screw open the server, attach a CD-ROM drive to an IDE slot on the motherboard, and then boot it. I am hoping you guys can help me avoid this.

    I have looked a bit through the boot options and found the following. When CentOS is about to boot, I can interrupt the boot countdown and choose between:

        CentOS (2.6.32-279.1.1.el63.i686)
        CentOS Linux (2.6.32-71.29.1.el6.i686)
        centos (2.6.32-71.el6.i686)

    I think the first configuration is the default one, because choosing it gets me to the above-mentioned kernel panic. The other ones end with something like "Sleeping forever". I can press 'e' to edit boot commands, press 'a' to modify kernel arguments and press 'c' for the GRUB command line. The command line gives a GRUB prompt, but I have no idea how to get the system to boot without (trying to) access the dirty partitions. What I want to do is of course:

    - boot the machine
    - check the hard drive for errors
    - mark the drive as clean
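    For readers hitting the same panic: the kernel deliberately refuses to auto-start an array that is both dirty and degraded, but it can be told to proceed. Two standard approaches, sketched with hypothetical device names (check /proc/mdstat or "mdadm --examine --scan" for the real ones):

        # Option 1: at the GRUB menu, press 'a' to modify the kernel arguments and
        # append this parameter so md starts dirty degraded arrays on its own:
        #     md-mod.start_dirty_degraded=1

        # Option 2: from a rescue shell, force-assemble and run the array:
        mdadm --assemble --force --run /dev/md2 /dev/sda3 /dev/sdb3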

    Read the article

  • Friday Fun: Destroy the Web

    - by Mysticgeek
    Another Friday has arrived and it's time to screw off on company time. Today we take a look at a unique game add-on for Firefox called Destroy the Web.

    Destroy the Web
    Once you install the add-on and restart Firefox, you'll see the Destroy the Web icon on the toolbar. Click on it to destroy whatever webpage you're on. The game starts up and gives you a 3-second countdown… Now move the target over different elements of the page to destroy them. You have 30 seconds to destroy the site contents. If you are angry at a particular company, or at the one you work for, this can give you some fun stress relief by destroying its website. After you've destroyed as much of the page as possible, you're given a score. You get different amounts of points for destroying certain elements on the page, so basically destroy as much as possible to get the most points. You can submit your score and check out some of the scoreboard leaders as well. There are some cool sound effects and arcade-sounding background music, so make sure to turn the volume down while playing. Or you can go into the add-on options and disable it.

    If you want a unique way to let off some steam before the weekend starts, this is a fun way of doing it.

    Install Destroy the Web for Firefox

    Read the article

  • Remove Clutter from the Opera Speed Dial Page

    - by Asian Angel
    Do you want to clean up the Speed Dial page in Opera so that only the thumbnails are visible? Today we show you a couple of tweaks that will make it happen.

    The Speed Dial Page
    The search bar and text at the bottom take up room and add clutter to the look and feel of Opera's Speed Dial page.

    Changing the Settings
    Two small tweaks to the config settings will clean it all up. To get started, type opera:config into the address bar and press Enter. Type "speed" into the quick find bar and look for the Speed Dial State entry. Change the 1 to 2 and click Save. You will see a message concerning the change; click OK. Next, type "search" into the quick find bar and look for the Speed Dial Search Type entry. Remove all of the text in the blank and click Save. Once again you will see a message about the latest change you have made. At this point you may need to restart Opera for both changes to take full effect. There will be a noticeable difference in how the Speed Dial page looks afterwards; it is much cleaner without the search bar and text field. You will also still be able to access the right-click context menu just like before.

    Conclusion
    If you have been looking for a cleaner and less cluttered Speed Dial page in Opera, then these two little hacks will get the job done!

    Read the article

  • Disaster, or Migration?

    - by Rob Farley
    This post is in two parts – technical and personal. And I should point out that it’s prompted in part by this month’s T-SQL Tuesday, hosted by Allen Kinsel.

    First, the technical: I’ve had a few conversations with people recently about migration – moving a SQL Server database from one box to another (sometimes, but not primarily, involving an upgrade). One question that tends to come up is that of downtime. Obviously there will be some period of time between the old server going offline and the new one coming online. The way that most people seem to think of migration is this:

    1. Build a new server.
    2. Stop people from using the old server.
    3. Take a backup of the old server.
    4. Restore it on the new server.
    5. Reconfigure the client applications (or alternatively, configure the new server to use the same address as the old).
    6. Bring the new server online.

    There are other things involved, such as testing, of course. But this is essentially the process that people tell me they’re planning to follow. The bit that I want to look at today (as you’ve probably guessed from my title) is the “backup and restore” section.

    If a SQL database is using the Simple Recovery Model, then the only restore option is the last database backup. This backup could be full or differential. The transaction log never gets backed up in the Simple Recovery Model; instead, it truncates regularly to stay small. A database using the Full Recovery Model (or Bulk-Logged) won’t truncate its log – the log must be backed up regularly. This provides the benefit of having many more options available for restores. It’s a requirement for most systems of High Availability, because if you’re keeping a spare box up and running, ready to take over, then you have to be interested in the logs being generated on the current box, rather than truncating them all the time.

    A High Availability system such as Mirroring, Replication or Log Shipping will initialise the spare machine by restoring a full database backup (and maybe a differential backup if available), and then any subsequent log backups. Once the secondary copy is close, transactions can be applied to keep the two in sync. The main aspect of any High Availability system is to have a redundant system that is ready to take over, so the similarity for migration should be obvious. If you need to move a database from one box to another, then introducing a High Availability mechanism can help. By turning on the Full Recovery Model and then taking a full backup (so that the now-interesting logs have some context), logs start being kept, and are therefore available for getting the new box ready (even if it’s an upgraded version). When the migration is ready to occur, a failover can be done, letting the new server take over the responsibility of the old, just as if a disaster had happened. Except that this is a planned failover, not a disaster at all.

    There’s a fine line between a disaster and a migration. Failovers can be useful in patching, upgrading, maintenance, and more. Hopefully, even an unexpected disaster can be seen as just another failover, and there can be an opportunity there – perhaps to get some work done on the principal server to increase robustness. And if I’ve just set up a High Availability system for even the simplest of databases, that’s not necessarily a bad thing. :)
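
    To make the backup/restore mechanics concrete, here is a minimal T-SQL sketch of the log-shipping-style migration described above. This is an editor's illustration rather than Rob's script: the database name MyDB and the \\share\... paths are placeholders, and a real migration would add testing, differential backups and error handling.

        -- On the old server: switch to full recovery and anchor the log chain
        ALTER DATABASE MyDB SET RECOVERY FULL;
        BACKUP DATABASE MyDB TO DISK = N'\\share\MyDB_full.bak';

        -- On the new server: restore, leaving the database able to accept more logs
        RESTORE DATABASE MyDB FROM DISK = N'\\share\MyDB_full.bak' WITH NORECOVERY;

        -- Repeat as often as needed while users keep working on the old server
        BACKUP LOG MyDB TO DISK = N'\\share\MyDB_log1.trn';                      -- old server
        RESTORE LOG MyDB FROM DISK = N'\\share\MyDB_log1.trn' WITH NORECOVERY;   -- new server

        -- Cutover: stop activity, take the tail of the log, apply it, recover
        BACKUP LOG MyDB TO DISK = N'\\share\MyDB_tail.trn' WITH NORECOVERY;      -- old server
        RESTORE LOG MyDB FROM DISK = N'\\share\MyDB_tail.trn' WITH NORECOVERY;   -- new server
        RESTORE DATABASE MyDB WITH RECOVERY;                                     -- new server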
    So now the personal: it’s been an interesting time recently... June has been somewhat odd. A court case with which I was involved got resolved (through mediation). I can’t go into details, but my lawyers tell me that I’m allowed to say how I feel about it. The answer is ‘lousy’. I don’t regret pursuing it as long as I did – but in the end I had to make a decision regarding the commerciality of letting it continue, and I’m going to look forward to the days when the kind of money I spent on my lawyers is small change. Mind you, if I had a similar situation with an employer, I’d do the same again, but that doesn’t really stop me feeling frustrated about it.

    The following day I had to fly to country Victoria to see my grandmother, who wasn’t expected to last the weekend. She’s still around a week later as I write this, but her 92-year-old body has basically given up on her. She’s been a Christian all her life, and is looking forward to eternity. We’ll all miss her though, and it’s hard to see my family grieving.

    Then on Tuesday, I was driving back to the airport with my family to come home, when something really bizarre happened. We were travelling down the freeway, and had just pulled out to go past a truck (farm-truck sized, not a semi-trailer), when a car-sized mass of metal fell off it. It was something like an industrial air-conditioner, but from where I was sitting, it was just a mass of spinning metal, like something out of a movie (one friend described it as “holidays by Michael Bay”). Somehow, and I really don’t know how, the part of it nearest us bounced high enough to clear the car, and there wasn’t even a scratch. We pulled over to check, and I was just thanking God that we’d changed lanes when we had, and that we remained unharmed. I had all kinds of thoughts about what could’ve happened if we’d had something that size land on the windscreen...

    All this has drilled home that while I feel that I haven’t provided as well for the family as I could’ve done (like by pursuing an expensive legal case), I shouldn’t even consider that I have proper control over things. I get to live life, and make decisions based on what I feel is right at the time. But I’m not going to get everything right, and there will be things that feel like disasters, some of which could’ve been in my control and some of which are very much beyond my control. The case feels like something I could’ve pursued differently, a disaster that could’ve been avoided in some way. Gran dying is lousy of course. An accident on the freeway would have been awful. I need to recognise that the worst disasters are ones that I can’t affect, and that I need to look at things in context – perhaps seeing everything that happens as a migration instead.

    Life is never the same from one day to the next. Every event has a before and an after – sometimes it’s clearly positive, sometimes it’s not. I remember good events in my life (such as my wedding), and bad (such as the loss of my father when I was ten, or the back injury I had eight years ago). I’m not suggesting that I know how to view everything from the “God works all things for good” perspective, but I am trying to look at last week as a migration of sorts. Those things are behind me now, and the future is in God’s hands. Hopefully I’ve learned things, and will be able to live accordingly. I’ve come through this time now, and even though I’ll miss Gran, I’ll see her again one day, and the future is bright.

    Read the article

  • C++ Accelerated Massive Parallelism

    - by Daniel Moth
    At AMD's Fusion conference Herb Sutter announced in his keynote session a technology that our team has been working on that we call C++ Accelerated Massive Parallelism (C++ AMP), and during the keynote I showed a brief demo of an app built with our technology. After the keynote, I go deeper into the technology in my breakout session. If you read both those abstracts, you'll get some information about what C++ AMP is, without being too explicit, since we published the abstracts before the technology was announced. You can find the official online announcement at Soma's blog post.

    Here, I just wanted to capture the key points about C++ AMP that can serve as an introduction and an FAQ. So, in no particular order, C++ AMP:

    - lowers the barrier to entry for heterogeneous hardware programmability and brings performance to the mainstream, without sacrificing developer productivity or solution portability.
    - is designed not only to help you address today's massively parallel hardware (i.e. GPUs and APUs), but also future-proofs your code investments with a forward-looking design.
    - is part of Visual C++. You don't need to use a different compiler or learn different syntax.
    - is modern C++. Not C or some other derivative.
    - is integrated and supported fully in Visual Studio vNext. Editing, building, debugging, profiling and all the other goodness of Visual Studio work well with C++ AMP.
    - provides an STL-like library as part of the existing concurrency namespace, delivered in the new amp.h header file.
    - makes it extremely easy to work with large multi-dimensional data on heterogeneous hardware, in a manner that exposes parallelization.
    - introduces only one core C++ language extension.
    - builds on DirectX (and DirectCompute in particular), which offers a great hardware abstraction layer that is ubiquitous and reliable. The architecture is such that this point can be thought of as an implementation detail that does not surface to the API layer.

    Stay tuned on my blog for more over the coming months, where I will switch from just talking about C++ AMP to showing you how to use the API with code examples... Comments about this post welcome at the original blog.
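
    To give a flavour of the API, here is a minimal C++ AMP sketch (an editor's illustration, not code from the session) that squares a vector on an accelerator. It shows the amp.h library types and the restrict(amp) clause, which is the one core language extension mentioned above.

        #include <amp.h>
        #include <vector>
        using namespace concurrency;

        void square_all(std::vector<float>& v)
        {
            // Wrap the host data so the accelerator can see it
            array_view<float, 1> av(static_cast<int>(v.size()), v);

            // The lambda runs on the accelerator; restrict(amp) is the
            // single core language extension that C++ AMP introduces
            parallel_for_each(av.extent, [=](index<1> idx) restrict(amp)
            {
                av[idx] = av[idx] * av[idx];
            });

            av.synchronize(); // copy results back to the host vector
        }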

    Read the article

  • EPM Planning (Hyperion) V11.1.2 Implementation Hands-On Boot-camp

    - by Mike.Hallett(at)Oracle-BI&EPM
    5-Day Training for Partners: 29th October - 2nd November 2012, London (UK): REGISTER Here

    This 5-day workshop, FREE for partners, is designed to provide implementation instruction on Oracle Hyperion EPM Planning. The boot-camp is intended for prospective implementers of the Planning and Budgeting functionality of Oracle EPM, or for implementers who are already familiar with the basics of EPM Planning and looking to strengthen their base of knowledge in the product. The class begins with an overview of Essbase, the foundation of Hyperion Planning. It provides a general overview of Planning and Planning terms, the architecture of all the Planning components, and how they are commonly used. The course goes over all the steps to create an application from scratch. This involves some preparation work outside of Planning and leads to developing the application in both the Planning Windows and Web clients. Participants will modify existing dimensions and build out the hierarchies using the Web client.

    Topics Covered
    The boot-camp shows developers how to build out dimensions using Classic Planning and by using EPMA. It covers the mechanics of, and strategies for, automating the build process, such as interface tables. It reviews data loads using Load Rules to load the Planning database. The course focuses on tasks that end users must perform during the planning cycle. It walks students through creating and modifying forms, working with forms to enter data, adding annotations, and the rest of the form features, such as running business rules and managing task lists. It covers how to use the forms in the Smart View client and finishes up the end-user perspective by going through Workflow Management and the process of submitting a plan for review. The final section of the course covers security and other administration topics such as automation and deployment.

    Prerequisites
    Ideal participants are Oracle partners (SIs and resellers) with a background in business information systems and a clientele of customers with ongoing or prospective EPM initiatives. Alternatively, partners with the background described above and an interest in evolving their practice to a similar profile are suitable participants. Further online OPN guided learning path information and webinars are available at: Oracle Hyperion Planning 11 Essentials. Please note that attendees are required to bring a laptop. View here laptop requirements and detailed agenda.

    REGISTER Here: acceptance is subject to availability and your place will be confirmed within two weeks (and for help see the Partner Registration Guide).

    Training Location: Oracle Corporation UK Ltd, Columbus Room, Customer Visit Center, 1 South Place, London EC2M 2RB

    Training Dates: 29th October - 2nd November, 9:30 am – 5:00 pm BST

    For more information please contact [email protected].

    Read the article

  • SCHA API for resource group failover / switchover history

    - by krishna.k.murthy
    The Oracle Solaris Cluster framework keeps an internal log of cluster events, including switchover and failover of resource groups. These logs can be useful to Oracle support engineers for diagnosing cluster behavior. However, until now, there was no external interface to access the event history. Oracle Solaris Cluster 4.2 provides a new API option for viewing the recent history of resource group switchovers in a program-parsable format.

    Oracle Solaris Cluster 4.2 provides a new option tag argument, RG_FAILOVER_LOG, for the existing API command scha_cluster_get, which can be used to list recent failover/switchover events for resource groups. The command usage is as shown below:

        # scha_cluster_get -O RG_FAILOVER_LOG number_of_days

    number_of_days: the number of days to be considered for scanning the historical logs.

    The command returns a list of events in the following format, with each field separated by a semicolon [;]:

        resource_group_name;source_nodes;target_nodes;time_stamp

    source_nodes: node names from which the resource group failed over or was switched manually.
    target_nodes: node names to which the resource group failed over or was switched manually.

    There is a corresponding enhancement in the C API function scha_cluster_get(), which uses the SCHA_RG_FAILOVER_LOG query tag (a rough sketch follows at the end of this post).

    In the example below, geo-infrastructure (failover resource group), geo-clusterstate (scalable resource group), oracle-rg (failover resource group), asm-dg-rg (scalable resource group) and asm-inst-rg (scalable resource group) are part of a Geographic Edition setup.

        # /usr/cluster/bin/scha_cluster_get -O RG_FAILOVER_LOG 3
        geo-infrastructure;schost1c;;Mon Jul 21 15:51:51 2014
        geo-clusterstate;schost2c,schost1c;schost2c;Mon Jul 21 15:52:26 2014
        oracle-rg;schost1c;;Mon Jul 21 15:54:31 2014
        asm-dg-rg;schost2c,schost1c;schost2c;Mon Jul 21 15:54:58 2014
        asm-inst-rg;schost2c,schost1c;schost2c;Mon Jul 21 15:56:11 2014
        oracle-rg;;schost2c;Mon Jul 21 15:58:51 2014
        geo-infrastructure;;schost2c;Mon Jul 21 15:59:19 2014
        geo-clusterstate;schost2c;schost2c,schost1c;Mon Jul 21 16:01:51 2014
        asm-inst-rg;schost2c;schost2c,schost1c;Mon Jul 21 16:01:10 2014
        asm-dg-rg;schost2c;schost2c,schost1c;Mon Jul 21 16:02:10 2014
        oracle-rg;schost2c;;Tue Jul 22 16:58:02 2014
        oracle-rg;;schost1c;Tue Jul 22 16:59:05 2014
        oracle-rg;schost1c;schost1c;Tue Jul 22 17:05:33 2014

    Note that in the output some of the entries have an empty string in the source_nodes field. Such entries correspond to events in which the resource group was switched online manually or brought online during a cluster boot-up. Similarly, an empty target_nodes list indicates an event in which the resource group went offline.

    - Arpit Gupta, Harish Mallya
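
    For the C API mentioned above, the sketch below shows how the new query tag might be called. Be warned that this is a guess at the calling convention: the argument order (days of history in, string array out) and the scha_str_array_t output type are assumptions based on the CLI form above, so check the scha_cluster_get(3HA) man page before relying on it.

        #include <scha.h>
        #include <stdio.h>

        int main(void)
        {
            scha_cluster_t handle;
            scha_str_array_t *log = NULL;   /* assumed output type */

            if (scha_cluster_open(&handle) != SCHA_ERR_NOERR)
                return 1;

            /* Assumed argument order: days of history in, string array out */
            if (scha_cluster_get(handle, SCHA_RG_FAILOVER_LOG, 3, &log) == SCHA_ERR_NOERR) {
                for (uint_t i = 0; i < log->array_cnt; i++)
                    printf("%s\n", log->str_array[i]);   /* rg;source;target;time */
            }

            (void) scha_cluster_close(handle);
            return 0;
        }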

    Read the article

  • Appointment & Booking Calendar for 2,500 Members

    - by D. K. Mason
    For this job I need a booking or appointment calendar for the WP website, so that each of the 2,500+ members can manage his/her own appointments calendar. Members will set their schedules and available hours. Buyers can select one member, book when they want to visit the member, and reserve time online.

    Let's say our WP membership site has thousands of members, each offering one service, across 25 categories. On each member's profile page is his appointment or booking calendar, along with his personally created and uploaded (into S3) profile video. Website visitors should be able to easily book hours and days as desired. For now all members have a free membership. We earn money by bringing customers to the registered members and collecting one dollar for our service. Therefore, we need an affordable script, and one that handles our members' needs. We already have a paid copy of Jrox.

    What are those needs? Well, please register for a test account in the ENTERTAINMENT category at http://asianhighway26.com/?page_id=140. Next, pretend you are a buyer: start on the index page and select your ENTERTAINMENT category. Click on image #3 and you will receive a list of others in your traveling area. When you click VIEW DETAILS, that is where a potential customer will see the calendar and whether you are available on the dates and times of interest. We know that YOU the member will want to list your not-available dates and hours. There may be other features desired, but this is the most important, we believe.

    Our WP administrator (me) can install the main script, but each member must have some login or dashboard to manage his/her own calendar and work hours. Can you create or modify any appointment calendar software to do this? Can the jam.jrox.com software we own do this? As you can see here, we offer to pay for ideas you present that we use: asianhighway26.com/?page_id=126

    D.K. Mason
    Chief Development Officer, Asian Highway Network, Tourism Division, Asia-Pacific Region (http://www.forums.doctormason.us/b-ah/)

    Read the article

  • The Numbers of Customer Experience

    - by Christie Flanagan
    This week, we’ll be continuing our conversations about Customer Experience (CX) on the Oracle WebCenter blog. While we all know that customer experience is critically important for acquiring new customers and engendering long-term brand loyalty, I thought we could kick this week off by taking a look at the numbers of customer experience. I’m sure you’ll agree that nothing quite puts things into perspective like numbers and figures.

    A whopping 86% of consumers say that they are willing to pay more for a better customer experience. But many companies are failing to step up to the challenge, and when companies fail to deliver on customer experience expectations, they leave money on the table.

    A huge percentage of customers, 89%, begin doing business with a competitor following a poor customer experience. Breaking up isn’t hard to do, and today’s empowered customers have no qualms about taking their business elsewhere when their expectations for customer experience are not met.

    Over a quarter of consumers, 26%, posted a negative comment on a social networking site like Facebook or Twitter following a poor customer experience. Today, individual customer service failures have the ability to easily snowball. An unsatisfied customer can easily share their rancor with their entire social network and chip away at your brand’s reputation.

    A large number of consumers, 79%, who shared complaints about poor customer experience online had their complaints ignored. Companies ignore customer complaints at their own peril, and unsatisfied customers, when handled effectively, have the potential to become advocates for your brand. Of the 21% of consumers who did get responses to complaints, more than half had positive reactions to the same company about which they were previously complaining.

    Half of consumers will give a brand only a week to respond to a question before they stop doing business with them. The clock is ticking when customers have questions about your brand, and a week is an eternity in the realm of customer experience.

    The source for these stats is the 2011 Customer Experience Impact (CEI) Report, which explores the relationship between consumers and brands. The report is based on a survey commissioned by RightNow (acquired by Oracle in 2012) and conducted by Harris Interactive. If you’re interested in seeing more facts and figures about customer experience, download the full report.

    Read the article
