Search Results

Search found 14028 results on 562 pages for 'online storage'.

  • WebCenter Customer Spotlight: Marvel

    - by me
    Author: Peter Reiser - Social Business Evangelist, Oracle WebCenter

    Solution Summary
    Marvel Entertainment, LLC (Marvel) is one of the world's most prominent character-based entertainment companies, built on a proven library of over 8,000 characters featured in a variety of media for over seventy years. The customer wanted to optimize its brand licensing process, so Marvel worked with Oracle WebCenter partner Fishbowl Solutions and implemented a centralized Content Hub based on Oracle WebCenter Content. The 100% web-based, secure Intranet/Partner Extranet solution now manages the entire life cycle of the brand licensing process. Marvel and its brand licensees now have complete visibility into brand license operations, including the history of approval requests and related content.

    Company Overview
    Marvel Entertainment, LLC (Marvel), a wholly-owned subsidiary of The Walt Disney Company, is one of the world's most prominent character-based entertainment companies, built on a proven library of over 8,000 characters featured in a variety of media for over seventy years. Marvel utilizes its character franchises in entertainment, licensing and publishing. Sample characters:
    - Spider-Man
    - Iron Man
    - Captain America
    - X-Men
    - Thor
    - Avengers
    - And a host of others

    Business Challenges
    Marvel wanted to optimize the brand licensing process for its characters and had the following business requirements:
    - Facilitating content worldwide
    - A scalable and flexible infrastructure to manage multiple content types and huge file sizes
    - Optimizing the licensing process workflow through automatic notifications, tracking reviews, issuing approvals, etc.

    Solution Deployed
    Marvel worked with Oracle WebCenter partner Fishbowl Solutions and implemented a centralized Content Hub based on Oracle WebCenter Content. The 100% web-based, secure Intranet/Partner Extranet solution now manages the entire life cycle of the brand licensing process. Internal users can now manage all digital assets related to a character through proper categorization of all items, workflow-based review and approval of branding styles, and a powerful search and retrieval service. Licensees of Marvel brands can now develop and submit concepts and prototypes online, which are reviewed and approved using a collaborative process.

    Business Result
    Marvel and its brand licensees now have complete visibility into brand license operations, including the history of approval requests and related content. The character-brand-related content is now in the right place, at the right time, at the user's fingertips, with highly improved quality.

    Additional Information
    - Marvel Open World Presentation
    - Oracle WebCenter Content

    Read the article

  • Oracle MAA Part 1: When One Size Does Not Fit All

    - by JoeMeeks
    The good news is that Oracle Maximum Availability Architecture (MAA) best practices combined with Oracle Database 12c (see video) introduce first-in-the-industry database capabilities that truly make unplanned outages and planned maintenance transparent to users. The trouble with such good news is that Oracle’s enthusiasm in evangelizing its latest innovations may leave some to wonder if we’ve lost sight of the fact that not all database applications are created equal. After all, many databases don’t have the business requirements for high availability and data protection that require all of Oracle’s ‘stuff’. For many real-world applications, a controlled amount of downtime and/or data loss is OK if it saves money and effort.

    Well, not to worry. Oracle knows that enterprises need solutions that address the full continuum of requirements for data protection and availability. Oracle MAA accomplishes this by defining four HA service level tiers: BRONZE, SILVER, GOLD and PLATINUM. The figure below shows the progression in service levels provided by each tier. Each tier uses a different MAA reference architecture to deploy the optimal set of Oracle HA capabilities that reliably achieve a given service level agreement (SLA) at the lowest cost. Each tier includes all of the capabilities of the previous tier and builds upon the architecture to handle an expanded fault domain.

    Bronze is appropriate for databases where simple restart or restore from backup is ‘HA enough’. Bronze is based upon a single-instance Oracle Database with MAA best practices that use the many capabilities for data protection and HA included with every Oracle Enterprise Edition license. Oracle-optimized backups using Oracle Recovery Manager (RMAN) provide data protection and are used to restore availability should an outage prevent the database from being able to restart.

    Silver provides an additional level of HA for databases that require minimal or zero downtime in the event of database instance or server failure, as well as many types of planned maintenance. Silver adds clustering technology - either Oracle RAC or RAC One Node. RMAN provides database-optimized backups to protect data and restore availability should an outage prevent the cluster from being able to restart.

    Gold raises the game substantially for business-critical applications that can’t accept vulnerability to single points of failure. Gold adds database-aware replication technologies, Active Data Guard and Oracle GoldenGate, which synchronize one or more replicas of the production database to provide real-time data protection and availability. Database-aware replication greatly increases HA and data protection beyond what is possible with storage replication technologies. It also reduces cost while improving return on investment by actively utilizing all replicas at all times.

    Platinum introduces all of the sexy new Oracle Database 12c capabilities that Oracle staff will gush over with great enthusiasm. These capabilities include Application Continuity for reliable replay of in-flight transactions that masks outages from users; Active Data Guard Far Sync for zero data loss protection at any distance; new Oracle GoldenGate enhancements for zero downtime upgrades and migrations; and Global Data Services for automated service management and workload balancing in replicated database environments. Each of these technologies requires additional effort to implement, but they deliver substantial value for your most critical applications where downtime and data loss are not an option.

    The MAA reference architectures are inherently designed to address conflicting realities. On one hand, not every application has the same objectives for availability and data protection – the ‘One Size Does Not Fit All’ title of this blog post. On the other hand, standard infrastructure is an operational requirement and a business necessity in order to reduce complexity and cost. MAA reference architectures address both realities by providing a standard infrastructure optimized for Oracle Database that enables you to dial in the level of HA appropriate for different service level requirements. This makes it simple to move a database from one HA tier to the next should business requirements change, or from one hardware platform to another – whether it’s your favorite non-Oracle vendor or an Oracle Engineered System.

    Please stay tuned for additional blog posts in this series that dive into the details of each MAA reference architecture. Meanwhile, more information on Oracle HA solutions and the Maximum Availability Architecture can be found at:
    - Oracle Maximum Availability Architecture - Webcast
    - Maximize Availability with Oracle Database 12c - Technical White Paper

    Read the article

  • TalkTalk Business Succeeds with Engaging Conversations on an Eloqua platform

    - by Richard Lefebvre
    “Everybody from online, CRM and data, channel management, communications, and content had to pull together to deliver one campaign, and Eloqua was the glue that held it all together,” says Paul Higgins, Marketing Director, TalkTalk Business (UK, telco) in this 2:23 video.

    Challenges
    - Nurturing multiple sales channels with very diverse customers.
    - Generating leads qualified to a very high standard.
    - Engaging a fatigued and apathetic target audience.
    - Positioning the brand as an industry thought leader.

    Solutions
    - “What’s Your Business Grade?” campaign
    - Eloqua automated email nurture strategy
    - Eloqua Partner Network – Stein IAS

    Results
    - ROI of 20:1.
    - 40% uplift in sales opportunities at the smaller end, with a 25% reduction in costs.
    - 20–25% increase in sales-qualified leads for the mid-market, corporate, and enterprise customer sets.
    - Open-rate highs of 61.84% and click-to-open highs of 15.97%.

    Read the article

  • Good Customer Service Example

    - by MightyZot
    Here’s another good customer service example for you! My wife purchased a Galaxy last week and she loves the phone. She asked me to add it to our AT&T MicroCell last night. I purchased the AT&T MicroCell a couple of years ago, because cell signal out where I live sucks!

    Since MicroCells are managed on the AT&T web site, I went to the site and tried to sign in. Naturally, having not managed that MicroCell in a couple of years…and much to my chagrin…I discovered that I didn’t know my password OR my user ID. So, I decided to call and see if I could get my account reset that late in the day (we’re talking last night, so it was well after 7 p.m.). I called the technical support line, touched the appropriate numbers to navigate to MicroCell support, turned on my speakerphone, and prepared for the long wait.

    After about 45 seconds I was delighted to hear “Jeffrey” break in and ask what he could help me with. I explained that I had not managed my MicroCell for some time and had forgotten the user name and password. “No problem,” he replied, and he asked me for the line I used to register the MicroCell. After confirming the last four digits of my IMEI number, he asked me for my wife’s number. I gave him my wife’s number and he said, “I’ve taken care of it, Mr. Pope. Just have her reboot her phone and you should see your MicroCell.” We rebooted her phone, it connected to the MicroCell, and voila, she was online! “Is there anything else I can help you with while I’ve got you on the line?” he said. “Nope,” I replied. “Ok, have a great night.”

    What made this a great customer service experience for me was that “Jeffrey” didn’t stop at giving me my user account and password, which I would probably forget anyway after setting up my wife’s new phone. Instead, he solved the real problem for me – adding my wife’s new phone to my MicroCell. Great job, Jeffrey!

    Read the article

  • My collection of favourite TFS utilities

    - by Aaron Kowall
    So, you’re in charge of your company or team’s Team Foundation Server. Wish it was easier to manage, administer, extend? Well, here are a few utilities that I highly recommend looking at.

    I’ve recently had need to rebuild my laptop and upgrade my local TFS environment to TFS 2012 Update 1. This gave me cause to enumerate some of the utilities I like to have on hand.

    One of the reasons I love to use TFS on projects is that it’s basically a complete ALM toolkit. Everything from Task Management, Version Control, Build Management, Test Management, Metrics and Reporting is there ‘in the box’. However, no matter how complete a product set is, there are always ways to make it better. Here is a list of utilities and libraries that are pretty generally useful. This is not intended to be an exhaustive list of TFS extensions, but rather a set that I recommend you look at. There are many more out there that may be applicable in one scenario or another. This set of tools should work with TFS 2012 or 2010 if you grab the right version. Most of these tools (and more) are available from the Visual Studio Gallery or CodePlex.

    General
    - TFS Power Tools – This is ‘the’ collection of utilities and extensions delivered by the Product Group. Highly recommended from here are the Best Practice Analyzer, for ensuring your TFS implementation is healthy, and the Team Foundation Server Backups, to ensure your TFS databases are backed up correctly.
    - TFS Administrators Toolkit – Helps make updates to work item types and reports across many team projects. Also provides visibility of disk usage by finding large files in version control or test attachments, to assist in managing storage utilization.

    Version Control
    - Git-TF – A set of cross-platform, command-line tools that facilitate sharing of changes between TFS and Git. These tools allow a developer to use a local Git repository and configure it to share changes with a TFS server. Great for all Git lovers who must integrate into a TFS repository.

    Testing
    - TFS 2012 Tester Power Tool – A utility for bulk copying test cases, which assists in an approach for managing test cases across multiple releases. A little plug that this utility was written and maintained by Anna Russo of Imaginet, where I also work.
    - Test Scribe – A documentation power tool designed to construct documents directly from TFS test plan and test run artifacts for the purpose of discussion, reporting, etc.

    Reporting
    - Community TFS Report Extensions – A single repository of SQL Server Reporting Services reports for Team Foundation 2010 (and above). Check out the Test Plan Status report by Imaginet’s Steve St. Jean. Very valuable for your test managers.

    Builds
    - TFS Build Manager – A great utility if you are the build manager over a complex build environment with many TFS build definitions.
    - Community TFS Build Extensions – Contains many custom build activities. Current release binaries are for TFS 2010, but many of the activities can be recompiled for use with TFS 2012.

    While compiling this list, I was surprised by the number of TFS utilities and extensions I no longer use/need in TFS 2012, because of the great work by the TFS team addressing many gaps since the 2010 release. Are there any utilities you depend on that I’ve missed? I’d love to hear about them in the comments!

    Read the article

  • WebLogic Server 11gR1 Interactive Quick Reference

    - by JuergenKress
    The WebLogic Server 11gR1 Administration interactive quick reference is a multimedia tool for various terms and concepts used in WebLogic Server architecture. This tool is available to administrators for online or offline use. It is built as a multimedia web page which provides descriptions of WebLogic Server architectural components and references to relevant documentation.

    This tool offers valuable reference information for any complex concept or product in an intuitive and useful manner. Each interactive type presents data that may be available in the documentation (in the case of Oracle products), but presents it in a way that is more intuitive and useful to a user of Oracle products, because it displays data the way it is used in a real-world, best-practice scenario. For example, the architectural diagram interactive type provides an image of an architectural diagram that is typically larger than a single slide or paper. The image is scrollable and provides zoom capabilities to easily and clearly view any part of the image. The image itself contains a hotspot map that you can click to get more information about a feature, including reference links to the documentation in question. Linking the visual image of the component, and where it fits in the overall architecture of the product or technology in use, to the technical explanation and how-to materials related to that component is something not offered by the documentation. In a future release, the poster will also enable you to drill down even further into the individual subsystems in nested diagrams to look at the details of each subsystem.

    In short, the interactive posters are good at showing you the big picture, then quickly and easily getting you to the detailed information you need. In an instant, you can see where a technical component fits into an overall architecture, and zero in on the nitty-gritty details that show you how to do it yourself.

    Note: This is an initial release, with more features in development. Currently known limitations:
    - Only Firefox 8.0 and higher is known to work with this product.
    - This product may work with Chrome and Safari browsers, but is known to have issues in Internet Explorer at this time.
    - Mobile devices, such as iPads and iPhones, are partially supported.

    WebLogic Partner Community: For regular information, become a member of the WebLogic Partner Community by visiting http://www.oracle.com/partners/goto/wls-emea (OPN account required). If you need support with your account, please contact the Oracle Partner Business Center.

    Read the article

  • Learn programming backwards, or "so I failed the FizzBuzz test. Now what?"

    - by moraleida
    A Little Background
    I'm 28 today, and I've never had any formal training in software development, but I do have two higher education degrees equivalent to a B.A. in Public Relations and an Executive MBA focused on Project Management. I worked in those fields for about 6 years total, and then, 2.5 years ago, I quit/lost my job and decided to shift directions. After a month thinking things through, I decided to start freelancing, developing small websites in WordPress. I self-learned my way into it, and today I can say I run a humble but successful career developing themes and plugins from scratch for my clients - mostly agencies outsourcing some of their dev work for medium/large websites. But sometimes I just feel that not having studied enough math, or not having a formal understanding of things, really holds me back when I have to compete or work with more experienced developers. I'm constantly looking for ways to learn more, but I seem to lack the basics.

    Unfortunately, spending 4 more years in Computer Science is not an option right now, so I'm trying to learn all I can from books and online resources. This method is never going to have NASA employ me, but I really don't care right now. My goal is to first pass the bar and be able to call myself a real programmer. I'm currently spending my spare time studying Java For Programmers (to get a hold on a language everyone says is difficult/demanding), reading excerpts of Code Complete (to get hold of best practices) and also Code: The Hidden Language of Computer Hardware and Software (to grasp the inner workings of computers).

    TL;DR
    So, my current situation is this: I'm basically capable of writing any complete system in PHP (with the help of Google and a few books), integrating Ajax, SQL and whatnot, maybe a little slower than an experienced dev would expect due to all the research involved. But I was stranded yesterday trying to figure out (not Google) a solution for the FizzBuzz test, because I didn't have the modulus operator - as in if ($n1 % $n2 == 0) - memorized. What would you suggest as a good way to solve this dilemma? What subjects/books should I study that would get me solving problems faster and maybe more "in a programmer's way"?

    EDIT - It seems there was some confusion about what I did not know when solving FizzBuzz. Maybe I didn't express myself right: I knew the steps needed to solve the problem. What I didn't memorize was the modulus operator. The problem was in transposing basic math to the program, not in knowing basic math. I took the test for fun, after reading about it on Coding Horror. I just decided it was a good base-comparison line between me and formally trained devs. I used this as an example of how not having dealt with math in a computer environment before makes me lose time looking up basic things like modulus operators in order to solve simple problems.
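    For reference, a minimal FizzBuzz sketch, written here in Python rather than the question's PHP; the modulus logic is identical in both languages:

        # Print 1..100; multiples of 3 become "Fizz", multiples of 5 become
        # "Buzz", and multiples of both become "FizzBuzz".
        for n in range(1, 101):
            if n % 15 == 0:      # divisible by both 3 and 5
                print("FizzBuzz")
            elif n % 3 == 0:
                print("Fizz")
            elif n % 5 == 0:
                print("Buzz")
            else:
                print(n)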

    Read the article

  • Good DBAs Do Baselines

    - by Louis Davidson
    One morning, you wake up and feel funny. You can’t quite put your finger on it, but something isn’t quite right. What now? Unless you happen to be a hypochondriac, you likely drag yourself out of bed, get on with the day and gather more “evidence”. You check your symptoms over the next few days; do you feel the same, better, worse? If better, then great, it was some temporal issue, perhaps caused by an allergic reaction to some suspiciously spicy chicken. If the same or worse, then you go to the doctor for some health advice, but armed with some data to share, and having ruled out certain possible causes that are fixed with a bit of rest and perhaps an antacid. Whether you realize it or not, in comparing how you feel one day to the next, you have taken baseline measurements.

    In much the same way, a DBA uses baselines to gauge the health of their database servers. Of course, while SQL Server is very willing to share data regarding its health and activities, it has almost no idea of the difference between good and bad. Over time, experienced DBAs develop “mental” baselines with which they can gauge the health of their servers almost as easily as their own body. They accumulate knowledge of the daily, natural state of each part of their database system, and so know instinctively when one of their databases “feels funny”. Equally, they know when an “issue” is just a passing tremor. They see their SQL Server with all of its four CPU cores running close to 100% and don’t panic anymore. Why? It’s 5 PM, and every day the same thing occurs when the end-of-day reports, which are very CPU intensive, are running. Equally, they know when they need to respond in earnest when it is the first time they have heard about an issue, even if it has been happening every day.

    Nevertheless, no DBA can retain mental baselines for every characteristic of their systems, so we need to collect physical baselines too. In my experience, surprisingly few DBAs do this very well. Part of the problem is that SQL Server provides a lot of instrumentation. If you look, you will find an almost overwhelming amount of data regarding user activity on your SQL Server instances, and use and abuse of the available CPU, I/O and memory. It seems like a huge task even to work out which data you need to collect, let alone start collecting it on a regular basis, managing its storage over time, and performing detailed comparative analysis. However, without baselines, it is very difficult to pinpoint what ails a server just by looking at a single snapshot of the data, or to spot retrospectively what caused the problem by examining aggregated data for the server, collected over many months.

    It isn’t as hard as you think to get started. You’ve probably already established some troubleshooting queries of the type:

        SELECT Value FROM SomeSystemTableOrView;

    Capturing a set of baseline values for such a query can be as easy as changing it as follows:

        INSERT INTO BaseLine.SomeSystemTable (value, captureTime)
        SELECT Value, SYSDATETIME()
        FROM SomeSystemTableOrView;

    Of course, there are monitoring tools that will collect and manage this baseline data for you, automatically, and allow you to perform comparisons of metrics over different periods. However, to get yourself started and to prove to yourself (or perhaps the person who writes the checks for tools) the value of baselines, stick something similar to the above query into an agent job, running every hour or so, and you are on your way with no excuses!

    Then, the next time you investigate a slow server, and see x open transactions, y users logged in, and z rows added per hour in the Orders table, compare to your baselines and see immediately what, if anything, has changed!
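    As an illustration only, here is a minimal collection-script sketch in Python with pyodbc. The table and view names come from the query above; the connection string is a placeholder, and in practice the article's suggestion of an agent job running the T-SQL directly needs no script at all:

        # Baseline-capture sketch: copy current values from a system view into
        # a baseline table, stamped with the capture time. Schedule it hourly
        # (cron, Task Scheduler, or an Agent job running the T-SQL directly).
        import pyodbc

        CONN_STR = (
            "DRIVER={ODBC Driver 17 for SQL Server};"
            "SERVER=myserver;DATABASE=mydb;Trusted_Connection=yes;"  # placeholder
        )

        def capture_baseline():
            with pyodbc.connect(CONN_STR) as conn:
                conn.execute(
                    "INSERT INTO BaseLine.SomeSystemTable (value, captureTime) "
                    "SELECT Value, SYSDATETIME() FROM SomeSystemTableOrView;"
                )
                conn.commit()

        if __name__ == "__main__":
            capture_baseline()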

    Read the article

  • Cannot Mount USB 3.0 Hard Disk ?!!

    - by Tenken
    Hi, I have a USB 3.0 external hard disk which I am unable to mount. The entry appears in the "lsusb" output, but I do not exactly understand how to mount it. "ASMedia Technology Inc." is the USB 3.0 device. I would appreciate some help in mounting and accessing the hard disk. This is the relevant output of "lsusb -v":

        Bus 009 Device 002: ID 174c:5106 ASMedia Technology Inc.
        Device Descriptor: bLength 18 bDescriptorType 1 bcdUSB 2.10 bDeviceClass 0 (Defined at Interface level) bDeviceSubClass 0 bDeviceProtocol 0 bMaxPacketSize0 64 idVendor 0x174c ASMedia Technology Inc. idProduct 0x5106 bcdDevice 0.01 iManufacturer 2 ASMedia iProduct 3 AS2105 iSerial 1 00000000000000000000 bNumConfigurations 1
        Configuration Descriptor: bLength 9 bDescriptorType 2 wTotalLength 32 bNumInterfaces 1 bConfigurationValue 1 iConfiguration 0 bmAttributes 0xc0 Self Powered MaxPower 0mA
        Interface Descriptor: bLength 9 bDescriptorType 4 bInterfaceNumber 0 bAlternateSetting 0 bNumEndpoints 2 bInterfaceClass 8 Mass Storage bInterfaceSubClass 6 SCSI bInterfaceProtocol 80 Bulk (Zip) iInterface 0
        Endpoint Descriptor: bLength 7 bDescriptorType 5 bEndpointAddress 0x81 EP 1 IN bmAttributes 2 Transfer Type Bulk Synch Type None Usage Type Data wMaxPacketSize 0x0200 1x 512 bytes bInterval 0
        Endpoint Descriptor: bLength 7 bDescriptorType 5 bEndpointAddress 0x02 EP 2 OUT bmAttributes 2 Transfer Type Bulk Synch Type None Usage Type Data wMaxPacketSize 0x0200 1x 512 bytes bInterval 0
        Device Qualifier (for other device speed): bLength 10 bDescriptorType 6 bcdUSB 2.10 bDeviceClass 0 (Defined at Interface level) bDeviceSubClass 0 bDeviceProtocol 0 bMaxPacketSize0 64 bNumConfigurations 1
        Device Status: 0x0001 Self Powered

        Bus 009 Device 001: ID 1d6b:0003 Linux Foundation 3.0 root hub
        Device Descriptor: bLength 18 bDescriptorType 1 bcdUSB 3.00 bDeviceClass 9 Hub bDeviceSubClass 0 Unused bDeviceProtocol 3 bMaxPacketSize0 9 idVendor 0x1d6b Linux Foundation idProduct 0x0003 3.0 root hub bcdDevice 2.06 iManufacturer 3 Linux 2.6.35-28-generic xhci_hcd iProduct 2 xHCI Host Controller iSerial 1 0000:04:00.0 bNumConfigurations 1
        Configuration Descriptor: bLength 9 bDescriptorType 2 wTotalLength 25 bNumInterfaces 1 bConfigurationValue 1 iConfiguration 0 bmAttributes 0xe0 Self Powered Remote Wakeup MaxPower 0mA
        Interface Descriptor: bLength 9 bDescriptorType 4 bInterfaceNumber 0 bAlternateSetting 0 bNumEndpoints 1 bInterfaceClass 9 Hub bInterfaceSubClass 0 Unused bInterfaceProtocol 0 Full speed (or root) hub iInterface 0
        Endpoint Descriptor: bLength 7 bDescriptorType 5 bEndpointAddress 0x81 EP 1 IN bmAttributes 3 Transfer Type Interrupt Synch Type None Usage Type Data wMaxPacketSize 0x0004 1x 4 bytes bInterval 12
        Hub Descriptor: bLength 9 bDescriptorType 41 nNbrPorts 4 wHubCharacteristic 0x0009 Per-port power switching Per-port overcurrent protection TT think time 8 FS bits bPwrOn2PwrGood 10 * 2 milli seconds bHubContrCurrent 0 milli Ampere DeviceRemovable 0x00 PortPwrCtrlMask 0xff
        Hub Port Status: Port 1: 0000.0100 power Port 2: 0000.0100 power Port 3: 0000.0503 highspeed power enable connect Port 4: 0000.0503 highspeed power enable connect
        Device Status: 0x0003 Self Powered Remote Wakeup Enabled

    This is the error given when I try to mount the hard drive:

        shinso@shinso-IdeaPad:~$ sudo mount /dev/sdb /mnt
        [sudo] password for shinso:
        mount: /dev/sdb: unknown device

    This is the output of "dmesg|tail":

        [30062.774178] Either the lower file is not in a valid eCryptfs format, or the key could not be retrieved. Plaintext passthrough mode is not enabled; returning -EIO
        [30535.800977] usb 9-4: USB disconnect, address 3
        [30659.237342] Valid eCryptfs headers not found in file header region or xattr region
        [30659.237351] Either the lower file is not in a valid eCryptfs format, or the key could not be retrieved. Plaintext passthrough mode is not enabled; returning -EIO
        [31259.268310] Valid eCryptfs headers not found in file header region or xattr region
        [31259.268313] Either the lower file is not in a valid eCryptfs format, or the key could not be retrieved. Plaintext passthrough mode is not enabled; returning -EIO
        [31860.059058] Valid eCryptfs headers not found in file header region or xattr region
        [31860.059062] Either the lower file is not in a valid eCryptfs format, or the key could not be retrieved. Plaintext passthrough mode is not enabled; returning -EIO
        [32465.220590] Valid eCryptfs headers not found in file header region or xattr region
        [32465.220593] Either the lower file is not in a valid eCryptfs format, or the key could not be retrieved. Plaintext passthrough mode is not enabled; returning -EIO

    I am using Ubuntu 10.10 (64-bit). Any help is appreciated.

    Read the article

  • Apache doesn't load .php files

    - by Haddex
    First, sorry for my English, and for asking something that's been answered all over the web. I've read a lot of posts about this problem but I still can't find the solution. I'm a web developer who recently moved to Ubuntu from Windows 7. I had a website done (it's online and working) and I set up LAMP to keep working with it. I made a test.php file with:

        <?php phpinfo(); ?>

    and put it in the /var/www/html directory. It shows all the information about PHP and I was really happy: "OK, it's all done, tomorrow I will work hard." But I placed my whole site directly into /var/www/html, not in a folder; the index.php is in /var/www/html, but guess what: none of my .php files load, the browser just keeps thinking.

    What I did:
    - I restarted Apache: /etc/init.d/apache2 restart
    - I tried again with the test.php file and it works fine.
    - I put a .html file in /var/www/html and it works fine.
    - I looked at /etc/apache2/sites-enabled/000-default.conf and it says: DocumentRoot /var/www/html
    - I looked at /etc/apache2/mods-enabled/dir.conf and it says: DirectoryIndex index.html index.cgi index.pl index.php ...

    Edit: I think it's something related to phpMyAdmin, like if I'm not able to connect with the database. But I get nothing on the screen when trying to load the page, so... I'm not sure. I can access the URL localhost/phpmyadmin, and I edited the connection.php file like this:

        <?php
        # FileName="Connection_php_mysql.htm"
        # Type="MYSQL"
        # HTTP="true"
        $hostname_rakstadconnection = "localhost";
        $database_rakstadconnection = "rakstadclandb";
        $username_rakstadconnection = "root";
        $password_rakstadconnection = "admin";
        $rakstadconnection = mysql_connect($hostname_rakstadconnection, $username_rakstadconnection, $password_rakstadconnection) or trigger_error(mysql_error(),E_USER_ERROR);
        mysql_query("SET NAMES 'utf8'");
        ?>

    The name of the database is correct, as are the user and password.
    http://i89.photobucket.com/albums/k220/Haddex/Capturadepantallade2014-06-09112609_zpsc45ddb72.png
    http://i89.photobucket.com/albums/k220/Haddex/Capturadepantallade2014-06-09112120_zps0b9e15f7.png

    Edit 2: Could this be because it's a website that I brought to Linux from Windows? I used Dreamweaver.

    Edit 3: I changed the # to /*/, nothing. The error.log file says:

        [Mon Jun 09 17:08:13.627881 2014] [:error] [pid 1517] [client 127.0.0.1:46663] PHP Warning: require_once(/var/www/html/Connections/rakstadconnection.php): failed to open stream: Permission denied in /var/www/html/index.php on line 1
        [Mon Jun 09 17:08:13.627933 2014] [:error] [pid 1517] [client 127.0.0.1:46663] PHP Fatal error: require_once(): Failed opening required 'Connections/rakstadconnection.php' (include_path='.:/usr/share/php:/usr/share/pear') in /var/www/html/index.php on line 1

    I'm reading the error log, but... should I add a Linux path to my index.php file? I don't think so. Thanks.

    Read the article

  • It's All In The Cloud

    - by Natalia Rachelson
    People turned out in droves for Steve Miranda's Apps Cloud General Session. Steve, as engaging as ever, covered our Apps strategy in the cloud and reinforced that Oracle has a complete set of cloud services including:
    - Human Capital Management
    - Talent Management
    - Sales and Marketing
    - Customer Service and Support
    - Financial Management
    - Procurement, Sourcing, and Inventory
    - Project Portfolio Management
    - Governance, Risk, and Compliance
    ...all delivered on top of the Social, Platform, and Common Infrastructure.

    Steve talked about Fusion being the centerpiece of our Cloud Services. The fact that Fusion is 100 percent standards based is a big, big deal! In addition, our ERP Cloud Service is the most complete cloud service on the market. And email marketing is dead -- social marketing is where the action is. It's also where Oracle is investing heavily from a Sales & Marketing Cloud perspective. Steve covered the strategic acquisitions Oracle has made to enhance our organic Cloud offering. Specifically, Oracle bought RightNow to make our Customer Service and Support Cloud service complete. We also bought Taleo to add Recruiting and Learning capabilities to our Talent Management Cloud.

    Steve talked about our customers and how they are benefiting from the use of a variety of our Cloud Services. Red Robin is driving lower labor and food costs with Oracle ERP Cloud Service. He used Elizabeth Arden and UBS as the profile customers for HCM and Talent Management Service, and Brocade for Talent Management. All these customers are benefiting from a comprehensive and fully integrated HR platform that aligns compensation with performance and enhances workforce motivation and retention. At the same time, Hitachi Data Systems is using Oracle Taleo Performance Management Cloud to recruit the right competencies, pinpoint areas of improvement, and develop and monitor employee goals to support the global account organization. KLM and Overstock.com are gaining the benefits of Oracle's Customer Service and Support Service from RightNow by better engaging and serving customer needs online and through call centers. And last but not least, Graco and Key Energy are leveraging mobility features and sales forecasting and territory management capabilities within the Oracle Sales and Marketing Service. They expect to gain better visibility into sales information, drive more efficient sales campaigns, and empower their sales force with the data they need to make sales.

    Overall, Oracle Apps Cloud Services are enjoying significant momentum in the marketplace. Steve projected an air of confidence and enthusiasm, highlighting Oracle's latest successes with Cloud services.

    Read the article

  • ArchBeat Facebook Friday: Top 10 Shared Links - May 23-29, 2014

    - by OTN ArchBeat
    Among the 5,144 fans of the OTN ArchBeat Facebook Page, the following Top 10 items were the most popular over the last seven days, May 23-29, 2014.

    - GlassFish/Java EE Community Open Forum Today! | Reza Rahman
      Have questions about GlassFish? Java EE/GlassFish evangelist Reza Rahman has answers, and you can pick his brain during an online forum organized by the London GlassFish User Group and C2B2. The event is free, but you must register in order to participate. Click the link for more information.
    - Twitter Tuesday - Top 10 @ArchBeat Tweets - May 20-26, 2014
      The top 10 @OTNArchBeat tweets for the week of May 20-26, 2014. Topics covered include ADF, Cloud, GoldenGate, KScope14, OBIEE, ODI, WebLogic, WebCenter, and more.
    - FrameworkFolders Support has come to Oracle WebCenter Portal | JayJay Zheng
      Interested in working with Framework Folders in Oracle WebCenter Portal? Oracle ACE JayJay Zheng reviews the essentials.
    - Video: Programming Best Practices - ADF Business Components | Frank Nimphius
      Frank Nimphius discusses best practices and recommendations for ADF Business Components in the latest video from ADF Architecture TV.
    - Video: Kscope 2014 Preview: Data Modeling and Moving Meditation with Kent Graziano
      For your mind and your body! Oracle ACE Director Kent Graziano previews his Kscope 2014 data modeling presentations and the early morning Chi Gung sessions he will once again lead for Kscope attendees.
    - OAG and OES Integration for Web API Security: skin and guts | Andre Correa
      A-Team architect Andre Correa's post examines a strategy for web API security that uses OAG (Oracle API Gateway) and OES (Oracle Entitlements Server).
    - Getting Started with Coherence*Web in WebLogic Server 12.1.2 | Tim Middleton
      Solution architect Tim Middleton shows you how to configure Coherence*Web in WebLogic Server 12.1.2 and deploy a basic web application.
    - SOA and Business Processes: You are the Process!
      Part of the 13-part "Industrial SOA" article series, this article looks at best practices for modeling and managing effective business processes.
    - Authentication in Oracle Identity Federation / IdP | Damien Carru
      Damien Carru discusses authentication when OIF acts as an IdP and how the server can be configured to use specific OAM Authentication Schemes to challenge the user.
    - Caveats on Using WebLogic Server with JDK7 | JayJay Zheng
      Quick tech tips from Oracle ACE JayJay Zheng.

    Read the article

  • How to get faster graphics in KVM? VNC is painfully slow with Haiku OS guest, Spice won't install and SDL doesn't work

    - by Don Quixote
    I've been coming up to speed on the Haiku operating system, an open-source clone of BeOS 5 Pro. I'm using an Apple MacBook Pro as my development machine. Apple's Boot Camp BIOS does not support more than four partitions on the internal hard drive. While I can set up extended and logical partitions, doing so will prevent any of the installed operating systems from booting. To run Haiku directly on the iron, I boot it off a USB stick. Using external storage is also helpful because I am perpetually out of filesystem space.

    While VirtualBox is documented to allow access to physical drives, I could not actually get it to work. Also, VirtualBox can only use one of the host CPU's cores. While VB guests can be configured for more than one CPU, they are only emulated. A full build of the Haiku OS takes 4.5 hours under VB.

    I had hoped to reduce build times by using KVM instead, but it's not working nearly as well as VirtualBox did. The Linux Kernel Virtual Machine is broken in all manner of fundamental ways as seen from Haiku. But I'm a coder; maybe I could contribute to fixing some of those problems.

    The first problem I've got is that Haiku's video in virt-manager is quite painfully slow. When I drag Haiku windows around the desktop, they lag quite far behind where my mouse is. It's quite difficult to move a window to a precise position on the screen. Just imagine that the mouse was connected to the window title bar with a really stretchy spring. Also, Haiku's mouse lags quite far behind where I have moved it.

    I found lots of Personal Package Archives that enable Spice from QEMU/KVM at the Ubuntu Personal Package Archives. I tried a few of the PPAs but none of them worked; with one of them, the command "add-apt-repository" crashed with a traceback. There is a Wiki page about Spice, but it says that it only works on 64-bit. My Early 2006 MacBook Pro is 32-bit. Its Apple Model Identifier is MacBookPro1,1; these use Core Duos, NOT Core 2 Duos. I don't mind building a source deb for 32-bit if I can expect it to work. Is there some reason that Spice should be 64-bit only? Does it need features of the x86_64 instruction set architecture that x86 does not have?

    When I try using SDL from virt-manager, the configuration for Local SDL Window says "Xauth: /home/mike/.Xauthority". When I try to start my guest, virt-manager emits an error. When I Googled the error message, the usual solution was to make ~/.Xauthority readable. However, .Xauthority does not exist in my home directory. Instead I have a $XAUTHORITY environment variable. There is no way to configure SDL in virt-manager to use $XAUTHORITY instead of ~/.Xauthority. Neither does it work to copy the value of $XAUTHORITY into the file.

    I am ready to scream, because I've spent five fscking days trying to make KVM work for Haiku development. There is a whole lot more that is broken than the slow video. All I really want to do for now is speed up my full builds of Haiku by using "jam -j2" to use both cores in my CPU. I may try Xen next, but the last time I monkeyed with Xen it was far, far more broken than I am finding KVM to be.

    Just for now, I would be satisfied if there were some way to use my USB stick as a drive in VirtualBox. VB does allow me to configure /dev/sdb as a drive, but it always causes a fatal error when I try to launch the guest.

    Thank you for any advice you can give me.

    Read the article

  • MS in Computer Science after BE in electronics

    - by Abhinav
    I am doing my 3rd year of a Bachelors in Electronics and Electrical Communication, but since the first year I have been interested in Computer Science. At that time it was just my hobby, but in second year, when I joined robotics, my love for computer science grew. My team and I came in the top three in 2 national competitions (technical fests of different IITs), where we used image processing, hardware interfacing, etc. But then I realised that Computer Science is not just about coding. I took many lectures from free online schools like Udacity and Coursera in subjects related to Artificial Intelligence, Building a Search Engine, Design and Analysis of Algorithms, Programming a Robotic Car, Programming Languages, Machine Learning, Software Engineering as a Service, WebApps Engineering, Compilers, Applied Cryptography, etc. I also did some courses in Core and Advanced Java in my second year from a training institute. I will also be taking courses in Statistics, Databases, and Discrete Mathematics from 25th June.

    Now I have realized how vast the field of Computer Science is, and how efficient you become at deciding algorithms and classifying problems into different subfields which have been thoroughly researched, so you don't always do the brute force thing or naive programming. Now this field has become a kind of passion for me. Adding to that, I am also doing my 6-month internship in the software field at Texas Instruments, where I am working on automation and algorithms. I also have some 5-6 good college-level projects in software and robotics. I also like electronics, but only some fields like Operating Systems (this subject was there in Electronics also), Microprocessors, Digital, Computer Architecture, DSPs, etc.

    I really want to pursue an MS in some field of Computer Science. I am giving the GRE in October/November. Till now I have a good CGPA of around 9.4/10, and 1 year of my college is still left. Do I have any chance that some good university in the US will consider me for an MS in a field related to computer science or robotics? Also, can you suggest some things I can do during this 1 year to increase my chances for an MS, or should I apply for EECS (Electrical Engineering and Computer Science) and then shift more towards Computer Science as my major option? My main aim is to do a PhD after the MS if I am able to manage that somehow. I know that I will have to put in much extra effort to understand things in an MS program compared to CS undergraduates, but I will do that with full dedication. Also, when I communicate with CS students at my college, or during my internship period, I didn't feel that I was missing very much of the stuff that they know, and I was very comfortable during my internship with the software employees.

    Read the article

  • Heterogeneous Datacenter Management with Enterprise Manager 12c

    - by Joe Diemer
    The following is a guest blog, contributed by Bryce Kaiser, Product Manager at Blue Medora.

    When I envision a perfect datacenter, it would consist of technologies acquired from a single vendor across the entire server, middleware, application, network, and storage stack - Apps to Disk - that meets your organization’s every IT requirement with absolute best-of-breed solutions in every category. To quote a familiar motto, your datacenter would consist of "Hardware and Software, Engineered to Work Together". In almost all cases, practical realities dictate something far less than the IT Utopia mentioned above. You may wish to leverage multiple vendors to keep licensing costs down, a single vendor may not have an offering in the IT category you need, or your preferred vendor may quite simply not have the solution that meets your needs. In other words, your IT needs dictate a heterogeneous IT environment. Heterogeneity, however, comes with additional complexity. The following are two pretty typical challenges:

    1) No end-to-end visibility into the enterprise-wide application deployment. Each vendor solution which is added to an infrastructure may bring its own tooling, creating different consoles for different vendor applications and platforms.
    2) No visibility into performance bottlenecks. When multiple management tools operate independently, you lose diagnostic capabilities, including identifying cross-tier issues with database, hung requests, slowness, memory leaks and hardware errors/failures causing DB/MW issues.

    As adoption of Oracle Enterprise Manager (EM) has increased, especially since the release of Enterprise Manager 12c, Oracle has seen an increase in the number of customers who want to leverage their investments in EM to manage non-Oracle workloads. Enterprise Manager provides a single pane of glass view into their entire datacenter. By creating a highly extensible framework via the Oracle EM Extensibility Development Kit (EDK), Oracle has provided the tooling for business partners such as my company, Blue Medora, as well as customers to easily fill gaps in the ecosystem and enhance existing solutions. As mentioned in the previous post on the Enterprise Manager Extensibility Exchange, customers have access to an assortment of Oracle and partner-provided solutions through this Exchange, which is accessed at http://www.oracle.com/goto/emextensibility. Currently, there are over 80 Oracle and partner-provided plug-ins across the EM 11g and EM 12c versions. Blue Medora is one of those contributing partners, and you will find 3 of our solutions there, including our flagship plug-in for VMware.

    Let's look at Blue Medora’s VMware plug-in as an example of what I'm trying to convey. Here is a common situation solved by true visibility into your entire stack:

    Symptoms
    - My database is bogging down, but the database appears okay internally. Maybe it’s starved for resources?
    - My OS tooling is showing everything is “OK”. Something doesn’t add up.
    Root cause
    - Through the VMware plug-in we can see the problem is actually on the virtualization layer.
    Solution
    - From within Enterprise Manager -- the same tool you use for all of your database tuning -- we can overlay the data of the database target, host target, and virtual machine target for a true picture of the root cause.

    Here is the console view:

    Perhaps your monitoring conditions are more specific to your environment. No worries, Enterprise Manager still has you covered. With Metric Extensions you have the “Next Generation” of user-defined metrics, which easily bring the power of your existing management scripts into a single console while leveraging the proven Enterprise Manager framework. Simply put, Oracle Enterprise Manager boasts a growing ecosystem that provides the single pane of glass for your entire datacenter, from the database and beyond. Bryce can be contacted at [email protected]

    Read the article

  • Now Shipping! NetAdvantage for .NET 2010 Volume 3!

    The new NetAdvantage Ultimate includes all four Line of Business user interface control sets for ASP.NET, Windows Forms, WPF and Silverlight, plus two advanced Data Visualization UI control sets for WPF and Silverlight. With six NetAdvantage products in one robust package, Infragistics® gives you hundreds of controls and infinite development possibilities.

    Unified XAML Product Strategy - Share Code, Get More Controls
    In the 10.3 release, Infragistics continues to deliver code parity between the XAML platforms, WPF and Silverlight. In the line-of-business toolsets, Infragistics introduces the new xamSchedule™, full-featured, Outlook® 2010-style schedule controls, and the new xamDataTree™, a data-bound tree view that comfortably handles tens of thousands of tree nodes. Mimicking our Silverlight Drag and Drop Framework, the WPF Drag and Drop Framework CTP empowers you to add your own rich touches to your applications.

    Track Users' Behaviors
    New to all NetAdvantage Silverlight controls is the Infragistics Analytics Framework (IGAF), which empowers you to track user behavior in RIAs running on Silverlight 4. Building on the Microsoft® Silverlight Analytics Framework, with IGAF you can analyze users' behaviors to ensure the experience you want to deliver.

    NetAdvantage for Windows Forms - New Office® 2010 Ribbon and Application Menu
    Create new experiences with Windows Forms. Now with Office 2010 styling, NetAdvantage for Windows Forms has new features such as the Microsoft® Office 2010 ribbon and enhanced Infragistics.Excel to export the contents of the high-performance WinGrid™ into Microsoft Excel® 2010. The new Windows Message Support enables Infragistics standalone editor controls to process numerous Windows® OS messages, allowing them to respond just like native controls to changes in the Windows environment.

    Create Faster Web 2.0 Experiences with NetAdvantage for ASP.NET
    Infragistics continues to push the envelope to deliver the fastest ASP.NET WebForms controls available on the market. Our lightning-fast ASP.NET grids are now enhanced with XPS/PDF exporting and summary rows. This release also includes support for jQuery templating (as a CTP) within our WebDataGrid™ and WebDataTree™ controls, allowing you to quickly cut down overall page size.

    Deliver Business Intelligence with Power, Flexibility and the Office 2010 Experience
    NetAdvantage for WPF Data Visualization and NetAdvantage for Silverlight Data Visualization help you deliver flexible, powerful and usable end-user experiences in Business Intelligence applications. Both suites include the Pivot Grid, which delivers the full power of online analytical processing (OLAP) to present multi-dimensional data, sliced and diced in cross-tabulated form for end users to drill down into, interact with and easily extract meaning from.

    Mapping Made Easy
    10.3 marks the official release of the WPF Data Visualization xamMap™ control, to map anything and everything from geographic to geo-spatial data. Map layers allow you to add successive levels of detail, navigational panes allow panning in all directions, color swatch panes facilitate value scales like choropleth shading, and scale panes allow users to zoom in and out. Both toolsets introduce the first of many relationship maps! With the xamOrgChart™ CTP you can map out organizational charts of up to 50K employees, competitive brackets (think World Cup) and any other relational, organizational map your application needs.
    http://www.infragistics.com

    Read the article

  • Cloning a dual boot system from HDD to SSD

    - by Alex
    I'm planning on replacing my laptop's HDD with a 256GB SSD, but I have a dual-boot (12.04 and Windows 7) setup and I'd like to be able to directly migrate Ubuntu over without having to reinstall and lose all of my settings. GParted reports the following partition setup on my HDD. I am, of course, able to modify it if necessary.

    /dev/sda1 (NTFS), 66.92 MB used out of 200.00 MB
    I'm honestly not sure what this partition is for. Maybe for Windows 7 system files? I'm hesitant to mess with it. (Edit: it turns out it is a partition for Windows recovery files in the event of OS corruption, so I don't want to remove it. Plus, it also appears to be a major pain to remove anyway.)

    /dev/sda2 (NTFS), 116.35 GB used out of 339.06 GB (boot)
    This partition is the C:/ drive on my Windows installation. I don't use it on my Ubuntu installation, except that it is the boot partition and thus has GRUB on it.

    /dev/sda4 (extended) > /dev/sda5 (ext4), 14.49 GB used out of 91.34 GB > /dev/sda6 (linux-swap), 5.92 GB
    These are my Ubuntu partitions. /sda5 contains my documents and all of the files I use on Ubuntu, and (as far as I know) the system files for Ubuntu itself (it's the partition I created when prompted by the Live-DVD installer). /sda6 is, of course, the swap partition, which I only need for hibernation (6GB of RAM).

    /dev/sda3 (NTFS), 9.89 GB used out of 14.75 GB
    This is an annoying partition that Lenovo created to store some drivers and files that I might need later on. For example, it allows me to use OneKeyRecovery for a quick factory recovery if absolutely necessary - not sure if that'll work on an SSD. It also contains not-so-important files for bloatware installation.

    In total, my HDD only has about 150GB of files on it, so it should fit comfortably on the SSD. The problem is, I want to exactly migrate my files, partitions, OSes, MBR, etc. from my HDD to my SSD, and I'm not quite sure how to do this. I've seen CloneZilla referenced before, but I'm not all too experienced, and the documentation for it quite frankly seems a bit like a foreign language to me. So, put simply, is there any way I can exactly clone this HDD to an SSD without a massive headache? Also, if it matters, I'll probably be using an external hard drive case (as recommended in online tutorials) to externally attach the SSD to my laptop during the cloning process, due to the lack of a second hard drive slot in the machine.

    Read the article

  • SQL SERVER – ERROR: FIX using Compatibility Level – Database diagram support objects cannot be installed because this database does not have a valid owner – Part 2

    - by pinaldave
    Earlier I wrote a blog post about how to resolve this error with database diagrams. Today I faced the same error when I was dealing with a database upgraded from SQL Server 2005 to SQL Server 2008 R2. When I was searching for the solution online, I ended up on my own earlier solution, SQL SERVER – ERROR: FIX – Database diagram support objects cannot be installed because this database does not have a valid owner. I really found it interesting that I ended up on my own solution. However, the solution to the problem this time was a bit different. Let us see how we can resolve it.

    Error: Database diagram support objects cannot be installed because this database does not have a valid owner. To continue, first use the Files page of the Database Properties dialog box or the ALTER AUTHORIZATION statement to set the database owner to a valid login, then add the database diagram support objects.

    Workaround / Fix / Solution: Follow the steps listed below and it should solve your problem. (NOTE: Please try this for databases upgraded from a previous version. Everybody else should just follow the steps mentioned here.)
    - Select your database >> Right Click >> Select Properties
    - Go to the Options page
    - In the dropdown at right labeled “Compatibility Level” choose “SQL Server 2005 (90)”
    - Select Files on the left side of the page
    - In the OWNER box, select the button which has three dots (…) in it
    - Now select user ‘sa’ or NT AUTHORITY\SYSTEM and click OK.

    This will solve your problem. However, there is one very important note you must consider. When you change any database owner, there are always security-related implications. I suggest you check your security policies before changing authorization. I did this to quickly solve my problem on my development server. If you are on a production server, you may open yourself to a potential security compromise.

    Reference: Pinal Dave (http://blog.sqlauthority.com)
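    If you prefer scripts to dialog boxes, the same two changes can be expressed in T-SQL; ALTER AUTHORIZATION is the statement the error message itself suggests. Below is a small sketch driven from Python with pyodbc, where the database name and connection string are hypothetical:

        # Equivalent of the dialog-box steps above, as T-SQL run from Python.
        # "MyUpgradedDb" and the connection string are placeholders.
        import pyodbc

        conn = pyodbc.connect(
            "DRIVER={ODBC Driver 17 for SQL Server};"
            "SERVER=myserver;DATABASE=master;Trusted_Connection=yes;",
            autocommit=True,  # ALTER DATABASE cannot run inside a transaction
        )
        # Steps 1-3: set the compatibility level to SQL Server 2005 (90)
        conn.execute("ALTER DATABASE MyUpgradedDb SET COMPATIBILITY_LEVEL = 90;")
        # Steps 4-6: set a valid owner, e.g. sa
        conn.execute("ALTER AUTHORIZATION ON DATABASE::MyUpgradedDb TO [sa];")
        conn.close()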

    Read the article

  • Impatient Customers Make Flawless Service Mission Critical for Midsize Companies

    - by Richard Lefebvre
    At times, I can be an impatient customer. But I’m not alone. Research by The Social Habit shows that among customers who contact a brand, product, or company through social media for support, 32% expect a response within 30 minutes and 42% expect a response within 60 minutes! 70% of respondents to another study expected their complaints to be addressed within 24 hours, irrespective of how they contacted the company.

    I was intrigued when I read a recent blog post by David Vap, Group Vice President of Product Development for Oracle Service Cloud. It’s about “Three Secrets to Innovation” in customer service. In David’s words:
    1) Focus on making what’s hard simple
    2) Solve real problems for real people
    3) Don’t just spin a good vision. Do something about it.

    I believe midsize companies have a leg up in delivering on these three points, mainly because they have no other choice. How can you grow a business without listening to your customers and providing flawless service? Big companies are often weighed down by customer service practices that have been churning in bureaucracy for years or even decades. When the all-in-one printer/fax/scanner I bought my wife for Christmas (call me a romantic) failed after sixty days, I wasted hours of my time navigating the big-brand manufacturer’s complex support and contact policies, only to be offered a refurbished replacement after I shipped mine back to them. There was not a happy ending. Let's just say my wife still doesn't have a printer.

    Young midsize companies need to innovate to grow. Established midsize company brands need to innovate to survive and reach the next level.

    Midsize Customer Case Study: The Boston Globe
    The Boston Globe, established in 1872 and the winner of 22 Pulitzer Prizes, is fighting the prevailing decline in the newspaper industry. Businessman John Henry invested in the Globe in 2013 because he “…believes deeply in the future of this great community, and the Globe should play a vital role in determining that future”. How well the paper executes on its bold new strategy is truly mission critical—a matter of life or death for an industry icon.

    This customer case study tells how Oracle’s Service Cloud is helping The Boston Globe “do something about”, and not just “spin”, its strategy and vision via improved customer service. For example, Oracle RightNow Chat Cloud Service is now the preferred support channel for its online environments. The average e-mail or phone call can take three to four minutes to complete, while the average chat takes only 30 to 40 seconds. It’s a great example of a company leveraging technology to make things simpler and to solve real problems for real people.

    Related: Oracle Cloud Service, a leader in The Forrester Wave™: Customer Service Solutions For Small And Midsize Teams, Q2 2014

    Read the article

  • Access Offline or Overloaded Webpages in Firefox

    - by Asian Angel
What do you do when you really want to access a webpage only to find that it is either offline or overloaded with too much traffic? You can get access to the most recent cached version using the Resurrect Pages extension for Firefox. The Problem: If you have ever encountered a website that has become overloaded and unavailable due to sudden popularity (e.g., a surge of visitors from Slashdot, Digg, etc.) then this is the result. No satisfaction to be had here… Resurrect Pages in Action: Once you have installed the extension you can add the Toolbar Button if desired…it will give you the easiest access to Resurrect Pages. Or you can wait for a problem to occur when trying to access a particular website and have it appear as shown here. As you can see there is a very nice selection of cache services to choose from, increasing your odds of accessing a copy of that webpage. If you would prefer to have the access attempt open in a new tab or window then you should definitely use the Toolbar Button. Clicking on the Toolbar Button will give you access to the popup window shown here…otherwise the access attempt will happen in the current tab. Here is the result for the website that we wanted to view using the Google listing, followed by the Google (text only) listing. The results with the different services will depend on how recently the webpage was published or set up. View Older Versions of Currently Accessible Websites: Just for fun we decided to try the extension out on the How-To Geek website to view an older version of the homepage. Using the Toolbar Button and clicking on The Internet Archive brought up the following page…we decided to try the Nov. 28, 2006 listing. As you can see things have really changed between 2006 and now…Resurrect Pages can be very useful for anyone who is interested in how websites across the web have grown and changed over the years. Conclusion: If you encounter a webpage that is offline or overloaded by sudden popularity then the Resurrect Pages extension can help you get access to the information that you need using a cached version. Links: Download the Resurrect Pages extension (Mozilla Add-ons)

    Read the article

  • Caching factory design

    - by max
I have a factory class XFactory that creates objects of class X. Instances of X are very large, so the main purpose of the factory is to cache them, as transparently to the client code as possible. Objects of class X are immutable, so the following code seems reasonable:

    # module xfactory.py
    import x

    class XFactory:
        _registry = {}  # shared across all factory instances

        def get_x(self, arg1, arg2, use_cache=True):
            if use_cache:
                hash_id = hash((arg1, arg2))
                if hash_id in self._registry:
                    return self._registry[hash_id]
                obj = x.X(arg1, arg2)
                self._registry[hash_id] = obj
                return obj
            return x.X(arg1, arg2)

    # module x.py
    class X:
        # ...

Is it a good pattern? (I know it's not the actual Factory Pattern.) Is there anything I should change? Now, I find that sometimes I want to cache X objects to disk. I'll use pickle for that purpose, and store as values in the _registry the filenames of the pickled objects instead of references to the objects. Of course, _registry itself would have to be stored persistently (perhaps in a pickle file of its own, in a text file, in a database, or simply by giving pickle files the filenames that contain hash_id). Except now the validity of the cached object depends not only on the parameters passed to get_x(), but also on the version of the code that created these objects. Strictly speaking, even a memory-cached object could become invalid if someone modifies x.py or any of its dependencies, and reloads it while the program is running. So far I have ignored this danger since it seems unlikely for my application. But I certainly cannot ignore it when my objects are cached to persistent storage. What can I do? I suppose I could make the hash_id more robust by calculating the hash of a tuple that contains the arguments arg1 and arg2, as well as the filename and last-modified date of x.py and of every module and data file that it (recursively) depends on. To help delete cache files that won't ever be useful again, I'd add to the _registry the unhashed representation of the modified dates for each record. But even this solution isn't 100% safe since theoretically someone might load a module dynamically, and I wouldn't know about it from statically analyzing the source code. If I go all out and assume every file in the project is a dependency, the mechanism will still break if some module grabs data from an external website, etc. In addition, the frequency of changes in x.py and its dependencies is quite high, leading to heavy cache invalidation. Thus, I figured I might as well give up some safety, and only invalidate the cache when there is an obvious mismatch. This means that class X would have a class-level cache validation identifier that should be changed whenever the developer believes a change happened that should invalidate the cache. (With multiple developers, a separate invalidation identifier is required for each.) This identifier is hashed along with arg1 and arg2 and becomes part of the hash keys stored in _registry. Since developers may forget to update the validation identifier or not realize that they invalidated existing cache, it would seem better to add another validation mechanism: class X can have a method that returns all the known "traits" of X. For instance, if X is a table, I might add the names of all the columns. The hash calculation will include the traits as well; a rough sketch of this idea follows below. I can write this code, but I am afraid that I'm missing something important; and I'm also wondering if perhaps there's a framework or package that can do all of this stuff already. Ideally, I'd like to combine in-memory and disk-based caching.
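A minimal sketch of the version-plus-traits key described above; the names CACHE_VERSION and traits() are illustrative assumptions, not from the original code:

    # module x.py -- illustrative sketch only
    class X:
        CACHE_VERSION = 3  # bump manually when a code change should invalidate old caches

        def __init__(self, arg1, arg2):
            self.arg1, self.arg2 = arg1, arg2

        @classmethod
        def traits(cls):
            # e.g., the column names if X represents a table
            return ('col_a', 'col_b')

    # in XFactory.get_x(), the cache key would then become:
    #     hash_id = hash((arg1, arg2, x.X.CACHE_VERSION, x.X.traits()))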

    Read the article

  • Wireframing: A Day In the Life of UX Workshop at Oracle

    - by ultan o'broin
The Oracle Applications User Experience team's Day in the Life (DITL) of User Experience (UX) event was run in Oracle's Redwood Shores HQ for Oracle Usability Advisory Board (OUAB) members. I was charged with putting together a wireframing session, together with Director of Financial Applications User Experience, Scott Robinson (@scottrobinson). Example of stunning new wireframing visuals we used at the DITL events. We put on a lively show, explaining the basics of wireframing, the concepts, what it is and isn't, considerations on wireframing tool choice, and then imparting some tips and best practices. But the real energy came when the OUAB customers and partners in the room were challenged to do some wireframing of their own. Wireframing is about bringing your business and product use cases to life in real UX visual terms, by creating a low-fidelity drawing to iterate and agree on in advance of prototyping and coding what is to be finally built and rolled out for users. All the best people wireframe. Leonardo da Vinci used "cartoons" on some great works, tracing outlines first and using red ochre or charcoal dropped through holes in the tracing parchment onto the canvas to outline the subject. (Image distributed under Wikimedia commons license) Wireframing an application's user experience design enables you to:
- Obtain stakeholder buy-in.
- Enable faster iteration of different designs.
- Determine the task flow navigation paths (in Oracle Fusion Applications, navigation is linked with user roles).
- Develop a content strategy (readability, search engine optimization (SEO) of content, and so on).
- Lay out the pages, widgets, groups of features, and so on.
- Apply usability heuristics early (no replacement for usability testing, but a great way to do some heavy lifting up front).
- Decide upstream which functional user experience design patterns to apply (out-of-the-box solutions that expedite productivity).
- Assess which Oracle Application Development Framework (ADF) or equivalent technology components can be used (again, developer productivity is enhanced downstream).
We ran a lively hands-on exercise where teams wireframed a choice of application scenarios using the time-honored tools of pen and paper. Scott worked the floor like a pro, pointing out great use of features, best practices, innovations, and making sure that the whole concept of wireframing, the gestalt, transferred. "We need more buttons!" The cry of the energized. Not quite. The winning wireframe (online shopping scenario) from the Applications UX DITL event is shown here. Great fun, great energy, and great teamwork were evident in the room. Naturally, there were prizes for the best wireframe. Well, actually, prizes were handed out to the other attendees too! An exciting, slightly different aspect to the delivery of this session made the wireframing event one of the highlights of the day. And definitely something we will repeat when we get the chance. Thanks to everyone who attended, contributed, and helped organize.

    Read the article

  • Oracle Database In-Memory

    - by Mike.Hallett(at)Oracle-BI&EPM
Larry Ellison unveiled the next major milestone in database technology, Oracle Database In-Memory, on June 10, 2014. Oracle Database In-Memory will be generally available in July 2014 and can be used with all hardware platforms on which Oracle Database 12c is supported. This option will accelerate database performance by orders of magnitude for analytics, data warehousing, and reporting while also speeding up online transaction processing (OLTP). It allows any existing Oracle Database-compatible application to automatically and transparently take advantage of columnar in-memory processing, without additional programming or application changes. Benefits:
- Fast ad-hoc analytics without the need to pre-create indexes
- Completely transparent to existing applications
- Faster mixed-workload OLTP
- No database size limit
- Industrial-strength availability and security
- Robustness and maturity of Oracle Database 12c
To find out more see Oracle Database In-Memory and the comment from Rittman Mead on the Oracle In-Memory option launch ... and I will let you know how this unfolds with regard to advantages for OBI11g, Exalytics, and Big Data over the coming months.
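As a minimal sketch of how the option is switched on once installed (the sizing value and table name here are assumptions; check the 12c documentation and your own capacity planning before applying this anywhere):

    -- Reserve memory for the In-Memory column store (static parameter; restart required)
    ALTER SYSTEM SET INMEMORY_SIZE = 2G SCOPE = SPFILE;

    -- After the restart, mark a table for population into the column store
    ALTER TABLE sales INMEMORY PRIORITY HIGH;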

    Read the article

  • Investigating on xVelocity (VertiPaq) column size

    - by Marco Russo (SQLBI)
In January I published an article about how to optimize high cardinality columns in VertiPaq. In the meantime, VertiPaq has been rebranded to xVelocity: the official name is now "xVelocity in-memory analytics engine (VertiPaq)", but using xVelocity and VertiPaq when we talk about Analysis Services has the same meaning. In this post I'll show how to investigate the column sizes of an existing Tabular database so that you can find the most important columns to be optimized. A first approach is to look in the DataDir of Analysis Services for the folder containing the database, then look for the biggest files in all subfolders; you will find the name of a file that contains the name of the most expensive column. However, this heuristic process is not very efficient. A better approach is to use a DMV that provides the exact information. For example, by using the following query (open SSMS, open an MDX query on the database you are interested in, and execute it) you will see all database objects sorted by used size in descending order:

    SELECT *
    FROM $SYSTEM.DISCOVER_STORAGE_TABLE_COLUMN_SEGMENTS
    ORDER BY used_size DESC

You can look at the first rows in order to understand what the most expensive columns in your tabular model are. The interesting data provided are:
- TABLE_ID: the name of the object; it can also be a dictionary or an index
- COLUMN_ID: the name of the column the object belongs to; you may also see ID_TO_POS and POS_TO_ID in case they refer to internal indexes
- RECORDS_COUNT: the number of rows in the column
- USED_SIZE: the memory used by the object
By looking at the ratio between USED_SIZE and RECORDS_COUNT you can understand what you can do in order to optimize your tabular model. Your options are:
- Remove the column. Yes, if it contains data you will never use in a query, simply remove the column from the tabular model.
- Change granularity. If you are tracking time and you included milliseconds but seconds would be enough, round the data source column to the nearest second. If you have a floating-point number but two decimals are good enough (e.g., a temperature), round the number to the nearest decimal that is relevant to you. (A small sketch follows this post.)
- Split the column. Create two or more columns that have to be combined together in order to produce the original value. This technique is described in the VertiPaq optimization article.
- Sort the table by that column. When you read the data source, you might consider sorting data by this column so that the compression will be more efficient. However, this technique works better on columns that don't have too many distinct values, and you will probably move the problem to another column. Sorting data starting from the lower-density columns (those with a small number of distinct values) and going to the higher-density columns (those with high cardinality) is the technique that provides the best compression ratio.
After the optimization you should be able to reduce the used size and improve the size/count ratio you measured before. If you are interested in a longer discussion about internal storage in VertiPaq and you want to understand why this approach can save you space (and time), you can attend my 24 Hours of PASS session "VertiPaq Under the Hood" on March 21 at 08:00 GMT.
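A sketch of the "change granularity" option in T-SQL terms (the table and column names are hypothetical; adapt the rounding to whatever precision your model actually needs):

    -- Hypothetical source query feeding the tabular model:
    -- seconds instead of milliseconds, one decimal instead of full float precision
    SELECT
        CONVERT(datetime2(0), event_time) AS event_time,
        ROUND(temperature, 1)             AS temperature
    FROM dbo.SensorReadings;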

    Read the article

  • Big Companies Influence Retail in 2010

    - by David Dorf
From a retail industry perspective, 2010 will go down as the year mobile went mainstream, the economy recovered from the crash, and Facebook surpassed Google as the most influential online property. While the economy certainly had the biggest impact on the retail industry, a few big companies also exerted influence. Here's a rundown and a look back at 2010:
Apple -- Steve Jobs and company continued to lead the mobile pack. Consumers are using their iPhones to shop, retailers are using the iPod Touch for mobile checkout, and both are embracing the iPad as the next wave of technology. The Next Technology from Apple; Mobile Platforms in Retail; Apple Stores, Touch2Systems, and the iPad
Google -- Not to be outdone, Google's Android platform grew faster than Apple's, plus Android supports QR codes natively and will probably beat Apple to NFC. Google Checkout, Product Search, and Boutiques.com continue to impact the e-commerce scene. Google Leverages Like.com
Facebook -- While the movie The Social Network certainly made Facebook a household name, Connect, Places, and seeing the "like" button all over the Web really pushed Facebook everywhere. 2010 set the foundations for f-commerce. Facebook Participatory Promotions; Crowd Savers; What's the value of a Facebook fan?; Step Aside Google; Leveraging Social Networks for Retail; Social Shopping at Nine West
Groupon -- This newcomer executed on a simple concept flawlessly, making it the fastest company to reach $1B in revenue. (See the cool chart from Silicon Alley Insider.) Google's offer of $5-6B wasn't enough, so now they are raising an additional $1B in funding, presumably to buy up all the copycats across the globe. Changing the Way We Shop
Amazon -- As if leading the e-commerce charge wasn't enough, Amazon shook things up with its purchase of Woot and release of its Price Checker mobile app. They continue to push boundaries with Kindle, and don't seem worried about the iPad at all. You Can't Win on Price; Amazon Looks at Your Social Graph
eBay -- Acquiring Skype didn't exactly work out, but eBay's purchases of PayPal and RedLaser are driving the company forward. They are still a major force. Bump the Bill
Oracle, SAP, HP, IBM, and Cisco left their marks on the retail industry as well with various acquisitions and CxO shake-ups. We'll just have to wait and see what 2011 brings next.

    Read the article

< Previous Page | 454 455 456 457 458 459 460 461 462 463 464 465  | Next Page >