Search Results

Search found 2796 results on 112 pages for 'bounce rate'.


  • NRF Online Merchandising Workshop: Where Online Retailers Are Focusing for Holiday and Beyond

    - by Rose Spicer-Oracle
Last month we attended the NRF Online Merchandising Workshop in LA, and it was a great opportunity to catch up with our customers, meet new retailers, and hear some great presentations from VF Corporation, Zazzle, Julep Beauty, Backcountry, eBags and more. The one-on-one conversations with merchants and the keynote presentations carry the same themes across companies of all sizes and across verticals. With only 125 days left (and counting) until Black Friday, these conversations provided some great insight into what's top of mind for retailers during the most stressful time of their year, and a sneak peek into what they will deliver this holiday season. Some of the most popular topics were:

When to start promoting for holiday: It seems like a funny conversation to have in July, but a number of retailers said they already had their holiday shopping gift guides live on their sites, and those guides were attracting a significant portion of their onsite traffic. When it comes to timing, most retailers were questioning when to begin their holiday promotions, carefully balancing when to release pricing and specials while knowing that customers are holding out for last-minute deals and price drops. Many retailers noted the frustrations around transparent pricing by Amazon and a few other mega-retailers last year, which published their "lowest prices of the season" as early as October, assuring shoppers that those prices were the best they could get all season long. Many retailers felt their hands were forced and dropped prices. Others kept their set pricing despite negative customer reaction, causing some to miss their holiday goals. The pressure is on, and most retailers identified November 1 as their target start date for the holiday promotions blitz. Some are even waiting for the big guys to release their "lowest prices of the season" guides and will then follow suit.

Attribution is tough – and a huge focus: Understanding the path to conversion is a tough nut to crack, especially in the new omnichannel world where consumers use multiple touchpoints to make a single purchase, and internal management wants hard data. This has led many retailers to invest in attribution, carefully tracking their online marketing efforts to determine what gets "credit" for the sale, instead of giving credit to the "last click." Retailers noted that it is very difficult to determine the numbers when online and offline worlds collide, like when a shopper uses digital channels for research and then makes a purchase in a store. As one of the presenters from The North Face mentioned in her keynote, a key to enabling better customer service and satisfaction when it comes to converged online and offline sales is training the in-store staff, and creating a culture where it eventually "doesn't matter what group gets the credit" if they all add to the sale. No doubt, attribution will be a big area of retail investment in the coming years.
How to plan for the converged world: Planning to ensure inventory gets where it needs to be was another concern. In conversations with retailers, we advised them to analyze customer patterns: where shoppers purchase items, where the items were sourced from, and even where items are returned. This analysis is very valuable in determining inventory plans. From there, retailers can more accurately plan and allocate inventory to support both online and offline customer behavior. As we head into the holiday season, the need for accurate enterprise-wide inventory visibility, and providing that information to associates, is even more critical to the brand-wide customer experience.

Improving the search / navigation / usability of the site(s): Aside from some of the big ideas and standard holiday pricing pressure, most conversations we had centered around continuing to improve the basics of the site. Reinvesting in search and navigation came up time and time again (FitForCommerce blogged about what a big topic it was at the event as well). Obviously getting shoppers on their path quickly and allowing them to find what they need fast is critical, but it was definitely interesting to hear just how much effort is still going into honing the search and navigation experience. Adding new elements to search and navigation like type-ahead, inventive navigation refinements, and new navigation categories like gift guides, specialized boutiques and flash sales were top of mind, in addition to searchandising and making search-driven product recommendations. (Oracle can help!)

Reducing cart abandonment: Always a hot topic that is top of mind for every online retailer. Getting shoppers to the cart is often less than half the battle; getting them to click "buy" and complete the transaction is much more difficult. While retailers carefully study the checkout process and where shoppers tend to bounce, they know that how they design their checkout page is critical. We're all online shoppers in our personal lives and we know how frustrating it can be when total prices are not transparent (i.e., shipping, processing and taxes are not included until the very last possible screen before clicking that buy button). Online retailers are struggling with where in the checkout process to surface the total price to be charged in order to reduce cart abandonment, while not showing the total figure so early in the process that it keeps shoppers from getting to checkout altogether. Recent research shows that providing total pricing prior to the checkout process dramatically reduces cart abandonment, as it serves as a filter for those shopping within a specific price band. Much of the cart abandonment discussion leads us to…

The free shipping / free returns question: It's no secret that because of Amazon and programs like Prime, consumers expect free shipping, much to the chagrin of the smaller retailer. The reality is that if you're not a mega-retailer, shipping is an expensive part of doing business that doesn't allow most retailers to keep their prices low and offer free shipping. This has many retailers venturing out on the "free returns" path, especially in apparel. A number of retailers we spoke with are testing a flat-rate shipping fee with free returns to see if they can crack the price threshold where shoppers are willing to pay for shipping with an added service. But free shipping remains king.

Social ads and retargeting: They are working, but do they turn off consumers? That's the big question.
Every retailer we spoke with during a roundtable on the topic said that social ads and retargeting (where that pair of boots you've been eyeing on a site magically follows you around the Internet) work and are meeting campaign goals. The larger question many retailers are asking is whether this type of tactic is turning off a large number of shoppers, even if these campaigns are meeting their early goals. Retailers also mentioned that Facebook ads are working very well for them, especially when it comes to new customer acquisition, serving as a complementary channel to SEO when it comes to engaging new customers. While there are always new things to experiment with in retail, standard challenges are top of mind as retailers scramble to get ready for holiday. It will undoubtedly be another record-breaking online shopping season, but as retailers get more and more advanced with each Black Friday, expect some exciting things. This excitement needs to be backed by sound solutions and optimized operations. Then again, consumers are expecting more than ever, so I don't doubt that retailers are already thinking about the possibilities of holiday 2015… and beyond.

Customers who read this article also found value in the following stories:
Personalization for Retail: http://blogs.oracle.com/retail/entry/personalization_for_retail
Shop Direct User Experience Focus Drives Sales: https://blogs.oracle.com/retail/entry/shop_direct_user_experience_focus
Making Waves: Australian Online Retailer SurfStitch: https://blogs.oracle.com/oracleretail/entry/surf_stitch
What's new in Oracle Commerce v11.1 for Retail
What the Content+Commerce Equation is Missing

    Read the article

  • Training on Demand Certification Packages for DBAs

    - by Antoinette O'Sullivan
The demand for Database Administrators continues to grow.* Almost two-thirds of IT hiring managers indicate that they highly value certifications in validating IT skills and expertise.**

* Job satisfaction and DBA work growth rate: CNN Money's 2011 Best Jobs in America survey.
** Survey among nearly 1,700 respondents by CompTIA, the nonprofit trade association for the IT industry, cited in Certification Magazine, Feb. 14, 2012.

Get Certified with Training on Demand. Are you an experienced database professional eager to achieve certification? Is time your most precious resource? Then try our new Training On Demand Certification Value Package with a 20% discount. These all-in-one packages give you everything you need to get certified with success.

Why Training On Demand: expert training from Oracle's top instructors, sophisticated streaming video recording, availability for 90 days, 24 hours a day, 7 days a week, white boarding and training labs for hands-on experience, the ability to start, stop, pause, jump or rewind sections of the course as needed, and Oracle University instructor Q&A. A full-text search leads to the right video fragment in a matter of seconds. Watch this demo to see how it works.

Additional certification resources: Benefits of Oracle Certification, Database Certification Paths, Available Database Certification Exams. Getting certified has never been easier! For assistance contact your local Oracle University Service Desk.

Many organizations deploy both Oracle Database and MySQL side by side to serve different needs, and as a database professional you can find training courses on both topics at Oracle University! Check out the upcoming Oracle Database 11g training courses and MySQL training courses. Even if you're only managing Oracle Databases at this point in time, getting familiar with MySQL Database will broaden your career path, given the growing job demand.
These Value Packages are also available in the following training formats: In-Class, Live Virtual Class and Self-Study. Each includes a FREE exam retake, with savings of 5% on the In-Class Edition and 20% on the Live Virtual Class, Self-Study and Training On Demand Editions.

MySQL Database Administration Value Packages:
  MySQL Database Administrator Certification Value Package (In-Class, Live Virtual Class, Self-Study, Training On Demand)

MySQL Developer Value Packages:
  MySQL Developer Certification Value Package (In-Class, Live Virtual Class)

Oracle Database 10g Value Packages:
  Oracle Database 10g Administrator Certified Associate Certification Value Package (In-Class, Live Virtual Class, Self-Study)
  Oracle Database 10g Administrator Certified Professional Certification Value Package (In-Class, Live Virtual Class, Self-Study)

Oracle Database 11g Value Packages:
  Oracle Database 11g Administrator Certified Associate Certification Value Package (In-Class, Live Virtual Class, Self-Study, Training On Demand)
  Oracle Database 11g Administrator Certified Professional Certification Value Package (In-Class, Live Virtual Class, Self-Study, Training On Demand)
  Exam Prep Seminar Value Package: Oracle Database Admin 1 (Training On Demand)
  Oracle Database 11g Administrator Certified Professional UPGRADE Certification Value Package (Training On Demand)
  Oracle Real Application Clusters 11g and Grid Infrastructure Administration Certified Expert Certification Value Package (Training On Demand)
  Exam Prep Seminar Value Package: Oracle Database Admin 2 (Training On Demand)
  Exam Prep Seminar Value Package: Oracle RAC 11g and Grid Infrastructure Administration (Training On Demand)
  Exam Prep Seminar Value Package: Upgrade Oracle Certified Professional (OCP) to Oracle Database 11g (Training On Demand)

SQL and PL/SQL Value Packages:
  Oracle Database SQL Expert Certification Value Package (In-Class, Live Virtual Class, Self-Study, Training On Demand)
  Exam Prep Seminar Value Package: Oracle Database SQL (Training On Demand)

View our Certification Value Packages and mention this code at the time of booking: E1245. For a full list of MySQL Training courses and events, go to http://oracle.com/education/mysql.

    Read the article

  • Special thanks to everyone that helped me in 2010.

    - by mbcrump
2010 has been a very good year for me and I wanted to create a list and thank everyone for what they have done for me. I also wanted to thank everyone for reading and subscribing to my blog. It is hard to believe that people actually want to read what I write. I feel like I owe a huge thanks to everyone listed below. Looking back upon 2010, I feel that I've grown as a developer and you are part of that reason. Sometimes we get caught up in day to day work and forget to give thanks to those that helped us along the way. The list below is mine; it includes people and companies. This list is obviously not going to include everyone that has helped, just those that have stood out in my mind. When I think back upon 2010, their names keep popping up in my head. So here goes, in no particular order.

People

Dave Campbell – For everything he has done for the Silverlight Community with his Silverlight Cream blog. I can't think of a better person to get recognition at the Silverlight FireStarter event. I also wanted to thank him for spending several hours of his time helping me track down a bug in my feedburner account.

Victor Gaudioso – For his large collection of video tutorials on his blog and the passion and enthusiasm he has for Silverlight. We have talked on the phone and I've never met anyone so fired up for Silverlight.

Kunal Chowdhury – Kunal has always been available for me to bounce ideas off of. Kunal has also answered a lot of questions that stumped me. His blog and CodeProject articles have been a great help to me and the Silverlight Community.

Glen Gordon – I was looking frantically for a Windows Phone 7 several months before release and Glen found one for me. This allowed me to start a blog series on the Windows Phone 7 hardware and developing an application from start to finish that Scott Guthrie retweeted.

Jeff Blankenburg – For listening to my complaints in the early stages of Windows Phone 7. Jeff was always very polite and gave me his cell phone number to talk it over. He also walked me through several problems that I was having early on.

Pete Brown – For writing Silverlight 4 in Action. This book is definitely a labor of love. I followed Pete on Twitter as he was writing it and he spent a lot of late nights and weekends working on it. I felt a lot smarter after reading it the first time. The second time was even better.

John Papa – For all of his work on the Silverlight Firestarter and the Silverlight community in general. He has also helped me on a personal level with several things.

Daniel Heisler – For putting up with me the past year while we worked on many .NET projects together in 2010.

Alvin Ashcraft – For publishing a daily blog post on the best of .NET links. He has linked to my site many times and I really appreciate what he does for the community.

Chris Alcock – For publishing the Morning Brew every weekday. I remember when I first appeared on his site, I started getting hundreds of hits on my site and wondered if I was getting a DOS attack or something. It was great to find out that Chris had linked to one of my articles.

Joel Cochran – For spending a week teaching "Blend-O-Rama". This was one of my favorite sessions this year. I learned a lot about Expression Blend from it, and the best part was that it was free and during lunchtime.

Jeremy Likness – Jeremy is smart – very smart. I have learned a lot from Jeremy over the past year.
He is also involved in the Silverlight community in every way possible, from forums to blog posts to screencasts to open source. It goes on and on.

The people that I met at VSLive Orlando 2010. I had a great time chatting with Walt Ritscher, Wallace McClure, Tim Huckabee and David Platt. Also a special thanks to all of my friends on Twitter like @wilhil, @DBVaughan, @DataArtist, @wbm, @DirkStrauss and @rsringeri and many many more.

Software Companies / Events – many of which gave me FREE stuff. =)

Microsoft (3) – I was sent a free coupon code by Microsoft to take the Silverlight 4 Beta Exam. I jumped on the offer and took the exam. It was great being selected to try out the exam before it went public, even though Microsoft eventually published a universal coupon code for everyone. I am still waiting to find out if I passed the exam; my fingers are crossed. Microsoft also reached out to me with some questions regarding the .NET community – I've never had a company contact me with such interest in the community. And Microsoft held a contest where 75 people could win a $100 gift certificate and a T-shirt for submitting a Windows Phone 7 app; I submitted my app and won. Finally, there were all of the free launch events this year (Windows Phone 7, Visual Studio 2010, ASP.NET MVC).

Wintellect – For providing an awesome day of free technical training called T.E.N. Where else can you get free training from some of the best programmers in the world? I also won a contest from them that included a NETAdvantage Ultimate License from Infragistics.

VSLive – I attended the Orlando 2010 conference and it was the best developer's conference that I have ever attended. I got to know a lot of people at this conference and hung out with many wonderful speakers. I live tweeted the event and while it may have annoyed some, the organizers of VSLive loved it. I won the contest on Twitter and they invited me back to the 2011 session of my choice. This is a very nice gift and I really appreciate the generosity.

BarcodeLib.com – For providing free barcode generating tools for a non-profit ASP.NET project that I was working on. Their third-party controls really made this a breeze compared to my existing solution.

NDepend – It is absolutely the best tool to improve code quality. The product is extremely large and I would recommend heading over to their site to check it out.

Silverlight Spy – I was writing a blog post on Silverlight Spy and Koen Zwikstra provided a FREE license to me. If you ever wanted to peek inside of a Silverlight application then this is the tool for you. He is also working on a version that will support OOB and Windows Phone 7. I would recommend checking out his site.

Birmingham .NET Users Group / Silverlight Nights User Group – It takes a lot of time to put together a user group meeting every month, yet it always seems to happen. I don't want to name names for fear of leaving someone out, but both of these user groups are excellent if you live in the Birmingham, Alabama area.

Publishing Companies

Manning Publishing – For giving me early access to Silverlight 4 in Action by Pete Brown. It was really nice to be able to read this awesome book while Pete was writing it. I was also one of the first people to publish a review of the book.

Sams Publishing and DZone – For providing a copy of Silverlight 4 Unleashed by Laurent Bugnion for me to review for their site. The review is coming in January 2011.
Special Shoutout to the following 3rd Party Silverlight Controls It has been a great pleasure to work with the following companies on 3rd Party Control Giveaways every month. It always amazes me how every 3rd Party Control company is so eager to help out the community. I’ve never been turned down by any of these companies! These giveaways have sparked a lot of interest in Silverlight and hopefully I can continue giving away a new set every month. If you are a 3rd Party Control company and are interested in participating in these giveaways then please email me at mbcrump29[at]gmail[d0t].com. The companies below have already participated in my giveaways: Infragistics (December 2010) - Win a set of Infragistics Silverlight Controls with Data Visualization!  Mindscape (November 2010) - Mindscape Silverlight Controls + Free Mega Pack Contest Telerik (October 2010) - Win Telerik RadControls for Silverlight! ($799 Value) Again, I just wanted to say Thanks to everyone for helping me grow as a developer.  Subscribe to my feed

    Read the article

  • World Record Oracle Business Intelligence Benchmark on SPARC T4-4

    - by Brian
    Oracle's SPARC T4-4 server configured with four SPARC T4 3.0 GHz processors delivered the first and best performance of 25,000 concurrent users on Oracle Business Intelligence Enterprise Edition (BI EE) 11g benchmark using Oracle Database 11g Release 2 running on Oracle Solaris 10. A SPARC T4-4 server running Oracle Business Intelligence Enterprise Edition 11g achieved 25,000 concurrent users with an average response time of 0.36 seconds with Oracle BI server cache set to ON. The benchmark data clearly shows that the underlying hardware, SPARC T4 server, and the Oracle BI EE 11g (11.1.1.6.0 64-bit) platform scales within a single system supporting 25,000 concurrent users while executing 415 transactions/sec. The benchmark demonstrated the scalability of Oracle Business Intelligence Enterprise Edition 11g 11.1.1.6.0, which was deployed in a vertical scale-out fashion on a single SPARC T4-4 server. Oracle Internet Directory configured on SPARC T4 server provided authentication for the 25,000 Oracle BI EE users with sub-second response time. A SPARC T4-4 with internal Solid State Drive (SSD) using the ZFS file system showed significant I/O performance improvement over traditional disk for the Web Catalog activity. In addition, ZFS helped get past the UFS limitation of 32767 sub-directories in a Web Catalog directory. The multi-threaded 64-bit Oracle Business Intelligence Enterprise Edition 11g and SPARC T4-4 server proved to be a successful combination by providing sub-second response times for the end user transactions, consuming only half of the available CPU resources at 25,000 concurrent users, leaving plenty of head room for increased load. The Oracle Business Intelligence on SPARC T4-4 server benchmark results demonstrate that comprehensive BI functionality built on a unified infrastructure with a unified business model yields best-in-class scalability, reliability and performance. Oracle BI EE 11g is a newer version of Business Intelligence Suite with richer and superior functionality. Results produced with Oracle BI EE 11g benchmark are not comparable to results with Oracle BI EE 10g benchmark. Oracle BI EE 11g is a more difficult benchmark to run, exercising more features of Oracle BI. Performance Landscape Results for the Oracle BI EE 11g version of the benchmark. Results are not comparable to the Oracle BI EE 10g version of the benchmark. Oracle BI EE 11g Benchmark System Number of Users Response Time (sec) 1 x SPARC T4-4 (4 x SPARC T4 3.0 GHz) 25,000 0.36 Results for the Oracle BI EE 10g version of the benchmark. Results are not comparable to the Oracle BI EE 11g version of the benchmark. Oracle BI EE 10g Benchmark System Number of Users 2 x SPARC T5440 (4 x SPARC T2+ 1.6 GHz) 50,000 1 x SPARC T5440 (4 x SPARC T2+ 1.6 GHz) 28,000 Configuration Summary Hardware Configuration: SPARC T4-4 server 4 x SPARC T4-4 processors, 3.0 GHz 128 GB memory 4 x 300 GB internal SSD Storage Configuration: "> Sun ZFS Storage 7120 16 x 146 GB disks Software Configuration: Oracle Solaris 10 8/11 Oracle Solaris Studio 12.1 Oracle Business Intelligence Enterprise Edition 11g (11.1.1.6.0) Oracle WebLogic Server 10.3.5 Oracle Internet Directory 11.1.1.6.0 Oracle Database 11g Release 2 Benchmark Description Oracle Business Intelligence Enterprise Edition (Oracle BI EE) delivers a robust set of reporting, ad-hoc query and analysis, OLAP, dashboard, and scorecard functionality with a rich end-user experience that includes visualization, collaboration, and more. 
The Oracle BI EE benchmark test used five different business user roles - Marketing Executive, Sales Representative, Sales Manager, Sales Vice-President, and Service Manager. These roles included a maximum of 5 different pre-built dashboards. Each dashboard page had an average of 5 reports in the form of a mix of charts, tables and pivot tables, returning anywhere from 50 rows to approximately 500 rows of aggregated data. The test scenario also included drill-down into multiple levels from a table or chart within a dashboard. The benchmark test scenario uses a typical business user sequence of dashboard navigation, report viewing, and drill down. For example, a Service Manager logs into the system and navigates to his own set of dashboards using Service Manager. The BI user selects the Service Effectiveness dashboard, which shows him four distinct reports, Service Request Trend, First Time Fix Rate, Activity Problem Areas, and Cost Per Completed Service Call spanning 2002 to 2005. The user then proceeds to view the Customer Satisfaction dashboard, which also contains a set of 4 related reports, drills down on some of the reports to see the detail data. The BI user continues to view more dashboards – Customer Satisfaction and Service Request Overview, for example. After navigating through those dashboards, the user logs out of the application. The benchmark test is executed against a full production version of the Oracle Business Intelligence 11g Applications with a fully populated underlying database schema. The business processes in the test scenario closely represent a real world customer scenario. See Also SPARC T4-4 Server oracle.com OTN Oracle Business Intelligence oracle.com OTN Oracle Database 11g Release 2 Enterprise Edition oracle.com OTN WebLogic Suite oracle.com OTN Oracle Solaris oracle.com OTN Disclosure Statement Copyright 2012, Oracle and/or its affiliates. All rights reserved. Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners. Results as of 30 September 2012.

    Read the article

  • HTG Explains: Why is Printer Ink So Expensive?

    - by Chris Hoffman
Printer ink is expensive – more expensive per drop than fine champagne or even human blood. If you haven't gone paperless, you'll notice that you're paying a lot for new ink cartridges – more than seems reasonable. Purchasing the cheapest inkjet printer and buying official ink cartridge replacements is the most expensive thing you can do. There are ways to save money on ink if you must continue to print documents.

Cheap Printers, Expensive Ink

Inkjet printers are often very cheap. That's because they're sold at cost, or even at a loss – the manufacturer either makes no profit from the printer itself or loses money. The manufacturer will make most of its money from the printer cartridges you buy later. Even if the company does make a bit of money from each printer sold, it makes a much larger profit margin on ink. Rather than selling you a printer that may be rather expensive, they want to sell you a cheap printer and make money on an ongoing basis by providing expensive printer ink. It's been compared to the razor model – sell a razor cheaply and mark up the razor blades. Rather than making a one-time profit on the razor, you'll make continuing profit as the customer keeps buying razor blade replacements – or ink, in this case. Many printer manufacturers go out of their way to make it difficult for you to use unofficial ink cartridges, building microchips into their official ink cartridges. If you use an unofficial cartridge or refill an official cartridge, the printer may refuse to use it. Lexmark once argued in court that creating an unofficial microchip to bypass this restriction on third-party ink would violate Lexmark's copyright and be illegal under the US DMCA. Luckily, they lost this argument.

What Printer Companies Say

Printer companies have put forth their own arguments in the past, attempting to justify the high cost of official ink cartridges and the microchips that block any competition. In a Computer World story from 2010, HP argued that they spend a billion dollars each year on "ink research and development." They point out that printer ink "must be formulated to withstand heating to 300 degrees, vaporization, and being squirted at 30 miles per hour, at a rate of 36,000 drops per second, through a nozzle one third the size of a human hair. After all that it must dry almost instantly on the paper." They also argue that printers have become more efficient and use less ink to print, while third-party cartridges are less reliable. Companies that use microchips in their ink cartridges argue that only the microchip has the ability to enforce an expiration date, preventing consumers from using old ink cartridges. There's something to all these arguments, sure – but they don't seem to justify the sky-high cost of printer ink or the restriction on using third-party or refilled cartridges.

Saving Money on Printing

Ultimately, the price of something is what people are willing to pay, and printer companies have found that most consumers are willing to pay this much for ink cartridge replacements. Try not to fall for it: don't buy the cheapest inkjet printer. Consider your needs when buying a printer and do some research. You'll save more money in the long run. Consider these basic tips to save money on printing:

Buy Refilled Cartridges: Refilled cartridges from third parties are generally much cheaper.
Printer companies warn us away from these, but they often work very well. Refill Your Own Cartridges: You can get do-it-yourself kits for refilling your own printer ink cartridges, but this can be messy. Your printer may refuse to accept a refilled cartridge if the cartridge contains a microchip. Switch to a Laser Printer: Laser printers use toner, not ink cartridges. If you print a lot of black and white documents, a laser printer can be cheaper. Buy XL Cartridges: If you are buying official printer ink cartridges, spend more money each time. The cheapest ink cartridges won’t contain much ink at all, while larger “XL” ink cartridges will contain much more ink for only a bit more money. It’s often cheaper to buy in bulk. Avoid Printers With Tri-Color Ink Cartridges: If you’re printing color documents, you’ll want to get a printer that uses separate ink cartridges for all its colors. For example, let’s say your printer has a “Color” cartridge that contains blue, green, and red ink. If you print a lot of blue documents and use up all your blue ink, the Color cartridge will refuse to function — now all you can do is throw away your cartridge and buy a new one, even if the green and red ink chambers are full. If you had a printer with separate color cartridges, you’d just have to replace the blue cartridge. If you’ll be buying official ink cartridges, be sure to compare the cost of cartridges when buying a printer. The cheapest printer may be more expensive in the long run. Of course, you’ll save the most money if you stop printing entirely and go paperless, keeping digital copies of your documents instead of paper ones. Image Credit: Cliva Darra on Flickr     

    Read the article

  • The Internet of Things & Commerce: Part 2 -- Interview with Brian Celenza, Commerce Innovation Strategist

    - by Katrina Gosek, Director | Commerce Product Strategy-Oracle
Internet of Things & Commerce Series: Part 2 (of 3)

Welcome back to the second installment of my three-part series on the Internet of Things & Commerce. A few weeks ago, I wrote "The Next 7,000 Days" about how we've become embedded in a digital architecture in the last 7,000 days since the birth of the internet – an architecture that every day ties the massive expanse of the internet ever more closely to our physical lives. This blog series explores how this new blend of virtual and material will change how we shop and how businesses sell. Now enjoy reading my interview with Brian Celenza, one of the chief strategists in our Oracle Commerce innovation group. He comments on the past, present, and future of how the growing Internet of Things relates and will relate to the buying and selling of goods on and offline.

QUESTION: You probably have one of the coolest jobs on our team, Brian – and frankly, one of the coolest jobs in our industry. As part of the innovation team for Oracle Commerce, you're regularly working on bold features and groundbreaking commerce-focused experiences for our vision demos. As you look back over the past couple of years, what is the biggest trend (or trends) you've seen in digital commerce that started to bring us closer to this idea of what people are calling an "Internet of Things"?

Brian: Well, as you look back over the last couple of years, the speed at which change in our industry has moved looks like one of those blurred movement photos – you know, the ones where the landscape blurs because the observer is moving so quickly your eye focus can't keep up. But one thing that is absolutely clear is that the biggest catalyst for that speed of change – especially over the last three years – has been mobile. Mobile technology changed everything. Over the last three years the entire thought process of how to sell on (and offline) has shifted because of mobile technology advances, particularly for eCommerce professionals who have started to move past the notion of "channels" for selling goods to this notion of "Mobile First"… then the Web site. Or, more accurately, that everything – smartphones, web, store, tablet – is just one channel or has to act like one singular access point to the same product catalog, information and content. The most innovative eCommerce professionals realized some time ago that it's not ideal to build an eCommerce Web site and then build everything on top of or off of it. Rather, they want to build an eCommerce API and then integrate it with all other systems. To accomplish this, they are leveraging all the latest mobile technologies and the possibilities mobile technology has opened up: 4G and LTE, GPS, bluetooth, touch screens, apps, html5…

How has this all started to come together for shopping experiences on and offline? Well, to give you a personal example, I remember visiting an Apple store a few years ago and being amazed that I didn't have to wait in line because a store associate knew everything about me from my ID – right there on the sales floor – and could check me out anywhere. Then just a few months later (when, like any good addict, I went back to get the latest and greatest new gadget), I felt like I was stealing it because I could check myself out with my smartphone. I didn't even need to see a sales associate OR go to a cash register. Amazing.
And since then, all sorts of companies across all different types of industries – from food service to apparel – are starting to see mobile payments in the billions of dollars, thanks not only to the convenience factor but to smart loyalty rewards programs as well. These are just some really simple current examples that come to mind. So many different things have happened in the last couple of years that it's hard to really absorb it all quickly – because as soon as you do, everything changes again! Just like that blurry speed photo image. For eCommerce, however, this type of new environment underscores the importance of building an eCommerce API – a platform that has services you can tap into and build on as the landscape changes at a fever pitch. It's a mobile-first perspective. A web service perspective – particularly if you are thinking of how to engage customers across digital and physical spaces.

QUESTION: Thanks for bringing us into the present – some really great examples you gave there to put things into perspective. So what do you see as the biggest trend right now around the "Internet of Things" – and what's coming in the next few years?

Brian: Honestly, even sitting where I am in the innovation group, it's hard to look out even 12 months because, well, I don't even think we've fully caught up with what is possible now. But I can definitely say that in the last 12 months and in the coming 12 months, in the technology and eCommerce world it's all about iBeacons. iBeacons are awesome tools we have right now to tie together physical and digital shopping experiences. They know exactly where you are as a shopper and can communicate that to businesses. Currently there seem to be two camps of thought around iBeacons. First, many people are thinking of them like an "indoor GPS", which to be fair they literally are. The use case this first camp envisions for iBeacons is primarily advertising and marketing, so they use iBeacons to push location-based promotions to customers if they are close to a store or in a store. You may have seen these types of mobile promotions start to pop up occasionally on your smartphone as you pass by a store you've bought from in the past. That's the work of iBeacons. But in my humble opinion, these promotions probably come too early in the customer journey, and although they may be well timed and work to "convert" in some cases, I imagine in most they just erode customer trust, because they are kind of a "one-size-fits-all" solution rather than one that takes into account what exactly the customer might be looking for in that particular moment. Maybe the customer just wants more information, and a promotion is way too soon for that type of customer. The second camp is more in line with where my thinking falls. In this case, businesses take a more sensitive approach with iBeacons to customers' needs. Instead of throwing out a "one-size-fits-all" promotion to any passer-by, the use case is more around looking at the physical proximity of a customer as an opportunity to provide a service: show expert reviews on a product they may be looking at in a particular aisle of a store, offer the opportunity to compare prices (and then offer a promotion), or signal an in-store associate if a customer has been in the store for more than 10 minutes in one place. These are all less intrusive, more value-driven uses of iBeacons. And they are more about building customer trust through service.
To take this example a bit further into the future realm of "Big Data" and the "Internet of Things," businesses could actually use the Oracle Commerce Platform and iBeacons to "silently" track customer movement within the store to provide higher quality service. And this doesn't have to be creepy or intrusive. Simply put, if a customer has been in a particular department or aisle for more than 5 or 10 minutes, an in-store associate could come over and offer some assistance, already knowing the customer's preferences from their online profile and maybe even seeing the items in a shopping cart they started at home. None of this has to be revealed to the customer, but it certainly could boost the level of service an in-store sales associate could provide. Or, in another futuristic example, stores could use the digital footprint of the physical store transmitted by iBeacons to generate heat maps of the store that could be tracked over time. Imagine how much you could find out about which parts of the store are busier during certain parts of the day or seasons. This could completely revolutionize how physical merchandising is deployed or where certain high-value or new items are placed. This use of iBeacons could also help businesses figure out if customers are getting held up in certain parts of the store during busy days like Black Friday. If long lines are causing customers to bounce from a physical store and leave those holiday gifts behind, maybe having employees offer mobile checkout as an option could remove the cash register bottleneck. But going back to my original statement, it's all still very early in the story for iBeacons. The hardware manufacturers are still very new and there is still not one clear standard. Honestly, it all goes back to building and maintaining an extensible and flexible platform for anywhere engagement. What you're building today should allow you to rapidly take advantage of whatever unimaginable use cases wait around the corner.

I hope you enjoyed the brief interview with Brian. It's really awesome to have such smart and innovation-minded individuals on our Oracle Commerce innovation team. Please join me again in a few weeks for Part 3 of this series, where I interview one of the product managers on our team about how the blending of digital and in-store selling is influencing our product development and vision.
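As a developer-oriented aside (not part of Brian's interview): the device-side plumbing for the proximity scenarios described above is typically built on Apple's CoreLocation beacon APIs. The sketch below is a minimal, hypothetical example; the UUID, region identifier, and the "act only when near" rule are illustrative assumptions, not anything from the interview.

import CoreLocation

final class StoreBeaconMonitor: NSObject, CLLocationManagerDelegate {
    private let manager = CLLocationManager()
    // Hypothetical UUID shared by the store's beacon fleet
    private let region = CLBeaconRegion(
        proximityUUID: UUID(uuidString: "E2C56DB5-DFFB-48D2-B060-D0F5A71096E0")!,
        identifier: "store-floor")

    func start() {
        manager.delegate = self
        manager.requestAlwaysAuthorization()      // ask for location permission
        manager.startMonitoring(for: region)      // enter/exit events for the whole region
        manager.startRangingBeacons(in: region)   // per-beacon proximity updates while in range
    }

    func locationManager(_ manager: CLLocationManager,
                         didRangeBeacons beacons: [CLBeacon],
                         in region: CLBeaconRegion) {
        // Act only when the shopper is actually close to a specific beacon (aisle),
        // e.g. surface reviews or quietly notify an in-store associate,
        // rather than pushing a promotion the moment anyone walks past the door.
        if let closest = beacons.first,
           closest.proximity == .near || closest.proximity == .immediate {
            // Call your own backend here, e.g. using closest.major / closest.minor
            // to identify the department or aisle (service names are hypothetical).
        }
    }
}

Whether the app then shows a promotion, fetches reviews, or pings an associate is exactly the "camp one vs. camp two" design choice Brian describes; the ranging code itself is the same either way.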

    Read the article

  • SQL SERVER – Fundamentals of Columnstore Index

    - by pinaldave
There are two kinds of storage in a database: row store and column store. A row store does exactly as the name suggests – it stores rows of data on a page – and a column store stores all the data in a column on the same page. These columns are much easier to search: instead of a query searching all the data in an entire row whether the data is relevant or not, column store queries need only search a much smaller number of columns. This means major increases in search speed and more efficient hard drive use. Additionally, columnstore indexes are heavily compressed, which translates to even greater memory savings and faster searches. I am sure this looks very exciting, but it does not mean that you should convert every single index from row store to column store. One has to understand the proper places to use row store or column store indexes. Let us understand in this article what is different about the columnstore type of index.

Column store indexes are powered by Microsoft's VertiPaq technology. However, all you really need to know is that this method of storing data as columns on a single page is much faster and more efficient. Creating a column store index is very easy, and you don't have to learn new syntax to create one. You just need to specify the keyword "COLUMNSTORE" and enter the rest of the statement as you normally would. Keep in mind that once you add a column store index to a table, though, you cannot delete, insert or update the data – it is READ ONLY (a sketch of one way to handle data loads appears after the demo script below). However, since column store will mainly be used for data warehousing, this should not be a big problem. You can always use partitioning to avoid rebuilding the index.

A columnstore index stores each column in a separate set of disk pages, rather than storing multiple rows per page as data traditionally has been stored. The difference between the column store and row store approaches is illustrated below: in the case of row store indexes, multiple pages contain multiple rows, with the columns spanning across multiple pages; in the case of column store indexes, multiple pages contain multiple single columns. This means only the columns needed to solve a query will be fetched from disk. Additionally, there is a good chance that there will be redundant data within a single column, which further helps compress the data; this has a positive effect on the buffer hit rate, as most of the data stays in memory and does not need to be retrieved repeatedly.

Let us see a small example of how a columnstore index improves the performance of a query on a large table. As a first step let us create a dataset which is large enough to show the performance impact of a columnstore index. The time taken to create the sample data may vary on different computers based on the available resources.
USE AdventureWorks
GO
-- Create New Table
CREATE TABLE [dbo].[MySalesOrderDetail](
    [SalesOrderID] [int] NOT NULL,
    [SalesOrderDetailID] [int] NOT NULL,
    [CarrierTrackingNumber] [nvarchar](25) NULL,
    [OrderQty] [smallint] NOT NULL,
    [ProductID] [int] NOT NULL,
    [SpecialOfferID] [int] NOT NULL,
    [UnitPrice] [money] NOT NULL,
    [UnitPriceDiscount] [money] NOT NULL,
    [LineTotal] [numeric](38, 6) NOT NULL,
    [rowguid] [uniqueidentifier] NOT NULL,
    [ModifiedDate] [datetime] NOT NULL
) ON [PRIMARY]
GO
-- Create clustered index
CREATE CLUSTERED INDEX [CL_MySalesOrderDetail] ON [dbo].[MySalesOrderDetail]
    ([SalesOrderDetailID])
GO
-- Create Sample Data Table
-- WARNING: This query may run up to 2-10 minutes based on your system's resources
INSERT INTO [dbo].[MySalesOrderDetail]
SELECT S1.*
FROM Sales.SalesOrderDetail S1
GO 100

Now let us do a quick performance test. I have kept STATISTICS IO ON to measure how much IO the following queries take. In my test I will first run the query which uses the regular index and note its IO usage. After that we will create the columnstore index and measure the IO of the same query.

-- Performance Test
-- Comparing Regular Index with ColumnStore Index
USE AdventureWorks
GO
SET STATISTICS IO ON
GO
-- Select Table with regular Index
SELECT ProductID, SUM(UnitPrice) SumUnitPrice, AVG(UnitPrice) AvgUnitPrice,
       SUM(OrderQty) SumOrderQty, AVG(OrderQty) AvgOrderQty
FROM [dbo].[MySalesOrderDetail]
GROUP BY ProductID
ORDER BY ProductID
GO
-- Table 'MySalesOrderDetail'. Scan count 1, logical reads 342261, physical reads 0, read-ahead reads 0.

-- Create ColumnStore Index
CREATE NONCLUSTERED COLUMNSTORE INDEX [IX_MySalesOrderDetail_ColumnStore]
ON [MySalesOrderDetail] (UnitPrice, OrderQty, ProductID)
GO
-- Select Table with Columnstore Index
SELECT ProductID, SUM(UnitPrice) SumUnitPrice, AVG(UnitPrice) AvgUnitPrice,
       SUM(OrderQty) SumOrderQty, AVG(OrderQty) AvgOrderQty
FROM [dbo].[MySalesOrderDetail]
GROUP BY ProductID
ORDER BY ProductID
GO

It is very clear from the results that the query performs extremely fast after creating the columnstore index. The number of pages it has to read to run the query is drastically reduced, as the columns needed by the query are stored on the same pages and the query does not have to go through every single page to read those columns. If we enable the execution plan and compare, we can see that the columnstore index performs far better than the regular index in this case. Let us clean up the database.

-- Cleanup
DROP INDEX [IX_MySalesOrderDetail_ColumnStore] ON [dbo].[MySalesOrderDetail]
GO
TRUNCATE TABLE dbo.MySalesOrderDetail
GO
DROP TABLE dbo.MySalesOrderDetail
GO

In future posts we will see cases where a columnstore index is not the appropriate solution, as well as a few other tricks and tips for the columnstore index. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, PostADay, SQL, SQL Authority, SQL Index, SQL Optimization, SQL Performance, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, T SQL, Technology
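As a side note (not from the original article): one hedged way to deal with the read-only behavior mentioned above, if you are not using partition switching, is to disable the columnstore index before a data load and rebuild it afterwards. The sketch below reuses the demo index and table names from the script; the load step is only a placeholder.

-- Sketch only: make the demo table writable again for a load, then re-enable the columnstore index
ALTER INDEX [IX_MySalesOrderDetail_ColumnStore] ON [dbo].[MySalesOrderDetail] DISABLE
GO
-- ... run your INSERT / UPDATE / DELETE statements here ...
ALTER INDEX [IX_MySalesOrderDetail_ColumnStore] ON [dbo].[MySalesOrderDetail] REBUILD
GO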

    Read the article

  • A Closable jQuery Plug-in

    - by Rick Strahl
In my client side development I deal a lot with content that pops over the main page, be it data entry 'windows', dialogs or simple pop-up notes. In most cases this behavior goes with draggable windows, but sometimes it's also useful to have closable behavior on static page content that the user can choose to hide or otherwise make invisible or fade out. Here's a small jQuery plug-in that provides .closable() behavior to most elements by using either an image that is provided or – more appropriately – a CSS class to define the picture box layout.

/*
 * Closable
 *
 * Makes selected DOM elements closable by making them
 * invisible when the close icon is clicked
 *
 * Version 1.01
 * @requires jQuery v1.3 or later
 *
 * Copyright (c) 2007-2010 Rick Strahl
 * http://www.west-wind.com/
 *
 * Licensed under the MIT license:
 * http://www.opensource.org/licenses/mit-license.php
 *
 * Support CSS:
 *
 * .closebox {
 *     position: absolute; right: 4px; top: 4px;
 *     background-image: url(images/close.gif);
 *     background-repeat: no-repeat;
 *     width: 14px; height: 14px;
 *     cursor: pointer;
 *     opacity: 0.60; filter: alpha(opacity="80");
 * }
 * .closebox:hover {
 *     opacity: 0.95; filter: alpha(opacity="100");
 * }
 *
 * Options:
 * handle        Element to place the closebox into (like, say, a header). Use if the main element
 *               and the closebox container are two different elements.
 * closeHandler  Function called when the close box is clicked. Return true to close the box,
 *               return false to keep it visible.
 * cssClass      The CSS class to apply to the close box DIV or IMG tag.
 * imageUrl      Allows you to specify an explicit IMG url that displays the close icon.
 *               If used, bypasses CSS image styling.
 * fadeOut       Optional fadeOut speed. Default: no fade out occurs.
 */
(function ($) {
    $.fn.closable = function (options) {
        var opt = {
            handle: null,
            closeHandler: null,
            cssClass: "closebox",
            imageUrl: null,
            fadeOut: null
        };
        $.extend(opt, options);

        return this.each(function (i) {
            var el = $(this);

            // the closebox is positioned absolutely, so the host element needs relative positioning
            var pos = el.css("position");
            if (!pos || pos == "static")
                el.css("position", "relative");

            var h = opt.handle ? $(opt.handle).css({ position: "relative" }) : el;

            // use an explicit image if provided, otherwise a CSS-styled div
            var div = opt.imageUrl ?
                $("<img>").attr("src", opt.imageUrl).css("cursor", "pointer") :
                $("<div>");

            div.addClass(opt.cssClass)
               .click(function (e) {
                   if (opt.closeHandler)
                       if (!opt.closeHandler.call(this, e))
                           return;
                   if (opt.fadeOut)
                       $(el).fadeOut(opt.fadeOut);
                   else
                       $(el).hide();
               });

            if (opt.imageUrl)
                div.css("background-image", "none");

            h.append(div);
        });
    }
})(jQuery);

The plugin can be applied against any selector that is a container (typically a div tag). The close image or close box is typically provided by way of a CSS class – .closebox by default – which supplies the image as part of the CSS styling. The default styling for the box looks something like this:

.closebox {
    position: absolute;
    right: 4px;
    top: 4px;
    background-image: url(images/close.gif);
    background-repeat: no-repeat;
    width: 14px;
    height: 14px;
    cursor: pointer;
    opacity: 0.60;
    filter: alpha(opacity="80");
}
.closebox:hover {
    opacity: 0.95;
    filter: alpha(opacity="100");
}

Alternately you can also supply an image URL which overrides the background image in the style sheet.
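For instance, a minimal call might look like the following (the element ID and image path are hypothetical; the options are the ones documented in the plug-in header above):

// attach closable behavior to a hypothetical notice panel
$("#divNotice").closable({
    imageUrl: "images/close.gif",   // explicit close icon instead of the .closebox CSS background
    fadeOut: "slow",                // fade the panel out rather than hiding it instantly
    closeHandler: function (e) {
        // do any cleanup here; return false to keep the panel visible
        return true;
    }
});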
I use this plug-in mostly on pop up windows that can be closed, but it’s also quite handy for remove/delete behavior in list displays like this: you can find this sample here to look to play along: http://www.west-wind.com/WestwindWebToolkit/Samples/Ajax/AmazonBooks/BooksAdmin.aspx For closable windows it’s nice to have something reusable because in my client framework there are lots of different kinds of windows that can be created: Draggables, Modal Dialogs, HoverPanels etc. and they all use the client .closable plug-in to provide the closable operation in the same way with a few options. Plug-ins are great for this sort of thing because they can also be aggregated and so different components can pick and choose the behavior they want. The window here is a draggable, that’s closable and has shadow behavior and the server control can simply generate the appropriate plug-ins to apply to the main <div> tag: $().ready(function() { $('#ctl00_MainContent_panEditBook') .closable({ handle: $('#divEditBook_Header') }) .draggable({ dragDelay: 100, handle: '#divEditBook_Header' }) .shadow({ opacity: 0.25, offset: 6 }); }) The window is using the default .closebox style and has its handle set to the header bar (Book Information). The window is just closable to go away so no event handler is applied. Actually I cheated – the actual page’s .closable is a bit more ugly in the sample as it uses an image from a resources file: .closable({ imageUrl: '/WestWindWebToolkit/Samples/WebResource.axd?d=TooLongAndNastyToPrint', handle: $('#divEditBook_Header')}) so you can see how to apply a custom image, which in this case is generated by the server control wrapping the client DragPanel. More interesting maybe is to apply the .closable behavior to list scenarios. For example, each of the individual items in the list display also are .closable using this plug-in. Rather than having to define each item with Html for an image, event handler and link, when the client template is rendered the closable behavior is attached to the list. Here I’m using client-templating and the code that this is done with looks like this: function loadBooks() { showProgress(); // Clear the content $("#divBookListWrapper").empty(); var filter = $("#" + scriptVars.lstFiltersId).val(); Proxy.GetBooks(filter, function(books) { $(books).each(function(i) { updateBook(this); showProgress(true); }); }, onPageError); } function updateBook(book,highlight) { // try to retrieve the single item in the list by tag attribute id var item = $(".bookitem[tag=" +book.Pk +"]"); // grab and evaluate the template var html = parseTemplate(template, book); var newItem = $(html) .attr("tag", book.Pk.toString()) .click(function() { var pk = $(this).attr("tag"); editBook(this, parseInt(pk)); }) .closable({ closeHandler: function(e) { removeBook(this, e); }, imageUrl: "../../images/remove.gif" }); if (item.length > 0) item.after(newItem).remove(); else newItem.appendTo($("#divBookListWrapper")); if (highlight) { newItem .addClass("pulse") .effect("bounce", { distance: 15, times: 3 }, 400); setTimeout(function() { newItem.removeClass("pulse"); }, 1200); } } Here the closable behavior is applied to each of the items along with an event handler, which is nice and easy compared to having to embed the right HTML and click handling into each item in the list individually via markup. 
Ideally though (and these posts make me realize this often a little late) I probably should set up a custom cssClass to handle the rendering – maybe a CSS class called .removebox that only changes the image from the default box image. This example also hooks up an event handler that is fired in response to the close. In the list I need to know when the remove button is clicked so I can fire off a service call to the server to actually remove the item from the database. The handler code can also return false to indicate that the window should not be closed; returning true will close the window. You can find more information about the .closable class behavior and options here: .closable Documentation

Plug-ins make Server Control JavaScript much easier

I find this plug-in immensely useful, especially as part of server control code, because it tremendously simplifies the code that has to be generated server side. This is true of plug-ins in general, which make it so much easier to create simple server code that only generates plug-in options rather than full blocks of JavaScript code. For example, here's the relevant code from the DragPanel server control which generates the .closable() behavior:

if (this.Closable && !string.IsNullOrEmpty(DragHandleID))
{
    string imageUrl = this.CloseBoxImage;
    if (imageUrl == "WebResource")
        imageUrl = ScriptProxy.GetWebResourceUrl(this, this.GetType(), ControlResources.CLOSE_ICON_RESOURCE);

    StringBuilder closableOptions = new StringBuilder("imageUrl: '" + imageUrl + "'");

    if (!string.IsNullOrEmpty(this.DragHandleID))
        closableOptions.Append(",handle: $('#" + this.DragHandleID + "')");

    if (!string.IsNullOrEmpty(this.ClientDialogHandler))
        closableOptions.Append(",handler: " + this.ClientDialogHandler);

    if (this.FadeOnClose)
        closableOptions.Append(",fadeOut: 'slow'");

    startupScript.Append(@" .closable({ " + closableOptions + "})");
}

The same sort of block is then used for .draggable and .shadow, which simply set options. Compared to the code I used to have in pre-jQuery versions of my JavaScript toolkit this is a walk in the park. In those days there was a bunch of JS generation which was ugly to say the least. I know a lot of folks frown on using server controls, especially when the UI is client centric as this example is. However, I do feel that server controls can greatly simplify the process of getting the right behavior attached, with the help of IntelliSense. Often the markup approach is easier, especially if you are dealing with complex, multiple plug-in associations that express more easily as property values on a control. Regardless of whether server controls are your thing or not, this plug-in can be useful in many scenarios. Even in simple client-only scenarios, using a plug-in with a few simple parameters is nicer and more consistent than creating the HTML markup over and over again. I hope some of you find this even a small bit as useful as I have.

Related Links
Download jquery.closable
West Wind Web Toolkit
jQuery Plug-ins

© Rick Strahl, West Wind Technologies, 2005-2010
Posted in jQuery  ASP.NET  JavaScript

    Read the article

  • Postfix configuration - Using Virtualmin but server is bouncing back my mail.

    - by brodiebrodie
    I have no experience in setting up postfix, and thought virtualmin minght do the legwork for me. Appears not. When I try to send mail to the domain (either [email protected] [email protected] or [email protected]) I get the following message returned This is the mail system at host dedq239.localdomain. I'm sorry to have to inform you that your message could not be delivered to one or more recipients. It's attached below. For further assistance, please send mail to <postmaster> If you do so, please include this problem report. You can delete your own text from the attached returned message. The mail system <[email protected]> (expanded from <[email protected]>): User unknown in virtual alias table Final-Recipient: rfc822; [email protected] Original-Recipient: rfc822;[email protected] Action: failed Status: 5.0.0 Diagnostic-Code: X-Postfix; User unknown in virtual alias table How can I diagnose the problem here? It seems that the mail gets to my server but the server fails to locally deliver the message to the correct user. (This is a guess, truthfully I have no idea what is happening). I have checked my virtual alias table and it seems to be set up correctly (I can post if this would be helpful). Can anyone give me a clue as to the next step? Thanks alias_database = hash:/etc/aliases alias_maps = hash:/etc/aliases broken_sasl_auth_clients = yes command_directory = /usr/sbin config_directory = /etc/postfix daemon_directory = /usr/libexec/postfix debug_peer_level = 2 html_directory = no local_recipient_maps = $virtual_mailbox_maps mailq_path = /usr/bin/mailq.postfix manpage_directory = /usr/share/man mydestination = $myhostname, localhost.$mydomain, localhost, $mydomain myorigin = $mydomain newaliases_path = /usr/bin/newaliases.postfix readme_directory = /usr/share/doc/postfix-2.3.3/README_FILES sample_directory = /usr/share/doc/postfix-2.3.3/samples sendmail_path = /usr/sbin/sendmail.postfix setgid_group = postdrop smtpd_recipient_restrictions = permit_mynetworks reject_unauth_destination smtpd_sasl_auth_enable = yes soft_bounce = no unknown_local_recipient_reject_code = 550 virtual_alias_maps = hash:/etc/postfix/virtual My mail log file (the last entry) Sep 30 15:13:47 dedq239 postfix/cleanup[7237]: 207C6B18158: message-id=<[email protected]> Sep 30 15:13:47 dedq239 postfix/qmgr[7177]: 207C6B18158: from=<[email protected]>, size=1805, nrcpt=1 (queue active) Sep 30 15:13:47 dedq239 postfix/error[7238]: 207C6B18158: to=<[email protected]>, orig_to=<[email protected]>, relay=none, delay=0.64, delays=0.61/0.01/0/0.02, dsn=5.0.0, status=bounced (User unknown in virtual alias table) Sep 30 15:13:47 dedq239 postfix/cleanup[7237]: 8DC13B18169: message-id=<[email protected]> Sep 30 15:13:47 dedq239 postfix/qmgr[7177]: 8DC13B18169: from=<>, size=3691, nrcpt=1 (queue active) Sep 30 15:13:47 dedq239 postfix/bounce[7239]: 207C6B18158: sender non-delivery notification: 8DC13B18169 Sep 30 15:13:47 dedq239 postfix/qmgr[7177]: 207C6B18158: removed Sep 30 15:13:48 dedq239 postfix/smtp[7240]: 8DC13B18169: to=<[email protected]>, relay=gmail-smtp-in.l.google.com[209.85.216.55]:25, delay=1.3, delays=0.02/0.01/0.58/0.75, dsn=2.0.0, status=sent (250 2.0.0 OK 1254348828 36si15082901pxi.91) Sep 30 15:13:48 dedq239 postfix/qmgr[7177]: 8DC13B18169: removed Sep 30 15:14:17 dedq239 postfix/smtpd[7233]: disconnect from mail-bw0-f228.google.com[209.85.218.228] etc.aliases file below I have not touched this file - myvirtualdomain is a replacement for my real domain name # Aliases in this file will NOT be expanded in 
the header from # Mail, but WILL be visible over networks or from /bin/mail. # # >>>>>>>>>> The program "newaliases" must be run after # >> NOTE >> this file is updated for any changes to # >>>>>>>>>> show through to sendmail. # # Basic system aliases -- these MUST be present. mailer-daemon: postmaster postmaster: root # General redirections for pseudo accounts. bin: root daemon: root adm: root lp: root sync: root shutdown: root halt: root mail: root news: root uucp: root operator: root games: root gopher: root ftp: root nobody: root radiusd: root nut: root dbus: root vcsa: root canna: root wnn: root rpm: root nscd: root pcap: root apache: root webalizer: root dovecot: root fax: root quagga: root radvd: root pvm: root amanda: root privoxy: root ident: root named: root xfs: root gdm: root mailnull: root postgres: root sshd: root smmsp: root postfix: root netdump: root ldap: root squid: root ntp: root mysql: root desktop: root rpcuser: root rpc: root nfsnobody: root ingres: root system: root toor: root manager: root dumper: root abuse: root newsadm: news newsadmin: news usenet: news ftpadm: ftp ftpadmin: ftp ftp-adm: ftp ftp-admin: ftp www: webmaster webmaster: root noc: root security: root hostmaster: root info: postmaster marketing: postmaster sales: postmaster support: postmaster # trap decode to catch security attacks decode: root # Person who should get root's mail #root: marc abuse-myvirtualdomain.com: [email protected] My etc/postfix/virtual file is below - again myvirtualdomain is a replacement. I think this file was generated by Virtualmin and I have tried messing around with is with no success... This is the version without my changes. myunixusername@myvirtualdomain .com myunixusername myvirtualdomain .com myvirtualdomain.com [email protected] [email protected] [email protected] [email protected] [email protected] [email protected] [email protected] [email protected]
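
    Diagnostic sketch (the address below is a placeholder, since the real addresses are redacted above): you can ask Postfix directly what the compiled virtual alias map returns for the failing recipient, and recompile the map after any edit to /etc/postfix/virtual.

        # Query the compiled map for the recipient that bounces; no output means "User unknown"
        postmap -q "contact@myvirtualdomain.com" hash:/etc/postfix/virtual

        # After editing /etc/postfix/virtual, rebuild the .db file and reload Postfix
        postmap /etc/postfix/virtual
        postfix reload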

    Read the article

  • The Krewe App Post-Mortem

    - by Chris Gardner
    Originally posted on: http://geekswithblogs.net/freestylecoding/archive/2014/05/23/the-krewe-app-post-mortem.aspxNow that teched has come and gone, I thought I would use this opportunity to do a little post-mortem on The Krewe app. It is one thing to test the app at home. It is a completely different animal to see how it responds in the environment TechEd creates. At a future time, I will list all the things that I would like to change with the app. At this point, I will find some good way to get community feedback. I want to break all this down screen by screen. We'll start with the screen I got right. The first of these is the events calendar. This is the one screen that, to you guys, just worked. However, there was an issue here. When I wrote v1 for last year, I was lazy and placed everything in CST. This caused problems with the achievements, which I will explain later. Furthermore, the event locations were not check-in locations. This created another problem with the achievements. Next, we get to the Twitter page. For what this page does, it works great. For those that don't know, I have an Azure Worker Role that polls Twitter pretty close to the rate limit. I cache these results in my database, and serve them upon request. This gives me great control over the content. I just have to remember to flush past tweets after a period, to save database growth. The next screen is the check-in screen. This screen has been the bane of my existence since I first created the thing. Last year, I used a background task to check people out of locations after they traveled. This year, I removed the background task in favor of a foursquare model. You are checked out after 3 hours or when you check-in to some other location. This seemed to work well, until those pesky achievements came into the mix. Again, more on this later. Next, I want to address the Connect and Connections screens together. I wanted to use some of the capabilities of the phone, and NFC seemed a natural choice. From this, I came up with the gamification aspects of the app. Since we are, fundamentally, a networking organization, I wanted to encourage people to actually network. Users could make and share a profile, similar to a virtual business card. I just had to figure out how to get people to use the feature. Why not just give someone a business card? Thus, the achievements were born. This was such a good idea. It would have been a great idea, if I have come up with it about two months earlier... When I came up with these ideas, I had about 2 weeks to implement them. Version 1 of the app was, basically, a pure consumption app. We provided data and centralized it. With version 2, the app became a much more interactive experience. The API was not ready for this change in such a short period of time. Most of this became apparent when I started implementing the achievements. The achievements based on count and specific person when fairly easy. The problem came with tying them to locations and events. This took some true SQL kung fu. This also showed me the rookie mistake of putting CST, not UTC, in the database. Once I got all of that cleaned up, I had to find a way to get the achievement system to talk to the phone. I knew I needed to be able to dynamically add achievements. I wouldn't know the precise location of some things until I got to Houston. I wanted the server to approve the achievements. This, unfortunately, required a decent data connection. 
Some achievements required GPS levels of location accuracy in areas where only network triangulation was available. All of this became a huge nightmare. My flagship feature was based on some silly assumptions. Still, I managed to get 31 people to earn the first achievement (Make 1 Connection), and quite a few of those managed to reach the higher levels. Soon, I will post a list of the features and changes that need to happen to the API. This includes things like proper objects for communication, geo-fencing, and caching. However, that is for another day.
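
    As a sketch of the UTC lesson (illustrative only, not the actual Krewe code), the pattern is to persist times in UTC and convert to the event's time zone only when displaying them:

        // Store UTC in the database...
        DateTime stored = DateTime.UtcNow;

        // ...and convert to the event's zone (Central for TechEd Houston) only for display.
        TimeZoneInfo central = TimeZoneInfo.FindSystemTimeZoneById("Central Standard Time");
        DateTime display = TimeZoneInfo.ConvertTimeFromUtc(stored, central);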

    Read the article

  • Create a Social Community of Trust Along With Your Federal Digital Services Governance

    - by TedMcLaughlan
    The Digital Services Governance Recommendations were recently released, supporting the US Federal Government's Digital Government Strategy Milestone Action #4.2 to establish agency-wide governance structures for developing and delivering digital services. Figure 1 - From: "Digital Services Governance Recommendations" While extremely important from a policy and procedure perspective within an Agency's information management and communications enterprise, these recommendations only very lightly reference perhaps the most important success enabler - the "Trusted Community" required for ultimate usefulness of the services delivered. By "ultimate usefulness", I mean the collection of public, transparent properties around government information and digital services that include social trust and validation, social reach, expert respect, and comparative, standard measures of relative value. In other words, do the digital services meet expectations of the public, social media ecosystem (people AND machines)? A rigid governance framework, controlling by rules, policies and roles the creation and dissemination of digital services may meet the expectations of direct end-users and most stakeholders - including the agency information stewards and security officers. All others who may share comments about the services, write about them, swap or review extracts, repackage, visualize or otherwise repurpose the output for use in entirely unanticipated, social ways - these "stakeholders" will not be governed, but may observe guidance generated by a "Trusted Community". As recognized members of the trusted community, these stakeholders may ultimately define the right scope and detail of governance that all other users might observe, promoting and refining the usefulness of the government product as the social ecosystem expects. So, as part of an agency-centric governance framework, it's advised that a flexible governance model be created for stewarding a "Community of Trust" around the digital services. The first steps follow the approach outlined in the Recommendations: Step 1: Gather a Core Team In addition to the roles and responsibilities described, perhaps a set of characteristics and responsibilities can be developed for the "Trusted Community Steward/Advocate" - i.e. a person or team who (a) are entirely cognizant of and respected within the external social media communities, and (b) are trusted both within the agency and outside as practical, responsible, non-partisan communicators of useful information. The may seem like a standard Agency PR/Outreach team role - but often an agency or stakeholder subject matter expert with a public, active social persona works even better. Step 2: Assess What You Have In addition to existing, agency or stakeholder decision-making bodies and assets, it's important to take a PR/Marketing view of the social ecosystem. How visible are the services across the social channels utilized by current or desired constituents of your agency? What's the online reputation of your agency and perhaps the service(s)? Is Search Engine Optimization (SEO) a facet of external communications/publishing lifecycles? Who are the public champions, instigators, value-adders for the digital services, or perhaps just influential "communicators" (i.e. with no stake in the game)? You're essentially assessing your market and social presence, and identifying the actors (including your own agency employees) in the existing community of trust. 
Step 3: Determine What You Want The evolving Community of Trust will most readily absorb, support and provide feedback regarding "Core Principles" (Element B of the "six essential elements of a digital services governance structure") shared by your Agency, and obviously play a large, though probably very unstructured part in Element D "Stakeholder Input and Participation". Plan for this, and seek input from the social media community with respect to performance metrics - these should be geared around the outcome and growth of the trusted communities actions. How big and active is this community? What's the influential reach of this community with respect to particular messaging or campaigns generated by the Agency? What's the referral rate TO your digital services, FROM channels owned or operated by members of this community? (this requires governance with respect to content generation inclusive of "markers" or "tags"). At this point, while your Agency proceeds with steps 4 ("Build/Validate the Governance Structure") and 5 ("Share, Review, Upgrade"), the Community of Trust might as well just get going, and start adding value and usefulness to the existing conversations, existing data services - loosely though directionally-stewarded by your trusted advocate(s). Why is this an "Enterprise Architecture" topic? Because it's increasingly apparent that a Public Service "Enterprise" is not wholly contained within Agency facilities, firewalls and job titles - it's also manifested in actual, perceived or representative forms outside the walls, on the social Internet. An Agency's EA model and resulting investments both facilitate and are impacted by the "Social Enterprise". At Oracle, we're very active both within our Enterprise and outside, helping foster social architectures that enable truly useful public services, digital or otherwise.

    Read the article

  • Five Reasons to Attend PLM Summit 2013: The Conference Formerly Known as AGILITY

    - by Terri Hiskey
    As we approach the end of 2012, we are also closing in on the last couple of weeks that Agile customers and prospects can register for the upcoming PLM Summit 2013 for the bargain early bird rate of $195. Register now to secure your spot! The Conference Formerly Known as AGILITY... Long-time Agile customers may remember AGILITY, which was Agile's PLM customer conference that was held on an annual basis prior to Oracle's acquisiton of Agile in 2007. In February 2012, due to feedback we received from our Agile PLM community, we successfully resurrected the AGILITY conference and renamed it the PLM Summit. The PLM Summit was so well received and well-attended, that we are doing it again in 2013. This upcoming PLM Summit is being co-located in San Francisco under the overarching banner of the Oracle Value Chain Summit, and will be held alongside several other Oracle customer conferences that cover a range of value chain solutions, including Value Chain Planning, Value Chain Execution, Procurement, Maintenance and Manufacturing. This setup offers PLM attendees the best of all worlds--the opportunity to participate and learn about PLM in smaller, focused sessions by product and by industry, while also giving attendees the chance to see how PLM works together with other critical enterprise applications that address other important aspects of the value chain. Top Five Reasons to Attend the PLM Summit 2013 In the spirit of all of the end-of-the-year lists that are currently popping up, here is a list of the top five reasons to attend the PLM Summit for anyone out there needs a little extra encouragement to register: 1. The Best Opportunities for Customer Networking   The PLM Summit offers attendees numerous opportunities to learn and network with fellow Agile users. Customer stories are featured in keynote and breakout presentations and the schedule allows for plenty of networking time during breakfasts, lunches, breaks and dinners. Customer networking is the number one reason that Agile users attend the PLM Summit. Read what attendees thought of the most recent PLM Summit: "Hearing about the implementation of Agile products from a customers’ perspective is invaluable." - Director of Quality Assurance & Regulatory Affairs, leading medical device manufacturer "Understanding the scope of other companies’ projects and the lessons learned made attending this event well worth my time." - Director of Test Engineering, global industrial manufacturer "The most beneficial thing about attending this event is the opportunity to network with other customers with similar experiences." - Director of Business Process Improvement, leading high technology company Come to the PLM Summit and play an active role within the PLM community: swap war stories and business cards, connect on LinkedIn and Facebook, share your stories and discuss the sessions from each day. Register now! 2. It's Educational! The PLM Summit is the premier educational event for anyone in the Agile PLM community. There are nearly 40 PLM-focused in-depth educational sessions led by Agile PLM experts, customers and partners that will cover a range of specific product and industry-focused topics. Keynotes will give attendees a broad overview of the entire Agile PLM footprint, while sessions will delve deeply into specific product functionality and customer case studies. There is truly something for everyone. Check out the latest agenda for view of all the sessions. 3. 
Visit with the PLM Partner Community Our partners play a significant and important role within the Agile PLM community. At the PLM Summit, attendees will be able to meet and mingle with several of the top Oracle Agile PLM partners, including: Deloitte, Domain, GoEngineer, Hitachi Consulting, IBM, Kalypso, KPIT Cummins (CPG Solutions), Perception Software, Verdant, Xavor and ZeroWaitState. Go here for a complete list of all the Value Chain Summit sponsors. 4. See Agile PLM in Action at our Dedicated PLM Demo Pods At the PLM Summit, attendees will have the chance to see Agile PLM in action at dedicated PLM demo pods, manned by expert members of our Agile PLM team. If you would like to see specific Agile PLM functionality up close, if you have a question on how to extend the scope of your current implementation, or if you want a better understanding of how to leverage Agile PLM to address specific use cases, stop by one of the Agile PLM demo pods and engage the Agile PLM experts on hand at the PLM Summit. 5. Spend Some Time in Lovely San Francisco Still on the fence about the upcoming PLM Summit? Remember that it is being held in San Francisco, which is a fantastic city for a getaway. After spending time learning and networking about PLM, take an extra day or two to escape the dreary winter and enjoy the beautiful scenery and the unique activities offered only by the City by the Bay. You will walk away from the conference not only with renewed excitement about Agile PLM, but feeling rejuvenated in general.

    Read the article

  • How does one find out which application is associated with an indicator icon?

    - by Amos Annoy
    It is trivial to do this in Ubuntu 10.04. The question is specific to Ubuntu 12.04. some pertinent references (src: answer to What is the difference between indicators and a system tray?: Here is the documentation for indicators: Application indicators | Ubuntu App Developer libindicate Reference Manual libappindicator Reference Manual also DesktopExperienceTeam/ApplicationIndicators - Ubuntu Wiki ref: How can the application that makes an indicator icon be identified? bookmark: How does one find out which application is associated with an indicator icon in Ubuntu 12.04? is a serious question for reasons & problems outlined below and for which a significant investment has been made and is necessary for remedial purposes. reviewing refs. to find an orchestrated resolution ... (an indicator ap. indicator maybe needed) This has nothing to do (does it?) with right click. How can an indicator's icon in Ubuntu 12.04 be matched with the program responsible for it's manifestation on the top panel? A list of running applications can include all processes using System Monitor. How is the correct matching process found for an indicator? How are the sub-indicator applications identified? These are the aps associated with the components of an indicators drop-down menu. (This was to be a separate question and quite naturally follows up the progression. It is included here as it is obvious there is no provisioning to track down offending either sub or indicator aps. easily.) (The examination of SM points out a rather poignant factor in the faster battery depletion and shortened run time - the ambient quiescent CPU rate in 12.04 is now well over 20% when previously, in 10.04, it was well under 10%, between 5% and 7%! - the huge inordinate cpu overhead originates from Xorg and compiz - after booting the system, only SM is run and All Processes are selected, sorting on %CPU - switching between Resources and Processes profiles the execution overhead problem - running another ap like gedit "Text Editor" briefly gives it CPU priority - going back to S&M several aps. are at the top of the list in order: gnome-system-monitor as expected, then: Xorg, compiz, unity-panel-service, hud-service, with dbus-daemon and kworker/x:y's mixed in with some expected daemons and background tasks like nm-applet - not only do Xorg and compiz require excessive CPU time but their entourage has to come along too! further exacerbating the problem - our compute bound tasks no longer work effectively in the field - reduced battery life, reduced CPU time for custom ap.s etc. - and all this precipitated from an examination of what is going on with the battery ap. indicator - this was and is not a flippant, rhetorical or idle musing but has consequences for the credible deployment of 12.04 to reduce the negative impact of its overhead in a production environment) (I have a problem with the battery indicator - it sometimes has % and other times hh:mm - it is necessary to know the ap. & v. to get more info on controlling same. ditto: There are issues with other indicator aps.: NM vs. iwlist/iwconfig conflict, BT ap. vs RF switch, Battery ap. w/ no suspend/sleep for poor battery runtime, ... the list goes on) Details from: How can I find Application Indicator ID's? suggests looking at: file:///usr/share/indicator-application/ordering-override.keyfile [Ordering Index Overrides] nm-applet=1 gnome-power-manager=2 ibus=3 gst-keyboard-xkb=4 gsd-keyboard-xkb=5 which solves the battery ap. 
identification, and presumably nm is NetworkManager for the rf icon, but the envelope, Bluetooth and speaker indicator aps. are still a mystery. (Also, the ordering is not correlated.) Mind you, it was simple in the past to simply right-click to get the About option to find the ap. & v. info. browsing around and about: file:///usr/share/indicator-application/ordering-override.keyfile examined: file:///usr/share/indicators file:///usr/share/indicators/messages/applications/ ... perhaps?/presumably? the information sought may be buried in file:///usr/share/indicators A reference in the comments was given to: What is the difference between indicators and a system tray? quoting from that source ... Unfortunately desktop indicators are not well documented yet: I couldn't find any specification doc ... Well ... the actual document https://wiki.ubuntu.com/DesktopExperienceTeam/ApplicationIndicators#Summary does not help much, but the fact that it exists provides considerable insight ...
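
    One command-line sketch for matching an icon to a process (not an official tool; the indicator name below is only an example) is to ask the session bus which indicator services are registered, then look up the process that owns a given name:

        # List registered bus names that look like indicators
        dbus-send --session --dest=org.freedesktop.DBus --type=method_call \
          --print-reply /org/freedesktop/DBus org.freedesktop.DBus.ListNames | grep -i indicator

        # Find the PID that owns one of those names (name is illustrative)
        dbus-send --session --dest=org.freedesktop.DBus --type=method_call \
          --print-reply /org/freedesktop/DBus org.freedesktop.DBus.GetConnectionUnixProcessID \
          string:com.canonical.indicator.sound

        # Map that PID back to an executable
        ps -p <pid> -o pid,comm,args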

    Read the article

  • How to Force Graphics Options in PC Games with NVIDIA, AMD, or Intel Graphics

    - by Chris Hoffman
    PC games usually have built-in graphics options you can change. But you’re not limited to the options built into games — the graphics control panels bundled with graphics drivers allow you to tweak options from outside PC games. For example, these tools allow you to force-enabling antialiasing to make old games look better, even if they don’t normally support it. You can also reduce graphics quality to get more performance on slow hardware. If You Don’t See These Options If you don’t have the NVIDIA Control Panel, AMD Catalyst Control Center, or Intel Graphics and Media Control Panel installed, you may need to install the appropriate graphics driver package for your hardware from the hardware manufacturer’s website. The drivers provided via Windows Update don’t include additional software like the NVIDIA Control Panel or AMD Catalyst Control Center. Drivers provided via Windows Update are also more out of date. If you’re playing PC games, you’ll want to have the latest graphics drivers installed on your system. NVIDIA Control Panel The NVIDIA Control Panel allows you to change these options if your computer has NVIDIA graphics hardware. To launch it, right-click your desktop background and select NVIDIA Control Panel. You can also find this tool by performing a Start menu (or Start screen) search for NVIDIA Control Panel or by right-clicking the NVIDIA icon in your system tray and selecting Open NVIDIA Control Panel. To quickly set a system-wide preference, you could use the Adjust image settings with preview option. For example, if you have old hardware that struggles to play the games you want to play, you may want to select “Use my preference emphasizing” and move the slider all the way to “Performance.” This trades graphics quality for an increased frame rate. By default, the “Use the advanced 3D image settings” option is selected. You can select Manage 3D settings and change advanced settings for all programs on your computer or just for specific games. NVIDIA keeps a database of the optimal settings for various games, but you’re free to tweak individual settings here. Just mouse-over an option for an explanation of what it does. If you have a laptop with NVIDIA Optimus technology — that is, both NVIDIA and Intel graphics — this is the same place you can choose which applications will use the NVIDIA hardware and which will use the Intel hardware. AMD Catalyst Control Center AMD’s Catalyst Control Center allows you to change these options on AMD graphics hardware. To open it, right-click your desktop background and select Catalyst Control Center. You can also right-click the Catalyst icon in your system tray and select Catalyst Control Center or perform a Start menu (or Start screen) search for Catalyst Control Center. Click the Gaming category at the left side of the Catalyst Control Center window and select 3D Application Settings to access the graphics settings you can change. The System Settings tab allows you to configure these options globally, for all games. Mouse over any option to see an explanation of what it does. You can also set per-application 3D settings and tweak your settings on a per-game basis. Click the Add option and browse to a game’s .exe file to change its options. Intel Graphics and Media Control Panel Intel integrated graphics is nowhere near as powerful as dedicated graphics hardware from NVIDIA and AMD, but it’s improving and comes included with most computers. 
Intel doesn’t provide anywhere near as many options in its graphics control panel, but you can still tweak some common settings. To open the Intel graphics control panel, locate the Intel graphics icon in your system tray, right-click it, and select Graphics Properties. You can also right-click the desktop and select Graphics Properties. Select either Basic Mode or Advanced Mode. When the Intel Graphics and Media Control Panel appears, select the 3D option. You’ll be able to set your Performance or Quality setting by moving the slider around or click the Custom Settings check box and customize your Anisotropic Filtering and Vertical Sync preference. Different Intel graphics hardware may have different options here. We also wouldn’t be surprised to see more advanced options appear in the future if Intel is serious about competing in the PC graphics market, as they say they are. These options are primarily useful to PC gamers, so don’t worry about them — or bother downloading updated graphics drivers — if you’re not a PC gamer and don’t use any intensive 3D applications on your computer. Image Credit: Dave Dugdale on Flickr     

    Read the article

  • Right-Time Retail Part 3

    - by David Dorf
    This is part three of the three-part series.  Read Part 1 and Part 2 first. Right-Time Marketing Real-time isn’t just about executing faster; it extends to interactions with customers as well. As an industry, we’ve spent many years analyzing all the data that’s been collected. Yes, that data has been invaluable in helping us make better decisions like where to open new stores, how to assort those stores, and how to price our products. But the recent advances in technology are now making it possible to analyze and deliver that data very quickly… fast enough to impact a potential sale in near real-time. Let me give you two examples. Salesmen in car dealerships get pretty good at sizing people up. When a potential customer walks in the door, it doesn’t take long for the salesman to figure out the revenue at stake. Is this person a real buyer, or just looking for a fun test drive? Will this person buy today or three months from now? Will this person opt for the expensive packages, or go bare bones? While the salesman certainly asks some leading questions, much of the information is discerned through body language. But body language doesn’t translate very well over the web. Eloqua, which was acquired by Oracle earlier this year, reads internet body language. By tracking the behavior of the people visiting your web site, Eloqua categorizes visitors based on their propensity to buy. While Eloqua’s roots have been in B2B, we’ve been looking at leveraging the technology with ATG to target B2C. Knowing what sites were previously visited, how often the customer has been to your site recently, and how long they’ve spent searching can help us understand where the customer is in their purchase journey. And knowing that bit of information may be enough to help close the deal with a real-time offer, follow-up email, or online customer service pop-up. This isn’t so different from the days gone by when the clerk behind the counter of the corner store noticed you were lingering in a particular aisle, so he walked over to help you compare two products and close the sale. You appreciated the personalized service, and he knew the value of the long-term relationship. Move that same concept into the digital world and you have Oracle’s CX Suite, a cloud-based offering of end-to-end customer experience tools, assembled primarily from acquisitions. Those tools are Oracle Marketing (Eloqua), Oracle Commerce (ATG, Endeca), Oracle Sales (Oracle CRM On Demand), Oracle Service (RightNow), Oracle Social (Collective Intellect, Vitrue, Involver), and Oracle Content (Fatwire). We are providing the glue that binds the CIO and CMO together to unleash synergies that drive the top-line higher, and by virtue of the cloud approach, keep costs at bay. 
My second example of real-time marketing takes place in the store but leverages the concepts of Web marketing. In 1962 the decline of personalized service in retail began. Anyone know the significance of that year? That’s when Target, K-Mart, and Walmart each opened their first stores, and over the succeeding years the industry chose scale over personal service. No longer were you known as “Jane with the snotty kid so make sure we check her out fast,” but you suddenly became “time-starved female age 20-30 with kids.” I’m not saying that was a bad thing – it was the right thing for our industry at the time, and it enabled a huge amount of growth, cheaper prices, and more variety of products. But scale alone is no longer good enough. Today’s sophisticated consumer demands scale, experience, and personal attention. To some extent we’ve delivered that on websites via the magic of cookies, your willingness to log in, and sophisticated data analytics. What store manager wouldn’t love a report detailing all the visitors to his store, where they came from, and which products that examined? People trackers are getting more sophisticated, incorporating infrared, video analytics, and even face recognition. (Next time you walk in front on a mannequin, don’t be surprised if it’s looking back.) But the ultimate marketing conduit is the mobile phone. Since each mobile phone emits a unique number on WiFi networks, it becomes the cookie of the physical world. Assuming congress keeps privacy safeguards reasonable, we’ll have a win-win situation for both retailers and consumers. Retailers get to know more about the consumer’s purchase journey, and consumers get higher levels of service with the retailer. When I call my bank, a couple things happen before the call is connected. A reverse look-up on my phone number identifies me so my accounts can be retrieved from Siebel CRM. Then the system anticipates why I’m calling based on recent transactions. In this example, it sees that I was just charged a foreign currency fee, so it assumes that’s the reason I’m calling. It puts all the relevant information on the customer service rep’s screen as it connects the call. When I complain about the fee, the rep immediately sees I’m a great customer and I travel lots, so she suggests switching me to their traveler’s card that doesn’t have foreign transaction fees. That technology is powered by a product called Oracle Real-Time Decisions, a rules engine built to execute very quickly, basically in the time it takes the phone to ring once. So let’s combine the power of that product with our new-found mobile cookie and provide contextual customer interactions in real-time. Our first opportunity comes when a customer crosses a pre-defined geo-fence, typically a boundary around the store. Context is the key to our interaction: that’s the customer (known or anonymous), the time of day and day of week, and location. Thomas near the downtown store on a Wednesday at noon means he’s heading to lunch. If he were near the mall location on a Saturday morning, that’s a completely different context. But on his way to lunch, we’ll let Thomas know that we’ve got a new shipment of ASICS running shoes on display with a simple text message. We used the context to look-up Thomas’ past purchases and understood he was an avid runner. We used the fact that this was lunchtime to select the type of message, in this case an informational message instead of an offer. Thomas enters the store, phone in hand, and walks to the shoe department. 
He scans one of the new ASICS shoes using the convenient QR Codes we provided on the shelf-tags, but then he starts scanning low-end Nikes. Each scan is another opportunity to both learn from Thomas and potentially interact via another message. Since he historically buys low-end Nikes and keeps scanning them, he’s likely falling back into his old ways. Our marketing rules are currently set to move loyal customer to higher margin products. We could have set the dials to increase visit frequency, move overstocked items, increase basket size, or many other settings, but today we are trying to move Thomas to higher-margin products. We send Thomas another text message, this time it’s a personalized offer for 10% off ASICS good for 24 hours. Offering him a discount on Nikes would be throwing margin away since he buys those anyway. We are using our marketing dollars to change behavior that increases the long-term value of Thomas. He decides to buy the ASICS and scans the discount code on his phone at checkout. Checkout is yet another opportunity to interact with Thomas, so the transaction is sent back to Oracle RTD for evaluation. Since Thomas didn’t buy anything with the shoes, we’ll print a bounce-back coupon on the receipt offering 30% off ASICS socks if he returns within seven days. We have successfully started moving Thomas from low-margin to high-margin products. In both of these marketing scenarios, we are able to leverage data in near real-time to decide how best to interact with the customer and lead to an increase in the lifetime value of the customer. The key here is acting at the moment the customer shows interest using the context of the situation. We aren’t pushing random products at haphazard times. We are tailoring the marketing to be very specific to this customer, and it’s the technology that allows this to happen in near real-time. Conclusion As we enable more right-time integrations and interactions, retailers will begin to offer increased service to their customers. Localized and personalized service at scale will drive loyalty and lead to meaningful revenue growth for the retailers that execute well. Our industry needs to support Commerce Anywhere…and commerce anytime as well.

    Read the article

  • Getting started with Oracle Database In-Memory Part III - Querying The IM Column Store

    - by Maria Colgan
    In my previous blog posts, I described how to install, enable, and populate the In-Memory column store (IM column store). This week's post focuses on how data is accessed within the IM column store. Let’s take a simple query: “What is the most expensive air-mail order we have received to date?” SELECT Max(lo_ordtotalprice) most_expensive_order FROM lineorder WHERE lo_shipmode = 5; The LINEORDER table has been populated into the IM column store and since we have no alternative access paths (indexes or views) the execution plan for this query is a full table scan of the LINEORDER table. You will notice that the execution plan has a new set of keywords “IN MEMORY” in the access method description in the Operation column. These keywords indicate that the LINEORDER table has been marked for INMEMORY and we may use the IM column store in this query. What do I mean by “may use”? There are a small number of cases where we won’t use the IM column store even though the object has been marked INMEMORY. This is similar to how the keyword STORAGE is used in Exadata environments. You can confirm that the IM column store was actually used by examining the session level statistics, but more on that later. For now let's focus on how the data is accessed in the IM column store and why it’s faster to access the data in the new column format, for analytical queries, rather than the buffer cache. There are four main reasons why accessing the data in the IM column store is more efficient. 1. Access only the column data needed The IM column store only has to scan two columns – lo_shipmode and lo_ordtotalprice – to execute this query, while the traditional row store or buffer cache has to scan all of the columns in each row of the LINEORDER table until it reaches both the lo_shipmode and the lo_ordtotalprice columns. 2. Scan and filter data in its compressed format When data is populated into the IM column store it is automatically compressed using a new set of compression algorithms that allow WHERE clause predicates to be applied against the compressed formats. This means the volume of data scanned in the IM column store for our query will be far less than for the same query in the buffer cache, where it will scan the data in its uncompressed form, which could be 20X larger. 3. Prune out any unnecessary data within each column The fastest read you can execute is the read you don’t do. In the IM column store a further reduction in the amount of data accessed is possible due to the In-Memory Storage Indexes (IM storage indexes) that are automatically created and maintained on each of the columns in the IM column store. IM storage indexes allow data pruning to occur based on the filter predicates supplied in a SQL statement. An IM storage index keeps track of minimum and maximum values for each column in each In-Memory Compression Unit (IMCU). In our query the WHERE clause predicate is on the lo_shipmode column. The IM storage index on the lo_shipmode column is examined to determine if our specified column value 5 exists in any IMCU by comparing the value 5 to the minimum and maximum values maintained in the storage index. If the value 5 is outside the minimum and maximum range for an IMCU, the scan of that IMCU is avoided. For the IMCUs where the value 5 does fall within the min/max range, an additional level of data pruning is possible via the metadata dictionary created when dictionary-based compression is used on an IMCU. The dictionary contains a list of the unique column values within the IMCU. 
Since we have an equality predicate we can easily determine if 5 is one of the distinct column values or not. The combination of the IM storage index and dictionary-based pruning enables us to scan only the necessary IMCUs. 4. Use SIMD to apply filter predicates For the IMCUs that need to be scanned, Oracle takes advantage of SIMD vector processing (Single Instruction processing Multiple Data values). Instead of evaluating each entry in the column one at a time, SIMD vector processing allows a set of column values to be evaluated together in a single CPU instruction. The column format used in the IM column store has been specifically designed to maximize the number of column entries that can be loaded into the vector registers on the CPU and evaluated in a single CPU instruction. SIMD vector processing enables Oracle Database In-Memory to scan billions of rows per second per core, versus the millions of rows per second per core scan rate that can be achieved in the buffer cache. I mentioned earlier in this post that in order to confirm the IM column store was used, we need to examine the session level statistics. You can monitor the session level statistics by querying the performance views v$mystat and v$statname. All of the statistics related to the In-Memory Column Store begin with IM. You can see the full list of these statistics by typing: column display_name format a30 SELECT display_name FROM v$statname WHERE display_name LIKE 'IM%'; If we check the session statistics after we execute our query the results would be as follows: SELECT Max(lo_ordtotalprice) most_expensive_order FROM lineorder WHERE lo_shipmode = 5; SELECT display_name FROM v$statname WHERE display_name IN ('IM scan CUs columns accessed', 'IM scan segments minmax eligible', 'IM scan CUs pruned'); As you can see, only 2 IMCUs were accessed during the scan, as the majority of the IMCUs (44) in the LINEORDER table were pruned out thanks to the storage index on the lo_shipmode column. In next week's post I will describe how you can control which queries use the IM column store and which don't. +Maria Colgan
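
    For reference, here is a minimal sketch of that session-statistics check, joining v$mystat to v$statname so the counters carry readable names (column names as documented for Oracle Database 12c):

        SELECT sn.display_name, ms.value
        FROM   v$mystat ms
               JOIN v$statname sn ON sn.statistic# = ms.statistic#
        WHERE  sn.display_name IN ('IM scan CUs columns accessed',
                                   'IM scan segments minmax eligible',
                                   'IM scan CUs pruned');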

    Read the article

  • A Closable jQuery Plug-in

    - by Rick Strahl
    In my client side development I deal a lot with content that pops over the main page. Be it data entry ‘windows’ or dialogs or simple pop up notes. In most cases this behavior goes with draggable windows, but sometimes it’s also useful to have closable behavior on static page content that the user can choose to hide or otherwise make invisible or fade out. Here’s a small jQuery plug-in that provides .closable() behavior to most elements by using either an image that is provided or – more appropriately by using a CSS class to define the picture box layout. /* * * Closable * * Makes selected DOM elements closable by making them * invisible when close icon is clicked * * Version 1.01 * @requires jQuery v1.3 or later * * Copyright (c) 2007-2010 Rick Strahl * http://www.west-wind.com/ * * Licensed under the MIT license: * http://www.opensource.org/licenses/mit-license.php Support CSS: .closebox { position: absolute; right: 4px; top: 4px; background-image: url(images/close.gif); background-repeat: no-repeat; width: 14px; height: 14px; cursor: pointer; opacity: 0.60; filter: alpha(opacity="80"); } .closebox:hover { opacity: 0.95; filter: alpha(opacity="100"); } Options: * handle Element to place closebox into (like say a header). Use if main element and closebox container are two different elements. * closeHandler Function called when the close box is clicked. Return true to close the box return false to keep it visible. * cssClass The CSS class to apply to the close box DIV or IMG tag. * imageUrl Allows you to specify an explicit IMG url that displays the close icon. If used bypasses CSS image styling. * fadeOut Optional provide fadeOut speed. Default no fade out occurs */ (function ($) { $.fn.closable = function (options) { var opt = { handle: null, closeHandler: null, cssClass: "closebox", imageUrl: null, fadeOut: null }; $.extend(opt, options); return this.each(function (i) { var el = $(this); var pos = el.css("position"); if (!pos || pos == "static") el.css("position", "relative"); var h = opt.handle ? $(opt.handle).css({ position: "relative" }) : el; var div = opt.imageUrl ? $("<img>").attr("src", opt.imageUrl).css("cursor", "pointer") : $("<div>"); div.addClass(opt.cssClass) .click(function (e) { if (opt.closeHandler) if (!opt.closeHandler.call(this, e)) return; if (opt.fadeOut) $(el).fadeOut(opt.fadeOut); else $(el).hide(); }); if (opt.imageUrl) div.css("background-image", "none"); h.append(div); }); } })(jQuery); The plugin can be applied against any selector that is a container (typically a div tag). The close image or close box is provided typically by way of a CssClass - .closebox by default – which supplies the image as part of the CSS styling. The default styling for the box looks something like this: .closebox { position: absolute; right: 4px; top: 4px; background-image: url(images/close.gif); background-repeat: no-repeat; width: 14px; height: 14px; cursor: pointer; opacity: 0.60; filter: alpha(opacity="80"); } .closebox:hover { opacity: 0.95; filter: alpha(opacity="100"); } Alternately you can also supply an image URL which overrides the background image in the style sheet. 
I use this plug-in mostly on pop up windows that can be closed, but it’s also quite handy for remove/delete behavior in list displays like this: you can find this sample here to look to play along: http://www.west-wind.com/WestwindWebToolkit/Samples/Ajax/AmazonBooks/BooksAdmin.aspx For closable windows it’s nice to have something reusable because in my client framework there are lots of different kinds of windows that can be created: Draggables, Modal Dialogs, HoverPanels etc. and they all use the client .closable plug-in to provide the closable operation in the same way with a few options. Plug-ins are great for this sort of thing because they can also be aggregated and so different components can pick and choose the behavior they want. The window here is a draggable, that’s closable and has shadow behavior and the server control can simply generate the appropriate plug-ins to apply to the main <div> tag: $().ready(function() { $('#ctl00_MainContent_panEditBook') .closable({ handle: $('#divEditBook_Header') }) .draggable({ dragDelay: 100, handle: '#divEditBook_Header' }) .shadow({ opacity: 0.25, offset: 6 }); }) The window is using the default .closebox style and has its handle set to the header bar (Book Information). The window is just closable to go away so no event handler is applied. Actually I cheated – the actual page’s .closable is a bit more ugly in the sample as it uses an image from a resources file: .closable({ imageUrl: '/WestWindWebToolkit/Samples/WebResource.axd?d=TooLongAndNastyToPrint', handle: $('#divEditBook_Header')}) so you can see how to apply a custom image, which in this case is generated by the server control wrapping the client DragPanel. More interesting maybe is to apply the .closable behavior to list scenarios. For example, each of the individual items in the list display also are .closable using this plug-in. Rather than having to define each item with Html for an image, event handler and link, when the client template is rendered the closable behavior is attached to the list. Here I’m using client-templating and the code that this is done with looks like this: function loadBooks() { showProgress(); // Clear the content $("#divBookListWrapper").empty(); var filter = $("#" + scriptVars.lstFiltersId).val(); Proxy.GetBooks(filter, function(books) { $(books).each(function(i) { updateBook(this); showProgress(true); }); }, onPageError); } function updateBook(book,highlight) { // try to retrieve the single item in the list by tag attribute id var item = $(".bookitem[tag=" +book.Pk +"]"); // grab and evaluate the template var html = parseTemplate(template, book); var newItem = $(html) .attr("tag", book.Pk.toString()) .click(function() { var pk = $(this).attr("tag"); editBook(this, parseInt(pk)); }) .closable({ closeHandler: function(e) { removeBook(this, e); }, imageUrl: "../../images/remove.gif" }); if (item.length > 0) item.after(newItem).remove(); else newItem.appendTo($("#divBookListWrapper")); if (highlight) { newItem .addClass("pulse") .effect("bounce", { distance: 15, times: 3 }, 400); setTimeout(function() { newItem.removeClass("pulse"); }, 1200); } } Here the closable behavior is applied to each of the items along with an event handler, which is nice and easy compared to having to embed the right HTML and click handling into each item in the list individually via markup. 
Ideally though (and these posts make me realize this often a little late) I probably should set up a custom cssClass to handle the rendering – maybe a CSS class called .removebox that only changes the image from the default box image. This example also hooks up an event handler that is fired in response to the close. In the list I need to know when the remove button is clicked so I can fire off a service call to the server to actually remove the item from the database. The handler code can also optionally return false to indicate that the window should not be closed; returning true closes the window. You can find more information about the .closable class behavior and options here: .closable Documentation Plug-ins make Server Control JavaScript much easier I find this plug-in immensely useful, especially as part of server control code, because it simplifies the code that has to be generated server side tremendously. This is true of plug-ins in general, which make it so much easier to create simple server code that only generates plug-in options, rather than full blocks of JavaScript code. For example, here’s the relevant code from the DragPanel server control which generates the .closable() behavior: if (this.Closable && !string.IsNullOrEmpty(DragHandleID) ) { string imageUrl = this.CloseBoxImage; if (imageUrl == "WebResource" ) imageUrl = ScriptProxy.GetWebResourceUrl(this, this.GetType(), ControlResources.CLOSE_ICON_RESOURCE); StringBuilder closableOptions = new StringBuilder("imageUrl: '" + imageUrl + "'"); if (!string.IsNullOrEmpty(this.DragHandleID)) closableOptions.Append(",handle: $('#" + this.DragHandleID + "')"); if (!string.IsNullOrEmpty(this.ClientDialogHandler)) closableOptions.Append(",handler: " + this.ClientDialogHandler); if (this.FadeOnClose) closableOptions.Append(",fadeOut: 'slow'"); startupScript.Append(@" .closable({ " + closableOptions + "})"); } The same sort of block is then used for .draggable and .shadow, which simply set options. Compared to the code I used to have in pre-jQuery versions of my JavaScript toolkit this is a walk in the park. In those days there was a bunch of JS generation which was ugly to say the least. I know a lot of folks frown on using server controls, especially when the UI is client centric, as this example is. However, I do feel that server controls can greatly simplify the process of getting the right behavior attached more easily and with the help of IntelliSense. Often the server-generated script is easier, especially if you are dealing with complex, multiple plug-in associations that are expressed more easily with property values on a control. Regardless of whether server controls are your thing or not, this plug-in can be useful in many scenarios. Even in simple client-only scenarios using a plug-in with a few simple parameters is nicer and more consistent than creating the HTML markup over and over again. I hope some of you find this even a small bit as useful as I have. Related Links Download jquery.closable West Wind Web Toolkit jQuery Plug-ins © Rick Strahl, West Wind Technologies, 2005-2010 Posted in jQuery   ASP.NET  JavaScript  
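
    A rough sketch of that .removebox idea (the class name and image path are illustrative, not part of the toolkit): the class reuses the .closebox layout and only swaps the image, so the list code can drop the explicit imageUrl:

        .removebox { position: absolute; right: 4px; top: 4px;
                     background-image: url(images/remove.gif); background-repeat: no-repeat;
                     width: 14px; height: 14px; cursor: pointer;
                     opacity: 0.60; filter: alpha(opacity="80"); }

        // in updateBook(), instead of passing imageUrl:
        .closable({ cssClass: "removebox",
                    closeHandler: function(e) { removeBook(this, e); } });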

    Read the article

  • The standards that fail us and the intellectual bubble

    - by Jeff
    There has been a great deal of noise in the techie community about standards, and a sudden and unexplainable hate for Flash. This noise isn't coming from consumers... the countless soccer moms, teens and your weird uncle Bob, it's coming from the people who build (or at least claim to build) the stuff those consumers consume. If you could survey the position of consumers on the topic, they'd likely tell you that they just want stuff on the Web to work.The noise goes something like this: Web standards are the correct and right thing to use across the Intertubes, and anything not a part of those standards (Flash) is bad. Furthermore, the more recent noise is centered around the idea that HTML 5, along with Javascript, is the right thing to use. The arguments against Flash are, well, the truth is I haven't seen a good argument. I see anecdotal nonsense about high CPU usage and things I'd never think to check when I'm watching Piano Cat on YouTube, but these aren't arguments to me. Sure, I've seen it crash a browser a few times, but it's totally rare.But let's go back to standards. Yes, standards have played an important role in establishing the ubiquity of the Web. The protocols themselves, TCP/IP and HTTP, have been critical. HTML, which has served us well for a very long time, established an incredible foundation. Javascript did an OK job, and thanks to clever programmers writing great frameworks like JQuery, is becoming more and more useful. CSS is awful (there, I said it, I feel SO much better), and I'll never understand why it's so disconnected and different from anything else. It doesn't help that it's so widely misinterpreted by different browsers. Still, there's no question that standards are a good thing, and they've been good for the Web, consumers and publishers alike.HTML 4 has been with us for more than a decade. In Web years, that might as well be 80. HTML 5, contrary to popular belief, is not a standard, and likely won't be for many years to come. In fact, the Web hasn't really evolved at all in terms of its standards. The tools that generate the standard markup and script have, but at the end of the day, we're still living with standards that are more than ten years old. The "official" standards process has failed us.The Web evolved anyway, and did not wait for standards bodies to decide what to do next. It evolved in part because Macromedia, then Adobe, kept evolving Flash. In the earlier days, it mostly just did obnoxious splash pages, but then it started doing animation, and then rich apps as they added form input. Eventually it found its killer app: video. Now more than 95% of browsers have Flash installed. Consumers are better for it.But I'll do it one better... I'll go out on a limb and say that Flash is a standard. If it's that pervasive, I don't care what you tell me, it's a standard. Just because a company owns it doesn't mean that it's evil or not a standard. And hey, it pains me to say that as a developer, because I think the dev tools are the suck (more on that in a minute). But again, consumers don't care. They don't even pay for Flash. The bottom line is that if I put something Flash based on the Internet, it's likely that my audience will see it.And what about the speed of standards owned by a company? Look no further than Silverlight. Silverlight 2 (which I consider the "real" start to the story) came out about a year and a half ago. Now version 4 is out, and it has come a very long way in its capabilities. 
If you believe Riastats.com, more than half of browsers have it now. It didn't have to wait for standards bodies and nerds drafting documents; it's out today. At this rate, Silverlight will be on version 6 or 7 by the time HTML 5 is a ratified standard. Back to the noise, one of the things that has continually disappointed me about this profession is the number of people who get stuck in an intellectual bubble, color it with dogmatic principles, and completely ignore the actual marketplace where this stuff all has to live. We aren't machines; binary thinking that forces us to choose between "open standards" and "proprietary lock-in" (the most loaded b.s. FUD term evar) isn't smart at all. The truth is that the <object> tag has allowed us to build incredible stuff on top of the old standards, and consumers have benefitted greatly. Consumer desire, capitalism, and yes, standards ratified by nerds who think about this stuff for years have all played a role in the broad adoption of the Interwebs. We could all do without the noise. At the end of the day, I'm going to build stuff for the Web that's good for my users, and I'm not going to base my decisions on a techie bubble religion. Imagine what the brilliant minds behind the noise could do for the Web if they joined me in that pursuit.

    Read the article

  • Employee Engagement Q&A with John Brunswick

    - by Kellsey Ruppel
    As we are focusing this week on Employee Engagement, I recently sat down with industry expert and thought leader John Brunswick on the topic. Here is the Q&A dialogue we shared.  Q: How do you effectively engage employees to drive business value? A: Motivation, both extrinsic and intrinsic, combined with the relevancy of various channels to support it.  Beyond chaining business strategies like compensation models within an organization, engagement is ultimately most successful when driven by employees' motivations.  Business value derived from engagement through technical capabilities can be objectively measured through metrics like the rate and accuracy of problem solving for a given business function or frequency of innovation created.  Providing employees performing "knowledge work" with capabilities that allow them to perform work with a higher degree of accuracy in the same or ideally less time adds value for that individual and, in turn, drives their level of engagement to drive business value. Q: Organizations with high levels of employee engagement outperform the total stock market index by 22%. Can you comment on why you think this might be? A: Alignment through shared purpose.  Zappos is an excellent example of a culture that arguably has higher than average levels of employee engagement, and it permeates every aspect of their organization – embodied externally through their customer experience.  I recently made my first purchase with them and it was obvious through their web experience, visual design, communication style, customer service and attention to detail down to green packaging, that they have an amazingly strong shared purpose.  The Zappos.com 'About page' outlines their "Family Core Values", the first three being "Deliver WOW Through Service, Embrace and Drive Change & Create Fun and A Little Weirdness" – all reflected externally in my interaction with them.  Strong shared purpose enables higher product and service experience, equating to a dedicated customer base, repeat purchases and expanded market share. Q: Have you seen any trends in the market regarding employee engagement? A: Some companies now see offering a form of social engagement similar to Facebook and LinkedIn as standard communication infrastructure like email or instant messaging.  Originally offered as standalone tools, the value is now seen when these capabilities are offered in an integrated fashion in the context of business entities.  An emerging area of focus is around employee activities related to their organization on external social platforms, implicitly creating external communities with employees acting on behalf of the brand and interacting with each other (e.g. Twitter).  Companies have reached a formal understanding that this now-established communication medium requires strategies allowing employees to engage.  I have personally met colleagues from Oracle, like Oracle User Experience Director Ultan O'Broin (@ultan), via Twitter before first meeting them through internal channels. Q: Employee engagement is important, but what about engaging customers and partners? A: Over the last few years we have witnessed an interesting evolution from the novelty of self-service to expectations of "intelligent" self-service.  From a consumer standpoint, engagement can end up being a key differentiator, especially in mature markets.  Customers that perform some level of interaction with a brand develop greater affinity for the brand and have a greater probability of acting as an advocate.  
As organizations move toward a model of deeper engagement, they must ensure that their business is positioned to support deeper relationships, offering potentially greater transparency. From a partner standpoint, greater engagement can lead to new types of business opportunities, much in the way that Amazon.com offers a unified shopping experience that can potentially span various vendors.  This same model can be extended to blending services and product delivery models, based on a closeness that was not easily possible before the increased capability of engagement mechanisms. Q: What types of solutions are available to successfully deliver employee engagement? A: Solutions enabling higher levels of engagement do so on the basis of relevancy.  This relevancy is generally supported by aspects of content management, social collaboration, business intelligence, portal and process management technologies.  These technologies can help deliver an experience tailored to a given role or process within an organization that applies equally to work that is structured or unstructured, appearing in the form of functionality as simple as an online employee directory search, knowledge communities supported by social collaboration, as well as more feature-rich business intelligence dashboards and portals. Looking to learn more about how to effectively engage your employees? Check out this webcast, or read more from John Brunswick. 

    Read the article

  • Oracle WebCenter at the Enterprise 2.0 Conference

    - by Brian Dirking
    We had a great week at the E20 Conference, presenting in four sessions – Andy MacMillan gave a session titled Today’s Successful Enterprises are Social Enterprises and was on a panel that Tony Byrne moderated; Christian Finn spoke on a Unified Communications panel, Unified Communications + Social Computing = Best of Both Worlds?; and Mark Bennett spoke on a panel on The Evolution of Talent Management. The key areas of focus this year were sentiment analysis, adoption and community building, the benefits of failure, and social’s role in process applications. Sentiment analysis. This was focused not on external audiences but more on employee sentiment. Tim Young showed his internal "NikoNiko" project, where employees use smilies to report their current mood. The result was a dashboard that showed the company mood by department. Since the goal is to improve productivity, people can see which departments are running into issues and try to address them. A company might otherwise wait until the end-of-quarter financials to find out that there was a problem and a product didn’t ship. This is a way to identify issues immediately. Tim is great – he had the crowd laughing as soon as he hit the stage, with his proposed hashtag for his session: by making it 138 characters long, people couldn’t say much behind his back. And as I tweeted during his session, I loved his comment that complexity diffuses energy - it sounds like something Sun Tzu would say. Another example of employee sentiment analysis was CubeVibe. Founder and CEO Aaron Aycock, in his 3-minute pitch-or-die session, talked about how engaged employees perform better. It was too bad he got gonged – he was just picking up speed – but CubeVibe did win the vote. Congratulations to them. Internal adoption, community building, and involvement. On this topic I spoke to Terri Griffith, and she said there is some good work going on at University of Indiana regarding this, and hinted that she might be blogging about it in the near future. This area holds lots of interest for me. Amongst our customers, CPAC stands out as an organization that has successfully built a community. So, I wonder - what are the building blocks? A strong leader? A common or unifying purpose? A certain level of engagement? I imagine someone has created an equation that says “for a community to grow at 30% per month, there must be an engagement level x to the square root of y, where x equals current community size, and y equals the expected growth rate, and the result is how many engagements the average user must contribute to maintain that growth.” Does anyone have a framework like that? The net result of everyone’s experience is that there is nothing to do but start early and fail often. Kevin Jones made this the focus of his keynote. He talked about the types of failure and what they mean. And he showed his famous kids-at-work video. Kevin’s blog also has this post: Social Business Failure #8: Workflow Integration. This is something that we’ve been working on at Oracle. Since so much of business is based in enterprise applications such as ERP and CRM (and since Oracle offers e-Business Suite, Siebel, PeopleSoft, and JD Edwards, as well as Fusion Applications), it makes sense that the social capabilities of Oracle WebCenter are built right into these applications. There are two types of social collaboration – ad-hoc and exception handling. 
When you are in a business process and encounter an exception, you immediately look for 1) the document that tells you how to handle it, or 2) the person who can tell you how to handle it. With WebCenter built into these processes, people either search their content management system, or engage in expertise location and conversation. The great thing is, THEY DON’T HAVE TO LEAVE THE APPLICATION TO DO IT. Oracle has built the social capabilities right into the applications and business processes. I don’t think enough folks were able to see that at the event, but I expect that over the next six months folks will become very aware of it. WebCenter also provides the ability to have ad-hoc collaboration, search, and expertise location that folks need when they are innovating or collaborating. We demonstrated Oracle Social Network. It’s built on our Oracle WebCenter product to provide social collaboration inside and outside of your company. When we showed it to people, there were a number of areas that they commented on that were different from the other products being shown at the conference: screenshots from within the product; many authors working on documents simultaneously; flagging people for follow-up; the direct ability to call out to people; and the ability to see presence – not just whether someone is online, but which conversation they are actively in. Great stuff – the conference was full of smart people that we enjoy spending time with. We’ll keep up in the meantime, but we look forward to seeing you in Boston.

    Read the article

  • How to implement jQuery and MooTools together?

    - by Avi Kumar Manku
    I am developing a website in which I am implementing two image-gallery sliders, one with jQuery and one with MooTools. The problem is that when I use both together, the jQuery slider doesn't work while the MooTools slider does; the jQuery slider works when I remove MooTools. What should I do to get both sliders working together? Any suggestions will be helpful. <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd"> <html xmlns="http://www.w3.org/1999/xhtml"> <head> <meta http-equiv="Content-Type" content="text/html; charset=utf-8" /> <title>Tresmode | Footwear &amp; Accessories</title> <script type="text/javascript" src="js/jquery-1.5.min.js"></script> <script src="js/jquery.easing.1.3.js" type="text/javascript"></script> <script src="js/jquery.slideviewer.1.2.js" type="text/javascript"></script> <!-- Syntax hl --> <script src="js/jquery.syntax.min.js" type="text/javascript" charset="utf-8"></script> <script type="text/javascript"> $(window).bind("load", function() { $("div#mygaltop").slideView({toolTip: true, ttOpacity: 0.5}); $("div#mygalone").slideView(); //if leaved blank performs the default kind of animation (easeInOutExpo, 750) $("div#mygaltwo").slideView({ easeFunc: "easeInOutBounce", easeTime: 2200, toolTip: true }); $("div#mygalthree").slideView({ easeFunc: "easeInOutSine", easeTime: 100, uiBefore: true, ttOpacity: 0.5, toolTip: true }); }); $(function(){ $.syntax({root: 'http://www.gcmingati.net/wordpress/wp-content/themes/giancarlo-mingati/js/jquery-syntax/'}); }); </script> <link href="css/style.css" rel="stylesheet" type="text/css" /> <link href="css/product.css" rel="stylesheet" type="text/css" /> <link href="css/scroll.css" rel="stylesheet" type="text/css" /> <!--[if lte IE 8]> <link href="css/ieonly.css" rel="stylesheet" type="text/css" /> <![endif]--> <script language="javascript" type="text/javascript" src="js/mootools-1.2-core.js"></script> <script language="javascript" type="text/javascript" src="js/mootools-1.2-more.js"></script> <script language="javascript" type="text/javascript" src="js/SlideItMoo.js"></script> <script language="javascript" type="text/javascript"> window.addEvent('domready', function(){ /* thumbnails example , links only */ new SlideItMoo({itemsVisible:5, // the number of thumbnails that are visible currentElement: 0, // the current element. starts from 0. If you want to start the display with a specific thumbnail, change this thumbsContainer: 'thumbs', elementScrolled: 'thumb_container', overallContainer: 'gallery_container'}); /* thumbnails example , div containers */ new SlideItMoo({itemsVisible:5, // the number of thumbnails that are visible currentElement: 0, // the current element. starts from 0. If you want to start the display with a specific thumbnail, change this thumbsContainer: 'thumbs2', elementScrolled: 'thumb_container2', overallContainer: 'gallery_container2'}); /* banner rotator example */ new SlideItMoo({itemsVisible:1, // the number of thumbnails that are visible showControls:0, // show the next-previous buttons autoSlide:2500, // insert interval in milliseconds currentElement: 0, // the current element. starts from 0. 
If you want to start the display with a specific thumbnail, change this transition: Fx.Transitions.Bounce.easeOut, thumbsContainer: 'banners', elementScrolled: 'banner_container', overallContainer: 'banners_container'}); }); </script> </head> <body> <div id="landing"> <!-- landing page menu --> <div id="landing_menu"> <ul> <li><a class="active" href="#">SPECIALS</a></li> <li><a href="#">SHOP MEN'S</a></li> <li class="none"><a class="none" href="#">SHOP WOMEN'S</a></li> </ul> </div> <!-- landing page menu --> <!-- loading container menu --> <div id="container_part"> <div id="big_image_slider"> <!-- <img src="images/briteloves.png" alt="Britelove" /> --> <div id="mygaltop" class="svw"> <ul> <li><img alt="Tresmode | Footwear &amp; Accessories" src="images/briteloves.png" /></li> <li><img alt="Tresmode | Footwear &amp; Accessories" src="images/1.jpg" /></li> <li><img alt="Tresmode | Footwear &amp; Accessories" src="images/2.jpg" /></li> <li><img alt="Tresmode | Footwear &amp; Accessories" src="images/3.jpg" /></li> <li><img alt="Tresmode | Footwear &amp; Accessories" src="images/4.jpg" /></li> <li><img alt="Tresmode | Footwear &amp; Accessories" src="images/5.jpg" /></li> <li><img alt="Tresmode | Footwear &amp; Accessories" src="images/6.jpg" /></li> <li><img alt="Tresmode | Footwear &amp; Accessories" src="images/7.jpg" /></li> <li><img alt="Tresmode | Footwear &amp; Accessories" src="images/8.jpg" /></li> <li><img alt="Tresmode | Footwear &amp; Accessories" src="images/9.jpg" /></li> <li><img alt="Tresmode | Footwear &amp; Accessories" src="images/10.jpg" /></li> <li><img alt="Tresmode | Footwear &amp; Accessories" src="images/11.jpg" /></li> <li><img alt="Tresmode | Footwear &amp; Accessories" src="images/12.jpg" /></li> </ul> </div> </div> <div class="new_style_banner"><img src="images/new_styles.png" alt="new style" /></div> <div class="new_style_banner"><img src="images/ford-super-models.png" alt="ford super models" /></div> </div> <!--- loading container menu --> <!-- footer scrool ---> <div id="footer_scroll"> <!--thumbnails slideshow begin--> <div id="gallery_container"> <div id="thumb_container"> <div id="thumbs"> <a href="gallery/full/DC080302018.jpg" rel="lightbox[galerie]" target="_blank"><img src="gallery/thumb/1.jpg"/></a> <a href="gallery/full/DC080302028.jpg" rel="lightbox[galerie]" target="_blank"><img src="gallery/thumb/2.jpg" /></a> <a href="gallery/full/DC080302030.jpg" rel="lightbox[galerie]" target="_blank"><img src="gallery/thumb/3.jpg"/></a> <a href="gallery/full/DC080302018.jpg" rel="lightbox[galerie]" target="_blank"><img src="gallery/thumb/4.jpg" /></a> <a href="gallery/full/DC080302028.jpg" rel="lightbox[galerie]" target="_blank"><img src="gallery/thumb/5.jpg" /></a> <a href="gallery/full/DC080302030.jpg" rel="lightbox[galerie]" target="_blank"><img src="gallery/thumb/6.jpg"/></a> <a href="gallery/full/DC080302018.jpg" rel="lightbox[galerie]" target="_blank"><img src="gallery/thumb/1.jpg"/></a> <a href="gallery/full/DC080302028.jpg" rel="lightbox[galerie]" target="_blank"><img src="gallery/thumb/2.jpg" /></a> <a href="gallery/full/DC080302030.jpg" rel="lightbox[galerie]" target="_blank"><img src="gallery/thumb/7.jpg"/></a> <a href="gallery/full/DC080302018.jpg" rel="lightbox[galerie]" target="_blank"><img src="gallery/thumb/8.jpg" /></a> <a href="gallery/full/DC080302028.jpg" rel="lightbox[galerie]" target="_blank"><img src="gallery/thumb/9.jpg" /></a> <a href="gallery/full/DC080302030.jpg" rel="lightbox[galerie]" target="_blank"><img 
src="gallery/thumb/10.jpg"/></a> </div> </div> </div> <!--thumbnails slideshow end--> </div> <!-- foooter scrooll --> </div> <script type="text/javascript"> var gaJsHost = (("https:" == document.location.protocol) ? "https://ssl." : "http://www."); document.write(unescape("%3Cscript src='" + gaJsHost + "google-analytics.com/ga.js' type='text/javascript'%3E%3C/script%3E")); </script> <script type="text/javascript"> var pageTracker = _gat._getTracker("UA-2064812-2"); pageTracker._initData(); pageTracker._trackPageview(); </script> </body> </html>

    Read the article

  • What Counts For A DBA: Foresight

    - by drsql
    Of all the valuable attributes of a DBA covered so far in this series, ranging from passion to humility to practicality, perhaps one of the most important attributes may turn out to be the most seemingly-nebulous: foresight. According to the Free Dictionary, foresight is the "perception of the significance and nature of events before they have occurred". Foresight does not come naturally to most people, as the parent of any teenager will attest. No matter how clearly you see their problems coming, they won't listen, and have to fail before eventually (hopefully) learning for themselves. Having graduated from the school of hard knocks, the DBA, the naive teenager no longer, acquires the ability to foretell how events will unfold in response to certain actions or attitudes with the unerring accuracy of a doom-laden prophet. Like Simba in the Lion King, after a few blows to the head, we foretell that a sore head will be the inevitable consequence of a swing of Rafiki's stick, and we take evasive action. However, foresight is about more than simply learning when to duck. It's about taking the time to understand and prevent the habits that caused the stick to swing in the first place. And based on this definition, I often think there is a lot less foresight on display in my industry than there ought to be. Most DBAs reading this blog will spot a line such as the following in a piece of "working" code, understand immediately why it is less than optimum, and take evasive action. …WHERE CAST (columnName as int) = 1 However, the programmers who regularly write this sort of code clearly lack that foresight, and this and numerous other examples of similarly-malodorous code prevail throughout our industry (and provide premium-grade fertilizer for the healthy growth of many a consultant's bank account). Sometimes, perhaps harried by impatient managers and painfully tight deadlines, everyone makes mistakes. Yes, I too occasionally write code that "works", but basically stinks. When the problems manifest, they are sometimes accompanied by a sense of grim recognition that somewhere in me existed the foresight to know that that approach would lead to this problem. However, in the headlong rush, warning signs were overlooked, and lessons learned previously, which could have supplied foresight to the current project, were lost and not applied.   Of course, the problem often is a simple lack of skills, training and knowledge in the relevant technology and/or business space; programmers and DBAs forced to do their best in the face of inadequate training, or to apply their skills in areas where they lack experience. However, often the problem goes deeper than this; I detect in some DBAs and programmers a certain laziness of attitude.   They veer from one project to the next, going with "whatever works", unwilling or unable to take the time to understand where their actions are leading them. Of course, the whole "Agile" mindset is often interpreted to favor flexibility and rapid production over aiming to get things right the first time. The faster you try to travel in the dark, frequently changing direction, the more important it is to have someone who has the foresight to know at least roughly where you are heading. This is doubly true for the data tier which, no matter how you try to deny it, simply cannot be "redone" every month as you learn aspects of the world you are trying to model that, with a little bit of foresight, you would have seen coming.   
Sometimes, when as a DBA you can glance briefly at 200 lines of working SQL code and know instinctively why it will cause problems, foresight can feel like magic, but it isn't; it's more like muscle memory. It is acquired as the consequence of good experience, useful communication with those around you, and a willingness to learn continually, through continued education as well as from failure. Foresight can be deployed only by finding time to understand how the lessons learned from other DBAs, and other projects, can help steer the current project in the right direction.   C.S. Lewis once said "The future is something which everyone reaches at the rate of sixty minutes an hour, whatever he does, whoever he is." It cannot be avoided; the quality of what you build now is going to affect you, and others, at some point in the future. Take the time to acquire foresight; it is a love letter to your future self, to say you cared.

    Read the article

  • Cannot connect to secure wireless with Netgear wna3100 USB

    - by Vince Radice
    I have installed Ubuntu 11.10. I used a wired connection to download and install all of the updates. When I tried to use a Netgear WNA3100 wireless USB network adapter, it failed. Much searching and trying things I was finally able to get it working by disabling security on my router. I have verified this by disabling security and I was able to connect. When I enabled security (WPA2 PSK), the connection failed. What is necessary to enable security (WPA2 PSK) and still use the Netgear USB interface? Here is the output from the commands most requested lsusb Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub Bus 002 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub Bus 003 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub Bus 004 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub Bus 001 Device 003: ID 0846:9020 NetGear, Inc. WNA3100(v1) Wireless-N 300 [Broadcom BCM43231] lshw -C network *-network description: Ethernet interface product: RTL-8139/8139C/8139C+ vendor: Realtek Semiconductor Co., Ltd. physical id: 3 bus info: pci@0000:02:03.0 logical name: eth0 version: 10 serial: 00:40:ca:44:e6:3e size: 10Mbit/s capacity: 100Mbit/s width: 32 bits clock: 33MHz capabilities: pm bus_master cap_list rom ethernet physical tp mii 10bt 10bt-fd 100bt 100bt-fd autonegotiation configuration: autonegotiation=on broadcast=yes driver=8139too driverversion=0.9.28 duplex=half latency=32 link=no maxlatency=64 mingnt=32 multicast=yes port=MII speed=10Mbit/s resources: irq:19 ioport:c800(size=256) memory:ee011000-ee0110ff memory:40000000-4000ffff *-network description: Wireless interface physical id: 1 logical name: wlan0 serial: e0:91:f5:56:e1:0d capabilities: ethernet physical wireless configuration: broadcast=yes driver=ndiswrapper+bcmn43xx32 driverversion=1.56+,08/26/2009, 5.10.79.30 ip=192.168.1.104 link=yes multicast=yes wireless=IEEE 802.11g iwconfig lo no wireless extensions. eth0 no wireless extensions. 
wlan0 IEEE 802.11g ESSID:"vincecarolradice" Mode:Managed Frequency:2.422 GHz Access Point: A0:21:B7:9F:E5:EE Bit Rate=121.5 Mb/s Tx-Power:32 dBm RTS thr:2347 B Fragment thr:2346 B Encryption key:off Power Management:off Link Quality:76/100 Signal level:-47 dBm Noise level:-96 dBm Rx invalid nwid:0 Rx invalid crypt:0 Rx invalid frag:0 Tx excessive retries:0 Invalid misc:0 Missed beacon:0 ndiswrapper -l bcmn43xx32 : driver installed device (0846:9020) present lsmod | grep ndis ndiswrapper 193669 0 dmesg | grep -e ndis -e wlan [ 907.466392] ndiswrapper version 1.56 loaded (smp=yes, preempt=no) [ 907.838507] ndiswrapper (import:233): unknown symbol: ntoskrnl.exe:'IoUnregisterPlugPlayNotification' [ 907.838955] ndiswrapper: driver bcmwlhigh5 (Netgear,11/05/2009, 5.60.180.11) loaded [ 908.137940] wlan0: ethernet device e0:91:f5:56:e1:0d using NDIS driver: bcmwlhigh5, version: 0x53cb40b, NDIS version: 0x501, vendor: 'NDIS Network Adapter', 0846:9020.F.conf [ 908.141879] wlan0: encryption modes supported: WEP; TKIP with WPA, WPA2, WPA2PSK; AES/CCMP with WPA, WPA2, WPA2PSK [ 908.143048] usbcore: registered new interface driver ndiswrapper [ 908.178826] ADDRCONF(NETDEV_UP): wlan0: link is not ready [ 994.015088] usbcore: deregistering interface driver ndiswrapper [ 994.028892] ndiswrapper: device wlan0 removed [ 994.080558] ndiswrapper version 1.56 loaded (smp=yes, preempt=no) [ 994.374929] ndiswrapper: driver bcmn43xx32 (,08/26/2009, 5.10.79.30) loaded [ 994.404366] ndiswrapper (mp_init:219): couldn't initialize device: C0000001 [ 994.404384] ndiswrapper (pnp_start_device:435): Windows driver couldn't initialize the device (C0000001) [ 994.404666] ndiswrapper (mp_halt:262): device e05b6480 is not initialized - not halting [ 994.404671] ndiswrapper: device eth%d removed [ 994.404709] ndiswrapper: probe of 1-5:1.0 failed with error -22 [ 994.406318] usbcore: registered new interface driver ndiswrapper [ 2302.058692] wlan0: ethernet device e0:91:f5:56:e1:0d using NDIS driver: bcmn43xx32, version: 0x50a4f1e, NDIS version: 0x501, vendor: 'NDIS Network Adapter', 0846:9020.F.conf [ 2302.060882] wlan0: encryption modes supported: WEP; TKIP with WPA, WPA2, WPA2PSK; AES/CCMP with WPA, WPA2, WPA2PSK [ 2302.113838] ADDRCONF(NETDEV_UP): wlan0: link is not ready [ 2354.611318] ndiswrapper (iw_set_auth:1602): invalid cmd 12 [ 2355.268902] ADDRCONF(NETDEV_CHANGE): wlan0: link becomes ready [ 2365.400023] wlan0: no IPv6 routers present [ 2779.226096] ADDRCONF(NETDEV_UP): wlan0: link is not ready [ 2779.422343] ADDRCONF(NETDEV_UP): wlan0: link is not ready [ 2797.574474] ndiswrapper (iw_set_auth:1602): invalid cmd 12 [ 2802.607937] ndiswrapper (iw_set_auth:1602): invalid cmd 12 [ 2803.261315] ADDRCONF(NETDEV_CHANGE): wlan0: link becomes ready [ 2813.952028] wlan0: no IPv6 routers present [ 3135.738431] ndiswrapper (iw_set_auth:1602): invalid cmd 12 [ 3139.180963] ADDRCONF(NETDEV_UP): wlan0: link is not ready [ 3139.816561] ADDRCONF(NETDEV_UP): wlan0: link is not ready [ 3163.229872] ADDRCONF(NETDEV_UP): wlan0: link is not ready [ 3163.444542] ADDRCONF(NETDEV_UP): wlan0: link is not ready [ 3163.758297] ADDRCONF(NETDEV_UP): wlan0: link is not ready [ 3163.860684] ADDRCONF(NETDEV_UP): wlan0: link is not ready [ 3205.118732] ADDRCONF(NETDEV_UP): wlan0: link is not ready [ 3205.139553] ADDRCONF(NETDEV_UP): wlan0: link is not ready [ 3205.300542] ADDRCONF(NETDEV_UP): wlan0: link is not ready [ 3353.341402] ndiswrapper (iw_set_auth:1602): invalid cmd 12 [ 3363.266399] ADDRCONF(NETDEV_UP): wlan0: link is not ready [ 
3363.505475] ADDRCONF(NETDEV_UP): wlan0: link is not ready [ 3363.506619] ndiswrapper (set_iw_auth_mode:601): setting auth mode to 5 failed (00010003) [ 3363.717203] ADDRCONF(NETDEV_UP): wlan0: link is not ready [ 3363.779206] ADDRCONF(NETDEV_UP): wlan0: link is not ready [ 3405.206152] ADDRCONF(NETDEV_UP): wlan0: link is not ready [ 3405.248624] ADDRCONF(NETDEV_UP): wlan0: link is not ready [ 3405.577664] ADDRCONF(NETDEV_UP): wlan0: link is not ready [ 3438.852457] ndiswrapper (iw_set_auth:1602): invalid cmd 12 [ 3438.908573] ADDRCONF(NETDEV_UP): wlan0: link is not ready [ 3568.282995] ADDRCONF(NETDEV_UP): wlan0: link is not ready [ 3568.325237] ndiswrapper (set_iw_auth_mode:601): setting auth mode to 5 failed (00010003) [ 3568.460716] ADDRCONF(NETDEV_UP): wlan0: link is not ready [ 3568.461763] ndiswrapper (set_iw_auth_mode:601): setting auth mode to 5 failed (00010003) [ 3568.809776] ADDRCONF(NETDEV_UP): wlan0: link is not ready [ 3568.880641] ADDRCONF(NETDEV_UP): wlan0: link is not ready [ 3610.122848] ADDRCONF(NETDEV_UP): wlan0: link is not ready [ 3610.148328] ADDRCONF(NETDEV_UP): wlan0: link is not ready [ 3610.324502] ADDRCONF(NETDEV_UP): wlan0: link is not ready [ 3636.088798] ndiswrapper (iw_set_auth:1602): invalid cmd 12 [ 3636.712186] ADDRCONF(NETDEV_CHANGE): wlan0: link becomes ready [ 3647.600040] wlan0: no IPv6 routers present I am using the system now with the router security turned off. When I submit this, I will turn security back on.

    Read the article

  • Organization &amp; Architecture UNISA Studies &ndash; Chap 5

    - by MarkPearl
    Learning Outcomes Describe the operation of a memory cell Explain the difference between DRAM and SRAM Discuss the different types of ROM Explain the concepts of a hard failure and a soft error respectively Describe SDRAM organization Semiconductor Main Memory The two traditional forms of RAM used in computers are DRAM and SRAM, divided into two technologies… dynamic and static. DRAM (Dynamic RAM) Dynamic RAM is made with cells that store data as charge on capacitors. The presence or absence of charge in a capacitor is interpreted as a binary 1 or 0. Because capacitors have a natural tendency to discharge, dynamic RAM requires periodic charge refreshing to maintain data storage. The term dynamic refers to the tendency of the stored charge to leak away, even with power continuously applied. Although the DRAM cell is used to store a single bit (0 or 1), it is essentially an analogue device. The capacitor can store any charge value within a range; a threshold value determines whether the charge is interpreted as a 1 or 0. SRAM (Static RAM) SRAM is a digital device that uses the same logic elements used in the processor. In SRAM, binary values are stored using traditional flip-flop logic configurations. SRAM will hold its data as long as power is supplied to it. Unlike DRAM, no refresh is required to retain data. SRAM vs. DRAM DRAM is simpler and smaller than SRAM. Thus it is more dense and less expensive than SRAM. The cost of the refreshing circuitry for DRAM needs to be considered, but if the machine requires a large amount of memory, DRAM turns out to be cheaper than SRAM. SRAMs are somewhat faster than DRAM, thus SRAM is generally used for cache memory and DRAM is used for main memory. Types of ROM Read Only Memory (ROM) contains a permanent pattern of data that cannot be changed. ROM is non-volatile, meaning no power source is required to maintain the bit values in memory. While it is possible to read a ROM, it is not possible to write new data into it. An important application of ROM is microprogramming; other applications include library subroutines for frequently wanted functions, system programs, and function tables. A ROM is created like any other integrated circuit chip, with the data actually wired into the chip as part of the fabrication process. To reduce costs of fabrication, we have PROMs. PROMs are… written only once, non-volatile, and written after fabrication. Another variation of ROM is the read-mostly memory, which is useful for applications in which read operations are far more frequent than write operations, but for which non-volatile storage is required. There are three common forms of read-mostly memory, namely… EPROM EEPROM Flash memory Error Correction Semiconductor memory is subject to errors, which can be classed into two categories… Hard failure – Permanent physical defect so that the memory cell or cells cannot reliably store data Soft error – Random error that alters the contents of one or more memory cells without damaging the memory (common causes include power supply issues, etc.) Most modern main memory systems include logic for both detecting and correcting errors. 
Error detection works as follows… When data is to be written into memory, a calculation is performed on the data to produce a code. Both the code and the data are stored. When the previously stored word is read out, the code is used to detect and possibly correct errors. The error checking provides one of 3 possible results… No errors are detected – the fetched data bits are sent out. An error is detected, and it is possible to correct the error. The data bits plus error correction bits are fed into a corrector, which produces a corrected set of bits to be sent out. An error is detected, but it is not possible to correct it. This condition is reported. Hamming Code See wiki for detailed explanation. We will probably need to know how to do a Hamming code – refer to the textbook (pg. 188 – 189). Advanced DRAM organization One of the most critical system bottlenecks when using high-performance processors is the interface to main memory. This interface is the most important pathway in the entire computer system. The basic building block of main memory remains the DRAM chip. In recent years a number of enhancements to the basic DRAM architecture have been explored, and some of these are now on the market, including… SDRAM (Synchronous DRAM), DDR-DRAM and RDRAM. SDRAM (Synchronous DRAM) SDRAM exchanges data with the processor synchronized to an external clock signal and running at the full speed of the processor/memory bus without imposing wait states. SDRAM employs a burst mode to eliminate the address setup time and row and column line precharge time after the first access. In burst mode a series of data bits can be clocked out rapidly after the first bit has been accessed. SDRAM has a multiple-bank internal architecture that improves opportunities for on-chip parallelism. SDRAM performs best when it is transferring large blocks of data serially. There is now an enhanced version of SDRAM known as double data rate SDRAM, or DDR-SDRAM, that overcomes the once-per-cycle limitation of SDRAM.
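To make the detect-and-correct flow above concrete, here is a brief worked illustration of my own (not taken from the textbook), using the standard Hamming(7,4) layout in which parity bits p1, p2 and p3 occupy positions 1, 2 and 4 of the stored word: p1 = d1 ⊕ d2 ⊕ d4, p2 = d1 ⊕ d3 ⊕ d4, p3 = d2 ⊕ d3 ⊕ d4. For the data bits 1011 this gives p1 = 0, p2 = 1 and p3 = 0, so the word stored in memory is 0110011. If a soft error later flips position 5, the word is read back as 0110111; recomputing the three checks over positions (1,3,5,7), (2,3,6,7) and (4,5,6,7) yields the syndrome 101 = 5, which both signals the error and names the bit to invert before the data is sent out, while a syndrome of 000 corresponds to the "no errors detected" outcome. The third outcome above – an error detected but not correctable – is typically handled by adding an overall parity bit to the word (the SEC-DED arrangement).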

    Read the article

  • Your Next IT Job

    - by BuckWoody
    Some data professionals have worked (and plan to work) in the same place for a long time. In organizations large and small, the turnover rate just isn’t that high. This has not been my experience. About every 3-5 years I’ve changed either roles or companies. That might be due to the IT environment or my personality (or a mix of the two), but the point is that I’ve had many roles and worked for many companies large and small throughout my 27+ years in IT. At one point this might have been a detriment – a prospective employer looks at the resume and says “it seems you’ve moved around quite a bit.” But I haven’t found that to be the case all the time – in fact, in some cases the variety of jobs I’ve held has been an asset because I’ve seen what works (and doesn’t) in other environments, which can save time and money. So if you’re in the first camp – great! Stay where you are, and continue doing the work you love. But if you’re in the second, then this post might be useful. If you are planning on making a change, or perhaps you’ve hit a wall at your current location, you might start looking around for a better-paying job – and there’s nothing wrong with that. We all try to make our lives better, and for some that involves more money. Money, however, isn’t always the primary motivator. I’ve gone to another job that doesn’t have as many benefits or the same salary as the current job, in order to gain more experience, get a better work/life balance and so on. It’s a mix of factors that only you know about. So I thought I would lay out a few advantages and disadvantages in the shops I’ve worked at. This post isn’t aimed at a single employer, but represents a mix of what I’ve experienced, and of course the opinions here are my own. You will most certainly have a different take – if so, please post a response! I also won’t mention a specific industry – I’ve worked everywhere from medical firms, legal offices, retail, billing centers, manufacturing, government, even to NASA. I’m focusing here mostly on size and composition. And I’m making some very broad generalizations here – I am fully aware that a small company might have great benefits and a large company might allow a lot of role flexibility.  Your mileage may vary – and again, post those comments! Small Company To me a “small company” means around 100 people or less – sometimes a lot less. These can be really fun, frustrating places to work. Advantages: a great deal of flexibility, a wide range of roles (often at the same time), a large degree of responsibility, immediate feedback, close relationships with co-workers, work directly with your customer. Disadvantages: Too much responsibility, little work/life balance, immature political structure, few (if any) benefits. If the business is family-owned, they can easily violate work/life boundaries. Medium Size Company In my experience, the next size of company I would work for ranges from a few hundred people to around five thousand. Advantages: Good mobility – fairly easy to get promoted, acceptable benefits, more defined responsibilities, better work/life balance, balanced load for expertise, but still the organizational structure is fairly simple to understand. Disadvantages: Pay is not always highest, rapid changes in structure as the organization grows, transient workforce. You may not be given the opportunity to work with another technology if someone already “owns” it. Politics are painful at this level as people try to learn how to do it. 
Large Company When you get into the tens of thousands of folks employed around the world, you’re in a large company. Advantages: Lots of room to move around – sometimes you can work (as I have) multiple jobs through the years and yet stay at the same company, building time for benefits, very defined roles, trained managers (yes, I know some of them are still awful – trust me – I DO know that), higher-end benefits, long careers possible, discounts at retailers and other “soft” benefits, prestige. For some, a higher level of politics (done professionally) is a good thing. Disadvantages: You could become another faceless name in the crowd, there might not be a great deal of flexibility, and large organizational changes might take away any control you have over your career. I’ve also seen large layoffs happen, and good people get let go while “dead weight” is retained. For some, a higher level of politics is distasteful. So what are your experiences? Share with the group!

    Read the article

< Previous Page | 95 96 97 98 99 100 101 102 103 104 105 106  | Next Page >