Search Results

Search found 5760 results on 231 pages for 'itunes alternative'.


  • Integrate Google Wave With Your Windows Workflow

    - by Matthew Guay
    Have you given Google Wave a try, only to find it difficult to keep up with? Here's how you can integrate Google Wave with your desktop and workflow with some free and simple apps. Google Wave is an online web app, and unlike many Google services, it's not easily integrated with standard desktop applications. Instead, you'll have to keep it open in a browser tab, and since it is one of the most intensive HTML5 web apps available today, you may notice slowdowns in many popular browsers. Plus, it can be hard to stay on top of your Wave conversations and collaborations by just switching back and forth between the website and whatever else you're working on. Here we'll look at some tools that can help you integrate Google Wave with your workflow and make it feel more native in Windows.

    Use Google Wave Directly in Windows
    What's one of the best ways to make a web app feel like a native application? By making it into a native application, of course! Waver is a free Adobe Air-powered app that can make the mobile version of Google Wave feel at home on your Windows, Mac, or Linux desktop. We found it to be a quick and easy way to keep on top of our waves and collaborate with our friends. To get started with Waver, open its homepage on the Adobe Air Marketplace (link below) and click Download From Publisher. Waver is powered by Adobe Air, so if you don't have Adobe Air installed, you'll need to download and install it first. After clicking the link above, Adobe Air will open a prompt asking what you wish to do with the file. Click Open, and then install as normal. Once the installation is finished, enter your Google Account info in the window. After a few moments, you'll see your Wave account in miniature, running directly in Waver. Click a Wave to view it, or click New wave to start a new Wave message. Unfortunately, in our tests the search box didn't seem to work, but everything else worked fine. Google Wave works great in Waver, though not all of the Wave features are available, since it is running the mobile version of Wave. You can still view content from plugins, including YouTube videos, directly in Waver.

    Get Wave Notifications From Your Windows Taskbar
    Most popular email and Twitter clients give you notifications from your system tray when new messages come in. And with Google Wave Notifier, you can now get the same alerts when you receive a new Wave message. Head over to the Google Wave Notifier site (link below), and click the download link to get started. Make sure to download the latest binary zip, as this one will contain the Windows program rather than the source code. Unzip the folder, and then run GoogleWaveNotifier.exe. On first run, you can enter your Google Account information. Notice that this is not a standard account login window; you'll need to enter your email address in the Username field, and then your password below it. You can also change other settings from this dialog, including update frequency and whether or not to run at startup. Click the value, and then select the setting you want from the dropdown menu. Now, you'll have a new Wave icon in your system tray. When it detects new Waves or unread updates, it will display a popup notification with details about the unread Waves. Additionally, the icon will change to show the number of unread Waves. Click the popup to open Wave in your browser. Or, if you have Waver installed, simply open the Waver window to view your latest Waves.
    If you ever need to change settings again in the future, right-click the icon and select Settings, and then edit as above.

    Get Wave Notifications in Your Email
    Most of us have Outlook or Gmail open all day, and seldom leave the house without a smartphone with push email. And thanks to a new Wave feature, you can still keep up with your Waves without having to change your workflow. To activate email notifications from Google Wave, log in to your Wave account, click the arrow beside your Inbox, and select Notifications. Select how quickly you want to receive notifications, and choose which email address should receive them. Click Save when you're finished. Now you'll receive an email with information about new and updated Waves in your account. If there were only small changes, you may get enough info directly in the email; otherwise, you can click the link and open that Wave in your browser.

    Conclusion
    Google Wave has great potential as a collaboration and communications platform, but by default it can be hard to keep up with what's going on in your Waves. These apps for Windows help you integrate Wave with your workflow, and can keep you from constantly logging in and checking for new Waves. And since Google Wave registration is now open for everyone, it's a great time to give it a try and see how it works for yourself.

    Links
    Signup for Google Wave (Google Account required)
    Download Waver from the Adobe Air Marketplace
    Download Google Wave Notifier

    Read the article

  • Beginner’s Guide to Flock, the Social Media Browser

    - by Asian Angel
    Do you want a browser that can work as a social hub from the first moment that you start it up? If you love the idea of a browser that is ready to go out of the box, then join us as we look at Flock.

    During the Install Process
    When you are installing Flock there are two install windows that you should watch for. The first one lets you choose between the "Express Setup & Custom Setup". We recommend the "Custom Setup". Once you have selected the "Custom Setup" you can choose which of the following options will be enabled. Notice the "anonymous usage statistics" option at the bottom…you can choose to leave this enabled or disable it based on your comfort level.

    The First Look
    When you start Flock up for the first time it will open with three tabs. All three are of interest…especially if this is your first time using Flock. With the first tab you can jump right into "logging in/activating" favorite social services within Flock. This page is set to display each time that you open Flock unless you deselect the option in the lower left corner. The second tab provides a very nice overview of Flock and its built-in social management power. The third and final page can be considered a "Personal Page". You can make some changes to the content displayed for quick and easy access and/or monitoring of "Twitter Search, Favorite Feeds, Favorite Media, Friend Activity, & Favorite Sites". Use the "Widget Menu" in the upper left corner to select the "Personal Page Components" that you would like to use. In the upper right corner there is a built-in "Search Bar" and buttons for "Posting to Your Blog & Uploading Media". To help personalize the "My World Page" just a bit more, you can even change the text to your name or whatever best suits your needs.

    The Flock Toolbar
    The "Flock Toolbar" is full of social account management goodness. In order from left to right the buttons are: My World (Homepage), Open People Sidebar, Open Media Bar, Open Feeds Sidebar, Webmail, Open Favorites Sidebar, Open Accounts and Services Sidebar, Open Web Clipboard Sidebar, Open Blog Editor, & Open Photo Uploader. The buttons will be "highlighted" with a blue background to help indicate which area you are in. The first area will display a listing of people that you are watching/following at the services shown here. Clicking on the "Media Bar Button" will display the following "Media Slider Bar" above your "Tab Bar". Notice that there is a built-in "Search Bar" on the right side. Any photos, etc. clicked on will be opened in the currently focused tab below the "Media Bar". Here is a listing of the "Media Streams" available for viewing. By default Flock will come with a small selection of pre-subscribed RSS Feeds. You can easily unsubscribe, rearrange, add custom folders, or add non-categorized feeds as desired. RSS Feeds subscribed to here can be viewed combined together as a single feed (clickable links) in the "My World Page", or can be viewed individually in a new tab. Very nice! Next on the "Flock Toolbar" is the "Webmail Button". You can set up access to your favorite "Yahoo!, Gmail, & AOL Mail" accounts from here. The "Favorites Sidebar" combines your "Browser History & Bookmarks" into one convenient location. The "Accounts and Services Sidebar" gives you quick and easy access to get logged into your favorite social accounts. Clicking on any of the links will open that particular service's login page in a new tab. Want to store items such as photos, links, and text to add into a blog post or tweet later on?
    Just drag and drop them into the "Web Clipboard Sidebar" for later access. Clicking on the "Blog Editor Button" will open up a separate blogging window to compose your posts in. If you have not logged into or set up an account yet in Flock, you will see the following message window. The "Blogging Window"…nice, simple, and straightforward. If you are not already logged into your photo account(s), then you will see the following message window when you click on the "Photo Uploader Button". Clicking "OK" will open the "Accounts and Services Sidebar" with compatible photo services highlighted in a light yellow color. Log in to your favorite service to start uploading all those great images.

    After Setting Up
    Here is what our browser looked like after setting up some of our favorite services. The Twitter feed is certainly looking nice and easy to read through… Some tweaking in the "RSS Feeds Sidebar" makes for a perfect reading experience. Keeping up with our e-mail is certainly easy to do too. A look back at the "Accounts and Services Sidebar" shows that all of our accounts are actively logged in (green dot on the right side). Going back to our "My World Page" you can see how nice everything looks for monitoring our "Friend Activity & Favorite Feeds". Moving on to regular browsing, everything is looking very good… Flock is a perfect choice for anyone wanting a browser and social hub all built into a single app.

    Conclusion
    Anyone who loves keeping up with their favorite social services while browsing will find using Flock to be a wonderful experience. You literally get the best of both worlds with this browser.

    Links
    Download Flock
    The Official Flock Extensions Homepage
    The Official Flock Toolbar Homepage

    Read the article

  • 14+ WordPress Portfolio Themes

    - by Edward
    There are various portfolio themes for WordPress out there; with this collection we are trying to help you choose the best one. These themes can be used to create any type of personal, photography, art or corporate portfolio.

    Display 3 in 1
    Display 3 in 1 – Business & Portfolio WordPress Theme. Features a fantastic 3D image slideshow that can be controlled from your backend with a custom tool. The theme has a huge WordPress custom backend (8 additional admin pages) that makes customization of the theme easy for those who don't know much about coding or WordPress. Price: $40 View Demo Download

    DeepFocus
    Tempting features such as automatic separation of blog and portfolio content by template, publishing of the most important information on the homepage, styles to choose from and many more such features. It also provides page templates for blog, portfolio, blog archive, tags etc. Its best feature is that it helps you manage everything from one place. Price: $39 (Package includes more than 55 themes) View Demo Download

    SimplePress
    Simple, yet awesome. One of the best portfolio themes. Price: $39 (Package includes more than 55 themes) View Demo Download

    Graphix
    Graphix is one of the best WordPress portfolio themes. It is most suited to aspiring designers, developers, artists and photographers who'd like a framework theme which has a great-looking portfolio with a feature-rich blog. It has a theme options page, 5 color styles, SEO options, featured content blocks, a drop-down multi-level menu, social profile link custom widgets, custom posts, custom page templates etc. Price: $69 Single & $149 Developer Package View Demo Download

    Bizznizz
    It boasts many features such as a custom homepage, custom post types, custom widgets, portfolio templates, alternative styles and many more. View Demo Download

    Showtime
    Ultimate WordPress theme for you to create your web portfolio. It has 3 different styles for you to choose from. Price: $40 View Demo Download

    Montana WP Horizontal Portfolio Theme
    Montana Theme – WP Horizontal Portfolio Theme, best suited for creative studios to showcase design, photography, illustration, paintings and art. Price: $30 View Demo Download

    OverALL
    OverALL Premium WordPress Blog & Portfolio Theme is low priced and has tons of amazing features. Price: $17 View Demo Download

    Habitat
    Habitat – Blog and Portfolio Theme. Unique portfolio sorting/filtering with a custom jQuery script (each entry supports multiple images or a video), and multiple featured images for each post to generate individual slideshows per post, or the option to directly embed video content from YouTube, Vimeo, Hulu etc. Price: $35 View Demo Download

    Fresh Folio
    Fresh Folio from WooThemes can be used as both a portfolio and a blog theme. The theme is a remix of the Fresh News Theme and Proud Folio Theme which combines all the best elements of the respective blog and portfolio style themes. View Demo Download
    Fresh Folio Features: Can be used to create an impressive portfolio. 7 diverse theme styles to choose from (default, blue, red, grunge light, grunge floral, antique, blue creamer, nightlife). The template will automatically (visually) separate your blog & portfolio content, making this an amazing theme for aspiring designers, developers, artists, photographers etc. Unique page template types for the portfolio, blog, blog archives, tags & search results. Integrated Theme Options (for WordPress) to tweak the layout, colour scheme etc.
    for the theme. Optional automatic image resize, which is used to dynamically create the thumbnails and featured images. Includes widget-enabled sidebars.

    eGallery
    eGallery is a theme made to transform your WordPress blog into a fully functional online portfolio. The theme is perfectly designed to emphasize the artwork you choose to showcase. The design has been greatly enhanced using JavaScript, and is easy to implement. Price: $39 (Package includes more than 55 themes) View Demo Download

    ProudFolio
    ProudFolio is a premium portfolio WordPress theme from Woo Themes. The theme is for designers, developers, artists and photographers who would like a showcase theme that works as a portfolio and also serves the purpose of a blog. ProudFolio puts a strong emphasis on the portfolio pieces, allowing for decent-sized thumbnails, huge fullscreen views via Lightbox, and full details on the single page. The theme file also contains a choice of three different background images and color schemes. Price: $70 Single $150 Developer License View Demo Download
    Features: The template will automatically (visually) separate your blog & portfolio content; a unique homepage layout, which publishes only the most important information; unique page templates for the portfolio, blog, blog archives, tags & search results; integrated Theme Options (for WordPress) to tweak the layout, colour scheme etc. for the theme; a built-in video panel, which you can use to publish any web-based Flash videos; automatic image resize, which is used to dynamically create the thumbnails and featured images; custom page templates for archives, sitemap & image gallery; built-in Gravatar support for authors & comments; an integrated banner management script to display randomized banner ads of your choice site-wide; pretty drop-down navigation everywhere; and widget-enabled sidebars.

    Portfolio WordPress Theme
    A FREE WordPress theme designed for web portfolios and (for now) just for web portfolios. It comes with an administrative panel from where you can edit the head quote text, edit all theme colors, font families and font sizes, and fill in a curriculum vitae and display it on a special page. Theme demo and download can be found here.

    Viz | Biz
    Viz | Biz is a premium WordPress photo gallery and portfolio theme designed specifically for photographers, graphic designers and web designers who want to display their creative work online, market their services, as well as have a typical text blog, using the power and flexibility of WordPress. It is priced at $79.95.
    Theme Features: Premium quality portfolio template; custom logo uploader to replace the standard graphic with your own unique look from the WP Dashboard; integrated blog component (front images are custom fields and thumbnails, but you can also have a typical blog); four tabbed feature areas (About Me, Services, Recent Posts, and Tags); two home page feature photos (you choose which photos to feature using a WP category); manage your online portfolio through the WordPress CMS; crop two sizes of your work (one for the front page thumbnails and another full size version) and upload to WP; search engine optimized.

    Read the article

  • SQL SERVER – Weekly Series – Memory Lane – #050

    - by Pinal Dave
    Here is the list of selected articles of SQLAuthority.com across all these years. Instead of just listing all the articles, I have selected a few of my most favorite articles and listed them here with additional notes below them. Let me know which one of the following is your favorite article from memory lane.

    2007
    Executing Remote Stored Procedure – Calling Stored Procedure on Linked Server
    In this example we see two different methods of how to call stored procedures remotely.
    Connection Property of SQL Server Management Studio SSMS
    A very simple example of how to build connection properties for SQL Server with the help of SSMS.
    Sample Example of RANKING Functions – ROW_NUMBER, RANK, DENSE_RANK, NTILE
    SQL Server has a total of 4 ranking functions. Ranking functions return a ranking value for each row in a partition. All the ranking functions are non-deterministic.
    T-SQL Script to Add Clustered Primary Key
    A Jr. DBA asked me three times in a day how to create a clustered primary key. I gave him the following sample example. That was the last time he asked "How do I create a clustered primary key on a table?"

    2008
    2008 – TRIM() Function – User Defined Function
    SQL Server does not have a function which can trim leading and trailing spaces of a string at the same time. SQL does have LTRIM() and RTRIM(), which can trim leading and trailing spaces respectively. SQL Server 2008 also does not have a TRIM() function. Users can easily use LTRIM() and RTRIM() together to simulate TRIM() functionality (a short T-SQL sketch appears below).
    http://www.youtube.com/watch?v=1-hhApy6MHM

    2009
    Earlier, I wrote two different articles on the subject Remove Bookmark Lookup. This article is part 3 of the original article. Please read the first two articles here before continuing reading this article.
    Query Optimization – Remove Bookmark Lookup – Remove RID Lookup – Remove Key Lookup
    Query Optimization – Remove Bookmark Lookup – Remove RID Lookup – Remove Key Lookup – Part 2
    Query Optimization – Remove Bookmark Lookup – Remove RID Lookup – Remove Key Lookup – Part 3
    Interesting Observation – Query Hint – FORCE ORDER
    SQL Server never stops to amaze me. As regular readers of this blog already know, besides conducting corporate training, I work on large-scale query optimization and server tuning projects. In one of the recent projects, I noticed that a junior database developer used the query hint FORCE ORDER; when I asked for details, I found out that the basic concept was not properly understood by him.
    Queries Waiting for Memory Allocation to Execute
    In one of the recent projects, I was asked to create a report of queries that are waiting for memory allocation. The reason was that we were doubtful regarding whether the memory was sufficient for the application. The following query can be useful in similar cases. Queries that do not have to wait on a memory grant will not appear in the result set of the following query.

    2010
    Quickest Way to Identify Blocking Query and Resolution – Dirty Solution
    As the title suggests, this is quite a dirty solution; it's not as elegant as you expect. However, it works totally fine.
    Simple Explanation of Data Type Precedence
    While I was working on creating a question for SQL SERVER – SQL Quiz – The View, The Table and The Clustered Index Confusion, I had actually created yet another question along with it. However, I felt that the one posted on the SQL Quiz is much better than this one, because what makes that question more challenging is that it has multiple correct answers.
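    As an aside to the 2007 clustered primary key tip and the 2008 TRIM() tip above, here is a minimal T-SQL sketch of both ideas. This is not the original script from either post; the table and variable names are illustrative only.

        -- Simulating TRIM() in SQL Server 2008 by nesting LTRIM() and RTRIM().
        DECLARE @s VARCHAR(50) = '   SQL Authority   ';
        SELECT LTRIM(RTRIM(@s)) AS TrimmedValue;   -- returns 'SQL Authority'

        -- Adding a clustered primary key to an illustrative table.
        CREATE TABLE dbo.Orders
        (
            OrderID   INT      NOT NULL,
            OrderDate DATETIME NOT NULL
        );

        ALTER TABLE dbo.Orders
        ADD CONSTRAINT PK_Orders PRIMARY KEY CLUSTERED (OrderID);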
    Encrypted Stored Procedure and Activity Monitor
    I recently received a question: if a stored procedure is encrypted, can we see its definition in Activity Monitor? The answer is no. Let us do a quick test. Let us create the following stored procedure, then launch the Activity Monitor and check the text.
    Indexed View always Use Index on Table
    A single table can have a maximum of 249 non-clustered indexes and 1 clustered index. In SQL Server 2008, a single table can have a maximum of 999 non-clustered indexes and 1 clustered index. It is widely believed that a table can have only 1 clustered index, and this belief is true. I have some questions for all of you. Let us assume that I am creating a view from the table itself and then create a clustered index on it. In my view, I am selecting the complete table itself.

    2011
    Detecting Database Case Sensitive Property using fn_helpcollations()
    I received a question on how to determine the case sensitivity of the database. The quick answer to this is to identify the collation of the database and check the properties of the collation. I have previously written how one can identify database collation. Once you have figured out the collation of the database, you can put that in the WHERE condition of the following T-SQL and then check the case sensitivity from the description.
    Server Side Paging in SQL Server CE (Compact Edition)
    SQL Server Denali is coming up with new T-SQL for paging. I have written about the same earlier: SQL SERVER – Server Side Paging in SQL Server Denali – A Better Alternative, SQL SERVER – Server Side Paging in SQL Server Denali Performance Comparison, SQL SERVER – Server Side Paging in SQL Server Denali – Part2. What is very interesting is that SQL Server CE 4.0 has the same feature introduced. Here is a quick example of the same. To run the script in the example, you will have to install WebMatrix 4.0 and download the sample database. Once done, you can run the following script.
    Why I am Going to Attend PASS Summit Unite 2011
    The four-day event will be marked by a lot of learning, sharing, and networking, which will help me increase both my knowledge and contacts. Every year, PASS Summit provides me a golden opportunity to build my network as well as to identify and meet potential customers or employees.

    2012
    Manage Help Settings – CTRL + ALT + F1
    This is a very interesting read, as my daughter once accidentally came across a screen in SQL Server Management Studio. It took me 2-3 minutes to figure out how she had created the same screen.
    Recover the Accidentally Renamed Table
    "I accidentally renamed a table in my SSMS. I was scrolling very fast and I made mistakes. It was either because I double clicked or clicked on F2 (shortcut key for renaming). However, I have made the mistake and now I have no idea how to fix this." If you have renamed the table, I think you are pretty much out of luck. Here are a few things which you can do which can give you an idea about what your table name could be, if you are lucky.
    Identify Numbers of Non Clustered Index on Tables for Entire Database
    Here is the script which will give you the number of non-clustered indexes on any table in the entire database.
    Identify Most Resource Intensive Queries – SQL in Sixty Seconds #029 – Video
    Here is the complete script which I have used in the SQL in Sixty Seconds video. Thanks, Harsh, for the important tip in the comment.
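    As a purely illustrative aside (this is not the script used in the video), queries of this kind typically rank statements by cumulative CPU time using the sys.dm_exec_query_stats dynamic management view; the TOP count and column choices below are assumptions for the example.

        -- Hedged sketch: the ten most CPU-intensive cached query batches.
        SELECT TOP (10)
               qs.total_worker_time AS TotalCpuMicroseconds,
               qs.execution_count,
               st.text              AS BatchText   -- full batch text for the cached plan
        FROM   sys.dm_exec_query_stats AS qs
               CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
        ORDER BY qs.total_worker_time DESC;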
    http://www.youtube.com/watch?v=3kDHC_Tjrns
    Advanced Data Quality Services with Melissa Data – Azure Data Market
    For the purposes of the review, I used a database I had in an Excel spreadsheet with name and address information. Upon a cursory inspection, there are miscellaneous problems with these records; some addresses are missing ZIP codes, others are missing a city, and some records are slightly misspelled or have unparsed suites. With DQS, I can easily add a knowledge base to help standardize my values, such as for state abbreviations. But how do I know that my address is correct?
    Reference: Pinal Dave (http://blog.sqlauthority.com)

    Read the article

  • Get Information to Your Blog with Microsoft Broadcaster

    - by Matthew Guay
    Do you often have people ask you for advice about technology, or do you write a tech-focused blog or newsletter? Here's how you can get information about Microsoft technology to share with your readers, using Microsoft Broadcaster. Microsoft Broadcaster is a new service from Microsoft to help publishers, bloggers, developers, and other IT professionals find relevant information and resources from Microsoft. You can use it to help discover things to write about, or simply discover new information about the technology you use. Broadcaster will also notify you when new resources are available about the topics that interest you. Let's look at how you could use this to expand your blog and help your users.

    Getting Started
    Head over to the Microsoft Broadcaster site (link below), and click Join to get started. Sign in with your Windows Live ID, or create a new account if you don't already have one. Near the bottom of the page, add information about your blog, newsletter, or group that you want to share Broadcaster information with. Click Add when you're done entering information. You can enter as many sites or groups as you wish. When you've entered all of your information, click the Apply button at the bottom of the page. Broadcaster will then let you know your information has been submitted, but you'll need to wait several days to see if you are approved or not. Our application was approved about 2 days after applying, though this may vary. When you're approved, you'll receive an email letting you know. Return to the Broadcaster website (link below), but this time, click Sign in. Accept the terms of use by clicking I Accept at the bottom of the page. Confirm that your information entered previously is correct, and then click Configure my keywords at the bottom of the page. Now you can pick the topics you want to stay informed about. Type keywords in the textbox, and it will bring up relevant topics with IntelliSense. Here we've added several topics to keep up with. Next, select the Microsoft products you want to keep track of. If the product you want to keep track of is not listed, make sure to list it in the keywords section as above. Then select the types of content you wish to see, including articles, eBooks, webcasts, and more. Finally, when everything's entered, click Configure My Alerts at the bottom of the page. Broadcaster can automatically email you when new content is found. If you would like this, click Subscribe. Otherwise, simply click Access Dashboard to go ahead and find your personalized content. If you choose to receive emails of new content, you'll have to configure it with Windows Live Alerts. Click Continue to set this up. Select whether you want to receive Messenger alerts, emails, and/or text messages when new content is available. Click Save when you're finished. Finally, select how often you want to be notified, and then click Access Dashboard to view the content currently available.

    Finding Content For Your Blog, Site, or Group
    Now you can find content tailored to your interests from the dashboard. To access the dashboard in the future, simply go to the Broadcaster site and click Sign In. Here you can see available content, and can search for different topics or customize the topics shown. You'll see snippets of information from various Microsoft videos, articles, whitepapers, eBooks, and more, depending on your settings.
    Click the link at the top of the snippet to view the content, or right-click and copy the link to use in emails or on social networks like Twitter. If you'd like to add this snippet to your website or blog, click the Download content link at the bottom. Now you can preview what the snippet will look like on your site, and change the width or height to fit your site. You can view and edit the source code of the snippet from the box at the bottom, and then copy it to use on your site. Copy the code, and paste it in the HTML of a blog post, email, webpage, or anywhere else you wish to share it. Here we're pasting it into the HTML editor in Windows Live Writer so we can post it to a blog. After adding a title and opening paragraph, we have a nice blog post that only took a few minutes to put together but should still be useful for our readers. You can check out the blog post we created at the link below. Readers can click on the links, which will direct them to the content on Microsoft's websites.

    Conclusion
    If you frequently need to find educational and informative content about Microsoft products and services, Broadcaster can be a great service to keep you up to date. The service worked quite well in our tests, and generally found content relevant to our keywords. We had difficulty embedding links to eBooks that were listed by Broadcaster, but everything else worked for us. Now you can always have high quality content to help your customers, coworkers, friends, and more, and you just might find something that will help you, too!

    Links
    Microsoft Broadcaster (registration required)
    Example Post at Techinch.com with Content from Microsoft Broadcaster

    Read the article

  • Big Data Matters with ODI12c

    - by Madhu Nair
    Contributed by Mike Eisterer. On October 17th, 2013, Oracle announced the release of Oracle Data Integrator 12c (ODI12c). This release signifies improvements to Oracle's Data Integration portfolio of solutions, particularly Big Data integration.

    Why Big Data = Big Business
    Organizations are gaining greater insights and actionability through the increased storage, processing and analytical benefits offered by Big Data solutions. New technologies and frameworks like HDFS, NoSQL, Hive and MapReduce support these benefits now. As further data is collected, analytical requirements increase, the complexity of managing transformations and aggregations of data compounds, and organizations are in need of scalable Data Integration solutions. ODI12c provides enterprise solutions for the movement, translation and transformation of information and data heterogeneously and in Big Data environments through:
    The ability for existing ODI and SQL developers to leverage new Big Data technologies.
    A metadata focused approach for cataloging, defining and reusing Big Data technologies, mappings and process executions.
    Integration between many heterogeneous environments and technologies such as HDFS and Hive.
    Generation of Hive Query Language.

    Working with Big Data using Knowledge Modules
    ODI12c provides developers with the ability to define sources and targets and visually develop mappings to effect the movement and transformation of data. As the mappings are created, ODI12c leverages a rich library of prebuilt integrations, known as Knowledge Modules (KMs). These KMs are contextual to the technologies and platforms to be integrated. Steps and actions needed to manage the data integration are pre-built and configured within the KMs. The Oracle Data Integrator Application Adapter for Hadoop provides a series of KMs, specifically designed to integrate with Big Data technologies. The Big Data KMs include:
    Check Knowledge Module
    Reverse Engineer Knowledge Module
    Hive Transform Knowledge Module
    Hive Control Append Knowledge Module
    File to Hive (LOAD DATA) Knowledge Module
    File-Hive to Oracle (OLH-OSCH) Knowledge Module

    Nothing beats an example:
    To demonstrate the use of the KMs which are part of the ODI Application Adapter for Hadoop, a mapping may be defined to move data between files and Hive targets. The mapping is defined by dragging the source and target into the mapping, performing the attribute (column) mapping (see Figure 1) and then selecting the KM which will govern the process. In this mapping example, movie data is being moved from an HDFS source into a Hive table. Some of the attributes, such as "CUSTID to custid", have been mapped over.
    Figure 1  Defining the Mapping
    Before the proper KM can be assigned to define the technology for the mapping, it needs to be added to the ODI project. The Big Data KMs have been made available to the project through the KM import process. Generally, this is done prior to defining the mapping.
    Figure 2  Importing the Big Data Knowledge Modules
    Following the import, the KMs are available in the Designer Navigator.
    Figure 3  The Project View in Designer, Showing Installed IKMs
    Once the KM is imported, it may be assigned to the mapping target. This is done by selecting the Physical View of the mapping and examining the Properties of the Target. In this case MOVIAPP_LOG_STAGE is the target of our mapping.
    Figure 4  Physical View of the Mapping and Assigning the Big Data Knowledge Module to the Target
    Alternative KMs may have been selected as well, providing flexibility and abstracting the logical mapping from the physical implementation. Our mapping may be applied to other technologies as well. The mapping is now complete and is ready to run. We will see more in a future blog about running a mapping to load Hive. To complete the quick ODI for Big Data overview, let us take a closer look at what the IKM File to Hive is doing for us. ODI provides differentiated capabilities by defining, within the KM, the process and steps which normally would have to be manually developed, tested and implemented. As shown in Figure 5, the KM is preparing the Hive session, managing the Hive tables, performing the initial load from HDFS and then performing the insert into Hive. HDFS and Hive options are selected graphically, as shown in the properties in Figure 4.
    Figure 5  Process and Steps Managed by the KM

    What's Next
    Big Data, being the shape-shifting business challenge it is, is fast evolving into the deciding factor between market leaders and others. Now that an introduction to ODI and Big Data has been provided, look for additional blogs coming soon using the Knowledge Modules which make up the Oracle Data Integrator Application Adapter for Hadoop:
    Importing Big Data Metadata into ODI, Testing Data Stores and Loading Hive Targets
    Generating Transformations using Hive Query Language
    Loading Oracle from Hadoop Sources
    For more information now, please visit the Oracle Data Integrator Application Adapter for Hadoop web site, http://www.oracle.com/us/products/middleware/data-integration/hadoop/overview/index.html
    Do not forget to tune in to the ODI12c Executive Launch webcast on the 12th to hear more about ODI12c and GG12c.
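    To make the IKM File to Hive step above a little more concrete, here is a rough HiveQL sketch of the kind of statements such a load ultimately boils down to. This is an illustration only, not code generated by ODI; the column list and HDFS path are assumptions, with only the MOVIAPP_LOG_STAGE target name and the custid attribute taken from the mapping example.

        -- Hedged HiveQL illustration of a file-to-Hive load (not ODI-generated code).
        CREATE TABLE IF NOT EXISTS moviapp_log_stage (
          custid    INT,
          movieid   INT,
          activity  STRING,
          rating    INT
        )
        ROW FORMAT DELIMITED FIELDS TERMINATED BY '\t'
        STORED AS TEXTFILE;

        -- Move the source file(s) from the HDFS path into the table's warehouse location.
        LOAD DATA INPATH '/user/odi/movie_data/' INTO TABLE moviapp_log_stage;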

    Read the article

  • NVIDIA x server - "sudo nvidia-xconfig" does not generate a working 'xorg.conf'

    - by Mike
    I am over 18 hours deep on this challenge. I got to this point and am stuck. Very stuck. Maybe you can figure it out? Ubuntu version 12.04 LTS with all the updates installed.

    Problem: The default settings in "/etc/X11/xorg.conf" that are generated by the "nvidia-xconfig" tool do not allow the NVIDIA X server to connect to the driver in my "System Settings > Additional Drivers" window. (That's how I understand it. Lots of information below.)

    Symptoms of Problem
    1) The "System Settings > Additional Drivers" window has drivers, but the NVIDIA X server cannot connect to or utilize any of the 4 drivers. The drivers are activated, but not in use.
    2) When I go to "System Tools > Administration > NVIDIA X Server Settings" I get an error that basically tells me to create a default file to initialize the NVIDIA X server (screen shot below).
    3) This is the message the terminal gives after running a "sudo nvidia-xconfig" command for the first time. It seems that the file generated by the tool I just ran is a bad/unusable file:
    4) If I run the "sudo nvidia-xconfig" command again, I won't get an error the second time. However, when I reboot, the default file that is generated (/etc/X11/xorg.conf) simply puts the screen resolution at 800 x 600 (or something big like that). When I try to go to NVIDIA X Server Settings I am greeted with the same screen as the screen shot in symptom 2 (no option to change the resolution). If I try to go to "System Settings > Display" there are no other resolutions to choose from. At this point I must delete the newly minted "xorg.conf" and reinstate the original in its place.

    Here are the contents of the "xorg.conf" that is generated first (the one missing required information):

        # nvidia-xconfig: X configuration file generated by nvidia-xconfig
        # nvidia-xconfig: version 304.88 (buildmeister@swio-display-x86-rhel47-06) Wed Mar 27 15:32:58 PDT 2013

        Section "ServerLayout"
            Identifier     "Layout0"
            Screen      0  "Screen0"
            InputDevice    "Keyboard0" "CoreKeyboard"
            InputDevice    "Mouse0" "CorePointer"
        EndSection

        Section "Files"
        EndSection

        Section "InputDevice"
            # generated from default
            Identifier     "Mouse0"
            Driver         "mouse"
            Option         "Protocol" "auto"
            Option         "Device" "/dev/psaux"
            Option         "Emulate3Buttons" "no"
            Option         "ZAxisMapping" "4 5"
        EndSection

        Section "InputDevice"
            # generated from default
            Identifier     "Keyboard0"
            Driver         "kbd"
        EndSection

        Section "Monitor"
            Identifier     "Monitor0"
            VendorName     "Unknown"
            ModelName      "Unknown"
            HorizSync      28.0 - 33.0
            VertRefresh    43.0 - 72.0
            Option         "DPMS"
        EndSection

        Section "Device"
            Identifier     "Device0"
            Driver         "nvidia"
            VendorName     "NVIDIA Corporation"
        EndSection

        Section "Screen"
            Identifier     "Screen0"
            Device         "Device0"
            Monitor        "Monitor0"
            DefaultDepth   24
            SubSection     "Display"
                Depth      24
            EndSubSection
        EndSection

    Hardware: I ran "lspci | grep VGA". The results are:
    00:02.0 VGA compatible controller: Intel Corporation 2nd Generation Core Processor Family Integrated Graphics Controller (rev 09)
    01:00.0 VGA compatible controller: NVIDIA Corporation GF108 [Quadro 1000M] (rev a1)

    More hardware info:
    RAM: 16GB
    CPU: Intel Core i7-2720QM @ 2.2GHz x 8
    Other: 64 bit. This is a triple boot computer and not a VM.

    Attempts Without Success on My End:
    1) Tried to append the "xorg.conf" with what I perceive is the missing information and obviously it didn't fly.
    2) All the other stuff I tried got me to this point.
    3) See if this link is helpful to you (I barely get it, but I get enough to know that a smarter person might find this useful): http://manpages.ubuntu.com/manpages/lucid/man1/nvidia-xconfig.1.html
    4) I am completely new to Linux (40 hours over the past week), but not to programming. However, I am very serious about changing over to Linux. When you respond (I hope someone responds...) please respond in a way that a person new to Linux can understand.
    5) By the way, the reason I am in this mess is because I MUST have a second monitor running from my laptop, and "System Settings > Display" doesn't recognize my second display. I know it is possible to make the second display work in my system, because when I boot from the install CD, I perform work on the native laptop monitor, but the second monitor shows a purple screen with Ubuntu in the middle, so I know the VGA port is sending a signal out. If this is too much for you to tackle, please suggest an alternative method to get a second display. I don't want to go to Windows but I cannot have a single display. I am really fudged here. I hope some smart person can help. Thanks in advance. Mike.

    EDIT #1: More Details About Graphics Card
    I was asked "which brand of nvidia-card do you have exactly?" Here is what I did to provide more info (maybe relevant, maybe not, but here is everything):
    1) Took my Lenovo W520 right apart to see if there is an identifier on the actual card. However, I realized that if I get deep enough to take a look, the laptop "won't like it", so I put it back together. Figuring out the card this way is not an option for me right now.
    2) (My computer is triple boot.) I logged into Win7 and ran the 'dxdiag' command. Here is the screen shot:
    3) I tried to look on the Lenovo website for more details... but no luck. I took a look at my receipts and here is the info from the receipt: System Unit: W520 NVIDIA Quadro 1000M 2GB
    4) In Win7 I went to the NVIDIA website and used the option to have my card 'scanned' by a Java applet to determine the latest update for my card. I tried the same with Ubuntu but I can't get the applet to run. Here is the recommended driver from the NVIDIA applet for my card for Win7 (I hope this shines some light on the specifics of the card): Quadro/NVS/Tesla/GRID Desktop Driver Release R319, Version: 320.00 WHQL, Release Date: 3.5.2013
    5) Also, I went on the NVIDIA driver search and looked through every possible combination of product type + product series + product to find all the combinations that yield a 1000M card. My card is: Product Type: Quadro, Product Series: Quadro Series (Notebooks), Product: 1000M

    EDIT #2: Additional Symptoms
    Another question that generated more symptoms I previously didn't mention was: "After generating xorg.conf by nvidia-xconfig, go to additional drivers, do you see nvidia-304?"
    1) I took a screen shot of the "Additional Drivers" window right after generating xorg.conf by nvidia-xconfig. Here it is:
    2) Then I did a reboot. Now Ubuntu is at 600 x 800 resolution. When I logged in after the computer came up I got an error (which I always get after generating xorg.conf by nvidia-xconfig and rebooting).
    3) To finally answer the question - No. There is no "nvidia-304" driver. Screen shot of additional drivers after generating xorg.conf by nvidia-xconfig and rebooting:
    At this point I revert to the original xorg.conf and delete the xorg.conf generated by NVIDIA.

    Read the article

  • Fraud Detection with the SQL Server Suite Part 2

    - by Dejan Sarka
    This is the second part of the fraud detection whitepaper. You can find the first part in my previous blog post about this topic. My Approach to Data Mining Projects It is impossible to evaluate the time and money needed for a complete fraud detection infrastructure in advance. Personally, I do not know the customer’s data in advance. I don’t know whether there is already an existing infrastructure, like a data warehouse, in place, or whether we would need to build one from scratch. Therefore, I always suggest to start with a proof-of-concept (POC) project. A POC takes something between 5 and 10 working days, and involves personnel from the customer’s site – either employees or outsourced consultants. The team should include a subject matter expert (SME) and at least one information technology (IT) expert. The SME must be familiar with both the domain in question as well as the meaning of data at hand, while the IT expert should be familiar with the structure of data, how to access it, and have some programming (preferably Transact-SQL) knowledge. With more than one IT expert the most time consuming work, namely data preparation and overview, can be completed sooner. I assume that the relevant data is already extracted and available at the very beginning of the POC project. If a customer wants to have their people involved in the project directly and requests the transfer of knowledge, the project begins with training. I strongly advise this approach as it offers the establishment of a common background for all people involved, the understanding of how the algorithms work and the understanding of how the results should be interpreted, a way of becoming familiar with the SQL Server suite, and more. Once the data has been extracted, the customer’s SME (i.e. the analyst), and the IT expert assigned to the project will learn how to prepare the data in an efficient manner. Together with me, knowledge and expertise allow us to focus immediately on the most interesting attributes and identify any additional, calculated, ones soon after. By employing our programming knowledge, we can, for example, prepare tens of derived variables, detect outliers, identify the relationships between pairs of input variables, and more, in only two or three days, depending on the quantity and the quality of input data. I favor the customer’s decision of assigning additional personnel to the project. For example, I actually prefer to work with two teams simultaneously. I demonstrate and explain the subject matter by applying techniques directly on the data managed by each team, and then both teams continue to work on the data overview and data preparation under our supervision. I explain to the teams what kind of results we expect, the reasons why they are needed, and how to achieve them. Afterwards we review and explain the results, and continue with new instructions, until we resolve all known problems. Simultaneously with the data preparation the data overview is performed. The logic behind this task is the same – again I show to the teams involved the expected results, how to achieve them and what they mean. This is also done in multiple cycles as is the case with data preparation, because, quite frankly, both tasks are completely interleaved. A specific objective of the data overview is of principal importance – it is represented by a simple star schema and a simple OLAP cube that will first of all simplify data discovery and interpretation of the results, and will also prove useful in the following tasks. 
    The presence of the customer’s SME is the key to resolving possible issues with the actual meaning of the data. We can always replace the IT part of the team with another database developer; however, we cannot conduct this kind of a project without the customer’s SME. After the data preparation and when the data overview is available, we begin the scientific part of the project. I assist the team in developing a variety of models, and in interpreting the results. The results are presented graphically, in an intuitive way. While it is possible to interpret the results on the fly, a much more appropriate alternative is possible if the initial training was also performed, because it allows the customer’s personnel to interpret the results by themselves, with only some guidance from me. The models are evaluated immediately by using several different techniques. One of the techniques includes evaluation over time, where we use an OLAP cube. After evaluating the models, we select the most appropriate model to be deployed for a production test; this allows the team to understand the deployment process. There are many possibilities of deploying data mining models into production; at the POC stage, we select the one that can be completed quickly. Typically, this means that we add the mining model as an additional dimension to an existing DW or OLAP cube, or to the OLAP cube developed during the data overview phase. Finally, we spend some time presenting the results of the POC project to the stakeholders and managers.

    Even from a POC, the customer will receive lots of benefits, all at the sole risk of spending money and time for a single 5 to 10 day project:
    The customer learns the basic patterns of frauds and fraud detection
    The customer learns how to do the entire cycle with their own people, only relying on me for the most complex problems
    The customer’s analysts learn how to perform much more in-depth analyses than they ever thought possible
    The customer’s IT experts learn how to perform data extraction and preparation much more efficiently than they did before
    All of the attendees of this training learn how to use their own creativity to implement further improvements of the process and procedures, even after the solution has been deployed to production
    The POC output for a smaller company or for a subsidiary of a larger company can actually be considered a finished, production-ready solution
    It is possible to utilize the results of the POC project at subsidiary level, as a finished POC project for the entire enterprise
    Typically, the project results in several important "side effects":
    Improved data quality
    Improved employee job satisfaction, as they are able to proactively contribute to the central knowledge about fraud patterns in the organization
    Because eventually more minds get to be involved in the enterprise, the company should expect more and better fraud detection patterns

    After the POC project is completed as described above, the actual project would not need months of engagement from my side. This is possible due to our preference to transfer the knowledge onto the customer’s employees: typically, the customer will use the results of the POC project for some time, and only engage me again to complete the project, or to ask for additional expertise if the complexity of the problem increases significantly.
    I usually expect to perform the following tasks:
    Establish the final infrastructure to measure the efficiency of the deployed models
    Deploy the models in additional scenarios
      Through reports
      By including Data Mining Extensions (DMX) queries in OLTP applications to support real-time early warnings (a small DMX sketch follows at the end of this section)
    Include data mining models as dimensions in OLAP cubes, if this was not done already during the POC project
    Create smart ETL applications that divert suspicious data for immediate or later inspection
    I would also offer to investigate how the outcome could be transferred automatically to the central system; for instance, if the POC project was performed in a subsidiary whereas a central system is available as well. Of course, for the actual project, I would repeat the data and model preparation as needed.

    It is virtually impossible to tell in advance how much time the deployment would take, before we decide together with the customer what exactly the deployment process should cover. Without considering the deployment part, and with the POC project conducted as suggested above (including the transfer of knowledge), the actual project should still only take an additional 5 to 10 days. The approximate timeline for the POC project is as follows:
    1-2 days of training
    2-3 days for data preparation and data overview
    2 days for creating and evaluating the models
    1 day for initial preparation of the continuous learning infrastructure
    1 day for presentation of the results and discussion of further actions

    Quite frequently I receive the following question: are we going to find the best possible model during the POC project, or during the actual project? My answer is always quite simple: I do not know. Maybe, if we would spend just one hour more for data preparation, or create just one more model, we could get better patterns and predictions. However, we simply must stop somewhere, and the best possible way to do this, according to my experience, is to restrict the time spent on the project in advance, after an agreement with the customer. You must also never forget that, because we build the complete learning infrastructure and transfer the knowledge, the customer will be capable of doing further investigations independently and improve the models and predictions over time without the need for a constant engagement with me.
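    As a purely illustrative aside to the DMX deployment option listed above, a real-time early-warning check could look roughly like the following prediction query. This is a hedged sketch under assumed names: the mining model [Fraud Detection DT], its columns, and the relational source are invented for the example and are not taken from the whitepaper.

        -- Hedged DMX sketch (all object names are illustrative): score incoming
        -- transactions against a mining model and return a fraud probability.
        SELECT
          t.[TransactionID],
          PredictProbability([Is Fraud], 1) AS FraudProbability
        FROM [Fraud Detection DT]
        PREDICTION JOIN
          OPENQUERY([DW Source],
                    'SELECT TransactionID, Amount, Channel, CustomerAge
                     FROM dbo.IncomingTransactions') AS t
        ON  [Fraud Detection DT].[Amount]       = t.[Amount]
        AND [Fraud Detection DT].[Channel]      = t.[Channel]
        AND [Fraud Detection DT].[Customer Age] = t.[CustomerAge];

    In practice a query of this kind would typically be issued from the OLTP application through an ADOMD.NET connection or a linked server to Analysis Services, which is exactly the kind of deployment decision the whitepaper defers to the actual project.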

    Read the article

  • Is Financial Inclusion an Obligation or an Opportunity for Banks?

    - by tushar.chitra
    Why should banks care about financial inclusion? First, the statistics, I think this will set the tone for this blog post. There are close to 2.5 billion people who are excluded from the banking stream and out of this, 2.2 billion people are from the continents of Africa, Latin America and Asia (McKinsey on Society: Global Financial Inclusion). However, this is not just a third-world phenomenon. According to Federal Deposit Insurance Corp (FDIC), in the US, post 2008 financial crisis, one family out of five has either opted out of the banking system or has been moved out (American Banker). Moving this huge unbanked population into mainstream banking is both an opportunity and a challenge for banks. An obvious opportunity is the significant untapped customer base that banks can target, so is the positive brand equity a bank can build by fulfilling its social responsibilities. Also, as banks target the cost-conscious unbanked customer, they will be forced to look at ways to offer cost-effective products and services, necessitating technology upgrades and innovations. However, cost is not the only hurdle in increasing the adoption of banking services. The potential users need to be convinced of the benefits of banking and banks will also face stiff competition from unorganized players. Finally, the banks will have to believe in the viability of this business opportunity, and not treat financial inclusion as an obligation. In what ways can banks target the unbanked For financial inclusion to be a success, banks should adopt innovative business models to develop products that address the stated and unstated needs of the unbanked population and also design delivery channels that are cost effective and viable in the long run. Through business correspondents and facilitators In rural and remote areas, one of the major hurdles in increasing banking penetration is connectivity and accessibility to banking services, which makes last mile inclusion a daunting challenge. To address this, banks can avail the services of business correspondents or facilitators. This model allows banks to establish greater connectivity through a trusted and reliable intermediary. In India, for instance, banks can leverage the local Kirana stores (the mom & pop stores) to service rural and remote areas. With a supportive nudge from the central bank, the commercial banks can enlist these shop owners as business correspondents to increase their reach. Since these neighborhood stores are acquainted with the local population, they can help banks manage the KYC norms, besides serving as a conduit for remittance. Banks also have an opportunity over a period of time to cross-sell other financial products such as micro insurance, mutual funds and pension products through these correspondents. To exercise greater operational control over the business correspondents, banks can also adopt a combination of branch and business correspondent models to deliver financial inclusion. Through mobile devices According to a 2012 world bank report on financial inclusion, out of a world population of 7 billion, over 5 billion or 70% have mobile phones and only 2 billion or 30% have a bank account. What this means for banks is that there is scope for them to leverage this phenomenal growth in mobile usage to serve the unbanked population. Banks can use mobile technology to service the basic banking requirements of their customers with no frills accounts, effectively bringing down the cost per transaction. 
As I had discussed in my earlier post on mobile payments, though non-traditional players have taken the lead in P2P mobile payments, banks still hold an edge in terms of infrastructure and reliability. Through crowd-funding According to the Crowdfunding Industry Report by Massolution, the global crowdfunding industry raised $2.7 billion in 2012, and is projected to grow to $5.1 billion in 2013. With credit policies becoming tighter and banks becoming more circumspect in terms of loan disbursals, crowdfunding has emerged as an alternative channel for lending. Typically, these initiatives target the unbanked population by offering small loans that are unviable for larger banks. Though a significant proportion of crowdfunding initiatives globally are run by non-banking institutions, banks are also venturing into this space. The next step towards inclusive finance Banks by themselves cannot make financial inclusion a success. There is a need for a whole ecosystem that is supportive of this mission. The policy makers, that include the regulators and government bodies, must be in sync, the IT solution providers must put on their thinking caps to come out with innovative products and solutions, communication channels such as internet and mobile need to expand their reach, and the media and the public need to play an active part. The other challenge for financial inclusion is from the banks themselves. While it is true that financial inclusion will unleash a hitherto hugely untapped market, the normal banking model may be found wanting because of issues such as flexibility, convenience and reliability. The business will be viable only when there is a focus on increasing the usage of existing infrastructure and that is possible when the banks can offer the entire range of products and services to the large number of users of essential banking services. Apart from these challenges, banks will also have to quickly master and replicate the business model to extend their reach to the remotest regions in their respective geographies. They will need to ensure that the transactions deliver a viable business benefit to the bank. For tapping cross-sell opportunities, banks will have to quickly roll-out customized and segment-specific products. The bank staff should be brought in sync with the business plan by convincing them of the viability of the business model and the need for a business correspondent delivery model. Banks, in collaboration with the government and NGOs, will have to run an extensive financial literacy program to educate the unbanked about the benefits of banking. Finally, with the growing importance of retail banking and with many unconventional players eyeing the opportunity in payments and other lucrative areas of banking, banks need to understand the importance of micro and small branches. These micro and small branches can help banks increase their presence without a huge cost burden, provide bankers an opportunity to cross sell micro products and offer a window of opportunity for the large non-banked population to transact without any interference from intermediaries. These branches can also help diminish the role of the unorganized financial sector, such as local moneylenders and unregistered credit societies. This will also help banks build a brand awareness and loyalty among the users, which by itself has a cascading effect on the business operations, especially among the rural and un-banked centers. 
In conclusion, with the increasingly competitive banking sector facing frequent slowdowns and downturns, the unbanked population presents a huge opportunity for banks to enhance their customer base and fulfill their social responsibility.

    Read the article

  • Sending HTML to Gmail always lands in Spam

    - by cartaysm
    I am having an issue with sending HTML emails to Gmail. I can send them to Yahoo, Hotmail, RR, AOL, etc. with no problem at all, but when I send them to Gmail I get kicked to spam. I have checked my IP with a lot of different list to make sure it is not listed anywhere, which it is not. spamhaus = is not listed in the DBL abuse.net = is not listed in the SBL abuse.net = is not listed in the PBL abuse.net = is not listed in the XBL spamcop = not listed in bl.spamcop.net host 24.172.204.xxx xxx.204.172.24.in-addr.arpa domain name pointer xxxevents.com. host xxxevents.com xxxevents.com has address 24.172.204.xxx xxxevents.com mail is handled by 10 mail.xxxevents.com. I am just trying to send a very VERY basic HTML message (listed below). I use an Ubuntu server, swiftmailer, multipart/alternative (HTML & plain), SPF = pass, and I am going to setup DKIM today to see if that fixes it (but I doubt it will)... For now I will only post the message I sent that gets kicked to spam and can provide any details needed. <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd"> <html xmlns="http://www.w3.org/1999/xhtml"> <head><title>Triathlon</title></head> <body> <table cellpadding="0" cellspacing="0"> <tr> <td> <p>Thank you for attending our 4th annual Triathlon/Duathlon/5k at Hueston Woods State Park on August 12th. This event is held annually to raise research funding for Crohn's Disease, Ulcerative Colitis, and Muscular Dystrophy diseases.</p> </td> </tr> <tr> <td> <p>As you know the results and pictures have been posted on our home page at since Sunday 8/13/2012. Now we also have updated our Facebook page with those photos and you can start tagging yourself or downloading the pictures now! <br /> our page and tag yourself at </p> <p> test test </p> <p>Race day events is professionally managed by Speedy-Feet</p> </td> </tr> </table> </body> </html> Just plain text works great, I thought maybe wording was messing me up but not the case... I am almost done install opendkim so I will be able to rule that out very soon. Edit: Okay installed opendkim and I am getting passing results so I sent the html I posted above it went through just fine. So now when I start to add a few more lines I am getting kicked back to spam again. Here is updated html code: ` <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd"> <html xmlns="http://www.w3.org/1999/xhtml"> <head><title>Triathlon</title></head> <body> <table cellpadding="0" cellspacing="0"> <tr> <td> <center><a href='http://xxxevents.com' target="_blank"> <font face="Verdana, Arial, Helvetica, sans-serif" color="#666666" size="2"> <img src="http://xxxevents.com/marketemailimages/xxxlogo.png" alt="xxx It Events | Raising funds for Crohns, Colitis, and Muscular Dystrophy" border="0" /> </font></a></center> </td> <tr> <td> <p>Thank you for attending our 4th annual Triathlon/Duathlon/5k at Hueston Woods State Park on August 12th. This event is held annually to raise research funding for Crohn's Disease, Ulcerative Colitis, and Muscular Dystrophy diseases.</p> </td> </tr> <tr> <td> <p>As you know the results and pictures have been posted on our home page at since Sunday 8/13/2012. Now we also have updated our Facebook page with those photos and you can start tagging yourself or downloading the pictures now! 
<br /> our page and tag yourself at </p> <p> test test </p> <p>Race day events is professionally managed by Speedy-Feet</p> </td> </tr> </table> <table width="100%" border="0" cellspacing="0" cellpadding="0"> <tr> <td valign="top"> <div align="center" style="font-family:Verdana, Arial, Helvetica, sans-serif; font-size:10px;"><br />PO Box xxx Maineville, OH 45039<br /> <a href="mailto:[email protected]">[email protected]</a> | <a href='http://xxxevents.com' target="_blank">xxxevents.com</a><br /> <br /> </div> </td> </tr> </table> </body> </html>`
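For anyone following the same checklist: Gmail weighs domain authentication heavily, so the DKIM step the poster mentions is usually worth finishing. A minimal OpenDKIM setup has two halves, a signing configuration on the sending server and a public key published in DNS. The selector name, key path, and port below are illustrative placeholders, not values taken from the question:

# /etc/opendkim.conf (sketch; adjust names and paths to your setup)
Domain     xxxevents.com
Selector   mail
KeyFile    /etc/opendkim/keys/xxxevents.com/mail.private
Socket     inet:12301@localhost
Syslog     yes

; matching DNS TXT record publishing the public key (value truncated)
mail._domainkey.xxxevents.com.  IN  TXT  "v=DKIM1; k=rsa; p=MIGfMA0GCSq..."

Once the signature verifies, the remaining things Gmail tends to look at are whether the From: domain aligns with the SPF/DKIM domains and the sending IP's reputation.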

    Read the article

  • Columnstore Case Study #2: Columnstore faster than SSAS Cube at DevCon Security

    - by aspiringgeek
    Preamble This is the second in a series of posts documenting big wins encountered using columnstore indexes in SQL Server 2012 & 2014.  Many of these can be found in my big deck along with details such as internals, best practices, caveats, etc.  The purpose of sharing the case studies in this context is to provide an easy-to-consume quick-reference alternative. See also Columnstore Case Study #1: MSIT SONAR Aggregations
Why Columnstore? As stated previously, if we're looking for a subset of columns from one or a few rows, given the right indexes, SQL Server can do a superlative job of providing an answer. If we're asking a question which by design needs to hit lots of rows—DW, reporting, aggregations, grouping, scans, etc., SQL Server has never had a good mechanism—until columnstore. Columnstore indexes were introduced in SQL Server 2012. However, they're still largely unknown. Some adoption blockers existed; yet columnstore was nonetheless a game changer for many apps.  In SQL Server 2014, potential blockers have been largely removed & they're going to profoundly change the way we interact with our data.  The purpose of this series is to share the performance benefits of columnstore & to document why columnstore is a compelling reason to upgrade to SQL Server 2014.
The Customer DevCon Security provides home & business security services & has been in business for 135 years. I met DevCon personnel while speaking to the Utah County SQL User Group on 20 February 2012. (Thanks to TJ Belt (b|@tjaybelt) & Ben Miller (b|@DBADuck) for the invitation which serendipitously coincided with the height of ski season.)
The App: DevCon Security Reporting: Optimized & Ad Hoc Queries DevCon users interrogate a SQL Server 2012 Analysis Services cube via SSRS. In addition, the SQL Server 2012 relational back end is the target of ad hoc queries; this DW back end is refreshed nightly during a brief maintenance window via conventional table partition switching.
SSRS, SSAS, & MDX Conventional relational structures were unable to provide adequate performance for user interaction for the SSRS reports. An SSAS solution was implemented, requiring personnel to ramp up technically, including learning enough MDX to satisfy requirements.
Ad Hoc Queries Even though the fact table is relatively small—only 22 million rows & 33GB—the table was a typical DW table in terms of its width: 137 columns, any of which could be the target of ad hoc interrogation. As is common in DW reporting scenarios such as this, it is often nearly impossible to optimize for such queries using conventional indexing. DevCon DBAs & developers attended PASS 2012 & were introduced to the marvels of columnstore in a session presented by Klaus Aschenbrenner (b|@Aschenbrenner).
The Details Classic vs. columnstore before-&-after metrics are impressive.
Scenario        Conventional Structures              Columnstore      Δ
SSRS via SSAS   10 - 12 seconds                      1 second         >10x
Ad Hoc          5 - 7 minutes (300 - 420 seconds)    1 - 2 seconds    >100x
Here are two charts characterizing this data graphically.  The first is a linear representation of Report Duration (in seconds) for Conventional Structures vs. Columnstore Indexes.  As is so often the case when we chart such significant deltas, the linear scale doesn't expose some of the dramatically improved values corresponding to the columnstore metrics.  Just to make it fair, here's the same data represented logarithmically; yet even here the values corresponding to 1 - 2 seconds aren't visible.  
The Wins
Performance: Even prior to the columnstore implementation, at 10 - 12 seconds, canned report performance against the SSAS cube was tolerable. Yet the 1 second performance afterward is clearly better. As significant as that is, imagine the user experience re: ad hoc interrogation. The difference between several minutes vs. one or two seconds is a game changer, literally changing the way users interact with their data—no mental context switching, no wondering when the results will appear, no preoccupation with the spinning mind-numbing hurry-up-&-wait indicators.  As we've commonly found elsewhere, columnstore indexes here provided performance improvements of one, two, or more orders of magnitude.
Simplified Infrastructure: Because in this case a nonclustered columnstore index on a conventional DW table was faster than an Analysis Services cube, the entire SSAS infrastructure was rendered superfluous & was retired.
PASS Rocks: Once again, the value of attending PASS is proven out. The trip to Charlotte combined with eager & enquiring minds led directly to this success story. Find out more about the next PASS Summit here, hosted this year in Seattle on November 4 - 7, 2014. DevCon BI Team Lead Nathan Allan provided this unsolicited feedback: "What we found was pretty awesome. It has been a game changer for us in terms of the flexibility we can offer people that would like to get to the data in different ways."
Summary For DW, reports, & other BI workloads, columnstore often provides significant performance enhancements relative to conventional indexing.  In this, the second in a series of reports on columnstore implementations, I have documented results from DevCon Security, a live customer production app for which report query performance increased by factors of 10x to 100x, and for which ad hoc query times fell from 5 - 7 minutes to 1 - 2 seconds. As a result of columnstore performance, the customer retired their SSAS infrastructure. I invite you to consider leveraging columnstore in your own environment. Let me know if you have any questions.
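For readers who want to try the same pattern, the core change is a single piece of DDL. The table and column names below are placeholders rather than DevCon's actual schema; note that on SQL Server 2012 a nonclustered columnstore index also makes the table read-only, which happens to fit the nightly partition-switch refresh described above:

-- Sketch: add a nonclustered columnstore index covering the columns of a wide DW fact table
CREATE NONCLUSTERED COLUMNSTORE INDEX NCCI_FactSecurityEvent
ON dbo.FactSecurityEvent
(
    EventDateKey,
    CustomerKey,
    SiteKey,
    EventTypeKey,
    ResponseSeconds
    -- ...remaining columns of the 137-column fact table...
);

Aggregation-heavy ad hoc queries and the SSRS datasets can then scan the columnstore in batch mode, while existing rowstore indexes continue to serve the highly selective lookups described in the "Why Columnstore?" paragraph.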

    Read the article

  • Rapid Evolution of Society & Technology

    - by Michael Snow
    We caught up with Brian Solis on the phone the other day and Christie Flanagan had a chance to chat with him and learn a bit more about him and some of the concepts he'll be addressing in our Social Business Thought Leaders Webcast on Thursday 12/13/12.
«--- Interview with Brian Solis
Be sure and register for this week's webcast ---»
-------------------
Guest post by Brian Solis. Reposted (Borrowed) from his posting of May 24, 2012
Dear [insert business name], what's your promise? - Brian Solis
You say you want to get closer to customers, but your actions are different than your words. You say you want to "surprise and delight" customers, but your product development teams are too busy building against a roadmap without consideration of the 5th P of marketing…people. Your employees are your number one asset; however, the infrastructure of the organization has turned once optimistic and ambitious intrapreneurs into complacent cogs or, worse, your greatest detractors. You question the adoption of disruptive technology by your internal champions, yet you've not tried to find the value for yourself. You're a change agent and you truly wish to bring about change, but you've not invested time or resources to answer "why" in your endeavors to become a connected or social business. If we are to truly change, we must find purpose. We must uncover the essence of our business and the value it delivers to traditional and connected consumers. We must rethink the spirit of today's embrace and clearly articulate how transformation is going to improve customer and employee experiences and relationships now and over time. Without doing so, any attempts at evolution will be thwarted by reality. In an era of Digital Darwinism, no business is too big to fail or too small to succeed. These are undisciplined times which require alternative approaches to recognize and pursue new opportunities. But everything begins with acknowledging that the 360 view of the world that you see today is actually a filtered view of managed and efficient convenience. Today, many organizations that were once inspired by innovation and engagement have fallen into a process of marketing, operationalizing, managing, and optimizing. That might have worked for the better part of the last century, but for the next 10 years and beyond, new vision, leadership and supporting business models will be written to move businesses from rigid frameworks to adaptive and agile entities. I believe that today's executives will undergo a great test; a test of character, vision, intention, and universal leadership. It starts with a simple, but essential question…what is your promise? Notice, I didn't ask about your brand promise. Nor did I ask for you to cite your mission and vision statements. 
This is much more than value propositions or manufactured marketing language designed to hook audiences and stakeholders. I asked for your promise to me as your consumer, stakeholder, and partner. This isn't about B2B or B2C, but instead, people to people, person to person. It is this promise that will breathe new life into an organization that, on the outside, could be misdiagnosed as catatonic by those who are disrupting your markets. A promise, for example, is meant to inspire. It creates alignment. It serves as the foundation for your vision, mission, and all business strategies, and it must come from the top to mean anything. For without it, we cannot genuinely voice what it is we stand for or stand behind. Think for a moment about the definition of community. It's easy to confuse it with a workplace or a market where everyone simply shares common characteristics. However, a community in this day and age is much more than belonging to something; it's about doing something together that makes belonging matter. The next few years will force a divide where companies are separated by intention as measured by actions and words. But becoming a social business is not enough. Becoming more authentic and transparent doesn't serve as a mantra for a renaissance. A promise is the ink that inscribes the spirit of the relationship between you and me. A promise serves as the words that influence change from within and change beyond the halls of our business. It is the foundation for a renewed embrace, one that must then find its way to every aspect of the organization. It's the difference between a social business and an adaptive business. While an adaptive business can also be social, it is the culture of the organization that strives not just to use technology to extend current philosophies or processes into new domains, but instead to give rise to a new culture where striving for relevance is among its goals. The tools and networks simply become enablers of a greater mission. You are reading this because you believe in something more than what you're doing today. While you fight for change within your organization, remember to aim for a higher purpose. Organizations that strive for innovation, imagination, and relevance will outperform those that do not. Part of your job is to lead a missionary push that unites the groundswell with a top-down cascade. Change will only happen because you and other internal champions see what others can't and will do what others won't. It takes resolve. It takes the ability to translate new opportunities into business value. And, it takes courage. "This is a very noisy world, so we have to be very clear what we want them to know about us" - Steve Jobs
-----------------------------------------------------------------
So -- where do you begin to evaluate the kind of experience you are delivering for your customers, partners, and employees?  Take a look at this White Paper: Creating a Successful and Meaningful Customer Experience on the Web and then have a cup of coffee while you listen to the sage advice of Guy Kawasaki in a short video below.   An interview with Guy Kawasaki on Maximizing Social Media Channels 

    Read the article

  • Automating deployments with the SQL Compare command line

    - by Jonathan Hickford
    In my previous article, “Five Tips to Get Your Organisation Releasing Software Frequently” I looked at how teams can automate processes to speed up release frequency. In this post, I’m looking specifically at automating deployments using the SQL Compare command line. SQL Compare compares SQL Server schemas and deploys the differences. It works very effectively in scenarios where only one deployment target is required – source and target databases are specified, compared, and a change script is automatically generated and applied. But if multiple targets exist, and pressure to increase the frequency of releases builds, this solution quickly becomes unwieldy.   This is where SQL Compare’s command line comes into its own. I’ve put together a PowerShell script that loops through the Servers table and pulls out the server and database, these are then passed to sqlcompare.exe to be used as target parameters. In the example the source database is a scripts folder, a folder structure of scripted-out database objects used by both SQL Source Control and SQL Compare. The script can easily be adapted to use schema snapshots.     -- Create a DeploymentTargets database and a Servers table CREATE DATABASE DeploymentTargets GO USE DeploymentTargets GO CREATE TABLE [dbo].[Servers]( [id] [int] IDENTITY(1,1) NOT NULL, [serverName] [nvarchar](50) NULL, [environment] [nvarchar](50) NULL, [databaseName] [nvarchar](50) NULL, CONSTRAINT [PK_Servers] PRIMARY KEY CLUSTERED ([id] ASC) ) GO -- Now insert your target server and database details INSERT INTO dbo.Servers ( serverName , environment , databaseName) VALUES ( N'myserverinstance' , N'myenvironment1' , N'mydb1') INSERT INTO dbo.Servers ( serverName , environment , databaseName) VALUES ( N'myserverinstance' , N'myenvironment2' , N'mydb2') Here’s the PowerShell script you can adapt for yourself as well. # We're holding the server names and database names that we want to deploy to in a database table. # We need to connect to that server to read these details $serverName = "" $databaseName = "DeploymentTargets" $authentication = "Integrated Security=SSPI" #$authentication = "User Id=xxx;PWD=xxx" # If you are using database authentication instead of Windows authentication. # Path to the scripts folder we want to deploy to the databases $scriptsPath = "SimpleTalk" # Path to SQLCompare.exe $SQLComparePath = "C:\Program Files (x86)\Red Gate\SQL Compare 10\sqlcompare.exe" # Create SQL connection string, and connection $ServerConnectionString = "Data Source=$serverName;Initial Catalog=$databaseName;$authentication" $ServerConnection = new-object system.data.SqlClient.SqlConnection($ServerConnectionString); # Create a Dataset to hold the DataTable $dataSet = new-object "System.Data.DataSet" "ServerList" # Create a query $query = "SET NOCOUNT ON;" $query += "SELECT serverName, environment, databaseName " $query += "FROM dbo.Servers; " # Create a DataAdapter to populate the DataSet with the results $dataAdapter = new-object "System.Data.SqlClient.SqlDataAdapter" ($query, $ServerConnection) $dataAdapter.Fill($dataSet) | Out-Null # Close the connection $ServerConnection.Close() # Populate the DataTable $dataTable = new-object "System.Data.DataTable" "Servers" $dataTable = $dataSet.Tables[0] #For every row in the DataTable $dataTable | FOREACH-OBJECT { "Server Name: $($_.serverName)" "Database Name: $($_.databaseName)" "Environment: $($_.environment)" # Compare the scripts folder to the database and synchronize the database to match # NB. 
Have set SQL Compare to abort on medium level warnings. $arguments = @("/scripts1:$($scriptsPath)", "/server2:$($_.serverName)", "/database2:$($_.databaseName)", "/AbortOnWarnings:Medium") # + @("/sync" ) # Commented out the 'sync' parameter for safety, write-host $arguments & $SQLComparePath $arguments "Exit Code: $LASTEXITCODE" # Some interesting variations # Check that every database matches a folder. # For example this might be a pre-deployment step to validate everything is at the same baseline state. # Or a post deployment script to validate the deployment worked. # An exit code of 0 means the databases are identical. # # $arguments = @("/scripts1:$($scriptsPath)", "/server2:$($_.serverName)", "/database2:$($_.databaseName)", "/Assertidentical") # Generate a report of the difference between the folder and each database. Generate a SQL update script for each database. # For example use this after the above to generate upgrade scripts for each database # Examine the warnings and the HTML diff report to understand how the script will change objects # #$arguments = @("/scripts1:$($scriptsPath)", "/server2:$($_.serverName)", "/database2:$($_.databaseName)", "/ScriptFile:update_$($_.environment+"_"+$_.databaseName).sql", "/report:update_$($_.environment+"_"+$_.databaseName).html" , "/reportType:Interactive", "/showWarnings", "/include:Identical") }
It's worth noting that the above example generates the deployment scripts dynamically. This approach should be problem-free for the vast majority of changes, but it is still good practice to review and test a pre-generated deployment script prior to deployment. An alternative approach would be to pre-generate a single deployment script using SQL Compare, and run it en masse against multiple targets programmatically using sqlcmd, or using a tool like SQL Multi Script.  You can use the /ScriptFile, /report, and /showWarnings flags to generate change scripts, difference reports and any warnings.  See the commented out example in the PowerShell: #$arguments = @("/scripts1:$($scriptsPath)", "/server2:$($_.serverName)", "/database2:$($_.databaseName)", "/ScriptFile:update_$($_.environment+"_"+$_.databaseName).sql", "/report:update_$($_.environment+"_"+$_.databaseName).html" , "/reportType:Interactive", "/showWarnings", "/include:Identical") There is a drawback to running a pre-generated deployment script: it assumes that a given database target hasn't drifted from its expected state. Often there are (rightly or wrongly) many individuals within an organization who have permissions to alter the production database, and changes can therefore be made outside of the prescribed development processes. The consequence is that at deployment time, the applied script has been validated against a target that no longer represents reality. The solution here would be to add a check for drift prior to running the deployment script. This is achieved by using sqlcompare.exe to compare the target against the expected schema snapshot using the /Assertidentical flag. Should this return any differences (sqlcompare.exe Exit Code 79), a drift report is output instead of executing the deployment script.  See the commented out example. # $arguments = @("/scripts1:$($scriptsPath)", "/server2:$($_.serverName)", "/database2:$($_.databaseName)", "/Assertidentical") Any checks and processes that should be undertaken prior to a manual deployment should also happen during an automated deployment. 
You might think about triggering backups prior to deployment – even better, automate the verification of the backup too.   You can use SQL Compare’s command line interface along with PowerShell to automate multiple actions and checks that you need in your deployment process. Automation is a practical solution where multiple targets and a higher release cadence come into play. As we know, with great power comes great responsibility – responsibility to ensure that the necessary checks are made so deployments remain trouble-free.  (The code sample supplied in this post automates the simple dynamic deployment case – if you are considering more advanced automation, e.g. the drift checks, script generation, deploying to large numbers of targets and backup/verification, please email me at [email protected] for further script samples or if you have further questions)
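To make the drift check concrete, here is a minimal sketch of how it could sit inside the loop from the script above. It reuses only the flags already shown; $expectedStatePath is a placeholder for whichever scripts folder (or snapshot) represents the state each target should be in before this release, and 79 is the exit code quoted above for a failed /Assertidentical comparison:

# Sketch: only deploy when the target still matches its expected pre-release state
$expectedStatePath = "SimpleTalk_LastRelease"   # placeholder path, not from the article
$checkArgs = @("/scripts1:$($expectedStatePath)", "/server2:$($_.serverName)", "/database2:$($_.databaseName)", "/Assertidentical")
& $SQLComparePath $checkArgs
if ($LASTEXITCODE -eq 79) {
    write-host "Drift detected on $($_.serverName).$($_.databaseName) - investigate before deploying"
} else {
    $deployArgs = @("/scripts1:$($scriptsPath)", "/server2:$($_.serverName)", "/database2:$($_.databaseName)", "/AbortOnWarnings:Medium", "/sync")
    & $SQLComparePath $deployArgs
}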

    Read the article

  • Master Data

    - by david.butler(at)oracle.com
    Let's take a deeper look at what we mean when we talk about 'Master' data. In its most general sense, master data is data that exists in more than one operational application. These are the applications that automate business processes. These applications require significant amounts of data to function correctly.  This includes data about the objects that are involved in transactions, as well as the transaction data itself.  For example, when a customer buys a product, the transaction is managed by a sales application.  The objects of the transaction are the Customer and the Product.  The transactional data is the time, place, price, discount, payment methods, etc. used at the point of sale. Many thousands of transactional data attributes are needed within the application. These important data elements are local to the applications and have no bearing on other applications. Harmonization and synchronization across applications is not necessary. The Customer and Product objects of the transaction also have a large number of attributes. Customer for example, includes hierarchies, hierarchical and matrixed relationships, contacts, classifications, preferences, accounts, identifiers, profiles, and addresses galore for 'ship to', 'mail to'; 'service at'; etc. Dozens of attributes exist for individuals, hundreds for organizations, and thousands for products. This data has meaning beyond any particular application. It exists in many applications and drives the vital cross application enterprise business processes. These are the processes that define and differentiate the organization. At every decision point, information about the objects of the process determines the direction of the process flow. This is the nature of the data that exists in more than one application, and this is why we call it 'master data'. Let me elaborate. Parties Oracle has developed a party schema to model all participants in your daily business operations. It models people, organizations, groups, customers, contacts, employees, and suppliers. It models their accounts, locations, classifications, and preferences.  And most importantly, it models the vast array of hierarchical and matrixed relationships that exist between all the participants in your real world operations.  The model logically separates people and organizations from their relationships and accounts.  This separation creates flexibility unmatched in the industry and accounts for the fact that the Oracle schema for Customers, Suppliers, and Accounts is a true superset of the wide variety of commercial and homegrown customer models in existence. Sites Sites are places where business is conducted. They can be addresses, clusters such as retail malls, locations within a cluster, floors within a building, places where meters are located, rooms on floors, etc.  Fully understanding all attributes of a site is key to many business processes. Attributes such as 'noise abatement policy' at a point of delivery, or the size of an oven in a business kitchen drive day-to-day activities such as delivery schedules or food promotions. Typically this kind of data is siloed in departments and scattered across applications and spreadsheets.  This leads to conflicting information and poor operational efficiencies. Oracle's Global Single Schema can hold all site attributes in one place and enables a single version of authoritative site information across the enterprise. 
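To make the separation described in the Parties discussion above a little more tangible, here is a deliberately simplified, hypothetical sketch (it is not Oracle's actual party schema, only an illustration of the shape): one party row can participate in any number of relationships and accounts without ever being duplicated.

-- Hypothetical simplification of a party-style model (not the Oracle schema)
CREATE TABLE party (
  party_id    NUMBER PRIMARY KEY,
  party_type  VARCHAR2(20),     -- e.g. PERSON or ORGANIZATION
  party_name  VARCHAR2(200)
);
CREATE TABLE party_relationship (
  relationship_id   NUMBER PRIMARY KEY,
  subject_party_id  NUMBER REFERENCES party(party_id),
  object_party_id   NUMBER REFERENCES party(party_id),
  relationship_type VARCHAR2(30)  -- e.g. CONTACT_OF, SUBSIDIARY_OF
);
CREATE TABLE account (
  account_id      NUMBER PRIMARY KEY,
  owner_party_id  NUMBER REFERENCES party(party_id),
  account_status  VARCHAR2(20)
);

Because relationships and accounts only point at party_id, reorganizing a hierarchy or adding an account never touches the party record itself, which is the flexibility the text above describes.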
Products and Services The Oracle Global Single Schema also includes a number of entities that define the products and services a company creates and offers for sale. Key entities include Items organized into Catalogs and Price Lists. The Catalog structures provide for the ability to capture different views of a product such as engineering, manufacturing, and service which are based on a unified product model. As a result, designers, manufacturing engineers, purchasers and partners can work simultaneously on a common product definition. The Catalog schema allows for unlimited attributes, combines them into meaningful groups, and maps them to catalog categories to track these different types of information. The model also maps an unlimited number of functional structures for each item. For example, multiple Bills of Material (BOMs) can be constructed representing requirements BOM, features BOM, and packaging BOM for an item. The Catalog model also supports hierarchical information about each item and all standard Global Data Synchronization attributes. Business Processes Utilizing Linked Data Entities Each business entity codified into a centralized master data environment significantly improves the efficiency of the automated business processes that use the consolidated data.  When all the key business entities used by an organization's process are so consolidated, the advantages are multiplied.  The primary reason for business process breakdowns (i.e. data errors across application boundaries) is eliminated. All processes are positively impacted and business process automation is itself automated.  I like to use the "Call to Resolution" business process as an example to help illustrate this important point. It involves call center applications, service applications, RMA applications, transportation applications, inventory applications, etc. Customer, Site, Product and Supplier master data must all be correct and consistent across these applications.  What's more, the data relationships between customer and product, and product and suppliers must be right. This is the minimum quality needed to insure the business process flows without error. But that is not the end of the story. Critical master data attributes such as customer loyalty, profitability, credit worthiness, and propensity to buy can optimize the call center point of contact component of the process. Critical product information such as alternative parts or equivalent products can optimize the resolution selected by the process. A comprehensive understanding of the 'service at' location can help insure multiple trips are avoided in the process. Full supplier information on reliability, delivery delays, and potential alternates can prevent supplier exceptions and play a significant role in optimizing the process.  In other words, these master data attributes enable the optimization of the "Call to Resolution" enterprise business process. Master data supports and guides business process flows. Thus the phrase 'Master Data' is indeed appropriate. MDM is the software that houses, manages, and governs the master data that resides in all applications and controls the enterprise business processes. A complete master data solution takes a data model that holds fully attributed master data entities and their inter-relationships. Oracle has this model. 
Oracle, with its deep understanding of application data is the logical choice for managing all your master data within the enterprise whether or not your organization actually runs any Oracle Applications.

    Read the article

  • Source-control 'wet-work'?

    - by Phil Factor
    When a design or creative work is flawed beyond remedy, it is often best to destroy it and start again. The other day, I lost the code to a long and intricate SQL batch I was working on. I'd thought it was impossible, but it happened. With all the technology around that is designed to prevent this occurring, this sort of accident has become a rare event.  If it weren't for a deranged laptop, and my distraction, the code wouldn't have been lost this time.  As always, I sighed, had a soothing cup of tea, and typed it all in again.  The new code I hastily tapped in was much better: I'd held in my head the essence of how the code should work rather than the details, and I now knew for certain the start point, the end, and how it should be achieved. Instantly the detritus of half-baked thoughts fell away and I was able to write logical code that performed better.  Because I could work so quickly, I was able to hold the details of all the columns and variables in my head, and the dynamics of the flow of data. It was, in fact, easier and quicker to start from scratch rather than tidy up and refactor the existing code with its inevitable fumbling and half-baked ideas. What a shame that technology is now so good that developers rarely experience the cleansing shock of losing one's code and having to rewrite it from scratch.  If you've never accidentally lost your code, then it is worth doing it deliberately once for the experience. Creative people have, until technology mistakenly prevented it, torn up their drafts or sketches, thrown them in the bin, and started again from scratch.  Leonardo's obsessive reworking of the Mona Lisa was renowned because it was so unusual: most artists have been utterly ruthless in destroying work that didn't quite make it. Authors are particularly keen on writing afresh, and the results are generally positive. Lawrence of Arabia actually lost the entire 250,000-word manuscript of 'The Seven Pillars of Wisdom' by accidentally leaving it on a train at Reading station, before rewriting a much better version.  Now, any writer or artist is seduced by technology into altering or refining their work rather than casting it dramatically in the bin or setting a light to it on a bonfire, and rewriting it from the blank page.  It is easy to pick away at a flawed work, but the real creative process is far more brutal. Once, many years ago, whilst running a software house that supplied commercial software to local businesses, I'd been supervising an accounting system for a farming cooperative. No packaged system met their needs, and it was all hand-cut code.  For us, it represented a breakthrough as it was for a government organisation, and success would guarantee more contracts. As you've probably guessed, the code got mangled in a disk crash just a week before the deadline for delivery, and the many backups all proved to be entirely corrupted by a faulty tape drive.  There were some fragments left on individual machines, but they were all of different versions.  The developers were in despair.  Strangely, I managed to re-write the bulk of a three-month project in a manic and caffeine-soaked weekend.  Sure, that elegant universally-applicable input-form routine wasn't quite so elegant, but it didn't really need to be as we knew what forms it needed to support.  Yes, the code lacked architectural elegance and reusability. By dawn on Monday, the application passed its integration tests. 
The developers rose to the occasion after I’d collapsed, and tidied up what I’d done, though they were reproachful that some of the style and elegance had gone out of the application. By the delivery date, we were able to install it. It was a smaller, faster application than the beta they’d seen and the user-interface had a new, rather Spartan, appearance that we swore was done to conform to the latest in user-interface guidelines. (we switched to Helvetica font to look more ‘Bauhaus’ ). The client was so delighted that he forgave the new bugs that had crept in. I still have the disk that crashed, up in the attic. In IT, we have had mixed experiences from complete re-writes. Lotus 123 never really recovered from a complete rewrite from assembler into C, Borland made the mistake with Arago and Quattro Pro  and Netscape’s complete rewrite of their Navigator 4 browser was a white-knuckle ride. In all cases, the decision to rewrite was a result of extreme circumstances where no other course of action seemed possible.   The rewrite didn’t come out of the blue. I prefer to remember the rewrite of Minix by young Linus Torvalds, or the rewrite of Bitkeeper by a slightly older Linus.  The rewrite of CP/M didn’t do too badly either, did it? Come to think of it, the guy who decided to rewrite the windowing system of the Xerox Star never regretted the decision. I’ll agree that one should often resist calls for a rewrite. One of the worst habits of the more inexperienced programmer is to denigrate whatever code he or she inherits, and then call loudly for a complete rewrite. They are buoyed up by the mistaken belief that they can do better. This, however, is a different psychological phenomenon, more related to the idea of some motorcyclists that they are operating on infinite lives, or the occasional squaddies that if they charge the machine-guns determinedly enough all will be well. Grim experience brings out the humility in any experienced programmer.  I’m referring to quite different circumstances here. Where a team knows the requirements perfectly, are of one mind on methodology and coding standards, and they already have a solution, then what is wrong with considering  a complete rewrite? Rewrites are so painful in the early stages, until that point where one realises the payoff, that even I quail at the thought. One needs a natural disaster to push one over the edge. The trouble is that source-control systems, and disaster recovery systems, are just too good nowadays.   If I were to lose this draft of this very blog post, I know I’d rewrite it much better. However, if you read this, you’ll know I didn’t have the nerve to delete it and start again.  There was a time that one prayed that unreliable hardware would deliver you from an unmaintainable mess of a codebase, but now technology has made us almost entirely immune to such a merciful act of God. An old friend of mine with long experience in the software industry has long had the idea of the ‘source-control wet-work’,  where one hires a malicious hacker in some wild eastern country to hack into one’s own  source control system to destroy all trace of the source to an application. Alas, backup systems are just too good to make this any more than a pipedream. Somehow, it would be difficult to promote the idea. As an alternative, could one construct a source control system that, on doing all the code-quality metrics, would systematically destroy all trace of source code that failed the quality test? 
Alas, I can't see many managers buying into the idea. Reading the full story of the near-loss of Toy Story 2 set me thinking. It turned out that the lucky restoration of the code wasn't the happy ending one first imagined it to be, because they eventually came to the conclusion that the plot was fundamentally flawed and it all had to be rewritten anyway.  Was this an early case of the 'source-control wet-job'? It is very hard nowadays to do a rapid U-turn in a development project because we are far too prone to cling to our existing source-code.

    Read the article

  • How can unrealscript halt event handler execution after an arbitrary number of lines with no return or error?

    - by Dan Cowell
    I have created a class that extends TcpLink and is instantiated in a custom Kismet Sequence Action. It is being instantiated correctly and is making the GET HTTP request that I need it to (I have checked my access log in apache) and Apache is responding to the request with the appropriate content. The problem I have is that I'm using the event receive mode and it appears that somehow the handler for the Opened event is halted after a specific number of lines of code have executed. Here is my code for the Opened event: event Opened() { // A connection was established WorldInfo.Game.Broadcast(self, "[DNomad_TcpLinkClient] event opened"); WorldInfo.Game.Broadcast(self, "[DNomad_TcpLinkClient] Sending simple HTTP query"); //The HTTP GET request //char(13) and char(10) are carrage returns and new lines requesttext = "userId="$userId$"&apartmentId="$apartmentId; SendText("GET /"$path$"?"$requesttext$" HTTP/1.0"); SendText(chr(13)$chr(10)); SendText("Host: "$TargetHost); SendText(chr(13)$chr(10)); SendText("Connection: Close"); SendText(chr(13)$chr(10)$chr(13)$chr(10)); //WorldInfo.Game.Broadcast(self, "[DNomad_TcpLinkClient] Sent request: "$requesttext); //WorldInfo.Game.Broadcast(self, "[DNomad_TcpLinkClient] end HTTP query"); //WorldInfo.Game.Broadcast(self, "[DNomad_TcpLinkClient] LinkState: "$LinkState); //WorldInfo.Game.Broadcast(self, "[DNomad_TcpLinkClient] LinkMode: "$LinkMode); WorldInfo.Game.Broadcast(self, "[DNomad_TcpLinkClient] ReceiveMode: "$ReceiveMode); WorldInfo.Game.Broadcast(self, "[DNomad_TcpLinkClient] Error: "$string(GetLastError())); } As you can see, a number of the Broadcast calls have been commented out. Initially, only the lines up to the Broadcast containing "[DNomad_TcpLinkClient] Sent request: " were being executed and none of the Broadcasts were commented out. After commenting out that line, the next Broadcast was successful and so on and so forth. As a test, I commented out the very first Broadcast to see if the connection closing had any effect: // A connection was established //WorldInfo.Game.Broadcast(self, "[DNomad_TcpLinkClient] event opened"); WorldInfo.Game.Broadcast(self, "[DNomad_TcpLinkClient] Sending simple HTTP query"); Upon doing that, an additional Broadcast at the end of the function executed. Thus the inference that there is an upper limit to the number of lines executed. Additionally, my ReceivedText handler is never called, despite Apache returning the correct HTTP 200 response with a body. My working hypothesis is that somehow after the Sequence Action finishes executing the garbage collector cleans up the TcpLinkClient instance. My biggest source of confusion with that is how on earth it does it during the execution of an event handler. Has anyone ever seen anything like this before? My full TcpLinkClient class is below: /* * TcpLinkClient based on an example usage of the TcpLink class by Michiel 'elmuerte' Hendriks for Epic Games, Inc. 
* */ class DNomad_TcpLinkClient extends TcpLink; var PlayerController PC; var string TargetHost; var int TargetPort; var string path; var string requesttext; var string userId; var string apartmentId; var string statusCode; var string responseData; event PostBeginPlay() { super.PostBeginPlay(); } function DoTcpLinkRequest(string uid, string id) //removes having to send a host { userId = uid; apartmentId = id; Resolve(targethost); } function string GetStatus() { return statusCode; } event Resolved( IpAddr Addr ) { // The hostname was resolved succefully WorldInfo.Game.Broadcast(self, "[DNomad_TcpLinkClient] "$TargetHost$" resolved to "$ IpAddrToString(Addr)); // Make sure the correct remote port is set, resolving doesn't set // the port value of the IpAddr structure Addr.Port = TargetPort; //dont comment out this log because it rungs the function bindport WorldInfo.Game.Broadcast(self, "[DNomad_TcpLinkClient] Bound to port: "$ BindPort() ); if (!Open(Addr)) { WorldInfo.Game.Broadcast(self, "[DNomad_TcpLinkClient] Open failed"); } } event ResolveFailed() { WorldInfo.Game.Broadcast(self, "[TcpLinkClient] Unable to resolve "$TargetHost); // You could retry resolving here if you have an alternative // remote host. //send failed message to scaleform UI //JunHud(JunPlayerController(PC).myHUD).JunMovie.CallSetHTML("Failed"); } event Opened() { // A connection was established //WorldInfo.Game.Broadcast(self, "[DNomad_TcpLinkClient] event opened"); WorldInfo.Game.Broadcast(self, "[DNomad_TcpLinkClient] Sending simple HTTP query"); //The HTTP GET request //char(13) and char(10) are carrage returns and new lines requesttext = "userId="$userId$"&apartmentId="$apartmentId; SendText("GET /"$path$"?"$requesttext$" HTTP/1.0"); SendText(chr(13)$chr(10)); SendText("Host: "$TargetHost); SendText(chr(13)$chr(10)); SendText("Connection: Close"); SendText(chr(13)$chr(10)$chr(13)$chr(10)); //WorldInfo.Game.Broadcast(self, "[DNomad_TcpLinkClient] Sent request: "$requesttext); //WorldInfo.Game.Broadcast(self, "[DNomad_TcpLinkClient] end HTTP query"); //WorldInfo.Game.Broadcast(self, "[DNomad_TcpLinkClient] LinkState: "$LinkState); //WorldInfo.Game.Broadcast(self, "[DNomad_TcpLinkClient] LinkMode: "$LinkMode); WorldInfo.Game.Broadcast(self, "[DNomad_TcpLinkClient] ReceiveMode: "$ReceiveMode); WorldInfo.Game.Broadcast(self, "[DNomad_TcpLinkClient] Error: "$string(GetLastError())); } event Closed() { // In this case the remote client should have automatically closed // the connection, because we requested it in the HTTP request. WorldInfo.Game.Broadcast(self, "Connection closed."); // After the connection was closed we could establish a new // connection using the same TcpLink instance. } event ReceivedText( string Text ) { WorldInfo.Game.Broadcast(self, "Received Text: "$Text); //we dont want the header info, so we split the string after two new lines Text = Split(Text, chr(13)$chr(10)$chr(13)$chr(10), true); WorldInfo.Game.Broadcast(self, "Split Text: "$Text); statusCode = Text; } event ReceivedLine( string Line ) { WorldInfo.Game.Broadcast(self, "Received Line: "$Line); } event ReceivedBinary( int Count, byte B[255] ) { WorldInfo.Game.Broadcast(self, "Received Binary of length: "$Count); } defaultproperties { TargetHost="127.0.0.1" TargetPort=80 //default for HTTP LinkMode=MODE_Text ReceiveMode=RMODE_Event path = "dnomad/datafeed.php" userId = "0"; apartmentId = "0"; statusCode = ""; send = false; }

    Read the article

  • What's new in Solaris 11.1?

    - by Karoly Vegh
    Solaris 11.1 is released. This is the first release update since Solaris 11 11/11; the versioning has been changed from the MM/YY style to 11.1, highlighting that this is Solaris 11 Update 1.  Solaris 11 itself has been great. What's new in Solaris 11.1? Allow me to pick some new features from the What's New PDF that can be found in the official Oracle Solaris 11.1 Documentation. The updates are very numerous; I really can't include them all.
I. New AI (Automated Installer) RBAC profiles have been introduced to enable delegation of installation tasks.
II. The interactive installer now supports installing the OS to iSCSI targets.
III. ASR (Auto Service Request) and OCM (Oracle Configuration Manager) have been enabled by default to proactively provide support information and create service requests to speed up support processes. This is optional and can be disabled, but it helps a lot in support cases. For further information, see: http://oracle.com/goto/solarisautoreg
IV. The new command svcbundle helps you to create SMF manifests without having to struggle with XML editing. (By the way, do you know the interactive editprop subcommand in svccfg? The listprop/setprop subcommands are great for scripting and automating, but for an interactive property editing session try, for example, this: svccfg -s svc:/application/pkg/system-repository:default editprop )
V. pfedit: Ever wondered how to delegate editing permissions to certain files? It is well known that "sudo /usr/bin/vi /etc/hosts" is not the right way, for sudo elevates the complete vi process to admin levels, and the user can "break" out of the session as root by simply starting a shell from that vi. Now, the new pfedit command provides a solution to exactly this challenge - an auditable, secure, per-user configurable editing possibility. See the pfedit man page for examples, and the sketch at the end of this post.
VI. rsyslog, the popular logging daemon (filters, SSL, formattable output, SQL collect...), has been included in Solaris 11.1 as an alternative to syslog.
VII. Zones: Solaris Zones - as a major Solaris differentiator - got lots of love in terms of new features:
  ZOSS - Zones on Shared Storage: Placing your zones on shared storage (FC, iSCSI) has never been this easy - via zonecfg.
  Parallel updates - with S11's boot environments, updating zones was no problem and meant no downtime anyway, but still, now you can update them in parallel, a way faster update action if you are running a large number of zones. This is like parallel patching in Solaris 10, but with all the IPS/ZFS/S11 goodness.
  Per-zone fstype statistics: Running zones on a shared filesystem complicates I/O debugging, since ZFS collects all the random writes and delivers them sequentially to boost performance. Now, via kstat, you can find out which zone's I/O has an impact on the other ones; see the examples in the documentation: http://docs.oracle.com/cd/E26502_01/html/E29024/gmheh.html#scrolltoc
  Zones got RDSv3 protocol support for InfiniBand, and IPoIB support with Crossbow's anet (automatic vnic creation) feature.
  NUMA I/O support for Zones: customers can now determine the NUMA I/O topology of the system from within zones.
VIII. Security got a lot of attention too:
  Automated security/audit reporting, with built-in reporting templates, e.g. for PCI (payment card industry) audits.
  PAM is now configurable on a per-user basis instead of system-wide, allowing different authentication requirements for different users.
  SSH in Solaris 11.1 now supports running in FIPS 140-2 mode, that is, in a U.S. government security accredited fashion.
  SHA512/224 and SHA512/256 cryptographic hash functions are implemented in a FIPS-compliant way - and on a T4, implemented in silicon! That is, government-approved cryptography at HW speed.
  Generally, Solaris is currently under evaluation to be both FIPS and Common Criteria certified.
IX. Networking, as one of the core strengths of Solaris 11, has been extended with:
  Data Center Bridging (DCB) - not only setups where network and storage share the same fabric (FCoE, anyone?) can have Quality-of-Service requirements. DCB enables peers to distinguish traffic based on priorities. Your NICs have to support DCB; see the documentation, and additional information on Wikipedia.
  DataLink MultiPathing, DLMP, enables link aggregation to span multiple switches, even between those of different vendors. But there are essential differences from the good old bandwidth-aggregating LACP; see the documentation: http://docs.oracle.com/cd/E26502_01/html/E28993/gmdlu.html#scrolltoc
  VNIC live migration is now supported from one physical NIC to another on the fly.
X. Data management:
  FedFS (Federated FileSystem) is new; it relies on Solaris 11's NFS referring mechanism to join separate shares of different NFS servers into a single filesystem namespace. The referring system has been there since S11 11/11; in Solaris 11.1 FedFS uses LDAP as the one global nameservice to bind them all.
  The iSCSI initiator now uses the T4 CPU's HW-implemented CRC32 algorithm - thus improving iSCSI throughput while reducing CPU utilization on a T4.
  Storage locking improvements are now RAC aware, speeding up throughput with better locking-communication between nodes by up to 20%!
XI. Kernel performance optimizations:
  The new Virtual Memory subsystem ("VM2") now scales to 100+ TB memory ranges.
  The memory predictor monitors large memory page usage, and adjusts memory page sizes to applications' needs.
  OSM, the Optimized Shared Memory, allows Oracle DBs' SGA to be resized online.
XII. The Power Aware Dispatcher is now enabled by default, reducing power consumption of idle CPUs. Also, the LDoms' Power Management policies and the poweradm settings in Solaris 11 OS will cooperate.
XIII. x86 boot: upgrade to GRUB2 (the Grand Unified Bootloader). Because GRUB2 differs syntactically in its configuration from GRUB1, one shall not edit the new grub configuration (grub.cfg) by hand but use the new bootadm features to update it. GRUB2 adds UEFI support and also support for disks over 2TB.
XIV. Improved viewing of per-CPU statistics in mpstat. This one might seem of less importance at first, but nowadays having better sorting/filtering possibilities on a periodically updated mpstat output of 256+ vCPUs can be a blessing.
XV. Support for Solaris Cluster 4.1: The What's New document doesn't actually mention this one, since OSC 4.1 had not been released at the time 11.1 was. But since then it is available, and it requires Solaris 11.1. And it's only a "pkg update" away.
...and I seriously need to stop here. There's a lot I missed: Edge Virtual Bridging, lofi tuning, ZFS sharing and crypto enhancements, USB 3.0, pulseaudio, trusted extensions updates, etc. - but if I mention all those then I effectively copy the What's New document. Which I recommend reading now anyway; it is a great extract of the 300+ new projects and RFE-followups in S11.1. And this blogpost is a summary of that extract.  For closing words, allow me to come back to Request For Enhancements, RFEs. Any customer can request features. Open up a Support Request, explain that this is an RFE, and describe the feature you/your company would like to have implemented in S11. The more SRs are collected for an RFE, the better its chances of getting implemented. Feel free to provide feedback about the product, as well as about the Solaris 11.1 Documentation, using the "Feedback" button there. Both the Solaris engineers and the documentation writers are eager to hear your input. Feel free to comment about this post too. Except that it's too long ;)  wbr, charlie
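As promised above, a quick sketch of what pfedit delegation looks like in practice. Treat it as my reading of the mechanism rather than a verified recipe: the user name is made up, and the authorization string and the exact way of granting it should be checked against the pfedit and user_attr man pages on your system.

# /etc/user_attr entry granting jdoe the right to edit just this one file
# (authorization name as I understand the pfedit mechanism; verify locally)
jdoe::::auths=solaris.admin.edit/etc/hosts

# run as jdoe: opens /etc/hosts in $EDITOR, and the change is audited
pfedit /etc/hosts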

    Read the article

  • Create and Backup Multiple Profiles in Google Chrome

    - by Asian Angel
    Other browsers such as Firefox and SeaMonkey allow you to have multiple profiles but not Chrome…at least not until now. If you want to use multiple profiles and create backups for them then join us as we look at Google Chrome Backup. Note: There is a paid version of this program available but we used the free version for our article. Google Chrome Backup in Action During the installation process you will run across this particular window. It will have a default user name filled in as shown here…you will not need to do anything except click on Next to continue installing the program. When you start the program for the first time this is what you will see. Your default Chrome Profile will already be visible in the window. A quick look at the Profile Menu… In the Tools Menu you can go ahead and disable the Start program at Windows Startup setting…the only time that you will need the program running is if you are creating or restoring a profile. When you create a new profile the process will start with this window. You can access an Advanced Options mode if desired but most likely you will not need it. Here is a look at the Advanced Options mode. It is mainly focused on adding Switches to the new Chrome Shortcut. The drop-down menu for the Switches available… To create your new profile you will need to choose: A profile location A profile name (as you type/create the profile name it will automatically be added to the Profile Path) Make certain that the Create a new shortcut to access new profile option is checked For our example we decided to try out the Disable plugins switch option… Click OK to create the new profile. Once you have created your new profile, you will find a new shortcut on the Desktop. Notice that the shortcut’s name will be Google Chrome + profile name that you chose. Note: On our system we were able to move the new shortcut to the “Start Menu” without problems. Clicking on our new profile’s shortcut opened up a fresh and clean looking instance of Chrome. Just out of curiosity we did decide to check the shortcut to see if the Switch set up correctly. Unfortunately it did not in this instance…so your mileage with the Switches may vary. This was just a minor quirk and nothing to get excited or upset over…especially considering that you can create multiple profiles so easily. After opening up our default profile of Chrome you can see the individual profile icons (New & Default in order) sitting in the Taskbar side-by-side. And our two profiles open at the same time on our Desktop… Backing Profiles Up For the next part of our tests we decided to create a backup for each of our profiles. Starting the wizard will allow you to choose between creating or restoring a profile. Note: To create or restore a backup click on Run Wizard. When you reach the second part of the process you can go with the Backup default profile option or choose a particular one from a drop-down list using the Select a profile to backup option. We chose to backup the Default Profile first… In the third part of the process you will need to select a location to save the profile to. Once you have selected the location you will see the Target Path as shown here. You can choose your own name for the backup file…we decided to go with the default name instead since it contained the backup’s calendar date. A very nice feature is the ability to have the cache cleared before creating the backup. We clicked on Yes…choose the option that best suits your needs. 
Once you have chosen either Yes or No the backup will then be created. Click Finish to complete the process. The backup file for our Default Profile came in at 14.0 MB in size. And the backup file for our Chrome Fresh Profile…2.81 MB. Restoring Profiles For the final part of our tests we decided to do a Restore. Select Restore and click Next to get the process started. In the second step you will need to browse for the Profile Backup File (and select the desired profile if you have created multiples). For our example we decided to overwrite the original Default Profile with the Chrome Fresh Profile. The third step lets you choose where to restore the chosen profile to…you can go with the Default Profile or choose one from the drop-down list using the Restore to a selected profile option. The final step will get you on your way to restoring the chosen profile. The program will conduct a check regarding the previous/old profile and ask if you would like to proceed with overwriting it. Definitely nice in case you change your mind at the last moment. Clicking Yes will finish the restoration. The only other odd quirk that we noticed while using the program was that the Next Button did not function after restoring the profile. You can easily get around the problem by clicking to close the window. Which one is which? After the restore process we had identical twins. Conclusion If you have been looking for a way to create multiple profiles in Google Chrome, then you might want to add this program to your system. Links Download Google Chrome Backup
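As a side note on the Switches the program adds to the new shortcut: under the hood a separate Chrome profile is just Chrome started with its own user data folder, so a shortcut target along the following lines achieves the multi-profile part even without a helper tool. This is a hypothetical example only - the install path and profile folder are placeholders, and --disable-plugins is merely an illustration of an extra switch; --user-data-dir is the piece that actually separates the profiles.

REM Hypothetical shortcut target for a second, self-contained Chrome profile
"C:\Program Files\Google\Chrome\Application\chrome.exe" --user-data-dir="C:\ChromeProfiles\Fresh" --disable-plugins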

    Read the article

  • JSP Precompilation for ADF Applications

    - by Duncan Mills
    A question that comes up from time to time, particularly in relation to build automation, is how to best pre-compile the .jspx and .jsff files in an ADF application. Thus ensuring that the app is ready to run as soon as it's installed into WebLogic. In the normal run of things, the first poor soul to hit a page pays the price and has to wait a little whilst the JSP is compiled into a servlet. Everyone else subsequently gets a free lunch. So it's a reasonable thing to want to do... Let Me List the Ways So forth to Google (other search engines are available)... which lead me to a fairly old article on WLDJ - Removing Performance Bottlenecks Through JSP Precompilation. Technololgy wise, it's somewhat out of date, but the one good point that it made is that it's really not very useful to try and use the precompile option in the weblogic.xml file. That's a really good observation - particularly if you're trying to integrate a pre-compile step into a Hudson Continuous Integration process. That same article mentioned an alternative approach for programmatic pre-compilation using weblogic.jspc. This seemed like a much more useful approach for a CI environment. However, weblogic.jspc is now obsoleted by weblogic.appc so we'll use that instead.  Thanks to Steve for the pointer there. And So To APPC APPC has documentation - always a great place to start, and supports usage both from Ant via the wlappc task and from the command line using the weblogic.appc command. In my testing I took the latter approach. Usage, as the documentation will show you, is superficially pretty simple.  The nice thing here, is that you can pass an existing EAR file (generated of course using OJDeploy) and that EAR will be updated in place with the freshly compiled servlet classes created from the JSPs. Appc takes care of all the unpacking, compiling and re-packing of the EAR for you. Neat.  So we're done right...? Not quite. The Devil is in the Detail  OK so I'm being overly dramatic but it's not all plain sailing, so here's a short guide to using weblogic.appc to compile a simple ADF application without pain.  Information You'll Need The following is based on the assumption that you have a stand-alone WLS install with the Application Development  Runtime installed and a suitable ADF enabled domain created. This could of course all be run off of a JDeveloper install as well 1. Your Weblogic home directory. Everything you need is relative to this so make a note.  In my case it's c:\builds\wls_ps4. 2. Next deploy your EAR as normal and have a peek inside it using your favourite zip management tool. First of all look at the weblogic-application.xml inside the EAR /META-INF directory. Have a look for any library references. Something like this: <library-ref>    <library-name>adf.oracle.domain</library-name> </library-ref>   Make a note of the library ref (adf.oracle.domain in this case) , you'll need that in a second. 3. Next open the nested WAR file within the EAR and then have a peek inside the weblogic.xml file in the /WEB-INF directory. Again  make a note of the library references. 4. Now start the WebLogic as per normal and run the WebLogic console app (e.g. http://localhost:7001/console). In the Domain Structure navigator, select Deployments. 5. For each of the libraries you noted down drill into the library definition and make a note of the .war, .ear or .jar that defines the library. For example, in my case adf.oracle.domain maps to "C:\ builds\ WLS_PS4\ oracle_common\ modules\ oracle. adf. model_11. 1. 1\ adf. 
oracle. domain. ear". Note the extra spaces that are salted throughout this string as it is displayed in the console - just to make it annoying, you'll have to strip these out. 6. Finally you'll need the location of adfsharembean.jar. We need to pass this on the classpath for APPC so that the ADFConfigLifeCycleCallBack listener can be found. In a more complex app of your own you may need additional classpath entries as well.  Now we're ready to go, and it's a simple matter of applying the information we have gathered into the relevant command line arguments for the utility. A Simple CMD File to Run APPC  Here's the stub .cmd file I'm using on Windows to run this:
@echo off
REM Stub weblogic.appc Runner
setlocal
set WLS_HOME=C:\builds\WLS_PS4
set ADF_LIB_ROOT=%WLS_HOME%\oracle_common\modules
set COMMON_LIB_ROOT=%WLS_HOME%\wlserver_10.3\common\deployable-libraries
set ADF_WEBAPP=%ADF_LIB_ROOT%\oracle.adf.view_11.1.1\adf.oracle.domain.webapp.war
set ADF_DOMAIN=%ADF_LIB_ROOT%\oracle.adf.model_11.1.1\adf.oracle.domain.ear
set JSTL=%COMMON_LIB_ROOT%\jstl-1.2.war
set JSF=%COMMON_LIB_ROOT%\jsf-1.2.war
set ADF_SHARE=%ADF_LIB_ROOT%\oracle.adf.share_11.1.1\adfsharembean.jar
REM Set up the WebLogic Environment so appc can be found
call %WLS_HOME%\wlserver_10.3\server\bin\setWLSEnv.cmd
CLS
REM Now compile away!
java weblogic.appc -verbose -library %ADF_WEBAPP%,%ADF_DOMAIN%,%JSTL%,%JSF% -classpath %ADF_SHARE% %1
endlocal
Running the above on a target ADF .ear file will zip through and create all of the relevant compiled classes inside your nested .war file in the \WEB-INF\classes\jsp_servlet\ directory (but don't take my word for it, run it and take a look!) And So... In the immortal words of the Pet Shop Boys, Was It Worth It? Well, here's where you'll have to do your own testing. In my case here, with a simple ADF application, pre-compilation shaved a non-scientific "3 Elephants" off of the initial page load time for the first access of each page. That's a pretty significant payback for such a simple step to add into your CI process, so why not give it a go.
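If the CI job in question is already Ant-based, the same step can be driven through the wlappc Ant task that the appc documentation mentions, instead of a .cmd wrapper. The fragment below is only a sketch under assumptions: the property names and classpath reference are placeholders, only the source and verbose attributes are shown, and the shared-library list from the .cmd file above still has to be supplied in whatever form the wlappc task documentation prescribes for your WebLogic version.

<!-- Hypothetical Ant fragment: precompile the JSPs inside an ADF EAR with wlappc -->
<taskdef name="wlappc" classname="weblogic.ant.taskdefs.j2ee.Appc" classpathref="wls.classpath"/>

<target name="precompile-jsps">
  <!-- ${deploy.ear} is assumed to be the EAR produced by OJDeploy -->
  <wlappc source="${deploy.ear}" verbose="true"/>
</target>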

    Read the article

  • How do I restrict concurrent statistics gathering to a small set of tables from a single schema?

    - by Maria Colgan
    I got an interesting question from one of my colleagues in the performance team last week about how to restrict a concurrent statistics gather to a small subset of tables from one schema, rather than the entire schema. I thought I would share the solution we came up with because it was rather elegant, and took advantage of concurrent statistics gathering, incremental statistics, and the not so well known “obj_filter_list” parameter in DBMS_STATS.GATHER_SCHEMA_STATS procedure. You should note that the solution outline below with “obj_filter_list” still applies, even when concurrent statistics gathering and/or incremental statistics gathering is disabled. The reason my colleague had asked the question in the first place was because he wanted to enable incremental statistics for 5 large partitioned tables in one schema. The first time you gather statistics after you enable incremental statistics on a table, you have to gather statistics for all of the existing partitions so that a synopsis may be created for them. If the partitioned table in question is large and contains a lot of partition, this could take a considerable amount of time. Since my colleague only had the Exadata environment at his disposal overnight, he wanted to re-gather statistics on 5 partition tables as quickly as possible to ensure that it all finished before morning. Prior to Oracle Database 11g Release 2, the only way to do this would have been to write a script with an individual DBMS_STATS.GATHER_TABLE_STATS command for each partition, in each of the 5 tables, as well as another one to gather global statistics on the table. Then, run each script in a separate session and manually manage how many of this session could run concurrently. Since each table has over one thousand partitions that would definitely be a daunting task and would most likely keep my colleague up all night! In Oracle Database 11g Release 2 we can take advantage of concurrent statistics gathering, which enables us to gather statistics on multiple tables in a schema (or database), and multiple (sub)partitions within a table concurrently. By using concurrent statistics gathering we no longer have to run individual statistics gathering commands for each partition. Oracle will automatically create a statistics gathering job for each partition, and one for the global statistics on each partitioned table. With the use of concurrent statistics, our script can now be simplified to just five DBMS_STATS.GATHER_TABLE_STATS commands, one for each table. This approach would work just fine but we really wanted to get this down to just one command. So how can we do that? You may be wondering why we didn’t just use the DBMS_STATS.GATHER_SCHEMA_STATS procedure with the OPTION parameter set to ‘GATHER STALE’. Unfortunately the statistics on the 5 partitioned tables were not stale and enabling incremental statistics does not mark the existing statistics stale. Plus how would we limit the schema statistics gather to just the 5 partitioned tables? So we went to ask one of the statistics developers if there was an alternative way. The developer told us the advantage of the “obj_filter_list” parameter in DBMS_STATS.GATHER_SCHEMA_STATS procedure. The “obj_filter_list” parameter allows you to specify a list of objects that you want to gather statistics on within a schema or database. The parameter takes a collection of type DBMS_STATS.OBJECTTAB. 
Each entry in the collection has 5 fields: the schema name or the object owner, the object type (i.e., ‘TABLE’ or ‘INDEX’), object name, partition name, and subpartition name. You don't have to specify all five fields for each entry. Empty fields in an entry are treated as wildcards (similar to the ‘*’ character in LIKE predicates). Each entry corresponds to one set of filter conditions on the objects. If you have more than one entry, an object is qualified for statistics gathering as long as it satisfies the filter conditions in one entry. You first must create the collection of objects, and then gather statistics for the specified collection. It’s probably easier to explain this with an example. I’m using the SH sample schema but needed a couple of additional partitioned tables to recreate my colleague’s scenario of 5 partitioned tables. So I created SALES2, SALES3, and COSTS2 as copies of the SALES and COSTS tables (setup.sql). I also deleted statistics on all of the tables in the SH schema beforehand to more easily demonstrate our approach. Step 0. Delete the statistics on the tables in the SH schema. Step 1. Enable concurrent statistics gathering. Remember, this has to be done at the global level. Step 2. Enable incremental statistics for the 5 partitioned tables. Step 3. Create the DBMS_STATS.OBJECTTAB and pass it to the DBMS_STATS.GATHER_SCHEMA_STATS command. Here, you will notice that we defined two variables of DBMS_STATS.OBJECTTAB type. The first, filter_lst, will be used to pass the list of tables we want to gather statistics on, and will be the value passed to the obj_filter_list parameter. The second, obj_lst, will be used to capture the list of tables that have had statistics gathered on them by this command, and will be the value passed to the objlist parameter. In Oracle Database 11g Release 2, you need to specify the objlist parameter in order to get the obj_filter_list parameter to work correctly due to bug 14539274. We also needed to define the number of objects we would supply in the obj_filter_list. In our case we were specifying 5 tables (filter_lst.extend(5)). Finally, we need to specify the owner name and object name for each of the objects in the list. Once the list definition is complete we can issue the DBMS_STATS.GATHER_SCHEMA_STATS command (a minimal sketch of this step follows at the end of this post). Step 4. Confirm statistics were gathered on the 5 partitioned tables. Here are a couple of other things to keep in mind when specifying the entries for the obj_filter_list parameter. If a field in the entry is empty, i.e., null, it means there is no condition on this field. In the above example, suppose you remove the statement Obj_filter_lst(1).ownname := ‘SH’; You will get the same result: since the schema is already specified in gather_schema_stats, there is no need to further specify ownname in the obj_filter_lst. All of the names in the entry are normalized, i.e., uppercased if they are not double quoted. So in the above example, it is OK to use Obj_filter_lst(1).objname := ‘sales’;. However if you have a table called ‘MyTab’ instead of ‘MYTAB’, then you need to specify Obj_filter_lst(1).objname := ‘”MyTab”’; As I said before, although we have illustrated the usage of the obj_filter_list parameter for partitioned tables, with concurrent and incremental statistics gathering turned on, the obj_filter_list parameter is generally applicable to any gather_database_stats, gather_dictionary_stats and gather_schema_stats command. You can get a copy of the script I used to generate this post here.
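Here is the minimal sketch referred to above for Step 3, assuming the SH schema and the five partitioned tables used in this post (SALES, SALES2, SALES3, COSTS, COSTS2). It illustrates the obj_filter_list/objlist plumbing only and is not a copy of the original setup or gather script.

DECLARE
  filter_lst  dbms_stats.ObjectTab := dbms_stats.ObjectTab();
  obj_lst     dbms_stats.ObjectTab := dbms_stats.ObjectTab();
BEGIN
  -- One entry per table we want statistics gathered on;
  -- fields left NULL act as wildcards
  filter_lst.extend(5);
  filter_lst(1).ownname := 'SH';  filter_lst(1).objname := 'SALES';
  filter_lst(2).ownname := 'SH';  filter_lst(2).objname := 'SALES2';
  filter_lst(3).ownname := 'SH';  filter_lst(3).objname := 'SALES3';
  filter_lst(4).ownname := 'SH';  filter_lst(4).objname := 'COSTS';
  filter_lst(5).ownname := 'SH';  filter_lst(5).objname := 'COSTS2';

  -- objlist must be supplied in 11.2 for obj_filter_list to behave (bug 14539274)
  dbms_stats.gather_schema_stats(
      ownname         => 'SH',
      obj_filter_list => filter_lst,
      objlist         => obj_lst);
END;
/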
+Maria Colgan

    Read the article

  • How to access GNU Xnee

    - by Gaurav Butola
    I have installed GNU Xnee (Gnee an OS X automator alternative) from the Software Centre but now I cant find it anywhere in the menus. Here is the output when I run gnee in the terminal gaurav@gaurav-HCL-ME-Laptop:~$ gnee (gnee:6864): Gtk-WARNING **: GtkSpinButton: setting an adjustment with non-zero page size is deprecated (gnee:6864): Gtk-WARNING **: GtkSpinButton: setting an adjustment with non-zero page size is deprecated (gnee:6864): Gtk-WARNING **: GtkSpinButton: setting an adjustment with non-zero page size is deprecated (gnee:6864): Gtk-WARNING **: GtkSpinButton: setting an adjustment with non-zero page size is deprecated (gnee:6864): Gtk-WARNING **: GtkSpinButton: setting an adjustment with non-zero page size is deprecated (gnee:6864): Gtk-WARNING **: GtkSpinButton: setting an adjustment with non-zero page size is deprecated (gnee:6864): Gtk-WARNING **: GtkSpinButton: setting an adjustment with non-zero page size is deprecated (gnee:6864): Gtk-WARNING **: GtkSpinButton: setting an adjustment with non-zero page size is deprecated (gnee:6864): Gtk-WARNING **: GtkSpinButton: setting an adjustment with non-zero page size is deprecated (gnee:6864): Gtk-WARNING **: GtkSpinButton: setting an adjustment with non-zero page size is deprecated *** glibc detected *** gnee: free(): invalid next size (fast): 0x08afb638 *** ======= Backtrace: ========= /lib/libc.so.6(+0x6c501)[0x53de501] /lib/libc.so.6(+0x6dd70)[0x53dfd70] /lib/libc.so.6(cfree+0x6d)[0x53e2e5d] gnee[0x804c9f5] /lib/libc.so.6(__libc_start_main+0xe7)[0x5388ce7] gnee[0x804c571] ======= Memory map: ======== 00110000-00112000 r-xp 00000000 08:01 2755679 /usr/lib/libgmodule-2.0.so.0.2600.0 00112000-00113000 r--p 00002000 08:01 2755679 /usr/lib/libgmodule-2.0.so.0.2600.0 00113000-00114000 rw-p 00003000 08:01 2755679 /usr/lib/libgmodule-2.0.so.0.2600.0 00116000-0011a000 r-xp 00000000 08:01 2755370 /usr/lib/libXtst.so.6.1.0 0011a000-0011b000 r--p 00003000 08:01 2755370 /usr/lib/libXtst.so.6.1.0 0011b000-0011c000 rw-p 00004000 08:01 2755370 /usr/lib/libXtst.so.6.1.0 0011c000-00176000 r-xp 00000000 08:01 2755432 /usr/lib/libbonoboui-2.so.0.0.0 00176000-00177000 r--p 00059000 08:01 2755432 /usr/lib/libbonoboui-2.so.0.0.0 00177000-00179000 rw-p 0005a000 08:01 2755432 /usr/lib/libbonoboui-2.so.0.0.0 00179000-001c8000 r-xp 00000000 08:01 2755428 /usr/lib/libbonobo-2.so.0.0.0 001c8000-001c9000 ---p 0004f000 08:01 2755428 /usr/lib/libbonobo-2.so.0.0.0 001c9000-001cc000 r--p 0004f000 08:01 2755428 /usr/lib/libbonobo-2.so.0.0.0 001cc000-001d3000 rw-p 00052000 08:01 2755428 /usr/lib/libbonobo-2.so.0.0.0 001d3000-00200000 r-xp 00000000 08:01 2754521 /usr/lib/libgconf-2.so.4.1.5 00200000-00201000 ---p 0002d000 08:01 2754521 /usr/lib/libgconf-2.so.4.1.5 00201000-00202000 r--p 0002d000 08:01 2754521 /usr/lib/libgconf-2.so.4.1.5 00202000-00204000 rw-p 0002e000 08:01 2754521 /usr/lib/libgconf-2.so.4.1.5 00204000-0021c000 r-xp 00000000 08:01 2755405 /usr/lib/libatk-1.0.so.0.3209.1 0021c000-0021d000 ---p 00018000 08:01 2755405 /usr/lib/libatk-1.0.so.0.3209.1 0021d000-0021e000 r--p 00018000 08:01 2755405 /usr/lib/libatk-1.0.so.0.3209.1 0021e000-0021f000 rw-p 00019000 08:01 2755405 /usr/lib/libatk-1.0.so.0.3209.1 0021f000-00243000 r-xp 00000000 08:01 2756035 /usr/lib/libpangoft2-1.0.so.0.2800.1 00243000-00244000 r--p 00023000 08:01 2756035 /usr/lib/libpangoft2-1.0.so.0.2800.1 00244000-00245000 rw-p 00024000 08:01 2756035 /usr/lib/libpangoft2-1.0.so.0.2800.1 00245000-00248000 r-xp 00000000 08:01 393403 /lib/libuuid.so.1.3.0 00248000-00249000 r--p 00002000 
08:01 393403 /lib/libuuid.so.1.3.0 00249000-0024a000 rw-p 00003000 08:01 393403 /lib/libuuid.so.1.3.0 0024a000-0024c000 r-xp 00000000 08:01 2755415 /usr/lib/libavahi-glib.so.1.0.2 0024c000-0024d000 r--p 00001000 08:01 2755415 /usr/lib/libavahi-glib.so.1.0.2 0024d000-0024e000 rw-p 00002000 08:01 2755415 /usr/lib/libavahi-glib.so.1.0.2 0024e000-00250000 r-xp 00000000 08:01 393661 /lib/libutil-2.12.1.so 00250000-00251000 r--p 00001000 08:01 393661 /lib/libutil-2.12.1.so 00251000-00252000 rw-p 00002000 08:01 393661 /lib/libutil-2.12.1.so 00254000-00255000 r-xp 00000000 00:00 0 [vdso] 00255000-0026c000 r-xp 00000000 08:01 2755647 /usr/lib/libgdk_pixbuf-2.0.so.0.2200.0 0026c000-0026d000 r--p 00017000 08:01 2755647 /usr/lib/libgdk_pixbuf-2.0.so.0.2200.0 0026d000-0026e000 rw-p 00018000 08:01 2755647 /usr/lib/libgdk_pixbuf-2.0.so.0.2200.0 0026e000-002ad000 r-xp 00000000 08:01 2756031 /usr/lib/libpango-1.0.so.0.2800.1 002ad000-002ae000 ---p 0003f000 08:01 2756031 /usr/lib/libpango-1.0.so.0.2800.1 002ae000-002af000 r--p 0003f000 08:01 2756031 /usr/lib/libpango-1.0.so.0.2800.1 002af000-002b0000 rw-p 00040000 08:01 2756031 /usr/lib/libpango-1.0.so.0.2800.1 002b0000-002be000 r-xp 00000000 08:01 2755342 /usr/lib/libXext.so.6.4.0 002be000-002bf000 r--p 0000d000 08:01 2755342 /usr/lib/libXext.so.6.4.0 002bf000-002c0000 rw-p 0000e000 08:01 2755342 /usr/lib/libXext.so.6.4.0 002c0000-002c4000 r-xp 00000000 08:01 2755317 /usr/lib/libORBitCosNaming-2.so.0.1.0 002c4000-002c5000 r--p 00003000 08:01 2755317 /usr/lib/libORBitCosNaming-2.so.0.1.0 002c5000-002c6000 rw-p 00004000 08:01 2755317 /usr/lib/libORBitCosNaming-2.so.0.1.0 002c7000-002d9000 r-xp 00000000 08:01 2755430 /usr/lib/libbonobo-activation.so.4.0.0 002d9000-002da000 r--p 00012000 08:01 2755430 /usr/lib/libbonobo-activation.so.4.0.0 002da000-002db000 rw-p 00013000 08:01 2755430 /usr/lib/libbonobo-activation.so.4.0.0 002db000-002dc000 rw-p 00000000 00:00 0 002dc000-00370000 r-xp 00000000 08:01 2755645 /usr/lib/libgdk-x11-2.0.so.0.2200.0 00370000-00372000 r--p 00094000 08:01 2755645 /usr/lib/libgdk-x11-2.0.so.0.2200.0 00372000-00373000 rw-p 00096000 08:01 2755645 /usr/lib/libgdk-x11-2.0.so.0.2200.0 00373000-0038d000 r-xp 00000000 08:01 2755689 /usr/lib/libgnome-keyring.so.0.1.1 0038d000-0038e000 r--p 00019000 08:01 2755689 /usr/lib/libgnome-keyring.so.0.1.1 0038e000-0038f000 rw-p 0001a000 08:01 2755689 /usr/lib/libgnome-keyring.so.0.1.1 0038f000-00395000 r-xp 00000000 08:01 2755619 /usr/lib/libgailutil.so.18.0.1 00395000-00396000 r--p 00005000 08:01 2755619 /usr/lib/libgailutil.so.18.0.1 00396000-00397000 rw-p 00006000 08:01 2755619 /usr/lib/libgailutil.so.18.0.1 00397000-003ac000 r-xp 00000000 08:01 2755300 /usr/lib/libICE.so.6.3.0 003ac000-003ad000 r--p 00014000 08:01 2755300 /usr/lib/libICE.so.6.3.0 003ad000-003ae000 rw-p 00015000 08:01 2755300 /usr/lib/libICE.so.6.3.0 003ae000-003b0000 rw-p 00000000 00:00 0 003b0000-003f0000 r-xp 00000000 08:01 2755715 /usr/lib/libgobject-2.0.so.0.2600.0 003f0000-003f1000 r--p 00040000 08:01 2755715 /usr/lib/libgobject-2.0.so.0.2600.0 003f1000-003f2000 rw-p 00041000 08:01 2755715 /usr/lib/libgobject-2.0.so.0.2600.0 003f2000-0040f000 r-xp 00000000 08:01 2755524 /usr/lib/libdbus-glib-1.so.2.1.0 0040f000-00410000 r--p 0001c000 08:01 2755524 /usr/lib/libdbus-glib-1.so.2.1.0 00410000-00411000 rw-p 0001d000 08:01 2755524 /usr/lib/libdbus-glib-1.so.2.1.0 00411000-00413000 r-xp 00000000 08:01 2755352 /usr/lib/libXinerama.so.1.0.0 00413000-00414000 r--p 00001000 08:01 2755352 /usr/lib/libXinerama.so.1.0.0 00414000-00415000 
rw-p 00002000 08:01 2755352 /usr/lib/libXinerama.so.1.0.0 00416000-0045f000 r-xp 00000000 08:01 2755313 /usr/lib/libORBit-2.so.0.1.0 0045f000-00467000 r--p 00049000 08:01 2755313 /usr/lib/libORBit-2.so.0.1.0 00467000-00469000 rw-p 00051000 08:01 2755313 /usr/lib/libORBit-2.so.0.1.0 00469000-00551000 r-xp 00000000 08:01 2755661 /usr/lib/libgio-2.0.so.0.2600.0 00551000-00553000 r--p 000e7000 08:01 2755661 /usr/lib/libgio-2.0.so.0.2600.0 00553000-00554000 rw-p 000e9000 08:01 2755661 /usr/lib/libgio-2.0.so.0.2600.0 00554000-00555000 rw-p 00000000 00:00 0 00555000-00578000 r-xp 00000000 08:01 393365 /lib/libpng12.so.0.44.0 00578000-00579000 r--p 00022000 08:01 393365 /lib/libpng12.so.0.44.0 00579000-0057a000 rw-p 00023000 08:01 393365 /lib/libpng12.so.0.44.0 0057d000-0057f000 r-xp 00000000 08:01 393656 /lib/libdl-2.12.1.so 0057f000-00580000 r--p 00001000 08:01 393656 /lib/libdl-2.12.1.soAborted

    Read the article

  • The Presentation Isn't Over Until It's Over

    - by Phil Factor
    The senior corporate dignitaries settled into their seats looking important in a blue-suited sort of way. The lights dimmed as I strode out in front to give my presentation.  I had ten vital minutes to make my pitch.  I was about to dazzle the top management of a large software company who were considering the purchase of my software product. I would present them with a dazzling synthesis of diagrams and graphs, followed by a live demonstration of my software projected from my laptop.  My preparation had been meticulous: It had to be: A year’s hard work was at stake, so I’d prepared it to perfection.  I stood up and took them all in, with a gaze of sublime confidence. Then the laptop expired. There are several possible alternative plans of action when this happens:
    A. Stare at the smoking laptop vacuously, flapping one’s mouth slowly up and down
    B. Stand frozen like a statue, locked in indecision between fright and flight
    C. Run out of the room, weeping
    D. Pretend that this was all planned
    E. Abandon the presentation in favour of a stilted and tedious dissertation about the software
    F. Shake your fist at the sky, and curse the sense of humour of your preferred deity
    I started for a few seconds on plan B, normally referred to as the ‘Rabbit in the headlamps of the car’ technique. Suddenly, a little voice inside my head spoke. It spoke the famous inane words of Yogi Berra: ‘The game isn't over until it's over.’ ‘Too right’, I thought. What to do? I ran through the alternatives A-F inclusive in my mind but none appealed to me. I was completely unprepared for this. Nowadays, longevity has since taught me more than I wanted to know about the wacky sense of humour of fate, and I would have taken two laptops. I hadn’t, but decided to do the presentation anyway as planned. I started out ignoring the dead laptop, but pretending, instead, that it was still working. The audience looked startled. They were expecting plan B to be succeeded by plan C, I suspect. They weren’t used to denial on this scale. After my introductory talk, which didn’t require any visuals, I came to the diagram that described the application I’d written.  I’d taken ages over it and it was hot stuff. Well, it would have been had it been projected onto the screen. It wasn’t. Before I describe what happened then, I must explain that I have thespian tendencies.  My triumph as Professor Higgins in My Fair Lady at the local operatic society is now long forgotten, but I remember, at the time of my finest performance, the moment that, glancing up over the vast audience of moist-eyed faces during the poignant scene between Eliza and Higgins at the end, I realised that I had a talent that one day could possibly be harnessed for commercial use. I just talked about the diagram as if it was there, but throwing in some extra description. The audience nodded helpfully when I’d done enough. Emboldened, I began a sort of mime, well, more of a ballet, to represent each slide as I came to it. Heaven knows I’d done my preparation and, in my mind’s eye, I could see every detail, but I had to somehow project the reality of that vision to the audience, much the same way any actor playing Macbeth should do the ghost of Banquo.  My desperation gave me a manic energy. If you’ve ever demonstrated a Windows application entirely by mime, gesture and florid description, you’ll understand the scale of the challenge, but then I had nothing to lose.
With a brief sentence of description here and there, and arms flailing whilst outlining the size and shape of  graphs and diagrams, I used the many tricks of mime, gesture and body-language  learned from playing Captain Hook, or the Sheriff of Nottingham in pantomime. I set out determinedly on my desperate venture. There wasn’t time to do anything but focus on the challenge of the task: the world around me narrowed down to ten faces and my presentation: ten souls who had to be hypnotized into seeing a Windows application:  one that was slick, well organized and functional I don’t remember the details. Eight minutes of my life are gone completely. I was a thespian berserker.  I know however that I followed the basic plan of building the presentation in a carefully controlled crescendo until the dazzling finale where the results were displayed on-screen.  ‘And here you see the results, neatly formatted and grouped carefully to enhance the significance of the figures, together with running trend-graphs!’ I waved a mime to signify an animated  window-opening, and looked up, in my first pause, to gaze defiantly  at the audience.  It was a sight I’ll never forget. Ten pairs of eyes were gazing in rapt attention at the imaginary window, and several pairs of eyes were glancing at the imaginary graphs and figures.  I hadn’t had an audience like that since my starring role in  Beauty and the Beast.  At that moment, I realized that my desperate ploy might work. I sat down, slightly winded, when my ten minutes were up.  For the first and last time in my life, the audience of a  ‘PowerPoint’ presentation burst into spontaneous applause. ‘Any questions?’ ‘Yes,  Have you got an agent?’ Yes, in case you’re wondering, I got the deal. They bought the software product from me there and then. However, it was a life-changing experience for me and I have never ever again trusted technology as part of a presentation.  Even if things can’t go wrong, they’ll go wrong and they’ll kill the flow of what you’re presenting.  if you can’t do something without the techno-props, then you shouldn’t do it.  The greatest lesson of all is that great presentations require preparation and  ‘stage-presence’ rather than fancy graphics. They’re a great supporting aid, but they should never dominate to the point that you’re lost without them.

    Read the article

  • Refactoring FizzBuzz

    - by MarkPearl
    A few years ago I blogged about FizzBuzz; at the time the post was prompted by Scott Hanselman, who had podcasted about how surprised he was that some programmers could not even solve the FizzBuzz problem within a reasonable period of time during a job interview. At the time I thought I would give the problem a go in F# and sure enough the solution was fairly simple – I then also did a basic solution in C# but never posted it. Since then I have learned that being able to solve a problem and how you solve the problem are two totally different things. Today I decided to give the problem a retry and see if I had learnt anything new in the last year or so. Here is how my solution looked after refactoring… Solution 1 – Cheap and Nasty public class FizzBuzzCalculator { public string NumberFormat(int number) { var numDivisibleBy3 = (number % 3) == 0; var numDivisibleBy5 = (number % 5) == 0; if (numDivisibleBy3 && numDivisibleBy5) return String.Format("{0} FizzBuzz", number); else if (numDivisibleBy3) return String.Format("{0} Fizz", number); else if (numDivisibleBy5) return String.Format("{0} Buzz", number); return number.ToString(); } } class Program { static void Main(string[] args) { var fizzBuzz = new FizzBuzzCalculator(); for (int i = 0; i < 100; i++) { Console.WriteLine(fizzBuzz.NumberFormat(i)); } } } My first attempt I just looked at solving the problem – it works, and could be an acceptable solution but tonight I thought I would see how far I could refactor it… The section I decided to focus on was the mass of if..else code in the NumberFormat method. Solution 2 – Replacing If…Else with a Dictionary public class FizzBuzzCalculator { private readonly Dictionary<Tuple<bool, bool>, string> _mappings; public FizzBuzzCalculator(Dictionary<Tuple<bool, bool>, string> mappings) { _mappings = mappings; } public string NumberFormat(int number) { var numDivisibleBy3 = (number % 3) == 0; var numDivisibleBy5 = (number % 5) == 0; var mappedKey = new Tuple<bool, bool>(numDivisibleBy3, numDivisibleBy5); return String.Format("{0} {1}", number, _mappings[mappedKey]); } } class Program { static void Main(string[] args) { var mappings = new Dictionary<Tuple<bool, bool>, string> { { new Tuple<bool, bool>(true, true), "- FizzBuzz"}, { new Tuple<bool, bool>(true, false), "- Fizz"}, { new Tuple<bool, bool>(false, true), "- Buzz"}, { new Tuple<bool, bool>(false, false), ""} }; var fizzBuzz = new FizzBuzzCalculator(mappings); for (int i = 0; i < 100; i++) { Console.WriteLine(fizzBuzz.NumberFormat(i)); } Console.ReadLine(); } } In my second attempt I looked at removing the if else in the NumberFormat method. A dictionary proved to be useful for this – I added a constructor to the class and injected the dictionary mapping. One could argue that this is totally overkill, but if I was going to use this code in a large system an approach like this makes it easy to put this data in a configuration file, which would improve its adherence to the OC principle (open for extension, closed for modification). I could of course take the OC principle even further – the check for divisibility by 3 and 5 is tightly coupled to this class. If I wanted to make it 4 instead of 3, I would need to adjust this class. This introduces my third refactoring.
Solution 3 – Introducing Delegates and Injecting them into the class public delegate bool FizzBuzzComparison(int number); public class FizzBuzzCalculator { private readonly Dictionary<Tuple<bool, bool>, string> _mappings; private readonly FizzBuzzComparison _comparison1; private readonly FizzBuzzComparison _comparison2; public FizzBuzzCalculator(Dictionary<Tuple<bool, bool>, string> mappings, FizzBuzzComparison comparison1, FizzBuzzComparison comparison2) { _mappings = mappings; _comparison1 = comparison1; _comparison2 = comparison2; } public string NumberFormat(int number) { var mappedKey = new Tuple<bool, bool>(_comparison1(number), _comparison2(number)); return String.Format("{0} {1}", number, _mappings[mappedKey]); } } class Program { private static bool DivisibleByNum(int number, int divisor) { return number % divisor == 0; } public static bool Divisibleby3(int number) { return number % 3 == 0; } public static bool Divisibleby5(int number) { return number % 5 == 0; } static void Main(string[] args) { var mappings = new Dictionary<Tuple<bool, bool>, string> { { new Tuple<bool, bool>(true, true), "- FizzBuzz"}, { new Tuple<bool, bool>(true, false), "- Fizz"}, { new Tuple<bool, bool>(false, true), "- Buzz"}, { new Tuple<bool, bool>(false, false), ""} }; var fizzBuzz = new FizzBuzzCalculator(mappings, Divisibleby3, Divisibleby5); for (int i = 0; i < 100; i++) { Console.WriteLine(fizzBuzz.NumberFormat(i)); } Console.ReadLine(); } } I have taken this one step further and introduced delegates that are injected into the FizzBuzz Calculator class, from an OC principle perspective it has probably made it more compliant than the previous Solution 2, but there seems to be a lot of noise. Anonymous Delegates increase the readability level, which is what I have done in Solution 4. Solution 4 – Anon Delegates public delegate bool FizzBuzzComparison(int number); public class FizzBuzzCalculator { private readonly Dictionary<Tuple<bool, bool>, string> _mappings; private readonly FizzBuzzComparison _comparison1; private readonly FizzBuzzComparison _comparison2; public FizzBuzzCalculator(Dictionary<Tuple<bool, bool>, string> mappings, FizzBuzzComparison comparison1, FizzBuzzComparison comparison2) { _mappings = mappings; _comparison1 = comparison1; _comparison2 = comparison2; } public string NumberFormat(int number) { var mappedKey = new Tuple<bool, bool>(_comparison1(number), _comparison2(number)); return String.Format("{0} {1}", number, _mappings[mappedKey]); } } class Program { static void Main(string[] args) { var mappings = new Dictionary<Tuple<bool, bool>, string> { { new Tuple<bool, bool>(true, true), "- FizzBuzz"}, { new Tuple<bool, bool>(true, false), "- Fizz"}, { new Tuple<bool, bool>(false, true), "- Buzz"}, { new Tuple<bool, bool>(false, false), ""} }; var fizzBuzz = new FizzBuzzCalculator(mappings, (n) => n % 3 == 0, (n) => n % 5 == 0); for (int i = 0; i < 100; i++) { Console.WriteLine(fizzBuzz.NumberFormat(i)); } Console.ReadLine(); } }   Using the anonymous delegates I think the noise level has now been reduced. This is where I am going to end this post, I have gone through 4 iterations of the code from the initial solution using If..Else to delegates and dictionaries. I think each approach would have it’s pro’s and con’s and depending on the intention of where the code would be used would be a large determining factor. If you can think of an alternative way to do FizzBuzz, add a comment!
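Since the post closes by inviting alternative takes, here is one more variation for the comment thread: a LINQ-based sketch in the same spirit. It numbers 1 to 100 rather than 0 to 99, and it is offered as an illustration rather than as a "Solution 5".

using System;
using System.Linq;

class Program
{
    static void Main()
    {
        // Build the text for each number by concatenating "Fizz" and/or "Buzz",
        // falling back to the number itself when neither divisor matches.
        var lines = Enumerable.Range(1, 100)
            .Select(n => (n % 3 == 0 ? "Fizz" : "") + (n % 5 == 0 ? "Buzz" : ""))
            .Select((s, i) => string.IsNullOrEmpty(s) ? (i + 1).ToString() : s);

        foreach (var line in lines)
            Console.WriteLine(line);
    }
}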

    Read the article

  • Unlocking Productivity

    - by Michael Snow
    Unlocking Productivity in Life Sciences with Consolidated Content Management by Joe Golemba, Vice President, Product Management, Oracle WebCenter As life sciences organizations look to become more operationally efficient, the ability to effectively leverage information is a competitive advantage. Whether data mining at the drug discovery phase or prepping the sales team before a product launch, content management can play a key role in developing, organizing, and disseminating vital information. The goal of content management is relatively straightforward: put the information that people need where they can find it. A number of issues can complicate this; information sits in many different systems, each of those systems has its own security, and the information in those systems exists in many different formats. Identifying and extracting pertinent information from mountains of farflung data is no simple job, but the alternative—wasted effort or even regulatory compliance issues—is worse. An integrated information architecture can enable health sciences organizations to make better decisions, accelerate clinical operations, and be more competitive. Unstructured data matters Often when we think of drug development data, we think of structured data that fits neatly into one or more research databases. But structured data is often directly supported by unstructured data such as experimental protocols, reaction conditions, lot numbers, run times, analyses, and research notes. As life sciences companies seek integrated views of data, they are typically finding diverse islands of data that seemingly have no relationship to other data in the organization. Information like sales reports or call center reports can be locked into siloed systems, and unavailable to the discovery process. Additionally, in the increasingly networked clinical environment, Web pages, instant messages, videos, scientific imaging, sales and marketing data, collaborative workspaces, and predictive modeling data are likely to be present within an organization, and each source potentially possesses information that can help to better inform specific efforts. Historically, content management solutions that had 21CFR Part 11 capabilities—electronic records and signatures—were focused mainly on content-enabling manufacturing-related processes. Today, life sciences companies have many standalone repositories, requiring different skills, service level agreements, and vendor support costs to manage them. With the amount of content doubling every three to six months, companies have recognized the need to manage unstructured content from the beginning, in order to increase employee productivity and operational efficiency. Using scalable and secure enterprise content management (ECM) solutions, organizations can better manage their unstructured content. These solutions can also be integrated with enterprise resource planning (ERP) systems or research systems, making content available immediately, in the context of the application and within the flow of the employee’s typical business activity. Administrative safeguards—such as content de-duplication—can also be applied within ECM systems, so documents are never recreated, eliminating redundant efforts, ensuring one source of truth, and maintaining content standards in the organization. Putting it in context Consolidating structured and unstructured information in a single system can greatly simplify access to relevant information when it is needed through contextual search. 
Using contextual filters, results can include therapeutic area, position in the value chain, semantic commonalities, technology-specific factors, specific researchers involved, or potential business impact. The use of taxonomies is essential to organizing information and enabling contextual searches. Taxonomy solutions are composed of a hierarchical tree that defines the relationship between different life science terms. When overlaid with additional indexing related to research and/or business processes, it becomes possible to effectively narrow down the amount of data that is returned during searches, as well as prioritize results based on specific criteria and/or prior search history. Thus, search results are more accurate and relevant to an employee’s day-to-day work. For example, a search for the word "tissue" by a lab researcher would return significantly different results than a search for the same word performed by someone in procurement. Of course, diverse data repositories, combined with the immense amounts of data present in an organization, necessitate that the data elements be regularly indexed and cached beforehand to enable reasonable search response times. In its simplest form, indexing of a single, consolidated data warehouse can be expected to be a relatively straightforward effort. However, organizations require the ability to index multiple data repositories, enabling a single search to reference multiple data sources and provide an integrated results listing. Security and compliance Beyond yielding efficiencies and supporting new insight, an enterprise search environment can support important security considerations as well as compliance initiatives. For example, the systems enable organizations to retain the relevance and the security of the indexed systems, so users can only see the results to which they are granted access. This is especially important as life sciences companies are working in an increasingly networked environment and need to provide secure, role-based access to information across multiple partners. Although not officially required by the 21 CFR Part 11 regulation, the U.S. Food and Drug Administration has begun to extend the type of content considered when performing relevant audits and discoveries. Having an ECM infrastructure that provides centralized management of all content enterprise-wide—with the ability to consistently apply records and retention policies along with the appropriate controls, validations, audit trails, and electronic signatures—is becoming increasingly critical for life sciences companies. Making the move Creating an enterprise-wide ECM environment requires moving large amounts of content into a single enterprise repository, a daunting and risk-laden initiative. The first key is to focus on data taxonomy, allowing content to be mapped across systems. The second is to take advantage of new tools which can dramatically speed and reduce the cost of the data migration process through automation. Additional content need not be frozen while it is migrated, enabling productivity throughout the process. The ability to effectively leverage information into success has been gaining importance in the life sciences industry for years.
The rapid adoption of enterprise content management, both in operational processes as well as in scientific management, is a clear indicator that companies are looking to use all available data to be better informed, improve decision making, minimize risk, and reduce time to market, to maintain profitability and be more competitive. As more and more varieties and sources of information are brought under the strategic management umbrella, the ability to divine knowledge from the vast pool of information is increasingly difficult. Simple search engines and basic content management are increasingly unable to effectively extract the right information from the mountains of data available. By bringing these tools into context and integrating them with business processes and applications, we can effectively focus on the right decisions that make our organizations more profitable. More Information Oracle will be exhibiting at DIA 2012 in Philadelphia on June 25-27. Stop by our booth (#2825) to learn more about the advantages of a centralized ECM strategy and see the Oracle WebCenter Content solution, our 21 CFR Part 11 compliant content management platform.

    Read the article

  • Source-control 'wet-work'?

    - by Phil Factor
    When a design or creative work is flawed beyond remedy, it is often best to destroy it and start again. The other day, I lost the code to a long and intricate SQL batch I was working on. I’d thought it was impossible, but it happened. With all the technology around that is designed to prevent this occurring, this sort of accident has become a rare event.  If it weren’t for a deranged laptop, and my distraction, the code wouldn’t have been lost this time.  As always, I sighed, had a soothing cup of tea, and typed it all in again.  The new code I hastily tapped in  was much better: I’d held in my head the essence of how the code should work rather than the details: I now knew for certain  the start point, the end, and how it should be achieved. Instantly the detritus of half-baked thoughts fell away and I was able to write logical code that performed better.  Because I could work so quickly, I was able to hold the details of all the columns and variables in my head, and the dynamics of the flow of data. It was, in fact, easier and quicker to start from scratch rather than tidy up and refactor the existing code with its inevitable fumbling and half-baked ideas. What a shame that technology is now so good that developers rarely experience the cleansing shock of losing one’s code and having to rewrite it from scratch.  If you’ve never accidentally lost  your code, then it is worth doing it deliberately once for the experience. Creative people have, until Technology mistakenly prevented it, torn up their drafts or sketches, threw them in the bin, and started again from scratch.  Leonardo’s obsessive reworking of the Mona Lisa was renowned because it was so unusual:  Most artists have been utterly ruthless in destroying work that didn’t quite make it. Authors are particularly keen on writing afresh, and the results are generally positive. Lawrence of Arabia actually lost the entire 250,000 word manuscript of ‘The Seven Pillars of Wisdom’ by accidentally leaving it on a train at Reading station, before rewriting a much better version.  Now, any writer or artist is seduced by technology into altering or refining their work rather than casting it dramatically in the bin or setting a light to it on a bonfire, and rewriting it from the blank page.  It is easy to pick away at a flawed work, but the real creative process is far more brutal. Once, many years ago whilst running a software house that supplied commercial software to local businesses, I’d been supervising an accounting system for a farming cooperative. No packaged system met their needs, and it was all hand-cut code.  For us, it represented a breakthrough as it was for a government organisation, and success would guarantee more contracts. As you’ve probably guessed, the code got mangled in a disk crash just a week before the deadline for delivery, and the many backups all proved to be entirely corrupted by a faulty tape drive.  There were some fragments left on individual machines, but they were all of different versions.  The developers were in despair.  Strangely, I managed to re-write the bulk of a three-month project in a manic and caffeine-soaked weekend.  Sure, that elegant universally-applicable input-form routine was‘nt quite so elegant, but it didn’t really need to be as we knew what forms it needed to support.  Yes, the code lacked architectural elegance and reusability. By dawn on Monday, the application passed its integration tests. 
The developers rose to the occasion after I’d collapsed, and tidied up what I’d done, though they were reproachful that some of the style and elegance had gone out of the application. By the delivery date, we were able to install it. It was a smaller, faster application than the beta they’d seen and the user-interface had a new, rather Spartan, appearance that we swore was done to conform to the latest in user-interface guidelines. (we switched to Helvetica font to look more ‘Bauhaus’ ). The client was so delighted that he forgave the new bugs that had crept in. I still have the disk that crashed, up in the attic. In IT, we have had mixed experiences from complete re-writes. Lotus 123 never really recovered from a complete rewrite from assembler into C, Borland made the mistake with Arago and Quattro Pro  and Netscape’s complete rewrite of their Navigator 4 browser was a white-knuckle ride. In all cases, the decision to rewrite was a result of extreme circumstances where no other course of action seemed possible.   The rewrite didn’t come out of the blue. I prefer to remember the rewrite of Minix by young Linus Torvalds, or the rewrite of Bitkeeper by a slightly older Linus.  The rewrite of CP/M didn’t do too badly either, did it? Come to think of it, the guy who decided to rewrite the windowing system of the Xerox Star never regretted the decision. I’ll agree that one should often resist calls for a rewrite. One of the worst habits of the more inexperienced programmer is to denigrate whatever code he or she inherits, and then call loudly for a complete rewrite. They are buoyed up by the mistaken belief that they can do better. This, however, is a different psychological phenomenon, more related to the idea of some motorcyclists that they are operating on infinite lives, or the occasional squaddies that if they charge the machine-guns determinedly enough all will be well. Grim experience brings out the humility in any experienced programmer.  I’m referring to quite different circumstances here. Where a team knows the requirements perfectly, are of one mind on methodology and coding standards, and they already have a solution, then what is wrong with considering  a complete rewrite? Rewrites are so painful in the early stages, until that point where one realises the payoff, that even I quail at the thought. One needs a natural disaster to push one over the edge. The trouble is that source-control systems, and disaster recovery systems, are just too good nowadays.   If I were to lose this draft of this very blog post, I know I’d rewrite it much better. However, if you read this, you’ll know I didn’t have the nerve to delete it and start again.  There was a time that one prayed that unreliable hardware would deliver you from an unmaintainable mess of a codebase, but now technology has made us almost entirely immune to such a merciful act of God. An old friend of mine with long experience in the software industry has long had the idea of the ‘source-control wet-work’,  where one hires a malicious hacker in some wild eastern country to hack into one’s own  source control system to destroy all trace of the source to an application. Alas, backup systems are just too good to make this any more than a pipedream. Somehow, it would be difficult to promote the idea. As an alternative, could one construct a source control system that, on doing all the code-quality metrics, would systematically destroy all trace of source code that failed the quality test? 
Alas, I can’t see many managers buying into the idea. In reading the full story of the near-loss of Toy Story 2, it set me thinking. It turned out that the lucky restoration of the code wasn’t the happy ending one first imagined it to be, because they eventually came to the conclusion that the plot was fundamentally flawed and it all had to be rewritten anyway.  Was this an early  case of the ‘source-control wet-job’?’ It is very hard nowadays to do a rapid U-turn in a development project because we are far too prone to cling to our existing source-code.
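The quality-gated source control idea in that last paragraph can at least be approximated today, in a far gentler form, with an ordinary pre-commit hook: instead of destroying code that fails the metrics, it simply refuses to accept it. The sketch below is hypothetical; the run-quality-metrics command and the threshold of 80 are placeholders, not a real tool or policy.

#!/bin/sh
# Hypothetical .git/hooks/pre-commit: refuse (rather than destroy) low-quality code.
# 'run-quality-metrics' is a placeholder for whatever scores your codebase.
score=$(run-quality-metrics --staged --score-only 2>/dev/null || echo 0)
if [ "${score:-0}" -lt 80 ]; then
  echo "Quality score ${score} is below 80 - commit rejected. Rewrite it from scratch." >&2
  exit 1
fi
exit 0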

    Read the article

< Previous Page | 217 218 219 220 221 222 223 224 225 226 227 228  | Next Page >