Search Results

Search found 6346 results on 254 pages for 'patch panel'.


  • Add Your Gmail Account to Outlook 2010 Using IMAP

    - by Mysticgeek
    If you’re upgrading from Outlook 2003 to 2010, you might want to use IMAP with your Gmail account to synchronize mail across multiple machines. Using our guide, you will be able to start using it in no time.
    Enable IMAP in Gmail
    First log into your Gmail account and open the Settings panel. Click on the Forwarding and POP/IMAP tab, verify IMAP is enabled, and save changes. Next open Outlook 2010, click on the File tab to access the Backstage view. Click on Account Settings and Add and remove accounts or change existing connection settings. In the Account Settings window click on the New button. Enter your name, email address, and password twice then click Next. Outlook will configure the email server settings; the amount of time it takes will vary. Provided everything goes correctly, the configuration will be successful and you can begin using your account.
    Manually Configure IMAP Settings
    If the above instructions don’t work, then we’ll need to manually configure the settings. Again, go into Auto Account Setup and select Manually configure server settings or additional server types and click Next. Select Internet E-mail – Connect to POP or IMAP server to send and receive e-mail messages. Now we need to manually enter our settings similar to the following. Under the Server Information section verify the following.
    Account Type: IMAP
    Incoming mail server: imap.gmail.com
    Outgoing mail server (SMTP): smtp.gmail.com
    Note: If you have a Google Apps account make sure to put the full email address ([email protected]) in the Your Name and User Name fields.
    Note: If you live outside of the US you might need to use imap.googlemail.com and smtp.googlemail.com
    Next, we need to click on the More Settings button… In the Internet E-mail Settings screen that pops up, click on the Outgoing Server tab, and check the box next to My outgoing server (SMTP) requires authentication. Also select the radio button next to Use same settings as my incoming mail server. In the same window click on the Advanced tab and verify the following.
    Incoming server: 993
    Incoming server encrypted connection: SSL
    Outgoing server encrypted connection: TLS
    Outgoing server: 587
    Note: You will need to change the Outgoing server encrypted connection first, otherwise it will default back to port 25. Also, if TLS doesn’t work, we were able to successfully use Auto. Click OK when finished. Now we want to test the settings before continuing on…it’s just easier that way in case something was entered incorrectly. To make sure the settings are tested, check the box labeled Test Account Settings by clicking the Next button. If you’ve entered everything correctly, both tasks will be completed successfully and you can close out of the window. You’ll get a final congratulations message you can close out of…and begin using your account via Outlook 2010.
    Conclusion
    Using IMAP allows you to synchronize email across multiple machines and devices. The IMAP feature in Gmail is free to use, and this should get you started using it with Outlook 2010. If you’re still using 2007 or just upgraded to it, check out our guide on how to use Gmail IMAP in Outlook 2007.
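
    To double-check the server settings above from outside of Outlook, you can exercise them with a short script. The following is a minimal sketch using Python’s standard imaplib and smtplib modules; the e-mail address and app password shown are placeholders you would replace with your own.

        import imaplib
        import smtplib
        import ssl

        EMAIL = "you@example.com"        # placeholder address
        PASSWORD = "your-app-password"   # placeholder password

        # Incoming mail: IMAP over SSL on imap.gmail.com, port 993.
        imap = imaplib.IMAP4_SSL("imap.gmail.com", 993)
        imap.login(EMAIL, PASSWORD)
        print("IMAP:", imap.select("INBOX")[0])   # prints 'OK' if the inbox opened
        imap.logout()

        # Outgoing mail: SMTP with TLS on smtp.gmail.com, port 587.
        smtp = smtplib.SMTP("smtp.gmail.com", 587)
        smtp.starttls(context=ssl.create_default_context())
        smtp.login(EMAIL, PASSWORD)
        print("SMTP: OK")
        smtp.quit()

    If both checks print OK, the account details and ports are correct and any remaining problem is in the Outlook configuration itself.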

    Read the article

  • Tip/Trick: Fix Common SEO Problems Using the URL Rewrite Extension

    - by ScottGu
    Search engine optimization (SEO) is important for any publicly facing web-site. A large % of traffic to sites now comes directly from search engines, and improving your site’s search relevancy will lead to more users visiting your site from search engine queries. This can directly or indirectly increase the money you make through your site. This blog post covers how you can use the free Microsoft URL Rewrite Extension to fix a bunch of common SEO problems that your site might have. It takes less than 15 minutes (and no code changes) to apply 4 simple URL Rewrite rules to your site, and in doing so cause search engines to drive more visitors and traffic to your site. The techniques below work equally well with both ASP.NET Web Forms and ASP.NET MVC based sites. They also work with all versions of ASP.NET (and even work with non-ASP.NET content). [In addition to blogging, I am also now using Twitter for quick updates and to share links. Follow me at: twitter.com/scottgu]
    Measuring the SEO of your website with the Microsoft SEO Toolkit
    A few months ago I blogged about the free SEO Toolkit that we’ve shipped. This useful tool enables you to automatically crawl/scan your site for SEO correctness, and it then flags any SEO issues it finds. I highly recommend downloading and using the tool against any public site you work on. It makes it easy to spot SEO issues you might have in your site, and pinpoint ways to optimize it further. Below is a simple example of a report I ran against one of my sites (www.scottgu.com) prior to applying the URL Rewrite rules I’ll cover later in this blog post.
    Search Relevancy and URL Splitting
    Two of the important things that search engines evaluate when assessing your site’s “search relevancy” are:
    How many other sites link to your content. Search engines assume that if a lot of people around the web are linking to your content, then it is likely useful and so weight it higher in relevancy.
    The uniqueness of the content it finds on your site. If search engines find that the content is duplicated in multiple places around the Internet (or on multiple URLs on your site) then it is likely to drop the relevancy of the content.
    One of the things you want to be very careful to avoid when building public facing sites is allowing different URLs to retrieve the same content within your site. Doing so will hurt you in both of the situations above. In particular, allowing external sites to link to the same content with multiple URLs will cause your link-count and page-ranking to be split up across those different URLs (and so give you a smaller page rank than what it would otherwise be if it was just one URL). Not allowing external sites to link to you in different ways sounds easy in theory – but you might wonder what exactly this means in practice and how you avoid it.
    4 Really Common SEO Problems Your Sites Might Have
    Below are 4 really common scenarios that can cause your site to inadvertently expose multiple URLs for the same content. When this happens external sites linking to yours will end up splitting their page links across multiple URLs - and as a result cause you to have a lower page ranking with search engines than you deserve.
    SEO Problem #1: Default Document
    IIS (and other web servers) supports the concept of a “default document”. This allows you to avoid having to explicitly specify the page you want to serve at either the root of the web-site/application, or within a sub-directory.
    This is convenient – but means that by default this content is available via two different publicly exposed URLs (which is bad). For example:
    http://scottgu.com/
    http://scottgu.com/default.aspx
    SEO Problem #2: Different URL Casings
    Web developers often don’t realize URLs are case sensitive to search engines on the web. This means that search engines will treat the following links as two completely different URLs:
    http://scottgu.com/Albums.aspx
    http://scottgu.com/albums.aspx
    SEO Problem #3: Trailing Slashes
    Consider the below two URLs – they might look the same at first, but they are subtly different. The trailing slash creates yet another situation that causes search engines to treat the URLs as different and so split search rankings:
    http://scottgu.com
    http://scottgu.com/
    SEO Problem #4: Canonical Host Names
    Sometimes sites are set up to respond both with a leading “www” hostname prefix and with just the hostname itself. This causes search engines to treat the URLs as different and split search rankings:
    http://scottgu.com/albums.aspx/
    http://www.scottgu.com/albums.aspx/
    How to Easily Fix these SEO Problems in 10 minutes (or less) using IIS Rewrite
    If you haven’t been careful when coding your sites, chances are you are suffering from one (or more) of the above SEO problems. Addressing these issues will improve your search engine relevancy ranking and drive more traffic to your site. The “good news” is that fixing the above 4 issues is really easy using the URL Rewrite Extension. This is a completely free Microsoft extension available for IIS 7.x (on Windows Server 2008, Windows Server 2008 R2, Windows 7 and Windows Vista). The great thing about using the IIS Rewrite extension is that it allows you to fix the above problems *without* having to change any code within your applications. You can easily install the URL Rewrite Extension in under 3 minutes using the Microsoft Web Platform Installer (a free tool we ship that automates setting up web servers and development machines). Just click the green “Install Now” button on the URL Rewrite Spotlight page to install it on your Windows Server 2008, Windows 7 or Windows Vista machine. Once installed you’ll find that a new “URL Rewrite” icon is available within the IIS 7 Admin Tool. Double-clicking the icon will open up the URL Rewrite admin panel – which will display the list of URL Rewrite rules configured for a particular application or site. Notice that our rewrite rule list above is currently empty (which is the default when you first install the extension). We can click the “Add Rule…” link button in the top-right of the panel to add and enable new URL Rewriting logic for our site.
    Scenario 1: Handling Default Document Scenarios
    One of the SEO problems I discussed earlier in this post was the scenario where the “default document” feature of IIS causes you to inadvertently expose two URLs for the same content on your site. For example:
    http://scottgu.com/
    http://scottgu.com/default.aspx
    We can fix this by adding a new IIS Rewrite rule that automatically redirects anyone who navigates to the second URL to instead go to the first one. We will setup the HTTP redirect to be a “permanent redirect” – which will indicate to search engines that they should follow the redirect and use the new URL they are redirected to as the identifier of the content they retrieve. Let’s look at how we can create such a rule. We’ll begin by clicking the “Add Rule” link in the screenshot above.
This will cause the below dialog to display: We’ll select the “Blank Rule” template within the “Inbound rules” section to create a new custom URL Rewriting rule.  This will display an empty pane like below: Don’t worry – setting up the above rule is easy.  The following 4 steps explain how to do so: Step 1: Name the Rule Our first step will be to name the rule we are creating.  Naming it with a descriptive name will make it easier to find and understand later.  Let’s name this rule our “Default Document URL Rewrite” rule: Step 2: Setup the Regular Expression that Matches this Rule Our second step will be to specify a regular expression filter that will cause this rule to execute when an incoming URL matches the regex pattern.   Don’t worry if you aren’t good with regular expressions - I suck at them too. The trick is to know someone who is good at them or copy/paste them from a web-site.  Below we are going to specify the following regular expression as our pattern rule: (.*?)/?Default\.aspx$ This pattern will match any URL string that ends with Default.aspx. The "(.*?)" matches any preceding character zero or more times. The "/?" part says to match the slash symbol zero or one times. The "$" symbol at the end will ensure that the pattern will only match strings that end with Default.aspx.  Combining all these regex elements allows this rule to work not only for the root of your web site (e.g. http://scottgu.com/default.aspx) but also for any application or subdirectory within the site (e.g. http://scottgu.com/photos/default.aspx.  Because the “ignore case” checkbox is selected it will match both “Default.aspx” as well as “default.aspx” within the URL.   One nice feature built-into the rule editor is a “Test pattern” button that you can click to bring up a dialog that allows you to test out a few URLs with the rule you are configuring: Above I've added a “products/default.aspx” URL and clicked the “Test” button.  This will give me immediate feedback on whether the rule will execute for it.  Step 3: Setup a Permanent Redirect Action We’ll then setup an action to occur when our regular expression pattern matches the incoming URL: In the dialog above I’ve changed the “Action Type” drop down to be a “Redirect” action.  The “Redirect Type” will be a HTTP 301 Permanent redirect – which means search engines will follow it. I’ve also set the “Redirect URL” property to be: {R:1}/ This indicates that we want to redirect the web client requesting the original URL to a new URL that has the originally requested URL path - minus the "Default.aspx" in it.  For example, requests for http://scottgu.com/default.aspx will be redirected to http://scottgu.com/, and requests for http://scottgu.com/photos/default.aspx will be redirected to http://scottgu.com/photos/ The "{R:N}" regex construct, where N >= 0, is called a back-reference and N is the back-reference index. In the case of our pattern "(.*?)/?Default\.aspx$", if the input URL is "products/Default.aspx" then {R:0} will contain "products/Default.aspx" and {R:1} will contain "products".  We are going to use this {R:1}/ value to be the URL we redirect users to.  
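
    As a quick sanity check of this pattern and the {R:1} back-reference behavior described above, you can exercise the same regular expression outside of IIS. Below is a small sketch in Python (this particular expression behaves the same way under .NET and Python regex syntax); the sample URLs are just illustrative.

        import re

        # The pattern from the rewrite rule, with "ignore case" enabled.
        pattern = re.compile(r"(.*?)/?Default\.aspx$", re.IGNORECASE)

        for url in ["Default.aspx", "products/default.aspx", "photos/album.aspx"]:
            m = pattern.match(url)
            if m:
                # m.group(0) corresponds to {R:0}, m.group(1) to {R:1}.
                print(url, "-> redirect to", m.group(1) + "/")
            else:
                print(url, "-> no redirect")

    Running it shows that “products/default.aspx” maps to “products/”, while URLs that don’t end in Default.aspx are left alone – exactly the behavior the rule configures.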
Step 4: Apply and Save the Rule Our final step is to click the “Apply” button in the top right hand of the IIS admin tool – which will cause the tool to persist the URL Rewrite rule into our application’s root web.config file (under a <system.webServer/rewrite> configuration section): <configuration>     <system.webServer>         <rewrite>             <rules>                 <rule name="Default Document" stopProcessing="true">                     <match url="(.*?)/?Default\.aspx$" />                     <action type="Redirect" url="{R:1}/" />                 </rule>             </rules>         </rewrite>     </system.webServer> </configuration> Because IIS 7.x and ASP.NET share the same web.config files, you can actually just copy/paste the above code into your web.config files using Visual Studio and skip the need to run the admin tool entirely.  This also makes adding/deploying URL Rewrite rules with your ASP.NET applications really easy. Step 5: Try the Rule Out Now that we’ve saved the rule, let’s try it out on our site.  Try the following two URLs on my site: http://scottgu.com/ http://scottgu.com/default.aspx Notice that the second URL automatically redirects to the first one.  Because it is a permanent redirect, search engines will follow the URL and should update the page ranking of http://scottgu.com to include links to http://scottgu.com/default.aspx as well. Scenario 2: Different URL Casing Another common SEO problem I discussed earlier in this post is that URLs are case sensitive to search engines on the web.  This means that search engines will treat the following links as two completely different URLs: http://scottgu.com/Albums.aspx http://scottgu.com/albums.aspx We can fix this by adding a new IIS Rewrite rule that automatically redirects anyone who navigates to the first URL to instead go to the second (all lower-case) one.  Like before, we will setup the HTTP redirect to be a “permanent redirect” – which will indicate to search engines that they should follow the redirect and use the new URL they are redirected to as the identifier of the content they retrieve. To create such a rule we’ll click the “Add Rule” link in the URL Rewrite admin tool again.  This will cause the “Add Rule” dialog to appear again: Unlike the previous scenario (where we created a “Blank Rule”), with this scenario we can take advantage of a built-in “Enforce lowercase URLs” rule template.  When we click the “ok” button we’ll see the following dialog which asks us if we want to create a rule that enforces the use of lowercase letters in URLs: When we click the “Yes” button we’ll get a pre-written rule that automatically performs a permanent redirect if an incoming URL has upper-case characters in it – and automatically send users to a lower-case version of the URL: We can click the “Apply” button to use this rule “as-is” and have it apply to all incoming URLs to our site.  Because my www.scottgu.com site uses ASP.NET Web Forms, I’m going to make one small change to the rule we generated above – which is to add a condition that will ensure that URLs to ASP.NET’s built-in “WebResource.axd” handler are excluded from our case-sensitivity URL Rewrite logic.  URLs to the WebResource.axd handler will only come from server-controls emitted from my pages – and will never be linked to from external sites.  While my site will continue to function fine if we redirect these URLs to automatically be lower-case – doing so isn’t necessary and will add an extra HTTP redirect to many of my pages.  
The good news is that adding a condition that prevents my URL Rewriting rule from happening with certain URLs is easy.  We simply need to expand the “Conditions” section of the form above We can then click the “Add” button to add a condition clause.  This will bring up the “Add Condition” dialog: Above I’ve entered {URL} as the Condition input – and said that this rule should only execute if the URL does not match a regex pattern which contains the string “WebResource.axd”.  This will ensure that WebResource.axd URLs to my site will be allowed to execute just fine without having the URL be re-written to be all lower-case. Note: If you have static resources (like references to .jpg, .css, and .js files) within your site that currently use upper-case characters you’ll probably want to add additional condition filter clauses so that URLs to them also don’t get redirected to be lower-case (just add rules for patterns like .jpg, .gif, .js, etc).  Your site will continue to work fine if these URLs get redirected to be lower case (meaning the site won’t break) – but it will cause an extra HTTP redirect to happen on your site for URLs that don’t need to be redirected for SEO reasons.  So setting up a condition clause makes sense to add. When I click the “ok” button above and apply our lower-case rewriting rule the admin tool will save the following additional rule to our web.config file: <configuration>     <system.webServer>         <rewrite>             <rules>                 <rule name="Default Document" stopProcessing="true">                     <match url="(.*?)/?Default\.aspx$" />                     <action type="Redirect" url="{R:1}/" />                 </rule>                 <rule name="Lower Case URLs" stopProcessing="true">                     <match url="[A-Z]" ignoreCase="false" />                     <conditions logicalGrouping="MatchAll" trackAllCaptures="false">                         <add input="{URL}" pattern="WebResource.axd" negate="true" />                     </conditions>                     <action type="Redirect" url="{ToLower:{URL}}" />                 </rule>             </rules>         </rewrite>     </system.webServer> </configuration> Try the Rule Out Now that we’ve saved the rule, let’s try it out on our site.  Try the following two URLs on my site: http://scottgu.com/Albums.aspx http://scottgu.com/albums.aspx Notice that the first URL (which has a capital “A”) automatically does a redirect to a lower-case version of the URL.  Scenario 3: Trailing Slashes Another common SEO problem I discussed earlier in this post is the scenario of trailing slashes within URLs.  The trailing slash creates yet another situation that causes search engines to treat the URLs as different and so split search rankings: http://scottgu.com http://scottgu.com/ We can fix this by adding a new IIS Rewrite rule that automatically redirects anyone who navigates to the first URL (that does not have a trailing slash) to instead go to the second one that does.  Like before, we will setup the HTTP redirect to be a “permanent redirect” – which will indicate to search engines that they should follow the redirect and use the new URL they are redirected to as the identifier of the content they retrieve.  To create such a rule we’ll click the “Add Rule” link in the URL Rewrite admin tool again.  This will cause the “Add Rule” dialog to appear again: The URL Rewrite admin tool has a built-in “Append or remove the trailing slash symbol” rule template.  
When we select it and click the “ok” button we’ll see the following dialog which asks us if we want to create a rule that automatically redirects users to a URL with a trailing slash if one isn’t present: Like within our previous lower-casing rewrite rule we’ll add one additional condition clause that will exclude WebResource.axd URLs from being processed by this rule.  This will avoid an unnecessary redirect for happening for those URLs. When we click the “OK” button we’ll get a pre-written rule that automatically performs a permanent redirect if the URL doesn’t have a trailing slash – and if the URL is not processed by either a directory or a file.  This will save the following additional rule to our web.config file: <configuration>     <system.webServer>         <rewrite>             <rules>                 <rule name="Default Document" stopProcessing="true">                     <match url="(.*?)/?Default\.aspx$" />                     <action type="Redirect" url="{R:1}/" />                 </rule>                 <rule name="Lower Case URLs" stopProcessing="true">                     <match url="[A-Z]" ignoreCase="false" />                     <conditions logicalGrouping="MatchAll" trackAllCaptures="false">                         <add input="{URL}" pattern="WebResource.axd" negate="true" />                     </conditions>                     <action type="Redirect" url="{ToLower:{URL}}" />                 </rule>                 <rule name="Trailing Slash" stopProcessing="true">                     <match url="(.*[^/])$" />                     <conditions logicalGrouping="MatchAll" trackAllCaptures="false">                         <add input="{REQUEST_FILENAME}" matchType="IsDirectory" negate="true" />                         <add input="{REQUEST_FILENAME}" matchType="IsFile" negate="true" />                         <add input="{URL}" pattern="WebResource.axd" negate="true" />                     </conditions>                     <action type="Redirect" url="{R:1}/" />                 </rule>             </rules>         </rewrite>     </system.webServer> </configuration> Try the Rule Out Now that we’ve saved the rule, let’s try it out on our site.  Try the following two URLs on my site: http://scottgu.com http://scottgu.com/ Notice that the first URL (which has no trailing slash) automatically does a redirect to a URL with the trailing slash.  Because it is a permanent redirect, search engines will follow the URL and update the page ranking. Scenario 4: Canonical Host Names The final SEO problem I discussed earlier are scenarios where a site works with both a leading “www” hostname prefix as well as just the hostname itself.  This causes search engines to treat the URLs as different and split search rankling: http://www.scottgu.com/albums.aspx http://scottgu.com/albums.aspx We can fix this by adding a new IIS Rewrite rule that automatically redirects anyone who navigates to the first URL (that has a www prefix) to instead go to the second URL.  Like before, we will setup the HTTP redirect to be a “permanent redirect” – which will indicate to search engines that they should follow the redirect and use the new URL they are redirected to as the identifier of the content they retrieve.  To create such a rule we’ll click the “Add Rule” link in the URL Rewrite admin tool again.  This will cause the “Add Rule” dialog to appear again: The URL Rewrite admin tool has a built-in “Canonical domain name” rule template.  
When we select it and click the “ok” button we’ll see the following dialog which asks us if we want to create a redirect rule that automatically redirects users to a primary host name URL: Above I’m entering the primary URL address I want to expose to the web: scottgu.com.  When we click the “OK” button we’ll get a pre-written rule that automatically performs a permanent redirect if the URL has another leading domain name prefix.  This will save the following additional rule to our web.config file: <configuration>     <system.webServer>         <rewrite>             <rules>                 <rule name="Cannonical Hostname">                     <match url="(.*)" />                     <conditions logicalGrouping="MatchAll" trackAllCaptures="false">                         <add input="{HTTP_HOST}" pattern="^scottgu\.com$" negate="true" />                     </conditions>                     <action type="Redirect" url="http://scottgu.com/{R:1}" />                 </rule>                 <rule name="Default Document" stopProcessing="true">                     <match url="(.*?)/?Default\.aspx$" />                     <action type="Redirect" url="{R:1}/" />                 </rule>                 <rule name="Lower Case URLs" stopProcessing="true">                     <match url="[A-Z]" ignoreCase="false" />                     <conditions logicalGrouping="MatchAll" trackAllCaptures="false">                         <add input="{URL}" pattern="WebResource.axd" negate="true" />                     </conditions>                     <action type="Redirect" url="{ToLower:{URL}}" />                 </rule>                 <rule name="Trailing Slash" stopProcessing="true">                     <match url="(.*[^/])$" />                     <conditions logicalGrouping="MatchAll" trackAllCaptures="false">                         <add input="{REQUEST_FILENAME}" matchType="IsDirectory" negate="true" />                         <add input="{REQUEST_FILENAME}" matchType="IsFile" negate="true" />                         <add input="{URL}" pattern="WebResource.axd" negate="true" />                     </conditions>                     <action type="Redirect" url="{R:1}/" />                 </rule>             </rules>         </rewrite>     </system.webServer> </configuration> Try the Rule Out Now that we’ve saved the rule, let’s try it out on our site.  Try the following two URLs on my site: http://www.scottgu.com/albums.aspx http://scottgu.com/albums.aspx Notice that the first URL (which has the “www” prefix) now automatically does a redirect to the second URL which does not have the www prefix.  Because it is a permanent redirect, search engines will follow the URL and update the page ranking. 4 Simple Rules for Improved SEO The above 4 rules are pretty easy to setup and should take less than 15 minutes to configure on existing sites you already have.  The beauty of using a solution like the URL Rewrite Extension is that you can take advantage of it without having to change code within your web-site – and without having to break any existing links already pointing at your site.  Users who follow existing links will be automatically redirected to the new URLs you wish to publish.  And search engines will start to give your site a higher search relevancy ranking – which will list your site higher in search results and drive more traffic to it. 
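
    Once rules like the four above are in place, it’s easy to confirm that the permanent redirects are actually being returned without opening a browser. Below is a small sketch using Python’s standard http.client module; the hosts and paths are the example URLs used throughout this post, and you would substitute your own site.

        import http.client

        # Each request should come back as a 301 with a Location header
        # pointing at the single canonical URL.
        checks = [
            ("scottgu.com", "/default.aspx"),    # default document rule
            ("scottgu.com", "/Albums.aspx"),     # lower-case rule
            ("www.scottgu.com", "/albums.aspx"), # canonical host name rule
        ]

        for host, path in checks:
            conn = http.client.HTTPConnection(host, 80, timeout=10)
            conn.request("GET", path)
            response = conn.getresponse()
            print(host + path, "->", response.status, response.getheader("Location"))
            conn.close()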
    Customizing your URL Rewriting rules further is easy to do either by editing the web.config file directly, or alternatively, just double click the URL Rewrite icon within the IIS 7.x admin tool and it will list all the active rules for your web-site or application. Clicking any of the rules above will open the rules editor back up and allow you to tweak/customize/save them further.
    Summary
    Measuring and improving SEO is something every developer building a public-facing web-site needs to think about and focus on. If you haven’t already, download and use the SEO Toolkit to analyze the SEO of your sites today. New URL Routing features in ASP.NET MVC and ASP.NET Web Forms 4 make it much easier to build applications that have more control over the URLs that are published. Tools like the URL Rewrite Extension that I’ve talked about in this blog post make it much easier to improve the URLs that are published from sites you already have built today – without requiring you to change a lot of code. The URL Rewrite Extension provides a bunch of additional great capabilities – far beyond just SEO - as well. I’ll be covering these additional capabilities more in future blog posts. Hope this helps, Scott

    Read the article

  • NDepend tool – Why every developer working with Visual Studio.NET must try it!

    - by hajan
    In the past two months, I have had a chance to test the capabilities and features of the amazing NDepend tool designed to help you make your .NET code better, more beautiful and achieve high code quality. In other words, this tool will definitely help you harmonize your code. I mean, you’ve probably heard about Chaos Theory. Experienced developers and architects are already advocates of the programming chaos that happens when working with complex project architecture, the matrix of relationships between objects: even if you are the one who has written all that code, you know how hard it is to visualize everything the code does. When the application gets more and more complex, you will start missing a lot of details in your code… NDepend will help you visualize all the details in a clever way that will help you make smart moves to make your code better. The NDepend tool supports many features, such as:
    Code Query Language – which will help you write custom rules and query your own code! Imagine, you want to find all your methods which have more than 100 lines of code :)! That’s something simple! However, I will dig much deeper in one of my next blogs, which I’m going to dedicate to NDepend’s CQL (Code Query Language)
    Architecture Visualization – You are an architect and want to visualize your application’s architecture? I’m thinking how many architects will be really surprised by their architectures, since NDepend shows your whole architecture, showing each piece of it. NDepend will show you how your code is structured. It shows the architecture in graphs, but if you have a very complex architecture, you can see it in a Dependency Matrix which is better suited to displaying large architectures
    Code Metrics – Using NDepend’s panel, you can see the code base according to Code Metrics. You can do some additional filtering, like selecting the top code elements ordered by their current code metric value. You can use the CQL language for this purpose too.
    Smart Search – NDepend has great searching ability, which is again based on the CQL (Code Query Language). However, you have some options to search using dropdown lists and text boxes and it will generate the appropriate CQL code on the fly. Moreover, you can modify the CQL code if you want it to fit some more advanced searching tasks.
    Compare Builds and Code Difference – NDepend will also help you compare previous versions of your code with the current one in one of the most clever ways I’ve seen till now.
    Create Custom Rules – using CQL you can create custom rules and let NDepend warn you on each build if you break a rule
    Reporting – NDepend can automatically generate reports with detailed stats, graph representations, dependency matrixes and some additional advanced reporting features that will simply explain everything related to your application’s code, architecture and what you’ve done.
    And that’s not all. As I’ve seen, there are many other features that NDepend supports. I will dig more in the upcoming days and will blog more about it. The team who built NDepend have also created good documentation, which you can find on the NDepend website. On their website, you can also find some good videos that will help you get started quite fast. It’s easy to install and, what is very important, it is fully integrated with Visual Studio. To get you started, you can watch the following Getting Started Online Demo and Tutorial with explanations and screenshots.
    If you are interested in knowing more about how to use the features of this tool, either visit their website or wait for my next blogs where I will show some real examples of using the tool and how it helps make your code better. And the last thing for this blog, I would like to copy one sentence from NDepend’s home page which says: ‘Hence the software design becomes concrete, code reviews are effective, large refactoring are easy and evolution is mastered.’
    Website: www.ndepend.com
    Getting Started: http://www.ndepend.com/GettingStarted.aspx
    Features: http://www.ndepend.com/Features.aspx
    Download: http://www.ndepend.com/NDependDownload.aspx
    Hope you like it! Please do let me know your feedback by providing comments to my blog post. Kind Regards, Hajan

    Read the article

  • Sneak peek at next generation Three MiFi unit – Huawei E585

    - by Liam Westley
    Last Wednesday I was fortunate to be invited to a sneak preview of the next generation Three MiFi unit, the Huawei E585. Many thanks to all those who posted questions both via this blog or via @westleyl on Twitter. I think I made sure I asked every question posed to the MiFi product manager from Three UK, and so here are the answers you were after.
    What is a MiFi?
    For those who are wondering, a MiFi unit is a 3G broadband modem combined with a WiFi access point, providing 3G broadband data access to up to five devices simultaneously via standard WiFi connections.
    What is different?
    It appears the prime task of enhancing the MiFi was to improve the user experience and user interface, both in terms of the device hardware and within the management software to configure the device. I think this was a very sensible decision as these areas had substantial room for improvement.
    Single button operation to switch on, enable WiFi and connect to 3G
    Improved OLED display (see below), replacing the multi coloured LEDs; including signal strength, SMS notifications, the number of connected clients and data usage
    Management is via a web based dashboard accessible from any web browser. This is a big win for those running Linux, Mac OS/X, iPad users and, for me, as I can now configure the device from Windows 7 64-bit
    Charging is via micro USB, the new standard for small USB devices; you cannot use your old charger for the new MiFi unit
    Automatic reconnection when regaining a signal
    Improved charging time, which should allow recharging of the device when in use
    Although subjective, the black and silver design does look more classy than the silver and white plastic of the original MiFi
    What is the same?
    Virtually the same size and weight
    The battery is the same unit as the original MiFi so you’ll have a handy spare if you upgrade
    Data plans remain the same as the current MiFi, so cheapest price for upgraders will be £49 pay as you go
    Still only works on 3G networks, with no fallback to GPRS or EDGE
    There is no specific upgrade path for existing Three customers, either from dongle or from the original MiFi
    My opinion
    I think Three have concentrated on the correct areas of usability and user experience rather than trying to add new whizz bang technology features which aren’t of interest to mainstream users. The one button operation and the improved device display will make it much easier to use when out and about. If the automatic reconnection proves reliable that will remove a major bugbear that I experienced the previous evening when travelling on the First Great Western line from Paddington to Didcot Parkway. The signal was repeatedly lost as we sped through tunnels and cuttings, and without automatic reconnection it was a real pain to keep pressing the data button on the MiFi to re-establish my data connection. And finally, the web based dashboard will mean I no longer need to resort to my XP based netbook to configure the SSID and password. My everyday laptop runs Windows 7 64-bit, which appears to confuse the older 3 WiFi manager which cannot locate the MiFi when connected.
    Links to other sites, and other images of the device
    Good first impressions from Ben Smith: http://thereallymobileproject.com/2010/06/3uk-announce-a-new-mifi-with-a-screen/
    Also, a round up of other sneak preview posts: http://www.3mobilebuzz.com/2010/06/11/mifi-round-two-your-view/
    Pictures
    Here is a comparison of the old MiFi device next to the new device, complete with OLED display and the Huawei logo now being a prominent feature on the front of the device. One of my fellow bloggers had a Linux based netbook, showing off the web based dashboard complete with Text messages panel to manage SMS. And finally, I never thought that my blog sub title would ever end up printed onto a cup cake, ... and here's some of the other cup cakes ...

    Read the article

  • Session Report: What’s New in JSF: A Complete Tour of JSF 2.2

    - by Janice J. Heiss
    On Wednesday, Ed Burns, Consulting Staff Member at Oracle, presented a session, CON3870 -- “What’s New in JSF: A Complete Tour of JSF 2.2,” in which he provided an update on recent developments in JavaServer Faces 2.2. He began by emphasizing that, “JavaServer Faces 2.2 continues the evolution of the Java EE standard user interface technology. Like previous releases, this iteration is very community-driven and transparent.” He pointed out that since JSF was introduced at the 2001 JavaOne Keynote, it has had a long and successful run and has found a home in applications where the UI logic resides entirely on the server, alongside the model. In such cases, the browser performs fairly simple functions. However, developers can take advantage of the power of browsers, something that Project Avatar is focused on by letting developers author their applications so the UI logic is running on the client and communicating to the back end via RESTful web services. “Most importantly,” remarked Burns, “JSF 2.2 offers a really good migration path because even in the scope of one application you could have an app written with JSF that has its UI logic on the server and, on a gradual basis, you could migrate parts of the app over to use client-side technologies. This can be done at any level of granularity – per page or per collection of pages. It all depends on what you want to do.” His presentation, which focused on the basic new features of JSF 2.2, began by restating the scope of JSF and encouraged attendees to check out Roger Kitain’s session: CON5133 “Techniques for Responsive Real-Time Web UIs.” Burns explained that JSF has endured because, “We still need web apps that are maintainable, localizable, quick to build, accessible, secure, look great and are fun to use.” It is used on every continent – the curious can go here to check out where its unofficial usage is tracked. He emphasized the significance of the UI logic being substantially on the server. This:
    Separates Component Semantics from Rendering,
    Allows components to “own” their little patch of the UI -- encode/decode,
    And offers a well-defined lifecycle: Inversion of Control.
    Burns reminded attendees that JSR-344, the spec for JSF 2.2, is now on Java Community Process 2.8, a revised version of the JCP that allows for more openness and transparency. He then offered some tools for community access to JSF 2.2:
    * Public java.net projects
    spec: http://jsf-spec.java.net/
    impl: http://jsf.java.net/
    Open Source: GPL+Classpath Exception
    * Mailing Lists
    [email protected] – public readable archive, JSPA signed member read/write
    [email protected] – public readable archive, any java.net member read/write
    All mail sent to jsr344-experts is sent to users.
    * Issue Tracker
    spec: http://jsf-spec.java.net/issues/
    impl: http://jsf.java.net/issues/
    JSF 2.2, which is JSR 344, has a Public Review Draft planned by December 2012 with no need for a Renewal Ballot. The Early Draft Review of JSR 344 was published on December 8, 2011. Interested developers are encouraged to offer their input.
    Six Big Ticket Features of JSF 2.2
    Burns summarized the six big ticket features of JSF 2.2:
    * HTML5 Friendly Markup Support – pass-through attributes and elements
    * Faces Flows
    * Cross Site Request Forgery Protection
    * Loading Facelets via ResourceHandler
    * File Upload Component
    * Multi-Templating
    He explained that he called it “HTML 5 friendly” because there is really nothing HTML 5 specific about it -- it could just as well be HTML 4. But it enables developers to use new elements that are present in HTML5 without having a JSF component library that is written to take advantage of those specifically. It gives the page author the ability to use plain HTML5 to write their page, but to still take advantage of the server-side available in JSF. He presented a demo showing JSF 2.2’s ability to leverage the expressiveness of HTML5. Burns then explained the significance of Faces Flows, which offer function points and quantify how much work has taken place, something of great value to JSF users. He went on to talk about JSF 2.2’s cross-site request forgery protection (CSRF) and offered details about how it protects applications against attack. Then he talked about JSF 2.2’s File Upload Component and explained that the final specification will have Ajax and non-Ajax support. The current milestone has non-Ajax support implemented. He then went on to explain its capacity to add Facelets through ResourceHandler. Previously, JSF 2.0 added Facelets and ResourceHandler as disparate units; now in JSF 2.2 the two concepts are unified. Finally, he explained the concept of multi-templating in JSF 2.2 and went on to discuss more medium-level features of the release. For an easy, low maintenance way of staying in touch with JSF developments, go to JSF’s Twitter page where, every month or so, important updates are offered.

    Read the article

  • Raspberry Pi entrance sign backed by Umbraco - Part 1

    - by Chris Houston
    Being experts on all things Umbraco, we jumped at the chance to help our client, QV Offices, with their pressing signage predicament. They needed to display a sign in the entrance to their building and approached us for our advice. Of course it had to be electronic: displaying multiple names of their serviced office clients, meeting room bookings and on-the-pulse promotions. But with a winding Victorian staircase and minimal storage space how could the monitor be run, updated and managed? That’s where we came in…
    Raspberry Pi
    Umbraco CMS
    Automatic updates
    Automated monitoring of the sign
    Power saving when the screen is not in use
    Mounting the screen
    The screen that has been used is a standard LED low energy Full HD screen and has been mounted on the wall using its VESA mounting points. As the wall is a stud wall we were able to add an access panel behind the screen to feed through the mains, HDMI and sensor cables. The Raspberry Pi is then tucked away out of sight in the main electrical cupboard which just happens to be next to the sign; we had an electrician add a power point inside this cupboard to allow us to power the screen and the Raspberry Pi.
    Designing the interface and editing the content
    Although a room sign was the initial requirement from QV Offices, their medium term goal has always been to add online meeting booking to their website and hence we suggested adding information about the current and next day's meetings to the sign that would be pulled directly from their online booking system. We produced the design and built the web page to fit exactly on a 1920 x 1080 screen (Full HD in Portrait). As you would expect all the information can be edited via an Umbraco CMS: they are able to add floors, rooms, clients and virtual clients as well as add meeting bookings to their meeting diary.
    How we configured the Raspberry Pi
    After receiving a new Raspberry Pi we downloaded the latest release of the Raspbian operating system and followed the official guide which shows how to copy the OS onto an SD card from a Mac. We then followed the majority of steps on this useful guide: 10 Things to Do After Buying a Raspberry Pi.
    Installing Chromium
    We chose to use the Chromium web browser, which for those who do not know is the open sourced version of Google Chrome.
    You can install this from the terminal with the following command:
    sudo apt-get install chromium-browser
    Installing Unclutter
    We found this little application which automatically hides the mouse pointer; it is used in the script below and is installed using the following command:
    sudo apt-get install unclutter
    Auto start Chromium and disabling the screen saver, power saving and mouse
    When the Raspberry Pi has been installed it will not have a keyboard or mouse, and hence if there was a power cut we needed it to always boot and reload Chromium with the correct URL. Our preferred command line text editor is Nano and I have assumed you know how to use this editor or will be able to work it out pretty quickly. So using the following command:
    sudo nano /etc/xdg/lxsession/LXDE/autostart
    We then changed the autostart file content to:
    @lxpanel --profile LXDE
    @pcmanfm --desktop --profile LXDE
    @xscreensaver -no-splash
    @xset s off
    @xset -dpms
    @xset s noblank
    @chromium --kiosk --incognito http://www.qvoffices.com/someURL
    @unclutter -idle 0
    The first few commands turn off the screen saver and power saving; we then open Chromium in Kiosk Mode (full screen with no menu etc) and pass in the URL to use (I have changed the URL in this example). We found a useful blog post with the Chromium command line switches. Finally we also open an application called Unclutter which auto hides the mouse after 0 seconds, so you will never see a mouse on the sign.
    We also had to edit the following file:
    sudo nano /etc/lightdm/lightdm.conf
    And added the following line under the [SeatDefault] section:
    xserver-command=X -s 0 dpms
    Refreshing the screen
    We decided to try and add a scheduled task that would trigger Chromium to reload the page; at some point in the future we might well change this to using Javascript to update the content, but for now this works fine. First we installed xdotool, which enables you to script keyboard commands:
    sudo apt-get install xdotool
    We used the Refreshing Chromium Browser by Shell Script post as a reference and created the following shell script (which we called refreshing.sh):
    export DISPLAY=":0"
    WID=$(xdotool search --onlyvisible --class chromium|head -1)
    xdotool windowactivate ${WID}
    xdotool key ctrl+F5
    This selects the correct display and then sends a CTRL + F5 to refresh Chromium. You will need to give this file execute permissions:
    chmod a=rwx refreshing.sh
    Now we have the script file set up we just need to schedule it to call this script periodically, which is done by using crontab; to edit this you use the following command:
    crontab -e
    And we added the following:
    */5 * * * * DISPLAY=":.0" /home/pi/scripts/refreshing.sh >/home/pi/cronlog.log 2>&1
    This calls our script every 5 minutes to refresh the display and it logs any errors to the cronlog.log file.
    Summary
    QV Offices now have a richer and more manageable booking system than they did before we started, and a great new sign to boot. How could we make sure that the sign was running smoothly downstairs in a busy office centre? A second post will follow outlining exactly how Vizioz enabled QV Offices to monitor their sign simply and remotely, from the comfort of their desks.

    Read the article

  • Create a Persistent Bootable Ubuntu USB Flash Drive

    - by Trevor Bekolay
    Don’t feel like reinstalling an antivirus program every time you boot up your Ubuntu flash drive? We’ll show you how to create a bootable Ubuntu flash drive that will remember your settings, installed programs, and more! Previously, we showed you how to create a bootable Ubuntu flash drive that would reset to its initial state every time you booted it up. This is great if you’re worried about messing something up, and want to start fresh every time you start tinkering with Ubuntu. However, if you’re using the Ubuntu flash drive to diagnose and solve problems with your PC, you might find that a lot of problems require guess-and-test cycles. It would be great if the settings you change in Ubuntu and the programs you install stay installed the next time you boot it up. Fortunately, Universal USB Installer, a great little program from Pen Drive Linux, can do just that! Note: You will need a USB drive at least 2 GB large. Make sure you back up any files on the flash drive because this process will format the drive, removing any files currently on it. Once Ubuntu has been installed on the flash drive, you can move those files back if there is enough space.
    Put Ubuntu on your flash drive
    Universal-USB-Installer.exe does not need to be installed, so just double click on it to run it wherever you downloaded it. Click Yes if you get a UAC prompt, and you will be greeted with this window. Click I Agree. In the drop-down box on the next screen, select Ubuntu 9.10 Desktop i386. Don’t worry if you normally use 64-bit operating systems – the 32-bit version of Ubuntu 9.10 will still work fine. Some useful tools do not have 64-bit versions, so unless you’re planning on switching to Ubuntu permanently, the 32-bit version will work best. If you don’t have a copy of the Ubuntu 9.10 CD downloaded, then click on the checkbox to Download the ISO. You’ll be prompted to launch a web browser; click Yes. The download should start immediately. When it’s finished, return to the Universal USB Installer and click on Browse to navigate to the ISO file you just downloaded. Click OK and the text field will be populated with the path to the ISO file. Select the drive letter that corresponds to the flash drive that you would like to use from the dropdown box. If you’ve backed up the files on this drive, we recommend checking the box to format the drive. Finally, you have to choose how much space you would like to set aside for the settings and programs that will be stored on the flash drive. Considering that Ubuntu itself only takes up around 700 MB, 1 GB should be plenty, but we’re choosing 2 GB in this example because we have lots of space on this USB drive. Click on the Create button and then make yourself a sandwich – it will take some time to install no matter how fast your PC is. Eventually it will finish. Click Close. Now you have a flash drive that will boot into a fully capable Ubuntu installation, and any changes you make will persist the next time you boot it up!
    Boot into Ubuntu
    If you’re not sure how to set your computer to boot using the USB drive, then check out the How to Boot Into Ubuntu section of our previous article on creating bootable USB drives, or refer to your motherboard’s manual. Once your computer is set to boot using the USB drive, you’ll be greeted with a splash screen with some options. Press Enter to boot into Ubuntu. The first time you do this, it may take some time to boot up. Fortunately, we’ve found that the process speeds up on subsequent boots.
    You’ll be greeted with the Ubuntu desktop. Now, if you change settings like the desktop resolution, or install a program, those changes will be permanently stored on the USB drive! We installed avast! Antivirus, and on the next boot, found that it was still in the Accessories menu where we left it.
    Conclusion
    We think that a bootable Ubuntu USB flash drive is a great tool to have around in case your PC has problems booting otherwise. By having the changes you make persist, you can customize your Ubuntu installation to be the ultimate computer repair toolkit!
    Download Universal USB Installer from Pen Drive Linux

    Read the article

  • Windows Presentation Foundation 4.5 Cookbook Review

    - by Ricardo Peres
    As promised, here’s my review of Windows Presentation Foundation 4.5 Cookbook, that Packt Publishing kindly made available to me. It is an introductory book, targeted at WPF newcomers or users with little experience, following the typical recipes or cookbook style. Like all Packt Publishing books on development, each recipe comes with sample code that is self-sufficient for understanding the concepts it tries to illustrate. It starts on chapter 1 by introducing the most important concepts, the XAML language itself, what can be declared in XAML and how to do it, what are dependency and attached properties as well as markup extensions and events, which should give readers a most required introduction to how WPF works and how to do basic stuff. It moves on to resources on chapter 2, which also makes sense, since it’s such an important concept in WPF. Next, in chapter 3, come the panels used for laying out controls on the screen; all of the out of the box panels are described with typical use cases. Controls come next in chapter 4; the difference between elements and controls is introduced, as well as content controls, headered controls and items controls, and all standard controls are introduced. The book shows how to change the way they look by using templates. The next chapter, 5, talks about top level windows and the WPF application object: how to access startup arguments, how to set the main window, using standard dialogs and there’s even a sample on how to have an irregularly-shaped window. Next comes one of the most important concepts in WPF: data binding, which is the theme for the following chapter, 6. All common scenarios are introduced, the binding modes, directions, triggers, etc. It talks about the INotifyPropertyChanged interface and how to use it for notifying data binding subscribers of changes in data sources. Data templates and selectors are also covered, as are value converters and data triggers. Examples include master-detail and sorting, grouping and filtering collections and binding trees and grids. Lastly, it covers validation rules and error templates. Chapter 7 talks about the current trend in WPF development, the Model View View-Model (MVVM) framework. This is a well known pattern for connecting the user interface to actions, and it is explained competently. A typical implementation is presented which also presents the command pattern used throughout WPF. A complete application using MVVM is presented from start to finish, including typical features such as undo. Style and layout is covered in chapter 8. Why/how to use styles, applying them automatically, using the many types of triggers to change styles automatically, using Expression Blend behaviors and templates are all covered. The next chapter, 9, is about graphics and animations programming. It explains how to create shapes, transform common UI elements, apply special effects and perform simple animations. The following chapter, 10, is about creating custom controls, either by deriving from UserControl or from an existing control or framework element class, applying custom templates for changing the way the control looks. One useful example is a custom layout panel that arranges its children along a circumference. The final chapter, 11, is about multi-threading programming and how one can integrate it with WPF. It covers how to invoke methods and properties on WPF classes from threads other than the main UI thread, using background tasks and timers and even using the new C# 5.0 asynchronous operations.
It’s an interesting book, like I said, mostly for newcomers. It provides a competent introduction to WPF, with examples that cover the most common scenarios and also give directions to more complex ones. I recommend it to everyone wishing to learn WPF.
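Not from the book itself, but to give a flavour of the INotifyPropertyChanged pattern the data binding chapter revolves around, here is a minimal hand-rolled sketch; the class and property names are my own invention, not taken from the book's sample code.

using System.ComponentModel;

// Minimal view model illustrating the INotifyPropertyChanged pattern discussed above.
// PersonViewModel and Name are hypothetical names, not from the book's recipes.
public class PersonViewModel : INotifyPropertyChanged
{
    private string _name;

    public string Name
    {
        get { return _name; }
        set
        {
            if (_name == value) return;
            _name = value;
            OnPropertyChanged("Name"); // tells any bound controls to refresh
        }
    }

    public event PropertyChangedEventHandler PropertyChanged;

    protected void OnPropertyChanged(string propertyName)
    {
        var handler = PropertyChanged;
        if (handler != null)
            handler(this, new PropertyChangedEventArgs(propertyName));
    }
}

A TextBox bound with Text="{Binding Name, Mode=TwoWay}" would then stay in sync with the view model whenever the property changes.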

    Read the article

  • Add Microsoft Core Fonts to Ubuntu

    - by Matthew Guay
    Have you ever needed the standard Microsoft fonts such as Times New Roman on your Ubuntu computer?  Here's how you can easily add the core Microsoft fonts to Ubuntu. Times New Roman, Arial, and other core Microsoft fonts are still some of the most commonly used fonts in documents and websites.  Times New Roman especially is often required for college essays, legal documents, and other critical documents that you may need to write or edit.  Ubuntu includes the Liberation fonts, which provide similar alternatives to Times New Roman, Arial, and Courier New, but these may not be accepted by professors and others when a certain font is required.  But don't worry; it only takes a couple of clicks to add these fonts to Ubuntu for free. Installing the Core Microsoft Fonts Microsoft has released their core fonts, including Times New Roman and Arial, for free, and you can easily download these from the Software Center.  Open your Applications menu, and select Ubuntu Software Center.  In the search box enter the following: ttf-mscorefonts Click Install on the "Installer for Microsoft TrueType core fonts" directly in the search results. Enter your password when requested, and click Authenticate. The fonts will then automatically download and install in a couple of minutes, depending on your internet connection speed. Once the install is finished, you can launch OpenOffice Writer to try out the new fonts.  Here's a preview of all the fonts included in this pack.  And, yes, this does include the infamous Comic Sans and Webdings fonts as well as the all-important Times New Roman. Please Note: By default in Ubuntu, OpenOffice uses Liberation Serif as the default font, but after installing this font pack, the default font will switch to Times New Roman. Adding Other Fonts In addition to the Microsoft core fonts, the Ubuntu Software Center has hundreds of free fonts available.  Click the Fonts link on the front page to explore these, and install them the same way as above. If you've downloaded another font individually, you can also install it easily in Ubuntu.  Just double-click it, and then click Install in the preview window. Conclusion Although you may prefer the fonts that are included with Ubuntu, there are many reasons why having the Microsoft core fonts can be helpful.  Thankfully it's easy in Ubuntu to install them, so you'll never have to worry about not having them when you need to edit an important document. Similar Articles Productive Geek Tips: Enable Smooth fonts on Ubuntu Linux, Embed True Type Fonts in Word and PowerPoint 2007 Documents, New Vista Syntax for Opening Control Panel Items from the Command-line, Stupid Geek Tricks: Enable More Fonts for the Windows Command Prompt, Adding extra Repositories on Ubuntu

    Read the article

  • How to Create Views for All Tables with Oracle SQL Developer

    - by thatjeffsmith
    Got this question over the weekend via a friend and Oracle ACE Director, so I thought I would share the answer here. If you want to quickly generate DDL to create VIEWs for all the tables in your system, the easiest way to do that with SQL Developer is to create a data model. Wait, why would I want to do this? StackOverflow has a few things to say on this subject… So, start with importing a data dictionary. Step One: Open or Create a Model. In SQL Developer, go to View – Data Modeler – Browser. Then in the browser panel, expand your design and create a new Relational Model. Step Two: Import your Data Dictionary. This is a fancy way of saying, 'suck objects out of the database into my model'. This will open a wizard to connect, select your schema(s), objects, etc. Once they're in your model, you're ready to cook with gas. I'm using HR (Human Resources) for this example; you should end up with something that looks like our favorite HR model. Now we're ready to generate the views! Step Three: Auto-generate the Views. Go to Tools – Data Modeler – Table to View Wizard. I don't want all my tables included, and I want to change the naming standard, so decide if you want to change the default generated view names. By default the views will be created as 'V_TABLE_NAME.' If you don't like the 'V_' you can enter your own prefix. You can also reference the object and model name with variables as shown in the screenshot above. I'm going to go with something a little more personal. The views are the little green boxes in the diagram. Can't find your views? They should be grouped together in your diagram. Don't forget to use the Navigator to easily find and navigate to those model diagram objects! Step Four: Generate the DDL. OK, let's use the Generate DDL button on the toolbar. Un-check everything but your views. If you used a prefix, take advantage of that to create a filter; you might have existing views in your model that you don't want to include, right? Once you click 'OK' the DDL will be generated.
-- Generated by Oracle SQL Developer Data Modeler 4.0.0.825
-- at: 2013-11-04 10:26:39 EST
-- site: Oracle Database 11g
-- type: Oracle Database 11g

CREATE OR REPLACE VIEW HR.TJS_BLOG_COUNTRIES ( COUNTRY_ID, COUNTRY_NAME, REGION_ID ) AS
SELECT COUNTRY_ID, COUNTRY_NAME, REGION_ID FROM HR.COUNTRIES;

CREATE OR REPLACE VIEW HR.TJS_BLOG_EMPLOYEES ( EMPLOYEE_ID, FIRST_NAME, LAST_NAME, EMAIL, PHONE_NUMBER, HIRE_DATE, JOB_ID, SALARY, COMMISSION_PCT, MANAGER_ID, DEPARTMENT_ID ) AS
SELECT EMPLOYEE_ID, FIRST_NAME, LAST_NAME, EMAIL, PHONE_NUMBER, HIRE_DATE, JOB_ID, SALARY, COMMISSION_PCT, MANAGER_ID, DEPARTMENT_ID FROM HR.EMPLOYEES;

CREATE OR REPLACE VIEW HR.TJS_BLOG_JOBS ( JOB_ID, JOB_TITLE, MIN_SALARY, MAX_SALARY ) AS
SELECT JOB_ID, JOB_TITLE, MIN_SALARY, MAX_SALARY FROM HR.JOBS;

CREATE OR REPLACE VIEW HR.TJS_BLOG_JOB_HISTORY ( EMPLOYEE_ID, START_DATE, END_DATE, JOB_ID, DEPARTMENT_ID ) AS
SELECT EMPLOYEE_ID, START_DATE, END_DATE, JOB_ID, DEPARTMENT_ID FROM HR.JOB_HISTORY;

CREATE OR REPLACE VIEW HR.TJS_BLOG_LOCATIONS ( LOCATION_ID, STREET_ADDRESS, POSTAL_CODE, CITY, STATE_PROVINCE, COUNTRY_ID ) AS
SELECT LOCATION_ID, STREET_ADDRESS, POSTAL_CODE, CITY, STATE_PROVINCE, COUNTRY_ID FROM HR.LOCATIONS;

CREATE OR REPLACE VIEW HR.TJS_BLOG_REGIONS ( REGION_ID, REGION_NAME ) AS
SELECT REGION_ID, REGION_NAME FROM HR.REGIONS;

-- Oracle SQL Developer Data Modeler Summary Report:
--
-- CREATE TABLE 0
-- CREATE INDEX 0
-- ALTER TABLE 0
-- CREATE VIEW 6
-- CREATE PACKAGE 0
-- CREATE PACKAGE BODY 0
-- CREATE PROCEDURE 0
-- CREATE FUNCTION 0
-- CREATE TRIGGER 0
-- ALTER TRIGGER 0
-- CREATE COLLECTION TYPE 0
-- CREATE STRUCTURED TYPE 0
-- CREATE STRUCTURED TYPE BODY 0
-- CREATE CLUSTER 0
-- CREATE CONTEXT 0
-- CREATE DATABASE 0
-- CREATE DIMENSION 0
-- CREATE DIRECTORY 0
-- CREATE DISK GROUP 0
-- CREATE ROLE 0
-- CREATE ROLLBACK SEGMENT 0
-- CREATE SEQUENCE 0
-- CREATE MATERIALIZED VIEW 0
-- CREATE SYNONYM 0
-- CREATE TABLESPACE 0
-- CREATE USER 0
--
-- DROP TABLESPACE 0
-- DROP DATABASE 0
--
-- REDACTION POLICY 0
--
-- ERRORS 0
-- WARNINGS 0

You can then choose to save this to a file or not. This has a few steps, but as the number of tables in your system increases, so does the amount of time this feature can save you!

    Read the article

  • Install GIMP 2.7.1 on Lucid Lynx using PPA

    - by Vivek
    GIMP lovers are going to be disappointed to hear that GIMP is going away in the next release of the much awaited Ubuntu 10.04. Today we take a look at installing it on Lucid Lynx using a PPA. The reason for getting rid of it, as cited by the GIMP developers, is that GIMP is too professional a piece of software to be included in the regular desktop version of Ubuntu; it also takes up too much space on the disk, and it's too complicated for regular users. If you can't live without it…let's see how to install GIMP 2.7.1 on Lucid Lynx (currently in Alpha). The new version of GIMP supports single window mode, and we will also see how to enable this feature. First we need to add the official GIMP 2.7.1 PPA to the software sources of Ubuntu 10.04, by opening a terminal window and typing the following command: sudo sh -c "echo 'deb http://ppa.launchpad.net/matthaeus123/mrw-gimp-svn/ubuntu lucid main' >> /etc/apt/sources.list" Now that we have added the PPA we need to add the GPG key, so type the following in your terminal window: sudo apt-key adv --recv-keys --keyserver keyserver.ubuntu.com 405A15CB Next up we have to update the software repository… sudo apt-get update All that is left is to install GIMP 2.7.1 by typing in the following… sudo apt-get install gimp Press 'y' (for yes) when prompted to install GIMP. Once GIMP is installed you can start it by going to Applications > Graphics > GNU Image Manipulation Program. You now have your favorite GIMP on your favorite Ubuntu 10.04. As you can see in the image below, GIMP still comes with its default 3 windows, which could clog up your lower panel in Ubuntu 10.04. However, you can now run GIMP in single window mode by going to Windows > Single-Window Mode. That's all! Now you have GIMP running in single window mode without the hassle of managing 3 windows. It's unfortunate that GIMP will not be included, but by following these instructions, you'll be able to enjoy using it in Ubuntu 10.04. Similar Articles Productive Geek Tips: Show the List of Installed Packages on Ubuntu or Debian, How to Install Windows Applications on Linux Using Crossover, Install VMware Tools on Ubuntu Edgy Eft, Install Adobe PDF Reader on Ubuntu Edgy, Install MySQL Server 4.1 on Ubuntu

    Read the article

  • Start/Stop Window Service from ASP.NET page

    - by kaushalparik27
    Last week, I needed to complete a task that I am going to blog about in this entry. The task was: "Create a control-panel-like webpage to control (start/stop) the Windows services which are part of my solution, installed on the computer where the main application is hosted". Here are the important points to accomplish this: [1] You need to add a System.ServiceProcess reference to your application. This namespace holds the ServiceController class used to access Windows services. [2] You need to check the status of a Windows service before you explicitly start or stop it. [3] By default, an IIS application runs under the ASP.NET account, which doesn't have the access rights to Windows services; if you try to access a Windows service without the proper rights, you will get an "access denied" error. So a very important part of the solution is impersonation: you need to impersonate the application, or part of the code, with the credentials of a user who has the proper rights and permissions to access the Windows service. The alternatives are as follows. You can impersonate the whole application by adding an identity tag in web.config as: <identity impersonate="true" userName="" password=""/> This tag goes under the system.web section; "userName" and "password" are the credentials of the user who has rights to access the Windows service. But this would not be a wise solution, because you should not impersonate the whole website just to access a Windows service (which is only a small part of the code). The second alternative is to impersonate only the part of the code where you need to access the Windows service to start or stop it. I opted for this one. But, to be fair, I was really unfamiliar with the impersonation code, so I just googled it and injected the code into my solution in a separate class file named "Impersonate" with the required static methods. In the Impersonate class, impersonateValidUser() is the method that impersonates a section of code and undoImpersonation() is the method that undoes the impersonation. You need to provide the domain name (which is "." if you are working on your home computer), and the username and password of the appropriate user to impersonate. [4] Here it is very important to note that you need to store the access credentials (username and password) which you are going to use for impersonation in some secure, encrypted format. I have used MachineKey encryption to store the encrypted value inside the database. [5] So now the real part is to start or stop a Windows service. You are almost done, because the ServiceController class has simple Start() and Stop() methods, and a parameterized constructor that takes the name of the service as a parameter; the original post shows the code to start and to stop the service, and a rough sketch follows below. Isn't that easy? ServiceController made it easy :) I have attached a working example with this post to start/stop the "SQLBrowser" service, where you need to provide proper credentials that have permission to access the Windows service. Hope it helps.
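The code screenshots from the original post are not reproduced in this excerpt, so here is a rough C# sketch of what the ServiceController calls look like; the helper class name and the 30-second timeout are my own choices, and the impersonation wrapper described above is only indicated by comments.

using System;
using System.ServiceProcess; // requires a reference to System.ServiceProcess

public static class ServiceHelper
{
    // Starts the named Windows service if it is not already running.
    public static void StartService(string serviceName)
    {
        // Impersonate.impersonateValidUser(...) would be called here, as described above.
        using (var controller = new ServiceController(serviceName))
        {
            if (controller.Status == ServiceControllerStatus.Stopped)
            {
                controller.Start();
                controller.WaitForStatus(ServiceControllerStatus.Running, TimeSpan.FromSeconds(30));
            }
        }
        // Impersonate.undoImpersonation() would be called here.
    }

    // Stops the named Windows service if it is currently running and can be stopped.
    public static void StopService(string serviceName)
    {
        using (var controller = new ServiceController(serviceName))
        {
            if (controller.Status == ServiceControllerStatus.Running && controller.CanStop)
            {
                controller.Stop();
                controller.WaitForStatus(ServiceControllerStatus.Stopped, TimeSpan.FromSeconds(30));
            }
        }
    }
}

For example, ServiceHelper.StartService("SQLBrowser") would start the SQL Browser service mentioned in the post, provided the impersonated user has the necessary permissions.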

    Read the article

  • OWB 11gR2 - Find and Search Metadata in Designer

    - by David Allan
    Here are some tools and techniques for finding objects, specifically in the design repository. There are ways of navigating and collating objects that are useful for day to day development and build-time usage - this includes features out of the box and utilities constructed on top. There are a variety of techniques to navigate and find objects in the repository; the first 3 are out of the box, the 4th is an expert utility. Navigating by the tree, grouping by project and module - OK if you are aware of the exact module/folder that objects reside in. The structure panel is a useful way of finding parts of an object, especially when the object is large, rather than using the canvas. In large scale projects it helps to have accelerators (either find or collections below). Advanced find to search by name - 11gR2 included a find capability specifically for large scale projects. There were improvements in both the tree search and the object editors (including highlighting in mapping for example), so you can now do regular expression based search and quickly navigate to objects within a repository. Collections - logically organize your objects into virtual folders by shortcutting the actual objects. This is useful for a range of things since all the OWB services operate on collections too (export/import, validation, deployment). See the post here for new collection functionality in 11gR2. Reports for searching by type, updated on, updated by etc. - useful for activities such as periodic incremental actions (deploy all mappings changed in the past week). The report style view is useful since I can quickly see who changed what and when. You can see all the audit details for objects within each object's property inspector, but it's useful to just get all objects changed today, for example, or all objects changed since my last build, etc. This utility combines both UI extensions via experts and the public views on the repository. In the figure to the right you see the contextual option 'Object Search' which invokes the utility; you can see I have quite a number of modules within my project. Figuring out all the potential objects which have been changed is not simple. The utility is an expert which provides this kind of search capability. It provides a report of the objects in the design repository which satisfy some filter criteria. The types of criteria include: objects updated in the last n days; optionally, the user who updated the objects; and filters by project and by object type (tables/mappings etc.). The search dialog appears with these options; you can multi-select the object types, so for example you can select TABLE and MAPPING. It's also possible to search across projects if need be. If you have multiple users using the repository you can define the OWB user name in the 'Updated by' property to restrict the report to just that user also. Finally there is a search name that will be used for some of the options such as building a collection - this name is used for the collection to be built. In the example I have done, I've just searched my project for all process flows and mappings that users have updated in the last 7 days. The results of the query are returned in a table containing the object names, types, full path and audit details. The columns are sortable; you can sort the results by name, type, path etc.
One of the cool things here is that you can then perform operations on these objects - such as editing them, exporting a single selection or the entire results to MDL, creating a collection from the results (now that you have a saved set of references in the repository, you could do deploy/export etc.), creating a deployment script from the results...or even adding in your own ideas! You can see from this that you can do bulk operations on sets of objects based on search results. So for example selecting the 'Build Collection' option creates a collection with all of the objects from my search; you can subsequently deploy/generate/maintain this collection of objects. Under the hood, the expert is just basic OMB commands from the product and the use of the public views on the design repository. You can see how easy it is to build up macro-like capabilities that will help you do day-to-day as well as build-like tasks on sets of objects.

    Read the article

  • Open Source but not Free Software (or vice versa)

    - by TRiG
    The definition of "Free Software" from the Free Software Foundation: “Free software” is a matter of liberty, not price. To understand the concept, you should think of “free” as in “free speech,” not as in “free beer.” Free software is a matter of the users' freedom to run, copy, distribute, study, change and improve the software. More precisely, it means that the program's users have the four essential freedoms: The freedom to run the program, for any purpose (freedom 0). The freedom to study how the program works, and change it to make it do what you wish (freedom 1). Access to the source code is a precondition for this. The freedom to redistribute copies so you can help your neighbor (freedom 2). The freedom to distribute copies of your modified versions to others (freedom 3). By doing this you can give the whole community a chance to benefit from your changes. Access to the source code is a precondition for this. A program is free software if users have all of these freedoms. Thus, you should be free to redistribute copies, either with or without modifications, either gratis or charging a fee for distribution, to anyone anywhere. Being free to do these things means (among other things) that you do not have to ask or pay for permission to do so. The definition of "Open Source Software" from the Open Source Initiative: Open source doesn't just mean access to the source code. The distribution terms of open-source software must comply with the following criteria: Free Redistribution The license shall not restrict any party from selling or giving away the software as a component of an aggregate software distribution containing programs from several different sources. The license shall not require a royalty or other fee for such sale. Source Code The program must include source code, and must allow distribution in source code as well as compiled form. Where some form of a product is not distributed with source code, there must be a well-publicized means of obtaining the source code for no more than a reasonable reproduction cost preferably, downloading via the Internet without charge. The source code must be the preferred form in which a programmer would modify the program. Deliberately obfuscated source code is not allowed. Intermediate forms such as the output of a preprocessor or translator are not allowed. Derived Works The license must allow modifications and derived works, and must allow them to be distributed under the same terms as the license of the original software. Integrity of The Author's Source Code The license may restrict source-code from being distributed in modified form only if the license allows the distribution of "patch files" with the source code for the purpose of modifying the program at build time. The license must explicitly permit distribution of software built from modified source code. The license may require derived works to carry a different name or version number from the original software. No Discrimination Against Persons or Groups The license must not discriminate against any person or group of persons. No Discrimination Against Fields of Endeavor The license must not restrict anyone from making use of the program in a specific field of endeavor. For example, it may not restrict the program from being used in a business, or from being used for genetic research. Distribution of License The rights attached to the program must apply to all to whom the program is redistributed without the need for execution of an additional license by those parties. 
License Must Not Be Specific to a Product The rights attached to the program must not depend on the program's being part of a particular software distribution. If the program is extracted from that distribution and used or distributed within the terms of the program's license, all parties to whom the program is redistributed should have the same rights as those that are granted in conjunction with the original software distribution. License Must Not Restrict Other Software The license must not place restrictions on other software that is distributed along with the licensed software. For example, the license must not insist that all other programs distributed on the same medium must be open-source software. License Must Be Technology-Neutral No provision of the license may be predicated on any individual technology or style of interface. These definitions, although they derive from very different ideologies, are broadly compatible, and most Free Software is also Open Source Software and vice versa. I believe, however, that it is possible for this not to be the case: It is possible for software to be Open Source without being Free, or to be Free without being Open Source. Questions Is my belief correct? Is it possible for software to fall into one camp and not the other? Does any such software actually exist? Please give examples. Clarification I've already accepted an answer now, but I seem to have confused a lot of people, so perhaps a clarification is in order. I was not asking about the difference between copyleft (or "viral", though I don't like that term) and non-copyleft ("permissive") licenses. Nor was I asking about your personal idiosyncratic definitions of "Free" and "Open". I was asking about "Free Software as defined by the FSF" and "Open Source Software as defined by the OSI". Are the two always the same? Is it possible to be one without being the other? And the answer, it seems, is that it's impossible to be Free without being Open, but possible to be Open without being Free. Thank you everyone who actually answered the question.

    Read the article

  • They may block off Howard Street—but Oracle OpenWorld is a two-way street.

    - by Oracle Accelerate for Midsize Companies
    by Jim Lein, Sr. Director, Oracle Accelerate for Midsize Companies "Engineered to Inform and Inspire" – that's the theme of Oracle OpenWorld 2012. In early October, tens of thousands of attendees will descend on the streets of San Francisco because they share one thing in common: the desire to learn more about Oracle. You might think that's the way we, Oracle employees, look at this event – as just another opportunity for attendees to learn about what we do. But it's really a two way street. Every year I'm amazed by how informed and inspired I am by our customers and their companies. Midsize companies buy Oracle to grow. As part of the Oracle Accelerate for Midsize Companies team I get to talk with our partners and business leaders at growing companies almost every day, usually via phone. Oracle OpenWorld presents the perfect opportunity to meet some of them in person, in an informal setting, and in one of the most beautiful cities in the world. The stories our customers tell me about their businesses provide vivid examples of how they have overcome the challenges of managing increasingly complex global operations and growing during uncertain economic conditions. It's no secret that my favorite session at Oracle OpenWorld (besides Larry Ellison's keynotes and the Customer Appreciation Event, of course) is the Oracle Accelerate Customer Panel. This year we're featuring executives from three companies who deployed Oracle ERP rapidly to support their company's growth: Chris Powell, VP and Corporate Controller of Beats by Dr. Dre, a California based designer and manufacturer of premium headphones (sorry, no free samples); Iñaki Zuazo, CIO of Industrias Juno, a building materials provider based in Spain; and Kamran Moosa, Project Coordinator for Spartan Engineering, a provider of engineering and construction support services for an LPG storage project in Texas. That's a pretty diverse lineup, and it will be interesting to hear the perspectives of both IT and financial project stakeholders. The session, "Oracle Accelerate Customer Case Studies: Rapid Deployment of Oracle Applications", is at 3:30 pm on Wednesday, October 3, in the Concert room at the Palace Hotel. Oracle loves our hometown of San Francisco and it's a great place to host Oracle OpenWorld. It's now San Francisco's largest conference and the city closes off Howard Street to better accommodate the attendees. Some Bay Area commuters may be inconvenienced for a few days by this closure but the conference brings about $100 million into the local economy. Now that's a two-way street.
More Oracle Accelerate at Oracle OpenWorld "Faster, Better, Cheaper Application Deployment with Oracle Business Accelerators", Monday, October 1st, 10:45 a.m., Moscone West Room 3016 "Oracle Accelerate and Oracle Business Accelerators for Midsize Companies" (partners only), Wednesday, October 3, 10:15 a.m., Marriott – Golden Gate B Visit the Oracle Accelerate and Oracle Business Accelerator Kiosk in the Moscone West Exhibit Grounds Download the Focus On Oracle Accelerate for Midsize Companies Focus document

    Read the article

  • Challenges and Opportunities to Drive Change in the Healthcare System Explored at America’s Health Insurance Plans Exchange Conference and Institute 2013

    - by elaine blog
    The program theme at the June America’s Health Insurance Plans (AHIP) Exchange Conference and AHIP’s Institute 2013 was Transforming Our Health Care System: Navigating and Succeeding in the New Marketplace.  Topics included care delivery transformation, innovation for a new healthcare eco system, Health Insurance Exchanges, the nexus of consumerism, retail and healthcare, driving value through improved operations and leveraging technology, data and innovation to transform care. Oracle participated as a sponsor of both conferences, signaling the significant investment and activity Oracle continues to make in helping health plans, providers and government agencies become more efficient and more relevant in the healthcare market place. AHIP is a national trade association representing the health insurance industry. AHIP’s members provide health and supplemental benefits to more than 200 million Americans through employer-sponsored coverage, the individual insurance market and public programs such as Medicare and Medicaid.   AHIP advocates for public policies that expand access to affordable health care. Health plans are focusing on the Health Insurance Exchanges and the opportunities they offer to provide better access and higher quality healthcare.  With the opportunities come operational challenges to implementation and innovative technology solutions to consider.   At the Exchange Conference, Oracle hosted a breakfast symposium on “Strategies for Success:  Driving Business Transformation in the Growing Health Insurance Exchange Market”. With Health Insurance Exchanges as catalysts for change, attendees learned about how to achieve integration within an Exchange and deploy new business strategies to support health reform initiatives. Discussion covered steps and processes to successfully establish and implement enrollment systems, quote to card activities, program pricing, claims billing, automated claims processing and new customer service tools. Piyush Pushkar, COO of Benefitalign, an Oracle partner that provides solutions to adopt innovative business models for retail, HIX, consumer-centric health plan and benefits administration, spoke on the state of the Exchanges in the U.S. and the activities health plans are engaged in to support individuals entering the healthcare system, including sales automation, member enrollment automation/portals and integration strategies with the Exchanges. The Oracle and Benefitalign partnership allows seamless integration between a health plan enrollment solution with the HIX individual market and allows for the health plan to customize and characterize the offerings available to the HIX that may or may not be available through other channels.  This approach can benefit the health plan through separation of interests, but also because some state-run HIXs require such separation. Janice W. Young, Program Director, Payer IT Strategies, IDC Health Insights, reviewed a survey of health plans on their investment priorities for this last year as well as this year.  She also identified the 2013-2015 strategies of go/get to market with front end and compliance investments; leveraging existing business processes and internal technologies; and establishing best practices.  Of key interest to the audience was a reform era payer solutions platform overview mapping technologies to support the business operations. 
David Bonham of the Oracle Health Insurance organization moderated the panel and spoke on Oracle’s presence in healthcare and products for payers to help them drive efficiencies and gain a competitive advantage in an ever changing market. Oracle serves healthcare stakeholders with applications such as billing, rating and underwriting, analytics, CRM, enrollment, and products for processing of health insurance claims including pricing and benefits administration, as well as payment of providers through alternative, non-fee for service reimbursement methods. Oracle in Healthcare….Did you know? More than 80 healthcare payers run Oracle applications. More than 300 leading healthcare providers run Oracle applications. 10 out of the top 12 fortune Global 500 healthcare organizations run Oracle applications. For more information on Oracle solutions for healthcare payers, please visit oracle.com/insurance or these individual solution pages: Oracle Health Insurance Components Oracle Insurance Insbridge Rating and Underwriting Oracle Insurance Revenue Management and Billing Oracle Documaker Oracle Healthcare Oracle CRM Related Resources Webcast On Demand: Strategies for Success: Driving Business Transformation in the Growing Health Insurance Exchange Market Strategy Brief: Executing on the Individual Mandate: Opportunities and Challenges for Healthcare Payers White Paper: White paper: Navigating Alternative Provider Reimbursement Models of the Future Strategy Brief: Enterprise Rating Agility Improves Payer Response to Healthcare Reform Podcast: Technology Implications of Healthcare Reform Don’t forget to keep up with us year-round: Facebook: www.facebook.com/oracleinsurance Twitter: www.twitter.com/oracleinsurance YouTube: www.youtube.com/oracleinsurance

    Read the article

  • Building InstallShield based Installers using Team Build 2010

    - by jehan
    For the last few weeks, I have been working on application packaging using all the widely used tools like InstallShield, WISE, WiX and Visual Studio Installer. So, I thought it would be good to post about how to build the installers developed using these tools with Team Build 2010. This post will focus on how to build InstallShield-generated packages using Team Build 2010. For the release of VS2010, Microsoft partnered with Flexera, the makers of InstallShield, to create InstallShield Limited Edition, especially for the customers of Visual Studio. Microsoft first planned to release WiX (Windows Installer XML) with VS2010, but later dropped WiX from VS2010 for reasons best known to them and partnered with InstallShield for the Limited Edition. It disappointed a lot of people, because InstallShield Limited Edition provides only a few of InstallShield's features, it may not be feasible to build complex installer packages with it, and it also requires a license, whereas WiX is open source with no license costs and has proved efficient in building the most complex packages. Only the last three features are available in InstallShield Limited Edition from the total features offered by InstallShield, as shown in the list below:
    Standalone Build System: Maintain a clean build machine by using only the part of InstallShield that compiles the installations.
    InstallShield Best Practices Validation Suite: Avoid common installation issues.
    Try and Die Functionality: Create a fully functional trial version of your product.
    InstallShield Repackager: Create Windows Installer setups from any legacy installation.
    Multilingual Support: Present installation text in up to 35 languages.
    Microsoft App-V™ Support: Deploy your applications as App-V virtual packages that run without conflict.
    Industry-Standard InstallScript: Achieve maximum flexibility in your installations.
    Dialog Editor: Modify the layout of existing end-user dialogs, create new custom dialogs, and more.
    Patch Creation: Build updates and patches for your products.
    Setup Prerequisite Editor: Easily control prerequisite restart behavior and source locations.
    String Editor View: Control the localizable text strings displayed at run time with this spreadsheet-like table.
    Text File Changes View: Configure search-and-replace actions for content in text files to be modified at run time.
    Virtual Machine Detection: Block your installations from running on virtual machines.
    Unicode Support: Improve multi-language installation development.
    Support for 64-Bit COM Extraction: Extract COM data from a 64-bit COM server.
    Windows Installer Installation Chaining: Add MSI packages to your main installation and chain them together.
    XML Support: Save time by quickly testing XML configuration changes to installation projects.
    Billboard Support for Custom Branding: Display Adobe Flash billboards and other graphic files during the install process.
    SaaS Support (IIS 7 and SSL Technologies): Easily deploy Windows-based Web applications.
    Project Assistant: Jumpstart a project by using a simplified set of views.
    Support for Digital Signatures: Save time by digitally signing all your files at build time.
    Easily Run Custom Actions: Schedule a custom action to run at precisely the right moment in your installation.
    Installation Prerequisites: Check for and install prerequisites before your installation is executed.
To create an InstallShield project in Visual Studio and build it using Team Build 2010, first you have to add the InstallShield project template to your solution. If you want to use InstallShield Limited Edition you can add it from File > New > Project > Other Project Types > Setup and Deployment > InstallShield LE, and if you are using other versions of InstallShield, then you have to add it from File > New > Project > InstallShield Projects. Here, I'm using InstallShield 2011 Premier edition as I already have it installed. I have created a simple package for the TailSpin application which has a feature called Web, a few components and an IIS web site for the TailSpin application. Before I started working on this, I thought I might need to build the package by calling the InvokeProcess activity in the build process template or have to create a new custom activity. But it got built without any changes to the build process template. However, it was failing with the below error message. C:\Program Files (x86)\MSBuild\InstallShield\2011\InstallShield.targets (68): The "InstallShield.Tasks.InstallShield" task could not be loaded from the assembly C:\Program Files (x86)\MSBuild\InstallShield\2010Limited\InstallShield.Tasks.dll. Could not load file or assembly 'file:///C:\Program Files(x86)\MSBuild\InstallShield\2011\InstallShield.Tasks.dll' or one of its dependencies. An attempt was made to load a program with an incorrect format. Confirm that the <UsingTask> declaration is correct, that the assembly and all its dependencies are available, and that the task contains a public class that implements Microsoft.Build.Framework.ITask. This error is due to the 64-bit build machine which I'm using. This issue will be replicable if you are queuing a build on a 64-bit build machine. To avoid this you have to ensure that you have configured the build definition for your InstallShield project to load the InstallShield.Tasks.dll file (which is a 32-bit file); otherwise, you will encounter this build error informing you that the InstallShield.Tasks.dll file could not be loaded. To select the 32-bit version of MSBuild, click the Process tab of your build definition in Team Explorer. Then, under the Advanced node, find the MSBuild Platform setting, and select x86. Note that if you are using a 32-bit build machine, you can select either Auto or x86 for the MSBuild Platform setting. Once I made the above changes, the build succeeded.

    Read the article

  • Is there a low carbon future for the retail industry?

    - by user801960
    Recently Oracle published a report in conjunction with The Future Laboratory and a global panel of experts to highlight the issue of energy use in modern industry and the serious need to reduce carbon emissions radically by 2050.  Emissions must be cut by 80-95% below the levels in 1990 – but what can the retail industry do to keep up with this? There are three key aspects to the retail industry where carbon emissions can be cut:  manufacturing, transport and IT.  Manufacturing Naturally, manufacturing is going to be a big area where businesses across all industries will be forced to make considerable savings in carbon emissions as well as other forms of pollution.  Many retailers of all sizes will use third party factories and will have little control over specific environmental impacts from the factory, but retailers can reduce environmental impact at the factories by managing orders more efficiently – better planning for stock requirements means economies of scale both in terms of finance and the environment. The John Lewis Partnership has made detailed commitments to reducing manufacturing and packaging waste on both its own-brand products and products it sources from third party suppliers. It aims to divert 95 percent of its operational waste from landfill by 2013, which is a huge logistics challenge.  The John Lewis Partnership’s website provides a large amount of information on its responsibilities towards the environment. Transport Similarly to manufacturing, tightening up on logistical planning for stock distribution will make savings on carbon emissions from haulage.  More accurate supply and demand analysis will mean less stock re-allocation after initial distribution, and better warehouse management will mean more efficient stock distribution.  UK grocery retailer Morrisons has introduced double-decked trailers to its haulage fleet and adjusted distribution logistics accordingly to reduce the number of kilometers travelled by the fleet.  Morrisons measures route planning efficiency in terms of cases moved per kilometre and has, over the last two years, increased the number of cases per kilometre by 12.7%.  See Morrisons Corporate Responsibility report for more information. IT IT infrastructure is often initially overlooked by businesses when considering environmental efficiency.  Datacentres and web servers often need to run 24/7 to handle both consumer orders and internal logistics, and this both requires a lot of energy and puts out a lot of heat.  Many businesses are lowering environmental impact by reducing IT system fragmentation in their offices, while an increasing number of businesses are outsourcing their datacenters to cloud-based services.  Using centralised datacenters reduces the power usage at smaller offices, while using cloud based services means the datacenters can be based in a more environmentally friendly location.  For example, Facebook is opening a massive datacentre in Sweden – close to the Arctic Circle – to reduce the need for artificial cooling methods.  In addition, moving to a cloud-based solution makes IT services more easily scaleable, reducing redundant IT systems that would still use energy.  In store, the UK’s Carbon Trust reports that on average, lighting accounts for 25% of a retailer’s electricity costs, and for grocery retailers, up to 50% of their electricity bill comes from refrigeration units.  On a smaller scale, retailers can invest in greener technologies in store and in their offices.  
The report concludes that widely shared objectives of energy security, reduced emissions and continued economic growth are dependent on the development of a smart grid capable of delivering energy efficiency and demand response, as well as integrating renewable and variable sources of energy. The report is available to download from http://emeapressoffice.oracle.com/imagelibrary/detail.aspx?MediaDetailsID=1766. I'd be interested to hear your thoughts on the report.

    Read the article

  • How To Delete Built-in Windows 7 Power Plans (and Why You Probably Shouldn’t)

    - by The Geek
    Do you actually use the Windows 7 power management features? If so, have you ever wanted to just delete one of the built-in power plans? Here's how you can do so, and why you probably should leave it alone. Just in case you're new to the party, we're talking about the power plans that you see when you click on the battery/plug icon in the system tray. The problem is that one of the built-in plans always shows up there, even if you only use custom plans. When you go to "More power options" on the menu there, you'll be taken to a list of them, but you'll be unable to get rid of any of the built-in ones, even if you have your own. You can actually delete the power plans, but it will probably cause problems, so we highly recommend against it. If you still want to proceed, keep reading. Delete Built-in Power Plans in Windows 7 Open up an Administrator mode command prompt by right-clicking on the command prompt and choosing "Run as Administrator", then type in the following command, which will show you a whole list of the plans: powercfg -list Do you see that really long GUID code in the middle of each listing? That's what we're going to need for the next step. To make it easier, we'll provide the codes here, just in case you don't know how to copy to the clipboard from the command prompt. Power Scheme GUID: 381b4222-f694-41f0-9685-ff5bb260df2e (Balanced) Power Scheme GUID: 8c5e7fda-e8bf-4a96-9a85-a6e23a8c635c (High performance) Power Scheme GUID: a1841308-3541-4fab-bc81-f71556f20b4a (Power saver) Before you do any deleting, what you're going to want to do is export the plan to a file using the -export parameter. For some unknown reason, I used the .xml extension when I did this, though the file isn't in XML format. Moving on… here's the syntax of the command: powercfg -export balanced.xml 381b4222-f694-41f0-9685-ff5bb260df2e This will export the Balanced plan to the file balanced.xml. And now, we can delete the plan by using the -delete parameter, and the same GUID: powercfg -delete 381b4222-f694-41f0-9685-ff5bb260df2e If you want to import the plan again, you can use the -import parameter, though it has one weirdness: you have to specify the full path to the file, like this: powercfg -import c:\balanced.xml Using what you've learned, you can export each of the plans to a file, and then delete the ones you want to delete. Why Shouldn't You Do This? Very simple. Stuff will break. On my test machine, for example, I removed all of the built-in plans, and then imported them all back in, but I'm still getting an error any time I try to access the panel to choose what the power buttons do. There are a lot more error messages, but I'm not going to waste your time with all of them. So if you want to delete the plans, do so at your own peril. At least you've been warned!
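The article stops at manual commands, but if you want to back up every plan before experimenting, the same powercfg calls are easy to script. Here is a rough C# sketch of my own (not from the article) that parses the output of powercfg -list and exports each scheme to an .xml file in the current folder; run it from an elevated prompt.

using System;
using System.Diagnostics;
using System.Text.RegularExpressions;

class ExportPowerPlans
{
    static void Main()
    {
        // Run "powercfg -list" and capture its output.
        var psi = new ProcessStartInfo("powercfg", "-list")
        {
            RedirectStandardOutput = true,
            UseShellExecute = false
        };
        string output;
        using (var process = Process.Start(psi))
        {
            output = process.StandardOutput.ReadToEnd();
            process.WaitForExit();
        }

        // Pull out every GUID that follows "Power Scheme GUID:".
        foreach (Match match in Regex.Matches(output, @"Power Scheme GUID:\s*([0-9a-fA-F\-]{36})"))
        {
            string guid = match.Groups[1].Value;
            string file = guid + ".xml"; // same .xml naming the article uses
            Process.Start("powercfg", string.Format("-export {0} {1}", file, guid)).WaitForExit();
            Console.WriteLine("Exported {0} to {1}", guid, file);
        }
    }
}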
Similar Articles Productive Geek Tips: Learning Windows 7: Manage Power Settings, Create a Shortcut or Hotkey to Switch Power Plans, Disable Power Management on Windows 7 or Vista, Change the Windows 7 or Vista Power Buttons to Shut Down/Sleep/Hibernate, Disable Windows Vista's Built-in CD/DVD Burning Features

    Read the article

  • Silverlight Cream for February 22, 2011 -- #1050

    - by Dave Campbell
    In this Issue: Robby Ingebretsen, Victor Gaudioso, Andrea Boschin(-2-), Rudi Grobler(-2-), Michael Crump, Deborah Kurata, Dennis Delimarsky, Pete Vickers, Yochay Kiriaty, Peter Kuhn, WindowsPhoneGeek, and Jesse Liberty(-2-). Above the Fold: Silverlight: "Silverlight Simple MVVM Printing" Deborah Kurata WP7: "Creating theme friendly UI in WP7 using OpacityMask" WindowsPhoneGeek Tools: "KAXAML v1.8" Robby Ingebretsen Shoutouts: Peter Foot posted Silverlight for Windows Phone Toolkit–Feb 2011 Rudi Grobler posts his top requested features for WP7, Silverlight, and WCF: vNext ... see you in Seattle, Rudi! From SilverlightCream.com: KAXAML v1.8 Robby Ingebretsen just posted KAXML v1.8 that now supports .NET 4.0, WPF, and Silferlight4 ... go grab it. Learn how to use Blend to create a Data Store, Add Properties to it, etc... Victor Gaudioso has 3 new Silverlight and/or Expression Blend video tutorials up... first is this one on Creating a Data store, adding properties to it, oh... read the title :), Next up is: Send async messages across UserControls or even applications, followed by the latest: Create a Sketchflow Animation using the Sketchflow Animation Panel A base class for threaded Application Services Andrea Boschin continues his IApplicationServices series with this one on a base class he created to develop Application Services that runs a thread. Windows Phone 7 - Part #6: Taking advantage of the phone Andrea Boschin also has part 6 of his series at SilverlightShow on WP7... this one is covering a bunch of items... Capabilities, Launchers/Choosers, and gestures... plus the source for a fun game. {homebrew} Skype for WP7 Rudi Grobler posted about the availability of (some features of) Skype for WP7 being available. The XDA guys have working contacts and the ability to chat going, plus they're looking for poeple to join in... Follow Rudi's link, and let them know you're up for it! Simple menu for your WP7 application Rudi Grobler has another post up about a very simple menu control for WP7 that he produced that is also very easy to use. Attaching a Command to the WP7 Application Bar Michael Crump shows how to bind the application bar to a Relay Command with the use of MVVMLight in 7 Easy Steps :) Silverlight Simple MVVM Printing Deborah Kurata continues her MVVM series with this one on printing what your user sees on the page... but doing so within the MVVM pattern. Enhancing the general Zune experience on Windows Phone 7 with Zune web API Dennis Delimarsky apparently likes the Zune as much as I do, and has ratted out tons of information about the Zune API for use in WP7 apps... and lots of code... Validating input forms in Windows Phone 7 Pete Vickers takes a great detailed spin through validation on the WP7... the rules have changed, but Pete explains with some code examples. Windows Phone Shake Gestures Library Yochay Kiriaty discusses Shake Gestures for the WP7 device and then describes the "Windows Phone Shake Gesture Library" that detects shake gestures in 3D space... and after a great description has the link for downloading. What difference does a sprite sheet make? Peter Kuhn is writing a series at SilverlightShow on XNA for Silverlight Devs that I've highlighted. An outshoot of that is this discussion of the use of sprite sheets and game development. Creating theme friendly UI in WP7 using OpacityMask WindowsPhoneGeek has a new post up today on using Opacity Masks in WP7 to enable using one set of icons for either the dark or light theme.. 
too cool, you'll wanna check this out! Linq to XML Jesse Liberty continues with Linq with regard to WP7 with this post on Linq to XML... and why XML? crap... I was just saving/loading XML today! :) Lambda–Not as weird as it sounds Jesse Liberty then jumps into Lambda expressions... maybe it's a chance for me to learn WTF the lambdas really do that I use all the time! Stay in the 'Light! Twitter SilverlightNews | Twitter WynApse | WynApse.com | Tagged Posts | SilverlightCream Join me @ SilverlightCream | Phoenix Silverlight User Group Technorati Tags: Silverlight    Silverlight 3    Silverlight 4    Windows Phone MIX10

    Read the article

  • BPM Suite 11gR1 Released

    - by Manoj Das
    This morning (April 27th, 2010), Oracle BPM Suite 11gR1 became available for download from OTN and eDelivery. If you have been following our plans in this area, you know that this is the release unifying BEA ALBPM product, which became Oracle BPM10gR3, with the Oracle stack. Some of the highlights of this release are: BPMN 2.0 modeling and simulation Web based Process Composer for BPMN and Rules authoring Zero-code environment with full access to Oracle SOA Suite’s rich set of application and other adapters Process Spaces – Out-of-box integration with Web Center Suite Process Analytics – Native process cubes as well as integration with Oracle BAM You can learn more about this release from the documentation. Notes about downloading and installing Please note that Oracle BPM Suite 11gR1 is delivered and installed as part of SOA 11.1.1.3.0, which is a sparse release (only incremental patch). To install: Download and install SOA 11.1.1.2.0, which is a full release (you can find the bits at the above location) Download and install SOA 11.1.1.3.0 During configure step (using the Fusion Middleware configuration wizard), use the Oracle Business Process Management template supplied with the SOA Suite11g (11.1.1.3.0) If you plan to use Process Spaces, also install Web Center 11.1.1.3.0, which also is delivered as a sparse release and needs to be installed on top of Web Center 11.1.1.2.0 Some early feedback We have been receiving very encouraging feedback on this release. Some quotes from partners are included below: “I just attended a preview workshop on BPM Studio, Oracle's BPMN 2.0 tool, held by Clemens Utschig Utschig from Oracle HQ. The usability and ease to get started are impressive. In the business view analysts can intuitively start modeling, then developers refine in their own, more technical view. The BPM Studio sets itself apart from pure play BPMN 2.0 tools by being seamlessly integrated inside a holistic SOA / BPM toolset: BPMN models are placed in SCA-Composites in SOA Suite 11g. This allows to abstract away the complexities of SOA integration aspects from business process aspects. For UIs in BPMN tasks, you have the richness of ADF 11g based Frontends. With BPM Studio we architects have a new modeling and development IDE that gives us interesting design challenges to grasp and elaborate, since many things BPMN 2.0 are different from good ol' BPEL. For example, for simple transformations, you don't use BPEL "assign" any more, but add the transformation directly to the service call. There is much less XPath involved. And, there is no translation from model to BPEL code anymore, so the awkward process model to BPEL roundtrip, which never really worked as well as it looked on marketing slides, is obsolete: With BPMN 2.0 "the model is the code". Now, these are great times to start the journey into BPM! Some tips: Start Projects smoothly, with initial processes being not overly complex and not using the more esoteric areas of BPMN, to manage the learning path and to stay successful with each iteration. Verify non functional requirements by conducting performance and load tests early. As mentioned above, separate all technical integration logic into SOA Suite or Oracle Service Bus. And - share your experience!” Hajo Normann, SOA Architect - Oracle ACE Director - Co-Leader DOAG SIG SOA   "Reuse of components across the Oracle 11G Fusion Middleware stack, like for instance a Database Adapter, is essential. It improves stability and predictability of the solution. 
BPM just is one of the components plugging into the stack and reuses all other components." Mr. Leon Smiers, Oracle Solution Architect, Capgemini   “I had the opportunity to follow a hands-on workshop held by Clemens for Oracle partners and I was really impressed of the overall offering of BPM11g. BPM11g allows the execution of BPMN 2.0 processes, without having to transform/translate them first to BPEL in order to be executable. The fact that BPMN uses the same underlying service infrastructure of SOA Suite 11g has a lot of benefits for us already familiar with SOA Suite 11g. BPMN is just another SCA component within a SCA composite and can (re)use all the existing components like Rules, Human Workflow, Adapters and Mediator. I also like the fact that BPMN runs on the same service engine as BPEL. By that all known best practices for making a BPEL  process reliable are valid for BPMN processes as well. Last but not least, BPMN is integrated into the superior end-to-end tracing of SOA Suite 11g. With BPM11g, Oracle offers a very competitive product which will have a big effect on the IT market. Clemens and Jürgen: Thanks for the great workshop! I’m really looking forward to my first project using Oracle BPM11g!” Guido Schmutz, Technology Manager / Oracle ACE Director for Fusion Middleware and SOA, Company:  Trivadis Some earlier feedback were summarized in this post.

    Read the article

  • 11gR2 11.2.0.3 Database Certified with E-Business Suite

    - by Elke Phelps (Oracle Development)
    The 11gR2 11.2.0.2 Database was certified with E-Business Suite (EBS) 11i and EBS 12 almost one year ago today.  I’m pleased to announce that 11.2.0.3, the second patchset for the 11gR2 Database is now certified. Be sure to review the interoperability notes for R11i and R12 for the most up-to-date requirements for deployment. This certification announcement is important as you plan upgrades to the technology stack for your environment. For additional upgrade direction, please refer to the recently published EBS upgrade recommendations article. Database support implications may also be reviewed in the database patching and support article. Oracle E-Business Suite Release 11i Prerequisites 11.5.10.2 + ATG PF.H RUP 6 and higher Certified Platforms Linux x86 (Oracle Linux 4, 5) Linux x86 (RHEL 4, 5) Linux x86 (SLES 10) Linux x86-64 (Oracle Linux 4, 5) -- Database-tier only Linux x86-64 (RHEL 4, 5) -- Database-tier only Linux x86-64 (SLES 10--Database-tier only) Oracle Solaris on SPARC (64-bit) (10) Oracle Solaris on x86-64 (64-bit) (10) -- Database-tier only Pending Platform Certifications Microsoft Windows Server (32-bit) Microsoft Windows Server (64-bit) HP-UX PA-RISC (64-bit) HP-UX Itanium IBM: Linux on System z  IBM AIX on Power Systems Oracle E-Business Suite Release 12 Prerequisites Oracle E-Business Suite Release 12.0.4 or later; or,Oracle E-Business Suite Release 12.1.1 or later Certified Platforms Linux x86 (Oracle Linux 4, 5) Linux x86 (RHEL 4, 5) Linux x86 (SLES 10) Linux x86-64 (Oracle Linux 4, 5) Linux x86-64 (RHEL 4, 5) Linux x86-64 (SLES 10) Oracle Solaris on SPARC (64-bit) (10) Oracle Solaris on x86-64 (64-bit) (10)  -- Database-tier only Pending Platform Certifications Microsoft Windows Server (32-bit) Microsoft Windows Server (64-bit) HP-UX PA-RISC (64-bit) IBM: Linux on System z IBM AIX on Power Systems HP-UX Itanium Database Feature and Option CertificationsThe following 11gR2 11.2.0.2 database options and features are supported for use: Advanced Compression Active Data Guard Advanced Security Option (ASO) / Advanced Networking Option (ANO) Database Vault  Database Partitioning Data Guard Redo Apply with Physical Standby Databases Native PL/SQL compilation Oracle Label Security (OLS) Real Application Clusters (RAC) Real Application Testing SecureFiles Virtual Private Database (VPD) Certification of the following database options and features is still underway: Transparent Data Encryption (TDE) Column Encryption 11gR2 version 11.2.0.3 Transparent Data Encryption (TDE) Tablespace Encryption 11gR2 version 11.2.0.3 About the pending certifications Oracle's Revenue Recognition rules prohibit us from discussing certification and release dates, but you're welcome to monitor or subscribe to this blog for updates, which I'll post as soon as soon as they're available.     
EBS 11i References Interoperability Notes - Oracle E-Business Suite Release 11i with Oracle Database 11g Release 2 (11.2.0) (Note 881505.1) Using Oracle 11g Release 2 Real Application Clusters with Oracle E-Business Suite Release 11i (Note 823586.1) Encrypting Oracle E-Business Suite Release 11i Network Traffic using Advanced Security Option and Advanced Networking Option (Note 391248.1) Using Transparent Data Encryption with Oracle E-Business Suite Release 11i (Note 403294.1) Integrating Oracle E-Business Suite Release 11i with Oracle Database Vault 11gR2 (Note 1091086.1) Using Oracle E-Business Suite with a Split Configuration Database Tier on Oracle 11gR2 Version 11.2.0.1.0 (Note 946413.1) Export/Import Process for Oracle E-Business Suite Release 11i Database Instances Using Oracle Database 11g Release 1 or 2 (Note 557738.1) Database Initialization Parameters for Oracle Applications Release 11i (Note 216205.1) EBS 12 References Interoperability Notes - Oracle E-Business Suite Release 12 with Oracle Database 11g Release 2 (11.2.0) (Note 1058763.1) Database Initialization Parameters for Oracle Applications Release 12 (Note 396009.1) Using Oracle 11g Release 2 Real Application Clusters with Oracle E-Business Suite Release 12 (Note 823587.1) Using Transparent Data Encryption with Oracle E-Business Suite Release 12 (Note 732764.1) Integrating Oracle E-Business Suite Release 12 with Oracle Database Vault 11gR2 (Note 1091083.1) Export/Import Process for Oracle E-Business Suite Release 12 Database Instances Using Oracle Database 11g Release 1 or 11g Release 2 (Note 741818.1) Enabling SSL in Oracle Applications Release 12 (Note 376700.1) Related Articles 11gR2 Database Certified with E-Business Suite 11i 11gR2 Database Certified with E-Business Suite 12 11gR2 11.2.0.2 Database Certified with E-Business Suite 12 Can E-Business Users Apply Database Patch Set Updates? On Apps Tier Patching and Support: A Primer for E-Business Suite Users On Database Patching and Support: A Primer for E-Business Suite Users Quarterly E-Business Suite Upgrade Recommendations; October 2011 Edition The preceding is intended to outline our general product direction. It is intended for information purposes only, and may not be incorporated into any contract. It is not a commitment to deliver any material, code, or functionality, and should not be relied upon in making purchasing decisions. The development, release, and timing of any features or functionality described for Oracle's products remains at the sole discretion of Oracle.

    Read the article

  • Change the Default Font Size in Word

    - by Matthew Guay
    Are you frustrated by always having to change the font size before you create a document in Word?  Here’s how you can end that frustration and set your favorite default font size once and for all! Microsoft changed the default font to 11 point Calibri in Word 2007 after years of 12 point Times New Roman being the default.  Although it can be easily overlooked, there are ways in Word to change the default settings to anything you want.  Whether you want to change your default to 12 point Calibri or to 48 point Comic Sans…here’s how to change your default font settings in Word 2007 and 2010. Changing Default Fonts in Word To change the default font settings, click the small box with an arrow in the bottom right corner of the Font section of the Home tab in the Ribbon.  In the Font dialog box, choose the default font settings you want.  Notice in the Font box it says “+Body”; this means that the font will be chosen by the document style you choose, and you are only selecting the default font style and size.  So, if your style uses Calibri, then your font will be Calibri at the size and style you chose.  If you’d prefer to choose a specific font to be the default, just select one from the drop-down box and this selection will override the font selection in your document style. Here we left all the default settings, except we selected 12 point font in the Latin text box (this is your standard body text; users of Asian languages such as Chinese may see a box for Asian languages).  When you’ve made your selections, click the “Set as Default” button in the bottom left corner of the dialog. You will be asked to confirm that you want these settings to be made default.  In Word 2010, you will be given the option to set these settings for this document only or for all documents.  Click the bullet beside “All documents based on the Normal.dotm template?”, and then click OK. In Word 2007, simply click OK to save these settings as default. Now, whenever you open Word or create a new document, your default font settings should be set exactly to what you want.  Simply repeat these steps if you want to change your default font settings again. Editing your default template file Another way to change your default font settings is to edit your Normal.dotm file.  This file is what Word uses to create new documents; it basically copies the formatting in this document each time you make a new document. To edit your Normal.dotm file, enter the following in the address bar in Explorer or in the Run prompt: %appdata%\Microsoft\Templates This will open your Office Templates folder.  Right-click on the Normal.dotm file, and click Open to edit it.  Note: Do not double-click on the file, as this will only create a new document based on Normal.dotm and any edits you make will not be saved in this file.  Now, change any font settings as you normally would.  Remember: anything you change or enter in this document will appear in any new document you create using Word. If you want to revert to your default settings, simply delete your Normal.dotm file.  Word will recreate it with the standard default settings the next time you open Word. Please Note: Changing your default font size will not change the font size in existing documents, so these will still show the settings you used when those documents were created.  Also, some add-ins can affect your Normal.dotm template.  If Word does not seem to remember your font settings, try disabling Word add-ins to see if this helps. 
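    If you ever want to script the reset step described above (deleting Normal.dotm so Word rebuilds it with stock defaults), here is a minimal sketch. It assumes you are on Windows with Word closed; the class name is purely illustrative.

        import java.io.IOException;
        import java.nio.file.Files;
        import java.nio.file.Path;
        import java.nio.file.Paths;

        public class ResetNormalTemplate {
            public static void main(String[] args) throws IOException {
                // %appdata%\Microsoft\Templates\Normal.dotm is the template Word copies for every new document.
                String appData = System.getenv("APPDATA");
                if (appData == null) {
                    System.err.println("APPDATA is not set; this only applies on Windows.");
                    return;
                }
                Path template = Paths.get(appData, "Microsoft", "Templates", "Normal.dotm");
                // Deleting the file is safe: Word recreates it with the standard defaults on next launch.
                if (Files.deleteIfExists(template)) {
                    System.out.println("Deleted " + template + "; Word will rebuild it with default settings.");
                } else {
                    System.out.println("No Normal.dotm found at " + template + " (defaults already in effect).");
                }
            }
        }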
Conclusion Sometimes it’s the small things that can be the most frustrating.  Getting your default font settings the way you want is a great way to take away a frustration and make you more productive. And here’s a quick question: Do you prefer the new default 11 point Calibri, or do you prefer 12 point Times New Roman or some other combination?  Sound off in the comments, and let the world know your favorite font settings.

    Read the article

  • HDFC Bank's Journey to Oracle Private Database Cloud

    - by Nilesh Agrawal
    One of the key takeaways from a recent post by Sushil Kumar is the importance of a business initiative that drives the transformational journey from legacy IT to an enterprise private cloud. It is a journey that leads to an agile, self-service and efficient infrastructure with reduced complexity, and enables IT to deliver services more closely aligned with business requirements. Nilanjay Bhattacharjee, AVP, IT of HDFC Bank presented a real-world case study based on one such initiative in his Oracle OpenWorld session titled "HDFC BANK Journey into Oracle Database Cloud with EM 12c DBaaS". The case study highlighted in this session is from HDFC Bank’s Lending Business Segment, which comprises roughly 50% of the Bank’s top line. The Bank’s Lending Business is always under pressure to launch “New Schemes” to compete and stay ahead in this segment, and IT has to keep up with this challenging business requirement. Lending-related applications are highly dynamic and go through constant changes, and every change in each related application, however minor, must be thoroughly UAT tested and certified before production rollout. This leads to constant pressure on IT for rapid provisioning of UAT databases on an ongoing basis to enable faster time to market. Nilanjay joined Sushil Kumar, VP, Product Strategy, Oracle, during the Enterprise Manager general session at Oracle OpenWorld 2012. Let's watch what Nilanjay had to say about their recent database cloud deployment. “Agility” in launching new business schemes became the key business driver for private database cloud adoption in the Bank. Nilanjay spent an hour discussing it during his session. Let's look at why the Database-as-a-Service (DBaaS) model was the need of the hour in this case - Average 3 days to provision a UAT database for the Loan Management Application Siloed UAT environment with average 30% utilization Compliance requirements consume UAT testing resources DBA activities lead to $$ paid to the SI for provisioning databases manually Overhead in managing configuration drift between production and test environments Rollout impact/delay on new business initiatives The private database cloud implementation progressed through 4 fundamental phases - Standardization, Consolidation, Automation and Optimization of the UAT infrastructure. Project scoping was carried out, and end users and stakeholders were engaged early on, right from the planning phase and through all phases of implementation. The Standardization and Consolidation phases involved multiple iterations of planning to first standardize on infrastructure, database versions, patch levels, configuration, IT processes, etc., followed by a database-level consolidation project onto the Exadata platform. It was also decided to cover the existing AIX UAT database landscape, and the EM 12c DBaaS solution, being platform agnostic, supported this model well. The Automation and Optimization phases provided the necessary agility, self-service and efficiency, and this was made possible via EM 12c DBaaS. The EM 12c DBaaS Self-Service/SSA Portal was set up with the required zones, quotas, service templates and charge plans defined. Two zones were implemented - an Exadata zone, primarily for UAT and benchmark testing of databases running on the Exadata platform, and a second zone for the AIX setup to cover the other databases running on AIX. Metering and Chargeback/Showback capabilities provided business and IT the framework for cloud optimization as well as visibility into cloud usage. 
More details on the UAT cloud implementation, related building blocks and the EM 12c DBaaS solution are covered in Nilanjay's OpenWorld session here. Some of the key benefits achieved from the UAT cloud initiative are - New business initiatives can be easily launched due to rapid provisioning of UAT databases [ ~3 hours ] Drastically cut down $$ paid to the SI for DBA activities due to Self-Service Effective usage of infrastructure leading to better ROI Empowering consumers to provision databases using Self-Service Control on project schedule with DB end date aligned to the project plan submitted during provisioning Databases provisioned through Self-Service are monitored in EM and auto-configured for alerts and KPIs Regulatory database requirements do not impact existing projects in the queue The table below shows a typical list of activities and tasks involved when an end user requests a UAT database. The EM 12c DBaaS solution helped reduce UAT database provisioning time from roughly 3 days down to 3 hours, and this timing also includes provisioning time for databases with production-scale data (ranging from 250 GB to 2 TB) - And it's not just about time to provision; this initiative has enabled an agile, efficient and transparent UAT environment where end users are empowered with real control of cloud resources, and IT's role has shifted to enabler of strategic services instead of administrator of all user requests. The strong collaboration between IT and the business community, right from planning through implementation to go-live, has played the key role in achieving this common goal of an enterprise private cloud. Finally, a real cloud is here, and this cloud is accompanied by rain (business benefits) as well! For more information, please go to the Oracle Enterprise Manager web page or follow us at: Twitter | Facebook | YouTube | Linkedin | Newsletter

    Read the article

  • How to export ECC key and Cert from NSS DB and import into JKS keystore and Oracle Wallet

    - by mv
    How to export ECC key and Cert from NSS DB and import into JKS keystore and Oracle Wallet In this blog I will write about how to extract a cert and key from an NSS DB and import them into a JKS keystore, and then import that JKS keystore into an Oracle Wallet. 1. Set Java Home I pointed it to JRE 1.6.0_22 $ export JAVA_HOME=/usr/java/jre1.6.0_22/ 2. Create a self-signed ECC cert in NSS DB I created an NSS DB with a self-signed ECC certificate. If you already have an NSS DB with an ECC cert (and key), skip this step. $ export NSS_DIR=/export/home/nss/ $ $NSS_DIR/certutil -N -d . $ $NSS_DIR/certutil -S -x -s "CN=test,C=US" -t "C,C,C" -n ecc-cert -k ec -q nistp192 -d . 3. Export ECC cert and key using pk12util Use the NSS tool pk12util to export this cert and key into a p12 file $ $NSS_DIR/pk12util -o ecc-cert.p12 -n ecc-cert -d . -W password 4. Use keytool to create a JKS keystore and import this p12 file 4.1 Import the p12 file created above into a JKS keystore $JAVA_HOME/bin/keytool -importkeystore -srckeystore ecc-cert.p12 -srcstoretype PKCS12 -deststoretype JKS -destkeystore ecc.jks -srcstorepass password -deststorepass password -srcalias ecc-cert -destalias ecc-cert -srckeypass password -destkeypass password -v But if an error as shown below is encountered, keytool error: java.security.UnrecoverableKeyException: Get Key failed: EC KeyFactory not available java.security.UnrecoverableKeyException: Get Key failed: EC KeyFactory not available at com.sun.net.ssl.internal.pkcs12.PKCS12KeyStore.engineGetKey(Unknown Source) at java.security.KeyStoreSpi.engineGetEntry(Unknown Source) at java.security.KeyStore.getEntry(Unknown Source) at sun.security.tools.KeyTool.recoverEntry(Unknown Source) at sun.security.tools.KeyTool.doImportKeyStoreSingle(Unknown Source) at sun.security.tools.KeyTool.doImportKeyStore(Unknown Source) at sun.security.tools.KeyTool.doCommands(Unknown Source) at sun.security.tools.KeyTool.run(Unknown Source) at sun.security.tools.KeyTool.main(Unknown Source) Caused by: java.security.NoSuchAlgorithmException: EC KeyFactory not available at java.security.KeyFactory.<init>(Unknown Source) at java.security.KeyFactory.getInstance(Unknown Source) ... 9 more 4.2 Create a new PKCS11 provider If you didn't get an error as shown above, skip this step. Since we already have NSS libraries built with ECC, we can create a new PKCS11 provider. Create ${java.home}/jre/lib/security/nss.cfg as follows: name = NSS nssLibraryDirectory = ${nsslibdir} nssDbMode = noDb attributes = compatibility where nsslibdir should contain NSS libs with ECC support. Add the following line to ${java.home}/jre/lib/security/java.security: security.provider.9=sun.security.pkcs11.SunPKCS11 ${java.home}/lib/security/nss.cfg Note that for those using Oracle iPlanet Web Server or Oracle Traffic Director, NSS libs built with ECC are in <ws_install_dir>/lib or <otd_install_dir>/lib. 4.3 Now keytool should work Now you can try the same keytool command and see that it succeeds: $JAVA_HOME/bin/keytool -importkeystore -srckeystore ecc-cert.p12 -srcstoretype PKCS12 -deststoretype JKS -destkeystore ecc.jks -srcstorepass password -deststorepass password -srcalias ecc-cert -destalias ecc-cert -srckeypass password -destkeypass password -v [Storing ecc.jks] 5. Convert the JKS keystore into an Oracle Wallet You can export this cert and key from the JKS keystore and import them into an Oracle Wallet if you need to, using the orapki tool as shown below. 
Make sure that the orapki you use supports ECC. Also, for ECC you MUST use the "-jsafe" option. $ orapki wallet create -pwd password -wallet . -jsafe $ orapki wallet jks_to_pkcs12 -wallet . -pwd password -keystore ecc.jks -jkspwd password -jsafe $ orapki wallet display -wallet . -pwd welcome1 -jsafe Oracle PKI Tool : Version 11.1.2.0.0 Copyright (c) 2004, 2012, Oracle and/or its affiliates. All rights reserved. Requested Certificates: User Certificates: Subject: CN=test,C=US Trusted Certificates: Subject: OU=Class 3 Public Primary Certification Authority,O=VeriSign\, Inc.,C=US Subject: CN=GTE CyberTrust Global Root,OU=GTE CyberTrust Solutions\, Inc.,O=GTE Corporation,C=US Subject: OU=Class 2 Public Primary Certification Authority,O=VeriSign\, Inc.,C=US Subject: OU=Class 1 Public Primary Certification Authority,O=VeriSign\, Inc.,C=US Subject: CN=test,C=US As you can see, our ECC cert is in the wallet. You can follow the same steps for RSA certs as well. 6. References http://icedtea.classpath.org/bugzilla/show_bug.cgi?id=356 http://old.nabble.com/-PATCH-FOR-REVIEW-%3A-Support-PKCS11-cryptography-via-NSS-p25282932.html http://www.mozilla.org/projects/security/pki/nss/tools/pk12util.html
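As a quick sanity check of the ecc.jks keystore produced in step 4.3, you can load it with the standard java.security.KeyStore API and confirm that the EC private key and certificate are readable before converting to a wallet. This is only a sketch: the alias and passwords mirror the keytool commands above, and on older JREs reading the EC key may require the same PKCS11/NSS provider setup from step 4.2.

        import java.io.FileInputStream;
        import java.security.Key;
        import java.security.KeyStore;
        import java.security.cert.X509Certificate;

        public class VerifyEccJks {
            public static void main(String[] args) throws Exception {
                char[] password = "password".toCharArray();   // store and key password used with keytool above
                KeyStore ks = KeyStore.getInstance("JKS");
                try (FileInputStream in = new FileInputStream("ecc.jks")) {
                    ks.load(in, password);
                }
                // Pull back the private key and certificate stored under the ecc-cert alias.
                Key key = ks.getKey("ecc-cert", password);
                X509Certificate cert = (X509Certificate) ks.getCertificate("ecc-cert");
                System.out.println("Key algorithm : " + key.getAlgorithm());            // expect "EC"
                System.out.println("Cert subject  : " + cert.getSubjectX500Principal());
                System.out.println("Sig algorithm : " + cert.getSigAlgName());
            }
        }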

    Read the article
