Search Results

Search found 14680 results on 588 pages for 'mobile world congress'.


  • SQL SERVER – Introduction to Adaptive ETL Tool – How adaptive is your ETL?

    - by pinaldave
    I am often reminded of the fact that BI/data warehousing infrastructure is very brittle and not very adaptive to change. There are lots of basic use cases where data needs to be frequently loaded into SQL Server or another database. What I have found is that as long as the sources and targets stay the same, SSIS or any other ETL tool for that matter does a pretty good job handling these types of scenarios. But what happens when you are faced with more challenging scenarios, where the data formats and possibly the data types of the source data change from customer to customer?

    Let’s examine a real-life situation where a health management company receives claims data from its customers in various source formats. Even though this company supplied all of its customers with the same claims forms, it ended up building one-off ETL applications to process the claims for each customer. Why, you ask? Well, it turned out that the claims data from the various regional hospitals they needed to process had slightly different data formats, e.g. “integer” versus “string” data field definitions. Moreover, the data itself was represented with slight nuances, e.g. “0001124” or “1124” or “0000001124” to represent a particular account number, which forced them, as I alluded to above, to build new ETL processes for each customer in order to overcome the inconsistencies in the various claims forms. As a result, they experienced a lot of redundancy in these ETL processes and quickly recognized that their system would become more difficult to maintain over time.

    So imagine for a moment that you could use an ETL tool that helps you abstract the data formats so that your ETL transformation process becomes more reusable. Imagine that one claims form represents a data item as a string – acc_no(varchar) – while a second claims form represents the same data item as an integer – account_no(integer). This would break your traditional ETL process, as the data mappings are hard-wired. But in a world of abstracted definitions, all you need to do is create parallel data mappings to a common data representation used within your ETL application; that is, map both external data fields to a common attribute whose name and type remain unchanged within the application:

    acc_no(varchar) is mapped to account_number(integer) [expressor Studio, first claim form schema mapping]
    account_no(integer) is also mapped to account_number(integer) [expressor Studio, second claim form schema mapping]

    All the data processing logic that follows manipulates the data as an integer value named account_number. These are the kinds of problems that the expressor data integration solution automates for you. I’ve been following them since last year and encourage you to check them out by downloading their free expressor Studio ETL software.

    Reference: Pinal Dave (http://blog.SQLAuthority.com)

    Filed under: Business Intelligence, Pinal Dave, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology Tagged: ETL, SSIS
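    To make the abstraction concrete, here is a minimal, hypothetical Python sketch of the normalization step described above: both the varchar acc_no field and the integer account_no field collapse into a single integer account_number attribute. The function name and sample records are illustrative only, not part of expressor Studio:

        # Hypothetical sketch: collapse differently typed and differently padded
        # account number fields into one canonical integer attribute.
        def normalize_account_number(raw):
            if raw is None:
                return None
            text = str(raw).strip()          # handles both varchar and integer sources
            if not text.isdigit():
                raise ValueError("unexpected account number format: %r" % (raw,))
            return int(text)                 # "0001124", "1124", 1124 -> 1124

        claims = [{"acc_no": "0001124"}, {"account_no": 1124}]
        for claim in claims:
            raw = claim.get("acc_no", claim.get("account_no"))
            print(normalize_account_number(raw))   # prints 1124 both times

    Once every source field is mapped through a common representation like this, the downstream transformation logic never has to change when a new claim form arrives.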

    Read the article

  • Diving into OpenStack Network Architecture - Part 1

    - by Ronen Kofman
    Before we begin: OpenStack networking has very powerful capabilities, but at the same time it is quite complicated. In this blog series we will review an existing OpenStack setup using the Oracle OpenStack Tech Preview and explain the different network components through use cases and examples. The goal is to show how the different pieces come together and provide a bigger-picture view of the network architecture in OpenStack. This can be very helpful to users making their first steps in OpenStack or anyone who wishes to understand how networking works in this environment. We will go through the basics first and build up the examples as we go.

    According to the recent Icehouse user survey and the one before it, Neutron with the Open vSwitch plug-in is the most widely used network setup both in production and in POCs (in terms of number of customers), and so in this blog series we will analyze this specific OpenStack networking setup. As we know, there are many options for setting up OpenStack networking, and while Neutron + Open vSwitch is the most popular setup there is no claim that it is either the best or the most efficient option. Neutron + Open vSwitch is an example, one which provides a good starting point for anyone interested in understanding OpenStack networking. Even if you are using a different kind of network setup, such as a different Neutron plug-in or no Neutron at all, this will still be a good starting point for understanding the network architecture in OpenStack.

    The setup we are using for the examples is the one used in the Oracle OpenStack Tech Preview. Installing it is simple and it would be helpful to have it as a reference. In this setup we use eth2 on all servers for the VM network; all VM traffic will flow through this interface. The Oracle OpenStack Tech Preview uses VLANs for L2 isolation to provide tenant and network isolation. The following diagram shows how we have configured our deployment:

    This first post is a bit long and will focus on some basic concepts in OpenStack networking. The components we will be discussing are Open vSwitch, network namespaces, Linux bridges and veth pairs. Note that this is not meant to be a comprehensive review of these components; it is meant to describe each component only as much as needed to understand the OpenStack network architecture. All the components described here can be further explored using other resources.
    Open vSwitch (OVS)

    In the Oracle OpenStack Tech Preview, OVS is used to connect virtual machines to the physical port (in our case eth2) as shown in the deployment diagram. OVS contains bridges and ports; the OVS bridges are different from the Linux bridge (controlled by the brctl command), which is also used in this setup. To get started, let’s view the OVS structure using the following command:

    # ovs-vsctl show
    7ec51567-ab42-49e8-906d-b854309c9edf
        Bridge br-int
            Port br-int
                Interface br-int
                    type: internal
            Port "int-br-eth2"
                Interface "int-br-eth2"
        Bridge "br-eth2"
            Port "br-eth2"
                Interface "br-eth2"
                    type: internal
            Port "eth2"
                Interface "eth2"
            Port "phy-br-eth2"
                Interface "phy-br-eth2"
        ovs_version: "1.11.0"

    We see a standard post-deployment OVS on a compute node with two bridges and several ports hanging off each of them. The example above is a compute node without any VMs; we can see that the physical port eth2 is connected to a bridge called “br-eth2”. We also see two ports, "int-br-eth2" and "phy-br-eth2", which are actually a veth pair and form a virtual wire between the two bridges (veth pairs are discussed later in this post). When a virtual machine is created, a port is created on the br-int bridge, and this port is eventually connected to the virtual machine (we will discuss the exact connectivity later in the series). Here is how OVS looks after a VM was launched:

    # ovs-vsctl show
    efd98c87-dc62-422d-8f73-a68c2a14e73d
        Bridge br-int
            Port "int-br-eth2"
                Interface "int-br-eth2"
            Port br-int
                Interface br-int
                    type: internal
            Port "qvocb64ea96-9f"
                tag: 1
                Interface "qvocb64ea96-9f"
        Bridge "br-eth2"
            Port "phy-br-eth2"
                Interface "phy-br-eth2"
            Port "br-eth2"
                Interface "br-eth2"
                    type: internal
            Port "eth2"
                Interface "eth2"
        ovs_version: "1.11.0"

    Bridge "br-int" now has a new port, "qvocb64ea96-9f", which connects to the VM and is tagged with VLAN 1. Every VM that is launched will add a port on the “br-int” bridge for every network interface the VM has. Another useful OVS command is dump-flows, for example:

    # ovs-ofctl dump-flows br-int
    NXST_FLOW reply (xid=0x4):
     cookie=0x0, duration=735.544s, table=0, n_packets=70, n_bytes=9976, idle_age=17, priority=3,in_port=1,dl_vlan=1000 actions=mod_vlan_vid:1,NORMAL
     cookie=0x0, duration=76679.786s, table=0, n_packets=0, n_bytes=0, idle_age=65534, hard_age=65534, priority=2,in_port=1 actions=drop
     cookie=0x0, duration=76681.36s, table=0, n_packets=68, n_bytes=7950, idle_age=17, hard_age=65534, priority=1 actions=NORMAL

    As we see, the port which is connected to the VM uses VLAN tag 1, while the port on the VM network (eth2) uses tag 1000. OVS modifies the VLAN tag as packets flow from the VM to the physical interface. In OpenStack, the Open vSwitch agent takes care of programming the flows in Open vSwitch, so users do not have to deal with this at all. If you wish to learn more about how to program Open vSwitch, see the documentation describing the ovs-ofctl command at http://openvswitch.org.

    Network Namespaces (netns)

    Network namespaces are a very cool Linux feature that can be used for many purposes and are heavily used in OpenStack networking. Network namespaces are isolated containers which can hold a network configuration that is not seen from outside the namespace.
    A network namespace can be used to encapsulate specific network functionality or provide a network service in isolation, as well as simply help to organize a complicated network setup. The Oracle OpenStack Tech Preview uses the latest Unbreakable Enterprise Kernel R3 (UEK3), which provides complete support for netns. Let's see how namespaces work through a couple of examples. To control network namespaces we use the ip netns command.

    Defining a new namespace:

    # ip netns add my-ns
    # ip netns list
    my-ns

    As mentioned, the namespace is an isolated container; we can perform all the normal actions in the namespace context using the exec command, for example running the ifconfig command:

    # ip netns exec my-ns ifconfig -a
    lo        Link encap:Local Loopback
              LOOPBACK  MTU:16436 Metric:1
              RX packets:0 errors:0 dropped:0 overruns:0 frame:0
              TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
              collisions:0 txqueuelen:0
              RX bytes:0 (0.0 b)  TX bytes:0 (0.0 b)

    We can run every command in the namespace context. This is especially useful for debugging with the tcpdump command; we can ping or ssh or define iptables rules, all within the namespace.

    Connecting the namespace to the outside world: there are various ways to connect into a namespace and between namespaces; we will focus on how this is done in OpenStack. OpenStack uses a combination of Open vSwitch and network namespaces: OVS defines the interfaces, and then we can add those interfaces to a namespace. So first let's add a bridge to OVS:

    # ovs-vsctl add-br my-bridge

    Now let's add a port on the OVS bridge and make it internal:

    # ovs-vsctl add-port my-bridge my-port
    # ovs-vsctl set Interface my-port type=internal

    And let's connect it into the namespace:

    # ip link set my-port netns my-ns

    Looking inside the namespace:

    # ip netns exec my-ns ifconfig -a
    lo        Link encap:Local Loopback
              LOOPBACK  MTU:65536 Metric:1
              RX packets:0 errors:0 dropped:0 overruns:0 frame:0
              TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
              collisions:0 txqueuelen:0
              RX bytes:0 (0.0 b)  TX bytes:0 (0.0 b)
    my-port   Link encap:Ethernet  HWaddr 22:04:45:E2:85:21
              BROADCAST  MTU:1500 Metric:1
              RX packets:0 errors:0 dropped:0 overruns:0 frame:0
              TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
              collisions:0 txqueuelen:0
              RX bytes:0 (0.0 b)  TX bytes:0 (0.0 b)

    Now we can add more ports to the OVS bridge and connect them to other namespaces or other devices like physical interfaces. Neutron uses network namespaces to implement network services such as DHCP, routing, gateway, firewall, load balancing and more. In the next post we will go into this in further detail. A small scripted version of the namespace commands above follows.
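    If you prefer to script this kind of namespace setup, the same ip netns commands can be driven from Python. This is a minimal sketch, assuming root privileges and the iproute2 tools on the PATH; the run helper is ours, not an OpenStack API:

        # Minimal sketch: drive the "ip netns" commands shown above from Python.
        import subprocess

        def run(*args):
            subprocess.run(args, check=True)    # raise if any command fails

        run("ip", "netns", "add", "my-ns")                     # create the namespace
        run("ip", "netns", "exec", "my-ns", "ifconfig", "-a")  # inspect from inside
        run("ip", "netns", "delete", "my-ns")                  # clean up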
    Linux Bridges and veth Pairs

    A Linux bridge is used to connect the port from OVS to the VM. Every port goes from the OVS bridge to a Linux bridge and from there to the VM. The reason for using regular Linux bridges is security group enforcement: security groups are implemented using iptables, and iptables rules can only be applied to Linux bridges, not to OVS bridges. Veth pairs are used extensively throughout the network setup in OpenStack and are also a good tool for debugging network problems. A veth pair is simply a virtual wire, so veths always come in pairs: typically one side of the pair connects to a bridge, and the other side connects to another bridge or is simply left as a usable interface. In this example we will create some veth pairs, connect them to bridges and test connectivity. This example uses a regular Linux server, not an OpenStack node.

    Creating a veth pair; note that we define names for both ends:

    # ip link add veth0 type veth peer name veth1
    # ifconfig -a
    .
    .
    veth0     Link encap:Ethernet  HWaddr 5E:2C:E6:03:D0:17
              BROADCAST MULTICAST  MTU:1500 Metric:1
              RX packets:0 errors:0 dropped:0 overruns:0 frame:0
              TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
              collisions:0 txqueuelen:1000
              RX bytes:0 (0.0 b)  TX bytes:0 (0.0 b)
    veth1     Link encap:Ethernet  HWaddr E6:B6:E2:6D:42:B8
              BROADCAST MULTICAST  MTU:1500 Metric:1
              RX packets:0 errors:0 dropped:0 overruns:0 frame:0
              TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
              collisions:0 txqueuelen:1000
              RX bytes:0 (0.0 b)  TX bytes:0 (0.0 b)
    .
    .

    To make the example more meaningful, we will create the following setup:

    veth0 => veth1 => br-eth3 => eth3 ======> eth2 on another Linux server

    br-eth3 – a regular Linux bridge which will be connected to veth1 and eth3
    eth3 – a physical interface with no IP on it, connected to a private network
    eth2 – a physical interface on the remote Linux box, connected to the private network and configured with the IP 50.50.50.51

    Once we create the setup, we will ping 50.50.50.51 (the remote IP) through veth0 to test that the connection is up:

    # brctl addbr br-eth3
    # brctl addif br-eth3 eth3
    # brctl addif br-eth3 veth1
    # brctl show
    bridge name     bridge id               STP enabled     interfaces
    br-eth3         8000.00505682e7f6       no              eth3
                                                            veth1
    # ifconfig veth0 50.50.50.50
    # ping -I veth0 50.50.50.51
    PING 50.50.50.51 (50.50.50.51) from 50.50.50.50 veth0: 56(84) bytes of data.
    64 bytes from 50.50.50.51: icmp_seq=1 ttl=64 time=0.454 ms
    64 bytes from 50.50.50.51: icmp_seq=2 ttl=64 time=0.298 ms

    When the naming is not as obvious as in the previous example and we don't know which veth interfaces are paired, we can use the ethtool command to figure this out. The ethtool command returns an index we can look up using the ip link command (a scripted version of this lookup follows the summary below), for example:

    # ethtool -S veth1
    NIC statistics:
         peer_ifindex: 12
    # ip link
    .
    .
    12: veth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000

    Summary

    That’s all for now. We quickly reviewed OVS, network namespaces, Linux bridges and veth pairs. These components are heavily used in the OpenStack network architecture we are exploring, and understanding them well will be very useful when reviewing the different use cases. In the next post we will look at how the OpenStack network is laid out, connecting the virtual machines to each other and to the external world. @RonenKofman
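    As promised above, here is a hedged Python sketch that automates the ethtool/peer_ifindex lookup; the helper name is ours, and it simply wraps the two commands shown in the post:

        # Sketch: find the peer of a veth interface by matching ethtool's
        # peer_ifindex against the interface index printed by "ip link".
        import re
        import subprocess

        def veth_peer(iface):
            stats = subprocess.run(["ethtool", "-S", iface],
                                   capture_output=True, text=True, check=True).stdout
            match = re.search(r"peer_ifindex:\s*(\d+)", stats)
            if match is None:
                return None                  # not a veth, or no stats support
            links = subprocess.run(["ip", "link"],
                                   capture_output=True, text=True, check=True).stdout
            # "ip link" lines look like "12: veth0: <BROADCAST,...>"
            peer = re.search(r"^%s:\s*([^:@]+)" % match.group(1), links, re.MULTILINE)
            return peer.group(1) if peer else None

        print(veth_peer("veth1"))            # e.g. veth0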

    Read the article

  • OBIEE 11.1.1 - Introduction to OBIEE 11g Full Sample App

    - by user809526
    Isn't it nice to discover OBIEE 11g around a nice "How To" catalog of features? To observe OBI and Essbase relationships at work? To discover TimesTen? The OBIEE 11g Full Sample App (FSA) is a comprehensive collection of examples designed to demonstrate the latest Oracle BIEE 11g capabilities and design best practices: enhanced visualizations such as geo-spatial maps and interactive dashboards, Action Framework, BI Publisher, Scorecard and Strategy Management, mobile style sheets, semantic layer modeling, multi-source federation, and integration with products such as Essbase, Oracle OLAP, ODM, TimesTen, ODI and more.

    The FSA is intended to be comprehensive, and it is big (see CAVEAT below). The FSA is not an Oracle product; it is a goodwill free deployment of OBIEE/Essbase designed to exemplify OBIEE features, infrastructure and security around the Fusion Middleware components. Its contents and code are distributed free for demonstration purposes only. It is neither maintained nor supported by Oracle as a licensed product. The OBIEE Full Sample App is independent of the default Sample App that comes with the OBIEE product.

    BENEFITS

    The FSA serves as a demonstrator of OBIEE 11g best practices, a tutorial, a "test and scrap" environment, an SR bench (regression, conflicts), a tuning bench, a ready-made quick-start POC seed for projects, a security options environment, and more. The FSA is organized around a catalog of functional features, and it has been deployed over 1000 times, so it should be stable.

    RELEASE

    The Full Sample App (V107) is bound to OBIEE 11.1.1.5 and Essbase 11.1.2.1 (November 2011). The FSA release dates are independent of the product GA date (OBIEE). In early December 2011, a new functional patch (V110) was released. It is easily applied (in less than 15 minutes) on top of OBIEE SampleApp 11.1.1.5 (V107). The patch (V110) includes additional functional examples:
    1. Web Catalog Statistics Application: provides detailed insight into your web catalog content, dormant catalog objects, webcat impact analysis for metadata changes and more
    2. Data Inflation Scripts: a set of simple SQL procedures to quickly inflate SampleApp fact and dimension data to millions of records in a few minutes
    3. Public Content Extensions Framework: a patching framework for public examples and contributions leveraging SampleApp
    4. Additional report examples (including bridge report, external chart integrations) and bug fixes

    DISTRIBUTION as VBox image (November 2011)

    The ready-made VBox image is designed to run on VirtualBox. It can be converted to VMware (see another blog).

    1/ http://www.oracle.com/technetwork/middleware/bi-foundation/obiee-samples-167534.html
    VBox Image Deployment Guide
    Sampleapp_v107_GA.ovf - VBox image key file
    The above http URL provides the user:password for the ftp URLs below.

    2/ ftp://user:[email protected]/static/SampleAppV107/
    12 "7-zip" files: Sampleapp_v107_GA_7_20.7z.001 -> .012
    We recommend the 7-Zip file manager for unzipping (http://www.7-zip.org/). Select the "Unzip here" option; it will create the contents under a directory named "SampleApp_10722". On Windows, it is important to download and save the zip files under the root directory (e.g. C:\ or D:\) because of possible long pathnames.

    3/ ftp://user:[email protected]/static/SampleAppV107/Unzipped_Version/
    4 files: Sampleapp_v107_GA-disk[1234].vmdk

    Important note: check the provided checksums (md5sum). Please do it!
    DISTRIBUTION as installation files for an existing OBI 11.1.1.5 (November 2011)

    http://www.oracle.com/technetwork/middleware/bi-foundation/obiee-samples-167534.html
    Install files Deployment Guide
    SampleApp_10722_1.zip - 198 MB

    CAVEAT

    Many computers have RAM chip problems that often stay silent ... until you manipulate big files. It is strongly advised that you run a memory check program, e.g. MEMTEST in the GRUB boot manager. Running md5sum repeatedly on the very same big file must give consistent results [the same digest every time]; otherwise a hardware memory problem is to be suspected. For VirtualBox, you should most likely enable VT-x (Vanderpool) hardware virtualization in the BIOS. A free disk space of 80 GB is required to perform the VBox image installation safely. A virtual machine with a minimum of 6 to 7 GB of memory fits the needs of combining OBIEE and Essbase execution.
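    To act on the md5sum advice above, here is a small hedged Python sketch that hashes the same archive several times and flags inconsistent digests; the file name is one of the downloads listed above, and any big file works:

        # Hash the same big file a few times; differing digests point at flaky
        # hardware (RAM/disk) rather than a bad download.
        import hashlib

        def md5_of(path, chunk_size=1 << 20):
            digest = hashlib.md5()
            with open(path, "rb") as f:
                for block in iter(lambda: f.read(chunk_size), b""):
                    digest.update(block)
            return digest.hexdigest()

        digests = {md5_of("Sampleapp_v107_GA_7_20.7z.001") for _ in range(3)}
        print("consistent" if len(digests) == 1 else "possible memory problem")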

    Read the article

  • ASP.NET Web API - Screencast series with downloadable sample code - Part 1

    - by Jon Galloway
    There's a lot of great ASP.NET Web API content on the ASP.NET website at http://asp.net/web-api. I mentioned my screencast series in the original announcement post, but we've since added the sample code, so I thought it was worth pointing the series out specifically. This is an introductory screencast series that walks through from File / New Project to some more advanced scenarios like custom validation and authorization. The screencast videos are all short (3-5 minutes) and the sample code for the series is both available for download and browsable online. I did the screencasts, but the samples were written by the ASP.NET Web API team. So - let's watch them together! Grab some popcorn and pay attention, because these are short. After each video, I'll talk about what I thought was important. I'm embedding the videos using HTML5 (MP4) with Silverlight fallback, but if something goes wrong or your browser / device / whatever doesn't support them, I'll include the link to where the videos are more professionally hosted on the ASP.NET site. Note also, if you're following along with the samples, that since Part 1 just looks at the File / New Project step, the screencast part numbers are one ahead of the sample part numbers - so screencast 4 matches with sample code demo 3. Note: I started this as one long post for all 6 parts, but as it grew over 2000 words I figured it'd be better to break it up.

    Part 1: Your First Web API [Video and code on the ASP.NET site]

    This screencast starts with an overview of why you'd want to use ASP.NET Web API:
    - Reach more clients (thinking beyond the browser to mobile clients, other applications, etc.)
    - Scale (who doesn't love the cloud?!)
    - Embrace HTTP (a focus on HTTP on both client and server really simplifies and focuses service interactions)

    Next, I start a new ASP.NET Web API application and show some of the basics of the ApiController. We don't write any new code in this first step, just look at the example controller that's created by File / New Project.

    using System;
    using System.Collections.Generic;
    using System.Linq;
    using System.Net.Http;
    using System.Web.Http;

    namespace NewProject_Mvc4BetaWebApi.Controllers
    {
        public class ValuesController : ApiController
        {
            // GET /api/values
            public IEnumerable<string> Get()
            {
                return new string[] { "value1", "value2" };
            }

            // GET /api/values/5
            public string Get(int id)
            {
                return "value";
            }

            // POST /api/values
            public void Post(string value)
            {
            }

            // PUT /api/values/5
            public void Put(int id, string value)
            {
            }

            // DELETE /api/values/5
            public void Delete(int id)
            {
            }
        }
    }

    Finally, we walk through testing the output of this API controller using browser tools. There are several ways you can test API output, including Fiddler (as described by Scott Hanselman in this post) and the built-in developer tools available in all modern browsers. For simplicity I used the Internet Explorer 9 F12 developer tools, but you're of course welcome to use whatever you'd like. A few important things to note: this class derives from an ApiController base class, not the standard ASP.NET MVC Controller base class. They're similar in places where API and HTML-returning controller uses are similar, and different where API and HTML use differ. A good example of where those things are different is in the routing conventions: in an API controller, there's no need for an "action" to be specified, since the HTTP verbs are the actions.
    We don't need to do anything to map verbs to actions; when a request comes in to /api/values/5 with the DELETE HTTP verb, it will automatically be handled by the Delete method in an ApiController. The comments above the API methods show sample URLs and HTTP verbs, so we can test out the first two GET methods by browsing to the site in IE9, hitting F12 to bring up the tools, and entering /api/values in the URL. That sample action returns a list of values. To get just one value back, we'd browse to /api/values/5. That's it for Part 1. In Part 2 we'll look at getting data (beyond hardcoded strings) and start building out a sample application.
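    If you'd rather script these checks than use browser tools, the same requests can be made from Python with the requests library. A hedged sketch - the host and port are hypothetical and depend on the port Visual Studio assigns to your project:

        # Exercise the sample ValuesController outside the browser.
        # The base URL below is hypothetical; adjust to your project.
        import requests

        base = "http://localhost:12345/api/values"

        print(requests.get(base).json())             # ["value1", "value2"]
        print(requests.get(base + "/5").text)        # "value"
        requests.post(base, data={"value": "new"})   # routed to Post by verb
        requests.delete(base + "/5")                 # routed to Delete(int id)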

    Read the article

  • Podcast Show Notes: The Red Room Interview &ndash; Part 1

    - by Bob Rhubart
    The latest OTN Arch2Arch podcast is Part 1 of a three-part series featuring a discussion of a broad range of SOA issues with three members of the small army of contributors to The Red Room blog, now part of the OJam.biz site, the Australia-New Zealand outpost of the global Oracle community. The panelists for this program are:

    Sean Boiling - Sales Consulting Manager for Oracle Fusion Middleware (LinkedIn | Twitter | Blog)
    Richard Ward - SOA Channel Development Manager at Oracle (LinkedIn | Blog)
    Mervin Chiang - Consulting Principal at Leonardo Consulting (LinkedIn | Twitter | Blog)

    (You can also follow the Red Room itself on Twitter: @OracleRedRoom.)

    The genesis of this interview goes back to 2009 and the original Red Room blog, on which Sean, Richard, Mervin, and other Red Roomers published a 10-part series of posts that, taken together, form a kind of SOA best-practices guide, presented in an irreverent style that is rare in a lot of technical writing. It was on the basis of their expertise and irreverence that I wanted to get a few of the Red Room bloggers on an Arch2Arch podcast. Easier said than done. Trying to schedule a group interview with very busy people on the other side of the world (they’re actually 15 hours in the future, relative to my location) is not a simple process. The conversations about getting some of the Red Room people on the program began in the summer of 2009. The interview finally happened at 5:30 PM EDT on Tuesday, March 30, 2010, which for the panelists, located in Australia, was 8:30 AM on Wednesday, March 31, 2010. I was waiting for dinner, and Sean, Richard, and Mervin were waiting for breakfast. But the call went off without a hitch, and the panelists carried on a great discussion of SOA issues. Listen to Part 1. Many thanks to Gareth Llewellyn for his help in putting this together.

    SOA Best Practices

    Here’s a complete list of the posts in the original 10-part Red Room series:

    SOA is Dead. Long Live SOA by Sean Boiling
    Are you doing SOP’s instead of SOA? by Saul Cunningham
    All The President's SOA by Sean Boiling
    SOA – Pay Now or Pay Dearly by Richard Ward
    SOA where are the skills? by Richard Ward
    Project Management Pitfalls within SOA by Anton Gouws
    Viewing SOA as a project instead of an architecture by Saul Cunningham
    Kiss and Tell by Sean Boiling
    Failure to implement and adhere to SOA Governance by Mervin Chiang
    Ten Out Of Ten by Sean Boiling

    Part 2 of the Red Room interview will be available next week, followed by Part 3, so stay tuned.

    RSS Change in the Wind: Beginning with next week’s program, the OTN Arch2Arch Podcast will be rechristened as the OTN ArchBeat Podcast, to better align with this blog. The transformation will be painless – you won’t feel a thing.

    Tags: otn, oracle, Archbeat, Arch2Arch, soa, service oriented architecture, podcast

    Read the article

  • Now It’s Personal (Although It Should Always Be): Campus Recruitment

    - by user769227
    One of the things that I think is important, and that I want our campus recruitment team here at Oracle to be known for, is outstanding customer service. When I say customer service, I mean that both students and hiring managers should feel they have had a great experience in our campus hiring process. I think one of the keys to providing outstanding customer service is being able to provide, as best as we can, a personalised experience where the students who are interviewing with us feel like individuals in our process and not just part of a ‘campus drive’. In the campus world this can be challenging at times, especially in countries where there is high-volume hiring. It can be tricky to create a personal experience when you are hiring for a large number of open graduate roles at one time.

    I think campus recruitment is one of the areas in the recruitment industry that is just waiting for a change. We have all seen the proliferation of social media in recruitment over the past 4-6 years. Every recruiter has a LinkedIn account or uses Twitter or G+ or FB, etc., and some individuals and organisations do it really well. Even in campus hiring there are great social media initiatives where companies reach out to students and talk to them. However, one thing that has not really changed (and this is a generalisation) is the campus hiring interview process. Do these words inspire enthusiasm in you: “Group Interview, Assessment Centre, On-Campus Drive, Off-Campus Drive, etc.”? I don’t know about you, but to me these words don’t sound very personal or individual to students. They almost conjure up images of a factory production line or those long queues where the person behind the counter says ‘take a number’.

    Campus recruitment has come a long way, don’t get me wrong – companies can share data with and talk to students in so many different ways now that it really has become a much more transparent and open process. There are some times, such as at the IITs in India, where it really is a bit old school, with students running from company to company interviewing on campus over the course of a few days, but I want students talking to Oracle to have as great an experience as possible (the outcome of getting a job or not is separate from the customer experience).

    As students, what are your thoughts? Do you feel like ‘just a number’ when you are interviewing, or are there ways that companies can make the process more personalised? Let us know your thoughts. If you are interviewing with Oracle and have questions, want to talk to us or want to know what it is like working here – email us and we will help where we can. If you can’t reach your local Recruiter in your region, email me at [email protected] and I will put you in touch with the appropriate person.

    Read the article

  • On SQL Developer and TNSNAMES.ORA

    - by thatjeffsmith
    Tnsnames.ora [DOCS] is a configuration file for SQL*Net that describes the network service names for the databases in your organization. Basically, it tells Oracle applications how to find your databases. This post is just a quick overview of how to get SQL Developer to ‘see’ this file and define a connection.

    There’s only a single prerequisite for having SQL Developer set up such that it can use TNSNAMES to connect: you have a tnsnames.ora file somewhere. You don’t need a client, instant or otherwise, on your machine. You just need the file. Now, if you DO have a client or HOME on your machine, SQL Developer will look for those and find the tnsnames file for you. If we can’t find it in the usual places, you can simply tell us where it is via this preference on the Database – Advanced page. Once you’ve done this, assuming you have a file (or 10) in that directory, we’ll read it, parse it, and list the entries in the connection dialog.

    The File(s)

    That’s right, files. Just like SQL*Plus, we’ll read any file that starts with ‘tnsnames’ – that includes files you’ve renamed to .bak or .old. Kris talks about that more here. I have just the one, which is all I need anyway. There we go!

    Defining the Connection

    Just set the connection type to TNS. This is a lot easier than manually defining the connections – especially as they’re likely to change frequently in ‘the real world.’

    No Client or Home Required

    That’s right. You don’t need an Oracle Client or $ORACLE_HOME to have SQL Developer see and read a TNS file. Just so you know I’m not cheating… SQL Developer doesn’t know which client to use and won’t use it even if it DID know… I’m able to define a new connection AND connect with these preferences ON|OFF.
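    For reference, here is the shape of a typical tnsnames.ora entry; the alias, host and service names below are hypothetical, but an entry like this is what SQL Developer parses and lists in the connection dialog:

        # Hypothetical entry; SQL Developer shows the alias ORCLDB in the
        # connection dialog once it can find this file.
        ORCLDB =
          (DESCRIPTION =
            (ADDRESS = (PROTOCOL = TCP)(HOST = dbhost.example.com)(PORT = 1521))
            (CONNECT_DATA =
              (SERVER = DEDICATED)
              (SERVICE_NAME = orcl.example.com)
            )
          )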

    Read the article

  • Oracle Enterprise Manager users present today at Oracle Users Forum

    - by Anand Akela
    Oracle Users Forum starts in a few minutes at Moscone West, Levels 2 & 3. There are more than a hundred Oracle user sessions during the day, and many Oracle Enterprise Manager users are presenting today as well. In addition, we will have a Twitter chat today from 11:30 AM to 12:30 PM with IOUG leaders, Enterprise Manager SIG contributors and many speakers. You can participate in the chat using the hash tag #em12c on Twitter.com or by going to tweetchat.com/room/em12c (needs a Twitter credential to participate). Feel free to join IOUG and Enterprise Manager team members at the User Group Pavilion on the 2nd floor, Moscone West; RSVP by going to http://tweetvite.com/event/IOUG. Don't miss the Oracle OpenWorld welcome keynote by Larry Ellison this evening at 5 PM.

    Here is the complete list of Oracle Enterprise Manager sessions (time | session title | speakers | location) during the Oracle Users Forum:

    8:00AM - 8:45AM | UGF4569 - Oracle RAC Migration with Oracle Automatic Storage Management and Oracle Enterprise Manager 12c | Vinod Emmanuel - Database Engineering, Dell, Inc.; Wendy Chen - Sr. Systems Engineer, Dell, Inc. | Moscone West - 2011
    8:00AM - 8:45AM | UGF10389 - Monitoring Storage Systems for Oracle Enterprise Manager 12c | Anand Ranganathan - Product Manager, NetApp | Moscone West - 2016
    9:00AM - 10:00AM | UGF2571 - Make Oracle Enterprise Manager Sing and Dance with the Command-Line Interface | Ray Smith - Senior Database Administrator, Portland General Electric | Moscone West - 2011
    10:30AM - 11:30AM | UGF2850 - Optimal Support: Oracle Enterprise Manager 12c Cloud Control, My Oracle Support, and More | April Sims - DBA, Southern Utah University | Moscone West - 2011
    12:30PM - 2:00PM | UGF5131 - Migrating from Oracle Enterprise Manager 10g Grid Control to 12c Cloud Control | Leighton Nelson - Database Administrator, Mercy | Moscone West - 2011
    2:15PM - 3:15PM | UGF6511 - Database Performance Tuning: Get the Best out of Oracle Enterprise Manager 12c Cloud Control | Mike Ault - Oracle Guru, Texas Memory Systems Inc.; Tariq Farooq - CEO/Founder, BrainSurface | Moscone West - 2011
    3:30PM - 4:30PM | UGF4556 - Will It Blend? Verifying Capacity in Server and Database Consolidations | Jeremiah Wilton - Database Technology, Blue Gecko / DatAvail | Moscone West - 2018
    3:30PM - 4:30PM | UGF10400 - Oracle Enterprise Manager 12c: Monitoring, Metric Extensions, and Configuration Best Practices | Kellyn Pot'Vin - Sr. Technical Consultant, Enkitec | Moscone West - 2011

    Stay Connected: Twitter | Facebook | YouTube | LinkedIn | Newsletter

    Read the article

  • Easily Add Facebook Chat to Pidgin

    - by Matthew Guay
    Want to keep in touch with your Facebook friends throughout the day? Here we’ll show you how to easily add Facebook chat to the popular multi-protocol chat client Pidgin. Facebook has recently added support for XMPP chat, which means you can easily add it to popular chat clients such as Pidgin. Previously you could only add Facebook chat to Pidgin through a plug-in that didn’t always work correctly. Here we’ll walk you through setting up your Facebook account in Pidgin.

    Getting Started

    First, make sure you have a username for your Facebook account (link below). This is a relatively new feature for Facebook, so if you’ve had your account for a while you may need to choose one. If you already have one, you should see it listed instead. Now, open Pidgin, and click Manage Accounts. Click Add… then select XMPP from the Protocol list. Now, enter your Facebook username without the facebook.com part (e.g. your.facebook.username, not http://www.facebook.com/your.user.name). Then, enter chat.facebook.com for the Domain, and enter your standard Facebook password. You can check the “Remember password” box if you’d like Pidgin to automatically sign in to Facebook chat. Now, click on the Advanced tab, and uncheck the “Require SSL/TLS” box. Also, make sure the Connect port is 5222. Click Add, and your Facebook account is added to Pidgin.

    Now Facebook will show up in your list of accounts, with the username [email protected]. Your Facebook friends will show up directly in your Buddy List, complete with their full name and Facebook profile picture. Any users that are not in a group will show under your standard list, while ones in a Facebook group will be shown in a separate group. You can move which groups your Facebook friends show up in, just like you can with other chat contacts. And no matter if your friend is logged in on the standard Facebook website or through another chat application, it will work the same as always.

    This is a great way to keep in touch with your Facebook friends throughout the day. If you like Facebook chat and already use Pidgin, now you can keep from switching between programs and just chat with all your friends from a central location.

    Links: Download Pidgin | Set your Facebook username

    Read the article

  • MySQL on Windows - Why, Where and How

    - by bertrand.matthelie(at)oracle.com
    Over the years Windows has become a major development and deployment platform for MySQL. As a matter of fact, Windows consistently ranks as the #1 development platform in our surveys, and now also ranks higher than any Linux distribution as a deployment platform among MySQL Community Edition users.

    We've made various technical resources available in our MySQL on Windows Resource Center, including articles, whitepapers and archived webinars. MySQL users are also sharing their experiences and writing how-to articles, and it's great to see former MySQL/Sun/Oracle employees still contributing! Thanks Anders for a recent step-by-step part 1 article on working with MySQL on Windows.

    We also got feedback from customers wishing to get higher-level information about MySQL on Windows, to help them and others in their organizations better understand:

    · Why is the world's most popular open source database so popular on Windows?
    · What are the applications for which one should consider MySQL on Microsoft's platform?
    · How should Windows shops relying on Microsoft databases get going with MySQL?

    Those are the questions we aim to answer in our guide "MySQL on Windows - Why, Where and How", which you can download here.

    Read the article

  • How to Quickly Cut a Clip From a Video File with Avidemux

    - by Trevor Bekolay
    Whether you’re cutting out the boring parts of your vacation video or grabbing a hilarious scene for an animated GIF, Avidemux provides a quick and easy way to cut clips from any video file. It’s overkill to use a full-featured video editing program if you just want to cut a few clips from a video file. Even programs that are designed to be small can have confusing interfaces when dealing with video. We’ve found that a great free program, Avidemux, makes the job of cutting clips extremely simple.

    Note: While the screenshots in this guide are taken from the Windows version, Avidemux runs on all of the major platforms – Windows, Mac OS X and Linux (GTK). Image by Keith Williamson.

    Cutting Clips from a Video File

    Open up Avidemux, and load the video file that you want to work with. If you get a prompt about the index, we recommend clicking Yes to use the safer mode. Find the portion of the video that you’d like to isolate, and get as close as you can to the start of the clip you want to cut. Once you find the start of your clip, look at the “Frame Type” of the current frame. You want it to read I; if it isn’t frame type I, use the single left and right arrow buttons to go forward or backward one frame at a time until you find an appropriate I frame. Once you’ve found the right starting frame, click the button with an A over a red bar. This sets the start of the clip. Advance to where you want your clip to end, and click the button with a B when you’ve found the appropriate frame. This frame can be of any type. You can now save the clip, either by going to File –> Save –> Save Video… or by pressing Ctrl+S. Give the file a name, and Avidemux will prepare your clip. And that’s it! You should now have a movie file that contains only the portion of the original file that you want.

    Download Avidemux free for all platforms

    Read the article

  • New Communications Industry Data Model with "Factory Installed" Predictive Analytics using Oracle Da

    - by charlie.berger
    Oracle Introduces Oracle Communications Data Model to Provide Actionable Insight for Communications Service Providers

    We've integrated pre-installed analytical methodologies with the new Oracle Communications Data Model to deliver automated, simple, yet powerful predictive analytics solutions for customers. Churn, sentiment analysis, identifying customer segments - all things that can be anticipated and hence preconceived and implemented inside an application. Read on for more information!

    TM Forum Management World, Nice, France - 18 May 2010

    News Facts

    To help communications service providers (CSPs) manage and analyze rapidly growing data volumes cost effectively, Oracle today introduced the Oracle Communications Data Model. With the Oracle Communications Data Model, CSPs can achieve rapid time to value by quickly implementing a standards-based enterprise data warehouse that features communications industry-specific reporting, analytics and data mining. The combination of the Oracle Communications Data Model, Oracle Exadata and the Oracle Business Intelligence (BI) Foundation represents the most comprehensive data warehouse and BI solution for the communications industry. Also announced today, Hong Kong Broadband Network enhanced its data warehouse system, going live on Oracle Communications Data Model in three months. The leading provider increased its subscriber base by 37 percent in six months and reduced customer churn to less than one percent.

    Product Details

    Oracle Communications Data Model provides industry-specific schema and embedded analytics that address key areas such as customer management, marketing segmentation, product development and network health. CSPs can efficiently capture and monitor critical data and transform it into actionable information to support development and delivery of next-generation services using:
    - More than 1,300 industry-specific measurements and key performance indicators (KPIs), such as network reliability statistics, provisioning metrics and customer churn propensity.
    - Embedded OLAP cubes for extremely fast dimensional analysis of business information.
    - Embedded data mining models for sophisticated trending and predictive analysis.
    - Support for multiple lines of business, such as cable, mobile, wireline and Internet, which can be easily extended to support future requirements.

    With Oracle Communications Data Model, CSPs can jump-start the implementation of a communications data warehouse in line with communications-industry standards, including the TM Forum Information Framework (SID), formerly known as the Shared Information Model. Oracle Communications Data Model is optimized for any Oracle Database 11g platform, including Oracle Exadata, which can improve call data record query performance by 10x or more.

    Supporting Quotes

    "Oracle Communications Data Model covers a wide range of business areas that are relevant to modern communications service providers and is a comprehensive solution - with its data model and pre-packaged templates including BI dashboards, KPIs, OLAP cubes and mining models. It helps us save a great deal of time in building and implementing a customized data warehouse and enables us to leverage the advanced analytics quickly and more effectively," said Yasuki Hayashi, executive manager, NTT Comware Corporation.

    "Data volumes will only continue to grow as communications service providers expand next-generation networks, deploy new services and adopt new business models.
    They will increasingly need efficient, reliable data warehouses to capture key insights on data such as customer value, network value and churn probability. With the Oracle Communications Data Model, Oracle has demonstrated its commitment to meeting these needs by delivering data warehouse tools designed to fill communications industry-specific needs," said Elisabeth Rainge, program director, Network Software, IDC.

    "The TM Forum Conformance Mark provides reassurance to customers seeking standards-based, and therefore cost-effective and flexible, solutions. TM Forum is extremely pleased to work with Oracle to certify its Oracle Communications Data Model solution. Upon successful completion, this certification will represent the broadest and most complete implementation of the TM Forum Information Framework to date, with more than 130 aggregate business entities," said Keith Willetts, chairman and chief executive officer, TM Forum.

    Supporting Resources

    Oracle Communications | Oracle Communications Data Model Data Sheet | Oracle Communications Data Model Podcast | Oracle Data Warehousing | Oracle Communications on YouTube | Oracle Communications on Delicious | Oracle Communications on Facebook | Oracle Communications on Twitter | Oracle Communications on LinkedIn | Oracle Database on Twitter | The Data Warehouse Insider Blog

    Read the article

  • SQL SERVER – Implementing IF … THEN in SQL SERVER with CASE Statements

    - by Pinal Dave
    Here is a question I received the other day in email: “I have business logic in my .NET code and we use lots of IF … ELSE logic in our code. I want to move the logic to a stored procedure. How do I convert the logic of the IF…ELSE to T-SQL? Please help.” I have received this question a few times before. As data grows, the performance problems grow as well. Here is how you can convert IF…ELSE logic into a CASE statement in SQL Server. Here are a few examples.

    Example 1: If your logic is as follows:

    IF -1 < 1 THEN 'TRUE' ELSE 'FALSE'

    You can just use a CASE statement as follows:

    -- SQL Server 2008 and earlier version solution
    SELECT CASE
    WHEN -1 < 1 THEN 'TRUE'
    ELSE 'FALSE'
    END AS Result
    GO

    -- SQL Server 2012 solution
    SELECT IIF ( -1 < 1, 'TRUE', 'FALSE' ) AS Result;
    GO

    If you are interested in how IIF in SQL Server 2012 works, read the blog post I wrote earlier this year. Well, in our example the condition we used is pretty simple, but in the real world the logic can be very complex. Let us see two different methods of writing a CASE statement when the logic is based on a column of the table.

    Example 2: If your logic is as follows:

    IF BusinessEntityID < 10 THEN FirstName
    ELSE IF BusinessEntityID > 10 THEN PersonType
    FROM Person.Person p

    You can convert it to T-SQL as follows:

    SELECT CASE
    WHEN BusinessEntityID < 10 THEN FirstName
    WHEN BusinessEntityID > 10 THEN PersonType
    END AS Col,
    BusinessEntityID, Title, PersonType
    FROM Person.Person p

    However, if your logic is based on multiple columns and the conditions are complicated, you can follow example 3.

    Example 3: If your logic is as follows:

    IF BusinessEntityID < 10 THEN FirstName
    ELSE IF BusinessEntityID > 10 AND Title IS NOT NULL THEN PersonType
    ELSE IF Title = 'Mr.' THEN 'Mister'
    ELSE 'No Idea'
    FROM Person.Person p

    You can convert it to T-SQL as follows:

    SELECT CASE
    WHEN BusinessEntityID < 10 THEN FirstName
    WHEN BusinessEntityID > 10 AND Title IS NOT NULL THEN PersonType
    WHEN Title = 'Mr.' THEN 'Mister'
    ELSE 'No Idea'
    END AS Col,
    BusinessEntityID, Title, PersonType
    FROM Person.Person p

    I hope this solution is good enough to convert IF…ELSE logic to a CASE statement in SQL Server. Let me know if you need further information about the same.

    Reference: Pinal Dave (http://blog.sqlauthority.com)

    Filed under: PostADay, SQL, SQL Authority, SQL Function, SQL Query, SQL Server, SQL Tips and Tricks, T SQL

    Read the article

  • Silverlight Cream for April 02, 2010 -- #828

    - by Dave Campbell
    In this Issue: Phil Middlemiss, Robert Kozak, Kathleen Dollard, Avi Pilosof, Nokola, Jeff Wilcox, David Anson, Timmy Kokke, Tim Greenfield, and Josh Smith.

    Shoutout: SmartyP has additional info up on his WP7 Pivot app: Preview of My Current Windows Phone 7 Pivot Work

    From SilverlightCream.com:

    A Chrome and Glass Theme - Part I: Phil Middlemiss is starting a tutorial series on building a new theme for Silverlight; in this first one we define some gradients and color resources... good stuff, Phil.

    Intercepting INotifyPropertyChanged: This is Robert Kozak's first post on this blog, but it's a good one about INotifyPropertyChanged and MVVM, and it has a solution in the post with lots of code and discussion.

    How do I Display Data of Complex Bound Criteria in Horizontal Lists in Silverlight?: Kathleen Dollard's latest article in Visual Studio Magazine is in answer to a question about displaying a list of complex bound criteria including data, child data, and photos, and displaying them horizontally one at a time. Very nice-looking result, and all the code.

    Windows Phone: Frame/Page navigation and transitions using the TransitioningContentControl: Avi Pilosof discusses the built-in (boring) navigation on WP7, and then shows using the TransitioningContentControl from the Toolkit to apply transitions to the navigation.

    EasyPainter: Cloud Turbulence and Particle Buzz: Nokola returns with a couple more effects for EasyPainter: Cloud Turbulence and Particle Buzz... check out the example screenshots, then go grab the code.

    Property change notifications for multithreaded Silverlight applications: Jeff Wilcox discusses the need for getting change notifications to always happen on the UI thread in multi-threaded apps... great diagrams to see what's going on.

    Tip: The default value of a DependencyProperty is shared by all instances of the class that registers it: David Anson has a tip up about setting the default value of a DependencyProperty, and the consequences that may have depending upon the type.

    Building a “real” extension for Expression Blend: Timmy Kokke's code is WPF, but the subject is near and dear to us all; Timmy has a real-world Expression Blend extension up... a search for controls in the Objects and Timelines pane... and even if that doesn't interest you, it's the source to a Blend extension!

    XPath support in Silverlight 4 + XPathPad: Tim Greenfield not only talks about XPath in SL4 RC, but he has produced a tool, XPathPad, and provided the source... if you've used XPath, you either are a higher thinker than me (not a big stretch), or you need this :)

    Using a Service Locator to Work with MessageBoxes in an MVVM Application: Josh Smith posted about a question that comes up a lot: showing a message box from a ViewModel object. This might not work for custom message boxes or unit testing; this post covers the unit testing aspect.

    Stay in the 'Light!

    Twitter SilverlightNews | Twitter WynApse | WynApse.com | Tagged Posts | SilverlightCream
    Join me @ SilverlightCream | Phoenix Silverlight User Group

    Technorati Tags: Silverlight, Silverlight 3, Silverlight 4, Windows Phone, MIX10

    Read the article

  • How To Personalize the Windows Command Prompt

    - by Matthew Guay
    Command line interfaces can be downright boring, and always seem to miss out on the fresh coats of paint liberally applied to the rest of Windows. Here’s how to add a splash of color to Command Prompt and make it unique. By default, Windows Command Prompt is white text on a black background. It gets the job done, but maybe you want to add some color to it.

    To get an overview of what we can do with the color command, let’s enter:

    color /?

    So, to get the color you want, enter color followed by the option for the background color and then the font color. For example, let’s make an old-fashioned green-on-black look by entering:

    color 02

    There are a bunch of different combinations you can do, like this black background with red text:

    color 04

    You can’t mess it up too much. The color command won’t let you set both the font and the background to the same color, which would make it unreadable. Also, if you want to get back to the default settings, just enter:

    color

    Now we’re back to plain old black and white.

    Personalize Command Prompt Without Commands

    If you’d prefer to change the color without entering commands, just click on the Command Prompt icon in the top left corner of the window and select Properties. Select the Colors tab, and then choose the color you want for the screen text and background. You can also enter your own RGB color combination if you want. Here we entered the RGB values to get a purple background color like Ubuntu 10.04. Back in the Properties dialog, you can also change your Command Prompt font from the Font tab. Choose any font you want, as long as the one you want is one of the three listed here. Customizations you make via the Properties dialog are saved and will be used any time you open Command Prompt, but any customizations you make with the color command last only for that session.

    Conclusion

    Whether you want to make your command prompt bright enough to cause a sunburn or old-style enough to scare a mainframe operator, with these settings you can make Command Prompt a bit more unique.

    Read the article

  • OWB 11gR2 – Flexible and extensible

    - by David Allan
The Oracle data integration extensibility capabilities are something I love; nothing is more frustrating than a tool or platform that is very constraining. I think extensibility and flexibility are invaluable capabilities in the data integration arena. I liked Uli Bethke's posting on some extensibility capabilities with ODI (see Nesting ODI Substitution Method Calls here); he has some useful guidance on making customizations to existing KMs, and it's nice to learn by example. I thought I'd illustrate the same capabilities with ODI's partner OWB for the OWB community. There is a whole new world of potential. The LKM/IKM/CKM/JKMs are the primary templates supported (plus the Oracle Target code template), so there is a lot of potential for customizing and extending the product in this release. Enough waffle...
Diving in at the deep end from Uli's post: in OWB 11gR2 the table operator has a number of additional properties that let you annotate column usage with ODI-like properties, such as the slowly changing usage, or for your own user-defined purpose as in Uli's post. Below you can see that for the target table SALES_TARGET we use the UD5 property; when the mapping is assigned a code template (knowledge module) modified with Uli's change, we can do custom things such as creating indices. The code template used by the mapping has the additional step, which is basically the code illustrated in Uli's posting used directly; the ODI 10g substitution references are also supported from within OWB's runtime.
Now, to see whether this does what we expect before we execute it, we can check out the generated code, similar to how traditional mapping generation and preview work; you do this by clicking the 'Inspect Code' button on the execution unit's code template assignment. This creates another tab with the prefix 'Code - <mapping name>' where the generated code is put. Scrolling down, we find the last step with the indices being created. Looks good, so we are ready to deploy and execute.
After executing the mapping we can use the 'Audit Information' panel (select the mapping in the designer tree and click View/Audit Information). This gives us a view of the execution where we can drill into the tasks that were executed and inspect both the template and the generated code, along with any potential errors. Reflecting back on earlier versions of OWB, these were the kinds of features that were always highly desirable: getting under the hood of the code generation and tweaking bits and pieces. Fun and powerful stuff!
We can step it up a bit here and explore some further ideas. The example below is a daisy-chained set of execution units where the intermediate table is the target of one unit and the source for another. We want that table to be a global temporary table, so we can tweak the templates. Going back to the copy of SQL Control Append (used for demo purposes), we modify the create-target-table step to make the table a global temporary table with the option ON COMMIT PRESERVE ROWS. You can get a feel for some of the customizations and changes possible, providing great flexibility and extensibility for the data integration tools.

    Read the article

  • A Few Words from Oracle’s Channel Chief

    - by Meghan Fritz-Oracle
As Oracle enters a new fiscal year, I want to take a moment and reflect on my time at Oracle thus far. The technology industry is currently at an inflection point, trying to figure out where growth will come from. When you look at Oracle's portfolio of products, it's a complete stack from applications to disk, offering differentiation in the marketplace. I was initially drawn to Oracle's leadership, strategy, and world-class technology. Since joining the Oracle team in October 2013, I've had the privilege of traveling around the globe visiting our partners and customers, and wanted to share several common themes that came up during these meetings.
Cloud: Many partners are trying to figure out how to build a business around the cloud. Oracle partners can currently resell or refer our cloud services. We saw over 300 percent growth from cloud resale last quarter.
Engineered Systems: Hardware and software integrated together to simplify IT, allowing our joint customers to focus on the innovation they need to compete in a complex marketplace. We're seeing great success in several areas, with more partners saying, "Let's start with Oracle on Oracle."
The Internet of Things: This is the next big opportunity for device manufacturers and ISVs to capture market share in what is projected to be a multi-trillion-dollar opportunity, according to Gartner.
Competition: We've got a tremendous middleware platform and a tremendous database install base. We're not just a database company; we are a complete provider.
So looking ahead, what are my priorities for fiscal 2015? Oracle PartnerNetwork has some very exciting plans on the horizon. There is a lot more leadership to share and announcements to unfold, especially at this year's Global Partner Kickoff, taking place on June 25 and 26 depending on your region and time zone. I, along with several other Oracle executives, will be shedding light on Oracle's strategy for the upcoming year, the latest opportunities within the OPN Specialized Program, and sales strategies that will help you continue to grow and profit with Oracle. Stay tuned for registration information next week.
We also have Oracle OpenWorld and JavaOne to look forward to. These conferences are taking place in San Francisco from September 28 to October 2. We'll have a variety of partner-specific activities for you at OPN Central @ OpenWorld, including the OPN keynote, the famed AfterDark networking reception, access to the OPN Lounge, and more.
In the meantime, I hope that everyone has a great end to fiscal 2014.
Best regards,
Rich Geraffo
Senior Vice President, Worldwide Alliances and Channels

    Read the article

  • Call Webservices…Maybe!?

    - by MOSSLover
So I have been doing preliminary work for my iOS talk for a while, but did not get into the meat of the project until recently. One day I envision my talk uploading pictures from a camera on an iPhone or iPad into SharePoint and telling people how I did it. As with my Silverlight talk and any new technology, building a new talk always ends up with some pain points that you must jump over just to grab data. So step 1 always starts with how we even access a web service using the new technology.
I started out watching every single SPC video available on OAuth and REST web services in SharePoint 2013. I also sent an email to Eric Shupps asking for some REST and 2013 examples. The videos further confused me, because all of them were about SharePoint-hosted apps (provider-hosted and autohosted); I did not want to create a SharePoint-hosted app, but instead a mobile app outside the SharePoint context altogether. Nick Swan sent me his code, and it was a great starting point for what the JSON calls look like on iOS, but I was still missing a piece. Nick does a great job of showing how to use the REST/JSON calls in a non-MS technology; however, his presentation uses the SharePoint context and can grab the SPAppToken. At this point I had to ask the question: how do you grab the SAML token outside of SharePoint 2013 in iOS using Objective-C? After reading all the MSDN documentation, some documentation on RestKit and Objective-C OAuth calls, and some SharePoint 2013 blog posts, my head was swimming. I was dreaming about REST and iOS in SharePoint 2013. SAML tokens were taunting me. I was nowhere near understanding 2013.
I started talking to my friend, Pedro Jimenez, who is also playing with Objective-C and went to SPC. He found me a couple of good MSDN posts with REST/JSON calls that basically showed the accessToken was all I needed (at this point I was still thinking the iOS app needed to be a provider-hosted app, which is wrong). So then again I had to ask the SAML token question: how do you get a SAML token outside of SharePoint without the TokenHelper class?
So then I started talking to people and thinking: why do I need to completely avoid TokenHelper? The solution, in concept, is basically to create a web service in Azure wrapped in a provider-hosted app in SharePoint. Wictor Wilen created a helper web service in the following blog post: http://www.wictorwilen.se/Post/How-to-do-active-authentication-to-Office-365-and-SharePoint-Online.aspx.
So now I have to basically stand up the web service and the SharePoint app wrapper, and then use RestKit to call the first web service to grab the token and the second web service to pass in the token and grab some SharePoint data. What this means is that you can no longer just pass credentials into SharePoint web services and get data back: you have to pass in a SAML token with every single web service call to SharePoint. The theory is that this token is associated with the permissions the app can handle (read, write, whatever). It seems like a ton of pain and a lot of work, but this is step 1 in my crusade to pull some piece of data into iOS from SharePoint and show people how to do it themselves. In the upcoming months hopefully I can get halfway to my end goal.
Technorati Tags: SharePoint 2013, REST, OAuth, Objective-C, iOS
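To make the two-step flow concrete, here is a minimal sketch of the same pattern in C# (the talk itself uses RestKit and Objective-C; the helper-service URL and the plain-string token response below are hypothetical placeholders, not from the post):

    using System;
    using System.Net.Http;
    using System.Net.Http.Headers;
    using System.Threading.Tasks;

    class SharePointRestSketch
    {
        static async Task Main()
        {
            using (var http = new HttpClient())
            {
                // Step 1: ask the Azure-hosted helper service the post
                // describes for an access token. URL and response format
                // are assumptions for illustration.
                string token = await http.GetStringAsync(
                    "https://example-helper.azurewebsites.net/api/token");

                // Step 2: call the SharePoint 2013 REST API, passing the
                // token as a Bearer credential on every request.
                var request = new HttpRequestMessage(HttpMethod.Get,
                    "https://contoso.sharepoint.com/_api/web/lists");
                request.Headers.Authorization =
                    new AuthenticationHeaderValue("Bearer", token);
                request.Headers.Accept.Add(
                    new MediaTypeWithQualityHeaderValue("application/json"));

                var response = await http.SendAsync(request);
                Console.WriteLine(await response.Content.ReadAsStringAsync());
            }
        }
    }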

    Read the article

  • Disable the Splash Screen in Portable Firefox (and Other Portable Apps)

    - by Mysticgeek
Portable applications are cool because you can run them on any machine from your thumb drive. What isn't cool is the annoying splash screens that appear when launching the apps. Here's how to disable the annoyance. In this example we are using PortableApps version 1.6.1.
Disable Splash Screen in Portable Firefox To disable the splash screen, open up Computer and double-click on the flash drive containing PortableApps. Now browse to the following location: PortableApps\FirefoxPortable\Other\Source. In this directory you'll find the file FirefoxPortable.ini; open it with Notepad. By default the file contains the line DisableSplashScreen=False; we just need to change False to True, then make sure to save the change. Now copy the FirefoxPortable.ini file we just edited, go back to the main directory PortableApps\FirefoxPortable, and paste it there. That is all there is to it! Now when you launch Portable Firefox, you won't have to wait for the splash screen to display before you can start using it. If you ever want to revert to having the splash screen display, all you need to do is delete FirefoxPortable.ini from PortableApps\FirefoxPortable.
The process is essentially the same in other portable apps as well; just follow the steps shown above. For example, here we're disabling the splash screen in KeePassPortable by going into the thumb drive at PortableApps\KeePassPortable\Other\Source, changing the KeePassPortable.ini file so that DisableSplashScreen equals True, saving it, and then copying it to the main KeePassPortable directory. If you are annoyed by seeing the splash screen every time you launch a portable app, following these steps rids you of the annoyance! Download PortableApps
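If you carry several portable apps, the same edit is easy to script. A minimal C# sketch of the steps above (the drive letter and file paths are assumptions for illustration):

    using System;
    using System.IO;

    class DisableSplash
    {
        static void Main()
        {
            // Hypothetical thumb-drive layout matching the article.
            string source = @"E:\PortableApps\FirefoxPortable\Other\Source\FirefoxPortable.ini";
            string target = @"E:\PortableApps\FirefoxPortable\FirefoxPortable.ini";

            // Flip the setting, mirroring the manual Notepad edit.
            string ini = File.ReadAllText(source);
            ini = ini.Replace("DisableSplashScreen=False", "DisableSplashScreen=True");
            File.WriteAllText(source, ini);

            // Copy the edited file up to the app's main directory.
            File.Copy(source, target, true);
            Console.WriteLine("Splash screen disabled.");
        }
    }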

    Read the article

  • Cost justification for buying a 32GB superfast Alienware M18x with a price tag of around £5K ($10K)

    - by tonyrogerson
When considering buying a laptop that's going to cost me around £5,000, I really need to justify the purchase from a business perspective. My Lenovo W700 has served me very well for the last 2 years; it's an extremely good machine and as solid as a rock (and as heavy). Alas, it is limited to 8GB. As SQL Server 2012 approaches, and with my interest in working in the Business Intelligence space over the next year or two, it is clear I need a powerful machine on which I can run a full virtualised infrastructure.
My requirements:
For High Availability / Disaster Recovery research and demonstration: a machine for a domain controller, four machines in a shared disk cluster (SQL Server clustering, active-active etc.), and five machines in a file share cluster (SQL Server Availability Groups).
For Business Intelligence research and demonstration: I'm not entirely sure how many machines I want to run here, but it would need to cover the entire BI stack in an enterprise setting: SharePoint, SQL Server, etc.
For Big Data research: I have a fondness for the NoSQL approach to scalability and dealing with large volumes, so I need a number of machines to research VoltDB, Hadoop, etc.
As you can see, the requirements for a SQL Server consultant to service their clients well are considerable. Will 8GB suffice? Alas no, it will no longer do. I'm a very strong believer that in order to do your job well you must expense it; short cuts only cost you time. Waiting 5 minutes instead of an hour for something to run not only saves my time but my clients' time: I can do things quicker and, more importantly, I can demonstrate concepts. My W700 with 8GB of RAM and SSDs cost me around £3.5K two years ago. To be honest I've not got the full use I wanted out of it, but the machine has had the power when I've needed it; it's served me and my clients well.
Alienware now do a model (the M18x) with 32GB of RAM; yes, 32GB in a laptop! Dual drives, so I can whack a couple of really good SSDs in there, plus a quad-core i7 with hyper-threading at a decent speed. I can reduce the cost of the memory by getting it from Crucial, so instead of £1.5K for 32GB it will be around £900, and I can save on the SSDs as well. The beauty of the M18x is that it has USB 3.0, SATA 3 and, really importantly, eSATA. Running VMs will never be easier: I can have a removable SSD with my VMs on it and plug it into my home machine or laptop – an ideal world! The initial outlay of £5K is peanuts compared to the benefits I'll give my clients: I will be able to present real enterprise concepts, and give training on those concepts, with real, albeit virtualised, machines.

    Read the article

  • SQLAuthority News – Social Media Series – Twitter and Myself

    - by pinaldave
Pinal Dave on Twitter! Frequent readers of my blog might know that I am trying to get more involved in all social media sites, both professionally and personally. Readers might also know that I have often struggled with finding the purpose of some social media sites, Twitter especially. One of the great uses of social media is to stay connected and updated with followers. Twitter's 140-character limit means that Twitter is a great place to get quick updates from the world, but not a lot of deep information. In fact, I have the feeling that Twitter's form might actually limit its usefulness, especially for complex subjects like SQL Server. However, the #sqlhelp hash tag has for sure overcome that belief: you can instantly talk about SQL and get help with your SQL problems on Twitter.
I believe in keeping up with the changing times, and it didn't feel right to give up on Twitter. So I have determined a good way to use Twitter and set rules for myself. The problem I was facing was that if I followed everyone who interested me and let anyone follow me, I was completely overwhelmed by the amount of information Twitter could give me every day. It didn't seem like 140 characters should be able to take up so much of my time, but it took hours to sort through all the updates to find things that were of interest to me and to SQL Server.
First, I was forced to unfollow anyone who made too many updates every day. This was not an easy decision, but just for my own sake I had to limit the amount of information I could take in every day. I still let anyone follow me who wants to, because I didn't want to limit my readership, and I hope that they do not feel the way I did: that there are too many updates!
Next, I made sure that the information I put on Twitter is useful and to the point. I try to announce new blog posts on Twitter at least once a day, and I also try to find five posts from other people every day that are worth re-tweeting. This forces me to stay active in the community. But it is not all business on Twitter; it is also a place for me to post updates about my family and home life, for anyone who is interested. In simple words, I talk about everything and anything on Twitter.
If you'd like to follow me, my Twitter handle is www.twitter.com/pinaldave. It is a good place to start if you'd like to keep updated with my blog and find out who I follow and who my influences are. Twitter is perfect for getting little "tastes" of things you're interested in. If you are interested in my blog, SQL Server, or both, I hope that my Twitter updates will be interesting and helpful. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQLServer, T SQL, Technology Tagged: Social Media

    Read the article

  • SQL SERVER – Weekend Project – Visiting Friend’s Company – Koenig Solutions

    - by pinaldave
I have decided to do some interesting experiments every weekend and share them the next week as a weekend project on the blog. Many times our business lives and personal lives are very separate; however, this post will talk about one instance where my two lives connect. This weekend I visited my friend's company. My friend owns Koenig, so of course I am very interested to see that they are doing well. I have been very impressed this year, as they have expanded into multiple cities and are offering more and more classes about Business Intelligence, project management, networking, and much more. Koenig Solutions was originally a company that focused on training IT professionals, covering topics from databases to English language courses. As the company grew more popular, Koenig began their blog to keep fans updated, and has gradually added more and more courses. I am very happy for my friend's success, but as a technology enthusiast I am also pleased with Koenig Solutions' success. Whenever anyone in our field improves, the field as a whole does better. Koenig offers high-quality classes on a variety of topics at a variety of levels, so anyone can benefit from browsing the blog.
I am a big fan of technology (obviously), and I feel blessed to have gotten in on the "ground floor", even though there are some people out there who think technology has advanced as far as possible. I believe they will be proven wrong. And that is why I think companies like Koenig Solutions are so important: they are providing training and support in a quickly growing field, and providing job skills in this tough economy. I believe this particular post really highlights how I, and Koenig, feel about the IT industry. It is quickly expanding, and job opportunities are sure to abound, but how can the average person get started in this exciting field? This post emphasizes that knowledge is power: know what interests you in the IT field, get an education, and continue your training even after you've gotten your foot in the door.
Koenig Solutions provides what I feel is one of the most important services in the modern world: in-person training. They obviously offer many online courses, but you can also set up physical, face-to-face training through their website. As I mentioned before, they offer a wide variety of classes that cater to nearly every IT skill you can think of. If you have more specific needs, they also offer one of the best English language training courses. English is turning into the language of technology, so these courses can ensure that you are keeping up the pace. Koenig Solutions and I agree about how important training can be, and even better, they provide some of the best training around. We share ideals, and I am very happy to see the success of my friend. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQL Training, SQLAuthority Author Visit, T SQL, Technology Tagged: Developer Training

    Read the article

  • ASP.NET MVC, Web API, Razor, and Open Source

    - by Leniel Macaferi
    Microsoft has made the ASP.NET MVC source code available under an open source license since the very first V1 release. We have also integrated a number of great open source technologies into the product, and now ship jQuery, jQuery UI, jQuery Mobile, jQuery Validation, Modernizr.js, NuGet, Knockout.js and JSON.NET as integral parts of ASP.NET MVC releases. I am very excited to announce today that we will also release the source code for ASP.NET Web API and ASP.NET Web Pages (also known as Razor) under an open source license (Apache 2.0), and that we will increase the development transparency of all three projects by hosting their code repositories on CodePlex (using the new Git support announced last week). This will enable a more open development model, where the whole community will be able to participate and provide feedback on checkins, fix bugs, develop new features, and build and test the products daily using the most up-to-date version of the source code and tests possible. For the first time we will also allow developers outside Microsoft to submit fixes and code contributions that the Microsoft development team will review for potential inclusion in the products. We announced a similarly open development approach with the Windows Azure SDK last December, and we think this approach is a great way to build closer relationships, since it enables an excellent feedback loop with developers and ultimately allows us to deliver even better products as a result. Very importantly: ASP.NET MVC, Web API and Razor will continue to be fully supported Microsoft products, released both independently and as part of Visual Studio (exactly the same way as today). They will also continue to be developed by the same Microsoft developers who build them today (in fact, we now have many more Microsoft developers working on the ASP.NET team). Our goal with today's announcement is to further increase the feedback loop on the products, enabling us to deliver even better products. We are really excited about the improvements this will bring. Learn more: You can now browse, sync, and build the source tree of ASP.NET MVC, Web API, and Razor through the website http://aspnetwebstack.codeplex.com. The current Git repository on the site refers to the RC (release candidate) milestone development tree the team has been working on over the last few weeks; this tree contains both the source code and the tests, and can be built and tested by anyone. Because the binaries produced are bin-deployable (DLLs installed directly into the bin folder with no further dependencies), you can compile your own builds and try product updates as soon as they are added to the repository. You can now also contribute directly to the development of the products by reviewing and sending feedback on code checkins, submitting bugs and helping us verify fixes as soon as they are committed to the repository, suggesting and giving feedback on new features as they are implemented, and submitting your own fixes and code contributions.
Note that all code submissions will be rigorously reviewed and tested by the ASP.NET MVC team, and only those that meet a high bar for quality and fit within the roadmap defined for upcoming releases will be incorporated into the product source. Summary: Everyone on the team is really excited about today's announcement; it is something we have been working toward for many years. The closer relationship between the community and the developers will allow us to build even better products, taking ASP.NET to the next level in terms of innovation and customer focus. Thank you! Scott P.S. Besides the blog, I use Twitter for quick posts and to share links. My Twitter handle is: @scottgu. Text translated from the original post by Leniel Macaferi.

    Read the article

  • A few things I learned regarding Azure billing policies

    - by Vincent Grondin
An hour of small computing time: $0.12 per hour
A gig of storage in the cloud: $0.15 per month
1 gig of relational database using Azure SQL: $9.99 per month
A Visual Studio Professional with MSDN Premium account: $2,500 per year
Winning an MSDN Professional account that comes preloaded with 750 free hours of Azure per month: PRICELESS!!!
But was it really free???? Hmmm… Let's see. Here are a few things I learned regarding Azure billing policies when I attended a promotional training at Microsoft last week.
1) An instance deployed in the cloud really means whatever you upload in there... it doesn't matter if it's in STAGING OR PRODUCTION!!!! Your MSDN account comes with 750 free hours of small computing time per month, which should be enough for one instance of one application deployed in the cloud... So we're cool: the application you run in the cloud doesn't cost you a penny... BUT the one that's in staging is still consuming time!!! A forgotten staging instance left up around the clock burns roughly 24 × 30 = 720 small-compute hours a month, and every hour beyond your free credit is billed at $0.12. So if you don't want to end up having to pay $42 at the end of the month on your credit card, as happened to a friend of mine, DELETE those staging applications once you've put them in production! This also applies to the instance count you can modify in the configuration file... So stop and think before you decide you want to spawn 50 of those hello world apps.
2) If you have an MSDN account, then you have the promotional 750 hours of Azure credits per month and can use them to explore the cloud! But be aware: this promotion ends in 8 months (maybe more like 7 now), and then you will most likely go back to the standard 250 hours of Azure credits. If you do not delete your applications by then, you'll get billed for the extra hours, believe me... There is a switch you can toggle that will STOP your automatic enrollment after the promotion and prevent you from renewing the Azure account automatically. Yes, the default setting is to automatically renew your account, and remember, you entered your credit card information in the registration process, so, yes, you WILL be billed... Go disable that ASAP. Log into your account, go to "Windows Azure Platform", then click the "Subscriptions" tab; on the right side you'll see a drop-down with different "Actions" in it... Choose "Opt out of auto renew" and NOW you're safe...
Still, this is a great offer by Microsoft, and I think everyone who has a chance should play a bit with Azure to get to know this technology a bit more... Happy Cloud Computing All
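To make the overage arithmetic concrete, here is a back-of-envelope sketch in C# using the rates quoted above (my own illustration; the scenario of one production plus one forgotten staging instance is assumed):

    using System;

    class AzureOverage
    {
        static void Main()
        {
            const double ratePerHour = 0.12;  // small compute instance rate from the post
            const int freeHours = 750;        // monthly MSDN promotional credit
            const int hoursInMonth = 24 * 30; // an instance left running all month

            // One production instance plus one forgotten staging instance.
            int consumed = 2 * hoursInMonth;
            int billable = Math.Max(0, consumed - freeHours);

            Console.WriteLine("Consumed: " + consumed + " h, billable: " + billable +
                              " h, cost: $" + (billable * ratePerHour).ToString("F2"));
        }
    }

Running this prints 1440 hours consumed, 690 billable, about $82.80; my friend's $42 bill corresponds to a staging instance that ran for fewer overage hours.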

    Read the article

  • Home Energy Management & Automation with Windows Phone 7

A number of people at Clarity are personally interested in home energy conservation and home automation. We feel that a mobile device is a great fit for bringing this idea to fruition. While this project is merely a concept and not directly associated with Microsoft's Hohm web service, it provides a great model for communicating the idea. I wanted to take the idea a step further and combine saving energy in your home with the ability to track water usage and control your home devices. I designed an application that focuses on total home control, not just energy usage.
Application Overview By monitoring home consumption in real time and with yearly projections, users can pinpoint vampire devices, times of high or low consumption, and wasteful patterns of energy use. Energy usage meters indicate total current consumption as well as individual device consumption. Users can then use the information to take action, make adjustments, and change their consumption behaviors. The app can be used to automate certain systems like lighting, temperature, or alarms. Other features can be turned on and off at the touch of a toggle switch on your phone, away from home. Forget to turn off the TV or shut the garage door? No problem, you can do it from your phone. Through settings you can enable and disable features of the phone that apply to your home, making it a completely customized and convenient experience. To be clear, this equates to more security, a big environmental impact, and even bigger savings.
Design and User Interface Since this panorama application is designed for Windows Phone 7 devices, it complies with the UI Design and Interaction Guide for WP7. I developed the frame and page hierarchy from existing examples. The interface takes advantage of the interactive nature of touch screens with slider controls, pivot control views, and toggle switches to turn devices on and off (not shown in the mockup). I followed recommendations for text-based elements and adapted the tile notifications to display the most recent user activity. For example, the mockup indicates upon launching the app that the last thing you did was program the thermostat. This model is great for quickly launching common user actions. One last design feature to point out is the technical reason for supplying both light and dark themes for the app: since this application targets energy consumption, it only makes sense to consider the effect of the app's background color or image on the phone's energy use. When displaying darker colors like black, the OLED display may use less power, extending battery life.
Other Considerations For now I left out wind- and solar-powered energy options because they are not available to everyone. Renewable energy sources and the new technologies associated with them are definitely ideas to keep in mind for a next iteration. Another idea to explore for such an application would be to include a savings model similar to mint.com. In addition to general energy-saving recommendations, the application could recommend customized ways to save based on your current utility providers and the options available in your area. If your television or refrigerator is guilty of sucking a lot of energy, then you may see recommendations for Energy Star products that could save you even more money!
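As a sketch of how one of those remote device toggles might be backed in code, here is a minimal WP7-era C# view model (the class and property names are hypothetical, not taken from the concept itself):

    using System.ComponentModel;

    // Backs a single home device (TV, garage door, lights...) so a
    // toggle switch in the UI can bind to IsOn and flip it remotely.
    public class DeviceViewModel : INotifyPropertyChanged
    {
        private bool isOn;

        public DeviceViewModel(string name) { Name = name; }

        public string Name { get; private set; }

        public bool IsOn
        {
            get { return isOn; }
            set
            {
                if (isOn == value) return;
                isOn = value;
                OnPropertyChanged("IsOn");
                // A real app would also send the on/off command to the
                // home automation service at this point.
            }
        }

        public event PropertyChangedEventHandler PropertyChanged;

        protected void OnPropertyChanged(string propertyName)
        {
            var handler = PropertyChanged;
            if (handler != null)
                handler(this, new PropertyChangedEventArgs(propertyName));
        }
    }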

    Read the article

< Previous Page | 466 467 468 469 470 471 472 473 474 475 476 477  | Next Page >