Search Results

Search found 1070 results on 43 pages for 'uml modeling'.


  • Guides for PostgreSQL query tuning?

    - by Joe
    I've found a number of resources that talk about tuning the database server, but I haven't found much on tuning individual queries. For instance, in Oracle I might try adding hints to ignore indexes or to use sort-merge vs. correlated joins, but I can't find much on tuning Postgres, other than using explicit joins and recommendations for bulk-loading tables. Do any such guides exist, so I can focus on tuning the most-run and/or underperforming queries, hopefully without adversely affecting the currently well-performing queries? I'd even be happy to find something that compared how certain types of queries perform relative to other databases, so I had a better clue of what sorts of things to avoid.

    Update: I should've mentioned that I took all of the Oracle DBA classes, along with their data modeling and SQL tuning classes, back in the 8i days... so I know about EXPLAIN, but that's more to tell you what's going wrong with the query, not necessarily how to make it better. (E.g., are 'where var=1 or var=2' and 'where var in (1,2)' considered the same when generating an execution plan? What if I'm doing it with 10 permutations? When are multi-column indexes used? Are there ways to get the planner to optimize for fastest start vs. fastest finish? What sort of gotchas might I run into when moving from MySQL, Oracle or some other RDBMS?) I could write any complex query dozens if not hundreds of ways, and I'm hoping not to have to try them all and find which one works best through trial and error. I've already found that 'SELECT count(*)' won't use an index, but 'SELECT count(primary_key)' will... maybe what I want is a 'PostgreSQL for experienced SQL users' sort of document that explains which sorts of queries to avoid, how best to rewrite them, or how to get the planner to handle them better.

    Update 2: I found a Comparison of Different SQL Implementations, which covers PostgreSQL, DB2, MS-SQL, MySQL, Oracle and Informix, and explains if and how you can do various things, along with the gotchas. Its references section links to Oracle / SQL Server / DB2 / Mckoi / MySQL Database Equivalents (which is what its title suggests) and to the wikibook SQL Dialects Reference, which covers whatever people contribute (including some DB2, SQLite, MySQL, PostgreSQL, Firebird, Virtuoso, Oracle, MS-SQL, Ingres, and Linter).
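
    As an aside, EXPLAIN ANALYZE is easy to drive from client code when comparing query variants. Below is a minimal C# sketch using the Npgsql client library; the connection string, table and WHERE clause are hypothetical stand-ins, not from the question. PostgreSQL returns the plan as one row of text per line:

        using System;
        using Npgsql;  // assumed PostgreSQL client library

        class ExplainDemo
        {
            static void Main()
            {
                // Hypothetical connection string and query.
                using (var conn = new NpgsqlConnection("Host=localhost;Database=mydb;Username=me;Password=secret"))
                {
                    conn.Open();
                    // EXPLAIN ANALYZE actually executes the query and reports real timings.
                    using (var cmd = new NpgsqlCommand("EXPLAIN ANALYZE SELECT count(primary_key) FROM my_table WHERE var IN (1, 2)", conn))
                    using (var reader = cmd.ExecuteReader())
                    {
                        while (reader.Read())
                            Console.WriteLine(reader.GetString(0)); // one plan line per row
                    }
                }
            }
        }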


  • How do you model roles / relationships with Domain Driven Design in mind?

    - by kitsune
    If I have three entities, Project, ProjectRole and Person, where a Person can be a member of different Projects and be in different Project Roles (such as "Project Lead" or "Project Member") - how would you model such a relationship? In the database, I currently have the following tables: Project, Person, ProjectRole, and Project_Person with PersonId & ProjectId as PK and ProjectRoleId as an FK relationship. I'm really at a loss here, since all the domain models I come up with seem to break some "DDD" rule. Are there any 'standards' for this problem? I had a look at Streamlined Object Modeling, and there is an example of what a Project and ProjectMember would look like, but AddProjectMember() in Project would call ProjectMember.AddProject(). So Project has a list of ProjectMembers, and each ProjectMember in return has a reference to the Project. Looks a bit convoluted to me.

    Update: After reading more about this subject, I will try the following. There are distinct roles, or better, model relationships, that are of a certain role type within my domain. For instance, ProjectMember is a distinct role that tells us something about the relationship a Person plays within a Project. It contains a ProjectMembershipType that tells us more about the role it will play. I do know for certain that persons will have to play roles inside a project, so I will model that relationship. ProjectMembershipTypes can be created and modified; these can be "Project Leader", "Developer", "External Adviser", or something different. A person can have many roles inside a project, and these roles can start and end at a certain date. Such relationships are modeled by the class ProjectMember:

        public class ProjectMember : IRole
        {
            public virtual int ProjectMemberId { get; set; }
            public virtual ProjectMembershipType ProjectMembershipType { get; set; }
            public virtual Person Person { get; set; }
            public virtual Project Project { get; set; }
            public virtual DateTime From { get; set; }
            public virtual DateTime Thru { get; set; }
            // etc...
        }

    ProjectMembershipType (e.g. "Project Manager", "Developer", "Adviser"):

        public class ProjectMembershipType : IRoleType
        {
            public virtual int ProjectMembershipTypeId { get; set; }
            public virtual string Name { get; set; }
            public virtual string Description { get; set; }
            // etc...
        }
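
    For illustration, one common way to avoid the bidirectional AddProjectMember()/AddProject() dance is to make one side (the Project aggregate) the single place where the relationship object is created. A minimal sketch reusing the classes from the question; the AddMember method is a hypothetical addition, not the poster's code:

        using System;
        using System.Collections.Generic;

        public class Project
        {
            private readonly List<ProjectMember> members = new List<ProjectMember>();

            // The aggregate creates the relationship object itself, so both ends
            // of the association get wired up in exactly one place.
            public ProjectMember AddMember(Person person, ProjectMembershipType type, DateTime from)
            {
                var member = new ProjectMember
                {
                    Project = this,
                    Person = person,
                    ProjectMembershipType = type,
                    From = from
                };
                members.Add(member);
                return member;
            }
        }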


  • Having trouble getting MEF imports to be resolved

    - by Dave
    This is sort of a continuation of one of my earlier posts, which involves the resolving of modules in my WPF application. This question is specifically related to the effect that interdependencies of modules, and the method of constructing those modules (i.e. via MEF or through new), have on MEF's ability to resolve relationships. First of all, here is a simple UML diagram of my test application. I have tried two approaches:

        left approach:  the App implements IError
        right approach: the App has a member that implements IError

    Left approach. My code-behind looked like this (just the MEF-related stuff):

        // app.cs
        [Export(typeof(IError))]
        public partial class Window1 : Window, IError
        {
            [Import]
            public CandyCo.Shared.LibraryInterfaces.IPlugin Plugin { get; set; }

            [Export]
            public CandyCo.Shared.LibraryInterfaces.ICandySettings Settings { get; set; }

            private ICandySettings Settings;

            public Window1()
            {
                // I create the preferences here with new, instead of using MEF. I wonder
                // if that's my whole problem? If I use MEF, and want to have parameters
                // going to the constructor, then do I have to [Export] a POCO (i.e. string)?
                Settings = new CandySettings("Settings", @"c:\settings.xml");

                var catalog = new DirectoryCatalog(".");
                var container = new CompositionContainer(catalog);
                try
                {
                    container.ComposeParts(this);
                }
                catch (CompositionException ex)
                {
                    foreach (CompositionError e in ex.Errors)
                    {
                        string description = e.Description;
                        string details = e.Exception.Message;
                    }
                    throw;
                }
            }
        }

        // plugin.cs
        [Export(typeof(IPlugin))]
        public class Plugin : IPlugin
        {
            [Import]
            public CandyCo.Shared.LibraryInterfaces.ICandySettings CandySettings { get; set; }

            [Import]
            public CandyCo.Shared.LibraryInterfaces.IError ErrorInterface { get; set; }

            [ImportingConstructor]
            public Plugin(ICandySettings candy_settings, IError error_interface)
            {
                CandySettings = candy_settings;
                ErrorInterface = error_interface;
            }
        }

        // candysettings.cs
        [Export(typeof(ICandySettings))]
        public class CandySettings : ICandySettings { ... }

    Right-side approach. Basically the same as the left-side approach, except that I created a class that inherits from IError in the same assembly as Window1. I then used an [Import] to try to get MEF to resolve that for me.

    Can anyone explain how the two ways I have approached MEF here are flawed? I have been in the dark for so long that instead of reading about MEF and trying different suggestions, I've added MEF to my solution and am stepping into the code. The part where it looks like it fails is when it calls partManager.GetSavedImport(). For some reason, the importCache is null, which I don't understand. All the way up to this point, it's been looking at the part (Window1) and trying to resolve two imported interfaces -- IError and IPlugin. I would have expected it to enter code that looks at other assemblies in the same executable folder, and then check them for exports so that it knows how to resolve the imports...
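
    As an aside: one standard MEF technique for hand-constructed objects like the CandySettings above is to compose the instance into the container directly, instead of relying on an [Export] attribute on an instance property. A minimal hypothetical sketch reusing the question's types (a suggestion, not the poster's code):

        // Inside Window1's constructor; ComposeExportedValue and ComposeParts are
        // extension methods from System.ComponentModel.Composition.AttributedModelServices.
        var settings = new CandySettings("Settings", @"c:\settings.xml");

        var catalog = new DirectoryCatalog(".");
        var container = new CompositionContainer(catalog);

        // Register the hand-built instance as the export for ICandySettings, so any
        // [Import] of ICandySettings (e.g. in Plugin) can be satisfied by it.
        container.ComposeExportedValue<ICandySettings>(settings);

        container.ComposeParts(this);  // now resolve this window's own imports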


  • when to use the abstract factory pattern?

    - by hguser
    Hi: I want to know when we need to use the abstract factory pattern. Here is an example; I want to know if it is necessary. The UML diagram above shows the abstract factory pattern; it is recommended by my classmate. The following is my own implementation, and I do not think it is necessary to use the pattern. Here is some of the core code:

        package net;

        import java.io.IOException;
        import java.util.HashMap;
        import java.util.Map;
        import java.util.Properties;

        public class Test {
            public static void main(String[] args) throws IOException, InstantiationException,
                    IllegalAccessException, ClassNotFoundException {
                DaoRepository dr = new DaoRepository();
                AbstractDao dao = dr.findDao("sql");
                dao.insert();
            }
        }

        class DaoRepository {
            Map<String, AbstractDao> daoMap = new HashMap<String, AbstractDao>();

            public DaoRepository() throws IOException, InstantiationException,
                    IllegalAccessException, ClassNotFoundException {
                Properties p = new Properties();
                p.load(DaoRepository.class.getResourceAsStream("Test.properties"));
                initDaos(p);
            }

            public void initDaos(Properties p) throws InstantiationException,
                    IllegalAccessException, ClassNotFoundException {
                String[] daoarray = p.getProperty("dao").split(",");
                for (String dao : daoarray) {
                    AbstractDao ad = (AbstractDao) Class.forName(dao).newInstance();
                    daoMap.put(ad.getID(), ad);
                }
            }

            public AbstractDao findDao(String id) {
                return daoMap.get(id);
            }
        }

        abstract class AbstractDao {
            public abstract String getID();
            public abstract void insert();
            public abstract void update();
        }

        class SqlDao extends AbstractDao {
            public SqlDao() {}
            public String getID() { return "sql"; }
            public void insert() { System.out.println("sql insert"); }
            public void update() { System.out.println("sql update"); }
        }

        class AccessDao extends AbstractDao {
            public AccessDao() {}
            public String getID() { return "access"; }
            public void insert() { System.out.println("access insert"); }
            public void update() { System.out.println("access update"); }
        }

    And the content of Test.properties is just one line:

        dao=net.SqlDao,net.SqlDao

    So, can anyone tell me if the pattern is necessary in this situation?


  • How to best integrate generated code

    - by Arne
    I am evaluating the use of code generation for my flight simulation project. More specifically, there is a requirement to allow "the average engineer" (no offense, I am one myself) to define the differential equations that describe the dynamic system in a more natural syntax than C++ provides. The idea is to devise an abstract descriptor language that can be easily understood and edited, and to generate C++ code from it. This descriptor is supplied by the modeling engineer and used by the ones implementing and maintaining the simulation environment to generate code. I've got something like this in mind:

        model Aircraft has
            state x1, x2;
            state x3;
            input double : u;
            input bool : flag1, flag2;
            algebraic double : x1x2;
            model Engine : tw1, tw2;
            model Gear : gear;
            model ISA : isa;
            trim routine HorizontalFlight;
            trim routine OnGround, General;
            constant double : c1, c2;
            constant int : ci1;
        begin differential equations
            x1' = x1 + 2.*x2;
            x2' = x2 + x1x2;
        begin algebraic equations
            x1x2 = x1*x2 + x1';
        end model

    It is important to retain the flexibility of the C++ language, thus the descriptor language is meant to define only certain parts of the definition and implementation of the model class. This way one engineer provides the model in the form of the descriptor language as exemplified above, and the maintenance engineer will add all the code to read parameters from files, start/stop/pause the execution of the simulation, and decide how a concrete object gets instantiated. My first thought is to generate two files from the descriptor file: one .h file containing declarations and one .cpp file containing the implementation of certain functions. These then need to be #included at appropriate places:

        [File Aircraft.h]

        class Aircraft {
        public:
            Aircraft(..);                            // hand-written constructor
            void ReadParameters(string &file_name);  // hand-written
        private:
            /* more hand-written boiler-plate code */
            /* generated declarations follow */
            #include "Aircraft.generated.decl"
        };

        [File Aircraft.cpp]

        Aircraft::Aircraft(..) {
            /* hand-written constructor implementation */
        }

        /* more hand-written implementation code */
        /* generated implementation code follows */
        #include "Aircraft.generated.impl"

    Any thoughts or suggestions?
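
    As a side note for comparison: in C#, the same hand-written/generated split is usually expressed with partial classes instead of #include splicing, which keeps generated text out of the middle of a hand-written file. A minimal sketch with hypothetical names:

        // Aircraft.cs - hand-written half, maintained by the simulation engineer.
        public partial class Aircraft
        {
            public Aircraft() { /* hand-written constructor */ }
            public void ReadParameters(string fileName) { /* hand-written I/O code */ }
        }

        // Aircraft.Generated.cs - emitted by the code generator; safe to overwrite.
        public partial class Aircraft
        {
            private double x1, x2, x3;  // states from the descriptor

            // Generated from: x1' = x1 + 2.*x2; (simple explicit Euler step)
            private void StepDifferentialEquations(double dt)
            {
                x1 += (x1 + 2.0 * x2) * dt;
            }
        }

    Both halves compile into a single class, so the generator can regenerate its file without touching the hand-written code.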


  • Any way to speed up this hierarchical query?

    - by RenderIn
    I've got a serious performance problem with a hierarchical query that I can't seem to fix. I am modeling several organization charts in my database, each representing a virtual organization within our company. For example, we have several temporary committees that are created from time to time and there may be a Committee Organizer role at the top of this virtual hierarchy, with several people assigned to the Committee Member role beneath the organizer. Some of our virtual organizations have many levels and several branches at each level.

    I have a single table in which I represent all the role assignments, i.e. a ROLE_ID column and a PARENT_ROLE_ID column which is a foreign key to the ROLE_ID column. For each assignment we also store as a column the location in the company where this person has the assignment. For example, the Committee Organizer would have a company-level/CEO assignment, while the committee members would have department-level assignments such as ACCOUNTING, MARKETING, etc. So to model the organizer/member relationship for two individuals we would have:

        ROLE_ID = 4  PARENT_ROLE_ID = NULL  EMPLOYEE_NUMBER = 213423  COMPANY_LOCATION = CEO
        ROLE_ID = 5  PARENT_ROLE_ID = 4     EMPLOYEE_NUMBER = 838221  COMPANY_LOCATION = ACCOUNTING

    Here's where things get tricky. I have an application that every person in the organization can log in to. When they log in they should be able to view all the virtual organizations in our company, e.g. the committee members should be able to see the committee organizer and vice-versa. However, only the committee organizer should be able to edit the committee members. The difficulty is in determining whether an individual (who can have multiple role assignments) has edit access for each other assignment. While this seems simple in the example, consider a virtual organization in which we have President at the top, 5 departments directly beneath him, 2 subdepartments below each department. We only want people in the Accounting department to be able to edit individuals in the subdepartments belonging to the Accounting department. They should not have edit access to anybody in the Marketing department or its subdepartments.

    To determine edit access when a user views a virtual organization in our company, I run a query that executes two inline views: A) hierarchically query for all assignments in this virtual organization, using SYS_CONNECT_BY_PATH to store the entire path to each user/role/company_location, and B) hierarchically retrieve all the assignments the individual logged in has, using SYS_CONNECT_BY_PATH to store the entire path to each of these assignments. The result of the query is all the records from A) plus a boolean, determined by joining with B), which flags whether the logged in user has edit access for each record.

    Indexes don't seem to be helping... it simply appears that there is too much processing going on to separate all the records and then determine edit access. One issue is that I can't store the SYS_CONNECT_BY_PATH and index it... determining whether an individual record has edit access consists of comparing if:

        test_record_sys_path LIKE individual_record_sys_path || '%'

    Is a materialized view the answer?
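
    For illustration, the comparison at the end is just a materialized-path prefix test; a toy C# sketch with hypothetical paths makes the check explicit:

        using System;
        using System.Linq;

        class PathCheckDemo
        {
            static void Main()
            {
                // Hypothetical SYS_CONNECT_BY_PATH-style values ("/role_id/role_id/...").
                string[] myAssignmentPaths = { "/4/5" };  // paths of the logged-in user's roles
                string recordPath = "/4/5/9";             // path of the record being displayed

                // Edit access: the record's path extends one of my paths.
                bool canEdit = myAssignmentPaths.Any(p =>
                    recordPath == p || recordPath.StartsWith(p + "/"));

                Console.WriteLine(canEdit); // True
            }
        }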


  • Event sourcing: Write event before or after updating the model

    - by Magnus
    I'm reasoning about event sourcing, and I often arrive at a chicken-and-egg problem. I would be grateful for some hints on how to reason around this. If I execute all I/O-bound processing asynchronously (i.e. writing to the event log), then how do I handle, or sometimes even detect, failures? I'm using Akka actors, so processing is sequential for each event/message. I do not have any database at this time; instead I persist all the events in an event log and then keep an aggregated state of all the events in a model stored in memory. Queries are all against this model; you can consider it to be a cache.

    Example: creating a new user.

        1. Validate that the user does not exist in the model
        2. Persist the event to the journal
        3. Update the model (in memory)

    If step 3 breaks, I still have persisted my event, so I can replay it at a later date. If step 2 breaks, I can handle that gracefully as well. This is fine, but since step 2 is I/O-bound, I figured that I should do the I/O in a separate actor to free up the first actor for queries. Updating a user while allowing queries (A0 = front end/GUI actor, A1 = processor actor, A2 = I/O actor, E = event bus):

        1. (A0-E-A1) Event is published to update user 'U1'. Validate that user 'U1' exists in the model
        2. (A1-A2) Persist the event to the journal (separate actor)
        3. (A0-E-A1-A0) Query for user 'U1' profile
        4. (A2-A1) Event is now persisted; continue to update the model
        5. (A0-E-A1-A0) Query for user 'U1' profile (now returns fresh data)

    This is appealing, since queries can be processed while the I/O is churning along at its own pace. But now I can cause myself all kinds of problems where two incompatible commands (a delete and then an update) could both be persisted to the event log and crash on me when replayed at a later date, since I do the validation before persisting the event and only update the model afterwards. My aim is to keep the reasoning around my model simple (since an actor processes messages sequentially on a single thread) but not have queries wait on I/O-bound updates. I get the feeling I'm modeling a database, which in itself might be a problem. If things are unclear, please write a comment.
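
    For illustration, here is a minimal sketch (plain C#, not Akka, and all names are hypothetical) of the ordering that avoids the race described above: the event is appended to the journal and applied to the in-memory model within the same message-handling step, so the next command validates against state that already reflects it:

        using System;
        using System.Collections.Generic;

        class UserService
        {
            private readonly List<object> journal = new List<object>();     // stand-in for the event log
            private readonly HashSet<string> users = new HashSet<string>(); // in-memory model

            // Assume this is invoked for one message at a time (the actor guarantee).
            public void CreateUser(string userId)
            {
                if (users.Contains(userId))                   // 1. validate against the current model
                    throw new InvalidOperationException("user already exists");

                var evt = new { Type = "UserCreated", UserId = userId };
                journal.Add(evt);                             // 2. persist; blocking here keeps reasoning simple
                users.Add(userId);                            // 3. apply to the model immediately
            }
        }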


  • ovs-vsctl: "eth0" is not a valid UUID

    - by Przemek Lach
    I'm trying to set up an Open vSwitch inside my Ubuntu 12.04 Server VM. I have created three interfaces for this VM, and I want to create a port mirror inside the VM using these three interfaces and Open vSwitch. There are three host-only adapters: eth0, eth1, eth2. The idea is that three other VMs will be connected to these adapters. One of these VMs will stream UDP video to eth0, and I want the vswitch'd VM to mirror those packets from eth0 onto eth1 and eth2. Each of the VMs connected to eth1 and eth2 will get the same video stream.

    I performed the following steps to install Open vSwitch:

        $ apt-get install python-simplejson python-qt4 python-twisted-conch automake autoconf gcc uml-utilities libtool build-essential
        $ apt-get install build-essential autoconf automake pkg-config
        $ wget http://openvswitch.org/releases/openvswitch-1.7.1.tar.gz
        $ tar xf openvswitch-1.7.1.tar.gz
        $ cd openvswitch-1.7.1
        $ apt-get install libssl-dev iproute tcpdump linux-headers-`uname -r`
        $ ./boot.sh
        $ ./configure --with-linux=/lib/modules/`uname -r`/build
        $ make
        $ sudo make install

    After installation I configured it as follows:

        $ insmod datapath/linux/openvswitch.ko
        $ sudo touch /usr/local/etc/ovs-vswitchd.conf
        $ mkdir -p /usr/local/etc/openvswitch
        $ ovsdb-tool create /usr/local/etc/openvswitch/conf.db

    Then I started the server:

        $ ovsdb-server /usr/local/etc/openvswitch/conf.db \
            --remote=punix:/usr/local/var/run/openvswitch/db.sock \
            --remote=db:Open_vSwitch,manager_options \
            --private-key=db:SSL,private_key \
            --certificate=db:SSL,certificate \
            --bootstrap-ca-cert=db:SSL,ca_cert --pidfile --detach --log-file
        $ ovs-vsctl --no-wait init    (run only once)
        $ ovs-vswitchd --pidfile --detach

    The above steps I got from this tutorial, and it all worked fine. I then proceeded to add a port mirror based on the Open vSwitch documentation under Port Mirroring. I successfully completed the following commands:

        $ ovs-vsctl add-br br0
        $ ovs-vsctl add-port br0 eth0
        $ ovs-vsctl add-port br0 eth1
        $ ovs-vsctl add-port br0 eth2
        $ ifconfig eth0 promisc up
        $ ifconfig eth1 promisc up
        $ ifconfig eth2 promisc up

    At this point, when I run ovs-vsctl show I get the following:

        75bda8c2-b870-438b-9115-e36288ea1cd8
            Bridge "br0"
                Port "br0"
                    Interface "br0"
                        type: internal
                Port "eth0"
                    Interface "eth0"
                Port "eth2"
                    Interface "eth2"
                Port "eth1"
                    Interface "eth1"

    And when I run ifconfig I get the following:

        eth0  Link encap:Ethernet  HWaddr 08:00:27:9f:51:ca
              inet6 addr: fe80::a00:27ff:fe9f:51ca/64 Scope:Link
              UP BROADCAST RUNNING PROMISC MULTICAST  MTU:1500  Metric:1
              RX packets:17 errors:0 dropped:0 overruns:0 frame:0
              TX packets:6 errors:0 dropped:0 overruns:0 carrier:0
              collisions:0 txqueuelen:1000
              RX bytes:1494 (1.4 KB)  TX bytes:468 (468.0 B)

        eth1  Link encap:Ethernet  HWaddr 08:00:27:53:02:d4
              inet6 addr: fe80::a00:27ff:fe53:2d4/64 Scope:Link
              UP BROADCAST RUNNING PROMISC MULTICAST  MTU:1500  Metric:1
              RX packets:17 errors:0 dropped:0 overruns:0 frame:0
              TX packets:6 errors:0 dropped:0 overruns:0 carrier:0
              collisions:0 txqueuelen:1000
              RX bytes:1494 (1.4 KB)  TX bytes:468 (468.0 B)

        eth2  Link encap:Ethernet  HWaddr 08:00:27:cb:a5:93
              inet6 addr: fe80::a00:27ff:fecb:a593/64 Scope:Link
              UP BROADCAST RUNNING PROMISC MULTICAST  MTU:1500  Metric:1
              RX packets:17 errors:0 dropped:0 overruns:0 frame:0
              TX packets:6 errors:0 dropped:0 overruns:0 carrier:0
              collisions:0 txqueuelen:1000
              RX bytes:1494 (1.4 KB)  TX bytes:468 (468.0 B)

        eth3  Link encap:Ethernet  HWaddr 08:00:27:df:bb:d8
              inet addr:192.168.1.139  Bcast:192.168.1.255  Mask:255.255.255.0
              inet6 addr: fe80::a00:27ff:fedf:bbd8/64 Scope:Link
              UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
              RX packets:2211 errors:0 dropped:0 overruns:0 frame:0
              TX packets:1196 errors:0 dropped:0 overruns:0 carrier:0
              collisions:0 txqueuelen:1000
              RX bytes:182987 (182.9 KB)  TX bytes:125441 (125.4 KB)

    NOTE: I use eth3 as a bridged adapter for SSH'ing into the VM. So now I think I've done everything correctly, but when I try to create the mirror using the following command:

        $ ovs-vsctl -- set Bridge br0 mirrors=@m \
            -- --id=@eth0 get Port eth0 \
            -- --id=@eth1 get Port eth1 \
            -- --id=@m create Mirror name=app1Mirror select-dst-port=eth0 select-src-port=@eth0 output-port=@eth1,eth2

    I get the following error:

        ovs-vsctl: "eth0" is not a valid UUID

    I don't understand why it's not able to find the interfaces.


  • ParallelWork: Feature rich multithreaded fluent task execution library for WPF

    - by oazabir
    ParallelWork is a free, open source helper class that lets you run multiple pieces of work on parallel threads, get success, failure and progress updates on the WPF UI thread, wait for work to complete, abort all work (in case of shutdown), queue work to run after a certain time, and chain parallel work one after another. It's more convenient than using .NET's BackgroundWorker because you don't have to declare one component per work item, nor do you need to declare event handlers to receive notifications and carry additional data through private variables. You can safely pass objects produced on a different thread to the success callback. Moreover, you can wait for work to complete before you do a certain operation, and you can abort all parallel work while it is in flight. If you are building a highly responsive WPF UI where you have to carry out multiple jobs in parallel yet want full control over the completion and cancellation of those parallel jobs, then the ParallelWork library is the right solution for you.

    I am using the ParallelWork library in my PlantUmlEditor project, which is a free open source UML editor built on WPF. You can see some realistic use of the ParallelWork library there. Moreover, the test project comes with 400 lines of Behavior Driven Development flavored tests that confirm it really does what it says it does. The source code of the library is part of the "Utilities" project in the PlantUmlEditor source code hosted at Google Code.

    The library comes in two flavors: one is the ParallelWork static class, which has a collection of static methods that you can call; the other is the Start class, which is a fluent wrapper over the ParallelWork class to make the code more readable and aesthetically pleasing.

    ParallelWork allows you to start work immediately on a separate thread, or you can queue work to start after some duration. You can start immediate work on a new thread using the following methods:

        void StartNow(Action doWork, Action onComplete)
        void StartNow(Action doWork, Action onComplete, Action<Exception> failed)

    For example:

        ParallelWork.StartNow(() =>
        {
            workStartedAt = DateTime.Now;
            Thread.Sleep(howLongWorkTakes);
        },
        () =>
        {
            workEndedAt = DateTime.Now;
        });

    Or you can use the fluent way, Start.Work:

        Start.Work(() =>
        {
            workStartedAt = DateTime.Now;
            Thread.Sleep(howLongWorkTakes);
        })
        .OnComplete(() =>
        {
            workCompletedAt = DateTime.Now;
        })
        .Run();

    Besides simple execution of work on a parallel thread, you can have the parallel thread produce some object and then pass it to the success callback by using these overloads:

        void StartNow<T>(Func<T> doWork, Action<T> onComplete)
        void StartNow<T>(Func<T> doWork, Action<T> onComplete, Action<Exception> fail)

    For example:

        ParallelWork.StartNow<Dictionary<string, string>>(
            () =>
            {
                test = new Dictionary<string, string>();
                test.Add("test", "test");
                return test;
            },
            (result) =>
            {
                Assert.True(result.ContainsKey("test"));
            });

    Or, the fluent way:

        Start<Dictionary<string, string>>.Work(() =>
        {
            test = new Dictionary<string, string>();
            test.Add("test", "test");
            return test;
        })
        .OnComplete((result) =>
        {
            Assert.True(result.ContainsKey("test"));
        })
        .Run();

    You can also start work to happen after some time using these methods:

        DispatcherTimer StartAfter(Action onComplete, TimeSpan duration)
        DispatcherTimer StartAfter(Action doWork, Action onComplete, TimeSpan duration)

    You can use this to perform some timed operation on the UI thread, as well as to perform some operation on a separate thread after some time:

        ParallelWork.StartAfter(
            () =>
            {
                workStartedAt = DateTime.Now;
                Thread.Sleep(howLongWorkTakes);
            },
            () =>
            {
                workCompletedAt = DateTime.Now;
            },
            waitDuration);

    Or, the fluent way:

        Start.Work(() =>
        {
            workStartedAt = DateTime.Now;
            Thread.Sleep(howLongWorkTakes);
        })
        .OnComplete(() =>
        {
            workCompletedAt = DateTime.Now;
        })
        .RunAfter(waitDuration);

    There are several overloads of these functions that take an exception callback for handling exceptions, or that report progress from the background thread while work is in progress. For example, I use it in my PlantUmlEditor to perform a background update of the application:

        // Check if there's a newer version of the app
        Start<bool>.Work(() =>
        {
            return UpdateChecker.HasUpdate(Settings.Default.DownloadUrl);
        })
        .OnComplete((hasUpdate) =>
        {
            if (hasUpdate)
            {
                if (MessageBox.Show(Window.GetWindow(me),
                    "There's a newer version available. Do you want to download and install?",
                    "New version available",
                    MessageBoxButton.YesNo,
                    MessageBoxImage.Information) == MessageBoxResult.Yes)
                {
                    ParallelWork.StartNow(() =>
                    {
                        var tempPath = System.IO.Path.Combine(
                            Environment.GetFolderPath(Environment.SpecialFolder.ApplicationData),
                            Settings.Default.SetupExeName);
                        UpdateChecker.DownloadLatestUpdate(Settings.Default.DownloadUrl, tempPath);
                    },
                    () => { },
                    (x) =>
                    {
                        MessageBox.Show(Window.GetWindow(me),
                            "Download failed. When you run next time, it will try downloading again.",
                            "Download failed",
                            MessageBoxButton.OK,
                            MessageBoxImage.Warning);
                    });
                }
            }
        })
        .OnException((x) =>
        {
            MessageBox.Show(Window.GetWindow(me), x.Message,
                "Download failed", MessageBoxButton.OK, MessageBoxImage.Exclamation);
        });

    The above code shows how to get exception callbacks on the UI thread so that you can take the necessary actions in the UI. Moreover, it shows how you can chain two parallel works to happen one after another.

    Sometimes you want to do some parallel work when the user does some activity on the UI. For example, you might want to save the file in an editor while the user is typing, every 10 seconds. In such a case, you need to make sure you don't start another parallel work every 10 seconds while one is already queued. You need to make sure you start new work only when there's no other background work going on:

        private void ContentEditor_TextChanged(object sender, EventArgs e)
        {
            if (!ParallelWork.IsAnyWorkRunning())
            {
                ParallelWork.StartAfter(SaveAndRefreshDiagram, TimeSpan.FromSeconds(10));
            }
        }

    If you want to shut down your application and want to make sure no parallel work is going on, then you can call the StopAll() method:

        ParallelWork.StopAll();

    If you want to wait for parallel work to complete within a timeout, you can call WaitForAllWork(TimeSpan timeout). It will block the current thread until all the parallel work completes or the timeout period elapses:

        result = ParallelWork.WaitForAllWork(TimeSpan.FromSeconds(1));

    The result is true if all parallel work completed. If it's false, then the timeout period elapsed and not all parallel work completed.

    For details on how this library is built and how it works, please read the following CodeProject article: ParallelWork: Feature rich multithreaded fluent task execution library for WPF, http://www.codeproject.com/KB/WPF/parallelwork.aspx. If you like the article, please vote for me.


  • top Tweets SOA Partner Community – September 2012

    - by JuergenKress
    Send your tweets @soacommunity #soacommunity and follow us at http://twitter.com/soacommunity

        OracleBlogs: Oracle SOA Suite for healthcare integration Dashboard http://ow.ly/1mcJvp
        SOA Community: Lost in Translation – Common Mistakes Interpreting Patterns – Mark Simpson, Griffiths-Waite @ SOA, Cloud & Service…
        ServiceTechSymposium: Matthias Zieger, Accenture just added to the agenda to co-present: "Service Modeling & BPM Business Value Patterns" http://ow.ly/ddu7A
        ServiceTechSymposium: Newly updated session title and abstract: "Big Data and its impact on SOA", by Demed L'Her, Oracle. http://ow.ly/diOq2
        Deepak Arora: To PaaS or SaaS - the latest discussions with customers using SOA Suite - what are your thoughts #soa #soacommunity
        SOA Community: top Tweets SOA Partner Community July 2012 - are you one of them? If yes please rt! https://soacommunity.wordpress.com/2012/08/28/top-tweets-soa-partner-community-august-2012/ … #soacommunity
        Sandor Nieuwenhuijs: Checkout the BeNeLux Architectural Networking Event during Oracle Open World - meet your peers and the experts http://www.ddg-servicecenter.com/networkmanager/oow/architect/default.aspx …
        SOA Community: top Tweets SOA Partner Community – August 2012 http://wp.me/p10C8u-uf
        SOA Community: Follow SOA Community on facebook http://www.facebook.com/soacommunity #soacommunity
        SOA Community: New Service to promote Your SOA & BPM events at http://oracle.com/events for SOA & BPM Specialized Partners Only! #soacommunity #opn #oracle
        Jan van Zoggel: Hotel check, flight check, overview of sessions to visit check http://jvzoggel.wordpress.com/2012/08/27/soa-cloud-servicetech-symposium/ … I'm ready for SOA, Cloud & Service Technology Symposium
        SOA Community: SOA & BPM Specialized Partners Only! New Service to Promote Your SOA & BPM Events at http://oracle.com/events http://wp.me/p10C8u-sH
        SOA Community: Call for content for the next community newsletter. Do you want to publish your success & best practice? Send it @soacommunity #soacommunity
        SOA Community: SOA Adoption in the Brazilian Ministry of Health - Case Study by Ricardo Puttini, University of Brasilia @ SOA, Cloud & Service…
        Jan van Zoggel: Just registered for the 5th International SOA, Cloud & Service Technology Symposium in London. Looking forward to it. http://www.servicetechsymposium.com/
        OTNArchBeat: Want to prepare for Oracle SOA Specialization? @t_winterberg offers a suggestion. http://pub.vitrue.com/5Hqu
        OTNArchBeat: Oracle BPM enable BAM | @deltalounge http://pub.vitrue.com/BCwj
        SOA Community: Presentations & Training material OFM Summer Camps & Impressions & Feedback http://wp.me/p10C8u-sF
        Emiel Paasschens: Nice! Pdf document on how to use a #Oracle #SOA Suite Domain Value Map (DVM) in the OSB: http://bit.ly/RzyS9w #yam
        OracleBlogs: Using Cloud OER to Find Fusion Applications On-Premise Service Concrete WSDL URL http://ow.ly/1m4lz7
        demed: Free VIP pass for @techsymp if you are in London Sep. 24-25. Be the first one to retweet this and I'll DM you details! http://www.servicetechsymposium.com/speaker_bios.php?id=demed_lher …
        Jan van Zoggel: blogpost: Oracle Service Bus duplicate message check using Oracle Coherence caching http://jvzoggel.wordpress.com/2012/08/20/osb-duplicate-message-with-coherence/ …
        OTNArchBeat: Oracle Service Bus duplicate message check using Coherence | @jvzoggel http://pub.vitrue.com/ckY8
        Oracle UPK & Tutor: Synaptis and Oracle Present: Leveraging UPK Throughout the Project Lifecycle: Leveraging UPK throughout the Proj... http://bit.ly/OS2Rbg
        Rolando Carrasco: New entry @ oracleradio http://bit.ly/SEvwwS @soacommunity @oracleace How to identify duplicated messages on Oracle SOA SUITE?
        SOA Community: Business Driven Development (BDD) Demo Now Available! http://wp.me/p10C8u-sf
        OTNArchBeat: Installing Oracle SOA Suite10g on Oracle Enterprise Linux | @lonnekedikmans http://pub.vitrue.com/BEyD
        OTNArchBeat: Best practices for Oracle real-time data integration | Frank Ohlhorst http://pub.vitrue.com/1fH1
        ServiceTechSymposium: New OTN podcast featuring speakers Thomas Erl, Tim Hall and Demed L'Her just published. Tune into 1st 3 parts here: http://ow.ly/d1RRn
        OTNArchBeat: SOA, Cloud, and Service Technologies - Part 4 of 4 - Best selling SOA author Thomas Erl talks about the latest title... http://ow.ly/1m0txY
        SOA Community: Win a free conference pass for the SOA, Cloud + Service Technology Symposium – become a soacommunity facebook fan!…
        Lonneke Dikmans: VENNSTER BLOG: Installing Oracle SOA Suite10g on Oracle Enterpris... http://blog.vennster.nl/2012/08/installing-oracle-soa-suite-10g-on.html?spref=tw …
        PeterPaul vande Beek: published a blog on exporting Oracle #BPM metrics to a #DWH http://www.deltalounge.net/wpress/2012/08/export-oracle-bpm-metrics-to-a-data-warehouse/ … #soacommunity
        SOA Community: Do you follow us on facebook http://www.facebook.com/soacommunity #soacommunity
        C2B2 Consulting: Cloud-based Enterprise Architecture by Steve Millidge, C2B2 Consulting @ SOA, Cloud & Service … http://wp.me/p10C8u-sv via @soacommunity
        Gertjan van het Hof: Storing SCA Metadata in the Oracle Metadata Services Repository http://www.oracle.com/technetwork/articles/soa/fonnegra-storing-sca-metadata-1715004.html?msgid=3-6903117805 …
        arjankramer: Encrypted OSB Service account passwords http://dlvr.it/20hbNV
        Richard van Tilborg: BPM the Battle http://lnkd.in/yFAJaW
        OTNArchBeat: Using Cloud OER to Find Fusion Applications On-Premise Service Concrete WSDL URL | @RahejaRajesh http://pub.vitrue.com/YDCD
        SOA Proactive: Webcast: Introduction to SOA Human Workflow, 8/23, 10 AM EDT. Register @ http://bit.ly/Nx77sY
        Lucas Jellema: Programmatically admnistration of OSB using JXM & MBeans. Interesting example is given in https://blogs.oracle.com/ateamsoab2b/entry/automatic_disabling_proxy_service_when …
        orclateamsoa: A-Team Blog #ateam: Automatically Disable Proxy Service to avoid overloading OSB http://ow.ly/1lXGKV
        Atul_Kumar: Oracle Enterprise Gateway – OEG 11gR1 (11.1.1.*) for beginners http://goo.gl/fb/EJboE
        Estafet Limited: Advanced SOA Boot camp @soacommunity in Munich was excellent. @wlscommunity Learnt a lot and liked the format.
        SOA Community: Oracle Fusion Applications Design Patterns Now Available For Developers by Ultan O'Broin http://wp.me/p10C8u-sd

    SOA & BPM Partner Community: For regular information on Oracle SOA Suite become a member in the SOA & BPM Partner Community. For registration please visit www.oracle.com/goto/emea/soa (OPN account required). If you need support with your account please contact the Oracle Partner Business Center.

    Technorati Tags: SOA Community twitter, SOA Community, Oracle SOA, Oracle BPM, BPM Community, OPN, Jürgen Kress


  • Robotic Arm – Hardware

    - by Szymon Kobalczyk
    This is the first in a series of articles about a project I've been building in my spare time since last summer. It all began when I was researching the topic of modeling human motion kinematics in order to create a gesture recognition library for Kinect. This ties heavily into the motion theory of robotic manipulators, so I also glanced at some designs of robotic arms. Somehow I stumbled upon this cool-looking open source robotic arm. It was featured on Thingiverse and published by user jjshortcut (Jan-Jaap). Since I have been hooked for some time on toying with microcontrollers, robots and other electronics, I decided to give it a try and build it myself. In this post I will describe the hardware build of the arm, and in later posts I will be writing about the software to control it.

    Another reason to build the arm myself was the cost factor. Even small commercial robotic arms are quite expensive – products from Lynxmotion and Dagu look great, but both cost around USD $300 (actually there is one cheap arm available, but it looks more like a toy to me). In comparison, this design is quite cheap. It uses seven hobby-grade servos, and even the cheapest ones should work fine. The structure is built from a set of laser-cut parts connected with a few metal spacers (15mm and 47mm) and lots of M3 screws. Other than that, you'd only need a microcontroller board to drive the servos. So in total it comes out a lot cheaper to build it yourself than to buy an off-the-shelf robotic arm. Oh, and if you don't like this one, there are a few more robotic arm projects at Thingiverse (including one by oomlout).

    Laser cut parts

    Some time ago I built another robot using laser-cut parts, so I knew the process already. You can grab the design files in both DXF and EPS format from Thingiverse, and there are also 3D models of each part in STL. Actually, the design is split into a second project for the mini servo gripper (there is also a standard servo version available, but it won't fit this arm). I wanted to make some small adjustments to the layout and add measurements to the parts before sending them for cutting. I've looked at some free 2D CAD programs, and finally did all this work using QCad 3 Beta, which worked great for me (I also tried LibreCAD, but it didn't work that well). All parts are cut from 4 mm thick material. Because I was worried that acrylic is too fragile and might break, I also ordered another set cut from plywood. In the end I built it from plywood because it was easier to glue (I was told acrylic requires a special glue). Btw, I found a great laser cutter service in Kraków and highly recommend it (www.ebbox.com.pl). It cost me only USD $26 for both sets ($16 acrylic + $10 plywood).

    Metal parts

    I bought all the M3 screws and nuts at a local hardware store. Make sure to look for nylon lock (nyloc) nuts for the gripper, because otherwise it unscrews and comes apart quickly. I couldn't find a local store with metal spacers and had to order them online (you'd need 11 x 47mm and 3 x 15mm). I think I paid less than USD $10 for all the metal parts.

    Servos

    This arm uses five standard-size servos to drive the arm itself, and two micro servos on the gripper. The author of the project used the Modelcraft RS-2 Servo and Modelcraft ES-05 HT Servo. I had two Futaba S3001 servos lying around, and ordered additional TowerPro SG-5010 standard-size servos and TowerPro SG90 micro servos. However, it turned out that the SG90 won't fit in the gripper, so I had to replace it with a slightly smaller E-Sky EK2-0508 micro servo. Later it also turned out that the Futaba servos make some strange noise while working, so I swapped one with a TowerPro SG-5010, which has higher torque (8 kg/cm). I also bought three servo extension cables. All servos cost me USD $45.

    Assembly

    The build process is not difficult, but you need to think carefully about the order of assembly. You can do the base and upper arm first. Because the two servos in the base are close together, you need to put the first one in with one piece of the lower arm already connected before you put in the second servo. Then you connect the upper arm and finally add the second piece of the lower arm to hold it together. The gripper and base require some gluing, so think that through too. Make sure to look closely at all the photos on Thingiverse (also other people's copies) and read the additional posts on jjshortcut's blog: "My mini servo grippers and completed robotic arm" and "Multiply the robotic arm and electronics". There is also Rob's copy cut from aluminum. My assembled arm looks like this – I think it turned out really nice.

    Servo controller board

    The last piece of hardware I needed was an electronic board that would take commands from the PC and drive all seven servos. I could probably have used an Arduino for this task, and in fact there are several Arduino servo shields available (for example from Adafruit or Renbotics). However, one problem is that most support only up to six servos, and another is that their accuracy is limited by the Arduino's timer frequency. So instead I looked for a dedicated servo controller and found the series of Maestro boards from Pololu. I picked the Pololu Mini Maestro 12-Channel USB Servo Controller. It has many nice features, including a native USB connection, high resolution pulses (0.25µs) with no jitter, built-in speed and acceleration control, and even scripting capability. Another cool feature is that besides servo control, each channel can be configured as either general input or output. So far I'm using seven channels, so I still have five available to connect some sensors (for example, a distance sensor mounted on the gripper might be useful). And the last but important factor was that they have an SDK in .NET – what more could I wish for! The board itself is very small – half the size of a Tic-Tac box. I picked one up for about USD $35 in this store. Perhaps another good alternative would be the Phidgets Advanced Servo 8-Motor, but it is significantly more expensive at USD $87.30.

    The Maestro Controller Driver and Software package includes the Maestro Control Center program, which lets you immediately configure the board. For each servo I first figured out its range of movement and set the min/max limits. I played with setting the speed and acceleration values as well. A big issue for me was that there are two servos that control the position of the lower arm (shoulder joint), and both have to be moved at the same time. This is where the scripting feature of the Pololu board turned out to be very helpful. I wrote a script that synchronizes the position of the second servo with the first one – so now I only need to move one servo and the other will follow automatically. This turned out to be tricky because I couldn't find a simple offset mapping of the move range for each servo – I had to divide it into several sub-ranges and map each individually. The scripting language is a bit assembler-like but gets the job done, and there is even runtime debugging and a stack view available. Altogether I'm very happy with the Pololu Mini Maestro Servo Controller, and with this final piece I completed the build and was able to move my arm from the Maestro Control Center program.

    The total cost of my robotic arm was:

        $10   laser cut parts
        $10   metal parts
        $45   servos
        $35   servo controller
        ----
        $100  total

    So here you have all the information about the hardware. In the next post I'll start talking about the software, which I wrote in Microsoft Robotics Developer Studio 4. Stay tuned!


  • Video on Architecture and Code Quality using Visual Studio 2012 – interview with Marcel de Vries and Terje Sandstrom by Adam Cogan

    - by terje
    Find the video HERE. Adam Cogan did a great Web TV interview with Marcel de Vries and myself on the topics of architecture and code quality. It was real fun participating in this session. Although we know each other from the MVP ALM community, Marcel, Adam and I hadn't worked together before. It was very interesting to see how we agreed on so many terms, and how alike we were thinking. The basics of ensuring you have a good architecture, and how you can document it, is one thing. There was also the same agreement on the importance of having a high quality code base, and on how we used the Visual Studio 2012 tools, and some others (NDepend, for example), to measure and ensure that the code quality was where it should be. As the tools, methods and thinking popped up during the interview, it was a lot of "Hey! I do that too!". The tools are not only for "after the fact" work; we use them during coding. That way the tools become an integrated part of our coding work and help us find issues we may have overlooked. The video has a bunch of call-outs pinpointing important things to remember; these are also listed on the corresponding web page. I haven't seen that touch before, but really liked this way of doing it – it makes it much easier to spot the highlights. Titus Maclaren and Raj Dhatt from SSW have done a terrific job producing this video. And thanks to Lei Xu for doing the camera and recording job. Thanks guys!

    Also, if you are at TechEd Amsterdam 2012, go and listen to Adam Cogan in his session "A modern architecture review: Using the new code review tools" (Friday 29th, 10.15-11.30) and Marcel de Vries' session "IntelliTrace, what is it and how can I use it to my benefit" (Wednesday 27th, 5-6.15).

    The highlights point out some important practices. I'll elaborate on a few of them here:

    Add instructions on how to compile the solution. You do this by adding a text file with instructions to the solution, and keeping it under source control. These instructions should contain what is needed on top of a standard install of Visual Studio. I do a lot of code reviews, and more often than not I am not even able to compile the program, because some tool or library that needs to be installed has been used. The same applies to any new developer who enters the team, so do this to increase your productivity when the team changes or a team member switches computers. Don't forget to document what you have to configure on the computer, IIS being a common one. The more automatic you can make this, the better; use NuGet to pull down libraries. When the text document gets to more than, say, half a page, with a bunch of different things to do, convert it into a PowerShell script instead.

    The metrics warning levels. These are set very conservatively by Microsoft. You rarely see anything but green, and besides, you should have color scales for each of the metrics. I have a blog post describing a more appropriate set of levels, based on both research work and industry "best practices". The essential limits are:

        Cyclomatic complexity and coupling (higher numbers are worse), on method level:
            Green:        0 to 10
            Yellow:       10 to 20 (some say 15). Acceptable, but have a look to see if there is something unneeded here.
            Red:          20 to 40. Action required; get these down.
            Bleeding red: above 40. This is the real red alert. Immediate action! (My invention, as people have asked what I do when I have a cyclomatic complexity of 150. The only answer I could think of was: RUN!)

        Maintainability index (lower numbers are worse, scale from 0 to 100), on method level:
            Green:        60 to 100
            Yellow:       40 to 60. You will always have methods here too; accept the higher ones, but take a look at those down toward the lower limit and check them against the other metrics.
            Red:          20 to 40. Action required; fix these.
            Bleeding red: below 20. Immediate action required.

    When doing metrics analysis, you should leave the generated code out. You do this by adding attributes; unfortunately Microsoft has "forgotten" to add these to all of their stuff, so you might have to add them to some of the code yourself. In most cases it can be done so that it is not overwritten by a new round of code generation. Take a look at my blog post here for details on how to do that.

    Class-level metrics might also be useful, at least for coupling and maintenance, but it is much more difficult to set any fixed limits on those. Metric aggregations at a higher level tend to be pretty useless, as the number of methods varies quite a bit, and there is little science on what number of methods can be regarded as good or bad. NDepend has a recommendation, but they say it may vary too. And in these days of data binding, the number might be pretty high, as properties count as methods. However, if you take the worst-case situations - classes with more than 20 methods are suspicious, and coupling and cyclomatic complexity go red above 20 - any class above 20x20 = 400 on these measures combined should be checked over.

    In the video we mention the SOLID principles, coined by "Uncle Bob" (Robert C. Martin). One of them, the Dependency Inversion principle, we discuss in the video. It is important to note that this principle is NOT about whether you should use a Dependency Inversion Container or not; it is about how you design the interfaces and interactions between your classes. The Dependency Inversion Container is just one technique based on this principle, whose main purpose is to isolate things you would like to change at runtime, for example if you implement a plug-in architecture. Overuse of a Dependency Inversion Container is, however, NOT a good thing. It should be used for a purpose, and not as a general DI solution. The general DI thinking, however, is useful far beyond the DIC. You should always "program to an abstraction", and not to the concreteness.

    We also talk a bit about the GRASP patterns, a term coined by Craig Larman in his book Applying UML and Design Patterns. GRASP stands for General Responsibility Assignment Software Patterns, and these patterns describe fundamental principles of object design and responsibility assignment. What I find great about these patterns is that they are another way to focus on the responsibility of a class. One of the things I most often find broken in software designs is that classes lack responsibility, and as a result there are a lot of classes mucking around in the internals of other classes.

    We also discuss the term "code smells". This term was invented by Kent Beck and Martin Fowler when they worked on Fowler's "Refactoring" book. A code smell is a set of "bad" coding practices that are the drivers behind a corresponding set of refactorings. Here is a good list of the smells and their corresponding refactoring patterns. See also this.
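
    For illustration, the attribute typically used to mark generated code is System.CodeDom.Compiler.GeneratedCodeAttribute, which analysis runs configured to suppress results from generated code will then skip. A minimal sketch; the tool name, version and class are hypothetical:

        using System.CodeDom.Compiler;

        // Marking a type as generated so metrics/analysis tools that honor the
        // attribute leave it out of the numbers. The two strings are free-form
        // (typically the generator's name and version).
        [GeneratedCode("MyModelGenerator", "1.0")]
        public partial class CustomerDto
        {
            public string Name { get; set; }
            public int Age { get; set; }
        }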


  • CodePlex Daily Summary for Tuesday, April 06, 2010

    New Projects

        ASP.NET MVC | SCAFFOLD: Add-in for Visual Studio 2008 that adds a powerful scaffold for ASP.NET MVC, with support for the Entity Framework.
        ASP.Net Permission Manager: This is an extension of ASP.Net Permission Manager that adds permission to roles.
        Babelfish.NET: Babelfish was created as a common framework for navigating several different node-to-node structured data sources, such as HTML, CSS, Javascript, X...
        CollaSuite: Collaboration Suite, Chat Client Server
        dnyFramework: Denny FrameWork
        DocxToHtml: DocxToHtml
        Domain Driven Design and ASP.NET MVC 2 sample: It's a simple ASP.NET MVC 2 application with a DDD modeling approach. It's about how to build maintainable applications applying DDD, IoC and infrast...
        DRP Address Book: A web based address book implementation using SQL Server 2008, ASP.NET, C#, and CSLA.NET
        FileSystemHelper SQL Server CLR: FileSystemHelper SQL Server CLR provides a collection of CLR stored procedures and functions for interacting with the file system. Using these sto...
        Foothill: This is an asp.net Web App
        HouseFly controls: Controls for my upcoming app: HouseFly
        iTunes Artwork App: This project is related to my iTunes Artwork App blog series. The application will automate the process of collecting album art for music tracks i...
        Logwiz - Automate the collection of Performance monitor logs using logman.exe: This tool is used to automate the process of collecting Performance monitoring data using logman.exe on Windows Vista/Windows 7/Windows 2008 an...
        MailSharp - Beyond MailMessage: An easy-to-use library for .NET developers to send HTML formatted emails using templates with merge tags and embedded images instead of pointing at...
        MSTests.Fluently: MSTests.Fluently makes it easier for developers and testers to read and write tests with the Visual Studio Unit-Testing Framework. The Sentence-lik...
        openSIS dot net - Open Source SIS written in C#, built on dotnet 3.5 framework: openSIS dotnet is the dot net version of the popular openSIS Student Information System from OS4ED. This openSIS version is written in C# and is ba...
        PHP.net: PHP.net is a PHP IDE written in C# for Windows. The IDE will eventually be a complete standalone PHP development environment, including a developmen...
        Recommender System for Optus Website: This project is trying to apply some recommender system techniques to telecom company websites. This project ...
        Sendkeys: This is a tool for remote controlling any Windows Application.
        Shamil: Shamil Work
        Site Directory for SharePoint 2010 (from Microsoft Consulting Services, UK): A solution which provides 'site directory' functionality for SharePoint 2010. Refer to [file:Solution Description|Microsoft.MCSUK.SPSiteDirectory...
        SPD Workflow action to add user to a security group: This is a custom SPD workflow step developed to facilitate the process of adding users from a list to the security group. Keep in mind this is run...
        Star Trooper for XNA 2D Tutorial: Source for the Star Trooper XNA 2d Tutorial on XNA-UK (www.XNA-UK.co.uk), including the full set of code and each phase of the tutorial. Additio...
        TFS WitAdminUI: Team Foundation Server 2010 RC WitAdmin simple application with UI
        Windows Phone 7 Panorama control: The Windows Phone 7 Panorama control is a sample implementation of a Silverlight control that allows to create "Hub" applications on Windows Phone ...
        Yulu: Yulu helps you maintain short quotations or your thoughts with your Windows Mobile phones.

    New Releases

        ASP .NET MVC CMS (Content Management System): Atomic CMS 2.0: Atomic CMS 2.0 was released. Please visit http://atomiccms.com/ for download, documentation, the latest release, and more information about Atomic CMS ...
        ASP.Net Permission Manager: Mal.Web.Security.dll v1.0.2.0: Mal.Web.Security.dll Release v1.0.2.0
        CycleMania Starter Kit EAP - ASP.NET 4 Problem - Design - Solution: Cyclemania 0.08.48: The application now uses Windows Communication Foundation services. See Source Code tab for other recent changes.
        dotNetInstaller: setup bootstrapper for Windows: 1.10 (Development): Build 1.10.6588.0. Features: added support for .exe setup components with an optional response file; added has_value_disabled option to user-de...
        Examine: RC 1: This is the Examine RC1 release. It includes: Examine, UmbracoExamine, Lucene.Net 2.9.2
        Extend SmallBasic: Teaching Extensions v.010: Improved the pentagon crazy quiz
        FileSystemHelper SQL Server CLR: FileSystemHelper CLR Project: Source code for the FileSystemHelper CLR assembly.
        GameStore League Manager: League Manager 1.0.5-Logging: Added logging functionality to track down bugs.
        iSun Shut - PC Auto Shutdown: iSun Shut 2.5: Release notes: to properly view the source code please install DotNetBar 8.3 (http://www.devcomponents.com); the Shutdown after firefox download f...
        LINQ to Twitter: LINQ to Twitter Beta v2.0.10: New items added since v1.1 include: support for OAuth (via DotNetOpenAuth), secure communication via https, VB language support, serialization of ...
        MIC Pattern: MIC Pattern DAL: Data Access Layer. This file contains the DLL that performs data access and simplifies INSERT, UPDATE, DELETE and SELECT operations against databases ...
        MVC Foolproof Validation: Alpha 0.1: Server side validation is stable. Client side validation is fairly stable aside from some border cases I hope to address soon. I'm actually using t...
        OpenGL ES 2.0 Compact Framework Wrapper: First binary release: CAB installer for installing the sample application provided with the solution. Demonstrates a simple quad with rotation animation. Changes from l...
        patterns & practices SharePoint Guidance: SPG2010 Drop8: SharePoint Guidance Drop Notes, Microsoft patterns and practices ...
        PROGRAMMABLE SOFTWARE DEVELOPMENT ENVIRONMENT: PROGRAMMABLE SOFTWARE DEVELOPMENT ENVIRONMENT - V3: The Beta Version 3 of the Programmable Software Development Environment features the random generator, longitudinal and cryptographic commands whi...
        RoTwee: RoTwee (9.0.0.0): New feature in this version: 17102 Tweet rotated count.
        SharePhone: SharePhone v.1.0.3: Added search functionality. Use clientContext.SearchProvider.Search(..) or clientContext.SearchProvider.KeywordSearch(..) A few examples here: ht...
        SharePoint Outlook Connector: Version 1.2.4.3: UI has been improved. Some bugs have been resolved.
        SPD Workflow action to add user to a security group: Version 1 custom workflow action: A custom SPD workflow step that automatically adds a user to the correct security group; the user name can be driven from a list item or document li...
        SQL Server Metadata Toolkit 2008: SQL Server Metadata Toolkit Alpha 5: This release addresses Issue 10567, which was a recursive view recursing more than 100 times. This was caused by the addition of SQL parsing in...
        TFS WitAdminUI: WitAdminUI ver1.0: Download the zip file and unzip it to TFS2010 RC, then execute WitAdminUI.exe. Because WitAdmin is made with .NET v4.0, I can't package my application with MSI.
        TFTP Server: TFTP Server 1.0 Installer: Installer for the binary release of TFTP server v 1.0
        VivoSocial: VivoSocial 7.1.0: Version 7.1.0 of VivoSocial has been released. If you experienced any issues with the previous version, please update your modules to the 7.1.0 rel...
        WAFFLE: Windows Authentication Functional Framework (LE): 1.3 (Development): Build 1.3.9740.0. Features: added waffle-jna-auth.jar, native Java with JNA port. Misc: project upgraded to Visual Studio 2008.

    Most Popular Projects

        WBFS Manager
        ASP.NET Ajax Library
        Image Resizer Powertoy Clone for Windows
        Skype Voice Changer
        All-In-One Code Framework
        Windows Live Calendar Gadget
        MDownloader
        Windows 7 USB/DVD Download Tool
        Droid Explorer
        EnhSim

    Most Active Projects

        Graffiti CMS
        nopCommerce. Open Source online shop e-commerce solution.
        Facebook Developer Toolkit
        Rawr
        patterns & practices – Enterprise Library
        jQuery Library for SharePoint Web Services
        Shweet: SharePoint 2010 Team Messaging built with Pex
        Farseer Physics Engine
        Ncqrs Framework - The CQRS framework for .NET
        Ionics Isapi Rewrite Filter

    Read the article

  • CodePlex Daily Summary for Sunday, June 06, 2010

    CodePlex Daily Summary for Sunday, June 06, 2010New ProjectsActive Worlds Dot Net Wrapper (Based on AwSdk): Active Worlds Dot Net Wrapper (Based on AwSdk)Combina: Smart calculator for large combinatorial calculations.Concurrent Cache: ConcurrentCache is a smart output cache library extending OutputCacheProvider. It consists of in memory, cache files and compressed files modes and...Decay: Personal use. For learningFazTalk: FazTalk is a suite of tools and products that are designed to improve collaboration and workflow interactions. FazTalk takes an innovative approach...grouped: A peer to peer text editor, written in C# [update] I wrote this little thing a while back and even forgot about it, I stopped coding for more tha...HitchARide MVC 2 Sample: An MVC 2 sample written as part of the Microsoft 2010 London Web Camp based on the wireframes at http://schematics.earthware.co.uk/hitcharide. Not...Inspiration.Web: Description: A simple (but entertaining) ASP.NET MVC (C#) project to suggest random code names for projects. Intended audience: People who ne...NetFileBrowser - TinyMCE: tinyMCE file plugin with asp.netOil Slick Live Feeds: All live feeds from BP's Remotely Operated VehiclesParticle Lexer: Parser and Tokenizer libraryPdf Form Tool: Pdf Form Tool demonstrates how the iTextSharp library could be used to fill PDF forms. The input data is provided as a csv file. The application ...Planning Poker Windows Mobile 7: This project is a Planning Poker application for Windows Mobile 7 (and later?). RandomRat: RandomRat is a program for generating random sets that meet specific criteriaScience.NET: A scientific library written in managed code. It supports advanced mathematics (algebra system, sequences, statistics, combinatorics...), data stru...Spider Compiler: Spider Compiler parses the input of a spider programming source file and compiles it (with help of csc.exe; the C#-Compiler) to an exe-file. This p...Sununpro: sunun's project for study by team foundation server.TFS Buddy: An application that manipulates your I-Buddy whenever something happens in your Team Foundation ServerValveSoft: ValveSysWiiMote Physics: WiiMote Physics is an application that allows you to retrieve data from your WiiMote or Balance Board and display it in real-time. It has a number...WinGet: WinGet is a download manager for Windows. You can drag links onto the WinGet Widget and it will download a file on the selected folder. It is dev...XProject.NET: A project management and team collaboration platformNew Releases.NET DiscUtils: Version 0.9 Preview: This release is still under development. New features available in this release: Support for accessing short file names stored in WIM files Incr...Active Worlds Dot Net Wrapper (Based on AwSdk): Active World Dot Net Wrapper (0.0.1.85): Based on AwSdk 85AwSdk UnOfficial Wrapper Howto Use: C# using AwWrapper; VB.Net Import AwWrapperAjaxControlToolkit additional extenders: ZhecheAjaxControls for .NET3.5: Used AJAX Control Toolkit Release Notes - April 12th 2010 Release Version 40412. Fixed deadlock in long operation canceling Some other fixesAnyCAD: AnyCAD.v1.2.ENU.Install: http://www.anycad.net Parametric Modeling *3D: Sphere, Box, Cylinder, Cone •2D: Line, Rectangle, Arc, Arch, Circle, Spline, Polygon •Feature: Extr...Community Forums NNTP bridge: Community Forums NNTP Bridge V29: Release of the Community Forums NNTP Bridge to access the social and anwsers MS forums with a single, open source NNTP bridge. 
This release has ad...Concurrent Cache: 1.0: This is the first release for the ConcurrentCache library.Configuration Section Designer: 2.0.0: This is the first Beta Release for VS 2010 supportDoxygen Browser Addin for VS: Doxygen Browser Addin - v0.1.4 Beta: Support for Visual Studio 2010 improved the logging of errors (Event Logs) Fixed some issues/bugs Hot key for navigation "Control + F1, Contr...Folder Bookmarks: Folder Bookmarks 1.6.2: The latest version of Folder Bookmarks (1.6.2), with new UI changes. Once you have extracted the file, do not delete any files/folders. They are n...HERB.IQ: Beta 0.1 Source code release 5: Beta 0.1 Source code release 5Inspiration.Web: Initial release (deployment package): Initial release (deployment package)NetFileBrowser - TinyMCE: Demo Project: Demo ProjectNetFileBrowser - TinyMCE: NetFileBrowser: NetImageBrowserNLog - Advanced .NET Logging: Nightly Build 2010.06.05.001: Changes since the last build:2010-06-04 23:29:42 Jarek Kowalski Massive update to documentation generator. 2010-05-28 15:41:42 Jarek Kowalski upda...Oil Slick Live Feeds: Oil Slick Live Feeds 0.1: A the first release, with feeds from the MS Skandi, Boa Deep C, Enterprise and Q4000. They are live streams from the ROV's monitoring the damaged...Pcap.Net: Pcap.Net 0.7.0 (46671): Pcap.Net - June 2010 Release Pcap.Net is a .NET wrapper for WinPcap written in C++/CLI and C#. It Features almost all WinPcap features and includes...sqwarea: Sqwarea 0.0.289.0 (alpha): API supportTFS Buddy: TFS Buddy First release (Beta 1): This is the first release of the TFS Buddy.Visual Studio DSite: Looping Animation (Visual C++ 2008): A solider firing a bullet that loops and displays an explosion everytime it hits the edge of the form.WiiMote Physics: WiiMote Physics v4.0: v4.0.0.1 Recovered from existing compiled assembly after hard drive failure Now requires .NET 4.0 (it seems to make it run faster) Added new c...WinGet: Alpha 1: First Alpha of WinGet. It includes all the planned features but it contains many bugs. Packaged using 7-Zip and ClickOnce.Most Popular ProjectsWBFS ManagerRawrAJAX Control ToolkitMicrosoft SQL Server Product Samples: DatabaseSilverlight ToolkitWindows Presentation Foundation (WPF)PHPExcelpatterns & practices – Enterprise LibraryMicrosoft SQL Server Community & SamplesASP.NETMost Active ProjectsCommunity Forums NNTP bridgeRawrpatterns & practices – Enterprise LibraryGMap.NET - Great Maps for Windows Forms & PresentationN2 CMSIonics Isapi Rewrite FilterStyleCopsmark C# LibraryFarseer Physics Enginepatterns & practices: Composite WPF and Silverlight

    Read the article

  • Why Fusion Middleware matters to Oracle Applications and Fusion Applications customers?

    - by Harish Gaur
Did you miss this general session on Monday morning presented by Amit Zavery, VP of Oracle Fusion Middleware Product Management? There will be a recording made available shortly; in the meanwhile, here is a recap.

Amit presented 5 strategies customers can leverage today to extend their applications. Figure 1: 5 Oracle Fusion Middleware strategies to extend Oracle Applications & Oracle Fusion Apps

1. Engage Everyone – Provide an intuitive and social experience for application users using Oracle WebCenter
2. Extend Enterprise – Extend Oracle Applications to mobile devices using Oracle ADF Mobile
3. Orchestrate Processes – Automate key organization processes across on-premise & cloud applications using Oracle BPM Suite & Oracle SOA Suite
4. Secure the Core – Provide single sign-on and self-service provisioning across multiple apps using Oracle Identity Management
5. Optimize Performance – Leverage the Exalogic stack to consolidate multiple instances and improve performance of Oracle Applications

The session included 3 demonstrations to illustrate these strategies.

1. The first demo highlighted the significance of mobile applications for unlocking existing investment in applications such as EBS. Using a native iPhone application interacting with e-Business Suite, the demo showed how expense approval can be mobile enabled with enhanced visibility using BI dashboards.
2. The second demo showed how you can extend a banking process in Siebel and Oracle Policy Automation with Oracle BPM Suite. The process starts in Siebel with a customer requesting a loan, then jumps to OPA for loan recommendations and decision making, with loan processing and approvals handled in BPM Suite. Once approvals are completed, Siebel is updated to complete the process.
3. The final demo showcased FMW components inside Fusion Applications, specifically WebCenter.

Boeing, Underwriter Laboratories and Electronic Arts joined this quest and discussed 3 different approaches to leveraging the Fusion Middleware stack to maximize their investment in Oracle Applications and/or Fusion Applications technology. Let's briefly review what these customers shared during the session:

1. Extend Fusion Applications

We know that Oracle Fusion Middleware is the underlying technology infrastructure for Oracle Fusion Applications. Architecturally, Oracle Fusion Apps leverages several components of Oracle Fusion Middleware, from Oracle WebCenter for a rich collaborative interface, to Oracle SOA Suite & Oracle BPM Suite for orchestrating key underlying processes, to Oracle BIEE for dashboarding and analytics.

Boeing talked about how they are using Oracle BPM Suite 11g, a key component of Oracle Fusion Middleware, with Oracle Fusion Apps to transform their supply chain. Tim Murnin, Director of Supply Chain, talked about Boeing's 5-year supply chain transformation journey. Boeing's Integrated and Information Management division began with automation of the critical RFQ process using Oracle BPM Suite. This first phase resulted in a 38% reduction in labor costs for RFPs. As a next step in this effort, Boeing is now creating a platform to enable electronic Order Management. Fusion Apps are playing a significant role in this phase. Boeing has gone live with Oracle Fusion Product Hub, and efforts are underway with Oracle Fusion Distributed Order Orchestration (DOO).

So, where does Oracle BPM Suite 11g fit in this equation? Let me explain. Business processes within Fusion Apps are designed using 2 standards: Business Process Execution Language (BPEL) and Business Process Modeling Notation (BPMN). These processes can be easily configured using a declarative set of tools. Boeing leverages Oracle BPM Suite 11g (which supports BPMN 2.0) and Oracle SOA Suite (which supports BPEL) to "extend" these applications. Traditionally, customizations are done within an app using native technologies. Instead of making process changes within Fusion Apps, however, Boeing has taken the approach of building an "extensions" layer on top of the application. Fig 2: Boeing's use of Oracle BPM Suite to orchestrate key supply chain processes across Fusion Apps

2. Maximize Oracle Applications investment

Fusion Middleware appeals not only to Fusion Apps customers, but is also leveraged significantly by Oracle E-Business Suite, PeopleSoft, Siebel and JD Edwards customers. Using Oracle BPM Suite and Oracle SOA Suite is the recommended extension strategy for Oracle Fusion Apps and Oracle Applications Unlimited customers.

Electronic Arts, an E-Business Suite customer, spoke about their strategy to transform their order-to-cash process using Oracle SOA Suite, Oracle Foundation Packs and Oracle BAM. Udesh Naicker, Sr Director of IT at Electronic Arts (EA), discussed how the growth of social and digital gaming had started to put tremendous pressure on EA's existing IT infrastructure. He discussed the challenge of millions of micro-transactions coming from several sources: Microsoft Xbox, PayPal, and several service providers. EA found its order-to-cash processes stretched to their limits. They lacked visibility into these transactions across the entire value chain.

EA began by consolidating their E-Business Suite R11 instances into a single E-Business Suite R12 instance. EA needed to cater to a variety of service requirements, connectivity methods, file formats, and information latency. Their integration strategy was tactical, i.e., using file uploads, TIBCO, and SQL scripts. After consolidating E-Business Suite, EA standardized their integration approach with Oracle SOA Suite and Oracle AIA Foundation Pack. Oracle SOA Suite is the platform used to extend E-Business Suite R12 and standardize 60+ interfaces across several heterogeneous systems including PeopleSoft, Demantra, SF.com, Workday, and managed EDI services spanning on-premise, hosted and cloud applications.

EA believes that the Oracle SOA Suite 11g based extension strategy has helped significantly in the following ways:
- It helped them keep customizations out of E-Business Suite, thereby keeping EBS R12 vanilla and upgrade safe
- Developers are now proficient in technology which is also leveraged by Fusion Apps; this has helped them prepare for adoption of Fusion Apps in the future
Fig 3: Using Oracle SOA Suite & Oracle e-Business Suite, Electronic Arts built a new platform for order processing

3. Consolidate apps and improve scalability

Exalogic is an optimal platform for customers to consolidate their application deployments and enhance performance. Underwriter Laboratories talked about their strategy to run their mission-critical applications, including e-Business Suite, on Exalogic. Christian Anschuetz, CIO of Underwriter Laboratories (UL), shared how UL is on a growth path ($1B to $2.5B in 5 years) and planning a significant business transformation from a not-for-profit to a for-profit business. To support this growth, UL is planning to simplify its IT environment and the deployment complexity associated with ERP applications and the technology they run on. Their current applications were deployed on a variety of hardware platforms and lacked a comprehensive disaster recovery architecture.

UL embarked on a mission to deploy E-Business Suite on Exalogic. UL's solution is unique because it is one of the first to deploy a large number of Oracle applications and related Fusion Middleware technologies (SOA, BI, Analytical Applications, AIA Foundation Pack and the AIA EBS-to-Siebel UCM prebuilt integration) on the combined Exalogic and Exadata environment. UL is planning to move to a virtualized architecture toward the end of 2012 to securely host external-facing applications like iStore. Fig 4: Underwriter Labs deployed e-Business Suite on Exalogic to achieve performance gains

Key takeaways are:
- The Fusion Middleware platform is certified with major Oracle Applications Unlimited offerings. Fusion Middleware is the underlying technology infrastructure for Fusion Apps
- Customers choose Oracle Fusion Middleware to extend their applications (Apps Unlimited or Fusion Apps) to keep applications upgrade safe and prepare for Fusion Apps
- Exalogic is an optimal platform to consolidate application deployments and enhance performance

TAGS: Fusion Apps, Exalogic, BPM Suite, SOA Suite, e-Business Suite Integration

    Read the article

  • CodePlex Daily Summary for Sunday, May 16, 2010

    CodePlex Daily Summary for Sunday, May 16, 2010New Projects3D Calculator: 3D Calc is a simple calculator application for Windows Phone 7, the purpose of this project is to demo the 3D animations capabilities of WP7 and sh...azaleas: AzaleasBlueset Studio Opensource Projects: Only for Opensource projects form Blueset Studio.Breck: A Phoenix and Jumper Moneky Production: Breck is a first person non-violent shooter developed in C++ and Dark GDK. After the main game is developed we are looking into making a sequel or...Discuz! Forum SDK: This project is use to login in and post or reply topic on discuz forum.Dominion.NET: Evolving Dominion source code originally written in VB6 and posted by "jatill" on Collectible Card Game Headquarters. Migration of the design and s...EkspSys2010-ITR: A mini project for the course Experimental System devolopment in spring 2010Facebook Graph Toolkit: This project is a .Net implementation of the Facebook Graph API. The aim of this project is to be a replacement to the existing Facebook Toolkit (h...iFree: This is a solution for Vietnamese network socialInfoPath Editor for Developer: InfoPath Editor for developer allows user to modify the html text directly inside InfoPath designer or filler and push the change back to InfoPath ...iZeit: Run your own online calendar, with blog integration, recurrence, todo list and categories.machgos dotNet Tests: Just some little test-projects for learningmim: TBAMinePost: MinePost is a game made for the first 48 hour Reddit Game Jam.Mockina: Mockina is a mock framework. Expression tree syntax is used to specify which members to mock, both public and non-public. The code is easy to under...MSBuild Launch Pad (mPad): This is just another shell extension for MSBuild to enable quick execution of MSBuild scripts via Windows Explorer context menu. (C) 2010 Lex LiPeacock: A browser like tabbed applicationPrimeCalculation: PrimeCalculation is a .NET app to calculate primes in a given range. Speed on Core2Duo 2,4GHZ: Found all primes from 0 to 1 billion in 35 seconds (...Slightly Silverlight: A Framework that leverages Silverlight for processing, business logic but standard HTML for the presentation layer.Stopwatch: Stopwatch is a tool for measuring the time. To start and pause stopwatch you only need to press a key on the keyboard. An additional context menu a...YAXLib: Yet Another XML Serialization Library for the .NET Framework: YAXLib is an XML Serialization library which helps you structure freely the XML result, choose among private and public fields to be serialized, an...New ReleasesActivate Your Glutes: v1.0.3.0: This release is a migration to VS2010, .Net 4, MVC2 and Entity Framework 4. The code has also been considerably cleaned up - taking advantage of E...AnyCAD: AnyCAD.Free.ENU.v1.1: http://www.anycad.net Modeling •2D: Line, Rectangle, Arc, Arch, Circle, Spline, Polygon •Feature: Extrude, Loft, Chamfer, Sweep, Revol •Boolean: ...Blueset Studio Opensource Projects: 多功能计算器 3.5: 稳定版本。Code for Rapid C# Windows Development eBook: LLBLGen LINQPad Data Context Driver Ver 1.0.0.0: First release of a Static LLBLGen Pro Data Context Driver for LINQPad I recommend LINQPad 4 as it seems more stable with this driver than LINQPad 2.DSQLT - Dynamic SQL Templates: Release 1.2. Some behaviour has changed!!: Attention. Some behaviour has changed! Now its necessary to use WildCards in the pattern-parameter for DSQLT.AllSourceContains DSQLT.Databases DSQ...FDS AutoCAD plug-in: FDS to AutoCAD plug-in: Basic functionality was implemented. 
Some routines like setting fds executable location are still not automated.Feature Builder Guidance Extensions: FBGX 2 - Standalone FX: Background: The Feature Builder Guidance is extensible and displays guidance content supplied by all the Feature Builder Guidance Extensions (FBGX...Floe IRC Client: Floe IRC Client 2010-05 R3: - You can now right click on the input box to get options for toggling bold, underline, colors, etc. - The size of the nickname column is now saved...Floe IRC Client: Floe IRC Client 2010-05 R4: - A user's channel status now appears next to their nick when they talk (e.g. @Nick or +Nick) - Fixed an error where certain kinds of network probl...HD-Trailers.NET Downloader: HD-Trailers.NET Downloader v1.0: Version 1.0 Thanks to Wolfgang for all his help. I let this project languish for too long while focusing on other things, but his involvement has ...InfoPath Editor for Developer: InfoPath Editor Beta 1: Intial Release: Can load InfoPath inner html. Can edit InfoPath inner html. InfoPath 2007 only.LinkSharp: LinkSharp 0.1.0: First release of LinkSharp. Set up iis, and use the sql script to create a new database.PowerAuras: PowerAuras V3.0.0F: This version adds better integration with GTFO New Flags Added PvP flag In 5-Man Instance In Raid Instance In Battleground In ArenaRx Contrib: V1.4: Add the ability to catch internal exception and the ability to publish error by queue adaptersSEO SiteMap: SEO SiteMap RC1: -SevenZipLib Library: v9.13.2: Stable release associated with 7z.dll 9.13 beta. Ability to create and update archives not implemente yet.Silverlight / WPF Controls: Upload, FlipPanel, DeepZoom, Animation, Encryption: Code Camp Demonstration: This code example demonstrates MVVM/MEF with WPF with attached properties,security and custom ICommand class.SQL Data Capture - Black Box Application Testing: SQLDataCapture V1.2: Added Entity Framework Support to CRUD generator (Insert Stored Procedure) and switched to VS 2010 for development.Stopwatch: Stopwatch 0.1: Stopwatch Release 0.1VCC: Latest build, v2.1.30515.0: Automatic drop of latest buildYet another developer blog - Examples: Asynchronous TreeView in ASP.NET MVC: This sample application shows how to use jQuery TreeView plugin for creating an asynchronous TreeView in ASP.NET MVC. This application is accompani...Most Popular ProjectsRawrWBFS ManagerAJAX Control ToolkitMicrosoft SQL Server Product Samples: DatabaseSilverlight ToolkitWindows Presentation Foundation (WPF)patterns & practices – Enterprise LibraryMicrosoft SQL Server Community & SamplesPHPExcelASP.NETMost Active Projectspatterns & practices – Enterprise LibraryRawrPHPExcelBlogEngine.NETMicrosoft Biology FoundationCustomer Portal Accelerator for Microsoft Dynamics CRMWindows Azure Command-line Tools for PHP DevelopersMirror Testing SystemN2 CMSStyleCop

    Read the article

  • Combination of Operating Mode and Commit Strategy

    - by Kevin Yang
When you load a single source into multiple targets, you may also want to ensure that every row from the source affects all targets uniformly (or separately). Consider the Example Mapping below. Suppose a row from SOURCE causes different changes in multiple targets (TARGET_1, TARGET_2 and TARGET_3): for example, it can be successfully inserted into TARGET_1 and TARGET_3, but fails to be inserted into TARGET_2, and the current Mapping Property TLO (target load order) is "TARGET_1 -> TARGET_2 -> TARGET_3". What should Oracle Warehouse Builder do in order to commit the appropriate data to all affected targets at the same time? If it doesn't behave as you intended, the data could become inaccurate and possibly unusable.

Example Mapping

In OWB, we can use Mapping Configuration Commit Strategies and Operating Modes together to achieve this kind of requirement. Below we will explore the combinations of these two features and how they affect the results in the target tables. Before going to the example, let's review some of the terms we will be using (details can be found in the white paper Oracle® Warehouse Builder Data Modeling, ETL, and Data Quality Guide 11g Release 2):

Operating Modes:
- Set-Based Mode: Warehouse Builder generates a single SQL statement that processes all data and performs all operations.
- Row-Based Mode: Warehouse Builder generates statements that process data row by row. The select statement is in a SQL cursor. All subsequent statements are PL/SQL.
- Row-Based (Target Only) Mode: Warehouse Builder generates a cursor select statement and attempts to include as many operations as possible in the cursor. For each target, Warehouse Builder inserts each row into the target separately.

Commit Strategies:
- Automatic: Warehouse Builder loads and then automatically commits data based on the mapping design. If the mapping has multiple targets, Warehouse Builder commits and rolls back each target separately and independently of other targets. Use the automatic commit when the consequences of multiple targets being loaded unequally are not great or are irrelevant.
- Automatic Correlated: A specialized type of automatic commit that applies to PL/SQL mappings with multiple targets only. Warehouse Builder considers all targets collectively and commits or rolls back data uniformly across all targets. Use the correlated commit when it is important to ensure that every row in the source affects all affected targets uniformly.
- Manual: Select manual commit control for PL/SQL mappings when you want to interject complex business logic, perform validations, or run other mappings before committing data.

Combination of the commit strategy and operating mode

To understand the effects of each combination of operating mode and commit strategy, I'll illustrate using the following example Mapping. First we insert 100 rows into the SOURCE table, making sure that the 99th and 100th rows have the same ID value. Then we create a unique key constraint on the ID column of the TARGET_2 table. While running the example mapping, OWB tries to load all 100 rows into each of the targets, but the mapping should fail to load the 100th row into TARGET_2, because it violates the unique key constraint on table TARGET_2.
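For reference, the unique key constraint described above could be created as follows (a minimal sketch; the constraint name is illustrative, and the table and column names follow the example mapping):

-- Enforce uniqueness on TARGET_2.ID so that the duplicate ID value (99)
-- carried by the 100th source row fails to load into TARGET_2.
ALTER TABLE target_2 ADD CONSTRAINT uk_target_2_id UNIQUE (id);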
With different combinations of Commit Strategy and Operating Mode, here are the results.

• Set-Based / Correlated Commit
Configuration of Example mapping: (screenshot) Result: (screenshot)
What's happening: A single error anywhere in the mapping triggers the rollback of all data. When OWB encounters the error inserting into TARGET_2, it reports an error for the table and does not load the row. OWB rolls back all the rows inserted into TARGET_1 and does not attempt to load rows to TARGET_3. No rows are added to any of the target tables.

• Row-Based / Correlated Commit
Configuration of Example mapping: (screenshot) Result: (screenshot)
What's happening: OWB evaluates each row separately and loads it to all three targets. Loading continues in this way until OWB encounters an error loading row 100 to TARGET_2. OWB reports the error and does not load the row. It rolls back row 100 previously inserted into TARGET_1 and does not attempt to load row 100 to TARGET_3. Then, if there are remaining rows, OWB will continue loading them, resuming with rows to TARGET_1. The mapping completes with 99 rows inserted into each target.

• Set-Based / Automatic Commit
Configuration of Example mapping: (screenshot) Result: (screenshot)
What's happening: When OWB encounters the error inserting into TARGET_2, it does not load any rows into that table and reports an error for it. It does, however, continue to insert rows into TARGET_3, and does not roll back the rows previously inserted into TARGET_1. The mapping completes with one error message for TARGET_2, no rows inserted into TARGET_2, and 100 rows inserted into each of TARGET_1 and TARGET_3.

• Row-Based / Automatic Commit
Configuration of Example mapping: (screenshot) Result: (screenshot)
What's happening: OWB evaluates each row separately for loading into the targets. Loading continues in this way until OWB encounters an error loading row 100 to TARGET_2 and reports the error. OWB does not roll back row 100 from TARGET_1, and does insert it into TARGET_3. If there are remaining rows, it will continue to load them. The mapping completes with 99 rows inserted into TARGET_2 and 100 rows inserted into each of the other targets.

Note:
- Automatic Correlated commit is not applicable for row-based (target only) mode. If you design a mapping with the row-based (target only) and correlated commit combination, OWB runs the mapping but does not perform the correlated commit.
- In set-based mode, correlated commit may impact the size of your rollback segments. Space for rollback segments may be a concern when you merge data (insert/update or update/insert).
- Correlated commit operates transparently with PL/SQL bulk processing code.
- The correlated commit strategy is not available for mappings run in any mode that are configured for Partition Exchange Loading or that include a Queue, Match Merge, or Table Function operator.

If you want to practice in your own environment, you can follow these steps:
1. Import the MDL file: commit_operating_mode.mdl
2. Fix the location for the Oracle module ORCL and deploy all tables under it.
3. Insert sample records into the SOURCE table using the PL/SQL block below:
begin
    for i in 1..99 loop
        insert into source values(i, 'col_'||i);
    end loop;
    insert into source values(99, 'col_99');
end;
/
4. Configure MAPPING_1 to any combination of operating mode and commit strategy you want to test, and make sure the TLO (target load order) feature of the mapping is enabled.
5. Deploy the mapping "MAPPING_1".
6. Run the mapping and check the result (a quick verification query is sketched below).
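For step 6, a simple row count across the three targets (a hedged sketch reusing the example table names above) makes the behavior of each combination visible:

-- Expected counts (TARGET_1 / TARGET_2 / TARGET_3) by combination:
--   Set-based / Correlated:  0 / 0 / 0
--   Row-based / Correlated:  99 / 99 / 99
--   Set-based / Automatic:   100 / 0 / 100
--   Row-based / Automatic:   100 / 99 / 100
SELECT 'TARGET_1' AS target_table, COUNT(*) AS row_count FROM target_1
UNION ALL
SELECT 'TARGET_2', COUNT(*) FROM target_2
UNION ALL
SELECT 'TARGET_3', COUNT(*) FROM target_3;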

    Read the article

  • Oracle Expands Sun Blade Portfolio for Cloud and Highly Virtualized Environments

    - by Ferhat Hatay
Oracle announced the expansion of the Sun Blade portfolio for cloud and highly virtualized environments, delivering powerful performance and simplified management as tightly integrated systems. Along with the SPARC T3-1B blade server, the Oracle VM blade cluster reference configuration and Oracle's optimized solution for Oracle WebLogic Suite, Oracle introduced the dual-node Sun Blade X6275 M2 server module with some impressive benchmark results.

Benchmarks on the Sun Blade X6275 M2 server module demonstrate the outstanding performance characteristics critical for running varied commercial applications used in cloud and highly virtualized environments. These include best-in-class SPEC CPU2006 results with the Intel Xeon processor 5600 series, six Fluent world records, and 1.8 times the price-performance of the IBM Power 755 running NAMD, a prominent bioinformatics workload.

Benchmarks for the Sun Blade X6275 M2 server module

SPEC CPU2006

The Sun Blade X6275 M2 server module demonstrated best-in-class SPECint_rate2006 results among all published results using the Intel Xeon processor 5600 series, with a result of 679. This result is 97% better than the HP BL460c G7 blade, 80% better than the IBM HS22V blade, and 79% better than the Dell M710 blade. This result demonstrates the density advantage of Oracle's new server module for space-constrained data centers.

Sun Blade X6275 M2 (2 nodes, Intel Xeon X5670 2.93GHz) - 679 SPECint_rate2006; HP ProLiant BL460c G7 (2.93 GHz, Intel Xeon X5670) - 347 SPECint_rate2006; IBM BladeCenter HS22V (Intel Xeon X5680) - 377 SPECint_rate2006; Dell PowerEdge M710 (Intel Xeon X5680, 3.33 GHz) - 380 SPECint_rate2006. SPEC, SPECint and SPECfp are registered trademarks of the Standard Performance Evaluation Corporation. Results from www.spec.org as of 11/24/2010 and this report. For more specifics about these results, please see http://blogs.sun.com/BestPerf

Fluent

The Sun Blade X6275 M2 server module produced world-record results on each of the six standard cases in the current "FLUENT 12" benchmark test suite at 8-, 12-, 24-, 32-, 64- and 96-core configurations. These results beat the most recent QLogic score with IBM DX 360 M series platforms and QLogic "Truescale" interconnects. Results on the sedan_4m test case on the Sun Blade X6275 M2 server module are 23% better than the HP C7000 system, and 20% better than the IBM DX 360 M2; Dell has not posted a result for this test case. Results can be found at the FLUENT website.

ANSYS's FLUENT software solves fluid flow problems, and is based on a numerical technique called computational fluid dynamics (CFD), which is used in the automotive, aerospace, and consumer products industries. The FLUENT 12 benchmark test suite consists of seven models that are well suited for multi-node clustered environments and representative of modern engineering CFD clusters. Vendors benchmark their systems with the principal objective of providing comparative performance information for FLUENT software that, among other things, depends on compilers, optimization, interconnect, and the performance characteristics of the hardware. FLUENT application performance is representative of other commercial applications that require memory and CPU resources to be available in a scalable, cluster-ready format. The FLUENT benchmark has six conventional test cases (eddy_417k, turbo_500k, aircraft_2m, sedan_4m, truck_14m, truck_poly_14m) at various core counts.

All information on the FLUENT website (http://www.fluent.com) is Copyright 1995-2010 by ANSYS Inc. Results as of November 24, 2010. For more specifics about these results, please see http://blogs.sun.com/BestPerf

NAMD

Results on the Sun Blade X6275 M2 server module running NAMD (a parallel molecular dynamics code designed for high-performance simulation of large biomolecular systems) show up to 1.8X better price/performance than IBM's Power 7-based system. For space-constrained environments, the ultra-dense Sun Blade X6275 M2 server module provides 1.7X better price/performance per rack unit than IBM's system.

IBM Power 755 4-way cluster (16U). Total price for cluster: $324,212. See IBM United States Hardware Announcement 110-008, dated February 9, 2010, pp. 4, 21 and 39-46. Sun Blade X6275 M2 8-blade cluster (10U). Total price for cluster: $193,939. Price/performance and performance/RU comparisons based on f1ATPase molecule test results. Sun Blade X6275 M2 cluster: $3,568/step/sec, 5.435 step/sec/RU. IBM Power 755 cluster: $6,355/step/sec, 3.189 step/sec/U. See http://www-03.ibm.com/systems/power/hardware/reports/system_perf.html. See http://www.ks.uiuc.edu/Research/namd/performance.html for more information, results as of 11/24/10. For more specifics about these results, please see http://blogs.sun.com/BestPerf

Reverse Time Migration

Reverse Time Migration is heavily used in geophysical imaging and modeling for Oil & Gas Exploration. The Sun Blade X6275 M2 server module showed up to a 40% performance improvement over the previous-generation server module, with super-linear scalability to 16 nodes for the 9-Point Stencil used in this Reverse Time Migration computational kernel. The balanced combination of Oracle's Sun Storage 7410 system with the Sun Blade X6275 M2 server module cluster showed linear scalability for total application throughput, including the I/O and MPI communication, to produce a final 3-D seismic depth-imaged cube for interpretation. The final image write time from the Sun Blade X6275 M2 server module nodes to Oracle's Sun Storage 7410 system achieved the 10GbE line speed of 1.25 GBytes/second or better performance. Between subsequent runs, the effects of I/O buffer caching on the Sun Blade X6275 M2 server module nodes and write-optimized caching on the Sun Storage 7410 system gave up to 1.8 GBytes/second effective write performance. The performance results and characterization of this Reverse Time Migration benchmark could serve as a useful measure for many other I/O-intensive commercial applications. 3D VTI Reverse Time Migration Seismic Depth Imaging: see http://blogs.sun.com/BestPerf/entry/3d_vti_reverse_time_migration for more information, results as of 11/14/2010.

    Read the article

  • The Birth of a Method - Where did OUM come from?

    - by user702549
It seemed fitting to start this blog entry with the OUM vision statement. The vision for the Oracle® Unified Method (OUM) is to support the entire Enterprise IT lifecycle, including support for the successful implementation of every Oracle product. Well, it's that time of year again; we just finished testing and packaging OUM 5.6. It will be released for general availability to qualifying customers and partners this month. Because of this, I've been reflecting back on how the birth of Oracle's Unified Method (OUM) came about.

As the Release Director of OUM, I've been honored to package every method release. Maybe you'd say that's not so special; of course, anyone can use packaging software to create an .exe file. But to me, it is pretty special, because so many people work together to make each release come about. The rich content that results is what makes OUM's history worth talking about.

To me, professionally speaking, working on OUM has been "a labor of love". My youngest child was just 8 years old when OUM was born, and she's now in high school! Watching her grow and change has been fascinating; if you ask her, she's grown up hearing about OUM. My son would often walk into my home office and ask "How is OUM today, Mom?" I am one of many people that take care of OUM, and I have watched the method "mature" over these last 6 years. Maybe that makes me a "Method Mom" (someone in one of my classes last year actually said this out loud), but there are so many others who collaborate on and care about OUM development.

I've thought about writing this blog entry for a long time, just to reflect on how far the method has come. Each release, as I prepare the OUM Contributors list, I see how many people's experience and ideas it has taken to create this wealth of knowledge, process and task guidance, as well as templates and examples. If you're wondering how many people, just go into OUM, select the Resources button at the top of most pages of the method, and on that Resources page click the ABOUT link.

So now back to my nostalgic moment as I finished the 5.6 release packaging. I reflected back on all the things that happened to make OUM not just a dream but a reality. Here are some key conditions that make each release of the method possible:
- A vision to have one method instead of many methods, thereby focusing on deeper, richer content
- People within Oracle's consulting organization willing to contribute to OUM, providing Subject Matter Experts who are willing to write down and share what they know
- Oracle's continued acquisition of software companies, and the need to assimilate the high-quality existing materials from these companies
- The need to bring together people from very different backgrounds and provide a common language to support Oracle product implementations that often involve multiple product families

What came first, and then what was the strategy? Initially OUM 4.0 was based on Oracle's J2EE Custom Development Method (JCDM). It was a good "backbone" (work breakdown structure), it was Unified Process based, and it had good content around UML as well as custom software development. But it needed to be extended in order to achieve the OUM vision. What happened after that was to take in the "best of the best": the legacy and acquired methods were scheduled for assimilation into OUM, one release after another. We incrementally built OUM. We didn't want to lose any of the expertise reflected in AIM (Oracle's legacy Application Implementation Method), Compass (PeopleSoft's application implementation method) and so many more.

When was OUM born? OUM 4.1 was published on April 30, 2006. This release allowed Oracle's Advanced Technology groups to begin the very first implementations of Fusion Middleware. In the early days of the method we would prepare several releases a year. Our iterative release development cycle began then and continues to be refined with each method release. Now we typically see one major release each year.

The OUM release development cycle is not unlike many Oracle implementation projects in that we need to gather requirements, prioritize, prepare the content, test, package and then go production. Typically we develop an OUM release MoSCoW (must have, should have, could have, and won't have) list right after the prior release goes out. These are the high-level requirements. We break the timeframe into increments, with frequent checkpoints that help us assess the content, and progress is measured through those checkpoints. We work as a team to prioritize what should be done in each increment. Yes, the team provides the estimates for what can be done within a particular increment. We sometimes have method development workshops (physical or virtual) to accelerate content development on a particular subject area; that is where the best content results.

As the written content nears the final stages, it goes through edit and evaluation via peer reviews, and then moves into the release staging environment. Then content freeze and testing of the method pack take place. This iterative cycle is run using the OUM artifacts that make sense "fit for purpose": project plans, MoSCoW lists and test plans are just a few of the OUM work products we use on a method release project.

In 2007, OUM 4.3, 4.4 and 4.5 were published. With the release of 4.5, our custom BI method (Data Warehouse Method FastTrack) was assimilated into OUM. These early releases helped us align Oracle's Unified Method with other industry standards. Then in 2008 we made significant changes to the OUM "backbone" to support applications implementation projects; that work went into the OUM 5.0 release. Now things started to get really interesting. Next we had some major developments in the Envision focus area around Enterprise Architecture. We acquired some really great content from the former BEA, the Liquid Enterprise Method (LEM), along with some SMEs who were willing to work at bringing this content into OUM. The Service Oriented Architecture content in OUM is extensive and can help support the successful implementation of Fusion Middleware, as well as Fusion Applications.

Of course we've developed a wealth of OUM training materials, and that work also helps to improve the method content. It is one thing to write "how to", and quite another to be able to teach people how to use the materials to improve the success of their projects. I've learned so much by teaching people how to use OUM.

What's next? So here toward the end of 2012, what's in store in OUM 5.6? Well, I'm sure you won't be surprised that the answer is cloud computing. More details to come in the next couple of weeks! The best part of being involved in the development of OUM is to see how many people have "adopted" OUM over these six years: clients, partners, and Oracle consultants. The content just gets better with each release.
I’d love to hear your comments on how OUM has evolved, and ideas for new content you’d like to see in the upcoming releases.

    Read the article

  • Developing a Cost Model for Cloud Applications

    - by BuckWoody
    Note - please pay attention to the date of this post. As much as I attempt to make the information below accurate, the nature of distributed computing means that components, units and pricing will change over time. The definitive costs for Microsoft Windows Azure and SQL Azure are located here, and are more accurate than anything you will see in this post: http://www.microsoft.com/windowsazure/offers/  When writing software that is run on a Platform-as-a-Service (PaaS) offering like Windows Azure / SQL Azure, one of the questions you must answer is how much the system will cost. I will not discuss the comparisons between on-premise costs (which are nigh impossible to calculate accurately) versus cloud costs, but instead focus on creating a general model for estimating costs for a given application. You should be aware that there are (at this writing) two billing mechanisms for Windows and SQL Azure: “Pay-as-you-go” or consumption, and “Subscription” or commitment. Conceptually, you can consider the former a pay-as-you-go cell phone plan, where you pay by the unit used (at a slightly higher rate) and the latter as a standard cell phone plan where you commit to a contract and thus pay lower rates. In this post I’ll stick with the pay-as-you-go mechanism for simplicity, which should be the maximum cost you would pay. From there you may be able to get a lower cost if you use the other mechanism. In any case, the model you create should hold. Developing a good cost model is essential. As a developer or architect, you’ll most certainly be asked how much something will cost, and you need to have a reliable way to estimate that. Businesses and Organizations have been used to paying for servers, software licenses, and other infrastructure as an up-front cost, and power, people to the systems and so on as an ongoing (and sometimes not factored) cost. When presented with a new paradigm like distributed computing, they may not understand the true cost/value proposition, and that’s where the architect and developer can guide the conversation to make a choice based on features of the application versus the true costs. The two big buckets of use-types for these applications are customer-based and steady-state. In the customer-based use type, each successful use of the program results in a sale or income for your organization. Perhaps you’ve written an application that provides the spot-price of foo, and your customer pays for the use of that application. In that case, once you’ve estimated your cost for a successful traversal of the application, you can build that into the price you charge the user. It’s a standard restaurant model, where the price of the meal is determined by the cost of making it, plus any profit you can make. In the second use-type, the application will be used by a more-or-less constant number of processes or users and no direct revenue is attached to the system. A typical example is a customer-tracking system used by the employees within your company. In this case, the cost model is often created “in reverse” - meaning that you pilot the application, monitor the use (and costs) and that cost is held steady. This is where the comparison with an on-premise system becomes necessary, even though it is more difficult to estimate those on-premise true costs. For instance, do you know exactly how much cost the air conditioning is because you have a team of system administrators? 
This may sound trivial, but that, along with the insurance for the building, the wiring, and every other part of the system is in fact a cost to the business. There are three primary methods that I’ve been successful with in estimating the cost. None are perfect, all are demand-driven. The general process is to lay out a matrix of: components units cost per unit and then multiply that times the usage of the system, based on which components you use in the program. That sounds a bit simplistic, but using those metrics in a calculation becomes more detailed. In all of the methods that follow, you need to know your application. The components for a PaaS include computing instances, storage, transactions, bandwidth and in the case of SQL Azure, database size. In most cases, architects start with the first model and progress through the other methods to gain accuracy. Simple Estimation The simplest way to calculate costs is to architect the application (even UML or on-paper, no coding involved) and then estimate which of the components you’ll use, and how much of each will be used. Microsoft provides two tools to do this - one is a simple slider-application located here: http://www.microsoft.com/windowsazure/pricing-calculator/  The other is a tool you download to create an “Return on Investment” (ROI) spreadsheet, which has the advantage of leading you through various questions to estimate what you plan to use, located here: https://roianalyst.alinean.com/msft/AutoLogin.do?d=176318219048082115  You can also just create a spreadsheet yourself with a structure like this: Program Element Azure Component Unit of Measure Cost Per Unit Estimated Use of Component Total Cost Per Component Cumulative Cost               Of course, the consideration with this model is that it is difficult to predict a system that is not running or hasn’t even been developed. Which brings us to the next model type. Measure and Project A more accurate model is to actually write the code for the application, using the Software Development Kit (SDK) which can run entirely disconnected from Azure. The code should be instrumented to estimate the use of the application components, logging to a local file on the development system. A series of unit and integration tests should be run, which will create load on the test system. You can use standard development concepts to track this usage, and even use Windows Performance Monitor counters. The best place to start with this method is to use the Windows Azure Diagnostics subsystem in your code, which you can read more about here: http://blogs.msdn.com/b/sumitm/archive/2009/11/18/introducing-windows-azure-diagnostics.aspx This set of API’s greatly simplifies tracking the application, and in fact you can use this information for more than just a cost model. After you have the tracking logs, you can plug the numbers into ay of the tools above, which should give a representative cost or in some cases a unit cost. The consideration with this model is that the SDK fabric is not a one-to-one comparison with performance on the actual Windows Azure fabric. Those differences are usually smaller, but they do need to be considered. Also, you may not be able to accurately predict the load on the system, which might lead to an architectural change, which changes the model. This leads us to the next, most accurate method for a cost model. 
Sample and Estimate
Once the application is deployed, you will get a bill each month from Microsoft for your Azure usage. The bill is quite detailed, and you can export its data for analysis, using standard statistical methods such as regression to project future costs. I normally advise that the architect also extrapolate a unit cost from those metrics. This is the information that should be reported back to the executives who pay the bills: past costs, projected future costs, and a unit cost "per click" or "per transaction", as your case warrants. The challenge here is the model itself - statistical methods are not foolproof, and the larger the sample the better; in this case I recommend using the entire population rather than a smaller sample. (A small sketch of such a projection follows the references below.)

References and Tools

Articles:
http://blogs.msdn.com/b/patrick_butler_monterde/archive/2010/02/10/windows-azure-billing-overview.aspx
http://technet.microsoft.com/en-us/magazine/gg213848.aspx
http://blog.codingoutloud.com/2011/06/05/azure-faq-how-much-will-it-cost-me-to-run-my-application-on-windows-azure/
http://blogs.msdn.com/b/johnalioto/archive/2010/08/25/10054193.aspx
http://geekswithblogs.net/iupdateable/archive/2010/02/08/qampa-how-can-i-calculate-the-tco-and-roi-when.aspx

Other Tools:
http://cloud-assessment.com/
http://communities.quest.com/community/cloud_tools
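To make the Sample and Estimate method concrete, here is a minimal least-squares sketch in Python. The monthly bill amounts and transaction count are invented for illustration; a real analysis would use the exported billing data described above.

# Least-squares trend line over past monthly bills, projected forward.
bills = [212.50, 230.10, 241.80, 263.40, 270.95, 291.20]  # last 6 months, USD (made up)

n = len(bills)
xs = range(n)
mean_x = sum(xs) / n
mean_y = sum(bills) / n
slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, bills))
         / sum((x - mean_x) ** 2 for x in xs))
intercept = mean_y - slope * mean_x

for month_ahead in range(1, 4):
    projected = intercept + slope * (n - 1 + month_ahead)
    print(f"Projected bill {month_ahead} month(s) out: ${projected:.2f}")

# A simple unit cost, if you also track transactions per month:
transactions_last_month = 125_000  # hypothetical
print(f"Cost per transaction: ${bills[-1] / transactions_last_month:.5f}")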

    Read the article

  • Deterministic/Consistent Unique Masking

    - by Dinesh Rajasekharan-Oracle
One of the key requirements while masking data in large databases or multi-database environments is to mask some columns consistently, i.e. for a given input the output should always be the same. At the same time, the masked output should not be predictable. Deterministic masking also eliminates the enormous amount of time otherwise spent identifying data relationships, i.e. parent and child relationships among columns defined in the application tables. In this blog post I will explain different ways of consistently masking data across databases using Oracle Data Masking and Subsetting.

Readers of this post should have minimal knowledge of Oracle Enterprise Manager 12c, Application Data Modeling, and Data Masking concepts. For more information on these concepts, please refer to the Oracle Data Masking and Subsetting documentation.

Oracle Data Masking and Subsetting 12c provides four methods with which users can consistently yet irreversibly mask their inputs:
1. Substitute
2. SQL Expression
3. Encrypt
4. User Defined Function

SUBSTITUTE
The Substitute masking format replaces the original value with a value from a pre-created database table. Because the method uses a hash-based algorithm in the back end, the mappings are consistent. For example, consider DEPARTMENT_ID in the EMPLOYEES table being replaced with FAKE_DEPARTMENT_ID from FAKE_TABLE. The Substitute masking transformation guarantees that all occurrences of DEPARTMENT_ID '101' will be replaced with, say, '502', provided the same substitution table and column are used, i.e. FAKE_TABLE.FAKE_DEPARTMENT_ID. The following screenshot shows the usage of the Substitute masking format within a masking definition:

Note that the uniqueness of the masked values depends on the number of unique values in the substitution column, i.e. if the original table contains 50,000 unique values, then for the masked output to be unique and deterministic the substitution column should also contain 50,000 unique values; otherwise only consistency is maintained, not uniqueness.

SQL EXPRESSION
SQL Expression replaces an existing value with the output of a specified SQL expression. For example, while masking an EMPLOYEES table, suppose the EMAIL_ID of an employee has to be in the format FIRST_NAME.LAST_NAME@COMPANY.COM, where FIRST_NAME and LAST_NAME are actual column names of the EMPLOYEES table; the corresponding SQL expression is %FIRST_NAME%||'.'||%LAST_NAME%||'@COMPANY.COM'. The advantage of this technique is that if you are masking FIRST_NAME and LAST_NAME of the EMPLOYEES table, the corresponding EMAIL_ID will be replaced accordingly by the masking scripts. One of the interesting aspects of SQL expressions is that you can use sub-expressions, which means you can write a nested SQL statement and use it as a SQL expression to address complex masking business use cases.

A SQL expression can also be used to consistently replace a value with a hashed value using Oracle's PL/SQL function ORA_HASH. Continuing the earlier example, the following SQL expression replaces DEPARTMENT_IDs with a hashed number:

ORA_HASH(%DEPARTMENT_ID%, 1000)

The following screenshot shows the usage of the SQL Expression masking format within a masking definition:

ORA_HASH takes three arguments:
1. An expression, which can be of any data type except LONG or LOB, or a user-defined type (nested table types are allowed). In the example above I used the original value as the expression.
2. The number of hash buckets, which can be any number between 0 and 4294967295. The default value is 4294967295.
You can also correlate the number of hash buckets to a range of numbers: in the example above the bucket value is specified as 1000, so the end result will be a hashed number between 0 and 1000.
3. A seed, which can be any number and determines the consistency, i.e. for a given seed value the output will always be the same. The default seed is 0. In the SQL expression above no seed is specified, so it defaults to 0. If you have to use a non-default seed, the call looks like:

ORA_HASH(%DEPARTMENT_ID%, 1000, 1234)

The uniqueness depends on the input and the number of hash buckets used. However, because ORA_HASH is a 32-bit algorithm, the pigeonhole principle guarantees collisions once you exceed 2^32 distinct inputs, and the birthday paradox implies roughly a 0.5 probability of a collision after only about 2^16 unique values.

ENCRYPT
The Encrypt masking format uses a blend of the 3DES encryption algorithm, hashing, and a regular expression to produce a deterministic and unique masked output. The format of the masked output corresponds to the specified regular expression. As this technique uses a key (a string) to encrypt the data, the same string can be used to decrypt the data. The key also acts as a seed to maintain consistent outputs for a given input. The following screenshot shows the usage of the Encrypt masking format within a masking definition:

Regular expressions may look complex to first-time users, but you will soon realize it is a simple language. There are many resources on writing regular expressions on the internet, in the Oracle documentation, in the Oracle Learning Library, and on My Oracle Support; of these, the following My Oracle Support document helped me get started: Oracle SQL Support for Regular Expressions [Video] (Doc ID 1369668.1)

USER DEFINED FUNCTION [UDF]
A User Defined Function (UDF) gives users the flexibility to code their own masking logic in PL/SQL, which can then be called from a masking definition. The standard signature of a UDF in Oracle Data Masking and Subsetting is:

Function udf_func (rowid varchar2, column_name varchar2, original_value varchar2) returns varchar2;

where
• rowid is the row identifier of the column that needs to be masked
• column_name is the name of the column that needs to be masked
• original_value is the column value that needs to be masked

You can achieve deterministic masking by using Oracle's built-in hash functions such as ORA_HASH, DBMS_CRYPTO.MD4, DBMS_CRYPTO.MD5, and DBMS_UTILITY.GET_HASH_VALUE. Please refer to the Oracle Database documentation for more information on the Oracle hash functions.

For example, the following masking UDF generates deterministic, unique hexadecimal values for a given string input:

CREATE OR REPLACE FUNCTION RD_DUX (rid varchar2, column_name varchar2, orig_val VARCHAR2)
RETURN VARCHAR2
DETERMINISTIC
PARALLEL_ENABLE
IS
  stext varchar2(26);
  no_of_characters number(2);
BEGIN
  no_of_characters := 6;
  -- MD4 hash (algorithm 1) of the original value, truncated to 6 hex characters
  stext := substr(RAWTOHEX(DBMS_CRYPTO.HASH(UTL_RAW.CAST_TO_RAW(orig_val), 1)), 1, no_of_characters);
  RETURN stext;
END;

The uniqueness depends on the input, the length of the output string, and the number of bits used by the hash algorithm. The function above uses an MD4 hash (denoted by the argument 1 in the DBMS_CRYPTO.HASH call), a 128-bit algorithm; however, the output here is truncated to 6 hexadecimal characters, so at most 16^6 unique values can be generated. Also, do not forget the birthday paradox/pigeonhole principle mentioned earlier in this post.
Another example consistently replaces characters and numbers while preserving length and special characters, as shown below:

CREATE OR REPLACE FUNCTION RD_DUS (rid varchar2, column_name varchar2, orig_val VARCHAR2)
RETURN VARCHAR2
DETERMINISTIC
PARALLEL_ENABLE
IS
  stext varchar2(26);
BEGIN
  -- Seeding the random generator with the original value makes the translation repeatable
  DBMS_RANDOM.SEED(orig_val);
  stext := TRANSLATE(orig_val, 'ABCDEFGHIJKLMNOPQRSTUVWXYZ', DBMS_RANDOM.STRING('U', 26));
  stext := TRANSLATE(stext, 'abcdefghijklmnopqrstuvwxyz', DBMS_RANDOM.STRING('L', 26));
  stext := TRANSLATE(stext, '0123456789', to_char(DBMS_RANDOM.VALUE(1, 9)));
  stext := REPLACE(stext, '.', '0');
  RETURN stext;
END;

The following screenshot shows the usage of a UDF within a masking definition:

To summarize, Oracle Data Masking and Subsetting helps you consistently mask data across databases using one or all of the methods described in this post, and it saves the hassle of identifying the parent-child relationships defined in the application tables. Happy Masking!
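As a footnote, the seeded-hash-and-substitute idea behind the Substitute format can be sketched outside the database as well. The following Python snippet is a hypothetical illustration of the general technique - it is not how Oracle Data Masking and Subsetting implements it:

import hashlib

def deterministic_substitute(value, substitutes, seed="mask-seed"):
    # The same (seed, value) pair always hashes to the same bucket -> same substitute
    digest = hashlib.sha256((seed + value).encode("utf-8")).digest()
    bucket = int.from_bytes(digest[:4], "big") % len(substitutes)
    return substitutes[bucket]

fake_department_ids = ["502", "613", "477", "398", "841"]
print(deterministic_substitute("101", fake_department_ids))  # identical output...
print(deterministic_substitute("101", fake_department_ids))  # ...on every run

As with the Substitute format itself, the output is only unique when the substitution list holds at least as many entries as the source has distinct values, and a hash-based bucket can still collide.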

    Read the article

  • 3D Ball Physics Theory: collision response on ground and against walls?

    - by David
I'm really struggling to get a strong grasp on how I should be handling collision response in a game engine I'm building around a 3D ball physics concept. Think Monkey Ball as an example of the type of gameplay. I am currently using a sphere-to-sphere broad phase, then AABB-to-OBB testing (the final test I am using right now checks whether one of the 8 OBB points crosses the planes of the object it is testing against). This seems to work pretty well, and I am getting back the plane the object is colliding against (with a point on the plane, the plane's normal, and the exact point of intersection).

I've tried what feels like dozens of different high-level strategies for handling these collisions, without any real success. I think my biggest problem is understanding how to handle collisions against walls in the x-y axes (left/right, front/back), which I want to have elasticity, and the ground (z-axis), where I want an elastic reaction if the ball drops down, but then for it to eventually settle and be kept "on the ground" (not go into the ground, but also not continue bouncing). Without kludging something together, I'm positive there is a good way to handle this; my theories just aren't getting me all the way there.

For physics modeling and movement, I am using a Euler-based setup, with each object maintaining a position (and a destination position prior to collision detection), a velocity (which is added onto the position to determine the destination position), and an acceleration (which I use to store any player input being applied to the ball, as well as gravity in the z coordinate). Starting from when I detect a collision, what is a good way to approach the response to get the expected behavior in all cases? Thanks in advance to anyone taking the time to assist... I am grateful for any pointers, and happy to post any additional info or code if it is useful.

UPDATE
Based on Steve H's and eBusiness' responses below, I have adapted my collision response to something that makes a lot more sense now. It was close to right before, but I didn't have all the right pieces together at the right time! I have one problem left to solve, and that is what is causing the floor collision to fire every frame. Here's the collision response code I have now for the ball, then I'll describe the last bit I'm still struggling to understand.

// If we are moving in the direction of the plane (against the normal)...
if (m_velocity.dot(intersection.plane.normal) <= 0.0f)
{
    float dampeningForce = 1.8f; // eventually create this value based on mass and acceleration

    // Calculate the projection velocity
    PVRTVec3 actingVelocity = m_velocity.project(intersection.plane.normal);
    m_velocity -= actingVelocity * dampeningForce;
}

// Clamp z-velocity to zero if we are within a certain threshold
// -- NOTE: this was an experimental idea I had to solve the "jitter" bug I'll describe below
float diff = 0.2f - fabsf(m_velocity.z);
if (diff > 0.0f && diff <= 0.2f)
{
    m_velocity.z = 0.0f;
}

// Take this object to its new destination position based on our pre-collision position,
// the vector to the collision point, and our new velocity after collision multiplied by
// the time remaining after the collision to finish the movement
m_destPosition = m_position + intersection.diff + (m_velocity * intersection.tRemaining * GAMESTATE->dt);

The above snippet runs after a collision is detected on the ball (collider) with a collidee (the floor in this case).
With a dampening force of 1.8f, the ball's reflected "upward" velocity will eventually be overcome by gravity, so the ball will essentially be stuck on the floor. THIS is the problem I have now... the collision code is running every frame (since the ball's z-velocity is constantly pushing it into a collision with the floor below it). The ball is not technically stuck - I can still move it around - but the movement is really goofy because the velocity and position keep getting affected adversely by the above snippet. I was experimenting with an idea to clamp the z-velocity to zero if it was "close to zero", but this didn't do what I expected... probably because the very next frame the ball gets a new gravity acceleration applied to its velocity regardless (which I think is good, right?). Collisions with walls are as they used to be and work very well. It's just this last bit of "stickiness" to deal with. The camera is also constantly jittering up and down by extremely small fractions when the ball is "at rest". I'll keep playing with it... I like puzzles like this, especially when I think I'm close. Any final ideas on what I could be doing wrong here?

UPDATE 2
Good news - I discovered I should be subtracting the intersection.diff from m_position (the position prior to collision). The intersection.diff is my calculation of the difference in the vector of position to destPosition from the intersection point to the position. In this case, adding it was causing my ball to always go "up" just a little bit, causing the jitter. By subtracting it, moving the z-velocity clamp above the dot product test, and changing that test from <= 0 to < 0, I now have the following:

// Clamp z-velocity to zero if we are within a certain threshold
float diff = 0.2f - fabsf(m_velocity.z);
if (diff > 0.0f && diff <= 0.2f)
{
    m_velocity.z = 0.0f;
}

// If we are moving in the direction of the plane (against the normal)...
float dotprod = m_velocity.dot(intersection.plane.normal);
if (dotprod < 0.0f)
{
    float dampeningForce = 1.8f; // eventually create this value based on mass and acceleration?

    // Calculate the projection velocity
    PVRTVec3 actingVelocity = m_velocity.project(intersection.plane.normal);
    m_velocity -= actingVelocity * dampeningForce;
}

// Take this object to its new destination position based on our pre-collision position,
// the vector to the collision point, and our new velocity after collision multiplied by
// the time remaining after the collision to finish the movement
m_destPosition = m_position - intersection.diff + (m_velocity * intersection.tRemaining * GAMESTATE->dt);
UpdateWorldMatrix(m_destWorldMatrix, m_destOBB, m_destPosition, false);

This is MUCH better. No jitter, and the ball now "rests" at the floor, while still bouncing off the floor and walls. The ONLY thing left is that the ball is now virtually "stuck". It can move, but at a much slower rate, likely because the else branch of my dot product test is only letting the ball move at a rate multiplied against tRemaining... I think this is a better solution than I had previously, but still somehow not the right idea. BTW, I'm trying to journal my progress through this problem for anyone else in a similar situation - hopefully it will serve as some help, as many similar posts have for me over the years.
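For anyone mapping the snippets above onto standard rigid-body theory (my reading of the code, not the original poster's derivation): the dampening step is the usual reflection-with-restitution formula

    v' = v - (1 + e)(v · n)n

where n is the unit plane normal and e is the coefficient of restitution, so dampeningForce = 1 + e, and the value 1.8f corresponds to e = 0.8 (e = 1 is perfectly elastic, e = 0 removes all velocity into the surface). The remaining "stuck" symptom is the classic resting-contact regime: once the bounce is smaller than one frame's worth of gravity, the common approach is to treat the contact as a constraint - cancel only the normal components of gravity and velocity and leave the tangential component free - rather than applying a fresh impulse every frame.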

    Read the article

  • Unlocking Productivity

    - by Michael Snow
Unlocking Productivity in Life Sciences with Consolidated Content Management
by Joe Golemba, Vice President, Product Management, Oracle WebCenter

As life sciences organizations look to become more operationally efficient, the ability to effectively leverage information is a competitive advantage. Whether data mining at the drug discovery phase or prepping the sales team before a product launch, content management can play a key role in developing, organizing, and disseminating vital information. The goal of content management is relatively straightforward: put the information that people need where they can find it. A number of issues complicate this: information sits in many different systems, each of those systems has its own security, and the information in those systems exists in many different formats. Identifying and extracting pertinent information from mountains of far-flung data is no simple job, but the alternative - wasted effort or even regulatory compliance issues - is worse. An integrated information architecture can enable health sciences organizations to make better decisions, accelerate clinical operations, and be more competitive.

Unstructured data matters
Often when we think of drug development data, we think of structured data that fits neatly into one or more research databases. But structured data is often directly supported by unstructured data such as experimental protocols, reaction conditions, lot numbers, run times, analyses, and research notes. As life sciences companies seek integrated views of data, they typically find diverse islands of data that seemingly have no relationship to other data in the organization. Information like sales reports or call center reports can be locked into siloed systems, unavailable to the discovery process. Additionally, in the increasingly networked clinical environment, Web pages, instant messages, videos, scientific imaging, sales and marketing data, collaborative workspaces, and predictive modeling data are likely to be present within an organization, and each source potentially possesses information that can help to better inform specific efforts.

Historically, content management solutions with 21 CFR Part 11 capabilities - electronic records and signatures - were focused mainly on content-enabling manufacturing-related processes. Today, life sciences companies have many standalone repositories, each requiring different skills, service level agreements, and vendor support costs to manage. With the amount of content doubling every three to six months, companies have recognized the need to manage unstructured content from the beginning, in order to increase employee productivity and operational efficiency. Using scalable and secure enterprise content management (ECM) solutions, organizations can better manage their unstructured content. These solutions can also be integrated with enterprise resource planning (ERP) systems or research systems, making content available immediately, in the context of the application and within the flow of the employee's typical business activity. Administrative safeguards - such as content de-duplication - can also be applied within ECM systems, so documents are never recreated, eliminating redundant effort, ensuring one source of truth, and maintaining content standards in the organization.

Putting it in context
Consolidating structured and unstructured information in a single system can greatly simplify access to relevant information when it is needed, through contextual search.
Using contextual filters, results can include therapeutic area, position in the value chain, semantic commonalities, technology-specific factors, the specific researchers involved, or potential business impact. The use of taxonomies is essential to organizing information and enabling contextual searches. Taxonomy solutions are composed of a hierarchical tree that defines the relationships between different life science terms. When overlaid with additional indexing related to research and/or business processes, it becomes possible to effectively narrow down the amount of data returned during searches, as well as prioritize results based on specific criteria and/or prior search history. Thus, search results are more accurate and relevant to an employee's day-to-day work. For example, a search for the word "tissue" by a lab researcher would return significantly different results than a search for the same word performed by someone in procurement. Of course, diverse data repositories, combined with the immense amounts of data present in an organization, mean that the data elements must be regularly indexed and cached beforehand to enable reasonable search response times. In its simplest form, indexing a single, consolidated data warehouse is a relatively straightforward effort. However, organizations require the ability to index multiple data repositories, enabling a single search to reference multiple data sources and provide an integrated results listing.

Security and compliance
Beyond yielding efficiencies and supporting new insight, an enterprise search environment can support important security considerations as well as compliance initiatives. For example, the systems retain the relevance and the security of the indexed systems, so users can only see the results to which they are granted access. This is especially important as life sciences companies work in an increasingly networked environment and need to provide secure, role-based access to information across multiple partners. Although not officially required by the 21 CFR Part 11 regulation, the U.S. Food and Drug Administration has begun to extend the types of content considered when performing relevant audits and discoveries. Having an ECM infrastructure that provides centralized management of all content enterprise-wide - with the ability to consistently apply records and retention policies along with the appropriate controls, validations, audit trails, and electronic signatures - is becoming increasingly critical for life sciences companies.

Making the move
Creating an enterprise-wide ECM environment requires moving large amounts of content into a single enterprise repository, a daunting and risk-laden initiative. The first key is to focus on data taxonomy, allowing content to be mapped across systems. The second is to take advantage of new tools which can dramatically speed up, and reduce the cost of, the data migration process through automation. Additional content need not be frozen while it is migrated, enabling productivity throughout the process.

The ability to effectively leverage information into success has been gaining importance in the life sciences industry for years.
The rapid adoption of enterprise content management, both in operational processes and in scientific management, is a clear indicator that companies are looking to use all available data to be better informed, improve decision making, minimize risk, and accelerate time to market, in order to maintain profitability and be more competitive. As more and more varieties and sources of information are brought under the strategic management umbrella, divining knowledge from the vast pool of information becomes increasingly difficult. Simple search engines and basic content management are increasingly unable to effectively extract the right information from the mountains of data available. By bringing these tools into context and integrating them with business processes and applications, we can effectively focus on the right decisions that make our organizations more profitable.

More Information
Oracle will be exhibiting at DIA 2012 in Philadelphia on June 25-27. Stop by our booth (#2825) to learn more about the advantages of a centralized ECM strategy and see the Oracle WebCenter Content solution, our 21 CFR Part 11 compliant content management platform.
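To make the contextual search idea concrete: a taxonomy-scoped, role-trimmed query behaves roughly like the sketch below. This is a toy illustration of the concept, with invented documents, taxonomy paths, and roles - it is not the Oracle WebCenter API.

from dataclasses import dataclass

@dataclass
class Doc:
    title: str
    taxonomy_path: tuple   # the document's position in the taxonomy tree
    allowed_roles: frozenset

INDEX = [
    Doc("Tissue fixation protocol",
        ("life_sciences", "clinical", "tissue_samples"), frozenset({"researcher"})),
    Doc("Tissue supplier contract terms",
        ("life_sciences", "procurement", "suppliers"), frozenset({"buyer"})),
]

def contextual_search(term, taxonomy_prefix, role):
    # Match the term, scope results to a taxonomy subtree, and trim by role
    return [d.title for d in INDEX
            if term.lower() in d.title.lower()
            and d.taxonomy_path[:len(taxonomy_prefix)] == taxonomy_prefix
            and role in d.allowed_roles]

# The same term returns different results for the researcher and the buyer,
# mirroring the "tissue" example in the article:
print(contextual_search("tissue", ("life_sciences", "clinical"), "researcher"))
print(contextual_search("tissue", ("life_sciences", "procurement"), "buyer"))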

    Read the article

  • BI Applications overview

    - by sv744
Welcome to the Oracle BI applications blog! This blog will cover various features, the general roadmap, descriptions of functionality, and implementation steps related to Oracle BI applications. In this first post we start with an overview of the BI apps and will delve deeper into some of the topics below in the upcoming weeks and months. If there are other topics you would like us to talk about, please feel free to provide feedback.

The Oracle BI applications are a set of pre-built applications that enable pervasive BI by providing role-based insight for each functional area, including sales, service, marketing, contact center, finance, supplier/supply chain, HR/workforce, and executive management. For example, Sales Analytics includes role-based applications for sales executives, sales management, and front-line sales reps, each of whom has different needs. The applications integrate and transform data from a range of enterprise sources - including Siebel, Oracle, PeopleSoft, SAP, and others - into actionable intelligence for each business function and user role. This post starts with the key benefits and characteristics of Oracle BI applications; in a series of subsequent posts, each of these points will be explained in detail.

Why BI apps?
- Demonstrate the value of BI to a business user: show reports, dashboards, and a model that can answer their business questions as part of the sales cycle
- Demonstrate the technical feasibility of a BI project, significantly lower risk, and improve success
- Build-vs-buy benefit: you don't have to start with a blank sheet of paper
- Help consolidate disparate systems, including data integration in M&A situations
- Insulate BI consumers from changes in the OLTP
- Present OLTP data and highlight issues of poor or missing data - and improve data quality and accuracy

Prebuilt integrations
- Prebuilt integrations against leading ERP sources: Fusion Applications, E-Business Suite, PeopleSoft, JD Edwards, Siebel, SAP
- Co-developed with input from functional experts on the BI and Applications teams
- Out-of-the-box dimensional-model-to-source-model mappings
- Multi-source and multi-instance support

Rich data model
- A very rich dimensional data model, built over 10 years, that incorporates best practices from a BI modeling perspective and reflects source system complexities
- Conformed dimensional model across all business subject areas, allowing cross-functional reporting, e.g. customer/supplier 360
- Over 360 fact tables across 7 product areas: CRM - 145, SCM - 47, Financials - 28, Procurement - 20, HCM - 27, Projects - 18, Campus Solutions - 21, PLM - 56
- Supported by 300 physical dimensions
- Support for extensive calendars: Gregorian, enterprise, and ledger-based
- Conformed data model and metrics for real-time vs. warehouse-based reporting
- Multi-tenant enabled

Extensive BI-related transformations
BI apps ETL and data integration support the various transformations required for dimensional models and reporting requirements. These have been distilled into common patterns and abstracted logic that can be readily reused across different modules:
- Slowly Changing Dimension support (a small illustrative sketch appears after the ETL architecture list below)
- Hierarchy flattening support
- Row/column hybrid hierarchy flattening
- As-Is vs. As-Was hierarchy support
- Currency conversion: support for 3 corporate currencies plus CRM, ledger, and transaction currencies
- UOM conversion
- Internationalization/localization
- Dynamic data translations
- Code standardization (domains)
- Historical snapshots
- Cycle and process lifecycle computations
- Balance facts
- Equalization of GL accounting chartfields/segments
- Standardized values for categorizing GL accounts
- Reconciliation between GL and subledgers to track accounted/transferred/posted transactions to GL
- Materialization of data only available through costly and complex APIs, e.g. Fusion Payroll, EBS/Fusion accruals
- Complex event interpretation of source data, e.g.:
  - What constitutes a transfer
  - Deriving supervisors via the position hierarchy
  - Deriving the primary assignment in PeopleSoft
  - Categorizing and transposing payroll balances into specific metrics to support side-by-side comparison of measures such as fixed salary, variable salary, tax, bonus, and overtime payments
  - Counting of events, e.g. converting events to fact counters so that the number of hires can easily be added up and compared alongside total transfers and terminations
- Multi-pass processing of multiple sources, e.g. headcount, salary, promotion, and performance, to allow side-by-side comparison
- Adding value to data to aid analysis through banding, additional domain classifications, and groupings, allowing higher-level analytical reporting and data discovery
- Calculation of complex measures, for example:
  - COGS, DSO, DPO, inventory turns, etc.
  - Transfers within a hierarchy, or out of/into a hierarchy relative to a viewpoint in the hierarchy

Configurability and extensibility support
BI apps offer extensibility for various entities, either automated or as part of the extension methodology:
- Key flexfield and descriptive flexfield support
- Extensible attribute support (JDE)
- Conformed domains

ETL architecture
- A modular adapter architecture that allows multiple product lines to be supported in a single conformed model
- Multi-source, multi-technology orchestration: creates a load plan that takes into account task dependencies and the customer's deployment, generated from multiple complex ETL tasks
- Plan optimization allowing parallel ETL tasks
- Oracle: bitmap indexes and partition management
- High availability support: follow-the-sun support
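As referenced in the transformations list above, here is a minimal sketch of what Type 2 Slowly Changing Dimension handling does. It is a hypothetical Python illustration of the general pattern, not the BI apps ETL code:

from datetime import date

def scd2_apply(dimension_rows, natural_key, new_attributes, effective=None):
    # Type 2 SCD: expire the current version of a changed row, insert a new one
    effective = effective or date.today()
    for row in dimension_rows:
        if row["key"] == natural_key and row["end_date"] is None:
            if row["attributes"] == new_attributes:
                return dimension_rows       # nothing changed; keep the current row
            row["end_date"] = effective     # close out the old version
            break
    dimension_rows.append({"key": natural_key, "attributes": new_attributes,
                           "start_date": effective, "end_date": None})
    return dimension_rows

dim = [{"key": "EMP42", "attributes": {"dept": "Sales"},
        "start_date": date(2011, 1, 1), "end_date": None}]
scd2_apply(dim, "EMP42", {"dept": "Marketing"}, date(2012, 6, 1))
# dim now holds the expired Sales row plus a current Marketing row,
# preserving history for "as was" reporting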
TCO
BI apps include several utilities and capabilities that help with the overall total cost of ownership and ensure a rapid implementation:
- Improved cost of ownership - lower cost to deploy
- Ongoing support for new versions of the source application
- Task-based setup flows
- Data lineage
- Functional setup performed in a Web UI by a functional person
- Configuration
- Test-to-production support

Security
BI apps support both data and object security, enabling implementations to quickly configure the application per the reporting security needs:
- Fine-grained object security at the report/dashboard and presentation catalog level
- Data security integration with source systems
- Extensible to support external data security rules

Extensive set of KPIs
- Over 7000 base and derived metrics across all modules
- Time series calculations (YoY, % growth, etc.)
- Common currency and UOM reporting
- Cross-subject-area KPIs (analyzing HR vs. GL data, drilling from GL to AP/AR, etc.)

Prebuilt reports and dashboards
- 3000+ prebuilt reports supporting a large number of industries
- Hundreds of role-based dashboards
- Dynamic currency conversion at the dashboard level

Highly tuned performance
The BI apps have been tuned over the years for both performant ETL and dashboard performance, using best practices and advanced database features:
- Optimized data model for BI and analytic queries
- Prebuilt aggregates, plus the ability for customers to easily create their own aggregates on warehouse facts, for scalable end-user performance
- Incremental extracts and loads
- Incremental aggregate builds
- Automatic table index and statistics management
- Parallel ETL loads
- Source system delete handling
- Low-latency extract with GoldenGate
- Micro-ETL support
- Bitmap indexes
- Partitioning support
- Modularized deployment: start small and add other subject areas seamlessly

Source-specific staging and real-time schema
- Support for a source-specific operational reporting schema for EBS, PeopleSoft, Siebel, and JDE

Application integrations
The BI apps also integrate with source systems and other applications, adding value through BI and enabling BI consumption during operational decision making:
- Embedded dashboards for Fusion, EBS, and Siebel applications
- Action Link support
- Marketing segmentation
- Sales Predictor dashboard
- Territory management

External integrations
The BI apps data integration choices include support for loading external data:
- External data enrichment choices: UNSPSC, item class, etc.
- Extensible spend classification

Broad deployment choices
- Exalytics support
- Databases: Oracle, Exadata, Teradata, DB2, MSSQL
- ETL tool of choice: ODI (coming), Informatica
- Extensible and customizable: an extensible architecture and methodology to add custom and external content
- Upgradable across releases

Thanks for reading a long post, and be on the lookout for future posts. We look forward to your valuable feedback on these topics, as well as suggestions on what other topics you would like us to cover.

    Read the article

< Previous Page | 37 38 39 40 41 42 43  | Next Page >