Search Results

Search found 4410 results on 177 pages for 'azure worker roles'.


  • Using jQuery to call a web service

    - by Matt
    I have created a web service which takes a username and password as parameters and returns a list of children in JSON (the user is a social worker). The web service is hosted locally with IIS7. I am attempting to access the web service using JavaScript/jQuery because it will eventually need to run as a mobile app. I'm not really experienced with web services, or JavaScript for that matter, but the following two links seemed to point me in the right direction: http://williamsportwebdeveloper.com/cgi/wp/?p=494 and http://encosia.com/using-jquery-to-consume-aspnet-json-web-services/

    This is my HTML page:

        <%@ Page Title="" Language="C#" MasterPageFile="~/MasterPage.Master" AutoEventWireup="true" CodeBehind="TestWebService.aspx.cs" Inherits="Sponsor_A_Child.TestWebService" %>
        <asp:Content ID="Content1" ContentPlaceHolderID="stylesPlaceHolder" runat="server">
            <script type="text/javascript" src="Scripts/jquery-1.7.1.js">
                $(document).ready(function () { });
                function LoginClientClick() {
                    $("#query_results").empty();
                    $("#query_results").append('<table id="ResultsTable" class="ChildrenTable"><tr><th>Child_ID</th><th>Child_Name</th><th>Child_Surname</th></tr>');
                    $.ajax({
                        type: "POST",
                        contentType: "application/json; charset=utf-8",
                        url: "http://localhost/PhoneWebServices/GetChildren.asmx/GetMyChildren",
                        data: '{ "email" : "' + $("#EmailBox").val() + '", "password": "' + $("#PasswordBox").val() + '" }',
                        dataType: "json",
                        success: function (msg) {
                            var c = eval(msg.d);
                            alert("" + c);
                            for (var i in c) {
                                $("#ResultsTable tr:last").after("<tr><td>" + c[i][0] + "</td><td>" + c[i][1] + "</td><td>" + c[i][2] + "</td></tr>");
                            }
                        }
                    });
                }
            </script>
        </asp:Content>
        <asp:Content ID="Content2" ContentPlaceHolderID="contentPlaceHolder" runat="server">
            <div id="LoginDiv">
                Email: <input id="EmailBox" type="text" /><br />
                Password: <input id="PasswordBox" type="password" /><br />
                <input id="LoginButton" type="button" value="Submit" onclick="LoginClientClick()" />
            </div>
            <div id="query_results"></div>
        </asp:Content>

    And this is my web service code:

        [WebMethod(Description = "Returns the list of children for whom the social worker is responsible.")]
        public String GetMyChildren(String email, String password)
        {
            DataSet MyChildren = new DataSet();
            int ID = SocialWorkerLogin(email, password);
            if (ID > 0)
            {
                MyChildren = FillChildrenTable(ID);
            }
            MyChildren.DataSetName = "My Children"; // To prevent 'DataTable name not set' error
            string[][] JaggedArray = new string[MyChildren.Tables[0].Rows.Count][];
            int i = 0;
            foreach (DataRow rs in MyChildren.Tables[0].Rows)
            {
                JaggedArray[i] = new string[] { rs["Child_ID"].ToString(), rs["Child_Name"].ToString(), rs["Child_Surname"].ToString() };
                i = i + 1;
            }
            // Return JSON data
            JavaScriptSerializer js = new JavaScriptSerializer();
            string strJSON = js.Serialize(JaggedArray);
            return strJSON;
        }

    I followed the examples in the provided links, but when I press submit, only the table headers appear, not the list of children. When I test the web service on its own, though, it does return a JSON string, so that part seems to be working. Any help is greatly appreciated :)
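
    One likely culprit, though the poster does not confirm it, is the script tag itself: a <script> element that has a src attribute ignores any inline code in its body, so LoginClientClick is never defined and the click handler silently does nothing. A minimal hedged fix is to load jQuery and the page script in separate tags:

        <script type="text/javascript" src="Scripts/jquery-1.7.1.js"></script>
        <script type="text/javascript">
            // Inline code must live in its own block; a tag with src= discards its body.
            function LoginClientClick() {
                // ... same body as above ...
            }
        </script>

    A second thing worth checking: the method hand-serializes the jagged array, and ASP.NET then wraps that string in its own JSON envelope, so msg.d arrives as a string rather than an array. Either return the array directly and let the script service serialize it, or keep the eval(msg.d) call as above; mixing the two conventions is a common source of "empty table" symptoms.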


  • What's the best way to refactor this Rails controller?

    - by Robert DiNicolas
    I'd like some advice on how best to refactor this controller. The controller builds a page of zones and modules. Page has_many zones, zone has_many modules, so zones are just clusters of modules wrapped in a container. The problem I'm having is that some modules may have specific queries that I don't want executed on every page, so I've had to add conditions. The conditions just test whether the module is on the page; if it is, the query is executed. One of the problems with this is that if I add a hundred special module queries, the controller has to iterate through each one. I would like to see these module conditions moved out of the controller, as well as all the additional custom actions. I can keep everything in this one controller, but I plan to have many apps using it, so it could get messy.

        class PagesController < ApplicationController
          # GET /pages/1
          # GET /pages/1.xml
          # Show is the main page rendering action; page routes are aliased in routes.rb
          def show
            #-+-+-+-+-Core Page Queries-+-+-+-+-
            @page = Page.find(params[:id])
            @zones = @page.zones.find(:all, :order => 'zones.list_order ASC')
            @mods = @page.mods.find(:all)
            @columns = Page.columns

            # restful params to influence page rendering, see routes.rb
            @fragment = params[:fragment] # render single module
            @cluster = params[:cluster]   # render single zone
            @head = params[:head]         # render html, body and head

            #-+-+-+-+-Page Level Json Conversions-+-+-+-+-
            @metas = @page.metas ? ActiveSupport::JSON.decode(@page.metas) : nil
            @javascripts = @page.javascripts ? ActiveSupport::JSON.decode(@page.javascripts) : nil

            #-+-+-+-+-Module Specific Queries-+-+-+-+-
            # would like to refactor this process
            @mods.each do |mod|
              # Reps module custom queries
              if mod.name == "reps"
                @reps = User.find(:all, :joins => :roles, :conditions => { :roles => { :name => 'rep' } })
              end

              # Listing-poc module custom queries
              if mod.name == "listing-poc"
                limit = params[:limit].to_i < 1 ? 10 : params[:limit]
                PropertyEntry.update_from_listing(mod.service_url)
                @properties = PropertyEntry.all(:limit => limit, :order => "city desc")
              end

              # Talents-index module custom queries
              if mod.name == "talents-index"
                @talent = params[:type]
                @reps = User.find(:all, :joins => :talents, :conditions => { :talents => { :name => @talent } })
              end
            end

            respond_to do |format|
              format.html # show.html.erb
              format.xml  { render :xml => @page.to_xml(:include => { :zones => { :include => :mods } }) }
              format.json { render :json => @page.to_json }
              format.css  # show.css.erb, CSS dependency manager template
            end
          end

          # for property listing ajax request
          def update_properties
            limit = params[:limit].to_i < 1 ? 10 : params[:limit]
            offset = params[:offset]
            @properties = PropertyEntry.all(:limit => limit, :offset => offset, :order => "city desc")
            #render :nothing => true
          end
        end

    So imagine a site with a hundred modules and scores of additional controller actions. I think most would agree that it would be much cleaner if I could move that code out and refactor it to behave more like a configuration.
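
    One direction that matches the "behave more like a configuration" goal is to replace the if chain with a lookup table of per-module query procs, so adding a hundredth module means adding a hash entry rather than another branch. A minimal sketch, assuming the same Rails 2.x-era finders as the question; the MODULE_QUERIES constant and its file location are illustrative, not from the original code:

        # config/initializers/module_queries.rb (hypothetical location)
        MODULE_QUERIES = {
          "reps" => lambda { |controller, mod, params|
            controller.instance_variable_set(:@reps,
              User.find(:all, :joins => :roles, :conditions => { :roles => { :name => 'rep' } }))
          },
          "listing-poc" => lambda { |controller, mod, params|
            limit = params[:limit].to_i < 1 ? 10 : params[:limit]
            PropertyEntry.update_from_listing(mod.service_url)
            controller.instance_variable_set(:@properties,
              PropertyEntry.all(:limit => limit, :order => "city desc"))
          }
        }

        # In PagesController#show, the module-specific block then shrinks to:
        #   @mods.each do |mod|
        #     query = MODULE_QUERIES[mod.name]
        #     query.call(self, mod, params) if query
        #   end

    Only modules actually present on the page trigger a lookup, so unrelated queries never run, and each app using the controller can ship its own MODULE_QUERIES table.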


  • Rails development environment Resque.enqueue does not create jobs

    - by anton evangelatov
    I am having the same problem as in "Rails custom environment Resque.enqueue does not create jobs", but the solution there doesn't work for me. I'm using Resque for a couple of asynchronous jobs. It works just fine in the staging environment, but for some reason it stopped working in development. For example, if I run the following, nothing is enqueued (I check Resque using resque-web):

        $ rails c development
        > Resque.enqueue(MyLovelyJob, 1)

    If I run it on staging, it works just fine:

        $ rails c staging
        > Resque.enqueue(MyLovelyJob, 1)

    I have tried to duplicate the two environments, and they seem to use absolutely the same configuration (database.yml, config/environment, etc.), but development is still not working. If I do:

        > Resque.enqueue(UpdateInstancesData, 2)
        => true
        > Resque.info
        => {
            :pending => 0,
            :processed => 0,
            :queues => 0,
            :workers => 1,
            :working => 0,
            :failed => 0,
            :servers => [
              [0] "redis://127.0.0.1:6379/0"
            ],
            :environment => "development"
        }

    Any suggestions on where to look in order to debug this? I am running the application via foreman. My Procfile looks like:

        faye: rackup faye.ru -s thin -E production
        worker1: bundle exec rake resque:work QUEUE=* VERBOSE=1
        worker2: bundle exec rake resque:work QUEUE=* VERBOSE=1
        clock: bundle exec rake resque:scheduler VERBOSE=1
        web: bundle exec rails s

    For staging, as mentioned, everything works and the log from foreman is:

        17:03:42 clock.1   | 2013-06-26 17:03:42 Reloading Schedule
        17:03:42 clock.1   | 2013-06-26 17:03:42 Loading Schedule
        17:03:42 clock.1   | 2013-06-26 17:03:42 Scheduling logging_test
        17:03:42 clock.1   | 2013-06-26 17:03:42 Schedules Loaded
        17:03:43 worker2.1 | *** Starting worker ttttt-mbp.local:69573:*
        17:03:43 worker2.1 | *** Registered signals
        17:03:43 worker2.1 | *** Running before_first_fork hooks
        17:03:43 worker1.1 | *** Starting worker ttttt-mbp.local:69572:*
        17:03:43 worker1.1 | *** Registered signals
        17:03:43 worker2.1 | *** Checking another_queue
        17:03:43 worker2.1 | *** Checking anotherqueue
        17:03:43 worker2.1 | *** Checking statused
        17:03:43 worker2.1 | *** Found job on statused
        17:03:43 worker2.1 | *** got: (Job{statused} | LoggingTest | ["57e89a1c1b24ce6866bcf5d0e1c07f01", {}])
        17:06:30 clock.1   | 2013-06-26 17:06:30 queueing LoggingTest (logging_test)
        17:06:33 worker1.1 | *** Checking another_queue
        17:06:33 worker2.1 | *** Checking another_queue
        17:06:33 worker1.1 | *** Checking anotherqueue
        17:06:33 worker2.1 | *** Checking anotherqueue
        17:06:33 worker1.1 | *** Found job on anotherqueue
        17:06:33 worker1.1 | *** got: (Job{anotherqueue} | LoggingTest | ["0d976869a945766e0cfeca83e7349305", {}])
        17:06:33 worker1.1 | *** resque-1.24.1: Processing anotherqueue since 1372259193 [LoggingTest]
        17:06:33 worker1.1 | *** Running before_fork hooks with [(Job{anotherqueue} | LoggingTest | ["0d976869a945766e0cfeca83e7349305", {}])]
        17:06:33 worker1.1 | *** resque-1.24.1: Forked 69955 at 1372259193
        17:06:33 worker2.1 | *** resque-1.24.1: Forked 69956 at 1372259193
        17:06:33 worker1.1 | *** Running after_fork hooks with [(Job{anotherqueue} | LoggingTest | ["0d976869a945766e0cfeca83e7349305", {}])]
        17:06:33 worker1.1 | JOB :: LoggingTest
        17:06:33 worker1.1 | 55555
        17:06:33 worker1.1 | *** done: (Job{anotherqueue} | LoggingTest | ["0d976869a945766e0cfeca83e7349305", {}])

    Whereas for development it doesn't seem to enqueue and then find the job. If there is a job already in the queue (pending, left over from the staging environment), the workers from development don't process it.

        17:01:23 clock.1   | 2013-06-26 17:01:23 Reloading Schedule
        17:01:23 clock.1   | 2013-06-26 17:01:23 Loading Schedule
        17:01:23 clock.1   | 2013-06-26 17:01:23 Scheduling logging_test
        17:01:23 clock.1   | 2013-06-26 17:01:23 Scheduling update_instances_data
        17:01:23 clock.1   | 2013-06-26 17:01:23 Schedules Loaded
        17:03:10 clock.1   | 2013-06-26 17:03:10 queueing LoggingTest (logging_test)
        17:03:14 worker1.1 | *** Checking another_queue
        17:03:14 worker2.1 | *** Checking another_queue
        17:03:14 worker1.1 | *** Checking anotherqueue
        17:03:14 worker2.1 | *** Checking anotherqueue
        17:03:14 worker1.1 | *** Checking statused
        17:03:14 worker2.1 | *** Checking statused
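
    Given that Resque.info reports a worker and the expected Redis URL but zero pending jobs, one hedged thing to check (not from the original post) is whether the development console and the foreman-started workers really share the same Redis connection, and whether inline mode is on; inline mode is a classic cause of "enqueue returns true but nothing ever appears in Redis". For example:

        # Run in `rails c development` and compare with the staging console.
        require 'resque'

        puts Resque.redis.inspect          # which Redis host/port/db this process talks to
        puts Resque.inline                 # if true, jobs run in-process and never hit Redis
        puts Resque.queues.inspect         # queues Resque currently knows about
        puts Resque.size("anotherqueue")   # pending count on a specific queue

        # Pinning the connection in config/initializers/resque.rb (hypothetical file)
        # removes any per-environment drift:
        Resque.redis = Redis.new(:host => "127.0.0.1", :port => 6379, :db => 0)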


  • bidirectional OneToOne Mapping : From Entity to subclass and from superclass to Entity?

    - by Teocali
    I'm trying to establish a tricky bidirectional OneToOne mapping in Hibernate. I have the following classes:

        @Entity
        @Inheritance(strategy = InheritanceType.JOINED)
        public class Parent {
            @OneToOne
            private AnotherEntity anotherEntity;
        }

        @Entity
        public class Child1 extends Parent {}

        @Entity
        public class Child2 extends Parent {}

        @Entity
        public class AnotherEntity {
            @OneToOne(mappedBy = "anotherEntity")
            private Child1 child1;

            @OneToOne(mappedBy = "anotherEntity")
            private Child1 child2;
        }

    The problem appears when I launch the application; I get the following message:

        org.hibernate.MappingException: property [anotherEntity] not found on entity [Child1]
            at org.hibernate.mapping.PersistentClass.getRecursiveProperty(PersistentClass.java:429)
            at org.hibernate.mapping.PersistentClass.getReferencedProperty(PersistentClass.java:369)
            at org.hibernate.cfg.Configuration.originalSecondPassCompile(Configuration.java:1614)
            at org.hibernate.cfg.Configuration.secondPassCompile(Configuration.java:1362)
            at org.hibernate.cfg.Configuration.buildSessionFactory(Configuration.java:1727)
            at org.hibernate.cfg.Configuration.buildSessionFactory(Configuration.java:1778)
            at org.springframework.orm.hibernate4.LocalSessionFactoryBuilder.buildSessionFactory(LocalSessionFactoryBuilder.java:189)
            at org.springframework.orm.hibernate4.LocalSessionFactoryBean.buildSessionFactory(LocalSessionFactoryBean.java:350)
            at org.springframework.orm.hibernate4.LocalSessionFactoryBean.afterPropertiesSet(LocalSessionFactoryBean.java:335)
            at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.invokeInitMethods(AbstractAutowireCapableBeanFactory.java:1514)
            at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.initializeBean(AbstractAutowireCapableBeanFactory.java:1452)
            at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.doCreateBean(AbstractAutowireCapableBeanFactory.java:519)
            at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBean(AbstractAutowireCapableBeanFactory.java:456)
            at org.springframework.beans.factory.support.AbstractBeanFactory$1.getObject(AbstractBeanFactory.java:294)
            at org.springframework.beans.factory.support.DefaultSingletonBeanRegistry.getSingleton(DefaultSingletonBeanRegistry.java:225)
            at org.springframework.beans.factory.support.AbstractBeanFactory.doGetBean(AbstractBeanFactory.java:291)
            at org.springframework.beans.factory.support.AbstractBeanFactory.getBean(AbstractBeanFactory.java:193)
            at org.springframework.beans.factory.support.DefaultListableBeanFactory.preInstantiateSingletons(DefaultListableBeanFactory.java:567)
            at org.springframework.context.support.AbstractApplicationContext.finishBeanFactoryInitialization(AbstractApplicationContext.java:913)
            at org.springframework.context.support.AbstractApplicationContext.refresh(AbstractApplicationContext.java:464)
            at org.springframework.web.servlet.FrameworkServlet.configureAndRefreshWebApplicationContext(FrameworkServlet.java:631)
            at org.springframework.web.servlet.FrameworkServlet.createWebApplicationContext(FrameworkServlet.java:588)
            at org.springframework.web.servlet.FrameworkServlet.createWebApplicationContext(FrameworkServlet.java:645)
            at org.springframework.web.servlet.FrameworkServlet.initWebApplicationContext(FrameworkServlet.java:508)
            at org.springframework.web.servlet.FrameworkServlet.initServletBean(FrameworkServlet.java:449)
            at org.springframework.web.servlet.HttpServletBean.init(HttpServletBean.java:133)
            at javax.servlet.GenericServlet.init(GenericServlet.java:160)
            at org.apache.catalina.core.StandardWrapper.initServlet(StandardWrapper.java:1266)
            at org.apache.catalina.core.StandardWrapper.loadServlet(StandardWrapper.java:1185)
            at org.apache.catalina.core.StandardWrapper.load(StandardWrapper.java:1080)
            at org.apache.catalina.core.StandardContext.loadOnStartup(StandardContext.java:5015)
            at org.apache.catalina.core.StandardContext.startInternal(StandardContext.java:5302)
            at org.apache.catalina.util.LifecycleBase.start(LifecycleBase.java:150)
            at org.apache.catalina.core.ContainerBase.addChildInternal(ContainerBase.java:895)
            at org.apache.catalina.core.ContainerBase.addChild(ContainerBase.java:871)
            at org.apache.catalina.core.StandardHost.addChild(StandardHost.java:615)
            at org.apache.catalina.startup.HostConfig.deployWAR(HostConfig.java:962)
            at org.apache.catalina.startup.HostConfig.deployApps(HostConfig.java:536)
            at org.apache.catalina.startup.HostConfig.check(HostConfig.java:1471)
            at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
            at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
            at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
            at java.lang.reflect.Method.invoke(Method.java:597)
            at org.apache.tomcat.util.modeler.BaseModelMBean.invoke(BaseModelMBean.java:301)
            at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.invoke(DefaultMBeanServerInterceptor.java:836)
            at com.sun.jmx.mbeanserver.JmxMBeanServer.invoke(JmxMBeanServer.java:761)
            at org.apache.catalina.manager.ManagerServlet.check(ManagerServlet.java:1436)
            at org.apache.catalina.manager.ManagerServlet.deploy(ManagerServlet.java:673)
            at org.apache.catalina.manager.ManagerServlet.doPut(ManagerServlet.java:431)
            at javax.servlet.http.HttpServlet.service(HttpServlet.java:644)
            at javax.servlet.http.HttpServlet.service(HttpServlet.java:722)
            at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:305)
            at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:210)
            at org.apache.catalina.filters.SetCharacterEncodingFilter.doFilter(SetCharacterEncodingFilter.java:108)
            at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:243)
            at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:210)
            at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:225)
            at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:169)
            at org.apache.catalina.authenticator.AuthenticatorBase.invoke(AuthenticatorBase.java:581)
            at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:168)
            at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:98)
            at org.apache.catalina.valves.AccessLogValve.invoke(AccessLogValve.java:927)
            at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:118)
            at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:407)
            at org.apache.coyote.http11.AbstractHttp11Processor.process(AbstractHttp11Processor.java:999)
            at org.apache.coyote.AbstractProtocol$AbstractConnectionHandler.process(AbstractProtocol.java:565)
            at org.apache.tomcat.util.net.JIoEndpoint$SocketProcessor.run(JIoEndpoint.java:307)
            at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
            at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
            at java.lang.Thread.run(Thread.java:662)

    One obvious solution would be to move the anotherEntity field to Child1 and Child2, but that would mean losing the link from Parent to AnotherEntity. Any help welcome.
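
    Judging from the getRecursiveProperty frame, Hibernate resolves mappedBy against the declared type of the inverse field (Child1 here) and does not appear to find the inherited anotherEntity property declared on Parent. A hedged workaround, not from the original post, is to type the inverse side as the superclass that actually declares the property; note too that two inverse mappings targeting the same owning field is itself suspect, so they are collapsed into one reference below:

        @Entity
        public class AnotherEntity {
            // Inverse side typed as the superclass that declares "anotherEntity".
            @OneToOne(mappedBy = "anotherEntity")
            private Parent parent;

            // Illustrative convenience accessor; the instanceof dispatch is a sketch, not required.
            public Child1 getChild1() {
                return (parent instanceof Child1) ? (Child1) parent : null;
            }
        }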


  • Win32 reset event like synchronization class with boost C++

    - by fgungor
    I need some mechanism reminiscent of Win32 reset events that I can check via functions with the same semantics as WaitForSingleObject() and WaitForMultipleObjects() (I only need the ...SingleObject() version for the moment). But I am targeting multiple platforms, so all I have is boost::threads (AFAIK). I came up with the following class and wanted to ask about the potential problems and whether it is up to the task or not. Thanks in advance.

        class reset_event
        {
            bool flag, auto_reset;
            boost::condition_variable cond_var;
            boost::mutex mx_flag;
        public:
            reset_event(bool _auto_reset = false)
                : flag(false), auto_reset(_auto_reset)
            {
            }
            void wait()
            {
                boost::unique_lock<boost::mutex> LOCK(mx_flag);
                if (flag)
                    return;
                cond_var.wait(LOCK);
                if (auto_reset)
                    flag = false;
            }
            bool wait(const boost::posix_time::time_duration& dur)
            {
                boost::unique_lock<boost::mutex> LOCK(mx_flag);
                bool ret = cond_var.timed_wait(LOCK, dur) || flag;
                if (auto_reset && ret)
                    flag = false;
                return ret;
            }
            void set()
            {
                boost::lock_guard<boost::mutex> LOCK(mx_flag);
                flag = true;
                cond_var.notify_all();
            }
            void reset()
            {
                boost::lock_guard<boost::mutex> LOCK(mx_flag);
                flag = false;
            }
        };

    Example usage:

        reset_event terminate_thread;

        void fn_thread()
        {
            while (!terminate_thread.wait(boost::posix_time::milliseconds(10)))
            {
                std::cout << "working..." << std::endl;
                boost::this_thread::sleep(boost::posix_time::milliseconds(1000));
            }
            std::cout << "thread terminated" << std::endl;
        }

        int main()
        {
            boost::thread worker(fn_thread);
            boost::this_thread::sleep(boost::posix_time::seconds(1));
            terminate_thread.set();
            worker.join();
            return 0;
        }

    EDIT: I have fixed the code according to Michael Burr's suggestions. My "very simple" tests indicate no problems.

        class reset_event
        {
            bool flag, auto_reset;
            boost::condition_variable cond_var;
            boost::mutex mx_flag;
        public:
            explicit reset_event(bool _auto_reset = false)
                : flag(false), auto_reset(_auto_reset)
            {
            }
            void wait()
            {
                boost::unique_lock<boost::mutex> LOCK(mx_flag);
                if (flag)
                {
                    if (auto_reset)
                        flag = false;
                    return;
                }
                do
                {
                    cond_var.wait(LOCK);
                } while (!flag);
                if (auto_reset)
                    flag = false;
            }
            bool wait(const boost::posix_time::time_duration& dur)
            {
                boost::unique_lock<boost::mutex> LOCK(mx_flag);
                if (flag)
                {
                    if (auto_reset)
                        flag = false;
                    return true;
                }
                bool ret = cond_var.timed_wait(LOCK, dur);
                if (ret && flag)
                {
                    if (auto_reset)
                        flag = false;
                    return true;
                }
                return false;
            }
            void set()
            {
                boost::lock_guard<boost::mutex> LOCK(mx_flag);
                flag = true;
                cond_var.notify_all();
            }
            void reset()
            {
                boost::lock_guard<boost::mutex> LOCK(mx_flag);
                flag = false;
            }
        };
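
    If a C++11 toolchain is an option, the same device ports almost mechanically to the standard library, dropping the Boost dependency. A minimal sketch of the manual-reset flavor (illustrative, not from the original post); the predicate form of wait() guards against the spurious and lost wakeups that Michael Burr's suggestions address:

        #include <condition_variable>
        #include <mutex>

        class reset_event
        {
            bool flag = false;
            std::condition_variable cv;
            std::mutex mx;
        public:
            void wait()
            {
                std::unique_lock<std::mutex> lock(mx);
                cv.wait(lock, [this] { return flag; }); // re-checks flag on every wakeup
            }
            void set()
            {
                std::lock_guard<std::mutex> lock(mx);
                flag = true;
                cv.notify_all();
            }
            void reset()
            {
                std::lock_guard<std::mutex> lock(mx);
                flag = false;
            }
        };

    The timed variant follows the same shape with cv.wait_for(lock, dur, pred).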


  • Social Shopping

    - by David Dorf
    I've written about various breeds of social shopping in the past, so I decided to give some thought to a categorization with examples. Below I've listed the different types of social shopping I've observed and some companies that support them.

    Comments and Ratings -- Commenting on products has been around almost as long as e-commerce. Two popular players in this space are BazaarVoice and PowerReviews. Most shoppers prefer relying on peer reviews rather than retailer descriptions, so the influence over sales is very strong.

    f-commerce -- A new term that was sure to rear its ugly head when retailers started allowing shopping on Facebook, and it's all Elastic Path and Alvenda's fault!

    Co-shopping -- Retailers like Wet Seal are enabling multiple people to shop together online. This is particularly applicable to fashion, where the real-time exchange of opinions is important. I actually tried this with a co-worker and it's pretty cool.

    Bragging -- Blippy is Twitter for shoppers, allowing purchases to be "tweeted" so you can keep up with your friends. I get alerted when friends download music or apps from iTunes because chances are I'll be interested as well. This covert influence is one-upped by Snatter, a service that gives people discounts for tweeting or posting promotions from retailers. This is the petri dish of viral marketing.

    Advice -- Combine the bragging of Blippy and the opinions from BazaarVoice and you'd get ShopSocially, a social network dedicated to spreading product knowledge amongst informed shoppers.

    I'm sure if I gave it more thought, a few more types would come to mind, but I've got to get back to work. Now is not the time to be blogging at Oracle!


  • Ubuntu 12.04 + Eclipse 64 bits key binding error

    - by user110933
    The log is quite extensive, so this is just a part of it: !SESSION 2012-11-23 10:15:52.442 ----------------------------------------------- eclipse.buildId=I20120608-1200 java.version=1.7.0_09 java.vendor=Oracle Corporation BootLoader constants: OS=linux, ARCH=x86_64, WS=gtk, NL=en_US Command-line arguments: -os linux -ws gtk -arch x86_64 !ENTRY org.eclipse.jface 2 0 2012-11-23 10:16:06.408 !MESSAGE Keybinding conflicts occurred. They may interfere with normal accelerator operation. !SUBENTRY 1 org.eclipse.jface 2 0 2012-11-23 10:16:06.408 !MESSAGE A conflict occurred for ALT+SHIFT+R: Binding(ALT+SHIFT+R, ParameterizedCommand(Command(oracle.eclipse.tools.common.services.ui.refactor.rename.command,Rename, Rename the selected text., Category(org.eclipse.jdt.ui.category.refactoring,Refactor - Java,Java Refactoring Actions,true), oracle.eclipse.tools.common.services.ui.refactor.internal.ArtifactRefactoringCommandHandler, ,,true),null), org.eclipse.ui.defaultAcceleratorConfiguration, org.eclipse.ui.contexts.window,,,system) Binding(ALT+SHIFT+R, ParameterizedCommand(Command(org.eclipse.jdt.ui.edit.text.java.rename.element,Rename - Refactoring , Rename the selected element, Category(org.eclipse.jdt.ui.category.refactoring,Refactor - Java,Java Refactoring Actions,true), , ,,true),null), org.eclipse.ui.defaultAcceleratorConfiguration, org.eclipse.ui.contexts.window,,,system) !ENTRY org.eclipse.ui.workbench 4 0 2012-11-23 10:16:10.409 !MESSAGE An unexpected exception was thrown. !STACK 0 java.lang.NullPointerException at org.eclipse.ui.internal.WorkbenchWindow.putToolbarLabel(WorkbenchWindow.java:1697) at org.eclipse.ui.internal.menus.MenuAdditionCacheEntry.createToolBarAdditionContribution(MenuAdditionCacheEntry.java:208) at org.eclipse.ui.internal.menus.MenuAdditionCacheEntry.createContributionItems(MenuAdditionCacheEntry.java:177) at org.eclipse.ui.internal.menus.TrimContributionManager.update(TrimContributionManager.java:224) at org.eclipse.ui.internal.WorkbenchWindow.updateLayoutDataForContents(WorkbenchWindow.java:3874) at org.eclipse.ui.internal.WorkbenchWindow.setCoolBarVisible(WorkbenchWindow.java:3675) at org.eclipse.ui.internal.ViewIntroAdapterPart.setBarVisibility(ViewIntroAdapterPart.java:203) at org.eclipse.ui.internal.ViewIntroAdapterPart.dispose(ViewIntroAdapterPart.java:106) at org.eclipse.ui.internal.WorkbenchPartReference.doDisposePart(WorkbenchPartReference.java:737) at org.eclipse.ui.internal.ViewReference.doDisposePart(ViewReference.java:107) at org.eclipse.ui.internal.WorkbenchPartReference.dispose(WorkbenchPartReference.java:684) at org.eclipse.ui.internal.WorkbenchPage.disposePart(WorkbenchPage.java:1801) at org.eclipse.ui.internal.WorkbenchPage.partRemoved(WorkbenchPage.java:1793) at org.eclipse.ui.internal.ViewFactory.releaseView(ViewFactory.java:257) at org.eclipse.ui.internal.Perspective.dispose(Perspective.java:292) at org.eclipse.ui.internal.WorkbenchPage.dispose(WorkbenchPage.java:1872) at org.eclipse.ui.internal.WorkbenchWindow.closeAllPages(WorkbenchWindow.java:894) at org.eclipse.ui.internal.WorkbenchWindow.hardClose(WorkbenchWindow.java:1729) at org.eclipse.ui.internal.WorkbenchWindow.busyClose(WorkbenchWindow.java:730) at org.eclipse.ui.internal.WorkbenchWindow.access$0(WorkbenchWindow.java:715) at org.eclipse.ui.internal.WorkbenchWindow$6.run(WorkbenchWindow.java:867) at org.eclipse.swt.custom.BusyIndicator.showWhile(BusyIndicator.java:70) at org.eclipse.ui.internal.WorkbenchWindow.close(WorkbenchWindow.java:865) at
org.eclipse.jface.window.WindowManager.close(WindowManager.java:109) at org.eclipse.ui.internal.Workbench$18.run(Workbench.java:1114) at org.eclipse.core.runtime.SafeRunner.run(SafeRunner.java:42) at org.eclipse.ui.internal.Workbench.busyClose(Workbench.java:1111) at org.eclipse.ui.internal.Workbench.access$15(Workbench.java:1040) at org.eclipse.ui.internal.Workbench$25.run(Workbench.java:1284) at org.eclipse.swt.custom.BusyIndicator.showWhile(BusyIndicator.java:70) at org.eclipse.ui.internal.Workbench.close(Workbench.java:1282) at org.eclipse.ui.internal.Workbench.close(Workbench.java:1254) at org.eclipse.ui.internal.WorkbenchWindow.busyClose(WorkbenchWindow.java:727) at org.eclipse.ui.internal.WorkbenchWindow.access$0(WorkbenchWindow.java:715) at org.eclipse.ui.internal.WorkbenchWindow$6.run(WorkbenchWindow.java:867) at org.eclipse.swt.custom.BusyIndicator.showWhile(BusyIndicator.java:70) at org.eclipse.ui.internal.WorkbenchWindow.close(WorkbenchWindow.java:865) at org.eclipse.jface.window.Window.handleShellCloseEvent(Window.java:741) at org.eclipse.jface.window.Window$3.shellClosed(Window.java:687) at org.eclipse.swt.widgets.TypedListener.handleEvent(TypedListener.java:98) at org.eclipse.swt.widgets.EventTable.sendEvent(EventTable.java:84) at org.eclipse.swt.widgets.Widget.sendEvent(Widget.java:1276) at org.eclipse.swt.widgets.Widget.sendEvent(Widget.java:1300) at org.eclipse.swt.widgets.Widget.sendEvent(Widget.java:1285) at org.eclipse.swt.widgets.Shell.closeWidget(Shell.java:617) at org.eclipse.swt.widgets.Shell.gtk_delete_event(Shell.java:1191) at org.eclipse.swt.widgets.Widget.windowProc(Widget.java:1750) at org.eclipse.swt.widgets.Control.windowProc(Control.java:5116) at org.eclipse.swt.widgets.Display.windowProc(Display.java:4369) at org.eclipse.swt.internal.gtk.OS._gtk_main_do_event(Native Method) at org.eclipse.swt.internal.gtk.OS.gtk_main_do_event(OS.java:8295) at org.eclipse.swt.widgets.Display.eventProc(Display.java:1192) at org.eclipse.swt.internal.gtk.OS._g_main_context_iteration(Native Method) at org.eclipse.swt.internal.gtk.OS.g_main_context_iteration(OS.java:2332) at org.eclipse.swt.widgets.Display.readAndDispatch(Display.java:3177) at org.eclipse.ui.internal.Workbench.runEventLoop(Workbench.java:2701) at org.eclipse.ui.internal.Workbench.runUI(Workbench.java:2665) at org.eclipse.ui.internal.Workbench.access$4(Workbench.java:2499) at org.eclipse.ui.internal.Workbench$7.run(Workbench.java:679) at org.eclipse.core.databinding.observable.Realm.runWithDefault(Realm.java:332) at org.eclipse.ui.internal.Workbench.createAndRunWorkbench(Workbench.java:668) at org.eclipse.ui.PlatformUI.createAndRunWorkbench(PlatformUI.java:149) at org.eclipse.ui.internal.ide.application.IDEApplication.start(IDEApplication.java:124) at org.eclipse.equinox.internal.app.EclipseAppHandle.run(EclipseAppHandle.java:196) at org.eclipse.core.runtime.internal.adaptor.EclipseAppLauncher.runApplication(EclipseAppLauncher.java:110) at org.eclipse.core.runtime.internal.adaptor.EclipseAppLauncher.start(EclipseAppLauncher.java:79) at org.eclipse.core.runtime.adaptor.EclipseStarter.run(EclipseStarter.java:353) at org.eclipse.core.runtime.adaptor.EclipseStarter.run(EclipseStarter.java:180) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:601) at 
org.eclipse.equinox.launcher.Main.invokeFramework(Main.java:629) at org.eclipse.equinox.launcher.Main.basicRun(Main.java:584) at org.eclipse.equinox.launcher.Main.run(Main.java:1438) at org.eclipse.equinox.launcher.Main.main(Main.java:1414) !SESSION 2012-11-23 10:36:07.863 ----------------------------------------------- eclipse.buildId=I20120608-1200 java.version=1.7.0_09 java.vendor=Oracle Corporation BootLoader constants: OS=linux, ARCH=x86_64, WS=gtk, NL=en_US Command-line arguments: -os linux -ws gtk -arch x86_64 !ENTRY org.eclipse.jface 2 0 2012-11-23 10:36:13.181 !MESSAGE Keybinding conflicts occurred. They may interfere with normal accelerator operation. !SUBENTRY 1 org.eclipse.jface 2 0 2012-11-23 10:36:13.181 !MESSAGE A conflict occurred for ALT+SHIFT+R: Binding(ALT+SHIFT+R, ParameterizedCommand(Command(oracle.eclipse.tools.common.services.ui.refactor.rename.command,Rename, Rename the selected text., Category(org.eclipse.jdt.ui.category.refactoring,Refactor - Java,Java Refactoring Actions,true), oracle.eclipse.tools.common.services.ui.refactor.internal.ArtifactRefactoringCommandHandler, ,,true),null), org.eclipse.ui.defaultAcceleratorConfiguration, org.eclipse.ui.contexts.window,,,system) Binding(ALT+SHIFT+R, ParameterizedCommand(Command(org.eclipse.jdt.ui.edit.text.java.rename.element,Rename - Refactoring , Rename the selected element, Category(org.eclipse.jdt.ui.category.refactoring,Refactor - Java,Java Refactoring Actions,true), , ,,true),null), org.eclipse.ui.defaultAcceleratorConfiguration, org.eclipse.ui.contexts.window,,,system) !ENTRY org.eclipse.osgi 2 1 2012-11-23 10:39:04.681 !MESSAGE NLS unused message: CacheManager_CannotLoadNonUrlLocation in: org.eclipse.equinox.internal.p2.repository.messages !SESSION 2012-11-23 15:14:12.933 ----------------------------------------------- eclipse.buildId=I20120608-1200 java.version=1.7.0_09 java.vendor=Oracle Corporation BootLoader constants: OS=linux, ARCH=x86_64, WS=gtk, NL=en_US Command-line arguments: -os linux -ws gtk -arch x86_64 !ENTRY org.eclipse.jface 2 0 2012-11-23 15:14:23.380 !MESSAGE Keybinding conflicts occurred. They may interfere with normal accelerator operation. !SUBENTRY 1 org.eclipse.jface 2 0 2012-11-23 15:14:23.380 !MESSAGE A conflict occurred for ALT+SHIFT+R: Binding(ALT+SHIFT+R, ParameterizedCommand(Command(oracle.eclipse.tools.common.services.ui.refactor.rename.command,Rename, Rename the selected text., Category(org.eclipse.jdt.ui.category.refactoring,Refactor - Java,Java Refactoring Actions,true), oracle.eclipse.tools.common.services.ui.refactor.internal.ArtifactRefactoringCommandHandler, ,,true),null), org.eclipse.ui.defaultAcceleratorConfiguration, org.eclipse.ui.contexts.window,,,system) Binding(ALT+SHIFT+R, ParameterizedCommand(Command(org.eclipse.jdt.ui.edit.text.java.rename.element,Rename - Refactoring , Rename the selected element, Category(org.eclipse.jdt.ui.category.refactoring,Refactor - Java,Java Refactoring Actions,true), , ,,true),null), org.eclipse.ui.defaultAcceleratorConfiguration, org.eclipse.ui.contexts.window,,,system) !ENTRY org.springframework.ide.eclipse.uaa 4 2 2012-11-23 15:14:32.800 !MESSAGE Problems occurred when invoking code from plug-in: "org.springframework.ide.eclipse.uaa". 
!STACK 0 java.lang.NullPointerException at org.springframework.ide.eclipse.internal.uaa.monitor.CommandUsageMonitor.startMonitoring(CommandUsageMonitor.java:61) at org.springframework.ide.eclipse.uaa.UaaPlugin$1$1.run(UaaPlugin.java:91) at org.eclipse.core.runtime.SafeRunner.run(SafeRunner.java:42) at org.springframework.ide.eclipse.uaa.UaaPlugin$1.run(UaaPlugin.java:85) at org.eclipse.core.internal.jobs.Worker.run(Worker.java:54) !SESSION 2012-11-23 15:15:21.833 ----------------------------------------------- eclipse.buildId=I20120608-1200 java.version=1.7.0_09 java.vendor=Oracle Corporation BootLoader constants: OS=linux, ARCH=x86_64, WS=gtk, NL=en_US Command-line arguments: -os linux -ws gtk -arch x86_64 !ENTRY org.eclipse.jface 2 0 2012-11-23 15:15:27.283 !MESSAGE Keybinding conflicts occurred. They may interfere with normal accelerator operation. !SUBENTRY 1 org.eclipse.jface 2 0 2012-11-23 15:15:27.283 !MESSAGE A conflict occurred for ALT+SHIFT+R: Binding(ALT+SHIFT+R, ParameterizedCommand(Command(oracle.eclipse.tools.common.services.ui.refactor.rename.command,Rename, Rename the selected text., Category(org.eclipse.jdt.ui.category.refactoring,Refactor - Java,Java Refactoring Actions,true), oracle.eclipse.tools.common.services.ui.refactor.internal.ArtifactRefactoringCommandHandler, ,,true),null), org.eclipse.ui.defaultAcceleratorConfiguration, org.eclipse.ui.contexts.window,,,system) Binding(ALT+SHIFT+R, ParameterizedCommand(Command(org.eclipse.jdt.ui.edit.text.java.rename.element,Rename - Refactoring , Rename the selected element, Category(org.eclipse.jdt.ui.category.refactoring,Refactor - Java,Java Refactoring Actions,true), , ,,true),null), org.eclipse.ui.defaultAcceleratorConfiguration, org.eclipse.ui.contexts.window,,,system) !ENTRY org.eclipse.jface 2 0 2012-11-23 15:18:41.265 !MESSAGE Keybinding conflicts occurred. They may interfere with normal accelerator operation. !SUBENTRY 1 org.eclipse.jface 2 0 2012-11-23 15:18:41.265 !MESSAGE A conflict occurred for ALT+SHIFT+E: Binding(ALT+SHIFT+E, ParameterizedCommand(Command(oracle.eclipse.tools.common.services.ui.refactor.rename.command,Rename, Rename the selected text., Category(org.eclipse.jdt.ui.category.refactoring,Refactor - Java,Java Refactoring Actions,true), oracle.eclipse.tools.common.services.ui.refactor.internal.ArtifactRefactoringCommandHandler, ,,true),null), org.eclipse.ui.emacsAcceleratorConfiguration, org.eclipse.ui.contexts.window,,,user) Binding(ALT+SHIFT+E, ParameterizedCommand(Command(org.eclipse.jdt.ui.edit.text.java.rename.element,Rename - Refactoring , Rename the selected element, Category(org.eclipse.jdt.ui.category.refactoring,Refactor - Java,Java Refactoring Actions,true), , ,,true),null), org.eclipse.ui.emacsAcceleratorConfiguration, org.eclipse.ui.contexts.window,,,user) Binding(ALT+SHIFT+E, ParameterizedCommand(Command(org.eclipse.wst.jsdt.ui.edit.text.java.rename.element,Rename - Refactoring , Rename the selected element, Category(org.eclipse.wst.jsdt.ui.category.refactoring,Refactor - JavaScript,JavaScript Refactoring Actions,true), , ,,true),null), org.eclipse.ui.emacsAcceleratorConfiguration, org.eclipse.ui.contexts.window,,,user) !SESSION 2012-11-23 15:18:56.267 ----------------------------------------------- eclipse.buildId=I20120608-1200 java.version=1.7.0_09 java.vendor=Oracle Corporation BootLoader constants: OS=linux, ARCH=x86_64, WS=gtk, NL=en_US Command-line arguments: -os linux -ws gtk -arch x86_64 !ENTRY org.eclipse.jface 2 0 2012-11-23 15:19:01.605 !MESSAGE Keybinding conflicts occurred. 
They may interfere with normal accelerator operation. !SUBENTRY 1 org.eclipse.jface 2 0 2012-11-23 15:19:01.605 !MESSAGE A conflict occurred for ALT+SHIFT+E: Binding(ALT+SHIFT+E, ParameterizedCommand(Command(org.eclipse.wst.jsdt.ui.edit.text.java.rename.element,Rename - Refactoring , Rename the selected element, Category(org.eclipse.wst.jsdt.ui.category.refactoring,Refactor - JavaScript,JavaScript Refactoring Actions,true), , ,,true),null), org.eclipse.ui.emacsAcceleratorConfiguration, org.eclipse.ui.contexts.window,,,user) Binding(ALT+SHIFT+E, ParameterizedCommand(Command(org.eclipse.jdt.ui.edit.text.java.rename.element,Rename - Refactoring , Rename the selected element, Category(org.eclipse.jdt.ui.category.refactoring,Refactor - Java,Java Refactoring Actions,true), , ,,true),null), org.eclipse.ui.emacsAcceleratorConfiguration, org.eclipse.ui.contexts.window,,,user) Binding(ALT+SHIFT+E, ParameterizedCommand(Command(oracle.eclipse.tools.common.services.ui.refactor.rename.command,Rename, Rename the selected text., Category(org.eclipse.jdt.ui.category.refactoring,Refactor - Java,Java Refactoring Actions,true), oracle.eclipse.tools.common.services.ui.refactor.internal.ArtifactRefactoringCommandHandler, ,,true),null), org.eclipse.ui.emacsAcceleratorConfiguration, org.eclipse.ui.contexts.window,,,user)


  • 17 new features in Visual Studio 2010

    - by vik20000in
    Visual Studio 2010 was released to RTM a few days back. This release comes with a large number of improvements on many fronts. In this post I will try to point out some of the major improvements in Visual Studio 2010.

    1) Visual Studio IDE improvements - The Visual Studio IDE has been rewritten in WPF, and the look and feel of the studio has been improved for better readability. The Start Page has been redesigned and templated so that anyone can change it as they wish.
    2) Multiple monitors - Support for multiple monitors was already there in Visual Studio, but in this edition it has been improved so much that we can now place the document, design, and code windows outside the IDE on another monitor.
    3) Zoom in the code editor - Rewriting the editors in WPF has brought significant improvements. The one I like best is the zoom feature: we can now zoom the code editor with Ctrl + mouse scroll. The zoom feature does not work on the design surface or on windows with icons, such as Solution Explorer and the toolbox.
    4) Box selection - Another important improvement in Visual Studio 2010 is box selection. We can select a rectangular region by holding down the Alt key while selecting with the mouse. Within the rectangular selection we can insert text, paste the same code on multiple lines, etc. This is helpful if you want to convert a number of variables from public to private.
    5) New improved search - One of the best productivity improvements in Visual Studio 2010 is its new search-as-you-type support. This lives in the Navigate To window, which can be brought up by pressing Ctrl + comma. The Navigate To window also takes advantage of camel casing, so searches work when characters are entered in upper case. For example, we can search for AOH to find AddOrderHeader.
    6) Call Hierarchy - This feature is only available in the Visual C# and Visual C++ editors. The Call Hierarchy window displays the calls made to and from (yes, both to and from) a selected method, property, or constructor. It also shows implementations of interface members and overrides of virtual or abstract methods. This window is very helpful for understanding code flow and evaluating the effects of making changes. The best part is that it is available at design time, not only at runtime like a debugger.
    7) Highlighting references - One of the very cool features in Visual Studio 2010 is that if you select a symbol, all uses of that symbol are highlighted alongside. This works for all the results that Find All References would return, including names of classes, object variables, properties, and methods. We can also use Ctrl + Shift + Down Arrow or Up Arrow to move through them.
    8) Generate from usage - The Generate From Usage feature lets you use classes and members before you define them. You can generate a stub for any undefined class, constructor, method, property, field, or enum that you want to use but have not yet defined. You can generate new types and members without leaving your current location in code, which minimizes interruption to your workflow.
    9) IntelliSense suggestion mode - IntelliSense now provides two alternatives for statement completion: completion mode and suggestion mode. Use suggestion mode for situations where classes and members are used before they are defined. In suggestion mode, when you type in the editor and then commit the entry, the text you typed is inserted into the code. When you commit an entry in completion mode, the editor shows the entry that is highlighted on the members list. When an IntelliSense window is open, you can press Ctrl+Alt+Spacebar to toggle between completion mode and suggestion mode.
    10) Application Lifecycle Management - A client application for managing the application lifecycle (version control, work item tracking, build automation, a team portal, etc.) is available for free (though not for the Express edition).
    11) Start Page - The Start Page has been redesigned in WPF with new functionality and a new look. Tabbed areas are provided for content from different sources, including MSDN. Once you open a project, the Start Page closes automatically. The list of recent projects also lets you remove projects from the list. And above all, the Start Page is customizable enough to be changed to individual requirements.
    12) Extension Manager - Visual Studio 2010 provides good ways to be extended, and we can also use MEF to extend most of its features. The new Extension Manager can browse the Visual Studio Gallery and install extensions without even opening a web browser.
    13) Code snippets - Visual Studio 2010 adds snippet support for HTML, JScript, and ASP.NET as well.
    14) Improved IntelliSense for JavaScript - JavaScript IntelliSense has been improved vastly (it is around 2-5 times faster), and IntelliSense now also shows XML documentation comments on the go.
    15) Web deployment - Web deployment has been vastly improved; we can package and publish a web application in one click. The three major supported deployment scenarios are web packages, one-click deployment, and web configuration transformations.
    16) SharePoint - Visual Studio 2010 also brings a vastly improved development experience for SharePoint. We can create, edit, debug, package, deploy, and activate SharePoint projects from within Visual Studio. Deploying a site is as easy as hitting F5.
    17) Azure - Visual Studio 2010 also comes with handy improvements for developing for the Windows Azure environment.

    Vikram


  • Top tweets SOA Partner Community – October 2013

    - by JuergenKress
    Send your tweets @soacommunity #soacommunity and follow us at http://twitter.com/soacommunity Ronald Luttikhuizen ?My latest upload: SOA Made Simple | Introduction to SOA on @slideshare http://www.slideshare.net/rluttikhuizen/soa-made-simple-introduction-to-soa … via @SlideShare OTNArchBeat ?ArchBeat Link-o-Rama for October 4, 2013 #cloud #linux #oaam #soa http://pub.vitrue.com/y4SK Lucas Jellema ?My blog article shows news on the new SOA Suite 12c release - as it was publicly available during #oow13 see: http://technology.amis.nl/2013/09/27/oow13-soa-suite-12c/ … Yogesh Sontakke ?Introducing OER's new Express Workflows - Simplified Lifecycle Management. Blog post: http://bit.ly/16JKHCf @soacommunity #soagovernance SrinivasPadmanabhuni ?"@OTNArchBeat: SOA and User Interfaces - by @soacommunity @HajoNormann @gschmutz @t_winterberg et al #industrialsoa http://pub.vitrue.com/KmOp " SOA Community ?SOA and User-Interfaces http://servicetechmag.com/I76/0913-2 article published part of #industrialSOA at Service Technology Magazine #soacommunity Estafet Limited ?@Estafet win @UKOUG Middleware Partner of the Year 2013 Yogesh Sontakke ?RT @VikasAatOracle: #Oracle #B2B - written by experts #soa #soacommunity #oraclesoa - time to get a copy ! @SOAScott Danilo Schmiedel ?Thanks a lot to Juergen @soacommunity for the super interesting and well-organized Partner Advisory Council yesterday! Such a Great Value! OTNArchBeat ?Case management supporting re-landscaping application portfolios | @leonsmiers http://pub.vitrue.com/MC5j Samantha Searle ?Apply for the #GartnerBPM 2014 Excellence Awards - find out how via this link http://ow.ly/ptaNQ #Gartner #bpm #process #entarch #cio OTNArchBeat ?SOA and User Interfaces - by @soacommunity @hajonormann @gschmutz @t_winterberg et al #industrialsoa http://pub.vitrue.com/KmOp Dain Hansen ?Hybrid #cloud is on the rise, but is the IT department's culture standing in the way? http://add.vc/eJN #CloudIntegration #OracleSOA OTNArchBeat #SOASuite 11g ps6 - Download your log files directly from the Enterprise Manager | @whitehorsenl http://pub.vitrue.com/KrJ2 Whitehorses ?Whiteblog: SOA Suite 11g ps6 - Download your log files directly from the Enterprise Manager (http://goo.gl/2Gqiax ) Rajesh Raheja ?Cloud integration session recap #oow13 http://blog.raastech.com/2013/09/recap-of-real-world-cloud-integration.html?m=1 … Vikas Anand ?@Ahmed_Aboulnaga thanks for the excellent summary and kind words. #oow13 #cloud #oraclesoa http://blog.raastech.com/2013/09/recap-of-real-world-cloud-integration.html?m=1 … Luis Augusto Weir ?REST is also SOA. Check it out http://www.soa4u.co.uk/2013/09/restful-is-also-soa.html?m=1 … #soacommunity Graham ?“@OracleBPM & @soacommunity: 5 Ways to Modernize Applications with BPM #AppAdvantage" #oracleday http://bit.ly/15yC6e3 SOA Community ?#ACED director asked me for BPM references in FSI - ever visited my #SOACommunity workspace? https://beehiveonline.oracle.com/teamcollab/overview/SOA_Community_Workspace … #soacommunity #bpm OracleBlogs ?SOA Community Newsletter September 2013 http://ow.ly/2Aj6oK OTNArchBeat ?OOW13: First glimpses of the new #SOASuite12c | @LucasJellema http://pub.vitrue.com/2YgX sbernhardt ?Just published new blog entry on OOW 2013 wrap up. http://thecattlecrew.wordpress.com/2013/09/30/oracle-open-world-2013-wrap-up/ … #oow13 @OC_WIRE @soacommunity Emiel Paasschens ?Home with family after an overwhelming #OOW week in San Francisco with lot of info & meetings. 
Special thanx to @OracleBelux & @soacommunity Robert van Mölken ?Had a awesome week at #OOW13 in SF. Highlights were the @soacommunity Wine tour, @OracleBelux meet-ups and @OracleSOA CAB. Thanks to all :) SOA Community ?The place Oracle Fusion middleware comes from - Oracle 200 - TKs office - next Oracle 100 - SOA & BPM #soacommunity pic.twitter.com/qibFOQVbRo Oracle BPM ?5 Ways to Modernize Applications with BPM #AppAdvantage http://pub.vitrue.com/l2dn Simon Haslam ?Ha ha - how did we miss that! RT @lucasjellema: Post conference announcement of a new middleware appliance? #oow13 pic.twitter.com/3NvcjPfjXb OTNArchBeat ?The OTNArchBeat Daily is out! http://paper.li/OTNArchBeat/1329828521 … ? Top stories today via @lucasjellema @myfear @TylerJewell Packt Publishing ?Get 50% off ALL our DRM-free eBooks - this weekend only! Go to http://www.packtpub.com/ and use code BIG50, as often as you like! #BIG50 OracleBlogs ?Global Perspective: ACE Director from EMEA Weighs in on AppAdvantage http://ow.ly/2Afek2 orclateamsoa ?#orclateamsoa Blog: BPM Auditing Demystified - I've heard from a couple of customers recently asking about BPM aud... http://ow.ly/2AfbAn AMIS, Oracle & Java ?Cool #soasuite 12c feature managed file transfer - visit Dave Barry at demo point sr212 #oow #soacommunity pic.twitter.com/gb4HLbUarR SOA Community ?Let us know what was best at #OOW @soacommunity save trip home - thanks for coming to #SF ;-) see you at #OOW2014 pic.twitter.com/xbWXjRapqh Lonneke Dikmans ?Nice @dschmied is talking about the different steps in his project. He starts with explaining the user interface design #oow13 #ux #acm Lonneke Dikmans ?Saving the best for the end: managing knowledge worker processes by @dschmied and Prasen.#oow13 #acm cool stuff: adaptive case management Luis Augusto Weir ?SOA Governance is more than just OER. Requires people, processes and tools. Check it out #SOA #soacommunity http://youtu.be/Ohn06smVKVw Lonneke Dikmans ?“@OracleSOA: #oow Join us for:Enterprise SOA Infrastructure Best Practices Thu 9/26 2:00 PM - 3:00 PM Moscone West - 2020 SOA Community ?Business Process Management (BPM) 11g PS6 Awareness Course http://wp.me/p10C8u-1as Ajay Khanna ?Detect, Analyze, Act - Fast! http://wp.me/p10C8u-1ao via @soacommunity #OracleBPM Simone Geib ?It took a while, but I finally reached 500 followers. Thanks everybody and especially @soacommunity :) SOA Community ?Functional Testing Business Processes In Oracle BPM Suite 11g by Arun Pareek http://wp.me/p10C8u-1aq SOA Community Distribute the September edition of the SOA Community newsletter READ it! Didn't receive it register http://www.oracle.com/goto/emea/soa #soacommunity SOA Community ?Detect, Analyze, Act - Fast! by Ajay Khanna http://wp.me/p10C8u-1ao Robert van Mölken ?Finalised my #OOW presentation #CON8736 and live demo on wednesday 25th at 11:45am. Also giving a short version at the SOA CAB on thursday. Rajesh Raheja ?"The AppAdvantage of Oracle Cloud & On-premises Integration" http://bit.ly/14RYHmZ SOA Community ?Additional new content SOA & BPM Partner Community http://wp.me/p10C8u-1aw Dain Hansen ?Right now #oow13 SOA, BPM - Customer Advisory Boards. 'No tweeting' says @SOASimone. Instagram of funny cats still ok. leonsmiers ?Case Management with Oracle BPM Suite our presentation on #oow13 http://www.slideshare.net/leonsmiers/oracle-open-world-2013-case-management-smiers-kitson … #capgemini @nkitson72 Mark Simpson ?Flextronics reduced cost of processing an invoice to <$1 from $7 due to BPM @OracleBPM #oow13 saving millions. 
Way less than industry avg. Holger Mueller ?#Siemens Shared Services CIO says that #Fusion #Middleware made the difference for #Oracle over #Workday. #Integration matters. #OOW13 oracleopenworld ?Miss any #oow13 keynotes, or simply want to rewatch? Check out the live streaming site for keynotes on demand: http://pub.vitrue.com/RG4D SOA Community ?Analyze your m2m data and act on it! Big data Pattern matching, fast data & soa #soacommunity #oow pic.twitter.com/48Q1z4ckh7 SOA Community ?Top tweets SOA Partner Community – September 2013 http://wp.me/p10C8u-1cR Simone Geib ?#oraclesoa hands on lab at #oow13 pic.twitter.com/IJJrqXIMiu Danilo Schmiedel #oow13 CON8436: Managing Knowledge Worker Processes. Come & get a free Adaptive Case Management poster @soacommunity pic.twitter.com/FRc2CSyLwb John Sim ?Great job again Jurgen @soacommunity helping bring Ace Community together! Danilo Schmiedel ?Excellent #OracleBPM Adaptive Case Management intro by @heidibuelowBPM and Prasen at the #oow13 demo ground.Last chance today @soacommunity SOA Community ?Thanks to all our #bpm #soa and #weblogic partners for the great middleware business #oow #soacommunity pic.twitter.com/dBwZ8DMHfH Whitehorses ?Thanks @soacommunity for the party tonight. Great to meet product management & see all the talented EMEA middleware specialists. #oow13 Danilo Schmiedel ?Great tool demo from Link Consulting about managing your SOA with OER #oow13 @soacommunity Torsten Winterberg ?“@soacommunity: thanks to @dschmied and @OC_WIRE for making it happen to have our case management poster as printed version hier at #oow13 Ronald Luttikhuizen ?These were the architects involved in the diagram excitement :) just after State of SOA podcast with @OTNArchBeat pic.twitter.com/5B8jIrVTA9 SOA Community ?Tanks to AVIO for the excellent #bpmn poster and the great bpm business - visit then at #OOW & get the poster pic.twitter.com/ebTg9pFY1C Dain Hansen ?Kurian introducing Oracle Platform-as-a-Service developments. #oow13 #OracleCloud pic.twitter.com/evJLTU53rx Bruce Tierney ?API Management "multi-level pie chart" at #oow13 by Oracle's Tim Hall pic.twitter.com/q12OIRdaue Dain Hansen ?This is not your Daddy's BAM @soacommunity: Is this BAM? Very cool in #soasuite 12c get a demo at sr225 pic.twitter.com/EvwqXW9U5j SOA Community ?Is this BAM? 
Very cool in #soasuite 12c get a demo at sr225 pic.twitter.com/LybHxyF362 SOA Community ?SOA governance by @Yogesh_Sontakke at demo point sr214 many good new features - key for soa projects #oow #soa pic.twitter.com/DFK0ummsK1 SOA Community ?Cool #soasuite 12c feature managed file transfer - visit Dave Barry at demo point sr212 #oow #soacommunity pic.twitter.com/GDKcqDGhCF SOA Community ?Adaptive Case Management demo point at #OOW visit @heidibuelowBPM get a demo and cmmn notation poster #soacommunity pic.twitter.com/T7yEyI7tdn Lonneke Dikmans ?In case you missed it: http://blog.vennster.nl/2013/09/case-management-part-1.html?spref=tw … Lucas Jellema ?SOA Suite news: Cloud Adapters RightNow and SalesForce plus SDK to develop custom cloud adapters (CY13); REST/JSON support in SB/SCA (12c) Oracle SOA ?Cloud Integration and AppAdvantage: Transform your Enterprise #soa #oow13 http://pub.vitrue.com/UfPB Dain Hansen ?Cloud Integration and AppAdvantage: Transform your Enterprise #soa #oow13 http://pub.vitrue.com/4QWA Hajo Normann ?#BigData, eventing & real time #analytics suggest timely next actions in #oracleBPM & #oracleACM; #oow13 #FastData pic.twitter.com/aFVGrTXPqu Mark Simpson ?OEP CQL engine now used in BAM12c for event stream summary computation with temporal and pattern match features to feed dashboards. #oow13 Mark Simpson ?BAM12c virtually a new product. Analytics that senses ahead of time and also compares to historical trends to guide process or case #oow13 Andrejus Baranovskis ?Enabling UI Shell 12c/11g Multitasking Behavior http://fb.me/18l9vxQfA Amit Zavery ?Oracle Fusion Middleware Empowers Business Users, EVP Thomas Kurian's session summary http://onforb.es/18Ta1jf #oow13 #oraclemiddle #oracle Vikas Anand ?#oow13 #oracleopenworld BPM on display at Middleware keynote by Thomas Kurian pic.twitter.com/PMm719S0Ui SOA Community ?BPM composer - business user empowerment #oow #soacommunity #bpmsuite pic.twitter.com/0Qgl6oVh0h SOA Community ?Model your process in BPMN - make is executable and analyze & improve them #oow #soacommunity pic.twitter.com/jkLlObDdoi Bruce Tierney ?@demed and Thomas Kurian talk mobile and cloud at #oow13 pic.twitter.com/bAAeqn5a2V Amit Zavery ?Thomas Kurian showcasing all the new features of Oracle Fusion Middleware #oraclemiddle #oow13 SOA Community ?Demo time cloud adapters in #soasuite at Thomas Kurian keynote. Build and integrate mobile apps in minutes #oow pic.twitter.com/qTnCOJLLwS SOA Community ?Soa suite cloud adapters and mobile apps by @demed at Thomas Kurian keynote #oow #oracle #soacommunity pic.twitter.com/5aMLkNH4Ng Danilo Schmiedel ?First impressions from Oracle Open World 2013 http://wp.me/p2fG8x-77 @soacommunity @OC_WIRE SOA Community ?Good morning SFO let us know if you attend #OOW & #OPN keynote - #soacommunity pic.twitter.com/hzLYGDlRgE Simon Haslam ?Had a very useful @wlscommunity PAC meeting yesterday... & probably the best swag to date! pic.twitter.com/Lqus8ysbp7 Vikas Anand ?Oracle SOA Suite - Team Blog http://bit.ly/18I1Zj7 Rajesh Raheja ?Introducing new Cloud Connectivity Adapters #soa #demopod #oow13. I'll be there Sep 23 & 24 3-6pm to meetup http://bit.ly/18I1Zj7 leonsmiers ?..and again a very successful Oracle SOA/BPM partner council on the eve of #oow13. Thanks Jurgen! @soacommunity pic.twitter.com/aM1LMlb7Yw Vikas Anand ?#oow13 #soa #oep #exalogic Canon Delivers Fast Data with Oracle Event Processing (Oracle SOA Suite) http://bit.ly/1dwPeHb #soacommunity Rolf Scheuch ?The ACM poster is a big success. Great talks and .... 
    I am soon out of posters! #bpmcon #ACM pic.twitter.com/TriaUyXRWK Oracle SOA ?British Telecom Sucess with Oracle B2B #oow #soa #b2b http://pub.vitrue.com/1RWi leonsmiers ?(Oracle) Case Management supporting re-platforming, a pre-read before our presentation at #oow13 http://leonsmiers.blogspot.com/2013/09/case-management-supporting-re.html … #capgemini #yammer

    SOA & BPM Partner Community - For regular information on Oracle SOA Suite become a member in the SOA & BPM Partner Community. For registration please visit www.oracle.com/goto/emea/soa (OPN account required). If you need support with your account please contact the Oracle Partner Business Center.

    Read the article

  • A whole site for reviews of SQL Server MVP Deep Dives

    - by Rob Farley
    This book just keeps amazing me. Not only as I read through some chapters for the first time, and others for the second and third times, but also as I read reviews of it written by other people. The guys over at http://sqlperspectives.wordpress.com are a prime example. They’ve been going through each chapter, each writing a review on it, and often getting a guest blogger to write something as well – and they’re clearly getting a lot of stuff out of this brilliant book. Back when I first heard about them doing this, I had offered to be involved, and recently did an interview with them about my chapters (chapter seven and chapter forty). That interview can be found at http://sqlperspectives.wordpress.com/2010/03/20/interview-with-rob-farley/ – and covers how I got into databases, and how I think the database roles in the IT industry are changing. If you don’t have a copy of SQL Server MVP Deep Dives yet, why not get a copy from http://www.sqlservermvpdeepdives.com (or persuade your local bookstore to get some copies in), and read through chapters with these guys? Treat it like a book club, discussing each chapter with others (guest blogging perhaps?), and you’ll probably end up getting even more out of it. Remember that the proceeds of the book go to charity (instead of the authors – we get nothing), so you don’t need to consider that you’re splashing out on a treat for yourself. Think of the kids helped by War Child instead.

    Read the article

  • Thread Synchronization and Synchronization Primitives

    When considering synchronization in an application, the decision truly depends on what the application and its worker threads are going to do. I would use synchronization if two or more threads could possibly manipulate the same instance of an object at the same time. An example of this in C# can be demonstrated through the use of storing data in a static object. A static object is initialized once per application and the data within the object can be accessed by all threads. I would use the synchronization primitives to prevent any data from being manipulated by multiple threads simultaneously. This would prevent data corruption from occurring within the object. On the other hand, if all the threads used non-static objects and were independent of the other tasks, there would be no need to use synchronization. The synchronization primitives in C# are: Basic Blocking, Locking, Signaling, and Non-Blocking Synchronization Constructs. The Basic Blocking methods include Sleep, Join, and Task.Wait. These methods force threads to wait until other threads have completed. In addition, these methods can also force a thread to wait a set amount of time before continuing to work. The Locking primitive prevents a thread from entering a critical section of code while another thread is in the same critical section. If another thread attempts to enter a locked code block, it will wait until the block is released. The Signaling primitive allows a thread to temporarily pause work until receiving a notification from another thread that it is ok to continue working; it removes the need for polling. The Non-Blocking Synchronization Constructs protect access to a common field by calling upon processor primitives.
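    For example, here is a minimal C# sketch (the class and member names are illustrative, not from any specific application) showing the Locking and Basic Blocking primitives protecting a static field shared by two threads:

    using System;
    using System.Threading;

    class SharedCounter
    {
        // Static state: one copy per application, visible to every thread.
        private static int _count = 0;
        private static readonly object _sync = new object();

        static void Work()
        {
            for (int i = 0; i < 100000; i++)
            {
                // Locking: only one thread at a time may enter this critical
                // section, so the increment cannot be corrupted by a
                // simultaneous update from the other thread.
                lock (_sync)
                {
                    _count++;
                }
            }
        }

        static void Main()
        {
            Thread t1 = new Thread(Work);
            Thread t2 = new Thread(Work);
            t1.Start();
            t2.Start();
            t1.Join(); // Basic Blocking: wait until both workers complete.
            t2.Join();
            Console.WriteLine(_count); // Prints 200000 every run with the lock in place.
        }
    }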

    Read the article

  • Messing with the Team

    - by Robert May
    Good Product Owners will help the team be the best that they can be.  Bad Product Owners will mess with the team and won’t care about the team.  If you’re a Product Owner, seek to do good and avoid bad behavior at all costs.  Remember, this is for YOUR benefit and you have much power given to you.  Use that power wisely.

    Scope Creep
    The Product Owner has several tools at his disposal to inject scope into an iteration.  First, the Product Owner can use defects to inject scope.  To do this, they’ll tell the team what functionality they want to see in a feature.  Then, after the feature is developed, the Product Owner will decide that they don’t really like how the functionality behaves.  To change it, rather than creating a new story, they’ll add a defect.  The functionality is correct, as designed, but the Product Owner doesn’t like it.  By creating the defect, the Product Owner destroys the trust that the team has in the Product Owner.  They may not be able to count the story, because the Product Owner changed the story in the iteration, and the team then ends up looking like they have low velocity for something over which they have no control.  This is bad.  One way to deal with this is to add “Product Owner Time” to the iteration.  This will slow the velocity, but then the ScrumMaster can tell stakeholders that this time is strictly in place to deal with bad behavior of the Product Owner. Another mechanism often used to inject scope is the concept of directed development.  Outside of planning, stand-ups, or any other meeting, the Product Owner will take a developer aside and ask them to complete a task for them.  This is bad!  The team should be allocating all of their time to development.  If the Product Owner asks for a favor, then time that would normally be used for development will be used for a pet project of the Product Owner and the team will not get credit for this work.  Selfish Product Owners do this, and I typically see people who were “managers” exhibit this behavior.  Authoritarian command-and-control development environments also see this happen.  The best thing that can happen is for the team member to report the issue to the ScrumMaster, and for the ScrumMaster to get very aggressive with management and the Product Owner to try and stop the behavior.  This may result in the ScrumMaster being fired, but if the behavior continues, Scrum is doomed.  This problem is especially bad in cases where the team member’s direct supervisor is the Product Owner.  I don’t recommend that the Product Owner or ScrumMaster have a direct-report relationship with team members, since team members need the ability to say no.  To work around this issue, team members need to say no.  If that fails, team members need to add extra time to the iteration to deal with the scope creep injection and accept the lower velocity. As discussed above, another mechanism for injecting scope is changing acceptance tests after the work is complete.  This is similar to adding defects to change scope and is bad.  To get around it, add time for Product Owner uncertainty to the iteration and make sure that stakeholders are aware of the need to add this time because of the Product Owner.

    Refusing to Prioritize
    Refusing to prioritize causes chaos for the team.  From the team’s perspective, things that are not important will be worked on while things that the team knows are vital will be ignored.  A poor Product Owner will often pick the stories for the iteration on a whim.  This leads to the team working on many different aspects of the product and results in a lower velocity, since each iteration the team must switch context to the new area of development. The team will also experience confusion about priorities.  In one iteration, Feature X was the highest priority and had to be done.  Then, the following iteration, even though parts of Feature X still need to be completed, no stories to address them will be in the iteration.  However, three iterations later, Feature X will again become high priority. This will cause the team to not trust the Product Owner, and eventually, they’ll stop caring about the features they implement.  They won’t know what is important, so to insulate themselves from the ever-changing chaos, they’ll become apathetic to all features.  Team members are some of the most creative people in a company.  By losing their engagement, the company is going to have a substandard product because the passion for the product won’t be in the team. Other signs that the Product Owner refuses to prioritize are that no one outside of the Product Owner will be consulted on priorities, and that the product, release, and iteration backlogs will be weak or non-existent. Dealing with this issue is not easy.  This really isn’t something the team can fix, short of taking over the role of Product Owner themselves.  An appeal to the stakeholders might work, but only if the Product Owner isn’t a “manager” themselves.  The ScrumMaster needs to protect the team and do what they can to either get the Product Owner to prioritize or have the Product Owner replaced.

    Managing the Team
    A Product Owner who is also the “boss” of team members is a Scrum team that is waiting to fail.  If your boss tells you to do something, failing to do it can get you fired.  The team needs the ability to tell the Product Owner NO.  If the Product Owner introduces scope creep, the team has a responsibility to tell the Product Owner no.  If the Product Owner tries to get the team to commit to more than they can accomplish in an iteration, the team needs the ability to tell the Product Owner no. If the Product Owner is your boss and determines your pay increases, you’re probably not going to ever tell them no, and Scrum will likely fail.  The team can’t do much in this situation. Another aspect of “managing the team” that often happens is the Product Owner trying to tell the team how to develop the stories that are in the iteration.  This is one reason why I recommend that Product Owners are NOT technical people.  That way, the team can come up with the tasks that are needed to accomplish the stories and the Product Owner won’t know better.  If the Product Owner is technical, the ScrumMaster will need to take great care to protect the team from the Product Owner changing how the team thinks they need to implement the stories. Product Owners can also try to manage the team through their body language.  If the team says a task is going to take 6 hours to complete, and the Product Owner disagrees, they will use some kind of sour body language to indicate this disagreement.  In weak teams, this may cause the team to revise their estimate down, which will result in them taking longer than estimated and may result in them missing the iteration.  The ScrumMaster will need to make sure that the Product Owner doesn’t send such messages and that the team ignores them and estimates what they REALLY think it will take to complete the tasks.  Forcing the team to deal with such items in the retrospective can be helpful.

    Absenteeism
    The team is completely dependent upon the Product Owner to develop features for the customer.  The Product Owner IS the voice of the customer and without them, the team will lack direction.  Being the Product Owner is a full-time job!  If the Product Owner cannot dedicate daily time with the team, a different Product Owner should be found. The Product Owner needs to attend every stand-up, planning meeting, showcase, and retrospective that the team has.  The team also must be able to have instant communication with the Product Owner.  They must not be required to schedule meetings to speak with their Product Owner.  The team must be the highest priority task that the Product Owner has. The best way to work around an absent Product Owner is to appoint a new Product Owner in the team.  This person will be responsible for making the decisions that the Product Owner should be making and for acting as the liaison to the absent Product Owner.  If the delegate Product Owner doesn’t have authority to make decisions for the team, Scrum will fail.  If the Product Owner is absent, the ScrumMaster should seek to have that Product Owner replaced by someone who has the time and ability to be a real Product Owner.

    Making it Personal
    Too often Product Owners become convinced that their ideas are the ones that matter and that anyone who disagrees is making a personal attack on them.  Remember that Product Owners will inherently be at odds with many people, simply because they have the need to prioritize.  Others will frequently question prioritization because they only see part of the picture that Product Owners face. Product Owners must have a thick skin and a small ego.  If they don’t, they tend to make things personal, which causes them to become emotional and to take actions that can destroy the trust that team members have in the Product Owner. If a Product Owner is making things personal, the best thing that team members can do is reassure them that it’s not personal, but be firm about doing what is best for the company and for the users.  The ScrumMaster should also spend significant time coaching the Product Owner on how not to react emotionally and how to accept criticism without becoming defensive.

    Conclusion
    I’m sure there are other ways that a Product Owner can mess with the team, but these are the most common that I’ve seen.  I would encourage all Product Owners to seek to be good Product Owners.  If you find yourself behaving in any of the bad Product Owner ways, change your behavior today!  Your team will thank you. Remember, being a Product Owner is very difficult!  Product Owner is one of the most difficult roles in Scrum.  However, it can also be one of the most rewarding roles in Scrum, since Product Owners literally see their ideas brought to life on the computer screen.  Product Owners need to be very patient, even in the face of criticism, and need to be willing to make tough decisions on priority, but then not become offended when others disagree with those decisions.  Companies should spend the time needed to find the right Product Owners for their teams.  Doing so will only help the company to write better software. Technorati Tags: Scrum,Product Owner

    Read the article

  • CRM@Oracle Series: Sales Pipeline Visibility

    - by tony.berk
    Can you see it? Should you see it all? Who else should see it? Can their manager see it? What is it? Today's slidecast discusses the popular topic of sales pipeline visibility within a CRM application and how Oracle implemented visibility in our global implementation of Siebel CRM. This post is next in the CRM@Oracle Series, which discusses Oracle's internal use of Oracle CRM products such as Siebel CRM and Oracle CRM On Demand. Oracle's requirements include a variety of different organizations, roles and responsibilities. Oracle's Applications IT CRM Systems team, responsible for deploying Siebel CRM within Oracle, implemented a number of creative solutions to address the requirements, and they are shared in the slidecast. CRM@Oracle - Sales Pipeline Visibility Click here to learn more about Oracle CRM products and here to learn about other customers using Oracle CRM products. We want to hear from you! If there is a particular CRM area or function that you'd like to hear how Oracle implemented internally, let us know and we'll get it on our list. In the meantime, enjoy today's update on the CRM@Oracle series.

    Read the article

  • Using Telerik's new LINQ implementation to create OData feeds

    This week Telerik released a new LINQ implementation that is simple to use and produces domain models very fast. Built on top of the enterprise-grade OpenAccess ORM, you can connect to any database that OpenAccess can connect to, such as: SQL Server, MySQL, Oracle, SQL Azure, VistaDB, etc. While this is a separate LINQ implementation from traditional OpenAccess Entities, you can use the visual designer without ever interacting with OpenAccess; however, you can always hook into the advanced ORM features like caching, fetch plan optimization, etc., if needed. Just to show off how easy our LINQ implementation is to use, I will walk you through building an OData feed using Data Services Update for .NET Framework 3.5 SP1. (Memo to Microsoft: P-L-E-A-S-E hire someone from Apple to name your products.) How easy is it? If you have a fast machine, are skilled with the mouse, and type fast, you can do this in about 60 seconds via three easy steps. (I promise in about 2-3 weeks that you can do this in less than 30 seconds. Stay tuned for that.)  Step 1 (15-20 seconds): Building your Domain Model In your web project in Visual Studio, right click on the project and select Add|New Item and select Telerik OpenAccess Domain Model as your item template. Give the file a meaningful name as well. Select your database type (SQL Server, SQL Azure, Oracle, MySQL, VistaDB, etc) and build the connection string. If you already have a Visual Studio connection string saved, this step is trivial.  Then select your tables, enter a name for your model and click Finish. In this case I connected to Northwind and selected only Customers, Orders, and Order Details.  I named my model NorthwindEntities and will use that in my DataService. Step 2 (20-25 seconds): Adding and Configuring your Data Service In your web project in Visual Studio, right click on the project and select Add|New Item and select ADO .NET Data Service as your item template and name your service. In the code behind for your Data Service you have to make three small changes. Add the name of your Telerik Domain Model (entered in Step 1) as the DataService name (shown on line 6 below as NorthwindEntities) and uncomment line 11 and add a * to show all entities. Optionally, if you want to take advantage of the DataService 3.5 updates, add line 13 (and change IDataServiceConfiguration to DataServiceConfiguration in line 9.)
    1: using System.Data.Services;
    2: using System.Data.Services.Common;
    3:
    4: namespace Telerik.RLINQ.Astoria.Web
    5: {
    6:     public class NorthwindService : DataService<NorthwindEntities>
    7:     {
    8:         // change the IDataServiceConfiguration to DataServiceConfiguration
    9:         public static void InitializeService(DataServiceConfiguration config)
    10:         {
    11:             config.SetEntitySetAccessRule("*", EntitySetRights.All);
    12:             // take advantage of the "Astoria 3.5 Update" features
    13:             config.DataServiceBehavior.MaxProtocolVersion = DataServiceProtocolVersion.V2;
    14:         }
    15:     }
    16: }

    Step 3 (~30 seconds): Adding the DataServiceKeys You now have to tell your data service what the primary keys of each entity are. To do this you have to create a new code file and create a few partial classes. If you type fast, use copy and paste from your first entity, and use a refactoring productivity tool, you can add these 6-8 lines of code or so in about 30 seconds. This is the most tedious step, but don't worry, I've bribed some of the developers and our next update will eliminate this step completely. Just create a partial class for each entity you have mapped and add the attribute [DataServiceKey] on top of it along with the key's field name. If you have any complex properties, you will need to make them a primitive type, as I do in line 15. Create this as a separate file, and don't manipulate the generated data access classes in case you want to regenerate them again later (even though that would be much faster.)

    1: using System.Data.Services.Common;
    2:
    3: namespace Telerik.RLINQ.Astoria.Web
    4: {
    5:     [DataServiceKey("CustomerID")]
    6:     public partial class Customer
    7:     {
    8:     }
    9:
    10:     [DataServiceKey("OrderID")]
    11:     public partial class Order
    12:     {
    13:     }
    14:
    15:     [DataServiceKey(new string[] { "OrderID", "ProductID" })]
    16:     public partial class OrderDetail
    17:     {
    18:     }
    19:
    20: }

    Done! Time to run the service. Select the svc file, right click, and say View in Browser. You will see your OData service and can interact with it in the browser. Now that you have an OData service set up, you can consume it in one of the many ways that OData is consumed: using LINQ, the Silverlight OData client, Excel PowerPivot, PHP, etc. Happy Data Servicing!
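    And if you want to hit the new feed from code rather than the browser, here is a minimal consumption sketch using the ADO.NET Data Services client (DataServiceContext); the service URL is an assumption based on the walkthrough above, and Customer is the partial class generated in Step 1:

    using System;
    using System.Data.Services.Client;

    class ConsumeFeed
    {
        static void Main()
        {
            // Assumed local address of the NorthwindService.svc created above.
            DataServiceContext ctx =
                new DataServiceContext(new Uri("http://localhost/NorthwindService.svc"));

            // Query the Customers entity set exposed by the service.
            foreach (Customer c in ctx.Execute<Customer>(new Uri("Customers", UriKind.Relative)))
            {
                Console.WriteLine(c.CustomerID);
            }
        }
    }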
    Technorati Tags: Telerik,Astoria,Data Services

    Read the article

  • Do’s and Don’ts Building SharePoint Applications

    - by Bil Simser
    SharePoint is a great platform for building quick LOB applications. Simple things from employee time trackers to server and software inventory to full blown Help Desks can be crafted up using SharePoint from just customizing Lists. No programming necessary. However there are a few tricks I’ve painfully learned over the years that you can use for your own solutions.

    DO

    What’s In A Name? When you create a new list, column, or view you’ll commonly name it something like “Expense Reports”. However this has the ugly effect of creating a url to the list as “Expense%20Reports”. Or worse, an internal field name of “Expense_x0020_Reports” which is not only cryptic but hard to remember when you’re trying to find the column by internal name. While “Expense Reports 2011” is user friendly, “ExpenseReports2011” is not (unless you’re a programmer). So that’s not the solution. Well, not entirely. Instead when you create your column or list or view use the scrunched up name (I can’t think of the technical term for it right now) of “ExpenseReports2011”, “WomenAtTheOfficeThatAreMen” or “KoalaMeatIsGoodWhenBroiled”. After you’ve created it, go back and change the name to the more friendly “Silly Expense Reports That Nobody Reads”. The original internal name will be the url and code friendly one without spaces while the one used on data entry forms and view headers will be the human version. (There’s a scripted version of this trick after this post.)

    Smart Columns When building a view include columns that make sense. By default when you add a column the “Add to default view” is checked. Resist the urge to be lazy and leave it checked. Uncheck that puppy and decide consciously what columns should be included in the view. Pick columns that make sense to what the user is trying to do. This means you have to talk to the user. Yes, I know. That can be trying at times and even painful. Go ahead, talk to them. You might learn something. Find out what’s important to them and why. If they’re doing something repetitively as part of their job, try to make their life easier by including what’s most important to them. Do they really need to see the Created *and* Modified date of a document or do they just need the title and author? You’ll only find out after talking to them (or getting them drunk in a bar and leaving them in the back alley handcuffed to a garbage bin, don’t ask).

    Gotta Keep it Separated Hey, views are there for a reason. Use them. While “All Items” is a fine way to present a list of well, all items, it’s hardly sufficient to present a list of servers built before the Y2K bug hit. You’ll be scrolling the list for hours finally arriving at Page 387 of 12,591 and cursing that SharePoint guy for convincing you that putting your hardware into a list would be of any use to anyone. Next to collecting the data, presenting it is just as important. Views are often overlooked and many times ignored or misused. They’re the way you can slice and dice the data up so that you’re not trying to consume 3,000 years of human evolution on a single web page. Remember views can be filtered so feel free to create a view for each status or one for each operating system or one for each species of Information Worker you might be putting in that list or document library. Not only will it reduce the number of items someone sees at one time, it’ll also make the information that much more relevant. Also remember that each view is a separate page. Use it in navigation by creating a menu on the Quick Launch to each view. The Views menu isn’t overly discoverable, and if you violate the rule of columns (see Horizontal Scrolling below) the view menu doesn’t even show up until you shuffle the scroll bar to the left. Navigation links, big giant buttons, a screaming flashing “CLICK ME NOW” will help your users find their way.

    Sort It! Views are great so we’re building nice, rich views for the user. Awesomesauce. However sort is not very discoverable by the user. For example when you’re looking at a view how do you know if it’s ascending or descending and what is it sorted on? Maybe it’s sorted using two fields so what’s that all about? Help your users by letting them know the information they’re looking at is sorted. Maybe you name the view something appropriate like “Bogus Expense Claims Sorted By Deadbeats”. If you use the naming strategy just make sure you keep the name consistent with the description. In the previous example there had better be a Deadbeat column so I can see the sort in action. Having a “Loser” column, while equally correct, is a little obtuse to the average Information Worker. Remember, they usually don’t use acronyms and even if they knew how to, it’s not immediately obvious to them that’s what you’re trying to convey. Another option is to simply drop a Content Editor Web Part above the list and explain exactly what view they’re looking at. Each view is its own page so one CEWP won’t be used across the board. Be descriptive in what the user is seeing but try to keep it brief. Dumping the first chapter of I, Claudius might be informative but can gobble up screen real estate and miss the point of having the list.

    DO NOT

    Useless Attachments The attachments column is, in a word, useless. For the most part. Sure it indicates there’s an attachment on the list item but in the grand scheme of things that’s not overly informative. Maybe it is and by all means, if it makes sense to you include it. Colour it. Make it shine and stand out like the Return of Clippy on every SharePoint list. Without it being functional it can be boring. EndUserSharePoint.com has an article to make the son of Clippy that much more useful so feel free to head over and check out this blog post by Paul Grenier on the task (Warning code ahead! Danger Will Robinson!) In any case, I would suggest you remove it from your views. Again if it’s important then include it but consider the jQuery solution above to make it functional. It’s added by default to views and one of the things that people forget to clean up.

    Horizontal Scrolling Screen real estate is premium so building a list that contains 8,000 columns and stretches horizontally across 15 screens probably isn’t the most user friendly experience. Most users can’t figure out how to scroll vertically let alone horizontally so don’t make it even more confusing for them. Take the Steve Krug approach in your view designs and try not to make the user think. Again views are your friend. Consider splitting up the data into views where one view contains 10 columns and another view contains the other 10. Okay, maybe your information doesn’t work that way but humans can only process 7 pieces of data at a time, 10 at most (then their heads explode and you don’t want to clean that mess up, especially on a Friday night before the big dance). It drives me batshit crazy when I see a view with 80 columns of data. I often ask the user “So what do you do with all this information”. The response is usually “With this data [the first 10 columns] I decide if I’m going to fire everyone, and with this data [the next 10 columns] I decide if I’m going to set the building on fire and collect the insurance”. It’s at that point I show them how to create two new views “People Who Are About To Get The Axe” and “Beach Time For The Executives”. Again, talk to your users and try to reason with them on cutting down the number of columns they see at once.

    Vertical Scrolling Another big faux pas I find is the use of multi-line comment fields in views. It’s not so bad when you have a statement like this in your view: “I really like, oh my god, thought I was going to scream when I saw this turtle then I decided what I was going to have for dinner and frankly I hate having to work late so when I was talking to the customer I thought, oh my god, what if the customer has turtles and then it appeared to me that I really was hungry so I'm going to have lunch now.” It’s fine if that’s the only column along with two or three others, but once you slap those 20 columns of data into the list, the comment field wraps and forms a new multi-page novel that takes up your entire screen. Do everyone a favour and just avoid adding the column to views. Train the user to just click through to the item if they need to see the contents.

    Duplicate Information Duplication is never good. Views are great as you can group data together. For example create a view of project status reports grouped by author. Then you can see what project manager is being a dip and not submitting their report. However if you group by author do you really need the Created By field as well in the view? Or if the view is grouped by Project then Author do you need both? Horizontal real estate is always at a premium so try not to clutter up the view with duplicate data like this. Oh yeah, if you’re scratching your head saying “But Bil, if I don’t include the Project name in the view and I have a lot of items then how do I know which one I’m looking at”. That’s a hint that your grouping is too vague or you have too much data in the view based on that criteria. Filter it down a notch, create some views, and try to keep the group down to a single screen where you can see the group header at the top of the page. Again it’s just managing the information you have.

    Redundant, See Redundant This partially relates to duplicate information and smart columns but basically remember to not include the obvious in a view. Remember, don’t make me think. If you’ve gone to the trouble (and it was a lot of trouble wasn’t it?) to create separate views of your data by creating a “September Zombie Brain Sales”, “October Zombie Brain Sales”, etc. then please for the love of all that is holy do not include the Month and Product columns in your view. Similarly if you create a “My” view of anything (“My Favourite Brands of Spandex”, “My Co-Workers I Find The Urge To Disinfect”) then again, do not include the owner or author field (or whatever field you use to identify “My”). That’s just silly. Hope that helps! Happy customizing!
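    By the way, if you’d rather script the naming trick from “What’s In A Name?” than click through the UI, here’s a rough sketch using the server object model (the URL, list and column names are all made up for illustration, and this has to run on the SharePoint box itself):

    using System;
    using Microsoft.SharePoint;

    class NamingFixup
    {
        static void Main()
        {
            // Hypothetical site URL.
            using (SPSite site = new SPSite("http://intranet/sites/finance"))
            using (SPWeb web = site.OpenWeb())
            {
                // The list was created as "ExpenseReports2011", so its URL stays clean.
                SPList list = web.Lists["ExpenseReports2011"];

                // Add a column with the scrunched-up name; with no spaces in it,
                // the internal name comes out clean too.
                string internalName = list.Fields.Add("DeadbeatRating", SPFieldType.Number, false);

                // Now switch the display names to the human-friendly versions.
                SPField field = list.Fields.GetFieldByInternalName(internalName);
                field.Title = "Deadbeat Rating (1 to 10)";
                field.Update();

                list.Title = "Silly Expense Reports That Nobody Reads";
                list.Update();
            }
        }
    }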

    Read the article

  • WWDC and Tech Ed: A Tale of Two DevCons

    - by andrewbrust
    Next week marks the first full week of June.  Summer will feel in full swing and it will be a pretty big season for technology.  In seeming acknowledgement of that very fact, both Apple and Microsoft will be holding large developers conferences starting Monday.  Apple will hold its annual Worldwide Developers Conference (WWDC) in lovely San Francisco and Microsoft will hold its Tech Ed conference in muggy, oil-laden yet soulful New Orleans.  A brief survey of each show reveals much about the differences in each company’s offerings, strategy, and approach to customers and partners. In the interest of full disclosure, I must explain that I will be speaking at Microsoft’s Tech Ed show, and have done so, on and off, since 2003.  I have never been to an Apple conference and, as readers of this blog may know, I acquired my first ever Apple product 2 months ago when I bought an iPad on the day of that product’s launch.  I think I have keen insights into Microsoft’s conference.  My ability to comment on Apple’s event ranges somewhere between backseat driver and naive observer.  Just so you know. Although both shows cater to their respective company’s developers, there are a number of differences in the events’ purposes and content approaches.  First off, let’s consider each show as a news and PR vehicle.  WWDC will feature Steve Jobs’ keynote address and most likely will be where Apple officially reveals details of its 4th-generation iPhone. Jobs will likely also provide deep background information on the corresponding iPhone OS release.  These presumed announcements will make the show a magnet for the tech press and tech blogger elite.  Apple’s customers will be interested too, especially since the iPhone OS release will likely be made available to owners of existing iPhone, iPod Touch and iPad devices. Tech Ed, on the other hand, may not be especially newsworthy at all.  The keynote address will be given by Bob Muglia, who is President of the company’s Server and Tools Division, and he’ll likely be reviewing things more than previewing them. That’s because the company has, in the last 6-8 months, already released new versions of a majority of its products, including Windows, Office, SharePoint, SQL Server, Exchange, its Azure cloud platform, its .NET software development layer, its Silverlight Rich Internet Application (RIA) technology and its Visual Studio developer suite.  Redmond’s product pipeline has functioned more like a firehose of late, and the company has a ton of work to do to get developers up to speed on everything that’s new. I know I keep saying “developers,” but in Tech Ed’s case, that’s not really accurate.  In North America, Tech Ed caters to both developers and IT pros (i.e. technologists who work with physical IT infrastructure, as well as security and administration of the server software that runs on it).  This pairing has, since its inception, struck some as anomalous and others, including many exhibitors, as very smart. Certainly, it means Tech Ed ends up being a confab for virtually all professionals in Microsoft’s ecosystem.  And this year, Microsoft’s Business Intelligence (BI) conference will be co-located with Tech Ed, further enhancing that fusion effect. Clearly then, Microsoft’s show will focus on education, as its name assures us.  Apple’s will serve as both a press event and an opportunity to get its own App Store developer channel synced up with its newest technology advances.  
    For example, we already know that iPhone OS 4.0 will provide for a limited multitasking capability; that will only work well if people know how to code to it in a capable way.  Apple also told us its iAd advertising platform will be part of the new OS, and Steve Jobs insists that’s to provide a revenue opportunity for developers.  This too, then, needs to be explicated and soaked up by the faithful. A look at each show’s breakout session lineup provides some interesting takeaways.  WWDC will have very few Mac-specific sessions on offer, and virtually no sessions that are IT- or “Enterprise-“ related.  It’s all about the phone, music players and tablets.  However, WWDC will have plenty of low-level, hardcore tech coverage of such things as Advanced Memory Analysis and Creating Secure Applications, as well as lots of rich media-related content like Core Animation and Game Design and Development.  Beyond Apple’s proprietary platform, WWDC will also feature an array of sessions on HTML 5 and other Web standards.  In all, WWDC offers over 100 technical sessions and hands-on labs. What about Tech Ed’s editorial content?  Like the target audience, it really runs the gamut.  The show has 21 tracks (versus WWDC’s 5) and more than 745 “learning opportunities” which include breakout sessions, demo stations, hands-on labs and Birds of a Feather discussion sessions.  Topics range from Architecture talks like Patterns of Parallel Programming to cloud computing talks like Building High Capacity Compute Applications with Windows Azure to IT-focused topics like Virtualization of Microsoft SharePoint 2010 Farm Architecture.  I also count 19 sessions on Windows Phone 7.  Unfortunately, with regard to Web standards and HTML 5, only a few sessions are offered, all of them specific to Internet Explorer. All-in-all, Apple’s show looks more exciting and “sexier” than Tech Ed. Microsoft’s show seems a lot more enterprise-focused than WWDC. This is, of course, well in sync with each company’s approach and products.  Microsoft’s content is much wider ranging and bests WWDC in sheer volume of sessions and labs.  I suppose some might argue that less is more; others that Apple’s consumer-focused offerings simply don’t provide for the same depth of coverage to a business audience.  Microsoft has a serious focus on the cloud and a paucity of coverage on client-side Web standards; Apple has virtually no cloud offering at all.  Again, this reflects each tech titan’s go-to-market strategy. My own take is that employees of each company should attend the other’s event.  The amount of mutual exclusivity in content may make sense in terms of corporate philosophy, but the reality is that each company could stand to diversify into the other’s territory, at least somewhat. My own talk at Tech Ed will focus on competitive analysis around Microsoft’s BI products.  Apple does not today figure into that analysis. Maybe one day it will.

    Read the article

  • Enabling 32-Bit Applications on IIS7 (also affects 32-bit oledb or odbc drivers) [Solved]

    - by Humprey Cogay, C|EH
    We just bought a new web server. After installing Windows 2008 R2 (which is a 64-bit OS) with IIS7, SQL Server Standard 2008 R2, and IBM Client Access for V5R3 with its .NET data providers, I tried deploying our new project, which is fully functional on an IIS6-based web server, and encountered this error: The 'IBMDA400.DataSource.1' provider is not registered on the local machine. To remove the doubt that I still lacked some software prerequisites or had version conflicts, since I had encountered some errors while installing IBM Client Access, I created a connection tester: a Windows app that accepts a connection string as a parameter and verifies whether that connection string is valid. After entering the proper connection string and hitting the button, the test was successful. So now I had trimmed my suspects down to my web app and IIS7. After Googling around I found this post by Rakki Muthukumar (Microsoft Developer Support Engineer for ASP.NET and IIS7): http://blogs.msdn.com/b/rakkimk/archive/2007/11/03/iis7-running-32-bit-and-64-bit-asp-net-versions-at-the-same-time-on-different-worker-processes.aspx So I tried scouting around IIS7's management console and found this little tweak, "Enable 32-Bit Applications", under the Application Pool that my app is a member of. After changing this parameter to TRUE, Yahoo (although I'm a Google kind of person), the web app works .......
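    If you prefer the command line, the same switch can also be flipped with AppCmd; the pool name below is just a placeholder, so point it at whichever Application Pool your app actually uses:

    %windir%\system32\inetsrv\appcmd set apppool "DefaultAppPool" /enable32BitAppOnWin64:true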

    Read the article

  • BizTalk 2009 - Naming Guidelines

    - by StuartBrierley
    The following is effectively a repost of the BizTalk 2004 naming guidelines that I have previously detailed.  I have posted these again for completeness under BizTalk 2009 and to allow an element of separation in case I find some reason to amend them for BizTalk 2009. These guidelines should be universal across any version of BizTalk you may wish to apply them to.

    General Rules
    All names should use Pascal casing.

    Project Namespaces
    For message schemas: [CompanyName].XML.Schemas.[FunctionalName]*. Examples: ABC.XML.Schemas.Underwriting, DEF.XML.Schemas.MarshmellowTradingExchange.
    For web services: [CompanyName].Web.Services.[FunctionalName]. Example: ABC.Web.Services.OrderJellyBeans.
    For the main BizTalk projects: [CompanyName].BizTalk.[AssemblyType].[FunctionalName]*. Examples: ABC.BizTalk.Mappings.Underwriting, ABC.BizTalk.Orchestrations.Underwriting.
    * Denotes potential for multiple levels of functional name, such as Underwriting.Dictionary.Valuation or Mappings.Underwriting.Valuations.

    Assemblies
    BizTalk assembly names should match the associated project namespace, such as ABC.BizTalk.Mappings.Underwriting. This pertains to the formal assembly name and the DLL name. The solution name should take the name of the main project within the solution, and therefore also the namespace for that project. Although long names such as this can be unwieldy to work with, the benefits of having the full scope available when the assemblies are installed on the target server are generally judged to outweigh this inconvenience.

    Messaging Artifacts
    - Schema: <DescriptiveName>.xsd (the .NET type name should match, without the file extension; the .NET namespace will likely match the assembly name). Examples: PurchaseOrderAcknowledge_FF.xsd or FNMA100330_FF.xsd.
    - Property Schema: <DescriptiveName>.xsd (should be named to reflect possible common usage across multiple schemas). Examples: IspecMessagePropertySchema.xsd, UnderwritingOrchestrationKeys.xsd.
    - Map: <SourceSchema>2<DestinationSchema>.btm (exceptions may be made where the source and destination schemas share the majority of the name, such as in mainframe web service maps). Examples: InstructionResponse2CustomEmailRequest.btm (exception example), AccountCustomerAddressSummaryRequest2MainframeRequest.btm.
    - Orchestration: <DescriptiveName>.odx. Examples: GetValuationReports.odx, SendMTEDecisionResponse.odx.
    - Send/Receive Pipeline: <DescriptiveName>.btp. Examples: ValidatingXMLReceivePipeline.btp, FlatFileAssembler.btp.
    - Receive Port: a plainly worded phrase that will clearly explain the function. Examples: FraudPreventionServices, LetterProcessing.
    - Receive Location: a plainly worded phrase that will clearly explain the function (do we want to include the transport type here?). Example: Arrears Web Service.
    - Send Port Group: a plainly worded phrase that will clearly explain the function. Example: Customer Updates.
    - Send Port: a plainly worded phrase that will clearly explain the function. Examples: ABCProductUpdater, LogLendingPolicyOutput.
    - Parties: a meaningful name for a trading partner. If dealing with multiple entities within a trading partner organization, the organization name could be used as a prefix.
    - Roles: a meaningful name for the role that a trading partner plays.

    Orchestration Workflow Shapes
    - Scope: <DescriptionOfContainedWork> or <DescOfContainedWork><TxType> (including info about the transaction type may be appropriate where it adds significant documentation value to the diagram). Example: HandleReportResponse.
    - Receive: Receive<MessageName> (typically, MessageName will be the same as the name of the message variable that is being received "into"). Example: ReceiveReportResponse.
    - Send: Send<MessageName> (typically, MessageName will be the same as the name of the message variable that is being sent). Example: SendValuationDetailsRequest.
    - Expression: <DescriptionOfEffect> (expression shapes should be named to describe the net effect of the expression, similar to naming a method; the exception is the case where the expression interacts with an external .NET component to perform a function that overlaps with existing BizTalk functionality, in which case use the closest BizTalk shape name). Example: CreatePrintXML.
    - Decide: <DescriptionOfDecision> (a description of what will be decided in the "if" branch). Examples: Report Type?, Perform MF Save?
    - If-Branch: <DescriptionOfDecision> (a, potentially abbreviated, description of what is being decided). Example: Mortgage Valuation Yes.
    - Else-Branch: Else (Else-branch shapes should always be named "Else").
    - Construct Message (Assign): Create<Message> for the construct shape and <ExpressionDescription> for the contained assignment. If a construct shape contains a message assignment, prefix it with "Create" followed by an abbreviated name of the message being assigned; name the contained assignment shape to describe the expression. Example: CreateReportDataMV, which contains the expression ExtractReportData.
    - Construct Message (Transform): Create<Message> for the construct shape and <SourceSchema>2<DestSchema> for the transform; the contained transform shape should generally be named the same as the called map. Example: CreateReportDataMV, which contains the transform ReportDataMV2ReportDataMV.
    - Construct Message (containing multiple shapes): if a construct message shape uses multiple assignments or transforms, name the overall shape to communicate the net effect, using no prefix.
    - Call/Start Orchestration: Call<OrchestrationName> or Start<OrchestrationName>.
    - Throw: Throw<ExceptionType> (the corresponding variable name for the exception type should often be the same name as the exception type, only camel-cased). Example: ThrowRuleException, which references the "ruleException" variable.
    - Parallel: <DescriptionOfParallelWork> (named by a description of what work will be done in parallel).
    - Delay: <DescriptionOfWhatWaitingFor> (named by a description of what is being waited for). Example: POAcknowledgeTimeout.
    - Listen: <DescriptionOfOutcomes> (named by a description that captures, to the degree possible, all the branches of the Listen shape). Examples: POAckOrTimeout, FirstShippingBid.
    - Loop: <DescriptionOfLoop> (a, potentially abbreviated, description of what the loop is). Examples: ForEachValuationReport, WhileErrorFlagTrue.
    - Role Link: see "Roles" in the messaging naming conventions above.
    - Suspend: <ReasonDescription> (describe what action an administrator must take to resume the orchestration; more detail can be passed to the error property, and should include what should be done by the administrator before resuming the orchestration). Example: ReEstablishCreditLink.
    - Terminate: <ReasonDescription> (describe why the orchestration terminated; more detail can be passed to the error property). Example: TimeoutsExpired.
    - Call Rules: Call<PolicyName> (the policy name may need to be abbreviated). Example: CallLendingPolicy.
    - Compensate: Compensate or Compensate<TxName> (if the shape compensates nested transactions, names should be suffixed with the name of the nested transaction; otherwise it should simply be Compensate). Example: CompensateTransferFunds.

    Orchestration Types
    - Multi-Part Message Type: <LogicalDocumentType>. Multi-part types encapsulate multiple parts. The WSDL spec indicates "parts are a flexible mechanism for describing the logical abstract content of a message." The name of the multi-part type should correspond to the "logical" document type, i.e. what the sum of the parts describes. Example: InvoiceReceipt (which might encapsulate an invoice acknowledgement and a payment voucher).
    - Multi-Part Message Part: <SchemaNameOfPart> (should, most often, be named simply for the schema or simple type associated with the part). Example: InvoiceHeader.
    - Message: <SchemaName> or <MultiPartMessageTypeName> (named based on the corresponding schema type or multi-part message type; if there is more than one variable of a type, name for its use within the orchestration). Examples: ReportDataMV, UpdatedReportDataMV.
    - Variable: <DescriptiveName>. Examples: TargetFilePath, StringProcessor.
    - Port Type: <FunctionDescription>PortType (should be named to suggest the nature of an endpoint, with Pascal casing and suffixed with "PortType"; if there will be more than one port for a port type, the port type should be named according to the abstract service supplied. The WSDL spec indicates port types are "a named set of abstract operations and the abstract messages involved" that also encapsulates the message pattern, i.e. one-way, request-response, or solicit-response, that all operations on the port type adhere to). Examples: ReceiveReportResponsePortType or CallEAEPortType (this is a two-way port, so Receive or Send alone would not be appropriate; it could have been ProcessEAERequestPortType, etc.).
    - Port: <FunctionDescription>Port (should be named to suggest a grouping of functionality, with Pascal casing and suffixed with "Port"). Examples: ReceiveReportResponsePort, CallEAEPort.
    - Correlation Type: <DescriptiveName> (should be named based on the logical name of what is being used to correlate). Example: PurchaseOrderNumber.
    - Correlation Set: <DescriptiveName> (should be named based on the corresponding correlation type; if there is more than one, it should be named to reflect its specific purpose within the orchestration). Example: PurchaseOrderNumber.
    - Orchestration Parameter: <DescriptiveName> (should be named to match the caller's names for the corresponding variables where appropriate).

    Read the article

  • Microsoft TechEd 2010 - Day 3 @ Bangalore

    - by sathya
    Microsoft TechEd 2010 - Day 3 @ Bangalore

    Sorry for my delayed post on day 3: I had to travel from Bangalore to Chennai, so I couldn't write for the past two days. On day 3, as usual, we had a lot of simultaneous tracks of sessions. This day I chose the "Your Data, Our Platform" track. It had sessions on the following 5 topics:

    1. Developing Data-tier Applications in Visual Studio 2010 - by Sanjay Nagamangalam
    2. SQL Server Query Optimization, Execution and Debugging Query Performance - by Vinod Kumar M
    3. SQL Server Utility - It's about more than 1 SQL Server - by Vinod Kumar Jagannathan
    4. Data Recovery / Consistency with CheckDB - by Vinod Kumar M
    5. Developing with SQL Server Spatial and Deep dive into Spatial Indexing - by Pinal Dave

    Developing Data-tier Applications in Visual Studio 2010 - by Sanjay Nagamangalam
    This was one of the superb sessions I have attended. He explained all the concepts in detail with a demo. The important thing is that there is something called a Data-tier Application Project, newly introduced in VS2010, with which we can manage all our data along with our application inside VS itself. We can create DBs, tables, procs, views etc. right there, and once we deploy, it creates a compressed file called .dacpac which stores all the changes in table schema, created procs, etc. in that single file, which reduces our (developers') effort in preparing the deployment scripts and giving them to the DBA. It also has some policy configurations which can be managed easily by checking some rules, like in Outlook. For example: IF the SQL Server version > 10 then deploy, else don't. This rule specifies that even if we try to deploy on a SQL Server DB with a version less than 10, it will not do it. And if we deploy some .dacpac to the SQL Server production DB with the "upgrade DB with this dacpac" option, once everything completes successfully it will say success, else it rolls back to the prior version. Even if it gets deployed successfully and later at some point you wish to revert back to the prior version, you can go ahead and delete the existing dacpac version so that it reverts to the older version of the DB changes. And for the good questions that were asked in the session, T-shirts were given.

    SQL Server Query Optimization, Execution and Debugging Query Performance - by Vinod Kumar M
    This one too was one of the best sessions. The speaker, Vinod, explained everything very clearly. This was a really useful session and, believe it or not, to my knowledge, in the whole 3 days of TechEd, except for the keynote, this was the only session where the seats were full (house FULL). People were even standing outside to attend this session. Such a great one it was. The speaker did a deep dive into the query plan section and showed what actually causes the problem. It's all about understanding how SQL Server executes queries: we think in one way, and SQL Server never executes it in that way. We need to understand that first. He also mentioned that there might be two plans generated for a single query at a point of time because of parallel processors in the system. The key is in every query: there is something called Estimated Row Count and Actual Row Count in the query plan. If the estimated row count tallies with the actual row count, your performance will be awesome. He shared some tweaks to achieve the same. After this, as usual, we had lunch.

    SQL Server Utility - It's about more than 1 SQL Server - by Vinod Kumar Jagannathan
    This was more of a DBA's session. I am really sorry, I was totally blank and not interested in attending this session, so I walked out to attend "Migrating to the Cloud" by Harish Ranganathan (my favorite speaker), but unfortunately that was some other person's session. There the speaker was talking about how to configure connection strings in such a way that we can connect to the SQL Azure platform from our VS, and he also showed us how to deploy the same to Windows Azure. In between there were a lot of technical problems, like the laptop hanging, users getting locked out, and him switching between systems; also, I came in halfway through, so I wasn't able to listen to it fully. In between, since I got an MCTS certification, they gave me a T-shirt with the lines "I am Certified. Are you?" and asked me to wear it. If we wear that we might get spotted and they would give us some goodies. So on the 3rd day I was wearing that T-shirt. I got spotted by Tarun, the person who was coordinating things about the certification; he was accompanied by a cameraman, and they interviewed me about the certification, and I was shown live at TechEd, seen by 60,000 live viewers. I was really happy about that.

    Data Recovery / Consistency with CheckDB - by Vinod Kumar M
    This was one of the best sessions in TechEd too. This guy is really amazing. In front of us he crashed a DB and showed how to recover it in 6 different ways for different kinds of failures. He showed: different types of error messages like 823, 824, 825; msdb..suspect_pages; and DBCC CheckDB (with the different parameters to it). I am really waiting for his session to get uploaded to the TechEd website. Here is his contact info if you wish to connect with him: Twitter: @vinodk_sql; Website: www.ExtremeExperts.com; Blog: http://blogs.sqlxml.org/vinodkumar

    Developing with SQL Server Spatial and Deep dive into Spatial Indexing - by Pinal Dave
    Pinal Dave is a king in SQL; he is a SQL MVP and the owner of SQLAuthority.com. He took the session on spatial databases from the start and showed the different types of spatial data: geometric and geographic. Geometric: x and y axes on a planar surface. Geographic: a spherical surface with 360° as the maximum, which is used to represent geographic points on the earth and makes it easy to draw maps of different kinds. He had a lot of obstacles during his session, like rain coming inside the hall, mic wires bursting due to the rain, and videos going off on the display screens. In spite of that, he asked the audience to come to the front rows and managed to deliver a good session without PPTs, and finally, when the displays came back on, he showed demos of what he had explained orally. That was really a fun-filled, informative session. He gave books to the people who asked good questions and answered his questions well, and I got one too (it was a book on Data Mining from Wrox Publishers).

    And finally, after all these things, there was a keynote session to close TechEd, and we all assembled in a big hall where Mr. Ashok Soota, a man of around 70 and co-founder of Mindtree, was called to give a lecture on his successes. He talked about his past, which companies he moved between and for what reasons, his successes and his failures, what he learned from those past failures, and his successes and failures in partnerships. And there were some questions for him like: What is your suggestion for young entrepreneurs? How did you learn from past failures? How do you keep repeating your success? What is your suggestion on partnerships? How do you choose partnerships? And they said at 7.30 PM there would be a party night, but unfortunately I was not able to attend because I had to catch my train, and before that I had to pack, so I left at 7 itself. That's it about TechEd!!! Stay tuned for further technology updates.

    Read the article

  • PeopleSoft 9.2 Financial Management Training – Now Available

    - by Di Seghposs
    A guest post from Oracle University....

    Whether you’re part of a project team implementing PeopleSoft 9.2 Financials for your company or a partner implementing for your customer, you should attend some of the new training courses.  Everyone knows project team training is critical at the start of a new implementation, including configuration training on the core application modules being implemented. Oracle offers these courses to help customers and partners understand the functionality most relevant to complete end-to-end business processes, to identify any additional development work that may be necessary to customize applications, and to ensure integration between different modules within the overall business process. Training will provide you with the skills and knowledge needed to ensure a smooth, rapid and successful implementation of your PeopleSoft applications in support of your organization’s financial management processes - including step-by-step instruction for implementing, using, and maintaining your applications. It will also help you understand the application and configuration options to make the right implementation decisions. Courses vary based on your role in the implementation and on-going use of the application, and should be a part of every implementation plan, whether it is for an upgrade or a new rollout.

    Here’s some of the roles that should consider training:
    · Configuration or functional implementers
    · Implementation Consultants (Oracle partners)
    · Super Users
    · Business Analysts
    · Financial Reporting Specialists
    · Administrators

    PeopleSoft Financial Management Courses:

    New Features Course:
    · PeopleSoft Financial Solutions Rel 9.2 New Features

    Functional Training:
    · PeopleSoft General Ledger Rel 9.2
    · PeopleSoft Payables Rel 9.2
    · PeopleSoft Receivables Rel 9.2
    · PeopleSoft Asset Management Rel 9.2
    · Expenses Rel 9.2
    · PeopleSoft Project Costing Rel 9.2
    · PeopleSoft Billing Rel 9.2
    · PeopleSoft PS / nVision for General Ledger Rel 9.2

    Accelerated Courses (include content from two courses for more experienced team members):
    · PeopleSoft General Ledger Foundation Accelerated Rel 9.2
    · PeopleSoft Billing / Receivables Accelerated Rel 9.2
    · PeopleSoft Purchasing / Payable Accelerated Rel 9.2

    View PeopleSoft Training Overview Video

    Read the article

  • Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)

    - by Bakhtiyor
I have a mail server configured using dovecot+postfix+mysql, and it was running fine on the server (Ubuntu Server). But during the last week it stopped working correctly: it doesn't send email. When I telnet localhost smtp I connect successfully, but when I type mail from:<[email protected]> and hit Enter, it just hangs; nothing happens. Having reviewed the /var/log/mail.log file, I've found that the problem is almost certainly (99%) in Postfix when it tries to connect to the MySQL server. In the log excerpt below you can see that it says Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2).
Nov 14 21:54:36 ns1 dovecot: dovecot: Killed with signal 15 (by pid=7731 uid=0 code=kill)
Nov 14 21:54:36 ns1 dovecot: Dovecot v1.2.9 starting up (core dumps disabled)
Nov 14 21:54:36 ns1 dovecot: auth-worker(default): mysql: Connected to localhost (mailserver)
Nov 14 21:54:44 ns1 postfix/postfix-script[7753]: refreshing the Postfix mail system
Nov 14 21:54:44 ns1 postfix/master[1670]: reload -- version 2.7.0, configuration /etc/postfix
Nov 14 21:54:52 ns1 postfix/trivial-rewrite[7759]: warning: connect to mysql server localhost: Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
Nov 14 21:54:52 ns1 postfix/trivial-rewrite[7759]: fatal: mysql:/etc/postfix/mysql-virtual-alias-maps.cf(0,lock|fold_fix): table lookup problem
Nov 14 21:54:53 ns1 postfix/master[1670]: warning: process /usr/lib/postfix/trivial-rewrite pid 7759 exit status 1
Nov 14 21:54:53 ns1 postfix/cleanup[7397]: warning: problem talking to service rewrite: Connection reset by peer
Nov 14 21:54:53 ns1 postfix/master[1670]: warning: /usr/lib/postfix/trivial-rewrite: bad command startup -- throttling
Nov 14 21:54:53 ns1 postfix/smtpd[7071]: warning: problem talking to service rewrite: Success
I tried netstat -ln | grep mysql and it returns unix 2 [ ACC ] STREAM LISTENING 5817 /var/run/mysqld/mysqld.sock. The content of the /etc/postfix/mysql-virtual-alias-maps.cf file is:
user = stevejobs
password = apple
hosts = localhost
dbname = mailserver
query = SELECT destination FROM virtual_aliases WHERE source='%s'
Here I tried to change hosts = 127.0.0.1, but then it says warning: connect to mysql server 127.0.0.1: Can't connect to MySQL server on '127.0.0.1' (110). So I am lost and don't know what else to change in order to solve the problem. Any help would be highly appreciated. Thank you.
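A likely culprit, offered as a guess to check rather than a confirmed diagnosis: on Ubuntu, Postfix's daemons run chrooted under /var/spool/postfix, so the unix socket at /var/run/mysqld/mysqld.sock is invisible to them when hosts = localhost, and the (110) timeout on 127.0.0.1 suggests MySQL is not listening on TCP at all (the netstat output above shows only the unix socket). A minimal sketch of the usual fix, assuming a standard Debian/Ubuntu layout:

# /etc/mysql/my.cnf - have MySQL listen on loopback TCP as well as the socket
[mysqld]
bind-address = 127.0.0.1
# and make sure no 'skip-networking' line is active

# restart and confirm port 3306 is now listening
sudo service mysql restart
netstat -ln | grep 3306

# /etc/postfix/mysql-virtual-alias-maps.cf (and the other mysql-*.cf maps): use TCP instead of the socket
hosts = 127.0.0.1

# pick up the change
sudo postfix reload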

    Read the article

  • Drupal CMS most Stable for High Traffic

    - by Aditi
Drupal users have high satisfaction with Drupal compared to Joomla users, for a number of reasons. If you are thinking of choosing a high-performance platform to run your high-traffic website, a Drupal installation is your forte!
Overload Scenario
Drupal is a scalable, high-performance CMS and is stable under heavy load. If your server is pushed beyond its capacity, Drupal shuts off gracefully and doesn’t crash. As soon as the server is back within its traffic capability, Drupal handles all requests smoothly again. For example, if your dedicated server can handle a maximum of 50,000 visits a day, and on a lucky day your news creates a buzz in social media and traffic rises to 70,000 in one day, then your server is overloaded and usually crashes, at times causing permanent damage to your database. But if you use the Drupal CMS, it closes down gracefully, and as soon as traffic goes back down to within the server’s capacity, the Drupal-run site accepts all requests again.
Extensibility
Drupal users know that their add-ons integrate better with the core, and the framework makes it easier to extend the CMS’s capabilities, which keeps even a heavily extended installation quite stable, unlike Joomla, which loses its strength when plenty of plugins and heavy customizations are running. Any CMS with a large number of plugins makes content handling more complex and reduces your ability to handle high-traffic requests.
Accessibility Management or ACL
Chances are, if you run a high-traffic website, you have various users and content contributors. ACL means group roles: assigning people to the various registered user levels and allocating many kinds of privileges. The most common example is the ability to see or edit a section or selected pages. This efficient feature of Drupal makes it a class apart from other CMSs out there.

    Read the article

  • Professional Scrum Developer (.NET) Training in London

    - by Martin Hinshelwood
On the 26th to 30th July, in Microsoft’s offices in London, Adam Cogan from SSW will be presenting the first Professional Scrum Developer course in the UK. I will be teaching this course alongside Adam, and it is a fantastic experience. You are split into teams and go head-to-head to deliver units of potentially shippable work in four two-hour sprints. The Professional Scrum Developer course is the only course endorsed by both Microsoft and Ken Schwaber, and they have worked together very effectively in bringing this course to fruition. This course is the brainchild of Richard Hundhausen, a Microsoft Regional Director, and both Adam and I attended the Trainer Prep in Sydney when he was there earlier this year. He is a fantastic trainer, and no matter where you do this course you can be safe in the knowledge that he has trained and vetted all of the teachers. A tools version of Ken, if you will.
Find a course and register | Download this syllabus | Download the Scrum Guide
What is the Professional Scrum Developer course all about?
The Professional Scrum Developer course is a unique and intensive five-day experience for software developers. The course guides teams on how to turn product requirements into potentially shippable increments of software using the Scrum framework, Visual Studio 2010, and modern software engineering practices. Attendees will work in self-organizing, self-managing teams using a common instance of Team Foundation Server 2010.
Who should attend this course?
This course is suitable for any member of a software development team – architect, programmer, database developer, tester, etc. Entire teams are encouraged to attend and experience the course together, but individuals are welcome too. Attendees will self-organize to form cross-functional Scrum teams. These teams require an aggregate of skills specific to the selected case study. Please see the last page of this document for specific details. Product Owners, ScrumMasters, and other stakeholders are welcome too, but keep in mind that everyone who attends will be expected to commit to work and pull their weight on a Scrum team.
What should you know by the end of the course?
Scrum will be experienced through a combination of lecture, demonstration, discussion, and hands-on exercises. Attendees will learn how to do Scrum correctly while being coached and critiqued by the instructor, in the following topic areas:
- Form effective teams
- Explore and understand legacy “Brownfield” architecture
- Define quality attributes, acceptance criteria, and “done”
- Create automated builds
- How to handle software hotfixes
- Verify that bugs are identified and eliminated
- Plan releases and sprints
- Estimate product backlog items
- Create and manage a sprint backlog
- Hold an effective sprint review
- Improve your process by using retrospectives
- Use emergent architecture to avoid technical debt
- Use Test Driven Development as a design tool
- Set up and leverage continuous integration
- Use Test Impact Analysis to decrease testing times
- Manage SQL Server development in an Agile way
- Use .NET and T-SQL refactoring effectively
- Build, deploy, and test SQL Server databases
- Create and manage test plans and cases
- Create, run, record, and play back manual tests
- Set up a branching strategy and branch code
- Write more maintainable code
- Identify and eliminate people and process dysfunctions
- Inspect and improve your team’s software development process
What does the week look like?
This course is a mix of lecture, demonstration, group discussion, simulation, and hands-on software development. The bulk of the course will be spent working as a team on a case study application, delivering increments of new functionality in mini-sprints. Here is the week at a glance: Monday morning and most of the day Friday will be spent with the computers powered off, so you can focus on sharpening your game of Scrum and avoiding the common pitfalls when implementing it.
The Sprints
Timeboxing is a critical concept in Scrum as well as in this course. We expect each team and student to understand and obey all of the timeboxes. The timebox duration will always be clearly displayed during each activity. Expect the instructor to enforce it. Each of the ½ day sprints will roughly follow this schedule (component / description / minutes):
- Instruction: presentation and demonstration of new and relevant tools & practices (60 minutes)
- Sprint planning meeting: product owner presents backlog; each team commits to delivering functionality (10 minutes)
- Sprint planning meeting: each team determines how to build the functionality (10 minutes)
- The Sprint: the team self-organizes and self-manages to complete their tasks (120 minutes)
- Sprint review meeting: each team will present their increment of functionality to the other teams (30 minutes)
- Sprint retrospective: a group retrospective meeting will be held to inspect and adapt (10 minutes)
Each team is expected to self-organize and manage their own work during the sprint. Pairing is highly encouraged. The instructor/product owner will be available if there are questions or impediments, but will be hands-off by default. You should be prepared to communicate and work with your team members in order to achieve your sprint goal. If you have development-related questions or get stuck, your partner or team should be your first level of support.
Module 1: INTRODUCTION
This module provides a chance for the attendees to get to know the instructors as well as each other. The Professional Scrum Developer program, as well as the day-by-day agenda, will be explained. Finally, the Scrum team will be selected and assembled so that the forming, storming, norming, and performing can begin. Topics: trainer and student introductions; the Professional Scrum Developer program; agenda; logistics; team formation; retrospective.
Module 2: SCRUMDAMENTALS
This module provides a level-setting understanding of the Scrum framework, including the roles, timeboxes, and artifacts. The team will then experience Scrum firsthand by simulating a multi-day sprint of product development, including planning, review, and retrospective meetings. Topics: Scrum overview; Scrum roles; Scrum timeboxes (ceremonies); Scrum artifacts; simulation; retrospective. It’s required that you read Ken Schwaber’s Scrum Guide in preparation for this module and course.
Module 3: IMPLEMENTING SCRUM IN VISUAL STUDIO 2010
This module demonstrates how to implement Scrum in Visual Studio 2010 using a Scrum process template. The team will learn the mapping between the Scrum concepts and how they are implemented in the tool. After connecting to the shared Team Foundation Server, the team members will then return to the simulation, this time using Visual Studio to manage their product development. Topics: mapping Scrum to Visual Studio 2010; User Story work items; Task work items; Bug work items; demonstration; simulation; retrospective.
Module 4: THE CASE STUDY
In this module the team is introduced to their problem domain for the week.
A kickoff meeting held by the Product Owner (the instructor) will set the stage for the why and the what of the upcoming sprints. The team will then define the quality attributes of the project and their definition of “done.” The legacy application code will be downloaded, built, and explored, so that any bugs can be discovered and reported. Topics: introduction to the case study; downloading, building, and exploring the application's source code; defining the quality attributes for the project; defining “done”; how to file effective bugs in Visual Studio 2010; retrospective.
Module 5: HOTFIX
This module drops the team directly into a Brownfield (legacy) experience by forcing them to analyze the existing application’s architecture and code in order to locate and fix the Product Owner’s high-priority bug(s). The team will learn best practices around finding, testing, fixing, validating, and closing a bug. Topics: using Architecture Explorer to visualize and explore; creating a unit test to validate the existence of a bug; finding and fixing the bug; validating and closing the bug; retrospective.
Module 6: PLANNING
This short module introduces the team to release and sprint planning within Visual Studio 2010. The team will define and capture their goals as well as other important planning information. Topics: release vs. sprint planning; release planning and the Product Backlog; Product Backlog prioritization; acceptance criteria and tests; sprint planning and the Sprint Backlog; creating and linking sprint tasks; retrospective.
At this point the team will have the knowledge of Scrum, Visual Studio 2010, and the case study application to begin developing increments of potentially shippable functionality that meet their definition of done.
Module 7: EMERGENT ARCHITECTURE
This module introduces the architectural practices and tools a team can use to develop a valid design on which to develop new functionality. The teams will learn how Scrum supports good architecture and design practices. After the discussion, the teams will be presented with the product owner’s prioritized backlog so that they may select and commit to the functionality they can deliver in this sprint. Topics: architecture and Scrum; emergent architecture; principles, patterns, and practices; Visual Studio 2010 modeling tools; UML and layer diagrams. SPRINT 1. Retrospective.
Module 8: TEST DRIVEN DEVELOPMENT
This module introduces Test Driven Development as a design tool and how to implement it using Visual Studio 2010. To maximize productivity and quality, a Scrum team should set up Continuous Integration to regularly build every team member’s code changes and run regression tests. Refactoring will also be defined and demonstrated in combination with Visual Studio’s Test Impact Analysis to efficiently re-run just those tests which were impacted by refactoring. Topics: continuous integration; Team Foundation Build; Test Driven Development (TDD); refactoring; Test Impact Analysis. SPRINT 2. Retrospective.
Module 9: AGILE DATABASE DEVELOPMENT
This module lets the SQL Server database developers in on a little secret: they can be agile too. By using the database projects in Visual Studio 2010, the database developers can join the rest of the team. The students will see how to apply Agile database techniques within Visual Studio to support the SQL Server 2005/2008/2008 R2 development lifecycle.
Topics: Agile database development; Visual Studio database projects; importing schema and scripts; building and deploying; generating data; unit testing. SPRINT 3. Retrospective.
Module 10: SHIP IT
Teams need to know that just because they like the functionality doesn’t mean the Product Owner will. This module revisits acceptance criteria as it pertains to acceptance testing. By refining acceptance criteria into manual test steps, team members can execute the tests, recording the results and reporting bugs in a number of ways. Manual tests will be defined and executed using the Microsoft Test Manager tool. As the sprint completes and an increment of functionality is delivered, the team will also learn why and when they should create a branch of the codeline. Topics: acceptance criteria; testing in Visual Studio 2010; Microsoft Test Manager; writing and running manual tests; branching. SPRINT 4. Retrospective.
Module 11: OVERCOMING DYSFUNCTION
This module introduces the many types of people, process, and tool dysfunctions that teams face in the real world. Many dysfunctions and scenarios will be identified, along with ideas and discussion for how a team might mitigate them. This module will enable you and your team to move toward independence and improve your game of Scrum when you depart class. Topics: Scrum-butts and flaccid Scrum; best practices working as a team; team challenges; ScrumMaster challenges; Product Owner challenges; stakeholder challenges; course retrospective.
What will be expected of you and your team?
This is a unique course in that it’s technically-focused, team-based, and employs timeboxes. It demands that the members of the teams self-organize and self-manage their own work to collaboratively develop increments of software. All attendees must commit to:
- Pay attention to all lectures and demonstrations
- Participate in team and group discussions
- Work collaboratively with other team members
- Obey the timebox for each activity
- Commit to work and do your best to deliver
All teams should have these skills:
- Understanding of Scrum
- Familiarity with Visual Studio 2010
- C#, .NET 4.0 & ASP.NET 4.0 experience*
- SQL Server 2008 development experience
- Software testing experience
* Check with the instructor ahead of time for the exact technologies
Self-organising teams
Another unique attribute of this course is that it’s a technical training class being delivered to teams of developers, not pairs, and not individuals. Ideally, your actual software development team will attend the training to ensure that all necessary skills are covered. However, if you wish to attend an open enrolment course alone or with just a couple of colleagues, realize that you may be placed on a team with other attendees. The instructor will do his or her best to ensure that each team is cross-functional to tackle the case study, but there are no guarantees. You may be required to try a new role, learn a new skill, or pair with somebody unfamiliar to you. This is just good Scrum!
Who should NOT take this course?
Because of the nature of this course, as explained above, certain types of people should probably not attend:
- Students requiring command-and-control style instruction – there are no prescriptive, step-by-step (think traditional Microsoft Learning) labs in this course
- Students who are unwilling to work within a timebox
- Students who are unwilling to work collaboratively on a team
- Students who don’t have any skill in any of the software development disciplines
- Students who are unable to commit fully to their team – not only will this diminish the student’s learning experience, but it will also impact their team’s learning experience
Find a course and register | Download this syllabus | Download the Scrum Guide
Technorati Tags: Scrum, SSW, Pro Scrum Dev

    Read the article

  • Best Practices - updated: which domain types should be used to run applications

    - by jsavit
This post is one of a series of "best practices" notes for Oracle VM Server for SPARC (formerly named Logical Domains). This is an updated and enlarged version of the post on this topic originally published in October 2012.
One frequent question is "what type of domain should I use to run applications?" There used to be a simple answer, "run applications in guest domains in almost all cases", but now there are more things to consider. Enhancements to Oracle VM Server for SPARC and the introduction of systems like the current SPARC servers, including the T4 and T5 systems, the Oracle SuperCluster T5-8 and the Oracle SuperCluster M6-32, provide scale and performance much higher than the original servers that ran domains. Single-CPU performance, I/O capacity, and memory sizes are much larger now, and far more demanding applications are being hosted in logical domains. The general advice continues to be "use guest domains in almost all cases", meaning "use virtual I/O rather than physical I/O", unless there is a specific reason to use the other domain types. The sections below discuss the criteria for choosing between domain types.
Review: division of labor and types of domain
Oracle VM Server for SPARC offloads management and I/O functionality from the hypervisor to domains (also called virtual machines), providing a modern alternative to older VM architectures that use a "thick", monolithic hypervisor. This permits a simpler hypervisor design, which enhances reliability and security. It also reduces single points of failure by assigning responsibilities to multiple system components, further improving reliability and security. Oracle VM Server for SPARC defines the following types of domain, each with their own roles:
Control domain - the management control point for the server; runs the logical domain daemon and constraints engine, and is used to configure domains and manage resources. The control domain is the first domain to boot on a power-up, is always an I/O domain, and is usually a service domain as well. It doesn't have to be, but there's no reason not to leverage it for virtual I/O services. There is one control domain per T-series system, and one per Physical Domain (PDom) on an M5-32 or M6-32 system. M5 and M6 systems can be physically domained, with logical domains within the physical ones.
I/O domain - a domain that has been assigned physical I/O devices. The devices may be:
- one or more PCIe root complexes (in which case the domain is also called a root complex domain). The domain has native access to all the devices on the assigned PCIe buses. The devices can be any device type supported by Solaris on the hardware platform.
- an SR-IOV (Single-Root I/O Virtualization) function. SR-IOV lets a physical device (also called a physical function, or PF) be subdivided into multiple virtual functions (VFs), which can be individually assigned directly to domains. SR-IOV devices currently can be Ethernet or InfiniBand devices.
- direct I/O ownership of one or more PCI devices residing in a PCIe bus slot. The domain has direct access to the individual devices.
An I/O domain has native performance and functionality for the devices it owns, unmediated by any virtualization layer. It may also have virtual devices.
Service domain - a domain that provides virtual network and disk devices to guest domains. The services are defined by commands that are run in the control domain. It is usually an I/O domain as well, in order for it to have devices to virtualize and serve out.
Guest domain - a domain whose devices are all virtual rather than physical: virtual network and disk devices provided by one or more service domains. In common practice, this is where applications are run.
Device considerations
Consider the following when choosing between virtual devices and physical devices:
- Virtual devices provide the best flexibility: they can be dynamically added to and removed from a running domain, and you can have a large number of them, up to a per-domain device limit.
- Virtual devices are compatible with live migration: domains that exclusively have virtual devices can be live migrated between servers supporting domains.
On the other hand:
- Physical devices provide the best performance; in fact, native "bare metal" performance. Virtual devices approach physical device throughput and latency, especially with virtual network devices that can now saturate 10GbE links, but physical devices are still faster.
- Physical I/O devices do not add load to service domains: all the I/O goes directly from the I/O domain to the device, while virtual I/O goes through service domains, which must be provided sufficient CPU and memory capacity.
- Physical I/O devices can be other than network and disk: we virtualize network, disk, and the serial console, but physical devices can be any of the wide range of attachable certified devices, including things like tape and CDROM/DVD devices.
In some cases the lines are now blurred: virtual devices have better performance than previously (starting with Oracle VM Server for SPARC 3.1 there is near-native virtual network performance), and there is more flexibility with physical devices than before (SR-IOV devices can now be dynamically reconfigured on domains). Tradeoffs one used to have to make are now relaxed: you can often have the flexibility of virtual I/O with performance that previously required physical I/O, and you can have the performance and isolation of SR-IOV with the ability to dynamically reconfigure it, just like with virtual devices.
Typical deployment
A service domain is generally also an I/O domain: otherwise it wouldn't have access to physical device "backends" to offer to its clients. Similarly, an I/O domain is also typically a service domain in order to leverage the available PCI buses. Control domains must be I/O domains, because they boot up first on the server and require physical I/O. It's typical for the control domain to also be a service domain, so it doesn't "waste" the I/O resources it uses. A simple configuration consists of a control domain that is also the one I/O and service domain, and some number of guest domains using virtual I/O (a minimal ldm sketch of this configuration follows below). In production, customers typically use multiple domains with I/O and service roles to eliminate single points of failure, as described in Availability Best Practices - Avoiding Single Points of Failure. Guest domains have virtual disk and virtual network devices provisioned from more than one service domain, so failure of a service domain or of an I/O path or device does not result in an application outage. This also permits "rolling upgrades", in which service domains are upgraded one at a time while their guests continue to operate without disruption. (It should be noted that resiliency to I/O device failures can also be provided by the single control domain, using multi-path I/O.) In this type of deployment, control, I/O, and service domains are used for virtualization infrastructure, while applications run in guest domains.
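As a concrete illustration of the simple configuration just described, here is a hedged sketch of the ldm commands to create a guest domain with purely virtual I/O. It assumes the control domain already exports a virtual disk service (primary-vds0) and a virtual switch (primary-vsw0); the domain, volume, and ZFS volume names are just examples:

# create the guest and give it CPU threads and memory
ldm add-domain ldg1
ldm set-vcpu 8 ldg1
ldm set-memory 16G ldg1

# back a virtual disk with a ZFS volume in the service domain, then attach it
ldm add-vdsdev /dev/zvol/dsk/rpool/ldg1_disk ldg1_vol@primary-vds0
ldm add-vdisk vdisk0 ldg1_vol@primary-vds0 ldg1

# add a virtual network device hanging off the control domain's virtual switch
ldm add-vnet vnet0 primary-vsw0 ldg1

# bind resources and start the domain
ldm bind ldg1
ldm start ldg1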
Changing application deployment patterns
The above model has been widely and successfully used, but more configuration options are available now. Servers got bigger than the original T2000-class machines with 2 I/O buses, so there is more I/O capacity that can be used for applications. Increased server capacity made it attractive to run more vertically-scaled applications, such as databases, with higher resource requirements than the "light" applications originally seen. This made it attractive to run applications in I/O domains so they could get bare-metal native I/O performance. This is leveraged by the Oracle SuperCluster engineered systems mentioned previously: in those engineered systems, I/O domains are used for high-performance applications, with native I/O performance for disk and network and optimized access to the InfiniBand fabric. Another technical enhancement is Single Root I/O Virtualization (SR-IOV), which makes it possible to give domains direct connections and native I/O performance for selected I/O devices. Not all I/O domains own PCI complexes, and there are increasingly more I/O domains that are not service domains; they use their I/O connectivity for the performance of their own applications.
However, there are some limitations and considerations: at this time, a domain using physical I/O cannot be live-migrated to another server. There is also a need to plan for security and to avoid introducing unneeded dependencies: if an I/O domain is also a service domain providing virtual I/O to guests, it has the ability to affect the correct operation of its client guest domains. This is even more relevant for the control domain, where the ldm command must be protected from unauthorized (or even mistaken) use that would affect other domains. As a general rule, running applications in the service domain or the control domain should be avoided. For reference, an excellent guide to secure deployment of domains by Stefan Hinker is at Secure Deployment of Oracle VM Server for SPARC.
To recap:
- Guest domains with virtual I/O still provide the greatest operational flexibility, including features like live migration. They should be considered the default domain type to use unless there is a specific requirement that mandates an I/O domain.
- I/O domains can be used for applications with the highest performance requirements. Single Root I/O Virtualization (SR-IOV) makes this more attractive by giving direct I/O access to more domains, and by permitting dynamic reconfiguration of SR-IOV devices. Today's larger systems provide multiple PCIe buses (for example, 16 buses on the T5-8), making it possible to configure multiple I/O domains, each owning their own bus.
- Service domains should in general not be used for applications, because compromised security in the domain, or an outage, can affect domains that depend on it. This concern can be mitigated by providing guests their virtual I/O from more than one service domain, so interruption of service in one service domain does not cause an application outage.
- The control domain should in general not be used to run applications, for the same reason. Oracle SuperCluster uses the control domain for applications, but it is an exception: it's not a general-purpose environment; it's an engineered system with specifically configured applications and optimization for optimal performance.
These are recommended "best practices" based on conversations with a number of Oracle architects.
Keep in mind that "one size does not fit all", so you should evaluate these practices in the context of your own requirements.
Summary
Higher-capacity servers that run Oracle VM Server for SPARC are attractive for applications with the most demanding resource requirements. New deployment models permit native I/O performance for demanding applications by running them in I/O domains with direct access to their devices. This is leveraged in SPARC SuperCluster, and can be leveraged in T-series servers to provision high-performance applications running in domains. Carefully planned, this can be used to provide peak performance for critical applications. That said, the improved virtual device performance in Oracle VM Server means that the default choice should still be guest domains with virtual I/O.

    Read the article

  • Screen shots and documentation on the cheap

    - by Kyle Burns
Occasionally I am surprised to open up my toolbox and find a great tool that I've had for years and never noticed. The other day I had just such an experience with Windows Server 2008. A co-worker of mine was squinting to read the screenshots that he had taken using the "Print Screen, paste" method in WordPad and asked me if there was a better tool available at a reasonable cost. My first instinct was to take a look at CamStudio for him, but I also knew that he had an immediate need to take some more screenshots, so I decided to check whether the Snipping Tool found in Windows 7 is also available in Windows Server 2008. I clicked the Start button and typed "snip" into the search bar, and while the Snipping Tool did not come up, a Control Panel item labeled "Record steps to reproduce a problem" did.
The application behind the Control Panel entry was "Problem Steps Recorder" (PSR.exe); I have confirmed that it is available in Windows 7 and Windows Server 2008 R2 but have not checked other platforms. It presents a pretty minimal and intuitive interface, providing a "Start Record", "Stop Record", and "Add Comment" button. The "Start Record" button shockingly starts recording and, sure enough, the "Stop Record" button stops recording. The "Add Comment" button prompts for a comment and for you to highlight the area of the screen to which your comment is related. Once you're done recording, the tool outputs an MHT file packaged in a ZIP archive. This file contains a series of screenshots depicting the user's interactions, giving timestamps and descriptive text (such as "User left click on "Test" in "My Page – Windows Internet Explorer"), as well as the comments made along the way and some diagnostics about the applications captured.
The Problem Steps Recorder looks like a simple solution to the most common of my documentation needs, the kind that turn "I can't understand how to make it do what you're reporting" into "Oh, I see what you're talking about and will fix it right away". If you're like me and haven't yet discovered this tool, give it a whirl and see for yourself.
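One extra trick: PSR can also be driven from the command line, which is handy for scripting a capture on a user's machine. A quick sketch follows; the flags below match what I've seen documented for psr.exe, but run psr.exe /? to confirm on your build, and the output path is just an example:

rem start recording silently, capture screenshots, keep up to 50 of them
psr.exe /start /output "C:\Temp\repro.zip" /sc 1 /maxsc 50 /gui 0

rem ... reproduce the problem here ...

rem stop recording; the ZIP (containing the MHT) lands at the output path
psr.exe /stop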

    Read the article

< Previous Page | 111 112 113 114 115 116 117 118 119 120 121 122  | Next Page >