Search Results

Search found 1048 results on 42 pages for 'producer consumer'.


  • Weblogic JMS System Error

    - by Jeune
    We're getting a JMS error that doesn't give us much to go on:

        org.springframework.jms.UncategorizedJmsException: Uncategorized exception occured during JMS processing; nested exception is weblogic.jms.common.JMSException: [JMSClientExceptions:055039] A system error has occurred. The error is java.lang.NullPointerException; nested exception is java.lang.NullPointerException
            at com.pg.ecom.jms.service.ProducerServices.SendMessageSync(ProducerServices.java:131)
            at com.pg.ecom.jms.service.ProducerServices.SendMessageSync(ProducerServices.java:115)
            at com.pg.ecom.jms.producer.FormsCRRProducer.sendMessage(FormsCRRProducer.java:56)
            at com.pg.ecom.cpgt.processruleagent.managerbean.forms.GenerateFormsManagerBean.useNewGetTemplateData(GenerateFormsManagerBean.java:522)
            at com.pg.ecom.cpgt.processruleagent.managerbean.forms.GenerateFormsManagerBean.doService(GenerateFormsManagerBean.java:114)
            at com.pg.ecom.fw.processcontainer.AbstractManagerBean.doServiceWrapper(AbstractManagerBean.java:175)
            at com.pg.ecom.fw.processcontainer.AbstractManagerBean.doServiceRequest(AbstractManagerBean.java:151)
            at com.pg.ecom.fw.processcontainer.AbstractServlet.doManagerBeanServiceAndPresentation(AbstractServlet.java:1911)
            at com.pg.ecom.cpgt.processunit.servlet.CportalParamServlet.doService(CportalParamServlet.java:107)
            at com.pg.ecom.fw.processcontainer.AbstractServlet.service(AbstractServlet.java:983)
            at javax.servlet.http.HttpServlet.service(HttpServlet.java:856)
            at weblogic.servlet.internal.StubSecurityHelper$ServletServiceAction.run(StubSecurityHelper.java:227)
            at weblogic.servlet.internal.StubSecurityHelper.invokeServlet(StubSecurityHelper.java:125)
            at weblogic.servlet.internal.ServletStubImpl.execute(ServletStubImpl.java:283)
            at weblogic.servlet.internal.TailFilter.doFilter(TailFilter.java:26)
            at weblogic.servlet.internal.FilterChainImpl.doFilter(FilterChainImpl.java:42)
            at com.pg.ecom.cpgt.processunit.filter.UploadMultipartFilter.doFilter(UploadMultipartFilter.java:28)
            at weblogic.servlet.internal.FilterChainImpl.doFilter(FilterChainImpl.java:42)
            at weblogic.servlet.internal.WebAppServletContext$ServletInvocationAction.run(WebAppServletContext.java:3229)
            at weblogic.security.acl.internal.AuthenticatedSubject.doAs(AuthenticatedSubject.java:321)
            at weblogic.security.service.SecurityManager.runAs(SecurityManager.java:121)
            at weblogic.servlet.internal.WebAppServletContext.securedExecute(WebAppServletContext.java:2002)
            at weblogic.servlet.internal.WebAppServletContext.execute(WebAppServletContext.java:1908)
            at weblogic.servlet.internal.ServletRequestImpl.run(ServletRequestImpl.java:1362)
            at weblogic.work.ExecuteThread.execute(ExecuteThread.java:209)
            at weblogic.work.ExecuteThread.run(ExecuteThread.java:181)

    The only lead I have is line 127 of our code, which this nested exception points to:

        Caused by: weblogic.jms.common.JMSException: [JMSClientExceptions:055039] A system error has occurred. The error is java.lang.NullPointerException
            at weblogic.jms.client.JMSSession.handleException(JMSSession.java:2853)
            at weblogic.jms.client.JMSConsumer.receive(JMSConsumer.java:629)
            at weblogic.jms.client.JMSConsumer.receive(JMSConsumer.java:488)
            at weblogic.jms.client.WLConsumerImpl.receive(WLConsumerImpl.java:155)
            at org.springframework.jms.core.JmsTemplate.doReceive(JmsTemplate.java:734)
            at org.springframework.jms.core.JmsTemplate.doReceive(JmsTemplate.java:706)
            at org.springframework.jms.core.JmsTemplate$9.doInJms(JmsTemplate.java:681)
            at org.springframework.jms.core.JmsTemplate.execute(JmsTemplate.java:447)
            at org.springframework.jms.core.JmsTemplate.receiveSelected(JmsTemplate.java:679)
            at org.springframework.jms.core.JmsTemplate.receiveSelectedAndConvert(JmsTemplate.java:784)
            at com.pg.ecom.jms.service.ProducerServices.SendMessageSync(ProducerServices.java:127)
            ... 25 more

    This is line 127:

        try {
            Thread.yield(); // line 127 below
            status = (StatusMessageBean) getJmsTemplate.receiveSelectedAndConvert(statusDestination,
                    "JMSCorrelationID='" + producerMsg.getProcessID() + "'");
            Thread.yield();
        } catch (Exception e) {
            Thread.yield();
            loggingInterface.doErrorLogging(e.fillInStackTrace());
        }

    According to the BEA documentation we should contact BEA about error 055039, but I would like to ask here first before taking it to them. Some more errors:

        Caused by: java.lang.NullPointerException
            at weblogic.jms.common.JMSVariableBinder$JMSCorrelationIDVariable.get(JMSVariableBinder.java:127)
            at weblogic.utils.expressions.Expression.evaluateExpr(Expression.java:271)
            at weblogic.utils.expressions.Expression.evaluateExpr(Expression.java:298)
            at weblogic.utils.expressions.Expression.evaluateBoolean(Expression.java:209)
            at weblogic.utils.expressions.Expression.evaluate(Expression.java:167)
            at weblogic.jms.common.JMSSQLFilter$Exp.evaluate(JMSSQLFilter.java:304)
            at weblogic.messaging.common.SQLFilter.match(SQLFilter.java:158)
            at weblogic.messaging.kernel.internal.MessageList.findNextVisible(MessageList.java:274)
            at weblogic.messaging.kernel.internal.QueueImpl.nextFromIteratorOrGroup(QueueImpl.java:441)
            at weblogic.messaging.kernel.internal.QueueImpl.nextMatchFromIteratorOrGroup(QueueImpl.java:350)
            at weblogic.messaging.kernel.internal.QueueImpl.get(QueueImpl.java:233)
            at weblogic.messaging.kernel.internal.QueueImpl.addReader(QueueImpl.java:1069)
            at weblogic.messaging.kernel.internal.ReceiveRequestImpl.start(ReceiveRequestImpl.java:178)
            at weblogic.messaging.kernel.internal.ReceiveRequestImpl.<init>(ReceiveRequestImpl.java:86)
            at weblogic.messaging.kernel.internal.QueueImpl.receive(QueueImpl.java:820)
            at weblogic.jms.backend.BEConsumerImpl.blockingReceiveStart(BEConsumerImpl.java:1172)
            at weblogic.jms.backend.BEConsumerImpl.receive(BEConsumerImpl.java:1383)
            at weblogic.jms.backend.BEConsumerImpl.invoke(BEConsumerImpl.java:1088)
            at weblogic.messaging.dispatcher.Request.wrappedFiniteStateMachine(Request.java:759)
            at weblogic.messaging.dispatcher.DispatcherImpl.dispatchAsyncInternal(DispatcherImpl.java:129)
            at weblogic.messaging.dispatcher.DispatcherImpl.dispatchAsync(DispatcherImpl.java:112)
            at weblogic.messaging.dispatcher.Request.dispatchAsync(Request.java:1046)
            at weblogic.jms.dispatcher.Request.dispatchAsync(Request.java:72)
            at weblogic.jms.frontend.FEConsumer.receive(FEConsumer.java:557)
            at weblogic.jms.frontend.FEConsumer.invoke(FEConsumer.java:806)
            at weblogic.messaging.dispatcher.Request.wrappedFiniteStateMachine(Request.java:759)
            at weblogic.messaging.dispatcher.DispatcherServerRef.invoke(DispatcherServerRef.java:276)
            at weblogic.messaging.dispatcher.DispatcherServerRef.handleRequest(DispatcherServerRef.java:141)
            at weblogic.messaging.dispatcher.DispatcherServerRef.access$000(DispatcherServerRef.java:36)
            at weblogic.messaging.dispatcher.DispatcherServerRef$2.run(DispatcherServerRef.java:112)
            ... 2 more

    Any ideas?
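
    One observation on the second "Caused by" trace: it fails inside JMSCorrelationIDVariable.get while WebLogic evaluates the message selector, which is consistent with a correlation ID being null somewhere - either producerMsg.getProcessID() returns null when the selector string is built, or a message sitting on the queue was sent without JMSCorrelationID set. A minimal defensive sketch, assuming the question's Spring JmsTemplate setup (the SafeSelectiveReceiver class and its names are illustrative, not the poster's real code):

        import javax.jms.Destination;
        import org.springframework.jms.core.JmsTemplate;

        public class SafeSelectiveReceiver {
            private final JmsTemplate jmsTemplate;
            private final Destination statusDestination;

            public SafeSelectiveReceiver(JmsTemplate jmsTemplate, Destination statusDestination) {
                this.jmsTemplate = jmsTemplate;
                this.statusDestination = statusDestination;
            }

            public Object receiveStatus(String processId) {
                // Fail fast instead of building a selector from a null ID; this
                // exception is far more informative than the broker-side NPE.
                if (processId == null || processId.trim().isEmpty()) {
                    throw new IllegalArgumentException("processId must be set before a selective receive");
                }
                String selector = "JMSCorrelationID='" + processId + "'";
                return jmsTemplate.receiveSelectedAndConvert(statusDestination, selector);
            }
        }

    The same rule would apply on the producer side: always call message.setJMSCorrelationID(...) with a non-null value before sending, so the selector never has to be evaluated against a message with no correlation ID.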


  • Delphi users who wish to use HID USB in Windows

    - by Lex Dean
    HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Enum\USB contains a list of keys holding vendor ID and product ID numbers that coincide with the database on the USB web sites. These numbers, and the GUID held within each key, give important information needed to drive HID.dll that is otherwise impossible to obtain. Control Panel/System/Hardware/Device Manager/Universal Serial Bus Controllers/Mass Storage Devices/Details simply lists the registry data. Programmer access has been documented through the Windows API (advapi32.dll) with a number of procedures that read the registry. But that is not the problem, it only looks like the problem! The key issue is information about the registry and how to use it. These keys can be viewed in RegEdit.exe itself. Some parts of the registry, like the USB branch, have been given a Windows security-system type of protection via Authz.dll that puts read and write protection on the USB keys, even through the Windows API. Only Microsoft gives out these details, and we all know Microsoft is no friend of Delphi. Meanwhile C users have enjoyed this access for over ten years now. Some will make out that you should never give out such information because some idiot may write a stupid virus (true), but the counter-argument is: should Delphi users be denied USB access for another ten years? What I do not have is skill in assembly code. I am looking for someone who can trace how RegEdit.exe gets its access through Authz.dll to reach the USB data. So I am asking everyone who reads this to petition any friend they have with this skill to get the Authz.dll information we need. I find that USB.org replies when an email is positive, but does not bother should the email carry a slightly negative policy. For all simple reasoning, all USB.org had to do was keep a secure key as they have done, and copy the same data into an unsecured key every time it changes, so USB developers could read it without fighting Authz.dll. Authz.dll exposes these functions for USB:

        AuthzFreeResourceManager
        AuthzFreeContext

        AuthzAccessCheck(Flags: DWORD;
            AuthzClientContext: AUTHZ_CLIENT_CONTEXT_HANDLE;
            pRequest: PAUTHZ_ACCESS_REQUEST;
            AuditInfo: AUTHZ_AUDIT_INFO_HANDLE;
            pSecurityDescriptor: PSECURITY_DESCRIPTOR;
            OptionalSecurityDescriptorArray: PSECURITY_DESCRIPTOR;
            OptionalSecurityDescriptorCount: DWORD; // OPTIONAL
            var pReply: AUTHZ_ACCESS_REPLY;
            pAuthzHandle: PAUTHZ_ACCESS_CHECK_RESULTS_HANDLE): BOOL;

        AuthzInitializeContextFromSid(Flags: DWORD;
            UserSid: PSID;
            AuthzResourceManager: AUTHZ_RESOURCE_MANAGER_HANDLE;
            pExpirationTime: int64;
            Identifier: LUID;
            DynamicGroupArgs: PVOID;
            pAuthzClientContext: PAUTHZ_CLIENT_CONTEXT_HANDLE): BOOL;

        AuthzInitializeResourceManager(flags: DWORD;
            pfnAccessCheck: PFN_AUTHZ_DYNAMIC_ACCESS_CHECK;
            pfnComputeDynamicGroups: PFN_AUTHZ_COMPUTE_DYNAMIC_GROUPS;
            pfnFreeDynamicGroups: PFN_AUTHZ_FREE_DYNAMIC_GROUPS;
            ResourceManagerName: PWideChar;
            pAuthzResourceManager: PAUTHZ_RESOURCE_MANAGER_HANDLE): BOOL;

    More is in Authz.h on kolers.com. J Lex Dean.


  • Single DispatcherServlet with Multiple Controllers

    - by jwmajors81
    I am trying to create some RESTful web services using Spring MVC 3.0. I currently have an issue where only one of my two controllers will work at any given time. As it turns out, whichever class comes first when sorted alphabetically works properly. The error I get is:

        handleNoSuchRequestHandlingMethod No matching handler method found for servlet request: path '/polinq.xml', method 'GET', parameters map[[empty]]

    I had a very similar message earlier too, except instead of the map being empty it was something like map[v--String(array)]. Regardless of the message, currently the LocationCovgController works and the PolicyInquiryController doesn't. If I rename PolicyInquiryController to APolicyInquiryController, it starts functioning properly and the LocationCovgController stops working. Any assistance would be greatly appreciated. Thank you very much, Jeremy

    The information provided below includes the skeleton of both controller classes and also the servlet config file that defines how Spring should be set up.

    Controller 1:

        package org.example;

        @Controller
        @RequestMapping(value = "/polinq.*")
        public class PolicyInquiryController {

            @RequestMapping(value = "/polinq.*?comClientId={comClientId}")
            public ModelAndView getAccountSummary(
                    @PathVariable("comClientId") String commercialClientId) {
                // setup of the 'as' variable was removed
                ModelAndView mav = new ModelAndView("XmlView",
                        BindingResult.MODEL_KEY_PREFIX + "accsumm", as);
                return mav;
            }
        }

    Controller 2:

        package org.example;

        @Controller
        @RequestMapping(value = "/loccovginquiry.*")
        public class LocationCovgController {

            @RequestMapping(value = "/loccovginquiry.*method={method}")
            public ModelAndView locationCovgInquiryByPolicyNo(
                    @PathVariable("method") String method) {
                ModelAndView mav = new ModelAndView("XmlView",
                        BindingResult.MODEL_KEY_PREFIX + "loccovg", covgs);
                return mav;
            }
        }

    Servlet config:

        <context:component-scan base-package="org.example." />

        <bean class="org.springframework.web.servlet.view.ContentNegotiatingViewResolver" p:order="0">
            <property name="mediaTypes">
                <map>
                    <entry key="atom" value="application/atom+xml"/>
                    <entry key="xml" value="application/xml"/>
                    <entry key="json" value="application/json"/>
                    <entry key="html" value="text/html"/>
                </map>
            </property>
            <property name="defaultContentType" value="text/html"/>
            <property name="ignoreAcceptHeader" value="true"/>
            <property name="favorPathExtension" value="true"/>
            <property name="viewResolvers">
                <list>
                    <bean class="org.springframework.web.servlet.view.InternalResourceViewResolver">
                        <property name="prefix" value="/WEB-INF/jsp/"/>
                        <property name="suffix" value=".jsp"/>
                    </bean>
                </list>
            </property>
            <property name="defaultViews">
                <list>
                    <bean class="org.springframework.web.servlet.view.json.MappingJacksonJsonView"/>
                </list>
            </property>
        </bean>

        <bean class="org.springframework.web.servlet.view.BeanNameViewResolver" />

        <bean id="XmlView" class="org.springframework.web.servlet.view.xml.MarshallingView">
            <property name="marshaller" ref="marshaller"/>
        </bean>

        <oxm:jaxb2-marshaller id="marshaller">
            <oxm:class-to-be-bound name="org.example.policy.dto.AccountSummary"/>
            <oxm:class-to-be-bound name="org.example.policy.dto.InsuredName"/>
            <oxm:class-to-be-bound name="org.example.policy.dto.Producer"/>
            <oxm:class-to-be-bound name="org.example.policy.dto.PropertyLocCoverage"/>
            <oxm:class-to-be-bound name="org.example.policy.dto.PropertyLocCoverages"/>
        </oxm:jaxb2-marshaller>
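
    One thing that stands out in both skeletons: the method-level mappings try to capture a query string ("?comClientId={comClientId}") or a query-like suffix ("method={method}") as part of the path pattern, but @PathVariable only binds placeholders in the servlet path, never in query parameters, so neither method pattern can match a real request. A hedged sketch of how the first controller might look with @RequestParam instead (the handler body, view name and lookUpAccountSummary helper are placeholders, not the poster's real code):

        package org.example;

        import org.springframework.stereotype.Controller;
        import org.springframework.web.bind.annotation.RequestMapping;
        import org.springframework.web.bind.annotation.RequestMethod;
        import org.springframework.web.bind.annotation.RequestParam;
        import org.springframework.web.servlet.ModelAndView;

        @Controller
        @RequestMapping("/polinq.*")
        public class PolicyInquiryController {

            // GET /polinq.xml?comClientId=123 -> comClientId arrives as a request
            // parameter, so bind it with @RequestParam rather than @PathVariable.
            @RequestMapping(method = RequestMethod.GET)
            public ModelAndView getAccountSummary(
                    @RequestParam("comClientId") String commercialClientId) {
                Object accountSummary = lookUpAccountSummary(commercialClientId); // hypothetical helper
                return new ModelAndView("XmlView", "accsumm", accountSummary);
            }

            private Object lookUpAccountSummary(String commercialClientId) {
                return null; // placeholder for the real lookup
            }
        }

    With one unambiguous mapping per controller, the alphabetical-order symptom (only the first controller winning) should also disappear, since the handler mapping no longer has two broken patterns competing for the same paths.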


  • Representing game states in Tic Tac Toe

    - by dacman
    The goal of the assignment that I'm currently working on for my Data Structures class is to create a version of Quantum Tic Tac Toe with an AI that plays to win. Currently, I'm having a bit of trouble finding the most efficient way to represent states.

    Overview of the current structure:

    AbstractGame
    - Has and manages AbstractPlayers (game.nextPlayer() returns the next player by int ID)
    - Has and initializes AbstractBoard at the beginning of the game
    - Has a GameTree (complete if called in initialization, incomplete otherwise)

    AbstractBoard
    - Has a State, a Dimension, and a parent Game
    - Is a mediator between Player and State (translates states from collections of rows to a Point representation)
    - Is a State consumer

    AbstractPlayer
    - Is a State producer
    - Has a ConcreteEvaluationStrategy to evaluate the current board

    StateTransversalPool
    - Precomputes possible transversals of "3-states"
    - Stores them in a HashMap, where the Set contains the nextStates for a given "3-state"

    State
    - Contains 3 sets: a Set of X-moves, a Set of O-moves, and the board
    - Each Integer in a set is a row. These Integer values can be used to get the next row-state from the StateTransversalPool

    So the principle is: each row can be represented by the binary numbers 000-111, where 0 implies an open space and 1 implies a closed space. So, for an incomplete TTT board:

    From the Set<Integer> board perspective (1 is a closed space, 0 is open):
        X_X    R1 might be: 101
        OO_    R2 might be: 110
        X_X    R3 might be: 101

    From the Set<Integer> xMoves perspective (1 is an X, 0 is not):
        X_X    R1 might be: 101
        OO_    R2 might be: 000
        X_X    R3 might be: 101

    From the Set<Integer> oMoves perspective (1 is an O, 0 is not):
        X_X    R1 might be: 000
        OO_    R2 might be: 110
        X_X    R3 might be: 000

    Then we see that x{R1,R2,R3} | o{R1,R2,R3} = board{R1,R2,R3}: the board rows are the bitwise OR (union) of the two players' rows.

    The problem is quickly generating next states for the GameTree. If I have player Max (x) with board{R1,R2,R3}, then getting the next row-states for R1, R2, and R3 is simple:

        Set<Integer> R1nextStates = StateTransversalPool.get(R1);

    The problem is that I have to combine each one of those states with R2 and R3. Is there a better data structure besides Set that I could use? Is there a more efficient approach in general? I've also found Point<->State mediation cumbersome. Is there another approach that I could try there? Thanks!

    Here is the code for my ConcretePlayer class. It might help explain how players produce new states via moves, using the StateProducer (which might need to become StateFactory or StateBuilder).

        public class ConcretePlayerGeneric extends AbstractPlayer {
            @Override
            public BinaryState makeMove() {
                // Given a move and the current state, produce a new state
                Point playerMove = super.strategy.evaluate(this);
                BinaryState currentState = super.getInGame().getBoard().getState();
                return StateProducer.getState(this, playerMove, currentState);
            }
        }

    EDIT: I'm starting with normal TTT and moving to Quantum TTT. Given the framework, it should be as simple as creating several new concrete classes and tweaking some things.
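
    As a concrete illustration of the row arithmetic described above - a minimal sketch assuming the 3-bit encoding from the question (RowStates and its method names are illustrative, not part of the assignment's framework):

        import java.util.ArrayList;
        import java.util.List;

        public class RowStates {

            // A board row is the bitwise OR of the X row and the O row:
            // every occupied (closed) cell is set in exactly one player's mask.
            static int boardRow(int xRow, int oRow) {
                return xRow | oRow;
            }

            // Enumerate the row-states reachable by placing one mark in an open
            // cell; this is the kind of table a StateTransversalPool could
            // precompute for all eight possible 3-bit rows.
            static List<Integer> nextRowStates(int row) {
                List<Integer> next = new ArrayList<>();
                for (int cell = 0; cell < 3; cell++) {
                    int mask = 1 << cell;
                    if ((row & mask) == 0) {     // cell is open
                        next.add(row | mask);    // close it with the new mark
                    }
                }
                return next;
            }

            public static void main(String[] args) {
                int xRow = 0b101, oRow = 0b000;              // X_X with no O's
                System.out.println(boardRow(xRow, oRow));    // 5 (binary 101)
                System.out.println(nextRowStates(0b101));    // [7] - only the middle cell was open
            }
        }

    Since a row has only eight possible values, the whole pool can also be a plain array of length 8 indexed by the row bits, which avoids HashMap overhead entirely.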


  • change height in Android 2.2 LinearLayout in code

    - by Niro
    I'm trying to change the height of layouts through code, without success. I've tried all of the examples I saw here and on other sites, and my app just keeps shutting down.

    XML code:

        <LinearLayout xmlns:android="http://schemas.android.com/apk/res/android"
            xmlns:tools="http://schemas.android.com/tools"
            android:id="@+id/main_lay"
            android:orientation="vertical"
            tools:context=".MainActivity" >

            <LinearLayout
                android:id="@+id/layout_add" >
            </LinearLayout>

            <LinearLayout
                android:layout_width="fill_parent"
                android:layout_height="50dp"
                android:background="@color/green">

                <ImageView
                    android:layout_width="fill_parent"
                    android:contentDescription="@string/desc"
                    android:layout_height="45dp"
                    android:layout_gravity="center_vertical|left"
                    android:scaleType="fitStart"
                    android:background="@color/orange"
                    android:src="@drawable/logo" >
                </ImageView>
            </LinearLayout>
        </LinearLayout>

    Java code:

        main_layout = (LinearLayout) findViewById(R.id.main_lay);
        main_layout.setLayoutParams(new FrameLayout.LayoutParams(LayoutParams.MATCH_PARENT, LayoutParams.MATCH_PARENT));
        main_layout.setBackgroundResource(R.color.white);

        layout_add = (LinearLayout) findViewById(R.id.layout_add);
        layout_add.setLayoutParams(new FrameLayout.LayoutParams(LayoutParams.FILL_PARENT, 50));
        layout_add.setBackgroundResource(R.color.dark_grey);

    I can't understand what I'm doing wrong. I've tried different ways to fix it. The background setting is working fine. Thank you guys, Niro

    This is the logcat:

        12-09 16:12:39.007: E/AnalyticsSDKTest.cpp(6338): Time w/ UTC Offset: 2012-12-09
        12-09 16:16:14.517: E/ActivityManager(121): fail to set top app changed!
        12-09 16:16:14.547: E/InputDispatcher(121): channel '4056b9c8 com.nirosadvice.converter/com.nirosadvice.converter.MainActivity (server)' ~ Consumer closed input channel or an error occurred. events=0x8
        12-09 16:16:14.547: E/InputDispatcher(121): channel '4056b9c8 com.nirosadvice.converter/com.nirosadvice.converter.MainActivity (server)' ~ Channel is unrecoverably broken and will be disposed!
        12-09 16:16:18.071: E/PVWmdrmProxy(5716): binder died for component: ComponentInfo{com.pv.wmdrmservice/com.pv.wmdrmservice.PVWmdrmService}
        12-09 16:16:18.161: E/AndroidRuntime(18911): FATAL EXCEPTION: main
        java.lang.RuntimeException: Unable to start activity ComponentInfo{com.nirosadvice.converter/com.nirosadvice.converter.MainActivity}: java.lang.RuntimeException: Binary XML file line #1: You must supply a layout_width attribute.
            at android.app.ActivityThread.performLaunchActivity(ActivityThread.java:1821)
            at android.app.ActivityThread.handleLaunchActivity(ActivityThread.java:1842)
            at android.app.ActivityThread.access$1500(ActivityThread.java:132)
            at android.app.ActivityThread$H.handleMessage(ActivityThread.java:1038)
            at android.os.Handler.dispatchMessage(Handler.java:99)
            at android.os.Looper.loop(Looper.java:150)
            at android.app.ActivityThread.main(ActivityThread.java:4263)
            at java.lang.reflect.Method.invokeNative(Native Method)
            at java.lang.reflect.Method.invoke(Method.java:507)
            at com.android.internal.os.ZygoteInit$MethodAndArgsCaller.run(ZygoteInit.java:839)
            at com.android.internal.os.ZygoteInit.main(ZygoteInit.java:597)
            at dalvik.system.NativeStart.main(Native Method)
        Caused by: java.lang.RuntimeException: Binary XML file line #1: You must supply a layout_width attribute.
            at android.content.res.TypedArray.getLayoutDimension(TypedArray.java:491)
            at android.view.ViewGroup$LayoutParams.setBaseAttributes(ViewGroup.java:3684)
            at android.view.ViewGroup$MarginLayoutParams.<init>(ViewGroup.java:3764)
            at android.widget.FrameLayout$LayoutParams.<init>(FrameLayout.java:457)
            at android.widget.FrameLayout.generateLayoutParams(FrameLayout.java:423)
            at android.widget.FrameLayout.generateLayoutParams(FrameLayout.java:47)
            at android.view.LayoutInflater.inflate(LayoutInflater.java:396)
            at android.view.LayoutInflater.inflate(LayoutInflater.java:320)
            at android.view.LayoutInflater.inflate(LayoutInflater.java:276)
            at com.android.internal.policy.impl.PhoneWindow.setContentView(PhoneWindow.java:231)
            at android.app.Activity.setContentView(Activity.java:1715)
            at com.nirosadvice.converter.MainActivity.onCreate(MainActivity.java:87)
            at android.app.Instrumentation.callActivityOnCreate(Instrumentation.java:1072)
            at android.app.ActivityThread.performLaunchActivity(ActivityThread.java:1785)
            ... 11 more

    Thanks

    After I've added the attributes to the XML, this is what I'm getting (the identical ClassCastException crash repeats at 16:42:33, 16:42:49, 16:44:00, 16:45:25 and 16:50:34; one copy of the trace shown):

        12-09 16:42:33.168: E/AndroidRuntime(19065): FATAL EXCEPTION: main
        java.lang.ClassCastException: android.view.ViewGroup$LayoutParams
            at android.widget.LinearLayout.measureVertical(LinearLayout.java:360)
            at android.widget.LinearLayout.onMeasure(LinearLayout.java:309)
            at android.view.View.measure(View.java:8526)
            at android.view.ViewGroup.measureChildWithMargins(ViewGroup.java:3224)
            at android.widget.FrameLayout.onMeasure(FrameLayout.java:250)
            at android.view.View.measure(View.java:8526)
            at android.view.ViewGroup.measureChildWithMargins(ViewGroup.java:3224)
            at android.widget.FrameLayout.onMeasure(FrameLayout.java:250)
            at android.view.View.measure(View.java:8526)
            at android.view.ViewRoot.performTraversals(ViewRoot.java:902)
            at android.view.ViewRoot.handleMessage(ViewRoot.java:1957)
            at android.os.Handler.dispatchMessage(Handler.java:99)
            at android.os.Looper.loop(Looper.java:150)
            at android.app.ActivityThread.main(ActivityThread.java:4263)
            at java.lang.reflect.Method.invokeNative(Native Method)
            at java.lang.reflect.Method.invoke(Method.java:507)
            at com.android.internal.os.ZygoteInit$MethodAndArgsCaller.run(ZygoteInit.java:839)
            at com.android.internal.os.ZygoteInit.main(ZygoteInit.java:597)
            at dalvik.system.NativeStart.main(Native Method)
        12-09 16:44:02.935: E/InputDispatcher(121): channel '4056b9c8 com.nirosadvice.converter/com.nirosadvice.converter.MainActivity (server)' ~ Consumer closed input channel or an error occurred. events=0x8
        12-09 16:44:02.935: E/InputDispatcher(121): channel '4056b9c8 com.nirosadvice.converter/com.nirosadvice.converter.MainActivity (server)' ~ Channel is unrecoverably broken and will be disposed!

    Third try, after changing the Frame to Linear (again the identical ClassCastException trace, at 17:55:44):

        12-09 17:42:39.076: E/AnalyticsSDKTest.cpp(6338): Time w/ UTC Offset: 2012-12-09 22:42:39-05:00
        12-09 17:55:44.141: E/ActivityManager(121): Fix ANR:broadcast when App died
        12-09 17:55:44.722: E/AndroidRuntime(19469): FATAL EXCEPTION: main
        java.lang.ClassCastException: android.view.ViewGroup$LayoutParams
            at android.widget.LinearLayout.measureVertical(LinearLayout.java:360)
            ... (same trace as above)
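
    The two crashes point at the same rule: the LayoutParams class must match the parent of the view being resized. The first crash ("You must supply a layout_width attribute") happens because the inner layout_add LinearLayout declares neither layout_width nor layout_height in the XML; the later ClassCastException happens because layout_add sits inside the LinearLayout main_lay, so it needs LinearLayout.LayoutParams, and the FrameLayout.LayoutParams being passed is what LinearLayout.measureVertical fails to cast. A minimal sketch of the fix, assuming the question's layout (the class, resizeLayoutAdd and dpToPx names are illustrative):

        import android.widget.LinearLayout;

        public class MainActivityFixSketch extends android.app.Activity {

            // Call after setContentView(...). R.id.layout_add comes from the
            // question's XML; its parent is the LinearLayout main_lay, so it
            // takes LinearLayout.LayoutParams, not FrameLayout.LayoutParams.
            private void resizeLayoutAdd() {
                LinearLayout layoutAdd = (LinearLayout) findViewById(R.id.layout_add);
                LinearLayout.LayoutParams params = new LinearLayout.LayoutParams(
                        LinearLayout.LayoutParams.MATCH_PARENT,   // width
                        dpToPx(50));                              // height: 50dp in pixels
                layoutAdd.setLayoutParams(params);
            }

            // setLayoutParams takes raw pixels, so convert dp to keep "50"
            // meaning the same thing it does in the XML.
            private int dpToPx(int dp) {
                return Math.round(dp * getResources().getDisplayMetrics().density);
            }
        }

    An often simpler alternative that sidesteps the class mismatch entirely: mutate the params the view already has, e.g. layoutAdd.getLayoutParams().height = dpToPx(50); followed by layoutAdd.requestLayout();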


  • Mobile HCM: It’s not the future, it is right now

    - by Natalia Rachelson
    A guest post by Steve Boese, Director Product Strategy, Oracle

    I'll bet you reached for your iPhone or Android or BlackBerry and took a quick look at email or Facebook or last night's text messages before you even got out of bed this morning. Come on, admit it, it's ok, you are among friends here. See, feel better now? But seriously, the incredible growth and near-ubiquity of increasingly powerful, capable mobile devices - essential, for many of us, to daily life - has profoundly changed the way we communicate, consume information, socialize, and, more and more, conduct business and get our work done. And if you doubt that profound change has happened, just think for a moment about the last time you misplaced your iPhone. The shivers, the cold sweats, the panic... We have all been there. And indeed your personal experience with mobile technology echoes throughout the world; here are a few data points to consider:

    - Market research firm IDC estimates 1.8 billion mobile phones will be shipped in 2012.
    - A recent Pew study reports 46% of Americans own a smartphone of some kind.
    - And finally, in the USA, ownership of tablets like the iPad has doubled from 10% to 19% in the last year.

    So truly for the Human Resources leader, the question is no longer, 'Should HR explore ways to exploit mobile devices and their always-on nature to better support and empower the modern workforce?', but rather 'How can HR best take advantage of smartphone and tablet capability to provide information, enable transactions, and enhance decision making?' Because even though moving HCM applications to mobile devices seems inherently logical given today's fast-moving and mobile workforces, and promises to deliver incredible value to the organization, HR leaders also have to consider many factors before devising their Mobile HCM strategy and embarking on mobile HR technology projects. Here are just some of the important considerations for HR leaders as you build your strategies and evaluate mobile HCM solutions:

    - Does your organization provide mobile devices to the workforce today, and if so, will the current set of deployed devices have the necessary capability and ecosystems to support your mobile HCM initiatives?
    - Will you allow workers to use or bring their own mobile devices (commonly abbreviated as 'BYOD'), and if so, are your IT and Security organizations in agreement and capable of supporting that strategy?
    - Do you know which workers need access to mobile HCM applications? Often mobile HCM capability flows down in an organization, with executives and other 'road-warrior' types having the most immediate needs, followed by field sales staff, project managers, and even potential job candidates.

    But just as an organization will have to spend time understanding 'who' should have access to mobile HCM technology, the 'what' of the way the solutions should be deployed to these groups will also vary. What works and makes sense for the executive (company-wide dashboards and analytics on an iPad) might not be as relevant for a retail store manager (employee schedules, location-level sales and inventory data, transaction approvals, etc.).

    With Oracle Fusion HCM, we are taking an approach to mobile HR that encompasses not just the mobile solution needs of the various types of worker, but also incorporates the fundamental attributes of great mobile applications: the ability to support end-to-end transactions, apps that respond with lightning-fast speed, functions that are embedded in a worker's daily activities, and features that can be mashed up easily with other business areas like Finance and CRM. Finally, and perhaps most importantly for the Oracle Fusion HCM team, delivering mobile experiences that truly enhance, enable, and empower the mobile workforce, and deliver on the design mantras of the best-in-class consumer applications, continues to shape and drive design decisions.

    Mobile is no longer the future, it is right now, and the cutting-edge HR leader of today will need to consider how mobile fits her HCM technology strategy from here on out. You can learn more about our ideas and plans for Oracle Fusion HCM mobile solutions at https://fusiontap.oracle.com/.


  • Commercial Drupal Modules & Themes

    - by Ravish
    A discussion at the Drupal.org forums prompted me to give my input on the commercial ecosystem around open source content management systems. WordPress and Joomla have been growing rapidly over the past few years, but the growth rate of Drupal seems to be almost flat. Despite being the most powerful CMS around, Drupal is still not being adopted by the masses. Many people will argue that Drupal is not targeted towards the masses, but at developers. I agree: Drupal is more of a development platform than a consumer CMS. Drupal is 'many things to many people', and I can build almost any type of website with it. Drupal is being used for building blogs, corporate websites, intranet portals, social networking sites and even a project management system. Looking at the wide array of Drupal implementations, it deserves to be the most widely adopted CMS. I believe there are a few challenges the Drupal community needs to overcome. To understand these challenges, I surveyed some webmasters who use Joomla or WordPress but not Drupal. I asked them why they don't want to use Drupal; following are the responses I got:

    - Drupal is too complicated, takes time to learn.
    - Drupal is great, but its admin panel is overwhelming.
    - I couldn't find any nice themes for Drupal.
    - There is no WYSIWYG editor in Drupal.
    - Most Drupal modules do not work out of the box.
    - There aren't enough modules like Ubercart which provide out-of-the-box functionality.
    - I tried modules like CCK, Views and Panels. After wasting several hours struggling with them, I decided to give up on Drupal.
    - I don't use Drupal because of the Pushbutton and Garland themes. I had a hard time trying to customize Garland and it messed up the whole layout.
    - There are no premium modules and themes for Drupal. Joomla has tons of awesome themes and modules.
    - I don't want a million hacks like CCK, Views, Tokens, Pathauto, ImageCache and CTools just to run a simple website.

    Most of the complaints are related to the learning and development curve involved with Drupal and the lack of an ecosystem. While most of these problems will be gone in Drupal 7, the ecosystem is something that needs to be built by the Drupal community. Drupal distributions are a great step forward. There are a few awesome Drupal distributions available, like Open Publish, Open Atrium and Drupal Commons. I predict there will be a wave of many powerful Drupal distributions after the Drupal 7 release. Many of them will be user-friendly and commercially supported. Following is my post at the Drupal.org forums:

    Quote from: http://drupal.org/node/863776#comment-3313836

    Brian Gardner (StudioPress) and Woo Themes launched premium WordPress themes in 2007; the developer community did not accept it at first. Moreover, the themes were not even GPL licensed, and there was an outcry in the WordPress community against them. Following that, most premium theme providers switched to GPL licensing. Despite the controversies, users voted for premium themes and plugins by buying them. Inspired by their success, hundreds of other developers started to sell premium themes and plugins. It is now the accepted, and in fact most popular, business model in the WordPress community. Matt Mullenweg once told me they would not support premium themes. If he supported them, developers would no longer give out free GPL themes and plugins. He pointed me towards Joomla, where there were hardly any nice free themes and modules available.

    Now, two years on, premium products are not just accepted but embraced by the WordPress community - http://wordpress.org/extend/themes/commercial/ - and the quality and number of themes and modules has increased, even among the free ones. This has also helped boost the adoption and ecosystem of WordPress. Today, the state of Drupal is like WordPress was in 2007. There are hardly any out-of-the-box solutions available for Drupal; Ubercart, Open Publish and Open Atrium are the only ones I can think of. Many of the popular Drupal modules are patches and hole-fillers. Thankfully, these hole-filler modules are going to be in the Drupal 7 core. Drupal 7 and distributions will spawn a new array of solutions built upon Drupal. Soon we will have more Ubercarts and Open Atriums. If commercial solutions can help fuel this ecosystem and growth, the Drupal community will accept them eventually. This debate will not stop your customers from buying your product. If your product is awesome, they will vote for you by buying it.


  • SQL SERVER – SSMS: Memory Usage By Memory Optimized Objects Report

    - by Pinal Dave
    At conferences and at speaking engagements at the local UG, there is one question that keeps coming up which I wish were never asked. The question is, "Why is SQL Server using up all the memory and not releasing it even when idle?" Well, the answer can be long, and with the release of SQL Server 2014 it got even more complicated. SQL Server 2014 introduces the option of In-Memory OLTP, which is a completely new concept, and our dependency on memory has increased multifold. In reality, nothing much changes, but we additionally have memory-optimized objects (tables and stored procedures) which reside completely in memory and improve performance. As a DBA, it is humanly impossible to get a hang of all the innovations and new features introduced in each new version. So today's blog is about the report added to SSMS which gives a high-level view of this new feature.

    This report is available only from SQL Server 2014 onwards, because that is where the feature was introduced; earlier versions of SQL Server Management Studio do not show the report in the list. If we launch the report on a database which does not have an In-Memory filegroup defined, we see a message in the report. To demonstrate, I have created a fresh database called MemoryOptimizedDB with no special filegroup. Here is the query used to identify whether a database has a memory-optimized filegroup or not:

        SELECT TOP(1) 1
        FROM sys.filegroups FG
        WHERE FG.[type] = 'FX'

    Once we add a filegroup using the command below, we see a different version of the report.

        USE [master]
        GO
        ALTER DATABASE [MemoryOptimizedDB] ADD FILEGROUP [IMO_FG] CONTAINS MEMORY_OPTIMIZED_DATA
        GO

    The report is still empty because we have not defined any memory-optimized table in the database. Total allocated size is shown as 0 MB. Now, let's add the folder location into the filegroup and also create a few in-memory tables. We have used the nomenclature IMO to denote "In-Memory Optimized" objects.

        USE [master]
        GO
        ALTER DATABASE [MemoryOptimizedDB]
        ADD FILE (
            NAME = N'MemoryOptimizedDB_IMO',
            FILENAME = N'E:\Program Files\Microsoft SQL Server\MSSQL12.SQL2014\MSSQL\DATA\MemoryOptimizedDB_IMO')
        TO FILEGROUP [IMO_FG]
        GO

    You may have to change the path based on your SQL Server configuration. Below is the script to create the table.

        USE MemoryOptimizedDB
        GO
        -- Drop the table if it already exists.
        IF OBJECT_ID('dbo.SQLAuthority','U') IS NOT NULL
            DROP TABLE dbo.SQLAuthority
        GO
        CREATE TABLE dbo.SQLAuthority
        (
            ID INT IDENTITY NOT NULL,
            Name CHAR(500) COLLATE Latin1_General_100_BIN2 NOT NULL DEFAULT 'Pinal',
            CONSTRAINT PK_SQLAuthority_ID PRIMARY KEY NONCLUSTERED (ID),
            INDEX hash_index_sample_memoryoptimizedtable_c2 HASH (Name) WITH (BUCKET_COUNT = 131072)
        ) WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA)
        GO

    As soon as the above script is executed, both the table and the index are created. If we run the report again, we see something like below. Notice that table memory is zero but the index is using memory. This is because a hash index needs memory to manage the buckets it creates, so even if the table is empty, the index consumes memory. More about the internals of how in-memory indexes and tables work will be reserved for future posts. Now, use the script below to populate the table with 10,000 rows:

        INSERT INTO SQLAuthority VALUES (DEFAULT)
        GO 10000

    Here is the same report after inserting the 10,000 rows. There are three sections in the report:

    1. Total memory consumed by in-memory objects.
    2. A pie chart showing memory distribution based on the type of consumer: table, index and system.
    3. Details of memory usage by each table.

    The information for all three is taken from a single DMV, sys.dm_db_xtp_table_memory_stats. This DMV contains memory usage statistics for both user and system in-memory tables. If we query the DMV and look at the data, we can easily notice that the system tables have negative object IDs. So, to look at user table memory usage, below is an over-simplified version of the query:

        USE MemoryOptimizedDB
        GO
        SELECT OBJECT_NAME(OBJECT_ID), *
        FROM sys.dm_db_xtp_table_memory_stats
        WHERE OBJECT_ID > 0
        GO

    This report helps the DBA identify which in-memory objects are taking a lot of memory, which can be used as a pointer when designing a solution. I am sure in the future we will discuss the whole concept of in-memory tables at length on this blog. To read more about In-Memory OLTP, have a look at the In-Memory OLTP series at Balmukund's blog.

    Reference: Pinal Dave (http://blog.sqlauthority.com)

    Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Server Management Studio, SQL Tips and Tricks, T SQL
    Tagged: SQL Memory, SQL Reports


  • Lessons from a SAN Failure

    - by Bill Graziano
    At 1:10AM Sunday morning the main SAN at one of my clients suffered a “partial” failure.  Partial means that the SAN was still online and functioning but the LUNs attached to our two main SQL Servers “failed”.  Failed means that SQL Server wouldn’t start and the MDF and LDF files mostly showed a zero file size.  But they were online and responding and most other LUNs were available.  I’m not sure how SANs know to fail at 1AM on a Saturday night but they seem to.  From a personal standpoint this worked out poorly: I was out with friends and after more than a few drinks.  From a work standpoint this was about the best time to fail you could imagine.  Everything was running well before Monday morning.  But it was a long, long Sunday.  I started tipsy, got tired and ended up hung over later in the day.

    Note to self: Try not to go out drinking right before the SAN fails.

    This caught us at an interesting time.  We’re in the process of migrating to an entirely new set of servers so some things were partially moved.  This made it difficult to follow our procedures as cleanly as we’d like.  The benefit was that we had much better documentation of everything on the server.  I would encourage everyone to really think through the process of implementing your DR plan and document as much as possible.  Following a checklist is much easier than trying to remember at night under pressure in a hurry after a few drinks.

    I had a series of estimates on how long things would take.  They were accurate for any single server failure.  They weren’t accurate for a SAN failure that took two servers down.  This wasn’t bad but we should have communicated better.

    Don’t forget how many things are outside the database.  Logins, linked servers, DTS packages (yikes!), jobs, service broker, DTC (especially DTC), database triggers and any objects in the master database are all things you need backed up.  We’d done a decent job on this and didn’t find significant problems here.  That said this still took a lot of time.  There were many annoyances as a result of this.  Small settings like a login’s default database had a big impact on whether an application could run.  This is probably the single biggest area of concern when looking to recreate a server.  I’d encourage everyone to go through every single node of SSMS and look for user created objects or settings outside the database.

    Script out your logins with the proper SID and already encrypted passwords and keep it updated.  This makes life so much easier.  I used an approach based on KB246133 that worked well.  I’ll get my scripts posted over the next few days.

    The disaster can cause your DR process to fail in unexpected ways.  We have a job that scripts out all logins and role memberships and writes it to a file.  This runs on the DR server and pulls from the production server.  Upon opening the file I found that the contents were a “server not found” error.  Fortunately we had other copies and didn’t need to try and restore the master database.  This now runs on the production server and pushes the script to the DR site.  Soon we’ll get it pushed to our version control software.

    One of the biggest challenges is keeping your DR resources up to date.  Any server change (new linked server, new SQL Server Agent job, etc.) means that your DR plan (and scripts) is out of date.  It helps to automate the generation of these resources if possible.

    Take time now to test your database restore process.  We test ours quarterly.
    If you have a large database I’d also encourage you to invest in a compressed backup solution.  Restoring backups was the single largest consumer of time during our recovery. And yes, there’s a database mirroring solution planned in our new architecture.

    I didn’t have much involvement in things outside SQL Server but this caused many, many things to change in our environment.  Many applications today aren’t just executables or web sites.  They are a combination of those plus network infrastructure, reports, network ports, IP addresses, DTS and SSIS packages, batch systems and many other things.  These all needed a little bit of attention to make sure they were functioning properly.

    Profiler turned out to be a handy tool.  I started a trace for failed logins and kept that running.  That let me fix a number of problems before people were able to report them.  I also ran traces to capture exceptions.  This helped identify problems with linked servers.

    Overall the thing that gave me the most problems was linked servers.  In order for a linked server to function properly you need to be pointed to the right server, have the proper login information, have the network routes available and have MSDTC configured properly.  We have a lot of linked servers and this created many failure points.  Some of the older linked servers used IP addresses and not DNS names.  This meant we had to go in and touch all those linked servers when the servers moved.


  • Guest Post: Christian Finn: Is Facebook About to Become a Victim of its Own Success?

    - by Michael Snow
    Since we have a number of new members of the WebCenter Evangelist team, I thought it would be appropriate to close the week with the newest hire and leader of the global WebCenter Evangelists, Christian Finn, who has just joined the Red team after many years with the small technology company up in Redmond, WA. He gave an intro to himself in an earlier post this morning, but his post below is a great example of how customer engagement takes on a life of its own in our global, online, connected and social digital ecosystem.

    Is Facebook About to Become a Victim of its Own Success?

    What if I told you that your brand could advertise so successfully, you wouldn't have to pay for the ads? A recent campaign by Ford Motor Company for the Ford Focus featuring Doug the spokespuppet (I am not making this up) did just that, and it raises some interesting issues for marketers and social media alike in the brave new world of customer engagement that is the Social Web. Allow me to elaborate.

    An article in the Wall Street Journal last week, "Big Brands Like Facebook, But They Don't Like to Pay", tells the story of Ford's recently concluded online campaign for the 2012 Ford Focus. (Ford, by the way, under the leadership of people such as Scott Monty, has been a pioneer of effective social campaigns.) The centerpiece of the campaign was the aforementioned Doug, who appeared as a character on Facebook in videos and via chat. (If you are not familiar with Doug, you can see him in action here, and read the WSJ story here.) You may be thinking puppet ads are a sign of Internet Bubble 2.0 and want to stop now, but bear with me.

    The Journal reported that Ford spent about $95M on its overall Ford Focus campaign, with TV accounting for over $60M of that spend. The Internet buy for the campaign was just over $10M, which included ad buys to drive traffic to Facebook for people to meet and 'Like' Doug, plus some amount on Facebook ads to promote Doug and, by extension, the Ford Focus. So far, a fairly straightforward consumer marketing story in the Internet era. Yet here's the curious thing: once Doug reached 10,000 fans on Facebook, Ford stopped paying for Facebook ads. Doug had gone viral, with people sharing his videos with one another; once critical mass was reached there was no need to buy more ads on Facebook. Doug went on to be Liked by over 43,000 people, and 61% of his fans said they would be more likely to consider buying a Focus. According to the article, Ford says Focus sales are up this year, and increasing sales is every marketer's goal. And so, in effect, Ford found its Facebook campaign so successful that it could stop paying for it, instead letting its target consumers communicate its messages for fun, and for free. Not only did they get a 3X increase in fans beyond their paid campaign, they had thousands of customers sharing their messages in video form for months.
Since free advertising is the Holy Grail of marketing both old and new—and it appears social networks have an advantage in generating that buzz—it seems reasonable to ask: what would happen to brands’ advertising strategies, and the media they use to engage customers, if this success were repeated at scale? It seems logical to conclude that, at least initially, more ad dollars would be spent with social networks like Facebook as brands attempt to replicate Ford’s success. Certainly Facebook ad revenues are on the rise—eMarketer expects Facebook’s ad revenues to quintuple by 2012 compared with 2009 levels, to nearly $2.9B. That’s bad news for TV and the already battered print media, and good news for Facebook. But perhaps not so over the longer run. With TV buys, you have to keep paying to generate impressions. If Doug the spokespuppet is any guide, however, that may not be true for social media campaigns. After an initial outlay, if a social campaign takes off, the audience will generate more impressions on its own. Thus a social medium like Facebook could be the victim of its own success when it comes to ad revenue. It may be that there is an inherent limiting factor in the ad spend they can capture, as exemplified by Ford’s experience with Doug and the Focus. And brands may spend much less overall on advertising, with as good or better results, than they ever have in the past. How will these trends evolve? Can brands create social campaigns that repeat Ford’s formula for the Focus with effective results? Can social networks find ways to capture more spend and overcome their potential tendency to make further spend unnecessary? And will consumers become tired of and insulated from social campaigns, much as they have with traditional advertising channels? These are the questions CMOs and Facebook execs alike will be asking themselves in the brave new world of customer engagement. As always, your thoughts and comments are most welcome.

    Read the article

  • Why Software Sucks...and What You Can Do About It – book review

    - by DigiMortal
        How do our users see the products we are writing for them, and how happy are they with our work? Are they able to get their work done without fighting with cool features and crashes, or are they just switching off the resistant part of their brain to survive our software? Yeah, the overall picture of the software usability landscape is not very nice. Okay, it is not even close to nice. But, fortunately, Why Software Sucks...and What You Can Do About It by David S. Platt explains everything. Why Software Sucks… is a book for software users, but I consider it a must-read for developers too, and especially for their managers, whose politics often kill usability topics as soon as they appear. For managers, usability is a soft topic that can be manipulated to whatever best suits the current state of the project. Although developers are not UI designers or usability experts, they are still very often forced to deal with these topics, and this is how usability problems start (of course, designers are also able to produce designs that are stupid and too hard for users to use, but this blog here is about development). I found this book to be a very interesting and funny read. It is not a humor book, but it explains everything in a way that you will remember very well later. It took me about three evenings to go through this book, and I am still enjoying what I found and how the author explains our weird young field to end users. I suggest this book to all developers – while you are demanding that your management hire or outsource a usability expert, you will at least be causing less pain to end users. So, go and buy this book, just like I did. And… then say thanks to Mr. Platt :) There is one more book I suggest you read if you are interested in usability – Don't Make Me Think: A Common Sense Approach to Web Usability, 2nd Edition by Steve Krug. Editorial review from Amazon: Today’s software sucks. There’s no other good way to say it. It’s unsafe, allowing criminal programs to creep through the Internet wires into our very bedrooms. It’s unreliable, crashing when we need it most, wiping out hours or days of work with no way to get it back. And it’s hard to use, requiring large amounts of head-banging to figure out the simplest operations. It’s no secret that software sucks. You know that from personal experience, whether you use computers for work or personal tasks. In this book, programming insider David Platt explains why that’s the case and, more importantly, why it doesn’t have to be that way. And he explains it in plain, jargon-free English that’s a joy to read, using real-world examples with which you’re already familiar. In the end, he suggests what you, as a typical user, without a technical background, can do about this sad state of our software—how you, as an informed consumer, don’t have to take the abuse that bad software dishes out. As you might expect from the book’s title, Dave’s exposé is laced with humor—sometimes outrageous, but always dead on. You’ll laugh out loud as you recall incidents with your own software that made you cry. You’ll slap your thigh with the same hand that so often pounded your computer desk and wished it was a bad programmer’s face. But Dave hasn’t written this book just for laughs. He’s written it to give long-overdue voice to your own discovery—that software does, indeed, suck, but it shouldn’t. Table of contents:
Acknowledgments
Introduction
Chapter 1: Who’re You Calling a Dummy?
    Where We Came From; Why It Still Sucks Today; Control versus Ease of Use; I Don’t Care How Your Program Works; A Bad Feature and a Good One; Stopping the Proceedings with Idiocy; Testing on Live Animals; Where We Are and What You Can Do
Chapter 2: Tangled in the Web
    Where We Came From; How It Works; Why It Still Sucks Today; Client-Centered Design versus Server-Centered Design; Where’s My Eye Opener?; It’s Obvious—Not!; Splash, Flash, and Animation; Testing on Live Animals; What You Can Do about It
Chapter 3: Keep Me Safe
    The Way It Was; Why It Sucks Today; What Programmers Need to Know, but Don’t; A Human Operation; Budgeting for Hassles; Users Are Lazy; Social Engineering; Last Word on Security; What You Can Do
Chapter 4: Who the Heck Are You?
    Where We Came From; Why It Still Sucks Today; Incompatible Requirements; OK, So Now What?
Chapter 5: Who’re You Looking At?
    Yes, They Know You; Why It Sucks More Than Ever Today; Users Don’t Know Where the Risks Are; What They Know First; Milk You with Cookies?; Privacy Policy Nonsense; Covering Your Tracks; The Google Conundrum; Solution
Chapter 6: Ten Thousand Geeks, Crazed on Jolt Cola
    See Them in Their Native Habitat; All These Geeks; Who Speaks, and When, and about What; Selling It; The Next Generation of Geeks—Passing It On
Chapter 7: Who Are These Crazy Bastards Anyway?
    Homo Logicus; Testosterone Poisoning; Control and Contentment; Making Models; Geeks and Jocks; Jargon; Brains and Constraints; Seven Habits of Geeks
Chapter 8: Microsoft: Can’t Live With ’Em and Can’t Live Without ’Em
    They Run the World; Me and Them; Where We Came From; Why It Sucks Today; Damned if You Do, Damned if You Don’t; We Love to Hate Them; Plus ça Change; Growing-Up Pains; What You Can Do about It; The Last Word
Chapter 9: Doing Something About It
    1. Buy; 2. Tell; 3. Ridicule; 4. Trust; 5. Organize
Epilogue
About the Author

    Read the article

  • Big Data – Role of Cloud Computing in Big Data – Day 11 of 21

    - by Pinal Dave
    In yesterday’s blog post we learned the importance of NewSQL. In this article we will understand the role of the Cloud in the Big Data story. What is Cloud? Cloud has been the biggest buzzword around for the last few years. Everyone knows about the Cloud, and it is extremely well defined online. In this article we will discuss the cloud in the context of Big Data. Cloud computing is a method of providing shared computing resources to applications which require dynamic resources. These resources include applications, computing, storage, networking, development and various deployment platforms. The fundamental idea of cloud computing is that it shares pretty much all the resources and delivers them to end users as a service. Two good examples of Cloud Computing and Big Data working together are Google and Amazon.com. Both have fantastic Big Data offerings with the help of the cloud. We will discuss this later in this blog post. There are two different Cloud Deployment Models: 1) the Public Cloud and 2) the Private Cloud. Public Cloud: the cloud infrastructure built by commercial providers (Amazon, Rackspace, etc.) who create highly scalable data centers that hide the complex infrastructure from the consumer and provide various services. Private Cloud: the cloud infrastructure built by a single organization, which manages a highly scalable data center internally. Here is a quick comparison between the Public Cloud and the Private Cloud, from Wikipedia:

                      Public Cloud                        Private Cloud
    Initial cost      Typically zero                      Typically high
    Running cost      Unpredictable                       Unpredictable
    Customization     Impossible                          Possible
    Privacy           No (host has access to the data)    Yes
    Single sign-on    Impossible                          Possible
    Scaling up        Easy while within defined limits    Laborious but no limits

Hybrid Cloud: the cloud infrastructure built from the composition of two or more clouds, such as public and private. A hybrid cloud gives the best of both worlds, as it combines multiple cloud deployment models. Cloud and Big Data – Common Characteristics: There are many characteristics of Cloud Architecture and Cloud Computing which are also essentially important for Big Data. They overlap heavily, and in many places it just makes sense to use the power of both architectures and build a highly scalable framework. Here is the list of the characteristics of cloud computing important in Big Data: Scalability; Elasticity; Ad-hoc Resource Pooling; Low Cost to Setup Infrastructure; Pay on Use or Pay as you Go; Highly Available. Leading Big Data Cloud Providers: There are many players in the Big Data Cloud, but we will list a few of the known players here. Amazon: Amazon is arguably the most popular Infrastructure as a Service (IaaS) provider. The history of how Amazon started in this business is very interesting. They started out with a massive infrastructure to support their own business. Gradually they figured out that their own resources were underutilized most of the time. They decided to get the maximum out of the resources they had, and hence they launched their Amazon Elastic Compute Cloud (Amazon EC2) service in 2006. Their products have evolved a lot recently, and cloud services are now one of their primary businesses besides their retail selling. Amazon also offers Big Data services under Amazon Web Services. 
Here is the list of the included services: Amazon Elastic MapReduce – processes very high volumes of data; Amazon DynamoDB – a fully managed NoSQL (Not Only SQL) database service; Amazon Simple Storage Service (S3) – a web-scale service designed to store and accommodate any amount of data; Amazon High Performance Computing – provides low-latency, tuned high performance computing clusters; Amazon Redshift – a petabyte-scale data warehousing service. Google: Though Google is known for its Search Engine, we all know that it is much more than that. Google Compute Engine – offers secure, flexible computing from energy-efficient data centers; Google BigQuery – allows SQL-like queries to run against large datasets; Google Prediction API – a cloud-based machine learning tool. Other Players: Besides Amazon and Google we also have other players in the Big Data market. Microsoft is also taking on Big Data in the cloud with Microsoft Azure. Additionally, Rackspace and NASA together have initiated OpenStack. The goal of OpenStack is to provide a massively scaled, multitenant cloud that can run on any hardware. Things to Watch: Cloud-based solutions integrate well with the Big Data story, and they are very economical to implement. However, there are a few things one should be very careful about when deploying Big Data on cloud solutions. Here is a list of a few things to watch: Data Integrity; Initial Cost; Recurring Cost; Performance; Data Access Security; Location; Compliance. Every company has a different approach to Big Data, and different rules and regulations. Based on various factors, one can implement their own custom Big Data solution on a cloud. Tomorrow: In tomorrow’s blog post we will discuss the various Operational Databases supporting Big Data. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Big Data, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL
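To get a feel for how approachable these cloud services are, consider Amazon S3 from the list above. The following is a minimal sketch (not from the original post) that stores an object and reads it back using the boto3 Python SDK; the bucket name and key are hypothetical placeholders, and AWS credentials are assumed to be configured in the environment.

    import boto3  # the AWS SDK for Python (pip install boto3)

    s3 = boto3.client("s3")

    # Store a small object; S3 is designed to accommodate any amount of data,
    # from a few bytes to petabytes spread across many objects.
    s3.put_object(Bucket="my-bigdata-bucket",          # hypothetical bucket name
                  Key="logs/day1/events.txt",
                  Body=b"sample event data")

    # Read the object back.
    response = s3.get_object(Bucket="my-bigdata-bucket",
                             Key="logs/day1/events.txt")
    print(response["Body"].read())  # b'sample event data'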

    Read the article

  • Smarter, Faster, Cheaper: The Insurance Industry’s Dream

    - by Jenna Danko
    On June 3rd, I saw the Gaylord Resort Centre in Washington D.C. become the hub of C-level executives and managers of insurance carriers for the IASA 2013 Conference.  Insurance Accounting/Regulation and Technology sessions took the focus, but there were plenty of tertiary sessions for career development, which complemented the overall strong networking side of the conference.  As an exhibitor, Oracle, along with several hundred other product providers, welcomed the opportunity to display and demonstrate our solutions, and we were encouraged by the hustle and bustle of the exhibition floor.  The IASA organizers had pre-arranged fast track tours whereby interested conference delegates could sign up for a series of like-themed presentations from vendors, giving them a level of 'Speed Dating' introductions to possible solutions and services.  Oracle participated in a number of these, which were very well subscribed.  Clearly, the conference had a strong business focus; however, attendees saw technology as a key enabler to get their processes done smarter, faster and cheaper.  As we navigated the exhibition, it became clear from the inquiries that came to us that insurance carriers are gravitating to a number of focus areas: Navigating the maze of upcoming regulatory reporting changes. For US carriers with European holdings, Solvency II carries a myriad of rules and reporting requirements, and alignment across the globe of the Own Risk and Solvency Assessment (ORSA) processes brings to the fore the National Association of Insurance Commissioners' (NAIC) recent guidance manual publication. Doing more with less, and certainly expecting more from technology for fewer dollars. The overall cost of IT, in particular hardware, has dropped in real terms (though the appetite for more has risen: more CPU, more RAM, more storage), but software has seen less change. Clearly, customers expect either to pay less or get a lot more from their software solutions for the same buck. Doing things smarter – a recognition that, given the pace of technological advance, to stand still is effectively to go backwards. Technology, and in particular technology's interaction with human business processes, has undergone incredible change over the past 5 years. Consumer usage (iPhones, etc.) has been at the forefront, but now, at the Enterprise level, ever more effective technology exploitation is beginning to take place. Data, and in particular the knowledge gleaned from data, is refining and improving business processes.  Organizations are now consuming more data than ever before, and it is set to grow exponentially for some time to come.  Amassing large volumes of data is one thing, but effectively analyzing that data is another.  It is the results of such analysis that lead to improvements, both in terms of insurance product offerings and the processes to support them. Regulatory compliance, damned if you do and damned if you don't! Clearly, around the globe a lot is changing from a regulatory perspective, and it is evident that in terms of regulatory requirements, whilst there is greater convergence across jurisdictions bringing uniformity, there is also a lot of work to be done in the next 5 years. Just like big data, behind effective regulatory compliance there often lie golden nuggets that can give competitive advantages. 
From Oracle's perspective, our Rating Engine, Billing, Document Management and Insurance Analytics solutions on display served to strike up good conversations and, as is always the case at conferences, it was a great opportunity to meet and speak with existing Oracle customers that we might not otherwise have caught up with for a while. Fortunately, I was able to catch a few sessions at the close of the exhibition.  The speaker quality was high, and the audience asked challenging, but pertinent, questions.  During Dr. Jackie Freiberg’s keynote “Bye Bye Business as Usual,” the author discussed 8 strategies to help leaders create a culture where teams consistently deliver innovative ideas by disrupting the status quo.  The very first strategy: get wired for innovation.  Freiberg admitted that folks in the insurance and financial services industry understand and know innovation is important, but oftentimes they are slow adopters.  Today, technology and innovation go hand in hand. Speaking to delegates during and after the conference, I could measure a high degree of satisfaction from their positive comments on the speaker sessions and the exhibitors. I suspect many will be back in 2014, with Indianapolis as the conference location. Did you attend the IASA Conference in Washington D.C.?  If so, I would love to hear your comments. Andrew Collins is the Director, Solvency II of Oracle Financial Services. He can be reached at andrew.collins AT oracle.com.

    Read the article

  • Weekend reading: Microsoft/Oracle and SkyDrive based code-editor

    - by jamiet
    A couple of news items caught my eye this weekend that I think are worthy of comment. Microsoft/Oracle partnership to be announced tomorrow (24/06/2013) According to many news sites, Microsoft and Oracle are about to announce a partnership (Oracle set for major Microsoft, Salesforce, Netsuite partnerships), and they all seem to be assuming that it will be something to do with “the cloud”. I wouldn’t disagree with that assessment; Microsoft are heavily pushing Azure and Oracle seem (to me anyway) to be rather lagging behind in the cloud game. More specifically, folks seem to be assuming that Oracle’s forthcoming 12c database release will be offered on Azure. I did a bit of reading about Oracle 12c, and one of its key pillars appears to be that it supports multi-tenant topologies; multi-tenancy is a common usage scenario for databases in the cloud. I’m left wondering then, if Microsoft are willing to push a rival’s multi-tenant solution, what is happening to their own cloud-based multi-tenant offering – SQL Azure Federations? We haven’t heard anything about Federations for what now seems to be a long time, and moreover the main Program Manager behind the technology, Cihan Biyikoglu, recently left Microsoft to join Twitter. Furthermore, a Principal Architect for SQL Server, Conor Cunningham, recently presented the opening keynote at SQLBits 11 where he talked about multi-tenant solutions on SQL Azure, and not once did he mention Federations. All in all I don’t have a warm fuzzy feeling about the future of SQL Azure Federations, so I hope that that question gets asked at some point following the Microsoft/Oracle announcement. Text editor on SkyDrive with coding-specific features Liveside.net got a bit of a scoop this weekend with the news (Exclusive: SkyDrive.com to get web-based text file editing features) that Microsoft’s consumer-facing file storage service is going to get a new feature – a web-based code editor (Liveside’s post includes a screenshot). I’ve long had a passing interest in online code editors; indeed, back in December 2009 I wondered out loud on this blog site: “I started to wonder when the development tools that we use would also become cloud-based. After all, if we’re using cloud-based services does it not make sense to have cloud-based tools that work with them? I think it does.” (Project Houston) Since then the world has moved on. Cloud 9 IDE (https://c9.io/) have blazed a trail in the fledgling world of online code editors, and I have been wondering when Microsoft were going to start playing catch-up. I had no doubt that an online code editor was in Microsoft’s future; it’s an obvious future direction. Why would I want to have to download and install a bloated text editor (which, arguably, is exactly what Visual Studio amounts to) and have to continually update it, when I can simply open a web browser and have ready access to all of my code from wherever I am? There are signs that Microsoft is already making moves in this direction; after all, the URL for their new offering Team Foundation Service doesn’t mention TFS at all – my own personalised URL for Team Foundation Service is http://jamiet.visualstudio.com – and using “Visual Studio” as the domain name for a service that isn’t strictly speaking part of Visual Studio leads me to think that there’s a much bigger play here and that one day http://visualstudio.com will house an online code editor. With that in mind, I find Liveside’s revelation rather intriguing: why would a code editing tool show up in SkyDrive? 
Perhaps SkyDrive is going to get integrated more tightly into TFS; I’m very interested to see where this goes. The larger question playing on my mind, though, is whether an online code editor from Microsoft will support SQL Server developers. I have opined before (see The SQL developer gap) about the shoddy treatment that SQL Server developers have to endure from Microsoft, and I haven’t seen any change in Microsoft’s attitude in the three and a half years since I wrote that post. I’m constantly bewildered by the lack of investment in SQL Server developer productivity compared to the riches that are lavished upon our appdev brethren. When you consider that SQL Server is Microsoft’s third biggest revenue stream it is, frankly, rather insulting. SSDT was a step in the right direction, but the hushed noises I hear coming out of Microsoft of late in regard to SSDT don’t bode fantastically well for its future. So, will an online code editor from Microsoft support T-SQL development? I have to assume not, given the paucity of investment in us lowly SQL Server developers over the last few years, but I live in hope! Your thoughts in the comments section please. I would be very interested in reading them. @Jamiet

    Read the article

  • HTG Explains: Are You Using IPv6 Yet? Should You Even Care?

    - by Chris Hoffman
    IPv6 is extremely important for the long-term health of the Internet. But is your Internet service provider providing IPv6 connectivity yet? Does your home network support it? Should you even care if you’re using IPv6 yet? Switching from IPv4 to IPv6 will give the Internet a much larger pool of IP addresses. It should also allow every device to have its own public IP address, rather than be hidden behind a NAT router. IPv6 is Important Long-Term IPv6 is very important for the long-term health of the Internet. There are only about 3.7 billion public IPv4 addresses. This may sound like a lot, but it isn’t even one IP address for each person on the planet. Considering people have more and more Internet-connected devices — everything from light bulbs to thermostats is starting to become network-connected — the lack of IP addresses is already proving to be a serious problem. This may not affect those of us in well-off developed countries just yet, but developing countries are already running out of IPv4 addresses. So, if you work at an Internet service provider, manage Internet-connected servers, or develop software or hardware — yes, you should care about IPv6! You should be deploying it and ensuring your software and hardware works properly with it. It’s important to prepare for the future before the current IPv4 situation becomes completely unworkable. But, if you’re just a typical user, or even a typical geek with a home Internet connection and a home network, should you really care about your home network just yet? Probably not. What You Need to Use IPv6 To use IPv6, you’ll need three things: An IPv6-Compatible Operating System: Your operating system’s software must be capable of using IPv6. All modern desktop operating systems should be compatible — Windows Vista and newer versions of Windows, as well as modern versions of Mac OS X and Linux. Windows XP doesn’t have IPv6 support installed by default, but you shouldn’t be using Windows XP anymore, anyway. A Router With IPv6 Support: Many — maybe even most — consumer routers in the wild don’t support IPv6. Check your router’s specifications to see if it supports IPv6, if you’re curious. If you’re going to buy a new router, you’ll probably want to get one with IPv6 support to future-proof yourself. If you don’t have an IPv6-enabled router yet, you don’t need to buy a new one just to get it. An ISP With IPv6 Enabled: Your Internet service provider must also have IPv6 set up on their end. Even if you have modern software and hardware on your end, your ISP has to provide an IPv6 connection for you to use it. IPv6 is rolling out steadily, but slowly — there’s a good chance your ISP hasn’t enabled it for you yet. How to Tell If You’re Using IPv6 The easiest way to tell if you have IPv6 connectivity is to visit a website like testmyipv6.com. This website allows you to connect to it in different ways — click the links near the top to see if you can connect to the website via different types of connections. If you can’t connect via IPv6, it’s either because your operating system is too old (unlikely), your router doesn’t support IPv6 (very possible), or because your ISP hasn’t enabled it for you yet (very likely). Now What? If you can connect to the test website above via IPv6, congratulations! Everything is working as it should. Your ISP is doing a good job of rolling out IPv6 rather than dragging its feet. There’s a good chance you won’t have IPv6 working properly, however. 
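If you prefer a programmatic check, the same test can be approximated in a few lines. Here is a minimal sketch using only the Python standard library (the test host is just an example): it asks for an IPv6 address for a host and tries to open a connection to it. A failure usually means your OS, router, or ISP has no IPv6 path, not that the site is down.

    import socket

    def has_ipv6_connectivity(host="example.com", port=80, timeout=5):
        """Try to reach a host over IPv6; True only if the whole path supports it."""
        try:
            # Ask only for IPv6 (AAAA) results, then attempt a TCP connection.
            infos = socket.getaddrinfo(host, port, socket.AF_INET6, socket.SOCK_STREAM)
            family, socktype, proto, _canonname, sockaddr = infos[0]
            with socket.socket(family, socktype, proto) as s:
                s.settimeout(timeout)
                s.connect(sockaddr)
            return True
        except OSError:
            return False

    print("IPv6 is working!" if has_ipv6_connectivity() else "No IPv6 path found.")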
So what should you do about this — should you head to Amazon and buy a new IPv6-enabled router or switch to an ISP that offers IPv6? Should you use a “tunnel broker,” as the test site recommends, to tunnel into IPv6 via your IPv4 connection? Well, probably not. Typical users shouldn’t have to worry about this yet. Connecting to the Internet via IPv6 shouldn’t be perceptibly faster, for example. It’s important for operating system vendors, hardware companies, and Internet service providers to prepare for the future and get IPv6 working, but you don’t need to worry about this on your home network. IPv6 is all about future-proofing. You shouldn’t be racing to implement this at home yet or worrying about it too much — but, when you need to buy a new router, try to buy one that supports IPv6. Image Credit: Adobe of Chaos on Flickr, hisperati on Flickr, Vox Efx on Flickr     

    Read the article

  • Responsible BI for Excel, Even for Older Versions

    - by andrewbrust
    On Wednesday, I will have the honor of co-presenting, for both The Data Warehouse Institute (TDWI) and the New York Technology Council, on the subject of Excel and BI. My co-presenter will be none other than Bill Baker, who was a Microsoft Distinguished Engineer and, essentially, the father of BI at that company.  Details on the events are here and here. We'll be talking about PowerPivot, of course, but that's not all. Probably even more important than any one product will be our discussion of whether the usual characterization of Excel as the nemesis of IT, the guilty pleasure of business users and the antithesis of formal BI is really valid and/or hopelessly intractable. Without giving away our punchline, I'll tell you that we are much more optimistic than that. There are huge upsides to Excel, and while there are real dangers to using it in the BI space, there are standards and practices you can employ to ensure Excel is used responsibly. And when those practices are followed, Excel becomes quite powerful indeed. One of the keys to this is using Excel as a data consumer rather than a data storage mechanism. Caching data in Excel is OK, but only if that data is (a) not modified and (b) configured for automated periodic refresh. PowerPivot meets both criteria -- it stores a read-only copy of your data in the form of a model, and once a workbook containing a PowerPivot model is published to SharePoint, it can be configured for scheduled data refresh, on the server, requiring no user intervention whatsoever. Data refresh is a bit like hard drive backup: it will only happen reliably if it's automated and super-easy to configure. PowerPivot hits a real home run here (as does Windows Home Server for PC backup, but I digress). The thing about PowerPivot is that it's an add-in for Excel 2010. What if you're not planning to go to that new version for quite a while? What if you’ve just deployed Office 2007 in your organization? What if you're still on Office 2003, or an even earlier version? What can you do immediately to share data responsibly and easily? As it turns out, there's a feature in Excel that's been around for quite a while that can help: Web Queries.  The Web Query feature was introduced, ostensibly, to allow Excel to pull data in from Internet Web pages… for example, data in a stock quote history table will come in nicely, as will any data in a Web page that is displayed in an HTML table.  To use the feature in Excel 2007 or 2010, click the Data tab of the ribbon and click the “From Web” button towards the left; in older versions use the corresponding option in the menu or toolbars.  Next, paste a URL into the resulting dialog box and tap Enter or click the Go button.  A preview of the Web page will come up, and the dialog will allow you to select the specific table within the page whose data you’d like to import.  Now just click the table, click the Import button, and the Import Data dialog appears.  You can simply click OK to bring in your data, or you can first click the Properties… button and configure the data import to be refreshed at an interval in minutes that you select.  Now your data’s in the spreadsheet and ready to be worked with. Your data may be vulnerable to modification, but if you’ve set up the data refresh, any accidental or malicious changes will be corrected in time anyway. The thing about this feature is that it’s most useful not for public Web pages, but for pages behind the firewall.  
In effect, the Web Query feature provides an incredibly easy way to consume data in Excel that’s “published” from an application.  Users just need a URL.  They don’t need to know server and database names, and since the data is read-only, providing credentials may be unnecessary, or can be handled using integrated security.  If that’s not good enough, the Web Query can be saved to a special .iqy file, which can be edited to provide POST parameter data. The only requirement is that the data must be provided in an HTML table, with the first row providing the column names.  From an ASP.NET project, it couldn’t be easier: a simple bound GridView control is totally compatible.  Use a data source control with it, and you don’t even have to write any code.  Users can link to pages that are part of an application’s UI, or developers can create pages that are specially designed to provide an interface to the Web Query import feature.  And none of this is Microsoft- or .NET-specific.  You can create pages in any language you want (PHP comes to mind) that output the result set of a query in HTML table format, and then consume that data in a Web Query; a sketch of such a page appears below.  Then build PivotTables and charts on the data, and in Excel 2007 or 2010 you can use conditional formatting to create scorecards and dashboards. This strategy allows you to create pages that function quite similarly to the OData XML feeds rendered when .NET developers create an “Astoria” WCF Data Service.  And while it’s cool that PowerPivot and Excel 2010 can import such OData feeds, it’s good to know that older versions of Excel can function in a similar fashion and can consume data produced by virtually any Web development platform. As a final matter, instead of just telling you that “older versions” of Excel support this feature, I’ll be more specific.  To discover which version of Excel was the first to support Web Queries, go to http://bit.ly/OldSchoolXL.
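To make the “any language” point concrete, here is a minimal sketch (in Python, as one illustration; the database file, table, and port are hypothetical placeholders) of a page a Web Query could consume: a tiny standard-library HTTP server that renders a SQL result set as an HTML table whose first row carries the column names.

    import sqlite3
    from http.server import BaseHTTPRequestHandler, HTTPServer

    def result_set_as_html_table(db_path="sales.db",
                                 query="SELECT region, revenue FROM sales"):
        """Run a query and render its result set as a plain HTML table."""
        conn = sqlite3.connect(db_path)
        cursor = conn.execute(query)
        headers = [col[0] for col in cursor.description]  # first row: column names
        rows = cursor.fetchall()
        conn.close()
        parts = ["<html><body><table border='1'>"]
        parts.append("<tr>" + "".join(f"<th>{h}</th>" for h in headers) + "</tr>")
        for row in rows:
            parts.append("<tr>" + "".join(f"<td>{v}</td>" for v in row) + "</tr>")
        parts.append("</table></body></html>")
        return "".join(parts)

    class WebQueryHandler(BaseHTTPRequestHandler):
        def do_GET(self):
            body = result_set_as_html_table().encode("utf-8")
            self.send_response(200)
            self.send_header("Content-Type", "text/html; charset=utf-8")
            self.end_headers()
            self.wfile.write(body)

    if __name__ == "__main__":
        # Point Excel's "From Web" dialog at http://localhost:8080/
        HTTPServer(("", 8080), WebQueryHandler).serve_forever()

Point the Web Query at http://localhost:8080/ and the table imports just as a public Web page would.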

    Read the article

  • ACORD LOMA Session Highlights Policy Administration Trends

    - by [email protected]
    Helen Pitts, senior product marketing manager for Oracle Insurance, attended and is blogging from the ACORD LOMA Insurance Forum this week. Pictured above: Paul Vancheri, Chief Information Officer, Fidelity Investments Life Insurance Company. Vancheri gave a presentation during the ACORD LOMA Insurance Systems Forum about the key elements of modern policy administration systems and how insurers can mitigate risk during legacy system migrations to safely introduce new technologies. When I had a few particularly challenging honors courses in college, my father, a long-time technology industry veteran, used to say, "If you don't know how to do something, go ask the experts. Find someone who has been there and done that, don't be afraid to ask the tough questions, and apply and build upon what you learn." (Actually, he still offers this same advice today.) That's probably why my favorite sessions at industry events, like the ACORD LOMA Insurance Forum this week, are those that include insight on industry trends and case studies from carriers who share their experiences and offer best practices based upon their own lessons learned. I had the opportunity to attend a particularly insightful session Wednesday as Craig Weber, senior vice president of Celent's Insurance practice, and Paul Vancheri, CIO of Fidelity Investments Life, presented "Managing the Dynamic Insurance Landscape: Enabling Growth and Profitability with a Modern Policy Administration System." Policy Administration Trends: Growing the business is the top issue when it comes to IT among both life and annuity and property and casualty carriers, according to Weber. To drive growth and capture market share from competitors, carriers are looking to modernize their core insurance systems, with 65 percent of the CIOs participating in recent Celent research citing plans to replace their policy administration systems. Weber noted that there has been continued focus and investment, particularly in the last three years, by software and technology vendors to offer modern, rules-based, configurable policy administration solutions. He added that these solutions are continuing to evolve with the ongoing aim of helping carriers rapidly meet shifting business needs--whether it is to launch new products to market faster than the competition, adapt existing products to meet shifting consumer and/or regulatory demands, or to exit unprofitable markets. He closed by noting the top four trends for policy administration, either in the process of being adopted today or on the not-so-distant horizon: underwriting and service desktops; new business automation; convergence of ultra-configurable and domain content-rich systems; and better usability and screen design. Mitigating the Risk When Making the Decision to Modernize: Third-party analyst research from advisory firms like Celent was a key part of the due diligence process for Fidelity as it sought a replacement for its legacy policy administration system back in 2005, according to Vancheri. The company's business opportunities were outrunning system capability. Its legacy system had not been upgraded in several years and was deficient from a functionality and currency standpoint. This was constraining the carrier's ability to rapidly configure and bring new and complex products to market. The company sought a new, modern policy administration system, one that would enable it to keep pace with rapid and often unexpected industry changes and stay ahead of the competition. 
A cross-functional team that included representatives from finance, actuarial, operations, client services and IT conducted an extensive selection process. This process included deep documentation review, pilot evaluations, demonstrations of required functionality and complex problem-solving, infrastructure integration capability, and the ability to meet the company's desired cost model. The company ultimately selected an adaptive policy administration system that met its requirements to: Deliver ease of use - eliminating paper and rework, while easing the burden on representatives to sell and service annuities; Provide customer parity - offering Web-based capabilities in alignment with the company's focus on delivering a consistent customer experience across its business; Deliver scalability and efficiency - enabling automation, while simplifying and standardizing systems across its technology stack; Offer desired functionality - supporting Fidelity's product configuration / rules management philosophy, focus on customer service and technology upgrade requirements; Meet cost requirements - including implementation, professional services and license fees and ongoing maintenance; Deliver upon business requirements - enabling the ability to drive time to market for new products and the flexibility to make changes. Best Practices for Addressing Implementation Challenges: Based upon lessons learned during the company's implementation, Vancheri advised carriers to evaluate staffing capabilities and cultural impacts, review business requirements to avoid rebuilding legacy processes, factor in dependent systems, and review policies and practices to secure customer data. His formula for success: upfront planning + clear requirements = precision execution. Achieving a Return on Investment: Vancheri said the decision to replace their legacy policy administration system and deploy a modern, rules-based system--before the economic downturn occurred--has been integral in helping the company adapt to shifting market conditions, while enabling growth in its direct channel sales of variable annuities. Since deploying its new policy admin system, the company has reduced its average time to market for new products from 12-15 months to 4.5 months. The company has since migrated its other products to the new system and retired its legacy system, significantly decreasing its overall product development cycle. From a processing standpoint, Vancheri noted the company has achieved gains in automation, information, and ease of use, resulting in improved real-time data edits, controls for better quality, and tax handling capability. Plus, by having only one platform to manage, the company has simplified its IT environment and is well positioned to deliver system enhancements for greater efficiencies. Commitment to Continuing the Investment: In the short and longer term, Vancheri said the company plans to enhance business functionality to support money movement, wire automation, divorce processing on payout contracts and cost-based tracking improvements. It also plans to continue system upgrades to remain current, as well as focus on further reducing cycle time, driving down maintenance costs, and integrating with other products. Helen Pitts is senior product marketing manager for Oracle Insurance focused on life/annuities and enterprise document automation.

    Read the article

  • The standards that fail us and the intellectual bubble

    - by Jeff
    There has been a great deal of noise in the techie community about standards, and a sudden and unexplainable hate for Flash. This noise isn't coming from consumers... the countless soccer moms, teens and your weird uncle Bob; it's coming from the people who build (or at least claim to build) the stuff those consumers consume. If you could survey the position of consumers on the topic, they'd likely tell you that they just want stuff on the Web to work. The noise goes something like this: Web standards are the correct and right thing to use across the Intertubes, and anything not a part of those standards (Flash) is bad. Furthermore, the more recent noise is centered around the idea that HTML 5, along with Javascript, is the right thing to use. The arguments against Flash are, well, the truth is I haven't seen a good argument. I see anecdotal nonsense about high CPU usage and things I'd never think to check when I'm watching Piano Cat on YouTube, but these aren't arguments to me. Sure, I've seen it crash a browser a few times, but it's totally rare. But let's go back to standards. Yes, standards have played an important role in establishing the ubiquity of the Web. The protocols themselves, TCP/IP and HTTP, have been critical. HTML, which has served us well for a very long time, established an incredible foundation. Javascript did an OK job, and thanks to clever programmers writing great frameworks like JQuery, is becoming more and more useful. CSS is awful (there, I said it, I feel SO much better), and I'll never understand why it's so disconnected and different from anything else. It doesn't help that it's so widely misinterpreted by different browsers. Still, there's no question that standards are a good thing, and they've been good for the Web, consumers and publishers alike. HTML 4 has been with us for more than a decade. In Web years, that might as well be 80. HTML 5, contrary to popular belief, is not a standard, and likely won't be for many years to come. In fact, the Web hasn't really evolved at all in terms of its standards. The tools that generate the standard markup and script have, but at the end of the day, we're still living with standards that are more than ten years old. The "official" standards process has failed us. The Web evolved anyway, and did not wait for standards bodies to decide what to do next. It evolved in part because Macromedia, then Adobe, kept evolving Flash. In the earlier days, it mostly just did obnoxious splash pages, but then it started doing animation, and then rich apps as they added form input. Eventually it found its killer app: video. Now more than 95% of browsers have Flash installed. Consumers are better for it. But I'll do it one better... I'll go out on a limb and say that Flash is a standard. If it's that pervasive, I don't care what you tell me, it's a standard. Just because a company owns it doesn't mean that it's evil or not a standard. And hey, it pains me to say that as a developer, because I think the dev tools are the suck (more on that in a minute). But again, consumers don't care. They don't even pay for Flash. The bottom line is that if I put something Flash-based on the Internet, it's likely that my audience will see it. And what about the speed of standards owned by a company? Look no further than Silverlight. Silverlight 2 (which I consider the "real" start to the story) came out about a year and a half ago. Now version 4 is out, and it has come a very long way in its capabilities. 
If you believe Riastats.com, more than half of browsers have it now. It didn't have to wait for standards bodies and nerds drafting documents; it's out today. At this rate, Silverlight will be on version 6 or 7 by the time HTML 5 is a ratified standard. Back to the noise: one of the things that has continually disappointed me about this profession is the number of people who get stuck in an intellectual bubble, color it with dogmatic principles, and completely ignore the actual marketplace where this stuff all has to live. We aren't machines; binary thinking that forces us to choose between "open standards" and "proprietary lock-in" (the most loaded b.s. FUD term evar) isn't smart at all. The truth is that the <object> tag has allowed us to build incredible stuff on top of the old standards, and consumers have benefitted greatly. Consumer desire, capitalism, and yes, standards ratified by nerds who think about this stuff for years have all played a role in the broad adoption of the Interwebs. We could all do without the noise. At the end of the day, I'm going to build stuff for the Web that's good for my users, and I'm not going to base my decisions on a techie bubble religion. Imagine what the brilliant minds behind the noise could do for the Web if they joined me in that pursuit.

    Read the article

  • Employee Engagement Q&A with John Brunswick

    - by Kellsey Ruppel
    As we are focusing this week on Employee Engagement, I recently sat down with industry expert and thought leader John Brunswick on the topic. Here is the Q&A dialogue we shared.  Q: How do you effectively engage employees to drive business value? A: Motivation, both extrinsic and intrinsic, combined with the relevancy of various channels to support it.  Beyond changing business strategies like compensation models within an organization, engagement ultimately is most successful when driven by employees' motivations.  Business value derived from engagement through technical capabilities can be objectively measured through metrics like the rate and accuracy of problem solving for a given business function, or the frequency of innovation created.  Providing employees performing "knowledge work" with capabilities that allow them to perform work with a higher degree of accuracy in the same, or ideally less, time adds value for that individual and, in turn, drives their level of engagement to drive business value. Q: Organizations with high levels of employee engagement outperform the total stock market index by 22%. Can you comment on why you think this might be? A: Alignment through shared purpose.  Zappos is an excellent example of a culture that arguably has higher than average levels of employee engagement, and it permeates every aspect of their organization – embodied externally through their customer experience.  I recently made my first purchase with them and it was obvious through their web experience, visual design, communication style, customer service and attention to detail down to green packaging, that they have an amazingly strong shared purpose.  The Zappos.com ‘About page’ outlines their "Family Core Values", the first three being "Deliver WOW Through Service, Embrace and Drive Change & Create Fun and A Little Weirdness" – all reflected externally in my interaction with them.  Strong shared purpose enables a higher product and service experience, equating to a dedicated customer base, repeat purchases and expanded market share. Q: Have you seen any trends in the market regarding employee engagement? A: Some companies now see offering a form of social engagement similar to Facebook and LinkedIn as standard communication infrastructure, like email or instant messaging.  Originally offered as standalone tools, these capabilities now show their value when offered in an integrated fashion in the context of business entities.  An emerging area of focus is around employee activities related to their organization on external social platforms, implicitly creating external communities with employees acting on behalf of the brand and interacting with each other (e.g. Twitter).  Companies have reached a formal understanding that this now established communication medium requires strategies allowing employees to engage.  I have personally met colleagues from Oracle, like Oracle User Experience Director Ultan O'Broin (@ultan), via Twitter before meeting first through internal channels. Q: Employee engagement is important, but what about engaging customers and partners? A: Over the last few years we have witnessed an interesting evolution from the novelty of self-service to expectations of "intelligent" self-service.  From a consumer standpoint, engagement can end up being a key differentiator, especially in mature markets.  Customers that perform some level of interaction with a brand develop greater affinity for the brand and have a greater probability of acting as an advocate.  
As organizations move toward a model of deeper engagement, they must ensure that their business is positioned to support deeper relationships, offering potentially greater transparency. From a partner standpoint, greater engagement can lead to new types of business opportunities, much in the way that Amazon.com offers a unified shopping experience that can potentially span various vendors.  This same model can be extended to blending services and product delivery models, based on a closeness that was not easily possible before the advent of richer engagement mechanisms. Q: What types of solutions are available to successfully deliver employee engagement? A: Solutions enabling higher levels of engagement do so on the basis of relevancy.  This relevancy is generally supported by aspects of content management, social collaboration, business intelligence, portal and process management technologies.  These technologies can help deliver an experience tailored to a given role or process within an organization, one that applies equally to structured and unstructured work, appearing in the form of functionality as simple as an online employee directory search, knowledge communities supported by social collaboration, as well as more feature-rich business intelligence dashboards and portals. Looking to learn more about how to effectively engage your employees? Check out this webcast, or read more from John Brunswick. 

    Read the article

  • OpenSSL Versions in Solaris

    - by darrenm
    Those of you who have installed Solaris 11 or have read some of the blogs by my colleagues will have noticed that Solaris 11 includes OpenSSL 1.0.0; this is a different version to what we have in Solaris 10.  I hope the following explains why that is and how it fits with the expectations on binary compatibility between Solaris releases. Solaris 10 was the first release where we included OpenSSL libraries and headers (part of it was actually statically linked into the SSH client/server in Solaris 9).  At the time we were building and releasing Solaris 10, the current train of OpenSSL was 0.9.7.  The OpenSSL libraries at that time were known to not always be completely API and ABI (binary) compatible between releases (sometimes even in the lettered patch releases), though mostly if you stuck with the documented high-level APIs you would be fine.  For this reason OpenSSL was classified as a 'Volatile' interface, and in Solaris 10 Volatile interfaces were not part of the default library search path, which is why the OpenSSL libraries live in /usr/sfw/lib on Solaris 10.  Okay, but what does Volatile mean? Quoting from the attributes(5) man page description of Volatile (which was called External in the older taxonomy): Volatile interfaces can change at any time and for any reason. The Volatile interface stability level allows Sun products to quickly track a fluid, rapidly evolving specification. In many cases, this is preferred to providing additional stability to the interface, as it may better meet the expectations of the consumer. The most common application of this taxonomy level is to interfaces that are controlled by a body other than Sun, but unlike specifications controlled by standards bodies or Free or Open Source Software (FOSS) communities which value interface compatibility, it can not be asserted that an incompatible change to the interface specification would be exceedingly rare. It may also be applied to FOSS controlled software where it is deemed more important to track the community with minimal latency than to provide stability to our customers. It is also common to apply the Volatile classification level to interfaces in the process of being defined by trusted or widely accepted organizations. These are generically referred to as draft standards. An "IETF Internet draft" is a well understood example of a specification under development. Volatile can also be applied to experimental interfaces. No assertion is made regarding either source or binary compatibility of Volatile interfaces between any two releases, including patches. Applications containing these interfaces might fail to function properly in any future release. Note that last paragraph!  OpenSSL is only one example of the many interfaces in Solaris that are classified as Volatile.  At the other end of the scale we have Committed (Stable in Solaris 10 terminology) interfaces; these include things like the POSIX APIs or Solaris-specific APIs that we have no intention of changing in an incompatible way.  There are also Private interfaces and things we declare as Not-an-Interface (e.g. command output intended only to be read by humans, not scripted against). Even if we had declared OpenSSL as a Committed/Stable interface in Solaris 10, there are allowed exceptions; again quoting from attributes(5): 4. An interface specification which isn't controlled by Sun has been changed incompatibly and the vast majority of interface consumers expect the newer interface. 5. 
Not making the incompatible change would be incomprehensible to our customers. In our opinion, and that of our large and small customers, keeping up with the OpenSSL community is important, and certainly both of the above cases apply. Our policy for dealing with OpenSSL on Solaris 10 was to stay at 0.9.7 and add fixes for security vulnerabilities (the version string includes the CVE numbers of fixed vulnerabilities relevant to that release train).  The last release of OpenSSL 0.9.7 delivered by the upstream community was more than 4 years ago, in Feb 2007. Now let's roll forward to just before the release of Solaris 11 Express in 2010. By that point in time the current OpenSSL release was 0.9.8, with the 1.0.0 release known to be coming soon.  Two significant changes to OpenSSL were made between Solaris 10 and Solaris 11 Express.  First, in Solaris 11 Express (and Solaris 11) we removed the requirement that Volatile libraries be placed in /usr/sfw/lib, which means OpenSSL is now in /usr/lib; secondly, we upgraded it to the then current version stream of OpenSSL (0.9.8), as was expected by our customers. In between Solaris 11 Express in 2010 and the release of Solaris 11 in 2011, the OpenSSL community released version 1.0.0.  This was a huge milestone for a long-standing and highly respected open source project.  It would have been highly negligent of Solaris not to include OpenSSL 1.0.0e in the Solaris 11 release; it is the latest, best-supported and best-performing version.  In fact Solaris 11 isn't 'just' OpenSSL 1.0.0: we have added our SPARC T4 engine and the AES-NI engine to support the on-chip crypto acceleration. This gives us 4.3x better AES performance than OpenSSL 0.9.8 running on AIX on an IBM POWER7. We are now working with the OpenSSL community to determine how best to integrate the SPARC T4 changes into the mainline OpenSSL.  The OpenSSL 'pkcs11' engine we delivered in Solaris 10 to support the CA-6000 card and the SPARC T1/T2/T3 hardware is still included in Solaris 11. When OpenSSL 1.0.1 and 1.1.0 come out we will assess what is best for Solaris customers. It might be an upgrade, or it might be parallel delivery of more than one version stream.  At this time Solaris 11 still classifies OpenSSL as a Volatile interface; it is our hope that at some point in a future release we will be able to give it a higher interface stability level. Happy crypting! And thank you, OpenSSL community, for all the work you have done that helps Solaris.
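As a quick practical aside: when a system can carry more than one OpenSSL version stream, it is worth confirming which one a given runtime is actually linked against. One small illustration, using nothing but Python's standard library (note that Python reports the OpenSSL it was built against, which may differ from what other consumers on the same box use):

    import ssl

    print(ssl.OPENSSL_VERSION)         # e.g. "OpenSSL 1.0.0e ..." (output varies by system)
    print(ssl.OPENSSL_VERSION_NUMBER)  # the same version encoded as an integer
    print(ssl.OPENSSL_VERSION_INFO)    # a tuple of the version components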

    Read the article

  • The Virtues and Challenges of Implementing Basel III: What Every CFO and CRO Needs To Know

    - by Jenna Danko
    The Basel Committee on Banking Supervision (BCBS) is a group tasked with providing thought-leadership to the global banking industry.  Over the years, the BCBS has released volumes of guidance in an effort to promote stability within the financial sector.  By effectively communicating best practices, the Basel Committee has influenced financial regulations worldwide.  Basel regulations are intended to help banks:
- More easily absorb shocks due to various forms of financial-economic stress
- Improve risk management and governance
- Enhance regulatory reporting and transparency
In June 2011, the BCBS released Basel III: A global regulatory framework for more resilient banks and banking systems.  This new set of regulations included many enhancements to previous rules and will have both short and long term impacts on the banking industry.  Some of the key features of Basel III include:
- A stronger capital base
  - More stringent capital standards and higher capital requirements
  - Introduction of capital buffers
- Additional risk coverage
  - Enhanced quantification of counterparty credit risk
  - Credit valuation adjustments
  - Wrong way risk
  - Asset Value Correlation Multiplier for large financial institutions
- Liquidity management and monitoring
  - Introduction of leverage ratio
- Even more rigorous data requirements
To implement these features banks need to embark on a journey replete with challenges. These can be categorized into three key areas: Data, Models and Compliance.
Data Challenges
- Data quality - All standard dimensions of Data Quality (DQ) have to be demonstrated.  Manual approaches are now considered too cumbersome and automation has become the norm.
- Data lineage - Data lineage has to be documented and demonstrated.  The PPT / Excel approach to documentation is being replaced by metadata tools.  Data lineage has become dynamic due to a variety of factors, making static documentation out-dated quickly.
- Data dictionaries - A strong and clean business glossary is needed, with proper identification of business owners for the data.
- Data integrity - A strong, scalable architecture with workflow tools helps demonstrate data integrity.  Manual touch points have to be minimized.
- Data relevance/coverage - Data must be relevant to all portfolios and storage devices must allow for sufficient data retention.  Coverage of both on and off balance sheet exposures is critical.
Model Challenges
- Model development - Requires highly trained resources with both quantitative and subject matter expertise.
- Model validation - All Basel models need to be validated. This requires additional resources with skills that may not be readily available in the marketplace.
- Model documentation - All models need to be adequately documented.  Creation of document templates and model development processes/procedures is key.
- Risk and finance integration - This integration is necessary for Basel, as the Allowance for Loan and Lease Losses (ALLL) is calculated by Finance, yet Expected Loss (EL) is calculated by Risk Management – and they need to somehow be equal (a quick numeric sketch of EL appears at the end of this post).  This is tricky at best from an implementation perspective.
Compliance Challenges
- Rules interpretation - Some Basel III requirements leave room for interpretation.  A misinterpretation of regulations can lead to delays in Basel compliance and undesired reprimands from supervisory authorities.
- Gap identification and remediation - Internal identification and remediation of gaps ensures smoother Basel compliance and audit processes.  
However business lines are challenged by the competing priorities which arise from regulatory compliance and business as usual work.  Qualification readiness - Providing internal and external auditors with robust evidence of a thorough examination of the readiness to proceed to parallel run and Basel qualification  In light of new regulations like Basel III and local variations such as the Dodd Frank Act (DFA) and Comprehensive Capital Analysis and Review (CCAR) in the US, banks are now forced to ask themselves many difficult questions.  For example, executives must consider: How will Basel III play into their Risk Appetite? How will they create project plans for Basel III when they haven’t yet finished implementing Basel II? How will new regulations impact capital structure including profitability and capital distributions to shareholders? After all, new regulations often lead to diminished profitability as well as an assortment of implementation problems as we discussed earlier in this note.  However, by requiring banks to focus on premium growth, regulators increase the potential for long-term profitability and sustainability.  And a more stable banking system: Increases consumer confidence which in turn supports banking activity  Ensures that adequate funding is available for individuals and companies Puts regulators at ease, allowing bankers to focus on banking Stability is intended to bring long-term profitability to banks.  Therefore, it is important that every banking institution takes the steps necessary to properly manage, monitor and disclose its risks.  This can be done with the assistance and oversight of an independent regulatory authority.  A spectrum of banks exist today wherein some continue to debate and negotiate with regulators over the implementation of new requirements, while others are simply choosing to embrace them for the benefits I highlighted above. Do share with me how your institution is coping with and embracing these new regulations within your bank. Dr. Varun Agarwal is a Principal in the Banking Practice for Capgemini Financial Services.  He has over 19 years experience in areas that span from enterprise risk management, credit, market, and to country risk management; financial modeling and valuation; and international financial markets research and analyses.
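    As referenced in the model challenges above, here is a minimal, illustrative Java sketch -- not from the article, and not any vendor's implementation -- of the standard Basel IRB expected-loss identity EL = PD x LGD x EAD, the Risk Management figure that the Finance-side ALLL must be reconciled against. The class and parameter names are hypothetical.

        public final class ExpectedLoss {
            // pd: probability of default (0..1); lgd: loss given default (0..1);
            // ead: exposure at default in currency units
            static double expectedLoss(double pd, double lgd, double ead) {
                if (pd < 0 || pd > 1 || lgd < 0 || lgd > 1 || ead < 0) {
                    throw new IllegalArgumentException("parameter out of range");
                }
                return pd * lgd * ead;
            }

            public static void main(String[] args) {
                // e.g., 1.2% PD, 45% LGD, 10,000,000 exposure -> 54,000 expected loss
                System.out.printf("EL = %.2f%n", expectedLoss(0.012, 0.45, 10_000_000));
            }
        }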

    Read the article

  • Highlights from the Oracle Customer Experience Summit @ OpenWorld

    - by Kathryn Perry
    A guest post by David Vap, Group Vice President, Oracle Applications Product Development
    The Oracle Customer Experience Summit was the first-ever event covering the full breadth of Oracle's CX portfolio -- Marketing, Sales, Commerce, and Service. The purpose of the Summit was to articulate the customer experience imperative and to showcase the suite of Oracle products that can help our customers create the best possible customer experience. This topic has always been a very important one, but now that there are so many alternative companies to do business with, and because people have such public ways to voice their displeasure, it's necessary for vendors to have multiple listening posts in place to gauge consumer sentiment. They need to know what is going on in real time and be able to react quickly to turn negative situations into positive ones. Those can then be shared in a social manner to enhance the brand and turn the customer into a repeat customer.
    The Summit was focused on Oracle's portfolio of products and entirely dedicated to customers who are committed to building great customer experiences within their businesses. Rather than DBAs, the attendees were business people looking to collaborate with other like-minded experts and find out how Oracle can help in terms of technology, best practices, and expertise. The event was at the Westin St. Francis Hotel in San Francisco as part of Oracle OpenWorld. We had eight hundred people attend, which was great for the first year. Next year, there's no doubt in my mind, we can raise that number to 5,000.
    Alignment and Logic
    Oracle's Customer Experience portfolio is made up of a combination of acquired and organic products owned by many people who are new to Oracle. We include homegrown Fusion CRM, as well as RightNow, InQuira, OPA, Vitrue, ATG, Endeca, and many others. The attendees knew of the acquisitions, so naturally they wanted to see how the products all fit together and hear the logic behind the portfolio. To tell them about our alignment, we needed to be aligned. To accomplish that, a cross-functional team at Oracle agreed on the messaging so that every single Oracle presenter could cover the big picture before going deep into a product or topic. Talking about the full suite of products in one session produced overflow value for other products. And even though this internal coordination was a huge effort, everyone saw the value for our customers and for our long-term cooperation and success.
    Keynotes, Workshops, and Tents of Innovation
    We scored by having Seth Godin as our keynote speaker -- always provocative and popular. The opening keynote was a session orchestrated by Mark Hurd, Anthony Lye, and me. Mark set the stage by giving real-world examples of bad customer experiences, Anthony clearly articulated the business imperative for addressing these experiences, and I brought it all to life by taking the audience around the Customer Lifecycle and showing demos and videos, with partners included at each of the stops around the lifecycle.
    Brian Curran, a VP for RightNow Product Strategy, presented a session that was in high demand called The Economics of Customer Experience. People loved hearing how to build a business case and justify the cost of building a better customer experience. John Kembel, another VP for RightNow Product Strategy, held a workshop that customers raved about. It was based on the journey mapping methodology he created, which is a way to talk to customers about where they want to make improvements to their customers' experiences. He divided the audience into groups led by facilitators. Each person had the opportunity to engage with experts and peers and construct some real takeaways.
    [Photo, from left to right: Brian Curran, John Kembel, Seth Godin, and George Kembel]
    The conference hotel was across from Union Square, so we used that space to set up Innovation Tents. During the day we served lunch in the tents and partners showed their different innovative ideas. It was very interesting to see all the technologies and advancements. It also gave people a place to mix and mingle and to think about the fringe of where we could all take these ideas.
    Product Portfolio Plus Thought Leadership
    Of course there is always room for improvement, but the feedback on the format of the conference was positive. Ninety percent of the sessions had either a partner or a customer teamed with an Oracle presenter. The presentations weren't dry, one-way information dumps, but more interactive. I just followed up with a CEO who attended the conference with his Head of Marketing. He told me that they are using John Kembel's journey mapping methodology across the organization to pull people together. This sort of thought leadership in these highly competitive areas gives Oracle permission to engage around the technology. We have to differentiate ourselves, and it's harder to do on the product side because everyone looks the same on paper. But on thought leadership we can, and did, take some really big steps.
    David Vap, Group Vice President, Oracle Applications Product Development

    Read the article

  • Reference Data Management and Master Data: Are They Related?

    - by Mala Narasimharajan
    Submitted by: Rahul Kamath
    Oracle Data Relationship Management (DRM) has always been extremely powerful as an Enterprise Master Data Management (MDM) solution that can help manage changes to master data in a way that influences enterprise structure -- whether it be mastering the chart of accounts to enable financial transformation, revamping organization structures to drive business transformation and operational efficiencies, restructuring sales territories to enable equitable distribution of leads to sales teams following the acquisition of new products, or adding additional cost centers to enable fine-grained control over expenses. Increasingly, DRM is also being utilized by Oracle customers for reference data management, an emerging solution space that deserves some explanation.
    What is reference data? How does it relate to master data?
    Reference data is a close cousin of master data. While master data is challenged with problems of unique identification, may be more rapidly changing, requires consensus building across stakeholders, and lends structure to business transactions, reference data is simpler and more slowly changing, but has semantic content that is used to categorize or group other information assets -- including master data -- and gives them contextual value. In fact, the creation of a new master data element may require new reference data to be created. For example, when a European company acquires a US business, chances are that they will now need to adapt their product line taxonomy to include a new category to describe the newly acquired US product line. Further, the cross-border transaction will also result in a revised geo hierarchy. The addition of new products represents changes to master data, while changes to product categories and geo hierarchy are examples of reference data changes.[1]
    The following list gives illustrative examples of reference data by type. Reference data types may include types and codes, business taxonomies, complex relationships and cross-domain mappings, or standards.
      - Types & Codes: transaction codes; lookup tables (e.g., gender, marital status); status codes; role codes; domain values
      - Taxonomies: industry classification categories and codes (e.g., the North American Industry Classification System, NAICS); product categories; sales territories (e.g., geo, industry verticals, named accounts, federal/state/local/defense); market segments; Universal Standard Products and Services Classification (UNSPSC); eCl@ss
      - Relationships / Mappings: product/segment and product/geo mappings; city -> state -> postal code hierarchies; customer/market segment and business unit/channel mappings; country code / currency code / financial account mappings; International Classification of Diseases (ICD) mappings, e.g., ICD-9 -> ICD-10
      - Standards: calendars (e.g., Gregorian, fiscal, manufacturing, retail, ISO 8601); currency codes (e.g., ISO); country codes (e.g., ISO 3166, UN); date/time and time zones (e.g., ISO 8601); tax rates
    Why manage reference data?
    Reference data carries contextual value and meaning, and therefore its use can drive business logic that helps execute a business process, create a desired application behavior, or provide meaningful segmentation for analyzing transaction data. Further, mapping reference data often requires human judgment.
    Sample Use Cases of Reference Data Management
    Healthcare: Diagnostic Codes. The reference data challenges in the healthcare industry offer a case in point. Part of being HIPAA compliant requires medical practitioners to transition diagnosis codes from ICD-9 to ICD-10, a medical coding scheme used to classify diseases, signs and symptoms, causes, etc. The transition to ICD-10 has a significant impact on business processes, procedures, contracts, and IT systems. Since the two code sets offer diagnosis codes at very different levels of granularity, human judgment is required to map ICD-9 codes to ICD-10. The process requires collaboration and consensus building among stakeholders, much as master data management does. Moreover, to build reports that track utilization, frequency, and quality of diagnoses, medical practitioners may need to "cross-walk" mappings -- either forward to ICD-10 or backward to ICD-9, depending upon the reporting time horizon (a minimal sketch of such a cross-walk follows below).
    Spend Management: Product, Service & Supplier Codes. Similarly, as an enterprise looks to rationalize suppliers and leverage their spend, conforming supplier codes, as well as product and service codes, requires supporting multiple classification schemes that may include industry standards (e.g., UNSPSC, eCl@ss) or enterprise taxonomies. Aberdeen Group estimates that 90% of companies rely on spreadsheets and manual reviews to aggregate, classify and analyze spend data, and that data management activities account for 12-15% of the sourcing cycle and consume 30-50% of a commodity manager's time. Creating a common map across the extended enterprise to rationalize codes across procurement, accounts payable, general ledger, credit card, procurement card (P-card), as well as ACH and bank systems, can cut sourcing costs, improve compliance, lower inventory stock, and free up talent to focus on value-added tasks.
    Change Management: Point-of-Sale Transaction Codes and Product Codes. In the specialty finance industry, enterprises are confronted with usury laws -- governed at the state and local level -- that regulate financial product innovation as it relates to consumer loans, check cashing, and pawn lending. To comply, it is important to demonstrate that transactions booked at the point of sale are posted against valid product codes that were on offer at the time of booking the sale. Since new products are being released in a steady stream, it is important to ensure timely and accurate mapping of point-of-sale transaction codes to the appropriate product and GL codes to comply with the changing regulations.
    Multi-National Companies: Industry Classification Schemes. As companies grow and expand across geographies, a typical challenge they encounter with reference data is reconciling the various versions of industry classification schemes in use across nations. While the United States, Mexico, and Canada conform to the North American Industry Classification System (NAICS) standard, European Union countries use different variants of the NACE industry classification scheme. Multi-national companies must manage the individual national NACE schemes and reconcile the differences across countries. Enterprises must invest in a reference data change management application to address the challenge of distributing reference data changes to downstream applications and assessing which applications were impacted by a given change.
    References
    [1] Malcolm Chisholm, "Master Data versus Reference Data," April 1, 2006.
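    As noted above, here is a minimal, illustrative Java sketch of a diagnosis-code cross-walk. This is not Oracle DRM's API; the class, methods, and sample codes are hypothetical stand-ins for a curated, human-approved mapping that must support lookups in both directions.

        import java.util.Collections;
        import java.util.HashMap;
        import java.util.Map;
        import java.util.Set;
        import java.util.TreeSet;

        public final class DiagnosisCrossWalk {
            private final Map<String, Set<String>> forward = new HashMap<>();   // ICD-9 -> ICD-10
            private final Map<String, Set<String>> backward = new HashMap<>();  // ICD-10 -> ICD-9

            // Record one stakeholder-approved mapping between the two code sets.
            void map(String icd9, String icd10) {
                forward.computeIfAbsent(icd9, k -> new TreeSet<>()).add(icd10);
                backward.computeIfAbsent(icd10, k -> new TreeSet<>()).add(icd9);
            }

            Set<String> toIcd10(String icd9) {
                return forward.getOrDefault(icd9, Collections.emptySet());
            }

            Set<String> toIcd9(String icd10) {
                return backward.getOrDefault(icd10, Collections.emptySet());
            }

            public static void main(String[] args) {
                DiagnosisCrossWalk cw = new DiagnosisCrossWalk();
                cw.map("250.00", "E11.9"); // hypothetical diabetes-code mapping
                System.out.println(cw.toIcd10("250.00")); // [E11.9]
                System.out.println(cw.toIcd9("E11.9"));   // [250.00]
            }
        }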

    Read the article

  • JavaOne 2012 Sunday Strategy Keynote

    - by Janice J. Heiss
    At the Sunday Strategy Keynote, held at the Masonic Auditorium, Hasan Rizvi, EVP, Middleware and Java Development, stated that the theme for this year's JavaOne is "Make the future Java" -- meaning that Java continues in its role as the most popular, complete, productive, secure, and innovative development platform. But it also means, he qualified, the process by which we make the future Java: an open, transparent, collaborative, and community-driven evolution. "Many of you have bet your businesses and your careers on Java, and we have bet our business on Java," he said.
    Rizvi detailed the three factors considered critical to the success of Java: technology innovation, community participation, and Oracle's leadership/stewardship. He offered a scorecard in these three realms over the past year -- OS X and Linux ARM support on Java SE, open sourcing of JavaFX by the end of the year, the release of the Java Embedded Suite 7.0 middleware platform, and multiple releases on the Java EE side. The JCP process continues, with new JSR activity, and JUGs show a 25% increase in participation since last year. Oracle, meanwhile, continues its commitment to both technology and community development/outreach, with four regional JavaOne conferences last year in various parts of the world, as well as the release of Java Magazine, with over 120,000 current subscribers.
    Georges Saab, VP Development, Java SE, next reviewed features of Java SE 7 -- the first major revision to the platform under Oracle's stewardship, which has included near-monthly update releases offering hundreds of fixes, performance enhancements, and new features. Saab indicated that developers, ISVs, and hosting providers have all been rapid adopters of the platform. He also noted that Oracle's entire Fusion Middleware stack is supported on SE 7. The supported platforms for SE 7 have also increased -- from Windows, Linux, and Solaris to OS X, Linux ARM, and the emerging ARM micro-server market. "In the last year, we've added as many new platforms for Java as were added in the previous decade," said Saab.
    Saab also explored the upcoming JDK 8 release, including Project Lambda, Project Nashorn (a modern implementation of JavaScript running on the JVM), and others (a quick sketch of both follows). He noted that Nashorn functionality had already been used internally in NetBeans 7.3, and announced that Oracle is planning to contribute the implementation to OpenJDK.
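    As a quick illustration of the two JDK 8 features Saab called out -- not code from the keynote -- here is a minimal sketch combining a Project Lambda function literal with Project Nashorn evaluating JavaScript on the JVM through the standard javax.script API. The class name is hypothetical; the lambda syntax and "nashorn" engine lookup reflect the JDK 8 previews.

        import java.util.Arrays;
        import java.util.List;
        import javax.script.ScriptEngine;
        import javax.script.ScriptEngineManager;

        public class Jdk8Preview {
            public static void main(String[] args) throws Exception {
                // Project Lambda: behavior passed as a concise function literal
                List<String> projects = Arrays.asList("Lambda", "Nashorn", "Sumatra");
                projects.forEach(p -> System.out.println("Project " + p));

                // Project Nashorn: JavaScript evaluated on the JVM
                ScriptEngine js = new ScriptEngineManager().getEngineByName("nashorn");
                System.out.println(js.eval("'Java ' + (7 + 1)")); // prints: Java 8
            }
        }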
    Nandini Ramani, VP Development, Java Client, ME and Card, discussed the latest news pertaining to JavaFX 2.0: releases on Windows, OS X, and Linux; the release of the JavaFX Scene Builder tool; the JavaFX WebView component in NetBeans 7.3; and an OpenJFX project in OpenJDK. Ramani announced, as of Sunday, the availability for download of JavaFX on Linux ARM (developer preview), as well as Scene Builder on Linux. She noted that for next year's JDK 8 release, JavaFX will offer 3D support, as well as third-party component integration. Avinder Brar, Senior Software Engineer, Navis, and Dierk König, Canoo Fellow, next took the stage and demonstrated all that JavaFX offers, with a feature-rich, animation-rich, real-time cargo management application that employs Canoo's just open-sourced Dolphin technology.
    Saab also explored Java SE 9 and beyond: Jigsaw modularity, the Penrose Project for interoperability with OSGi, improved multi-tenancy for Java in the cloud, and Project Sumatra. Phil Rogers, HSA Foundation President and AMD Corporate Fellow, explored heterogeneous computing platforms that combine the CPU and the parallel processor of the GPU into a single piece of silicon with shared memory -- a hardware technology driven by such advanced functionality as HD video, face recognition, and cloud workloads. Project Sumatra is an OpenJDK project targeted at bringing Java to such heterogeneous platforms, with hardware and software experts working together to modify the JVM for these advanced applications and platforms.
    Ramani next discussed the latest with Java in the embedded space -- "the Internet of things" and M2M -- declaring this to be "the next IT revolution," with Java as the ideal technology for the ecosystem. Last week, Oracle released Java ME Embedded 3.2 (for microcontrollers and low-power devices) and Java Embedded Suite 7.0 (a middleware stack based on Java SE 7). Axel Hansmann, VP Strategy and Marketing, Cinterion, explored his company's use of Java in M2M and their new release of EHS5, the world's smallest 3G-capable M2M module, running Java ME Embedded. Hansmann explained that Java offers them the ability to create a "simple to use, scalable, coherent, end-to-end layer" for such diverse edge devices. Marc Brule, Chief Financial Officer, Royal Canadian Mint, also explored the fascinating use case of Java Card in his country's MintChip e-cash technology, deployable on a smartphone, USB device, computer, tablet, or the cloud. In parting, Ramani encouraged developers to download the latest releases of Java Embedded and try them out.
    Cameron Purdy, VP, Fusion Middleware Development and Java EE, summarized the latest developments and announcements in the enterprise space: greater developer productivity in Java EE 6 (with more on the way in EE 7), and portability between platforms and vendors, even cloud-to-cloud portability. The earliest version of the Java EE 7 SDK is now available for download -- in GlassFish 4 -- with WebSocket support, better JSON support, and more (a minimal WebSocket endpoint sketch appears at the end of this summary). The final release is scheduled for April of 2013. Nicole Otto, Senior Director, Consumer Digital Technology, Nike, explored her company's Java-driven enterprise ecosystem for all things sports, including the Nike FuelBand accelerometer wristband. Looking beyond Java EE 7, Purdy mentioned NoSQL database functionality for EE 8, the concurrency utilities (possibly in EE 7), some of the Avatar projects in EE 7 and some in EE 8, multi-tenancy for the cloud, support for SaaS applications, and more.
    Rizvi ended by introducing Dr. Robert Ballard, oceanographer and National Geographic Explorer-in-Residence, as part of Oracle's philanthropic relationship with the National Geographic Society to fund K-12 education around ocean science and conservation. Ballard is best known for having discovered the wreckage of the Titanic. He offered a fascinating video and overview of the cutting-edge technology used in such deep-sea explorations, noting that in his early days, high-bandwidth exploration meant that you'd go down in a submarine and "stick your face up against the window." Now, it's remotely operated telepresence: "I think of my Hercules vehicle as my equivalent of a Na'vi. When I go beneath the sea, I actually send my spirit." Using high-bandwidth satellite links, such amazing explorations can now occur via smartphone, laptop, or whatever platform. Ballard's team regularly offers live feeds and programming out to schools and the world, spanning 188 countries, and embeds educators as part of the expeditions.
    It's technology at its finest, inspiring the next generation of scientists and explorers!
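    As a footnote to Purdy's Java EE 7 remarks above, here is a minimal, illustrative JSR 356 (Java API for WebSocket) echo endpoint of the kind GlassFish 4 previews. This is a sketch, not production code; the endpoint path and class name are hypothetical, and deployment details vary by container.

        import javax.websocket.OnMessage;
        import javax.websocket.server.ServerEndpoint;

        // The container discovers this annotated POJO and manages the socket lifecycle.
        @ServerEndpoint("/echo")
        public class EchoEndpoint {
            @OnMessage
            public String onMessage(String message) {
                // The return value is sent back to the connected client.
                return "echo: " + message;
            }
        }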

    Read the article

  • Customer Engagement: Are Your Customers Engaged With Your Brands?

    - by Michael Snow
    Engaging Customers is Critical for Business Growth
    This week we'll be spending some time looking at customer engagement. We all have stories about how we try to engage our customers better than ever before. We all know that successfully engaging customers is critical to an organization's business success. We also know that engaging our customers is more challenging today than ever before. There is so much noise to compete with for getting anyone's attention. Over the last decade and a half, we've watched as the online channel became a primary one for conducting our business and even managing our lives. And during this whole process of evolution, the customer journey has grown increasingly complex. Customers themselves have assumed increasing power and influence over the purchase process and over setting the tone and pace of the relationships they have with brands, and you see the evidence of this in the really high expectations that customers have today. They expect brand experiences that are personalized and relevant -- in other words, experiences that demonstrate that the brand understands their interests, preferences, and past interactions with them. They also expect their experience with a brand and the community surrounding it to be social and interactive; it's no longer acceptable to have a static, one-way dialogue with your customer base, or to fail to connect your customers with fellow customers, or with your employees and partners. And on top of all this, customers expect us to deliver this rich and engaging, personalized and interactive experience in a consistent way across a variety of channels, including web, mobile, and social channels, and even offline venues such as in-store or via a call center. As a result, delivering on these expectations and successfully engaging your customers is a great challenge today.
    Customers expect a personal, engaging, and consistent online customer experience. Today's consumer expects to engage with your brand and the community surrounding it in an interactive and social way. Customers have come to expect a lot from the online customer experience:
      - They expect it to be personal:
          - Accessible: regardless of my device, and via my existing online identities
          - Relevant: content that interests me
          - Customized: the ability to tailor my online experience
      - They expect it to be engaging:
          - Social: so I can share content with my social networks
          - Intuitive: so I can easily find what I need
          - Interactive: so I can interact with online communities
      - And they expect it to be consistent across the online experience -- so you had better have your brand and information ducks in a row.
    These expectations are by no means limited to your customers. Your employees (and partners) also expect to be empowered with engagement tools across their internal and external communications and interactions with customers, partners, and other employees. We had a great conversation with Ted Schadler from Forrester Research entitled "Mobile is the New Face of Engagement" that is now available on demand. Take a look at all the webcasts available to watch from our Social Business Thought Leader Series.
    Social capabilities have become pervasive and have changed customers' expectations for their online experiences. The days of one-direction communication with customers are at an end. Today's customers expect to engage in a dialogue with your brand and the community surrounding it in an interactive and social way.
    You have a very short window of opportunity to engage a customer before they go to another site in their pursuit of information, products, or services. In fact, customers who engage with brands via social media tend to spend between 20% and 40% more than customers who don't. And your customers are increasingly influenced by their social networks too: 40% of consumers say they factor in Facebook recommendations when making purchasing decisions. This means a few different things for today's businesses. Incorporating forms of social interaction such as commenting or reviews, and tightly integrating your online experience with your customers' social networking experiences, are crucial for keeping eyeballs on your desired pages.
    Notes/Sources:
      - 93% - Cone finds that Americans expect companies to have a presence in social media (http://www.coneinc.com/content1182)
      - 40% of consumers factor in Facebook recommendations when making decisions about purchasing ("Increasing Campaign Effectiveness with Social Media," Syncapse, March 2011)
      - 20%-40% - Customers who engage with a company via social media spend this percentage more with that company than other customers (Bain & Company report, "Putting Social Media to Work")

    Read the article
