Search Results

Search found 9975 results on 399 pages for 'enterprise architecture'.

  • Deploying Axis2 on GlassFish

    - by user115524
    I'm trying to deploy the Axis2 v1.6.2 WAR on GlassFish v3.1.2 and I'm having some problems... I need to develop a few web services, and since the main app is being served on GlassFish I was hoping to deploy Axis2 on it so I could test it. I used the GlassFish administration pages to deploy a WAR downloaded from the Apache site, but after pointing the application deployment form to this WAR I'm getting an error, and this is the (long) stack trace:

    [#|2013-06-27T11:34:40.701+0200|SEVERE|glassfish3.1.2|javax.enterprise.system.container.web.com.sun.enterprise.web|_ThreadID=84;_ThreadName=admin-thread-pool-4848(4);|WebModule[/axis23403634363287739103]StandardWrapper.Throwable
    java.lang.ExceptionInInitializerError
        at org.apache.axis2.transport.http.AxisServlet.initConfigContext(AxisServlet.java:584)
        at org.apache.axis2.transport.http.AxisServlet.init(AxisServlet.java:454)
        at org.apache.catalina.core.StandardWrapper.initServlet(StandardWrapper.java:1453)
        at org.apache.catalina.core.StandardWrapper.load(StandardWrapper.java:1250)
        at org.apache.catalina.core.StandardContext.loadOnStartup(StandardContext.java:5093)
        at org.apache.catalina.core.StandardContext.start(StandardContext.java:5380)
        at com.sun.enterprise.web.WebModule.start(WebModule.java:498)
        at org.apache.catalina.core.ContainerBase.addChildInternal(ContainerBase.java:917)
        at org.apache.catalina.core.ContainerBase.addChild(ContainerBase.java:901)
        at org.apache.catalina.core.StandardHost.addChild(StandardHost.java:733)
        at com.sun.enterprise.web.WebContainer.loadWebModule(WebContainer.java:2019)
        at com.sun.enterprise.web.WebContainer.loadWebModule(WebContainer.java:1669)
        at com.sun.enterprise.web.WebApplication.start(WebApplication.java:109)
        at org.glassfish.internal.data.EngineRef.start(EngineRef.java:130)
        at org.glassfish.internal.data.ModuleInfo.start(ModuleInfo.java:269)
        at org.glassfish.internal.data.ApplicationInfo.start(ApplicationInfo.java:301)
        at com.sun.enterprise.v3.server.ApplicationLifecycle.deploy(ApplicationLifecycle.java:461)
        at com.sun.enterprise.v3.server.ApplicationLifecycle.deploy(ApplicationLifecycle.java:240)
        at org.glassfish.deployment.admin.DeployCommand.execute(DeployCommand.java:389)
        at com.sun.enterprise.v3.admin.CommandRunnerImpl$1.execute(CommandRunnerImpl.java:348)
        at com.sun.enterprise.v3.admin.CommandRunnerImpl.doCommand(CommandRunnerImpl.java:363)
        at com.sun.enterprise.v3.admin.CommandRunnerImpl.doCommand(CommandRunnerImpl.java:1085)
        at com.sun.enterprise.v3.admin.CommandRunnerImpl.access$1200(CommandRunnerImpl.java:95)
        at com.sun.enterprise.v3.admin.CommandRunnerImpl$ExecutionContext.execute(CommandRunnerImpl.java:1291)
        at com.sun.enterprise.v3.admin.CommandRunnerImpl$ExecutionContext.execute(CommandRunnerImpl.java:1259)
        at org.glassfish.admin.rest.ResourceUtil.runCommand(ResourceUtil.java:214)
        at org.glassfish.admin.rest.ResourceUtil.runCommand(ResourceUtil.java:207)
        at org.glassfish.admin.rest.resources.TemplateListOfResource.createResource(TemplateListOfResource.java:148)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:606)
        at com.sun.jersey.spi.container.JavaMethodInvokerFactory$1.invoke(JavaMethodInvokerFactory.java:60)
        at com.sun.jersey.server.impl.model.method.dispatch.AbstractResourceMethodDispatchProvider$ResponseOutInvoker._dispatch(AbstractResourceMethodDispatchProvider.java:205)
        at com.sun.jersey.server.impl.model.method.dispatch.ResourceJavaMethodDispatcher.dispatch(ResourceJavaMethodDispatcher.java:75)
        at com.sun.jersey.server.impl.uri.rules.HttpMethodRule.accept(HttpMethodRule.java:288)
        at com.sun.jersey.server.impl.uri.rules.SubLocatorRule.accept(SubLocatorRule.java:134)
        at com.sun.jersey.server.impl.uri.rules.RightHandPathRule.accept(RightHandPathRule.java:147)
        at com.sun.jersey.server.impl.uri.rules.SubLocatorRule.accept(SubLocatorRule.java:134)
        at com.sun.jersey.server.impl.uri.rules.RightHandPathRule.accept(RightHandPathRule.java:147)
        at com.sun.jersey.server.impl.uri.rules.ResourceClassRule.accept(ResourceClassRule.java:108)
        at com.sun.jersey.server.impl.uri.rules.RightHandPathRule.accept(RightHandPathRule.java:147)
        at com.sun.jersey.server.impl.uri.rules.RootResourceClassesRule.accept(RootResourceClassesRule.java:84)
        at com.sun.jersey.server.impl.application.WebApplicationImpl._handleRequest(WebApplicationImpl.java:1469)
        at com.sun.jersey.server.impl.application.WebApplicationImpl._handleRequest(WebApplicationImpl.java:1400)
        at com.sun.jersey.server.impl.application.WebApplicationImpl.handleRequest(WebApplicationImpl.java:1349)
        at com.sun.jersey.server.impl.application.WebApplicationImpl.handleRequest(WebApplicationImpl.java:1339)
        at com.sun.jersey.server.impl.container.grizzly.GrizzlyContainer._service(GrizzlyContainer.java:182)
        at com.sun.jersey.server.impl.container.grizzly.GrizzlyContainer.service(GrizzlyContainer.java:147)
        at org.glassfish.admin.rest.adapter.RestAdapter.service(RestAdapter.java:148)
        at com.sun.grizzly.tcp.http11.GrizzlyAdapter.service(GrizzlyAdapter.java:179)
        at com.sun.enterprise.v3.server.HK2Dispatcher.dispath(HK2Dispatcher.java:117)
        at com.sun.enterprise.v3.services.impl.ContainerMapper$Hk2DispatcherCallable.call(ContainerMapper.java:354)
        at com.sun.enterprise.v3.services.impl.ContainerMapper.service(ContainerMapper.java:195)
        at com.sun.grizzly.http.ProcessorTask.invokeAdapter(ProcessorTask.java:860)
        at com.sun.grizzly.http.ProcessorTask.doProcess(ProcessorTask.java:757)
        at com.sun.grizzly.http.ProcessorTask.process(ProcessorTask.java:1056)
        at com.sun.grizzly.http.DefaultProtocolFilter.execute(DefaultProtocolFilter.java:229)
        at com.sun.grizzly.DefaultProtocolChain.executeProtocolFilter(DefaultProtocolChain.java:137)
        at com.sun.grizzly.DefaultProtocolChain.execute(DefaultProtocolChain.java:104)
        at com.sun.grizzly.DefaultProtocolChain.execute(DefaultProtocolChain.java:90)
        at com.sun.grizzly.http.HttpProtocolChain.execute(HttpProtocolChain.java:79)
        at com.sun.grizzly.ProtocolChainContextTask.doCall(ProtocolChainContextTask.java:54)
        at com.sun.grizzly.SelectionKeyContextTask.call(SelectionKeyContextTask.java:59)
        at com.sun.grizzly.ContextTask.run(ContextTask.java:71)
        at com.sun.grizzly.util.AbstractThreadPool$Worker.doWork(AbstractThreadPool.java:532)
        at com.sun.grizzly.util.AbstractThreadPool$Worker.run(AbstractThreadPool.java:513)
        at java.lang.Thread.run(Thread.java:724)
    Caused by: org.apache.commons.logging.LogConfigurationException: User-specified log class 'org.apache.commons.logging.impl.Log4JLogger' cannot be found or is not useable.
        at org.apache.commons.logging.impl.LogFactoryImpl.discoverLogImplementation(LogFactoryImpl.java:874)
        at org.apache.commons.logging.impl.LogFactoryImpl.newInstance(LogFactoryImpl.java:604)
        at org.apache.commons.logging.impl.LogFactoryImpl.getInstance(LogFactoryImpl.java:336)
        at org.apache.commons.logging.impl.LogFactoryImpl.getInstance(LogFactoryImpl.java:310)
        at org.apache.commons.logging.LogFactory.getLog(LogFactory.java:685)
        at org.apache.axis2.deployment.DeploymentEngine.<clinit>(DeploymentEngine.java:76)
        ... 68 more
    |#]

    -- cut -- ... the end of the stack trace:

    [#|2013-06-27T11:34:40.714+0200|SEVERE|glassfish3.1.2|javax.enterprise.system.tools.admin.org.glassfish.deployment.admin|_ThreadID=84;_ThreadName=admin-thread-pool-4848(4);|Exception while invoking class com.sun.enterprise.web.WebApplication start method
    java.lang.Exception: java.lang.IllegalStateException: ContainerBase.addChild: start: org.apache.catalina.LifecycleException: org.apache.catalina.LifecycleException: org.apache.commons.logging.LogConfigurationException: User-specified log class 'org.apache.commons.logging.impl.Log4JLogger' cannot be found or is not useable.
        at com.sun.enterprise.web.WebApplication.start(WebApplication.java:138)
        ... (the remaining frames repeat the trace above, from EngineRef.start down to java.lang.Thread.run)
    |#]

    [#|2013-06-27T11:34:40.715+0200|SEVERE|glassfish3.1.2|javax.enterprise.system.core.com.sun.enterprise.v3.server|_ThreadID=84;_ThreadName=admin-thread-pool-4848(4);|Exception while loading the app|#]

    [#|2013-06-27T11:34:40.862+0200|SEVERE|glassfish3.1.2|javax.enterprise.system.tools.admin.org.glassfish.deployment.admin|_ThreadID=84;_ThreadName=admin-thread-pool-4848(4);|Exception while loading the app : java.lang.IllegalStateException: ContainerBase.addChild: start: org.apache.catalina.LifecycleException: org.apache.catalina.LifecycleException: org.apache.commons.logging.LogConfigurationException: User-specified log class 'org.apache.commons.logging.impl.Log4JLogger' cannot be found or is not useable.|#]

    [#|2013-06-27T11:34:40.875+0200|INFO|glassfish3.1.2|org.glassfish.admingui|_ThreadID=85;_ThreadName=admin-thread-pool-4848(5);|Exception Occurred :Error occurred during deployment: Exception while loading the app : java.lang.IllegalStateException: ContainerBase.addChild: start: org.apache.catalina.LifecycleException: org.apache.catalina.LifecycleException: org.apache.commons.logging.LogConfigurationException: User-specified log class 'org.apache.commons.logging.impl.Log4JLogger' cannot be found or is not useable. Please see server.log for more details.|#]

    Is it even possible to deploy Axis2 on GlassFish?
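    The root cause in that trace points at commons-logging rather than at GlassFish itself: the Axis2 WAR asks for Log4JLogger, but no log4j jar is visible to the web application's class loader. Two workarounds that follow from that diagnosis (a hedged sketch, not an official Axis2 or GlassFish recipe; the jar version shown is an assumption, and the system property is the standard commons-logging override):

        # Option 1: bundle a log4j 1.2.x jar inside the WAR so the
        # user-specified logger class can actually be loaded
        cp log4j-1.2.17.jar axis2/WEB-INF/lib/

        # Option 2: tell commons-logging to fall back to JDK logging
        # domain-wide, then restart the GlassFish domain
        asadmin create-jvm-options "-Dorg.apache.commons.logging.Log=org.apache.commons.logging.impl.Jdk14Logger"

    If either removes the ExceptionInInitializerError in DeploymentEngine's static initializer, the failure was a logging classpath issue, not a fundamental Axis2-on-GlassFish incompatibility.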

    Read the article

  • Push-Based Events in a Services Oriented Architecture

    - by Colin Morelli
    I have come to a point, in building a services oriented architecture (on top of Thrift), where I need to expose events and allow listeners. My initial thought was, "create an EventService" to handle publishing and subscribing to events. That EventService can use whatever implementation it desires to actually distribute the events. My client automatically round-robins service requests to available service hosts, which are determined using Zookeeper-based service discovery. So, I'd probably use JMS inside of EventService, mainly for the purpose of persisting messages (in the event that a service host for EventService goes down before it can distribute the message to all of the available listeners). When I started considering this, I began looking into the differences between Queues and Topics. Topics unfortunately won't work for me because, at least for now, all listeners must receive the message – even if they were down at the time the event was pushed, or hadn't made a subscription yet because they haven't completed startup (during deployment, for example); messages should be queued until the service is available. However, I don't want EventService to be responsible for handling all of the events. I don't think it should have the code to react to events inside of it. Each of the services should do what it needs with a given event. This would indicate that each service would need a JMS connection, which calls into question the value of having EventService at all (as the services could individually publish and subscribe to JMS directly). However, it also couples all of the services to JMS (when I'd rather there be a single service that's responsible for determining how to distribute events). What I had thought was to publish an event to EventService, which pulls a configuration of listeners from some configuration source (database, flat file, irrelevant for now). It replicates the message and pushes each copy back into a queue with information specific to that listener (so, if there are 3 listeners, 1 event would become 3 events in JMS). Then, another thread in EventService (which is replicated, running on multiple hosts) would be pulling from the queue, attempting to make the service call to the "listener", and returning the message to the queue (if the service is down) or discarding the message (if the listener completed successfully). tl;dr If I have an EventService that is responsible for receiving events and delegating service calls to "event listeners" (which are really just endpoints on other services), how should it know how to craft the service call? Should I create a generic "Event" object that is shared among all services? Then, the EventService can just construct this object and pass it to the service call. Or is there a better answer to this problem entirely?
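    One way to make the "generic Event" idea concrete (a minimal Java sketch; the class and method names here are hypothetical illustrations, not part of Thrift or of any existing EventService API): a small shared envelope that any service can publish, plus the fan-out step the question describes, where one inbound event becomes one queued delivery per configured listener.

        import java.time.Instant;
        import java.util.Collections;
        import java.util.List;
        import java.util.Map;

        // Hypothetical shared envelope: services publish these, listener endpoints receive them.
        final class Event {
            final String type;                  // e.g. "user.created"
            final String source;                // publishing service name
            final Instant occurredAt;
            final Map<String, String> payload;  // deliberately generic

            Event(String type, String source, Instant occurredAt, Map<String, String> payload) {
                this.type = type;
                this.source = source;
                this.occurredAt = occurredAt;
                this.payload = Collections.unmodifiableMap(payload);
            }
        }

        // Sketch of the EventService fan-out: one publish becomes N persistent deliveries,
        // so a listener that is down simply drains its backlog when it comes back.
        class EventFanOut {
            interface ListenerRegistry { List<String> listenersFor(String eventType); }
            interface DeliveryQueue { void enqueue(String listenerEndpoint, Event event); }

            private final ListenerRegistry registry; // the configuration source in the question
            private final DeliveryQueue queue;       // e.g. backed by JMS queues

            EventFanOut(ListenerRegistry registry, DeliveryQueue queue) {
                this.registry = registry;
                this.queue = queue;
            }

            void publish(Event e) {
                // Replicate the message once per listener (3 listeners -> 3 queued events).
                for (String endpoint : registry.listenersFor(e.type)) {
                    queue.enqueue(endpoint, e);
                }
            }
        }

    The envelope keeps EventService ignorant of event semantics: it only needs the type (for routing) and an opaque payload, which is exactly the decoupling the question is after.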

    Read the article

  • Software Architecture Analysis Method (SAAM)

    Software Architecture Analysis Method (SAAM) is a methodology used to determine how specific application quality attributes were achieved, and how possible future changes will affect those quality attributes, based on hypothetical case studies. Common quality attributes that can be evaluated with this methodology include modifiability, robustness, portability, and extensibility.

    Quality Attribute: Application Modifiability
    The modifiability quality attribute refers to how easy it will be to change the system in the future. This to me is a very open-ended attribute, because a business could decide to transform a Point of Sale (POS) system into a lead tracking system overnight. (Yes, this actually happened to me.) For SAAM to be properly applied to this attribute, specific hypothetical case studies need to be created and reviewed, because different scenarios will return different results depending on the amount of change involved. In the POS case, swapping out a payment gateway or adding an additional payment method would have scored very high in comparison to changing the system over to a lead management system. I personally would evaluate this quality attribute against the S.O.L.I.D. principles of software design; I have found from my experience that using S.O.L.I.D. in software design allows a system to accommodate change.

    Quality Attribute: Application Robustness
    The robustness quality attribute refers to how an application handles the unexpected. The unexpected includes, but is not limited to, anything not anticipated in the original design of the system: bad data, limited or no network connectivity, invalid permissions, or any unexpected application exception. I would personally evaluate this quality attribute based on how the system handles exceptions (see the sketch at the end of this entry). Robustness considerations:
    - Did the system stop, or did it handle the unexpected error?
    - Did the system log the unexpected error for future debugging?
    - What message did the user receive about the error?

    Quality Attribute: Application Portability
    The portability quality attribute refers to the ease of porting an application to run on a new operating system or device. For example, it is much easier to adapt an ASP.NET website so it is accessible from a PC, Mac, iPhone, Android phone, mini PC, or tablet than to do the same with a desktop application written in VB.NET, because a lot more work would be involved to get the desktop app to a point where porting it to the various environments and devices would be viable. I would personally evaluate this quality attribute against each new environment the hypothetical case study identifies, paying particular attention to the following. Portability considerations:
    - Hardware dependencies
    - Operating system dependencies
    - Data source dependencies
    - Network dependencies and availability

    Quality Attribute: Application Extensibility
    The extensibility quality attribute refers to the ease of adding new features to an existing application without impacting existing functionality. I would personally evaluate this quality attribute against each new scenario with the following in mind. Extensibility considerations:
    - Hard-coded variables versus configurable variables
    - Application documentation (external documents and codebase documentation)
    - The use of S.O.L.I.D. design principles
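    To make the robustness considerations concrete, here is a minimal Java sketch (the class, logger, and message wording are illustrative placeholders, not taken from any system mentioned above): handle the unexpected error instead of stopping, log it with context for future debugging, and give the user an honest message rather than a raw stack trace.

        import java.util.logging.Level;
        import java.util.logging.Logger;

        public class PaymentProcessor {
            private static final Logger LOG = Logger.getLogger(PaymentProcessor.class.getName());

            /** Never lets an unexpected error kill the request. */
            public String process(String orderId) {
                try {
                    chargeGateway(orderId); // may fail on bad data, lost network, invalid permissions...
                    return "Payment accepted.";
                } catch (Exception e) {
                    // 1. The system handled the unexpected error; it did not stop.
                    // 2. The error is logged, with context, for future debugging.
                    LOG.log(Level.SEVERE, "Payment failed for order " + orderId, e);
                    // 3. The user receives a clear message, not a stack trace.
                    return "We could not complete your payment. Please try again or contact support.";
                }
            }

            private void chargeGateway(String orderId) { /* gateway call elided */ }
        }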

    Read the article

  • Alcatel-Lucent: Enterprise 2.0: The Top 5 Things I Would Do Over

    - by Kellsey Ruppel
    Happy Monday! Does anyone else feel as if the weekend went entirely too quickly? At least for those of us in the United States, we have the 4th of July holiday next week to look forward to. This week on the blog, we are going to focus on "WebCenter by Example" and highlight best practices from customers and partners. I recently came across this article and I think it is a great example of how we can learn from one another when it comes to social collaboration adoption. Do you agree with Jem? What things or best practices have you learned in your organizations?

    By Jem Janik, Enterprise community manager, Alcatel-Lucent

    Not so long ago, Engage, the Alcatel-Lucent employee social network and collaboration platform, celebrated its third birthday. With more than 25,000 members actively interacting each month, Engage has been a big enough success that it's been the subject of external articles, and often those of us who helped launch it will go out and speak about what aspects contributed to that success. Hindsight is still 20/20, and what it takes to successfully launch an enterprise 2.0 community is fairly well known now. Today I want to tell you what I suspect you really want to know about. As the enterprise community manager for Engage, after three years in, what are the top 5 things I wish we (and I mostly mean me) could do over?

    #5 Define your analytics solution from the start
    There is so much to do when you launch a community, and initially growing it without complete chaos is quite a task. It doesn't take too long to get to a point where you want to focus your continued efforts on growing company collaboration. Do people truly talk across regional boundaries, or have we shifted siloed conversations to a new platform? Is there one organization that doesn't interact with another? If you are lucky you'll have someone on your community team well versed in the world of databases and SQL queries, but it takes time to figure out what backend analytics data actually means. Professional support can be expensive, and it may be hard to justify later, as it typically has the community manager as its only main customer. Figure out what you think you'll want to know and how to get it early on. The sooner the better, even if it doesn't seem that critical at the time.

    #4 Lobbies guide you to the right places
    One piece of feedback that comes up more and more as we keep growing Engage is that it's hard to find stuff, or new people are not sure where to start. Something we're doing now is defining some general topic areas of interest to act like "lobbies" into the platform, with some common hashtags to go with them. I liken this to walking into a large medical or professional building for the first time: there are hundreds of offices, and you look to a sign in the lobby to get guided to the right place for you. We're building that sign for members now, but again we missed the boat, as the majority of the company has already had their initial Engage experience.

    #3 Clean up, clean up, clean up
    Knowledge work and folksonomies are messy! The day we opened the doors to Engage I would have said we should keep everything ever created in Engage, with the argument that it was a window into our collective knowledge so nothing should go. Well, 6000+ groups and 200,000+ pieces of content later, I've changed my mind. As previously mentioned, with too much "stuff" the system can be overwhelming to new members, and it makes it harder to get what you're looking for. Do we need that help document about a tool we no longer have? NO! Do we need that group that had 1 document and 2 discussions in the last two years? NO! Should we only have one group about a given topic instead of 4? YES! Last fall, Engage defined a cleanup process for groups not used for a long time. We also formed a volunteer cleaning army who are extra eyes on the hunt for "stuff" that should be updated, merged, or deleted. It's better late than never, but in line with what's becoming a theme, I wish these efforts had started earlier.

    #2 Communications & local community management
    One of the most important aspects of my job is to make sure people who should be talking to each other are actually doing it: connecting people to the other people they should know, the groups they should join, a piece of content that shouldn't be missed. I have worked both inside and outside of communications teams, and they are the best-informed people in your company. They know when something big is coming, how it impacts employees, how it fits with strategy, who else knows more, and so on. Having communications professionals who are power users can help scale up community management, because they are already so well connected. They also need the platform skills to pay attention without suffering email overload, to grab someone's attention, and so on. I wish I'd figured this out much earlier. If I had, I would have groomed more communications colleagues into advocates and power members right at the start.

    #1 Grooming advocates vs. natural advocates
    I've just alluded to this above. The very best advocates are those who naturally embrace your platform and automatically start to see new ways to work within it. Those advocates seem to come out of the woodwork naturally, since some of them are early adopters. Not surprisingly, our best advocates today are those same people who were willing to come kick the tires when the community was completely empty. Unfortunately, we didn't get a global spread of those natural advocates. I did ask around when we first launched for other people who might be good candidates, but didn't push too hard, as there were so many other things to get ready. That was a mistake. If I could get a redo, I would have formally asked for people to be assigned where there were gaps and groomed them into advocates. Today, as we find new advocates to fill the gaps, people are hesitant, as the initial set has three years of practice and are ahead-of-the-curve power members; it definitely would have been easier earlier on.

    As fairly early adopters of corporate-scale enterprise collaboration, there hasn't been a roadmap to follow as we've grown Engage, which is part of the fun! It's clear a lot of issues are more easily tackled the earlier you identify and begin to correct them, and I've identified the main five I wish I could redo. In the spirit of collaboration, I hope someone else learns from my mistakes! View the original article by Jem here.

    Read the article

  • The enterprise vendor con - connecting SSDs using SATA 2 (3Gbit/s), thus limiting their performance

    - by tonyrogerson
    When comparing SSD against hard drive performance, it really makes me cross when folk think comparing an array of SSDs running at 3Gbit/s to hard drives running at 6Gbit/s is somehow valid. In a paper from DELL (http://www.dell.com/downloads/global/products/pvaul/en/PowerEdge-PowerVaultH800-CacheCade-final.pdf) on increasing database performance using the DELL PERC H800 with solid state drives, they compare four SSDs connected at 3Gbit/s against ten 10Krpm drives connected at 6Gbit/s [Tony slaps forehead while shouting DOH!].

    It is true that in the case of hard drives it probably doesn't make much difference whether it's 3Gbit or 6Gbit, because SAS and SATA are both end-to-end protocols rather than a shared bus architecture like SCSI, so the hard drive doesn't share bandwidth and probably can't get near the 600MiBytes/second throughput that 6Gbit gives unless you are doing contiguous reads. In my own tests on a single 15Krpm SAS disk using IOMeter (8 worker threads, queue depth of 16, a stripe size of 64KiB, and an 8KiB transfer size on a drive formatted with an allocation size of 8KiB, for a 100% sequential read test) I only get 347MiBytes per second sustained throughput at an average latency of 2.87ms per IO, equating to 44.5K IOps. OK, if that was 3Gbit it would be less – around 280MiBytes per second. Oh, but wait a minute [...fingers tap desk]...

    You'll struggle to find in the commodity space an SSD that doesn't have the SATA 3 (6Gbit) interface. SSDs are fast: not only low latency and high IOps, they also offer a very large sustained transfer rate. Consider the OCZ Agility 3 – it so happens that in my master's dissertation I did the same test, but on a different box, and got 374MiBytes per second at an average latency of 2.67ms per IO, equating to 47.9K IOps. The cost of a 240GB Agility 3 is £174.24 (http://www.scan.co.uk/products/240gb-ocz-agility-3-ssd-25-sata-6gb-s-sandforce-2281-read-525mb-s-write-500mb-s-85k-iops), but that same drive in a box connected with SATA 2 (3Gbit) would only yield around 280MiBytes per second, thus losing almost 100MiBytes per second of throughput – and a ton of IOps too.

    So why the hell are "enterprise" vendors still only connecting SSDs at 3Gbit? Well, my conspiracy theory states that they have no interest in you moving to SSD because they'll lose so much money. The argument that they use SATA 2 doesn't wash: SATA 3 has been out for some time now, and all the commodity stuff you buy uses it.

    Consider the cost, not in terms of price per GB but price per IOps: SSDs absolutely thrash hard drives on that. It used to be true that the opposite also held – that hard drives thrashed SSDs on price per GB – but is that true now? I'm not so sure. A 300GByte 2.5" 15Krpm SAS drive costs £329.76 ex VAT (http://www.scan.co.uk/products/300gb-seagate-st9300653ss-savvio-15k3-25-hdd-sas-6gb-s-15000rpm-64mb-cache-27ms), which equates to £1.09 per GB, compared to a 480GB OCZ Agility 3 costing £422.10 ex VAT (http://www.scan.co.uk/products/480gb-ocz-agility-3-ssd-25-sata-6gb-s-sandforce-2281-read-525mb-s-write-410mb-s-30k-iops), which equates to £0.88 per GB. OK, I compared an "enterprise" hard drive with a "commodity" SSD, so things get a little more complicated here: most "enterprise" SSDs are SLC and most commodity ones are MLC; SLC gives more performance and wear, and I'll talk about that another day. For now though, don't get sucked in by vendor marketing: SATA 2 (3Gbit) just doesn't cut it; SSDs need 6Gbit to breathe, and they are pushing even that.

    Alas, SSDs are connected using SATA, so all the controllers I've seen thus far from HP and DELL only do SATA 2 – deliberate? Well, I'll let you decide on that one.
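    The bandwidth ceilings quoted above drop straight out of SATA's 8b/10b encoding – ten bits travel on the wire for every byte of data – and the per-GB and per-IOps figures are simple division. A small Java sketch to check the arithmetic (the prices and measured IOps are the article's own numbers; the encoding overhead is standard SATA behaviour):

        public class SataMath {
            public static void main(String[] args) {
                // SATA moves 10 line bits per data byte (8b/10b encoding),
                // so the payload ceiling in bytes/sec is line rate / 10.
                double sata2 = 3.0e9 / 10;   // 300,000,000 B/s
                double sata3 = 6.0e9 / 10;   // 600,000,000 B/s
                double mib = 1024 * 1024;
                System.out.printf("SATA 2 ceiling: %.0f MiB/s%n", sata2 / mib); // ~286 MiB/s
                System.out.printf("SATA 3 ceiling: %.0f MiB/s%n", sata3 / mib); // ~572 MiB/s

                // Price per GB, using the article's ex-VAT prices.
                System.out.printf("15Krpm SAS : GBP %.2f/GB%n", 329.76 / 300.0); // ~1.10
                System.out.printf("Agility 3  : GBP %.2f/GB%n", 422.10 / 480.0); // ~0.88

                // Price per IOps, using the article's measured IOps figures.
                System.out.printf("15Krpm SAS : GBP %.5f/IOps%n", 329.76 / 44500); // ~0.00741
                System.out.printf("Agility 3  : GBP %.5f/IOps%n", 174.24 / 47900); // ~0.00364
            }
        }

    The ~286MiB/s ceiling is why a drive capable of 374MiB/s on SATA 3 collapses to roughly 280MiB/s when an "enterprise" controller caps it at 3Gbit.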

    Read the article

  • Why Your ERP System Isn't Ready for the Next Evolution of the Enterprise

    - by ken.pulverman
    ERP has been the backbone of enterprise software. The data held in your ERP system is the core of most companies. Efficiencies gained through accounting and resource allocation in ERP software have literally saved companies trillions of dollars. Not only does everything seem to be fine with your ERP system, you haven't had to touch it in years. Why aren't you ready for what comes next? Well, judging by the growth rates in the space (Oracle posted only a 3% growth rate, while SAP showed a 12% decline), there hasn't been much modernization going on, just a little replacement activity.

    If you are like most companies, your ERP system is connected to a proprietary middleware solution that only effectively talks with a handful of other systems you might have acquired from the same vendor. Connecting your legacy system through proprietary middleware is expensive and brittle, and if you are like most companies, you were only willing to pay an SI so much before you said "enough." So your ERP is working. It's humming along. You might not be able to get Order to Promise information when you take orders in your call center, but there are workarounds that work just fine. So what's the problem?

    The problem is that you built your business around your ERP core, and now there is such pressure to innovate your business processes to keep up that you need a whole new slew of modern apps, and you need ERP data to be accessible from everywhere. Every time you change a sales territory or a comp plan or change a benefits provider, your ERP system, literally the economic brain of your business, needs to know what's going on. And this giant need to access and provide information to your ERP is only growing. What makes matters even more challenging is that apps today come in every flavor under the Sun™: SaaS, cloud, managed, hybrid, outsourced, composite... and they all have different integration protocols.

    The only easy way to get ahead of all this is to modernize the way you connect and run your applications. Unlike the middleware solutions of yesteryear, modern middleware is effectively the operating system of the enterprise. In the same way that you rely on Apple, Microsoft, and Google to find a video driver for your 23" monitor or to ensure that Word or Keynote runs, modern middleware takes care of intra-application connectivity and process execution. It effectively allows you to take ERP out of the middle while ensuring connectivity to your vital data for anything you want to do. The diagram below reflects that change.

    In this model, the hegemony of ERP is over. It too has to become a stealthy modern app to help you quickly adapt to business changes while managing vital information. And through modern middleware it will connect to everything. So yes, ERP as we've known it is dead, but long live ERP as a connected application member of the modern enterprise.

    I want to thank Andrew Zoldan, Group Vice President, Oracle Manufacturing Industries Business Unit, for introducing me to how some of his biggest customers have benefited by modernizing their applications infrastructure and making ERP a connected application.

    by John Burke, Group Vice President, Applications Business Unit

    Read the article

  • What is the relationship between Turing Machine & Modern Computer? [closed]

    - by smwikipedia
    I have heard a lot that modern computers are based on the Turing machine. I just cannot build a bridge between a conceptual Turing machine and a modern computer. Could someone help me build this bridge? Below is my current understanding. I think the computer is a big general-purpose Turing machine. Each program we write is a small specific-purpose Turing machine. The classical Turing machine does its job based on its input and its current internal state, and so do our programs. Let's take a running program (a process) as an example. We know that in the process's address space there are areas for the stack, the heap, and code. A classical Turing machine doesn't have the ability to remember many things, so we borrow the concept of a stack from the push-down automaton. The heap and stack areas contain the state of our specific-purpose Turing machine (our program). The code area represents the logic of this small Turing machine. And various I/O devices supply input to this Turing machine.
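    To make the bridge more tangible, here is a toy Turing machine in Java (purely illustrative; the rule encoding and state names are made up for this sketch). The fixed rule table plays the role the question assigns to the code area, while the tape, head position, and current state are the mutable process state:

        import java.util.HashMap;
        import java.util.Map;

        public class TuringMachine {
            // "Code area": a transition table, keyed by state + "," + symbol read.
            // Each value is {nextState, symbolToWrite, move}, where move is "L" or "R".
            private final Map<String, String[]> rules = new HashMap<>();
            // "Process state": an unbounded tape (blank = '_'), plus head position and state.
            private final Map<Integer, Character> tape = new HashMap<>();

            public void rule(String state, char read, String next, char write, String move) {
                rules.put(state + "," + read, new String[]{next, String.valueOf(write), move});
            }

            public String run(String state, String input) {
                for (int i = 0; i < input.length(); i++) tape.put(i, input.charAt(i));
                int head = 0;
                while (!state.equals("HALT")) {
                    char read = tape.getOrDefault(head, '_');
                    String[] r = rules.get(state + "," + read);
                    if (r == null) break;              // no applicable rule: halt
                    tape.put(head, r[1].charAt(0));    // write
                    head += r[2].equals("R") ? 1 : -1; // move
                    state = r[0];                      // change state
                }
                StringBuilder out = new StringBuilder();
                for (int i = 0; tape.containsKey(i); i++) {
                    char c = tape.get(i);
                    if (c != '_') out.append(c);       // drop blanks from the result
                }
                return out.toString();
            }

            public static void main(String[] args) {
                // A tiny "program": flip every bit, halting at the first blank cell.
                TuringMachine tm = new TuringMachine();
                tm.rule("S", '0', "S", '1', "R");
                tm.rule("S", '1', "S", '0', "R");
                tm.rule("S", '_', "HALT", '_', "R");
                System.out.println(tm.run("S", "10110")); // prints 01001
            }
        }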

    Read the article

  • Get Your Enterprise Working With Oracle On Track Communication 1.0

    - by Josh Lannin
    The On Track Development team is very pleased to announce that On Track is available today for our customers to download and evaluate. To learn more about what On Track does, start with our whitepaper and datasheet. If you are a developer, take a look at our documentation and samples posted to our OTN page. For this first blog post, I'll speak to several notable points about our product.

    Graceful Escalation via Conversations: On Track addresses the "collaboration problem" through a single guiding principle – graceful escalation – within the construct of a Conversation. In On Track, collaboration is based on a context (called a "Conversation") that gracefully escalates in form, structure, and content, as dictated by the particular needs of a given collaboration. Within that context, On Track provides a rich set of tools to choose from. These tools provide for communication, coordination, content management, organization, decision making, and analysis – all essential aspects of collaboration, but not all of them are essential all of the time. Every collaborative interaction will evolve differently. Some will evolve to represent work spreading over the course of years and involving a large, distributed team, while others may involve few people and not evolve at all. Regardless, all collaborative contexts are built from the same parts, utilize the same concepts, and start the same way. The principle of graceful escalation is that you only use the tools and structure you need, so you only incur the complexity you need.

    Purposeful Collaboration: Through application integration, On Track Conversations bring enterprise application users the communication and collaboration capabilities required to complete business processes. By association with specific processes or business objects, Conversations extend the possible interactions and broaden participation to internal or external non-application users, and provide a sophisticated interaction experience, all the while enhancing the data set within the owning application. Purposeful collaboration not only needs to happen in the context of applications, it must support a full range of real-time and long-running interactions to provide the greatest value.

    Multi Client, Multi Modal: This On Track 1.0 product release includes same-day availability of multiple clients, including iPhone and iPad applications (now available on the App Store), a fully capable and accessible Outlook add-in, and our browser web client. With each client we have sought to leverage the strengths of each unique device: our iPhone client supports picture and voice posts, the iPad is optimized for meeting-room situations and document viewing, and our Outlook add-in allows you to take emails in context and bring them into On Track. In addition to supporting a diverse array of clients, On Track provides a unified multi-modal experience, starting with basic messages and moving through to integrated documents with live annotations, snapshots, application sharing, and voice.

    Next Generation Web Architecture: We believe On Track will help move the bar higher for what users can expect from all web applications, most notably ones that involve real-time activity. On Track is built from the ground up on an innovative, real-time architecture that leverages the extensive push capabilities of our server. Whether you are receiving a new message, viewing where crowds of people are collaborating, or doing live annotation on a document with a set of people, that information comes to you immediately, without refreshes or moving back and forth between pages. We've leveraged this core architecture across the product experience and raised the user-experience bar for this type of application. These capabilities are based on open standards and protocols and are fully extensible by anyone, enabling sophisticated integrations to be created with a wide variety of both legacy and next-generation applications.

    Agile Product Development: As a product team we operate using continuous feedback and modified agile development methodologies. We have thousands of active internal Oracle users who have helped pilot our product for critical business functions, and the On Track product development team uses our product as our primary vehicle for all our collaboration. Additionally, we've been working with early-access customers who are adopting our technology and providing us valuable feedback, which our process has rapidly turned into improvements to our software. On Track's agility extends to our server as well, which is built to scale and is very simple to install and configure.

    We are pleased to make this product announcement and encourage you to join us on Facebook or follow us on Twitter, as well as checking back here for the latest product information.

    Read the article

  • Turing Machine & Modern Computer

    - by smwikipedia
    I have heard a lot that modern computers are based on the Turing machine. I'd like to share my understanding and hear your comments. I think the computer is a big general-purpose Turing machine. Each program we write is a small specific-purpose Turing machine. The classical Turing machine does its job based on its input and its current internal state, and so do our programs. Let's take a running program (a process) as an example. We know that in the process's address space there are areas for the stack, the heap, and code. A classical Turing machine doesn't have the ability to remember many things, so we borrow the concept of a stack from the push-down automaton. The heap and stack areas contain the state of our specific-purpose Turing machine (our program). The code area represents the logic of this small Turing machine. And various I/O devices supply input to this Turing machine. The above is my naive understanding of the working paradigm of a modern computer. I can't wait to hear your comments. Thanks very much.

    Read the article

  • Exalytics and Oracle Business Intelligence Enterprise Edition (OBIEE) Partner Workshop

    - by mseika
    Workshop Description
    Oracle Fusion Middleware 11g is the #1 application infrastructure foundation. It enables enterprises to create and run agile and intelligent business applications and maximize IT efficiency by exploiting modern hardware and software architectures. Oracle Exalytics Business Intelligence Machine is the world's first engineered system specifically designed to deliver high-performance analysis, modeling, and planning. Built using industry-standard hardware, market-leading business intelligence software, and in-memory database technology, Oracle Exalytics is an optimized system that delivers unmatched speed, visualizations, and scalability for Business Intelligence and Enterprise Performance Management applications.

    This FREE hands-on partner workshop highlights both the hardware and software components that are engineered to work together to deliver Oracle Exalytics: an optimized version of the industry-leading Oracle TimesTen In-Memory Database with analytic extensions, a highly scalable Oracle server designed specifically for in-memory business intelligence, and Oracle's proven Business Intelligence Foundation with enhanced visualization capabilities and performance optimizations. This workshop will provide hands-on experience with Oracle's latest engineered system. Topics covered will include the TimesTen In-Memory Database and the new Summary Advisor for Exalytics, the technical details (including mobile features) of the latest release of visualization enhancements for OBIEE, and technical updates on Essbase. After taking this course, you will be well prepared to architect, build, demo, and implement an end-to-end Exalytics solution. You will also be able to extend your current analytical and enterprise performance management application implementations with numerous Oracle technologies specifically enhanced to take advantage of the compute capacity and in-memory capabilities of Oracle Exalytics. If you are a BI or Data Warehouse architect, developer, or consultant, you don't want to miss this 3-day workshop. Register Now!

    Presentations
    - Exalytics Architectural Overview
    - Upgrade and Lifecycle Management
    - TimesTen for Exalytics
    - Summary Advisor Utility
    - Essbase and EPM System on Exalytics
    - Dashboard and Analysis Interactions
    - OBIEE 11.1.1.6 Features and Advanced Topics

    Lab Outline
    The labs showcase Oracle Exalytics core components and functionality and build expertise in Oracle Business Intelligence 11.1.1.6 new features and updates from prior releases. The hands-on activities are based on an Oracle VirtualBox image with software and training samples pre-installed.
    - Lab Environment Setup
    - Creating and Working with Oracle TimesTen In-Memory Database
    - Running Summary Advisor Utility
    - Working with Exalytics Visualization Features – Dashboard and Analysis Interactions

    Audience
    - Oracle Partners
    - BI and EPM Application Developers and Implementers
    - System Integrators and Solution Consultants
    - Data Warehouse Developers
    - Enterprise Architects

    Prerequisites
    - Experience with and understanding of OBIEE 11g is required
    - Previous attendance of the Oracle Business Intelligence Foundation Suite Workshop or the BIEE 11g Introduction Workshop is highly recommended
    - Good understanding of data warehousing and data modeling for reporting and analysis purposes
    - Strong experience with database technologies preferred

    Equipment Requirements
    This workshop requires attendees to provide their own laptops for this class. Attendee laptops must meet the following minimum hardware/software requirements.
    Hardware:
    - Minimum 8GB RAM
    - 60 GB free space (includes staging)
    - USB 2.0 port (at least one available)
    - It is strongly recommended that you bring a mouse; you will be working in a development environment and using the mouse heavily.
    Software:
    - One of the following operating systems: a 64-bit Windows host/laptop OS, or a 64-bit host/laptop OS with a Windows VM (XP, Server, or Win 7, BIC2g, etc.)
    - Internet Explorer 7.x/8.x or Firefox 3.5.x
    - WinRAR or 7-Zip utility to unzip workshop files, downloadable from http://www.win-rar.com/download.html or http://www.7zip.com/
    - Oracle VirtualBox 4.0.2 or higher, downloadable from http://www.virtualbox.org/wiki/Downloads
    - CPU virtualization mode needs to be enabled; we will provide guidance on the day of the workshop.
    Attendees will be given a VirtualBox image containing a pre-installed Oracle Exalytics environment.

    Schedule
    This workshop is 3 days. Times vary by country!
    - 9:00am: Sign-in and technical setup
    - 9:30am: Workshop starts
    - 5:00pm: Workshop ends
    Oracle Exalytics and Business Intelligence (OBIEE) Workshop, December 11-13, 2012: Oracle BVP, Birmingham, UK. Register Here.
    Questions? Send email to: [email protected]
    Oracle Platform Technologies Enablement Services

    Read the article

  • EPM 11.1.2.2 Architecture: Financial Performance Management Applications

    - by Marc Schumacher
    Financial Management can be accessed either by a browser-based client or by SmartView. Starting with release 11.1.2.2, the Financial Management Windows client no longer accesses the Financial Management consolidation server; all tasks that require an online connection (e.g. load and extract tasks) can only be done using the web interface. Any client connection initiated by a browser or SmartView is sent to the Oracle HTTP Server (OHS) first. Based on the path given in the URL (e.g. hfmadf, hfmofficeprovider), OHS decides to forward the request either to the new Financial Management web application based on the Oracle Application Development Framework (ADF) or to the .NET-based application serving SmartView retrievals running on Internet Information Server (IIS). Any requests sent to the ADF web interface that need to be processed by the Financial Management application server are sent to IIS using the HTTP protocol and forwarded onward using DCOM to the Financial Management application server. SmartView requests, which are processed by IIS in the first place, are forwarded to the Financial Management application server using DCOM as well. The Financial Management application server uses OLE DB database connections via native database clients to talk to the Financial Management database schema. Communication between the Financial Management DME Listener, which handles requests from EPMA, and the Financial Management application server is based on DCOM.

    Unlike most other components, Essbase Analytics Link (EAL) does not have an end-user interface. The only user interface is a plug-in for the Essbase Administration Services console, which is used for administration purposes only. End users interact with a Transparent or Replicated Partition that is created in Essbase and populated with data by EAL. The Analytics Link Server deployed on WebLogic communicates through the HTTP protocol with the Analytics Link Financial Management Connector that is deployed in IIS on the Financial Management web server. The Analytics Link Server interacts with the Data Synchronization Server using the EAL API. The Data Synchronization Server acts as a target of a Transparent or Replicated Partition in Essbase and uses a native database client to connect to the Financial Management database. The Analytics Link Server uses JDBC to connect to relational repository databases and the Essbase JAPI to connect to Essbase.

    As with most Oracle EPM System products, browser-based clients and SmartView can be used to access Planning. The Java-based Planning web application is deployed on WebLogic, which is configured behind an Oracle HTTP Server (OHS). Communication between Planning and the Planning RMI Registry Service is done using Java Remote Method Invocation (RMI). Planning uses JDBC to access relational repository databases and talks to Essbase using the CAPI. Be aware that besides the Planning system database, a dedicated database schema is needed for each application that is set up within Planning.

    Like Planning, Profitability and Cost Management (HPCM) has a pretty simple architecture. Besides the browser-based clients and SmartView, a web service consumer can be used as a client too. All clients access the Java-based web application deployed on WebLogic through Oracle HTTP Server (OHS). Communication between Profitability and Cost Management and the EPMA Web Server is done using the HTTP protocol. JDBC is used to access the relational repository databases as well as data sources, and the Essbase JAPI is utilized to talk to Essbase.

    For Strategic Finance, two clients exist: SmartView and a Windows client. While SmartView communicates through the web layer to the Strategic Finance Server, the Strategic Finance Windows client makes a direct connection to the Strategic Finance Server using RPC calls. Connections from Strategic Finance Web as well as from Strategic Finance Web Services to the Strategic Finance Server are made using RPC calls too. The Strategic Finance Server uses its own file-based data store. JDBC is used to connect to the EPM System Registry from the web and application layers.

    Disclosure Management has three kinds of clients. While the browser-based client and SmartView interact with the Disclosure Management web application directly through Oracle HTTP Server (OHS), Taxonomy Designer does not connect to the Disclosure Management server. Communication to relational repository databases is done via JDBC; to connect to Essbase, the Essbase JAPI is utilized.
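    Several of the hops above are plain Java RMI (the Planning RMI Registry Service, for example). As a generic illustration of that pattern – a sketch only, with a hypothetical interface, service name, host, and port, not Planning's actual API – a client resolves a remote stub through the registry and then calls it like a local object:

        import java.rmi.Remote;
        import java.rmi.RemoteException;
        import java.rmi.registry.LocateRegistry;
        import java.rmi.registry.Registry;

        // Hypothetical remote interface; a real product ships its own.
        interface StatusService extends Remote {
            String ping() throws RemoteException;
        }

        public class RmiClientSketch {
            public static void main(String[] args) throws Exception {
                // Look the service up by name in the remote RMI registry...
                Registry registry = LocateRegistry.getRegistry("planning-host", 11333);
                StatusService svc = (StatusService) registry.lookup("StatusService");
                // ...then invoke it; serialization and transport are handled by RMI.
                System.out.println(svc.ping());
            }
        }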

    Read the article

  • Resources on how to architect an iPhone application?

    - by Frank Martin
    What resources can you recommend for learning how to architect an iPhone application? Background of the question: most of the resources explain the usage of a single class or concept (and I appreciate that a lot for learning about the specific topic), but as far as I can see they unfortunately fail to describe how to put things together for typical real-world applications.

    Read the article

  • Oracle Enterprise Computing Summit / EM (Oracle Japan announcement)

    - by Oracle Japan Marketing
    [Japanese-language newsletter from Oracle Japan Marketing; most of the original text and an embedded stylesheet did not survive extraction. Recoverable details: promotions for the Oracle Enterprise Computing Summit and Oracle EPM & BI Summit, customer stories (LIXIL and two Oracle E-Business Suite implementations), and a July 2011 Tokyo seminar schedule: 7/6 10:30-18:00; 7/7 14:00-19:00 Java SE 7; 7/13 13:30-16:45; 7/15 13:00-18:00; 7/20 9:30-17:50 and 13:30-17:00; 7/25 14:00-17:00 MySQL; 7/26 13:30-17:45 Oracle Enterprise Computing Summit.]

    Read the article

  • ASP.NET WebAPI Security 2: Identity Architecture

    - by Your DisplayName here!
    Pedro has beaten me to the punch with a detailed post (and diagram) about the WebAPI hosting architecture. So go read his post first, then come back so we can have a closer look at what that means for security.

    The first important takeaway is that WebAPI is hosting independent – currently it ships with two host integration implementations, one for ASP.NET (aka web host) and one for WCF (aka self host). Pedro nicely shows the integration into the web host. Self hosting is not done yet, so we will mainly focus on the web hosting case and I will point out security related differences where they exist.

    The interesting part for security (amongst other things, of course) is the HttpControllerHandler (see Pedro’s diagram) – this is where the host specific representation of an HTTP request gets converted to the WebAPI abstraction (called HttpRequestMessage). The ConvertRequest method does the following: it creates a new HttpRequestMessage, copies URI, method and headers from the HttpContext, and copies HttpContext.User to the Properties<string, object> dictionary on the HttpRequestMessage. The key used for that can be found on HttpPropertyKeys.UserPrincipalKey (which resolves to “MS_UserPrincipal”). The consequence is that WebAPI receives whatever IPrincipal has been set by the ASP.NET pipeline (in the web hosting case).

    Common questions are:

    Are there situations where this property does not get set? Not in ASP.NET – the DefaultAuthenticationModule in the HTTP pipeline makes sure HttpContext.User (and Thread.CurrentPrincipal – more on that later) are always set, either to some authenticated user or to an anonymous principal. This may be different in other hosting environments (again, more on that later).

    Why so generic? Keep in mind that WebAPI is hosting independent and may run on a host that materializes identity completely differently compared to ASP.NET (or .NET in general). This gives them a way to evolve the system in the future.

    How does WebAPI code retrieve the current client identity? HttpRequestMessage has an extension method called GetUserPrincipal() which returns the property as an IPrincipal.

    A quick look at self hosting shows that the moral equivalent of HttpControllerHandler.ConvertRequest() is HttpSelfHostServer.ProcessRequestContext(). Here the principal property only gets set when the host is configured for Windows authentication (an inconsistency).

    Do I like that? Well – yes and no. Here are my thoughts:

    I like that it is very straightforward to let WebAPI inherit the client identity context of the host. This might not always be what you want – think of an ASP.NET app that consists of UI and APIs, where the UI might use Forms authentication and the APIs token based authentication. In that case it would be good if the two parts lived in separate security worlds.

    It makes total sense to have this generic hand-off point for identity between the host and WebAPI. It also makes total sense for WebAPI plumbing code (especially handlers) to use the WebAPI specific identity abstraction. But – c’mon, we are running on .NET, and the way .NET represents identity is via IPrincipal/IIdentity. That’s what every .NET developer on this planet is used to. So I would like to see a User property of type IPrincipal on ApiController.

    I don’t like the fact that Thread.CurrentPrincipal is not populated. T.CP is a well established pattern as a one-stop shop to retrieve the client identity on .NET. That makes a lot of sense – even if the name is misleading at best. There might be existing library code you want to call from WebAPI that makes use of T.CP (e.g. PrincipalPermission, or a simple .Name or .IsInRole()). Having the client identity as an ambient property is useful for code that does not have access to the current HTTP request (for calling GetUserPrincipal()).

    I don’t like the fact that the client identity conversion from host to WebAPI is inconsistent. This makes writing security plumbing code harder. I think the logic should always be: if the host has a client identity representation, copy it; if not, set an anonymous principal on the request message.

    Btw – please don’t annoy me with the “but T.CP is static, and static is bad for testing” chant. T.CP is a getter/setter and, in fact, I find it beneficial to be able to set different security contexts in unit tests before calling into some logic. And, in case you have wondered – T.CP is indeed thread static (and the name comes from a time when a logical operation was bound to a thread – which is not true anymore). But all thread creation APIs in .NET actually copy T.CP to the new thread they create. This has been the case since .NET 2.0 and is certainly an improvement compared to how Win32 does things.

    So to sum it up: the host plumbing copies the host client identity to WebAPI (this is not perfect yet, but will surely be improved) – or in other words, the current WebAPI bits don’t ship with any authentication plumbing, but solely use whatever authentication (and thus client identity) is set up by the host. WebAPI developers can retrieve the client identity from the HttpRequestMessage. Hopefully my proposed changes around T.CP and the User property on ApiController will be added.

    In the next post, I will detail how to add WebAPI specific authentication support, e.g. for Basic Authentication and tokens. This includes integrating the notion of claims based identity. After that we will look at the built-in authorization bits and how to improve them as well. Stay tuned.
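    Since WebAPI itself is C# territory, here is the ambient-identity pattern the post argues for expressed as a small Java sketch instead: a thread-scoped current principal that newly created threads inherit, roughly analogous to Thread.CurrentPrincipal. All names here are illustrative and are not part of any WebAPI or .NET API.

    import java.security.Principal;

    // A minimal sketch of an ambient "current principal", analogous to
    // Thread.CurrentPrincipal in .NET. Names are illustrative only.
    public final class CurrentPrincipal {
        // InheritableThreadLocal makes child threads start with the parent's
        // value, similar to how .NET thread creation APIs copy T.CP over.
        private static final InheritableThreadLocal<Principal> HOLDER =
                new InheritableThreadLocal<Principal>() {
                    @Override
                    protected Principal initialValue() {
                        // Anonymous principal by default, never null.
                        return () -> "anonymous";
                    }
                };

        private CurrentPrincipal() {}

        public static Principal get() { return HOLDER.get(); }

        public static void set(Principal p) { HOLDER.set(p); }

        public static void main(String[] args) throws InterruptedException {
            set(() -> "alice"); // e.g. set by authentication plumbing

            // Library code anywhere on this thread (or threads it spawns)
            // can read the ambient identity without an HTTP request in hand.
            Thread worker = new Thread(
                    () -> System.out.println("worker sees: " + get().getName()));
            worker.start();
            worker.join();
            System.out.println("caller sees: " + get().getName());
        }
    }

    The InheritableThreadLocal choice mirrors the .NET behavior described above, where new threads receive a copy of the caller's principal; a plain ThreadLocal would leave the worker with the anonymous default.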

    Read the article

  • Trying to compile x264 and ffmpeg for iPhone - "missing required architecture arm in file"

    - by jtrim
    I'm trying to compile x264 for use in an iPhone application. I see there are instructions on how to compile ffmpeg for use on the platform here: http://lists.mplayerhq.hu/pipermail/ffmpeg-devel/2009-October/076618.html , but I can't seem to find anything this complete for compiling x264 on the iPhone. I've found this source tree: http://gitorious.org/x264-arm that seems to have support for the ARM platform. Here is my config line: ./configure --cross-prefix=/usr/bin/ --host=arm-apple-darwin10 --extra-cflags="-B /Developer/Platforms/iPhoneOS.platform/Developer/SDKs/iPhoneOS3.2.sdk/usr/lib/ -I /Developer/Platforms/iPhoneOS.platform/Developer/SDKs/iPhoneOS3.2.sdk/usr/lib/" ...and inside configure I'm using the gas-preprocessor script (first link above) as my assembler: gas-preprocessor.pl gcc When I start compiling, it chunks away for a little while, then it spits out these warnings and a huge list of undefined symbols: ld: warning: option -s is obsolete and being ignored ld: warning: -force_cpusubtype_ALL will become unsupported for ARM architectures ld: warning: in /usr/lib/crt1.o, missing required architecture arm in file ld: warning: in /usr/X11R6/lib/libX11.dylib, missing required architecture arm in file ld: warning: in /usr/lib/libm.dylib, missing required architecture arm in file ld: warning: in /usr/lib/libpthread.dylib, missing required architecture arm in file ld: warning: in /usr/lib/libgcc_s.1.dylib, missing required architecture arm in file ld: warning: in /usr/lib/libSystem.dylib, missing required architecture arm in file Undefined symbols: My guess would be that the problem has to do with the "missing required architecture arm in file" warning...any ideas?

    Read the article

  • Oracle Enterprise Manager 12c Configuration Best Practices (Part 3 of 3)

    - by Bethany Lapaglia
    <span id="XinhaEditingPostion"></span>&amp;lt;span id=&amp;quot;XinhaEditingPostion&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;&amp;amp;amp;amp;amp;amp;amp;amp;amp;amp;amp;amp;amp;amp;amp;amp;amp;lt;span id=&amp;amp;amp;amp;amp;amp;amp;amp;amp;amp;amp;amp;amp;amp;amp;amp;amp;quot;XinhaEditingPostion&amp;amp;amp;amp;amp;amp;amp;amp;amp;amp;amp;amp;amp;amp;amp;amp;amp;quot;&amp;amp;amp;amp;amp;amp;amp;amp;amp;amp;amp;amp;amp;amp;amp;amp;amp;gt;&amp;amp;amp;amp;amp;amp;amp;amp;amp;amp;amp;amp;amp;amp;amp;amp;amp;lt;/span&amp;amp;amp;amp;amp;amp;amp;amp;amp;amp;amp;amp;amp;amp;amp;amp;amp;gt; This is part 3 of a three-part blog series that summarizes the most commonly implemented configuration changes to improve performance and operation of a large Enterprise Manager 12c environment. A “large” environment is categorized by the number of agents, targets and users. See the Oracle Enterprise Manager Cloud Control Advanced Installation and Configuration Guide chapter on Sizing for more details on sizing your environment properly. Part 1 of this series covered recommended configuration changes for the OMS and Repository Part 2 covered recommended changes for the Weblogic server Part 3 covers general configuration recommendations and a few known issues The entire series can be found in the My Oracle Support note titled Oracle Enterprise Manager 12c Configuration Best Practices [1553342.1]. Configuration Recommendations Configure E-Mail Notifications for EM related Alerts In some environments, the notifications for events for different target types may be sent to different support teams (i.e. notifications on host targets may be sent to a platform support team). However, the EM application administrators should be well informed of any alerts or problems seen on the EM infrastructure components. Recommendation: Create a new Incident rule for monitoring all EM components and setup the notifications to be sent to the EM administrator(s). The notification methods available can create or update an incident, send an email or forward to an event connector. To setup the incident rule set follow the steps below. Note that each individual rule in the rule set can have different actions configured. 1.  To create an incident rule for monitoring the EM components, click on Setup / Incidents / Incident Rules. On the All Enterprise Rules page, click on the out-of-box rule called “Incident management Ruleset for all targets” and then click on the Actions drop down list and select “Create Like Rule Set…” 2. For the rule set name, enter a name such as MTM Ruleset. Under the Targets tab, select “All targets of types” and select “OMS and Repository” from the drop down list. This target type contains all of the key EM components (OMS servers, repository, domains, etc.) 3. Click on the Rules tab. To edit a rule, click on the rule name and click on Edit as seen below 4. Modify the following rules: a. Incident creation Rule for metric alerts i. Leave the Type set as is but change the Severity to add Warning by clicking on the drop down list and selecting “Warning”. Click Next. ii.  Add or modify the actions as required (i.e. add email notifications). Click Continue and then click Next. iii. Leave the Name and description the same and click Next. iv. Click Continue on the Review page. b. Incident creation Rule for target unreachable. i.   Leave the Type set as is but change the Target type to add OMS and Repository by clicking on the drop down list selecting “OMS and Repository”. Click Next. ii.  
Add or modify the actions as required (i.e. add email notifications) Click Continue and then click Next. iii. Leave the Name and description the same and click Next. iv. Click Continue on the Review page. 5.  Modify the actions for any other rule as required and be sure to click the “Save” push button to save the rule set or all changes will be lost. Configure Out-of-Band Notifications for EM Agent Out-of-Band notifications act as a backup when there’s a complete EM outage or a repository database issue. This is configured on the agent of the OMS server and can be used to send emails or execute another script that would create a trouble ticket. It will send notifications about the following issues: • Repository Database down • All OMS are down • Repository side collection job that is broken or has an invalid schedule • Notification job that is broken or has an invalid schedule Recommendation: To setup Out-of-Band Notifications, refer to the MOS note “How To Setup Out Of Bound Email Notification In 12c” (Doc ID 1472854.1) Modify the Performance Test for the EM Console Service The EM Console Service has an out-of-box defined performance test that will be run to determine the status of this service. The test issues a request via an HTTP method to a specific URL. By default, the HTTP method used for this test is a GET but for performance reasons, should be changed to HEAD. The URL used for this request is set to point to a specific OMS server by default. If a multi-OMS system has been implemented and the OMS servers are behind a load balancer, then the URL in this section must be modified to point to the load balancer name instead of a specific server name. If this is not done and a portion of the infrastructure is down then the EM Console Service will show down as this test will fail. Recommendation: Modify the HTTP Method for the EM Console Service test and the URL if required following the detailed steps below. 1.  To create an incident rule for monitoring the EM components, click on Targets / Services. From the list of services, click on the EM Console Service. 2. On the EM Console Service page, click on the Test Performance tab. 3.  At the bottom of the page, click on the Web Transaction test called EM Console Service Test 4.  Click on the Service Tests and Beacons breadcrumb near the top of the page. 5.  Under the Service Tests section, make sure the EM Console Service Test is selected and click on the Edit push button. 6.  Under the Transaction section, make sure the Access Logout page transaction is selected and click on the Edit push button 7) Under the Request section, change the HTTP Method from the default of GET to the recommended value of HEAD. The URL in this section must be modified to point to the load balancer name instead of a specific server name if multi-OMSes have been implemented. Check for Known Issues Job Purge Repository Job is Shown as Down This issue is caused after upgrading EM from 12c to 12cR2. On the Repository page under Setup ? Manage Cloud Control ? Repository, the job called “Job Purge” is shown as down and the Next Scheduled Run is blank. Also, repvfy reports that this is a missing DBMS_SCHEDULER job. Recommendation: In EM 12cR2, the apply_purge_policies have been moved from the MGMT_JOB_ENGINE package to the EM_JOB_PURGE package. 
To remove this error, execute the commands below: $ repvfy verify core -test 2 -fix To confirm that the issue resolved, execute $ repvfy verify core -test 2 It can also be verified by refreshing the Job Service page in EM and check the status of the job, it should now be Up. Configure the Listener Targets in EM with the Listener Password (where required) EM will report this error every time it is encountered in the listener log file. In a RAC environment, typically the grid home and rdbms homes are owned by different OS users. The listener always runs from the grid home. Only the listener process owner can query or change the listener properties. The listener uses a password to allow other OS users (ex. the agent user) to query the listener process for parameters. EM has a default listener target metric that will query these properties. If the agent is not permitted to do this, the TNS incident (TNS-1190) will be logged in the listener’s log file. This means that the listener targets in EM also need to have this password set. Not doing so will cause many TNS incidents (TNS-1190). Below is a sample of this error from the listener log file: Recommendation: Set a listener password and include it in the configuration of the listener targets in EM For steps on setting the listener passwords, see MOS notes: 260986.1 , 427422.1
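    To see why HEAD is the cheaper probe, here is a small hypothetical Java sketch of an availability test like the one the EM Console Service performs. The load-balancer URL and path are placeholders, and this illustrates only the GET-versus-HEAD point; it is not code taken from Enterprise Manager.

    import java.net.HttpURLConnection;
    import java.net.URL;

    public class ConsoleProbe {
        public static void main(String[] args) throws Exception {
            // Hypothetical load balancer URL -- point the test at the balancer,
            // not an individual OMS, so one node being down doesn't fail it.
            URL url = new URL("https://em-lb.example.com/em/console/logon/logoff");

            HttpURLConnection conn = (HttpURLConnection) url.openConnection();
            // HEAD returns only status and headers, so the probe avoids
            // transferring the page body the way a GET would.
            conn.setRequestMethod("HEAD");
            conn.setConnectTimeout(5000);
            conn.setReadTimeout(5000);

            int status = conn.getResponseCode();
            System.out.println("HTTP " + status + " from " + url.getHost());
            conn.disconnect();
        }
    }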

    Read the article

  • WebLogic Server Provisioning and Patching with Enterprise Manager Cloud Control 12c Now Available

    - by JuergenKress
    For access to the Oracle demo systems please visit OPN and talk to your Partner Expert. SOA Suite and BPM Suite run on WebLogic! We are pleased to announce the availability of a WebLogic Server Management demo that showcases some of the key provisioning and patching capabilities of WebLogic Server Management Pack Enterprise Edition (EE). To learn more about these features – as well as other features of the pack – please visit the pack's saleskit page.

    Demo Highlights

    The demo showcases the following capabilities:

    Patching Oracle WebLogic Servers
    Standardizing WebLogic Server Patch Rollouts
    Creating a WebLogic Domain Provisioning Profile
    Cloning a WebLogic Domain from a Provisioning Profile
    Deploying a Java EE Application
    Scaling Out an Oracle WebLogic Cluster

    Demo Instructions

    Go to the DSS website for Oracle Partners. On the Standard Demo Launchpad page, under the “Software Lifecycle Automation” section, click on the link “EM Cloud Control 12c WLS Provisioning and Patching” (tagged as “NEW”). The demo launchpad page contains a link to the detailed demo script with instructions on how to show the demo.

    SOA & BPM Partner Community

    For regular information on Oracle SOA Suite, become a member of the SOA & BPM Partner Community; for registration please visit www.oracle.com/goto/emea/soa (OPN account required). If you need support with your account please contact the Oracle Partner Business Center.

    Read the article

  • Drive project success & financial performance with business critical Enterprise Project Portfolio Management

    - by Sylvie MacKenzie, PMP
    Oracle Primavera invites you to the first in a series of three webcasts linking Enterprise Project Portfolio Management with enhanced operational performance and better financial results. Few organizations fully understand the impact projects have on their business. Consistently delivering successful projects is vital to the financial success of an asset-intensive organization. Enterprise Project Portfolio Management (EPPM) is not a new concept, yet for many organizations it is not considered "business critical".

    Webcast 1: Plan – Aligning project selection and prioritization with corporate objectives

    This webcast will look at two key questions:

    Are you aligning portfolio decisions with strategic objectives?
    How do you effectively measure the success of your portfolio decisions?

    Hear from Accenture, who'll present a compelling case for why asset-intensive organizations should consider EPPM business critical. They'll explore:

    How technology is being used to enhance project delivery
    How collaboration enhances delivery performance
    The major challenges associated with the planning phase of a project

    Next hear from Geoff Roberts, Industry Strategist from Oracle Primavera. With over 30 years of experience in project management/project controls in the construction, utilities and oil & gas sectors, Geoff will investigate how EPPM is a best practice and can support an organization through project selection and prioritization, ensuring that decisions are aligned with corporate objectives. Don’t miss out, register today!

    Read the article

  • Enterprise Software Development with Java by Markus Eisele

    - by JuergenKress
    This is a blog about software development for the enterprise. It focuses on Java Enterprise Edition (J2EE/Java EE). Besides this, I blog about Oracle WebLogic, GlassFish Server and other technologies that cross my path.

    Java Mission Control 5.2 is Finally Here! Welcome 7u40!

    It has been a while since we last heard of this fancy little thing called Mission Control. It came all the way from JRockit and was renamed to Java Mission Control. This is one of the parts which literally survived the convergence strategy between HotSpot and JRockit. With today's Java SE 7 Update 40 you can actually use it again.

    Java Mission Control 5.2

    The former JRockit Mission Control (JRMC) is now called Java Mission Control (JMC). It is a tools suite which includes tools to monitor, manage, profile, and eliminate memory leaks in your Java application without introducing the performance overhead normally associated with tools of this type. Until today, the 5.1 version was only available within the Oracle HotSpot downloads, which could only be obtained by paying customers from the Oracle Support website. Today's release is the first release of Java Mission Control that is bundled with the HotSpot JDK! The convergence project between JRockit and HotSpot has reached critical mass. With the 7u40 release of the HotSpot JDK there is an equivalent amount of Flight Recorder information available from HotSpot. Read the full article here.

    WebLogic Partner Community

    For regular information become a member of the WebLogic Partner Community; please visit: http://www.oracle.com/partners/goto/wls-emea (OPN account required). If you need support with your account please contact the Oracle Partner Business Center.
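    As a quick way to try Flight Recorder with 7u40, here is a minimal sketch: a small Java workload to record, with the launch flags and jcmd commands in the comments. The flag names match what Oracle documented for JDK 7u40 as far as I recall, but treat the exact invocations as assumptions to verify against your JDK.

    // A tiny workload to profile with Java Flight Recorder / Mission Control.
    //
    // Assumed invocation for Oracle JDK 7u40 (verify against your JDK docs):
    //   java -XX:+UnlockCommercialFeatures -XX:+FlightRecorder Workload
    //   jcmd <pid> JFR.start name=demo duration=60s filename=demo.jfr
    //   jmc    (then open demo.jfr in Java Mission Control)
    public class Workload {
        public static void main(String[] args) {
            // Churn allocations and CPU so the recording has something to show.
            long checksum = 0;
            for (int i = 0; i < 10_000_000; i++) {
                checksum += new StringBuilder("event-").append(i).toString().hashCode();
            }
            System.out.println("checksum=" + checksum);
        }
    }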

    Read the article

  • Hill International Wins Oracle Eco-Enterprise Innovation Award

    - by Evelyn Neumayr
    In my last blog entry, I discussed Oracle’s Eco-Enterprise Innovation Award, part of the Oracle Excellence awards. Nominations for this year’s awards are due July 17. These awards are presented to organizations that use Oracle products to reduce their environmental footprint while improving their operational efficiency. One of last year’s winners was Hill International. Engineering News-Record magazine recently ranked Hill as the eighth-largest construction management firm in the United States. Hill International was able to streamline its forecasting and improve its visibility into its construction projects’ productivity and profitability using Oracle Primavera. It also implemented Oracle Hyperion Financial Management to standardize its financial reporting and forecasting processes and support its decision-making. With Oracle, Hill gained visibility into the true productivity of each project and cut its financial reporting cycle time from two weeks to one. The company also used the data generated to support new construction project proposals and determine the profitability of potential projects. Hill International realized significant cost savings and reduced its environmental impact on its US$400 million Comcast Center construction project in Philadelphia by centralizing its data storage, reducing paper usage, and maximizing project efficiency. It also leveraged the increased visibility offered by the Oracle solutions to make more environmentally sound business decisions regarding on-site demolition, re-use of previous structures, green design of new facilities, procurement, and materials usage. See more about Hill International and the other Eco-Enterprise Innovation award winners here.

    Read the article

  • Delivering Oracle DBaaS: Journey to Enterprise Cloud with Oracle Database 12c

    - by B R Clouse
    The release of Oracle Database 12c is accompanied by extensive supporting collateral that details the new features and options of this major release. But with so much to read and investigate, where to start? If you don't have the time to pore through everything, you may wish to organize your reading in terms of the use case you're most interested in. If your interest is Database as a Service in private database clouds, then may I suggest that you start here: Accelerate the Journey to Enterprise Cloud with Oracle Database 12c.

    This paper describes the phases of the journey to enterprise cloud and enumerates the new features and options in Oracle Database 12c that support each phase. Oracle Multitenant figures prominently, but it's not the only cloud-enabling topic: Oracle Database Quality of Service Management, Application Continuity, Automatic Data Optimization, Global Data Services and Active Data Guard Far Sync all deliver key benefits for delivering database as a service. Further reading and research is suggested by the references included in the paper. Happy clouding!
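    Since Oracle Multitenant is the headline DBaaS feature, here is a hedged Java/JDBC sketch of the basic provisioning move it enables: cloning a new pluggable database from the seed. The host, credentials and file paths are placeholders, and the DDL follows the documented CREATE PLUGGABLE DATABASE syntax; adapt both to your environment.

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.Statement;

    public class ProvisionPdb {
        public static void main(String[] args) throws Exception {
            // Placeholder connection to the CDB root as a common user.
            String url = "jdbc:oracle:thin:@//cdb-host:1521/cdb1";
            try (Connection conn = DriverManager.getConnection(url, "c##admin", "secret");
                 Statement stmt = conn.createStatement()) {

                // Clone a new PDB from the seed; FILE_NAME_CONVERT maps the
                // seed's datafile paths to the new PDB's paths.
                stmt.execute(
                    "CREATE PLUGGABLE DATABASE salespdb " +
                    "ADMIN USER pdbadmin IDENTIFIED BY secret " +
                    "FILE_NAME_CONVERT = ('/u01/oradata/cdb1/pdbseed/', " +
                    "'/u01/oradata/cdb1/salespdb/')");

                // A new PDB starts in MOUNTED mode; open it for service.
                stmt.execute("ALTER PLUGGABLE DATABASE salespdb OPEN");
            }
        }
    }

    The point of the sketch is how little provisioning ceremony remains once the container database exists: creating a tenant database is a single DDL statement rather than a full database install.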

    Read the article

  • New Book - Oracle ADF Enterprise Application Development Made Simple

    - by Shay Shmeltzer
    It's nice to see another ADF book out there. This one, from Sten Vesterli, is titled "Oracle ADF Enterprise Application Development Made Simple" and comes from Packt Publishing. Unlike other ADF books out there, this one doesn't aim to teach you Oracle ADF, but rather focuses on the right way to structure and manage a project that leverages ADF. This is a welcome addition to the bookshelf for people who are looking into ADF based development. One thing I find is that some organizations just start developing an ADF application without first doing much planning, something that is understandable given that it is very easy to start building a prototype with ADF and then just grow it into a full blown application. However, as the book points out, doing a bit of planning before you delve into the actual project development can save you a lot of time in the future. For example, it is much better to have the right breakdown and structure of your project to allow you to do efficient team development right out of the gate, than to find out a year down the road that you are dealing with one monolithic project which is hard to manage. The book touches on such topics as project organization (workspaces, projects, packages), planning your infrastructure (templates, framework classes), coding standards, and team structure. It also covers various aspects of application lifecycle management, such as versioning, build, testing, deployment, and managing requirements and tasks, and how all of those are done when using JDeveloper and Oracle ADF. It's nice to see that the book covers working with Oracle Team Productivity Center - a solution that might not be getting the exposure it deserves. The book also has some chapters about security, internationalization and customization of applications, both with MDS and with ADF Faces skins (and it even covers the brand new skin editor). Overall I think this is definitely a book you should read if you are about to start your way on a new enterprise scale ADF application. Taking into account the topics that the book discusses before you start your work will save you time and effort down the road. By the way, don't forget that as an OTN member you can get a discount on this and other books.

    Read the article

< Previous Page | 29 30 31 32 33 34 35 36 37 38 39 40  | Next Page >