Search Results

Search found 30367 results on 1215 pages for 'service reference'.


  • Circular reference with entity manager factory

    - by CodesLikeA_Mokey
    Every time I try to start my app, I get this error on startup:

    SEVERE: Exception sending context initialized event to listener instance of class org.springframework.web.context.ContextLoaderListener
    org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'org.springframework.dao.annotation.PersistenceExceptionTranslationPostProcessor#0': Initialization of bean failed; nested exception is org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'identityAccessAppConfig': Injection of autowired dependencies failed; nested exception is org.springframework.beans.factory.BeanCreationException: Could not autowire field: com.package.identityaccess.identity.UserRepository com.package.identityaccess.IdentityAccessAppConfig.userRepository; nested exception is org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'userRepository': Cannot create inner bean '(inner bean)' of type [org.springframework.orm.jpa.SharedEntityManagerCreator] while setting bean property 'entityManager'; nested exception is org.springframework.beans.factory.BeanCreationException: Error creating bean with name '(inner bean)#1': Cannot resolve reference to bean 'identityAccessEntityManagerFactory' while setting constructor argument; nested exception is org.springframework.beans.factory.BeanCurrentlyInCreationException: Error creating bean with name 'identityAccessEntityManagerFactory': Requested bean is currently in creation: Is there an unresolvable circular reference?
    at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.doCreateBean(AbstractAutowireCapableBeanFactory.java:529) at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBean(AbstractAutowireCapableBeanFactory.java:458) at org.springframework.beans.factory.support.AbstractBeanFactory$1.getObject(AbstractBeanFactory.java:295) at org.springframework.beans.factory.support.DefaultSingletonBeanRegistry.getSingleton(DefaultSingletonBeanRegistry.java:223) at org.springframework.beans.factory.support.AbstractBeanFactory.doGetBean(AbstractBeanFactory.java:292) at org.springframework.beans.factory.support.AbstractBeanFactory.getBean(AbstractBeanFactory.java:198) at org.springframework.context.support.AbstractApplicationContext.registerBeanPostProcessors(AbstractApplicationContext.java:741) at org.springframework.context.support.AbstractApplicationContext.refresh(AbstractApplicationContext.java:464) at org.springframework.web.context.ContextLoader.configureAndRefreshWebApplicationContext(ContextLoader.java:389) at org.springframework.web.context.ContextLoader.initWebApplicationContext(ContextLoader.java:294) at org.springframework.web.context.ContextLoaderListener.contextInitialized(ContextLoaderListener.java:112) at org.apache.catalina.core.StandardContext.listenerStart(StandardContext.java:4797) at org.apache.catalina.core.StandardContext.startInternal(StandardContext.java:5291) at org.apache.catalina.util.LifecycleBase.start(LifecycleBase.java:150) at org.apache.catalina.core.ContainerBase$StartChild.call(ContainerBase.java:1559) at org.apache.catalina.core.ContainerBase$StartChild.call(ContainerBase.java:1549) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:334) at java.util.concurrent.FutureTask.run(FutureTask.java:166) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) at java.lang.Thread.run(Thread.java:722)

    Caused by: org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'identityAccessAppConfig': Injection of autowired dependencies failed; nested exception is org.springframework.beans.factory.BeanCreationException: Could not autowire field: com.package.identityaccess.identity.UserRepository com.package.identityaccess.IdentityAccessAppConfig.userRepository; nested exception is org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'userRepository': Cannot create inner bean '(inner bean)' of type [org.springframework.orm.jpa.SharedEntityManagerCreator] while setting bean property 'entityManager'; nested exception is org.springframework.beans.factory.BeanCreationException: Error creating bean with name '(inner bean)#1': Cannot resolve reference to bean 'identityAccessEntityManagerFactory' while setting constructor argument; nested exception is org.springframework.beans.factory.BeanCurrentlyInCreationException: Error creating bean with name 'identityAccessEntityManagerFactory': Requested bean is currently in creation: Is there an unresolvable circular reference?
    at org.springframework.beans.factory.annotation.AutowiredAnnotationBeanPostProcessor.postProcessPropertyValues(AutowiredAnnotationBeanPostProcessor.java:288) at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.populateBean(AbstractAutowireCapableBeanFactory.java:1116) at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.doCreateBean(AbstractAutowireCapableBeanFactory.java:519) at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBean(AbstractAutowireCapableBeanFactory.java:458) at org.springframework.beans.factory.support.AbstractBeanFactory$1.getObject(AbstractBeanFactory.java:295) at org.springframework.beans.factory.support.DefaultSingletonBeanRegistry.getSingleton(DefaultSingletonBeanRegistry.java:223) at org.springframework.beans.factory.support.AbstractBeanFactory.doGetBean(AbstractBeanFactory.java:292) at org.springframework.beans.factory.support.AbstractBeanFactory.getBean(AbstractBeanFactory.java:194) at org.springframework.beans.factory.support.ConstructorResolver.instantiateUsingFactoryMethod(ConstructorResolver.java:353) at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.instantiateUsingFactoryMethod(AbstractAutowireCapableBeanFactory.java:1025) at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBeanInstance(AbstractAutowireCapableBeanFactory.java:921) at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.doCreateBean(AbstractAutowireCapableBeanFactory.java:487) at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBean(AbstractAutowireCapableBeanFactory.java:458) at org.springframework.beans.factory.support.AbstractBeanFactory$1.getObject(AbstractBeanFactory.java:295) at org.springframework.beans.factory.support.DefaultSingletonBeanRegistry.getSingleton(DefaultSingletonBeanRegistry.java:223) at org.springframework.beans.factory.support.AbstractBeanFactory.doGetBean(AbstractBeanFactory.java:292) at org.springframework.beans.factory.support.AbstractBeanFactory.getBean(AbstractBeanFactory.java:198) at org.springframework.beans.factory.support.DefaultListableBeanFactory.getBeansOfType(DefaultListableBeanFactory.java:438) at org.springframework.beans.factory.BeanFactoryUtils.beansOfTypeIncludingAncestors(BeanFactoryUtils.java:277) at org.springframework.dao.support.PersistenceExceptionTranslationInterceptor.detectPersistenceExceptionTranslators(PersistenceExceptionTranslationInterceptor.java:139) at org.springframework.dao.support.PersistenceExceptionTranslationInterceptor.<init>(PersistenceExceptionTranslationInterceptor.java:79) at org.springframework.dao.annotation.PersistenceExceptionTranslationAdvisor.<init>(PersistenceExceptionTranslationAdvisor.java:71) at org.springframework.dao.annotation.PersistenceExceptionTranslationPostProcessor.setBeanFactory(PersistenceExceptionTranslationPostProcessor.java:85) at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.invokeAwareMethods(AbstractAutowireCapableBeanFactory.java:1502) at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.initializeBean(AbstractAutowireCapableBeanFactory.java:1470) at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.doCreateBean(AbstractAutowireCapableBeanFactory.java:521) ... 20 more

    Caused by: org.springframework.beans.factory.BeanCreationException: Could not autowire field: com.package.identityaccess.identity.UserRepository com.package.identityaccess.IdentityAccessAppConfig.userRepository; nested exception is org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'userRepository': Cannot create inner bean '(inner bean)' of type [org.springframework.orm.jpa.SharedEntityManagerCreator] while setting bean property 'entityManager'; nested exception is org.springframework.beans.factory.BeanCreationException: Error creating bean with name '(inner bean)#1': Cannot resolve reference to bean 'identityAccessEntityManagerFactory' while setting constructor argument; nested exception is org.springframework.beans.factory.BeanCurrentlyInCreationException: Error creating bean with name 'identityAccessEntityManagerFactory': Requested bean is currently in creation: Is there an unresolvable circular reference?
    at org.springframework.beans.factory.annotation.AutowiredAnnotationBeanPostProcessor$AutowiredFieldElement.inject(AutowiredAnnotationBeanPostProcessor.java:514) at org.springframework.beans.factory.annotation.InjectionMetadata.inject(InjectionMetadata.java:87) at org.springframework.beans.factory.annotation.AutowiredAnnotationBeanPostProcessor.postProcessPropertyValues(AutowiredAnnotationBeanPostProcessor.java:285) ... 45 more

    Caused by: org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'userRepository': Cannot create inner bean '(inner bean)' of type [org.springframework.orm.jpa.SharedEntityManagerCreator] while setting bean property 'entityManager'; nested exception is org.springframework.beans.factory.BeanCreationException: Error creating bean with name '(inner bean)#1': Cannot resolve reference to bean 'identityAccessEntityManagerFactory' while setting constructor argument; nested exception is org.springframework.beans.factory.BeanCurrentlyInCreationException: Error creating bean with name 'identityAccessEntityManagerFactory': Requested bean is currently in creation: Is there an unresolvable circular reference?
    at org.springframework.beans.factory.support.BeanDefinitionValueResolver.resolveInnerBean(BeanDefinitionValueResolver.java:282) at org.springframework.beans.factory.support.BeanDefinitionValueResolver.resolveValueIfNecessary(BeanDefinitionValueResolver.java:126) at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.applyPropertyValues(AbstractAutowireCapableBeanFactory.java:1387) at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.populateBean(AbstractAutowireCapableBeanFactory.java:1128) at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.doCreateBean(AbstractAutowireCapableBeanFactory.java:519) at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBean(AbstractAutowireCapableBeanFactory.java:458) at org.springframework.beans.factory.support.AbstractBeanFactory$1.getObject(AbstractBeanFactory.java:295) at org.springframework.beans.factory.support.DefaultSingletonBeanRegistry.getSingleton(DefaultSingletonBeanRegistry.java:223) at org.springframework.beans.factory.support.AbstractBeanFactory.doGetBean(AbstractBeanFactory.java:292) at org.springframework.beans.factory.support.AbstractBeanFactory.getBean(AbstractBeanFactory.java:194) at org.springframework.beans.factory.support.DefaultListableBeanFactory.findAutowireCandidates(DefaultListableBeanFactory.java:912) at org.springframework.beans.factory.support.DefaultListableBeanFactory.doResolveDependency(DefaultListableBeanFactory.java:855) at org.springframework.beans.factory.support.DefaultListableBeanFactory.resolveDependency(DefaultListableBeanFactory.java:770) at org.springframework.beans.factory.annotation.AutowiredAnnotationBeanPostProcessor$AutowiredFieldElement.inject(AutowiredAnnotationBeanPostProcessor.java:486) ... 47 more

    Caused by: org.springframework.beans.factory.BeanCreationException: Error creating bean with name '(inner bean)#1': Cannot resolve reference to bean 'identityAccessEntityManagerFactory' while setting constructor argument; nested exception is org.springframework.beans.factory.BeanCurrentlyInCreationException: Error creating bean with name 'identityAccessEntityManagerFactory': Requested bean is currently in creation: Is there an unresolvable circular reference?
    at org.springframework.beans.factory.support.BeanDefinitionValueResolver.resolveReference(BeanDefinitionValueResolver.java:329) at org.springframework.beans.factory.support.BeanDefinitionValueResolver.resolveValueIfNecessary(BeanDefinitionValueResolver.java:107) at org.springframework.beans.factory.support.ConstructorResolver.resolveConstructorArguments(ConstructorResolver.java:615) at org.springframework.beans.factory.support.ConstructorResolver.instantiateUsingFactoryMethod(ConstructorResolver.java:441) at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.instantiateUsingFactoryMethod(AbstractAutowireCapableBeanFactory.java:1025) at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBeanInstance(AbstractAutowireCapableBeanFactory.java:921) at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.doCreateBean(AbstractAutowireCapableBeanFactory.java:487) at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBean(AbstractAutowireCapableBeanFactory.java:458) at org.springframework.beans.factory.support.BeanDefinitionValueResolver.resolveInnerBean(BeanDefinitionValueResolver.java:271) ... 60 more

    Caused by: org.springframework.beans.factory.BeanCurrentlyInCreationException: Error creating bean with name 'identityAccessEntityManagerFactory': Requested bean is currently in creation: Is there an unresolvable circular reference?
    at org.springframework.beans.factory.support.DefaultSingletonBeanRegistry.beforeSingletonCreation(DefaultSingletonBeanRegistry.java:327) at org.springframework.beans.factory.support.DefaultSingletonBeanRegistry.getSingleton(DefaultSingletonBeanRegistry.java:217) at org.springframework.beans.factory.support.AbstractBeanFactory.doGetBean(AbstractBeanFactory.java:292) at org.springframework.beans.factory.support.AbstractBeanFactory.getBean(AbstractBeanFactory.java:194) at org.springframework.beans.factory.support.BeanDefinitionValueResolver.resolveReference(BeanDefinitionValueResolver.java:323)

    I am new to configuring Spring through Java and have been spinning in circles for days. My application includes 2 separate modules. The identity-access module will handle user and access information and will have a data source pointing to an older database. The other module, module2, will eventually have its own database. Here are the config files:

    application:

    @Configuration
    @EnableWebMvc
    @ComponentScan(basePackages = "com.package.myapp.web")
    @ImportResource({"classpath:com/package/appbase/appbase-context.xml"})
    @Import(com.package.identityaccess.IdentityAccessAppConfig.class)
    public class IMSAppConfig extends WebMvcConfigurerAdapter {

        @Override
        public void addResourceHandlers(ResourceHandlerRegistry registry) {
            registry.addResourceHandler("/resources/**").addResourceLocations("ims/resources/");
        }
    }

    identity-access:

    @Configuration
    @ComponentScan(basePackages = "com.package.identityaccess")
    @EnableJpaRepositories(entityManagerFactoryRef = "identityAccessEntityManagerFactory", value = "com.package.identityaccess")
    @EnableTransactionManagement
    public class IdentityAccessAppConfig {

        @Autowired
        UserRepository userRepository;

        @Bean
        public DataSource identityAccessDataSource() throws IOException, SQLException {
            return new SelfConfiguringBasicDataSource(System.getProperty("db_properties_path") + "/dev.oracle.properties", "");
        }

        @Bean
        public Map<String, Object> identityAccessJpaProperties() {
            Map<String, Object> props = new HashMap<>();
            props.put("eclipselink.weaving", "false");
            props.put("eclipselink.target-database", OraclePlatform.class.getName());
            return props;
        }

        @Bean
        public JpaVendorAdapter identityAccessJpaVendorAdapter() {
            EclipseLinkJpaVendorAdapter eclipseLinkJpaVendorAdapter = new EclipseLinkJpaVendorAdapter();
            eclipseLinkJpaVendorAdapter.setDatabasePlatform(OraclePlatform.class.getName());
            eclipseLinkJpaVendorAdapter.setGenerateDdl(false);
            eclipseLinkJpaVendorAdapter.setShowSql(true);
            return eclipseLinkJpaVendorAdapter;
        }

        @Bean
        public PlatformTransactionManager identityAccessTransactionManager() throws IOException, SQLException {
            JpaTransactionManager txManager = new JpaTransactionManager();
            txManager.setEntityManagerFactory(identityAccessEntityManagerFactory().getObject());
            return txManager;
        }

        @Bean
        public LocalContainerEntityManagerFactoryBean identityAccessEntityManagerFactory() throws IOException, SQLException {
            LocalContainerEntityManagerFactoryBean lef = new LocalContainerEntityManagerFactoryBean();
            lef.setDataSource(identityAccessDataSource());
            lef.setJpaPropertyMap(identityAccessJpaProperties());
            lef.setJpaVendorAdapter(identityAccessJpaVendorAdapter());
            lef.setPackagesToScan("com.package.identityaccess");
            return lef;
        }

        @Bean
        public AuthenticationService authenticationService() {
            return new AuthenticationServiceImpl(userRepository);
        }
    }

    module2:

    @Configuration
    @ComponentScan(basePackages = "com.package.myapp.core")
    @EnableJpaRepositories("com.package.myapp.core.domain")
    @EnableTransactionManagement
    public class ModuleTwoAppConfig {

        @Bean
        public DataSource dataSource() {
            EmbeddedDatabaseBuilder embeddedDatabaseBuilder = new EmbeddedDatabaseBuilder();
            embeddedDatabaseBuilder.setType(EmbeddedDatabaseType.HSQL);
            embeddedDatabaseBuilder.addScript("setup.sql");
            return embeddedDatabaseBuilder.build();
        }

        @Bean
        public Map<String, Object> jpaProperties() {
            Map<String, Object> props = new HashMap<>();
            props.put("eclipselink.weaving", "false");
            props.put("eclipselink.ddl-generation", "create-tables");
            props.put("eclipselink.target-database", HSQLPlatform.class.getName());
            return props;
        }

        @Bean
        public JpaVendorAdapter jpaVendorAdapter() {
            EclipseLinkJpaVendorAdapter eclipseLinkJpaVendorAdapter = new EclipseLinkJpaVendorAdapter();
            eclipseLinkJpaVendorAdapter.setDatabasePlatform(HSQLPlatform.class.getName());
            eclipseLinkJpaVendorAdapter.setGenerateDdl(true);
            eclipseLinkJpaVendorAdapter.setShowSql(true);
            return eclipseLinkJpaVendorAdapter;
        }

        @Bean
        public PlatformTransactionManager transactionManager() {
            JpaTransactionManager txManager = new JpaTransactionManager();
            txManager.setEntityManagerFactory(entityManagerFactory().getObject());
            return txManager;
        }

        @Bean
        public LocalContainerEntityManagerFactoryBean entityManagerFactory() {
            LocalContainerEntityManagerFactoryBean lef = new LocalContainerEntityManagerFactoryBean();
            lef.setDataSource(dataSource());
            lef.setJpaPropertyMap(jpaProperties());
            lef.setJpaVendorAdapter(jpaVendorAdapter());
            lef.setPackagesToScan("com.package.myapp.core.domain");
            return lef;
        }
    }

    UserRepository (uses Spring Data):

    public interface UserRepository extends JpaRepository<User, Long> {
        @Query("select u from User u where u.user_id = ?1")
        User findByUserId(String userId);
    }

    Can you see any problems with what I've done?
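
    One likely culprit is the @Autowired UserRepository field on IdentityAccessAppConfig itself: to inject that field, Spring must build userRepository, which needs identityAccessEntityManagerFactory, which is produced by the very configuration class still being constructed. A minimal sketch of one way to break the cycle is to drop the field and let the repository be resolved as a parameter of the @Bean method that actually needs it (the names below match the question's code; this assumes nothing else forces early creation of the repository):

    @Configuration
    @ComponentScan(basePackages = "com.package.identityaccess")
    @EnableJpaRepositories(entityManagerFactoryRef = "identityAccessEntityManagerFactory", value = "com.package.identityaccess")
    @EnableTransactionManagement
    public class IdentityAccessAppConfig {

        // No @Autowired UserRepository field here; the repository is resolved
        // only when authenticationService() is created, by which point the
        // entity manager factory already exists.
        @Bean
        public AuthenticationService authenticationService(UserRepository userRepository) {
            return new AuthenticationServiceImpl(userRepository);
        }

        // ...the dataSource, JPA properties, vendor adapter, transaction manager
        // and entityManagerFactory beans stay exactly as in the question.
    }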

    Read the article

  • How do I lock the workstation from a windows service?

    - by Brad Mathews
    Hello, I need to lock the workstation from a Windows service written in VB.Net. I am writing the app on Windows 7, but it needs to work under Vista and XP as well. The User32 API LockWorkStation does not work, as it requires an interactive desktop; I get a return value of 0. I tried calling %windir%\System32\rundll32.exe user32.dll,LockWorkStation from both a Process and from Shell, but still nothing happens. Setting the service to interact with the desktop is a no-go, as I am running the service under the admin account so it can do some other stuff that requires admin rights (like disabling the network), and you can only select the interact-with-desktop option when running under the Local System account. That would be a secondary question: how to run another app with admin rights from a service running under the Local System account without bugging the user. I am writing an app to control my kids' computer/internet access (which I plan to open source when done), so I need everything to happen as stealthily as possible. I have a UI that handles settings and status notifications in the taskbar, but that is easy to kill and thus defeats the locking. I could make another hidden Windows Forms app to handle the locking, but that just seems a rather inelegant solution. Better ideas, anyone? Thanks! Brad
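
    Because session 0 isolation keeps services off the interactive desktop, the usual pattern is to obtain the console session's user token and launch the lock command inside that session. Below is a minimal sketch (in C# for brevity; the P/Invoke declarations translate directly to VB.Net). Note that WTSQueryUserToken requires the SE_TCB privilege, so this assumes the locking piece lives in a small service running as Local System:

    using System;
    using System.Runtime.InteropServices;

    static class SessionLocker
    {
        [DllImport("kernel32.dll")]
        private static extern uint WTSGetActiveConsoleSessionId();

        // Needs the SE_TCB privilege, which only a Local System service holds.
        [DllImport("wtsapi32.dll", SetLastError = true)]
        private static extern bool WTSQueryUserToken(uint sessionId, out IntPtr token);

        [DllImport("advapi32.dll", SetLastError = true, CharSet = CharSet.Unicode)]
        private static extern bool CreateProcessAsUser(
            IntPtr token, string applicationName, string commandLine,
            IntPtr processAttributes, IntPtr threadAttributes, bool inheritHandles,
            uint creationFlags, IntPtr environment, string currentDirectory,
            ref STARTUPINFO startupInfo, out PROCESS_INFORMATION processInformation);

        [DllImport("kernel32.dll")]
        private static extern bool CloseHandle(IntPtr handle);

        [StructLayout(LayoutKind.Sequential, CharSet = CharSet.Unicode)]
        private struct STARTUPINFO
        {
            public int cb;
            public string lpReserved;
            public string lpDesktop;
            public string lpTitle;
            public int dwX, dwY, dwXSize, dwYSize, dwXCountChars, dwYCountChars;
            public int dwFillAttribute, dwFlags;
            public short wShowWindow, cbReserved2;
            public IntPtr lpReserved2, hStdInput, hStdOutput, hStdError;
        }

        [StructLayout(LayoutKind.Sequential)]
        private struct PROCESS_INFORMATION
        {
            public IntPtr hProcess;
            public IntPtr hThread;
            public int dwProcessId;
            public int dwThreadId;
        }

        public static void LockConsoleSession()
        {
            uint sessionId = WTSGetActiveConsoleSessionId();
            IntPtr userToken;
            if (!WTSQueryUserToken(sessionId, out userToken))
                throw new InvalidOperationException("WTSQueryUserToken failed: " + Marshal.GetLastWin32Error());
            try
            {
                STARTUPINFO si = new STARTUPINFO();
                si.cb = Marshal.SizeOf(typeof(STARTUPINFO));
                si.lpDesktop = @"winsta0\default"; // the logged-on user's desktop
                PROCESS_INFORMATION pi;
                // Launch the lock command inside the user's session.
                if (!CreateProcessAsUser(userToken, null,
                        @"C:\Windows\System32\rundll32.exe user32.dll,LockWorkStation",
                        IntPtr.Zero, IntPtr.Zero, false, 0, IntPtr.Zero, null, ref si, out pi))
                    throw new InvalidOperationException("CreateProcessAsUser failed: " + Marshal.GetLastWin32Error());
                CloseHandle(pi.hProcess);
                CloseHandle(pi.hThread);
            }
            finally
            {
                CloseHandle(userToken);
            }
        }
    }

    The admin-rights work (such as disabling the network) could then stay in a separate service running under the admin account, with the two communicating over IPC of your choice.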

    Read the article

  • Remote Debug Windows Azure Cloud Service

    - by Shaun
    Originally posted on: http://geekswithblogs.net/shaunxu/archive/2013/11/02/remote-debug-windows-azure-cloud-service.aspx

    On the 22nd of October Microsoft announced the new Windows Azure SDK 2.2. It introduced a lot of cool features, but the one that stood out most is remote debug support for Windows Azure Cloud Service (a.k.a. WACS).

    Live Debug is a Nightmare for Cloud Applications

    When we develop against a public cloud, debugging might be the most difficult task, especially after the application has been deployed. To minimize the debugging effort, Microsoft provided a local emulator for cloud service and storage when the Windows Azure platform was first announced. Using the local emulator, developers can run their application on the local machine with almost the same behavior as on Windows Azure, and it can be debugged easily and quickly. But once we deploy our application to Azure, we have to rely on logs and the diagnostic monitor to debug, which is very inefficient. Visual Studio 2012 introduced a new feature named "anonymous remote debug", which allows any workstation, under any user, to attach to a remote process. This is less secure than authenticated remote debugging, but much easier and simpler to use. Now, with Windows Azure SDK 2.2, we can attach to our application on Windows Azure from our local machine, and it's very easy.

    How to Use the Remote Debugger

    First, let's create a new Windows Azure Cloud project in Visual Studio, select ASP.NET Web Role, and create an ASP.NET WebForm application. Then right-click on the cloud project and select "publish". In the publish dialog we need to make sure the application will be built in debug mode, since an assembly built in release mode cannot be debugged effectively. I enabled Remote Desktop because I will log into the virtual machine later in this post; it is NOT necessary for remote debugging. Next, select the "advanced settings" tab and make sure "Enable Remote Debugger for all roles" is checked. In WACS, a cloud service can have one or more roles, and each role can have one or more instances. If we check this option, the remote debugger will be enabled for all roles and all instances; currently there is no way to specify which role(s) and which instance(s) to enable. Finally, click the "publish" button. In the Windows Azure activity window in Visual Studio we can find some information about the remote debugger.

    Attaching to the remote process is easy. Open the "server explorer" window in Visual Studio, expand the "cloud services" node, find the cloud service, role and instance we just published and want to debug, right-click on the instance and select "attach debugger". After a while (depending on how fast our Internet connection to the Windows Azure data center is), Visual Studio will switch to debug mode. Let's add a breakpoint in the default web page's form-load function and refresh the page in the browser to see what happens. We can see that our application stopped at the breakpoint, and the call stack and watch features are all available to use. Now let's hit F5 to continue; back in the browser, we find the page rendered successfully.

    What's Under the Hood

    The remote debugger is a WACS plugin. When we check "enable remote debugger" in the publish dialog, Visual Studio adds two cloud configuration settings to the CSCFG file. Since they are appended at deployment time, we cannot find them in our project's CSCFG file, but if we open the publish package we can find them there.
    At the same time, Visual Studio generates a certificate for the remote debugger and includes it in the package. If we go to the Azure management portal, we will find a certificate under our application that was created and uploaded by the remote debugger plugin. Since I enabled Remote Desktop, there are two certificates in the screenshot below; the other one is for the remote debugger. When our application is deployed, the Windows Azure system opens the related ports for the remote debugger; as shown below, there are two new ports opened on my application. Finally, in our WACS virtual machine, the Windows Azure system copies the remote debug components matching the version of Visual Studio we are using and starts them. Our application can then be debugged remotely through the Visual Studio remote debugger. Below is the task manager on the virtual machine of my WACS application.

    Summary

    In this post I demonstrated one of the features introduced in Windows Azure SDK 2.2: the remote debugger. It allows us to attach to our application from a local machine to a Windows Azure virtual machine once it has been deployed. The remote debugger is powerful and easy to use, but it brings additional security risk, and since it is only available for debug builds, performance will be worse than a release build. Hence we should only use this feature for staging tests and bug fixes (publishing a beta version to the Azure staging slot), rather than for production.

    Hope this helps, Shaun

    All documents and related graphics, codes are provided "AS IS" without warranty of any kind. Copyright © Shaun Ziyan Xu. This work is licensed under the Creative Commons License.

    Read the article

  • VirtualBox appliance for the Oracle Communications Service Delivery Platform (SDP) Products

    - by chlander
    It's been quite a while since we last blogged. This blog is written by Leif Lourie, a Curriculum Developer for the Oracle Communications Service Delivery Platform (SDP) products. For the last 8 years, Leif has worked as a Curriculum Developer for many of the telecom-oriented products that Oracle offers. He has been working in the telecom industry for about 25 years and has also worked as a software developer, project manager, and solutions architect. He is currently working on courseware for an upcoming release of one of the Service Delivery Platform products. Thanks to Leif not only for this blog, but for making the VM described in the blog available. Cheryl Lander, Oracle Communications InfoDev Senior Director

    Being able to download, install and test a product within a day is often very important for the people doing the primary evaluation of a software product. If it takes longer, it will require a bigger effort, like a proof-of-concept project with many people involved. Of course, if the product is chosen for a more thorough test, that will probably happen anyway, but then maybe with a focus on integration instead of product features. We have a long tradition of creating complex software that is easy to install and test, and we have often been praised for the ease of getting our products up and running. One key to this has been that there has always been an installer for Windows, as well as for the production environments, which are usually Unix and Linux. And the Windows installer has, in most cases, been released for development and testing purposes.

    Lately, this has changed. Our products are now very seldom released for the Windows platform at all, and even the Linux versions are almost always released for 64-bit systems. This creates problems for many of the people who want to try out our products, since few have access to a 64-bit Linux system of the right platform. Most of us are using a laptop with Windows or Mac OS; some of us are using Linux or Solaris, but probably a non-certified distribution for the product you want to test.

    My job, among other things, is to develop hands-on practices for our products. For me, it is crucial to have access to environments for installing and using our products, and for this reason I have been using virtual machines for many years. I have a ready-made base system, with the necessary tools installed, for all the products I create hands-on practices for. Whenever I start working on hands-on practices for a new product or a new version, I just copy the base system and start with a clean slate. This saves me a lot of time! Now, I would like to start saving time for my favorite student: you! If you are using our products and regularly test new versions, you might benefit from the virtual machine that is now available on Oracle Technology Network: the Virtual Machine for the Oracle Communications Service Delivery Platform (SDP) Products. This virtual machine contains an installation of the 64-bit version of Oracle Enterprise Linux, version 6. It also has Oracle Database Express Edition (XE), Oracle Java and Oracle Enterprise Pack for Eclipse installed. By using Oracle VM VirtualBox you may use Windows, OS X, Linux or Solaris on your laptop; VirtualBox can be installed on top of any of these platforms and gives you the ability to run virtual machines on your laptop.
    After downloading and starting the virtual machine you will also need to download the installation files for the product you want to test; for example, Oracle Communications Services Gatekeeper or Oracle Communications Online Mediation Controller. In some cases there are lessons and practices available for the products. The freely available courses are listed in the Oracle Learning Library as a Collection of Oracle Communications Service Delivery Platform Courses. As time goes by, we will make this collection bigger. Also, the goal is to update the virtual machine about once or twice per year, so you will always be able to get a well-maintained virtual machine for the Service Delivery Platform products from us.

    We Value Your Feedback

    If you would like to suggest improvements or report issues on any of the product documentation, curriculum, or training produced by the Oracle Communications Information Development team, you can use these channels: Email [email protected]. Post a comment on this blog. Thanks for reading!

    Read the article

  • Transactional Messaging in the Windows Azure Service Bus

    - by Alan Smith
    Introduction

    I’m currently working on broadening the content in the Windows Azure Service Bus Developer Guide. One of the features I have been looking at over the past week is the support for transactional messaging. When using the direct programming model and the WCF interface, some, but not all, messaging operations can participate in transactions, which allows developers to improve the reliability of messaging systems. There are some limitations in the transactional model: transactions can only include one top-level messaging entity (such as a queue or topic; subscriptions are not top-level entities), and transactions cannot include other systems, such as databases. As the transaction model is currently not well documented, I have had to figure out how things work through experimentation, with some help from the development team to confirm any questions I had. Hopefully I’ve got the content mostly correct; I will update the content in the e-book if I find any errors or improvements that can be made (any feedback would be very welcome). I’ve not had a chance to look into the code for transactions and asynchronous operations; maybe that would make a nice challenge lab for my Windows Azure Service Bus course.

    Transactional Messaging

    Messaging entities in the Windows Azure Service Bus provide support for participation in transactions. This allows developers to perform several messaging operations within a transactional scope and ensure that all the actions are committed or, if there is a failure, none of the actions are committed. There are a number of scenarios where the use of transactions can increase the reliability of messaging systems.

    Using TransactionScope

    In .NET, the TransactionScope class can be used to perform a series of actions in a transaction. A using declaration is typically used to define the scope of the transaction. Any transactional operations contained within the scope can be committed by calling the Complete method; if the Complete method is not called, any transactional methods in the scope will not commit.

    // Create a transactional scope.
    using (TransactionScope scope = new TransactionScope())
    {
        // Do something.

        // Do something else.

        // Commit the transaction.
        scope.Complete();
    }

    In order for methods to participate in the transaction, they must provide support for transactional operations. Database and message queue operations typically provide support for transactions.

    Transactions in Brokered Messaging

    Transaction support in Service Bus Brokered Messaging allows message operations to be performed within a transactional scope; however, there are some limitations around which operations can be performed within the transaction. In the current release, only one top-level messaging entity, such as a queue or topic, can participate in a transaction, and the transaction cannot include any other transaction resource managers, making transactions that span a messaging entity and a database impossible. When sending messages, the send operations can participate in a transaction, allowing multiple messages to be sent within a transactional scope; this allows for “all or nothing” delivery of a series of messages to a single queue or topic. When receiving messages, messages that are received in the peek-lock receive mode can be completed, deadlettered or deferred within a transactional scope. In the current release, the Abandon method will not participate in a transaction. The same restriction of only one top-level messaging entity applies here, so the Complete method can be called transactionally on messages received from the same queue, or on messages received from one or more subscriptions in the same topic.

    Sending Multiple Messages in a Transaction

    A transactional scope can be used to send multiple messages to a queue or topic. This will ensure that all the messages will be enqueued or, if the transaction fails to commit, no messages will be enqueued. An example of the code used to send 10 messages to a queue as a single transaction from a console application is shown below.

    QueueClient queueClient = messagingFactory.CreateQueueClient(Queue1);

    Console.Write("Sending");

    // Create a transaction scope.
    using (TransactionScope scope = new TransactionScope())
    {
        for (int i = 0; i < 10; i++)
        {
            // Send a message
            BrokeredMessage msg = new BrokeredMessage("Message: " + i);
            queueClient.Send(msg);
            Console.Write(".");
        }
        Console.WriteLine("Done!");
        Console.WriteLine();

        // Should we commit the transaction?
        Console.WriteLine("Commit send 10 messages? (yes or no)");
        string reply = Console.ReadLine();
        if (reply.ToLower().Equals("yes"))
        {
            // Commit the transaction.
            scope.Complete();
        }
    }
    Console.WriteLine();
    messagingFactory.Close();

    The transaction scope is used to wrap the sending of 10 messages. Once the messages have been sent, the user has the option to either commit or abandon the transaction. If the user enters “yes”, the Complete method is called on the scope, which commits the transaction and results in the messages being enqueued. If the user enters anything other than “yes”, the transaction will not commit and the messages will not be enqueued.

    Receiving Multiple Messages in a Transaction

    The receiving of multiple messages is another scenario where the use of transactions can improve reliability. When receiving a group of messages that are related, maybe in the same message session, it is possible to receive the messages in the peek-lock receive mode and then complete, defer, or deadletter the messages in one transaction. (In the current version of Service Bus, abandon is not transactional.) The following code shows how this can be achieved.

    using (TransactionScope scope = new TransactionScope())
    {
        while (true)
        {
            // Receive a message.
            BrokeredMessage msg = q1Client.Receive(TimeSpan.FromSeconds(1));
            if (msg != null)
            {
                // Write the message body and complete the message.
                string text = msg.GetBody<string>();
                Console.WriteLine("Received: " + text);
                msg.Complete();
            }
            else
            {
                break;
            }
        }
        Console.WriteLine();

        // Should we commit?
        Console.WriteLine("Commit receive? (yes or no)");
        string reply = Console.ReadLine();
        if (reply.ToLower().Equals("yes"))
        {
            // Commit the transaction.
            scope.Complete();
        }
        Console.WriteLine();
    }

    Note that if there is a large number of messages to be received, there is a chance that the transaction may time out before it can be committed. It is possible to specify a longer timeout when the transaction is created, but it may be better to receive and commit smaller batches of messages within the transaction.

    It is also possible to complete, defer, or deadletter messages received from more than one subscription, as long as all the subscriptions are contained in the same topic. As subscriptions are not top-level messaging entities, this scenario will work. The following code shows how this can be achieved.

    try
    {
        using (TransactionScope scope = new TransactionScope())
        {
            // Receive one message from each subscription.
            BrokeredMessage msg1 = subscriptionClient1.Receive();
            BrokeredMessage msg2 = subscriptionClient2.Receive();

            // Complete the message receives.
            msg1.Complete();
            msg2.Complete();

            Console.WriteLine("Msg1: " + msg1.GetBody<string>());
            Console.WriteLine("Msg2: " + msg2.GetBody<string>());

            // Commit the transaction.
            scope.Complete();
        }
    }
    catch (Exception ex)
    {
        Console.WriteLine(ex.Message);
    }

    Unsupported Scenarios

    The restriction that only one top-level messaging entity can participate in a transaction makes some useful scenarios unsupported. As the Windows Azure Service Bus is under continuous development and new releases are expected to be frequent, it is possible that this restriction may be lifted in future releases. The first such scenario is routing messages to two different systems. The following code attempts to do this.

    try
    {
        // Create a transaction scope.
        using (TransactionScope scope = new TransactionScope())
        {
            BrokeredMessage msg1 = new BrokeredMessage("Message1");
            BrokeredMessage msg2 = new BrokeredMessage("Message2");

            // Send a message to Queue1
            Console.WriteLine("Sending Message1");
            queue1Client.Send(msg1);

            // Send a message to Queue2
            Console.WriteLine("Sending Message2");
            queue2Client.Send(msg2);

            // Commit the transaction.
            Console.WriteLine("Committing transaction...");
            scope.Complete();
        }
    }
    catch (Exception ex)
    {
        Console.WriteLine(ex.Message);
    }

    The results of running the code are shown below. When attempting to send a message to the second queue, the following exception is thrown:

    No active Transaction was found for ID '35ad2495-ee8a-4956-bbad-eb4fedf4a96e:1'. The Transaction may have timed out or attempted to span multiple top-level entities such as Queue or Topic. The server Transaction timeout is: 00:01:00..TrackingId:947b8c4b-7754-4044-b91b-4a959c3f9192_3_3,TimeStamp:3/29/2012 7:47:32 AM.

    Another scenario where transactional support could be useful is forwarding messages from one queue to another queue. This would also involve more than one top-level messaging entity, and is therefore not supported. Another scenario that developers may wish to implement is performing transactions across messaging entities and other transactional systems, such as an on-premise database. In the current release this is not supported.

    Workarounds for Unsupported Scenarios

    There are some techniques developers can use to work around the one-top-level-entity limitation of transactions. When sending two messages to two systems, topics and subscriptions can be used. If the same message is to be sent to two destinations, then both subscriptions would use the default filter and the client would only send one message. If two different messages are to be sent, then filters on the subscriptions can route the messages to the appropriate destination. The client can then send the two messages to the topic in the same transaction.

    In scenarios where a message needs to be received and then forwarded to another system within the same transaction, topics and subscriptions can also be used. A message can be received from a subscription and then sent to a topic within the same transaction. As a topic is a top-level messaging entity and a subscription is not, this scenario will work.

    Read the article

  • How to call Office365 web service in a Console application using WCF

    - by ybbest
    In my previous post, I showed you how to call the SharePoint web service using a console application. In this post, I’d like to show you how to call the same web service in the cloud, aka Office365. Office365 uses claims authentication, as opposed to the Windows authentication used by a normal in-house SharePoint deployment. For details, see Wictor’s post on this here. The key to making it work is to understand that when you authenticate with Office365 you get an authentication token, and you then need to pass this token to your HTTP request as a cookie to make the web service call. Here is the code sample to make it work; I have modified Wictor’s version by removing the client object references.

    static void Main(string[] args)
    {
        MsOnlineClaimsHelper claimsHelper = new MsOnlineClaimsHelper(
            "[email protected]", "YourPassword", "https://ybbest.sharepoint.com/");
        HttpRequestMessageProperty p = new HttpRequestMessageProperty();
        var cookie = claimsHelper.CookieContainer;
        string cookieHeader = cookie.GetCookieHeader(new Uri("https://ybbest.sharepoint.com/"));
        p.Headers.Add("Cookie", cookieHeader);
        using (ListsSoapClient proxy = new ListsSoapClient())
        {
            proxy.Endpoint.Address = new EndpointAddress("https://ybbest.sharepoint.com/_vti_bin/Lists.asmx");
            using (new OperationContextScope(proxy.InnerChannel))
            {
                OperationContext.Current.OutgoingMessageProperties[HttpRequestMessageProperty.Name] = p;
                XElement spLists = proxy.GetListCollection();
                foreach (var el in spLists.Descendants())
                {
                    //System.Console.WriteLine(el.Name);
                    foreach (var attrib in el.Attributes())
                    {
                        if (attrib.Name.LocalName.ToLower() == "title")
                        {
                            System.Console.WriteLine("> " + attrib.Name + " = " + attrib.Value);
                        }
                    }
                }
            }
            System.Console.ReadKey();
        }
    }

    You can download the complete code from here. Reference: Managing shared cookies in WCF; How to do active authentication to Office 365 and SharePoint Online.

    Read the article

  • Service Level Logging/Tracing

    - by Ahsan Alam
    We all love to develop services, right? First-timers want to learn technologies like WCF and Web Services. Some simply want to build services; others find services a natural architectural decision for particular systems. Whatever the reason, services are commonly used in building a wide range of systems. Developers often encapsulate various functionality (small or big) within one or more services and expose them for multiple applications. Sometimes from day one (and certainly over time) these services may evolve into a set of black boxes. Services or not, black boxes or not, issues and exceptions are sometimes hard to avoid, especially in highly evolving and transactional systems. We can try to be methodical with our unit testing, QA and overall process, but we may not be able to avoid every system issue. When issues arise in one or more highly transactional services, it becomes necessary to resolve them very quickly. When systems handle thousands of transactions in a matter of hours, some issues may not surface immediately. That is when service-level logging becomes very useful. Technologies such as WCF allow us to enable service-level tracing with minimal effort, but that may not give us the complete picture. Developers may need to add tracing within critical areas of the code, with varying degrees of verbosity. Programmers can always utilize a logging framework such as the Logging Application Block to get the job done. It may seem overkill sometimes, but I have noticed from my experience that service-level logging helps programmers trace many issues very quickly.
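
    To make that concrete, here is a minimal sketch of what such tracing might look like inside a service operation using the Logging Application Block (the service name, category and priority values here are illustrative, not from a real project):

    using System;
    using System.Diagnostics;
    using Microsoft.Practices.EnterpriseLibrary.Logging;

    public class OrderService // hypothetical service, for illustration only
    {
        public void SubmitOrder(int orderId)
        {
            Trace("SubmitOrder entered, orderId=" + orderId, TraceEventType.Verbose);
            try
            {
                // ... actual work ...
                Trace("SubmitOrder completed, orderId=" + orderId, TraceEventType.Information);
            }
            catch (Exception ex)
            {
                Trace("SubmitOrder failed, orderId=" + orderId + ": " + ex, TraceEventType.Error);
                throw;
            }
        }

        private static void Trace(string message, TraceEventType severity)
        {
            LogEntry entry = new LogEntry();
            entry.Message = message;
            entry.Severity = severity;
            entry.Priority = 2;                   // illustrative priority
            entry.Categories.Add("ServiceTrace"); // routed to a listener via config
            Logger.Write(entry);
        }
    }

    The verbosity can then be dialed up or down per category in configuration, without touching the service code.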

    Read the article

  • When Do OS Questions Belong on Hardware Service Requests?

    - by Get Proactive Customer Adoption Team
    My Oracle Support—Logging an Operating System Service Request

    One of the concerns we hear from our customers with Premier Support for Systems is that they have difficulty logging a Service Request (SR) for an operating system issue. Because Premier Support for Systems includes support for the hardware and the associated operating system, you log any operating system issues through a hardware Service Request. To create a hardware Service Request, you enter the information into the Hardware tab of the Create Service Request screen; but to ensure that the hardware Service Request you enter is recognized and routed appropriately for an operating system issue, you need to change the product from your specific hardware to the operating system that the hardware is running. The example below shows you how to create a Service Request for the operating system when the support level is Premier Support for Systems. The key to success is remembering that the operating system coverage is part of the hardware support.

    To begin, from anywhere within My Oracle Support, click on the Create SR button as you would to log any SR. Enter your Problem Summary and the Problem Description. Next, click on the Hardware tab. Enter the System Serial Number (in this case “12345”) and click on Validate Serial Number. Notice that the product name for the hardware indicates “Sunfire T2000 Server”, with an option for a drop-down list of values. Click on the product drop-down and choose the correct operating system from the list; in this case I have chosen “OpenSolaris Operating System”. Next, you will need to enter the correct operating system version. At this point, you may proceed to complete and submit the Service Request.

    If your company has Premier Support for Systems, just remember that your operating system has coverage under the hardware it runs on, so start with the Hardware tab on the Service Request screen and change the product-related information to reflect the operating system you need help with. Following these simple steps will ensure that the system assigns your Service Request to the right support team for an operating system issue, and the support engineer can quickly begin working on your issue.

    Read the article

  • UPK 3.6.1 Enablement Service Pack 1

    - by marc.santosusso
    UPK 3.6.1 Enablement Service Pack 1 is now available on My Oracle Support as Patch ID 9533920 (requires a My Oracle Support account). Below is a list of the enhancements included in this Enablement Service Pack.

    Tabbed Gateway: Users now have the option to deliver multiple help resources through in-application support using UPK's new tabbed gateway. This feature is managed using the Configuration Utility for In-Application Support and is documented in the In-Application Support Guide.

    Firefox 3.6: The latest release of Mozilla Firefox, version 3.6, is now supported by the UPK Player, the SmartHelp browser add-on, and SmartMatch recording technology.

    Oracle E-Business Suite: Added support for version 12.1.2 for enhanced object and context recognition. The UPK PLL is no longer needed for Oracle versions 12.1.2 and higher.

    Agile PLM: Agile PLM version 9.3 is supported for enhanced object recognition.

    Customer Needs Management: Customer Needs Management schema 1.0.014 is supported for context recognition.

    Siebel CRM: Siebel CRM (On Premise) versions 8.2, 8.1.1.2, 8.0.0.9, and 8.1.1 build 21112 (in addition to the previously supported build 21111) are supported for enhanced object and context recognition.

    SAP: SAP GUI for HTML version 7.10 patch 16 is supported for enhanced object and context recognition.

    CA: CA Clarity PPM version R12.5 and CA Service Desk version R12.5 are supported for context recognition.

    Java: Added support for Java 6 update 12.

    Read the article

  • NTP service, offset increasing after sync

    - by Ajay
    I have installed the Ubuntu 12.10 version on my PC. I am running the NTP service with a GPS unit as the NTP server. I found that when we start the NTP service with the ntp start command, the PC is able to sync with the GPS, as I get a '*' symbol before the GPS IP when I run the ntpq -p command. This remains good for some time, and then the '*' symbol disappears, which means that the PC is no longer synchronized to that server. From then on, ntpq -p shows that all parameters look OK, but as the '*' is gone, the offset steadily increases:

         remote           refid      st t when poll reach   delay   offset  jitter
    ==============================================================================
    *192.168.100.33  .GPS.            1 u    7   16    1    2.333   23.799   0.808
    *192.168.100.33  .GPS.            1 u   14   16    3    2.333   23.799   0.879
    *192.168.100.33  .GPS.            1 u   11   16    7    2.333   23.799   1.500
    *192.168.100.33  .GPS.            1 u    8   16   17    2.333   23.799   2.177

    Below is the ntpq status after sync with the GPS is lost:

     192.168.100.33  .GPS.            1 u    1   16  377    2.404  1169.94   1.735
     192.168.100.33  .GPS.            1 u    -   16  377    2.513  1171.80   0.898
     192.168.100.33  .GPS.            1 u   15   16  377    2.513  1171.80   0.898

    Since the GPS is still available, the PC never re-synchronizes itself to it later on. I have to restart the NTP service; then the PC synchronizes to the GPS and the '*' symbol returns.
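
    For reference, one configuration direction sometimes tried in cases like this (a sketch, not a verified fix for this particular setup) is to pin the poll interval and tell ntpd to keep correcting rather than give up once the offset grows past the 128 ms step threshold, e.g. in /etc/ntp.conf:

    # Poll the GPS reference every 16-64 s and burst at startup.
    server 192.168.100.33 iburst minpoll 4 maxpoll 6
    # Keep a frequency drift file so restarts converge faster.
    driftfile /var/lib/ntp/ntp.drift
    # Disable the panic threshold so ntpd never exits on a large offset.
    tinker panic 0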

    Read the article

  • Security in a private web service

    - by Oni
    I am developing a web site and a web service for a small online game. Technically, I'll be using Express (Node.js) and MongoDB+Redis for the databases. This is the structure I came up with: One Express server that will serve as the Web Service; this will connect to the databases. One Express server that will provide the web site; it will connect to the Web Service to retrieve and push information. iOS and Android applications will be able to interact with the Web Service. Taking into account that: It is a small game. The information transferred is not critical. There will NOT be third-party applications, at least for the moment. My concern is about which level of security I should use in each of these scenarios: Security of the user playing through the web browser. Security of the applications and the web server connecting to the WS. I have taken a look at the different options, and: OAuth and/or HTTPS seem like too much for this scenario, don't they? Would it be a good option to hash the user and password with MD5 (or similar) and some salt? I would like to get some direction and investigate on my own, rather than getting a response like "you should use this node.js module...". Thanks in advance,
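
    On the hashing question: MD5 with a salt is cheap to brute-force, and the usual alternative is a deliberately slow key-derivation function. Below is a minimal sketch using only Node's built-in crypto module (no third-party modules; assumes a reasonably current Node version), shown here in TypeScript:

    import { randomBytes, pbkdf2Sync, timingSafeEqual } from "crypto";

    // Derive a salted hash; store "salt:hash" next to the user record in MongoDB.
    function hashPassword(password: string): string {
        const salt = randomBytes(16).toString("hex");
        const hash = pbkdf2Sync(password, salt, 100000, 64, "sha512").toString("hex");
        return salt + ":" + hash;
    }

    // Recompute with the stored salt and compare in constant time.
    function verifyPassword(password: string, stored: string): boolean {
        const parts = stored.split(":");
        const candidate = pbkdf2Sync(password, parts[0], 100000, 64, "sha512");
        return timingSafeEqual(candidate, Buffer.from(parts[1], "hex"));
    }

    The iteration count (100000 here) is illustrative; it should be tuned to the server's hardware.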

    Read the article

  • How Service Component Architecture (SCA) Can Be Incorporated Into Existing Enterprise Systems

    After viewing Rob High’s presentation “The SOA Component Model” hosted on InfoQ.com, I can foresee how Service Component Architecture (SCA) can be incorporated into an existing enterprise. According to IBM’s DeveloperWorks website, SCA is a set of specifications which outline a model for constructing applications/systems using a Service-Oriented Architecture (SOA). In addition, SCA builds on open standards such as Web services. In the future, I can easily see how some large IT shops could potentially divide development teams or work groups into Component/Data Object groups and Standard Development groups. The Component/Data Object group would only work on creating and maintaining components that are reused throughout the entire enterprise. The Standard Development group would work on new and existing projects that incorporate the use of various components to accomplish various business tasks. In my opinion, the incorporation of SCA into any IT department will initially slow down the number of new features developed, due to the time needed to create the new, loosely coupled components. However, once a company becomes more mature in its SCA process, the number of program features developed will greatly increase. I feel this is because the loosely coupled components needed to add new features will already be built and ready to incorporate into any new development feature request.

    References:

    BEA Systems, Cape Clear Software, IBM, Interface21, IONA Technologies PLC, Oracle, Primeton Technologies Ltd, Progress Software, Red Hat Inc., Rogue Wave Software, SAP AG, Siebel Systems, Software AG, Sun Microsystems, Sybase, TIBCO Software Inc. (2006). Service Component Architecture. Retrieved November 27, 2011, from DeveloperWorks: http://www.ibm.com/developerworks/library/specification/ws-sca/

    High, R. (2007). The SOA Component Model. Retrieved November 26, 2011, from InfoQ: http://www.infoq.com/presentations/rob-high-sca-sdo-soa-programming-model

    Read the article

  • SOA Starting Point: Methods for Service Identification and Definition

    As more and more companies start to incorporate a Service-Oriented Architecture design approach into their existing enterprise systems, the need arises for a standardized integration technology. One common technology is an Enterprise Service Bus (ESB). An ESB, as defined by Progress Software, connects and mediates all communications and interactions between services. In essence, an ESB is a form of middleware that allows services to communicate with one another regardless of framework, environment, or location. With the emergence of the ESB, new emphasis is being placed on approaches for determining which Web services should be built, and in what order. In May 2011, SOA Magazine published an article that identified ten common methods for identifying and defining services: Business Process Decomposition, Business Functions, Business Entity Objects, Ownership and Responsibility, Goal-Driven, Component-Based, Existing Supply (Bottom-Up), Front-Office Application Usage Analysis, Infrastructure, and Non-Functional Requirements. Each of these methods has its own pros and cons within the design process. I personally feel that multiple methodologies should be used during a design process in order to accurately define a design for a system or enterprise system. I like to create a custom cocktail derived from combining these methodologies to ensure that my design fits the project’s and business’s needs while still following development standards and guidelines. Of these ten methods, I am particularly fond of Business Process Decomposition, Business Functions, Goal-Driven, and Component-Based, and I routinely use them in my designs. Works Cited: Hubbers, J.-W., Ligthart, A., & Terlouw, L. (2007, December 10). Ten Ways to Identify Services. Retrieved from SOA Magazine: http://www.soamag.com/I13/1207-1.php Progress.com. (2011, October 30). ESB Architecture and Lifecycle Definition. Retrieved from Progress.com: http://web.progress.com/en/esb-architecture-lifecycle-definition.html

    Read the article

  • Installing Visual Studio 2010 Service Pack 1

    - by Martin Hinshelwood
    As has become customary when the product team releases a new patch, SP or version, I like to document the install. This post seems almost redundant as I had no problems, but I think that is as valuable to others thinking of installing the Service Pack as all the problems we sometimes get. As per Brian's post, I am installing Visual Studio Team Foundation Server Service Pack 1 first, and indeed, as this is a single-server local deployment, I need to install both. If I only install one it will leave the other product broken.
    Figure: Hopefully this will be more uneventful
    It takes a little while for your system to be checked to see what components need updating. On my main computer this was pretty quick, but on the laptop it took some time.
    Figure: There are a lot of components to update
    With this update also comes an update to .NET as well as many other components.
    Figure: I downloaded the full 1.5 GB, but you could do a web install
    How long the download takes depends on how good your internet connection is, but as I am now in the US I decided not to trust the connection speed. It took around 30-40 minutes to download the full package, which is a little slow.
    Figure: I did not need to download, but that would increase the install time
    So on my main computer this again was fast, but on my netbook it took a little while.
    Figure: The actual install took around 30-40 minutes (2 hours on the netbook)
    I was pretty impressed with the speed of the install, and as Team Explorer is now out of the box with Visual Studio 2010, I don't get the problem of the SP being installed before Team Explorer and the disjointed experience that follows.
    Figure: As I suspected, no problems with the install
    Figure: Checking in Visual Studio shows that all the servicing points were successful
    This was an easy experience even if the SP was over 1.5 GB to download. Hopefully I will be discovering things that work better for a good while to come, as well as not finding holes in the product that I had not encountered yet. What were your experiences of installing Visual Studio 2010 Service Pack 1?

    Read the article

  • Planning and Budgeting Cloud Service - Partner Webcast

    - by Mike.Hallett(at)Oracle-BI&EPM
    Please join us for a 90-minute live Partner Webcast giving an overview of the upcoming Oracle Planning and Budgeting Cloud Service (PBCS) offering on Tuesday, 26th November, 2013 at 5:00 pm CET / 4:00 pm UK. Look out for the joining URL and instructions in my November Newsletter, coming soon. As a reminder, there was also a Partner Webcast about PBCS recorded in August 2013 which included a demo. Replay link here. Topics include: latest news from Product Management; live demo; overview of assets and collaterals; Q&A session. Oracle Planning and Budgeting Cloud Service (PBCS) offers organizations the market-leading Oracle Hyperion Planning and Budgeting solution delivered via Oracle’s public cloud service.

    Read the article

  • Dynamic Monitoring Service (DMS) Configuration Dumping and CPU Utilization

    - by ShawnBailey
    There was recently a report of CPU spikes on a system that were occurring at precise 3-hour intervals. Research revealed that the spikes were the result of the Dynamic Monitoring Service generating a metrics dump and writing it under the server 'logs' folder for every WLS server in the domain. This blog provides some information on what this is for and how to control it. The Dynamic Monitoring Service is a facility in FMW (JRF to be more precise) that collects runtime data on the components deployed to WebLogic. Each component is responsible for how much or how little it uses the service, and SOA collects a fair amount of information. To view what is collected on any running server you can use the following URL, http://host:port/dms/Spy, and log in with admin credentials. DMS is essentially always running and collecting this information at runtime, and to protect against loss of this data it also runs automatic backups, by default at the 3-hour interval mentioned above. Most of the management options for DMS are exposed through WLST, but these settings are not, so we must open the dms_config.xml file, which can be found in DOMAIN_HOME/config/fmwconfig/servers/<server_name>/dms_config.xml. The contents are fairly short, and at the bottom you will find the following entry:

        <dumpConfiguration>
            <dump intervalSeconds="10800" maxSizeMBytes="75" enabled="true"/>
        </dumpConfiguration>

    The interval of 10800 seconds corresponds to the 3 hours, and the maximum size is 75 MB. The file is written as an archive to DOMAIN_HOME/servers/<server_name>/logs/metrics. This archive contains the dump in XML format. You can disable the dumps altogether by simply setting the 'enabled' value to 'false', or of course you could modify the other parameters to suit your needs. Disabling the dumps will NOT impact DMS collections or display at runtime. It will only eliminate these periodic backups.
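    For example, to stop the 3-hourly dumps while leaving runtime collection and the /dms/Spy view untouched, the same element would read:

        <dumpConfiguration>
            <dump intervalSeconds="10800" maxSizeMBytes="75" enabled="false"/>
        </dumpConfiguration>

    Restarting the managed server afterwards is an assumption worth verifying in your own environment, as the post does not say whether the file is re-read on the fly.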

    Read the article

  • Emailing Service: To or Bcc?

    - by Shelakel
    I'm busy coding a reusable e-mail service for my company. The e-mail service will be doing quite a few things via injection through the strategy pattern (such as handling e-mail send-rate throttling, switching between SMTP and AmazonSES or Google AppEngine e-mail clients when daily quotas are exceeded, and send-statistics tracking (mostly because it is necessary in order to stay within quotas), to name a few). Because e-mail sending needs to be throttled and other limitations exist (e.g. the maximum recipient quota on AmazonSES limits recipients to 50 per send), the e-mails typically need to be broken up. From your experience, would it be better to send bulk (multiple recipients per e-mail) or a single e-mail per recipient? The implication is that to send to 1,000 recipients with a limit of 50 per send, you would send 20 e-mails using BCC in a newsletter scenario. When sending one e-mail per recipient, you would send 1,000 e-mails. E-mail sending is asynchronous (due to the inherent latency when sending, it's typically only possible to send 5 e-mails per second unless you are using multiple clients asynchronously). Edit: Just for full disclosure, this service won't be used by or sold to spammers and will, as far as possible, automatically comply with national and international laws. Closed: Thanks for all the valuable feedback. The concerns regarding compliance with laws, user experience (generic vs. personalized unsubscribe) and spam regulation via ISP blacklisting do make To the preferred, and possibly the only, choice when sending system-generated e-mails to recipients.
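    Whichever direction you choose, the batching logic is the same: chunk the recipient list by the provider's per-send cap. A minimal sketch in C# (names are illustrative, not a real API):

        using System;
        using System.Collections.Generic;
        using System.Linq;

        static class RecipientBatcher
        {
            // Splits recipients into batches no larger than maxPerSend.
            public static IEnumerable<List<string>> Batch(
                IReadOnlyList<string> recipients, int maxPerSend = 50)
            {
                for (int i = 0; i < recipients.Count; i += maxPerSend)
                    yield return recipients.Skip(i).Take(maxPerSend).ToList();
            }
        }

    With 1,000 recipients and maxPerSend = 50 this yields the 20 sends from the scenario above; with per-recipient To sending you would instead enumerate the list and queue 1,000 single-recipient messages through the throttling strategy.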

    Read the article

  • Change Casing in WCF Service Reference

    - by Eric J.
    I'm creating a service reference to a web service written in Java. The generated classes now follow the Java casing convention used in the web service; for example, class names are camelCase rather than PascalCase. Is there a way to get the desired casing from the service reference? CLARIFICATION: With WSE-based services, one could modify the generated Reference.cs to provide .NET-standard casing and use XmlElementAttribute to map to the Java naming presented by the external web service, like this:

        [System.Xml.Serialization.XmlElementAttribute("resultType", Form=System.Xml.Schema.XmlSchemaForm.Unqualified)]
        [System.Runtime.Serialization.DataMember]
        public virtual MyResultType ResultType { ... }

    Not terribly maintenance-friendly without writing custom code to either generate the proxy code or modify it after it's been generated. What I'm after is one or more options to present a WCF-generated client proxy to calling applications using the .NET casing conventions, achieving the same as I did previously with WSE. Hopefully with less manual effort.
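    One low-tech option, if regenerating or post-processing Reference.cs proves too brittle: keep the generated camelCase proxy internal and expose a thin hand-written facade with .NET casing. A sketch (all type names here are hypothetical stand-ins for whatever the service reference generates):

        // Stubs standing in for the generated proxy types:
        public class myResultType { public string value; }
        public class javaServiceProxy { public myResultType resultType; }

        // Hand-written facade; the only code calling applications ever see.
        public class JavaServiceFacade
        {
            private readonly javaServiceProxy _proxy;
            public JavaServiceFacade(javaServiceProxy proxy) { _proxy = proxy; }

            public myResultType ResultType
            {
                get { return _proxy.resultType; }
                set { _proxy.resultType = value; }
            }
        }

    The facade survives "Update Service Reference" untouched, at the cost of one mapping layer to maintain.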

    Read the article

  • How can I set up .NET unhandled exception handling in a Windows service?

    - by Mike Pateras
    protected override void OnStart(string[] args)
    {
        AppDomain.CurrentDomain.UnhandledException += new UnhandledExceptionEventHandler(CurrentDomain_UnhandledException);
        Thread.Sleep(10000);
        throw new Exception();
    }

    void CurrentDomain_UnhandledException(object sender, UnhandledExceptionEventArgs e)
    {
    }

    I attached a debugger to the above code in my Windows service, setting a breakpoint in CurrentDomain_UnhandledException, but it was never hit. The exception pops up saying that it is unhandled, and then the service stops. I even tried putting some code in the event handler, in case it was getting optimized away. Is this not the proper way to set up unhandled exception handling in a Windows service?
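    One likely explanation, offered here as a sketch rather than a definitive answer: the ServiceBase plumbing catches exceptions thrown inside OnStart and reports a failed start to the Service Control Manager, so they never reach the AppDomain's UnhandledException event. Moving the work onto a worker thread lets the handler fire:

        using System;
        using System.ServiceProcess;
        using System.Threading;

        public class MyService : ServiceBase
        {
            protected override void OnStart(string[] args)
            {
                AppDomain.CurrentDomain.UnhandledException += CurrentDomain_UnhandledException;
                // Run the real work off the SCM's startup path; an exception
                // here is a genuine unhandled AppDomain exception.
                new Thread(() =>
                {
                    Thread.Sleep(10000);
                    throw new Exception("boom");
                }).Start();
            }

            void CurrentDomain_UnhandledException(object sender, UnhandledExceptionEventArgs e)
            {
                // Log e.ExceptionObject here; note the process still
                // terminates after this handler returns.
            }
        }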

    Read the article

  • Adding WCF service reference adds DataContract types too

    - by Avi Shilon
    Hi everybody, I've used Visual Studio's Add Service Reference feature to add a service (actually it is a workflow service, created in WF4 RC1, but I don't think this makes any difference), and it also added the DataContracts that the service uses. At first this seemed fine, because all I had in the DataContracts were simple properties with no implementation. But now I've added code to the constructor of one of the data contracts that creates an instance of one of its properties, which exposes a list of other DCs, and when I updated the service reference via VS (2010 RC1), the implementation was not updated. What should I do? Should I use my DCs instead of the ones created by VS, or should I use the ones VS created? I've noticed that the properties in the VS-generated DCs contain some additional logic for checking equality in the setters, and they also implement some interfaces (like IExtensibleDataObject and INotifyPropertyChanged) which might come in handy in the future (I'm not knowledgeable about WCF). Thank you for your time folks, Avi
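    A sketch of one common workaround (not from the original question, so treat it as an assumption to verify): leave the generated DCs alone and put your initialization in a partial class in a separate file, using the serialization callback instead of the constructor, since DataContractSerializer does not run constructors on deserialization anyway. Type names below are hypothetical:

        using System.Collections.Generic;
        using System.Runtime.Serialization;

        // Stub standing in for the VS-generated half of the contract:
        [DataContract]
        public partial class OrderContract
        {
            [DataMember]
            public List<string> Items { get; set; }
        }

        // Hand-written half in its own file; "Update Service Reference"
        // regenerates Reference.cs but never touches this.
        public partial class OrderContract
        {
            [OnDeserializing]
            void InitializeOnDeserializing(StreamingContext context)
            {
                // Constructors are bypassed during deserialization,
                // so initialize collections here.
                Items = new List<string>();
            }
        }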

    Read the article

  • Great Customer Service Example

    - by MightyZot
    A few days ago I wrote about what I consider a poor customer service interaction with TiVo, a company that I have been faithful to for the past 12 years or so. In that post I talked about how they helped me, but I felt like I was doing something wrong at the end of the call – when in reality I was just following through with an offer that TiVo made possible through my cable company. Today I had a wonderful customer service interaction with American Express, another company that I have been loyal to for many years. (I am a Gold Card member.) I like my Amex card because I can use it for big purchases and it forces me to pay them off at the end of the month. Well, the reality is that I'm not always so good at doing that, so sometimes my payments stretch over a couple of months. :) A few days ago I received an email from "American Express" fraud detection. The email stated that I should call a toll-free number and have the last four digits of my card handy. I grew up during the BBS era with some creative and somewhat mischievous friends. I've learned to be extremely cautious with regard to my online life! So, I did what you would expect… I sent them a nice reply that said "Go screw yourself." For the past couple of days someone has been trying to call me, and I assumed it was the same prankster trying to get the last four digits of my card. The last caller left a message indicating that they were from American Express and they wanted to talk to me about my card. After looking up their customer service numbers on the www.americanexpress.com web site, I called and was put through to the fraud detection group. The rep explained that there were some charges on my wife's card that did not fit our purchase profile. She went through each charge and, for the most part, they looked like charges my wife may have made. My wife had asked to use the card for some Christmas shopping during the same timeframe as the charges. The American Express rep very politely explained that these looked out of character to her. She continued through the charges. She listed a charge for $160 – at this point my adrenaline started kicking in. My wife said she was going to charge about $25 or $30, not $160. Next, the rep listed a charge for over $1200. Uh oh!! Now I knew that my account had been compromised. I informed the rep that we definitely did not make those charges. She replied with, "that's ok Mr Pope, we declined those charges as well as some others." We went through the pending charges and there were a couple more that were questionable. The rep very patiently waited while I called my wife on my office phone to verify the charges. Sure enough, my wife had not ordered anything from Netflix or purchased anything with Yahoo Wallet! "No problem Mr Pope, we will remove those charges as well." "We are going to cancel your wife's card and send her a new one. She will receive it by 7pm tomorrow via Federal Express. Please watch your statements over the next couple of months. If you notice anything fishy, give us a call and we will take care of it for you." (Wow, I'm thinking to myself!) "Is there anything else I can help you with Mr Pope?" "Nope, thank you very much for catching this so early and declining those charges!", I said smiling. Apparently she could hear me smiling on the other end of the phone line, because she replied with "keep smiling Mr Pope and have a good rest of your week." Now THAT's customer service! Thank you, American Express!!! I shall remain an ever faithful customer.

    Read the article

  • Database – Beginning with Cloud Database As A Service

    - by Pinal Dave
    I love my weekend projects. Everybody does different activities on the weekend, like traveling, reading or just nothing. Every weekend I try to do something creative and different in the database world. The goal is that I learn something new, and if I enjoy my learning experience I share it with the world. This weekend, I decided to explore Cloud Database As A Service – Morpheus. In my career I have managed many databases in the cloud and I have good experience in managing them. I should highlight that today’s applications use multiple databases, from SQL for transactions and analytics, NoSQL for documents, and in-memory for caching to indexing for search. Provisioning and deploying these databases often requires extensive expertise and time. Often these databases are also not deployed on the same infrastructure, which can create unnecessary latency between the application layer and the databases. Not to mention the varying quality of service depending on the infrastructure and the service provider where they are deployed. Moreover, there are additional problems that I have experienced with a traditional database setup when hosted in the cloud: database provisioning and orchestration, slow speed due to hardware issues, poor monitoring tools, and high network latency. Now, if you have great software and network engineers, you can continuously work on the above problems and overcome them. However, not every organization has the luxury of top-notch experts in the field. The above issues are related to infrastructure, but there are a few more problems related to the software/application side. Here are the top three things which can be problems if you do not have an application expert: replication and clustering, simple provisioning of hard drive space, and automatic sharding. Well, Morpheus looks like a product built by experts who have faced similar situations in the past. The product pretty much addresses all the pain points of developers and database administrators. What is different about Morpheus is that it offers a variety of databases, from MySQL, MongoDB and ElasticSearch to Redis, as a service. Thus users can pick and choose any combination of these databases. All of them can be provisioned in a matter of minutes with a simple and intuitive point-and-click user interface. The Morpheus cloud is built on Solid State Drives (SSD) and is designed for high-speed database transactions. In addition, it offers a direct link to Amazon Web Services to minimize latency between the application layer and the databases. Here are a few steps on how one can get started with Morpheus. Follow along with me. First go to http://www.gomorpheus.com and register for a new and free account.
    Step 1: Signup. It is very simple to sign up for Morpheus.
    Step 2: Select your database. I use MySQL for my daily routine, so I selected MySQL. Clicking the big red button to add an instance brought up a dialog for creating a new instance.
    Step 3: Create User. Now we just have to create a user in our portal which we will use to connect to a database hosted at Morpheus. Click on your database instance and it will bring you to the User screen. Here you will once again notice a big red button to create a new user. I created a user with my first name.
    Step 4: Configure your MySQL client. I used MySQL Workbench and connected to the MySQL instance I had created, using its IP address and the user.
    That’s it! You are connected to the MySQL instance. Now you can create your objects just like you would on your local box.
    You will have all the features of Morpheus when you are working with your database. Dashboard: While working with Morpheus, I was most impressed with its dashboard. In future blog posts, I will write more about this feature. With Morpheus you use the same process for provisioning and connecting to the other databases: MongoDB, ElasticSearch and Redis. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: MySQL, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL
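    For completeness, a hedged sketch of connecting to such an instance from application code rather than Workbench, using MySQL Connector/NET (the MySql.Data package; the host, user and password are placeholders for the values shown on the Morpheus instance page):

        using System;
        using MySql.Data.MySqlClient;

        class MorpheusDemo
        {
            static void Main()
            {
                var connStr = "Server=YOUR-INSTANCE-IP;Database=test;Uid=pinal;Pwd=YOUR-PASSWORD;";
                using (var conn = new MySqlConnection(connStr))
                {
                    conn.Open();
                    using (var cmd = new MySqlCommand("SELECT VERSION();", conn))
                        Console.WriteLine(cmd.ExecuteScalar());  // prints the server version
                }
            }
        }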

    Read the article

  • Culture Shmulture?

    - by steve.diamond
    I've been thinking about "Customer Experience Management" lately. Here at Oracle, we arguably have the most complete suite of applications for managing the customer experience across and in the context of multiple channels -- from marketing to loyalty to contact center to self-service to analytics offerings, and more. And stay tuned, because in coming months let's just say we'll have even more to talk about on this front. But that said... Last weekend my wife and I stayed at one of the premier hotel chains on the planet. I won't name them, but we all know the short list. It could have been the St. Regis or the Ritz-Carlton or Four Seasons or Park Hyatt or... This stay, at this particular hotel, was simply outstanding. Within a chain known for providing "above and beyond" levels of service, this particular hotel, under this particular manager, exceeded expectations on many fronts. For example, at the spa we mentioned to the two attendants that my wife is seven months pregnant and that we had previously had a lot of trouble conceiving. We then went to our room. Ten minutes later we heard a knock at the door and received a plate of chocolate-covered strawberries with a heartfelt note and an inspiring quote, signed by the two spa attendants. The following day we arranged to have a bellhop drive us to the beach. Although they had a pre-arranged beach shuttle service with time limits, etc., he greeted us by saying, "I'm yours for the day until 4 p.m. Whatever you want to do is fine by me, as long as it's legal!" The morning that we left we arranged to have a taxi drive us to the airport -- a nearly 40-mile drive. What showed up was a private coach, complete with a navy-blue-suited driver dude, and we were charged the taxi fare. There were many other awesome exchanges I won't mention here, although I did email the GM of this hotel two nights ago and expressed our effusive praise and gratitude. I'd submit that this hotel chain would have a distinct advantage using even more Oracle software to manage and optimize its customer interactions (yes, they are a customer). But WITHOUT the culture -- that management team, and that instillation of aligned values across all employees exemplifying 'the golden rule' -- I wonder how much technology really matters in providing a distinctively positive and memorable customer experience. Lest you think I'm alone in these pontifications, have you read Paul Greenberg's blog lately? Have you seen one of his most recent posts? Now this SPECIFIC post is NOT about customer service per se. But it is about people. So yes, think long and hard about the technology you seek to deploy. But never forget who will be interacting with your systems, and your customers.

    Read the article

  • Why is there no service-oriented language?

    - by Wolfgang
    Edit: To avoid further confusion: I am not talking about web services and such. I am talking about structuring applications internally; it's not about how computers communicate. It's about programming languages, compilers and how the imperative programming paradigm is extended. Original: In the imperative programming field, we have seen two paradigms in the past 20 years (or more): object-oriented (OO), and service-oriented (SO), a.k.a. component-based (CB). Both paradigms extend the imperative programming paradigm by introducing their own notion of modules. OO calls them objects (and classes) and lets them encapsulate both data (fields) and procedures (methods) together. SO, in contrast, separates data (records, beans, ...) from code (components, services). However, only OO has programming languages which natively support its paradigm: Smalltalk, C++, Java and all other JVM-compatibles, C# and all other .NET-compatibles, Python, etc. SO has no such native language. It only comes into existence on top of procedural languages or OO languages: COM/DCOM (binary, C, C++), CORBA, EJB, Spring, Guice (all Java), ... These SO frameworks clearly suffer from the missing native language support for their concepts. They start using OO classes to represent services and records. This leads to designs where there is a clear distinction between classes that have methods only (services) and those that have fields only (records). Inheritance between services or records is then simulated by inheritance of classes. Technically, it's not kept so strict, but in general programmers are advised to make classes play only one of the two roles. They use additional, external languages to represent the missing parts: IDLs, XML configurations, annotations in Java code, or even an embedded DSL as in Guice. This is needed especially, but not only, because the composition of services is not part of the service code itself. In OO, objects create other objects, so there is no need for such facilities, but in SO there is, because services don't instantiate or configure other services. They establish an inner-platform effect on top of OO (early EJB, CORBA) where the programmer has to write all the code that is needed to "drive" SO. Classes represent only a part of the nature of a service, and lots of classes have to be written to form a service together. All that boilerplate is necessary because there is no SO compiler which would do it for the programmer. This is just like what some people did in C for OO before C++ existed: you just pass the record which holds the object's data as the first parameter to the procedure which is the method. In an OO language this parameter is implicit, and the compiler produces all the code that we need for virtual functions, etc. For SO, this is clearly missing. The newer frameworks especially make extensive use of AOP or introspection to add the missing parts to an OO language. This doesn't bring the necessary language expressiveness, but it avoids the boilerplate code described in the previous point. Some frameworks use code generation to produce the boilerplate code, with XML configuration files or annotations in OO code as the source of information. Not all of the phenomena that I mentioned above can be attributed to SO, but I hope they clearly show that there is a need for an SO language. Since this paradigm is so popular: why isn't there one? Or maybe there are some academic ones, but at least the industry doesn't use one.
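    To make the record/service split concrete, here is a minimal sketch (in C#, purely illustrative) of the pattern the text describes: the data lives in a fields-only class, the behavior in a methods-only class that receives the record explicitly, exactly like the "C for OO" trick of passing the struct as the first parameter:

        // Record: data only, no behavior.
        public class CustomerRecord
        {
            public string Name;
            public decimal Balance;
        }

        // Service: behavior only, no state of its own.
        public class BillingService
        {
            public void Charge(CustomerRecord customer, decimal amount)
            {
                customer.Balance -= amount;  // the "method" operates on an external record
            }
        }

    Everything around this (wiring a BillingService to its callers, lifecycle, configuration) is what the frameworks express in IDLs, XML or annotations, because the language itself has no construct for it.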

    Read the article
