Search Results

Search found 21719 results on 869 pages for 'password security'.


  • NetBeans error "build-impl.xml:688": The module has not been deployed.

    - by Sarang
    Hi everyone, I am getting this error while deploying my JSP project:

    ```
    In-place deployment at C:\Users\Admin\Documents\NetBeansProjects\send-mail\build\web
    Initializing...
    deploy?path=C:\Users\Admin\Documents\NetBeansProjects\send-mail\build\web&name=send-mail&force=true failed on GlassFish Server 3
    C:\Users\Admin\Documents\NetBeansProjects\send-mail\nbproject\build-impl.xml:688: The module has not been deployed.
    BUILD FAILED (total time: 0 seconds)
    ```

    Is there any solution for this? Stack trace:

    ```
SEVERE: DPL8015: Invalid Deployment Descriptors in Deployment descriptor file WEB-INF/web.xml in archive [web]. Line 19 Column 23 -- cvc-complex-type.2.4.a: Invalid content was found starting with element 'display-name'. One of '{"http://java.sun.com/xml/ns/javaee":servlet-class, "http://java.sun.com/xml/ns/javaee":jsp-file, "http://java.sun.com/xml/ns/javaee":init-param, "http://java.sun.com/xml/ns/javaee":load-on-startup, "http://java.sun.com/xml/ns/javaee":enabled, "http://java.sun.com/xml/ns/javaee":async-supported, "http://java.sun.com/xml/ns/javaee":run-as, "http://java.sun.com/xml/ns/javaee":security-role-ref, "http://java.sun.com/xml/ns/javaee":multipart-config}' is expected.
SEVERE: DPL8005: Deployment Descriptor parsing failure : cvc-complex-type.2.4.a: Invalid content was found starting with element 'display-name'. One of '{"http://java.sun.com/xml/ns/javaee":servlet-class, "http://java.sun.com/xml/ns/javaee":jsp-file, "http://java.sun.com/xml/ns/javaee":init-param, "http://java.sun.com/xml/ns/javaee":load-on-startup, "http://java.sun.com/xml/ns/javaee":enabled, "http://java.sun.com/xml/ns/javaee":async-supported, "http://java.sun.com/xml/ns/javaee":run-as, "http://java.sun.com/xml/ns/javaee":security-role-ref, "http://java.sun.com/xml/ns/javaee":multipart-config}' is expected.
SEVERE: Exception while deploying the app
java.io.IOException: org.xml.sax.SAXParseException: cvc-complex-type.2.4.a: Invalid content was found starting with element 'display-name'. One of '{"http://java.sun.com/xml/ns/javaee":servlet-class, "http://java.sun.com/xml/ns/javaee":jsp-file, "http://java.sun.com/xml/ns/javaee":init-param, "http://java.sun.com/xml/ns/javaee":load-on-startup, "http://java.sun.com/xml/ns/javaee":enabled, "http://java.sun.com/xml/ns/javaee":async-supported, "http://java.sun.com/xml/ns/javaee":run-as, "http://java.sun.com/xml/ns/javaee":security-role-ref, "http://java.sun.com/xml/ns/javaee":multipart-config}' is expected.
at org.glassfish.javaee.core.deployment.DolProvider.load(DolProvider.java:170) at org.glassfish.javaee.core.deployment.DolProvider.load(DolProvider.java:79) at com.sun.enterprise.v3.server.ApplicationLifecycle.loadDeployer(ApplicationLifecycle.java:612) at com.sun.enterprise.v3.server.ApplicationLifecycle.setupContainerInfos(ApplicationLifecycle.java:554) at com.sun.enterprise.v3.server.ApplicationLifecycle.deploy(ApplicationLifecycle.java:262) at com.sun.enterprise.v3.server.ApplicationLifecycle.deploy(ApplicationLifecycle.java:183) at org.glassfish.deployment.admin.DeployCommand.execute(DeployCommand.java:272) at com.sun.enterprise.v3.admin.CommandRunnerImpl$1.execute(CommandRunnerImpl.java:305) at com.sun.enterprise.v3.admin.CommandRunnerImpl.doCommand(CommandRunnerImpl.java:320) at com.sun.enterprise.v3.admin.CommandRunnerImpl.doCommand(CommandRunnerImpl.java:1176) at com.sun.enterprise.v3.admin.CommandRunnerImpl.access$900(CommandRunnerImpl.java:83) at com.sun.enterprise.v3.admin.CommandRunnerImpl$ExecutionContext.execute(CommandRunnerImpl.java:1235) at com.sun.enterprise.v3.admin.CommandRunnerImpl$ExecutionContext.execute(CommandRunnerImpl.java:1224) at com.sun.enterprise.v3.admin.AdminAdapter.doCommand(AdminAdapter.java:365) at com.sun.enterprise.v3.admin.AdminAdapter.service(AdminAdapter.java:204) at com.sun.grizzly.tcp.http11.GrizzlyAdapter.service(GrizzlyAdapter.java:166) at com.sun.enterprise.v3.server.HK2Dispatcher.dispath(HK2Dispatcher.java:100) at com.sun.enterprise.v3.services.impl.ContainerMapper.service(ContainerMapper.java:245) at com.sun.grizzly.http.ProcessorTask.invokeAdapter(ProcessorTask.java:791) at com.sun.grizzly.http.ProcessorTask.doProcess(ProcessorTask.java:693) at com.sun.grizzly.http.ProcessorTask.process(ProcessorTask.java:954) at com.sun.grizzly.http.DefaultProtocolFilter.execute(DefaultProtocolFilter.java:170) at com.sun.grizzly.DefaultProtocolChain.executeProtocolFilter(DefaultProtocolChain.java:135) at com.sun.grizzly.DefaultProtocolChain.execute(DefaultProtocolChain.java:102) at com.sun.grizzly.DefaultProtocolChain.execute(DefaultProtocolChain.java:88) at com.sun.grizzly.http.HttpProtocolChain.execute(HttpProtocolChain.java:76) at com.sun.grizzly.ProtocolChainContextTask.doCall(ProtocolChainContextTask.java:53) at com.sun.grizzly.SelectionKeyContextTask.call(SelectionKeyContextTask.java:57) at com.sun.grizzly.ContextTask.run(ContextTask.java:69) at com.sun.grizzly.util.AbstractThreadPool$Worker.doWork(AbstractThreadPool.java:330) at com.sun.grizzly.util.AbstractThreadPool$Worker.run(AbstractThreadPool.java:309) at java.lang.Thread.run(Thread.java:662) Caused by: org.xml.sax.SAXParseException: cvc-complex-type.2.4.a: Invalid content was found starting with element 'display-name'. One of '{"http://java.sun.com/xml/ns/javaee":servlet-class, "http://java.sun.com/xml/ns/javaee":jsp-file, "http://java.sun.com/xml/ns/javaee":init-param, "http://java.sun.com/xml/ns/javaee":load-on-startup, "http://java.sun.com/xml/ns/javaee":enabled, "http://java.sun.com/xml/ns/javaee":async-supported, "http://java.sun.com/xml/ns/javaee":run-as, "http://java.sun.com/xml/ns/javaee":security-role-ref, "http://java.sun.com/xml/ns/javaee":multipart-config}' is expected. 
at com.sun.enterprise.deployment.io.DeploymentDescriptorFile.read(DeploymentDescriptorFile.java:304) at com.sun.enterprise.deployment.io.DeploymentDescriptorFile.read(DeploymentDescriptorFile.java:225) at com.sun.enterprise.deployment.archivist.Archivist.readStandardDeploymentDescriptor(Archivist.java:614) at com.sun.enterprise.deployment.archivist.Archivist.readDeploymentDescriptors(Archivist.java:366) at com.sun.enterprise.deployment.archivist.Archivist.open(Archivist.java:238) at com.sun.enterprise.deployment.archivist.Archivist.open(Archivist.java:247) at com.sun.enterprise.deployment.archivist.Archivist.open(Archivist.java:208) at com.sun.enterprise.deployment.archivist.ApplicationFactory.openArchive(ApplicationFactory.java:148) at org.glassfish.javaee.core.deployment.DolProvider.load(DolProvider.java:162) ... 31 more Caused by: org.xml.sax.SAXParseException: cvc-complex-type.2.4.a: Invalid content was found starting with element 'display-name'. One of '{"http://java.sun.com/xml/ns/javaee":servlet-class, "http://java.sun.com/xml/ns/javaee":jsp-file, "http://java.sun.com/xml/ns/javaee":init-param, "http://java.sun.com/xml/ns/javaee":load-on-startup, "http://java.sun.com/xml/ns/javaee":enabled, "http://java.sun.com/xml/ns/javaee":async-supported, "http://java.sun.com/xml/ns/javaee":run-as, "http://java.sun.com/xml/ns/javaee":security-role-ref, "http://java.sun.com/xml/ns/javaee":multipart-config}' is expected. at com.sun.org.apache.xerces.internal.util.ErrorHandlerWrapper.createSAXParseException(ErrorHandlerWrapper.java:195) at com.sun.org.apache.xerces.internal.util.ErrorHandlerWrapper.error(ErrorHandlerWrapper.java:131) at com.sun.org.apache.xerces.internal.impl.XMLErrorReporter.reportError(XMLErrorReporter.java:384) at com.sun.org.apache.xerces.internal.impl.XMLErrorReporter.reportError(XMLErrorReporter.java:318) at com.sun.org.apache.xerces.internal.impl.xs.XMLSchemaValidator$XSIErrorReporter.reportError(XMLSchemaValidator.java:417) at com.sun.org.apache.xerces.internal.impl.xs.XMLSchemaValidator.reportSchemaError(XMLSchemaValidator.java:3182) at com.sun.org.apache.xerces.internal.impl.xs.XMLSchemaValidator.handleStartElement(XMLSchemaValidator.java:1806) at com.sun.org.apache.xerces.internal.impl.xs.XMLSchemaValidator.startElement(XMLSchemaValidator.java:705) at com.sun.org.apache.xerces.internal.impl.XMLNSDocumentScannerImpl.scanStartElement(XMLNSDocumentScannerImpl.java:400) at com.sun.org.apache.xerces.internal.impl.XMLDocumentFragmentScannerImpl$FragmentContentDriver.next(XMLDocumentFragmentScannerImpl.java:2755) at com.sun.org.apache.xerces.internal.impl.XMLDocumentScannerImpl.next(XMLDocumentScannerImpl.java:648) at com.sun.org.apache.xerces.internal.impl.XMLNSDocumentScannerImpl.next(XMLNSDocumentScannerImpl.java:140) at com.sun.org.apache.xerces.internal.impl.XMLDocumentFragmentScannerImpl.scanDocument(XMLDocumentFragmentScannerImpl.java:511) at com.sun.org.apache.xerces.internal.parsers.XML11Configuration.parse(XML11Configuration.java:808) at com.sun.org.apache.xerces.internal.parsers.XML11Configuration.parse(XML11Configuration.java:737) at com.sun.org.apache.xerces.internal.parsers.XMLParser.parse(XMLParser.java:119) at com.sun.org.apache.xerces.internal.parsers.AbstractSAXParser.parse(AbstractSAXParser.java:1205) at com.sun.org.apache.xerces.internal.jaxp.SAXParserImpl$JAXPSAXParser.parse(SAXParserImpl.java:522) at javax.xml.parsers.SAXParser.parse(SAXParser.java:395) at com.sun.enterprise.deployment.io.DeploymentDescriptorFile.read(DeploymentDescriptorFile.java:298) ... 
39 more
    ```

    Any solution for this?
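
    The schema violation is an ordering problem: in the Java EE web-app schema, <display-name> belongs to the description group that must appear before <servlet-name> inside <servlet>, so a <display-name> placed after <servlet-name>/<servlet-class> (line 19 of this web.xml) is rejected with exactly this cvc-complex-type message. A minimal sketch of a <servlet> entry in the order the schema accepts (the servlet and class names here are placeholders, not from the post):

    ```xml
<servlet>
    <!-- description, display-name and icon must come before servlet-name -->
    <display-name>Send Mail</display-name>
    <servlet-name>SendMailServlet</servlet-name>
    <servlet-class>com.example.SendMailServlet</servlet-class>
    <load-on-startup>1</load-on-startup>
</servlet>
    ```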


  • Could not synchronize database state with session

    - by user359427
    Hello all, I'm having trouble trying to persist an entity whose ID is a generated value. At persistence time, this entity (A) has to persist another entity (B) in cascade. The relationship between A and B is OneToMany, and the related property in B is part of a composite key. I'm using Eclipse, the JBoss runtime, and JPA/Hibernate. Here is my code.

    Entity A:

    ```
@Entity
public class Cambios implements Serializable {
    private static final long serialVersionUID = 1L;

    @SequenceGenerator(name="CAMBIOS_GEN", sequenceName="CAMBIOS_SEQ", allocationSize=1)
    @Id
    @GeneratedValue(strategy=GenerationType.SEQUENCE, generator="CAMBIOS_GEN")
    @Column(name="ID_CAMBIO")
    private Long idCambio;

    //bi-directional many-to-one association to ObjetosCambio
    @OneToMany(cascade={CascadeType.PERSIST}, mappedBy="cambios")
    private List<ObjetosCambio> objetosCambioList;

    public Cambios() {
    }
    ...
}
    ```

    Entity B:

    ```
@Entity
@Table(name="OBJETOS_CAMBIO")
public class ObjetosCambio implements Serializable {
    private static final long serialVersionUID = 1L;

    @EmbeddedId
    private ObjetosCambioPK id;

    //bi-directional many-to-one association to Cambios
    @ManyToOne
    @JoinColumn(name="ID_CAMBIO", insertable=false, updatable=false)
    private Cambios cambios;

    //bi-directional many-to-one association to Objetos
    @ManyToOne
    @JoinColumn(name="ID_OBJETO", insertable=false, updatable=false)
    private Objetos objetos;

    public ObjetosCambio() {
    }
    ...
    ```

    Entity B PK:

    ```
@Embeddable
public class ObjetosCambioPK implements Serializable {
    //default serial version id, required for serializable classes.
    private static final long serialVersionUID = 1L;

    @Column(name="ID_OBJETO")
    private Long idObjeto;

    @Column(name="ID_CAMBIO")
    private Long idCambio;

    public ObjetosCambioPK() {
    }
    ```

    Client:

    ```
public String generarCambio() {
    ServiceLocator serviceLocator = null;
    try {
        serviceLocator = serviceLocator.getInstance();
        FachadaLocal tcLocal;
        tcLocal = (FachadaLocal) serviceLocator.getFacadeService("java:comp/env/negocio/Fachada");
        Cambios cambio = new Cambios();
        Iterator it = objetosLocal.iterator(); // OBJETOSLOCAL IS ALREADY POPULATED OUTSIDE OF THIS METHOD
        List<ObjetosCambio> ocList = new ArrayList();
        while (it.hasNext()) {
            Objetos objeto = (Objetos) it.next();
            ObjetosCambio objetosCambio = new ObjetosCambio();
            objetosCambio.setCambios(cambio); // AT THIS TIME THIS "CAMBIO" DOES NOT HAVE ITS ID, ITS SUPPOSED TO BE GENERATED AT PERSISTENCE TIME
            ObjetosCambioPK ocPK = new ObjetosCambioPK();
            ocPK.setIdObjeto(objeto.getIdObjeto());
            objetosCambio.setId(ocPK);
            ocList.add(objetosCambio);
        }
        cambio.setObjetosCambioList(ocList);
        tcLocal.persistEntity(cambio);
        return "exito";
    } catch (NamingException e) {
        // TODO
        e.printStackTrace();
    }
    return null;
}
    ```

    ERROR (ORA-01400 is Oracle's localized Spanish for "cannot insert NULL into"):

    ```
15:23:25,717 WARN  [JDBCExceptionReporter] SQL Error: 1400, SQLState: 23000
15:23:25,717 ERROR [JDBCExceptionReporter] ORA-01400: no se puede realizar una inserción NULL en ("CDC"."OBJETOS_CAMBIO"."ID_CAMBIO")
15:23:25,717 WARN  [JDBCExceptionReporter] SQL Error: 1400, SQLState: 23000
15:23:25,717 ERROR [JDBCExceptionReporter] ORA-01400: no se puede realizar una inserción NULL en ("CDC"."OBJETOS_CAMBIO"."ID_CAMBIO")
15:23:25,717 ERROR [AbstractFlushingEventListener] Could not synchronize database state with session
org.hibernate.exception.ConstraintViolationException: Could not execute JDBC batch update
	at org.hibernate.exception.SQLStateConverter.convert(SQLStateConverter.java:94)
	at org.hibernate.exception.JDBCExceptionHelper.convert(JDBCExceptionHelper.java:66)
	at org.hibernate.jdbc.AbstractBatcher.executeBatch(AbstractBatcher.java:275)
	at
org.hibernate.engine.ActionQueue.executeActions(ActionQueue.java:266) at org.hibernate.engine.ActionQueue.executeActions(ActionQueue.java:167) at org.hibernate.event.def.AbstractFlushingEventListener.performExecutions(AbstractFlushingEventListener.java:321) at org.hibernate.event.def.DefaultFlushEventListener.onFlush(DefaultFlushEventListener.java:50) at org.hibernate.impl.SessionImpl.flush(SessionImpl.java:1027) at org.hibernate.impl.SessionImpl.managedFlush(SessionImpl.java:365) at org.hibernate.ejb.AbstractEntityManagerImpl$1.beforeCompletion(AbstractEntityManagerImpl.java:504) at com.arjuna.ats.internal.jta.resources.arjunacore.SynchronizationImple.beforeCompletion(SynchronizationImple.java:101) at com.arjuna.ats.arjuna.coordinator.TwoPhaseCoordinator.beforeCompletion(TwoPhaseCoordinator.java:269) at com.arjuna.ats.arjuna.coordinator.TwoPhaseCoordinator.end(TwoPhaseCoordinator.java:89) at com.arjuna.ats.arjuna.AtomicAction.commit(AtomicAction.java:177) at com.arjuna.ats.internal.jta.transaction.arjunacore.TransactionImple.commitAndDisassociate(TransactionImple.java:1423) at com.arjuna.ats.internal.jta.transaction.arjunacore.BaseTransaction.commit(BaseTransaction.java:137) at com.arjuna.ats.jbossatx.BaseTransactionManagerDelegate.commit(BaseTransactionManagerDelegate.java:75) at org.jboss.aspects.tx.TxPolicy.endTransaction(TxPolicy.java:170) at org.jboss.aspects.tx.TxPolicy.invokeInOurTx(TxPolicy.java:87) at org.jboss.aspects.tx.TxInterceptor$Required.invoke(TxInterceptor.java:190) at org.jboss.aop.joinpoint.MethodInvocation.invokeNext(MethodInvocation.java:102) at org.jboss.aspects.tx.TxPropagationInterceptor.invoke(TxPropagationInterceptor.java:76) at org.jboss.aop.joinpoint.MethodInvocation.invokeNext(MethodInvocation.java:102) at org.jboss.ejb3.tx.NullInterceptor.invoke(NullInterceptor.java:42) at org.jboss.aop.joinpoint.MethodInvocation.invokeNext(MethodInvocation.java:102) at org.jboss.ejb3.security.Ejb3AuthenticationInterceptorv2.invoke(Ejb3AuthenticationInterceptorv2.java:186) at org.jboss.aop.joinpoint.MethodInvocation.invokeNext(MethodInvocation.java:102) at org.jboss.ejb3.ENCPropagationInterceptor.invoke(ENCPropagationInterceptor.java:41) at org.jboss.aop.joinpoint.MethodInvocation.invokeNext(MethodInvocation.java:102) at org.jboss.ejb3.BlockContainerShutdownInterceptor.invoke(BlockContainerShutdownInterceptor.java:67) at org.jboss.aop.joinpoint.MethodInvocation.invokeNext(MethodInvocation.java:102) at org.jboss.aspects.currentinvocation.CurrentInvocationInterceptor.invoke(CurrentInvocationInterceptor.java:67) at org.jboss.aop.joinpoint.MethodInvocation.invokeNext(MethodInvocation.java:102) at org.jboss.ejb3.session.SessionSpecContainer.invoke(SessionSpecContainer.java:176) at org.jboss.ejb3.session.SessionSpecContainer.invoke(SessionSpecContainer.java:216) at org.jboss.ejb3.proxy.impl.handler.session.SessionProxyInvocationHandlerBase.invoke(SessionProxyInvocationHandlerBase.java:207) at org.jboss.ejb3.proxy.impl.handler.session.SessionProxyInvocationHandlerBase.invoke(SessionProxyInvocationHandlerBase.java:164) at $Proxy298.persistEntity(Unknown Source) at backing.SolicitudCambio.generarCambio(SolicitudCambio.java:521) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source) at java.lang.reflect.Method.invoke(Unknown Source) at com.sun.faces.el.MethodBindingImpl.invoke(MethodBindingImpl.java:146) at 
com.sun.faces.application.ActionListenerImpl.processAction(ActionListenerImpl.java:92)
	at javax.faces.component.UICommand.broadcast(UICommand.java:332)
	at javax.faces.component.UIViewRoot.broadcastEvents(UIViewRoot.java:287)
	at javax.faces.component.UIViewRoot.processApplication(UIViewRoot.java:401)
	at com.sun.faces.lifecycle.InvokeApplicationPhase.execute(InvokeApplicationPhase.java:95)
	at com.sun.faces.lifecycle.LifecycleImpl.phase(LifecycleImpl.java:245)
	at com.sun.faces.lifecycle.LifecycleImpl.execute(LifecycleImpl.java:110)
	at javax.faces.webapp.FacesServlet.service(FacesServlet.java:213)
	at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:290)
	at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206)
	at org.apache.myfaces.webapp.filter.ExtensionsFilter.doFilter(ExtensionsFilter.java:301)
	at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:235)
	at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206)
	at org.jboss.web.tomcat.filters.ReplyHeaderFilter.doFilter(ReplyHeaderFilter.java:96)
	at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:235)
	at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206)
	at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:235)
	at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:191)
	at org.jboss.web.tomcat.security.SecurityAssociationValve.invoke(SecurityAssociationValve.java:190)
	at org.jboss.web.tomcat.security.JaccContextValve.invoke(JaccContextValve.java:92)
	at org.jboss.web.tomcat.security.SecurityContextEstablishmentValve.process(SecurityContextEstablishmentValve.java:126)
	at org.jboss.web.tomcat.security.SecurityContextEstablishmentValve.invoke(SecurityContextEstablishmentValve.java:70)
	at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:127)
	at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:102)
	at org.jboss.web.tomcat.service.jca.CachedConnectionValve.invoke(CachedConnectionValve.java:158)
	at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:109)
	at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:330)
	at org.apache.coyote.http11.Http11Processor.process(Http11Processor.java:829)
	at org.apache.coyote.http11.Http11Protocol$Http11ConnectionHandler.process(Http11Protocol.java:598)
	at org.apache.tomcat.util.net.JIoEndpoint$Worker.run(JIoEndpoint.java:447)
	at java.lang.Thread.run(Unknown Source)
Caused by: java.sql.BatchUpdateException: ORA-01400: no se puede realizar una inserción NULL en ("CDC"."OBJETOS_CAMBIO"."ID_CAMBIO")
    ```

    Thanks in advance! JM.-
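
    The insert fails because ObjetosCambioPK.idCambio is never populated: the @JoinColumn on the association is insertable=false/updatable=false, so Hibernate writes only the embedded id, and the sequence value generated for Cambios is never copied into it. Two common fixes are to persist the Cambios first and copy its generated id into each child's PK by hand, or, if JPA 2.0 is available, to let the provider derive that part of the PK from the association via @MapsId. A minimal sketch of the @MapsId variant (assumes JPA 2.0; field names follow the post):

    ```java
@Entity
@Table(name = "OBJETOS_CAMBIO")
public class ObjetosCambio implements Serializable {

    @EmbeddedId
    private ObjetosCambioPK id;

    // @MapsId("idCambio") tells the provider to fill id.idCambio from the
    // generated key of the associated Cambios row at flush time, so the
    // insertable=false/updatable=false join column is no longer needed.
    @MapsId("idCambio")
    @ManyToOne
    @JoinColumn(name = "ID_CAMBIO")
    private Cambios cambios;
}
    ```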


  • memory leak error when using an iterator

    - by Adnane Jaafari
    Please, can anyone explain this error? I get it while using an iterator in my method:

    ```
public void createDemandeP() {
    if (demandep.getDateDebut().after(demandep.getDateFin())) {
        FacesContext.getCurrentInstance().addMessage(null,
                new FacesMessage(FacesMessage.SEVERITY_WARN, "Attention aux dates",
                        "la date de debut doit être avant la date de fin!"));
    } else if (demandep.getDateDebut().before(demandep.getDateFin())) {
        List<DemandeP> list = new ArrayList<DemandeP>();
        list.addAll(chaletService.getChaletBylibelle(chaletChoisi).get(0).getListDemandesP());
        Iterator<DemandeP> it = list.iterator();
        DemandeP d = it.next();
        while (it.hasNext()) {
            if ((d.getDateDebut().compareTo(demandep.getDateDebut()) == 0)
                    || (d.getDateFin().compareTo(demandep.getDateDebut()) == 0)
                    || (d.getDateFin().compareTo(demandep.getDateFin()) == 0)
                    || (d.getDateDebut().compareTo(demandep.getDateDebut()) == 0)
                    || (d.getDateDebut().before(demandep.getDateDebut()) && d.getDateFin().after(demandep.getDateFin()))
                    || (d.getDateDebut().before(demandep.getDateFin()) && d.getDateDebut().after(demandep.getDateDebut()))
                    || (d.getDateFin().after(demandep.getDateDebut()) && d.getDateFin().before(demandep.getDateFin()))) {
                FacesContext.getCurrentInstance().getMessageList().clear();
                FacesContext.getCurrentInstance().addMessage(null,
                        new FacesMessage(FacesMessage.SEVERITY_FATAL, "Periode Ou chalet indisponicle ",
                                "Veillez choisir une autre marge de date !"));
            }
        }
    } else {
        demandep.setEtat("En traitement");
        DateFormat dateFormat = new SimpleDateFormat("dd/MM/yyyy");
        Date date = new Date();
        try {
            demandep.setDateDemande(dateFormat.parse(dateFormat.format(date)));
        } catch (ParseException e) {
            System.out.println("errooor date");
            e.printStackTrace();
        }
        nameUser = auth.getName();
        // System.out.println(nameUser);
        adherent = utilisateurService.findAdherentByNom(nameUser).get(0);
        demandep.setUtilisateur(adherent);
        // System.out.println(chaletService.getChaletBylibelle(chaletChoisi).get(0).getLibelle());
        demandep.setChalet(chaletService.getChaletBylibelle(chaletChoisi).get(0));
        demandep.setNouvelleDemande(true);
        demandePService.ajouterDemandeP(demandep);
    }
}
    ```

    The log (the French INFO line reports that the context reload has started):

    ```
oct. 23, 2013 7:19:30 PM org.apache.catalina.core.StandardContext reload
INFO: Le rechargement du contexte [/ONICLFINAL] a démarré
oct. 23, 2013 7:19:30 PM org.apache.catalina.core.StandardWrapper unload
INFO: Waiting for 1 instance(s) to be deallocated
oct. 23, 2013 7:19:31 PM org.apache.catalina.core.StandardWrapper unload
INFO: Waiting for 1 instance(s) to be deallocated
oct. 23, 2013 7:19:32 PM org.apache.catalina.core.StandardWrapper unload
INFO: Waiting for 1 instance(s) to be deallocated
oct. 23, 2013 7:19:32 PM org.apache.catalina.core.ApplicationContext log
INFO: Closing Spring root WebApplicationContext
oct. 23, 2013 7:19:32 PM org.apache.catalina.loader.WebappClassLoader clearReferencesJdbc
SEVERE: The web application [/ONICLFINAL] registered the JDBC driver [com.mysql.jdbc.Driver] but failed to unregister it when the web application was stopped. To prevent a memory leak, the JDBC Driver has been forcibly unregistered.
oct. 23, 2013 7:19:32 PM org.apache.catalina.loader.WebappClassLoader clearReferencesThreads
SEVERE: The web application [/ONICLFINAL] appears to have started a thread named [MySQL Statement Cancellation Timer] but has failed to stop it. This is very likely to create a memory leak. oct.
23, 2013 7:19:32 PM org.apache.catalina.loader.WebappClassLoader clearReferencesThreads SE VERE: The web application [/ONICLFINAL] is still processing a request that has yet to finish. This is very likely to create a memory leak. You can control the time allowed for requests to finish by using the unloadDelay attribute of the standard Context implementation. oct. 23, 2013 7:19:32 PM org.apache.catalina.loader.WebappClassLoader clearThreadLocalMap SEVERE: The web application [/ONICLFINAL] created a ThreadLocal with key of type [org.springframework.core.NamedThreadLocal] (value [Hibernate Sessions registered for deferred close]) and a value of type [java.util.HashMap] (value [{org.hibernate.impl.SessionFactoryImpl@f6e256=[SessionImpl(PersistenceContext[entityKeys=[EntityKey[bo.DemandeP#1], EntityKey[bo.Utilisateur#3], EntityKey[bo.Chalet#1], EntityKey[bo.Role#2], EntityKey[bo.DemandeP#2]],collectionKeys=[CollectionKey[bo.Role.ListeUsers#2], CollectionKey[bo.Chalet.listPeriodes#1], CollectionKey[bo.Utilisateur.demandes#3], CollectionKey[bo.Utilisateur.demandesP#3], CollectionKey[bo.Chalet.listDemandesP#1]]];ActionQueue[insertions=[] updates=[] deletions=[] collectionCreations=[] collectionRemovals=[] collectionUpdates=[]])]}]) but failed to remove it when the web application was stopped. This is very likely to create a memory leak. oct. 23, 2013 7:19:32 PM org.apache.catalina.loader.WebappClassLoader clearThreadLocalMap SEVERE: The web application [/ONICLFINAL] created a ThreadLocal with key of type [org.springframework.core.NamedThreadLocal] (value [Request attributes]) and a value of type [org.springframework.web.context.request.ServletRequestAttributes] (value [org.apache.catalina.connector.RequestFacade@17f3488]) but failed to remove it when the web application was stopped. This is very likely to create a memory leak. oct. 23, 2013 7:19:32 PM org.apache.catalina.loader.WebappClassLoader clearThreadLocalMap SEVERE: The web application [/ONICLFINAL] created a ThreadLocal with key of type [java.lang.ThreadLocal] (value [java.lang.ThreadLocal@51f78b]) and a value of type [org.springframework.security.core.context.SecurityContextImpl] (value [org.springframework.security.core.context.SecurityContextImpl@8e463c8b: Authentication: org.springframework.security.authentication.UsernamePasswordAuthenticationToken@8e463c8b: Principal: org.springframework.security.core.userdetails.User@311aa119: Username: maatouf; Password: [PROTECTED]; Enabled: true; AccountNonExpired: true; credentialsNonExpired: true; AccountNonLocked: true; Granted Authorities: ROLE_ADHER; Credentials: [PROTECTED]; Authenticated: true; Details: org.springframework.security.web.authentication.WebAuthenticationDetails@ffff4c9c: RemoteIpAddress: 0:0:0:0:0:0:0:1; SessionId: 14CD5D4E8E0E3AEB0367AB7115038FED; Granted Authorities: ROLE_ADHER]) but failed to remove it when the web application was stopped. This is very likely to create a memory leak. oct. 23, 2013 7:19:32 PM org.apache.catalina.loader.WebappClassLoader clearThreadLocalMap SEVERE: The web application [/ONICLFINAL] created a ThreadLocal with key of type [java.lang.ThreadLocal] (value [java.lang.ThreadLocal@152e9b7]) and a value of type [net.sf.cglib.proxy.Callback[]] (value [[Lnet.sf.cglib.proxy.Callback;@6e1f4c]) but failed to remove it when the web application was stopped. This is very likely to create a memory leak. oct. 
23, 2013 7:19:32 PM org.apache.catalina.loader.WebappClassLoader clearThreadLocalMap
SEVERE: The web application [/ONICLFINAL] created a ThreadLocal with key of type [javax.faces.context.FacesContext$1] (value [javax.faces.context.FacesContext$1@9ecc6d]) and a value of type [com.sun.faces.context.FacesContextImpl] (value [com.sun.faces.context.FacesContextImpl@1c8bbed]) but failed to remove it when the web application was stopped. This is very likely to create a memory leak.
oct. 23, 2013 7:19:32 PM org.apache.catalina.loader.WebappClassLoader clearThreadLocalMap
SEVERE: The web application [/ONICLFINAL] created a ThreadLocal with key of type [java.lang.ThreadLocal] (value [java.lang.ThreadLocal@1a9e75f]) and a value of type [com.sun.faces.context.FacesContextImpl] (value [com.sun.faces.context.FacesContextImpl@1c8bbed]) but failed to remove it when the web application was stopped. This is very likely to create a memory leak.
oct. 23, 2013 7:19:32 PM org.apache.catalina.loader.WebappClassLoader clearThreadLocalMap
SEVERE: The web application [/ONICLFINAL] created a ThreadLocal with key of type [org.springframework.core.NamedThreadLocal] (value [Locale context]) and a value of type [org.springframework.context.i18n.SimpleLocaleContext] (value [fr_FR]) but failed to remove it when the web application was stopped. This is very likely to create a memory leak.
oct. 23, 2013 7:19:32 PM org.apache.catalina.loader.WebappClassLoader clearThreadLocalMap
SEVERE: The web application [/ONICLFINAL] created a ThreadLocal with key of type [com.sun.faces.application.ApplicationAssociate$1] (value [com.sun.faces.application.ApplicationAssociate$1@195266b]) and a value of type [com.sun.faces.application.ApplicationAssociate] (value [com.sun.faces.application.ApplicationAssociate@10d595c]) but failed to remove it when the web application was stopped. This is very likely to create a memory leak.
oct. 23, 2013 7:19:33 PM org.apache.catalina.loader.WebappClassLoader validateJarFile
INFO: validateJarFile(D:\newWorkSpace\.metadata\.plugins\org.eclipse.wst.server.core\tmp0\wtpwebapps\ONICLFINAL\WEB-INF\lib\servlet-api-2.5.jar) - jar not loaded. See Servlet Spec 2.3, section 9.7.2. Offending class: javax/servlet/Servlet.class
oct. 23, 2013 7:19:33 PM org.apache.catalina.core.ApplicationContext log
    ```
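
    A likely reason Tomcat reports "still processing a request that has yet to finish": the while loop above never advances the iterator. it.next() is called once before the loop, so d never changes and while (it.hasNext()) spins forever as soon as the list has more than one element; the request never completes, and the leaked ThreadLocal warnings follow when the context reloads. The call before the loop also throws NoSuchElementException on an empty list. A sketch with the iterator advanced inside the body (the long date comparison is condensed into a hypothetical overlaps() helper):

    ```java
Iterator<DemandeP> it = list.iterator();
while (it.hasNext()) {
    DemandeP d = it.next(); // advance on every pass, inside the loop
    if (overlaps(d, demandep)) { // stand-in for the original chain of date checks
        FacesContext.getCurrentInstance().addMessage(null,
                new FacesMessage(FacesMessage.SEVERITY_FATAL,
                        "Periode ou chalet indisponible",
                        "Veuillez choisir une autre marge de date !"));
        break; // one warning is enough; stop scanning
    }
}
    ```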


  • Connecting to SQL Server in PHP - Extension Error

    - by John Doe
    My PHP file:

    ```
<html>
<head>
<title>Connecting </title>
</head>
<body>
<?php
$host = "*.*.*.*";
$username = "xxx";
$password = "xxx";
$db_name = "xxx";
$db = mssql_connect($host, $username, $password) or die("Couldnt Connect");
$selected = mssql_select_db($db_name, $db) or die("Couldnt open database");
?>
</body>
</html>
    ```

    My error message is:

    ```
Fatal error: Call to undefined function mssql_connect() in C:\wamp\www\php\dbase.php on line 12
    ```

    I am using WampServer 2.0 on PHP 5.3.0. When I check the extensions, php_mssql is checked. I also checked the php.ini file to make sure it is not commented out. I have my file dbase.php saved in C:\wamp\www\php. I have tried stopping the service, closing everything, and running it again. I know the problem is that the extension file is not being included somehow. The below is copied from my php.ini file. Note: I changed all "http" to "/http" to avoid posting links.

    ```
;;;;;;;;;;;;;;;;;;;;;;;;;
; Paths and Directories ;
;;;;;;;;;;;;;;;;;;;;;;;;;
; UNIX: "/path1:/path2"
;include_path = ".:/php/includes"
; Windows: "\path1;\path2"
include_path = "C:\wamp\bin\php\php5.3.0\ext"
;
; PHP's default setting for include_path is ".;/path/to/php/pear"
; /http://php.net/include-path

; The root of the PHP pages, used only if nonempty.
; if PHP was not compiled with FORCE_REDIRECT, you SHOULD set doc_root
; if you are running php as a CGI under any web server (other than IIS)
; see documentation for security issues. The alternate is to use the
; cgi.force_redirect configuration below
; /http://php.net/doc-root
doc_root =

; The directory under which PHP opens the script using /~username used only
; if nonempty.
; /http://php.net/user-dir
user_dir =

; Directory in which the loadable extensions (modules) reside.
; /http://php.net/extension-dir
; extension_dir = "./"
; On windows:
; extension_dir = "ext"
extension_dir = "c:/wamp/bin/php/php5.3.0/ext/"

; Whether or not to enable the dl() function. The dl() function does NOT work
; properly in multithreaded servers, such as IIS or Zeus, and is automatically
; disabled on them.
; /http://php.net/enable-dl
enable_dl = Off

; cgi.force_redirect is necessary to provide security running PHP as a CGI under
; most web servers. Left undefined, PHP turns this on by default. You can
; turn it off here AT YOUR OWN RISK
; You CAN safely turn this off for IIS, in fact, you MUST.
; /http://php.net/cgi.force-redirect
;cgi.force_redirect = 1

; if cgi.nph is enabled it will force cgi to always sent Status: 200 with
; every request. PHP's default behavior is to disable this feature.
;cgi.nph = 1

; if cgi.force_redirect is turned on, and you are not running under Apache or Netscape
; (iPlanet) web servers, you MAY need to set an environment variable name that PHP
; will look for to know it is OK to continue execution. Setting this variable MAY
; cause security issues, KNOW WHAT YOU ARE DOING FIRST.
; /http://php.net/cgi.redirect-status-env
;cgi.redirect_status_env =
;
; cgi.fix_pathinfo provides real PATH_INFO/PATH_TRANSLATED support for CGI. PHP's
; previous behaviour was to set PATH_TRANSLATED to SCRIPT_FILENAME, and to not grok
; what PATH_INFO is. For more information on PATH_INFO, see the cgi specs. Setting
; this to 1 will cause PHP CGI to fix its paths to conform to the spec. A setting
; of zero causes PHP to behave as before. Default is 1. You should fix your scripts
; to use SCRIPT_FILENAME rather than PATH_TRANSLATED.
; /http://php.net/cgi.fix-pathinfo
;cgi.fix_pathinfo=1

; FastCGI under IIS (on WINNT based OS) supports the ability to impersonate
; security tokens of the calling client. This allows IIS to define the
; security context that the request runs under. mod_fastcgi under Apache
; does not currently support this feature (03/17/2002)
; Set to 1 if running under IIS. Default is zero.
; /http://php.net/fastcgi.impersonate
;fastcgi.impersonate = 1;

; Disable logging through FastCGI connection. PHP's default behavior is to enable
; this feature.
;fastcgi.logging = 0

; cgi.rfc2616_headers configuration option tells PHP what type of headers to
; use when sending HTTP response code. If it's set 0 PHP sends Status: header that
; is supported by Apache. When this option is set to 1 PHP will send
; RFC2616 compliant header.
; Default is zero.
; /http://php.net/cgi.rfc2616-headers
;cgi.rfc2616_headers = 0

;;;;;;;;;;;;;;;;
; File Uploads ;
;;;;;;;;;;;;;;;;
; Whether to allow HTTP file uploads.
; /http://php.net/file-uploads
file_uploads = On

; Temporary directory for HTTP uploaded files (will use system default if not
; specified).
; /http://php.net/upload-tmp-dir
upload_tmp_dir = "c:/wamp/tmp"

; Maximum allowed size for uploaded files.
; /http://php.net/upload-max-filesize
upload_max_filesize = 2M
    ```

    Also, my php.ini file is saved in: C:\wamp\bin\apache\Apache2.2.11\bin
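
    Since the fatal error says mssql_connect() is undefined, the extension never actually loaded, whatever the WAMP tray menu shows. Note that include_path has nothing to do with extension loading; what matters is extension_dir plus an uncommented extension=php_mssql.dll line in the php.ini that the Apache SAPI really reads (often a different file from the CLI's). A minimal diagnostic sketch (php_ini_loaded_file() requires PHP >= 5.2.4):

    ```php
<?php
// Which php.ini did this SAPI load? Under WAMP the Apache module and the
// CLI frequently read different files.
echo 'Loaded php.ini: ', php_ini_loaded_file(), PHP_EOL;

// Where does PHP look for extension DLLs?
echo 'extension_dir: ', ini_get('extension_dir'), PHP_EOL;

// false here means php_mssql.dll was not loaded: the extension line is
// missing/commented in the loaded ini, or the DLL failed to initialise
// (check Apache's error log from startup).
var_dump(extension_loaded('mssql'));
    ```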



  • Connection to WebLogic Server through ServiceMix fails

    - by bertolami
    I connect from an OSGi bundle deployed on Apache ServiceMix to a WebLogic Server to call some EJBs. The lookup happens with JNDI. In my unit test everything works fine, but when I deploy the bundle on ServiceMix, a CommunicationException is raised during JNDI ContextFactory initialisation.

    The class that performs the lookup during initialisation:

    ```
public DummyJndiLookup(JndiTemplate jndiTemplate) {
    try {
        String securityServiceURL = "ejb/xyz/Service";
        reference = jndiTemplate.lookup(securityServiceURL);
        log.info("Successfully connected to JNDI Server: " + reference);
    } catch (Throwable t) {
        throw new RuntimeException(t);
    }
}
    ```

    The beans in the Spring context:

    ```
<bean id="dummy" class="xyz.DummyJndiLookup">
    <constructor-arg ref="jndiTemplate"></constructor-arg>
</bean>

<bean id="jndiTemplate" class="org.springframework.jndi.JndiTemplate" lazy-init="true">
    <property name="environment">
        <props>
            <prop key="java.naming.factory.initial">weblogic.jndi.WLInitialContextFactory</prop>
            <prop key="java.naming.provider.url">t3://xyz:22225</prop>
            <prop key="java.naming.security.principal">weblogic</prop>
            <prop key="java.naming.security.credentials">weblogic</prop>
        </props>
    </property>
</bean>
    ```

    The resulting exception stack trace:

    ```
Caused by: javax.naming.CommunicationException [Root exception is java.net.ConnectException: t3://xyz7:22225: Bootstrap to: xyz/192.168.108.22:22225' over: 't3' got an error or timed out]
	at weblogic.jndi.internal.ExceptionTranslator.toNamingException(ExceptionTranslator.java:40)
	at weblogic.jndi.WLInitialContextFactoryDelegate.toNamingException(WLInitialContextFactoryDelegate.java:783)
	at weblogic.jndi.WLInitialContextFactoryDelegate.getInitialContext(WLInitialContextFactoryDelegate.java:365)
	at weblogic.jndi.Environment.getContext(Environment.java:315)
	at weblogic.jndi.Environment.getContext(Environment.java:285)
	at weblogic.jndi.WLInitialContextFactory.getInitialContext(WLInitialContextFactory.java:117)
	at javax.naming.spi.NamingManager.getInitialContext(NamingManager.java:667)
	at javax.naming.InitialContext.getDefaultInitCtx(InitialContext.java:288)
	at javax.naming.InitialContext.init(InitialContext.java:223)
	at javax.naming.InitialContext.<init>(InitialContext.java:197)
	at org.springframework.jndi.JndiTemplate.createInitialContext(JndiTemplate.java:137)
	at org.springframework.jndi.JndiTemplate.getContext(JndiTemplate.java:104)
	at org.springframework.jndi.JndiTemplate.execute(JndiTemplate.java:86)
	at org.springframework.jndi.JndiTemplate.lookup(JndiTemplate.java:153)
	at xyz.DummyJndiLookup.<init>(DummyJndiLookup.java:36)
	... 26 more
Caused by: java.net.ConnectException: t3://xyz:22225: Bootstrap to: xyz/192.168.108.22:22225' over: 't3' got an error or timed out
	at weblogic.rjvm.RJVMFinder.findOrCreateInternal(RJVMFinder.java:216)
	at weblogic.rjvm.RJVMFinder.findOrCreate(RJVMFinder.java:170)
	at weblogic.rjvm.ServerURL.findOrCreateRJVM(ServerURL.java:153)
	at weblogic.jndi.WLInitialContextFactoryDelegate$1.run(WLInitialContextFactoryDelegate.java:344)
	at weblogic.security.acl.internal.AuthenticatedSubject.doAs(AuthenticatedSubject.java:363)
	at weblogic.security.service.SecurityManager.runAs(SecurityManager.java:147)
	at weblogic.jndi.WLInitialContextFactoryDelegate.getInitialContext(WLInitialContextFactoryDelegate.java:339)
	... 38 more
Caused by: java.rmi.ConnectException: Bootstrap to: xyz/192.168.108.22:22225' over: 't3' got an error or timed out
	at weblogic.rjvm.ConnectionManager.bootstrap(ConnectionManager.java:359)
	at weblogic.rjvm.RJVMManager.findOrCreateRemoteInternal(RJVMManager.java:251)
	at weblogic.rjvm.RJVMManager.findOrCreate(RJVMManager.java:194)
	at weblogic.rjvm.RJVMFinder.findOrCreateRemoteServer(RJVMFinder.java:238)
	at weblogic.rjvm.RJVMFinder.findOrCreateInternal(RJVMFinder.java:200)
    ```

    Any ideas what could cause the exception? Especially, why does it work in the unit test and not after having been bundled and deployed on Apache ServiceMix?

    Additional info: I dumped the thread stack traces of ServiceMix (after having removed all JNDI-related Spring stuff):

    ```
2010-03-22 16:18:23
Full thread dump Java HotSpot(TM) Server VM (11.2-b01 mixed mode):

"SpringOsgiExtenderThread-14" prio=6 tid=0x054d6400 nid=0x17c4 waiting for monitor entry [0x06f3e000..0x06f3fb14]
   java.lang.Thread.State: BLOCKED (on object monitor)
	at weblogic.rjvm.RJVMFinder.findOrCreate(RJVMFinder.java:168)
	- waiting to lock <0x595876f8> (a weblogic.rjvm.RJVMFinder)
	at weblogic.rjvm.ServerURL.findOrCreateRJVM(ServerURL.java:153)
	at weblogic.jndi.WLInitialContextFactoryDelegate.getInitialContext(WLInitialContextFactoryDelegate.java:352)
	at weblogic.jndi.Environment.getContext(Environment.java:315)
	at weblogic.jndi.Environment.getContext(Environment.java:285)
	at weblogic.jndi.WLInitialContextFactory.getInitialContext(WLInitialContextFactory.java:117)
	at javax.naming.spi.NamingManager.getInitialContext(NamingManager.java:667)
	at javax.naming.InitialContext.getDefaultInitCtx(InitialContext.java:288)
	at javax.naming.InitialContext.init(InitialContext.java:223)
	at javax.naming.InitialContext.<init>(InitialContext.java:197)
	at xyz.DummyJndiLookup.getInitialContext(DummyJndiLookup.java:62)
	at xyz.DummyJndiLookup.<init>(DummyJndiLookup.java:32)
	at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
	at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:39)
	at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27)
	at java.lang.reflect.Constructor.newInstance(Constructor.java:513)
	at org.springframework.beans.BeanUtils.instantiateClass(BeanUtils.java:100)
	at org.springframework.beans.factory.support.SimpleInstantiationStrategy.instantiate(SimpleInstantiationStrategy.java:61)
	at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.instantiateBean(AbstractAutowireCapableBeanFactory.java:877)
	at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBeanInstance(AbstractAutowireCapableBeanFactory.java:839)
	at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.doCreateBean(AbstractAutowireCapableBeanFactory.java:440)
	at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory$1.run(AbstractAutowireCapableBeanFactory.java:409)
	at java.security.AccessController.doPrivileged(Native Method)
	at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBean(AbstractAutowireCapableBeanFactory.java:380)
	at org.springframework.beans.factory.support.AbstractBeanFactory$1.getObject(AbstractBeanFactory.java:264)
	at org.springframework.beans.factory.support.DefaultSingletonBeanRegistry.getSingleton(DefaultSingletonBeanRegistry.java:222)
	- locked <0x595959c0> (a java.util.concurrent.ConcurrentHashMap)
	at
org.springframework.beans.factory.support.AbstractBeanFactory.doGetBean(AbstractBeanFactory.java:261) at org.springframework.beans.factory.support.AbstractBeanFactory.getBean(AbstractBeanFactory.java:185) at org.springframework.beans.factory.support.AbstractBeanFactory.getBean(AbstractBeanFactory.java:164) at org.springframework.beans.factory.support.DefaultListableBeanFactory.preInstantiateSingletons(DefaultListableBeanFactory.java:429) - locked <0x59598370> (a java.util.concurrent.ConcurrentHashMap) at org.springframework.context.support.AbstractApplicationContext.finishBeanFactoryInitialization(AbstractApplicationContext.java:728) at org.springframework.osgi.context.support.AbstractDelegatedExecutionApplicationContext.access$1600(AbstractDelegatedExecutionApplicationContext.java:69) at org.springframework.osgi.context.support.AbstractDelegatedExecutionApplicationContext$4.run(AbstractDelegatedExecutionApplicationContext.java:355) - locked <0x595431a8> (a java.lang.Object) at org.springframework.osgi.util.internal.PrivilegedUtils.executeWithCustomTCCL(PrivilegedUtils.java:85) at org.springframework.osgi.context.support.AbstractDelegatedExecutionApplicationContext.completeRefresh(AbstractDelegatedExecutionApplicationContext.java:320) at org.springframework.osgi.extender.internal.dependencies.startup.DependencyWaiterApplicationContextExecutor$CompleteRefreshTask.run(DependencyWaiterApplicationContextExecutor.java:136) at java.lang.Thread.run(Thread.java:619) Locked ownable synchronizers: - None "SpringOsgiExtenderThread-12" prio=6 tid=0x05465400 nid=0x14cc in Object.wait() [0x06f8e000..0x06f8fc94] java.lang.Thread.State: TIMED_WAITING (on object monitor) at java.lang.Object.wait(Native Method) - waiting on <0x595b3800> (a java.lang.Object) at weblogic.rjvm.ConnectionManager.bootstrap(ConnectionManager.java:320) - locked <0x595b3800> (a java.lang.Object) at weblogic.rjvm.RJVMManager.findOrCreateRemoteInternal(RJVMManager.java:251) - locked <0x595885b8> (a java.lang.Object) at weblogic.rjvm.RJVMManager.findOrCreate(RJVMManager.java:194) at weblogic.rjvm.RJVMFinder.findOrCreateRemoteServer(RJVMFinder.java:238) at weblogic.rjvm.RJVMFinder.findOrCreateInternal(RJVMFinder.java:200) at weblogic.rjvm.RJVMFinder.findOrCreate(RJVMFinder.java:170) - locked <0x595876f8> (a weblogic.rjvm.RJVMFinder) at weblogic.rjvm.ServerURL.findOrCreateRJVM(ServerURL.java:153) at weblogic.jndi.WLInitialContextFactoryDelegate.getInitialContext(WLInitialContextFactoryDelegate.java:352) at weblogic.jndi.Environment.getContext(Environment.java:315) at weblogic.jndi.Environment.getContext(Environment.java:285) at weblogic.jndi.WLInitialContextFactory.getInitialContext(WLInitialContextFactory.java:117) at javax.naming.spi.NamingManager.getInitialContext(NamingManager.java:667) at javax.naming.InitialContext.getDefaultInitCtx(InitialContext.java:288) at javax.naming.InitialContext.init(InitialContext.java:223) at javax.naming.InitialContext.<init>(InitialContext.java:197) at xyz.DummyJndiLookup.getInitialContext(DummyJndiLookup.java:62) at xyz.DummyJndiLookup.<init>(DummyJndiLookup.java:32) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:39) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27) at java.lang.reflect.Constructor.newInstance(Constructor.java:513) at org.springframework.beans.BeanUtils.instantiateClass(BeanUtils.java:100) at 
org.springframework.beans.factory.support.SimpleInstantiationStrategy.instantiate(SimpleInstantiationStrategy.java:61) at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.instantiateBean(AbstractAutowireCapableBeanFactory.java:877) at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBeanInstance(AbstractAutowireCapableBeanFactory.java:839) at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.doCreateBean(AbstractAutowireCapableBeanFactory.java:440) at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory$1.run(AbstractAutowireCapableBeanFactory.java:409) at java.security.AccessController.doPrivileged(Native Method) at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBean(AbstractAutowireCapableBeanFactory.java:380) at org.springframework.beans.factory.support.AbstractBeanFactory$1.getObject(AbstractBeanFactory.java:264) at org.springframework.beans.factory.support.DefaultSingletonBeanRegistry.getSingleton(DefaultSingletonBeanRegistry.java:222) - locked <0x595b3af0> (a java.util.concurrent.ConcurrentHashMap) at org.springframework.beans.factory.support.AbstractBeanFactory.doGetBean(AbstractBeanFactory.java:261) at org.springframework.beans.factory.support.AbstractBeanFactory.getBean(AbstractBeanFactory.java:185) at org.springframework.beans.factory.support.AbstractBeanFactory.getBean(AbstractBeanFactory.java:164) at org.springframework.beans.factory.support.DefaultListableBeanFactory.preInstantiateSingletons(DefaultListableBeanFactory.java:429) - locked <0x595b3b18> (a java.util.concurrent.ConcurrentHashMap) at org.springframework.context.support.AbstractApplicationContext.finishBeanFactoryInitialization(AbstractApplicationContext.java:728) at org.springframework.osgi.context.support.AbstractDelegatedExecutionApplicationContext.access$1600(AbstractDelegatedExecutionApplicationContext.java:69) at org.springframework.osgi.context.support.AbstractDelegatedExecutionApplicationContext$4.run(AbstractDelegatedExecutionApplicationContext.java:355) - locked <0x595b3be0> (a java.lang.Object) at org.springframework.osgi.util.internal.PrivilegedUtils.executeWithCustomTCCL(PrivilegedUtils.java:85) at org.springframework.osgi.context.support.AbstractDelegatedExecutionApplicationContext.completeRefresh(AbstractDelegatedExecutionApplicationContext.java:320) at org.springframework.osgi.extender.internal.dependencies.startup.DependencyWaiterApplicationContextExecutor$CompleteRefreshTask.run(DependencyWaiterApplicationContextExecutor.java:136) at java.lang.Thread.run(Thread.java:619) Locked ownable synchronizers: - None "RMI TCP Connection(idle)" daemon prio=6 tid=0x05329400 nid=0x1100 waiting on condition [0x069af000..0x069afa14] java.lang.Thread.State: TIMED_WAITING (parking) at sun.misc.Unsafe.park(Native Method) - parking to wait for <0x200a1380> (a java.util.concurrent.SynchronousQueue$TransferStack) at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:198) at java.util.concurrent.SynchronousQueue$TransferStack.awaitFulfill(SynchronousQueue.java:424) at java.util.concurrent.SynchronousQueue$TransferStack.transfer(SynchronousQueue.java:323) at java.util.conCurrent.SynchronousQueue.poll(SynchronousQueue.java:874) at java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:945) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:907) at java.lang.Thread.run(Thread.java:619) Locked ownable synchronizers: - 
None

"Timer-4" daemon prio=6 tid=0x053aa400 nid=0xfa4 in Object.wait() [0x06eef000..0x06eefc94]
   java.lang.Thread.State: WAITING (on object monitor)
        at java.lang.Object.wait(Native Method)
        - waiting on <0x59585388> (a java.util.TaskQueue)
        at java.lang.Object.wait(Object.java:485)
        at java.util.TimerThread.mainLoop(Timer.java:483)
        - locked <0x59585388> (a java.util.TaskQueue)
        at java.util.TimerThread.run(Timer.java:462)
   Locked ownable synchronizers:
        - None

"weblogic.timers.TimerThread" daemon prio=10 tid=0x05151800 nid=0x11fc in Object.wait() [0x06e9f000..0x06e9fd14]
   java.lang.Thread.State: TIMED_WAITING (on object monitor)
        at java.lang.Object.wait(Native Method)
        - waiting on <0x5959c3c0> (a weblogic.timers.internal.TimerThread)
        at weblogic.timers.internal.TimerThread$Thread.run(TimerThread.java:267)
        - locked <0x5959c3c0> (a weblogic.timers.internal.TimerThread)
   Locked ownable synchronizers:
        - None

"ExecuteThread: '4' for queue: 'default'" daemon prio=6 tid=0x04880c00 nid=0x117c in Object.wait() [0x06e4f000..0x06e4fd94]
   java.lang.Thread.State: WAITING (on object monitor)
        at java.lang.Object.wait(Native Method)
        - waiting on <0x595855a8> (a weblogic.kernel.ServerExecuteThread)
        at java.lang.Object.wait(Object.java:485)
        at weblogic.kernel.ExecuteThread.waitForRequest(ExecuteThread.java:91)
        - locked <0x595855a8> (a weblogic.kernel.ServerExecuteThread)
        at weblogic.kernel.ExecuteThread.run(ExecuteThread.java:115)
   Locked ownable synchronizers:
        - None

"ExecuteThread: '3' for queue: 'default'" daemon prio=6 tid=0x05242400 nid=0xd34 in Object.wait() [0x06dff000..0x06dffa14]
   java.lang.Thread.State: WAITING (on object monitor)
        at java.lang.Object.wait(Native Method)
        - waiting on <0x59585998> (a weblogic.kernel.ServerExecuteThread)
        at java.lang.Object.wait(Object.java:485)
        at weblogic.kernel.ExecuteThread.waitForRequest(ExecuteThread.java:91)
        - locked <0x59585998> (a weblogic.kernel.ServerExecuteThread)
        at weblogic.kernel.ExecuteThread.run(ExecuteThread.java:115)
   Locked ownable synchronizers:
        - None

"ExecuteThread: '2' for queue: 'default'" daemon prio=6 tid=0x04509800 nid=0x1600 in Object.wait() [0x06daf000..0x06dafa94]
   java.lang.Thread.State: WAITING (on object monitor)
        at java.lang.Object.wait(Native Method)
        - waiting on <0x59585c78> (a weblogic.kernel.ServerExecuteThread)
        at java.lang.Object.wait(Object.java:485)
        at weblogic.kernel.ExecuteThread.waitForRequest(ExecuteThread.java:91)
        - locked <0x59585c78> (a weblogic.kernel.ServerExecuteThread)
        at weblogic.kernel.ExecuteThread.run(ExecuteThread.java:115)
   Locked ownable synchronizers:
        - None

"ExecuteThread: '1' for queue: 'default'" daemon prio=6 tid=0x05170800 nid=0x894 in Object.wait() [0x06d5f000..0x06d5fb14]
   java.lang.Thread.State: WAITING (on object monitor)
        at java.lang.Object.wait(Native Method)
        - waiting on <0x59585f58> (a weblogic.kernel.ServerExecuteThread)
        at java.lang.Object.wait(Object.java:485)
        at weblogic.kernel.ExecuteThread.waitForRequest(ExecuteThread.java:91)
        - locked <0x59585f58> (a weblogic.kernel.ServerExecuteThread)
        at weblogic.kernel.ExecuteThread.run(ExecuteThread.java:115)
   Locked ownable synchronizers:
        - None

"ExecuteThread: '0' for queue: 'default'" daemon prio=6 tid=0x05329800 nid=0x10a8 in Object.wait() [0x06c1f000..0x06c1fb94]
   java.lang.Thread.State: WAITING (on object monitor)
        at java.lang.Object.wait(Native Method)
        - waiting on <0x59586238> (a weblogic.kernel.ServerExecuteThread)
        at java.lang.Object.wait(Object.java:485)
        at weblogic.kernel.ExecuteThread.waitForRequest(ExecuteThread.java:91)
        - locked <0x59586238> (a weblogic.kernel.ServerExecuteThread)
        at weblogic.kernel.ExecuteThread.run(ExecuteThread.java:115)
   Locked ownable synchronizers:
        - None

"Timer-3" daemon prio=6 tid=0x0484bc00 nid=0xebc waiting for monitor entry [0x06cbf000..0x06cbfa94]
   java.lang.Thread.State: BLOCKED (on object monitor)
        at org.springframework.osgi.extender.internal.dependencies.startup.DependencyWaiterApplicationContextExecutor.close(DependencyWaiterApplicationContextExecutor.java:355)
        - waiting to lock <0x595b3be0> (a java.lang.Object)
        - locked <0x595b3c48> (a java.lang.Object)
        at org.springframework.osgi.context.support.AbstractDelegatedExecutionApplicationContext.doClose(AbstractDelegatedExecutionApplicationContext.java:236)
        at org.springframework.context.support.AbstractApplicationContext.close(AbstractApplicationContext.java:794)
        - locked <0x595b4128> (a java.lang.Object)
        at org.springframework.osgi.extender.internal.activator.ContextLoaderListener$3.run(ContextLoaderListener.java:807)
        at org.springframework.osgi.extender.internal.util.concurrent.RunnableTimedExecution$MonitoredRunnable.run(RunnableTimedExecution.java:60)
        at org.springframework.scheduling.timer.DelegatingTimerTask.run(DelegatingTimerTask.java:66)
        at java.util.TimerThread.mainLoop(Timer.java:512)
        at java.util.TimerThread.run(Timer.java:462)
   Locked ownable synchronizers:
        - None

"Timer-2" daemon prio=6 tid=0x04780400 nid=0x1388 in Object.wait() [0x06c6f000..0x06c6fb14]
   java.lang.Thread.State: WAITING (on object monitor)
        at java.lang.Object.wait(Native Method)
        - waiting on <0x20783b60> (a java.util.TaskQueue)
        at java.lang.Object.wait(Object.java:485)
        at java.util.TimerThread.mainLoop(Timer.java:483)
        - locked <0x20783b60> (a java.util.TaskQueue)
        at java.util.TimerThread.run(Timer.java:462)
   Locked ownable synchronizers:
        - None

"AWT-Windows" daemon prio=6 tid=0x04028000 nid=0x83c runnable [0x06b8f000..0x06b8fb14]
   java.lang.Thread.State: RUNNABLE
        at sun.awt.windows.WToolkit.eventLoop(Native Method)
        at sun.awt.windows.WToolkit.run(WToolkit.java:291)
        at java.lang.Thread.run(Thread.java:619)
   Locked ownable synchronizers:
        - None

"Java2D Disposer" daemon prio=10 tid=0x0469c400 nid=0x1164 in Object.wait() [0x0695f000..0x0695fc14]
   java.lang.Thread.State: WAITING (on object monitor)
        at java.lang.Object.wait(Native Method)
        - waiting on <0x206f4200> (a java.lang.ref.ReferenceQueue$Lock)
        at java.lang.ref.ReferenceQueue.remove(ReferenceQueue.java:116)
        - locked <0x206f4200> (a java.lang.ref.ReferenceQueue$Lock)
        at java.lang.ref.ReferenceQueue.remove(ReferenceQueue.java:132)
        at sun.java2d.Disposer.run(Disposer.java:125)
        at java.lang.Thread.run(Thread.java:619)
   Locked ownable synchronizers:
        - None

"NioSocketAcceptor-1" prio=6 tid=0x055acc00 nid=0xf80 runnable [0x068bf000..0x068bfd94]
   java.lang.Thread.State: RUNNABLE
        at sun.nio.ch.WindowsSelectorImpl$SubSelector.poll0(Native Method)
        at sun.nio.ch.WindowsSelectorImpl$SubSelector.poll(WindowsSelectorImpl.java:274)
        at sun.nio.ch.WindowsSelectorImpl$SubSelector.access$400(WindowsSelectorImpl.java:256)
        at sun.nio.ch.WindowsSelectorImpl.doSelect(WindowsSelectorImpl.java:137)
        at sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:69)
        - locked <0x2069e820> (a sun.nio.ch.Util$1)
        - locked <0x2069e810> (a java.util.Collections$UnmodifiableSet)
        - locked <0x2069e3d8> (a sun.nio.ch.WindowsSelectorImpl)
        at sun.nio.ch.SelectorImpl.select(SelectorImpl.java:80)
        at sun.nio.ch.SelectorImpl.select(SelectorImpl.java:84)
        at org.apache.mina.transport.socket.nio.NioSocketAcceptor.select(NioSocketAcceptor.java:288)
        at org.apache.mina.core.polling.AbstractPollingIoAcceptor$Acceptor.run(AbstractPollingIoAcceptor.java:402)
        at org.apache.mina.util.NamePreservingRunnable.run(NamePreservingRunnable.java:64)
        at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
        at java.lang.Thread.run(Thread.java:619)
   Locked ownable synchronizers:
        - <0x2069e0f8> (a java.util.concurrent.locks.ReentrantLock$NonfairSync)

"RMI RenewClean-[192.168.114.60:1640]" daemon prio=6 tid=0x05312400 nid=0x1058 in Object.wait() [0x06b3f000..0x06b3fa94]
   java.lang.Thread.State: TIMED_WAITING (on object monitor)
        at java.lang.Object.wait(Native Method)
        - waiting on <0x20669858> (a java.lang.ref.ReferenceQueue$Lock)
        at java.lang.ref.ReferenceQueue.remove(ReferenceQueue.java:116)
        - locked <0x20669858> (a java.lang.ref.ReferenceQueue$Lock)
        at sun.rmi.transport.DGCClient$EndpointEntry$RenewCleanThread.run(DGCClient.java:516)
        at java.lang.Thread.run(Thread.java:619)
   Locked ownable synchronizers:
        - None

"RMI Scheduler(0)" daemon prio=6 tid=0x05132800 nid=0x146c waiting on condition [0x06aef000..0x06aefb14]
   java.lang.Thread.State: TIMED_WAITING (parking)
        at sun.misc.Unsafe.park(Native Method)
        - parking to wait for <0x200a1508> (a java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:198)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:1963)
        at java.util.concurrent.DelayQueue.take(DelayQueue.java:164)
        at java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:583)
        at java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:576)
        at java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:947)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:907)
        at java.lang.Thread.run(Thread.java:619)
   Locked ownable synchronizers:
        - None

"GC Daemon" daemon prio=2 tid=0x05678400 nid=0x166c in Object.wait() [0x06a9f000..0x06a9fc14]
   java.lang.Thread.State: TIMED_WAITING (on object monitor)
        at java.lang.Object.wait(Native Method)
        - waiting on <0x2060d790> (a sun.misc.GC$LatencyLock)
        at sun.misc.GC$Daemon.run(GC.java:100)
        - locked <0x2060d790> (a sun.misc.GC$LatencyLock)
   Locked ownable synchronizers:
        - None

"RMI Reaper" prio=6 tid=0x04fee800 nid=0x828 in Object.wait() [0x06a4f000..0x06a4fd14]
   java.lang.Thread.State: WAITING (on object monitor)
        at java.lang.Object.wait(Native Method)
        - waiting on <0x200a79c8> (a java.lang.ref.ReferenceQueue$Lock)
        at java.lang.ref.ReferenceQueue.remove(ReferenceQueue.java:116)
        - locked <0x200a79c8> (a java.lang.ref.ReferenceQueue$Lock)
        at java.lang.ref.ReferenceQueue.remove(ReferenceQueue.java:132)
        at sun.rmi.transport.ObjectTable$Reaper.run(ObjectTable.java:333)
        at java.lang.Thread.run(Thread.java:619)
   Locked ownable synchronizers:
        - None

"RMI TCP Accept-0" daemon prio=6 tid=0x0488dc00 nid=0x129c runnable [0x069ff000..0x069ffc94]
   java.lang.Thread.State: RUNNABLE
        at java.net.PlainSocketImpl.socketAccept(Native Method)
        at java.net.PlainSocketImpl.accept(PlainSocketImpl.java:384)
        - locked <0x20606780> (a java.net.SocksSocketImpl)
        at java.net.ServerSocket.implAccept(ServerSocket.java:453)
        at java.net.ServerSocket.accept(ServerSocket.java:421)
        at sun.rmi.transport.tcp.TCPTransport$AcceptLoop.executeAcceptLoop(TCPTransport.java:369)
        at sun.rmi.transport.tcp.TCPTransport$AcceptLoop.run(TCPTransport.java:341)
        at java.lang.Thread.run(Thread.java:619)
   Locked ownable synchronizers:
        - None

"RMI TCP Accept-20220" daemon prio=6 tid=0x05319800 nid=0x1634 runnable [0x0690f000..0x0690fa94]
   java.lang.Thread.State: RUNNABLE
        at java.net.PlainSocketImpl.socketAccept(Native Method)
        at java.net.PlainSocketImpl.accept(PlainSocketImpl.java:384)
        - locked <0x205fb908> (a java.net.SocksSocketImpl)
        at java.net.ServerSocket.implAccept(ServerSocket.java:453)
        at java.net.ServerSocket.accept(ServerSocket.java:421)
        at sun.rmi.transport.tcp.TCPTransport$AcceptLoop.executeAcceptLoop(TCPTransport.java:369)
        at sun.rmi.transport.tcp.TCPTransport$AcceptLoop.run(TCPTransport.java:341)
        at java.lang.Thread.run(Thread.java:619)
   Locked ownable synchronizers:
        - None

"gogo shell pipe thread" daemon prio=6 tid=0x0511f400 nid=0x920 runnable [0x0586f000..0x0586fb94]
   java.lang.Thread.State: RUNNABLE
        at jline.WindowsTerminal.readByte(Native Method)
        at jline.WindowsTerminal.readCharacter(WindowsTerminal.java:237)
        at jline.AnsiWindowsTerminal.readDirectChar(AnsiWindowsTerminal.java:44)
        at org.apache.felix.karaf.shell.console.jline.Console$Pipe.run(Console.java:346)
        at java.lang.Thread.run(Thread.java:619)
   Locked ownable synchronizers:
        - None

"Karaf Shell Console Thread" prio=6 tid=0x05134400 nid=0xf54 waiting on condition [0x0581f000..0x0581fc14]
   java.lang.Thread.State: WAITING (parking)
        at sun.misc.Unsafe.park(Native Method)
        - parking to wait for <0x20573970> (a java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject)
        at java.util.concurrent.locks.LockSupport.park(LockSupport.java:158)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:1925)
        at java.util.concurrent.ArrayBlockingQueue.take(ArrayBlockingQueue.java:317)
        at org.apache.felix.karaf.shell.console.jline.Console$ConsoleInputStream.read(Console.java:286)
        at org.apache.felix.karaf.shell.console.jline.Console$ConsoleInputStream.read(Console.java:303)
        at jline.AnsiWindowsTerminal.readCharacter(AnsiWindowsTerminal.java:40)
        at jline.WindowsTerminal.readVirtualKey(WindowsTerminal.java:359)
        at jline.ConsoleReader.readVirtualKey(ConsoleReader.java:1504)
        at jline.ConsoleReader.readBinding(ConsoleReader.java:674)
        at jline.ConsoleReader.readLine(ConsoleReader.java:514)
        at jline.ConsoleReader.readLine(ConsoleReader.java:468)
        at org.apache.felix.karaf.shell.console.jline.Console.run(Console.java:169)
        at java.lang.Thread.run(Thread.java:619)
   Locked ownable synchronizers:
        - None

"pool-2-thread-3" prio=6 tid=0x04522c00 nid=0xf7c waiting on condition [0x04f9f000..0x04f9fc94]
   java.lang.Thread.State: WAITING (parking)
        at sun.misc.Unsafe.park(Native Method)
        - parking to wait for <0x202a6220> (a java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject)
        at ja

    Read the article

  • Complete Guide to Networking Windows 7 with XP and Vista

    - by Mysticgeek
Since there are three versions of Windows out in the field these days, chances are you need to share data between them. Today we show how to get each version to share files and printers with the others. In a perfect world, getting your computers with different Microsoft operating systems to network would be as easy as clicking a button. With the Windows 7 Homegroup feature, it's almost that easy. However, getting all three of them to communicate with each other can be a bit of a challenge. Today we've put together a guide that will help you share files and printers in whatever scenario of the three versions you might encounter on your home network.

Sharing Between Windows 7 and XP
The most common scenario you're probably going to run into is sharing between Windows 7 and XP. Essentially you'll want to make sure both machines are part of the same workgroup, set up the correct sharing settings, and make sure network discovery is enabled on Windows 7. The biggest problem you may run into is finding the correct printer drivers for both versions of Windows. Share Files and Printers Between Windows 7 & XP

Map a Network Drive
Another method of sharing data between XP and Windows 7 is mapping a network drive (a command-line sketch follows this excerpt). If you don't need to share a printer and only want to share a drive, then you can just map an XP drive to Windows 7. Although it might sound complicated, the process is not bad. The trickiest part is making sure you add the appropriate local user. This will allow you to share the contents of an XP drive to your Windows 7 computer. Map a Network Drive from XP to Windows 7

Sharing Between Vista and Windows 7
Another scenario you might run into is having to share files and printers between a Vista and Windows 7 machine. The process is a bit easier than sharing between XP and Windows 7, but takes a bit of work. The Homegroup feature isn't compatible with Vista, so we need to go through a few different steps. Depending on what your printer is, sharing it should be easier, as Vista and Windows 7 do a much better job of automatically locating the drivers. How to Share Files and Printers Between Windows 7 and Vista

Sharing Between Vista and XP
When Windows Vista came out, hardware requirements were intensive, drivers weren't ready, and sharing between the two was complicated by the new Vista structure. The sharing process is pretty straightforward if you're not using password protection, as you just need to drop what you want to share into the Vista Public folder. On the other hand, sharing with password protection becomes a bit more difficult. Basically you need to add a user and set up sharing on the XP machine. But once again, we have a complete tutorial for that situation. Share Files and Folders Between Vista and XP Machines

Sharing Between Windows 7 Machines with Homegroup
If you have one or more Windows 7 machines, sharing files and devices becomes extremely easy with the Homegroup feature. It's as simple as creating a Homegroup on one machine and then joining the other to it. It allows you to stream media, control what data is shared, and can also be password protected. If you don't want to make your Windows 7 machines part of the same Homegroup, you can still share files through the Public folder, and set up a printer to be shared as well.
Use the Homegroup Feature in Windows 7 to Share Printers and Files
Create a Homegroup & Join a New Computer To It
Change which Files are Shared in a Homegroup

Windows Home Server
If you want an ultimate setup that creates a centralized location to share files between all systems on your home network, regardless of the operating system, then set up a Windows Home Server. It allows you to centralize your important documents and digital media files on one box and provides easy access to data and the ability to stream media to other machines on your network. Not only that, but it provides easy backup of all your machines to the server, in case disaster strikes. How to Install and Setup Windows Home Server How to Manage Shared Folders on Windows Home Server

Conclusion
The biggest annoyance is dealing with printers that have a different set of drivers for each OS. There is no real easy way to solve this problem. Our best advice is to try to connect it to one machine, and if the drivers won't work, hook it up to the other computer and see if that works. Each printer manufacturer is different, and Windows doesn't always automatically install the correct drivers for the device. We hope this guide helps you share your data between whichever Microsoft OS scenario you might run into! Here are some other articles that will help you accomplish your home networking needs:
Share a Printer on a Home Network from Vista or XP to Windows 7
How to Share a Folder the XP Way in Windows Vista
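As a quick command-line companion to the drive-mapping approach described above, a share on an XP machine can be mapped from Windows 7 with net use. This is a hedged sketch: the machine name, share name, and account are placeholders of mine, not values from the guide.

    rem Map drive X: to the XP machine's shared folder using a local XP account;
    rem the * prompts for the password, and /persistent:yes re-creates the mapping at logon
    net use X: \\XP-PC\SharedDocs /user:XP-PC\shareuser * /persistent:yes

The /user switch matters here for exactly the reason the guide gives: the credentials must match a local user that exists on the XP machine.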

    Read the article

  • Add Hotmail & Live Email Accounts to Outlook 2010

    - by Matthew Guay
Microsoft has recently been promoting upcoming updates to their Hotmail service, promising to make it an even better webmail service. But Microsoft's revamped Outlook 2010 is already here. Here's how to integrate Hotmail with Outlook.

Outlook 2010 works with a wide variety of email accounts, including POP3, IMAP, and Exchange accounts. The only problem with POP3 and IMAP accounts is that they only sync email, not your calendar and contacts like Exchange does. Hotmail, however, lets you sync your email, contacts, and calendar with Outlook via the Hotmail Connector. This lets you keep all of your PIM data accessible from everywhere. Let's look at how we can set this up on our account.

Getting Started
The easiest way to add Hotmail to Outlook is to first install the Outlook Hotmail Connector (link below). Make sure Outlook is closed first, and then proceed with the installation as usual. If you enter your Hotmail account into the New Account setup in Outlook before installing the Hotmail Connector, Outlook will prompt you to download the Hotmail Connector. However, you'll have to exit Outlook before you can install the Connector, and will then have to re-enter your information when you restart Outlook, so it's easier to just install it first.

Add Your Hotmail Account to Outlook
Now you're ready to add your Hotmail account to Outlook. If this is the first time you've run Outlook 2010, you'll be greeted with the following screen. Click Next to proceed with setup. Then select Yes and click Next again. If you've already got an email account set up in Outlook, you can add a new account by clicking File and then selecting Add account. Now, enter your Hotmail account information, and click Next. Outlook will search for your account settings and automatically set up your account with the Hotmail Connector we previously installed. If you entered your password incorrectly, you may see the following popup. Re-enter your password and click OK, and Outlook will re-verify your settings. Once everything's finished and set up, you'll see the following completion screen. Click Finish to complete the setup and check out your Hotmail in Outlook.

Welcome to your Hotmail account in Outlook 2010. You'll notice a small notification at the bottom of the window notifying you that you're connected to Windows Live Hotmail. Now your email will synchronize with your Hotmail account, and your Outlook calendar and contacts will be synced with your Live calendar and contacts, respectively. This is the closest you can get to full Exchange without an Exchange account, and in our experience it works great. In fact, Hotmail sync seems to work faster than IMAP sync for us.

Setup Hotmail With POP3 Access
If you need to access your Hotmail email account but don't want to install the Outlook Connector, you can add it with POP3 sync. We recommend going with the Outlook Connector for the best experience, but if you can't install it (e.g. you're not allowed to install applications on your work PC) then this is a good alternative. To do this, follow our tutorial on setting up a Gmail POP3 account in Outlook. Although the article concentrates on Gmail, the settings are essentially the same. The only thing you'll want to change is the incoming and outgoing mail server:
Incoming mail server – pop3.live.com
Outgoing mail server – smtp.live.com
User name – your Hotmail or Live email address
Incoming Server (POP3) – 995
Outgoing Server (SMTP) – 587
Also, check "This server requires an encrypted connection"

Just as in the Gmail example, select TLS for the type of encrypted connection. Then, at the bottom, make sure to uncheck the box to Remove messages from the server after a number of days. This way your messages will still be accessible from your Hotmail account online.

Conclusion
Even though Hotmail is generally not as popular as Gmail, it works great with Outlook integration. If you're a heavy user of Windows Live services, or want to try them out, Outlook Connector is the easiest way to keep your desktop activity synced with the cloud. If you're just one of the millions of Hotmail users who want to access their old Hotmail account alongside their other accounts, this method works great for you too. If you're using Outlook 2003 or 2007, check out our article on using Hotmail from Microsoft Outlook.

Links
Download Outlook Hotmail Connector 32-bit
Download Outlook Hotmail Connector 64-bit – note, only for users of Office 2010 x64
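If the POP3 route misbehaves, it can help to first confirm that your machine can actually reach the two servers listed above. A minimal sketch using PowerShell's Test-NetConnection cmdlet (available in PowerShell 4.0 and later, so on systems newer than those this article targets):

    # Probe the Hotmail POP3 (SSL) and SMTP (TLS) endpoints; TcpTestSucceeded should report True
    Test-NetConnection pop3.live.com -Port 995
    Test-NetConnection smtp.live.com -Port 587

If either probe fails, the problem is network or firewall related rather than an Outlook setting.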

    Read the article

  • SQL Server service accounts and SPNs

    - by simonsabin
Service Principal Names (SPNs) are a must for Kerberos authentication, which is a must when using SharePoint, Reporting Services, or SQL Server in a scenario where you access one server that then needs to access another resource; this is called the double hop. The reason this is a complex problem is that the second hop has to be done with impersonation/delegation. For this to work there needs to be a way for the security system to make sure that the service in the middle is allowed to impersonate you; after all, you are not giving the service your password. To do this you need to be using Kerberos.

The following is my simple interpretation of how Kerberos works. I find the Kerberos documentation ridiculously complex, so the following might be slightly wrong, but I think it's close enough. Kerberos works on a ticketing system; the principle is that you get a security token from AD and then you can pass that to the service in the middle, which can then use that token to impersonate you. For that to work, AD has to be able to identify who is allowed to use the token, in this case the service account. But how do you, as a client, know what service account the service in the middle is configured with? The answer is SPNs. The SPN is the mapping between your logical connection and the service account. One type of SPN is for the DNS name of the server and the port, i.e. MySQL.mydomain.com and 1433. You can see how this maps to SQL Server on that server, but how does it map to the account? Well, it can be done in two ways: either you can have a mapping defined in AD, or AD can use a default mapping (this is something I didn't know about).

To map the SPN in AD you have to add the SPN to the user account; this is documented in the first link below, either directly or using a tool called SetSPN (a short example follows this excerpt). You might say that is complex. Well, it is, and that's why SQL Server tries to do it for you: at start up it tries to connect to AD and set the SPN on the account it is running as. Clearly that can only happen IF SQL is running as a domain account AND, importantly, it has permission to do so. By default a normal domain user account doesn't have the correct permission, and that is why so many people have this problem. If the account is a domain admin then it will have permission, but none of us run SQL using domain admin accounts, do we?

You might also note that the SPN contains the port number (this isn't a requirement now in SQL 2008, but I won't go into that), so if you set it manually and you are using dynamic ports (the default for a named instance), what do you do? Well, every time the port changes you need to change the SPN allocated to the account. That's why it's advised to let SQL Server register the SPN itself. You may also have thought: what happens if I change my service account? Won't that lead to two accounts with the same SPN? Possibly. Having two accounts with the same SPN is definitely a problem. Why? Because if there are two accounts, Kerberos can't identify the exact account that the service is running as (it could be either account), and so your security falls back to NTLM. SETSPN is useful for finding duplicate SPNs.

Reading this you will probably be thinking: oh my goodness, this is really difficult. It is. However, I've found today, while investigating something else, that there is an easy option: use Network Service as your service account. Network Service is a special account and is tied to the computer. It appears that Network Service has the update rights in AD to set an SPN mapping for the computer account.
This then allows the SPN mapping to work. I believe this also works for the Local System account.

To get all the SPNs in your AD, run the following; it can produce a large file, so you might want to restrict it to a specific OU or CN:

ldifde -d "DC=<domain>" -l servicePrincipalName -F spn.txt

You will read in the links below that you need SQL to register the SPN, and how this is done:
How to use Kerberos authentication in SQL Server - http://support.microsoft.com/kb/319723
Using Kerberos with SQL Server - http://blogs.msdn.com/sql_protocols/archive/2005/10/12/479871.aspx
Understanding Kerberos and NTLM authentication in SQL Server Connections - http://blogs.msdn.com/sql_protocols/archive/2006/12/02/understanding-kerberos-and-ntlm-authentication-in-sql-server-connections.aspx

Summary
The only reason I personally know to use a domain account is when you can't get Kerberos to work and you want to do BULK INSERT or use another network service that requires access to a remote server. In this case you have to resort to SQL authentication, SQL Server uses its service account to access the remote service, and thus you need a domain account. You might also need this if using some forms of replication. I've always found Kerberos awkward to set up and so have fallen back to this domain account approach. So, in summary, to get Kerberos to work, try using the Network Service or Local System accounts. For a great post from Adam Saxton of the SQL Server support team, go to http://blogs.msdn.com/psssql/archive/2010/03/09/what-spn-do-i-use-and-how-does-it-get-there.aspx
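To make the SetSPN workflow concrete, here is a hedged command-line sketch. The host and port come from the example above, but the service account name is a placeholder of mine:

    rem List the SPNs currently registered against the SQL Server service account (hypothetical account)
    setspn -L mydomain\sqlsvc

    rem Register the SPN for the instance listening on MySQL.mydomain.com:1433
    setspn -A MSSQLSvc/MySQL.mydomain.com:1433 mydomain\sqlsvc

    rem On Windows Server 2008 and later, search for duplicate SPNs
    setspn -X

If setspn -X reports the same SPN on two accounts, remove the stale entry with setspn -D; otherwise, as described above, authentication silently falls back to NTLM.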

    Read the article

  • Browsing Your ADF Application Module Pooling Params with WLST

    - by Duncan Mills
In ADF 11g you can of course use Enterprise Manager (EM) to browse and configure the settings used by ADF Business Components Application Modules, as shown here for one of my sample deployed applications. You can access this screen from the EM homepage by pulling down the Application Deployment menu, and then ADF > Configure ADF Business Components. Then select the profile that you are actually using (hint: look in the DataBindings.cpx file to work this out; it's probably the "Local" version unless you've explicitly changed it). So, from this screen you can change the pooling parameters and the world is good. But what if you don't have EM installed? In that case you can use the WebLogic scripting capabilities to view (and update) the MBean properties.

Explanation
The pooling parameters, and many others, are handled through JMX Managed Beans (MBeans) that are created for the deployed application in the server. In the case of the ADF BC pooling parameters, this MBean will combine the configuration deployed as part of the application with any overrides defined as -D environment settings on the JVM startup command for the application server instance.

Using WLST to Browse the Bean Values
For our purposes here I'm doing this interactively, although you can also write a script or write Java to achieve the same thing.

Step 0: Before You Start
You will need the following:
- Access to the console on the machine that is running the server
- The WebLogic Admin username and password (I'll use weblogic/password as my example here - yours will be different)
- The name of the deployed application (in this example FMWdh_application1)
- The package path to the bc4j.xcfg file (in this example oracle.demo.fmwdh.model.service.common.bc4j.xcfg). This is based on the default path for your model project, so it should be fairly easy to work out.
- The BC configuration your AM is actually running with (look in the DataBindings.cpx for that; in this example DealHelpServiceDeployed is the profile being used)

Step 1: Start the WLST console
To start at the beginning, you need to run the WLST command, but that needs a little setup:
Change to the wlserver_10.3/server/bin directory, e.g. under your Fusion Middleware Home:
[oracle@mymachine] cd /home/oracle/FMW_R1/wlserver_10.3/server/bin
Set your environment using the setWLSEnv script, e.g. on Oracle Enterprise Linux:
[oracle@mymachine bin] source setWLSEnv.sh
Start the WLST interactive console:
[oracle@mymachine bin] java weblogic.WLST
Initializing WebLogic Scripting Tool (WLST) ...
Welcome to WebLogic Server Administration Scripting Shell
Type help() for help on available commands
wls:/offline>

Step 2: Enter the WLST commands
Connect to the server:
wls:> connect('weblogic','password')
Change to the Custom root; this is where the AM Pooling MBeans are registered:
wls:> custom()
Change to the bc4j MBean directory:
wls:> cd ('oracle.bc4j.mbean.config')
Work out the correct directory for the AM configuration you need. This is the difficult bit, not because it's hard to do, but because the names are long. The structure here is such that every child MBean is displayed at the same level as the parent, so for each deployed application there will be many directories shown. In fact, do an ls() command here and you'll see what I mean. Each application will have one MBean for the app as a whole, and then for each deployed configuration in the .xcfg file you'll see: one for the config entry itself, and then one each for Security, DB Connection and AM Pooling.
So if you deploy an app with just one configuration you'll see 5 directories; if it has two configurations in the .xcfg you'll see 9, and so on. The directory you are looking for will contain those bits of information you gathered in Step 0, specifically the application name, the configuration you are using, and the xcfg name:
First of all, narrow your list to just those directories returned from the ls() command that begin oracle.bc4j.mbean.config:name=AMPool. These identify the AM pooling MBeans for all the deployed applications.
Now look for the correct application name, e.g. Application=FMWdh_application1.
The config setting in that sub-list should already be correct and match what you expect, e.g. oracle.bc4j.mbean.config=oracle.demo.fmwdh.model.service.common.bc4j.xcfg.
Finally, look for the correct value for the AppModuleConfigType, e.g. oracle.bc4j.mbean.config.AppModuleConfigType=DealHelpServiceDeployed.
Now that you have identified the correct directory name, change to it (keep the name on one line of course - I've had to split it across lines here for clarity):
wls:> cd ('oracle.bc4j.mbean.config:name=AMPool,
    type=oracle.bc4j.mbean.config.AppModuleConfigType.AMPoolType,
    oracle.bc4j.mbean.config=oracle.demo.fmwdh.model.service.common.bc4j.xcfg,
    Application=FMWdh_application1,
    oracle.bc4j.mbean.config.AppModuleConfigType=DealHelpServiceDeployed')
Now you can actually view the parameter values with a simple ls() command:
wls:> ls()
And here's the output, in which you can view the real-time values of the various pool settings:
 -rw- AmpoolConnectionstrategyclass oracle.jbo.common.ampool.DefaultConnectionStrategy
 -rw- AmpoolDoampooling true
 -rw- AmpoolDynamicjdbccredentials false
 -rw- AmpoolInitpoolsize 2
 -rw- AmpoolIsuseexclusive true
 -rw- AmpoolMaxavailablesize 40
 -rw- AmpoolMaxinactiveage 600000
 -rw- AmpoolMaxpoolsize 4096
 -rw- AmpoolMinavailablesize 2
 -rw- AmpoolMonitorsleepinterval 600000
 -rw- AmpoolResetnontransactionalstate true
 -rw- AmpoolSessioncookiefactoryclass oracle.jbo.common.ampool.DefaultSessionCookieFactory
 -rw- AmpoolTimetolive 3600000
 -rw- AmpoolWritecookietoclient false
 -r-- ConfigMBean true
 -rw- ConnectionPoolManager oracle.jbo.server.ConnectionPoolManagerImpl
 -rw- Doconnectionpooling false
 -rw- Dofailover false
 -rw- Initpoolsize 0
 -rw- Maxpoolcookieage -1
 -rw- Maxpoolsize 4096
 -rw- Poolmaxavailablesize 25
 -rw- Poolmaxinactiveage 600000
 -rw- Poolminavailablesize 5
 -rw- Poolmonitorsleepinterval 600000
 -rw- Poolrequesttimeout 30000
 -rw- Pooltimetolive -1
 -r-- ReadOnly false
 -rw- Recyclethreshold 10
 -r-- RestartNeeded false
 -r-- SystemMBean false
 -r-- eventProvider true
 -r-- eventTypes java.lang.String[jmx.attribute.change]
 -r-- objectName oracle.bc4j.mbean.config:name=AMPool,type=oracle.bc4j.mbean.config.AppModuleConfigType.AMPoolType,oracle.bc4j.mbean.config=oracle.demo.fmwdh.model.service.common.bc4j.xcfg,Application=FMWdh_application1,oracle.bc4j.mbean.config.AppModuleConfigType=DealHelpServiceDeployed
 -rw- poolClassName oracle.jbo.common.ampool.ApplicationPoolImpl

Thanks to Brian Fry on the JDeveloper PM Team, who did most of the work to put this sequence of steps together with me badgering him over his shoulder.
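Pulled together, the whole interactive session can also be driven as a single WLST script. A minimal sketch, using the example application and configuration names from this walkthrough (substitute your own, and note that get() on an individual attribute is my addition rather than a step from the steps above):

    connect('weblogic','password')
    custom()
    cd('oracle.bc4j.mbean.config')
    # The MBean name must be supplied on one line
    cd('oracle.bc4j.mbean.config:name=AMPool,type=oracle.bc4j.mbean.config.AppModuleConfigType.AMPoolType,oracle.bc4j.mbean.config=oracle.demo.fmwdh.model.service.common.bc4j.xcfg,Application=FMWdh_application1,oracle.bc4j.mbean.config.AppModuleConfigType=DealHelpServiceDeployed')
    # Dump all the AM pool attributes, then read one individually
    ls()
    print get('AmpoolMaxpoolsize')

Save it as, say, ampool.py and run it with java weblogic.WLST ampool.py after sourcing setWLSEnv.sh as in Step 1.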

    Read the article

  • Benefits of PerformancePoint Services Using SharePoint Server 2010

    - by Wayne
What is PerformancePoint Services?
Most of the time, the metrics that make up your key performance indicators are not simple values from a data source. In SharePoint Server 2007 PerformancePoint Services, you could create two kinds of KPI metrics: simple single-value metrics from any supported data source, or complex multiple-value metrics from a single Analysis Services data source using MDX. Now things are even easier with PerformancePoint Services in SharePoint 2010. Let us check what it is.

PerformancePoint Services in SharePoint Server 2010 is a performance management service that you can use to monitor and analyze your business. By providing flexible, easy-to-use tools for building dashboards, scorecards, reports, and key performance indicators (KPIs), PerformancePoint Services can help everyone across an organization make informed business decisions that align with companywide objectives and strategy. Scorecards, dashboards, and KPIs help drive accountability. Integrated analytics help employees move quickly from monitoring information to analyzing it and, when appropriate, sharing it throughout the organization. Prior to the addition of PerformancePoint Services to SharePoint Server, Microsoft Office PerformancePoint Server 2007 functioned as a standalone server. Now PerformancePoint functionality is available as an integrated part of the SharePoint Server Enterprise license, as is the case with Excel Services in Microsoft SharePoint Server 2010. The popular features of earlier versions of PerformancePoint Services are preserved, along with numerous enhancements and additional functionality.

New features and enhancements of SharePoint 2010 PerformancePoint Services
• With PerformancePoint Services functioning as a service in SharePoint Server, dashboards and dashboard items are stored and secured within SharePoint lists and libraries, providing you with a single security and repository framework. The new architecture also takes advantage of SharePoint Server scalability, collaboration, backup and recovery, and disaster recovery capabilities. You can also include and link PerformancePoint Services Web Parts with other SharePoint Server Web Parts on the same page. The new architecture also streamlines security models that simplify access to report data.
• The Decomposition Tree is a new visualization report type available in PerformancePoint Services. You can use it to quickly and visually break down higher-level data values from a multi-dimensional data set to understand the driving forces behind those values. The Decomposition Tree is available in scorecards and analytic reports, and ultimately in dashboards.
• You can access more detailed business information with improved scorecards. Scorecards have been enhanced to make it easy for you to drill down and quickly access more detailed information. PerformancePoint scorecards also offer more flexible layout options, dynamic hierarchies, and calculated KPI features. Using this enhanced functionality, you can now create custom metrics that use multiple data sources. You can also sort, filter, and view variances between actual and target values to help you identify concerns or risks.
• Better Time Intelligence filtering capabilities that you can use to create dynamic time filters that are always up to date. Other improved filters increase the ability of dashboard users to quickly focus in on the information that is most relevant.
• Ability to include and link PerformancePoint Services Web Parts together with other PerformancePoint Services Web Parts on the same page.
• Easier authoring and publishing of dashboard items by using Dashboard Designer.
• SQL Server Analysis Services 2008 support.
• Increased support for accessibility compliance in individual reports and scorecards.
• The KPI Details report is a new report type that displays contextually relevant information about KPIs, metrics, rows, columns, and cells within a scorecard. The KPI Details report works as a Web Part that links to a scorecard or individual KPI to show relevant metadata to the end user in SharePoint Server. This Web Part can be added to PerformancePoint dashboards or any SharePoint Server page.
• Create analytic reports to better understand the underlying business forces behind the results. Analytic reports have been enhanced to support value filtering, new chart types, and server-based conditional formatting.

To conclude, PerformancePoint Services, by becoming tightly integrated with SharePoint Server 2010, takes advantage of many enterprise-level SharePoint Server 2010 features. Unfortunately, SharePoint Foundation 2010 doesn't include this feature. There are still many choices in the SharePoint family of products, including SharePoint Server 2010, SharePoint Foundation, SharePoint Server 2007 and associated free SharePoint web parts and templates.

    Read the article

• 11gR2 11.2.0.3 Database Certified with E-Business Suite

    - by Elke Phelps (Oracle Development)
The 11gR2 11.2.0.2 Database was certified with E-Business Suite (EBS) 11i and EBS 12 almost one year ago today. I'm pleased to announce that 11.2.0.3, the second patchset for the 11gR2 Database, is now certified. Be sure to review the interoperability notes for R11i and R12 for the most up-to-date requirements for deployment. This certification announcement is important as you plan upgrades to the technology stack for your environment. For additional upgrade direction, please refer to the recently published EBS upgrade recommendations article. Database support implications may also be reviewed in the database patching and support article.

Oracle E-Business Suite Release 11i
Prerequisites: 11.5.10.2 + ATG PF.H RUP 6 and higher
Certified Platforms:
Linux x86 (Oracle Linux 4, 5)
Linux x86 (RHEL 4, 5)
Linux x86 (SLES 10)
Linux x86-64 (Oracle Linux 4, 5) -- Database-tier only
Linux x86-64 (RHEL 4, 5) -- Database-tier only
Linux x86-64 (SLES 10) -- Database-tier only
Oracle Solaris on SPARC (64-bit) (10)
Oracle Solaris on x86-64 (64-bit) (10) -- Database-tier only
Pending Platform Certifications:
Microsoft Windows Server (32-bit)
Microsoft Windows Server (64-bit)
HP-UX PA-RISC (64-bit)
HP-UX Itanium
IBM: Linux on System z
IBM AIX on Power Systems

Oracle E-Business Suite Release 12
Prerequisites: Oracle E-Business Suite Release 12.0.4 or later; or, Oracle E-Business Suite Release 12.1.1 or later
Certified Platforms:
Linux x86 (Oracle Linux 4, 5)
Linux x86 (RHEL 4, 5)
Linux x86 (SLES 10)
Linux x86-64 (Oracle Linux 4, 5)
Linux x86-64 (RHEL 4, 5)
Linux x86-64 (SLES 10)
Oracle Solaris on SPARC (64-bit) (10)
Oracle Solaris on x86-64 (64-bit) (10) -- Database-tier only
Pending Platform Certifications:
Microsoft Windows Server (32-bit)
Microsoft Windows Server (64-bit)
HP-UX PA-RISC (64-bit)
IBM: Linux on System z
IBM AIX on Power Systems
HP-UX Itanium

Database Feature and Option Certifications
The following 11gR2 11.2.0.3 database options and features are supported for use:
Advanced Compression
Active Data Guard
Advanced Security Option (ASO) / Advanced Networking Option (ANO)
Database Vault
Database Partitioning
Data Guard Redo Apply with Physical Standby Databases
Native PL/SQL compilation
Oracle Label Security (OLS)
Real Application Clusters (RAC)
Real Application Testing
SecureFiles
Virtual Private Database (VPD)
Certification of the following database options and features is still underway:
Transparent Data Encryption (TDE) Column Encryption, 11gR2 version 11.2.0.3
Transparent Data Encryption (TDE) Tablespace Encryption, 11gR2 version 11.2.0.3

About the pending certifications
Oracle's Revenue Recognition rules prohibit us from discussing certification and release dates, but you're welcome to monitor or subscribe to this blog for updates, which I'll post as soon as they're available.

EBS 11i References
Interoperability Notes - Oracle E-Business Suite Release 11i with Oracle Database 11g Release 2 (11.2.0) (Note 881505.1)
Using Oracle 11g Release 2 Real Application Clusters with Oracle E-Business Suite Release 11i (Note 823586.1)
Encrypting Oracle E-Business Suite Release 11i Network Traffic using Advanced Security Option and Advanced Networking Option (Note 391248.1)
Using Transparent Data Encryption with Oracle E-Business Release 11i (Note 403294.1)
Integrating Oracle E-Business Suite Release 11i with Oracle Database Vault 11gR2 (Note 1091086.1)
Using Oracle E-Business Suite with a Split Configuration Database Tier on Oracle 11gR2 Version 11.2.0.1.0 (Note 946413.1)
Export/Import Process for Oracle E-Business Suite Release 11i Database Instances Using Oracle Database 11g Release 1 or 2 (Note 557738.1)
Database Initialization Parameters for Oracle Applications Release 11i (Note 216205.1)

EBS 12 References
Interoperability Notes - Oracle E-Business Suite Release 12 with Oracle Database 11g Release 2 (11.2.0) (Note 1058763.1)
Database Initialization Parameters for Oracle Applications Release 12 (Note 396009.1)
Using Oracle 11g Release 2 Real Application Clusters with Oracle E-Business Suite Release 12 (Note 823587.1)
Using Transparent Data Encryption with Oracle E-Business Suite Release 12 (Note 732764.1)
Integrating Oracle E-Business Suite Release 12 with Oracle Database Vault 11gR2 (Note 1091083.1)
Export/Import Process for Oracle E-Business Suite Release 12 Database Instances Using Oracle Database 11g Release 1 or 11g Release 2 (Note 741818.1)
Enabling SSL in Oracle Applications Release 12 (Note 376700.1)

Related Articles
11gR2 Database Certified with E-Business Suite 11i
11gR2 Database Certified with E-Business Suite 12
11gR2 11.2.0.2 Database Certified with E-Business Suite 12
Can E-Business Users Apply Database Patch Set Updates?
On Apps Tier Patching and Support: A Primer for E-Business Suite Users
On Database Patching and Support: A Primer for E-Business Suite Users
Quarterly E-Business Suite Upgrade Recommendations; October 2011 Edition

The preceding is intended to outline our general product direction. It is intended for information purposes only, and may not be incorporated into any contract. It is not a commitment to deliver any material, code, or functionality, and should not be relied upon in making purchasing decisions. The development, release, and timing of any features or functionality described for Oracle's products remains at the sole discretion of Oracle.

    Read the article

  • OTN Architect Day Headed to Reston, VA - May 16

    - by Bob Rhubart
In 2011 OTN Architect Day made stops in Chicago, Denver, Phoenix, Redwood Shores, and Toronto. The 2012 series begins with OTN Architect Day in Reston, VA on Wednesday, May 16. Registration is now open for this free event, but don't get caught napping: seating is limited, and the event is just 5 weeks away. The information below reflects the most recent updates to the event agenda, including the addition of Oracle ACE Director Kai Yu as the guest keynote speaker. Kai is Senior System Engineer / Architect at Dell, Inc., and has been very busy of late as a speaker at various industry and Oracle User Group events. I'm very happy Kai has agreed to make the trek from his hometown in Austin, TX to share his insight at the Architect Day event in Reston. If you're in the area, put this one on your calendar. You won't be sorry.

Venue
Sheraton Reston Hotel
11810 Sunrise Valley Drive
Reston, VA 20191

Event Agenda
8:30 am - 9:00 am: Registration and Continental Breakfast
9:00 am - 9:15 am: Welcome and Opening Comments
9:15 am - 10:00 am: Engineered Systems: Oracle's Vision for the Future | Ralf Dossman
Oracle's Exadata and Exalogic are impressive products in their own right. But working in combination they deliver unparalleled transaction processing performance, with up to a 30x increase over existing legacy systems and the lowest cost of ownership, on a 3- or 5-year basis, of any hardware. In this session you'll learn how to leverage Oracle's Engineered Systems within your enterprise to deliver record-breaking performance at the lowest TCO.
10:00 am - 10:30 am: High Availability Infrastructure for Cloud Computing | Kai Yu
Infrastructure high availability is extremely critical to Cloud Computing. In a Cloud system that hosts a large number of databases and applications with different SLAs, any unplanned outage can be devastating, and even a small planned downtime may be unacceptable. This presentation will discuss various technology solutions and the related best practices that system architects should consider in cloud infrastructure design to ensure high availability.
10:30 am - 10:45 am: Break
10:45 am - 11:30 am: Breakout Sessions (pick one)
Innovations in Grid Computing with Oracle Coherence | Bjorn Boe
Learn how Coherence can increase the availability, scalability and performance of your existing applications with its advanced low-latency data-grid technologies. Also hear some interesting industry-specific use cases that customers have implemented, and how Oracle is integrating Coherence into its Enterprise Java stack.
Cloud Computing - Making IT Simple | Scott Mattoon
The road to Cloud Computing is not without a few bumps. This session will help to smooth out your journey by tackling some of the potential complications. We'll examine whether standardization is a prerequisite for the Cloud. We'll look at why refactoring isn't just for application code. We'll check out deployable entities and their simplification via higher levels of abstraction. And we'll close out the session with a look at engineered systems and modular clouds.
11:30 am - 12:15 pm: Breakout Sessions (pick one)
Oracle Enterprise Manager | Joe Diemer
Oracle Enterprise Manager (EM) provides complete lifecycle management for the cloud - from automated cloud setup to self-service delivery to cloud operations. In this session you'll learn how to take control of your cloud infrastructure with EM features including Consolidation Planning and Self-Service provisioning with Metering and Chargeback. Come hear how Oracle is expanding its management capabilities into the cloud!
Rationalization and Defense in Depth - Two Steps Closer to the Clouds | Dave Chappelle
Security represents one of the biggest concerns about cloud computing. In this session we'll get past the FUD with a real-world look at some key issues. We'll discuss the infrastructure necessary to support rationalization and security services, explore architecture for defense-in-depth, and deal frankly with the good, the bad, and the ugly in Cloud security.
12:15 pm - 1:15 pm: Lunch
1:40 pm - 2:00 pm: Panel Discussion - Q&A
2:00 pm - 2:45 pm: Breakout Sessions (pick one)
21st Century SOA | Peter Belknap
Service Oriented Architecture has evolved from concept to reality in the last decade. The right methodology coupled with mature SOA technologies has helped customers demonstrate success in both innovation and ROI. In this session you will learn how Oracle SOA Suite's orchestration, virtualization, and governance capabilities provide the infrastructure to run mission critical business and system applications. And we'll take a special look at the convergence of SOA & BPM using Oracle's unified technology stack.
Oracle Cloud Reference Architecture | Anbu Krishnaswamy
Cloud initiatives are beginning to dominate enterprise IT roadmaps. Successful adoption of Cloud and the subsequent governance challenges warrant a Cloud reference architecture that is applied consistently across the enterprise. This presentation gives an overview of Oracle's Cloud Reference Architecture, which is part of the Cloud Enterprise Technology Strategy (ETS). Concepts covered include common management layer capabilities, service models, resource pools, and use cases.
2:45 pm - 3:00 pm: Break
3:00 pm - 4:00 pm: Roundtable Discussions
4:00 pm - 4:15 pm: Closing Comments & Readouts from Roundtables
4:15 pm - 5:00 pm: Cocktail Reception / Networking

Session schedule and content subject to change.

    Read the article

• Part 2 – Load Testing In The Cloud

    - by Tarun Arora
Welcome to Part 2. In Part 1 we discussed the advantages of creating a Test Rig in the cloud, the Azure edge, and the Test Rig topology we want to get to. In Part 2, let's start by understanding the components of Azure we'll be making use of, followed by manually putting them together to create the test rig. So… let's get down and dirty and start setting up the Test Rig.

What components of Azure will I be using for building the Test Rig in the Cloud?
To run the Test Agents we'll make use of Windows Azure Compute, and to enable communication between the Test Controller and the Test Agents we'll make use of Windows Azure Connect.

Azure Connect
The Test Controller is on premise and the Test Agents are in the cloud (how will they talk?). To enable communication between the two, we'll make use of Windows Azure Connect. With Windows Azure Connect, you can use a simple user interface to configure IPsec-protected connections between computers or virtual machines (VMs) in your organization's network and roles running in Windows Azure. With this you can now join Windows Azure role instances to your domain, so that you can use your existing methods for domain authentication, name resolution, or other domain-wide maintenance actions. For more details refer to the overview of Windows Azure Connect, and to a very useful video explaining everything you wanted to know about Windows Azure Connect.

Azure Compute
Windows Azure Compute provides developers with a platform to host and manage applications in Microsoft's data centres across the globe. A Windows Azure application is built from one or more components called 'roles.' Roles come in three different types: Web role, Worker role, and Virtual Machine (VM) role; we'll be using the Worker role to set up the Test Agents. A very nice blog post discusses the difference between the 3 role types. Developers are free to use the .NET framework or other software that runs on Windows with the Worker role or Web role. Developers can also create applications using languages such as PHP and Java. More on Windows Azure Compute.

Each Windows Azure compute instance represents a virtual server:

Virtual Machine Size   CPU Cores   Memory     Cost Per Hour
Extra Small            Shared      768 MB     $0.04
Small                  1           1.75 GB    $0.12
Medium                 2           3.50 GB    $0.24
Large                  4           7.00 GB    $0.48
Extra Large            8           14.00 GB   $0.96

You might want to review the Windows Azure Pricing FAQ.

Let's Get Started building the Test Rig…

Configuration
Machine   Role                                Comments
VM – 1    Domain Controller for Playpit.com   On Premise
VM – 2    TFS, Test Controller                On Premise
VM – 3    Test Agent                          Cloud

In this blog post I assume that you have the domain, Team Foundation Server and Test Controller installed and set up already. If not, please refer to the TFS 2010 Installation Guide and this walkthrough on MSDN to set up your Test Controller. You can also download a preconfigured TFS 2010 VM from Brian Keller's blog; Brian also has some great hands-on labs on TFS 2010 that you may want to explore.

I. Let's start building VM – 3: The Test Agent
Download the Windows Azure SDK and Tools. Open Visual Studio and create a new Windows Azure project using the Cloud template. Choose the Worker role, for reasons explained in the earlier post. The WorkerRole.cs implements the Run() and OnStart() methods; no code changes are required. You should be able to compile the project and run it in the compute emulator (the compute emulator should have been installed as part of the Windows Azure Toolkit) on your local machine.
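For reference, the generated WorkerRole.cs looks roughly like the sketch below. This is a from-memory approximation of the SDK template rather than a verbatim copy:

using System.Diagnostics;
using System.Net;
using System.Threading;
using Microsoft.WindowsAzure.ServiceRuntime;

namespace WorkerRole1
{
    public class WorkerRole : RoleEntryPoint
    {
        public override void Run()
        {
            // The template's Run() simply loops forever, tracing a heartbeat
            Trace.WriteLine("WorkerRole1 entry point called", "Information");
            while (true)
            {
                Thread.Sleep(10000);
                Trace.WriteLine("Working", "Information");
            }
        }

        public override bool OnStart()
        {
            // Template default: raise the limit on concurrent outbound connections
            ServicePointManager.DefaultConnectionLimit = 12;
            return base.OnStart();
        }
    }
}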
We will only be making changes to the Windows Azure project. Open ServiceDefinition.csdef and ensure that the vmsize is Small (remember the cost chart above). Import the "Connect" module; I am importing the Connect module because I need to join the Worker role VM to the Playpit domain.

<?xml version="1.0" encoding="utf-8"?>
<ServiceDefinition name="WindowsAzureProject2" xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceDefinition">
  <WorkerRole name="WorkerRole1" vmsize="Small">
    <Imports>
      <Import moduleName="Diagnostics" />
      <Import moduleName="Connect"/>
    </Imports>
  </WorkerRole>
</ServiceDefinition>

Go to ServiceConfiguration.Cloud.cscfg and note that settings with the key 'Microsoft.WindowsAzure.Plugins.Connect.%%%%' have been added to the configuration file. This is because you decided to import the Connect module. See the config below.

<?xml version="1.0" encoding="utf-8"?>
<ServiceConfiguration serviceName="WindowsAzureProject2" xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceConfiguration" osFamily="1" osVersion="*">
  <Role name="WorkerRole1">
    <Instances count="1" />
    <ConfigurationSettings>
      <Setting name="Microsoft.WindowsAzure.Plugins.Diagnostics.ConnectionString" value="UseDevelopmentStorage=true" />
      <Setting name="Microsoft.WindowsAzure.Plugins.Connect.ActivationToken" value="" />
      <Setting name="Microsoft.WindowsAzure.Plugins.Connect.Refresh" value="" />
      <Setting name="Microsoft.WindowsAzure.Plugins.Connect.WaitForConnectivity" value="" />
      <Setting name="Microsoft.WindowsAzure.Plugins.Connect.Upgrade" value="" />
      <Setting name="Microsoft.WindowsAzure.Plugins.Connect.EnableDomainJoin" value="" />
      <Setting name="Microsoft.WindowsAzure.Plugins.Connect.DomainFQDN" value="" />
      <Setting name="Microsoft.WindowsAzure.Plugins.Connect.DomainControllerFQDN" value="" />
      <Setting name="Microsoft.WindowsAzure.Plugins.Connect.DomainAccountName" value="" />
      <Setting name="Microsoft.WindowsAzure.Plugins.Connect.DomainPassword" value="" />
      <Setting name="Microsoft.WindowsAzure.Plugins.Connect.DomainOU" value="" />
      <Setting name="Microsoft.WindowsAzure.Plugins.Connect.Administrators" value="" />
      <Setting name="Microsoft.WindowsAzure.Plugins.Connect.DomainSiteName" value="" />
    </ConfigurationSettings>
  </Role>
</ServiceConfiguration>

Let's go step by step and understand all the highlighted parameters and where you can find the values for them.

osFamily – By default this is set to 1 (Windows Server 2008 SP2). Change it to 2 if you want the Windows Server 2008 R2 operating system. The advantage of using osFamily="2" is that you get PowerShell 2.0 rather than PowerShell 1.0. In PowerShell 2.0 you can simply use "powershell -ExecutionPolicy Unrestricted ./myscript.ps1" and it will work, while in PowerShell 1.0 you will have to change the registry key by including the following in your command file before you can execute any PowerShell: "reg add HKLM\Software\Microsoft\PowerShell\1\ShellIds\Microsoft.PowerShell /v ExecutionPolicy /d Unrestricted /f". The other reason you might want to move to osFamily="2" is if you want IIS 7.5.

ActivationToken – To enable communication between the on premise machine and the Windows Azure Worker role VM, both need to have the same token. Log on to the Windows Azure Management Portal, click on Connect, then click on Get Activation Token; this should give you the activation token. Copy the activation token to the clipboard and paste it into the configuration file.
Note – Later in the blog I'll be showing you how to install Connect on the on premise machine.

EnableDomainJoin – Set the value to true; of course we want to join the Windows Azure worker role VM to the domain.

DomainFQDN, DomainControllerFQDN, DomainAccountName, DomainPassword, DomainOU, Administrators – This information is specific to your domain. I have extracted this information from Server Manager and 'Active Directory Users and Computers'. Also, I created a new domain OU, namely 'CloudInstances', so that all my cloud instances joined to the domain show up there; this is optional. You can encrypt the DomainPassword; refer to the instructions here, or hold fire, I'll be covering that when I come to certificates and encryption in the coming section.

Once you have filled in all this information, the configuration file should look something like the one below:

<?xml version="1.0" encoding="utf-8"?>
<ServiceConfiguration serviceName="WindowsAzureProject2" xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceConfiguration" osFamily="2" osVersion="*">
  <Role name="WorkerRole1">
    <Instances count="1" />
    <ConfigurationSettings>
      <Setting name="Microsoft.WindowsAzure.Plugins.Diagnostics.ConnectionString" value="UseDevelopmentStorage=true" />
      <Setting name="Microsoft.WindowsAzure.Plugins.Connect.ActivationToken" value="45f55fea-f194-4fbc-b36e-25604faac784" />
      <Setting name="Microsoft.WindowsAzure.Plugins.Connect.Refresh" value="" />
      <Setting name="Microsoft.WindowsAzure.Plugins.Connect.WaitForConnectivity" value="" />
      <Setting name="Microsoft.WindowsAzure.Plugins.Connect.Upgrade" value="" />
      <Setting name="Microsoft.WindowsAzure.Plugins.Connect.EnableDomainJoin" value="true" />
      <Setting name="Microsoft.WindowsAzure.Plugins.Connect.DomainFQDN" value="play.pit.com" />
      <Setting name="Microsoft.WindowsAzure.Plugins.Connect.DomainControllerFQDN" value="WIN-KUDQMQFGQOL.play.pit.com" />
      <Setting name="Microsoft.WindowsAzure.Plugins.Connect.DomainAccountName" value="playpit\Administrator" />
      <Setting name="Microsoft.WindowsAzure.Plugins.Connect.DomainPassword" value="************************" />
      <Setting name="Microsoft.WindowsAzure.Plugins.Connect.DomainOU" value="OU=CloudInstances, DC=Play, DC=Pit, DC=com" />
      <Setting name="Microsoft.WindowsAzure.Plugins.Connect.Administrators" value="Playpit\Administrator" />
      <Setting name="Microsoft.WindowsAzure.Plugins.Connect.DomainSiteName" value="" />
    </ConfigurationSettings>
  </Role>
</ServiceConfiguration>

Next we will enable the Remote Desktop module in ServiceDefinition.csdef. We could make the changes manually or let a rather nice wizard make them for us; I prefer the second option. So right-click the Windows Azure project and choose Publish.

Now once you get the publish wizard, if you haven't already done so you will be asked to import your Windows Azure subscription; this is simply the MSDN subscription activation key XML. Once you have done that, click Next to go to the Settings page and check 'Enable Remote Desktop for all roles'.

As soon as you do that you get another pop up asking you for the details of the user you will be logging in with (make sure you enter a reasonable expiry date; you do not want the user account to expire today). Notice the 'more information' tag at the bottom; click it to get access to the certificate section. See the screen shot below.
From the drop-down, select the option to create a new certificate.

In the pop-up window enter a friendly name for your certificate. In my case I entered 'WAC – Test Rig' and clicked OK. This creates a new certificate for you. Click on the View button to see the certificate details. See the Thumbprint? This is the value that will go in the config file (very important). Now click on the Copy to File button to export the certificate; we will need to import the certificate into the Windows Azure Management Portal later, so make sure you save it to a safe location.

Click Finish and enter the details of the user you would like to create with permissions for remote desktop access. Once you have entered the details on the 'Remote desktop configuration' screen, click OK. From the Publish Windows Azure wizard screen press Cancel: Cancel because we don't want to publish the role just yet, and Yes because we want to save all the changes in the config file.

Now if you go to the ServiceDefinition.csdef file you will see that the RemoteAccess and RemoteForwarder modules have been imported for you.

<?xml version="1.0" encoding="utf-8"?>
<ServiceDefinition name="WindowsAzureProject2" xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceDefinition">
  <WorkerRole name="WorkerRole1" vmsize="Small">
    <Imports>
      <Import moduleName="Diagnostics" />
      <Import moduleName="Connect" />
      <Import moduleName="RemoteAccess" />
      <Import moduleName="RemoteForwarder" />
    </Imports>
  </WorkerRole>
</ServiceDefinition>

Now go to the ServiceConfiguration.Cloud.cscfg file and you will see a whole set of 'Microsoft.WindowsAzure.Plugins.RemoteAccess.%%%' settings added for you.

<?xml version="1.0" encoding="utf-8"?>
<ServiceConfiguration serviceName="WindowsAzureProject2" xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceConfiguration" osFamily="2" osVersion="*">
  <Role name="WorkerRole1">
    <Instances count="1" />
    <ConfigurationSettings>
      <Setting name="Microsoft.WindowsAzure.Plugins.Diagnostics.ConnectionString" value="UseDevelopmentStorage=true" />
      <Setting name="Microsoft.WindowsAzure.Plugins.Connect.ActivationToken" value="45f55fea-f194-4fbc-b36e-25604faac784" />
      <Setting name="Microsoft.WindowsAzure.Plugins.Connect.Refresh" value="" />
      <Setting name="Microsoft.WindowsAzure.Plugins.Connect.WaitForConnectivity" value="" />
      <Setting name="Microsoft.WindowsAzure.Plugins.Connect.Upgrade" value="" />
      <Setting name="Microsoft.WindowsAzure.Plugins.Connect.EnableDomainJoin" value="true" />
      <Setting name="Microsoft.WindowsAzure.Plugins.Connect.DomainFQDN" value="play.pit.com" />
      <Setting name="Microsoft.WindowsAzure.Plugins.Connect.DomainControllerFQDN" value="WIN-KUDQMQFGQOL.play.pit.com" />
      <Setting name="Microsoft.WindowsAzure.Plugins.Connect.DomainAccountName" value="playpit\Administrator" />
      <Setting name="Microsoft.WindowsAzure.Plugins.Connect.DomainPassword" value="************************" />
      <Setting name="Microsoft.WindowsAzure.Plugins.Connect.DomainOU" value="OU=CloudInstances, DC=Play, DC=Pit, DC=com" />
      <Setting name="Microsoft.WindowsAzure.Plugins.Connect.Administrators" value="Playpit\Administrator" />
      <Setting name="Microsoft.WindowsAzure.Plugins.Connect.DomainSiteName" value="" />
      <Setting name="Microsoft.WindowsAzure.Plugins.RemoteAccess.Enabled" value="true" />
      <Setting name="Microsoft.WindowsAzure.Plugins.RemoteAccess.AccountUsername" value="Administrator" />
      <Setting name="Microsoft.WindowsAzure.Plugins.RemoteAccess.AccountEncryptedPassword"
value="MIIBnQYJKoZIhvcNAQcDoIIBjjCCAYoCAQAxggFOMIIBSgIBADAyMB4xHDAaBgNVBAMME1dpbmRvd 3MgQXp1cmUgVG9vbHMCEGa+B46voeO5T305N7TSG9QwDQYJKoZIhvcNAQEBBQAEggEABg4ol5Xol66Ip6QKLbAPWdmD4ae ADZ7aKj6fg4D+ATr0DXBllZHG5Umwf+84Sj2nsPeCyrg3ZDQuxrfhSbdnJwuChKV6ukXdGjX0hlowJu/4dfH4jTJC7sBWS AKaEFU7CxvqYEAL1Hf9VPL5fW6HZVmq1z+qmm4ecGKSTOJ20Fptb463wcXgR8CWGa+1w9xqJ7UmmfGeGeCHQ4QGW0IDSBU6ccg vzF2ug8/FY60K1vrWaCYOhKkxD3YBs8U9X/kOB0yQm2Git0d5tFlIPCBT2AC57bgsAYncXfHvPesI0qs7VZyghk8LVa9g5IqaM Cp6cQ7rmY/dLsKBMkDcdBHuCTAzBgkqhkiG9w0BBwEwFAYIKoZIhvcNAwcECDRVifSXbA43gBApNrp40L1VTVZ1iGag+3O1" /> <Setting name="Microsoft.WindowsAzure.Plugins.RemoteAccess.AccountExpiration" value="2012-11-27T23:59:59.0000000+00:00" /> <Setting name="Microsoft.WindowsAzure.Plugins.RemoteForwarder.Enabled" value="true" /> </ConfigurationSettings> <Certificates> <Certificate name="Microsoft.WindowsAzure.Plugins.RemoteAccess.PasswordEncryption" thumbprint="AA23016CF0BDFC344400B5B82706B608B92E4217" thumbprintAlgorithm="sha1" /> </Certificates> </Role> </ServiceConfiguration>          Okay let’s look at them one at a time,       Enabled - Yes, we would like to enable Remote Access.       AccountUserName – This is the user name you entered while you were on the publish windows azure role screen, as detailed above.       AccountEncrytedPassword – Try and decode that, the certificate is used to encrypt the password you specified for the user account. Remember earlier i said, either use the instructions or wait and i’ll be showing you encryption, now the user account i am using for rdp has the same password as my domain password, so i can simply copy the value of the AccountEncryptedPassword to the DomainPassword as well.       AccountExpiration – This is the expiration as you specified in the wizard earlier, make sure your account does not expire today.       Remote Forwarder – Check out the documentation, below is how I understand it, -- One role in an application that implements a remote desktop connection must import the RemoteForwarder module. The two modules work together to enable the remote desktop connections to role instances. -- If you have multiple roles defined in the service model, it does not matter which role you add the RemoteForwarder module to, but you must add it to only one of the role definitions.       Certificate – Remember the certificate thumbprint from the wizard, the on premise machine and windows azure role machine that need to speak to each other must have the same thumbprint. More on that when we install Windows Azure connect Endpoints on the on premise machine. As i said earlier, in this blog post, I’ll be showing you the manual process so i won’t be scripting any star up tasks to install the test agent or register the test agent with the TFS Server. I’ll be showing you all this cool stuff in the next blog post, that’s because it’s important to understand the manual side of it, it becomes easier for you to troubleshoot in case something fails. Having said that, the changes we have made are sufficient to spin up the Windows Azure Worker Role aka Test Agent VM, have it connected with the play.pit.com domain and have remote access enabled on it. Before we deploy the Test Agent VM we need to set up Windows Azure Connect on the TFS Server. II. Windows Azure Connect: Setting up Connect on VM – 2 i.e. TFS & Test Controller Glad you made it so far, now to enable communication between the on premise TFS/Test Controller and Azure-ed Test Agent we need to enable communication. 
II. Windows Azure Connect: Setting up Connect on VM – 2, i.e. TFS & Test Controller

Glad you made it so far! Now we need to enable communication between the on-premises TFS/Test Controller and the Azure-hosted Test Agent. We have set up the Azure Connect module in the Test Agent configuration; now the Connect endpoints need to be enabled on the on-premises machines. Let's have a look at how we can do this. Log on to VM – 2 running the TFS Server and Test Controller, then log on to the Windows Azure Management Portal and click on Virtual Network. If you already have a subscription you should see the screen shot below; if not, you will be asked to complete the subscription first.

Click on Install Local Endpoints at the top left of the panel and you get a URL with a token id appended to it. Remember the token I showed you earlier? In theory the token you get here should match the token you added to the Test Agent config file.

Copy the URL to the clipboard and paste it into Internet Explorer (important: at present the installation only works from IE, and you need cookies enabled to complete it). As stated in the pop-up, you can NOT download the software and run it later; you need to run it as is, since it contains the token. Once the installation completes you should see the Windows Azure Connect icon in the system tray.

Right-click the Azure Connect icon, choose Diagnostics, and refer to this link for the diagnostic terminology.

NOTE – Unfortunately I could not see the Windows Azure Connect icon in the system tray; a bit of binging with Google revealed that the icon is only shown when the 'Windows Azure Connect Endpoint' service is started. So go to services.msc and make sure that the service is started; if not, start it. Unfortunately again, the service did not start for me on a manual start, and I realised that one of the dependent services was disabled. You can look at the service dependencies, start them, and then start Windows Azure Connect. Bottom line: you need the Windows Azure Connect service running before you can proceed (a scripted version of this check appears after the Publish step below). Please refer to MSDN for more on troubleshooting Windows Azure Connect. (Follow the next step as well.)

Now go back to the Windows Azure Management Portal and from Groups and Roles create a new group; let's call it 'Test Rig'. Make sure you add VM – 2 (the TFS Server VM where you just installed the endpoint).

Now if you go back to the Azure Connect icon in the system tray and click 'Refresh Policy', you will notice that the icon's status changes from disconnected to ready for connection.

III. Importing the Certificate in to the Windows Azure Management Portal

Before publishing, you need to import the certificate you created in Step I into the Windows Azure Management Portal. Log on to the portal, click on 'Hosted Services, Storage Accounts & CDN', then 'Management Certificates', followed by Add Certificates, as shown in the screen shot below.

Browse to the location where you saved the certificate earlier (refer to Step I in case you forgot). Now you should be able to see the imported certificate here; make sure the thumbprint of the certificate matches the one you inserted in the config files.

IV. Publish the Windows Azure Worker Role aka Test Agent

Having completed I, II and III, you are ready to publish the Test Agent VM – 3 to the cloud. Go to Visual Studio, right-click the Windows Azure project and select Publish. Verify the information in the wizard; from the advanced settings tab you can also enable capture of IntelliTrace or profiling information. Click Next and click Publish!
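While the publish runs, here is the scripted version of the Connect service check I promised in step II. A minimal sketch under a couple of assumptions: the service's display name contains 'Windows Azure Connect Endpoint' (adjust if your install differs), and the project references System.ServiceProcess. Note that a disabled dependency still has to be re-enabled by hand, as described above.

// Minimal sketch: find the Connect endpoint service by display name and
// start it (plus any stopped dependencies) instead of using services.msc.
using System;
using System.Linq;
using System.ServiceProcess;

class ConnectServiceCheck
{
    static void Main()
    {
        ServiceController connect = ServiceController.GetServices()
            .FirstOrDefault(s => s.DisplayName.Contains("Windows Azure Connect Endpoint"));

        if (connect == null)
        {
            Console.WriteLine("Connect endpoint service not found - is the endpoint installed?");
            return;
        }

        // Start any stopped services this one depends on first.
        // (Start() throws if a dependency is disabled - enable it by hand.)
        foreach (ServiceController dependency in connect.ServicesDependedOn
                     .Where(d => d.Status != ServiceControllerStatus.Running))
        {
            dependency.Start();
            dependency.WaitForStatus(ServiceControllerStatus.Running, TimeSpan.FromSeconds(30));
        }

        if (connect.Status != ServiceControllerStatus.Running)
        {
            connect.Start();
            connect.WaitForStatus(ServiceControllerStatus.Running, TimeSpan.FromSeconds(30));
        }

        connect.Refresh();
        Console.WriteLine("Connect endpoint service is {0}", connect.Status);
    }
}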
From the View menu select the Windows Azure Activity Log window. Now you should be able to see the deployment progress in real time. In the Windows Azure Management Portal you should also be able to see the progress of the creation of the new worker role.

Once the deployment is complete you should be able to RDP (go to the Run prompt, type mstsc, and enter the machine name in the pop-up) into the Test Agent worker role VM from the Playpit network using the domain admin user account. If you are unable to log in to the Test Agent with the domain admin account, it means the process of joining the Test Agent to the domain has failed! The good news is that, because you imported the Connect module, you can still connect to the Test Agent machine through the Windows Azure Management Portal and troubleshoot the failure: log in with the user name and password you specified in the config file under the RemoteAccess.AccountUsername and RemoteAccess.AccountEncryptedPassword keys (just enter the password unencrypted), then fix the problem or manually join the machine to the domain. Once you have managed to join the Test Agent VM to the domain, move to the next step.

So, log in to the Test Agent worker role VM with the Playpit domain administrator and verify that you can log in, that the machine is joined to the domain, and that the Connect service is running. If yes, give yourself a pat on the back – you are 80% mission accomplished!

Go to the Windows Azure Management Portal, click on Virtual Network, click on Groups and Roles, select the 'Test Rig' group you created earlier and click Edit Group. In the 'Connect to' section, click Add to select the worker role you have just deployed. Also check the 'Allow connections between endpoints in the group' option; this enables communication between the test controller and the test agents, and between test agents. Click Save.

Now you are ready to deploy the Test Agent software on the worker role Test Agent VM and configure it to work with the Test Controller.

V. Configuring VM – 3: Installing the Test Agent and Associating it with the Controller

Log in to the worker role Test Agent VM that you have just deployed, making sure you log in with the domain administrator account. Download the 'All Agents' software from MSDN ('en_visual_studio_agents_2010_x86_x64_dvd_509679.iso'), extract the ISO and navigate to where you extracted it; in my case, 'C:\Resources\Temp\VsAgentSetup'. Open the Test Agent folder and double-click setup.exe. Once you have installed the Test Agent you should reach the configuration window. If you face any issues installing the TFS Test Agent on the VM, refer to the walkthrough on MSDN.

Once you have successfully installed the Test Agent software you will need to configure it. Right-click the Test Agent configuration tool and run it as a different user, i.e. an Administrator. This is really just to run the configuration wizard with elevated privileges (UAC might block some things otherwise).

In the run options you can select 'service'; you do not need to run the agent interactively unless you are running coded UI tests. I have specified the domain administrator to connect to the TFS Test Controller. In real life I would never do that – I would create a separate test user service account for this purpose.
But for this blog post we are using the most powerful user so that no policies or restrictions block you.

Click the Apply Settings button and you should be all green! If not, the summary usually gives helpful error messages that you can resolve before proceeding. In my experience, you may run into either a permissions issue or a firewall blocking communication.

And now the moment of truth! Go to VM – 2, open Visual Studio and from the Test menu select Manage Test Controller. Mission accomplished! You should be able to see the Test Agent that you have just configured here.

VI. Creating and Running Load Tests on your brand new Azure-ed Test Rig

I have various blog posts on performance testing with Visual Studio Ultimate; you can follow the links and videos below.

Blog Posts:
- Part 1 – Performance Testing using Visual Studio 2010 Ultimate
- Part 2 – Performance Testing using Visual Studio 2010 Ultimate
- Part 3 – Performance Testing using Visual Studio 2010 Ultimate

Videos:
- Test Tools Configuration & Settings in Visual Studio
- Why & How to Record Web Performance Tests in Visual Studio Ultimate
- Goal Driven Load Testing using Visual Studio Ultimate

Now that you have created your load tests, there is one last change to make before you can run them on your Azure test rig: create a new test settings file, change the test execution method to 'Remote Execution', and select the test controller you configured the worker role Test Agent against – in our case, VM – 2. So go on, fire off a test run and see the results of the test being executed on the Azure-ed test rig.

Review and What's Next?

A quick recap of the benefits of running the test rig in the cloud, and what I will be covering in the next blog post. And I would love to hear your feedback!

Advantages
- Utilize the power of Azure compute to run a heavy virtual user load.
- Benefit from Azure's flexibility: destroy Test Agents when not in use; it takes < 25 minutes to spin up a new Test Agent.
- Most important, test network latency (network latency and connection speed are two different things – network latency is usually very hard to test): by placing Test Agents in Microsoft data centres around the globe, you can measure the lag in transferring the bytes caused not by a slow connection but by the page being requested from the other side of the globe.

Next Steps

The process of spinning up the Test Agents in Windows Azure is not 100% automated yet. I am working on the worker process and PowerShell scripts to make the role deployment, the unattended install of the test agent software and the registration of the test agent with the test controller fully automated. In the next blog post I will show you how to make the complete process unattended and automated.

Remember to subscribe to http://feeds.feedburner.com/TarunArora. Hope you enjoyed this post; I would love to hear your feedback! If you have any recommendations on things that I should consider, or any questions or feedback, feel free to leave a comment. See you in Part III.

    Read the article

  • Oracle Business Intelligence Advanced - Hands-on Workshop for Partners - January 18-21

    - by Claudia Costa
    Workshop Description

This FREE hands-on workshop highlights the strengths of OBIEE 11g by giving attendees hands-on experience with the BI 11g product. OBIEE 11g has adopted the standardized infrastructure of Fusion Middleware to provide robust server capability along with highly anticipated advanced visualization components like maps, Flash-based charts, scorecards and KPIs. This workshop focuses on the new features and infrastructure components for BI practitioners who are familiar with OBIEE 10g or previous BI releases. After taking this course, Oracle Business Intelligence 11g Advanced, you will gain insight into OBIEE 11g technology, reporting solutions and new features, with opportunities to practice in an OBIEE 11g environment through hands-on activities. Participants will gain an in-depth understanding of the new OBIEE 11g architecture, the security model, installation/configuration, as well as reporting aspects like the new ROLAP/MOLAP-style hierarchical browsing, new chart types, the Action Framework and advanced visualization. If you are a Business Intelligence practitioner familiar with BI 10g, you cannot afford to miss this 3-day workshop. Register Now!

Presentations – Business Intelligence EE (OBIEE) 11g: Advanced Workshop
·         OBIEE 11g Overview
·         OBIEE 11g Architecture and Infrastructure
·         OBIEE 11g Installation, Configuration and Monitoring
·         OBIEE 11g Security Model and BI Components
·         OBIEE 11g Homepage Overview
·         New Visualizations: Master-Detail Events, Charts, Hierarchies
·         Reports Building with OBIEE 11g and Catalog Management
·         Spatial Integration, Action Framework, Scorecards
·         OBIEE 11g Dashboards
·         OBIEE Integration Options

Lab Outline – Oracle Business Intelligence (OBIEE) 11g: Advanced Workshop

The labs exercise OBIEE core functionality through hands-on activities and are based on an Oracle VirtualBox image with the software and training samples pre-installed. A few labs in this advanced course are optional during the workshop, allowing students to practice them on their own. The primary purpose of the workshop is to build expertise in the 11g features and the infrastructure changes since 10g. The labs will allow you to explore concepts to:
·         Gain a clear understanding of the OBIEE 11g architecture
·         Gain a clear understanding of the OBIEE differentiators
·         OBIEE 11g Security Model
·         OBIEE 11g Environment Management
·         Report Building with OBIEE 11g
·         OBIEE 11g Dashboard and Homepage Environment
·         New Visualization features
·         Management of Reports, Dashboards and BI Catalog Objects

Audience
·         Business Intelligence Evangelists
·         Business Intelligence Application Developers or Consultants
·         Data Warehouse Developers
·         Enterprise Architects
·         Industry Solutions Architects

Prerequisites
·         Experience with and understanding of OBIEE 10g is required
·         Good understanding of data modeling for reporting purposes
·         Strong experience with database technologies preferred

Equipment Requirements

This workshop requires attendees to provide their own laptops, which must meet the following minimum hardware/software requirements. The OBIEE 11g environment requires at least 3 GB of RAM (4 GB preferred), without which students will not be able to complete the labs. The workshop environment includes a VM image plus software components that students will install on their laptops for the labs.
·         Minimum 3 GB RAM, 25 GB free disk space
·         Internet Explorer 7
·         VirtualBox (the latest version), downloadable from http://www.virtualbox.org
·         WinRAR or 7zip, downloadable from http://www.win-rar.com/download.html and http://www.7zip.com/ respectively

Attendees will be given a VirtualBox image for the Oracle BI 11g Workshop containing the software along with the required toolset, database and data sets for the labs.

Agenda

This class runs for 3 days.
9:00am: Sign-in and technical set-up
9:30am: Workshop starts
5:00pm: Workshop ends

Location: Hotel Holiday Inn Express - Porto Salvo - Lisboa

This class is free. Register early to confirm a seat!

Oracle BI Advanced 11g Hands-on Workshop - Schedule Register Now!
January 11-13, 2011: Kista, Sweden
January 18-20, 2011: Lisbon, Portugal
March 1-3, 2011: Reading, Berkshire, UK
March 15-17, 2011: Colombes, Paris, France
March 29-31, 2011: Amsterdam, Netherlands

Questions? For registration questions please send an email to [email protected]. For other information, please contact Claudia Costa, tel: 214235027 or by email.

    Read the article

  • Windows 8 Live Accounts and the actual Windows Account

    - by Rick Strahl
    As if Windows security wasn't confusing enough, in Windows 8 we get thrown yet another curve ball with Windows Live accounts to log on with. When I set up my Windows 8 machine I originally set it up with a 'real', non-Live account that I always use on my Windows machines. I did this mainly so I have a matching account for resources around my home and intranet network, so I can log on to network resources properly. At some point later I decided to set up Windows Live security just to see how it changes things.

Windows wants you to use Windows Live

Windows Live logins are required in order for the Windows RT account info to work. Not that I care – since installing Windows 8 I've maybe spent 10 minutes with Windows RT because – well, it's pretty freaking sucky on the desktop. From shitty apps to mis-managed screen real estate, I can't say that there's anything compelling there to date, but then I haven't looked that hard either. Anyway… I set up the Windows Live account to see if that changes things. It does – I get all my Live logins to work from the Live account, so Twitter and Facebook posts and pictures and calendars all show up on live tiles on the start screen and in the actual apps. That's nice-ish, but hardly that exciting given that all of the apps tied to those live tiles are average at best. And it would have been nice if all of this could be done without being forced into running with a Windows Live user account – this all feels like strong-arming you into Microsoft's walled garden… and that's probably what it's meant to do.

Who am I?

The real problem to me, though, is that these Windows Live and raw Windows user accounts are a bit unpredictable, especially when it comes to developer information about the account and which credentials to use. So for example Windows reports folder security like this: notice it's showing my Windows Live account. Now if I go to Edit and try to add my Windows user account (rstrahl), it'll just automatically show up as the Live account. On the other hand, the underlying system sees everything as my real Windows account. After I switched to a Windows Live login account, and have to log in to Windows with my Live account, what do you suppose this returns?

Console.WriteLine(Environment.UserName);

It returns my raw Windows user account (rstrahl). All my permissions, all my actual settings and the desktop console altogether run under that account. If I look in Task Manager (or Process Explorer for me) I see everything running on the desktop shell with my login running under my Windows user account. I suppose it makes sense, but where is that association happening? When I switched to a Windows Live account, nowhere did I associate my real account with the Live account – it just happened. And looking through the account configuration dialogs I can't find any reference to the raw Windows account. Other than switching back, I see no mention anywhere of the raw Windows account – everything refers to the Live account. Right then, clear as potato soup!

So this is who you really are!

The problem is that in some situations this schizophrenic account behavior gets a bit weird. Today I was running a local web application in IIS that uses Windows Authentication – I tried to log in with my real Windows account login because that's what I'm used to using with WINDOWS freaking Authentication through IIS. But… it failed. I checked my IIS settings and my app's login settings, and I just could not for the life of me get into the site with my Windows username.
    That is, until I finally realized that I should try using my Windows Live credentials instead. And that worked. So now in this Windows Authentication dialog I had to type in my Live ID and password, which is – just weird. Then in IIS, if I look at a trace page (or in my case my app's status page), I see that the logged-on account is – my Windows user account. What's really annoying about this is that in some places it uses the Live account and in other places it uses my Windows account. If I remote desktop into my web server online, I get the local authentication dialog, but I have to put in my real Windows credentials, not the Live account. Oh yes, it's all so terribly intuitive and logical…

So in summary, when you log on with a Live account you are actually mapped to an underlying Windows user. In any application, if you check the user name it'll be the underlying user account (not sure what happens in a Windows RT app, or even what mechanism is used there to get the user name info). When logging on to local machine resources with user name and password, you have to use your Live ID even if the permissions on the resources are mapped to your underlying Windows account. Easy enough I suppose, but still not exactly intuitive behavior…

© Rick Strahl, West Wind Technologies, 2005-2012
Posted in Windows
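If you want to see the split for yourself, here is a quick console sketch along the lines of the Environment.UserName call above (in an ASP.NET page you would look at User.Identity.Name as well; nothing here is specific to Rick's setup):

// Minimal sketch: print the account the process actually runs under.
// On a Live-linked Windows 8 login, both values show the underlying
// Windows account, not the Live ID typed at the logon screen.
using System;
using System.Security.Principal;

class WhoAmI
{
    static void Main()
    {
        // The environment's view of the logged-on user
        Console.WriteLine("Environment.UserName: {0}", Environment.UserName);

        // The security token the process is running under (DOMAIN\user form)
        WindowsIdentity identity = WindowsIdentity.GetCurrent();
        Console.WriteLine("WindowsIdentity.Name: {0}", identity.Name);
    }
}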

    Read the article

  • Wednesday at OpenWorld: Identity Management

    - by Tanu Sood
    Divide and conquer! Yes, divide and conquer today at Oracle OpenWorld with your colleagues to make the most of all things Identity Management, since there's a lot going on. Here's the line-up for today:

Wednesday, October 3, 2012

CON9458: End End-User-Managed Passwords and Increase Security with Oracle Enterprise Single Sign-On Plus
10:15 a.m. – 11:15 a.m., Moscone West 3008
Most customers have a broad variety of applications (internal, external, web, client server, host, etc.) and single sign-on systems that extend to some, but not all, systems. This session will focus on how customers are using enterprise single sign-on to extend single sign-on to virtually any application, without costly application modification, while laying a foundation that will enable integration with a broader identity management platform.

CON9494: Sun2Oracle: Identity Management Platform Transformation
11:45 a.m. – 12:45 p.m., Moscone West 3008
Sun customers are actively defining strategies for how they will modernize their identity deployments. Learn how customers like Avea and SuperValu are leveraging their Sun investment, evaluating areas of expansion/improvement and building momentum.

CON9631: Entitlement-centric Access to SOA and Cloud Services
11:45 a.m. – 12:45 p.m., Marriott Marquis, Salon 7
How do you enforce that a junior trader can submit 10 trades/day, with a total value of $5M, if market volatility is low? How can you hide sensitive patient information from clerical workers, but make it visible to specialists as long as consent has been given or there is an emergency? In this session, Uberether and HerbaLife take the stage with Oracle to demonstrate how you can enforce such entitlements on a service, not just within your intranet but also right at the perimeter.

CON3957: Delivering Secure Wi-Fi on the Tube as an Olympics Legacy from London 2012
11:45 a.m. – 12:45 p.m., Moscone West 3003
In this session, Virgin Media, the U.K.'s first combined provider of broadband, TV, mobile, and home phone services, shares how it is providing free secure Wi-Fi services to the London Underground, using Oracle Virtual Directory and Oracle Entitlements Server, leveraging back-end legacy systems that were never designed to be externalized. As an Olympics 2012 legacy, the Oracle architecture will form a platform to be consumed by other Virgin Media services such as video on demand.

CON9493: Identity Management and the Cloud
1:15 p.m. – 2:15 p.m., Moscone West 3008
Security is the number one barrier to cloud service adoption. Not so for industry-leading companies like SaskTel, ConAgra Foods and UPMC. This session will explore how these organizations are using Oracle Identity with cloud services, and how some are offering identity management as a cloud service.

CON9624: Real-Time External Authorization for Middleware, Applications, and Databases
3:30 p.m. – 4:30 p.m., Moscone West 3008
As organizations seek to grant access to broader and more diverse user populations, centrally defined and applied authorization policies become critical, both to identify who has access to what and to improve the end-user experience. This session will explore how customers are using attribute- and role-based access to achieve these goals.

CON9625: Taking Control of WebCenter Security
5:00 p.m. – 6:00 p.m., Moscone West 3008
Many organizations are extending WebCenter in a business-to-business scenario requiring secure identification and authorization of business partners and their users.
    Leveraging LADWP's use case, this session will focus on how customers are securing and providing access control to Oracle WebCenter portal and mobile solutions.

EVENTS:

Identity Management Customer Advisory Board
2:30 p.m. – 3:30 p.m., Four Seasons – Yerba Buena Room
This invitation-only event is designed exclusively for Customer Advisory Board (CAB) members to provide product strategy and roadmap updates.

Identity Management Meet & Greet Networking Event
3:30 p.m. – 4:30 p.m., Meeting Session
4:30 p.m. – 5:30 p.m., Cocktail Reception
Yerba Buena Room, Four Seasons Hotel, 757 Market Street, San Francisco
The CAB meeting will be immediately followed by an open Meet & Greet event hosted by Oracle Identity Management executives and the product management team. Do take this opportunity to network with your peers and connect with other Identity Management customers.

For a complete listing, refer to the Focus on Identity Management document. And as always, you can find us at @oracleidm on Twitter and Facebook. Use #oow and #idm to join in the conversation.

    Read the article

  • Deploying Data Mining Models using Model Export and Import, Part 2

    - by [email protected]
    In my last post, Deploying Data Mining Models using Model Export and Import, we explored using DBMS_DATA_MINING.EXPORT_MODEL and DBMS_DATA_MINING.IMPORT_MODEL to enable moving a model from one system to another. In this post, we'll look at two distributed scenarios that make use of this capability, and a tip for easily moving models from one machine to another using only Oracle Database – not an external file transport mechanism such as FTP.

In the first scenario, consider a company with geographically distributed business units, each collecting and managing their data locally for the products they sell. Each business unit has in-house data analysts who build models to predict which products to recommend to customers in their space. A central telemarketing business unit also uses these models to score new customers locally, using data collected over the phone. Since the models recommend different products, each customer is scored using each model. This is depicted in Figure 1.

Figure 1: Target instance importing multiple remote models for local scoring

In the second scenario, consider multiple hospitals that collect data on patients with certain types of cancer. The data collection is standardized, so each hospital collects the same patient demographic and other health / tumor data, along with the clinical diagnosis. Instead of each hospital building its own models, the data is pooled at a central data analysis lab where a predictive model is built. Once completed, the model is distributed to hospitals, clinics, and doctors' offices, which can score patient data locally.

Figure 2: Multiple target instances importing the same model from a source instance for local scoring

Since this blog focuses on model export and import, we'll only discuss what is necessary to move a model from one database to another. Here, we use the package DBMS_FILE_TRANSFER, which can move files between Oracle databases. The script is fairly straightforward, but requires setting up a database link and directory objects. We saw how to create directory objects in the previous post. To create a database link to the source database from the target, we can use, for example:

create database link SOURCE1_LINK connect to <schema> identified by <password> using 'SOURCE1';

Note that 'SOURCE1' refers to the service name of the remote database entry in your tnsnames.ora file. From SQL*Plus, first connect to the remote database and export the model. Note that the model_file_name does not include the .dmp extension; this is because export_model appends "01" to this name. Next, connect to the local database, invoke DBMS_FILE_TRANSFER.GET_FILE and import the model. Note that the "01" is eliminated in the target system file name.

connect <source_schema>/<password>@SOURCE1_LINK;
BEGIN
  DBMS_DATA_MINING.EXPORT_MODEL ('EXPORT_FILE_NAME' || '.dmp',
                                 'MY_SOURCE_DIR_OBJECT',
                                 'name =''MY_MINING_MODEL''');
END;

connect <target_schema>/<password>;
BEGIN
  DBMS_FILE_TRANSFER.GET_FILE ('MY_SOURCE_DIR_OBJECT',
                               'EXPORT_FILE_NAME' || '01.dmp',
                               'SOURCE1_LINK',
                               'MY_TARGET_DIR_OBJECT',
                               'EXPORT_FILE_NAME' || '.dmp' );
  DBMS_DATA_MINING.IMPORT_MODEL ('EXPORT_FILE_NAME' || '.dmp',
                                 'MY_TARGET_DIR_OBJECT');
END;

To clean up afterward, you may want to drop the exported .dmp file at the source and the transferred file at the target.
For example, utl_file.fremove('&directory_name', '&model_file_name' || '.dmp');

    Read the article

  • Windows Azure Recipe: Enterprise LOBs

    - by Clint Edmonson
    Enterprises are more and more dependent on their specialized internal Line of Business (LOB) applications than ever before. Naturally, the more software they leverage on-premises, the more infrastructure they need to manage. It's frequently the case that our customers simply can't scale up their hardware purchases and operational staff as fast as internal demand for software requires. The result is that getting new or enhanced applications into the hands of business users becomes slower and more expensive every day. Being able to quickly deliver applications in a rapidly changing business environment, while maintaining high standards of corporate security, is a challenge that can be met right now by moving enterprise LOBs out into the cloud and leveraging Azure's Access Control services. In fact, we're seeing many of our customers (both large and small) see huge benefits from moving their web-based business applications such as corporate help desks, expense tracking, travel portals, timesheets, and more to Windows Azure.

Drivers
- Cost reduction
- Time to market
- Security

Solution

Here's a sketch of how many Windows Azure Enterprise LOBs are being architected and deployed:

Ingredients

Web Role – this will host the core of the application. Each web role is a virtual machine hosting an application written in ASP.NET (or optionally php or node.js). The number of web roles can be scaled up or down as needed to handle peak and non-peak traffic loads. Many Java-based applications are also being deployed to Windows Azure with a little more effort.

Database – every modern web application needs to store data. SQL Azure databases look and act exactly like their on-premise siblings, but are fault tolerant and have data redundancy built in. (A small code sketch at the end of this post shows the web role and database ingredients together.)

Access Control – this service is necessary to establish federated identity between the cloud-hosted application and an enterprise's corporate network. It works in conjunction with a secure token service (STS) that is hosted on-premises to establish the corporate user's identity and credentials. The source code for an on-premises STS is provided in the Windows Azure training kit and merely needs to be customized for the corporate environment and published on a publicly accessible corporate web site. Once set up, corporate users see a near-seamless single sign-on experience.

Reporting – businesses live and die by their reports, and SQL Azure Reporting, based on SQL Server Reporting Services 2008 R2, can serve up reports with tables, charts, maps, gauges, and more. These reports can be accessed from the Windows Azure Portal, through a web browser, or directly from applications.

Service Bus (optional) – if deep integration with other applications and systems is needed, the service bus is the answer. It enables secure service-layer communication between applications hosted behind firewalls in on-premises or partner datacenters and applications hosted inside Windows Azure. The Service Bus provides the ability to securely expose just the information and services that are necessary, creating a simpler, more secure architecture than opening up a full-blown VPN.

Data Sync (optional) – in cases where the data stored in the cloud needs to be shared internally, establishing a secure one-way or two-way data-sync connection between the on-premises and off-premises databases is a perfect option.
It can be very granular, allowing us to specify exactly which tables and columns to synchronize, set up filters to sync only a subset of rows, set the conflict resolution policy for two-way sync, and specify how frequently data should be synchronized.

Training Labs

These links point to online Windows Azure training labs where you can learn more about the individual ingredients described above. (Note: The entire Windows Azure Training Kit can also be downloaded for offline use.)

Windows Azure (16 labs) – Windows Azure is an internet-scale cloud computing and services platform hosted in Microsoft data centers, which provides an operating system and a set of developer services that can be used individually or together. It gives developers the choice to build web applications; applications running on connected devices, PCs, or servers; or hybrid solutions offering the best of both worlds. New or enhanced applications can be built using existing skills with the Visual Studio development environment and the .NET Framework. With its standards-based and interoperable approach, the services platform supports multiple internet protocols, including HTTP, REST, SOAP, and plain XML.

SQL Azure (7 labs) – Microsoft SQL Azure delivers on the Microsoft Data Platform vision of extending SQL Server capabilities to the cloud as web-based services, enabling you to store structured, semi-structured, and unstructured data.

Windows Azure Services (9 labs) – As applications collaborate across organizational boundaries, ensuring secure transactions across disparate security domains is crucial but difficult to implement. Windows Azure Services provides hosted authentication and access control using powerful, secure, standards-based infrastructure.

See my Windows Azure Resource Guide for more guidance on how to get started, including links to web portals, training kits, samples, and blogs related to Windows Azure.
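To make the web role and database ingredients concrete, here is a minimal sketch of a web role class pulling its SQL Azure connection string from the service configuration (so it can be changed without redeploying) and running a query. The "DatabaseConnectionString" setting name and the Orders table are made up for illustration:

// Minimal sketch: read a connection string from the role configuration
// (.cscfg) and query a SQL Azure database from inside a web role.
// "DatabaseConnectionString" and the Orders table are hypothetical.
using System;
using System.Data.SqlClient;
using Microsoft.WindowsAzure.ServiceRuntime;

public class OrdersRepository
{
    public int CountOrders()
    {
        // Settings live in the .cscfg, not web.config, so they can be
        // edited in the management portal without a redeploy
        String connectionString =
            RoleEnvironment.GetConfigurationSettingValue("DatabaseConnectionString");

        using (SqlConnection connection = new SqlConnection(connectionString))
        using (SqlCommand command = new SqlCommand("SELECT COUNT(*) FROM Orders", connection))
        {
            connection.Open();
            return (int)command.ExecuteScalar();
        }
    }
}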

    Read the article

  • Building a better mouse-trap – Improving the creation of XML Message Requests using Reflection, XML & XSLT

    - by paulschapman
    Introduction

The way I previously created messages to send to the GovTalk service was to build the request with an XmlDocument. While this worked, it left a number of problems – not least that a special function would need to be created for every message. That is OK in the short term, but the biggest cost in any software project is maintenance, and this approach would be a headache to maintain. So the following is a somewhat better way of achieving the same thing. For the purposes of this article I am going to use the CompanyNumberSearch request of the GovTalk service – although this technique would work for any service that accepts XML. The C# functions which send and receive the messages remain the same. The magic sauce in this is the XSLT which defines the structure of the request, and the use of objects in conjunction with reflection to provide the content. It is a bit like sweet chilli sauce added to chicken on a bed of rice. So on to the Sweet Chilli Sauce.

The Sweet Chilli Sauce

The request to search for a company based on its number is as follows;

<GovTalkMessage xsi:schemaLocation="http://www.govtalk.gov.uk/CM/envelope http://xmlgw.companieshouse.gov.uk/v1-0/schema/Egov_ch-v2-0.xsd" xmlns="http://www.govtalk.gov.uk/CM/envelope" xmlns:dsig="http://www.w3.org/2000/09/xmldsig#" xmlns:gt="http://www.govtalk.gov.uk/schemas/govtalk/core" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" >
  <EnvelopeVersion>1.0</EnvelopeVersion>
  <Header>
    <MessageDetails>
      <Class>NumberSearch</Class>
      <Qualifier>request</Qualifier>
      <TransactionID>1</TransactionID>
    </MessageDetails>
    <SenderDetails>
      <IDAuthentication>
        <SenderID>????????????????????????????????</SenderID>
        <Authentication>
          <Method>CHMD5</Method>
          <Value>????????????????????????????????</Value>
        </Authentication>
      </IDAuthentication>
    </SenderDetails>
  </Header>
  <GovTalkDetails>
    <Keys/>
  </GovTalkDetails>
  <Body>
    <NumberSearchRequest xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:noNamespaceSchemaLocation="http://xmlgw.companieshouse.gov.uk/v1-0/schema/NumberSearch.xsd">
      <PartialCompanyNumber>99999999</PartialCompanyNumber>
      <DataSet>LIVE</DataSet>
      <SearchRows>1</SearchRows>
    </NumberSearchRequest>
  </Body>
</GovTalkMessage>

This is the XML that we send to the GovTalk service, and we get back a list of companies that match the criteria passed. A message is structured in two parts: the envelope, which identifies the person sending the request along with the name of the request, and the body, which gives the details of the company we are looking for.

The Chilli

What makes it possible is the use of XSLT to define the message – and serialization to convert each request object into XML. To start we need to create an object which will represent the contents of the message we are sending. There are, however, common properties in all the messages that we send to Companies House. These properties are as follows:

SenderId – the id of the person sending the message
SenderPassword – the password associated with the Id
TransactionId – unique identifier for the message
AuthenticationValue – authenticates the request

Because these properties are unique to the Companies House messages, and because they are shared by all of them, they are perfect candidates for a base class.
The class is as follows;

using System;
using System.Collections.Generic;
using System.Linq;
using System.Web;
using System.Security.Cryptography;
using System.Text;
using System.Text.RegularExpressions;
using Microsoft.WindowsAzure.ServiceRuntime;

namespace CompanyHub.Services
{
    public class GovTalkRequest
    {
        public GovTalkRequest()
        {
            try
            {
                SenderID = RoleEnvironment.GetConfigurationSettingValue("SenderId");
                SenderPassword = RoleEnvironment.GetConfigurationSettingValue("SenderPassword");
                TransactionId = DateTime.Now.Ticks.ToString();
                AuthenticationValue = EncodePassword(String.Format("{0}{1}{2}", SenderID, SenderPassword, TransactionId));
            }
            catch (System.Exception)
            {
                throw;
            }
        }

        /// <summary>
        /// returns the Sender ID to be used when communicating with the GovTalk Service
        /// </summary>
        public String SenderID { get; set; }

        /// <summary>
        /// returns the password to be used when communicating with the GovTalk Service
        /// </summary>
        public String SenderPassword { get; set; } // end SenderPassword

        /// <summary>
        /// Transaction Id - uses the Time and Date converted to Ticks
        /// </summary>
        public String TransactionId { get; set; } // end TransactionId

        /// <summary>
        /// calculate the authentication value that will be used when
        /// communicating with the service
        /// </summary>
        public String AuthenticationValue { get; set; } // end AuthenticationValue property

        /// <summary>
        /// encodes password(s) using MD5
        /// </summary>
        /// <param name="clearPassword"></param>
        /// <returns></returns>
        public static String EncodePassword(String clearPassword)
        {
            MD5CryptoServiceProvider md5Hasher = new MD5CryptoServiceProvider();
            byte[] hashedBytes;
            hashedBytes = md5Hasher.ComputeHash(ASCIIEncoding.Default.GetBytes(clearPassword));
            String result = Regex.Replace(BitConverter.ToString(hashedBytes), "-", "").ToLower();
            return result;
        }
    }
}

There is nothing particularly clever here, except for the EncodePassword method, which hashes the value made up of the SenderId, password and transaction id. Each message inherits from this object. So for the company number search, in addition to the properties above we need a partial company number, which dataset to search (for the purposes of the project we only need to search the LIVE set, so this is set in the constructor) and the SearchRows. Again, all are exposed as properties, with SearchRows and DataSet initialized in the constructor.

public class CompanyNumberSearchRequest : GovTalkRequest, IDisposable
{
    /// <summary>
    /// </summary>
    public CompanyNumberSearchRequest() : base()
    {
        DataSet = "LIVE";
        SearchRows = 1;
    }

    /// <summary>
    /// Company Number to search against
    /// </summary>
    public String PartialCompanyNumber { get; set; }

    /// <summary>
    /// What DataSet should be searched for the company
    /// </summary>
    public String DataSet { get; set; }

    /// <summary>
    /// How many rows should be returned
    /// </summary>
    public int SearchRows { get; set; }

    public void Dispose()
    {
        PartialCompanyNumber = String.Empty;
        DataSet = "LIVE";
        SearchRows = 1;
    }
}

As well as inheriting from our base class, I have also inherited from IDisposable – not just because it is plain good practice to dispose of objects, but because it also gives us more versatility when using the object, as the sketch below shows.
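To make that concrete, here is a minimal sketch of how the request object is meant to be consumed (the '01234567' company number is made up, and Toolbox.CreateRequest is the helper defined later in this post):

// A hypothetical caller: the using block guarantees Dispose() resets the
// object's state, while the base constructor has already filled in
// SenderID, TransactionId and the hashed AuthenticationValue.
using (CompanyNumberSearchRequest searchRequest = new CompanyNumberSearchRequest())
{
    searchRequest.PartialCompanyNumber = "01234567"; // made-up number

    // Serialize the object and run it through the CompanyNumberSearch.xsl
    // template to produce the full GovTalk envelope (see CreateRequest below)
    String request = Toolbox.CreateRequest(searchRequest, "CompanyNumberSearch.xsl");
}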
There are four stages in making a request, and this is reflected in the four methods we execute in making a call to the Companies House service: create a request; send the request; check the status; and, if the status is OK, get the results of the request. I've implemented each of these stages within a static class called Toolbox – which also means I don't need to create an instance of the class to use it. When creating a request there are three stages: get the template for the message; serialize the object representing the message; transform the serialized object using a predefined XSLT file.

Each of my templates is defined as an embedded resource. When retrieving a resource of this kind we have to include the full namespace of the resource. To make the code as reusable as possible I build the full 'path' within the GetRequest method.

requestFile = String.Format("CompanyHub.Services.Schemas.{0}", RequestFile);

So we now have the full path of the file within the assembly. Now all we need do is retrieve the assembly and get the resource.

asm = Assembly.GetExecutingAssembly();
sr = asm.GetManifestResourceStream(requestFile);

Once retrieved, the stream can be returned to the calling function, and we now have a stream of XSLT to define the message. Time now to serialize the request to create the other side of this message.

// Serialize object containing Request, load into XML Document
t = Obj.GetType();
ms = new MemoryStream();
serializer = new XmlSerializer(t);
xmlTextWriter = new XmlTextWriter(ms, Encoding.ASCII);
serializer.Serialize(xmlTextWriter, Obj);
ms = (MemoryStream)xmlTextWriter.BaseStream;
GovTalkRequest = Toolbox.ConvertByteArrayToString(ms.ToArray());

First off we need the type of the object containing the message properties, so we call its GetType method. Next we need a MemoryStream, an XmlSerializer and an XmlTextWriter, so these are initialized. The object is serialized by calling the Serialize method of the serializer object; the result lands in the MemoryStream, which is then converted into a string.

ConvertByteArrayToString

This is a fairly simple function which uses an ASCIIEncoding object, found within the System.Text namespace, to convert an array of bytes into a string.

public static String ConvertByteArrayToString(byte[] bytes)
{
    System.Text.ASCIIEncoding enc = new System.Text.ASCIIEncoding();
    return enc.GetString(bytes);
}

I only put it into a function because I will be using it in various places.

The Sauce

When adding support for other messages, outside of creating a new object to store the properties of the message, the C# components do not need to change. It is in the XSLT file that the versatility of the technique lies. The XSLT file determines the format of the message.
For the CompanyNumberSearch the XSLT file is as follows;

<?xml version="1.0"?>
<xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
  <xsl:template match="/">
    <GovTalkMessage xsi:schemaLocation="http://www.govtalk.gov.uk/CM/envelope http://xmlgw.companieshouse.gov.uk/v1-0/schema/Egov_ch-v2-0.xsd" xmlns="http://www.govtalk.gov.uk/CM/envelope" xmlns:dsig="http://www.w3.org/2000/09/xmldsig#" xmlns:gt="http://www.govtalk.gov.uk/schemas/govtalk/core" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" >
      <EnvelopeVersion>1.0</EnvelopeVersion>
      <Header>
        <MessageDetails>
          <Class>NumberSearch</Class>
          <Qualifier>request</Qualifier>
          <TransactionID>
            <xsl:value-of select="CompanyNumberSearchRequest/TransactionId"/>
          </TransactionID>
        </MessageDetails>
        <SenderDetails>
          <IDAuthentication>
            <SenderID><xsl:value-of select="CompanyNumberSearchRequest/SenderID"/></SenderID>
            <Authentication>
              <Method>CHMD5</Method>
              <Value>
                <xsl:value-of select="CompanyNumberSearchRequest/AuthenticationValue"/>
              </Value>
            </Authentication>
          </IDAuthentication>
        </SenderDetails>
      </Header>
      <GovTalkDetails>
        <Keys/>
      </GovTalkDetails>
      <Body>
        <NumberSearchRequest xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:noNamespaceSchemaLocation="http://xmlgw.companieshouse.gov.uk/v1-0/schema/NumberSearch.xsd">
          <PartialCompanyNumber>
            <xsl:value-of select="CompanyNumberSearchRequest/PartialCompanyNumber"/>
          </PartialCompanyNumber>
          <DataSet>
            <xsl:value-of select="CompanyNumberSearchRequest/DataSet"/>
          </DataSet>
          <SearchRows>
            <xsl:value-of select="CompanyNumberSearchRequest/SearchRows"/>
          </SearchRows>
        </NumberSearchRequest>
      </Body>
    </GovTalkMessage>
  </xsl:template>
</xsl:stylesheet>

The outer two tags define that this is an XSLT stylesheet and give the root tag from which the nodes are searched for. The GovTalkMessage is the format of the message that will be sent to Companies House. We first set up the XslCompiledTransform object, which will transform the XSLT template and the serialized object into the request to Companies House.

xslt = new XslCompiledTransform();
resultStream = new MemoryStream();
writer = new XmlTextWriter(resultStream, Encoding.ASCII);
doc = new XmlDocument();

The transform requires an XmlTextWriter to write the XML (writer) and a stream to place the transformed output into (resultStream). The serialized XML will be loaded into an XmlDocument object (doc) prior to the transformation.

// create XSLT Template
xslTemplate = Toolbox.GetRequest(Template);
xslTemplate.Seek(0, SeekOrigin.Begin);
templateReader = XmlReader.Create(xslTemplate);
xslt.Load(templateReader);

I have stored all the templates as a series of embedded resources, and the GetRequest call takes the name of the template and extracts the relevant XSLT file.

/// <summary>
/// Gets the framework XML which makes the request
/// </summary>
/// <param name="RequestFile"></param>
/// <returns></returns>
public static Stream GetRequest(String RequestFile)
{
    String requestFile = String.Empty;
    Stream sr = null;
    Assembly asm = null;
    try
    {
        requestFile = String.Format("CompanyHub.Services.Schemas.{0}", RequestFile);
        asm = Assembly.GetExecutingAssembly();
        sr = asm.GetManifestResourceStream(requestFile);
    }
    catch (Exception)
    {
        throw;
    }
    finally
    {
        asm = null;
    }
    return sr;
} // end private static stream GetRequest

We first take the template name and expand it to include the full namespace of the embedded resource. I like to keep all my schemas in the same directory, and so the namespace reflects this; the rest is the default namespace for the project.
Then we get the currently executing assembly, which contains the resources, with the call to GetExecutingAssembly(). Finally we get a stream which contains the XSLT file. This is returned to the calling function; we then load an XmlReader with the contents of the template, and that is in turn loaded into the XslCompiledTransform object.

We convert the object containing the message properties into XML by serializing it – calling the Serialize() method of the XmlSerializer object. To set up the serializer we do the following;

t = Obj.GetType();
ms = new MemoryStream();
serializer = new XmlSerializer(t);
xmlTextWriter = new XmlTextWriter(ms, Encoding.ASCII);

We first determine the type of the object being serialized by calling GetType(). We create an XmlSerializer object by passing it that type. The serializer writes to a memory stream, which is linked to an XmlTextWriter. The next job is to serialize the object and load it into an XmlDocument.

serializer.Serialize(xmlTextWriter, Obj);
ms = (MemoryStream)xmlTextWriter.BaseStream;
xmlRequest = new XmlTextReader(ms);
GovTalkRequest = Toolbox.ConvertByteArrayToString(ms.ToArray());
doc.LoadXml(GovTalkRequest);

Time to transform the XML to construct the full request.

xslt.Transform(doc, writer);
resultStream.Seek(0, SeekOrigin.Begin);
request = Toolbox.ConvertByteArrayToString(resultStream.ToArray());

So that creates the full request to be sent to Companies House.

Sending the request

So far we have a string containing the request for the Companies House service; now we need to send it.

Configuration within an Azure project

There are entire blog entries written about configuration within an Azure project – most of this is out of scope for this article, but the following is a summary. Configuration is defined in two files within the parent project. The *.csdef file contains the definitions of the configuration settings.

<?xml version="1.0" encoding="utf-8"?>
<ServiceDefinition name="OnlineCompanyHub" xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceDefinition">
  <WebRole name="CompanyHub.Host">
    <InputEndpoints>
      <InputEndpoint name="HttpIn" protocol="http" port="80" />
    </InputEndpoints>
    <ConfigurationSettings>
      <Setting name="DiagnosticsConnectionString" />
      <Setting name="DataConnectionString" />
    </ConfigurationSettings>
  </WebRole>
  <WebRole name="CompanyHub.Services">
    <InputEndpoints>
      <InputEndpoint name="HttpIn" protocol="http" port="8080" />
    </InputEndpoints>
    <ConfigurationSettings>
      <Setting name="DiagnosticsConnectionString" />
      <Setting name="SenderId"/>
      <Setting name="SenderPassword" />
      <Setting name="GovTalkUrl"/>
    </ConfigurationSettings>
  </WebRole>
  <WorkerRole name="CompanyHub.Worker">
    <ConfigurationSettings>
      <Setting name="DiagnosticsConnectionString" />
    </ConfigurationSettings>
  </WorkerRole>
</ServiceDefinition>

Above is the configuration definition from the project. What we are interested in, however, is the ConfigurationSettings tag of the CompanyHub.Services WebRole.
There are four configuration settings here, but at the moment we are interested in the second to fourth settings: SenderId, SenderPassword and GovTalkUrl. The values of these settings are defined in the ServiceConfiguration.cscfg file;

<?xml version="1.0"?>
<ServiceConfiguration serviceName="OnlineCompanyHub" xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceConfiguration">
  <Role name="CompanyHub.Host">
    <Instances count="2" />
    <ConfigurationSettings>
      <Setting name="DiagnosticsConnectionString" value="UseDevelopmentStorage=true" />
      <Setting name="DataConnectionString" value="UseDevelopmentStorage=true" />
    </ConfigurationSettings>
  </Role>
  <Role name="CompanyHub.Services">
    <Instances count="2" />
    <ConfigurationSettings>
      <Setting name="DiagnosticsConnectionString" value="UseDevelopmentStorage=true" />
      <Setting name="SenderId" value="UserID"/>
      <Setting name="SenderPassword" value="Password"/>
      <Setting name="GovTalkUrl" value="http://xmlgw.companieshouse.gov.uk/v1-0/xmlgw/Gateway"/>
    </ConfigurationSettings>
  </Role>
  <Role name="CompanyHub.Worker">
    <Instances count="2" />
    <ConfigurationSettings>
      <Setting name="DiagnosticsConnectionString" value="UseDevelopmentStorage=true" />
    </ConfigurationSettings>
  </Role>
</ServiceConfiguration>

Look for the Role tag that contains our project name (CompanyHub.Services). Having configured the parameters, we can now transmit the request. This is done by POSTing a stream of XML to the Companies House servers.

govTalkUrl = RoleEnvironment.GetConfigurationSettingValue("GovTalkUrl");
request = WebRequest.Create(govTalkUrl);
request.Method = "POST";
request.ContentType = "text/xml";
writer = new StreamWriter(request.GetRequestStream());
writer.WriteLine(RequestMessage);
writer.Close();

We use a WebRequest object to send the request: set the method of sending to 'POST' and the content type to text/xml. Once set up, all we do is write the request to the writer – this sends the request to Companies House.

Did the Request Work Part I – Getting the response

Having sent a request, we now need the result of that request.

response = request.GetResponse();
reader = response.GetResponseStream();
result = Toolbox.ConvertByteArrayToString(Toolbox.ReadFully(reader));

The WebRequest object has a GetResponse() method which allows us to get the response sent back. Like many of these calls, the results come in the form of a stream, which we convert into a string (ReadFully is a small helper that drains the stream into a byte array – a sketch of it appears at the end of this post).

Did the Request Work Part II – Translating the Response

Much as XSLT and XML were used to create the original request, they can be used to extract the response: by deserializing the result we create an object that contains the response.

Did it work?

It would be really great if everything worked all the time.
Of course, if it did then I don't suppose people would pay me and others the big bucks so that our programmes do not:
a) Collapse in a heap (this is an area of memory)
b) Blow every fuse in the place in a shower of sparks (this will probably not happen, this being real life and not a Hollywood movie, but it was possible to blow the sound system of a BBC Model B with a poorly coded setting)
c) Go nuts and trap everyone outside the airlock (this was from a movie, and unless NASA gets a manned moon/Mars mission set up, unlikely to happen)
d) Go nuts and take over the world (this was also from a movie – but note, writers: Hollywood executives have no imagination and, judging by the recent output of that town, have turned plagiarism into an art form)
e) Freeze in total confusion because the cleaner pulled the plug on the internet router (this has happened)

So anyway – we need to check whether our request actually worked. Within the GovTalk response there is a section that details the status of the message and a description of what went wrong (if anything did). I have defined an XSLT template which will extract these into an XML document.

<?xml version="1.0"?>
<xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform" xmlns:ev="http://www.govtalk.gov.uk/CM/envelope" xmlns:gt="http://www.govtalk.gov.uk/schemas/govtalk/core" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
  <xsl:template match="/">
    <GovTalkStatus xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:xsd="http://www.w3.org/2001/XMLSchema">
      <Status>
        <xsl:value-of select="ev:GovTalkMessage/ev:Header/ev:MessageDetails/ev:Qualifier"/>
      </Status>
      <Text>
        <xsl:value-of select="ev:GovTalkMessage/ev:GovTalkDetails/ev:GovTalkErrors/ev:Error/ev:Text"/>
      </Text>
      <Location>
        <xsl:value-of select="ev:GovTalkMessage/ev:GovTalkDetails/ev:GovTalkErrors/ev:Error/ev:Location"/>
      </Location>
      <Number>
        <xsl:value-of select="ev:GovTalkMessage/ev:GovTalkDetails/ev:GovTalkErrors/ev:Error/ev:Number"/>
      </Number>
      <Type>
        <xsl:value-of select="ev:GovTalkMessage/ev:GovTalkDetails/ev:GovTalkErrors/ev:Error/ev:Type"/>
      </Type>
    </GovTalkStatus>
  </xsl:template>
</xsl:stylesheet>

The only thing different from the previous XSL files is the reference to two namespaces, ev & gt. These are declared at the top of the GovTalk response;

xsi:schemaLocation="http://www.govtalk.gov.uk/CM/envelope http://xmlgw.companieshouse.gov.uk/v1-0/schema/Egov_ch-v2-0.xsd" xmlns="http://www.govtalk.gov.uk/CM/envelope" xmlns:dsig="http://www.w3.org/2000/09/xmldsig#" xmlns:gt="http://www.govtalk.gov.uk/schemas/govtalk/core"

If we do not put these references into the XSLT template, the XslCompiledTransform object will not be able to find the relevant tags. Deserialization is a fairly simple activity.

encoder = new ASCIIEncoding();
ms = new MemoryStream(encoder.GetBytes(statusXML));
serializer = new XmlSerializer(typeof(GovTalkStatus));
xmlTextWriter = new XmlTextWriter(ms, Encoding.ASCII);
messageStatus = (GovTalkStatus)serializer.Deserialize(ms);

We set up a serialization object using the type that holds the error state and pass to it the result of a transformation between the XSLT above and the GovTalk response. Now we have an object containing any error state and the error message; all we need to do is check the status. If there is an error, then we can flag it.
Now we have an object containing any error state and the error message. All we need to do is check the status: if there is an error, then we can flag an error; if not, then we extract the results and pass them as an object back to the calling function. We do this by, you guessed it, defining an XSLT template for the result and using it to create an XML stream which can be deserialized into a .Net object. In this instance, the XSLT to create the result of a Company Number Search is:

<?xml version="1.0" encoding="us-ascii"?>
<xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
                xmlns:ev="http://www.govtalk.gov.uk/CM/envelope"
                xmlns:sch="http://xmlgw.companieshouse.gov.uk/v1-0/schema"
                exclude-result-prefixes="ev">
  <xsl:template match="/">
    <CompanySearchResult xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:xsd="http://www.w3.org/2001/XMLSchema">
      <CompanyNumber><xsl:value-of select="ev:GovTalkMessage/ev:Body/sch:NumberSearch/sch:CoSearchItem/sch:CompanyNumber"/></CompanyNumber>
      <CompanyName><xsl:value-of select="ev:GovTalkMessage/ev:Body/sch:NumberSearch/sch:CoSearchItem/sch:CompanyName"/></CompanyName>
    </CompanySearchResult>
  </xsl:template>
</xsl:stylesheet>

and the object definition is:

using System;

namespace CompanyHub.Services
{
    public class CompanySearchResult
    {
        public CompanySearchResult()
        {
            CompanyNumber = String.Empty;
            CompanyName = String.Empty;
        }

        public String CompanyNumber { get; set; }
        public String CompanyName { get; set; }
    }
}

Our entire code to send a request and interpret the results is:

String request = String.Empty;
String response = String.Empty;
GovTalkStatus status = null;
fault = null;
try
{
    using (CompanyNumberSearchRequest requestObj = new CompanyNumberSearchRequest())
    {
        requestObj.PartialCompanyNumber = CompanyNumber;
        request = Toolbox.CreateRequest(requestObj, "CompanyNumberSearch.xsl");
        response = Toolbox.SendGovTalkRequest(request);
        status = Toolbox.GetMessageStatus(response);
        if (status.Status.ToLower() == "error")
        {
            fault = new HubFault() { Message = status.Text };
        }
        else
        {
            Object obj = Toolbox.GetGovTalkResponse(response, "CompanyNumberSearchResult.xsl", typeof(CompanySearchResult));
        }
    }
}
catch (FaultException<ArgumentException> ex)
{
    fault = new HubFault() { FaultType = ex.Detail.GetType().FullName, Message = ex.Detail.Message };
}
catch (System.Exception ex)
{
    fault = new HubFault() { FaultType = ex.GetType().FullName, Message = ex.Message };
}
finally
{
}

Wrap up

So there we have it – a reusable set of functions to send and interpret XML results from an internet based service. The code is reusable, with a little change, with any service which uses XML as a transport mechanism; as for the Companies House GovTalk service, all I need to do is create the various objects for the results and messages sent, and the relevant XSLT files. I might need minor changes for other services, but something like 70-90% will be exactly the same.
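Of the Toolbox functions used in the calling code, GetGovTalkResponse is the one that ties the transform and the deserialization together. Its implementation is not shown in the post; a plausible sketch, with the signature taken from the call above and the body an assumption, is:

using System;
using System.IO;
using System.Text;
using System.Xml;
using System.Xml.Serialization;
using System.Xml.Xsl;

public static partial class Toolbox
{
    // Sketch only - transform the GovTalk response with the given stylesheet,
    // then deserialize the transformed XML into the requested type.
    public static Object GetGovTalkResponse(string response, string xsltFile, Type resultType)
    {
        var transform = new XslCompiledTransform();
        transform.Load(xsltFile);

        string resultXml;
        using (var input = XmlReader.Create(new StringReader(response)))
        using (var output = new StringWriter())
        {
            transform.Transform(input, null, output);
            resultXml = output.ToString();
        }

        var serializer = new XmlSerializer(resultType);
        using (var ms = new MemoryStream(Encoding.ASCII.GetBytes(resultXml)))
        {
            return serializer.Deserialize(ms);
        }
    }
}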

    Read the article

  • Limiting Audit Exposure and Managing Risk – Q&A and Follow-Up Conversation

    - by Tanu Sood
Thanks to all who attended the live ISACA webcast on Limiting Audit Exposure and Managing Risk with Metrics-Driven Identity Analytics. We were really fortunate to have Don Sparks from ISACA moderate the webcast featuring Stuart Lincoln, Vice President, IT P&L Client Services, BNP Paribas, North America and Neil Gandhi, Principal Product Manager, Oracle Identity Analytics. Stuart’s insights, given his team’s role in providing IT for P&L Client Services and his tremendous experience in identity management and establishing sustainable compliance programs, were a true value-add at yesterday’s webcast. And if you are a healthcare organization looking to solve your compliance and security challenges, we recommend you join us for a live webcast on Tuesday, November 29 at 10 am PT. The webcast will feature experts from Kaiser Permanente, PricewaterhouseCoopers and Oracle, and the focus of the discussion will be around the compliance challenges a healthcare organization faces and best practices for tackling those. Here are the details:

Healthcare IT News Webcast: Managing Risk and Enforcing Compliance in Healthcare with Identity Analytics
Tuesday, November 29, 2011
10:00 a.m. PT / 1:00 p.m. ET
Register Today

The ISACA webcast replay is now available on-demand and the slides are also available for download. Since we didn’t have time to address all the questions we received during the live Q&A portion of the webcast, we have captured responses to the remaining questions here. Please continue to provide us your feedback and insights from your experience in deploying identity compliance solutions.

Q. Can you please clarify the mechanism utilized to populate the Identity Warehouse from each individual application's access management function / files?
A. Oracle Identity Analytics (OIA) supports direct imports from applications. Data collection is based on Extract, Transform and Load (ETL), which eliminates the need to write connectors to different applications. Oracle Identity Analytics’ import engine supports complex entitlement feeds saved as either text files or XML. The imports can be scheduled on a periodic basis or triggered as needed. If the applications are synchronized with a user provisioning solution like Oracle Identity Manager, Oracle Identity Analytics has a seamless integration to pull in data from Oracle Identity Manager.

Q. Can you provide a short summary of the new features in your latest release of Oracle Identity Analytics?
A. Oracle recently announced availability of enhanced Oracle Identity Analytics. This release focused on easing the certification process by offering risk-analytics-driven certification, advanced certification screens, business-centric views and significant improvements in performance, including 3X faster data imports, 3X faster certification campaign generation and advanced auto-certification features that will allow organizations to improve user productivity by up to 80%. Closed-loop risk feedback and IT policy monitoring with Oracle Identity Manager, a leading user provisioning solution, allows for more accurate certification reviews. And OIA's improved performance enables customers to scale compliance initiatives supporting millions of user entitlements across thousands of applications, whether on premise or in the cloud, without compromising speed or integrity.

Q. Will ISACA grant a CPE credit for attending this ISACA-sponsored webinar today?
A. From ISACA: Hello and thank you for your interest in the 2011 ISACA Webinar Program!
Unfortunately, there are no CPEs offered for this program, archived or live. We will be looking into the feasibility of offering them in the future.

Q. Would you be able to use this to help manage licenses for software? That is to say, could it track software that is not used by a user, thus eliminating the software license?
A. OIA’s integration with Oracle Identity Manager, a leading user provisioning solution, allows organizations to detect ghost accounts or unused accounts via account reconciliation. Based on a company’s policies, this could trigger an automated workflow for account deletion or a request for further investigation. Closed-loop feedback between the two solutions would then allow visibility into the complete audit trail of when the account was detected, the action taken, by whom, when, and the current status.

Q. We have quarterly attestations and .xls mechanisms are not working. Once the identity data is correlated in Identity Analytics, do you then automate access certification?
A. OIA’s identity warehouse analyzes and correlates identity data across various resources, which allows OIA to determine a user’s risk profile and who the access review request should go to, along with all the relevant access details of the user. The access certification manager gets a notification on what to review and when, and the relevant data is presented in a business-friendly screen. Based on the result of the access certification process, actions are triggered and results are recorded and archived. Access review managers have visual risk indicators that also allow them to prioritize access certification tasks and efforts.

Q. How does Oracle Identity Analytics work with Cloud Security?
A. For enterprises looking to build their own cloud(s), Oracle offers a set of security services that cloud developers can leverage, including Oracle Identity Analytics. For enterprises looking to manage their compliance requirements without hosting those in-house, and instead having a hosting provider offer managed Identity Management services to the organization, Oracle Identity Analytics can be leveraged much the same way as you would in an on-premise (within the enterprise) environment. In fact, organizations today are leveraging Oracle Identity Analytics to manage identity compliance in both these ways.

Q. Would you recommend this as a cost-effective solution for a smaller organization with approximately 2,500 users?
A. The key return on investment (ROI) on Oracle Identity Analytics is derived from automating compliance processes, thereby eliminating administrative overhead, minimizing errors, maintaining cost- and time-effective sustainable compliance processes and minimizing audit exposures and penalties. Of course, there are other tangible benefits that are derived from an Oracle Identity Analytics implementation, as outlined in the webcast. For a quantitative analysis of your requirements and potential ROI calculation, we recommend you refer to the Forrester Study on Total Economic Impact of Oracle Identity Analytics. For an in-person discussion, please email Richard Caldwell.

    Read the article

  • Ok it has been pointed out to me

    - by Ratman21
That it seems my blog is more of a poor me, or pity me, or I deserve a job blog. Hmmm, I won’t say I have not whined here, as I have used this blog to vent my frustration on the whole out-of-work thing (lack of money, self worth, family issues and the never-ending bills coming my way), but it was also me trying to reach out to others in the same boat, as well as advertising: hey, I am out here, employers. It was also said that I don’t have anything listed here about me, like a cover letter or resume. Well, there is, but it was many months and posts ago. Also, what I had posted is not current. So here are my most current cover letter and resume.

Scott L Newman
45219 Dutton Way
Callahan, Fl. 32011

To Whom It May Concern:
I am really interested in the IT vacancy that you have listed for your company. Maybe I don’t have all the qualifications you want (hold on, don’t hit delete yet) yet! But maybe I do, as I have over 20+ years experience in “IT” RIGHT NOW. Read the rest of my cover letter and my resume. You will see what my “IT” skills are, and it will show that I can do this work! I can bring to your company, along with my can-do attitude, a broad range of skills, including:

· Certified CompTIA A+, Security+ and Network+ Technician
· 2.5 years (NOC) network experience on a large Cisco based WAN – UK to Austria
· 20 years experience MIS/DP – yes, I can do IBM mainframes and Tandem NonStops too
· 18 years experience as technical help desk support – panicking users, no problem
· 18 years experience with PC/server based systems, intranet and internet systems
· 10+ years experience on Microsoft Office, Windows XP and Data Network Fundamentals (YES, I do Windows)
· Strong troubleshooting skills for software, hardware and circuit issues (and I can tell you what kind of horrors I had to face on all of them)
· Very experienced in working with customers on problems – again, panicking users, no problem
· Working experience with remote access (VPN/SecurID) – I didn’t just study them, I worked on/with them
· Skilled in getting info for and creating documentation for operation procedures (I don’t just wait for them to give it to me, I go out and get it. Waiting for info on working applications is, well, dumb)
· Multiple software languages (hey, I have done some programming)

And much more experience in “IT” (mortgage, stocks and financial information systems experience, and I have worked “IT” in a hospital). I can multitask, and I have the ability to adapt to change and learn quickly. (Once I was put in charge of a system that I had not worked with for over two years. Talk about having to relearn and adapt to changes, but I did it.) I would welcome the opportunity to further discuss this position with you. If you have questions or would like to schedule an interview, please contact me by phone at 904-879-4880, on my cell at 352-356-0945, by e-mail at [email protected], or leave a message on my web site (http://beingscottnewman.webs.com/). I have enclosed/attached my resume for your review, and I look forward to hearing from you.

Thank you for taking a moment to consider my cover letter and resume. I appreciate how busy you are.
Sincerely,
Scott L. Newman

Scott L. Newman
45219 Dutton Way, Callahan, FL 32011
H (904) 879-4880  C (352) 356-0945
[email protected]
Web - http://beingscottnewman.webs.com/

OBJECTIVE
To obtain a Network Operation or Helpdesk position.

PROFILE
Information Technology Professional with 20+ years of experience. Volunteer website creator and back-up sound technician at True Faith Christian Fellowship. CompTIA A+, Network+ and Security+ Certified.

TECHNICAL AND PROFESSIONAL SKILLS
· Technical Support
· Frame Relay
· Microsoft Office Suite
· Inventory Management
· ISDN
· Windows NT/98/XP
· Client/Vendor Relations
· CICS
· Cisco Routers/Switches
· Networking/Administration
· RPG
· Helpdesk
· Website Design/Dev./Management
· Assembler
· Visio
· Programming
· COBOL IV

EDUCATION
· New Horizons Computer Learning Center, Jacksonville, Florida – CompTIA A+, Security+ and Network+ Certified. Currently working on CCNA Certification.
· Mott Community College, Flint, Michigan – Associates Degree – Data Processing and General Education
· Currently studying Japanese

PROFESSIONAL
True Faith Christian Fellowship Church – Callahan, FL – October 2009 to Present
Web site Tech
· Web site creator/tech, back-up song leader and back-up sound technician. Note: the church web site is http://ambassadorsforjesuschrist.webs.com/

U.S. Census (temp employee) – Feb. 23 to March 8, 2010
· Enumerator for Nassau County

Thomas Creek Baptist Church – Callahan, FL – June 2008 to September 2009
Church sound and video technician
· Sound and video technician

Fidelity National Information Services – Jacksonville, FL – February 01, 2005 to October 28, 2008
Client Server Dev/Analyst I
· Monitored multiple debit card sites, check authorization customers and the card auth system (AuthNet) for problems with the sites, connections, servers (on our LAN) and/or applications
· Night (NOC) network operator for a large Wide Area Network (WAN)
· Monitored multiple check authorization customers for problems with circuits, routers and applications
· Resolved circuit and/or router issues, or assisted the circuit carrier in resolving the issue
· Resolved application problems, or assisted application support in resolution
· Liaison between customer and application support
· Maintained and updated the NetOps operation procedures guide
· Kept the listing of equipment on the raised floor updated
· Involved in the training of all night check and card server operations operators
· FNIS acquired Certegy in 2005. Was one of 3 kept on.

Certegy – St. Pete, FL – August 31, 2003 to February 1, 2005
Senior NetOps Operator (FNIS acquired Certegy in 2005; all of the above jobs/skills were the same as listed for FNIS)
· Converting documentation to Adobe format
· Sole trainer of day/night shift System Management Center operators (SMC)
· Equifax spun off the Card/Check dept. as Certegy. Certegy terminated its contract with EDS. One of six in the whole IT dept. that was kept on.

EDS (Certegy Account) – St. Pete, FL – July 1, 1999 to August 31, 2003
Senior NetOps Operator
· Equifax outsourced the NetOps dept. to EDS in 1999.
· Same job skills as listed above for FNIS.

Equifax – St. Pete & Tampa, FL –
January 1, 1991 to July 1, 1999
NetOps/Tandem Operator
· All of the above for FNIS, except for circuit and router issues
· Operated, monitored and trouble-shot Tandem mainframe and servers on the LAN
· Supported the operation of the print, tape and microfiche rooms
· Equifax acquired TelaCredit in 1991.

TelaCredit – Tampa, FL – June 28, 1989 to January 1, 1991
Tandem Operator
· Operated and monitored Tandem NonStop systems for card and check auths
· Operated multiple high-speed laser printers and microfiche printers
· Mounted, filed and maintained 18 reel-to-reel mainframe tape drives, cartridge tape drives and the tape library.

    Read the article

  • Financial Management: Why Move to the Cloud?

    - by Kathryn Perry
A guest post by Terrance Wampler, Vice President, Financials Product Strategy, Oracle

I’ve spent my career designing and developing financial management systems, most of it at Oracle. Every single day I either meet with our customers or talk to them on the phone. The time is usually spent discussing various business challenges facing CFOs and Controllers who are running Oracle’s Financials. Lately, we’ve been talking a lot about cloud computing and whether it makes sense for finance to go to the cloud. Here are some pros and cons that might help you make that decision.

Let’s start with the benefits of cloud solutions. The first is savings. With cloud services, you pay only for those commodities that you use, which makes you feel like you're getting better value for your money. Plus, you can preserve your cash for your core business and you can get a better matching of expenses and revenues. So, at the top of the list is lower total cost of ownership. The second point has to do with optimization. With cloud services, you’ll need less IT infrastructure, so you can optimize your IT resources for better-value, higher-end projects. This also leads to greater financial visibility, where there's a clear cost for the set of services or features replaced by cloud services. And the last benefit is what I call acceleration. You can save money by speeding up the initialization and deployment of the project. You don't have to deal with IT infrastructure and you can start implementing right away. We did a quick survey of about 70 CFOs at the CFO Summit last month in New York City. We asked them why they were looking at cloud services, and not necessarily just for financials. The No. 1 response was perceived lower cost of ownership.

But of course there are risks to consider. The first thing most people think about in the cloud is security and ownership of data. So, will your data really be safe? Can you meet your own privacy policy requirements? Do you really want your private financial data exposed? Do you trust the provider? Is what you see really your data? Do you own it or is it managed by someone else? Security is a big concern that comes with an emotional component. The next thing in the risk category is reliability. Is the provider proven? You’re taking what you have control over – for example, standards and policies and internal service level agreements – away from your IT department and giving it to someone else. Will you still be able to adapt to shifts in your business? Will the provider be able to grow with your business effectively? Reliability means having a provider that can give you the service infrastructure that you need. And then there’s performance, which has two components in terms of risk. Going forward, will the provider be able to scale the infrastructure or service level if you have new employees or new businesses? And second, will the price you negotiate and the rate you lock in cover additional costs and rising service fees? Another piece is cost. What happens if you don't get the service level you want? What if you end the service? What happens if, after a few years, you send the service out for bid and change service? Can you move your data? Can you move the applications? Do the integrations work? These are cost components people don’t always take into account. And the final piece is the business case. The perception is that you can get started really quickly with cloud. It has a perceived lower total cost of ownership and it feels cool because it's cloud.
But do you have a good business case for moving to the cloud? Your total cost of ownership is calculated over three years; then you’ll renew, so your TCO horizon becomes six years. Have you compared that to other internal services that you’re offering? You might already have a product that you can run this new business or division on. In that same survey at the CFO Summit, the execs thought the biggest perceived risks were security of data, the ability to move data back, and the ability to create a business case that actually justifies the risks. So that’s the list of pros and cons. Not to leave you hanging, I will do another post on how to balance these pros and cons and make the right decision for your business.

    Read the article

  • Writing an ASP.Net Web based TFS Client

    - by Glav
So one of the things I needed to do was write an ASP.Net MVC based application for our senior execs to manage a set of arbitrary attributes against stories, bugs etc., to be able to attribute whether the item was related to Research and Development, and if so, what kind. We are using TFS Azure and don’t have the option of custom templates. I have decided on using a string based field within the template that is not very visible and which we don’t use, and writing a small set of custom code which will determine the research and development association. However, this string munging on the field is not very user friendly, so we need a simple tool that can display attributes against items in a simple dropdown list or something similar. Enter a custom web app that accesses our TFS items in Azure. (Note: we are also using Visual Studio 2012.)

Now TFS Azure uses your Live ID, and it is not really possible to easily do this in a server based app where no interaction is available. Even if you capture the Live ID credentials yourself and try to submit them to TFS Azure, it won’t work. Bottom line is that it is not straightforward nor obvious what you have to do. In fact, it is a real pain to find, and there are some answers out there which don’t appear to be answers at all, given they didn’t work in my scenario. So for anyone else who wants to do this, here is a simple breakdown on what you have to do:

Go here and get the “TFS Service Credential Viewer”. Install it, run it, connect to your TFS instance in Azure and create a service account. Note the username and password exactly as it presents them to you. This is the magic identity that will allow unattended, programmatic access. Without this step, don’t bother trying to do anything else.

In your MVC app, reference the following assemblies from “C:\Program Files (x86)\Microsoft Visual Studio 11.0\Common7\IDE\ReferenceAssemblies\v2.0”:

Microsoft.TeamFoundation.Client.dll
Microsoft.TeamFoundation.Common.dll
Microsoft.TeamFoundation.VersionControl.Client.dll
Microsoft.TeamFoundation.VersionControl.Common.dll
Microsoft.TeamFoundation.WorkItemTracking.Client.DataStoreLoader.dll
Microsoft.TeamFoundation.WorkItemTracking.Client.dll
Microsoft.TeamFoundation.WorkItemTracking.Common.dll

If hosting this in Internet Information Server, you will need to enable 32 bit support for the application pool this app runs under.

You also have to allow the TFS client assemblies to store a cache of files on your system. If you don’t do this, you will authenticate fine, but then get an exception saying that it is unable to access the cache at some directory path when you query work items. You can set this up by adding the following to your web.config, in the <appSettings> element as shown below:

<appSettings>
  <!-- Add reference to TFS Client Cache -->
  <add key="WorkItemTrackingCacheRoot" value="C:\windows\temp" />
</appSettings>

With all that in place, you can write the following code:

var token = new Microsoft.TeamFoundation.Client.SimpleWebTokenCredential("{your-service-account-name}", "{your-service-account-password}");
var clientCreds = new Microsoft.TeamFoundation.Client.TfsClientCredentials(token);
var currentCollection = new TfsTeamProjectCollection(new Uri("https://{yourdomain}.visualstudio.com/defaultcollection"), clientCreds);
currentCollection.EnsureAuthenticated();

In the above code, note the URL contains "defaultcollection" at the end. Obviously replace {yourdomain} with whatever is defined for your TFS in Azure instance.
In addition, make sure the service account username and password that were generated in the first step are substituted in here.

Note: if something is not right, the EnsureAuthenticated() call will throw an exception with the message being that you are not authorised. If you forget the "defaultcollection" on the URL, it will still fail, but with a message saying you are not authorised; that is, a similar but different exception message.

And that is it. You can then query the collection using something like:

var service = currentCollection.GetService<WorkItemStore>();
var proj = service.Projects[0];
var allQueries = proj.StoredQueries;
for (int qcnt = 0; qcnt < allQueries.Count; qcnt++)
{
    var query = allQueries[qcnt];
    var queryDesc = string.Format("Query found named: {0}", query.Name);
}

You get the idea. If you search around, you will find references to the ServiceIdentityCredentialProvider which is referenced in this article. I had no luck with this method, and it all looked too hard since it required an extra KB article and other magic sauce. So I hope that helps. This article certainly would have helped me save a boat load of time and frustration.
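As a small extension of the example above, once authenticated you can also run a WIQL query directly against the work item store; a minimal sketch, where the project name 'MyProject' is a placeholder:

var store = currentCollection.GetService<WorkItemStore>();
// Run a WIQL query against the authenticated collection.
var items = store.Query(
    "SELECT [System.Id], [System.Title] FROM WorkItems " +
    "WHERE [System.TeamProject] = 'MyProject' ORDER BY [System.Id]");
foreach (Microsoft.TeamFoundation.WorkItemTracking.Client.WorkItem item in items)
{
    // Each result is a full WorkItem object with its fields populated.
    var line = string.Format("{0}: {1}", item.Id, item.Title);
}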

    Read the article

  • Silverlight 4 Twitter Client - Part 2

    - by Max
We will create a few classes now to help us with storing and retrieving user credentials, so that we don't ask for them every time we want to speak with Twitter to get some information.

Now, the class for sorting out the credentials. We will have this class as a static one, so as to ensure a single instance of it. This class is mainly going to include a getter and setter for username and password, a method to check if the user is logged in, and another one to log out the user. You can get the code here.

Now let us create another class to facilitate easy retrieval from Twitter's XML format results for any queries we make. This basically involves just creating a getter and setter for all the values that you would like to retrieve from the XML document returned. You can get the format of the xml document from here. Here is what I've in my Status.cs data structure class:

using System;
using System.Net;
using System.Windows;
using System.Windows.Controls;
using System.Windows.Documents;
using System.Windows.Ink;
using System.Windows.Input;
using System.Windows.Media;
using System.Windows.Media.Animation;
using System.Windows.Shapes;

namespace MaxTwitter.Classes
{
    public class Status
    {
        public Status() {}
        public string ID { get; set; }
        public string Text { get; set; }
        public string Source { get; set; }
        public string UserID { get; set; }
        public string UserName { get; set; }
    }
}

Now let us look into implementing the Login.xaml.cs. First thing here: if the user is already logged in, we need to redirect the user to the home page. This we can accomplish using the OnNavigatedTo event, which is fired when the user navigates to this particular Login page. Here you utilize the Navigate method of NavigationService to go to a different page if the user is already logged in.

if (GlobalVariable.isLoggedin())
    this.NavigationService.Navigate(new Uri("/Home", UriKind.Relative));

On the submit button click event, add a new event handler, which will make the WebClient request and download the results as an XML string.

WebRequest.RegisterPrefix("https://", System.Net.Browser.WebRequestCreator.ClientHttp);

The following lines allow us to create a web client to make a web request to a URL and get back the string response; something that came as great news for many SL developers with SL 4.

WebClient myService = new WebClient();
myService.AllowReadStreamBuffering = true;
myService.UseDefaultCredentials = false;
myService.Credentials = new NetworkCredential(TwitterUsername.Text, TwitterPassword.Password);

In the following lines, we add an event that is fired once the XML string has been downloaded; this is where you can do all your XLINQ stuff.

myService.DownloadStringCompleted += new DownloadStringCompletedEventHandler(TimelineRequestCompleted);
myService.DownloadStringAsync(new Uri("https://twitter.com/statuses/friends_timeline.xml"));
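The XLINQ parsing itself is left to the linked code; here is a minimal sketch that turns the friends timeline XML into the Status objects defined above (the element names are assumptions based on the Twitter statuses XML format of the time):

using System.Collections.Generic;
using System.Linq;
using System.Xml.Linq;

// Sketch only - parse the friends_timeline XML into Status objects.
public static List<Status> ParseTimeline(string xml)
{
    XDocument doc = XDocument.Parse(xml);
    return (from s in doc.Descendants("status")
            select new Status
            {
                ID = (string)s.Element("id"),
                Text = (string)s.Element("text"),
                Source = (string)s.Element("source"),
                UserID = (string)s.Element("user").Element("id"),
                UserName = (string)s.Element("user").Element("screen_name")
            }).ToList();
}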
Now let us look at implementing the TimelineRequestCompleted event. Here we are not actually using the string response we get from Twitter; I just use it to ensure the user is authenticated successfully, and then save the credentials and redirect to the home page.

public void TimelineRequestCompleted(object sender, System.Net.DownloadStringCompletedEventArgs e)
{
    if (e.Error != null)
    {
        MessageBox.Show("This application must be installed first");
    }

If there is no error, we can save the credentials to reuse them later.

    else
    {
        GlobalVariable.saveCredentials(TwitterUsername.Text, TwitterPassword.Password);
        this.NavigationService.Navigate(new System.Uri("/Home", UriKind.Relative));
    }
}

OK, so now the login page is done. Now the main thing: running this application. This credentials stuff will only work if the application is run out of the browser, so we need to fiddle with a few Silverlight project settings to enable this. Here is how:

Right click on the Silverlight project > Properties, then check the "Enable running application out of browser" option.

Then click on Out-Of-Browser settings and check the "Require elevated trust…" option.

That's it, all done to run. Now press F5 to run the application, and fix the errors, if any. Then, once the application opens up in the browser with the login page, right click and choose install. Once you install, it will automatically run, and you can log in and see that you are redirected to the Home page. Here are the files that are related to this post. We will look at implementing the Home page, etc. in the next post. Please post your comments and feedback; it would greatly help me in improving my posts! Thanks for your time, catch you soon.
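For reference, here is a minimal sketch of what the GlobalVariable credentials class described at the start could look like. The method names come from the calls above; the use of IsolatedStorageSettings, and the plain-text storage of the password (which you would want to harden in a real app), are assumptions:

using System.IO.IsolatedStorage;

namespace MaxTwitter.Classes
{
    // Sketch only - the real class ships with the linked code download.
    public static class GlobalVariable
    {
        private static readonly IsolatedStorageSettings Settings =
            IsolatedStorageSettings.ApplicationSettings;

        // Persist the credentials; plain-text storage is for illustration only.
        public static void saveCredentials(string username, string password)
        {
            Settings["TwitterUsername"] = username;
            Settings["TwitterPassword"] = password;
            Settings.Save();
        }

        public static string getUsername()
        {
            return Settings.Contains("TwitterUsername") ? (string)Settings["TwitterUsername"] : null;
        }

        public static string getPassword()
        {
            return Settings.Contains("TwitterPassword") ? (string)Settings["TwitterPassword"] : null;
        }

        public static bool isLoggedin()
        {
            return Settings.Contains("TwitterUsername") && Settings.Contains("TwitterPassword");
        }

        public static void logout()
        {
            Settings.Remove("TwitterUsername");
            Settings.Remove("TwitterPassword");
            Settings.Save();
        }
    }
}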

    Read the article
