Search Results

Search found 29101 results on 1165 pages for 'thread local storage'.


  • How to Use RDA to Generate WLS Thread Dumps At Specified Intervals?

    - by Daniel Mortimer
    Introduction
    There are many ways to generate a thread dump of a WebLogic Managed Server. For example, take a look at: Taking Thread Dumps [an excellent blog post on the Middleware Magic site] or Different ways to take thread dumps in WebLogic Server (Document 1098691.1). There is another method - use Remote Diagnostic Agent! The solution described below is not documented, but it is relatively straightforward to execute. One advantage of using RDA to collect the thread dumps is that RDA will also collect configuration, log files, and network, system and performance information at the same time.
    Instructions
    1. Not familiar with Remote Diagnostic Agent? Take a look at my previous blog "Resolve SRs Faster Using RDA - Find the Right Profile".
    2. Choose a profile which includes the WebLogic Server data collection modules (for example the profile "WebLogicServer"). At RDA setup time you should see the prompt below:
    -------------------------------------------------------------------------------
    S301WLS: Collects Oracle WebLogic Server Information
    -------------------------------------------------------------------------------
    Enter the location of the directory where the domains to analyze are located
    (For example in UNIX, <BEA Home>/user_projects/domains or <Middleware Home>/user_projects/domains)
    Hit 'Return' to accept the default (/oracle/11AS/Middleware/user_projects/domains)
    >
    For a successful WLS connection, ensure that the domain Admin Server is up and running.
    Data Collection Type:
      1  Collect for a single server (offline mode)
      2  Collect for a single server (using WLS connection)
      3  Collect for multiple servers (using WLS connection)
    Enter the item number
    Hit 'Return' to accept the default (1)
    > 2
    Choose option 2 or 3. Note: "Collect for a single server" or "multiple servers using WLS connection" means that RDA will attempt to connect and execute online WLST commands against the targeted server(s). The thread dumps are collected using the WLST function "threadDumps()". If WLST cannot connect to the managed server, RDA will proceed to collect other data and ignore the request to collect thread dumps. If the final output has no Thread Dump menu item, it is likely that the managed server is in a state which prevents new connections to it. If faced with this scenario, you would have to employ alternative methods for collecting thread dumps.
    3. The RDA setup will create a setup.cfg file in the RDA_HOME directory. Open this file in an editor. You will find the following parameters, which govern the number of thread dumps and the thread dump interval:
    #N.Number of thread dumps to capture
    WREQ_THREAD_DUMP=10
    #N.Thread dump interval
    WREQ_THREAD_DUMP_INTERVAL=5000
    The lines above show the default settings: RDA will collect 10 thread dumps at 5000 millisecond (5 second) intervals. You may want to change this to something like:
    #N.Number of thread dumps to capture
    WREQ_THREAD_DUMP=10
    #N.Thread dump interval
    WREQ_THREAD_DUMP_INTERVAL=30000
    However, bear in mind that such a change will increase the total time it takes for RDA to complete its run.
    4. Once you are happy with setup.cfg, run RDA. RDA will collect, render, generate and package all files in the output directory.
    5. For ease of viewing, open the RDA Start HTML file - "xxxx__start.htm". The thread dumps can be found under the WLST Collections for the target managed server(s).
    See the screenshots below. Screenshot 1: RDA Start Page - Main Index. Screenshot 2: Managed Server Sub Index. Screenshot 3: WLST Collections. Screenshot 4: Thread Dump Page - list of dump file links. Screenshot 5: Thread Dump Dat File Link.
    Additional comments:
    A) You can view the thread dump files within the RDA Start Page framework, but most likely you will want to download the dat files for in-depth analysis with thread dump analysis tools such as Thread Dump Analyzer, or Samurai (a GUI-based tail and thread dump analysis tool). If you are new to thread dump analysis, take a look at the recorded Support Advisor Webcast "Oracle WebLogic Server: Diagnosing Performance Issues through Java Thread Dumps" [slide deck from the webcast in PDF format].
    B) I have logged a couple of enhancement requests for the RDA Development Team to consider: add a timestamp to the dump file links, the dat filename and the top of the body of the dat file; and package the individual thread dumps in a zip so all dump files can be conveniently downloaded in one go.

    Read the article

  • Intranet Ip - Access from Custom Domain

    - by Alexander Wigmore
    I have set up a local intranet in my office using IIS7 (on a Windows 7 machine). Currently it can be accessed through the PC's static IP, but I would like it to be reachable internally through an easier method, e.g. by typing http://intranet (or something similar). There are over 60 PCs in the office, so individually updating the Hosts file on each PC is not really ideal. We don't need it to be accessible from the outside world (i.e. we don't want it to be an extranet). Any tips?

    Read the article

  • Spring transaction : Transaction not active

    - by Videanu Adrian
    I'm developing an app using Struts2, Spring 3.1, JPA 2 and Hibernate. From Spring I use transactions and IoC. I have an AJAX code block that calls a Struts2 action every second (this happens for every user that is logged into the application; there are around 20-30 simultaneous users at a time). The action is named PopupAction:
    public class PopupAction extends VActionBase implements ServletRequestAware { private static final long serialVersionUID = -293004532677112584L; private iIntermedService intermedService; private HttpServletRequest servletRequest; @Override public String execute() { Integer agentId = (Integer) session.get("USER_AGENT_ID"); Intermed iObj; try { iObj = intermedService.getIntermed(agentId,locationsString); } catch (Exception e) { logger.error("Cannot get Intermed!!! "+e.getMessage()); return ERROR; } return SUCCESS; } }
    and then I have the service class:
    @Transactional(readOnly=true) public class IntermedServiceImpl extends GenericIService<Intermed, Integer> implements iIntermedService { @Override public Intermed getIntermed (int agentId,String queueIds) throws Exception { Intermed intermedObj = null; //TODO - find a better implementation for this queueIds parameter!!!! try{ String sql = "SELECT i FROM bla bla bla.....)"; Query q = this.em.createQuery(sql); List<Intermed> iList = q.getResultList(); if (iList.size() == 1){ intermedObj = (Intermed) iList.get(0); //get latest object from DB em.refresh(intermedObj); } }catch(Exception e){ e.printStackTrace(); logger.error(e.getCause()+e.getMessage()); throw e; } return intermedObj; } }
    Here is the Spring configuration:
    <bean id="emfI" class="org.springframework.orm.jpa.LocalContainerEntityManagerFactoryBean"> <property name="dataSource" ref="inboundDS" /> <property name="persistenceUnitName" value="I2PU"/> <!-- GlassFish load-time weaving setup --> <property name="loadTimeWeaver"> <bean class="org.springframework.instrument.classloading.glassfish.GlassFishLoadTimeWeaver"/> </property> </bean> <tx:annotation-driven transaction-manager="txManagerI" /> <tx:advice id="txManagerInboundAdvice" transaction-manager="txManagerI"> <tx:attributes> <tx:method name="*" rollback-for="java.lang.Exception"/> </tx:attributes> </tx:advice>
    I use named transaction managers because I have 3 datasources and 3 transaction managers. The problem is that my GlassFish logs are full of messages like these: -- removed in order to be able to add more recent logs -- So the cause is: Caused by: java.lang.IllegalStateException: Transaction not active. But I have no idea what causes this. Any help? Thanks.
    Update: I have added the name of the transaction manager it has to use to the @Transactional annotation, but this still has not solved my problem.
I have captured a log from the time that the transaction is created until i got that exception: 2012-02-08T15:08:55.954+0200|INFO||_ThreadID=184;_ThreadName=Thread-5;|DEBUG [thread-pool-1-80(80)] (AbstractBeanFactory.java:245) - Returning cached instance of singleton bean 'txManagerVA' 2012-02-08T15:08:55.962+0200|INFO||_ThreadID=184;_ThreadName=Thread-5;|DEBUG [thread-pool-1-80(80)] (AbstractPlatformTransactionManager.java:365) - Creating new transaction with name [xxx.vs.common.services.inbound.IntermedServiceImpl.getIntermed]: PROPAGATION_REQUIRED,ISOLATION_DEFAULT,readOnly; '',-java.lang.Exception 2012-02-08T15:08:55.967+0200|INFO||_ThreadID=184;_ThreadName=Thread-5;|DEBUG [thread-pool-1-80(80)] (JpaTransactionManager.java:368) - Opened new EntityManager [org.hibernate.ejb.EntityManagerImpl@edf83f9] for JPA transaction 2012-02-08T15:08:55.976+0200|INFO||_ThreadID=184;_ThreadName=Thread-5;|DEBUG [thread-pool-1-80(80)] (JpaTransactionManager.java:400) - Exposing JPA transaction as JDBC transaction [org.springframework.orm.jpa.vendor.HibernateJpaDialect$HibernateConnectionHandle@725b979b] 2012-02-08T15:08:55.977+0200|INFO||_ThreadID=184;_ThreadName=Thread-5;|DEBUG [thread-pool-1-80(80)] (TransactionSynchronizationManager.java:193) - Bound value [org.springframework.jdbc.datasource.ConnectionHolder@4fb57177] for key [com.sun.gjc.spi.jdbc40.DataSource40@75fa4851] to thread [thread-pool-1-80(80)] 2012-02-08T15:08:55.978+0200|INFO||_ThreadID=184;_ThreadName=Thread-5;|DEBUG [thread-pool-1-80(80)] (TransactionSynchronizationManager.java:193) - Bound value [org.springframework.orm.jpa.EntityManagerHolder@112c6483] for key [org.springframework.orm.jpa.LocalContainerEntityManagerFactoryBean@47d4f12f] to thread [thread-pool-1-80(80)] 2012-02-08T15:08:55.979+0200|INFO||_ThreadID=184;_ThreadName=Thread-5;|DEBUG [thread-pool-1-80(80)] (TransactionSynchronizationManager.java:272) - Initializing transaction synchronization 2012-02-08T15:08:55.980+0200|INFO||_ThreadID=184;_ThreadName=Thread-5;|DEBUG [thread-pool-1-80(80)] (TransactionAspectSupport.java:362) - Getting transaction for [xxx.vs.common.services.inbound.IntermedServiceImpl.getIntermed] 2012-02-08T15:08:55.983+0200|INFO||_ThreadID=184;_ThreadName=Thread-5;|DEBUG [thread-pool-1-80(80)] (ExtendedEntityManagerCreator.java:423) - Starting resource local transaction on application-managed EntityManager [org.hibernate.ejb.EntityManagerImpl@46d002f4] 2012-02-08T15:08:55.984+0200|INFO||_ThreadID=184;_ThreadName=Thread-5;|DEBUG [thread-pool-1-80(80)] (TransactionSynchronizationManager.java:193) - Bound value [org.springframework.orm.jpa.ExtendedEntityManagerCreator$ExtendedEntityManagerSynchronization@797add43] for key [org.hibernate.ejb.EntityManagerImpl@46d002f4] to thread [thread-pool-1-80(80)] 2012-02-08T15:08:55.986+0200|INFO||_ThreadID=184;_ThreadName=Thread-5;|DEBUG [thread-pool-1-80(80)] (ExtendedEntityManagerCreator.java:400) - Joined local transaction 2012-02-08T15:08:55.991+0200|INFO||_ThreadID=184;_ThreadName=Thread-5;|DEBUG [thread-pool-1-80(80)] (TransactionAspectSupport.java:391) - Completing transaction for [xxx.vs.common.services.inbound.IntermedServiceImpl.getIntermed] 2012-02-08T15:08:55.992+0200|INFO||_ThreadID=184;_ThreadName=Thread-5;|DEBUG [thread-pool-1-80(80)] (AbstractPlatformTransactionManager.java:922) - Triggering beforeCommit synchronization 2012-02-08T15:08:55.994+0200|INFO||_ThreadID=184;_ThreadName=Thread-5;|DEBUG [thread-pool-1-80(80)] (AbstractPlatformTransactionManager.java:935) - Triggering beforeCompletion 
synchronization 2012-02-08T15:08:56.001+0200|INFO||_ThreadID=184;_ThreadName=Thread-5;|DEBUG [thread-pool-1-80(80)] (TransactionSynchronizationManager.java:243) - Removed value [org.springframework.orm.jpa.ExtendedEntityManagerCreator$ExtendedEntityManagerSynchronization@797add43] for key [org.hibernate.ejb.EntityManagerImpl@46d002f4] from thread [thread-pool-1-80(80)] 2012-02-08T15:08:56.002+0200|INFO||_ThreadID=184;_ThreadName=Thread-5;|DEBUG [thread-pool-1-80(80)] (AbstractPlatformTransactionManager.java:752) - Initiating transaction commit 2012-02-08T15:08:56.003+0200|INFO||_ThreadID=184;_ThreadName=Thread-5;|DEBUG [thread-pool-1-80(80)] (JpaTransactionManager.java:507) - Committing JPA transaction on EntityManager [org.hibernate.ejb.EntityManagerImpl@edf83f9] 2012-02-08T15:08:56.008+0200|INFO||_ThreadID=184;_ThreadName=Thread-5;|DEBUG [thread-pool-1-80(80)] (AbstractPlatformTransactionManager.java:948) - Triggering afterCommit synchronization 2012-02-08T15:08:56.010+0200|INFO||_ThreadID=184;_ThreadName=Thread-5;|DEBUG [thread-pool-1-80(80)] (AbstractPlatformTransactionManager.java:964) - Triggering afterCompletion synchronization 2012-02-08T15:08:56.011+0200|INFO||_ThreadID=184;_ThreadName=Thread-5;|DEBUG [thread-pool-1-80(80)] (TransactionSynchronizationManager.java:331) - Clearing transaction synchronization 2012-02-08T15:08:56.012+0200|INFO||_ThreadID=184;_ThreadName=Thread-5;|DEBUG [thread-pool-1-80(80)] (TransactionSynchronizationManager.java:243) - Removed value [org.springframework.orm.jpa.EntityManagerHolder@112c6483] for key [org.springframework.orm.jpa.LocalContainerEntityManagerFactoryBean@47d4f12f] from thread [thread-pool-1-80(80)] 2012-02-08T15:08:56.021+0200|INFO||_ThreadID=184;_ThreadName=Thread-5;|DEBUG [thread-pool-1-80(80)] (TransactionSynchronizationManager.java:243) - Removed value [org.springframework.jdbc.datasource.ConnectionHolder@4fb57177] for key [com.sun.gjc.spi.jdbc40.DataSource40@75fa4851] from thread [thread-pool-1-80(80)] 2012-02-08T15:08:56.021+0200|INFO||_ThreadID=184;_ThreadName=Thread-5;|DEBUG [thread-pool-1-80(80)] (JpaTransactionManager.java:593) - Closing JPA EntityManager [org.hibernate.ejb.EntityManagerImpl@edf83f9] after transaction 2012-02-08T15:08:56.022+0200|INFO||_ThreadID=184;_ThreadName=Thread-5;|DEBUG [thread-pool-1-80(80)] (EntityManagerFactoryUtils.java:343) - Closing JPA EntityManager 2012-02-08T15:08:56.023+0200|INFO||_ThreadID=184;_ThreadName=Thread-5;|ERROR [thread-pool-1-80(80)] (PopupAction.java:39) - Cannot get Intermed!!! 
Transaction not active; nested exception is java.lang.IllegalStateException: Transaction not active 2012-02-08T15:08:56.024+0200|SEVERE||_ThreadID=184;_ThreadName=Thread-5;|org.springframework.dao.InvalidDataAccessApiUsageException: Transaction not active; nested exception is java.lang.IllegalStateException: Transaction not active at org.springframework.orm.jpa.EntityManagerFactoryUtils.convertJpaAccessExceptionIfPossible(EntityManagerFactoryUtils.java:298) at org.springframework.orm.jpa.vendor.HibernateJpaDialect.translateExceptionIfPossible(HibernateJpaDialect.java:106) at org.springframework.orm.jpa.ExtendedEntityManagerCreator$ExtendedEntityManagerSynchronization.convertException(ExtendedEntityManagerCreator.java:501) at org.springframework.orm.jpa.ExtendedEntityManagerCreator$ExtendedEntityManagerSynchronization.afterCommit(ExtendedEntityManagerCreator.java:481) at org.springframework.transaction.support.TransactionSynchronizationUtils.invokeAfterCommit(TransactionSynchronizationUtils.java:133) at org.springframework.transaction.support.TransactionSynchronizationUtils.triggerAfterCommit(TransactionSynchronizationUtils.java:121) at org.springframework.transaction.support.AbstractPlatformTransactionManager.triggerAfterCommit(AbstractPlatformTransactionManager.java:950) at org.springframework.transaction.support.AbstractPlatformTransactionManager.processCommit(AbstractPlatformTransactionManager.java:796) at org.springframework.transaction.support.AbstractPlatformTransactionManager.commit(AbstractPlatformTransactionManager.java:723) at org.springframework.transaction.interceptor.TransactionAspectSupport.commitTransactionAfterReturning(TransactionAspectSupport.java:393) at org.springframework.transaction.interceptor.TransactionInterceptor.invoke(TransactionInterceptor.java:120) at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:172) at org.springframework.aop.framework.JdkDynamicAopProxy.invoke(JdkDynamicAopProxy.java:202) at $Proxy325.getIntermed(Unknown Source) at xxx.vs.common.actions.PopupAction.execute(PopupAction.java:37) at sun.reflect.GeneratedMethodAccessor1581.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:616) at com.opensymphony.xwork2.DefaultActionInvocation.invokeAction(DefaultActionInvocation.java:453) at com.opensymphony.xwork2.DefaultActionInvocation.invokeActionOnly(DefaultActionInvocation.java:292) at com.opensymphony.xwork2.DefaultActionInvocation.invoke(DefaultActionInvocation.java:255) at org.apache.struts2.interceptor.debugging.DebuggingInterceptor.intercept(DebuggingInterceptor.java:256) at com.opensymphony.xwork2.DefaultActionInvocation.invoke(DefaultActionInvocation.java:249) at com.opensymphony.xwork2.interceptor.DefaultWorkflowInterceptor.doIntercept(DefaultWorkflowInterceptor.java:176) at com.opensymphony.xwork2.interceptor.MethodFilterInterceptor.intercept(MethodFilterInterceptor.java:98) at com.opensymphony.xwork2.DefaultActionInvocation.invoke(DefaultActionInvocation.java:249) at com.opensymphony.xwork2.validator.ValidationInterceptor.doIntercept(ValidationInterceptor.java:265) at org.apache.struts2.interceptor.validation.AnnotationValidationInterceptor.doIntercept(AnnotationValidationInterceptor.java:68) at com.opensymphony.xwork2.interceptor.MethodFilterInterceptor.intercept(MethodFilterInterceptor.java:98) at 
com.opensymphony.xwork2.DefaultActionInvocation.invoke(DefaultActionInvocation.java:249) at com.opensymphony.xwork2.interceptor.ConversionErrorInterceptor.intercept(ConversionErrorInterceptor.java:138) at com.opensymphony.xwork2.DefaultActionInvocation.invoke(DefaultActionInvocation.java:249) at com.opensymphony.xwork2.interceptor.ParametersInterceptor.doIntercept(ParametersInterceptor.java:211) at com.opensymphony.xwork2.interceptor.MethodFilterInterceptor.intercept(MethodFilterInterceptor.java:98) at com.opensymphony.xwork2.DefaultActionInvocation.invoke(DefaultActionInvocation.java:249) at com.opensymphony.xwork2.interceptor.ParametersInterceptor.doIntercept(ParametersInterceptor.java:211) at com.opensymphony.xwork2.interceptor.MethodFilterInterceptor.intercept(MethodFilterInterceptor.java:98) at com.opensymphony.xwork2.DefaultActionInvocation.invoke(DefaultActionInvocation.java:249) at com.opensymphony.xwork2.interceptor.StaticParametersInterceptor.intercept(StaticParametersInterceptor.java:190) at com.opensymphony.xwork2.DefaultActionInvocation.invoke(DefaultActionInvocation.java:249) at org.apache.struts2.interceptor.MultiselectInterceptor.intercept(MultiselectInterceptor.java:75) at com.opensymphony.xwork2.DefaultActionInvocation.invoke(DefaultActionInvocation.java:249) at org.apache.struts2.interceptor.CheckboxInterceptor.intercept(CheckboxInterceptor.java:90) at com.opensymphony.xwork2.DefaultActionInvocation.invoke(DefaultActionInvocation.java:249) at org.apache.struts2.interceptor.FileUploadInterceptor.intercept(FileUploadInterceptor.java:243) at com.opensymphony.xwork2.DefaultActionInvocation.invoke(DefaultActionInvocation.java:249) at com.opensymphony.xwork2.interceptor.ModelDrivenInterceptor.intercept(ModelDrivenInterceptor.java:100) at com.opensymphony.xwork2.DefaultActionInvocation.invoke(DefaultActionInvocation.java:249) at com.opensymphony.xwork2.interceptor.ScopedModelDrivenInterceptor.intercept(ScopedModelDrivenInterceptor.java:141) at com.opensymphony.xwork2.DefaultActionInvocation.invoke(DefaultActionInvocation.java:249) at com.opensymphony.xwork2.interceptor.ChainingInterceptor.intercept(ChainingInterceptor.java:145) at com.opensymphony.xwork2.DefaultActionInvocation.invoke(DefaultActionInvocation.java:249) at com.opensymphony.xwork2.interceptor.PrepareInterceptor.doIntercept(PrepareInterceptor.java:171) at com.opensymphony.xwork2.interceptor.MethodFilterInterceptor.intercept(MethodFilterInterceptor.java:98) at com.opensymphony.xwork2.DefaultActionInvocation.invoke(DefaultActionInvocation.java:249) at com.opensymphony.xwork2.interceptor.I18nInterceptor.intercept(I18nInterceptor.java:176) at com.opensymphony.xwork2.DefaultActionInvocation.invoke(DefaultActionInvocation.java:249) at org.apache.struts2.interceptor.ServletConfigInterceptor.intercept(ServletConfigInterceptor.java:164) at com.opensymphony.xwork2.DefaultActionInvocation.invoke(DefaultActionInvocation.java:249) at com.opensymphony.xwork2.interceptor.AliasInterceptor.intercept(AliasInterceptor.java:192) at com.opensymphony.xwork2.DefaultActionInvocation.invoke(DefaultActionInvocation.java:249) at com.opensymphony.xwork2.interceptor.ExceptionMappingInterceptor.intercept(ExceptionMappingInterceptor.java:187) at com.opensymphony.xwork2.DefaultActionInvocation.invoke(DefaultActionInvocation.java:249) at xxx.vs.common.utils.AuthenticationInterceptor.intercept(AuthenticationInterceptor.java:78) at com.opensymphony.xwork2.DefaultActionInvocation.invoke(DefaultActionInvocation.java:249) at 
com.googlecode.sslplugin.interceptors.SSLInterceptor.intercept(SSLInterceptor.java:128) at com.opensymphony.xwork2.DefaultActionInvocation.invoke(DefaultActionInvocation.java:249) at org.apache.struts2.impl.StrutsActionProxy.execute(StrutsActionProxy.java:54) at org.apache.struts2.dispatcher.Dispatcher.serviceAction(Dispatcher.java:510) at org.apache.struts2.dispatcher.ng.ExecuteOperations.executeAction(ExecuteOperations.java:77) at org.apache.struts2.dispatcher.ng.filter.StrutsPrepareAndExecuteFilter.doFilter(StrutsPrepareAndExecuteFilter.java:91) at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:256) at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:217) at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:279) at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:175) at org.apache.catalina.core.StandardPipeline.doInvoke(StandardPipeline.java:655) at org.apache.catalina.core.StandardPipeline.invoke(StandardPipeline.java:595) at com.sun.enterprise.web.WebPipeline.invoke(WebPipeline.java:98) at com.sun.enterprise.web.PESessionLockingStandardPipeline.invoke(PESessionLockingStandardPipeline.java:91) at org.apache.catalina 2012-02-08T15:08:56.024+0200|SEVERE||_ThreadID=184;_ThreadName=Thread-5;|.core.StandardHostValve.invoke(StandardHostValve.java:162) at org.apache.catalina.connector.CoyoteAdapter.doService(CoyoteAdapter.java:330) at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:231) at com.sun.enterprise.v3.services.impl.ContainerMapper.service(ContainerMapper.java:174) at com.sun.grizzly.http.ProcessorTask.invokeAdapter(ProcessorTask.java:828) at com.sun.grizzly.http.ProcessorTask.doProcess(ProcessorTask.java:725) at com.sun.grizzly.http.ProcessorTask.process(ProcessorTask.java:1019) at com.sun.grizzly.http.DefaultProtocolFilter.execute(DefaultProtocolFilter.java:225) at com.sun.grizzly.DefaultProtocolChain.executeProtocolFilter(DefaultProtocolChain.java:137) at com.sun.grizzly.DefaultProtocolChain.execute(DefaultProtocolChain.java:104) at com.sun.grizzly.DefaultProtocolChain.execute(DefaultProtocolChain.java:90) at com.sun.grizzly.http.HttpProtocolChain.execute(HttpProtocolChain.java:79) at com.sun.grizzly.ProtocolChainContextTask.doCall(ProtocolChainContextTask.java:54) at com.sun.grizzly.SelectionKeyContextTask.call(SelectionKeyContextTask.java:59) at com.sun.grizzly.ContextTask.run(ContextTask.java:71) at com.sun.grizzly.util.AbstractThreadPool$Worker.doWork(AbstractThreadPool.java:532) at com.sun.grizzly.util.AbstractThreadPool$Worker.run(AbstractThreadPool.java:513) at java.lang.Thread.run(Thread.java:679) Caused by: java.lang.IllegalStateException: Transaction not active at org.hibernate.ejb.TransactionImpl.commit(TransactionImpl.java:69) at org.springframework.orm.jpa.ExtendedEntityManagerCreator$ExtendedEntityManagerSynchronization.afterCommit(ExtendedEntityManagerCreator.java:478) ... 93 more so again..... any ideea ?
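    For reference, a minimal sketch of how a specific transaction manager can be targeted from @Transactional via a qualifier, and how a container-managed EntityManager is typically injected. The bean name "txManagerI" and unit name "I2PU" are taken from the question; everything else is illustrative and assumes a plain Spring 3.1 / JPA setup, not necessarily the asker's exact configuration:
    ```java
    // Hypothetical sketch, not the asker's code.
    import java.util.List;

    import javax.persistence.EntityManager;
    import javax.persistence.PersistenceContext;
    import javax.persistence.Query;

    import org.springframework.transaction.annotation.Transactional;

    // The value attribute acts as a qualifier, binding the advice to the
    // "txManagerI" PlatformTransactionManager rather than the default one.
    @Transactional(value = "txManagerI", readOnly = true)
    public class IntermedServiceImpl implements iIntermedService {

        // A container-managed, transaction-scoped EntityManager (shared proxy) keeps the
        // persistence context bound to the Spring-managed transaction, instead of an
        // application-managed (extended) EntityManager that runs its own resource-local transaction.
        @PersistenceContext(unitName = "I2PU")
        private EntityManager em;

        @Override
        public Intermed getIntermed(int agentId, String queueIds) throws Exception {
            Query q = em.createQuery("SELECT i FROM Intermed i"); // criteria omitted for brevity
            @SuppressWarnings("unchecked")
            List<Intermed> results = q.getResultList();
            return results.size() == 1 ? results.get(0) : null;
        }
    }
    ```
    Whether this resolves the "Transaction not active" error depends on how the EntityManager in GenericIService is obtained; the sketch only shows the qualifier wiring that the question's update refers to.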

    Read the article

  • Node.js Adventure - Storage Services and Service Runtime

    - by Shaun
    When I described how to host a Node.js application on Windows Azure, one question that might be raised is how to consume the various Windows Azure services, such as storage, service bus, access control, etc. Interacting with Windows Azure services from Node.js is possible through the Windows Azure Node.js SDK, a module available on NPM. In this post I would like to describe how to use Windows Azure Storage (a.k.a. WAS) as well as the service runtime.
    Consume Windows Azure Storage
    Let's first have a look at how to consume WAS through Node.js. As we know from the previous post, we can host a Node.js application on Windows Azure Web Site (a.k.a. WAWS) as well as on Windows Azure Cloud Service (a.k.a. WACS). In theory, WAWS is also built on top of WACS worker roles with some more features, so in this post I will only demonstrate hosting in a WACS worker role. The Node.js code for consuming WAS can also be used when hosted on WAWS, but since there are no roles in WAWS, the code for consuming the service runtime mentioned in the next section cannot be used in a WAWS Node application.
    We can use the solution that I created in my last post. Alternatively, we can create a new Windows Azure project in Visual Studio with a worker role, add "node.exe" and "index.js", install the "express" and "node-sqlserver" modules, and mark all files as "Copy always". In order to use Windows Azure services we need the Windows Azure Node.js SDK, also known as the module named "azure", which can be installed through NPM. Once downloaded and installed, we need to include it in our worker role project and mark it as "Copy always". You can use my "Copy all always" tool mentioned in my last post to update the current worker role project file. You can also find the source code of this tool here.
    The source code of the Windows Azure SDK for Node.js can be found on its GitHub page. It contains two parts. One is a CLI tool, a cross-platform command line package for Mac and Linux to manage WAWS and Windows Azure Virtual Machines (a.k.a. WAVM). The other is a library for managing and consuming the various Windows Azure services, including tables, blobs, queues, service bus and the service runtime. I will not cover all of them, but will only demonstrate how to use tables and service runtime information in this post. You can find the full documentation of this SDK here.
    Back in Visual Studio, open "index.js" and let's continue our application from the last post, which was working against Windows Azure SQL Database (a.k.a. WASD). The code should look like this.
1: var express = require("express"); 2: var sql = require("node-sqlserver"); 3:  4: var connectionString = "Driver={SQL Server Native Client 10.0};Server=tcp:ac6271ya9e.database.windows.net,1433;Database=synctile;Uid=shaunxu@ac6271ya9e;Pwd={PASSWORD};Encrypt=yes;Connection Timeout=30;"; 5: var port = 80; 6:  7: var app = express(); 8:  9: app.configure(function () { 10: app.use(express.bodyParser()); 11: }); 12:  13: app.get("/", function (req, res) { 14: sql.open(connectionString, function (err, conn) { 15: if (err) { 16: console.log(err); 17: res.send(500, "Cannot open connection."); 18: } 19: else { 20: conn.queryRaw("SELECT * FROM [Resource]", function (err, results) { 21: if (err) { 22: console.log(err); 23: res.send(500, "Cannot retrieve records."); 24: } 25: else { 26: res.json(results); 27: } 28: }); 29: } 30: }); 31: }); 32:  33: app.get("/text/:key/:culture", function (req, res) { 34: sql.open(connectionString, function (err, conn) { 35: if (err) { 36: console.log(err); 37: res.send(500, "Cannot open connection."); 38: } 39: else { 40: var key = req.params.key; 41: var culture = req.params.culture; 42: var command = "SELECT * FROM [Resource] WHERE [Key] = '" + key + "' AND [Culture] = '" + culture + "'"; 43: conn.queryRaw(command, function (err, results) { 44: if (err) { 45: console.log(err); 46: res.send(500, "Cannot retrieve records."); 47: } 48: else { 49: res.json(results); 50: } 51: }); 52: } 53: }); 54: }); 55:  56: app.get("/sproc/:key/:culture", function (req, res) { 57: sql.open(connectionString, function (err, conn) { 58: if (err) { 59: console.log(err); 60: res.send(500, "Cannot open connection."); 61: } 62: else { 63: var key = req.params.key; 64: var culture = req.params.culture; 65: var command = "EXEC GetItem '" + key + "', '" + culture + "'"; 66: conn.queryRaw(command, function (err, results) { 67: if (err) { 68: console.log(err); 69: res.send(500, "Cannot retrieve records."); 70: } 71: else { 72: res.json(results); 73: } 74: }); 75: } 76: }); 77: }); 78:  79: app.post("/new", function (req, res) { 80: var key = req.body.key; 81: var culture = req.body.culture; 82: var val = req.body.val; 83:  84: sql.open(connectionString, function (err, conn) { 85: if (err) { 86: console.log(err); 87: res.send(500, "Cannot open connection."); 88: } 89: else { 90: var command = "INSERT INTO [Resource] VALUES ('" + key + "', '" + culture + "', N'" + val + "')"; 91: conn.queryRaw(command, function (err, results) { 92: if (err) { 93: console.log(err); 94: res.send(500, "Cannot retrieve records."); 95: } 96: else { 97: res.send(200, "Inserted Successful"); 98: } 99: }); 100: } 101: }); 102: }); 103:  104: app.listen(port); Now let’s create a new function, copy the records from WASD to table service. 1. Delete the table named “resource”. 2. Create a new table named “resource”. These 2 steps ensures that we have an empty table. 3. Load all records from the “resource” table in WASD. 4. For each records loaded from WASD, insert them into the table one by one. 5. Prompt to user when finished. In order to use table service we need the storage account and key, which can be found from the developer portal. Just select the storage account and click the Manage Keys button. Then create two local variants in our Node.js application for the storage account name and key. Since we need to use WAS we need to import the azure module. Also I created another variant stored the table name. In order to work with table service I need to create the storage client for table service. 
This is very similar as the Windows Azure SDK for .NET. As the code below I created a new variant named “client” and use “createTableService”, specified my storage account name and key. 1: var azure = require("azure"); 2: var storageAccountName = "synctile"; 3: var storageAccountKey = "/cOy9L7xysXOgPYU9FjDvjrRAhaMX/5tnOpcjqloPNDJYucbgTy7MOrAW7CbUg6PjaDdmyl+6pkwUnKETsPVNw=="; 4: var tableName = "resource"; 5: var client = azure.createTableService(storageAccountName, storageAccountKey); Now create a new function for URL “/was/init” so that we can trigger it through browser. Then in this function we will firstly load all records from WASD. 1: app.get("/was/init", function (req, res) { 2: // load all records from windows azure sql database 3: sql.open(connectionString, function (err, conn) { 4: if (err) { 5: console.log(err); 6: res.send(500, "Cannot open connection."); 7: } 8: else { 9: conn.queryRaw("SELECT * FROM [Resource]", function (err, results) { 10: if (err) { 11: console.log(err); 12: res.send(500, "Cannot retrieve records."); 13: } 14: else { 15: if (results.rows.length > 0) { 16: // begin to transform the records into table service 17: } 18: } 19: }); 20: } 21: }); 22: }); When we succeed loaded all records we can start to transform them into table service. First I need to recreate the table in table service. This can be done by deleting and creating the table through table client I had just created previously. 1: app.get("/was/init", function (req, res) { 2: // load all records from windows azure sql database 3: sql.open(connectionString, function (err, conn) { 4: if (err) { 5: console.log(err); 6: res.send(500, "Cannot open connection."); 7: } 8: else { 9: conn.queryRaw("SELECT * FROM [Resource]", function (err, results) { 10: if (err) { 11: console.log(err); 12: res.send(500, "Cannot retrieve records."); 13: } 14: else { 15: if (results.rows.length > 0) { 16: // begin to transform the records into table service 17: // recreate the table named 'resource' 18: client.deleteTable(tableName, function (error) { 19: client.createTableIfNotExists(tableName, function (error) { 20: if (error) { 21: error["target"] = "createTableIfNotExists"; 22: res.send(500, error); 23: } 24: else { 25: // transform the records 26: } 27: }); 28: }); 29: } 30: } 31: }); 32: } 33: }); 34: }); As you can see, the azure SDK provide its methods in callback pattern. In fact, almost all modules in Node.js use the callback pattern. For example, when I deleted a table I invoked “deleteTable” method, provided the name of the table and a callback function which will be performed when the table had been deleted or failed. Underlying, the azure module will perform the table deletion operation in POSIX async threads pool asynchronously. And once it’s done the callback function will be performed. This is the reason we need to nest the table creation code inside the deletion function. If we perform the table creation code after the deletion code then they will be invoked in parallel. Next, for each records in WASD I created an entity and then insert into the table service. Finally I send the response to the browser. Can you find a bug in the code below? I will describe it later in this post. 
1: app.get("/was/init", function (req, res) { 2: // load all records from windows azure sql database 3: sql.open(connectionString, function (err, conn) { 4: if (err) { 5: console.log(err); 6: res.send(500, "Cannot open connection."); 7: } 8: else { 9: conn.queryRaw("SELECT * FROM [Resource]", function (err, results) { 10: if (err) { 11: console.log(err); 12: res.send(500, "Cannot retrieve records."); 13: } 14: else { 15: if (results.rows.length > 0) { 16: // begin to transform the records into table service 17: // recreate the table named 'resource' 18: client.deleteTable(tableName, function (error) { 19: client.createTableIfNotExists(tableName, function (error) { 20: if (error) { 21: error["target"] = "createTableIfNotExists"; 22: res.send(500, error); 23: } 24: else { 25: // transform the records 26: for (var i = 0; i < results.rows.length; i++) { 27: var entity = { 28: "PartitionKey": results.rows[i][1], 29: "RowKey": results.rows[i][0], 30: "Value": results.rows[i][2] 31: }; 32: client.insertEntity(tableName, entity, function (error) { 33: if (error) { 34: error["target"] = "insertEntity"; 35: res.send(500, error); 36: } 37: else { 38: console.log("entity inserted"); 39: } 40: }); 41: } 42: // send the 43: console.log("all done"); 44: res.send(200, "All done!"); 45: } 46: }); 47: }); 48: } 49: } 50: }); 51: } 52: }); 53: }); Now we can publish it to the cloud and have a try. But normally we’d better test it at the local emulator first. In Node.js SDK there are three build-in properties which provides the account name, key and host address for local storage emulator. We can use them to initialize our table service client. We also need to change the SQL connection string to let it use my local database. The code will be changed as below. 1: // windows azure sql database 2: //var connectionString = "Driver={SQL Server Native Client 10.0};Server=tcp:ac6271ya9e.database.windows.net,1433;Database=synctile;Uid=shaunxu@ac6271ya9e;Pwd=eszqu94XZY;Encrypt=yes;Connection Timeout=30;"; 3: // sql server 4: var connectionString = "Driver={SQL Server Native Client 11.0};Server={.};Database={Caspar};Trusted_Connection={Yes};"; 5:  6: var azure = require("azure"); 7: var storageAccountName = "synctile"; 8: var storageAccountKey = "/cOy9L7xysXOgPYU9FjDvjrRAhaMX/5tnOpcjqloPNDJYucbgTy7MOrAW7CbUg6PjaDdmyl+6pkwUnKETsPVNw=="; 9: var tableName = "resource"; 10: // windows azure storage 11: //var client = azure.createTableService(storageAccountName, storageAccountKey); 12: // local storage emulator 13: var client = azure.createTableService(azure.ServiceClient.DEVSTORE_STORAGE_ACCOUNT, azure.ServiceClient.DEVSTORE_STORAGE_ACCESS_KEY, azure.ServiceClient.DEVSTORE_TABLE_HOST); Now let’s run the application and navigate to “localhost:12345/was/init” as I hosted it on port 12345. We can find it transformed the data from my local database to local table service. Everything looks fine. But there is a bug in my code. If we have a look on the Node.js command window we will find that it sent response before all records had been inserted, which is not what I expected. The reason is that, as I mentioned before, Node.js perform all IO operations in non-blocking model. When we inserted the records we executed the table service insert method in parallel, and the operation of sending response was also executed in parallel, even though I wrote it at the end of my logic. 
The correct logic should be, when all entities had been copied to table service with no error, then I will send response to the browser, otherwise I should send error message to the browser. To do so I need to import another module named “async”, which helps us to coordinate our asynchronous code. Install the module and import it at the beginning of the code. Then we can use its “forEach” method for the asynchronous code of inserting table entities. The first argument of “forEach” is the array that will be performed. The second argument is the operation for each items in the array. And the third argument will be invoked then all items had been performed or any errors occurred. Here we can send our response to browser. 1: app.get("/was/init", function (req, res) { 2: // load all records from windows azure sql database 3: sql.open(connectionString, function (err, conn) { 4: if (err) { 5: console.log(err); 6: res.send(500, "Cannot open connection."); 7: } 8: else { 9: conn.queryRaw("SELECT * FROM [Resource]", function (err, results) { 10: if (err) { 11: console.log(err); 12: res.send(500, "Cannot retrieve records."); 13: } 14: else { 15: if (results.rows.length > 0) { 16: // begin to transform the records into table service 17: // recreate the table named 'resource' 18: client.deleteTable(tableName, function (error) { 19: client.createTableIfNotExists(tableName, function (error) { 20: if (error) { 21: error["target"] = "createTableIfNotExists"; 22: res.send(500, error); 23: } 24: else { 25: async.forEach(results.rows, 26: // transform the records 27: function (row, callback) { 28: var entity = { 29: "PartitionKey": row[1], 30: "RowKey": row[0], 31: "Value": row[2] 32: }; 33: client.insertEntity(tableName, entity, function (error) { 34: if (error) { 35: callback(error); 36: } 37: else { 38: console.log("entity inserted."); 39: callback(null); 40: } 41: }); 42: }, 43: // send reponse 44: function (error) { 45: if (error) { 46: error["target"] = "insertEntity"; 47: res.send(500, error); 48: } 49: else { 50: console.log("all done"); 51: res.send(200, "All done!"); 52: } 53: } 54: ); 55: } 56: }); 57: }); 58: } 59: } 60: }); 61: } 62: }); 63: }); Run it locally and now we can find the response was sent after all entities had been inserted. Query entities against table service is simple as well. Just use the “queryEntity” method from the table service client and providing the partition key and row key. We can also provide a complex query criteria as well, for example the code here. In the code below I queried an entity by the partition key and row key, and return the proper localization value in response. 1: app.get("/was/:key/:culture", function (req, res) { 2: var key = req.params.key; 3: var culture = req.params.culture; 4: client.queryEntity(tableName, culture, key, function (error, entity) { 5: if (error) { 6: res.send(500, error); 7: } 8: else { 9: res.json(entity); 10: } 11: }); 12: }); And then tested it on local emulator. Finally if we want to publish this application to the cloud we should change the database connection string and storage account. For more information about how to consume blob and queue service, as well as the service bus please refer to the MSDN page.   Consume Service Runtime As I mentioned above, before we published our application to the cloud we need to change the connection string and account information in our code. 
But if you had played with WACS you should have known that the service runtime provides the ability to retrieve configuration settings, endpoints and local resource information at runtime. Which means we can have these values defined in CSCFG and CSDEF files and then the runtime should be able to retrieve the proper values. For example we can add some role settings though the property window of the role, specify the connection string and storage account for cloud and local. And the can also use the endpoint which defined in role environment to our Node.js application. In Node.js SDK we can get an object from “azure.RoleEnvironment”, which provides the functionalities to retrieve the configuration settings and endpoints, etc.. In the code below I defined the connection string variants and then use the SDK to retrieve and initialize the table client. 1: var connectionString = ""; 2: var storageAccountName = ""; 3: var storageAccountKey = ""; 4: var tableName = ""; 5: var client; 6:  7: azure.RoleEnvironment.getConfigurationSettings(function (error, settings) { 8: if (error) { 9: console.log("ERROR: getConfigurationSettings"); 10: console.log(JSON.stringify(error)); 11: } 12: else { 13: console.log(JSON.stringify(settings)); 14: connectionString = settings["SqlConnectionString"]; 15: storageAccountName = settings["StorageAccountName"]; 16: storageAccountKey = settings["StorageAccountKey"]; 17: tableName = settings["TableName"]; 18:  19: console.log("connectionString = %s", connectionString); 20: console.log("storageAccountName = %s", storageAccountName); 21: console.log("storageAccountKey = %s", storageAccountKey); 22: console.log("tableName = %s", tableName); 23:  24: client = azure.createTableService(storageAccountName, storageAccountKey); 25: } 26: }); In this way we don’t need to amend the code for the configurations between local and cloud environment since the service runtime will take care of it. At the end of the code we will listen the application on the port retrieved from SDK as well. 1: azure.RoleEnvironment.getCurrentRoleInstance(function (error, instance) { 2: if (error) { 3: console.log("ERROR: getCurrentRoleInstance"); 4: console.log(JSON.stringify(error)); 5: } 6: else { 7: console.log(JSON.stringify(instance)); 8: if (instance["endpoints"] && instance["endpoints"]["nodejs"]) { 9: var endpoint = instance["endpoints"]["nodejs"]; 10: app.listen(endpoint["port"]); 11: } 12: else { 13: app.listen(8080); 14: } 15: } 16: }); But if we tested the application right now we will find that it cannot retrieve any values from service runtime. This is because by default, the entry point of this role was defined to the worker role class. In windows azure environment the service runtime will open a named pipeline to the entry point instance, so that it can connect to the runtime and retrieve values. But in this case, since the entry point was worker role and the Node.js was opened inside the role, the named pipeline was established between our worker role class and service runtime, so our Node.js application cannot use it. To fix this problem we need to open the CSDEF file under the azure project, add a new element named Runtime. Then add an element named EntryPoint which specify the Node.js command line. So that the Node.js application will have the connection to service runtime, then it’s able to read the configurations. Start the Node.js at local emulator we can find it retrieved the connections, storage account for local. 
    And if we publish our application to Azure, it works with WASD and the storage service through the cloud configurations.
    Summary
    In this post I demonstrated how to use the Windows Azure SDK for Node.js to interact with the storage service, especially the table service. I also demonstrated how to use the WACS service runtime to retrieve the configuration settings and the endpoint information. In order to make the service runtime available to my Node.js application, I needed to create an EntryPoint element in the CSDEF file and set "node.exe" as the entry point.
    I used five posts to introduce and demonstrate how to run a Node.js application on the Windows platform, and how to use Windows Azure Web Site and a Windows Azure Cloud Service worker role to host our Node.js application. I also described how to work with other services provided by the Windows Azure platform through the Windows Azure SDK for Node.js. Node.js is a very new and young network application platform, but since it is simple, easy to learn and deploy, and uses a single-threaded non-blocking IO model, it has become more and more popular for web application and web service development, especially for IO-intensive projects. And as Node.js is very good at scaling out, it is all the more useful on a cloud computing platform. Using Node.js on the Windows platform is new, too. The modules for SQL database and the Windows Azure SDK are still under development and enhancement: "node-sqlserver" doesn't support SQL parameters yet, while "azure" does support using a storage connection string to create the storage client. But Microsoft is working on making them easier to use and on adding more features and functionality.
    PS: you can download the source code here. You can download the source code of my "Copy all always" tool here.
    Hope this helps, Shaun
    All documents and related graphics and code are provided "AS IS" without warranty of any kind. Copyright © Shaun Ziyan Xu. This work is licensed under the Creative Commons License.

    Read the article

  • Web image loaded by thread in android

    - by Bostjan
    I have an extended BaseAdapter in a ListActivity: private static class RequestAdapter extends BaseAdapter { and some handlers and runnables defined in it // Need handler for callbacks to the UI thread final Handler mHandler = new Handler(); // Create runnable for posting final Runnable mUpdateResults = new Runnable() { public void run() { loadAvatar(); } }; protected static void loadAvatar() { // TODO Auto-generated method stub //ava.setImageBitmap(getImageBitmap("URL"+pic)); buddyIcon.setImageBitmap(avatar); } In the getView function of the Adapter, I'm getting the view like this: if (convertView == null) { convertView = mInflater.inflate(R.layout.messageitem, null); // Creates a ViewHolder and store references to the two children views // we want to bind data to. holder = new ViewHolder(); holder.username = (TextView) convertView.findViewById(R.id.username); holder.date = (TextView) convertView.findViewById(R.id.dateValue); holder.time = (TextView) convertView.findViewById(R.id.timeValue); holder.notType = (TextView) convertView.findViewById(R.id.notType); holder.newMsg = (ImageView) convertView.findViewById(R.id.newMsg); holder.realUsername = (TextView) convertView.findViewById(R.id.realUsername); holder.replied = (ImageView) convertView.findViewById(R.id.replied); holder.msgID = (TextView) convertView.findViewById(R.id.msgID_fr); holder.avatar = (ImageView) convertView.findViewById(R.id.buddyIcon); holder.msgPreview = (TextView) convertView.findViewById(R.id.msgPreview); convertView.setTag(holder); } else { // Get the ViewHolder back to get fast access to the TextView // and the ImageView. holder = (ViewHolder) convertView.getTag(); } and the image is getting loaded this way: Thread sepThread = new Thread() { public void run() { String ava; ava = request[8].replace(".", "_micro."); Log.e("ava thread",ava+", username: "+request[0]); avatar = getImageBitmap(URL+ava); buddyIcon = holder.avatar; mHandler.post(mUpdateResults); //holder.avatar.setImageBitmap(getImageBitmap(URL+ava)); } }; sepThread.start(); Now, the problem I'm having is that if there are more items that need to display the same picture, not all of those pictures get displayed. When you scroll up and down the list maybe you end up filling all of them. When I tried the commented out line (holder.avatar.setImageBitmap...) the app sometimes force closes with "only the thread that created the view can request...". But only sometimes. Any idea how I can fix this? Either option.
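    A common way to avoid both symptoms (rows that share a picture not all getting it, and the "only the thread that created a view hierarchy can touch its views" crash) is to drop the shared buddyIcon/avatar fields, capture the target ImageView per row, tag it with its URL, and post the result back through the Handler. Below is a hypothetical sketch of what the thread block inside getView() could look like instead; getImageBitmap(), URL, request[8] and mHandler are taken from the question, the rest is illustrative:
    ```java
    // Inside getView(), after the ViewHolder has been resolved.
    final String avatarUrl = URL + request[8].replace(".", "_micro.");
    final ImageView target = holder.avatar;

    // Remember which URL this ImageView is waiting for; recycled views get re-tagged.
    target.setTag(avatarUrl);

    new Thread(new Runnable() {
        @Override
        public void run() {
            final Bitmap bmp = getImageBitmap(avatarUrl);   // network I/O off the UI thread
            // mHandler was created on the UI thread, so this Runnable runs on the UI thread.
            mHandler.post(new Runnable() {
                @Override
                public void run() {
                    // Only set the image if this view still represents the same row.
                    if (avatarUrl.equals(target.getTag()) && bmp != null) {
                        target.setImageBitmap(bmp);
                    }
                }
            });
        }
    }).start();
    ```
    In practice an in-memory cache (for example a HashMap<String, Bitmap> checked before starting the thread) also avoids downloading the same picture repeatedly for rows that share it.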

    Read the article

  • Linker Issues with boost::thread under linux using Eclipse and CMake

    - by OcularProgrammer
    I'm in the process of attempting to port some code across from PC to Ubuntu, and am having some issues due to limited experience developing under linux. We use CMake to generate all our build stuff. Under windows I'm making VS2010 projects, and under Linux I'm making Eclipse projects. I've managed to get my OpenCV stuff ported across successfully, but am having major headaches trying to port my threaded boost apps. Just so we're clear, the steps I have followed so-far on a clean Ubuntu 12 installation. (I've done 2 clean re-installs to try and fix potential library cock-ups, now I'm just giving up and asking): Install Eclipse and Eclipse CDT using my package manager Install CMake and CMake Gui using my package manager Install libboost-all-dev using my package manager So-far that's all I've done. I can create the eclipse project using CMake with no errors, so CMake is successfully finding my boost install. When I try and build through eclipse is when I get issues; The app I'm attempting to build uses boost::asio for some UDP I/O and boost::thread to create worker threads for the asio I/O services. I can successfully compile each module, but when I come to link I get spammed with errors such as: /usr/bin/c++ CMakeFiles/RE05DevelopmentDemo.dir/main.cpp.o CMakeFiles/RE05DevelopmentDemo.dir/RE05FusionListener/RE05FusionListener.cpp.o CMakeFiles/RE05DevelopmentDemo.dir/NewEye/NewEye.cpp.o -o RE05DevelopmentDemo -rdynamic -Wl,-Bstatic -lboost_system-mt -lboost_date_time-mt -lboost_regex-mt -lboost_thread-mt -Wl,-Bdynamic /usr/lib/gcc/x86_64-linux-gnu/4.6/../../../../lib/libboost_thread-mt.a(thread.o): In function `void boost::call_once<void (*)()>(boost::once_flag&, void (*)()) [clone .constprop.98]': make[2]: Leaving directory `/home/david/Code/Build/Support/RE05DevDemo' (.text+0xc8): undefined reference to `pthread_key_create' /usr/lib/gcc/x86_64-linux-gnu/4.6/../../../../lib/libboost_thread-mt.a(thread.o): In function `boost::this_thread::interruption_enabled()': (.text+0x540): undefined reference to `pthread_getspecific' make[1]: Leaving directory `/home/david/Code/Build/Support/RE05DevDemo' /usr/lib/gcc/x86_64-linux-gnu/4.6/../../../../lib/libboost_thread-mt.a(thread.o): In function `boost::this_thread::disable_interruption::disable_interruption()': (.text+0x570): undefined reference to `pthread_getspecific' /usr/lib/gcc/x86_64-linux-gnu/4.6/../../../../lib/libboost_thread-mt.a(thread.o): In function `boost::this_thread::disable_interruption::disable_interruption()': (.text+0x59f): undefined reference to `pthread_getspecific' Some Gotchas that I have collected from other StackOverflow posts and have already checked: The boost libs are all present at /usr/lib I am not getting any compile errors for inability to find the boost headers, so they must be getting found. I am trying to link statically, but I believe eclipse should be passing the correct arguments to make that happen since my CMakeLists.txt includes SET(Boost_USE_STATIC_LIBS ON) I'm officially out of ideas here, I have tried doing local builds of boost and a bunch of other stuff with no more success. I even re-installed Ubuntu to ensure I haven't completely fracked the libs directories and links with multiple weird versions or anything else. Any help would be muchly appreciated.

    Read the article

  • Sync between local service with a thread and an activity

    - by Henrik
    Hello all, I'm trying to think of a way to sync between a local service and the main activity. The local service has: a thread with a socket connection that could receive data at any time, and a list/array with data. At any time the socket could receive data and add it to the list. The activity needs to display this data, so when the activity starts up it needs to attach to or start the local service and fetch the list. It also needs to be notified when the list is updated. I think I would need to synchronize my list somehow so the local service does not add a new entry while the activity is fetching the list when connecting to the service. Any ideas? Thanks.
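    One common pattern, sketched below purely as an illustration (class and method names are hypothetical, not from the question): bind to the local service, guard the list with a lock (or hand out a copy), and let the activity register a listener that the service invokes on the main thread whenever the socket thread adds an entry.
    ```java
    // Hypothetical sketch of a bound local service sharing a list with an activity.
    import java.util.ArrayList;
    import java.util.List;

    import android.app.Service;
    import android.content.Intent;
    import android.os.Binder;
    import android.os.Handler;
    import android.os.IBinder;
    import android.os.Looper;

    public class DataService extends Service {

        public interface Listener {
            void onNewItem(String item);          // called on the main thread
        }

        private final IBinder binder = new LocalBinder();
        private final List<String> items = new ArrayList<String>();
        private final Handler mainHandler = new Handler(Looper.getMainLooper());
        private volatile Listener listener;

        public class LocalBinder extends Binder {
            public DataService getService() { return DataService.this; }
        }

        @Override
        public IBinder onBind(Intent intent) { return binder; }

        // Called from the socket thread whenever data arrives.
        void onSocketData(final String item) {
            synchronized (items) {                // socket thread can't add while a snapshot is taken
                items.add(item);
            }
            mainHandler.post(new Runnable() {
                @Override
                public void run() {
                    if (listener != null) listener.onNewItem(item);
                }
            });
        }

        // The activity calls this after binding to get the current snapshot.
        public List<String> getItemsSnapshot() {
            synchronized (items) {
                return new ArrayList<String>(items);
            }
        }

        public void setListener(Listener l) { listener = l; }
    }
    ```
    The activity would bind with bindService(..., Context.BIND_AUTO_CREATE), take the snapshot in onServiceConnected(), and register/unregister the listener in onResume()/onPause().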

    Read the article

  • Test server on a local network with XAMPP

    - by hopscotch1978
    Hi, I'm not very proficient with networks and could use some help. I've got a Win 7 desktop with XAMPP which acts as my local dev machine. I've configured a virtual host on the desktop which I'm able to access fine. If I'm understanding things correctly, the virtual host uses port 80 (<VirtualHost 127.0.0.1:80>). I've just tried to configure a separate Win XP laptop on the local wireless network to connect to the main desktop for testing purposes. I've added the IP address and virtual host name to my Hosts file on the laptop. My virtual host is imaginatively named "virtualhost1". When I type this into my laptop browser, it connects correctly to the main desktop and I get the XAMPP welcome screen. But I can't seem to get to the actual site, just the XAMPP welcome screen. It kind of jumps the browser to http://virtualhost1/xampp/. I think it's a port issue of some sort but I have no idea how to resolve it. I would get the same XAMPP welcome screen on my desktop if I omitted ":80" from the virtual host declaration. On my main desktop, typing "virtualhost1" to the browser address bar gives me the site correctly, not the XAMPP welcome screen. Help would be appreciated. Thank you.

    Read the article

  • Howto configure openSuSE firewall to route local traffic to local ports

    - by Eduard Wirch
    I have openSUSE 11.3 installed. I'm using the openSUSE firewall configuration mechanism (/etc/sysconfig/SuSEfirewall2). I have an HTTP server application running on port 8080, and I want the HTTP service to be accessible on port 80. I created a redirect rule using: FW_REDIRECT="0/0,0/0,tcp,80,8080" This works fine for every request coming from outside, but it doesn't work for local requests (for example: wget http://myserver/). Is there a way to tell the firewall to redirect local requests addressed to port 80 to port 8080 (using the SUSE firewall configuration file)?

    Read the article

  • USB drives not recognized all of a sudden

    - by Siddharth
    I have tried most of the advice on Ask Ubuntu and other sites, from enabling usb_storage to fdisk -l, but I am unable to find steps to get it working again.
    lsusb results:
    Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
    Bus 002 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
    Bus 003 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
    Bus 004 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
    Bus 005 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
    Bus 004 Device 002: ID 413c:3012 Dell Computer Corp. Optical Wheel Mouse
    Bus 005 Device 002: ID 413c:2105 Dell Computer Corp. Model L100 Keyboard
    Bus 001 Device 005: ID 8564:1000
    dmesg | tail reports:
    [ 69.567948] usb 1-4: USB disconnect, device number 4
    [ 74.084041] usb 1-6: new high-speed USB device number 5 using ehci_hcd
    [ 74.240484] Initializing USB Mass Storage driver...
    [ 74.256033] scsi5 : usb-storage 1-6:1.0
    [ 74.256145] usbcore: registered new interface driver usb-storage
    [ 74.256147] USB Mass Storage support registered.
    [ 74.257290] usbcore: deregistering interface driver usb-storage
    fdisk -l reports:
    Device Boot Start End Blocks Id System
    /dev/sda1 * 2048 972656639 486327296 83 Linux
    /dev/sda2 972658686 976771071 2056193 5 Extended
    /dev/sda5 972658688 976771071 2056192 82 Linux swap / Solaris
    I think I need steps to install the usb_storage module and get it working.
    Edit: Running sudo modprobe -v usb-storage reports:
    insmod /lib/modules/3.2.0-48-generic-pae/kernel/drivers/usb/storage/usb-storage.ko
    Still no USB drive is mounted, nor does a device show up in /dev. Any step-by-step process to debug and fix this would be really helpful. Thanks.

    Read the article

  • android shut-down errors / thread problems

    - by iQue
    I'm starting to deal with some stuff in my game that I thought were "minor problems", and one of these is that I get an error every time I shut down my game that looks like this: 09-05 21:40:58.320: E/AndroidRuntime(30401): FATAL EXCEPTION: Thread-4898 09-05 21:40:58.320: E/AndroidRuntime(30401): java.lang.NullPointerException 09-05 21:40:58.320: E/AndroidRuntime(30401): at nielsen.happy.shooter.MainGamePanel.render(MainGamePanel.java:94) 09-05 21:40:58.320: E/AndroidRuntime(30401): at nielsen.happy.shooter.MainThread.run(MainThread.java:101) These lines point to my GameView's rendering method and the line in my thread that calls it. I'm guessing something in my surfaceDestroyed(SurfaceHolder holder) is wrong, or maybe I'm not ending the thread in the right place. My surfaceDestroyed method: public void surfaceDestroyed(SurfaceHolder holder) { Log.d(TAG, "Surface is being destroyed"); boolean retry = true; ((MainThread)thread).setRunning(false); while (retry) { try { ((MainThread)thread).join(); retry = false; } catch (InterruptedException e) { } } Log.d(TAG, "Thread was shut down cleanly"); } Also, in my activity for this view, onPause, onDestroy and onStop are empty; do I maybe need to add something there? The crash occurs when I press the home button on the phone, or any other button that makes the game stop, but as I understand it onPause is called when you press the home button. This is really new territory for me, so I'm sorry if I'm missing something obvious or something really big. I'm adding my surfaceCreated method as well, since that is where I start this thread: public void surfaceCreated(SurfaceHolder holder) { controls = new GameControls(this); setOnTouchListener(controls); timer1.schedule(new Task(this), 0, delay1); thread.setRunning(true); thread.start(); } As long as this is running, my game is rendering and updating.
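
    A sketch of the usual guard (assuming MainThread locks a canvas from the SurfaceHolder on every pass, which is the common pattern): the crash typically comes from the thread finishing one last frame after the surface is gone, so the render call is protected in the loop rather than in surfaceDestroyed().

        // Inside MainThread.run() -- sketch only; field names are assumptions
        while (running) {
            Canvas canvas = null;
            try {
                canvas = surfaceHolder.lockCanvas();
                if (canvas != null) {              // lockCanvas() returns null once the surface is destroyed
                    synchronized (surfaceHolder) {
                        gamePanel.render(canvas);  // only render on a live canvas
                        gamePanel.update();
                    }
                }
            } finally {
                if (canvas != null) {
                    surfaceHolder.unlockCanvasAndPost(canvas);
                }
            }
        }

    Calling thread.setRunning(false) from the activity's onPause(), in addition to surfaceDestroyed(), is also a common belt-and-braces step, since the home button pauses the activity before the surface goes away.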

    Read the article

  • Intel Rapid Storage Technology (pre-OS) driver installation

    - by Nero theZero
    My desktop machine is built on a Gigabyte GA-Z87-UD3H, and Gigabyte provides the latest driver for Intel Rapid Storage Technology (IRST), which I installed after installing the OS. The same goes for my Lenovo ThinkPad T420. For both machines, checking the controller device under the IDE ATA/ATAPI Controllers section in Device Manager, I see the driver has been updated to the latest version. I set the SATA controller to AHCI in the BIOS. On the desktop machine I have one WD 2TB Black and one WD 3TB Green. I don't use RAID, and have no plans to in the near future, but according to Intel, IRST improves performance in single-disk scenarios too. Now I have the following questions: What is the actual purpose of the IRST (pre-OS install) driver that isn't served by the post-OS driver I installed? There must be some difference, otherwise there wouldn't be a pre-OS version of the driver, right? If I follow the pre-OS procedure (loading the drivers at OS-installation time) and complete the OS installation successfully, do I still need the post-OS driver? After installing the post-OS one I got a quick-launch icon that runs the IRST configuration application; where do I get that after installing only the pre-OS driver? As it is "pre-OS", when I load it at OS-installation time, does it update anything at the BIOS level or anywhere other than the HDD? I ask because I'm going to dual-boot Windows 7 with Windows 8.1, and after installing Windows 7, when I install Windows 8.1 and load the IRST driver for it, is there any chance of overwriting or OS incompatibility? In short, is there anything specific to follow while installing the second OS?

    Read the article

  • Is basing storage requirements on IOPS sufficient?

    - by Boden
    The current system in question is running SBS 2003, and is going to be migrated on new hardware to SBS 2008. Currently I'm seeing on average 200-300 disk transfers per second total across all the arrays in the system. The array seeing the bulk of activity is a 6 disk 7200RPM RAID 6 and it struggles to keep up during high traffic times (idle time often only 10-20%; response times peaking 20-50+ ms). Based on some rough calculations this makes sense (avg ~245 IOPS on this array at 70/30 read to write ratio). I'm considering using a much simpler disk configuration using a single RAID 10 array of 10K disks. Using the same parameters for my calculations above, I'm getting 583 average random IOPS / sec. Granted SBS 2008 is not the same beast as 2003, but I'd like to make the assumption that it'll be similar in terms of disk performance, if not better (Exchange 2007 is easier on the disk and there's no ISA server). Am I correct in believing that the proposed system will be sufficient in terms of performance, or am I missing something? I've read so much about recommended disk configurations for various products like Exchange, and they often mention things like dedicating spindles to logs, etc. I understand the reasoning behind this, but if I've got more than enough random I/O overhead, does it really matter? I've always at the very least had separate spindles for the OS, but I could really reduce cost and complexity if I just had a single, good performing array. So as not to make you guys do my job for me, the generic version of this question is: if I have a projected IOPS figure for a new system, is it sufficient to use this value alone to spec the storage, ignoring "best practice" configurations? (given similar technology, not going from DAS to SAN or anything)
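
    For reference, the rough arithmetic behind figures like the ~245 IOPS above is the standard write-penalty formula; a sketch, assuming roughly 100 random IOPS per 7200 RPM spindle and a RAID 6 write penalty of 6:

        \text{effective IOPS} \approx \frac{n_{\text{disks}} \times \text{IOPS}_{\text{disk}}}{R + W \times p}
        \qquad \frac{6 \times 100}{0.70 + 0.30 \times 6} \approx 240

    The same formula with a RAID 10 penalty of 2 gives the proposed array's figure once you plug in your spindle count and per-disk assumption, so the two designs can at least be compared on equal terms.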

    Read the article

  • Enterprise class storage best practices

    - by churnd
    One thing that has always perplexed me is storage best practices. Filesystems brag about how they can be petabytes or exabytes in size. Yet, I do not know many sysadmins who are willing to let a single volume grow over several terabytes. I do know the primary reason behind this is how long it would take to rebuild the array should a drive fail. The more drives in a single LUN, the longer this takes and the greater your risk of losing another drive while the rebuild is taking place. Then there are usage reasons. Admins will carve out a LUN based on how much space they think needs to be allocated to the project. It seems more practical to me for the LUN to be one large array and to use quotas. I understand this wouldn't satisfy every requirement (iSCSI), but I see a lot of NAS systems (NFS) managed this way. I also understand that the underlying volumes can be grown/shrunk as needed quite easily, but wouldn't it be less "risky" to use quotas rather than manipulating volumes and bringing possible data loss into the equation? There may be some other reasons I'm missing, so please enlighten me. Can we not expect filesystems to ever be so large? Are we waiting for the hardware to get faster to cut down on rebuild times?

    Read the article

  • How to diagnose storage system scaling problems?

    - by Unknown
    We are currently testing the maximum sequential read throughput of a storage system (48 disks total behind two HP P2000 arrays) connected to an HP DL580 G7 running RHEL 5 with 128 GB of memory. Initial testing has mainly been done by running dd commands like this: dd if=/dev/mapper/mpath1 of=/dev/null bs=1M count=3000 in parallel for each disk. However, we have been unable to scale the results from one array (maximum throughput of 1.3 GB/s) to two (almost the same throughput). Each array is connected to a dedicated host bus adapter, so they should not be the bottleneck. The disks are currently in JBOD configuration, so each disk can be addressed directly. I have two questions: Is running multiple dd commands in parallel really a good way to test maximum read throughput? We have noticed very high SWAPIN-% numbers in iotop, which I find hard to explain because the target is /dev/null. How should we proceed in trying to find the reason for the scaling problem? Do you think the server itself is the bottleneck here, or could there be some Linux parameters that we have overlooked?
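
    One quick check worth making (a sketch, not a definitive diagnosis): with buffered reads, part of what dd measures is the page cache and readahead rather than the arrays, and 128 GB of RAM leaves a lot of room for that. Re-running the same test with O_DIRECT takes the cache out of the picture:

        # Direct-I/O variant of the same test; run one instance per multipath device
        # (device names are examples, adjust to your setup) and compare the totals
        for dev in /dev/mapper/mpath1 /dev/mapper/mpath2; do
            dd if="$dev" of=/dev/null bs=1M count=3000 iflag=direct &
        done
        wait

    A dedicated tool such as fio is generally a better fit for this kind of scaling test than parallel dd, since it aggregates the per-device results for you.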

    Read the article

  • Java Stop Server Thread

    - by ikurtz
    The following code is server code in my app: private int serverPort; private Thread serverThread = null; public void networkListen(int port){ serverPort = port; if (serverThread == null){ Runnable serverRunnable = new ServerRunnable(); serverThread = new Thread(serverRunnable); serverThread.start(); } else { } } public class ServerRunnable implements Runnable { public void run(){ try { //networkConnected = false; //netMessage = "Listening for Connection"; //networkMessage = new NetworkMessage(networkConnected, netMessage); //setChanged(); //notifyObservers(networkMessage); ServerSocket serverSocket = new ServerSocket(serverPort, backlog); commSocket = serverSocket.accept(); serverSocket.close(); serverSocket = null; //networkConnected = true; //netMessage = "Connected: " + commSocket.getInetAddress().getHostAddress() + ":" + //commSocket.getPort(); //networkMessage = new NetworkMessage(networkConnected, netMessage); //setChanged(); //notifyObservers(networkMessage); } catch (IOException e){ //networkConnected = false; //netMessage = "ServerRunnable Network Unavailable"; //System.out.println(e.getMessage()); //networkMessage = new NetworkMessage(networkConnected, netMessage); //setChanged(); //notifyObservers(networkMessage); } } } The code sort of works, i.e. if I'm attempting a straight connection both ends communicate and update. The issue is that while I'm listening for a connection, if I want to quit listening the server thread continues running and causes problems. I know I should not use .stop() on a thread, so I was wondering what the solution would look like with this in mind. EDIT: commented out unneeded code.
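
    One common pattern, sketched under the assumption that the ServerSocket can be promoted to a field: keep a reference to the listening socket and close it from whichever thread wants to stop listening; the blocked accept() then throws a SocketException and the runnable can exit cleanly, with no Thread.stop() involved.

        // Sketch -- needs java.net.ServerSocket, java.net.SocketException, java.io.IOException
        private volatile ServerSocket listenSocket;    // hypothetical field replacing the local variable

        public void networkStopListening() {
            try {
                if (listenSocket != null) {
                    listenSocket.close();              // makes a pending accept() throw SocketException
                }
            } catch (IOException ignored) { }
            serverThread = null;
        }

        // and inside ServerRunnable.run():
        //     listenSocket = new ServerSocket(serverPort, backlog);
        //     try {
        //         commSocket = listenSocket.accept();
        //         listenSocket.close();
        //     } catch (SocketException e) {
        //         return;                             // close() was called from outside: stop quietly
        //     }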

    Read the article

  • Return Double from Boost thread

    - by Benedikt Wutzi
    Hi, I have a Boost thread which should return a double. The function looks like this: void analyser::findup(const double startwl, const double max, double &myret){ this->data.begin(); for(int i = (int)data.size() ; i >= 0;i--){ if(this->data[i].lambda > startwl){ if(this->data[i].db >= (max-30)) { myret = this->data[i+1].lambda; std::cout <<"in thread " << myret << std::endl; return; } } } } This function is called by another function: void analyser::start_find_up(const double startwl, const double max){ double tmp = -42.0; boost::thread up(&analyser::findup,*this, startwl,max,tmp); std::cout << "before join " << tmp << std::endl; up.join(); std::cout << "after join " << tmp << std::endl; } Anyway, I've tried and googled almost everything, but I can't get it to return a value. The output looks like this right now: before join -42 in thread 843.487 after join -42 Thanks for any help.
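
    The likely culprit, offered as a sketch rather than a confirmed diagnosis: boost::thread copies each argument into the new thread, so findup writes into a private copy of tmp while the caller's tmp keeps its -42. Wrapping the argument in boost::ref (and passing this instead of *this, to avoid copying the analyser object as well) forwards a real reference:

        #include <boost/thread.hpp>
        #include <boost/ref.hpp>

        void analyser::start_find_up(const double startwl, const double max) {
            double tmp = -42.0;
            // boost::ref keeps tmp a reference instead of a by-value copy
            boost::thread up(&analyser::findup, this, startwl, max, boost::ref(tmp));
            up.join();                 // join before reading tmp, so the thread's write is visible
            std::cout << "after join " << tmp << std::endl;
        }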

    Read the article

  • Thread-safety of read-only memory access

    - by Edmund
    I've implemented the Barnes-Hut gravity algorithm in C as follows: Build a tree of clustered stars. For each star, traverse the tree and apply the gravitational forces from each applicable node. Update the star velocities and positions. Stage 2 is the most expensive stage, and so is implemented in parallel by dividing the set of stars. E.g. with 1000 stars and 2 threads, I have one thread processing the first 500 stars and the second thread processing the second 500. In practice this works: it speeds the computation by about 30% with two threads on a two-core machine, compared to the non-threaded version. Additionally, it yields the same numerical results as the original non-threaded version. My concern is that the two threads are accessing the same resource (namely, the tree) simultaneously. I have not added any synchronisation to the thread workers, so it's likely they will attempt to read from the same location at some point. Although access to the tree is strictly read-only I am not 100% sure it's safe. It has worked when I've tested it but I know this is no guarantee of correctness! Questions Do I need to make a private copy of the tree for each thread? Even if it is safe, are there performance problems of accessing the same memory from multiple threads?

    Read the article

  • Passing variables from a thread to another form using C#

    - by kirstom14
    Hi, I am trying to pass 2 variables from a thread, in the MainForm, to another form, but when doing so an error occurs. private void TrackingThread( ) { float targetX = 0; float targetY = 0; while ( true ) { camera1Acquired.WaitOne( ); camera2Acquired.WaitOne( ); lock ( this ) { // stop the thread if it was signaled if ( ( x1 == -1 ) && ( y1 == -1 ) && ( x2 == -1 ) && ( y2 == -1 ) ) { break; } // get middle point targetX = ( x1 + x2 ) / 2; targetY = ( y1 + y2 ) / 2; } if (directionForm != null) { directionForm.RunMotors(targetX, targetY); } } } In the form, directionForm, I am simply displaying the variables targetX and targetY. The code for directionForm.RunMotors() is the following: public void RunMotors(float x, float y) { label1.Text = "X-ordinate " + x.ToString(); label2.Text = "Y-ordinate " + y.ToString(); } The error happens when I am trying to display the two variables: "InvalidOperationException was unhandled: Cross-thread operation not valid: Control 'label1' accessed from a thread other than the thread it was created on." What shall I do? Thanks in advance.
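
    The standard fix is to marshal the label updates back onto the form's own thread; a sketch of RunMotors rewritten that way (same signature, standard WinForms API):

        public void RunMotors(float x, float y)
        {
            if (InvokeRequired)
            {
                // called from the tracking thread: re-invoke this method on the UI thread
                BeginInvoke(new Action<float, float>(RunMotors), x, y);
                return;
            }
            label1.Text = "X-ordinate " + x.ToString();
            label2.Text = "Y-ordinate " + y.ToString();
        }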

    Read the article

  • Need a Java based interruptible timer thread

    - by LambeauLeap
    I have a main program which runs a script on the target device (smartphone) and waits in a while loop for stdout messages. However, in this particular case some of the heartbeat messages on stdout could be spaced almost 45 seconds to a minute apart. Something like: stream = device.runProgram(RESTORE_LOGS, new String[] {}); stream.flush(); String line = stream.readLine(); while (line.compareTo("") != 0) { reporter.commentOnJob(jobId, line); line = stream.readLine(); } So, I want to be able to start a new interruptible timer thread after reading a line from stdout, with a required sleep window. Upon reading a new line, I want to be able to interrupt/stop it (I'm having trouble killing the process), handle the new line of stdout text, and restart the timer. And in the event that I am not able to read a line within the timer window (say 45 seconds), I want a way to get out of my while loop as well. I already tried the thread.run/thread.interrupt approach, but I'm having trouble killing and starting a new thread. Is this the best way, or am I missing something obvious?
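
    One sketch of the pattern (assuming stream.readLine() is the blocking call and may throw a checked exception): hand each read to a single-thread executor and bound it with Future.get(timeout); a timeout then drops you out of the loop without ever having to kill a thread.

        // Sketch -- imports from java.util.concurrent; 'stream', 'reporter', 'jobId' as in the question
        ExecutorService reader = Executors.newSingleThreadExecutor();
        Callable<String> readOne = new Callable<String>() {
            public String call() throws Exception {
                return stream.readLine();
            }
        };
        try {
            while (true) {
                Future<String> next = reader.submit(readOne);
                String line;
                try {
                    line = next.get(45, TimeUnit.SECONDS);   // the heartbeat window
                } catch (TimeoutException e) {
                    next.cancel(true);                        // no heartbeat in time: give up on this read
                    break;                                    // and leave the loop
                }
                if (line == null || line.length() == 0) {
                    break;
                }
                reporter.commentOnJob(jobId, line);
            }
        } catch (Exception e) {
            // InterruptedException / ExecutionException: log or rethrow as appropriate
        } finally {
            reader.shutdownNow();
        }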

    Read the article

  • .NET List Thread-Safe Implementation Suggestion needed

    - by Bamboo
    The .NET List class isn't thread-safe. I hope to use the minimal locking needed while still fulfilling the requirement that for reading, phantom records are allowed, and for writing, operations must be thread-safe so there won't be any lost updates. So I have something like: public static List<string> list = new List<string>(); In methods that call **List.Add**/**List.Remove**, I always lock to assure thread safety: lock (lockHelper) { list.Add(obj); or list.Remove(obj); } In methods that require **list reading**, I don't care about phantom records, so I go ahead and read without any locking. In this case I return a bool by checking whether a string has been added: if (list.Count() != 0) { return list.Contains("some string") } All I did was lock write accesses and allow read accesses to go through without any locking. Is my thread-safety idea valid? I understand there is list size expansion; will it be OK? My guess is that when a List is expanding, it may use a temporary list. This would be OK because the temporary list size will always have a boundary, and the .NET class is well implemented, i.e. there shouldn't be any index-out-of-bounds or circular-reference problems when a read is caught in the middle of an update.
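
    For what it's worth, the usual way to get cheap concurrent reads without betting on List<T>'s internals is a ReaderWriterLockSlim; a sketch (not a drop-in for the fields above, names are illustrative):

        using System.Collections.Generic;
        using System.Threading;

        public static class SafeStringList
        {
            private static readonly List<string> list = new List<string>();
            private static readonly ReaderWriterLockSlim gate = new ReaderWriterLockSlim();

            public static void Add(string s)
            {
                gate.EnterWriteLock();
                try { list.Add(s); }
                finally { gate.ExitWriteLock(); }
            }

            public static bool Contains(string s)
            {
                gate.EnterReadLock();              // any number of readers may hold this at once
                try { return list.Count != 0 && list.Contains(s); }
                finally { gate.ExitReadLock(); }
            }
        }

    Unlocked reads of a List<string> that is being resized are not guaranteed safe by the framework, which is why the sketch takes a cheap, shared lock on the read path as well.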

    Read the article

  • DOM Storage and locks

    - by user535759
    Since DOM storage and its equivalents persist between tabs and windows, I've thought about using it for message passing. The problem is that fetch and store are separate operations, and therefore not atomic. I have models that rely on UUID generation, conflict resolution, and beaconing to do the small subset of what I need, but my real question is this: since local storage is a shared memory resource, what locking mechanisms are available for mutual access?

    Read the article

  • .NET SerialPort DataReceived event thread interference with main thread

    - by Kiran
    I am writing a serial communication program using the SerialPort class in C# to interact with a strip machine connected via an RS232 cable. When I send a command to the machine it responds with some bytes, depending on the command. For example, when I send a "\D" command, I expect to download the machine program data of 180 bytes as a continuous string. The machine's manual suggests, as a best practice, sending an unrecognized character such as a comma (,) to make sure the machine is initialized before sending the first command in the cycle. My serial communication code is as follows: public class SerialHelper { SerialPort commPort = null; string currentReceived = string.Empty; string receivedStr = string.Empty; private bool CommInitialized() { try { commPort = new SerialPort(); commPort.PortName = "COM1"; if (!commPort.IsOpen) commPort.Open(); commPort.BaudRate = 9600; commPort.Parity = System.IO.Ports.Parity.None; commPort.StopBits = StopBits.One; commPort.DataBits = 8; commPort.RtsEnable = true; commPort.DtrEnable = true; commPort.DataReceived += new SerialDataReceivedEventHandler(commPort_DataReceived); return true; } catch (Exception ex) { return false; } } void commPort_DataReceived(object sender, SerialDataReceivedEventArgs e) { SerialPort currentPort = (SerialPort)sender; currentReceived = currentPort.ReadExisting(); receivedStr += currentReceived; } internal int CommIO(string outString, int outLen, ref string inBuffer, int inLen) { receivedStr = string.Empty; inBuffer = string.Empty; if (CommInitialized()) { commPort.Write(outString); } System.Threading.Thread.Sleep(1500); int i = 0; while ((receivedStr.Length < inLen) && i < 10) { System.Threading.Thread.Sleep(500); i += 1; } if (!string.IsNullOrEmpty(receivedStr)) { inBuffer = receivedStr; } commPort.Close(); return inBuffer.Length; } } I am calling this code from a Windows Form as follows: len = SerialHelperObj.CommIO(",",1,ref inBuffer, 4) len = SerialHelperObj.CommIO(",",1,ref inBuffer, 4) If(inBuffer == "!?*O") { len = SerialHelperObj.CommIO("\D",2,ref inBuffer, 180) } A valid return value from the serial port looks like this: \D00000010000000000010 550 3250 0000256000 and so on ... I am getting something like this: \D00000010D,, 000 550 D,, and so on... I feel that my comm calls are being interfered with by the earlier ones when I send commands. I am trying to check the result of the comma command before initiating the actual command, but the receive thread is inserting bytes from the previous communication cycle. Can anyone please shed some light on this? I've lost quite some hair trying to get this to work, and I am not sure where I am going wrong.
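
    A sketch of one way to keep stale bytes from leaking between cycles (assuming the port can stay open and the DataReceived handler stays subscribed for the life of the helper, i.e. a hypothetical CommInitialized that only runs its setup on the first call): flush the input buffer immediately before each write, so receivedStr can only contain bytes that arrived after the current command.

        internal int CommIO(string outString, int outLen, ref string inBuffer, int inLen)
        {
            if (!CommInitialized())            // hypothetical: opens and subscribes only once
                return 0;

            lock (this)                        // one command/response cycle at a time
            {
                receivedStr = string.Empty;
                commPort.DiscardInBuffer();    // drop anything left over from the previous cycle
                commPort.Write(outString);

                int tries = 0;
                while (receivedStr.Length < inLen && tries < 10)
                {
                    System.Threading.Thread.Sleep(500);
                    tries++;
                }
                inBuffer = receivedStr;
                return inBuffer.Length;        // the port is intentionally left open for the next call
            }
        }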

    Read the article

< Previous Page | 9 10 11 12 13 14 15 16 17 18 19 20  | Next Page >