Search Results

The search found 6431 results across 258 pages for 'cache invalidation'.


  • Hibernate Query Exception

    - by dharga
    I've got a Hibernate query I'm trying to get working, but I keep getting an exception with a not-so-helpful stack trace. I'm including the code, the stack trace, and the Hibernate chatter before the exception is thrown. If you need me to include the entity classes for MessageTarget and GrpExclusion, let me know in the comments and I'll add them.

        public List<MessageTarget> findMessageTargets(int age, String gender,
                String businessCode, String groupId, String systemCode) {
            Session session = getHibernateTemplate().getSessionFactory().openSession();
            List<MessageTarget> results = new ArrayList<MessageTarget>();
            try {
                String hSql = "from MessageTarget mt where " +
                    "not exists (select GrpExclusion where grp_no = ?) and " +
                    "(trgt_gndr_cd = 'A' or trgt_gndr_cd = ?) and " +
                    "sys_src_cd = ? and " +
                    "bampi_busn_sgmnt_cd = ? and " +
                    "trgt_low_age <= ? and " +
                    "trgt_high_age >= ? and " +
                    "(effectiveDate is null or effectiveDate <= ?) and " +
                    "(termDate is null or termDate >= ?)";
                results = session.createQuery(hSql)
                    .setParameter(0, groupId)
                    .setParameter(1, gender)
                    .setParameter(2, systemCode)
                    .setParameter(3, businessCode)
                    .setParameter(4, age)
                    .setParameter(5, age)
                    .setParameter(6, new Date())
                    .setParameter(7, new Date())
                    .list();
            } catch (Exception e) {
                System.err.println(e.getMessage());
                e.printStackTrace();
            } finally {
                session.close();
            }
            return results;
        }

    Here's the stack trace (the repeated WebSphere "[5/6/10 15:05:21 EDT] 00000017 SystemErr R" prefixes are stripped for readability):

        java.lang.NullPointerException
            at org.hibernate.hql.ast.util.SessionFactoryHelper.findSQLFunction(SessionFactoryHelper.java:365)
            at org.hibernate.hql.ast.tree.IdentNode.getDataType(IdentNode.java:289)
            at org.hibernate.hql.ast.tree.SelectClause.initializeExplicitSelectClause(SelectClause.java:165)
            at org.hibernate.hql.ast.HqlSqlWalker.useSelectClause(HqlSqlWalker.java:831)
            at org.hibernate.hql.ast.HqlSqlWalker.processQuery(HqlSqlWalker.java:619)
            at org.hibernate.hql.antlr.HqlSqlBaseWalker.query(HqlSqlBaseWalker.java:672)
            at org.hibernate.hql.antlr.HqlSqlBaseWalker.collectionFunctionOrSubselect(HqlSqlBaseWalker.java:4465)
            at org.hibernate.hql.antlr.HqlSqlBaseWalker.comparisonExpr(HqlSqlBaseWalker.java:4165)
            at org.hibernate.hql.antlr.HqlSqlBaseWalker.logicalExpr(HqlSqlBaseWalker.java:1864)
            at org.hibernate.hql.antlr.HqlSqlBaseWalker.logicalExpr(HqlSqlBaseWalker.java:1839)
            at org.hibernate.hql.antlr.HqlSqlBaseWalker.logicalExpr(HqlSqlBaseWalker.java:1789) [this frame repeats 7 times]
            at org.hibernate.hql.antlr.HqlSqlBaseWalker.whereClause(HqlSqlBaseWalker.java:818)
            at org.hibernate.hql.antlr.HqlSqlBaseWalker.query(HqlSqlBaseWalker.java:604)
            at org.hibernate.hql.antlr.HqlSqlBaseWalker.selectStatement(HqlSqlBaseWalker.java:288)
            at org.hibernate.hql.antlr.HqlSqlBaseWalker.statement(HqlSqlBaseWalker.java:231)
            at org.hibernate.hql.ast.QueryTranslatorImpl.analyze(QueryTranslatorImpl.java:254)
            at org.hibernate.hql.ast.QueryTranslatorImpl.doCompile(QueryTranslatorImpl.java:185)
            at org.hibernate.hql.ast.QueryTranslatorImpl.compile(QueryTranslatorImpl.java:136)
            at org.hibernate.engine.query.HQLQueryPlan.<init>(HQLQueryPlan.java:101)
            at org.hibernate.engine.query.HQLQueryPlan.<init>(HQLQueryPlan.java:80)
            at org.hibernate.engine.query.QueryPlanCache.getHQLQueryPlan(QueryPlanCache.java:94)
            at org.hibernate.impl.AbstractSessionImpl.getHQLQueryPlan(AbstractSessionImpl.java:156)
            at org.hibernate.impl.AbstractSessionImpl.createQuery(AbstractSessionImpl.java:135)
            at org.hibernate.impl.SessionImpl.createQuery(SessionImpl.java:1651)
            at com.bcbst.bamp.ws.dao.MessageTargetDAOImpl.findMessageTargets(MessageTargetDAOImpl.java:30)
            at com.bcbst.bamp.ws.common.AlertReminder.findMessageTargets(AlertReminder.java:22)
            at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
            at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
            at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:37)
            at java.lang.reflect.Method.invoke(Method.java:599)
            at org.apache.axis2.jaxws.server.dispatcher.JavaDispatcher.invokeTargetOperation(JavaDispatcher.java:81)
            at org.apache.axis2.jaxws.server.dispatcher.JavaBeanDispatcher.invoke(JavaBeanDispatcher.java:98)
            at org.apache.axis2.jaxws.server.EndpointController.invoke(EndpointController.java:109)
            at org.apache.axis2.jaxws.server.JAXWSMessageReceiver.receive(JAXWSMessageReceiver.java:159)
            at org.apache.axis2.engine.AxisEngine.receive(AxisEngine.java:188)
            at org.apache.axis2.transport.http.HTTPTransportUtils.processHTTPPostRequest(HTTPTransportUtils.java:275)
            at com.ibm.ws.websvcs.transport.http.WASAxis2Servlet.doPost(WASAxis2Servlet.java:1389)
            at javax.servlet.http.HttpServlet.service(HttpServlet.java:738)
            at javax.servlet.http.HttpServlet.service(HttpServlet.java:831)
            at com.ibm.ws.webcontainer.servlet.ServletWrapper.service(ServletWrapper.java:1536)
            at com.ibm.ws.webcontainer.servlet.ServletWrapper.handleRequest(ServletWrapper.java:829)
            at com.ibm.ws.webcontainer.servlet.ServletWrapper.handleRequest(ServletWrapper.java:458)
            at com.ibm.ws.webcontainer.servlet.ServletWrapperImpl.handleRequest(ServletWrapperImpl.java:175)
            at com.ibm.ws.webcontainer.webapp.WebApp.handleRequest(WebApp.java:3742)
            at com.ibm.ws.webcontainer.webapp.WebGroup.handleRequest(WebGroup.java:276)
            at com.ibm.ws.webcontainer.WebContainer.handleRequest(WebContainer.java:929)
            at com.ibm.ws.webcontainer.WSWebContainer.handleRequest(WSWebContainer.java:1583)
            at com.ibm.ws.webcontainer.channel.WCChannelLink.ready(WCChannelLink.java:178)
            at com.ibm.ws.http.channel.inbound.impl.HttpInboundLink.handleDiscrimination(HttpInboundLink.java:455)
            at com.ibm.ws.http.channel.inbound.impl.HttpInboundLink.handleNewInformation(HttpInboundLink.java:384)
            at com.ibm.ws.http.channel.inbound.impl.HttpInboundLink.ready(HttpInboundLink.java:272)
            at com.ibm.ws.tcp.channel.impl.NewConnectionInitialReadCallback.sendToDiscriminators(NewConnectionInitialReadCallback.java:214)
            at com.ibm.ws.tcp.channel.impl.NewConnectionInitialReadCallback.complete(NewConnectionInitialReadCallback.java:113)
            at com.ibm.ws.tcp.channel.impl.AioReadCompletionListener.futureCompleted(AioReadCompletionListener.java:165)
            at com.ibm.io.async.AbstractAsyncFuture.invokeCallback(AbstractAsyncFuture.java:217)
            at com.ibm.io.async.AsyncChannelFuture.fireCompletionActions(AsyncChannelFuture.java:161)
            at com.ibm.io.async.AsyncFuture.completed(AsyncFuture.java:138)
            at com.ibm.io.async.ResultHandler.complete(ResultHandler.java:204)
            at com.ibm.io.async.ResultHandler.runEventProcessingLoop(ResultHandler.java:775)
            at com.ibm.io.async.ResultHandler$2.run(ResultHandler.java:905)
            at com.ibm.ws.util.ThreadPool$Worker.run(ThreadPool.java:1550)

    Here's the Hibernate chatter (same log prefixes stripped; component names kept):
        XmlBeanDefinitionReader: Loading XML bean definitions from class path resource [beans.xml]
        Configuration: configuring from url: file:/C:/workspaces/bampi/AlertReminderWS/WebContent/WEB-INF/classes/hibernate.cfg.xml
        Configuration: Configured SessionFactory: java:hibernate/Alert/SessionFactory1.0.3
        AnnotationBinder: Binding entity from annotated class: com.bcbst.bamp.ws.model.MessageTarget
        EntityBinder: Bind entity com.bcbst.bamp.ws.model.MessageTarget on table MessageTarget
        AnnotationBinder: Binding entity from annotated class: com.bcbst.bamp.ws.model.GrpExclusion
        EntityBinder: Bind entity com.bcbst.bamp.ws.model.GrpExclusion on table GrpExclusion
        CollectionBinder: Mapping collection: com.bcbst.bamp.ws.model.MessageTarget.exclusions -> GrpExclusion
        LocalSessionFactoryBean: Building new Hibernate SessionFactory
        ConnectionProvider: Initializing connection provider: org.springframework.orm.hibernate3.LocalDataSourceConnectionProvider
        SettingsFactory: RDBMS: Microsoft SQL Server, version: 9.00.4035
        SettingsFactory: JDBC driver: Microsoft SQL Server 2005 JDBC Driver, version: 1.2.2828.100
        Dialect: Using dialect: org.hibernate.dialect.SQLServerDialect
        TransactionFactory: Transaction strategy: org.springframework.orm.hibernate3.SpringTransactionFactory
        TransactionManagerLookup: No TransactionManagerLookup configured (in JTA environment, use of read-write or transactional second-level cache is not recommended)
        SettingsFactory: Automatic flush during beforeCompletion(): disabled
        SettingsFactory: Automatic session close at end of transaction: disabled
        SettingsFactory: Scrollable result sets: enabled
        SettingsFactory: JDBC3 getGeneratedKeys(): enabled
        SettingsFactory: Connection release mode: auto
        SettingsFactory: Default batch fetch size: 1
        SettingsFactory: Generate SQL with comments: disabled
        SettingsFactory: Order SQL updates by primary key: disabled
        SettingsFactory: Order SQL inserts for batching: disabled
        SettingsFactory: Query translator: org.hibernate.hql.ast.ASTQueryTranslatorFactory
        ASTQueryTranslatorFactory: Using ASTQueryTranslatorFactory
        SettingsFactory: Query language substitutions: {}
        SettingsFactory: JPA-QL strict compliance: disabled
        SettingsFactory: Second-level cache: enabled
        SettingsFactory: Query cache: disabled
        SettingsFactory: Cache region factory : org.hibernate.cache.impl.bridge.RegionFactoryCacheProviderBridge
        RegionFactory: Cache provider: org.hibernate.cache.NoCacheProvider
        SettingsFactory: Optimize cache for minimal puts: disabled
        SettingsFactory: Structured second-level cache entries: disabled
        SettingsFactory: Statistics: disabled
        SettingsFactory: Deleted entity synthetic identifier rollback: disabled
        SettingsFactory: Default entity-mode: pojo
        SettingsFactory: Named query checking : enabled
        SessionFactory: building session factory
        SessionFactory: Factory name: java:hibernate/Alert/SessionFactory1.0.3
        NamingHelper: JNDI InitialContext properties:{}
        NamingHelper: Creating subcontext: java:hibernate
        NamingHelper: Creating subcontext: Alert
        SessionFactory: Bound factory to JNDI name: java:hibernate/Alert/SessionFactory1.0.3
        SessionFactory WARN: InitialContext did not implement EventContext
        PARSER ERROR: <AST>:0:0: unexpected end of subtree
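
    A note on the likely culprit: the top two frames (SessionFactoryHelper.findSQLFunction, reached from initializeExplicitSelectClause) together with the parser's final "unexpected end of subtree" error both point at the subquery "select GrpExclusion where grp_no = ?", which has a select clause but no from clause, so the HQL parser tries to resolve GrpExclusion as a bare identifier and fails. Below is a minimal sketch of a corrected, correlated subquery; the property names (grpNo, trgtGndrCd, and the rest) are assumptions, since HQL must use mapped property names rather than column names, and the real names depend on the MessageTarget and GrpExclusion mappings:

        // Sketch of a corrected query. Property names are illustrative guesses,
        // not taken from the post; adjust them to match the actual mappings.
        String hSql =
            "from MessageTarget mt " +
            "where not exists (from GrpExclusion ge where ge.grpNo = :groupId) " +
            "and (mt.trgtGndrCd = 'A' or mt.trgtGndrCd = :gender) " +
            "and mt.sysSrcCd = :systemCode " +
            "and mt.bampiBusnSgmntCd = :businessCode " +
            "and mt.trgtLowAge <= :age and mt.trgtHighAge >= :age " +
            "and (mt.effectiveDate is null or mt.effectiveDate <= :now) " +
            "and (mt.termDate is null or mt.termDate >= :now)";

        List<MessageTarget> results = session.createQuery(hSql)
            .setParameter("groupId", groupId)
            .setParameter("gender", gender)
            .setParameter("systemCode", systemCode)
            .setParameter("businessCode", businessCode)
            .setParameter("age", age)        // a named parameter binds once and can be reused
            .setParameter("now", new Date())
            .list();

    The switch to named parameters is optional, but it lets the reused values (:age, :now) bind once each, which the positional form cannot do.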


  • SQL SERVER – Parsing SSIS Catalog Messages – Notes from the Field #030

    - by Pinal Dave
    [Note from Pinal]: This is a new episode of the Notes from the Field series. SQL Server Integration Services (SSIS) is an essential part of the entire Business Intelligence (BI) story. It is a platform for data integration and workflow applications, and the tool may also be used to automate maintenance of SQL Server databases and updates to multidimensional cube data. In this episode of the Notes from the Field series I asked SSIS expert Andy Leonard to discuss one of the most interesting parts of SSIS: catalog messages. Plenty of interesting and useful information is captured in the SSIS catalog, and we will learn together how to explore it.

    The SSIS Catalog captures a lot of cool information by default. Here's a query I use to parse messages from the catalog.operation_messages table in the SSISDB database, where the logged messages are stored. This query is set up to parse a default message transmitted by the Lookup Transformation. It's one of my favorite messages in the SSIS log because it gives me excellent information when I'm tuning SSIS data flows. The message reads similar to:

        Data Flow Task:Information: The Lookup processed 4485 rows in the cache. The processing time was 0.015 seconds. The cache used 1376895 bytes of memory.

    The query:

        USE SSISDB
        GO

        DECLARE @MessageSourceType INT = 60
        DECLARE @StartOfIDString VARCHAR(100) = 'The Lookup processed '
        DECLARE @ProcessingTimeString VARCHAR(100) = 'The processing time was '
        DECLARE @CacheUsedString VARCHAR(100) = 'The cache used '
        DECLARE @StartOfIDSearchString VARCHAR(100) = '%' + @StartOfIDString + '%'
        DECLARE @ProcessingTimeSearchString VARCHAR(100) = '%' + @ProcessingTimeString + '%'
        DECLARE @CacheUsedSearchString VARCHAR(100) = '%' + @CacheUsedString + '%'

        SELECT operation_id
             , SUBSTRING(MESSAGE,
                   (PATINDEX(@StartOfIDSearchString, MESSAGE) + LEN(@StartOfIDString) + 1),
                   ((CHARINDEX(' ', MESSAGE, PATINDEX(@StartOfIDSearchString, MESSAGE) + LEN(@StartOfIDString) + 1))
                    - (PATINDEX(@StartOfIDSearchString, MESSAGE) + LEN(@StartOfIDString) + 1))) AS LookupRowsCount
             , SUBSTRING(MESSAGE,
                   (PATINDEX(@ProcessingTimeSearchString, MESSAGE) + LEN(@ProcessingTimeString) + 1),
                   ((CHARINDEX(' ', MESSAGE, PATINDEX(@ProcessingTimeSearchString, MESSAGE) + LEN(@ProcessingTimeString) + 1))
                    - (PATINDEX(@ProcessingTimeSearchString, MESSAGE) + LEN(@ProcessingTimeString) + 1))) AS LookupProcessingTime
             , CASE
                   WHEN (CONVERT(numeric(3,3), SUBSTRING(MESSAGE,
                       (PATINDEX(@ProcessingTimeSearchString, MESSAGE) + LEN(@ProcessingTimeString) + 1),
                       ((CHARINDEX(' ', MESSAGE, PATINDEX(@ProcessingTimeSearchString, MESSAGE) + LEN(@ProcessingTimeString) + 1))
                        - (PATINDEX(@ProcessingTimeSearchString, MESSAGE) + LEN(@ProcessingTimeString) + 1))))) = 0 THEN 0
                   ELSE CONVERT(bigint, SUBSTRING(MESSAGE,
                       (PATINDEX(@StartOfIDSearchString, MESSAGE) + LEN(@StartOfIDString) + 1),
                       ((CHARINDEX(' ', MESSAGE, PATINDEX(@StartOfIDSearchString, MESSAGE) + LEN(@StartOfIDString) + 1))
                        - (PATINDEX(@StartOfIDSearchString, MESSAGE) + LEN(@StartOfIDString) + 1))))
                      / CONVERT(numeric(3,3), SUBSTRING(MESSAGE,
                       (PATINDEX(@ProcessingTimeSearchString, MESSAGE) + LEN(@ProcessingTimeString) + 1),
                       ((CHARINDEX(' ', MESSAGE, PATINDEX(@ProcessingTimeSearchString, MESSAGE) + LEN(@ProcessingTimeString) + 1))
                        - (PATINDEX(@ProcessingTimeSearchString, MESSAGE) + LEN(@ProcessingTimeString) + 1))))
               END AS LookupRowsPerSecond
             , SUBSTRING(MESSAGE,
                   (PATINDEX(@CacheUsedSearchString, MESSAGE) + LEN(@CacheUsedString) + 1),
                   ((CHARINDEX(' ', MESSAGE, PATINDEX(@CacheUsedSearchString, MESSAGE) + LEN(@CacheUsedString) + 1))
                    - (PATINDEX(@CacheUsedSearchString, MESSAGE) + LEN(@CacheUsedString) + 1))) AS LookupBytesUsed
             , CASE
                   WHEN (CONVERT(bigint, SUBSTRING(MESSAGE,
                       (PATINDEX(@StartOfIDSearchString, MESSAGE) + LEN(@StartOfIDString) + 1),
                       ((CHARINDEX(' ', MESSAGE, PATINDEX(@StartOfIDSearchString, MESSAGE) + LEN(@StartOfIDString) + 1))
                        - (PATINDEX(@StartOfIDSearchString, MESSAGE) + LEN(@StartOfIDString) + 1))))) = 0 THEN 0
                   ELSE CONVERT(bigint, SUBSTRING(MESSAGE,
                       (PATINDEX(@CacheUsedSearchString, MESSAGE) + LEN(@CacheUsedString) + 1),
                       ((CHARINDEX(' ', MESSAGE, PATINDEX(@CacheUsedSearchString, MESSAGE) + LEN(@CacheUsedString) + 1))
                        - (PATINDEX(@CacheUsedSearchString, MESSAGE) + LEN(@CacheUsedString) + 1))))
                      / CONVERT(bigint, SUBSTRING(MESSAGE,
                       (PATINDEX(@StartOfIDSearchString, MESSAGE) + LEN(@StartOfIDString) + 1),
                       ((CHARINDEX(' ', MESSAGE, PATINDEX(@StartOfIDSearchString, MESSAGE) + LEN(@StartOfIDString) + 1))
                        - (PATINDEX(@StartOfIDSearchString, MESSAGE) + LEN(@StartOfIDString) + 1))))
               END AS LookupBytesPerRow
        FROM [catalog].[operation_messages]
        WHERE message_source_type = @MessageSourceType
          AND MESSAGE LIKE @StartOfIDSearchString
        GO

    Note that you have to set some parameter values:

    - @MessageSourceType [int] represents the message source type, one of the following values:

          Value   Description
          10      Entry APIs, such as T-SQL and CLR stored procedures
          20      External process used to run package (ISServerExec.exe)
          30      Package-level objects
          40      Control Flow tasks
          50      Control Flow containers
          60      Data Flow task
          70      Custom execution message

      (Note: Taken from Reza Rad's (excellent!) helper.MessageSourceType table found here.)

    - @StartOfIDString [VarChar(100)]: use this to uniquely identify the message field value you wish to parse. In this case, the string 'The Lookup processed ' identifies all the Lookup Transformation messages I want to parse.

    - @ProcessingTimeString [VarChar(100)]: this parameter is message-specific. I use it to search the message field value for the beginning of the Lookup processing-time value. For this execution, I use the string 'The processing time was '.

    - @CacheUsedString [VarChar(100)]: this parameter is also message-specific. I use it to search the message field value for the beginning of the Lookup cache-used value, which is the memory used, in bytes. For this execution, I use the string 'The cache used '.

    The other parameters are built from variations of the parameters listed above. The query parses the values into text, and the string values are then converted to numeric values for the two ratio calculations, LookupRowsPerSecond and LookupBytesPerRow. Since ratios involve division, CASE expressions check for denominators that equal 0. The results appear in an SSMS grid (the screenshot is not reproduced here).

    This is not the only way to retrieve this information, and much of the code lends itself to conversion to functions. If there is interest, I will share the functions in an upcoming post. If you want to get started with SSIS with the help of experts, read more over at Fix Your SQL Server.

    Reference: Pinal Dave (http://blog.sqlauthority.com)


  • Understanding the meaning of 'High Performance' in Extreme Transaction Processing

    - by kyap
    Despite my previous blog entries on SOA/BPM and Identity Management, the domain I'm most passionate about is definitely Extreme Transaction Processing, commonly called XTP.

    I came across XTP back in 2007, while I was still an FMW Product Manager in EMEA. At that time Oracle acquired a company called Tangosol, which owned a unique product called Coherence that we renamed Oracle Coherence. Beside this innovative renaming of the product, to be honest, I didn't know much about it, except that it was a "distributed in-memory cache for Extreme Transaction Processing"... not very helpful still.

    In general, when people don't fully understand a technology or a concept, they tend to find shortcuts, correct or not, to justify their lack of understanding... and of course I was part of this category of individuals. And the shortcut was "Oracle Coherence Cache helps to improve performance". Excellent marketing slogan... but still not very meaningful. By chance I was able to get away quickly from that group in July 2007* at Thames Valley Park (UK), after I attended one of the most interesting workshops of my 10-year career at Oracle, delivered by Brian Oliver.

    The biggest mistake I made was to assume that performance improvement with Coherence was about response time. That was a legitimate assumption at the time, because after all, caches help reduce latency on cached data access and hence reduce response time. But like all caches, you need to define caching and expiration policies, think about the cache-miss strategy, and most of the time you have to partially rewrite your application to work with the cache. As a result, the expected benefit vanishes... so, not very useful then?

    The key mistake was my perception, or obsession, about how performance improvement should be driven, and I strongly believe this is still a common problem for most developers. We all know that the performance of a system is generally described by its Capacity (or Throughput), with two important dimensions, Speed (response time) and Volume (load):

        Capacity (TPS) = Volume (T) / Speed (S)

    To increase Capacity, we can either reduce the Speed (in terms of response time) or increase the Volume. (For example, a system that completes each transaction in 0.2 seconds sustains 5 TPS per concurrent stream; halving the response time or doubling the number of parallel streams each doubles the throughput.) However, we tend to focus only on reducing the Speed dimension, perhaps because it is more concrete and tangible to measure, and nicer to present to management, since there is a direct impact on the end-user experience. On the other hand, we assume the Volume can be addressed by the underlying hardware or software stack: if we need more capacity (scale out), we just add more hardware or software. Unfortunately, reality proves that IT is never as ideal as we assume...

    The challenge with the Speed-improvement approach is that it is generally difficult and costly to make things that are already fast... faster. And adding Coherence will not necessarily help either. Even if we manage to do so, the Capacity cannot increase forever, because the Speed can be influenced by the Volume. Every system has a performance curve like the following (figure not reproduced: response time plotted against transaction load): in all traditional systems, increasing the Volume (transactions) also increases the Speed (response time) at some point. The reason is simple: most of the time, the application logic was not designed to scale. As an example, if you have a while-loop in your application, it is natural to expect that parsing 200 entries will require double the execution time of parsing 100 entries.
    If you need to speed up the execution, you can only upgrade your hardware (scale up) with a faster CPU and/or network to reduce latency, which is technically limited and economically inefficient. And this is exactly where XTP and Coherence kick in. The primary objective of XTP is to design applications that can scale out to absorb increasing Volume, by applying coding techniques that keep the execution time as constant as possible, independently of the amount of runtime data being manipulated. It is not just about having an application that runs as fast as possible, but about having a much more predictable system, with constant response time and linear scaling, so we can increase throughput simply by adding more hardware in parallel. It is generally combined with the Low Latency Programming model, where we try to optimize network usage as much as possible, either from the programmatic angle (fewer network hops to complete a task) and/or from the hardware angle (faster network equipment). In this picture, Oracle Coherence can be considered a software-level XTP enabler, via the distributed cache, because it can guarantee:

    - constant data-object access time, independently of the number of objects and the Coherence cluster size
    - data-object distribution by affinity, for in-memory data grouping
    - in-place data processing, for parallel execution

    To summarize, Oracle Coherence is indeed useful for improving your application's performance, just not in the way we commonly think. It is not about the Speed itself, but about the overall Capacity under extreme load while keeping a consistent Speed. In the future I will keep adding new blog entries around this topic, with some sample-code experiences I have captured over the last few years. In the meanwhile, if you want to know more about Oracle Coherence, I strongly suggest you start by checking how our worldwide customers are using Oracle Coherence, and then start playing with the product through our tutorial.

    Have fun!
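
    To make the last three guarantees concrete, here is a minimal sketch against the classic Coherence 3.x API; the cache name, key, and the counter-increment processor are illustrative, not taken from the post. The EntryProcessor is the "in-place data processing" idea: the logic is shipped to the storage member that owns the key and runs next to the data, instead of dragging the object across the network and back.

        import com.tangosol.net.CacheFactory;
        import com.tangosol.net.NamedCache;
        import com.tangosol.util.InvocableMap;
        import com.tangosol.util.processor.AbstractProcessor;

        public class XtpSketch {

            // Illustrative entry processor: increments a counter where the data
            // lives, so the mutation runs on the owning storage member.
            public static class IncrementProcessor extends AbstractProcessor {
                private static final long serialVersionUID = 1L;

                @Override
                public Object process(InvocableMap.Entry entry) {
                    Integer count = (Integer) entry.getValue();
                    int next = (count == null) ? 1 : count + 1;
                    entry.setValue(next);
                    return next;
                }
            }

            public static void main(String[] args) {
                // A partitioned ("distributed") cache locates an entry by hashing
                // its key to an owning partition, so access time stays roughly
                // constant as the cluster and the data set grow.
                NamedCache counters = CacheFactory.getCache("counters");

                counters.put("page-hits", 0);
                Object result = counters.invoke("page-hits", new IncrementProcessor());
                System.out.println("page-hits = " + result);

                CacheFactory.shutdown();
            }
        }

    Because each key is owned by exactly one partition, invocations against many keys fan out across the cluster and execute in parallel, which is what keeps execution time flat as the volume grows.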


  • How and where to implement basic authentication in Kibana 3

    - by Jabb
    I have put my Elasticsearch server behind an Apache reverse proxy that provides basic authentication. Authenticating against Apache directly from the browser works fine. However, when I use Kibana 3 to access the server, I receive authentication errors, obviously because no auth headers are sent along with Kibana's Ajax calls. To implement authentication quick and dirty, I added the line below to elastic-angular-client.js in the Kibana vendor directory, but for some reason it does not work.

        $http.defaults.headers.common.Authorization = 'Basic ' + Base64Encode('user:Password');

    What is the best approach and place to implement basic authentication in Kibana?

        /*! elastic.js - v1.1.1 - 2013-05-24
         * https://github.com/fullscale/elastic.js
         * Copyright (c) 2013 FullScale Labs, LLC; Licensed MIT */
        /*jshint browser:true */
        /*global angular:true */
        'use strict';

        /* Angular.js service wrapping the elastic.js API. This module
           can simply be injected into your angular controllers. */
        angular.module('elasticjs.service', [])
          .factory('ejsResource', ['$http', function ($http) {

            return function (config) {

              var
                // use existing ejs object if it exists
                ejs = window.ejs || {},

                /* results are returned as a promise */
                promiseThen = function (httpPromise, successcb, errorcb) {
                  return httpPromise.then(function (response) {
                    (successcb || angular.noop)(response.data);
                    return response.data;
                  }, function (response) {
                    (errorcb || angular.noop)(response.data);
                    return response.data;
                  });
                };

              // check if we have a config object
              // if not, we have the server url so
              // we convert it to a config object
              if (config !== Object(config)) {
                config = {server: config};
              }

              // set url to empty string if it was not specified
              if (config.server == null) {
                config.server = '';
              }

              /* implement the elastic.js client interface for angular */
              ejs.client = {
                server: function (s) {
                  if (s == null) {
                    return config.server;
                  }
                  config.server = s;
                  return this;
                },
                post: function (path, data, successcb, errorcb) {
                  $http.defaults.headers.common.Authorization = 'Basic ' + Base64Encode('user:Password');
                  console.log($http.defaults.headers);
                  path = config.server + path;
                  var reqConfig = {url: path, data: data, method: 'POST'};
                  return promiseThen($http(angular.extend(reqConfig, config)), successcb, errorcb);
                },
                get: function (path, data, successcb, errorcb) {
                  $http.defaults.headers.common.Authorization = 'Basic ' + Base64Encode('user:Password');
                  path = config.server + path;
                  // no body on get request, data will be request params
                  var reqConfig = {url: path, params: data, method: 'GET'};
                  return promiseThen($http(angular.extend(reqConfig, config)), successcb, errorcb);
                },
                put: function (path, data, successcb, errorcb) {
                  $http.defaults.headers.common.Authorization = 'Basic ' + Base64Encode('user:Password');
                  path = config.server + path;
                  var reqConfig = {url: path, data: data, method: 'PUT'};
                  return promiseThen($http(angular.extend(reqConfig, config)), successcb, errorcb);
                },
                del: function (path, data, successcb, errorcb) {
                  $http.defaults.headers.common.Authorization = 'Basic ' + Base64Encode('user:Password');
                  path = config.server + path;
                  var reqConfig = {url: path, data: data, method: 'DELETE'};
                  return promiseThen($http(angular.extend(reqConfig, config)), successcb, errorcb);
                },
                head: function (path, data, successcb, errorcb) {
                  $http.defaults.headers.common.Authorization = 'Basic ' + Base64Encode('user:Password');
                  path = config.server + path;
                  // no body on HEAD request, data will be request params
                  var reqConfig = {url: path, params: data, method: 'HEAD'};
                  return $http(angular.extend(reqConfig, config))
                    .then(function (response) {
                      (successcb || angular.noop)(response.headers());
                      return response.headers();
                    }, function (response) {
                      (errorcb || angular.noop)(undefined);
                      return undefined;
                    });
                }
              };

              return ejs;
            };
          }]);

    UPDATE 1: I implemented Matt's suggestion. However, the server returns a weird response; it seems that the authorization header is not working. Could it have to do with the fact that I am running Kibana on port 81 and Elasticsearch on 8181?

        OPTIONS /solar_vendor/_search HTTP/1.1
        Host: 46.252.46.173:8181
        User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:25.0) Gecko/20100101 Firefox/25.0
        Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
        Accept-Language: de-de,de;q=0.8,en-us;q=0.5,en;q=0.3
        Accept-Encoding: gzip, deflate
        Origin: http://46.252.46.173:81
        Access-Control-Request-Method: POST
        Access-Control-Request-Headers: authorization,content-type
        Connection: keep-alive
        Pragma: no-cache
        Cache-Control: no-cache

    This is the response:

        HTTP/1.1 401 Authorization Required
        Date: Fri, 08 Nov 2013 23:47:02 GMT
        WWW-Authenticate: Basic realm="Username/Password"
        Vary: Accept-Encoding
        Content-Encoding: gzip
        Content-Length: 346
        Connection: close
        Content-Type: text/html; charset=iso-8859-1

    UPDATE 2: Updated all instances with the modified headers in these Kibana files:

        root@localhost:/var/www/kibana# grep -r 'ejsResource(' .
        ./src/app/controllers/dash.js: $scope.ejs = ejsResource({server: config.elasticsearch, headers: {'Access-Control-Request-Headers': 'Accept, Origin, Authorization', 'Authorization': 'Basic XXXXXXXXXXXXXXXXXXXXXXXXXXXXX=='}});
        ./src/app/services/querySrv.js: var ejs = ejsResource({server: config.elasticsearch, headers: {'Access-Control-Request-Headers': 'Accept, Origin, Authorization', 'Authorization': 'Basic XXXXXXXXXXXXXXXXXXXXXXXXXXXXX=='}});
        ./src/app/services/filterSrv.js: var ejs = ejsResource({server: config.elasticsearch, headers: {'Access-Control-Request-Headers': 'Accept, Origin, Authorization', 'Authorization': 'Basic XXXXXXXXXXXXXXXXXXXXXXXXXXXXX=='}});
        ./src/app/services/dashboard.js: var ejs = ejsResource({server: config.elasticsearch, headers: {'Access-Control-Request-Headers': 'Accept, Origin, Authorization', 'Authorization': 'Basic XXXXXXXXXXXXXXXXXXXXXXXXXXXXX=='}});

    And modified my vhost conf for the reverse proxy like this:

        <VirtualHost *:8181>
            ProxyRequests Off
            ProxyPass / http://127.0.0.1:9200/
            ProxyPassReverse / https://127.0.0.1:9200/
            <Location />
                Order deny,allow
                Allow from all
                AuthType Basic
                AuthName "Username/Password"
                AuthUserFile /var/www/cake2.2.4/.htpasswd
                Require valid-user
                Header always set Access-Control-Allow-Methods "GET, POST, DELETE, OPTIONS, PUT"
                Header always set Access-Control-Allow-Headers "Content-Type, X-Requested-With, X-HTTP-Method-Override, Origin, Accept, Authorization"
                Header always set Access-Control-Allow-Credentials "true"
                Header always set Cache-Control "max-age=0"
                Header always set Access-Control-Allow-Origin *
            </Location>
            ErrorLog ${APACHE_LOG_DIR}/error.log
        </VirtualHost>

    Apache sends back the new response headers, but the request header still seems to be wrong somewhere. Authentication just doesn't work.
    Request headers:

        OPTIONS /solar_vendor/_search HTTP/1.1
        Host: 46.252.26.173:8181
        User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:25.0) Gecko/20100101 Firefox/25.0
        Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
        Accept-Language: de-de,de;q=0.8,en-us;q=0.5,en;q=0.3
        Accept-Encoding: gzip, deflate
        Origin: http://46.252.26.173:81
        Access-Control-Request-Method: POST
        Access-Control-Request-Headers: authorization,content-type
        Connection: keep-alive
        Pragma: no-cache
        Cache-Control: no-cache

    Response headers:

        HTTP/1.1 401 Authorization Required
        Date: Sat, 09 Nov 2013 08:48:48 GMT
        Access-Control-Allow-Methods: GET, POST, DELETE, OPTIONS, PUT
        Access-Control-Allow-Headers: Content-Type, X-Requested-With, X-HTTP-Method-Override, Origin, Accept, Authorization
        Access-Control-Allow-Credentials: true
        Cache-Control: max-age=0
        Access-Control-Allow-Origin: *
        WWW-Authenticate: Basic realm="Username/Password"
        Vary: Accept-Encoding
        Content-Encoding: gzip
        Content-Length: 346
        Connection: close
        Content-Type: text/html; charset=iso-8859-1

    SOLUTION: After doing some more research, I found out that this is definitely a configuration issue with regard to CORS. There are quite a few posts available on that topic, but it appears that in order to solve my problem it would be necessary to make some very granular configurations on Apache and also make sure that the right stuff is sent from the browser. So I reconsidered the strategy and found a much simpler solution: just modify the vhost reverse-proxy config to put the Elasticsearch server AND Kibana on the same HTTP port. This also adds even better security to Kibana. This is what I did:

        <VirtualHost *:8181>
            ProxyRequests Off
            ProxyPass /bigdatadesk/ http://127.0.0.1:81/bigdatadesk/src/
            ProxyPassReverse /bigdatadesk/ http://127.0.0.1:81/bigdatadesk/src/
            ProxyPass / http://127.0.0.1:9200/
            ProxyPassReverse / https://127.0.0.1:9200/
            <Location />
                Order deny,allow
                Allow from all
                AuthType Basic
                AuthName "Username/Password"
                AuthUserFile /var/www/.htpasswd
                Require valid-user
            </Location>
            ErrorLog ${APACHE_LOG_DIR}/error.log
        </VirtualHost>


  • What is the correct HTTP status code when redirecting to a login page?

    - by PHP_Jedi
    When a user is not logged in and tries to access a page that requires login, what is the correct HTTP status code for a redirect to the login page? I don't feel that any of the 3xx codes fit that description. Here is what the spec (RFC 2616) says about them:

        10.3.1 300 Multiple Choices

        The requested resource corresponds to any one of a set of representations, each with its own specific location, and agent-driven negotiation information (section 12) is being provided so that the user (or user agent) can select a preferred representation and redirect its request to that location.

        Unless it was a HEAD request, the response SHOULD include an entity containing a list of resource characteristics and location(s) from which the user or user agent can choose the one most appropriate. The entity format is specified by the media type given in the Content-Type header field. Depending upon the format and the capabilities of the user agent, selection of the most appropriate choice MAY be performed automatically. However, this specification does not define any standard for such automatic selection.

        If the server has a preferred choice of representation, it SHOULD include the specific URI for that representation in the Location field; user agents MAY use the Location field value for automatic redirection. This response is cacheable unless indicated otherwise.

        10.3.2 301 Moved Permanently

        The requested resource has been assigned a new permanent URI and any future references to this resource SHOULD use one of the returned URIs. Clients with link editing capabilities ought to automatically re-link references to the Request-URI to one or more of the new references returned by the server, where possible. This response is cacheable unless indicated otherwise.

        The new permanent URI SHOULD be given by the Location field in the response. Unless the request method was HEAD, the entity of the response SHOULD contain a short hypertext note with a hyperlink to the new URI(s).

        If the 301 status code is received in response to a request other than GET or HEAD, the user agent MUST NOT automatically redirect the request unless it can be confirmed by the user, since this might change the conditions under which the request was issued.

        Note: When automatically redirecting a POST request after receiving a 301 status code, some existing HTTP/1.0 user agents will erroneously change it into a GET request.

        10.3.3 302 Found

        The requested resource resides temporarily under a different URI. Since the redirection might be altered on occasion, the client SHOULD continue to use the Request-URI for future requests. This response is only cacheable if indicated by a Cache-Control or Expires header field.

        The temporary URI SHOULD be given by the Location field in the response. Unless the request method was HEAD, the entity of the response SHOULD contain a short hypertext note with a hyperlink to the new URI(s).

        If the 302 status code is received in response to a request other than GET or HEAD, the user agent MUST NOT automatically redirect the request unless it can be confirmed by the user, since this might change the conditions under which the request was issued.

        Note: RFC 1945 and RFC 2068 specify that the client is not allowed to change the method on the redirected request. However, most existing user agent implementations treat 302 as if it were a 303 response, performing a GET on the Location field-value regardless of the original request method. The status codes 303 and 307 have been added for servers that wish to make unambiguously clear which kind of reaction is expected of the client.

        10.3.4 303 See Other

        The response to the request can be found under a different URI and SHOULD be retrieved using a GET method on that resource. This method exists primarily to allow the output of a POST-activated script to redirect the user agent to a selected resource. The new URI is not a substitute reference for the originally requested resource. The 303 response MUST NOT be cached, but the response to the second (redirected) request might be cacheable.

        The different URI SHOULD be given by the Location field in the response. Unless the request method was HEAD, the entity of the response SHOULD contain a short hypertext note with a hyperlink to the new URI(s).

        Note: Many pre-HTTP/1.1 user agents do not understand the 303 status. When interoperability with such clients is a concern, the 302 status code may be used instead, since most user agents react to a 302 response as described here for 303.

        10.3.5 304 Not Modified

        If the client has performed a conditional GET request and access is allowed, but the document has not been modified, the server SHOULD respond with this status code. The 304 response MUST NOT contain a message-body, and thus is always terminated by the first empty line after the header fields.

        The response MUST include the following header fields:

        - Date, unless its omission is required by section 14.18.1. If a clockless origin server obeys these rules, and proxies and clients add their own Date to any response received without one (as already specified by [RFC 2068], section 14.19), caches will operate correctly.
        - ETag and/or Content-Location, if the header would have been sent in a 200 response to the same request
        - Expires, Cache-Control, and/or Vary, if the field-value might differ from that sent in any previous response for the same variant

        If the conditional GET used a strong cache validator (see section 13.3.3), the response SHOULD NOT include other entity-headers. Otherwise (i.e., the conditional GET used a weak validator), the response MUST NOT include other entity-headers; this prevents inconsistencies between cached entity-bodies and updated headers.

        If a 304 response indicates an entity not currently cached, then the cache MUST disregard the response and repeat the request without the conditional.

        If a cache uses a received 304 response to update a cache entry, the cache MUST update the entry to reflect any new field values given in the response.

        10.3.6 305 Use Proxy

        The requested resource MUST be accessed through the proxy given by the Location field. The Location field gives the URI of the proxy. The recipient is expected to repeat this single request via the proxy. 305 responses MUST only be generated by origin servers.

        Note: RFC 2068 was not clear that 305 was intended to redirect a single request, and to be generated by origin servers only. Not observing these limitations has significant security consequences.

        10.3.7 306 (Unused)

        The 306 status code was used in a previous version of the specification, is no longer used, and the code is reserved.

        10.3.8 307 Temporary Redirect

        The requested resource resides temporarily under a different URI. Since the redirection MAY be altered on occasion, the client SHOULD continue to use the Request-URI for future requests. This response is only cacheable if indicated by a Cache-Control or Expires header field.

        The temporary URI SHOULD be given by the Location field in the response. Unless the request method was HEAD, the entity of the response SHOULD contain a short hypertext note with a hyperlink to the new URI(s), since many pre-HTTP/1.1 user agents do not understand the 307 status. Therefore, the note SHOULD contain the information necessary for a user to repeat the original request on the new URI.

        If the 307 status code is received in response to a request other than GET or HEAD, the user agent MUST NOT automatically redirect the request unless it can be confirmed by the user, since this might change the conditions under which the request was issued.

    I'm using 302 for now, until I find THE correct answer.
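
    For what it's worth, the 302 choice maps directly onto the standard Servlet API: HttpServletResponse.sendRedirect() emits a 302 with a Location header. A minimal sketch of that approach (the /login path and the "user" session attribute are illustrative):

        import java.io.IOException;

        import javax.servlet.http.HttpServlet;
        import javax.servlet.http.HttpServletRequest;
        import javax.servlet.http.HttpServletResponse;

        // Sketch of the 302-based approach the question settles on.
        public class RequiresLoginServlet extends HttpServlet {

            @Override
            protected void doGet(HttpServletRequest req, HttpServletResponse resp)
                    throws IOException {
                if (req.getSession(false) == null
                        || req.getSession().getAttribute("user") == null) {
                    // sendRedirect() issues a 302 with a Location header
                    resp.sendRedirect("/login");
                    return;
                }
                resp.setContentType("text/html");
                resp.getWriter().println("Protected content");
            }
        }

    To send the unambiguous 303 See Other that the quoted spec recommends for POST-then-GET flows, set the status and Location header explicitly (resp.setStatus(303); resp.setHeader("Location", "/login");) instead of calling sendRedirect().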


  • Hibernate unable to instantiate default tuplizer - cannot find getter

    - by ZeldaPinwheel
    I'm trying to use Hibernate to persist a class that looks like this:

        public class Item implements Serializable, Comparable<Item> {

            // Item id
            private Integer id;
            // Description of item in inventory
            private String description;
            // Number of items described by this inventory item
            private int count;
            // Category item belongs to
            private String category;
            // Date item was purchased
            private GregorianCalendar purchaseDate;

            public Item() {
            }

            public Integer getId() { return id; }
            public void setId(Integer id) { this.id = id; }
            public String getDescription() { return description; }
            public void setDescription(String description) { this.description = description; }
            public int getCount() { return count; }
            public void setCount(int count) { this.count = count; }
            public String getCategory() { return category; }
            public void setCategory(String category) { this.category = category; }
            public GregorianCalendar getPurchaseDate() { return purchaseDate; }
            public void setPurchasedate(GregorianCalendar purchaseDate) { this.purchaseDate = purchaseDate; }
        }

    My Hibernate mapping file contains the following:

        <property name="puchaseDate" type="java.util.GregorianCalendar">
            <column name="purchase_date"></column>
        </property>

    When I try to run, I get error messages indicating there is no getter for the purchaseDate attribute:

        577 [main] INFO org.hibernate.connection.DriverManagerConnectionProvider - Using Hibernate built-in connection pool (not for production use!)
        577 [main] INFO org.hibernate.connection.DriverManagerConnectionProvider - Hibernate connection pool size: 20
        577 [main] INFO org.hibernate.connection.DriverManagerConnectionProvider - autocommit mode: false
        592 [main] INFO org.hibernate.connection.DriverManagerConnectionProvider - using driver: com.mysql.jdbc.Driver at URL: jdbc:mysql://localhost:3306/home_inventory
        592 [main] INFO org.hibernate.connection.DriverManagerConnectionProvider - connection properties: {user=root, password=****}
        1078 [main] INFO org.hibernate.cfg.SettingsFactory - RDBMS: MySQL, version: 5.1.45
        1078 [main] INFO org.hibernate.cfg.SettingsFactory - JDBC driver: MySQL-AB JDBC Driver, version: mysql-connector-java-5.1.12 ( Revision: ${bzr.revision-id} )
        1103 [main] INFO org.hibernate.dialect.Dialect - Using dialect: org.hibernate.dialect.MySQLDialect
        1107 [main] INFO org.hibernate.engine.jdbc.JdbcSupportLoader - Disabling contextual LOB creation as JDBC driver reported JDBC version [3] less than 4
        1109 [main] INFO org.hibernate.transaction.TransactionFactoryFactory - Using default transaction strategy (direct JDBC transactions)
        1110 [main] INFO org.hibernate.transaction.TransactionManagerLookupFactory - No TransactionManagerLookup configured (in JTA environment, use of read-write or transactional second-level cache is not recommended)
        1110 [main] INFO org.hibernate.cfg.SettingsFactory - Automatic flush during beforeCompletion(): disabled
        1110 [main] INFO org.hibernate.cfg.SettingsFactory - Automatic session close at end of transaction: disabled
        1110 [main] INFO org.hibernate.cfg.SettingsFactory - JDBC batch size: 15
        1110 [main] INFO org.hibernate.cfg.SettingsFactory - JDBC batch updates for versioned data: disabled
        1111 [main] INFO org.hibernate.cfg.SettingsFactory - Scrollable result sets: enabled
        1111 [main] INFO org.hibernate.cfg.SettingsFactory - JDBC3 getGeneratedKeys(): enabled
        1111 [main] INFO org.hibernate.cfg.SettingsFactory - Connection release mode: auto
        1111 [main] INFO org.hibernate.cfg.SettingsFactory - Maximum outer join fetch depth: 2
        1111 [main] INFO org.hibernate.cfg.SettingsFactory - Default batch fetch size: 1
        1111 [main] INFO org.hibernate.cfg.SettingsFactory - Generate SQL with comments: disabled
        1111 [main] INFO org.hibernate.cfg.SettingsFactory - Order SQL updates by primary key: disabled
        1111 [main] INFO org.hibernate.cfg.SettingsFactory - Order SQL inserts for batching: disabled
        1112 [main] INFO org.hibernate.cfg.SettingsFactory - Query translator: org.hibernate.hql.ast.ASTQueryTranslatorFactory
        1113 [main] INFO org.hibernate.hql.ast.ASTQueryTranslatorFactory - Using ASTQueryTranslatorFactory
        1113 [main] INFO org.hibernate.cfg.SettingsFactory - Query language substitutions: {}
        1113 [main] INFO org.hibernate.cfg.SettingsFactory - JPA-QL strict compliance: disabled
        1113 [main] INFO org.hibernate.cfg.SettingsFactory - Second-level cache: enabled
        1113 [main] INFO org.hibernate.cfg.SettingsFactory - Query cache: disabled
        1113 [main] INFO org.hibernate.cfg.SettingsFactory - Cache region factory : org.hibernate.cache.impl.NoCachingRegionFactory
        1113 [main] INFO org.hibernate.cfg.SettingsFactory - Optimize cache for minimal puts: disabled
        1114 [main] INFO org.hibernate.cfg.SettingsFactory - Structured second-level cache entries: disabled
        1117 [main] INFO org.hibernate.cfg.SettingsFactory - Echoing all SQL to stdout
        1118 [main] INFO org.hibernate.cfg.SettingsFactory - Statistics: disabled
        1118 [main] INFO org.hibernate.cfg.SettingsFactory - Deleted entity synthetic identifier rollback: disabled
        1118 [main] INFO org.hibernate.cfg.SettingsFactory - Default entity-mode: pojo
        1118 [main] INFO org.hibernate.cfg.SettingsFactory - Named query checking : enabled
        1118 [main] INFO org.hibernate.cfg.SettingsFactory - Check Nullability in Core (should be disabled when Bean Validation is on): enabled
        1151 [main] INFO org.hibernate.impl.SessionFactoryImpl - building session factory

        org.hibernate.HibernateException: Unable to instantiate default tuplizer [org.hibernate.tuple.entity.PojoEntityTuplizer]
            at org.hibernate.tuple.entity.EntityTuplizerFactory.constructTuplizer(EntityTuplizerFactory.java:110)
            at org.hibernate.tuple.entity.EntityTuplizerFactory.constructDefaultTuplizer(EntityTuplizerFactory.java:135)
            at org.hibernate.tuple.entity.EntityEntityModeToTuplizerMapping.<init>(EntityEntityModeToTuplizerMapping.java:80)
            at org.hibernate.tuple.entity.EntityMetamodel.<init>(EntityMetamodel.java:323)
            at org.hibernate.persister.entity.AbstractEntityPersister.<init>(AbstractEntityPersister.java:475)
            at org.hibernate.persister.entity.SingleTableEntityPersister.<init>(SingleTableEntityPersister.java:133)
            at org.hibernate.persister.PersisterFactory.createClassPersister(PersisterFactory.java:84)
            at org.hibernate.impl.SessionFactoryImpl.<init>(SessionFactoryImpl.java:295)
            at org.hibernate.cfg.Configuration.buildSessionFactory(Configuration.java:1385)
            at service.HibernateSessionFactory.currentSession(HibernateSessionFactory.java:53)
            at service.ItemSvcHibImpl.generateReport(ItemSvcHibImpl.java:78)
            at service.test.ItemSvcTest.testGenerateReport(ItemSvcTest.java:226)
            at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
            at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
            at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
            at java.lang.reflect.Method.invoke(Method.java:597)
            at junit.framework.TestCase.runTest(TestCase.java:164)
            at junit.framework.TestCase.runBare(TestCase.java:130)
            at junit.framework.TestResult$1.protect(TestResult.java:106)
            at junit.framework.TestResult.runProtected(TestResult.java:124)
            at junit.framework.TestResult.run(TestResult.java:109)
            at junit.framework.TestCase.run(TestCase.java:120)
            at junit.framework.TestSuite.runTest(TestSuite.java:230)
            at junit.framework.TestSuite.run(TestSuite.java:225)
            at org.eclipse.jdt.internal.junit.runner.junit3.JUnit3TestReference.run(JUnit3TestReference.java:130)
            at org.eclipse.jdt.internal.junit.runner.TestExecution.run(TestExecution.java:38)
            at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.runTests(RemoteTestRunner.java:467)
            at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.runTests(RemoteTestRunner.java:683)
            at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.run(RemoteTestRunner.java:390)
            at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.main(RemoteTestRunner.java:197)
        Caused by: java.lang.reflect.InvocationTargetException
            at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
            at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:39)
            at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27)
            at java.lang.reflect.Constructor.newInstance(Constructor.java:513)
            at org.hibernate.tuple.entity.EntityTuplizerFactory.constructTuplizer(EntityTuplizerFactory.java:107)
            ... 29 more
        Caused by: org.hibernate.PropertyNotFoundException: Could not find a getter for puchaseDate in class domain.Item
            at org.hibernate.property.BasicPropertyAccessor.createGetter(BasicPropertyAccessor.java:328)
            at org.hibernate.property.BasicPropertyAccessor.getGetter(BasicPropertyAccessor.java:321)
            at org.hibernate.mapping.Property.getGetter(Property.java:304)
            at org.hibernate.tuple.entity.PojoEntityTuplizer.buildPropertyGetter(PojoEntityTuplizer.java:299)
            at org.hibernate.tuple.entity.AbstractEntityTuplizer.<init>(AbstractEntityTuplizer.java:158)
            at org.hibernate.tuple.entity.PojoEntityTuplizer.<init>(PojoEntityTuplizer.java:77)
            ... 34 more

    I'm new to Hibernate, so I don't know all the ins and outs, but I do have the getter and setter for the purchaseDate attribute. I don't know what I'm missing here. Does anyone else see it? Thanks!
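
    The last "Caused by" in the trace carries the answer: Hibernate is looking for a getter for puchaseDate (no "r"), while the class defines purchaseDate, so the mapping's name attribute is simply misspelled. A plausible fix, assuming the rest of the mapping is correct, is to repair the spelling (and, to keep JavaBean conventions intact, also rename setPurchasedate to setPurchaseDate, since Hibernate resolves getters and setters reflectively from the property name in the mapping):

        <property name="purchaseDate" type="java.util.GregorianCalendar">
            <column name="purchase_date"></column>
        </property>

    As a side note, the built-in Hibernate type name calendar may be a safer choice for the type attribute than the concrete java.util.GregorianCalendar class, though the immediate failure here is the spelling.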


  • Hibernate: load vs get

    - by Albert Kam
    According to the tutorial : http://jpa.ezhibernate.com/Javacode/learn.jsp?tutorial=27hibernateloadvshibernateget, If you initialize a JavaBean instance with a load method call, you can only access the properties of that JavaBean, for the first time, within the transactional context in which it was initialized. If you try to access the various properties of the JavaBean after the transaction that loaded it has been committed, you'll get an exception, a LazyInitializationException, as Hibernate no longer has a valid transactional context to use to hit the database. But with my experiment, using hibernate 3.6, and postgres 9, it doesnt throw any exception at all. Am i missing something ? Here's my code : import org.hibernate.Session; public class AppLoadingEntities { /** * @param args */ public static void main(String[] args) { HibernateUtil.beginTransaction(); Session session = HibernateUtil.getSession(); EntityUser userFromGet = get(session); EntityUser userFromLoad = load(session); // finish the transaction session.getTransaction().commit(); // try fetching field value from entity bean that is fetched via get outside transaction, and it'll be okay System.out.println("userFromGet.getId() : " + userFromGet.getId()); System.out.println("userFromGet.getName() : " + userFromGet.getName()); // fetching field from entity bean that is fetched via load outside transaction, and it'll be errornous // NOTE : but after testing, load seems to be okay, what gives ? ask forums try { System.out.println("userFromLoad.getId() : " + userFromLoad.getId()); System.out.println("userFromLoad.getName() : " + userFromLoad.getName()); } catch(Exception e) { System.out.println("error while fetching entity that is fetched from load : " + e.getMessage()); } } private static EntityUser load(Session session) { EntityUser user = (EntityUser) session.load(EntityUser.class, 1l); System.out.println("user fetched with 'load' inside transaction : " + user); return user; } private static EntityUser get(Session session) { // safe to set it to 1, coz the table got recreated at every run of this app EntityUser user = (EntityUser) session.get(EntityUser.class, 1l); System.out.println("user fetched with 'get' : " + user); return user; } } And here's the output : 88 [main] INFO org.hibernate.annotations.common.Version - Hibernate Commons Annotations 3.2.0.Final 93 [main] INFO org.hibernate.cfg.Environment - Hibernate 3.6.0.Final 94 [main] INFO org.hibernate.cfg.Environment - hibernate.properties not found 96 [main] INFO org.hibernate.cfg.Environment - Bytecode provider name : javassist 98 [main] INFO org.hibernate.cfg.Environment - using JDK 1.4 java.sql.Timestamp handling 139 [main] INFO org.hibernate.cfg.Configuration - configuring from resource: /hibernate.cfg.xml 139 [main] INFO org.hibernate.cfg.Configuration - Configuration resource: /hibernate.cfg.xml 172 [main] WARN org.hibernate.util.DTDEntityResolver - recognized obsolete hibernate namespace http://hibernate.sourceforge.net/. Use namespace http://www.hibernate.org/dtd/ instead. Refer to Hibernate 3.6 Migration Guide! 
191 [main] INFO org.hibernate.cfg.Configuration - Configured SessionFactory: null 237 [main] INFO org.hibernate.cfg.AnnotationBinder - Binding entity from annotated class: EntityUser 263 [main] INFO org.hibernate.cfg.annotations.EntityBinder - Bind entity EntityUser on table MstUser 293 [main] INFO org.hibernate.cfg.Configuration - Hibernate Validator not found: ignoring 296 [main] INFO org.hibernate.cfg.search.HibernateSearchEventListenerRegister - Unable to find org.hibernate.search.event.FullTextIndexEventListener on the classpath. Hibernate Search is not enabled. 300 [main] INFO org.hibernate.connection.DriverManagerConnectionProvider - Using Hibernate built-in connection pool (not for production use!) 300 [main] INFO org.hibernate.connection.DriverManagerConnectionProvider - Hibernate connection pool size: 20 300 [main] INFO org.hibernate.connection.DriverManagerConnectionProvider - autocommit mode: false 309 [main] INFO org.hibernate.connection.DriverManagerConnectionProvider - using driver: org.postgresql.Driver at URL: jdbc:postgresql://localhost:5432/hibernate 309 [main] INFO org.hibernate.connection.DriverManagerConnectionProvider - connection properties: {user=sofco, password=****} 354 [main] INFO org.hibernate.cfg.SettingsFactory - Database -> name : PostgreSQL version : 9.0.1 major : 9 minor : 0 354 [main] INFO org.hibernate.cfg.SettingsFactory - Driver -> name : PostgreSQL Native Driver version : PostgreSQL 9.0 JDBC4 (build 801) major : 9 minor : 0 372 [main] INFO org.hibernate.dialect.Dialect - Using dialect: org.hibernate.dialect.PostgreSQLDialect 382 [main] INFO org.hibernate.engine.jdbc.JdbcSupportLoader - Disabling contextual LOB creation as createClob() method threw error : java.lang.reflect.InvocationTargetException 383 [main] INFO org.hibernate.transaction.TransactionFactoryFactory - Transaction strategy: org.hibernate.transaction.JDBCTransactionFactory 384 [main] INFO org.hibernate.transaction.TransactionManagerLookupFactory - No TransactionManagerLookup configured (in JTA environment, use of read-write or transactional second-level cache is not recommended) 384 [main] INFO org.hibernate.cfg.SettingsFactory - Automatic flush during beforeCompletion(): disabled 384 [main] INFO org.hibernate.cfg.SettingsFactory - Automatic session close at end of transaction: disabled 384 [main] INFO org.hibernate.cfg.SettingsFactory - JDBC batch size: 15 384 [main] INFO org.hibernate.cfg.SettingsFactory - JDBC batch updates for versioned data: disabled 384 [main] INFO org.hibernate.cfg.SettingsFactory - Scrollable result sets: enabled 384 [main] INFO org.hibernate.cfg.SettingsFactory - JDBC3 getGeneratedKeys(): enabled 384 [main] INFO org.hibernate.cfg.SettingsFactory - Connection release mode: auto 385 [main] INFO org.hibernate.cfg.SettingsFactory - Default batch fetch size: 1 385 [main] INFO org.hibernate.cfg.SettingsFactory - Generate SQL with comments: disabled 385 [main] INFO org.hibernate.cfg.SettingsFactory - Order SQL updates by primary key: disabled 385 [main] INFO org.hibernate.cfg.SettingsFactory - Order SQL inserts for batching: disabled 385 [main] INFO org.hibernate.cfg.SettingsFactory - Query translator: org.hibernate.hql.ast.ASTQueryTranslatorFactory 385 [main] INFO org.hibernate.hql.ast.ASTQueryTranslatorFactory - Using ASTQueryTranslatorFactory 385 [main] INFO org.hibernate.cfg.SettingsFactory - Query language substitutions: {} 385 [main] INFO org.hibernate.cfg.SettingsFactory - JPA-QL strict compliance: disabled 385 [main] INFO org.hibernate.cfg.SettingsFactory - 
Second-level cache: enabled 385 [main] INFO org.hibernate.cfg.SettingsFactory - Query cache: disabled 385 [main] INFO org.hibernate.cfg.SettingsFactory - Cache region factory : org.hibernate.cache.impl.NoCachingRegionFactory 386 [main] INFO org.hibernate.cfg.SettingsFactory - Optimize cache for minimal puts: disabled 386 [main] INFO org.hibernate.cfg.SettingsFactory - Structured second-level cache entries: disabled 388 [main] INFO org.hibernate.cfg.SettingsFactory - Echoing all SQL to stdout 389 [main] INFO org.hibernate.cfg.SettingsFactory - Statistics: disabled 389 [main] INFO org.hibernate.cfg.SettingsFactory - Deleted entity synthetic identifier rollback: disabled 389 [main] INFO org.hibernate.cfg.SettingsFactory - Default entity-mode: pojo 389 [main] INFO org.hibernate.cfg.SettingsFactory - Named query checking : enabled 389 [main] INFO org.hibernate.cfg.SettingsFactory - Check Nullability in Core (should be disabled when Bean Validation is on): enabled 402 [main] INFO org.hibernate.impl.SessionFactoryImpl - building session factory 549 [main] INFO org.hibernate.impl.SessionFactoryObjectFactory - Not binding factory to JNDI, no JNDI name configured

```text
Hibernate: select entityuser0_.id as id0_0_, entityuser0_.name as name0_0_, entityuser0_.password as password0_0_ from MstUser entityuser0_ where entityuser0_.id=?
user fetched with 'get' : 1:Albert Kam xzy:abc
user fetched with 'load' inside transaction : 1:Albert Kam xzy:abc
userFromGet.getId() : 1
userFromGet.getName() : Albert Kam xzy
userFromLoad.getId() : 1
userFromLoad.getName() : Albert Kam xzy
```
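A likely explanation, based on the code above rather than the tutorial: the proxy returned by load() is initialized inside the transaction, because the System.out.println(... + user) in load() concatenates the entity into a string, which calls toString() and hits the database while the session is still active. On top of that, the session is committed but never closed, and Hibernate can lazily initialize through an open session even after commit. A minimal sketch of the variant that should reproduce the LazyInitializationException, reusing the question's HibernateUtil and EntityUser (assuming name is a non-identifier, lazily loaded property):

```java
import org.hibernate.LazyInitializationException;
import org.hibernate.Session;

public class AppLoadVsGet {
    public static void main(String[] args) {
        HibernateUtil.beginTransaction();
        Session session = HibernateUtil.getSession();

        // load() returns an uninitialized proxy; note that nothing touches it
        // here, so no SELECT is issued inside the transaction.
        EntityUser lazyUser = (EntityUser) session.load(EntityUser.class, 1L);

        session.getTransaction().commit();
        session.close(); // the session must actually be closed, not just committed

        try {
            // First access to a non-identifier property forces initialization,
            // which now fails because the owning session is gone.
            System.out.println(lazyUser.getName());
        } catch (LazyInitializationException e) {
            System.out.println("as expected: " + e.getMessage());
        }
    }
}
```

Note that getId() alone would still work even here, since the identifier lives on the proxy itself and never triggers initialization.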

    Read the article

  • Help to edit the Recent Posts WordPress widget to display in all 3 languages at once

    - by CreativEliza
Site link: http://nuestrafrontera.org/wordpress/ I want the feed of recent post titles to show in the sidebar for all 3 languages, separated by language. So, for example, under Recent Posts the sidebar would have "English" and then the latest 3 posts in English, then "Español" and the latest 3 in Spanish, and then French. All in a list in the column, appearing on all pages with the sidebar, in all languages. I am using the most current version of WordPress with the WPML plugin. I believe the WordPress Recent Posts widget needs to be tweaked to do this. Here is the code (from wp-includes/default-widgets.php):

```php
class WP_Widget_Recent_Posts extends WP_Widget {

    function WP_Widget_Recent_Posts() {
        $widget_ops = array('classname' => 'widget_recent_entries', 'description' => __( "The most recent posts on your blog") );
        $this->WP_Widget('recent-posts', __('Recent Posts'), $widget_ops);
        $this->alt_option_name = 'widget_recent_entries';

        add_action( 'save_post', array(&$this, 'flush_widget_cache') );
        add_action( 'deleted_post', array(&$this, 'flush_widget_cache') );
        add_action( 'switch_theme', array(&$this, 'flush_widget_cache') );
    }

    function widget($args, $instance) {
        $cache = wp_cache_get('widget_recent_posts', 'widget');

        if ( !is_array($cache) )
            $cache = array();

        if ( isset($cache[$args['widget_id']]) ) {
            echo $cache[$args['widget_id']];
            return;
        }

        ob_start();
        extract($args);

        $title = apply_filters('widget_title', empty($instance['title']) ? __('Recent Posts') : $instance['title']);
        if ( !$number = (int) $instance['number'] )
            $number = 10;
        else if ( $number < 1 )
            $number = 1;
        else if ( $number > 15 )
            $number = 15;

        $r = new WP_Query(array('showposts' => $number, 'nopaging' => 0, 'post_status' => 'publish', 'caller_get_posts' => 1));
        if ($r->have_posts()) : ?>
        <?php echo $before_widget; ?>
        <?php if ( $title ) echo $before_title . $title . $after_title; ?>
        <ul>
        <?php while ($r->have_posts()) : $r->the_post(); ?>
        <li><a href="<?php the_permalink() ?>" title="<?php echo esc_attr(get_the_title() ? get_the_title() : get_the_ID()); ?>"><?php if ( get_the_title() ) the_title(); else the_ID(); ?> </a></li>
        <?php endwhile; ?>
        </ul>
        <?php echo $after_widget; ?>
        <?php
        wp_reset_query();  // Restore global post data stomped by the_post().
        endif;

        $cache[$args['widget_id']] = ob_get_flush();
        wp_cache_add('widget_recent_posts', $cache, 'widget');
    }

    function update( $new_instance, $old_instance ) {
        $instance = $old_instance;
        $instance['title'] = strip_tags($new_instance['title']);
        $instance['number'] = (int) $new_instance['number'];
        $this->flush_widget_cache();

        $alloptions = wp_cache_get( 'alloptions', 'options' );
        if ( isset($alloptions['widget_recent_entries']) )
            delete_option('widget_recent_entries');

        return $instance;
    }

    function flush_widget_cache() {
        wp_cache_delete('widget_recent_posts', 'widget');
    }

    function form( $instance ) {
        $title = esc_attr($instance['title']);
        if ( !$number = (int) $instance['number'] )
            $number = 5;
        ?>
        <p><label for="<?php echo $this->get_field_id('title'); ?>"><?php _e('Title:'); ?></label>
        <input class="widefat" id="<?php echo $this->get_field_id('title'); ?>" name="<?php echo $this->get_field_name('title'); ?>" type="text" value="<?php echo $title; ?>" /></p>

        <p><label for="<?php echo $this->get_field_id('number'); ?>"><?php _e('Number of posts to show:'); ?></label>
        <input id="<?php echo $this->get_field_id('number'); ?>" name="<?php echo $this->get_field_name('number'); ?>" type="text" value="<?php echo $number; ?>" size="3" /><br />
        <small><?php _e('(at most 15)'); ?></small></p>
        <?php
    }
}
```
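One way to approach this (a sketch under assumptions, not drop-in code): instead of editing wp-includes/default-widgets.php, which gets overwritten on every WordPress upgrade, register a small custom widget or template function that loops over WPML's active languages. icl_get_languages(), the $sitepress global with its switch_lang() method, and the ICL_LANGUAGE_CODE constant are WPML's API; verify them against the installed WPML version:

```php
<?php
// Sketch: prints a heading per WPML language plus its 3 latest posts.
// Assumes WPML is active; function and array-key names should be checked
// against your WPML version.
function multilingual_recent_posts() {
    global $sitepress;
    $original_lang = ICL_LANGUAGE_CODE; // remember the visitor's language

    foreach ( icl_get_languages('skip_missing=0') as $lang ) {
        $sitepress->switch_lang( $lang['language_code'] ); // queries now filter by this language
        $r = new WP_Query( array( 'showposts' => 3, 'post_status' => 'publish' ) );

        echo '<h4>' . esc_html( $lang['native_name'] ) . '</h4><ul>';
        while ( $r->have_posts() ) {
            $r->the_post();
            echo '<li><a href="' . get_permalink() . '">' . get_the_title() . '</a></li>';
        }
        echo '</ul>';
        wp_reset_query(); // restore global post data, as the core widget does
    }

    $sitepress->switch_lang( $original_lang ); // put the language back
}
```

Calling this from sidebar.php, or wrapping it in a WP_Widget subclass shaped like the core one above, keeps WordPress core untouched.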

    Read the article

  • ASP.NET MVC and ajax async callback execution order

    - by lrb
I have been sorting through this issue all day and hope someone can help pinpoint my problem. I have created an "asynchronous progress callback" type of functionality in my app using ajax. When I strip the functionality out into a test application I get the desired results (screenshot: "Desired Functionality"; image omitted). When I tie the functionality into my single-page application using the same code, I get a sort of blocking issue where all requests are responded to only after the last task has completed; in the test app, all requests are responded to in order. The server reports a ("pending") state for all requests until the controller method has completed. Can anyone give me a hint as to what could cause the change in behavior? (Screenshot: "Not Desired"; image omitted.)

Desired Fiddler request/response:

```http
GET http://localhost:12028/task/status?_=1383333945335 HTTP/1.1
X-ProgressBar-TaskId: 892183768
Accept: */*
X-Requested-With: XMLHttpRequest
Referer: http://localhost:12028/
Accept-Language: en-US
Accept-Encoding: gzip, deflate
User-Agent: Mozilla/5.0 (compatible; MSIE 10.0; Windows NT 6.1; Trident/6.0)
Connection: Keep-Alive
DNT: 1
Host: localhost:12028

HTTP/1.1 200 OK
Cache-Control: private
Content-Type: text/html; charset=utf-8
Vary: Accept-Encoding
Server: Microsoft-IIS/8.0
X-AspNetMvc-Version: 3.0
X-AspNet-Version: 4.0.30319
X-SourceFiles: =?UTF-8?B?QzpcUHJvamVjdHNcVEVNUFxQcm9ncmVzc0Jhclx0YXNrXHN0YXR1cw==?=
X-Powered-By: ASP.NET
Date: Fri, 01 Nov 2013 21:39:08 GMT
Content-Length: 25

Iteration completed...
```

Not-desired Fiddler request/response:

```http
GET http://localhost:60171/_Test/status?_=1383341766884 HTTP/1.1
X-ProgressBar-TaskId: 838217998
Accept: */*
X-Requested-With: XMLHttpRequest
Referer: http://localhost:60171/Report/Index
Accept-Language: en-US
Accept-Encoding: gzip, deflate
User-Agent: Mozilla/5.0 (compatible; MSIE 10.0; Windows NT 6.1; Trident/6.0)
Connection: Keep-Alive
DNT: 1
Host: localhost:60171
Pragma: no-cache
Cookie: ASP.NET_SessionId=rjli2jb0wyjrgxjqjsicdhdi; AspxAutoDetectCookieSupport=1; TTREPORTS_1_0=CC2A501EF499F9F...; __RequestVerificationToken=6klOoK6lSXR51zCVaDNhuaF6Blual0l8_JH1QTW9W6L-3LroNbyi6WvN6qiqv-PjqpCy7oEmNnAd9s0UONASmBQhUu8aechFYq7EXKzu7WSybObivq46djrE1lvkm6hNXgeLNLYmV0ORmGJeLWDyvA2

HTTP/1.1 200 OK
Cache-Control: private
Content-Type: text/html; charset=utf-8
Vary: Accept-Encoding
Server: Microsoft-IIS/8.0
X-AspNetMvc-Version: 4.0
X-AspNet-Version: 4.0.30319
X-SourceFiles: =?UTF-8?B?QzpcUHJvamVjdHNcSUxlYXJuLlJlcG9ydHMuV2ViXHRydW5rXElMZWFybi5SZXBvcnRzLldlYlxfVGVzdFxzdGF0dXM=?=
X-Powered-By: ASP.NET
Date: Fri, 01 Nov 2013 21:37:48 GMT
Content-Length: 25

Iteration completed...
```

The only difference in the two requests' headers, besides the auth tokens, is "Pragma: no-cache" in the request and the ASP.NET MVC version in the response.
Thanks. Update - code posted. (I should probably indicate that this code originated from an article by Dino Esposito.)

```javascript
var ilProgressWorker = function () {
    var that = {};
    that._xhr = null;
    that._taskId = 0;
    that._timerId = 0;
    that._progressUrl = "";
    that._abortUrl = "";
    that._interval = 500;
    that._userDefinedProgressCallback = null;
    that._taskCompletedCallback = null;
    that._taskAbortedCallback = null;

    that.createTaskId = function () {
        var _minNumber = 100,
            _maxNumber = 1000000000;
        return _minNumber + Math.floor(Math.random() * _maxNumber);
    };

    // Set progress callback
    that.callback = function (userCallback, completedCallback, abortedCallback) {
        that._userDefinedProgressCallback = userCallback;
        that._taskCompletedCallback = completedCallback;
        that._taskAbortedCallback = abortedCallback;
        return this;
    };

    // Set frequency of refresh
    that.setInterval = function (interval) {
        that._interval = interval;
        return this;
    };

    // Abort the operation
    that.abort = function () {
        // if (_xhr !== null)
        //     _xhr.abort();
        if (that._abortUrl != null && that._abortUrl != "") {
            $.ajax({
                url: that._abortUrl,
                cache: false,
                headers: { 'X-ProgressBar-TaskId': that._taskId }
            });
        }
    };

    // INTERNAL FUNCTION
    that._internalProgressCallback = function () {
        that._timerId = window.setTimeout(that._internalProgressCallback, that._interval);
        $.ajax({
            url: that._progressUrl,
            cache: false,
            headers: { 'X-ProgressBar-TaskId': that._taskId },
            success: function (status) {
                if (that._userDefinedProgressCallback != null)
                    that._userDefinedProgressCallback(status);
            },
            complete: function (data) {
                var i = 0;
            },
        });
    };

    // Invoke the URL and monitor its progress
    that.start = function (url, progressUrl, abortUrl) {
        that._taskId = that.createTaskId();
        that._progressUrl = progressUrl;
        that._abortUrl = abortUrl;

        // Place the Ajax call
        _xhr = $.ajax({
            url: url,
            cache: false,
            headers: { 'X-ProgressBar-TaskId': that._taskId },
            complete: function () {
                if (_xhr.status != 0)
                    return;
                if (that._taskAbortedCallback != null)
                    that._taskAbortedCallback();
                that.end();
            },
            success: function (data) {
                if (that._taskCompletedCallback != null)
                    that._taskCompletedCallback(data);
                that.end();
            }
        });

        // Start the progress callback (if any)
        if (that._userDefinedProgressCallback == null || that._progressUrl === "")
            return this;
        that._timerId = window.setTimeout(that._internalProgressCallback, that._interval);
    };

    // Finalize the task
    that.end = function () {
        that._taskId = 0;
        window.clearTimeout(that._timerId);
    }

    return that;
};
```
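A hedged guess at the cause, based on one other visible difference between the two traces: the blocking app's request carries an ASP.NET_SessionId cookie, and ASP.NET takes an exclusive per-session lock for any request that can write session state, so concurrent requests from the same session are served one at a time - exactly the "everything answers after the last task finishes" pattern. Since MVC 3 a controller can opt out of that lock; a sketch (controller and action names are placeholders):

```csharp
using System.Web.Mvc;
using System.Web.SessionState;

// ReadOnly lets this controller's requests run concurrently with the
// long-running task request, because no session write-lock is taken.
// Use SessionStateBehavior.Disabled if the status action needs no session at all.
[SessionState(SessionStateBehavior.ReadOnly)]
public class TaskStatusController : Controller
{
    [HttpGet]
    public ActionResult Status()
    {
        // Look up progress for Request.Headers["X-ProgressBar-TaskId"]
        // and return it; placeholder body shown here.
        return Content("Iteration completed...");
    }
}
```

If the long-running action itself needs read-write session state, that is fine; marking just the polled status controller ReadOnly is usually enough to let the progress calls through.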

    Read the article

  • Problem with tomcat and getLocalHost exception

    - by xain
I'm running a Linux server named S1 on a "cloud" server, and when Tomcat 6.0.24 starts, I get the exception:

```text
org.apache.catalina.connector.Connector pause
SEVERE: Protocol handler pause failed
java.net.UnknownHostException: S1: S1
at java.net.InetAddress.getLocalHost(InetAddress.java:1353)
at org.apache.jk.common.ChannelSocket.unLockSocket(ChannelSocket.java:485)
```

Which then leads to:

```text
ERROR ehcache.Cache - Unable to set localhost. This prevents creation of a GUID. Cause was: Sjira1: S1
java.net.UnknownHostException: S1: S1
at java.net.InetAddress.getLocalHost(InetAddress.java:1353)
at net.sf.ehcache.Cache.<clinit>(Cache.java:143)
```

My hosts file is:

```text
127.0.0.1 localhost localhost.localdomain
(valid-ip-address) S1 S1.(valid domain name)
```

ping S1 and ping S1.(valid domain name) return the valid IP address; nslookup S1.(valid domain name) returns the valid IP address; nslookup S1 throws "** server can't find S1: NXDOMAIN". Any ideas about how to fix this? Thanks
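Two hedged observations. First, nslookup talks straight to DNS and ignores /etc/hosts, so the NXDOMAIN for the bare name proves nothing by itself. Second, the conventional hosts ordering is canonical FQDN first, then aliases, and the name must match the output of hostname exactly, including case. A sketch with placeholder values (the real IP and domain are redacted in the question):

```text
# /etc/hosts (IP address and domain are placeholders)
127.0.0.1      localhost localhost.localdomain
203.0.113.10   S1.example.com S1
```

Also worth confirming that /etc/nsswitch.conf lists `hosts: files dns`, since InetAddress.getLocalHost() resolves the kernel hostname through that lookup order.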

    Read the article

  • Nginx + PHP-FPM executes script, but returns 404

    - by MorfiusX
    I am using Nginx + PHP-FPM to run a Wordpress based site. I have a URL that should return dynamically generated JSON data for use with the DataTables jQuery plugin. The data is returned properly, but with a return code of 404. I think this is a Nginx config issue, but I haven't been able to figure out why. The script 'getTable.php' works properly on the production version of the site which is currently using Apache. Anyone know how I can get this to work on Nginx? URL: http://dev.iloveskydiving.org/wp-content/plugins/ils-workflow/lib/getTable.php SERVER: CentOS 6 + Varnish (caching disabled for development) + Nginx + PHP-FPM + Wordpress + W3 Total Cache Nginx Config: server { # Server Parameters listen 127.0.0.1:8082; server_name dev.iloveskydiving.org; root /var/www/dev.iloveskydiving.org/html; access_log /var/www/dev.iloveskydiving.org/logs/access.log main; error_log /var/www/dev.iloveskydiving.org/logs/error.log error; index index.php; # Rewrite minified CSS and JS files location ~* \.(css|js) { if (!-f $request_filename) { rewrite ^/wp-content/w3tc/min/(.+\.(css|js))$ /wp-content/w3tc/min/index.php?file=$1 last; expires max; } } # Set a variable to work around the lack of nested conditionals set $cache_uri $request_uri; # Don't cache uris containing the following segments if ($request_uri ~* "(\/wp-admin\/|\/xmlrpc.php|\/wp-(app|cron|login|register|mail)\.php|wp-.*\.php|index\.php|wp\-comments\-popup\.php|wp\-links\-opml\.php|wp\-locations\.php)") { set $cache_uri "no cache"; } # Don't use the cache for logged in users or recent commenters if ($http_cookie ~* "comment_author|wordpress_[a-f0-9]+|wp\-postpass|wordpress_logged_in") { set $cache_uri 'no cache'; } # Use cached or actual file if they exists, otherwise pass request to WordPress location / { try_files /wp-content/w3tc/pgcache/$cache_uri/_index.html $uri $uri/ /index.php?q=$uri&$args; } # Cache static files for as long as possible location ~* \.(xml|ogg|ogv|svg|svgz|eot|otf|woff|mp4|ttf|css|rss|atom|js|jpg|jpeg|gif|png|ico|zip|tgz|gz|rar|bz2|doc|xls|exe|ppt|tar|mid|midi|wav|bmp|rtf)$ { try_files $uri =404; expires max; access_log off; } # Deny access to hidden files location ~* /\.ht { deny all; access_log off; log_not_found off; } location ~ \.php$ { try_files $uri =404; fastcgi_split_path_info ^(.+\.php)(/.+)$; include /etc/nginx/fastcgi_params; fastcgi_index index.php; fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; fastcgi_param PATH_INFO $fastcgi_script_name; fastcgi_intercept_errors on; fastcgi_pass unix:/var/lib/php-fpm/php-fpm.sock; # port where FastCGI processes were spawned } } Fast CGI Params: fastcgi_param QUERY_STRING $query_string; fastcgi_param REQUEST_METHOD $request_method; fastcgi_param CONTENT_TYPE $content_type; fastcgi_param CONTENT_LENGTH $content_length; fastcgi_param SCRIPT_NAME $fastcgi_script_name; fastcgi_param REQUEST_URI $request_uri; fastcgi_param DOCUMENT_URI $document_uri; fastcgi_param DOCUMENT_ROOT $document_root; fastcgi_param SERVER_PROTOCOL $server_protocol; fastcgi_param HTTPS $https if_not_empty; fastcgi_param GATEWAY_INTERFACE CGI/1.1; fastcgi_param SERVER_SOFTWARE nginx/$nginx_version; fastcgi_param REMOTE_ADDR $remote_addr; fastcgi_param REMOTE_PORT $remote_port; fastcgi_param SERVER_ADDR $server_addr; fastcgi_param SERVER_PORT $server_port; fastcgi_param SERVER_NAME $server_name; # PHP only, required if PHP was built with --enable-force-cgi-redirect fastcgi_param REDIRECT_STATUS 200; UPDATE: Upon further digging, it looks like Nginx is generating the 404 and 
PHP-FPM is executing the script properly and returning a 200. UPDATE: Here are the contents of the script:

```php
<?php
/**
 * Connect to WordPress
 */
require(dirname(__FILE__) . '/../../../../wp-blog-header.php');

/**
 * Define temporary array
 */
$aaData = array();
$aaData['aaData'] = array();

/**
 * Execute Query
 */
$query = new WP_Query( array( 'post_type' => 'post', 'posts_per_page' => '-1' ) );
foreach ($query->posts as $post) {
    array_push( $aaData['aaData'], array( $post->post_title ) );
}

/**
 * Echo JSON encoded array
 */
echo json_encode($aaData);
```
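A plausible explanation, given the script above: requiring wp-blog-header.php runs WordPress's full routing, and since this URL does not match any permalink, WordPress itself marks the request 404 before the script's JSON is echoed - the body arrives intact but the status is already set. A sketch of the usual workaround (status_header() is a WordPress core function; WP_USE_THEMES just skips template loading):

```php
<?php
// Sketch: bootstrap WordPress for data access without letting its router
// decide the HTTP status of this request.
define('WP_USE_THEMES', false); // don't load/run the theme templates
require(dirname(__FILE__) . '/../../../../wp-blog-header.php');

status_header(200);                        // override WordPress's 404 verdict
header('Content-Type: application/json');  // the response above was text/html

// ... build and echo $aaData exactly as in the original script ...
```

Checking with `curl -i` against the same URL before and after the change should show whether the 404 came from WordPress rather than from the Nginx config.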

    Read the article

  • Disabling kextcache on 10.5.8 and 10.6.3

    - by Jeff Kelley
    We use Radmind to manage our Mac OS X loadsets and, as such, often run into difficulty when new OS releases come out due to, among other things, updated kernel extensions. The workflow in the past (OS revisions <= 10.4) was to delete the kernel extension cache, update the extensions, and then reboot. That worked just fine, as the system would re-create missing caches on boot. In Leopard, you need to delete the caches after replacing the kernel extensions with their new versions, as the system will automatically start creating them when you replace them; the only way to ensure that you don't have invalid extensions cached is to delete the cache before rebooting. I'm looking for a way to prevent the kernel extensions cache from being re-created until the next reboot. If you modify the contents of /System/Library/Extensions/, kextcache will start up automatically. I've looked through /System/Library/LaunchDaemons/ and other places, but I can't find whatever it is that's starting kextcache. Any ideas?

    Read the article

  • Newbie asks about swap: How much swap do I have?

    - by Cintaku
When looking at vmstat, this is what I got:

```text
procs -----------memory---------- ---swap-- -----io---- --system-- -----cpu------
 r  b   swpd   free   buff  cache   si   so    bi    bo   in   cs us sy id wa st
 1  0      0   2872      0      0    0    0     8    17    0   14  3  1 94  2  0
```

The cache is 0, and I have no idea how much total swap I have. But when there is not enough RAM (256 MB), swap gets used and it looks like this:

```text
procs -----------memory---------- ---swap-- -----io---- --system-- -----cpu------
 r  b   swpd   free   buff  cache   si   so    bi    bo   in   cs us sy id wa st
 3  1   2468      0      0      0    0    0     8    17    0   16  3  1 94  2  0
```
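For the "how much swap do I have" part: vmstat's swpd column only shows swap in use, never the total. A couple of standard commands show the configured total (on some container-based VPSes, e.g. OpenVZ, no real swap device is exposed, so the total can legitimately be 0):

```sh
# Totals in MB; the "Swap:" row shows total/used/free
free -m

# One line per swap area with its size
swapon -s                    # same data as: cat /proc/swaps

grep -i swap /proc/meminfo   # SwapTotal / SwapFree in kB
```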

    Read the article

  • Optimizing MySQL for small VPS

    - by Chris M
    I'm trying to optimize my MySQL config for a verrry small VPS. The VPS is also running NGINX/PHP-FPM and Magento; all with a limit of 250MB of RAM. This is an output of MySQL Tuner... -------- General Statistics -------------------------------------------------- [--] Skipped version check for MySQLTuner script [OK] Currently running supported MySQL version 5.1.41-3ubuntu12.8 [OK] Operating on 64-bit architecture -------- Storage Engine Statistics ------------------------------------------- [--] Status: -Archive -BDB -Federated +InnoDB -ISAM -NDBCluster [--] Data in MyISAM tables: 1M (Tables: 14) [--] Data in InnoDB tables: 29M (Tables: 301) [--] Data in MEMORY tables: 1M (Tables: 17) [!!] Total fragmented tables: 301 -------- Security Recommendations ------------------------------------------- [OK] All database users have passwords assigned -------- Performance Metrics ------------------------------------------------- [--] Up for: 2d 11h 14m 58s (1M q [8.038 qps], 33K conn, TX: 2B, RX: 618M) [--] Reads / Writes: 83% / 17% [--] Total buffers: 122.0M global + 8.6M per thread (100 max threads) [!!] Maximum possible memory usage: 978.2M (404% of installed RAM) [OK] Slow queries: 0% (37/1M) [OK] Highest usage of available connections: 6% (6/100) [OK] Key buffer size / total MyISAM indexes: 32.0M/282.0K [OK] Key buffer hit rate: 99.7% (358K cached / 1K reads) [OK] Query cache efficiency: 83.4% (1M cached / 1M selects) [!!] Query cache prunes per day: 48301 [OK] Sorts requiring temporary tables: 0% (0 temp sorts / 144K sorts) [OK] Temporary tables created on disk: 13% (27K on disk / 203K total) [OK] Thread cache hit rate: 99% (6 created / 33K connections) [!!] Table cache hit rate: 0% (32 open / 51K opened) [OK] Open file limit used: 1% (20/1K) [OK] Table locks acquired immediately: 99% (1M immediate / 1M locks) [!!] InnoDB data size / buffer pool: 29.2M/8.0M -------- Recommendations ----------------------------------------------------- General recommendations: Run OPTIMIZE TABLE to defragment tables for better performance Reduce your overall MySQL memory footprint for system stability Enable the slow query log to troubleshoot bad queries Increase table_cache gradually to avoid file descriptor limits Variables to adjust: *** MySQL's maximum memory usage is dangerously high *** *** Add RAM before increasing MySQL buffer variables *** query_cache_size (> 64M) table_cache (> 32) innodb_buffer_pool_size (>= 29M) and this is the config. # # The MySQL database server configuration file. # # You can copy this to one of: # - "/etc/mysql/my.cnf" to set global options, # - "~/.my.cnf" to set user-specific options. # # One can use all long options that the program supports. # Run program with --help to get a list of available options and with # --print-defaults to see which it would actually understand and use. # # For explanations see # http://dev.mysql.com/doc/mysql/en/server-system-variables.html # This will be passed to all mysql clients # It has been reported that passwords should be enclosed with ticks/quotes # escpecially if they contain "#" chars... # Remember to edit /etc/mysql/debian.cnf when changing the socket location. [client] port = 3306 socket = /var/run/mysqld/mysqld.sock # Here is entries for some specific programs # The following values assume you have at least 32M ram # This was formally known as [safe_mysqld]. Both versions are currently parsed. 
[mysqld_safe] socket = /var/run/mysqld/mysqld.sock nice = 0 [mysqld] # # * Basic Settings # # # * IMPORTANT # If you make changes to these settings and your system uses apparmor, you may # also need to also adjust /etc/apparmor.d/usr.sbin.mysqld. # user = mysql socket = /var/run/mysqld/mysqld.sock port = 3306 basedir = /usr datadir = /var/lib/mysql tmpdir = /tmp skip-external-locking # # Instead of skip-networking the default is now to listen only on # localhost which is more compatible and is not less secure. bind-address = 127.0.0.1 # # * Fine Tuning # key_buffer = 32M max_allowed_packet = 16M thread_stack = 192K thread_cache_size = 8 sort_buffer_size = 4M read_buffer_size = 4M myisam_sort_buffer_size = 16M # This replaces the startup script and checks MyISAM tables if needed # the first time they are touched myisam-recover = BACKUP max_connections = 100 table_cache = 32 tmp_table_size = 128M #thread_concurrency = 10 # # * Query Cache Configuration # #query_cache_limit = 1M query_cache_type = 1 query_cache_size = 64M # # * Logging and Replication # # Both location gets rotated by the cronjob. # Be aware that this log type is a performance killer. # As of 5.1 you can enable the log at runtime! #general_log_file = /var/log/mysql/mysql.log #general_log = 1 log_error = /var/log/mysql/error.log # Here you can see queries with especially long duration #log_slow_queries = /var/log/mysql/mysql-slow.log #long_query_time = 2 #log-queries-not-using-indexes # # The following can be used as easy to replay backup logs or for replication. # note: if you are setting up a replication slave, see README.Debian about # other settings you may need to change. #server-id = 1 #log_bin = /var/log/mysql/mysql-bin.log expire_logs_days = 10 max_binlog_size = 100M #binlog_do_db = include_database_name #binlog_ignore_db = include_database_name # # * InnoDB # # InnoDB is enabled by default with a 10MB datafile in /var/lib/mysql/. # Read the manual for more InnoDB related options. There are many! # # * Security Features # # Read the manual, too, if you want chroot! # chroot = /var/lib/mysql/ # # For generating SSL certificates I recommend the OpenSSL GUI "tinyca". # # ssl-ca=/etc/mysql/cacert.pem # ssl-cert=/etc/mysql/server-cert.pem # ssl-key=/etc/mysql/server-key.pem [mysqldump] quick quote-names max_allowed_packet = 16M [mysql] #no-auto-rehash # faster start of mysql but no tab completition [isamchk] key_buffer = 16M # # * IMPORTANT: Additional settings that can override those from this file! # The files must end with '.cnf', otherwise they'll be ignored. # !includedir /etc/mysql/conf.d/ The site contains 1 wordpress site,so lots of MYISAM but mostly static content as its not changing all that often (A wordpress cache plugin deals with this). And the Magento Site which consists of a lot of InnoDB tables, some MyISAM and some INMEMORY. The "read" side seems to be running pretty well with a mass of optimizations I've used on Magento, the NGINX setup and PHP-FPM + XCACHE. I'd love to have a kick in the right direction with the MySQL config so I'm not blindly altering it based on the MySQLTuner without understanding what I'm changing. Thanks
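Not a definitive tuning, but a sketch of the handful of [mysqld] lines the tuner output above points at, with values guessed for a 250MB box (adjust under real load). The headline number is "maximum possible memory usage: 978.2M", which is global buffers plus roughly 8.6M of per-thread buffers times max_connections = 100, so capping connections and per-thread buffers is the biggest single win:

```ini
# [mysqld] sketch for ~250MB RAM; every value here is an assumption to verify
max_connections         = 25      # 100 threads x 8.6M each drove the 978M figure
table_cache             = 256     # tuner: 0% table cache hit rate, 51K tables opened
read_buffer_size        = 256K    # shrink per-thread buffers
sort_buffer_size        = 256K
tmp_table_size          = 32M     # 128M is outsized for this box
max_heap_table_size     = 32M     # kept equal to tmp_table_size
query_cache_size        = 32M     # prunes suggest churn, but RAM is the scarcer resource
innodb_buffer_pool_size = 48M     # InnoDB data is 29M and growing; the 8M default is too small
```

Note the tuner's own query_cache_size (> 64M) hint conflicts with the 250MB ceiling; on a box this small, trading some cache-prune churn for memory headroom is usually the safer direction.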

    Read the article

  • Memcached - doesn't seem to be working

    - by Trev
my local.xml:

```xml
<session_save><![CDATA[files]]></session_save>
<cache>
  <backend>memcached</backend>
  <prefix>MAGE_</prefix>
  <memcached>
    <servers>
      <server>
        <host><![CDATA[127.0.0.1]]></host>
        <port><![CDATA[11211]]></port>
        <persistent><![CDATA[1]]></persistent>
      </server>
    </servers>
  </memcached>
</cache>
```

/var/cache is still filling up, and memcached is running:

```text
memcache  2685  0.0  0.3 351888 26152 ?  Sl  08:07  0:19 /usr/bin/memcached -m 64 -p 11211 -u memcache -l 127.0.0.1
```

How do I know it's working? I notice no speed increase.
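Two things worth checking. To see whether Magento is actually writing to memcached, ask the daemon directly, as below. And note that with a memcached fast backend, Magento 1.x still needs a slow backend for cache tags, which is files (the cache directory) by default - so that directory continuing to fill is expected unless a slow backend is configured (e.g. a <slow_backend>database</slow_backend> node inside <cache> in local.xml; verify support against your Magento version).

```sh
# Non-zero, growing counters after browsing the site mean Magento is using it
echo stats | nc 127.0.0.1 11211 | egrep 'cmd_set|cmd_get|curr_items|bytes '
# (some nc builds need: echo stats | nc -q 1 127.0.0.1 11211)
```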

    Read the article

  • Strategies for very fast delivery of webpages.

    - by Cherian
I run a website, Cucumbertown, with an initial payload of nearly 9KB zipped. All my JS is delay-loaded with RequireJS; Modernizr is the only exception. All my webpages are Nginx-cached and only 10-15% of hits go to the backend proxy, and the cache is bypassed for logged-in users via proxy_cache_bypass, so for an anonymous user it's nearly always a cache hit. I have some basic OS tuning in place:

```text
ip route: default via <ip> dev eth0 initcwnd 15
sysctl:   net.ipv4.tcp_slow_start_after_idle = 0
```

Despite everything being cached and a large initcwnd, my pages still take 2.5-3 seconds. My YSlow and Page Speed scores are already good (screenshots omitted). Are there strategies that can help deliver webpages even faster than this - pages in about 1 second for a 10KB payload? Notes: my servers run in a fairly good data center, Linode at Fremont.

    Read the article

  • GlusterFS - high load 90-107% CPU

    - by Sara
I have tried and tried to tune and fix a performance problem with Gluster, without success. I serve webpages, PHP files, images, etc. from Gluster. The problem appeared after updating from 3.3.0 to 3.3.1; I tried 3.4 thinking it might fix things, but the problem is still the same. I temporarily have 1 brick; before the upgrade everything was fine. Config:

```text
Volume Name: ...
Type: Replicate
Volume ID: ...
Status: Started
Number of Bricks: 0 x 2 = 1
Transport-type: tcp
Bricks:
Brick1: ...:/...
Options Reconfigured:
cluster.stripe-block-size: 128KB
performance.cache-max-file-size: 100MB
performance.flush-behind: on
performance.io-thread-count: 16
performance.cache-size: 256MB
auth.allow: ...
performance.cache-refresh-timeout: 5
performance.write-behind-window-size: 1024MB
```

I use FUSE. I suspect the high load may be due to the unavailable brick, but I can't find information on how to safely change the type of the volume. Maybe you know how?

    Read the article

  • RAID P410i and P812 performance issues

    - by Alexey
I'm having a lot of trouble with the I/O performance of an HP DL360 server with two RAID controllers - P410i and P812 - running Windows Server 2008, with 36 GiB RAM and 16 x Intel Xeon X5550. The server runs a bunch of tasks producing heavy sequential I/O, and after about 20-30 minutes of intensive work the tasks appear stuck: not using CPU, with plenty of free memory (so that cannot be the bottleneck). The same tasks ran quite well on the older server (Windows Server 2003, 4 x Intel Xeon, 12 GiB RAM). The RAID cache is present and the write-cache battery is installed. The cache is configured as 25% read-ahead / 75% write-back. The swap file resides on the logical disk served by the P410i; the other logical disks are on the P812. Can someone tell me what might be the matter here? Is this a hardware problem or a misconfiguration?

    Read the article

  • USB drive gets infected every time I plug it into my laptop

    - by ashwnacharya
Whenever I plug my USB flash drive into my laptop running Windows XP, an AutoRun.inf file gets created on it, along with a hidden folder called "Cache" that uses the Recycle Bin icon. The AutoRun.inf file cannot be opened in Windows, and neither the AutoRun.inf file nor the "Cache" folder can be deleted. I opened the AutoRun.inf file using Ubuntu and saw that it tries to run a file, Cache\TMP983.exe. I ran all the anti-spyware, anti-malware, and anti-virus tools, plus USB autorun eaters, but to no avail. How do I fix this?

    Read the article

  • squid3 auth thru samba using ntlm to AD doesn't work

    - by derty
    some users here are spending to much time exploring the WWW. So big boss whats to get this under control. We use a squid3 just for some security reason and chace benefits. and now i'm trying to set up a new proxy on a different server (Debian 6) Permissions are defined in AC and the squid3 should get the auth thru samba/winbind by using the ntlm protocol. but i'll get all the time Access, denited. it only works by using LDAP but thats not the way i need it. here some log and confs squid access.log 1326878095.784 1 192.168.15.27 TCP_DENIED/407 4049 GET http://at.msn.com/? -NONE/- text/html 1326878095.791 1 192.168.15.27 TCP_DENIED/407 4294 GET http://at.msn.com/? - NONE/- text/html 1326878095.803 9 192.168.15.27 TCP_DENIED/403 4028 GET http://at.msn.com/? kavan NONE/- text/html 1326878095.848 0 192.168.15.27 TCP_DENIED/403 3881 GET http://www.squid-cache.org/Artwork/SN.png kavan NONE/- text/html 1326878100.279 0 192.168.15.27 TCP_DENIED/403 3735 GET http://www.google.at/ kavan NONE/- text/html 1326878100.296 0 192.168.15.27 TCP_DENIED/403 3870 GET http://www.squid-cache.org/Artwork/SN.png kavan NONE/- text/html 1326878155.700 0 192.168.15.27 TCP_DENIED/407 4072 GET http://ie9cvlist.ie.microsoft.com/IE9CompatViewList.xml - NONE/- text/html 1326878155.705 2 192.168.15.27 TCP_DENIED/407 4317 GET http://ie9cvlist.ie.microsoft.com/IE9CompatViewList.xml - NONE/- text/html 1326878155.709 3 192.168.15.27 TCP_DENIED/403 4026 GET http://ie9cvlist.ie.microsoft.com/IE9CompatViewList.xml kavan NONE/- text/html squid chace 2012/01/18 10:12:49| Creating Swap Directories 2012/01/18 10:12:49| Starting Squid Cache version 3.1.6 for x86_64-pc-linux-gnu... 2012/01/18 10:12:49| Process ID 17236 2012/01/18 10:12:49| With 65535 file descriptors available 2012/01/18 10:12:49| Initializing IP Cache... 
2012/01/18 10:12:49| DNS Socket created at [::], FD 7 2012/01/18 10:12:49| DNS Socket created at 0.0.0.0, FD 8 2012/01/18 10:12:49| Adding nameserver 192.168.15.2 from /etc/resolv.conf 2012/01/18 10:12:49| Adding nameserver 192.168.15.19 from /etc/resolv.conf 2012/01/18 10:12:49| Adding nameserver 192.168.15.1 from /etc/resolv.conf 2012/01/18 10:12:49| Adding domain schoenbrunn.local from /etc/resolv.conf 2012/01/18 10:12:49| helperOpenServers: Starting 5/5 'squid_ldap_auth' processes 2012/01/18 10:12:49| helperOpenServers: Starting 10/10 'ntlm_auth' processes 2012/01/18 10:12:49| helperOpenServers: Starting 10/10 'squid_kerb_auth' processes 2012/01/18 10:12:49| squid_kerb_auth: INFO: Starting version 1.0.5 2012/01/18 10:12:49| squid_kerb_auth: INFO: Starting version 1.0.5 2012/01/18 10:12:49| squid_kerb_auth: INFO: Starting version 1.0.5 2012/01/18 10:12:49| squid_kerb_auth: INFO: Starting version 1.0.5 2012/01/18 10:12:49| squid_kerb_auth: INFO: Starting version 1.0.5 2012/01/18 10:12:49| squid_kerb_auth: INFO: Starting version 1.0.5 2012/01/18 10:12:49| squid_kerb_auth: INFO: Starting version 1.0.5 2012/01/18 10:12:49| squid_kerb_auth: INFO: Starting version 1.0.5 2012/01/18 10:12:49| helperOpenServers: Starting 5/5 'squid_ldap_group' processes 2012/01/18 10:12:49| squid_kerb_auth: INFO: Starting version 1.0.5 2012/01/18 10:12:49| squid_kerb_auth: INFO: Starting version 1.0.5 2012/01/18 10:12:49| Unlinkd pipe opened on FD 73 2012/01/18 10:12:49| Local cache digest enabled; rebuild/rewrite every 3600/3600 sec 2012/01/18 10:12:49| Store logging disabled 2012/01/18 10:12:49| Swap maxSize 0 + 262144 KB, estimated 20164 objects 2012/01/18 10:12:49| Target number of buckets: 1008 2012/01/18 10:12:49| Using 8192 Store buckets 2012/01/18 10:12:49| Max Mem size: 262144 KB 2012/01/18 10:12:49| Max Swap size: 0 KB 2012/01/18 10:12:49| Using Least Load store dir selection 2012/01/18 10:12:49| Set Current Directory to /var/spool/squid3 2012/01/18 10:12:49| Loaded Icons. 2012/01/18 10:12:49| Accepting HTTP connections at [::]:3128, FD 74. 2012/01/18 10:12:49| HTCP Disabled. 2012/01/18 10:12:49| Squid modules loaded: 0 2012/01/18 10:12:49| Adaptation support is off. 2012/01/18 10:12:49| Ready to serve requests. 
2012/01/18 10:12:50| storeLateRelease: released 0 objects smb.conf # Domain Authntication Settings workgroup = <WORKGROUP> security = ads password server = <DOMAINNAME>.LOCAL realm = <DOMAINNAME>.LOCAL ldap ssl = no # logging log level = 5 max log size = 50 # logs split per machine log file = /var/log/samba/%m.log # max 50KB per log file, then rotate ; max log size = 50 # User settings username map = /etc/samba/smbusers idmap uid = 10000-20000000 idmap gid = 10000-20000000 idmap backend = ad ; template primary group = <ad group> template shell = /sbin/nologin # Winbind Settings winbind separator = + winbind enum users = Yes winbind enum groups = Yes winbind netsted groups = Yes winbind nested groups = Yes winbind cache time = 10 winbind use default domain = Yes #Other Globals unix charset = LOCALE server string = <SERVERNAME> load printers = no printing = cups cups options = raw ; printcap name = /etc/printcap #obtain list of printers automatically on SystemV ; printcap name = lpstat ; printing = cups squid.conf auth_param ntlm program /usr/bin/ntlm_auth --require-membership-of=<DOMAINNAME>\\INTERNETZ --helper-protocol=squid-2.5-ntlmssp auth_param ntlm children 10 auth_param basic program /usr/lib/squid3/squid_ldap_auth -R -b "dc=<dcname>,dc=local" -D "cn=administrator,cn=Users,dc=<domainname>,dc=local" -w "******" -f sAMAccountName=%s -h 192.168.15.19:3268 auth_param basic realm "Proxy Authentifizierung. Bitte geben Sie Ihren Benutzername und Ihr Passwort ein!" #means insert you PW in an other language - # external_acl_type InetGroup %LOGIN /usr/lib/squid3/squid_ldap_group -R -b "dc=<domainname>,dc=local" -D "cn=administrator,cn=Users,dc=<domainname>,dc=local" -w "******" -f "(&(objectclass=person)(sAMAccountName=%v) (memberof=cn=%a,cn=internetz,dc=<domainname>,dc=local))" -h 192.168.15.19:3268 auth_param negotiate program /usr/lib/squid3/squid_kerb_auth -d auth_param negotiate children 10 auth_param negotiate keep_alive on acl localnet proxy_auth REQUIRED acl InetAccess external InetGroup Internetz http_access allow InetAccess http_access deny all acl auth proxy_auth REQUIRED http_access allow auth and a very suspicious is that by adding the proxy server to the Domain i see 2 new entries in the PC one with the original computer-name leopoldine and one with leopoldine CNF:f8efa4c4-ff0e-4217-939d-f1523b43464d ?!? I tried a lot, really... but i stuck on this problem... i actually i even reinstalled all dependent programs and reconfigured them from default. Group exists and has me in it. Firefox running on the old proxy and i use IE for testing the new one. But i'll get all the time Access-Denited and to be honest i'm quite a beginner, so please don't be to prude. I'll interested in improving, i'll get the information we need to fix this but i started working 2 month ago and got only 1 1/2 year's training and not a single sec. in linux ;)
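Before changing squid.conf further, it may help to test the winbind side by hand; if these fail, the problem is the machine's AD join rather than squid. A sketch (the user, password, and domain are placeholders; INTERNETZ is the group from the config above):

```sh
# Is the machine's trust with the domain healthy?
wbinfo -t

# Can winbind enumerate domain users and groups at all?
wbinfo -u | head
wbinfo -g | head

# Exercise the exact helper squid runs, including the group restriction
/usr/bin/ntlm_auth --username=someuser --password='somepass' \
    --require-membership-of='DOMAIN\INTERNETZ'
```

If ntlm_auth succeeds on the command line but squid still denies, a classic culprit is permissions on /var/run/samba/winbindd_privileged: the user squid runs as must be able to traverse that directory to reach winbind's privileged pipe. The duplicate machine account showing as "CNF:..." in AD is also worth cleaning up, since it usually indicates a conflicting computer object left over from a failed or repeated join.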

    Read the article

  • Is an Intel Atom D525 suitable to run MythTV

    - by Martin Thompson
    I have an oldish netbook with an Atom N450 (1.6GHz, 512KB cache) - I've been using it to experiment with MythTV, but it seems really slow, even just to work through the menus! Seconds, sometimes 10 or 20s, to load a new menu. Admittedly from a remote backend, but my older Core1 based laptop seems to be fine with the same setup. I was hoping to use one of the so-called "nettop" devices which currently seem to be D525-based (1.8GHz, 1MB cache) - is double the cache really going to make that much difference? Or has the internal architecture of the Atom moved on leaps and bounds in between? Given that I design non-Intel embedded computers for a living I was hoping to get lots of hardcore architecture detail from the Intel website, so I could see for myself, but I can't find it! So: will a D525 be fast enough to run a MythTV backend/frontend combined box?

    Read the article

  • Perforce Proxy Server: Caching selective files [closed]

    - by fbrereto
    I just set up a Perforce proxy server for work. I'm noticing the cache directory is filling up very quickly -- with files I know I will never need. For example, there is a 'sandbox' directory in the depot where users keep personal branches and other work; a p4 sync is causing the p4 proxy cache to grab these user's sandboxes when I'll never need them. I would create a symbolic link for the sandbox directory to /dev/null but then I wouldn't be caching my sandbox, which I am interested in. Is there any way to tell the perforce proxy something to the effect of "if I haven't had to sync it, please don't cache it?"

    Read the article

  • Autodetect/mount SD cards and run a script for them on Linux

    - by Brendan
Hey everyone, I'm currently running SME Server and need to have a script run upon the attachment of SD cards to my server. The script itself works fine (it copies the contents of the cards), but the automounting and execution of the script is where I'm having issues. I have a USB hub consisting of 10 USB ports; it shows up as:

```text
[root@server ~]# lsusb
Bus 004 Device 002: ID 0000:0000
Bus 004 Device 001: ID 0000:0000
Bus 003 Device 001: ID 0000:0000
Bus 002 Device 001: ID 0000:0000
Bus 001 Device 055: ID 1a40:0101 TERMINUS TECHNOLOGY INC.
Bus 001 Device 051: ID 1a40:0101 TERMINUS TECHNOLOGY INC.
Bus 001 Device 050: ID 1a40:0101 TERMINUS TECHNOLOGY INC.
Bus 001 Device 001: ID 0000:0000
```

(The hub is the TERMINUS TECHNOLOGY INC entries.) As I cannot plug SD cards directly into the server, I use USB-to-SD-card adapters (10 of them) plugged into the hub to read the cards. Upon plugging the 10 adapters (without cards) into the hub, lsusb yields the following:

```text
[root@server ~]# lsusb
Bus 004 Device 002: ID 0000:0000
Bus 004 Device 001: ID 0000:0000
Bus 003 Device 001: ID 0000:0000
Bus 002 Device 001: ID 0000:0000
Bus 001 Device 073: ID 05e3:0723 Genesys Logic, Inc.
Bus 001 Device 072: ID 05e3:0723 Genesys Logic, Inc.
Bus 001 Device 071: ID 05e3:0723 Genesys Logic, Inc.
Bus 001 Device 070: ID 05e3:0723 Genesys Logic, Inc.
Bus 001 Device 069: ID 05e3:0723 Genesys Logic, Inc.
Bus 001 Device 068: ID 05e3:0723 Genesys Logic, Inc.
Bus 001 Device 067: ID 05e3:0723 Genesys Logic, Inc.
Bus 001 Device 066: ID 05e3:0723 Genesys Logic, Inc.
Bus 001 Device 065: ID 05e3:0723 Genesys Logic, Inc.
Bus 001 Device 064: ID 05e3:0723 Genesys Logic, Inc.
Bus 001 Device 055: ID 1a40:0101 TERMINUS TECHNOLOGY INC.
Bus 001 Device 051: ID 1a40:0101 TERMINUS TECHNOLOGY INC.
Bus 001 Device 050: ID 1a40:0101 TERMINUS TECHNOLOGY INC.
Bus 001 Device 001: ID 0000:0000
```

As you can see, the readers are the "Genesys Logic, Inc" entries. Plugging an SD card into a reader doesn't affect lsusb (it reads exactly as above), but my system recognises the cards fine, as indicated by dmesg:

```text
Attached scsi generic sg11 at scsi54, channel 0, id 0, lun 0, type 0
USB Mass Storage device found at 73
SCSI device sdd: 31388672 512-byte hdwr sectors (16071 MB)
sdd: Write Protect is on
sdd: Mode Sense: 03 00 80 00
sdd: assuming drive cache: write through
SCSI device sdd: 31388672 512-byte hdwr sectors (16071 MB)
sdd: Write Protect is on
sdd: Mode Sense: 03 00 80 00
sdd: assuming drive cache: write through
sdd: sdd1
SCSI device sdd: 31388672 512-byte hdwr sectors (16071 MB)
sdd: Write Protect is on
sdd: Mode Sense: 03 00 80 00
sdd: assuming drive cache: write through
SCSI device sdd: 31388672 512-byte hdwr sectors (16071 MB)
sdd: Write Protect is on
sdd: Mode Sense: 03 00 80 00
sdd: assuming drive cache: write through
sdd: sdd1
SCSI device sdd: 31388672 512-byte hdwr sectors (16071 MB)
sdd: Write Protect is on
sdd: Mode Sense: 03 00 80 00
sdd: assuming drive cache: write through
SCSI device sdd: 31388672 512-byte hdwr sectors (16071 MB)
sdd: Write Protect is on
sdd: Mode Sense: 03 00 80 00
sdd: assuming drive cache: write through
sdd: sdd1
```

If I manually mount sdd1 (mount /dev/sdd1 /somedirectory/) this works fine. What I'm really after is a solution that automounts each of the cards as they are inserted into the readers, and executes a script for them (this will involve copying their contents to another directory).
My problem is that I don't know how to do this. I don't think udev will work, as the USB devices don't change; however, if I could somehow get udev working with /dev/disk/by-path/ I think this is doable (it seems to keep constant entries). ls /dev/disk/by-path returns:

```text
pci-0000:00:1d.7-usb-0:4.1.1.1:1.0-scsi-0:0:0:0
pci-0000:00:1d.7-usb-0:4.1.1.2:1.0-scsi-0:0:0:0
pci-0000:00:1d.7-usb-0:4.1.1.3:1.0-scsi-0:0:0:0
pci-0000:00:1d.7-usb-0:4.1.1.4:1.0-scsi-0:0:0:0
pci-0000:00:1d.7-usb-0:4.1.2:1.0-scsi-0:0:0:0
pci-0000:00:1d.7-usb-0:4.1.3:1.0-scsi-0:0:0:0
pci-0000:00:1d.7-usb-0:4.1.4:1.0-scsi-0:0:0:0
pci-0000:00:1d.7-usb-0:4.2:1.0-scsi-0:0:0:0
pci-0000:00:1d.7-usb-0:4.3:1.0-scsi-0:0:0:0
pci-0000:00:1d.7-usb-0:4.4:1.0-scsi-0:0:0:0
pci-0000:00:1d.7-usb-0:4.4:1.0-scsi-0:0:0:0-part1
pci-0000:0b:01.0-scsi-0:0:1:0
pci-0000:0b:01.0-scsi-0:0:1:0-part1
pci-0000:0b:01.0-scsi-0:0:1:0-part2
```

From the above, you can see I have only one card plugged into a reader (pci-0000:00:1d.7-usb-0:4.4:1.0-scsi-0:0:0:0-part1). Running

mount /dev/disk/by-path/pci-0000\:00\:1d.7-usb-0\:4.4\:1.0-scsi-0\:0\:0\:0-part1

works and places the card under /media/usbdisk/, but

mount /dev/disk/by-path/pci-0000\:00\:1d.7-usb-0\:4.4\:1.0-scsi-0\:0\:0\:0-part1 slot1/

doesn't work, and returns "mount: can't get address for /dev/disk/by-path/pci-0000". Any ideas and solutions would be great; I've seen the knowledge of a lot of the guys on here before, so I'm hopeful someone can help me out. Thanks
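Two hedged notes. The failing mount is likely just colon parsing: when the device spec contains a colon, mount treats the part before it as an NFS host, which is exactly what "can't get address for /dev/disk/by-path/pci-0000" means; quoting the full path in single quotes (no backslashes) usually avoids it. And udev can still drive this by matching the block-device add event instead of the USB level. A sketch follows; the file names, script path, and exact match keys are assumptions to adapt (on the older udev shipped with some SME/CentOS releases, the parent match is BUS=="usb" rather than SUBSYSTEMS=="usb"):

```sh
# /etc/udev/rules.d/99-sdcard-copy.rules
# Fire when a partition appears on a USB-attached disk; %k expands to e.g. "sdd1"
ACTION=="add", KERNEL=="sd?1", SUBSYSTEMS=="usb", RUN+="/usr/local/bin/copycard.sh %k"
```

```sh
#!/bin/sh
# /usr/local/bin/copycard.sh (sketch) - mount, copy, unmount, in the background
# so the udev event returns quickly. $1 is the kernel name, e.g. "sdd1".
DEV="/dev/$1"
MNT="/mnt/cards/$1"
DEST="/srv/card-dumps/$1-$(date +%s)"
(
    mkdir -p "$MNT" "$DEST"
    mount -o ro "$DEV" "$MNT" && cp -a "$MNT"/. "$DEST"/ && umount "$MNT"
) &
```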

    Read the article

  • eAccelerator ignores my new settings?

    - by Mwebe Nkrumah
Hi, I'm using eAccelerator 0.9.5.2, CentOS 5.3, and lighttpd 1.4.22. Because eAccelerator caches in RAM, it needs too much RAM, so I'm trying to cache on the hard disk instead (my website doesn't generate money, so I'm looking for a cheaper solution). I modified /etc/php.d/eaccelerator.ini with the lines below:

```ini
extension="eaccelerator.so"
eaccelerator.shm_size="12"
eaccelerator.cache_dir="/var/cache/eaccelerator"
eaccelerator.enable="1"
eaccelerator.optimizer="1"
eaccelerator.check_mtime="0"
eaccelerator.debug="0"
eaccelerator.filter=""
eaccelerator.shm_max="20M"
eaccelerator.shm_ttl="1800"
eaccelerator.shm_prune_period="0"
eaccelerator.shm_only="0"
eaccelerator.compress="0"
eaccelerator.compress_level="9"
eaccelerator.keys="disk_only"
eaccelerator.sessions="disk_only"
eaccelerator.content="disk_only"
```

The output of phpinfo() is shown here: http://img175.imageshack.us/img175/1104/screenshggot.png

But after switching to "disk_only" in eAccelerator and restarting lighttpd and php-cgi using killall, RAM usage is still high for php-cgi. Rebooting the server doesn't help either. The data is created in the cache directory, but RAM usage is still high.
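A hedged note on what those settings actually move (worth verifying against the 0.9.5 docs): eaccelerator.keys/sessions/content = "disk_only" only relocates the key, session, and content caches; compiled scripts are still served from the shared-memory segment first, spilling to cache_dir only as shm fills (which shm_only = "0" permits). The direct knob for RAM is shm_size, and note that tools like ps and top count the shared segment once per php-cgi process, so resident sizes overstate real usage. A sketch:

```ini
; /etc/php.d/eaccelerator.ini (sketch; sizes are guesses to tune)
eaccelerator.shm_size = "8"   ; hard cap on the shared-memory segment, in MB
eaccelerator.shm_only = "0"   ; allow compiled scripts to spill to cache_dir
eaccelerator.shm_ttl  = "900" ; expire stale shm entries sooner
```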

    Read the article

< Previous Page | 71 72 73 74 75 76 77 78 79 80 81 82  | Next Page >