Search Results

Search found 5415 results on 217 pages for 'concurrent mark sweep'.


  • Play! Framework 1.2.4 --- C3P0 settings to avoid Communications link failure due to idle time

    - by HelpMeStackOverflowMyOnlyHope
    I'm trying to customize my C3P0 settings to avoid the error shown at the bottom of this post. It was suggested at http://make-it-open.blogspot.com/2008/12/sql-error-0-sqlstate-08s01.html to adjust the settings as follows. In hibernate.cfg.xml, write:

        <property name="c3p0.min_size">5</property>
        <property name="c3p0.max_size">20</property>
        <property name="c3p0.timeout">1800</property>
        <property name="c3p0.max_statements">50</property>

    Then create "c3p0.properties" in your root classpath folder and write:

        c3p0.testConnectionOnCheckout=true
        c3p0.acquireRetryDelay=1000
        c3p0.acquireRetryAttempts=1

    I've tried to make those adjustments following the Play! Framework documentation, which says to use "db.pool..." properties as follows:

        db.pool.timeout=1800
        db.pool.maxSize=15
        db.pool.minSize=5
        db.pool.initialSize=5
        db.pool.acquireRetryAttempts=1
        db.pool.preferredTestQuery=SELECT 1
        db.pool.testConnectionOnCheckout=true
        db.pool.acquireRetryDelay=1000
        db.pool.maxStatements=50

    Are those settings not going to work? Should I be setting them a different way? With those settings I still get the error shown below, which is caused by too long an idle time.

    Complete stack trace of the error:

        23:00:44,932 WARN ~ SQL Error: 0, SQLState: 08S01
        2012-04-13T23:00:44+00:00 app[web.1]: 23:00:44,932 ERROR ~ Communications link failure
        2012-04-13T23:00:44+00:00 app[web.1]:
        2012-04-13T23:00:44+00:00 app[web.1]: The last packet successfully received from the server was 274,847 milliseconds ago. The last packet sent successfully to the server was 7 milliseconds ago.
        2012-04-13T23:00:44+00:00 app[web.1]: 23:00:44,934 ERROR ~ Why the driver complains here?
        2012-04-13T23:00:44+00:00 app[web.1]: com.mysql.jdbc.exceptions.jdbc4.MySQLNonTransientConnectionException: No operations allowed after connection closed. Connection was implicitly closed by the driver.
        2012-04-13T23:00:44+00:00 app[web.1]: at com.mysql.jdbc.Util.handleNewInstance(Util.java:407)
        2012-04-13T23:00:44+00:00 app[web.1]: at com.mysql.jdbc.Util.getInstance(Util.java:382)
        2012-04-13T23:00:44+00:00 app[web.1]: at com.mysql.jdbc.SQLError.createSQLException(SQLError.java:1013)
        2012-04-13T23:00:44+00:00 app[web.1]: at com.mysql.jdbc.SQLError.createSQLException(SQLError.java:987)
        2012-04-13T23:00:44+00:00 app[web.1]: at com.mysql.jdbc.SQLError.createSQLException(SQLError.java:982)
        2012-04-13T23:00:44+00:00 app[web.1]: at com.mysql.jdbc.SQLError.createSQLException(SQLError.java:927)
        2012-04-13T23:00:44+00:00 app[web.1]: at com.mysql.jdbc.ConnectionImpl.throwConnectionClosedException(ConnectionImpl.java:1213)
        2012-04-13T23:00:44+00:00 app[web.1]: at com.mysql.jdbc.ConnectionImpl.getMutex(ConnectionImpl.java:3101)
        2012-04-13T23:00:44+00:00 app[web.1]: at com.mysql.jdbc.ConnectionImpl.setAutoCommit(ConnectionImpl.java:4975)
        2012-04-13T23:00:44+00:00 app[web.1]: at org.hibernate.jdbc.BorrowedConnectionProxy.invoke(BorrowedConnectionProxy.java:74)
        2012-04-13T23:00:44+00:00 app[web.1]: at $Proxy49.setAutoCommit(Unknown Source)
        2012-04-13T23:00:44+00:00 app[web.1]: at play.db.jpa.JPAPlugin.closeTx(JPAPlugin.java:368)
        2012-04-13T23:00:44+00:00 app[web.1]: at play.db.jpa.JPAPlugin.onInvocationException(JPAPlugin.java:328)
        2012-04-13T23:00:44+00:00 app[web.1]: at play.plugins.PluginCollection.onInvocationException(PluginCollection.java:447)
        2012-04-13T23:00:44+00:00 app[web.1]: at play.Invoker$Invocation.onException(Invoker.java:240)
        2012-04-13T23:00:44+00:00 app[web.1]: at play.jobs.Job.onException(Job.java:124)
        2012-04-13T23:00:44+00:00 app[web.1]: at play.jobs.Job.call(Job.java:163)
        2012-04-13T23:00:44+00:00 app[web.1]: at play.jobs.Job$1.call(Job.java:66)
        2012-04-13T23:00:44+00:00 app[web.1]: at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:334)
        2012-04-13T23:00:44+00:00 app[web.1]: at java.util.concurrent.FutureTask.run(FutureTask.java:166)
        2012-04-13T23:00:44+00:00 app[web.1]: at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$101(ScheduledThreadPoolExecutor.java:165)
        2012-04-13T23:00:44+00:00 app[web.1]: at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:266)
        2012-04-13T23:00:44+00:00 app[web.1]: at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1110)
        2012-04-13T23:00:44+00:00 app[web.1]: at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603)
        2012-04-13T23:00:44+00:00 app[web.1]: at java.lang.Thread.run(Thread.java:636)
        2012-04-13T23:00:44+00:00 app[web.1]: Caused by: com.mysql.jdbc.exceptions.jdbc4.CommunicationsException: Communications link failure
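
    For comparison, the same knobs can be set straight on c3p0's own API. This is a minimal sketch, assuming a standalone com.mchange.v2.c3p0.ComboPooledDataSource rather than Play's managed db.pool; the JDBC URL and credentials are placeholders:

        import com.mchange.v2.c3p0.ComboPooledDataSource;

        public class PoolConfigSketch {
            public static void main(String[] args) throws Exception {
                ComboPooledDataSource pool = new ComboPooledDataSource();
                pool.setDriverClass("com.mysql.jdbc.Driver");
                pool.setJdbcUrl("jdbc:mysql://localhost:3306/app"); // placeholder URL
                pool.setUser("app");                                // placeholder credentials
                pool.setPassword("secret");

                pool.setMinPoolSize(5);
                pool.setMaxPoolSize(20);
                pool.setMaxIdleTime(1800);   // the "c3p0.timeout" property above
                pool.setMaxStatements(50);

                // The settings that guard against stale, idle-dropped connections:
                pool.setTestConnectionOnCheckout(true); // validate before every use (safest, slowest)
                pool.setIdleConnectionTestPeriod(300);  // or: ping idle connections every 5 minutes
                pool.setPreferredTestQuery("SELECT 1");
            }
        }

    The idle-test pair (idleConnectionTestPeriod plus preferredTestQuery) is what typically prevents a mostly idle pool from handing out connections MySQL has already dropped.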

    Read the article

  • New Whitepaper: Best Practices for Gathering EBS Database Statistics

    - by Elke Phelps (Oracle Development)
    Most Oracle Applications DBAs and E-Business Suite users understand the importance of accurate database statistics. Missing, stale or skewed statistics can adversely affect performance. Oracle E-Business Suite statistics should only be gathered using FND_STATS or the Gather Statistics concurrent request. Gathering statistics with DBMS_STATS or the desupported ANALYZE command may result in suboptimal execution plans for E-Business Suite.

    Our E-Business Suite Performance Team has been busy implementing and testing new features for gathering statistics using FND_STATS in Oracle E-Business Suite databases. The new features and guidelines for when and how to gather statistics are published in the following whitepaper: Best Practices for Gathering Statistics with Oracle E-Business Suite (Note 1586374.1)

    The new white paper details the following options for gathering statistics using FND_STATS and the Gather Statistics concurrent request:
    - History Mode: back up existing statistics prior to gathering new statistics
    - GATHER_AUTO Option: gather statistics for tables based upon percentage of change
    - Histograms: collect statistics for histograms
    - AUTO Sampling: use the new FND_STATS feature that supports the AUTO option for AUTO sample size
    - Extended Statistics: use the new FND_STATS feature that supports the creation of column groups and automatic statistics collection on the column groups when table statistics are gathered
    - Incremental Statistics: gather incremental statistics for partitioned tables

    The new white paper also includes examples and performance test cases for the following:
    - Extended Optimizer Statistics
    - Incremental Statistics Gathering
    - Concurrent Statistics Gathering

    The white paper includes details about the standalone Oracle E-Business Suite Release 11i and 12 patches that are required to take advantage of this new functionality.

    Your feedback is welcome. We would be very interested in hearing about your experiences with these new options for gathering statistics. Please feel free to post your comments here or drop us a line privately.

    Related Oracle OpenWorld 2013 Session: Getting Optimal Performance from Oracle E-Business Suite (CON8485)
    Related My Oracle Support Notes: Collecting Statistics with Oracle EBS 11i and R12 (Note 368252.1)
    Non-EBS Related Blogs, White Papers and My Oracle Support Notes: Oracle Optimizer Blog; Understanding Optimizer Statistics (white paper); Fixed Objects Statistics (GATHER_FIXED_OBJECTS_STATS) Considerations (Note 798257.1)
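
    To make the FND_STATS-not-DBMS_STATS rule concrete, here is a minimal JDBC sketch of invoking it from Java. The connection details and schema name are placeholders, and it assumes FND_STATS.GATHER_SCHEMA_STATS accepts a single schema-name argument, as on standard E-Business Suite installs:

        import java.sql.CallableStatement;
        import java.sql.Connection;
        import java.sql.DriverManager;

        public class GatherEbsStats {
            public static void main(String[] args) throws Exception {
                // Placeholder connection details; in practice this runs as the APPS user.
                try (Connection conn = DriverManager.getConnection(
                        "jdbc:oracle:thin:@dbhost:1521:EBSDB", "apps", "secret")) {
                    // Use FND_STATS rather than DBMS_STATS so the EBS-specific
                    // histogram and column-group metadata is honored.
                    try (CallableStatement cs = conn.prepareCall(
                            "begin fnd_stats.gather_schema_stats(?); end;")) {
                        cs.setString(1, "APPLSYS"); // hypothetical example schema
                        cs.execute();
                    }
                }
            }
        }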

    Read the article

  • New Release of Oracle Berkeley DB

    - by Eric Jensen
    We are pleased to announce that a new release of Oracle Berkeley DB, version 11.2.5.2.28, is available today. Our latest release includes yet more value-added features for SQLite users, as well as several performance enhancements and new customer-requested features in the key-value pair API. We continue to provide technology leadership, features and performance for SQLite applications. This release introduces additional features that are not available in native SQLite, and adds functionality allowing customers to create richer, more scalable, more concurrent applications using the Berkeley DB SQL API.

    This release is compelling to Oracle's customers and partners because it:
    - delivers a complete, embeddable SQL92 database as a library under 1MB in size
    - is drop-in API compatible with SQLite version 3
    - offers no-oversight, zero-touch database administration
    - is built on the industrial-quality, battle-tested Berkeley DB B-TREE for concurrent transactional data storage

    New features include:
    - MVCC support for even higher concurrency
    - direct SQL support for HA/replication
    - transactionally protected sequence number generation functions
    - lower memory requirements, shared memory regions and faster/smaller memory use on startup
    - easier B-TREE page size configuration with the new "db_tuner" utility

    New key-value API features include:
    - HEAP access method for constrained disk-space applications
    - faster QUEUE access method operations for highly concurrent applications -- up to 2-3X faster!
    - new X/Open compliant XA resource manager, easily integrated with Oracle Tuxedo
    - additional HA/replication management and communication options
    - and a lot more!

    BDB is hands-down the best edge, mobile, and embedded database available to developers.
    Downloads available today on the Berkeley DB download page
    Product Documentation

    Read the article

  • Timeout Considerations for Solicit Response

    - by Michael Stephenson
    Background

    One of the clients I work with had been experiencing some issues for a while surrounding web service timeouts. It's been a little challenging to work through the problems due to limitations in the diagnostic information available from one of the applications, but I learned some interesting things while troubleshooting which don't seem to have been discussed much in the community, so I thought I'd share my findings.

    In the scenario we have BizTalk trying to make calls to a .net web service which was exposed as a WSE 2 endpoint. In the process BizTalk will try to make a large number of concurrent web service calls to the application, and the backend application has more than enough infrastructure and capability to handle the load. We have configured the <connectionManagement> section of the BizTalk configuration file to support up to 100 concurrent connections from each of our 2 BizTalk send servers to the web servers of the application.

    The problem we were facing was that the BizTalk side was reporting a significant number of timeouts when calling the web service. One of the biggest issues was the challenge of correlating a message from BizTalk to the IIS log and the custom logs in the .net application, especially when there was a fairly large number of servers hosting the web services. However, the key moment came when we were able to identify a specific call which had taken 40 seconds to execute on the server (yes, a long time I know, but that's a different story!). We were able to identify that this call had timed out on the BizTalk side. Based on the normal 2 minute timeout we knew something unexpected was going on.

    From here I decided to do some experimentation, and I wanted to start outside of BizTalk because my hunch was this was not a BizTalk behaviour but something which was being highlighted by BizTalk because of our large load.

    Server-side - Sample Web Service

    To begin with I created a sample web service. Nothing special, just a vanilla asmx web service hosted in IIS6 on Windows 2003 Standard Edition. The web service is just a hello-world style web service, as shown in the picture in the original post. The only key feature is that the server-side web method has a 30 second sleep in it and will trace out some information before and after the thread is set to sleep. In the configuration for this web service there is again nothing special; it's pretty much the plainest, simplest web service you could build.

    Client-Side

    To begin looking at what was happening with our example I created a number of different ways to consume the web service.

    SoapHttpClientProtocol Example

    I created a small application which would use a normal generated proxy to call the web service. It would iterate around a loop and make calls using the begin/end methods so I could do this asynchronously. I would do a loop of 20 calls with the connectionManagement configuration section supporting only 5 concurrent connections to the server:

        <system.net>
          <connectionManagement>
            <remove address="*"/>
            <add address="*" maxconnection="12"/>
            <add address="http://<ServerName>" maxconnection="5"/>
          </connectionManagement>
        </system.net>

    The key points of the service-calling code are:
    - I have configured a timeout of 40 seconds for the proxy
    - I am using the asynchronous methods on the proxy to call the web service

    The Test

    I would run the client and execute 21 calls to the web service.
    The Results

    Comparing the client-side trace with the web service side trace, some observations on the results are:
    - All of the calls were successful from the client's perspective
    - You could see the next call starting on the server as soon as the previous one had completed
    - Calls took significantly longer than 40 seconds from the start of the call to the return. In fact, call 20 took 2 minutes and 30 seconds from the perspective of my code to execute, even though I had set the timeout to 40 seconds

    WSE 2 Sample

    In the second example I used the exact same code to call the web service, with a single exception: I modified the web service proxy to derive from WebServicesClientProtocol, which is part of WSE 2 (using SP3). The key points are:
    - I have configured a timeout of 40 seconds for the proxy
    - I am using the asynchronous methods on the proxy to call the web service

    The Test

    This test would execute 21 calls from the client to the web service.

    The Results

    Some observations on the trace results for this scenario are:
    - With call 4, if you look at the server-side trace, it did not start executing on the server until a number of seconds after the other 4 initial calls which were accepted by the server. I re-ran the test and this happened a couple of times and not on most others, so at this point I'm putting it down to something unexpected happening on the development machine, and we will leave this observation out of scope of this article.
    - The client-side trace statement executed almost immediately in all cases
    - All calls after the initial few would time out
    - On the client side, the calls that did time out took longer than the 40 seconds we set as the timeout
    - As calls completed on the server, the next calls were starting to come through
    - The calls that timed out on the client did actually connect to the server, and their server-side execution completed successfully

    Elaboration on the Findings

    Based on the above observations I have drawn a sequence diagram to illustrate conceptually what is happening. Everything except the final web service object is on the client side of the call. In the diagram I've put two notes on the Web Service Proxy to show the two different places where the different base classes seem to start their timeout counters. From the earlier samples we can work out that the timeout counter for the WSE web service proxy starts before the one for the SoapHttpClientProtocol proxy, and the WSE one includes the time to get a connection from the pool, whereas the Soap proxy timeout just covers the method execution.

    One interesting observation: if we re-run the above sample and increase the number of calls from 21 to 100,000, then for the WSE sample we see a similar pattern where everything after the first few calls times out on the client as soon as it makes a connection to the server, whereas the Soap proxy will happily plug away and process all of the calls without a single timeout. I actually set the sample running overnight and this did happen.

    At this point you are probably thinking the same thoughts I was at the time about the differences in behaviour, and which is right, and why they are different.
    I'm not sure there is a definitive answer to this in the documentation, or at least not one that I could find! I think you just have to accept that they are different and that they can have different effects depending on your messaging solution. In lots of situations this is just not an issue, as your concurrent requests never reach the point where you end up throttling the web service calls on the client side; however, this is definitely more common with an integration broker such as BizTalk, where you often have high throughput requirements.

    Some of the Considerations You Should Make

    Based on this behaviour you should be aware of the following:
    - In a .net application, if you are making lots of concurrent web service calls in an asynchronous manner, your users may think they are experiencing poor performance while you think your web service is working well. The problem could be that the client defaults to 2 connections to remote servers, so you should bear this in mind.
    - When you are developing a BizTalk solution, or a .net solution with the WSE 2 stack, you may experience timeouts under load, and throttling the number of connections using the max connections element in the configuration file will not help you.
    - For an application using WSE2 or SoapHttpClientProtocol, an expired timeout will not throw an error until after a connection to the server has been made, so you should consider this in your transaction and durability patterns.

    Our Work-Around

    In the short term, for our specific scenario, we know that we can handle this by just increasing our timeout value. There is only a specific small window when we get lots of concurrent traffic that causes this scenario, so we should be able to increase the timeout to take into consideration the additional client-side wait, and on the odd occasion where we do get a timeout, the BizTalk send port retry will handle it. What was causing our original problem was that for that short window we were getting a lot of retries, which significantly increased the load on our send servers and highlighted the issue.

    Longer Term Solution

    As a longer term solution this really gives us more ammunition to argue for a migration to WCF. The application we are calling has some factors which limit the protocols we can use, but with WCF we would have more control over the various timeout options, because in WCF you can configure specific parts of the timeout.

    Summary

    I've had this blog post on my to-do list for ages, but hopefully it will be useful to some people, just to understand this behaviour and possibly to help you with some performance issues you may have. I do not believe there is much in the way of documentation in this area, particularly around WSE2 and ASMX, so again, another bit of ammunition for migrating to WCF. I'll try to do a follow-up post with the sample for WCF to show how this changes things.
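    To make the "where does the clock start" distinction concrete, here is a small, self-contained Java toy model -- not BizTalk, WSE, or .net code, and with durations scaled down. A semaphore stands in for the 5-connection pool, and every call is timed both from submission (roughly how the WSE proxy appeared to behave) and from connection acquisition (roughly how the SoapHttpClientProtocol proxy appeared to behave). Under load, only the from-submission clock blows through the timeout:

        import java.util.concurrent.ExecutorService;
        import java.util.concurrent.Executors;
        import java.util.concurrent.Semaphore;
        import java.util.concurrent.TimeUnit;

        public class TimeoutClockDemo {
            static final Semaphore CONNECTIONS = new Semaphore(5); // maxconnection="5"
            static final long TIMEOUT_MS = 4_000;     // stands in for the 40s proxy timeout
            static final long SERVER_WORK_MS = 3_000; // stands in for the 30s server-side sleep

            public static void main(String[] args) throws InterruptedException {
                ExecutorService client = Executors.newFixedThreadPool(21);
                for (int i = 1; i <= 21; i++) {
                    final int call = i;
                    client.execute(() -> invoke(call));
                }
                client.shutdown();
                client.awaitTermination(5, TimeUnit.MINUTES);
            }

            static void invoke(int call) {
                long submitted = System.nanoTime();
                try {
                    CONNECTIONS.acquire();            // wait for a pooled connection
                    long connected = System.nanoTime();
                    try {
                        Thread.sleep(SERVER_WORK_MS); // the "web method" executing
                    } finally {
                        CONNECTIONS.release();
                    }
                    long fromSubmit = TimeUnit.NANOSECONDS.toMillis(System.nanoTime() - submitted);
                    long fromConnect = TimeUnit.NANOSECONDS.toMillis(System.nanoTime() - connected);
                    System.out.printf("call %2d: from-submit %5d ms %s | from-connect %5d ms %s%n",
                            call,
                            fromSubmit, fromSubmit > TIMEOUT_MS ? "TIMEOUT" : "ok",
                            fromConnect, fromConnect > TIMEOUT_MS ? "TIMEOUT" : "ok");
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            }
        }

    Running it shows every call finishing in about SERVER_WORK_MS on the from-connect clock, while calls in later batches exceed the timeout on the from-submit clock -- the same shape as the traces described above.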

    Read the article

  • E-Business Suite Proactive Support - Workflow Analyzer

    - by Alejandro Sosa
    Overview

    The Workflow Analyzer is a standalone, easy-to-run tool created to read, validate and troubleshoot Workflow component configuration as well as runtime data. It identifies areas where potential problems may arise and, based on a set of best practices, suggests to the Workflow System Administrator what to do when such potential problems are found. This tool represents a proactive way to verify Workflow configuration and runtime data, to catch issues before they can have a more considerable impact on a production environment.

    Installation

    Since it is standalone there are no pre-requisites, and it runs on Oracle E-Business applications from 11.5.10 onwards. It is installed on the back-end server and can be run directly from SQL*Plus. The output of this tool is written to a neatly formatted HTML file covering both Workflow component configuration and Workflow runtime data:
    - Workflow-related database initialization parameters
    - Relevant Oracle E-Business profile option values
    - Workflow-owned concurrent programs schedule and Workflow components status
    - Workflow notification mailer configuration and throughput via related queues and tables
    - Workflow-relevant recommended and critical one-off patches, as well as current code level
    - Workflow database footprint, obtained by reading Workflow run-time tables to identify aged processes not being purged; it also checks for large open and closed processes or unhealthy looping conditions in a workflow process, among other checks

    See a sample of the Workflow Analyzer's output here.

    Besides performing the validations listed above, the Workflow Analyzer provides clarification on the issues it finds and refers the reader to specific Oracle MOS documents to address the findings, or explains the condition so the reader can take proper action.

    How to get it?

    The Workflow Analyzer can be obtained from Oracle MOS: Workflow Analyzer script for E-Business Suite Workflow Monitoring and Maintenance (Doc ID 1369938.1). The supplemental note How to run EBS Workflow Analyzer Tool as a Concurrent Request (Doc ID 1425053.1) explains how to register and run this tool as a concurrent program. This way the report from the Workflow Analyzer can be submitted from the application, and its output can be viewed from the application as well.

    Read the article

  • Oracle’s Vision for the Social-Enabled Enterprise

    - by Richard Lefebvre
    "Two years ago, social was a nice-to-have. Now it's a must-have." - Mark Hurd. Do you agree? Check out the on-demand version of the Oracle's Vision for the Social-Enabled Enterprise exclusive webcast, a 30-minute video, HERE. Smart companies are developing social media strategies to engage customers, gain brand insights, and transform employee collaboration and recruitment. Join Oracle President Mark Hurd and senior Oracle executives to learn more about Oracle's vision for the social-enabled enterprise.

    Read the article

  • Oracle Magazine - Deriving and Sharing Business Intelligence Metadata

    - by David Allan
    There is a new Oracle Magazine article titled 'Deriving and Sharing Business Intelligence Metadata' from Oracle ACE Director Mark Rittman in the July/August 2010 issue that illustrates the business definitions derived and shared across OWB 11gR2 and OBIEE: http://www.oracle.com/technology/oramag/oracle/10-jul/o40bi.html Thanks to Mark for the time spent producing this. As for OWB, it would have been useful to have had reverse engineering capabilities from OBIEE, interesting to have had code-template-based support for deployment of such business definitions, and powerful to be able to use these objects (logical folders etc.) in the mapping itself.

    Read the article

  • Book Review: Fast Track to MDX

    - by Greg Low
    Another book that I re-read while travelling last week was Fast Track to MDX. I still think it's the best book I've seen for introducing the core concepts of MDX. SolidQ colleague Mark Whitehorn, along with Mosha Pasumansky and Robert Zare, does an amazing job of building MDX knowledge throughout the book. I had dinner with Mark in London a few years back and was pestering him to update this book. The biggest limitation of the book is that it was written for SQL Server 2000 Analysis Services,...(read more)

    Read the article

  • Load Plan article in Oracle Magazine

    - by David Allan
    There's a timely article in Oracle Magazine on ODI Load Plans from Mark Rittman in the current issue; it's worth having a quick read of the article and playing with the included sample if you get the time. Thanks to Mark for investing the time and energy to provide such useful information to the community.
    http://www.oracle.com/technetwork/issue-archive/2012/12-sep/o52bi-1735905.html
    Mark goes over the main benefits of the load plan in the article. Interested to hear any creative use cases or comments in general.

    Read the article

  • Canonical and Nokia may collaborate on Qt; it could become the reference framework for Ubuntu

    Canonical and Nokia may collaborate on Qt. Over the last few months, there have been contacts between the Qt developers and Canonical, as well as with participants in the Ubuntu project. Mark Zimmerman, CTO of Canonical, said on his blog in October: Quote, sent by Mark Zimmerman, CTO of Canonical: "I have been thinking about Qt recently. We want to make it fast, easy and painless to devel..."

    Read the article

  • Getting Started With Knockout.js

    - by Pawan_Mishra
    Client-side template binding in web applications is getting more popular with every passing day, and more and more libraries are coming out with enhanced support for client-side binding. jQuery Templates is one very popular mechanism for client-side template binding. The idea behind client-side template binding is simple: define the HTML mark-up with appropriate placeholders for data, then use a template engine like jQuery Templates to bind the data (JSON-formatted data) to the previously defined mark-up. In this...(read more)

    Read the article

  • Black Screen After Sleep (Win 8)

    - by Mark
    I have an SSD-only, Intel HD Graphics-only Ultrabook running Windows 8 64-bit. After I turn it back on following a long sleep of a few hours, the screen remains black even though the power is on (the fan and keyboard backlight are on). I have to force a shutdown and switch it back on to use it normally again. Hibernation and hybrid sleep are turned off. It comes out of a sleep of less than an hour just fine. Any advice? Mark

    Read the article

  • The Next Wave of PeopleSoft Capabilities for the Staffing Industry Is Here

    - by Mark Rosenberg
    With the release of PeopleSoft Financials and Supply Chain Management 9.1 Feature Pack 2 in January this year, we introduced substantial new capabilities for our Staffing Industry customers. Through a co-development project with Infosys Limited, we have enriched Oracle's PeopleSoft Staffing Solution with new tools aimed at accelerating and improving the quality of job order fulfillment, increasing branch recruiter productivity, and driving profitable growth.

    Staffing industry firms succeed based on their ability to rapidly, cost-effectively, and continually fill their pipelines with new clients and job orders, recruit the best talent, and match orders with talent. Pressure to execute in each of these functional areas is even more acute on staffing firms as contingent labor becomes a more substantial and permanent part of the workforce mix. In an industry that creates value through speedy execution, there is little room for manual, inefficient processes and brittle, custom integrations, which throttle profitability and growth. The latest wave of investment in the PeopleSoft Staffing Solution focuses on generating efficiency and flexibility for our customers.

    Simplicity

    To operate profitably and continue growing, a Staffing enterprise needs its client management, recruiting, order fulfillment, and other processes to function in harmony. Most importantly, they need to be simple for recruiters, branch managers, and applicants to access and understand. The latest PeopleSoft Staffing Solution set of enhancements includes numerous automated defaulting mechanisms and information-rich dashboard pagelets that even a new employee can learn quickly. Pending Applicant, Agenda management, Search, and other pagelets are just a few of the newest, easy-to-use tools that not only aggregate and summarize information, but also provide instant access to applicants, tasks, and key reports for branch staff.

    Productivity

    The leading firms in the Staffing industry are those that can more efficiently orchestrate large numbers of candidates, clients, and orders than their competitors can. PeopleSoft Financials and Supply Chain Management 9.1 Feature Pack 2 delivers productivity boosters that Staffing firms can leverage to streamline tasks and processes for competitive advantage. For example, we enhanced the Recruiting Funnel, which manages the candidate on-boarding process, with a highly interactive user interface. It integrates disparate Staffing business processes and exploits new PeopleTools technologies to offer a superior on-boarding user experience. Automated creation of agenda items and assignment tasks for each candidate minimizes setup and organizes assignment steps for the on-boarding process. Mass updates of tasks and instant access to the candidate overview page (which we also expanded), candidate event status, event counts, and other key data enable recruiters to better serve clients and candidates.

    Lower TCO

    Constructing and maintaining an efficient yet flexible labor supply chain can be complicated, let alone expensive. Traditionally, Staffing firms have been challenged in controlling their technology cost of ownership because connecting candidate- and client-facing tools involved building and integrating custom applications and technologies and managing staff turnover, placing heavy demands on IT and support staff. With PeopleSoft Financials and Supply Chain Management 9.1 Feature Pack 2, there are two major enhancements that aggressively tackle these challenges.
    First, we added another integration framework to enable cost-effective linking of the Staffing firm's PeopleSoft applications and its job board distributors. (The first PeopleSoft 9.1 Feature Pack, released in March 2011, delivered an integration framework to connect to resume parsing providers.) Second, we introduced the teaming concept to enable work to be partitioned to groups as well as individuals. These two capabilities, combined with a host of others, position Staffing firms to configure and grow their businesses without growing their IT and overhead expenditures.

    For our Staffing Industry customers, PeopleSoft Financials and Supply Chain Management 9.1 Feature Pack 2 is loaded with high-value tools aimed at enabling and sustaining a flexible labor supply chain. For more information, contact [email protected] or Mark[email protected].

    Read the article

  • Essential Links for the SharePoint Client Side Developer

    - by Mark Rackley
    Front End Developer? Client Side Developer? Middle Tier??? I'm covering all my bases. Regardless, I'm sick and tired of Googling with Bing when I forget where the information I often need is located. I was getting ready to bookmark some of it when it hit me... "Hey Mark... (I don't actually refer to myself in the third person), why don't you put the links in a blog so that it looks like you are being helpful!" I can't tell you how many times I've had to go back to some of my old blogs to remember how I did something. Seriously people, you need to start a blog; it's the best way to remember how the frick you got something to work... and it looks like you are being helpful when in reality you are just forgetful.

    So... where was I? Oh yeah... essential information that I've needed from time to time when I was not using Visual Studio. All of this info has come in handy from time to time. Know about these things and keep them in your tool belt; it's amazing the stuff you can accomplish with just knowing where to look.

    What / Why:
    - SPServices: Widely used library written by Marc Anderson used to call SharePoint Web Services with jQuery.
    - jQuery: For SPServices and other cool stuff.
    - Easy Tabs: Essential tool for quick page enhancements. This widely used tool from Christophe Humbert groups multiple web parts into one tabbed display. Very quick and easy way to get oohs and ahs from end users.
    - Convert Calculated Columns to HTML: Also from Christophe. I use this script all the time to convert HTML in my calculated columns to actually display as HTML and not with the tags.
    - Unlocking the Mysteries of Data View Web Part XSL Tags: This blog series from Marc Anderson makes it very easy to understand what's going on with all those weird XSL tags in your data view web parts. Essential to make those things do what you want them to do.
    - Creating Parent/Child List Relationships (2007 and 2010): By far my most viewed blog posts (tens and tens of thousands). I have posts for both 2007 and 2010 that walk you through automatically setting the lookup id on a list to its "parent".
    - Set SharePoint Form Fields Using Query String Variables: Also widely read, this one walks you through taking a variable from your query string and setting a form field to that value.

    Hmmm... I KNOW there are more, but I'm tired and drawing a blank. I'll try to add them when I remember them (or need them again and think "Oh, I forgot to add that one"). But it's a start, and please feel free to add your own in the comments... So, it's YOUR turn to be helpful. What little tip or trick do you find yourself using ALL the time that you think everyone should know about??

    Read the article

  • ipvsadm lists a few hosts by IP only, rest by name

    - by dmourati
    We use keepalived to manage our Linux Virtual Server (LVS) load balancer. The LVS VIPs are set up to use a FWMARK as configured in iptables:

        virtual_server fwmark 300000 {
            delay_loop 10
            lb_algo wrr
            lb_kind NAT
            persistence_timeout 180
            protocol TCP
            real_server 10.10.35.31 {
                weight 24
                MISC_CHECK {
                    misc_path "/usr/local/sbin/check_php_wrapper.sh 10.10.35.31"
                    misc_timeout 30
                }
            }
            real_server 10.10.35.32 {
                weight 24
                MISC_CHECK {
                    misc_path "/usr/local/sbin/check_php_wrapper.sh 10.10.35.32"
                    misc_timeout 30
                }
            }
            real_server 10.10.35.33 {
                weight 24
                MISC_CHECK {
                    misc_path "/usr/local/sbin/check_php_wrapper.sh 10.10.35.33"
                    misc_timeout 30
                }
            }
            real_server 10.10.35.34 {
                weight 24
                MISC_CHECK {
                    misc_path "/usr/local/sbin/check_php_wrapper.sh 10.10.35.34"
                    misc_timeout 30
                }
            }
        }

    http://www.austintek.com/LVS/LVS-HOWTO/HOWTO/LVS-HOWTO.fwmark.html

        [root@lb1 ~]# iptables -L -n -v -t mangle
        Chain PREROUTING (policy ACCEPT 182G packets, 114T bytes)
        190M 167G MARK tcp -- * * 0.0.0.0/0 w1.x1.y1.4 multiport dports 80,443 MARK set 0x493e0
        62M 58G MARK tcp -- * * 0.0.0.0/0 w1.x1.y2.4 multiport dports 80,443 MARK set 0x493e0

        [root@lb1 ~]# ipvsadm -L
        IP Virtual Server version 1.2.1 (size=4096)
        Prot LocalAddress:Port Scheduler Flags
          -> RemoteAddress:Port     Forward Weight ActiveConn InActConn
        FWM 300000 wrr persistent 180
          -> 10.10.35.31:0          Masq    24     1          0
          -> dis2.domain.com:0      Masq    24     3          231
          -> 10.10.35.33:0          Masq    24     0          208
          -> 10.10.35.34:0          Masq    24     0          0

    At the time the realservers were set up, there was a misconfigured DNS for some hosts in the 10.10.35.0/24 network. We have since fixed the DNS. However, the hosts continue to show up only by their IP numbers (10.10.35.31, 10.10.35.33, 10.10.35.34) above, even though reverse lookup now works:

        [root@lb1 ~]# host 10.10.35.31
        31.35.10.10.in-addr.arpa domain name pointer dis1.domain.com.

    The OS is CentOS 6.3. ipvsadm is ipvsadm-1.25-10.el6.x86_64, the kernel is kernel-2.6.32-71.el6.x86_64, and keepalived is keepalived-1.2.7-1.el6.x86_64. How can we get ipvsadm -L to list all realservers by their proper hostnames?

    Read the article

  • Why do we get a sudden spike in response times?

    - by Christian Hagelid
    We have an API that is implemented using ServiceStack and hosted in IIS. While performing load testing of the API we discovered that the response times are good, but they deteriorate rapidly as soon as we hit about 3,500 concurrent users per server. We have two servers, and when hitting them with 7,000 users the average response times sit below 500ms for all endpoints. The boxes are behind a load balancer, so we get 3,500 concurrent users per server. However, as soon as we increase the number of total concurrent users we see a significant increase in response times. Increasing the concurrent users to 5,000 per server gives us an average response time per endpoint of around 7 seconds.

    Memory and CPU on the servers are quite low, both while the response times are good and after they deteriorate. At peak, with 10,000 concurrent users, the CPU averages just below 50% and the RAM sits around 3-4 GB out of 16. This leaves us thinking that we are hitting some kind of limit somewhere. The screenshot (not reproduced here) shows some key counters in perfmon during a load test with a total of 10,000 concurrent users; the highlighted counter is requests/second, and to the right of the screenshot the requests-per-second graph becomes really erratic. This is the main indicator for slow response times: as soon as we see this pattern, we notice slow response times in the load test.

    How do we go about troubleshooting this performance issue? We are trying to identify whether this is a coding issue or a configuration issue. Are there any settings in web.config or IIS that could explain this behaviour? The application pool is running .NET v4.0 and the IIS version is 7.5. The only change we have made from the default settings is to update the application pool Queue Length value from 1,000 to 5,000. We have also added the following config settings to the Aspnet.config file:

        <system.web>
          <applicationPool
            maxConcurrentRequestsPerCPU="5000"
            maxConcurrentThreadsPerCPU="0"
            requestQueueLimit="5000" />
        </system.web>

    More details: The purpose of the API is to combine data from various external sources and return it as JSON. It currently uses an in-memory cache implementation to cache individual external calls at the data layer. The first request to a resource fetches all the data required, and any subsequent requests for the same resource get results from the cache. We have a 'cache runner', implemented as a background process, that updates the information in the cache at certain set intervals. We have added locking around the code that fetches data from the external resources. We have also implemented the services that fetch the data from the external sources in an asynchronous fashion, so that an endpoint should only be as slow as the slowest external call (unless we have data in the cache, of course). This is done using the System.Threading.Tasks.Task class. Could we be hitting a limitation in terms of the number of threads available to the process?
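
    One thing worth ruling out, given the "first request fetches, later requests hit the cache" design described above: if the locking around each external fetch is coarse, every cold-cache request parks a worker thread on a lock, and the thread pool can starve long before CPU or RAM move. A minimal single-flight sketch in Java (purely illustrative -- SingleFlightCache is a hypothetical name, not the poster's ServiceStack code) lets the first caller per key do the fetch while concurrent callers share the same pending future instead of each blocking a thread:

        import java.util.concurrent.CompletableFuture;
        import java.util.concurrent.ConcurrentHashMap;
        import java.util.concurrent.ConcurrentMap;
        import java.util.function.Supplier;

        public class SingleFlightCache<K, V> {
            private final ConcurrentMap<K, CompletableFuture<V>> entries = new ConcurrentHashMap<>();

            /** First caller for a key starts the fetch; concurrent callers share the same future. */
            public CompletableFuture<V> get(K key, Supplier<V> fetch) {
                return entries.computeIfAbsent(key,
                        k -> CompletableFuture.supplyAsync(fetch)
                                // Drop failed fetches so the next request can retry;
                                // a real cache would also need expiry for successes.
                                .whenComplete((value, error) -> {
                                    if (error != null) {
                                        entries.remove(k);
                                    }
                                }));
            }
        }

    Only one external call per key is ever outstanding; everyone else awaits the shared future rather than tying up a request thread on a lock.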

    Read the article

  • Another Marketing Conference, part one – the best morning sessions.

    - by Roger Hart
    Yesterday I went to Another Marketing Conference. I honestly can't tell if the title is just tipping over into smug, but in the balance of things that doesn't matter, because it was a good conference. There was an enjoyable blend of theoretical and practical, and enough inter-disciplinary spread to keep my inner dilettante grinning from ear to ear. Sure, there was a bumpy bit in the middle, with two back-to-back sales pitches and a rather thin overview of the state of the web. But the signal:noise ratio at AMC2012 was impressively high. Here's the first part of my write-up of the sessions. It's a bit of a mammoth. It's also a bit of a mash-up of what was said and what I thought about it. I'll add links to the videos and slides from the sessions as they become available. Although it was in the morning session, I've not included Vanessa Northam's session on the power of internal comms to build brand ambassadors. It'll be in the next roundup, as this is already pushing 2.5k words.

    First, the important stuff. I was keeping a tally, and nobody said "synergy" or "leverage". I did, however, hear the term "marketeers" six times. Shame on you - you know who you are.

    1 - Branding in a post-digital world, Graham Hales

    This initially looked like being a sales presentation for Interbrand, but Graham pulled it out of the bag a few minutes in. He introduced a model for brand management that was essentially Plan >> Do >> Check >> Act, with Do and Check rolled up together, and went on to stress that this looks like an overall business management model for a reason. Brand has to be part of your overall business strategy and metrics if you're going to care about it at all. This was the first iteration of what proved to be one of the event's emergent themes: do it throughout the stack or don't bother.

    Graham went on to remind us that brands, in so far as they are owned at all, are owned by and co-created with our customers. Advertising can offer a message to customers, but they provide the expression of a brand. This was a preface to talking about an increasingly chaotic marketplace, with increasingly hard-to-manage purchase processes. Services like Amazon reviews and TripAdvisor (four presenters would make this point) saturate customers with information, and give them a kind of vigilante power to comment on and define brands. Consequently, customers experience a number of "moments of deflection" in our sales funnels. Our control is lessened, and failure to engage at those moments has an increasingly negative impact on buying decisions. The clearest example given was the failure of NatWest's "caring bank" campaign, where staff in branches, customer support, and online presences didn't align. A discontinuity of experience basically made the campaign worthless, and disgruntled customers talked about it loudly on social media. This in turn presented an opportunity to engage and show caring, but that wasn't taken.

    What I took away was that brand (co)creation is ongoing and needs monitoring and metrics. But reciprocally, given you get what you measure, strategy and metrics must include brand if any kind of branding is to work at all. Campaigns and messages must permeate product and service design. What that doesn't mean (and Graham didn't say it did) is putting Marketing at the top of the pyramid, and having them bawl demands at Product Management, Support, and Development like an entitled toddler. It's going to have to be collaborative, and session 6 on internal comms handled this really well.
    The main thing missing here was substantiating data, and the main question I found myself chewing on was: if we're building brands collaboratively and in the open, what about the cultural politics of trolling?

    2 - Challenging our core beliefs about human behaviour, Mark Earls

    This was definitely the best show of the day. It was also some of the best content. Mark talked us through nudging, behavioural economics, and some key misconceptions around decision making. Basically, people aren't rational; they're petty, reactive, emotional sacks of meat, and they'll go where they're led. Comforting stuff. Examples given were the spread of the London riots, the "discovery" of the mountains of Kong, and the popularity of Susan Boyle, which in turn made me think about Per Mollerup's concept of "social wayshowing". Mark boiled his thoughts down into four key points, which I completely failed to write down word for word:

    - People do, then think: Changing minds to change behaviour doesn't work. Post-rationalization rules the day. See also: mere exposure effects.
    - Spock < Kirk: Emotional/intuitive comes first, then we rationalize impulses. The non-thinking, emotive, reactive processes run much faster than the deliberative ones. People are not really rational decision makers, so intervening with information may not be appropriate.
    - Maximisers or satisficers?: Related to the last point. People do not consistently, rationally, maximise. When faced with an abundance of choice, they prefer to satisfice rather than evaluate, and will often follow social leads rather than think.
    - Things tend to converge: Behaviour trends to a consensus normal. When faced with choices, people overwhelmingly just do what they see others doing. Humans are extraordinarily good at mirroring behaviours and receiving influence. People "outsource the cognitive load" of choices to the crowd.

    Mark's headline quote was probably "the real influence happens at the table next to you". Reference examples, word of mouth, and social influence are tremendously important, and so talking about product experiences may be more important than talking about products. This reminded me of Kathy Sierra's "creating bad-ass users" concept of designing to make people more awesome rather than making products they like. If we can expose user-awesome, and make sharing easy, we can normalise the behaviours we want. If we normalize the behaviours we want, people should make and post-rationalize the buying decisions we want.

    Where we need to be: "A bigger boy made me do it"
    Where we are: "a wizard did it and ran away"

    However, it's worth bearing in mind that some purchasing decisions are personal and informed rather than social and reactive. There's a quadrant diagram, in fact. What was really interesting, though, towards the end of the talk, was some advice for working out how social your products might be. The standard technology adoption lifecycle graph is essentially about social product diffusion, so this idea isn't really new. Geoffrey Moore's "chasm" idea may not strictly apply, but his concepts of beachheads and reference segments are exactly what is required to normalize and thus enable purchase decisions (behaviour change). The final thing is that in only very few categories does a better product actually affect the purchase decision. Where the choice is personal and informed, this is true. But where it's personal and impulsive, or in any way social, "better" is trumped by popularity, endorsement, or "point of sale salience".
    UX, UCD, and e-commerce know this to be true. A better (and easier) experience will always beat "more features". Easy to use, and easy to observe being used, will beat "what the user says they want". This made me think about the astounding stickiness of rational fallacies, "common sense", and the pathological willful simplifications of the media. Rational fallacies seem like they're basically the heuristics we use for post-rationalization. If I were profoundly grimy and cynical, I'd suggest deploying a boat-load in our messaging, to see if they're really as sticky and appealing as they look.

    4 - Changing behaviour through communication, Stephen Donajgrodzki

    This was a fantastic follow-up to Mark's session. Stephen basically talked us through some tactics used in public information and health communications that implement the kind of behavioural theory Mark introduced. The session was largely about how to get people to do (good) things they're predisposed not to do, and how communication can (and can't) make positive interventions. A couple of things stood out, in particular "implementation intentions" and how they can be linked to goals. For example, in order to get people to check and test their smoke alarms (a goal intention, rarely actualized), an information campaign will attempt to link this activity to the clocks going back or forward (a strong implementation intention, well-actualized). The talk reinforced the idea that making behaviour changes easy and visible normalizes them and makes them more likely to succeed. To do this, they have to be embodied throughout a product and service cycle. Experiential disconnects undermine the normalization. So campaigns, products, and customer interactions must be aligned.

    This is underscored by the second section of the presentation, which talked about interventions and preconditions for change. Taking the examples of drug addiction and stopping smoking, Stephen showed us a framework for attempting (and succeeding or failing in) behaviour change. He noted that when the change is something people fundamentally want to do, and that is easy, this gets a lot simpler.

    - Coordinated, easily-observed environmental pressures create preconditions for change and build motivation (price, the pub smoking ban, ad campaigns, a friend quitting, declining social acceptability).
    - A triggering event leads to a change attempt (getting a cold and panicking about how bad the cough is).
    - Interventions can be made to enable an attempt (NHS services, public information, nicotine patches).
    - If it succeeds - yay. If it fails, there's strong negative reinforcement.

    Triggering events seem largely personal, but messaging can intervene in the creation of preconditions and in supporting decisions. Stephen talked more about systems of thinking and "bounded rationality". The idea being that to enable change you need to break through "automatic" thinking into "reflective" thinking. Disruption and emotion are great tools for this, but that is only the start of the process. It occurs to me that a great deal of market research is focused on determining triggers rather than analysing necessary preconditions, although they are presumably related.

    The final section talked about setting goals. Marketing goals are often seen as deriving directly from business goals. However, marketing may be unable to deliver on these directly where decision and behaviour-change processes are involved. In those cases, marketing and communication goals should be to create preconditions. They should also consider priming and norms.
    Content marketing and brand awareness are good first steps here, as brands can be heuristics in decision making for choice-saturated consumers, or for those seeking education.

    5 - The power of engaged communities and how to build them, Harriet Minter (the Guardian)

    The meat of this was that you need to let communities define and establish themselves, and be quick to react to their needs. Harriet had been in charge of building the Guardian's community sites, and learned a lot about how they come together, stabilize, grow, and react. Crucially, they can't be about sales or push messaging. A community is not just an audience. It's essential to start with what this particular segment or tribe is interested in, then what they want to hear. Eventually you can consider - in light of this - what they might want to buy, but you can't start with the product. A community won't cohere around one you're pushing. Her tips for community building were (again, sorry, not verbatim):

    - Set goals: Have some targets. Community building sounds vague and fluffy, but you can have (and adjust) concrete goals.
    - Think like a start-up: This is the "lean" stuff. Try things, fail quickly, respond.
    - Don't restrict platforms: Let the audience choose them, and be aware of their differences. For example, LinkedIn is very different to Twitter.
    - Track your stats: Related to the first point. Keeping an eye on the numbers lets you respond. They should be qualified, however. If you want a community of enterprise decision makers, headcount alone may be a bad metric - have you got CIOs, or just people who want to get jobs by mingling with CIOs?
    - Build brand advocates: Do things to involve people and make them awesome, and they'll cheer-lead for you.

    The last part really got my attention. Little bits of drive-by kindness go a long way. But more than that, genuinely helping people turns them into powerful advocates. Harriet gave an example of the Guardian engaging with an aspiring journalist on its Q&A forums. Through a series of serendipitous encounters he became a BBC producer, and now enthusiastically speaks up for the Guardian community sites. Cultivating many small, authentic, influential voices may have a better pay-off than schmoozing the big guys. This could be particularly important in the context of Mark and Stephen's models of social, endorsement-led, and example-led decision making. There's a lot here I haven't covered, and it may be worth some follow-up on community building.

    Thoughts

    I was quite sceptical of nudge theory and behavioural economics. First off it sounds too good to be true, and second it sounds too sinister to permit. But I haven't done the background reading. So I'm going to, and if it seems to hold real water, and if it's possible to do it ethically (Stephen's presentation suggests it may be), then it's probably worth exploring. The message seemed to be: change what people do, and they'll work out why afterwards. Moreover, the people around them will do it too. Make the things you want them to do extraordinarily easy and very, very visible. Normalize and support the decisions you want them to make, and they'll make them. In practice this means not talking about the thing, but showing the user-awesome. Glib? Perhaps. But it feels worth considering.

    Also, if I ever run a marketing conference, I'm going to ban speakers from using examples from Apple. Quite apart from not being consistently generalizable, it's becoming an irritating cliché.

    Read the article

  • Android threading and database locking

    - by Sena Gbeckor-Kove
    Hi, we are using AsyncTasks to access database tables and cursors. Unfortunately we are seeing occasional exceptions regarding the database being locked:

        E/SQLiteOpenHelper(15963): Couldn't open iviewnews.db for writing (will try read-only):
        E/SQLiteOpenHelper(15963): android.database.sqlite.SQLiteException: database is locked
        E/SQLiteOpenHelper(15963): at android.database.sqlite.SQLiteDatabase.native_setLocale(Native Method)
        E/SQLiteOpenHelper(15963): at android.database.sqlite.SQLiteDatabase.setLocale(SQLiteDatabase.java:1637)
        E/SQLiteOpenHelper(15963): at android.database.sqlite.SQLiteDatabase.<init>(SQLiteDatabase.java:1587)
        E/SQLiteOpenHelper(15963): at android.database.sqlite.SQLiteDatabase.openDatabase(SQLiteDatabase.java:638)
        E/SQLiteOpenHelper(15963): at android.database.sqlite.SQLiteDatabase.openOrCreateDatabase(SQLiteDatabase.java:659)
        E/SQLiteOpenHelper(15963): at android.database.sqlite.SQLiteDatabase.openOrCreateDatabase(SQLiteDatabase.java:652)
        E/SQLiteOpenHelper(15963): at android.app.ApplicationContext.openOrCreateDatabase(ApplicationContext.java:482)
        E/SQLiteOpenHelper(15963): at android.content.ContextWrapper.openOrCreateDatabase(ContextWrapper.java:193)
        E/SQLiteOpenHelper(15963): at android.database.sqlite.SQLiteOpenHelper.getWritableDatabase(SQLiteOpenHelper.java:98)
        E/SQLiteOpenHelper(15963): at android.database.sqlite.SQLiteOpenHelper.getReadableDatabase(SQLiteOpenHelper.java:158)
        E/SQLiteOpenHelper(15963): at com.iview.android.widget.IViewNewsTopStoryWidget.initData(IViewNewsTopStoryWidget.java:73)
        E/SQLiteOpenHelper(15963): at com.iview.android.widget.IViewNewsTopStoryWidget.updateNewsWidgets(IViewNewsTopStoryWidget.java:121)
        E/SQLiteOpenHelper(15963): at com.iview.android.async.GetNewsTask.doInBackground(GetNewsTask.java:338)
        E/SQLiteOpenHelper(15963): at com.iview.android.async.GetNewsTask.doInBackground(GetNewsTask.java:1)
        E/SQLiteOpenHelper(15963): at android.os.AsyncTask$2.call(AsyncTask.java:185)
        E/SQLiteOpenHelper(15963): at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:256)
        E/SQLiteOpenHelper(15963): at java.util.concurrent.FutureTask.run(FutureTask.java:122)
        E/SQLiteOpenHelper(15963): at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:648)
        E/SQLiteOpenHelper(15963): at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:673)
        E/SQLiteOpenHelper(15963): at java.lang.Thread.run(Thread.java:1060)

    Does anybody have a general example of code which writes to a database from a different thread than the one reading, and how can we ensure thread safety? One suggestion I've had is to use a ContentProvider, as this would handle access to the database from multiple threads. I am going to look at this, but is it the recommended method of handling such a problem? It seems rather heavyweight considering we're talking about in front or behind. Thanks in advance.
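
    One commonly suggested pattern (a minimal sketch, not an official Android recommendation): make every thread go through one shared SQLiteOpenHelper, since a single SQLiteDatabase serializes access internally, whereas opening separate helpers on the same file from different threads is a classic source of "database is locked". NewsDbHelper and its schema here are hypothetical:

        import android.content.Context;
        import android.database.sqlite.SQLiteDatabase;
        import android.database.sqlite.SQLiteOpenHelper;

        // Hypothetical helper: one process-wide instance guards the db file.
        public class NewsDbHelper extends SQLiteOpenHelper {
            private static volatile NewsDbHelper instance;

            public static NewsDbHelper get(Context context) {
                if (instance == null) {
                    synchronized (NewsDbHelper.class) {
                        if (instance == null) {
                            // Application context avoids leaking an Activity.
                            instance = new NewsDbHelper(context.getApplicationContext());
                        }
                    }
                }
                return instance;
            }

            private NewsDbHelper(Context context) {
                super(context, "iviewnews.db", null, 1);
            }

            @Override
            public void onCreate(SQLiteDatabase db) {
                db.execSQL("CREATE TABLE news (_id INTEGER PRIMARY KEY, title TEXT)");
            }

            @Override
            public void onUpgrade(SQLiteDatabase db, int oldVersion, int newVersion) {
                db.execSQL("DROP TABLE IF EXISTS news");
                onCreate(db);
            }
        }

        // In each AsyncTask's doInBackground, every thread uses the same helper:
        //   SQLiteDatabase db = NewsDbHelper.get(context).getWritableDatabase();
        //   db.insert("news", null, values);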

    Read the article

  • infinite loop shutting down ensime

    - by Jeff Bowman
    When I run M-x ensime-disconnect I get the following forever:

        string matching regex `\"((?:[^\"\\]|\\.)*)\"' expected but `^@' found

    and I see this exception when I use C-c C-c:

        Uncaught exception in com.ensime.server.SocketHandler@769aba32
        java.net.SocketException: Broken pipe
            at java.net.SocketOutputStream.socketWrite0(Native Method)
            at java.net.SocketOutputStream.socketWrite(SocketOutputStream.java:109)
            at java.net.SocketOutputStream.write(SocketOutputStream.java:153)
            at sun.nio.cs.StreamEncoder.writeBytes(StreamEncoder.java:220)
            at sun.nio.cs.StreamEncoder.implFlushBuffer(StreamEncoder.java:290)
            at sun.nio.cs.StreamEncoder.implFlush(StreamEncoder.java:294)
            at sun.nio.cs.StreamEncoder.flush(StreamEncoder.java:140)
            at java.io.OutputStreamWriter.flush(OutputStreamWriter.java:229)
            at java.io.BufferedWriter.flush(BufferedWriter.java:253)
            at com.ensime.server.SocketHandler.write(server.scala:118)
            at com.ensime.server.SocketHandler$$anonfun$act$1$$anonfun$apply$mcV$sp$1.apply(server.scala:132)
            at com.ensime.server.SocketHandler$$anonfun$act$1$$anonfun$apply$mcV$sp$1.apply(server.scala:127)
            at scala.actors.Actor$class.receive(Actor.scala:456)
            at com.ensime.server.SocketHandler.receive(server.scala:67)
            at com.ensime.server.SocketHandler$$anonfun$act$1.apply$mcV$sp(server.scala:127)
            at com.ensime.server.SocketHandler$$anonfun$act$1.apply(server.scala:127)
            at com.ensime.server.SocketHandler$$anonfun$act$1.apply(server.scala:127)
            at scala.actors.Reactor$class.seq(Reactor.scala:262)
            at com.ensime.server.SocketHandler.seq(server.scala:67)
            at scala.actors.Reactor$$anon$3.andThen(Reactor.scala:240)
            at scala.actors.Combinators$class.loop(Combinators.scala:26)
            at com.ensime.server.SocketHandler.loop(server.scala:67)
            at scala.actors.Combinators$$anonfun$loop$1.apply(Combinators.scala:26)
            at scala.actors.Combinators$$anonfun$loop$1.apply(Combinators.scala:26)
            at scala.actors.Reactor$$anonfun$seq$1$$anonfun$apply$1.apply(Reactor.scala:259)
            at scala.actors.ReactorTask.run(ReactorTask.scala:36)
            at scala.actors.ReactorTask.compute(ReactorTask.scala:74)
            at scala.concurrent.forkjoin.RecursiveAction.exec(RecursiveAction.java:147)
            at scala.concurrent.forkjoin.ForkJoinTask.quietlyExec(ForkJoinTask.java:422)
            at scala.concurrent.forkjoin.ForkJoinWorkerThread.mainLoop(ForkJoinWorkerThread.java:340)
            at scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:325)

    Is there something else I'm missing in my config, or something I should check on? Thanks, Jeff

    Read the article

  • Liferay 5.1.1 solr plugin ClassCastException

    - by Mustafa Zidan
    The solr-plugin throws the following exception:

    [SolrIndexSearcherImpl:79] Error while sending request to Solr
    java.lang.ClassCastException: com.liferay.portal.kernel.util.HttpUtil cannot be cast to com.liferay.portal.kernel.util.HttpUtil
    at com.liferay.portal.kernel.util.HttpUtil._getUtil(HttpUtil.java:317)
    at com.liferay.portal.kernel.util.HttpUtil.getHttp(HttpUtil.java:96)
    at com.liferay.portal.kernel.util.HttpUtil.addParameter(HttpUtil.java:68)
    at com.liferay.portal.search.solr.SolrIndexSearcherImpl.search(SolrIndexSearcherImpl.java:71)
    at com.liferay.portal.search.solr.SolrSearchEngineUtil.search(SolrSearchEngineUtil.java:78)
    at com.liferay.portal.search.solr.messaging.SolrReaderMessageListener.doCommandSearch(SolrReaderMessageListener.java:92)
    at com.liferay.portal.search.solr.messaging.SolrReaderMessageListener.doReceive(SolrReaderMessageListener.java:75)
    at com.liferay.portal.search.solr.messaging.SolrReaderMessageListener.receive(SolrReaderMessageListener.java:46)
    at com.liferay.portal.kernel.messaging.InvokerMessageListener.receive(InvokerMessageListener.java:69)
    at com.liferay.portal.kernel.messaging.ParallelDestination$1.run(ParallelDestination.java:59)
    at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
    at java.lang.Thread.run(Thread.java:619)
    16:08:16,174 ERROR [SolrReaderMessageListener:49] Unable to process message com.liferay.portal.kernel.messaging.Message@2c591d98
    com.liferay.portal.kernel.search.SearchException: java.lang.ClassCastException: com.liferay.portal.kernel.util.HttpUtil cannot be cast to com.liferay.portal.kernel.util.HttpUtil
    at com.liferay.portal.search.solr.SolrIndexSearcherImpl.search(SolrIndexSearcherImpl.java:81)
    at com.liferay.portal.search.solr.SolrSearchEngineUtil.search(SolrSearchEngineUtil.java:78)
    at com.liferay.portal.search.solr.messaging.SolrReaderMessageListener.doCommandSearch(SolrReaderMessageListener.java:92)
    at com.liferay.portal.search.solr.messaging.SolrReaderMessageListener.doReceive(SolrReaderMessageListener.java:75)
    at com.liferay.portal.search.solr.messaging.SolrReaderMessageListener.receive(SolrReaderMessageListener.java:46)
    at com.liferay.portal.kernel.messaging.InvokerMessageListener.receive(InvokerMessageListener.java:69)
    at com.liferay.portal.kernel.messaging.ParallelDestination$1.run(ParallelDestination.java:59)
    at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)

    Can anyone help me with this?
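
    For what it's worth, an exception of the form "X cannot be cast to X" almost always means the same class was defined by two different classloaders (for example, once by the portal's shared loader and once by the plugin webapp's loader), so the JVM treats the two definitions as unrelated types. A minimal, self-contained sketch of the effect (the demo class name is made up):

    import java.net.URL;
    import java.net.URLClassLoader;

    public class DuplicateClassDemo {
        public static void main(String[] args) throws Exception {
            // Location of this demo's own compiled classes (directory or jar).
            URL classpath = DuplicateClassDemo.class.getProtectionDomain()
                                                    .getCodeSource().getLocation();

            // Two sibling loaders with a null parent, so each defines its own copy.
            ClassLoader a = new URLClassLoader(new URL[] { classpath }, null);
            ClassLoader b = new URLClassLoader(new URL[] { classpath }, null);

            Class<?> fromA = a.loadClass("DuplicateClassDemo");
            Class<?> fromB = b.loadClass("DuplicateClassDemo");

            // Same name, same bytes, but different defining loaders:
            System.out.println(fromA == fromB);                // false
            // ...so a cast between them fails, just like HttpUtil vs. HttpUtil:
            System.out.println(fromB.isAssignableFrom(fromA)); // false
        }
    }

    The usual fix is to make sure only one copy of the portal kernel classes is on the classpath, so the plugin inherits them from the shared loader instead of bundling its own.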

    Read the article

  • DateFormat conversion problem in java?

    - by androidbase Praveen
    My input string is 2010-03-24T17:28:50.000Z and the output pattern is:

    DateFormat formatter1 = new SimpleDateFormat("EEE. MMM. d. yyyy");

    I convert it like this:

    formatter1.format(new Date("2010-03-24T17:28:50.000Z")); // IllegalArgumentException here

    The output for the string "2010-03-24T17:28:50.000Z" should look like this: Thu. Mar. 24. 2010, but I get an IllegalArgumentException. I don't know why. Any idea? The stack trace message is:

    04-08 19:50:28.326: WARN/System.err(306): java.lang.IllegalArgumentException
    04-08 19:50:28.345: WARN/System.err(306): at java.util.Date.parse(Date.java:447)
    04-08 19:50:28.355: WARN/System.err(306): at java.util.Date.<init>(Date.java:157)
    04-08 19:50:28.366: WARN/System.err(306): at com.example.brown.Bru_Tube$SelectDataTask.doInBackground(Bru_Tube.java:222)
    04-08 19:50:28.366: WARN/System.err(306): at com.example.brown.Bru_Tube$SelectDataTask.doInBackground(Bru_Tube.java:1)
    04-08 19:50:28.405: WARN/System.err(306): at android.os.AsyncTask$2.call(AsyncTask.java:185)
    04-08 19:50:28.415: WARN/System.err(306): at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:305)
    04-08 19:50:28.415: WARN/System.err(306): at java.util.concurrent.FutureTask.run(FutureTask.java:137)
    04-08 19:50:28.446: WARN/System.err(306): at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1068)
    04-08 19:50:28.456: WARN/System.err(306): at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:561)
    04-08 19:50:28.466: WARN/System.err(306): at java.lang.Thread.run(Thread.java:1096)
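
    The deprecated Date(String) constructor cannot parse ISO 8601 timestamps, which is what throws the IllegalArgumentException; parsing with a matching SimpleDateFormat first should work. A minimal sketch (the quoted 'Z' in the pattern treats the suffix as literal text, so the UTC time zone is set explicitly):

    import java.text.ParseException;
    import java.text.SimpleDateFormat;
    import java.util.Date;
    import java.util.Locale;
    import java.util.TimeZone;

    public class IsoDateDemo {
        public static void main(String[] args) throws ParseException {
            // Parse the ISO 8601 input with a pattern that matches it exactly.
            SimpleDateFormat parser =
                    new SimpleDateFormat("yyyy-MM-dd'T'HH:mm:ss.SSS'Z'", Locale.US);
            parser.setTimeZone(TimeZone.getTimeZone("UTC"));
            Date date = parser.parse("2010-03-24T17:28:50.000Z");

            // Then format it into the desired output pattern.
            SimpleDateFormat formatter =
                    new SimpleDateFormat("EEE. MMM. d. yyyy", Locale.US);
            formatter.setTimeZone(TimeZone.getTimeZone("UTC"));
            System.out.println(formatter.format(date)); // prints "Wed. Mar. 24. 2010"
        }
    }

    (Note that 2010-03-24 was actually a Wednesday, so the correctly formatted result is "Wed. Mar. 24. 2010", not "Thu.".)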

    Read the article

  • Annoying Twisted Python problem

    - by Kalmi
    I'm trying to answer the following question out of personal interest: What is the fastest way to send 100,000 HTTP requests in Python? This is what I have come up with so far, but I'm experiencing something very strange. When installSignalHandlers is False, it just hangs. I can see that the DelayedCall instances are in reactor._newTimedCalls, but processResponse never gets called. When installSignalHandlers is True, it throws an error and works.

    from twisted.internet import reactor
    from twisted.web.client import Agent
    from threading import Semaphore, Thread
    import time

    concurrent = 100
    s = Semaphore(concurrent)
    reactor.suggestThreadPoolSize(concurrent)

    t = Thread(target=reactor.run, kwargs={'installSignalHandlers': True})
    t.daemon = True
    t.start()

    agent = Agent(reactor)

    def processResponse(response, url):
        print response.code, url
        s.release()

    def processError(response, url):
        print "error", url
        s.release()

    def addTask(url):
        req = agent.request('HEAD', url)
        req.addCallback(processResponse, url)
        req.addErrback(processError, url)

    for url in open('urllist.txt'):
        addTask(url.strip())
        s.acquire()

    while s._Semaphore__value != concurrent:
        time.sleep(0.1)

    reactor.stop()

    And here is the error it throws when installSignalHandlers is True (note: this is the expected behaviour; the question is why it doesn't work when installSignalHandlers is False):

    Traceback (most recent call last):
      File "/usr/lib/python2.6/dist-packages/twisted/internet/base.py", line 396, in fireEvent
        DeferredList(beforeResults).addCallback(self._continueFiring)
      File "/usr/lib/python2.6/dist-packages/twisted/internet/defer.py", line 224, in addCallback
        callbackKeywords=kw)
      File "/usr/lib/python2.6/dist-packages/twisted/internet/defer.py", line 213, in addCallbacks
        self._runCallbacks()
      File "/usr/lib/python2.6/dist-packages/twisted/internet/defer.py", line 371, in _runCallbacks
        self.result = callback(self.result, *args, **kw)
    --- <exception caught here> ---
      File "/usr/lib/python2.6/dist-packages/twisted/internet/base.py", line 409, in _continueFiring
        callable(*args, **kwargs)
      File "/usr/lib/python2.6/dist-packages/twisted/internet/base.py", line 1165, in _reallyStartRunning
        self._handleSignals()
      File "/usr/lib/python2.6/dist-packages/twisted/internet/base.py", line 1105, in _handleSignals
        signal.signal(signal.SIGINT, self.sigInt)
    exceptions.ValueError: signal only works in main thread

    What am I doing wrong and what is the right way? I'm new to twisted.

    Read the article

  • Back Up to Tape the Way You Shop For Groceries

    - by rickramsey
    Imagine if this was how you shopped for groceries: From the end of the aisle, sprint to the point where you reach the ketchup. Pull a bottle from the shelf and yell at the top of your lungs, "Got it!" Sprint back to the end of the aisle. Start again and sprint down the same aisle to the mustard, pull a bottle from the shelf, and again yell for the whole store to hear, "Got it!" Sprint back to the end of the aisle. Repeat this procedure for every item you need in the aisle, then proceed to the next aisle and follow the same steps for the list of items you need from that aisle.

    Sounds ridiculous, doesn't it? Not only is it horribly inefficient, it's exhausting and can lead to wear-out failures on your grocery cart, or worse, yourself.

    This is essentially how NetApp and some other applications write NDMP backups to tape. In the analogy, the ketchup and mustard are the files to be written, yelling "Got it!" is the equivalent of a sync mark at the end of a file, and the sprint back to the end of an aisle is the process most commonly called a "backhitch," where the drive has to back up on the tape before it can start writing again. Writing to tape in this way results in very slow tape drive performance and imposes unnecessary wear on the tape drive and the media, especially when writing small files.

    The good news is that not all tape drives behave this way when writing small files. Unlike midrange LTO drives, Oracle's StorageTek T10000D tape drive is designed to handle this scenario efficiently. The difference between the two drive types is that the T10000D lets you write files in a NetApp NDMP backup environment the way you would normally shop for groceries: you stream through the aisles picking up items as you go, and only after checking out do you yell, "Got it!" (though you might do that last step silently).

    The T10000D achieves this with a feature called the Tape Application Accelerator, which prevents the drive from having to stop after each file is written to notify NetApp or another application that the write was successful. When enabled, the Tape Application Accelerator causes the tape drive to respond to tape mark and file sync commands differently than when disabled:

    - A tape mark received by the tape drive is treated as a buffered tape mark.
    - A file sync received by the tape drive is treated as a no-op command.

    Since buffered tape marks and no-op commands do not cause the tape drive to empty the contents of its buffer to tape and backhitch, the data is written to tape in significantly less time, as the toy model below illustrates.

    Oracle has emulated NetApp environments with a number of different file sizes and compared the T10000D, with the Tape Application Accelerator enabled, against LTO6 tape drives. The T10000D is not only monumentally faster, but also remarkably consistent. In addition, it writes the entire 50 GB of files without a single backhitch, while the LTO6 drive performs as many as 3,800 backhitches! At the end of writing the entire set of files, the T10000D reports back to the application, in this case NetApp, that the write was successful via a tape mark.

    So if the Tape Application Accelerator dramatically improves performance and reliability, why wouldn't you always have it enabled? The reason is that tape drive buffers are meant to be only temporary data repositories, so in the event of a power loss, files that still resided in the buffer could be lost in certain environments.

    Fortunately, there are best practices for each environment to prevent this from happening. I highly recommend reading Maximizing Tape Performance with StorageTek T10000 Tape Drives (pdf) to decide which best practice is right for you. The white paper also digs deeper into the benefits of the Tape Application Accelerator. The white paper is free, and after downloading it you can decide for yourself whether you want to yell "Got it!" out loud or just silently to yourself.

    Customer Advisory Panel

    One final link: Oracle has started a Customer Advisory Panel program to collect feedback from customers on their current experiences with Oracle products, as well as their desires for future product development. If you would like to participate in the program, go to this link at oracle.com.

    photo taken on Idaho's Sacajewea Historic Biway by Rick Ramsey

    - Brian Zents

    Follow OTN on Blog | Facebook | Twitter | YouTube
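
    To make the mechanism concrete, here is a toy model (not a real tape API; every name and number in it is made up) of why a synchronous tape mark after each file forces a backhitch, while buffered tape marks defer everything to one flush at the end:

    // Toy tape drive: a synchronous tape mark flushes the buffer and costs a
    // backhitch; a buffered tape mark just queues until the final sync.
    class ToyTapeDrive {
        private int bufferedKiB = 0;
        int backhitches = 0;

        void write(int fileKiB) {
            bufferedKiB += fileKiB;
        }

        // Synchronous tape mark: empty the buffer to tape, then backhitch
        // before writing can resume.
        void syncTapeMark() {
            if (bufferedKiB > 0) {
                bufferedKiB = 0;
                backhitches++;
            }
        }

        // Buffered tape mark: acknowledged immediately, no flush, no backhitch.
        void bufferedTapeMark() {
            // no-op until the final synchronous tape mark
        }
    }

    public class BackhitchDemo {
        public static void main(String[] args) {
            int files = 3800; // one small file per tape mark

            ToyTapeDrive perFileSync = new ToyTapeDrive();
            for (int i = 0; i < files; i++) {
                perFileSync.write(64);
                perFileSync.syncTapeMark(); // "Got it!" after every file
            }

            ToyTapeDrive accelerator = new ToyTapeDrive();
            for (int i = 0; i < files; i++) {
                accelerator.write(64);
                accelerator.bufferedTapeMark(); // silently queued
            }
            accelerator.syncTapeMark(); // one "Got it!" for the whole set

            System.out.println("Per-file sync backhitches: " + perFileSync.backhitches); // 3800
            System.out.println("Accelerator backhitches:   " + accelerator.backhitches); // 1
        }
    }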

    Read the article

  • UITableView not displaying parsed data

    - by Graeme
    I have a UITableView which is set up in Interface Builder and connected properly to its class in Xcode. I also have an "Importer" class which downloads and parses an RSS feed and stores the information in an NSMutableArray. I have verified (using breakpoints and NSLog) that the parsing is working properly, but no data is showing in the UITableView. Any ideas as to what the problem could be? I'm almost out of them. It's based on the XML performance Apple example. Here's the code for TableView.h:

    #import <UIKit/UIKit.h>
    #import "IncidentsImporter.h"

    @class SongDetailsController;

    @interface CurrentIncidentsTableViewController : UITableViewController <IncidentsImporterDelegate> {
        NSMutableArray *incidents;
        SongDetailsController *detailController;
        UITableView *ctableView;
        IncidentsImporter *parser;
    }

    @property (nonatomic, retain) NSMutableArray *incidents;
    @property (nonatomic, retain, readonly) SongDetailsController *detailController;
    @property (nonatomic, retain) IncidentsImporter *parser;
    @property (nonatomic, retain) IBOutlet UITableView *ctableView;

    // Called by the ParserChoiceViewController based on the selected parser type.
    - (void)beginParsing;

    @end

    And the code for .m:

    #import "CurrentIncidentsTableViewController.h"
    #import "SongDetailsController.h"
    #import "Incident.h"

    @implementation CurrentIncidentsTableViewController

    @synthesize ctableView, incidents, parser, detailController;

    #pragma mark -
    #pragma mark View lifecycle

    - (void)viewDidLoad {
        [super viewDidLoad];
        self.parser = [[IncidentsImporter alloc] init];
        parser.delegate = self;
        [parser start];
        UIBarButtonItem *refreshButton = [[UIBarButtonItem alloc] initWithBarButtonSystemItem:UIBarButtonSystemItemRefresh target:self action:@selector(beginParsing)];
        self.navigationItem.rightBarButtonItem = refreshButton;
        [refreshButton release];
        // Uncomment the following line to preserve selection between presentations.
        //self.clearsSelectionOnViewWillAppear = NO;
        // Uncomment the following line to display an Edit button in the navigation bar for this view controller.
        // self.navigationItem.rightBarButtonItem = self.editButtonItem;
    }

    - (void)viewWillAppear:(BOOL)animated {
        NSIndexPath *selectedRowIndexPath = [ctableView indexPathForSelectedRow];
        if (selectedRowIndexPath != nil) {
            [ctableView deselectRowAtIndexPath:selectedRowIndexPath animated:NO];
        }
    }

    // This method will be called repeatedly - once each time the user chooses to parse.
    - (void)beginParsing {
        NSLog(@"Parsing has begun");
        //self.navigationItem.rightBarButtonItem.enabled = NO;
        // Allocate the array for song storage, or empty the results of previous parses
        if (incidents == nil) {
            NSLog(@"Grabbing array");
            self.incidents = [NSMutableArray array];
        } else {
            [incidents removeAllObjects];
            [ctableView reloadData];
        }
        // Create the parser, set its delegate, and start it.
        self.parser = [[IncidentsImporter alloc] init];
        parser.delegate = self;
        [parser start];
    }

    /*
    - (void)viewDidAppear:(BOOL)animated {
        [super viewDidAppear:animated];
    }
    */
    /*
    - (void)viewWillDisappear:(BOOL)animated {
        [super viewWillDisappear:animated];
    }
    */
    /*
    - (void)viewDidDisappear:(BOOL)animated {
        [super viewDidDisappear:animated];
    }
    */

    - (BOOL)shouldAutorotateToInterfaceOrientation:(UIInterfaceOrientation)interfaceOrientation {
        // Override to allow orientations other than the default portrait orientation.
        return YES;
    }

    #pragma mark -
    #pragma mark Table view data source

    - (NSInteger)numberOfSectionsInTableView:(UITableView *)tableView {
        // Return the number of sections.
        return 1;
    }

    - (NSInteger)tableView:(UITableView *)tableView numberOfRowsInSection:(NSInteger)section {
        // Return the number of rows in the section.
        return [incidents count];
    }

    // Customize the appearance of table view cells.
    - (UITableViewCell *)tableView:(UITableView *)tableView cellForRowAtIndexPath:(NSIndexPath *)indexPath {
        NSLog(@"Table Cell Sought");
        static NSString *kCellIdentifier = @"MyCell";
        UITableViewCell *cell = [ctableView dequeueReusableCellWithIdentifier:kCellIdentifier];
        if (cell == nil) {
            cell = [[[UITableViewCell alloc] initWithStyle:UITableViewCellStyleDefault reuseIdentifier:kCellIdentifier] autorelease];
            cell.textLabel.font = [UIFont boldSystemFontOfSize:14.0];
            cell.accessoryType = UITableViewCellAccessoryDisclosureIndicator;
        }
        cell.textLabel.text = @"Test"; //[[incidents objectAtIndex:indexPath.row] title];
        return cell;
    }

    /*
    // Override to support conditional editing of the table view.
    - (BOOL)tableView:(UITableView *)tableView canEditRowAtIndexPath:(NSIndexPath *)indexPath {
        // Return NO if you do not want the specified item to be editable.
        return YES;
    }
    */
    /*
    // Override to support editing the table view.
    - (void)tableView:(UITableView *)tableView commitEditingStyle:(UITableViewCellEditingStyle)editingStyle forRowAtIndexPath:(NSIndexPath *)indexPath {
        if (editingStyle == UITableViewCellEditingStyleDelete) {
            // Delete the row from the data source
            [tableView deleteRowsAtIndexPaths:[NSArray arrayWithObject:indexPath] withRowAnimation:YES];
        } else if (editingStyle == UITableViewCellEditingStyleInsert) {
            // Create a new instance of the appropriate class, insert it into the array, and add a new row to the table view
        }
    }
    */
    /*
    // Override to support rearranging the table view.
    - (void)tableView:(UITableView *)tableView moveRowAtIndexPath:(NSIndexPath *)fromIndexPath toIndexPath:(NSIndexPath *)toIndexPath {
    }
    */
    /*
    // Override to support conditional rearranging of the table view.
    - (BOOL)tableView:(UITableView *)tableView canMoveRowAtIndexPath:(NSIndexPath *)indexPath {
        // Return NO if you do not want the item to be re-orderable.
        return YES;
    }
    */

    #pragma mark -
    #pragma mark Table view delegate

    - (void)tableView:(UITableView *)tableView didSelectRowAtIndexPath:(NSIndexPath *)indexPath {
        self.detailController.incident = [incidents objectAtIndex:indexPath.row];
        [self.navigationController pushViewController:self.detailController animated:YES];
    }

    #pragma mark -
    #pragma mark Memory management

    - (void)didReceiveMemoryWarning {
        // Releases the view if it doesn't have a superview.
        [super didReceiveMemoryWarning];
        // Relinquish ownership any cached data, images, etc that aren't in use.
    }

    - (void)viewDidUnload {
        // Relinquish ownership of anything that can be recreated in viewDidLoad or on demand.
        // For example: self.myOutlet = nil;
    }

    - (void)parserDidEndParsingData:(IncidentsImporter *)parser {
        [ctableView reloadData];
        self.navigationItem.rightBarButtonItem.enabled = YES;
        self.parser = nil;
    }

    - (void)parser:(IncidentsImporter *)parser didParseIncidents:(NSArray *)parsedIncidents {
        //[incidents addObjectsFromArray: parsedIncidents];
        // Three scroll view properties are checked to keep the user interface smooth during parse.
        // When new objects are delivered by the parser, the table view is reloaded to display them.
        // If the table is reloaded while the user is scrolling, this can result in erratic behavior;
        // dragging, tracking, and decelerating can be checked for this purpose. When the parser
        // finishes, reloadData will be called in parserDidEndParsingData:, guaranteeing that all data
        // will ultimately be displayed even if reloadData is not called in this method because of
        // user interaction.
        if (!ctableView.dragging && !ctableView.tracking && !ctableView.decelerating) {
            self.title = [NSString stringWithFormat:NSLocalizedString(@"Top %d Songs", @"Top Songs format"), [parsedIncidents count]];
            [ctableView reloadData];
        }
    }

    - (void)parser:(IncidentsImporter *)parser didFailWithError:(NSError *)error {
        // handle errors as appropriate to your application...
    }

    - (void)dealloc {
        [super dealloc];
    }

    @end

    Read the article

< Previous Page | 22 23 24 25 26 27 28 29 30 31 32 33  | Next Page >