Search Results

Search found 6947 results on 278 pages for 'annotation processing'.


  • What is "read operations inside transactions can't allow failover"?

    - by Kenyth
    From time to time I got the following exception message on GAE for my GAE/J app. I searched with Google, no relevant results were found. Does anyone know about this? Thanks in advance for any response! The exception message is as below: Nested in org.springframework.orm.jpa.JpaSystemException: Illegal argument; nested exception is javax.persistence.PersistenceException: Illegal argument: java.lang.IllegalArgumentException: read operations inside transactions can't allow failover at com.google.appengine.api.datastore.DatastoreApiHelper.translateError(DatastoreApiHelper.java: 34) at com.google.appengine.api.datastore.DatastoreApiHelper.makeSyncCall(DatastoreApiHelper.java: 67) at com.google.appengine.api.datastore.DatastoreServiceImpl $1.run(DatastoreServiceImpl.java:128) at com.google.appengine.api.datastore.TransactionRunner.runInTransaction(TransactionRunner.java: 30) at com.google.appengine.api.datastore.DatastoreServiceImpl.get(DatastoreServiceImpl.java: 111) at com.google.appengine.api.datastore.DatastoreServiceImpl.get(DatastoreServiceImpl.java: 84) at com.google.appengine.api.datastore.DatastoreServiceImpl.get(DatastoreServiceImpl.java: 77) at org.datanucleus.store.appengine.RuntimeExceptionWrappingDatastoreService.get(RuntimeExceptionWrappingDatastoreService.java: 53) at org.datanucleus.store.appengine.DatastorePersistenceHandler.get(DatastorePersistenceHandler.java: 94) at org.datanucleus.store.appengine.DatastorePersistenceHandler.get(DatastorePersistenceHandler.java: 106) at org.datanucleus.store.appengine.DatastorePersistenceHandler.fetchObject(DatastorePersistenceHandler.java: 464) at org.datanucleus.state.JDOStateManagerImpl.loadUnloadedFieldsInFetchPlan(JDOStateManagerImpl.java: 1627) at org.datanucleus.state.JDOStateManagerImpl.loadFieldsInFetchPlan(JDOStateManagerImpl.java: 1603) at org.datanucleus.ObjectManagerImpl.performDetachAllOnCommitPreparation(ObjectManagerImpl.java: 3192) at org.datanucleus.ObjectManagerImpl.preCommit(ObjectManagerImpl.java: 2931) at org.datanucleus.TransactionImpl.internalPreCommit(TransactionImpl.java: 369) at org.datanucleus.TransactionImpl.commit(TransactionImpl.java:256) at org.datanucleus.jpa.EntityTransactionImpl.commit(EntityTransactionImpl.java: 104) at org.datanucleus.store.appengine.jpa.DatastoreEntityTransactionImpl.commit(DatastoreEntityTransactionImpl.java: 55) at name.kenyth.playtweets.service.Tx.run(Tx.java:39) at name.kenyth.playtweets.web.controller.TwitterApiController.persistStatus(TwitterApiController.java: 309) at name.kenyth.playtweets.web.controller.TwitterApiController.processStatusesForWebCall(TwitterApiController.java: 271) at name.kenyth.playtweets.web.controller.TwitterApiController.getHomeTimelineUpdates_aroundBody0(TwitterApiController.java: 247) at name.kenyth.playtweets.web.controller.TwitterApiController $AjcClosure1.run(TwitterApiController.java:1) at name.kenyth.playtweets.web.refine.AuthenticationEnforcement.ajc $around$name_kenyth_playtweets_web_refine_AuthenticationEnforcement $2$439820b7proceed(AuthenticationEnforcement.aj:1) at name.kenyth.playtweets.web.refine.AuthenticationEnforcement.ajc $around$name_kenyth_playtweets_web_refine_AuthenticationEnforcement $2$439820b7(AuthenticationEnforcement.aj:168) at name.kenyth.playtweets.web.controller.TwitterApiController.getHomeTimelineUpdates(TwitterApiController.java: 129) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown 
Source) at java.lang.reflect.Method.invoke(Method.java:43) at org.springframework.web.bind.annotation.support.HandlerMethodInvoker.doInvokeMethod(HandlerMethodInvoker.java: 710) at org.springframework.web.bind.annotation.support.HandlerMethodInvoker.invokeHandlerMethod(HandlerMethodInvoker.java: 167) at org.springframework.web.servlet.mvc.annotation.AnnotationMethodHandlerAdapter.invokeHandlerMethod(AnnotationMethodHandlerAdapter.java: 414) at org.springframework.web.servlet.mvc.annotation.AnnotationMethodHandlerAdapter.handle(AnnotationMethodHandlerAdapter.java: 402) at org.springframework.web.servlet.DispatcherServlet.doDispatch(DispatcherServlet.java: 771) at org.springframework.web.servlet.DispatcherServlet.doService(DispatcherServlet.java: 716) at org.springframework.web.servlet.FrameworkServlet.processRequest(FrameworkServlet.java: 647) at org.springframework.web.servlet.FrameworkServlet.doGet(FrameworkServlet.java: 552) at javax.servlet.http.HttpServlet.service(HttpServlet.java:693) at javax.servlet.http.HttpServlet.service(HttpServlet.java:806) at org.mortbay.jetty.servlet.ServletHolder.handle(ServletHolder.java: 511) at org.mortbay.jetty.servlet.ServletHandler.handle(ServletHandler.java: 390) at org.mortbay.jetty.security.SecurityHandler.handle(SecurityHandler.java: 216) at org.mortbay.jetty.servlet.SessionHandler.handle(SessionHandler.java: 182) at org.mortbay.jetty.handler.ContextHandler.handle(ContextHandler.java: 765) at org.mortbay.jetty.webapp.WebAppContext.handle(WebAppContext.java: 418) at org.mortbay.jetty.servlet.Dispatcher.forward(Dispatcher.java:327) at org.mortbay.jetty.servlet.Dispatcher.forward(Dispatcher.java:126) at org.tuckey.web.filters.urlrewrite.NormalRewrittenUrl.doRewrite(NormalRewrittenUrl.java: 195) at org.tuckey.web.filters.urlrewrite.RuleChain.handleRewrite(RuleChain.java: 159) at org.tuckey.web.filters.urlrewrite.RuleChain.doRules(RuleChain.java: 141) at org.tuckey.web.filters.urlrewrite.UrlRewriter.processRequest(UrlRewriter.java: 90) at org.tuckey.web.filters.urlrewrite.UrlRewriteFilter.doFilter(UrlRewriteFilter.java: 417) at org.mortbay.jetty.servlet.ServletHandler $CachedChain.doFilter(ServletHandler.java:1157) at org.springframework.web.filter.HiddenHttpMethodFilter.doFilterInternal(HiddenHttpMethodFilter.java: 71) at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java: 76) at org.mortbay.jetty.servlet.ServletHandler $CachedChain.doFilter(ServletHandler.java:1157) at org.springframework.web.filter.CharacterEncodingFilter.doFilterInternal(CharacterEncodingFilter.java: 88) at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java: 76) at org.mortbay.jetty.servlet.ServletHandler $CachedChain.doFilter(ServletHandler.java:1157) at com.google.apphosting.utils.servlet.ParseBlobUploadFilter.doFilter(ParseBlobUploadFilter.java: 97) at org.mortbay.jetty.servlet.ServletHandler $CachedChain.doFilter(ServletHandler.java:1157) at com.google.apphosting.runtime.jetty.SaveSessionFilter.doFilter(SaveSessionFilter.java: 35) at org.mortbay.jetty.servlet.ServletHandler $CachedChain.doFilter(ServletHandler.java:1157) at com.google.apphosting.utils.servlet.TransactionCleanupFilter.doFilter(TransactionCleanupFilter.java: 43) at org.mortbay.jetty.servlet.ServletHandler $CachedChain.doFilter(ServletHandler.java:1157) at org.mortbay.jetty.servlet.ServletHandler.handle(ServletHandler.java: 388) at org.mortbay.jetty.security.SecurityHandler.handle(SecurityHandler.java: 216) at 
org.mortbay.jetty.servlet.SessionHandler.handle(SessionHandler.java: 182) at org.mortbay.jetty.handler.ContextHandler.handle(ContextHandler.java: 765) at org.mortbay.jetty.webapp.WebAppContext.handle(WebAppContext.java: 418) at com.google.apphosting.runtime.jetty.AppVersionHandlerMap.handle(AppVersionHandlerMap.java: 238) at org.mortbay.jetty.handler.HandlerWrapper.handle(HandlerWrapper.java: 152) at org.mortbay.jetty.Server.handle(Server.java:326) at org.mortbay.jetty.HttpConnection.handleRequest(HttpConnection.java: 542) at org.mortbay.jetty.HttpConnection $RequestHandler.headerComplete(HttpConnection.java:923) at com.google.apphosting.runtime.jetty.RpcRequestParser.parseAvailable(RpcRequestParser.java: 76) at org.mortbay.jetty.HttpConnection.handle(HttpConnection.java:404) at com.google.apphosting.runtime.jetty.JettyServletEngineAdapter.serviceRequest(JettyServletEngineAdapter.java: 135) at com.google.apphosting.runtime.JavaRuntime.handleRequest(JavaRuntime.java: 250) at com.google.apphosting.base.RuntimePb$EvaluationRuntime $6.handleBlockingRequest(RuntimePb.java:5838) at com.google.apphosting.base.RuntimePb$EvaluationRuntime $6.handleBlockingRequest(RuntimePb.java:5836) at com.google.net.rpc.impl.BlockingApplicationHandler.handleRequest(BlockingApplicationHandler.java: 24) at com.google.net.rpc.impl.RpcUtil.runRpcInApplication

    Read the article

  • Save all music files in a VLC xspf playlist to another folder

    - by Parto
    I have a VLC playlist (.xspf) of over 100 songs scattered all over my computer. I'm looking for a way to save this playlist and all its songs to another folder - a flash drive, an external drive, or just a different location on my computer. How can I do this? EDIT: The xspf playlist is XML and has the following format: <?xml version="1.0" encoding="UTF-8"?> <playlist xmlns="http://xspf.org/ns/0/" xmlns:vlc="http://www.videolan.org/vlc/playlist/ns/0/" version="1"> <title>Playlist</title> <trackList> <track> <location>file:///home/subroot/Music/3%20Days%20Grace%20-%20Wake%20Up.mp3</location> <title>Wake Up</title> <creator>3 Days Grace</creator> <album>Three Days Grace</album> <trackNum>10</trackNum> <annotation> </annotation> <duration>206036</duration> <extension application="http://www.videolan.org/vlc/playlist/0"> <vlc:id>0</vlc:id> </extension> </track> . . [Many more tracks here] . </trackList> <extension application="http://www.videolan.org/vlc/playlist/0"> <vlc:item tid="0"/> . . [Other id's here] . </extension> </playlist>
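
    For illustration only, a minimal Python sketch of one way to do this: parse the .xspf, resolve each file:// location, and copy the referenced files into a target folder. The playlist and destination paths are placeholders, and it assumes every track location is a local, percent-encoded file URI like the one shown above.

        import shutil
        import xml.etree.ElementTree as ET
        from pathlib import Path
        from urllib.parse import unquote, urlparse

        PLAYLIST = Path.home() / "playlist.xspf"   # hypothetical path to the playlist
        DEST = Path("/media/usb/music")            # hypothetical destination folder
        NS = {"xspf": "http://xspf.org/ns/0/"}     # default namespace of the xspf format

        DEST.mkdir(parents=True, exist_ok=True)
        tree = ET.parse(PLAYLIST)
        for location in tree.getroot().findall(".//xspf:track/xspf:location", NS):
            # e.g. file:///home/subroot/Music/3%20Days%20Grace%20-%20Wake%20Up.mp3
            src = Path(unquote(urlparse(location.text).path))
            if src.is_file():
                shutil.copy2(src, DEST / src.name)  # copy2 also preserves timestamps
            else:
                print("missing:", src)

    A copy of the playlist itself would still need its <location> elements rewritten to point at the new folder, which the same ElementTree pass could do before saving.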

    Read the article

  • What exactly are Link Relation Values?

    - by bckpwrld
    From REST in Practice: Hypermedia and Systems Architecture: For computer-to-computer interactions, we advertise protocol information by embedding links in representations, much as we do with the human Web. To describe a link's purpose, we annotate it. Annotations indicate what the linked resource means to the current resource: “status of your coffee order”, “payment”, and so on. We call such annotated links hypermedia controls, reflecting their enhanced capabilities over raw URIs. ... link relation values, which describe the roles of linked resources ... Link relation values help consumers understand why they might want to activate a hypermedia control. They do so by indicating the role of the linked resource in the context of the current representation. I interpret the above quotes as saying that a hypermedia control contains both a link to a resource and an annotation describing the role of the linked resource in the context of the current representation, and that we call this annotation (which describes the role of the linked resource) a link relation value. Is my assumption correct, or does the term link relation value actually describe something different? Thank you
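
    To make this concrete, here is a small illustrative sketch (hypothetical URIs and field names, loosely in the spirit of the book's coffee-shop example) of a representation whose hypermedia controls carry link relation values:

        # Each hypermedia control pairs a target URI with a "rel" value that names
        # the role of the linked resource relative to this representation.
        order_representation = {
            "order": {"drink": "latte", "status": "awaiting-payment"},
            "links": [
                {"rel": "self", "href": "https://restbucks.example/order/1234"},
                {"rel": "payment", "href": "https://restbucks.example/payment/order/1234"},
            ],
        }

        # A consumer picks a control by its rel value rather than by inspecting URIs.
        payment = next(link for link in order_representation["links"]
                       if link["rel"] == "payment")
        print(payment["href"])

    In these terms, "payment" is the link relation value and the whole {rel, href} pair is the hypermedia control, which lines up with the interpretation proposed in the question.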

    Read the article

  • Running lmgrd on ubuntu 14.04 LTS

    - by SumanBhatR
    I have installed Xilinx 14.7 in ubuntu 14.04 LTS machine(i386 - 64bit). But I am unable to run lmgrd (for starting the license server). When I googled this problem, I found that lsb-core package needs to be installed. But the package is having many dependencies, I want to know how to install lsb-core package with all the necessary dependencies. Thanks for the help On running sudo apt-get install lsb-core I got the following output Reading package lists... Done Building dependency tree Reading state information... Done Package lsb-core is not available, but is referred to by another package. This may mean that the package is missing, has been obsoleted, or is only available from another source E: Package 'lsb-core' has no installation candidate So I downloaded lsb-core package from http://packages.ubuntu.com/trusty/misc/lsb-core site and used "sudo dpkg -i ./lsb-core_4.1+Debian11ubuntu6_i386.deb" to install it By doing it, I got the following output Selecting previously unselected package lsb-core. (Reading database ... 163205 files and directories currently installed.) Preparing to unpack .../lsb-core_4.1+Debian11ubuntu6_i386.deb ... Unpacking lsb-core (4.1+Debian11ubuntu6) ... dpkg: dependency problems prevent configuration of lsb-core: lsb-core depends on libc6 ( 2.3.5). lsb-core depends on libz1. lsb-core depends on libncurses5. lsb-core depends on libpam0g. lsb-core depends on lsb-invalid-mta (= 4.1+Debian11ubuntu6) | mail-transport-agent. lsb-core depends on at. lsb-core depends on binutils. lsb-core depends on cron | cron-daemon. lsb-core depends on libc6-dev | libc-dev. lsb-core depends on locales. lsb-core depends on m4. lsb-core depends on mailx | mailutils. lsb-core depends on ncurses-term. lsb-core depends on pax. lsb-core depends on psmisc. lsb-core depends on alien (= 8.36). lsb-core depends on python3. lsb-core depends on lsb-security (= 4.1+Debian11ubuntu6). lsb-core depends on time. dpkg: error processing package lsb-core (--install): dependency problems - leaving unconfigured Processing triggers for man-db (2.6.7.1-1) ... Errors were encountered while processing: lsb-core So I want to know how to install lsb-core package with all the necessary dependencies in one go. Thanks for the help

    Read the article

  • Parallelism in .NET – Part 9, Configuration in PLINQ and TPL

    - by Reed
    Parallel LINQ and the Task Parallel Library contain many options for configuration. Although the default configuration options are often ideal, there are times when customizing the behavior is desirable. Both frameworks provide full configuration support. When working with Data Parallelism, there is one primary configuration option we often need to control – the number of threads we want the system to use when parallelizing our routine. By default, PLINQ and the TPL both use the ThreadPool to schedule tasks. Given the major improvements in the ThreadPool in CLR 4, this default behavior is often ideal. However, there are times that the default behavior is not appropriate. For example, if you are working on multiple threads simultaneously, and want to schedule parallel operations from within both threads, you might want to consider restricting each parallel operation to using a subset of the processing cores of the system. Not doing this might over-parallelize your routine, which leads to inefficiencies from having too many context switches. In the Task Parallel Library, configuration is handled via the ParallelOptions class. All of the methods of the Parallel class have an overload which accepts a ParallelOptions argument. We configure the Parallel class by setting the ParallelOptions.MaxDegreeOfParallelism property. For example, let’s revisit one of the simple data parallel examples from Part 2: Parallel.For(0, pixelData.GetUpperBound(0), row => { for (int col=0; col < pixelData.GetUpperBound(1); ++col) { pixelData[row, col] = AdjustContrast(pixelData[row, col], minPixel, maxPixel); } }); Here, we’re looping through an image, and calling a method on each pixel in the image. If this was being done on a separate thread, and we knew another thread within our system was going to be doing a similar operation, we likely would want to restrict this to using half of the cores on the system. This could be accomplished easily by doing: var options = new ParallelOptions(); options.MaxDegreeOfParallelism = Math.Max(Environment.ProcessorCount / 2, 1); Parallel.For(0, pixelData.GetUpperBound(0), options, row => { for (int col=0; col < pixelData.GetUpperBound(1); ++col) { pixelData[row, col] = AdjustContrast(pixelData[row, col], minPixel, maxPixel); } }); Now, we’re restricting this routine to using no more than half the cores in our system. Note that I included a check to prevent a single core system from supplying zero; without this check, we’d potentially cause an exception. I also did not hard code a specific value for the MaxDegreeOfParallelism property. One of our goals when parallelizing a routine is allowing it to scale on better hardware. Specifying a hard-coded value would contradict that goal. Parallel LINQ also supports configuration, and in fact, has quite a few more options for configuring the system.
    The main configuration option we most often need is the same as our TPL option: we need to supply the maximum number of processing threads. In PLINQ, this is done via a new extension method on ParallelQuery<T>: ParallelEnumerable.WithDegreeOfParallelism. Let’s revisit our declarative data parallelism sample from Part 6: double min = collection.AsParallel().Min(item => item.PerformComputation()); Here, we’re performing a computation on each element in the collection, and saving the minimum value of this operation. If we wanted to restrict this to a limited number of threads, we would add our new extension method: int maxThreads = Math.Max(Environment.ProcessorCount / 2, 1); double min = collection .AsParallel() .WithDegreeOfParallelism(maxThreads) .Min(item => item.PerformComputation()); This automatically restricts the PLINQ query to half of the threads on the system. PLINQ provides some additional configuration options. By default, PLINQ will occasionally revert to processing a query serially. This occurs because many queries, if parallelized, typically cause an overall slowdown compared to a serial processing equivalent. By analyzing the “shape” of the query, PLINQ often decides to run a query serially instead of in parallel. This can occur for (taken from MSDN): Queries that contain a Select, indexed Where, indexed SelectMany, or ElementAt clause after an ordering or filtering operator that has removed or rearranged original indices. Queries that contain a Take, TakeWhile, Skip, SkipWhile operator and where indices in the source sequence are not in the original order. Queries that contain Zip or SequenceEquals, unless one of the data sources has an originally ordered index and the other data source is indexable (i.e. an array or IList(T)). Queries that contain Concat, unless it is applied to indexable data sources. Queries that contain Reverse, unless applied to an indexable data source. If the specific query follows these rules, PLINQ will run the query on a single thread. However, none of these rules look at the specific work being done in the delegates, only at the “shape” of the query. There are cases where running in parallel may still be beneficial, even if the shape is one where it typically parallelizes poorly. In these cases, you can override the default behavior by using the WithExecutionMode extension method. This would be done like so: var reversed = collection .AsParallel() .WithExecutionMode(ParallelExecutionMode.ForceParallelism) .Select(i => i.PerformComputation()) .Reverse(); Here, the default behavior would be to not parallelize the query unless collection implemented IList<T>. We can force this to run in parallel by adding the WithExecutionMode extension method in the method chain. Finally, PLINQ has the ability to configure how results are returned. When a query is filtering or selecting an input collection, the results will need to be streamed back into a single IEnumerable<T> result. For example, the method above returns a new, reversed collection. In this case, the processing of the collection will be done in parallel, but the results need to be streamed back to the caller serially, so they can be enumerated on a single thread. This streaming introduces overhead. IEnumerable<T> isn’t designed with thread safety in mind, so the system needs to handle merging the parallel processes back into a single stream, which introduces synchronization issues.
    There are two extremes of how this could be accomplished, but both extremes have disadvantages. The system could watch each thread, and whenever a thread produces a result, take that result and send it back to the caller. This would mean that the calling thread would have access to the data as soon as data is available, which is the benefit of this approach. However, it also means that every item is introducing synchronization overhead, since each item needs to be merged individually. On the other extreme, the system could wait until all of the results from all of the threads were ready, then push all of the results back to the calling thread in one shot. The advantage here is that the least amount of synchronization is added to the system, which means the query will, on a whole, run the fastest. However, the calling thread will have to wait for all elements to be processed, so this could introduce a long delay between when a parallel query begins and when results are returned. The default behavior in PLINQ is actually between these two extremes. By default, PLINQ maintains an internal buffer, and chooses an optimal buffer size to maintain. Query results are accumulated into the buffer, then returned in the IEnumerable<T> result in chunks. This provides reasonably fast access to the results, as well as good overall throughput, in most scenarios. However, if we know the nature of our algorithm, we may decide we would prefer one of the other extremes. This can be done by using the WithMergeOptions extension method. For example, if we know that our PerformComputation() routine is very slow, but also variable in runtime, we may want to retrieve results as they are available, with no buffering. This can be done by changing our above routine to: var reversed = collection .AsParallel() .WithExecutionMode(ParallelExecutionMode.ForceParallelism) .WithMergeOptions(ParallelMergeOptions.NotBuffered) .Select(i => i.PerformComputation()) .Reverse(); On the other hand, if we are already on a background thread, and we want to allow the system to maximize its speed, we might want to allow the system to fully buffer the results: var reversed = collection .AsParallel() .WithExecutionMode(ParallelExecutionMode.ForceParallelism) .WithMergeOptions(ParallelMergeOptions.FullyBuffered) .Select(i => i.PerformComputation()) .Reverse(); Notice, also, that you can specify multiple configuration options in a parallel query. By chaining these extension methods together, we generate a query that will always run in parallel, and will always complete before making the results available in our IEnumerable<T>.
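
    The capping idea is not specific to .NET. As a loose analogy only (this is neither PLINQ nor the TPL), a minimal Python sketch that restricts a data-parallel map to at most half the machine's cores; perform_computation and the input range are placeholders:

        import os
        from concurrent.futures import ProcessPoolExecutor

        def perform_computation(item):
            return item * item  # placeholder for real per-item work

        if __name__ == "__main__":
            # Mirror the Math.Max(Environment.ProcessorCount / 2, 1) guard above:
            # at most half the cores, but never fewer than one worker process.
            max_workers = max((os.cpu_count() or 1) // 2, 1)
            with ProcessPoolExecutor(max_workers=max_workers) as pool:
                print(min(pool.map(perform_computation, range(1000))))

    As in the C# samples, hard-coding the worker count would defeat scaling on better hardware, so it is derived from the machine at run time.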

    Read the article

  • E: mkinitramfs failure cpio 141 gzip 1

    - by Nagaraj Shindagi
    I'm using Ubuntu 12.04 LTS with Dell power-edge R720 server, facing the problem when I apt-get -f install Reading package lists... Done Building dependency tree Reading state information... Done 0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded. 2 not fully installed or removed. After this operation, 0 B of additional disk space will be used. Setting up linux-image-3.2.0-37-generic-pae (3.2.0-37.58) ... Running depmod. update-initramfs: deferring update (hook will be called later) The link /initrd.img is a dangling linkto /boot/initrd.img-3.2.0-37-generic-pae Examining /etc/kernel/postinst.d. run-parts: executing /etc/kernel/postinst.d/initramfs-tools 3.2.0-37-generic-pae /boot/vmlinuz-3.2.0-37-generic-pae update-initramfs: Generating /boot/initrd.img-3.2.0-37-generic-pae gzip: stdout: No space left on device E: mkinitramfs failure cpio 141 gzip 1 update-initramfs: failed for /boot/initrd.img-3.2.0-37-generic-pae with 1. run-parts: /etc/kernel/postinst.d/initramfs-tools exited with return code 1 Failed to process /etc/kernel/postinst.d at /var/lib/dpkg/info/linux-image-3.2.0 -37-generic-pae.postinst line 1010. dpkg: error processing linux-image-3.2.0-37-generic-pae (--configure): subprocess installed post-installation script returned error exit status 2 dpkg: dependency problems prevent configuration of linux-image-generic-pae: linux-image-generic-pae depends on linux-image-3.2.0-37-generic-pae; however: Package linux-image-3.2.0-37-generic-pae is not configured yet. dpkg: error processing linux-image-generic-pae (--configure): dependency problems - leaving unconfigured No apport report written because the error message indicates its a followup erro r from a previous failure. Errors were encountered while processing: linux-image-3.2.0-37-generic-pae linux-image-generic-pae E: Sub-process /usr/bin/dpkg returned an error code (1) ------------ even i tried with apt-get clean apt-get remove apt-get autoremove apt-get purge there is no difference it will show the same error message as above, even i checked the disk space ----------- Filesystem 1K-blocks Used Available Use% Mounted on /dev/sda6 24030076 612456 22196964 3% / udev 16536644 4 16536640 1% /dev tmpfs 6618884 1164 6617720 1% /run none 5120 0 5120 0% /run/lock none 16547208 72 16547136 1% /run/shm cgroup 16547208 0 16547208 0% /sys/fs/cgroup /dev/sda1 93207 75034 13361 85% /boot /dev/sda10 9611492 1096076 8027176 13% /tmp /dev/sda12 9611492 226340 8896912 3% /opt /dev/sda13 9611492 152516 8970736 2% /srv /dev/sda7 9611492 592208 8531044 7% /home /dev/sda8 9611492 2656736 6466516 30% /usr /dev/sda9 9611492 696468 8426784 8% /var /dev/sda14 961237336 134563516 777845764 15% /usr/data /dev/sda15 618991384 84498388 503050052 15% /usr/data1 /dev/sda11 9611492 152616 8970636 2% /usr/local --------------- is there any problem on allotting the space to the partiations please let me know the solution its on urgent please help me on this issue regards
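
    The gzip failure points at free space on /boot rather than at apt itself; as an illustration only (not a fix), a short Python check of a few of the mount points from the df output above, which would flag the roughly 13 MB left on /boot, typically less than a generated initramfs image needs:

        import shutil

        # Mount points taken from the df output in the question.
        for mount in ["/", "/boot", "/tmp", "/var", "/home"]:
            usage = shutil.disk_usage(mount)
            print(f"{mount:6} {usage.free / 1024 / 1024:10.1f} MB free")

    Whatever the eventual fix, the initramfs generation shown above cannot complete until /boot has more free space.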

    Read the article

  • Fundtech’s Global PAYplus Achieves Oracle Exadata and Oracle Exalogic Optimized Status

    - by Javier Puerta
    Fundtech, a leader in global transaction banking solutions, has announced  that Global PAYplus® – Services Platform (GPP-SP) version 4 has achieved Oracle Exadata Optimized and Oracle Exalogic Optimized status. (Read full announcement here) "GPP-SP testing was done in the third quarter of 2012 in the Oracle Exastack Lab located in the Oracle Solution Center in Linlithgow, Scotland. It showed that an integrated solution can result in a highly streamlined installation, enabling reduced cost of evaluation, acquisition and ownership. Highlights of the transaction processing test are as follows: 9.3 million Mass Payments per hour 5.7 million Single Payments per hour The test found that the optimized combination of GPP-SP running on Oracle Exadata Database Machine and Oracle Exalogic Elastic Cloud is able to increase transactions per second (TPS) output per core, and able to reduce total cost of ownership (TCO). The volumes achieved were using only 25% of Exadata/Exalogic processing capacity".

    Read the article

  • Records not being saved to core data sqlite file

    - by esd100
    I'm a complete newbie when it comes to iOS programming and much less Core Data. It's rather non-intuitive for me, as I really came into my own with programming with MATLAB, which I guess is more of a 'scripting' language. At any rate, my problem is that I had no idea what I had to do to create a database for my application. So I read a little bit and thought I had to create a SQL database of my stuff and then import it. Long story short, I created a SQLite db and I want to use the work I have already done to import stuff into my CoreData database. I tried exporting to comma-delimited files and xml files and reading those in, but I didn't like it and it seemed like an extra step that I shouldn't need to do. So, I imported the SQLite database into my resources and added the sqlite framework. I have my core data model setup and it is setting up the SQLite database for the model correctly in the background. When I run through my program to add objects to my entities, it seems to work and I can even fetch results afterward. However, when I inspect the Core Data Database SQLite file, no records have been saved. How is it possible for it to fetch results but not save them to the database? - (BOOL)application:(UIApplication *)application didFinishLaunchingWithOptions:(NSDictionary *)launchOptions{ //load in the path for resources NSString *paths = [[NSBundle mainBundle] resourcePath]; NSString *databaseName = @"histology.sqlite"; NSString *databasePath = [paths stringByAppendingPathComponent:databaseName]; [self createDatabase:databasePath ]; NSError *error; if ([[self managedObjectContext] save:&error]) { NSLog(@"Whoops, couldn't save: %@", [error localizedDescription]); } // Test listing all CELLS from the store NSFetchRequest *fetchRequest = [[NSFetchRequest alloc] init]; NSEntityDescription *entityMO = [NSEntityDescription entityForName:@"CELL" inManagedObjectContext:[self managedObjectContext]]; [fetchRequest setEntity:entityMO]; NSArray *fetchedObjects = [[self managedObjectContext] executeFetchRequest:fetchRequest error:&error]; for (CELL *cellName in fetchedObjects) { //NSLog(@"cellName: %@", cellName); } -(void) createDatabase:databasePath { NSLog(@"The createDatabase function was entered."); NSLog(@"The databasePath is %@ ",[databasePath description]); // Setup the database object sqlite3 *histoDatabase; // Open the database from filessytem if(sqlite3_open([databasePath UTF8String], &histoDatabase) == SQLITE_OK) { NSLog(@"The database was opened"); // Setup the SQL Statement and compile it for faster access const char *sqlStatement = "SELECT * FROM CELL"; sqlite3_stmt *compiledStatement; if(sqlite3_prepare_v2(histoDatabase, sqlStatement, -1, &compiledStatement, NULL) != SQLITE_OK) { NSAssert1(0, @"Error while creating add statement. 
'%s'", sqlite3_errmsg(histoDatabase)); } if(sqlite3_prepare_v2(histoDatabase, sqlStatement, -1, &compiledStatement, NULL) == SQLITE_OK) { // Loop through the results and add them to cell MO array while(sqlite3_step(compiledStatement) == SQLITE_ROW) { CELL *cellMO = [NSEntityDescription insertNewObjectForEntityForName:@"CELL" inManagedObjectContext:[self managedObjectContext]]; if (sqlite3_column_type(compiledStatement, 0) != SQLITE_NULL) { cellMO.cellName = [NSString stringWithUTF8String:(char *)sqlite3_column_text(compiledStatement, 0)]; } else { cellMO.cellName = @"undefined"; } if (sqlite3_column_type(compiledStatement, 1) != SQLITE_NULL) { cellMO.cellDescription = [NSString stringWithUTF8String:(char *)sqlite3_column_text(compiledStatement, 1)]; } else { cellMO.cellDescription = @"undefined"; } NSLog(@"The contents of NSString *cellName = %@",[cellMO.cellName description]); } } // Release the compiled statement from memory sqlite3_finalize(compiledStatement); } sqlite3_close(histoDatabase); } I have a feeling that it has something to do with the timing of opening/closing both of the databases? Attached I have some SQL debugging output to the terminal 2012-05-28 16:03:39.556 MedPix[34751:fb03] The createDatabase function was entered. 2012-05-28 16:03:39.557 MedPix[34751:fb03] The databasePath is /Users/jack/Library/Application Support/iPhone Simulator/5.1/Applications/A6B2A79D-BA93-4E24-9291-5B7948A3CDF4/MedPix.app/histology.sqlite 2012-05-28 16:03:39.559 MedPix[34751:fb03] The database was opened 2012-05-28 16:03:39.560 MedPix[34751:fb03] The database was prepared 2012-05-28 16:03:39.575 MedPix[34751:fb03] CoreData: annotation: Connecting to sqlite database file at "/Users/jack/Library/Application Support/iPhone Simulator/5.1/Applications/A6B2A79D-BA93-4E24-9291-5B7948A3CDF4/Documents/MedPix.sqlite" 2012-05-28 16:03:39.576 MedPix[34751:fb03] CoreData: annotation: creating schema. 2012-05-28 16:03:39.577 MedPix[34751:fb03] CoreData: sql: pragma page_size=4096 2012-05-28 16:03:39.578 MedPix[34751:fb03] CoreData: sql: pragma auto_vacuum=2 2012-05-28 16:03:39.630 MedPix[34751:fb03] CoreData: sql: BEGIN EXCLUSIVE 2012-05-28 16:03:39.631 MedPix[34751:fb03] CoreData: sql: SELECT TBL_NAME FROM SQLITE_MASTER WHERE TBL_NAME = 'Z_METADATA' 2012-05-28 16:03:39.632 MedPix[34751:fb03] CoreData: sql: CREATE TABLE ZCELL ( Z_PK INTEGER PRIMARY KEY, Z_ENT INTEGER, Z_OPT INTEGER, ZCELLDESCRIPTION VARCHAR, ZCELLNAME VARCHAR ) ... 2012-05-28 16:03:39.669 MedPix[34751:fb03] CoreData: annotation: Creating primary key table. 2012-05-28 16:03:39.671 MedPix[34751:fb03] CoreData: sql: CREATE TABLE Z_PRIMARYKEY (Z_ENT INTEGER PRIMARY KEY, Z_NAME VARCHAR, Z_SUPER INTEGER, Z_MAX INTEGER) 2012-05-28 16:03:39.672 MedPix[34751:fb03] CoreData: sql: INSERT INTO Z_PRIMARYKEY(Z_ENT, Z_NAME, Z_SUPER, Z_MAX) VALUES(1, 'CELL', 0, 0) ... 2012-05-28 16:03:39.701 MedPix[34751:fb03] CoreData: sql: CREATE TABLE Z_METADATA (Z_VERSION INTEGER PRIMARY KEY, Z_UUID VARCHAR(255), Z_PLIST BLOB) 2012-05-28 16:03:39.702 MedPix[34751:fb03] CoreData: sql: SELECT TBL_NAME FROM SQLITE_MASTER WHERE TBL_NAME = 'Z_METADATA' 2012-05-28 16:03:39.703 MedPix[34751:fb03] CoreData: sql: DELETE FROM Z_METADATA WHERE Z_VERSION = ? 2012-05-28 16:03:39.704 MedPix[34751:fb03] CoreData: sql: INSERT INTO Z_METADATA (Z_VERSION, Z_UUID, Z_PLIST) VALUES (?, ?, ?) 
2012-05-28 16:03:39.705 MedPix[34751:fb03] CoreData: sql: COMMIT 2012-05-28 16:03:39.710 MedPix[34751:fb03] CoreData: sql: pragma cache_size=200 2012-05-28 16:03:39.711 MedPix[34751:fb03] CoreData: sql: SELECT Z_VERSION, Z_UUID, Z_PLIST FROM Z_METADATA 2012-05-28 16:03:39.712 MedPix[34751:fb03] The contents of NSString *cellName = Beta Cell 2012-05-28 16:03:39.712 MedPix[34751:fb03] The contents of NSString *cellName = Gastric Chief Cell ... 2012-05-28 16:03:39.714 MedPix[34751:fb03] The database was prepared 2012-05-28 16:03:39.764 MedPix[34751:fb03] The createDatabase function has finished. Now fetching. 2012-05-28 16:03:39.765 MedPix[34751:fb03] CoreData: sql: SELECT 0, t0.Z_PK, t0.Z_OPT, t0.ZCELLDESCRIPTION, t0.ZCELLNAME FROM ZCELL t0 2012-05-28 16:03:39.766 MedPix[34751:fb03] CoreData: annotation: sql connection fetch time: 0.0008s 2012-05-28 16:03:39.767 MedPix[34751:fb03] CoreData: annotation: total fetch execution time: 0.0016s for 0 rows. 2012-05-28 16:03:39.768 MedPix[34751:fb03] cellName: <CELL: 0x6bbc120> (entity: CELL; id: 0x6bbc160 <x-coredata:///CELL/t57D10DDD-74E2-474F-97EE-E3BD0FF684DA34> ; data: { cellDescription = "S cells are cells which release secretin, found in the jejunum and duodenum. They are stimulated by a drop in pH to 4 or below in the small intestine's lumen. The released secretin will increase the s"; cellName = "S Cell"; organs = ( ); specimens = ( ); systems = ( ); tissues = ( ); }) ... Sections were cut short to abbreviate. But note that the fetch results contain information, but it says that total fetch execution was for "0" rows? How can that be? Any help will be greatly appreciated, especially detailed explanations. :) Thanks.

    Read the article

  • some files just won't install - linux-image-3.5.0-42-generic

    - by jatin
    It gives the following details of the package operation failing after using the sofware updater on 12.10: installArchives() failed: perl: warning: Setting locale failed. perl: warning: Please check that your locale settings: LANGUAGE = (unset), LC_ALL = (unset), LANG = "en_IN.ISO8859-1" are supported and installed on your system. perl: warning: Falling back to the standard locale ("C"). locale: Cannot set LC_CTYPE to default locale: No such file or directory locale: Cannot set LC_MESSAGES to default locale: No such file or directory locale: Cannot set LC_ALL to default locale: No such file or directory Preconfiguring packages ... perl: warning: Setting locale failed. perl: warning: Please check that your locale settings: LANGUAGE = (unset), LC_ALL = (unset), LANG = "en_IN.ISO8859-1" are supported and installed on your system. perl: warning: Falling back to the standard locale ("C"). locale: Cannot set LC_CTYPE to default locale: No such file or directory locale: Cannot set LC_MESSAGES to default locale: No such file or directory locale: Cannot set LC_ALL to default locale: No such file or directory Preconfiguring packages ... perl: warning: Setting locale failed. perl: warning: Please check that your locale settings: LANGUAGE = (unset), LC_ALL = (unset), LANG = "en_IN.ISO8859-1" are supported and installed on your system. perl: warning: Falling back to the standard locale ("C"). locale: Cannot set LC_CTYPE to default locale: No such file or directory locale: Cannot set LC_MESSAGES to default locale: No such file or directory locale: Cannot set LC_ALL to default locale: No such file or directory Preconfiguring packages ... perl: warning: Setting locale failed. perl: warning: Please check that your locale settings: LANGUAGE = (unset), LC_ALL = (unset), LANG = "en_IN.ISO8859-1" are supported and installed on your system. perl: warning: Falling back to the standard locale ("C"). locale: Cannot set LC_CTYPE to default locale: No such file or directory locale: Cannot set LC_MESSAGES to default locale: No such file or directory locale: Cannot set LC_ALL to default locale: No such file or directory Preconfiguring packages ... (Reading database ... (Reading database ... 217566 files and directories currently installed.) Preparing to replace linux-image-3.5.0-36-generic 3.5.0-36.57~precise1 (using .../linux-image-3.5.0-36-generic_3.5.0-36.57_amd64.deb) ... locale: Cannot set LC_CTYPE to default locale: No such file or directory locale: Cannot set LC_MESSAGES to default locale: No such file or directory locale: Cannot set LC_ALL to default locale: No such file or directory Done. Unpacking replacement linux-image-3.5.0-36-generic ... dpkg: error processing /var/cache/apt/archives/linux-image-3.5.0-36-generic_3.5.0-36.57_amd64.deb (--unpack): unable to make backup link of `./boot/System.map-3.5.0-36-generic' before installing new version: Operation not permitted No apport report written because MaxReports is reached already dpkg-deb: error: subprocess paste was killed by signal (Broken pipe) Examining /etc/kernel/postrm.d . run-parts: executing /etc/kernel/postrm.d/initramfs-tools 3.5.0-36-generic /boot/vmlinuz-3.5.0-36-generic run-parts: executing /etc/kernel/postrm.d/zz-update-grub 3.5.0-36-generic /boot/vmlinuz-3.5.0-36-generic Preparing to replace linux-image-3.5.0-42-generic 3.5.0-42.65~precise1 (using .../linux-image-3.5.0-42-generic_3.5.0-42.65_amd64.deb) ... 
locale: Cannot set LC_CTYPE to default locale: No such file or directory locale: Cannot set LC_MESSAGES to default locale: No such file or directory locale: Cannot set LC_ALL to default locale: No such file or directory Done. Unpacking replacement linux-image-3.5.0-42-generic ... dpkg: error processing /var/cache/apt/archives/linux-image-3.5.0-42-generic_3.5.0-42.65_amd64.deb (--unpack): unable to make backup link of `./boot/System.map-3.5.0-42-generic' before installing new version: Operation not permitted No apport report written because MaxReports is reached already Examining /etc/kernel/postrm.d . run-parts: executing /etc/kernel/postrm.d/initramfs-tools 3.5.0-42-generic /boot/vmlinuz-3.5.0-42-generic dpkg-deb: error: subprocess paste was killed by signal (Broken pipe) run-parts: executing /etc/kernel/postrm.d/zz-update-grub 3.5.0-42-generic /boot/vmlinuz-3.5.0-42-generic Preparing to replace memtest86+ 4.20-1.1ubuntu1 (using .../memtest86+_4.20-1.1ubuntu2.1_amd64.deb) ... Unpacking replacement memtest86+ ... dpkg: error processing /var/cache/apt/archives/memtest86+_4.20-1.1ubuntu2.1_amd64.deb (--unpack): unable to make backup link of `./boot/memtest86+.bin' before installing new version: Operation not permitted No apport report written because MaxReports is reached already locale: Cannot set LC_CTYPE to default locale: No such file or directory locale: Cannot set LC_MESSAGES to default locale: No such file or directory locale: Cannot set LC_ALL to default locale: No such file or directory Errors were encountered while processing: /var/cache/apt/archives/linux-image-3.5.0-36-generic_3.5.0-36.57_amd64.deb /var/cache/apt/archives/linux-image-3.5.0-42-generic_3.5.0-42.65_amd64.deb /var/cache/apt/archives/memtest86+_4.20-1.1ubuntu2.1_amd64.deb Error in function: Output of df -h: Filesystem Size Used Avail Use% Mounted on /dev/sda6 38G 6.5G 30G 19% / udev 3.5G 4.0K 3.5G 1% /dev tmpfs 1.5G 924K 1.5G 1% /run none 5.0M 4.0K 5.0M 1% /run/lock none 3.6G 296K 3.6G 1% /run/shm none 100M 64K 100M 1% /run/user /dev/sda1 197M 87M 111M 44% /boot /dev/sda5 243G 971M 230G 1% /home

    Read the article

  • Debian apt dependency mismatch (libc6)

    - by Sean Gordon
    Earlier, I tried to install package via apt-get (cython), but it failed with the Errors were encountered while processing: message, and since then, apt is refusing to install anything. apt-get check output below: root@dix:~# apt-get check Reading package lists... Done Building dependency tree Reading state information... Done You might want to run 'apt-get -f install' to correct these. The following packages have unmet dependencies: libc6 : Depends: libc-bin (= 2.11.3-2) but 2.11.3-4 is installed libc6-dev : Depends: libc6 (= 2.11.3-4) but 2.11.3-2 is installed libc6-i386 : Depends: libc6 (= 2.11.3-4) but 2.11.3-2 is installed E: Unmet dependencies. Try using -f. Apt/aptitude don't seem to be able to fix this dependency issue, and I don't know what to do. Edit: Running apt-get -f install results in no change, and my sources are all squeeze. Running apt-get update then apt-get dist-upgrade show no change either. Edit 2: I went back to try this again in a new terminal and apt-get -f install gives this error: dpkg: error processing /var/cache/apt/archives/libc6_2.11.3-4_amd64.deb (--unpack): subprocess new pre-installation script killed by signal (Aborted) configured to not write apport reports Errors were encountered while processing: /var/cache/apt/archives/libc6_2.11.3-4_amd64.deb E: Sub-process /usr/bin/dpkg returned an error code (1) Edit 3: Using apt-get clean first, then the previous commands, results in the first error again. Using apt-get -f dist-upgrade gives the below. Reading package lists... Building dependency tree... Reading state information... Correcting dependencies... Done The following packages will be upgraded: apache2 apache2-mpm-prefork apache2-utils apache2.2-bin apache2.2-common at automake base-files bind9 bind9-doc bind9-host bind9utils debian-archive-keyring dnsutils dpkg-dev file host initscripts isc-dhcp-client isc-dhcp-common krb5-multidev libapr1 libbind9-60 libc6 libdns69 libdpkg-perl libexpat1 libexpat1-dev libgc1c2 libgssapi-krb5-2 libgssrpc4 libisc62 libisccc60 libisccfg62 libk5crypto3 libkadm5clnt-mit7 libkadm5srv-mit7 libkdb5-4 libkrb5-3 libkrb5-dev libkrb5support0 liblwres60 libmagic1 libmysqlclient16 libnss3-1d libssl-dev libssl0.9.8 libtiff4 libtiff4-dev libtiffxx0c2 libxi6 libxml2 linux-libc-dev lwresd mysql-client-5.1 mysql-common mysql-server mysql-server-5.1 mysql-server-core-5.1 openjdk-6-jre openjdk-6-jre-headless openjdk-6-jre-lib openssh-client openssh-server openssl procps python python-crypto python-minimal sudo sysv-rc sysvinit sysvinit-utils tzdata tzdata-java 75 upgraded, 0 newly installed, 0 to remove and 0 not upgraded. 5 not fully installed or removed. Need to get 0 B/79.9 MB of archives. After this operation, 1,411 kB of additional disk space will be used. (Reading database ... 52241 files and directories currently installed.) Preparing to replace libc6 2.11.3-2 (using .../libc6_2.11.3-4_amd64.deb) ... 
*** stack smashing detected ***: /usr/bin/perl terminated ======= Backtrace: ========= /lib/libc.so.6(__fortify_fail+0x37)[0x7fdaad9b9f87] /lib/libc.so.6(__fortify_fail+0x0)[0x7fdaad9b9f50] /usr/lib/libperl.so.5.10(Perl_yylex+0x5896)[0x7fdaae343346] [0x8e83a0] ======= Memory map: ======== 00400000-00402000 r-xp 00000000 08:01 525338 /usr/bin/perl 00601000-00602000 rw-p 00001000 08:01 525338 /usr/bin/perl 00602000-0091f000 rw-p 00000000 00:00 0 [heap] 7fdaaca54000-7fdaaca6a000 r-xp 00000000 08:01 393818 /lib/libgcc_s.so.1 7fdaaca6a000-7fdaacc69000 ---p 00016000 08:01 393818 /lib/libgcc_s.so.1 7fdaacc69000-7fdaacc6a000 rw-p 00015000 08:01 393818 /lib/libgcc_s.so.1 7fdaacc6a000-7fdaacc6f000 r-xp 00000000 08:01 524949 /usr/lib/perl5/auto/Locale/gettext/gettext.so 7fdaacc6f000-7fdaace6e000 ---p 00005000 08:01 524949 /usr/lib/perl5/auto/Locale/gettext/gettext.so 7fdaace6e000-7fdaace6f000 rw-p 00004000 08:01 524949 /usr/lib/perl5/auto/Locale/gettext/gettext.so 7fdaace6f000-7fdaace79000 r-xp 00000000 08:01 532753 /usr/lib/perl/5.10.1/auto/Encode/Encode.so 7fdaace79000-7fdaad078000 ---p 0000a000 08:01 532753 /usr/lib/perl/5.10.1/auto/Encode/Encode.so 7fdaad078000-7fdaad079000 rw-p 00009000 08:01 532753 /usr/lib/perl/5.10.1/auto/Encode/Encode.so 7fdaad079000-7fdaad07e000 r-xp 00000000 08:01 525444 /usr/lib/perl/5.10.1/auto/IO/IO.so 7fdaad07e000-7fdaad27d000 ---p 00005000 08:01 525444 /usr/lib/perl/5.10.1/auto/IO/IO.so 7fdaad27d000-7fdaad27e000 rw-p 00004000 08:01 525444 /usr/lib/perl/5.10.1/auto/IO/IO.so 7fdaad27e000-7fdaad299000 r-xp 00000000 08:01 525450 /usr/lib/perl/5.10.1/auto/POSIX/POSIX.so 7fdaad299000-7fdaad498000 ---p 0001b000 08:01 525450 /usr/lib/perl/5.10.1/auto/POSIX/POSIX.so 7fdaad498000-7fdaad49b000 rw-p 0001a000 08:01 525450 /usr/lib/perl/5.10.1/auto/POSIX/POSIX.so 7fdaad49b000-7fdaad49e000 r-xp 00000000 08:01 525436 /usr/lib/perl/5.10.1/auto/Fcntl/Fcntl.so 7fdaad49e000-7fdaad69e000 ---p 00003000 08:01 525436 /usr/lib/perl/5.10.1/auto/Fcntl/Fcntl.so 7fdaad69e000-7fdaad69f000 rw-p 00003000 08:01 525436 /usr/lib/perl/5.10.1/auto/Fcntl/Fcntl.so 7fdaad69f000-7fdaad6a7000 r-xp 00000000 08:01 393824 /lib/libcrypt-2.11.3.so 7fdaad6a7000-7fdaad8a6000 ---p 00008000 08:01 393824 /lib/libcrypt-2.11.3.so 7fdaad8a6000-7fdaad8a7000 r--p 00007000 08:01 393824 /lib/libcrypt-2.11.3.so 7fdaad8a7000-7fdaad8a8000 rw-p 00008000 08:01 393824 /lib/libcrypt-2.11.3.so 7fdaad8a8000-7fdaad8d6000 rw-p 00000000 00:00 0 7fdaad8d6000-7fdaada2f000 r-xp 00000000 08:01 393822 /lib/libc-2.11.3.so 7fdaada2f000-7fdaadc2e000 ---p 00159000 08:01 393822 /lib/libc-2.11.3.so 7fdaadc2e000-7fdaadc32000 r--p 00158000 08:01 393822 /lib/libc-2.11.3.so 7fdaadc32000-7fdaadc33000 rw-p 0015c000 08:01 393822 /lib/libc-2.11.3.so 7fdaadc33000-7fdaadc38000 rw-p 00000000 00:00 0 7fdaadc38000-7fdaadc4f000 r-xp 00000000 08:01 393248 /lib/libpthread-2.11.3.so 7fdaadc4f000-7fdaade4e000 ---p 00017000 08:01 393248 /lib/libpthread-2.11.3.so 7fdaade4e000-7fdaade4f000 r--p 00016000 08:01 393248 /lib/libpthread-2.11.3.so 7fdaade4f000-7fdaade50000 rw-p 00017000 08:01 393248 /lib/libpthread-2.11.3.so 7fdaade50000-7fdaade54000 rw-p 00000000 00:00 0 7fdaade54000-7fdaaded4000 r-xp 00000000 08:01 393826 /lib/libm-2.11.3.so 7fdaaded4000-7fdaae0d4000 ---p 00080000 08:01 393826 /lib/libm-2.11.3.so 7fdaae0d4000-7fdaae0d5000 r--p 00080000 08:01 393826 /lib/libm-2.11.3.so 7fdaae0d5000-7fdaae0d6000 rw-p 00081000 08:01 393826 /lib/libm-2.11.3.so 7fdaae0d6000-7fdaae0d8000 r-xp 00000000 08:01 393825 /lib/libdl-2.11.3.so 7fdaae0d8000-7fdaae2d8000 ---p 00002000 
08:01 393825 /lib/libdl-2.11.3.so 7fdaae2d8000-7fdaae2d9000 r--p 00002000 08:01 393825 /lib/libdl-2.11.3.so 7fdaae2d9000-7fdaae2da000 rw-p 00003000 08:01 393825 /lib/libdl-2.11.3.so 7fdaae2da000-7fdaae43f000 r-xp 00000000 08:01 525387 /usr/lib/libperl.so.5.10.1 7fdaae43f000-7fdaae63e000 ---p 00165000 08:01 525387 /usr/lib/libperl.so.5.10.1 7fdaae63e000-7fdaae647000 rw-p 00164000 08:01 525387 /usr/lib/libperl.so.5.10.1 7fdaae647000-7fdaae665000 r-xp 00000000 08:01 393819 /lib/ld-2.11.3.so 7fdaae854000-7fdaae859000 rw-p 00000000 00:00 0 7fdaae862000-7fdaae864000 rw-p 00000000 00:00 0 7fdaae864000-7fdaae865000 r--p 0001d000 08:01 393819 /lib/ld-2.11.3.so 7fdaae865000-7fdaae866000 rw-p 0001e000 08:01 393819 /lib/ld-2.11.3.so 7fdaae866000-7fdaae867000 rw-p 00000000 00:00 0 7fff9616d000-7fff9618e000 rw-p 00000000 00:00 0 [stack] 7fff961ff000-7fff96200000 r-xp 00000000 00:00 0 [vdso] ffffffffff600000-ffffffffff601000 r--p 00000000 00:00 0 [vsyscall] dpkg: error processing /var/cache/apt/archives/libc6_2.11.3-4_amd64.deb (--unpack): subprocess new pre-installation script killed by signal (Aborted) Errors were encountered while processing: /var/cache/apt/archives/libc6_2.11.3-4_amd64.deb

    Read the article

  • SQL SERVER – Parsing SSIS Catalog Messages – Notes from the Field #030

    - by Pinal Dave
    [Note from Pinal]: This is a new episode of Notes from the Field series. SQL Server Integration Service (SSIS) is one of the most key essential part of the entire Business Intelligence (BI) story. It is a platform for data integration and workflow applications. The tool may also be used to automate maintenance of SQL Server databases and updates to multidimensional cube data. In this episode of the Notes from the Field series I requested SSIS Expert Andy Leonard to discuss one of the most interesting concepts of SSIS Catalog Messages. There are plenty of interesting and useful information captured in the SSIS catalog and we will learn together how to explore the same. The SSIS Catalog captures a lot of cool information by default. Here’s a query I use to parse messages from the catalog.operation_messages table in the SSISDB database, where the logged messages are stored. This query is set up to parse a default message transmitted by the Lookup Transformation. It’s one of my favorite messages in the SSIS log because it gives me excellent information when I’m tuning SSIS data flows. The message reads similar to: Data Flow Task:Information: The Lookup processed 4485 rows in the cache. The processing time was 0.015 seconds. The cache used 1376895 bytes of memory. The query: USE SSISDB GO DECLARE @MessageSourceType INT = 60 DECLARE @StartOfIDString VARCHAR(100) = 'The Lookup processed ' DECLARE @ProcessingTimeString VARCHAR(100) = 'The processing time was ' DECLARE @CacheUsedString VARCHAR(100) = 'The cache used ' DECLARE @StartOfIDSearchString VARCHAR(100) = '%' + @StartOfIDString + '%' DECLARE @ProcessingTimeSearchString VARCHAR(100) = '%' + @ProcessingTimeString + '%' DECLARE @CacheUsedSearchString VARCHAR(100) = '%' + @CacheUsedString + '%' SELECT operation_id , SUBSTRING(MESSAGE, (PATINDEX(@StartOfIDSearchString,MESSAGE) + LEN(@StartOfIDString) + 1), ((CHARINDEX(' ', MESSAGE, PATINDEX(@StartOfIDSearchString,MESSAGE) + LEN(@StartOfIDString) + 1)) - (PATINDEX(@StartOfIDSearchString, MESSAGE) + LEN(@StartOfIDString) + 1))) AS LookupRowsCount , SUBSTRING(MESSAGE, (PATINDEX(@ProcessingTimeSearchString,MESSAGE) + LEN(@ProcessingTimeString) + 1), ((CHARINDEX(' ', MESSAGE, PATINDEX(@ProcessingTimeSearchString,MESSAGE) + LEN(@ProcessingTimeString) + 1)) - (PATINDEX(@ProcessingTimeSearchString, MESSAGE) + LEN(@ProcessingTimeString) + 1))) AS LookupProcessingTime , CASE WHEN (CONVERT(numeric(3,3),SUBSTRING(MESSAGE, (PATINDEX(@ProcessingTimeSearchString,MESSAGE) + LEN(@ProcessingTimeString) + 1), ((CHARINDEX(' ', MESSAGE, PATINDEX(@ProcessingTimeSearchString,MESSAGE) + LEN(@ProcessingTimeString) + 1)) - (PATINDEX(@ProcessingTimeSearchString, MESSAGE) + LEN(@ProcessingTimeString) + 1))))) = 0 THEN 0 ELSE CONVERT(bigint,SUBSTRING(MESSAGE, (PATINDEX(@StartOfIDSearchString,MESSAGE) + LEN(@StartOfIDString) + 1), ((CHARINDEX(' ', MESSAGE, PATINDEX(@StartOfIDSearchString,MESSAGE) + LEN(@StartOfIDString) + 1)) - (PATINDEX(@StartOfIDSearchString, MESSAGE) + LEN(@StartOfIDString) + 1)))) / CONVERT(numeric(3,3),SUBSTRING(MESSAGE, (PATINDEX(@ProcessingTimeSearchString,MESSAGE) + LEN(@ProcessingTimeString) + 1), ((CHARINDEX(' ', MESSAGE, PATINDEX(@ProcessingTimeSearchString,MESSAGE) + LEN(@ProcessingTimeString) + 1)) - (PATINDEX(@ProcessingTimeSearchString, MESSAGE) + LEN(@ProcessingTimeString) + 1)))) END AS LookupRowsPerSecond , SUBSTRING(MESSAGE, (PATINDEX(@CacheUsedSearchString,MESSAGE) + LEN(@CacheUsedString) + 1), ((CHARINDEX(' ', MESSAGE, PATINDEX(@CacheUsedSearchString,MESSAGE) + LEN(@CacheUsedString) 
+ 1)) - (PATINDEX(@CacheUsedSearchString, MESSAGE) + LEN(@CacheUsedString) + 1))) AS LookupBytesUsed ,CASE WHEN (CONVERT(bigint,SUBSTRING(MESSAGE, (PATINDEX(@StartOfIDSearchString,MESSAGE) + LEN(@StartOfIDString) + 1), ((CHARINDEX(' ', MESSAGE, PATINDEX(@StartOfIDSearchString,MESSAGE) + LEN(@StartOfIDString) + 1)) - (PATINDEX(@StartOfIDSearchString, MESSAGE) + LEN(@StartOfIDString) + 1)))))= 0 THEN 0 ELSE CONVERT(bigint,SUBSTRING(MESSAGE, (PATINDEX(@CacheUsedSearchString,MESSAGE) + LEN(@CacheUsedString) + 1), ((CHARINDEX(' ', MESSAGE, PATINDEX(@CacheUsedSearchString,MESSAGE) + LEN(@CacheUsedString) + 1)) - (PATINDEX(@CacheUsedSearchString, MESSAGE) + LEN(@CacheUsedString) + 1)))) / CONVERT(bigint,SUBSTRING(MESSAGE, (PATINDEX(@StartOfIDSearchString,MESSAGE) + LEN(@StartOfIDString) + 1), ((CHARINDEX(' ', MESSAGE, PATINDEX(@StartOfIDSearchString,MESSAGE) + LEN(@StartOfIDString) + 1)) - (PATINDEX(@StartOfIDSearchString, MESSAGE) + LEN(@StartOfIDString) + 1)))) END AS LookupBytesPerRow FROM [catalog].[operation_messages] WHERE message_source_type = @MessageSourceType AND MESSAGE LIKE @StartOfIDSearchString GO Note that you have to set some parameter values: @MessageSourceType [int] – represents the message source type value from the following results: Value     Description 10           Entry APIs, such as T-SQL and CLR Stored procedures 20           External process used to run package (ISServerExec.exe) 30           Package-level objects 40           Control Flow tasks 50           Control Flow containers 60           Data Flow task 70           Custom execution message Note: Taken from Reza Rad’s (excellent!) helper.MessageSourceType table found here. @StartOfIDString [VarChar(100)] – use this to uniquely identify the message field value you wish to parse. In this case, the string ‘The Lookup processed ‘ identifies all the Lookup Transformation messages I desire to parse. @ProcessingTimeString [VarChar(100)] – this parameter is message-specific. I use this parameter to specifically search the message field value for the beginning of the Lookup Processing Time value. For this execution, I use the string ‘The processing time was ‘. @CacheUsedString [VarChar(100)] – this parameter is also message-specific. I use this parameter to specifically search the message field value for the beginning of the Lookup Cache  Used value. It returns the memory used, in bytes. For this execution, I use the string ‘The cache used ‘. The other parameters are built from variations of the parameters listed above. The query parses the values into text. The string values are converted to numeric values for ratio calculations; LookupRowsPerSecond and LookupBytesPerRow. Since ratios involve division, CASE statements check for denominators that equal 0. Here are the results in an SSMS grid: This is not the only way to retrieve this information. And much of the code lends itself to conversion to functions. If there is interest, I will share the functions in an upcoming post. If you want to get started with SSIS with the help of experts, read more over at Fix Your SQL Server. Reference: Pinal Dave (http://blog.sqlauthority.com)Filed under: Notes from the Field, PostADay, SQL, SQL Authority, SQL Backup and Restore, SQL Query, SQL Server, SQL Tips and Tricks, T SQL Tagged: SSIS
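
    As a side illustration (not part of Andy's query), the same string-extraction and ratio logic can be sketched in a few lines of Python, assuming the Lookup message format quoted above; in practice the message text would come from catalog.operation_messages rows with message_source_type = 60:

        import re

        # Example message taken from the post.
        message = ("Data Flow Task:Information: The Lookup processed 4485 rows in the cache. "
                   "The processing time was 0.015 seconds. "
                   "The cache used 1376895 bytes of memory.")

        pattern = re.compile(r"The Lookup processed (\d+) rows in the cache\. "
                             r"The processing time was ([\d.]+) seconds\. "
                             r"The cache used (\d+) bytes of memory\.")

        match = pattern.search(message)
        if match:
            rows, seconds, cache_bytes = int(match[1]), float(match[2]), int(match[3])
            rows_per_second = rows / seconds if seconds else 0   # guard divide-by-zero,
            bytes_per_row = cache_bytes / rows if rows else 0    # like the CASE statements
            print(rows, seconds, cache_bytes, rows_per_second, bytes_per_row)

    The T-SQL version does the same work with PATINDEX, SUBSTRING, and CHARINDEX because it runs where the messages already live, inside SSISDB.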

    Read the article

  • Auto DOP and Concurrency

    - by jean-pierre.dijcks
    After spending some time in the cloud, I figured it is time to come down to earth and start discussing some of the new Auto DOP features some more. As Database Machines (the v2 machine runs Oracle Database 11.2) are effectively selling like hotcakes, it makes some sense to talk about the new parallel features in more detail. For basic understanding, make sure you have read the initial post. The focus there is on Auto DOP and queuing, which is to some extent the focus here. But now I want to discuss the concurrency a little and explain some of the relevant parameters and their impact, specifically in a situation with concurrency on the system. The goal of Auto DOP: The idea behind calculating the Automatic Degree of Parallelism is to find the highest possible DOP (ideal DOP) that still scales. In other words, if we were to increase the DOP beyond a certain point, we would see the performance curve tail off and the resource cost / performance would become less optimal. Therefore the ideal DOP is the best resource/performance point for that statement. The goal of Queuing: On a normal production system we should see statements running concurrently. On a Database Machine we typically see high concurrency rates, so we need to find a way to deal with both high DOPs and high concurrency. Queuing is intended to make sure we don't throttle down a DOP because other statements are running on the system, and that we stay within the physical limits of a system's processing power. Instead of making statements go at a lower DOP, we queue them to make sure they will get all the resources they want to run efficiently without trashing the system. The theory, and hopefully the practice, is that by giving a statement the optimal DOP the sum of all statements runs faster with queuing than without queuing. Increasing the Number of Potential Parallel Statements: To determine how many statements we will consider running in parallel, a single parameter should be looked at. That parameter is called PARALLEL_MIN_TIME_THRESHOLD. The default value is set to 10 seconds. So far there is nothing new here, but do realize that anything serial (e.g. that stays under the threshold) goes straight into processing and is not considered in the rest of this post. Now, if you have a system where you have two groups of queries, serial short running and potentially parallel long running ones, you may want to worry only about the long running ones with this parallel statement threshold. As an example, let's assume the short running stuff runs on average between 1 and 15 seconds in serial (and the business is quite happy with that). The long running stuff is in the realm of 1 – 5 minutes. It might be a good choice to set the threshold to somewhere north of 30 seconds. That way the short running queries all run serial as they do today (if it ain't broken, don't fix it), and the long running ones can be evaluated for (higher degrees of) parallelism. This makes sense because the longer running ones are (at least in theory) more interesting to unleash a parallel processing model on, and the benefits of running these in parallel are much more significant (again, that is mostly the case). Setting a Maximum DOP for a Statement: Now that you know how to control how many of your statements are considered to run in parallel, let's talk about the specific degree of any given statement that will be evaluated. As the initial post describes, this is controlled by PARALLEL_DEGREE_LIMIT.
    This parameter controls the degree on the entire cluster, and by default it is set to CPU (meaning it equals Default DOP). For the sake of an example, let's say our Default DOP is 32. Looking at our 5-minute queries from the previous paragraph, a limit of 32 means that none of the statements evaluated for Auto DOP ever runs at more than a DOP of 32. Concurrently Running a High DOP: A basic assumption about running high-DOP statements at high concurrency is that at some point (and this is true on any parallel processing platform!) you will run into a resource limitation. And yes, you can then buy more hardware (e.g. expand the Database Machine in Oracle's case), but that is not the point of this post… The goal is to find a balance between the highest possible DOP for each statement and the number of statements running concurrently, but with an emphasis on running each statement at its highest-efficiency DOP. The PARALLEL_SERVERS_TARGET parameter is the all-important concurrency slider here. Setting this parameter to a higher number means more statements get to run at their maximum parallel degree before queuing kicks in. PARALLEL_SERVERS_TARGET is set per instance (so it needs to be set to the same value on all 8 nodes in a full-rack Database Machine). Just as a side note, this parameter is set in processes, not in DOP; its default equates to 4 * Default DOP (each unit of DOP uses 2 processes, and the default target is 2 * Default DOP, hence 4 * Default DOP processes). Let's say we have PARALLEL_SERVERS_TARGET set to 128. With our limit set to 32 (the default) we are able to run 4 statements concurrently at the highest DOP possible on this system before we start queuing. If those 4 statements are running, any next statement will be queued. To run a system at high concurrency, PARALLEL_SERVERS_TARGET should be raised from its default to be much closer (start with 60% or so) to PARALLEL_MAX_SERVERS. By using both PARALLEL_SERVERS_TARGET and PARALLEL_DEGREE_LIMIT you can easily control how many statements run concurrently at good DOPs without excessive queuing. Because each workload is a little different, it makes sense to plan ahead, look at these parameters, and set them based on your requirements.
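    As a rough sketch of how these knobs fit together for the 32-DOP example above (illustrative values only, not a recommendation; tune them for your own workload and hardware):

        -- Enable Auto DOP and statement queuing.
        ALTER SYSTEM SET parallel_degree_policy = AUTO;
        -- Only statements estimated to run longer than 30 seconds are considered for parallelism.
        ALTER SYSTEM SET parallel_min_time_threshold = 30;
        -- Cap any single statement at a DOP of 32 (the Default DOP in the example above).
        ALTER SYSTEM SET parallel_degree_limit = 32;
        -- Start queuing new parallel statements once 128 parallel server processes are in use.
        ALTER SYSTEM SET parallel_servers_target = 128;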

    Read the article

  • Problems with updates

    - by legospace9876
    I can not update Weather Indicator with Update Manager. This is the terminal log: installArchives() failed: perl: warning: Setting locale failed. perl: warning: Please check that your locale settings: LANGUAGE = (unset), LC_ALL = (unset), LANG = "sr_RS.utf_8_latin" are supported and installed on your system. perl: warning: Falling back to the standard locale ("C"). locale: Cannot set LC_CTYPE to default locale: No such file or directory locale: Cannot set LC_MESSAGES to default locale: No such file or directory locale: Cannot set LC_ALL to default locale: No such file or directory perl: warning: Setting locale failed. perl: warning: Please check that your locale settings: LANGUAGE = (unset), LC_ALL = (unset), LANG = "sr_RS.utf_8_latin" are supported and installed on your system. perl: warning: Falling back to the standard locale ("C"). locale: Cannot set LC_CTYPE to default locale: No such file or directory locale: Cannot set LC_MESSAGES to default locale: No such file or directory locale: Cannot set LC_ALL to default locale: No such file or directory perl: warning: Setting locale failed. perl: warning: Please check that your locale settings: LANGUAGE = (unset), LC_ALL = (unset), LANG = "sr_RS.utf_8_latin" are supported and installed on your system. perl: warning: Falling back to the standard locale ("C"). locale: Cannot set LC_CTYPE to default locale: No such file or directory locale: Cannot set LC_MESSAGES to default locale: No such file or directory locale: Cannot set LC_ALL to default locale: No such file or directory perl: warning: Setting locale failed. perl: warning: Please check that your locale settings: LANGUAGE = (unset), LC_ALL = (unset), LANG = "sr_RS.utf_8_latin" are supported and installed on your system. perl: warning: Falling back to the andard locale ("C"). locale: Cannot set LC_CTYPE to default locale: No such file or directory locale: Cannot set LC_MESSAGES to default locale: No such file or directory locale: Cannot set LC_ALL to default locale: No such file or directory Setting up indicator-weather (11.11.28-0ubuntu1.1) ... Installing indicator-specific icons... Installing indicator dconf schema... cp: cannot stat `/usr/share/indicator-weather/indicator-weather.gschema.xml': No such file or directory dpkg: error processing indicator-weather (--configure): subprocess installed post-installation script returned error exit status 1 Errors were encountered while processing: indicator-weather Error in function: SystemError: E:Sub-process /usr/bin/dpkg returned an error code (1) Setting up indicator-weather (11.11.28-0ubuntu1.1) ... Installing indicator-specific icons... Installing indicator dconf schema... cp: cannot stat `**/usr/share/indicator-weather/indicator-weather.gschema.xml**': No such file or directory dpkg: error processing indicator-weather (--configure): subprocess installed post-installation script returned error exit status 1 The file that I bold really does not exist. How can I solve this problem?

    Read the article

  • Cloud Computing Architecture Patterns: Don’t Focus on the Client

    - by BuckWoody
    Normally I try to put topics in the positive: in other words, "Do this" rather than "Don't do that". Sometimes it's clearer to focus on what *not* to do. Popular development processes often start with screen mockups or user input descriptions. In a scale-out pattern like Cloud Computing on Windows Azure, that's the wrong place to start. Start with the Data: Instead, I recommend that you start with the data that a process requires. That data might be temporary or persisted, but starting with the data and its requirements not only helps to define the storage engine you need, it also drives everything from security to the integrity of the application. For instance, assume the requirements show that the user must enter their phone number, and that this datum is used in a contact management system further down the application chain. For that datum, you can determine what data type you need (U.S. only or international?), the security requirements, whether it needs ACID compliance, how it will be searched and indexed, and so on. From one small data point you can extrapolate out your options for storing and processing the data. Here's the interesting part, which begins to break the patterns that we've used for decades: not all of the data has the same requirements. The phone number might be best suited for a list, or an element, or a string, with either BASE or ACID requirements, based on how it is used. That means we don't have to dump everything into XML, an RDBMS, a NoSQL engine, or a flat file exclusively. In fact, one record might use all of those depending on the use-case requirements. Next Is Data Management: With the data defined, we can move on to how to store the data. Again, the requirements now dictate whether we need full relational calculus or set-based operations, or we can choose another method based on the requirements for the data. And, breaking another pattern, it's OK to store the data more than once, in more than one location. We do this all the time for reporting systems and Business Intelligence systems, so this is a pattern we need to think about even for OLTP data. Move to Data Transport: How does the data get around? We can use a connection-based method, sending the data along a transport to the storage engine, but in some cases we may want to use a cache, a queue, the Service Bus, or Complex Event Processing. Finally, Data Processing: Most RDBMS, NoSQL, and certainly Big Data engines not only store data but can process and manipulate it as well. It's doubtful that you'll do calculations on that phone number, right? Well, if you're the phone company, you most certainly will. And so we see that even once we've chosen the data type, storage, and engine, the same element can have different computing requirements based on how it is used. Sure, We Need a Front-End at Some Point: Not all data is entered by human hands; in fact, most data isn't. We don't really need a Graphical User Interface (GUI); we need some way for a GUI to get data into and out of the systems listed earlier. But when we do need to allow users to enter or examine data, that should be left to the GUI that best fits the device the user has. Ever tried to use an application designed for a web browser on a phone? Or one designed for a tablet on a phone? It's usually quite painful. The siren song of "We'll just write one interface for all devices" is strong, and has beguiled many an unsuspecting architect. But those interfaces just don't work out. Instead, focus on the data, its transport, and its processing.
Create API calls or a message system that allows for resilient transport to the device or interface, and let it do what it does best. References Microsoft Architecture Journal:   http://msdn.microsoft.com/en-us/architecture/bb410935.aspx Patterns and Practices:   http://msdn.microsoft.com/en-us/library/ff921345.aspx Windows Azure iOS, Android, Windows 8 Mobile Devices SDK: http://www.windowsazure.com/en-us/develop/mobile/tutorials/get-started-ios/ Windows Azure Facebook SDK: http://ntotten.com/2013/03/14/using-windows-azure-mobile-services-with-the-facebook-sdk-for-windows-phone/

    Read the article

  • perl scripts stdin/pipe reading problem [closed]

    - by user4541
    I have 2 scripts for a task. The 1st outputs lines of data (terminated with CR/LF) to STDOUT now and then. The 2nd keeps reading data from STDIN for further processing in the following way:

        use strict;
        my $dataline;
        while (1) {
            $dataline = "";
            $dataline = <STDIN>;
            until ($dataline ne "") {
                sleep(1);
                $dataline = <STDIN>;
            }
            # further processing with a non-empty data line follows
        }
        print "quitting...\n";

    I redirect the output from the 1st to the 2nd using a pipe as follows: perl scrt1 | perl scpt2. But the problem I'm having with these two scripts is that it looks like the 2nd script keeps getting only the initial load of lines of data from the 1st script, even when there is no new data anymore. I wonder if anybody who has had similar issues can kindly help a bit? Thanks.
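    A likely factor here (an assumption, since the first script is not shown) is pipe buffering: when STDOUT is connected to a pipe rather than a terminal, Perl block-buffers it, so the producer's lines may only reach the consumer in bursts, when the buffer fills or the producer exits. A minimal sketch of a producer with autoflush turned on (a hypothetical script, not the poster's code):

        # producer.pl - hypothetical writer that emits a line "now and then"
        use strict;
        use warnings;

        $| = 1;    # autoflush STDOUT so each line goes down the pipe immediately

        while (1) {
            print "data line at ", scalar localtime, "\n";
            sleep 5;
        }

    On the reading side, a plain while (my $line = <STDIN>) { ... } loop blocks until a full line arrives and exits cleanly at end-of-file, which makes the sleep/until polling unnecessary.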

    Read the article

  • Event Processed

    - by Antony Reynolds
    Installing Oracle Event Processing 11g. Earlier this month I was involved in organizing the Monument Family History Day. It was certainly a complex event, with dozens of presenters, guides and 100s of visitors. So with that experience of a complex event under my belt I decided to refresh my acquaintance with Oracle Event Processing (CEP). CEP has a developer side based on Eclipse and a runtime environment.
    Developer Install: The developer install requires several steps (documentation):
    - Download the required software: Eclipse (Linux) – it is recommended to use version 3.6.2 (Helios).
    - Install Eclipse: unzip the download into the desired directory.
    - Start Eclipse.
    - Add the Oracle CEP repository in Eclipse: http://download.oracle.com/technology/software/cep-ide/11/
    - Install Oracle CEP Tools for Eclipse 3.6 (you may need to set the proxy if behind a firewall).
    - Modify eclipse.ini (if using Windows, edit with WordPad rather than Notepad): point to a 1.6 JVM by inserting the lines "-vm" and "\PATH_TO_1.6_JDK\jre\bin\javaw.exe" before -vmargs, and increase PermGen memory by inserting -XX:MaxPermSize=256M at the end of the file (the resulting file is sketched below).
    - Restart Eclipse and verify that everything is installed as expected.
    Server Install: The server install is very straightforward (documentation). It is recommended to use the JRockit JDK with CEP, so the steps to set up a working CEP server environment are:
    - Download the required software: JRockit – I used Oracle "JRockit 6 - R28.2.5", which includes "JRockit Mission Control 4.1" and "JRockit Real Time 4.1" – and the Oracle Event Processor – I used "Complex Event Processing Release 11gR1 (11.1.1.6.0)".
    - Install JRockit: run the JRockit installer; the download is an executable binary that just needs to be marked as executable.
    - Install CEP: unzip the downloaded file and run the CEP installer; the unzipped file is an executable binary that may need to be marked as executable. Choose a custom install and add the examples if needed. It is not recommended to add the examples to a production environment, but they can be helpful in development.
    Voila, The Deed Is Done: With CEP installed you are now ready to start a server; if you didn't install the demos then you will need to create a domain before starting the server. Once the server is up and running (using startwlevs.sh) you can verify that the visualizer is available on http://hostname:port/wlevs; the default port for the demo domain is 9002. With the server running you can test the IDE by creating a new "Oracle CEP Application Project" and creating a new target environment pointing at your CEP installation. Much easier than organizing a Family History Day!
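    For reference, a minimal sketch of what the relevant portion of eclipse.ini ends up looking like after the edits described above (the JDK path is the placeholder from the text, not a real location, and any existing VM arguments are kept):

        -vm
        \PATH_TO_1.6_JDK\jre\bin\javaw.exe
        -vmargs
        ...existing VM arguments stay here...
        -XX:MaxPermSize=256M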

    Read the article

  • Spring @Transactional not creating required transaction

    - by Steve
    Ok, so I've finally bowed to peer pressure and started using Spring in my web app :-)... So I'm trying to get the transaction handling stuff to work, and I just can't seem to get it. My Spring configuration looks like this: <?xml version="1.0" encoding="UTF-8"?> <beans xmlns="http://www.springframework.org/schema/beans" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:p="http://www.springframework.org/schema/p" xmlns:tx="http://www.springframework.org/schema/tx" xsi:schemaLocation="http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans.xsd http://www.springframework.org/schema/tx http://www.springframework.org/schema/tx/spring-tx.xsd"> <bean id="groupDao" class="mil.navy.ndms.conops.common.dao.impl.jpa.GroupDao" lazy-init="true"> <property name="entityManagerFactory" ><ref bean="entityManagerFactory"/></property> </bean> <!-- enables interpretation of the @Required annotation to ensure that dependency injection actually occures --> <bean class="org.springframework.beans.factory.annotation.RequiredAnnotationBeanPostProcessor"/> <!-- enables interpretation of the @PersistenceUnit/@PersistenceContext annotations providing convenient access to EntityManagerFactory/EntityManager --> <bean class="org.springframework.orm.jpa.support.PersistenceAnnotationBeanPostProcessor"/> <!-- uses the persistence unit defined in the META-INF/persistence.xml JPA configuration file --> <bean id="entityManagerFactory" class="org.springframework.orm.jpa.LocalEntityManagerFactoryBean"> <property name="persistenceUnitName" value="CONOPS_PU" /> </bean> <!-- transaction manager for use with a single JPA EntityManagerFactory for transactional data access to a single datasource --> <bean id="jpaTransactionManager" class="org.springframework.orm.jpa.JpaTransactionManager"> <property name="entityManagerFactory" ref="entityManagerFactory"/> </bean> <!-- enables interpretation of the @Transactional annotation for declerative transaction managment using the specified JpaTransactionManager --> <tx:annotation-driven transaction-manager="jpaTransactionManager" proxy-target-class="true"/> </beans> persistence.xml: <?xml version="1.0" encoding="UTF-8"?> <persistence version="1.0" xmlns="http://java.sun.com/xml/ns/persistence" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://java.sun.com/xml/ns/persistence http://java.sun.com/xml/ns/persistence/persistence_1_0.xsd"> <persistence-unit name="CONOPS_PU" transaction-type="RESOURCE_LOCAL"> <provider>org.hibernate.ejb.HibernatePersistence</provider> ... Class mappings removed for brevity... 
<properties> <property name="hibernate.dialect" value="org.hibernate.dialect.Oracle10gDialect"/> <property name="hibernate.connection.autocommit" value="false"/> <property name="hibernate.connection.username" value="****"/> <property name="hibernate.connection.password" value="*****"/> <property name="hibernate.connection.driver_class" value="oracle.jdbc.OracleDriver"/> <property name="hibernate.connection.url" value="jdbc:oracle:thin:@*****:1521:*****"/> <property name="hibernate.cache.provider_class" value="org.hibernate.cache.NoCacheProvider"/> <property name="hibernate.hbm2ddl.auto" value="create"/> <property name="hibernate.show_sql" value="true"/> <property name="hibernate.format_sql" value="true"/> </properties> </persistence-unit> </persistence> The DAO method to save my domain object looks like this: @Transactional(propagation=Propagation.REQUIRES_NEW) protected final T saveOrUpdate (T model) { EntityManager em = emf.createEntityManager ( ); EntityTransaction trans = em.getTransaction ( ); System.err.println ("Transaction isActive () == " + trans.isActive ( )); if (em != null) { try { if (model.getId ( ) != null) { em.persist (model); em.flush (); } else { em.merge (model); em.flush (); } } finally { em.close (); } } return (model); } So I try to save a copy of my Group object using the following code in my test case: context = new ClassPathXmlApplicationContext(configs); dao = (GroupDao)context.getBean("groupDao"); dao.saveOrUpdate (new Group ()); This bombs with the following exception: javax.persistence.TransactionRequiredException: no transaction is in progress at org.hibernate.ejb.AbstractEntityManagerImpl.flush(AbstractEntityManagerImpl.java:301) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:48) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:37) at java.lang.reflect.Method.invoke(Method.java:600) at org.springframework.orm.jpa.ExtendedEntityManagerCreator$ExtendedEntityManagerInvocationHandler.invoke(ExtendedEntityManagerCreator.java:341) at $Proxy26.flush(Unknown Source) at mil.navy.ndms.conops.common.dao.impl.jpa.GenericJPADao.saveOrUpdate(GenericJPADao.java:646) at mil.navy.ndms.conops.common.dao.impl.jpa.GroupDao.save(GroupDao.java:641) at mil.navy.ndms.conops.common.dao.impl.jpa.GroupDao$$FastClassByCGLIB$$50343b9b.invoke() at net.sf.cglib.proxy.MethodProxy.invoke(MethodProxy.java:149) at org.springframework.aop.framework.Cglib2AopProxy$DynamicAdvisedInterceptor.intercept(Cglib2AopProxy.java:622) at mil.navy.ndms.conops.common.dao.impl.jpa.GroupDao$$EnhancerByCGLIB$$7359ba58.save() at mil.navy.ndms.conops.common.dao.impl.jpa.GroupDaoTest.testGroupDaoSave(GroupDaoTest.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:48) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:37) at java.lang.reflect.Method.invoke(Method.java:600) at junit.framework.TestCase.runTest(TestCase.java:164) at junit.framework.TestCase.runBare(TestCase.java:130) at junit.framework.TestResult$1.protect(TestResult.java:106) at junit.framework.TestResult.runProtected(TestResult.java:124) at junit.framework.TestResult.run(TestResult.java:109) at junit.framework.TestCase.run(TestCase.java:120) at junit.framework.TestSuite.runTest(TestSuite.java:230) at junit.framework.TestSuite.run(TestSuite.java:225) at 
org.eclipse.jdt.internal.junit.runner.junit3.JUnit3TestReference.run(JUnit3TestReference.java:130) at org.eclipse.jdt.internal.junit.runner.TestExecution.run(TestExecution.java:38) at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.runTests(RemoteTestRunner.java:460) at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.runTests(RemoteTestRunner.java:673) at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.run(RemoteTestRunner.java:386) at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.main(RemoteTestRunner.java:196) In addition, I get the following warnings when Spring first starts. Since these reference the entityManagerFactory and the transactionManager, they probably have some bearing on the problem, but I've no been able to decipher them enough to know what: Mar 11, 2010 12:19:27 PM org.springframework.context.support.AbstractApplicationContext$BeanPostProcessorChecker postProcessAfterInitialization INFO: Bean 'entityManagerFactory' is not eligible for getting processed by all BeanPostProcessors (for example: not eligible for auto-proxying) Mar 11, 2010 12:19:27 PM org.springframework.context.support.AbstractApplicationContext$BeanPostProcessorChecker postProcessAfterInitialization INFO: Bean 'entityManagerFactory' is not eligible for getting processed by all BeanPostProcessors (for example: not eligible for auto-proxying) Mar 11, 2010 12:19:27 PM org.springframework.context.support.AbstractApplicationContext$BeanPostProcessorChecker postProcessAfterInitialization INFO: Bean 'jpaTransactionManager' is not eligible for getting processed by all BeanPostProcessors (for example: not eligible for auto-proxying) Mar 11, 2010 12:19:27 PM org.springframework.context.support.AbstractApplicationContext$BeanPostProcessorChecker postProcessAfterInitialization INFO: Bean '(inner bean)' is not eligible for getting processed by all BeanPostProcessors (for example: not eligible for auto-proxying) Mar 11, 2010 12:19:27 PM org.springframework.context.support.AbstractApplicationContext$BeanPostProcessorChecker postProcessAfterInitialization INFO: Bean '(inner bean)' is not eligible for getting processed by all BeanPostProcessors (for example: not eligible for auto-proxying) Mar 11, 2010 12:19:27 PM org.springframework.context.support.AbstractApplicationContext$BeanPostProcessorChecker postProcessAfterInitialization INFO: Bean 'org.springframework.transaction.interceptor.TransactionAttributeSourceAdvisor' is not eligible for getting processed by all BeanPostProcessors (for example: not eligible for auto-proxying) Mar 11, 2010 12:19:27 PM org.springframework.context.support.AbstractApplicationContext$BeanPostProcessorChecker postProcessAfterInitialization INFO: Bean 'org.springframework.orm.jpa.support.PersistenceAnnotationBeanPostProcessor' is not eligible for getting processed by all BeanPostProcessors (for example: not eligible for auto-proxying) Mar 11, 2010 12:19:27 PM org.springframework.beans.factory.support.DefaultListableBeanFactory preInstantiateSingletons INFO: Pre-instantiating singletons in org.springframework.beans.factory.support.DefaultListableBeanFactory@37003700: defining beans [groupDao,org.springframework.beans.factory.annotation.RequiredAnnotationBeanPostProcessor,org.springframework.orm.jpa.support.PersistenceAnnotationBeanPostProcessor,entityManagerFactory,jpaTransactionManager,org.springframework.aop.config.internalAutoProxyCreator,org.springframework.transaction.interceptor.TransactionAttributeSourceAdvisor]; root of factory hierarchy Does anyone have any 
idea what I'm missing? I'm totally stumped... Thanks
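    Two details in the snippet above are worth a second look (observations only, not a confirmed diagnosis): with proxy-based Spring AOP, @Transactional is only honored on public, non-final methods, and an EntityManager obtained directly from emf.createEntityManager() does not participate in the transaction that JpaTransactionManager manages. A minimal sketch of the more usual arrangement, with hypothetical names, injects the container-managed EntityManager instead:

        import javax.persistence.EntityManager;
        import javax.persistence.PersistenceContext;
        import org.springframework.transaction.annotation.Propagation;
        import org.springframework.transaction.annotation.Transactional;

        public class GroupDao {

            // Injected by Spring's PersistenceAnnotationBeanPostProcessor; this is a
            // shared, transaction-aware proxy, not a manually created EntityManager.
            @PersistenceContext
            private EntityManager em;

            // Public and non-final so the CGLIB transaction proxy can intercept it.
            @Transactional(propagation = Propagation.REQUIRES_NEW)
            public Group saveOrUpdate(Group model) {
                if (model.getId() == null) {
                    em.persist(model);          // new entity
                } else {
                    model = em.merge(model);    // detached/existing entity
                }
                // No explicit flush or close: both happen when the transaction commits.
                return model;
            }
        }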

    Read the article

  • Issues when upgrading gnome-session

    - by Gabriel A. Zorrilla
    I'm having issues since yesterday after an upgrade. The GNOME session got messed up somehow and now I cannot install further upgrades or new apps. Here is what I got from the terminal after doing the recommended apt-get install -f:
    (Reading database ... 166876 files and directories currently installed.)
    Preparing to replace gnome-session 3.0.1-0ubuntu1~build2 (using .../gnome-session_3.0.2-0ubuntu3~natty1_all.deb) ...
    Unpacking replacement gnome-session ...
    dpkg: error processing /var/cache/apt/archives/gnome-session_3.0.2-0ubuntu3~natty1_all.deb (--unpack): trying to overwrite '/usr/share/xsessions/gnome-shell.desktop', which is also in package gnome-shell 3.0.1-0ubuntu1~build1
    Errors were encountered while processing: /var/cache/apt/archives/gnome-session_3.0.2-0ubuntu3~natty1_all.deb
    E: Sub-process /usr/bin/dpkg returned an error code (1)
    In /var/log/apt/history.log, it says: Start-Date: 2011-05-29 17:56:16 Commandline: apt-get -f install Upgrade: gnome-session:amd64 (3.0.1-0ubuntu1~build2, 3.0.2-0ubuntu3~natty1) Error: Sub-process /usr/bin/dpkg returned an error code (1) End-Date: 2011-05-29 17:56:21
    No idea what all this means.

    Read the article

  • How to fix the "Setting locale failed" error while installing vim?

    - by user211775
    When installing vim through Software Center , I get this error installArchives() failed: perl: warning: Setting locale failed. perl: warning: Please check that your locale settings: LANGUAGE = (unset), LC_ALL = (unset), LANG = "en_IN.ISO8859-1" are supported and installed on your system. perl: warning: Falling back to the standard locale ("C"). locale: Cannot set LC_CTYPE to default locale: No such file or directory locale: Cannot set LC_MESSAGES to default locale: No such file or directory locale: Cannot set LC_ALL to default locale: No such file or directory perl: warning: Setting locale failed. perl: warning: Please check that your locale settings: LANGUAGE = (unset), LC_ALL = (unset), LANG = "en_IN.ISO8859-1" are supported and installed on your system. perl: warning: Falling back to the standard locale ("C"). locale: Cannot set LC_CTYPE to default locale: No such file or directory locale: Cannot set LC_MESSAGES to default locale: No such file or directory locale: Cannot set LC_ALL to default locale: No such file or directory perl: warning: Setting locale failed. perl: warning: Please check that your locale settings: LANGUAGE = (unset), LC_ALL = (unset), LANG = "en_IN.ISO8859-1" are supported and installed on your system. perl: warning: Falling back to the standard locale ("C"). locale: Cannot set LC_CTYPE to default locale: No such file or directory locale: Cannot set LC_MESSAGES to default locale: No such file or directory locale: Cannot set LC_ALL to default locale: No such file or directory perl: warning: Setting locale failed. perl: warning: Please check that your locale settings: LANGUAGE = (unset), LC_ALL = (unset), LANG = "en_IN.ISO8859-1" are supported and installed on your system. perl: warning: Falling back to the standard locale ("C"). locale: Cannot set LC_CTYPE to default locale: No such file or directory locale: Cannot set LC_MESSAGES to default locale: No such file or directory locale: Cannot set LC_ALL to default locale: No such file or directory Setting up install-info (4.13a.dfsg.1-10ubuntu4) ... /etc/environment: line 1: PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games: No such file or directory dpkg: error processing install-info (--configure): subprocess installed post-installation script returned error exit status 1 No apport report written because MaxReports is reached already Errors were encountered while processing: install-info Setting up install-info (4.13a.dfsg.1-10ubuntu4) ... /etc/environment: line 1: PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games: No such file or directory dpkg: error processing install-info (--configure): subprocess installed post-installation script returned error exit status 1

    Read the article

  • Getting started with Oracle Database In-Memory Part III - Querying The IM Column Store

    - by Maria Colgan
    In my previous blog posts, I described how to install, enable, and populate the In-Memory column store (IM column store). This week's post focuses on how data is accessed within the IM column store. Let's take a simple query: "What is the most expensive air-mail order we have received to date?"

        SELECT Max(lo_ordtotalprice) most_expensive_order
        FROM   lineorder
        WHERE  lo_shipmode = 5;

    The LINEORDER table has been populated into the IM column store, and since we have no alternative access paths (indexes or views) the execution plan for this query is a full table scan of the LINEORDER table. You will notice that the execution plan has a new set of keywords, "IN MEMORY", in the access method description in the Operation column. These keywords indicate that the LINEORDER table has been marked INMEMORY and that we may use the IM column store in this query. What do I mean by "may use"? There are a small number of cases where we won't use the IM column store even though the object has been marked INMEMORY. This is similar to how the keyword STORAGE is used in Exadata environments. You can confirm that the IM column store was actually used by examining the session-level statistics, but more on that later. For now let's focus on how the data is accessed in the IM column store and why, for analytical queries, it's faster to access the data in the new column format than in the buffer cache. There are four main reasons why accessing the data in the IM column store is more efficient.
    1. Access only the column data needed. The IM column store only has to scan two columns – lo_shipmode and lo_ordtotalprice – to execute this query, while the traditional row store or buffer cache has to scan all of the columns in each row of the LINEORDER table until it reaches both the lo_shipmode and the lo_ordtotalprice columns.
    2. Scan and filter data in its compressed format. When data is populated into the IM column store it is automatically compressed using a new set of compression algorithms that allow WHERE clause predicates to be applied against the compressed formats. This means the volume of data scanned in the IM column store for our query will be far less than for the same query in the buffer cache, where the data is scanned in its uncompressed form, which could be 20X larger.
    3. Prune out any unnecessary data within each column. The fastest read you can execute is the read you don't do. In the IM column store a further reduction in the amount of data accessed is possible thanks to the In-Memory Storage Indexes (IM storage indexes) that are automatically created and maintained on each of the columns in the IM column store. IM storage indexes allow data pruning to occur based on the filter predicates supplied in a SQL statement. An IM storage index keeps track of minimum and maximum values for each column in each In-Memory Compression Unit (IMCU). In our query the WHERE clause predicate is on the lo_shipmode column. The IM storage index on the lo_shipmode column is examined to determine whether our specified column value 5 exists in any IMCU, by comparing the value 5 to the minimum and maximum values maintained in the storage index. If the value 5 is outside the minimum and maximum range for an IMCU, the scan of that IMCU is avoided. For the IMCUs where the value 5 does fall within the min/max range, an additional level of data pruning is possible via the metadata dictionary created when dictionary-based compression is used on an IMCU. The dictionary contains a list of the unique column values within the IMCU.
    Since we have an equality predicate, we can easily determine whether 5 is one of the distinct column values or not. The combination of the IM storage index and dictionary-based pruning enables us to scan only the necessary IMCUs.
    4. Use SIMD to apply filter predicates. For the IMCUs that do need to be scanned, Oracle takes advantage of SIMD vector processing (Single Instruction processing Multiple Data values). Instead of evaluating each entry in the column one at a time, SIMD vector processing allows a set of column values to be evaluated together in a single CPU instruction. The column format used in the IM column store has been specifically designed to maximize the number of column entries that can be loaded into the vector registers on the CPU and evaluated in a single CPU instruction. SIMD vector processing enables Oracle Database In-Memory to scan billions of rows per second per core, versus the millions of rows per second per core that can be achieved in the buffer cache.
    I mentioned earlier in this post that in order to confirm the IM column store was used, we need to examine the session-level statistics. You can monitor the session-level statistics by querying the performance views v$mystat and v$statname. All of the statistics related to the In-Memory column store begin with IM. You can see the full list of these statistics by typing:

        column display_name format a30

        SELECT display_name
        FROM   v$statname
        WHERE  display_name LIKE 'IM%';

    If we check the session statistics after we execute our query, the results would be as follows:

        SELECT Max(lo_ordtotalprice) most_expensive_order
        FROM   lineorder
        WHERE  lo_shipmode = 5;

        SELECT display_name
        FROM   v$statname
        WHERE  display_name IN ('IM scan CUs columns accessed',
                                'IM scan segments minmax eligible',
                                'IM scan CUs pruned');

    As you can see, only 2 IMCUs were accessed during the scan, as the majority of the IMCUs (44) in the LINEORDER table were pruned out thanks to the storage index on the lo_shipmode column. In next week's post I will describe how you can control which queries use the IM column store and which don't. +Maria Colgan
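    To see the actual values of those statistics for the current session (the snippet above lists only the statistic names), the two views can be joined on their statistic number; a minimal sketch, assuming it is run in the same session as the test query:

        -- Session-level In-Memory statistics for the three counters of interest.
        SELECT n.display_name, s.value
        FROM   v$mystat   s
        JOIN   v$statname n ON n.statistic# = s.statistic#
        WHERE  n.display_name IN ('IM scan CUs columns accessed',
                                  'IM scan segments minmax eligible',
                                  'IM scan CUs pruned');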

    Read the article

  • How and when to use UNIT testing properly

    - by Zebs
    I am an iOS developer. I have read about unit testing and how it is used to test specific pieces of your code. A very quick example has to do with processing JSON data into a database. The unit test reads a file from the project bundle and executes the method that is in charge of processing the JSON data. But I don't get how this is different from actually running the app and testing against the server. So my question might be a bit general, but I honestly don't understand the proper use of unit testing, or even how it is useful; I hope the experienced programmers that surf around StackOverflow can help me. Any help is very much appreciated!
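    To make the contrast concrete, here is a minimal, hypothetical sketch (Swift/XCTest; the Order type, parser, and orders.json fixture are all invented for illustration) of the kind of test described above: it feeds a known JSON file from the test bundle to the parsing code and asserts on the result, so it runs in milliseconds, needs no server or network, and fails deterministically if the parsing logic regresses:

        import Foundation
        import XCTest

        // Hypothetical model and parser under test.
        struct Order: Equatable {
            let id: Int
            let total: Double
        }

        enum OrderParser {
            static func parse(_ data: Data) throws -> [Order] {
                let raw = try JSONSerialization.jsonObject(with: data) as? [[String: Any]] ?? []
                return raw.compactMap { dict in
                    guard let id = dict["id"] as? Int, let total = dict["total"] as? Double else { return nil }
                    return Order(id: id, total: total)
                }
            }
        }

        final class OrderParserTests: XCTestCase {
            func testParsingTheBundledFixtureNeedsNoServer() throws {
                // orders.json is a small, fixed fixture file added to the test bundle.
                let url = try XCTUnwrap(Bundle(for: type(of: self)).url(forResource: "orders", withExtension: "json"))
                let data = try Data(contentsOf: url)

                let orders = try OrderParser.parse(data)

                // Expected values are known because the fixture never changes.
                XCTAssertEqual(orders.count, 2)
                XCTAssertEqual(orders.first, Order(id: 1, total: 9.99))
            }
        }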

    Read the article

  • POP Culture

    - by [email protected]
    When we hear the word POP, we normally think of a soft drink or a soda, while for others it might be their favourite kind of music. In my case, it's the sound my knee makes when I bend down. Within Oracle, though, when we talk about POP we are referring to the Partner Ordering Portal. The Partner Ordering Portal, or POP as we like to call it, provides AutoVue Partners with a way to submit their orders online. POP offers Partners up-to-date pricing and licensing information, efficient order processing (most data is validated on screen, reducing errors and enabling faster processing), and online order status and tracking. POP is not yet available in every country, but it is available in most. Click here to check out the POP home page (OPN login information required) to see if your country of business is eligible to use POP, and for access to creating an account, watching instructional training viewlets, etc.

    Read the article

  • Improve your Application Performance with .NET Framework 4.0

    Nice article on CodeGuru. The processors we use today are quite different from those of just a few years ago, as most now provide multiple cores and/or multiple threads. With multiple cores and/or threads we need to change how we tackle problems in code. Yes, we can still continue to write code that performs an action in a top-down fashion to complete a task. This approach will continue to work; however, you are not taking advantage of the extra processing power available. Prior to .NET Framework 4.0, the best way to take advantage of the extra cores was to create threads and/or utilize the ThreadPool, but for many developers working with threads or the ThreadPool can be a little daunting. The .NET Framework 4.0 drastically simplifies the process of utilizing that extra processing power through the Task Parallel Library (TPL). The article covers the following topics: "Data Parallelism", "Parallel LINQ (PLINQ)" and "Task Parallelism".
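    As a rough, self-contained illustration of the three topics mentioned above (a hypothetical workload, not code from the article):

        using System;
        using System.Linq;
        using System.Threading.Tasks;

        class TplSketch
        {
            static void Main()
            {
                int[] numbers = Enumerable.Range(1, 1000000).ToArray();

                // Data parallelism: Parallel.For spreads the loop body across the available cores.
                long[] squares = new long[numbers.Length];
                Parallel.For(0, numbers.Length, i => squares[i] = (long)numbers[i] * numbers[i]);

                // PLINQ: an ordinary LINQ query executed in parallel.
                int evenCount = numbers.AsParallel().Count(n => n % 2 == 0);

                // Task parallelism: independent units of work running concurrently.
                Task<long> sumTask = Task.Factory.StartNew(() => squares.Sum());
                Task<long> maxTask = Task.Factory.StartNew(() => squares.Max());
                Task.WaitAll(sumTask, maxTask);

                Console.WriteLine("even: {0}  sum: {1}  max: {2}", evenCount, sumTask.Result, maxTask.Result);
            }
        }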

    Read the article

  • Global keyboard states

    - by Petr Abdulin
    I have the following idea about processing keyboard input. We capture input in the "main" Game class like this:

        protected override void Update(GameTime gameTime)
        {
            this.CurrentKeyboardState = Keyboard.GetState();

            // main Game class logic here

            base.Update(gameTime);
            this.PreviousKeyboardState = this.CurrentKeyboardState;
        }

    Then we reuse the keyboard states (which have internal scope) in all other game components. The reasons behind this are (1) to minimize the keyboard processing load, and (2) to reduce the "pollution" of all other classes with similar keyboard state variables. Since I'm quite a noob in both game and XNA development, I would like to know if all of this sounds reasonable.
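    As an illustration of how a component might consume those two snapshots (a sketch only; the property names follow the question, and the helper method is hypothetical):

        // Requires: using Microsoft.Xna.Framework.Input;
        // Inside the Game class, alongside CurrentKeyboardState / PreviousKeyboardState.
        public bool IsNewKeyPress(Keys key)
        {
            // True only on the frame where the key goes from up to down,
            // which is what most "pressed" (as opposed to "held") checks want.
            return this.CurrentKeyboardState.IsKeyDown(key)
                && this.PreviousKeyboardState.IsKeyUp(key);
        }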

    Read the article

  • libnotify1 (>= 0.4.4) and libnotify1-gtk2.10 for Ubuntu 11.10

    - by Aamir
    I have recently downloaded the latest version of Lotus Symphony 3.0.1, but whenever I try to install it, it reports the following dependency problems. Can anybody tell me where I can find these dependencies? dpkg: dependency problems prevent configuration of symphony: symphony depends on **libnotify1 (>= 0.4.4)**; however: Package libnotify1 is not installed. symphony depends on **libnotify1-gtk2.10**; however: Package libnotify1-gtk2.10 is not installed. dpkg: error processing symphony (--install): dependency problems - leaving unconfigured Errors were encountered while processing: symphony

    Read the article

< Previous Page | 77 78 79 80 81 82 83 84 85 86 87 88  | Next Page >