Search Results

Search found 5572 results on 223 pages for 'cpu'.


  • Killing Stuck Child JVM's

    - by ACShorten
    Note: This facility only applies to Oracle Utilities Application Framework products using COBOL.

    In some situations the Child JVMs may spin. This causes multiple startup/shutdown Child JVM messages to be displayed and recursive Child JVMs to be initiated and shunned. If the following message is displayed: "Unable to establish connection on port …. after waiting .. seconds.", the issue can be caused intermittently by CPU spins in connection with the creation of new processes, specifically Child JVMs. Recursive (or double) invocation of the System.exit call in the remote JVM may be caused by the Process.destroy call that the parent JVM always issues when shunning a JVM. The issue may happen when the thread in the parent JVM that is responsible for the recycling gets stuck, and it affects all Child JVMs. If this issue occurs at your site, there are a number of options to address it:

    - Configure an operating system level kill command to force the Child JVM to be shunned when it becomes stuck.
    - Configure a Process.destroy command to be used if the kill command is not configured or desired.
    - Specify a time tolerance to detect stuck threads before issuing the Process.destroy or kill commands.

    Note: This facility is also used when the Parent JVM is shut down, to ensure no zombie Child JVMs remain.

    The following additional settings must be added to the spl.properties file for the Business Application Server to use this facility:

    - spl.runtime.cobol.remote.kill.command – Specifies the command used to kill the Child JVM process. This can be a command or a script that performs additional processing. The kill.command property can accept two arguments, {pid} and {jvmNumber}, in the specified string; the arguments must be enclosed in curly braces as shown here. Note: The PID is appended to the kill command string unless the {pid} and {jvmNumber} arguments are specified. The jvmNumber can be useful if passed to a script for logging purposes. Note: If a script is used it must be in the path and be executable by the OS user running the system.
    - spl.runtime.cobol.remote.destroy.enabled – Specifies whether to use the Process.destroy command instead of the kill command. Specify true or false; the default value is false. Note: Unless otherwise required, it is recommended to use the kill command option if shunning JVMs is an issue, and to leave this value at its default of false.
    - spl.runtime.cobol.remote.kill.delaysecs – Specifies the number of seconds to wait for the Child JVM to terminate naturally before issuing the Process.destroy or kill commands. The default is 10 seconds.

    For example:

        spl.runtime.cobol.remote.kill.command=kill -9 {pid} {jvmNumber}
        spl.runtime.cobol.remote.destroy.enabled=false
        spl.runtime.cobol.remote.kill.delaysecs=10

    When a Child JVM is to be recycled, these properties are inspected and the spl.runtime.cobol.remote.kill.command is executed if provided. This is done after waiting spl.runtime.cobol.remote.kill.delaysecs seconds to give the JVM time to shut itself down. The spl.runtime.cobol.remote.destroy.enabled property must be set to true AND the spl.runtime.cobol.remote.kill.command omitted for the original Process.destroy command to be used on the process. Note: By default spl.runtime.cobol.remote.destroy.enabled is set to false and the destroy behaviour is therefore disabled. If neither spl.runtime.cobol.remote.kill.command nor spl.runtime.cobol.remote.destroy.enabled is specified, Child JVMs will not be forcibly killed; they will be left to shut themselves down (which may lead to orphan JVMs). If both are specified, the spl.runtime.cobol.remote.kill.command is preferred and spl.runtime.cobol.remote.destroy.enabled is defaulted to false.

    It is recommended to invoke a script that issues the kill command rather than using kill -9 directly. For example, the following sample script ensures that the process id is an active cobjrun process before issuing the kill command:

    forcequit.sh

        #!/bin/sh
        THETIME=`date +"%Y-%m-%d %H:%M:%S"`
        if [ "$1" = "" ]
        then
          echo "$THETIME: Process Id is required" >>$SPLSYSTEMLOGS/forcequit.log
          exit 1
        fi
        javaexec=cobjrun
        ps e $1 | grep -c $javaexec
        if [ $? = 0 ]
        then
          echo "$THETIME: Process $1 is an active $javaexec process -- issuing kill -9 $1" >>$SPLSYSTEMLOGS/forcequit.log
          kill -9 $1
          exit 0
        else
          echo "$THETIME: Process id $1 is not a $javaexec process or not active -- kill will not be issued" >>$SPLSYSTEMLOGS/forcequit.log
          exit 1
        fi

    This script's name would then be specified as the value for the spl.runtime.cobol.remote.kill.command property, for example:

        spl.runtime.cobol.remote.kill.command=forcequit.sh

    The forcequit script does not have any explicit parameters, but the pid is passed automatically. To use the jvmNumber parameter it must be explicitly specified in the command. For example, to call the script forcequit.sh and pass it the pid and the Child JVM number, specify it as follows:

        spl.runtime.cobol.remote.kill.command=forcequit.sh {pid} {jvmNumber}

    The script can then use the JVM number for logging purposes or to further ensure that the correct pid is being killed. If the arguments are omitted, the pid is automatically appended to the spl.runtime.cobol.remote.kill.command string.

    To use this facility the following patches must be installed: Patch 13719584 for Oracle Utilities Application Framework V2.1; Patches 13684595 and 13634933 for Oracle Utilities Application Framework V2.2; Group Fix 4 (as Patch 13640668) for Oracle Utilities Application Framework V4.1.

    Read the article

  • Ranking - an Introduction

    - by PointsToShare
    © 2011 By: Dov Trietsch. All rights reserved

    Ranking

    Ranking is quite common on the internet. Readers are asked to rank their latest reading by clicking on one of 5 (sometimes 10) stars. The number of stars is then converted to a number, and the average number of stars as selected by all the readers is proudly (or shamefully) displayed for future readers. SharePoint 2007 lacked this feature altogether. SharePoint 2010 allows users to rank items in a list or documents in a library (the two are actually the same, because a library is actually a list). But in SP2010 the computation of the average is done later on a timer rather than on the spot as it should be. I suspect that the reason for this shortcoming is that they did not involve a mathematician! Let me explain.

    Ranking is kept in a related list. When a user rates a document, an item is added to the rank-list with the item id, the user name, and his number of stars. The fact that a user already ranked an item prevents him from ranking it again. This prevents the creator of the item from asking his mother to rank it a 5 and do it 753 times, thus stuffing the ballot box. Some systems will allow a user to change his rating, and this will be done by updating the rank-list item. Now, when the timer kicks off, the list is scanned and for each item the rank-list items containing its id are summed up and divided by the number of votes, thus yielding the new average. This is obviously very time consuming and very server intensive.

    In the 18th century an early actuary named James Dodson used what the great Augustus De Morgan (of De Morgan's law) later named commutation tables. The labor involved in computing a life insurance premium was staggering and also very error prone. Clerks with pencil and paper would multiply and add mountains of numbers to do the task. The more steps, the greater the probability of error and the more expensive the process. Commutation tables created a "summary" of many steps and reduced the work 100 fold. So had Microsoft taken a lesson from the history of computation, they would have developed a much faster way of rating that can be done in real time and is also 100 times faster and less CPU intensive.

    How do we do this? We use a form of commutation. We always keep the number of votes and the total of stars. One simple division gives us the average. So we write an event receiver. When a vote is added, we just add the stars to the total-stars and 1 to the number of votes. We then recompute the average. When a vote is updated, we reduce the total by the old vote, increase it by the new vote and leave the number of votes the same. Again we do the division to get the new average. When a vote is deleted (highly unlikely and maybe even prohibited), we reduce the total by that vote and reduce the number of votes by 1… Gone are the days of scanning lists, counting items, and tallying votes, and we have no need for a timer process to run it all.

    This is the first of a few treatises on ranking. Even though I discussed the math and the history thereof, here I am only going to solve the presentation issue. I wanted to create the CSS and JScript needed to display the stars, create the various effects like hovering and clicking (onmouseover, onmouseout, onclick, etc.), and I wanted to create a general solution with any number of stars. When I had it all done, I created the ranking game so that I could test it. The game is interesting in and of itself, so here it is (or go to the games page and select "rank the stars"). BTW, when you play it, look at the source code and see how it was all done. Next, how the 5 stars are displayed in the New and Update forms. When the whole set of articles is done, you'll be able to create the complete solution. That's all folks!
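    The bookkeeping described above is small enough to sketch. Here is a minimal illustration of the commutation idea in Java (not SharePoint event-receiver code; the class and method names are invented for this sketch): keep only the vote count and the star total, and the average is always a single division, with no list scan and no timer job.

        // Running-total rating aggregate: add, change or delete a vote in O(1)
        // instead of re-scanning every rank-list item on a timer.
        public class RatingAggregate {
            private long voteCount = 0;
            private long totalStars = 0;

            // A new vote: add its stars to the total and bump the count.
            public synchronized void addVote(int stars) {
                totalStars += stars;
                voteCount++;
            }

            // A changed vote: remove the old stars, add the new ones; the count is unchanged.
            public synchronized void changeVote(int oldStars, int newStars) {
                totalStars += newStars - oldStars;
            }

            // A deleted vote: remove its stars and reduce the count by one.
            public synchronized void removeVote(int stars) {
                totalStars -= stars;
                voteCount--;
            }

            // The displayed average is always one division away.
            public synchronized double average() {
                return voteCount == 0 ? 0.0 : (double) totalStars / voteCount;
            }
        }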

    Read the article

  • Why lock-free data structures just aren't lock-free enough

    - by Alex.Davies
    Today's post will explore why the current ways to communicate between threads don't scale, and show you a possible way to build scalable parallel programming on top of shared memory.

    The problem with shared memory

    Soon, we will have dozens, hundreds and then millions of cores in our computers. It's inevitable, because individual cores just can't get much faster. At some point, that's going to mean that we have to rethink our architecture entirely, as millions of cores can't all access a shared memory space efficiently. But millions of cores are still a long way off, and in the meantime we'll see machines with dozens of cores, struggling with shared memory.

    Alex's tip: The best way for an application to make use of that increasing parallel power is to use a concurrency model like actors, that deals with synchronisation issues for you. Then, the maintainer of the actors framework can find the most efficient way to coordinate access to shared memory to allow your actors to pass messages to each other efficiently.

    At the moment, NAct uses the .NET thread pool and a few locks to marshal messages. It works well on dual and quad core machines, but it won't scale to more cores. Every time we use a lock, our core performs an atomic memory operation (e.g. CAS) on a cell of memory representing the lock, so it's sure that no other core can possibly have that lock. This is very fast when the lock isn't contended, but we need to notify all the other cores, in case they held the cell of memory in a cache. As the number of cores increases, the total cost of a lock increases linearly.

    A lot of work has been done on "lock-free" data structures, which avoid locks by using atomic memory operations directly. These give fairly dramatic performance improvements, particularly on systems with a few (2 to 4) cores. The .NET 4 concurrent collections in System.Collections.Concurrent are mostly lock-free. However, lock-free data structures still don't scale indefinitely, because any use of an atomic memory operation still involves every core in the system.

    A sync-free data structure

    Some concurrent data structures are possible to write in a completely synchronization-free way, without using any atomic memory operations. One useful example is a single producer, single consumer (SPSC) queue. It's easy to write a sync-free fixed size SPSC queue using a circular buffer*. Slightly trickier is a queue that grows as needed. You can use a linked list to represent the queue, but if you leave the nodes to be garbage collected once you're done with them, the GC will need to involve all the cores in collecting the finished nodes. Instead, I've implemented a proof of concept inspired by this Intel article which reuses the nodes by putting them in a second queue to send back to the producer.

    * In all these cases, you need to use memory barriers correctly, but these are local to a core, so don't have the same scalability problems as atomic memory operations.

    Performance tests

    I tried benchmarking my SPSC queue against the .NET ConcurrentQueue, and against a standard Queue protected by locks. In some ways, this isn't a fair comparison, because both of these support multiple producers and multiple consumers, but I'll come to that later. I started on my dual-core laptop, running a simple test that had one thread producing 64 bit integers, and another consuming them, to measure the pure overhead of the queue. So, nothing very interesting here. Both concurrent collections perform better than the lock-based one as expected, but there's not a lot to choose between the ConcurrentQueue and my SPSC queue. I was a little disappointed, but then, the .NET Framework team spent a lot longer optimising it than I did.

    So I dug out a more powerful machine that Red Gate's DBA tools team had been using for testing. It is a 6 core Intel i7 machine with hyperthreading, adding up to 12 logical cores. Now the results get more interesting. As I increased the number of producer-consumer pairs to 6 (to saturate all 12 logical cores), the locking approach was slow, and got even slower, as you'd expect. What I didn't expect to be so clear was the drop-off in performance of the lock-free ConcurrentQueue. I could see the machine only using about 20% of available CPU cycles when it should have been saturated. My interpretation is that as all the cores used atomic memory operations to safely access the queue, they ended up spending most of the time notifying each other about cache lines that need invalidating. The sync-free approach scaled perfectly, despite still working via shared memory, which, after all, should still be a bottleneck. I can't quite believe that the results are so clear, so if you can think of any other effects that might cause them, please comment!

    Obviously, this benchmark isn't realistic because we're only measuring the overhead of the queue. Any real workload, even on a machine with 12 cores, would dwarf the overhead, and there'd be no point worrying about this effect. But would that be true on a machine with 100 cores?

    Still to be solved

    The trouble is, you can't build many concurrent algorithms using only an SPSC queue to communicate. In particular, I can't see a way to build something as general purpose as actors on top of just SPSC queues. Fundamentally, an actor needs to be able to receive messages from multiple other actors, which seems to need an MPSC queue. I've been thinking about ways to build a sync-free MPSC queue out of multiple SPSC queues and some kind of sign-up mechanism. Hopefully I'll have something to tell you about soon, but leave a comment if you have any ideas.
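    To make the circular-buffer idea concrete, here is a minimal sketch of a fixed-size SPSC queue in Java (the post's own implementation is .NET, so this is an illustration of the technique rather than the author's code; the class name is invented). The only synchronisation is the visibility and ordering guarantee of the two volatile indices: each index is written by exactly one thread, so there are no locks and no compare-and-swap operations.

        // Fixed-size single-producer/single-consumer ring buffer.
        // offer() must only ever be called from the producer thread,
        // poll() only from the consumer thread.
        public final class SpscRingBuffer<T> {
            private final Object[] buffer;
            private final int capacity;
            private volatile long head = 0; // next slot to read; advanced only by the consumer
            private volatile long tail = 0; // next slot to write; advanced only by the producer

            public SpscRingBuffer(int capacity) {
                this.capacity = capacity;
                this.buffer = new Object[capacity];
            }

            // Producer side: returns false if the queue is full.
            public boolean offer(T item) {
                long t = tail;
                if (t - head == capacity) {
                    return false; // full
                }
                buffer[(int) (t % capacity)] = item;
                tail = t + 1; // volatile write publishes the item to the consumer
                return true;
            }

            // Consumer side: returns null if the queue is empty.
            @SuppressWarnings("unchecked")
            public T poll() {
                long h = head;
                if (h == tail) {
                    return null; // empty
                }
                int slot = (int) (h % capacity);
                T item = (T) buffer[slot];
                buffer[slot] = null; // release the reference for the GC
                head = h + 1; // volatile write hands the slot back to the producer
                return item;
            }
        }

    The volatile writes still cost a memory barrier, which matches the post's footnote: barriers are needed, but they stay local to the writing core rather than forcing every core to arbitrate a contended atomic operation.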

    Read the article

  • Java fatal error, don't know what it means

    - by Thomas King
    It happens at the same place in my code (albeit not the first time the method is executed) but I can't make head or tail of what is wrong. (Doubly so as it's code for a robot). Be most appreciative if someone can give me an idea of what kind of problem it is. I assume it's to do with threading (multi-threaded app) but I don't really know what?!? Worried as deadline for uni project is looming!!! The message: # A fatal error has been detected by the Java Runtime Environment: # SIGSEGV (0xb) at pc=0xb70f0ca7, pid=5065, tid=2145643376 # JRE version: 6.0_15-b03 Java VM: Java HotSpot(TM) Server VM (14.1-b02 mixed mode linux-x86 ) Problematic frame: V [libjvm.so+0x4c9ca7] # An error report file with more information is saved as: /home/thomas/workspace/sir13/hs_err_pid5065.log # If you would like to submit a bug report, please visit: http://java.sun.com/webapps/bugreport/crash.jsp # The log: # A fatal error has been detected by the Java Runtime Environment: # SIGSEGV (0xb) at pc=0xb70f0ca7, pid=5065, tid=2145643376 # JRE version: 6.0_15-b03 Java VM: Java HotSpot(TM) Server VM (14.1-b02 mixed mode linux-x86 ) Problematic frame: V [libjvm.so+0x4c9ca7] # If you would like to submit a bug report, please visit: http://java.sun.com/webapps/bugreport/crash.jsp # --------------- T H R E A D --------------- Current thread (0x0904ec00): JavaThread "CompilerThread1" daemon [_thread_in_native, id=5078, stack(0x7fdbe000,0x7fe3f000)] siginfo:si_signo=SIGSEGV: si_errno=0, si_code=1 (SEGV_MAPERR), si_addr=0x00000004 Registers: EAX=0x00000000, EBX=0xb733d720, ECX=0x000003b4, EDX=0x00000000 ESP=0x7fe3bf30, EBP=0x7fe3bf78, ESI=0x7fe3c250, EDI=0x7e9a7790 EIP=0xb70f0ca7, CR2=0x00000004, EFLAGS=0x00010283 Top of Stack: (sp=0x7fe3bf30) 0x7fe3bf30: 00020008 7ec8de5c 7fe3c250 00000000 0x7fe3bf40: 7f610451 00001803 7e9a7790 000003f5 0x7fe3bf50: 7e920030 7f239910 7f23b349 7f23b348 0x7fe3bf60: 7f550e35 7fe3c250 0000021b b733d720 0x7fe3bf70: 000003bc 7f23db10 7fe3bfc8 b70f0997 0x7fe3bf80: 7fe3c240 7f23db10 00000000 00000002 0x7fe3bf90: 00000000 7fe3c1b0 00000000 00000000 0x7fe3bfa0: 00004000 00000020 7ec88870 00000002 Instructions: (pc=0xb70f0ca7) 0xb70f0c97: 7d 08 8b 87 c8 02 00 00 89 c7 8b 45 c4 8b 14 87 0xb70f0ca7: 8b 42 04 8b 00 85 c0 75 22 8b 4e 04 8b 52 1c 39 Stack: [0x7fdbe000,0x7fe3f000], sp=0x7fe3bf30, free space=503k Native frames: (J=compiled Java code, j=interpreted, Vv=VM code, C=native code) V [libjvm.so+0x4c9ca7] V [libjvm.so+0x4c9997] V [libjvm.so+0x4c6e23] V [libjvm.so+0x25b75f] V [libjvm.so+0x2585df] V [libjvm.so+0x1f2c2f] V [libjvm.so+0x260ceb] V [libjvm.so+0x260609] V [libjvm.so+0x617286] V [libjvm.so+0x6108fe] V [libjvm.so+0x531c4e] C [libpthread.so.0+0x580e] Current CompileTask: C2:133 ! 
BehaviourLeftUnexplored.action()V (326 bytes) --------------- P R O C E S S --------------- Java Threads: ( = current thread ) 0x08fb5400 JavaThread "DestroyJavaVM" [_thread_blocked, id=5066, stack(0xb6bb0000,0xb6c01000)] 0x09213c00 JavaThread "Thread-4" [_thread_blocked, id=5085, stack(0x7eeaf000,0x7ef00000)] 0x09212c00 JavaThread "Thread-3" [_thread_in_Java, id=5084, stack(0x7f863000,0x7f8b4000)] 0x09206800 JavaThread "AWT-XAWT" daemon [_thread_in_native, id=5083, stack(0x7f8b4000,0x7f905000)] 0x091b7400 JavaThread "Java2D Disposer" daemon [_thread_blocked, id=5082, stack(0x7f93e000,0x7f98f000)] 0x09163c00 JavaThread "Thread-0" [_thread_in_native, id=5081, stack(0x7fc87000,0x7fcd8000)] 0x09050c00 JavaThread "Low Memory Detector" daemon [_thread_blocked, id=5079, stack(0x7fd6d000,0x7fdbe000)] =0x0904ec00 JavaThread "CompilerThread1" daemon [_thread_in_native, id=5078, stack(0x7fdbe000,0x7fe3f000)] 0x0904c000 JavaThread "CompilerThread0" daemon [_thread_blocked, id=5077, stack(0x7fe3f000,0x7fec0000)] 0x0904a800 JavaThread "Signal Dispatcher" daemon [_thread_blocked, id=5076, stack(0x7fec0000,0x7ff11000)] 0x09036c00 JavaThread "Finalizer" daemon [_thread_blocked, id=5075, stack(0x7ff57000,0x7ffa8000)] 0x09035400 JavaThread "Reference Handler" daemon [_thread_blocked, id=5074, stack(0x7ffa8000,0x7fff9000)] Other Threads: 0x09031400 VMThread [stack: 0x7fff9000,0x8007a000] [id=5073] 0x09052800 WatcherThread [stack: 0x7fcec000,0x7fd6d000] [id=5080] VM state:not at safepoint (normal execution) VM Mutex/Monitor currently owned by a thread: None Heap PSYoungGen total 46784K, used 32032K [0xae650000, 0xb3440000, 0xb3a50000) eden space 46720K, 68% used [0xae650000,0xb0588f48,0xb13f0000) from space 64K, 95% used [0xb3390000,0xb339f428,0xb33a0000) to space 384K, 0% used [0xb33e0000,0xb33e0000,0xb3440000) PSOldGen total 43008K, used 20872K [0x84650000, 0x87050000, 0xae650000) object space 43008K, 48% used [0x84650000,0x85ab2308,0x87050000) PSPermGen total 16384K, used 5115K [0x80650000, 0x81650000, 0x84650000) object space 16384K, 31% used [0x80650000,0x80b4ec30,0x81650000) Dynamic libraries: 08048000-08052000 r-xp 00000000 08:05 34708 /usr/lib/jvm/java-6-sun-1.6.0.15/jre/bin/java 08052000-08053000 rwxp 00009000 08:05 34708 /usr/lib/jvm/java-6-sun-1.6.0.15/jre/bin/java 08faf000-09220000 rwxp 00000000 00:00 0 [heap] 7e900000-7e9f9000 rwxp 00000000 00:00 0 7e9f9000-7ea00000 ---p 00000000 00:00 0 7ea00000-7ea41000 rwxp 00000000 00:00 0 7ea41000-7eb00000 ---p 00000000 00:00 0 7eb00000-7ebfc000 rwxp 00000000 00:00 0 7ebfc000-7ec00000 ---p 00000000 00:00 0 7ec00000-7ecf7000 rwxp 00000000 00:00 0 7ecf7000-7ed00000 ---p 00000000 00:00 0 7ed00000-7ede7000 rwxp 00000000 00:00 0 7ede7000-7ee00000 ---p 00000000 00:00 0 7eeaf000-7eeb2000 ---p 00000000 00:00 0 7eeb2000-7ef00000 rwxp 00000000 00:00 0 7ef00000-7eff9000 rwxp 00000000 00:00 0 7eff9000-7f000000 ---p 00000000 00:00 0 7f100000-7f1f6000 rwxp 00000000 00:00 0 7f1f6000-7f200000 ---p 00000000 00:00 0 7f200000-7f2fc000 rwxp 00000000 00:00 0 7f2fc000-7f300000 ---p 00000000 00:00 0 7f300000-7f4fe000 rwxp 00000000 00:00 0 7f4fe000-7f500000 ---p 00000000 00:00 0 7f500000-7f5fb000 rwxp 00000000 00:00 0 7f5fb000-7f600000 ---p 00000000 00:00 0 7f600000-7f6f9000 rwxp 00000000 00:00 0 7f6f9000-7f700000 ---p 00000000 00:00 0 7f700000-7f800000 rwxp 00000000 00:00 0 7f830000-7f836000 r-xs 00000000 08:05 241611 /var/cache/fontconfig/945677eb7aeaf62f1d50efc3fb3ec7d8-x86.cache-2 7f836000-7f838000 r-xs 00000000 08:05 241612 
/var/cache/fontconfig/99e8ed0e538f840c565b6ed5dad60d56-x86.cache-2 7f838000-7f83b000 r-xs 00000000 08:05 241620 /var/cache/fontconfig/e383d7ea5fbe662a33d9b44caf393297-x86.cache-2 7f83b000-7f846000 r-xs 00000000 08:05 241600 /var/cache/fontconfig/0f34bcd4b6ee430af32735b75db7f02b-x86.cache-2 7f863000-7f866000 ---p 00000000 00:00 0 7f866000-7f8b4000 rwxp 00000000 00:00 0 7f8b4000-7f8b7000 ---p 00000000 00:00 0 7f8b7000-7f905000 rwxp 00000000 00:00 0 7f905000-7f909000 r-xp 00000000 08:05 5012 /usr/lib/libXfixes.so.3.1.0 7f909000-7f90a000 r-xp 00003000 08:05 5012 /usr/lib/libXfixes.so.3.1.0 7f90a000-7f90b000 rwxp 00004000 08:05 5012 /usr/lib/libXfixes.so.3.1.0 7f90b000-7f913000 r-xp 00000000 08:05 5032 /usr/lib/libXrender.so.1.3.0 7f913000-7f914000 r-xp 00007000 08:05 5032 /usr/lib/libXrender.so.1.3.0 7f914000-7f915000 rwxp 00008000 08:05 5032 /usr/lib/libXrender.so.1.3.0 7f915000-7f91e000 r-xp 00000000 08:05 5004 /usr/lib/libXcursor.so.1.0.2 7f91e000-7f91f000 r-xp 00008000 08:05 5004 /usr/lib/libXcursor.so.1.0.2 7f91f000-7f920000 rwxp 00009000 08:05 5004 /usr/lib/libXcursor.so.1.0.2 7f92f000-7f931000 r-xs 00000000 08:05 241622 /var/cache/fontconfig/f24b2111ab8703b4e963115a8cf14259-x86.cache-2 7f931000-7f932000 r-xs 00000000 08:05 241606 /var/cache/fontconfig/4c73fe0c47614734b17d736dbde7580a-x86.cache-2 7f932000-7f936000 r-xs 00000000 08:05 241599 /var/cache/fontconfig/062808c12e6e608270f93bb230aed730-x86.cache-2 7f936000-7f93e000 r-xs 00000000 08:05 241617 /var/cache/fontconfig/d52a8644073d54c13679302ca1180695-x86.cache-2 7f93e000-7f941000 ---p 00000000 00:00 0 7f941000-7f98f000 rwxp 00000000 00:00 0 7f98f000-7fa0e000 r-xp 00000000 08:05 34755 /usr/lib/jvm/java-6-sun-1.6.0.15/jre/lib/i386/libfontmanager.so 7fa0e000-7fa19000 rwxp 0007e000 08:05 34755 /usr/lib/jvm/java-6-sun-1.6.0.15/jre/lib/i386/libfontmanager.so 7fa19000-7fa1d000 rwxp 00000000 00:00 0 7fa1d000-7fa21000 r-xp 00000000 08:05 5008 /usr/lib/libXdmcp.so.6.0.0 7fa21000-7fa22000 rwxp 00003000 08:05 5008 /usr/lib/libXdmcp.so.6.0.0 7fa22000-7fa3e000 r-xp 00000000 08:05 6029 /usr/lib/libxcb.so.1.1.0 7fa3e000-7fa3f000 r-xp 0001c000 08:05 6029 /usr/lib/libxcb.so.1.1.0 7fa3f000-7fa40000 rwxp 0001d000 08:05 6029 /usr/lib/libxcb.so.1.1.0 7fa40000-7fa42000 r-xp 00000000 08:05 4997 /usr/lib/libXau.so.6.0.0 7fa42000-7fa43000 r-xp 00001000 08:05 4997 /usr/lib/libXau.so.6.0.0 7fa43000-7fa44000 rwxp 00002000 08:05 4997 /usr/lib/libXau.so.6.0.0 7fa44000-7fb6e000 r-xp 00000000 08:05 4991 /usr/lib/libX11.so.6.2.0 7fb6e000-7fb6f000 ---p 0012a000 08:05 4991 /usr/lib/libX11.so.6.2.0 7fb6f000-7fb70000 r-xp 0012a000 08:05 4991 /usr/lib/libX11.so.6.2.0 7fb70000-7fb72000 rwxp 0012b000 08:05 4991 /usr/lib/libX11.so.6.2.0 7fb72000-7fb73000 rwxp 00000000 00:00 0 7fb73000-7fb81000 r-xp 00000000 08:05 5010 /usr/lib/libXext.so.6.4.0 7fb81000-7fb82000 r-xp 0000d000 08:05 5010 /usr/lib/libXext.so.6.4.0 7fb82000-7fb83000 rwxp 0000e000 08:05 5010 /usr/lib/libXext.so.6.4.0 7fb83000-7fb84000 r-xs 00000000 08:05 241614 /var/cache/fontconfig/c05880de57d1f5e948fdfacc138775d9-x86.cache-2 7fb84000-7fb87000 r-xs 00000000 08:05 241613 /var/cache/fontconfig/a755afe4a08bf5b97852ceb7400b47bc-x86.cache-2 7fb87000-7fb8a000 r-xs 00000000 08:05 241608 /var/cache/fontconfig/6d41288fd70b0be22e8c3a91e032eec0-x86.cache-2 7fb8a000-7fb92000 r-xs 00000000 08:05 219560 /var/cache/fontconfig/e13b20fdb08344e0e664864cc2ede53d-x86.cache-2 7fb92000-7fbd5000 r-xp 00000000 08:05 34752 /usr/lib/jvm/java-6-sun-1.6.0.15/jre/lib/i386/xawt/libmawt.so 7fbd5000-7fbd7000 rwxp 00043000 08:05 34752 
/usr/lib/jvm/java-6-sun-1.6.0.15/jre/lib/i386/xawt/libmawt.so 7fbd7000-7fbd8000 rwxp 00000000 00:00 0 7fbd8000-7fc5c000 r-xp 00000000 08:05 34750 /usr/lib/jvm/java-6-sun-1.6.0.15/jre/lib/i386/libawt.so 7fc5c000-7fc63000 rwxp 00084000 08:05 34750 /usr/lib/jvm/java-6-sun-1.6.0.15/jre/lib/i386/libawt.so 7fc63000-7fc87000 rwxp 00000000 00:00 0 7fc87000-7fc8a000 ---p 00000000 00:00 0 7fc8a000-7fcd8000 rwxp 00000000 00:00 0 7fcd8000-7fceb000 r-xp 00000000 08:05 34739 /usr/lib/jvm/java-6-sun-1.6.0.15/jre/lib/i386/libnet.so 7fceb000-7fcec000 rwxp 00013000 08:05 34739 /usr/lib/jvm/java-6-sun-1.6.0.15/jre/lib/i386/libnet.so 7fcec000-7fced000 ---p 00000000 00:00 0 7fced000-7fd6d000 rwxp 00000000 00:00 0 7fd6d000-7fd70000 ---p 00000000 00:00 0 7fd70000-7fdbe000 rwxp 00000000 00:00 0 7fdbe000-7fdc1000 ---p 00000000 00:00 0 7fdc1000-7fe3f000 rwxp 00000000 00:00 0 7fe3f000-7fe42000 ---p 00000000 00:00 0 7fe42000-7fec0000 rwxp 00000000 00:00 0 7fec0000-7fec3000 ---p 00000000 00:00 0 7fec3000-7ff11000 rwxp 00000000 00:00 0 7ff11000-7ff18000 r-xs 00000000 08:05 134616 /usr/lib/gconv/gconv-modules.cache 7ff18000-7ff57000 r-xp 00000000 08:05 136279 /usr/lib/locale/en_GB.utf8/LC_CTYPE 7ff57000-7ff5a000 ---p 00000000 00:00 0 7ff5a000-7ffa8000 rwxp 00000000 00:00 0 7ffa8000-7ffab000 ---p 00000000 00:00 0 7ffab000-7fff9000 rwxp 00000000 00:00 0 7fff9000-7fffa000 ---p 00000000 00:00 0 7fffa000-800ad000 rwxp 00000000 00:00 0 800ad000-80243000 r-xs 02fb3000 08:05 34883 /usr/lib/jvm/java-6-sun-1.6.0.15/jre/lib/rt.jar 80243000-80244000 ---p 00000000 00:00 0 80244000-802c4000 rwxp 00000000 00:00 0 802c4000-802c5000 ---p 00000000 00:00 0 802c5000-8034d000 rwxp 00000000 00:00 0 8034d000-80365000 rwxp 00000000 00:00 0 80365000-8037a000 rwxp 00000000 00:00 0 8037a000-804b5000 rwxp 00000000 00:00 0 804b5000-804bd000 rwxp 00000000 00:00 0 804bd000-804d5000 rwxp 00000000 00:00 0 804d5000-804ea000 rwxp 00000000 00:00 0 804ea000-80625000 rwxp 00000000 00:00 0 80625000-8064c000 rwxp 00000000 00:00 0 8064c000-8064f000 rwxp 00000000 00:00 0 8064f000-81650000 rwxp 00000000 00:00 0 81650000-84650000 rwxp 00000000 00:00 0 84650000-87050000 rwxp 00000000 00:00 0 87050000-ae650000 rwxp 00000000 00:00 0 ae650000-b3440000 rwxp 00000000 00:00 0 b3440000-b3a50000 rwxp 00000000 00:00 0 b3a50000-b3a52000 r-xs 00000000 08:05 241602 /var/cache/fontconfig/2c5ba8142dffc8bf0377700342b8ca1a-x86.cache-2 b3a52000-b3a5b000 r-xp 00000000 08:05 5018 /usr/lib/libXi.so.6.0.0 b3a5b000-b3a5c000 r-xp 00008000 08:05 5018 /usr/lib/libXi.so.6.0.0 b3a5c000-b3a5d000 rwxp 00009000 08:05 5018 /usr/lib/libXi.so.6.0.0 b3a5d000-b3a66000 rwxp 00000000 00:00 0 b3a66000-b3b1d000 rwxp 00000000 00:00 0 b3b1d000-b3d5d000 rwxp 00000000 00:00 0 b3d5d000-b6b1d000 rwxp 00000000 00:00 0 b6b1d000-b6b2c000 r-xp 00000000 08:05 34735 /usr/lib/jvm/java-6-sun-1.6.0.15/jre/lib/i386/libzip.so b6b2c000-b6b2e000 rwxp 0000e000 08:05 34735 /usr/lib/jvm/java-6-sun-1.6.0.15/jre/lib/i386/libzip.so b6b2e000-b6b38000 r-xp 00000000 08:05 1042 /lib/tls/i686/cmov/libnss_files-2.10.1.so b6b38000-b6b39000 r-xp 00009000 08:05 1042 /lib/tls/i686/cmov/libnss_files-2.10.1.so b6b39000-b6b3a000 rwxp 0000a000 08:05 1042 /lib/tls/i686/cmov/libnss_files-2.10.1.so b6b3a000-b6b43000 r-xp 00000000 08:05 1055 /lib/tls/i686/cmov/libnss_nis-2.10.1.so b6b43000-b6b44000 r-xp 00008000 08:05 1055 /lib/tls/i686/cmov/libnss_nis-2.10.1.so b6b44000-b6b45000 rwxp 00009000 08:05 1055 /lib/tls/i686/cmov/libnss_nis-2.10.1.so b6b45000-b6b4b000 r-xp 00000000 08:05 1028 /lib/tls/i686/cmov/libnss_compat-2.10.1.so 
b6b4b000-b6b4c000 r-xp 00005000 08:05 1028 /lib/tls/i686/cmov/libnss_compat-2.10.1.so b6b4c000-b6b4d000 rwxp 00006000 08:05 1028 /lib/tls/i686/cmov/libnss_compat-2.10.1.so b6b4d000-b6b54000 r-xs 00035000 08:05 304369 /home/thomas/workspace/sir13/javaclient/jars/javaclient.jar b6b54000-b6b5c000 rwxs 00000000 08:05 393570 /tmp/hsperfdata_thomas/5065 b6b5c000-b6b6f000 r-xp 00000000 08:05 1020 /lib/tls/i686/cmov/libnsl-2.10.1.so b6b6f000-b6b70000 r-xp 00012000 08:05 1020 /lib/tls/i686/cmov/libnsl-2.10.1.so b6b70000-b6b71000 rwxp 00013000 08:05 1020 /lib/tls/i686/cmov/libnsl-2.10.1.so b6b71000-b6b73000 rwxp 00000000 00:00 0 b6b73000-b6b77000 r-xp 00000000 08:05 5038 /usr/lib/libXtst.so.6.1.0 b6b77000-b6b78000 r-xp 00004000 08:05 5038 /usr/lib/libXtst.so.6.1.0 b6b78000-b6b79000 rwxp 00005000 08:05 5038 /usr/lib/libXtst.so.6.1.0 b6b79000-b6b7f000 r-xp 00000000 08:05 34723 /usr/lib/jvm/java-6-sun-1.6.0.15/jre/lib/i386/native_threads/libhpi.so b6b7f000-b6b80000 rwxp 00006000 08:05 34723 /usr/lib/jvm/java-6-sun-1.6.0.15/jre/lib/i386/native_threads/libhpi.so b6b80000-b6b81000 rwxp 00000000 00:00 0 b6b81000-b6b82000 r-xp 00000000 00:00 0 b6b82000-b6ba5000 r-xp 00000000 08:05 34733 /usr/lib/jvm/java-6-sun-1.6.0.15/jre/lib/i386/libjava.so b6ba5000-b6ba7000 rwxp 00023000 08:05 34733 /usr/lib/jvm/java-6-sun-1.6.0.15/jre/lib/i386/libjava.so b6ba7000-b6bae000 r-xp 00000000 08:05 1733 /lib/tls/i686/cmov/librt-2.10.1.so b6bae000-b6baf000 r-xp 00006000 08:05 1733 /lib/tls/i686/cmov/librt-2.10.1.so b6baf000-b6bb0000 rwxp 00007000 08:05 1733 /lib/tls/i686/cmov/librt-2.10.1.so b6bb0000-b6bb3000 ---p 00000000 00:00 0 b6bb3000-b6c01000 rwxp 00000000 00:00 0 b6c01000-b6c25000 r-xp 00000000 08:05 1016 /lib/tls/i686/cmov/libm-2.10.1.so b6c25000-b6c26000 r-xp 00023000 08:05 1016 /lib/tls/i686/cmov/libm-2.10.1.so b6c26000-b6c27000 rwxp 00024000 08:05 1016 /lib/tls/i686/cmov/libm-2.10.1.so b6c27000-b72f4000 r-xp 00000000 08:05 34724 /usr/lib/jvm/java-6-sun-1.6.0.15/jre/lib/i386/server/libjvm.so b72f4000-b7341000 rwxp 006cc000 08:05 34724 /usr/lib/jvm/java-6-sun-1.6.0.15/jre/lib/i386/server/libjvm.so b7341000-b7765000 rwxp 00000000 00:00 0 b7765000-b78a3000 r-xp 00000000 08:05 967 /lib/tls/i686/cmov/libc-2.10.1.so b78a3000-b78a4000 ---p 0013e000 08:05 967 /lib/tls/i686/cmov/libc-2.10.1.so b78a4000-b78a6000 r-xp 0013e000 08:05 967 /lib/tls/i686/cmov/libc-2.10.1.so b78a6000-b78a7000 rwxp 00140000 08:05 967 /lib/tls/i686/cmov/libc-2.10.1.so b78a7000-b78aa000 rwxp 00000000 00:00 0 b78aa000-b78ac000 r-xp 00000000 08:05 1014 /lib/tls/i686/cmov/libdl-2.10.1.so b78ac000-b78ad000 r-xp 00001000 08:05 1014 /lib/tls/i686/cmov/libdl-2.10.1.so b78ad000-b78ae000 rwxp 00002000 08:05 1014 /lib/tls/i686/cmov/libdl-2.10.1.so b78ae000-b78b5000 r-xp 00000000 08:05 34734 /usr/lib/jvm/java-6-sun-1.6.0.15/jre/lib/i386/jli/libjli.so b78b5000-b78b7000 rwxp 00006000 08:05 34734 /usr/lib/jvm/java-6-sun-1.6.0.15/jre/lib/i386/jli/libjli.so b78b7000-b78b8000 rwxp 00000000 00:00 0 b78b8000-b78cd000 r-xp 00000000 08:05 1081 /lib/tls/i686/cmov/libpthread-2.10.1.so b78cd000-b78ce000 r-xp 00014000 08:05 1081 /lib/tls/i686/cmov/libpthread-2.10.1.so b78ce000-b78cf000 rwxp 00015000 08:05 1081 /lib/tls/i686/cmov/libpthread-2.10.1.so b78cf000-b78d1000 rwxp 00000000 00:00 0 b78d1000-b78d2000 r-xs 00000000 08:05 161622 /var/cache/fontconfig/4794a0821666d79190d59a36cb4f44b5-x86.cache-2 b78d2000-b78d4000 r-xs 00000000 08:05 241610 /var/cache/fontconfig/7ef2298fde41cc6eeb7af42e48b7d293-x86.cache-2 b78d4000-b78df000 r-xp 00000000 08:05 34732 
/usr/lib/jvm/java-6-sun-1.6.0.15/jre/lib/i386/libverify.so b78df000-b78e0000 rwxp 0000b000 08:05 34732 /usr/lib/jvm/java-6-sun-1.6.0.15/jre/lib/i386/libverify.so b78e0000-b78e2000 rwxp 00000000 00:00 0 b78e2000-b78e3000 r-xp 00000000 00:00 0 [vdso] b78e3000-b78fe000 r-xp 00000000 08:05 64 /lib/ld-2.10.1.so b78fe000-b78ff000 r-xp 0001a000 08:05 64 /lib/ld-2.10.1.so b78ff000-b7900000 rwxp 0001b000 08:05 64 /lib/ld-2.10.1.so bfc33000-bfc48000 rwxp 00000000 00:00 0 [stack] VM Arguments: jvm_args: -Dfile.encoding=UTF-8 java_command: Main Launcher Type: SUN_STANDARD Environment Variables: PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games USERNAME=thomas LD_LIBRARY_PATH=/usr/lib/jvm/java-6-sun-1.6.0.15/jre/lib/i386/server:/usr/lib/jvm/java-6-sun-1.6.0.15/jre/lib/i386:/usr/lib/jvm/java-6-sun-1.6.0.15/jre/../lib/i386:/usr/lib/jvm/java-6-sun-1.6.0.15/jre/lib/i386/client:/usr/lib/jvm/java-6-sun-1.6.0.15/jre/lib/i386:/usr/lib/xulrunner-addons:/usr/lib/xulrunner-addons SHELL=/bin/bash DISPLAY=:0.0 Signal Handlers: SIGSEGV: [libjvm.so+0x650690], sa_mask[0]=0x7ffbfeff, sa_flags=0x10000004 SIGBUS: [libjvm.so+0x650690], sa_mask[0]=0x7ffbfeff, sa_flags=0x10000004 SIGFPE: [libjvm.so+0x52f580], sa_mask[0]=0x7ffbfeff, sa_flags=0x10000004 SIGPIPE: [libjvm.so+0x52f580], sa_mask[0]=0x7ffbfeff, sa_flags=0x10000004 SIGXFSZ: [libjvm.so+0x52f580], sa_mask[0]=0x7ffbfeff, sa_flags=0x10000004 SIGILL: [libjvm.so+0x52f580], sa_mask[0]=0x7ffbfeff, sa_flags=0x10000004 SIGUSR1: SIG_DFL, sa_mask[0]=0x00000000, sa_flags=0x00000000 SIGUSR2: [libjvm.so+0x532170], sa_mask[0]=0x00000004, sa_flags=0x10000004 SIGHUP: [libjvm.so+0x531ea0], sa_mask[0]=0x7ffbfeff, sa_flags=0x10000004 SIGINT: [libjvm.so+0x531ea0], sa_mask[0]=0x7ffbfeff, sa_flags=0x10000004 SIGTERM: [libjvm.so+0x531ea0], sa_mask[0]=0x7ffbfeff, sa_flags=0x10000004 SIGQUIT: [libjvm.so+0x531ea0], sa_mask[0]=0x7ffbfeff, sa_flags=0x10000004 --------------- S Y S T E M --------------- OS:squeeze/sid uname:Linux 2.6.31-20-generic #57-Ubuntu SMP Mon Feb 8 09:05:19 UTC 2010 i686 libc:glibc 2.10.1 NPTL 2.10.1 rlimit: STACK 8192k, CORE 0k, NPROC infinity, NOFILE 1024, AS infinity load average:1.07 0.55 0.23 CPU:total 2 (2 cores per cpu, 1 threads per core) family 6 model 15 stepping 13, cmov, cx8, fxsr, mmx, sse, sse2, sse3, ssse3 Memory: 4k page, physical 3095836k(1519972k free), swap 1261060k(1261060k free) vm_info: Java HotSpot(TM) Server VM (14.1-b02) for linux-x86 JRE (1.6.0_15-b03), built on Jul 2 2009 15:49:13 by "java_re" with gcc 3.2.1-7a (J2SE release) time: Mon Mar 22 12:08:40 2010 elapsed time: 21 seconds

    Read the article

  • Autoscaling in a modern world… Part 2

    - by Steve Loethen
    When we last left off, we had a web application spinning away in the cloud, and a local console application watching it and reacting to changes in demand, with the reactions specified by a set of rules. Let's talk about those rules.

    Constraints. The first set of rules this application answered to were the constraints. Here is what they looked like:

        <constraintRules>
          <rule name="default" enabled="true" rank="1" description="The default constraint rule">
            <actions>
              <range min="1" max="4" target="AutoscalingApplicationRole"/>
            </actions>
          </rule>
        </constraintRules>

    Pretty basic. We have one role, the "AutoscalingApplicationRole", and we have decided to have it live within a range of 1 to 4. This rule does not adjust anything but instead sets limits on what other rules can do. It has a rank, so you can specify other sets of constraints, perhaps based on time or date, to allow for deviations from this set. But for now, let's keep it simple. In the real world, you would probably use the minimum to set a lower-end SLA. A common value might be 2, to prevent the reactive rules from ever taking you down to 1 role. The maximum is often used to keep a rule from driving the cost up, setting an upper limit to prevent you from waking up one morning and finding a bill for hundreds of instances you didn't expect. So, here we have the range we want our application to live inside. This is good for our investigation and testing.

    Next, let's take a look at the reactive rules. These rules are what you use to react (hence reactive rules) to changing demands on your application. The HOL has two simple rules: one that looks at a queue depth, and one that looks at a performance counter that reports CPU utilization. The XML in the rules file looks like this:

        <reactiveRules>
          <rule name="ScaleUp" rank="10" description="Scale Up the web role" enabled="true">
            <when>
              <any>
                <greaterOrEqual operand="Length_05_holqueue" than="10"/>
                <greaterOrEqual operand="CPU_05_holwebrole" than="65"/>
              </any>
            </when>
            <actions>
              <scale target="AutoscalingApplicationRole" by="1"/>
            </actions>
          </rule>
          <rule name="ScaleDown" rank="10" description="Scale down the web role" enabled="true">
            <when>
              <all>
                <less operand="Length_05_holqueue" than="5"/>
                <less operand="CPU_05_holwebrole" than="40"/>
              </all>
            </when>
            <actions>
              <scale target="AutoscalingApplicationRole" by="-1"/>
            </actions>
          </rule>
        </reactiveRules>
        <operands>
          <performanceCounter alias="CPU_05_holwebrole" performanceCounterName="\Processor(_Total)\% Processor Time" source="AutoscalingApplicationRole" timespan="00:05:00" aggregate="Average"/>
          <queueLength alias="Length_05_holqueue" queue="hol-queue" timespan="00:05:00" aggregate="Average"/>
        </operands>

    These rules are currently contained in a file called rules.xml that is in the root of the console application. The console app starts up, grabs the rules and starts watching the two operands. When it detects that a rule has been satisfied, it performs the desired action (here, scale up or down by 1). But I want to host the autoscaler in the cloud. For my first trick, I will move the rules (and another file called services.xml) to Azure blob storage. Look for part 3.
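    As an aside, the decision the console app makes on each evaluation cycle is easy to sketch. The following is a minimal illustration in Java of the rule semantics above (it is not the Autoscaling Application Block itself, which is .NET; the class and method names are invented): scale up when either five-minute average crosses its upper threshold, scale down when both are below their lower thresholds, and always clamp the result to the constraint range.

        // Sketch of the "ScaleUp"/"ScaleDown" reactive rules plus the constraint range.
        public class ReactiveScalerSketch {
            private static final int MIN_INSTANCES = 1; // constraint rule: range min
            private static final int MAX_INSTANCES = 4; // constraint rule: range max

            public int decideInstanceCount(int current, double avgCpuPercent, double avgQueueLength) {
                int target = current;
                if (avgQueueLength >= 10 || avgCpuPercent >= 65) {
                    // "ScaleUp": <any> of the operands crossed its threshold.
                    target = current + 1;
                } else if (avgQueueLength < 5 && avgCpuPercent < 40) {
                    // "ScaleDown": <all> operands are below their thresholds.
                    target = current - 1;
                }
                // The constraint rule always wins: stay within the configured range.
                return Math.max(MIN_INSTANCES, Math.min(MAX_INSTANCES, target));
            }
        }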

    Read the article

  • 3 Ways to Make Steam Even Faster

    - by Chris Hoffman
    Have you ever noticed how slow Steam’s built-in web browser can be? Do you struggle with slow download speeds? Or is Steam just slow in general? These tips will help you speed it up. Steam isn’t a game itself, so there are no 3D settings to change to achieve maximum performance. But there are some things you can do to speed it up dramatically. Speed Up the Steam Web Browser Steam’s built-in web browser — used in both the Steam store and in Steam’s in-game overlay to provide a web browser you can quickly use within games – can be frustratingly slow on many systems. Rather than the typical speed we’ve come to expect from Chrome, Firefox, or even Internet Explorer, Steam seems to struggle. When you click a link or go to a new page, there’s a noticeable delay before the new page appears — something that doesn’t happen in desktop browsers. Many people seem to have made peace with this slowness, accepting that Steam’s built-in browser is just bad. However, there’s a trick that will eliminate this delay on many systems and make the Steam web browser fast. This problem seems to arise from an incompatibility with the Automatically Detect Proxy Settings option, which is enabled by default on Windows. This is a compatibility option that very few people should actually need, so it’s safe to disable it. To disable this option, open the Internet Options dialog — press the Windows key to access the Start menu or Start screen, type Internet Options, and click the Internet Options shortcut. Select the Connections tab in the Internet Options window and click the LAN settings button. Uncheck the Automatically detect settings option here, then click OK to save your settings. If you experienced a significant delay every time a web page loaded in Steam’s web browser, it should now be gone. In the unlikely event that you encounter some sort of problem with your network connection, you could always re-enable this option. Increase Steam’s Game Download Speed Steam attempts to automatically select the nearest download server to your location. However, it may not always select the ideal download server. Or, in the case of high-traffic events like big seasonal sales and huge game launches, you may benefit from selecting a less-congested server. To do this, open Steam’s settings by clicking the Steam menu in Steam and selecting Settings. Click over to the Downloads tab and select the closest download server from the Download Region box. You should also ensure that Steam’s download bandwidth isn’t limited from here. You may want to restart Steam and see if your download speeds improve after changing this setting. In some cases, the closest server might not be the fastest. One a bit farther away could be faster if your local server is more congested, for example. Steam once provided information about content server load, which allowed you to select a regional server that wasn’t under high-load, but this information no longer seems to be available. Steam still provides a page that shows you the amount of download activity happening in different regions, including statistics about the difference in download speeds in different US states, but this information isn’t as useful. Accelerate Steam and Your Games One way to speed up all your games — and Steam itself —  is by getting a solid-state drive and installing Steam to it. Steam allows you to easily move your Steam folder — at C:\Program Files (x86)\Steam by default — to another hard drive. Just move it like you would any other folder. 
You can then launch the Steam.exe program as if you had never moved Steam’s files. Steam also allows you to configure multiple game library folders. This means that you can set up a Steam library folder on a solid-state drive and one on your larger magnetic hard drive. Install your most frequently played games to the solid-state drive for maximum speed and your less frequently played ones to the slower magnetic hard drive to save SSD space. To set up additional library folders, open Steam’s Settings window and click the Downloads tab. You’ll find the Steam Library Folders option here. Click the Add Library Folder button and create a new game library on another hard drive. When you install a game in Steam, you’ll be asked which library folder you want to install it to. With the proxy compatibility option disabled, the correct download server chosen, and Steam installed to a fast SSD, it should be a speed demon. There’s not much more you can do to speed up Steam, short of upgrading other hardware like your computer’s CPU. Image Credit: Andrew Nash on Flickr     

    Read the article

  • Trace File Source Adapter

    The Trace File Source adapter is a useful addition to your SSIS toolbox. It allows you to read SQL Server 2005 and 2008 Profiler traces stored as .trc files into the Data Flow. From there you can perform filtering and analysis using the power of SSIS. There is no need for a SQL Server connection; the adapter just uses the trace file.

    Example Usages

    - Cache warming for SQL Server Analysis Services
    - Reading the flight recorder
    - Finding the longest running queries on a server
    - Analyzing statements for CPU or memory by user, or some other criteria you choose

    Properties

    The Trace File Source adapter has two properties, both of which combine to control the source trace file that is read at runtime. SQL Server 2005 and SQL Server 2008 trace files are supported for both the Database Engine (SQL Server) and Analysis Services. The properties are managed by the Editor form or can be set directly from the Properties Grid in Visual Studio.

    - AccessMode (Enumeration) – Determines how the Filename property is interpreted. The values available are DirectInput and Variable.
    - Filename (String) – Holds the path of the trace file to load (*.trc). The value is either a full path, or the name of a variable which contains the full path to the trace file, depending on the AccessMode property.

    Trace Column Definition

    Hopefully the majority of you can skip this section entirely, but if you encounter problems processing a trace file this may explain them and allow you to fix the problem. The component is built upon the trace management API provided by Microsoft. Unfortunately the API methods that expose the schema of a trace file have known issues and are unreliable; put simply, the data often differs from what was specified. To overcome these limitations the component uses some simple XML files. These files enable the trace column data types and sizing attributes to be overridden. For example, SQL Server Profiler or TMO generated structures define EventClass as an integer, but the real value is a string.

    - TraceDataColumnsSQL.xml – SQL Server Database Engine trace columns
    - TraceDataColumnsAS.xml – SQL Server Analysis Services trace columns

    The files can be found in the %ProgramFiles%\Microsoft SQL Server\100\DTS\PipelineComponents folder, e.g.

        "C:\Program Files\Microsoft SQL Server\100\DTS\PipelineComponents\TraceDataColumnsSQL.xml"
        "C:\Program Files\Microsoft SQL Server\100\DTS\PipelineComponents\TraceDataColumnsAS.xml"

    If at runtime the component encounters a type conversion or sizing error it is most likely due to a discrepancy between the column definition as reported by the API and the actual value encountered. Whilst most common issues have already been fixed through these files, we have implemented specific exception traps to direct you to the files so that you can fix any further issues due to different usage or data scenarios that we have not tested. An example error that you can fix through these files is shown below.

        Buffer exception writing value to column 'Column Name'. The string value is 999 characters in length, the column is only 111. Columns can be overridden by the TraceDataColumns XML files in "C:\Program Files\Microsoft SQL Server\100\DTS\PipelineComponents\TraceDataColumnsAS.xml".

    Installation

    The component is provided as an MSI file which you can download and run to install it. This simply places the files on disk in the correct locations and also installs the assemblies in the Global Assembly Cache as per Microsoft's recommendations. You may need to restart the SQL Server Integration Services service, as this caches information about what components are installed, as well as restarting any open instances of Business Intelligence Development Studio (BIDS) / Visual Studio that you may be using to build your SSIS packages.

    Finally you will have to add the transformation to the Visual Studio toolbox manually. Right-click the toolbox, and select Choose Items.... Select the SSIS Data Flow Items tab, and then check the Trace File Source transformation in the Choose Toolbox Items window. This process has been described in detail in the related FAQ entry for How do I install a task or transform component?

    We recommend you follow best practice and apply the current Microsoft SQL Server Service Pack to your SQL Server servers and workstations. Please note that the Microsoft Trace classes used in the component are not supported on 64-bit platforms. To use the Trace File Source on a 64-bit host you need to ensure you have the 32-bit (x86) tools available, and that the way you execute your package is set up to use them; please see the help topic 64-bit Considerations for Integration Services for more details.

    Downloads

    - Trace Sources for SQL Server 2005
    - Trace Sources for SQL Server 2008

    Version History

    - SQL Server 2008: Version 2.0.0.382 - SQL Server 2008 public release. (9 Apr 2009)
    - SQL Server 2005: Version 1.0.0.321 - SQL Server 2005 public release. (18 Nov 2008)

    Read the article

  • IBM Keynote: (hardware,software)–>{IBM.java.patterns}

    - by Janice J. Heiss
    On Sunday evening, September 30, 2012, Jason McGee, IBM Distinguished Engineer and Chief Architect Cloud Computing, along with John Duimovich, IBM Distinguished Engineer and Java CTO, gave an information- and idea-rich keynote that left Java developers with much to ponder. Their focus was on the challenges of making Java more efficient and productive given the hardware and software environments of 2012.

    "One idea that is very interesting is the idea of multi-tenancy," said McGee, "and how we can move up the spectrum. In traditional systems, we ran applications on dedicated middleware, operating systems and hardware. A lot of customers still run that way. Now people introduce hardware virtualization and share the hardware. That is good but there is a lot more we can do. We can share middleware and the application itself."

    McGee challenged developers to better enable the Java language to function in these higher density models. He spoke about the need to describe patterns that help us grasp the full environment that an application needs, whether it's a web or full enterprise application. Developers need to understand the resources that an application interacts with in a way that is simple and straightforward. The task is to then automate that deployment so that the complexity of infrastructure can be bypassed and developers can live in a simpler world where the cloud can automatically configure the needed environment. McGee argued that the key, something IBM has been working on, is to use a simpler pattern that allows a cloud-based architecture to embrace the entire infrastructure required for an application and make it highly available, scalable and able to recover from failure. The cloud-based architecture would automate the complexity of setting up and managing the infrastructure. IBM has been trying to realize this vision for customers so they can describe their Java application environment simply and allow the cloud to automate the deployment and management of applications. "The point," explained McGee, "is to package the executable used to describe applications, to drop it into a shared system and let that system provide some intelligence about how to deploy and manage those applications."

    John Duimovich on Improvements in Java

    McGee then brought onstage IBM's Distinguished Engineer and CTO for Java, John Duimovich, who showed the audience ways to deploy Java applications more efficiently. Duimovich explained that, "When you run lots of copies of Java in the cloud or any hypervisor virtualized system, there are a lot of duplications of code and jar files. IBM has a facility called 'shared classes' where we put shared code, read-only artefacts, in a cache that is sharable across hypervisors." By putting JIT code in ahead of time, he explained, the application server will use 20% less memory and operate 30% faster. He described another example of how the JVM allows for the maximum amount of sharing, managing tenants, file sockets and memory use through throttling and control. Duimovich touched on the "thin is in" model and IBM's Liberty Profile and lightweight runtime for the cloud, which allows for greater efficiency in interacting with the cloud.

    Duimovich discussed the confusion Java developers experience when, for example, the hypervisor tells them that they have 8 and then 4 and then 16 cores. "Because hypervisors are virtualized, they can change based on resource needs across the hypervisor layer. You may have 10 instances of an operating system and you may need to reallocate memory," explained Duimovich. He showed how to resize LPARs, reallocate CPUs and migrate applications as needed. He explained how application servers can resize thread pools and better use resources based on information from the hypervisors.

    Java Challenges in Hardware and Software

    McGee ended the keynote with a summary of upcoming hardware and software challenges for the Java platform. He noted that one reason developers love Java is it allows them to ignore differences in hardware. He stated that the most important things happening in hardware were in network and storage – in developments such as the speed of SSD, the exploitation of high-speed, low-latency networking, and recent developments such as storage-class memory and non-volatile main memory. "So we are challenged to maintain the benefits of Java and the abstraction it provides from hardware while still exploiting the new innovations in hardware," said McGee.

    McGee discussed transactional messaging applications, where developers send messages that are transactionally persisted to storage, something traditionally done by backing messages on spinning disks, an approach that is now mostly outdated. "Now," he pointed out, "we would use SSD and store it in Flash and get 70,000 messages a second. If we stored it using a PCI express-based flash memory device, it is still Flash but put on a PCI express bus on a card closer to the CPU. This way I get 300,000 messages a second and 25% improvement in latency." McGee's central point was that hardware has a huge impact on the performance and scalability of applications. New technologies are enabling developers to build classes of Java applications previously unheard of. "We need to be able to balance these things in Java – we need to maintain the abstraction but also be able to exploit the evolution of hardware technology," said McGee.

    According to McGee, IBM's current focus is on systems wherein hardware and software are shipped together in what are called Expert Integrated Systems – systems that are pre-optimized and pre-integrated. McGee closed IBM's engaging and thought-provoking keynote by pointing out that the use of Java in complex applications is increasingly being augmented by a host of other languages with strong communities around them – JavaScript, JRuby, Scala, Python and so forth. Java developers now must understand the strengths and weaknesses of such newcomers as applications increasingly involve a complex interconnection of languages.

    Read the article

  • Streaming desktop with avconv - severe sound issues

    - by Tommy Brunn
    I'm trying to do some live streaming in Ubuntu 12.10, but I'm having some problems with audio. More specifically, the quality is complete garbage and it's at least 10 seconds out of sync with the video. I'm using an excellent guide found here to set up my loopback devices so that I can combine the desktop audio with the microphone input. It seems to work, as I'm able to stream both audio and video to Twitch.tv. But, as I said, the audio quality is terrible. The microphone audio is very, very low, but if I increase it, I get a horrible garbled sound that is absolutely unbearable. Nothing like that is present during VoIP calls or when recording sound alone with the sound recorder, so it's not an issue with the microphone itself. The entire audio stream is also delayed about 10-15 seconds compared to the video stream. I put together an imgur album of my settings. Here is some example output from when I'm streaming: avconv version 0.8.4-6:0.8.4-0ubuntu0.12.10.1, Copyright (c) 2000-2012 the Libav developers built on Nov 6 2012 16:51:11 with gcc 4.7.2 [x11grab @ 0x162fd80] device: :0.0+570,262 -> display: :0.0 x: 570 y: 262 width: 1280 height: 720 [x11grab @ 0x162fd80] shared memory extension found [x11grab @ 0x162fd80] Estimating duration from bitrate, this may be inaccurate Input #0, x11grab, from ':0.0+570,262': Duration: N/A, start: 1353181686.735113, bitrate: 884736 kb/s Stream #0.0: Video: rawvideo, bgra, 1280x720, 884736 kb/s, 30 tbr, 1000k tbn, 30 tbc [alsa @ 0x163fce0] capture with some ALSA plugins, especially dsnoop, may hang. [alsa @ 0x163fce0] Estimating duration from bitrate, this may be inaccurate Input #1, alsa, from 'pulse': Duration: N/A, start: 1353181686.773841, bitrate: N/A Stream #1.0: Audio: pcm_s16le, 48000 Hz, 2 channels, s16, 1536 kb/s Incompatible pixel format 'bgra' for codec 'libx264', auto-selecting format 'yuv420p' [buffer @ 0x1641ec0] w:1280 h:720 pixfmt:bgra [scale @ 0x1642480] w:1280 h:720 fmt:bgra -> w:852 h:480 fmt:yuv420p flags:0x4 [libx264 @ 0x165ae80] VBV maxrate unspecified, assuming CBR [libx264 @ 0x165ae80] using cpu capabilities: MMX2 SSE2Fast SSSE3 FastShuffle SSE4.2 [libx264 @ 0x165ae80] profile Main, level 3.1 [libx264 @ 0x165ae80] 264 - core 123 r2189 35cf912 - H.264/MPEG-4 AVC codec - Copyleft 2003-2012 - http://www.videolan.org/x264.html - options: cabac=1 ref=2 deblock=1:0:0 analyse=0x1:0x111 me=hex subme=6 psy=1 psy_rd=1.00:0.00 mixed_ref=0 me_range=16 chroma_me=1 trellis=1 8x8dct=0 cqm=0 deadzone=21,11 fast_pskip=1 chroma_qp_offset=-2 threads=4 sliced_threads=0 nr=0 decimate=1 interlaced=0 bluray_compat=0 constrained_intra=0 bframes=3 b_pyramid=0 b_adapt=1 b_bias=0 direct=1 weightb=0 open_gop=1 weightp=1 keyint=250 keyint_min=25 scenecut=40 intra_refresh=0 rc_lookahead=30 rc=cbr mbtree=1 bitrate=712 ratetol=1.0 qcomp=0.60 qpmin=0 qpmax=69 qpstep=4 vbv_maxrate=712 vbv_bufsize=512 nal_hrd=none ip_ratio=1.25 aq=1:1.00 Output #0, flv, to 'rtmp://live.justin.tv/app/live_23011330_Pt1plSRM0z5WVNJ0QmCHvTPmpUnfC4': Metadata: encoder : Lavf53.21.0 Stream #0.0: Video: libx264, yuv420p, 852x480, q=-1--1, 712 kb/s, 1k tbn, 30 tbc Stream #0.1: Audio: libmp3lame, 44100 Hz, 2 channels, s16, 712 kb/s Stream mapping: Stream #0:0 -> #0:0 (rawvideo -> libx264) Stream #1:0 -> #0:1 (pcm_s16le -> libmp3lame) Press ctrl-c to stop encoding frame= 17 fps= 0 q=0.0 size= 0kB time=10000000000.00 bitrate= 0.0kbitframe= 32 fps= 31 q=0.0 size= 0kB time=10000000000.00 bitrate= 0.0kbitframe= 40 fps= 23 q=29.0 size= 44kB time=0.03 bitrate=13786.2kbits/s dup=frame= 47 fps= 21 
q=31.0 size= 93kB time=2.73 bitrate= 277.7kbits/s dup=0frame= 62 fps= 23 q=29.0 size= 160kB time=3.23 bitrate= 406.2kbits/s dup=0frame= 77 fps= 24 q=23.0 size= 209kB time=3.71 bitrate= 462.5kbits/s dup=0frame= 92 fps= 25 q=20.0 size= 267kB time=4.91 bitrate= 445.2kbits/s dup=0frame= 107 fps= 25 q=20.0 size= 318kB time=5.41 bitrate= 482.1kbits/s dup=0frame= 123 fps= 26 q=18.0 size= 368kB time=5.96 bitrate= 505.7kbits/s dup=0frame= 139 fps= 26 q=16.0 size= 419kB time=6.48 bitrate= 529.7kbits/s dup=0frame= 155 fps= 27 q=15.0 size= 473kB time=7.00 bitrate= 553.6kbits/s dup=0frame= 170 fps= 27 q=14.0 size= 525kB time=7.52 bitrate= 571.7kbits/s dup=0 frame= 180 fps= 25 q=-1.0 Lsize= 652kB time=7.97 bitrate= 670.0kbits/s dup=0 drop=32 //Here I stop the streaming video:531kB audio:112kB global headers:0kB muxing overhead 1.345945% [libx264 @ 0x165ae80] frame I:1 Avg QP:30.43 size: 39748 [libx264 @ 0x165ae80] frame P:45 Avg QP:11.37 size: 11110 [libx264 @ 0x165ae80] frame B:134 Avg QP:15.93 size: 27 [libx264 @ 0x165ae80] consecutive B-frames: 0.6% 0.0% 1.7% 97.8% [libx264 @ 0x165ae80] mb I I16..4: 7.3% 0.0% 92.7% [libx264 @ 0x165ae80] mb P I16..4: 0.1% 0.0% 0.1% P16..4: 49.1% 1.2% 2.1% 0.0% 0.0% skip:47.4% [libx264 @ 0x165ae80] mb B I16..4: 0.0% 0.0% 0.0% B16..8: 0.1% 0.0% 0.0% direct: 0.0% skip:99.9% L0:42.5% L1:56.9% BI: 0.6% [libx264 @ 0x165ae80] coded y,uvDC,uvAC intra: 82.3% 87.4% 71.9% inter: 7.1% 8.4% 7.0% [libx264 @ 0x165ae80] i16 v,h,dc,p: 27% 29% 16% 28% [libx264 @ 0x165ae80] i4 v,h,dc,ddl,ddr,vr,hd,vl,hu: 22% 21% 14% 8% 8% 8% 7% 5% 7% [libx264 @ 0x165ae80] i8c dc,h,v,p: 47% 22% 20% 11% [libx264 @ 0x165ae80] Weighted P-Frames: Y:0.0% UV:0.0% [libx264 @ 0x165ae80] ref P L0: 96.4% 3.6% [libx264 @ 0x165ae80] kb/s:474.19 Received signal 2: terminating. Any ideas on how I can resolve this? The video delay is perfectly acceptable, so I wouldn't think that it's a network issue that's causing the delay in the audio. Any help would be appreciated.

    Read the article

  • The standards that fail us and the intellectual bubble

    - by Jeff
    There has been a great deal of noise in the techie community about standards, and a sudden and unexplainable hate for Flash. This noise isn't coming from consumers... the countless soccer moms, teens and your weird uncle Bob, it's coming from the people who build (or at least claim to build) the stuff those consumers consume. If you could survey the position of consumers on the topic, they'd likely tell you that they just want stuff on the Web to work.

    The noise goes something like this: Web standards are the correct and right thing to use across the Intertubes, and anything not a part of those standards (Flash) is bad. Furthermore, the more recent noise is centered around the idea that HTML 5, along with Javascript, is the right thing to use. The arguments against Flash are, well, the truth is I haven't seen a good argument. I see anecdotal nonsense about high CPU usage and things I'd never think to check when I'm watching Piano Cat on YouTube, but these aren't arguments to me. Sure, I've seen it crash a browser a few times, but it's totally rare.

    But let's go back to standards. Yes, standards have played an important role in establishing the ubiquity of the Web. The protocols themselves, TCP/IP and HTTP, have been critical. HTML, which has served us well for a very long time, established an incredible foundation. Javascript did an OK job, and thanks to clever programmers writing great frameworks like JQuery, is becoming more and more useful. CSS is awful (there, I said it, I feel SO much better), and I'll never understand why it's so disconnected and different from anything else. It doesn't help that it's so widely misinterpreted by different browsers. Still, there's no question that standards are a good thing, and they've been good for the Web, consumers and publishers alike.

    HTML 4 has been with us for more than a decade. In Web years, that might as well be 80. HTML 5, contrary to popular belief, is not a standard, and likely won't be for many years to come. In fact, the Web hasn't really evolved at all in terms of its standards. The tools that generate the standard markup and script have, but at the end of the day, we're still living with standards that are more than ten years old. The "official" standards process has failed us.

    The Web evolved anyway, and did not wait for standards bodies to decide what to do next. It evolved in part because Macromedia, then Adobe, kept evolving Flash. In the earlier days, it mostly just did obnoxious splash pages, but then it started doing animation, and then rich apps as they added form input. Eventually it found its killer app: video. Now more than 95% of browsers have Flash installed. Consumers are better for it.

    But I'll do it one better... I'll go out on a limb and say that Flash is a standard. If it's that pervasive, I don't care what you tell me, it's a standard. Just because a company owns it doesn't mean that it's evil or not a standard. And hey, it pains me to say that as a developer, because I think the dev tools are the suck (more on that in a minute). But again, consumers don't care. They don't even pay for Flash. The bottom line is that if I put something Flash based on the Internet, it's likely that my audience will see it.

    And what about the speed of standards owned by a company? Look no further than Silverlight. Silverlight 2 (which I consider the "real" start to the story) came out about a year and a half ago. Now version 4 is out, and it has come a very long way in its capabilities. If you believe Riastats.com, more than half of browsers have it now. It didn't have to wait for standards bodies and nerds drafting documents, it's out today. At this rate, Silverlight will be on version 6 or 7 by the time HTML 5 is a ratified standard.

    Back to the noise, one of the things that has continually disappointed me about this profession is the number of people who get stuck in an intellectual bubble, color it with dogmatic principles, and completely ignore the actual marketplace where this stuff all has to live. We aren't machines; binary thinking that forces us to choose between "open standards" and "proprietary lock-in" (the most loaded b.s. FUD term evar) isn't smart at all. The truth is that the <object> tag has allowed us to build incredible stuff on top of the old standards, and consumers have benefitted greatly. Consumer desire, capitalism, and yes, standards ratified by nerds who think about this stuff for years have all played a role in the broad adoption of the Interwebs.

    We could all do without the noise. At the end of the day, I'm going to build stuff for the Web that's good for my users, and I'm not going to base my decisions on a techie bubble religion. Imagine what the brilliant minds behind the noise could do for the Web if they joined me in that pursuit.

    Read the article

  • .NET 4: &ldquo;Slim&rdquo;-style performance boost!

    - by Vitus
    The RTM version of .NET 4 and Visual Studio 2010 is available, and now we can run some tests with it. Parallel Extensions is one of the most valuable parts of .NET 4.0. It's a set of good tools for easily consuming multicore hardware power. It also contains some "upgraded" sync primitives – the Slim versions. For example, it includes an updated variant of the widely known ManualResetEvent. For people who don't know it: you can synchronize the concurrent execution of pieces of code with this sync primitive. An instance of ManualResetEvent can be in one of two states: signaled and non-signaled. Transitions between them are made by calling the Set() and Reset() methods. A short illustration:

        Thread 1             Thread 2             Time
        mre.Reset();         mre.WaitOne();       0
        //code execution     //waiting            1
        //code execution     //waiting            2
        //code execution     //waiting            3
        mre.Set();           //waiting            4
        //...                //code execution     5

    The upgraded version of this primitive is ManualResetEventSlim. The idea is to decrease the performance cost in the case where only one thread uses it. The main concept is the "hybrid sync schema", which can be implemented as follows:

        internal sealed class SimpleHybridLock : IDisposable
        {
            private Int32 m_waiters = 0;
            private AutoResetEvent m_waiterLock = new AutoResetEvent(false);

            public void Enter()
            {
                if (Interlocked.Increment(ref m_waiters) == 1) return;
                m_waiterLock.WaitOne();
            }

            public void Leave()
            {
                if (Interlocked.Decrement(ref m_waiters) == 0) return;
                m_waiterLock.Set();
            }

            public void Dispose()
            {
                m_waiterLock.Dispose();
            }
        }

    It's a sample from Jeffrey Richter's book "CLR via C#", 3rd edition. The SimpleHybridLock primitive has two public methods: Enter() and Leave(). You can put your concurrency-critical code between calls to these methods, and it will be executed by only one thread at a time. The code is really simple: the first thread to call Enter() increments the counter and returns immediately. A second thread also increments the counter, but then suspends until m_waiterLock is signaled. So, if there is no concurrent access to our lock, the "heavy" methods WaitOne() and Set() are never called, which can give a nice performance bonus. ManualResetEventSlim uses a similar idea. Of course, it has more "smart" techniques inside, like checking for recursive calls, and so on. I want to know the real difference between the classic ManualResetEvent implementation and the new Slim one.
I wrote a simple "benchmark":

        class Program
        {
            static void Main(string[] args)
            {
                ManualResetEventSlim mres = new ManualResetEventSlim(false);
                ManualResetEventSlim mres2 = new ManualResetEventSlim(false);
                ManualResetEvent mre = new ManualResetEvent(false);

                long total = 0;
                int COUNT = 50;

                for (int i = 0; i < COUNT; i++)
                {
                    mres2.Reset();
                    Stopwatch sw = Stopwatch.StartNew();

                    ThreadPool.QueueUserWorkItem((obj) =>
                    {
                        //Method(mres, true);
                        Method2(mre, true);
                        mres2.Set();
                    });
                    //Method(mres, false);
                    Method2(mre, false);

                    mres2.Wait();
                    sw.Stop();

                    Console.WriteLine("Pass {0}: {1} ms", i, sw.ElapsedMilliseconds);
                    total += sw.ElapsedMilliseconds;
                }

                Console.WriteLine();
                Console.WriteLine("===============================");
                Console.WriteLine("Done in average=" + total / (double)COUNT);
                Console.ReadLine();
            }

            private static void Method(ManualResetEventSlim mre, bool value)
            {
                for (int i = 0; i < 9000000; i++)
                {
                    if (value) { mre.Set(); } else { mre.Reset(); }
                }
            }

            private static void Method2(ManualResetEvent mre, bool value)
            {
                for (int i = 0; i < 9000000; i++)
                {
                    if (value) { mre.Set(); } else { mre.Reset(); }
                }
            }
        }

    I use two concurrent threads (the main thread and one from the thread pool) to set and reset the ManualResetEvents, run the test COUNT times, and calculate the average execution time. Here are the results (measured on my dual-core notebook with a T7250 CPU and Windows 7 x64): [Results chart: ManualResetEvent vs. ManualResetEventSlim] The difference is obvious and serious – about 10 times! So I think the preferable way is to use ManualResetEventSlim, because calling Set() and Reset() will not always invoke the "heavy" methods that work with Windows kernel-mode objects. It's a small and nice improvement! ;)
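    For comparison only – this is an added sketch, not part of the original post – the same hybrid idea can be expressed on the JVM: take the fast path with a single atomic operation when there is no contention, and only fall back to a blocking primitive when a second thread actually has to wait. The class name and the use of Semaphore below are choices made for this illustration; java.util.concurrent already ships tuned primitives for production code.

        import java.util.concurrent.Semaphore;
        import java.util.concurrent.atomic.AtomicInteger;

        // Rough Java analog of the C# SimpleHybridLock above: the fast path is a
        // single atomic increment/decrement; the Semaphore is only touched when a
        // second thread actually contends for the lock.
        final class HybridLock {
            private final AtomicInteger waiters = new AtomicInteger(0);
            private final Semaphore waiterLock = new Semaphore(0);

            public void enter() {
                if (waiters.incrementAndGet() == 1) {
                    return;                              // uncontended: no blocking call
                }
                waiterLock.acquireUninterruptibly();     // contended: wait for a permit
            }

            public void leave() {
                if (waiters.decrementAndGet() == 0) {
                    return;                              // nobody was waiting
                }
                waiterLock.release();                    // wake exactly one waiter
            }
        }

    Wrapping a critical section between enter() and leave() behaves like the C# version: the cost of blocking is only paid when two threads actually collide.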

    Read the article

  • How do I install on an UEFI Asus 1215b netbook?

    - by Tarek
    I'm trying to install Ubuntu 11.10 on a UEFI netbook Asus 1215b using an USB stick. I created a fat32 efi partition of 100MB, 2GB swap, and 2 ext4 partitions (for root (/ ) and /home, respectively). While installing, Ubuntu switches to CLI and starts running efibootmgr. After a few commands (sadly I don't have a screen grab), it stops displaying text but it's still running judging by the HDD led. Then, there's a weird graphic glitch and the screen turns off (HDD led still indicating activity). Finally, it just stops, but doesn't turn off. Not even a hard reboot works (holding down the power button a few secs). I have to plug the netbook off and remove the battery. After that, it still doesn't boot Ubuntu... Anyway, what can I do? I'm considering following the footsteps here and here. Edit: here is the syslog $Oct 21 01:05:17 ubuntu kernel: [ 1220.544009] BUG: unable to handle kernel paging request at 00000000ffe1867c $Oct 21 01:05:17 ubuntu kernel: [ 1220.544009] IP: [<ffff880066d44c1f>] 0xffff880066d44c1e $Oct 21 01:05:17 ubuntu kernel: [ 1220.544009] PGD 14ecc067 PUD 0 $Oct 21 01:05:17 ubuntu kernel: [ 1220.544009] Oops: 0000 [#1] SMP $Oct 21 01:05:17 ubuntu kernel: [ 1220.544009] CPU 0 $Oct 21 01:05:17 ubuntu kernel: [ 1220.544009] Modules linked in: cryptd aes_x86_64 ufs qnx4 hfsplus hfs minix ntfs msdos xfs reiserfs jfs bnep parport_pc rfcomm dm_crypt ppdev bluetooth lp parport joydev eeepc_wmi asus_wmi sparse_keymap uvcvideo videodev v4l2_compat_ioctl32 snd_hda_codec_realtek snd_seq_midi snd_hda_codec_hdmi snd_hda_intel snd_hda_codec arc4 snd_rawmidi snd_hwdep psmouse snd_pcm snd_seq_midi_event ath9k serio_raw sp5100_tco i2c_piix4 k10temp snd_seq mac80211 snd_timer ath9k_common ath9k_hw snd_seq_device ath snd cfg80211 soundcore snd_page_alloc binfmt_misc squashfs overlayfs nls_iso8859_1 nls_cp437 vfat fat dm_raid45 xor dm_mirror dm_region_hash dm_log btrfs zlib_deflate libcrc32c usb_storage uas radeon video ahci libahci ttm drm_kms_helper drm wmi i2c_algo_bit atl1c $Oct 21 01:05:17 ubuntu kernel: [ 1220.544009] $Oct 21 01:05:17 ubuntu kernel: [ 1220.544009] Pid: 28432, comm: efibootmgr Not tainted 3.0.0-12-generic #20-Ubuntu ASUSTeK Computer INC. 
1215B/1215B $Oct 21 01:05:17 ubuntu kernel: [ 1220.544009] RIP: 0010:[<ffff880066d44c1f>] [<ffff880066d44c1f>] 0xffff880066d44c1e $Oct 21 01:05:17 ubuntu kernel: [ 1220.544009] RSP: 0018:ffff88005e2cbab0 EFLAGS: 00010082 $Oct 21 01:05:17 ubuntu kernel: [ 1220.544009] RAX: 00000000ffe1867c RBX: 0000000000000009 RCX: 00000000ffe1867c $Oct 21 01:05:17 ubuntu kernel: [ 1220.544009] RDX: 0000000000000000 RSI: ffff88005e2cbbea RDI: ffff88005e2cbb40 $Oct 21 01:05:17 ubuntu kernel: [ 1220.544009] RBP: 00000000ffe1867c R08: 0000000000000000 R09: 0000000000000084 $Oct 21 01:05:17 ubuntu kernel: [ 1220.544009] R10: ffffc9001101ff83 R11: ffffc90011018685 R12: 0000000000000001 $Oct 21 01:05:17 ubuntu kernel: [ 1220.544009] R13: 0000000000000000 R14: ffffc9001101867c R15: ffff88005e2cbbe1 $Oct 21 01:05:17 ubuntu kernel: [ 1220.544009] FS: 00007f9cdde13720(0000) GS:ffff880066a00000(0000) knlGS:0000000000000000 $Oct 21 01:05:17 ubuntu kernel: [ 1220.544009] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033 $Oct 21 01:05:17 ubuntu kernel: [ 1220.544009] CR2: 00000000ffe1867c CR3: 000000002dace000 CR4: 00000000000006f0 $Oct 21 01:05:17 ubuntu kernel: [ 1220.544009] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000 $Oct 21 01:05:17 ubuntu kernel: [ 1220.544009] DR3: 0000000000000000 DR6: 00000000ffff0ff0 DR7: 0000000000000400 $Oct 21 01:05:17 ubuntu kernel: [ 1220.544009] Process efibootmgr (pid: 28432, threadinfo ffff88005e2ca000, task ffff880014f0dc80) $Oct 21 01:05:17 ubuntu kernel: [ 1220.544009] Stack: $Oct 21 01:05:17 ubuntu kernel: [ 1220.544009] ffffc90011010000 ffff88005e2cbac8 0000000000010000 ffff880066d4401d $Oct 21 01:05:17 ubuntu kernel: [ 1220.544009] 000000000000007c ffff880009e84400 0000000000000090 ffff880066d45738 $Oct 21 01:05:17 ubuntu kernel: [ 1220.544009] ffffc9001101867c ffff880066d4331c 0000000000000009 ffffc9001101867b $Oct 21 01:05:17 ubuntu kernel: [ 1220.544009] Call Trace: $Oct 21 01:05:17 ubuntu kernel: [ 1220.544009] [<ffffffff815e9efe>] ? _raw_spin_lock+0xe/0x20 $Oct 21 01:05:17 ubuntu kernel: [ 1220.544009] [<ffffffff811d9c2d>] ? open+0x10d/0x1b0 $Oct 21 01:05:17 ubuntu kernel: [ 1220.544009] [<ffffffff8116554b>] ? __dentry_open+0x2bb/0x320 $Oct 21 01:05:17 ubuntu kernel: [ 1220.544009] [<ffffffff811d9b20>] ? bin_vma_open+0x70/0x70 $Oct 21 01:05:17 ubuntu kernel: [ 1220.544009] [<ffffffff815e9efe>] ? _raw_spin_lock+0xe/0x20 $Oct 21 01:05:17 ubuntu kernel: [ 1220.544009] [<ffffffff811849ee>] ? vfsmount_lock_local_unlock+0x1e/0x30 $Oct 21 01:05:17 ubuntu kernel: [ 1220.544009] [<ffffffff8104303b>] ? efi_call5+0x4b/0x80 $Oct 21 01:05:17 ubuntu kernel: [ 1220.544009] [<ffffffff81042a7f>] ? virt_efi_set_variable+0x2f/0x40 $Oct 21 01:05:17 ubuntu kernel: [ 1220.544009] [<ffffffff814bb125>] ? efivar_create+0x1e5/0x280 $Oct 21 01:05:17 ubuntu kernel: [ 1220.544009] [<ffffffff811d9d63>] ? write+0x93/0x190 $Oct 21 01:05:17 ubuntu kernel: [ 1220.544009] [<ffffffff811d9de4>] ? write+0x114/0x190 $Oct 21 01:05:17 ubuntu kernel: [ 1220.544009] [<ffffffff81167813>] ? vfs_write+0xb3/0x180 $Oct 21 01:05:17 ubuntu kernel: [ 1220.544009] [<ffffffff81167b3a>] ? sys_write+0x4a/0x90 $Oct 21 01:05:17 ubuntu kernel: [ 1220.544009] [<ffffffff815f22c2>] ? 
system_call_fastpath+0x16/0x1b $Oct 21 01:05:17 ubuntu kernel: [ 1220.544009] Code: ec 01 75 f0 41 bc 01 00 00 00 e8 e5 fb ff ff e8 e4 fc ff ff 33 c0 44 0f b7 c0 66 3b c3 73 20 41 0f b7 c0 41 0f b7 d0 03 c5 8b c8 <8a> 00 42 38 04 3a 75 0a 66 45 03 c4 66 44 3b c3 72 e2 33 c0 66 $Oct 21 01:05:17 ubuntu kernel: [ 1220.544009] RIP [<ffff880066d44c1f>] 0xffff880066d44c1e $Oct 21 01:05:17 ubuntu kernel: [ 1220.544009] RSP <ffff88005e2cbab0> $Oct 21 01:05:17 ubuntu kernel: [ 1220.544009] CR2: 00000000ffe1867c $Oct 21 01:05:17 ubuntu kernel: [ 1220.544009] ---[ end trace 493844b002da4787 ]---

    Read the article

  • InnoDB Compression Improvements in MySQL 5.6

    - by Inaam Rana
    MySQL 5.6 comes with significant improvements for the compression support inside InnoDB. The enhancements that we'll talk about in this piece are also a good example of community contributions. The work on these was conceived, implemented and contributed by the engineers at Facebook. Before we plunge into the details, let us familiarize ourselves with some of the key concepts surrounding InnoDB compression. In InnoDB, compressed pages are of a fixed size. Supported sizes are 1, 2, 4, 8 and 16K. The compressed page size is specified at table creation time. InnoDB uses zlib for compression. The InnoDB buffer pool will attempt to cache compressed pages like normal pages. However, whenever a page is actively used by a transaction, we'll always have the uncompressed version of the page as well; that is, we can have a page in the buffer pool in compressed-only form, or in a state where we have both the compressed page and the uncompressed version, but we'll never have a page in uncompressed-only form. On disk we'll always have only the compressed page. When both compressed and uncompressed images are present in the buffer pool they are always kept in sync, i.e. changes are applied to both atomically. Recompression happens when changes are made to the compressed data. In order to minimize recompressions, InnoDB maintains a modification log within a compressed page. This is the extra space available in the page after compression, and it is used to log modifications to the compressed data, thus avoiding recompressions. DELETE (and ROLLBACK of DELETE) and purge can be performed without recompressing the page. This is because the delete-mark bit and the system fields DB_TRX_ID and DB_ROLL_PTR are stored in uncompressed format on the compressed page. A record can be purged by shuffling entries in the compressed page directory. This can also be useful for updates of indexed columns, because an UPDATE of a key is mapped to INSERT+DELETE+purge. A compression failure happens when we attempt to recompress a page and it does not fit in the fixed size. In such a case, we first try to reorganize the page and attempt to recompress, and if that fails as well then we split the page into two and recompress both pages. Now let's talk about the three major improvements that we made in MySQL 5.6.

    Logging of Compressed Page Images: InnoDB used to log the entire compressed data on the page to the redo logs when recompression happens. This was an extra safety measure to guard against the rare case where an attempt is made to do recovery using a different zlib version from the one that was used before the crash. Because recovery is a page-level operation in InnoDB, we have to be sure that all recompress attempts succeed without causing a btree page split. However, writing entire compressed data images to the redo log files not only makes the operation heavy duty but can also adversely affect flushing activity. This happens because redo space is used in a circular fashion: when we generate much more than the normal amount of redo we fill up the space much more quickly, and in order to reuse the redo space we have to flush the corresponding dirty pages from the buffer pool. Starting with MySQL 5.6, a new global configuration parameter, innodb_log_compressed_pages, controls this behavior. The default value is true, which is the same as the previous behavior. If you are sure that you are not going to attempt to recover from a crash using a different version of zlib, then you should set this parameter to false. This is a dynamic parameter.

    Compression Level: You can now set the compression level that zlib should use to compress the data. The global parameter is innodb_compression_level; the default value is 6 (the zlib default) and allowed values are 1 to 9. Again, the parameter is dynamic, i.e. you can change it on the fly.

    Dynamic Padding to Reduce Compression Failures: Compression failures are expensive in terms of CPU. We go through the hoops of recompress, failure, reorganize, recompress, failure and finally page split. At the same time, how often we encounter compression failures depends largely on the compressibility of the data. In MySQL 5.6, courtesy of Facebook engineers, we have an adaptive algorithm based on per-index statistics that we gather about compression operations. The idea is that if a certain index/table is experiencing too many compression failures, then we should try to pack the 16K uncompressed version of the page less densely, i.e. we let some space in the 16K page go unused in the hope that the recompression won't end up in a failure. In other words, we dynamically keep adding 'pad' to the 16K page till we get compression failures within an agreeable range. It works the other way as well; that is, we'll keep removing the pad if the failure rate is fairly low. To tune the padding effort, two configuration variables are exposed:
    - innodb_compression_failure_threshold_pct: default 5, range 0 - 100, dynamic; the percentage of compression operations that must fail before we start using padding. The value 0 has the special meaning of disabling the padding.
    - innodb_compression_pad_pct_max: default 50, range 0 - 75, dynamic; the maximum percentage of the uncompressed data page that can be reserved as pad.
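    To make the new knobs concrete, here is a minimal JDBC sketch of creating a compressed table and adjusting the MySQL 5.6 parameters discussed above. It is only an illustration, not part of the original article: the connection URL, credentials and table name are placeholders, it assumes MySQL Connector/J on the classpath and a 5.6 server set up for compressed tables (innodb_file_per_table=ON, Barracuda file format), and the SET GLOBAL statements need a suitably privileged account.

        import java.sql.Connection;
        import java.sql.DriverManager;
        import java.sql.Statement;

        // Sketch: create an InnoDB compressed table (8K pages) and tune the
        // MySQL 5.6 compression parameters described above. Connection details
        // and the table name are placeholders.
        public class InnoDbCompressionDemo {
            public static void main(String[] args) throws Exception {
                try (Connection con = DriverManager.getConnection(
                         "jdbc:mysql://localhost:3306/test", "user", "password");
                     Statement st = con.createStatement()) {

                    // Compressed tables require the Barracuda file format.
                    st.execute("SET GLOBAL innodb_file_format = 'Barracuda'");

                    // The compressed page size is fixed at table creation time (8K here).
                    st.execute("CREATE TABLE sales_compressed ("
                             + " id BIGINT PRIMARY KEY,"
                             + " payload VARCHAR(2000)"
                             + ") ENGINE=InnoDB ROW_FORMAT=COMPRESSED KEY_BLOCK_SIZE=8");

                    // Skip logging full compressed page images to the redo log; only
                    // safe if crash recovery will use the same zlib version.
                    st.execute("SET GLOBAL innodb_log_compressed_pages = OFF");

                    // zlib compression level: 1 (fast) .. 9 (small), default 6.
                    st.execute("SET GLOBAL innodb_compression_level = 4");

                    // Dynamic padding: start padding after 5% of compressions fail,
                    // and never reserve more than 50% of the page as pad.
                    st.execute("SET GLOBAL innodb_compression_failure_threshold_pct = 5");
                    st.execute("SET GLOBAL innodb_compression_pad_pct_max = 50");
                }
            }
        }

    Only the CREATE TABLE is per-table; the SET GLOBAL parameters are dynamic, so they can be adjusted on a running server without a restart, which is exactly the point of the 5.6 changes described above.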

    Read the article

  • JNI 'problematic frame' causes JVM to crash

    - by HJED
    Hi I'm using JNI to access the exiv2 library (written in C++) in Java and I'm getting a weird runtime error in the JNI code. I've tried using various -Xms and -Xmx options, but that seems to have no affect. I've also tried running this code on JDK1.7.0 with the same result. # A fatal error has been detected by the Java Runtime Environment: # # SIGSEGV (0xb) at pc=0x00007ff31807757f, pid=4041, tid=140682078746368 # # JRE version: 6.0_20-b20 # Java VM: OpenJDK 64-Bit Server VM (19.0-b09 mixed mode linux-amd64 ) # Derivative: IcedTea6 1.9.2 # Distribution: Ubuntu 10.10, package 6b20-1.9.2-0ubuntu2 # Problematic frame: # V [libjvm.so+0x42757f] # # If you would like to submit a bug report, please include # instructions how to reproduce the bug and visit: # https://bugs.launchpad.net/ubuntu/+source/openjdk-6/ # --------------- T H R E A D --------------- Current thread (0x000000000190d000): JavaThread "main" [_thread_in_Java, id=4043, stack(0x00007ff319447000,0x00007ff319548000)] siginfo:si_signo=SIGSEGV: si_errno=0, si_code=1 (SEGV_MAPERR), si_addr=0x0000000000000024 Registers: ... Register to memory mapping: RAX=0x0000000000000002 0x0000000000000002 is pointing to unknown location RBX=0x000000000190db90 0x000000000190db90 is pointing to unknown location RCX=0x0000000000000000 0x0000000000000000 is pointing to unknown location RDX=0x00007ff3195463f8 0x00007ff3195463f8 is pointing into the stack for thread: 0x000000000190d000 "main" prio=10 tid=0x000000000190d000 nid=0xfcb runnable [0x0000000000000000] java.lang.Thread.State: RUNNABLE RSP=0x00007ff319546270 0x00007ff319546270 is pointing into the stack for thread: 0x000000000190d000 "main" prio=10 tid=0x000000000190d000 nid=0xfcb runnable [0x0000000000000000] java.lang.Thread.State: RUNNABLE RBP=0x00007ff319546270 0x00007ff319546270 is pointing into the stack for thread: 0x000000000190d000 "main" prio=10 tid=0x000000000190d000 nid=0xfcb runnable [0x0000000000000000] java.lang.Thread.State: RUNNABLE RSI=0x0000000000000024 0x0000000000000024 is pointing to unknown location RDI=0x00007ff3195463e0 0x00007ff3195463e0 is pointing into the stack for thread: 0x000000000190d000 "main" prio=10 tid=0x000000000190d000 nid=0xfcb runnable [0x0000000000000000] java.lang.Thread.State: RUNNABLE R8 =0x000000000190d000 "main" prio=10 tid=0x000000000190d000 nid=0xfcb runnable [0x0000000000000000] java.lang.Thread.State: RUNNABLE R9 =0x000000000190db88 0x000000000190db88 is pointing to unknown location R10=0x00007ff319546300 0x00007ff319546300 is pointing into the stack for thread: 0x000000000190d000 "main" prio=10 tid=0x000000000190d000 nid=0xfcb runnable [0x0000000000000000] java.lang.Thread.State: RUNNABLE R11=0x0000000000000002 0x0000000000000002 is pointing to unknown location R12=0x000000000190d000 "main" prio=10 tid=0x000000000190d000 nid=0xfcb runnable [0x0000000000000000] java.lang.Thread.State: RUNNABLE R13=0x00007ff319546560 0x00007ff319546560 is pointing into the stack for thread: 0x000000000190d000 "main" prio=10 tid=0x000000000190d000 nid=0xfcb runnable [0x0000000000000000] java.lang.Thread.State: RUNNABLE R14=0x00007ff3195463e0 0x00007ff3195463e0 is pointing into the stack for thread: 0x000000000190d000 "main" prio=10 tid=0x000000000190d000 nid=0xfcb runnable [0x0000000000000000] java.lang.Thread.State: RUNNABLE R15=0x0000000000000003 0x0000000000000003 is pointing to unknown location Top of Stack: (sp=0x00007ff319546270) ... 
Instructions: (pc=0x00007ff31807757f) 0x00007ff31807756f: e2 03 48 03 57 58 31 c9 48 8b 32 48 85 f6 74 03 0x00007ff31807757f: 48 8b 0e 48 89 0a 8b 77 68 83 c0 01 39 f0 7c d1 Stack: [0x00007ff319447000,0x00007ff319548000], sp=0x00007ff319546270, free space=1020k Native frames: (J=compiled Java code, j=interpreted, Vv=VM code, C=native code) V [libjvm.so+0x42757f] V [libjvm.so+0x42866b] V [libjvm.so+0x4275c8] V [libjvm.so+0x4331bd] V [libjvm.so+0x44e5c7] C [libExiff2-binding.so+0x1f16] _ZN7JNIEnv_15CallVoidMethodAEP8_jobjectP10_jmethodIDPK6jvalue+0x40 C [libExiff2-binding.so+0x1b96] _Z8loadIPTCSt8auto_ptrIN5Exiv25ImageEEPKcP7JNIEnv_P8_jobject+0x2ba C [libExiff2-binding.so+0x1d3f] _Z7getVarsPKcP7JNIEnv_P8_jobject+0x176 C [libExiff2-binding.so+0x1de7] Java_photo_exiv2_Exiv2MetaDataStore_impl_1loadFromExiv+0x4b j photo.exiv2.Exiv2MetaDataStore.impl_loadFromExiv(Ljava/lang/String;Lphoto/exiv2/Exiv2MetaDataStore;)V+0 j photo.exiv2.Exiv2MetaDataStore.loadFromExiv2()V+9 j photo.exiv2.Exiv2MetaDataStore.loadData()V+1 j photo.exiv2.Exiv2MetaDataStore.<init>(Lphoto/ImageFile;)V+10 j test.Main.main([Ljava/lang/String;)V+76 v ~StubRoutines::call_stub V [libjvm.so+0x428698] V [libjvm.so+0x4275c8] V [libjvm.so+0x432943] V [libjvm.so+0x447f91] C [java+0x3495] JavaMain+0xd75 --------------- P R O C E S S --------------- Java Threads: ( => current thread ) 0x00007ff2c4027800 JavaThread "Low Memory Detector" daemon [_thread_blocked, id=4060, stack(0x00007ff2c9052000,0x00007ff2c9153000)] 0x00007ff2c4025000 JavaThread "CompilerThread1" daemon [_thread_blocked, id=4059, stack(0x00007ff2c9153000,0x00007ff2c9254000)] 0x00007ff2c4022000 JavaThread "CompilerThread0" daemon [_thread_blocked, id=4058, stack(0x00007ff2c9254000,0x00007ff2c9355000)] 0x00007ff2c401f800 JavaThread "Signal Dispatcher" daemon [_thread_blocked, id=4057, stack(0x00007ff2c9355000,0x00007ff2c9456000)] 0x00007ff2c4001000 JavaThread "Finalizer" daemon [_thread_blocked, id=4056, stack(0x00007ff2c994d000,0x00007ff2c9a4e000)] 0x0000000001984000 JavaThread "Reference Handler" daemon [_thread_blocked, id=4055, stack(0x00007ff2c9a4e000,0x00007ff2c9b4f000)] =>0x000000000190d000 JavaThread "main" [_thread_in_Java, id=4043, stack(0x00007ff319447000,0x00007ff319548000)] Other Threads: 0x000000000197d800 VMThread [stack: 0x00007ff2c9b4f000,0x00007ff2c9c50000] [id=4054] 0x00007ff2c4032000 WatcherThread [stack: 0x00007ff2c8f51000,0x00007ff2c9052000] [id=4061] VM state:not at safepoint (normal execution) VM Mutex/Monitor currently owned by a thread: None Heap PSYoungGen total 18432K, used 316K [0x00007ff2fed30000, 0x00007ff3001c0000, 0x00007ff313730000) eden space 15808K, 2% used [0x00007ff2fed30000,0x00007ff2fed7f0b8,0x00007ff2ffca0000) from space 2624K, 0% used [0x00007ff2fff30000,0x00007ff2fff30000,0x00007ff3001c0000) to space 2624K, 0% used [0x00007ff2ffca0000,0x00007ff2ffca0000,0x00007ff2fff30000) PSOldGen total 42240K, used 0K [0x00007ff2d5930000, 0x00007ff2d8270000, 0x00007ff2fed30000) object space 42240K, 0% used [0x00007ff2d5930000,0x00007ff2d5930000,0x00007ff2d8270000) PSPermGen total 21248K, used 2827K [0x00007ff2cb330000, 0x00007ff2cc7f0000, 0x00007ff2d5930000) object space 21248K, 13% used [0x00007ff2cb330000,0x00007ff2cb5f2f60,0x00007ff2cc7f0000) Dynamic libraries: 00400000-00409000 r-xp 00000000 08:03 141899 /usr/lib/jvm/java-6-openjdk/jre/bin/java 00608000-00609000 r--p 00008000 08:03 141899 /usr/lib/jvm/java-6-openjdk/jre/bin/java 00609000-0060a000 rw-p 00009000 08:03 141899 /usr/lib/jvm/java-6-openjdk/jre/bin/java 01904000-019ad000 rw-p 
00000000 00:00 0 [heap] ... 7ff2c820c000-7ff2c8232000 r-xp 00000000 08:03 917704 /lib/libexpat.so.1.5.2 7ff2c8232000-7ff2c8432000 ---p 00026000 08:03 917704 /lib/libexpat.so.1.5.2 7ff2c8432000-7ff2c8434000 r--p 00026000 08:03 917704 /lib/libexpat.so.1.5.2 7ff2c8434000-7ff2c8435000 rw-p 00028000 08:03 917704 /lib/libexpat.so.1.5.2 7ff2c8435000-7ff2c844a000 r-xp 00000000 08:03 917708 /lib/libgcc_s.so.1 7ff2c844a000-7ff2c8649000 ---p 00015000 08:03 917708 /lib/libgcc_s.so.1 7ff2c8649000-7ff2c864a000 r--p 00014000 08:03 917708 /lib/libgcc_s.so.1 7ff2c864a000-7ff2c864b000 rw-p 00015000 08:03 917708 /lib/libgcc_s.so.1 7ff2c864b000-7ff2c8733000 r-xp 00000000 08:03 134995 /usr/lib/libstdc++.so.6.0.14 7ff2c8733000-7ff2c8932000 ---p 000e8000 08:03 134995 /usr/lib/libstdc++.so.6.0.14 7ff2c8932000-7ff2c893a000 r--p 000e7000 08:03 134995 /usr/lib/libstdc++.so.6.0.14 7ff2c893a000-7ff2c893c000 rw-p 000ef000 08:03 134995 /usr/lib/libstdc++.so.6.0.14 7ff2c893c000-7ff2c8951000 rw-p 00000000 00:00 0 7ff2c8951000-7ff2c8af3000 r-xp 00000000 08:03 134599 /usr/lib/libexiv2.so.6.0.0 7ff2c8af3000-7ff2c8cf2000 ---p 001a2000 08:03 134599 /usr/lib/libexiv2.so.6.0.0 7ff2c8cf2000-7ff2c8d0f000 r--p 001a1000 08:03 134599 /usr/lib/libexiv2.so.6.0.0 7ff2c8d0f000-7ff2c8d10000 rw-p 001be000 08:03 134599 /usr/lib/libexiv2.so.6.0.0 7ff2c8d10000-7ff2c8d23000 rw-p 00000000 00:00 0 7ff2c8d42000-7ff2c8d45000 r-xp 00000000 08:03 800718 /home/hjed/libExiff2-binding.so 7ff2c8d45000-7ff2c8f44000 ---p 00003000 08:03 800718 /home/hjed/libExiff2-binding.so 7ff2c8f44000-7ff2c8f45000 r--p 00002000 08:03 800718 /home/hjed/libExiff2-binding.so 7ff2c8f45000-7ff2c8f46000 rw-p 00003000 08:03 800718 /home/hjed/libExiff2-binding.so 7ff2c8f46000-7ff2c8f49000 r--s 0000f000 08:03 141333 /usr/lib/jvm/java-6-openjdk/jre/lib/ext/pulse-java.jar 7ff2c8f49000-7ff2c8f51000 r--s 00066000 08:03 408472 /usr/share/java/gnome-java-bridge.jar ... 7ff2ca559000-7ff2ca55b000 r--s 0001d000 08:03 141354 /usr/lib/jvm/java-6-openjdk/jre/lib/plugin.jar 7ff2ca55b000-7ff2ca560000 r--s 00044000 08:03 141353 /usr/lib/jvm/java-6-openjdk/jre/lib/netx.jar 7ff2ca560000-7ff2ca592000 rw-p 00000000 00:00 0 7ff2ca592000-7ff2ca720000 r--s 038af000 08:03 141833 /usr/lib/jvm/java-6-openjdk/jre/lib/rt.jar ... 
7ff31673b000-7ff316742000 r-xp 00000000 08:03 141867 /usr/lib/jvm/java-6-openjdk/jre/lib/amd64/libzip.so 7ff316742000-7ff316941000 ---p 00007000 08:03 141867 /usr/lib/jvm/java-6-openjdk/jre/lib/amd64/libzip.so 7ff316941000-7ff316942000 r--p 00006000 08:03 141867 /usr/lib/jvm/java-6-openjdk/jre/lib/amd64/libzip.so 7ff316942000-7ff316943000 rw-p 00007000 08:03 141867 /usr/lib/jvm/java-6-openjdk/jre/lib/amd64/libzip.so 7ff316943000-7ff31694f000 r-xp 00000000 08:03 921396 /lib/libnss_files-2.12.1.so 7ff31694f000-7ff316b4e000 ---p 0000c000 08:03 921396 /lib/libnss_files-2.12.1.so 7ff316b4e000-7ff316b4f000 r--p 0000b000 08:03 921396 /lib/libnss_files-2.12.1.so 7ff316b4f000-7ff316b50000 rw-p 0000c000 08:03 921396 /lib/libnss_files-2.12.1.so 7ff316b50000-7ff316b5a000 r-xp 00000000 08:03 921398 /lib/libnss_nis-2.12.1.so 7ff316b5a000-7ff316d59000 ---p 0000a000 08:03 921398 /lib/libnss_nis-2.12.1.so 7ff316d59000-7ff316d5a000 r--p 00009000 08:03 921398 /lib/libnss_nis-2.12.1.so 7ff316d5a000-7ff316d5b000 rw-p 0000a000 08:03 921398 /lib/libnss_nis-2.12.1.so 7ff316d5b000-7ff316d63000 r-xp 00000000 08:03 921393 /lib/libnss_compat-2.12.1.so 7ff316d63000-7ff316f62000 ---p 00008000 08:03 921393 /lib/libnss_compat-2.12.1.so 7ff316f62000-7ff316f63000 r--p 00007000 08:03 921393 /lib/libnss_compat-2.12.1.so 7ff316f63000-7ff316f64000 rw-p 00008000 08:03 921393 /lib/libnss_compat-2.12.1.so 7ff316f64000-7ff316f6c000 r-xp 00000000 08:03 141869 /usr/lib/jvm/java-6-openjdk/jre/lib/amd64/native_threads/libhpi.so 7ff316f6c000-7ff31716b000 ---p 00008000 08:03 141869 /usr/lib/jvm/java-6-openjdk/jre/lib/amd64/native_threads/libhpi.so 7ff31716b000-7ff31716c000 r--p 00007000 08:03 141869 /usr/lib/jvm/java-6-openjdk/jre/lib/amd64/native_threads/libhpi.so 7ff31716c000-7ff31716d000 rw-p 00008000 08:03 141869 /usr/lib/jvm/java-6-openjdk/jre/lib/amd64/native_threads/libhpi.so 7ff31716d000-7ff317184000 r-xp 00000000 08:03 921392 /lib/libnsl-2.12.1.so 7ff317184000-7ff317383000 ---p 00017000 08:03 921392 /lib/libnsl-2.12.1.so 7ff317383000-7ff317384000 r--p 00016000 08:03 921392 /lib/libnsl-2.12.1.so 7ff317384000-7ff317385000 rw-p 00017000 08:03 921392 /lib/libnsl-2.12.1.so 7ff317385000-7ff317387000 rw-p 00000000 00:00 0 7ff317387000-7ff3173b2000 r-xp 00000000 08:03 141850 /usr/lib/jvm/java-6-openjdk/jre/lib/amd64/libjava.so 7ff3173b2000-7ff3175b1000 ---p 0002b000 08:03 141850 /usr/lib/jvm/java-6-openjdk/jre/lib/amd64/libjava.so 7ff3175b1000-7ff3175b2000 r--p 0002a000 08:03 141850 /usr/lib/jvm/java-6-openjdk/jre/lib/amd64/libjava.so 7ff3175b2000-7ff3175b5000 rw-p 0002b000 08:03 141850 /usr/lib/jvm/java-6-openjdk/jre/lib/amd64/libjava.so 7ff3175b5000-7ff3175c3000 r-xp 00000000 08:03 141866 /usr/lib/jvm/java-6-openjdk/jre/lib/amd64/libverify.so 7ff3175c3000-7ff3177c2000 ---p 0000e000 08:03 141866 /usr/lib/jvm/java-6-openjdk/jre/lib/amd64/libverify.so 7ff3177c2000-7ff3177c4000 r--p 0000d000 08:03 141866 /usr/lib/jvm/java-6-openjdk/jre/lib/amd64/libverify.so 7ff3177c4000-7ff3177c5000 rw-p 0000f000 08:03 141866 /usr/lib/jvm/java-6-openjdk/jre/lib/amd64/libverify.so 7ff3177c5000-7ff3177cc000 r-xp 00000000 08:03 921405 /lib/librt-2.12.1.so 7ff3177cc000-7ff3179cb000 ---p 00007000 08:03 921405 /lib/librt-2.12.1.so 7ff3179cb000-7ff3179cc000 r--p 00006000 08:03 921405 /lib/librt-2.12.1.so 7ff3179cc000-7ff3179cd000 rw-p 00007000 08:03 921405 /lib/librt-2.12.1.so 7ff3179cd000-7ff317a4f000 r-xp 00000000 08:03 921390 /lib/libm-2.12.1.so 7ff317a4f000-7ff317c4e000 ---p 00082000 08:03 921390 /lib/libm-2.12.1.so 7ff317c4e000-7ff317c4f000 r--p 00081000 
08:03 921390 /lib/libm-2.12.1.so 7ff317c4f000-7ff317c50000 rw-p 00082000 08:03 921390 /lib/libm-2.12.1.so 7ff317c50000-7ff3184c4000 r-xp 00000000 08:03 141871 /usr/lib/jvm/java-6-openjdk/jre/lib/amd64/server/libjvm.so 7ff3184c4000-7ff3186c3000 ---p 00874000 08:03 141871 /usr/lib/jvm/java-6-openjdk/jre/lib/amd64/server/libjvm.so 7ff3186c3000-7ff318739000 r--p 00873000 08:03 141871 /usr/lib/jvm/java-6-openjdk/jre/lib/amd64/server/libjvm.so 7ff318739000-7ff318754000 rw-p 008e9000 08:03 141871 /usr/lib/jvm/java-6-openjdk/jre/lib/amd64/server/libjvm.so 7ff318754000-7ff31878d000 rw-p 00000000 00:00 0 7ff31878d000-7ff318907000 r-xp 00000000 08:03 921385 /lib/libc-2.12.1.so 7ff318907000-7ff318b06000 ---p 0017a000 08:03 921385 /lib/libc-2.12.1.so 7ff318b06000-7ff318b0a000 r--p 00179000 08:03 921385 /lib/libc-2.12.1.so 7ff318b0a000-7ff318b0b000 rw-p 0017d000 08:03 921385 /lib/libc-2.12.1.so 7ff318b0b000-7ff318b10000 rw-p 00000000 00:00 0 7ff318b10000-7ff318b12000 r-xp 00000000 08:03 921388 /lib/libdl-2.12.1.so 7ff318b12000-7ff318d12000 ---p 00002000 08:03 921388 /lib/libdl-2.12.1.so 7ff318d12000-7ff318d13000 r--p 00002000 08:03 921388 /lib/libdl-2.12.1.so 7ff318d13000-7ff318d14000 rw-p 00003000 08:03 921388 /lib/libdl-2.12.1.so 7ff318d14000-7ff318d18000 r-xp 00000000 08:03 141838 /usr/lib/jvm/java-6-openjdk/jre/lib/amd64/jli/libjli.so 7ff318d18000-7ff318f17000 ---p 00004000 08:03 141838 /usr/lib/jvm/java-6-openjdk/jre/lib/amd64/jli/libjli.so 7ff318f17000-7ff318f18000 r--p 00003000 08:03 141838 /usr/lib/jvm/java-6-openjdk/jre/lib/amd64/jli/libjli.so 7ff318f18000-7ff318f19000 rw-p 00004000 08:03 141838 /usr/lib/jvm/java-6-openjdk/jre/lib/amd64/jli/libjli.so 7ff318f19000-7ff318f31000 r-xp 00000000 08:03 921401 /lib/libpthread-2.12.1.so 7ff318f31000-7ff319130000 ---p 00018000 08:03 921401 /lib/libpthread-2.12.1.so 7ff319130000-7ff319131000 r--p 00017000 08:03 921401 /lib/libpthread-2.12.1.so 7ff319131000-7ff319132000 rw-p 00018000 08:03 921401 /lib/libpthread-2.12.1.so 7ff319132000-7ff319136000 rw-p 00000000 00:00 0 7ff319136000-7ff31914c000 r-xp 00000000 08:03 917772 /lib/libz.so.1.2.3.4 7ff31914c000-7ff31934c000 ---p 00016000 08:03 917772 /lib/libz.so.1.2.3.4 7ff31934c000-7ff31934d000 r--p 00016000 08:03 917772 /lib/libz.so.1.2.3.4 7ff31934d000-7ff31934e000 rw-p 00017000 08:03 917772 /lib/libz.so.1.2.3.4 7ff31934e000-7ff31936e000 r-xp 00000000 08:03 921379 /lib/ld-2.12.1.so 7ff319387000-7ff319391000 rw-p 00000000 00:00 0 7ff319391000-7ff319447000 rw-p 00000000 00:00 0 7ff319447000-7ff31944a000 ---p 00000000 00:00 0 7ff31944a000-7ff31954d000 rw-p 00000000 00:00 0 7ff319562000-7ff31956a000 rw-s 00000000 08:03 1966453 /tmp/hsperfdata_hjed/4041 7ff31956a000-7ff31956b000 rw-p 00000000 00:00 0 7ff31956b000-7ff31956c000 r--p 00000000 00:00 0 7ff31956c000-7ff31956e000 rw-p 00000000 00:00 0 7ff31956e000-7ff31956f000 r--p 00020000 08:03 921379 /lib/ld-2.12.1.so 7ff31956f000-7ff319570000 rw-p 00021000 08:03 921379 /lib/ld-2.12.1.so 7ff319570000-7ff319571000 rw-p 00000000 00:00 0 7fff0fb03000-7fff0fb24000 rw-p 00000000 00:00 0 [stack] 7fff0fbff000-7fff0fc00000 r-xp 00000000 00:00 0 [vdso] ffffffffff600000-ffffffffff601000 r-xp 00000000 00:00 0 [vsyscall] VM Arguments: jvm_args: -Dfile.encoding=UTF-8 java_command: test.Main Launcher Type: SUN_STANDARD Environment Variables: PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games USERNAME=hjed LD_LIBRARY_PATH=/usr/lib/jvm/java-6-openjdk/jre/lib/amd64/server:/usr/lib/jvm/java-6-openjdk/jre/lib/amd64:/usr/lib/jvm/java-6-openjdk/jre/../lib/amd64 
SHELL=/bin/bash DISPLAY=:0.0 Signal Handlers: SIGSEGV: [libjvm.so+0x712700], sa_mask[0]=0x7ffbfeff, sa_flags=0x10000004 SIGBUS: [libjvm.so+0x712700], sa_mask[0]=0x7ffbfeff, sa_flags=0x10000004 SIGFPE: [libjvm.so+0x5d4020], sa_mask[0]=0x7ffbfeff, sa_flags=0x10000004 SIGPIPE: [libjvm.so+0x5d4020], sa_mask[0]=0x7ffbfeff, sa_flags=0x10000004 SIGXFSZ: [libjvm.so+0x5d4020], sa_mask[0]=0x7ffbfeff, sa_flags=0x10000004 SIGILL: [libjvm.so+0x5d4020], sa_mask[0]=0x7ffbfeff, sa_flags=0x10000004 SIGUSR1: SIG_DFL, sa_mask[0]=0x00000000, sa_flags=0x00000000 SIGUSR2: [libjvm.so+0x5d3730], sa_mask[0]=0x00000004, sa_flags=0x10000004 SIGHUP: [libjvm.so+0x5d61a0], sa_mask[0]=0x7ffbfeff, sa_flags=0x10000004 SIGINT: SIG_IGN, sa_mask[0]=0x00000000, sa_flags=0x00000000 SIGTERM: [libjvm.so+0x5d61a0], sa_mask[0]=0x7ffbfeff, sa_flags=0x10000004 SIGQUIT: [libjvm.so+0x5d61a0], sa_mask[0]=0x7ffbfeff, sa_flags=0x10000004 --------------- S Y S T E M --------------- OS:Ubuntu 10.10 (maverick) uname:Linux 2.6.35-24-generic #42-Ubuntu SMP Thu Dec 2 02:41:37 UTC 2010 x86_64 libc:glibc 2.12.1 NPTL 2.12.1 rlimit: STACK 8192k, CORE 0k, NPROC infinity, NOFILE 1024, AS infinity load average:0.25 0.16 0.21 /proc/meminfo: MemTotal: 4048200 kB MemFree: 1230476 kB Buffers: 589572 kB Cached: 911132 kB SwapCached: 0 kB Active: 1321712 kB Inactive: 1202272 kB Active(anon): 1023852 kB Inactive(anon): 7168 kB Active(file): 297860 kB Inactive(file): 1195104 kB Unevictable: 64 kB Mlocked: 64 kB SwapTotal: 7065596 kB SwapFree: 7065596 kB Dirty: 632 kB Writeback: 0 kB AnonPages: 1023368 kB Mapped: 145832 kB Shmem: 7728 kB Slab: 111136 kB SReclaimable: 66316 kB SUnreclaim: 44820 kB KernelStack: 3824 kB PageTables: 27736 kB NFS_Unstable: 0 kB Bounce: 0 kB WritebackTmp: 0 kB CommitLimit: 9089696 kB Committed_AS: 2378396 kB VmallocTotal: 34359738367 kB VmallocUsed: 332928 kB VmallocChunk: 34359397884 kB HardwareCorrupted: 0 kB HugePages_Total: 0 HugePages_Free: 0 HugePages_Rsvd: 0 HugePages_Surp: 0 Hugepagesize: 2048 kB DirectMap4k: 67136 kB DirectMap2M: 4118528 kB CPU:total 8 (4 cores per cpu, 2 threads per core) family 6 model 26 stepping 5, cmov, cx8, fxsr, mmx, sse, sse2, sse3, ssse3, sse4.1, sse4.2, popcnt, ht Memory: 4k page, physical 4048200k(1230476k free), swap 7065596k(7065596k free) vm_info: OpenJDK 64-Bit Server VM (19.0-b09) for linux-amd64 JRE (1.6.0_20-b20), built on Dec 10 2010 19:45:55 by "buildd" with gcc 4.4.5 time: Sat Jan 1 14:12:27 2011 elapsed time: 0 seconds The java code is: ... public class Main { public static void main(String[] args) { ... ImageFile img = new ImageFile(System.getProperty("user.home") + "/PC100001.JPG"); Exiv2MetaDataStore e = new Exiv2MetaDataStore(img); Iterator<Entry<String, String>> i = e.entrySet().iterator(); while (i.hasNext()) { Entry<String, String> entry = i.next(); System.out.println(entry.getKey() + ":" + entry.getValue()); } //if you switch this print statment with the while loop you get the same error. // System.out.print(e.toString()); } } and /** NB: MetaDataStore is an abstract class that extends HashMap<String,String> */ public class Exiv2MetaDataStore extends MetaDataStore{ ... private final ImageFile F; /** * Creates an meta data store from an ImageFile using Exiv2 * this calls loadData(); * @param f */ public Exiv2MetaDataStore(ImageFile f) { F = f; loadData(); } ... @Override protected void loadData() { loadFromExiv2(); } ... 
private void loadFromExiv2() { impl_loadFromExiv(F.getAbsolutePath(), this); } private native void impl_loadFromExiv(String path, Exiv2MetaDataStore str); //this method called by the C++ code public void exiv2_reciveElement(String key, String value) { super.put(key,value); } static { Runtime.getRuntime().load("/home/hjed/libExiff2-binding.so"); } } C++ code: #include <exif.hpp> #include <image.hpp> #include <iptc.hpp> #include <exiv2/exiv2.hpp> #include <exiv2/error.hpp> #include <iostream> #include <iomanip> #include <cassert> void loadIPTC(Exiv2::Image::AutoPtr image, const char * path, JNIEnv * env, jobject obj) { Exiv2::IptcData &iptcData = image->iptcData(); //load method jclass cls = env->GetObjectClass(obj); jmethodID mid = env->GetMethodID(cls, "exiv2_reciveElement", "(Ljava/lang/String;Ljava/lang/String;)V"); //is there any IPTC data AND check that method exists if (iptcData.empty() | (mid == NULL)) { std::string error(path); error += ": failed loading IPTC data, there may not be any data"; } else { Exiv2::IptcData::iterator end = iptcData.end(); for (Exiv2::IptcData::iterator md = iptcData.begin(); md != end; ++md) { jvalue values[2]; const char* key = md->key().c_str(); values[0].l = env->NewStringUTF(key); md->value().toString().c_str(); const char* value = md->typeName(); values[2].l = env->NewStringUTF(value); //If I replace the code for values[2] with the commented out code I get the same error. //const char* type = md->typeName(); //values[2].l = env->NewStringUTF(type); env->CallVoidMethodA(obj, mid, values); } } } void getVars(const char* path, JNIEnv * env, jobject obj) { //Load image Exiv2::Image::AutoPtr image = Exiv2::ImageFactory::open(path); assert(image.get() != 0); image->readMetadata(); //Load IPTC data loadIPTC(image, path, env, obj); } JNIEXPORT void JNICALL Java_photo_exiv2_Exiv2MetaDataStore_impl_1loadFromExiv(JNIEnv * env, jobject obj, jstring path, jobject obj2) { const char* path2 = env->GetStringUTFChars(path, NULL); getVars(path2, env, obj); env->ReleaseStringUTFChars(path, path2); } I've searched for a fix for this, but I can't find one. I don't have much experience using C++ so if I've made an obvious mistake in the C code I apologies. Thanks for any help, HJED P.S. This is my first post on this site and I wasn't sure how much of the code I needed to show. Sorry if I've put to much up.

    Read the article

  • Mobile Apps for Oracle E-Business Suite

    - by Steven Chan (Oracle Development)
    Many things have changed in the mobile space over the last few years. Here's an update on our strategy for mobile apps for the E-Business Suite. Mobile app strategy We're building our family of mobile apps for the E-Business Suite using Oracle Mobile Application Framework.  This framework allows us to write a single application that can be run on Apple iOS and Google Android platforms. Mobile apps for the E-Business Suite will share a common look-and-feel. The E-Business Suite is a suite of over 200 product modules spanning Financials, Supply Chain, Human Resources, and many other areas. Our mobile app strategy is to release standalone apps for specific product modules.  Our Oracle Timecards app, which allows users to create and submit timecards, is an example of a standalone app. Some common functions that span multiple product areas will have dedicated apps, too. An example of this is our Oracle Approvals app, which allows users to review and approve requests for expenses, requisitions, purchase orders, recruitment vacancies and offers, and more. You can read more about our Oracle Mobile Approvals app here: Now Available: Oracle Mobile Approvals for iOS Our goal is to support smaller screen (e.g. smartphones) as well as larger screens (e.g. tablets), with the smaller screen versions generally delivered first.  Where possible, we will deliver these as universal apps.  An example is our Oracle Mobile Field Service app, which allows field service technicians to remotely access customer, product, service request, and task-related information.  This app can run on a smartphone, while providing a richer experience for tablets. Deploying EBS mobile apps The mobile apps, themselves (i.e. client-side components) can be downloaded by end-users from the Apple iTunes today.  Android versions will be available from Google play. You can monitor this blog for Android-related updates. Where possible, our mobile apps should be deployable with a minimum of server-side changes.  These changes will generally involve a consolidated server-side patch for technology-stack components, and possibly a server-side patch for the functional product module. Updates to existing mobile apps may require new server-side components to enable all of the latest mobile functionality. All EBS product modules are certified for internal intranet deployments (i.e. used by employees within an organization's firewall).  Only a subset of EBS products such as iRecruitment are certified to be deployed externally (i.e. used by non-employees outside of an organization's firewall).  Today, many organizations running the E-Business Suite do not expose their EBS environment externally and all of the mobile apps that we're building are intended for internal employee use.  Recognizing this, our mobile apps are currently designed for users who are connected to the organization's intranet via VPN.  We expect that this may change in future updates to our mobile apps. Mobile apps and internationalization The initial releases of our mobile apps will be in English.  Later updates will include translations for all left-to-right languages supported by the E-Business Suite.  Right-to-left languages will not be translated. Customizing apps for enterprise deployments The current generation of mobile apps for Oracle E-Business Suite cannot be customized. We are evaluating options for limited customizations, including corporate branding with logos, corporate color schemes, and others. 
    This is a potentially complex area with many tricky implications for deployment and maintenance. We would be interested in hearing your requirements for customizations in enterprise deployments.

    Prerequisites
    - Apple iOS 7 and higher
    - Android 4.1 (API level 16) and higher, with minimum CPU/memory configurations listed here
    - EBS 12.1: EBS 12.1.3, plus Family Packs for the related product module
    - EBS 12.2.3

    References
    - Oracle E-Business Suite Mobile Apps, Release 12.1 and 12.2 Documentation (Note 1641772.1)
    - Oracle E-Business Suite Mobile Apps Administrator's Guide, Release 12.1 and 12.2 (Note 1642431.1)

    Related Articles
    - Using Mobile Devices with Oracle E-Business Suite
    - Apple iPads Certified with Oracle E-Business Suite 12.1
    - Now Available: Oracle Mobile Approvals for iOS

    The preceding is intended to outline our general product direction. It is intended for information purposes only, and may not be incorporated into any contract. It is not a commitment to deliver any material, code, or functionality, and should not be relied upon in making purchasing decisions. The development, release, and timing of any features or functionality described for Oracle's products remains at the sole discretion of Oracle.

    Read the article

  • XNA: Huge Tile Map, long load times

    - by Zach
    Recently I built a tile map generator for a game project. What I am very proud of is that I finally got it to the point where I can have a GIANT 2D map build perfectly on my PC. About 120000pixels by 40000 pixels. I can go larger actually, but I have only 1 draw back. #1 ram, the map currently draws about 320MB of ram and I know the Xbox allows 512MB I think? #2 It takes 20 mins for the map to build then display on the Xbox, on my PC it take less then a few seconds. I need to bring that 20 minutes of generating from 20 mins to how ever little bit I can, and how can a lower the amount of RAM usage while still being able to generate my map. Right now everything is stored in Jagged Arrays, each piece generating in a size of 1280x720 (the mother piece). Up to the amount that I need, every block is exactly 40x40 pixels however the blocks get removed from a List or regenerated in a List depending how close the mother piece is to the player. Saving A LOT of CPU, so at all times its no more then looping through 5184 some blocks. Well at least I'm sure of this. But how can I lower my RAM usage without hurting the size of the map, and how can I lower these INSANE loading times? EDIT: Let me explain my self better. Also I'd like to let everyone know now that I'm inexperienced with many of these things. So here is an example of the arrays I'm using. Here is the overall in a shorter term: int[][] array = new int[30][]; array[0] = new int[] { 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2 }; array[1] = new int[] { 1, 3, 3, 3, 3, 1, 0, 0, 0, 0, 1, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2 }; that goes on for around 30 arrays downward. Now for every time it hits a 1, it goes and generates a tile map 1280x720 and it does that exactly the way it does it above. This is how I loop through those arrays: for (int i = 0; i < array.Length; i += 1) { for (int h = 0; h < array[i].Length; h += 1) { } { Now how the tiles are drawn and removed is something like this: public void Draw(SpriteBatch spriteBatch, Vector2 cam) { if (cam.X >= this.Position.X - 1280) { if (cam.X <= this.Position.X + 2560) { if (cam.Y >= this.Position.Y - 720) { if (cam.Y <= this.Position.Y + 1440) { if (visible) { if (once == 0) { once = 1; visible = false; regen(); } } for (int i = Tiles.Count - 1; i >= 0; i--) { Tiles[i].Draw(spriteBatch, cam); } for (int i = unWalkTiles.Count - 1; i >= 0; i--) { unWalkTiles[i].Draw(spriteBatch, cam); } } else { once = 0; for (int i = Tiles.Count - 1; i >= 0; i--) { Tiles.RemoveAt(i); } for (int i = unWalkTiles.Count - 1; i >= 0; i--) { unWalkTiles.RemoveAt(i); } } } else { once = 0; for (int i = Tiles.Count - 1; i >= 0; i--) { Tiles.RemoveAt(i); } for (int i = unWalkTiles.Count - 1; i >= 0; i--) { unWalkTiles.RemoveAt(i); } } } else { once = 0; for (int i = Tiles.Count - 1; i >= 0; i--) { Tiles.RemoveAt(i); } for (int i = unWalkTiles.Count - 1; i >= 0; i--) { unWalkTiles.RemoveAt(i); } } } else { once = 0; for (int i = Tiles.Count - 1; i >= 0; i--) { Tiles.RemoveAt(i); } for (int i = unWalkTiles.Count - 1; i >= 0; i--) { unWalkTiles.RemoveAt(i); } } } } If you guys still need more information just ask in the comments.
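    The proximity-based culling described above (only materializing the 40x40 tiles of a 1280x720 "mother piece" while the camera is near it, and dropping them again once it moves away) is the part that keeps the per-frame work bounded, and it can be sketched independently of XNA. The following rough Java illustration is not the poster's code; the class and method names are invented for the example.

        import java.util.ArrayList;
        import java.util.List;

        // Sketch of distance-based chunk activation: a chunk only holds its tile
        // objects in memory while the camera is within one chunk-width of it.
        final class Chunk {
            static final int WIDTH = 1280, HEIGHT = 720, TILE = 40;

            final int originX, originY;      // top-left corner in world pixels
            private List<int[]> tiles;       // lazily built; null while culled

            Chunk(int originX, int originY) {
                this.originX = originX;
                this.originY = originY;
            }

            /** Build or drop this chunk's tiles depending on camera position. */
            void update(float camX, float camY) {
                boolean near = camX >= originX - WIDTH  && camX <= originX + 2 * WIDTH
                            && camY >= originY - HEIGHT && camY <= originY + 2 * HEIGHT;
                if (near && tiles == null) {
                    tiles = new ArrayList<>();
                    for (int y = 0; y < HEIGHT; y += TILE) {
                        for (int x = 0; x < WIDTH; x += TILE) {
                            tiles.add(new int[] { originX + x, originY + y });
                        }
                    }
                } else if (!near && tiles != null) {
                    tiles = null;            // let the GC reclaim the tiles
                }
            }

            int activeTileCount() {
                return tiles == null ? 0 : tiles.size();
            }
        }

    With (1280 / 40) x (720 / 40) = 32 x 18 = 576 tiles per chunk, a 3 x 3 neighborhood of active chunks is 9 x 576 = 5,184 tiles, which matches the figure quoted in the question; the total map size then only determines how many dormant chunk descriptors exist, not how much work is done per frame.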

    Read the article

  • JavaOne 2012 Sunday Strategy Keynote

    - by Janice J. Heiss
    At the Sunday Strategy Keynote, held at the Masonic Auditorium, Hasan Rizvi, EVP, Middleware and Java Development, stated that the theme for this year's JavaOne is: “Make the future Java”-- meaning that Java continues in its role as the most popular, complete, productive, secure, and innovative development platform. But it also means, he qualified, the process by which we make the future Java -- an open, transparent, collaborative, and community-driven evolution. "Many of you have bet your businesses and your careers on Java, and we have bet our business on Java," he said.Rizvi detailed the three factors they consider critical to the success of Java--technology innovation, community participation, and Oracle's leadership/stewardship. He offered a scorecard in these three realms over the past year--with OS X and Linux ARM support on Java SE, open sourcing of JavaFX by the end of the year, the release of Java Embedded Suite 7.0 middleware platform, and multiple releases on the Java EE side. The JCP process continues, with new JSR activity, and JUGs show a 25% increase in participation since last year. Oracle, meanwhile, continues its commitment to both technology and community development/outreach--with four regional JavaOne conferences last year in various part of the world, as well as the release of Java Magazine, with over 120,000 current subscribers. Georges Saab, VP Development, Java SE, next reviewed features of Java SE 7--the first major revision to the platform under Oracle's stewardship, which has included near-monthly update releases offering hundreds of fixes, performance enhancements, and new features. Saab indicated that developers, ISVs, and hosting providers have all been rapid adopters of the platform. He also noted that Oracle's entire Fusion middleware stack is supported on SE 7. The supported platforms for SE 7 has also increased--from Windows, Linux, and Solaris, to OS X, Linux ARM, and the emerging ARM micro-server market. "In the last year, we've added as many new platforms for Java, as were added in the previous decade," said Saab.Saab also explored the upcoming JDK 8 release--including Project Lambda, Project Nashorn (a modern implementation of JavaScript running on the JVM), and others. He noted that Nashorn functionality had already been used internally in NetBeans 7.3, and announced that they were planning to contribute the implementation to OpenJDK. Nandini Ramani, VP Development, Java Client, ME and Card, discussed the latest news pertaining to JavaFX 2.0--releases on Windows, OS X, and Linux, release of the FX Scene Builder tool, the JavaFX WebView component in NetBeans 7.3, and an OpenJFX project in OpenJDK. Nandini announced, as of Sunday, the availability for download of JavaFX on Linux ARM (developer preview), as well as Scene Builder on Linux. She noted that for next year's JDK 8 release, JavaFX will offer 3D, as well as third-party component integration. Avinder Brar, Senior Software Engineer, Navis, and Dierk König, Canoo Fellow, next took the stage and demonstrated all that JavaFX offers, with a feature-rich, animation-rich, real-time cargo management application that employs Canoo's just open-sourced Dolphin technology.Saab also explored Java SE 9 and beyond--Jigsaw modularity, Penrose Project for interoperability with OSGi, improved multi-tenancy for Java in the cloud, and Project Sumatra. 
Phil Rogers, HSA Foundation President and AMD Corporate Fellow, explored heterogeneous computing platforms that combine the CPU and the parallel processor of the GPU into a single piece of silicon and shared memory—a hardware technology driven by such advanced functionalities as HD video, face recognition, and cloud workloads. Project Sumatra is an OpenJDK project targeted at bringing Java to such heterogeneous platforms--with hardware and software experts working together to modify the JVM for these advanced applications and platforms.Ramani next discussed the latest with Java in the embedded space--"the Internet of things" and M2M--declaring this to be "the next IT revolution," with Java as the ideal technology for the ecosystem. Last week, Oracle released Java ME Embedded 3.2 (for micro-contollers and low-power devices), and Java Embedded Suite 7.0 (a middleware stack based on Java SE 7). Axel Hansmann, VP Strategy and Marketing, Cinterion, explored his company's use of Java in M2M, and their new release of EHS5, the world's smallest 3G-capable M2M module, running Java ME Embedded. Hansmaan explained that Java offers them the ability to create a "simple to use, scalable, coherent, end-to-end layer" for such diverse edge devices.Marc Brule, Chief Financial Office, Royal Canadian Mint, also explored the fascinating use-case of JavaCard in his country's MintChip e-cash technology--deployable on smartphones, USB device, computer, tablet, or cloud. In parting, Ramani encouraged developers to download the latest releases of Java Embedded, and try them out.Cameron Purdy, VP, Fusion Middleware Development and Java EE, summarized the latest developments and announcements in the Enterprise space--greater developer productivity in Java EE6 (with more on the way in EE 7), portability between platforms, vendors, and even cloud-to-cloud portability. The earliest version of the Java EE 7 SDK is now available for download--in GlassFish 4--with WebSocket support, better JSON support, and more. The final release is scheduled for April of 2013. Nicole Otto, Senior Director, Consumer Digital Technology, Nike, explored her company's Java technology driven enterprise ecosystem for all things sports, including the NikeFuel accelerometer wrist band. Looking beyond Java EE 7, Purdy mentioned NoSQL database functionality for EE 8, the concurrency utilities (possibly in EE 7), some of the Avatar projects in EE 7, some in EE 8, multi-tenancy for the cloud, supporting SaaS applications, and more.Rizvi ended by introducing Dr. Robert Ballard, oceanographer and National Geographic Explorer in Residence--part of Oracle's philanthropic relationship with the National Geographic Society to fund K-12 education around ocean science and conservation. Ballard is best known for having discovered the wreckage of the Titanic. He offered a fascinating video and overview of the cutting edge technology used in such deep-sea explorations, noting that in his early days, high-bandwidth exploration meant that you’d go down in a submarine and "stick your face up against the window." Now, it's a remotely operated, technology telepresence--"I think of my Hercules vehicle as my equivalent of a Na'vi. When I go beneath the sea, I actually send my spirit." Using high bandwidth satellite links, such amazing explorations can now occur via smartphone, laptop, or whatever platform. Ballard’s team regularly offers live feeds and programming out to schools and the world, spanning 188 countries--with embedding educators as part of the expeditions. 
It's technology at its finest, inspiring the next generation of scientists and explorers!

    Read the article

  • SPARC T4-4 Delivers World Record Performance on Oracle OLAP Perf Version 2 Benchmark

    - by Brian
    Oracle's SPARC T4-4 server delivered world record performance with subsecond response time on the Oracle OLAP Perf Version 2 benchmark, using Oracle Database 11g Release 2 running on Oracle Solaris 11. The SPARC T4-4 server achieved throughput of 430,000 cube-queries/hour with an average response time of 0.85 seconds and a median response time of 0.43 seconds. This was achieved using only 60% of the available CPU resources, leaving plenty of headroom for future growth. The SPARC T4-4 server operated on an Oracle OLAP cube with a 4 billion row fact table of sales data containing 4 dimensions. This represents as many as 90 quintillion aggregate rows (90 followed by 18 zeros).

Performance Landscape

Oracle OLAP Perf Version 2 Benchmark, 4 Billion Fact Table Rows

System       Queries/hour   Users*   Average Response Time (sec)   Median Response Time (sec)
SPARC T4-4   430,000        7,300    0.85                          0.43

* Users - the supported number of users with a given think time of 60 seconds

Configuration Summary and Results

Hardware Configuration:
* SPARC T4-4 server with 4 x SPARC T4 processors, 3.0 GHz
* 1 TB memory
* Data Storage: 1 x Sun Fire X4275 (using COMSTAR), 2 x Sun Storage F5100 Flash Array (each with 80 FMODs)
* Redo Storage: 1 x Sun Fire X4275 (using COMSTAR with 8 HDD)

Software Configuration:
* Oracle Solaris 11 11/11
* Oracle Database 11g Release 2 (11.2.0.3) with the Oracle OLAP option

Benchmark Description

The Oracle OLAP Perf Version 2 benchmark is a workload designed to demonstrate and stress the Oracle OLAP product's core features of fast query, fast update, and rich calculations on a multi-dimensional model to support enhanced data warehousing. The bulk of the benchmark entails running a number of concurrent users, each issuing typical multidimensional queries against an Oracle OLAP cube consisting of a number of years of sales data with fully pre-computed aggregations. The cube has four dimensions: time, product, customer, and channel. Each query user issues approximately 150 different queries. One query chain may ask for total sales in a particular region (e.g. South America) for a particular time period (e.g. Q4 of 2010), followed by additional queries which drill down into sales for individual countries (e.g. Chile, Peru, etc.), with further queries drilling down into individual stores, etc. Another query chain may ask for yearly comparisons of total sales for some product category (e.g. major household appliances) and then issue further queries drilling down into particular products (e.g. refrigerators, stoves, etc.), particular regions, particular customers, etc. Results from version 2 of the benchmark are not comparable with version 1; the primary difference is the type of queries along with the query mix.

Key Points and Best Practices

Since typical BI users are likely to issue similar queries with different constants in the where clauses, setting the init.ora parameter "cursor_sharing" to "force" will provide additional query throughput and a larger number of potential users. Except for this setting, together with making full use of available memory, out-of-the-box performance for the OLAP Perf workload should provide results similar to what is reported here.

For a given number of query users with zero think time, the main measured metrics are the average query response time, the median query response time, and the query throughput. A derived metric is the maximum number of users the system can support while achieving the measured response time, assuming some non-zero think time.
The calculation of the maximum number of users follows from the well-known response-time law N = (rt + tt) * tp, where rt is the average response time, tt is the think time, and tp is the measured throughput. Setting tt to 60 seconds, rt to 0.85 seconds, and tp to 119.44 queries/sec (430,000 queries/hour), the formula shows that the T4-4 server will support about 7,300 concurrent users with a think time of 60 seconds and an average response time of 0.85 seconds. For more information see chapter 3 of the book "Quantitative System Performance" cited below.

See Also
* Quantitative System Performance: Computer System Analysis Using Queueing Network Models, by Edward D. Lazowska, John Zahorjan, G. Scott Graham, and Kenneth C. Sevcik
* Oracle Database 11g - Oracle OLAP (oracle.com, OTN)
* SPARC T4-4 Server (oracle.com, OTN)
* Oracle Solaris (oracle.com, OTN)
* Oracle Database 11g Release 2 (oracle.com, OTN)

Disclosure Statement
Copyright 2012, Oracle and/or its affiliates. All rights reserved. Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners. Results as of 11/2/2012.
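To see how the numbers in this summary fit together, here is a minimal sketch (plain Java, written for illustration and not part of the benchmark kit) that plugs the published figures into the response-time law quoted above:

    public class OlapUsers {
        public static void main(String[] args) {
            double rt = 0.85;                 // average response time, in seconds
            double tt = 60.0;                 // assumed think time, in seconds
            double tp = 430000.0 / 3600.0;    // throughput: 430,000 queries/hour expressed as queries/sec
            double n = (rt + tt) * tp;        // response-time law: N = (rt + tt) * tp
            System.out.printf("Supported users: %.0f%n", n);   // prints roughly 7268, i.e. about 7,300
        }
    }

Running it confirms that the 7,300-user figure is simply the measured throughput multiplied by the sum of the response time and the think time.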

    Read the article

  • Session Report - Java on the Raspberry Pi

    - by Janice J. Heiss
    At midday on Wednesday, the always colorful Oracle evangelist Simon Ritter demonstrated Java on the Raspberry Pi at his session, "Do You Like Coffee with Your Dessert?". The Raspberry Pi is a credit card-sized single-board computer developed in the UK with the intention of stimulating the teaching of basic computer science in schools.

"I don't think there is a single feature that makes the Raspberry Pi significant," observed Ritter, "but a combination of things really makes it stand out. First, it's $35 for what is effectively a completely usable computer. You do have to add a power supply, SD card for storage and maybe a screen, keyboard and mouse, but this is still way cheaper than a typical PC. The choice of an ARM (Advanced RISC Machine and Acorn RISC Machine) processor is noteworthy, because it avoids problems like cooling (no heat sink or fan) and can use a USB power brick. When you add in the enormous community support, it offers a great platform for teaching everyone about computing."

Some 200 enthusiastic attendees were present at the session, which had the feel of Simon Ritter sharing a fun toy with friends. The main point of the session was to show what Oracle is doing to support Java on the Raspberry Pi in a way that is entertaining and fun. Ritter pointed out that, in addition to being great for teaching, the Raspberry Pi is an excellent introduction to the ARM architecture; it runs Java well today and will run it better once official hard-float support arrives. The possibilities are vast.

Ritter explained that the Raspberry Pi project started in 2006 with the goal of devising a computer to inspire children; it drew inspiration from the BBC Micro literacy project of 1981, which produced a series of microcomputers created by the Acorn Computer company. The Raspberry Pi was officially launched on February 29, 2012, with a first production run of 10,000 boards. There were 100,000 pre-orders in one day; currently about 4,000 boards are produced a day.

Ritter described the specification as follows:
* CPU: ARM 11 core running at 700MHz, Broadcom SoC package; can now be overclocked to 1GHz (without breaking the warranty!)
* Memory: 256MB
* I/O: HDMI and composite video; 2 x USB ports (Model B only); Ethernet (Model B only); header pins for GPIO, UART, SPI and I2C

He took attendees through a brief history of the ARM architecture:
* Acorn BBC Micro (6502 based): not powerful enough for Acorn's plans for a business computer
* Berkeley RISC Project: the UNIX kernel only used 30% of the instruction set of the Motorola 68000; more registers, fewer instructions (register windows); one chip architecture to come from this was... SPARC
* Acorn RISC Machine (ARM): 32-bit data, 26-bit address space, 27 registers; the first machine was the Acorn Archimedes
* Advanced RISC Machines: a spin-off from Acorn

Next he presented its features:
* 32-bit RISC architecture: ARM accounts for 75% of embedded 32-bit CPUs today; 6.1 billion chips were sold last year (zero manufactured by ARM itself)
* Abstract architecture and microprocessor core designs: the Raspberry Pi is ARM11 using the ARMv6 instruction set
* Low power consumption: good for mobile devices; the Raspberry Pi can be powered from a 700mA 5V-only PSU and does not require a heatsink or fan

He described the current ARM technology:
* ARMv6: ARM 11, ARM Cortex-M
* ARMv7: ARM Cortex-A, ARM Cortex-M, ARM Cortex-R
* ARMv8 (announced): will support 64-bit data and addressing

He next gave the Java specifics for ARM, starting with floating-point operations:
* Despite being an ARMv6 processor, the Raspberry Pi does include an FPU; an FPU only became standard as of ARMv7
* The FPU (Hard Float, or HF) is much faster than a software library
* Linux distros and the Oracle JVM for ARM assume no HF on ARMv6, so both need a special build; a Raspbian distro build is now available, and the Oracle JVM build is in the works, release date TBD

"Not so RISC" performance improvements:
* DSP enhancements
* Jazelle
* Thumb / Thumb2 / ThumbEE
* Floating point (VFP)
* NEON
* Security enhancements (TrustZone)

He spent a few minutes going over the challenges of using Java on the Raspberry Pi and covered:
* Sound
* Vision
* Serial (TTL UART)
* USB
* GPIO

On implementing sound with Java he pointed out:
* Sound drivers are now included in new distros
* Java Sound API: remember to add audio to the user's groups; some bits work, others not so much (a small illustrative sketch follows at the end of this summary)
* Playing a WAV file (of the right format) works
* Using MIDI hangs trying to open a synthesizer
* FreeTTS text-to-speech should work once sound works properly

He turned to JavaFX on the Raspberry Pi:
* Currently internal builds only; it will be released as a technology preview soon
* The work involves an optimal implementation of the Prism graphics engine (X11?)
* Once the JavaFX implementation is completed there will be little of concern to developers: it's just Java (WORA)

He explained the basics of the serial port:
* The UART provides TTL-level signals (3.3V)
* RS-232 uses 12V signals
* Use a MAX3232 chip to convert between the two
* Use this for access to the serial console

He summarized his key points: the Raspberry Pi is a very cool (and cheap) computer that is great for teaching and a great introduction to ARM; it works very well with Java and will work better in the future; and the opportunities are limitless. For further info, check out the Raspberry Pi User Guide by Eben Upton and Gareth Halfacree. From there, Ritter tried out several fun demos, some of which worked better than others, but all of which were greeted with considerable enthusiasm, support and good humor (even when he ran into some glitches). All in all, this was a fun and lively session.
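As an illustration of the Java Sound path Ritter described (playing a WAV file of "the right format"), here is a minimal sketch using the standard javax.sound.sampled API. It assumes the sound drivers are installed and the user has been added to the audio group, as noted above; the file name is just a placeholder:

    import java.io.File;
    import javax.sound.sampled.AudioInputStream;
    import javax.sound.sampled.AudioSystem;
    import javax.sound.sampled.Clip;

    public class PlayWav {
        public static void main(String[] args) throws Exception {
            // Open a PCM-encoded WAV file ("the right format")
            File wav = new File(args.length > 0 ? args[0] : "test.wav");
            try (AudioInputStream in = AudioSystem.getAudioInputStream(wav)) {
                Clip clip = AudioSystem.getClip();
                clip.open(in);
                clip.start();                                       // playback runs asynchronously
                Thread.sleep(clip.getMicrosecondLength() / 1000);   // wait until the clip has finished
                clip.close();
            }
        }
    }

As Ritter noted, MIDI and text-to-speech were still problematic at the time, so this sketch sticks to sampled audio only.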

    Read the article

  • Maximize Performance and Availability with Oracle Data Integration

    - by Tanu Sood
    Alert: Oracle is hosting the 12c launch webcast for Oracle Data Integration and Oracle GoldenGate on Tuesday, November 12 (tomorrow) to discuss the new capabilities in detail and share customer perspectives. Hear directly from customer experts and executives from SolarWorld Industries America, British Telecom and Rittman Mead, and get your questions answered live by product experts. Register for this complimentary webcast today and join in the discussion tomorrow.

Author: Irem Radzik, Senior Principal Product Director, Oracle

Organizations that want to use IT as a strategic point of differentiation prefer Oracle's complete application offering to drive better business performance and optimize their IT investments. These enterprise applications are at the center of business operations, and they contain critical data that needs to be accessed continuously, as well as analyzed and acted upon in a timely manner. These systems also need to operate with high performance and availability, which means analytical functions should not degrade application performance, and even system maintenance and upgrades should not interrupt availability.

Oracle's data integration products, Oracle Data Integrator, Oracle GoldenGate, and Oracle Enterprise Data Quality, provide the core foundation for bringing data from various business-critical systems together to gain a broader, unified view. As a more advanced offering than third-party products, Oracle's data integration products facilitate real-time reporting for Oracle Applications without impacting application performance, and provide the ability to upgrade and maintain the system without taking downtime.

Oracle GoldenGate is certified for Oracle Applications, including E-Business Suite, Siebel CRM, PeopleSoft, and JD Edwards, for moving transactional data in real time to a dedicated operational reporting environment. This solution allows application users to offload resource-heavy queries to the reporting instance(s), reducing CPU utilization, improving OLTP performance, and extending the lifetime of existing IT assets. In addition, having a dedicated reporting instance with up-to-the-second transactional data allows optimizing the reporting environment and even decreasing costs, as GoldenGate can move only the required data from expensive mainframe environments to cost-efficient open-system platforms.

With its real-time data replication capabilities, GoldenGate is also certified to enable application upgrades and database/hardware/OS migration without impacting business operations. GoldenGate is certified for Siebel CRM, Communications Billing and Revenue Management, and JD Edwards for supporting zero-downtime upgrades to the latest application version. GoldenGate synchronizes a parallel, upgraded system with the old version in real time, thus enabling continuous operations during the process.
Oracle GoldenGate is also certified for minimal-downtime database migrations for Oracle E-Business Suite and other key applications. GoldenGate's solution also minimizes risk by offering a failback option after the switchover to the new environment. Furthermore, Oracle GoldenGate's bidirectional active-active data replication is certified for Oracle ATG Web Commerce to enable geographic load balancing and high availability for ATG customers.

For enabling better business insight, Oracle Data Integration products power Oracle BI Applications with high-performance bulk and real-time data integration. Oracle Data Integrator (ODI) is embedded in Oracle BI Applications version 11.1.1.7.1 and helps to integrate data end-to-end across the full BI Applications architecture, supporting capabilities such as data lineage, which helps business users trace report data back to its sources. ODI is integrated with Oracle GoldenGate and gives Oracle BI Applications customers the option to use real-time transactional data in analytics, and to do so non-intrusively. By using Oracle GoldenGate with the latest release of Oracle BI Applications, organizations not only leverage fresh data in analytics, but also eliminate the need for an ETL batch window and minimize the impact on OLTP systems. You can learn more about the latest 12c release of the Oracle Data Integration products in our upcoming launch webcast, and access the app-specific free resources in the new Data Integration for Oracle Applications Resource Center.

    Read the article

  • Windows for IoT, continued

    - by Valter Minute
    Originally posted on: http://geekswithblogs.net/WindowsEmbeddedCookbook/archive/2014/08/05/windows-for-iot-continued.aspx

I received a lot of interesting feedback on my previous blog post and I tried to find some time to do some additional tests. Bert Kleinschmidt pointed out that pins 2, 3 and 10 of the Galileo are connected directly to the SOC, while pin 13, the one used for the sample sketch, is controlled via an I2C I/O expander. I changed my code to use pin 2 instead of 13 (just changing the variable assignment at the beginning of the code) and latency was greatly reduced. Now each pulse lasts for 1.44ms, 44% more than the expected time, but far better than the result we got using pin 13. I also used SetThreadPriority to increase the priority of the thread that was running the sketch to THREAD_PRIORITY_HIGHEST, but that didn't change the results. When I was using the I2C-controlled pin I tried the same and the timings got far worse (increasing more than 10 times), so I did not comment on that part, wanting to investigate the issue a bit more in detail. It seems that increasing the priority of the application thread negatively impacts the I2C communication.

I also tried the Linux-based implementation (using a different Galileo board, since the one provided by MS seems to use a different firmware), and the results of running the sample blink sketch modified to use pin 2 and blink the LED for 1ms are similar to those we got on the same board running Windows. Here the difference between expected time and measured time is worse, getting around 3.2ms instead of 1ms (320%, compared to 150% using Windows, but far from the 100.1% we got with the 8-bit Arduino). Both systems were not under load during the test; loading some applications that use part of the CPU time would probably make those timings even less reliable, but I think that those numbers are enough to draw some conclusions. It may not be worth running a full OS if what you need is Arduino compatibility. The Arduino UNO is probably the best Arduino you can find to perform this kind of development.

The Galileo, running the Linux-based stack or running Windows for IoT, is targeted to be a platform for "Internet of Things" devices, whatever that means. At the moment I don't see the "I" part of IoT. We have low-level interfaces (SPI, I2C, the GPIO pins) that can be used to connect sensors, but the support for connectivity is limited and the amount of work required to deliver some data to the cloud (using a secure HTTP request or a message queuing system like AMQP or MQTT) is still big, and the rich OS underneath seems not to provide any help doing that. Why should I have to use raw sockets and not be able to access all the high-level connectivity features we have on "full" Windows?

I know that it's possible to use some third-party libraries, try to build them using the Windows for IoT SDK, etc., but this means re-inventing the wheel every time and can also lead to some IP concerns if used for products meant to be closed-source. I hope that MS and Intel (and others) will focus less on the "coolness" of running (some) Arduino sketches and more on providing a better platform to people who really want to design devices that leverage internet connectivity and cloud processing power to deliver better products and services. Providing a reliable set of connectivity services would be a great start. Providing support for .NET would be even better, leaving native code available for hardware access etc.
I know that those components may require additional storage and memory, etc., so making the OS componentizable (or, at least, providing a way to install additional components) would be a great way to let developers pick the parts of the system they need to develop their solution, knowing that they will integrate well together. I can understand that the Arduino and Raspberry Pi* success may have attracted the attention of marketing departments worldwide, and almost any new development board these days is promoted as "XXX response to Arduino" or "YYYY alternative to Raspberry Pi", but this is misleading and prevents companies from focusing on how to deliver good products and how to integrate "IoT" features with their existing offerings to provide, in the end, a better product or service to their customers. Marketing is important, but it can't decide the key features of a product (the OS) that is going to be used to develop full products for end customers, integrating it with hardware and application software.

I really like the "hackable" nature of open-source devices and like to see that companies are getting more and more open in releasing information, providing "hackable" devices and supporting developers with documentation, good samples, etc. On the other side, being able to run a sketch designed for an 8-bit microcontroller on a full-featured application processor may sound cool and look like an easy upgrade path for people who have just experimented with sensors etc. on Arduino, but it's not, in my humble opinion, the main path to follow for people who want to deliver real products.

*Shameless self-promotion: if you are looking for a good book in Italian about the Raspberry Pi, try mine: http://www.amazon.it/Raspberry-Pi-alluso-Digital-LifeStyle-ebook/dp/B00GYY3OKO
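To make the timing discussion above concrete, here is a small jitter test. It is written in Java rather than as an Arduino sketch, so it is only an analogy for the Galileo experiments, but it shows how far a nominal 1 ms delay can drift on any general-purpose OS:

    public class SleepJitter {
        public static void main(String[] args) throws InterruptedException {
            final int iterations = 100;
            long worstUs = 0, totalUs = 0;
            for (int i = 0; i < iterations; i++) {
                long start = System.nanoTime();
                Thread.sleep(1);                                   // request a 1 ms delay
                long elapsedUs = (System.nanoTime() - start) / 1000;
                totalUs += elapsedUs;
                worstUs = Math.max(worstUs, elapsedUs);
            }
            System.out.println("average delay: " + (totalUs / iterations) + " us");
            System.out.println("worst delay:   " + worstUs + " us");
        }
    }

On a multitasking OS the average and worst-case figures are typically well above the requested 1000 microseconds, which is the same effect the blink measurements show: the scheduler, not the sketch, sets the floor on timing accuracy.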

    Read the article

  • Mobile Apps for Oracle E-Business Suite

    - by Carlos Chang
    Crosspost from the mobile apps blog. TL;DR: Oracle E-Business Suite is now building mobile apps with Oracle Mobile Application Framework (MAF). Believe it! Build iOS and Android apps with one code base and get it done!

By Steven Chan (Oracle Development)

Many things have changed in the mobile space over the last few years. Here's an update on our strategy for mobile apps for the E-Business Suite.

Mobile app strategy

We're building our family of mobile apps for the E-Business Suite using Oracle Mobile Application Framework. This framework allows us to write a single application that can be run on Apple iOS and Google Android platforms. Mobile apps for the E-Business Suite will share a common look-and-feel.

The E-Business Suite is a suite of over 200 product modules spanning Financials, Supply Chain, Human Resources, and many other areas. Our mobile app strategy is to release standalone apps for specific product modules. Our Oracle Timecards app, which allows users to create and submit timecards, is an example of a standalone app. Some common functions that span multiple product areas will have dedicated apps, too. An example of this is our Oracle Approvals app, which allows users to review and approve requests for expenses, requisitions, purchase orders, recruitment vacancies and offers, and more. You can read more about our Oracle Mobile Approvals app here: Now Available: Oracle Mobile Approvals for iOS

Our goal is to support smaller screens (e.g. smartphones) as well as larger screens (e.g. tablets), with the smaller-screen versions generally delivered first. Where possible, we will deliver these as universal apps. An example is our Oracle Mobile Field Service app, which allows field service technicians to remotely access customer, product, service request, and task-related information. This app can run on a smartphone, while providing a richer experience for tablets.

Deploying EBS mobile apps

The mobile apps themselves (i.e. the client-side components) can be downloaded by end users from Apple iTunes today. Android versions will be available from Google Play. You can monitor this blog for Android-related updates.

Where possible, our mobile apps should be deployable with a minimum of server-side changes. These changes will generally involve a consolidated server-side patch for technology-stack components, and possibly a server-side patch for the functional product module. Updates to existing mobile apps may require new server-side components to enable all of the latest mobile functionality.

All EBS product modules are certified for internal intranet deployments (i.e. used by employees within an organization's firewall). Only a subset of EBS products, such as iRecruitment, are certified to be deployed externally (i.e. used by non-employees outside of an organization's firewall). Today, many organizations running the E-Business Suite do not expose their EBS environment externally, and all of the mobile apps that we're building are intended for internal employee use. Recognizing this, our mobile apps are currently designed for users who are connected to the organization's intranet via VPN. We expect that this may change in future updates to our mobile apps.

Mobile apps and internationalization

The initial releases of our mobile apps will be in English. Later updates will include translations for all left-to-right languages supported by the E-Business Suite. Right-to-left languages will not be translated.
Customizing apps for enterprise deployments

The current generation of mobile apps for Oracle E-Business Suite cannot be customized. We are evaluating options for limited customizations, including corporate branding with logos, corporate color schemes, and others. This is a potentially complex area with many tricky implications for deployment and maintenance. We would be interested in hearing your requirements for customizations in enterprise deployments.

Prerequisites
* Apple iOS 7 and higher
* Android 4.1 (API level 16) and higher, with minimum CPU/memory configurations listed here
* EBS 12.1: EBS 12.1.3 Family Packs for the related product module
* EBS 12.2.3

References
* Oracle E-Business Suite Mobile Apps, Release 12.1 and 12.2 Documentation (Note 1641772.1)
* Oracle E-Business Suite Mobile Apps Administrator's Guide, Release 12.1 and 12.2 (Note 1642431.1)

Follow @OracleMobile on Twitter. The Oracle Mobile Blog is here.

    Read the article

  • A Basic Thread

    - by Joe Mayo
    Most of the programs written are single-threaded, meaning that they run on the main execution thread. For various reasons such as performance, scalability, and/or responsiveness, additional threads can be useful. .NET has extensive threading support, from the basic threads introduced in v1.0 to the Task Parallel Library (TPL) introduced in v4.0. To get started with threads, it's helpful to begin with the basics: starting a Thread.

Why Do I Care?

The scenario I'll use for needing a thread is writing to a file. Sometimes, writing to a file takes a while and you don't want your user interface to lock up until the file write is done. In other words, you want the application to be responsive to the user.

How Would I Go About It?

The solution is to launch a new thread that performs the file write, allowing the main thread to return to the user right away. Whenever the file writing thread completes, it will let the user know. In the meantime, the user is free to interact with the program for other tasks. The following examples demonstrate how to do this.

Show Me the Code?

The code we'll use to work with threads is in the System.Threading namespace, so you'll need the following using directive at the top of the file:

    using System.Threading;

When you run code on a thread, the code is specified via a method. Here's the code that will execute on the thread:

    private static void WriteFile()
    {
        Thread.Sleep(1000);
        Console.WriteLine("File Written.");
    }

The call to Thread.Sleep(1000) delays thread execution. The parameter is specified in milliseconds, and 1000 means that this will cause the program to sleep for approximately 1 second. This method happens to be static, but that's just part of this example, which you'll see is launched from the static Main method. A thread method could be instance or static. Notice that the method does not have parameters and does not have a return type.

As you know, the way to refer to a method is via a delegate. There is a delegate named ThreadStart in System.Threading that refers to a method without parameters or return type, shown below:

    ThreadStart fileWriterHandlerDelegate = new ThreadStart(WriteFile);

I'll show you the whole program below, but the ThreadStart instance above goes in the Main method. The thread uses the ThreadStart instance, fileWriterHandlerDelegate, to specify the method to execute on the thread:

    Thread fileWriter = new Thread(fileWriterHandlerDelegate);

As shown above, the argument type for the Thread constructor is the ThreadStart delegate type. The fileWriterHandlerDelegate argument is an instance of the ThreadStart delegate type. This creates an instance of a thread and specifies what code will execute, but the new thread instance, fileWriter, isn't running yet. You have to explicitly start it, like this:

    fileWriter.Start();

Now, the code in the WriteFile method is executing on a separate thread. Meanwhile, the main thread that started the fileWriter thread continues on its own. You have two threads running at the same time.

Okay, I'm Starting to Get Glassy Eyed. How Does it All Fit Together?

The example below is the whole program, pulling all the previous bits together. It's followed by its output and an explanation.
    using System;
    using System.Threading;

    namespace BasicThread
    {
        class Program
        {
            static void Main()
            {
                // Wrap the WriteFile method in a ThreadStart delegate
                ThreadStart fileWriterHandlerDelegate = new ThreadStart(WriteFile);
                Thread fileWriter = new Thread(fileWriterHandlerDelegate);

                Console.WriteLine("Starting FileWriter");
                fileWriter.Start();                 // WriteFile now runs on its own thread
                Console.WriteLine("Called FileWriter");
                Console.ReadKey();
            }

            private static void WriteFile()
            {
                Thread.Sleep(1000);                 // simulate a slow file write
                Console.WriteLine("File Written");
            }
        }
    }

And here's the output:

    Starting FileWriter
    Called FileWriter
    File Written

So, Why are the Printouts Backwards?

The output above corresponds to the Console.WriteLine statements in the program, with the second and third seemingly reversed. In a single-threaded program, "File Written" would print before "Called FileWriter". However, this is a multi-threaded (2 or more threads) program. In multi-threading, you can't make any assumptions about when a given thread will run. In this case, I added the Sleep statement to the WriteFile method to greatly increase the chances that the message from the main thread will print first. Without the Thread.Sleep, you could run this on a system with multiple cores and/or multiple processors and potentially get different results each time.

Interesting Tangent, but What Should I Get Out of All This?

Going back to the main point, launching the WriteFile method on a separate thread made the program more responsive. The file writing logic ran for a while, but the main thread returned to the user, as demonstrated by the printout of "Called FileWriter". When the file write finished, it let the user know via another print statement. This was a very efficient use of CPU resources that made for a more pleasant user experience. Joe

    Read the article

  • SQLIO Writes

    - by Grant Fritchey
    SQLIO is a fantastic utility for testing the abilities of the disks in your system. It has a very unfortunate name, though, since it's not really a SQL Server testing utility at all. It really is a disk utility. They ought to call it DiskIO, because they'd get more people using it, I think. Anyway, branding is not the point of this blog post. Writes are the point of this blog post.

SQLIO works by slamming your disk. It performs as many reads as it can, or it performs as many writes as it can, depending on how you've configured your tests. There are much smarter people than me who will get into all the various types of tests you should run. I'd suggest reading a bit of what Jonathan Kehayias (blog|twitter) has to say or wading into Denny Cherry's (blog|twitter) work. They're going to do a better job than I can of describing all the benefits and mechanisms around using this excellent piece of software.

My concerns are very focused. I needed to set up a series of tests to see how well our product SQL Storage Compress worked. I wanted to know the effects it would have on a system: the disk for sure, but also memory and CPU. How to stress the system? SQLIO of course. But when I set it up and ran it, following the documentation that comes with it, I was seeing better than 99% compression on the files. Don't get me wrong. Our product is magnificent, wonderful, all things great and beautiful, gets you coffee in the morning and is made mostly from bacon. But 99% compression? No, it's not that good. So what's up?

Well, it's the configuration. The default mechanism is to load up a file, something large that will overwhelm your disk cache. You're instructed to load the file with the character 0x0. I never got a computer science degree. I went to film school. Because of this, I didn't memorize ASCII tables, so when I saw this, I thought it was zeros or something. Nope. It's NULL. That's right, you're making a very large file, but you're filling it with NULL values. That's actually OK when all you're testing is the disk sub-system. But when you want to test compression and decompression, that can be an issue.

I got around this fairly quickly. Instead of generating a file filled with NULL values, I just copied a database file for my tests. And to test it with SQL Storage Compress, I used a database file that had already been run through compression (about 40% compression on that file, if you're interested). Now the reads were taken care of. I am seeing very realistic performance from decompressing the information for reads through SQLIO. But what about writes?

Well, the issue is, what does SQLIO write? I don't have access to the code. But I do have access to the results. I did two different tests, just to be sure of what I was seeing. First test: use the .DAT file as described in the documentation. I opened the .DAT file after I was done with SQLIO, using WordPad. Guess what? It's a giant file full of air. SQLIO writes NULL values. What does that do to compression? I did the test again on a copy of an uncompressed database file. Then I ran the original and the SQLIO-modified copy through ZIP to see what happened. I got better than 99% compression out of the SQLIO-modified file (the original file of 624,896KB went to 275,871KB compressed; after SQLIO it went to 608KB compressed).

So, what does SQLIO write? It writes air. If you're trying to test it with compression, or maybe some other type of file storage mechanism like dedupe, you need to know this because your tests really won't be valid.
Should I find some other mechanism for testing? Yeah, if all I'm interested in is establishing performance to my own satisfaction, yes. But, I want to be able to compare my results with other people's results and we all need to be using the same tool in order for that to happen. SQLIO is the common mechanism that most people I know use to establish disk performance behavior. It'd be better if we could get SQLIO to do writes in some other fashion. Oh, and before I go, I get to brag a bit. Measuring IOPS, SQL Storage Compress outperforms my disk alone by about 30%.
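To see why a NULL-filled file compresses to almost nothing, here is a small stand-alone illustration (Java and java.util.zip here, not SQLIO or SQL Storage Compress) comparing a buffer of zeros with a buffer of random bytes:

    import java.util.Random;
    import java.util.zip.Deflater;

    public class CompressibilityDemo {
        static int deflatedSize(byte[] input) {
            Deflater deflater = new Deflater(Deflater.BEST_COMPRESSION);
            deflater.setInput(input);
            deflater.finish();
            byte[] buffer = new byte[64 * 1024];
            int total = 0;
            while (!deflater.finished()) {
                total += deflater.deflate(buffer);   // count compressed bytes produced
            }
            deflater.end();
            return total;
        }

        public static void main(String[] args) {
            byte[] zeros = new byte[1024 * 1024];    // what SQLIO writes: 1 MB of NULL (0x00) bytes
            byte[] random = new byte[1024 * 1024];   // stand-in for "real" data
            new Random(42).nextBytes(random);
            System.out.println("1 MB of NULLs compresses to " + deflatedSize(zeros) + " bytes");
            System.out.println("1 MB of random data compresses to " + deflatedSize(random) + " bytes");
        }
    }

The zero-filled buffer shrinks by better than 99%, while the random buffer barely compresses at all, which is exactly the distortion described above when SQLIO's writes are used to evaluate compression or deduplication.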

    Read the article

  • My Feelings About Microsoft Surface

    - by Valter Minute
    Advice: read the title carefully. I'm talking about "feelings", not about advanced technical points proved in a scientific and objective way. I still haven't had a chance to play with an MS Surface tablet (I would love to, of course), so my ideas just come from reading different articles on the net and MS official statements. Remember also that the MVP motto begins with "Independent" ("Independent Experts. Real World Answers.") and this is just my humble opinion about a product and a technology. I know that, being an MS MVP, I can be called an "MS fanboy"; I don't care. I hope that people can appreciate my opinion, even if it doesn't match theirs.

The "Surface" brand can be confusing for techies who knew the "original" Surface concept, but I think it will be a fresh new brand name for most of the people out there. But marketing departments are here to confuse people… so I can understand this recycling of an existing name.

So Microsoft is entering the hardware arena… for me this is good news. Microsoft developed some nice hardware in the past: the Xbox, the Zune (even if its commercial success was quite limited) and, last but not least, the two Arc mice (old and new models) that I use and appreciate. In the past Microsoft worked with OEMs, and that model led to good and bad things. The good thing (for Microsoft, at least) was market domination by Windows-based PCs, which only in recent years has been reduced by the return of the Mac and by tablets. Google is also moving into the hardware business with its acquisition of Motorola, and Apple leveraged its control of both the hardware and software sides to develop innovative products. Microsoft can scare OEMs and make them fly away from Windows (but to where?) or just lead the pack, showing how devices should be designed to compete in the market, and bring back some of the innovation that disappeared from recent PC products (look at the shelves of your favorite electronics store and try to distinguish one laptop from the huge mass of anonymous PCs on display… only Macs stand out there…). Having to compete with MS "official" hardware will force OEMs to develop better products and bring back some real competition in a market that was ruled only by price (the lower the better, even when that means low quality) and no innovative features at all (when was the last time a new PC surprised you?).

Moving into a new market is a big and risky move, but with Windows 8 Microsoft is making a crucial move for its future, trying to get back into the innovation race against Apple and Google. MS can't afford to fail this time.

I saw the new devices (the Windows RT and Pro models) and the specifications are scarce, misleading and confusing. The first impression is that the device looks like an iPad with a nice keyboard cover… Using "HD" and "full HD" to define display resolution instead of using the real figures, and reviving the "ClearType" brand (now dead on Win8, as reported here, and missed by people who hate to read text on displays, like myself) without providing clear figures (couldn't you count those damned pixels?), seems to imply that MS was caught by surprise by Apple's recent "Retina" displays, which brought very high-definition screens to tablets. Also there are no specifications about the processors used (even if some sources report an NVIDIA Tegra for the ARM tablet and an i5 for the x86 one) or the expected battery life (a critical point for tablets, and the point that killed Windows 7 x86-based tablets).
There is also nothing about the price, and this will be another critical point, because other platforms out there already provide lots of applications and have a good user base; if MS wants to enter this market, tablet pricing must be competitive.

There are some expansion ports (SD and USB), so no fixed storage model (even if the specs talk about 32-64GB for RT and 128-256GB for Pro). I like this, and I don't like the Apple model, where flash memory (which is dirt cheap when used in thumb drives or SD cards) is as expensive as gold (or cocaine, to have a more accurate per-gram measurement) when mounted inside a tablet/phone. For big files you'll be able to use external media, and an SD card could be used to store files that don't require super-fast SSD-like access times, I hope.

To be honest, I really don't like the marketplace model, the limitations of the Windows RT APIs (no local database? from a company that based a good share of its success on VB6+Access!) and the lack of desktop support on ARM (even if the support is there and has been used to port Office). It's a step toward the consumer market (where competitors are making big money), but it may impact enterprise (and embedded) users who may not appreciate Windows 8's new UI or the limitations of the new app model (if you aren't connected, you are dead). Not having compatibility with the desktop will require brand new applications, and honestly it makes all the CPU cycles spent converting .NET IL into real machine code in the past seem like a huge waste of time… as soon as a new processor architecture is supported by Windows, you still have to rewrite part of your application (and MS is pushing HTML5+JS and native code more than .NET, in my perception). On the other side, I believe that the development experience provided by Visual Studio is still miles (or kilometres) ahead of the competition, and even the all-uppercase menus of VS2012 haven't changed this situation.

The new Metro UI got mixed reviews. For my part, I should say that it is very pleasant to use on a touch screen; I like the minimalist design (even if sometimes it is too minimal and hides stuff that, in my opinion, should be visible), but I should also say that using it with mouse and keyboard is like trying to pick your nose with boxing gloves… Metro is also very interesting for embedded devices, where touch screen usage is quite common and where having an application take over the whole screen is the norm. For devices like kiosks, vending machines, etc. this kind of UI can be a great selling point.

I don't need a new tablet (to be honest, I'm pretty happy with my wife's iPad and with my PC), but I may change my opinion after having a chance to play a little bit with those new devices and understand what's hidden under all these mysterious and generic announcements and specifications!

    Read the article

< Previous Page | 205 206 207 208 209 210 211 212 213 214 215 216  | Next Page >