Search Results

Search found 1544 results on 62 pages for 'heap corruption'.

Page 33/62 | < Previous Page | 29 30 31 32 33 34 35 36 37 38 39 40  | Next Page >

  • Linux, GNU GCC, ld, version scripts and the ELF binary format -- How does it work??

    - by themoondothshine
    Hey all, I'm trying to learn more about library versioning in Linux and how to put it all to work. Here's the context:

    -- I have two versions of a dynamic library which expose the same set of interfaces, say libsome1.so and libsome2.so.
    -- An application is linked against libsome1.so.
    -- This application uses libdl.so to dynamically load another module, say libmagic.so.
    -- Now libmagic.so is linked against libsome2.so. Obviously, without using linker scripts to hide symbols in libmagic.so, at run-time all calls to interfaces in libsome2.so are resolved to libsome1.so. This can be confirmed by checking the value returned by libVersion() against the value of the macro LIB_VERSION.
    -- So next I try to compile and link libmagic.so with a linker script which hides all symbols except 3 which are defined in libmagic.so and are exported by it. This works... or at least the libVersion() and LIB_VERSION values match (and it reports version 2, not 1).
    -- However, when some data structures are serialized to disk, I noticed some corruption. If, in the application's directory, I delete libsome1.so and create a soft link in its place pointing to libsome2.so, everything works as expected and the corruption does not happen.

    I can't help but think that this may be caused by some conflict in the run-time linker's resolution of symbols. I've tried many things, like linking libsome2.so so that all symbols are aliased to symbol@@VER_2 (which I am still confused about, because the command nm -CD libsome2.so still lists symbols as symbol and not symbol@@VER_2)... Nothing seems to work! Help!

    Edit: I should have mentioned it earlier, but the app in question is Firefox, and libsome1.so is the libsqlite3.so shipped with it. I don't quite have the option of recompiling them. Also, using version scripts to hide symbols seems to be the only solution right now. So what really happens when symbols are hidden? Do they become 'local' to the SO? Does rtld have no knowledge of their existence? What happens when an exported function refers to a hidden symbol?
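    For reference, a minimal GNU ld version script of the kind being described might look like the sketch below (the three exported names are invented placeholders):

        /* magic.map: export only libmagic.so's own entry points */
        MAGIC_1.0 {
            global:
                magic_init;
                magic_process;
                magic_shutdown;
            local:
                *;    /* everything else becomes local to the SO */
        };

    Passed to the link step with something like gcc -shared -Wl,--version-script=magic.map, any symbol matched by local: is no longer visible to the run-time linker for cross-library resolution, which is the behaviour the question is trying to achieve.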

    Read the article

  • In C#, how do I terminate a thread that has had its call stack corrupted?

    - by Emil D
    I have a thread in my application that runs code that can potentially cause call stack corruption (my application is a testing tool for DLLs). Assuming that I have a way of detecting that the child thread is misbehaving, how would I terminate it? From what I read, calling Thread.Abort() on the misbehaving thread is equivalent to raising an exception inside it. I fear that may not be a good idea, given that the call stack of the thread might be corrupted. Any suggestions?

    Read the article

  • freeing memory twice

    - by benjamin button
    Hi, AFAIK, freeing a NULL pointer results in nothing: no operation is performed and the call simply returns. Still, I see statements where people say that one scenario in which memory corruption can occur is "freeing memory twice". Is this true?
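    For the record, the two cases the question mixes together can be shown in a few lines of C: free(NULL) is guaranteed by the C standard to be a no-op, while freeing the same non-null pointer twice is undefined behaviour and a classic cause of heap corruption.

        #include <stdlib.h>

        int main(void)
        {
            char *p = malloc(16);

            free(NULL);     /* guaranteed no-op by the C standard */

            free(p);        /* fine: first free of a valid pointer */
            /* free(p); */  /* undefined behaviour: double free, may corrupt the heap */

            p = NULL;       /* defensive idiom: a later free(p) is a no-op again */
            free(p);
            return 0;
        }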

    Read the article

  • StackOverflowError while creating Mac object on AS400/Java

    - by Prasanna K Rao
    Hello all, I am a newbie to AS400-Java programming. I am trying to create my first program to test an implementation of Message Authentication Code (MAC) using the HmacSHA1 hash function. My (Java 1.4) program runs fine on a dev box (V5R4) but fails terribly on the QA box (V5R3). My program is as below:

        import java.security.InvalidKeyException;
        import java.security.NoSuchAlgorithmException;
        import javax.crypto.Mac;
        import javax.crypto.spec.SecretKeySpec;

        public class Test01 {

            private static final String HMAC_SHA1_ALGORITHM = "HmacSHA1";

            public static void main(String[] arguments) {
                byte[] key = { 1, 2, 3, 4, 5, 6, 7, 8 };
                SecretKeySpec SHA1key = new SecretKeySpec(key, HMAC_SHA1_ALGORITHM);
                Mac hmac;
                String strFinalRslt = "";
                try {
                    hmac = Mac.getInstance(HMAC_SHA1_ALGORITHM);
                    hmac.init(SHA1key);
                    byte[] result = hmac.doFinal();
                    strFinalRslt = toHexString(result);
                } catch (NoSuchAlgorithmException e) {
                    e.printStackTrace();
                } catch (InvalidKeyException e) {
                    e.printStackTrace();
                } catch (StackOverflowError e) {
                    e.printStackTrace();
                }
                System.out.println(strFinalRslt);
                System.out.println("All done!!!");
            }

            public static byte[] fromHexString(String s) {
                int stringLength = s.length();
                if ((stringLength & 0x1) != 0) {
                    throw new IllegalArgumentException(
                        "fromHexString requires an even number of hex characters");
                }
                byte[] b = new byte[stringLength / 2];
                for (int i = 0, j = 0; i < stringLength; i += 2, j++) {
                    int high = Character.digit(s.charAt(i), 16);     // high nibble
                    int low  = Character.digit(s.charAt(i + 1), 16); // low nibble
                    b[j] = (byte) ((high << 4) | low);
                }
                return b;
            }

            public static String toHexString(byte[] b) {
                StringBuffer sb = new StringBuffer(b.length * 2);
                for (int i = 0; i < b.length; i++) {
                    sb.append(hexChar[(b[i] & 0xf0) >>> 4]); // look up high nibble char
                    sb.append(hexChar[b[i] & 0x0f]);         // look up low nibble char
                }
                return sb.toString();
            }

            static char[] hexChar = { '0', '1', '2', '3', '4', '5', '6', '7',
                                      '8', '9', 'a', 'b', 'c', 'd', 'e', 'f' };
        }

    This program compiles fine and gets the correct response on my Win-XP client and also on my dev box, but fails with the following error on the QA box:

        java.lang.StackOverflowError
            at java.lang.Throwable.<init>(Throwable.java:180)
            at java.lang.Error.<init>(Error.java:37)
            at java.lang.StackOverflowError.<init>(StackOverflowError.java:24)
            at java.io.Os400FileSystem.list(Native method)
            at java.io.File.list(File.java:922)
            at javax.crypto.b.e(Unknown source)
            at javax.crypto.b.a(Unknown source)
            at javax.crypto.b.c(Unknown source)
            at javax.crypto.b$0.run(Unknown source)
            at javax.crypto.b.<clinit>(Unknown source)
            at javax.crypto.Mac.getInstance(Unknown source)

    I have verified the java.security file, and the entries corresponding to the JCE files are all OK. The DMPJVM command gives me the following response (abridged to the relevant sections):

        Thu Jun 03 12:25:34  Java Virtual Machine Information 016822/QPGMR/11111

        Classpath
            java.version=1.4
            sun.boot.class.path=/QIBM/ProdData/OS400/Java400/jdk/lib/jdkptf14.zip
                :/QIBM/ProdData/OS400/Java400/ext/ibmjssefw.jar
                :/QIBM/ProdData/CAP/ibmjsseprovider.jar
                :/QIBM/ProdData/OS400/Java400/ext/ibmjsseprovider2.jar
                :/QIBM/ProdData/OS400/Java400/ext/ibmpkcs11impl.jar
                :/QIBM/ProdData/CAP/ibmjssefips.jar
                :/QIBM/ProdData/OS400/Java400/jdk/lib/IBMiSeriesJSSE.jar
                :/QIBM/ProdData/OS400/Java400/jdk/lib/jce.jar
                :/QIBM/ProdData/OS400/Java400/jdk/lib/jaas.jar
                :/QIBM/ProdData/OS400/Java400/jdk/lib/ibmcertpathfw.jar
                :/QIBM/ProdData/OS400/Java400/jdk/lib/ibmcertpathprovider.jar
                :/QIBM/ProdData/OS400/Java400/ext/ibmpkcs.jar
                :/QIBM/ProdData/OS400/Java400/jdk/lib/ibmjgssfw.jar
                :/QIBM/ProdData/OS400/Java400/jdk/lib/ibmjgssprovider.jar
                :/QIBM/ProdData/OS400/Java400/jdk/lib/security.jar
                :/QIBM/ProdData/OS400/Java400/jdk/lib/charsets.jar
                :/QIBM/ProdData/OS400/Java400/jdk/lib/resources.jar
                :/QIBM/ProdData/OS400/Java400/jdk/lib/rt.jar
                :/QIBM/ProdData/OS400/Java400/jdk/lib/sunrsasign.jar
                :/QIBM/ProdData/OS400/Java400/ext/IBMmisc.jar
                :/QIBM/ProdData/Java400/
            java.class.path=/myhome/lib/commons-codec-1.3.jar:/myhome/lib/commons-httpclient-3.1.jar:/myhome/lib/commons-logging-1.1.jar:/myhome/lib/log4j-1.2.15.jar:/myhome/lib/log4j-core.jar
            java.ext.dirs=/QIBM/ProdData/OS400/Java400/jdk/lib/ext:/QIBM/UserData/Java400/ext:/QIBM/ProdData/Java400/jdk14/lib/ext
            java.library.path=/QSYS.LIB/ROBOTLIB.LIB:/QSYS.LIB/QTEMP.LIB:/QSYS.LIB/ODIPGM.LIB:/QSYS.LIB/QGPL.LIB

        Garbage Collection
            Garbage collector parameters: Initial size: 16384 K, Max size: 240000000 K
            Current values: Heap size: 437952 K, Garbage collections: 58
            Additional values: JIT heap size: 53824 K, JVM heap size: 55752 K
            Last GC cycle time: 1333 ms

        Thread information (4 threads; the 3 system threads are idle or waiting)
            Thread: 00000004 Thread-0, priority 5, status Running, group main
            Stack:
                java/io/Os400FileSystem.list(Ljava/io/File;)[Ljava/lang/String;+0 (Os400FileSystem.java:0)
                java/io/File.list()[Ljava/lang/String;+19 (File.java:922)
                javax/crypto/b.e()[B+127 (:0)
                javax/crypto/b.a(Ljava/security/cert/X509Certificate;)V+7 (:0)
                javax/crypto/b.access$500(Ljava/security/cert/X509Certificate;)V+1 (:0)
                javax/crypto/b$0.run()Ljava/lang/Object;+98 (:0)
                javax/crypto/b.<clinit>()V+507 (:0)
                javax/crypto/Mac.getInstance(Ljava/lang/String;)Ljavax/crypto/Mac;+10 (:0)

        Class loader information
            0 Default class loader
            1 sun/reflect/DelegatingClassLoader
            2 sun/misc/Launcher$ExtClassLoader

        GC heap information (abridged; the counts that stand out)
            Loader   Objects   Class name
            ------   -------   ----------
            0        2122181   java/lang/String
            0          19210   [J
            0          19200   java/lang/StackOverflowError
            0           1493   [C
            0           1075   java/util/TreeMap$Entry
            0           1065   java/util/Hashtable$Entry
            0           1016   java/lang/Class
            0            612   java/util/HashMap$Entry
            0            469   [Ljava/lang/String;
            0            428   [I
            0            266   [B
            0            202   java/lang/Long
            (followed by a long tail of JDK, JCE, JSSE and X.509 certificate classes, each with smaller counts)

        Global registry information
            Loader   Objects   Class name
            ------   -------   ----------
            0             23   [C
            0           1017   java/lang/Class
            0              1   java/lang/ref/Reference$ReferenceHandler
            0              1   java/lang/ref/Finalizer$FinalizerThread
            0              1   sun/misc/Launcher$AppClassLoader
            0             32   java/io/RandomAccessFile
            0             32   [B

    Can someone please advise me? Thanks a lot, Prasanna

    Read the article

  • Trace Flag 610 – When should you use it?

    - by simonsabin
    Thanks to Marcel van der Holst for providing this great information on the use of Trace Flag 610. This trace flag can be used to get minimal logging for inserts into a B-tree (i.e., a clustered table, or an index on a heap) that already contains data. It is a trace flag because in testing some scenarios were found where it didn't perform as well. Marcel explains why below. “TF610 can be used to get minimal logging in a non-empty B-Tree. The idea is that when you insert a large amount of data, you don't want to...(read more)
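    As a rough sketch of how the flag is used around a bulk load (the table names are invented for illustration, and minimal logging also depends on the database not being in the FULL recovery model):

        DBCC TRACEON (610);         -- session-level; use (610, -1) for global

        INSERT INTO dbo.BigTable WITH (TABLOCK)
        SELECT *
        FROM dbo.Staging;           -- bulk insert into an already-populated, indexed table

        DBCC TRACEOFF (610);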

    Read the article

  • Static vs Singleton in C# (Difference between Singleton and Static)

    - by Jalpesh P. Vadgama
    Recently I came across the question: what is the difference between Static and Singleton classes? So I thought it would be a good idea to share a blog post about it. Difference between Static and Singleton classes: A Singleton class allows you to create only a single instance of a particular class. That instance can be treated as a normal object: you can pass the object to a method as a parameter, or you can call the class's methods on that Singleton object. A static class can have only static methods, and you cannot pass a static class as a parameter. We can implement interfaces with a Singleton class, while we cannot implement interfaces with static classes. We can clone the object of a Singleton class; we cannot clone a static class. Singleton objects are stored on the heap, while a static class is stored in the stack. more at my personal blog: dotnetjalps.com

    Read the article

  • Graphics card initialisation problems when booting - requires a "double" boot

    - by DMA57361
    Problem Outline

    When booting from cold (my machine is disconnected from mains power when off, but leaving it connected doesn't help), the graphics card (a single PCI-e GeForce 460) will not initialise on the first boot, leaving me with the motherboard's on-board graphics (which kick in automatically if no PCI-e card is found). However, if I restart the computer – normally I do this by powering it off just after the numlock lights up on the keyboard (i.e., just after POST/BIOS and before Windows takes over), waiting for the system to whirr down, and powering up again – the graphics card works correctly. Once double-booted in this manner the system seems to work correctly, with no noticeable problems. This is reproducible every time I boot; it has been working like this for about a month now.

    Background Information

    Sept 2010 – I suffered a hardware malfunction (crashes in Windows and graphics corruption on BIOS screens). By way of spare hardware I determined that replacing the PSU removed the issue, so I replaced the PSU with a brand new one of slightly higher power (460W replaced with 500W).

    Oct 2010 – The problem resurfaced. I purchased a new graphics card (GeForce 460), which removed the problem. The new graphics card immediately started having the boot initialisation problems mentioned. I presumed there had been a motherboard fault all along, but because the system worked once booted, and I was temporarily out of spare money, I left the system alone and continued to use it.

    Early/Mid Dec 2010 – In the space of 5 days I received 3 instances of hard drive corruption (seemingly fixed by chkdsk and sfc in each case...). Since I was already under the impression the motherboard was faulty, I purchased a new one ASAP. This also required new RAM (as I dropped from 4 slots to 2 and didn't want to drop memory quantity).

    Past 3-4 weeks – With a brand new PSU, graphics card, motherboard and RAM I'm suffering the problem outlined above. So, what could be causing this, and how do I resolve it?

    Additional Notes

    Once double-booted the system seems to work entirely correctly. The graphics card problem has occurred on two entirely different motherboards. I do not have the opportunity to test the graphics card in a different computer (I have only the old motherboard, which is dubious, or a really old desktop that still has an AGP port). Under load (i.e., modern games run long enough for temperatures to plateau) the system remains stable and performs as expected. The software that came with the new motherboard and SpeedFan both report that all voltages and temperatures are within nominal bounds, both when idle and under load. I've looked over the BIOS settings for my motherboard multiple times and can find nothing that helps. This system is configured to run with everything at standard levels – no overclocking. I've tried booting the system with only the mobo and graphics card connected (thinking maybe my new PSU was too weak for the new gfx card, even though it meets the quoted PSU requirements for the card) but the same problem persists (and really, if the PSU were weak I'd have problems with the system under load). When the gfx card does not initialise, the fan on its cooling unit is running, possibly slower than otherwise – but this measurement is by eye and so unreliable.

    Read the article

  • Joins in single-table queries

    - by Rob Farley
    Tables are only metadata. They don’t store data. I’ve written something about this before, but I want to take a viewpoint of this idea around the topic of joins, especially since it’s the topic for T-SQL Tuesday this month, hosted this time by Sebastian Meine (@sqlity), who has a whole series on joins this month. Good for him – it’s a great topic.

    In that last post I discussed the fact that we write queries against tables, but that the engine turns them into plans against indexes. My point wasn’t simply that a table is actually just a Clustered Index (or heap, which I consider just a special type of index), but that data access always happens against indexes – never tables – and we should be thinking about the indexes (specifically the non-clustered ones) when we write our queries. I described the scenario of looking up phone numbers, and how it never really occurs to us that there is a master list of phone numbers, because we think in terms of the useful non-clustered indexes that the phone companies provide us. But anyway – that’s not the point of this post.

    So a table is metadata. It stores information about the names of columns and their data types. Nullability, default values, constraints, triggers – these are all things that define the table, but the data isn’t stored in the table. The data that a table describes is stored in a heap or clustered index, and it goes further than this. All the useful data is going to live in non-clustered indexes. Remember this. It’s important. Stop thinking about tables, and start thinking about indexes.

    So let’s think about tables as indexes. This applies even in a world created by someone else, who doesn’t have the best indexes in mind for you. I’m sure you don’t need me to explain the Covering Index bit – the fact that if you don’t have sufficient columns “included” in your index, your query plan will either have to do a Lookup, or else it’ll give up using your index and use one that does have everything it needs (even if that means scanning it). If you haven’t seen that before, drop me a line and I’ll run through it with you. Or go and read a post I did a long while ago about the maths involved in that decision.

    So – what I’m going to tell you is that a Lookup is a join. When I run

        SELECT CustomerID
        FROM Sales.SalesOrderHeader
        WHERE SalesPersonID = 285;

    against the AdventureWorks2012 database, I get the following plan:

        [execution plan image]

    I’m sure you can see the join. Don’t look in the query, it’s not there. But you should be able to see the join in the plan. It’s an Inner Join, implemented by a Nested Loop. It’s pulling data in from the Index Seek, and joining that to the results of a Key Lookup. It clearly is – the QO wouldn’t call it that if it wasn’t really one. It behaves exactly like any other Nested Loop (Inner Join) operator, pulling rows from one side and putting a request in from the other. You wouldn’t have a problem accepting it as a join if the query were slightly different, such as

        SELECT sod.OrderQty
        FROM Sales.SalesOrderHeader AS soh
        JOIN Sales.SalesOrderDetail AS sod
          ON sod.SalesOrderID = soh.SalesOrderID
        WHERE soh.SalesPersonID = 285;

    Amazingly similar, of course. This one is an explicit join, but the first example was just as much a join, even though you didn’t actually ask for one. You need to consider this when you’re thinking about your queries. But it gets more interesting. Consider this query:

        SELECT SalesOrderID
        FROM Sales.SalesOrderHeader
        WHERE SalesPersonID = 276
        AND CustomerID = 29522;

    It doesn’t look like there’s a join here either, but look at the plan:

        [execution plan image]

    That’s not some Lookup in action – that’s a proper Merge Join. The Query Optimizer has worked out that it can get the data it needs by looking in two separate indexes and then doing a Merge Join on the data that it gets. Both indexes used are ordered by the column that’s indexed (one on SalesPersonID, one on CustomerID), and then by the CIX key SalesOrderID. Just as when you seek in the phone book to Farley, the Farleys you have are ordered by FirstName, these seek operations return the data ordered by the next field. This order is SalesOrderID, even though you didn’t explicitly put that column in the index definition. The result is two datasets that are ordered by SalesOrderID, making them very mergeable.

    Another example is the simple query

        SELECT CustomerID
        FROM Sales.SalesOrderHeader
        WHERE SalesPersonID = 276;

    This one prefers a Hash Match to a standard lookup even! This isn’t just ordinary index intersection, this is something else again! Just like before, we could imagine it better with two whole tables, but we shouldn’t try to distinguish between joining two tables and joining two indexes. The Query Optimizer can see (using basic maths) that it’s worth doing these particular operations using these two less-than-ideal indexes (because of course, the best index would be a composite on both columns – such as (SalesPersonID, CustomerID) – and it would still have the SalesOrderID column as part of it, as the CIX key).

    You need to think like this too. Not in terms of excusing single-column indexes like the ones in AdventureWorks2012, but in terms of having a picture of how you’d like your queries to run. If you start to think about what data you need, where it’s coming from, and how it’s going to be used, then you will almost certainly write better queries. …and yes, this would include when you’re dealing with regular joins across multiple tables, not just joins within single-table queries.
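    For reference, the composite index the author alludes to could be sketched like this (the index name is invented for illustration; on this table SalesOrderID tags along automatically because it is the clustered index key):

        CREATE NONCLUSTERED INDEX IX_SalesOrderHeader_SalesPersonID_CustomerID
        ON Sales.SalesOrderHeader (SalesPersonID, CustomerID);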

    Read the article

  • Recycle Old Hardware into a Showcase Table

    - by Jason Fitzpatrick
    If you have a plethora of old hardware laying around, especially motherboards and expansion cards, this obsolete-hardware-to-table hack is just the ticket for your office or geek cave. The table’s design is simple. They took a regular coffee table, affixed old motherboards to it and then, over the motherboards and elevated by acrylic standoffs, they put a heavy sheet of acrylic to serve as the table top. You could replicate the design with any sort of old hardware that is interesting to look at: memory modules your company is sending off to be recycled, old digital cameras, mechanisms from peripherals headed for the scrap heap, etc. Hit up the link below to see more photos of the table. Circuit Table [Chris Harrison]

    Read the article

  • Abstract Data Type and Data Structure

    - by mark075
    It's quite difficult for me to understand these terms. I searched on Google and read a little on Wikipedia, but I'm still not sure. I've determined so far that: An Abstract Data Type is a definition of a new type; it describes the type's properties and operations. A Data Structure is an implementation of an ADT. Many ADTs can be implemented by the same Data Structure. If I'm thinking about this correctly, an array as an ADT means a collection of elements, and as a Data Structure, it means how those elements are stored in memory. A stack is an ADT with push and pop operations, but can we speak of a "stack data structure" if I mean that I used a stack implemented as an array in my algorithm? And why isn't a heap an ADT? It can be implemented as a tree or an array.
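    A tiny sketch of the distinction in C (everything here is illustrative): the stack ADT is the push/pop contract, and a fixed-size array is just one data structure that can implement it.

        #include <stdbool.h>

        #define STACK_CAPACITY 64

        /* The ADT is the contract: push and pop. The array is one possible
           implementation; a linked list would do equally well, and callers
           of stack_push/stack_pop could never tell the difference. */
        typedef struct {
            int items[STACK_CAPACITY];
            int top;                    /* index of the next free slot */
        } Stack;

        static void stack_init(Stack *s) { s->top = 0; }

        static bool stack_push(Stack *s, int v)
        {
            if (s->top == STACK_CAPACITY) return false;   /* full */
            s->items[s->top++] = v;
            return true;
        }

        static bool stack_pop(Stack *s, int *out)
        {
            if (s->top == 0) return false;                /* empty */
            *out = s->items[--s->top];
            return true;
        }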

    Read the article

  • Play Majesty: The Fantasy Kingdom Sim on your Java ME phone

    - by hinkmond
    Here's a game that started on the iDrone, then Anphoid, and now is finally on Java ME tech-enabled mobile phones (thank goodness!). See: Majesty: Fantasy Kingdom. Here's a quote: When you become the head of the country, all the responsibility for the land's prosperity rests on your royal shoulders. You will have to fight various enemies and monsters, explore new territories, manage economic and scientific developments and solve a heap of unusual and unexpected tasks. For example, what will you do when all the gold in the kingdom transforms into cookies? Sounds the same as becoming President of the U.S... except for the gold turning into cookies part... and the part about dragons. But, everything else is the same. Hinkmond

    Read the article

  • Does semi-normalization exist as a concept? Is it "normalized"?

    - by Gracchus
    If you don't mind, a tl;dr on my experience:

    My experience, tl;dr: I have an application that's heavily dependent upon uncertainty, a bane of database design. I tried to normalize it as best I could according to the capabilities of my database of choice, but a "simple" query took 50 ms to read. NoSQL appeals to me, but I can't trust myself with it, and besides, normalization has cut down my debugging time immensely, over and over. Instead of 100% normalization, I made semi-redundant 1:1 tables with very wide primary keys and equivalent foreign keys. Read times dropped to a few ms, and write times barely degraded.

    The semi-normalized point: Given this reality, which anyone who's tried to rely upon views of fully normalized data is aware of, is this concept codified? Is it as simple as having wide unique and foreign keys, or are there hidden secrets to this technique? Or is uncertainty merely a special case that has extremely limited application and can be left on the ash heap?

    Read the article

  • SOA Suite 11g: Unable to start domain (Error occurred during initialization of VM)

    - by Chris Tomkins
    If you have recently installed SOA Suite, created a domain and then tried to start it, only to find it fails with the error:

        Error occurred during initialization of VM
        Could not reserve enough space for object heap
        Could not create the Java virtual machine.

    the solution is to edit the file <domain home>/bin/setSOADomainEnv.cmd/.sh (depending on your platform) and modify the line:

        set DEFAULT_MEM_ARGS=-Xms512m -Xmx1024m

    to something like:

        set DEFAULT_MEM_ARGS=-Xms512m -Xmx768m

    Save the file and then try to start your domain again. Everything should now work; at least it does on the Dell Latitude 630 laptop with 4Gb RAM that I have. Technorati Tags: soa suite, 11g, java, troubleshooting, problems, domain

    Read the article

  • Is it a waste of time to free resources before I exit a process?

    - by Martin
    Let's consider a fictional program that builds a linked list in the heap, and at the end of the program there is a loop that frees all the nodes and then exits. For this case let's say the linked list is just 500K of memory, and no special space management is required. Is that loop a waste of time, because the OS will release the memory anyway? Will there be different behavior later? Does the answer differ according to the OS version? I'm mainly interested in UNIX-based systems, but any information will be appreciated. Today I had my first lesson in an OS course, and I'm wondering about this now.
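    For concreteness, a minimal sketch of the pattern the question describes (whether the final loop is worth running at exit is exactly what is being asked):

        #include <stdlib.h>

        struct node {
            int          value;
            struct node *next;
        };

        int main(void)
        {
            /* Build a small list on the heap. */
            struct node *head = NULL;
            for (int i = 0; i < 1000; i++) {
                struct node *n = malloc(sizeof *n);
                if (n == NULL) break;
                n->value = i;
                n->next  = head;
                head     = n;
            }

            /* The loop in question: on process exit the OS reclaims the whole
               address space anyway, so this is arguably redundant here -- but it
               keeps tools like valgrind quiet, and it stays correct if the code
               is ever reused in the middle of a longer-running program. */
            while (head != NULL) {
                struct node *next = head->next;
                free(head);
                head = next;
            }
            return 0;
        }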

    Read the article

  • Tab Sweep - State of Java EE, Dynamic JPA, Java EE performance, Garbage Collection, ...

    - by alexismp
    Recent Tips and News on Java EE 6 & GlassFish: • Java EE: The state of the environment (SDTimes) • Extend your Persistence Unit on the fly (EclipseLink blog) • Glassfish 3.1 - AccessLog Format (Ralph) • Java Enterprise Performance - Unburdended Applications (Lucas) • Java Garbage Collection and Heap Analysis (John) • Qu’attendez-vous de JMS 2.0? (Julien) • Dynamically registering WebFilter with Java EE 6 (Markus)

    Read the article

  • What's the difference between stateful and stateless?

    - by Pankaj Upadhyay
    The books and documentation on MVC just heap on using the terms Stateful and Stateless. To be honest, I am just unable to grasp what the books are talking about. They don't give an example that would help in understanding either state; they just say that HTTP is stateless and that with ASP.NET MVC Microsoft is going along with that. Am I missing some fundamental knowledge? I can't understand what stateful is and why it is stateful, and the same goes for stateless. A simple and short example that talks about a control like a button or textbox could simplify the understanding, I suppose.

    Read the article

  • OSB, Service Callouts and OQL - Part 1

    - by Sabha
    Oracle Fusion Middleware customers use Oracle Service Bus (OSB) for virtualizing service endpoints and implementing stateless service orchestrations. Behind the performance and speed of OSB there are a couple of key design implementations that can affect application performance and behavior under heavy load. One of the most heavily used features in OSB is the Service Callout pipeline action, used for message enrichment and for invoking multiple services as part of one single orchestration. Overuse of this feature, without understanding its internal implementation, can lead to serious problems. This post will delve into OSB internals, the problem associated with usage of Service Callout under high loads, diagnosing it via thread dump and heap dump analysis using tools like ThreadLogic and OQL (Object Query Language), and resolving it. The first section in the series will mainly cover the threading model used internally by OSB for implementing Route vs. Service Callout actions. Please refer to the blog post for more details.

    Read the article

  • WMemoryProfiler is Released

    - by Alois Kraus
    What is it?

    WMemoryProfiler is a managed profiling API to aid integration testing. This free library can get managed heap statistics and memory usage for your own process (remember: testing) and for other processes as well. The best thing is that it works from .NET 2.0 up to .NET 4.5, in x86 and x64. To make it more interesting, it can attach to any running .NET process. The reason I mention this is that commercial profilers support this functionality only in their professional editions, and normally only since .NET 4.0, because only from that version does the profiling API support attaching to a running process. This thing differs in many aspects from "normal" profilers: while profiling yourself, you can get all objects from all managed heaps back as an object array. If you ever wanted to change the state of an object which exists only as a method local in another thread, you can get your hands on it now...

    Enough theory. Show me some code

        /// <summary>
        /// Shows the feature to not only get statistics out of a process but also
        /// the newly allocated instances since the last call to MarkCurrentObjects.
        /// GetNewObjects returns the newly allocated objects as an object array.
        /// </summary>
        static void InstanceTracking()
        {
            using (var dumper = new MemoryDumper()) // if you have problems, use new MemoryDumper(true, true) to see the debugger windows
            {
                dumper.MarkCurrentObjects();
                Allocate();
                ILookup<Type, object> newObjects = dumper.GetNewObjects()
                                                         .ToLookup(x => x.GetType());
                Console.WriteLine("New Strings:");
                foreach (var newStr in newObjects[typeof(string)])
                {
                    Console.WriteLine("Str: {0}", newStr);
                }
            }
        }

    ...

        New Strings:
        Str: qqd
        Str: String data:
        Str: String data: 0
        Str: String data: 1
        ...

    This is really hot stuff. Not only can you get heap statistics, but you can directly examine the new objects and run queries over them. When I find more time I could use it to reconstruct the object root graph from my own process. Is this cool or what? You can also peek into the finalization queue to check whether you accidentally forgot to dispose a whole bunch of objects...

        /// <summary>
        /// .NET 4.0 or above only. Gets all finalizable objects which are ready
        /// for finalization and have no other object roots anymore.
        /// </summary>
        static void NotYetFinalizedObjects()
        {
            using (var dumper = new MemoryDumper())
            {
                object[] finalizable = dumper.GetObjectsReadyForFinalization();
                Console.WriteLine("Currently {0} objects of types {1} are ready for finalization. Consider disposing them before.",
                    finalizable.Length,
                    String.Join(",", finalizable.ToLookup(x => x.GetType())
                                                .Select(x => x.Key.Name)));
            }
        }

    How does it work?

    The W of WMemoryProfiler is a good hint. It employs Windbg and SOS.dll to do the heavy lifting, and concentrates on an easy-to-use API which hides Windbg completely. If you do not want to see Windbg, you will never see it. In my experience the most complex thing is actually downloading Windbg from the Windows 8 Standalone SDK. This is described in the Readme, and in much greater detail in the exception you are greeted with if it is missing, so I will not go into it here.

    What Next?

    Depending on the feedback I get, I can imagine some features which might be useful as well:

    - Calculate first-order GC roots from the actual object graph
    - Identify global statics in types in the object graph
    - Support reading the finalization queue of .NET 2.0 as well
    - Support memory dump analysis (again a feature supported by commercial profilers only in their professional editions, if it is supported at all)
    - Deserialize objects from a memory dump back into a live process (this would need some more investigation, but it is doable)

    The last item needs some explanation. Why on earth would you want to do that? The basic idea is to store in your live process some logging/tracing data which can become quite big, but which, since it is never written out, is very fast to generate. When your process crashes with a memory dump, you could transfer this data structure back into a live viewer, which can then nicely display your program state at the point it crashed. This is an advanced troubleshooting technique I have not seen anywhere yet, but it could be quite useful. You can have a look at the current feature list of WMemoryProfiler, with some examples.

    How To Get Started?

    First I would download the released source package (it is tiny) and compile the complete project. Then you can compile the Example project (it has this name) and uncomment, in the main method, the scenario you want to check out. If you are greeted with an exception, it is time to install the Windows 8 Standalone SDK, which is described in great detail in the exception text. That's it for the first round. I saw something more limited in the Java world some years ago (now I cannot find the link anymore), but anyway: now we have something much better.

    Read the article

  • Mysql not starting - innodb not found

    - by Rob Guderian
    I have a fresh install of Ubuntu 12.04 server edition, and mysql server is not starting properly. I did a simple install:

        apt-get install mysql-server

    But it's failing with this error message:

        root@test:~# mysqld
        120618 20:57:32 [Warning] The syntax '--log-slow-queries' is deprecated and will be removed in a future release. Please use '--slow-query-log'/'--slow-query-log-file' instead.
        120618 20:57:32 [Note] Plugin 'FEDERATED' is disabled.
        120618 20:57:32 InnoDB: The InnoDB memory heap is disabled
        120618 20:57:32 InnoDB: Mutexes and rw_locks use GCC atomic builtins
        120618 20:57:32 InnoDB: Compressed tables use zlib 1.2.3.4
        120618 20:57:32 InnoDB: Unrecognized value fdatasync for innodb_flush_method
        120618 20:57:32 [ERROR] Plugin 'InnoDB' init function returned error.
        120618 20:57:32 [ERROR] Plugin 'InnoDB' registration as a STORAGE ENGINE failed.
        120618 20:57:32 [ERROR] Unknown/unsupported storage engine: InnoDB
        120618 20:57:32 [ERROR] Aborting

    I can start the server with the "--skip-innodb --default-storage-engine=myisam" flags, but I would like to use InnoDB. Does anyone know what the issue is here?
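    For what it's worth, the "Unrecognized value fdatasync for innodb_flush_method" line points at the likely culprit: fdatasync is InnoDB's default flush behaviour when the option is left unset, but it is not an accepted explicit value in this release. A sketch of the fix, assuming the offending line was added under [mysqld] in /etc/mysql/my.cnf:

        [mysqld]
        # innodb_flush_method = fdatasync   <- remove or comment out this line;
        #                                      leaving the option unset gives the
        #                                      fdatasync behaviour anyway
        # innodb_flush_method = O_DIRECT    <- or set an accepted value instead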

    Read the article

  • LightView: JavaFX 2 real-time visualizer for GlassFish

    - by arungupta
    Adam Bien launched LightFish, a light-weight monitoring and visualization application for GlassFish. It comes with an introduction and a screencast to get you started. The tool provides monitoring information about threads and memory (such as heap size, thread count, peak thread count), transactions (commits and rollbacks), HTTP sessions, JDBC sessions, and even "paranormal activity". In the recently released first part of a three-part article series at OTN, Adam explains how REST services can be exposed as a bindable set of properties for JavaFX. The article, titled "Enterprise side of JavaFX", shows a practical combination of REST and JavaFX. It explains how read-only and dynamic properties can be created. The fine-grained binding model allows clear separation of the view, presentation, and business logic. Read the first part here.

    Read the article

  • Clarification On Write-Caching Policy, Its Underlying Options And How It Applies To Hard Drives And Solid-State Drives

    - by Boris_yo
    Last week, after doing more research on the subject matter, I started wondering about what I have been neglecting all these years: understanding write-caching policy, which I have always left on its default setting. Write-caching policy improves write performance and consists of write-back caching and write-cache buffer flushing. This is how I understand all of the above, but correct me if I erred somewhere:

    Write-through caching is not itself part of the write-caching policy per se. It is when data is written to both the cache and the storage device, so that if Windows needs that data again later, it is retrieved from the cache and not from the storage device. This means only read performance is improved, since there is no need to wait for the storage device to read the required data again. Because data is still written to the storage device, write performance isn't improved, and there is no risk of data loss or corruption in case of power failure or system crash: only the data in the cache gets lost. This option seems to be enabled by default and is recommended for removable devices, with no need for the user to invoke "Safely Remove Hardware".

    Write-back caching is similar to the above, but without immediately writing data to the storage device: data is periodically released from the cache and written to the storage device when it is idle. In my opinion this option improves both read and write performance, but it represents a risk if a power failure or system crash occurs: not only can data that was yet to be written to the storage device be lost, but the result can be file inconsistencies or a corrupted file system. Write-back caching cannot be enabled together with write-through caching, and it is not recommended if no backup power supply is available.

    Write-cache buffer flushing, I reckon, is similar to write-back caching, but enables immediate release and writing of data from the cache to the storage device right before a power outage occurs; I don't know whether it also applies to an occasional system crash. This option seems to be complementary to write-back caching, reducing or potentially eliminating the risk of data loss or file-system corruption.

    I have questions about the relevance of the last two options to today's modern SSDs, in order to get the best performance with less wear:

    1. I know that traditional hard drives come with onboard cache (I wonder what type of cache that is), but do SSDs also come with cache? Assuming they do, is this cache faster than their NAND flash and system RAM, and worth taking the risk of utilizing by enabling write-back caching? I read somewhere that generally a storage device's cache is faster than RAM, but I want to be sure. Additionally, I read that write caching should be enabled, since data that is to be written to NAND flash is kept for a while in the cache; if that data gets modified a lot before finally being written, holding it and releasing it periodically reduces the number of writes to the SSD, thereby reducing wear.

    2. Regarding write-cache buffer flushing, I heard that SSD controllers are so fast by themselves that enabling this option is not required, because they manage flushing themselves. However, once again, I don't know whether SSDs have their own onboard cache and whether or not it is faster than their NAND flash and system RAM; if it is, keeping this option enabled would make sense.

    Recently I posted a question about an issue with my Intel 330 SSD 120GB, which was the main reason to do deeper research, suspecting the write-caching policy to be the culprit of the SSD's freezing issue, on the assumption that the release of cached data is what causes the freezes. Currently I have write caching enabled and write-cache buffer flushing disabled, because I believe the SSD controller's management of write-cache flushing and Windows' write-cache buffer flushing are conflicting with each other. Since I want to troubleshoot in small steps to finally determine the source of the issue, I have decided to start with the write-caching policy, then move on to drivers, switching to AHCI later on, and finally disabling DIPM (device-initiated power management) through registry modification, thanks to @TomWijsman.

    Read the article

  • SS7(M3UA, SCCP, TCAP, MAP) Stack

    - by Ammar Hameed
    I'm building an open source SMSC from scratch; it's almost finished. The SRI and forwardSM operations are working, but I still have a few things to do for the receiving part. I've built the SS7 stack already, but I'm using a DB for saving the TCAP transaction IDs, to be updated later to get/generate responses. My approach is this: I created a memory (HEAP) table, saved the TCAP TIDs in the database, then compared each received TCAP TID with the TIDs saved in the database to decide whether to end the TCAP session or continue. What is the best way to implement this? I'm thinking of a doubly linked list that holds the TCAP TIDs. Am I going in the right direction, or should I use another technique besides a database or a D-linked list? Should I leave it as it is and let the database do the job of saving the TIDs? Please note that I'm using the SCTP implementation available on Linux (lksctp) as the transport protocol, the language I'm using is C, and the DB is MySQL.
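    As a sketch of the in-memory alternative being considered (the struct and field names are invented for illustration), a doubly linked list of open TCAP dialogues might look like this; note the O(n) lookup is the usual argument for a hash table keyed on the TID once the number of open dialogues grows large:

        #include <stdint.h>
        #include <stdlib.h>
        #include <time.h>

        /* One open TCAP dialogue, keyed by its transaction ID. */
        struct tcap_txn {
            uint32_t         tid;      /* TCAP transaction ID            */
            time_t           created;  /* for timing out stale dialogues */
            struct tcap_txn *prev;
            struct tcap_txn *next;
        };

        static struct tcap_txn *txn_head = NULL;

        /* O(1) insert when a BEGIN opens a new dialogue. */
        struct tcap_txn *txn_add(uint32_t tid)
        {
            struct tcap_txn *t = calloc(1, sizeof *t);
            if (t == NULL) return NULL;
            t->tid = tid;
            t->created = time(NULL);
            t->next = txn_head;
            if (txn_head) txn_head->prev = t;
            txn_head = t;
            return t;
        }

        /* O(n) lookup on CONTINUE/END. */
        struct tcap_txn *txn_find(uint32_t tid)
        {
            for (struct tcap_txn *t = txn_head; t != NULL; t = t->next)
                if (t->tid == tid) return t;
            return NULL;
        }

        /* O(1) unlink when the dialogue ends. */
        void txn_remove(struct tcap_txn *t)
        {
            if (t->prev) t->prev->next = t->next; else txn_head = t->next;
            if (t->next) t->next->prev = t->prev;
            free(t);
        }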

    Read the article

  • Class Versus Struct

    - by Prometheus87
    In C++ and other languages it influenced, there is a construct called structure (struct), and we all know the class. Both are capable of holding functions and variables. Some differences are:

    1. A class is given memory in the heap and a struct is given memory on the stack.
    2. In a class, variables are private by default, and in a struct they are public.

    My question is that struct was somehow abandoned in favor of class. Why? Other than abstraction, a struct can do all the same stuff a class does. Then why abandon it?
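    For the record, in standard C++ the two keywords differ only in their defaults, and where an object lives is decided by how it is created, not by which keyword declared its type. A minimal sketch:

        // Default member access and default inheritance are the only differences.
        struct S { int x; };     // x is public by default
        class  C { int x; };     // x is private by default

        struct SD : S {};        // public inheritance by default
        class  CD : C {};        // private inheritance by default

        int main() {
            S on_stack;              // automatic storage
            S *on_heap = new S;      // dynamic storage: same type, heap this time
            delete on_heap;
            (void)on_stack;
            return 0;
        }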

    Read the article

  • How can I neatly embed Flash in a page in a way that is cross-browser compatible?

    - by Mark Hatton
    When I receive Flash objects from my designer, they come with an example HTML page which includes both <object> tags and <embed> tags, as well as a whole heap of JavaScript. If I copy and paste this code into my webpage it works, but the code looks a mess (and there is so much of it!). If I remove the extra code and try either just <embed> or just <object> on their own, it works in some browsers but not others. Is there a neat, minimal method that works in all the major browsers?
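    One commonly cited minimal pattern from that era is the nested-objects markup, which needs no JavaScript at all; the file name and dimensions below are placeholders, and the outer classid object serves older Internet Explorer while the inner standards-based object serves everything else:

        <object classid="clsid:D27CDB6E-AE6D-11cf-96B8-444553540000"
                width="550" height="400">
          <param name="movie" value="movie.swf" />
          <!--[if !IE]>-->
          <object type="application/x-shockwave-flash" data="movie.swf"
                  width="550" height="400">
          <!--<![endif]-->
            <p>Fallback content for browsers without the Flash plugin.</p>
          <!--[if !IE]>-->
          </object>
          <!--<![endif]-->
        </object>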

    Read the article

  • Tweaking Hudson memory usage

    - by rovarghe
    Hudson 3.1 has some performance optimizations that greatly reduce its memory footprint. Prior to this, Hudson always held the entire data model (all jobs and all builds) in memory, which affected scalability. Some installations configured heap sizes in excess of 1GB to counteract this. Hudson 3.1.x maintains an MRU cache and only loads jobs and builds as they are required. Because of the inability to change existing APIs and stay backward compatible with plugins, there were limits to how far we could go with this approach. Memory optimizations almost always come with a related cost, in this case additional I/O that has to be performed to load data on request. A small site with frequent traffic will usually not notice this, since the MRU cache will usually hold on to all the data. A large site with infrequent traffic might experience some delays when the first request hits the server after a long gap.

    If you have a large heap and are able to allocate more memory, the cache settings can be adjusted to take advantage of this and even go back to pre-3.1 behavior. All the cache settings can be passed as options to the JVM container (Tomcat or the default Jetty container) using the -D option. There are two caches, independent of each other: one for Jobs and the other for Builds.

    For the jobs cache:

    hudson.jobs.cache.evict_in_seconds (default=60)
        Seconds from last access (which could be a servlet request or a background cron thread) after which a job should be purged from the cache. Set this to 0 to never purge based on time.
    hudson.jobs.cache.initial_capacity (default=1024)
        Initial number of jobs the cache can accommodate. Setting this to the number of jobs you typically display on your Hudson landing page or home page will speed up consecutive access to that page. If the default is too large, you may consider downsizing and using that memory for the Builds cache instead.
    hudson.jobs.cache.max_entries (default=1024)
        Maximum number of jobs in the cache. The default is large enough for most installations, but if you see I/O activity whenever the hudson home page is accessed, you might consider increasing this. First verify whether the I/O is caused by frequent eviction (see above) rather than by the cache not being large enough.

    For the builds cache:

    The builds cache is used to store Build objects as they are read from storage. Typically this happens when a user drills down into the details of a particular Job from the hudson home page. The cache is shared among builds for different jobs, since in most installations all jobs are not accessed with the same frequency, so a per-job builds cache would be a waste of memory.

    hudson.job.builds.cache.evict_in_seconds (default=60)
        Same as the equivalent Jobs cache setting, applied to Builds.
    hudson.job.builds.cache.initial_capacity (default=512)
        Same as the equivalent Jobs cache setting. Note the smaller initial size. If your site stores a large number of builds and has frequent access to more builds, you might consider bumping this up.
    hudson.job.builds.cache.max_entries (default=10240)
        The default maximum is large enough for most installations. The builds cache holds bigger objects, so be careful about increasing this upper limit. See the section on monitoring below.

    Sample usage:

        java -jar hudson-war-3.1.2-SNAPSHOT.war -Dhudson.jobs.cache.evict_in_seconds=300 \
            -Dhudson.job.builds.cache.evict_in_seconds=300

    Monitoring cache usage

    The 'jmap' tool that comes with the JDK can be used to monitor cache performance in an indirect way, by looking at the number of Job and Build objects in each cache. Find the PID of the hudson instance and run:

        $ jmap -histo:live <pid> | grep 'hudson.model.*Lazy.*Key$'

    Here's a sample output:

        num     #instances    #bytes    class name
        523:            28       896    hudson.model.RunMap$LazyRunValue$Key
        1200:            3        96    hudson.model.LazyTopLevelItem$Key

    These are the keys to the Jobs (LazyTopLevelItem$Key) and Builds (RunMap$LazyRunValue$Key) in the caches, so counting the keys is a good indicator of the number of items in each cache at any given moment. The size in bytes can be ignored: it is just the size of the keys, not the actual sizes of the objects they hold. Those sizes can only be obtained with a profiler. From the output above we can conclude that there are 3 jobs and 28 builds in memory. The 28 builds could all be from 1 job or spread over all 3 jobs. Over time on an idle system these should get evicted and the memory cache should be empty. In practice, because of background cron threads and triggers, jobs rarely fall to zero. Access to a job or a build by a cron thread resets the eviction timer.

    Read the article
