Search Results

Search found 3815 results on 153 pages for 'chris cap'.

  • StackOverFlowError while creating Mac object on AS400/Java

    - by Prasanna K Rao
    Hello all, I am a newbie to AS400-Java programming. I am trying to create my first program to test the implementation of Message Authentication Code (MAC). I am trying to use the HMACSHA1 hash function. My (Java 1.4) program runs fine on a dev box (V5R4).But fails terribly on the QA box (V5R3). My program is as below: ===================================================== import java.security.InvalidKeyException; import java.security.NoSuchAlgorithmException; import java.security.Security; import java.security.Provider; import javax.crypto.Mac; import javax.crypto.spec.SecretKeySpec; import javax.crypto.SecretKey; public class Test01 { private static final String HMAC_SHA1_ALGORITHM = "HmacSHA1"; public static void main (String [] arguments) { byte[] key = { 1,2,3,4,5,6,7,8}; SecretKeySpec SHA1key = new SecretKeySpec(key, "HmacSHA1"); Mac hmac; String strFinalRslt = ""; try { hmac = Mac.getInstance("HmacSHA1"); hmac.init(SHA1key); byte[] result = hmac.doFinal(); strFinalRslt = toHexString(result); }catch (NoSuchAlgorithmException e) { // TODO Auto-generated catch block e.printStackTrace(); }catch (InvalidKeyException e) { // TODO Auto-generated catch block e.printStackTrace(); }catch(StackOverflowError e){ e.printStackTrace(); } System.out.println(strFinalRslt); System.out.println("All done!!!"); } public static byte[] fromHexString ( String s ) { int stringLength = s.length(); if ( (stringLength & 0x1) != 0 ) { throw new IllegalArgumentException ( "fromHexString requires an even number of hex characters" ); } byte[] b = new byte[stringLength / 2]; for ( int i=0,j=0; i 4] ); //look up low nibble char sb.append( hexChar [b[i] & 0x0f] ); } return sb.toString(); } static char[] hexChar = { '0' , '1' , '2' , '3' , '4' , '5' , '6' , '7' , '8' , '9' , 'a' , 'b' , 'c' , 'd' , 'e' , 'f'}; } This program compiles fine and gets the correct response on my win-xp client and also my dev box. But, fails with the following error on the QA box: java.lang.StackOverflowError at java.lang.Throwable.(Throwable.java:180) at java.lang.Error.(Error.java:37) at java.lang.StackOverflowError.(StackOverflowError.java:24) at java.io.Os400FileSystem.list(Native method) at java.io.File.list(File.java:922) at javax.crypto.b.e(Unknown source) at javax.crypto.b.a(Unknown source) at javax.crypto.b.c(Unknown source) at javax.crypto.b£0.run(Unknown source) at javax.crypto.b.(Unknown source) at javax.crypto.Mac.getInstance(Unknown source) I have verified the java.security file and entry corresponding to the jce files are all ok. The DMPJVM command gives me the following response: Thu Jun 03 12:25:34 E Java Virtual Machine Information 016822/QPGMR/11111 ........................................................................ . Classpath . ........................................................................ java.version=1.4 sun.boot.class.path=/QIBM/ProdData/OS400/Java400/jdk/lib/jdkptf14.zip:/QIBM /ProdData/OS400/Java400/ext/ibmjssefw.jar:/QIBM/ProdData/CAP/ibmjsseprovide r.jar:/QIBM/ProdData/OS400/Java400/ext/ibmjsseprovider2.jar:/QIBM/ProdData/ OS400/Java400/ext/ibmpkcs11impl.jar:/QIBM/ProdData/CAP/ibmjssefips.jar:/QIB M/ProdData/OS400/Java400/jdk/lib/IBMiSeriesJSSE.jar:/QIBM/ProdData/OS400/Ja va400/jdk/lib/jce.jar:/QIBM/ProdData/OS400/Java400/jdk/lib/jaas.jar:/QIBM/P rodData/OS400/Java400/jdk/lib/ibmcertpathfw.jar:/QIBM/ProdData/OS400/Java40 0/jdk/lib/ibmcertpathprovider.jar:/QIBM/ProdData/OS400/Java400/ext/ibmpkcs. 
jar:/QIBM/ProdData/OS400/Java400/jdk/lib/ibmjgssfw.jar:/QIBM/ProdData/OS400 /Java400/jdk/lib/ibmjgssprovider.jar:/QIBM/ProdData/OS400/Java400/jdk/lib/s ecurity.jar:/QIBM/ProdData/OS400/Java400/jdk/lib/charsets.jar:/QIBM/ProdDat a/OS400/Java400/jdk/lib/resources.jar:/QIBM/ProdData/OS400/Java400/jdk/lib/ rt.jar:/QIBM/ProdData/OS400/Java400/jdk/lib/sunrsasign.jar:/QIBM/ProdData/O S400/Java400/ext/IBMmisc.jar:/QIBM/ProdData/Java400/ java.class.path=/myhome/lib/commons-codec-1.3.jar:/myhome/lib/commons-httpc lient-3.1.jar:/myhome/lib/commons-logging-1.1.jar:/myhome/lib/log4j-1.2.15.jar:/myhome/lib/log4j-core.jar ; java.ext.dirs=/QIBM/ProdData/OS400/Java400/jdk/lib/ext:/QIBM/UserData/Java4 00/ext:/QIBM/ProdData/Java400/jdk14/lib/ext java.library.path=/QSYS.LIB/ROBOTLIB.LIB:/QSYS.LIB/QTEMP.LIB:/QSYS.LIB/ODIP GM.LIB:/QSYS.LIB/QGPL.LIB ........................................................................ . Garbage Collection . ........................................................................ Garbage collector parameters Initial size: 16384 K Max size: 240000000 K Current values Heap size: 437952 K Garbage collections: 58 Additional values JIT heap size: 53824 K JVM heap size: 55752 K Last GC cycle time: 1333 ms ........................................................................ . Thread information . ........................................................................ Information for 4 thread(s) of 4 thread(s) processed Thread: 00000004 Thread-0 TDE: B00380000BAA0000 Thread priority: 5 Thread status: Running Thread group: main Runnable: java/lang/Thread Stack: java/io/Os400FileSystem.list(Ljava/io/File;)[Ljava/lang/String;+0 (Os400FileSystem.java:0) java/io/File.list()[Ljava/lang/String;+19 (File.java:922) javax/crypto/b.e()[B+127 (:0) javax/crypto/b.a(Ljava/security/cert/X509Certificate;)V+7 (:0) javax/crypto/b.access$500(Ljava/security/cert/X509Certificate;)V+1 (:0) javax/crypto/b$0.run()Ljava/lang/Object;+98 (:0) javax/crypto/b.()V+507 (:0) javax/crypto/Mac.getInstance(Ljava/lang/String;)Ljavax/crypto/Mac;+10 (:0) Locks: None Thread: 00000007 jitcompilethread TDE: B00380000BD58000 Thread priority: 5 Thread status: Java wait Thread group: system Runnable: java/lang/Thread Stack: None Locks: None Thread: 00000005 Reference Handler TDE: B00380000BAAC000 Thread priority: 10 Thread status: Waiting Wait object: java/lang/ref/Reference$Lock Thread group: system Runnable: java/lang/ref/Reference$ReferenceHandler Stack: java/lang/Object.wait()V+1 (Object.java:452) java/lang/ref/Reference$ReferenceHandler.run()V+47 (Reference.java:169) Locks: None Thread: 00000006 Finalizer TDE: B00380000BAB3000 Thread priority: 8 Thread status: Waiting Wait object: java/lang/ref/ReferenceQueue$Lock Thread group: system Runnable: java/lang/ref/Finalizer$FinalizerThread Stack: java/lang/ref/ReferenceQueue.remove(J)Ljava/lang/ref/Reference;+43 (ReferenceQueue.java:111) java/lang/ref/ReferenceQueue.remove()Ljava/lang/ref/Reference;+1 (ReferenceQueue.java:127) java/lang/ref/Finalizer$FinalizerThread.run()V+3 (Finalizer.java:171) Locks: None ........................................................................ . Class loader information . ........................................................................ 0 Default class loader 1 sun/reflect/DelegatingClassLoader 2 sun/misc/Launcher$ExtClassLoader ........................................................................ . GC heap information . ........................................................................ 
Loader Objects Class name ------ ------- ---------- 0 1493 [C 0 2122181 java/lang/String 0 47 [Ljava/util/Hashtable$Entry; 0 68 [Ljava/lang/Object; 0 1016 java/lang/Class 0 31 java/util/HashMap 0 37 java/util/Hashtable 0 2 java/lang/ThreadGroup 0 2 java/lang/RuntimePermission 0 2 java/lang/ref/ReferenceQueue$Null 0 5 java/lang/ref/ReferenceQueue 0 50 java/util/Vector 0 4 java/util/Stack 0 3 sun/misc/SoftCache 0 1 [Ljava/lang/ThreadGroup; 0 5 [Ljava/io/ObjectStreamField; 0 1 sun/reflect/ReflectionFactory 0 7 java/lang/ref/ReferenceQueue$Lock 0 10 java/lang/Object 0 1 java/lang/String$CaseInsensitiveComparator 0 1 java/util/Hashtable$EmptyEnumerator 0 1 java/util/Hashtable$EmptyIterator 0 33 [Ljava/util/HashMap$Entry; 0 19210 [J 0 1 sun/nio/cs/StandardCharsets 0 5 java/util/TreeMap 0 1075 java/util/TreeMap$Entry 0 469 [Ljava/lang/String; 0 1 java/lang/StringBuffer 0 2 java/io/FileInputStream 0 2 java/io/FileOutputStream 0 2 java/io/BufferedOutputStream 0 1 java/lang/reflect/ReflectPermission 0 1 [[Ljava/lang/ref/SoftReference; 0 2 [Ljava/lang/ref/SoftReference; 0 2 sun/nio/cs/Surrogate$Parser 0 3 sun/misc/Signal 0 1 [Ljava/io/File; 0 6 java/io/File 0 1 java/util/BitSet 0 17 sun/reflect/NativeConstructorAccessorImpl 0 2 java/net/URLClassLoader$ClassFinder 0 12 java/util/ArrayList 0 32 java/io/RandomAccessFile 0 16 java/lang/Thread 0 1 java/lang/ref/Reference$ReferenceHandler 0 1 java/lang/ref/Finalizer$FinalizerThread 0 266 [B 0 2 java/util/Properties 0 71 java/lang/ref/Finalizer 0 2 com/ibm/nio/cs/DirectEncoder 0 38 java/lang/reflect/Constructor 0 33 java/util/jar/JarFile 0 19200 java/lang/StackOverflowError 0 5 java/security/AccessControlContext 0 2 [Ljava/lang/Thread; 0 4 java/lang/OutOfMemoryError 0 1065 java/util/Hashtable$Entry 0 1 java/io/BufferedInputStream 0 2 java/io/PrintStream 0 2 java/io/OutputStreamWriter 0 428 [I 0 3 java/lang/ClassLoader$NativeLibrary 0 25 java/util/Locale 0 3 sun/misc/URLClassPath 0 30 java/util/zip/Inflater 0 612 java/util/HashMap$Entry 0 2 java/io/FilePermission 0 10 java/io/ObjectStreamField 0 1 java/security/BasicPermissionCollection 0 2 java/security/ProtectionDomain 0 1 java/lang/Integer$1 0 1 java/lang/ref/Reference$Lock 0 1 java/lang/Shutdown$Lock 0 1 java/lang/Runtime 0 36 java/io/FileDescriptor 0 1 java/lang/Long$1 0 202 java/lang/Long 0 3 java/lang/ThreadLocal 0 3 java/nio/charset/CodingErrorAction 0 2 java/nio/charset/CoderResult 0 1 java/nio/charset/CoderResult$1 0 1 java/nio/charset/CoderResult$2 0 1 sun/misc/Unsafe 0 2 java/nio/ByteOrder 0 1 java/io/Os400FileSystem 0 3 java/lang/Boolean 0 1 java/lang/Terminator$1 0 23 java/lang/Integer 0 2 sun/misc/NativeSignalHandler 0 1 sun/misc/Launcher$Factory 0 1 sun/misc/Launcher 0 53 [Ljava/lang/Class; 0 1 java/lang/reflect/ReflectAccess 0 18 sun/reflect/DelegatingConstructorAccessorImpl 0 1 sun/net/www/protocol/file/Handler 0 3 java/util/HashSet 0 3 sun/net/www/protocol/jar/Handler 0 1 java/util/jar/JavaUtilJarAccessImpl 0 1 java/net/UnknownContentHandler 0 2 [Ljava/security/Principal; 0 10 [Ljava/security/cert/Certificate; 0 2 sun/misc/AtomicLongCSImpl 0 3 sun/reflect/DelegatingMethodAccessorImpl 0 1 sun/security/util/ByteArrayLexOrder 0 1 sun/security/util/ByteArrayTagOrder 0 7 sun/security/x509/CertificateVersion 0 7 sun/security/x509/CertificateSerialNumber 0 7 sun/security/x509/SerialNumber 0 7 sun/security/x509/CertificateAlgorithmId 0 7 sun/security/x509/CertificateIssuerName 0 60 sun/security/x509/RDN 0 60 [Lsun/security/x509/AVA; 0 67 sun/security/util/DerInputStream 0 3 [Ljava/math/BigInteger; 
0 2 com/ibm/nio/cs/Converter 0 2 sun/nio/cs/StreamEncoder$CharsetSE 0 35 java/lang/ref/SoftReference 0 2 java/nio/HeapByteBuffer 0 2 java/io/BufferedWriter 0 33 sun/misc/URLClassPath$JarLoader 0 4 java/lang/ThreadLocal$ThreadLocalMap$Entry 0 76 java/net/URL 0 1 sun/misc/Launcher$ExtClassLoader 0 1 sun/misc/Launcher$AppClassLoader 0 4 java/lang/Throwable 0 7 java/lang/reflect/Method 0 2 sun/misc/URLClassPath$FileLoader 0 2 java/security/CodeSource 0 2 java/security/Permissions 0 2 java/io/FilePermissionCollection 0 1 java/lang/ThreadLocal$ThreadLocalMap 0 1 javax/crypto/spec/SecretKeySpec 0 17 java/util/jar/Attributes$Name 0 1 [Ljava/lang/ThreadLocal$ThreadLocalMap$Entry; 0 1 java/security/SecureRandom 0 2 sun/security/provider/Sun 0 1 java/util/jar/JarFile$JarFileEntry 0 1 java/util/jar/JarVerifier 0 3 sun/reflect/NativeMethodAccessorImpl 0 116 sun/security/util/ObjectIdentifier 0 1 java/lang/Package 0 2 [S 0 104 java/math/BigInteger 0 20 sun/security/x509/AlgorithmId 0 14 sun/security/x509/X500Name 0 14 [Lsun/security/x509/RDN; 0 60 sun/security/x509/AVA 0 67 sun/security/util/DerValue 0 67 sun/security/util/DerInputBuffer 0 21 sun/security/x509/AVAKeyword 0 6 sun/security/x509/X509CertImpl 0 7 sun/security/x509/X509CertInfo 0 1 [Lsun/security/util/ObjectIdentifier; 0 1 [[Ljava/lang/Byte; 0 3 [[B 0 7 sun/security/provider/DSAPublicKey 0 7 sun/security/x509/AuthorityKeyIdentifierExtension 0 12 [Ljava/lang/Byte; 0 14 java/lang/Byte 0 7 sun/security/x509/CertificateSubjectName 0 7 sun/security/x509/CertificateX509Key 0 14 sun/security/x509/KeyIdentifier 0 4 [Z 0 5 sun/text/Normalizer$Mode 0 7 sun/security/x509/CertificateValidity 0 14 java/util/Date 0 7 sun/security/provider/DSAParameters 0 7 sun/security/util/BitArray 0 7 sun/security/x509/CertificateExtensions 0 7 java/security/AlgorithmParameters 0 7 sun/security/x509/SubjectKeyIdentifierExtension 0 5 sun/security/x509/BasicConstraintsExtension 0 2 sun/security/x509/KeyUsageExtension 0 1 sun/text/CompactCharArray 0 1 sun/text/CompactByteArray 0 1 sun/net/www/protocol/jar/JarFileFactory 0 1 java/util/Collections$EmptySet 0 1 java/util/Collections$EmptyList 0 1 java/util/Collections$ReverseComparator 0 1 com/ibm/security/jgss/i18n/PropertyResource 0 1 javax/crypto/b$0 0 1 sun/security/provider/X509Factory 0 1 sun/reflect/BootstrapConstructorAccessorImpl 1 1 sun/reflect/GeneratedConstructorAccessor3202134454 2 1 com/ibm/crypto/provider/IBMJCE 0 6 java/util/ResourceBundle$LoaderReference 0 1 [Lsun/security/x509/NetscapeCertTypeExtension$MapEntry; 0 1 com/sun/rsajca/Provider 0 1 com/ibm/security/cert/IBMCertPath 0 1 com/ibm/as400/ibmonly/net/ssl/Provider 0 1 com/ibm/jsse/IBMJSSEProvider 0 1 com/ibm/security/jgss/IBMJGSSProvider 0 5 org/ietf/jgss/Oid 0 1 java/util/PropertyResourceBundle 0 7 java/util/ResourceBundle$ResourceCacheKey 0 2 sun/net/www/protocol/jar/URLJarFile 0 6 sun/misc/SoftCache$ValueCell 0 1 java/util/Random 0 1 java/util/Collections$EmptyMap 0 112 com/ibm/security/util/ObjectIdentifier 0 5 java/security/Security$ProviderProperty 0 1 java/security/cert/CertificateFactory 0 1 sun/security/provider/SecureRandom 0 2 java/security/MessageDigest$Delegate 0 2 sun/security/provider/SHA 0 1 sun/util/calendar/ZoneInfo 0 4 com/ibm/security/x509/X500Name 0 2 [Ljava/security/cert/X509Certificate; 0 1 sun/reflect/DelegatingClassLoader 0 1 sun/security/x509/NetscapeCertTypeExtension 0 7 sun/security/x509/NetscapeCertTypeExtension$MapEntry 0 3 [[Ljava/lang/String; 0 3 java/util/Arrays$ArrayList 0 7 
com/ibm/security/x509/NetscapeCertTypeExtension$MapEntry 0 1 com/ibm/security/validator/EndEntityChecker 0 1 java/util/AbstractList$Itr 0 1 com/ibm/security/util/ByteArrayLexOrder 0 1 com/ibm/security/util/ByteArrayTagOrder 0 18 [Lcom/ibm/security/x509/AVA; 0 18 com/ibm/security/util/DerInputStream 0 5 com/ibm/security/util/text/Normalizer$Mode 0 1 com/ibm/security/validator/SimpleValidator 0 1 [Lcom/ibm/security/x509/NetscapeCertTypeExtension$MapEntry; 0 4 [Lcom/ibm/security/x509/RDN; 0 1 java/util/Hashtable$Enumerator 0 4 java/util/LinkedHashMap$Entry 0 1 sun/text/resources/LocaleElements 0 1 sun/text/resources/LocaleElements_en 0 22 com/ibm/security/x509/AVAKeyword 0 4 javax/security/auth/x500/X500Principal 0 18 com/ibm/security/x509/RDN 0 18 com/ibm/security/x509/AVA 0 18 com/ibm/security/util/DerInputBuffer 0 18 com/ibm/security/util/DerValue 0 1 com/ibm/security/util/text/CompactCharArray 0 1 com/ibm/security/util/text/CompactByteArray 0 2 java/util/LinkedHashMap 0 1 java/net/InetAddress$1 0 2 [Ljava/net/InetAddress; 0 2 java/net/InetAddress$Cache 0 1 java/net/Inet4AddressImpl 0 3 java/net/Inet4Address 0 2 java/net/InetAddress$CacheEntry ........................................................................ . Global registry information . ........................................................................ Loader Objects Class name ------ ------- ---------- 0 23 [C 0 1017 java/lang/Class 0 1 java/lang/ref/Reference$ReferenceHandler 0 1 java/lang/ref/Finalizer$FinalizerThread 0 1 sun/misc/Launcher$AppClassLoader 0 32 java/io/RandomAccessFile 0 32 [B Can someone please advise me? Thanks a lot, Prasanna
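    A note on the listing above: the fromHexString / toHexString helper bodies arrive mangled (the text between the loop headers and the trailing ">>> 4]" nibble lookup has been dropped, most likely by '<' characters being stripped as HTML), so they cannot be reconstructed verbatim here. As a point of reference, a minimal self-contained sketch of the same HMAC-SHA1 computation (placeholder key and message, not the poster's exact program) looks roughly like this:

        import javax.crypto.Mac;
        import javax.crypto.spec.SecretKeySpec;

        public class HmacSha1Sketch {
            // Minimal sketch of the same HMAC-SHA1 computation; key and message are placeholders.
            public static void main(String[] args) throws Exception {
                byte[] key = {1, 2, 3, 4, 5, 6, 7, 8};
                byte[] message = "hello".getBytes("UTF-8");

                Mac hmac = Mac.getInstance("HmacSHA1");          // the call that overflows on the V5R3 box
                hmac.init(new SecretKeySpec(key, "HmacSHA1"));
                byte[] digest = hmac.doFinal(message);

                // hex-encode the 20-byte digest
                StringBuffer sb = new StringBuffer(digest.length * 2);
                for (int i = 0; i < digest.length; i++) {
                    sb.append(Character.forDigit((digest[i] & 0xf0) >>> 4, 16)); // high nibble
                    sb.append(Character.forDigit(digest[i] & 0x0f, 16));         // low nibble
                }
                System.out.println(sb.toString());
            }
        }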

    Read the article

  • Why does snmp fail to use its own MIBs?

    - by chrisdew
    I've done a fresh install of Ubuntu 12.04 LTS, and installed the snmpd and snmp packages. If I type: snmpwalk -m ALL -v2c -c public localhost 1.3 I get swathes of errors, of the form: Cannot adopt OID in SQUID-MIB: cacheClients ::= { cacheProtoAggregateStats 15 } Cannot adopt OID in NET-SNMP-EXTEND-MIB: nsExtendLineIndex ::= { nsExtendOutput2Entry 1 } Cannot adopt OID in NET-SNMP-EXTEND-MIB: nsExtendOutLine ::= { nsExtendOutput2Entry 2 } Cannot adopt OID in UCD-SNMP-MIB: laIndex ::= { laEntry 1 } Cannot adopt OID in UCD-SNMP-MIB: laNames ::= { laEntry 2 } Cannot adopt OID in UCD-SNMP-MIB: laLoad ::= { laEntry 3 } Cannot adopt OID in UCD-SNMP-MIB: laConfig ::= { laEntry 4 } Cannot adopt OID in UCD-SNMP-MIB: laLoadInt ::= { laEntry 5 } Cannot adopt OID in UCD-SNMP-MIB: laLoadFloat ::= { laEntry 6 } Cannot adopt OID in UCD-SNMP-MIB: laErrorFlag ::= { laEntry 100 } Cannot adopt OID in UCD-SNMP-MIB: laErrMessage ::= { laEntry 101 } Cannot adopt OID in NET-SNMP-AGENT-MIB: nsNotifyRestart ::= { netSnmpNotifications 3 } Cannot adopt OID in NET-SNMP-AGENT-MIB: nsNotifyShutdown ::= { netSnmpNotifications 2 } Cannot adopt OID in NET-SNMP-AGENT-MIB: nsNotifyStart ::= { netSnmpNotifications 1 } There are literally hundreds of these. If snmp doesn't even like the distro-included MIBs, what chance do I have of getting my own MIBs used? (I get the same form of error with my own MIB, on a different machine, which is why I set up a clean install.) Do other distros have this issue? Is there something obvious that I am overlooking here? Thanks, Chris.

    Read the article

  • authbind, privbind or iptables REDIRECT (port 80 to 8080)?

    - by chris_l
    Hi, I'd like to run Glassfish v3 as a non-privileged user on Linux (Debian), but make it available on port 80. I'm currently doing this with iptables: iptables -t nat -I PREROUTING -p tcp -d x.x.x.x --dport 80 -j REDIRECT --to-port 8080 This works, but I wonder: If this has any significant performance impact compared to binding directly to port 80 If I could make a similar setup also work for HTTPS (or if that must run on 443) If there's a way to prevent other users from binding to port 8080 (in case my server crashes) - maybe block that port permanently to other users somehow? ...or if I should use authbind/privbind instead? Problem: I couldn't make it work with authbind or privbind so far. For authbind, I edited asadmin's last line to: exec authbind --deep "$JAVA" -Djava.net.preferIPv4Stack=true -jar ... For privbind: exec privbind -u glassfish "$JAVA" -Djava.net.preferIPv4Stack=true -jar ... (Only) with these settings, I can successfully perform a create-domain --domainport 80. This proves that authbind and privbind actually work (the authbind version of the script is called by the glassfish user; the privbind version is called by root of course). However, in both cases I get the following exception when starting the domain (start-domain): [#|2010-03-20T13:25:21.925+0100|SEVERE|glassfishv3.0|javax.enterprise.system.core.com.sun.enterprise.v3.server|_ThreadID=11;_ThreadName=FelixStartLevel;|Shutting down v3 due to startup exception : Permission denied: 80=com.sun.enterprise.v3.services.impl.monitor.MonitorableSelectorHandler@1fc25e5|#] I haven't found a solution for that yet (after searching the web, it seems that this isn't so easy?) But maybe the solution with iptables is good enough - what do you think? Thanks, Chris
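    On the HTTPS question above: the same REDIRECT approach can be mirrored for port 443, assuming Glassfish's HTTPS listener is left on an unprivileged port (8181 is the usual Glassfish v3 default; adjust to whatever the listener actually uses). A sketch following the same pattern as the port-80 rule:

        # Sketch only: forward privileged port 443 to the unprivileged HTTPS listener (assumed to be 8181)
        iptables -t nat -I PREROUTING -p tcp -d x.x.x.x --dport 443 -j REDIRECT --to-port 8181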

    Read the article

  • Apache httpd Problem

    - by Christopher
    Hey, I am getting intermittent issues with my site. Pages often hang with huge loading times and sometimes fail to load. The httpd error logs contain the following: [Wed Feb 23 06:54:17 2011] [debug] proxy_util.c(1854): proxy: grabbed scoreboard slot 0 in child 5871 for worker proxy:reverse [Wed Feb 23 06:54:17 2011] [debug] proxy_util.c(1967): proxy: initialized single connection worker 0 in child 5871 for (*) [Wed Feb 23 06:54:24 2011] [debug] proxy_util.c(1854): proxy: grabbed scoreboard slot 0 in child 5872 for worker proxy:reverse [Wed Feb 23 06:54:24 2011] [debug] proxy_util.c(1873): proxy: worker proxy:reverse already initialized [Wed Feb 23 06:54:24 2011] [debug] proxy_util.c(1967): proxy: initialized single connection worker 0 in child 5872 for (*) [Wed Feb 23 06:59:15 2011] [debug] proxy_util.c(1854): proxy: grabbed scoreboard slot 0 in child 5954 for worker proxy:reverse [Wed Feb 23 06:59:15 2011] [debug] proxy_util.c(1873): proxy: worker proxy:reverse already initialized The server is currently running with 800 MB of free memory, so it is not caused by lack of RAM. The current number of httpd processes is 11. This does increase as the error persists and can rise up to 25+. Also, I am running Apache/2.2.3 (CentOS). Any suggestions would be greatly appreciated. Many thanks, Chris.

    Read the article

  • High Apache CPU usage, but low nginx - Configured correctly?

    - by Buckers
    We've just moved a website of ours over to a brand new high-spec Linux server (1x Intel Xeon E3-1230 v2 @ 3.30GHz, 8GB DDR3 ECC, 2x 128GB SATA SSD RAID1). The server has been configured to use nginx but we're not sure if it's working correctly. The site always loads very fast for us (http://www.onedirection.net), but Plesk often sends us reports that the Apache CPU usage percentage reaches high levels, yet when we look at the nginx percentage it's always very low. We've come from a Windows background so are very new to Linux, but shouldn't nginx run INSTEAD of Apache? Here's a screenshot from Plesk showing the CPU usage: http://www.pixelkicks.co.uk/_download/plesk.JPG The website gets around 20,000 visitors per day, and we use W3 Total Cache to get it running as fast as possible. MySQL has been optimised well. Memory usage is only running at 2GB of the 8GB. Does this look right? How can we tell that nginx is doing most of the work? Thanks, Chris.

    Read the article

  • Body of email breaks distribution list in exchange?

    - by widgisoft
    Hi, I have a very odd problem that I'm not sure is a programming issue or a server issue :-p. Basically I'm sending an email to an Exchange distribution list that includes a PHP stack trace; during certain faults the trace includes really high-level information such as the machine's environment variables (during file reads, etc.). I went through a copy of the email line by line until the email sent, and it appears the line: [SUDO_COMMAND] => /etc/init.d/httpd restart is the culprit. Adding a string replacement in before the email is sent allows a successful send. What I don't understand is WHY this stream of characters is causing the issue ONLY on the distribution email. If I send the email to myself as well, i.e. "[email protected]; [email protected]", then I get the email fine. Re-ordering the list doesn't make a difference; the group never gets the email. Because the individual gets the email and not the group, I'm assuming the fault is with Exchange and some rogue filtering - I've gone through it with the sysadmins and there's no filtering of any sort on that group... so maybe it's a bug? I can't find anyone else having recorded this specific fault so I figured I'd open it here. For now I'm just not using the distribution list but it'd be nice to eventually find the solution. Many thanks, Chris

    Read the article

  • Connecting to MySQL Server from PHP Command Line (MAMP)

    - by Austin White
    First of all, I'm using Mac OSX 1.6, MAMP 1.9, PHP 5.3.4, and MySQL 5.1.44. I'm in the process of setting up a video encoding service for a site using Chris Boulton's PHP-Resque and Redis. Once the worker process is fired and the videos have been encoded, I need to save their locations to a mysql database. The php script is being run from the shell, so that is where the issue begins. I import the mysql settings and when it attempts to connect, I get the following errors: Warning: mysqli::mysqli(): php_network_getaddresses: getaddrinfo failed: nodename nor servname provided, or not known in /Users/austingym/Documents/Dropbox/Website/htdocs/homefree/lib/MySQLi_Extended.class.php on line 24 Warning: mysqli::mysqli(): [2002] php_network_getaddresses: getaddrinfo failed: nodename nor servn (trying to connect via tcp://MYSQL_SERVER:3306) in /Users/austingym/Documents/Dropbox/Website/htdocs/homefree/lib/MySQLi_Extended.class.php on line 24 Warning: mysqli::mysqli(): (HY000/2002): php_network_getaddresses: getaddrinfo failed: nodename nor servname provided, or not known in /Users/austingym/Documents/Dropbox/Website/htdocs/homefree/lib/MySQLi_Extended.class.php on line 24 Warning: mysqli::set_charset(): Couldn't fetch MySQLi_Extended in /Users/austingym/Documents/Dropbox/Website/htdocs/homefree/lib/MySQLi_Extended.class.php on line 32 I realize that the error is occurring because it's trying to connect to tcp://MYSQL_SERVER:3306, when MySQL is on port 8889. I've been reading about Mac OSX and MAMP errors regarding the mysql.sock and I've gone through multiple forums and tried various fixes, but none have worked. I've tried PATH=/Applications/MAMP/Library/bin/:/Applications/MAMP/bin/php5.3/bin/:/opt/local/bin:/opt/local/sbin:$PATH and sudo ln -s /Applications/MAMP/tmp/mysql/mysql.sock /tmp/mysql.sock but neither have worked. I even ran a search on my machine for "3306" to find where it's being set, but because that's the normal default, I'm guessing it's not being set explicitly. Any clues on how to fix this rather challenging error?
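    For reference, the connection details MAMP actually uses can be passed to mysqli explicitly instead of relying on the compiled-in defaults. A rough sketch, assuming MAMP's usual port 8889 and the socket path mentioned above (credentials and database name are placeholders):

        <?php
        // Sketch: connect to MAMP's MySQL explicitly rather than the default tcp://host:3306.
        // Port 8889 and the socket path are MAMP defaults; user/pass/db are placeholders.
        $overTcp = new mysqli('127.0.0.1', 'root', 'root', 'mydb', 8889);

        // ...or via the MAMP socket (mysqli uses the socket when the host is 'localhost'):
        $overSocket = new mysqli('localhost', 'root', 'root', 'mydb', 8889,
                                 '/Applications/MAMP/tmp/mysql/mysql.sock');

        if ($overTcp->connect_error) {
            die('Connect failed: ' . $overTcp->connect_error);
        }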

    Read the article

  • Sync Two Exchange accounts or Ready Only access to subfolders

    - by cpgascho
    This is two questions, kind of. The situation is as follows. I am running SBS 2008 with Exchange 2007. There is a shared account which has subfolders to keep track of the progress of jobs that are coming into the company (i.e. sales). I need to give other people in the company read access to this mailbox, not full control. When I give read-only access to the root, other users can only see the Inbox and not the subfolders. Permissions have to be applied to each folder. One solution I have considered is creating a secondary mailbox that everyone could have full access to, which would have a one-way sync from the sales mailbox to the secondary mailbox. Then people could see what was happening without messing up the main mailbox by accident (at worst they would mess up the secondary mailbox). Ideally I could find a way to propagate the READ ONLY permissions to all the subfolders. I have tried using PFDavAdmin to do this but have not been able to get it to connect successfully from Windows 7 to Exchange 2007. Any idea on how to 1. Propagate permissions (get PFDavAdmin to work??!) 2. Sync mailboxes 3. Other solution? Thanks Chris

    Read the article

  • I need some help with either my SQL or my PHP I do not know which...

    - by sico87
    Hello I am creating a CMS and some of the functionality of it that the images that are within the content are managable. I currently trying to display a table that shows the the content title and then the associated images, ideally I would like a layout similar to this, Content Title Image 1 Image 2 Image 3 Content Title 2 Image 1 Image 2 Content Title 3 Image 1 The SQL the returns the data is actually formed using Codeigniters Active Record class, function getAllContentImages() { $this->db->select('*'); $this->db->from('contentImagesTable'); $this->db->join('contentTable', 'contentTable.contentId = contentImagesTable.contentId'); $this->db->join('categoryTable', 'categoryTable.categoryId = contentTable.categoryId'); $query = $this->db->get(); return $query->result_array(); } The array that is returned is looks like this, I have cut the size down for readability. Array ( [0] => Array ( [contentImageId] => 25 [contentImageName] => green.png [contentImageType] => .png [contentImagePath] => /var/www/bangmarketing.bang/media/uploads/contentImages/2/green.png [isHeadlineImage] => 1 [contentImageDateUploaded] => 1265222654 [contentId] => 2 [dashboardUserId] => 0 [contentTitle] => sadsadsadassss [contentAbstract] => <p>Pllllleeeeeeeaaaaasssssseeeeee Work</p> [contentBody] => <p>Please work :-( please</p> [contentOnline] => 0 [contentAllowComments] => 0 [contentDateCreated] => 1265124038 [categoryId] => 1 [categoryTitle] => blogsss [categoryAbstract] => <p>asdsdsadasdsadfdsgdgdsgdsgssssssssssss</p> [categorySlug] => blog [categoryIsSpecial] => 0 [categoryOnline] => 1 [categoryDateCreated] => 1266588327 ) [1] => Array ( [contentImageId] => 28 [contentImageName] => yellow.png [contentImageType] => .png [contentImagePath] => /var/www/bangmarketing.bang/media/uploads/contentImages/7/yellow.png [isHeadlineImage] => 1 [contentImageDateUploaded] => 1265388055 [contentId] => 7 [dashboardUserId] => 0 [contentTitle] => Another Blog [contentAbstract] => <p>This is another blog and it is shit becuase this does not work</p> [contentBody] => <p>ioasfihfududfhdufhuishdfiudshfiudhsfiuhdsiufhusdhfuids</p> [contentOnline] => 1 [contentAllowComments] => 0 [contentDateCreated] => 1265388034 [categoryId] => 1 [categoryTitle] => blogsss [categoryAbstract] => <p>asdsdsadasdsadfdsgdgdsgdsgssssssssssss</p> [categorySlug] => blog [categoryIsSpecial] => 0 [categoryOnline] => 1 [categoryDateCreated] => 1266588327 ) [2] => Array ( [contentImageId] => 33 [contentImageName] => portaski.jpg [contentImageType] => .jpg [contentImagePath] => /var/www/bangmarketing.bang/media/uploads/contentImages/11/portaski.jpg [isHeadlineImage] => 1 [contentImageDateUploaded] => 1265714175 [contentId] => 11 [dashboardUserId] => 0 [contentTitle] => Portaski - new product and brand launch by Bang [contentAbstract] => <p>Bang's experience in new product development has helped launch PortaSki &ndash; the pocket-sized device which is set to revolutionise skiing.</p> [contentBody] => <p>After developing Portaski's brand identity and positioning, Bang re-designed the product and its packaging ahead of launch in late 2008.</p> <p>A media and PR strategy was devised and implemented using Bang's close relationship with two of the UK's most influential organisations in the Advertising and Media Buying industries. 
On-line advertising was supported with editorial reviews in the UK's leading broadsheets and tabloids, which combined with pin-point HTML direct mail to drive consumers to the new e-commerce site.</p> <p>Impressive month-on-month growth has been achieved since launch, and the direct marketing activity resulted in an unprecedented 2.71% of targets going on-line to purchase a PortaSki.</p> <p>For further information visit <a href="http://www.portaski.com" target="_blank">www.portaski.com</a></p> [contentOnline] => 1 [contentAllowComments] => 0 [contentDateCreated] => 1265718184 [categoryId] => 1 [categoryTitle] => blogsss [categoryAbstract] => <p>asdsdsadasdsadfdsgdgdsgdsgssssssssssss</p> [categorySlug] => blog [categoryIsSpecial] => 0 [categoryOnline] => 1 [categoryDateCreated] => 1266588327 ) [3] => Array ( [contentImageId] => 26 [contentImageName] => housingplus.jpg [contentImageType] => .jpg [contentImagePath] => /var/www/bangmarketing.bang/media/uploads/contentImages/5/housingplus.jpg [isHeadlineImage] => 1 [contentImageDateUploaded] => 1265284989 [contentId] => 5 [dashboardUserId] => 0 [contentTitle] => Bang launches Housing Plus [contentAbstract] => <p>Bang has launched Housing Plus, the new brand for the Central Borders Housing Group, along with new sub-brands Property Care and SSHA.</p> [contentBody] => <p>The Midlands based Group, with turnover in excess of &pound;21M, appointed Bang in 2008 following an open pitch of over 40 agencies. Bang's work began with an extensive marketing research strategy that challenged the Group's former positioning and brand structure.</p> <p>The research unveiled that the housing sector demanded a values-led Group. This led Bang to develop the brave &lsquo;Together for the Right Reasons' positioning for Housing Plus.</p> <p>Chris Garratt, Marketing Director at Bang explained "The housing sector has witnessed wholesale change in recent years. Much to tenant's dismay, many associations and Groups appear to be losing touch with their roots, we wanted to develop a Group for associations who place principles at the heart of their corporate strategy".</p> <p>The repositioned sub-brands also play an important role in the Group's revised brand by highlighting Housing Plus' willingness to embrace and nurture individual identities. Chris Garratt continued "By adopting a &lsquo;house of brands' hierarchy from the outset, Housing Plus has sent out a strong message to prospective strategic partners".</p> <p>Bang handled all aspects of work for the redevelopment of the three brands, including research, brand creation, naming, positioning, internal branding and communications, advertising, the brand launches, building the brands' on-line presence and the creation of a powerful brand film &ndash; which is already attracting significant interest from across the sector.</p> [contentOnline] => 1 [contentAllowComments] => 0 [contentDateCreated] => 1265285940 [categoryId] => 8 [categoryTitle] => News [categoryAbstract] => <p>The world at Bang Marketing moves fast, keep up to date w [categorySlug] => news [categoryIsSpecial] => 0 [categoryOnline] => 1 [categoryDateCreated] => 1265283717 ) I need a way that I can get all the content images associated with the same content title in one group and then display under the content title. Can anyone help?
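    Since the join already returns one row per image with its parent content fields attached, the grouping can be done in PHP after the query. A rough sketch, assuming the result array shape shown in the dump above (field names are taken from that dump; the model accessor and the way you render the result are placeholders):

        <?php
        // Sketch: group the flat result rows by content, using the keys visible in the dump above.
        $rows = $this->content_model->getAllContentImages();   // hypothetical model accessor
        $grouped = array();
        foreach ($rows as $row) {
            $id = $row['contentId'];
            if (!isset($grouped[$id])) {
                $grouped[$id] = array('title' => $row['contentTitle'], 'images' => array());
            }
            $grouped[$id]['images'][] = $row['contentImageName'];
        }
        // $grouped now holds: contentId => array('title' => ..., 'images' => array(...)),
        // which maps directly onto the "Content Title / Image 1 / Image 2" layout described above.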

    Read the article

  • Iptables: "-p udp --state ESTABLISHED"

    - by chris_l
    Hi, let's look at these two iptables rules which are often used to allow outgoing DNS: iptables -A OUTPUT -p udp --sport 1024:65535 --dport 53 -m state --state NEW,ESTABLISHED -j ACCEPT iptables -A INPUT -p udp --sport 53 --dport 1024:65535 -m state --state ESTABLISHED -j ACCEPT My question is: How exactly should I understand the ESTABLISHED state in UDP? UDP is stateless. Here is my intuition - I'd like to know if or where this is incorrect: The man page tells me this: state This module, when combined with connection tracking, allows access to the connection tracking state for this packet. --state ... So, iptables basically remembers the port number that was used for the outgoing packet (what else could it remember for a UDP packet?), and then allows the first incoming packet that is sent back within a short timeframe? An attacker would have to guess the port number (would that really be too hard?) About avoiding conflicts: The kernel keeps track of which ports are blocked (either by other services, or by previous outgoing UDP packets), so that these ports will not be used for new outgoing DNS packets within the timeframe? (What would happen if I accidentally tried to start a service on that port within the timeframe - would that attempt be denied/blocked?) Please find all errors in the above text :-) Thanks, Chris
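    One way to see what the state match is actually keying on is to look at the kernel's connection-tracking table right after sending a query. A sketch, assuming conntrack support is enabled (the /proc path and line format vary by kernel version; older kernels expose /proc/net/ip_conntrack instead), with illustrative rather than literal output:

        # send one DNS query, then inspect the conntrack table for UDP entries
        dig @8.8.8.8 example.com > /dev/null
        grep udp /proc/net/nf_conntrack
        # Illustrative output shape: the kernel has recorded the ephemeral source port
        # and the reverse tuple it will treat as ESTABLISHED for a short timeout:
        #   udp 17 29 src=10.0.0.5 dst=8.8.8.8 sport=51334 dport=53 \
        #             src=8.8.8.8 dst=10.0.0.5 sport=53 dport=51334 ...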

    Read the article

  • Controlling clone access to multiple mercurial repos served via hgwebdir.cgi

    - by chrislawlor
    I'm trying to host multiple hg repositories to use for my clients. I need to control access to each repository individually - not just push access, but clone as well. I've got an .htaccess set up which requires authentication globally: AuthUserFile /path/to/hgweb.passwd AuthGroupFile /dev/null AuthName "Chris Lawlor Client Mercurial Repositories" AuthType Basic <Limit GET POST PUT> Require valid-user </Limit> <FilesMatch "\.(htaccess|passwd|config|bak)$"> Order Allow,Deny Deny from all </FilesMatch> Then in each repository, I've got a .hg/hgrc file requiring a valid user [web] allow_push = <comma separated user list> This almost does what I need. The problem is that I need to add ALL my clients to hgweb.passwd, which gives them clone access to ALL of the repositories. The only solution I can think of is to have another .htaccess and .passwd file in EACH repository. I don't really want to do that though, it seems a little convoluted. I can already specify a list of authorized users for each repository in that repo's hgrc file with the allow_push setting. If only there were an allow_clone setting as well... All the documentation I've found for hgwebdir.cgi is incomplete. I've read: http://mercurial.selenic.com/wiki/HgWebDirStepByStep http://hgbook.red-bean.com/read/collaborating-with-other-people.html#sec:collab:cgi http://hgbook.red-bean.com/read/collaborating-with-other-people.html And others. I've yet to find a comprehensive list of hgrc settings. I guess this is as much an Apache question as a Mercurial question. Unless I can find a better approach, I'll be going with a separate .htaccess and .passwd file for each repo. This is a virtual host on Webfaction if it matters - set up roughly like this: http://docs.webfaction.com/software/mercurial.html

    Read the article

  • How to change .htaccess file to work right in localhost?

    - by Manolo Salsas
    I have this snippet code in my .htaccess file to prevent users from hotlinking the server's images: RewriteEngine On RewriteCond %{HTTP_REFERER} ^$ [OR] RewriteCond %{HTTP_REFERER} !^http://(www.)?itransformer.es/.*$ [NC] RewriteRule \.(gif|jpe?g|png|wbmp)$ http://itransformer.es [R,L] Of course, it is not working in my localhost, but don't know how to achieve it. My guess is that I should change the domain name with any wildcard. Any idea? Update I've finally found out the answer thanks to @Chris solution: RewriteCond %{HTTP_REFERER} ^$ [OR] RewriteCond %{HTTP_REFERER} ^https?://%{HTTP_HOST}/.*/usuarios/.*$ [NC] RewriteRule \.(gif|jpe?g|png|wbmp)$ http://%{HTTP_HOST} [R=301,L] The /usuarios/ directory is because I only want to deny direct access to files inside this directory. Update2 For some reason, it doesn't work again. Finally I think that I found out a better solution: RewriteCond %{REQUEST_FILENAME} .*/usuarios/.*$ [NC] RewriteRule \.(gif|jpe?g|png|wbmp)$ http://%{HTTP_HOST} [R=301,L] I say better solution because what I want to deny is direct access to a file (image). Update3 Well, after a while I discovered above wasn't exactly what I wanted, so the next is definitive: RewriteCond %{HTTP_REFERER} ^$ [OR] RewriteCond %{HTTP_REFERER} !^https?://itransformer.*$ [NC] RewriteRule /usuarios/.*\.(gif|jpe?g|png|wbmp)$ - [R=404,L] Just two doubts: If I change the above to: RewriteCond %{HTTP_REFERER} ^$ [OR] RewriteCond %{HTTP_REFERER} !^https?://%{HTTP_HOST}.*$ [NC] RewriteRule /usuarios/.*\.(gif|jpe?g|png|wbmp)$ - [R=404,L] it doesn't work. I don't understand why, because %{HTTP_HOST} is equal to itransformer in my localhost, and it should work. The second doubt is why is shown the default 404 page and not my custom page (that is shown in all other 404 responses).

    Read the article

  • How to install (old) packages for Ubuntu 9.04?

    - by wchrisjohnson
    Based on some excellent feedback by Mark here (http://serverfault.com/questions/285598/should-i-clone-a-physical-server-to-create-a-vm-for-a-staging-server), today I was able to use the VMware converter to clone my production server for a staging server. However the NIC won't come up no matter what I do. I attempted to install VMware tools, as I suspect that the fact that it is not installed might prevent the NIC from working. (I have the NIC set as a vmxnet3 card in the VM settings.) The install failed because there were several dependencies missing as well as the Linux headers. Given that Ubuntu 9.04 has been EOL'd, the packages I need to install to get the VMware tools to install are no longer available. I doubt the Ubuntu 9.04 install CD has the packages on it. What are my options? I'd rather not upgrade the version of Ubuntu yet, as the point of the VM right now is to maintain parity with the production server. Might I have better luck resetting the driver to use vmxnet2 instead of vmxnet3? Thanks in advance! Chris

    Read the article

  • 2.6.9 Kernel on virtual server (non upgradable) - any expected problems?

    - by chris_l
    Hi, I'm considering renting a virtual server (for me personally). The product I'm currently looking at offers IMO fair pricing, very good hardware etc. The only problem is that I won't be able to do an upgrade to a newer kernel than 2.6.9 (running Debian Etch). Also, I can't install my own kernel modules. (The server runs with Virtuozzo, so as far as I understand it, it just does some chroot instead of a real virtualization (?)) I want to run GlassFish, Postgres, Subversion, Trac and maybe some other things on it. It will also have to employ a firewall, and provide OpenSSL for https. Ideally, it would also be able to do AIO (asynchronous IO), which could speed up some server I/O. Should I expect problems with that old kernel version, in conjunction with the software I want to install (I'd like to use current versions of the software)? One thing I already found out is that you can't do everything with iptables, since some kernel modules are missing/things are not built into the kernel. GlassFish v3 appears to run fine at first glance. I was able to test the server for a few hours. Installing my whole setup wasn't feasible in that time, but what I can say is that it's amazingly fast for an entry-level vserver, especially hard disk and network performance (averaging at ca. 400MBit/s). So if the kernel won't be a problem, I'd really like to take it. Thanks, Chris PS Exact kernel version: 2.6.9-023stab051.3-smp

    Read the article

  • SQL SERVER – Service Broker and CAP_CPU_PERCENT – Limiting SQL Server Instances to CPU Usage

    - by pinaldave
    I have mentioned several times on this blog that the best part of blogging is the questions I receive from readers. They are often very interesting. The questions from readers give me a good idea of what other readers might be thinking as well. After reading my earlier article Simple Example to Configure Resource Governor – Introduction to Resource Governor – I received an email from a reader and we exchanged a few emails. After exchanging emails we both figured out what is going on. It was indeed interesting, and the reader suggested that I should blog about it. I asked for permission to publish his name but he does not like the attention so we will just call him Jeff. I have converted our emails into chat for easy consumption. Jeff: Your script does not work at all. I think there is a bug in SQL Server. Pinal: Would you please explain in detail? Jeff: Your code does not limit the CPU usage? Pinal: How did you measure it? Jeff: Well, we have third party tools for it, but let us say I have limited the resources for Reporting Services and used your script described in your blog. After that I ran only the reporting service workload; the CPU was still used more than 100% and it was not limited to 30% as described in your script. Clearly something is wrong somewhere. Pinal: Did you say you ONLY ran the reporting server load? Jeff: Yeah, to validate I ran ONLY the reporting server load and the CPU did not throttle at 30% as per your script. Pinal: Oh! I get it. Here is the answer - CAP_CPU_PERCENT = 30. Use it. Jeff: What is that? I think your earlier script says it will throttle the Reporting Service workload and the Application/OLTP workload and balance them. Pinal: Exactly, that is correct. Jeff: You need to write more in email, buddy! Just like your blogs, your answers do not make sense! No offense! Pinal: Hmm… feedback well taken. Let me try again. In SQL Server 2012 there are a few enhancements with regards to SQL Server Resource Governor. One of the enhancements is how the resources are allocated. Let me explain with examples. Configuration: [Read Earlier Post] Reporting Workload: MIN_CPU_PERCENT=0, MAX_CPU_PERCENT=30 Application/OLTP Workload: MIN_CPU_PERCENT=50, MAX_CPU_PERCENT=100 Example 1: If there is only the Reporting Workload on the server: SQL Server will not limit the reporting workload's CPU usage to 30%; the instance will use all available CPU (if needed). In other words, in this scenario it will use more than 30% CPU. Example 2: If there is a Reporting Workload and a heavy Application/OLTP workload: SQL Server will allocate a maximum of 30% CPU resources to the Reporting Workload and allocate the remaining resources to the heavy application/OLTP workload. The reason for this enhancement is better utilization of resources. Consider a case where there is only a single workload, which we have limited to a maximum of 30% CPU usage. The remaining unused CPU capacity is simply wasted. In this situation SQL Server allows the workload to use more than 30% of the resources, leading to overall improved/optimized performance. However, when multiple workloads need lots of resources, the limits specified in MAX_CPU_PERCENT are honored. Example 3: If there is a situation where the max CPU limit has to be enforced: This is a very interesting scenario. When the max CPU limit has to be enforced irrespective of the workload and the enhanced algorithm, the keyword CAP_CPU_PERCENT is essential. It specifies a hard cap on the CPU bandwidth that all requests in the resource pool will receive.
    In our case, it will never let the CPU usage of the reporting workload go over 30%. You can use the keyword as follows:

        -- Creating Resource Pool for Report Server
        CREATE RESOURCE POOL ReportServerPool
        WITH (
            MIN_CPU_PERCENT=0,
            MAX_CPU_PERCENT=30,
            CAP_CPU_PERCENT=40,
            MIN_MEMORY_PERCENT=0,
            MAX_MEMORY_PERCENT=30)
        GO

    Notice that there is MAX_CPU_PERCENT=30 and CAP_CPU_PERCENT=40. This means that when the SQL Server instance is under heavy load from other workloads, this pool will be held to a maximum of 30% CPU. When the instance is not under load from other workloads, the pool may go over the 30% limit; however, as CAP_CPU_PERCENT is set to 40, it will never go over 40% in any case. CAP_CPU_PERCENT puts a hard limit on the resource usage of the workload. Jeff: Nice Pinal, you should blog about it. [A day passes by] Pinal: Jeff, it is done! Click here to read it. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology Tagged: Service Broker
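    A resource pool only takes effect once a workload group points at it, a classifier routes sessions into that group, and the Resource Governor is reconfigured. The sketch below is one minimal way to wire up the pool above; the group name, the 'ReportUser' login and the classifier logic are placeholders for illustration, not taken from the original post:

        -- Sketch: bind a workload group to the pool above and classify reporting logins into it
        CREATE WORKLOAD GROUP ReportServerGroup
        USING ReportServerPool;
        GO
        -- Classifier function (create it in master); 'ReportUser' is a placeholder login
        CREATE FUNCTION dbo.fnRGClassifier() RETURNS SYSNAME
        WITH SCHEMABINDING
        AS
        BEGIN
            IF SUSER_SNAME() = N'ReportUser'
                RETURN N'ReportServerGroup';
            RETURN N'default';
        END;
        GO
        ALTER RESOURCE GOVERNOR WITH (CLASSIFIER_FUNCTION = dbo.fnRGClassifier);
        ALTER RESOURCE GOVERNOR RECONFIGURE;
        GO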

    Read the article

  • New .NET Library for Accessing the Survey Monkey API

    - by Ben Emmett
    I’ve used Survey Monkey’s API for a while, and though it’s pretty powerful, there’s a lot of boilerplate each time it’s used in a new project, and the json it returns needs a bunch of processing to be able to use the raw information. So I’ve finally got around to releasing a .NET library you can use to consume the API more easily. The main advantages are: Only ever deal with strongly-typed .NET objects, making everything much more robust and a lot faster to get going Automatically handles things like rate-limiting and paging through results Uses combinations of endpoints to get all relevant data for you, and processes raw response data to map responses to questions To start, either install it using NuGet with PM> Install-Package SurveyMonkeyApi (easier option), or grab the source from https://github.com/bcemmett/SurveyMonkeyApi if you prefer to build it yourself. You’ll also need to have signed up for a developer account with Survey Monkey, and have both your API key and an OAuth token. A simple usage would be something like: string apiKey = "KEY"; string token = "TOKEN"; var sm = new SurveyMonkeyApi(apiKey, token); List<Survey> surveys = sm.GetSurveyList(); The surveys object is now a list of surveys with all the information available from the /surveys/get_survey_list API endpoint, including the title, id, date it was created and last modified, language, number of questions / responses, and relevant urls. If there are more than 1000 surveys in your account, the library pages through the results for you, making multiple requests to get a complete list of surveys. All the filtering available in the API can be controlled using .NET objects. For example you might only want surveys created in the last year and containing “pineapple” in the title: var settings = new GetSurveyListSettings { Title = "pineapple", StartDate = DateTime.Now.AddYears(-1) }; List<Survey> surveys = sm.GetSurveyList(settings); By default, whenever optional fields can be requested with a response, they will all be fetched for you. You can change this behaviour if for some reason you explicitly don’t want the information, using var settings = new GetSurveyListSettings { OptionalData = new GetSurveyListSettingsOptionalData { DateCreated = false, AnalysisUrl = false } }; Survey Monkey’s 7 read-only endpoints are supported, and the other 4 which make modifications to data might be supported in the future. The endpoints are: Endpoint Method Object returned /surveys/get_survey_list GetSurveyList() List<Survey> /surveys/get_survey_details GetSurveyDetails() Survey /surveys/get_collector_list GetCollectorList() List<Collector> /surveys/get_respondent_list GetRespondentList() List<Respondent> /surveys/get_responses GetResponses() List<Response> /surveys/get_response_counts GetResponseCounts() Collector /user/get_user_details GetUserDetails() UserDetails /batch/create_flow Not supported Not supported /batch/send_flow Not supported Not supported /templates/get_template_list Not supported Not supported /collectors/create_collector Not supported Not supported The hierarchy of objects the library can return is Survey List<Page> List<Question> QuestionType List<Answer> List<Item> List<Collector> List<Response> Respondent List<ResponseQuestion> List<ResponseAnswer> Each of these classes has properties which map directly to the names of properties returned by the API itself (though using PascalCasing which is more natural for .NET, rather than the snake_casing used by SurveyMonkey). 
For most users, Survey Monkey imposes a rate limit of 2 requests per second, so by default the library leaves at least 500ms between requests. You can request higher limits from them, so if you want to change the delay between requests just use a different constructor: var sm = new SurveyMonkeyApi(apiKey, token, 200); //200ms delay = 5 reqs per sec There’s a separate cap of 1000 requests per day for each API key, which the library doesn’t currently enforce, so if you think you’ll be in danger of exceeding that you’ll need to handle it yourself for now.  To help, you can see how many requests the current instance of the SurveyMonkeyApi object has made by reading its RequestsMade property. If the library encounters any errors, including communicating with the API, it will throw a SurveyMonkeyException, so be sure to handle that sensibly any time you use it to make calls. Finally, if you have a survey (or list of surveys) obtained using GetSurveyList(), the library can automatically fill in all available information using sm.FillMissingSurveyInformation(surveys); For each survey in the list, it uses the other endpoints to fill in the missing information about the survey’s question structure, respondents, and responses. This results in at least 5 API calls being made per survey, so be careful before passing it a large list. It also joins up the raw response information to the survey’s question structure, so that for each question in a respondent’s set of replies, you can access a ProcessedAnswer object. For example, a response to a dropdown question (from the /surveys/get_responses endpoint) might be represented in json as { "answers": [ { "row": "9384627365", } ], "question_id": "615487516" } Separately, the question’s structure (from the /surveys/get_survey_details endpoint) might have several possible answers, one of which might look like { "text": "Fourth item in dropdown list", "visible": true, "position": 4, "type": "row", "answer_id": "9384627365" } The library understands how this mapping works, and uses that to give you the following ProcessedAnswer object, which first describes the family and type of question, and secondly gives you the respondent’s answers as they relate to the question. Survey Monkey has many different question types, with 11 distinct data structures, each of which are supported by the library. If you have suggestions or spot any bugs, let me know in the comments, or even better submit a pull request .
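    As a quick end-to-end sketch using only the members described above (the API key and token strings are placeholders), fetching everything and guarding against the library's exception type might look like this:

        // Sketch: pull the survey list, fill in details/respondents/responses, report request usage
        var sm = new SurveyMonkeyApi("API_KEY", "OAUTH_TOKEN");
        try
        {
            List<Survey> surveys = sm.GetSurveyList();
            sm.FillMissingSurveyInformation(surveys);   // at least 5 extra API calls per survey
            Console.WriteLine("Fetched {0} surveys using {1} API requests",
                surveys.Count, sm.RequestsMade);
        }
        catch (SurveyMonkeyException ex)
        {
            // Covers errors raised by the library, including API communication failures
            Console.WriteLine("Survey Monkey call failed: " + ex.Message);
        }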

    Read the article

  • I Didn't Get You Anything…

    - by Bob Rhubart
    Nearly every day this blog features a  list posts and articles written by members of the OTN architect community. But with Christmas just days away, I thought a break in that routine was in order. After all, if the holidays aren’t excuse enough for an off-topic post, then the terrorists have won. Rather than buy gifts for everyone -- which, given the readership of this blog and my budget could amount to a cash outlay of upwards of $15.00 – I thought I’d share a bit of holiday humor. I wrote the following essay back in the mid-90s, for a “print” publication that used “paper” as a content delivery system.  That was then. I’m older now, my kids are older, but my feelings toward the holidays haven’t changed… It’s New, It’s Improved, It’s Christmas! The holidays are a time of rituals. Some of these, like the shopping, the music, the decorations, and the food, are comforting in their predictability. Other rituals, like the shopping, the  music, the decorations, and the food, can leave you curled into the fetal position in some dark corner, whimpering. How you react to these various rituals depends a lot on your general disposition and credit card balance. I, for one, love Christmas. But there is one Christmas ritual that really tangles my tinsel: the seasonal editorializing about how our modern celebration of the holidays pales in comparison to that of Christmas past. It's not that the old notions of how to celebrate the holidays aren't all cozy and romantic--you can't watch marathon broadcasts of "It's A Wonderful White Christmas Carol On Thirty-Fourth Street Story" without a nostalgic teardrop or two falling onto your plate of Christmas nachos. It's just that the loudest cheerleaders for "old-fashioned" holiday celebrations overlook the fact that way-back-when those people didn't have the option of doing it any other way. Dashing through the snow in a one-horse open sleigh? No thanks. When Christmas morning rolls around, I'm going to be mighty grateful that the family is going to hop into a nice warm Toyota for the ride over to grandma's place. I figure a horse-drawn sleigh is big fun for maybe fifteen minutes. After that you’re going to want Old Dobbin to haul ass back to someplace warm where the egg nog is spiked and the family can gather in the flickering glow of a giant TV and contemplate the true meaning of football. Chestnuts roasting on an open fire? Sorry, no fireplace. We've got a furnace for heat, and stuffing nuts in there voids the warranty. Any of the roasting we do these days is in the microwave, and I'm pretty sure that if you put chestnuts in the microwave they would become little yuletide hand grenades. Although, if you've got a snoot full of Yule grog, watching chestnuts explode in your microwave might be a real holiday hoot. Some people may see microwave ovens as a symptom of creeping non-traditional holiday-ism. But I'll bet you that if there were microwave ovens around in Charles Dickens' day, the Cratchits wouldn't have had to entertain an uncharacteristically giddy Scrooge for six or seven hours while the goose cooked. Holiday entertaining is, in fact, the one area that even the most severe critic of modern practices would have to admit has not changed since Tim was Tiny. A good holiday celebration, then as now, involves lots of food, free-flowing drink, and a gathering of friends and family, some of whom you are about as happy to see as a subpoena. 
Just as the Cratchit's Christmas was spent with a man who, for all they knew, had suffered some kind of head trauma, so the modern holiday gathering includes relatives or acquaintances who, because they watch too many talk shows, and/or have poor personal hygiene, and/or fail to maintain scheduled medication, you would normally avoid like a plate of frosted botulism. But in the season of good will towards men, you smile warmly at the mystery uncle wandering around half-crocked with a clump of mistletoe dangling from the bill of his N.R.A. cap. Dickens' story wouldn't have become the holiday classic it has if, having spotted on their doorstep an insanely grinning, raw poultry-bearing, fresh-off-a-rough-night Scrooge, the Cratchits had pulled their shades and pretended not to be home. Which is probably what I would have done. Instead, knowing full well his reputation as a career grouch, they welcomed him into their home, and we have a touching story that teaches a valuable lesson about how the Christmas spirit can get the boss to pump up the payroll. Despite what the critics might say, our modern Christmas isn't all that different from those of long ago. Sure, the technology has changed, but that just means a bigger, brighter, louder Christmas, with lasers and holograms and stuff. It's our modern celebration of a season that even the least spiritual among us recognizes as a time of hope that the nutcases of the world will wake up and realize that peace on earth is a win/win proposition for everybody. If Christmas has changed, it's for the better. We should continue making Christmas bigger and louder and shinier until everybody gets it.  *** Happy Holidays, everyone!   del.icio.us Tags: holiday,humor Technorati Tags: holiday,humor

    Read the article

  • Partition Wise Joins

    - by jean-pierre.dijcks
    Some say they are the holy grail of parallel computing and PWJ is the basis for a shared nothing system and the only join method that is available on a shared nothing system (yes, this is oversimplified!). The magic in Oracle is of course that it is one of many ways to join data. And yes, this is the old flexibility vs. simplicity discussion all over, so I won't go there... the point is that what you must do in a shared nothing system, you can do in Oracle with the same speed and methods.

    The Theory

    A partition wise join is a join between (for simplicity) two tables that are partitioned on the same column with the same partitioning scheme. In shared nothing this is effectively hard partitioning locating data on a specific node / storage combo. In Oracle it is logical partitioning. If you now join the two tables on that partitioned column you can break up the join in smaller joins exactly along the partitions in the data. Since they are partitioned (grouped) into the same buckets, all values required to do the join live in the equivalent bucket on either side. No need to talk to anyone else, no need to redistribute data to anyone else... in short, the optimal join method for parallel processing of two large data sets.

    PWJs in Oracle

    Since we do not hard partition the data across nodes in Oracle, we use the Partitioning option to the database to create the buckets, then set the Degree of Parallelism (or run Auto DOP - see here) and get our PWJs. The main questions always asked are: How many partitions should I create? What should my DOP be? In a shared nothing system the answer is of course: as many partitions as there are nodes, which will be your DOP. In Oracle we do want you to look at the workload and concurrency, and once you know that, to understand the following rules of thumb. Within Oracle we have more ways of joining data, so it is important to understand some of the PWJ ideas and what it means if you have an uneven distribution across processes.

    Assume we have a simple scenario where we partition the data on a hash key, resulting in 4 hash partitions (H1 - H4). We have 2 parallel processes that have been tasked with reading these partitions (P1 - P2). The work is evenly divided assuming the partitions are the same size, and we can scan this in time t1. Now assume that we have changed the system and have a 5th partition but still have our 2 workers P1 and P2. The time it takes is actually 50% more, assuming the 5th partition has the same size as the original H1 - H4 partitions. In other words, to scan these 5 partitions the time t2 it takes is not 1/5th more expensive, it is a lot more expensive, and some other join plans may now start to look exciting to the optimizer. Just to post the disclaimer, it is not as simple as I state it here, but you get the idea on how much more expensive this plan may now look...

    Based on this little example there are a few rules of thumb to follow to get the partition wise joins. First, choose a DOP that is a power of two (2). So always choose something like 2, 4, 8, 16, 32 and so on... Second, choose a number of partitions that is larger than or equal to 2 * DOP. Third, make sure the number of partitions is divisible by 2 without leftovers. This is also known as an even number... Fourth, choose a stable partition count strategy, which is typically hash, which can be a sub partitioning strategy rather than the main strategy (range - hash is a popular one).
Fifth, make sure you do this on the join key between the two large tables you want to join (and this should be the obvious one...). Translating this into an example: DOP = 8 (determined based on concurrency or by using Auto DOP with a cap due to concurrency) says that the number of partitions >= 16. Number of hash (sub) partitions = 32, which gives each process four partitions to work on. This number is somewhat arbitrary and depends on your data and system. In this case my main reasoning is that if you get more room on the box you can easily move the DOP for the query to 16 without repartitioning... and of course it makes for no leftovers on the table... And yes, we recommend up-to-date statistics. And before you start complaining, do read this post on a cool way to do stats in 11.
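    To make the example above concrete, here is a minimal SQL sketch (the table, column, and degree values are hypothetical, not taken from the original post) of two tables hash partitioned 32 ways on the join key and joined at DOP 8, so that a full partition-wise join is possible:

        -- Both tables hash partitioned 32 ways on the join key (cust_id).
        CREATE TABLE customers (
          cust_id NUMBER NOT NULL,
          name    VARCHAR2(100)
        )
        PARTITION BY HASH (cust_id) PARTITIONS 32;

        CREATE TABLE sales (
          cust_id NUMBER NOT NULL,
          amount  NUMBER
        )
        PARTITION BY HASH (cust_id) PARTITIONS 32;

        -- DOP 8 divides the 32 matching partition pairs evenly: 4 pairs per process.
        ALTER TABLE customers PARALLEL 8;
        ALTER TABLE sales PARALLEL 8;

        SELECT c.name, SUM(s.amount)
        FROM   customers c JOIN sales s ON (c.cust_id = s.cust_id)
        GROUP  BY c.name;

    If the optimizer chooses the full partition-wise join you should see a PX PARTITION HASH step above the join in the plan instead of a PX SEND / PX RECEIVE redistribution; if not, revisit the rules of thumb above.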

    Read the article

  • Explaining Explain Plan Notes for Auto DOP

    - by jean-pierre.dijcks
    I've recently gotten some questions around "why do I not see a parallel plan" while Auto DOP is on (I think)...? It is probably worthwhile to quickly go over some of the ways to find out what Auto DOP was thinking. In general, there is no need to go tracing sessions and look under the hood. The thing to start with is to do an explain plan on your statement and to look at the parameter settings on the system.

    Parameter Settings to Look At

    First and foremost, make sure that parallel_degree_policy = AUTO. If you have that parameter set to LIMITED you will not have queuing and we will only do the auto magic if your objects are set to default parallel (so no degree specified). Next you want to look at the value of parallel_degree_limit. It is typically set to CPU, which in default settings equates to the Default DOP of the system. If you are testing Auto DOP itself and the impact it has on performance you may want to leave it at this CPU setting. If you are running concurrent statements you may want to give this some more thought. See here for more information. In general, do stick with either CPU or with a specific number. For now avoid the IO setting, as I've seen some mixed results with that...

    In 11.2.0.2 you should also check that IO Calibrate has been run. Best to simply do a:

        SQL> select * from V$IO_CALIBRATION_STATUS;

        STATUS        CALIBRATION_TIME
        ------------- ----------------------------------------------------------------
        READY         04-JAN-11 10.04.13.104 AM

    You should see that your IO Calibrate is READY and therefore Auto DOP is ready. In any case, if you did not run the IO Calibrate step you will get the following note in the explain plan:

        Note
        -----
           - automatic DOP: skipped because of IO calibrate statistics are missing

    One more note on calibrate_io: if you do not have asynchronous IO enabled you will see:

        ERROR at line 1:
        ORA-56708: Could not find any datafiles with asynchronous i/o capability
        ORA-06512: at "SYS.DBMS_RMIN", line 463
        ORA-06512: at "SYS.DBMS_RESOURCE_MANAGER", line 1296
        ORA-06512: at line 7

    While this is changed in some fixes to the calibrate procedure, you should really consider switching asynchronous IO on for your data warehouse.

    Explain Plan Explanation

    To see the notes that are shown and explained here (and the above little snippet) you can use a simple explain plan mechanism. There should be no need to add +parallel etc.

        explain plan for <statement>
        SELECT PLAN_TABLE_OUTPUT FROM TABLE(DBMS_XPLAN.DISPLAY());

    Auto DOP

    The note structure displaying why Auto DOP did not work (with the exception noted above on IO Calibrate) is like this:

        Automatic degree of parallelism is disabled: <reason>

    These are the reason codes:

    Parameter - parallel_degree_policy = manual, which will not allow Auto DOP to kick in
    Hint - One of the following hints is used: NOPARALLEL, PARALLEL(1), PARALLEL(MANUAL)
    Outline - A SQL outline of an older version (before 11.2) is used
    SQL property restriction - The statement type does not allow for parallel processing
    Rule-based mode - Instead of the Cost Based Optimizer the system is using the RBO
    Recursive SQL statement - The statement type does not allow for parallel processing
    pq disabled/pdml disabled/pddl disabled - For some reason (alter session?) parallelism is disabled
    Limited mode but no parallel objects referenced - Your parallel_degree_policy = LIMITED and no objects in the statement are decorated with the default PARALLEL degree. In most cases all objects have a specific degree, in which case Auto DOP will honor that degree.

    Parallel Degree Limited

    When Auto DOP does its work you may see the cap you imposed with parallel_degree_limit showing up in the note section of the explain plan:

        Note
        -----
           - automatic DOP: Computed Degree of Parallelism is 16 because of degree limit

    This is an obvious indication that you are being capped for this statement. There is one quite interesting case that happens when you are being capped at DOP = 1. First off, you get a serial plan, and the note changes slightly in that it does not indicate it is being capped (we hope to update the note at some point in time to be more specific). It right now looks like this:

        Note
        -----
           - automatic DOP: Computed Degree of Parallelism is 1

    Dynamic Sampling

    With 11.2.0.2 you will start seeing another interesting change in parallel plans, and since we are talking about the note section here, I figured we'd throw this in for good measure. If we deem the parallel (!) statement complex enough, we will enact dynamic sampling on your query. This happens as long as you did not change the default for dynamic sampling on the system. The note looks like this:

        Note
        -----
        - dynamic sampling used for this statement (level=5)
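    As a footnote to the IO Calibrate discussion above, here is a minimal sketch of checking the relevant parameters and running the calibration yourself (the disk count is a hypothetical value for illustration; verify the exact DBMS_RESOURCE_MANAGER.CALIBRATE_IO signature and defaults for your release):

        -- Check the Auto DOP parameters first.
        SHOW PARAMETER parallel_degree_policy
        SHOW PARAMETER parallel_degree_limit

        -- Run I/O calibration; it needs asynchronous I/O and a reasonably quiet system.
        SET SERVEROUTPUT ON
        DECLARE
          l_max_iops PLS_INTEGER;
          l_max_mbps PLS_INTEGER;
          l_latency  PLS_INTEGER;
        BEGIN
          DBMS_RESOURCE_MANAGER.CALIBRATE_IO(
            num_physical_disks => 10,   -- hypothetical disk count, use your own
            max_latency        => 20,
            max_iops           => l_max_iops,
            max_mbps           => l_max_mbps,
            actual_latency     => l_latency);
          DBMS_OUTPUT.PUT_LINE('max_iops=' || l_max_iops || ' max_mbps=' || l_max_mbps);
        END;
        /

        -- Afterwards the status should read READY.
        select * from V$IO_CALIBRATION_STATUS;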

    Read the article

  • Libnoise producing completely random noise

    - by Doodlemeat
    I am using libnoise in C++ and I have some problems with getting coherent noise. I mean, the noise produced now is completely random and it doesn't look like coherent noise at all. Here's a link to the image produced by my game. I am dividing the map into several chunks, but I can't seem to find any problem doing that, since libnoise supports tileable noise. The code can be found below. Every chunk is 8x8 tiles large. Every tile is 64x64 pixels. I am also providing a link to download the entire project. It was made in Visual Studio 2013. Download link

    This is the code for generating a chunk:

        Chunk *World::loadChunk(sf::Vector2i pPosition)
        {
            sf::Vector2i chunkPos = pPosition;
            pPosition.x *= mChunkTileSize.x;
            pPosition.y *= mChunkTileSize.y;
            sf::FloatRect bounds(static_cast<sf::Vector2f>(pPosition),
                                 sf::Vector2f(static_cast<float>(mChunkTileSize.x), static_cast<float>(mChunkTileSize.y)));

            utils::NoiseMap heightMap;
            utils::NoiseMapBuilderPlane heightMapBuilder;
            heightMapBuilder.SetSourceModule(mNoiseModule);
            heightMapBuilder.SetDestNoiseMap(heightMap);
            heightMapBuilder.SetDestSize(mChunkTileSize.x, mChunkTileSize.y);
            heightMapBuilder.SetBounds(bounds.left, bounds.left + bounds.width - 1, bounds.top, bounds.top + bounds.height - 1);
            heightMapBuilder.Build();

            Chunk *chunk = new Chunk(this);
            chunk->setPosition(chunkPos);
            chunk->buildChunk(&heightMap);
            chunk->setTexture(&mTileset);
            mChunks.push_back(chunk);
            return chunk;
        }

    This is the code for building the chunk:

        void Chunk::buildChunk(utils::NoiseMap *pHeightMap)
        {
            // Resize the tiles space
            mTiles.resize(pHeightMap->GetWidth());
            for (int x = 0; x < mTiles.size(); x++)
            {
                mTiles[x].resize(pHeightMap->GetHeight());
            }

            // Set vertices type and size
            mVertices.setPrimitiveType(sf::Quads);
            mVertices.resize(pHeightMap->GetWidth() * pHeightMap->GetWidth() * 4);

            // Get the offset position of all tiles position
            sf::Vector2i tileSize = mWorld->getTileSize();
            sf::Vector2i chunkSize = mWorld->getChunkSize();
            sf::Vector2f offsetPositon = sf::Vector2f(mPosition);
            offsetPositon.x *= chunkSize.x;
            offsetPositon.y *= chunkSize.y;

            // Build tiles
            for (int x = 0; x < mTiles.size(); x++)
            {
                for (int y = 0; y < mTiles[x].size(); y++)
                {
                    // Sometimes libnoise can return a value over 1.0, better be sure to cap the top and bottom..
                    float heightValue = pHeightMap->GetValue(x, y);
                    if (heightValue > 1.f) heightValue = 1.f;
                    if (heightValue < -1.f) heightValue = -1.f;

                    // Instantiate a new Tile object with the noise value, this doesn't do anything yet..
                    mTiles[x][y] = new Tile(this, pHeightMap->GetValue(x, y));

                    // Get a pointer to the current tile's quad
                    sf::Vertex *quad = &mVertices[(y + x * pHeightMap->GetWidth()) * 4];
                    quad[0].position = sf::Vector2f(offsetPositon.x + x * tileSize.x, offsetPositon.y + y * tileSize.y);
                    quad[1].position = sf::Vector2f(offsetPositon.x + (x + 1) * tileSize.x, offsetPositon.y + y * tileSize.y);
                    quad[2].position = sf::Vector2f(offsetPositon.x + (x + 1) * tileSize.x, offsetPositon.y + (y + 1) * tileSize.y);
                    quad[3].position = sf::Vector2f(offsetPositon.x + x * tileSize.x, offsetPositon.y + (y + 1) * tileSize.y);

                    // find out which type of tile to render, atm only air or stone
                    TileStop *tilestop = mWorld->getTileStopAt(heightValue);
                    sf::Vector2i texturePos = tilestop->getTexturePosition();

                    // define its 4 texture coordinates
                    quad[0].texCoords = sf::Vector2f(texturePos.x, texturePos.y);
                    quad[1].texCoords = sf::Vector2f(texturePos.x + 64, texturePos.y);
                    quad[2].texCoords = sf::Vector2f(texturePos.x + 64, texturePos.y + 64);
                    quad[3].texCoords = sf::Vector2f(texturePos.x, texturePos.y + 64);
                }
            }
        }

    All the code that uses libnoise in some way is in World.cpp, World.h and Chunk.cpp, Chunk.h in the project.

    Read the article

  • A Patent for Workload Management Based on Service Level Objectives

    - by jsavit
    I'm very pleased to announce that after a tiny :-) wait of about 5 years, my patent application for a workload manager was finally approved.

    Background

    Many operating systems have a resource manager which lets you control machine resources. For example, Solaris provides controls for CPU with several options: shares for proportional CPU allocation (if you have twice as many shares as me, and we are competing for CPU, you'll get about twice as many CPU cycles), dedicated CPU allocation in which a number of CPUs are exclusively dedicated to an application's use (you can say that a zone or project "owns" 8 CPUs on a 32 CPU machine, for example), and capped CPU in which you specify the upper bound, or cap, of how much CPU an application gets. For example, you can throttle an application to 0.125 of a CPU. (This isn't meant to be an exhaustive list of Solaris RM controls.)

    Workload management

    Useful as that is (and tragic that some other operating systems have little resource management and isolation, and frighten people into running only 1 app per OS instance - and wastefully size every server for the peak workload it might experience), that's not really workload management. With resource management one controls the resources, and hopes that's enough to meet application service objectives. In fact, we hold resource distribution constant, see if that was good enough, and adjust resource distribution if that didn't meet service level objectives. Here's an example of what happens today: Let's try 30% dedicated CPU. Not enough? Let's try 80%. Oh, that's too much, and we're achieving much better response time than the objective, but other workloads are starving. Let's back that off and try again. It's not the process I object to - it's that we too often do this manually. Worse, we sometimes identify and adjust the wrong resource and fiddle with that to no useful result. Back in my days as a customer managing large systems, one of my users would call me up to beg for a "CPU boost": Me: "it won't make any difference - there's plenty of spare CPU to be had, and your application is completely I/O bound." User: "Please do it anyway." Me: "oh, all right, but it won't do you any good." (I did, because he was a friend, but it didn't help.)

    Prior art

    There are some operating environments that take a stab at workload management (rather than resource management) but I find them lacking. I know of one that uses synthetic "service units" composed of the sum of CPU, I/O and memory allocations multiplied by weighting factors. A workload is set to make a target rate of service units consumed per second. But this seems to be missing a key point: what is the relationship between artificial 'service units' and actually meeting a throughput or response time objective? What if I get plenty of one of the components (so am getting enough service units), but not enough of the resource that's needed to remove the bottleneck?

    Actual workload management

    That's not really the answer either. What is needed is to specify a workload's service levels in terms of externally visible metrics that are meaningful to a business, such as response times or transactions per second, and have the workload manager figure out which resources are not being adequately provided, and then adjust them as needed. If an application is not meeting its service level objectives and the reason is that it's not getting enough CPU cycles, adjust its CPU resource accordingly. If the reason is that the application isn't getting enough RAM to keep its working set in memory, then adjust its RAM assignment appropriately so it stops swapping. Simple idea, but that's a task we keep dumping on system administrators. In other words - don't hold the number of CPU shares constant and watch the achievement of service level vary. Instead, hold the service level constant, and dynamically adjust the number of CPU shares (or the amount of other resources like RAM or I/O bandwidth) in order to meet the objective.

    Instrumenting non-instrumented applications

    There's one little problem here: how do I measure application performance in a way that relates to a service level? I don't want to do it based on internal resources like the number of CPU seconds it received per minute - we need to make resource decisions based on externally visible and meaningful measures of performance, not synthetic items or internal resource counters. If I have a way of marking the beginning and end of a transaction, I can then measure whether or not the application is meeting an objective based on it. If I can observe the delay factors for an application, I can see which resource shortages are slowing an application enough to keep it from meeting its objectives. I can then adjust resource allocations to relieve those shortages. Fortunately, Solaris provides facilities for both marking application progress and determining what factors cause application latency. The Solaris DTrace facility lets me introspect on application behavior: in particular I can see events like "receive a web hit" and "respond to that web hit" so I can get transaction rate and response time. DTrace (and tools like prstat) let me see where latency is being added to an application, so I know which resource to adjust.

    Summary

    After a delay of a mere few years, I am the proud creator of a patent (advice to anyone interested in going through the process: don't hold your breath!). The fundamental idea is fairly simple: instead of holding resources constant and suffering variable levels of success meeting service level objectives, properly characterise the service level objective in meaningful terms, instrument the application to see if it's meeting the objective, and then have a workload manager change resource allocations to remove delays preventing service level attainment. I've done it by hand for a long time - I think that's what a computer should do for me.

    Read the article

  • Star Rating widget for jQuery UI

    - by Roger
    I was introduced to the Star Rating widget for jQuery UI: http://orkans-tmp.22web.net/star_rating/ I was originally using this one: http://www.fyneworks.com/jquery/star-rating/ Is there any difference between using the two? Trying to use the jQuery UI one, I can't get the input buttons to show up as stars. I have these js and css files included:

        <script src="jquery-1.3.2.min.js" type="text/javascript"></script>
        <script src="jquery-ui-1.7.2.custom.min.js" type="text/javascript"></script>
        <script src="ui.stars.js" type="text/javascript"></script>
        <link href="ui.stars.css" type="text/css" rel="stylesheet" />

    And for my code, simply:

        <form>
          Rating: <span id="stars-cap"></span>
          <div id="stars-wrapper1">
            <input type="radio" name="newrate" value="1" title="Very poor" />
            <input type="radio" name="newrate" value="2" title="Poor" />
            <input type="radio" name="newrate" value="3" title="Not that bad" />
            <input type="radio" name="newrate" value="4" title="Fair" />
            <input type="radio" name="newrate" value="5" title="Average" checked="checked" />
            <input type="radio" name="newrate" value="6" title="Almost good" />
            <input type="radio" name="newrate" value="7" title="Good" />
            <input type="radio" name="newrate" value="8" title="Very good" />
            <input type="radio" name="newrate" value="9" title="Excellent" />
            <input type="radio" name="newrate" value="10" title="Perfect" />
          </div>
        </form>

    All the files and images are in the same folder. I've never used jQuery UI before, so I'm not sure if that file is all I need. I'm not sure if the widget even needs it; the other star rating plugin I was using didn't require it. Does anyone know what I'm missing?

    Read the article

  • .net List<string>.Remove bug Should I submit a MS Connect bug report on this?

    - by Dean Lunz
    So I was beating my head against a wall for a while before it dawned on me. I have some code that saves a list of names into a text file, a la ...

        System.IO.File.WriteAllLines(dlg.FileName, this.characterNameMasterList.Distinct().ToArray());

    The character names can contain special characters. These names come from the wow armory at www.wowarmory.com There are about 26000 names or so saved in the .txt file. The names get saved to the .txt file just fine. I wrote another application that reads these names from that .txt file using this code:

        // download the names from the db
        var webNames = this.DownloadNames("character");
        // filter names and get ones that need to be added to the db
        var localNames = new List<string>(System.IO.File.ReadAllLines(dlg.FileName));
        foreach (var name in webNames)
        {
            if (localNames.Contains(name.Trim()))
                localNames.Remove(name);
        }
        return localNames;

    ... the code downloads a list of names from my website that are already in the db. Then it reads the local .txt file and singles out every name that is not yet in the db so it can later add it. The names that get read from the .txt file also get read just fine with no problems. The problem comes in when removing names from the localNames list. localNames is a List type. As soon as localNames.Remove(name) gets called, any names in the list that had special characters in them get corrupted and converted into ? characters. See this screen cap: http://yfrog.com/12badcharsp

    So I tried doing it another way using ...

        // download the names from web that are already in the db
        var webNames = this.DownloadNames("character");
        // filter names and get ones that need to be added to the db
        var localNames = new List<string>(System.IO.File.ReadAllLines(dlg.FileName));
        int index = 0;
        while (index < webNames.Count)
        {
            var name = webNames[index++];
            var pos = localNames.IndexOf(name.Trim());
            if (pos != -1)
                localNames.RemoveAt(pos);
        }
        return localNames;

    But using localNames.RemoveAt also corrupts the items in the list, converting special characters into ?. So is this a known bug with the List.Remove methods? Does anyone know? Has anyone else had this problem? I also used .NET Reflector to disassemble/inspect the List.Remove and List.RemoveAt code and it appears to be calling some external Copy function. Aside from the fact that this is probably not the best way to get a unique list of items from 2 lists, am I missing something I should be aware of when using the List.Remove methods? I am running Windows 7, VS2010, and my app is set for .NET 4 (no client profile).

    Read the article

  • Choosing a distributed shared memory solution

    - by mindas
    I have a task to build a prototype for a massively scalable distributed shared memory (DSM) app. The prototype would only serve as a proof-of-concept, but I want to spend my time most effectively by picking the components which would be used in the real solution later on. The aim of this solution is to take data input from an external source, churn it and make the result available for a number of frontends. Those "frontends" would just take the data from the cache and serve it without extra processing. The number of frontend hits on this data can literally be millions per second. The data itself is very volatile; it can (and does) change quite rapidly. However the frontends should see "old" data until the newest has been processed and cached. The processing and writing is done by a single (redundant) node while other nodes only read the data. In other words: no read-through behaviour. I was looking into solutions like memcached; however, this particular one doesn't fulfil all our requirements, which are listed below:

    - The solution must at least have a Java client API which is reasonably well maintained, as the rest of the app is written in Java and we are seasoned Java developers;
    - The solution must be totally elastic: it should be possible to add new nodes without restarting other nodes in the cluster;
    - The solution must be able to handle failover. Yes, I realize this means some overhead, but the overall served data size isn't big (1G max) so this shouldn't be a problem. By "failover" I mean seamless execution without hardcoding/changing server IP address(es) like in memcached clients when a node goes down;
    - Ideally it should be possible to specify the degree of data overlapping (e.g. how many copies of the same data should be stored in the DSM cluster);
    - There is no need to permanently store all the data but there might be a need for post-processing of some of the data (e.g. serialization to the DB);
    - Price. Obviously we prefer free/open source, but we're happy to pay a reasonable amount if a solution is worth it. Either way, a paid 24hr/day support contract is a must;
    - The whole thing has to be hosted in our data centers, so SaaS offerings like Amazon SimpleDB are out of scope. We would only consider this if no other options were available;
    - Ideally the solution would be strictly consistent (as in CAP); however, eventual consistency can be considered as an option.

    Thanks in advance for any ideas.

    Read the article

< Previous Page | 142 143 144 145 146 147 148 149 150 151 152 153  | Next Page >