Search Results

Search found 5655 results on 227 pages for 'stl algorithm'.

  • Constrained A* problem

    - by Ragekit
    I've got a little problem with an A* algorithm that I need to constrain a bit. Basically, I use A* to find the shortest path between two randomly placed rooms in 3D space, and then build a corridor between them. The problem I found is that it sometimes makes chimney-like corridors that are not ideal, so I constrain the A* so that if the last movement was up or down, the next one must be sideways. Everything is fine, but in some corner cases it fails to find a path (when there obviously is one), like here between the blue and red dot. (I'm in Unity, by the way, but I don't think it matters.) Here is the code of the actual A* (a bit long, with some redundancy):

        while(current != goal) {
            //add stair up / stair down
            foreach(Node<GridUnit> test in current.Neighbors) {
                if(!test.Data.empty && test != goal)
                    continue;
                //bug at arrival
                if(test == goal && penul != null) {
                    Vector3 currentDiff = current.Data.bounds.center - test.Data.bounds.center;
                    if(!Mathf.Approximately(currentDiff.y, 0)) {
                        //wanna drop on the last
                        if(!coplanar(test.Data.bounds.center, current.Data.bounds.center, current.Data.parentUnit.bounds.center, to.Data.bounds.center)) {
                            continue;
                        } else {
                            if(Mathf.Approximately(to.Data.bounds.center.x, current.Data.parentUnit.bounds.center.x) &&
                               Mathf.Approximately(to.Data.bounds.center.z, current.Data.parentUnit.bounds.center.z)) {
                                continue;
                            }
                        }
                    }
                }
                if(current.Data.parentUnit != null) {
                    Vector3 previousDiff = current.Data.parentUnit.bounds.center - current.Data.bounds.center;
                    Vector3 currentDiff = current.Data.bounds.center - test.Data.bounds.center;
                    if(!Mathf.Approximately(previousDiff.y, 0)) {
                        if(!Mathf.Approximately(currentDiff.y, 0)) {
                            //you wanna drop now:
                            continue;
                        }
                        if(current.Data.parentUnit.parentUnit != null) {
                            if(!coplanar(test.Data.bounds.center, current.Data.bounds.center, current.Data.parentUnit.bounds.center, current.Data.parentUnit.parentUnit.bounds.center)) {
                                continue;
                            } else {
                                if(Mathf.Approximately(test.Data.bounds.center.x, current.Data.parentUnit.parentUnit.bounds.center.x) &&
                                   Mathf.Approximately(test.Data.bounds.center.z, current.Data.parentUnit.parentUnit.bounds.center.z)) {
                                    continue;
                                }
                            }
                        }
                    }
                }
                g = current.Data.g + HEURISTIC(current.Data, test.Data);
                h = HEURISTIC(test.Data, goal.Data);
                f = g + h;
                if(open.Contains(test) || closed.Contains(test)) {
                    if(test.Data.f > f) {
                        //found a shorter path passing through that point
                        test.Data.f = f;
                        test.Data.g = g;
                        test.Data.h = h;
                        test.Data.parentUnit = current.Data;
                    }
                } else {
                    //never encountered before
                    test.Data.f = f;
                    test.Data.h = h;
                    test.Data.g = g;
                    test.Data.parentUnit = current.Data;
                    open.Add(test);
                }
            }
            closed.Add(current);
            if(open.Count == 0) {
                Debug.Log("nothing found");
                //nothing more to test, no path found, stay at "from"
                List<GridUnit> r = new List<GridUnit>();
                r.Add(from.Data);
                return r;
            }
            //sort open from smallest to biggest travel cost
            open.Sort(delegate(Node<GridUnit> x, Node<GridUnit> y) {
                return (int)(x.Data.f - y.Data.f);
            });
            //get the smallest travel cost node
            Node<GridUnit> smallest = open[0];
            current = smallest;
            open.RemoveAt(0);
        }
        //build the path going backward
        List<GridUnit> ret = new List<GridUnit>();
        if(penul != null) {
            ret.Insert(0, to.Data);
        }
        GridUnit cur = goal.Data;
        ret.Insert(0, cur);
        do {
            cur = cur.parentUnit;
            ret.Insert(0, cur);
        } while(cur != from.Data);
        return ret;

    You can see at the start of the foreach where I constrain the A* as described. If you have any insight it would be great. Thanks
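
    A standard way out of this class of corner case is to put the constraint into the search state instead of filtering neighbors: search over (cell, direction-you-arrived-by) pairs, so a cell rejected from one approach direction can still be reached from another. A minimal sketch in Python rather than C# (the grid connectivity, passable test, and unit step costs are placeholder assumptions, not the asker's Unity types):

        import heapq
        from itertools import count

        def astar(start, goal, passable, heuristic):
            """A* over (cell, arrival-direction) states on a 6-connected 3D grid."""
            moves = [(1,0,0), (-1,0,0), (0,0,1), (0,0,-1), (0,1,0), (0,-1,0)]
            tie = count()                       # heap tie-breaker, avoids comparing states
            start_state = (start, None)
            open_heap = [(heuristic(start, goal), next(tie), 0, start_state)]
            g_score = {start_state: 0}
            parent = {start_state: None}
            while open_heap:
                _, _, g, state = heapq.heappop(open_heap)
                cell, last = state
                if cell == goal:
                    path = []
                    while state is not None:    # walk parents back to the start
                        path.append(state[0])
                        state = parent[state]
                    return path[::-1]
                for d in moves:
                    # the "no chimneys" rule: after a vertical step, go sideways
                    if last is not None and last[1] != 0 and d[1] != 0:
                        continue
                    nxt = (cell[0] + d[0], cell[1] + d[1], cell[2] + d[2])
                    if not passable(nxt):
                        continue
                    ns = (nxt, d)
                    ng = g + 1
                    if ng < g_score.get(ns, float("inf")):
                        g_score[ns] = ng
                        parent[ns] = state
                        heapq.heappush(open_heap,
                                       (ng + heuristic(nxt, goal), next(tie), ng, ns))
            return None  # truly no path under the constraint

    Because the forbidden move depends on the arrival direction, the same cell can legitimately be expanded twice (once reached vertically, once horizontally), which is exactly what these corner cases need.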

  • Compiling OpenCV in Android NDK

    - by evident
    PLEASE SEE THE ADDITIONS AT THE BOTTOM! The first problem is solved in Linux, not under Windows and Cygwin yet, but there is a new problem. Please see below! I am currently trying to compile OpenCV for Android NDK so that I can use it in my apps. For this I tried to follow this guide: http://www.stanford.edu/~zxwang/android_opencv.html But when compiling the downloaded stuff with ndk-build I get this error: $ /cygdrive/u/flori/workspace/android-ndk-r5b/ndk-build Compile++ thumb : opencv <= cvjni.cpp Compile++ thumb : cxcore <= cxalloc.cpp Compile++ thumb : cxcore <= cxarithm.cpp Compile++ thumb : cxcore <= cxarray.cpp Compile++ thumb : cxcore <= cxcmp.cpp Compile++ thumb : cxcore <= cxconvert.cpp Compile++ thumb : cxcore <= cxcopy.cpp Compile++ thumb : cxcore <= cxdatastructs.cpp Compile++ thumb : cxcore <= cxdrawing.cpp Compile++ thumb : cxcore <= cxdxt.cpp Compile++ thumb : cxcore <= cxerror.cpp Compile++ thumb : cxcore <= cximage.cpp Compile++ thumb : cxcore <= cxjacobieigens.cpp Compile++ thumb : cxcore <= cxlogic.cpp Compile++ thumb : cxcore <= cxlut.cpp Compile++ thumb : cxcore <= cxmathfuncs.cpp Compile++ thumb : cxcore <= cxmatmul.cpp Compile++ thumb : cxcore <= cxmatrix.cpp Compile++ thumb : cxcore <= cxmean.cpp Compile++ thumb : cxcore <= cxmeansdv.cpp Compile++ thumb : cxcore <= cxminmaxloc.cpp Compile++ thumb : cxcore <= cxnorm.cpp Compile++ thumb : cxcore <= cxouttext.cpp Compile++ thumb : cxcore <= cxpersistence.cpp Compile++ thumb : cxcore <= cxprecomp.cpp Compile++ thumb : cxcore <= cxrand.cpp Compile++ thumb : cxcore <= cxsumpixels.cpp Compile++ thumb : cxcore <= cxsvd.cpp Compile++ thumb : cxcore <= cxswitcher.cpp Compile++ thumb : cxcore <= cxtables.cpp Compile++ thumb : cxcore <= cxutils.cpp StaticLibrary : libstdc++.a StaticLibrary : libcxcore.a Compile++ thumb : cv <= cvaccum.cpp Compile++ thumb : cv <= cvadapthresh.cpp Compile++ thumb : cv <= cvapprox.cpp Compile++ thumb : cv <= cvcalccontrasthistogram.cpp Compile++ thumb : cv <= cvcalcimagehomography.cpp Compile++ thumb : cv <= cvcalibinit.cpp Compile++ thumb : cv <= cvcalibration.cpp Compile++ thumb : cv <= cvcamshift.cpp Compile++ thumb : cv <= cvcanny.cpp Compile++ thumb : cv <= cvcolor.cpp Compile++ thumb : cv <= cvcondens.cpp Compile++ thumb : cv <= cvcontours.cpp Compile++ thumb : cv <= cvcontourtree.cpp Compile++ thumb : cv <= cvconvhull.cpp Compile++ thumb : cv <= cvcorner.cpp Compile++ thumb : cv <= cvcornersubpix.cpp Compile++ thumb : cv <= cvderiv.cpp Compile++ thumb : cv <= cvdistransform.cpp Compile++ thumb : cv <= cvdominants.cpp Compile++ thumb : cv <= cvemd.cpp Compile++ thumb : cv <= cvfeatureselect.cpp Compile++ thumb : cv <= cvfilter.cpp Compile++ thumb : cv <= cvfloodfill.cpp Compile++ thumb : cv <= cvfundam.cpp Compile++ thumb : cv <= cvgeometry.cpp Compile++ thumb : cv <= cvhaar.cpp Compile++ thumb : cv <= cvhistogram.cpp Compile++ thumb : cv <= cvhough.cpp Compile++ thumb : cv <= cvimgwarp.cpp Compile++ thumb : cv <= cvinpaint.cpp Compile++ thumb : cv <= cvkalman.cpp Compile++ thumb : cv <= cvlinefit.cpp Compile++ thumb : cv <= cvlkpyramid.cpp Compile++ thumb : cv <= cvmatchcontours.cpp Compile++ thumb : cv <= cvmoments.cpp Compile++ thumb : cv <= cvmorph.cpp Compile++ thumb : cv <= cvmotempl.cpp Compile++ thumb : cv <= cvoptflowbm.cpp Compile++ thumb : cv <= cvoptflowhs.cpp Compile++ thumb : cv <= cvoptflowlk.cpp Compile++ thumb : cv <= cvpgh.cpp Compile++ thumb : cv <= cvposit.cpp Compile++ thumb : cv <= cvprecomp.cpp Compile++ thumb : cv <= cvpyramids.cpp Compile++ thumb : cv <= 
cvpyrsegmentation.cpp Compile++ thumb : cv <= cvrotcalipers.cpp Compile++ thumb : cv <= cvsamplers.cpp Compile++ thumb : cv <= cvsegmentation.cpp Compile++ thumb : cv <= cvshapedescr.cpp Compile++ thumb : cv <= cvsmooth.cpp Compile++ thumb : cv <= cvsnakes.cpp Compile++ thumb : cv <= cvstereobm.cpp Compile++ thumb : cv <= cvstereogc.cpp Compile++ thumb : cv <= cvsubdivision2d.cpp Compile++ thumb : cv <= cvsumpixels.cpp Compile++ thumb : cv <= cvsurf.cpp Compile++ thumb : cv <= cvswitcher.cpp Compile++ thumb : cv <= cvtables.cpp Compile++ thumb : cv <= cvtemplmatch.cpp Compile++ thumb : cv <= cvthresh.cpp Compile++ thumb : cv <= cvundistort.cpp Compile++ thumb : cv <= cvutils.cpp StaticLibrary : libcv.a SharedLibrary : libopencv.so U:/flori/workspace/android-ndk-r5b/toolchains/arm-linux-androideabi-4.4.3/prebui lt/windows/bin/../lib/gcc/arm-linux-androideabi/4.4.3/../../../../arm-linux-andr oideabi/bin/ld.exe: cannot find -lcxcore collect2: ld returned 1 exit status make: *** [/cygdrive/u/flori/workspace/android/testOpenCV/obj/local/armeabi/libo pencv.so] Error 1 I am trying to compile it on a Windows system and with the newest NDK version... Does anybody have an idea what this linking error means and what I can to to have it work again? Would be great if anybody could help After getting the problem to work I found that there is another way of compiling OpenCV for Android, using the current version of OpenCV (instead of the 1.1 one from above) and the modified Android NDK from crystax, which supports STL and exceptions and therefore supports the newest OpenCV Version. All information on that can be found here: http://opencv.willowgarage.com/wiki/Android There it says to download the current svn trunk and the crystax-r4 android-ndk, as well as swig, which I did. I entered the folder, created the build directory, ran cmake and then built the static libs, which seemed to work. At least it successfully ran the make-command without errors. I now wanted to build the shared libraries so I entered the android-jni folder and ran 'make' again, but got this error: % make -j4 OPENCV_CONFIG = ../build/android-opencv.mk make clean-swig &&\ mkdir -p jni/gen &&\ mkdir -p src/com/opencv/jni &&\ swig -java -c++ -package "com.opencv.jni" \ -outdir src/com/opencv/jni \ -o jni/gen/android_cv_wrap.cpp jni/android-cv.i OPENCV_CONFIG = ../build/android-opencv.mk make[1]: Entering directory `/home/florian/android-opencv-willowgarage/android/android-jni' make[1]: warning: jobserver unavailable: using -j1. Add `+' to parent make rule. 
rm -f jni/gen/android_cv_wrap.cpp make[1]: Leaving directory `/home/florian/android-opencv-willowgarage/android/android-jni' /home/florian/android-ndk-r4-crystax/ndk-build OPENCV_CONFIG=../build/android-opencv.mk \ PROJECT_PATH= ARM_TARGETS="armeabi armeabi-v7a" V= /home/florian/android-ndk-r4-crystax/ndk-build OPENCV_CONFIG=../build/android-opencv.mk \ PROJECT_PATH= ARM_TARGETS="armeabi armeabi-v7a" V= make[1]: Entering directory `/home/florian/android-opencv-willowgarage/android/android-jni' /home/florian/android-opencv-willowgarage/android/android-jni/jni/Android.mk:10: ../build/android-opencv.mk: No such file or directory make[1]: Entering directory `/home/florian/android-opencv-willowgarage/android/android-jni' /home/florian/android-opencv-willowgarage/android/android-jni/jni/Android.mk:10: ../build/android-opencv.mk: No such file or directory /home/florian/android-opencv-willowgarage/android/android-jni/jni/Android.mk:10: ../build/android-opencv.mk: No such file or directory make[1]: warning: jobserver unavailable: using -j1. Add `+' to parent make rule. /home/florian/android-opencv-willowgarage/android/android-jni/jni/Android.mk:10: ../build/android-opencv.mk: No such file or directory make[1]: *** No rule to make target `../build/android-opencv.mk'. Stop. make[1]: Leaving directory `/home/florian/android-opencv-willowgarage/android/android-jni' make: *** [libs/armeabi/libandroid-opencv.so] Error 2 make: *** Waiting for unfinished jobs.... make[1]: warning: jobserver unavailable: using -j1. Add `+' to parent make rule. make[1]: *** No rule to make target `../build/android-opencv.mk'. Stop. make[1]: Leaving directory `/home/florian/android-opencv-willowgarage/android/android-jni' make: *** [libs/armeabi-v7a/libandroid-opencv.so] Error 2 Does anybody have an idea what this means and what I can do to build the shared libraries? ... Ok after having a look at the error message it came to me that it seems to have something missing in the build directory... but there wasn't even a build directory in the android folder so I created one, ran 'cmake' in there and 'make' again but get this error: Compile thumb : opencv_lapack <= /home/florian/android-opencv-willowgarage/3rdparty/lapack/sgetrf.c Compile thumb : opencv_lapack <= /home/florian/android-opencv-willowgarage/3rdparty/lapack/scopy.c Compile++ thumb: opencv_core <= /home/florian/android-opencv-willowgarage/modules/core/src/matrix.cpp cc1plus: error: /home/florian/android-opencv-willowgarage/android/../modules/index.rst/include: Not a directory make[3]: *** [/home/florian/android-opencv-willowgarage/android/build/obj/local/armeabi/objs/opencv_core/src/matrix.o] Error 1 make[3]: *** Waiting for unfinished jobs.... make[2]: *** [android-opencv] Error 2 make[1]: *** [CMakeFiles/ndk.dir/all] Error 2 make: *** [all] Error 2 Anybody know what this means?

  • How do I stop and repair a RAID 5 array that has failed and has I/O pending?

    - by Ben Hymers
    The short version: I have a failed RAID 5 array which has a bunch of processes hung waiting on I/O operations on it; how can I recover from this? The long version: Yesterday I noticed Samba access was being very sporadic; accessing the server's shares from Windows would randomly lock up explorer completely after clicking on one or two directories. I assumed it was Windows being a pain and left it. Today the problem is the same, so I did a little digging; the first thing I noticed was that running ps aux | grep smbd gives a lot of lines like this: ben 969 0.0 0.2 96088 4128 ? D 18:21 0:00 smbd -F root 1708 0.0 0.2 93468 4748 ? Ss 18:44 0:00 smbd -F root 1711 0.0 0.0 93468 1364 ? S 18:44 0:00 smbd -F ben 3148 0.0 0.2 96052 4160 ? D Mar07 0:00 smbd -F ... There are a lot of processes stuck in the "D" state. Running ps aux | grep " D" shows up some other processes including my nightly backup script, all of which need to access the volume mounted on my RAID array at some point. After some googling, I found that it might be down to the RAID array failing, so I checked /proc/mdstat, which shows this: ben@jack:~$ cat /proc/mdstat Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10] md0 : active raid5 sdb1[3](F) sdc1[1] sdd1[2] 2930271872 blocks level 5, 64k chunk, algorithm 2 [3/2] [_UU] unused devices: <none> And running mdadm --detail /dev/md0 gives this: ben@jack:~$ sudo mdadm --detail /dev/md0 /dev/md0: Version : 00.90 Creation Time : Sat Oct 31 20:53:10 2009 Raid Level : raid5 Array Size : 2930271872 (2794.53 GiB 3000.60 GB) Used Dev Size : 1465135936 (1397.26 GiB 1500.30 GB) Raid Devices : 3 Total Devices : 3 Preferred Minor : 0 Persistence : Superblock is persistent Update Time : Mon Mar 7 03:06:35 2011 State : active, degraded Active Devices : 2 Working Devices : 2 Failed Devices : 1 Spare Devices : 0 Layout : left-symmetric Chunk Size : 64K UUID : f114711a:c770de54:c8276759:b34deaa0 Events : 0.208245 Number Major Minor RaidDevice State 3 8 17 0 faulty spare rebuilding /dev/sdb1 1 8 33 1 active sync /dev/sdc1 2 8 49 2 active sync /dev/sdd1 I believe this says that sdb1 has failed, and so the array is running with two drives out of three 'up'. Some advice I found said to check /var/log/messages for notices of failures, and sure enough there are plenty: ben@jack:~$ grep sdb /var/log/messages ... Mar 7 03:06:35 jack kernel: [4525155.384937] md/raid:md0: read error NOT corrected!! (sector 400644912 on sdb1). Mar 7 03:06:35 jack kernel: [4525155.389686] md/raid:md0: read error not correctable (sector 400644920 on sdb1). Mar 7 03:06:35 jack kernel: [4525155.389686] md/raid:md0: read error not correctable (sector 400644928 on sdb1). Mar 7 03:06:35 jack kernel: [4525155.389688] md/raid:md0: read error not correctable (sector 400644936 on sdb1). Mar 7 03:06:56 jack kernel: [4525176.231603] sd 0:0:1:0: [sdb] Unhandled sense code Mar 7 03:06:56 jack kernel: [4525176.231605] sd 0:0:1:0: [sdb] Result: hostbyte=DID_OK driverbyte=DRIVER_SENSE Mar 7 03:06:56 jack kernel: [4525176.231608] sd 0:0:1:0: [sdb] Sense Key : Medium Error [current] [descriptor] Mar 7 03:06:56 jack kernel: [4525176.231623] sd 0:0:1:0: [sdb] Add. Sense: Unrecovered read error - auto reallocate failed Mar 7 03:06:56 jack kernel: [4525176.231627] sd 0:0:1:0: [sdb] CDB: Read(10): 28 00 17 e1 5f bf 00 01 00 00 To me it is clear that device sdb has failed, and I need to stop the array, shutdown, replace it, reboot, then repair the array, bring it back up and mount the filesystem. 
I cannot hot-swap a replacement drive in, and don't want to leave the array running in a degraded state. I believe I am supposed to unmount the filesystem before stopping the array, but that is failing, and that is where I'm stuck now: ben@jack:~$ sudo umount /storage umount: /storage: device is busy. (In some cases useful info about processes that use the device is found by lsof(8) or fuser(1)) It is indeed busy; there are some 30 or 40 processes waiting on I/O. What should I do? Should I kill all these processes and try again? Is that a wise move when they are 'uninterruptable'? What would happen if I tried to reboot? Please let me know what you think I should do. And please ask if you need any extra information to diagnose the problem or to help!
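
    Unrelated to the immediate recovery, catching a degraded array before Samba symptoms appear is easy to automate: a failed member shows up as an underscore in the [_UU] status string. A rough sketch that could run from cron (the regex is based on the /proc/mdstat format shown above; other kernels format slightly differently):

        import re, sys

        def degraded_arrays(path="/proc/mdstat"):
            text = open(path).read()
            bad = []
            # status strings look like "[3/2] [_UU]": an underscore = missing member
            for name, status in re.findall(r"^(md\d+).*?\[\d+/\d+\] \[([U_]+)\]",
                                           text, re.S | re.M):
                if "_" in status:
                    bad.append((name, status))
            return bad

        if __name__ == "__main__":
            bad = degraded_arrays()
            for name, status in bad:
                print(f"{name} is degraded: [{status}]")
            sys.exit(1 if bad else 0)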

  • Xen kernel can't see 2 disks of 6 of 1TB, does it have a limitation?

    - by PartySoft
        Linux gentoo-xen 2.6.18-xen-r12 #3 SMP Tue Oct 5 09:28:53 PDT 2010 x86_64 Intel(R) Xeon(R) CPU E5506 @ 2.13GHz GenuineIntel GNU/Linux

    I have 6 disks of 1 TB each and I can't see all of them, only 4. Can anyone give me an idea of what I can do?

        Filesystem            Size  Used Avail Use% Mounted on
        rootfs                886G  4.4G  836G   1% /
        /dev/sda3             886G  4.4G  836G   1% /
        rc-svcdir             1.0M   44K  980K   5% /lib64/rc/init.d
        shm                   7.9G     0  7.9G   0% /dev/shm
        /dev/sdb1             917G  200M  871G   1% /home2
        /dev/sdc1             917G  200M  871G   1% /home3
        /dev/sdd1             917G  200M  871G   1% /home4

    The hardware is dual Xeon E5506 processors on a Supermicro X8DTL mobo.

        [ 4.346585] ata3.00: ATA-8, max UDMA/133, 1953525168 sectors: LBA48 NCQ (depth 0/32)
        [ 4.346588] ata3.00: ata3: dev 0 multi count 16
        [ 4.352861] ata3.00: configured for UDMA/133
        [ 4.352867] scsi3 : ata_piix
        [ 4.352875] PM: Adding info for No Bus:host3
        [ 4.510584] ata4.00: ATA-8, max UDMA/133, 1953525168 sectors: LBA48 NCQ (depth 0/32)
        [ 4.510587] ata4.00: ata4: dev 0 multi count 16
        [ 4.516848] ata4.00: configured for UDMA/133
        [ 4.516861] PM: Adding info for No Bus:target2:0:0
        [ 4.516905] Vendor: ATA Model: SAMSUNG HD103SJ Rev: 1AJ1
        [ 4.516910] Type: Direct-Access ANSI SCSI revision: 05
        [ 4.516920] PM: Adding info for scsi:2:0:0:0
        [ 4.517452] SCSI device sde: 1953525168 512-byte hdwr sectors (1000205 MB)
        [ 4.517460] sde: Write Protect is off
        [ 4.517461] sde: Mode Sense: 00 3a 00 00
        [ 4.517478] SCSI device sde: drive cache: write back
        [ 4.517514] SCSI device sde: 1953525168 512-byte hdwr sectors (1000205 MB)
        [ 4.517521] sde: Write Protect is off
        [ 4.517522] sde: Mode Sense: 00 3a 00 00
        [ 4.517532] SCSI device sde: drive cache: write back
        [ 4.517534] sde: sde1
        [ 4.524551] sd 2:0:0:0: Attached scsi disk sde
        [ 4.524855] sd 2:0:0:0: Attached scsi generic sg4 type 0
        [ 4.524874] PM: Adding info for No Bus:target3:0:0
        [ 4.524928] Vendor: ATA Model: SAMSUNG HD103SJ Rev: 1AJ1
        [ 4.524933] Type: Direct-Access ANSI SCSI revision: 05
        [ 4.524946] PM: Adding info for scsi:3:0:0:0
        [ 4.525216] SCSI device sdf: 1953525168 512-byte hdwr sectors (1000205 MB)
        [ 4.525227] sdf: Write Protect is off
        [ 4.525228] sdf: Mode Sense: 00 3a 00 00
        [ 4.525242] SCSI device sdf: drive cache: write back
        [ 4.525280] SCSI device sdf: 1953525168 512-byte hdwr sectors (1000205 MB)
        [ 4.525286] sdf: Write Protect is off
        [ 4.525289] sdf: Mode Sense: 00 3a 00 00
        [ 4.525301] SCSI device sdf: drive cache: write back
        [ 4.525302] sdf: sdf1
        [ 4.532691] sd 3:0:0:0: Attached scsi disk sdf
        [ 4.533010] sd 3:0:0:0: Attached scsi generic sg5 type 0
        [ 4.977669] scsi: <fdomain> Detection failed (no card)
        [ 5.030479] GDT-HA: Storage RAID Controller Driver. Version: 3.05
        [ 5.030635] GDT-HA: Found 0 PCI Storage RAID Controllers
        [ 5.372350] Fusion MPT base driver 3.04.01
        [ 5.372358] Copyright (c) 1999-2005 LSI Logic Corporation
        [ 5.579176] Fusion MPT SPI Host driver 3.04.01
        [ 5.881777] ieee1394: Initialized config rom entry `ip1394'
        [ 6.166745] ieee1394: sbp2: Driver forced to serialize I/O (serialize_io=1)
        [ 6.166748] ieee1394: sbp2: Try serialize_io=0 for better performance
        [ 6.428866] md: md driver 0.90.3 MAX_MD_DEVS=256, MD_SB_DISKS=27
        [ 6.428872] md: bitmap version 4.39
        [ 6.431518] md: raid0 personality registered for level 0
        [ 6.495979] md: raid1 personality registered for level 1
        [ 6.570270] raid5: automatically using best checksumming function: generic_sse
        [ 6.575523] generic_sse: 6608.000 MB/sec
        [ 6.575526] raid5: using function: generic_sse (6608.000 MB/sec)
        [ 6.596226] raid6: int64x1 1835 MB/s
        [ 6.613231] raid6: int64x2 1773 MB/s
        [ 6.630256] raid6: int64x4 1675 MB/s
        [ 6.647296] raid6: int64x8 1027 MB/s
        [ 6.664267] raid6: sse2x1 3578 MB/s
        [ 6.681268] raid6: sse2x2 4207 MB/s
        [ 6.698280] raid6: sse2x4 4625 MB/s
        [ 6.698281] raid6: using algorithm sse2x4 (4625 MB/s)
        [ 6.698285] md: raid6 personality registered for level 6
        [ 6.698286] md: raid5 personality registered for level 5
        [ 6.698288] md: raid4 personality registered for level 4
        [ 6.781090] md: raid10 personality registered for level 10
        [ 7.007043] Intel(R) PRO/1000 Network Driver - version 7.1.9-k4
        [ 7.007046] Copyright (c) 1999-2006 Intel Corporation.
        [ 9.229465] kjournald starting. Commit interval 5 seconds
        [ 9.229476] EXT3-fs: mounted filesystem with ordered data mode.

  • Openswan ipsec transport tunnel not going up

    - by gparent
    On ClusterA and ClusterB I have installed the "openswan" package on Debian Squeeze. ClusterA's IP is 172.16.0.107; ClusterB's is 172.16.0.108. When they ping one another, the pings never reach the destination.

    /etc/ipsec.conf:

        version 2.0 # conforms to second version of ipsec.conf specification
        config setup
            protostack=netkey
            oe=off
        conn L2TP-PSK-CLUSTER
            type=transport
            left=172.16.0.107
            right=172.16.0.108
            auto=start
            ike=aes128-sha1-modp2048
            authby=secret
            compress=yes

    /etc/ipsec.secrets:

        172.16.0.107 172.16.0.108 : PSK "L2TPKEY"
        172.16.0.108 172.16.0.107 : PSK "L2TPKEY"

    Here is the result of ipsec verify on both machines:

        root@cluster2:~# ipsec verify
        Checking your system to see if IPsec got installed and started correctly:
        Version check and ipsec on-path                              [OK]
        Linux Openswan U2.6.28/K2.6.32-5-amd64 (netkey)
        Checking for IPsec support in kernel                         [OK]
        NETKEY detected, testing for disabled ICMP send_redirects    [OK]
        NETKEY detected, testing for disabled ICMP accept_redirects  [OK]
        Checking that pluto is running                               [OK]
        Pluto listening for IKE on udp 500                           [OK]
        Pluto listening for NAT-T on udp 4500                        [FAILED]
        Checking for 'ip' command                                    [OK]
        Checking for 'iptables' command                              [OK]
        Opportunistic Encryption Support                             [DISABLED]
        root@cluster2:~#

    This is the end of the output of ipsec auto --status:

        000 "cluster": 172.16.0.108<172.16.0.108>[+S=C]...172.16.0.107<172.16.0.107>[+S=C]; prospective erouted; eroute owner: #0
        000 "cluster": myip=unset; hisip=unset;
        000 "cluster": ike_life: 3600s; ipsec_life: 28800s; rekey_margin: 540s; rekey_fuzz: 100%; keyingtries: 0
        000 "cluster": policy: PSK+ENCRYPT+COMPRESS+PFS+UP+IKEv2ALLOW+lKOD+rKOD; prio: 32,32; interface: eth0;
        000 "cluster": newest ISAKMP SA: #1; newest IPsec SA: #0;
        000 "cluster": IKE algorithm newest: AES_CBC_128-SHA1-MODP2048
        000
        000 #3: "cluster":500 STATE_QUICK_R0 (expecting QI1); EVENT_CRYPTO_FAILED in 298s; lastdpd=-1s(seq in:0 out:0); idle; import:admin initiate
        000 #2: "cluster":500 STATE_QUICK_I1 (sent QI1, expecting QR1); EVENT_RETRANSMIT in 13s; lastdpd=-1s(seq in:0 out:0); idle; import:admin initiate
        000 #1: "cluster":500 STATE_MAIN_I4 (ISAKMP SA established); EVENT_SA_REPLACE in 2991s; newest ISAKMP; lastdpd=-1s(seq in:0 out:0); idle; import:admin initiate

    Interestingly enough, if I run ike-scan against the server, it doesn't seem to take my IKE settings into account:

        root@cluster1:~# ike-scan -M 172.16.0.108
        Starting ike-scan 1.9 with 1 hosts (http://www.nta-monitor.com/tools/ike-scan/)
        172.16.0.108 Main Mode Handshake returned
            HDR=(CKY-R=641bffa66ba717b6)
            SA=(Enc=3DES Hash=SHA1 Auth=PSK Group=2:modp1024 LifeType=Seconds LifeDuration(4)=0x00007080)
            VID=4f45517b4f7f6e657a7b4351
            VID=afcad71368a1f1c96b8696fc77570100 (Dead Peer Detection v1.0)
        Ending ike-scan 1.9: 1 hosts scanned in 0.008 seconds (118.19 hosts/sec). 1 returned handshake; 0 returned notify
        root@cluster1:~#

    I can't tell what's going on here; this is pretty much the simplest config I can have, according to the examples.

  • Site-to-site VPN using MD5 instead of SHA and getting regular disconnection

    - by Steven
    We are experiencing some strange behavior with a site-to-site IPsec VPN that goes down about every week for 30 minutes (I am told 30 minutes exactly). I don't have access to the logs, so it's difficult to troubleshoot. What is also strange is that the two VPN devices are set to use the SHA hash algorithm but apparently end up agreeing to use MD5. Does anybody have a clue, or is this just insufficient information?

    Edit: Here is an extract of the log of one of the two VPN devices, which is a Cisco 3000 series VPN concentrator.

        27981 03/08/2010 10:02:16.290 SEV=4 IKE/41 RPT=16120 xxxxxxxx IKE Initiator: New Phase 1, Intf 2, IKE Peer xxxxxxxx local Proxy Address xxxxxxxx, remote Proxy Address xxxxxxxx, SA (L2L: 1A)
        27983 03/08/2010 10:02:56.930 SEV=4 IKE/41 RPT=16121 xxxxxxxx IKE Initiator: New Phase 1, Intf 2, IKE Peer xxxxxxxx local Proxy Address xxxxxxxx, remote Proxy Address xxxxxxxx, SA (L2L: 1A)
        27986 03/08/2010 10:03:35.370 SEV=4 IKE/41 RPT=16122 xxxxxxxx IKE Initiator: New Phase 1, Intf 2, IKE Peer xxxxxxxx local Proxy Address xxxxxxxx, remote Proxy Address xxxxxxxx, SA (L2L: 1A)
        [... same continues for another 15 minutes ...]
        28093 03/08/2010 10:19:46.710 SEV=4 IKE/41 RPT=16140 xxxxxxxx IKE Initiator: New Phase 1, Intf 2, IKE Peer xxxxxxxx local Proxy Address xxxxxxxx, remote Proxy Address xxxxxxxx, SA (L2L: 1A)
        28096 03/08/2010 10:20:17.720 SEV=5 IKE/172 RPT=1291 xxxxxxxx Group [xxxxxxxx] Automatic NAT Detection Status: Remote end is NOT behind a NAT device. This end IS behind a NAT device
        28100 03/08/2010 10:20:17.820 SEV=3 IKE/134 RPT=79 xxxxxxxx Group [xxxxxxxx] Mismatch: Configured LAN-to-LAN proposal differs from negotiated proposal. Verify local and remote LAN-to-LAN connection lists.
        28103 03/08/2010 10:20:17.820 SEV=4 IKE/119 RPT=1197 xxxxxxxx Group [xxxxxxxx] PHASE 1 COMPLETED
        28104 03/08/2010 10:20:17.820 SEV=4 AUTH/22 RPT=1031 xxxxxxxx User [xxxxxxxx] Group [xxxxxxxx] connected, Session Type: IPSec/LAN-to-LAN
        28106 03/08/2010 10:20:17.820 SEV=4 AUTH/84 RPT=39 LAN-to-LAN tunnel to headend device xxxxxxxx connected
        28110 03/08/2010 10:20:17.920 SEV=5 IKE/25 RPT=1291 xxxxxxxx Group [xxxxxxxx] Received remote Proxy Host data in ID Payload: Address xxxxxxxx, Protocol 0, Port 0
        28113 03/08/2010 10:20:17.920 SEV=5 IKE/24 RPT=88 xxxxxxxx Group [xxxxxxxx] Received local Proxy Host data in ID Payload: Address xxxxxxxx, Protocol 0, Port 0
        28116 03/08/2010 10:20:17.920 SEV=5 IKE/66 RPT=1290 xxxxxxxx Group [xxxxxxxx] IKE Remote Peer configured for SA: L2L: 1A
        28117 03/08/2010 10:20:17.930 SEV=5 IKE/25 RPT=1292 xxxxxxxx Group [xxxxxxxx] Received remote Proxy Host data in ID Payload: Address xxxxxxxx, Protocol 0, Port 0
        28120 03/08/2010 10:20:17.930 SEV=5 IKE/24 RPT=89 xxxxxxxx Group [xxxxxxxx] Received local Proxy Host data in ID Payload: Address xxxxxxxx, Protocol 0, Port 0
        28123 03/08/2010 10:20:17.930 SEV=5 IKE/66 RPT=1291 xxxxxxxx Group [xxxxxxxx] IKE Remote Peer configured for SA: L2L: 1A
        28124 03/08/2010 10:20:18.070 SEV=4 IKE/173 RPT=17330 xxxxxxxx Group [xxxxxxxx] NAT-Traversal successfully negotiated! IPSec traffic will be encapsulated to pass through NAT devices.
        28127 03/08/2010 10:20:18.070 SEV=4 IKE/49 RPT=17332 xxxxxxxx Group [xxxxxxxx] Security negotiation complete for LAN-to-LAN Group (xxxxxxxx) Responder, Inbound SPI = 0x56a4fe5c, Outbound SPI = 0xcdfc3892
        28130 03/08/2010 10:20:18.070 SEV=4 IKE/120 RPT=17332 xxxxxxxx Group [xxxxxxxx] PHASE 2 COMPLETED (msgid=37b3b298)
        28131 03/08/2010 10:20:18.750 SEV=4 IKE/41 RPT=16141 xxxxxxxx Group [xxxxxxxx] IKE Initiator: New Phase 2, Intf 2, IKE Peer xxxxxxxx local Proxy Address xxxxxxxx, remote Proxy Address xxxxxxxx, SA (L2L: 1A)
        28135 03/08/2010 10:20:18.870 SEV=4 IKE/173 RPT=17331 xxxxxxxx Group [xxxxxxxx] NAT-Traversal successfully negotiated! IPSec traffic will be encapsulated to pass through NAT devices.

  • Linux software RAID6: rebuild slow

    - by Ole Tange
    I am trying to find the bottleneck in the rebuilding of a software raid6. ## Pause rebuilding when measuring raw I/O performance # echo 1 > /proc/sys/dev/raid/speed_limit_min # echo 1 > /proc/sys/dev/raid/speed_limit_max ## Drop caches so that does not interfere with measuring # sync ; echo 3 | tee /proc/sys/vm/drop_caches >/dev/null # time parallel -j0 "dd if=/dev/{} bs=256k count=4000 | cat >/dev/null" ::: sdbd sdbc sdbf sdbm sdbl sdbk sdbe sdbj sdbh sdbg 4000+0 records in 4000+0 records out 1048576000 bytes (1.0 GB) copied, 7.30336 s, 144 MB/s [... similar for each disk ...] # time parallel -j0 "dd if=/dev/{} skip=15000000 bs=256k count=4000 | cat >/dev/null" ::: sdbd sdbc sdbf sdbm sdbl sdbk sdbe sdbj sdbh sdbg 4000+0 records in 4000+0 records out 1048576000 bytes (1.0 GB) copied, 12.7991 s, 81.9 MB/s [... similar for each disk ...] So we can read sequentially at 140 MB/s in the outer tracks and 82 MB/s in the inner tracks on all the drives simultaneously. Sequential write performance is similar. This would lead me to expect a rebuild speed of 82 MB/s or more. # echo 800000 > /proc/sys/dev/raid/speed_limit_min # echo 800000 > /proc/sys/dev/raid/speed_limit_max # cat /proc/mdstat md2 : active raid6 sdbd[10](S) sdbc[9] sdbf[0] sdbm[8] sdbl[7] sdbk[6] sdbe[11] sdbj[4] sdbi[3](F) sdbh[2] sdbg[1] 27349121408 blocks super 1.2 level 6, 128k chunk, algorithm 2 [9/8] [UUU_UUUUU] [=========>...........] recovery = 47.3% (1849905884/3907017344) finish=855.9min speed=40054K/sec But we only get 40 MB/s. And often this drops to 30 MB/s. # iostat -dkx 1 sdbc 0.00 8023.00 0.00 329.00 0.00 33408.00 203.09 0.70 2.12 1.06 34.80 sdbd 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 sdbe 13.00 0.00 8334.00 0.00 33388.00 0.00 8.01 0.65 0.08 0.06 47.20 sdbf 0.00 0.00 8348.00 0.00 33388.00 0.00 8.00 0.58 0.07 0.06 48.00 sdbg 16.00 0.00 8331.00 0.00 33388.00 0.00 8.02 0.71 0.09 0.06 48.80 sdbh 961.00 0.00 8314.00 0.00 37100.00 0.00 8.92 0.93 0.11 0.07 54.80 sdbj 70.00 0.00 8276.00 0.00 33384.00 0.00 8.07 0.78 0.10 0.06 48.40 sdbk 124.00 0.00 8221.00 0.00 33380.00 0.00 8.12 0.88 0.11 0.06 47.20 sdbl 83.00 0.00 8262.00 0.00 33380.00 0.00 8.08 0.96 0.12 0.06 47.60 sdbm 0.00 0.00 8344.00 0.00 33376.00 0.00 8.00 0.56 0.07 0.06 47.60 iostat says the disks are not 100% busy (but only 40-50%). This fits with the hypothesis that the max is around 80 MB/s. Since this is software raid the limiting factor could be CPU. top says: PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND 38520 root 20 0 0 0 0 R 64 0.0 2947:50 md2_raid6 6117 root 20 0 0 0 0 D 53 0.0 473:25.96 md2_resync So md2_raid6 and md2_resync are clearly busy taking up 64% and 53% of a CPU respectively, but not near 100%. The chunk size (128k) of the RAID was chosen after measuring which chunksize gave the least CPU penalty. If this speed is normal: What is the limiting factor? Can I measure that? If this speed is not normal: How can I find the limiting factor? Can I change that?
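
    One way to watch which side saturates first is to sample disk utilization programmatically while the rebuild runs; if no single disk approaches 100% but the md2_raid6 kernel thread does, the rebuild is bound on that single CPU thread (the stripe cache, often raised via /sys/block/md2/md/stripe_cache_size, is another common lever). A rough sketch; this assumes sysstat's iostat -dkx layout as shown above, with %util as the last column, which varies between versions:

        import subprocess

        def busiest_devices(interval=5):
            """Sample iostat once and return (device, %util), busiest first."""
            out = subprocess.run(["iostat", "-dkx", str(interval), "2"],
                                 capture_output=True, text=True, check=True).stdout
            block = out.strip().split("\n\n")[-1]   # last sample, not since-boot averages
            stats = {}
            for line in block.splitlines():
                parts = line.split()
                if len(parts) > 2 and parts[0].startswith("sd"):
                    stats[parts[0]] = float(parts[-1])   # %util is the last column
            return sorted(stats.items(), key=lambda kv: -kv[1])

        for dev, util in busiest_devices():
            print(f"{dev}: {util:.0f}% busy")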

  • e2fsck / resize2fs problems

    - by BlakBat
    I've got 6 drives (each 1.5T, all the same model and firmware revision) that are part of a RAID5 array. The RAID5 makes up an LVM volume group and a logical volume; the latter contains only one ext3 partition. I recently ran:

        e2fsck -f /dev/vg03/lv01 && resize2fs -M /dev/vg03/lv01

    which exited without an error. Now when I try to mount /dev/vg03/lv01 I get:

        EXT3-fs error (device dm-0): ext3_check_descriptors: Block bitmap for group 30533 not in group (block 1000532368)!
        EXT3-fs: group descriptors corrupted!

    How do I get out of this predicament? This is all the info I can currently give you. fdisk -l /dev/sd[cdefgh] shows (correctly) that they are "Linux raid autodetect", but fdisk now shows:

        fdisk -l /dev/md0
        Disk /dev/md0: 7501.5 GB, 7501495664640 bytes
        ...
        Disk identifier: 0x00000000
        Disk /dev/md0 doesn't contain a valid partition table

    (instead of an LVM-type partition)

        fdisk -l /dev/vg03/lv01
        Disk /dev/vg03/lv01: 7501.5 GB, 7501491732480 bytes
        ...
        Disk identifier: 0x00000000
        Disk /dev/vg03/lv01 doesn't contain a valid partition table

    (instead of an ext3-type partition). I've tried:

        e2fsck -fy /dev/vg03/lv01
        e2fsck 1.41.12 (17-May-2010)
        e2fsck: Group descriptors look bad... trying backup blocks...
        Block bitmap for group 30533 is not in group. (block 1000532368)
        Relocate? yes
        Inode bitmap for group 30533 is not in group. (block 1000532369)
        Relocate? yes
        Pass 1: Checking inodes, blocks, and sizes
        Relocating group 30533's block bitmap to 1000524246...
        Error allocating 1 contiguous block(s) in block group 30533 for inode bitmap: Could not allocate block in ext2 filesystem
        e2fsck: aborted

    Extra information I can give you:

        cat /proc/mdstat
        Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
        md0 : active (auto-read-only) raid5 sdg1[0] sdh1[5] sdf1[4] sde1[3] sdc1[2] sdd1[1]
              7325679360 blocks level 5, 128k chunk, algorithm 2 [6/6] [UUUUUU]
              bitmap: 1/175 pages [4KB], 4096KB chunk
        unused devices: <none>

    Lastly, all smartctl tests (short and extended) showed no errors on any of the disks. Should I try to resize2fs to grow /dev/vg03/lv01 and redo an e2fsck? Should I cfdisk /dev/md0 and /dev/vg03/lv01 back to their real types? Thanks in advance for any and all help.

    2011-09-20 UPDATE: I issued the following commands and was able to remount the partition, but comparing the size (df) before and after, it seems that 1 TB of data has gone missing. Checking the MD5SUMs (from an old backup) of some files against the "same" files on the remounted partition detected some errors. The commands issued to remount the partition were:

        dumpe2fs /dev/vg03/lv01
          Block count: 1000491435
          Block size: 4096
        tune2fs -O ^has_journal /dev/vg03/lv01
        resize2fs -p /dev/vg03/lv01
        dumpe2fs /dev/vg03/lv01
          Block count: 1831418880
          Block size: 4096
        mount -o ro,noatime /dev/vg03/lv01 /mnt/raid

    OK... but files have been damaged / gone missing.

  • Windows 7 inbuilt and 3rd party (de)fragmentation related queries

    - by Karan
    I have a pretty good idea of how files end up getting fragmented. That said, I just copied ~3,200 files of varying sizes (from a few KB to ~20GB) from an external USB HDD to an internal, freshly formatted (under Windows 7 x64), NTFS, 2TB, 5400RPM, WD, SATA, non-system (i.e. secondary) drive, filling it up 57%. Since it should have been very much possible for each file to have been stored in one contiguous block, I expected the drive to be fragmented not more than 1-2% at most after this rather lengthy exercise (unfortunately this older machine doesn't support USB 3.0). Windows 7's inbuilt defrag utility told me after a quick analysis that the drive was fragmented only 1% or so, which dovetailed neatly with my expectations. However, just out of curiosity I downloaded and ran the latest portable x64 version of Piriform's Defraggler, and was shocked to see the drive being reported as ~85% fragmented! The portable version of Auslogics Disk Defrag also agreed with Defraggler, and both clearly expected to grind away for ~10 hours to completely defragment the drive.

    1) How in blazes could the inbuilt and 3rd party defrag utils disagree so badly? I mean, 10-20% variance is probably understandable, but 1% and 85% are miles apart! This Engineering Windows 7 blog post states:

        In Windows XP, any file that is split into more than one piece is considered fragmented. Not so in Windows Vista if the fragments are large enough -- the defragmentation algorithm was changed (from Windows XP) to ignore pieces of a file that are larger than 64MB. As a result, defrag in XP and defrag in Vista will report different amounts of fragmentation on a volume. ... [Please read the entire post so the quote is not taken out of context.]

    Could it simply be that the 3rd party defrag utils ignore this post-XP change and continue to use analysis algos similar to those XP used? (A toy illustration of the two metrics follows below.)

    2) Assuming that the 3rd party utils aren't lying about the real extent of fragmentation (which Windows is downplaying post-XP), how could the files have even got fragmented so badly, given they were just copied over afresh to an empty drive?

    3) If vastly differing analysis algos explain the yawning gap, which do I believe? I'm no defrag fanatic for sure, but 85% is enough to make me seriously consider spending 10 hours defragging this drive. On the other hand, 1% reported by Windows' own defragger clearly implies that there is no cause for concern and defragging would actually have negative consequences (as per the post). Is Windows' assumption valid and should I just let it be, or will there be any noticeable performance gains after running one of the 3rd party utils for 10 hours straight?

    4) I see that out of the box Windows 7 defrag is scheduled to run weekly. Does anyone know whether it defrags every single time, or only if its analysis reveals a fragmentation percentage over a set threshold? If the latter, what is this threshold and can it be changed, maybe via a Registry edit?

    Thanks for reading through (my first query on this wonderful site!) and for any helpful replies. Also, if you're answering question #3, please keep in mind that any speed increases post defragging with 3rd party utils vis-à-vis Windows' inbuilt program should not include pre-Vista (preferably pre-Win7) examples. Further, examples of programs that made your system boot faster won't help in this case, since this is a non-system drive (although one that'll still be used daily).
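
    Regarding question 1, the two metrics are easy to contrast on paper. A toy model in Python: the 64MB cutoff is the documented Vista+ behavior from the quoted blog post, while the fragment lists are invented for illustration:

        # Each file is a list of fragment sizes in MB.
        files = [
            [20000],            # 20 GB file stored contiguously
            [300, 300, 300],    # 900 MB file in three large pieces
            [1, 1, 1, 1],       # small, badly fragmented file
        ]

        def fragmented_xp_style(file):
            # XP and most 3rd-party tools: more than one piece = fragmented
            return len(file) > 1

        def fragmented_win7_style(file, cutoff_mb=64):
            # Vista+/Win7 built-in: ignore pieces larger than the cutoff
            small = [f for f in file if f <= cutoff_mb]
            return len(small) > 1

        for name, pred in [("XP-style", fragmented_xp_style),
                           ("Win7-style", fragmented_win7_style)]:
            frag = sum(pred(f) for f in files)
            print(f"{name}: {frag}/{len(files)} files fragmented")

    The XP-style count flags two of the three files; the Win7-style count flags only one, since large fragments are ignored. Weight that by file size, and a 1%-versus-85% gap on a drive full of multi-GB files stops looking mysterious.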

  • Reconstructing the disk order in RAID 6 with 7 disks

    - by rkotulla
    A little background to this question first: I am running a RAID-6 in a QNAP TS869L external RAID/NAS system. I started with 5 disks of 3 TB each back in the day, and later added another 2 disks of 3 TB to the RAID. The QNAP internals handled the growing and re-syncing etc., and everything seemed to be perfectly fine. About 2 weeks ago, I had one of the disks (disk #5; disk #2 had gone bad in the meantime) fail, and somehow (I have no idea why) disks 1 and 2 also got kicked out of the array. I replaced disk #5, but the RAID didn't start working again. After some calls to QNAP technical support, they re-created the array (using mdadm --create --force --assume-clean ...), but the resulting array couldn't find a filesystem, and I was kindly referred to a data recovery company that I can't afford. After some digging through old log files, resetting the disks to factory default, etc., I found a few errors that were made during this re-create -- I wish I still had some of the original metadata, but unfortunately I don't (I definitely learned that lesson). I'm currently at the point where I know the correct chunk size (64K) and metadata version (1.0; the factory default was 0.9, but from what I read, 0.9 doesn't handle disks over 2 TB, and mine are 3 TB), and I can now find the ext4 filesystem that should be on the disks. The only variable left to determine is the right disk order! I started using the description found in answer #4 of "Recover RAID 5 data after created new array instead of re-using" but am a little confused about what the order should be for a proper RAID-6. RAID-5 is pretty well documented in a number of places, but RAID-6 much less so. Also, does the layout, i.e. the distribution of parity and data chunks across the disks, change after growing the array from 5 to 7 disks, or does the re-sync re-organize it the way a natively created 7-disk RAID-6 would be laid out?
    Thanks. Some more mdadm output that might be helpful:

    mdadm version:

        [~] # mdadm --version
        mdadm - v2.6.3 - 20th August 2007

    mdadm details from one of the disks in the array:

        [~] # mdadm --examine /dev/sda3
        /dev/sda3:
              Magic : a92b4efc
            Version : 1.0
        Feature Map : 0x0
         Array UUID : 1c1614a5:e3be2fbb:4af01271:947fe3aa
               Name : 0
      Creation Time : Tue Jun 10 10:27:58 2014
         Raid Level : raid6
       Raid Devices : 7
      Used Dev Size : 5857395112 (2793.02 GiB 2998.99 GB)
         Array Size : 29286975360 (13965.12 GiB 14994.93 GB)
          Used Size : 5857395072 (2793.02 GiB 2998.99 GB)
       Super Offset : 5857395368 sectors
              State : clean
        Device UUID : 7c572d8f:20c12727:7e88c888:c2c357af
        Update Time : Tue Jun 10 13:01:06 2014
           Checksum : d275c82d - correct
             Events : 7036
         Chunk Size : 64K
         Array Slot : 0 (0, 1, failed, 3, failed, 5, 6)
        Array State : Uu_u_uu 2 failed

    mdadm details for the array in the current disk order (based on my best guess, reconstructed from old log files):

        [~] # mdadm --detail /dev/md0
        /dev/md0:
            Version : 01.00.03
      Creation Time : Tue Jun 10 10:27:58 2014
         Raid Level : raid6
         Array Size : 14643487680 (13965.12 GiB 14994.93 GB)
      Used Dev Size : 2928697536 (2793.02 GiB 2998.99 GB)
       Raid Devices : 7
      Total Devices : 5
    Preferred Minor : 0
        Persistence : Superblock is persistent
        Update Time : Tue Jun 10 13:01:06 2014
              State : clean, degraded
     Active Devices : 5
    Working Devices : 5
     Failed Devices : 0
      Spare Devices : 0
         Chunk Size : 64K
               Name : 0
               UUID : 1c1614a5:e3be2fbb:4af01271:947fe3aa
             Events : 7036
        Number   Major   Minor   RaidDevice   State
           0       8       3        0         active sync /dev/sda3
           1       8      19        1         active sync /dev/sdb3
           2       0       0        2         removed
           3       8      51        3         active sync /dev/sdd3
           4       0       0        4         removed
           5       8      99        5         active sync /dev/sdg3
           6       8      83        6         active sync /dev/sdf3

    output from /proc/mdstat (md8, md9, and md13 are internally used RAIDs holding swap, etc.; the one I'm after is md0):

        [~] # more /proc/mdstat
        Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4] [multipath]
        md0 : active raid6 sdf3[6] sdg3[5] sdd3[3] sdb3[1] sda3[0]
              14643487680 blocks super 1.0 level 6, 64k chunk, algorithm 2 [7/5] [UU_U_UU]
        md8 : active raid1 sdg2[2](S) sdf2[3](S) sdd2[4](S) sdc2[5](S) sdb2[6](S) sda2[1] sde2[0]
              530048 blocks [2/2] [UU]
        md13 : active raid1 sdg4[3] sdf4[4] sde4[5] sdd4[6] sdc4[2] sdb4[1] sda4[0]
              458880 blocks [8/7] [UUUUUUU_]
              bitmap: 21/57 pages [84KB], 4KB chunk
        md9 : active raid1 sdg1[6] sdf1[5] sde1[4] sdd1[3] sdc1[2] sda1[0] sdb1[1]
              530048 blocks [8/7] [UUUUUUU_]
              bitmap: 37/65 pages [148KB], 4KB chunk
        unused devices: <none>
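
    For the disk-order question itself, the usual brute-force technique is to loop over candidate orders, re-create the array with --assume-clean, and test whether a recognizable filesystem appears. A sketch of that loop in Python, with loud caveats: --create rewrites superblocks, so run this only against dd images or device-mapper snapshot overlays of the disks, never the only copy of the data; the md node name, the --run flag to skip confirmation prompts, and the missing-slot positions (2 and 4, per the Array Slot line above) are my assumptions:

        import itertools, subprocess

        disks = ["/dev/sda3", "/dev/sdb3", "/dev/sdd3", "/dev/sdf3", "/dev/sdg3"]

        def try_order(order):
            subprocess.run(["mdadm", "--stop", "/dev/md99"], check=False)
            subprocess.run(["mdadm", "--create", "/dev/md99", "--assume-clean", "--run",
                            "--level=6", "--raid-devices=7", "--chunk=64",
                            "--metadata=1.0"] + list(order), check=True)
            # a correct order puts the ext4 superblock where file(1) can see it
            out = subprocess.run(["file", "-s", "/dev/md99"],
                                 capture_output=True, text=True).stdout
            return "ext" in out

        hits = []
        for perm in itertools.permutations(disks):
            # failed slots 2 and 4 stay "missing": 5 devices in 5 slots = 120 tries
            order = [perm[0], perm[1], "missing", perm[2], "missing", perm[3], perm[4]]
            if try_order(order):
                hits.append(order)
        print(hits)

    Only candidates that also pass a read-only e2fsck -n afterwards are worth mounting. On the layout sub-question: the "algorithm 2" in the mdstat output is md's left-symmetric default, and as far as I understand a grow keeps that layout, so the re-created 7-disk array should use the same algorithm rather than anything special to grown arrays.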

  • duplicate cache pages: Varnish

    - by Sukhjinder Singh
    Recently we configured Varnish on our server. It was set up successfully, but we noticed that if we open any page in multiple browsers, Varnish sends a request to Apache no matter whether the page is cached or not. If we refresh twice in each browser, it creates duplicate copies of the same page. What should happen instead: if a page is cached by Varnish, subsequent requests should be served from Varnish itself, whether we open the same page in another browser or open that page from a different IP address. The following is my default.vcl file:

        backend default {
            .host = "127.0.0.1";
            .port = "80";
        }

        sub vcl_recv {
            if (req.url ~ "^/search/.*$") {
            } else {
                set req.url = regsub(req.url, "\?.*", "");
            }
            if (req.restarts == 0) {
                if (req.http.x-forwarded-for) {
                    set req.http.X-Forwarded-For = req.http.X-Forwarded-For + ", " + client.ip;
                } else {
                    set req.http.X-Forwarded-For = client.ip;
                }
            }
            if (!req.backend.healthy) {
                unset req.http.Cookie;
            }
            set req.grace = 6h;
            if (req.url ~ "^/status\.php$" || req.url ~ "^/update\.php$" ||
                req.url ~ "^/admin$" || req.url ~ "^/admin/.*$" ||
                req.url ~ "^/flag/.*$" || req.url ~ "^.*/ajax/.*$" ||
                req.url ~ "^.*/ahah/.*$") {
                return (pass);
            }
            if (req.url ~ "(?i)\.(pdf|asc|dat|txt|doc|xls|ppt|tgz|csv|png|gif|jpeg|jpg|ico|swf|css|js)(\?.*)?$") {
                unset req.http.Cookie;
            }
            if (req.http.Cookie) {
                set req.http.Cookie = ";" + req.http.Cookie;
                set req.http.Cookie = regsuball(req.http.Cookie, "; +", ";");
                set req.http.Cookie = regsuball(req.http.Cookie, ";(SESS[a-z0-9]+|SSESS[a-z0-9]+|NO_CACHE)=", "; \1=");
                set req.http.Cookie = regsuball(req.http.Cookie, ";[^ ][^;]*", "");
                set req.http.Cookie = regsuball(req.http.Cookie, "^[; ]+|[; ]+$", "");
                if (req.http.Cookie == "") {
                    unset req.http.Cookie;
                } else {
                    return (pass);
                }
            }
            if (req.request != "GET" && req.request != "HEAD" &&
                req.request != "PUT" && req.request != "POST" &&
                req.request != "TRACE" && req.request != "OPTIONS" &&
                req.request != "DELETE") {
                return (pipe); /* Non-RFC2616 or CONNECT which is weird. */
            }
            if (req.request != "GET" && req.request != "HEAD") {
                return (pass);
            }
            if (req.http.Accept-Encoding) {
                if (req.url ~ "\.(jpg|png|gif|gz|tgz|bz2|tbz|mp3|ogg)$") {
                    # No point in compressing these
                    remove req.http.Accept-Encoding;
                } else if (req.http.Accept-Encoding ~ "gzip") {
                    set req.http.Accept-Encoding = "gzip";
                } else if (req.http.Accept-Encoding ~ "deflate") {
                    set req.http.Accept-Encoding = "deflate";
                } else {
                    # unknown algorithm
                    remove req.http.Accept-Encoding;
                }
            }
            return (lookup);
        }

        sub vcl_deliver {
            if (obj.hits > 0) {
                set resp.http.X-Varnish-Cache = "HIT";
            } else {
                set resp.http.X-Varnish-Cache = "MISS";
            }
        }

        sub vcl_fetch {
            if (beresp.status == 404 || beresp.status == 301 || beresp.status == 500) {
                set beresp.ttl = 10m;
            }
            if (req.url ~ "(?i)\.(pdf|asc|dat|txt|doc|xls|ppt|tgz|csv|png|gif|jpeg|jpg|ico|swf|css|js)(\?.*)?$") {
                unset beresp.http.set-cookie;
            }
            set beresp.grace = 6h;
        }

        sub vcl_hash {
            hash_data(req.url);
            if (req.http.host) {
                hash_data(req.http.host);
            } else {
                hash_data(server.ip);
            }
            return (hash);
        }

        sub vcl_pipe {
            set req.http.connection = "close";
        }

        sub vcl_hit {
            if (req.request == "PURGE") { ban_url(req.url); error 200 "Purged"; }
            if (!obj.ttl > 0s) { return (pass); }
        }

        sub vcl_miss {
            if (req.request == "PURGE") { error 200 "Not in cache"; }
        }

  • SunTlsRsaPremasterSecret KeyGenerator not available

    - by Jill
    Hi, I encountered an error when my application tries to load an RSA algorithm provider class from Java. The exception stack is as follows:

        javax.jms.JMSException: RSA premaster secret error
            at org.apache.activemq.util.JMSExceptionSupport.create(JMSExceptionSupport.java:49)
            at org.apache.activemq.ActiveMQConnection.syncSendPacket(ActiveMQConnection.java:1255)
            at org.apache.activemq.ActiveMQConnection.ensureConnectionInfoSent(ActiveMQConnection.java:1350)
            at org.apache.activemq.ActiveMQConnection.setClientID(ActiveMQConnection.java:388)
            at com.trendmicro.tmsm.TMSMAgent.open(TMSMAgent.java:63)
        Caused by: javax.net.ssl.SSLKeyException: RSA premaster secret error
            at com.sun.net.ssl.internal.ssl.RSAClientKeyExchange.<init>(RSAClientKeyExchange.java:97)
            at com.sun.net.ssl.internal.ssl.ClientHandshaker.serverHelloDone(ClientHandshaker.java:634)
            at com.sun.net.ssl.internal.ssl.ClientHandshaker.processMessage(ClientHandshaker.java:226)
            at com.sun.net.ssl.internal.ssl.Handshaker.processLoop(Handshaker.java:516)
            at com.sun.net.ssl.internal.ssl.Handshaker.process_record(Handshaker.java:454)
            at com.sun.net.ssl.internal.ssl.SSLSocketImpl.readRecord(SSLSocketImpl.java:884)
            at com.sun.net.ssl.internal.ssl.SSLSocketImpl.performInitialHandshake(SSLSocketImpl.java:1112)
            at com.sun.net.ssl.internal.ssl.SSLSocketImpl.writeRecord(SSLSocketImpl.java:623)
            at com.sun.net.ssl.internal.ssl.AppOutputStream.write(AppOutputStream.java:59)
            at org.apache.activemq.transport.tcp.TcpBufferedOutputStream.flush(TcpBufferedOutputStream.java:115)
            at java.io.DataOutputStream.flush(DataOutputStream.java:106)
            at org.apache.activemq.transport.tcp.TcpTransport.oneway(TcpTransport.java:167)
            at org.apache.activemq.transport.InactivityMonitor.oneway(InactivityMonitor.java:237)
            at org.apache.activemq.transport.WireFormatNegotiator.sendWireFormat(WireFormatNegotiator.java:168)
            at org.apache.activemq.transport.WireFormatNegotiator.sendWireFormat(WireFormatNegotiator.java:84)
            at org.apache.activemq.transport.WireFormatNegotiator.start(WireFormatNegotiator.java:74)
            at org.apache.activemq.transport.failover.FailoverTransport.doReconnect(FailoverTransport.java:715)
            at org.apache.activemq.transport.failover.FailoverTransport$2.iterate(FailoverTransport.java:115)
            at org.apache.activemq.thread.PooledTaskRunner.runTask(PooledTaskRunner.java:122)
            at org.apache.activemq.thread.PooledTaskRunner$1.run(PooledTaskRunner.java:43)
            at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
            at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
            at java.lang.Thread.run(Thread.java:637)
        Caused by: java.security.NoSuchAlgorithmException: SunTlsRsaPremasterSecret KeyGenerator not available
            at javax.crypto.KeyGenerator.<init>(DashoA13*..)
            at javax.crypto.KeyGenerator.getInstance(DashoA13*..)
            at com.sun.net.ssl.internal.ssl.JsseJce.getKeyGenerator(JsseJce.java:223)
            at com.sun.net.ssl.internal.ssl.RSAClientKeyExchange.<init>(RSAClientKeyExchange.java:89)
            ... 22 more

    I've googled the error message, and most posts say it's because the JVM cannot find sunjce_provider.jar. However, I can find that file in the /Library/Java/Home/lib/ext folder. The platform is Mac OS X 10.6 and the Java version is 1.6.0_17. My questions are: Why does the JVM not search /Library/Java/Home/lib/ext for jar files? Can we change the CLASSPATH or the java.ext.dirs property by modifying any config file? Any suggestions to solve this problem? Thanks in advance.

  • Focusable EditText inside ListView

    - by Joe
    I've spent about 6 hours on this so far, and been hitting nothing but roadblocks. The general premise is that there is some row in a ListView (whether it's generated by the adapter, or added as a header view) that contains an EditText widget and a Button. All I want to do is be able to use the jogball/arrows, to navigate the selector to individual items like normal, but when I get to a particular row -- even if I have to explicitly identify the row -- that has a focusable child, I want that child to take focus instead of indicating the position with the selector. I've tried many possibilities, and have so far had no luck. layout: <ListView android:id="@android:id/list" android:layout_height="fill_parent" android:layout_width="fill_parent" /> Header view: EditText view = new EditText(this); listView.addHeaderView(view, null, true); Assuming there are other items in the adapter, using the arrow keys will move the selection up/down in the list, as expected; but when getting to the header row, it is also displayed with the selector, and no way to focus into the EditText using the jogball. Note: tapping on the EditText will focus it at that point, however that relies on a touchscreen, which should not be a requirement. ListView apparently has two modes in this regard: 1. setItemsCanFocus(true): selector is never displayed, but the EditText can get focus when using the arrows. Focus search algorithm is hard to predict, and no visual feedback (on any rows: having focusable children or not) on which item is selected, both of which can give the user an unexpected experience. 2. setItemsCanFocus(false): selector is always drawn in non-touch-mode, and EditText can never get focus -- even if you tap on it. To make matters worse, calling editTextView.requestFocus() returns true, but in fact does not give the EditText focus. What I'm envisioning is basically a hybrid of 1 & 2, where rather than the list setting if all items are focusable or not, I want to set focusability for a single item in the list, so that the selector seamlessly transitions from selecting the entire row for non-focusable items, and traversing the focus tree for items that contain focusable children. Any takers?

  • 'NoneType' object has no attribute 'get' error using SQLAlchemy

    - by Az
    I've been trying to map an object to a database using SQLAlchemy but have run into a snag. Version info if handy: [OS: Mac OSX 10.5.8 | Python: 2.6.4 | SQLAlchemy: 0.5.8] The class I'm going to map: class Student(object): def __init__(self, name, id): self.id = id self.name = name self.preferences = collections.defaultdict(set) self.allocated_project = None self.allocated_rank = 0 def __repr__(self): return str(self) def __str__(self): return "%s %s" %(self.id, self.name) Background: Now, I've got a function that reads in the necessary information from a text database into these objects. The function more or less works and I can easily access the information from the objects. Before the SQLAlchemy code runs, the function will read in the necessary info and store it into the Class. There is a dictionary called students which stores this as such: students = {} students[id] = Student(<all the info from the various "reader" functions>) Afterwards, there is an "allocation" algorithm that will allocate projects to student. It does that well enough. The allocated_project remains as None if a student is unsuccessful in getting a project. SQLAlchemy bit: So after all this happens, I'd like to map my object to a database table. Using the documentation, I've used the following code to only map certain bits. I also begin to create a Session. from sqlalchemy import * from sqlalchemy.orm import * engine = create_engine('sqlite:///:memory:', echo=False) metadata = MetaData() students_table = Table('studs', metadata, Column('id', Integer, primary_key=True), Column('name', String) ) metadata.create_all(engine) mapper(Student, students_table) Session = sessionmaker(bind=engine) sesh = Session() Now after that, I was curious to see if I could print out all the students from my students dictionary. for student in students.itervalues(): print student What do I get but an error: Traceback (most recent call last): File "~/FYP_Tests/FYP_Tests.py", line 140, in <module> print student File "/~FYP_Tests/Parties.py", line 30, in __str__ return "%s %s" %(self.id, self.name) File "/opt/local/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/SQLAlchemy-0.5.8-py2.6.egg/sqlalchemy/orm/attributes.py", line 158, in __get__ return self.impl.get(instance_state(instance), instance_dict(instance)) AttributeError: 'NoneType' object has no attribute 'get' I'm at a loss as to how to resolve this issue, if it is an issue. If more information is required, please ask and I will provide it.
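
    The traceback is characteristic of instances built before the class was mapped: mapper(Student, students_table) instruments Student.id and Student.name with descriptors, and any Student created before that call carries no _sa_instance_state, so the descriptor's instance_state(instance) lookup yields None and .get blows up exactly as shown. A minimal sketch of the working order against the 0.5-era classical-mapper API used in the question (the reader functions are stubbed out with literal data):

        from sqlalchemy import create_engine, MetaData, Table, Column, Integer, String
        from sqlalchemy.orm import mapper, sessionmaker

        class Student(object):
            def __init__(self, name, id):
                self.id = id
                self.name = name
            def __str__(self):
                return "%s %s" % (self.id, self.name)

        engine = create_engine('sqlite:///:memory:')
        metadata = MetaData()
        students_table = Table('studs', metadata,
                               Column('id', Integer, primary_key=True),
                               Column('name', String))
        metadata.create_all(engine)

        # Map FIRST, so every instance is created as an instrumented object...
        mapper(Student, students_table)

        # ...and only build the students dictionary afterwards.
        students = {1: Student("Alice", 1), 2: Student("Bob", 2)}

        Session = sessionmaker(bind=engine)
        session = Session()
        for student in students.values():
            print(student)        # works: instances carry instrumentation state
            session.add(student)
        session.commit()

    If reordering the program flow is awkward, calling mapper() at import time, before any reader function constructs a Student, achieves the same thing.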

  • Apache HttpClient Digest authentication

    - by Milan Jovic
    Hi, basically what I need to do is perform digest authentication. The first thing I tried is the official example available here, but when I try to execute it (with some small changes: Post instead of the Get method) I get:

        org.apache.http.auth.MalformedChallengeException: missing nonce in challange
            at org.apache.http.impl.auth.DigestScheme.processChallenge(DigestScheme.java:132)

    When this failed I tried using:

        DefaultHttpClient client = new DefaultHttpClient();
        client.getCredentialsProvider().setCredentials(new AuthScope(null, -1, null),
                new UsernamePasswordCredentials("<username>", "<password>"));
        HttpPost post = new HttpPost(URI.create("http://<someaddress>"));
        List<NameValuePair> nvps = new ArrayList<NameValuePair>();
        nvps.add(new BasicNameValuePair("domain", "<username>"));
        post.setEntity(new UrlEncodedFormEntity(nvps, HTTP.UTF_8));
        DigestScheme digestAuth = new DigestScheme();
        digestAuth.overrideParamter("algorithm", "MD5");
        digestAuth.overrideParamter("realm", "http://<someaddress>");
        digestAuth.overrideParamter("nonce", Long.toString(new Random().nextLong(), 36));
        digestAuth.overrideParamter("qop", "auth");
        digestAuth.overrideParamter("nc", "0");
        digestAuth.overrideParamter("cnonce", DigestScheme.createCnonce());
        Header auth = digestAuth.authenticate(
                new UsernamePasswordCredentials("<username>", "<password>"), post);
        System.out.println(auth.getName());
        System.out.println(auth.getValue());
        post.setHeader(auth);
        HttpResponse ret = client.execute(post);
        ByteArrayOutputStream v2 = new ByteArrayOutputStream();
        ret.getEntity().writeTo(v2);
        System.out.println("----------------------------------------");
        System.out.println(v2.toString());
        System.out.println("----------------------------------------");
        System.out.println(ret.getStatusLine().getReasonPhrase());
        System.out.println(ret.getStatusLine().getStatusCode());

    At first I overrode only the "realm" and "nonce" DigestScheme parameters, but it turned out that the PHP script running on the server requires all the other params too. Yet whether I specify them or not, DigestScheme doesn't generate them when I call its authenticate() method. I've been struggling with this for two days with no luck. Based on everything, I think the cause of the problem is the PHP script: it looks to me like it doesn't send a challenge when the app tries to access it unauthorized. Any ideas, anyone?

  • Using Artificial Intelligence (AI) to predict Stock Prices

    - by akaphenom
    Given a set of datavery similar to the Motley Fool CAPS system, where individual users enter BUY and SELL recommendations on various equities. What I would like to do is show each recommendation and I guess some how rate (1-5) as to whether it was good predictor<5 (ie corellation coeffient = 1) of the future stock price (or eps or whatever) or a horrible predictor (ie corellation coeffient = -1) or somewhere inbetween. Each recommendation is tagged to a particular user, so that can be tracked over time. I can also track market direction (bullish / bearish) based off of something like sp500 price. The components I think that would make sense in the model would be: user direction (long/short) market direction sector of stock The thought is that some users are better in bull markets than bear (and vice versa), and some are better at shorts than longs- and then a cobination the above. I can automatically tag the market direction and sector (based off the market at the time and the equity being recommended). The thought is that I could present a series of screens and allow me to rank each individual recommendation by displaying available data absolute, market and sector out performance for a specfic time period out. I would follow a detailed list for ranking the stocks so that the ranking is as objective as possible. My assumtion is that a single user is right no more than 57% of the time - but who knows. I could load the system and say "Lets rank the recommendation as a predictor of stock value 90 days forward"; and that would represent a very explicit set of rankings. NOW here is the crux - I want to create some sort of machine learning algorithm that can identify patterns over a series of time so that as recommendations stream into the application we maintain a ranking of that stock (ie. similar to correlation coeeficient) as to the likelihood of that recommendation (in addition to the past series of recommendations ) will affect the price. Now here is the super crux. I have never taken an AI class / read an AI book / never mind specific to machine learning. So I cam looking for guidance - sample or description of a similar system I could adapt. Place to look for info or any general help. Or even push me in the right direction to get started... My hope is to implment this with F# and be able to impress my friends with a new skillset in F# with an implementation of machine learnign and potentially something (application / source) I can include in a tech portfolio or blog space; Thank you for any advice in advance.

  • Any significant performance improvement by using bitwise operators instead of plain int sums in C#?

    - by tunnuz
    Hello, I started working with C# a few weeks ago and I'm now in a situation where I need to build up a "bit set" flag to handle different cases in an algorithm. I have thus two options: enum RelativePositioning { LEFT = 0, RIGHT = 1, BOTTOM = 2, TOP = 3, FRONT = 4, BACK = 5 } pos = ((eye.X < minCorner.X ? 1 : 0) << RelativePositioning.LEFT) + ((eye.X > maxCorner.X ? 1 : 0) << RelativePositioning.RIGHT) + ((eye.Y < minCorner.Y ? 1 : 0) << RelativePositioning.BOTTOM) + ((eye.Y > maxCorner.Y ? 1 : 0) << RelativePositioning.TOP) + ((eye.Z < minCorner.Z ? 1 : 0) << RelativePositioning.FRONT) + ((eye.Z > maxCorner.Z ? 1 : 0) << RelativePositioning.BACK); Or: enum RelativePositioning { LEFT = 1, RIGHT = 2, BOTTOM = 4, TOP = 8, FRONT = 16, BACK = 32 } if (eye.X < minCorner.X) { pos += RelativePositioning.LEFT; } if (eye.X > maxCorner.X) { pos += RelativePositioning.RIGHT; } if (eye.Y < minCorner.Y) { pos += RelativePositioning.BOTTOM; } if (eye.Y > maxCorner.Y) { pos += RelativePositioning.TOP; } if (eye.Z > maxCorner.Z) { pos += RelativePositioning.FRONT; } if (eye.Z < minCorner.Z) { pos += RelativePositioning.BACK; } I could have used something as ((eye.X > maxCorner.X) << 1) but C# does not allow implicit casting from bool to int and the ternary operator was similar enough. My question now is: is there any performance improvement in using the first version over the second? Thank you Tommaso

  • Using Objective-C Blocks

    - by Sean
    Today I was experimenting with Objective-C's blocks so I thought I'd be clever and add to NSArray a few functional-style collection methods that I've seen in other languages: @interface NSArray (FunWithBlocks) - (NSArray *)collect:(id (^)(id obj))block; - (NSArray *)select:(BOOL (^)(id obj))block; - (NSArray *)flattenedArray; @end The collect: method takes a block which is called for each item in the array and expected to return the results of some operation using that item. The result is the collection of all of those results. (If the block returns nil, nothing is added to the result set.) The select: method will return a new array with only the items from the original that, when passed as an argument to the block, the block returned YES. And finally, the flattenedArray method iterates over the array's items. If an item is an array, it recursively calls flattenedArray on it and adds the results to the result set. If the item isn't an array, it adds the item to the result set. The result set is returned when everything is finished. So now that I had some infrastructure, I needed a test case. I decided to find all package files in the system's application directories. This is what I came up with: NSArray *packagePaths = [[[NSSearchPathForDirectoriesInDomains(NSAllApplicationsDirectory, NSAllDomainsMask, YES) collect:^(id path) { return (id)[[[NSFileManager defaultManager] contentsOfDirectoryAtPath:path error:nil] collect:^(id file) { return (id)[path stringByAppendingPathComponent:file]; }]; }] flattenedArray] select:^(id fullPath) { return [[NSWorkspace sharedWorkspace] isFilePackageAtPath:fullPath]; }]; Yep - that's all one line and it's horrid. I tried a few approaches at adding newlines and indentation to try to clean it up, but it still feels like the actual algorithm is lost in all the noise. I don't know if it's just a syntax thing or my relative in-experience with using a functional style that's the problem, though. For comparison, I decided to do it "the old fashioned way" and just use loops: NSMutableArray *packagePaths = [NSMutableArray new]; for (NSString *searchPath in NSSearchPathForDirectoriesInDomains(NSAllApplicationsDirectory, NSAllDomainsMask, YES)) { for (NSString *file in [[NSFileManager defaultManager] contentsOfDirectoryAtPath:searchPath error:nil]) { NSString *packagePath = [searchPath stringByAppendingPathComponent:file]; if ([[NSWorkspace sharedWorkspace] isFilePackageAtPath:packagePath]) { [packagePaths addObject:packagePath]; } } } IMO this version was easier to write and is more readable to boot. I suppose it's possible this was somehow a bad example, but it seems like a legitimate way to use blocks to me. (Am I wrong?) Am I missing something about how to write or structure Objective-C code with blocks that would clean this up and make it clearer than (or even just as clear as) the looped version?

  • NTRU Pseudo-code for computing Polynomial Inverses

    - by Neville
    Hello all. I was wondering if anyone could tell me how to implement line 45 of the following pseudo-code. Require: the polynomial to invert a(x), N, and q. 1: k = 0 2: b = 1 3: c = 0 4: f = a 5: g = 0 {Steps 5-7 set g(x) = x^N - 1.} 6: g[0] = -1 7: g[N] = 1 8: loop 9: while f[0] = 0 do 10: for i = 1 to N do 11: f[i - 1] = f[i] {f(x) = f(x)/x} 12: c[N + 1 - i] = c[N - i] {c(x) = c(x) * x} 13: end for 14: f[N] = 0 15: c[0] = 0 16: k = k + 1 17: end while 18: if deg(f) = 0 then 19: goto Step 32 20: end if 21: if deg(f) < deg(g) then 22: temp = f {Exchange f and g} 23: f = g 24: g = temp 25: temp = b {Exchange b and c} 26: b = c 27: c = temp 28: end if 29: f = f XOR g 30: b = b XOR c 31: end loop 32: j = 0 33: k = k mod N 34: for i = N - 1 downto 0 do 35: j = i - k 36: if j < 0 then 37: j = j + N 38: end if 39: Fq[j] = b[i] 40: end for 41: v = 2 42: while v < q do 43: v = v * 2 44: StarMultiply(a; Fq; temp;N; v) 45: temp = 2 - temp mod v 46: StarMultiply(Fq; temp; Fq;N; v) 47: end while 48: for i = N - 1 downto 0 do 49: if Fq[i] < 0 then 50: Fq[i] = Fq[i] + q 51: end if 52: end for 53: {Inverse Poly Fq returns the inverse polynomial, Fq, through the argument list.} The function StarMultiply returns a polynomial (array) stored in the variable temp. Basically temp is a polynomial (I'm representing it as an array) and v is an integer (say 4 or 8), so what exactly does temp = 2-temp mod v equate to in normal language? How should i implement that line in my code. Can someone give me an example. The above algorithm is for computing Inverse polynomials for NTRUEncrypt key generation. The pseudo-code can be found on page 28 of this document. Thanks in advance.

  • C# file Decryption - Bad Data

    - by Jon
    Hi all, I am in the process of rewriting an old application. The old app stored data in a scoreboard file that was encrypted with the following code: private const String SSecretKey = @"?B?n?Mj?"; public DataTable GetScoreboardFromFile() { FileInfo f = new FileInfo(scoreBoardLocation); if (!f.Exists) { return setupNewScoreBoard(); } DESCryptoServiceProvider DES = new DESCryptoServiceProvider(); //A 64 bit key and IV is required for this provider. //Set secret key For DES algorithm. DES.Key = ASCIIEncoding.ASCII.GetBytes(SSecretKey); //Set initialization vector. DES.IV = ASCIIEncoding.ASCII.GetBytes(SSecretKey); //Create a file stream to read the encrypted file back. FileStream fsread = new FileStream(scoreBoardLocation, FileMode.Open, FileAccess.Read); //Create a DES decryptor from the DES instance. ICryptoTransform desdecrypt = DES.CreateDecryptor(); //Create crypto stream set to read and do a //DES decryption transform on incoming bytes. CryptoStream cryptostreamDecr = new CryptoStream(fsread, desdecrypt, CryptoStreamMode.Read); DataTable dTable = new DataTable("scoreboard"); dTable.ReadXml(new StreamReader(cryptostreamDecr)); cryptostreamDecr.Close(); fsread.Close(); return dTable; } This works fine. I have copied the code into my new app so that I can create a legacy loader and convert the data into the new format. The problem is I get a "Bad Data" error: System.Security.Cryptography.CryptographicException was unhandled Message="Bad Data.\r\n" Source="mscorlib" The error fires at this line: dTable.ReadXml(new StreamReader(cryptostreamDecr)); The encrypted file was created today on the same machine with the old code. I guess that maybe the encryption / decryption process uses the application name / file or something and therefore means I can not open it. Does anyone have an idea as to: A) Be able explain why this isn't working? B) Offer a solution that would allow me to be able to open files that were created with the legacy application and be able to convert them please? Thank you

  • Calling a .NET web service (WSE 3.0, WS-Security) from JAXWS-RI

    - by elduff
    I'm writing a JAXWS-RI client that must call a .NET Web Service that is using WS-Security. The service's WSDL does not contain any WS-Security info, but I have an example soap message from the service's authors and know that I must include wsse:Security headers, including X:509 tokens. I've been researching, and I've seen example of folks calling this type of web service from Axis and CXF (in conjunction with Rampart and/or WSS4J), but nothing about using plain JAXWS-RI itself. However, I'm (unfortunately) constrained to using JAXWS-RI by my gov't client. Does anyone have any examples/documentation of doing this from JAXWS-RI? I need to ultimately generate a SOAP header that looks something like the one below - this is a sample soap:header from a .NET client written by the service's authors. (Note: I've put the 'VALUE_HERE' string in places where I need to provide my own values) <soapenv:Envelope xmlns:iri="http://EOIR/IRIES" xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/" xmlns:xenc="http://www.w3.org/2001/04/xmlenc#"> <soapenv:Header xmlns:wsa="http://www.w3.org/2005/08/addressing"> <wsse:Security xmlns:wsse="http://docs.oasis-open.org/wss/2004/01/oasis-200401- wss-wssecurity-secext-1.0.xsd"> <xenc:EncryptedKey Id="VALUE_HERE"> <xenc:EncryptionMethod Algorithm="http://www.w3.org/2001/04/xmlenc#rsa-oaep-mgf1p"/> <ds:KeyInfo xmlns:ds="http://www.w3.org/2000/09/xmldsig#"> <wsse:SecurityTokenReference> <wsse:KeyIdentifier EncodingType="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-soap-message-security-1.0#Base64Binary" ValueType="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-x509-token-profile-1.0#X509v3"> VALUE_HERE </wsse:KeyIdentifier> </wsse:SecurityTokenReference> </ds:KeyInfo> <xenc:CipherData> <xenc:CipherValue>VALUE_HERE</xenc:CipherValue> </xenc:CipherData> <xenc:ReferenceList> <xenc:DataReference URI="#EncDataId-8"/> </xenc:ReferenceList> </xenc:EncryptedKey> </wsse:Security>

  • Solving the NP-complete problem in XKCD

    - by Adam Tuttle
    The problem/comic in question: http://xkcd.com/287/ I'm not sure this is the best way to do it, but here's what I've come up with so far. I'm using CFML, but it should be readable by anyone. <cffunction name="testCombo" returntype="boolean"> <cfargument name="currentCombo" type="string" required="true" /> <cfargument name="currentTotal" type="numeric" required="true" /> <cfargument name="apps" type="array" required="true" /> <cfset var a = 0 /> <cfset var found = false /> <cfloop from="1" to="#arrayLen(arguments.apps)#" index="a"> <cfset arguments.currentCombo = listAppend(arguments.currentCombo, arguments.apps[a].name) /> <cfset arguments.currentTotal = arguments.currentTotal + arguments.apps[a].cost /> <cfif arguments.currentTotal eq 15.05> <!--- print current combo ---> <cfoutput><strong>#arguments.currentCombo# = 15.05</strong></cfoutput><br /> <cfreturn true /> <cfelseif arguments.currentTotal gt 15.05> <cfoutput>#arguments.currentCombo# > 15.05 (aborting)</cfoutput><br /> <cfreturn false /> <cfelse> <!--- less than 15.05 ---> <cfoutput>#arguments.currentCombo# < 15.05 (traversing)</cfoutput><br /> <cfset found = testCombo(arguments.currentCombo, arguments.currentTotal, arguments.apps) /> </cfif> </cfloop> </cffunction> <cfset mf = {name="Mixed Fruit", cost=2.15} /> <cfset ff = {name="French Fries", cost=2.75} /> <cfset ss = {name="side salad", cost=3.35} /> <cfset hw = {name="hot wings", cost=3.55} /> <cfset ms = {name="moz sticks", cost=4.20} /> <cfset sp = {name="sampler plate", cost=5.80} /> <cfset apps = [ mf, ff, ss, hw, ms, sp ] /> <cfloop from="1" to="6" index="b"> <cfoutput>#testCombo(apps[b].name, apps[b].cost, apps)#</cfoutput> </cfloop> The above code tells me that the only combination that adds up to $15.05 is 7 orders of Mixed Fruit, and it takes 232 executions of my testCombo function to complete. Is there a better algorithm to come to the correct solution? Did I come to the correct solution?

  • python RSA implementation with PKCS#1

    - by user307016
    I got the following code in javascript for RSA implementionhttp://www-cs-students.stanford.edu/~tjw/jsbn/: // Return the PKCS#1 RSA encryption of "text" as an even-length hex string function RSAEncrypt(text) { var m = pkcs1pad2(text,(this.n.bitLength()+7)>>3); if(m == null) return null; var c = this.doPublic(m); if(c == null) return null; var h = c.toString(16); if((h.length & 1) == 0) return h; else return "0" + h; } // PKCS#1 (type 2, random) pad input string s to n bytes, and return a bigint function pkcs1pad2(s,n) { if(n < s.length + 11) { // TODO: fix for utf-8 alert("Message too long for RSA"); return null; } var ba = new Array(); var i = s.length - 1; while(i >= 0 && n > 0) { var c = s.charCodeAt(i--); if(c < 128) { // encode using utf-8 ba[--n] = c; } else if((c > 127) && (c < 2048)) { ba[--n] = (c & 63) | 128; ba[--n] = (c >> 6) | 192; } else { ba[--n] = (c & 63) | 128; ba[--n] = ((c >> 6) & 63) | 128; ba[--n] = (c >> 12) | 224; } } ba[--n] = 0; var rng = new SecureRandom(); var x = new Array(); while(n > 2) { // random non-zero pad x[0] = 0; while(x[0] == 0) rng.nextBytes(x); ba[--n] = x[0]; } ba[--n] = 2; ba[--n] = 0; return new BigInteger(ba); } In the snippets above, it seems that the pkcs1pad2 function is used for padding the message with some random bytes(maybe sth like 0|2|random|0 ) in front of the message. I'm using the python rsa package (http://stuvel.eu/rsa) for imitating the javascript result, i'm a newbie to python world and have no idea to traslate javascript algorithm code to the python code. Any help would be appreciated. Jiee

  • Maze Navigation in Player Stage with Roomba

    - by Scott
    Here is my code:

        /* Scott Landau
           Robot Lab Assignment 1 */

        // Standard Java Libs
        import java.io.*;

        // Player/Stage Libs
        import javaclient2.*;
        import javaclient2.structures.*;
        import javaclient2.structures.sonar.*;

        // Begin
        public class SpinningRobot {
            public static Position2DInterface pos = null;
            public static LaserInterface laser = null;

            public static void main(String[] args) {
                PlayerClient robot = new PlayerClient("localhost", 6665);
                laser = robot.requestInterfaceLaser(0, PlayerConstants.PLAYER_OPEN_MODE);
                pos = robot.requestInterfacePosition2D(0, PlayerConstants.PLAYER_OPEN_MODE);
                robot.runThreaded(-1, -1);

                pos.setSpeed(0.5f, -0.25f);

                // end pos
                float x, y;
                x = 46.0f;
                y = -46.0f;

                boolean done = false;
                while (!done) {
                    if (laser.isDataReady()) {
                        float[] laser_data = laser.getData().getRanges();
                        System.out.println("== IR Sensor ==");
                        System.out.println("Left Wall Distance: " + laser_data[360]);
                        System.out.println("Right Wall Distance: " + laser_data[0]);

                        // if we are too close to the left wall, turn away from it
                        // so we can keep guiding along the left wall
                        if (laser_data[360] < 0.6f) {
                            while (laser_data[360] < 0.6f) {
                                pos.setSpeed(0.5f, -0.5f);
                            }
                        } else if (laser_data[0] < 0.6f) {
                            while (laser_data[0] < 0.6f) {
                                pos.setSpeed(0.5f, 0.5f);
                            }
                        }
                        pos.setSpeed(0.5f, -0.25f); // end pos?
                        done = ((pos.getX() == x) && (pos.getY() == y));
                    }
                }
            }
        } // End

    I was trying to have the Roomba travel continuously in a slight right curve, turning away quickly from any wall it came too close to, as recognized by its laser. I can only use laser_data[360] and laser_data[0] for this one robot. I think this would eventually navigate the maze. However, I am using the Player/Stage platform, and Stage freezes when the Roomba comes close to a wall using this code; I have no idea why. Also, if you can think of a better maze navigation algorithm, please let me know. Thank you!
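
    One observation, hedged since I can't run Stage here: the inner while loops test laser_data, an array captured once per outer iteration, so their conditions can never change from inside - once entered, the loop spins on setSpeed forever without consuming new scans, which would look exactly like Stage freezing near a wall. Re-reading the laser on every iteration avoids that. A sketch of the loop structure in Python-style pseudocode (read_laser, set_speed, and at_goal are hypothetical stand-ins for the javaclient2 calls):

        SAFE = 0.6   # metres; the same threshold as the Java code

        while not at_goal():
            scan = read_laser()          # refresh the scan EVERY iteration
            left, right = scan[360], scan[0]
            if left < SAFE:              # too close to the left wall
                set_speed(forward=0.5, turn=-0.5)    # veer right
            elif right < SAFE:           # too close to the right wall
                set_speed(forward=0.5, turn=0.5)     # veer left
            else:
                set_speed(forward=0.5, turn=-0.25)   # default slight right curve

    The same shape works for a classic left-wall follower: keep the left reading inside a band, turning toward the wall when the distance grows and away when it shrinks, which is usually enough to traverse a simply-connected maze.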

  • How can I take the first 100 characters of HTML content (without stripping the TAGS!)

    - by Atomiton
    There are lots of questions on how to strip html tags, but not many on functions/methods to close them. Here's the situation. I have a 500 character Message summary ( which includes html tags ), but I only want the first 100 characters. Problem is if I truncate the message, it could be in the middle of an html tag... which messes up stuff. Assuming the html is something like this: <div class="bd">"Lorem ipsum dolor sit amet, consectetur adipisicing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua. <br/> <br/>Some Dates: April 30 - May 2, 2010 <br/> <p>Ut enim ad minim veniam, quis nostrud exercitation ullamco laboris nisi ut aliquip ex ea commodo consequat. <em>Duis aute irure dolor in reprehenderit</em> in voluptate velit esse cillum dolore eu fugiat nulla pariatur. Excepteur sint occaecat cupidatat non proident, sunt in culpa qui officia deserunt mollit anim id est laborum. <br/> </p> For more information about Lorem Ipsum doemdloe, visit: <br/> <a href="http://www.somesite.com" title="Some Conference">Some text link</a><br/> </div> How would I take the first ~100 characters or so? ( Although, ideally that would be the first approximately 100 characters of "CONTENT" ( in between the html tags ) I'm assuming the best way to do this would be a recursive algorithm that keeps track of the html tags and appends any tags that would be truncated, but that may not be the best approach. My first thoughts are using recursion to count nested tags, and when we reach 100 characters, look for the next "<" and then use recursion to write the closing html tags needed from there. The reason for doing this is to make a short summary of existing articles without requiring the user to go back and provide summaries for all the articles. I want to keep the html formatting, if possible. NOTE: Please ignore that the html isn't totally semantic. This is what I have to deal with from my WYSIWYG. EDIT: I added a potential solution ( that seems to work ) I figure others will run into this problem as well. I'm not sure it's the best... and it's probably not totally robust ( in fact, I know it isn't ), but I'd appreciate any feedback
