Search Results

Search found 11321 results on 453 pages for 'shared libraries'.


  • Cisco ASA 5505 - L2TP over IPsec

    - by xraminx
    I have followed this document on the Cisco site to set up the L2TP over IPsec connection. When I try to establish a VPN to the ASA 5505 from my Windows XP machine, after I click the "connect" button the "Connecting ..." dialog box appears, and after a while I get this error message: Error 800: Unable to establish VPN connection. The VPN server may be unreachable, or security parameters may not be configured properly for this connection. ASA version 7.2(4), ASDM version 5.2(4), Windows XP SP3. Windows XP and the ASA 5505 are on the same LAN for test purposes.
    Edit 1: There are two VLANs defined on the Cisco device (the standard setup on the Cisco ASA 5505): port 0 is on VLAN2 (outside), and ports 1 to 7 are on VLAN1 (inside). I run a cable from my Linksys home router (10.50.10.1) to the Cisco ASA 5505 on port 0 (outside). Port 0 has IP 192.168.1.1 used internally by Cisco, and I have also assigned the external IP 10.50.10.206 to port 0 (outside). I run a cable from the Windows XP machine to the Cisco router on port 1 (inside). Port 1 is assigned IP 192.168.1.2 by the Cisco router. The Windows XP machine is also connected to my Linksys home router via wireless (10.50.10.141).
    Edit 2: When I try to establish the VPN, the Cisco device's real-time log viewer shows 7 entries like this: Severity:5 Date:Sep 15 2009 Time: 14:51:29 SyslogID: 713904 Destination IP = 10.50.10.141, Description: No crypto map bound to interface... dropping pkt
    Edit 3: This is the setup on the router right now. Result of the command: "show run" : Saved : ASA Version 7.2(4) ! hostname ciscoasa domain-name default.domain.invalid enable password HGFHGFGHFHGHGFHGF encrypted passwd NMMNMNMNMNMNMN encrypted names name 192.168.1.200 WebServer1 name 10.50.10.206 external-ip-address ! interface Vlan1 nameif inside security-level 100 ip address 192.168.1.1 255.255.255.0 ! interface Vlan2 nameif outside security-level 0 ip address external-ip-address 255.0.0.0 ! interface Vlan3 no nameif security-level 50 no ip address ! interface Ethernet0/0 switchport access vlan 2 ! interface Ethernet0/1 ! interface Ethernet0/2 ! interface Ethernet0/3 ! interface Ethernet0/4 ! interface Ethernet0/5 ! interface Ethernet0/6 ! interface Ethernet0/7 !
ftp mode passive dns server-group DefaultDNS domain-name default.domain.invalid object-group service l2tp udp port-object eq 1701 access-list outside_access_in remark Allow incoming tcp/http access-list outside_access_in extended permit tcp any host WebServer1 eq www access-list outside_access_in extended permit udp any any eq 1701 access-list inside_nat0_outbound extended permit ip any 192.168.1.208 255.255.255.240 access-list inside_cryptomap_1 extended permit ip interface outside interface inside pager lines 24 logging enable logging asdm informational mtu inside 1500 mtu outside 1500 ip local pool PPTP-VPN 192.168.1.210-192.168.1.220 mask 255.255.255.0 icmp unreachable rate-limit 1 burst-size 1 asdm image disk0:/asdm-524.bin no asdm history enable arp timeout 14400 global (outside) 1 interface nat (inside) 0 access-list inside_nat0_outbound nat (inside) 1 0.0.0.0 0.0.0.0 static (inside,outside) tcp interface www WebServer1 www netmask 255.255.255.255 access-group outside_access_in in interface outside timeout xlate 3:00:00 timeout conn 1:00:00 half-closed 0:10:00 udp 0:02:00 icmp 0:00:02 timeout sunrpc 0:10:00 h323 0:05:00 h225 1:00:00 mgcp 0:05:00 mgcp-pat 0:05:00 timeout sip 0:30:00 sip_media 0:02:00 sip-invite 0:03:00 sip-disconnect 0:02:00 timeout sip-provisional-media 0:02:00 uauth 0:05:00 absolute http server enable http 192.168.1.0 255.255.255.0 inside no snmp-server location no snmp-server contact snmp-server enable traps snmp authentication linkup linkdown coldstart crypto ipsec transform-set TRANS_ESP_3DES_SHA esp-3des esp-sha-hmac crypto ipsec transform-set TRANS_ESP_3DES_SHA mode transport crypto ipsec transform-set TRANS_ESP_3DES_MD5 esp-3des esp-md5-hmac crypto ipsec transform-set TRANS_ESP_3DES_MD5 mode transport crypto map outside_map 1 match address inside_cryptomap_1 crypto map outside_map 1 set transform-set TRANS_ESP_3DES_MD5 crypto map outside_map interface inside crypto isakmp enable outside crypto isakmp policy 10 authentication pre-share encryption 3des hash md5 group 2 lifetime 86400 telnet timeout 5 ssh timeout 5 console timeout 0 dhcpd auto_config outside ! dhcpd address 192.168.1.2-192.168.1.33 inside dhcpd enable inside ! group-policy DefaultRAGroup internal group-policy DefaultRAGroup attributes dns-server value 192.168.1.1 vpn-tunnel-protocol IPSec l2tp-ipsec username myusername password FGHFGHFHGFHGFGFHF nt-encrypted tunnel-group DefaultRAGroup general-attributes address-pool PPTP-VPN default-group-policy DefaultRAGroup tunnel-group DefaultRAGroup ipsec-attributes pre-shared-key * tunnel-group DefaultRAGroup ppp-attributes no authentication chap authentication ms-chap-v2 ! ! prompt hostname context Cryptochecksum:a9331e84064f27e6220a8667bf5076c1 : end

    Read the article

  • FFMPEG Segfault Solutions

    - by Brentley_11
    I'm trying to convert a bunch of movies into h.264 mp4's using FFMPEG. These movies are sourced from various portable camcorders such as the Flip Mino HD and the Kodak ZI8. One issue I'm having with video from the ZI8 is it seems to be causing FFMPEG to segfault. Here is my command: ffmpeg -i 'XmasSailor720p60fps.MOV' -threads 2 -acodec libfaac -ab 96kb -vcodec libx264 -vpre hq -b 500kb -s 484x272 XmasSailor.mp4 Here is the output: FFmpeg version SVN-r20668, Copyright (c) 2000-2009 Fabrice Bellard, et al. built on Dec 2 2009 18:37:34 with gcc 4.2.4 (Ubuntu 4.2.4-1ubuntu4) configuration: --enable-libfaac --enable-libfaad --enable-libmp3lame --enable-libx264 --enable-gpl --enable-nonfree --enable-postproc --enable-pthreads --enable-shared libavutil 50. 5. 1 / 50. 5. 1 libavcodec 52.42. 0 / 52.42. 0 libavformat 52.39. 2 / 52.39. 2 libavdevice 52. 2. 0 / 52. 2. 0 libswscale 0. 7. 2 / 0. 7. 2 libpostproc 51. 2. 0 / 51. 2. 0 Seems stream 0 codec frame rate differs from container frame rate: 59.94 (60000/1001) -> 29.97 (30000/1001) Input #0, mov,mp4,m4a,3gp,3g2,mj2, from 'XmasSailor720p60fps.MOV': Duration: 00:00:05.37, start: 0.000000, bitrate: 12021 kb/s Stream #0.0(eng): Video: h264, yuv420p, 1280x720 [PAR 1:1 DAR 16:9], 11994 kb/s, 29.97 tbr, 90k tbn, 59.94 tbc Stream #0.1(eng): Audio: aac, 48000 Hz, stereo, s16, 128 kb/s Metadata major_brand : qt minor_version : 0 compatible_brands: qt comment : KODAK Zi8 Pocket Video Camera comment-eng : KODAK Zi8 Pocket Video Camera [libx264 @ 0x99e1020]using SAR=1/1 [libx264 @ 0x99e1020]using cpu capabilities: MMX2 SSE2Fast SSSE3 FastShuffle SSE4.1 Cache64 [libx264 @ 0x99e1020]profile High, level 2.1 Output #0, mp4, to 'XmasSailor.mp4': Stream #0.0(eng): Video: libx264, yuv420p, 484x272 [PAR 1:1 DAR 121:68], q=10-51, 500 kb/s, 30k tbn, 29.97 tbc Stream #0.1(eng): Audio: aac, 48000 Hz, stereo, s16, 96 kb/s Metadata comment : Encoded with the Statusfirm Video Transcoder Stream mapping: Stream #0.0 -> #0.0 Stream #0.1 -> #0.1 Press [q] to stop encoding [h264 @ 0x99de950]B picture before any references, skipping [h264 @ 0x99de950]decode_slice_header error [h264 @ 0x99de950]no frame! Error while decoding stream #0.0 [h264 @ 0x99de950]B picture before any references, skipping [h264 @ 0x99de950]decode_slice_header error [h264 @ 0x99de950]no frame! 
Error while decoding stream #0.0 frame= 20 fps= 0 q=13797729.0 size= 0kB time=0.66 bitrate= 0.6kbits/s frame= 39 fps= 37 q=13797729.0 size= 0kB time=1.30 bitrate= 0.3kbits/s frame= 48 fps= 30 q=33.0 size= 11kB time=0.10 bitrate= 903.0kbits/s frame= 58 fps= 27 q=31.0 size= 22kB time=0.43 bitrate= 421.0kbits/s frame= 67 fps= 25 q=29.0 size= 41kB time=0.73 bitrate= 462.6kbits/s frame= 75 fps= 23 q=29.0 size= 59kB time=1.00 bitrate= 486.7kbits/s frame= 83 fps= 22 q=29.0 size= 81kB time=1.27 bitrate= 521.9kbits/s frame= 90 fps= 21 q=29.0 size= 97kB time=1.50 bitrate= 530.1kbits/s frame= 98 fps= 20 q=29.0 size= 114kB time=1.77 bitrate= 526.9kbits/s frame= 106 fps= 20 q=29.0 size= 134kB time=2.04 bitrate= 537.7kbits/s frame= 114 fps= 19 q=29.0 size= 150kB time=2.30 bitrate= 533.7kbits/s frame= 122 fps= 19 q=29.0 size= 172kB time=2.57 bitrate= 547.8kbits/s frame= 130 fps= 19 q=29.0 size= 193kB time=2.84 bitrate= 557.5kbits/s frame= 136 fps= 18 q=29.0 size= 211kB time=3.04 bitrate= 570.0kbits/s frame= 144 fps= 18 q=29.0 size= 242kB time=3.30 bitrate= 599.5kbits/s frame= 152 fps= 17 q=30.0 size= 261kB time=3.57 bitrate= 598.6kbits/s frame= 157 fps= 15 q=-1.0 Lsize= 368kB time=5.21 bitrate= 579.3kbits/s video:302kB audio:61kB global headers:0kB muxing overhead 1.416371% [libx264 @ 0x99e1020]frame I:1 Avg QP:27.22 size: 8720 [libx264 @ 0x99e1020]frame P:48 Avg QP:25.15 size: 3759 [libx264 @ 0x99e1020]frame B:108 Avg QP:30.10 size: 1105 [libx264 @ 0x99e1020]consecutive B-frames: 0.6% 11.5% 28.8% 59.0% [libx264 @ 0x99e1020]mb I I16..4: 28.5% 47.6% 23.9% [libx264 @ 0x99e1020]mb P I16..4: 0.8% 1.3% 0.5% P16..4: 50.6% 17.7% 13.1% 0.0% 0.0% skip:15.9% [libx264 @ 0x99e1020]mb B I16..4: 0.2% 0.3% 0.1% B16..8: 44.0% 1.2% 2.6% direct: 5.1% skip:46.5% L0:45.5% L1:51.0% BI: 3.5% [libx264 @ 0x99e1020]final ratefactor: 23.51 [libx264 @ 0x99e1020]8x8 transform intra:49.9% inter:67.9% [libx264 @ 0x99e1020]direct mvs spatial:98.1% temporal:1.9% [libx264 @ 0x99e1020]coded y,uvDC,uvAC intra: 54.7% 76.1% 41.4% inter: 17.1% 24.4% 7.8% [libx264 @ 0x99e1020]i16 v,h,dc,p: 18% 52% 5% 25% [libx264 @ 0x99e1020]i8 v,h,dc,ddl,ddr,vr,hd,vl,hu: 12% 22% 9% 7% 10% 10% 9% 8% 13% [libx264 @ 0x99e1020]i4 v,h,dc,ddl,ddr,vr,hd,vl,hu: 13% 18% 8% 8% 10% 13% 10% 9% 12% [libx264 @ 0x99e1020]Weighted P-Frames: Y:10.4% [libx264 @ 0x99e1020]ref P L0: 60.2% 15.3% 11.0% 7.6% 5.2% 0.7% [libx264 @ 0x99e1020]ref B L0: 72.6% 15.6% 11.8% [libx264 @ 0x99e1020]kb/s:471.17 Segmentation fault I'm wondering if anyone else has ran into similar issues. I wasn't able to find anything helpful via Google. Another question I have is if anyone knows of a company that offers paid support for FFMPEG. Thank you for your time.

    Read the article

  • Move <option> to top of list with JavaScript

    - by Adam
    I'm trying to create a button that will move the currently selected OPTION in a SELECT MULTIPLE list to the top of that list. I currently have OptionTransfer.js implemented, which is allowing me to move items up and down the list. I want to add a new function function MoveOptionTop(obj) { ... } Here is the source of OptionTransfer.js // =================================================================== // Author: Matt Kruse // WWW: http://www.mattkruse.com/ // // NOTICE: You may use this code for any purpose, commercial or // private, without any further permission from the author. You may // remove this notice from your final code if you wish, however it is // appreciated by the author if at least my web site address is kept. // // You may *NOT* re-distribute this code in any way except through its // use. That means, you can include it in your product, or your web // site, or any other form where the code is actually being used. You // may not put the plain javascript up on your site for download or // include it in your javascript libraries for download. // If you wish to share this code with others, please just point them // to the URL instead. // Please DO NOT link directly to my .js files from your site. Copy // the files to your server and use them there. Thank you. // =================================================================== /* SOURCE FILE: selectbox.js */ function hasOptions(obj){if(obj!=null && obj.options!=null){return true;}return false;} function selectUnselectMatchingOptions(obj,regex,which,only){if(window.RegExp){if(which == "select"){var selected1=true;var selected2=false;}else if(which == "unselect"){var selected1=false;var selected2=true;}else{return;}var re = new RegExp(regex);if(!hasOptions(obj)){return;}for(var i=0;i(b.text+"")){return 1;}return 0;});for(var i=0;i3){var regex = arguments[3];if(regex != ""){unSelectMatchingOptions(from,regex);}}if(!hasOptions(from)){return;}for(var i=0;i=0;i--){var o = from.options[i];if(o.selected){from.options[i] = null;}}if((arguments.length=0;i--){if(obj.options[i].selected){if(i !=(obj.options.length-1) && ! 
obj.options[i+1].selected){swapOptions(obj,i,i+1);obj.options[i+1].selected = true;}}}} function removeSelectedOptions(from){if(!hasOptions(from)){return;}for(var i=(from.options.length-1);i=0;i--){var o=from.options[i];if(o.selected){from.options[i] = null;}}from.selectedIndex = -1;} function removeAllOptions(from){if(!hasOptions(from)){return;}for(var i=(from.options.length-1);i=0;i--){from.options[i] = null;}from.selectedIndex = -1;} function addOption(obj,text,value,selected){if(obj!=null && obj.options!=null){obj.options[obj.options.length] = new Option(text, value, false, selected);}} /* SOURCE FILE: OptionTransfer.js */ function OT_transferLeft(){moveSelectedOptions(this.right,this.left,this.autoSort,this.staticOptionRegex);this.update();} function OT_transferRight(){moveSelectedOptions(this.left,this.right,this.autoSort,this.staticOptionRegex);this.update();} function OT_transferAllLeft(){moveAllOptions(this.right,this.left,this.autoSort,this.staticOptionRegex);this.update();} function OT_transferAllRight(){moveAllOptions(this.left,this.right,this.autoSort,this.staticOptionRegex);this.update();} function OT_saveRemovedLeftOptions(f){this.removedLeftField = f;} function OT_saveRemovedRightOptions(f){this.removedRightField = f;} function OT_saveAddedLeftOptions(f){this.addedLeftField = f;} function OT_saveAddedRightOptions(f){this.addedRightField = f;} function OT_saveNewLeftOptions(f){this.newLeftField = f;} function OT_saveNewRightOptions(f){this.newRightField = f;} function OT_update(){var removedLeft = new Object();var removedRight = new Object();var addedLeft = new Object();var addedRight = new Object();var newLeft = new Object();var newRight = new Object();for(var i=0;i0){str=str+delimiter;}str=str+val;}return str;} function OT_setDelimiter(val){this.delimiter=val;} function OT_setAutoSort(val){this.autoSort=val;} function OT_setStaticOptionRegex(val){this.staticOptionRegex=val;} function OT_init(theform){this.form = theform;if(!theform[this.left]){alert("OptionTransfer init(): Left select list does not exist in form!");return false;}if(!theform[this.right]){alert("OptionTransfer init(): Right select list does not exist in form!");return false;}this.left=theform[this.left];this.right=theform[this.right];for(var i=0;i
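    For reference, here is a minimal sketch of the function I am after, written to follow the conventions of the selectbox.js helpers above (obj is the <select multiple> element, and options are assumed to be direct children of the select, with no <optgroup>). It is not part of OptionTransfer.js; it simply compacts the selected options to the top while keeping their relative order:

    // Sketch only, not part of OptionTransfer.js: move every selected option to
    // the top of the list, preserving the relative order of the selection.
    // Assumes "obj" is the <select multiple> element and that its options are
    // direct children of the select (no <optgroup>).
    function MoveOptionTop(obj) {
      if (!hasOptions(obj)) { return; }   // same guard used by the helpers above
      var insertAt = 0;                   // slot where the next selected option lands
      for (var i = 0; i < obj.options.length; i++) {
        if (obj.options[i].selected) {
          if (i != insertAt) {
            // insertBefore() moves the node, so no explicit removal is needed
            obj.insertBefore(obj.options[i], obj.options[insertAt]);
          }
          insertAt++;
        }
      }
    }

    It would be wired up like the other buttons, e.g. <input type="button" value="Top" onclick="MoveOptionTop(document.forms['myform']['myList'])"> (the form and list names here are placeholders).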

    Read the article

  • Linux C++: Linker is outputting strange errors

    - by knight666
    Alright, here is the output I get: arm-none-linux-gnueabi-ld --entry=main -dynamic-linker=/system/bin/linker -rpath-link=/home/oem/android-ndk-r3/build/platforms/android-5/arch-arm/usr/lib -L/home/oem/android-ndk-r3/build/platforms/android-5/arch-arm/usr/lib -nostdlib -lstdc++ -lm -lGLESv1_CM -rpath=/home/oem/android-ndk-r3/build/platforms/android-5/arch-arm/usr/lib -rpath=../../YoghurtGum/lib/Android -L./lib/Android intermediate/Alien.o intermediate/Bullet.o intermediate/Game.o intermediate/Player.o ../../YoghurtGum/bin/YoghurtGum.a -o bin/Galaxians.android intermediate/Game.o: In function `Galaxians::Init()': /media/YoghurtGum/Tests/Galaxians/src/Game.cpp:45: undefined reference to `__cxa_end_cleanup' /media/YoghurtGum/Tests/Galaxians/src/Game.cpp:44: undefined reference to `__cxa_end_cleanup' intermediate/Game.o:(.ARM.extab+0x18): undefined reference to `__gxx_personality_v0' intermediate/Game.o: In function `Player::Update()': /media/YoghurtGum/Tests/Galaxians/src/Player.h:41: undefined reference to `__cxa_end_cleanup' intermediate/Game.o:(.ARM.extab.text._ZN6Player6UpdateEv[_ZN6Player6UpdateEv]+0x0): undefined reference to `__gxx_personality_v0' intermediate/Game.o:(.rodata._ZTIN10YoghurtGum4GameE[_ZTIN10YoghurtGum4GameE]+0x0): undefined reference to `vtable for __cxxabiv1::__class_type_info' intermediate/Game.o:(.rodata._ZTI6Player[_ZTI6Player]+0x0): undefined reference to `vtable for __cxxabiv1::__si_class_type_info' intermediate/Game.o:(.rodata._ZTIN10YoghurtGum6EntityE[_ZTIN10YoghurtGum6EntityE]+0x0): undefined reference to `vtable for __cxxabiv1::__si_class_type_info' intermediate/Game.o:(.rodata._ZTIN10YoghurtGum6ObjectE[_ZTIN10YoghurtGum6ObjectE]+0x0): undefined reference to `vtable for __cxxabiv1::__class_type_info' intermediate/Game.o:(.rodata._ZTI6Bullet[_ZTI6Bullet]+0x0): undefined reference to `vtable for __cxxabiv1::__si_class_type_info' intermediate/Game.o:(.rodata._ZTI5Alien[_ZTI5Alien]+0x0): undefined reference to `vtable for __cxxabiv1::__si_class_type_info' intermediate/Game.o:(.rodata+0x20): undefined reference to `vtable for __cxxabiv1::__si_class_type_info' ../../YoghurtGum/bin/YoghurtGum.a(Sprite.o):(.rodata._ZTIN10YoghurtGum16SpriteDataOpenGLE[_ZTIN10YoghurtGum16SpriteDataOpenGLE]+0x0): undefined reference to `vtable for __cxxabiv1::__si_class_type_info' ../../YoghurtGum/bin/YoghurtGum.a(Sprite.o):(.rodata._ZTIN10YoghurtGum10SpriteDataE[_ZTIN10YoghurtGum10SpriteDataE]+0x0): undefined reference to `vtable for __cxxabiv1::__class_type_info' make: *** [bin/Galaxians.android] Fout 1 Here's an error I managed to decipher: intermediate/Game.o: In function `Galaxians::Init()': /media/YoghurtGum/Tests/Galaxians/src/Game.cpp:45: undefined reference to `__cxa_end_cleanup' /media/YoghurtGum/Tests/Galaxians/src/Game.cpp:44: undefined reference to `__cxa_end_cleanup' This is line 43 through 45: Assets::AddSprite(new Sprite("media\\ViperMarkII.bmp"), "ship"); Assets::AddSprite(new Sprite("media\\alien.bmp"), "alien"); Assets::AddSprite(new Sprite("media\\bat_ball.bmp"), "bullet"); So, what seems funny to me is that the first new is fine (line 43), but the second one isn't. What could cause this? intermediate/Game.o: In function `Player::Update()': /media/YoghurtGum/Tests/Galaxians/src/Player.h:41: undefined reference to `__cxa_end_cleanup' Another issue with new: Engine::game->scene_current->AddObject(new Bullet(m_X + 10, m_Y)); I have no idea where to begin with the other issues. These are my makefiles, They're a giant mess because I'm just trying to get it to work. 
Static library: # ====================================== # # # # YoghurtGum static library # # # # ====================================== # include ../YoghurtGum.mk PROGS = bin/YoghurtGum.a SOURCES = $(wildcard src/*.cpp) #$(YG_PATH_LIB)/libGLESv1_CM.so \ #$(YG_PATH_LIB)/libEGL.so \ YG_LINK_OPTIONS = -shared YG_LIBRARIES = \ $(YG_PATH_LIB)/libc.a \ $(YG_PATH_LIB)/libc.so \ $(YG_PATH_LIB)/libstdc++.a \ $(YG_PATH_LIB)/libstdc++.so \ $(YG_PATH_LIB)/libm.a \ $(YG_PATH_LIB)/libm.so \ $(YG_PATH_LIB)/libui.so \ $(YG_PATH_LIB)/liblog.so \ $(YG_PATH_LIB)/libGLESv2.so \ $(YG_PATH_LIB)/libcutils.so \ YG_OBJECTS = $(patsubst src/%.cpp, $(YG_INT)/%.o, $(SOURCES)) YG_NDK_PATH_LIB = /home/oem/android-ndk-r3/build/platforms/android-5/arch-arm/usr/lib all: $(PROGS) rebuild: clean $(PROGS) # remove all .o objects from intermediate and all .android objects from bin clean: rm -f $(YG_INT)/*.o $(YG_BIN)/*.a copy: acpy ../$(PROGS) $(PROGS): $(YG_OBJECTS) $(YG_ARCHIVER) -vq $(PROGS) $(YG_NDK_PATH_LIB)/crtbegin_static.o $(YG_NDK_PATH_LIB)/crtend_android.o $^ && \ $(YG_ARCHIVER) -vr $(PROGS) $(YG_LIBRARIES) $(YG_OBJECTS): $(YG_INT)/%.o : $(YG_SRC)/%.cpp $(YG_COMPILER) $(YG_FLAGS) -I $(GLES_INCLUDES) -c $< -o $@ Test game project: # ====================================== # # # # Galaxians # # # # ====================================== # include ../../YoghurtGum.mk PROGS = bin/Galaxians.android YG_COMPILER = arm-none-linux-gnueabi-g++ YG_LINKER = arm-none-linux-gnueabi-ld YG_PATH_LIB = ./lib/Android YG_LIBRARIES = ../../YoghurtGum/bin/YoghurtGum.a YG_PROGS = bin/Galaxians.android GLES_INCLUDES = ../../YoghurtGum/src ANDROID_NDK_ROOT = /home/oem/android-ndk-r3 NDK_PLATFORM_VER = 5 YG_NDK_PATH_LIB = $(ANDROID_NDK_ROOT)/build/platforms/android-$(NDK_PLATFORM_VER)/arch-arm/usr/lib YG_LIBS = -nostdlib -lstdc++ -lm -lGLESv1_CM #YG_COMPILE_OPTIONS = -g -rdynamic -Wall -Werror -O2 -w YG_COMPILE_OPTIONS = -g -Wall -Werror -O2 -w YG_LINK_OPTIONS = --entry=main -dynamic-linker=/system/bin/linker -rpath-link=$(YG_NDK_PATH_LIB) -L$(YG_NDK_PATH_LIB) $(YG_LIBS) SOURCES = $(wildcard src/*.cpp) YG_OBJECTS = $(patsubst src/%.cpp, intermediate/%.o, $(SOURCES)) all: $(PROGS) rebuild: clean $(PROGS) clean: rm -f intermediate/*.o bin/*.android $(PROGS): $(YG_OBJECTS) $(YG_LINKER) $(YG_LINK_OPTIONS) -rpath=$(YG_NDK_PATH_LIB) -rpath=../../YoghurtGum/lib/Android -L$(YG_PATH_LIB) $^ $(YG_LIBRARIES) -o $@ $(YG_OBJECTS): intermediate/%.o : src/%.cpp $(YG_COMPILER) $(YG_COMPILE_OPTIONS) -I ../../YoghurtGum/src/GLES -I ../../YoghurtGum/src -c $< -o $@ Any help would be appreciated.

    Read the article

  • Could not find rake-10.1.0 in any of the sources

    - by spuder
    I've got a ruby on rails application (gitlab) which is installed via puppet. Everything on the test system runs fine, but production generates an error about rake Running /home/git/gitlab-shell/bin/check Could not find rake-10.1.0 in any of the sources Run bundle install to install missing gems. Here is the full rake check: root@gitlab:/home/git# sudo -u git -H bundle exec rake gitlab:check RAILS_ENV=production Checking Environment ... Git configured for git user? ... yes Has python2? ... yes python2 is supported version? ... yes Checking Environment ... Finished Checking GitLab Shell ... GitLab Shell version >= 1.7.1 ? ... OK (1.7.1) Repo base directory exists? ... yes Repo base directory is a symlink? ... no Repo base owned by git:git? ... yes Repo base access is drwxrws---? ... yes update hook up-to-date? ... yes update hooks in repos are links: ... Could not find rake-10.1.0 in any of the sources Run `bundle install` to install missing gems. gitlab-shell self-check failed Try fixing it: Make sure GitLab is running; Check the gitlab-shell configuration file: sudo -u git -H editor /home/git/gitlab-shell/config.yml Please fix the error above and rerun the checks. Checking GitLab Shell ... Finished Checking Sidekiq ... Running? ... yes Number of Sidekiq processes ... 1 Checking Sidekiq ... Finished Checking GitLab ... Database config exists? ... yes Database is SQLite ... no All migrations up? ... yes GitLab config exists? ... yes GitLab config outdated? ... no Log directory writable? ... yes Tmp directory writable? ... yes Init script exists? ... yes Init script up-to-date? ... yes projects have namespace: ... Spencer Owen / bar ... yes Projects have satellites? ... Spencer Owen / bar ... can't create, repository is empty Redis version >= 2.0.0? ... yes Your git bin path is "/usr/bin/git" Git version >= 1.7.10 ? ... yes (1.8.4) Checking GitLab ... Finished The step 'gitlab-shell check' effectively runs the following command. If I run that command manually, everything passes. 
root@gitlab:/home/git/gitlab# sudo -u git -H /home/git/gitlab-shell/bin/check Check GitLab API access: OK Check directories and files: /home/git/repositories: OK /home/git/.ssh/authorized_keys: OK I have verified that rake is in fact installed root@gitlab:/home/git/gitlab# gem install rake -v 10.1.0 root@gitlab:/home/git/gitlab# bundle install root@gitlab:/home/git/gitlab# sudo -u git -H gem install rake -v 10.1.0 root@gitlab:/home/git/gitlab# sudo -u git -H bundle install Ruby is installed with update alternatives root@gitlab:/home/git/gitlab# sudo -u git -H ruby --version ruby 1.9.3p0 (2011-10-30 revision 33570) [x86_64-linux] root@gitlab:/home/git/gitlab# sudo -u git -H ls -l `which ruby` lrwxrwxrwx 1 root root 22 Oct 8 20:26 /usr/bin/ruby -> /etc/alternatives/ruby root@gitlab:/home/git/gitlab# sudo -u git -H gem --version 2.1.10 root@gitlab:/home/git/gitlab# sudo -u git -H ls -l `which gem` lrwxrwxrwx 1 root root 21 Oct 10 20:50 /usr/bin/gem -> /etc/alternatives/gem I've tried the solution mentioned below, to allow shared gems http://stackoverflow.com/questions/19284914/bundle-exec-fails-with-could-not-find-rake-10-1-0-in-any-of-the-sources http://stackoverflow.com/questions/18978002/could-not-find-rake-with-bundle-exec root@gitlab:/home/git/gitlab# cat /home/git/gitlab/.bundle/config --- BUNDLE_FROZEN: '1' BUNDLE_PATH: vendor/bundle BUNDLE_WITHOUT: development:test:postgres BUNDLE_DISABLE_SHARED_GEMS: '1' I've exhausted google, so I'm hoping for someone more familiar with ruby to offer any ideas how to resolve the error. Could not find rake-10.1.0 in any of the sources

    Read the article

  • Trac FastCGI + Python on Dreamhost leaves zombie Python running

    - by Katsuke
    Hello there, Recently I installed trac in one of my dreamhost domains. I followed the instructions of http://trac.mlalonde.net/wiki/CreamyTrac and everything worked perfectly. At least i thought that was the case. After a few days, i started to notice that i was getting random 500 pages. I quickly checked the error log, and found a bunch of: [Fri Apr 16 14:35:34 2010] [error] [client *.*.*.*] Premature end of script headers: dispatch.fcgi [Fri Apr 16 14:35:54 2010] [error] [client *.*.*.*] Premature end of script headers: dispatch.fcgi, referer: http://www.trac.****.com/login [Fri Apr 16 16:05:58 2010] [error] [client *.*.*.*] Premature end of script headers: dispatch.fcgi, referer: http://www.trac.****.com/timeline The trac instance has very low traffic so it is possible that this might have been triggered faster if there was more use of it. So next step i went to look at top and am amazed by what i see: (this is NOT exactly what i saw, i just reproduced this from memory) 23730 rl_inst 20 0 44516 15m 4012 S 0.0 0.4 0:00.17 python2.4 23731 rl_inst 20 0 44616 15m 4012 S 0.0 0.4 0:03.17 python2.4 23732 rl_inst 20 0 44116 15m 4012 S 0.0 0.4 0:01.17 python2.4 23733 rl_inst 20 0 44826 15m 4012 S 0.0 0.4 0:04.17 python2.4 23734 rl_inst 20 0 44216 15m 4012 S 0.0 0.4 0:02.17 python2.4 23735 rl_inst 20 0 44416 15m 4012 S 0.0 0.4 0:01.17 python2.4 I opened the trac site again and that changed to this: 23730 rl_inst 20 0 44516 15m 4012 S 0.0 0.4 0:00.17 python2.4 23731 rl_inst 20 0 44616 15m 4012 S 0.0 0.4 0:03.17 python2.4 23732 rl_inst 20 0 44116 15m 4012 S 0.0 0.4 0:01.17 python2.4 23733 rl_inst 20 0 44826 15m 4012 S 0.0 0.4 0:04.17 python2.4 23734 rl_inst 20 0 44216 15m 4012 S 0.0 0.4 0:02.17 python2.4 23735 rl_inst 20 0 44416 15m 4012 S 0.0 0.4 0:01.17 python2.4 28378 rl_inst 20 0 2608 1208 1008 S 0.0 0.0 0:00.00 dispatch.fcgi 28382 rl_inst 20 0 44248 15m 4012 S 0.0 0.4 0:02.19 python2.4 So the dispatch.fcgi was being called correctly and was spawning correctly other python2.4 process. After the IDLE time passed this is the results: 23730 rl_inst 20 0 44516 15m 4012 S 0.0 0.4 0:00.17 python2.4 23731 rl_inst 20 0 44616 15m 4012 S 0.0 0.4 0:03.17 python2.4 23732 rl_inst 20 0 44116 15m 4012 S 0.0 0.4 0:01.17 python2.4 23733 rl_inst 20 0 44826 15m 4012 S 0.0 0.4 0:04.17 python2.4 23734 rl_inst 20 0 44216 15m 4012 S 0.0 0.4 0:02.17 python2.4 23735 rl_inst 20 0 44416 15m 4012 S 0.0 0.4 0:01.17 python2.4 28382 rl_inst 20 0 44248 15m 4012 S 0.0 0.4 0:02.19 python2.4 dispatch.fcgi was gone but the corresponding python2.4 was still there o_O. I had started with 5 sleeping processes of python and after idle time i ended up with 6. I repeated this and found that it kept spawning more and more python2.4, this was indeed what was causing my 500s indirectly. Let me explain my findings. Am on a shared hosting so my processes get killed if they are way too many. So everytime i opened trac 2 new processes spawned and one remained, to the point that dreamhost was killing a random process when i reached my limit. Sometimes killing the python2.4 that actually was rendering the current page. Hence header premature ending, python is gone the .fcgi doesnt know what to do and throws an 500. I found a dirty solution to this. I changed my dispatch.fcgi to contain a line that killed any currently running python2.4 process and then spawn a new one. Since then i dont get any rogues what so ever. But i dont think this is the best solution, calling killall in every fcgi just seems wrong. 
Has anyone run into this issue and found a cleaner solution? Is there anything that I have overlooked?

    Read the article

  • Set up a Linux box for hosting, A-Z

    - by microchasm
    I am in the process of reinstalling the OS on a machine that will be used to host a couple of apps for our business. The apps will be local only; access from external clients will be via vpn only. The prior setup used a hosting control panel (Plesk) for most of the admin, and I was looking at using another similar piece of software for the reinstall - but I figured I should finally learn how it all works. I can do most of the things the software would do for me, but am unclear on the symbiosis of it all. This is all an attempt to further distance myself from the land of Configuration Programmer/Programmer, if at all possible. I can't find a full walkthrough anywhere for what I'm looking for, so I thought I'd put up this question, and if people can help me on the way I will edit this with the answers, and document my progress/pitfalls. Hopefully someday this will help someone down the line. The details: CentOS 5.5 x86_64 httpd: Apache/2.2.3 mysql: 5.0.77 (to be upgraded) php: 5.1 (to be upgraded) The requirements: SECURITY!! Secure file transfer Secure client access (SSL Certs and CA) Secure data storage Virtualhosts/multiple subdomains Local email would be nice, but not critical The Steps: Download latest CentOS DVD-iso (torrent worked great for me). Install CentOS: While going through the install, I checked the Server Components option thinking I was going to be using another Plesk-like admin. In hindsight, considering I've decided to try to go my own way, this probably wasn't the best idea. Basic config: Setup users, networking/ip address etc. Yum update/upgrade. Upgrade PHP/MySQL: To upgrade PHP and MySQL to the latest versions, I had to look to another repo outside CentOS. IUS looks great and I'm happy I found it! Add IUS repository to our package manager cd /tmp wget http://dl.iuscommunity.org/pub/ius/stable/Redhat/5/x86_64/epel-release-1-1.ius.el5.noarch.rpm rpm -Uvh epel-release-1-1.ius.el5.noarch.rpm wget http://dl.iuscommunity.org/pub/ius/stable/Redhat/5/x86_64/ius-release-1-4.ius.el5.noarch.rpm rpm -Uvh ius-release-1-4.ius.el5.noarch.rpm yum list | grep -w \.ius\. # list all the packages in the IUS repository; use this to find PHP/MySQL version and libraries you want to install Remove old version of PHP and install newer version from IUS rpm -qa | grep php # to list all of the installed php packages we want to remove yum shell # open an interactive yum shell remove php-common php-mysql php-cli #remove installed PHP components install php53 php53-mysql php53-cli php53-common #add packages you want transaction solve #important!! checks for dependencies transaction run #important!! does the actual installation of packages. [control+d] #exit yum shell php -v PHP 5.3.2 (cli) (built: Apr 6 2010 18:13:45) Upgrade MySQL from IUS repository /etc/init.d/mysqld stop rpm -qa | grep mysql # to see installed mysql packages yum shell remove mysql mysql-server #remove installed MySQL components install mysql51 mysql51-server mysql51-devel transaction solve #important!! checks for dependencies transaction run #important!! does the actual installation of packages. 
[control+d] #exit yum shell service mysqld start mysql -v Server version: 5.1.42-ius Distributed by The IUS Community Project Upgrade instructions courtesy of IUS wiki: http://wiki.iuscommunity.org/Doc/ClientUsageGuide Install rssh (restricted shell) to provide scp and sftp access, without allowing ssh login cd /tmp wget http://dag.wieers.com/rpm/packages/rssh/rssh-2.3.2-1.2.el5.rf.x86_64.rpm rpm -ivh rssh-2.3.2-1.2.el5.rf.x86_64.rpm useradd -m -d /home/dev -s /usr/bin/rssh dev passwd dev Edit /etc/rssh.conf to grant access to SFTP to rssh users. vi /etc/rssh.conf Uncomment or add: allowscp allowsftp This allows me to connect to the machine via SFTP protocol in Transmit (my FTP program of choice; I'm sure it's similar with other FTP apps). rssh instructions appropriated (with appreciation!) from http://www.cyberciti.biz/tips/linux-unix-restrict-shell-access-with-rssh.html Set up virtual interfaces ifconfig eth1:1 192.168.1.3 up #start up the virtual interface cd /etc/sysconfig/network-scripts/ cp ifcfg-eth1 ifcfg-eth1:1 #copy default script and match name to our virtual interface vi ifcfg-eth1:1 #modify eth1:1 script #ifcfg-eth1:1 | modify so it looks like this: DEVICE=eth1:1 IPADDR=192.168.1.3 NETMASK=255.255.255.0 NETWORK=192.168.1.0 ONBOOT=yes NAME=eth1:1 Add more Virtual interfaces as needed by repeating. Because of the ONBOOT=yes line in the ifcfg-eth1:1 file, this interface will be brought up when the system boots, or the network starts/restarts. service network restart Shutting down interface eth0: [ OK ] Shutting down interface eth1: [ OK ] Shutting down loopback interface: [ OK ] Bringing up loopback interface: [ OK ] Bringing up interface eth0: [ OK ] Bringing up interface eth1: [ OK ] ping 192.168.1.3 64 bytes from 192.168.1.3: icmp_seq=1 ttl=64 time=0.105 ms And this is where I'm at. I will keep editing this as I make progress. Any tips on how to Configure virtual interfaces/ip based virtual hosts for SSL, setting up a CA, or anything else would be appreciated.

    Read the article

  • JNLP File Association: How do I open the file which was double-clicked on?

    - by Sam Barnum
    I've got the following JNLP: <?xml version="1.0" encoding="UTF-8"?> <!DOCTYPE jnlp PUBLIC "-//Sun Microsystems, Inc//DTD JNLP Descriptor 6.0.10//EN" "http://java.sun.com/dtd/JNLP-6.0.10.dtd"> <jnlp spec="6.0.10" version="1.63" codebase="http://foo.example.com/msi" href="Foo.jnlp"> <information> <title>Foo</title> <vendor> Foo Systems, Inc.</vendor> <homepage href="http://Foo.com"/> <description>Foo Viewer/Editor Application</description> <icon href="splash.gif" width="425" height="102" kind="splash"/> <icon href="Foo.gif" width="64" height="64"/> <offline-allowed/> <shortcut> <desktop/> <menu submenu="Foo Systems, Inc."/> </shortcut> <association mime-type="application-x/wlog" extensions="wlog"/> <association mime-type="application-x/mplot" extensions="mplot"/> </information> <security> <all-permissions/> </security> <resources> <j2se version="1.6+" initial-heap-size="32m" max-heap-size="255m"/> <jar href="jars_deployment/TimingFramework-1.0.jar"/> <jar href="jars_deployment/iText-2.1.5.jar"/> <jar href="jars_deployment/jai_codec.jar"/> <jar href="Foo.jar"/> <jar href="jars_deployment/TimingFramework-1.0.jar"/> <jar href="jars_deployment/iText-2.1.5.jar"/> <jar href="jars_deployment/jai_codec.jar"/> <jar href="jars_deployment/jsch-20090402.jar"/> <property name="apple.laf.useScreenMenuBar" value="true"/> <property name="apple.awt.graphics.UseQuartz" value="false"/> <property name="com.apple.mrj.application.apple.menu.about.name" value="Foo"/> <property name="java.util.logging.config.file" value="/Users/Shared/logging.properties"/> </resources> <application-desc main-class="com.prosc.msi.editor.ui.test.Sandbox"/> </jnlp> Most everything is working. When I double-click a .wlog file, it opens up my application. However, it doesn't open the correct file. I read somewhere that JNLP was supposed to pass parameters to the main method indicating which file caused the app to be launched, but this is not happening (on OS X 10.6). I get an empty array to my application's main method. Probably unrelated, my splash screen doesn't work :( Any pointers on getting this working?

    Read the article

  • Someone please help Google create instructions that Windows users understand. Google's "instructions"

    - by nathan
    Below are the only instructions I managed to find from Google on how to install the Android NDK. They are written as if we all run Linux, and presume we all understand what these obscure tools are. My comments and questions appear in italics; if someone who knows Unix and Windows would translate for Google, that would be great!
    Android NDK Installation Introduction: Please read docs/OVERVIEW.TXT to understand what the Android NDK is and is not. This file gives instructions on how to properly setup your NDK. I. Requirements: The Android NDK currently requires a Linux, OS X or Windows host operating system. Windows users will need to install Cygwin (http://www.cygwin.com) to use it. Note that running the NDK under MSys is not supported. You will need to have the Android SDK and its dependencies installed. The NDK cannot generate final application packages (.apk files), only the shared library files that can go into them. IMPORTANT: The Android NDK can only be used to target system images using the Cupcake (1.5) or later releases of the platform. This is due to subtle toolchain and ABI related changes that make it incompatible with 1.0 and 1.1 system images. The NDK requires GNU Make 3.81 or later being available on your development system. Earlier versions of GNU Make might work but have not been tested. You can check this by running 'make -v' from the command-line. The output should look like: GNU Make 3.81 Copyright (C) 2006 Free Software Foundation, Inc. ... On certain systems, GNU Make might be available through a different command like 'gmake' or 'gnumake'. For these systems, replace 'make' by the appropriate command when invoking the NDK build system as described in the documentation.
    Great, some strange thing called GNU Make... if you're not going to tell me what it does, then at least you could give me a URL to it?
    The NDK also requires a Nawk or GNU Awk executable being available on your development system. Note that the original 'awk' program doesn't implement the 'match' and 'substr' functions used by the NDK build system.
    OK, another tool, with 1 of 2 possible names, but not the third... and again, where should I download this??
    On Windows, you will need to install a recent release of Cygwin to use the NDK. See http://www.cygwin.com for instructions.
    Woohoo, a URL! The download took about a day because these install instructions do not specify what parts to download.
    II. Preparing your installation prebuilt cross-toolchain binaries: After installing and unarchiving the NDK, you will need to run the following command from the root folder: build/host-setup.sh
    Hello? Windows doesn't run anything but .exe, .com or .dll; just tell me how you want me to run it...
    This will test your setup and make sure the NDK can work properly.
    Nothing is said about where any of these things need to be installed to (what directory).

    Read the article

  • Why such a long time span in creating the Session Factory?

    - by vijay.shad
    Hi My project is web application running in the tomcat container. This application is a spring framework based hibernate application. The problem with this is it takes a lot of time when creates session factory. here is the logs 2010-04-15 23:05:28,053 DEBUG [SessionFactoryImpl] Session factory constructed with filter configurations : {} 2010-04-15 23:05:28,053 DEBUG [SessionFactoryImpl] instantiating session factory with properties: {java.vendor=Sun Microsystems Inc., sun.java.launcher=SUN_STANDARD, catalina.base=/usr/local/InstalledPrograms/apache-tomcat-6.0.20, sun.management.compiler=HotSpot Tiered Compilers, catalina.useNaming=true, os.name=Linux, sun.boot.class.path=/usr/java/jdk1.6.0_17/jre/lib/resources.jar:/usr/java/jdk1.6.0_17/jre/lib/rt.jar:/usr/java/jdk1.6.0_17/jre/lib/sunrsasign.jar:/usr/java/jdk1.6.0_17/jre/lib/jsse.jar:/usr/java/jdk1.6.0_17/jre/lib/jce.jar:/usr/java/jdk1.6.0_17/jre/lib/charsets.jar:/usr/java/jdk1.6.0_17/jre/classes, java.util.logging.config.file=/usr/local/InstalledPrograms/apache-tomcat-6.0.20/conf/logging.properties, java.vm.specification.vendor=Sun Microsystems Inc., hibernate.generate_statistics=true, java.runtime.version=1.6.0_17-b04, hibernate.cache.provider_class=org.hibernate.cache.EhCacheProvider, user.name=root, shared.loader=, tomcat.util.buf.StringCache.byte.enabled=true, hibernate.connection.release_mode=auto, user.language=en, java.naming.factory.initial=org.apache.naming.java.javaURLContextFactory, sun.boot.library.path=/usr/java/jdk1.6.0_17/jre/lib/i386, java.version=1.6.0_17, java.util.logging.manager=org.apache.juli.ClassLoaderLogManager, user.timezone=Canada/Pacific, sun.arch.data.model=32, java.endorsed.dirs=/usr/local/InstalledPrograms/apache-tomcat-6.0.20/endorsed, sun.cpu.isalist=, sun.jnu.encoding=UTF-8, file.encoding.pkg=sun.io, package.access=sun.,org.apache.catalina.,org.apache.coyote.,org.apache.tomcat.,org.apache.jasper.,sun.beans., file.separator=/, java.specification.name=Java Platform API Specification, java.class.version=50.0, user.country=US, java.home=/usr/java/jdk1.6.0_17/jre, java.vm.info=mixed mode, os.version=2.6.18-128.el5, path.separator=:, java.vm.version=14.3-b01, hibernate.jdbc.batch_size=25, java.awt.printerjob=sun.print.PSPrinterJob, sun.io.unicode.encoding=UnicodeLittle, package.definition=sun.,java.,org.apache.catalina.,org.apache.coyote.,org.apache.tomcat.,org.apache.jasper., java.naming.factory.url.pkgs=org.apache.naming, sun.rmi.dgc.client.gcInterval=3600000, user.home=/root, java.specification.vendor=Sun Microsystems Inc., java.library.path=/usr/java/jdk1.6.0_17/jre/lib/i386/server:/usr/java/jdk1.6.0_17/jre/lib/i386:/usr/java/jdk1.6.0_17/jre/../lib/i386:/usr/java/packages/lib/i386:/lib:/usr/lib, java.vendor.url=http://java.sun.com/, java.vm.vendor=Sun Microsystems Inc., hibernate.dialect=org.hibernate.dialect.MySQL5InnoDBDialect, sun.rmi.dgc.server.gcInterval=3600000, common.loader=${catalina.home}/lib,${catalina.home}/lib/*.jar, java.runtime.name=Java(TM) SE Runtime Environment, java.class.path=:/usr/local/InstalledPrograms/apache-tomcat-6.0.20/bin/bootstrap.jar, hibernate.bytecode.use_reflection_optimizer=false, java.vm.specification.name=Java Virtual Machine Specification, java.vm.specification.version=1.0, catalina.home=/usr/local/InstalledPrograms/apache-tomcat-6.0.20, sun.cpu.endian=little, sun.os.patch.level=unknown, hibernate.cache.use_query_cache=true, hibernate.connection.provider_class=org.springframework.orm.hibernate3.LocalDataSourceConnectionProvider, 
java.io.tmpdir=/usr/local/InstalledPrograms/apache-tomcat-6.0.20/temp, java.vendor.url.bug=http://java.sun.com/cgi-bin/bugreport.cgi, server.loader=, os.arch=i386, java.awt.graphicsenv=sun.awt.X11GraphicsEnvironment, java.ext.dirs=/usr/java/jdk1.6.0_17/jre/lib/ext:/usr/java/packages/lib/ext, user.dir=/, line.separator=, java.vm.name=Java HotSpot(TM) Server VM, hibernate.cache.use_second_level_cache=true, file.encoding=UTF-8, java.specification.version=1.6, hibernate.show_sql=true} 2010-04-15 23:08:53,516 DEBUG [AbstractEntityPersister] Static SQL for entity: com.vsd.model.Order
There you can see the time delay of more than 3 minutes in executing these processes. My database is MySQL, and the database server is running on the local machine only. The container environment is a CentOS Linux system. I am clueless about why it takes that much time to execute these processes, but when I do the same task from Eclipse it does not take that much time. The development environment is Windows.

    Read the article

  • Google Drive Update keeps throwing an error when I try to save a thumbnail

    - by Dano64
    The Base64 string checks out fine. I was able to export it to another website and download it again as my image. When I try to do an update using the Drive JavaScript API, it just keeps returning this error: Invalid value for: Not a valid base64 byte string. I also tried making the string URL safe, per this page: https://google-api-client-libraries.appspot.com/documentation/drive/v2/python/latest/drive_v2.files.html
    Am I doing something wrong here? The documentation says to send Base64; the string is valid and intact throughout the process, but Google will not accept it. I am using the JavaScript API, and I think there may be a bug in sending the thumbnail using the JavaScript API.
    This is the request:
    Request URL:https://www.googleapis.com/upload/drive/v2/files/?uploadType=multipart Request Method:POST Status Code:400 Bad Request
    **Request Headers** :host:www.googleapis.com :method:POST :path:/upload/drive/v2/files/?uploadType=multipart :scheme:https :version:HTTP/1.1 accept:*/* accept-encoding:gzip,deflate,sdch accept-language:en-US,en;q=0.8 authorization:Bearer ya29.AHES6ZQr...YXDacdY4 content-length:14313 content-type:multipart/mixed; boundary="--mpart_delim" origin:https://www.googleapis.com x-javascript-user-agent:google-api-javascript-client/1.1.0-beta x-origin:https://app.pinteract.com x-referer:https://app.pinteract.com
    **Query String Parameters** uploadType:multipart
    **Request Payload** ----mpart_delim Content-Type: application/json {"id":null,"title":"Test Pinup.pint","mimeType":"application/vnd.pinteract.pint","thumbnail":{"mimeType":"image/png","image":"iVBORw0KGgoAAAANSUh...UVORK5CYII%3D"}} ----mpart_delim Content-Type: application/vnd.pinteract.pint Content-Transfer-Encoding: binary { "header" : {"id":"215A660A"} "members" : [ {"id":"100523752012631912873"} ], "manifest" : [ {"id":"0000","ele":[],"own":"100523752012631912873","dtc":1371679680000,"txt":"&Delta; Created","typ":0} ], "elements" : [ {"id":"0F54","x":560,"y":264,"bak":"#44ff44","own":"100523752012631912873","srt":"544","sta":0,"wid":120,"hgt":120,"dtc":0,"rec":"","txt":"This is Note"} ] } ----mpart_delim--
    **Response Headers** content-length:10848 content-type:application/json date:Mon, 01 Jul 2013 14:41:33 GMT server:HTTP Upload Server Built on Jun 25 2013 11:32:14 (1372185134) status:400 Bad Request version:HTTP/1.1
    This is the response:
    { "error": { "errors": [ { "domain": "global", "reason": "invalid", "message": "Invalid value for: Not a valid base64 byte string: iVBORw0KGgoAAAANSUhEUg...AASUVORK5CYII%3D" } ], "code": 400, "message": "Invalid value for: Not a valid base64 byte string: iVBORw0KGgoAAAANSU...lRtkAAAAASUVORK5CYII%3D" } }
    Raw Base64 String
iVBORw0KGgoAAAANSUhEUgAAAGAAAABgCAYAAADimHc4AAAAGXRFWHRTb2Z0d2FyZQBBZG9iZSBJbWFnZVJlYWR5ccllPAAADjVJREFUeNrsXX2MHVUVP3fevH27SxfKhyBCsIqoKOhCBBISY5Hyj2JSGuXLP6SJwYAYICgxsUBr1AhqbE2I8Q9jjYE/CLFbP4IxVbaIYjAmS0wUC6ELRGy7tPt2t9vd997MPd6Z+3Xunfsq1r7tvNe56TAfO2/ezO93zu/ce+6ZB0DVqla1qlWtalWrWtWqVrWqVa1qJ1Nj/XCTm5+bHeUcz9+3f25tmvLVSYePIyLIBRhX24yxKXH63Kljw5Ni97UfbVhzpCLgGNs3/jp//r9en72l3UnWCsCvSjrpmQA54Nkq28q3FQn5mucL5OuIsUPi6f5Si6KJVasaT26/+T1vVgS8hfbl3+27br65dE+r1VknrH6IIfkjolmh/A8hwYJv1lySInYXhXc8W49r235++yVPVQQE2lee3n/d3OzS5tZS+2oWurlMYgj4qI45HsApCZYAzgFSsU7FfhSx5+txtHnnHR9+qiJAtPsnD1y4MLf09aXF9q3ZzTAmb8ohQaKtHEDLkGv9qIBHBbgmICXrlKyjKHoirkWbfn3X+EsnLQH37dr3mebs4o8x4WMSfGbAtyQoDSLga8BBeQJqqXE8QQIt12D2syVR2+LcxVpcu/23d1/++ElFwPdeXB555eWZ7x453LozEvuRtvqMgCyAhm7QyI7Sf7BEcI4mCNP9lBKhrD9R2zkJqVwLWfpOYyje8tSXLlsceAIy8Pf8Y98vOq1kXQZ8JG5By06kLZ/5+m8DcEiCMt3XXpCTwN0YkKrYkFu/0CfpCSC3M1IEEeILJ0ca9et/c/fliwNLQAb+SxR8JTkRJUAdy4DOtxCdQGyDb6ALyq0HuEHYyo+OAYlaUkVI7hkAk8ON+Ppd935kceAIMOC3BfjiayUB1gO0FAG4QTjfN9pTjAO0/0+9ICeDBmGyJDQoEzIyjxCr3YKET/7+vitWhIRopQh4+Z/7fyAtX4PPQG/X1JLt19Rit8XfIjDb7uftdWqa0Ox8sNfIF/CvK4mPFAB6yf4uaPzYYit5dKVwWREC7vrV619sLXU+L8Gg4AEBlXgFBdN8JgQ6M+fUHJDJNWiQD1yHkXOlDLLMcz53xTf/dNtASNBXn5l598y++b+L3k2j5gFdU1KkA2/kdEOR9IDsOpSCQBN4M/3nJuCaOEC03pEhJTuJJ0XtbEHsxBG75PmvXb2nrz3gzZnD2wVCjQgYsUAgVm2lRwdiR6LA9w4lS2YNnlcRq/avB653MfrdnueJf/XllP+kryXozl++toF30o9GnoVrvZU9HtINNaD44BFgHSmx4DP/swRkKz9F3WcGCOt9NQ0MwtWXbnn2xr4loL2cbNYDLAdcQgTz9FpbtAaUERANsJEFklHvIOMISiBzrqlJD4w/yKJJEL2lh/qSgDt2vrohTdJL/QdzBlvMtUQ6Io6cXgwBngZWD/xIeZVeM49o5g30KEGh+1Td4g+8f/Mfbuw7Atotbf0EVOchmQuGts5APsh4UNCbSBoDPE/oYt3B48zekzEIVF7Ae+cFPSHgCxOvXpwm/FLHslQ6GZgLfNfFk54iuMwhyAf2aJLTfcHg8cwLLnzwmQ/2DQFCem4pgOR4gzviLQDD3gIRzE1h+PshyaEB3wPYORccImVyUHR1P9s3BCSd9CZ/sOFPsjCV4ymkHViYCJOmYEV5co8xF1DnHrDwfe6cg1wzdZCRE8WY4tN9QYAY9Z4rBj7vzZNpiMWRHqJ5yKMCQcBg3cgJkpFPzjtkRw7x+ruwqxQBkUxNnGgXXbBp8rzSE3BksT3uZDDJUJahD7hKsGFACug+Cx8vegQWRtEsOOzHwO1h8WEQncOiSzpeegLaneQqB2wMEKHWGHhgigUzs2FYsHyAcNzonmfBgneZBfDoORk03nlF6QkQffCL3WdFM4uFSLUWHcuzTqPrTtCdkvSMlwb5ozZVPQHo3g8L+qgjOZ7t5HvHvScUH/ceUMrPrmkTzW+agbRrZswIkXmGKHUbPWCQWDH6shUK8n4g9aVQGQFT3+kYguGdGAoWHOas0nsAqowlnbUyD+BPpoNX56M/R7wAzHkItjLCDaQO1gZXBOcQ3Tego4dzYAsBMBw5yukB8o6ZARiVZSs7l8e8NSOKr60euwRM39J1bZBZG/KsRTuxhRoE+Q6T3gZqNPk0pWUXsfwEaA/IHyQHX1sZM/CiY9OWLH0GJ66ZSwYJuoYgDxMj9QiFeiFbzIWGA3oOBKrvdIWF9j9Z8NUPHoBoQUVq7RJsbnpCTIFNgEeZUANFghmZogWeYVG6bAmirYqgJSwFItC3eNdwqIdwpN7SBx7A0YZdRCo1ECQiMlYvk3WcDJAi7QGBYAzepDyqSXlTpKXWEPAI7sQk7RmWJFPeqGXInNcHHtBKUhiJa/KBsoq0CBSwQJIENg2RP7AqyNIdpYhYnfYI9KYo/SlJWxdKSSHgBTyAQzfZcqc9OfGG0veC4ijaTcsEgdZskqo1W7/jVrbpEhL6N7kPzpwu99dqHljPDaNXqGuvby0cQyQpY+EK8BSsV4i2u/QecLjdmR4biq2Gc5WCZmgSXJwzKeYkqqKyfDqyjZT12VEtFrqZ6FTC0bJ0S7AGMD8PSB2R8RjrOUhK3FPQniIlSpAxXXoCFlrJ1Kp6rLScmVElU2gzf5TFbJIMzawYM8GWBXI4tEoaObrvB5A6UBMLPI8qWj6aLmdeyqjApx6QSqKmjjdePSlL+dQPX5gdjqPVea2OKqqyBVFuXZCukgOg9TvZMVuamGc3/RJFJAMsXqyOowW53CtF0UW6VtK4qpgGaIvtlrhOS6yz8hS5na2xefDha0/vg4FYrsUT4v5vy1PD3JUfSjnKMZvt66PNxWA+k8VtahjdHhCQFzVoebpTC0S2uV+2SOOCkSFl/WitPy9hlN4x0TcTMjOLrZ1GRwMFsno77yVxWjzlWqUTfNH/u1to637GBT8/jxbr6uCOhBR1TqplCKTscNDygzt7gVXPKuM+8ejUbKOWyRCt61TlHqYQihVKSY7+loy0eqZAgkCRrl8d7b8Zk5rXlVSlnC5bF8dasiJOV8Zp6RHb0DzUA/npmQfkwXi5s80CwF2rRbdc0PEQ9eJEwdqJ9yQFj+HBc5MA+Nx5m9LuJ0R+8hhhJCgnd1uvcOoZAS/sn9sqBmVNCnoIzAJwafdj+uUKnvKuwCeBgCutHozM+DEhUYDbNSgScu1vit2tfUfA/CPrmgvtRHqBM4gCByBOQKTAUPAl8FwAb4tpE97dS5JuZFNZUtadKsATQ4RYAIhHwLameJa+IyBrfzswv3WxI7yASIEPjvumilpS1xMK3UjHe3xC1DXQexmDyhEJwh2OLviuJwjgcWsvMep5efr4t55b//ZThnbk4wBw+/6F2n3wS0v8EnUyIYM0tex1Q8l26qUkUjViThT4HZSLDLyQB9+ODLzZuTcI65/oawKyduW3/7zj9OH
6+ihEgDcA82t3GEBxMkQBqntFvPC+MDjdS5qm0IGWgt/RwKveT7Yvjk8I8G/oNTbxShDw4qHDGz901tia0bg2jirvo9fcIQC7lJ+gk6RHPdEPUMhcOj9ZAMW353O5kgAroNU2pxKUpxw2rgQ2K/aS3jsf3D1+/qrhp0fE2MCkIzwJ6lYy6Fu/JcCf8QqMB8Baf6JAT8i2lp9EyY441hSfuWbukXVTA0VATsIDu8fPO6Xx9HBGAi1NJ2XkAMUKtcKcJ523JeBz5ycLbMo5VUDbtZYgua/BF72fpvjMNc0VAn/FCcjaBQ9Mjr9jtJF7An1zhQZiO0VvM6VQSEdbDwCPACpDKenVpMbybW9Hb2ejXfHpFQX/hBCQk7Bpcvzs4aEdo3G0hr4fHEGoiNdNRdOUhEuElSOdTEvABt2EkNFx+v3CCwCmUfZ4plYaixP2Yx2n3r9r9QWnNHasrsdrQ68KwX8hAOiEjJ421H18oGkFRQSXg6+O7gUZr4BJcYUbejnYKiUBuq3ZNHnPGY36Q42IrWYBL2B+4Szt/2vZAfJDTd6kigZeSw7J82R6v2X2kXVbT+Tzl+IHm4Q3rDlnuP6Q6KbeVjfxAPWbis7NojcdqSfWubJ8riUIrAfo1ENiPAS2i2NbRE9n+kQ/e6l+suw0QcSZwhtEL8kQQUvbTb0aLec0lk9Gv2AHYwkhQvzbLpYthx6+drosz1zKH+3LPGIsrq2PDu79/vA5F8p3igMEAFjZQV+CrCxNicM/FdY/UQaL7wsCdBsZGcH6GefB2EVXwsi574Pa294FSeM0aNVPpeMyFYQRhtrzEC0dgnTmFWi9sQcW9k5NLr2x5+PQm7ra/klF/F8WcngG4r1/hLG5F+HsQ+fA6OgoDDUaEMdx9rtv0vLTFNqdDrSWl2FhYQEOHNgPyeysCLOzzAsfpWsRDH5jZfb2gSQA3fQFKzMJJ4sHlJaEeFA9QHlB6G1YLFNMGGAPQPqMrKyeEMNgN5pg5V08AU6kNwwsASQQUy/nXdzkhElSPODgs4DMhkioPKA3RJgfvwoFCN6FkIqA4xwDQgTo90eSKgj3oHGe0q5oHBgVZwR0ql5Qz2MAhjwg+2O7Goj1VvtpD6jugb8AJUrMDagEcToSjgn48yCrzkvTBjYZx7khoK6e82AZgu5JQ4Bqi+oZ/10m3e8bAgSQE8dOQE7CJMh3e5fL+oylJiCKom3HSoBYDqdp+pjYLfX/Ta/UBBw5cmSSMbb1WAhgLNqYJMnLZZfL0seAZrN5rwDzfyGhWa/XNx48+OaT/RCv+iIIT0/vvbfRaFwjvOFoMSErLdxeq9Uu27v3le390mFg0IftpptvXetURXDefPyxn01B1apWtapVrWpVq1rVqla1t9L+I8AAvydNUrElRtkAAAAASUVORK5CYII="}],"code":400,"message":"Invalidvaluefor:Notavalidbase64bytestring:iVBORw0KGgoAAAANSUhEUgAAAGAAAABgCAYAAADimHc4AAAAGXRFWHRTb2Z0d2FyZQBBZG9iZSBJbWFnZVJlYWR5ccllPAAADjVJREFUeNrsXX2MHVUVP3fevH27SxfKhyBCsIqoKOhCBBISY5Hyj2JSGuXLP6SJwYAYICgxsUBr1AhqbE2I8Q9jjYE/CLFbP4IxVbaIYjAmS0wUC6ELRGy7tPt2t9vd997MPd6Z+3Xunfsq1r7tvNe56TAfO2/ezO93zu/ce+6ZB0DVqla1qlWtalWrWtWqVrWqVa1qJ1Nj/XCTm5+bHeUcz9+3f25tmvLVSYePIyLIBRhX24yxKXH63Kljw5Ni97UfbVhzpCLgGNs3/jp//r9en72l3UnWCsCvSjrpmQA54Nkq28q3FQn5mucL5OuIsUPi6f5Si6KJVasaT26/+T1vVgS8hfbl3+27br65dE+r1VknrH6IIfkjolmh/A8hwYJv1lySInYXhXc8W49r235++yVPVQQE2lee3n/d3OzS5tZS+2oWurlMYgj4qI45HsApCZYAzgFSsU7FfhSx5+txtHnnHR9+qiJAtPsnD1y4MLf09aXF9q3ZzTAmb8ohQaKtHEDLkGv9qIBHBbgmICXrlKyjKHoirkWbfn3X+EsnLQH37dr3mebs4o8x4WMSfGbAtyQoDSLga8BBeQJqqXE8QQIt12D2syVR2+LcxVpcu/23d1/++ElFwPdeXB555eWZ7x453LozEvuRtvqMgCyAhm7QyI7Sf7BEcI4mCNP9lBKhrD9R2zkJqVwLWfpOYyje8tSXLlsceAIy8Pf8Y98vOq1kXQZ8JG5By06kLZ/5+m8DcEiCMt3XXpCTwN0YkKrYkFu/0CfpCSC3M1IEEeILJ0ca9et/c/fliwNLQAb+SxR8JTkRJUAdy4DOtxCdQGyDb6ALyq0HuEHYyo+OAYlaUkVI7hkAk8ON+Ppd935kceAIMOC3BfjiayUB1gO0FAG4QTjfN9pTjAO0/0+9ICeDBmGyJDQoEzIyjxCr3YKET/7+vitWhIRopQh4+Z/7fyAtX4PPQG/X1JLt19Rit8XfIjDb7uftdWqa0Ox8sNfIF/CvK4mPFAB6yf4uaPzYYit5dKVwWREC7vrV619sLXU+L8Gg4AEBlXgFBdN8JgQ6M+fUHJDJNWiQD1yHkXOlDLLMcz53xTf/dNtASNBXn5l598y++b+L3k2j5gFdU1KkA2/kdEOR9IDsOpSCQBN4M/3nJuCaOEC03pEhJTuJJ0XtbEHsxBG75PmvXb2nrz3gzZnD2wVCjQgYsUAgVm2lRwdiR6LA9w4lS2YNnlcRq/avB653MfrdnueJf/XllP+kryXozl++toF30o9GnoVrvZU9HtINNaD44BFgHSmx4DP/swRkKz9F3WcGCOt9NQ0MwtWXbnn2xr4loL2cbNYDLAdcQgTz9FpbtAaUERANsJEFklHvIOMISiBzrqlJD4w/yKJJEL2lh/qSgDt2vrohTdJL/QdzBlvMtUQ6Io6cXgwBngZWD/xIeZVeM49o5g30KEGh+1Td4g+8f/Mfbuw7Atotbf0EVOchmQuGts5APsh4UNCbSBoDPE/oYt3B48zekzEIVF7Ae+cFPSHgCxOvXpwm/FLHslQ6GZgLfNfFk54iuMwhyAf2aJLTfcHg8cwLLnzwmQ/2DQFCem4pgOR4gzviLQDD3gIRzE1h+PshyaEB3wPYORccImVyUHR1P9s3BCSd9CZ/sOFPsjCV4ymkHViYCJOmYEV5co8xF1DnHrDwfe6cg1wzdZCRE8WY4tN9QYAY9Z4rBj7vzZNpiMWRHqJ5yKMCQcBg3cgJkpFPzjtkRw7x+ruwqxQBkUxNnGgXXbBp8rzSE3BksT3uZDDJUJahD7hKsGFACug+Cx8vegQWRtEsOOzHwO1h8WEQncOiSzpeegLaneQqB2wMEKHWGHhgigUzs2FYsHyAcNzonmfBgneZBfDoORk03nlF6QkQffCL3WdFM4uFSLUWHcuzTqPrTtCdkvSMlwb5ozZVPQHo3g8L+qgjOZ7t5HvHvScUH/ceUM
rPrmkTzW+agbRrZswIkXmGKHUbPWCQWDH6shUK8n4g9aVQGQFT3+kYguGdGAoWHOas0nsAqowlnbUyD+BPpoNX56M/R7wAzHkItjLCDaQO1gZXBOcQ3Tego4dzYAsBMBw5yukB8o6ZARiVZSs7l8e8NSOKr60euwRM39J1bZBZG/KsRTuxhRoE+Q6T3gZqNPk0pWUXsfwEaA/IHyQHX1sZM/CiY9OWLH0GJ66ZSwYJuoYgDxMj9QiFeiFbzIWGA3oOBKrvdIWF9j9Z8NUPHoBoQUVq7RJsbnpCTIFNgEeZUANFghmZogWeYVG6bAmirYqgJSwFItC3eNdwqIdwpN7SBx7A0YZdRCo1ECQiMlYvk3WcDJAi7QGBYAzepDyqSXlTpKXWEPAI7sQk7RmWJFPeqGXInNcHHtBKUhiJa/KBsoq0CBSwQJIENg2RP7AqyNIdpYhYnfYI9KYo/SlJWxdKSSHgBTyAQzfZcqc9OfGG0veC4ijaTcsEgdZskqo1W7/jVrbpEhL6N7kPzpwu99dqHljPDaNXqGuvby0cQyQpY+EK8BSsV4i2u/QecLjdmR4biq2Gc5WCZmgSXJwzKeYkqqKyfDqyjZT12VEtFrqZ6FTC0bJ0S7AGMD8PSB2R8RjrOUhK3FPQniIlSpAxXXoCFlrJ1Kp6rLScmVElU2gzf5TFbJIMzawYM8GWBXI4tEoaObrvB5A6UBMLPI8qWj6aLmdeyqjApx6QSqKmjjdePSlL+dQPX5gdjqPVea2OKqqyBVFuXZCukgOg9TvZMVuamGc3/RJFJAMsXqyOowW53CtF0UW6VtK4qpgGaIvtlrhOS6yz8hS5na2xefDha0/vg4FYrsUT4v5vy1PD3JUfSjnKMZvt66PNxWA+k8VtahjdHhCQFzVoebpTC0S2uV+2SOOCkSFl/WitPy9hlN4x0TcTMjOLrZ1GRwMFsno77yVxWjzlWqUTfNH/u1to637GBT8/jxbr6uCOhBR1TqplCKTscNDygzt7gVXPKuM+8ejUbKOWyRCt61TlHqYQihVKSY7+loy0eqZAgkCRrl8d7b8Zk5rXlVSlnC5bF8dasiJOV8Zp6RHb0DzUA/npmQfkwXi5s80CwF2rRbdc0PEQ9eJEwdqJ9yQFj+HBc5MA+Nx5m9LuJ0R+8hhhJCgnd1uvcOoZAS/sn9sqBmVNCnoIzAJwafdj+uUKnvKuwCeBgCutHozM+DEhUYDbNSgScu1vit2tfUfA/CPrmgvtRHqBM4gCByBOQKTAUPAl8FwAb4tpE97dS5JuZFNZUtadKsATQ4RYAIhHwLameJa+IyBrfzswv3WxI7yASIEPjvumilpS1xMK3UjHe3xC1DXQexmDyhEJwh2OLviuJwjgcWsvMep5efr4t55b//ZThnbk4wBw+/6F2n3wS0v8EnUyIYM0tex1Q8l26qUkUjViThT4HZSLDLyQB9+ODLzZuTcI65/oawKyduW3/7zj9OH6+ihEgDcA82t3GEBxMkQBqntFvPC+MDjdS5qm0IGWgt/RwKveT7Yvjk8I8G/oNTbxShDw4qHDGz901tia0bg2jirvo9fcIQC7lJ+gk6RHPdEPUMhcOj9ZAMW353O5kgAroNU2pxKUpxw2rgQ2K/aS3jsf3D1+/qrhp0fE2MCkIzwJ6lYy6Fu/JcCf8QqMB8Baf6JAT8i2lp9EyY441hSfuWbukXVTA0VATsIDu8fPO6Xx9HBGAi1NJ2XkAMUKtcKcJ523JeBz5ycLbMo5VUDbtZYgua/BF72fpvjMNc0VAn/FCcjaBQ9Mjr9jtJF7An1zhQZiO0VvM6VQSEdbDwCPACpDKenVpMbybW9Hb2ejXfHpFQX/hBCQk7Bpcvzs4aEdo3G0hr4fHEGoiNdNRdOUhEuElSOdTEvABt2EkNFx+v3CCwCmUfZ4plYaixP2Yx2n3r9r9QWnNHasrsdrQ68KwX8hAOiEjJ421H18oGkFRQSXg6+O7gUZr4BJcYUbejnYKiUBuq3ZNHnPGY36Q42IrWYBL2B+4Szt/2vZAfJDTd6kigZeSw7J82R6v2X2kXVbT+Tzl+IHm4Q3rDlnuP6Q6KbeVjfxAPWbis7NojcdqSfWubJ8riUIrAfo1ENiPAS2i2NbRE9n+kQ/e6l+suw0QcSZwhtEL8kQQUvbTb0aLec0lk9Gv2AHYwkhQvzbLpYthx6+drosz1zKH+3LPGIsrq2PDu79/vA5F8p3igMEAFjZQV+CrCxNicM/FdY/UQaL7wsCdBsZGcH6GefB2EVXwsi574Pa294FSeM0aNVPpeMyFYQRhtrzEC0dgnTmFWi9sQcW9k5NLr2x5+PQm7ra/klF/F8WcngG4r1/hLG5F+HsQ+fA6OgoDDUaEMdx9rtv0vLTFNqdDrSWl2FhYQEOHNgPyeysCLOzzAsfpWsRDH5jZfb2gSQA3fQFKzMJJ4sHlJaEeFA9QHlB6G1YLFNMGGAPQPqMrKyeEMNgN5pg5V08AU6kNwwsASQQUy/nXdzkhElSPODgs4DMhkioPKA3RJgfvwoFCN6FkIqA4xwDQgTo90eSKgj3oHGe0q5oHBgVZwR0ql5Qz2MAhjwg+2O7Goj1VvtpD6jugb8AJUrMDagEcToSjgn48yCrzkvTBjYZx7khoK6e82AZgu5JQ4Bqi+oZ/10m3e8bAgSQE8dOQE7CJMh3e5fL+oylJiCKom3HSoBYDqdp+pjYLfX/Ta/UBBw5cmSSMbb1WAhgLNqYJMnLZZfL0seAZrN5rwDzfyGhWa/XNx48+OaT/RCv+iIIT0/vvbfRaFwjvOFoMSErLdxeq9Uu27v3le390mFg0IftpptvXetURXDefPyxn01B1apWtapVrWpVq1rVqla1t9L+I8AAvydNUrElRtkAAAAASUVORK5CYII= Url Encoded Base64 String 

    Read the article

  • help setting up an IPSEC vpn from my linux box

    - by robthewolf
    I have an office with a router and a remote server (Linux - Ubuntu 10.10). Both locations need to connect to a data supplier through a VPN. The VPN is an IPSEC gateway. I was able to configure my Linksys rv42 router to create a VPN connection successfully and now I need to do the same for Linux server. I have been messing around with this for too long. First I tried OpenVPN, but that is SSL and not IPSEC. Then I tried Shrew. I think I have the settings correct but I haven't been able to create the connection. It maybe that I have to use something else like a direct IPSEC config or something like that. If someone knows of a way to turn the following settings that I have been given below into a working IPSEC VPN connection I would be very grateful. Here are the settings I was given that must be used to connect to my supplier: Local destination network: 192.168.4.0/24 Local destination hosts: 192.168.4.100 Remote destination network: 192.167.40.0/24 Remote destination hosts: 192.168.40.27 VPN peering point: xxx.xxx.xxx.xxx Then they have given me the following details: IPSEC/ISAKMP Phase 1 Parameters: Authentication method: pre shared secret Diffie Hellman group: group 2 Encryption Algorithm: 3DES Lifetime in seconds:28800 Phase 2 parameters: IPSEC security: ESP Encryption algortims: 3DES Authentication algorithms: MD5 lifetime in seconds: 28800 pfs: disabled Here are the settings from my attempt to use shrew: n:version:2 n:network-ike-port:500 n:network-mtu-size:1380 n:client-addr-auto:0 n:network-frag-size:540 n:network-dpd-enable:1 n:network-notify-enable:1 n:client-banner-enable:1 n:client-dns-used:1 b:auth-mutual-psk:YjJzN2QzdDhyN2EyZDNpNG42ZzQ= n:phase1-dhgroup:2 n:phase1-keylen:0 n:phase1-life-secs:28800 n:phase1-life-kbytes:0 n:vendor-chkpt-enable:0 n:phase2-keylen:0 n:phase2-pfsgroup:-1 n:phase2-life-secs:28800 n:phase2-life-kbytes:0 n:policy-nailed:0 n:policy-list-auto:1 n:client-dns-auto:1 n:network-natt-port:4500 n:network-natt-rate:15 s:client-dns-addr:0.0.0.0 s:client-dns-suffix: s:network-host:xxx.xxx.xxx.xxx s:client-auto-mode:pull s:client-iface:virtual s:client-ip-addr:192.168.4.0 s:client-ip-mask:255.255.255.0 s:network-natt-mode:enable s:network-frag-mode:disable s:auth-method:mutual-psk s:ident-client-type:address s:ident-client-data:192.168.4.0 s:ident-server-type:address s:ident-server-data:192.168.40.0 s:phase1-exchange:aggressive s:phase1-cipher:3des s:phase1-hash:md5 s:phase2-transform:3des s:phase2-hmac:md5 s:ipcomp-transform:disabled Finally here is the debug output from the shrew log: 10/12/22 17:22:18 ii : ipc client process thread begin ... 
10/12/22 17:22:18 < A : peer config add message 10/12/22 17:22:18 DB : peer added ( obj count = 1 ) 10/12/22 17:22:18 ii : local address 217.xxx.xxx.xxx selected for peer 10/12/22 17:22:18 DB : tunnel added ( obj count = 1 ) 10/12/22 17:22:18 < A : proposal config message 10/12/22 17:22:18 < A : proposal config message 10/12/22 17:22:18 < A : client config message 10/12/22 17:22:18 < A : local id '192.168.4.0' message 10/12/22 17:22:18 < A : remote id '192.168.40.0' message 10/12/22 17:22:18 < A : preshared key message 10/12/22 17:22:18 < A : peer tunnel enable message 10/12/22 17:22:18 DB : new phase1 ( ISAKMP initiator ) 10/12/22 17:22:18 DB : exchange type is aggressive 10/12/22 17:22:18 DB : 217.xxx.xxx.xxx:500 <- 206.xxx.xxx.xxx:500 10/12/22 17:22:18 DB : c1a8b31ac860995d:0000000000000000 10/12/22 17:22:18 DB : phase1 added ( obj count = 1 ) 10/12/22 17:22:18 : security association payload 10/12/22 17:22:18 : - proposal #1 payload 10/12/22 17:22:18 : -- transform #1 payload 10/12/22 17:22:18 : key exchange payload 10/12/22 17:22:18 : nonce payload 10/12/22 17:22:18 : identification payload 10/12/22 17:22:18 : vendor id payload 10/12/22 17:22:18 ii : local supports nat-t ( draft v00 ) 10/12/22 17:22:18 : vendor id payload 10/12/22 17:22:18 ii : local supports nat-t ( draft v01 ) 10/12/22 17:22:18 : vendor id payload 10/12/22 17:22:18 ii : local supports nat-t ( draft v02 ) 10/12/22 17:22:18 : vendor id payload 10/12/22 17:22:18 ii : local supports nat-t ( draft v03 ) 10/12/22 17:22:18 : vendor id payload 10/12/22 17:22:18 ii : local supports nat-t ( rfc ) 10/12/22 17:22:18 : vendor id payload 10/12/22 17:22:18 ii : local supports DPDv1 10/12/22 17:22:18 : vendor id payload 10/12/22 17:22:18 ii : local is SHREW SOFT compatible 10/12/22 17:22:18 : vendor id payload 10/12/22 17:22:18 ii : local is NETSCREEN compatible 10/12/22 17:22:18 : vendor id payload 10/12/22 17:22:18 ii : local is SIDEWINDER compatible 10/12/22 17:22:18 : vendor id payload 10/12/22 17:22:18 ii : local is CISCO UNITY compatible 10/12/22 17:22:18 = : cookies c1a8b31ac860995d:0000000000000000 10/12/22 17:22:18 = : message 00000000 10/12/22 17:22:18 - : send IKE packet 217.xxx.xxx.xxx:500 - 206.xxx.xxx.xxx:500 ( 484 bytes ) 10/12/22 17:22:18 DB : phase1 resend event scheduled ( ref count = 2 ) 10/12/22 17:22:18 ii : opened tap device tap0 10/12/22 17:22:28 - : resend 1 phase1 packet(s) 217.xxx.xxx.xxx:500 - 206.xxx.xxx.xxx:500 10/12/22 17:22:38 - : resend 1 phase1 packet(s) 217.xxx.xxx.xxx:500 - 206.xxx.xxx.xxx:500 10/12/22 17:22:48 - : resend 1 phase1 packet(s) 217.xxx.xxx.xxx:500 - 206.xxx.xxx.xxx:500 10/12/22 17:22:58 ii : resend limit exceeded for phase1 exchange 10/12/22 17:22:58 ii : phase1 removal before expire time 10/12/22 17:22:58 DB : phase1 deleted ( obj count = 0 ) 10/12/22 17:22:58 ii : closed tap device tap0 10/12/22 17:22:58 DB : tunnel stats event canceled ( ref count = 1 ) 10/12/22 17:22:58 DB : removing tunnel config references 10/12/22 17:22:58 DB : removing tunnel phase2 references 10/12/22 17:22:58 DB : removing tunnel phase1 references 10/12/22 17:22:58 DB : tunnel deleted ( obj count = 0 ) 10/12/22 17:22:58 DB : removing all peer tunnel refrences 10/12/22 17:22:58 DB : peer deleted ( obj count = 0 ) 10/12/22 17:22:58 ii : ipc client process thread exit ...

    Read the article

  • Apache on Win32: Slow Transfers of single, static files in HTTP, fast in HTTPS

    - by Michael Lackner
    I have a weird problem with Apache 2.2.15 on Windows 2000 Server SP4. Basically, I am trying to serve larger static files, images, videos etc. The download seems to be capped at around 550kB/s even over 100Mbit LAN. I tried other protocols (FTP/FTPS/FTP+ES/SCP/SMB), and they are all in the multi-megabyte range. The strangest thing is that, when using Apache with HTTPS instead of HTTP, it serves very fast, around 2.7MByte/s! I also tried the AnalogX SimpleWWW server just to test the plain HTTP speed of it, and it gave me a healthy 3.3Mbyte/s. I am at a total loss here. I searched the web, and tried to change the following Apache configuration directives in httpd.conf, one at a time, mostly to no avail at all: SendBufferSize 1048576 #(tried multiples of that too, up to 100Mbytes) EnableSendfile Off #(minor performance boost) EnableMMAP Off Win32DisableAcceptEx HostnameLookups Off #(default) I also tried to tune the following registry parameters, setting their values to 4194304 in decimal (they are REG_DWORD), and rebooting afterwards: HKLM\SYSTEM\CurrentControlSet\Services\AFD\Parameters\DefaultReceiveWindow HKLM\SYSTEM\CurrentControlSet\Services\AFD\Parameters\DefaultSendWindow Additionally, I tried to install mod_bw, which sets the event timer precision to 1ms, and allows for bandwidth throttling. According to some people it boosts static file serving performance when set to unlimited bandwidth for everybody. Unfortunately, it did nothing for me. So: AnalogX HTTP: 3300kB/s Gene6 FTPD, plain: 3500kB/s Gene6 FTPD, Implicit and Explicit SSL, AES256 Cipher: 1800-2000kB/s freeSSHD: 1100kB/s SMB shared folder: about 3000kB/s Apache HTTP, plain: 550kB/s Apache HTTPS: 2700kB/s Clients that were used in the bandwidth testing: Internet Explorer 8 (HTTP, HTTPS) Firefox 8 (HTTP, HTTPS) Chrome 13 (HTTP, HTTPS) Opera 11.60 (HTTP, HTTPS) wget under CygWin (HTTP, HTTPS) FileZilla (FTP, FTPS, FTP+ES, SFTP) Windows Explorer (SMB) Generally, transfer speeds are not too high, but that's because the server machine is an old quad Pentium Pro 200MHz machine with 2GB RAM. However, I would like Apache to serve at at least 2Mbyte/s instead of 550kB/s, and that already works with HTTPS easily, so I fail to see why plain HTTP is so crippled. I am using a Kerio Winroute Firewall, but no Throttling and no special filters peeking into HTTP traffic, just the plain Firewall functionality for blocking/allowing connections. The Apache error.log (Loglevel info) shows no warnings, no errors. Also nothing strange to be seen in access.log. I have already stripped down my httpd.conf to the bare minimum just to make sure nothing is interfering, but that didn't help either. If you have any idea, help would be greatly appreciated, since I am totally out of ideas! Thanks! Edit: I have now tried a newer Apache 2.2.21 to see if it makes any difference. However, the behaviour is exactly the same. Edit 2: KM01 has requested a sniff on the HTTP headers, so here comes the LiveHTTPHeaders output (an extension to Firefox). The Output is generated on downloading a single file called "elephantsdream_source.264", which is an H.264/AVC elementary video stream under an Open Source license. I have taken the freedom to edit the URL, removing folders and changing the actual servers domain name to www.mydomain.com. 
Here it is: LiveHTTPHeaders, Plain HTTP: http://www.mydomain.com/elephantsdream_source.264 GET /elephantsdream_source.264 HTTP/1.1 Host: www.mydomain.com User-Agent: Mozilla/5.0 (Windows NT 5.2; WOW64; rv:6.0.2) Gecko/20100101 Firefox/6.0.2 Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8 Accept-Language: de-de,de;q=0.8,en-us;q=0.5,en;q=0.3 Accept-Encoding: gzip, deflate Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7 Connection: keep-alive HTTP/1.1 200 OK Date: Wed, 21 Dec 2011 20:55:16 GMT Server: Apache/2.2.21 (Win32) mod_ssl/2.2.21 OpenSSL/0.9.8r PHP/5.2.17 Last-Modified: Thu, 28 Oct 2010 20:20:09 GMT Etag: "c000000013fa5-29cf10e9-493b311889d3c" Accept-Ranges: bytes Content-Length: 701436137 Keep-Alive: timeout=15, max=100 Connection: Keep-Alive Content-Type: text/plain LiveHTTPHeaders, HTTPS: https://www.mydomain.com/elephantsdream_source.264 GET /elephantsdream_source.264 HTTP/1.1 Host: www.mydomain.com User-Agent: Mozilla/5.0 (Windows NT 5.2; WOW64; rv:6.0.2) Gecko/20100101 Firefox/6.0.2 Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8 Accept-Language: de-de,de;q=0.8,en-us;q=0.5,en;q=0.3 Accept-Encoding: gzip, deflate Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7 Connection: keep-alive HTTP/1.1 200 OK Date: Wed, 21 Dec 2011 20:56:57 GMT Server: Apache/2.2.21 (Win32) mod_ssl/2.2.21 OpenSSL/0.9.8r PHP/5.2.17 Last-Modified: Thu, 28 Oct 2010 20:20:09 GMT Etag: "c000000013fa5-29cf10e9-493b311889d3c" Accept-Ranges: bytes Content-Length: 701436137 Keep-Alive: timeout=15, max=100 Connection: Keep-Alive Content-Type: text/plain
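
    As a rough way to take the individual download clients out of the comparison, a small timing script can fetch the same file over both protocols and report MB/s. This is only a sketch in Python 3 (the URL is the placeholder from above, and it assumes the HTTPS certificate is one the client already trusts):

      import time
      import urllib.request

      def throughput(url, chunk=1 << 20, limit=100 << 20):
          # Download up to `limit` bytes in `chunk`-sized reads and return MB/s.
          read = 0
          start = time.time()
          with urllib.request.urlopen(url) as resp:
              while read < limit:
                  data = resp.read(chunk)
                  if not data:
                      break
                  read += len(data)
          elapsed = time.time() - start
          return read / elapsed / (1 << 20)

      for scheme in ("http", "https"):
          url = "%s://www.mydomain.com/elephantsdream_source.264" % scheme
          print(scheme, "%.2f MB/s" % throughput(url))

    If plain HTTP from this script also sits around 0.5 MB/s while HTTPS runs several times faster, that narrows the problem to the server side rather than to any particular browser.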

    Read the article

  • Update to Easy Slider 1.7 made all my JQuery code stop working.

    - by Anders H
    I'm a pretty novice jQuery user. I've got some experience implementing different plugins but would be lost trying to customize my own. I can't share the exact site details with you due to an NDA, so I hope someone can give me a little help. I've got a project due today (just HTML/CSS/jQuery). It has a lightbox, a show/hide login menu, and a slider (Easy Slider 1.5). Everything was working together until I attempted to update to Easy Slider 1.7 (see link on same page, I'm too new to post more than 1 link). When I did so, jQuery stopped working for all the plugins. I've attempted to revert back to the original state by undoing my work (didn't do much), and jQuery remains broken. The Firebug error console showed no errors. I can't find anything in the code no matter how hard I look at it. Can anyone help me troubleshoot this jQuery problem? Delivery for the project is supposed to be tonight. EDIT: Generic header info: <!-- Global Style Sheet --> <link rel="stylesheet" href="style.css" media="screen" type="text/css" /> <!-- Cufon --> <script src="cufon/cufon.js" type="text/javascript"></script> <script src="cufon/gotham_325-gotham_350.font.js" type="text/javascript"></script> <!-- jQuery Javascript --> <script type="text/javascript" language="javascript" src="js/jquery-1.3.2.min.js"></script> <script type="text/javascript" language="javascript" src="js/jquery-ui-1.7.1.custom.min.js"></script> <script type="text/javascript" language="javascript" src="js/jquery.colorbox.js"></script> <script type="text/javascript" language="javascript" src="js/global.js"></script> <script type="text/javascript" language="javascript" src="js/home.js"></script> <script type="text/javascript"> $(document).ready(function() { $(".signin").click(function(e) { e.preventDefault(); $("fieldset#signin_menu").toggle(); $(".signin").toggleClass("menu-open"); }); $("fieldset#signin_menu").mouseup(function() { return false }); $(document).mouseup(function(e) { if($(e.target).parent("a.signin").length==0) { $(".signin").removeClass("menu-open"); $("fieldset#signin_menu").hide(); } }); }); </script> <script src="javascripts/jquery.tipsy.js" type="text/javascript"></script> <script type='text/javascript'> $(function() { $('#forgot_username_link').tipsy({gravity: 'w'}); }); </script> <script type="text/javascript" language="javascript" src="js/easySlider1.5.js"></script> <script type="text/javascript"> $(document).ready(function(){ $("#slider").easySlider(); }); </script> <script type="text/javascript"> $(document).ready(function(){ $(".regbox").colorbox({iframe:true, innerWidth:270, innerHeight:270}); }); </script>

    Read the article

  • When should I use indexed arrays of OpenGL vertices?

    - by Tartley
    I'm trying to get a clear idea of when I should be using indexed arrays of OpenGL vertices, drawn with gl[Multi]DrawElements and the like, versus when I should simply use contiguous arrays of vertices, drawn with gl[Multi]DrawArrays. (Update: The consensus in the replies I got is that one should always be using indexed vertices.) I have gone back and forth on this issue several times, so I'm going to outline my current understanding, in the hopes someone can either tell me I'm now finally more or less correct, or else point out where my remaining misunderstandings are. Specifically, I have three conclusions, in bold. Please correct them if they are wrong. One simple case is if my geometry consists of meshes to form curved surfaces. In this case, the vertices in the middle of the mesh will have identical attributes (position, normal, color, texture coord, etc) for every triangle which uses the vertex. This leads me to conclude that: 1. For geometry with few seams, indexed arrays are a big win. Follow rule 1 always, except: For geometry that is very 'blocky', in which every edge represents a seam, the benefit of indexed arrays is less obvious. To take a simple cube as an example, although each vertex is used in three different faces, we can't share vertices between them, because for a single vertex, the surface normals (and possible other things, like color and texture co-ord) will differ on each face. Hence we need to explicitly introduce redundant vertex positions into our array, so that the same position can be used several times with different normals, etc. This means that indexed arrays are of less use. e.g. When rendering a single face of a cube: 0 1 o---o |\ | | \ | | \| o---o 3 2 (this can be considered in isolation, because the seams between this face and all adjacent faces mean than none of these vertices can be shared between faces) if rendering using GL_TRIANGLE_FAN (or _STRIP), then each face of the cube can be rendered thus: verts = [v0, v1, v2, v3] colors = [c0, c0, c0, c0] normal = [n0, n0, n0, n0] Adding indices does not allow us to simplify this. From this I conclude that: 2. When rendering geometry which is all seams or mostly seams, when using GL_TRIANGLE_STRIP or _FAN, then I should never use indexed arrays, and should instead always use gl[Multi]DrawArrays. (Update: Replies indicate that this conclusion is wrong. Even though indices don't allow us to reduce the size of the arrays here, they should still be used because of other performance benefits, as discussed in the comments) The only exception to rule 2 is: When using GL_TRIANGLES (instead of strips or fans), then half of the vertices can still be re-used twice, with identical normals and colors, etc, because each cube face is rendered as two separate triangles. Again, for the same single cube face: 0 1 o---o |\ | | \ | | \| o---o 3 2 Without indices, using GL_TRIANGLES, the arrays would be something like: verts = [v0, v1, v2, v2, v3, v0] normals = [n0, n0, n0, n0, n0, n0] colors = [c0, c0, c0, c0, c0, c0] Since a vertex and a normal are often 3 floats each, and a color is often 3 bytes, that gives, for each cube face, about: verts = 6 * 3 floats = 18 floats normals = 6 * 3 floats = 18 floats colors = 6 * 3 bytes = 18 bytes = 36 floats and 18 bytes per cube face. (I understand the number of bytes might change if different types are used, the exact figures are just for illustration.) 
With indices, we can simplify this a little, giving: verts = [v0, v1, v2, v3] (4 * 3 = 12 floats) normals = [n0, n0, n0, n0] (4 * 3 = 12 floats) colors = [c0, c0, c0, c0] (4 * 3 = 12 bytes) indices = [0, 1, 2, 2, 3, 0] (6 shorts) = 24 floats + 12 bytes, and maybe 6 shorts, per cube face. See how in the latter case, vertices 0 and 2 are used twice, but only represented once in each of the verts, normals and colors arrays. This sounds like a small win for using indices, even in the extreme case of every single geometry edge being a seam. This leads me to conclude that: 3. When using GL_TRIANGLES, one should always use indexed arrays, even for geometry which is all seams. Please correct my conclusions in bold if they are wrong.
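
    For what it's worth, the arithmetic above is easy to sanity-check with a few lines of Python. This just mirrors the question's own assumptions (4-byte floats, 3 bytes per color, 2-byte shorts for indices, one cube face drawn with GL_TRIANGLES):

      FLOAT = 4   # bytes per float
      SHORT = 2   # bytes per index

      def face_bytes(indexed):
          # 4 unique vertices when indexed, 6 (two triangle corners each) when not
          verts = 4 if indexed else 6
          size = verts * 3 * FLOAT      # positions
          size += verts * 3 * FLOAT     # normals
          size += verts * 3             # colors, 3 bytes each
          if indexed:
              size += 6 * SHORT         # indices [0, 1, 2, 2, 3, 0]
          return size

      print("non-indexed:", face_bytes(False), "bytes per face")   # 162
      print("indexed:    ", face_bytes(True), "bytes per face")    # 120

    So even in the all-seams case the indexed layout comes out smaller, before counting the other performance benefits mentioned in the replies.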

    Read the article

  • Win32 -- Object cleanup and global variables

    - by KaiserJohaan
    Hello, I've got a question about global variables and object cleanup in c++. For example, look at the code here; case WM_PAINT: paintText(&hWnd); break; void paintText(HWND* hWnd) { PAINTSTRUCT ps; HBRUSH hbruzh = CreateSolidBrush(RGB(0,0,0)); HDC hdz = BeginPaint(*hWnd,&ps); char s1[] = "Name"; char s2[] = "IP"; SelectBrush(hdz,hbruzh); SelectFont(hdz,hFont); SetBkMode(hdz,TRANSPARENT); TextOut(hdz,3,23,s1,sizeof(s1)); TextOut(hdz,10,53,s2,sizeof(s2)); EndPaint(*hWnd,&ps); DeleteObject(hdz); DeleteObject(hbruzh); // bad? DeleteObject(ps); // bad? } 1)First of all; which objects are good to delete and which ones are NOT good to delete and why? Not 100% sure of this. 2)Since WM_PAINT is called everytime the window is redrawn, would it be better to simply store ps, hdz and hbruzh as global variables instead of re-initializing them everytime? The downside I guess would be tons of global variables in the end _ but performance-wise would it not be less CPU-consuming? I know it won't matter prolly but I'm just aiming for minimalistic as possible for educational purposes. 3) What about libraries that are loaded in? For example: // // Main // int WINAPI WinMain(HINSTANCE hInstance, HINSTANCE hPrevInstance, LPSTR lpCmdLine, int nCmdShow) { // initialize vars HWND hWnd; WNDCLASSEX wc; HINSTANCE hlib = LoadLibrary("Riched20.dll"); ThishInstance = hInstance; ZeroMemory(&wc,sizeof(wc)); // set WNDCLASSEX props wc.cbSize = sizeof(WNDCLASSEX); wc.lpfnWndProc = WindowProc; wc.hInstance = ThishInstance; wc.hIcon = LoadIcon(hInstance,MAKEINTRESOURCE(IDI_MYICON)); wc.lpszMenuName = MAKEINTRESOURCE(IDR_MENU1); wc.hCursor = LoadCursor(NULL, IDC_ARROW); wc.hbrBackground = (HBRUSH)COLOR_WINDOW; wc.lpszClassName = TEXT("PimpClient"); RegisterClassEx(&wc); // create main window and display it hWnd = CreateWindowEx(NULL, wc.lpszClassName, TEXT("PimpClient"), 0, 300, 200, 450, 395, NULL, NULL, hInstance, NULL); createWindows(&hWnd); ShowWindow(hWnd,nCmdShow); // loop message queue MSG msg; while (GetMessage(&msg, NULL,0,0)) { TranslateMessage(&msg); DispatchMessage(&msg); } // cleanup? FreeLibrary(hlib); return msg.wParam; } 3cont) is there a reason to FreeLibrary at the end? I mean when the process terminates all resources are freed anyway? And since the library is used to paint text throughout the program, why would I want to free before that? Cheers

    Read the article

  • Server hung with "blocked for more than 120 seconds", diskless

    - by alterpub
    I have server which hung every 2-5 days. dmesg show following situation: kernel: [490894.231753] INFO: task munin-html:10187 blocked for more than 120 seconds. kernel: [490894.231799] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message. kernel: [490894.231843] munin-html D 0000000000000000 0 10187 9796 0x00000000 kernel: [490894.231878] ffff88063174b968 0000000000000082 ffff88063174b8d8 0000000000015b80 kernel: [490894.231930] ffff88063174bfd8 0000000000015b80 ffff88063174bfd8 ffff88062fe644d0 kernel: [490894.231982] 0000000000015b80 0000000000015b80 ffff88063174bfd8 0000000000015b80 kernel: [490894.232033] Call Trace: kernel: [490894.232059] [<ffffffff8117fce0>] ? sync_buffer+0x0/0x50 kernel: [490894.232089] [<ffffffff815a1f13>] io_schedule+0x73/0xc0 kernel: [490894.232115] [<ffffffff8117fd25>] sync_buffer+0x45/0x50 kernel: [490894.232143] [<ffffffff815a258f>] __wait_on_bit+0x5f/0x90 kernel: [490894.232170] [<ffffffff8117fce0>] ? sync_buffer+0x0/0x50 kernel: [490894.232197] [<ffffffff815a2638>] out_of_line_wait_on_bit+0x78/0x90 kernel: [490894.232227] [<ffffffff81080250>] ? wake_bit_function+0x0/0x40 kernel: [490894.232255] [<ffffffff8117fcd6>] __wait_on_buffer+0x26/0x30 kernel: [490894.232288] [<ffffffffa00131be>] squashfs_read_data+0x1be/0x520 [squashfs] kernel: [490894.232320] [<ffffffff8114f0f1>] ? __mem_cgroup_try_charge+0x71/0x450 kernel: [490894.232350] [<ffffffffa0013963>] squashfs_cache_get+0x1c3/0x320 [squashfs] kernel: [490894.232381] [<ffffffffa00136eb>] ? squashfs_copy_data+0x10b/0x130 [squashfs] kernel: [490894.232426] [<ffffffff815a3dbe>] ? _raw_spin_lock+0xe/0x20 kernel: [490894.232454] [<ffffffffa0013b68>] ? squashfs_read_metadata+0x48/0xf0 [squashfs] kernel: [490894.232499] [<ffffffffa0013ae1>] squashfs_get_datablock+0x21/0x30 [squashfs] kernel: [490894.232544] [<ffffffffa0015026>] squashfs_readpage+0x436/0x4a0 [squashfs] kernel: [490894.232575] [<ffffffff8111a375>] ? __inc_zone_page_state+0x35/0x40 kernel: [490894.232606] [<ffffffff8110d072>] __do_page_cache_readahead+0x172/0x210 kernel: [490894.232636] [<ffffffff8110d131>] ra_submit+0x21/0x30 kernel: [490894.232662] [<ffffffff811045f3>] filemap_fault+0x3f3/0x450 kernel: [490894.232691] [<ffffffff812bd156>] ? prio_tree_insert+0x256/0x2b0 kernel: [490894.232726] [<ffffffffa009225d>] aufs_fault+0x11d/0x170 [aufs] kernel: [490894.232755] [<ffffffff8111f6d4>] __do_fault+0x54/0x560 kernel: [490894.232782] [<ffffffff81122f39>] handle_mm_fault+0x1b9/0x440 kernel: [490894.232811] [<ffffffff811286f5>] ? do_mmap_pgoff+0x335/0x380 kernel: [490894.232840] [<ffffffff815a7af5>] do_page_fault+0x125/0x350 kernel: [490894.232867] [<ffffffff815a4675>] page_fault+0x25/0x30 Os info cat /etc/issue.net Ubuntu 10.04.2 LTS uname -a Linux Shard1Host3 2.6.35-32-server #68~lucid1-Ubuntu SMP Wed Mar 28 18:33:00 UTC 2012 x86_64 GNU/Linux This system load via ltsp and hasn't harddrives also it has a lot of memory 24Gb. 
free -m total used free shared buffers cached Mem: 24152 17090 7061 0 50 494 -/+ buffers/cache: 16545 7607 Swap: 0 0 0 I put vmstat info here but I think it won't give results after reboot vmstat 1 30 procs -----------memory---------- ---swap-- -----io---- -system-- ----cpu---- r b swpd free buff cache si so bi bo in cs us sy id wa 0 0 0 7231156 52196 506400 0 0 0 0 172 143 7 0 92 0 1 0 0 7231024 52196 506400 0 0 0 0 7859 16233 5 0 94 0 0 0 0 7231024 52196 506400 0 0 0 0 7870 16446 2 0 98 0 0 0 0 7230900 52196 506400 0 0 0 0 7308 15661 5 0 95 0 0 0 0 7231100 52196 506400 0 0 0 0 7960 16543 6 0 94 0 0 0 0 7231100 52196 506400 0 0 0 0 7542 16047 5 1 94 0 3 0 0 7231100 52196 506400 0 0 0 0 7709 16621 3 0 96 0 0 0 0 7231220 52196 506400 0 0 0 0 7857 16552 4 0 96 0 0 0 0 7231220 52196 506400 0 0 0 0 7192 15491 6 0 94 0 0 0 0 7231220 52196 506400 0 0 0 0 7423 15792 5 1 94 0 1 0 0 7231260 52196 506404 0 0 0 0 7686 16296 2 0 98 0 0 0 0 7231260 52196 506404 0 0 0 0 6976 15183 5 0 95 0 0 0 0 7231260 52196 506404 0 0 0 0 7303 15600 4 0 95 0 0 0 0 7231320 52196 506404 0 0 0 0 7967 16241 1 0 98 0 0 0 0 7231444 52196 506404 0 0 0 0 6948 15113 6 0 94 0 0 0 0 7231444 52196 506404 0 0 0 0 7931 16181 6 0 94 0 1 0 0 7231516 52196 506404 0 0 0 0 7715 15829 6 0 94 0 0 0 0 7231516 52196 506404 0 0 0 0 7771 16036 2 0 97 0 0 0 0 7231268 52196 506404 0 0 0 0 7782 16202 6 0 94 0 1 0 0 7231212 52196 506404 0 0 0 0 7457 15622 4 0 96 0 0 0 0 7231212 52196 506404 0 0 0 0 7573 16045 2 0 98 0 2 0 0 7231216 52196 506404 0 0 0 0 7689 16076 6 0 94 0 0 0 0 7231424 52196 506404 0 0 0 0 7429 15650 4 0 95 0 3 0 0 7231424 52196 506404 0 0 0 0 7534 16168 3 0 97 0 1 0 0 7230548 52196 506404 0 0 0 0 8559 15926 7 1 92 0 0 0 0 7230672 52196 506404 0 0 0 0 7720 15905 2 0 98 0 0 0 0 7230548 52196 506404 0 0 0 0 7677 16313 5 0 95 0 1 0 0 7230676 52196 506404 0 0 0 0 7209 15432 5 0 95 0 0 0 0 7230800 52196 506404 0 0 0 0 7522 15861 2 0 98 0 0 0 0 7230552 52196 506404 0 0 0 0 7760 16661 5 0 95 0 In the munin(monitoring system) graphs I see(before server hung): Disk(nbd0) IOs per device: read: 289m but avg by week 2.09m Disk(nbd0) throughput per device: read: 4.73k but avg by week 108.76 Disk(nbd0) utilization per device: 100% but avg by week 1.2% Eth0 traffic was low: in/out only 2Mbps Number of threads increased to 566 usually 392 Fork rate 1.08 but usually 2.82 VMStat(processes state) Increased to 17.77(from 0 as far as I could see in the graph) CPU usage iowait 880.46% when usually 6.67% It'll be great if somebody help me to understand what's up.
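
    One small thing that may help correlate the hangs with the munin graphs: a throwaway Python script can count which tasks the kernel has reported as blocked. The log path is an assumption; adjust it to wherever these dmesg lines end up on the box:

      import re
      from collections import Counter

      pattern = re.compile(r"INFO: task (\S+):\d+ blocked for more than 120 seconds")
      hits = Counter()
      with open("/var/log/kern.log") as log:   # assumed location of the kernel log
          for line in log:
              m = pattern.search(line)
              if m:
                  hits[m.group(1)] += 1        # task name, e.g. munin-html

      for task, count in hits.most_common():
          print("%-20s %d" % (task, count))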

    Read the article

  • Managing BES Software Configurations

    - by DaveJohnston
    Hi, I am having problems with OTA deployment of a bespoke application that we have written. I have read loads of threads elsewhere and I have got mixed help, but for my particular case none of it has really helped. So I thought I would explain my exact situation and try and get some help here. I am running BES version 4.1.5 (Bundle 79) for Microsoft Exchange. The application we have written is split into 5 modules, which we control, and another 4 modules which are 3rd party libraries that we require. So for our modules the version numbers are regularly changing but for the others they are pretty much always going to remain the same. We have an alx file set up that identifies all of the files required and in fact I am able to create a software configuration and deploy the application with no problems. What I am trying to do however is maintain multiple versions of our application on the BES and be able to select which version I want to deploy to each user. I have tried this a number of ways (as I said I have read lots of other threads with solutions to this problem) but each seems to come with its own problem. First of all I tried just creating different configurations for each version of the application, but because they each had the same application ID the BES informed me that I couldn't do this. I read somewhere that the solution was to create a second shared folder (e.g. \Program Files\Common Files\RIM) and add the apploader stuff and the new version of the app to this folder. I could then create a second software configuration that would have the same application ID. The result of this seemed promising to start with. When I changed the config that was assigned to a user the new version was pushed out fine. But afterwards the BES reported that the device state was invalid, which meant I couldn't push anything else until I reactivated the device. I guess this is because the first config was never set to disallowed so the old version wasn't removed and the device essentially reported that it had multiple versions of the same application installed. The next suggestion I got was to change the application ID for each version, e.g. to include the version number. This meant that each version of the application could be included in a single configuration and I could set one to disallowed and the other to required. Initially this worked and the first version was deployed. But when I switched (i.e. the old version became disallowed and the new version required) the BES reported upgrade required and removed the old version. The device restarts and the old version is gone but the new version is not pushed out. I checked the BES and it still said Upgrade Required. I checked the log files and found: [40000] (11/12 09:50:27.397):{0xEB8} {[email protected], PIN=1234, UserId=2}SCS::PollDBQueueNewRequests - Queuing POLL_FOR_MISSING_APPS request [40000] (11/12 09:50:28.241):{0xE9C} RequestHandler::PollForMissingApps: Starting Poll For Missing Apps. [40304] (11/12 09:50:28.241):{0xE90} WorkerThreadPool:: ThreadProc(): Thread released with empty queue [40000] (11/12 09:50:28.241):{0xE9C} SCS::RemoveAppDeliveryRequests - No App Delivery Requests purged for User id 2 [30000] (11/12 09:50:28.960):{0xE9C} Discard duplicate module group "name" on device [30000] (11/12 09:50:28.960):{0xE9C} Discard duplicate module group "name" on device [40000] (11/12 09:50:29.163):{0xE9C} RequestHandler::PollForMissingApps: Completed Poll For Missing Apps, elapsed time 0.922 seconds. 
    (You will notice I have removed actual names and email addresses, etc. for privacy reasons. But one question: where does the name of the module group come from? In my case it is close to the application ID but doesn't include the version number that I added at the end in order to get it to work. Is that information embedded in a COD file or something??) So it is reporting a duplicate module group on the device? What does this mean? I checked the device properties (as reported on the BES) and it confirms that the modules with the old version numbers are still present on the device. So the application has been removed but not the modules?? I checked the device and the modules are gone, so it is just the BES reporting that they are still there?? I checked the database and it has the modules in question in the SyncDeviceMgmt table. If I delete these from the DB, the BES changes to report Install Required, and lo and behold the new version of the app is pushed out. So at the end of all that, my question is: does anyone have any other suggestions on how to handle upgrading our bespoke application OTA from the BES? Or can anyone point out something I am doing wrong in what I described above that might solve the problems I am having? I guess the underlying question is why the database maintains that the modules are on the device after they have been removed. Thanks for any help you can provide.

    Read the article

  • g++ symbol versioning. Set it to GCC_3.0 using version 4 of g++

    - by Ismael
    Hi all I need to implemente a Java class which uses JNI to control a fiscal printer in XUbuntu 8.10 with sun-java6-jdk installed. The structure is the following: EpsonDriver.java loads libEpson.so libEpson is linked dynamically with EpsonFiscalProtocol.so ( provided by Epson, no source available ) and pthread I use javah to generate the header file, and the code compiles. Then I put the libEpson.so in $JAVA_HOME/jre/lib/i386, and EpsonDriver.java uses an static initializar System.loadLibrary("libEpson") That part works, however, when I try to use any of the methods I get an unsatisfiedLinkError exception. Some time ago, a coworker did a version that works, and using objdump -Dslx I got the following: Program Header: LOAD off 0x00000000 vaddr 0x00000000 paddr 0x00000000 align 2**12 filesz 0x0000ccc4 memsz 0x0000ccc4 flags r-x LOAD off 0x0000d000 vaddr 0x0000d000 paddr 0x0000d000 align 2**12 filesz 0x00000250 memsz 0x00044a5c flags rw- DYNAMIC off 0x0000d014 vaddr 0x0000d014 paddr 0x0000d014 align 2**2 filesz 0x000000f0 memsz 0x000000f0 flags rw- NOTE off 0x000000d4 vaddr 0x000000d4 paddr 0x000000d4 align 2**2 filesz 0x00000024 memsz 0x00000024 flags r-- STACK off 0x00000000 vaddr 0x00000000 paddr 0x00000000 align 2**2 filesz 0x00000000 memsz 0x00000000 flags rw- Dynamic Section: NEEDED EpsonFiscalProtocol.so NEEDED libpthread.so.0 NEEDED libstdc++.so.6 NEEDED libm.so.6 NEEDED libc.so.6 SONAME libcom_tichile_jpos_EpsonSerialDriver.so INIT 0x00007254 FINI 0x0000ba08 GNU_HASH 0x000000f8 STRTAB 0x00001f50 SYMTAB 0x00000ae0 STRSZ 0x00002384 SYMENT 0x00000010 PLTGOT 0x0000d108 PLTRELSZ 0x00000008 PLTREL 0x00000011 JMPREL 0x0000724c REL 0x000045c4 RELSZ 0x00002c88 RELENT 0x00000008 TEXTREL 0x00000000 VERNEED 0x00004564 VERNEEDNUM 0x00000002 VERSYM 0x000042d4 RELCOUNT 0x000000ac Version References: required from libstdc++.so.6: 0x056bafd3 0x00 05 CXXABI_1.3 0x08922974 0x00 04 GLIBCXX_3.4 required from libc.so.6: 0x0b792650 0x00 03 GCC_3.0 0x0d696910 0x00 02 GLIBC_2.0 In the recently compiled file I get: Program Header: LOAD off 0x00000000 vaddr 0x00000000 paddr 0x00000000 align 2**12 filesz 0x00005300 memsz 0x00005300 flags r-x LOAD off 0x00005300 vaddr 0x00006300 paddr 0x00006300 align 2**12 filesz 0x00000274 memsz 0x00010314 flags rw- DYNAMIC off 0x00005314 vaddr 0x00006314 paddr 0x00006314 align 2**2 filesz 0x000000e0 memsz 0x000000e0 flags rw- EH_FRAME off 0x00004a00 vaddr 0x00004a00 paddr 0x00004a00 align 2**2 filesz 0x00000154 memsz 0x00000154 flags r-- Dynamic Section: NEEDED libstdc++.so.5 NEEDED libm.so.6 NEEDED libgcc_s.so.1 NEEDED libc.so.6 SONAME EpsonFiscalProtocol.so INIT 0x00001cb4 FINI 0x00004994 HASH 0x000000b4 STRTAB 0x00000da4 SYMTAB 0x000004f4 STRSZ 0x00000acf SYMENT 0x00000010 PLTGOT 0x0000640c PLTRELSZ 0x00000270 PLTREL 0x00000011 JMPREL 0x00001a44 REL 0x000019dc RELSZ 0x00000068 RELENT 0x00000008 VERNEED 0x0000198c VERNEEDNUM 0x00000002 VERSYM 0x00001874 RELCOUNT 0x00000004 Version References: required from libstdc++.so.5: 0x056bafd2 0x00 04 CXXABI_1.2 required from libc.so.6: 0x09691f73 0x00 03 GLIBC_2.1.3 0x0d696910 0x00 02 GLIBC_2.0 So I suspect the main diference is the GCC_3.0 symbol I compile libcom_tichile_EpsonSerialDriver.so with the following command ( from memory as I not at work right now ) g++ -Wl,-soname=.... -shared -I/*jni libraries*/ -o libcom_tichile_jpos_EpsonSerialDriver -lEpsonFiscalProtocol -lpthread Is there any way to tell g++ to use that symbol version? Or any idea in how to make it work? 
EDIT: I have another non-working version with the followin dump: Program Header: LOAD off 0x00000000 vaddr 0x00000000 paddr 0x00000000 align 2**12 filesz 0x0000bf68 memsz 0x0000bf68 flags r-x LOAD off 0x0000cc0c vaddr 0x0000cc0c paddr 0x0000cc0c align 2**12 filesz 0x000005e8 memsz 0x00044df0 flags rw- DYNAMIC off 0x0000cc20 vaddr 0x0000cc20 paddr 0x0000cc20 align 2**2 filesz 0x000000f8 memsz 0x000000f8 flags rw- EH_FRAME off 0x0000b310 vaddr 0x0000b310 paddr 0x0000b310 align 2**2 filesz 0x000002bc memsz 0x000002bc flags r-- STACK off 0x00000000 vaddr 0x00000000 paddr 0x00000000 align 2**2 filesz 0x00000000 memsz 0x00000000 flags rw- RELRO off 0x0000cc0c vaddr 0x0000cc0c paddr 0x0000cc0c align 2**0 filesz 0x000003f4 memsz 0x000003f4 flags r-- Dynamic Section: NEEDED EpsonFiscalProtocol.so NEEDED libpthread.so.0 NEEDED libstdc++.so.6 NEEDED libm.so.6 NEEDED libgcc_s.so.1 NEEDED libc.so.6 SONAME libcom_tichile_jpos_EpsonSerialDriver.so INIT 0x000055d8 FINI 0x0000a968 HASH 0x000000f4 GNU_HASH 0x00000a30 STRTAB 0x00002870 SYMTAB 0x00001410 STRSZ 0x00002339 SYMENT 0x00000010 PLTGOT 0x0000cff4 PLTRELSZ 0x00000168 PLTREL 0x00000011 JMPREL 0x00005470 REL 0x00004ea8 RELSZ 0x000005c8 RELENT 0x00000008 VERNEED 0x00004e38 VERNEEDNUM 0x00000002 VERSYM 0x00004baa RELCOUNT 0x00000001 Version References: required from libstdc++.so.6: 0x056bafd3 0x00 05 CXXABI_1.3 0x08922974 0x00 03 GLIBCXX_3.4 required from libc.so.6: 0x09691f73 0x00 06 GLIBC_2.1.3 0x0d696914 0x00 04 GLIBC_2.4 0x0d696910 0x00 02 GLIBC_2.0 Now I think the main difference is in the GCC_3.0 symbol/ABI EDIT: Luckily, a coworker found a way to talk to the printer using Java
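
    In case it helps anyone comparing builds, here is a small helper (an assumption on my part, not part of the build itself) that scrapes just the "Version References" block out of objdump output for each library passed on the command line, so the GCC_3.0 / GLIBC requirements of two .so files can be diffed quickly. It is a rough text scrape in Python that relies on the objdump text layout shown above:

      import subprocess
      import sys

      def version_refs(path):
          # Run objdump and return the lines listed under "Version References:".
          out = subprocess.check_output(["objdump", "-x", path],
                                        universal_newlines=True)
          refs, grab = [], False
          for line in out.splitlines():
              if line.strip().startswith("Version References:"):
                  grab = True
                  continue
              if grab:
                  if not line.strip() or not line.startswith(" "):
                      break                    # end of the block (heuristic)
                  refs.append(line.strip())
          return refs

      for lib in sys.argv[1:]:
          print(lib)
          for ref in version_refs(lib):
              print("    " + ref)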

    Read the article

  • What are good CLI tools for JSON?

    - by jasonmp85
    General Problem Though I may be diagnosing the root cause of an event, determining how many users it affected, or distilling timing logs in order to assess the performance and throughput impact of a recent code change, my tools stay the same: grep, awk, sed, tr, uniq, sort, zcat, tail, head, join, and split. To glue them all together, Unix gives us pipes, and for fancier filtering we have xargs. If these fail me, there's always perl -e. These tools are perfect for processing CSV files, tab-delimited files, log files with a predictable line format, or files with comma-separated key-value pairs. In other words, files where each line has next to no context. XML Analogues I recently needed to trawl through Gigabytes of XML to build a histogram of usage by user. This was easy enough with the tools I had, but for more complicated queries the normal approaches break down. Say I have files with items like this: <foo user="me"> <baz key="zoidberg" value="squid" /> <baz key="leela" value="cyclops" /> <baz key="fry" value="rube" /> </foo> And let's say I want to produce a mapping from user to average number of <baz>s per <foo>. Processing line-by-line is no longer an option: I need to know which user's <foo> I'm currently inspecting so I know whose average to update. Any sort of Unix one liner that accomplishes this task is likely to be inscrutable. Fortunately in XML-land, we have wonderful technologies like XPath, XQuery, and XSLT to help us. Previously, I had gotten accustomed to using the wonderful XML::XPath Perl module to accomplish queries like the one above, but after finding a TextMate Plugin that could run an XPath expression against my current window, I stopped writing one-off Perl scripts to query XML. And I just found out about XMLStarlet which is installing as I type this and which I look forward to using in the future. JSON Solutions? So this leads me to my question: are there any tools like this for JSON? It's only a matter of time before some investigation task requires me to do similar queries on JSON files, and without tools like XPath and XSLT, such a task will be a lot harder. If I had a bunch of JSON that looked like this: { "firstName": "Bender", "lastName": "Robot", "age": 200, "address": { "streetAddress": "123", "city": "New York", "state": "NY", "postalCode": "1729" }, "phoneNumber": [ { "type": "home", "number": "666 555-1234" }, { "type": "fax", "number": "666 555-4567" } ] } And wanted to find the average number of phone numbers each person had, I could do something like this with XPath: fn:avg(/fn:count(phoneNumber)) Questions Are there any command-line tools that can "query" JSON files in this way? If you have to process a bunch of JSON files on a Unix command line, what tools do you use? Heck, is there even work being done to make a query language like this for JSON? If you do use tools like this in your day-to-day work, what do you like/dislike about them? Are there any gotchas? I'm noticing more and more data serialization is being done using JSON, so processing tools like this will be crucial when analyzing large data dumps in the future. Language libraries for JSON are very strong and it's easy enough to write scripts to do this sort of processing, but to really let people play around with the data shell tools are needed. Related Questions Grep and Sed Equivalent for XML Command Line Processing Is there a query language for JSON? JSONPath or other XPath like utility for JSON/Javascript; or Jquery JSON
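
    For comparison, here is the example query ("average number of phone numbers per person") as exactly the kind of throwaway Python script the question is trying to avoid. The input layout is a placeholder assumption, one JSON object per file:

      import glob
      import json

      counts = []
      for path in glob.glob("people/*.json"):        # placeholder input layout
          with open(path) as f:
              person = json.load(f)
          counts.append(len(person.get("phoneNumber", [])))

      if counts:
          print("average phone numbers per person:",
                sum(counts) / float(len(counts)))

    It works, but a JSON analogue of XMLStarlet would collapse it into a single pipeline-friendly expression.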

    Read the article

  • Can this be improved? Scrubbing of dangerous html tags.

    - by chobo2
    I been finding that for something that I consider pretty import there is very little information or libraries on how to deal with this problem. I found this while searching. I really don't know all the million ways that a hacker could try to insert the dangerous tags. I have a rich html editor so I need to keep non dangerous tags but strip out bad ones. So is this script missing anything? It uses html agility pack. public string ScrubHTML(string html) { HtmlDocument doc = new HtmlDocument(); doc.LoadHtml(html); //Remove potentially harmful elements HtmlNodeCollection nc = doc.DocumentNode.SelectNodes("//script|//link|//iframe|//frameset|//frame|//applet|//object|//embed"); if (nc != null) { foreach (HtmlNode node in nc) { node.ParentNode.RemoveChild(node, false); } } //remove hrefs to java/j/vbscript URLs nc = doc.DocumentNode.SelectNodes("//a[starts-with(translate(@href, 'ABCDEFGHIJKLMNOPQRSTUVWXYZ', 'abcdefghijklmnopqrstuvwxyz'), 'javascript')]|//a[starts-with(translate(@href, 'ABCDEFGHIJKLMNOPQRSTUVWXYZ', 'abcdefghijklmnopqrstuvwxyz'), 'jscript')]|//a[starts-with(translate(@href, 'ABCDEFGHIJKLMNOPQRSTUVWXYZ', 'abcdefghijklmnopqrstuvwxyz'), 'vbscript')]"); if (nc != null) { foreach (HtmlNode node in nc) { node.SetAttributeValue("href", "#"); } } //remove img with refs to java/j/vbscript URLs nc = doc.DocumentNode.SelectNodes("//img[starts-with(translate(@src, 'ABCDEFGHIJKLMNOPQRSTUVWXYZ', 'abcdefghijklmnopqrstuvwxyz'), 'javascript')]|//img[starts-with(translate(@src, 'ABCDEFGHIJKLMNOPQRSTUVWXYZ', 'abcdefghijklmnopqrstuvwxyz'), 'jscript')]|//img[starts-with(translate(@src, 'ABCDEFGHIJKLMNOPQRSTUVWXYZ', 'abcdefghijklmnopqrstuvwxyz'), 'vbscript')]"); if (nc != null) { foreach (HtmlNode node in nc) { node.SetAttributeValue("src", "#"); } } //remove on<Event> handlers from all tags nc = doc.DocumentNode.SelectNodes("//*[@onclick or @onmouseover or @onfocus or @onblur or @onmouseout or @ondoubleclick or @onload or @onunload]"); if (nc != null) { foreach (HtmlNode node in nc) { node.Attributes.Remove("onFocus"); node.Attributes.Remove("onBlur"); node.Attributes.Remove("onClick"); node.Attributes.Remove("onMouseOver"); node.Attributes.Remove("onMouseOut"); node.Attributes.Remove("onDoubleClick"); node.Attributes.Remove("onLoad"); node.Attributes.Remove("onUnload"); } } // remove any style attributes that contain the word expression (IE evaluates this as script) nc = doc.DocumentNode.SelectNodes("//*[contains(translate(@style, 'ABCDEFGHIJKLMNOPQRSTUVWXYZ', 'abcdefghijklmnopqrstuvwxyz'), 'expression')]"); if (nc != null) { foreach (HtmlNode node in nc) { node.Attributes.Remove("stYle"); } } return doc.DocumentNode.WriteTo(); } Edit 2 people have suggested whitelisting. I actually like the idea of whitelisting but never actually did it because no one can actually tell me how to do it in C# and I can't even really find tutorials for how to do it in c#(the last time I looked. I will check it out again). How do you make a white list? Is it just a list collection? How do you actual parse out all html tags, script tags and every other tag? Once you have the tags how do you determine which ones are allowed? Compare them to you list collection? But what happens if the content is coming in and has like 100 tags and you have 50 allowed. You got to compare each of those 100 tag by 50 allowed tags. Thats quite a bit to go through and could be slow. Once you found a invalid tag how do you remove it? I don't really want to reject a whole set of text if one tag was found to be invalid. 
    I'd rather strip out the invalid tag and keep the rest of the content. Should I be using the HTML Agility Pack for this?
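
    To make the whitelisting idea concrete, here is a rough sketch of the approach in Python (not C# or the HTML Agility Pack, just the standard-library HTMLParser), where anything not explicitly allowed is dropped instead of trying to enumerate every dangerous tag. The allowed tag and attribute sets are illustrative assumptions:

      from html import escape
      from html.parser import HTMLParser

      ALLOWED_TAGS = {"p", "b", "i", "em", "strong", "ul", "ol", "li", "br", "a"}
      ALLOWED_ATTRS = {"a": {"href"}}
      BAD_SCHEMES = ("javascript:", "jscript:", "vbscript:")

      class WhitelistScrubber(HTMLParser):
          def __init__(self):
              super().__init__(convert_charrefs=True)
              self.out = []
              self.skip = 0          # depth inside dropped containers like <script>

          def handle_starttag(self, tag, attrs):
              if tag in ("script", "style"):
                  self.skip += 1
                  return
              if tag not in ALLOWED_TAGS or self.skip:
                  return
              kept = []
              for name, value in attrs:
                  value = value or ""
                  if name in ALLOWED_ATTRS.get(tag, set()) and \
                          not value.strip().lower().startswith(BAD_SCHEMES):
                      kept.append(' %s="%s"' % (name, escape(value, quote=True)))
              self.out.append("<%s%s>" % (tag, "".join(kept)))

          def handle_endtag(self, tag):
              if tag in ("script", "style"):
                  self.skip = max(0, self.skip - 1)
                  return
              if tag in ALLOWED_TAGS and not self.skip:
                  self.out.append("</%s>" % tag)

          def handle_data(self, data):
              if not self.skip:
                  self.out.append(escape(data))

      def scrub(html):
          parser = WhitelistScrubber()
          parser.feed(html)
          return "".join(parser.out)

      print(scrub('<p onclick="x()">hi <script>evil()</script>'
                  '<a href="javascript:alert(1)">link</a></p>'))

    The whitelist is just a small set, and a set lookup is cheap, so checking 100 incoming tags against 50 allowed names is not a performance concern; only the disallowed tag is dropped, and the surrounding content is kept.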

    Read the article

  • Crossover LAN connection between Ubuntu And Windows 7 is not working

    - by brett
    my question is closely related to: How do I connect Ubuntu 10.04 and Windows 7 with an Ethernet cable? What I am after is: Windows 7-------wireless-----\ Wifi router Ubuntu 10.04----wireless-----/ Windows 7-------wireless-----\ | cross_over_cable Wifi router | Ubuntu 10.04----wireless-----/ What I did was On Windows edit system32\drivers\etc\hosts Add the following line: 192.168.253.2 my_ubuntu_computer_name_&-wired //?not sure if this is right On Ubuntu: sudo gedit /etc/hosts Add the following line: 192.168.253.1 my_pc_computer_name&-wired //?not sure if this is right and then Ubuntu 12.04 as the host Right click on the Network Manager applet, click Edit Connections... In the Wired tab, click Auto eth0, then click Edit... In the IPv4 Settings tab, change Method: to Shared to other computers. Click Apply and enter your password when it asks you. Close everything and reboot. Plug the Ethernet cable into both computers. But, I can connect to my windows network folders from ubuntu via wifi I can't connect to my ubuntu network folders from windows via wifi(in fact this bit was working before - so my wifi connection is worse) my ubuntu Auto Ethernet seems to be on From Ubuntu eth0 Link encap:Ethernet HWaddr 00:11:2f:f3:43:8d inet addr:10.42.0.1 Bcast:10.42.0.255 Mask:255.255.255.0 inet6 addr: fe80::211:2fff:fef3:438d/64 Scope:Link UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:0 errors:0 dropped:0 overruns:0 frame:0 TX packets:172 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:0 (0.0 B) TX bytes:27279 (27.2 KB) Interrupt:19 Base address:0xe400 lo Link encap:Local Loopback inet addr:127.0.0.1 Mask:255.0.0.0 inet6 addr: ::1/128 Scope:Host UP LOOPBACK RUNNING MTU:16436 Metric:1 RX packets:1147 errors:0 dropped:0 overruns:0 frame:0 TX packets:1147 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:0 RX bytes:94380 (94.3 KB) TX bytes:94380 (94.3 KB) wlan0 Link encap:Ethernet HWaddr 00:03:c9:e9:6f:bf inet addr:10.1.1.7 Bcast:10.1.1.255 Mask:255.255.255.0 inet6 addr: fe80::203:c9ff:fee9:6fbf/64 Scope:Link UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:13186 errors:0 dropped:0 overruns:0 frame:0 TX packets:12187 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:1598882 (1.5 MB) TX bytes:1189555 (1.1 MB) From Windows: Windows IP Configuration Ethernet adapter Bluetooth Network Connection: Media State . . . . . . . . . . . : Media disconnected Connection-specific DNS Suffix . : Wireless LAN adapter Wireless Network Connection: Connection-specific DNS Suffix . : BoB Link-local IPv6 Address . . . . . : fe80::ecf7:c445:3725:b9c1%12 IPv4 Address. . . . . . . . . . . : 10.1.1.4 Subnet Mask . . . . . . . . . . . : 255.255.255.0 Default Gateway . . . . . . . . . : 10.1.1.1 Tunnel adapter Local Area Connection* 15: Connection-specific DNS Suffix . : IPv6 Address. . . . . . . . . . . : 2001:0:4137:9e76:1423:3ae3:f5fe:fefb Link-local IPv6 Address . . . . . : fe80::1423:3ae3:f5fe:fefb%23 Default Gateway . . . . . . . . . : :: Tunnel adapter isatap.BoB: Media State . . . . . . . . . . . : Media disconnected Connection-specific DNS Suffix . : BoB Tunnel adapter isatap.{D0C8EBA1-335D-4620-8570-6C36E8786D72}: Media State . . . . . . . . . . . : Media disconnected Connection-specific DNS Suffix . :

    Read the article

  • Multi-threaded .NET application blocks during file I/O when protected by Themida

    - by Erik Jensen
    As the title says I have a .NET application that is the GUI which uses multiple threads to perform separate file I/O and notice that the threads occasionally block when the application is protected by Themida. One thread is devoted to reading from serial COM port and another thread is devoted to copying files. What I experience is occasionally when the file copy thread encounters a network delay, it will block the other thread that is reading from the serial port. In addition to slow network (which can be transient), I can cause the problem to happen more frequently by making a PathFileExists call to a bad path e.g. PathFileExists("\\\\BadPath\\file.txt"); The COM port reading function will block during the call to ReadFile. This only happens when the application is protected by Themida. I have tried under WinXP, Win7, and Server 2012. In a streamlined test project, if I replace the .NET application with a MFC unmanaged application and still utilize the same threads I see no issue even when protected with Themida. I have contacted Oreans support and here is their response: The way that a .NET application is protected is very different from a native application. To protect a .NET application, we need to hook most of the file access APIs in order to "cheat" the .NET Framework that the application is protected. I guess that those special hooks (on CreateFile, ReadFile...) are delaying a bit the execution in your application and the problem appears. We did a test making those hooks as light as possible (with minimum code on them) but the problem still appeared in your application. The rest of software protectors that we tried (like Enigma, Molebox...) also use a similar hooking approach as it's the only way to make the .NET packed file to work. If those hooks are not present, the .NET Framework will abort execution as it will see that the original file was tampered (due to all Microsoft checks on .NET files) Those hooks are not present in a native application, that's why it should be working fine on your native application. Oreans support tried other software protectors such as Enigma Protector, Engima VirtualBox, and Molebox and all exhibit the exact same problem. What I have found as a work around is to separate out the file copy logic (where the file exists call is being made) to be performed in a completely separate process. I have experimented with converting the thread functions from unmanaged C++ to VB.NET equivalents (PathFileExists - System.IO.File.Exists and CreateFile/ReadFile - System.IO.Ports.SerialPort.Open/Read) and still see the same serial port read blocked when the file check or copy call is delayed. I have also tried setting the ReadFile to work asynchronously but that had no effect. I believe I am dealing with some low-level windows layer that no matter the language it exhibits a block on a shared resource -- and only when the application is executing under a single .NET process protected by Themida which evidently installs some hooks to allow .NET execution. At this time converting the entire application away from .NET is not an option. Nor is separating out the file copy logic to a separate task. I am wondering if anyone else has more knowledge of how a file operation can block another thread reading from a system port. I have included here example applications that show the problem: https://db.tt/cNMYfEIg - VB.NET https://db.tt/Y2lnTqw7 - MFC They are Visual Studio 2010 solutions. 
    When running the Themida-protected exe, you can see that while the FileThread counter pauses (executing the File.Exists call), the ReadThread counter pauses too. When running the unprotected exe that Visual Studio outputs, the ReadThread counter does not pause, which is how we expect it to function. Thanks!

    Read the article

  • FFMPEG Segfault Solutions

    - by Brentley_11
    I'm trying to convert a bunch of movies into h.264 mp4's using FFMPEG. These movies are sourced from various portable camcorders such as the Flip Mino HD and the Kodak ZI8. One issue I'm having with video from the ZI8 is it seems to be causing FFMPEG to segfault. Here is my command: ffmpeg -i 'XmasSailor720p60fps.MOV' -threads 2 -acodec libfaac -ab 96kb -vcodec libx264 -vpre hq -b 500kb -s 484x272 XmasSailor.mp4 Here is the output: FFmpeg version SVN-r20668, Copyright (c) 2000-2009 Fabrice Bellard, et al. built on Dec 2 2009 18:37:34 with gcc 4.2.4 (Ubuntu 4.2.4-1ubuntu4) configuration: --enable-libfaac --enable-libfaad --enable-libmp3lame --enable-libx264 --enable-gpl --enable-nonfree --enable-postproc --enable-pthreads --enable-shared libavutil 50. 5. 1 / 50. 5. 1 libavcodec 52.42. 0 / 52.42. 0 libavformat 52.39. 2 / 52.39. 2 libavdevice 52. 2. 0 / 52. 2. 0 libswscale 0. 7. 2 / 0. 7. 2 libpostproc 51. 2. 0 / 51. 2. 0 Seems stream 0 codec frame rate differs from container frame rate: 59.94 (60000/1001) -> 29.97 (30000/1001) Input #0, mov,mp4,m4a,3gp,3g2,mj2, from 'XmasSailor720p60fps.MOV': Duration: 00:00:05.37, start: 0.000000, bitrate: 12021 kb/s Stream #0.0(eng): Video: h264, yuv420p, 1280x720 [PAR 1:1 DAR 16:9], 11994 kb/s, 29.97 tbr, 90k tbn, 59.94 tbc Stream #0.1(eng): Audio: aac, 48000 Hz, stereo, s16, 128 kb/s Metadata major_brand : qt minor_version : 0 compatible_brands: qt comment : KODAK Zi8 Pocket Video Camera comment-eng : KODAK Zi8 Pocket Video Camera [libx264 @ 0x99e1020]using SAR=1/1 [libx264 @ 0x99e1020]using cpu capabilities: MMX2 SSE2Fast SSSE3 FastShuffle SSE4.1 Cache64 [libx264 @ 0x99e1020]profile High, level 2.1 Output #0, mp4, to 'XmasSailor.mp4': Stream #0.0(eng): Video: libx264, yuv420p, 484x272 [PAR 1:1 DAR 121:68], q=10-51, 500 kb/s, 30k tbn, 29.97 tbc Stream #0.1(eng): Audio: aac, 48000 Hz, stereo, s16, 96 kb/s Metadata comment : Encoded with the Statusfirm Video Transcoder Stream mapping: Stream #0.0 -> #0.0 Stream #0.1 -> #0.1 Press [q] to stop encoding [h264 @ 0x99de950]B picture before any references, skipping [h264 @ 0x99de950]decode_slice_header error [h264 @ 0x99de950]no frame! Error while decoding stream #0.0 [h264 @ 0x99de950]B picture before any references, skipping [h264 @ 0x99de950]decode_slice_header error [h264 @ 0x99de950]no frame! 
Error while decoding stream #0.0 frame= 20 fps= 0 q=13797729.0 size= 0kB time=0.66 bitrate= 0.6kbits/s frame= 39 fps= 37 q=13797729.0 size= 0kB time=1.30 bitrate= 0.3kbits/s frame= 48 fps= 30 q=33.0 size= 11kB time=0.10 bitrate= 903.0kbits/s frame= 58 fps= 27 q=31.0 size= 22kB time=0.43 bitrate= 421.0kbits/s frame= 67 fps= 25 q=29.0 size= 41kB time=0.73 bitrate= 462.6kbits/s frame= 75 fps= 23 q=29.0 size= 59kB time=1.00 bitrate= 486.7kbits/s frame= 83 fps= 22 q=29.0 size= 81kB time=1.27 bitrate= 521.9kbits/s frame= 90 fps= 21 q=29.0 size= 97kB time=1.50 bitrate= 530.1kbits/s frame= 98 fps= 20 q=29.0 size= 114kB time=1.77 bitrate= 526.9kbits/s frame= 106 fps= 20 q=29.0 size= 134kB time=2.04 bitrate= 537.7kbits/s frame= 114 fps= 19 q=29.0 size= 150kB time=2.30 bitrate= 533.7kbits/s frame= 122 fps= 19 q=29.0 size= 172kB time=2.57 bitrate= 547.8kbits/s frame= 130 fps= 19 q=29.0 size= 193kB time=2.84 bitrate= 557.5kbits/s frame= 136 fps= 18 q=29.0 size= 211kB time=3.04 bitrate= 570.0kbits/s frame= 144 fps= 18 q=29.0 size= 242kB time=3.30 bitrate= 599.5kbits/s frame= 152 fps= 17 q=30.0 size= 261kB time=3.57 bitrate= 598.6kbits/s frame= 157 fps= 15 q=-1.0 Lsize= 368kB time=5.21 bitrate= 579.3kbits/s video:302kB audio:61kB global headers:0kB muxing overhead 1.416371% [libx264 @ 0x99e1020]frame I:1 Avg QP:27.22 size: 8720 [libx264 @ 0x99e1020]frame P:48 Avg QP:25.15 size: 3759 [libx264 @ 0x99e1020]frame B:108 Avg QP:30.10 size: 1105 [libx264 @ 0x99e1020]consecutive B-frames: 0.6% 11.5% 28.8% 59.0% [libx264 @ 0x99e1020]mb I I16..4: 28.5% 47.6% 23.9% [libx264 @ 0x99e1020]mb P I16..4: 0.8% 1.3% 0.5% P16..4: 50.6% 17.7% 13.1% 0.0% 0.0% skip:15.9% [libx264 @ 0x99e1020]mb B I16..4: 0.2% 0.3% 0.1% B16..8: 44.0% 1.2% 2.6% direct: 5.1% skip:46.5% L0:45.5% L1:51.0% BI: 3.5% [libx264 @ 0x99e1020]final ratefactor: 23.51 [libx264 @ 0x99e1020]8x8 transform intra:49.9% inter:67.9% [libx264 @ 0x99e1020]direct mvs spatial:98.1% temporal:1.9% [libx264 @ 0x99e1020]coded y,uvDC,uvAC intra: 54.7% 76.1% 41.4% inter: 17.1% 24.4% 7.8% [libx264 @ 0x99e1020]i16 v,h,dc,p: 18% 52% 5% 25% [libx264 @ 0x99e1020]i8 v,h,dc,ddl,ddr,vr,hd,vl,hu: 12% 22% 9% 7% 10% 10% 9% 8% 13% [libx264 @ 0x99e1020]i4 v,h,dc,ddl,ddr,vr,hd,vl,hu: 13% 18% 8% 8% 10% 13% 10% 9% 12% [libx264 @ 0x99e1020]Weighted P-Frames: Y:10.4% [libx264 @ 0x99e1020]ref P L0: 60.2% 15.3% 11.0% 7.6% 5.2% 0.7% [libx264 @ 0x99e1020]ref B L0: 72.6% 15.6% 11.8% [libx264 @ 0x99e1020]kb/s:471.17 Segmentation fault I'm wondering if anyone else has ran into similar issues. I wasn't able to find anything helpful via Google. Another question I have is if anyone knows of a company that offers paid support for FFMPEG. Thank you for your time.
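
    Not a fix for the crash itself, but when batch-converting a pile of clips it can help to know exactly which inputs die and with what signal. A hypothetical Python wrapper around the same command (a segfault shows up as a negative return code, -11 being SIGSEGV on Linux):

      import glob
      import subprocess

      for src in glob.glob("*.MOV"):                  # placeholder input pattern
          cmd = ["ffmpeg", "-i", src, "-threads", "2",
                 "-acodec", "libfaac", "-ab", "96kb",
                 "-vcodec", "libx264", "-vpre", "hq",
                 "-b", "500kb", "-s", "484x272", src + ".mp4"]
          rc = subprocess.call(cmd)
          if rc < 0:
              print("%s: ffmpeg killed by signal %d" % (src, -rc))
          elif rc != 0:
              print("%s: ffmpeg exited with code %d" % (src, rc))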

    Read the article

  • How might maven's buildNumber metadata become inconsistent across multiple build agents?

    - by Brian Laframboise
    We recently added a second build machine to our build environment and began experiencing very odd occasional build failures. I have two separate Maven build machines, A and B, each running Maven 2.2.1 and communicating to a shared Nexus 1.5.0 repository manager. My problem is that builds on B will occasionally fail because it refuses to download a newer version of a common dependency 'acme-1.0.0-SNAPSHOT' previously built by A and uploaded to Nexus. Looking inside the local repositories on both machines I noticed some oddities in the repository metadata. Machine A's acme\1.0.0-SNAPSHOT\maven-metadata-nexus.xml: <metadata> <groupId>acme</groupId> <artifactId>acme</artifactId> <version>1.0.0-SNAPSHOT</version> <versioning> <snapshot> <buildNumber>1</buildNumber> </snapshot> <lastUpdated>20100525173546</lastUpdated> </versioning> </metadata> Machine B's acme\1.0.0-SNAPSHOT\maven-metadata-nexus.xml: <metadata> <groupId>acme</groupId> <artifactId>acme</artifactId> <version>1.0.0-SNAPSHOT</version> <versioning> <snapshot> <buildNumber>2</buildNumber> </snapshot> <lastUpdated>20100519232317</lastUpdated> </versioning> </metadata> In Nexus's acme/1.0.0-SNAPSHOT/maven-metadata.xml: <metadata> <groupId>acme</groupId> <artifactId>acme</artifactId> <version>1.0.0-SNAPSHOT</version> <versioning /> </metadata> If I'm interpreting the metadata files correctly (documentation online is scant), it appears machine B believes it has a newer version of the acme dependency (based on buildNumber) despite the fact that machine A last built it 6 days after machine B did (based on timestamp). Nexus also appears to be unaware of a universally correct buildNumber. How could this situation possibly arise? What could I do to prevent my builds from failing due to inconsistent metadata? Have you experienced anything similar? Important notes: Both build machines have settings.xml files where the updatePolicy is "always". Nexus does indeed have the newer version of acme that was built by A. B simply refuses to download it. A and B are the only machines uploading to Nexus. Both servers share the same system time. All processes involved have write privileges to the metadata files so that they can be updated as necessary. I was unable to find any open Maven or Nexus issues describing this behaviour. Our CI server (Atlassian Bamboo) prevents builds of the same artifact from happening concurrently, so some race condition while uploading to Nexus is rather unlikely.
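
    For anyone hitting the same thing, a small throwaway check like this (paths are placeholders) can flag when the buildNumber ordering in the local maven-metadata files disagrees with the lastUpdated timestamps, which is exactly the inconsistency shown above:

      import xml.etree.ElementTree as ET

      def read_meta(path):
          # Return (buildNumber, lastUpdated) from a maven-metadata*.xml file.
          root = ET.parse(path).getroot()
          build = root.findtext("versioning/snapshot/buildNumber")
          updated = root.findtext("versioning/lastUpdated")
          return (int(build) if build else None,
                  int(updated) if updated else None)

      metas = {
          "machine A": read_meta("repoA/acme/1.0.0-SNAPSHOT/maven-metadata-nexus.xml"),
          "machine B": read_meta("repoB/acme/1.0.0-SNAPSHOT/maven-metadata-nexus.xml"),
      }

      for name, (build, updated) in metas.items():
          print("%s: buildNumber=%s lastUpdated=%s" % (name, build, updated))

      (a_build, a_upd), (b_build, b_upd) = metas["machine A"], metas["machine B"]
      if None not in (a_build, b_build, a_upd, b_upd):
          if (a_build > b_build) != (a_upd > b_upd):
              print("WARNING: buildNumber order disagrees with lastUpdated order")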

    Read the article

< Previous Page | 426 427 428 429 430 431 432 433 434 435 436 437  | Next Page >