Search Results

Search found 6963 results on 279 pages for 'ia 32'.


  • stunnel crashing

    - by Jay
    I'm trying to use stunnel to secure a legacy application's communications. I can't seem to get it set up and working. Can anyone provide any hints about where I'm going wrong?

    Here's what I'm trying to accomplish: a Windows service on a client machine connects to a server on port 7000 using TCP. I'd like to encrypt the communication between client and server.

    Here's what I've tried:
    - Created a new server that accepts SSL connections on port 7443.
    - Got a certificate for the server and installed it. That seems to work with my test setup.
    - Installed stunnel on my Windows machine (version 7.43 from the distribution archive file).
    - Installed libssl32.dll and libeay32.dll in the same directory as stunnel.exe (from the openssl-0.9.8h-1 binary distribution).
    - Installed it as a service using "stunnel -install".
    - Configured stunnel as follows:

      debug=7
      output=C:\p4\internal\Utility\Proxy\proxy.log
      service=Proxy
      taskbar=no
      [exchange]
      accept=7000
      client=yes
      connect=proxy.blah.com:7443

    I changed my hosts file to trick the old application into connecting through stunnel:

      server.blah.com 127.0.0.1                      # when client looks up server it goes to stunnel
      proxy.blah.com IP-address-of-server.blah.com   # stunnel connects to new server

    "server.blah.com" now resolves to the machine it's running on (i.e. stunnel). "proxy.blah.com" goes to the real server, so stunnel should connect to the server. I start the stunnel service and try to connect. It looks like it's working, but the stunnel service just shuts down with no message:

      2010.04.19 13:16:21 LOG5[4924:3716]: stunnel 4.33 on x86-pc-mingw32-gnu with OpenSSL 0.9.8h 28 May 2008
      2010.04.19 13:16:21 LOG5[4924:3716]: Threading:WIN32 SSL:ENGINE Sockets:SELECT,IPv6
      2010.04.19 13:16:49 LOG5[4924:3748]: Service exchange accepted connection from 127.0.0.1:4134
      2010.04.19 13:16:49 LOG6[4924:3748]: connect_blocking: connecting x.80.60.32:7443
      2010.04.19 13:16:49 LOG5[4924:3748]: connect_blocking: connected x.80.60.32:7443
      2010.04.19 13:16:49 LOG5[4924:3748]: Service exchange connected remote server from x.253.120.19:4135
      2010.04.19 13:20:24 LOG5[3668:3856]: Reading configuration from file stunnel.conf
      2010.04.19 13:20:24 LOG7[3668:3856]: Snagged 64 random bytes from C:/.rnd
      2010.04.19 13:20:24 LOG7[3668:3856]: Wrote 1024 new random bytes to C:/.rnd
      2010.04.19 13:20:24 LOG7[3668:3856]: RAND_status claims sufficient entropy for the PRNG
      2010.04.19 13:20:24 LOG7[3668:3856]: PRNG seeded successfully
      2010.04.19 13:20:24 LOG7[3668:3856]: SSL context initialized for service exchange
      2010.04.19 13:20:24 LOG5[3668:3856]: Configuration successful
      2010.04.19 13:20:24 LOG5[3668:3856]: No limit detected for the number of clients
      2010.04.19 13:20:24 LOG7[3668:3856]: FD=312 in non-blocking mode
      2010.04.19 13:20:24 LOG7[3668:3856]: Option SO_REUSEADDR set on accept socket
      2010.04.19 13:20:24 LOG7[3668:3856]: Service exchange bound to 0.0.0.0:7000
      2010.04.19 13:20:24 LOG7[3668:3856]: Service exchange opened FD=312
      2010.04.19 13:20:24 LOG5[3668:3856]: stunnel 4.33 on x86-pc-mingw32-gnu with OpenSSL 0.9.8h 28 May 2008
      2010.04.19 13:20:24 LOG5[3668:3856]: Threading:WIN32 SSL:ENGINE Sockets:SELECT,IPv6
      2010.04.19 13:21:02 LOG7[3668:4556]: Service exchange accepted FD=372 from 127.0.0.1:4156
      2010.04.19 13:21:02 LOG7[3668:4556]: Creating a new thread
      2010.04.19 13:21:02 LOG7[3668:4556]: New thread created
      2010.04.19 13:21:02 LOG7[3668:3756]: Service exchange started
      2010.04.19 13:21:02 LOG7[3668:3756]: FD=372 in non-blocking mode
      2010.04.19 13:21:02 LOG5[3668:3756]: Service exchange accepted connection from 127.0.0.1:4156
      2010.04.19 13:21:02 LOG7[3668:3756]: FD=396 in non-blocking mode
      2010.04.19 13:21:02 LOG6[3668:3756]: connect_blocking: connecting x.80.60.32:7443
      2010.04.19 13:21:02 LOG7[3668:3756]: connect_blocking: s_poll_wait x.80.60.32:7443: waiting 10 seconds
      2010.04.19 13:21:02 LOG5[3668:3756]: connect_blocking: connected x.80.60.32:7443
      2010.04.19 13:21:02 LOG5[3668:3756]: Service exchange connected remote server from x.253.120.19:4157
      2010.04.19 13:21:02 LOG7[3668:3756]: Remote FD=396 initialized
      2010.04.19 13:21:02 LOG7[3668:3756]: SSL state (connect): before/connect initialization
      2010.04.19 13:21:02 LOG7[3668:3756]: SSL state (connect): SSLv3 write client hello A
      2010.04.19 13:21:02 LOG7[3668:3756]: SSL state (connect): SSLv3 read server hello A
      2010.04.19 13:21:02 LOG7[3668:3756]: SSL state (connect): SSLv3 read server certificate A
      2010.04.19 13:21:02 LOG7[3668:3756]: SSL state (connect): SSLv3 read server done A
      2010.04.19 13:21:02 LOG7[3668:3756]: SSL state (connect): SSLv3 write client key exchange A
      2010.04.19 13:21:02 LOG7[3668:3756]: SSL state (connect): SSLv3 write change cipher spec A
      2010.04.19 13:21:02 LOG7[3668:3756]: SSL state (connect): SSLv3 write finished A
      2010.04.19 13:21:02 LOG7[3668:3756]: SSL state (connect): SSLv3 flush data
      2010.04.19 13:21:02 LOG7[3668:3756]: SSL state (connect): SSLv3 read finished A

    The client thinks the connection is closed:

      No connection could be made because the target machine actively refused it 127.0.0.1:7000
        at System.Net.Sockets.Socket.DoConnect(EndPoint endPointSnapshot, SocketAddress socketAddress)
        at System.Net.Sockets.Socket.Connect(EndPoint remoteEP)
        at Service.ConnUtility.Connect()

    Any suggestions?

    Read the article

  • Failing to install Ralink RT5592 driver on Ubuntu 14.04 LTS

    - by atisou
    My problem concerns the installation of a wi-fi driver (RT5592) for my new wi-fi adapter (PCE-N53) on my newly built computer. Basically, I can't get the driver installed, and therefore I cannot get the wifi to work. I know I am not the only one having this issue this year, between the RT5592 driver and Ubuntu 14.04 LTS, in one way or another. Has anybody ever been able to fix this problem? It does not look like it from all the posts I have been through...

    Following an answer to the same problem as mine (I was getting the same "incompatible types" error message as Christopher Kyle Horton), I applied the instructions and made the edits in a script as suggested by Paul B. Unfortunately I still get an error/warning message (a different one this time) at the end of the make, and the wi-fi still does not work. Below is a snapshot of the end of the message:

      In file included from /home/username/Downloads/PCE-N53/Linux/DPO_GPL_RT5592STA_LinuxSTA_v2.6.0.0_20120326/include/os/rt_linux.h:31:0,
                       from /home/username/Downloads/PCE-N53/Linux/DPO_GPL_RT5592STA_LinuxSTA_v2.6.0.0_20120326/include/rtmp_os.h:44,
                       from /home/username/Downloads/PCE-N53/Linux/DPO_GPL_RT5592STA_LinuxSTA_v2.6.0.0_20120326/include/rtmp_comm.h:69,
                       from /home/username/Downloads/PCE-N53/Linux/DPO_GPL_RT5592STA_LinuxSTA_v2.6.0.0_20120326/os/linux/../../os/linux/pci_main_dev.c:31:
      include/linux/module.h:88:32: error: ‘__mod_pci_device_table’ aliased to undefined symbol ‘rt2860_pci_tbl’
       extern const struct gtype##_id __mod_##gtype##_table \
                                      ^
      include/linux/module.h:146:3: note: in expansion of macro ‘MODULE_GENERIC_TABLE’
         MODULE_GENERIC_TABLE(type##_device,name)
         ^
      /home/username/Downloads/PCE-N53/Linux/DPO_GPL_RT5592STA_LinuxSTA_v2.6.0.0_20120326/os/linux/../../os/linux/pci_main_dev.c:73:1: note: in expansion of macro ‘MODULE_DEVICE_TABLE’
       MODULE_DEVICE_TABLE(pci, rt2860_pci_tbl);
       ^
      cc1: some warnings being treated as errors
      make[2]: *** [/home/username/Downloads/PCE-N53/Linux/DPO_GPL_RT5592STA_LinuxSTA_v2.6.0.0_20120326/os/linux/../../os/linux/pci_main_dev.o] Error 1
      make[1]: *** [_module_/home/username/Downloads/PCE-N53/Linux/DPO_GPL_RT5592STA_LinuxSTA_v2.6.0.0_20120326/os/linux] Error 2
      make[1]: Leaving directory `/usr/src/linux-headers-3.13.0-32-generic'
      make: *** [LINUX] Error 2

    The full pastebin data: paste.ubuntu.com/8088834/

    It looks from the message as if one would need to manually edit some other scripts in the driver package, as Paul B suggested in one case, but I have no idea how to do that. Here is the driver package of the wifi adapter: www.asus.com/uk/Networking/PCEN53/HelpDesk_Download/

    My system is as follows:
      OS: Ubuntu 14.04 LTS
      wi-fi card: Asus PCE-N53
      motherboard: Asus KCMA-D8
      processor: AMD Opteron 4228 HE
      kernel: 3.13.0-32-generic

    Following this info from chili555 in here, below is some extra info about my system. lspci -nn | grep 0280 gives

      04:00.0 Network controller [0280]: Ralink corp. RT5592 PCI2 Wireless Network Adapter [1814:5592]

    and sudo apt-get install linux-headers-generic returns

      linux-headers-generic is already the newest version.
      0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.

    If this is a kernel version incompatibility issue between the driver and my kernel (I have 3.13.0-32-generic), as chili555 suggests (the README file in the driver package says it is compatible with kernel 2.6), how could one work around it? That should be possible, right? The patches proposed on the Ubuntu forums don't work (they lead the computer to freeze).
    Basically: is there anybody out there who has ever been able to make a PCE-N53 work on Ubuntu 14.04 LTS (kernel 3.13)? How should I edit the driver package to make it work for my kernel?

    Read the article

  • How to set default xrandr settings?

    - by echo-flow
    I'm trying to enable dual monitors in Ubuntu. This is working fine, but every time I do it, desktop effects are disabled. I think I've found the reason why, though: https://wiki.ubuntu.com/X/Config/Multihead/

      As with the GNOME XRandR configuration method, setting Virtual to too large a value may result in a loss of hardware acceleration, and thus an inability to use Compiz and its desktop effects.

    When I use the GNOME monitor applet, or the Monitors configuration in the System menu, the default xrandr settings put the second monitor to the right of the first, and, as I found with this bug, for most monitors this creates a virtual desktop wider than the 2048-pixel horizontal maximum that hardware acceleration allows on my netbook hardware. So it seems that if I can modify xrandr's default settings so that it places the new desktop above or below (north or south of) the main LVDS display, then hardware acceleration, and therefore Compiz, will continue to work. Can anyone tell me the easiest way to achieve this?

    UPDATE: I have confirmed that multihead support with desktop effects and hardware acceleration works when I move the external monitor display north of the main LVDS display. Right now this involves the following process: plugging in the external monitor, starting the Monitors configuration menu (desktop effects are disabled automatically, and all of the windows on my workspaces are moved to the first workspace), repositioning the external display so that it is north of the LVDS display and clicking Apply, and then navigating to the Appearance menu and telling it to re-enable desktop effects. Is there a simpler way to do this?

    UPDATE 2: OK, so I thought that perhaps the GNOME Monitors configuration screen was trying to be clever and might be disabling desktop effects. So I just tried using the xrandr command-line client instead, as follows: xrandr --output VGA1 --above LVDS1. When I do that, desktop effects are still disabled, and I need to manually re-enable them. This is despite the fact that hardware acceleration works, and there is never a point where hardware acceleration stops working because the horizontal dimension of the virtual display is too large. So what program is trying to be clever and turning off desktop effects when it doesn't need to? And how do I make it stop? If there were a way to re-enable desktop effects from the command line, which I could then put into a script along with the proper xrandr invocation, I would accept that as a workaround.

    UPDATE 3: OK, here's my script to enable a second monitor with desktop effects. It might be evil, I'm not sure:

      second-monitor.sh
      xrandr --output VGA1 --above LVDS1
      sleep 3
      compiz --replace &

    The sleep statement might not be necessary. If there's a better way to do this, please let me know.

    UPDATE 4: This is a Dell Mini Inspiron 1012. Here are my system specifications:

      lspci -vv
      00:02.0 VGA compatible controller: Intel Corporation N10 Family Integrated Graphics Controller
              Subsystem: Dell Device 041a
              Control: I/O+ Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx+
              Status: Cap+ 66MHz- UDF- FastB2B+ ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
              Latency: 0
              Interrupt: pin A routed to IRQ 29
              Region 0: Memory at f0b00000 (32-bit, non-prefetchable) [size=512K]
              Region 1: I/O ports at 18d0 [size=8]
              Region 2: Memory at d0000000 (32-bit, prefetchable) [size=256M]
              Region 3: Memory at f0900000 (32-bit, non-prefetchable) [size=1M]
              Capabilities: <access denied>
              Kernel driver in use: i915
              Kernel modules: i915
      00:02.1 Display controller: Intel Corporation N10 Family Integrated Graphics Controller
              Subsystem: Dell Device 041a
              Control: I/O+ Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx-
              Status: Cap+ 66MHz- UDF- FastB2B+ ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
              Latency: 0
              Region 0: Memory at f0b80000 (32-bit, non-prefetchable) [size=512K]
              Capabilities: <access denied>

      lsmod | grep i915
      i915                  287458  2
      drm_kms_helper         29329  1 i915
      drm                   162409  3 i915,drm_kms_helper
      intel_agp              24375  2 i915
      i2c_algo_bit            5028  1 i915
      video                  17375  1 i915

    Read the article

  • Why do some links appear in a new tab and others in a new window?

    - by SAMIR BHOGAYTA
    Originally, it was made to resolve problems on IE8 32 bits when you use a 32 bits OS. I changed "%ProgramFiles(x86)%" var, and now my issue is resolved for IE8 32 bits on my Windows 7 64 bits. Please try it, and tell me if everything work for you. For Win 7 64 bits : 1 - Create a new notepad document and paste this text : @echo off echo. echo IEREREG Version 1.07 for IE8 27.03.2009 echo by Kai Schaetzl http://iefaq.info echo installs and registers (if suitable) all DLLs known to be used by IE8. echo should only take a few seconds, but please be patient echo. REM ****************************** echo registering IE files REM IE files (= part of setup) regsvr32 /s /i browseui.dll REM regsvr32 /s /i browseui.dll,NI (unnecessary) regsvr32 /s corpol.dll regsvr32 /s dxtmsft.dll regsvr32 /s dxtrans.dll REM simple HTML Mail API regsvr32 /s "%ProgramFiles(x86)%\internet explorer\hmmapi.dll" REM group policy snap-in regsvr32 /s ieaksie.dll REM smart screen regsvr32 /s ieapfltr.dll REM ieak branding regsvr32 /s iedkcs32.dll REM dev tools regsvr32 /s "%ProgramFiles(x86)%\internet explorer\iedvtool.dll" regsvr32 /s iepeers.dll REM Symptom: IE8 closes immediately on launch, missing from IE7 regsvr32 /s "%ProgramFiles(x86)%\internet explorer\ieproxy.dll" REM no install point anymore REM regsvr32 /s /i iesetup.dll REM no reg point anymore REM regsvr32 /s imgutil.dll regsvr32 /s /i /n inetcpl.cpl REM no install point anymore REM regsvr32 /s /i inseng.dll regsvr32 /s jscript.dll REM license manager regsvr32 /s licmgr10.dll REM regsvr32 /s msapsspc.dll REM regsvr32 /s mshta.exe REM VS debugger regsvr32 /s msdbg2.dll REM no install point anymore REM regsvr32 /s /i mshtml.dll regsvr32 /s mshtmled.dll regsvr32 /s msident.dll REM no reg point anymore REM regsvr32 /s msrating.dll REM multimedia timer regsvr32 /s mstime.dll REM no install point anymore REM regsvr32 /s /i occache.dll REM process debug manager regsvr32 /s "%ProgramFiles(x86)%\internet explorer\pdm.dll" REM no reg point anymore REM regsvr32 /s pngfilt.dll REM regsvr32 /s /i setupwbv.dll (not there anymore!) regsvr32 /s tdc.ocx regsvr32 /s /i urlmon.dll REM regsvr32 /s /i urlmon.dll,NI,HKLM regsvr32 /s vbscript.dll REM VML renderer regsvr32 /s "%CommonProgramFiles%\microsoft shared\vgx\vgx.dll" REM no install point anymore REM regsvr32 /s /i webcheck.dll regsvr32 /s /i /n wininet.dll REM ****************************** echo registering system files REM additional system dlls known to be used by IE REM added 11.05.2006 Symptom: Add-Ons-Manager menu entry is present but nothing happens regsvr32 /s extmgr.dll REM added 12.05.2006 Symptom: Javascript links don't work (Robin Walker) .NET hub file regsvr32 /s mscoree.dll REM added 23.03.2009 Symptom: Find on this page is blank regsvr32 /s oleacc.dll REM added 24.03.2009 Symptom: Printing problems, open in new window regsvr32 /s ole32.dll REM mscorier.dll REM mscories.dll REM Symptom: open in new tab/window not working regsvr32 /s actxprxy.dll regsvr32 /s asctrls.ocx regsvr32 /s cdfview.dll regsvr32 /s comcat.dll regsvr32 /s /i /n comctl32.dll regsvr32 /s cryptdlg.dll regsvr32 /s /i /n digest.dll regsvr32 /s dispex.dll regsvr32 /s hlink.dll regsvr32 /s mlang.dll regsvr32 /s mobsync.dll regsvr32 /s /i msieftp.dll REM regsvr32 /s msnsspc.dll #no entry point regsvr32 /s msr2c.dll regsvr32 /s msxml.dll regsvr32 /s oleaut32.dll REM regsvr32 /s plugin.ocx #no entry point regsvr32 /s proctexe.ocx REM plus DllRegisterServerEx ExA ExW ... ? 
regsvr32 /s /i scrobj.dll REM shdocvw.dll hasn't been updated for IE7 and IE8, it still registers itself for the Windows Internet Controls regsvr32 /s /i shdocvw.dll regsvr32 /s sendmail.dll REM ****************************** REM PKI/crypto functionality REM initpki can take very long to run and is rarely a problem REM if there are problems with crypto, SSL, certificates REM remove the three following REMs from the lines REM echo We are almost done except one crypto file REM echo but this will take very long, be patient! REM regsvr32 /s /i:A initpki.dll REM ****************************** REM tabbed browser, do at the end, why originally with /n ? regsvr32 /s /i ieframe.dll REM ****************************** echo correcting bugs in the registry REM do some corrective work REM Symptom: new tabs page cannot display content because it cannot access the controls (added 27. 3.2009) REM This is a result of a bug in shdocvw.dll (see above), probably only on Windows XP reg add "HKCR\TypeLib\{EAB22AC0-30C1-11CF-A7EB-0000C05BAE0B}\1.1\0\win32" /ve /t REG_SZ /d %systemroot%\system32\ieframe.dll /f REM ****************************** echo all tasks have been finished echo. pause 2 - Close all your IE windows and processes. 3 - Save your document on your Desktop by example, with the .bat extension. Right-click on it, and select "Run as administrator". 4 - Test if this tip resolved your issue by openning IE. If you use a Windows 32 bits version, please replace %ProgramFiles(x86)% by %ProgramFiles% in your .bat file.

    Read the article

  • How to resize / enlarge / grow a non-LVM ext4 partition

    - by Mischa
    I have already searched the forums, but couldnt find a good suitable answer: I have an Ubuntu Server 10.04 as KVM Host and a guest system, that also runs 10.04. The host system uses LVM and there are three logical volumes, which are provided to the guest as virtual block devices - one for /, one for /home and one for swap. The guest had been partitioned without LVM. I have already enlarged the logical volume in the host system - the guest successfully sees the bigger virtual disk. However, this virtual disk contains one "good old" partition, which still has the old small size. The output of fdisk -l is me@produktion:/$ LC_ALL=en_US sudo fdisk -l Disk /dev/vda: 32.2 GB, 32212254720 bytes 255 heads, 63 sectors/track, 3916 cylinders Units = cylinders of 16065 * 512 = 8225280 bytes Sector size (logical/physical): 512 bytes / 512 bytes I/O size (minimum/optimal): 512 bytes / 512 bytes Disk identifier: 0x000c8ce7 Device Boot Start End Blocks Id System /dev/vda1 * 1 3917 31455232 83 Linux Disk /dev/vdb: 2147 MB, 2147483648 bytes 244 heads, 47 sectors/track, 365 cylinders Units = cylinders of 11468 * 512 = 5871616 bytes Sector size (logical/physical): 512 bytes / 512 bytes I/O size (minimum/optimal): 512 bytes / 512 bytes Disk identifier: 0x000f2bf7 Device Boot Start End Blocks Id System /dev/vdb1 1 366 2095104 82 Linux swap / Solaris Partition 1 has different physical/logical beginnings (non-Linux?): phys=(0, 32, 33) logical=(0, 43, 28) Partition 1 has different physical/logical endings: phys=(260, 243, 47) logical=(365, 136, 44) Disk /dev/vdc: 225.5 GB, 225485783040 bytes 255 heads, 63 sectors/track, 27413 cylinders Units = cylinders of 16065 * 512 = 8225280 bytes Sector size (logical/physical): 512 bytes / 512 bytes I/O size (minimum/optimal): 512 bytes / 512 bytes Disk identifier: 0x00027f25 Device Boot Start End Blocks Id System /dev/vdc1 1 9138 73398272 83 Linux The output of parted print all is Model: Virtio Block Device (virtblk) Disk /dev/vda: 32.2GB Sector size (logical/physical): 512B/512B Partition Table: msdos Number Start End Size Type File system Flags 1 1049kB 32.2GB 32.2GB primary ext4 boot Model: Virtio Block Device (virtblk) Disk /dev/vdb: 2147MB Sector size (logical/physical): 512B/512B Partition Table: msdos Number Start End Size Type File system Flags 1 1049kB 2146MB 2145MB primary linux-swap(v1) Model: Virtio Block Device (virtblk) Disk /dev/vdc: 225GB Sector size (logical/physical): 512B/512B Partition Table: msdos Number Start End Size Type File system Flags 1 1049kB 75.2GB 75.2GB primary ext4 What I want to achieve is to simply grow or resize the partition /dev/vdc1 so that it uses the whole space provided by the virtual block device /dev/vdc. The problem is, that when I try to do that with parted, it complains: (parted) select /dev/vdc Using /dev/vdc (parted) print Model: Virtio Block Device (virtblk) Disk /dev/vdc: 225GB Sector size (logical/physical): 512B/512B Partition Table: msdos Number Start End Size Type File system Flags 1 1049kB 75.2GB 75.2GB primary ext4 (parted) resize 1 WARNING: you are attempting to use parted to operate on (resize) a file system. parted's file system manipulation code is not as robust as what you'll find in dedicated, file-system-specific packages like e2fsprogs. We recommend you use parted only to manipulate partition tables, whenever possible. Support for performing most operations on most types of file systems will be removed in an upcoming release. Start? [1049kB]? End? [75.2GB]? 
224GB Error: File system has an incompatible feature enabled. Compatible features are has_journal, dir_index, filetype, sparse_super and large_file. Use tune2fs or debugfs to remove features. So what can I do? This is a headless production system. What is a safe way to grow this partition? I CAN unmount it, though - so this is not the problem.

    Read the article

  • ORE graphics using Remote Desktop Protocol

    - by Sherry LaMonica
    Oracle R Enterprise graphics are returned as raster, or bitmap graphics. Raster images consist of tiny squares of color information referred to as pixels that form points of color to create a complete image. Plots that contain raster images render quickly in R and create small, high-quality exported image files in a wide variety of formats. However, it is a known issue that the rendering of raster images can be problematic when creating graphics using a Remote Desktop connection. Raster images do not display in the windows device using Remote Desktop under the default settings. This happens because Remote Desktop restricts the number of colors when connecting to a Windows machine to 16 bits per pixel, and interpolating raster graphics requires many colors, at least 32 bits per pixel.. For example, this simple embedded R image plot will be returned in a raster-based format using a standalone Windows machine:  R> library(ORE) R> ore.connect(user="rquser", sid="orcl", host="localhost", password="rquser", all=TRUE)  R> ore.doEval(function() image(volcano, col=terrain.colors(30))) Here, we first load the ORE packages and connect to the database instance using database login credentials. The ore.doEval function executes the R code within the database embedded R engine and returns the image back to the client R session. Over a Remote Desktop connection under the default settings, this graph will appear blank due to the restricted number of colors. Users who encounter this issue have two options to display ORE graphics over Remote Desktop: either raise Remote Desktop's Color Depth or direct the plot output to an alternate device. Option #1: Raise Remote Desktop Color Depth setting In a Remote Desktop session, all environment variables, including display variables determining Color Depth, are determined by the RCP-Tcp connection settings. For example, users can reduce the Color Depth when connecting over a slow connection. The different settings are 15 bits, 16 bits, 24 bits, or 32 bits per pixel. To raise the Remote Desktop color depth: On the Windows server, launch Remote Desktop Session Host Configuration from the Accessories menu.Under Connections, right click on RDP-Tcp and select Properties.On the Client Settings tab either uncheck LimitMaximum Color Depth or set it to 32 bits per pixel. Click Apply, then OK, log out of the remote session and reconnect.After reconnecting, the Color Depth on the Display tab will be set to 32 bits per pixel.  Raster graphics will now display as expected. For ORE users, the increased color depth results in slightly reduced performance during plot creation, but the graph will be created instead of displaying an empty plot. Option #2: Direct plot output to alternate device Plotting to a non-windows device is a good option if it's not possible to increase Remote Desktop Color Depth, or if performance is degraded when creating the graph. Several device drivers are available for off-screen graphics in R, such as postscript, pdf, and png. On-screen devices include windows, X11 and Cairo. Here we output to the Cairo device to render an on-screen raster graphic.  The grid.raster function in the grid package is analogous to other grid graphical primitives - it draws a raster image within the current plot's grid.  
R> options(device = "CairoWin") # use Cairo device for plotting during the session R> library(Cairo) # load Cairo, grid and png libraries  R> library(grid) R> library(png)  R> res <- ore.doEval(function()image(volcano,col=terrain.colors(30))) # create embedded R plot  R> img <- ore.pull(res, graphics = TRUE)$img[[1]] # extract image  R> grid.raster(as.raster(readPNG(img)), interpolate = FALSE) # generate raster graph R> dev.off() # turn off first device   By default, the interpolate argument to grid.raster is TRUE, which means that what is actually drawn by R is a linear interpolation of the pixels in the original image. Setting interpolate to FALSE uses a sample from the pixels in the original image.A list of graphics devices available in R can be found in the Devices help file from the grDevices package: R> help(Devices)

    Read the article

  • Multiple markers in Google Maps API v3 that link to different pages when clicked

    - by Dave
    I have a map with multiple markers, which I populate via an array. Each marker is clickable and should take the user to a different url per marker. The problem is that all the markers, while displaying the correct title, all use the url of the last entry in the array. Here is my code: var myOptions = { zoom: 9, center: new google.maps.LatLng(40.81940575,-73.95647955), mapTypeId: google.maps.MapTypeId.TERRAIN } var map = new google.maps.Map(document.getElementById("bigmap"), myOptions); setMarkers(map, properties); var properties = [ ['106 Ft Washington Avenue',40.8388485,-73.9436015,'Mjg4'], ['213 Bennett Avenue',40.8574384,-73.9333426,'Mjkz'], ['50 Overlook Terrace',40.8543752,-73.9362542,'Mjky'], ['850 West 176 Street',40.8476012,-73.9417571,'OTM='], ['915 West End Avenue',40.8007478,-73.9692155,'Mjkx']]; function setMarkers(map, buildings) { var image = new google.maps.MarkerImage('map_marker.png', new google.maps.Size(19,32), new google.maps.Point(0,0), new google.maps.Point(10,32)); var shadow = new google.maps.MarkerImage('map_marker_shadow.png', new google.maps.Size(28,32), new google.maps.Point(0,0), new google.maps.Point(10,32)); var bounds = new google.maps.LatLngBounds; for (var i in buildings) { var myLatLng = new google.maps.LatLng(buildings[i][1], buildings[i][2]); bounds.extend(myLatLng); var marker = new google.maps.Marker({ position: myLatLng, map: map, shadow: shadow, icon: image, title: buildings[i][0] }); google.maps.event.addListener(marker, 'click', function() { window.location = ('detail?b=' + buildings[i][3]); }); } map.fitBounds(bounds); } Using this code, clicking any marker take the user to 'detail?b=Mjkx' What am I doing wrong?
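    A likely cause is that every click handler closes over the same loop variable i; by the time any marker is clicked the loop has finished, so buildings[i][3] resolves to the last entry for every marker, which matches the 'detail?b=Mjkx' behavior described above. The following is only a sketch of one common fix (it keeps the buildings array layout from the question but omits the image, shadow and bounds code for brevity): capture the per-marker URL at the moment the listener is attached.

      for (var i = 0; i < buildings.length; i++) {
          var myLatLng = new google.maps.LatLng(buildings[i][1], buildings[i][2]);
          var marker = new google.maps.Marker({ position: myLatLng, map: map, title: buildings[i][0] });
          // Wrap addListener in an immediately-invoked function so each handler
          // keeps its own copy of the detail URL instead of sharing the loop variable.
          (function (url) {
              google.maps.event.addListener(marker, 'click', function () {
                  window.location = url;
              });
          })('detail?b=' + buildings[i][3]);
      }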

    Read the article

  • How can I put multiple markers on Google Maps with the JavaScript API v3?

    - by Doe
    Hi, I'd like to know how to put multiple markers for Google Maps using Javascript API v3. I tried the solution posted here: http://stackoverflow.com/questions/1621991/multiple-markers-in-googe-maps-api-v3-that-link-to-different-pages-when-clicked but it does not work for me for some reason. var directionDisplay; function initialize() { var myOptions = { zoom: 9, center: new google.maps.LatLng(40.81940575,-73.95647955), mapTypeId: google.maps.MapTypeId.TERRAIN } var map = new google.maps.Map(document.getElementById("map_canvas"), myOptions); setMarkers(map, properties); var properties = [ ['106 Ft Washington Avenue',40.8388485,-73.9436015,'Mjg4'], ]; function setMarkers(map, buildings) { var image = new google.maps.MarkerImage('map_marker.png', new google.maps.Size(19,32), new google.maps.Point(0,0), new google.maps.Point(10,32)); var shadow = new google.maps.MarkerImage('map_marker_shadow.png', new google.maps.Size(28,32), new google.maps.Point(0,0), new google.maps.Point(10,32)); var bounds = new google.maps.LatLngBounds; for (var i in buildings) { var myLatLng = new google.maps.LatLng(buildings[i][1], buildings[i][2]); bounds.extend(myLatLng); var marker = new google.maps.Marker({ position: myLatLng, map: map, shadow: shadow, icon: image, title: buildings[i][0] }); google.maps.event.addListener(marker, 'click', function() { window.location = ('detail?b=' + buildings[i][3]); }); } map.fitBounds(bounds); } } </script> Could anyone kindly explain why this doesn't work for me?
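    One thing worth checking in the code as pasted: setMarkers(map, properties) is called before the properties array is assigned, so (because only the var declaration is hoisted) setMarkers receives undefined and the marker loop never runs. Below is a minimal reordering sketch, assuming the rest of initialize() and setMarkers() stay exactly as above:

      function initialize() {
          var myOptions = {
              zoom: 9,
              center: new google.maps.LatLng(40.81940575, -73.95647955),
              mapTypeId: google.maps.MapTypeId.TERRAIN
          };
          var map = new google.maps.Map(document.getElementById("map_canvas"), myOptions);

          // Define the marker data before handing it to setMarkers; in the pasted
          // version the array is only assigned after setMarkers has already run.
          var properties = [
              ['106 Ft Washington Avenue', 40.8388485, -73.9436015, 'Mjg4']
          ];
          setMarkers(map, properties);
      }

    The per-marker URL capture sketched under the previous question applies here as well once more than one entry is added to the array.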

    Read the article

  • Google Map API v3 — set bounds and center

    - by Michael Bradley
    Hi, I've recently switched to Google Maps API V3. I'm working of a simple example which plots markers from an array, however I do not know how to center and zoom automatically with respect to the markers. I've searched the net high and low, including Google's own documentation, but have not found a clear answer. I know I could simply take an average of the co-ordinates, but how would I set the zoom accordingly? Could somebody please point me in the right direction? Perhaps you know of a good tutorial. Many thanks in advance, Michael function initialize() { var myOptions = { zoom: 10, center: new google.maps.LatLng(-33.9, 151.2), mapTypeId: google.maps.MapTypeId.ROADMAP } var map = new google.maps.Map(document.getElementById("map_canvas"),myOptions); setMarkers(map, beaches); } var beaches = [ ['Bondi Beach', -33.890542, 151.274856, 4], ['Coogee Beach', -33.423036, 151.259052, 5], ['Cronulla Beach', -34.028249, 121.157507, 3], ['Manly Beach', -33.80010128657071, 151.28747820854187, 2], ['Maroubra Beach', -33.450198, 151.259302, 1] ]; function setMarkers(map, locations) { var image = new google.maps.MarkerImage('images/beachflag.png', new google.maps.Size(20, 32), new google.maps.Point(0,0), new google.maps.Point(0, 32)); var shadow = new google.maps.MarkerImage('images/beachflag_shadow.png', new google.maps.Size(37, 32), new google.maps.Point(0,0), new google.maps.Point(0, 32)); var lat = map.getCenter().lat(); var lng = map.getCenter().lng(); var shape = { coord: [1, 1, 1, 20, 18, 20, 18 , 1], type: 'poly' }; for (var i = 0; i < locations.length; i++) { var beach = locations[i]; var myLatLng = new google.maps.LatLng(beach[1], beach[2]); var marker = new google.maps.Marker({ position: myLatLng, map: map, shadow: shadow, icon: image, shape: shape, title: beach[0], zIndex: beach[3] }); } }
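    One common way to auto-center and auto-zoom is to accumulate a google.maps.LatLngBounds while the markers are created and then hand it to map.fitBounds(), which picks both the center and the zoom for you (the zoom and center in myOptions then only matter as the initial view). The following is a sketch against the beaches/locations array above, with the icon, shadow and shape options left out for brevity:

      function setMarkers(map, locations) {
          var bounds = new google.maps.LatLngBounds();
          for (var i = 0; i < locations.length; i++) {
              var beach = locations[i];
              var myLatLng = new google.maps.LatLng(beach[1], beach[2]);
              bounds.extend(myLatLng);   // grow the bounds to include this marker
              new google.maps.Marker({ position: myLatLng, map: map, title: beach[0], zIndex: beach[3] });
          }
          map.fitBounds(bounds);         // re-centers the map and chooses a zoom that shows every marker
      }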

    Read the article

  • Scaling a CBitmap - what am I doing wrong?

    - by Smashery
    I've written the following code, which attempts to take a 32x32 bitmap (loaded through MFC's Resource system) and turn it into a 16x16 bitmap, so they can be used as the big and small CImageLists for a CListCtrl. However, when I open the CListCtrl, all the icons are black (in both small and large view). Before I started playing with resizing, everything worked perfectly in Large View. What am I doing wrong? // Create the CImageLists if (!m_imageListL.Create(32,32,ILC_COLOR24, 1, 1)) { throw std::exception("Failed to create CImageList"); } if (!m_imageListS.Create(16,16,ILC_COLOR24, 1, 1)) { throw std::exception("Failed to create CImageList"); } // Fill the CImageLists with items loaded from ResourceIDs int i = 0; for (std::vector<UINT>::iterator it = vec.begin(); it != vec.end(); it++, i++) { CBitmap* bmpBig = new CBitmap(); bmpBig->LoadBitmap(*it); CDC bigDC; bigDC.CreateCompatibleDC(m_itemList.GetDC()); bigDC.SelectObject(bmpBig); CBitmap* bmpSmall = new CBitmap(); bmpSmall->CreateBitmap(16, 16, 1, 24, 0); CDC smallDC; smallDC.CreateCompatibleDC(&bigDC); smallDC.SelectObject(bmpSmall); smallDC.StretchBlt(0, 0, 32, 32, &bigDC, 0, 0, 16, 16, SRCCOPY); m_imageListL.Add(bmpBig, RGB(0,0,0)); m_imageListS.Add(bmpSmall, RGB(0,0,0)); } m_itemList.SetImageList(&m_imageListS, LVSIL_SMALL); m_itemList.SetImageList(&m_imageListL, LVSIL_NORMAL);

    Read the article

  • Cache Simulator in C

    - by DuffDuff
    Ok this is only my second question, and it's quite a doozy. It's for a school assignment, but no one (including the TAs) seems to be able to help me. It's kind of a tall order but I'm not sure where else to turn. Essentially the assignment was to make a cache simulator. This version is direct mapping and is actually only a small portion of the whole project, but if I can't even get this down I have no chance with other associativities. I'm posting my whole code because I don't want to make any assumptions about where the problem is. This is the test case: http://www.mediafire.com/?ty5dnihydnw And you run the following command: ./sims 512 direct 32 fifo wt pinatrace.out You're supposed to get: hits: 604037 misses 138349 writes: 239269 reads: 138349 But I get: Hits: 587148 Misses: 155222 Writes: 239261 Reads: 155222 If anyone could at least point me in the right direction it would be greatly appreciated. I've been stuck on this for about 12 hours. #include <stdio.h> #include <stdlib.h> #include <string.h> #include <math.h> struct myCache { int valid; char *tag; char *block; }; /* sim [-h] <cache size> <associativity> <block size> <replace alg> <write policy> <trace file> */ //God willing I come up with a better Hex to Bin convertion that maintains the beginning 0s... void hex2bin(char input[], char output[]) { int i; int a = 0; int b = 1; int c = 2; int d = 3; int x = 4; int size; size = strlen(input); for (i = 0; i < size; i++) { if (input[i] =='0') { output[i*x +a] = '0'; output[i*x +b] = '0'; output[i*x +c] = '0'; output[i*x +d] = '0'; } else if (input[i] =='1') { output[i*x +a] = '0'; output[i*x +b] = '0'; output[i*x +c] = '0'; output[i*x +d] = '1'; } else if (input[i] =='2') { output[i*x +a] = '0'; output[i*x +b] = '0'; output[i*x +c] = '1'; output[i*x +d] = '0'; } else if (input[i] =='3') { output[i*x +a] = '0'; output[i*x +b] = '0'; output[i*x +c] = '1'; output[i*x +d] = '1'; } else if (input[i] =='x') { output[i*x +a] = '0'; output[i*x +b] = '1'; output[i*x +c] = '0'; output[i*x +d] = '0'; } else if (input[i] =='5') { output[i*x +a] = '0'; output[i*x +b] = '1'; output[i*x +c] = '0'; output[i*x +d] = '1'; } else if (input[i] =='6') { output[i*x +a] = '0'; output[i*x +b] = '1'; output[i*x +c] = '1'; output[i*x +d] = '0'; } else if (input[i] =='7') { output[i*x +a] = '0'; output[i*x +b] = '1'; output[i*x +c] = '1'; output[i*x +d] = '1'; } else if (input[i] =='8') { output[i*x +a] = '1'; output[i*x +b] = '0'; output[i*x +c] = '0'; output[i*x +d] = '0'; } else if (input[i] =='9') { output[i*x +a] = '1'; output[i*x +b] = '0'; output[i*x +c] = '0'; output[i*x +d] = '1'; } else if (input[i] =='a') { output[i*x +a] = '1'; output[i*x +b] = '0'; output[i*x +c] = '1'; output[i*x +d] = '0'; } else if (input[i] =='b') { output[i*x +a] = '1'; output[i*x +b] = '0'; output[i*x +c] = '1'; output[i*x +d] = '1'; } else if (input[i] =='c') { output[i*x +a] = '1'; output[i*x +b] = '1'; output[i*x +c] = '0'; output[i*x +d] = '0'; } else if (input[i] =='d') { output[i*x +a] = '1'; output[i*x +b] = '1'; output[i*x +c] = '0'; output[i*x +d] = '1'; } else if (input[i] =='e') { output[i*x +a] = '1'; output[i*x +b] = '1'; output[i*x +c] = '1'; output[i*x +d] = '0'; } else if (input[i] =='f') { output[i*x +a] = '1'; output[i*x +b] = '1'; output[i*x +c] = '1'; output[i*x +d] = '1'; } } output[32] = '\0'; } int main(int argc, char* argv[]) { FILE *tracefile; char readwrite; int trash; int cachesize; int blocksize; int setnumber; int blockbytes; int setbits; int blockbits; int tagsize; int m; int count = 0; int 
count2 = 0; int count3 = 0; int i; int j; int xindex; int jindex; int kindex; int lindex; int setadd; int totalset; int writeMiss = 0; int writeHit = 0; int cacheMiss = 0; int cacheHit = 0; int read = 0; int write = 0; int size; int extra; char bbits[100]; char sbits[100]; char tbits[100]; char output[100]; char input[100]; char origtag[100]; if (argc != 7) { if (strcmp(argv[0], "-h")) { printf("./sim2 <cache size> <associativity> <block size> <replace alg> <write policy> <trace file>\n"); return 0; } else { fprintf(stderr, "Error: wrong number of parameters.\n"); return -1; } } tracefile = fopen(argv[6], "r"); if(tracefile == NULL) { fprintf(stderr, "Error: File is NULL.\n"); return -1; } //Determining size of sbits, bbits, and tag cachesize = atoi(argv[1]); blocksize = atoi(argv[3]); setnumber = (cachesize/blocksize); printf("setnumber: %d\n", setnumber); setbits = (round((log(setnumber))/(log(2)))); printf("sbits: %d\n", setbits); blockbits = log(blocksize)/log(2); printf("bbits: %d\n", blockbits); tagsize = 32 - (blockbits + setbits); printf("t: %d\n", tagsize); struct myCache newCache[setnumber]; //Allocating Space for Tag Bits, initiating tag and valid to 0s for(i=0;i<setnumber;i++) { newCache[i].tag = (char *)malloc(sizeof(char)*(tagsize+1)); for(j=0;j<tagsize;j++) { newCache[i].tag[j] = '0'; } newCache[i].valid = 0; } while(fgetc(tracefile)!='#') { setadd = 0; totalset = 0; //read in file fseek(tracefile,-1,SEEK_CUR); fscanf(tracefile, "%x: %c %s\n", &trash, &readwrite, origtag); //shift input Hex size = strlen(origtag); extra = (10 - size); for(i=0; i<extra; i++) input[i] = '0'; for(i=extra, j=0; i<(size-(2-extra)); j++, i++) input[i]=origtag[j+2]; input[8] = '\0'; // Convert Hex to Binary hex2bin(input, output); //Resolving the Address into tbits, sbits, bbits for (xindex=0, jindex=(32-blockbits); jindex<32; jindex++, xindex++) { bbits[xindex] = output[jindex]; } bbits[xindex]='\0'; for (xindex=0, kindex=(32-(blockbits+setbits)); kindex<32-(blockbits); kindex++, xindex++){ sbits[xindex] = output[kindex]; } sbits[xindex]='\0'; for (xindex=0, lindex=0; lindex<(32-(blockbits+setbits)); lindex++, xindex++){ tbits[xindex] = output[lindex]; } tbits[xindex]='\0'; //Convert set bits from char array into ints for(xindex = 0, kindex = (setbits -1); xindex < setbits; xindex ++, kindex--) { if (sbits[xindex] == '1') setadd = 1; if (sbits[xindex] == '0') setadd = 0; setadd = setadd * pow(2, kindex); totalset += setadd; } //Calculating Hits and Misses if (newCache[totalset].valid == 0) { newCache[totalset].valid = 1; strcpy(newCache[totalset].tag, tbits); } else if (newCache[totalset].valid == 1) { if(strcmp(newCache[totalset].tag, tbits) == 0) { if (readwrite == 'W') { cacheHit++; write++; } if (readwrite == 'R') cacheHit++; } else { if (readwrite == 'R') { cacheMiss++; read++; } if (readwrite == 'W') { cacheMiss++; read++; write++; } strcpy(newCache[totalset].tag, tbits); } } } printf("Hits: %d\n", cacheHit); printf("Misses: %d\n", cacheMiss); printf("Writes: %d\n", write); printf("Reads: %d\n", read); }

    Read the article

  • Can I use a different parser for Axis 1.4?

    - by NishM
    The current SAX parser takes a lot of time (20 minutes) and heap memory(around 400mb) to deserialize the response coming from the soap server as per the logs. Our response XMLs are of average size 4 mb. A part of the log when it runs the applicaiton out of heap is below DEBUG (org.apache.axis.encoding.DeserializationContext) Pushing handler org.apache.axis.message.SOAPHandler@16d22f1 DEBUG (org.apache.axis.i18n.ProjectResourceBundle) org.apache.axis.i18n.resource::handleGetObject(newElem00) DEBUG (org.apache.axis.message.MessageElement) New MessageElement (org.apache.axis.message.MessageElement@112c22) named {}name DEBUG (org.apache.axis.encoding.DeserializationContext) Pushing element name DEBUG (org.apache.axis.utils.NSStack) NSPush (32) DEBUG (org.apache.axis.encoding.DeserializationContext) Exit: DeserializationContext::startElement() DEBUG (org.apache.axis.encoding.DeserializationContext) Enter: DeserializationContext::endElement(, name) DEBUG (org.apache.axis.i18n.ProjectResourceBundle) org.apache.axis.i18n.resource::handleGetObject(popHandler00) DEBUG (org.apache.axis.encoding.DeserializationContext) Popping handler org.apache.axis.message.SOAPHandler@16d22f1 DEBUG (org.apache.axis.utils.NSStack) NSPop (32) DEBUG (org.apache.axis.encoding.DeserializationContext) Popped element stack to org.apache.axis.message.MessageElement:property DEBUG (org.apache.axis.encoding.DeserializationContext) Exit: DeserializationContext::endElement() DEBUG (org.apache.axis.encoding.DeserializationContext) Enter: DeserializationContext::startElement(, value) DEBUG (org.apache.axis.i18n.ProjectResourceBundle) org.apache.axis.i18n.resource::handleGetObject(pushHandler00) DEBUG (org.apache.axis.encoding.DeserializationContext) Pushing handler org.apache.axis.message.SOAPHandler@16880ba DEBUG (org.apache.axis.i18n.ProjectResourceBundle) org.apache.axis.i18n.resource::handleGetObject(newElem00) DEBUG (org.apache.axis.message.MessageElement) New MessageElement (org.apache.axis.message.MessageElement@1db74af) named {}value DEBUG (org.apache.axis.encoding.DeserializationContext) Pushing element value DEBUG (org.apache.axis.utils.NSStack) NSPush (32) DEBUG (org.apache.axis.encoding.DeserializationContext) Exit: DeserializationContext::startElement() DEBUG (org.apache.axis.encoding.DeserializationContext) Enter: DeserializationContext::endElement(, value) DEBUG (org.apache.axis.i18n.ProjectResourceBundle) org.apache.axis.i18n.resource::handleGetObject(popHandler00) DEBUG (org.apache.axis.encoding.DeserializationContext) Popping handler org.apache.axis.message.SOAPHandler@16880ba DEBUG (org.apache.axis.utils.NSStack) NSPop (32) I cannot use Axis2 because of technical reasons. I have tried using HTTP Commons client instead of HTTP client but the response time remains the same. How can i link a different parser(example xerces 2.10.0 or xstream 1.3.1?) to Axis 1.4 framework in this context so that memory management and response time is favorable?.

    Read the article

  • jQuery selector logical AND?

    - by taber
    In jQuery I'm trying to select only mount nodes where a and b's text values are 64 and "test" respectively. I'd also like to fall back to 32 if no 64 and "test" exist. What I'm seeing with the code below, though, is that the 32 mount is being returned instead of the 64. The XML:

      <thingses>
        <thing>
          <a>32</a>  <!-- note, a here is 32 and not 64 -->
          <other>...</other>
          <mount>sample 1</mount>
          <b>test</b>
        </thing>
        <thing>
          <a>64</a>
          <other>...</other>
          <mount>sample 2</mount>
          <b>test</b>
        </thing>
        <thing>
          <a>64</a>
          <other>...</other>
          <mount>sample 3</mount>
          <b>unrelated</b>
        </thing>
        <thing>
          <a>128</a>
          <other>...</other>
          <mount>sample 4</mount>
          <b>unrelated</b>
        </thing>
      </thingses>

    And unfortunately I don't have control over the XML as it comes from somewhere else. What I'm doing now is:

      var ret_val = '';
      $data.find('thingses thing').each(function(i, node) {
          var $node = $(node),
              found_node = $node.find('b:first:is(test), a:first:is(64)').end().find('mount:first').text();
          if (found_node) {
              ret_val = found_node;
              return;
          }
          found_node = $node.find('b:first:is(test), a:first:is(32)').end().find('mount:first').text();
          if (found_node) {
              ret_val = found_node;
              return;
          }
          ret_val = 'not found';
      });
      // expected result is "sample 2", but if sample 2's parent "thing" was missing, the result would be "sample 1"
      alert(ret_val);

    For my ":is" selector I'm using:

      if (jQuery) {
          jQuery.expr[":"].is = function(obj, index, meta, stack) {
              return (obj.textContent || obj.innerText || $(obj).text() || "").toLowerCase() == meta[3].toLowerCase();
          };
      }

    There has to be a better way than how I'm doing it. I wish I could replace the "," with "AND" or something. :) Any help would be much appreciated. Thanks!
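    One way to express the "AND" is to filter the thing elements with a predicate function and only then read the mount. This is only a sketch (it assumes $data is the jQuery-wrapped XML document and the element names shown above), not the only possible approach:

      function mountFor(value) {
          // Keep only <thing> nodes whose <a> text is `value` AND whose <b> text is "test".
          var $match = $data.find('thingses thing').filter(function () {
              return $(this).children('a').text() === value &&
                     $(this).children('b').text().toLowerCase() === 'test';
          });
          return $match.first().children('mount').text();   // '' when nothing matched
      }

      var ret_val = mountFor('64') || mountFor('32') || 'not found';   // prefer 64, fall back to 32
      alert(ret_val);   // "sample 2" here; "sample 1" if the 64/test thing were missing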

    Read the article

  • Boost Mersenne Twister: how to seed with more than one value?

    - by Eamon Nerbonne
    I'm using the boost mt19937 implementation for a simulation. The simulation needs to be reproducible, and that means storing and potentially reusing the RNG seeds later. I'm using the windows crypto api to generate the seed values because I need an external source for the seeds and not because of any particular guarantees of randomness. The output of any simulation run will have a note including the RNG seed - so the seed needs to be reasonably short. On the other hand, as part of the analysis of the simulation, I'll be comparing several runs - but to be sure that these runs are actually different, I'll need to use different seeds - so the seed needs to be long enough to avoid accidental collisions. I've determined that 64-bits of seeding should suffice; the chance of a collision will reach 50% after about 2^32 runs - that probability is low enough that the average error caused by it is negligible to me. Using just 32-bits of seed is tricky; the chance of a collision reaches 50% already after 2^16 runs; and that's a little too likely for my tastes. Unfortunately, the boost implementation either seeds with a full state vector - which is far, far too long - or a single 32-bit unsigned long - which isn't ideal. How can I seed the generator with more than 32-bits but less than a full state vector? I tried just padding the vector or repeating the seeds to fill the state vector, but even a cursory glance at the results shows that that generates poor results.

    Read the article

  • OpenCV application crashes at runtime with error code 0xc0000142

    - by Tuan Anh
    I have openCV and minGW installed with codeblock IDE following the instructions found here http://kevinhughes.ca/tutorials/opencv-install-on-windows-with-codeblocks-and-mingw/ i tried the simple image loading program in the article and the build process went fine. but when i tried running the output program, it crashes with the error message "the application was unable to start correctly (0xc0000142). Click OK to close the application." I used Dependency Walker to see if the program failed to load dll module and here's the output screen of Dependency Walker https://www.dropbox.com/s/f9iaftdt8atjwpl/Screenshot%202013-11-05%2022.21.45.png i am not used to DW but as i can see in its output screen, some openCV dll failed to load and the loaded Windows DLL were 64 bit instead of 32 bit (as minGW is 32 bit). I can't figure out why as i already configure the Path environment variable for the bin directory of openCV and the app still can not load the dll modules. And i think that Windows will automatically load the proper 32 bit DLLs when a 32 bit app is run but this situation the app still failed to load. Anyone has ideas?

    Read the article

  • Epoll in EPOLLET mode returning 2 EPOLLIN before reading from the socket

    - by Arkaitz Jimenez
    The epoll manpage says that an fd registered with EPOLLET (edge-triggered) shouldn't notify EPOLLIN twice if no read has been done. So after an EPOLLIN you need to empty the buffer before epoll_wait will return a new EPOLLIN for new data. However, I'm experiencing problems with this approach, as I'm seeing duplicated EPOLLIN events for untouched fds. This is the strace output; 0x2000 is EPOLLRDHUP, which is not defined yet in my glibc headers but is defined in the kernel.

      30285 epoll_ctl(3, EPOLL_CTL_ADD, 9, {EPOLLIN|EPOLLPRI|EPOLLERR|EPOLLHUP|EPOLLET|0x2000, {u32=9, u64=9}}) = 0
      30285 epoll_wait(3, {{EPOLLIN, {u32=9, u64=9}}}, 10, -1) = 1
      30285 epoll_wait(3, {{EPOLLIN, {u32=9, u64=9}}}, 10, -1) = 1
      30285 epoll_wait(3, <unfinished ...>
      30349 epoll_ctl(3, EPOLL_CTL_DEL, 9, NULL) = 0
      30306 recv(9, "7u\0\0\10\345\241\312\t\20\f\32\r\10\27\20\2\30\200\10 \31(C0\17\32\r\10\27\20\2\30"..., 20000, 0) = 20000
      30349 epoll_ctl(3, EPOLL_CTL_DEL, 9, NULL) = -1 ENOENT (No such file or directory)
      30305 recv(9, " \31(C0\17\32\r\10\27\20\2\30\200\10 \31(C0\17\32\r\10\27\20\2\30\200\10 \31("..., 20000, 0) = 10011

    So, after adding fd number 9 I receive two consecutive EPOLLIN events before having recv'd from the file descriptor. The syscall trace shows that I do delete the fd before reading, but that should only happen once, one per event. So either I am not reading the manpage properly or something is not working here.

    Read the article

  • Filling a byte array in Java

    - by Corleone
    Hey all! For part of a project I'm working on I am implementing a RTPpacket where I have to fill the header array of byte with RTP header fields. //size of the RTP header: static int HEADER_SIZE = 12; // bytes //Fields that compose the RTP header public int Version; // 2 bits public int Padding; // 1 bit public int Extension; // 1 bit public int CC; // 4 bits public int Marker; // 1 bit public int PayloadType; // 7 bits public int SequenceNumber; // 16 bits public int TimeStamp; // 32 bits public int Ssrc; // 32 bits //Bitstream of the RTP header public byte[] header = new byte[ HEADER_SIZE ]; This was my approach: /* * bits 0-1: Version * bit 2: Padding * bit 3: Extension * bits 4-7: CC */ header[0] = new Integer( (Version << 6)|(Padding << 5)|(Extension << 6)|CC ).byteValue(); /* * bit 0: Marker * bits 1-7: PayloadType */ header[1] = new Integer( (Marker << 7)|PayloadType ).byteValue(); /* SequenceNumber takes 2 bytes = 16 bits */ header[2] = new Integer( SequenceNumber >> 8 ).byteValue(); header[3] = new Integer( SequenceNumber ).byteValue(); /* TimeStamp takes 4 bytes = 32 bits */ for ( int i = 0; i < 4; i++ ) header[7-i] = new Integer( TimeStamp >> (8*i) ).byteValue(); /* Ssrc takes 4 bytes = 32 bits */ for ( int i = 0; i < 4; i++ ) header[11-i] = new Integer( Ssrc >> (8*i) ).byteValue(); Any other, maybe 'better' ways to do this?

    Read the article

  • UIImageView not updating after view transition

    - by Fabio Ricci
    Hi I have one main view where I display an image, in the method viewDidLoad: ballRect = CGRectMake(posBallX, 144, 32.0f, 32.0f); theBall = [[UIImageView alloc] initWithFrame:ballRect]; [theBall setImage:[UIImage imageNamed:@"ball.png"]]; [self.view addSubview:theBall]; [laPalla release]; Obviously, the value of posBallX is defined and then update via a custom method call many times in the same class. theBall.frame = CGRectMake(posBallX, 144, 32, 32); Everything works, but when I go to another view with [self presentModalViewController:viewTwo animated:YES]; and come back with [self presentModalViewController:viewOne animated:YES]; the image is displayed correctly after the method viewDidLoad is called (I retrieve the values with NSUserDefaults) but no more in the second method. In the NSLog I can even see the new posBallX updating correctly, but the Image is simply no more shown... The same happens with a Label as well, which should print the value of posBallX. So, things are just not working if I come back to the viewOne from the viewTwo... Any idea??????? Thanks so much!

    Read the article

  • Linux: modpost does not build anything

    - by waffleman
    I am having problems getting any kernel modules to build on my machine. Whenever I build a module, modpost always says there are zero modules:
        MODPOST 0 modules
    To troubleshoot the problem, I wrote a test module (hello.c):
        #include <linux/module.h>   /* Needed by all modules */
        #include <linux/kernel.h>   /* Needed for KERN_INFO */
        #include <linux/init.h>     /* Needed for the macros */

        static int __init hello_start(void)
        {
            printk(KERN_INFO "Loading hello module...\n");
            printk(KERN_INFO "Hello world\n");
            return 0;
        }

        static void __exit hello_end(void)
        {
            printk(KERN_INFO "Goodbye Mr.\n");
        }

        module_init(hello_start);
        module_exit(hello_end);
    Here is the Makefile for the module:
        obj-m = hello.o
        KVERSION = $(shell uname -r)
        all:
            make -C /lib/modules/$(KVERSION)/build M=$(shell pwd) modules
        clean:
            make -C /lib/modules/$(KVERSION)/build M=$(shell pwd) clean
    When I build it on my machine, I get the following output:
        make -C /lib/modules/2.6.32-27-generic/build M=/home/waffleman/tmp/mod-test modules
        make[1]: Entering directory `/usr/src/linux-headers-2.6.32-27-generic'
          CC [M]  /home/waffleman/tmp/mod-test/hello.o
          Building modules, stage 2.
          MODPOST 0 modules
        make[1]: Leaving directory `/usr/src/linux-headers-2.6.32-27-generic'
    When I make the module on another machine, it is successful:
        make -C /lib/modules/2.6.24-27-generic/build M=/home/somedude/tmp/mod-test modules
        make[1]: Entering directory `/usr/src/linux-headers-2.6.24-27-generic'
          CC [M]  /home/somedude/tmp/mod-test/hello.o
          Building modules, stage 2.
          MODPOST 1 modules
          CC      /home/somedude/tmp/mod-test/hello.mod.o
          LD [M]  /home/somedude/tmp/mod-test/hello.ko
        make[1]: Leaving directory `/usr/src/linux-headers-2.6.24-27-generic'
    I looked for any relevant documentation about modpost, but found little. Does anyone know how modpost decides what to build? Is there something in my environment that I am possibly missing? BTW, here is what I am running:
        uname -a
        Linux waffleman-desktop 2.6.32-27-generic #49-Ubuntu SMP Wed Dec 1 23:52:12 UTC 2010 i686 GNU/Linux

    Read the article

  • Targeting x86 vs AnyCPU when building for 64-bit Windows OSes

    - by Mr Roys
    I have an existing C# application written for .NET 2.0 and targeting AnyCPU at the moment. It currently references some third party .NET DLLs which I don't have the source for (and I'm not sure if they were built for x86, x64 or AnyCPU). If I want to run my application specifically on a 64-bit Windows OS, which platform should I target in order for my app to run without errors? My understanding at the moment is to target:
    x86: if at least one third party .NET DLL is built for x86 or uses p/Invoke to interface with Win32 DLLs. The application will run in 32-bit mode on both 32-bit and 64-bit OSes.
    x64: if all third party .NET DLLs are already built for x64 or AnyCPU. The application will only run on 64-bit OSes.
    AnyCPU: if all third party .NET DLLs are already built for AnyCPU. The application will run in 32-bit mode on 32-bit OSes and in 64-bit mode on 64-bit OSes.
    Also, am I right to believe that while targeting AnyCPU will generate no errors when building an application referencing third party x86 .NET DLLs, the application will throw a runtime exception when it tries to load these DLLs on a 64-bit OS? Hence, as long as one of my third party DLLs uses p/Invoke or is x86, I can only target x86 for this application?

    Read the article

  • Django URL Conf Returns Incorrect "Current URL"

    - by natnit
    I have a django app that is mostly done, and the URLs work perfectly when I run it with the manage.py runserver command. However, I've recently tried to get it running via lighttpd, and many links have stopped working. For example, http://mysite.com/races/32 should work, but instead throws this error message:
        Page not found (404)
        Request Method: GET
        Request URL: http://mysite.com/races/32
        Using the URLconf defined in racetrack.urls, Django tried these URL patterns, in this order:
            ^admin/
            ^create/$
            ^races/$
            ^races/(?P<race_id>\d+)/$
            ^races/(?P<race_id>\d+)/manage/$
            ^races/(?P<text>\w+)/$
            ^user/(?P<kol_id>\d+)/$
            ^$
            ^login/$
            ^logout/$
        The current URL, 32, didn't match any of these.
    The request URL is accurate, but the last line (which displays the current URL) is giving 32 instead of races/32 as expected. Here is my urlconf:
        from django.conf.urls.defaults import *
        from django.contrib import admin

        admin.autodiscover()

        urlpatterns = patterns('racetrack.races.views',
            (r'^admin/', include(admin.site.urls)),
            (r'^create/$', 'create'),
            (r'^races/$', 'index'),
            (r'^races/(?P<race_id>\d+)/$', 'detail'),
            (r'^races/(?P<race_id>\d+)/manage/$', 'manage'),
            (r'^races/(?P<text>\w+)/$', 'index'),
            (r'^user/(?P<kol_id>\d+)/$', 'user'),
            # temporary for index page, replace with welcome page
            (r'^$', 'index'),
        )

        urlpatterns += patterns('django.contrib.auth.views',
            (r'^login/$', 'login', {'template_name': 'races/login.html'}),
            (r'^logout/$', 'logout', {'next_page': '/'}),
        )
    Thank you.

    Read the article

  • What database strategy to choose for a large web application

    - by Snoopy
    I have to rewrite a large database application, running on 32 servers. The hardware is up to date; each machine has two quad-core Xeons and 32 GByte RAM. The database is multi-tenant; each customer has his own file, around 5 to 10 GByte each. I run around 50 databases on this hardware. The app is open to the web, so I have no control over the load. There are no really complex queries, so SQL is not required if there is a better solution. The databases get updated via FTP every day at midnight. The database is read-only. C# is my favourite language and I want to use ASP.NET MVC. I thought about the following options:
    - Use two big SQL servers running SQL Server 2012 to serve the 32 servers with data. On the 32 servers, run IIS hosting REST services.
    - Denormalize the database and use Redis on each webserver. Use booksleeve as a Redis client.
    - Use a combination of SQL Server and Redis
    - Use SQL Server 2012 together with Hadoop
    - Use Hadoop without SQL Server
    What is the best way for a read-only database to get the best performance without losing maintainability? Does Map-Reduce make sense at all in such a scenario? The reason for the rewrite is that the old app, written in C++ with ISAM technology, is too slow, and the interfaces are old-fashioned and not nice to use from a website, especially when using Ajax. The app uses a relational data model with many tables, but it is possible to build one accelerator table that all queries can be performed on; all other information from the other tables can be fetched with a simple key lookup.

    Read the article

  • Send email to a list of users with different timezones?

    - by ylazez
    I use the following method to send an email to a list of users. I want the (To) in each email to contain just that one user, not all of them, i.e. it should appear to each user that the email was sent only to him. My guess is to loop over
        message.addRecipients(Message.RecipientType.TO, address);
    and then send the message each time, right? But that is a heavy process, sending the email many times. Any ideas? Also, suppose that I have the timezone for each user and I want to send each user the message in his timezone; I guess it's the same issue: set the sent date for each user in his timezone and then send the message, right? The method is:
        try {
            Properties props = System.getProperties();
            props.put("mail.smtp.host", "localhost");
            // Get a mail session
            Session session = Session.getDefaultInstance(props, null);
            // Define a new mail message
            Message message = new MimeMessage(session);
            InternetAddress ia = new InternetAddress();
            ia.setPersonal("MySite");
            ia.setAddress(from);
            message.setFrom(ia);
            Address[] address = new Address[recievers.size()];
            for (int i = 0; i < recievers.size(); i++) {
                address[i] = new InternetAddress(recievers.get(i));
            }
            message.addRecipients(Message.RecipientType.TO, address);
            message.setSubject(subject);
            // Create a message part to represent the body text
            BodyPart messageBodyPart = new MimeBodyPart();
            messageBodyPart.setContent(body, "text/html");
            // use a MimeMultipart as we need to handle the file attachments
            Multipart multipart = new MimeMultipart();
            // add the message body to the mime message
            multipart.addBodyPart(messageBodyPart);
            // Put all message parts in the message
            message.setContent(multipart);
            message.setSentDate(getCurrentDate());
            // Send the message
            Transport.send(message);
        } catch (Exception ex) {}
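    A minimal sketch of the per-recipient loop hinted at above (reusing the same props, ia, recievers, subject and body variables and the same imports as the method, plus java.util.Date; whether the extra sends are acceptable is the open question). Building one MimeMessage per user means each recipient only sees himself in the To: header:
        Session session = Session.getDefaultInstance(props, null);
        for (int i = 0; i < recievers.size(); i++) {
            Message message = new MimeMessage(session);
            message.setFrom(ia);
            message.setRecipient(Message.RecipientType.TO,
                                 new InternetAddress(recievers.get(i)));
            message.setSubject(subject);
            BodyPart messageBodyPart = new MimeBodyPart();
            messageBodyPart.setContent(body, "text/html");
            Multipart multipart = new MimeMultipart();
            multipart.addBodyPart(messageBodyPart);
            message.setContent(multipart);
            // java.util.Date is an absolute instant; a per-user timezone only changes
            // how the Date header is rendered, not the moment it refers to
            message.setSentDate(new Date());
            Transport.send(message);
        }
    The loop reuses one Session but Transport.send still opens a connection per message; for large lists, connecting a single Transport once and calling sendMessage for each message would cut that overhead.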

    Read the article

  • Ping broadcast on Win XP SP3

    - by PaulH
    I'm trying to ping the broadcast address 255.255.255.255 on WinXP SP3. If I use the command line, I get a host error:
        C:\>ping 255.255.255.255
        Ping request could not find host 255.255.255.255. Please check the name and try again.
    If I try a C++ program using the iphlpapi, IcmpSendEcho() fails and GetLastError returns 11010 IP_REQ_TIMED_OUT.
        HANDLE h = ::IcmpCreateFile();
        IPAddr broadcast = inet_addr( "255.255.255.255" );
        BYTE payload[ 32 ] = { 0 };
        IP_OPTION_INFORMATION option = { 255, 0, 0, 0, 0 };
        // a buffer with room for 32 replies each containing the full payload
        std::vector< BYTE > replies( 32 * ( sizeof( ICMP_ECHO_REPLY ) + 32 ) );
        DWORD res = ::IcmpSendEcho( h, broadcast, payload, sizeof( payload ),
                                    &option, &replies[ 0 ], replies.size(), 1000 );
        ::IcmpCloseHandle( h );
    I can ping the local broadcast 192.168.0.255 with no problem. What do I need to do to ping the global broadcast? Thanks, PaulH

    Read the article
