Search Results

Search found 7654 results on 307 pages for 'module'.

Page 123 of 307

  • DriveImage XML fails with a Windows Volume Shadow Service Error

    - by ssvarc
    I'm trying to image a SATA laptop hard drive that is attached to my computer via a USB adapter, using DriveImage XML. I'm running Win7 Ultimate 64 bit. DriveImage XML is returning:

        Could not initialize Windows Volume Shadow Service (VSS). ERROR C:\Program Files (x86)\Runtime Software\DriveImage XML\vss64.exe failed to start. ERROR TIMEOUT Make sure VSSVC.EXE is running in your task manager. Click Help for more information.

    VSSVC.EXE is running in Task Manager, as is VSS64.exe. Looking at the FAQ on the Runtime webpage turned this up:

        Please verify in Settings - Control Panel - Administrative Tools - Services that the following services are enabled: MS Software Shadow Copy Provider, Volume Shadow Copy. Also make sure you are able to stop and start these services. Possible reasons for VSS failures: For VSS to work, at least one volume in your computer must be NTFS. If you use only FAT drives, VSS will not function. The required NTFS volume does not need to be identical with the volume you want to image. You should make sure that VSSVC.EXE is running in your task manager. If the problems persist, registering "oleaut.dll" and "oleaut32.dll" using "regsvr32" might help.

    Both of those services are running and can be started and stopped without issue. Using "regsvr32" to register "oleaut32.dll" returns successful, but "oleaut.dll" returns:

        The module "oleaut.dll" failed to load. Make sure the binary is stored at the specified path or debug it to check for problems with the binary or dependent .DLL files. The specified module could not be found.

    Some other information that might be relevant: browsing to the drive is successful, but accessing certain folders returns an "access" error, and Windows runs a permissions adder that adds the current user profile to the NTFS permissions. Could this be the cause of the issue? DriveImage XML is running as Administrator. Thoughts?
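
    For reference, a quick way to check the two VSS services and re-register the automation library from an elevated command prompt is sketched below. The service short names VSS and swprv are assumptions on my part, and "oleaut.dll" does not normally ship as a separately registerable DLL on 64-bit Windows 7, which would explain that particular regsvr32 failure.

        sc query VSS
        sc query swprv
        vssadmin list writers
        vssadmin list providers
        regsvr32 oleaut32.dll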

  • SkyDrive broken after upgrade to Windows 8.1: "This location can't be found, please try later"

    - by avo
    Upgrading from Windows 8 to Windows 8.1 via the Store upgrade path has screwed up my SkyDrive. The C:\Users\<user name>\SkyDrive folder is empty (it only has a single file, desktop.ini). When I open the native (Store) SkyDrive app, I see "This location can't be found, please try later". I'm glad to still have my files alive online in my SkyDrive account. I tried disconnecting from / reconnecting to my Microsoft Account with no luck. Does anyone have an idea how to fix this without reinstalling/refreshing Windows 8.1? From Event Viewer:

        Faulting application name: skydrive.exe, version: 6.3.9600.16412, time stamp: 0x5243d370
        Faulting module name: unknown, version: 0.0.0.0, time stamp: 0x00000000
        Exception code: 0x00000000
        Fault offset: 0x0000000000000000
        Faulting process ID: 0x4e8
        Faulting application start time: 0x01cece256589c7ee
        Faulting application path: C:\Windows\System32\skydrive.exe
        Faulting module path: unknown
        Report ID: {...}
        Faulting package full name:
        Faulting package-relative application ID:

    Also:

        The machine-default permission settings do not grant Local Activation permission for the COM Server application with CLSID {C2F03A33-21F5-47FA-B4BB-156362A2F239} and APPID {316CDED5-E4AE-4B15-9113-7055D84DCC97} to the user NT AUTHORITY\LOCAL SERVICE SID (S-1-5-19) from address LocalHost (Using LRPC) running in the application container Unavailable SID (Unavailable). This security permission can be modified using the Component Services administrative tool.

    I was never a big fan of in-place upgrades anyway, but this time it was a machine which I use for work, with a lot of stuff already installed on it. I shouldn't have tried to upgrade it in the first place, but I was convinced Windows 8.1 is a solid update. Another lesson learnt.

  • Why does notepad crash on desktop files in the save-as dialog?

    - by deepc
    Here's a puzzling problem - maybe somebody has an idea. Right now I am out of ideas. On Win7 64bit, the following crashes Notepad:

    1. On the Desktop, right click, select "New | Text Document". This creates "New Text Document.txt".
    2. Right click on that file, select "Edit". This opens Notepad with the empty file.
    3. Select "File | Save as": Notepad crashes and Win7 reports that "Notepad has stopped working".

    Now, move the file to c:\temp and repeat steps 2 and 3: no crash this time, and the save-as dialog appears normally. I can create similar steps for the "open" dialog. Things I have tried:

    - Safe mode - does not work, same problem
    - Create a new user and try again logged in as that user - no crash
    - Name the file differently, or create it elsewhere and then move it to the desktop - same problem
    - Use Wordpad instead - same problem
    - Review shell extensions with ShellExView - nothing extraordinary here
    - Stare at the event log entries for each of the crashes. Does not enlighten me.
    - At the time of the crash, look at the Process Explorer stack view. It hangs at a function "TaskDialog".
    - sfc.exe /scannow repaired some files but the problem persists.

    This is what the event log entries look like:

        Log Name: Application
        Source: Application Error
        Date: 14.12.2010 00:33:48
        Event ID: 1000
        Task Category: (100)
        Level: Error
        Keywords: Classic
        User: N/A
        Description:
        Faulting application name: NOTEPAD.EXE, version: 6.1.7600.16385, time stamp: 0x4a5bc9b3
        Faulting module name: COMCTL32.dll, version: 6.10.7600.16661, time stamp: 0x4c6f6e4b
        Exception code: 0xc000041d
        Fault offset: 0x00000000000db770
        Faulting process id: 0x198
        Faulting application start time: 0x01cb9b1e140ab92a
        Faulting application path: C:\Windows\system32\NOTEPAD.EXE
        Faulting module path: C:\Windows\WinSxS\amd64_microsoft.windows.common-controls_6595b64144ccf1df_6.0.7600.16661_none_fa62ad231704eab7\COMCTL32.dll

    What else should I try, short of dumping my user and starting over with a new profile? Thanks...

  • Windows Error Reporting and IIS7 on Windows Server 2008

    - by graffic
    On a Windows web server I'm trying to get a memory dump of a failing IIS 7 worker process (w3wp.exe), to no avail. In the Event Viewer I get the following:

        Faulting application name: w3wp.exe, version: 7.5.7600.16385, time stamp: 0x4a5bd0eb
        Faulting module name: clr.dll, version: 4.0.30319.1, time stamp: 0x4ba21eeb
        Exception code: 0xc00000fd
        Fault offset: 0x0000000000005c22
        Faulting process id: 0x1cac
        Faulting application start time: 0x01cc23419da54772
        Faulting application path: c:\windows\system32\inetsrv\w3wp.exe
        Faulting module path: C:\Windows\Microsoft.NET\Framework64\v4.0.30319\clr.dll
        Report Id: b54ec4f8-8fa4-11e0-ab62-005056810035

    Even though I've configured LocalDumps for WER, and specifically for w3wp.exe, in the registry, I only get another event telling me that there is a report here:

        C:\ProgramData\Microsoft\Windows\WER\ReportQueue\AppCrash_w3wp.exe_cdb8af6deb381574fe9fb0dc9aa3edaad59acd5f_cab_4fbf9b53

    It contains the following files:

        WERD931.tmp.appcompat.txt
        WERDFE9.tmp.WERInternalMetadata.xml
        WER99EF.tmp.WERDataCollectionFailure.txt

    The depressing one is the WERDataCollectionFailure, which says:

        Heap dump generation failed: 0x8007012b
        Mini dump generation failed: 0x8007001f

    After many tries, lots of MSDN documentation and many failed Google searches, I'm out of ideas on how to get a dump here. Does anyone have any suggestion on how to make WER work? Thank you in advance for your time reading this :)

  • Unable to specify parameters to cvlc in a script

    - by VxJasonxV
    I'm creating a script that issues a few curl commands in order to access a time-protected mms stream link, then sets up a relay using cvlc (VLC's command line interface) for my own use on an unencumbered player. The curl aspect of this is working, as I can run a browser and curl side by side and get the same access URL. (It's time locked, meaning the stream will work forever, but you have to connect quickly or the URL will time out.) The very end of the script prints the command I will run, which is then followed up by "exec $CMD". When I echo $CMD I get:

        cvlc --sout '#standard{access=http,mux=asf,dst=0.0.0.0:58194}' mms://[...]

    Manually copy/pasting this command in, verbatim, works perfectly fine, but as part of a script, the cvlc execution output says:

        [0x9743d0] main interface error: no suitable interface module
        [0x962120] main libvlc error: interface "globalhotkeys,none" initialization failed
        [0x9743d0] dummy interface: using the dummy interface module...
        [0xb16e30] stream_out_standard stream out error: no mux specified or found by extension
        [0xb16ad0] main stream output error: stream chain failed for `standard{mux="",access="",dst="'#standard{access=http,mux=asf,dst=0.0.0.0:58194}'"}'
        [0xb11cd0] main input error: cannot start stream output instance, aborting
        [0xb11f70] signals interface error: Caught Interrupt signal, exiting...

    Why is --sout behaving one way in a script (non-interactive shell?) vs. another way in the foreground (interactive shell)?
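
    A likely explanation, sketched under the assumption that $CMD is built as a single string: when the shell expands "exec $CMD" it performs word splitting but not quote removal, so the literal single quotes become part of the --sout argument - which matches the dst="'#standard{...}'" fragment in the error above. Building the command as an array (or using eval) sidesteps this; ACCESS_URL below is a placeholder for the URL obtained by the earlier curl calls.

        #!/bin/bash
        # Hypothetical rewrite of the relay step.
        ACCESS_URL="mms://example.invalid/stream"
        CMD=(cvlc --sout '#standard{access=http,mux=asf,dst=0.0.0.0:58194}' "$ACCESS_URL")
        echo "Running: ${CMD[*]}"
        exec "${CMD[@]}"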

  • Disabling Laptop (PB TJ-75) faulty card reader Linux

    - by Gab
    My problem is that my laptop [PB TJ-75] has a faulty Alcor card reader. It's 100% certain: the device is dead and unusable whatever the OS is. It cannot be disabled in the BIOS [latest: Vendor: Phoenix Technologies LTD, Version: V1.26, Release Date: 05/04/2010]. If I could easily remove it from the main board, and the system would then never look for it again, I'd be very happy! Is that possible - has anyone ever tried it? Or maybe the BIOS could be replaced with a more open one which lets you disable the card reader. Does this exist?

    Here's what I've tried to disable it so far. In Win7, I choose 'disable' in Device Manager and that's OK; if not, the device keeps appearing and disappearing and a lot of resources are used. In Lubuntu 13.04, I get extra boot time, with the message 'sdb, assuming drive cache, etc.'. I tried other distros (ISOs booted by GRUB): I can boot Puppy, GParted and Redobackup apparently without any problem, but I cannot boot Debian (live or install), and I also tried Crunchbang and Tails; I get a loop: 'usb device, scsi n+1 ...'. I tried "nousb": no result. I have blacklisted EHCI: no result. Then the usb_storage module: better boot time in Lubuntu, with just the message "...data transfer failed", and better shutdown time too - but no way to use USB storage media. In Debian, it ends with a BusyBox prompt.

    Is it possible to just disable that Alcor card reader? Does it have a specific module? Is there a special kernel boot option that I missed? Does it have something to do with kernel recompiling, and if so, how do I do that with ISOs? Programming a driver which says everything is OK (out of my comprehension for the moment)? Disabling the device by vendor ID? What is the best way?

  • Nginx and automatic updates

    - by Desmond Hume
    I'm on Ubuntu 12.04.1 with unattended-upgrades configured for automatic security updates, and I installed Nginx by first adding

        deb http://nginx.org/packages/ubuntu/ lucid nginx
        deb-src http://nginx.org/packages/ubuntu/ lucid nginx

    to the /etc/apt/sources.list file, just as was suggested by the official wiki, and then by

        sudo apt-get update
        sudo apt-get install nginx

    which installed Nginx with all the standard modules. But now I think I could make good use of one or two of the Nginx optional modules, like the gzip precompression module or some security-related one. So far, I see two ways of adding an optional module to Nginx: one is compiling and installing from the source code, and the other is described in this article. So, which of the ways should I choose so that automatic updates still run for and apply to Nginx and its optional modules? Or should I create a cron job with a command/script specific to Nginx instead of using the unattended-upgrades utility? Can I choose between full updates and security-only updates to be automatically applied to the standard and optional modules? And finally, is there a possibility to automatically update Nginx's modules on the fly (without any connections having been dropped), like the documentation suggests is possible with

        sudo kill -USR2 $( cat /run/nginx.pid )

    P.S. Actually I'm not certain whether the unattended-upgrades utility would automatically update the standard modules in the first place; not enough time has passed since Nginx was installed to say for sure.
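
    On the unattended-upgrades side, packages from the nginx.org repository are only picked up automatically if that repository's origin is in the allowed list. A sketch of what that could look like is below; the "nginx lucid" origin/archive pair is an assumption, so check what "apt-cache policy nginx" reports for the nginx.org entry before relying on it.

        // /etc/apt/apt.conf.d/50unattended-upgrades (sketch)
        Unattended-Upgrade::Allowed-Origins {
            "Ubuntu precise-security";
            "nginx lucid";
        };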

  • Install VirtualBox on Ubuntu 12.04.1 (on [Samsung] Chromebook)

    - by iphonedev7
    I have dual booted Ubuntu Linux 12.04.1 LTS on my Samsung Series 5 ChromeBook, and am trying to run/install Oracle VirtualBox (from the generic .run file downloaded from their website). However, every time I try to run it (as root from the command line), it gives me the following error:

        Please install the build and header files for your current Linux kernel. The current kernel version is 3.4.0
        Problems were found which would prevent VirtualBox from installing.

    I have tried the version from the Software Center, as well as the command line installation, both of which gave me errors based on my linux-headers/linux-kernel/linux-[kernel]-image. Here's an error I keep getting (on the command line):

        First Installation: checking all kernels...
        It is likely that 3.4.0 belongs to a chroot's host
        Building only for 3.5.0-18-generic
        Building initial module for 3.5.0-18-generic
        ERROR (dkms apport): kernel package linux-headers-3.5.0-18-generic is not supported
        Error! Bad return status for module build on kernel: 3.5.0-18-generic (x86_64)
        Consult /var/lib/dkms/virtualbox/4.1.12/build/make.log for more information.
        Setting up virtualbox-qt (4.1.12-dfsg-2ubuntu0.2) ...
        Processing triggers for libc-bin ...
        ldconfig deferred processing now taking place

    ...and one of the more cryptic errors I get when trying to start any Virtual Machine:

        Result Code: NS_ERROR_FAILURE (0x80004005)
        Component: Machine
        Interface: IMachine {5eaa9319-62fc-4b0a-843c-0cb1940f8a91}
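
    Both the .run installer and the DKMS build need kernel headers that match the kernel that is actually running (3.4.0 here, while DKMS is building for 3.5.0-18-generic). A first check, sketched below, is whether headers for `uname -r` are installable at all - on a ChromeBook's custom 3.4.0 kernel they may simply not be packaged, which is likely the crux of the problem:

        uname -r
        sudo apt-get update
        sudo apt-get install dkms build-essential linux-headers-$(uname -r)
        # then retry the module build for the installed VirtualBox packages
        sudo dkms autoinstall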

  • Keytool and SSL Apache config

    - by Safari
    I have a question about SSL certificates. I have generated a certificate using this keytool command:

        keytool -genkey -alias myalias -keyalg RSA -keysize 2048

    and I used this command to export the certificate:

        keytool -export -alias myalias -file certificate.crt

    So I have a .crt file. Now I would like to configure my Apache SSL module. I need to use keytool; at the moment I can't use OpenSSL. How can I configure the module if I have only this certificate.crt file? I see these sections in my ssl.conf:

        # Server Certificate:
        # Point SSLCertificateFile at a PEM encoded certificate. If
        # the certificate is encrypted, then you will be prompted for a
        # pass phrase. Note that a kill -HUP will prompt again. A new
        # certificate can be generated using the genkey(1) command.
        #SSLCertificateFile /etc/pki/tls/certs/localhost.crt

        # Server Private Key:
        # If the key is not combined with the certificate, use this
        # directive to point at the key file. Keep in mind that if
        # you've both a RSA and a DSA private key you can configure
        # both in parallel (to also allow the use of DSA ciphers, etc.)
        #SSLCertificateKeyFile /etc/pki/tls/private/localhost.key

        # Server Certificate Chain:
        # Point SSLCertificateChainFile at a file containing the
        # concatenation of PEM encoded CA certificates which form the
        # certificate chain for the server certificate. Alternatively
        # the referenced file can be the same as SSLCertificateFile
        # when the CA certificates are directly appended to the server
        # certificate for convinience.
        #SSLCertificateChainFile /etc/pki/tls/certs/server-chain.crt

    How can I configure the correct section?
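
    Two things worth noting, as a sketch rather than a full answer: keytool -export writes the certificate in binary DER form by default, while SSLCertificateFile expects PEM, so re-export with -rfc to get a PEM file. Also, mod_ssl needs the matching private key via SSLCertificateKeyFile, and keytool has no command to export a private key directly from the keystore, so the certificate file alone will not be enough to bring the vhost up.

        # Export the certificate PEM-encoded instead of DER:
        keytool -exportcert -alias myalias -rfc -file certificate.pem
        # Then, in ssl.conf (path is illustrative):
        #   SSLCertificateFile /etc/pki/tls/certs/certificate.pem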

  • Poor performance on IIS7, only on Windows Server 2008, fine on Windows 7?

    - by user32005
    Hi, I'm new to IIS7 but have experience with other versions. I've been working on an application that works great in a dev environment (as always), but when I push it to a Windows Server 2008/IIS7 box, performance takes a noticeable hit. The dev environment is Windows 7/IIS7. The configuration in IIS is the same on the dev box as on the server. I've tried all sorts of things to find a reason for this but I can't come to any conclusion. I've ruled out database problems on the live box, as all data is cached after the first request; I've confirmed this to be true and made sure there is no additional database traffic. I've ruled out network issues with a combination of monitoring requests with Fiddler and local debugging on the server. Whenever the code runs on the server there seems to be a performance issue. The server: Intel Core 2 Quad CPU Q6600 @ 2.40GHz with 2GB RAM. I know this is not fantastic, but I was expecting it to at least perform as well as my dev environment (which is running much more on a lower spec). CPU usage peaks at under 60%, and memory usage is less than half of what is available. I've enabled failed request tracing, and most of the time is spent in a custom HttpModule; this module handles every request, and I can't get any more detail as to what within the module may be causing the problem. Any ideas? I've been pulling my hair out for days now. Thanks

  • Faking a Linux environment without chroot

    - by Pascal
    For a university project I want to test a C++11 program on a 32-core machine. Unfortunately the machine has Ubuntu 12.04 with GCC 4.6 installed (we need GCC 4.7 because of some C++11 threading features). In such an environment I would normally run a chroot with a custom Linux (say a debootstrap with Ubuntu 12.10). Since we don't get root access on the machine, we can't use chroot. So far I have prepared a run-time environment for our code using debootstrap and compiled the code inside the debootstrap environment, then copied it onto the server (using rsync). In order to run our C++ code I set LD_LIBRARY_PATH to

        export LD_LIBRARY_PATH=~/debootstrap/usr/lib/:~/debootstrap/lib64/:~/debootstrap/usr/lib/x86_64-linux-gnu/:~/debootstrap/lib/x86_64-linux-gnu/:$LD_LIBRARY_PATH

    and so far our code seems to run. I'm however stuck with our Python code. It doesn't seem to be sufficient to set the paths manually:

        export PYTHONPATH=~/debootstrap/usr/lib/python2.7/dist-packages:~/debootstrap/usr/lib/python2.7:~/debootstrap/usr/lib/python2.7/plat-linux2:~/debootstrap/usr/lib/python2.7/lib-tk:~/debootstrap/usr/lib/python2.7/lib-dynload:~/debootstrap/usr/local/lib/python2.7/dist-packages:~/debootstrap/usr/lib/pymodules/python2.7:~/debootstrap/usr/lib/python2.7/dist-packages/PIL:~/debootstrap/usr/lib/python2.7/dist-packages/gtk-2.0:~/debootstrap/usr/lib/python2.7

    Executing our script results in:

        ImportError: No module named _path

    Is there an easier way to accomplish a "fake" chroot than just overriding and creating environment variables? Note: I need Python since we created a custom C++-Python module in order to run our tests. Maybe I should create two questions from this.
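
    One approach that sometimes works here, sketched under the assumption that the debootstrap tree contains its own complete Python installation: run the interpreter binary from inside that tree and point PYTHONHOME at it, instead of mixing the host's /usr/bin/python with the guest's library directories (the script name below is a placeholder). Tools built for exactly this "chroot without root" situation, such as proot or fakechroot, may also be worth a look.

        DEB=~/debootstrap
        export LD_LIBRARY_PATH=$DEB/usr/lib:$DEB/lib/x86_64-linux-gnu:$DEB/usr/lib/x86_64-linux-gnu:$LD_LIBRARY_PATH
        export PYTHONHOME=$DEB/usr            # tell Python where its own stdlib lives
        $DEB/usr/bin/python2.7 run_tests.py   # placeholder script name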

  • Looking for some IIS redirect help/ideas

    - by CoreyT
    Right now we have a site with a LOT of static ASP pages, such as www.site.com/123.asp. This is due to how our current site's CMS builds its pages by default. I don't have an exact count, but we have roughly 6000 ASP files in the site right now. We are in the middle of a redesign and restructuring of the site, and are looking to migrate to SEO-friendly URLs. The problem we're having right now is: what do we do to redirect the old pages to the new friendly URLs? I know how to do redirects, that is not the issue here. The problems I am coming up with right now are listed below.

    1 - Is there a limit to the number of redirects in IIS?
    2 - Would having even a few thousand redirects affect IIS performance?
    3 - My understanding is that we would not be passing along page rank to the new URLs, is that true? (Not a major question, I can ask on more SEO-oriented forums if nobody here is sure.)
    4 - Would using something like the IIS URL Rewrite 2 module for IIS 7 help us out? Or would I still need to define several thousand unique redirects in it?

    Our server right now is running Server 2003; however, in the redesign I would be open to migrating to Server 2008 R2 if there is a good case for it (i.e. the URL Rewrite module). Thanks for any guidance or help. I have been looking for a good way to do this for a while now and keep coming up with things that sound problematic and bad (such as having 6000 redirects).
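
    If the site does end up on IIS 7 with URL Rewrite 2, the usual way to handle thousands of one-to-one redirects is a single rule backed by a rewrite map, rather than thousands of individual rules. A minimal sketch is below; the map name, keys and target URLs are placeholders, and the map itself could be generated from the CMS data.

        <rewrite>
          <rewriteMaps>
            <rewriteMap name="LegacyRedirects">
              <add key="/123.asp" value="/products/some-friendly-name" />
              <add key="/124.asp" value="/about-us" />
            </rewriteMap>
          </rewriteMaps>
          <rules>
            <rule name="Legacy ASP redirects" stopProcessing="true">
              <match url=".*" />
              <conditions>
                <add input="{LegacyRedirects:{REQUEST_URI}}" pattern="(.+)" />
              </conditions>
              <action type="Redirect" url="{C:1}" redirectType="Permanent" />
            </rule>
          </rules>
        </rewrite>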

  • Users getting 'flooded' with not read notifications (NRNs) for old emails and meeting requests

    - by Exile
    I'm being placed under quite a lot of pressure from senior management over a relatively trivial issue. Basically, the vast majority of users are complaining that they receive not read notifications (NRNs) for old emails and meeting requests, in large numbers, multiple times a day. I know something strange is happening because some are delivered at silly times in the morning (i.e. 3AM or 4AM). The problem I have is that some of these NRNs are for meeting requests and messages that are 120 days old, and some users have deleted the original message, so I don't actually know whether the NRN is for an email or a meeting request. This is typical of what users receive as an NRN:

        From: Sender
        Sent: 23 March 2012 04:16
        To: Recipient
        Subject: Not read: Accepted: Status update
        Your message
        To: Sender
        Subject: Accepted: Status update
        Sent: Wednesday, November 23, 2011 8:59:00 AM (UTC) Dublin, Edinburgh, Lisbon, London
        was deleted without being read on Friday, March 23, 2012 4:15:32 AM (UTC) Dublin, Edinburgh, Lisbon, London.

        ...

        From: Sender
        Sent: 18 March 2012 01:13
        To: Recipient
        Subject: Not read: Gold delivery - Sourcing module
        Your message
        To: Sender
        Subject: Gold delivery - Sourcing module
        Sent: Friday, November 18, 2011 9:37:58 AM (UTC) Dublin, Edinburgh, Lisbon, London
        was deleted without being read on Sunday, March 18, 2012 1:12:37 AM (UTC) Dublin, Edinburgh, Lisbon, London.

    I have done a search and found the following:

        http://support.microsoft.com/kb/2544246
        http://support.microsoft.com/kb/2471964

    But we already installed 'Update Rollup 6 for Exchange Server 2010 Service Pack 1' back in December, so I am not sure what we can do to fix this.

  • Connection timed out on Node.js app running under CentOS

    - by ss1271
    I followed this tutorial to create a simple node.js app on my CentOS box. The node.js version is:

        $ node -v
        v0.10.28

    Here's my app.js:

        // Include http module,
        var http = require("http"),
            // And url module, which is very helpful in parsing request parameters.
            url = require("url");

        // show message at console
        console.log('Node.js app is running.');

        // Create the server.
        http.createServer(function (request, response) {
            request.resume();
            // Attach listener on end event.
            request.on("end", function () {
                // Parse the request for arguments and store them in _get variable.
                // This function parses the url from request and returns object representation.
                var _get = url.parse(request.url, true).query;
                // Write headers to the response.
                response.writeHead(200, { 'Content-Type': 'text/plain' });
                // Send data and end response.
                response.end('Here is your data: ' + _get['data']);
            });
        // Listen on the 8080 port.
        }).listen(8080);

    However, when I uploaded this app onto my remote server (assume the address is 123.456.78.9), I couldn't access it in my browser:

        http://123.456.78.9:8080/?data=123

    The browser returned "Error code: ERR_CONNECTION_TIMED_OUT". The same app.js code runs fine on my local machine; is there anything I am missing? I tried to ping the server and its address was reachable. Thanks.
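
    Since the code itself runs locally, a common culprit for a connection timeout on a stock CentOS box is the host firewall rather than Node. A quick check, sketched for a CentOS 6-style iptables setup (adjust for your release, and for any cloud security group in front of the server):

        curl "http://127.0.0.1:8080/?data=123"               # does it answer locally on the server?
        sudo iptables -L -n | grep 8080                      # is port 8080 accepted anywhere?
        # If the local curl works but remote access times out, open the port:
        sudo iptables -I INPUT -p tcp --dport 8080 -j ACCEPT
        sudo service iptables save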

  • PHP Page Stopped outputting content After Running "yum install php-devel" Command

    - by stwhite
    This error is bizarre. After running the "yum install php-devel" command (at the end of a long day of trying to install Facedetect and OpenCV for face detection), my site stopped functioning. The site uses MySQL and PHP. When you hit the URL, the page executes the MySQL and the PHP, but it appears to randomly stop outputting the content of the page. None of the code was changed, and the site was working flawlessly prior to running the mentioned SSH command. I do use output buffering in the site, but removing the calls to "ob_flush", "ob_end_flush" and "ob_start" didn't appear to help; I'm still having issues with the site. Any ideas what this could be? Here is the output from the terminal:

        [myserver ~]# cd Facedetect-4b1dfe1
        [myserver Facedetect-4b1dfe1]# phpize
        Configuring for:
        PHP Api Version: 20090626
        Zend Module Api No: 20090626
        Zend Extension Api No: 220090626
        [myserver Facedetect-4b1dfe1]# configure
        bash: configure: command not found
        [myserver Facedetect-4b1dfe1]# phpize && configure && make && make install
        Configuring for:
        PHP Api Version: 20090626
        Zend Module Api No: 20090626
        Zend Extension Api No: 220090626
        bash: configure: command not found
        bash: Read: command not found
        [myserver Facedetect-4b1dfe1]# make
        make: *** No targets specified and no makefile found. Stop.
        [myserver Facedetect-4b1dfe1]# yum install php5-devel
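
    As an aside on the transcript (an observation, not necessarily related to the missing page output): the "configure: command not found" lines suggest the extension's configure script was never run, because it has to be invoked from the extension directory as ./configure. A typical build sequence for a phpize-based extension looks like this:

        cd Facedetect-4b1dfe1
        phpize
        ./configure    # note the leading ./ - plain "configure" is not on $PATH
        make
        make install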

  • Dealing with upgrade of libevent on Amazon AWS

    - by Dreen
    I am building an application (in Python) on Amazon EC2 that has the following dependency chain:

        gevent-websocket ---> gevent ---> libevent

    The last one (libevent) got upgraded on Sunday and my server is now generating this error:

        (...)
        File "/usr/lib/python2.6/site-packages/gevent-0.13.7-py2.6-linux-x86_64.egg/gevent/__init__.py", line 41, in <module>
        from gevent import core
        ImportError: libevent-1.4.so.2: cannot open shared object file: No such file or directory

    Not wanting to spend much time on the issue, I tried to mitigate it by creating a symlink to an always-recent version:

        $ sudo ln -s /usr/lib64/libevent.so /usr/lib64/libevent-1.4.so.2

    But it didn't quite work:

        (...)
        File "/usr/lib/python2.6/site-packages/gevent-0.13.7-py2.6-linux-x86_64.egg/gevent/__init__.py", line 41, in <module>
        from gevent import core
        ImportError: /usr/lib/python2.6/site-packages/gevent-0.13.7-py2.6-linux-x86_64.egg/gevent/core.so: undefined symbol: current_base

    I am a bit stumped as to how to proceed. Should I create more symlinks? To what? Or is there a better way to solve this problem...

    PS. For the record I am using the Amazon AMI.
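
    The second traceback (undefined symbol: current_base) is what you would expect when gevent's compiled core.so gets loaded against a libevent 2.x that no longer exports the libevent 1.4 symbols, so adding more symlinks is unlikely to help. A sketch of the usual way out is to rebuild gevent against whatever libevent is installed now; the yum package names are assumptions for the Amazon AMI, and if 0.13.7 will not build against the newer libevent, pinning the old libevent 1.4 packages or moving to a newer gevent are the fallbacks.

        sudo yum install gcc libevent-devel python26-devel   # package names assumed
        sudo pip uninstall gevent
        sudo pip install gevent==0.13.7                      # rebuilds gevent/core.so against the installed libevent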

  • proxy pass domain FROM default apache port 80 TO nginx on another port

    - by user10580
    I'm still learning server things, so I hope the title is descriptive enough. Basically I have sub.domain.com that I want to run on nginx at port 8090. I want to leave Apache alone and have it catch all default traffic at port 80. So I am trying something with a name-based virtual host to proxy pass to sub.domain.com:8090; nothing is working yet and I have no idea what the right syntax could be. Any ideas? Most of what I found was how to pass TO Apache FROM nginx, but I want to do the opposite.

        LoadModule proxy_module modules/mod_proxy.so
        LoadModule proxy_http_module modules/mod_proxy_http.so
        <VirtualHost sub.domain.com:80>
            ProxyPreserveHost On
            ProxyRequests Off
            ServerName sub.domain.com
            DocumentRoot /home/app/public
            ServerAlias sub.domain.com
            ProxyPass / http://appname:8090/ (also tried localhost and sub.domain.com)
            ProxyPassReverse / http://appname:8090/
        </VirtualHost>

    When I do this I get:

        [warn] module proxy_module is already loaded, skipping
        [warn] module proxy_http_module is already loaded, skipping
        [error] (EAI 2)Name or service not known: Could not resolve host name sub.domain.com -- ignoring!

    And yes, the app is working (I have it running on port 80 with another subdomain) and it works at sub.domain.com:8090.
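
    For what it's worth, here is a minimal sketch of a configuration that should avoid the resolution error, assuming nginx listens on 127.0.0.1:8090 on the same machine. The "(EAI 2) Name or service not known" message comes from putting a hostname inside <VirtualHost>, which Apache tries to resolve at startup, so use a wildcard there and let ServerName do the name-based matching.

        <VirtualHost *:80>
            ServerName sub.domain.com
            ProxyPreserveHost On
            ProxyRequests Off
            ProxyPass / http://127.0.0.1:8090/
            ProxyPassReverse / http://127.0.0.1:8090/
        </VirtualHost>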

  • mod_perl custom configuration directives don't work when placed in .htaccess and there is <Location>

    - by al_l_ex
    I'm trying to complete Redmine's feature request #2693: Use Redmine.pm to authenticate for any directory (1). I don't have much knowledge of all these things and need help. Redmine uses the mod_perl module Redmine.pm for authentication & authorization. This module defines several custom configuration directives. I've successfully modified the patch from (1), and it works when all the config is in <Location>:

        <Location /digischrank/test>
            AuthType basic
            AuthName "Digischrank Test"
            Require valid-user
            PerlAccessHandler Apache::Authn::Redmine::access_handler
            PerlAuthenHandler Apache::Authn::Redmine::authen_handler
            RedmineDSN "DBI:mysql:database=SomedaTaBAse;host=localhost"
            RedmineDbUser "SoMeuSer"
            RedmineDbPass "SomePaSS"
            RedmineProject "digischrank"
        </Location>

    But when I move one of these directives (RedmineProject, see (1)) into a .htaccess file, Redmine.pm doesn't see it! I've tried changing <Location> to <Directory> and adding AllowOverride All. The directives from .htaccess are visible, but the remaining ones from <Directory> are not. I don't want to move all the directives into each .htaccess. When I add <Location> in addition to <Directory>, again only the directives from <Location> are visible. As far as I know, the directives should be merged. Am I missing something?

  • How to configure apache to basic authentication or allow when ntlm while proxying?

    - by trotzim
    Here is my setup:

        browser --- apache proxy --- ISA server --- internet

    The ISA server requires authentication. The issue is allowing HTTPS through the two proxies. A configuration that works with HTTP is something like this (yes, I don't want to use ProxyPass but ProxyRequests):

        <VirtualHost *:8080>
            ...
            SetEnv auth-proxy-chain on
            ...
            ProxyRequests On
            ProxyRemote * http://isaproxy:80
            ...
            <Proxy *>
                AuthName "ISA server auth"
                AuthType Basic
                [here a module to authenticate]
                Require valid-user
                Allow from all
            </Proxy>
            ...
        </VirtualHost>

    The user can authenticate on the Apache proxy, then the authentication chain is sent to the ISA server, which allows the HTTP traffic. But when the browser switches to HTTPS, the ISA server "speaks" NTLM and breaks the authentication on the Apache proxy. If I try to use the SSPI module (NTLM) with something like this:

        blablabla
        <Proxy *>
            AuthName "ISA server auth"
            AuthType ntlm
            [SSPI stuff]
            Require valid-user
            Allow from all
        </Proxy>

    the Apache server rejects the authentication (or the ISA server does, I don't really know). I used Wireshark to look at the nominal process while using the ISA server directly as the proxy: the first auth chain is of BASIC type, then it switches to NTLM (and the challenge continues with NTLM). How should I configure Apache so that it forwards the NTLM authentication to the ISA proxy without checking it(*)? Or so that it rewrites headers to force BASIC authentication?

    (*) It seems not to be as easy as it looks...

  • Removing HttpModule for specific path in ASP.NET / IIS 7 application?

    - by soccerdad
    Most succinctly, my question is whether an ASP.NET 4.0 app running under IIS 7 integrated mode should be able to honor this portion of my Web.config file:

        <location path="auth/windows">
          <system.webServer>
            <modules>
              <remove name="FormsAuthentication"/>
            </modules>
          </system.webServer>
        </location>

    I'm experimenting with mixed mode authentication (Windows and Forms). Using IIS Manager, I've disabled Anonymous authentication for auth/windows/winauth.aspx, which is within the location path above. I have Failed Request Tracing set up to trace various HTTP status codes, including 302s. When I request the winauth.aspx page, a 302 HTTP status code is returned. If I look at the request trace, I can see that a 401 (unauthorized) was originally generated by the AnonymousAuthenticationModule. However, the FormsAuthenticationModule converts that to a 302, which is what the browser sees. So it seems as though my attempt to remove that module from the pipeline for pages in that path isn't working, but I'm not seeing any complaints anywhere (event viewer, yellow pages of death, etc.) that would indicate it's an invalid configuration. I want the 401 returned to the browser, which presumably would include an appropriate WWW-Authenticate header. A few other points:

    a) I do have <authentication mode="Forms"> in my Web.config, and that is what the 302 redirects to;
    b) I got the "name" of the module I'm trying to remove from the inetsrv\config\applicationHost.config file;
    c) I have this element in my Web.config file: <modules runAllManagedModulesForAllRequests="false">;
    d) I tried a <location> element for the path in which I set the authentication mode to "None", but that gave a yellow exception page saying the property can't be set below the application level.

    Has anyone had any luck removing modules in this fashion?

  • IIS7 / ASP.Net 4.0 - 404 errors only for external clients

    - by dmcgiv
    Recently we moved an ASP.Net 3.5 website to 4.0 (integrated mode), and when we deployed to the client's server (Windows Server 2008 Web edition) we noticed that some .aspx pages are serving 404 errors. What is strange is that:

    1) the pages exist;
    2) if you browse from the server itself, the page is served as normal; only external clients get the 404;
    3) it's the default 404 error page, not the one configured in the web.config;
    4) it only happens for some .aspx pages, and I've not been able to establish a link between the pages that are not being served externally.

    We are using a URL rewriter module, which I first thought might be at fault, but then realised that only some of the failing pages are being rewritten. I've also tested removing the HTTP module and the problem still persists. As everything is working as expected when logged onto the server, I was thinking it may be some sort of permission issue, but why would it only affect a few pages? I turned on failed request tracing and the debug files are being generated with the expected 404 error, although at the moment I'm not sure what most of the data means, so I can't decipher what's going on internally. I'd really appreciate some help with this one.

  • disable specific PCI device at boot

    - by Rhymoid
    I've just reinstalled Debian on my Sony VAIO laptop, and my dmesg and virtual consoles all get spammed with the same messages over and over again:

        [ 59.662381] hub 1-1:1.0: unable to enumerate USB device on port 2
        [ 59.901732] usb 1-1.2: new high-speed USB device number 91 using ehci_hcd
        [ 59.917940] hub 1-1:1.0: unable to enumerate USB device on port 2
        [ 60.157256] usb 1-1.2: new high-speed USB device number 92 using ehci_hcd

    I believe these messages are coming from an internally connected USB device, most likely the webcam (since that's the only thing that doesn't work). The only way I seem to be able to shut it up (without killing my actually useful USB ports) is to disable one of the USB host controllers:

        # echo "0000:00:1a.0" > /sys/bus/pci/drivers/ehci_hcd/unbind

    This also takes down my Bluetooth interface, but I'm fine with that. I would like this setting to persist, so that I can painlessly use my virtual console again in case I need it. I want my operating system (Debian amd64) to never wake it up, but I don't know how to do this. I've tried to blacklist the module alias for the PCI device, but it seems to be ignored:

        $ cat /sys/bus/pci/devices/0000\:00\:1a.0/modalias
        pci:v00008086d00003B3Csv0000104Dsd00009071bc0Csc03i20
        $ cat /etc/modprobe.d/blacklist
        blacklist pci:v00008086d00003B3Csv0000104Dsd00009071bc0Csc03i20

    How do I ensure that this specific PCI device is never automatically activated, without disabling its driver altogether?

    -edit- The module was renamed recently; now the following works from userland:

        echo "0000:00:1a.0" > /sys/bus/pci/drivers/ehci-pci/unbind

    Still, I'm looking for a way to stop the kernel from binding that device in the first place.
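
    Not a way to prevent the initial binding, but for making the unbind persist across reboots, one low-tech sketch (assuming a Debian install that still executes /etc/rc.local at the end of boot) is to put the command that already works manually into that script; a udev rule with a matching RUN+= command would achieve the same thing slightly earlier in the boot sequence.

        #!/bin/sh -e
        # /etc/rc.local (sketch): unbind the EHCI controller that the broken
        # card reader hangs off, using the command that already works by hand.
        echo "0000:00:1a.0" > /sys/bus/pci/drivers/ehci-pci/unbind
        exit 0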

  • Using u32 together with extension headers (how to jump over them?)

    - by bortzmeyer
    I'm trying to filter on some parts of the payload, for an IPv6 packet with extension headers (for instance Destination Options). ip6tables works fine with conditions like --proto udp or --dport 109, even when the packet has extension headers; Netfilter clearly knows how to jump over Destination Options to find the UDP header. Now, I would like to use the u32 module to match a byte in the payload (say "I want the third byte of the payload to be 42"). If the packet has no extension headers, something like --u32 "48&0x0000ff00=0x2800" (48 = 40 bytes for the IPv6 header + 8 for the UDP header) works fine. If the packet has a Destination Options header, it no longer matches. I would like to write a rule that will work whether the packet has Destination Options or not. I cannot find a way to tell Netfilter to parse up to the UDP header (something it is able to do, otherwise --dport 109 would not work) and then let u32 parse the rest. I'm looking for a simple way; otherwise, as BatchyX mentions, I could write a kernel module doing what I want.

  • Reflect.Emit Dynamic Type Memory Blowup

    - by Firestrand
    Using C# 3.5 I am trying to generate dynamic types at runtime using reflection emit. I used the Dynamic Query Library sample from Microsoft to create a class generator. Everything works, my problem is that 100 generated types inflate the memory usage by approximately 25MB. This is a completely unacceptable memory profile as eventually I want to support having several hundred thousand types generated in memory. Memory profiling shows that the memory is apparently being held by various System.Reflection.Emit types and methods though I can't figure out why. I haven't found others talking about this problem so I am hoping someone in this community either knows what I am doing wrong or if this is expected behavior. Contrived Example below: using System; using System.Collections.Generic; using System.Text; using System.Reflection; using System.Reflection.Emit; namespace SmallRelfectExample { class Program { static void Main(string[] args) { int typeCount = 100; int propCount = 100; Random rand = new Random(); Type dynType = null; for (int i = 0; i < typeCount; i++) { List<DynamicProperty> dpl = new List<DynamicProperty>(propCount); for (int j = 0; j < propCount; j++) { dpl.Add(new DynamicProperty("Key" + rand.Next().ToString(), typeof(String))); } SlimClassFactory scf = new SlimClassFactory(); dynType = scf.CreateDynamicClass(dpl.ToArray(), i); //Optionally do something with the type here } Console.WriteLine("SmallRelfectExample: {0} Types generated.", typeCount); Console.ReadLine(); } } public class SlimClassFactory { private readonly ModuleBuilder module; public SlimClassFactory() { AssemblyName name = new AssemblyName("DynamicClasses"); AssemblyBuilder assembly = AppDomain.CurrentDomain.DefineDynamicAssembly(name, AssemblyBuilderAccess.Run); module = assembly.DefineDynamicModule("Module"); } public Type CreateDynamicClass(DynamicProperty[] properties, int Id) { string typeName = "DynamicClass" + Id.ToString(); TypeBuilder tb = module.DefineType(typeName, TypeAttributes.Class | TypeAttributes.Public, typeof(DynamicClass)); FieldInfo[] fields = GenerateProperties(tb, properties); GenerateEquals(tb, fields); GenerateGetHashCode(tb, fields); Type result = tb.CreateType(); return result; } static FieldInfo[] GenerateProperties(TypeBuilder tb, DynamicProperty[] properties) { FieldInfo[] fields = new FieldBuilder[properties.Length]; for (int i = 0; i < properties.Length; i++) { DynamicProperty dp = properties[i]; FieldBuilder fb = tb.DefineField("_" + dp.Name, dp.Type, FieldAttributes.Private); PropertyBuilder pb = tb.DefineProperty(dp.Name, PropertyAttributes.HasDefault, dp.Type, null); MethodBuilder mbGet = tb.DefineMethod("get_" + dp.Name, MethodAttributes.Public | MethodAttributes.SpecialName | MethodAttributes.HideBySig, dp.Type, Type.EmptyTypes); ILGenerator genGet = mbGet.GetILGenerator(); genGet.Emit(OpCodes.Ldarg_0); genGet.Emit(OpCodes.Ldfld, fb); genGet.Emit(OpCodes.Ret); MethodBuilder mbSet = tb.DefineMethod("set_" + dp.Name, MethodAttributes.Public | MethodAttributes.SpecialName | MethodAttributes.HideBySig, null, new Type[] { dp.Type }); ILGenerator genSet = mbSet.GetILGenerator(); genSet.Emit(OpCodes.Ldarg_0); genSet.Emit(OpCodes.Ldarg_1); genSet.Emit(OpCodes.Stfld, fb); genSet.Emit(OpCodes.Ret); pb.SetGetMethod(mbGet); pb.SetSetMethod(mbSet); fields[i] = fb; } return fields; } static void GenerateEquals(TypeBuilder tb, FieldInfo[] fields) { MethodBuilder mb = tb.DefineMethod("Equals", MethodAttributes.Public | MethodAttributes.ReuseSlot | MethodAttributes.Virtual | 
MethodAttributes.HideBySig, typeof(bool), new Type[] { typeof(object) }); ILGenerator gen = mb.GetILGenerator(); LocalBuilder other = gen.DeclareLocal(tb); Label next = gen.DefineLabel(); gen.Emit(OpCodes.Ldarg_1); gen.Emit(OpCodes.Isinst, tb); gen.Emit(OpCodes.Stloc, other); gen.Emit(OpCodes.Ldloc, other); gen.Emit(OpCodes.Brtrue_S, next); gen.Emit(OpCodes.Ldc_I4_0); gen.Emit(OpCodes.Ret); gen.MarkLabel(next); foreach (FieldInfo field in fields) { Type ft = field.FieldType; Type ct = typeof(EqualityComparer<>).MakeGenericType(ft); next = gen.DefineLabel(); gen.EmitCall(OpCodes.Call, ct.GetMethod("get_Default"), null); gen.Emit(OpCodes.Ldarg_0); gen.Emit(OpCodes.Ldfld, field); gen.Emit(OpCodes.Ldloc, other); gen.Emit(OpCodes.Ldfld, field); gen.EmitCall(OpCodes.Callvirt, ct.GetMethod("Equals", new Type[] { ft, ft }), null); gen.Emit(OpCodes.Brtrue_S, next); gen.Emit(OpCodes.Ldc_I4_0); gen.Emit(OpCodes.Ret); gen.MarkLabel(next); } gen.Emit(OpCodes.Ldc_I4_1); gen.Emit(OpCodes.Ret); } static void GenerateGetHashCode(TypeBuilder tb, FieldInfo[] fields) { MethodBuilder mb = tb.DefineMethod("GetHashCode", MethodAttributes.Public | MethodAttributes.ReuseSlot | MethodAttributes.Virtual | MethodAttributes.HideBySig, typeof(int), Type.EmptyTypes); ILGenerator gen = mb.GetILGenerator(); gen.Emit(OpCodes.Ldc_I4_0); foreach (FieldInfo field in fields) { Type ft = field.FieldType; Type ct = typeof(EqualityComparer<>).MakeGenericType(ft); gen.EmitCall(OpCodes.Call, ct.GetMethod("get_Default"), null); gen.Emit(OpCodes.Ldarg_0); gen.Emit(OpCodes.Ldfld, field); gen.EmitCall(OpCodes.Callvirt, ct.GetMethod("GetHashCode", new Type[] { ft }), null); gen.Emit(OpCodes.Xor); } gen.Emit(OpCodes.Ret); } } public abstract class DynamicClass { public override string ToString() { PropertyInfo[] props = GetType().GetProperties(BindingFlags.Instance | BindingFlags.Public); StringBuilder sb = new StringBuilder(); sb.Append("{"); for (int i = 0; i < props.Length; i++) { if (i > 0) sb.Append(", "); sb.Append(props[i].Name); sb.Append("="); sb.Append(props[i].GetValue(this, null)); } sb.Append("}"); return sb.ToString(); } } public class DynamicProperty { private readonly string name; private readonly Type type; public DynamicProperty(string name, Type type) { if (name == null) throw new ArgumentNullException("name"); if (type == null) throw new ArgumentNullException("type"); this.name = name; this.type = type; } public string Name { get { return name; } } public Type Type { get { return type; } } } }
