Search Results

Search found 14344 results on 574 pages for 'path'.


  • Safe place to put an executable file on Windows 7 (and Windows XP)

    - by Ricket
    I'm working on a tweak to our logon script which will copy an executable file to the local hard drive and then, using the schtasks command, schedule a task to run that executable daily. It's a standalone executable file, and when run it creates a folder in the working directory (which would be the same directory as the executable in this case). In Windows XP, of course, it can be put anywhere - I'd probably just throw it in C:\SomeRandomFolder and let it be. But this logon script also runs on Windows 7 64-bit machines, and those are trickier with UAC and all that. The user is a local administrator but UAC is enabled, so I'm pretty sure that the executable would be blocked from copying to a location like C:\ or C:\Program Files (since those seem to be at least mildly protected by UAC). The scheduled task needs to run under the user's profile, so I can't just run it with SYSTEM and ignore the UAC boundaries; I need to find a path which the user can copy into. Where can I copy this standalone executable file, so that:
    - the copy operation succeeds without a UAC prompt on Windows 7,
    - the path is either common to both WinXP and Win7 or uses environment variables, and
    - the scheduled task, running with user permissions, is able to launch the executable?
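
    A per-user location sidesteps UAC entirely: %APPDATA% resolves on both XP and 7, is writable without elevation, and a task running under the user's profile can read it. A hedged sketch of the logon-script step (the share path, folder and task names are placeholders, and schtasks option syntax differs slightly between XP and 7, so verify with schtasks /create /? on both):

      :: copy the tool somewhere the user can always write, then schedule it
      set DEST=%APPDATA%\MyDailyTool
      if not exist "%DEST%" mkdir "%DEST%"
      copy /y "\\server\netlogon\mytool.exe" "%DEST%"
      schtasks /create /tn MyDailyTool /tr "%DEST%\mytool.exe" /sc daily /st 09:00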

    Read the article

  • WinDbg Problem with ntoskrnl

    - by Wilf
    I've got a similar problem to "BSOD - Unable to verify timestamp for ntoskrnl.exe", in that I can't seem to get the correct symbols to read ntoskrnl. I've followed the advice given by BK1E, but still can't get a result. Text from the debug below:

      Loading Dump File [C:\Users\XXXX\AppData\Local\Temp\WER9D78.tmp\Mini030610-01.dmp]
      Mini Kernel Dump File: Only registers and stack trace are available

      Symbol search path is: SRV*c:\Windows\Symbols*http://msdl.microsoft.com/download/symbols
      Executable search path is:
      Unable to load image \SystemRoot\system32\ntoskrnl.exe, Win32 error 0n2
      *** WARNING: Unable to verify timestamp for ntoskrnl.exe
      *** ERROR: Module load completed but symbols could not be loaded for ntoskrnl.exe
      Windows Server 2008/Windows Vista Kernel Version 6002 (Service Pack 2) MP (4 procs) Free x64
      Product: WinNt, suite: TerminalServer SingleUserTS Personal
      Machine Name:
      Kernel base = 0xfffff800`01e59000 PsLoadedModuleList = 0xfffff800`0201ddd0
      Debug session time: Sat Mar 6 14:08:20.516 2010 (UTC + 0:00)
      System Uptime: 0 days 0:42:01.723
      Unable to load image \SystemRoot\system32\ntoskrnl.exe, Win32 error 0n2
      *** WARNING: Unable to verify timestamp for ntoskrnl.exe
      *** ERROR: Module load completed but symbols could not be loaded for ntoskrnl.exe
      Loading Kernel Symbols
      ...............................................................
      ................................................................
      .........................
      Loading User Symbols
      Loading unloaded module list
      ....
      *******************************************************************************
      *                                                                             *
      *                        Bugcheck Analysis                                    *
      *                                                                             *
      *******************************************************************************
      Use !analyze -v to get detailed debugging information.
      BugCheck A, {11, c, 0, fffff80001ec9489}
      ***** Kernel symbols are WRONG. Please fix symbols to do analysis.

    How do I fix this issue? OS is Windows Vista x64 SP2.
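
    A stale or incomplete local store under c:\Windows\Symbols can shadow the copy on the Microsoft symbol server, so one common repair is to point the debugger at a fresh downstream cache and force a reload; a minimal sketch (the cache directory is arbitrary):

      .symfix c:\symbols
      .reload /f
      !analyze -v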

    Read the article

  • apache authentication

    - by veilig
    I'm trying to set up a local webserver on my network. I want to be able to access the webserver from any machine inside my network without authenticating, and two extra domains need access to it without authenticating as well. Everyone else should have to authenticate. So far, I can get to it from inside my network, and the two extra domains can access my webserver, but everyone else is just hanging; they don't get an authentication prompt or anything. Can anyone tell me what I'm doing wrong here? This is part of my apache's sites-available file so far:

      <Directory /path/to/server/>
          Options Indexes FollowSymLinks -Multiviews
          Order Deny,Allow
          Deny from All
          Allow from 192.168
          Allow from localhost
          Allow from domain1
          Allow from domain2
          AuthType Basic
          AuthName "my authentication"
          AuthUserFile /path/to/file
          Require valid-user
          Satisfy Any
          AllowOverride All
          <Files .htaccess>
              Order Allow,Deny
              Allow from All
          </Files>
      </Directory>

    Read the article

  • Exchange 2007 OWA returns blank page with url xxxxx&reason=0

    - by Dayton Brown
    Hi All: I've just run into an issue with my Exchange OWA. It returns a blank page with the url string https://www.xxxxxxxx/&reason=0. Nothing in the logs gives me any good reasons. Here's what I've done so far:
    1) Reinstalled Exchange Rollup 7.
    2) Recreated the virtual directories.
    3) Rebooted (this was mostly a shot in the dark, but what the hell).
    Exchange via rpc/https is still working great. Anyone run into this before?

    EDIT: Here is the last snippet from the OWASetupLog; doesn't look like anything blew up.

      [09:45:36] *******************************************
      [09:45:36] * UpdateOwa.ps1: 5/27/2009 9:45:36 AM
      [09:45:40] Updating OWA on server HOMER
      [09:45:40] Finding OWA install path on the filesystem
      [09:45:40] Updating OWA to version 8.1.375.2
      [09:45:40] Copying files from 'C:\Program Files\Microsoft\Exchange Server\ClientAccess\owa\Current' to 'C:\Program Files\Microsoft\Exchange Server\ClientAccess\owa\8.1.375.2'
      [09:45:41] Getting all Exchange 2007 OWA virtual directories
      [09:45:42] Found 1 OWA virtual directories.
      [09:45:42] Updating OWA virtual directories
      [09:45:42] Processing virtual directory with metabase path 'IIS://HOMER.DG.LOCAL/W3SVC/1/ROOT/owa'.
      [09:45:42] Metabase entry 'IIS://HOMER.DG.LOCAL/W3SVC/1/ROOT/owa/8.1.375.2' exists. Removing it.
      [09:45:42] Creating metabase entry IIS://HOMER.DG.LOCAL/W3SVC/1/ROOT/owa/8.1.375.2.
      [09:45:42] Configuring metabase entry 'IIS://HOMER.DG.LOCAL/W3SVC/1/ROOT/owa/8.1.375.2'.
      [09:45:43] Saving changes to 'IIS://HOMER.DG.LOCAL/W3SVC/1/ROOT/owa/8.1.375.2'
      [09:45:43] Saving changes to 'IIS://HOMER.DG.LOCAL/W3SVC/1/ROOT/owa'
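
    A blank OWA page with &reason=0 often traces back to the virtual directory's authentication or URL settings rather than the file copy, so it can help to dump what Exchange thinks the directory looks like; a hedged sketch, run from the Exchange Management Shell:

      Get-OwaVirtualDirectory -Server HOMER | Format-List Name,*Url*,*Authentication*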

    Read the article

  • Wrong Outlook Anywhere settings

    - by Ken Guru
    Hey all. I wanted to enable NTLM authentication on Outlook Anywhere, and after running the command Set-OutlookAnywhere -IISAuthenticationMethods Basic,NTLM, my settings got changed. This is a dump from before I ran the command:

      [PS] C:\Windows\system32>Get-OutlookAnywhere

      ServerName                 : EXCAS01
      SSLOffloading              : False
      ExternalHostname           :
      ClientAuthenticationMethod : Basic
      IISAuthenticationMethods   : {Basic}
      MetabasePath               : IIS:///W3SVC/1/ROOT/Rpc
      Path                       : C:\Windows\System32\RpcProxy
      Server                     : EXCAS01
      AdminDisplayName           :
      ExchangeVersion            : 0.1 (8.0.535.0)
      Name                       : Rpc (Default Web Site)
      DistinguishedName          : CN=Rpc (Default Web Site),CN=HTTP,CN=Protocols,CN=EXCAS01,CN=Servers,CN=Exchange Administrative Group (FYDIBOHF23SPDLT),CN=Administrative Groups,CN=First Organization,CN=Microsoft Exchange,CN=Services,CN=Configuration,DC=asp,DC=ssc,DC=no
      Identity                   : EXCAS01\Rpc (Default Web Site)
      Guid                       : 289b4865-caf1-4412-95ee-6fb0dff55e8b
      ObjectCategory             : asp.ssc.no/Configuration/Schema/ms-Exch-Rpc-Http-Virtual-Directory
      ObjectClass                : {top, msExchVirtualDirectory, msExchRpcHttpVirtualDirectory}
      WhenChanged                : 05.01.2011 16:59:55
      WhenCreated                : 27.11.2009 11:20:12
      OriginatingServer          :
      IsValid                    : True

    Notice the settings for "Name", "DistinguishedName", and "Identity". After I ran the command, I ended up with this:

      [PS] C:\Windows\system32>Get-OutlookAnywhere

      ServerName                 : EXCAS01
      SSLOffloading              : False
      ExternalHostname           :
      ClientAuthenticationMethod : Basic
      IISAuthenticationMethods   : {Basic, Ntlm}
      MetabasePath               : IIS:///W3SVC/1/ROOT/Rpc
      Path                       : C:\Windows\System32\RpcProxy
      Server                     : EXCAS01
      AdminDisplayName           :
      ExchangeVersion            : 0.1 (8.0.535.0)
      Name                       : EXCAS01
      DistinguishedName          : CN=EXCAS01,CN=HTTP,CN=Protocols,CN=EXCAS01,CN=Servers,CN=Exchange Administrative Group (FYDIBOHF23SPDLT),CN=Administrative Groups,CN=First Organization,CN=Microsoft Exchange,CN=Services,CN=Configuration,DC=asp,DC=ssc,DC=no
      Identity                   : EXCAS01\EXCAS01
      Guid                       : 289b4865-caf1-4412-95ee-6fb0dff55e8b
      ObjectCategory             : asp.ssc.no/Configuration/Schema/ms-Exch-Rpc-Http-Virtual-Directory
      ObjectClass                : {top, msExchVirtualDirectory, msExchRpcHttpVirtualDirectory}
      WhenChanged                : 06.01.2011 09:43:50
      WhenCreated                : 27.11.2009 11:20:12
      OriginatingServer          : ASP-DC-2.
      IsValid                    : True

    Now the "Name", "DistinguishedName" and "Identity" have changed, and when I try to change them back by running Set-OutlookAnywhere -Identity "EXCAS01\Rpc (Default Web Site)", I get the following error:

      [PS] C:\Windows\system32>Set-OutlookAnywhere -Identity "EXCAS01\Rpc (Default Web Site)"
      Set-OutlookAnywhere : The operation could not be performed because object 'EXCAS01\Rpc (Default Web Site)' could not be found on domain controller 'ASP-DC-2.'.

    Remember, RPC over HTTP works fine with Basic authentication (even with the wrong settings), but NTLM still doesn't work. How do I change back the settings?
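
    Since the broken part is the directory object itself, one hedged way out is to tear down and recreate the Outlook Anywhere configuration, which should come back with the default 'Rpc (Default Web Site)' identity; a sketch only (the external hostname is a placeholder, and the exact parameter set varies by service pack, so check Get-Help Enable-OutlookAnywhere first):

      Disable-OutlookAnywhere -Server EXCAS01
      Enable-OutlookAnywhere -Server EXCAS01 -ExternalHostname 'mail.example.com' -ClientAuthenticationMethod Basic -SSLOffloading:$false
      Set-OutlookAnywhere -Identity 'EXCAS01\Rpc (Default Web Site)' -IISAuthenticationMethods Basic,NTLM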

    Read the article

  • Incorrect deployment of WSGI app to AWS using Elastic Beanstalk

    - by Dzmitry Zhaleznichenka
    cross-link to AWS forums

    I have developed a simple Python web service using WSGI and would like to deploy it to the AWS cloud using Elastic Beanstalk. My problem is that I cannot get all the options I specify in the Elastic Beanstalk configuration to be correctly configured in the cloud. For deployment, I use the Elastic Beanstalk CLI utility. I have run the eb init command and set up the required parameters. After this, a directory named .elasticbeanstalk was created in my source tree. It has two config files that are used for deployment, namely config and optionsettings. The latter, among other options, contains the WSGI configuration that has to update /etc/httpd/conf.d/wsgi.conf on the instances. After some of my adjustments the file has the following settings:

      [aws:elasticbeanstalk:application:environment]
      DJANGO_SETTINGS_MODULE =
      PARAM1 =
      PARAM2 =
      PARAM4 =
      PARAM3 =
      PARAM5 =

      [aws:elasticbeanstalk:container:python]
      WSGIPath = handler.py
      NumProcesses = 2
      StaticFiles = /static=
      NumThreads = 10

      [aws:elasticbeanstalk:container:python:staticfiles]
      /static = static/

      [aws:elasticbeanstalk:hostmanager]
      LogPublicationControl = false

      [aws:autoscaling:launchconfiguration]
      InstanceType = t1.micro
      EC2KeyName = zmicier-aws

      [aws:elasticbeanstalk:application]
      Application Healthcheck URL =

      [aws:autoscaling:asg]
      MaxSize = 10
      MinSize = 1
      Custom Availability Zones =

      [aws:elasticbeanstalk:monitoring]
      Automatically Terminate Unhealthy Instances = true

      [aws:elasticbeanstalk:sns:topics]
      Notification Endpoint =
      Notification Protocol = email

    It turns out that not all of these options are considered when I start the environment or update it. Thus, when I update NumThreads or NumProcesses, the respective parameters get changed in wsgi.conf as expected. But whatever I write to the WSGIPath and StaticFiles parameters, I'm not able to automatically change the respective values of wsgi.conf; they remain

      Alias /static /opt/python/current/app/
      WSGIScriptAlias / /opt/python/current/app/application.py

    which drives me nuts. Moreover, when I deploy my application using git aws.push with the following contents in the .ebextensions/python.config file, none of the options I specify in it affect the deployment.

      option_settings:
        - namespace: aws:elasticbeanstalk:container:python
          option_name: WSGIPath
          value: mysite/wsgi.py
        - namespace: aws:elasticbeanstalk:container:python
          option_name: NumProcesses
          value: 5
        - namespace: aws:elasticbeanstalk:container:python
          option_name: NumThreads
          value: 25
        - namespace: aws:elasticbeanstalk:container:python:staticfiles
          option_name: /static/
          value: app/static/

    I wonder what I should do to force AWS to use all the parameters I specify in the configuration, namely the WSGIPath and the path to my static data.
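
    For comparison, .ebextensions also accepts a shorthand map form for the same namespaces; a hedged sketch using the values from the question (worth noting that a stray tab or BOM in the YAML can make the whole file be silently ignored, so validating the file is a cheap first check):

      option_settings:
        "aws:elasticbeanstalk:container:python":
          WSGIPath: mysite/wsgi.py
          NumProcesses: 5
          NumThreads: 25
        "aws:elasticbeanstalk:container:python:staticfiles":
          "/static/": "app/static/"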

    Read the article

  • Access Windows Boot Manager selector when timeout is set to 0?

    - by Kyle Cronin
    I've installed Wubi onto a Windows Vista computer. I've also set the boot timeout to 0:

      bcdedit /timeout 0

    However, now I can't figure out how to get the menu to come up at all! I read on the internets that I had to hold F8 or Space when starting up, but they don't seem to do anything. Is there a different key or setting I've overlooked? The computer itself is a Dell that's a few months old. The keyboard is USB, but I don't think that's the problem as I can get into the BIOS just fine. Maybe I'm doing it wrong? Am I supposed to hold the keys or rapidly tap them (I've tried both)? If it helps, here's the output from bcdedit:

      C:\Windows\system32>bcdedit

      Windows Boot Manager
      --------------------
      identifier              {bootmgr}
      device                  partition=C:
      description             Windows Boot Manager
      locale                  en-US
      inherit                 {globalsettings}
      default                 {current}
      resumeobject            {5460d9d2-d391-11dc-9d9f-aba67a8797c5}
      displayorder            {current}
                              {e2484fe7-5e97-11de-84d4-0024e8074422}
      toolsdisplayorder       {memdiag}
      timeout                 0
      resume                  No

      Windows Boot Loader
      -------------------
      identifier              {current}
      device                  partition=C:
      path                    \Windows\system32\winload.exe
      description             Windows Vista
      locale                  en-US
      inherit                 {bootloadersettings}
      recoverysequence        {572bcd55-ffa7-11d9-aae0-0007e994107d}
      recoveryenabled         Yes
      osdevice                partition=C:
      systemroot              \Windows
      resumeobject            {5460d9d2-d391-11dc-9d9f-aba67a8797c5}
      nx                      OptIn

      Real-mode Boot Sector
      ---------------------
      identifier              {e2484fe7-5e97-11de-84d4-0024e8074422}
      device                  partition=C:
      path                    \ubuntu\winboot\wubildr.mbr
      description             Ubuntu
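
    With the timeout at 0 the window for catching F8/Space is effectively gone, so the simplest recovery is to put the timeout back from inside Windows and explicitly re-enable the menu; a minimal sketch, run from an elevated command prompt:

      bcdedit /timeout 30
      bcdedit /set {bootmgr} displaybootmenu yes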

    Read the article

  • "unrecognized options" while installing php

    - by user1692333
    I want to compile PHP 5.4.8 on my Mac (10.8.2), but I get some errors which I can't solve by myself, so I need your help. First I got the default PHP options with php -i | head, and after that ran this command:

      ./configure --prefix=/usr --mandir=/usr/share/man --infodir=/usr/share/info --disable-dependency-tracking --sysconfdir=/private/etc --with-apxs2=/usr/sbin/apxs --enable-cli --with-config-file-path=/etc --with-libxml-dir=/usr --with-openssl=/usr --with-kerberos=/usr --with-zlib=/usr --enable-bcmath --with-bz2=/usr --enable-calendar --disable-cgi --with-curl=/usr --enable-dba --enable-ndbm=/usr --enable-exif --enable-fpm --enable-ftp --with-gd --with-freetype-dir=/BinaryCache/apache_mod_php/apache_mod_php-79~4/Root/usr/local --with-jpeg-dir=/BinaryCache/apache_mod_php/apache_mod_php-79~4/Root/usr/local --with-png-dir=/BinaryCache/apache_mod_php/apache_mod_php-79~4/Root/usr/local --enable-gd-native-ttf --with-icu-dir=/usr --with-iodbc=/usr --with-ldap=/usr --with-ldap-sasl=/usr --with-libedit=/usr --enable-mbstring --enable-mbregex --with-mysql=mysqlnd --with-mysqli=mysqlnd --without-pear --with-pdo-mysql=mysqlnd --with-mysql-sock=/var/mysql/mysql.sock --with-readline=/usr --enable-shmop --with-snmp=/usr --enable-soap --enable-sockets --enable-sqlite-utf8 --enable-suhosin --enable-sysvmsg --enable-sysvsem --enable-sysvshm --with-tidy --enable-wddx --with-xmlrpc --with-iconv-dir=/usr --with-xsl=/usr --enable-zend-multibyte --enable-zip --with-pcre-regex --with-pgsql=/usr --with-pdo-pgsql=/usr

    But I get this error:

      config.status: creating Makefile
      config.status: creating jconfig.h
      config.status: jconfig.h is unchanged
      config.status: executing depfiles commands
      config.status: executing libtool commands
      configure: WARNING: unrecognized options: --enable-cli, --with-config-file-path, --with-libxml-dir, --with-openssl, --with-kerberos, --with-zlib, --enable-bcmath, --with-bz2, --enable-calendar, --disable-cgi, --with-curl, --enable-dba, --enable-ndbm, --enable-exif, --enable-fpm, --enable-ftp, --with-gd, --with-freetype-dir, --with-jpeg-dir, --with-png-dir, --enable-gd-native-ttf, --with-icu-dir, --with-iodbc, --with-ldap, --with-ldap-sasl, --with-libedit, --enable-mbstring, --enable-mbregex, --with-mysql, --with-mysqli, --without-pear, --with-pdo-mysql, --with-mysql-sock, --with-readline, --enable-shmop, --with-snmp, --enable-soap, --enable-sockets, --enable-sqlite-utf8, --enable-suhosin, --enable-sysvmsg, --enable-sysvsem, --enable-sysvshm, --with-tidy, --enable-wddx, --with-xmlrpc, --with-iconv-dir, --with-xsl, --enable-zend-multibyte, --enable-zip, --with-pcre-regex, --with-pgsql, --with-pdo-pgsql

    Maybe someone has some suggestions on this?
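
    One hedged observation: "creating jconfig.h" is output from libjpeg's configure script, not PHP's, which suggests the configure being run isn't the one in the PHP source tree. A cheap sanity check (the unpack path is a placeholder):

      cd ~/src/php-5.4.8   # wherever the PHP tarball was actually unpacked
      ./configure --help | grep -- '--enable-cli'   # PHP's configure lists this option; libjpeg's will not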

    Read the article

  • Win2k8R2 / IIS 7.5 - users getting 503 response, no 503 error reported in logs

    - by merk
    I've got 2 web servers with mirrored content and a load balancer sitting in front of them. Starting yesterday we've been getting people complaining about 503 errors. I can't find any 503 errors in the IIS log file. However, the server host is saying these errors are due to .NET errors in our website which are causing the app pool to recycle. They pointed out several errors in the Windows application event log which look like this:

      Log Name:      Application
      Source:        ASP.NET 4.0.30319.0
      Date:          3/31/2012 8:35:37 PM
      Event ID:      1309
      Task Category: Web Event
      Level:         Warning
      Keywords:      Classic
      User:          N/A
      Computer:      6251.local
      Description:
      Event code: 3005
      Event message: An unhandled exception has occurred.
      Event time: 3/31/2012 8:35:37 PM
      Event time (UTC): 4/1/2012 1:35:37 AM
      Event ID: e7a580c7b38545cca3416a8595408f24
      Event sequence: 97
      Event occurrence: 1
      Event detail code: 0

      Application information:
          Application domain: /LM/W3SVC/2/ROOT-1-129777167518960645
          Trust level: Full
          Application Virtual Path: /
          Application Path: C:\inetpub\wwwroot\mywebsite\
          Machine name: 6252

      Process information:
          Process ID: 20000
          Process name: w3wp.exe
          Account name: IIS APPPOOL\MyAppPool

    In particular they are saying that the account name under Process information indicates that the app pool is recycling; if the app pool were not recycling, the account name would instead be the folder where the website files are located. I checked the app pool settings - it's set to recycle every 29 hours, and the rapid-fail protection is set to the default of 5 failures in 5 minutes. But I have not seen 5 failures in the event log in that short of a time span. Can anyone help me confirm whether the 503 responses are indeed being generated by the app pools recycling? Or are these errors coming from somewhere else? My guess at the time was that their load balancer was the one actually returning the 503 error. But that was just a guess.
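
    503s produced by http.sys (for example when rapid-fail protection stops an app pool) never reach the per-site IIS logs; they land in the HTTPERR log with a reason column such as AppOffline or QueueFull. A quick hedged check on each web server:

      findstr /c:" 503 " %SystemRoot%\System32\LogFiles\HTTPERR\httperr*.log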

    Read the article

  • Regular Windows 7 BSOD with Shrew VPN client

    - by Junto
    The Shrew VPN client appears to be a good alternative to the Cisco VPN Client on x64 Windows 7. However, since installing it I've seen fairly regular BSODs. Minidump attached:

      Microsoft (R) Windows Debugger Version 6.11.0001.404 AMD64
      Copyright (c) Microsoft Corporation. All rights reserved.

      Loading Dump File [D:\Temp\020810-23431-01.dmp]
      Mini Kernel Dump File: Only registers and stack trace are available

      Symbol search path is: SRV*d:\symbols*http://msdl.microsoft.com/download/symbols
      Executable search path is:
      Windows 7 Kernel Version 7600 MP (8 procs) Free x64
      Product: WinNt, suite: TerminalServer SingleUserTS
      Built by: 7600.16385.amd64fre.win7_rtm.090713-1255
      Machine Name:
      Kernel base = 0xfffff800`0285f000 PsLoadedModuleList = 0xfffff800`02a9ce50
      Debug session time: Mon Feb 8 18:08:12.887 2010 (GMT+1)
      System Uptime: 0 days 7:52:06.120
      Loading Kernel Symbols
      ...............................................................
      ................................................................
      ..............
      Loading User Symbols
      Loading unloaded module list
      ....
      *******************************************************************************
      *                                                                             *
      *                        Bugcheck Analysis                                    *
      *                                                                             *
      *******************************************************************************
      Use !analyze -v to get detailed debugging information.
      BugCheck A, {0, 2, 0, fffff800028d50b6}
      Unable to load image \SystemRoot\system32\DRIVERS\vfilter.sys, Win32 error 0n2
      *** WARNING: Unable to verify timestamp for vfilter.sys
      *** ERROR: Module load completed but symbols could not be loaded for vfilter.sys
      Probably caused by : vfilter.sys ( vfilter+29a6 )
      Followup: MachineOwner

    The machine is a brand spanking new Dell Precision T5500. Superuser appears to have several recommendations for the Shrew VPN client as an alternative to the Cisco VPN client on 64-bit machines, so I wondered if anyone here has seen this problem and possibly found a solution? I've decided to run the VPN client under Windows compatibility mode (Vista SP2) with administrator privileges for the moment, to see if it makes a difference. Oddly, the VPN doesn't necessarily need to be connected: I've noticed the crashes when browsing with Google Chrome and Internet Explorer, usually when I open a new tab. If it carries on, I think I'll be forced to shell out the 120 EUR for the NCP client instead.
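
    The dump already fingers Shrew's packet filter driver (vfilter.sys). To gather stronger evidence before paying for a different client, Driver Verifier can be aimed at just that driver; if it really is at fault, the next crash should name it directly in the bugcheck. A hedged sketch (run elevated, reboot to activate, and expect some slowdown while it is on):

      verifier /standard /driver vfilter.sys
      shutdown /r /t 0
      rem ...after testing, switch verification back off:
      verifier /reset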

    Read the article

  • How do I create a wifi network bridge with qemu on OS X?

    - by a paid nerd
    I grabbed a small FreeBSD live CD and QEMU, and I'm trying to bridge my Mac OS X 10.8 wifi connection so that the guest OS is available on my LAN. However, the guest OS never gets a DHCP lease. This works perfectly with VirtualBox in their "bridged" network mode, so I know it can be done. I need to get it working with QEMU because VirtualBox doesn't support the architecture that I need for this project. Here's what I've done so far based on hours of googling:

    1. Installed TUNTAP for OS X.
    2. Told OS X to supposedly forward all packets, even ARP (NOTE: this doesn't appear to work):

      $ sudo sysctl -w net.inet.ip.forwarding=1
      $ sudo sysctl -w net.link.ether.inet.proxyall=1
      $ sudo sysctl -w net.inet.ip.fw.enable=1

    3. Created a bridge:

      $ sudo ifconfig bridge0 create
      $ sudo ifconfig bridge0 addm en0 addm tap0
      $ sudo ifconfig bridge0 up
      $ ifconfig
      bridge0: flags=8863<UP,BROADCAST,SMART,RUNNING,SIMPLEX,MULTICAST> mtu 1500
              ether ac:de:xx:xx:xx:xx
              Configuration:
                      priority 0 hellotime 0 fwddelay 0 maxage 0
                      ipfilter disabled flags 0x2
              member: en0 flags=3<LEARNING,DISCOVER>
                      port 4 priority 0 path cost 0
              member: tap0 flags=3<LEARNING,DISCOVER>
                      port 8 priority 0 path cost 0
      tap0: flags=8943<UP,BROADCAST,RUNNING,PROMISC,SIMPLEX,MULTICAST> mtu 1500
              ether ca:3d:xx:xx:xx:xx
              open (pid 88244)

    4. Started tcpdump with -I in the hopes that it enables promiscuous mode on the wifi device:

      $ sudo tcpdump -In -i en0

    5. Ran QEMU using the bridged network instructions:

      $ qemu-system-x86_64 -cdrom mfsbsd-9.2-RELEASE-amd64.iso -m 1024 \
            -boot d -net nic -net tap,ifname=tap0,script=no,downscript=no

    But the guest system never gets a DHCP lease. If I tcpdump -ni tap0, I see lots of traffic from the wireless network. But if I tcpdump -ni en0, I don't see any DHCP traffic from the QEMU guest OS. Any ideas?

    Update 1: I tried sudo defaults write "/Library/Preferences/SystemConfiguration/com.apple.Boot" "Kernel Flags" "net.inet.ip.scopedroute=0" and rebooting per this mailing list suggestion, but this didn't help. In fact, it made VirtualBox bridged mode stop working.

    Read the article

  • Debian, Apache2, CGI: paths issue

    - by Bubnoff
    I have a perl form email script in the server's cgi-bin directory (/usr/lib/cgi-bin). From /etc/apache2/sites-enabled/000-default:

      ScriptAlias /cgi-bin/ /usr/lib/cgi-bin/
      <Directory "/usr/lib/cgi-bin">
          AllowOverride None
          Options +ExecCGI -MultiViews +SymLinksIfOwnerMatch
          AddHandler cgi-script cgi pl
          Order allow,deny
          Allow from all
      </Directory>

    The issue is with paths. The HTML calls the script here:

      <form name="Request" method="post" action="http://server-test.local/cgi-bin/formprocessorpro.pl" onsubmit="return checkWholeForm49874(this)">

    The directory with the templates and configs is passed here:

      <input type="hidden" name="base_path" value="../contact" />

    The path to this form is http://server-test.local/formstest/contact.htm. No matter what variation I try for the base_path, I get an error from the formprocessor script that it can't find the directory:

      An error occurred when opening the Form Configuration File (../contact/form.cfg): No such file or directory.

    I need to move this script from an old server, configured by a previous sysadmin, to a new server. Since cgi-bin is automatically linked to /usr/lib/cgi-bin and linked such that the script resides at http://server-test.local/cgi-bin/formprocessorpro.pl, I would imagine that, given that the templates are in the webroot in a directory called contact, the correct path would be ../contact. Any ideas? It's been a while since I've messed with CGI. Bubnoff
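
    A hedged guess at the path problem: Apache runs a CGI with its working directory set to the script's own directory, so from /usr/lib/cgi-bin a relative "../contact" resolves to /usr/lib/contact, not to the webroot. An absolute path sidesteps the ambiguity (adjust the value to wherever the contact directory really lives):

      <!-- assumption: templates live under the webroot at /var/www/contact -->
      <input type="hidden" name="base_path" value="/var/www/contact" />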

    Read the article

  • Choose identity from ssh-agent by file name

    - by leoluk
    Problem: I have some 20-30 ssh-agent identities. Most servers refuse authentication with "Too many failed authentications", as SSH usually won't let me try 20 different keys to log in.

    At the moment, I am specifying the identity file for every host manually, using the IdentityFile and IdentitiesOnly directives, so that SSH will only try one key file, which works. Unfortunately, this stops working as soon as the original keys aren't available anymore. ssh-add -l shows me the correct paths for every key file, and they match the paths in .ssh/config, but it doesn't work. Apparently, SSH selects the identity by public key signature and not by file name, which means that the original files have to be available so that SSH can extract the public key. There are two problems with this:
    - it stops working as soon as I unplug the flash drive holding the keys
    - it renders agent forwarding useless, as the key files aren't available on the remote host

    Of course, I could extract the public keys from my identity files and store them on my computer, and on every remote computer I usually log into. This doesn't look like a desirable solution, though. What I need is a possibility to select an identity from ssh-agent by file name, so that I can easily select the right key using .ssh/config or by passing -i /path/to/original/key, even on a remote host I SSH'd into. It would be even better if I could "nickname" the keys so that I don't even have to specify the full path.
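
    One hedged trick along the lines of the "extract the public keys" idea: IdentityFile accepts a .pub file, and the public half is all ssh needs to pick the matching private key out of the agent. Since public keys aren't secret, they can live permanently on every machine, which keeps selection working with the flash drive unplugged and over agent forwarding; hostnames and file names here are placeholders:

      Host work
          HostName work.example.com
          IdentityFile ~/.ssh/work_key.pub
          IdentitiesOnly yes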

    Read the article

  • Apache virtualhost - only apply script if file does not exist in document root

    - by Brett Thomas
    Sorry for the newbie Apache question. I'm wondering if it's possible to set up the following non-conventional Apache virtualhost (for a Django app):

    - If a file exists in the DocumentRoot (/var/www), it will be shown. So if /var/www/foo.html exists, then it can be seen at www.example.com/foo.html.
    - If the file does not exist, it is served via the virtualhost. I'm using mod_wsgi with a WSGIScriptAlias directive that points to a Django app. So if there is no /var/www/bar.html, www.example.com/bar.html will be passed to the Django app, which may or may not be a 404 error.

    One option is to create an Alias for each individual file/directory, but people want to be able to post a file without adding an alias, and we want to keep the above URL structure for legacy reasons. The simplified virtualhost is:

      <VirtualHost *:80>
          ServerName www.example.com
          DocumentRoot /var/www
          WSGIScriptAlias / /path/to/django.wsgi
          <Directory /path/to/app>
              Order allow,deny
              Allow from all
          </Directory>
          Alias /hi.html /var/www/hi.html
      </VirtualHost>

    The goal is to have www.example.com/hi.html work as above, without the Alias line.
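
    A commonly cited pattern for this uses mod_rewrite to short-circuit requests that match a real file and funnel everything else into the WSGI script; a hedged sketch (paths as in the question, and note the app will see SCRIPT_NAME=/django.wsgi, which usually needs correcting in the .wsgi wrapper before Django handles the request):

      RewriteEngine On
      # serve the file straight from /var/www when it exists on disk
      RewriteCond %{DOCUMENT_ROOT}%{REQUEST_URI} -f
      RewriteRule ^ - [L]
      # otherwise hand the request to the Django application
      RewriteRule ^/(.*)$ /django.wsgi/$1 [QSA,PT,L]
      WSGIScriptAlias /django.wsgi /path/to/django.wsgi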

    Read the article

  • Windows PE network setup

    - by microchasm
    I'm walking through the following step-by-step guide for deploying Windows 7 via AIK: http://technet.microsoft.com/en-us/library/dd349348%28WS.10%29.aspx On step 4 (Capturing the Installation onto a Network Share), I run into a bit of a snag: attempting to connect to a network drive repeatedly fails. I'm using/deploying Dell Optiplex 380 64 bit machines, and the network cards seem to be really wonky. On the machine that I'm using to run AIK etc, the network driver wasn't found automatically. I had to manually go in and install the driver (which was found on the OEM installation media). I've since copied this to the USB key that I'm using for the Autounattend.xml so its handy. I think that because of this, the PE environment doesn't or can't instantiate the network device. Is there a way to install/configure the network device through the command prompt in PE? If not, I read about adding in the answer file path(s) to drivers, but if I did it this way, would I have to start the process all over again (i.e. create new Autounattend.xml with the PnPcustomizations path included, re-run the installation on the reference machine, install all the applications, re-make the PE iso, reboot into new PE iso)? Any shortcuts, direction, or advice would be appreciated. Thanks!
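
    For the immediate PE session, a NIC driver can be injected at the command prompt without rebuilding the image, which would answer the first question directly; a hedged sketch (the .inf path points at the Dell OEM driver copied to the USB key and is a placeholder):

      drvload e:\drivers\nic\netdriver.inf
      wpeutil InitializeNetwork
      ipconfig
      net use z: \\server\captures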

    Read the article

  • How can I debug solutions in Visual Studio 2010 from a network share?

    - by alastairs
    I've recently got a new Mac laptop and am running VS2010 in a Parallels virtual machine. It's mostly working out well for me, but I'm having some problems with debugging specific project types, related to the fact that the projects are being accessed via a network share:

    - Test projects don't run because the test runner can't load the tests' DLL.
    - Web projects fail to run in the Visual Studio mini web server, throwing the following exception: 'An error occurred loading a configuration file: Failed to start monitoring changes to path\to\web.config'.

    I've spent the evening trawling the web with little luck on this. After reading these two posts, I tried out the usual CasPol changes, but then found this post from one of the early VS2010 betas indicating that CasPol is no longer needed/supported in .NET 4.0 and VS2010. The network share is accessible via both a mapped drive and the UNC path. The virtual machine runs its applications under the administrator account, which appears to have all the necessary permissions on the network share to create, read, write and delete files and folders. I say "appears to have" as I can't view the Security Properties of the appropriate folder via Explorer: the Security tab just isn't present. Has anyone managed to successfully load and debug web and test projects from a network share in VS2010?
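
    In .NET 4.0 the CasPol-era machine policy is indeed gone; the closest equivalent switch is loadFromRemoteSources, which grants assemblies loaded from network locations the full trust they had under .NET 2.0-3.5 sandboxing rules. A hedged sketch of the element, which would go in the config file of whichever host process does the loading (for example devenv.exe.config or the test runner's .config; the exact host files are an assumption):

      <configuration>
        <runtime>
          <loadFromRemoteSources enabled="true"/>
        </runtime>
      </configuration>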

    Read the article

  • Permissions Required for Sharepoint Backups

    - by Wyatt Barnett
    We are in the process of rolling out an extranet for some of our partners using WSS 3.0 as the platform. We already use it internally for a variety of things, and we are using the following PowerShell script to back up the server:

      param(
          $url="http://localhost",
          $backupFolder="c:\"
      )

      [System.Reflection.Assembly]::LoadWithPartialName("Microsoft.SharePoint")

      $site = new-Object Microsoft.SharePoint.SPSite($url)
      $names = $site.WebApplication.Sites.Names

      foreach ($name in $names) {
          $n2 = ""
          if ($name.Length -eq 0) {
              $n2 = "ROOT"
          } else {
              $n2 = $name
          }
          $tmp = $n2.Replace("/", "_") + ".sbk"
          $saveas = ""
          if ($backupFolder.Length -eq 0) {
              $saveas = $tmp
          } else {
              $saveas = join-path -path $backupFolder -childPath $tmp
          }
          $site.WebApplication.Sites.Backup($name, $saveas, "true")
          write-host "$n2 backed up to $saveas."
      }

    This script works perfectly on the current installation, running as our domain backup user. On the new box, it fails when run as the backup user, claiming "The web application located at http://extranet/ could not be found." That URL does, in fact, work, so I'm fairly certain it isn't anything that dumb and rather is some permissions issue - especially because, when executed from my security context, the script works perfectly. I have tried making the backup user a farm owner, as well as adding him to the various site collection admin groups on the extranet. The one major difference between the extranet and the intranet server is that the extranet has an alternate access mapping (for https://xnet.example.com) and also uses forms authentication for that mapping. Anyhow, what permissions (or other voodoo) do I need to set up to get this script to work properly?

    Read the article

  • Mac creating files w/ wrong perms on samba share

    - by geoffjentry
    In my group, which is very heterogeneous in terms of machines, we use a Samba share to collaborate on files and such. In all but one case, it works as expected (or at least close enough). The one exception is my boss' laptop, a Snow Leopard MacBook Air. On his desktop (also Snow Leopard), if he creates a file it ends up server-side with perms of 774, but when he creates it with the Air, the perms are 644. The key problem is the lack of group write permission on the laptop-created files. What's really confusing is that everything I've looked at on the two machines is identical - same version of OS X, same version of Samba (3.0.25b-apple), same settings for the same software, etc. I can't imagine why one machine would be different from the other, but it is. To try to be complete with the description, here is the relevant portion of my smb.conf file:

      comment = my Share
      path = /path/to/share
      public = no
      writeable = yes
      printable = no
      force group = myshare
      directory mask = 0770
      create mask = 0770
      force create mode = 0770
      force directory mode = 0770

    EDIT: I looked at three more Macs and all of them worked as expected, which leaves this one laptop the true oddball. This wasn't as good a test as the others, though, as they were all running Leopard.

    Read the article

  • smbclient timing out

    - by Sam Lee
    I am trying to set up a Samba share on a CentOS machine. I want to connect to this server using smbclient on OS X. Here is what happens:

      > smbclient -L X.X.X.X
      timeout connecting to X.X.X.X:445
      timeout connecting to X.X.X.X:139
      Error connecting to X.X.X.X (Operation already in progress)
      Connection to X.X.X.X failed

    What could be going wrong? Here is my iptables dump on the CentOS machine (the server):

      > iptables -L -n
      Chain INPUT (policy ACCEPT)
      target     prot opt source               destination
      ACCEPT     all  --  0.0.0.0/0            0.0.0.0/0
      REJECT     all  --  0.0.0.0/0            127.0.0.0/8          reject-with icmp-port-unreachable
      ACCEPT     all  --  0.0.0.0/0            0.0.0.0/0            state RELATED,ESTABLISHED
      ACCEPT     tcp  --  0.0.0.0/0            0.0.0.0/0            tcp dpt:445
      ACCEPT     tcp  --  0.0.0.0/0            0.0.0.0/0            tcp dpt:3000
      ACCEPT     tcp  --  0.0.0.0/0            0.0.0.0/0            tcp dpt:80
      ACCEPT     tcp  --  0.0.0.0/0            0.0.0.0/0            tcp dpt:443
      ACCEPT     tcp  --  0.0.0.0/0            0.0.0.0/0            state NEW tcp dpt:22
      ACCEPT     icmp --  0.0.0.0/0            0.0.0.0/0            icmp type 8
      REJECT     all  --  0.0.0.0/0            0.0.0.0/0            reject-with icmp-port-unreachable
      ACCEPT     tcp  --  0.0.0.0/0            0.0.0.0/0            tcp dpt:3000

    And finally, my smb.conf:

      [global]
      workgroup = workgroup
      security = SHARE
      load printers = No
      default service = global
      path = /home
      available = No
      encrypt passwords = yes

      [share]
      writeable = yes
      admin users = myusername
      path = /home/myhome/
      force user = root
      valid users = myusername
      public = yes
      available = yes
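
    Two hedged checks that narrow this down: a timeout (rather than a refusal) usually means packets are dropped before they ever hit the REJECT rule, which with a hosted server often means 139/445 are filtered upstream by the provider; and the dump shows no ACCEPT for port 139 at all. From the CentOS box:

      # does Samba answer locally? if yes, the daemon and config are basically fine
      smbclient -L localhost -N
      # confirm smbd is listening on both SMB ports
      netstat -tlnp | grep -E ':(139|445)'
      # open 139 as well, since only 445 is currently accepted
      iptables -I INPUT -p tcp --dport 139 -j ACCEPT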

    Read the article

  • Where to get glib-config for Kubuntu?

    - by Carl Smotricz
    I'm trying to compile Midnight Commander on a Kubuntu 9.10 (Karmic) box with no root access. I've set up a directory under $HOME, downloaded the mc source package and the various stuff required for building, such as autotools. I've unpacked the CONTENTS of all those packages into this working directory such that I have the usual ./usr, ./lib, ./etc hierarchy. I manage to get configure through a lot of tests, but I can't seem to fool it into finding glib:

      checking for glib-2.0... checking for glib-config... no
      checking for glib12-config... no
      checking for glib-config... no
      checking for GLIB - version >= 1.2.6... no
      *** The glib-config script installed by GLIB could not be found
      *** If GLIB was installed in PREFIX, make sure PREFIX/bin is in
      *** your path, or set the GLIB_CONFIG environment variable to the
      *** full path to glib-config.
      configure: error: Test for glib failed. GNU Midnight Commander requires glib 1.2.6 or above.

    My system has glib installed:

      /lib/libglib-2.0.so.0
      /lib/libglib-2.0.so.0.2200.3

    ...and I've also downloaded and unpacked the glib packages into my working directory:

      libglib2.0-0_2.22.2-0ubuntu1_i386.deb
      libglib2.0-dev_2.22.2-0ubuntu1_i386.deb

    ...but still the elusive glib-config is nowhere to be found. It's not in any Debian package for Karmic, either. So I'd appreciate any help getting over this hurdle. Please note, again, that I don't have root, so I can't just merrily apt-get stuff.
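
    glib-config shipped with glib 1.2; glib 2.x is discovered through pkg-config instead, and the "checking for glib-2.0..." line suggests configure tries that route first before falling back. A hedged sketch pointing pkg-config at the .pc files unpacked from libglib2.0-dev (the working-directory path is a placeholder, and the prefix= lines inside the .pc files may need editing, since they assume /usr rather than the $HOME unpack location):

      export PKG_CONFIG_PATH=$HOME/mcbuild/usr/lib/pkgconfig:$PKG_CONFIG_PATH
      ./configure --prefix=$HOME/mcbuild/usr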

    Read the article

  • .ashx cannot find type error on IIS7 , no problems on webdev server

    - by Aivan Monceller
    I am trying to make AspNetComet.zip work on IIS7 (a simple comet chat implementation). Here is a portion of my web.config:

      <system.web>
        <httpHandlers>
          <add verb="POST" path="DefaultChannel.ashx"
               type="Server.Channels.DefaultChannelHandler, Server"/>
        </httpHandlers>
      </system.web>
      <system.webServer>
        <handlers>
          <add name="DefaultChannelHandler" verb="POST" path="DefaultChannel.ashx"
               type="Server.Channels.DefaultChannelHandler, Server"/>
        </handlers>
      </system.webServer>

    When I publish the website on my localhost IIS7 I receive an error:

      POST http://localhost/DefaultChannel.ashx 500 Internal Server Error
      Could not load type 'Server.Channels.DefaultChannelHandler'

    The target framework of this project is .NET 2.0. I tried the Classic and Integrated mode application pools for .NET 2.0 with no luck. I also tried converting the project to 4.0 and tried the Classic and Integrated mode application pools for .NET 4.0 with no luck. I also tried adding the managed handler through IIS Manager's Handler Mappings. If you have time, please download the source (184kb) to reproduce the problem on your own machine. The zip contains a VS2010 solution (.NET 2.0); you could also try to convert this to .NET 4.0. I am using Windows 7, if that matters. If you need more details, please drop your comments below. This is working fine, by the way, on my webdev server.

    Read the article

  • How do I setup routing for 2 companies with different Internet connections on the same LAN?

    - by Clint Miller
    Here's the setup: 2 companies (A & B) share office space and a LAN. A 2nd ISP is brought in, and company A wants its own Internet connection (ISP A) while company B wants its own Internet connection (ISP B). VLANs are deployed internally to separate the 2 companies' networks (company A: VLAN 1, company B: VLAN 2, shared VoIP: VLAN 3). With separate VLANs it's simple enough to use separate DHCP servers (or separate scopes on the same server) to assign the default gateway to each company's gateway for their Internet connection. Static routes can be created on each gateway to point traffic destined for the other company's VLAN or the voice VLAN so that all nodes are reachable as expected. However, I think this is a form of asymmetrical routing, right? (The path from node A1 to node B1 is not the same as the path back from node B1 to node A1.) Can I set up policy-based routing to correct this? In that case, can I assign the same default gateway to every device on all VLANs and create a routing policy on an L3 switch to look at the source address and forward traffic to the appropriate next hop? In that case, I want the routing logic to go like this:

    - If the destination address is known, forward the traffic (traffic destined for a different VLAN).
    - If the destination address is unknown, forward the traffic to ISP A's gateway if the source address is on VLAN A, or forward the traffic to ISP B's gateway if the source address is on VLAN B.

    Am I thinking about this problem in the correct way? Is there another way to solve this problem that I am overlooking?
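
    That two-rule logic maps directly onto classic policy-based routing. A hedged sketch in Cisco-style syntax as illustration (subnets and next-hop addresses are placeholders; the deny entries mean "route normally", which covers the known internal destinations):

      access-list 110 deny   ip 10.0.1.0 0.0.0.255 10.0.0.0 0.0.255.255
      access-list 110 permit ip 10.0.1.0 0.0.0.255 any
      access-list 120 deny   ip 10.0.2.0 0.0.0.255 10.0.0.0 0.0.255.255
      access-list 120 permit ip 10.0.2.0 0.0.0.255 any
      route-map DUAL-ISP permit 10
       match ip address 110
       set ip next-hop 203.0.113.1
      route-map DUAL-ISP permit 20
       match ip address 120
       set ip next-hop 198.51.100.1
      interface Vlan1
       ip policy route-map DUAL-ISP
      interface Vlan2
       ip policy route-map DUAL-ISP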

    Read the article

  • Performance: Nginx SSL slowness or just SSL slowness in general?

    - by Mauvis Ledford
    I have an Amazon Web Services setup with an Apache instance behind Nginx, with Nginx handling SSL and serving everything but the .php pages. In my ApacheBench tests I'm seeing this for my most expensive API call (which caches via Memcached):

    - 100 concurrent calls to the API call (http): 115ms (median), 260ms (max)
    - 100 concurrent calls to the API call (https): 6.1s (median), 11.9s (max)

    I've done a bit of research, disabled the most expensive SSL ciphers and enabled SSL caching (I know it doesn't help in this particular test). Can you tell me why my SSL is taking so long? I've set up a massive EC2 server with 8 CPUs, and even applying consistent load to it only brings it up to 50% total CPU. I have 8 Nginx workers set and a bunch of Apache workers. Currently this whole setup is on one EC2 box, but I plan to split it up and load balance it. There have been a few questions on this topic, but none of those answers (disable expensive ciphers, cache SSL) seem to do anything. Sample results below:

      $ ab -k -n 100 -c 100 https://URL
      This is ApacheBench, Version 2.3 <$Revision: 655654 $>
      Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
      Licensed to The Apache Software Foundation, http://www.apache.org/

      Benchmarking URL.com (be patient).....done

      Server Software:        nginx/1.0.15
      Server Hostname:        URL.com
      Server Port:            443
      SSL/TLS Protocol:       TLSv1/SSLv3,AES256-SHA,2048,256

      Document Path:          /PATH
      Document Length:        73142 bytes

      Concurrency Level:      100
      Time taken for tests:   12.204 seconds
      Complete requests:      100
      Failed requests:        0
      Write errors:           0
      Keep-Alive requests:    0
      Total transferred:      7351097 bytes
      HTML transferred:       7314200 bytes
      Requests per second:    8.19 [#/sec] (mean)
      Time per request:       12203.589 [ms] (mean)
      Time per request:       122.036 [ms] (mean, across all concurrent requests)
      Transfer rate:          588.25 [Kbytes/sec] received

      Connection Times (ms)
                    min  mean[+/-sd] median   max
      Connect:       65  168   64.1    162    268
      Processing:   385 6096 3438.6   6199  11928
      Waiting:      379 6091 3438.5   6194  11923
      Total:        449 6264 3476.4   6323  12196

      Percentage of the requests served within a certain time (ms)
        50%   6323
        66%   8244
        75%   9321
        80%   9919
        90%  11119
        95%  11720
        98%  12076
        99%  12196
       100%  12196 (longest request)
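
    One detail in that output stands out: despite ab -k, it reports "Keep-Alive requests: 0", so every request is paying a full handshake against a 2048-bit key with AES256-SHA, which is consistent with multi-second https times under concurrency. A hedged nginx sketch that attacks both angles (directive values are illustrative, not tuned):

      # reuse TLS sessions across connections
      ssl_session_cache    shared:SSL:10m;
      ssl_session_timeout  10m;
      # let clients hold connections open instead of re-handshaking
      keepalive_timeout    65;
      # prefer cheaper ciphers, if policy allows (assumption)
      ssl_ciphers          HIGH:!aNULL:!MD5;
      ssl_prefer_server_ciphers on;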

    Read the article

  • Libvirt / QEmu Machine Fails and Refuses Restart Because of Memory Allocation Errors

    - by Elmar Weber
    I'm having a problem with libvirt. On a system restart all virtual machines (VMs) are started without a problem and keep running. Then at some point in time a set of machines shuts down, according to their logs. When I try to restart a machine, I get an error that the memory allocation failed, although more than enough memory is free:

      server ~ # free
                   total       used       free     shared    buffers     cached
      Mem:      16176648   16025476     151172          0     285432     950300
      -/+ buffers/cache:   14789744    1386904
      Swap:            0          0          0

      server ~ # virsh start zimbra
      error: Failed to start domain zimbra
      error: Unable to read from monitor: Connection reset by peer

      server ~ # tail -n 4 /var/log/libvirt/qemu/zimbra.log
      LC_ALL=C PATH=/usr/local/sbin:/usr/local/bin:/usr/bin:/usr/sbin:/sbin:/bin QEMU_AUDIO_DRV=none /usr/bin/kvm -S -M pc-0.12 -enable-kvm -m 3072 -smp 2,sockets=2,cores=1,threads=1 -name zimbra -uuid d05ddb7a-83c4-a77b-d8bc-a322648520cf -nodefconfig -nodefaults -chardev socket,id=charmonitor,path=/var/lib/libvirt/qemu/zimbra.monitor,server,nowait -mon chardev=charmonitor,id=monitor,mode=control -rtc base=utc -no-shutdown -drive file=/var/lib/libvirt/images/zimbra.img,if=none,id=drive-ide0-0-0,format=raw -device ide-drive,bus=ide.0,unit=0,drive=drive-ide0-0-0,id=ide0-0-0,bootindex=1 -netdev tap,fd=19,id=hostnet0 -device rtl8139,netdev=hostnet0,id=net0,mac=52:54:00:21:a9:ad,bus=pci.0,addr=0x3 -chardev pty,id=charserial0 -device isa-serial,chardev=charserial0,id=serial0 -usb -vnc 192.168.1.2:25 -k de -vga cirrus -device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x4
      char device redirected to /dev/pts/2
      Failed to allocate 3221225472 B: Cannot allocate memory
      2012-07-06 08:42:56.076+0000: shutting down

      server ~ # uname -a
      Linux server 3.2.0-26-generic #41-Ubuntu SMP Thu Jun 14 17:49:24 UTC 2012 x86_64 x86_64 x86_64 GNU/Linux

    The system is an Ubuntu 12.04 server. The problem seems to have started with the last restart, which was due to a number of package upgrades and a kernel upgrade. I tried booting with the previous kernel; the problem persists. I was not able to pinpoint an exact event when the machines fail, but they do it at nearly the same time. The last time, a duplicity job was running; this was not always the case, however. Any suggestions on how to debug this? Best regards, elm
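
    A hedged observation from the free output: roughly 14.8 GB is already committed and there is no swap at all, so a single 3 GB qemu allocation can fail even though the page cache looks reclaimable. Giving the allocator headroom with a swap file is a low-risk first experiment (the size is illustrative):

      dd if=/dev/zero of=/swapfile bs=1M count=4096
      chmod 600 /swapfile
      mkswap /swapfile
      swapon /swapfile
      echo '/swapfile none swap sw 0 0' >> /etc/fstab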

    Read the article

  • Rebuild Fedora 19 ISO adding Kickstart for USB install

    - by dooffas
    I am attempting to edit a Fedora 19 DVD ISO to add a kickstart file. I then need this ISO burnt to a USB stick for installation. The error I get when booting is:

      Warning: Could not boot.
      Warning: /dev/root does not exist

    To try and determine which part of the process is failing, I have broken the process down into separate stages.

    Step 1: Burn the original ISO "Fedora-19-x86_64-DVD.iso" (available here) to a pendrive and see if that will install:

      dd if=/path/to/iso of=/dev/sdc

    Burning this image was successful and it installed without issue.

    Step 2: Extract the ISO, repackage it and burn it to a pendrive and see if that will install. PLEASE NOTE: The final command in this section has been broken down into multiple lines for ease of reading; in fact it was run as a single command on one line.

      mkdir -p /mnt/linux
      mount -o loop /tmp/linux-install.iso /mnt/linux
      cd /mnt/
      tar -cvf - linux | (cd /var/tmp/ && tar -xf -)
      cd /var/tmp/linux
      xorriso -as mkisofs -R -J -V "NewFedoraImage" -o ouput/file.iso \
          -b isolinux/isolinux.bin -c isolinux/boot.cat -no-emul-boot \
          -boot-load-size 4 -boot-info-table \
          -isohybrid-mbr /usr/share/syslinux/isohdpfx.bin .

    This ISO was then burnt to a pendrive as before:

      dd if=/path/to/iso of=/dev/sdc

    This ISO burnt to the pen drive with no problem and will boot. I then see the Fedora options screen. After choosing either "Install Fedora 19" or "Test this media & install Fedora 19", I then receive the errors highlighted above. This means the kickstart file is not to blame, but rather the repackaging of the ISO. Is there something I am missing in the repackaging process? Any input would be great! NOTE: If it is of any help, I attempted Step 2 with an Ubuntu server ISO and the process was successful.
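
    A hedged suspect: the Fedora installer's boot entries locate the installation media by volume label (an inst.stage2=hd:LABEL=... argument in isolinux/isolinux.cfg), so rebuilding with -V "NewFedoraImage" leaves the kernel unable to find its root filesystem, which matches the /dev/root error. Checking and reusing the original label would look like this (the exact label string should come from the grep output, not from here):

      grep -o 'LABEL=[^ ]*' isolinux/isolinux.cfg
      xorriso -as mkisofs -R -J -V "Fedora 19 x86_64" -o ouput/file.iso \
          -b isolinux/isolinux.bin -c isolinux/boot.cat -no-emul-boot \
          -boot-load-size 4 -boot-info-table \
          -isohybrid-mbr /usr/share/syslinux/isohdpfx.bin .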

    Read the article
