Search Results

Search found 22668 results on 907 pages for 'command prompt'.

Page 762/907 | < Previous Page | 758 759 760 761 762 763 764 765 766 767 768 769  | Next Page >

  • CouchDB crashes at startup when path to config file has space(s)

    - by Barry Wark
    I'm hoping to run CouchDB as a per-user Launch Agent on OS X. I'm using the couchdbx-core folder from the CouchDB Server.app as the base of my CouchDB deployment. I'd like each user to have their own couch instance (on a different port), necessitating separate config files for each instance. The logical place to put these files is in ~/Library/Application Support/ for each user.

    I can put the entire distribution in ~/Library/Application Support/my-app/couchdbx and put the .ini at ~/Library/Application Support/my-app/local.ini. Starting couchdb as bin/couchdb -a ../local.ini (from ~/Library/Application Support/my-app/couchdbx) works great. But I'd like to save every user the ~50MB couchdbx and install the couchdbx-core in a shared location (e.g. within my app's .app bundle). When I do this, the path to the per-user config file contains a space, and I get the following error when starting CouchDB:

        $ bin/couchdb -n -a ~/Library/Application\ Support/us.physion.ovation/default.ini
        {"init terminating in do_boot",{{badmatch,{error,{bad_return,{{couch_app,start,[normal,["/Users/hs/prj/build-couchdb/build/etc/couchdb/default.ini","/Users/hs/prj/build-couchdb/build/etc/couchdb/local.ini"]]},{'EXIT',{{badmatch,{error,{error,enoent}}},[{couch_server_sup,start_server,1,[{file,"/Users/hs/prj/build-couchdb/dependencies/couchdb/src/couchdb/couch_server_sup.erl"},{line,56}]},{application_master,start_it_old,4,[{file,"application_master.erl"},{line,274}]}]}}}}}},[{couch,start,0,[{file,"/Users/hs/prj/build-couchdb/dependencies/couchdb/src/couchdb/couch.erl"},{line,18}]},{init,start_it,1,[]},{init,start_em,1,[]}]}}

    Is there any way to provide a config file at the command line if that config file's path includes space(s)? Despite my best efforts in the mailing list archives, wiki and Google, I haven't been able to find a solution or a definitive "it can't work". Any help greatly appreciated.
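
    A workaround sketch, assuming the space in the path is the only thing tripping the startup script: point couchdb at a space-free symlink to the per-user config (the /tmp symlink name below is illustrative only, not a confirmed fix).

    ```sh
    # Hypothetical workaround: give couchdb a path without spaces by symlinking
    # the per-user config to a space-free location.
    ln -s ~/Library/Application\ Support/us.physion.ovation/default.ini /tmp/ovation-default.ini
    bin/couchdb -n -a /tmp/ovation-default.ini
    ```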

    Read the article

  • Google Chrome doesn't respond user actions correctly

    - by Carlos A. Junior
    Recently I changed my OS to Ubuntu 12.04 (Cinnamon, 64-bit) from Mint 13 (KDE, 64-bit), and the same bug still appears on the new installation: Google Chrome doesn't seem to refresh (repaint) the page in response to my interactions. For example, when I try to comment on a YouTube video and click on the textarea, the cursor doesn't appear inside the textarea, BUT if I switch to another tab and come back, the cursor appears. Likewise, if I start typing, the characters don't appear as I type; if I switch to another tab and return, the typed text shows up in the textarea.

    Other cases where this bug appears: modal-box links don't show the modal; forms inside modal boxes don't show typed characters; the common Disqus comment plugin doesn't work when focused.

    I have no idea what causes this (video driver, window manager, a Chrome bug? I don't know). Any idea how to solve it? Additional information:

        Google Chrome: 22.0.1229.79 (Official Build 158531)
        OS: Linux
        WebKit: 537.4 (@129177)
        JavaScript: V8 3.12.19.11
        Flash: 11.3.31.331
        User Agent: Mozilla/5.0 (X11; Linux i686) AppleWebKit/537.4 (KHTML, like Gecko) Chrome/22.0.1229.79 Safari/537.4
        Command Line: /opt/google/chrome/google-chrome --flag-switches-begin --flag-switches-end
        Executable Path: /opt/google/chrome/google-chrome
        Profile Path: /home/carlos/.config/google-chrome/Default
        Kernel version: 3.2.0-31-generic-pae, Ubuntu 12.04

    Best regards.
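
    One first diagnostic worth sketching, on the assumption that the repaints stall in GPU compositing rather than in Chrome's input handling: launch Chrome with hardware acceleration off and see whether the symptoms disappear.

    ```sh
    # If rendering behaves with the GPU path disabled, the culprit is likely the
    # video driver / compositing stack rather than Chrome itself.
    /opt/google/chrome/google-chrome --disable-gpu
    ```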

    Read the article

  • How Could My Website Be Hacked

    - by Kiewic
    Hi! I wonder how this could happen. Someone deleted the index.php files from all my domains and put up their own index.php files with the following message:

        Hacked by Z4i0n - Fatal Error - 2009 [Fatal Error Group Br]
        Site desfigurado por Z4i0n
        Somos: Elemento_pcx - s4r4d0 - Z4i0n - Belive
        Gr33tz: W4n73d - M4v3rick - Observing - MLK - l3nd4 - Soul_Fly
        2009

    My domain has many subdomains, but only the subdomains that can be accessed with a specific user were hacked; the rest weren't affected. I assumed that someone got in through SSH, because some of these subdomains are empty and Google doesn't know about them. But I checked the access log using the last command, and it didn't show any activity over SSH or FTP on the day of the attack or in the seven days before. Does anybody have an idea? I already changed my passwords. What do you recommend I do?

    UPDATE: My website is hosted at Dreamhost. I suppose they have the latest patches installed. But while I was looking into how they got into my server, I found weird things. In one of my subdomains there were many scripts to execute commands on the server, upload files, send mass emails and display compromising information. These files had been there since last December! I have deleted those files and I'm looking for more malicious files. Maybe the security hole is an old and forgotten PHP application. This application has a file-upload form protected by a password system based on sessions. One of the malicious scripts was in the uploads directory. This doesn't seem like an SQL injection attack. Thanks for your help.
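
    A sweep along these lines can help find the rest of the dropped scripts; it is only a sketch (the paths and the 30-day window are assumptions to adjust):

    ```sh
    # List PHP files modified recently, and PHP files sitting in upload
    # directories, where dropped web shells usually hide.
    find ~/example.com -name '*.php' -mtime -30 -ls
    find ~ -path '*upload*' -name '*.php' -ls
    ```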

    Read the article

  • Sendmail in Nexenta core 3.0.1

    - by maximdim
    I'm trying to set up sendmail on Nexenta Core 3.0.1 (a Solaris-based OS). All I want is to be able to send emails from that host: notifications about failures, cron job output, etc. Nexenta Core doesn't ship sendmail, so here is what I've done:

        apt-get install sunwsndmu

    Now there is a sendmail at /usr/sbin/sendmail. When I try to send email from the command line:

        $ mail maxim
        test
        .

    It doesn't give me any error, but in the log file I see:

        Dec 20 12:41:08 nas sendmail[12295]: [ID 801593 mail.info] oBKHf8u7012295: from=maxim, size=107, class=0, nrcpts=1, msgid=<201012201741.oBKHf8u7012295@nas>, relay=maxim@localhost
        Dec 20 12:41:08 nas sendmail[12295]: [ID 801593 mail.info] oBKHf8u7012295: to=maxim, ctladdr=maxim (1000/10), delay=00:00:00, xdelay=00:00:00, mailer=relay, pri=30107, relay=[127.0.0.1] [127.0.0.1], dsn=4.0.0, stat=Deferred: Connection refused by [127.0.0.1]

    So I guess I need to have an SMTP service running. How do I do that on Nexenta? svcs -a | grep sendmail doesn't return anything, and:

        # svcadm enable sendmail
        svcadm: Pattern 'sendmail' doesn't match any instances

    I'm not married to sendmail, so if there are easier ways to achieve my goal I'm open to suggestions as well. Thanks,
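
    On the Solaris family the daemon is normally managed through SMF, so a sketch worth trying, assuming the package shipped an SMF manifest at the stock location:

    ```sh
    # The sendmail instance is usually named svc:/network/smtp:sendmail.
    svcs -a | grep smtp
    svcadm enable svc:/network/smtp:sendmail
    # If no instance exists at all, the manifest may need importing first:
    svccfg import /var/svc/manifest/network/smtp-sendmail.xml
    ```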

    Read the article

  • Chmod 644 on /etc/ any way to fix?

    - by DazSlayer
    I tried to tab-complete something and I guess it wasn't there. I know you are not supposed to set the permissions on /etc/ like that, but now my permissions seem to be all messed up: whoami prints "cannot find name for user ID 1002" and I cannot cd into /etc/ anymore. passwd and shadow use 640 and 644, so I am not sure why this is a problem. Regardless, is there any way to fix this? The command run was sudo chmod 644 /etc/.

        I have no name!@vpn-server:/$ whoami
        whoami: cannot find name for user ID 1002
        I have no name!@vpn-server:/$ cd etc
        bash: cd: etc: Permission denied
        I have no name!@vpn-server:/$ ls -al etc
        d????????? ? ? ? ? ? .
        d????????? ? ? ? ? ? ..
        d????????? ? ? ? ? ? acpi
        -????????? ? ? ? ? ? adduser.conf
        I have no name!@vpn-server:/$ sudo su
        sudo: can't open /etc/sudoers: Permission denied
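
    The missing piece is the execute (search) bit: mode 644 on a directory removes the x bits, so nothing can traverse /etc at all, which is why every name lookup fails even though the files inside still have sane modes. A sketch of the repair, from a root shell (recovery mode or a live CD, since sudo itself is now broken):

    ```sh
    # Restore the stock mode on /etc; rwxr-xr-x is the standard permission.
    chmod 755 /etc
    ```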

    Read the article

  • Apache2 config variable is not defined

    - by Kurt Bourbaki
    I installed apache2 on Ubuntu 13.10. If I try to restart it using sudo /etc/init.d/apache2 restart I get this message:

        AH00558: apache2: Could not reliably determine the server's fully qualified domain name, using 127.0.1.1. Set the 'ServerName' directive globally to suppress this message

    So I read that I should edit my httpd.conf file. But since I can't find it in the /etc/apache2/ folder, I tried to locate it using this command:

        /usr/sbin/apache2 -V

    But the output I get is this:

        [Fri Nov 29 17:35:43.942472 2013] [core:warn] [pid 14655] AH00111: Config variable ${APACHE_LOCK_DIR} is not defined
        [Fri Nov 29 17:35:43.942560 2013] [core:warn] [pid 14655] AH00111: Config variable ${APACHE_PID_FILE} is not defined
        [Fri Nov 29 17:35:43.942602 2013] [core:warn] [pid 14655] AH00111: Config variable ${APACHE_RUN_USER} is not defined
        [Fri Nov 29 17:35:43.942613 2013] [core:warn] [pid 14655] AH00111: Config variable ${APACHE_RUN_GROUP} is not defined
        [Fri Nov 29 17:35:43.942627 2013] [core:warn] [pid 14655] AH00111: Config variable ${APACHE_LOG_DIR} is not defined
        [Fri Nov 29 17:35:43.947913 2013] [core:warn] [pid 14655] AH00111: Config variable ${APACHE_LOG_DIR} is not defined
        [Fri Nov 29 17:35:43.948051 2013] [core:warn] [pid 14655] AH00111: Config variable ${APACHE_LOG_DIR} is not defined
        [Fri Nov 29 17:35:43.948075 2013] [core:warn] [pid 14655] AH00111: Config variable ${APACHE_LOG_DIR} is not defined
        AH00526: Syntax error on line 74 of /etc/apache2/apache2.conf: Invalid Mutex directory in argument file:${APACHE_LOCK_DIR}

    Line 74 of /etc/apache2/apache2.conf is this:

        Mutex file:${APACHE_LOCK_DIR} default

    I gave a look at my /etc/apache2/envvars file, but I don't know what to do with it. What should I do?
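
    Those ${APACHE_*} variables are exactly what /etc/apache2/envvars exports, and the bare binary never reads that file on its own. A sketch of the two usual ways to run it with the variables in place:

    ```sh
    # apache2ctl sources /etc/apache2/envvars before invoking the binary...
    apache2ctl -V
    # ...or load the variables yourself when calling apache2 directly:
    source /etc/apache2/envvars && /usr/sbin/apache2 -V
    ```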

    Read the article

  • Postfix not sending email after upgrading to Ubuntu 12.04

    - by Luke
    After upgrading a server from Ubuntu 10.04 to 12.04, postfix is no longer sending email through sendgrid.com. I followed this guide about 6 months ago and everything had been working perfectly until the upgrade. Now it doesn't seem to be authenticating with sendgrid. This is the error I get in my syslog when I try to send an email:

        May 22 10:19:55 server postfix/smtp[3844]: 983B11C5DA: to=<to address>, relay=smtp.sendgrid.net[174.36.32.204]:587, delay=0.05, delays=0.01/0/0.04/0, dsn=5.0.0, status=bounced (host smtp.sendgrid.net[174.36.32.204] said: 550 Cannot receive from specified address <sendgrid username>: Unauthenticated senders not allowed (in reply to MAIL FROM command))

    This is from postconf -n:

        alias_database = hash:/etc/aliases
        alias_maps = hash:/etc/aliases
        append_dot_mydomain = no
        biff = no
        broken_sasl_auth_clients = no
        config_directory = /etc/postfix
        header_size_limit = 4096000
        inet_interfaces = loopback-only
        mailbox_size_limit = 0
        mydestination = localhost, mylinode.members.linode.com
        myhostname = hostname
        mynetworks = 127.0.0.0/8 [::ffff:127.0.0.0]/104 [::1]/128
        readme_directory = no
        recipient_delimiter = +
        relayhost = [smtp.sendgrid.net]:587
        smtp_sasl_auth_enable = yes
        smtp_sasl_mechanism_filter = login
        smtp_sasl_password_maps = hash:/etc/postfix/sasl/sendgrid
        smtp_sasl_security_options = noanonymous
        smtp_tls_security_level = may
        smtp_tls_session_cache_database = btree:${data_directory}/smtp_scache
        smtpd_banner = $myhostname ESMTP $mail_name (Ubuntu)
        smtpd_tls_cert_file = /etc/ssl/certs/ssl-cert-snakeoil.pem
        smtpd_tls_key_file = /etc/ssl/private/ssl-cert-snakeoil.key
        smtpd_tls_session_cache_database = btree:${data_directory}/smtpd_scache
        smtpd_use_tls = yes

    Any help would be greatly appreciated. I would be happy to post any other logs or other relevant information.
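
    "Unauthenticated senders not allowed" means postfix never offered credentials at all. One frequent post-upgrade cause, offered here as an assumption to check rather than a diagnosis: the Cyrus SASL client plugins live in a separate package that release upgrades sometimes drop.

    ```sh
    sudo apt-get install libsasl2-modules        # provides the LOGIN/PLAIN mechanisms
    sudo postmap /etc/postfix/sasl/sendgrid      # rebuild the credential hash map
    sudo service postfix restart
    ```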

    Read the article

  • Nginx ignores HTTP Authentication for WordPress login directory

    - by MrNerdy
    I am running WordPress in a subfolder of my domain for testing and development purposes on a VPS LEMP stack. In order to protect wp-login.php with an extra layer, I used HTTP authentication for the wp-admin folder. The problem is that the HTTP authentication is ignored: when wp-login.php or the wp-admin folder is requested, it goes directly to the normal WordPress login. I installed everything from the command line in the following way:

        sudo apt-get install apache2-utils
        sudo htpasswd -c /var/www/bitmall/wp-admin/.htpasswd exampleuser
        New password:
        Re-type new password:
        Adding password for user exampleuser

    My Nginx configuration file looks like this:

        server {
            listen 80;
            root /var/www;
            index index.php index.html index.htm;
            server_name example.com;

            location / {
                try_files $uri $uri/ /index.html;
            }

            location /bitmall/wp-admin/ {
                auth_basic "Restricted Section";
                auth_basic_user_file /var/www/bitmall/wp-admin/.htpasswd;
            }

            location ~ /\.ht {
                deny all;
            }

            error_page 404 /404.html;
            error_page 500 502 503 504 /50x.html;
            location = /50x.html {
                root /var/www;
            }

            # pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000
            location ~ \.php$ {
                try_files $uri =404;
                fastcgi_pass unix:/var/run/php5-fpm.sock;
                fastcgi_index index.php;
                fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
                include fastcgi_params;
            }
        }

    I would appreciate your advice on this.
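
    Two things stand out in that config. First, nginx regex locations take precedence over prefix locations, so every *.php request (including pages under wp-admin/) is handled by the `location ~ \.php$` block, which carries no auth_basic. Second, wp-login.php lives in the WordPress root, not inside wp-admin/, so it falls outside the protected prefix anyway. A sketch of a fix along those lines, placed before the existing `\.php$` block so it matches first (paths copied from the question; the rest is an assumption, not a verified config):

    ```nginx
    # Protect the login script and wp-admin, and terminate PHP *inside* the
    # protected location so the outer regex block cannot bypass the auth.
    location ~ ^/bitmall/(wp-admin/|wp-login\.php) {
        auth_basic "Restricted Section";
        auth_basic_user_file /var/www/bitmall/wp-admin/.htpasswd;

        location ~ \.php$ {
            try_files $uri =404;
            fastcgi_pass unix:/var/run/php5-fpm.sock;
            fastcgi_index index.php;
            fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
            include fastcgi_params;
        }
    }
    ```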

    Read the article

  • package issue with ubuntu 10.10 and passenger requirements

    - by user368937
    I'm trying to get Passenger working with Ubuntu 10.10 and I'm running into a problem. It seems that the passenger installer is not recognizing the virtual package. I'm getting this error:

        passenger-install-apache2-module
        ...
        * OpenSSL support for Ruby... not found
        ...

    And then it says to run this:

        * To install OpenSSL support for Ruby: Please run apt-get install libopenssl-ruby as root.

    When I run the above command, it refers to the libruby package:

        sudo apt-get install libopenssl-ruby
        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        Note, selecting 'libruby' instead of 'libopenssl-ruby'
        libruby is already the newest version.
        0 upgraded, 0 newly installed, 0 to remove and 43 not upgraded.

    When I look at the details for libruby, it says it provides libopenssl-ruby:

        Provides: libbigdecimal-ruby, libcurses-ruby, libdbm-ruby, libdl-ruby, libdrb-ruby, liberb-ruby, libgdbm-ruby, libiconv-ruby, libopenssl-ruby, libpty-ruby, libracc-runtime-ruby, libreadline-ruby, librexml-ruby, libsdbm-ruby, libstrscan-ruby, libsyslog-ruby, libtest-unit-ruby, libwebrick-ruby, libxmlrpc-ruby, libyaml-ruby, libzlib-ruby

    And when I rerun the passenger installer, it gives the same error:

        passenger-install-apache2-module
        ...
        * OpenSSL support for Ruby... not found
        ...

    Let me know if you need more info. How do I fix this?
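
    What the installer actually tests is whether the interpreter can require the openssl extension, so checking that directly narrows things down. The package names below are assumptions for 10.10's ruby1.8 packaging, not a confirmed fix:

    ```sh
    # If this raises LoadError, the openssl.so extension really is missing:
    ruby -e "require 'openssl'; puts 'OpenSSL support present'"
    # Reinstalling the interpreter packages that ship the extension may repair it:
    sudo apt-get install --reinstall ruby1.8 libruby1.8
    ```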

    Read the article

  • How can I get more low memory with the following setup:

    - by user539484
    Modules using memory below 1 MB:

        Name           Total   =  Conventional  +  Upper Memory
        --------  ---------------  ---------------  ---------------
        MSDOS      14 317    (14K)  14 317    (14K)       0     (0K)
        HIMEM       1 120     (1K)   1 120     (1K)       0     (0K)
        EMM386      3 120     (3K)   3 120     (3K)       0     (0K)
        OAKCDROM   36 064    (35K)  36 064    (35K)       0     (0K)
        POWER          80     (0K)      80     (0K)       0     (0K)
        NLSFUNC     2 784     (3K)   2 784     (3K)       0     (0K)
        COMMAND     2 928     (3K)   2 928     (3K)       0     (0K)
        MSCDEX     15 712    (15K)  15 712    (15K)       0     (0K)
        SMARTDRV   30 384    (30K)  13 984    (14K)  16 400    (16K)
        KEYB        6 752     (7K)   6 752     (7K)       0     (0K)
        MOUSE      17 296    (17K)  17 296    (17K)       0     (0K)
        DISPLAY     8 336     (8K)       0     (0K)   8 336     (8K)
        SETVER        512     (1K)       0     (0K)     512     (1K)
        DOSKEY      4 144     (4K)       0     (0K)   4 144     (4K)
        POWER       4 672     (5K)       0     (0K)   4 672     (5K)
        Free      552 944   (540K) 539 088   (526K)  13 856    (14K)

    Memory Summary:

        Type of Memory        Total   =    Used    +    Free
        ----------------  ----------  ----------  ----------
        Conventional         653 312     114 224     539 088
        Upper                 47 920      34 064      13 856
        Reserved                   0           0           0
        Extended (XMS)*   64 898 256   2 671 824  62 226 432
        ----------------  ----------  ----------  ----------
        Total memory      65 599 488   2 820 112  62 779 376
        Total under 1 MB     701 232     148 288     552 944

        Total Expanded (EMS)    33 947 648  (33 152K)
        Free Expanded (EMS)*    33 538 048  (32 752K)

        * EMM386 is using XMS memory to simulate EMS memory as needed.
          Free EMS memory may change as free XMS memory changes.

        Largest executable program size      538 976  (526K)
        Largest free upper memory block        7 488    (7K)
        MS-DOS is resident in the high memory area.

    I'm running MS-DOS 6.22 on VMware virtual hardware. This is the memory state after a MemMaker pass, so I'm looking for optimization beyond MemMaker. Note: the NLS drivers (DISPLAY, KEYB, NLSFUNC) are essential for me.

    Thanks to @mtone for the valuable reminder about MSCDEX /E, which gave me 16 KiB of low memory (see the diff)!
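
    OAKCDROM (35K) and MOUSE (17K) are the largest conventional-memory residents left, but the largest free UMB is only 7K, so more upper memory has to be claimed before anything big can be loaded high. A sketch of the classic experiment (driver paths and the B000-B7FF inclusion are assumptions; the mono video region is only safe to claim if nothing uses it):

    ```
    rem CONFIG.SYS -- claim the mono video region as extra UMB space,
    rem then try loading the CD driver high
    DEVICE=C:\DOS\EMM386.EXE RAM I=B000-B7FF
    DOS=HIGH,UMB
    DEVICEHIGH=C:\DOS\OAKCDROM.SYS /D:MSCD001

    rem AUTOEXEC.BAT -- LH pushes TSRs into upper memory if a block is free
    LH C:\DOS\MOUSE.COM
    ```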

    Read the article

  • OSX: Mimic Ubuntu IP Masquerading via iptables with ipfw

    - by Dogbert
    Good day, I am attempting to replicate a setup I have between a router and an Ubuntu PC, and have the same setup working on my MacBook (10.6, Snow Leopard).

    First, I have a router that has a USB port. When I plug it into my Ubuntu PC, it creates an RNDIS connection, allowing me to connect to the router over the USB cable via an IP connection. When I plug it into my computer via USB, it gets assigned an IP address of 172.16.84.1, and a new adapter appears when I type ifconfig. I can then SSH into the device via ssh [email protected]. When I log in to the device, I flush the routes, then create the default route:

        admin@localhost> route -f
        admin@localhost> route add default 172.16.84.2

    Now, on my Ubuntu machine, I use iptables to enable IP masquerading:

        root@Valhalla> sudo iptables -t nat -A POSTROUTING -s 172.16.84.2 -j MASQUERADE

    Once this is all done, the router has internet access over the USB connection to my PC. I am trying to replicate this exact setup on my MacBook now (Snow Leopard), but iptables does not exist for OSX, and not even a MacPorts version exists. I have scoured other questions on StackOverflow that cover the usage of the ipfw command, which apparently works as a drop-in replacement for iptables. However, the syntax is significantly different, and I'm pretty much lost. Does anyone with some ipfw experience have suggestions on how I could accomplish this and create a NAT connection via IP masquerading, like I could with my Ubuntu PC? Thank you for your assistance.
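
    On Snow Leopard the moving parts are natd (the NAT daemon) plus an ipfw divert rule, with IP forwarding switched on. A sketch under the assumption that the internet-facing interface is en1 (adjust to whichever interface actually has the uplink):

    ```sh
    sudo sysctl -w net.inet.ip.forwarding=1    # let the Mac route packets
    sudo natd -interface en1                   # masquerade out the internet side
    sudo ipfw add divert natd ip from any to any via en1
    ```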

    Read the article

  • apache-user & root access

    - by ahmedshaikhm
    I want to develop a few scripts in PHP that will invoke the following commands using the exec() function:

        service network restart
        crontab -u root /xyz/abc/fjs/crontab

    The issue is that Apache executes the script as the apache user (I am on CentOS 5); regardless of adding apache into wheel or doing the good, the bad and the ugly of group assignment, it does not run the commands mentioned above. These are my configurations.

    My /etc/sudoers:

        root ALL=(ALL) ALL
        apache ALL=(ALL) NOPASSWD: ALL
        %wheel ALL=(ALL) ALL
        %wheel ALL=(ALL) NOPASSWD: ALL

    As I've tried a couple of combinations of sudoers and httpd.conf, the recent httpd.conf looks something like this:

        User apache
        Group wheel

    My PHP script:

        exec("service network start", $a);
        print_r($a);
        exec("sudo -u root service network start", $a);
        print_r($a);

    Output:

        Array
        (
            [0] => Bringing up loopback interface: [FAILED]
            [1] => Bringing up interface eth0: [FAILED]
            [2] => Bringing up interface eth0_1: [FAILED]
            [3] => Bringing up interface eth1: [FAILED]
        )
        Array
        (
            [0] => Bringing up loopback interface: [FAILED]
            [1] => Bringing up interface eth0: [FAILED]
            [2] => Bringing up interface eth0_1: [FAILED]
            [3] => Bringing up interface eth1: [FAILED]
        )

    Without any surprise, when I invoke a network services restart via SSH, using a user similar to apache, the command executes successfully. It's all about accessing such commands via the HTTP protocol. I am sure cPanel/Plesk-type software uses something like sudoers, and what I am trying to do is basically possible. But I need your help to understand which piece I am missing. Thanks a lot!
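
    Two CentOS 5 specifics are worth checking before anything else; both are assumptions to verify, not a confirmed diagnosis. RHEL/CentOS 5 ships "Defaults requiretty" in /etc/sudoers, which makes sudo refuse any caller without a terminal (Apache has none), and Apache's PATH usually lacks /sbin, so service should be called by full path.

    ```sh
    # In /etc/sudoers (edited via visudo), relax requiretty for apache only:
    #   Defaults:apache !requiretty
    # Then, in the PHP exec() call, use the command's absolute path:
    sudo /sbin/service network restart
    ```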

    Read the article

  • Give back full control to a user on a disk from another computer

    - by Foghorn
    I have my friend's hard drive mounted externally. After messing with the permissions with TAKEOWN so I could fix some viruses, I have full control over their drive. The problem is that now it's stuck in an "autochk not found" reboot sequence. I think the problem is that the boot sector is invisible to the drive now. So my question is: how can I use icacls to give back full ownership when the user I am giving it to is not on my machine?

    I ran the TAKEOWN command from my Windows 7 laptop; their machine is a Windows XP Professional with three partitions, and I only altered the one that has the boot sector. Here are the permissions that icacls shows (where my computer is %System%, my username is ME, and the drive is E:\):

        C:\Users\ME> icacls E:\*
        E:\$RECYCLE.BIN %System%\ME:(OI)(CI)(F)
                        Mandatory Label\Low Mandatory Level:(OI)(CI)(IO)(NW)
        E:\ALLDATAW %System%\ME:(I)(OI)(CI)(F)
        E:\alrt_200.data %System%\ME:(OI)(CI)(F)
        E:\AUTOEXEC.BAT %System%\ME:(OI)(CI)(F)
        E:\AZ Commercial %System%\ME:(I)(OI)(CI)(F)
        E:\boot.ini %System%\ME:(OI)(CI)(F)
        E:\Config.Msi %System%\ME:(I)(OI)(CI)(F)
        E:\CONFIG.SYS %System%\ME:(OI)(CI)(F)
        E:\Documents and Settings %System%\ME:(I)(OI)(CI)(F)
        E:\IO.SYS %System%\ME:(OI)(CI)(F)
        E:\Mitchell1 %System%\ME:(I)(OI)(CI)(F)
        E:\MSDOS.SYS %System%\ME:(OI)(CI)(F)
        E:\MSOCache %System%\ME:(I)(OI)(CI)(F)
        E:\NTDClient.log %System%\ME:(OI)(CI)(F)
        E:\NTDETECT.COM %System%\ME:(OI)(CI)(F)
        E:\ntldr %System%\ME:(OI)(CI)(F)
        E:\pagefile.sys %System%\ME:(OI)(CI)(F)
        E:\Program Files %System%\ME:(I)(OI)(CI)(F)
        E:\RECYCLER %System%\ME:(I)(OI)(CI)(F)
        E:\RHDSetup.log %System%\ME:(OI)(CI)(F)
        E:\System Volume Information %System%\ME:(I)(OI)(CI)(F)
        E:\WINDOWS %System%\ME:(I)(OI)(CI)(F)
        Successfully processed 22 files; Failed processing 0 files
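
    Since the XP machine's local accounts are meaningless on the Win7 laptop, the usual trick is to hand ownership back to a well-known group that exists on every Windows install; a sketch (run from an elevated prompt, only on the altered partition, and offered as an assumption rather than a verified repair):

    ```bat
    rem Give ownership to the built-in Administrators group, which XP also knows:
    icacls E:\ /setowner "Administrators" /T
    rem Reset ACLs to inherited defaults so XP re-resolves them with its own accounts:
    icacls E:\* /reset /T
    ```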

    Read the article

  • supervisord launches with wrong setuid

    - by friendzis
    I am trying to test a pilot system with nginx connecting to a uwsgi-served application controlled by supervisord, running on ubuntu-server. The application is written in Python with Flask in a virtualenv, although I'm not sure if that is relevant. To test the system I have created a simple hello world with Flask. I want nginx and uwsgi both to run as the www-data user.

    If I launch uwsgi "manually" from a root shell, I can see the uwsgi processes running as the appropriate user (www-data). However, if I let supervisor launch the application, something strange happens: the uwsgi processes run under my user (friendzis). Consequently, the socket file gets created under the wrong user and nginx cannot communicate with my application. Note: the Linux server runs as a Hyper-V VM, under Windows Server 2008.

    Relevant configuration:

        [uwsgi]
        socket = /var/www/sockets/cowsay.sock
        chmod-socket = 666
        abstract-socket = false
        master = true
        workers = 2
        uid = www-data
        gid = www-data
        chdir = /var/www/cowsay/cowsay
        pp = /var/www/cowsay/cowsay
        pyhome = /var/www/cowsay
        module = cowsay
        callable = app

    supervisor:

        [program:cowsay]
        command = /var/www/cowsay/bin/uwsgi -s /var/www/sockets/cowsay.sock -w cowsay:app
        directory = /var/www/cowsay/cowsay
        user = www-data
        autostart = true
        autorestart = true
        stdout_logfile = /var/www/cowsay/log/supervisor.log
        redirect_stderr = true
        stopsignal = QUIT

    I'm sure I'm missing some minor detail, but I'm unable to notice it. Would appreciate any suggestions.
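
    Two details would explain the symptom, both stated here as likely causes rather than certainties. Supervisor's user= directive only takes effect when supervisord itself runs as root (otherwise children simply inherit the account that started supervisord, which would be friendzis), and the command line above never loads the [uwsgi] ini, so its uid/gid settings are ignored. A sketch of the program section, assuming the ini is saved at /var/www/cowsay/uwsgi.ini:

    ```ini
    [program:cowsay]
    ; point uwsgi at the ini so uid/gid (and the socket path) actually apply
    command = /var/www/cowsay/bin/uwsgi --ini /var/www/cowsay/uwsgi.ini
    directory = /var/www/cowsay/cowsay
    autostart = true
    autorestart = true
    stdout_logfile = /var/www/cowsay/log/supervisor.log
    redirect_stderr = true
    stopsignal = QUIT
    ```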

    Read the article

  • How do you install .net4 on a Server 2008 r2 machine through psremoting in powershell?

    - by Jake
    I need to write a script that installs .NET 4 remotely using PowerShell to a group of Server 2008 R2 machines. I based my script on http://social.technet.microsoft.com/Forums/en-US/winserverpowershell/thread/3045eb24-7739-4695-ae94-5aa7052119fd/.

        enter-pssession -computername localhost
        $arglist = "/q /norestart /log C:\Users\tempuser\Desktop\dotnetfx4"
        $filepath = "C:\Users\tempuser\Desktop\dotNetFx40_Full_setup.exe"
        Start-Process -FilePath $filepath -ArgumentList $arglist -Wait -PassThru

    After running the command I get the following log errors (running the same lines locally installs .NET without error):

        Action: Downloading Item
        Failed to CreateJob : hr= 0x80200014
        Action: Performing actions on all Items
        Action: Performing Action on Exe at C:\Users\tempuser\Desktop\dotnetfx4\SetupUtility.exe
        Exe (C:\Users\tempuser\Desktop\dotnetfx4\SetupUtility.exe) succeeded.
        Exe Log File: dd_SetupUtility.txt
        Action complete
        Action: ServiceControl - Stop clr_optimization_v2.0.50727_32
        ServiceControl operation succeeded!
        Action complete
        Action: ServiceControl - Stop clr_optimization_v2.0.50727_64
        ServiceControl operation succeeded!
        Action complete
        Action: Performing Action on Exe at C:\Users\tempuser\AppData\Local\Temp\Microsoft .NET Framework 4 Setup_4.0.30319\Windows6.1-KB958488-v6001-x64.msu
        Exe (C:\Users\tempuser\AppData\Local\Temp\Microsoft .NET Framework 4 Setup_4.0.30319\Windows6.1-KB958488-v6001-x64.msu) failed with 0x5 - Access is denied.
        PerformOperation on exe returned exit code 5 (translates to HRESULT = 0x5)
        Action complete
        OnFailureBehavior for this item is to Rollback.
        Action: Performing actions on all Items
        Action complete
        Action complete
        Action: Downloading http://go.microsoft.com/fwlink/?LinkId=164184&clcid=0x409 using WinHttp
        WinHttpDetectAutoProxyConfigUrl failed with error: 12180
        Unable to retrieve Proxy information although WinHttpGetIEProxyConfigForCurrentUser called succeeded
        Action complete
        C:\Users\tempuser\AppData\Local\Temp\Microsoft .NET Framework 4 Setup_4.0.30319\TMPF279.tmp.exe: Verifying signature for netfx_Core.mzz
        C:\Users\tempuser\AppData\Local\Temp\Microsoft .NET Framework 4 Setup_4.0.30319\TMPF279.tmp.exe Signature verified successfully for netfx_Core.mzz
        Action complete
        Decompression completed with code: 16389
        Decompression of payload failed: C:\Users\tempuser\AppData\Local\Temp\Microsoft .NET Framework 4 Setup_4.0.30319\netfx_Core.mzz
        Action complete
        Final Result: Installation failed with error code: (0x80074005) (Elapsed time: 0 00:00:28).

    Is there some security setting or perhaps something else I've missed?
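
    The embedded Windows6.1-KB958488 .msu failing with "Access is denied" is characteristic of installing updates from inside a remote session, where the logon token only carries network credentials. One commonly suggested workaround, sketched here as an assumption to test (the "server01" name is a placeholder): authenticate the session with CredSSP so the installer runs with full credentials.

    ```powershell
    # One-time setup on the client (the Server role must also be enabled on each target):
    Enable-WSManCredSSP -Role Client -DelegateComputer "server01" -Force
    # Then run the install in a CredSSP-authenticated session:
    Invoke-Command -ComputerName server01 -Authentication CredSSP -Credential (Get-Credential) -ScriptBlock {
        Start-Process -FilePath "C:\Users\tempuser\Desktop\dotNetFx40_Full_setup.exe" `
            -ArgumentList "/q /norestart /log C:\Users\tempuser\Desktop\dotnetfx4" -Wait -PassThru
    }
    ```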

    Read the article

  • Debugging Samba/CUPS printer sharing with Windows

    - by mrdrbob
    I've got an HP Deskjet hooked up to a Slackware 12.2 box. I've got CUPS set up and can print a test page from the box just fine. I've also got Samba set up and have a couple of file shares that work fine. I'm trying to share that HP Deskjet out via Samba, but I can't get it to show up on any Windows system. I see the server and its file shares in Windows networking, but when I open Printers, no printer shows up. Running net view \\servername from the command line lists the file shares, but no printers. Here's the pertinent part of my smb.conf, if that helps:

        [global]
        workgroup = HOMENET
        security = share
        hosts allow = 192.168.1. 192.168.2. 127.
        load printers = yes
        printcap name = cups
        printing = cups
        log file = /var/log/samba.%m
        max log size = 50

        [printers]
        comment = All Printers
        path = /var/spool/samba
        browseable = no
        public = yes
        writable = no
        printable = yes
        guest only = yes

    Can anyone give me some pointers as to where to start looking for potential causes?

    Update: Running testparm shows no errors. Here's the output (minus the file shares):

        [global]
        workgroup = HOMENET
        security = SHARE
        log file = /var/log/samba.%m
        max log size = 50
        printcap name = cups
        hosts allow = 192.168.1., 192.168.2., 127.

        [printers]
        comment = All Printers
        path = /var/spool/samba
        guest only = Yes
        guest ok = Yes
        printable = Yes
        browseable = No
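
    A couple of quick checks worth sketching before digging into Windows-side browsing (both are assumptions, not a diagnosis): make sure the spool path actually exists and is world-writable, and confirm Samba itself advertises the queue.

    ```sh
    ls -ld /var/spool/samba          # should exist, typically mode 1777
    sudo chmod 1777 /var/spool/samba
    smbclient -L localhost -N        # the Deskjet should be listed as a Printer
    ```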

    Read the article

  • Hard drive failed, suspected filesystem corruption, still cannot salvage any data from harddrive

    - by Hippy-Head
    Firstly, I am terribly sorry if this is a duplicate, but I couldn't find a similar issue to mine, so here goes.

    I have a 1TB HDD, bought around 8 months ago, used as a backup hard drive. I had not used the drive for a period of time whatsoever, and when I tried to get back to some files on it, it was completely wiped, just like that. At first it would not boot; I tried everything from command-line chkdsk and filesystem recovery software to rebuilding it. After a few attempts I managed to initialize it, which at the time felt like an achievement.

    The problems started when I tried to recover the data inside. I have used A LOT of software, free and commercial, on both Mac and Windows, with the help of cmd or Terminal commands, but no data of any kind was recovered, even after leaving it to scan thoroughly for around 9-10 hours all night, sometimes longer, with no results at all. I am somewhat desperate; I am usually good at retrieving data from corrupt hard drives, but not in this case. Call me paranoid, but I do not want to give it to someone else to fix, as I have a lot of photos and personal stuff that I do not want anyone to see.
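
    Before any further attempts, standard practice is to stop running recovery tools against the failing disk itself and instead work from a read-only image; a sketch with GNU ddrescue (the device name is a placeholder to replace with the real one):

    ```sh
    # -n skips the slow retry phase on the first pass; the log file allows resuming.
    sudo ddrescue -n /dev/sdX drive.img drive.log
    # Recovery software is then pointed at drive.img, not at the disk.
    ```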

    Read the article

  • Why Is Web Sharing Broken on My Mac?

    - by Sam Murray-Sutton
    Background: I use my Mac for web development, running copies of web sites locally. I recently installed the Snow Leopard update, which to all intents and purposes seems to have gone fine, except...

    What's not working? Web sharing; more specifically, I can't turn it on via preferences. The preference pane just hangs when I try to, so Apache doesn't start on reboot. I can start Apache by hand, but I don't know enough to either set up Apache to start with the computer or to properly fix web sharing.

    Further details: my Apache error log shows nothing from when the system boots up (as I would expect). This is the error message given when I try to start web sharing from the sharing preference pane:

        28/09/2009 10:58:05 System Preferences[834] setInetDServiceEnabled failed with 1 for org.apache.httpd

    Here are the messages given when I start Apache from the command line:

        [Mon Sep 28 10:35:53 2009] [warn] Init: Session Cache is not configured [hint: SSLSessionCache]
        [Mon Sep 28 10:35:54 2009] [warn] mod_bonjour: Skipping user 'sams' - index file /Users/sams/Sites/index.html has zero length.
        [Mon Sep 28 10:35:54 2009] [notice] Digest: generating secret for digest authentication ...
        [Mon Sep 28 10:35:54 2009] [notice] Digest: done
        [Mon Sep 28 10:35:54 2009] [notice] Apache/2.2.11 (Unix) mod_ssl/2.2.11 OpenSSL/0.9.8k DAV/2 PHP/5.3.0 Phusion_Passenger/2.2.5 configured -- resuming normal operations

    Please let me know if you need any further details on this. Any help would be greatly appreciated.

    UPDATE: I have added an answer of my own below; I was able to solve it thanks to being pointed in the right direction by the comments below. But I'm still not totally clear as to what caused the problem or how my solution addressed it, so I'm leaving the question open for now.
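
    The preference pane is essentially a front end for the org.apache.httpd launchd job, so a sketch of a manual workaround while the pane is misbehaving:

    ```sh
    # Load the job and mark it persistent (-w) so Apache also starts at boot:
    sudo launchctl load -w /System/Library/LaunchDaemons/org.apache.httpd.plist
    ```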

    Read the article

  • Change the default route without affecting existing TCP connections

    - by Patrick Horn
    Let's say I have two public network addresses on my server: one NAT through an ISP (192.168.99.0/24), and a VPN through a different ISP (192.168.1.0/24), already configured with a per-host route to the VPN server through my ISP. Here is my initial routing table; I am currently routing through my ISP on subnet 192.168.99.0/24:

        $ route -n
        Kernel IP routing table
        Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
        0.0.0.0         192.168.99.1    0.0.0.0         UG    0      0        0 eth1
        55.66.77.88     192.168.99.1    255.255.255.255 UGH   0      0        0 eth1
        192.168.99.0    0.0.0.0         255.255.255.0   U     0      0        0 eth1
        192.168.1.0     0.0.0.0         255.255.255.0   U     0      0        0 tap0

    Now, I want new TCP connections to switch to 192.168.1.0/24, so I type the following:

        $ route add -net 0.0.0.0 gw 192.168.1.1 dev tap0

    When I do this, it causes some long-standing TCP connections to hang. Is there a way to safely change the default interface for new connections while allowing existing TCP connections to use the old route (i.e. do I need to enable some sort of stateful routing table)? I am okay with a solution that only works with established TCP connections, and I don't care how hacky it is. For example, if there is a way to add temporary iptables rules for existing connections to force them over the old route. But there has to be some way to do this.

    EDIT: Just a note about a simple "route add -host ..." for existing connections: this solution would work if I were fine with leaving a subset of IPs on the old interface. However, in my application this doesn't actually solve my problem, because I want to allow new connections to come in on the new interface even if they have the same source IP. I'm now looking at using the "ip route" command to set source-based routing rules.
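
    One way to make the routing decision stateful, sketched under the assumption that conntrack marks are acceptable and table number 100 is free: leave the main table's default on eth1, put the new default in a secondary table, and steer only NEW connections into it.

    ```sh
    # New default route lives in table 100; fwmark 1 selects it.
    ip route add default via 192.168.1.1 dev tap0 table 100
    ip rule add fwmark 1 lookup 100
    # Tag only new connections, then re-apply the stored mark to every later
    # packet of the same connection; established flows stay unmarked (mark 0)
    # and keep using the old default via eth1.
    iptables -t mangle -A OUTPUT -m state --state NEW -j CONNMARK --set-mark 1
    iptables -t mangle -A OUTPUT -j CONNMARK --restore-mark
    ```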

    Read the article

  • Easiest way to send encrypted email?

    - by johnnyb10
    To comply with Massachusetts's new personal information protection law, my company needs to (among other things) ensure that anytime personal information is sent via email, it's encrypted. What is the easiest way to do this? Basically, I'm looking for something that will require the least amount of effort on the part of the recipient. If at all possible, I really want to avoid them having to download a program or go through any steps to generate a key pair, etc. So command-line GPG-type stuff is not an option. We use Exchange Server and Outlook 2007 as our email system. Is there a program that we can use to easily encrypt an email and then fax or call the recipient with a key? (Or maybe our email can include a link to our website containing our public key, that the recipient can download to decrypt the mail?) We won't have to send many of these encrypted emails, but the people who will be sending them will not be particularly technical, so I want it to be as easy as possible. Any recs for good programs would be great. Thanks.

    Read the article

  • IIS FTP error: 426 Connection closed; transfer aborted.

    - by Jiaoziren
    Hi, I have an IIS FTP server set up on Windows 2003 SP2 (S1). Every day in the early morning, a script on another server (S2) runs and initiates an FTP transfer, pulling log files from S1 to S2. The FTP client we're using is the built-in FTP.exe of Windows 2000 on S2. Recently we replaced S1 with a new server, but we kept the IP address. There are multiple IP addresses on the new S1. Ever since the new S1 was put in place, '426 Connection closed; transfer aborted.' errors have been occurring randomly. The log indicates that the transfer starts OK but the file cannot be transferred completely, as per the log below:

        mget access*.log
        200 Type set to A.
        200 PORT command successful.
        150 Opening ASCII mode data connection for access02232010.log(205777167 bytes).
        426 Connection closed; transfer aborted.
        ftp: 20454832 bytes received in 283.95Seconds 72.04Kbytes/sec.

    The firewall monitor suggested that the connection was set up in passive mode; however, I've been told that MS FTP.exe doesn't support passive mode, though I can see the response 'entering passive mode' from the server when typing in 'quote pasv'. My network admin has told me to try the transfer in active mode, but I don't know how to enable active mode on the client side. It's getting really frustrating. I wish someone here with the right knowledge/experience could shed some light. Cheers.

    Read the article

  • Spurious alleged file corruption on Windows 7

    - by Johannes Rössel
    Recently my laptop sometimes warns about corrupted files on the hard drive (a Samsung SSD PB22-JS3 TM). So far this has only happened when updating (or checking out) an SVN repository with either TortoiseSVN or the command-line Subversion client. The fun thing is that the corrupted file has always been a .svn directory (although the directory entry may contain files in that directory too, if they're small enough, which should be the case with SVN). However, when looking into the warned-about directory I notice nothing strange or unusual, I get no more warnings about it, and another try at updating the working copy works (well, mostly; sometimes it does it again, albeit with a different directory). SVN stops updating once that error occurs; TortoiseSVN even shows an appropriate error message.

    Since the laptop is only a few months old, I doubt the SSD is failing already; five months of normal usage shouldn't be too surprising. Also, it has so far occurred only with SVN updates on a large repository. Maybe that's too many writes in a short time and some part between the software and the hardware doesn't quite catch up fast enough, or so? I don't know enough about this to actually make an informed guess here. Anyone know what's up here?

    ETA: Note to add: I've run chkdsk (it seems to schedule itself anyway when this happens) and it didn't find anything out of the ordinary.

    Read the article

  • Umount stale glusterfs partition

    - by Khaled
    I am using glusterfs on several Ubuntu servers; two of them are running glusterfs servers in replication mode. Without any clear error, the glusterfs partition became stale, and the system shows this error when I try to access the stale partition:

        Transport endpoint is not connected

    Also, when running ls -l on the parent folder I get:

        d????????? ? ? ? ? ? myfolder

    I tried all the commands I could find to umount this partition, but I could not get it done:

        umount -l /path/to/mount/point
        umount -f /path/to/mount/point

    Using the fuser command to show processes accessing this folder did not work either. Unloading the fuse kernel module cannot be done, as it is clear from the kernel config that fuse is built into the kernel and not a loadable module; I found this line in /boot/config-2.6.32-24-server:

        CONFIG_FUSE_FS=y

    I have been left with two options:

        1. Reboot the system.
        2. Create another mount point like myfolder2 and mount this again using sudo glusterfs -f /etc/glustefs/glusterfs.vol /path/to/folder2.

    Of course, I have chosen to go with option 2. Has anyone faced such an issue before? Does anyone have a better solution for such a case?
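
    For a FUSE mount specifically, there is one more tool to try before falling back on a reboot; a sketch, assuming the fusermount helper from the FUSE userspace tools is installed (it works independently of whether the kernel side is built in):

    ```sh
    # -u unmounts, -z detaches lazily even while the endpoint is wedged
    sudo fusermount -uz /path/to/mount/point
    ```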

    Read the article

  • Can't log in after restoring from Time Machine

    - by Jay Conrod
    My friend uses a MacBook Pro with Snow Leopard 10.6.2. She uses both FileVault and Time Machine to preserve her data. Recently, she suffered a hard disk failure. After restoring from Time Machine using the Snow Leopard install disk, she gets the following error when logging in:

        You are unable to log in to the FileVault user account at this time. Logging into the account failed because an error occurred.

    When examining the file system through Terminal, I noticed her home directory is not present: there is no /Users/username directory, nor the FileVault .sparsebundle file that's supposed to be there. When using Time Machine.app on /Users, it appears as if her home directory was never there. Additionally, I did a search on the backup disk with the following command:

        sudo find /Volumes/backup -name '*.sparsebundle'

    No results. She told me that after working with some large data files, Time Machine would come on, and it sounded like it was transferring a lot of data to the hard disk. Time Machine must have been doing something, right? How can we recover her files? Are they still there?
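
    One thing worth ruling out, offered as an assumption: home directories encrypted by FileVault on earlier OS X releases were stored as .sparseimage rather than .sparsebundle, so widening the search costs nothing.

    ```sh
    sudo find /Volumes/backup \( -name '*.sparsebundle' -o -name '*.sparseimage' \) -print
    ```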

    Read the article

  • Parental Controls in Ubuntu - per user

    - by Hamish Downer
    I would like to set up parental controls on Ubuntu for a friend of mine. I want the child's user account to have the controls applied, while the parent's user account is unrestricted. To be clear, they are sharing one computer, so a router-based solution won't help. And I would like a set of step-by-step instructions to do this; just one way of doing it. I'm an experienced Ubuntu user, happy at the command line. I've spent quite some time googling for this along the way.

    I hope that the GChildCare project will eventually make this easy, but it is not ready yet. In the meantime, the WebContentControl GUI provides a way of managing parental controls, but applies them to every user on the computer (easy WebContentControl install instructions and detailed instructions, discussion and related links on ubuntuforums). The ubuntuforums post has a FAQ stating that user-specific configuration is not possible with WebContentControl, and it then provides 3 links the author used to do it himself. But they are far from step-by-step instructions: one thread of notes along the way linking to an article about squid and dansguardian, and then two dansguardian articles that are somewhat in-depth.

    So does anyone know of an existing guide to setting up parental controls on Ubuntu with some users not affected? If no one has come up with an answer after a little bit, I'll set up a community wiki answer so we can come up with a guide.
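
    The squid/dansguardian articles mentioned above generally hinge on one iptables feature that makes per-user filtering possible; a sketch of the idea (the "child" user name and the DansGuardian port 8080 are assumptions):

    ```sh
    # Redirect only the child account's outbound web traffic into the local
    # DansGuardian/Squid chain; other users' traffic is untouched.
    sudo iptables -t nat -A OUTPUT -p tcp --dport 80 \
        -m owner --uid-owner child -j REDIRECT --to-port 8080
    ```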

    Read the article
