Search Results

Search found 12569 results on 503 pages for 'root plist'.

Page 405 of 503

  • wget is working only when used with sudo

    - by Yusuf
    I've been seeing some strange behaviour from wget since yesterday. I can download files with sudo wget, but when I try the same file with plain wget, I get this error:

      yusufh@ubuntu-yuh:~$ wget http://www.kegel.com/wine/winetricks
      --2010-12-17 09:34:11-- http://www.kegel.com/wine/winetricks
      Resolving www.kegel.com... failed: Name or service not known.
      wget: unable to resolve host address `www.kegel.com'

    and with sudo wget:

      yusufh@ubuntu-yuh:~$ sudo wget http://www.kegel.com/wine/winetricks
      --2010-12-17 09:35:37-- http://www.kegel.com/wine/winetricks
      Connecting to 127.0.0.1:5865... connected.
      Proxy request sent, awaiting response... 200 OK
      Length: 190672 (186K) [text/plain]
      Saving to: `winetricks'
      100%[=========================================>] 190,672 --.-K/s in 0.03s
      2010-12-17 09:35:37 (6.92 MB/s) - `winetricks' saved [190672/190672]

    Update, after the comments below: I can use Google Chrome or Firefox perfectly well without running them as root. I use ntlmaps to connect to the office proxy, so clients need to use 127.0.0.1:5865 as their proxy. Result of env | grep -i proxy:

      NO_PROXY=localhost,127.0.0.0/8,*.local,
      http_proxy=127.0.0.1:5865
      ftp_proxy=127.0.0.1:5865
      all_proxy=socks://127.0.0.1:5865/
      ALL_PROXY=socks://127.0.0.1:5865/
      https_proxy=127.0.0.1:5865
      no_proxy=localhost,127.0.0.0/8,*.local

    while sudo env | grep -i proxy is empty! Help!
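
    A minimal diagnostic sketch, assuming the only proxy in play is the ntlmaps instance on 127.0.0.1:5865: sudo strips the proxy environment variables yet root's wget still reaches the proxy, which points at a wgetrc file, and the user's variables lack the http:// scheme that wget's documentation uses for these settings.

      # where does root's wget get its proxy? sudo strips the env vars,
      # so a working "sudo wget" suggests a proxy line in a wgetrc file:
      sudo grep -i proxy /etc/wgetrc /root/.wgetrc 2>/dev/null
      # for the unprivileged user, spell the proxy out with an explicit scheme:
      export http_proxy=http://127.0.0.1:5865/
      wget http://www.kegel.com/wine/winetricks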

    Read the article

  • Shell not finding binary when attempting to execute it (it's _definitely_ there)

    - by eegg
    I have a specific set of binaries installed at: ~/.GutenMark/binary/<binaries...>  These were previously working correctly, but for seemingly no reason, when I attempt to execute them the shell claims not to find them:

      james@anubis:~/.GutenMark/binary$ ls -al
      ...
      -rwxr-xr-x 1 james james 2979036 2009-05-10 13:34 GUItenMark
      ...
      -rwxrwxrwx 1 james james   76952 2009-05-10 13:34 GutenMark
      ...
      -rwxr-xr-x 1 james james   10156 2009-05-10 13:34 GutenSplit
      ...
      james@anubis:~/.GutenMark/binary$ ./GutenMark
      bash: ./GutenMark: No such file or directory
      james@anubis:~/.GutenMark/binary$

    I've tried to isolate the cause of this, with no success. The same happens with zsh, bash, and sh (each giving its appropriate "file not found" error -- this is definitely not strange output from the binary itself). The same happens whether I run it as user james or as root. Nor is it directory specific; if I move the whole installation, or just a single binary, anywhere else, the same happens when I attempt to execute it. The same even happens when I put the directory in $PATH and just run "GutenMark". It also happens when I execute it from a script (I've tried Python's commands module -- though that appears to just call sh). The problem appears to be specific to the binaries themselves, yet they never actually seem to get executed. Any ideas?
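
    A quick diagnostic sketch: on Linux, "No such file or directory" for an executable that plainly exists usually means the kernel cannot find the binary's ELF interpreter, most commonly a 32-bit executable on a 64-bit system with no 32-bit loader installed. The checks below assume that kind of mismatch.

      file GutenMark                              # "ELF 32-bit" on an x86_64 host is the classic culprit
      readelf -l GutenMark | grep interpreter     # which loader the binary asks for
      ls -l /lib/ld-linux.so.2                    # does the requested loader actually exist?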

    Read the article

  • configure server and create website without any control panel

    - by Emad Ahmed
    I am trying to configure my new server without cPanel. I've installed PHP/MySQL/Apache and it's now working fine: if you visit the server IP, http://46.166.129.101/, you'll see the welcome page. I've configured my DNS too; my nameserverips file:

      [root@server]# cat /etc/nameserverips
      46.166.129.101=ns1.isellsoftwares.com
      46.166.129.101=ns2.isellsoftwares.com

    If you visit http://ns1.isellsoftwares.com you'll see the welcome page too, but if you visit isellsoftwares.com you'll see "Firefox can't find the server at www.isellsoftwares.com." Now my question is: how do I create an account for this domain on the server? I've tried adding a VirtualHost block in Apache:

      <VirtualHost *:80>
          ServerAdmin [email protected]
          ServerAlias www.isellsoftwares.com
          DocumentRoot /var/www/html/issoft
          ServerName isellsoftwares.com
          ErrorLog logs/dummy-host.example.com-error_log
          CustomLog logs/dummy-host.example.com-access_log common
      </VirtualHost>

    It is still not working. I've added a named file for this domain (isellsoftwares.com.db):

      ; Zone file for isellsoftwares.com
      $TTL 14400
      isellsoftwares.com.  86400  IN  SOA  ns1.isellsoftwares.com. elsolgan.yahoo.com. (
                                           2012031500 ;Serial Number
                                           86400      ;refresh
                                           7200       ;retry
                                           3600000    ;expire
                                           86400 )    ;minimum
      isellsoftwares.com.  86400  IN  NS   ns1.isellsoftwares.com.
      isellsoftwares.com.  86400  IN  NS   ns2.isellsoftwares.com.
      isellsoftwares.com.  14400  IN  A    46.166.129.101
      localhost            14400  IN  A    127.0.0.1
      isellsoftwares.com.  14400  IN  MX   0 isellsoftwares.com.
      mail                 14400  IN  A    46.166.129.101
      www                  14400  IN  CNAME isellsoftwares.com.
      ftp                  14400  IN  A    46.166.129.101
      cpanel               14400  IN  A    46.166.129.101
      whm                  14400  IN  A    46.166.129.101
      webmail              14400  IN  A    46.166.129.101
      webdisk              14400  IN  A    46.166.129.101
      ns1                  14400  IN  A    46.166.129.101
      ns2                  14400  IN  A    46.166.129.101

    But it is still not working. So, what else should I do?
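
    A short sketch of where to look first, assuming the zone above is meant to be authoritative: the browser error is a DNS failure rather than an Apache one, so confirm the delegation and the local named answer before touching the VirtualHost (which, on Apache 2.2, also needs name-based hosting switched on -- the version is an assumption here).

      dig +trace isellsoftwares.com                 # is the registrar actually delegating to ns1/ns2?
      dig @46.166.129.101 isellsoftwares.com A      # does your own name server answer for the zone?
      # Apache 2.2 only honours multiple <VirtualHost *:80> blocks when this
      # directive appears before them in the config:
      #   NameVirtualHost *:80
      apachectl -S                                  # lists which vhosts Apache actually loaded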

    Read the article

  • Can Subject Alternative Name accommodate multiple virtual mail domains?

    - by Lawrence
    I am currently running a postfix server with self signed certificates serving one mail domain, mycompany.com, the mail server is mail.mycompany.com and so is the CN of the certificate. Now, I need to add a new domain to it. The new domain name is mycompany.net to the same server. Since the users already have the root of the old certificate, I'd like to reuse that. However, I'd like to issue a new certificate so users using the SMTP from Outlook/Thunderbird of mail.mycompany.net do not get warnings. If I understand correctly, if I issue a new certificate with CN=mail.mycompany.com and a subjectAltName=DNS:mail.mydomain.net and have postfix serve this, the client will not complain either way about the cn not matching the target host name. Am I correct in this assumption or am I misunderstanding the concept of Subject Alternative Name? Just to avoid conversation, I do not want to have users on mycompany.net addresses use the mycompany.com server because I might (not a technical issue) have to split up into two different locations, and I want to produce an easily migrateable setup.
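
    For illustration, a minimal sketch of a CSR that carries both names as Subject Alternative Names: modern clients check the SAN list first and ignore the CN when a SAN is present, so listing both hostnames there is the safe form. The file names and config fragment below are placeholders, not anything from the original setup.

      # write a tiny request config whose only interesting line is the SAN list
      printf '%s\n' '[req]' 'distinguished_name = dn' 'req_extensions = v3_req' \
          '[dn]' '[v3_req]' \
          'subjectAltName = DNS:mail.mycompany.com, DNS:mail.mycompany.net' > san.cnf
      openssl req -new -key mail.key -out mail.csr \
          -subj "/CN=mail.mycompany.com" -config san.cnf -reqexts v3_req
      openssl req -in mail.csr -noout -text | grep -A1 "Subject Alternative Name"   # verify both names made it in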

    Read the article

  • Can't connect to Synology DiskStation through HTTPS after importing the certificate in Windows 7

    - by LeonidasFett
    a little background to my problem: I have a Synology DiskStation 213j that I use as a backup/data storage solution. When I'm at work, I would like to push and pull files from my DiskStation but I can't use VPN which is forbidden for outgoing connections. So I wanted to try to use HTTPS so I can at least connect securely to the web interface. I mostly use Chrome which uses the Windows Certificate Store. So I tried importing a self-signed certificate into it, without success. I still get a warning in Chrome telling me the connection is not secure because it can't be verified. When I import the certificate into Firefox though, it works and I can connect through HTTPS. I checked my domain on this site: http://www.sslshopper.com/ssl-checker.html It shows no errors, only a warning that the certificate is self-signed. Which is OK in this case. Any got any idea why importing the certificate into Windows 7 doesn't work? I tried Right-Click domain.mydomain.de.crt File --> Install certificate --> Next --> both options here (in case of "Place certificate in following store:" I selected "Third Party Root Certificate Authorities") to no avail.

    Read the article

  • Debian: What are these files in /sys/devices/pci0000:00/ for?

    - by muhuk
    I am running Debian Squeeze on an MSI M670 laptop. I have these following files on my root drive, each 256MB: file:///sys/devices/pci0000:00/0000:00:05.0/resource1 file:///sys/devices/pci0000:00/0000:00:05.0/resource1_wc Here is my lspci output: muhuk@debian:~$ lspci 00:00.0 RAM memory: nVidia Corporation C51 Host Bridge (rev a2) 00:00.2 RAM memory: nVidia Corporation C51 Memory Controller 1 (rev a2) 00:00.3 RAM memory: nVidia Corporation C51 Memory Controller 5 (rev a2) 00:00.4 RAM memory: nVidia Corporation C51 Memory Controller 4 (rev a2) 00:00.5 RAM memory: nVidia Corporation C51 Host Bridge (rev a2) 00:00.6 RAM memory: nVidia Corporation C51 Memory Controller 3 (rev a2) 00:00.7 RAM memory: nVidia Corporation C51 Memory Controller 2 (rev a2) 00:03.0 PCI bridge: nVidia Corporation C51 PCI Express Bridge (rev a1) 00:05.0 VGA compatible controller: nVidia Corporation C51 [GeForce Go 6100] (rev a2) 00:09.0 RAM memory: nVidia Corporation MCP51 Host Bridge (rev a2) 00:0a.0 ISA bridge: nVidia Corporation MCP51 LPC Bridge (rev a3) 00:0a.1 SMBus: nVidia Corporation MCP51 SMBus (rev a3) 00:0a.3 Co-processor: nVidia Corporation MCP51 PMU (rev a3) 00:0b.0 USB Controller: nVidia Corporation MCP51 USB Controller (rev a3) 00:0b.1 USB Controller: nVidia Corporation MCP51 USB Controller (rev a3) 00:0d.0 IDE interface: nVidia Corporation MCP51 IDE (rev a1) 00:0e.0 IDE interface: nVidia Corporation MCP51 Serial ATA Controller (rev a1) 00:0f.0 IDE interface: nVidia Corporation MCP51 Serial ATA Controller (rev a1) 00:10.0 PCI bridge: nVidia Corporation MCP51 PCI Bridge (rev a2) 00:10.1 Audio device: nVidia Corporation MCP51 High Definition Audio (rev a2) 00:14.0 Bridge: nVidia Corporation MCP51 Ethernet Controller (rev a3) 00:18.0 Host bridge: Advanced Micro Devices [AMD] K8 [Athlon64/Opteron] HyperTransport Technology Configuration 00:18.1 Host bridge: Advanced Micro Devices [AMD] K8 [Athlon64/Opteron] Address Map 00:18.2 Host bridge: Advanced Micro Devices [AMD] K8 [Athlon64/Opteron] DRAM Controller 00:18.3 Host bridge: Advanced Micro Devices [AMD] K8 [Athlon64/Opteron] Miscellaneous Control 04:04.0 FireWire (IEEE 1394): O2 Micro, Inc. Firewire (IEEE 1394) (rev 02) 04:04.2 SD Host controller: O2 Micro, Inc. Integrated MMC/SD Controller (rev 01) 04:04.3 Mass storage controller: O2 Micro, Inc. Integrated MS/xD Controller (rev 01) 04:09.0 Network controller: RaLink RT2561/RT61 rev B 802.11g I am speculating these have something to do with the shared RAM my GPU is using. But why a file on disk? And why two of them?
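
    A short sketch for checking this, on the assumption that the speculation about the GPU is on the right track: /sys is the virtual sysfs filesystem, so nothing here lives on the root drive; resource1 and resource1_wc are two mmap()-able views (normal and write-combining) of one of the GeForce's PCI memory windows, and their "size" is the size of that 256 MB window, not space used on disk.

      mount | grep sysfs                              # /sys is sysfs, not a disk filesystem
      lspci -v -s 00:05.0 | grep -i memory            # shows the 256M prefetchable region behind resource1
      df -h /sys/devices/pci0000:00/0000:00:05.0      # reported against sysfs; zero bytes on the root drive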

    Read the article

  • Write permissions LAMP (Debian Lenny)

    - by letseatfood
    I am working on a PHP script that transfers files using FTP functions. It has always worked on my production server (which is a hosting service). The development server I have just setup (I am a novice to servers) is Debian Lenny with Apache2, PHP5, and MySQL5. The file transfer works correctly, but once the file has been written to the server, it has permissions of 600. This makes it impossible for me to view the file (JPEG) in the web browser, as permission is denied. I have scoured the internet and even broken my server installation and reinstalled it trying to figure this out (which has been fun, nonetheless!). I know it is unwise to set 777 permissions on public accessible files, but even that will not solve the problem. The only thing that works is if I chmod 777 thefile.jpg after it has been transferred, which is not a working solution. I tried changing the owner of my site files to www-data per this post, but that also does not work. My user is mike, and it still does not work whether the owner of the files is mike or root. Would somebody point me in the right direction? Thanks! And, of course, let me know if I can clarify anything.

    Read the article

  • How to configure IIS 7.5 to allow special chars in Url for ASP.NET 3.5?

    - by Sebastian P.R. Gingter
    I'm trying to configure my IIS 7.5 to allow specials chars in the url for ASP.NET. This is important to support wide-spread legacy url's on a new system. Sample url: http://mydomain.com/FileWith%inTheName.html This would be encoded in the url and requested as http://mydomain.com/FileWith25%inTheName.html This simply works, when creating a new web in IIS 7.5, placing a file with the percentage sign in the file name in the web root and pointing the browser to it. This does not work, however, when the web site is an ASP.NET application. ASP.NET always returns a 400.0 - Bad Request error in the WindowsAuthentication module from the StaticFile handler, when pointing to that url. It however displays the requested url correctly and also resolves correctly to the correct physical file (the information from the field 'Physical Path' from the Server error page points to the physically available file). There are hints on how to enable this, so I followed the instructions on these websites step by step: http://dirk.net/2008/06/09/ampersand-the-request-url-in-iis7/ http://adorr.net/2010/01/configure-iis-to-accept-url-with-special-characters.html The second one actually sums up the information from the first post and adds some more information about x64 systems (we're running x64) and on an additional web.config change for this. I tried all that, and still can't get this running from an asp.net web application. And yes: I rebooted after applying the registry changes. So, what do I have to do in addition to the settings described in above posts, to support the legacy url's which contain percentage characters? Additional info: Application Pool mode is integrated. Push after some days. No idea anyone?

    Read the article

  • Exchange 2010 DAG + VMWare HA = no support?

    - by Dan
    We currently have an Exchange 2003 clustered environment (two machine cluster) that we're looking to upgrade to 2010. We recently purchased a VMWare virtualization environment (three Dell R710's with an EMC NS-120 serving up NFS datastores - iSCSI is available) that we wish to use for this new environment. I'm seeing that Microsoft does not support Exchange 2010 DAGs with a virtualization high availability solution (see links below). I would like to utilize the DAG to ensure the data stays available if one host goes down, and HA to ensure that if the physical host goes down, the VM will come back up on the other available host. Does anybody know why MS does not support this? VMWare HA will only restart the VM if it is hung/down - I don't see any difference between this and restarting the physical box if someone pulled the power... Will we only run into issues with support if it has something to do with HA/DAG failover or will they see we have HA and tell us to put it on a physical box even if it has nothing to do with HA? If we disable HA for these VM's will that satisfy them on a support case? Has anybody set up an Exchange 2010 DAG on VMware with HA enabled? Will they have any issues with using an NFS datastore? We have much greater flexibility on the EMC with NFS vs iSCSI, so I would prefer to continue utilizing that. Thanks for any input! http://www.vmwareinfo.com/2010/01/verifying-microsoft-exchange-2010.html Take a look at the second image under "Not Supported" http://technet.microsoft.com/en-us/library/aa996719.aspx "Microsoft doesn't support combining Exchange high availability solutions (database availability groups (DAGs)) with hypervisor-based clustering, high availability, or migration solutions. DAGs are supported in hardware virtualization environments provided that the virtualization environment doesn't employ clustered root servers."

    Read the article

  • Blue screen error code 1000008e

    - by Kas
    I'm getting blue screens, mostly when trying to boot a program that required a lot of memory (games, photo editing software.) So far I've only managed to catch one set of error codes: BCCode: 1000008e BCP1: C0000005 BCP2: ADA393BA BCP3: E9BCEBC4 BCP4: 00000000 OS Version: 6_0_6002 Service Pack: 2_0 Product: 768_1 It's on a Sony VAIO Laptop VGN FW-41E, Vista OS service pack 2. Besides these codes it lists two 'temporary' files that were related with this crash: ...AppData\Local\Temp\WER-134925-0.sysdata.xml ...AppData\Local\Temp\WERDA66.tmp.version.txt When I googled these files some site said it was linked to a worm called 'yodo', but virus scans don't return any results (hitman pro, malware bytes, avast antivirus all turn up empty). Upon further searching about this yodo worm, I came across security stronghold where someone posted they had acquired this worm when downloading access and excel templates. Now, I actually did download templates for the same programs, they might have been the same, they may be related or I might be grasping at straws here. I have not noticed any issues other in performance as of late, just BSOD's when I start software that requires some memory, but I never had issues with these exact same programs before. Help and/or hints are required on how to actually figure out what's the root of this BSOD issue and how can I fix it. Do you reckon it's actually a virus? What program should be able to remove YODO worm stuff?

    Read the article

  • Can't connect to MySQL on CentOS 5 Error 13 - Permission Denied

    - by abszero
    OK, I have an install of CentOS 5 running as a guest OS in VirtualBox. The network card for the CentOS box is bridged with that of my host OS so that the boxes can see each other. CentOS has an IP of 192.168.1.108 and my host box has an IP of .104. Everything with regard to networking seems to be working properly, as I can access the Drupal install that is on the CentOS box from a web browser on my host box by navigating to http://192.168.1.108. However, when I try to configure the database for Drupal through the Drupal install interface, I get the "Can't connect to MySQL" error. First I thought this might have been a firewall issue, so I stopped iptables, but that had no effect. I thought maybe the user I had set up did not have access to the server, so I tried root, and that did not work. Searching on the net said that I needed to provide a bind-address parameter in my.cnf, so I did that, with no change. (As a side note, my my.cnf file is MUCH shorter than the ones presented online. In fact, under mysqld all I have are datadir, socket, user, and bind-address. Is this normal, or should the file be more verbose?) After a few hours of messing with permissions and such, I tried using 'localhost' as the value for the database server, from my host OS, and the Drupal install kicked off without a problem. So while my issue is resolved, I am curious as to why 'localhost' works and why 192.168.1.108 did not. Is there something I need to do to specifically access the MySQL box via the aforementioned IP? Thanks.
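
    A hedged sketch of the usual explanation on CentOS: 'localhost' makes the MySQL client use the local UNIX socket, while 192.168.1.108 forces a TCP connection, and SELinux by default stops Apache/PHP from making outbound network connections, which surfaces as "permission denied (13)". Whether SELinux is actually enforcing here is an assumption; these are the standard booleans to check.

      getenforce                                         # "Enforcing" means SELinux is in play
      getsebool httpd_can_network_connect httpd_can_network_connect_db
      sudo setsebool -P httpd_can_network_connect_db 1   # allow the web app to reach MySQL over TCP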

    Read the article

  • Can't connect to vsftpd from outside the network

    - by rick
    I know this has been asked many times before, but nothing seems to resolve my issue. I have vsftpd running on Ubuntu 10.04. I can connect with ftp localhost on the machine. I can connect from another machine on my network. I just cannot connect from outside. The machine is behind an AirPort Extreme managed by AirPort Utility on a Mac. Port 21 is open as per nmap:

      macmini:~$ nmap localhost
      Starting Nmap 5.21 ( http://nmap.org ) at 2011-04-10 23:49 EDT
      Nmap scan report for localhost (127.0.0.1)
      Host is up (0.00045s latency).
      Hostname localhost resolves to 2 IPs. Only scanned 127.0.0.1
      rDNS record for 127.0.0.1: localhost.localdomain
      Not shown: 997 closed ports
      PORT    STATE SERVICE
      21/tcp  open  ftp
      22/tcp  open  ssh
      631/tcp open  ipp

    netstat says 21 is listening:

      macmini:~$ netstat -lep --tcp | grep ftp
      (Not all processes could be identified, non-owned process info
       will not be shown, you would have to be root to see it all.)
      tcp   0   0 *:ftp   *:*   LISTEN

    iptables:

      macmini:~$ sudo iptables -L
      Chain INPUT (policy ACCEPT)
      target prot opt source destination
      Chain FORWARD (policy ACCEPT)
      target prot opt source destination
      Chain OUTPUT (policy ACCEPT)
      target prot opt source destination

    When I try to connect from my external IP (or a DynDNS name which resolves there) it times out ("control connection timed out"). As I know very little about networking, I feel like something may jump out as clearly wrong?
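
    A sketch of the likely missing piece, assuming the AirPort Extreme is the only NAT device in front of this box: everything above is checked from inside the network, so the usual culprits are the port mapping on the router plus FTP's separate passive data connections. The port range below is an arbitrary example to adapt.

      # /etc/vsftpd.conf additions so passive data connections use a known,
      # forwardable range and advertise the external address:
      #   pasv_enable=YES
      #   pasv_min_port=40000
      #   pasv_max_port=40050
      #   pasv_address=<your external IP>     # or pasv_addr_resolve=YES with a hostname
      sudo service vsftpd restart
      # then map TCP 21 and 40000-40050 to this machine in AirPort Utility
      # and test from outside, e.g.: ftp <dyndns-name>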

    Read the article

  • Video desktop recording and multiple WM displays, capturing nonactive display

    - by okobaka
    Two X displays, each running its own WM (Fluxbox), on one local machine. Using ffmpeg to record the desktop:

      ffmpeg -an -f x11grab -s 1920x1080 -r 25 -i :1.0 -sameq /tmp/video.mkv

    On one display everything works great, but not when I have a second display started with startx -- :1. What I am doing right now is switching (Ctrl+Alt+F8) to display :1.0 and starting the recording with ffmpeg. Everything is fine until I switch back (Ctrl+Alt+F7) to display :0.0: the WM and the captured video image freeze, but when I switch back (Ctrl+Alt+F8) to display :1.0, it unfreezes and continues recording. So, how do I keep display :1.0 from freezing while I am on display :0.0? Tested some more: opening [display 0.0], then opening [display 0.1] from [display 0.0], then opening [display 0.2] -- same problem. The results are the same for different users and for the same user: ffmpeg keeps recording that paused image. It looks like the WM root window needs to be active in order to be recorded.
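
    A sketch of one workaround, on the assumption that both sessions are full X servers on real virtual terminals: an X server that does not own the active VT stops rendering, so grabbing :1 only works while :1 is in the foreground. Running the second session in a nested (Xephyr) or headless (Xvfb) server avoids the VT switch entirely; geometry and display numbers below are examples.

      Xephyr :1 -screen 1920x1080 &                       # nested X server inside the :0 desktop
      # or completely off-screen:  Xvfb :1 -screen 0 1920x1080x24 &
      DISPLAY=:1 fluxbox &
      ffmpeg -an -f x11grab -s 1920x1080 -r 25 -i :1.0 -sameq /tmp/video.mkv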

    Read the article

  • Can I use CNAME with ip address? Why If works (sometimes)?

    - by Maciek Sawicki
    I believe the easiest answer to the first question is "No, you have 'A' records for this", but I accidentally set up a subdomain using a CNAME pointing to an IP address and it worked on a few computers in my office. I wonder how that was possible? Now, when I check it from home, I get the following error:

      beast:~ viroos$ host somesubdomain.somedomain.com
      Host somesubdomain.somedomain.com not found: 3(NXDOMAIN)

    I'm 100% sure it used to work at my office (currently it looks like it doesn't, but I'm checking on a different machine). Therefore I'm not 100% sure whether it worked due to some special network setup or because I tested it just after adding the DNS entry. I know this story sounds a little crazy/incredible, but can someone help me solve this puzzle? Edit: I'm adding the dig output:

      ; <<>> DiG 9.6-ESV-R4-P3 <<>> somesubdomain.somedomain.com
      ;; global options: +cmd
      ;; Got answer:
      ;; ->>HEADER<<- opcode: QUERY, status: NXDOMAIN, id: 60224
      ;; flags: qr rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 1, ADDITIONAL: 0
      ;; QUESTION SECTION:
      ;somesubdomain.somedomain.com.    IN    A
      ;; ANSWER SECTION:
      somesubdomain.somedomain.com. 67  IN    CNAME    xxx.xxx.xxx.xx1.
      ;; AUTHORITY SECTION:
      .    1800    IN    SOA    a.root-servers.net. nstld.verisign-grs.com. 2012040901 1800 900 604800 86400
      ;; Query time: 72 msec
      ;; SERVER: 8.8.8.8#53(8.8.8.8)
      ;; WHEN: Tue Apr 10 00:11:01 2012
      ;; MSG SIZE rcvd: 136
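
    A small sketch of what the dig output is showing: a CNAME target must be a hostname, so resolvers take the literal "xxx.xxx.xxx.xx1." and try to look it up as a domain name, which is exactly the NXDOMAIN answer from the root servers above. The record this subdomain needs is a plain A record; the address below is a documentation placeholder.

      dig +short somesubdomain.somedomain.com CNAME   # shows the IP-as-hostname target that can never resolve
      # what the zone should carry instead (zone-file line, placeholder address):
      #   somesubdomain.somedomain.com.  14400  IN  A  192.0.2.10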

    Read the article

  • Apache crashes a few seconds after starting

    - by Nacho
    Hi, I've got a problem with Apache. When I try to start it (/etc/init.d/apache2 start) it dies after a few seconds. It shows up in "ps aux" consuming a lot of memory and then dies. I don't know what could be causing Apache to consume this amount of memory:

      USER     PID   %CPU %MEM VSZ    RSS  TTY STAT START TIME COMMAND
      root     13379 1.0  0.3  14376  3908 ?   Ss   22:31 0:00 /usr/sbin/apache2 -k start
      www-data 13383 0.0  0.4  197316 4196 ?   Sl   22:31 0:00 /usr/sbin/apache2 -k start
      www-data 13390 0.0  0.3  172728 4172 ?   Sl   22:31 0:00 /usr/sbin/apache2 -k start
      www-data 13396 0.0  0.3  156336 4160 ?   Sl   22:31 0:00 /usr/sbin/apache2 -k start
      www-data 13400 0.0  0.3  148140 4156 ?   Sl   22:31 0:00 /usr/sbin/apache2 -k start
      www-data 13403 0.0  0.3  131748 4148 ?   Sl   22:31 0:00 /usr/sbin/apache2 -k start

    Here is an htop screenshot: http://i.imgur.com/N4Chh.png. It happened suddenly; no change had been made to the server config, so I don't know what's causing it. The error log of my virtual servers shows this:

      [Sun Jan 30 22:19:50 2011] [alert] (11)Resource temporarily unavailable: mod_wsgi (pid=9685): Couldn't create worker thread 11 in daemon process 'fb.ebookmetafinder.com'.
      [Sun Jan 30 22:19:55 2011] [alert] (11)Resource temporarily unavailable: mod_wsgi (pid=9685): Couldn't create worker thread 19 in daemon process 'fb.ebookmetafinder.com'.
      [Sun Jan 30 22:29:40 2011] [alert] (11)Resource temporarily unavailable: mod_wsgi (pid=12009): Couldn't create worker thread 18 in daemon process 'fb.ebookmetafinder.com'.
      [Sun Jan 30 22:31:06 2011] [alert] (11)Resource temporarily unavailable: mod_wsgi (pid=13396): Couldn't create worker thread 15 in daemon process 'fb.ebookmetafinder.com'.
      [Sun Jan 30 22:35:02 2011] [alert] (11)Resource temporarily unavailable: mod_wsgi (pid=14009): Couldn't create worker thread 16 in daemon process 'fb.ebookmetafinder.com'.
      [Sun Jan 30 22:35:07 2011] [alert] (11)Resource temporarily unavailable: mod_wsgi (pid=14009): Couldn't create worker thread 17 in daemon process 'fb.ebookmetafinder.com'.

    I'm on an Ubuntu server VPS and I use mod_wsgi with Django. Thanks.
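
    A hedged reading plus a sketch: error 11 (EAGAIN) on thread creation usually means the VPS's memory or process limits are exhausted rather than an Apache bug, so the VSZ figures above are likely hitting a container ceiling. Whether this VPS is OpenVZ-based is an assumption, and the mod_wsgi numbers are only examples.

      ulimit -u; ulimit -v                      # per-user process and address-space limits
      cat /proc/user_beancounters               # OpenVZ only: look for failcnt on privvmpages/numproc
      # shrinking the wsgi daemon keeps it under the ceiling, e.g. in the vhost:
      #   WSGIDaemonProcess fb.ebookmetafinder.com processes=2 threads=5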

    Read the article

  • nginx: how do I add a new site/server_name in nginx?

    - by Neo
    I'm just starting to explore nginx on my Ubuntu 10.04. I installed nginx and I'm able to get the "Welcome to nginx" page on localhost. However, I'm not able to add a new server_name, even when I make the changes in the sites-available/default file. I tried reloading/restarting nginx, but nothing works. One interesting observation: "http://mycomputername" works in the browser, so somehow there is a directive like 'server_name $hostname' somewhere overriding my rule.

    File: sites-available/mine.enpass

      server {
          listen 80;
          server_name mine.enpass;
          access_log /var/log/nginx/localhost.access.log;
          location / {
              root /var/www/nginx-default;
              index index.html index.htm;
          }
      }

    File: nginx.conf

      user www-data;
      worker_processes 1;
      error_log /var/log/nginx/error.log;
      pid /var/run/nginx.pid;
      events {
          worker_connections 1024;
          # multi_accept on;
      }
      http {
          include /etc/nginx/mime.types;
          access_log /var/log/nginx/access.log;
          sendfile on;
          #tcp_nopush on;
          #keepalive_timeout 0;
          keepalive_timeout 65;
          tcp_nodelay on;
          gzip on;
          gzip_comp_level 2;
          gzip_proxied any;
          gzip_types text/plain text/css application/x-javascript text/xml application/xml application/xml+rss text/javascript;
          gzip_disable "MSIE [1-6]\.(?!.*SV1)";
          include /etc/nginx/conf.d/*.conf;
          include /etc/nginx/sites-enabled/*;
      }
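
    A short sketch of the two things that usually bite here, assuming the new file really lives in sites-available: nginx only loads what is linked into sites-enabled (the last include above), and a made-up name like mine.enpass has to resolve to this machine before the server_name match can ever happen.

      sudo ln -s /etc/nginx/sites-available/mine.enpass /etc/nginx/sites-enabled/mine.enpass
      echo "127.0.0.1 mine.enpass" | sudo tee -a /etc/hosts     # or a real DNS entry
      sudo nginx -t && sudo /etc/init.d/nginx reload
      curl -H "Host: mine.enpass" http://127.0.0.1/             # confirm the new vhost answers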

    Read the article

  • Linux And NTFS Permissions

    - by VGE IT
    Trying to restrict a folder within a directory created in linux filesystem. I have changed the permissions to: root rwx, a special active directory group rwx and all others r. Upon doing so, people that are not in the special AD group can access the directory and modify files. Upon doing so the group changes to "Domain Users" when the user modifies documents within the directory. I have to manualy change the documents default group back to my AD group. I have tried to create another AD group and modify permissons to deny write access. When doing so through windows explorer, the settings seem to take affect until I go back in a look at permissions for the restricted group. No permissions show when I view for the second time. Please assist. Samba share properties [MyShare] comment = "blah blah blah" browseable = yes guest ok = no read only = no path = /xxx/xxxxx/ create mask = 0640 directory mask = 0750 admin users = @"domain\Domain Admins", @"domain\group A", @"domain\group B" valid users = @"domain\Domain Admins", @"domain\group A", @"domain\group B" nt acl support = Yes inherit acls = yes inherit owner = yes inherit permissions = yes

    Read the article

  • How to get nginx to pass HTTP_AUTHORIZATION header to Apache

    - by codeinthehole
    Am using Nginx as a reverse proxy to an Apache server that uses HTTP Auth. For some reason, I can't get the HTTP_AUTHORIZATION header through to Apache, it seems to get filtered out by Nginx. Hence, no requests can authenticate. Note that the Basic auth is dynamic so I don't want to hard-code it in my nginx config. My nginx config is: server { listen 80; server_name example.co.uk ; access_log /var/log/nginx/access.cdk-dev.tangentlabs.co.uk.log; gzip on; proxy_set_header Host $host; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_read_timeout 120; location / { proxy_pass http://localhost:81/; } location ~* \.(jpg|png|gif|jpeg|js|css|mp3|wav|swf|mov|doc|xls|ppt|docx|pptx|xlsx|swf)$ { if (!-f $request_filename) { break; proxy_pass http://localhost:81; } root /var/www/example; } } Anyone know why this is happening? Update - turns out the problem was something I had overlooked in my original question: mod_wsgi. The site in question here is a Django site, and it turns out that Apache does get the auth variables passed through, however mod_wsgi filters them out. The resolution is to use: WSGIPassAuthorization On See http://www.arnebrodowski.de/blog/508-Django,-mod_wsgi-and-HTTP-Authentication.html for more details

    Read the article

  • What is /etc/apache2/sites-available used for and is it necessary?

    - by Mariane
    I have 3 sites, each with a specific IP, running on Apache2 (up-to-date Ubuntu). To put a site online, I just created a file in /etc/apache2/sites-enabled, and in this file I told Apache which directory was the root directory for this site and to which IP it should correspond. So I have

      000-default
      001-www.lapf.eu
      002-www.felkin.info
      003-www.seidhr.fr

    in this directory. My first site, lapf, suddenly lost contact with its database after the domain name was transferred from another registrar to the registrar who is also hosting the site's data. Then I did an update, and I reinstalled mysql-server and mysql-common, and I did I-have-forgotten-what to reinstall the locales (utf8 and such) which had vanished for some reason. This fixed my first site. Now I noticed that the other 2 sites are offline. Pointing a browser at them just hangs until timeout. They used to work, and their domain names did not move; they are still registered at the same place. The files are still in /etc/apache2/sites-enabled. I noticed another directory, /etc/apache2/sites-available, with just default and default.ssl in it. Why are there 2 directories, sites-enabled and sites-available? Should I copy the files from "sites-enabled" into "sites-available"? Or should I put a modified version of each in "sites-available"? Output of "apache2ctl -S":

      VirtualHost configuration:
      92.243.20.169:80  Charlotte (/etc/apache2/sites-enabled/001-www.lapf.eu:1)
      92.243.21.141:80  xvm-21-141.ghst.net (/etc/apache2/sites-enabled/002-www.felkin.info:1)
      92.243.4.114:80   xvm-4-114.ghst.net (/etc/apache2/sites-enabled/003-www.seidhr.fr:1)
      wildcard NameVirtualHosts and _default_ servers:
      *:80  is a NameVirtualHost
            default server Charlotte (/etc/apache2/sites-enabled/000-default:1)
            port 80 namevhost Charlotte (/etc/apache2/sites-enabled/000-default:1)
      Syntax OK
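
    A sketch of the intended layout and the usual checks, assuming a stock Debian/Ubuntu Apache: sites-available is where every vhost definition lives, sites-enabled holds only symlinks to the ones Apache should load, and a2ensite/a2dissite manage those links, so nothing needs copying. Since apache2ctl -S already lists the two offline vhosts, the timeouts are more likely DNS or IP-reachability issues, which the last two commands help separate.

      # the conventional layout: definitions in sites-available, symlinks in sites-enabled
      sudo mv /etc/apache2/sites-enabled/002-www.felkin.info /etc/apache2/sites-available/
      sudo a2ensite 002-www.felkin.info             # recreates it as a symlink
      sudo apache2ctl configtest && sudo /etc/init.d/apache2 reload
      dig +short www.felkin.info                    # does the name still point at 92.243.21.141?
      curl -I http://92.243.21.141/                 # does Apache answer on that IP at all?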

    Read the article

  • Jenkins shell command isn't executing

    - by Dmitro
    In a Jenkins project I added a build step executing rm /var/www/ru.liveyurist.ru/tmp/*. But when I build the project I get this error:

      Started by user anonymous
      Building in workspace /var/www/ru.myproject.ru
      Updating https://subversion.assembla.com/svn/myproject/trunk
      At revision 1168
      no change for https://subversion.assembla.com/svn/liveexpert/trunk since the previous build
      [ru.myproject.ru] $ /bin/sh -xe /tmp/hudson7189633355149866134.sh
      FATAL: command execution failed
      java.io.IOException: Cannot run program "/bin/sh" (in directory "/var/www/ru.myproject.ru"): java.io.IOException: error=12, Cannot allocate memory
          at java.lang.ProcessBuilder.start(ProcessBuilder.java:475)
          at hudson.Proc$LocalProc.<init>(Proc.java:244)
          at hudson.Proc$LocalProc.<init>(Proc.java:216)
          at hudson.Launcher$LocalLauncher.launch(Launcher.java:709)
          at hudson.Launcher$ProcStarter.start(Launcher.java:338)
          at hudson.Launcher$ProcStarter.join(Launcher.java:345)
          at hudson.tasks.CommandInterpreter.perform(CommandInterpreter.java:82)
          at hudson.tasks.CommandInterpreter.perform(CommandInterpreter.java:58)
          at hudson.tasks.BuildStepMonitor$1.perform(BuildStepMonitor.java:19)
          at hudson.model.AbstractBuild$AbstractRunner.perform(AbstractBuild.java:703)
          at hudson.model.Build$RunnerImpl.build(Build.java:178)
          at hudson.model.Build$RunnerImpl.doRun(Build.java:139)
          at hudson.model.AbstractBuild$AbstractRunner.run(AbstractBuild.java:473)
          at hudson.model.Run.run(Run.java:1410)
          at hudson.model.FreeStyleBuild.run(FreeStyleBuild.java:46)
          at hudson.model.ResourceController.execute(ResourceController.java:88)
          at hudson.model.Executor.run(Executor.java:238)
      Caused by: java.io.IOException: java.io.IOException: error=12, Cannot allocate memory
          at java.lang.UNIXProcess.<init>(UNIXProcess.java:164)
          at java.lang.ProcessImpl.start(ProcessImpl.java:81)
          at java.lang.ProcessBuilder.start(ProcessBuilder.java:468)
          ... 16 more
      Build step 'Execute shell' marked build as failure
      Finished: FAILURE

    I started Jenkins as the root user. Please advise what the reason for this error could be.
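
    A hedged sketch: error=12 comes from the fork() the JVM performs before running /bin/sh, which briefly needs to reserve as much memory as the whole Jenkins process, so on a small VPS with no swap it fails even though memory looks free. Adding swap (or relaxing overcommit) is the usual workaround; the 1 GB size is just an example.

      free -m                                            # how much headroom does the JVM have?
      sudo dd if=/dev/zero of=/swapfile bs=1M count=1024
      sudo mkswap /swapfile && sudo swapon /swapfile
      # or, less safely, let the kernel overcommit:
      #   echo 1 | sudo tee /proc/sys/vm/overcommit_memory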

    Read the article

  • Troubleshooting a Windows 7 PC that wouldn't sleep

    - by NPE
    I have a new Windows 7 PC that wouldn't sleep (not just automatically, but also when specifically told to). The screen goes black momentarily, but within two seconds the machine comes back as if nothing has happened. I tried powercfg energy. This produces some errors quoted at the bottom of this post, plus some warnings about timer resolution. There are no USB devices connected other than wireless keyboard + mouse (Logitech MK250); I tried unplugging them to no effect. The motherboard is Asus P7P55D-E. powercfg lastwake says "Wake History Count - 0", which I take to mean that it never actually went to sleep. I dual boot into Ubuntu, and was having exactly the same problem on the Linux side. That turned out to do with USB 3.0, which I've now disabled in the BIOS. This has solved the problem on the Ubuntu side of things, but made no difference to Windows 7. Any suggestions? Suspend:USB Device not Entering Suspend The USB device did not enter the Suspend state. Processor power management may be prevented if a USB device does not enter the Suspend state when not in use. Device Name Generic USB Hub Host Controller ID PCI\VEN_8086&DEV_3B34 Host Controller Location PCI bus 0, device 29, function 0 Device ID USB\VID_8087&PID_0020 Port Path 1 USB Suspend:USB Device not Entering Suspend The USB device did not enter the Suspend state. Processor power management may be prevented if a USB device does not enter the Suspend state when not in use. Device Name USB Root Hub Host Controller ID PCI\VEN_8086&DEV_3B34 Host Controller Location PCI bus 0, device 29, function 0 Device ID USB\VID_8086&PID_3B34 Port Path USB Suspend:USB Device not Entering Suspend The USB device did not enter the Suspend state. Processor power management may be prevented if a USB device does not enter the Suspend state when not in use. Device Name USB Composite Device Host Controller ID PCI\VEN_8086&DEV_3B34 Host Controller Location PCI bus 0, device 29, function 0 Device ID USB\VID_046D&PID_C52E Port Path 1,8

    Read the article

  • Why do I have to log into hotmail twice?

    - by Tony Lee
    I just recently noticed I have to attempt a login into hotmail twice before it succeeds. Although I'm using Google Chrome (3.0.195.21), the symptoms are well described in a Mozilla thread. In short, I'm told: The e-mail address or password is incorrect. Please try again. The thread on mozilla's site that supposed to describe the latest details (and the 1st hit on google when I search for "hotmail login twice") requires an account to read so I'm hoping someone here has a good synopsis of what the cause is. I normally start at hotmail.com, which redirects to login.live.com/.... I can login by starting at mail.live.com, using IE8 or attempting a 2nd login. Oddly, if I start at login.live.com Chrome tells me there is a redirect loop. Does anyone know or have a public link to the root cause of the double login? (it is a hotmail account I'm login into) EDIT - Caused by my 'restricting how 3rd parties can use cookies'. If I allow all cookies, it works first time.

    Read the article

  • Limit copssh users to home directory Windows 7

    - by Siriss
    Hello all- I have found these two sites below: CopSSH SFTP -- limit users access to their home directory only and http://blogs.windowsnetworking.com/wnadmin/2006/11/07/copssh-restricting-users-access/ as well as the Copssh website, but upon completion they do not seem to work. I have copssh installed and I have a separate Windows account "sftpuser" created that is used to connect. The connection works just fine, but I want to limit that user to just their home directory and sub folders. I have 3 hard drives, the C:, a W: and an S: and I want the FTP account to only be able to access the W: drive and its contents (the root of the W: drive is the FTP home directory). Right now "sftpuser" can access all folders, including jump drives to C:, and S:. The linked tutorials do not seem to work, because it seems when I create a group "ftpusersgroup" and add "sftpuser" to the group, and then deny "ftpusersgroup" access to the C: drive, the service breaks and I can no longer login. I have undone everything and am ready to start fresh. Does anyone know how to do this, or is there a better tutorial that someone has or has found? I hope this makes sense. Thank you very much for any help!

    Read the article

  • Can't open cocoa emacs from terminal using open -a

    - by Shane
    I installed emacs on my MacBook Air running Mac OS X 10.6.5 from this site http://emacsformacosx.com/. I believe this is what used to be called cocoa emacs. I dragged it into my Application folder and it works fine when I run it from there. I want to be able to run it from the Terminal. After some googling, I tried open -a /Application/Emacs.app foo.txt (foo.txt was and existing file). I got two emacs windows - one with welcome screen and one with foo.txt loaded. I tried a few applications in the /Applications directory and they did not seem to behave like this. I had installed it using my own account (an admin account) so after doing ls -l on /Application I noticed that the owner and group were different from the other entries in this folder. I recursively changed the owner and group to root and wheel, like the others, but this did not help. The only thing that looks funny now is that there that ls -l show a @ character which has something to do with extended attributes but I don't know how to check these. Any suggestions on what to check next? Is using the open command the only to run the program? Can I simulate what it does using a shell script?

    Read the article

  • Resize a RAID 1 volume on OS X Snow Leopard - how? (Note: software raid)

    - by Emmel
    I've scoured the Internet in search of an answer to this question, and as usual with OSX-related topics, I often don't find any deep-dive technical explanations sufficient enough to feel confident doing dangerous things. Here is my question: I have a Mac Pro, running OS X 10.6.2. I have, as my main root/boot disk, a RAID 1 volume called "Mirror1". Mirror1 is comprised of two 1 TB disks. Mirror1, however, is fixed at 640 GB. That's because, I originally took a 640GB disk, bought a terabyte disk, mirrored it (using diskutil appleraid enable), when it synced I removed the 640GB and replaced it with a second 1 TB disk, and synced again. Voila! A single 640 GB replaced by two 1 TB disks in a mirror.. Actually, no. There's still something missing from the equation: Mirror1 needs to be expanded from 640GB to 1 TB to match the partition sizes on each of those disks. How do I do this? Perhaps the diskutil output will help: -> diskutil list /dev/disk0 #: TYPE NAME SIZE IDENTIFIER 0: GUID_partition_scheme *1.0 TB disk0 1: EFI 209.7 MB disk0s1 2: Apple_RAID 999.9 GB disk0s2 3: Apple_Boot Boot OSX 134.2 MB disk0s3 /dev/disk1 #: TYPE NAME SIZE IDENTIFIER 0: GUID_partition_scheme *1.0 TB disk1 1: EFI 209.7 MB disk1s1 2: Apple_RAID 999.9 GB disk1s2 3: Apple_Boot Boot OSX 134.2 MB disk1s3 /dev/disk2 #: TYPE NAME SIZE IDENTIFIER 0: GUID_partition_scheme *640.1 GB disk2 1: EFI 209.7 MB disk2s1 2: Apple_HFS Mac Disk 2 536.7 GB disk2s2 3: Microsoft Basic Data BOOTCAMP 103.1 GB disk2s3 /dev/disk3 #: TYPE NAME SIZE IDENTIFIER 0: Apple_HFS Mirror1 *639.8 GB disk3 -> diskutil appleraid list AppleRAID sets (1 found) =============================================================================== Name: Macintosh HD Unique ID: 1953F864-B474-4EB6-8E69-41834EBD0247 Type: Mirror Status: Online Size: 639.8 GB (639791038464 Bytes) Rebuild: manual Device Node: disk3 ------------------------------------------------------------------------------- # Device Node UUID Status ------------------------------------------------------------------------------- 0 disk1s2 25109BAE-5697-40EA-B612-0217851444F7 Online 1 disk0s2 11B83AB0-8148-4DB6-8761-DEF08C855F8D Online =============================================================================== Thanks in advance.

    Read the article
