Search Results

Search found 8028 results on 322 pages for 'unix shell'.

Page 261/322 | < Previous Page | 257 258 259 260 261 262 263 264 265 266 267 268  | Next Page >

  • What's the best way to get a stored POP3 password out of Outlook 2007?

    - by Tom Morris
    If you have a password for a POP3 account in Outlook 2007 (Windows 7 Home Premium) and you then forget the password, how do you retrieve it? I tried copy-and-paste. No go. I downloaded Mail PassView, but upon installing it, AVG said it was malware, so I removed it. I eventually found the account details by opening up RegEdit, and found it in HKCU\Software\Microsoft\Windows NT\CurrentVersion\Windows Messaging Subsystem\Profiles\Outlook\ (...) but it was encoded in REG_BINARY. I Googled around and found various Visual Basic routines for decoding it but being a Unix dork I had absolutely no idea what to do with said scripts. By this point, I gave up and managed to get hold of the password by another means (it was written down on a piece of paper in the briefcase of the owner of the account - I know, it makes the inner sysadmin rage). I also attempted to write a simple POP3 server in Python and then get Outlook to log on to it, but that didn't really work out (it was about 4am at that point). For future reference, is there an easy and sensible way of doing this? Is Mail PassView actually evil spyware or was AVG just giving me a false positive? (Any chance of Windows 8 having something like OS X's Keychain?)

    Read the article

  • How can I configure Samba to share (read/write) any folder with root permissions?

    - by Mike Toews
    I have a CentOS 5 VirtualBox guest on a Win7x64 host. I am attempting to set up a read/write share of a directory owned by root with my Windows host using Samba, but I'm having no luck after running around in circles. To simplify matters, I've disabled my firewall (/etc/init.d/iptables stop). As security and permissions are irrelevant for this purpose, I'd rather not have to set up another unix user/group/password. Here is the output from testparm:
        Load smb config files from /etc/samba/smb.conf
        rlimit_max: rlimit_max (1024) below minimum Windows limit (16384)
        Processing section "[Guest Share]"
        Loaded services file OK.
        Server role: ROLE_STANDALONE
    and the source of /etc/samba/smb.conf:
        [global]
        workgroup = WRKGRP
        netbios name = SMBSERVER
        security = SHARE
        load printers = No
        [Guest Share]
        comment = Guest access share
        path = /root/src
        read only = No
        guest ok = Yes
    Running /etc/init.d/smb restart shows an OK status. However, on my Windows host I can only see the share folder on the guest (\\IPv4); I cannot go into "Guest Share". "The network name cannot be found" is a common error message, with a likely cause: the user you are trying to access the share with does not have sufficient permissions to access the path for the share. Both read (r) and access (x) should be possible. Am I trying to use root as a passwordless Samba guest? I'd like to; is it possible? How can I configure Samba to share (read/write) any folder with root permissions?
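    A minimal sketch of the guest-as-root approach, assuming Samba 3.x on CentOS 5; the share section mirrors the one above, with "force user" added so guest connections act as root. Running a share as root is widely discouraged and some Samba builds refuse it, in which case a dedicated user owning a bind-mounted copy of the directory is the usual fallback.
        # append the share and force all access to run as root (sketch)
        cat >> /etc/samba/smb.conf <<'EOF'
        [Guest Share]
            comment = Guest access share
            path = /root/src
            read only = No
            guest ok = Yes
            force user = root
        EOF
        testparm -s                                       # re-check the merged configuration
        /etc/init.d/smb restart
        smbclient -N '//localhost/Guest Share' -c 'ls'    # guest test before trying from Windows
    If it still fails only from Windows, SELinux is worth ruling out (setenforce 0 as a temporary test), since CentOS 5 labels /root in a way smbd may not be allowed to read.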

    Read the article

  • postfix test and configuration problem

    - by Woho87
    Hi guys! I installed postfix using sudo yum install postfix postfix-mysql. I'm a newbie to mail systems, but I have one Amazon EC2 instance with a public DNS name. I used the public DNS name in most cases when I configured main.cf. The public DNS name I have is from Amazon and it is a long string (ec2-123-34-234-677.....amazon.com). This is what I configured in main.cf, with example.com replaced by ec2-123-.......amazon.com:
        myhostname = mail.example.com
        mydomain = example.com
        myorigin = $mydomain
        mydestination = example.com, $transport_maps
        local_recipient_maps = $alias_maps $virtual_mailbox_maps unix:passwd.byname
        home_mailbox = Maildir/
    How do I test postfix? I just want it to send emails for my web application. I tried to test it with >telnet localhost 25 after typing >sudo postfix start over SSH, but I receive a message that the telnet command cannot be found. I also use the Amazon Linux distribution, if you want to know; I use it because it is free. What have I done wrong? Are there any more configurations required? Please help!
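    A quick way to exercise the setup from the shell, assuming Amazon Linux with yum and that Postfix is already listening on port 25; the addresses below are placeholders.
        sudo yum install -y telnet          # the AMI ships without a telnet client
        telnet localhost 25                 # expect: 220 <hostname> ESMTP Postfix
        # or skip the manual SMTP dialog and use the sendmail wrapper Postfix installs:
        echo "test body" | sendmail -f test@example.com you@example.com
        tail -f /var/log/maillog            # watch Postfix accept and attempt delivery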

    Read the article

  • PhpMyAdmin 500 Internal Server Error on Nginx/php5-fpm/Debian

    - by ThrownAway
    I downloaded phpMyAdmin a while ago and am having a hard time getting it to work. Requesting localhost/phpmyadmin gives a 500 Internal Server Error response, but there's nothing in the error log. These are the steps I took:
        Downloaded the newest phpMyAdmin and unzipped all the files to /var/vhosts/phpmyadmin/www/
        Created a new php5-fpm pool and a server block in nginx
        Changed the owner of all the files inside phpmyadmin/
        Tried requesting localhost/phpmyadmin and localhost/phpmyadmin/setup
    phpMyAdmin is running inside a chroot, and all the files are owned by www-data, so it shouldn't be a permission error. I made a new php file in the same directory to produce an error and it logs just fine, so it has to be just phpMyAdmin. Here's my php5-fpm pool:
        [phpmyadmin]
        listen = /var/vhosts/phpmyadmin/tmp/.php.sock;
        user = www-data
        group = www-data
        chroot = /var/vhosts/phpmyadmin/
        chdir = /
        php_admin_value[error_reporting] = E_ALL
        php_admin_value[error_log] = error.log
        php_admin_flag[log_errors] = on
        php_admin_flag[display_errors] = on
        php_value[session.save_handler] = files
        php_value[session.save_path] = /tmp
    And the nginx server block:
        server {
            listen 80;
            root /var/vhosts/phpmyadmin/www;
            server_name pma.domain;
            location / {
                try_files $uri $uri/ /index.html;
                autoindex on;
            }
            location ~ \.php$ {
                fastcgi_split_path_info ^(.+\.php)(/.+)$;
                include fastcgi_params;
                fastcgi_pass unix:/var/vhosts/phpmyadmin/tmp/.php.sock;
                fastcgi_param SCRIPT_FILENAME /www$fastcgi_script_name;
                fastcgi_param PATH_INFO $fastcgi_script_name;
                fastcgi_param DOCUMENT_ROOT /www;
            }
            index index.html index.htm index.php;
            try_files $uri $uri/ =404;
        }
    Any ideas what could be wrong? Why is it not producing any errors even though I've forced them to be on?
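    A few shell-level checks that often surface the real error with a chrooted pool, assuming the paths from the configuration above; the Debian binary names (php5-fpm, php5) are assumptions based on the pool syntax shown.
        nginx -t && php5-fpm -t                      # syntax-check both daemons
        tail -n 50 /var/log/php5-fpm.log             # the FPM master log often holds the fatal error
        ls -ld /var/vhosts/phpmyadmin/tmp            # session.save_path = /tmp resolves inside the chroot
        ls -l /var/vhosts/phpmyadmin/error.log       # error_log = error.log is relative to the chroot too
        php5 -m | grep -Ei 'mysqli|mysql|mcrypt'     # extensions phpMyAdmin expects to load
    Because the pool is chrooted, phpMyAdmin also needs the MySQL socket (or a TCP host instead) reachable from inside /var/vhosts/phpmyadmin/, which a plain test script would never notice.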

    Read the article

  • CPANEL ModSec2 not working with SecFilterSelective

    - by jfreak53
    Ok, I have cPanel/WHM latest on a Dedi, here are my specs on apache: Server version: Apache/2.2.23 (Unix) Server built: Oct 13 2012 19:33:23 Cpanel::Easy::Apache v3.14.13 rev9999 I just ran a re-compile using easyapache as you can see by the date. When running it I made sure that ModSec was selected and it stated in big bold letters something to the effect of If you install Apache 2.2.x you get ModSec 2 So I believed it :) I recompiled, I then ran: grep -i release /home/cpeasyapache/src/modsecurity-apache_2.6.8/apache2/mod_security2.c Hmm, the file is there but grep doesn't output anything, if I run: grep -i release /home/cpeasyapache/src/modsecurity-apache_1.9.5/apache2/mod_security.c I of course get the ModSec 1 version output. But the thing is that ModSec2 is installed since the c file is there. So I continued and put the following in modsec2.user.conf: SecFilterScanOutput On SecFilterSelective OUTPUT "text" Now when I restart Apache I get this error: Syntax error on line 1087 of /usr/local/apache/conf/modsec2.user.conf: Invalid command 'SecFilterScanOutput', perhaps misspelled or defined by a module not included in the server configuration Now supposedly this is supposed to work, I even have it running in ModSec2 on a non-cpanel server setup manually. So I know ModSec2 supports it. Anyone have any ideas? I have asked this question over at cpanel forum and it got nowhere.
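    For what it's worth, SecFilterScanOutput and SecFilterSelective are ModSecurity 1.x directives; the 2.x module dropped them in favour of SecRule, which is why mod_security2 rejects the line even when it is loaded. A rough 2.x sketch of the same output filter, assuming ModSecurity 2.6.x (rule IDs only became mandatory in 2.7):
        cat >> /usr/local/apache/conf/modsec2.user.conf <<'EOF'
        SecRuleEngine On
        SecResponseBodyAccess On
        SecRule RESPONSE_BODY "text" "phase:4,t:none,log,deny"
        EOF
        /scripts/restartsrv_httpd        # cPanel's Apache restart wrapper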

    Read the article

  • javascript doesn't seem to be able to post form data (nginx server w/ php-fpm)

    - by Jones
    So the situation is like so: I have an nginx server with php-fpm installed. All is well and the site scripts all work perfectly. I am able to use plain HTML to POST form data and it works just fine. However, there seems to be some correlation between JavaScript, the POST method, and nothing happening, and I can't seem to determine the issue. Example: I have a user login widget that uses JavaScript to submit the fields and POST the data to a backend auth script, which returns a server message that then populates the login box saying something like "Login Successful", followed by reloading the page to properly enable content. Problem is, nothing happens when you hit submit. I do know the setup works because I had it working on Apache before migrating. Also, if it makes any difference, the server is an Amazon EC2 instance using the Amazon AMI. I really don't know where to start looking on this one, but below is my default.conf for the server:
        upstream backend_get {
            server 127.0.0.1:80 weight=1;
        }
        upstream backend_post {
            server 127.0.0.1:80 weight=1;
        }
        #Main website url
        server {
            listen 80;
            server_name server.com;
            #charset koi8-r;
            access_log logs/host.access.log main;
            error_log logs/host.error.log;
            location / {
                root /usr/share/nginx/html;
                index index.php index.html index.htm;
                if ($request_method = POST) {
                    proxy_pass http://backend_post;
                    break;
                }
            }
            location ~ \.php$ {
                #fastcgi_pass 127.0.0.1:9000;
                fastcgi_pass unix:/var/run/php-fpm/php-fpm.sock;
                fastcgi_index index.php;
                fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
                include fastcgi_params;
            }
        }
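    Two things worth checking from the shell, without touching the JavaScript: the backend_post upstream appears to point at this same nginx instance (127.0.0.1:80), so a POST to / would just be proxied back to itself, while a POST to a .php URI bypasses that block and goes straight to php-fpm. Reproducing the widget's request with curl (the script name and fields below are placeholders) shows which path the request actually takes:
        curl -i -X POST -d 'user=test&pass=test' http://localhost/auth.php
        curl -i -X POST -d 'user=test&pass=test' http://localhost/
        tail -f /var/log/nginx/error.log     # or wherever logs/host.error.log resolves on this build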

    Read the article

  • 400 error with nginx subdomains over https

    - by aquavitae
    Not sure what I'm doing wrong, but I'm trying to get gunicorn/django through nginx using only https. Here is my nginx configuration:
        upstream app_server {
            server unix:/srv/django/app/run/gunicorn.sock fail_timeout=0;
        }
        server {
            listen 80;
            return 301 https://$host$request_uri;
        }
        server {
            listen 443;
            server_name app.mydomain.com;
            ssl on;
            ssl_certificate /etc/nginx/ssl/nginx.crt;
            ssl_certificate_key /etc/nginx/ssl/nginx.key;
            client_max_body_size 4G;
            access_log /srv/django/app/logs/nginx-access.log;
            error_log /srv/django/app/logs/nginx-error.log;
            location /static/ {
                alias /srv/django/app/data/static/;
            }
            location /media/ {
                alias /wrv/django/app/data/media/;
            }
            location / {
                proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
                proxy_set_header X-Forwarded-Proto https;
                proxy_set_header Host $http_host;
                proxy_pass http://app_server;
            }
        }
    I get a 400 error on app.mydomain.com, but the app is published on mydomain.com. Is there an error in my configuration?
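    A quick way to separate an nginx problem from a Django one, assuming the log paths above; the ALLOWED_HOSTS note is an assumption that only applies if this is Django 1.5 or later.
        curl -vk https://app.mydomain.com/ -H 'Host: app.mydomain.com'
        tail -f /srv/django/app/logs/nginx-error.log
    If nginx logs nothing and gunicorn returns a bare 400 with no traceback, the usual culprit on newer Django versions is app.mydomain.com missing from ALLOWED_HOSTS in settings.py, since the Host header is passed through unchanged here.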

    Read the article

  • ldap-authentication without sambaSamAccount on linux smb/cifs server (e.g. samba)

    - by umlaeute
    I'm currently running samba-3.5.6 on a debian/wheezy host to act as the fileserver for our department's w32 clients. Authentication is done via OpenLDAP, where each user DN has an objectclass:sambaSamAccount that holds the smb credentials and an objectclass:shadowAccount/posixAccount for "ordinary" authentication (e.g. pam, apache, ...). Now we would like to dump our department's user DB and instead authenticate against the user DB of our upstream organisation. These user accounts are managed in a Novell eDirectory, which I can already use to authenticate via pam (e.g. for ssh logins, on another host). Our upstream organisation provides smb/cifs based access (via some Novell service) to some directories, which I can access from my Linux client via smbclient. What I currently don't manage to do is use the upstream LDAP (the eDirectory) to authenticate against our institution's samba. I configured my samba server to auth against the upstream LDAP server:
        passdb backend = ldapsam:ldaps://ldap.example.com
    but when I try to authenticate a user, I get:
        $ smbclient -U USER \\\\SMBSERVER\\test
        Enter USER's password:
        Domain=[WORKGROUP] OS=[Unix] Server=[Samba 3.6.6]
        tree connect failed: NT_STATUS_ACCESS_DENIED
    The logfiles show:
        [2012/10/02 09:53:47.692987, 0] passdb/secrets.c:350(fetch_ldap_pw)
          fetch_ldap_pw: neither ldap secret retrieved!
        [2012/10/02 09:53:47.693131, 0] lib/smbldap.c:1180(smbldap_connect_system)
          ldap_connect_system: Failed to retrieve password from secrets.tdb
    I see two problems I'm having: (1) I don't have any administrator password for the upstream LDAP (and most likely they won't give me one); I only want to authenticate my users, write access is not needed at all. Can I get away with that? (2) The upstream LDAP does not have any samba-related attributes in the DB. I was under the impression that for samba to authenticate, those attributes are required, as smb/cifs uses some trivial hashing which is not compatible with the usual posixAccount hashes. Is there a way for my department's samba server to authenticate against such an LDAP server?
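    Two shell-level notes that follow from the log lines above, offered as a sketch rather than a confirmed fix. The "neither ldap secret retrieved" message means smbd is looking for a bind DN and password in secrets.tdb, which are set like this (the DN is a placeholder, and a read-only bind account is usually enough):
        # in smb.conf [global], alongside the passdb backend line:
        #   ldap admin dn = cn=reader,o=example
        #   ldap suffix   = o=example
        smbpasswd -w 'bind-password-here'    # stores the LDAP secret the log says is missing
        /etc/init.d/samba restart
    The second concern is unfortunately real: ldapsam needs the sambaSamAccount NT hash to answer the CIFS challenge/response, so an eDirectory without samba attributes cannot back Samba authentication directly; the usual alternatives involve either extending the upstream directory with samba attributes or keeping a local passdb for the smb side.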

    Read the article

  • Advice for UPS/Surge Protector in home office

    - by Fred
    I'm just starting out as an independent developer, mostly Unix stuff with some Windows thrown in occasionally. I've been running two machines, a Linux and a Windows dev machine. Long story short, we had a bad storm come through last week and I unplugged one machine, forgot to unplug the other, and the p/s and mobo ended up dead. Luckily I back up to an external service religiously (rsync.net for anyone interested), so there was no loss in data, but it did show me a glaring hole in my current setup, namely, lack of UPS and surge protection (this has honestly never been an issue before). Can anyone recommend a UPS/surge protector for a home office? It only needs to support a single machine (I opted to use vmware instead of rebuilding that machine), but it's a quad core Phenom 2 with a 1k watt p/s. This is outside my experience so I thought I'd get some input from others. I'm looking for something that's reasonably priced and does the job reasonably well. I don't need absolute 100% uptime, just something to protect my PC better than it is now.

    Read the article

  • Make a drive from one machine appear as a physical disk in another machine.

    - by Roberto Sebestyen
    I want to take a physical disk (or part of a disk) in one machine (call it machine-A) and make it available in another machine (machine-B). But I don't want to map a network drive; I want it to appear in machine-B as a physical drive, even though it is not one. The reason I want to do this is that I want the ability to create shares on that drive in machine-B. Since I cannot do that with mapped drives, I need some utility that fools machine-B into thinking it is a physical drive and treating it as such. Both of these machines are Windows Server 2003. I heard about NFS; it sounds like it could be the solution to my problem, but isn't that a Linux/Unix protocol? What tools can I use to make this happen? Are there any open source solutions? I don't care what the solution is as long as it achieves the end result, though preferably an open source one. Thanks for reading, guys and gals!

    Read the article

  • BackupExec 12 + RALUS - VERY slow backups

    - by LVDave
    We use Backup Exec 12 and the Remote Agent for Linux/Unix Servers (RALUS) to back up a large RHEL5 system. For various reasons we need to do a daily working set job. These working-set jobs run abysmally slowly. The link between the target machine and the BE server is gigabit, and any other type of job runs 1-3GB/min. These working-set jobs start out at perhaps 40MB/min and, over the course of the backup job, slowly drop down so low that the BE job rate display in "current jobs" goes blank. Since we usually are only doing changed files for one day, the job is usually small and finishes overnight and we don't worry about the slowness, but we had some issues with the backup server and missed about 6 days of fairly heavy work on the Linux box, so this working-set job will be a doozy. We have support with Symantec, and I've pestered them a lot about this; they've had me run RALUS in debug mode and I sent them that log and a VXgather from the BE host, and they had no fix/workaround. To give an idea, the working-set job mentioned has been running for the last 3 1/2 hours and it's backed up just under 10 MEGAbytes. I'm posting this here to see if anybody in the "real world" has seen this and/or has any ideas what might be causing these abysmally slow jobs, since Symantec seems to be clueless...

    Read the article

  • Looking for easiest, most simple solution to run a customised DNS Server for my local network on Windows 7.

    - by Jamie G
    I need to forward some websites, such as http://testing.server/, to a fixed IP address on my local network. I can do this easily on one computer using the hosts file; however, I need this to work for all machines on my network. I think the best way to do this will be to set up my own DNS server and add the custom DNS settings there. However, I'm looking for the simplest way possible to do this - I really don't want to spend hours setting up Unix servers and running tricky terminal-based scripts just to do this! My server is a standard Windows 7 machine. My dream would be a nice simple Windows program with a GUI where I could input my ISP's DNS server and it would use those records, unless I had specifically set up my own DNS for a domain to use instead. If it had a web-based admin system that was accessible from another computer on the network, that would be even better. Does anyone know of anything that can do this? Many thanks indeed.
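    Not the Windows-native GUI asked for, but for comparison, dnsmasq (on any small Linux box, VM, or router on the network) does exactly this split with two lines of configuration; the IPs below are placeholders, and clients would point at that box as their DNS server.
        cat >> /etc/dnsmasq.conf <<'EOF'
        # answer this name locally, forward everything else upstream
        address=/testing.server/192.168.1.50
        server=203.0.113.53
        EOF
        service dnsmasq restart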

    Read the article

  • rsync per-site configuration file?

    - by Scott
    I know how to configure a per-site entry for ssh, but is there any kind of a client configuration for rsync that allows per-site configuration options and aliases or similar shortcuts like the .ssh/config? I'm curious because I have a minimal ssh server installed on my android phone and I also have a minimal rsync tool on it as well. I'm getting tired of having to root login onto the phone and sym-link both tools to standard places the android OS looks for executables as the ssh server is bare bones and has a typical *bear multi-link binary for the basic unix commands (that does not include rsync) I end up having to include --rsync-path=/path/to/rsync/android/files/rsync every time I want to do any rsyncing of the files on my phone, but this path is always the same. I've gotten around it in the meantime with a glob approach in a shell script wrapper, but this sometimes limits the customization I can do with the rsync call. I'm just wondering if there is anything similar to the .ssh/config file where I can create an alias for my phone (e.g. 'android') where specifying rsync android:/mnt/sdcard will automatically assume --rsync-path=/blah/blah/blah --no-g --no-p --no-t etc. Tre`
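    rsync itself has no per-host client configuration file of its own (only the daemon side has rsyncd.conf), but since these transfers ride over ssh, the host alias can live in ~/.ssh/config and the repeated options in a small wrapper; a sketch, with the phone's address, port, and rsync path as placeholders.
        cat >> ~/.ssh/config <<'EOF'
        Host android
            HostName 192.168.1.23
            Port 2222
            User root
        EOF
        # wrapper carrying the options that cannot go in a config file
        arsync() { rsync -e ssh --rsync-path=/data/local/bin/rsync --no-g --no-p --no-t "$@"; }
        arsync android:/mnt/sdcard/ ./sdcard-backup/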

    Read the article

  • Apache memory allocation error message

    - by la_f0ka
    I'm trying to set up a medium-sized Drupal 7 website on my miniserver but I keep getting a 500 error message. This is what I found in Apache's error log:
        [Wed Sep 12 15:02:04 2012] [notice] SSL FIPS mode disabled
        [Wed Sep 12 15:02:04 2012] [warn] No JkShmFile defined in httpd.conf. Using default /usr/local/apache/logs/jk-runtime-status
        [Wed Sep 12 15:02:04 2012] [notice] Apache/2.2.22 (Unix) mod_ssl/2.2.22 OpenSSL/1.0.0-fips mod_auth_passthrough/2.1 mod_bwlimited/1.4 FrontPage/5.0.2.2635 mod_jk/1.2.35 configured -- resuming normal operations
        [Wed Sep 12 15:02:07 2012] [error] [client 89.16.136.28] /usr/bin/php: error while loading shared libraries: libkrb5support.so.0: failed to map segment from shared object: Cannot allocate memory
        [Wed Sep 12 15:02:07 2012] [error] [client 89.16.136.28] Premature end of script headers: index.php
        [Wed Sep 12 15:02:07 2012] [error] [client 89.16.136.28] /usr/bin/php: error while loading shared libraries: libkrb5support.so.0: failed to map segment from shared object: Cannot allocate memory
        [Wed Sep 12 15:02:07 2012] [error] [client 89.16.136.28] Premature end of script headers: index.php
        [Wed Sep 12 15:02:07 2012] [error] [client 89.16.136.28] File does not exist: /home/brighton/public_html/favicon.ico
        [Wed Sep 12 15:02:07 2012] [error] [client 89.16.136.28] /usr/bin/php: error while loading shared libraries: libkrb5support.so.0: failed to map segment from shared object: Cannot allocate memory
        [Wed Sep 12 15:02:07 2012] [error] [client 89.16.136.28] Premature end of script headers: index.php
    I contacted support and they just told me I should upgrade my package (right now I have a 512MB account), but I am not sure I'm buying it... even if I try to access a file which only contains phpinfo(); I still get the 500. Any help would be much appreciated, and if any other information is needed please let me know and I'll update the question. I compiled Apache with Tomcat because I intend to use Solr... not sure if this is relevant or not.
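    A minimal sketch of how to check whether this really is a memory cap rather than a PHP problem, assuming shell access to the account; /proc/user_beancounters only exists on Virtuozzo/OpenVZ-style containers, which is an assumption about how this miniserver is hosted.
        free -m                                    # overall memory and swap on the box
        ulimit -a                                  # per-process limits in this shell
        cat /proc/user_beancounters 2>/dev/null    # a growing failcnt on privvmpages/oomguarpages
                                                   # means the 512MB container limit is being hit
    The failing step is the dynamic loader mapping libkrb5support for /usr/bin/php, which happens before any Drupal or phpinfo() code runs, so the error being independent of the script is consistent with an out-of-memory condition rather than an application fault.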

    Read the article

  • De-duplicating backup tool on a block basis? [closed]

    - by SST
    I am looking for an (ideally free as in speech or beer) backup tool for a Unix-like OS which can store deduplicated backups, i.e. only nonredundant content takes up additional space. I already looked at dirvish (my first candidate) and rsnapshot, which use hardlinks to achieve deduplication on a per-file level. However, as I want to back up large files (Thunderbird mailboxes 3GB, VMware images 10GB), such files are stored again entirely even if just a few bytes change. Then there are rsync-based tools like rdiff-backup which only store deltas and a current mirror. However, as the deltas are generated against each previous mirror, it is difficult to fine-tune the retention granularity (only keep one backup after a week, etc.) because the deltas would have to be re-evaluated. Another approach is to partition content into blocks and store each block only if it is not stored yet, otherwise just linking it to the first occurrence. The only tool I know of that does this by now is obnam (http://liw.fi/obnam), and it even supports zlib-compression and gpg-encryption -- nice! But it is very slow, AFAICT. Does anyone know any other, solid backup software which supports deduplication on a sub-file level, ideally with at least some management options (show/select/delete generations...)?
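    One more tool in the same space worth sketching, offered as a suggestion rather than an endorsement: bup stores backups in a git-style repository and splits files with a rolling checksum, so only the changed blocks of a large mailbox or VM image take new space. The commands below reflect its usual workflow; check them against the installed version.
        bup init                                   # create the repository (~/.bup by default)
        bup index /home/user/Mail /var/vm-images   # scan what will be saved
        bup save -n nightly /home/user/Mail /var/vm-images
        bup ls nightly                             # list stored generations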

    Read the article

  • Remote reboot of windows to knoppix

    - by user64452
    I am attempting to develop an Auditing application. This audit application will be employed on windows networks. The Audit will need to discover Hardware and software details of all machines attached to the network (including Printers) I do not want to have to install this application on each workstation. The audit app. needs to discover all the ip addresses of all the networked workstations. I have been prototyping this app for the last couple of months and have decided to try a new tack Is this possible? a). You have a windows network, min Windows XP sp3 and upwards b). Maximum of 100 Networked machines (if that matters) c). I need to remotely reboot each WINDOWS machine in turn on the entire network and get it to startup using UNIX, say knoppix for example! d). however the knoppix live cd is only available from one of the networked machines Questions... Morphology? Longevity? Incept dates? Cheers DD

    Read the article

  • Compare-Object gives false differences

    - by Andy
    I have a problem with Compare-Object. My task is to get the difference between two directory snapshots made at different times. The first snapshot is taken like this:
        ls -recurse d:\dir | export-clixml dir-20100129.xml
    Then, later, I take a second snapshot and load both of them:
        $b = (import-clixml dir-20100130.xml)
        $a = (import-clixml dir-20100129.xml)
    Next, I try to compare with Compare-Object, like this:
        diff $a $b
    What I get is in some places files that were added to $b since $a, but in others files that were in both snapshots; and some files that were added to $b are not given in the Compare-Object output at all. Puzzling, but $b.count - $a.count is EXACTLY the same as (diff $a $b).count. Why is that? OK, Compare-Object has a -Property parameter; I try to use that:
        diff -property fullname $a $b
    And I get a whole mess of differences: it shows me ALL the files. For example, say $a contains:
        A\1.txt
        A\2.txt
        A\3.txt
    And $b contains:
        X\2.mp3
        X\3.mp3
        X\4.mp3
        A\1.txt
        A\2.txt
        A\3.txt
    The diff output is something like this:
        X\2.mp3 =>
        A\1.txt <=
        X\3.mp3 =>
        A\2.txt <=
        X\4.mp3 =>
        A\3.txt <=
        A\1.txt =>
        A\2.txt =>
        A\3.txt =>
    Weird. I think I don't understand something crucial about Compare-Object usage, and manuals are scarce... Please help me to get the DIFFERENCE between two directory snapshots. Thanks in advance.
    UPDATE: I've saved the data as plain strings like this:
        > import-clixml dir-20100129.xml | % { $_.fullname } | out-file -enc utf8 a.txt
    and the results are the same. Here are excerpts of both snapshots (the top 100-something lines, a.txt and b.txt), the output of Compare-Object, and the output of UNIX diff (unified). All files are UTF-8: http://dl.dropbox.com/u/2873752/compare-object-problem.zip
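    Since the UPDATE already exports both snapshots as plain UTF-8 path lists, a shell-side cross-check is easy and sidesteps Compare-Object entirely; comm needs sorted input, hence the sort step.
        sort -u a.txt > a.sorted
        sort -u b.txt > b.sorted
        comm -13 a.sorted b.sorted    # paths present only in the new snapshot (added)
        comm -23 a.sorted b.sorted    # paths present only in the old snapshot (removed)
    If this list and the Compare-Object list disagree, the problem is in how the FileInfo objects are being compared, not in the snapshots themselves.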

    Read the article

  • nginx codeigniter rewrite: Controller name conflicts with directory

    - by palerdot
    I'm trying out nginx and porting my existing Apache configuration to it. I have managed to reroute the CodeIgniter URLs successfully, but I'm having a problem with one particular controller whose name coincides with a directory in the site root: my CodeIgniter URLs work as they did in Apache, except that a particular URL, say http://localhost/hello, coincides with a hello directory in the site root. Apache had no problem with this, but nginx routes to this directory instead of the controller. My reroute structure is as follows:
        http://host_name/incoming_url => http://host_name/index.php/incoming_url
    All the CodeIgniter files are in the site root. My nginx configuration (relevant parts):
        location / {
            # First attempt to serve request as file, then
            # as directory, then fall back to index.html
            index index.php index.html index.htm;
            try_files $uri $uri/ /index.php/$request_uri;
            #apache rewrite rule conversion
            if (!-e $request_filename){
                rewrite ^(.*)/?$ /index.php?/$1 last;
            }
            # Uncomment to enable naxsi on this location
            # include /etc/nginx/naxsi.rules
        }
        location ~ \.php.*$ {
            fastcgi_split_path_info ^(.+\.php)(/.+)$;
            # NOTE: You should have "cgi.fix_pathinfo = 0;" in php.ini
            # With php5-cgi alone:
            fastcgi_pass 127.0.0.1:9000;
            # With php5-fpm:
            #fastcgi_pass unix:/var/run/php5-fpm.sock;
            fastcgi_index index.php;
            include fastcgi_params;
        }
    I'm new to nginx and need help figuring out this directory conflict with the controller name. I pieced this configuration together from various sources on the web, and any better way of writing it is greatly appreciated.
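    One observation that may explain the behaviour, with a sketch rather than a tested fix: in try_files the "$uri/" term matches the real hello/ directory before the /index.php fallback is ever reached, and the if/rewrite block duplicates what try_files already does. Assuming no real directories need to be served directly at the top level, the location can be reduced to the fragment below.
        # inside the existing server { } block
        #     location / {
        #         index index.php index.html index.htm;
        #         try_files $uri /index.php?/$request_uri;
        #     }
        nginx -t && nginx -s reload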

    Read the article

  • Restricting access to one controller of an MVC app with Nginx

    - by kgb
    I have an MVC app where one controller needs to be accessible only from several IPs (this controller is an OAuth token callback trap, for Google/Facebook API tokens). My conf looks like this:
        geo $oauth {
            default 0;
            87.240.156.0/24 1;
            87.240.131.0/24 1;
        }
        server {
            listen 80;
            server_name some.server.name.tld default_server;
            root /home/user/path;
            index index.php;
            location /oauth {
                deny all;
                if ($oauth) {
                    rewrite ^(.*)$ /index.php last;
                }
            }
            location / {
                if ($request_filename !~ "\.(phtml|html|htm|jpg|jpeg|gif|png|ico|css|zip|tgz|gz|rar|bz2|doc|xls|exe|pdf|ppt|txt|tar|mid|midi|wav|bmp|rtf|js|xlsx)$") {
                    rewrite ^(.*)$ /index.php last;
                    break;
                }
            }
            location ~ \.php$ {
                fastcgi_pass unix:/var/run/php5-fpm.sock;
                fastcgi_index index.php;
                include fastcgi_params;
            }
        }
    It works, but it does not look right. The following seems more logical to me:
        location /oauth {
            allow 87.240.156.0/24;
            deny all;
            rewrite ^(.*)$ /index.php last;
        }
    But this way the rewrite happens all the time and the allow and deny directives are ignored. I don't understand why...
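    A possible explanation and a sketch, not a verified answer: nginx runs the rewrite phase before the access phase, and a rewrite with "last" restarts location matching, so the request leaves /oauth before allow/deny ever get a chance to run; that is also why the geo-variable version works. Keeping the geo map and making the check explicit reads a little less surprisingly:
        # inside the existing server { } block
        #     location /oauth {
        #         if ($oauth = 0) { return 403; }
        #         rewrite ^ /index.php last;
        #     }
        nginx -t && nginx -s reload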

    Read the article

  • How to make Windows command prompt treat single quote as though it is a double quote?

    - by mark
    My scenario is simple - I am copying script samples from the Mercurial online book (at http://hGBook.red-bean.com) and pasting them in a Windows command prompt. The problem is that the samples in the book use single quoted strings. When a single quoted string is passed on the Windows command prompt, the latter does not recognize that everything between the single quotes belongs to one string. For example, the following command: hg commit -m 'Initial commit' cannot be pasted as is in a command prompt, because the latter treats 'Initial commit' as two strings - 'Initial and commit'. I have to edit the command after paste and it is annoying. Is it possible to instruct the Windows command prompt to treat single quotes similarly to the double one? EDIT Following the reply by JdeBP I have done a little research. Here is the summary: Mercurial entry point looks like so (it is a python program): def run(): "run the command in sys.argv" sys.exit(dispatch(request(sys.argv[1:]))) So, I have created a tiny python program to mimic the command line processing used by mercurial: import sys print sys.argv[1:] Here is the Unix console log: [hg@Quake ~]$ python 1.py "1 2 3" ['1 2 3'] [hg@Quake ~]$ python 1.py '1 2 3' ['1 2 3'] [hg@Quake ~]$ python 1.py 1 2 3 ['1', '2', '3'] [hg@Quake ~]$ And here is the respective Windows console log: C:\Workpython 1.py "1 2 3" ['1 2 3'] C:\Workpython 1.py '1 2 3' ["'1", '2', "3'"] C:\Workpython 1.py 1 2 3 ['1', '2', '3'] C:\Work One can clearly see that Windows does not treat single quotes as double quotes. And this is the essence of my question.

    Read the article

  • How to backup virtual machines on a standalone ESXi host?

    - by Massimo
    Standalone ESXi (4.1) host without any vCenter Server. How to backup virtual machines as quickly and storage-friendly as possible? I know I can access the ESXi console and use the standard Unix cp command, but this has the downfall of copying the whole VMDK files, not only their actually used space; so, for a 30-GB VMDK of which only 1 GB is used, the backup would take 30 full GBs of space, and time accordingly. And yes, I know about thin-provisioned virtual disks, but they tend to behave very badly when physically copied, and/or to blow up to their full provisioned size; also, they are not recommended for actual VM performance. It is ok for me to shut down the VMs before backing them up (i.e. I don't need "live" backups); but I need a way to copy them around efficiently; and yes, a way to automate shutdown/startup when taking a backup would also help. I only have ESXi; no Service Console, no vCenter Server... what's the best way to handle this task? Also, what about restores?
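    A minimal sketch of the usual all-ESXi-shell approach, assuming Tech Support Mode (the built-in busybox shell) is enabled; the datastore paths and the VM id are placeholders, and community wrappers such as ghettoVCB automate exactly this sequence.
        vim-cmd vmsvc/getallvms                      # find the VM's numeric id
        vim-cmd vmsvc/power.shutdown 12              # clean guest shutdown (needs VMware Tools)
        vmkfstools -i /vmfs/volumes/datastore1/vm/vm.vmdk \
                   /vmfs/volumes/backup/vm/vm.vmdk -d thin    # copies only allocated blocks
        cp /vmfs/volumes/datastore1/vm/vm.vmx /vmfs/volumes/backup/vm/
        vim-cmd vmsvc/power.on 12
    Restores are the same copy in reverse, followed by re-registering the .vmx with vim-cmd solo/registervm if the original VM is gone.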

    Read the article

  • Getting PAM/user info into php - something like Net_Finger instead of a db?

    - by digitaltoast
    I've got a very small user group who just need to log in, upload, check, and then move specific files to a different area when ready. Right now I use the nginx PAM auth module to log them in against their unix accounts. As their login is their home directory, I've already got the info to send the uploads to the right area - one line of PHP and no database needed. But I'm maintaining a separate DB just so PHP can welcome them, grab their email, and send them an email when processed. Yes, sure, I could use NoSQL or SQLite instead so as to not need a whole MySQL install. But it occurred to me that since I've got all these blank user fields for phone numbers that I could populate with any data, I could use something like PHP's Net_Finger. Which failed for me with:
        sudo pear install Net_Finger
        Starting to download Net_Finger-1.0.1.tgz (1,618 bytes)
        ....done: 1,618 bytes
        could not extract the package.xml file from "/build/buildd/php5-5.5.9+dfsg/pear-build-download/Net_Finger-1.0.1.tgz"
        Download of "pear/Net_Finger" succeeded, but it is not a valid package archive
        Error: cannot download "pear/Net_Finger"
    At which point I thought I'd stop and take a ServerFault reality check: is this a really bad/dangerous/stupid idea just to stop me having to maintain details in two places rather than one? Is there a better way? Googling shows that it's not an oft-asked thing, so perhaps with good reason?
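    Since the accounts already live in /etc/passwd (or whatever NSS source PAM uses), the GECOS field can carry the extra per-user data with no finger daemon and no second database; a small sketch, with the username and values as placeholders.
        getent passwd alice | cut -d: -f5        # read the GECOS field (full name, office, phone, ...)
        sudo chfn -f 'Alice Example' alice       # set the full-name sub-field
    On the PHP side, posix_getpwnam('alice') returns the same record, including a 'gecos' entry, so the welcome text (and an email address stored there) is one function call away.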

    Read the article

  • Nginx: Rewriting directory path to file

    - by Doug
    I'm a little new to nginx here, so bear with me - I want to rewrite a URL like foo.bar.com/newfoo?limit=30 to foo.bar.com/newfoo.php?limit=30. It seems pretty simple to do with something like this:
        rewrite ^([a-z]+)(.*)$ $1.php$2 last;
    The part that I am confused about is where to put it - I've tried my hand at some location directives but I'm doing it wrong. Here's my existing virtual host config; where should I implement my rewrite?
        server {
            listen 80;
            listen [::]:80;
            server_name foo.bar.com;
            root /home/foo;
            index index.php index.html index.htm;
            location / {
                try_files $uri $uri/ /index.html;
            }
            location /doc/ {
                alias /usr/share/doc/;
                autoindex on;
                allow 127.0.0.1;
                deny all;
            }
            location ~ \.php$ {
                try_files $uri =404;
                fastcgi_split_path_info ^(.+\.php)(/.+)$;
                include fastcgi_params;
                fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
                fastcgi_pass unix:/tmp/php5-fpm.sock;
                fastcgi_index index.php;
            }
        }
    Thanks!
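    One possible placement, sketched rather than tested: let location / fall back to a named location that appends .php, so existing files and the /doc/ alias keep working and the query string is carried over automatically by the rewrite.
        # inside the existing server { } block
        #     location / {
        #         try_files $uri $uri/ @addphp;
        #     }
        #     location @addphp {
        #         rewrite ^(.*)$ $1.php last;
        #     }
        nginx -t && nginx -s reload
    After the rewrite the request re-enters location matching and lands in the existing "location ~ \.php$" block, whose try_files $uri =404 then guards against names that have no matching .php file.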

    Read the article

  • How should an experienced Windows SysAdmin learn Linux? [closed]

    - by Systemspoet
    I have a new hire starting in a few weeks who is an experienced Windows SysAdmin. I think he's fairly senior on the Windows side, with a pretty deep AD understanding and experience with Exchange 2007, 2010, and Exchange migrations. He's done a little PowerShell, but I suspect more of the "run this command to do this" variety than the "write a script to do this" sort. However, we are a mixed shop and (he knows this) I expect him to become a reasonably competent Linux SysAdmin over time. I'm looking for good starting points to bring him along. I have over ten years of Linux/UNIX experience, so it all sort of seems intuitive to me, but I've been thinking about the toolkit you actually need to be productive in the Linux CLI world. Just to be able to use the machines at all, off the top of my head:
        - vi
        - Basic CLI stuff -- move around, rename files, copy files, tar, gzip, changing passwords, finding relevant manpages, keeping track of where you are, finding things in your history, etc, etc.
        - More advanced things that I take for granted but are actually pretty hard -- doing things with 'find', extracting relevant text via 'awk' and/or 'cut', knowing when to use 'grep' and when to use 'grep -e' or 'egrep'.
        - Distribution-specific stuff... compiling software, rpm, yum, apt-get, you name it.
    This all seems pretty basic to me, but when I think back to 1995 when I was first learning my way, some of those things took me years to master. So my question is: where should I send him to pick up those skills? I'm not just thinking of classes, but also websites and books. Where do you all suggest as a starting point for picking up Linux skills?

    Read the article
