Search Results

Search found 24334 results on 974 pages for 'directory loop'.


  • Removing trailing slashes in WordPress blog hosted on IIS

    - by Zishan
    I have a WordPress blog hosted in my IIS virtual directory that has all URLs ending with a forward slash. For example: http://www.example.com/blog/ I have the following rules defined in my web.config: <rule name="wordpress" patternSyntax="Wildcard"> <match url="*" /> <conditions> <add input="{REQUEST_FILENAME}" matchType="IsFile" negate="true" /> <add input="{REQUEST_FILENAME}" matchType="IsDirectory" negate="true" /> </conditions> <action type="Rewrite" url="index.php" /> </rule> <rule name="Redirect-domain-to-www" patternSyntax="Wildcard" stopProcessing="true"> <match url="*" /> <conditions> <add input="{HTTP_HOST}" pattern="example.com" /> </conditions> <action type="Redirect" url="http://www.example.com/blog/{R:0}" /> </rule> In addition, I tried adding the following rule for removing trailing slashes: <rule name="Remove trailing slash" stopProcessing="true"> <match url="(.*)/$" /> <conditions> <add input="{REQUEST_FILENAME}" matchType="IsFile" negate="true" /> <add input="{REQUEST_FILENAME}" matchType="IsDirectory" negate="true" /> </conditions> <action type="Redirect" redirectType="Permanent" url="{R:1}" /> </rule> It seems that the last rule doesn't work at all. Anyone around here who has attempted to remove trailing slashes from WordPress blogs hosted on IIS?
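    A hedged guess at why the last rule never fires (assuming IIS URL Rewrite 2.0 and only the rules shown): rules run top to bottom, and the wordpress catch-all rewrites the URL to index.php before the trailing-slash rule is ever reached, so the original slash-terminated URL never gets there. A minimal sketch with the redirect moved to the top of the rule list, using (.+) so the site root keeps its slash:

        <rule name="Remove trailing slash" stopProcessing="true">
          <match url="(.+)/$" />
          <conditions>
            <add input="{REQUEST_FILENAME}" matchType="IsFile" negate="true" />
            <add input="{REQUEST_FILENAME}" matchType="IsDirectory" negate="true" />
          </conditions>
          <action type="Redirect" redirectType="Permanent" url="{R:1}" />
        </rule>
        <!-- the existing "wordpress" and "Redirect-domain-to-www" rules follow here -->

    Note that WordPress's own canonical-redirect code may put the slash straight back if the permalink structure ends in one, so the permalink settings have to agree with the rewrite rule.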

    Read the article

  • Postgres pgpass windows - not working

    - by Scott
    DB: Postgres 9.0. Client: Windows 7. Server: Windows 2008, 64-bit. I'm trying to connect remotely to a Postgres instance for purposes of performing a pg_dump to my local machine. Everything works from my client machine, except that I need to provide a password at the password prompt, and I'd ultimately like to batch this with a script. I've followed the instructions here: http://www.postgresql.org/docs/current/static/libpq-pgpass.html but it's not working. To recap, I've created a file on the client (and tried the server as well): C:/Users/postgres/AppData/postgresql/pgpass.conf, where postgres is the db user. The file has one line with the following data: *:5432:*postgres:[mypassword] (I also tried explicit ip/dbname values, all asterisks, and every combination in between, and I've also tried replacing each '*' with [localhost|myip] and [mydatabasename] respectively). From my client machine, I connect using: pg_dump -h [myip] -U postgres -w [mydbname] [mylocaldumpfile] I'm presuming that I need to provide the '-w' switch in order to skip the password prompt, at which point it should look in the AppData directory on the server. It just comes back with "connection to database failed: fe_sendauth: no password supplied". Any insights are appreciated. As a hack workaround, if there was a way I could tell the Windows batch file on my client machine to inject the password at the Postgres prompt, that would work as well. Thanks.
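    For what it's worth, libpq on Windows looks for the file on the machine running pg_dump (the client), under %APPDATA%\postgresql\pgpass.conf (i.e. C:\Users\<you>\AppData\Roaming\postgresql\pgpass.conf, note the Roaming part), and each line needs all five colon-separated fields in the form hostname:port:database:username:password. A minimal sketch with a placeholder password:

        *:5432:*:postgres:mypassword

    and the corresponding client-side command (placeholder host and paths), where -w means "never prompt":

        pg_dump -h 192.0.2.10 -p 5432 -U postgres -w -f C:\dumps\mydb.dump mydbname

    The line quoted in the question ( *:5432:*postgres:... ) is missing the colon between the database field and the username, which would by itself prevent the entry from matching.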

    Read the article

  • Processes spawned by taskset not respecting environment variables

    - by jonesy16
    I've run into an issue where an Intel-compiler-generated program that I'm running with taskset has been putting its temporary files into the working directory instead of /tmp (defined by the environment variable TMPDIR). If run by itself, it works correctly. If run with taskset (e.g. taskset -c 0 <program>), then it seems to completely ignore the TMPDIR environment variable. I then verified this by writing a quick bash script as follows. Contents of test.sh: #!/bin/bash echo $TMPDIR When run by itself: $ export TMPDIR=/tmp $ test.sh /tmp When run through taskset: $ export TMPDIR=/tmp $ taskset -c 1 test.sh "" Another test: if I export the TMPDIR variable inside of my script and then use taskset to spawn a new process, it doesn't know about that variable: #!/bin/bash export TMPDIR=/tmp taskset -c 1 sh -c export When run, the list of exported variables does not include TMPDIR. It works correctly with any other exported environment variable. If I diff the output of: export and taskset -c 1 bash -c export then I see that there are 4 changes. The taskset-spawned export doesn't have LD_LIBRARY_PATH or NLSPATH (an Intel compiler variable), SHLVL is 3 instead of 1, and TMPDIR is missing. Can anyone tell me why?
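    Not a diagnosis, but a workaround sketch while the root cause is unclear: force the variable into the spawned process's environment explicitly with env, which sidesteps whatever is stripping it along the way.

        # set TMPDIR just for the pinned process
        taskset -c 1 env TMPDIR=/tmp ./test.sh
        # or keep the pinning outermost and rebuild only the variables you name
        env TMPDIR=/tmp taskset -c 1 ./program

    If even `taskset -c 1 env` shows TMPDIR missing while plain `env` shows it, the variable is being lost before the program starts rather than inside the program.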

    Read the article

  • Creating multiple SFTP users for one account

    - by Tom Marthenal
    I'm in the process of migrating an aging shared-hosting system to more modern technologies. Right now, plain old insecure FTP is the only way for customers to access their files. I plan on replacing this with SFTP, but I need a way to create multiple SFTP users that correspond to one UNIX account. A customer has one account on the machine (e.g. customer) with a home directory like /home/customer/. Our clients are used to being able to create an arbitrary number of FTP accounts for their domains (to give out to different people). We need the same capability with SFTP. My first thought is to use SSH keys and just add each new "user" to authorized_keys, but this is confusing for our customers, many of whom are not technically-inclined and would prefer to stick with passwords. SSH is not an issue, only SFTP is available. How can we create multiple SFTP accounts (customer, customer_developer1, customer_developer2, etc.) that all function as equivalents and don't interfere with file permissions (ideally, all files should retain customer as their owner)? My initial thought was some kind of PAM module, but I don't have a clear idea of how to accomplish this within our constraints. We are open to using an alternative SSH daemon if OpenSSH isn't suitable for our situation; again, it needs to support only SFTP and not SSH. Currently our SSH configuration has this appended to it in order to jail the users in their own directories: # all customers have group 'customer' Match group customer ChrootDirectory /home/%u # jail in home directories AllowTcpForwarding no X11Forwarding no ForceCommand internal-sftp # force SFTP PasswordAuthentication yes # for non-customer accounts we use keys instead Our servers are running Ubuntu 12.04 LTS.
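    One possible approach (a sketch under assumptions, not a tested recipe): create the extra logins as members of the customer group with no shell, chroot the whole group into the customer's tree, and rely on a setgid upload directory plus an SFTP umask so everything stays group-owned and group-writable. The file owner will still be whoever uploaded it; keeping the literal owner as customer would need something extra such as bindfs or a periodic chown.

        # hypothetical extra login tied to the customer's group
        useradd -g customer -M -s /usr/sbin/nologin customer_developer1
        passwd customer_developer1
        # group-writable, setgid data directory inside the chroot
        chmod 2775 /home/customer/files

        # sshd_config sketch (OpenSSH 5.9 on 12.04 supports internal-sftp -u)
        Match Group customer
            ChrootDirectory /home/customer        # must be root-owned and not group-writable
            ForceCommand internal-sftp -u 0002    # uploads land group-writable
            AllowTcpForwarding no
            X11Forwarding no
            PasswordAuthentication yes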

    Read the article

  • Apache proxy: Why is one vhost returning Forbidden while the other one works?

    - by Stefan Majewsky
    I have a Java application that needs to talk to another intranet website using HTTPS in both directions. After fighting with Java's SSL implementations for some time, I gave up on that, and have now set up an Apache that's supposed to act as a bidirectional reverse proxy: external app ---(HTTPS request)---> Apache ---(local HTTP request)---> Java app This direction works just fine, however the other direction does not: Java app ---(local HTTP request)---> Apache ---(HTTPS request)---> external app This is the configuration for the vhost implementing the second proxy: Listen 127.0.0.1:8081 <VirtualHost appgateway:8081> ServerName appgateway.local SSLProxyEngine on ProxyPass / https://externalapp.corp:443/ ProxyPassReverse / https://externalapp.corp:443/ ProxyRequests Off AllowEncodedSlashes On # we do not need to apply any more restrictions here, because we listened on # local connections only in the first place (see the Listen directive above) <Proxy https://externalapp.corp:443/*> Order deny,allow Allow from all </Proxy> </VirtualHost> A curl http://127.0.0.1:8081/ should serve the equivalent of https://externalapp.corp, but instead results in 403 Forbidden, with the following message in the Apache error log: [Wed Jun 04 08:57:19 2014] [error] [client 127.0.0.1] Directory index forbidden by Options directive: /srv/www/htdocs/ This message completely puzzles me: Yes, I have not set up any permissions on the DocumentRoot of this vhost, but everything works fine for the other proxy direction where I haven't. For reference, here's the other vhost: Listen this_vm_hostname:443 <VirtualHost javaapp:443> ServerName javaapp.corp SSLEngine on SSLProxyEngine on # not shown: SSLCipherSuite, SSLCertificateFile, SSLCertificateKeyFile SSLOptions +StdEnvVars ProxyPass / http://localhost:8080/ ProxyPassReverse / http://localhost:8080/ ProxyRequests Off AllowEncodedSlashes On # Local reverse proxy authorization override <Proxy http://localhost:8080/*> Order deny,allow Allow from all </Proxy> </VirtualHost>
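    One thing worth checking (an assumption based on the error text, not a confirmed diagnosis): /srv/www/htdocs/ is a DocumentRoot, which suggests the request on 127.0.0.1:8081 is being answered by the default/main server rather than by this vhost, and that happens when the <VirtualHost> address does not match the address the request arrived on. A sketch with the vhost bound to the same address as its Listen line:

        Listen 127.0.0.1:8081
        <VirtualHost 127.0.0.1:8081>
            ServerName appgateway.local
            SSLProxyEngine on
            ProxyRequests Off
            AllowEncodedSlashes On
            ProxyPass        / https://externalapp.corp:443/
            ProxyPassReverse / https://externalapp.corp:443/
        </VirtualHost>

    `apachectl -S` lists which vhost each address/port pair actually resolves to, which makes this quick to confirm either way.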

    Read the article

  • Windows 7 - All Icons Missing, Explorer Progress Bar Never Finishes, Libraries Gone

    - by Alex
    Since yesterday I've had three issues which all arose at the same time. Windows 7 x64, i7 2.8GHz, 12GB DDR3. 1 - My libraries, favorites and drives are missing... basically the entire sidebar is gone. http://i.imgur.com/m8pRQ.png. Yet when I open a dialog, my libraries and drives are back to normal ONLY for the dialog. I tried Restore Default Libraries. It works one time, but when I open libraries again I go back to the empty mess. Restarting the computer temporarily fixes the problem. 2 - In the explorer window that's showing libraries, when I navigate to a certain folder I get an unending progress bar (the kind that turns the address bar green). Yesterday when the problem started, I was saving a file to this folder. The program writing the file crashed during the write and I believe that's what caused the problem. I have SugarSync backing up that folder, and when I restarted the computer SugarSync informed me that its database was corrupted, so I had to uninstall and reinstall the software. 3 - Icons are missing. The Rebuild Icon Cache did not fix this. http://i.imgur.com/r9pgo.png Restarting the computer temporarily fixes these problems, but when I open the directory with the initial write problem, everything stops working. Edit: I should note that I did a chkdsk /f and it repaired problems. I also did the thing that verifies then restores Windows files (can't remember the command now), which reported that everything was normal.
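    For completeness, the half-remembered "verify and restore Windows files" tool is most likely the System File Checker, and the icon cache can also be rebuilt by hand; a sketch from an elevated command prompt (general Windows 7 housekeeping, not a guaranteed fix for the underlying write problem):

        sfc /scannow
        rem force a manual icon-cache rebuild
        taskkill /f /im explorer.exe
        del /a "%LocalAppData%\IconCache.db"
        start explorer.exe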

    Read the article

  • Transferring files to a server IP and port

    - by Mason
    I need to transfer files from my local computer on Windows 7 to a server running Linux. I access the server with PuTTY through SSH at a specific IPv4 address and port number. I've attempted using the pscp command from my local computer but was denied access by the server: "Fatal: Network error: Connection refused" c:>pscp test.csv userid@[IPv4_Address]:Port# /path/destination_file_name. Either the server blocks all pscp attempts from unauthorized users (most likely my laptop included) or I used the command incorrectly. If you have experience using this command, where exactly will the file get transferred to? I'm assuming that the destination path starts at my home directory on the server. Also, if you have any other alternative methods of transferring the files, let me know. Update 1: I have also tried using WinSCP, however I got permission denied for that as well; it looks like the server will not let me upload or save files. Solved: I had a complete lapse of memory and forgot about sudo (spent too much time with scripts the last 2 months), so I was able to change the permissions to allow external editing. Thanks for all the help guys!
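    A side note on the pscp syntax, since the port cannot be given with a colon the way it reads in the question: pscp takes the port with -P, and the text after the colon is the remote path (a relative path lands in the login user's home directory). "Connection refused" is also consistent with pscp falling back to the default port 22 when no -P is given. A sketch with placeholder host, port and paths:

        pscp -P 2222 test.csv userid@192.0.2.7:/home/userid/test.csv
        rem or let the relative path default to the home directory:
        pscp -P 2222 test.csv userid@192.0.2.7:test.csv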

    Read the article

  • bind would not work unless allow-query is "any"

    - by adrianTNT
    I have this in /etc/named.conf; I commented out the default values and set my own under them. My domain would not load in a browser unless I set allow-query to "any". Is this OK, and what should I edit? If it is localhost or 127.0.0.1; 10.0.1.0/24; the domain would not load. I tried the 127.0.0.1 value because it was mentioned here: http://wiki.mandriva.com/en/Testing:Bind Bind version is 9.7.0-P2-RedHat-9.7.0-5.P2.el6_0.1 and the OS is CentOS 6.0. options { // listen-on port 53 { 127.0.0.1; }; listen-on port 53 { any; }; //listen-on-v6 port 53 { ::1; }; listen-on-v6 port 53 { any; }; directory "/var/named"; dump-file "/var/named/data/cache_dump.db"; statistics-file "/var/named/data/named_stats.txt"; memstatistics-file "/var/named/data/named_mem_stats.txt"; //allow-query { localhost; }; allow-query { any; }; recursion yes; dnssec-enable yes; dnssec-validation yes; dnssec-lookaside auto; /* Path to ISC DLV key */ bindkeys-file "/etc/named.iscdlv.key"; };
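    If the server is meant to be authoritative for a public domain, then allow-query { any; } is normal: "localhost" only matches queries originating on the server itself, which is why nothing else could resolve the zone. What usually gets restricted instead is recursion. A sketch (the subnets are placeholders to adjust):

        acl "trusted" { 127.0.0.1; 10.0.1.0/24; };

        options {
            listen-on port 53 { any; };
            allow-query     { any; };       // authoritative answers for anyone
            allow-recursion { trusted; };   // only local clients may use it as a resolver
            recursion yes;
            // ... rest of the options block as in the question ...
        };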

    Read the article

  • DNSSEC - First Signature

    - by Arancha
    I'm testing DNSSEC with Bind 9.7.2-P2. I have a question regarding the first signature created over a zone that already exists. I'm using dynamic DNS. I create the first two keys: one KSK and one ZSK. According to https://datatracker.ietf.org/doc/draft-ietf-dnsop-dnssec-key-timing/, the first ZSK needs to be published for an interval equal to Ipub before it can be active. I create the ZSK with a publication date prior to its activation date. I restart the service and I can see that the key is published at the publication date, but it is not active later, when the activation date arrives. This is the configuration of the zone dnssec.es in the named.conf file: zone "dnssec.es" { auto-dnssec maintain; update-policy local; sig-validity-interval 1; key-directory "dnssec/keys_dnssec"; type master; file "dnssec/db.dnssec.es"; }; Any clue?? Regards
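    A hedged sketch of the moving parts worth double-checking (assuming the keys live in dnssec/keys_dnssec as the zone statement says): the key files have to carry explicit timing metadata, and named does not necessarily notice metadata changes on its own, so it is worth prodding it with rndc after creating or changing keys:

        # ZSK published immediately but not activated for two days (Ipub), sketch values
        dnssec-keygen -a RSASHA256 -b 1024 -K dnssec/keys_dnssec -P now -A +2d dnssec.es
        # tell the running named to re-read the key directory for this zone
        rndc loadkeys dnssec.es
        # and to re-sign with whatever keys are currently active
        rndc sign dnssec.es

    `dnssec-settime -p all Kdnssec.es.+008+<keyid>` prints the timing metadata actually recorded in a key file, which makes it easy to see whether the activation date named sees is the one intended.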

    Read the article

  • Inheriting file ownership on linux

    - by John Hunt
    We have an ongoing problem here at work. We have a lot of websites set up on shared hosts, and our CMS writes many files to these sites and allows users of the sites to upload files etc. The problem is that when a user uploads a file on the site, the owner of that file becomes the webserver, which prevents us from being able to change permissions etc. via FTP. There are a few workarounds, but really what we need is a way to set a sticky owner, if that's possible, on new files and directories that are created on the server. E.g. rather than PHP writing the file as user apache, it takes on the owner of the parent directory. I'm not sure if this is possible (I've never seen it done). Any ideas? We're obviously not going to get a login for apache to the server, and I doubt we could get into the apache group either. Perhaps we need a way of allowing apache to set at least the group of a file; that way we could set the group to our FTP user in PHP and set 664 and 775 for any files that are written? Cheers, John.
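    There is no supported "sticky owner" on Linux filesystems, but the group half of the idea does exist: a setgid directory makes new files inherit the directory's group, and a umask of 0002 on the PHP side keeps them group-writable. A sketch, assuming a shared group (here called sitegroup) that contains both apache and the FTP user:

        # one-off setup on the upload tree (paths are placeholders)
        chgrp -R sitegroup /home/customer/public_html/uploads
        chmod -R g+rwX     /home/customer/public_html/uploads
        find /home/customer/public_html/uploads -type d -exec chmod g+s {} \;

    In the CMS, calling umask(0002) before writes (or chmod()/chgrp() after them) then gives 664/775 files that the FTP user can manage, even though the owner remains apache.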

    Read the article

  • How do I connect over SSH without being asked for the password every time? - I already followed some answers here but it doesn't work

    - by MEM
    MAC OS X Lion 10.7.3. 1) On the host, I've created an authorized_keys file inside the .ssh folder, by doing: touch authorized_keys 2) I've copied my public ssh key into the host's .ssh folder by doing: scp ~/.ssh/mykey.pub [email protected]:/home/userhost/.ssh/mykey.pub 3) I've placed its contents inside the authorized_keys file by doing: cat mykey.pub >> authorized_keys 4) Then I've removed the mykey.pub file: rm mykey.pub 5) On my terminal, locally, inside my ~/.ssh folder I ran: ssh-add mykey (notice that it is without the pub extension); 6) I've closed and opened the terminal again. When I first connected to this host, it was added to the known_hosts file inside ~/.ssh; I've opened known_hosts in pico and the hash is there. Still, every time I connect by doing: ssh [email protected] it requests a password! What am I missing here? UPDATE: I've done EVEN TWO MORE THINGS here: 7) Set your key to be the default identity - if it doesn't exist, create it: touch ~/.ssh/config and place inside it the following line: IdentityFile ~/.ssh/yourkeyname (id_rsa is normally your default key; you should switch it to your key. This tells ssh that outgoing connections should use this as the default identity.) 8) Add a bash process to your ssh-agent: ssh-agent bash ssh-add ~/.ssh/yourkeyname Lisinge's answer helped but it's not definitive. If we restart our machine, the password gets prompted again!!! How can we debug this? What can we do here? How can we check where this process is failing? UPDATE 2: If I use: ssh -v -i <keyfile> [email protected] I get, among other things: OpenSSH_5.6p1, OpenSSL 0.9.8r 8 Feb 2011 Warning: Identity file yourkeyname not accessible: No such file or directory. What does this message refer to? Is the identity file not accessible on the localhost, or is it not accessible on the remote host? Please advise
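    On the UPDATE 2 question: the -i path is resolved on the local machine, so "Identity file yourkeyname not accessible" just means ssh could not find a file called yourkeyname relative to the current directory on the Mac; it says nothing about the remote host. A sketch of the usual checklist (key and host names are placeholders):

        # on the remote host: sshd silently ignores authorized_keys that are too open
        chmod 700 ~/.ssh
        chmod 600 ~/.ssh/authorized_keys

        # on the Mac: point at the private key with a full path and watch the debug output
        ssh -v -i ~/.ssh/mykey user@remotehost

        # ~/.ssh/config, so plain "ssh remotehost" keeps picking the key up after a reboot
        Host remotehost
            HostName remotehost.example.com
            User user
            IdentityFile ~/.ssh/mykey

    On OS X Lion, `ssh-add -K ~/.ssh/mykey` stores the passphrase in the Keychain, which is what normally stops the prompting from coming back after a restart (only relevant if the key has a passphrase; a key without one shouldn't prompt at all once the server accepts it).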

    Read the article

  • Regarding partitions for dual-booting Ubuntu with pre-existing Windows 7

    - by Shasteriskt
    I have zero actual experience with configuring disk partitions and the stuff I have read for the past few hours has been confusing me a bit, so please bear with me. First of all, I'd like to explain what I'm setting out to achieve: Windows 7 with: C:\ Windows 7 (pre-existing installation) D:\ Data (already exists and has files already) Ubuntu 11 - does not exist yet, but I already have a LiveCD in hand; I plan a \root directory for Ubuntu, \home on its own partition, and \swap on its own partition with around 8GB. Here is the current situation: I have a single 500 GB hard disk with Windows 7 x64 installed, and the current partition scheme is as follows: System Reserved: 100 MB (Primary, Active) C: 100 GB - where Windows 7 is installed (Primary) D: 365 GB - where my files are located, LOTS of free space (Primary) Now, I would like to shrink my D: drive and create around 40 GB of unallocated disk space for the Ubuntu installation, but here is what's confusing me a bit: I'm thinking I would create an extended partition and subdivide it into 3 logical partitions for the Ubuntu setup I had in mind. (If you think my setup is a bad idea, please let me know why. I also hope you can suggest a better one...) I am aware that I can only have up to 4 primary partitions, or 3 primary partitions with 1 extended partition max. Now, does the System Reserved partition count as one primary partition? I'm really new to these things and it is totally unclear to me. In shrinking my D: drive using Windows 7's Disk Management tool, I would get unallocated free space which I don't know how to make an extended partition from. It seems like I can only create a primary partition from it, not an extended one. How do I go about it? (I'd also like to note, if it is of any importance, that I am trying to avoid using the option to install Ubuntu alongside Windows, and would much rather prefer using the custom install where I can specify which drives I wish to use and so on. Somehow I feel it's safer that way.)
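    On the direct question: yes, the System Reserved partition is a primary partition, so with it plus C: and D: three of the four MBR slots are used and the remaining one can become the extended partition. The Disk Management GUI offers no explicit extended-partition option (diskpart's `create partition extended` does), but GParted on the Ubuntu LiveCD, or the installer's manual "Something else" partitioner, can create it. A rough parted sketch with placeholder offsets that must be adjusted to where the unallocated gap really sits:

        sudo parted /dev/sda unit GiB print          # find the start/end of the free space
        sudo parted /dev/sda mkpart extended 425GiB 465GiB
        sudo parted /dev/sda mkpart logical ext4 425GiB 440GiB        # /
        sudo parted /dev/sda mkpart logical ext4 440GiB 457GiB        # /home
        sudo parted /dev/sda mkpart logical linux-swap 457GiB 465GiB  # swap

    The installer formats and assigns mount points afterwards, so doing the whole thing graphically in its manual partitioning screen is equivalent and probably less error-prone.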

    Read the article

  • Configure samba server for Unix group

    - by Bird Jaguar IV
    I'm trying to set up a samba server with access for users in the Linux (RHEL 6) "wheel" group. I am basing smb.conf off of the example here where it goes through the [accounting] example. In my smb.conf I have [tmp] comment = temporary files path = /var/share valid users = @wheel read only = No create mask = 0664 directory mask = 02777 max connections = 0 (rest of the output from $ testparm /etc/samba/smb.conf is here). And groups `whoami` returns user01 : wheel. When I use the following command from another machine (Mac OS) as the Linux user (user01): $ smbclient -L NETBIOSNAME/tmp it asks for a password, I hit return without a password, and get: Enter user01's password: Anonymous login successful Domain=[DOMAIN] OS=[Unix] Server=[Samba 3.6.9-151.el6_4.1] Sharename Type Comment --------- ---- ------- tmp Disk temporary files IPC$ IPC IPC Service (Samba Server Version 3.6.9-151.el6_4.1) But when I try $ smbclient //NETBIOSNAME/tmp I try entering the password I use for the Linux login, and get a bunch of stuff logged, including check_sam_security: Couldn't find user 'user01' in passdb. ... session setup failed: NT_STATUS_LOGON_FAILURE (I can give more logging information if it would be helpful.) I can't find a reference to more steps I need to add group users in the resource. Should I be manually adding samba users from the group somehow? Thank you
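    The "Couldn't find user 'user01' in passdb" line is the give-away: with security = user (the default in 3.6), Samba keeps its own password database, and a Unix account plus wheel membership is not enough on its own. A sketch of the usual fix, assuming the default tdbsam passdb backend:

        sudo smbpasswd -a user01     # create a Samba password entry for the Unix user
        sudo smbpasswd -e user01     # make sure the entry is enabled

        # then test with an explicit identity instead of the anonymous fallback
        smbclient //NETBIOSNAME/tmp -U user01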

    Read the article

  • What is going on when I can't access an SMB server share (not accessible error) until I run cmdkey to delete the credential?

    - by Warren P
    I have a network connection share issue. The first connection works, and seems to stay connected for at least a few hours. However, after each time my Windows 7 PC reboots, it can no longer form a network connection to the shared folder, nor browse to it, until I not only unmap and remap the mapped drive, but also use cmdkey to delete the stored credentials, like this: cmdkey /delete:Domain:target=HOSTNAME My work PC is on a domain, and I am not the IT administrator, but I'm curious if there is anything I can do to investigate this issue. Are there any settings in the registry or group policy that I could examine to see why the first connection works, but each subsequent attempt (once a stored credential exists) to browse or use the connection fails with a connection error saying it is "not accessible"? I do not even get any error until at least several minutes go by. The first thing I see is a window, frozen and empty, and then I get the error. This has happened when connecting to a share on a DROBO device, and on a share which is not on the domain but which was a Microsoft Home Server. I wonder if there's something broken in Windows 7 Professional with regards to connecting to non-domain shares when an Active Directory domain controller exists and a particular workstation is joined to a domain? The problem only occurs if I click "remember credentials". It is not fixed by any amount of working with net use. Using cmdkey to delete all stored credentials for the host is the only way to get back in, and it affects all non-domain shared folders. Update: I'm hoping there are some registry locations I could check that could be misconfigured in some way that might explain why SMB/CIFS stored credentials for non-domain systems seem to be auto-invalidated in this weird way. Knowing how whacko Microsoft Windows domain and security handling is sometimes, this could be some kind of stupid "feature".
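    Not a root-cause answer, but a workaround sketch some people use while investigating: store the credential deliberately with cmdkey (so its exact shape is known) instead of ticking "remember my credentials" in the prompt, then remap the drive. The names and password below are placeholders.

        cmdkey /list
        cmdkey /delete:HOSTNAME
        cmdkey /add:HOSTNAME /user:HOSTNAME\shareuser /pass:sharepassword
        net use Z: \\HOSTNAME\share /persistent:yes

    Comparing `cmdkey /list` output right after a working connection and again after the reboot at least shows whether the stored entry itself changes, or whether it is being rejected unchanged.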

    Read the article

  • .htaccess, mod_rewrite Issue

    - by Shoaibi
    What i want: Force www [works] Restrict access to .inc.php [works] Force redirection of abc.php to /abc/ Removal of extension from url Add a trailing slash if needed old .htaccess : Options +FollowSymLinks <IfModule mod_rewrite.c> RewriteEngine On RewriteBase / ### Force www RewriteCond %{HTTP_HOST} ^example\.net$ RewriteRule ^(.*)$ http://www\.example\.net/$1 [L,R=301] ### Restrict access RewriteCond %{REQUEST_URI} ^/(.*)\.inc\.php$ [NC] RewriteRule .* - [F,L] #### Remove extension: RewriteRule ^(.*)/$ /$1.php [L,R=301] ######### Trailing slash: RewriteCond %{REQUEST_FILENAME} !-f RewriteCond %{REQUEST_URI} !(.*)/$ RewriteRule ^(.*)$ http://www.example.net/$1/ [R=301,L] </IfModule> New .htaccess: Options +FollowSymLinks <IfModule mod_rewrite.c> RewriteEngine On RewriteBase / ### Force www RewriteCond %{HTTP_HOST} ^example\.net$ RewriteRule ^(.*)$ http://www\.example\.net/$1 [L,R=301] ### Restrict access RewriteCond %{REQUEST_URI} ^/(.*)\.inc\.php$ [NC] RewriteRule .* - [F,L] #### Remove extension: RewriteCond %{REQUEST_FILENAME} \.php$ RewriteCond %{REQUEST_FILENAME} -f RewriteRule (.*)\.php$ /$1/ [L,R=301] #### Map pseudo-directory to PHP file RewriteCond %{REQUEST_FILENAME}\.php -f RewriteRule (.*) /$1.php [L] ######### Trailing slash: RewriteCond %{REQUEST_FILENAME} -d RewriteCond %{REQUEST_FILENAME} !/$ RewriteRule (.*) $1/ [L,R=301] </IfModule> errorlog: Request exceeded the limit of 10 internal redirects due to probable configuration error. Use 'LimitInternalRecursion' to increase the limit if necessary. Use 'LogLevel debug' to get a backtrace., referer: http://www.example.net/ Rewrite.log: http://pastebin.com/x5PKeJHB
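    A commonly used restructuring for the remove-extension / map-back pattern, which stops the two rules from acting on each other's output, keys the external redirect on the original client request line instead of the rewritten filename. A sketch, not a drop-in replacement; the www-force and .inc.php rules stay as they are:

        #### Remove extension: only when the *client* asked for .php
        RewriteCond %{THE_REQUEST} \s/([^?\s]+)\.php[?\s] [NC]
        RewriteRule ^ /%1/ [R=301,L]

        #### Map pseudo-directory to PHP file (internal rewrite only)
        RewriteCond %{REQUEST_FILENAME} !-d
        RewriteCond %{DOCUMENT_ROOT}/$1.php -f
        RewriteRule ^(.*?)/?$ /$1.php [L]

    Because %{THE_REQUEST} never changes during internal rewriting, the redirect cannot re-trigger itself, which removes one common source of the "exceeded the limit of 10 internal redirects" error.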

    Read the article

  • Samba access works with IP address only

    - by Sebastian Rittau
    I added a Debian etch host (hostname: webserver, IP address: 192.168.101.2) running Samba to a Windows network with a Windows 2003 PDC (IP address 192.168.101.3). The Samba server exports a public guest share, called "Intranet". The server shows up fine in the network, but trying to click on it produces an error dialog stating I don't have the necessary permissions. Entering \\webserver manually produces the same error, and using \\webserver\intranet states that the path does not exist. Interestingly, accessing the share by IP address (\\192.168.101.2 or \\192.168.101.2\intranet) works fine. DNS is configured correctly, and "smbclient //webserver/intranet" on another Linux client works fine. One complicating issue is that the webserver is only a VMware virtual machine running on the PDC server. Here is our smb.conf: [global] workgroup = Foobar server string = Webserver wins support = yes ; commenting out these wins server = 192.168.101.3 ; two lines has no effect dns proxy = no guest account = nobody [... snipped some unrelated bits, like logging ...] security = share [... snipped some password-related things ...] domain master = no [intranet] comment = Intranet path = /srv/webserver/contents browseable = yes guest ok = yes guest only = yes read only = yes create mask = 0775 directory mask = 0775
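    A quick way to narrow it down (a diagnostic sketch only): check whether the name resolves the same way the Windows clients will try it, i.e. via the PDC's WINS service and via broadcast, from another machine with the Samba tools installed.

        nmblookup -U 192.168.101.3 -R WEBSERVER    # ask the WINS server on the PDC directly
        nmblookup WEBSERVER                        # plain broadcast lookup
        smbclient -L webserver -N                  # list shares by name, anonymously

    If the name resolves but \\webserver still fails while \\192.168.101.2 works, the mismatch is more likely on the authentication side (an AD client treats a hostname and a raw IP differently when choosing Kerberos versus NTLM) than in smb.conf itself.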

    Read the article

  • What's the best cloud backup solution for a small-scale server environment?

    - by nbv4
    I have a server that runs a Postgres database that contains about 200MB of data. Currently I have a cron job set up on my home computer which: ssh's into my server, runs a remote script which makes a backup of the database, and scp's that dump over to my local hard drive for storage. Each dump is 20MiB. It does this every six hours (one month of backups is roughly 2GiB). The problem with this setup is that if my local machine goes down for whatever reason, no backups will be made. Also, I can't have the cron run from the server, because I can't have it scp'd to my local machine from my server (firewalls and all that crap). My local machine is running Ubuntu 10.04, and my server is Ubuntu 9.10 server edition. I looked into Ubuntu One, but currently it's GUI-only. I also looked into Dropbox, but it's a pain in the ass to get set up in Linux without GUI support. Amazon S3 looks good, but it's not free (though dirt cheap). Is there any other alternative that I should look into? I'd prefer something where I can just have my script dump the database into a directory and have the backup service 'watch' that folder and sync accordingly. I can maybe also have my local machine sync to the cloud backup so I have even more redundancy, plus easy access to my backups for use in testing.
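    One way to take the home machine out of the loop (a sketch, with bucket names and paths as placeholders): run the dump on the server itself from cron and push it straight to S3 with a tool like s3cmd or duplicity, so nothing depends on the local machine being up.

        #!/bin/bash
        # /usr/local/bin/pg-backup.sh - run from the server's crontab every 6 hours
        set -e
        STAMP=$(date +%F-%H%M)
        DUMP=/var/backups/pg/mydb-$STAMP.dump
        pg_dump -Fc mydb -f "$DUMP"                              # compressed custom-format dump
        s3cmd put "$DUMP" s3://my-backup-bucket/pg/               # or: duplicity /var/backups/pg s3+http://my-backup-bucket
        find /var/backups/pg -name '*.dump' -mtime +30 -delete    # keep a rolling month locally

    A crontab line such as `0 */6 * * * /usr/local/bin/pg-backup.sh` matches the current six-hour schedule, and the local machine can still `s3cmd sync` the bucket down for extra redundancy whenever it happens to be on.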

    Read the article

  • Starting multiple Chrome full screen instances on multiple monitors from (batch) script

    - by Bob Groeneveld
    My goal is to show different web content full screen on multiple monitors automatically after booting from a single computer. The browser I would like to use is Chrome. If Chrome does not support this and Firefox does that would be fine. The OS I would prefer is Windows, if it turns out that Linux is possible that would be fine. On Windows it is possible to set the position of the Chrome browser window (--window-position=) and make Chrome start in full screen mode (--kiosk). Using these options combined you can start Chrome full screen on any of the desktops/screens that you have connected to your computer. I have managed to get this working. However, if I then try to do the same thing a second time to have Chrome full screen on a second screen the second Chrome window will open over the first window, no matter the coordinates I use for the --window-position parameter. I have tried using Chrome profiles and copying the Chrome directory and starting the second chrome.exe. All these things result in the same behaviour.
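    The usual trick (an assumption about the cause, but a widely used workaround): give every instance its own --user-data-dir. Without that, the second launch just hands its URL to the already-running Chrome process, which is why the position flag appears to be ignored. A batch sketch with placeholder paths and URLs, assuming a 1920-pixel-wide primary monitor:

        @echo off
        start "" chrome.exe --user-data-dir=C:\kiosk\screen1 --window-position=0,0    --kiosk http://intranet/board1
        start "" chrome.exe --user-data-dir=C:\kiosk\screen2 --window-position=1920,0 --kiosk http://intranet/board2

    Dropping the script (or a shortcut to it) into the Startup folder covers the "after booting" part.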

    Read the article

  • Postfix + Exchange + ActiveDirectory; How to mix them

    - by itwb
    My client has many sub-offices and one head office. The head office has a domain name: business.com. All users in the many sub-offices need to have a head-office email address: [email protected]. Anyone not in the head office will need the email forwarded to an external email address. All users in the head office will have their email delivered to Microsoft Exchange. Users are listed in Active Directory under two different OUs: HeadOffice or SubOffice. Is this something that can be configured? I've done some googling, but I can't find any examples or businesses set up this way. Edit: Postfix will accept all email and will need to determine whether to forward the email to an external account or alternatively have it delivered to MS Exchange. I've done some reading about MS Exchange and that you can 'mail-enable' contacts for forwarding - but I don't know if each AD account requires an Exchange CAL? The end goal is to forward email for sub-office users to external accounts, or accept email for the head office. Maybe I don't need to worry about Postfix to perform this task..... http://www.windowsitpro.com/article/exchange-server-2010/exchange-server-licensing-some-of-your-questions-answered "What about client access licenses (CALs)? You need one CAL per user who will connect to Exchange. Although it might not be 100 percent precise, I prefer to think of it as one CAL per mailbox; there are exceptions for users outside your organization, automated tools that use mailboxes, and so on. Exchange doesn't enforce this limit, so it's on you to ensure that you have the correct number of CALs for the set of clients you support."
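    This kind of split is configurable in Postfix on its own: accept mail for business.com, rewrite sub-office recipients to their external addresses with a virtual alias map, and relay whatever is left to Exchange with a transport map. A sketch with placeholder hostnames and addresses (the maps can also be ldap: tables querying AD directly instead of hash files):

        # main.cf
        relay_domains      = business.com
        transport_maps     = hash:/etc/postfix/transport
        virtual_alias_maps = hash:/etc/postfix/virtual

        # /etc/postfix/transport -- anything not aliased away goes to Exchange
        business.com    smtp:[exchange.business.local]

        # /etc/postfix/virtual -- sub-office users forwarded to external mailboxes
        jane@business.com    jane.personal@example.org

    In this layout only users whose mail actually lands in an Exchange mailbox (the head-office OU) need mailboxes there; the forwarded sub-office addresses never touch Exchange.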

    Read the article

  • Clamdscan scans file in 0 seconds

    - by SupaCoco
    I have to run clamav on large files. I was wondering which command was the fastest between clamscan and clamdscan. But it seems that clamdscan is not working properly: it scans file larger than 1 GB. Could you guys help me find why the heck clamdscan isn't working ? Between clamscan and clamdscan which one is less resource consuming ? I run ClamAV 0.97.8/18037 on Ubuntu 12.04.3 LTS. Please find below the execution result of both commands: clamscan myfile.zip ----------- SCAN SUMMARY ----------- Known viruses: 2864504 Engine version: 0.97.8 Scanned directories: 0 Scanned files: 1 Infected files: 0 Data scanned: 0.00 MB Data read: 1024.16 MB (ratio 0.00:1) Time: 9.145 sec (0 m 9 s) clamdscan myfile.zip /home/ubuntu/workspace/benchmark/myfile.zip: OK ----------- SCAN SUMMARY ----------- Infected files: 0 Time: 0.000 sec (0 m 0 s) And here are the clamav log file: Wed Oct 30 10:26:32 2013 -> Received POLLIN|POLLHUP on fd 4 Wed Oct 30 10:26:32 2013 -> Got new connection, FD 9 Wed Oct 30 10:26:32 2013 -> Received POLLIN|POLLHUP on fd 5 Wed Oct 30 10:26:32 2013 -> fds_poll_recv: timeout after 5 seconds Wed Oct 30 10:26:32 2013 -> Received POLLIN|POLLHUP on fd 9 Wed Oct 30 10:26:32 2013 -> got command CONTSCAN /home/ubuntu/workspace/benchmark/myfile.zip (51, 7), argument: /home/ubuntu/workspace/benchmark/myfile.zip Wed Oct 30 10:26:32 2013 -> mode -> MODE_WAITREPLY Wed Oct 30 10:26:32 2013 -> Breaking command loop, mode is no longer MODE_COMMAND Wed Oct 30 10:26:32 2013 -> Consumed entire command Wed Oct 30 10:26:32 2013 -> Number of file descriptors polled: 1 fds Wed Oct 30 10:26:32 2013 -> fds_poll_recv: timeout after 3600 seconds Wed Oct 30 10:26:32 2013 -> THRMGR: queue (single) crossed low threshold -> signaling Wed Oct 30 10:26:32 2013 -> THRMGR: queue (bulk) crossed low threshold -> signaling Wed Oct 30 10:26:32 2013 -> /home/ubuntu/workspace/benchmark/myfile.zip: OK Wed Oct 30 10:26:32 2013 -> Finished scanthread Wed Oct 30 10:26:32 2013 -> Scanthread: connection shut down (FD 9) Wed Oct 30 10:26:32 2013 -> THRMGR: queue (single) crossed low threshold -> signaling Wed Oct 30 10:26:32 2013 -> THRMGR: queue (bulk) crossed low threshold -> signaling
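    Two things worth checking, as a sketch rather than a diagnosis: clamd has its own size ceilings (MaxScanSize / MaxFileSize / StreamMaxLength, which default to far less than 1 GB) and returns OK without really scanning anything over them, and clamdscan also needs the clamav user to be able to read the file unless the open descriptor is passed in. Either would explain an "OK" in 0.000 seconds.

        # /etc/clamav/clamd.conf -- raise the ceilings (sketch values), then restart
        MaxScanSize 2000M
        MaxFileSize 2000M
        StreamMaxLength 2000M

        sudo service clamav-daemon restart
        clamdscan --fdpass myfile.zip      # hand clamd the open file descriptor

    On the resource question: clamdscan is the cheaper of the two per run, because clamd keeps the signature database loaded, whereas each clamscan invocation reloads it from scratch (which accounts for most of the ~9 seconds seen above).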

    Read the article

  • Replacing the HD in a Mac OS X 10.6.8 server caused all shares to fail

    - by Cheesus
    I'm hoping someone might have a helpful suggestion about this problem. We have 2 MacOSX servers available for file sharing. (quad Xeons - 2GB RAM, both 10.6.8), No.1 is an Open Directory Master with 50+ user accounts, No.2 has only 2 local accounts (/local/Default) and looks at the OD Master for all user accounts (/LDAPv3/10.x.x.20/) Both servers have 3 internal HD's, The boot volume with only Server OS and minimal Apps. A 'DataShare' HD (500GB) and a backup drive (500GB). After upgrading the DataShare HD in Server No.2 from a small internal HD (500GB) to larger capacity (2TB) drive, users are unable to connect to shares on Server No.2. Users get an error "There are no shares available or you are not allowed to access them on the server" The process I followed was to use Carbon Copy Cloner to create an exact copy of the original data drive (keeps all ownership data, UID, permissions, last edit date and time). Everything booted up ok, no indication there was any issues. (Paths to the sharepoint look good) Notes during troubleshooting - Server1 is operating perfectly, all users can access shares and authenticate etc. - I've checked the SACL (Server Access Control List) settings is ok. - On Server2 in the Server Admin' app, I can see all the shares listed ok. The paths seem valid, I can disable / reenable the shares, no errors. - On Server2 'workgroup manager' lists all the accounts from the OD Master in the LDAP dir view. All seems fine from here. Basically everything looks normal but no file shares on Server2 can be accessed from regular users.
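    A diagnostic sketch worth trying on Server No.2 (the volume name below is a guess): cloned drives sometimes come up with "ignore ownership" enabled, which quietly breaks share access even though the share points and paths look correct in Server Admin.

        sudo vsdbutil -c /Volumes/DataShare    # should report that permissions are enabled
        sudo vsdbutil -a /Volumes/DataShare    # enable ownership on the volume if not
        sudo sharing -l                        # confirm the share points still reference this volume

    If ownership was off, re-checking the ACLs/POSIX permissions on the shared folders afterwards is worthwhile, since everything may have been presented as owned by the local user while it was disabled.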

    Read the article

  • GPO startup script not copying files

    - by marcwenger
    I created a GPO startup script to execute for computers in a specific AD container. The script takes a file from the AD netlogon share and places it in a directory on the computer. Given the right permissions, I (i.e. myself) can execute the script just fine and the file copies. But it doesn't work on startup - the file does not copy over from the AD server. The startup script should run as LocalSystem (am I right?). So the question is: why do the files not copy on startup? Could it be because of: permissions of the LocalSystem user? Reading the registry being problematic at startup? Obtaining files from the AD netlogon folder being problematic at startup? Or am I missing it completely? My test machine does have the registry key and local directories described in the script. I myself have standard user permissions on the test machine. The AD server is Windows 2008, the test client is Windows XP SP3 (and soon to be Windows 7, where I assume permissions issues will be inevitable). Dim wShell, fso, oraHome, tnsHome, key, srcDir Set wShell = WScript.CreateObject("WScript.Shell") Set fso = CreateObject("Scripting.FileSystemObject") key = "HKLM\Software\Oracle\Oracle_Home" On Error Resume Next orahome = wShell.RegRead(key) If err.Number = 0 Then tnsHome = oraHome + "\" + "network\admin\" srcDir = wShell.ExpandEnvironmentStrings("%logonserver%") + "\netlogon\UpdatedFiles\" fso.CopyFile srcDir + "file1.ext", tnsHome, true End If Side note: To ensure that the script is properly deployed, I purposely put some errors in the script, and on the next startup the error message appeared. So I know the GPO is deployed properly.
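    One frequent gotcha with computer startup scripts (an assumption here, not a confirmed diagnosis): they run as LocalSystem, and per-user environment variables such as %logonserver% are typically empty in that context, so srcDir silently becomes \netlogon\UpdatedFiles\ and the copy fails. A hedged VBScript sketch that avoids the variable by using the domain-wide Netlogon share (replace ad.example.com with the real domain FQDN):

        ' build the source path from the domain name instead of %logonserver%
        srcDir = "\\ad.example.com\netlogon\UpdatedFiles\"
        If fso.FileExists(srcDir & "file1.ext") Then
            fso.CopyFile srcDir & "file1.ext", tnsHome, True
        End If

    Temporarily writing err.Number and the computed srcDir to a log file from inside the script is a cheap way to see what it actually resolves at boot, since On Error Resume Next is currently hiding any failure.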

    Read the article

  • Nginx won't send POST to fastcgi backend, but GET works fine?

    - by xyld
    Not sure why, but it is happy sending a GET to the fastcgi backend (Mercurial hgwebdir in this case), but simply resorts to the filesystem if the request is a POST. Relevant parts of nginx.conf: location / { root /var/www/htdocs/; index index.html; autoindex on; } location /hg { fastcgi_pass unix:/var/run/hg-fastcgi.socket; include fastcgi_params; if ($request_uri ~ ^/hg([^?#]*)) { set $rewritten_uri $1; } limit_except GET { allow all; deny all; auth_basic "hg secured repos"; auth_basic_user_file /var/trac.htpasswd; } fastcgi_param SCRIPT_NAME "/hg"; fastcgi_param PATH_INFO $rewritten_uri; # for authentication fastcgi_param AUTH_USER $remote_user; fastcgi_param REMOTE_USER $remote_user; #fastcgi_pass_header Authorization; #fastcgi_intercept_errors on; } GET's work fine, but POST delivers this error to the error_log: 2010/05/17 14:12:27 [error] 18736#0: *1601 open() "/usr/html/hg/test" failed (2: No such file or directory), client: XX.XX.XX.XX, server: domain.com, request: "POST /hg/test HTTP/1.1", host: "domain.com" What could possibly be the issue? I'm trying to allow read-only access via GET's to the page, but require authorization when using hg push to the same url which sends a POST request.
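    The /usr/html path in the error is not configured anywhere in the snippet, which suggests the POST is falling through to static-file handling under a default root instead of reaching the fastcgi handler. A diagnostic sketch (not a final config): temporarily drop the limit_except block and re-test a push; if POSTs then reach hgwebdir, the method-limited block is what is bypassing fastcgi_pass, and the push authentication will need to live somewhere else (for example a separate auth-protected location or server used only for writes).

        location /hg {
            if ($request_uri ~ ^/hg([^?#]*)) {
                set $rewritten_uri $1;
            }
            include fastcgi_params;
            fastcgi_param SCRIPT_NAME "/hg";
            fastcgi_param PATH_INFO   $rewritten_uri;
            fastcgi_param REMOTE_USER $remote_user;
            fastcgi_pass  unix:/var/run/hg-fastcgi.socket;
        }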

    Read the article

  • VirtualBox: how to merge an arbitrary snapshot into the base vdi

    - by jmathew
    I botched a transfer of a VM from one hard disk to the other. Now I'm left with the base vdi and a whole bunch of snapshots. My steps: (1) copied the old VM directory over to the new HDD; (2) deleted the old VM and added the new VM using Machine > Add and providing the old XML file; (3) couldn't add the base vdi file due to a conflict, so I changed the UUID of the base vdi with VBoxManage.exe internalcommands sethduuid; (4) attempted to roll back to a snapshot, but it seems the VM is looking for the snapshots on the old HDD (which is formatted and gone). This is the error ("networked" is the snapshot name): Failed to restore the snapshot networked of the virtual machine lfs. Could not open the medium 'H:\vm\ft.vdi'. VD: error VERR_PATH_NOT_FOUND opening image file 'H:\vm\ft.vdi' (VERR_PATH_NOT_FOUND). Result Code: E_FAIL (0x80004005) Component: Medium Interface: IMedium {53f9cc0c-e0fd-40a5-a404-a7a5272082cd} The old HDD was drive H:; the new one is drive N:. How can I modify the snapshots/VM to look in N:\vm\ft.vdi for the base vdi? I've already set the default settings in VirtualBox in general (default VM/VM snapshot location). Or, if not that, how can I merge the old snapshot with the base vdi, given that the only thing that has changed is the base vdi's UUID? Thanks
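    Two hedged options, assuming the whole Snapshots folder came across in the copy: point the registered media paths at the new drive, or flatten the chain into a single disk and start a fresh machine from that. The media paths live in the machine's .vbox/XML file and in the global VirtualBox.xml, so with VirtualBox fully closed they can be edited from H:\vm\... to N:\vm\...; note that having run sethduuid on the base disk may also leave the differencing images pointing at the old parent UUID.

        VBoxManage list hdds
        rem option A: edit the <HardDisk location="H:\vm\ft.vdi" ...> entries to N:\vm\ft.vdi
        rem           (in the machine's .vbox file and, if present, .VirtualBox\VirtualBox.xml)
        rem option B: clone the newest snapshot disk into one standalone, merged image
        VBoxManage clonehd "N:\vm\Snapshots\{snapshot-uuid}.vdi" "N:\vm\ft-merged.vdi"
        rem if a parent-UUID complaint appears, "VBoxManage internalcommands sethdparentuuid"
        rem can re-link a child to the renumbered base disk before cloning

    {snapshot-uuid} above is a placeholder for whichever differencing image in the Snapshots folder is the newest.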

    Read the article

  • How to configure network on Windows Server 2008

    - by Gokhan Ozturk
    I have an IBM x3400 server machine with Windows Server 2008 R2 installed on it. But since I am not an expert on networking, I have some problems. These roles are installed on my server: Active Directory, DNS, File Sharing, Hyper-V, IIS, VPN. There are two network cards in it. I configured them like this: Local Connection 1: 192.168.30.3 255.255.255.0 192.168.30.2 127.0.0.1 Local Connection 2: 192.168.30.101 255.255.255.0 192.168.30.6 127.0.0.1 My problem is that when I use these IP gateways, it shares the Internet to all computers. This is not what I want. I want to use Local Connection 1 for the internal network. I am giving all computers the gateway and DNS IP 192.168.30.3. Local Connection 2 is for Hyper-V and VPN connections. 192.168.30.2 and 192.168.30.6 are my modems' gateways. I am using the 192.168.30.6 external IP for VPN connections. There are two 24-port switches. There is a connection between them, and these two ethernet cards are connected directly to them. The modems are connected to the switches as well (the modems are not near the server; they are somewhere in the building). I disabled the network bridge and removed all ethernet cards from it. With this configuration, all computers can ping my server's IP (192.168.30.3), but on the server I cannot ping any clients (request timeout). What is the best way to configure my network? Thank you. Regards
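    Not a full design, but one general rule that usually untangles setups like this: a multi-homed Windows box should have a default gateway on only one NIC (and, ideally, the two NICs should not sit in the same IP subnet). A netsh sketch using the addresses from the question, with the gateway left off the internal-only card:

        netsh interface ipv4 set address name="Local Connection 1" static 192.168.30.3 255.255.255.0
        netsh interface ipv4 set address name="Local Connection 2" static 192.168.30.101 255.255.255.0 192.168.30.6
        netsh interface ipv4 show route

    With only one gateway in play, `show route` makes it much easier to see which card the server actually uses to answer the clients, and asymmetric replies on dual-NIC machines are a common cause of "they can ping me but I can't ping them".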

    Read the article
