Search Results

Search found 14131 results on 566 pages for 'note'.

Page 376/566

  • Outbound ports to allow through firewall - core requirements

    - by dunxd
    This question was asked before, but in a rather general way. I'm asking more specifically based on my current requirements. We have a number of remote offices made up of a bunch of PCs and an ASA 5505 which is used as firewall and VPN termination point. In the offices we share the internet connection with one or more other organisations over whom we have very little control, aside from the config on the ASAs. For a bunch of reasons I'd like to lock down these ASA 5505s to only allow outbound traffic to ports used by applications we know we need. I'm putting together a standard config to roll out to all the ASAs, and if we need to open up ports for the other orgs we can do it on request. But I want to leave open the most commonly required ports so we can get up and running without waiting on other folks' technical staff to get back to us. I plan to allow the following TCP ports to support email and web access, which I know everyone will need: POP3 (110 and 995), HTTP (80 and 443), IMAP4 (143 and 993), SMTP (25 and 465). The question really is, what other ports do I need to leave open to allow for "normal" working? I've seen UDP port 53 for DNS as one. Are there any others that would be worth opening up? Just to note - I'll also be setting up monitoring systems to keep an eye on the ports we do allow. Any of the above could be misused of course. We'll also back all this up with signed agreements. But I'm aiming for a technical solution where I don't have to start out with the full requirements of everyone we share connections with. See also: outbound ports that are always open
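
    A hedged sketch of what that lockdown could look like in ASA syntax - the ACL name is a placeholder, and NTP (UDP 123) is another port that commonly needs to be open for time sync:

        access-list OUTBOUND remark mail and web, per the list above
        access-list OUTBOUND extended permit tcp any any eq smtp
        access-list OUTBOUND extended permit tcp any any eq 465
        access-list OUTBOUND extended permit tcp any any eq pop3
        access-list OUTBOUND extended permit tcp any any eq 995
        access-list OUTBOUND extended permit tcp any any eq imap4
        access-list OUTBOUND extended permit tcp any any eq 993
        access-list OUTBOUND extended permit tcp any any eq www
        access-list OUTBOUND extended permit tcp any any eq https
        access-list OUTBOUND remark DNS and NTP
        access-list OUTBOUND extended permit udp any any eq domain
        access-list OUTBOUND extended permit udp any any eq ntp
        access-list OUTBOUND extended deny ip any any log
        access-group OUTBOUND in interface inside

    The explicit deny with log at the end pairs nicely with the monitoring plan: anything the other orgs need but can't reach shows up in the logs as a concrete open-this-port request.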

    Read the article

  • Virtual Host Configuration and mod_rewrite - Removing PHP Extension and Adding Forward Slash

    - by nicorellius
    On my production server, things are fine: PHP extension removal and trailing slash rules are in place in my .htaccess file. But locally this is only partially working. I'm running Apache2 with a virtual host for the site in question. I decided not to use the .htaccess file in this case and just add the rules to the httpd-vhosts.conf file instead, which, I've heard, is a better way to go if your server allows it. The virtual host is working and the URL I use for my site is like this: devserver:9090. Here is my httpd-vhosts.conf file:

        NameVirtualHost *:9090

        # for stuff other than this site
        <VirtualHost *:9090>
            ServerAdmin admin@localhost
            DocumentRoot "/opt/lampstack/apache2/htdocs"
            ServerName localhost
        </VirtualHost>

        # for site in question
        <VirtualHost *:9090>
            ServerAdmin admin@localhost
            DocumentRoot "/opt/lampstack/apache2/htdocs/devserver"
            ServerName devserver
            <Directory "/opt/lampstack/apache2/htdocs/devserver">
                Options Indexes FollowSymLinks Includes
                AllowOverride None
                Order allow,deny
                Allow from all
            </Directory>
            <IfModule rewrite_module>
                RewriteEngine ON
                # remove PHP extension and add trailing slash
                # note - this doesn't work for directories, and throws 404
                # TODO - fix so directories use index.php
                RewriteCond %{REQUEST_FILENAME} !-d
                RewriteCond %{THE_REQUEST} ^GET\ /[^?\s]+\.php
                RewriteRule (.*)\.php$ /$1/ [R=302,L]
                RewriteCond %{REQUEST_FILENAME} !-d
                RewriteRule (.*)/$ /$1.php [L]
                RewriteCond %{REQUEST_FILENAME}.php -f
                RewriteCond %{REQUEST_FILENAME} !-d
                RewriteRule .*[^/]$ /$0/ [R=302,L]
            </IfModule>
            # error docs
            ErrorDocument 404 /errors/404.php
        </VirtualHost>

    The problem I'm facing is that when I go to directories on the site, I get a 404 error. So for example, devserver:9090/page.php goes to devserver:9090/page/, but going to a directory that has an index.php, such as devserver:9090/dir/, throws the 404 error page. If I type in devserver:9090/dir/index.php I get devserver:9090/dir/index/ and the contents I want appear... Can anyone help me with my rewrite rules?
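
    One hedged guess at the directory 404s: in server/virtual-host context, mod_rewrite runs before the URL has been mapped to the filesystem, so %{REQUEST_FILENAME} can still hold the bare URI and the !-d tests don't behave the way they do in .htaccess. Testing against %{DOCUMENT_ROOT}%{REQUEST_URI} instead may behave as intended - a sketch, not a verified fix:

        # skip the trailing-slash-to-.php rewrite when the request maps to a real directory
        RewriteCond %{DOCUMENT_ROOT}%{REQUEST_URI} !-d
        RewriteRule (.*)/$ /$1.php [L]

    With that condition failing for /dir/, the rule is skipped and mod_dir's DirectoryIndex (index.php) can serve the directory as usual.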

    Read the article

  • Mysql start fails with Operating System error 13

    - by curious
    I have XAMPP on my Ubuntu Lucid system and everything worked fine. But there seems to be some problem now and mysql wouldn't start. I had tried to recover a few Drupal databases and hence copied the raw files to the /opt/lampp/var/mysql folder like all other database folders. And, I guess that could have caused the problem. I am pasting the last few lines of the error log. Someone please help me out.

        100814 15:17:47 mysqld_safe Starting mysqld daemon with databases from /opt/lampp/var/mysql
        100814 15:17:47 [Note] Plugin 'FEDERATED' is disabled.
        100814 15:17:47 [ERROR] Can't open shared library 'libpbxt.so' (errno: 0 API version for STORAGE ENGINE plugin is too different)
        100814 15:17:47 [Warning] Couldn't load plugin named 'PBXT' with soname 'libpbxt.so'.
        100814 15:17:48 InnoDB: Operating system error number 13 in a file operation.
        InnoDB: The error means mysqld does not have the access rights to
        InnoDB: the directory.
        InnoDB: File name /opt/lampp/var/mysql/ibdata1
        InnoDB: File operation call: 'open'.
        InnoDB: Cannot continue operation.
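
    Error 13 is EACCES, a permissions failure, which fits the raw-file copy: the copied database folders are probably owned by the user who copied them rather than the one mysqld runs as. Assuming XAMPP runs mysqld as the mysql user (check with ps aux | grep mysqld), a likely fix is:

        sudo /opt/lampp/lampp stopmysql
        sudo chown -R mysql:mysql /opt/lampp/var/mysql
        sudo /opt/lampp/lampp startmysql

    Note also, hedged: copying raw InnoDB files between installations only works under fairly narrow conditions, so if the recovered tables are InnoDB, importing SQL dumps is the safer route.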

    Read the article

  • SSL Mail server connection times out on send()

    - by Jivan
    When trying to programmatically send an email from a website of mine, using the PHP PEAR Mail package with an SSL connection, PEAR::Mail replies with the following:

        Failed to connect to example.blabla.net:PORT [SMTP: Failed to connect socket: connection timed out (code: -1, response: )]

    I looked for similar questions on SO and SF, and the answers ask the OP to test the connection with telnet or ssh on the command line. So that is what I did, and here is what happens:

        $ ssh -l myusername -p PORT example.blablabla.net
        _

    Here, '_' in the second line means that NOTHING happens, indefinitely, which seems coherent with the timeout message I had from PEAR::Mail. So PEAR::Mail seems not to be the cause. But, what I have to tell you is that yesterday it just worked. The connection was properly established, mails were properly sent, etc. Just today it doesn't work anymore and I absolutely don't know why. I restarted Apache (in case an extension was broken), restarted mail services, etc. Still no effect. Between yesterday (when it worked) and today (when it doesn't anymore), I didn't touch the server and did nothing on it, simply because I took a day off to write some blog post! Has anyone encountered a similar problem? The problem seems quite common, judging by some googling, but the solution doesn't. Thanks for any help! (note on config: CentOS 6.4 x86_64 with cPanel/WHM)
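
    The indefinite hang suggests packets are being dropped before any service answers (a host firewall such as cPanel's CSF is a common suspect after an unattended rule or blocklist update). A probe that speaks SSL the way PEAR::Mail does is openssl's s_client - hostname and ports below are placeholders for yours:

        # direct SSL, typical for port 465
        openssl s_client -connect example.blabla.net:465
        # STARTTLS, typical for port 587
        openssl s_client -connect example.blabla.net:587 -starttls smtp

    If those time out from the web server but succeed from another machine, it's almost certainly a firewall or provider block rather than anything in PEAR.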

    Read the article

  • vSphere education - What are the downsides of configuring virtual machines with *too* much RAM?

    - by ewwhite
    VMware memory management seems to be a tricky balancing act. With cluster RAM, Resource Pools, VMware's management techniques (TPS, ballooning, host swapping), in-guest RAM utilization, swapping, reservations, shares and limits, there are a lot of variables. I'm in a situation where clients are using dedicated vSphere cluster resources. However, they are configuring the virtual machines as though they were on physical hardware. In turn, this means a standard VM build may have 4 vCPUs and 16GB or more of RAM. I come from the school of starting small (1 vCPU, minimal RAM), checking real-world use and adjusting up as necessary. Some examples from a "problem" cluster:
    - Resource pool summary - looks almost 4:1 overcommitted. Note the high amount of ballooned RAM.
    - Resource allocation - the Worst Case Allocation column shows that these VMs would have access to less than 50% of their configured RAM under constrained conditions.
    - The real-time memory utilization graph of the top VM in the listing above: 4 vCPUs and 64GB RAM allocated, averaging under 9GB use.
    - Summary of the same VM.
    What are the downsides of overcommitting and overconfiguring resources (specifically RAM) in vSphere environments? Assuming that the VMs can run in less RAM, is it fair to say that there's overhead to configuring virtual machines with more RAM than they need? What is the counter-argument to: "if a VM has 16GB of RAM allocated, but only uses 4GB, what's the problem??"? E.g. do customers need to be educated? What specific metric should be used to meter RAM usage? Tracking the peaks of "Active" versus time?
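
    On the metering question, one hedged approach is to compare configured RAM against the peak of the Active counter over a representative window. In PowerCLI that might look like the sketch below (the VM name is hypothetical, and bear in mind VMware's own caveat that Active understates the guest working set, so treat the result as a floor rather than a target):

        $vm = Get-VM "BigVM"   # hypothetical VM name
        $stats = Get-Stat -Entity $vm -Stat mem.active.average -Start (Get-Date).AddDays(-7)
        # the stat is reported in KB, so KB / 1MB (1024*1024) gives GB
        $peakGB = ($stats | Measure-Object -Property Value -Maximum).Maximum / 1MB
        "{0}: {1} GB configured, ~{2:N1} GB peak active" -f $vm.Name, $vm.MemoryGB, $peakGB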

    Read the article

  • Hard Reset USB in Ubuntu 10.04

    - by Cory
    I have a USB device (a modem) that is really finicky. Sometimes it works fine, but other times it refuses to connect. The only solution I have found to fix it once it gets into a bad state is to physically unplug the device and plug it back in. However, I don't always have physical access to the machine it is plugged in on, so I'm looking for a way to do this through the command line. This post suggests running:

        $ sudo modprobe -w -r usb_storage; sudo modprobe usb_storage

    However I get an "unknown option -w" output. This slightly modified command:

        $ sudo modprobe -r usb_storage

    fails with the message:

        FATAL: Module usb_storage is in use.

    If I try to kill -9 the processes marked [usb-storage] before running it, they refuse to die (I think because they are deeply tied to the kernel). Anyone know of a way to do this? NOTE: I cross-posted this on serverfault as I didn't know which was more appropriate. I will delete and/or link whichever one is answered first.
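
    If the goal is to power-cycle one device rather than reload a driver, one approach (assuming a reasonably recent kernel) is to unbind and rebind it through sysfs, which resets it much like a physical replug. The 1-4 bus ID below is a placeholder - match yours via lsusb and the sysfs listing:

        lsusb                               # identify the modem
        ls /sys/bus/usb/drivers/usb/        # find its bus ID, e.g. 1-4
        echo -n "1-4" | sudo tee /sys/bus/usb/drivers/usb/unbind
        sleep 2
        echo -n "1-4" | sudo tee /sys/bus/usb/drivers/usb/bind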

    Read the article

  • Repeated installation of malicious software to do outbound DDOS attack [duplicate]

    - by user224294
    This question already has an answer here: How do I deal with a compromised server? (12 answers)

    We have an Ubuntu Virtual Private Server hosted by a Canadian company. Our VPS was compromised to carry out an "outbound DDOS attack", as reported by the server security team. There are 4 files in /boot that look like iptables files; please note the capital letters "I" and "L":

        VPS:/boot# ls -lha
        total 1.8M
        drwx------  2 root root 4.0K Jun  3 09:25 .
        drwxr-xr-x 22 root root 4.0K Jun  3 09:25 ..
        -r----x--x  1 root root 1.1M Jun  3 09:25 .IptabLes
        -r----x--x  1 root root 706K Jun  3 09:23 .IptabLex
        -r----x--x  1 root root   33 Jun  3 09:25 IptabLes
        -r----x--x  1 root root   33 Jun  3 09:23 IptabLex

    We deleted them. But after a few hours they appeared again and the attack resumed. We deleted them again; they resurfaced again, and so on. So finally we had to disable our VPS. Please let us know how we can find the malicious script on the VPS that automatically reinstalls such attacking software. Thanks.
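
    The linked answer's advice stands: once a server has been rooted, rebuild it rather than try to trust it again. That said, if you want to find the persistence mechanism first, the usual hiding places can be checked along these lines - a hedged checklist, not a guaranteed cleanup:

        crontab -l; ls -la /etc/cron* /var/spool/cron    # cron-based reinstallers
        cat /etc/rc.local; ls /etc/init.d/ | grep -i iptab   # init-script persistence
        netstat -tlnp                                    # unexpected listeners
        lsof -nP | grep -i iptab                         # what holds or recreates the files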

    Read the article

  • Isn't a hidden volume used when encrypting a drive with TrueCrypt detectable?

    - by neurolysis
    I don't purport to be an expert on encryption (or even TrueCrypt specifically), but I have used TrueCrypt for a number of years and have found it to be nothing short of invaluable for securing data. As relatively well known free, open-source software, I would have thought that TrueCrypt would not have fundamental flaws in the way it operates, but unless I'm reading it wrong, it has one in the area of hidden volume encryption. There is some documentation regarding encryption with a hidden volume here. The statement that concerns me is this (emphasis mine): TrueCrypt first attempts to decrypt the standard volume header using the entered password. If it fails, it loads the area of the volume where a hidden volume header can be stored (i.e. bytes 65536–131071, which contain solely random data when there is no hidden volume within the volume) to RAM and attempts to decrypt it using the entered password. Note that hidden volume headers cannot be identified, as they appear to consist entirely of random data. Whilst the hidden headers supposedly "cannot be identified", is it not possible to, on encountering an encrypted volume encrypted using TrueCrypt, determine at which offset the header was successfully decrypted, and from that determine if you have decrypted the header for a standard volume or a hidden volume? That seems like a fundamental flaw in the header decryption implementation, if I'm reading this right -- or am I reading it wrong?

    Read the article

  • Run as another user, but also as administrator

    - by Tewr
    I am trying to debug a virtual machine (VM) running on a remote computer from my workstation (A). Both VM and A are running Windows 7 Enterprise. Apparently, I need to start the Remote Debugger Service (RDS) on VM as an administrator. Apparently, I also need to run RDS as the user Tewr logged in on A (domain: DOM). VM runs the services I need to debug, as well as the remote desktop interface, with an account VMUSER in a domain called VMDOMAIN. I manage to start RDS as administrator, but then the RDS process is owned by VMUSER and that's not good enough. I also manage to run RDS as DOM\Tewr, but then not as an administrator. I have added DOM\Tewr as an administrator on VM, but that's not good enough because the process is still not run as administrator. How can I run the RDS process as DOM\Tewr and "As Administrator", while logged on in Windows as VMDOMAIN\VMUSER? (Note: I have tried creating an account with the same credentials/password as VMUSER, as hinted in the MS article above, but with no luck...)
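
    One hedged workaround: Sysinternals PsExec can start a process under another account and, with its -h switch, request that account's elevated token - something plain runas cannot do. The host name and debugger path below are placeholders:

        rem run the remote debugger on VM as DOM\Tewr, elevated (prompts for the password)
        psexec \\VM -u DOM\Tewr -h "C:\path\to\Remote Debugger\x64\msvsmon.exe"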

    Read the article

  • How do I connect to and factory reset a Catalyst 3560 Switch?

    - by Josh
    My company just bought another company. In their server room they had some older hardware, which I would like to repurpose. One of these is a Cisco switch: C3560G-48TS-S. I found some instructions about this switch here, but this is not a guide for a beginner. I have no idea how to connect to this thing to begin running the commands. It says:

        Configure the PC terminal emulation software for 9600 baud, 8 data bits, no parity, 1 stop bit, and no flow control.

    But I can't find anything on how to do this (assuming with telnet?) or even what program to use. I also don't know how to find the IP address of the device to connect to it. My research also says once I get in there, I need to run "clear config all". Is this the right command? Also, what if I can't get the username and password for these devices? Is there some way to factory reset them? (My only experience is with devices that have a hardware reset button.) EDIT: I should note that when I push the button on the front, the three lights blink, which according to the documentation indicates the switch is configured and "not available for express setup".
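
    To answer the connection part: those settings describe a serial console session, not telnet. You connect a console (rollover) cable from a PC serial or USB-serial port to the switch's CONSOLE port and open it with a terminal emulator such as PuTTY (Connection type: Serial, Speed: 9600) or, from Linux/macOS, something like "screen /dev/ttyUSB0 9600"; no IP address is involved. Without the password, the documented Catalyst recovery path is roughly this - hold the Mode button while powering on until you land at the switch: prompt, then (hedged from the standard procedure, so check the 3560 guide):

        flash_init
        del flash:config.text
        del flash:vlan.dat
        boot

    If you can log in instead, "write erase" followed by "delete flash:vlan.dat" and "reload" achieves a similar factory reset.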

    Read the article

  • ESX 3.5 refuses to update

    - by Speeddymon
    I have a set of ESX 3.5 servers in 2 different datacenters. One is DR, one is production. They are on the same VLAN, so I can access any of them on the private network from my vCenter server. Last month, as a learning experience (I hadn't dealt with ESX much before), I updated the DR server. Other than finding out that a couple of bundles had to be installed manually in order to get the rest to install from vCenter, it went off without a hitch. Now I'm trying to do the same for our production servers and it is not working. I've googled around for the error I get during the scan, and investigated loads of different solutions (editing the integrity file, checking DNS, etc.) -- I did already install the 2 bundles that had to be installed manually -- but scanning from vCenter is just not working. Side note: I did just scan the DR server again and that scan works fine, so this shouldn't be a problem with vCenter that has cropped up recently -- it has to be something else. The error I get is:

        Patch metadata for (servername) missing. Please download updates metadata first.
        Failed to scan (servername) for updates.

    I'm all out of ideas on how to make this work, so any help would be hugely appreciated.

    Read the article

  • query keepalived

    - by tdimmig
    *Note: I have trouble deciding what should go in Server Fault and what should go in Super User; if some kindly admin decides this is in the wrong place, please move it for me - many thanks.

    I am implementing a basic HA system with keepalived. I only want to be notified of a failover in the case of hardware failure. I do, however, have the servers switch roles periodically. I have a track_script running on the backup that will vary its return between 0 and 1 on an interval (once a week, once a month, whatever). Upon returning 0, the priority is raised above that of the master; upon returning 1, the priority is lowered again. This way they trade places on the configured interval. The question: what can I do to tell the difference between a switch caused by my script and a switch caused because one of the servers died? I certainly want to be notified when there is an actual problem, but not every time the servers change places because of the script. I see that version 1.2.7 has SNMP support and I may be able to use it to get some information that could tell me one way or another, but to be honest I've never used SNMP before and I don't know how to get the information I want with it (my Google fu failed me).
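
    One hedged approach that avoids SNMP entirely: since the rotation script is yours, have it drop a flag file just before it flips the priority, and have keepalived's notify script stay quiet when the flag is present. A sketch, with illustrative paths and mail command:

        #!/bin/sh
        # notify.sh - wired in with "notify /usr/local/bin/notify.sh" in the vrrp_instance;
        # keepalived invokes it as: notify.sh TYPE NAME STATE
        FLAG=/var/run/keepalived.planned-switch
        if [ -f "$FLAG" ]; then
            rm -f "$FLAG"    # planned rotation: consume the flag and stay quiet
        else
            echo "unplanned VRRP transition to $3" | mail -s "keepalived failover" admin@example.com
        fi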

    Read the article

  • mysqldump isn't able to export a specific database, phpMyAdmin crashes

    - by Devils Child
    I'm experiencing problems with a database on my server (note: all other databases work fine). Once I try to export it with mysqldump I get this error:

        # mysqldump -u root -pXXXXXXXXX databasename > /root/databasename.sql
        mysqldump: Couldn't execute 'show table status like 'apps'': Lost connection to MySQL server during query (2013)

    Also, phpMyAdmin throws an error when selecting this database and immediately logs out. However, the web site which uses this database works fine. I can also execute SELECT statements on the table named "apps" from the MySQL shell. I tried restarting the MySQL daemon as well as REPAIR DATABASE and REPAIR TABLE, but the problem still persists. I had this problem before; then it disappeared somehow without me doing anything to resolve the issue. Now the problem is back and I'm simply unable to create a backup of this database.

    Used software: Debian 6.0.7 x64, MySQL 5.1.66-0.

        mysql> SHOW VARIABLES LIKE "%version%";
        +-------------------------+-------------------+
        | Variable_name           | Value             |
        +-------------------------+-------------------+
        | protocol_version        | 10                |
        | version                 | 5.1.66-0+squeeze1 |
        | version_comment         | (Debian)          |
        | version_compile_machine | x86_64            |
        | version_compile_os      | debian-linux-gnu  |
        +-------------------------+-------------------+
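
    A lost connection on SHOW TABLE STATUS often means mysqld itself is dying while computing that table's statistics, and the MySQL error log should confirm a crash. Two hedged steps: check the table, and if mysqld really is crashing, take the dump with forced InnoDB recovery:

        mysqlcheck -u root -p databasename apps

        # if the error log shows a crash on 'apps', add this to my.cnf under [mysqld],
        # restart, take the dump, then remove it again (start at 1; higher values risk data)
        innodb_force_recovery = 1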

    Read the article

  • Logging communication between two VMs

    - by sYnfo
    Hi, I'm trying to set up the "malware lab" described in this paper. So far, I've set up a Windows guest system, adding one host-only network adapter and setting the following (sorry if the names aren't exactly correct, I don't have an English language version):

        IP address      - 10.0.0.3
        Subnet mask     - 255.255.255.0
        Default gateway - not set
        Preferred DNS   - 10.0.0.4
        Alternate DNS   - not set

    And a Linux guest system - Ubuntu 9.04 - with two network adapters, bridged (eth0) and host-only (eth1), setting eth1's IP address to 10.0.0.4 and leaving eth0 to be set by DHCP. Then I configured iptables as described in the paper, i.e.:

        iptables -F -t nat
        iptables -F -t mangle
        iptables -t mangle -P PREROUTING ACCEPT
        iptables -t mangle -P OUTPUT ACCEPT
        iptables -t nat -P PREROUTING ACCEPT
        iptables -t nat -P POSTROUTING ACCEPT
        iptables -t nat -P OUTPUT ACCEPT
        iptables -t mangle -A PREROUTING -i eth0 -j ACCEPT
        iptables -t mangle -A PREROUTING -p udp -i eth1 -d 10.0.0.3 --dport 53 -j ACCEPT
        iptables -t mangle -A PREROUTING -p tcp -i eth1 --dport 80 -j ACCEPT
        iptables -t mangle -A PREROUTING -p tcp -i eth1 -d 10.0.0.3 --dport 6000:7000 -j ACCEPT
        iptables -t mangle -A PREROUTING -i eth1 -j ULOG
        iptables -t mangle -A PREROUTING -i eth1 -j DROP

    Now, when I try to ping the Windows system from within the Linux system, it does not reply; I guess that's perfectly normal, because iptables is blocking the ping response. Same when I try to ping the Linux system from within Windows. But when I try to access any web page from within the Windows system, I would expect that action to get logged by iptables. The thing is, I don't see any lines of that kind in the log file (if I am looking in the right place, that is. :) It is at /var/log/messages, isn't it?). So, what do you think might be the problem here? I should note that this is the first time I'm using Linux, so don't expect ANY working knowledge of Linux at all... :) Also, since English is not my mother tongue, feel free to point out any grammatical mistakes... :) Thanks for any advice.
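
    A likely explanation for the missing entries: the ULOG target does not write to /var/log/messages at all - it hands packets to the separate ulogd daemon, which has to be installed and running (its output commonly lands under /var/log/ulog/). A hedged alternative is the plain LOG target, which goes through the kernel log and so does show up in syslog:

        sudo apt-get install ulogd    # option 1: make the existing ULOG rule work
        # option 2: kernel-log eth1 traffic instead (-I puts the rule before the DROP)
        sudo iptables -t mangle -I PREROUTING -i eth1 -j LOG --log-prefix "malware-lab: "
        tail -f /var/log/syslog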

    Read the article

  • PHP pages are not parsed by Apache on CentOS

    - by Ram
    I have installed CentOS 5.x, Apache 2.2, PHP 5.3 and MySQL 5.5. I also installed phpMyAdmin. I am able to access phpMyAdmin through the browser without any issues. However, when I create a simple index.php with the phpinfo() function in the default directory, that page is served without PHP parsing. As we all know, phpMyAdmin is a PHP application, and it works fine from the same server - but a simple PHP page in the doc root directory does not?! Of course, I tried moving this page into the phpMyAdmin folder and accessing it there, but no success. Please note that I updated the httpd.conf file with the appropriate directives based on the PHP installation guide. The following directives were added to httpd.conf:

        AddTyoe application/x-httpd-php
        LoadModule php5_module /usr/lib/httpd/modules/libphp5.so
        <FilesMatch "\.php$">
            SetHandler application/x-httpd-php
        </FilesMatch>

    File locations are: docroot - /var/www/html; phpMyAdmin folder - /var/www/html/phpMyAdmin. File privileges are:

        [root@linuxdev1 html]# ls -Z
        -rwxr-xr-x  root root index.php
        drwxr-xr-x  root root phpMyAdmin
        -rw-r--r--  root root phpMyAdmin-3.4.3.2-english.tar.gz
        drwxr-xr-x  root root test1

    Any help is appreciated.
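
    For comparison, a known-good minimal pair of directives for mod_php - note the exact spelling of AddType (the config quoted above has it misspelled) and the .php extension argument, both easy to get wrong:

        LoadModule php5_module /usr/lib/httpd/modules/libphp5.so
        AddType application/x-httpd-php .php

    One hedged explanation for the asymmetry: distro or package installs often ship their own handler config (e.g. a file under /etc/httpd/conf.d/), so phpMyAdmin may be parsed via that while your hand-added directives never take effect. After changing anything, restart with "service httpd restart".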

    Read the article

  • How to delete removed devices from a mdadm RAID1?

    - by Kabuto
    I had to replace two hard drives in my RAID1. After adding the two new partitions, the old ones still show up as removed while the new ones are only added as spares. I've had no luck removing the partitions marked as removed. Here's the RAID in question; note the two devices (0 and 1) with state "removed":

        $ mdadm --detail /dev/md1
        mdadm: metadata format 00.90 unknown, ignored.
        mdadm: metadata format 00.90 unknown, ignored.
        /dev/md1:
                Version : 00.90
          Creation Time : Thu May 20 12:32:25 2010
             Raid Level : raid1
             Array Size : 1454645504 (1387.26 GiB 1489.56 GB)
          Used Dev Size : 1454645504 (1387.26 GiB 1489.56 GB)
           Raid Devices : 3
          Total Devices : 3
        Preferred Minor : 1
            Persistence : Superblock is persistent
            Update Time : Tue Nov 12 21:30:39 2013
                  State : clean, degraded
         Active Devices : 1
        Working Devices : 3
         Failed Devices : 0
          Spare Devices : 2
                   UUID : 10d7d9be:a8a50b8e:788182fa:2238f1e4
                 Events : 0.8717546

            Number   Major   Minor   RaidDevice   State
               0       0        0        0        removed
               1       0        0        1        removed
               2       8       34        2        active sync   /dev/sdc2
               3       8       18        -        spare         /dev/sdb2
               4       8        2        -        spare         /dev/sda2

    How do I get rid of these devices and add the new partitions as active RAID devices?

    Update: I seem to have gotten rid of them. My RAID is resyncing, but the two drives are still marked as spares and are number 3 and 4, which looks wrong. I'll have to wait for the resync to finish. All I did was fix the metadata error by editing my mdadm.conf and rebooting. I tried rebooting before, but this time it worked for whatever reason.

        Number   Major   Minor   RaidDevice   State
           3       8        2        0        spare rebuilding   /dev/sda2
           4       8       18        1        spare rebuilding   /dev/sdb2
           2       8       34        2        active sync        /dev/sdc2
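
    For anyone hitting the first state above, a hedged sequence that usually clears stale slots: fix the mdadm.conf complaint first (newer mdadm rejects a "metadata=00.90" token on the ARRAY line, which matches the warnings shown), then tell mdadm to drop the gone devices so the spares can take their slots:

        mdadm /dev/md1 --remove detached   # drop slots whose devices are no longer present
        mdadm /dev/md1 --remove failed     # and any marked faulty
        watch cat /proc/mdstat             # "spare rebuilding" is the expected state mid-resync

    The Number 3/4 labels are just slot bookkeeping; once the resync finishes, the two devices should flip from "spare rebuilding" to "active sync".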

    Read the article

  • Ping results "echo'd" out to a text file on Windows XP machine has unusual character rather than the outcome

    - by Richard Rice
    Here is the situation. I have a Windows XP machine that I would like to have ping another machine and send the output to a text file for review at a later date. This is quite a standard question with several answers on this website and others. The issue I have is that when reviewing the txt file afterwards, the outcome of the ping appears as a strange character; it's the first 8 ping outputs that are affected. The %%B was me hoping that would output what I wanted! (I cannot upload a screenshot of the output to any hosting website because the company I work for seems to be afraid of them...) Here is the code I'm using:

        @echo off
        For /f "tokens=1-2 delims=/:" %%b in ("%TIME%") do (set mytime=%%a%%b)
        PING 1.1.1.1 -n 1 -w 1000 >NUL
        for /f "tokens=*" %%A in ('ping 192.168.5.2 -w 10 -n 1') do (echo %date% %time% %%A >> C:\Temp\Ping_Checks\GarrickSide.txt && GOTO Ping )
        :Ping
        PING 1.1.1.1 -n 1 -w 1000 >NUL
        for /f "tokens=* skip=2" %%A in ('ping 192.168.5.2 -w 10 -n 1') do (echo %date% %time% %%A >> C:\Temp\Ping_Checks\GarrickSide.txt && GOTO Ping )

    This code works fine on a Windows 7 machine, but not on a Windows XP machine. It appears that the echo command isn't outputting the right data, but I don't understand this enough to correct it. Can anyone spot where I'm going wrong?
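
    A hedged cleanup worth trying: enable delayed expansion so the timestamp is evaluated when each line is written (inside a parenthesised block, %date%/%time% otherwise expand only once, at parse time), and drop the unused first loop, whose tokens (%%b) and body variable (%%a) don't even match. A sketch, not a guaranteed fix for the stray character:

        @echo off
        setlocal enabledelayedexpansion
        :Ping
        ping 1.1.1.1 -n 1 -w 1000 >nul
        rem redirection-first avoids a "2>>" misparse when the echoed line ends in a digit,
        rem and !date!/!time! are evaluated per iteration rather than at parse time
        for /f "tokens=* skip=2" %%A in ('ping 192.168.5.2 -w 10 -n 1') do (
            >>C:\Temp\Ping_Checks\GarrickSide.txt echo !date! !time! %%A
            goto Ping
        )
        goto Ping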

    Read the article

  • Choosing local versus public domain name for Active Directory

    - by DSO
    What are the pros and cons of choosing a local domain name such as mycompany.local versus a publicly registered domain name such as mycompany.com (assuming that your org has registered the public name)? When would you choose one over the other?

    UPDATE: Thanks to Zoredache and Jay for pointing me to this question, which had the most useful responses. That also led me to find this Microsoft TechNet article, which states:

        It is best to use DNS names that are registered with an Internet authority in the Active Directory namespace. Only registered names are guaranteed to be globally unique. If another organization later registers the same DNS domain name, or if your organization merges with, acquires, or is acquired by another company that uses the same DNS names, then the two infrastructures cannot interact with one another. Note: Using single label names or unregistered suffixes, such as .local, is not recommended.

    Combining this with mrdenny's advice, I think the right approach is to use either:
    1. A registered domain name that will never be used publicly (e.g. mycompany.org, mycompany.info, etc.).
    2. A subdomain of an existing public domain name which will never be used publicly (e.g. corp.mycompany.com).
    The "never used publicly" part is a business decision, so it's probably best to get sign-off from those in the company authorized to reserve domain names and subdomains. E.g. you don't want to use a registered name or subdomain that the marketing dept later wants to use for some public marketing campaign.

    Read the article

  • How do I set up Tomcat 7's server.xml to access a network share with an different url?

    - by jneff
    I have Apache Tomcat 7.0 installed on a Windows 2008 R2 server. Tomcat has access to a share '\\server\share' that has a documents folder that I want to access using '/foo/Documents' in my web application. My application is able to access the documents when I set the file path to '//server/share/documents/doc1.doc'. I don't want the file server's path to be exposed in my application's link to the file; I want to be able to set the path to '/foo/Documents/doc1.doc'. In http://www3.ntu.edu.sg/home/ehchua/programming/howto/Tomcat_More.html, under 'Setting the Context Root Directory and Request URL of a Webapp', item number two says that I can rename the path by putting a context in the server.xml file. So I added the Context at the bottom of my Host element:

        <Host name="localhost" appBase="webapps" unpackWARs="true" autoDeploy="true">
            <!-- SingleSignOn valve, share authentication between web applications
                 Documentation at: /docs/config/valve.html -->
            <!-- <Valve className="org.apache.catalina.authenticator.SingleSignOn" /> -->
            <!-- Access log processes all example. Documentation at: /docs/config/valve.html
                 Note: The pattern used is equivalent to using pattern="common" -->
            <Valve className="org.apache.catalina.valves.AccessLogValve" directory="logs"
                   prefix="localhost_access_log." suffix=".txt"
                   pattern="%h %l %u %t &quot;%r&quot; %s %b" />
            <Context path="/foo" docBase="//server/share" reloadable="false"></Context>
        </Host>

    Then I tried to pull the file using '/foo/Documents/doc1.doc' and it didn't work. What do I need to do to get it to work correctly? Should I be using an alias instead? Are there other security issues that this may cause?
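
    Rather than pointing the whole docBase at the share, Tomcat 7's Context element has an aliases attribute that maps a sub-path of the webapp onto another directory, which seems closer to what's wanted here. Paths below are illustrative, and note that the Windows account the Tomcat service runs as (not your own login) needs read rights on the share:

        <Context path="/foo"
                 docBase="C:\webapps\foo"
                 aliases="/Documents=\\server\share\documents"
                 reloadable="false" />

    One hedged caveat: whether a UNC path works directly in aliases can depend on the Tomcat point release; if it doesn't, mapping the share to a drive letter visible to the service account is the usual fallback.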

    Read the article

  • Cisco FWSM -> ASA upgrade broke our mail server

    - by Mike Pennington
    We send mail with Unicode Asian characters to our mail server on the other side of our WAN... immediately after upgrading from a FWSM running 2.3(2) to an ASA5550 running 8.2(5), we saw failures on mail jobs that contained Unicode. The symptoms are pretty clear... using the ASA's packet capture utility, we snagged the traffic before and after it left the ASA:

        access-list PCAP line 1 extended permit tcp any host 192.0.2.25 eq 25
        capture pcap_inside type raw-data access-list PCAP buffer 1500000 packet-length 9216 interface inside
        capture pcap_outside type raw-data access-list PCAP buffer 1500000 packet-length 9216 interface WAN

    I downloaded the pcaps from the ASA by going to https://<fw_addr>/pcap_inside/pcap and https://<fw_addr>/pcap_outside/pcap... when I looked at them with Wireshark's Follow TCP Stream, the inside traffic going into the ASA looks like this:

        EHLO metabike
        AUTH LOGIN
        YzFwbUlciXNlck==
        cZUplCVyXzRw

    But the same mail leaving the ASA on the outside interface looks like this:

        EHLO metabike
        AUTH LOGIN
        YzFwbUlciXNlck==
        XXXXXXXXXXXX

    The XXXX characters are concerning... I fixed the issue by disabling ESMTP inspection:

        wan-fw1(config)# policy-map global_policy
        wan-fw1(config-pmap)# class inspection_default
        wan-fw1(config-pmap-c)# no inspect esmtp
        wan-fw1(config-pmap-c)# end

    The $5 question... our old FWSM used SMTP fixup without issues... mail went down at the exact moment that we brought the new ASAs online... what specifically is different about the ASA that it is now breaking this mail? Note: usernames/passwords/app names were changed... don't bother trying to Base64-decode this text.
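
    As for what actually changed: the ASA's ESMTP inspection (the 8.x successor to the FWSM's SMTP fixup) masks commands and responses it considers unsafe - the XXXX padding is that masking - and it is stricter about authentication and encrypted sessions than the old fixup was. If disabling inspection outright feels too broad, a hedged middle ground is a custom esmtp inspection policy that tolerates TLS/auth sessions, along these lines:

        policy-map type inspect esmtp tolerant_esmtp
         parameters
          allow-tls action log
        policy-map global_policy
         class inspection_default
          no inspect esmtp
          inspect esmtp tolerant_esmtp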

    Read the article

  • RewriteRule causes POST data to get dumped before I can access it

    - by MatthewMcGovern
    I'm currently setting up my own 'webserver' (an Ubuntu Server on some old hardware) so I can have a mess around with PHP and get some experience managing a server. I'm using my own little MVC framework and I've hit a snag... In order for all requests to make it through the dispatcher, I am using:

        <Directory /var/www/>
            RewriteEngine On
            RewriteCond %{REQUEST_URI} !\.(png|jpg|jpeg|bmp|gif|css|js)$ [NC]
            RewriteRule . HomeProjects/index.php [L]
        </Directory>

    Which works great. I read on Stack Overflow to change the [L] to [P] to preserve POST data. However, this causes every page to return:

        Not Found
        The requested URL <url> was not found on this server.

    So after some more searching, I found, "Note that you need to enable the proxy module, and the proxy_http_module in the config files for this to work." The problem is, I have no idea how to do this, and everything I google has people using examples with virtual hosts, which I don't know how to 'translate' into something useful for my setup. I'm accessing my webserver via my public IP and forwarding traffic on port 80 to the web server (like I'm pretending I have a domain/server). How can I get this enabled and get POST data working again?

    Edit: When I use the following, the server never responds and the page loads indefinitely:

        LoadModule proxy_module /usr/lib/apache2/modules/mod_proxy.so
        LoadModule proxy_http_module /usr/lib/apache2/modules/mod_proxy_http.so
        <Directory /var/www/>
            RewriteEngine On
            RewriteCond %{HTTP_REFERER} !^http://(.+\.)?82\.6\.150\.51/ [NC]
            RewriteRule .*\.(jpe?g|gif|bmp|png|jpg)$ /no-hotlink.png [L]
            RewriteCond %{REQUEST_URI} !\.(png|jpg|jpeg|bmp|gif|css|js)$ [NC]
            RewriteRule . HomeProjects/index.php [P]
        </Directory>
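
    On Ubuntu, modules aren't normally enabled with hand-written LoadModule lines; the packaging expects a2enmod, which symlinks each module's .load/.conf pair into mods-enabled:

        sudo a2enmod proxy proxy_http
        sudo service apache2 restart

    Also worth checking, hedged: a plain internal rewrite with [L] already preserves POST bodies - it's external redirects (R=301/302) that turn a POST into a GET - so [P] may not be needed at all once the real cause of the lost POST data is found.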

    Read the article

  • Expire Files In A Folder: Delete Files After x Days

    - by Brett G
    I'm looking to make a "drop folder" on a Windows shared drive that is accessible to everyone. I'd like files to be deleted automagically if they sit in the folder for more than X days. However, it seems like all the methods I've found to do this use the last modified date, last access time, or creation date of a file. I'm trying to make this a folder that a user can drop files in to share with somebody. If someone copies or moves files into here, I'd like the clock to start ticking at that point. However, the last modified date and creation date of a file will not be updated unless someone actually modifies the file, and the last access time is updated too frequently... it seems that just opening a directory in Windows Explorer will update the last access time. Anyone know of a solution to this? I'm thinking that cataloging the hash of the files on a daily basis and then expiring files based on hashes older than a certain date might be a solution... but taking hashes of files can be time-consuming. Any ideas would be greatly appreciated! Note: I've already looked at quite a lot of answers on here... looked into File Server Resource Monitor, PowerShell scripts, batch scripts, etc. They still use the last access time, last modified time or creation time... which, as described, do not fit the above needs.
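
    A hedged sketch of the cataloging idea that skips hashing entirely: record the first time each path is seen in a CSV, then delete anything whose entry is older than the cutoff. Paths and the 14-day window are placeholders, and Get-ChildItem's -File switch needs PowerShell 3+:

        $folder  = '\\server\share\DropFolder'
        $catalog = 'C:\Scripts\dropfolder-seen.csv'
        $days    = 14

        # load the catalog of first-seen times
        $seen = @{}
        if (Test-Path $catalog) {
            Import-Csv $catalog | ForEach-Object { $seen[$_.Path] = [datetime]$_.FirstSeen }
        }

        # register new files; delete ones first seen more than $days ago
        foreach ($f in Get-ChildItem $folder -Recurse -File) {
            if (-not $seen.ContainsKey($f.FullName)) { $seen[$f.FullName] = Get-Date }
            elseif ($seen[$f.FullName] -lt (Get-Date).AddDays(-$days)) {
                Remove-Item $f.FullName -Force
                $seen.Remove($f.FullName)
            }
        }

        # persist the catalog for the next run (e.g. a daily scheduled task)
        $seen.GetEnumerator() | ForEach-Object {
            [pscustomobject]@{ Path = $_.Key; FirstSeen = $_.Value }
        } | Export-Csv $catalog -NoTypeInformation

    Renames and moves within the folder would restart a file's clock under this scheme, which may or may not matter for a share-and-forget drop folder.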

    Read the article

  • Why am I getting a Sharepoint error on a simple "hello world" web page?

    - by Fetchez la vache
    I've been granted admin access to an internal IIS server on which I need to set up a web site. Before doing anything technical I wanted to ensure that I could access the server, but when attempting to access a simple page (that does not refer to SharePoint) at http://localhost/index.html while logged onto the server directly, I am getting:

        Parser Error
        Description: An error occurred during the parsing of a resource required to service this request. Please review the following specific parse error details and modify your source file appropriately.

        Parser Error Message: Could not load file or assembly 'Microsoft.SharePoint' or one of its dependencies. The system cannot find the file specified.

        Source Error:
        Line 1: <%@ Assembly Name="Microsoft.SharePoint"%><%@ Application Language="C#" Inherits="Microsoft.SharePoint.ApplicationRuntime.SPHttpApplication" %>

        Source File: /global.asax    Line: 1

        Assembly Load Trace: The following information can be helpful to determine why the assembly 'Microsoft.SharePoint' could not be loaded.
        WRN: Assembly binding logging is turned OFF. To enable assembly bind failure logging, set the registry value [HKLM\Software\Microsoft\Fusion!EnableLog] (DWORD) to 1. Note: There is some performance penalty associated with assembly bind failure logging. To turn this feature off, remove the registry value [HKLM\Software\Microsoft\Fusion!EnableLog].

        Version Information: Microsoft .NET Framework Version: 2.0.50727.5456; ASP.NET Version: 2.0.50727.5456

    To be quite honest I know zip about SharePoint, so why am I getting a SharePoint error on a basic "hello world" HTML page? Cheers :)

    Update: I've since supposedly uninstalled SharePoint, but am still getting this error. Any ideas welcome!
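
    The stack trace itself names the culprit: a global.asax left in the site root by SharePoint runs for every request, static HTML included, and it references an assembly that is no longer installed. A hedged first step, assuming the site root is the default wwwroot (back the file up rather than deleting it):

        move C:\inetpub\wwwroot\global.asax C:\inetpub\wwwroot\global.asax.bak
        iisreset

    If other SharePoint leftovers interfere (web.config entries, ISAPI filters), creating a fresh IIS site with its own application pool and root folder sidesteps them entirely.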

    Read the article

  • How to convince Out of state resume filtering?

    - by sanksjaya
    Hello folks! Personally, I've applied to quite a handful of IT admin jobs inside my state and to ones that are far away. The sad part is that I never fail to get an interview for the jobs in my state, but I get a call once in a blue moon for jobs out of my state. Note: all the jobs are of a similar nature. Recently one of my friends told me that "applicants with local addresses are the ones that are even looked upon". How true is this? Does filtering take place at the address level before qualifications? Is using a PO box on a resume acceptable (one for each state, like CA, TX, VA)? Any other suggestions to get calls from out of state? Thank you :)

    EDIT: Ignore my previous questions; the PO box idea is out of my mind, so I changed the title. Here are my new questions: 1. How true is this? Does filtering take place at the address level before qualifications? 2. I'm ready to relocate anywhere on my own (time/money). How do I convince out-of-state resume filtering HRs? Sanks

    Read the article
