Search Results


  • NAT ports - how do they work?

    - by Davidoper
    I have the following network schema:

      Computer A: three NICs:
        NIC 1 (eth0): DHCP, public internet
        NIC 2 (eth1): static 192.168.1.1, gateway for Computer B
        NIC 3 (eth2): static 192.168.2.1, gateway for Computer C
      Computer B: static 192.168.1.2, using gateway 192.168.1.1 (NIC 2)
      Computer C: static 192.168.2.2, using gateway 192.168.2.1 (NIC 3)

    So I applied this to get NAT working:

      iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE

    Every computer can connect to the internet now. I have been applying rules to the main computer (Computer A), like dropping connections to some ports, e.g. ssh:

      iptables -A INPUT -p tcp --dport 22 -j DROP

    But now, for instance, I would like to allow only connections on ports 20, 21, 22, 53 and 80 for Computer C, and ignore outside traffic that is not related to those ports. The allowed connections should be FROM Computer C to the outside, but not from the outside to Computer C (I mean, Computer C is not hosting any HTTP or SSH server, but it is going to use those services as a client). I guessed this should be done like this:

      iptables -A OUTPUT -i eth2 -o eth0 -p tcp --dport 21 -m state --state NEW,ESTABLISHED -j ACCEPT
      iptables -A INPUT -i eth2 -o eth0 -p tcp --sport 21 -m state --state ESTABLISHED -j ACCEPT

    The last rule (dropping any traffic other than those ports) is at the end of the configuration, so -A should be working correctly. The thing is... it is not working. If I write the last rule like this:

      iptables -A FORWARD -i eth2 -o eth0 -j DROP

    it just drops everything, and, for instance, port 21 (previously opened, as you can see above) stops working as well. Can you tell me what I could have done wrong? I have been struggling with this problem for some time and I am unable to solve it. Thanks!
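    A hedged aside on those rules, rather than a definitive fix: traffic that Computer A forwards on behalf of Computer C never traverses A's INPUT or OUTPUT chains, only FORWARD (and combining -i with -o is not even valid on INPUT or OUTPUT), so the client-only policy has to live in FORWARD. A minimal sketch, using the interfaces and ports from the question:

      # Let C initiate TCP connections to the allowed ports, and let replies back in
      iptables -A FORWARD -i eth2 -o eth0 -p tcp -m multiport --dports 20,21,22,53,80 \
               -m state --state NEW,ESTABLISHED -j ACCEPT
      iptables -A FORWARD -i eth0 -o eth2 -m state --state ESTABLISHED,RELATED -j ACCEPT
      # DNS is usually UDP, so port 53 likely needs this as well
      iptables -A FORWARD -i eth2 -o eth0 -p udp --dport 53 -j ACCEPT
      # Everything else from C is dropped
      iptables -A FORWARD -i eth2 -o eth0 -j DROP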

  • ssh timeout issue connecting to an EC2 instance on OS X

    - by mamusr
    I am new to AWS and not a networking expert, but I'm curious to know more about it. I created a VPC with a public subnet only, then created an EC2 instance from an Ubuntu 14.04 64-bit PV AMI (ami-e84d8480), generating the key pair needed to connect to it through ssh. I followed Amazon's instructions for connecting to an EC2 instance via ssh, which did not work. Here is my attempted input and debug log, running on OS X 10.9.4:

      user$ ssh -vvv -i key.pem [email protected]
      OpenSSH_6.2p2, OSSLShim 0.9.8r 8 Dec 2011
      debug1: Reading configuration data /etc/ssh_config
      debug1: /etc/ssh_config line 20: Applying options for *
      debug1: /etc/ssh_config line 102: Applying options for *
      debug2: ssh_connect: needpriv 0
      debug1: Connecting to xxx.xxx.xxx.xxx [xxx.xxx.xxx.xxx] port 22.
      debug1: connect to address xxx.xxx.xxx.xxx port 22: Operation timed out
      ssh: connect to host xxx.xxx.xxx.xxx port 22: Operation timed out

    To attempt to resolve the issue, I enabled the SSH port, tried usernames other than ubuntu (such as ec2-user and root), and initially set an inbound ssh rule in the security group allowing only my IP address; when that did not work, I changed it to allow any IP to connect. None of that fixed the problem. My guesses as to what I am missing: my /etc/ssh_config file may be preventing the connection; I may have missed an important networking detail when setting up the VPC; or it may matter that I have no public IP address specified for the instance and am connecting through the private IP address. My question for the community: am I going about this the wrong way by connecting through the private IP address? If so, do I need to assign a public IP address for the connection to work, or use some other method?
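    For what it's worth, that last guess is the likely one: an instance in a VPC public subnet is only reachable from the internet if it has a public or Elastic IP and the subnet's route table has a default route to an internet gateway; its private address is not routable from outside. A hedged sketch with the AWS CLI (all the i-/eipalloc-/rtb-/igw- IDs below are placeholders):

      # Allocate an Elastic IP and attach it to the instance
      aws ec2 allocate-address --domain vpc
      aws ec2 associate-address --instance-id i-xxxxxxxx --allocation-id eipalloc-xxxxxxxx
      # Make sure the public subnet routes 0.0.0.0/0 to the internet gateway
      aws ec2 create-route --route-table-id rtb-xxxxxxxx --destination-cidr-block 0.0.0.0/0 --gateway-id igw-xxxxxxxx

    Then ssh to the Elastic IP rather than the private address.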

  • Terminating multi-mode fiber

    - by murisonc
    I'm looking at the feasibility of terminating multi-mode fiber connections ourselves. We would be using LC connectors. I've done some research and found two different methods: one requires polishing the ends and using epoxy, while the other doesn't. I like the idea of not having to polish the ends, but there doesn't seem to be much information on quality or ease of use. I've found two vendors (3M and Corning) that offer kits for terminating fiber without polishing or using epoxy. Does anyone have experience with both methods who can offer some advice? Copper is easy, but fiber seems to be a whole different animal.

    EDIT: After looking into the fusion splicing suggested in the answer, I've determined it's not for us. My understanding is that it is primarily used for outside plant and is better suited to single-mode fiber. It's a good answer but doesn't address the question directly. Some more information about our situation: we will only be terminating multi-mode fiber inside a building, and only doing between 4 and 20 pairs a year. Hiring an outside person won't work due to our location. There are currently a couple of people on-site who can terminate fiber (working for another company and charging large fees), but they can only do ST and SC connectors and we only use LC. So, once again: does anyone have experience with terminating using both epoxy-type connectors and the other type (similar to the Corning UniCam)?

  • Problem importing pictures from a digital camera with Windows 7

    - by snark
    Since I've been using Windows 7 (Beta, then RC, now RTM), I have issues when I download pictures from my digital cameras. It happens with both of my cameras: a Canon PowerShot S2 IS and a Canon Ixus 80 IS. When I plug either camera into a USB port and switch it on in Play mode, the Autoplay function of Windows 7 starts (screenshot not reproduced here). I select "Import pictures and videos" to call the native Windows 7 tool. It searches a bit for pictures to download from the camera and starts the transfer. However, during the transfer I often get errors (again, the screenshot is missing). If I use "Try again", it works fine the second time and the picture is retrieved correctly. It's very annoying when it happens 20 or 30 times in a 500-picture download; I cannot leave it running unattended, as I have to watch for the errors and click "Try again".

    Any idea what is causing these errors? I tried changing the USB port (normally the cameras are connected via a USB hub, but it also happens when I connect them directly to a motherboard USB port) and the USB cable, with no success. I also checked the SD cards by connecting them with a card reader and running chkdsk on them, which found no errors.

    Update: there is no problem when I copy the pictures manually with Windows Explorer, and no problem either when I access the card with a reader. The built-in import tool of Windows is convenient because it automatically sorts the pictures by date (one folder per day), and that is how I sort my pictures.

  • HP Procurve 2610 intervlan routing

    - by user19039
    Can anyone tell me why inter-VLAN routing is working for all VLANs except my newly created VLAN 4? I have an HP ProCurve 2610; any help would be appreciated. Basically there is this one core switch with all our unmanaged switches attached to it, and a second 2610 on port 28.

    Running configuration:

      ; J9085A Configuration Editor; Created on release #R.11.25
      hostname "Core_HP"
      interface 22
         speed-duplex 100-full
      exit
      ip routing
      snmp-server community "public" Unrestricted
      vlan 1
         name "DEFAULT_VLAN"
         untagged 1-12,17-22,26-27
         ip address 192.168.4.6 255.255.255.0
         tagged 25
         no untagged 13-16,23-24,28
      exit
      vlan 2
         name "WAN"
         untagged 28
         ip address 10.254.254.3 255.255.255.0
      exit
      vlan 3
         name "Wireless"
         untagged 13-16,24
         ip address 192.168.7.6 255.255.255.0
         ip helper-address 192.168.4.2
         tagged 27
      exit
      vlan 35
         name "guest"
         untagged 23
         tagged 24
      exit
      vlan 4
         name "esxi"
         untagged 25
         ip address 10.10.1.1 255.255.248.0
      exit
      ip route 192.168.5.0 255.255.255.0 10.254.254.1
      ip route 192.168.6.0 255.255.255.0 10.254.254.1
      ip route 0.0.0.0 0.0.0.0 192.168.4.10

    show ip route:

      IP Route Entries
      Destination        Gateway          VLAN  Type       Sub-Type  Metric  Dist.
      ------------------ ---------------- ----  ---------  --------  ------  -----
      0.0.0.0/0          192.168.4.10     1     static               1       1
      10.10.0.0/21       esxi             4     connected            0       0
      10.254.254.0/24    WAN              2     connected            0       0
      127.0.0.0/8        reject                 static               0       250
      127.0.0.1/32       lo0                    connected            0       0
      192.168.4.0/24     DEFAULT_VLAN     1     connected            0       0
      192.168.5.0/24     10.254.254.1     2     static               1       1
      192.168.6.0/24     10.254.254.1     2     static               1       1
      192.168.7.0/24     Wireless         3     connected            0       0

    show ip:

      Internet (IP) Service
      IP Routing : Enabled
      Default TTL : 64
      Arp Age : 20

      VLAN         | IP Config  IP Address     Subnet Mask      Proxy ARP
      ------------ + ---------- -------------- ---------------  ---------
      DEFAULT_VLAN | Manual     192.168.4.6    255.255.255.0    No
      WAN          | Manual     10.254.254.3   255.255.255.0    No
      Wireless     | Manual     192.168.7.6    255.255.255.0    No
      esxi         | Manual     10.10.1.1      255.255.248.0    No
      guest        | Disabled
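    A hedged observation rather than a confirmed diagnosis: the switch-side config for VLAN 4 looks routable (the 10.10.0.0/21 route shows as connected), so the usual suspects are the hosts on VLAN 4 not using 10.10.1.1 as their gateway, and the upstream router at 192.168.4.10 having no return route for the new subnet. The latter would need something like the following on that device (exact syntax depends on what 192.168.4.10 actually is):

      ip route 10.10.0.0 255.255.248.0 192.168.4.6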

  • How to install a private user script in Chrome 21+?

    - by Mathias Bynens
    In Chrome 20 and older versions, you could simply open any .user.js file in Chrome and it would prompt you to install the user script. However, in Chrome 21 and up, it downloads the file instead, and displays a warning at the top saying “Extensions, apps, and user scripts can only be added from the Chrome Web Store”. The “Learn More” link points to http://support.google.com/chrome_webstore/bin/answer.py?hl=en&answer=2664769, but that page doesn’t say anything about user scripts, only about extensions in .crx format, apps, and themes. This part sounded interesting:

      Enterprise Administrators: You can specify URLs that are allowed to install extensions, apps, and themes directly through the ExtensionInstallSources policy.

    So, I ran the following commands, then restarted Chrome and Chrome Canary:

      defaults write com.google.Chrome ExtensionInstallSources -array "https://gist.github.com/*"
      defaults write com.google.Chrome.canary ExtensionInstallSources -array "https://gist.github.com/*"

    Sadly, these settings only seem to affect extensions, apps, and themes (as it says in the text), not user scripts. (I’ve filed a bug asking to make this setting affect user scripts as well.) Any ideas on how to install a private user script (that I don’t want to add to the Chrome Web Store) in Chrome 21+?

    Update: The problem was that gist.github.com’s raw URLs redirect to a different domain. So, use these commands instead:

      # Allow installing user scripts via GitHub or Userscripts.org
      defaults write com.google.Chrome ExtensionInstallSources -array "https://*.github.com/*" "http://userscripts.org/*"
      defaults write com.google.Chrome.canary ExtensionInstallSources -array "https://*.github.com/*" "http://userscripts.org/*"

    This works!

  • How to utilize Varnish for A/B Testing and Feature Rollout?

    - by Ken
    Hi all, I wasn't really sure if this should go here or on Stack Overflow; admins, please move it if I'm mistaken (and sorry). Today our web layer is exposed directly to the world. We would like to add Varnish in front of it to accelerate the site and reduce calls to the backend. However, we have some concerns, and I was wondering how most people approach them (see also the sketch after this list):

      A/B testing: how do you test two "versions" of each page and compare? I mean, how does Varnish know which page to serve up? If and how do you save separate versions of each page?

      Feature rollout: how would you set up a simple feature rollout mechanism? Let's say I want to open a new feature/page to just 10% of the traffic, and later increase that to 20%.

      Code deployments: do you purge your entire Varnish cache on every deployment (we deploy daily), or do you just let objects slowly expire via their TTL?

    Any ideas and examples regarding these issues are greatly appreciated! Thanks in advance. Ken.
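    On the deployment question, a hedged sketch: the usual pattern is to ban (invalidate) objects rather than wait for TTLs, either everything or just the URLs a release touched. Varnish 3.x CLI syntax shown; Varnish 2.x spelled the command "purge":

      # Invalidate the whole cache at deploy time
      varnishadm "ban req.url ~ ."
      # Or, gentler, only what the release changed
      varnishadm "ban req.url ~ ^/products/"

    For A/B tests and percentage rollouts, the common trick is to assign each client a sticky bucket (a cookie set in vcl_recv from a hash of the client identity) and include that bucket in the cache key, so Varnish caches each variant separately; moving a rollout from 10% to 20% is then just raising the bucket threshold.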

  • Keyboards for kiosk/outdoor/abusive environments?

    - by Justin Scott
    We have a bunch of kiosks deployed into, let's just say, abusive environments. The enclosures we had built are tough as nails, and the HP thin client computers are working great. The keyboards purchased for the project, however, have been nothing but problems. They're a generic brand direct from a Chinese manufacturer: stainless steel with keys mounted from the inside and a trackball. They've been deployed for only a month, and nearly 20% of them are already out of service due to keys sticking, keys not working, trackball problems, water damage, and a variety of other issues. Are there any kiosk keyboards that can take a beating without breaking so easily? Ideally they should be tamper-proof (keys can't be removed) and waterproof, with lettering engraved into the keys, a trackball, optionally a single mouse button, and some protection to keep debris out of the keys so they don't stick (sticky cleaners, food debris, etc.). Does such a beast exist? Everything we've looked at is susceptible to easy damage. We need the M1 Abrams tank of keyboards. Any suggestions?

  • Any way to bring my laptop battery back to life?

    - by Josh
    Recently my laptop battery gets extremely hot (definitely hotter than it should) when I charge it. I usually end up removing it once it's fully charged to let it cool down, which takes a couple of hours... Question is, is my battery dead? The last battery I had that died simply ended up lasting 2-3 minutes on a charge, with no weird heat issues. And is there any way to possibly fix this? Probably not, but I won't be able to get a replacement anytime soon.

    UPDATE: A few days ago when this happened and the battery had cooled down, assuming it was fully charged, I ran my laptop on battery; it lasted about 10 minutes and then the laptop shut down. I plugged it in later and charged it back up, and for a while an orange light was blinking on my laptop, which I assumed meant the battery was dead, especially given the 10 minutes of battery life. Then today I turned my laptop on and was surprised to see the battery at 20% and charging (it had been plugged in since the incident above, so it should have been fully charged when I shut it off). I let it charge up, and as usual it got pretty hot around the time it was fully charged, so I turned the laptop off and pulled the battery out to let it cool down. The thing is, I just tried running it on battery, and it's been going for an hour now... so maybe it's not dead? (Also, the orange light is no longer blinking...) Thanks in advance if anyone knows what's going on, and how to fix it, if it's fixable =]

    EDIT: Some info, if it helps: my laptop is about 2 years old, an Asus K50ID. I know laptop batteries usually don't last more than a year, but I'm trying to keep this one going for as long as I can.

  • How to verify a self-signed certificate from another server using openssl?

    - by ntsue
    I am new to openssl, and I am having some trouble verifying, from a client machine, an FTP server that uses SSL with a self-signed certificate. I generated the .cer file by going to my server in IIS and exporting the certificate without the private key. I believe this is all I should need on the client side, right? I use the following command to verify the certificate:

      openssl verify ftp.cer

    and the error I get back is:

      error 20 at 0 depth lookup:unable to get local issuer certificate

    I tried this as well:

      openssl verify -CAfile ftp.cer ftp.cer

    but received the same error. From what I understand about SSL, this happens because I have no chain of trust that connects to this server. By default openssl does not install any trusted CAs, and this is fine; I would just like to tell it to trust this one server. I tried various tutorials telling me how to add a certificate authority, including this one here, but the instructions are for Linux and involve adding a symlink, and I am trying to do this on Windows. If anyone could provide guidance on how to do this, or enlighten me if I am misunderstanding something, I would greatly appreciate it. Thanks!
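    A hedged aside: for a genuinely self-signed certificate, "openssl verify -CAfile ftp.cer ftp.cer" normally succeeds, so error 20 suggests the exported certificate is not actually self-signed (its issuer differs from its subject, e.g. it was issued by a CA certificate that also lives on that IIS box). Two standard checks:

      # If issuer and subject differ, it is the issuing certificate that -CAfile needs
      openssl x509 -in ftp.cer -noout -subject -issuer
      # Windows exports are often DER-encoded; openssl assumes PEM unless told otherwise
      openssl x509 -inform DER -in ftp.cer -noout -subject -issuer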

  • Nvidia GTS 360M in Asus laptop shuts off when gaming on external display

    - by mr odus
    I have had this problem on my Asus G51j (Intel i5, Nvidia GTS 360M graphics card). I just updated the drivers to the latest from Nvidia's site. With most games I play, if I hook the laptop up to an external monitor (this happens whether it's HDMI or VGA), the laptop hard shuts down after about 20 minutes, give or take. This tends to happen in more graphically intense games like Call of Duty and BioShock. I'm running Windows 7 with the latest Nvidia drivers. Games work fine on the laptop's own screen, and movies and general computing work fine on the external displays. The laptop always sits on a cooling pad, and the latest time it was in front of my AC unit; I ran HeatFan (or whatever the heat-tracking software was called) and my temperatures were normal through a shutdown. This has been happening for the life of the laptop, but I don't play very many games, and even fewer on an external monitor, so the issue doesn't come up much. Is it possible I have a faulty graphics card? Is there anything else I can try?

  • Zero sized tar.gz file found inside a tar.gz file

    - by PavanM
    My current directory contains a single file, like this:

      $ ls -l
      -rw-r--r-- 1 root staff 8 May 28 09:10 pavan

    Now I want to tar and gzip this file, like:

      $ tar -cvf - * 2>/dev/null | gzip -vf9 > pavan.tar.gz 2>/dev/null

    (I am aware I am creating the zipped file in the same directory as the original file.) When I run the above tar/gzip command around 20 times, a few times I observe that the final archive pavan.tar.gz contains a zero-sized pavan.tar.gz entry. I am not sure where this zero-sized file is coming from.

    Note: I am NOT running the command on an already existing tar.gz file; I always make sure the directory has only one file before running it. On googling, as described here, I suspected that the tar.gz being created was also becoming part of the file set being archived. But in my case, gzip is the one creating the final file, and by the time gzip runs, tar should be done tarring. This is happening on AIX, but I've used the Linux tag too, to draw more attention, as I guess the problem is platform independent.
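    A hedged explanation of the race: both sides of a pipeline start at the same time, so the shell may create the (empty) pavan.tar.gz for gzip's output redirection before it expands tar's *, in which case the glob picks up the zero-byte output file and tar archives it at its size at that instant. Two sketches that avoid the race:

      # Name the input explicitly and write the archive where no glob is involved
      tar -cvf - pavan 2>/dev/null | gzip -9 > /tmp/pavan.tar.gz
      # Or, with GNU tar, exclude the output explicitly (AIX's native tar
      # may not support --exclude, which is worth checking)
      tar -cvf - --exclude='pavan.tar.gz' . 2>/dev/null | gzip -9 > pavan.tar.gz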

  • Web SMTP server (foo.com) will not send mail to Exchange server which is also foo.com

    - by Atom
    I think I understand this problem fully, but I do not know how to approach it or where to go in terms of troubleshooting. I've got my domain http://foo.com running a Zen Cart installation that needs to be able to send emails to users (order confirmation, password reset). Sending works fine to any other domain BUT foo.com. I'm also running a locally hosted Exchange server that is foo.com, and we can send and receive email there just fine. If I run:

      tail -f /usr/local/psa/var/log/maillog

    I receive this error:

      Apr 1 10:08:51 foo qmail-local-handlers[25824]: Handlers Filter before-local for qmail started ...
      Apr 1 10:08:51 foo qmail-local-handlers[25824]: from=
      Apr 1 10:08:51 foo qmail-local-handlers[25824]: [email protected]
      Apr 1 10:08:51 foo qmail-local-handlers[25824]: cannot reinject message to '[email protected]'
      Apr 1 10:08:51 foo qmail: 1270141731.583139 delivery 32410: failure: This_address_no_longer_accepts_mail./
      Apr 1 10:08:51 foo qmail: 1270141731.584098 status: local 0/10 remote 0/20

    I understand that the foo.com SMTP service on the web server has no mailboxes, only the account used to authenticate outgoing mail, so of course it says 'this address no longer accepts mail'. My question is: how can I get the web server's SMTP service to hand off emails meant for our Exchange server ([email protected]), or how do I handle the mail that needs to be sent there? Is this something to do with MX records? Thanks in advance. A
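    A hedged reading of that log (the /usr/local/psa path suggests Plesk with qmail): the web box still considers foo.com a local mail domain, so it attempts local delivery instead of following the public MX to Exchange. Two things worth checking; the exact control files are an assumption, and in Plesk this is normally toggled per-domain in the mail settings:

      # Confirm the public MX for foo.com points at the Exchange server
      dig +short MX foo.com
      # See whether qmail lists foo.com for local delivery
      grep -i foo.com /var/qmail/control/locals /var/qmail/control/virtualdomains 2>/dev/null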

  • PNP4Nagios, nagiosgraph, separate Cacti, or something else for Nagios trending

    - by Matt
    I've been using Nagios for a while now and recently started using Cacti after being dissatisfied with the lack of scaling and lack of any GUI in MRTG. I'm interested in adding trending to my Nagios installation and wondered what was the best route to go. I've looked around a bit and have seen what's available, but there's not a lot of information around to differentiate them from each other. My Nagios install has about 250 hosts and 1100 service checks, but many of them are just simple network devices and there's only about 20 servers and 300 services associated with them. All servers but 2 are running Windows Server 2003. What are the main highlights of PNP4Nagios vs. nagiosgraph, or would I be better off using some sort of tool to convert the data to RRD form and just view it directly in Cacti? Is there a completely different direction I could go that would be even better? Please comment if you need any more information, I tend to be too wordy and tried to keep this question brief. Thanks!

  • Amazon EC2 instance was not available for a few minutes (Amazon showed that everything was OK)

    - by Salvador Dali
    A few minutes ago my Amazon EC2 instance was unavailable for a few minutes. During this time I could neither reach the web site over HTTP nor ssh to the instance, and for part of that time I could not reach the AWS management console either. When I could reach the console again, it showed that everything was running smoothly (though I still could not reach the instance in any way for another minute or two). I checked the AWS status page and there were no reported issues (my instance is in Ireland, and nothing is wrong there today). After that I was able to log in. I checked recent logins with last and saw no one but me, and the Apache logs show no errors or warnings during this time. Right now the Amazon monitor shows a small spike in CPU in the last 15 minutes (but only from about 10% to 20%). I have no idea what this could have been (I have never experienced anything like it before), so I have no idea how worried I should be or what else to look for. Can anyone give me a hint what my actions should be in such a situation?
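    A few hedged places to look on the instance itself, using standard Linux tooling (log paths vary by distro):

      last -x | head                 # unexpected reboots or runlevel changes
      dmesg | tail -50               # OOM killer, hung-task or hypervisor messages
      grep -i oom /var/log/syslog    # /var/log/messages on RHEL-style systems

    CloudWatch's status-check metrics for the instance are also worth a look: a failed system status check during that window would point at the underlying host rather than anything you run.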

  • Apache2 random 403 errors & [info] "server seems busy" logs on Ubuntu

    - by risyasin
    Hello, I have a strange situation with apache2: meaningless, random 403 errors. Any page (HTML, PHP, etc.) normally works, but if I request repeatedly by pressing the browser's refresh button, it intermittently sends a random 403; after a few seconds it works again. In the error log I see "client denied by server configuration". Apache's main error log says:

      [info] server seems busy, (you may need to increase StartServers, or Min/MaxSpareServers),
      spawning 8 children, there are 99 idle, and 137 total children

    My current values:

      <IfModule mpm_prefork_module>
          StartServers         120
          MinSpareServers      100
          MaxSpareServers      200
          MaxClients           256
          MaxRequestsPerChild  500
      </IfModule>

    I've increased these 10 at a time, starting from 20, but nothing is solved. I've also disabled KeepAlive. What may cause this problem? Thank you in advance. This is a fresh install: Ubuntu server x86 8.04.4, Virtualmin from its website (not from the Debian repositories), Linux 2.6.24-27-server #1 SMP i686, Apache 2.2.8 with the prefork MPM, Virtualmin 3.78.gpl GPL, PHP 5.2.4-2ubuntu5.10.

    Loaded modules: core_module (static), log_config_module (static), logio_module (static), mpm_prefork_module (static), http_module (static), so_module (static), actions_module (shared), alias_module (shared), auth_basic_module (shared), auth_digest_module (shared), authn_file_module (shared), authz_default_module (shared), authz_groupfile_module (shared), authz_host_module (shared), authz_user_module (shared), autoindex_module (shared), cache_module (shared), cgi_module (shared), deflate_module (shared), dir_module (shared), env_module (shared), expires_module (shared), fcgid_module (shared), file_cache_module (shared), headers_module (shared), mime_module (shared), mime_magic_module (shared), evasive20_module (shared), negotiation_module (shared), php5_module (shared), rewrite_module (shared), setenvif_module (shared), ssl_module (shared), status_module (shared). Syntax OK
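    A hedged hunch from that module list: evasive20_module is mod_evasive, which answers 403 to clients that re-request the same page rapidly, matching the refresh-button symptom exactly. Disabling it temporarily would confirm (Debian/Ubuntu commands; the module's registered name is an assumption worth checking in /etc/apache2/mods-enabled):

      a2dismod evasive
      apache2ctl configtest && apache2ctl graceful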

  • "The directory name is invalid" when trying to install drivers in Windows 7 via Device Manager

    - by Luke
    First off, this computer is not mine, it's a customer's system. Having said that... the hard drive was moved to a new motherboard/CPU/RAM combo and booted up fine. The customer put in the driver CD, the drivers wouldn't load, and he brought it to me. Under Device Manager on Windows 7 x64, I see lots of "PCI to PCI bridge" entries, one SMBus Controller, and about 20 Unknown Devices. Greeeeeat...

    So I started with the SMBus driver, downloaded directly from the Asus website for the motherboard (P8H77-M Pro). If I install from the setup program, it tells me to reboot, then starts the install, gets halfway through setup, and fails ("An unknown error occurred. Setup will exit"). When I instead point Device Manager at the folder, it starts copying files for the driver, even presents me with the proper device name, but then reports an error as well: "The directory name is invalid."

    Googling shows many people had this issue with Vista. OK, Vista and 7 are similar, maybe the solutions are the same... but they aren't. I tried:

      - copying the entire driver folder and setup utility to the Program Files folder and running it / selecting it in Device Manager
      - downloading another set of drivers in case this one is corrupt
      - disabling UAC
      - deleting and recreating the %WINDIR%\TEMP folder
      - removing all references to the previous hardware that I could find, even in Device Manager's hidden mode

    So far, nothing has worked, and a wipe and reload will be out of the question.
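    A hedged avenue that sometimes sidesteps flaky vendor installers: staging the .inf directly into the driver store with pnputil, which ships with Windows 7, from an elevated command prompt (the path below is a placeholder for wherever the extracted driver lives):

      pnputil -a C:\Drivers\SMBus\*.inf

    Device Manager's "Update Driver Software" can then pick the driver up from the store without running the vendor setup program at all.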

  • Dynamic nginx domain root path based on hostname?

    - by Xeoncross
    I am trying to set up my development nginx/PHP server with a basic master/catch-all vhost config so that I can create unlimited ___.framework.loc domains as needed:

      server {
          listen 80;
          index index.html index.htm index.php;

          # Test 1
          server_name ~^(.+)\.frameworks\.loc$;
          set $file_path $1;
          root /var/www/frameworks/$file_path/public;
          include /etc/nginx/php.conf;
      }

    However, nginx responds with a 404 error for this setup. I know nginx and PHP are working and have permission, because this localhost config works fine:

      server {
          listen 80 default;
          server_name localhost;
          root /var/www/localhost;
          index index.html index.htm index.php;
          include /etc/nginx/php.conf;
      }

    What should I be checking to find the problem? Here is a copy of the php.conf they both load:

      location / {
          try_files $uri $uri/ /index.php$is_args$args;
      }

      location ~ \.php$ {
          try_files $uri =404;
          include fastcgi_params;
          fastcgi_index index.php;

          # Keep these parameters for compatibility with old PHP scripts using them.
          fastcgi_param PATH_INFO $fastcgi_path_info;
          fastcgi_param PATH_TRANSLATED $document_root$fastcgi_path_info;
          fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;

          # Some default config
          fastcgi_connect_timeout 20;
          fastcgi_send_timeout 180;
          fastcgi_read_timeout 180;
          fastcgi_buffer_size 128k;
          fastcgi_buffers 4 256k;
          fastcgi_busy_buffers_size 256k;
          fastcgi_temp_file_write_size 256k;
          fastcgi_intercept_errors on;
          fastcgi_ignore_client_abort off;
          fastcgi_pass 127.0.0.1:9000;
      }
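    A hedged guess plus a sketch: nginx's numbered captures like $1 can be overwritten by any later regex match (the "location ~ \.php$" inside php.conf is a likely suspect), and the documented idiom for this setup is a named capture, which survives ($project below is an arbitrary name). Separately, it is worth ruling out name resolution, since a hosts file cannot express a *.frameworks.loc wildcard; a local resolver such as dnsmasq usually handles that.

      server {
          listen 80;
          index index.html index.htm index.php;
          server_name ~^(?<project>.+)\.frameworks\.loc$;
          root /var/www/frameworks/$project/public;
          include /etc/nginx/php.conf;
      }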

  • exim4 to relay emails

    - by Matthieu
    I have exim4 installed on a Linux box. The basics work fine and I can send email from that machine to any email address without problems. I also have a printer/scanner which can send scans as emails; it needs an SMTP gateway to do that, so I gave it the IP address of the Linux box and changed the configuration a little, but I still cannot get it to work. After running dpkg-reconfigure exim4-config, here is what I have in /etc/exim4/update-exim4.conf.conf:

      dc_eximconfig_configtype='internet'
      dc_other_hostnames=''
      dc_local_interfaces='127.0.0.1;192.168.2.2'
      dc_readhost=''
      dc_relay_domains='mycompanyemail.com'
      dc_minimaldns='false'
      dc_relay_nets='192.168.2.0/24'
      dc_smarthost=''
      CFILEMODE='644'
      dc_use_split_config='false'
      dc_hide_mailname=''
      dc_mailname_in_oh='true'
      dc_localdelivery='mail_spool'

    My problem is that with this configuration, the scanner can only send to addresses @mycompanyemail.com. It supposedly accepts a wildcard, but when I use one, the '*' is replaced by whatever filenames are in the directory where I run all this. How can I configure it to send emails to any domain? Or am I doing it wrong?

    EDIT: here is the part of the log that's causing trouble:

      2011-08-03 16:28:18 H=(NPI2D389C) [192.168.2.20] F=<[email protected]> rejected RCPT <[email protected]>: relay not permitted

    The first part ([email protected]) does not matter; I changed the email address. The point is that if the recipient is @mycompanyemail.com, everything works fine; anything else does not. I could add gmail.com, but I am looking to have any domain working...
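    A hedged note on those two settings: dc_relay_domains lists extra destination domains the server will relay to, while dc_relay_nets is what authorises LAN clients such as the scanner at 192.168.2.20 to send anywhere. Since relay_nets already covers 192.168.2.0/24 yet the relay is refused, one plausible culprit is a runtime config that predates the change; on Debian the dialog only edits the template until it is regenerated:

      update-exim4.conf
      invoke-rc.d exim4 restart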

  • Excel file growing huge (>150 MB)

    - by Josh
    There is one particular Excel file that is used by a number of employees at my company. It is edited from both Excel 2003 and 2007, with the "Sharing" feature turned on to allow multiple writers at once. The file has a decent amount of data on several sheets with some basic formatting, and used to be about 6 MB, which seems reasonable for its content. But after a few weeks of editing, the file grew to 10, then 20 MB, and eventually skyrocketed to more than 150 MB, even though it still has about the same amount of data as before. It now takes 5-10 minutes to open, and that much time again to save.

    The first time this happened, I copied the content of each sheet into a new, blank workbook and saved it; this brought the size back down to about 6 MB. Now it has blown up again. The workbook uses the "Data Validation" feature to limit the values in certain columns to the contents of a few named ranges, and copying all the data into a new workbook means re-doing all the data validation, which is a pain and not something we want to do every month.

    As a troubleshooting step, I tried saving the file in "XML Spreadsheet 2003" format, hoping to get some insight into what was being stored. Sure enough, the file was almost a gig, and almost all of its 10 million lines look like this:

      <NamedCell ss:Name="Z_21D5114F_E50C_46AC_AA4F_C3FF540C717F_.wvu.FilterData"/>
      <NamedCell ss:Name="Z_1EE2BA5E_3011_4F9A_8ACD_E58835250FC4_.wvu.FilterData"/>
      <NamedCell ss:Name="Z_1E3BDCEA_6A72_4ECC_BF4F_7B03CC66181E_.wvu.FilterData"/>

    I've seen a few VBScripts online to manage and enumerate named cells that are hidden in Excel's built-in interface, though I wonder how they'd handle my 10 million named cells. What I really need, though, is an understanding of why this keeps happening. What actions in Excel could be causing this?
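    A hedged sketch of the cleanup those scripts perform: the ".wvu." names are per-user custom-view definitions (filter and print settings) that the shared-workbook machinery keeps creating and, apparently, failing to clean up. Deleting them from VBA looks roughly like this; iterating backwards matters, because deleting while walking forward skips entries, and how long this takes on 10 million names is untested:

      Sub PurgeLeakedViewNames()
          Dim i As Long
          For i = ActiveWorkbook.Names.Count To 1 Step -1
              If InStr(ActiveWorkbook.Names(i).Name, ".wvu.") > 0 Then
                  ActiveWorkbook.Names(i).Delete
              End If
          Next i
      End Sub

    Turning sharing off except when it is genuinely needed is the usual way to stop the growth, since Excel then stops accumulating per-user view state.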

  • LSI 9260-8i w/ 6 256gb SSDs - RAID 5, 6, 10, or bad idea overall?

    - by Michael Pearson
    We're provisioning a new production server for our reasonably busy website. Our chosen host offers a 6-drive configuration with an LSI 9260-8i card. My initial thought was to fill all six bays with SSDs (Intel 520 256 GB) and set them up in RAID. Good, bad, or terrible idea? Can the card handle it? Should we be using RAID 5, 6 or 10? This would be the first time the provider has filled all six slots of this rackmount with SSDs, so they're a bit hesitant, and I'm wondering if somebody else with this card has done something similar in a production environment.

    We do about 43 GB of writes per day and currently use about 300 GB of storage. The server acts as web server, database, and image store for approximately 1 million files. The plan is to under-provision the SSDs by approximately 10% to 20% to increase their overall lifespan and performance. The fallback option is 2x480 GB SSDs in RAID 1 plus 2x1 TB HDDs in RAID 1. The motivation behind this is that the server rental cost difference between 2 SSDs and 6 SSDs is minimal compared to the overall cost of the rental. We do not have any special high-IOPS requirements; however, if the configuration is known to work, I don't see a good reason not to use it and not to have to worry about separate 'fast and small' and 'slow and large' disks.

  • Unix VPS server going down at almost the same time every day

    - by ronnz
    My server load seems to be spiking, and the server often goes down at about the same time each night (around midnight). I have about 20 cPanel accounts hosted on it and have tried everything I know to find what is causing the issue. Some of the things I have tried:

      - combined all site access logs found in /etc/httpd/domlogs; nothing unusual at the time the server goes down
      - checked most other logs in /var/log; nothing indicating the issue at that time
      - checked the cron logs; nothing unusual (see below)

    Last night the CPU load spiked to 7.5 at 00:14. What else can I be checking? How can I really monitor to find the root cause?

      Dec 8 00:05:01 v1 crond[6082]: (root) CMD (/usr/local/cpanel/bin/dcpumon >/dev/null 2>&1)
      Dec 8 00:05:01 v1 crond[6084]: (root) CMD (/usr/local/cpanel/whostmgr/bin/dnsqueue >/dev/null 2>&1)
      Dec 8 00:10:01 v1 crond[6435]: (root) CMD (/usr/lib64/sa/sa1 1 1)
      Dec 8 00:10:01 v1 crond[6436]: (root) CMD (/usr/local/cpanel/bin/dcpumon >/dev/null 2>&1)
      Dec 8 00:15:12 v1 crond[6775]: (root) CMD (/usr/local/cpanel/scripts/autorepair recoverymgmt >/dev/null 2>&1)
      Dec 8 00:15:12 v1 crond[6776]: (root) CMD (/usr/local/cpanel/scripts/recoverymgmt >/dev/null 2>&1)
      Dec 8 00:15:12 v1 crond[6777]: (root) CMD (/usr/local/cpanel/bin/dbindex >/dev/null 2>&1)
      Dec 8 00:15:12 v1 crond[6781]: (root) CMD (/usr/local/cpanel/bin/dcpumon >/dev/null 2>&1)
      Dec 8 00:20:33 v1 crond[7047]: (root) CMD (/usr/lib64/sa/sa1 1 1)
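    A few hedged leads: the sa1 job visible in that log means sysstat is already collecting data, so sar can replay load, CPU and I/O around midnight; and on cPanel boxes the nightly backup is a classic midnight culprit:

      sar -q -f /var/log/sa/sa08     # run queue and load averages; pick the file for the day in question
      sar -u -f /var/log/sa/sa08     # CPU usage over the same window
      grep -i cpbackup /var/log/cron /var/spool/cron/root 2>/dev/null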

  • Split Tunnel VPN using incorrect Tunnel

    - by Brian Schmeltz
    Our company has a handful of field offices that were recently set up with regular internet connections after we removed the T1 lines and routers that connected them directly to our network. Now, when users are in a field office, they log in to the VPN to reach the network. So that they can still print and scan from the local multi-function device, we set up a split-tunnel VPN. We currently have about 15-20 users on this setup around the country without any problems.

    Recently one of our users started having problems accessing internal programs/sites when connecting from both home and the office. Three other users in the same office do not have this problem. Assuming it was something with the computer, I replaced it with another of the same model. The replacement worked fine in our home office; however, when the user received it, she had exactly the same problem both at home and in the field office. Thinking it might be a NIC driver issue, I sent her another computer, this time a different model; same problem.

    If I update the hosts file to point to the correct paths, things work, and if she connects via a normal (full-tunnel) VPN connection everything works, but then she cannot scan or print, which is a problem. I have tried to find ways to create another tunnel on a normal VPN, and to force the correct tunnel on the split-tunnel VPN. It appears to be something related to the ISP: if she connects via Comcast or Verizon it is fine, but once she connects through Insite she has problems. I have been unable to get any support from Insite, as they don't feel the issue is on their end. We use a Nortel VPN client. Any thoughts or ideas would be appreciated.
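    A hedged lead: "fixing it via the hosts file" points at name resolution rather than routing, i.e. the split-tunnel client resolving internal names through the ISP's DNS, with one ISP's resolver (for example, one that redirects NXDOMAIN responses) breaking them. A quick comparison from the affected machine on each connection, using Windows built-ins (the internal hostname and DNS server IP below are placeholders):

      nslookup intranet.example.internal              # which resolver answers, and with what address?
      nslookup intranet.example.internal 10.0.0.10    # force the corporate DNS server
      route print                                     # confirm internal subnets route into the tunnel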

  • Chipset fan on the fritz - compressed air hasn't fixed anything - is there anything I can do?

    - by Anthony
    Yesterday, my computer started making an annoying whining noise. Knowing this was likely a fan issue, I opened the case and proceeded to determine which fan was causing it, then got some compressed air and cleaned out the dust around that fan (and the rest of the computer while I was at it). This hasn't fixed the issue. Now, if it were just any fan, I would probably just replace it; fans are relatively cheap, after all. However, this is a special fan. (Aside: for what it's worth, I feel bad that the graphics card blocks part of the fan, but it is the only slot the card fits in, so I had no choice.)

    After pulling out my motherboard user guide, it looks like this fan sits directly on top of the chipset. To be perfectly honest, I have no clue what the chipset does, but it sounds important; some quick research tells me it provides the bridge between my CPU, RAM and graphics, among other things. A quick search at Newegg shows chipset fans at pretty reasonable prices (under 20 dollars). Is it practical to replace this fan? This is an old computer as computers go, and I wouldn't be terribly upset to upgrade the motherboard and processor, so perhaps this is a sign.

    Hardware specs: Motherboard: Asus A8N-E. Chipset: NVIDIA nForce4 Ultra.

  • Is this memory compatible with this motherboard???

    - by ClarkeyBoy
    I have a Foxconn P35AP-S, as seen here. I need to get some more RAM, since I currently have only one 2 GB stick (1066 MHz). I would like to get the memory situated here: www.scan.co.uk/Products/6GB-(3x2GB)-Corsair-XMS3-Classic-DDR3-PC3-10666-(1333)-Non-ECC-Unbuffered-CAS-7-7-7-20-165V. It is 6 GB of Corsair 1333 MHz memory. According to the motherboard website, the board can take 1333 MHz, but it says "oc**" next to it (which means "achieved when overclocked"). So my question is: is this memory still compatible without overclocking, or does the motherboard require overclocking to support it? If it requires overclocking (which I have no idea how to do), can anyone recommend any other memory (in the region of 6 GB) the motherboard is compatible with? I'd rather it were from Scan, but to be honest it doesn't need to be. Many thanks in advance. Regards, Richard

    Edit: I just realised that the motherboard has a maximum capacity of 4 GB of RAM. Scrap the RAM given above; I'd like to go for something like it but only 4 GB.

    Edit: Scrap that last edit; it's only if I go for DDR3 that I need to take this into account. DDR2 has a maximum of 8 GB.
