Search Results

Search found 54098 results on 2164 pages for 'something broken'.


  • Squid3 not caching simple request and response

    - by Nick Spacek
    Hi folks, I've pared down my squid.conf to try to figure this out:

        http_port 80 accel defaultsite=host.to.cache
        cache_peer ip.to.cache parent 80 0 no-query originserver
        acl our_sites dstdomain host.to.cache
        http_access allow our_sites
        refresh_pattern . 1 20% 4320

    Requests are being proxied correctly, so that's a start. Here's a request:

        GET http://host.to.cache/path?some_param=true
        Accept: */*
        Accept-Charset: ISO-8859-1,utf-8
        Accept-Encoding: gzip,deflate,sdch
        Accept-Language: en-US,en
        Connection: keep-alive
        Host: host.to.cache
        User-Agent: myuseragent

    And the response:

        Connection: keep-alive
        Content-Length: 585
        Content-Type: application/xml
        Date: Thu, 06 Jan 2011 18:33:11 GMT
        Via: 1.0 localhost (squid/3.0.STABLE19)
        X-Cache: MISS from localhost
        X-Cache-Lookup: MISS from localhost:80

    The response has no caching-related headers, but I thought refresh_pattern would set a default behavior for responses without them. For this test I wanted to cache everything for at least one minute. Am I missing something obvious? I did take a peek at the question "Squid isn't caching" and briefly ran through http://www.mnot.net/cache_docs/, but didn't see anything relevant (not to say there isn't anything; I could have missed it). Thanks for any help.
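    For reference, a minimal squid.conf sketch of the kind of refresh_pattern tweak usually suggested for this symptom. The directives are standard, but treating them as a verified fix for this poster's setup is an assumption; note in particular that stock configs of this squid generation often ship a QUERY acl plus a "cache deny QUERY" rule that stops URLs containing "?" (like the request above) from being cached at all.

        # min and max are in minutes: hold objects without freshness headers
        # for at least 1 minute; the first line covers query URLs, which the
        # defaults usually refuse to cache.
        refresh_pattern -i \?  1  20%  4320
        refresh_pattern .      1  20%  4320
        # and make sure no leftover "cache deny QUERY" line is still in effect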


  • Starting Oracle 10g on Ubuntu: listener failed to start

    - by tsegay
    I have installed Oracle 10g on Ubuntu 10.x; this is my first time installing it. After installing, I tried to start the listener with the commands below:

        tsegay@server-name:/u01/app/oracle/product/10.2.0/db_1/bin$ lsnrctl

        LSNRCTL for Linux: Version 10.2.0.1.0 - Production on 29-DEC-2010 22:46:51
        Copyright (c) 1991, 2005, Oracle. All rights reserved.
        Welcome to LSNRCTL, type "help" for information.

        LSNRCTL> start
        Starting /u01/app/oracle/product/10.2.0/db_1/bin/tnslsnr: please wait...
        TNSLSNR for Linux: Version 10.2.0.1.0 - Production
        System parameter file is /u01/app/oracle/product/10.2.0/db_1/network/admin/listener.ora
        Log messages written to /u01/app/oracle/product/10.2.0/db_1/network/log/listener.log
        Error listening on: (DESCRIPTION=(ADDRESS=(PROTOCOL=IPC)(KEY=EXTPROC1)))
        TNS-12555: TNS:permission denied
        TNS-12560: TNS:protocol adapter error
        TNS-00525: Insufficient privilege for operation
        Linux Error: 1: Operation not permitted
        Listener failed to start. See the error message(s) above...

    My listener.ora file looks like this:

        # listener.ora Network Configuration File: /u01/app/oracle/product/10.2.0/db_1/network/admin/listener.ora
        # Generated by Oracle configuration tools.
        SID_LIST_LISTENER =
          (SID_LIST =
            (SID_DESC =
              (SID_NAME = PLSExtProc)
              (ORACLE_HOME = /u01/app/oracle/product/10.2.0/db_1)
              (PROGRAM = extproc)
            )
          )
        LISTENER =
          (DESCRIPTION_LIST =
            (DESCRIPTION =
              (ADDRESS = (PROTOCOL = IPC)(KEY = EXTPROC1))
              (ADDRESS = (PROTOCOL = TCP)(HOST = acct-vmserver)(PORT = 1521))
            )
          )

    I can guess the problem is a permission issue, but I don't know where I need to change permissions. Any help is appreciated.

    EDIT: When I run it with sudo, I get this:

        tsegay@server-name:/u01/app/oracle/product/10.2.0/db_1$ sudo ./bin/lsnrctl start

        LSNRCTL for Linux: Version 10.2.0.1.0 - Production on 30-DEC-2010 01:01:03
        Copyright (c) 1991, 2005, Oracle. All rights reserved.
        Starting ./bin/tnslsnr: please wait...
        ./bin/tnslsnr: error while loading shared libraries: libclntsh.so.10.1: cannot open shared object file: No such file or directory
        TNS-12547: TNS:lost contact
        TNS-12560: TNS:protocol adapter error
        TNS-00517: Lost contact
        Linux Error: 32: Broken pipe
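    For context, a hedged sketch of the usual way to start the listener as the Oracle software owner with its environment set, rather than through plain sudo (the paths follow the ones quoted above; that the software owner is called "oracle" is an assumption):

        # become the account that owns $ORACLE_HOME
        su - oracle
        export ORACLE_HOME=/u01/app/oracle/product/10.2.0/db_1
        export PATH=$ORACLE_HOME/bin:$PATH
        # sudo drops the library path, which is what produces the libclntsh.so.10.1 error
        export LD_LIBRARY_PATH=$ORACLE_HOME/lib
        lsnrctl start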


  • Security implications of adding www-data to /etc/sudoers to run php-cgi as a different user

    - by BMiner
    What I really want to do is allow the 'www-data' user to launch php-cgi as another user; I just want to make sure I fully understand the security implications. The server should support a shared hosting environment where various (possibly untrusted) users have chroot'ed FTP access to the server to store their HTML and PHP files. Since PHP scripts can be malicious and read/write other people's files, I'd like to ensure that each user's PHP scripts run with that user's permissions (instead of running as www-data). Long story short, I have added the following line to my /etc/sudoers file, and I wanted to run it past the community as a sanity check:

        www-data ALL = (%www-data) NOPASSWD: /usr/bin/php-cgi

    This line should only allow www-data to run a command like the following (without a password prompt), where some_user is a user in the group www-data:

        sudo -u some_user /usr/bin/php-cgi

    What are the security implications of this? It would then let me modify my Lighttpd configuration like this, allowing me to spawn new FastCGI server instances for each user:

        fastcgi.server += ( ".php" =>
          ((
            "bin-path" => "sudo -u some_user /usr/bin/php-cgi",
            "socket" => "/tmp/php.socket",
            "max-procs" => 1,
            "bin-environment" => (
              "PHP_FCGI_CHILDREN" => "4",
              "PHP_FCGI_MAX_REQUESTS" => "10000"
            ),
            "bin-copy-environment" => ( "PATH", "SHELL", "USER" ),
            "broken-scriptfilename" => "enable"
          ))
        )
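    As a side note, a hypothetical per-user variant of that block is sketched below, with one php-cgi pool and one socket per user so pools cannot be shared accidentally; the host name, the user name "alice" and the socket path are placeholders, not part of the original setup:

        $HTTP["host"] == "alice.example.com" {
          fastcgi.server += ( ".php" =>
            ((
              # each user gets their own pool and socket
              "bin-path" => "sudo -u alice /usr/bin/php-cgi",
              "socket" => "/tmp/php-alice.socket",
              "max-procs" => 1,
              "broken-scriptfilename" => "enable"
            ))
          )
        }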


  • Which upgrade path for a disk-I/O-bound Postgres server?

    - by user41679
    Hi all, we currently have a Sun x4270 with 2x quad-core Xeon Nehalem 2.93 GHz CPUs (16 threads), 72 GB of RAM and 16 x 10k SAS disks, split between the OS RAID 1, a RAID 10 partition for the write-ahead logs and a RAID 10 partition for the database tables and indexes, all xfs. I'm currently evaluating which upgrade path to go down. We'll be sharding the DB at some point soon, but for now I need to focus specifically on hardware upgrades. The machine is not CPU or memory bound at all at the moment; iowait is what is becoming an issue. The machine is mostly write access, as we have a heavy caching layer, and we're seeing about 300 write IOPS on average on both database partitions. We don't have any additional storage infrastructure such as a Fibre Channel or iSCSI network. Budget isn't too much of a concern: something in line with the size of this server (i.e. no $1M IBM machines). Space is OK on the DB side of things; we're running out, obviously, but there's also some reduction we can do, though additional space would be good. My current thoughts are either:

    * an iSCSI SAN, possibly with a 10Gbit network, that has solid-state acceleration;
    * a FusionIO card / Sun F20 card (will the FusionIO card work in the Sun box?);
    * a DAS shelf (something like http://www.broadberry.co.uk/das-direct-attached-storage-servers/cyberstore-224s-das) with a combination of 15k SAS disks and some Intel X25-E drives for DB indexes etc. What would I need to put in the x4270 to add a DAS shelf? I think it's a SAS HBA card; do I have to use Sun's own card or will any PCI Express card work?

    Anything else? What would you do, from your experience? I appreciate it's a lot of questions, but I haven't expanded a DB machine for a number of years and the landscape has changed dramatically since then. Any advice or feedback would be very much appreciated, and let me know if there's anything else I can clarify. Thanks in advance!
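    Whatever direction the hardware goes, it is worth pinning down where the write latency actually sits first; a minimal sketch, assuming the sysstat package is installed on the box:

        # extended per-device stats every 5 seconds; watch await, avgqu-sz and %util
        # on the WAL volume and the data volume separately
        iostat -x 5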


  • PHP cannot connect to MySQL

    - by yogal
    Hello, I recently installed Apache 2 + PHP 5.3.1 + MySQL 5.1.44 on my Windows 7 64-bit machine following this guide: http://sleeplessgeek.blogspot.com/2010/01/setting-up-apache-php-mysql-phpmyadmin.html It all went fine and PHP is working great (even with XDebug), but I cannot connect to the MySQL server. A simple script I wrote to test the connection (yes, root has no password):

        $username = "root";
        $password = "";
        $database = "test";
        $hostname = "localhost";
        $conn = mysql_connect($hostname, $username, $password)
            or die("Unable to connect to MySQL Database!!");

    It prints this error after a 60-second timeout:

        Warning: mysql_connect() [function.mysql-connect]: A connection attempt failed
        because the connected party did not properly respond after a period of time,
        or established connection failed because connected host has failed to respond.

    I can connect to MySQL from cmd with: mysql -h localhost -u root. The services are running properly. There also seems to be a problem with phpMyAdmin (3.2.5): as soon as I type the user and password, the page loads and turns blank (Content-Length in the headers is 0 but the status code is 302 Found). It looks like something is wrong with cookies (my auth method). I hope someone has a clue; it has to be something dumb simple that I missed. Thanks in advance.
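    One thing worth ruling out on Windows with PHP 5.3 is name resolution: "localhost" can resolve to the IPv6 address ::1 while MySQL listens only on IPv4, which produces exactly this kind of timeout. A minimal test sketch reusing the credentials above (the IPv6 explanation is an assumption, not a confirmed diagnosis of this machine):

        <?php
        // bypass hostname resolution and hit the IPv4 loopback directly
        $conn = mysql_connect("127.0.0.1", "root", "")
            or die("Unable to connect: " . mysql_error());
        echo "Connected";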


  • Solaris 10 invalid ARP requests from 0.0.0.0? Link up/down every hour or 2

    - by JWD
    The guys at the data center where I'm hosting a server running Solaris 10 are telling me that my server is making a lot of invalid ARP requests. This is an example of a portion of what was sent to me from the logs (with MAC addresses and IP addresses changed):

        [mymacaddress]/0.0.0.0/0000.0000.0000/[myipaddress]/[Datestamp]

    It's being logged every hour. I don't see anything in the ARP tables (arp -a) or routing tables (netstat -r), and I don't see anything relating to 0.0.0.0 when snooping the ARP requests. The only place I see any reference to 0.0.0.0 is in the SCTP section of netstat -a:

        SCTP:
                Local Address   Remote Address  Swind  Send-Q  Rwind   Recv-Q  StrsI/O  State
        ------------------------------- ------------------------------- ------ ------ ------ ------ ------- -----------
        0.0.0.0         0.0.0.0         0      0      102400 0      32/32   CLOSED

    But I'm not really sure what that means, and it doesn't seem like I can disable SCTP. There are some tunable SCTP parameters, but it's not something I'm familiar with. Do I have to add changes to /etc/system? It looks like sctp_heartbeat_interval might be what I need to change? If it makes any difference, I have a few Solaris zones running on this server, each with its own IP address on a virtual interface (eth0:0, eth0:1, etc.). Does anyone have any idea what might be causing this and how to stop it? I think the switch I'm connected to doesn't like it and momentarily drops the connection. Is there any way to at least block those requests using ipfilter or something else?

    Update: This was happening more frequently, but now it seems to happen roughly every hour or every two hours; it's not consistent. I tried setting the link speed and duplex to match the switch port, and that seemed to make it stop for a few hours, but then it started again.


  • CentOS Latency High Troubleshooting

    - by Sarah Weinberger
    I have two CentOS servers connected via a 10 Gb fiber-optic cable with a network emulator between them; all three units sit on a desk in the lab. There is also a regular 1 Gbit Ethernet cable connected to each machine, which provides internet connectivity. When I set the emulated latency to roughly 30 ms or below, all is fine. When the latency gets to 70 ms and above, and definitely at 130 ms, the network layer stalls. For instance, if I set the delay to 70 ms, launching TeamViewer (or any other application that uses the network) never completes: there is no timeout message, simply no response. I have to lower the latency back down to zero to see any response and have the box start working again. What is the problem, and how would I go about fixing it? It seems to me some setting in Linux is causing one of the CentOS networking drivers to sit in an infinite loop or something. eth0 is the connection to the internet, with all settings at their defaults; eth2 is the 10 Gbit fiber connection to the other computer, with the MTU set to 9600 and all other parameters at default values.
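    One thing worth ruling out at these delays is TCP windowing: a 10 Gbit path with 70-130 ms of latency needs very large send/receive buffers before connections behave sensibly. A hedged sysctl sketch (the 16 MB ceilings are illustrative values, not tuned for this rig):

        # /etc/sysctl.conf sketch: enable window scaling and raise the buffer ceilings
        net.ipv4.tcp_window_scaling = 1
        net.core.rmem_max = 16777216
        net.core.wmem_max = 16777216
        net.ipv4.tcp_rmem = 4096 87380 16777216
        net.ipv4.tcp_wmem = 4096 65536 16777216
        # apply with: sysctl -p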


  • Application (was Firefox) crash on first load on Ubuntu Linux on older Dell Laptop

    - by Ira Baxter
    I've had a Dell Latitude laptop since about 2000 without managing to destroy it. A month ago the Windows 2000 system on it did something stupid to its file system and Windows was completely lost. There was no point in reinstalling Windows 2000, so I installed Ubuntu Linux on the laptop. Everything seems normal (it installed, it rebooted, I can log in, run GnuChess, poke about)... but... when I attempt to launch Firefox from the top bar menu icon, I get a bunch of disk activity, the whirling cursor icon goes round a bit and then (WAS: everything stops: icon, mouse. Literally nothing happens for 5 minutes. Ubuntu is dead, as far as I can tell. EDIT: on further investigation, the spinning icon and the mouse operated by the touchpad freeze. There's apparently a little disk activity occurring about every 5 seconds. I wait 5-10 minutes and the behavior doesn't change.) A reboot, and I can repeat this reliably. So on the face of it, everything works but Firefox, which seems really strange. The only odd thing about this system when Firefox is starting is that while it has an Ethernet port (which worked fine under Windows), it isn't actually plugged into an Ethernet network. As this is the first Firefox start since the Ubuntu install, maybe Firefox mishandles Internet access? But why would that crash Ubuntu? (I need to go try the obvious experiment of plugging it in.) EDIT: I tried to run the Disk Manager tool (not that I cared what it was, just a menu-available application). It started up like Firefox: I get a little tag in the lower left saying "Disk P***" something had started, and then the same behavior as Firefox. At this point, I don't think it's the Ethernet. Is it possible that the Ubuntu disk driver can't handle the disk controller in this older laptop? The install seemed to go fine.


  • Can a folder on a NAS be made available as a physical drive in VMWare?

    - by asbjornu
    We are currently in the process of moving from a single web server to two load-balanced web servers, and are facing some challenges we don't quite know how to fix. One of these is that the current single server hosts applications that write to disk. The applications running on the server expect that when something is written to disk it will in fact exist later, so it's important that this premise is fulfilled with the dual-server architecture as well. The dual-server setup is a couple of VMware instances with Windows Server 2008 R2 as the guest operating system. Out of the box, these instances do not share any kind of file system, so just moving the applications over would break them, since one instance would write something to the file system that doesn't exist on the other. Thus we need to share a file system between the two virtual servers. Our host has proposed creating a network share on a SAN and mapping this share individually on each virtual machine. This doesn't work too well due to NTFS permissions, etc., because the share needs to be accessed by several independent web applications that won't even be in the same application pool. The only solution that kind of works is to hard-code an "identity" for each web application into its web.config file, but this means passwords in clear text, which doesn't sit well with me. Since the servers are virtual, I'm thinking: wouldn't it be possible to make a NAS area available as a physical disk in the guest operating system somehow? Since VMware has full control of the virtual hardware, you'd think it would be able to "fake" a local hard drive in the virtual machine that in reality is a folder on a NAS, but so far I haven't found anything that states how and if this is possible. So I have to ask the wonderful Server Fault community: can a folder on a NAS be made available as a physical drive (typically D:) in both of the virtual machines?


  • zsh auto-complete event designator

    - by simont
    (See my previous question for additional context.) I'm migrating to zsh from bash, and using oh-my-zsh. When my zsh history looks something like the following:

        git status
        git add -A
        git commit

    I want to be able to re-run git add -A. To do that, I could use !?git add, which should:

        !?str[?]
            Refer to the most recent command containing str. The trailing '?' is
            necessary if this reference is to be followed by a modifier or followed
            by any text that is not to be considered part of str.

    The link for zsh event designators is here. Unfortunately, I can't do this: as I'm typing !?git add, the moment I hit the space it auto-completes the command to the most recent command matching git (i.e. it auto-completes to git commit). I can't use the event designator properly because of this auto-completion on space. I assume this is an oh-my-zsh feature, but I have no idea where to look; grepping for 'complet' in the oh-my-zsh source doesn't get me anywhere. My question: how do I turn off this feature? Or, if that's not something that's known, where should I be looking? If I were going to implement this auto-complete-on-whitespace behaviour, where would be a logical place to do so in the oh-my-zsh framework?
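    The behaviour described matches zsh's magic-space widget, which expands history references as soon as you press space and which oh-my-zsh binds in its default key bindings (that oh-my-zsh is the component binding it in this particular configuration is an assumption). A minimal sketch of checking and turning it off:

        # see what space is currently bound to
        bindkey ' '
        # in ~/.zshrc, after oh-my-zsh is sourced: make space just insert a space
        bindkey ' ' self-insert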


  • How do I repair my Logitech Anywhere MX?

    - by Stefano Palazzo
    My Anywhere Mouse has got mushy-mouse-button syndrome. That is, the left mouse button feels a little soft, it easily double-clicks, and it lets go when I drag something. Before I repair it at home, rather than bringing it to the store (I kind of need it; it's the only one I have), I'd like to know exactly what I'm doing. It would be too bad if I tried to repair it, voided the warranty and didn't succeed. I'm guessing there are screws to open it under the rubber pads, and I suppose I can take those off without breaking them and put them back on without bending them. How is this mouse held together, and what's the safest way to open it? Once I have it open, will I be able to fix the problem? What's causing the mushy mouse button? Here's what I know so far: It might be the switch itself that's broken, in which case I shouldn't open it (I can't get a replacement, so voiding the warranty to "have a look" seems pointless). If there are screws underneath the rubber pads, they're only on the front; the back two thirds of the mouse are all battery cover. There's nothing I can see under the batteries either. In the mouse I had before this one, there were sort of springy things connecting the actual button with the switch soldered to the board. They were just lying inside a bit of plastic, and I could swap the left and right ones easily. If repairing it is more difficult, transferring the problem to the right mouse button would be a very good start.


  • When is it safe to let Revo Uninstaller clean up leftovers?

    - by msorens
    I have been a user of Revo Uninstaller (free) for some time and find it does a very good cleanup job with typical applications. Today I wanted to clean up my machine a bit more, so I proceeded to remove Visual Studio 2005 with Revo Uninstaller. The VS installer removed the app with no issues; then Revo reported about 20,000 leftover registry keys. I am used to basically just seeing ArpCache and MuiCache... since I am not a registry expert, I had no clue about most of the 20,000 listed. So I backed up the registry, then let Revo remove the 20,000. It next reported about 1,500 leftover files, which included my Microsoft Office applications(!) that I knew it should not be touching, so I did not delete any files with Revo. Suspecting that some of the removed keys were also Office-related, I tried to open Word and Excel, both of which knew something was up, as the installer kicked in (albeit just briefly) for each of them. At this point, since I knew there were issues, I just restored the registry, and I am now (seemingly) running OK. My question, then: when is it safe to trust Revo Uninstaller? As a seasoned software professional, my own answer would be the obvious "when the keys it reports are something you understand and know are safe to delete", but that makes Revo of little use except to registry experts, does it not?


  • Can't access one directory via HTTPS + public FQDN

    - by Justin James
    Hello, I have the strangest IIS error that I've ever seen in my life. I have an application/directory on an IIS server that throws an error 500 when accessing ANY of the content in it, including HTML documents, when accessed via HTTPS AND the machine's FQDN. When I access it with "localhost" it works fine. When I added a bogus hosts-file entry pointing another name at the NIC's IP, it worked fine. When I access it with the machine's name over HTTP, it works fine. Here's a chart (the machine's name is "lofn.titaniumcrowbar.com"):

        http  - lofn.titaniumcrowbar.com: works
        https - lofn.titaniumcrowbar.com: broken
        https - localhost: works
        https - temp.titaniumcrowbar.com (put into the hosts file): works

    I set up tracing, and I got some useless information: "The I/O operation has been aborted because of either a thread exit or an application request. (0x800703e3)". This would make sense, except that it happens when pulling up static content. While the directory may be an "application", the content in it is all static. Any and all suggestions, no matter how strange, are VERY appreciated. Thanks! J.Ja


  • Repairing hard disk when Windows installation disk won't boot

    - by Echows
    I'm trying to recover some data from a faulty hard disk with Windows installed on it (Windows won't even boot from it). So far I have tried:

    - booting an Ubuntu live USB stick and running ntfsfix (didn't work);
    - mounting the broken partition from the live Ubuntu (it doesn't mount);
    - running the photorec image-recovery tool from the live Ubuntu (it found some stuff, but not the images I was looking for).

    Now, as a last resort, I got myself a Windows installation on a USB stick so that I can try fdisk, but the installer doesn't work: the loading screen shows up and then the installer crashes. The installer works fine on other computers. I suspect that the installer is trying to read the hard drive to see if there's something there, and when it can't read one partition, it crashes. From Ubuntu I can mount the other partitions, just not the one I'm interested in, so at least the hard drive is not completely dead. So the question is: what options do I have left? To be more specific, my goal is to recover some images from the faulty NTFS partition on the hard drive. Other than that, I don't care about the contents of the disk.
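    Since the goal is the images rather than repairing Windows, one commonly suggested route is to image whatever is still readable first and run the recovery tools against the copy instead of the dying disk. A hedged sketch from the Ubuntu live environment (assumes the gddrescue package can be installed, that the broken NTFS partition is /dev/sda2, and that a large external drive is mounted at /media/external; the device and paths are placeholders):

        # copy the readable areas first, skip the scraping pass (-n), keep a map file
        sudo ddrescue -n /dev/sda2 /media/external/sda2.img /media/external/sda2.map
        # then let photorec work on the image rather than the failing hardware
        photorec /media/external/sda2.img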


  • LG LW20 Express won't boot after hdd replace

    - by Mika
    My old laptop (an LG LW20 Express) had a hard-disk failure and I replaced the drive. Now the laptop won't boot from CD or USB. I'm trying to install Ubuntu on it. When I turn the laptop on it shows me the startup screen, but when it should be loading the operating system it just gives a black screen and starts over; this loop continues until I shut the laptop down. I created the USB boot drive following this guide: https://help.ubuntu.com/community/Installation/FromUSBStick/ I used my boot CD to install Ubuntu on the machine I'm using right now, so at least the CD should work. From the BIOS I can see that my newly installed drive is recognized and set as secondary master, and the CD and removable media are in the boot list before the hard drive. The laptop also runs pretty hot: the fan is at full speed pretty soon after the laptop is turned on. Earlier I suspected that the almost-broken hard drive was producing that heat, but there obviously is something else as well. Any ideas what to check?


  • Spring MVC project can't SELECT from a particular MySQL table

    - by Dan Ray
    I'm building a Spring MVC project (using JPA and Hibernate for DB access) that runs just fine locally, on my dev box, with a local MySQL database. Now I'm trying to put a snapshot up on a staging server for my client to play with, and I'm having trouble. Tomcat (after some wrestling) deploys my war file without complaint, and I can get some response from the application over the browser. When I hit my main page, which is behind Spring Security authentication, it redirects me to the login page, which works perfectly. I have Security configured to query the database for user details, and that works fine; in fact, a change to a password in the database is reflected in the behavior of the login form, so I'm confident it IS reaching the database and querying the user table. Once authenticated, we go to the first "real" page of the app, and I get a "data access failure" error. The server's console log gets this line (redacted):

        ERROR org.hibernate.util.JDBCExceptionReporter - SELECT command denied to user 'myDbUser'@'localhost' for table 'asset'

    However, if I go to MySQL from the shell using exactly the same credentials, I have no problem at all selecting from the asset table:

        [development@tomcat01stg]$ mysql -u myDbUser -pmyDbPwd dbName
        ...
        mysql> \s
        --------------
        mysql  Ver 14.12 Distrib 5.0.77, for redhat-linux-gnu (i686) using readline 5.1

        Connection id:          199
        Current database:       dbName
        Current user:           myDbUser@localhost
        ...
        UNIX socket:            /var/lib/mysql/mysql.sock
        --------------

        mysql> select count(*) from asset;
        +----------+
        | count(*) |
        +----------+
        |       19 |
        +----------+
        1 row in set (0.00 sec)

    I've broken down my MySQL access settings, cleaned out the user and re-run the grant commands, and set up a version of the user from 'localhost' and another from '%', making sure to flush privileges.... Nothing changes the behavior of this thing. What gives?
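    When the CLI works but the application is denied, the usual next step is to inspect the grants themselves, since a grant scoped to a different database than the one named in the JDBC URL (or to a different host than the one the pooled connections arrive from) produces exactly this error. A hedged SQL sketch, reusing the redacted names above:

        -- see exactly what the application's account may do
        SHOW GRANTS FOR 'myDbUser'@'localhost';

        -- if SELECT on dbName.* is missing, something along these lines
        GRANT SELECT, INSERT, UPDATE, DELETE ON dbName.* TO 'myDbUser'@'localhost';
        FLUSH PRIVILEGES;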


  • How to install a desktop environment onto Ubuntu Server -- but without internet access or a CDROM?

    - by James
    I am playing around with a computer which has no CD-ROM drive or internet access, and I have installed Ubuntu Server onto it. That is all up and running nicely, but now I'd like to install Xfce, GNOME or something similar so I can load a desktop environment from the command line if I wish. Obviously, with internet access or a CD-ROM this would be a simple task of using apt-get and letting it find and retrieve the packages for me, I assume, but I have neither. I do however have a USB drive, and I have used UNetbootin to make it into a bootable drive with the Ubuntu Server disk image files on it. I mounted the USB drive to /media/usb0 and tried the command "sudo apt-cdrom add -d /media/usb0" to get apt to recognise the USB drive as an "Ubuntu CD" (i.e. a source of package files), but apt-get doesn't seem to find Xfce: I try "sudo apt-get install xfce" and "sudo apt-get install xfce4", but neither finds the package. I would prefer Xfce, but GNOME would be OK too. My question is: am I doing something wrong? I figured the Ubuntu Server disk (or rather, my Ubuntu Server USB drive) might not have any desktop-environment packages on it, so I tried the Xubuntu Desktop disk too (again, from my USB drive). I tried "sudo apt-get install xubuntu-desktop", but it couldn't find the package, even though it is listed under the /casper/ directory in a MANIFEST file. Anyone see where I'm going wrong? Maybe apt-get install is looking somewhere other than my USB drive? Maybe my commands are wrong? Maybe the disks don't even have the desktop environments on them!? Thanks in advance guys, any input would be much appreciated. Cheers - James
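    One possible explanation, offered as an assumption rather than a confirmed diagnosis: the server image genuinely does not carry the desktop metapackages, and the files under /casper/ on a desktop/live image are a compressed live filesystem rather than an apt archive, so apt-cdrom has nothing useful to index. If the medium does carry a proper dists/ and pool/ layout, pointing apt straight at the mount point is another thing to try; a sketch with an assumed release codename:

        # /etc/apt/sources.list -- treat the mounted stick as a local file repository
        deb file:/media/usb0 lucid main

        # then
        sudo apt-get update
        sudo apt-get install xubuntu-desktop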


  • Why does Exchange 2003 silently reject emails with large attachments?

    - by Cypher
    Our environment: Exchange Server 2003 Standard, single instance, running on Windows Server 2003 Standard, configured not to send/receive mail with attachments larger than 10 MB; NDRs are not enabled. The issue: when an external sender sends an email with an attachment larger than 10 MB, Exchange, as configured, does not receive the message. However, the sender of that message does not receive any notification from their own mail server that the message could not be delivered due to the attachment size. Yet if an external user tries to send an email to a non-existent user, they do receive a message from their mail server indicating that the user does not exist. Why is that, and is there anything I can do about it? It would be nice if the sender received notification that the attachment size exceeds our limits and their message was never received... Update: the Exchange server has a SpamAssassin box in front of it; could that have something to do with it? Here is one of the last lines from SpamAssassin's logs when searching for my test e-mails:

        mail postfix/smtp[19133]: 2B80917758: to=, relay=10.0.0.8[10.0.0.8]:25, delay=4.3, delays=2.6/0/0/1.7, dsn=2.6.0, status=sent (250 2.6.0 Queued mail for delivery)

    My assumption is that SpamAssassin thinks the message is OK and forwards it on to Exchange. Update: I've verified that Exchange is receiving the message and generating an NDR; however, delivery of NDRs is disabled to prevent backscatter. Is there something I can do to get Exchange to send a bounce message to the sending mail server (or verify that such a message is being sent), so the sending mail server can notify its sender of the bounce?


  • Linux VLAN Bridge

    - by raspi
    I have a home network with VLANs: one for the LAN, one for the WLAN and one for the internet. I'd like to use bridging so that, instead of configuring these same VLANs on every machine, each network has its own VLAN ID and the bridges are LAN, WLAN and internet. I've tried it, but for some reason keep-alives/TTL seem to get broken, because SSH sessions etc. suddenly disconnect. We have this same setup working at my workplace for 4+ years with 100+ customers, but it's custom firewall/router hardware, so accessing it is impossible; I know that it runs Linux. So what are the Debian/Ubuntu default network settings doing wrong, or is it just a NIC driver/hardware problem? I've tried to mess around with TTL etc. settings without any luck. The bad stuff is happening in the bridge, because the current VLAN-only setup works fine.

    interfaces:

        auto lo
        iface lo inet loopback

        # The primary network interface
        allow-hotplug eth0
        allow-hotplug eth1
        iface eth0 inet static
        iface eth1 inet static

        auto vlan111
        auto vlan222
        auto vlan333
        auto vlan444
        auto br0
        auto br1
        auto br2

        # LAN
        iface vlan111 inet static
            vlan_raw_device eth0

        # WLAN
        iface vlan222 inet static
            vlan_raw_device eth0

        # ADSL Modem
        iface vlan333 inet static
            vlan_raw_device eth1

        # Internet
        iface vlan444 inet static
            vlan_raw_device eth0

        # LAN bridge
        iface br0 inet static
            address 192.168.0.1
            netmask 255.255.255.0
            bridge_ports eth0.111
            bridge_stp on

        # Internet bridge
        iface br1 inet static
            address x.x.x.x
            netmask x.x.x.x
            gateway x.x.x.x
            bridge_ports eth1.333 eth0.444
            bridge_stp on
            post-up iptables -t nat -A POSTROUTING -o br1 -j MASQUERADE
            pre-down iptables -t nat -D POSTROUTING -o br1 -j MASQUERADE

        # WLAN bridge
        iface br2 inet static
            address 192.168.1.1
            netmask 255.255.255.0
            bridge_ports eth0.222
            bridge_stp on

    Sysctl:

        net.ipv4.conf.default.forwarding=1
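    One bridge-specific knob that can explain "mysterious" stalls and drops on Debian/Ubuntu bridges is that, once the bridge netfilter module is active, bridged frames are passed through iptables by default. Whether that is the culprit here is an assumption, but it is cheap to rule out; a sketch:

        # /etc/sysctl.conf sketch: keep iptables/arptables out of the bridged path
        net.bridge.bridge-nf-call-iptables = 0
        net.bridge.bridge-nf-call-ip6tables = 0
        net.bridge.bridge-nf-call-arptables = 0
        # apply with: sysctl -p  (the keys exist only while the bridge module is loaded)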


  • Setting cfengine3 class based on command output

    - by gnomie
    This question is very similar to "How can I use the output of a command in cfengine3", but I believe the answer does not apply in my case. I want to update a git repository via "git pull" and, based on whether that led to changes, trigger some follow-up action. Simplified: if there were something like "match output and set class" via some body if_output_matches, I would want to use something like this:

        bundle agent updateRepo
        {
          commands:
            "/usr/bin/git pull"
              contain => setuidgiddir_sh("$(globals.user)","$(globals.group)","$(target)"),
              classes => if_output_matches("Already up-to-date.","no_update");

          reports:
            no_update::
              "nothing updated";
        }

        body contain setuidgiddir_sh(owner,group,folder)
        {
          exec_owner => "$(owner)";
          exec_group => "$(group)";
          useshell   => "true";
          chdir      => "$(folder)";
        }

    So, is it possible to use the output of a (possibly expensive) command and base a decision on that? The execresult function is not a good choice for me because a) the pull may become expensive at times (which the cfengine3 reference recommends against) and b) it does not allow specifying the user, group or working directory, which matters here: the repository is in user space and not owned by root.
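    Commands promises can't match output directly, but they can classify on exit status via a classes body, so one hedged workaround is to let the shell turn "nothing to pull" into a distinct return code. The kept_returncodes/repaired_returncodes attributes are standard classes-body attributes; the grep wrapper, the code 64 and the class name are illustrative, and this is a sketch rather than a tested promise:

        bundle agent updateRepo
        {
          commands:
            "/usr/bin/git pull | /bin/grep -q 'Already up-to-date' && exit 64 || exit 0"
              contain => setuidgiddir_sh("$(globals.user)","$(globals.group)","$(target)"),
              classes => pull_result("no_update");

          reports:
            no_update::
              "nothing updated";
        }

        body classes pull_result(class)
        {
          kept_returncodes     => { "64" };       # grep matched: repo already current
          repaired_returncodes => { "0" };        # otherwise: assume the pull changed something
          promise_kept         => { "$(class)" }; # raise the class only in the "no change" case
        }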


  • Cheap Solution for Routing a Toll Free Number to a Standard POTS Number

    - by VxJasonxV
    I do some technical work for an internet radio show/podcast and need to fix something that has been broken for a while. The hosts have a Skype-In number to take listener calls, and for convenience's sake I bought and paid for a toll-free number for a period of time. I used to use Asterlink for routing calls, but they folded and sent my number to OneBox, which is ridiculously expensive by comparison. I'm looking for a cheap solution for this one simple task: forward toll-free calls to a Skype-In number. The definition of cheap is as cheap as or cheaper than Asterlink was. I paid something like $2 a month plus the termination/call rate, which was a fraction of a cent for termination and only reached whole cents after some serious time on a call; a $20 preload lasted me months at a time. I don't want to be upsold either; I want a simple web-based management screen (CDRs/stats are fun!), and obviously it needs to be reliable. Which vendors out there are you a fan of that solve this need?


  • Software/hardware to build video streaming server?

    - by Sasha Yanovets
    I am looking for a video streaming server solution, something like an online TV server, with the ability to make live broadcasts on the internet. What software could you recommend for that? What kind of hardware should it run on, and is there anything special to look for? I am looking for a solution that can scale up to at least 1,000 simultaneous online users with good video resolution. I think a general answer on which direction to choose is useful, but here are more details on my specific case: I am starting almost from scratch. We have some video content that we've produced, but it is not delivered over the internet yet, and we are not tied to any particular vendor for now. We want to stream 24 hours a day as three 8-hour blocks, with the content changing every day, and we want the ability to make regular live broadcasts. I guess we will need several streaming-quality options (low ~56 kb/s, mid ~273 kb/s). Some terms are just foreign to me (like play-truncation rate), so if you could point out which parameters we should be aware of, that would be great. The uplink to the internet is yet to be determined; we plan to start with something and scale up along the way. If you already have some kind of media streaming server, just describe its configuration here (hardware, OS, software) and the peak number of concurrent users it serves. I think that could help people approaching this task.


  • Are flash drives and hard drives thought of as "an ocean of bytes"?

    - by Jian Lin
    Why can a USB flash drive be formatted as NTFS or FAT32? Are a USB flash drive and a hard drive both just to be thought of as "an ocean of bytes"? I am very used to hearing about formatting a hard drive as FAT32 or NTFS, but we can also format a USB flash drive as NTFS or FAT32? Is it because both a hard drive and a flash drive can be thought of as "an ocean of bits" or "an ocean of bytes"? I remember RAM like this: it takes a 16-bit or 32-bit address signal (the 16 or 32 copper traces on the circuit board) and gives out 8 bits of data (the other 8 copper traces on the circuit board). Can a hard drive be thought of as working that way too, and is that why a flash drive can be the same, just an "ocean of bytes"? But is it true that the hard drive's hardware makes it an ocean of sectors or something else, that is, that the smallest unit of read/write is not a byte but something larger? So with this "ocean of bytes", NTFS has a format that says, "if the first byte is __, then it means __ (it is a file or folder, and it links to which sector, indicated by bytes 2 and 3, etc., etc.)"
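    The "ocean of bytes" view can be made concrete from a shell: the operating system exposes both kinds of device as one flat sequence of bytes, even though the hardware itself transfers whole sectors (traditionally 512 bytes, 4096 on newer disks), and it is the filesystem that gives those bytes structure. A minimal sketch, assuming a Linux machine where the stick appears as /dev/sdb (the device name is a placeholder, and reading it needs root):

        # dump the first sector of the device as raw bytes
        sudo dd if=/dev/sdb bs=512 count=1 | xxd | head
        # the same command on a hard drive (e.g. /dev/sda) shows the same flat byte view;
        # only the on-disk format (FAT32, NTFS, ...) gives those bytes meaning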


  • Looking for a "light" compositing manager for GNOME

    - by detly
    I have an HP Pavilion DM3 (the graphics is an nVidia GeForce G105M), running Debian Squeeze with GNOME 2.30. My preference for a DE is GNOME + Metacity + Nautilus. I'd like to use Docky, but it requires compositing, so I'm looking for a relatively "light" compositing manager. I realise that "light" is ambiguous, but I basically want something that won't chew through my notebook's battery because of CPU or GPU usage. I know that Metacity is capable of compositing, but as far as I'm aware that support is still considered experimental: some people report that it's smooth and lightweight, others claim that it eats up processor time, and I've also seen references to a problem with nVidia, but no actual details. I'm not averse to Compiz, but I haven't used it before and I don't know what to expect in terms of "weight". And maybe there's something else I haven't heard of. So can anyone recommend anything? Or dispel my idea that Metacity is not the right tool for the job? (Originally posted on the GNOME forums.)
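    For what it's worth, Metacity's built-in compositor can be toggled from the command line under GNOME 2, which makes it cheap to try on this hardware before reaching for Compiz; a minimal sketch using the standard gconf key (how well it behaves on this particular nVidia chip is exactly the open question above):

        # turn Metacity's compositor on
        gconftool-2 --type bool --set /apps/metacity/general/compositing_manager true
        # and back off again if it proves too heavy
        gconftool-2 --type bool --set /apps/metacity/general/compositing_manager false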


  • Visual Studio 2010 won't compile/create new projects

    - by tuner
    My Visual Studio 2010 Professional installation with SP1 won't compile anymore. The error shown is:

        TRACKER : error TRK0005: Failed to locate: "CL.exe". The system cannot find the file specified.

    Strangely, it is also no longer possible to create new projects: the wizard appears but just restarts when I press Create. As I found out, the paths for Visual Studio are now built from settings in the registry, namely HKEY_CURRENT_USER\Software\Microsoft\VisualStudio. Comparing a colleague's installation with mine revealed no differences in those settings. This is how the Property Pages/Configuration Properties/VC++ Directories look:

        Executable Directories: $(ExecutablePath)
        Include Directories:    $(IncludePath)
        Reference Directories:  $(ReferencePath)
        Library Directories:    $(LibraryPath)
        Source Directories:     $(SourcePath)
        Exclude Directories:    $(ExcludePath)

    From the Visual Studio 2010 Command Prompt, cl.exe is found. I can only guess that this behavior was caused by a reinstallation of Studio a couple of months ago (to a different folder). As we use an external build script for our main project, there is a good chance it has been broken since then. Any hints?
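    One place worth checking when $(ExecutablePath) no longer resolves to anything useful is the per-user property sheet that VS 2010 merges into every C++ project. Its location is standard; the ExecutablePath value shown is an abridged assumption (the real default is longer and varies per installation), so treat this as a sketch of what a healthy entry roughly looks like rather than the exact expected contents:

        <!-- %LOCALAPPDATA%\Microsoft\MSBuild\v4.0\Microsoft.Cpp.Win32.user.props -->
        <Project xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
          <PropertyGroup>
            <ExecutablePath>$(VCInstallDir)bin;$(WindowsSdkDir)bin;$(VSInstallDir)Common7\IDE;$(PATH)</ExecutablePath>
          </PropertyGroup>
        </Project>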

