Search Results

Search found 16331 results on 654 pages for 'option'.

  • Toshiba laptop cd drive read causes OS to totally freeze

    - by Fujishiro
    Okay, I'll try to write an understandable summary; forgive me if I fail in the attempt. There is a Toshiba Satellite notebook with Windows 7 x86 Professional (OEM) installed on it, and everything is otherwise fine (okay... somewhat).

    The problem: if you put an audio disc, or any other kind of disc, into the drive, something starts to eat the PC. Back when the owner first told me about this, he had put an audio disc into the laptop and Winamp was causing 100% I/O load. I tried taskkill, taskkill /T, PowerShell, everything. You simply cannot kill Winamp, or whatever process becomes the blocker at that moment. Even after killing almost everything else, the laptop won't do a clean shutdown; I also tried the force switch of 'shutdown' from cmd, to no effect. (At these times you can still use the laptop, but the blocking app/Explorer/the disc turns gray as non-responding. You can try to kill them, but that won't work, nor can you shut down the machine. Killing by PID didn't help either; to find the highest I/O consumer I enabled the I/O columns via "Select Columns" in Task Manager.)

    My first hunch was a problematic disc: autoplay kicks in and the drive tries to read, and tries to read (which still shouldn't kill the PC). I disabled autoplay, removed Winamp, tried other software, etc. Everything was OK. A few days later the owner put a disc into the machine and it reproduced the same symptoms with a totally different disc.

    A virus is not a likely explanation; the machine is protected by BitDefender (valid license) and Spybot. Thanks if you have ANY idea about this strange problem. PS: for now, the owner uses Daemon Tools + Blindwrite as an alternative for those apps that won't start without their disc.

  • attach / detach mssql 2008 sql server manager [SOLVED]

    - by Tillebeck
    An external consultant wrote a guide on how to copy a database. Step two was to detach the database using SQL Server Manager. After the detach, the database was no longer visible in SQL Server Manager... Not much to do but write a mail to the service provider asking to have the database attached again. The service provider's answer: "Not possible to attach again since the SQL Server security has been violated." Rolling back to the last backup is not an option I want to use. Can anyone tell me whether it seems logical and reasonable that a database detached from SQL Server 2008 through SQL Server Manager cannot be reattached? It was done by right-clicking the database and choosing detach.

    -- update --

    Based on the comments below, I'm updating the question with the server setup. There are two dedicated servers: srv1, a web server with remote desktop and SQL Server Manager; srv2, the SQL Server, accessed through the SQL Server Manager on the web server.

    -- update2 --

    After a restart of the server, the DBA could suddenly attach the database. And I guess that after the restart it was a simple task. So all of your answers were right! It seems I can only mark one answer as correct, so I marked the first one, but all of them are correct. Thanks a lot. Without the link to this thread we might have had to suffer through watching our database being restored from a backup :-) Thanks a lot. BR. Anders
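
    For reference, reattaching a detached database is normally a one-liner in T-SQL, provided the .mdf/.ldf files are still on disk and readable by the SQL Server service account. A minimal sketch; the database name and file paths here are hypothetical:

        USE master;
        GO
        -- reattach a detached database from its data and log files
        CREATE DATABASE MyDatabase
            ON (FILENAME = 'D:\Data\MyDatabase.mdf'),
               (FILENAME = 'D:\Data\MyDatabase_log.ldf')
            FOR ATTACH;
        GO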

  • auth.log is empty (Ubuntu)

    - by Vinicius Pinto
    The /var/log/auth.log file in my Ubuntu 9.04 is empty. syslogd is running, and the content of /etc/syslog.conf is as follows. Any help? Thanks.

        #  /etc/syslog.conf     Configuration file for syslogd.
        #
        #                       For more information see syslog.conf(5)
        #                       manpage.
        #
        # First some standard logfiles. Log by facility.
        #
        auth,authpriv.*                 /var/log/auth.log
        *.*;auth,authpriv.none          -/var/log/syslog
        #cron.*                         /var/log/cron.log
        daemon.*                        -/var/log/daemon.log
        kern.*                          -/var/log/kern.log
        lpr.*                           -/var/log/lpr.log
        mail.*                          -/var/log/mail.log
        user.*                          -/var/log/user.log
        #
        # Logging for the mail system. Split it up so that
        # it is easy to write scripts to parse these files.
        #
        mail.info                       -/var/log/mail.info
        mail.warning                    -/var/log/mail.warn
        mail.err                        /var/log/mail.err
        # Logging for INN news system
        #
        news.crit                       /var/log/news/news.crit
        news.err                        /var/log/news/news.err
        news.notice                     -/var/log/news/news.notice
        #
        # Some `catch-all' logfiles.
        #
        *.=debug;\
                auth,authpriv.none;\
                news.none;mail.none     -/var/log/debug
        *.=info;*.=notice;*.=warning;\
                auth,authpriv.none;\
                cron,daemon.none;\
                mail,news.none          -/var/log/messages
        #
        # Emergencies are sent to everybody logged in.
        #
        *.emerg                         *
        #
        # I like to have messages displayed on the console, but only on a virtual
        # console I usually leave idle.
        #
        #daemon,mail.*;\
        #       news.=crit;news.=err;news.=notice;\
        #       *.=debug;*.=info;\
        #       *.=notice;*.=warning    /dev/tty8

        # The named pipe /dev/xconsole is for the `xconsole' utility. To use it,
        # you must invoke `xconsole' with the `-file' option:
        #
        #    $ xconsole -file /dev/xconsole [...]
        #
        # NOTE: adjust the list below, or you'll go crazy if you have a reasonably
        #       busy site..
        #
        daemon.*;mail.*;\
                news.err;\
                *.=debug;*.=info;\
                *.=notice;*.=warning    |/dev/xconsole
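
    A quick way to confirm whether syslogd is actually writing auth-facility messages (as opposed to, say, the daemon holding a handle to a rotated-away file) is to inject a test message and, if nothing arrives, restart the daemon and test again. A sketch; on 9.04 the service is typically sysklogd, but adjust if your install uses rsyslog:

        logger -p auth.info "auth.log test message"
        tail /var/log/auth.log
        # nothing? restart the syslog daemon and re-test
        sudo /etc/init.d/sysklogd restart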

  • Ubuntu server PPTPD with OS X clients Problems

    - by Nakedsteve
    I'm trying to get a PPTP server running on an Ubuntu server, but I've run into some issues. I followed a guide on how to set up pptpd on my server and everything went smoothly, but when I try to connect with my Mac it gives me an error (screenshot not preserved here). Does anyone have any idea as to what I'm doing wrong?

    Update: here's what pptpd.log has to say about it:

        steve@debian:~$ sudo tail /var/log/pptpd.log
        sudo: unable to resolve host debian
        Sep  3 21:46:43 debian pptpd[2485]: MGR: Manager process started
        Sep  3 21:46:43 debian pptpd[2485]: MGR: Maximum of 11 connections available
        Sep  3 21:46:43 debian pptpd[2485]: MGR: Couldn't create host socket
        Sep  3 21:46:43 debian pptpd[2485]: createHostSocket: Address already in use
        Sep  3 21:46:56 debian pptpd[2486]: CTRL: Client 192.168.1.101 control connection started
        Sep  3 21:46:56 debian pptpd[2486]: CTRL: Starting call (launching pppd, opening GRE)
        Sep  3 21:46:56 debian pptpd[2486]: GRE: read(fd=6,buffer=204d0,len=8196) from PTY failed: status = -1 error = Input/output error, usually caused by unexpected termination of pppd, check option syntax and pppd logs
        Sep  3 21:46:56 debian pptpd[2486]: CTRL: PTY read or GRE write failed (pty,gre)=(6,7)
        Sep  3 21:46:56 debian pptpd[2486]: CTRL: Reaping child PPP[2487]
        Sep  3 21:46:56 debian pptpd[2486]: CTRL: Client 192.168.1.101 control connection finished

    My pptpd options are:

        asyncmap 0
        noauth
        crtscts
        lock
        hide-password
        modem
        debug
        proxyarp
        lcp-echo-interval 30
        lcp-echo-failure 4
        nopix
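
    Two things stand out in that log: "Address already in use" means something is already bound to the PPTP control port (often a stale pptpd instance), and the GRE/PTY failure points at pppd dying during call setup. A hedged sketch of how to chase each, assuming the stock Debian/Ubuntu tooling:

        # who is holding the PPTP control port (1723/tcp)?
        sudo netstat -tlnp | grep :1723
        # if it's a leftover pptpd, restart the service cleanly
        sudo /etc/init.d/pptpd restart
        # then watch pppd's own complaints while the Mac connects
        sudo tail -f /var/log/syslog | grep ppp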

  • PPTP server connection closes - Too much data?

    - by Sebastian Hoitz
    I set up a PPTP server for my company. However, every time another computer is connected to this server (i.e. our backup server) and a lot of data gets transferred, the connection to that computer closes. In the syslog on the PPTP server I find this message:

        Apr 22 12:44:34 komola-chase pptpd[2581]: CTRL: Reaping child PPP[2583]
        Apr 22 12:44:34 komola-chase pppd[2583]: MPPE disabled
        Apr 22 12:44:34 komola-chase pppd[2583]: Connection terminated.
        Apr 22 12:44:34 komola-chase pppd[2583]: Exit.
        Apr 22 12:44:34 komola-chase pptpd[2581]: CTRL: Client 192.168.0.11 control connection finished
        Apr 22 12:55:11 komola-chase pptpd[2674]: GRE: xmit failed from decaps_hdlc: No buffer space available
        Apr 22 12:55:11 komola-chase pptpd[2674]: CTRL: PTY read or GRE write failed (pty,gre)=(6,7)
        Apr 22 12:55:11 komola-chase pppd[2675]: Modem hangup
        Apr 22 12:55:11 komola-chase pppd[2675]: Connect time 23.0 minutes.

    Hopefully you can help me figure out what is wrong. As far as I can tell, compression is disabled on the PPTP server (nobsdcomp option). Thank you!
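
    The "No buffer space available" on GRE xmit usually means the tunnel is being fed faster than the underlying interface can drain. Two mitigations worth trying, sketched below; the values are starting points to tune, not known-good settings for this setup:

        # /etc/ppp/pptpd-options -- shrink the tunnel MTU/MRU so GRE+PPP
        # overhead doesn't push packets past the outer interface's MTU
        mtu 1400
        mru 1400

        # and/or give the ppp interface a deeper transmit queue at runtime
        ip link set dev ppp0 txqueuelen 100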

  • snort analysis of wireshark capture

    - by Ben Voigt
    I'm trying to identify trouble users on our network. ntop identifies high-traffic and high-connection users, but malware doesn't always need high bandwidth to really mess things up. So I am trying to do offline analysis with snort (I don't want to burden the router with inline analysis of 20 Mbps of traffic). Apparently snort provides a -r option for this purpose, but I can't get the analysis to run. The analysis system is Gentoo, amd64, in case that makes any difference. I've already used oinkmaster to download the latest IDS signatures. But when I try to run snort, I keep getting the following error:

        % snort -V

           ,,_     -*> Snort! <*-
          o"  )~   Version 2.9.0.3 IPv6 GRE (Build 98) x86_64-linux
           ''''    By Martin Roesch & The Snort Team: http://www.snort.org/snort/snort-team
                   Copyright (C) 1998-2010 Sourcefire, Inc., et al.
                   Using libpcap version 1.1.1
                   Using PCRE version: 8.11 2010-12-10
                   Using ZLIB version: 1.2.5

        %> snort -v -r jan21-for-snort.cap -c /etc/snort/snort.conf -l ~/snortlog/
        (snip)
        273 out of 1024 flowbits in use.
        [ Port Based Pattern Matching Memory ]
        +- [ Aho-Corasick Summary ] -------------------------------------
        | Storage Format    : Full-Q
        | Finite Automaton  : DFA
        | Alphabet Size     : 256 Chars
        | Sizeof State      : Variable (1,2,4 bytes)
        | Instances         : 314
        |     1 byte states : 304
        |     2 byte states : 10
        |     4 byte states : 0
        | Characters        : 69371
        | States            : 58631
        | Transitions       : 3471623
        | State Density     : 23.1%
        | Patterns          : 3020
        | Match States      : 2934
        | Memory (MB)       : 29.66
        |   Patterns        : 0.36
        |   Match Lists     : 0.77
        |   DFA
        |     1 byte states : 1.37
        |     2 byte states : 26.59
        |     4 byte states : 0.00
        +----------------------------------------------------------------
        [ Number of patterns truncated to 20 bytes: 563 ]
        ERROR: Can't find pcap DAQ!
        Fatal Error, Quitting..

    net-libs/daq is installed, but I don't even want to capture traffic; I just want to process the capture file. What configuration options should I be setting/unsetting in order to do offline analysis instead of real-time capture?
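
    Snort 2.9 moved all packet acquisition, including readback from a file, into the separate DAQ library, so this error generally means snort can't locate the DAQ modules at runtime rather than anything about live capture. A sketch of forcing the pcap module explicitly; the module directory is an assumption, so check where net-libs/daq actually installed them:

        # list the DAQ modules snort can currently see
        snort --daq-list
        # point snort at the pcap DAQ (adjust --daq-dir to your install)
        snort --daq pcap --daq-dir /usr/lib64/daq \
              -r jan21-for-snort.cap -c /etc/snort/snort.conf -l ~/snortlog/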

  • Hyperic HQ- Monitor process statistics for 50+ processes on Linux machine

    - by Chris
    Is there an easy way to get metrics on all processes that start with the letters XYZ? I have about 80 processes that I have to monitor individually, all of which start with the prefix XYZ. I have created a query using the sigar shell: ps State.Name.sw=XYZ, which gives me the list of processes I want. What I need to do is define this list of processes through said query, then collect and track statistics from the Process service: http://support.hyperic.com/display/hypcomm/Process+service What I need is 3 or 4 key statistics for each of the XYZ processes defined by my query to show up as graphs in the web front end.

    Note: the Hyperic HQ server is installed on a Windows machine and I'm monitoring a Linux box via an agent. Thanks, Chris

    Edit: Here is my try at a plugin that may give me what I want, but it's not being inventoried/detected by the Hyperic web UI. Simply pointing me to one of Hyperic's tutorials won't do. Thanks.

        <!DOCTYPE plugin [
          <!ENTITY process-metrics SYSTEM "/pdk/plugins/process-metrics.xml">
        ]>
        <plugin>
          <server name="ABCStats">
            <config>
              <option name="process.query"
                      description="Process Query"
                      default="State.Name.sw=XYZ"/>
            </config>
            <metric name="Availability"
                    alias="Availability"
                    template="sigar:Type=ProcState,Arg=%process.query%:State"
                    category="AVAILABILITY"
                    indicator="true"
                    units="percentage"
                    collectionType="dynamic"/>
            &process-metrics;
            <plugin type="autoinventory"/>
            <plugin type="measurement"
                    class="org.hyperic.hq.product.MeasurementPlugin"/>
          </server>
        </plugin>

  • Mirror a RAID0 volume

    - by Ghostrider
    I have two SSDs running in RAID 0. The capacity and speed are just great. I use Windows Home Server to do incremental daily backups; this works well, and I've successfully restored from these backups. However, when one of the disks physically died, I was stuck without a working system until the replacement arrived and I could restore the array from backup. WHS restoration takes about 5 hours, which basically means I lose an entire day to the process.

    Is it possible to set up a kind of recovery volume for the RAID array? Use a single mechanical HDD that gets updated with an exact clone of the RAID array on a daily basis; that way, if the array goes offline for some reason, I can just boot from the mechanical HDD, lose some performance, but still be able to work. The machine in question runs Windows 7. Creating a RAID 0+1 is not an option because of the high price of the SSDs, and it still wouldn't protect against failure of the RAID controller. Is there any way this can be set up?

  • How to set up IP forwarding on Nexenta (Solaris)?

    - by Gleb
    I am trying to set up IP forwarding on my Nexenta box:

        root@hdd:~# uname -a
        SunOS hdd 5.11 NexentaOS_134f i86pc i386 i86pc Solaris

    The box has 2 network interfaces:

        root@hdd:~# ifconfig -a
        lo0: flags=2001000849<UP,LOOPBACK,RUNNING,MULTICAST,IPv4,VIRTUAL> mtu 8232 index 1
                inet 127.0.0.1 netmask ff000000
        e1000g1: flags=1001100843<UP,BROADCAST,RUNNING,MULTICAST,ROUTER,IPv4,FIXEDMTU> mtu 1500 index 2
                inet 192.168.12.2 netmask ffffff00 broadcast 192.168.12.255
                ether 68:5:ca:9:51:b8
        myri10ge0: flags=1100843<UP,BROADCAST,RUNNING,MULTICAST,ROUTER,IPv4> mtu 9000 index 3
                inet 10.10.10.10 netmask ffffff00 broadcast 10.10.10.255
                ether 0:60:dd:47:87:2
        lo0: flags=2002000849<UP,LOOPBACK,RUNNING,MULTICAST,IPv6,VIRTUAL> mtu 8252 index 1
                inet6 ::1/128

    192.168.12.0 is my normal LAN, with 192.168.12.1 being the firewall/gateway. 10.10.10.0 is a separate LAN for iSCSI (with no internet access). I want to set up IP forwarding so that a computer on 10.10.10.0 can access the internet using 10.10.10.10 as its gateway (I don't need any port forwarding). I have turned on IP forwarding:

        root@hdd:~# routeadm
                      Configuration   Current              Current
                             Option   Configuration        System State
        ---------------------------------------------------------------
                       IPv4 routing   disabled             disabled
                       IPv6 routing   disabled             disabled
                    IPv4 forwarding   enabled              enabled
                    IPv6 forwarding   disabled             disabled

                   Routing services   "route:default ripng:default"

        Routing daemons:

                              STATE   FMRI
                           disabled   svc:/network/routing/rdisc:default
                           disabled   svc:/network/routing/route:default
                           disabled   svc:/network/routing/legacy-routing:ipv4
                           disabled   svc:/network/routing/legacy-routing:ipv6
                           disabled   svc:/network/routing/ripng:default
                             online   svc:/network/routing/ndp:default

    But when I try to start ipnat, I get an error:

        root@hdd:~# ipnat -CF -f /etc/ipf/ipnat.conf
        ioctl(SIOCGNATS): I/O error

    Here is the config:

        root@hdd:~# cat /etc/ipf/ipnat.conf
        #!/sbin/ipnat -f -
        #
        map e1000g1 10.10.10.10/24 -> 192.168.12.2/32

    So the question is how to fix this. Thanks in advance!
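
    On Solaris-family systems, an I/O error from the SIOCGNATS ioctl usually means the IPFilter kernel side isn't loaded, so ipnat has nothing to talk to. A hedged first step is to bring up the ipfilter SMF service and then reload the NAT rules:

        svcadm enable svc:/network/ipfilter:default
        svcs network/ipfilter                 # should report "online"
        ipnat -CF -f /etc/ipf/ipnat.conf
        ipnat -l                              # verify the map rule is active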

  • Why would Windows use slower network interface despite route metrics?

    - by tim11g
    On my previous notebook, the Dell/Broadcom wireless adapter had an option to automatically disable wireless when a wired network was connected, so I never dealt with multiple active interfaces. My current system has an Intel wireless adapter, and they apparently haven't figured out how to turn it off when there is a wired connection. Unless I explicitly remember to disable wireless when docked, the connection stays active. That shouldn't be a problem (in theory), since the route metric should cause traffic to go over the fastest network (as indicated by the lowest metric in the routing table). Apparently not: I'm running a backup and seeing throughput of 25 Mbps or so (consistent with 802.11g) while a perfectly good Gigabit Ethernet interface is also connected.

        IPv4 Route Table
        ===========================================================================
        Active Routes:
        Network Destination        Netmask          Gateway       Interface  Metric
                  0.0.0.0          0.0.0.0    192.168.1.254    192.168.1.104      10
                  0.0.0.0          0.0.0.0    192.168.1.254    192.168.1.109      25
                127.0.0.0        255.0.0.0         On-link         127.0.0.1     306
                127.0.0.1  255.255.255.255         On-link         127.0.0.1     306
          127.255.255.255  255.255.255.255         On-link         127.0.0.1     306

    Windows has correctly identified the Ethernet interface (.104) and assigned it the lower (preferred) metric. So the Ethernet interface should be used exclusively, right? Why is the Ethernet connection not being used? What other factors are involved? (This is with Windows 7, if it makes a difference.)
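
    One factor worth ruling out: routes only influence new connections. A TCP session established while only wireless was up (e.g. a backup that started before docking) keeps the source address it bound to and stays on that interface until it reconnects. Beyond that, you can stop relying on automatic metrics and pin them per interface; a sketch, with interface names as examples (the first command shows the real ones):

        netsh interface ipv4 show interfaces
        netsh interface ipv4 set interface "Local Area Connection" metric=5
        netsh interface ipv4 set interface "Wireless Network Connection" metric=50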

  • Windows Server 2003 with Apache and IIS causing random faulting and performance issues with Apache?

    - by contrebis
    I'm trying to fix a problem on a Windows Server 2003 SE install which is running IIS 6 and the Apache web server (with PHP and MySQL). IIS sites are bound to one IP, Apache to the other. Everything seemed fine until the other IP address was added to allow a web service to run under IIS.

    Symptoms: Apache now responds very slowly, even to requests for static files (often 30 seconds or more), and sporadic errors are appearing in the event logs, like:

        Faulting application httpd.exe, version 2.2.14.0, faulting module php5ts.dll,
        version 5.2.13.13, fault address 0x000ac14f.

    I've double-checked the config files, taken account of this question/answer http://serverfault.com/questions/51230/running-iis-and-apache-on-the-same-windows-server, upped the Apache log level to debug, run TCPView to check for conflicting bindings, and upgraded to the latest Apache/PHP versions, but still no success or indication of a cause. Any suggestions on where to look, or debugging tips, would be gratefully received. I'm a web programmer, so I'm not that familiar with Windows Server administration or the details of the networking stack. Running PHP under IIS is not an option, and hosting on another server is non-ideal.
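
    One classic gotcha with this pairing: on Server 2003, http.sys (which fronts IIS) listens on 0.0.0.0:80 by default, so adding a second IP can put it in contention with Apache even when the sites themselves look bound to separate addresses. A hedged sketch of pinning each server to its own IP; the addresses are examples, and httpcfg ships with the Windows Support Tools:

        rem restrict http.sys/IIS to the IIS address only
        httpcfg set iplisten -i 192.168.1.20
        net stop http /y
        net start w3svc

        # httpd.conf: bind Apache explicitly to the other address
        Listen 192.168.1.21:80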

  • Using 1920x1200 mode on SyncMaster T260HD in Linux

    - by dagorym
    I just got a Samsung SyncMaster T260HD monitor. It works straight out of the box with Windows, but I can't seem to get it to work with Linux, which is my primary OS for day-to-day work. The computer boots up, but when entering graphical mode on Linux the monitor gives a "Mode not supported" error and doesn't display anything. I booted into Windows and, using PowerStrip, grabbed the exact modeline that should give the equivalent setting in Linux, and added it to my xorg config file, but it doesn't seem to help. The modeline is:

        ModeLine "1920x1200" 153.9 1920 1984 2016 2080 1200 1203 1209 1235 +hsync -vsync

    This is the modeline for the working display settings in Windows, but it doesn't seem to work in Linux. My complete entry in xorg.conf for the monitor is:

        Section "Monitor"
            Identifier   "Monitor0"
            ModelName    "SyncMaster"
            DisplaySize  518 324
            HorizSync    30.0 - 81.0
            VertRefresh  56.0 - 75.0
            Option       "dpms"
            ModeLine     "1920x1200" 153.9 1920 1984 2016 2080 1200 1203 1209 1235 +hsync -vsync
        EndSection

    I'm running Scientific Linux 5.4 (a clone of Red Hat Enterprise Linux 5.4), but I've tried booting with a recent Linux Mint distro as well as Ubuntu 9.04 and had the same problem. Any suggestions on other things I should try or might be missing? If anyone's gotten this to work, I'd love to know. Thanks.
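
    A hedged cross-check: 1920x1200 flat panels usually want a reduced-blanking CVT mode, and it's worth comparing what cvt/gtf generate against the PowerStrip numbers. A modeline also only takes effect if the Screen section's Display subsection actually references it via a Modes line:

        # generate a reduced-blanking CVT modeline for 1920x1200 @ 60 Hz
        cvt -r 1920 1200 60
        # or the older GTF formula for comparison
        gtf 1920 1200 60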

  • How to direct reverse proxy requests using wildcard vhosts

    - by HonoredMule
    I'm interested in running a reverse proxy with 2-3 virtual machines behind it. Each internal server will run multiple virtual hosts, and rather than manually configuring each individual vhost on the proxy (a variety of vhosts come and go too often for this to be practical), I would like to use something that can employ pattern matching in a sequential order to find the appropriate back-end server. For example:

        Server 1: *.dev.mysite.com
        Server 2: *.stage.mysite.com
        Server 3: *.mysite.com, dev.mysite.com, stage.mysite.com, mysite.com
        Server 4: *

    In the above configuration, task.dev.mysite.com would go to Server 1, dev.mysite.com to Server 3, yoursite.stage.mysite.com to Server 2, www.mysite.com to Server 3, and yoursite.com to Server 4. (A sketch of how this can look in nginx follows below.)

    I've looked into using Squid, Varnish, and nginx so far. I have my opinions regarding their respective desirability and general suitability, but it's not readily apparent whether any of them can handle dynamic server selection in this manner without per-vhost configuration. Apache, on the other hand, can do this handily and simply, but otherwise (aside from being well-known and familiar) seems poorly suited to a task that is partly about serving performance. Performance isn't actually a major concern yet, but it seems foolish to use Apache if another system would perform far better and can also handle the desired 'hands-free' configuration. But so is frequently having to adjust the gateway for all production services and risking a network-wide outage... and so is setting oneself up for longer downtime later if Apache becomes a too-small bottleneck. Which of these (or other) reverse proxies can do it/would do it best?

    And maybe I should post this as a separate question, but if Apache is the only practical option, how safe/reliable/predictable is apache-mpm-event in apache2.2 (Ubuntu 12.04.1), particularly for a dedicated reverse proxy? As I understand it, the event MPM was declared "safe" as of 2.4, but it's unclear whether reaching stability in 2.4 has any implications for the older (2.2) versions available in the official/stable package channels of various distros.
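
    For what it's worth, nginx's server_name matching is already ordered the way this example needs: exact names win, then the longest matching wildcard, with a default_server as the final catch-all, so nothing per-vhost is required beyond the four patterns. A minimal sketch, with placeholder upstream addresses:

        # Server 1: anything under dev.
        server {
            listen 80;
            server_name *.dev.mysite.com;
            location / { proxy_pass http://10.0.0.1; }
        }
        # Server 2: anything under stage.
        server {
            listen 80;
            server_name *.stage.mysite.com;
            location / { proxy_pass http://10.0.0.2; }
        }
        # Server 3: the bare domain and any other subdomain
        server {
            listen 80;
            server_name mysite.com *.mysite.com;
            location / { proxy_pass http://10.0.0.3; }
        }
        # Server 4: everything else
        server {
            listen 80 default_server;
            server_name _;
            location / { proxy_pass http://10.0.0.4; }
        }

    Note that *.dev.mysite.com does not match the bare dev.mysite.com, which is exactly what sends dev.mysite.com and stage.mysite.com to Server 3.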

  • How do I create and conveniently search through Libraries in Windows 8?

    - by mtone
    In Windows 7, I got into the habit of adding most of my frequently accessed disk areas as Libraries; there were about a dozen. Typing a word in the Start menu would then give me a summary of matches by Library. For example, searching for "WPF" would tell me that I've got some results in the Books library, in the Coding library, and a few other PDFs in the Downloads library, any of which I could then expand to see all results within.

    In Windows 8, that functionality appears to be gone. The Search function in the Charms Bar lists tons of results by type (Documents, Pictures, et cetera) but not by Library. This is practically useless, since Documents contains hundreds of .txt and .cs files, only a few of which might be in Books or Downloads. The only option I've found is to go into Explorer and use the search bar in the Library section. There again, however, all search results are mixed together, and I can't seem to find a way to know which Library each result came from (in the Details view, I didn't find a Library column I could add). So if I want to know which Library contains stuff about a given topic, I have to search the Libraries one by one. Very inconvenient.

    Is Microsoft slowly deprecating Libraries? Any tips? How else can I search through Libraries?

  • ssh keys rejected each day

    - by EddyR
    I've had an OpenSSH server running on my Debian server for a couple of weeks, and all of a sudden, when I go to log in the next day, it rejects my SSH key and I have to manually add a new one each time. Not only that, but I have the "tunnelling with clear-text passwords" option enabled, and the non-root account for that is rejected too (login as root is disabled). I'm at a loss as to why this is happening, and I can't find any SSH options that would explain it.

    -- update --

    I just changed the debug level to DEBUG. But before that, I was seeing a lot of the following in auth.log:

        Feb  1 04:23:01 greenpages CRON[7213]: pam_unix(cron:session): session opened for user root by (uid=0)
        Feb  1 04:23:01 greenpages CRON[7213]: pam_unix(cron:session): session closed for user root
        ...
        Feb  1 04:36:26 greenpages sshd[7217]: reverse mapping checking getaddrinfo for nat-pool-xx-xx-xx-xx.myinternet.net [xx.xx.xx.xx] failed - POSSIBLE BREAK-IN ATTEMPT!
        ...
        Feb  1 04:37:31 greenpages sshd[7223]: Did not receive identification string from xx.xx.xx.xx
        ...

    My sshd_config settings are:

        # Package generated configuration file
        # See the sshd(8) manpage for details

        # What ports, IPs and protocols we listen for
        Port xxx
        # Use these options to restrict which interfaces/protocols sshd will bind to
        #ListenAddress ::
        #ListenAddress 0.0.0.0
        Protocol 2
        # HostKeys for protocol version 2
        HostKey /etc/ssh/ssh_host_rsa_key
        HostKey /etc/ssh/ssh_host_dsa_key
        # Privilege Separation is turned on for security
        UsePrivilegeSeparation yes

        # Lifetime and size of ephemeral version 1 server key
        KeyRegenerationInterval 3600
        ServerKeyBits 768

        # Logging
        SyslogFacility AUTH
        LogLevel DEBUG

        # Authentication:
        LoginGraceTime 120
        PermitRootLogin no
        StrictModes yes

        RSAAuthentication yes
        PubkeyAuthentication yes
        #AuthorizedKeysFile %h/.ssh/authorized_keys

        # Don't read the user's ~/.rhosts and ~/.shosts files
        IgnoreRhosts yes
        # For this to work you will also need host keys in /etc/ssh_known_hosts
        RhostsRSAAuthentication no
        # similar for protocol version 2
        HostbasedAuthentication no
        # Uncomment if you don't trust ~/.ssh/known_hosts for RhostsRSAAuthentication
        #IgnoreUserKnownHosts yes

        # To enable empty passwords, change to yes (NOT RECOMMENDED)
        PermitEmptyPasswords no

        # Change to yes to enable challenge-response passwords (beware issues with
        # some PAM modules and threads)
        ChallengeResponseAuthentication no

        # Change to no to disable tunnelled clear text passwords
        PasswordAuthentication yes

        # Kerberos options
        #KerberosAuthentication no
        #KerberosGetAFSToken no
        #KerberosOrLocalPasswd yes
        #KerberosTicketCleanup yes

        # GSSAPI options
        #GSSAPIAuthentication no
        #GSSAPICleanupCredentials yes

        X11Forwarding no
        X11DisplayOffset 10
        PrintMotd no
        PrintLastLog yes
        TCPKeepAlive yes
        #UseLogin no

        #MaxStartups 10:30:60
        #Banner /etc/issue.net

        # Allow client to pass locale environment variables
        AcceptEnv LANG LC_*

        Subsystem sftp /usr/lib/openssh/sftp-server

        UsePAM no
        ClientAliveInterval 60
        AllowUsers myuser
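
    When keys work until "the next day," the usual suspects are home-directory permissions (with StrictModes yes, sshd silently ignores authorized_keys if the home directory or ~/.ssh becomes group- or world-writable, e.g. after a nightly job touches them) or something rewriting the key file overnight. A quick hedged check:

        # sshd with StrictModes rejects keys when any of these are too permissive
        ls -ld ~ ~/.ssh ~/.ssh/authorized_keys
        chmod 755 ~              # at most
        chmod 700 ~/.ssh
        chmod 600 ~/.ssh/authorized_keys
        # then watch the server side while reproducing the failure
        tail -f /var/log/auth.log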

  • Automatically moving spam messages to a folder in Postfix

    - by cad
    Hi. My problem is that I want to automatically move spam messages to a folder, and I'm not sure how.

    I have a Linux box providing email access. The MTA is Postfix, IMAP is Courier, and as the webmail client I use SquirrelMail. To filter spam I use SpamAssassin, which is working OK. SpamAssassin rewrites subjects to [--- SPAM 14.3 ---] Viagra... and also adds headers:

        X-Spam-Flag: YES
        X-Spam-Checker-Version: SpamAssassin 3.2.5 (2008-06-10) on xxxx
        X-Spam-Level: **************
        X-Spam-Status: Yes, score=14.3 required=2.0 tests=BAYES_99,
                DATE_IN_FUTURE_24_48,HTML_MESSAGE,MIME_HTML_ONLY,RCVD_IN_PBL,
                RCVD_IN_SORBS_WEB,RCVD_IN_XBL,RDNS_NONE,URIBL_RED,URIBL_SBL
                autolearn=no version=3.2.5
        X-Spam-Report:
                *  0.0 URIBL_RED Contains an URL listed in the URIBL redlist
                *      [URIs: myimg.de]
                *  3.5 BAYES_99 BODY: Bayesian spam probability is 99 to 100%
                *      [score: 1.0000]
                *  0.9 RCVD_IN_PBL RBL: Received via a relay in Spamhaus PBL
                *      [113.170.131.234 listed in zen.spamhaus.org]
                *  3.0 RCVD_IN_XBL RBL: Received via a relay in Spamhaus XBL
                *  0.6 RCVD_IN_SORBS_WEB RBL: SORBS: sender is a abuseable web server
                *      [113.170.131.234 listed in dnsbl.sorbs.net]
                *  3.2 DATE_IN_FUTURE_24_48 Date: is 24 to 48 hours after Received: date
                *  0.0 HTML_MESSAGE BODY: HTML included in message
                *  1.5 MIME_HTML_ONLY BODY: Message only has text/html MIME parts
                *  1.5 URIBL_SBL Contains an URL listed in the SBL blocklist
                *      [URIs: myimg.de]
                *  0.1 RDNS_NONE Delivered to trusted network by a host with no rDNS

    I want to automatically move spam messages to a folder. Ideally (not sure if this is possible) I'd move only messages scoring 5.0 or more to the folder; spam scoring between 2.0 and 5.0 should stay in the Inbox. (I plan to switch autolearn on later.)

    After reading a lot on the procmail, Postfix, and SpamAssassin sites and googling a lot (lots of outdated howtos), I found two solutions, but I'm not sure which is best or whether there is another one: put a rule in SquirrelMail (a dirty solution?), or use procmail. Which is the better option? Do you have any up-to-date howto about it? Thanks
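
    Procmail is the cleaner of the two here, because SpamAssassin encodes the score in the X-Spam-Level header as one asterisk per point; matching five consecutive asterisks therefore means "score >= 5.0", while lower-scoring spam falls through to the Inbox untouched. A sketch assuming Courier-style Maildir++ folders (the .Spam folder name is an example and must already exist and be subscribed):

        # ~/.procmailrc -- route messages scoring 5.0+ into the Spam folder
        MAILDIR=$HOME/Maildir

        :0:
        * ^X-Spam-Level: \*\*\*\*\*
        .Spam/

    Postfix hands local deliveries to procmail via mailbox_command = /usr/bin/procmail in main.cf.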

  • Symbolic link not allowed or link target not accessible

    - by TK Kocheran
    I can't seem to get a symlink working in my Apache VirtualHost, no matter what I try, and I see the following error in the error log:

        Symbolic link not allowed or link target not accessible: /var/www/carddesigner

    I can browse the actual symlink from Linux with no problems whatsoever:

        $ ls -l /var/www | grep "carddesigner"
        lrwxrwxrwx 1 rfkrocktk rfkrocktk 64 2011-02-28 16:52 carddesigner -> /home/rfkrocktk/Documents/Projects/Work/carddesigner/build/main/

    Additionally, I've made sure that my VirtualHost allows the FollowSymLinks option. From /etc/apache2/sites-enabled/000-localhost:

        <VirtualHost 127.0.0.1:80>
            ServerAdmin ##########
            DocumentRoot /var/www
            <Directory />
                Options FollowSymLinks
                AllowOverride None
            </Directory>
            <Directory /var/www/>
                Options Indexes FollowSymLinks MultiViews
                AllowOverride None
                Order allow,deny
                allow from all
            </Directory>
            ScriptAlias /cgi-bin/ /usr/lib/cgi-bin/
            <Directory "/usr/lib/cgi-bin">
                AllowOverride None
                Options +ExecCGI -MultiViews +SymLinksIfOwnerMatch
                Order allow,deny
                Deny from all
            </Directory>
            ErrorLog /var/log/apache2/error.log
            # Possible values include: debug, info, notice, warn, error, crit,
            # alert, emerg.
            LogLevel debug
            CustomLog /var/log/apache2/access.log combined
            Alias /doc/ "/usr/share/doc/"
            <Directory "/usr/share/doc/">
                Options Indexes MultiViews FollowSymLinks
                AllowOverride None
                Order deny,allow
                Deny from all
                Allow from 127.0.0.0/255.0.0.0 ::1/128
            </Directory>
            RewriteEngine On
            RewriteLog "/var/log/apache2/mod_rewrite.log"
            RewriteLogLevel 9
        </VirtualHost>

    I can't seem to find any other configuration files that override this and/or prevent symlinks from being loaded. Any ideas? Here are my permissions on the actual referenced files:

        $ ls -l ~/Documents/Projects/Work/carddesigner/build/main
        total 12
        drwxrwxrwx 5 rfkrocktk rfkrocktk 4096 2011-02-28 16:11 advanced
        drwxrwxrwx 2 rfkrocktk rfkrocktk 4096 2011-02-28 16:10 core
        drwxrwxrwx 2 rfkrocktk rfkrocktk 4096 2011-02-28 16:10 simple

    Seems like the permissions are good to go, right?
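
    Not necessarily: the permissions on the link target's contents aren't the whole story. The Apache worker (typically www-data) must also have execute (search) permission on every directory along the target path, and home directories often deny that to "other". A hedged way to find the blocking component:

        # show the mode of each path component Apache has to traverse
        namei -m /home/rfkrocktk/Documents/Projects/Work/carddesigner/build/main/
        # grant search permission on whichever component lacks it, e.g.:
        chmod o+x /home/rfkrocktk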

  • Troubleshooting iptables and configuring it to drop the priority of long-term connections

    - by intuited
    I'm somewhat familiar with the general concepts of iptables and would like to learn it in more detail; I'm hoping my learning experience can also be useful to others.

    The situation: I'm running dd-wrt on my router. Despite its purported QoS skills, I'm still seeing connection latency shoot up hugely whenever there's an ongoing HTTP connection, e.g. some large download. Under such conditions it can take 10 seconds or more to load a basic web page; sometimes connections are dropped entirely. I've tried adjusting the parameters, dropping the allotted bandwidth for upload and download to well under my limit, but nothing seems to work. dd-wrt is configured to use HTB as the QoS algorithm; HFSC, although presented as an option, seems to cause the router to crash, and is rumoured to not actually work on any Linux system.

    I'd like to troubleshoot this issue and hopefully improve the settings dd-wrt is using, but I'm finding the learning curve a bit overwhelming. For starters, I am not sure what HTB actually specifies: is it a set of iptables commands, or do some of those commands specify how HTB is to be used? I would like it to prioritize based on protocol the way it's already supposed to, and in addition I'd like it to drop the priority of connections that have a high total byte count, say over 400 KB. (A sketch of what that can look like follows below.)

    Also, tips on utilities that can be run under dd-wrt to get more info on what's going on are appreciated. I've tried to get iftop to work, but there were issues running curses. I'm leaning towards replacing dd-wrt with OpenWrt; comments on this strategy are also welcome. I suspect I would be well advised to get a second router as a stand-in before trying that. It may be worth noting that my total bandwidth is pretty limited (256 kbit/s).
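
    To untangle the terminology: HTB is a queueing discipline configured with the tc tool, not with iptables. iptables only classifies packets (e.g. by setting a firewall mark), and a tc filter maps marks to HTB classes. The connbytes match can then demote long-running transfers, which is exactly the "lower priority after 400 KB" idea. A sketch under those assumptions, sized for a 256 kbit/s uplink on eth0 (all class numbers and rates are illustrative):

        # HTB tree: interactive traffic defaults to 1:20, demoted bulk goes to 1:30
        tc qdisc add dev eth0 root handle 1: htb default 20
        tc class add dev eth0 parent 1:  classid 1:1  htb rate 240kbit ceil 240kbit
        tc class add dev eth0 parent 1:1 classid 1:20 htb rate 180kbit ceil 240kbit prio 1
        tc class add dev eth0 parent 1:1 classid 1:30 htb rate 60kbit  ceil 240kbit prio 2

        # mark any connection once it has transferred more than 400 KB in total
        iptables -t mangle -A POSTROUTING -o eth0 \
            -m connbytes --connbytes 409600: --connbytes-dir both --connbytes-mode bytes \
            -j MARK --set-mark 30

        # steer marked packets into the low-priority class
        tc filter add dev eth0 parent 1: protocol ip handle 30 fw flowid 1:30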

  • fax server farm architecture

    - by Brian Postow
    I'm not sure this is the right forum for this, but it's not Stack Overflow, so... I'm trying to figure out an architecture to solve the following problem; maybe someone here can help.

    I have a T1 with 23 fax lines coming into the building, and a computer (a Macintosh Xserve) running HylaFAX. If I had one POTS line, I'd be done. However, I have no idea how to get the T1 into the Mac. Options I've considered:

        1. Some sort of PCI T1 modem (does that exist?)
        2. Splitting the T1 into 23 POTS lines and connecting 23 analog modems to the Mac,
           either via an external modem bank (do they still make those?) or via some sort of
           external PCI chassis that would allow me to use more than 2 four-port modem cards.
        3. Routing either the T1 or the split POTS lines into some intermediate device and then
           transferring the images over IP or USB to the Mac.
        4. Really, any other option I can come up with.

    This has GOT to be a problem that someone has already solved, right?

  • Linux Port 80 to redirect to a Windows box

    - by Richard Staehler
    I have 2 servers here at work. One is a Windows Server 2008 R2 box (for safety's sake, let's use 192.168.1.100) and the other is a Fedora 14 box (192.168.1.101).

    Currently, when you hit our subdomain x.test.com, our routers send it to the Fedora box, and since Apache is installed and listening on port 80, it displays the Fedora Apache test page. It's obvious that I don't use port 80 for much on this machine; however, I do run Nagios on it, and it's always nice to be able to access that from anywhere in the world. When I want to access it, I just type x.test.com/nagios.

    Now here comes the dilemma. On the Windows R2 box, we recently installed a program that requires us to set up a web server using IIS 7. Because of this application, I'm going to create a new subdomain called y.test.com, but since we only have one WAN/router, it will still get pointed at the Fedora box. That said, it wants to use port 80 as well (or whatever port I damn well wish to assign it).

    So my question is: since our router points to the Fedora 14 box (.101), and I want to keep Nagios accessible from anywhere in the world, how do I tell Apache (httpd) to redirect port 80 traffic for the new subdomain to the other server (.100)? If that's not possible, what are my other options? I have rinetd installed on Fedora and even tried the rule 192.168.1.101 80 192.168.1.100 80, but it didn't seem to work "because port 80 was already bound". Thoughts? And thanks!
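
    Since Apache already owns port 80 on the Fedora box, the approach that keeps /nagios working is name-based virtual hosts plus mod_proxy: requests whose Host header is y.test.com get proxied to the Windows/IIS box, and everything else stays local. (rinetd can't do this: it forwards the entire port, which is also why it couldn't bind while Apache held it.) A sketch, assuming mod_proxy/mod_proxy_http are enabled and using the names from the question:

        NameVirtualHost *:80

        <VirtualHost *:80>
            ServerName x.test.com
            DocumentRoot /var/www/html    # local site; the /nagios alias still applies
        </VirtualHost>

        <VirtualHost *:80>
            ServerName y.test.com
            ProxyPreserveHost On
            ProxyPass        / http://192.168.1.100/
            ProxyPassReverse / http://192.168.1.100/
        </VirtualHost>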

  • Linux: Force fsck of a read-only mounted filesystem?

    - by Timothy Miller
    I'm developing for a headless embedded appliance running CentOS 6.2. The user can connect a keyboard but not a monitor, and a serial console would require opening the case, something we don't want the user to have to do. This all pretty much rules out booting from a recovery USB drive, unless all it does is blindly reimage the hard drive.

    I would like to provide some recovery facilities, and I have written a tool that comes up on /dev/tty1 in place of getty to provide these functions. One such function is fsck. I have found out how to remount the root and other file systems read-only. Now that they are read-only, it should be safe to fsck them and then reboot. Unfortunately, fsck complains that the filesystems are mounted and refuses to do anything. How can I force fsck to run on a read-only mounted partition? Based on my research, this is going to be something obscure: "-f" just means to force repair of a clean (but unmounted) partition, whereas I need to repair a clean or unclean mounted partition. From what I've read, this is something "only experts" should do, but no one has bothered to explain how the experts do it. I'm hoping someone can reveal this to me.

    BTW, I've noticed that e2fsck 1.42.4 on Gentoo will let you fsck a mounted partition, even one mounted read-write, but it seems to do so only when run from a terminal, so that it can ask the user whether they're sure they want to do something so dangerous. I'm not sure whether the CentOS version behaves the same way, but it appears that fsck CAN repair a mounted partition, yet flatly refuses to when not run from a terminal. One last-resort option is to compile my own hacked fsck, but I'm afraid I'd mess it up in some unexpected way. Thanks!
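
    One stock escape hatch on CentOS 6 that avoids patching fsck: rc.sysinit checks for a /forcefsck flag file early in boot, before the filesystems are remounted read-write, and passes -f to fsck when it finds one. The recovery tool can therefore schedule a full check for the next boot instead of fighting the mounted-filesystem guard; a sketch:

        # schedule a forced fsck of all filesystems on the next boot
        touch /forcefsck
        reboot
        # rc.sysinit consumes the flag, so subsequent boots are normal again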

  • Mac Management Without Permission and Security

    - by Bart Silverstrim
    I was going through some literature on managing OS X laptops and asked someone questions about usage scenarios for the MacBooks. I asked someone more knowledgeable than I am whether it was possible for my Mac to be taken over, while visiting another site for a conference or on the Wi-Fi network of a local coffee house, by policies from an OS X Server running Workgroup Manager (either one legitimately belonging to the site, or a copy of OS X Server someone is running on hardware hidden somewhere on the network). Such a server apparently can be set up to do things like limit my access to the Finder or impose other neat whiz-bang management features.

    He said it is indeed possible: the server would be assigned via DHCP, the OS X server would treat my Mac as a guest, and it could hand out restrictions that my Mac would happily accept without notifying me or giving me a choice, unlike Windows, which I believe must be joined to a domain before it becomes "managed" by Active Directory.

    So my question, as network admins and sysadmins with users traveling with MacBooks: is there a way to reasonably protect your users from having their machines hijacked, short of just turning off networking all the time? Or isn't this much of a security hazard? What threat does this pose to the road warriors in your businesses?

  • How to prevent nginx from locking files on mounted samba partition in Centos 6

    - by Bruce Kirkpatrick
    I'm using nginx 1.3.8 inside a CentOS 6.3 VirtualBox 4.2.4 virtual machine. The system is running the latest software available via yum update, and the host OS is Windows 7. The site files nginx serves are on a mounted Samba partition, which is a folder on the Windows host: inside Linux, nginx paths refer to /home/vhosts, and this is mounted from D:\vhosts\ on Windows. The Samba partition is mounted as root with 777 privileges. /etc/fstab looks like this, but with the real IP, username, and password:

        //hostip/vhosts /home/vhosts cifs username=username,password=SECRETPASSWORD,uid=root,gid=root,file_mode=0777,dir_mode=0777,rw,_netdev 0 0

    I.e., Linux/nginx reads from the Windows share, not the other way around. In /etc/samba/smb.conf I have tried to disable all Samba locking features, but it seems to have no effect, even after rebooting the virtual machine:

        locking = no
        share modes = no
        oplocks = no
        level2 oplocks = no
        kernel oplocks = no

    I receive "Access is denied" errors in Windows (or in Linux) when attempting to overwrite a JavaScript file from Windows once it has been served at least once by nginx. If I run service nginx reload, the lock is removed and I'm able to save the file; that's why I think nginx is causing the lock. The same problem occurs with directories, though that may be a separate issue not related to the use of Samba. I'm using Samba so that I can manage the source code outside the virtual machine. Note also that after I run service nginx reload, the file I'm editing is actually automatically deleted from the Windows host.

    SOLVED: I just reviewed my nginx.conf file. It turns out the open_file_cache feature was causing the locks and the deleted files. With this option set to open_file_cache off;, my problem is resolved. I will repost this as the answer when the site allows me to do so.
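
    For anyone hitting the same symptom, the directive lives in the http block of nginx.conf. With the cache enabled, nginx keeps descriptors for recently served files open, and over CIFS those open handles surface as share locks on the Windows side; disabling it trades a little static-file performance for editability:

        http {
            # don't cache open file descriptors; frees CIFS/Windows-side locks
            open_file_cache off;
        }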

  • Ubuntu 13.10 on Acer V5-472 with HD 4000

    - by Hyperboreus
    I have an Acer V5-472 with the Intel HD 4000 graphics chipset and a built-in 1366x768 display. I have installed Ubuntu 13.10 amd64 in legacy boot mode with an external monitor. Installation showed no problems, and I can boot from the HDD and log into my system. However, the internal display doesn't work and I have to use an external monitor.

    I have tried the following (found in other threads), to no avail:

        - Setting the grub options "acpi_backlight=vendor", "acpi_osi=Linux", or both.
        - Installing the Intel HD drivers for Linux from their homepage.
        - Running in circles, screaming and shouting.

    The internal display lights up (I can change the brightness with Fn-Left and Fn-Right), but that's all. When I boot, I get a purple splash screen, and from then on only the external monitor works. I read somewhere that this might be a problem with kernel 3.11?

    Has anybody got Ubuntu running on an Acer V5-472? Should I change Ubuntu version or use 32-bit instead? In general, how can I get the internal display to work?

    Edit: the Settings > Displays dialogue shows the internal display correctly, with a supported resolution of 1366x768.
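
    Since the Displays dialogue can see the panel, xrandr can probably address it directly, which helps separate a backlight problem from a mode-setting problem. A sketch; the output name eDP1 is a guess, so read the real one off the first command:

        xrandr                                          # list outputs (eDP1/LVDS1 = internal)
        xrandr --output eDP1 --mode 1366x768            # try forcing the internal panel on
        xrandr --output eDP1 --auto --right-of HDMI1    # or enable it alongside the external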

  • Login to OS X Server User Account from Local Computer

    - by Brod Wilkinson
    I have OS X Server installed on a Mac mini. I've created several user accounts, one of which is account name Bob, password abc123. From the Mac mini's login screen I can choose "Server" (the main account), "Bob" (Bob's account), and "Other..." for OS X Server accounts; from "Other...", if I input Bob's credentials it logs me in.

    I also have a MacBook Air. I would like to be able to select "Other..." from its login screen, input Bob's credentials, and have it log in to Bob's account, or any other user account for that matter. My server is set up as private, with the server address server.network.private.

    Following some googled instructions as well as Apple's own, I have set up an Open Directory with username diradmin and password abc123. Then, on the MacBook Air, I went into System Preferences > Users & Groups > Login Options, clicked Join next to Network Account Server, input my server (server.network.private) with the diradmin credentials, and it connected. Great. I also ticked Allow Network Users to Log In at Login Window and selected All Users.

    I assumed this would let the MacBook Air log in to the "Bob" account by selecting "Other..." from the login window, but there is no "Other..." option. I then set up a VPN with basic credentials, logged into it on the MacBook Air, and still not much has changed. I am able to share screens with the "Bob" account from my MacBook Air by clicking Share Screen... from the Finder under Shared > Network Server and then clicking Log In, but this obviously requires the MacBook Air to already be logged into an account before it can share screens, which is not suitable.

    Is there any way to simply log in to an OS X Server user account from the MacBook Air's login screen via "Other...", as on the Mac mini's login screen? Thanks in advance.

    Operating System: OS X 10.9 Mavericks; OS X Server: Version 3
