Search Results

Search found 23265 results on 931 pages for 'justin case'.


  • How can I add subdomains of default accepted domain of Exchange 2010

    - by Christoph
    I have an Exchange 2010 server that has several accepted domains. Now I want this server to accept - besides the default SMTP domain - all subdomains of the default domain. The documentation on Technet states:

        When you create an accepted domain, you can use a wildcard character (*) in the address space to indicate that all subdomains of the SMTP address space are also accepted by the Exchange organization. For example, to configure Contoso.com and all its subdomains as accepted domains, enter *.Contoso.com as the SMTP address space.

    It is, however, not possible to add e.g. *.contoso.com if contoso.com is already configured. Exchange complains in this case that the domain is already configured. It is also not possible to edit the "value", i.e. the domain name, of an accepted domain. I know that I cannot modify the default accepted domain, but changing the default to another domain does not help either, because the domain name itself can never be edited. The last idea was deleting the accepted domain and re-creating it with "*." prepended. This is also impossible, because it is of course not possible to delete or modify the default address policy, and if a domain name is used in an address template it cannot be removed from the accepted domains. The question is: how can I make my Exchange 2010 server accept any subdomain of its default accepted domain with a wildcard?

    Read the article

  • Diagnosing a BSOD involving USB

    - by David Ebbo
    [Running Win7 Ultimate 64 bit] My new HP Pavilion Elite HPE-450t has been plagued by BSOD crashes since I got it about 5 weeks ago. The crashes are somewhat rare, sometimes not occurring for 3 or 4 days. I have spent a lot of time trying to isolate the device that could be at fault, but I have seen crashes with only the keyboard and mouse plugged in (as USB devices), and I tried two sets of keyboard/mouse, so I'm running out of ideas. :( The WhoCrashed tool gave this info about my latest BSOD:

        crash dump file: C:\Windows\Minidump\121310-11887-01.dmp
        This was probably caused by the following module: usbport.sys (USBPORT+0x2DE4E)
        Bugcheck code: 0xFE (0x5, 0xFFFFFA8008F571A0, 0x80863B34, 0xFFFFFA80092F2510)
        Error: BUGCODE_USB_DRIVER
        file path: C:\Windows\system32\drivers\usbport.sys
        product: Microsoft® Windows® Operating System
        company: Microsoft Corporation
        description: USB 1.1 & 2.0 Port Driver
        Bug check description: This indicates that an error has occurred in a Universal Serial Bus (USB) driver.
        The crash took place in a standard Microsoft module. Your system configuration may be incorrect. Possibly this problem is caused by another driver on your system which cannot be identified at this time.

    I looked at http://msdn.microsoft.com/en-us/library/ff560407(VS.85).aspx, and for Parameter1 = 0x5 it says "A hardware failure has occurred due to a bad physical address found in a hardware data structure. This is not due to a driver bug". Should I conclude that it's a hardware issue in the machine itself, rather than a bad USB driver or USB device? Here is the minidump, in case someone can get more info out of it: http://ewt52q.blu.livefilestore.com/y1peS4Ce8nSK1SXghzMDoxDWXlaEu-EKCJsv25y8y5DXXIUzZ9U0_tYgFJXd939fykwa0zRmx98IW0PYG18GioqKAuARYjtspSA/121310-11887-01.dmp?download&psid=2

    Read the article

  • Melting plastic around DC-in jack in laptop

    - by Ove
    I recently noticed that the plastic around the DC-in jack of my laptop was warped (melted) a little bit. Since I noticed, I have done some experiments, and saw that the metal tip of the charger heats up very much when I am gaming or performing CPU-intensive work (it's so hot that I can't hold it between my fingers). When I am using Windows normally (web browsing, music, video), the tip is not hot. I tried using another charger from a compatible laptop, but its metal tip overheated as well, so the problem is not caused by the charger. I have been using this laptop for gaming for 1.5 years and I never had this problem. When gaming I always use a laptop cooler. Dust is not the problem (I cleaned out the dust), and the CPU and GPU temperatures are not higher than when I got the laptop. The only thing that is excessively hot is the charger tip. Because I bought my laptop from the USA, sending it in for warranty and back would cost more than the laptop's value, so I need to fix it myself. I have googled around, and I saw that the problem might be the DC-in jack that is located on the motherboard of the laptop. I plan to take the laptop apart, see if the jack has become loose, and solder it in place if it has. My questions for you are: Did anyone deal with this problem in the past? Did anyone manage to fix it? Is the DC-in jack the culprit in this case, or is it possible for the problem to be caused by another part on the motherboard? Is there any way I can check the DC-in jack with a multimeter? What should I measure (resistance, etc.)? EDIT: My laptop is a Sager NP5135 (aka Clevo B5130M). I also posted on NBR, including some pictures: link

    Read the article

  • Using 1and1.com Servers, SMTP Mail is Limited - Local XAMPP Server Works As Expected

    - by nicorellius
    I'm starting to not like 1and1.com that much. I've used them for years, but mainly for simple sites without much need for configuration. I know there are better hosting companies out there and I may go seeking them. The problem here is that on my local XAMPP server (sitting on a network with Comcast ISP), I have a PHP script that uses PEAR::Mail to send mail using MIME. The script works fine locally with either smtp.1and1.com or smtp.gmail.com and the corresponding credentials, using the appropriate ports, etc. 1and1 tells me that I have to change the MX record on the domain where this script runs in order to make this work. This doesn't make sense to me. Now I'm pretty new to all this, but how is it that this is the case? Why can my local server work just fine, out of the box, but their servers not? I have asked them these questions, but they are very vague and I cannot get any good answers from them. Versions:

        PEAR Version: 1.5.0
        PHP Version: 4.4.9
        Zend Engine Version: 1.3.0

    My apologies in advance for my ignorance. Thanks for the help in advance.
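
    A quick way to see whether the hosting server can even reach an external SMTP port is to test the connection from a shell on that host. This is only a sketch and assumes shell access to the hosting account; many shared hosts filter outbound SMTP, which would explain why the same script behaves differently on a local XAMPP box.

        # Test outbound submission to an external SMTP server from the web host.
        openssl s_client -starttls smtp -crlf -connect smtp.gmail.com:587 -quiet
        # A "220 ... ESMTP" banner means the port is reachable from that host;
        # a timeout suggests outbound SMTP is blocked and PEAR::Mail will fail there.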

    Read the article

  • Kickstart CentOS 6 prompting for TCP/IP with network set to DHCP

    - by Andy Shinn
    I am trying to stop my kickstart CentOS install prompting me for TCP/IP information. After I click through this prompt (keeping IPv4 and IPv6 to their defaults) the installation continues and completes just fine. Below is my kickstart file:

        # Andy's super awesome VM kickstart file
        install
        url --url=http://mirrors.kernel.org/centos/6/os/x86_64
        lang en_US.UTF-8
        keyboard us
        text
        %include /tmp/network.ks
        rootpw --iscrypted $6$RA8DyrNTsVJkGIgY$ohZ62HHiOjNnn1yDMZlIu3lQ63D3plGPcbVZtPKE8Oq6Z.IGUgN.kNLkxs/ZymZuluRDWsW2eey5zLOl2G3mp.
        firewall --service=ssh
        authconfig --enableshadow --passalgo=sha512
        selinux --disabled
        timezone America/Los_Angeles
        bootloader --location=mbr --driveorder=vda --append="crashkernel=auto rhgb quiet"
        # The following is the partition information you requested
        # Note that any partitions you deleted are not expressed
        # here so unless you clear all partitions first, this is
        # not guaranteed to work
        zerombr
        clearpart --all --drives=vda --initlabel
        part /boot --fstype=ext4 --size=500
        part pv.253002 --grow --size=1
        volgroup vg1 --pesize=4096 pv.253002
        logvol / --fstype=ext4 --name=lv_root --vgname=vg1 --grow --size=1024 --maxsize=51200
        logvol swap --name=lv_swap --vgname=vg1 --grow --size=4032 --maxsize=4032
        repo --name="CentOS" --baseurl=http://mirrors.kernel.org/centos/6/os/x86_64 --cost=100
        repo --name="Puppet Labs Products" --baseurl=http://yum.puppetlabs.com/el/6/products/x86_64
        repo --name="Puppet Labs Dependencies" --baseurl=http://yum.puppetlabs.com/el/6/dependencies/x86_64
        repo --name="EyeFi" --baseurl=http://flexo.eye.fi/6/eye-fi-api

        %packages
        @core
        @server-policy
        puppet
        facter
        %end

        %pre --erroronfail
        #!/bin/bash
        for x in `cat /proc/cmdline`; do
          case $x in
            SERVERNAME*)
              eval $x
              echo "network --onboot yes --device eth0 --bootproto dhcp --hostname ${SERVERNAME}.eye.fi" > /tmp/network.ks
              ;;
          esac
        done
        %end

        %post
        puppet agent --waitforcert 10 --onetime --no-daemon --pluginsync --server puppet.eye.fi
        %end

        reboot

    My kernel arguments are in the following virt-install command that I use to start the install:

        virt-install -n zabbix -r 2048 --vcpus=2 -l http://mirrors.kernel.org/centos/6/os/x86_64 --disk /dev/vg_inf1/zabbix --network bridge=br85 --initrd-inject=/home/ashinn/vm_kickstart --extra-args "ks=file:/vm_kickstart SERVERNAME=zabbix" --autostart

    During the install, I can pull up a console on the second terminal and verify the contents of /tmp/network.ks are:

        network --onboot=yes --bootproto=dhcp --ipv6=auto --hostname=jenkins2.mydomain.com

    Why might Anaconda be prompting for the TCP/IP settings when they are already set to DHCP?
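
    One thing that may be worth trying (an assumption, not something confirmed above): the loader needs the network configured before the %included network line is ever parsed, so passing the network choice on the kernel command line usually suppresses the interactive TCP/IP screen on CentOS 6. A sketch of the same virt-install call with those extra boot arguments:

        # Hypothetical variant of the virt-install command above: "ksdevice=eth0 ip=dhcp"
        # tells the CentOS 6 loader which NIC to use and to configure it via DHCP
        # before the kickstart is read, so Anaconda should not have to ask.
        virt-install -n zabbix -r 2048 --vcpus=2 \
          -l http://mirrors.kernel.org/centos/6/os/x86_64 \
          --disk /dev/vg_inf1/zabbix --network bridge=br85 \
          --initrd-inject=/home/ashinn/vm_kickstart \
          --extra-args "ks=file:/vm_kickstart SERVERNAME=zabbix ksdevice=eth0 ip=dhcp" \
          --autostart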

    Read the article

  • How to prevent samba from holding a file lock after a client disconnects?

    - by Jean-Francois Chevrette
    Here I have a Samba server (Debian 5.0) that is configured to host Windows XP profiles. Clients connect to this server and work on their profiles directly on the Samba share (the profile is not copied locally). Every now and then, a client may not shut down properly and thus Windows does not free the file locks. When looking at the Samba locking table, we can see that many files are still locked even though the client is not connected anymore. In our case, this seems to occur with lockfiles created by Mozilla Thunderbird and Firefox. Here's an example of the Samba locking table:

        # smbstatus -L | grep DENY_ALL | head -n5
        Pid    Uid    DenyMode  Access   R/W   Oplock           SharePath         Name                                       Time
        --------------------------------------------------------------------------------------------------
        15494  10345  DENY_ALL  0x3019f  RDWR  EXCLUSIVE+BATCH  /home/CORP/user1  app.profile/user1.thunderbird/parent.lock  Mon Nov 22 07:12:45 2010
        18040  10454  DENY_ALL  0x3019f  RDWR  EXCLUSIVE+BATCH  /home/CORP/user2  app.profile/user2.thunderbird/parent.lock  Mon Nov 22 11:20:45 2010
        26466  10056  DENY_ALL  0x3019f  RDWR  EXCLUSIVE+BATCH  /home/CORP/user3  app.profile/user3.firefox/parent.lock      Mon Nov 22 08:48:23 2010

    We can see that the files were opened by Windows with a DENY_ALL lock imposed. Now when a client reconnects to this share and tries to open those files, Samba says that they are locked and denies access. Is there any way to work around this situation or am I missing something? Edit: We would like to avoid disabling file locks on the Samba server, because there are good reasons to have those enabled.
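
    A rough sketch of two global smb.conf settings that are often suggested for stale locks left behind by crashed Windows clients - these are possibilities to test, not a confirmed fix for this setup:

        [global]
            # drop any old connections from a client when it reconnects,
            # which releases the locks its previous session still holds
            reset on zero vc = yes
            # disconnect idle/dead sessions after 15 minutes so their locks expire
            deadtime = 15

        # For an immediate, one-off fix, the smbd process shown by smbstatus can be killed:
        kill 15494    # PID from the first smbstatus line above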

    Read the article

  • Cannot access firewalled jboss server from Internet Explorer

    - by Simon Gibbs
    I've produced a website for a client, One Single Menu, using JBoss and hosted it on Rackspace Cloud Servers running Ubuntu's Maverick Meerkat. Following advice, I established some iptables rules to protect JBoss:

        iptables -I INPUT 1 -i lo -j ACCEPT
        iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
        iptables -A INPUT -p tcp --dport 22 -j ACCEPT
        iptables -t nat -A PREROUTING -p tcp --dport 80 -j REDIRECT --to-port 8080
        iptables -I INPUT -p tcp --dport 8080 -j ACCEPT
        iptables -t nat -A OUTPUT -o lo -p tcp --dport 80 -j REDIRECT --to-port 8080
        iptables -A INPUT -j DROP

    Now, several versions of IE on several computers on at least two different ISPs cannot access onesinglemenu.com. Curl from within the datacenter, Firefox, and Safari on the same ISPs can all access the server fine. I even tried IE and Firefox on the same computer and IE failed but Firefox worked. The error behaviour is that IE hangs on connecting without reporting an error, even after a minute or so. No page is displayed at all. I find it quite odd that I'm having a browser-specific connection issue, but it appears to be the case. Help!
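
    A hedged diagnostic sketch (not from the question itself): confirm where IE's packets die by watching the rule counters and the wire while an affected client retries. A blanket INPUT DROP also silences ICMP, which can break path-MTU discovery for some clients only - that last rule below is an assumption to test, not an established cause.

        # List the INPUT chain with packet counters and positions, to see
        # whether anything is landing on the final DROP that shouldn't.
        iptables -L INPUT -n -v --line-numbers

        # Watch web traffic while the affected IE machine retries; if SYNs
        # arrive but nothing comes back, the problem is on the server side.
        tcpdump -ni eth0 'tcp port 80 or tcp port 8080'

        # If ICMP turns out to be caught by the DROP rule, allow it and retest:
        iptables -I INPUT -p icmp -j ACCEPT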

    Read the article

  • Windows Server 2008 backup VHD's - is it possible to mount/open in Windows 7?

    - by Simon
    Hi All, Is it possible to mount the VHD files created by the Windows Server 2008 backup utility onto a Windows 7 (release) client? Following an array failure I was very worried that there was a problem with both the backup sets on different USB drives, as attaching the VHD to a Win 7 box did not show the expected structure (instead they behaved like unformatted disk space). Subsequently, I've attached the backup drive to a 2008r2 machine that I'd intended to be the replacement and the backup set can be browsed without issue (seemingly). When the new disks arrive I'll go through the recovery process and see where we are, but it looks promising so far. Is it simply the case that you can't take server-created VHDs and mount them on desktop machines? (Rather than hyper-ventilating at the thought of years of lost photos and email, I'm now just mildly curious) Edit: One thing that has confused things is that the backup utility on Win7 is more restrictive about restoring from external devices than the equivalent on 2008r2. With r2, I can restore files 'from another server' and browse to external storage. Win7 only allows the backup to be located on a network share. Once my box of new disks arrives and I've got something to restore onto, I'll move the smaller of the backup VHDs onto network storage reachable by Win7 and see if the VHD is readable. I haven't read up on the VHD process used by the backup app - I'm assuming it's a base VHD and differencing files used for incremental backups and that the restore app understands this. Finally: in retrospect the question should have been, 'can I restore a 2008r2 backup set via a Win 7 client'. Thanks

    Read the article

  • Authenticate VNC session with ConsoleKit?

    - by lori
    I have a linux machine running Fedora 16 in a cupboard. It has no screen or keyboard. I connect to it using a combination of VNC and ssh. Recently, after an update, I have had issues with authentication on the machine. If I VNC to it, the KDE desktop pops up an error dialog every few minutes saying "Authorization failed. Failed to obtain authentication." If I plug in a USB drive it fails to mount, and Dolphin reports an authentication issue again. I have had limited success finding the solution. AFAICT, it is an issue with ConsoleKit deeming me to be a non-local user, so it prevents authentication. This is the output from ck-list-sessions:

        $ ck-list-sessions
        Session5:
                unix-user = '1000'
                realname = 'steve'
                seat = 'Seat6'
                session-type = ''
                active = FALSE
                x11-display = ':1'
                x11-display-device = ''
                display-device = ''
                remote-host-name = ''
                is-local = FALSE
                on-since = '2012-09-16T08:07:03.137011Z'
                login-session-id = '1'

    I have tried to update my .vnc/xstartup script to include ck-launch-session as follows:

        $ cat ~/.vnc/xstartup
        #!/bin/sh
        exec ck-launch-session vncconfig -iconic &
        unset SESSION_MANAGER
        unset DBUS_SESSION_BUS_ADDRESS
        export XKL_XMODMAP_DISABLE=1
        OS=`uname -s`
        if [ $OS = 'Linux' ]; then
          case "$WINDOWMANAGER" in
            *gnome*)
              if [ -e /etc/SuSE-release ]; then
                PATH=$PATH:/opt/gnome/bin
                export PATH
              fi
              ;;
          esac
        fi
        if [ -x /etc/X11/xinit/xinitrc ]; then
          exec ck-launch-session /etc/X11/xinit/xinitrc
        fi
        if [ -f /etc/X11/xinit/xinitrc ]; then
          exec ck-launch-session sh /etc/X11/xinit/xinitrc
        fi
        [ -r $HOME/.Xresources ] && xrdb $HOME/.Xresources
        exec ck-launch-session xsetroot -solid grey
        exec ck-launch-session xterm -geometry 80x24+10+10 -ls -title "$VNCDESKTOP Desktop" &
        exec ck-launch-session twm &

    This has not helped. How can I either authenticate myself to ConsoleKit, or trick it into believing I am a local user?
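
    For what it's worth, prefixing every command with ck-launch-session starts several independent ConsoleKit sessions. A variant worth trying - an assumption, not a confirmed fix - is to register a single session and run the normal xinitrc inside it:

        #!/bin/sh
        # Sketch of an alternative ~/.vnc/xstartup: one ConsoleKit session
        # wrapping the whole desktop instead of one per command.
        unset SESSION_MANAGER
        unset DBUS_SESSION_BUS_ADDRESS
        export XKL_XMODMAP_DISABLE=1
        [ -r "$HOME/.Xresources" ] && xrdb "$HOME/.Xresources"
        vncconfig -iconic &
        # dbus-launch plus ck-launch-session give the session its own D-Bus bus
        # and a single ConsoleKit registration; everything xinitrc starts inherits it.
        exec ck-launch-session dbus-launch --exit-with-session /etc/X11/xinit/xinitrc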

    Read the article

  • What is the quickest reliable way to backup a NAS drive to a USB drive?

    - by Tim Murphy
    How would you back up 600+ GB of data on a NAS (Network-Attached Storage) drive to a USB external drive? The NAS drive does not contain mission-critical data; nonetheless I wish to make weekly copies of it just in case. The NAS drive is almost exclusively used as an archive dump and is rarely updated. However, the backup strategy used must have a simple restore procedure so I can confidently say the data now on the NAS drive is exactly how it was at the time of backup. I did try xcopy, but it seemed like it would take many, many hours and eventually crashed with insufficient memory. http://www.ctunion.com/node/114 suggests I would need to use xxcopy instead due to folder/file name lengths. My concern with xcopy/xxcopy is the length of time it takes. Hoping something else is faster. The NAS drive is a D-Link DNS-313 with a 1TB drive installed, connected to the router via Ethernet cable. The USB drive is a Seagate 1TB. It can be connected to Windows Vista (preferred) or Windows 7 PCs. Both PCs are usually connected wirelessly; however, an Ethernet cable can be used during the backup to speed up the process.

    Read the article

  • Install PHP mcrypt on Red Hat 4

    - by Chris
    I'm having a very hard time getting mcrypt for PHP installed on a Red Hat 4 server. I've downloaded the rpm but it tells me:

        error: Failed dependencies:
                php-common(x86-32) = 5.4.7-2.fc18 is needed by php-mcrypt-5.4.7-2.fc18.i686
                rpmlib(FileDigests) <= 4.6.0-1 is needed by php-mcrypt-5.4.7-2.fc18.i686
                libc.so.6(GLIBC_2.4) is needed by php-mcrypt-5.4.7-2.fc18.i686
                libltdl.so.7 is needed by php-mcrypt-5.4.7-2.fc18.i686
                rtld(GNU_HASH) is needed by php-mcrypt-5.4.7-2.fc18.i686
                rpmlib(PayloadIsXz) <= 5.2-1 is needed by php-mcrypt-5.4.7-2.fc18.i686

    So when I try to install one of those packages, it in turn requires another 8 packages, and I'm diving into dependency hell here. Now if I try to compile mcrypt from source, this is what I get:

        checking for libmcrypt - version >= 2.5.0... no
        *** Could not run libmcrypt test program, checking why...
        *** The test program failed to compile or link. See the file config.log for the
        *** exact error that occured. This usually means LIBMCRYPT was incorrectly installed
        *** or that you have moved LIBMCRYPT since it was installed. In the latter case, you
        *** may want to edit the libmcrypt-config script: no
        configure: error: *** libmcrypt was not found

    But I was able to install libmcrypt from an rpm package successfully. Any suggestions? Also, I cannot use up2date, as it requires an active paid Red Hat account; since the staff has changed rather rapidly in the last year where I work, no one knows if there even was any support account.
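
    The failed dependencies above are all Fedora 18 package names, which suggests the rpm simply targets the wrong distribution. A rough sketch of the source-build route under that assumption (versions and paths are illustrative, not tested on RHEL 4):

        # Build libmcrypt from source so the headers and libmcrypt-config script
        # that the configure check above is missing are actually present.
        tar xzf libmcrypt-2.5.8.tar.gz && cd libmcrypt-2.5.8
        ./configure --prefix=/usr/local && make && make install
        ldconfig    # make the new shared library visible

        # Then build PHP's mcrypt extension from the matching PHP source tree.
        cd php-4.4.9/ext/mcrypt
        phpize && ./configure --with-mcrypt=/usr/local && make && make install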

    Read the article

  • Script launching 3 copies of rsync

    - by organicveggie
    I have a simple script that uses rsync to copy a Postgres database to a backup location for use with Point In Time Recovery. The script is run every 2 hours via a cron job for the postgres user. For some strange reason, I can see three copies of rsync running in the process list. Any ideas why this might be the case? Here's the cron entry:

        # crontab -u postgres -l
        PATH=/bin:/usr/bin:/usr/local/bin
        0 */2 * * * /var/lib/pgsql/9.0/pitr_backup.sh

    And here's the ps list, which shows two copies of rsync running and one sleeping:

        # ps ax | grep rsync
        9102 ?  R  2:06 rsync -avW /var/lib/pgsql/9.0/data/ /var/lib/pgsql/9.0/backups/pitr_archives/20110629100001/ --exclude pg_xlog --exclude recovery.conf --exclude recovery.done --exclude pg_log
        9103 ?  S  0:00 rsync -avW /var/lib/pgsql/9.0/data/ /var/lib/pgsql/9.0/backups/pitr_archives/20110629100001/ --exclude pg_xlog --exclude recovery.conf --exclude recovery.done --exclude pg_log
        9104 ?  R  2:51 rsync -avW /var/lib/pgsql/9.0/data/ /var/lib/pgsql/9.0/backups/pitr_archives/20110629100001/ --exclude pg_xlog --exclude recovery.conf --exclude recovery.done --exclude pg_log

    And here's the uber-simple script that seems to be the cause of the problem:

        #!/bin/sh
        LOG="/var/log/pgsql-pitr-backup.log"
        base_backup_dir="/var/lib/pgsql/9.0/backups"
        wal_archive_dir="$base_backup_dir/wal_archives"
        pitr_archive_dir="$base_backup_dir/pitr_archives"
        timestamp=`date +%Y%m%d%H%M%S`
        backup_dir="$pitr_archive_dir/$timestamp"
        mkdir -p $backup_dir
        echo `date` >> $LOG
        /usr/bin/psql -U postgres -c "SELECT pg_start_backup('$backup_dir');"
        rsync -avW /var/lib/pgsql/9.0/data/ $backup_dir/ --exclude pg_xlog --exclude recovery.conf --exclude recovery.done --exclude pg_log
        /usr/bin/psql -U postgres -c "SELECT pg_stop_backup();"
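
    This may simply be how rsync behaves rather than the script's fault: even a purely local copy is split into sender, receiver and generator processes, so three rsync entries per run can be normal (an interpretation of rsync's documented design, not of the logs above). A quick sketch to check whether the three PIDs belong to one run:

        # Start a throwaway local rsync (path is a placeholder) and look at the
        # parent/child layout; one process tree means one backup run, not three.
        rsync -a /var/tmp/somedir/ /var/tmp/somedir.copy/ &
        ps --forest -o pid,ppid,stat,cmd -C rsync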

    Read the article

  • Four disks - RAID 10 or two mirrored pairs?

    - by ewwhite
    I have this discussion with developers quite often. The context is an application running in Linux that has a medium amount of disk I/O. The servers are HP ProLiant DL3x0 G6 with four disks of equal size @ 15k rpm, backed with a P410 controller and 512MB of battery- or flash-based cache. There are two schools of thought here, and I wanted some feedback...

    1) I'm of the mind that it makes sense to create an array containing all four disks set up in a RAID 10 (1+0) and partition as necessary. This gives the greatest headroom for growth, has the benefit of leveraging the higher spindle count, and offers better fault-tolerance without degradation.

    2) The developers think that it's better to have multiple RAID 1 pairs: one for the OS and one for the application data, citing that the spindle separation would reduce resource contention. However, this limits throughput by halving the number of drives, and in this case the OS doesn't really do much other than regular system logging. Additionally, the fact that we have the battery-backed RAID cache and substantial RAM seems to negate the impact of disk latency... What are your thoughts?

    Read the article

  • Mac creating files w/ wrong perms on samba share

    - by geoffjentry
    In my group, which is very heterogeneous in terms of machines, we use a samba share to collaborate on files and such. In all but one case, it works as expected (or at least close enough). The one exception is my boss' laptop, a Snow Leopard MacBook Air. On his desktop (also Snow Leopard), if he creates a file it ends up server-side with perms of 774, but when he creates it with the Air, the perms are 644. The key problem is the lack of group write permission on the laptop-created files. What's really confusing is that everything I've looked at on the two machines is identical - same version of OS X, same version of samba (3.0.25b-apple), same settings for the same software, etc. I can't imagine why one machine would be different from the other, but it is. To try to be complete with the description, here is the relevant portion of my smb.conf file:

        comment = my Share
        path = /path/to/share
        public = no
        writeable = yes
        printable = no
        force group = myshare
        directory mask = 0770
        create mask = 0770
        force create mode = 0770
        force directory mode = 0770

    EDIT: I looked at three more Macs and all of them worked as expected, which leaves this one laptop the true oddball. This wasn't as good a test as the others though, as they were all running Leopard.
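
    One possibility worth ruling out (an assumption, not something established above): with CIFS UNIX extensions enabled, an OS X client can set the POSIX mode itself from its own umask, which overrides the force create mode on the server. A sketch of the global smb.conf knob and a client-side check:

        # smb.conf [global] - sketch: stop clients dictating POSIX modes so the
        # share-level create/force settings above apply to every client.
        [global]
            unix extensions = no

        # On each Mac, compare the umask in Terminal; a 0022 umask would explain
        # 644 files if that client is setting modes itself.
        umask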

    Read the article

  • Why can't I route to some sites from my MacBook Pro that I can see from my iPad? [closed]

    - by Robert Atkins
    I am on M1 Cable (residential) broadband in Singapore. I have an intermittent problem routing to some sites from my MacBook Pro - often Google-related sites (arduino.googlecode.com and ajax.googleapis.com right now, but sometimes even gmail.com). This prevents StackExchange chat from working, for instance. Funny thing is, my iPad can route to those sites and it's on the same wireless network! I can ping the sites, but not traceroute to them, which I find odd. That I can get through via the iPad implies the problem is with the MBP. In any case, calling M1 support is... not helpful. I get the same behaviour when I bypass the Airport Express entirely and plug the MBP directly into the cable modem. Can anybody explain a) how this is even possible and b) how to fix it?

        mella:~ ratkins$ ping ajax.googleapis.com
        PING googleapis.l.google.com (209.85.132.95): 56 data bytes
        64 bytes from 209.85.132.95: icmp_seq=0 ttl=50 time=11.488 ms
        64 bytes from 209.85.132.95: icmp_seq=1 ttl=53 time=13.012 ms
        64 bytes from 209.85.132.95: icmp_seq=2 ttl=53 time=13.048 ms
        ^C
        --- googleapis.l.google.com ping statistics ---
        3 packets transmitted, 3 packets received, 0.0% packet loss
        round-trip min/avg/max/stddev = 11.488/12.516/13.048/0.727 ms

        mella:~ ratkins$ traceroute ajax.googleapis.com
        traceroute to googleapis.l.google.com (209.85.132.95), 64 hops max, 52 byte packets
        traceroute: sendto: No route to host
         1 traceroute: wrote googleapis.l.google.com 52 chars, ret=-1
           * traceroute: sendto: No route to host
           traceroute: wrote googleapis.l.google.com 52 chars, ret=-1
        ^C
        mella:~ ratkins$

    The traceroute from the iPad goes (and I'm copying this by hand):

        10.0.1.1
        119.56.34.1
        172.20.8.222
        172.31.253.11
        202.65.245.1
        202.65.245.142
        209.85.243.156
        72.14.233.145
        209.85.132.82

    From the MBP, I can't traceroute to any of the IPs from 172.20.8.222 onwards. [For extra flavour, not being able to access the above appears to stop me logging in to Server Fault via OpenID and formatting the above traceroutes correctly. Anyone with sufficient rep here to do so, I'd be much obliged.]
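
    "sendto: No route to host" while plain ping succeeds often points at stale or bogus entries in the Mac's own routing/ARP state rather than at the ISP. A hedged sketch of what one might inspect on the MBP (standard OS X commands; the interpretation is an assumption):

        # Which route does the MBP think it should use for the problem address?
        route -n get 209.85.132.95
        netstat -rn | head -40      # look for odd host routes covering 209.85.x.x

        # Clear cached routes and ARP entries, then retest.
        sudo route -n flush
        sudo arp -a -d
        traceroute -n ajax.googleapis.com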

    Read the article

  • Managing multiple IMAP accounts in Thunderbird

    - by baritoneuk
    I've been using Thunderbird for years without issues with 20+ POP3 accounts. I'm moving over to IMAP, which will enable me to keep copies of the emails locally and on the server whilst keeping everything synchronised. However, I'm looking for the best way to manage multiple IMAP accounts in Thunderbird. Currently I have a filter that copies all the emails into a central inbox and into separate local folders. The reason for this is I go through my inbox daily and delete all emails that don't require any action. I move any emails that require action to my "action" IMAP account folder. This way I can synchronise all the emails that require action across multiple computers (and mobile devices). This technique is my implementation of the GTD or Getting Things Done philosophy. I also copy each email into separate local folders. The reason I do this is just in case any emails on the IMAP accounts get deleted, or something drastic happens on the server which means I lose all the emails. My business partner has access to some of these emails and still uses POP3 (with "leave copy on server" checked), but I know Thunderbird can still sometimes delete emails off the server. The problem with the above is that Thunderbird gives me the dreaded error dialogue saying that the emails cannot be filtered due to another process. I find the folder list in Thunderbird hard to manage. Here is a screenshot of part of my folder list - as you can see it's a bit of a complicated list and not easy to manage: What would be the best way of managing multiple IMAP accounts whilst allowing me to have copies put in a central folder and emails in local folders? It would be useful to know if people think this is even necessary, as perhaps there is a better way. How do people manage multiple IMAP accounts in a way that allows them to keep on top of actionable emails? I'd be interested in how others manage this. I've never used the Thunderbird-based client "Postbox"; does it handle multiple IMAP accounts better?

    Read the article

  • 5GHz vs 2.4GHz dual band router, max mbps

    - by Tallboy
    I've done a fair amount of reading before posting this, but there are a few things still unclear. I just bought a Netgear WNDR3700 N600. I understand that 5GHz offers more channels, less interference because of those extra channels, won't interfere with a microwave and so on, and also has a shorter range. Currently, my router is broadcasting both signals (for my iPhone on 2.4, and my computer on 5). But my question is: what is the max speed of 5GHz in mbps? In the router settings it allows me to set '300mbps', but I keep reading online that the max is 54. Is this true? I noticed when I set up the router the default for 2.4 was set to 300 and the default for 5 was set to 54, so I changed both to 300. Is this fine as well? I don't see why it wouldn't be maxed out for both by default. On the box it says a max rate of 300+300, so I assume this is correct, but perhaps it's throttled down by default so the router isn't stressed in case you have something streaming media 24/7 and slowing the internet down. Also, what is the max range of 5GHz? My apartment is 780 square feet, and the router is in the main living room.

    Read the article

  • BTrFS crashhhh?

    - by bumbling fool
    I create a new BTrFS raid10 file system using two 250GB drives and the second partition on a third 80GB drive. I create a subvol and snapshot. I mount the snapshot and start copying 8GB of data to it. It gets to around 1GB and the desktop disappears and what looks like a non-interactive terminal comes up with dump/crash information. I don't have a camera handy or I'd take a picture and post it. It basically looks like stack trace info. CTRL-ALT-F7 will eventually bring back the desktop, but the entire BTrFS portion of the OS is hung and non-responsive until I reboot. I've reformatted and reproduced this problem 3 times now and I'm about to give up :( I realize it is possible this problem is not entirely BTrFS' fault, because I'm on Natty which is still alpha. More granular details in case I'm an idiot:

        1) Create FS: sudo mkfs.btrfs -m raid10 -d raid10 /dev/sda2 /dev/sdb /dev/sdc
        2) Initial temporary mount: mkdir /btrfs && sudo mount -t btrfs /dev/sda2 /btrfs
        3) Create subvol: btrfs s c /btrfs/vm
        4) Create initial snapshot (optional): btrfs s sn /btrfs/cantremember.snap.something
        5) Unmount /btrfs and mount /btrfs/vm: sudo mount -t btrfs -o subvol=vm /dev/sda2 /btrfs/vm
        6) Copy data to subvolume.
        7) Balance data across drives (optional): btrfs f bal <path>   (never get to this step 7...)

    Am I doing something wrong?
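
    Since the stack trace disappears with the frozen console, capturing it is the only way to tell whether this is a known btrfs bug in the Natty kernel. A hedged sketch of how one might grab it (the netconsole addresses and MAC are placeholders for your own machines):

        # After a reboot, check whether any of the oops made it to disk.
        dmesg | grep -i -A30 btrfs
        grep -i -B2 -A30 'call trace' /var/log/kern.log

        # If nothing was written, stream the next crash to another box first.
        # On the receiving machine (192.168.1.10 is a placeholder):
        nc -u -l 6666
        # On the btrfs machine, before repeating the copy:
        sudo modprobe netconsole netconsole=6665@192.168.1.20/eth0,6666@192.168.1.10/00:11:22:33:44:55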

    Read the article

  • Very long (>300s) request processing time on Apache Server serving static content from particular IP

    - by Ron Bieber
    We are running an Apache 2.2 server for a very large web site. Over the past few months we have had some users reporting slow response times, while others (including our resources, both on the internal network and our home networks) do not see any degradation in performance. After a ton of investigation, we finally found a "Deny from none" statement in our configuration that was causing reverse DNS lookups (which were timing out); fixing that solved the bulk of our issues, but we still have some customers that we are seeing in the Apache logs (using %D in the log format) with request processing times of 300s for images, CSS, JavaScript and other static content. We've checked all Deny / Allow statements for a recurrence of "none", as well as all other things we know of that would cause reverse DNS lookups (such as using "REMOTE_HOST" in rewrite rules, or using %a instead of %h in our log format configuration), and verified that HostnameLookups is set to "Off". As an aside, we've also validated that reverse DNS lookups for folks having this problem do not time out - so I'm fairly certain DNS is not an issue in this case. I've run out of ideas. Are there any Apache configuration scenarios that someone can point me to that I might be missing that would cause request times for static content to take so long only for certain users? Thank you in advance.
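
    One angle that may be worth checking (an interpretation of how %D works, not something established above): %D measures until the last byte is handed to the client, so stalled client connections rather than server-side processing can produce round numbers like 300s - especially if that matches the server Timeout. A small sketch for correlating this:

        # Does 300s simply match the configured timeout? (300 is a common default.)
        grep -Ri '^\s*Timeout' /etc/apache2/ /etc/httpd/ 2>/dev/null

        # Log the connection status next to %D: 'X' means the client connection
        # was aborted before the response completed.
        #   LogFormat "%h %l %u %t \"%r\" %>s %b %D %X" timing

        # Spot-check raw transfer time for one static object from outside
        # (URL is a placeholder):
        curl -o /dev/null -s -w 'dns:%{time_namelookup} connect:%{time_connect} total:%{time_total}\n' http://example.com/static/style.css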

    Read the article

  • MS Word 2010: Hide citation title when 2 publications by same first author from different years are in one citation block

    - by srunni
    I'm trying to hide the display of the titles for two publications by the same first author from different years that are in the same citation block. By default, the title is shown in citations when there are two publications by the same author in a given document. The easiest way to get around this is to right click on the citation, click "Edit Citation", and then suppress the title. However, the issue with this is that if there are 2 citations in 1 citation block (i.e., "(Smith, J., et al. 2010, Smith, J., et al. 2011)" rather than "(Smith, J., et al. 2010) (Smith, J., et al. 2011)"), then using that suppress option only suppresses the title for the first citation (in this case, the 2010 publication). OTOH, if I try to initially insert the publications in separate citation blocks, I can suppress the title in both citations, but I can't cut and paste one into the other's citation block. I can click "Cut" and the citation that was just cut disappears, but the "Paste" option is not available when my cursor is in the second citation block. Any ideas? Thanks!

    Read the article

  • Fully FOSS EMail solution

    - by Ravi
    I am looking at various FOSS options to build a robust email solution for a government-funded university. Commercial options are to be chosen only in the worst case scenario. Here are the requirements:

        Approx 1000-1500 users - Postfix or Exim? (Sendmail is out ;-))
        Mailing lists for different groups / need a web-based archive - Mailman? Sympa?
        Centralised identity store - OpenLDAP? Fedora 389DS?
        Secure IMAP only - no POP3 required - Courier? Dovecot? Cyrus?
        Anti-spam - SpamAssassin? What else?
        Calendaring - ??
        Webmail - good to have, not mandatory - needs to be very secure... so SquirrelMail is out ;-)?

    Other questions: What mailbox storage format to use, and where to store it - database or file system? Simple and effective HA options? Is there a web proxy equivalent to Squid in the mail server world? Software load balancers? CARP? Monitoring and alerting? Backup? The government wants to stimulate the local economy by buying hardware locally from whitebox vendors. Also, local consultants and university students will do the integration. We looked at out-of-the-box integrated solutions like Axigen, Zimbra and GMail, but each was ruled out in favour of a DIY approach, in the hope of full control over the data and avoiding vendor lock-in - which I thought was a smart thing to do. I wish more provincial governments in the developing world would think of this sort of initiative. As for OS: Debian or FreeBSD would be first preference. Commercial OSes need not apply. CentOS is a second-tier option...
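
    As a purely illustrative starting point, here is a sketch of assembling one common combination of the options listed above on Debian; the specific component choices are assumptions to show how the pieces fit together, not a recommendation:

        # Postfix + Dovecot (IMAP) + OpenLDAP + SpamAssassin + Mailman,
        # with amavisd-new/ClamAV as the usual filtering glue.
        apt-get install postfix postfix-ldap \
                        dovecot-imapd \
                        slapd ldap-utils \
                        spamassassin spamc \
                        mailman \
                        amavisd-new clamav-daemon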

    Read the article

  • What are the most important aspects to consider when choosing a SAN for a small office virtualizatio

    - by Prof. Moriarty
    I am in the process of consolidating 6 physical servers running 6 different operating system flavors (don't ask) into two identical physical servers (Dell PowerEdge 2900), using the free VMware ESXi 4.0 platform. We will install an iSCSI SAN over a 1GbE network, and store all virtual machine images on the SAN. Each physical server would run 3 VMs, and in the case of a physical server failure, we would manually switch over the other 3. These are all internal servers; while important, they can tolerate some amount of downtime (say <1h) to keep the cost and complexity associated with HA down. I now need to choose the SAN to be used for the setup, on a low budget. We currently have about 2TB of data, but of course I want to be able to grow, do backups of VM snapshots on other drives and remove them to a different location, etc. So what I would like to know is: Which are the must-have features for this setup, without which using a SAN is not worth it? We are mostly a Dell shop, so I have been looking at the EqualLogic PS4000E High Availability model. Any opinions, anecdotes, or bad experiences with this model? (This is one of the few models which could accommodate our existing disks from the physical servers.) If you can recommend something that is not Dell but has better value, I would most definitely consider it. Caveats, things to look out for?

    Read the article

  • hosts.deny not blocking ip addresses

    - by Jamie
    I have the following in my /etc/hosts.deny file:

        #
        # hosts.deny    This file describes the names of the hosts which are
        #               *not* allowed to use the local INET services, as decided
        #               by the '/usr/sbin/tcpd' server.
        #
        # The portmap line is redundant, but it is left to remind you that
        # the new secure portmap uses hosts.deny and hosts.allow. In particular
        # you should know that NFS uses portmap!
        ALL:ALL

    and this in /etc/hosts.allow:

        #
        # hosts.allow   This file describes the names of the hosts which are
        #               allowed to use the local INET services, as decided
        #               by the '/usr/sbin/tcpd' server.
        #
        ALL:xx.xx.xx.xx , xx.xx.xxx.xx , xx.xx.xxx.xxx , xx.x.xxx.xxx , xx.xxx.xxx.xxx

    but I am still getting lots of these emails:

        Time:     Thu Feb 10 13:39:55 2011 +0000
        IP:       202.119.208.220 (CN/China/-)
        Failures: 5 (sshd)
        Interval: 300 seconds
        Blocked:  Permanent Block

        Log entries:
        Feb 10 13:39:52 ds-103 sshd[12566]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=202.119.208.220 user=root
        Feb 10 13:39:52 ds-103 sshd[12567]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=202.119.208.220 user=root
        Feb 10 13:39:52 ds-103 sshd[12568]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=202.119.208.220 user=root
        Feb 10 13:39:52 ds-103 sshd[12571]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=202.119.208.220 user=root
        Feb 10 13:39:53 ds-103 sshd[12575]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=202.119.208.220 user=root

    What's worse is that csf is trying to auto-block these IPs when they attempt to get in, but although it does put the IPs in the csf.deny file they do not get blocked either. So I am trying to block all IPs with /etc/hosts.deny and allow only the IPs I use with /etc/hosts.allow, but so far it doesn't seem to work. Right now I'm having to manually block each one with iptables; I would rather it automatically blocked the hackers in case I was away from a PC or asleep.
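
    One detail worth checking first (a diagnostic sketch, not something taken from the question): hosts.deny only affects services that are actually built against tcp_wrappers (libwrap); if this sshd isn't, both files will simply be ignored for ssh.

        # Does sshd link against libwrap? No output here means tcp_wrappers
        # support is missing and hosts.allow/hosts.deny will never apply to ssh.
        ldd "$(which sshd)" | grep -i libwrap

        # While testing, a service-specific pair is easier to reason about than
        # ALL:ALL (203.0.113.x are example addresses):
        #   /etc/hosts.allow:  sshd: 203.0.113.10 203.0.113.11
        #   /etc/hosts.deny:   sshd: ALL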

    Read the article

  • placing shell script under systemd control

    - by Calvin Cheng
    Assuming I have a shell script like this:

        #!/bin/sh
        # cherrypy_server.sh

        PROCESSES=10
        THREADS=1       # threads per process
        BASE_PORT=3035  # the first port used

        # you need to make the PIDFILE dir and insure it has the right permissions
        PIDFILE="/var/run/cherrypy/myproject.pid"

        WORKDIR=`dirname "$0"`
        cd "$WORKDIR"

        cp_start_proc() {
            N=$1
            P=$(( $BASE_PORT + $N - 1 ))
            ./manage.py runcpserver daemonize=1 port=$P pidfile="$PIDFILE-$N" threads=$THREADS request_queue_size=0 verbose=0
        }

        cp_start() {
            for N in `seq 1 $PROCESSES`; do
                cp_start_proc $N
            done
        }

        cp_stop_proc() {
            N=$1
            #[ -f "$PIDFILE-$N" ] && kill `cat "$PIDFILE-$N"`
            [ -f "$PIDFILE-$N" ] && ./manage.py runcpserver pidfile="$PIDFILE-$N" stop
            rm -f "$PIDFILE-$N"
        }

        cp_stop() {
            for N in `seq 1 $PROCESSES`; do
                cp_stop_proc $N
            done
        }

        cp_restart_proc() {
            N=$1
            cp_stop_proc $N
            #sleep 1
            cp_start_proc $N
        }

        cp_restart() {
            for N in `seq 1 $PROCESSES`; do
                cp_restart_proc $N
            done
        }

        case "$1" in
            "start")   cp_start ;;
            "stop")    cp_stop ;;
            "restart") cp_restart ;;
            *)         "$@" ;;
        esac

    From the bash script, we can essentially do 3 things:

        start the CherryPy server by calling    ./cherrypy_server.sh start
        stop the CherryPy server by calling     ./cherrypy_server.sh stop
        restart the CherryPy server by calling  ./cherrypy_server.sh restart

    How would I place this shell script under systemd's control as a cherrypy.service file (with the obvious goal of having systemd start up the CherryPy server when a machine has been rebooted)? Reference systemd service file example here - https://wiki.archlinux.org/index.php/Systemd#Using_service_file
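
    A minimal sketch of what such a unit might look like, assuming the script lives at /opt/myproject/cherrypy_server.sh (path, user and service name are placeholders). Because the script spawns its own daemons and exits, the unit treats it as a one-shot wrapper rather than a Type=forking service:

        # /etc/systemd/system/cherrypy.service - sketch only
        [Unit]
        Description=CherryPy application servers
        After=network.target

        [Service]
        # The wrapper starts ten daemonized workers and returns, so keep the
        # unit marked active after ExecStart finishes.
        Type=oneshot
        RemainAfterExit=yes
        WorkingDirectory=/opt/myproject
        ExecStart=/opt/myproject/cherrypy_server.sh start
        ExecStop=/opt/myproject/cherrypy_server.sh stop
        ExecReload=/opt/myproject/cherrypy_server.sh restart

        [Install]
        WantedBy=multi-user.target

    After a systemctl daemon-reload, enabling the unit (systemctl enable cherrypy.service) should start it on boot; a longer-term alternative would be dropping the wrapper and running each manage.py instance in the foreground under its own templated unit, so systemd can supervise the individual processes.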

    Read the article

  • Can I autoregister my clients/servers in local DNS?

    - by Christian Wattengård
    Right now I have a W2k12 server at home that I run as a domain controller. This has the extra benefit of registering every "subordinate" computer's name in its DNS, so that I don't have to go around remembering IPs all the time. (And it lets me easily run DHCP on my servers as well.) I need to rework my home network for several odd reasons, and in this new scenario there is no place for a big honking W2k12 server box. I have a RasPi, and I have other smallish linux boxen I can use. (In a worst case scenario I'll use my NUC, but then I'll be forced to use my home cinema's UPnP client for media... The HORROR!!) Is it possible to set up a DNS-server-"appliance" that somehow autoregisters its clients' hostnames? Scenario:

        Router (N66u) on 172.20.20.1. Runs DHCP on the 172.20.20.100-200 range.
        Server [verdant] of a *nix flavor on 172.20.20.2
        Laptop [speedy] of W8 flavor, DHCP assigned
        Laptop [canary] of W8 flavor, DHCP assigned
        Desktop [lianyu] of Ubuntu flavor, DHCP assigned

    What I would like is that all of the above machines (except possibly the router) would be available as verdant.starling.lan, canary.starling.lan and so on. This is how it works right now (except the Ubuntu box... I haven't cracked that one yet) because Windows just does this for you. I would also like to do this without any manual labor on the server: when I tell my box its name is smoak, it should "immediately" be available as smoak.starling.lan without any extra configuration on my part. How can I do this in a Linux (Ubuntu) environment? (Bonus comment upvote for naming the naming scheme :P )
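
    One common way to get this behaviour on a small Linux box is dnsmasq, which answers DNS queries for whatever names its own DHCP clients announce. A sketch of /etc/dnsmasq.conf, under the assumption that the RasPi takes over both DHCP and DNS from the router (values mirror the scenario above):

        # /etc/dnsmasq.conf - sketch only
        domain=starling.lan              # domain appended to DHCP client names
        local=/starling.lan/             # answer this zone locally, never forward it
        expand-hosts                     # add the domain to plain hostnames
        dhcp-range=172.20.20.100,172.20.20.200,12h
        dhcp-authoritative
        # Static boxes that never use DHCP can be listed explicitly:
        address=/verdant.starling.lan/172.20.20.2

    The router's own DHCP server would need to be switched off so clients send their hostnames to dnsmasq instead.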

    Read the article
