Search Results

Search found 17406 results on 697 pages for 'option explicit'.


  • Troubleshooting iptables and configuring it to drop the priority of long-term connections

    - by intuited
    I'm somewhat familiar with the general concepts of iptables, and would like to learn it in more detail. I'm hoping that my learning experience can also be useful. The situation: I'm running dd-wrt on my router. Despite its purported QoS skills, I'm still seeing connection latency shoot up hugely whenever there's an ongoing HTTP connection, e.g. some large download. Under such conditions, it can take 10 seconds or more to load a basic webpage; sometimes the connections are dropped entirely. I've tried adjusting the parameters, dropping the allotted bandwidth for upload and download to well under my limit, but nothing seems to work. dd-wrt is configured to use HTB as the QoS algorithm; HFSC, although presented as an option, seems to cause the router to crash, and is rumoured to not actually work on any Linux system. I'd like to be able to troubleshoot this issue and hopefully improve the settings that dd-wrt is using, but I'm finding the learning curve a bit overwhelming. For starters, I'm not sure what HTB actually specifies: is this a set of iptables commands, or do some of those commands specify how HTB is to be used? I would like it to prioritize based on protocol the way it's already supposed to, and in addition I'd like it to drop the priority of connections which have a high total byte count, say over 400 KB. Tips on utilities that can be run under dd-wrt to get more info on what's going on in there are also appreciated. I've tried to get iftop to work but there were issues running curses. I'm leaning towards replacing dd-wrt with OpenWrt; comments on this strategy are also welcome. I suspect that I would be well advised to get a second router as a stand-in before trying that. It may be worth noting that my total bandwidth is pretty limited (256 kbit/s).
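
    One way to attack the "drop priority after N bytes" part is iptables' connbytes match, which can mark packets of connections that have passed a byte threshold so a tc filter pushes them into a lower-priority HTB class. A rough sketch (ppp0, the mark value 12 and class 1:30 are placeholders; dd-wrt's generated HTB class IDs will differ, and connbytes needs connection tracking available):

        # mark any connection that has moved more than ~400 KB in either direction
        iptables -t mangle -A POSTROUTING -o ppp0 -m connbytes \
            --connbytes 409600: --connbytes-dir both --connbytes-mode bytes \
            -j MARK --set-mark 12

        # steer marked traffic into a low-priority HTB leaf class
        tc filter add dev ppp0 parent 1:0 protocol ip prio 5 handle 12 fw flowid 1:30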

    Read the article

  • Automatically moving spam messages to a folder in Postfix

    - by cad
    Hi. My problem is that I want to automatically move spam messages to a folder, and I'm not sure how. I have a Linux box providing email access. The MTA is Postfix, IMAP is Courier, and as the webmail client I use SquirrelMail. To filter spam I use SpamAssassin, and it is working OK. SpamAssassin is rewriting subjects as [--- SPAM 14.3 ---] Viagra... It is also adding headers:
        X-Spam-Flag: YES
        X-Spam-Checker-Version: SpamAssassin 3.2.5 (2008-06-10) on xxxx
        X-Spam-Level: **************
        X-Spam-Status: Yes, score=14.3 required=2.0 tests=BAYES_99,
            DATE_IN_FUTURE_24_48,HTML_MESSAGE,MIME_HTML_ONLY,RCVD_IN_PBL,
            RCVD_IN_SORBS_WEB,RCVD_IN_XBL,RDNS_NONE,URIBL_RED,URIBL_SBL
            autolearn=no version=3.2.5
        X-Spam-Report:
            * 0.0 URIBL_RED Contains an URL listed in the URIBL redlist [URIs: myimg.de]
            * 3.5 BAYES_99 BODY: Bayesian spam probability is 99 to 100% [score: 1.0000]
            * 0.9 RCVD_IN_PBL RBL: Received via a relay in Spamhaus PBL [113.170.131.234 listed in zen.spamhaus.org]
            * 3.0 RCVD_IN_XBL RBL: Received via a relay in Spamhaus XBL
            * 0.6 RCVD_IN_SORBS_WEB RBL: SORBS: sender is a abuseable web server [113.170.131.234 listed in dnsbl.sorbs.net]
            * 3.2 DATE_IN_FUTURE_24_48 Date: is 24 to 48 hours after Received: date
            * 0.0 HTML_MESSAGE BODY: HTML included in message
            * 1.5 MIME_HTML_ONLY BODY: Message only has text/html MIME parts
            * 1.5 URIBL_SBL Contains an URL listed in the SBL blocklist [URIs: myimg.de]
            * 0.1 RDNS_NONE Delivered to trusted network by a host with no rDNS
    I want to automatically move spam messages to a folder. Ideally (not sure if this is possible) I would only move messages with a score of 5.0 or more to the folder; spam scoring between 2.0 and 5.0 I want kept in the Inbox. (I plan to switch autolearn on later.) After reading a lot on the procmail, Postfix and SpamAssassin sites and googling a lot (lots of outdated howtos), I found two solutions, but I'm not sure which is the best or if there is another one: put a rule in SquirrelMail (dirty solution?), or use procmail. Which is the best option? Do you have any updated howto about it? Thanks
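
    If procmail is the route taken, a minimal sketch follows (hedged: the Maildir path and folder name depend on how Courier is set up, and Postfix must hand local mail to procmail, e.g. mailbox_command = /usr/bin/procmail in main.cf). It keys on X-Spam-Level, which carries one asterisk per point of score, so five asterisks means a score of 5.0 or more:

        # ~/.procmailrc -- assumes Maildir delivery under ~/Maildir
        MAILDIR=$HOME/Maildir
        DEFAULT=$MAILDIR/

        # score >= 5.0: file into the Spam folder (create it first, e.g. maildirmake ~/Maildir/.Spam)
        :0
        * ^X-Spam-Level: \*\*\*\*\*
        .Spam/

        # everything else, including mail tagged between 2.0 and 5.0, falls through to the Inbox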

    Read the article

  • Linux Port 80 to redirect to a Windows box

    - by Richard Staehler
    I have 2 servers here at work. One is a Windows Server 2008 R2 box (for safety's sake, let's use 192.168.1.100) and the other is a Fedora 14 box (192.168.1.101). Currently when you hit our subdomain, x.test.com, our routers send it to the Fedora box, and since Apache is installed and listening on port 80, it displays the Fedora Apache test page. Obviously I don't use port 80 for much on this machine; however, I do run Nagios on it, and it's always nice to be able to access that from anywhere in the world. So when I want to access it, I just type x.test.com/nagios. Now here comes the dilemma... On the Windows R2 box, we recently installed a program that requires us to set up a web server using IIS7. Because of this application, I'm going to create a new subdomain called y.test.com, but since we only have 1 WAN/router, it will still get pointed to our Fedora box. That being said, it wants to use port 80 as well (or whatever port I damn well wish to assign it). So my question is: since our router is pointing to the Fedora 14 box (.101), and I want to make sure I can access Nagios from anywhere in the world, how do I tell Apache (httpd) to redirect port 80 to the other server (.100)? If that's not possible, what are my other options? I have rinetd installed on Fedora and have even tried the option "192.168.1.101 80 192.168.1.100 80", and it didn't seem to work "because port 80 was already bound". Thoughts? And thanks!
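
    One approach that keeps Nagios reachable is to leave Apache on the Fedora box answering port 80 and add a name-based virtual host that reverse-proxies y.test.com to the IIS machine. A rough sketch for Apache 2.2 (assumes mod_proxy and mod_proxy_http are loaded, and that both subdomains resolve to the same WAN address):

        NameVirtualHost *:80

        <VirtualHost *:80>
            ServerName x.test.com
            # existing site; /nagios keeps being served locally
        </VirtualHost>

        <VirtualHost *:80>
            ServerName y.test.com
            ProxyPreserveHost On
            ProxyPass        / http://192.168.1.100/
            ProxyPassReverse / http://192.168.1.100/
        </VirtualHost>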

    Read the article

  • Linux: Force fsck of a read-only mounted filesystem?

    - by Timothy Miller
    I'm developing for a headless embedded appliance, running CentOS 6.2. The user can connect a keyboard, but not a monitor, and a serial console would require opening the case, something we don't want the user to have to do. This all pretty much obviates the possibility of using a recovery USB drive to boot from, unless all it does is blindly reimage the hard drive. I would like to provide some recovery facilities, and I have written a tool that comes up on /dev/tty1 in place of getty to provide these functions. One such function is fsck. I have found out how to remount the root and other file systems read-only. Now that they are read-only, it should be safe to fsck them and then reboot. Unfortunately, fsck complains that the filesystems are mounted and refuses to do anything. How can I force fsck to run on a read-only mounted partition? Based on my research, this is going to have to be something obscure. "-f" just means to force repair of a clean (but unmounted) partition. I need to repair a clean or unclean mounted partition. From what I read, this is something "only experts" should do, but no one has bothered to explain how the experts do it. I'm hoping someone can reveal this to me. BTW, I've noticed that e2fsck 1.42.4 on Gentoo will let you fsck a mounted partition, even one mounted read-write, but it seems only to do so if fsck is run from a terminal, so it can ask the user if they're sure they want to do something so dangerous. I'm not sure if the CentOS version does the same thing, but it appears that fsck CAN repair a mounted partition, yet it flatly refuses to when not run from a terminal. One last-resort option is for me to compile my own hacked fsck, but I'm afraid I'll mess it up in some unexpected way. Thanks!
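
    One hedged workaround, since the appliance can reboot itself: rather than fscking the mounted filesystem in place, schedule a forced check for the next boot. On CentOS 6 the SysV rc.sysinit script honours a /forcefsck flag file and runs the check while root is still mounted read-only, which sidesteps the "filesystem is mounted" refusal:

        # create the flag while / is still read-write, then reboot
        touch /forcefsck
        reboot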

    Read the article

  • Fax server farm architecture

    - by Brian Postow
    I'm not sure this is the right forum for this, but it's not Stack Overflow, so... I'm trying to figure out an architecture to solve the following problem; maybe someone here can help. I have a T1 with 23 fax lines coming into the building, and a computer (Macintosh Xserve) running HylaFAX. If I had one POTS line, I'd be done. However, I have no idea how to get the T1 into the Mac. Options I've considered:
    1. Some sort of PCI T1 modem (does that exist?)
    2. Splitting the T1 into 23 POTS lines and then connecting 23 analog modems to the Mac, either via an external modem bank (do they still make those?) or via some sort of external PCI bank, which would allow me to use more than 2 four-port modem cards.
    3. Either the T1 or the split POTS lines going into some intermediate device that then transfers the images to the Mac over IP or USB.
    4. Really, any other option I can come up with.
    This has GOT to be a problem that someone has already solved, right?

    Read the article

  • Mac Management Without Permission and Security

    - by Bart Silverstrim
    I was going through some literature on managing OS X laptops and asked someone some questions about usage scenarios when using the MacBooks. I asked someone more knowledgeable than I am whether it was possible for my Mac to be taken over if I were visiting another site for a conference, or if I went on a Wi-Fi network at a local coffee house, with policies pushed from an OS X Server running Workgroup Manager (either legit for the site, or someone running a copy of OS X Server on hardware they have hidden somewhere on the network), which apparently can be set up to do things like limit my access to the Finder or impose other neat whiz-bang management features. He said that it is indeed possible: the management server would be assigned via DHCP, the OS X Server would assume my Mac is a guest and could hand out restrictions, and apparently my Mac will happily accept them without notifying me or giving me an option, unlike Windows, which I believe would need to be joined to a domain before it becomes "managed" by Active Directory. So my question is: as network admins and sysadmins with users traveling with MacBooks, is there a way to reasonably protect your users from having their machines hijacked without resorting to just turning off networking all the time? Or isn't this much of a security hazard? What threat does this pose to the road warriors in your businesses?

    Read the article

  • How to prevent nginx from locking files on a mounted Samba partition in CentOS 6

    - by Bruce Kirkpatrick
    I'm using nginx 1.3.8 inside a CentOS 6.3 VirtualBox 4.2.4 virtual machine. The system is running the latest software available via yum update. The host OS is Windows 7. The site files nginx serves are on a mounted Samba partition, which is a folder on the host Windows system: inside Linux, nginx paths refer to /home/vhosts, and this is mounted from D:\vhosts\ on Windows. The Samba partition is mounted as root with 777 privileges. /etc/fstab looks like this, but with the real IP, username and password:
        //hostip/vhosts /home/vhosts cifs username=username,password=SECRETPASSWORD,uid=root,gid=root,file_mode=0777,dir_mode=0777,rw,_netdev 0 0
    That is, Linux/nginx reads from the Windows share, and not the other way around. In /etc/samba/smb.conf I have tried to disable all Samba locking features, but it seems to have no effect even after rebooting the virtual machine:
        locking = no
        share modes = no
        oplocks = no
        level2 oplocks = no
        kernel oplocks = no
    I'm receiving "Access is denied" errors in Windows or Linux when attempting to overwrite a JavaScript file in Windows that has been accessed at least once by nginx. If I run "service nginx reload", the lock is removed and I'm able to save the file. That's why I think it is nginx causing the lock. The same problem occurs with directories; however, that may be a different issue not related to the use of Samba. I'm using Samba so that I can manage the source code outside of the virtual machine. Also note that after I run "service nginx reload", the file I'm editing is actually automatically deleted from the Windows host. SOLVED: I just reviewed my nginx.conf file. It appears the "open_file_cache" feature is what was causing the lock and the deleted files. When I set this option to "open_file_cache off;", my problem is resolved. I will repost this as the answer when the site allows me to do so.
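
    For reference, the directive the poster found lives in the http (or server) block of nginx.conf; a minimal sketch of the fix, with a hypothetical vhost root:

        http {
            # disable the open file descriptor cache so files on the CIFS-mounted
            # docroot are not held open (and locked) between requests
            open_file_cache off;

            server {
                listen 80;
                root   /home/vhosts/example;
            }
        }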

    Read the article

  • On Windows XP, Start > Run "My Documents" sometimes doesn't work

    - by Clayton Hughes
    On all of my home computers, I can enter "my documents" into the Start > Run prompt and the My Documents folder of the current profile will open up. What's more, I can continue typing subfolders, files, etc., auto-complete works, and it's smart and enjoyable. I can't check at the moment, but I'm almost positive entries like "My Pictures" and "My Music" also go to their correct folders. On my work computers, if I enter "my documents" into the Start > Run prompt, I get the following error: "Windows cannot find 'my'. Make sure you typed the name correctly, and then try again. To search for a file, click the Start Button, and then click Search." I can sort of circumvent this by creating a shortcut in my PATH named 'my' that points to the My Documents folder, but this doesn't solve the auto-complete problem (and it's otherwise imperfect, of course, because "my pictures" and "my music" all direct to the same place). A Google search doesn't provide much help, although it does turn up a poster in 2007 with this same question at another board: http://www.msfn.org/board/lofiversion/index.php/t124813.html (login required, but a Google cache is available here: http://preview.tinyurl.com/ygxhwwl). Is this just a limitation of networks belonging to a domain, or is there some way I can get this functionality back? My Documents does live in the standard place (C:\Documents and Settings\{username}\My Documents), not on a network drive or anything. It's probably worth adding that the computers are part of some freakish Novell domain thing, too. I'm not in IT here, so I'm not too up on the details. Thanks for any help/suggestions!
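
    A couple of Run-box spellings that may sidestep the broken lookup are worth trying (hedged: I believe the shell: names work on XP, but I can't verify them against a Novell-managed build):

        shell:Personal                 (opens the current profile's My Documents)
        %USERPROFILE%\My Documents     (same folder via the environment variable)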

    Read the article

  • Login to OS X Server User Account from Local Computer

    - by Brod Wilkinson
    I have OS X Server installed on a Mac mini. I've created several user accounts, one of which is account name: Bob, password: abc123. From the Mac mini's login screen I can choose "Server" (the main account), "Bob" (Bob's account) and "Other..."; from "Other..." I can input Bob's credentials and it will log me in. I also have a MacBook Air, and I would like to be able to select "Other..." from its login screen, input Bob's credentials and have it log in to Bob's account, or any other user account for that matter. My server is set up as private with the server address server.network.private. Following some googled instructions as well as Apple's own instructions, I have set up an Open Directory with username diradmin and password abc123, then on the MacBook Air gone into System Preferences > Users & Groups > Login Options, clicked Join next to Network Account Server, input my server (server.network.private) with the diradmin credentials, and it's connected. Great. I've also ticked Allow Network Users to Log In at Login Window and selected All Users. I was assuming this would allow my MacBook Air to log in to the "Bob" account by selecting "Other..." from the login window, although there is no "Other..." option. I then set up a VPN with basic credentials and logged into it on the MacBook Air, and still not much has changed. I am able to share screens with the "Bob" account from my MacBook Air by clicking Share Screen... from the Finder under Shared > Network Server and then logging in, but this obviously requires the MacBook Air to already be logged into an account before it can share screens, which is not suitable. Is there any way to simply log in to an OS X Server user account from the MacBook Air's login screen via "Other...", like it works on the Mac mini's login screen? Thanks in advance. Operating system: OS X 10.9 Mavericks; OS X Server version 3.

    Read the article

  • Ubuntu 13.10 on Acer V5-472 with HD 4000

    - by Hyperboreus
    I have an Acer V5-472 with the Intel HD 4000 graphics chipset and a built-in 1366x768 display. I have installed Ubuntu 13.10 amd64 in legacy boot mode with an external monitor. Installation showed no problems; I can boot from the HDD and log into my system. The internal display doesn't work, though, and I have to use an external monitor. I have tried the following (found in other threads) to no avail: setting the grub options "acpi_backlight=vendor" or "acpi_osi=Linux" or both; installing the Intel HD drivers for Linux from their homepage; running in circles, screaming and shouting. The internal display lights up (I can change the brightness with Fn-Left and Fn-Right), but that's all. When I boot, I get a purple splash screen, and from then on only the external monitor works. I read somewhere that this might be a problem with kernel 3.11? Does anybody have Ubuntu running on an Acer V5-472? Should I change Ubuntu version or use 32-bit instead? In general, how can I get the internal display to work? Edit: the Settings > Displays dialogue shows the internal display correctly, with a supported resolution of 1366x768.
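
    If the panel really is detected (the Displays dialog suggests it is), it may be worth probing it from a terminal with xrandr; a rough sketch (the output name eDP1 is a guess, check the listing from the first command, it may appear as LVDS1 or eDP-1):

        xrandr                                 # list outputs and their modes
        xrandr --output eDP1 --mode 1366x768   # try to enable the internal panel explicitly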

    Read the article

  • Dynamically updating DNS records with NSUPDATE fails

    - by Thuy
    I've got my own nameserver, ns3.epnddns.com, and the domain epnddns.com. I wanted to try updating the records dynamically from home using nsupdate, but when I run nsupdate -k Kwww.epnddns.com.+157+17183.key I get the following errors:
        Kwww.epnddns.com.+157+17183.key:1: unknown option 'www.epnddns.com.'
        Kwww.epnddns.com.+157+17183.key:2: unexpected token near end of the file
        Kwww.epnddns.com.+157+17183.{private,key}: unexpected token
    Not sure why I get these errors, so I'll post my complete setup. I generated the keys on my home PC using
        dnssec-keygen -a HMAC-MD5 -b 128 -n HOST www.epnddns.com.
    then created /var/named/, put the keys there and chmodded them to 600. I transferred the keys to my nameserver ns3.epnddns.com, created /var/named/ there as well, put the keys in it and chmodded them to 600. I made dnskey.conf in /var/named and added:
        key www.epnddns.com. {
            algorithm hmac-md5;
            secret "my secret from the keys==";
        };
    (chmodded to 600). Then in /etc/bind/named.conf.local:
        include "/var/named/dnskeys.conf";
        zone "epnddns.com" {
            type master;
            file "/etc/bind/zones/epnddns.com.zone";
            allow-transfer { myhomeip; };  // my home IP, so not in the same network
            allow-update { key www.epnddns.com.; };
        };
    I restarted BIND without any error messages, so it seems to be working on the nameserver at least. But on my home PC, when I try to run nsupdate, I get those error messages. Thanks in advance for any help or insightful advice.
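
    For what it's worth, nsupdate usually wants the .private half of the generated key pair (or a named.conf-style key file) after -k; the errors above look like nsupdate trying to parse the .key file as a named.conf key statement. A sketch of a full update run (203.0.113.10 is a placeholder address):

        nsupdate -k Kwww.epnddns.com.+157+17183.private <<'EOF'
        server ns3.epnddns.com
        zone epnddns.com
        update delete www.epnddns.com. A
        update add www.epnddns.com. 300 A 203.0.113.10
        send
        EOF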

    Read the article

  • How to set up ProxMox 1.9 on VPN?

    - by Gnudiff
    Disclaimer: I have only rudimentary knowledge of VPNs. I would love to learn about them properly; however, at the moment I really need to make stuff work on short notice. I am trying to set up a Proxmox virtualization platform in an existing network. The network currently consists of several servers running the free edition of VMware. There is some sort of VPN defined in the switch. In order for the VMware management interface to be accessible, a checkbox for the VPN has to be ticked in the network settings and the VPN id entered. I didn't notice any such configuration option during the Proxmox installation, so my Proxmox VE on the same physical server, using the same manual IP settings (IP/netmask/gateway), is not accessible. As I understand it, I should edit Proxmox's underlying Debian config in /etc/network/interfaces, but I have no idea what to aim for: do I specify the settings for eth0, or do I make a virtual interface? How do I make it accessible for both the Proxmox VE host and the future VMs? I read the Proxmox installation guide, but unfortunately it presumes a better understanding of VPNs than I have. A config template or similar would be appreciated. Thanks in advance.
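
    If the "VPN id" configured in the switch is actually an 802.1Q VLAN tag (which is what the equivalent VMware port-group setting usually is), the Proxmox/Debian counterpart is to put the management bridge on a tagged sub-interface in /etc/network/interfaces; a sketch with placeholder addresses and VLAN 100 (may require the vlan package / 8021q module):

        auto eth0
        iface eth0 inet manual

        auto vmbr0
        iface vmbr0 inet static
            address 192.168.10.5
            netmask 255.255.255.0
            gateway 192.168.10.1
            bridge_ports eth0.100   # .100 = the VLAN tag the switch expects
            bridge_stp off
            bridge_fd 0

    VMs attached to vmbr0 then ride the same VLAN as the host's management interface.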

    Read the article

  • Formatting an HP ProLiant dl380 G4

    - by i.h4d35
    I have an old HP ProLiant DL380 G4 server whose hard disks need to be formatted. Unfortunately, I cannot seem to do so. For one, it doesn't seem to be detecting any hard drives attached to the server, although the disks do show up in the Ctrl+A option (the SCSI Configuration Utility). Also, when booting the SmartStart CD (7.01 or 7.04), it shows a message that no logical drives are found, and there aren't any options to create one. Alternatively, I've tried slipstreaming the SCSI driver into the OS, but that doesn't seem to help either. I have a USB floppy drive (for the SCSI driver), but it doesn't seem to be detected. Directly installing the OS (MS Server 2003 Standard Edition) obviously doesn't work either (it reports that no hard disk was found). Could anyone please advise what else needs to be done to format this server's disks? Also, please point out the possible errors/mistakes I've made so that I can learn from them. I went through some questions here on Server Fault and the HP guides, but they weren't of much help to a n00b like me. Thanks in advance.

    Read the article

  • RRAS VPN on Windows 2k3 AD, can access RRAS server only

    - by nopsax
    I'm setting up a test lab and here is the current configuration:
        192.168.86.201 - a Windows 2003 machine acting as PDC with AD/DNS/DHCP/WINS
        192.168.86.62  - a Windows 2003 machine that is the RRAS server with IAS, also a file/print server
        192.168.86.6   - gateway/router to the internet
        192.168.86.21  - a Windows XP workstation
    Everything works on the internal network: file/print/AD etc. Whenever a user connects via VPN to the RRAS server remotely using their domain credentials, they are assigned an IP address from the 192.168.86.201 machine along with the WINS server address etc. The VPN user can then ping/access resources on the RRAS server, but cannot ping/access resources of any other machines by name or IP. However, if I ping by name, it does resolve to the correct IP address, just with no replies. I did notice that on the RRAS server the 'internal' interface gets an IP address of 192.168.86.75 when a remote user connects, and the remote user is assigned, for example, 192.168.86.71. The RRAS server responds on both the .62 and .75 IP addresses. The client also unchecks the 'use remote default gateway' option. Also, I tried connecting a laptop to the physical network, joining the domain, then going remote and dialing the connection before domain login, and everything seems to work, e.g. browsable shares via Network Neighborhood. But I can't really join the domain remotely if I cannot access any other resources. I really need to monitor traffic to see what's happening to those packets, but I won't be able to until this weekend. Any help is appreciated; I'll provide whatever configurations are needed.

    Read the article

  • Paused BitLocker encryption because I no longer want to encrypt the hard drive - how do I get out of encrypting?

    - by Matthew Nagear
    I'm hoping someone can help. I'm not a techie and can't seem to find the same question already answered. I went to encrypt a Buffalo hard drive using BitLocker, but after it took about a minute to reach 0.2% encrypted I decided to pause and eject, thinking this would end the encryption process. However, I had already assigned a password to it when instructed, before the encryption began. I've since connected the hard drive again and I'm asked to input the password. If I don't, I cannot access the drive. If I do, I CAN access the files, but straight away the little BitLocker Drive Encryption dialogue box comes up saying "Encryption in progress...", which I have the option to pause. So I hit Pause straight away, as I do not want to go through with the encryption. Is there any way I can stop the process and decrypt before encryption is completed? Or am I forever going to need to pause quickly and eject? I fear that will affect my hard drive in the longer term. Thanks.
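
    If the goal is to back out of BitLocker entirely, the command-line tool can turn protection off and decrypt what has been written so far; a sketch from an elevated command prompt (assuming the Buffalo drive shows up as E:):

        rem confirm the drive letter and the current percentage encrypted
        manage-bde -status E:

        rem turn BitLocker off and decrypt the volume in place
        manage-bde -off E:

    Re-running the -status command shows the decryption progress until the volume reports as fully decrypted.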

    Read the article

  • "ImportError: No module named flask" - Trouble with nginx + uWSGI + Flask in a virtualenv setup

    - by vjk2005
    I got nginx + uWSGI running on localhost inside a virtualenv with a simple hello world program, but I get this error when I replace the hello world with a simple Flask app:
        File "./wsgi_configuration_module.py", line 1, in <module>
            from flask import Flask
        ImportError: No module named flask
        unable to load app mountpoint
    Here's the Flask app (wsgi_configuration_module.py):
        from flask import Flask

        application = Flask(__name__)

        @application.route("/")
        def hello():
            return "hello world"

        if __name__ == "__main__":
            application.run()
    uWSGI config (app_conf.xml):
        <uwsgi>
            <socket>127.0.0.1:9001</socket>
            <chdir>/srv/www/labs/application</chdir>
            <pythonpath>/srv/www</pythonpath>
            <module>wsgi_configuration_module</module>
            <callable>application</callable>
            <no-site>true</no-site>
        </uwsgi>
    nginx config:
        server {
            listen 80;
            server_name localhost;
            access_log /srv/www/labs/logs/access.log;
            error_log /srv/www/labs/logs/error.log;
            location / {
                include uwsgi_params;
                uwsgi_pass 127.0.0.1:9001;
            }
            location /static {
                root /srv/www/labs/public_html/static/;
                index index.html index.htm;
            }
        }
    The virtualenv is stored in ~/virtual_env, with Python 2.7 + nginx + uWSGI + Flask installed in an environment called "basic". Things I've tried to solve this: setting the --home (-H) option to my virtualenv folder ~/virtual_env while running uWSGI. Other info: I have the same setup working outside of a virtualenv. Things go wrong only when I try to replicate the setup inside of a virtualenv. Where have I gone wrong?
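
    Pointing uWSGI at the virtualenv usually fixes this import error; in the XML config that is the home (a.k.a. virtualenv) option, and note it should point at the specific environment ("basic"), not at the parent ~/virtual_env directory. It may also be necessary to drop the no-site option so the environment's site-packages directory is actually added. A sketch (the absolute path is a guess based on the description):

        <uwsgi>
            <socket>127.0.0.1:9001</socket>
            <chdir>/srv/www/labs/application</chdir>
            <pythonpath>/srv/www</pythonpath>
            <module>wsgi_configuration_module</module>
            <callable>application</callable>
            <home>/home/youruser/virtual_env/basic</home>
        </uwsgi>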

    Read the article

  • ext4 filesystem corruption -- maybe hardware error?

    - by pts
    I'm getting these errors in dmesg about half an hour after I turn on the computer:
        [ 1355.677957] EXT4-fs error (device sda2): htree_dirblock_to_tree: inode #1318420: (comm updatedb.mlocat) bad entry in directory: directory entry across blocks - block=5251700 offset=0(0), inode=1802725748, rec_len=179136, name_len=32
        [ 1355.677973] Aborting journal on device sda2-8.
        [ 1355.678101] EXT4-fs (sda2): Remounting filesystem read-only
        [ 1355.690144] EXT4-fs error (device sda2): htree_dirblock_to_tree: inode #1318416: (comm updatedb.mlocat) bad entry in directory: directory entry across blocks - block=5251699 offset=0(0), inode=2194783952, rec_len=53280, name_len=152
        [ 1356.864720] EXT4-fs error (device sda2): htree_dirblock_to_tree: inode #1312795: (comm updatedb.mlocat) bad entry in directory: directory entry across blocks - block=5251176 offset=1460(13748), inode=1432317541, rec_len=208208, name_len=119
    /dev/sda is an SSD, and it's using the noop scheduler. The /etc/fstab entry:
        UUID=acb4eefa-48ff-4ee1-bb5f-2dccce7d011f / ext4 errors=remount-ro,noatime,discard,user_xattr 0 1
    System information:
        $ cat /proc/mounts | grep /dev/sd
        /dev/sda1 /boot ext2 rw,noatime,errors=continue 0 0
        $ cat /etc/lsb-release
        DISTRIB_ID=Ubuntu
        DISTRIB_RELEASE=10.04
        DISTRIB_CODENAME=lucid
        DISTRIB_DESCRIPTION="Ubuntu 10.04.3 LTS"
        $ uname -a
        Linux leetpad 2.6.35-30-generic-pae #61~lucid1-Ubuntu SMP Thu Oct 13 21:14:29 UTC 2011 i686 GNU/Linux
    I've run memtest for 7 hours and it didn't find any memory errors. Any obvious ideas of what can go wrong in this case? The most reasonable thing I can imagine is that the SSD is silently dropping some write requests, which eventually leads to an ext4 filesystem inconsistency (but no disk I/O errors). How can this happen? Is there a relevant configuration option I should ensure is set correctly? What tools should I use to diagnose the hardware failure? Would it be possible to diagnose the SSD failure without overwriting data?
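
    For the "what tools" part, SMART data and a drive self-test are the usual first pass at the SSD, and a read-only e2fsck from a rescue/live boot can confirm the extent of the damage without writing anything; a sketch (smartctl is in the smartmontools package):

        smartctl -a /dev/sda        # overall health plus error/reallocation counters
        smartctl -t long /dev/sda   # start an extended self-test; read the result later with -a
        e2fsck -fn /dev/sda2        # from a rescue boot, sda2 unmounted: check only, change nothing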

    Read the article

  • Synchronization between folders on Mac OS X Lion

    - by Andre Carvalho
    I have an iMac at home and I use a MacBook Pro for work. I also have a Time Capsule at home containing my main folder with my main files; I use it as a NAS besides the Time Machine backup tool. I have several personal files I need to access both at home and at work. My wife, who works at home, sometimes uses the same .XLS and .DOC files that I might have used during my day at work, away from home. My question is: is there software, or a tool, that I can use to sync my iMac and my MacBook Pro folders? Bearing in mind that: there is a chance that my wife and I have changed the same file during the day, so the files would have to be merged so that no information added by either of us is lost; the software/tool installed on the MacBook Pro would need to mount the Time Capsule volume so it could locate the main folder on it; and it has to happen automatically when my MacBook is at home (with a schedule option). I have tested some software like SyncTwoFolders and ChronoSync, but none fulfilled all my needs. The first couldn't mount the Time Capsule volume and didn't have many schedule options. I really liked ChronoSync, but it doesn't merge files: when it detects a conflict (for instance, my wife changed a .DOC file on the iMac and I changed the same file on the MacBook), it asks you to choose which version you want to keep instead of allowing you to simply merge them. I don't have much experience with Automator or scripts, but maybe you can give me a hand with that.

    Read the article

  • The physical working paradigm of a signal passing on a wire

    - by smwikipedia
    Hi, this may be more a question of physics, so pardon me if there's any inconvenience. When I study computer networks, I often read something like this: in order to represent a signal, we place some voltage on one end of the wire and the other end detects the voltage and thus the signal. So I am wondering how a signal exactly passes through a wire. Here's my current understanding, based on my formal knowledge of electronics: first we need a closed circuit to constrain/hold the electric field. When we place a voltage at some point A of the circuit, an electric field starts to build up within the circuit medium; this process should be about as fast as the speed of light. As the electric field builds up, the electrons within the circuit medium are moved, and thus an electric current occurs, and once that current is strong enough to be detected at some other point B on the complete circuit, then B knows about what has happened at A, and thus communication between A and B is achieved. The above only describes the process of sending a single voltage through the wire. If there's a bitstream and we need to send a series of voltages, I am not sure which of the following is true:
    1. The 2nd voltage should only be sent from A after the 1st voltage has been detected at B; the interval is the time needed to stimulate the electric field in the medium and form a detectable current at B.
    2. Several different voltages could be sent on the wire one after another, so that different current values exist along the wire simultaneously and arrive at B successively.
    I hope I made myself clear and that someone else has pondered this question. (I tag this question with network because I don't know if there's a better option.) Thanks, Sam

    Read the article

  • Two instances of Windows Vista on boot up after failed clean install

    - by Dwayne
    I tried to install a clean version of Vista but failed. I ended up with Windows and Windows.old folders on my C: drive and a dual-boot option at startup. I gave up, booted the old version, and tried to rename Windows.old to Windows; I was asked if I wanted to merge the two folders and answered yes. All seemed OK until I booted up this morning and was given the choice of two versions of Vista: the first is the installation that failed, and the second is the old version. How can I get rid of the failed installation? I got rid of the bad boot entry via MSCONFIG. Here is my current situation: several hard drives installed; C: is my boot drive; a much larger drive (H:) stores most of my files. I found a subfolder in my C:\Windows folder named Windows. Upon inspection I determined it to be older than the C:\Windows folder itself, so it must be the older, working installation. I renamed the C:\Windows folder to C:\Windows.bad and moved the Windows subfolder to the C: root directory. I also copied it to the H: drive. Now MSCONFIG reports that the copy that is booting is the H: copy. How can I change it back to the C: copy, and can I delete the C:\Windows.bad file set?
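
    For the leftover boot entry specifically, the Vista boot store can also be inspected and cleaned with bcdedit from an elevated command prompt; a sketch (the GUID below is a placeholder copied from the /enum output):

        rem list all boot entries, their identifiers, and which partition/\Windows they point at
        bcdedit /enum

        rem delete the entry belonging to the failed installation
        bcdedit /delete {00000000-0000-0000-0000-000000000000}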

    Read the article

  • Windows 7: Image thumbnails fail to appear

    - by Fopedush
    All right, I've got a pretty strange one here. Since I installed Windows 7 on this machine some time ago, image thumbnails have never worked properly. For the vast majority of images, they completely fail to show up, showing the icon of the default image viewing application instead. Please note that the "Always show icons, never thumbnails" option in Folder Options is not checked. I've taken a screenshot demonstrating the problem (screenshot omitted here). Sometimes a few image thumbnails will show up correctly, maybe about one in ten, with the rest failing as well (second screenshot also omitted). Windows Explorer does not appear to make any effort to populate the missing thumbnails, and there is no appreciable CPU usage. I can leave a window with missing thumbnails open all night and they will still never appear. Newly created images never generate thumbnails; only images that have been on the system since day one will occasionally show them. This leads me to believe that Explorer is set to show thumbnails, but whatever process is supposed to be in charge of actually generating them has failed somehow. In previous versions of Windows, explorer.exe itself was responsible for thumbnail generation - has this changed? Any suggestions at all, even if you aren't sure they will work, are welcome. I'd hate to have to wipe and reinstall this machine for such a minor annoyance.
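
    One low-risk thing to try before a reinstall is resetting the per-user thumbnail cache, which Windows 7 keeps in thumbcache_*.db files; a sketch from a command prompt (this assumes the cache is merely corrupted rather than the generator itself being broken):

        rem stop Explorer so the cache files are not held open
        taskkill /f /im explorer.exe

        rem delete the thumbnail cache databases for the current user
        del /f /q "%LocalAppData%\Microsoft\Windows\Explorer\thumbcache_*.db"

        rem restart the shell; thumbnails should be regenerated on the next browse
        start explorer.exe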

    Read the article

  • Easiest way to send encrypted email?

    - by johnnyb10
    To comply with Massachusetts's new personal information protection law, my company needs to (among other things) ensure that anytime personal information is sent via email, it's encrypted. What is the easiest way to do this? Basically, I'm looking for something that will require the least amount of effort on the part of the recipient. If at all possible, I really want to avoid them having to download a program or go through any steps to generate a key pair, etc. So command-line GPG-type stuff is not an option. We use Exchange Server and Outlook 2007 as our email system. Is there a program that we can use to easily encrypt an email and then fax or call the recipient with a key? (Or maybe our email can include a link to our website containing our public key, that the recipient can download to decrypt the mail?) We won't have to send many of these encrypted emails, but the people who will be sending them will not be particularly technical, so I want it to be as easy as possible. Any recs for good programs would be great. Thanks.

    Read the article

  • Umount stale glusterfs partition

    - by Khaled
    I am using GlusterFS on several Ubuntu servers; two of them are running GlusterFS servers in replication mode. Without any clear error, the GlusterFS partition became stale, and the system shows this error when I try to access the stale mount:
        Transport endpoint is not connected
    Also, when running ls -l on the parent folder I get:
        d????????? ? ? ? ? ? myfolder
    I tried all the commands I could find to unmount this partition, but could not get it done:
        umount -l /path/to/mount/point
        umount -f /path/to/mount/point
    Using the fuser command to show processes accessing this folder did not work either. Unloading the fuse kernel module cannot be done, as it is clear from the kernel config that fuse is built into the kernel and not a loadable module; I found this line in /boot/config-2.6.32-24-server:
        CONFIG_FUSE_FS=y
    I have been left with two options:
    1. Reboot the system.
    2. Create another mount point like myfolder2 and mount it again using sudo glusterfs -f /etc/glustefs/glusterfs.vol /path/to/folder2.
    Of course, I have chosen to go with option 2. Has anyone faced such an issue before? Does anyone have a better solution for such a case?
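
    Since the GlusterFS client mount is FUSE-based, one more thing that is sometimes worth trying before a reboot is a lazy unmount at the FUSE level; a one-line sketch:

        fusermount -u -z /path/to/mount/point   # -z = lazy: detach now, clean up once busy handles close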

    Read the article

  • ISA 2006 SP1 - SSL Client Certificate Authentication in Workgroup Environment

    - by JoshODBrown
    We have an IIS6 website that was previously published using an ISA 2006 SP1 standard server publishing rule. In IIS we had required that a client certificate be provided before the website could be accessed... this all worked fine and dandy. Now we wish to use a web publishing rule on ISA 2006 SP1 for this same website. However, it seems the client certificate doesn't get processed now, so of course the user can't access the website. I've read a few articles stating that the CA for the certificate needs to be installed in the trusted root certificate authorities store on the ISA server (I have done this), as well as installing the client certificate on the ISA server (done as well). I have also verified that the ISA server is able to access the CRL for our CA, no problem... In the listener properties for the web publishing rule, under Authentication and Client Authentication Method, there is an option for SSL Client Certificate Authentication. I select this, but it appears the only selectable Authentication Validation Method is Windows (Active Directory)... and there is no Active Directory in this environment. When I configure the rule with the defaults and then try to hit my website, it prompts for my certificate; I choose it and hit OK, and then I'm given the following error:
        Error Code: 500 Internal Server Error. The server denied the specified Uniform Resource Locator (URL). Contact the server administrator. (12202)
    I check the event logs on the ISA server, and in the Security log I see Event ID 536, Failure Audit, with the reason: the NetLogon component is not active. I think this is pretty obvious, since there is no Active Directory available. Is there a way to make this web publishing rule work using client certificates in this workgroup environment? Any suggestions or links to helpful documents would be greatly appreciated!

    Read the article

  • What determines what resolutions a laptop is willing to output over VGA?

    - by Joshua McKinnon
    I'm responsible for several conference rooms and have set up 1080p projectors with both HDMI and VGA connectivity: HDMI for DisplayPort and Mini DisplayPort, and VGA as a fallback, universal option. Contrary to what I expected, people seem to have much more trouble with HDMI than VGA, so VGA gets used a lot more than you'd think (even though most workstation laptops made in the last 3-4 years have DisplayPort or Mini DisplayPort...). Also to my surprise, VGA carries 1080p over a 50 ft cable run with very minimal degradation on certain laptops; other laptops just don't offer 1080p as a resolution choice over VGA and top out at 1600x1200 or something else. Specific example: a ThinkPad W530 will do 1080p over VGA, a W520 won't (both do 1080p over DisplayPort/Mini-DP). What determines what resolutions a laptop is willing to output over VGA? I'm thinking this comes down to either a video driver that only claims support for certain output resolutions, or limitations of the RAMDAC (which wouldn't be in play, at least DAC-wise, on a digital output, but WOULD be on VGA, an analog output). The basic reason for the question is that I noticed that, say, a ThinkPad W520 with a built-in 1080p display will output 1080p fine over DisplayPort to a 1080p projector, but will cap out at 1600x1200 (practically the same pixel count, just a little shy) on VGA. Now, this wouldn't be surprising at all except that SOME laptops have no issue outputting 1080p over VGA, even with lower native resolutions. Why do I care? Well, if there's some way I could enable it... for situations where my users end up using VGA anyway, it's preferable for display mirroring if they can output their laptop's native resolution, which, you guessed it, is very often 1080p on 15" models. DISCLAIMER: This is primarily a curiosity; I'm not claiming 1080p over VGA is ideal by any means, but hey, if it works. I've seen HDMI start artifacting more over same-length, same-gauge cabling (up to 50' runs in certain rooms). If you think this is better suited to Super User, please move it, but this is framed from an IT standpoint of something that affects a real pool of users in a multiple-conference-room, 50+ deployed laptop scenario.

    Read the article
