Search Results

Search found 27238 results on 1090 pages for 'local variable'.

  • backup an existing linux server to a virtualbox virtual machine

    - by user146526
    I have some servers and VPSs with many companies across the world. I want to back them up locally. I have some backup solutions enabled to remote hosts, but I want to have a local backup on a computer at home. What I am thinking is: 1) Create a VirtualBox virtual machine and install the same version of Linux as the server. 2) Use rsync to back up the server to the local VirtualBox machine (something like rsync -av --delete --progress --exclude '/dev/' --exclude '/proc/' root@server_ip:// / — a fuller sketch follows below). 3) Repeat the command every few days to update the files. 4) In case of a hard disk failure, or any other bad event, reverse the rsync command, get the files back, and continue my business. I tried it with 2 OpenVZ VPSs, where the one was a backup of the other. I also tried to transfer a normal Linux server host to an OpenVZ machine and it worked great. That way looks pretty clean and easy to me; this is the kind of solution I am looking for. However, I need to be sure that this will work if I am going to do it. The question is: will that work OK? Does anyone see any problem with that? Do you have any other suggestions? Thanks
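
    A minimal sketch of step 2, assuming the usual pseudo-filesystem excludes; the host, destination, and exclude list are placeholders to adapt:

        #!/bin/sh
        # Pull the remote root filesystem into the local VM, skipping
        # pseudo-filesystems that should never be copied.
        rsync -aAXv --delete --progress \
            --exclude='/dev/*' --exclude='/proc/*' --exclude='/sys/*' \
            --exclude='/run/*' --exclude='/tmp/*' \
            root@server_ip:/ /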

  • credit or minclass does not work well with pam_cracklib.so in common-password (openSUSE 11.3)

    - by Mario
    I'm trying to implement password complexity on my PDC. It's a Samba PDC with an OpenLDAP backend. I tried cracklib-check, but it looks like I would need a decent, localized version of the password dictionary, since the library out there usually comes in English. I also have another consideration: we will allow users to use any kind of password, even dictionary-based ones, as long as their passwords mix lower/upper alphabet, digits, and other characters such as '$' or '_' (pam_cracklib.so calls these classes). So here is my /etc/pam.d/common-password:

        #password requisite pam_pwcheck.so nullok cracklib
        password requisite pam_cracklib.so minclass=4 reject_username
        ##password requisite pam_cracklib.so \
        ##    dcredit=-1 ucredit=-1 lcredit=-1 ocredit=-1 reject_username
        password optional pam_gnome_keyring.so use_autht_ok
        password required pam_unix2.so use_authtok nullok

    The first commented line (with #) was the default configuration of openSUSE 11.3. The 2nd/3rd lines (with leading ##) are another configuration I use when the minclass=4 line is commented out. By the way, I have 'check password script' = /usr/local/sbin/crackcheck -d /usr/share/cracklib/pw_dict and passdb backend = ldapsam:ldap://127.0.0.1 in smb.conf, and cracklib-check works fine too. So here is the test I conduct: I log on to Windows and then change my password. Sometimes it works fine, in that it throws an error message — which is what I wanted — but a simple password with only lowercase letters can pass the Windows password change. Maybe I should build a new dictionary that incorporates local vocabulary, but a guy out there (raise your hand please if you read this :) ) also experienced the same trouble with English words. Besides, what we really want is to let users choose passwords covering 2 or 3 of the 4 classes. Is there a bug or something with the pam module in openSUSE 11.3? Thank you in advance. Regards, Mario
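
    For reference, a hedged example of a single pam_cracklib line in the minclass style; this one would demand characters from at least three of the four classes (exact option behavior varies by pam_cracklib version):

        password requisite pam_cracklib.so retry=3 minlen=8 minclass=3 reject_username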

  • Mounting fuse sshfs fails when invoked by Cron on FreeBSD 9.0

    - by Tal
    I have a remote server filesystem that I'm attempting to mount locally on a FreeBSD 9 machine via FUSE sshfs and cron, for a backup routine. I have SSH keys between the boxes set up to allow passwordless login as the root user on the local machine. Cron is set to run the following script (in root's crontab):

        #!/bin/sh
        echo "Mounting Share"
        /usr/local/bin/sshfs -C -o reconnect -o idmap=user -o workaround=all \
            <remote user>@<remote domain>.com: /mnt/remote_server

    As root, I can run this script on the command line without issue, and without being asked for a password the share mounts successfully. Yet, when run by cron, the script fails. The path to sshfs is identical to the value of 'which sshfs'. Here is the email root receives from the cron daemon:

        X-Cron-Env: <SHELL=/bin/sh>
        X-Cron-Env: <HOME=/root>
        X-Cron-Env: <PATH=/usr/bin:/bin>
        X-Cron-Env: <LOGNAME=root>
        X-Cron-Env: <USER=root>

        Mounting Share
        fuse: failed to exec mount program: No such file or directory
        fuse: failed to mount file system: No such file or directory

    I'm stumped as to why I'm receiving "No such file or directory" in this instance. It seems further odd given that the paths appear to be correct. I've also attempted to compare the output of env on the shell with env inserted into the script; I don't see any environment variables that should cause this trouble. At bootup, FUSE reports its version as: fuse4bsd: version 0.3.9-pre1, FUSE ABI 7.8. Help me ServerFault wizards, you're my only hope!
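
    One hedged possibility, given the stripped-down PATH in the cron mail: sshfs execs an external mount helper (mount_fusefs on FreeBSD), and cron's PATH=/usr/bin:/bin would not cover it if it lives in /sbin or /usr/local/sbin (an assumption worth checking with 'which mount_fusefs'). A sketch of the script with an explicit PATH:

        #!/bin/sh
        # Give cron the PATH an interactive root shell would have, so the
        # FUSE mount helper can be found.
        PATH=/sbin:/bin:/usr/sbin:/usr/bin:/usr/local/sbin:/usr/local/bin
        export PATH
        echo "Mounting Share"
        /usr/local/bin/sshfs -C -o reconnect -o idmap=user -o workaround=all \
            <remote user>@<remote domain>.com: /mnt/remote_server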

  • Provider claiming "all web servers in the cloud are automatically kept in sync" - should I be skeptical?

    - by RobMasters
    I'm no expert in cloud computing - I've spent a fair bit of time researching it and various providers, but have yet to get any hands-on experience with it. From what I've read about AWS and auto-scaling EC2 instances, though, it seems as though each instance should be completely decoupled from all other instances; i.e. if content is uploaded to one web server's local filesystem from a custom CMS backend, then that content won't be available if it is subsequently requested from a different web server in the auto-scaling group. Is that right? I met with a representative of our existing hosting provider recently, and he claimed that it isn't a problem that our legacy CMS system is highly dependent on having a local filesystem. He said that all web servers, regardless of how many, would be kept as exact duplicates, so I shouldn't notice any difference compared to our existing setup of a single dedicated server. This smells a little too much like bull fecal-matter to me... should I be skeptical about this? I'm a little worried, because my (non-technical) boss, who ultimately makes the decisions, is all for signing up to this cloud solution because it won't require any extra work. I'm sure they must at least be able to provide this, otherwise they wouldn't be attempting to sell it to us. But at what cost? It sounds as though each web server will always need to be checking the other web server(s) for new static content, which to me sounds like unwanted overhead that will slow things down. I'd really appreciate it if somebody could clear this up for me. I'm all for switching to AWS and using S3+CloudFront for all static content, but that isn't looking very likely to happen at the moment.

  • Which program is locking all my executable files?

    - by Tom Wijsman
    When updating any software product, as well as when manually trying to replace .exe files, it says that access to the file is denied, and in fact the System process is holding a handle to the file when I check it with Process Explorer. My first thought was that this must be a driver or something that is malfunctioning, but now I wonder how I can figure out which driver / program is doing this and why. Unlocker doesn't seem to be working for me, unless someone can tell me how to use it properly, other than making a magic wand appear in the notification area... This is what Unlocker puts in my event log:

        The description for Event ID 1060 from source Application Popup cannot be found.
        Either the component that raises this event is not installed on your local computer
        or the installation is corrupted. You can install or repair the component on the
        local computer. If the event originated on another computer, the display
        information had to be saved with the event.

        The following information was included with the event:
        \??\C:\Program Files (x86)\Unlocker\UnlockerDriver5.sys

        the message resource is present but the message is not found in the string/message table

    Upon searching for event 1060 I get: "<file name> has been blocked from loading due to incompatibility with this system." Perhaps it is because I am on 64-bit Windows?
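
    Since handles held by the System process usually point at kernel-mode file-system filter drivers, one hedged place to look is the loaded minifilter list (from an elevated prompt; legacy, non-minifilter drivers will not appear here):

        :: List loaded file-system minifilter drivers; antivirus and backup
        :: products are frequent suspects for held executable handles.
        fltmc filters

        :: Show which volumes each filter is attached to.
        fltmc instances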

  • Set up a shared internet connection on VirtualBox with a fixed IP

    - by Tom
    I am a web developer, and until recently I was using Ubuntu as my OS. For many reasons, I have switched back to Windows. I still want to keep my server on a Linux platform, so I set up my local server as a virtual machine. Everything works great, but I struggle a little with the networking. Since I work in different places and go around to clients, I connect to all sorts of networks with different settings. That means the possible IP range is very dynamic, which causes issues when I work on my local server. At the moment I have a dynamic IP on my host and a static IP on my guest. That way I can access the server from my host (by adding a record to the hosts file). I also have an internet connection on the guest. But once I change networks, it does not work (assuming the network has a different configuration). My question is: how do I set up host-guest networking so that, no matter what network I connect to, I can keep my static IP on the guest, which is registered in the hosts file on my host so I can access the webserver, and also still have an internet connection on the guest? Hope it makes sense. Thank you
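
    A hedged sketch of the usual two-adapter arrangement: NAT for internet access through whatever network the host is on, plus a host-only adapter whose subnet never changes, carrying the fixed IP for the hosts file. The VM and interface names are placeholders:

        # Host side: create a host-only network with a stable subnet.
        VBoxManage hostonlyif create
        VBoxManage hostonlyif ipconfig vboxnet0 --ip 192.168.56.1

        # Adapter 1 = NAT (internet), adapter 2 = host-only (fixed IP).
        VBoxManage modifyvm "devserver" --nic1 nat --nic2 hostonly \
            --hostonlyadapter2 vboxnet0

    Inside the guest, the second interface would then get a static address such as 192.168.56.10, which is what goes into the host's hosts file.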

  • Pulling application updates from closest server?

    - by Mike Morris
    Setup: six major sites with Server 2003/2008 DCs doing DHCP and AD-integrated DNS, each on their own subnet, all connecting back to the datacenter through a 3 Mbps WAN; an ERP server running in the datacenter, accessed by clients at all sites. Currently, when we update the software, I manually push a copy of the updated client/config files down to each DC. I have a script that we run on each PC to update the clients: it determines what subnet the PC is on and pulls the software from that DC. It's messy, but it works. The client has an auto-update feature, but it'll only pull from the application server (which is housed in the datacenter, over the 3 Mbps link). It takes forever, since the updates are not "patches" but a full version of the client, even for minor upgrades (bad design). Since the most recent patch, you can configure the clients to pull from a different server; unfortunately, it is the same server for all clients. Is there some kind of DNS magic I can use to pull from the local server? For instance, if I tell the clients their update server is ERPUPDATE, can I have their local DNS server return a different IP for ERPUPDATE than the other sites do? Example: client 1 is at site A, client 2 is at site B. They each run the software and a version change is detected. As per the config files, the clients look to ERPUPDATE for their updated client. Client 1 queries DNS for the IP of ERPUPDATE at its current location (site A); DNS at site A returns 192.1.1.5; client 1 pulls the update from 192.1.1.5. Client 2 queries DNS for the IP of ERPUPDATE at its current location (site B); DNS at site B returns 192.1.2.5; client 2 pulls the update from 192.1.2.5. Excuse the poor explanation, I worked 61 hours over the weekend and haven't completely rebounded. I'll be happy to clarify if needed!
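
    One hedged sketch of that DNS magic, using dnscmd to give each site's DC its own standard primary zone for the update name (not AD-integrated, so it does not replicate and each site can hold a different answer); the zone, server, and record values here are hypothetical:

        :: Run once per site, substituting that site's DC and file server IP.
        dnscmd SiteA-DC /zoneadd erpupdate.example.local /primary /file erpupdate.dns
        dnscmd SiteA-DC /recordadd erpupdate.example.local @ A 192.1.1.5

        dnscmd SiteB-DC /zoneadd erpupdate.example.local /primary /file erpupdate.dns
        dnscmd SiteB-DC /recordadd erpupdate.example.local @ A 192.1.2.5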

  • allow spoofing when using tun

    - by Johnny
    I have a working OpenVPN setup with a server and a number of clients. How would I go about allowing IP spoofing through the OpenVPN server (to demonstrate security concepts)? A normal ping from client to server goes through all right:

        root@client: hping3 10.8.0.1
        HPING 10.8.0.1 (tun0 10.8.0.1): NO FLAGS are set, 40 headers + 0 data bytes
        len=40 ip=10.8.0.1 ttl=64 DF id=0 sport=0 flags=RA seq=0 win=0 rtt=124.7 ms

        root@server:/etc/openvpn# tcpdump -n -i tun0
        tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
        listening on tun0, link-type RAW (Raw IP), capture size 65535 bytes
        10:17:51.734167 IP 10.8.0.6.2146 > 10.8.0.1.0: Flags [], win 512, length 0

    But when spoofing a packet, it does not arrive at the OpenVPN server:

        root@client: hping3 -a 10.0.8.120 10.8.0.1
        HPING 10.8.0.1 (tun0 10.8.0.1): NO FLAGS are set, 40 headers + 0 data bytes

        root@server:/etc/openvpn# tcpdump -n -i tun0
        tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
        listening on tun0, link-type RAW (Raw IP), capture size 65535 bytes

    My current config files — server.conf:

        local X.Y.Z.P
        port 80
        proto tcp
        dev tun
        ca ca.crt
        cert server.crt
        key server.key  # This file should be kept secret
        dh dh1024.pem
        server 10.8.0.0 255.255.255.0
        push "redirect-gateway def1 bypass-dhcp"
        keepalive 10 120
        comp-lzo
        persist-key
        persist-tun
        persist-local-ip
        status openvpn-status.log
        verb 3

    client.conf:

        client
        dev tun
        proto tcp
        remote MYHOST..amazonaws.com 80
        resolv-retry infinite
        nobind
        persist-key
        persist-tun
        ca ca.crt
        cert client.crt
        key client.key
        ns-cert-type server
        comp-lzo
        verb 3
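
    A hedged explanation: in tun mode the OpenVPN server tracks which VPN address belongs to which client and silently drops packets whose source it does not recognize (at verb 3 the server log typically shows a "MULTI: bad source address from client" line). A sketch that tells the server one client may legitimately source a wider range, via client-config-dir; the names and the range are placeholders:

        # server.conf addition (sketch)
        client-config-dir /etc/openvpn/ccd

        # /etc/openvpn/ccd/<client common name>
        # Allow this client to originate packets from all of 10.0.8.0/24.
        iroute 10.0.8.0 255.255.255.0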

  • Trying to Set up SMTP Server on Windows Server 2012

    - by datc
    I'm working on a website, and I need to test the functionality of sending email messages from ASP.NET, something like this:

        Dim msg As New MailMessage("email1", "email2")
        msg.Subject = "Subject"
        msg.IsBodyHtml = True
        msg.Body = "Click <a href='site'>here</a>."
        Dim client As SmtpClient = New SmtpClient()
        client.Host = "My-Server"
        client.Port = 25
        client.DeliveryMethod = SmtpDeliveryMethod.Network
        client.Send(msg)

    This is running from a Windows 8 workstation. I've installed the SMTP server on my Windows Server 2012 machine. The mail shows up in the mailroot/Queue folder and sits there, eventually getting deposited into Badmail. Now, I have AT&T U-verse at home, and a few devices connected to the gateway, including one we'll call "My-Server". When I run SmtpDiag from, say, datc@... to [email protected], I get: SOA serial number match passed; Local DNS (99-135-60-233.lightspeed.bcvloh.sbcglobal.net) and Remote DNS (hotmail.com) tests *not* passed; and ultimately, "Connecting to the server failed. Error: 10060. Failed to submit mail to mx2.hotmail.com." When I set My-Server's IP to static and equal to the external IP, 99.135.60.233, and run SmtpDiag again, I get the SOA, Local DNS, and Remote DNS tests passed, but the same 10060 error. Same for yahoo.com, gmail.com, and so forth. Is it my ISP's job to fix this? Some PTR record missing somewhere? Is it at all possible to have a home-based SMTP server? All I want is to test my email code. Perhaps my IP address is just not "trusted" somehow. Thanks.
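
    One hedged first check: whether outbound port 25 is reachable at all from that connection; many residential ISPs block it, which would produce exactly this 10060 timeout regardless of the DNS results:

        :: From the server (the Telnet client feature must be installed).
        telnet mx2.hotmail.com 25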

  • AutoHotKey script not recognizing Windows Explorer

    - by SaintWacko
    I've made a script that will set an environment variable to the path of the current Explorer window when its hotkey is pressed. However, I want this to trigger only if the active window is an Explorer window. This should do it:

        #IfWinActive ahk_class ExploreWClass|CabinetWClass
        #p::
        SetPath()
        return
        #IfWinActive

    But for some reason it isn't. Is there something I'm doing wrong?
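
    A hedged guess: #IfWinActive matches its WinTitle text literally unless SetTitleMatchMode RegEx is in effect, so the | alternation between the two classes is likely never matching. A sketch using a window group instead (the group name is arbitrary):

        ; Auto-execute section: group both Explorer window classes.
        GroupAdd, ExplorerGroup, ahk_class ExploreWClass
        GroupAdd, ExplorerGroup, ahk_class CabinetWClass

        #IfWinActive ahk_group ExplorerGroup
        #p::SetPath()
        #IfWinActive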

  • In Bash, how can I obtain the directory path from the previous command's last argument

    - by Beaming Mel-Bin
    I frequently have to do this. For example:

        $ vim /etc/pam.d/sudo
        $ vim /etc/pam.d/sudo-i
        $ cd /etc/pam.d/   # Figure I should just go to the directory

    Now, is there a way I could obtain the directory of the last argument when it's a file path? I'm asking because I recently became aware of the $_ variable, which has become useful. I was wondering if there's some other command-line fu that might come in handy.
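
    A hedged sketch building on $_ itself, since it already holds the previous command's last argument:

        $ vim /etc/pam.d/sudo
        $ cd "$(dirname "$_")"    # strip the filename, keep the directory

    The pure parameter-expansion spelling, cd "${_%/*}", does the same by dropping everything after the final slash.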

  • Onboard Ethernet suddenly stopped working to router

    - by AfterschoolHobbist
    Hey guys, yesterday I suddenly developed a problem with my Ethernet connection to my router. My computer is a Pentium 4 with Windows XP SP3 running on it, and it was working fine earlier in the day, but yesterday night was the start of the problem. My computer is unable to ping the router or any website, and is unable to get an internet connection. Since it was connected through a hub, I connected it directly to the router, and the same problem occurred: it is unable to ping the router (or connect to it) and still has no internet access. I then connected directly to the modem, and the problem still persisted. For each connection, I tried both my desktop and my laptop: my laptop was able to connect, while my desktop was not. I wondered if it was an OS issue and ran a live Ubuntu CD to see if Ubuntu was able to connect to the internet, but the issue persisted and I was unable to get internet access. I then set my router's lease time to 1 hour and waited. After 1 hour, the lease for my computer was removed. I hoped this would work, but it didn't, and something strange is going on: my desktop is still unable to ping the router or connect to the internet, but for some reason the router and the desktop can still contact each other well enough to negotiate a lease on a local IP address. The router records a lease to my desktop, and when I do ipconfig, my desktop also shows that it has been given a local IP address. I have concluded that this is a hardware issue and that the only fix is to buy a new network adapter, but I am wondering if anyone has a solution that could explain why this happened, why my MAC address is 01-23-45-67-89-ab, and whether there is any way to fix it without buying a new network card? Thanks in advance.

  • S3sync not working

    - by user57833
    Hello, I managed to get s3sync to upload my test folder to Amazon S3 and can see it in the AWS Management Console. Downloading the data back to a test folder results in the following error message:

        root@mybucketname:/var/s3sync# ./week_download.sh
        s3Prefix backups/weekly
        localPrefix /var/s3sync/testdown/weekly
        s3TreeRecurse mybucketname backups/weekly
        Creating new connection
        Trying command list_bucket mybucketname prefix backups/weekly max-keys 200 delimiter / with 100 retries left
        Response code: 200
        prefix found: /
        s3TreeRecurse mybucketname backups/weekly /
        Trying command list_bucket mybucketname prefix backups/weekly/ max-keys 200 delimiter / with 100 retries left
        Response code: 200
        S3 item backups/weekly/
        s3 node object init. Name: Path:backups/weekly Size:0 Tag:d41d8cd98f00b204e9800998ecf8427e Date:Fri Oct 29 14:21:53 UTC 2010
        local node object init. Name: Path:/var/s3sync/testdown/weekly/ Size: Tag: Date:
        source:
        dest:
        Update node
        s3sync.rb:638:in `initialize': No such file or directory - /var/s3sync/testdown/weekly/.s3syncTemp (Errno::ENOENT)
            from s3sync.rb:638:in `open'
            from s3sync.rb:638:in `updateFrom'
            from s3sync.rb:393:in `main'
            from s3sync.rb:735

    I am using the following download script:

        #!/bin/bash
        # script to download from s3 to a local directory
        cd /var/s3sync/
        export AWS_ACCESS_KEY_ID=nothing to see here
        export AWS_SECRET_ACCESS_KEY=nothing to see here
        export SSL_CERT_DIR=/var/s3sync/certs
        ruby s3sync.rb -r -v -d --progress --make-dirs mybucket:backups/weekly /var/s3sync/testdown
        # copy and modify line above for each additional folder to be synced

    Any ideas? Does the download script need to download to the source folder of the original upload, i.e. the testup folder? I was hoping that in the event of a complete failure, when the original folders no longer exist, it would just download everything for me. Note: I changed my bucket names to "mybucketname" so that it is not public!
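
    A hedged guess from the ENOENT above: s3sync appears to create its .s3syncTemp work file inside the local destination tree, so the local prefix may need to exist before the download runs, --make-dirs notwithstanding:

        # Pre-create the local destination before invoking the download.
        mkdir -p /var/s3sync/testdown/weekly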

  • On Ubuntu get: "-bash: ./flume: No such file or directory" BUT flume is there and executable. Same binary OK on RHEL

    - by lcbrevard
    This is already posted on Server Fault, and may be more appropriate there; it is reworked a bit from the original posting. We have a product built on CentOS 4 32-bit Linux that runs unmodified on 32- and 64-bit CentOS/RHEL 4 and 5 and SLES 10. It also runs unmodified on SLES 9 64-bit. [SLES 9 32-bit requires a different libstdc++.] The name of the main binary executable is 'flume'. Yesterday we tried to put this on 64-bit Ubuntu 10 and, even though the file is there and the right size, we get:

        -bash: ./flume: No such file or directory

    'file flume' shows it to be a 32-bit ELF (I can't remember the exact output, and the system is on an isolated network). If it is put into /usr/local/bin, then 'which flume' returns /usr/local/bin/flume. The file is marked as executable (we did 'chmod +x flume') and lsattr shows no problems with the attribute bits. I have not been able to try 'ldd flume' yet. I have also not tried 'strace flume'. Currently I am dealing with an air conditioning failure. [It's been that kind of week!] I now suspect that some library is not there. This is a profoundly unhelpful message and one I have never seen before. Is this peculiar to Ubuntu, or perhaps just to this installation? We gave up and moved to a RHEL 4 system and everything is fine. But I sure would like to know what causes this.
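
    A hedged explanation consistent with the symptoms: for a 32-bit ELF, bash's "No such file or directory" often refers not to the binary itself but to its ELF interpreter (/lib/ld-linux.so.2), which a stock 64-bit Ubuntu of that era lacks. A sketch of the checks and the likely fix (the package name is an assumption for Ubuntu 10.x):

        # Show which dynamic loader the binary expects.
        file ./flume
        readelf -l ./flume | grep 'program interpreter'

        # If /lib/ld-linux.so.2 is missing, install 32-bit compatibility libraries.
        sudo apt-get install ia32-libs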

  • LogFormat for catching the requested hostname in a *.domain.com scenario?

    - by Dhiraj Gupta
    I have an Apache 2.2 VirtualHost with a *.domain.com ServerName. This is required for my scenario: all subdomains are handled by the same site. Now, in the access log, I am trying to figure out a LogFormat variable (or some other way) that will let me log the hostname that was actually requested. If I use the vhost_combined format, all I get in my access log is *.domain.com entries, not the actual vhost that was asked for. Anyone know how to do this?
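
    A hedged sketch: the %{Host}i format code logs the Host header the client actually sent (%V is an alternative that honors UseCanonicalName); the rest of the format string is adapted from the stock combined format, and the log path is a placeholder:

        # Log the requested hostname in front of the usual combined fields.
        LogFormat "%{Host}i %h %l %u %t \"%r\" %>s %b \"%{Referer}i\" \"%{User-Agent}i\"" host_combined
        CustomLog /var/log/apache2/access.log host_combined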

  • How to get Postfix to send/forward/relay to a sub-domain located on another server?

    - by thiesdiggity
    I have a quick question: how do I set up Postfix to send email to another server (an Exchange server) when sending to an email address under a sub-domain of our main server? For example, say our main server is mail.example.com and we have an Exchange server set up to receive email for exchange.example.com. We have the MX records set up in our DNS, and mail is received correctly if we send from a Gmail account. However, when we try to send an email from an @example.com account, we get the following error:

        Host or domain name not found. Name service error for name=exchange.example.com type=A: Host not found

    I believe Postfix checks for local mailboxes first, and if it is set up with the domain, it delivers to the local account; but in this case the sub-domain accounts are located on another server. Anyone have any thoughts on what I need to do within Postfix so it doesn't look locally for the exchange.example.com mailboxes? I found the relay_domains directive within Postfix, but adding the sub-domain there doesn't seem to fix it. Thanks for your help.
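
    A hedged sketch using a transport map to hand the sub-domain to the Exchange box explicitly; the hostnames and IP are placeholders, the square brackets suppress MX lookups, and exchange.example.com must not also be listed in mydestination:

        # /etc/postfix/main.cf (sketch)
        relay_domains = exchange.example.com
        transport_maps = hash:/etc/postfix/transport

        # /etc/postfix/transport
        exchange.example.com    smtp:[192.0.2.25]

        # Rebuild the lookup table and reload Postfix.
        postmap /etc/postfix/transport
        postfix reload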

  • How to make Thunderbird play nice with Google mail

    - by Christi
    Thunderbird and Gmail aren't exactly the best of friends. Gmail's tags mean that Thunderbird often downloads multiple copies of a single mail: anything tagged in Gmail will appear in a folder related to that tag, in the "All Mail" folder, and possibly in the "Inbox" and "Sent Mail" folders too. Thus a mail with multiple tags could potentially be stored more than four times in a local Thunderbird cache. This can make searching difficult, and is obviously wasteful of disk space. The best solution I have come up with is as follows. First, operate a zero-inbox policy (i.e. use the inbox for processing live mail only and archive everything else), which eliminates the extra copy in the inbox. Secondly, configure Thunderbird not to sync the "Sent Mail" folder. This is a bit of a pain, since I actually find it quite useful to be able to look through just the mails I've sent, but a search can duplicate this functionality. In this way, most of the duplicates are removed, and only mail with tags is stored locally more than once. Ideally, however, I'd like only one copy of each mail to be stored locally. I am surprised Thunderbird doesn't store mail by some sort of hashing algorithm to prevent precisely this problem, but I suppose that wouldn't be compatible with the way the folders are mirrored in a local directory structure. Can anyone think of a better way to get Thunderbird to cache a Google mail account locally and efficiently?

  • Run Python script at startup using upstart

    - by MarcusMaximus
    I'm trying to create an upstart script to run a Python script on startup. In theory it looks simple enough, but I just can't seem to get it to work. I'm using a skeleton script I found here and altered:

        description "Used to start python script as a service"
        author "Me <[email protected]>"

        # Stanzas
        #
        # Stanzas control when and how a process is started and stopped
        # See a list of stanzas here: http://upstart.ubuntu.com/wiki/Stanzas#respawn

        # When to start the service
        start on runlevel [2345]

        # When to stop the service
        stop on runlevel [016]

        # Automatically restart process if crashed
        respawn

        # Essentially lets upstart know the process will detach itself to the background
        expect fork

        # Start the process
        script
            exec python /usr/local/scripts/script.py
        end script

    The test script I want it to run is currently a simple Python script that runs without any issue from a terminal:

        #!/usr/bin/python2
        import os, sys, time

        if __name__ == "__main__":
            for i in range(10000):
                message = "UpstartTest ", i, time.asctime(), " - Username: ", os.getenv("USERNAME")
                #print message
                time.sleep(60)
                out = open("/var/log/scripts/scriptlogfile", "a")
                print >> out, message
                out.close()

    The location /var/log/scripts has permissions 777. The file /usr/local/scripts/script.py has permissions 775. The upstart script /etc/init.d/pythonupstart.conf has permissions 755.
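
    Two hedged observations, for whatever they are worth: Upstart reads its job files from /etc/init, not /etc/init.d (the latter is System V territory), and 'expect fork' tells Upstart to follow one fork, which a plain 'exec python ...' never performs, so Upstart ends up tracking the wrong PID. A sketch without the fork expectation:

        # /etc/init/pythonupstart.conf (sketch)
        description "Run script.py as a service"

        start on runlevel [2345]
        stop on runlevel [016]
        respawn

        # No 'expect' stanza: python stays in the foreground and Upstart
        # tracks its PID directly.
        exec /usr/bin/python2 /usr/local/scripts/script.py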

  • How to set up bindings for a development IIS 7.5 with a lot of sites

    - by Antonio Bakula
    I am a programmer in a small ASP.NET shop with very little experience in server administration, and I have to set up IIS 7.5 to host a lot of sites on a newly installed Windows Server 2008 R2 machine. These sites are test "clones" of sites on the "real" web server, and they should be accessible only on the local network (domain). Developers should be able to add new sites for our new customers. Project managers use this server to check progress and test new sites and new features; QA people have to have access to these sites to test them before we copy them to the "real" web server. Developers only have access to the IIS console; in fact, they can RDP to the test server with their developer domain credentials and permissions, and they are also local admins on that machine (tester). On our previous server I used different port numbers for each site. That worked, but I don't like this solution; I would prefer to use subdomains. But here are the problems: manually adding DNS records is not an option, because we do not want developers to have to administer the domain DNS server, and currently this has to be done with domain administrator credentials. Is there some way to add DNS records automatically? I tried to add a wildcard DNS record (*.tester) for subdomains of the test server, and that seemed to work for some time, but the change caused some bad problems in our domain network and the admin forbade me to mess with DNS. He said that I have to add a DNS record for every subdomain manually, that I cannot use wildcards, and that there is nothing I can do about it, mainly for "political" reasons :( Obviously our admin is pretty uncooperative; he is outsourced from a different organization and I can't do anything about that. Can I add another DNS server on that machine? What must be set up on client machines to "tell" them to use both the domain DNS server and the tester DNS server? So please, I need some advice: what should I do? Are different port numbers the only option left? Thanks!
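
    On the bindings side, a hedged sketch of scripting new sites with appcmd, so developers can add host-header bindings without touching IIS Manager (the site name, host header, and path are placeholders); this still assumes something resolves the names, whether per-record DNS entries or the contested wildcard:

        :: Create a site bound to a host header on port 80.
        %windir%\system32\inetsrv\appcmd add site /name:"customer1" ^
            /bindings:http/*:80:customer1.tester /physicalPath:"D:\sites\customer1"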

  • Export files to remote server using TortoiseSVN

    - by Matt
    I'm using TortoiseSVN to keep revisions of my code. When I commit changes, I take note of what files have changed and upload them to my server using FTP. Here's my workflow: 1) Edit files on the local computer (e.g. files in C:\Users\Me\web). 2) Commit changes to the local repository using right-click > TortoiseSVN > SVN Commit. 3) Take the files, open FileZilla (an FTP client), and upload the files to a remote server. I was wondering if there was a way I could omit step 3 from my workflow. Basically, I would like the changed files to be automatically uploaded to the remote server when I commit a version to the repository. Information about my computer environment: Windows 7 Ultimate x64 with TortoiseSVN x64; Notepad++ text editor; files edited are PHP, CSS, JS, HTML, etc. The server is running Linux with PHP 5.2 and MySQL. FileZilla is used to upload files. I can connect to the server via SSH if that is needed. Thank you in advance.
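
    One hedged approach to automating step 3: TortoiseSVN supports client-side hook scripts (Settings > Hook Scripts), so a post-commit hook could push the working copy over SFTP. A sketch using WinSCP's scripting mode; the paths, host, and credentials are placeholders, and storing a password in a script is something to weigh carefully:

        :: post-commit-upload.bat (sketch): mirror the working copy to the server.
        "C:\Program Files (x86)\WinSCP\winscp.com" /command ^
            "open sftp://user:password@server/" ^
            "synchronize remote C:\Users\Me\web /var/www/site" ^
            "exit"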

  • Buying an old laser printer -- what will need to be replaced?

    - by marienbad
    Hi all -- as you can see I'm new. I do IT and wiring for a small local shop but I never deal with printers. I do a LOT of printing, and I'd like to stop spending as much money on it. On my local CL, there is an HP 8100DN (duplex network) printer for a very good price (and the toner is a quarter-cent per page). It has printed 200,000 pages and I don't yet know anything else about it. The model was released in 1999. So my questions:
    - What are the parts that tend to need service on laser printers? On eBay, I see fusers, rollers, DC power boards, and motors. What would you expect to replace soon at 200,000 pages?
    - Are there any good "tests" to find out if certain parts are near failure?
    - Do you have anything to say about the HP 8100 specifically?
    The bottom line for me is that if there's any chance of repairs costing more than $100, it's not worth it for me.

  • How can I tell if Windows is running in Safe Mode?

    - by cwd
    I have a Windows server that will sometimes reboot into safe mode after updates. I'm working on that issue, but what I'd really like to know is how I can check whether Windows is running in safe mode or not. Ideally I would like to incorporate it into a script that would send a passive check to our Nagios box with the status. Is there some environment variable I can use, or some way to get this information via the command line?
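
    Two hedged checks that script easily: in safe mode Windows sets the SAFEBOOT_OPTION environment variable (to MINIMAL or NETWORK), and WMI exposes the boot state:

        :: Batch sketch: report the boot mode.
        if defined SAFEBOOT_OPTION (echo Safe mode: %SAFEBOOT_OPTION%) else (echo Normal boot)

        :: WMI alternative; returns e.g. "Normal boot" or "Fail-safe boot".
        wmic computersystem get BootupState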

  • OpenVPN + iptables / NAT routing

    - by Mikeage
    I'm trying to set up an OpenVPN VPN which will carry some (but not all) traffic from the clients to the internet via the OpenVPN server. My OpenVPN server has a public IP on eth0, and is using tap0 to create a local network, 192.168.2.x. I have a client which connects from local IP 192.168.1.101 and gets VPN IP 192.168.2.3. On the server, I ran:

        iptables -A INPUT -i tap+ -j ACCEPT
        iptables -A FORWARD -i tap+ -j ACCEPT
        iptables -t nat -A POSTROUTING -s 192.168.2.0/24 -o eth0 -j MASQUERADE

    On the client, the default remains to route via 192.168.1.1. In order to point it to 192.168.2.1 for HTTP, I ran:

        ip rule add fwmark 0x50 table 200
        ip route add table 200 default via 192.168.2.1
        iptables -t mangle -A OUTPUT -j MARK -p tcp --dport 80 --set-mark 80

    Now, if I try accessing a website on the client (say, wget google.com), it just hangs there. On the server, I can see:

        $ sudo tcpdump -n -i tap0
        tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
        listening on tap0, link-type EN10MB (Ethernet), capture size 96 bytes
        05:39:07.928358 IP 192.168.1.101.34941 > 74.125.67.100.80: S 4254520618:4254520618(0) win 5840 <mss 1334,sackOK,timestamp 558838 0,nop,wscale 5>
        05:39:10.751921 IP 192.168.1.101.34941 > 74.125.67.100.80: S 4254520618:4254520618(0) win 5840 <mss 1334,sackOK,timestamp 559588 0,nop,wscale 5>

    where 74.125.67.100 is the IP it gets for google.com. Why isn't the MASQUERADE working? More precisely, I see the source showing up as 192.168.1.101; shouldn't there be something to indicate that it came from the VPN?

    Edit: Some routes [from the client]:

        $ ip route show table main
        192.168.2.0/24 dev tap0  proto kernel  scope link  src 192.168.2.4
        192.168.1.0/24 dev wlan0  proto kernel  scope link  src 192.168.1.101  metric 2
        169.254.0.0/16 dev wlan0  scope link  metric 1000
        default via 192.168.1.1 dev wlan0  proto static

        $ ip route show table 200
        default via 192.168.2.1 dev tap0
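
    A hedged reading of that capture: the client's policy routing sends marked packets out the tunnel but never rewrites their source address, so they leave with the wlan0 address 192.168.1.101; the server's MASQUERADE rule only matches sources in 192.168.2.0/24, so these packets go out eth0 un-NATed and replies never return. Two client-side sketches, using the VPN IP 192.168.2.4 shown in the route output (the text above says .3, so adjust to whichever is current):

        # Rewrite marked packets leaving the tunnel to the client's VPN IP.
        iptables -t nat -A POSTROUTING -o tap0 -m mark --mark 80 \
            -j SNAT --to-source 192.168.2.4

        # Alternative: bake a source hint into the routing table entry.
        ip route replace table 200 default via 192.168.2.1 dev tap0 src 192.168.2.4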

  • Changing Java from OpenJDK to SunJava

    - by SpiXel
    I want to change from OpenJDK to Sun Java on my Ubuntu Linux desktop. I have downloaded "jdk-7.tar.gz" from Sun's website, but the problem is how to make the system use the newly downloaded Java. I tried adding the new jdk/bin/java to my PATH (from .bashrc), but that seems not to work (probably because OpenJDK's path is inside my PATH variable as well, so the system checks that first). Here's my JDK 7: /usr/lib/jdk7/bin/java. Here's what $(which java) outputs: /usr/bin/java. Thanks in advance
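
    A hedged sketch using the Debian/Ubuntu alternatives system, which is what normally decides which java wins at /usr/bin/java (the install path comes from the post; the priority value is arbitrary):

        # Register the unpacked JDK as an alternative and select it.
        sudo update-alternatives --install /usr/bin/java java /usr/lib/jdk7/bin/java 1
        sudo update-alternatives --config java

        # Verify which binary now answers.
        which java && java -version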
