Search Results

Search found 36498 results on 1460 pages for 'linux usb drive'.


  • LXC container can only access host via bridge

    - by vitaut
    I have an LXC container with i686 Ubuntu 12.04 running on an x86_64 Ubuntu 12.04 host. I've set up a bridge using the instructions here. However, the ping from the container only goes through to the host and not to other machines on the local network. Similarly, only the host and not the other machines see the container OS.

    The host's /etc/network/interfaces file looks as follows:

        auto lo
        iface lo inet loopback

        iface eth0 inet manual

        auto br0
        iface br0 inet dhcp
            bridge_ports eth0
            bridge_fd 0
            bridge_maxwait 0

    The container's /etc/network/interfaces file looks as follows:

        auto lo
        iface lo inet loopback

        auto eth0
        iface eth0 inet dhcp

    And here's the relevant part of the container's config:

        lxc.network.type=veth
        lxc.network.link=br0
        lxc.network.flags=up

    Any ideas what I'm doing wrong?

    Additional info. The output of iptables-save on the host:

        $ sudo iptables-save
        # Generated by iptables-save v1.4.12 on Sat Oct 26 06:06:48 2013
        *filter
        :INPUT ACCEPT [6854:721708]
        :FORWARD ACCEPT [4067:538895]
        :OUTPUT ACCEPT [4967:522405]
        COMMIT
        # Completed on Sat Oct 26 06:06:48 2013
        # Generated by iptables-save v1.4.12 on Sat Oct 26 06:06:48 2013
        *nat
        :PREROUTING ACCEPT [82235:21547307]
        :INPUT ACCEPT [16:1070]
        :OUTPUT ACCEPT [9386:583359]
        :POSTROUTING ACCEPT [14693:1291952]
        -A POSTROUTING -s 10.0.3.0/24 ! -d 10.0.3.0/24 -j MASQUERADE
        COMMIT
        # Completed on Sat Oct 26 06:06:48 2013

    The output of brctl show on the host:

        $ brctl show
        bridge name     bridge id               STP enabled     interfaces
        br0             8000.080027409684       no              eth0
                                                                vethBkwWyV

    The output of ifconfig br0 on the host:

        $ ifconfig br0
        br0       Link encap:Ethernet  HWaddr 08:00:27:40:96:84
                  inet addr:192.168.1.11  Bcast:192.168.1.255  Mask:255.255.255.0
                  inet6 addr: fe80::a00:27ff:fe40:9684/64 Scope:Link
                  UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
                  RX packets:232863 errors:0 dropped:0 overruns:0 frame:0
                  TX packets:59518 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:0
                  RX bytes:34437354 (34.4 MB)  TX bytes:198492871 (198.4 MB)

    The output of ifconfig eth0 on the host:

        $ ifconfig eth0
        eth0      Link encap:Ethernet  HWaddr 08:00:27:40:96:84
                  inet6 addr: fe80::a00:27ff:fe40:9684/64 Scope:Link
                  UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
                  RX packets:299419 errors:0 dropped:0 overruns:0 frame:0
                  TX packets:203569 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:1000
                  RX bytes:59077446 (59.0 MB)  TX bytes:372056540 (372.0 MB)

    The output of ifconfig eth0 on the container:

        $ ifconfig eth0
        eth0      Link encap:Ethernet  HWaddr 00:16:3e:74:08:2b
                  inet addr:192.168.1.12  Bcast:192.168.1.255  Mask:255.255.255.0
                  inet6 addr: fe80::216:3eff:fe74:82b/64 Scope:Link
                  UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
                  RX packets:81 errors:0 dropped:0 overruns:0 frame:0
                  TX packets:113 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:1000
                  RX bytes:8506 (8.5 KB)  TX bytes:9021 (9.0 KB)
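
    One way to narrow this down (a diagnostic sketch, not a confirmed fix; the MAC and interface names are copied from the output above) is to check whether the container's frames ever make it past the host's physical NIC:

        # Watch for the container's MAC on the bridge and on the uplink. If its ARP
        # requests show up on br0 but never on eth0, frames are being dropped
        # before they reach the wire.
        sudo tcpdump -e -n -i br0  ether host 00:16:3e:74:08:2b
        sudo tcpdump -e -n -i eth0 ether host 00:16:3e:74:08:2b

        # Also check whether bridged traffic is being run through iptables at all:
        cat /proc/sys/net/bridge/bridge-nf-call-iptables

    One thing worth noting: the host's MAC prefix 08:00:27 belongs to VirtualBox, so if the host is itself a virtual machine, its virtual NIC typically has to allow promiscuous mode ("Allow All") before frames carrying the container's different MAC will be forwarded onto the LAN.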


  • How to secure Firefox traffic (+DNS) through SOCKS proxy under Ubuntu 10.04?

    - by Maarx
    I'm using Ubuntu 10.04, and starting a SOCKS proxy with 'ssh -D', and setting Ubuntu to use it with "System - Preferences - Network Proxy". Firefox uses the proxy, and the proxy's IP appears when I visit a site like http://www.whatismyip.com/. My question is, is Firefox resolving DNS requests through this proxy? Is my web-browsing truly secure? (That is, until I exit the other end of the proxy. I know it's insecure after that.) (And I've verified the keys, I'm not being man-in-the-middled) (And--screw it. You know what I mean. Is it resolving DNS requests through the proxy?) I don't know how I would go about verifying such a thing for myself. Using additional hardware such as another debugging proxy is not an option. If Firefox isn't resolving my DNS requests through the SOCKS proxy, how do I go about fixing it?
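
    Firefox only sends DNS lookups through a SOCKS proxy when its remote-DNS preference is enabled; the system-wide proxy setting does not control this. A minimal way to check and verify (the preference name is Firefox's own; the interface name in the capture command is an assumption):

        # In about:config, make sure this preference is set to true:
        #   network.proxy.socks_remote_dns
        # Then, while browsing, confirm no plain DNS queries leave the machine:
        sudo tcpdump -n -i eth0 udp port 53

    If queries still show up, Firefox is resolving names locally and only tunnelling the HTTP traffic.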


  • Apache access.log interpretation

    - by Pantelis Sopasakis
    In the log file of Apache (access.log) I find log entries like the following:

        10.20.30.40 - - [18/Mar/2011:02:12:44 +0200] "GET /index.php HTTP/1.1" 404 505 "-" "Opera/9.80 (Windows NT 6.1; U; en) Presto/2.7.62 Version/11.01"

    Whose meaning is clear: the client with IP 10.20.30.40 applied a GET HTTP method on /index.php (that is to say http://mysite.org/index.php), receiving a status code 404, using Opera as client/browser. What I don't understand is entries like the following:

        174.34.231.19 - - [18/Mar/2011:02:24:56 +0200] "GET http://www.siasatema.com HTTP/1.1" 200 469 "-" "Python-urllib/2.4"

    So here what I see is that someone (the client with IP 174.34.231.19) accessed http://www.siasatema.com and got a 200 HTTP status code(?). It doesn't make sense to me... the only interpretation I can think of is that my Apache server acts like a proxy! Here are some other requests that don't have my site as destination...

        187.35.50.61 - - [18/Mar/2011:01:28:20 +0200] "POST http://72.26.198.222:80/log/normal/ HTTP/1.0" 404 491 "-" "Octoshape-sua/1010120"
        87.117.203.177 - - [18/Mar/2011:01:29:59 +0200] "CONNECT 64.12.244.203:80 HTTP/1.0" 405 556 "-" "-"
        87.117.203.177 - - [18/Mar/2011:01:29:59 +0200] "open 64.12.244.203 80" 400 506 "-" "-"
        87.117.203.177 - - [18/Mar/2011:01:30:04 +0200] "telnet 64.12.244.203 80" 400 506 "-" "-"
        87.117.203.177 - - [18/Mar/2011:01:30:09 +0200] "64.12.244.203 80" 400 301 "-" "-"

    I believe that all these are related to some kind of attack or abuse of the server. Could someone explain to me what is going on and how to cope with this situation?

    Update 1: I disabled mod_proxy to make sure that I don't have an open proxy:

        # a2dismod proxy

    which gave me the message "Module proxy already disabled". I made sure that there is no file proxy.conf under $APACHE/mods-enabled. Finally, I set my IP as a proxy in my browser (Mozilla) and tried to access http://google.com. I was not redirected to google.com; instead my own web page appeared. The same happened when trying to access http://a.b (!). So my server does not really work as a proxy, since it does not forward the requests... But I think it would be better if somehow I could configure it to return a status code 403. Here is my Apache configuration file:

        <VirtualHost *:80>
            ServerName mysite.org
            ServerAdmin webmaster@localhost

            DocumentRoot /var/www/
            <Directory />
                Options FollowSymLinks
                AllowOverride None
            </Directory>
            <Directory /var/www/>
                Options Indexes FollowSymLinks MultiViews
                AllowOverride None
                Order allow,deny
                allow from all
            </Directory>

            ScriptAlias /cgi-bin/ /usr/lib/cgi-bin/
            <Directory "/usr/lib/cgi-bin">
                AllowOverride None
                Options +ExecCGI -MultiViews +SymLinksIfOwnerMatch
                Order allow,deny
                Allow from all
            </Directory>

            ErrorLog /var/log/apache2/error.log
            LogLevel warn
            CustomLog /var/log/apache2/access.log combined

            Alias /doc/ "/usr/share/doc/"
            <Directory "/usr/share/doc/">
                Options Indexes MultiViews FollowSymLinks
                AllowOverride None
                Order deny,allow
                Deny from all
                Allow from 127.0.0.0/255.0.0.0 ::1/128
            </Directory>
        </VirtualHost>

    Update 2: Using a Limit block, I restrict the use of methods other than GET and POST...

        <Limit POST PUT CONNECT HEAD OPTIONS DELETE PATCH PROPFIND PROPPATCH MKCOL COPY MOVE LOCK UNLOCK>
            Order deny,allow
            Deny from all
        </Limit>
        <LimitExcept GET>
            Order deny,allow
            Deny from all
        </LimitExcept>

    Now methods other than GET are forbidden (403). My only question now is whether there is some trick to boot out those who try to use my server as a proxy...
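
    One way to get the 403 behaviour described above is a small mod_rewrite guard (a sketch, assuming mod_rewrite is enabled; proxy-style requests carry an absolute URL or a CONNECT verb in the request line, which a normal request for this vhost never does):

        RewriteEngine On
        # THE_REQUEST holds the raw request line, e.g. "GET http://other.site/ HTTP/1.1"
        RewriteCond %{THE_REQUEST} ^[A-Z]+\ https?:// [NC,OR]
        RewriteCond %{REQUEST_METHOD} ^CONNECT$ [NC]
        RewriteRule .* - [F]

    The [F] flag returns 403 without serving the local page, which also makes these probes easy to spot in the logs.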


  • EC2 EBS AMI Instance stopping/restarting doesn't start services

    - by tgm
    I've recently been moving our instances to EBS-backed instances (CentOS) and still have a bit of confusion about what's happening when I "stop" an instance. I have some of my services set to on for runlevels 345, but when I start a stopped instance the services don't start. What's actually happening when I issue a stop command to the instance, and how do I get my services to start automatically when I start the instance up again?
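
    Stopping an EBS-backed instance is an OS shutdown, and starting it again is a fresh boot (often on different hardware), so only services registered for the default runlevel come back on their own. A quick check on CentOS (the service name is just an example):

        # Which runlevels is the service wired into?
        chkconfig --list httpd
        # Register it for runlevels 3, 4 and 5 so it starts at boot:
        chkconfig --level 345 httpd on

    Anything that was only ever started by hand with "service httpd start" will not survive a stop/start cycle.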


  • Easy shorewall question : allow ips to DNAT

    - by llazzaro
    Hello. On my home network I have a transparent proxy. This is the rule that forwards all port 80 traffic to my Squid 3.1 server in the DMZ:

        DNAT   loc:!10.0.0.126   dmz:172.16.0.198:3128   tcp   80   -   !172.16.0.198

    OK, I need to add more IPs that should avoid the transparent proxy. I tried loc:!10.0.0.134,!10.0.0.126... but that didn't work (and neither did similar forms like [ip0,ip1]). I tried to google the answer but can't find it (sorry, no matches; probably not searching for the right keywords), and I tried to read the docs, but they are really long (and the indexes don't help me). Thanks!
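
    If I recall Shorewall's exclusion syntax correctly (worth verifying against the Shorewall documentation for your version), a single "!" introduces the whole exclusion list and the addresses after it are simply comma-separated rather than individually negated, so the rule would look roughly like this sketch:

        # /etc/shorewall/rules - exclude both clients from the redirect
        DNAT   loc:!10.0.0.126,10.0.0.134   dmz:172.16.0.198:3128   tcp   80   -   !172.16.0.198

    Remember to run "shorewall check" and then "shorewall restart" after editing the rules file.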


  • Hang while starting several daemons [solved]

    - by Adrian Lang
    I’m running a Debian Squeeze AMD64 server. The target runlevel after boot is runlevel 2, which includes rsyslogd, cron, sshd and some other stuff, but not dovecot, postfix, apache2, etc. The system fails to reach runlevel 2, with several symptoms:

    - The system hangs at trying to start rsyslogd.
    - Booting into runlevel 1 works; login from the console then works.
    - Starting rsyslogd from runlevel 1 via /etc/init.d/rsyslog hangs.
    - Starting runlevel 2 with rsyslogd disabled works. But then, logging in via console fails: I get the motd, and then nothing.
    - Starting sshd from runlevel 1 succeeds. But then, I cannot log in via ssh. Sometimes password ssh login gives me the motd and then nothing, sometimes not even this. Trying to offer a public key seems to annoy the sshd enough to not talk to me any further.
    - When rebooting from runlevel 1, the server hangs at trying to stop apache2 (which is not running, so this really should be trivial). Trying to stop apache2 when logged in in runlevel 1 hangs as well.

    And that’s just the stuff which fails all the time. RAM has been tested, dmesg shows no problems. I have no clue.

    Update: (shortened) output from rsyslogd -c4 -d called in runlevel 1:

        rsyslogd 4.6.4 startup, compatibility mode 4, module path ''
        caller requested object 'net', not found (iRet -3003)
        Requested to load module 'lmnet'
        loading module '/user/lib/rsyslog/lmnet.so'
        module of type 2 being loaded
        conf.c requested ref for 'lmnet', refcount 1
        rsylog runtime initialized, version 4.6.4, current users 1
        syslogd.c requested ref for 'lmnet', refcount now 2

    I can kill rsyslogd with Ctrl+C then, but /var/log shows none of the configured log files.

    Update 2: Thanks to @DerfK I still have no clue, but at least I narrowed down the problem. I’m now testing with /etc/init.d/apache2 stop (without an apache2 running, of course), which hangs as well and looks like an even more obvious failure. After some testing I found out that a script containing the single line

        /usr/sbin/apache2ctl configtest > /dev/null 2>&1

    hangs, while the same line executed in an interactive shell works. I was not able to further reduce this line: every single part, the stream redirections and the command itself, is necessary to reproduce the hang. @DerfK also pointed me to strace, which gave a shallow hint about what kind of hang we have here:

        wait4(-1, ...                                       for the init scripts
        futex(0xsomepointer, FUTEX_WAIT_PRIVATE, 2, NULL    for the rsyslogd / apache2 binaries called by the init scripts

    The system was installed as Debian Lenny by my hoster in autumn 2011; I upgraded it to Squeeze immediately and kept it up to date with Squeeze, which then used to be testing. There were no big changes, though. I guess I never tried to reboot the system before.

    Update 3: I found the problem. My /etc/nsswitch.conf specified ldap as a hosts lookup backend, which is not available at that point of the boot. Relying on dns solely fixes my boot problems.
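
    For reference, the nsswitch.conf change that fixed it: anything during early boot that calls gethostbyname() (init scripts, rsyslogd, apache2ctl) can block if a name-service backend listed there cannot be reached yet. The line in question looked roughly like the first one below (the exact original is not shown in the post) and was reduced to the second:

        # before (hangs while the LDAP server is unreachable during boot)
        hosts: files dns ldap
        # after
        hosts: files dns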


  • Google Search w/ Chrome Incognito w/ Gnome Do

    - by jrc03c
    I've installed Google Chrome as my default browser in Ubuntu, and recently installed Gnome Do and enabled the Google Search plugin. The Google Search from Gnome Do works exactly as expected but for one thing: Chrome (which is typically set to open in "incognito" mode) does not open in "incognito" mode. The shortcuts on my desktop, taskbar, and menus all have the --incognito flag attached (which works just fine), but the browser refuses to open in this mode when launched from Gnome Do. Any suggestions? Also, please note the settings for the Google Search plugin in Gnome Do: It's obvious that Gnome Do just passes the Google Search blindly to the default browser. In other words, there are no configurable settings specifically for Chrome. Any thoughts?
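
    Since the "default browser" launched by Gnome Do carries no flags, one workaround (a sketch; the wrapper name and path are made up here) is to point the preferred-browser setting at a tiny script that always adds --incognito:

        mkdir -p ~/bin
        cat > ~/bin/chrome-incognito <<'EOF'
        #!/bin/sh
        exec google-chrome --incognito "$@"
        EOF
        chmod +x ~/bin/chrome-incognito

    Then select a custom browser command such as "chrome-incognito %s" under System - Preferences - Preferred Applications, so anything Gnome Do hands to the default browser goes through the wrapper.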


  • Clicking or Knocking with Seagate HD

    - by Daniel A. White
    My laptop's main HD makes a clicking or knocking sound when Windows or the Bios tries to access it. I put it into a SATA dock and it sounds perfectly fine when spinning up, but after Windows tries to access it, it becomes a repetitive clicking or knocking sound. Does anyone know any tips that might help me access my data? I have most of it backed up, but I would still like to Ghost it before I send it off for repairs. I know my laptop is still under warranty.
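
    If the drive still spins up in the dock, it may be worth pulling a raw image off it before sending it away, rather than ghosting the failing disk directly. A sketch with GNU ddrescue (not mentioned in the question; the device name is an example, so double-check it first):

        # First pass: copy everything that reads cleanly, skipping bad areas quickly.
        sudo ddrescue -n /dev/sdX seagate.img seagate.map
        # Second pass: go back and retry the bad areas a few times.
        sudo ddrescue -r3 /dev/sdX seagate.img seagate.map

    Every extra read on a clicking drive is a risk, so imaging once and working from the image is usually gentler than repeated file-level copies.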


  • Are these mySQL user settings vulnerable?

    - by Kavon Farvardin
    I'm using phpMyAdmin to manage the databases, and I'm new to SQL in general. Am I supposed to keep an open anonymous user on localhost so things like Drupal can access MySQL? It seems like having a non-passworded root on my server's hostname is unwise, but I don't know what I'm doing with this in general. The user whose name starts with a "b" is the one I use to log in and do things like make a database.
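
    No, Drupal should connect as its own dedicated user rather than through the anonymous account. On a stock install the quickest cleanup is the bundled helper, which removes the anonymous users and sets a root password interactively; the Drupal database and user names below are examples (a sketch, not a prescription):

        sudo mysql_secure_installation

        # Then give the application its own credentials instead of the anonymous user:
        mysql -u root -p <<'EOF'
        CREATE DATABASE drupal;
        GRANT ALL PRIVILEGES ON drupal.* TO 'drupaluser'@'localhost' IDENTIFIED BY 'choose-a-strong-password';
        FLUSH PRIVILEGES;
        EOF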


  • configure server and create website without any control panel

    - by Emad Ahmed
    I am trying to configure my new server without cPanel. I've installed PHP/MySQL/Apache, and it's now working fine: if you visit the server IP, http://46.166.129.101/, you'll see the welcome page. I've configured my DNS too; here is my nameserverips file:

        [root@server]# cat /etc/nameserverips
        46.166.129.101=ns1.isellsoftwares.com
        46.166.129.101=ns2.isellsoftwares.com

    If you visit http://ns1.isellsoftwares.com you'll see the welcome page too!! But if you visit isellsoftwares.com you'll see ('Firefox can't find the server at www.isellsoftwares.com.')

    Now my question is: how do I create an account for this domain on the server? I've tried to add a VirtualHost tag in Apache:

        <VirtualHost *:80>
            ServerAdmin [email protected]
            ServerAlias www.isellsoftwares.com
            DocumentRoot /var/www/html/issoft
            ServerName isellsoftwares.com
            ErrorLog logs/dummy-host.example.com-error_log
            CustomLog logs/dummy-host.example.com-access_log common
        </VirtualHost>

    It is still not working... I've also added a named zone file for this domain (isellsoftwares.com.db):

        ; Zone file for isellsoftwares.com
        $TTL 14400
        isellsoftwares.com.   86400   IN   SOA   ns1.isellsoftwares.com. elsolgan.yahoo.com. (
                2012031500  ;Serial Number
                86400       ;refresh
                7200        ;retry
                3600000     ;expire
                86400       ;minimum
        )
        isellsoftwares.com.   86400   IN   NS      ns1.isellsoftwares.com.
        isellsoftwares.com.   86400   IN   NS      ns2.isellsoftwares.com.
        isellsoftwares.com.   14400   IN   A       46.166.129.101
        localhost             14400   IN   A       127.0.0.1
        isellsoftwares.com.   14400   IN   MX      0 isellsoftwares.com.
        mail                  14400   IN   A       46.166.129.101
        www                   14400   IN   CNAME   isellsoftwares.com.
        ftp                   14400   IN   A       46.166.129.101
        cpanel                14400   IN   A       46.166.129.101
        whm                   14400   IN   A       46.166.129.101
        webmail               14400   IN   A       46.166.129.101
        webdisk               14400   IN   A       46.166.129.101
        ns1                   14400   IN   A       46.166.129.101
        ns2                   14400   IN   A       46.166.129.101

    But it is still not working!!!!! So, what else should I do?
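
    Before touching the zone file or Apache again, it is worth checking whether the outside world is ever told to ask this server about the domain (a diagnostic sketch; dig ships in the bind-utils package on Red Hat-style systems):

        dig +short NS isellsoftwares.com           # which nameservers does the public DNS hand out?
        dig @46.166.129.101 isellsoftwares.com A   # what does this server itself answer?

    If the first command does not return ns1/ns2.isellsoftwares.com, the nameservers still have to be registered (with glue records pointing at 46.166.129.101) at the domain's registrar, and no amount of local configuration will make the name resolve for visitors.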


  • How do I configure a secondary gateway in RHEL5?

    - by Brett Ryan
    Greetings, we have been experiencing a random timeout issue with VPN users connecting to one of our servers, which is causing a problem. My network administrator has instructed me to configure a secondary gateway to include the VPN connection. My current connection resides as follows: 10.1.9.1 is the internal gateway to the internet; I'd like to add 10.1.1.20 as the VPN gateway.

        # Broadcom Corporation NetXtreme II BCM5708S Gigabit Ethernet
        DEVICE=eth0
        BOOTPROTO=none
        BROADCAST=10.1.255.255
        IPADDR=10.1.1.22
        IPV6_AUTOCONF=yes
        NETMASK=255.255.0.0
        NETWORK=10.1.0.0
        ONBOOT=yes
        GATEWAY=10.1.9.1
        TYPE=Ethernet
        USERCTL=no
        IPV6INIT=no
        PEERDNS=yes
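
    A host can only have one default gateway, so the usual approach on RHEL5 is to keep GATEWAY=10.1.9.1 and add a static route that sends just the VPN clients' network via 10.1.1.20. A sketch (192.168.50.0/24 is a placeholder for whatever subnet the VPN actually hands out):

        # /etc/sysconfig/network-scripts/route-eth0
        192.168.50.0/24 via 10.1.1.20 dev eth0

        # then re-read the network configuration
        service network restart

    Routes added with a plain "route add" command work too, but they disappear at the next reboot, which is why the route-eth0 file is the persistent place for them.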


  • Will Vimperator always be this awesome?

    - by Martín Fixman
    About a week ago I started using Vim, and fell completely in love with it. However, today I installed the Vimperator extension for Firefox, and though there are some problems (all of which will be solved once I've used it enough to get used to it), I found it great. However, I'm still in the "Holy fuck this is totally awesome" phase of software testing, and in some time will go back to the "I have this thing" phase. Just to be sure, will it be a good idea to use it regularly? I want to hear experiences from users and ex-users.


  • Is there a way to find which values come from which file in HAL under Ubuntu?

    - by vava
    I've been playing with multitouch on my Thinkpad and read a few tutorials on how to set it up. One of them mentioned /usr/hal/fdi/policy/20thirdparty/11-x11-synaptics.fdi; I edited it and enabled SHMConfig through it. Later I found out about the /etc/hal/policy/ directory and put some customization for my touchpad there as well, in a separate fdi file. But now it looks like the touchpad doesn't care about my customizations. I have gsynaptics installed and can configure it through the GUI, and I can configure it with synclient, but I can't set any values through fdi files. I even turned off SHMConfig, reverting the 11-x11-synaptics.fdi file to its original state, but it seems like SHMConfig is still enabled; otherwise I wouldn't be able to configure properties at runtime. So, I was thinking, maybe there are additional HAL files I don't know about. How can I find them, particularly the ones responsible for turning SHMConfig on?
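
    A quick way to enumerate every fdi file HAL can merge, and to see what it ended up believing about the touchpad (a sketch; the grep pattern is just a guess at the device name):

        # all fdi files on the usual search paths
        find /etc/hal /usr/share/hal /usr/local/share/hal -name '*.fdi' 2>/dev/null

        # the merged properties HAL currently holds for the synaptics device
        lshal | grep -i -B2 -A8 synaptics

    Files under /etc/hal are meant to override the ones shipped in /usr/share/hal, so a stale copy in either place can keep re-enabling SHMConfig.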


  • Ubuntu and mysql server. Something isn't allowing me to connect

    - by acidzombie24
    I have a question about MySQL settings: http://serverfault.com/questions/94054/remote-connections-and-mysql-on-ubuntu/94088#94088. Now I want to figure out why I cannot connect. I made sure bind-address was commented out. I can ping the server within the VM, but I cannot reach it from within the VM using mysqladmin --protocol=tcp --host=self_ip ping. I also followed along and checked whether my ports were open, and they look like they are. I set up Samba on that VM and can access that with no problem as well. It looks like Ubuntu does not have a firewall either (I figured this out before), so I am stumped why the server isn't allowing my connection. Apparently the config file works on another person's side: http://www.pastie.org/742545. I am using Ubuntu 6.06 LTS just because of 'support' reasons. So hopefully this will be 'easy'?
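
    A few checks from inside the VM that usually pin this down (a sketch; 3306 is MySQL's default port and the config path is the Ubuntu default):

        # Is mysqld listening, and on which address?
        sudo netstat -ltnp | grep 3306
        # Anything in the config that restricts networking?
        grep -E 'bind-address|skip-networking' /etc/mysql/my.cnf
        # Does a plain TCP connection work locally?
        mysql --protocol=tcp -h 127.0.0.1 -u root -p -e 'SELECT 1'

    If mysqld is only listening on 127.0.0.1, or skip-networking is present, every TCP connection (including mysqladmin --protocol=tcp from the same machine) will fail no matter what the firewall does; on older Debian/Ubuntu MySQL packages skip-networking was sometimes enabled by default, so it is worth checking explicitly.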


  • Tips and tricks to make NX server more stable

    - by gareth_bowles
    My shop has been using the FreeNX server on Fedora 11 for a while now, mostly with good results, especially with performance, but we have some annoying problems with client connections. There are two main issues:

    - Client sessions sometimes freeze after a long time (it seems to take at least 2 hours of having the session active).
    - We often have to make multiple attempts to start a new client session, especially if a previous session was suspended rather than terminated. In quite a few cases, we've had to restart the NX server to get around this.

    Our NX server configuration is the default, except that we've enabled logging level 7 to /var/log/nxserver.log and set the font server to "unix:/7100" so that it uses xfs. Does anyone have any ideas for making things more stable?


  • Graphite Running using daemon tools getting defunct

    - by pradeepchhetri
    I am running carbon-cache.py and carbon-aggregator.py using daemontools. When I made some changes in storage-schema.conf and tried to restart carbon-cache.py, I found that it becomes a zombie very frequently:

        root  3367  3366  0 03:23 pts/1  00:00:00 supervise carbon-aggregator
        root  3371  3366  0 03:23 pts/1  00:00:00 supervise carbon-cache
        root  3373  3367  3 03:23 pts/1  00:00:02 /usr/bin/python /usr/bin/carbon-aggregator.py --debug start
        root  3379  3372  0 03:23 pts/1  00:00:00 multilog t /var/log/multilog/carbon-cache
        root  3382  3368  0 03:23 pts/1  00:00:00 multilog t /var/log/multilog/carbon-aggregator
        root  3638  3371 21 03:24 pts/1  00:00:00 [carbon-cache.py] <defunct>

    Can someone tell me what may be the reason?
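
    A defunct entry under supervise usually means the process daemontools started has forked or exited, so the run file has to keep carbon in the foreground. A sketch of such a run file (the --debug flag is what the processes above are already using to stay in the foreground; newer carbon releases also have a --nodaemon option worth checking for):

        #!/bin/sh
        # /service/carbon-cache/run - keep carbon-cache in the foreground for supervise
        exec 2>&1
        exec /usr/bin/python /usr/bin/carbon-cache.py --debug start

    Restarts after a config change should then go through daemontools itself (svc -t /service/carbon-cache) rather than by running the script by hand, so supervise always stays the parent.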


  • Raid1 with active and spare partition

    - by Daniel Baron
    I am having the following problem with a RAID1 software raid partition on my Ubuntu system (10.04 LTS, 2.6.32-24-server in case it matters). One of my disks (sdb5) reported I/O errors and was therefore marked faulty in the array. The array was then degraded with one active device. Hence, I replaced the hard disk, cloned the partition table and added all new partitions to my raid arrays. After syncing, all partitions ended up fine, having 2 active devices - except one of them. The partition which reported the faulty disk before, however, did not include the new partition as an active device but as a spare disk:

        md3 : active raid1 sdb5[2] sda5[1]
              4881344 blocks [2/1] [_U]

    A detailed look reveals:

        root@server:~# mdadm --detail /dev/md3
        [...]
        Number   Major   Minor   RaidDevice State
           2       8       21        0      spare rebuilding   /dev/sdb5
           1       8        5        1      active sync        /dev/sda5

    So here is the question: how do I tell my raid to turn the spare disk into an active one? And why has it been added as a spare device? Recreating or reassembling the array is not an option, because it is my root partition. And I can not find any hints on that subject in the Software RAID HOWTO. Any help would be appreciated.
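
    "spare rebuilding" is also the state a healthy resync shows, so the first thing to rule out is an interrupted or stalled rebuild. A sketch of the usual checks and, if it really is stuck, the usual remedy (device names copied from the output above):

        cat /proc/mdstat                        # is a recovery actually in progress?
        sudo mdadm /dev/md3 --remove /dev/sdb5  # only if the rebuild is genuinely stalled
        sudo mdadm /dev/md3 --add /dev/sdb5     # re-add; the resync should start again
        watch cat /proc/mdstat                  # should finish as [2/2] [UU]

    Once the resync completes, mdadm flips the member from "spare rebuilding" to "active sync" on its own; no separate "promote" command is needed.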


  • Can I tell if crashplan has backed up a particular file in a particular state?

    - by Chris Cogdon
    I would like to be able to tell, programmatically, if CrashPlan has backed-up a particular file, including the current updates to that file. I.e., that the current contents of a file are backed up. It's relatively easy to tell when CrashPlan last backed up a file: its file name appears in /usr/local/crashplan/log/backup_files.log.0, and with some accuracy, I could compare the backup time with the last modification time to the file, but that method appears to be somewhat dubious. A couple of methods I could think of, but I don't know how:

    - Compare the current file to CrashPlan's metadata about that file. This needs knowledge about the format of CrashPlan's "cache" files as well as the hashing system used. This might be achievable through the CLI, but the CLI is just a portal into the GUI, and I need something that's scriptable.
    - Restore the file to a temporary directory, and compare it. Unfortunately, there is no CLI to do restores; the GUI is the only way.

    I'll describe what I'm trying to achieve. It would be nice to know how to do the above, even if there are alternative methods for the following: I'm using CrashPlan for continuous backups to my PostgreSQL database, using WAL archives. In the current configuration, the archive command copies the files to an archive directory, which is backed up by CrashPlan. Every so often I manually confirm (or just trust) a group of WALs are backed up, and remove them from the archive directory, and occasionally do a restore through the GUI to ensure I can retrieve current and "deleted" WALs. The xlog directory is backed-up, too, so I have a good chance of doing a near-full restore even if a particular xlog hasn't been archived by PostgreSQL yet. I'd like to be able to automate this process, which necessitates either confirming the backup status and recency, or automating a restore for comparison purposes. (As a bonus, if the method is trustworthy, I could turn the "archive_command" from "copy to archive directory" into "confirm CrashPlan has backed up the current version", and do away with the archive directory completely). (And, yes, I'm doing regular pg_dumpall's, in addition to the above.)
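
    A crude but scriptable approximation of the timestamp heuristic described above (a sketch; the log path is the one from the question, and the WAL path is a hypothetical example - a hit only means "CrashPlan logged this file and it has not changed since the log was last written", not that the backed-up bytes were verified):

        f="/var/lib/postgresql/wal_archive/0000000100000000000000A1"   # hypothetical WAL file
        log=/usr/local/crashplan/log/backup_files.log.0
        if grep -qF "$f" "$log" && [ "$f" -ot "$log" ]; then
            echo "listed in the backup log and unmodified since then"
        else
            echo "cannot confirm this version is backed up"
        fi

    For WAL archiving specifically, a segment never changes after PostgreSQL hands it to archive_command, which makes this kind of "was it ever logged" check less dubious than it would be for ordinary files.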


  • Most awesomely bad hack

    - by Zypher
    As I sit watching one of my latest dirty, dirty hacks run, I started wondering what kind of dirty hacks you have created that are so bad they are awesome. We all have a few of them in our past - and they are probably still running in production somewhere, chugging along, somehow still working. Which reminds me of the hack we had to put into place when we were moving data centers. Our IVRs had to keep running, as the data center we were moving from was the primary DC, and the new primary wasn't quite ready to take traffic. So what do we do? Well, we answer the calls in DC1, then ship the SIP stream over the internet to DC2, 1900 miles away... that just felt oh so wrong. So the question is, what is one (or more) of your awesomely bad hacks?


  • How to discover true identity of hard disk?

    - by F21
    I have 2 fake external hard drives that claim to have a storage capacity of 2TB. I pulled the enclosures apart, and the hard drives seem to be refurbished ones with their labels replaced by Barracuda LP 2000 GB labels (the serial numbers on both labels are the same). Interestingly, one of the drives has 160G written on it in pencil. However, the counterfeiters seem to have done something to the firmware, because CrystalDiskInfo reports them as 2TB ST2000DL003 drives. I then deleted the 1.81 TB partition in Windows disk management and tried to create a new one and format it. Once I get to this point, the drives make some noise that is common to dying drives. I am not interested in using these drives for production, but I am interested in finding their true identity (manufacturer/serial number/model number, etc.) and restoring them to their factory defaults with the right capacity. Can this be done without any special equipment? This would be an interesting learning exercise. In the pictures of the drives and the CrystalDiskInfo screenshots (not reproduced here), the serial numbers reported for the two different drives are the same. How is this done? Did they have to tamper with the controller board? I would assume that changing the firmware doesn't change the serial number at all.
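
    For the "true identity" part, it is worth asking the drive itself rather than the Windows tools, although a reflashed firmware can lie at this level too (a sketch; the device name is an example, and none of these commands restore the original capacity by themselves):

        sudo smartctl -i /dev/sdX    # model, serial and firmware string as the drive reports them
        sudo hdparm -I /dev/sdX      # full IDENTIFY data, including the native max address
        sudo hdparm -N /dev/sdX      # is a Host Protected Area hiding part of the drive?

    If even the IDENTIFY data reports 2 TB, the capacity translation was changed in firmware, which generally cannot be undone without the manufacturer's own factory tools.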


  • sudoer scheme for a web developer that retains future control of a server?

    - by Tchalvak
    Background: I have a server that I'm looking to set up, and provide access to another web developer. I don't want to put many constraints on him, though I wouldn't mind isolating the site that he'll be developing from others on the server that I will develop.

    The problem: mainly what I want is to make sure that I retain control over the server in the future. I want to reserve the ability to create/promote/demote and other administrative functions that don't deal with web software. If I make him an admin, he can sudo su - and become root and remove root control from me, for example. What is a good setup for the sudoers file so that he can do things like:

    - install software (through apt-get)
    - restart apache
    - access mysql
    - configure mysql/apache
    - reboot
    - edit web-development configuration-type files in /etc

    And can't do things like:

    - take away other admin permissions
    - change the root password
    - have control over other security/administrative functions

    Example sudoers files that accomplish something like that could be useful; I'm sure that people have needed to do this before.
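
    A sketch of a sudoers fragment along these lines (the username "webdev" and the exact paths are assumptions; edit it with "visudo -f /etc/sudoers.d/webdev" so a syntax error cannot lock sudo up):

        Cmnd_Alias WEB_PKG  = /usr/bin/apt-get update, /usr/bin/apt-get install *
        Cmnd_Alias WEB_SRV  = /etc/init.d/apache2, /etc/init.d/mysql, /sbin/reboot
        Cmnd_Alias WEB_EDIT = sudoedit /etc/apache2/*, sudoedit /etc/mysql/*
        webdev  ALL = (root) WEB_PKG, WEB_SRV, WEB_EDIT

    Two caveats worth stating plainly: apt-get install of an arbitrary package runs that package's maintainer scripts as root, and editor or service-script escapes are a classic escalation path, so a list like this limits accidents far better than it resists a determined admin.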


  • Have an unprivileged non-account user ssh into another box?

    - by Daniel Quinn
    I know how to get a user to ssh into another box with a key:

        ssh -l targetuser -i path/to/key targethost

    But what about non-account users like apache? As this user doesn't have a home directory to which it can write a .ssh directory, the whole thing keeps failing with:

        $ sudo -u apache ssh -o StrictHostKeyChecking=no -l targetuser -i path/to/key targethost
        Could not create directory '/var/www/.ssh'.
        Warning: Permanently added '<hostname>' (RSA) to the list of known hosts.
        Permission denied (publickey).

    I've tried variations using -o UserKnownHostsFile=/dev/null and setting $HOME to /dev/null, and none of these have done the trick. I understand that sudo could probably fix this for me, but I'm trying to avoid having to require a manual server config since this code will be deployed on a number of different environments. Any ideas? Here's a few examples of what I've tried that don't work:

        $ sudo -u apache export HOME=path/to/apache/writable/dir/ ssh -o StrictHostKeyChecking=no -o UserKnownHostsFile=path/to/apache/writable/dir/.ssh/known_hosts -l deploy -i path/to/key targethost
        $ sudo -u apache ssh -o StrictHostKeyChecking=no -o UserKnownHostsFile=path/to/apache/writable/dir/.ssh/known_hosts -l deploy -i path/to/key targethost
        $ sudo -u apache ssh -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null -l deploy -i path/to/key targethost

    Eventually, I'll be using this solution to run rsync as the apache user.
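
    Two observations on the attempts above, plus a sketch (paths are placeholders). First, "sudo -u apache export HOME=... ssh ..." cannot work, because export is a shell builtin rather than a command; env(1) is the way to hand a variable to a single command. Second, the final "Permission denied (publickey)" often just means the apache user cannot read the key file, so check that before anything else:

        sudo -u apache test -r path/to/key && echo "key readable" || echo "apache cannot read the key"

        sudo -u apache env HOME=/tmp \
            ssh -o StrictHostKeyChecking=no \
                -o UserKnownHostsFile=/dev/null \
                -o IdentitiesOnly=yes \
                -i path/to/key -l deploy targethost true

    If that works, the same options can be passed to rsync through its -e/--rsh flag.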


  • Problems connecting Centos to the network when running in VMWare

    - by Sakin
    Hi, I installed CentOS on VMware running on Windows XP. When trying to configure it to connect to the internet, I get an error message when trying to bring up the network interface:

        [root@VMLinux ~]# /etc/init.d/network start
        Bringing up loopback interface:                            [  OK  ]
        Bringing up interface eth0:
        Determining IP information for eth0... failed              [FAILED]

    The VM is running on a machine that has access to the network, and I tried it on two different networks that have DHCP enabled. I tried to configure VMware to use a bridged connection as well as a NAT connection. An image of Ubuntu runs fine on the same VMware. What am I missing here? Thanks.
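
    A couple of quick checks from inside the guest (a sketch; eth0 is the interface named in the error). A CentOS image that was copied or had its virtual NIC changed often carries a stale HWADDR, and DHCP then fails even though the host network is fine:

        ip link show eth0                               # link state and the MAC the guest actually has
        cat /etc/sysconfig/network-scripts/ifcfg-eth0   # does HWADDR here match that MAC?
        dhclient eth0                                   # one-shot DHCP attempt; then check /var/log/messages

    If the MACs differ, either update HWADDR to the current one or remove the line and bring the interface up again.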


  • Where is the encfs volume key stored?

    - by Waldorf
    I am trying to use encfs in reverse mode. I understand that the passphrase is used to encrypt a key, which is then stored encrypted in the encfs6.xml file. What I do not understand is the following:

    - I create an encrypted virtual filesystem of a folder using passphrase A.
    - I unmount this folder.
    - I delete all contents, including the encfs6.xml file.

    If I then try to do the same with another passphrase, I would expect that a new encfs6.xml would be created. However, I get the following error message: "Error decoding volume key, password incorrect". So I wonder, what volume key is incorrect? I thought it was in the encfs6.xml file?
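
    One likely explanation, hedged because it depends on the encfs version: in --reverse mode the configuration is normally written into the source (plaintext) directory as a hidden file, .encfs6.xml, and the ENCFS6_CONFIG environment variable can point somewhere else entirely, so deleting a visible encfs6.xml may not remove the config encfs is actually reading. A quick check:

        ls -la /path/to/source | grep -i encfs6    # look for a hidden .encfs6.xml
        echo "${ENCFS6_CONFIG:-not set}"           # is an override in effect?

    Removing (or moving aside) whichever file turns up should make encfs offer to create a fresh volume key with the new passphrase.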

