Search Results

Search found 36619 results on 1465 pages for 'damn small linux'.


  • What can I do when my Ubuntu machine freezes? [closed]

    - by Avihai Marchiano
    From time to time Firefox causes my Ubuntu machine to freeze (nothing else is running on the machine except Firefox). I can't move the mouse or move windows with the keyboard. In some cases I wait a few minutes and then close the last open tab in Firefox, which solves the problem; in other cases I have to restart. Is there any way to prevent this, or to limit Firefox in some way? On Windows, in most cases, even at 100% CPU you can press Ctrl-Alt-Delete and get the Task Manager. Is there a high-priority key combination in Linux that gives you a terminal? Or any other way? Running Ubuntu x86 with GNOME, 2 GB RAM, dual-core CPU.
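
    A first thing worth trying when the desktop locks up (a hedged sketch, not a guaranteed fix; on Ubuntu of this era X runs on tty7 and text consoles on tty1-6, so a console is often still reachable even when GNOME is frozen):

        Ctrl+Alt+F2          # switch to a text console (tty2) and log in
        pkill -9 firefox     # kill the runaway process; or run top and press k
        Ctrl+Alt+F7          # switch back to the X session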

    Read the article

  • How to know if a server is physical or virtual? [duplicate]

    - by tachomi
    This question already has an answer here: Is there any way to know if your supposedly fully dedicated server is really a virtually resource-shared machine? [duplicate] (5 answers) How can I tell whether my provider is giving me a dedicated physical server or a virtual server? The provider is in another country, so I don't have the chance to go and see the servers myself; all of them run Linux and are managed over SSH. The whole contract was arranged over the internet and by phone. Are there any commands, directories to inspect, or other ways to get this information? They tell me, of course, that the servers are physical, but when they do support work or a hardware/software upgrade it doesn't take long. For example, when I ask for a new server with specific requirements, in the end they give me more infrastructure than the one I asked for, and of course nobody gives extras without an extra price. I hope that gives enough of a hint.
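
    A few commands that often reveal a hypervisor from inside the guest (a sketch; the exact output depends on the virtualization platform and distribution, and none of these are conclusive on their own):

        sudo dmidecode -s system-manufacturer        # frequently reports QEMU, VMware, Inc., Xen, etc. on a VM
        grep -c '^flags.*hypervisor' /proc/cpuinfo   # non-zero usually means a hardware-assisted hypervisor is present
        lscpu | grep -i hypervisor                   # if lscpu is available
        ls /proc/xen 2>/dev/null                     # present on Xen paravirtualized guests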

    Read the article

  • My server hostname doesn't work? [on hold]

    - by xSpartanCx
    I've got a Raspberry Pi running the Raspbian server edition, a modified Debian that runs well on the Pi. My problem is that the only way I can SSH into it with PuTTY is through its static IP. My router doesn't recognize the hostname; it shows the MAC address as the name. This (I think) is why the Pi doesn't serve my website online. The only way I've gotten it to work is by using my other Linux server to forward requests with virtual hosts, and that has to use the IP address too. However, now that the other server is off, the website doesn't work and I can't SSH to the Pi (or find it anywhere on the network) by hostname.
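
    One common workaround for name resolution on a LAN whose router does not register DHCP hostnames in DNS (a sketch; assumes the Pi can install packages and the client resolves .local names via Avahi or Bonjour - it does not make the site reachable from the internet):

        sudo apt-get install avahi-daemon     # on the Pi: announce its hostname over mDNS
        ssh [email protected]             # hypothetical hostname; adjust to the Pi's actual name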

    Read the article

  • Typical Hadoop setup for remote job submission

    - by Artii
    I am still a bit new to Hadoop and am currently setting up a small test cluster on Amazon AWS. My question is about how to structure the cluster so that jobs can be submitted from remote machines. Currently I have 5 machines: 4 form the Hadoop cluster itself (NameNode, YARN, etc.), and one machine is used as a manager (Cloudera Manager). I'm going to describe my thinking on the setup, and if anyone can chime in on the points I am not clear about, that would be great. Thinking about the best layout for a small cluster, I decided to expose only the manager machine and use it to submit all the jobs. The other machines can see each other but are not accessible from the outside world. I have a conceptual idea of how to do this, but I am not sure how to properly go about it, so if anyone could point me in the right direction that would be great. Another big point: I want to be able to submit jobs to the cluster through the exposed machine from a client machine (which might be Windows). I am not so clear on this part either. Do I need Hadoop installed on the client machine in order to use the normal hadoop commands and to write/submit jobs from, say, Eclipse or something similar? To sum it up, my questions are: Is this an OK setup for a small test cluster? How can I use one exposed machine to submit/route jobs to the cluster without running any of the Hadoop nodes on it? How do I set up a client machine to submit jobs to a remote cluster, with an example of how to do it on Windows? And are there any reasons not to use Windows as a client machine in this setup? I would greatly appreciate any advice or help on this. Thanks.
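
    A rough sketch of the client-side configuration for a YARN cluster (the hostnames below are illustrative and the ports are common defaults, not details from the question): the exposed gateway machine only needs the Hadoop client libraries plus configuration pointing at the internal daemons, no daemons of its own.

        <!-- core-site.xml on the gateway/client -->
        <property>
          <name>fs.defaultFS</name>
          <value>hdfs://namenode.internal:8020</value>
        </property>

        <!-- yarn-site.xml on the gateway/client -->
        <property>
          <name>yarn.resourcemanager.address</name>
          <value>resourcemanager.internal:8032</value>
        </property>

    With that in place, hadoop jar run on the gateway is submitted to the remote cluster; a Windows client would typically go through the same gateway (for example over SSH) rather than talking to the private nodes directly.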

    Read the article

  • s3cmd fails too many times

    - by alfish
    It used to be my favorite backup transport agent, but now I frequently get this result from s3cmd on the very same Ubuntu server/network:

        root@server:/home/backups# s3cmd put bkup.tgz s3://mybucket/
        bkup.tgz -> s3://mybucket/bkup.tgz  [1 of 1]
         36864 of 2711541519     0% in    1s    20.95 kB/s  failed
        WARNING: Upload failed: /bkup.tgz ([Errno 32] Broken pipe)
        WARNING: Retrying on lower speed (throttle=0.00)
        WARNING: Waiting 3 sec...
        bkup.tgz -> s3://mybucket/bkup.tgz  [1 of 1]
         36864 of 2711541519     0% in    1s    23.96 kB/s  failed
        WARNING: Upload failed: /bkup.tgz ([Errno 32] Broken pipe)
        WARNING: Retrying on lower speed (throttle=0.01)
        WARNING: Waiting 6 sec...
        bkup.tgz -> s3://mybucket/bkup.tgz  [1 of 1]
         28672 of 2711541519     0% in    1s    18.71 kB/s  failed
        WARNING: Upload failed: /bkup.tgz ([Errno 32] Broken pipe)
        WARNING: Retrying on lower speed (throttle=0.05)
        WARNING: Waiting 9 sec...
        bkup.tgz -> s3://mybucket/bkup.tgz  [1 of 1]
         28672 of 2711541519     0% in    1s    18.86 kB/s  failed
        WARNING: Upload failed: /bkup.tgz ([Errno 32] Broken pipe)
        WARNING: Retrying on lower speed (throttle=0.25)
        WARNING: Waiting 12 sec...
        bkup.tgz -> s3://mybucket/bkup.tgz  [1 of 1]
         28672 of 2711541519     0% in    1s    15.79 kB/s  failed
        WARNING: Upload failed: /bkup.tgz ([Errno 32] Broken pipe)
        WARNING: Retrying on lower speed (throttle=1.25)
        WARNING: Waiting 15 sec...
        bkup.tgz -> s3://mybucket/bkup.tgz  [1 of 1]
         12288 of 2711541519     0% in    2s     4.78 kB/s  failed
        ERROR: Upload of 'bkup.tgz' failed too many times. Skipping that file.

    This happens even for files as small as 100 MB, so I suppose it's not a size issue. It also happens when I use put with the --acl-private flag (s3cmd version 1.0.1). I would appreciate it if you could suggest a solution or a lightweight alternative to s3cmd. Thanks
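
    One workaround worth trying (a sketch; s3cmd 1.0.x has no multipart-upload support, so uploading the archive in smaller pieces sometimes gets past connections that are dropped partway through a large PUT - the bucket name follows the question, the part size is arbitrary):

        split -b 200M bkup.tgz bkup.tgz.part-
        for p in bkup.tgz.part-*; do s3cmd put "$p" s3://mybucket/; done
        # after downloading the parts again: cat bkup.tgz.part-* > bkup.tgz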

    Read the article

  • NAT, iptables and problematic ports

    - by Rajie
    I am building a small office network with virtual machines. My setup is this: Computer A: gateway, IP 1.1.1.1, iptables used for NAT [eth0 = public internet, DHCP; eth1 = gateway for the LAN]. Computer B: client, IP 1.1.1.2, using Computer A as its gateway. NAT is working, and Computer B can access the internet through A's gateway. I redirected some incoming ports from A to B (for instance, if A receives a request on port 80, it goes automatically to Computer B's Apache). The thing is that I do not really understand how to open/close ports for Computer B from Computer A. I know how to close a port: iptables -A INPUT -p tcp --dport 80 -j DROP. That refuses all incoming (not outgoing) connections to port 80, but it only applies to the main interface, eth0. I tried, for instance, to drop incoming and outgoing connections for Computer B on port 80: iptables -A FORWARD -i eth1 -o eth0 -p tcp --dport 80 -j DROP and iptables -A FORWARD -i eth0 -o eth1 -p tcp --dport 80 -j DROP. But it does not work, and I cannot figure out what I am doing wrong. Any clue?
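
    A sketch of FORWARD rules that may behave more like what is intended (addresses and interfaces as in the question). Note that in the eth0-to-eth1 direction a --dport 80 match only catches traffic whose destination port is 80, i.e. the DNAT'd web traffic going to B, while B's own outgoing web requests have destination port 80 in the eth1-to-eth0 direction and source address 1.1.1.2:

        # block Computer B from browsing the web through the gateway
        iptables -A FORWARD -i eth1 -s 1.1.1.2 -p tcp --dport 80 -j DROP
        # block outside clients from reaching the service DNAT'd to B's port 80
        iptables -A FORWARD -i eth0 -d 1.1.1.2 -p tcp --dport 80 -j DROP
        # FORWARD is evaluated top-down, so these must come before any broad ACCEPT rule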

    Read the article

  • Backing up VMs to a tape drive

    - by Aljoscha Vollmerhaus
    I've got myself one of these fancy tape drives, an HP LTO2 with 200/400 GB cartridges. The st driver reports it like this: scsi 1:0:0:0: Sequential-Access HP Ultrium 2-SCSI T65D. I can store and retrieve files like a charm using tar; both tar cf /dev/st0 somedirectory and tar xf /dev/st0 work flawlessly. However, what I really would like to back up are LVM LVs. They contain entire virtual machines with varying partition layouts, so using mount and tar is not an option. I've tried using something like dd if=/dev/VG/LV bs=64k of=/dev/st0 to achieve this, but there seem to be various problems associated with this approach. Firstly, I would like to be able to store more than one LV on a single tape. I guess I could seek to concatenate the data on the tape, but I think this would not work very well in an automated scenario with many different LVs of various sizes. Secondly, I would like to store a small XML file along with the raw data that contains some information about the VM contained in the LV. I could dump everything to a directory and tar it up - not very desirable, as I would have to set aside huge amounts of scratch space. Is there an easier way to achieve this? Thirdly, from googling around it seems like it would be wise to use something like mbuffer when writing to the tape, to prevent what Wikipedia calls "shoe-shining" the tape. However, I can't get anything useful done with mbuffer. The mbuffer man page suggests this for writing to a tape device: mbuffer -t -m 10M -p 80 -f -o $TAPE. So I've tried this: dd if=/dev/VG/LV | mbuffer -t -m 10M -p 80 -f -d 64k -o /dev/st0. Note the added "-d 64k" to account for the 64k block size of the tape. However, reading data back from a tape written in this way never seems to yield any useful results - dd has been running for ages now, and has managed to transfer only 361M of data from the tape. What's wrong here?
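
    A sketch of one way to keep several LVs plus their metadata files on a single cartridge (assumes the mt-st tools are installed; /dev/nst0 is the non-rewinding device, so each write ends with a filemark and the next write starts a new tape file, and reads must use the same block size as the writes):

        mt -f /dev/nst0 rewind
        tar -b 128 -cf /dev/nst0 vm1.xml               # tape file 0: metadata for the first VM
        dd if=/dev/VG/vm1 bs=64k of=/dev/nst0          # tape file 1: its raw image
        tar -b 128 -cf /dev/nst0 vm2.xml               # tape file 2: metadata for the next VM
        dd if=/dev/VG/vm2 bs=64k of=/dev/nst0          # tape file 3: its raw image
        # restore the first image: rewind, skip one filemark, read with the SAME block size
        mt -f /dev/nst0 rewind
        mt -f /dev/nst0 fsf 1
        dd if=/dev/nst0 bs=64k of=/dev/VG/vm1

    The LV names and XML files are placeholders; an index of which tape file holds which VM still has to be kept somewhere (for example in the XML files themselves).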

    Read the article

  • Multiple VM environment for developing/testing

    - by Hippo
    I was asked to create a setup for automated deployment, configuration and installation/updates of websites. A bunch of small websites will be bundled on one server; if more websites come along, a new server will be created. I decided to use Chef for this task. All servers will be running Ubuntu at the same version and configuration. The actual question: everything needs to be tested properly before starting live deployment, so what is the best virtualisation tool to run multiple (5-10) virtual machines on an Ubuntu laptop? Requirements: easy setup, fast (cloning/snapshots of VMs), all VMs easily connected to the internet and able to communicate with each other (open source / free would be great). So far I have looked into: VirtualBox, which seems more for desktop virtualisation (cloning not possible, so every new machine would need to be installed), and VMware Player. Any suggestions? If there are any questions about what I am doing, please comment on this question and I will answer as soon as possible. This question is not about the actual setup; it is about a nice working environment.

    Read the article

  • Linux Port 80 to redirect to a Windows box

    - by Richard Staehler
    I have 2 servers here at work. One is a Windows Server 2008 R2 box (for safety's sake, let's say 192.168.1.100) and the other is a Fedora 14 box (192.168.1.101). Currently, when you hit our subdomain, x.test.com, our routers send it to the Fedora box, and since Apache is installed and listening on port 80, it displays the Fedora Apache test page. Obviously I don't otherwise use port 80 on this machine; however, I do run NAGIOS on it, and it's always nice to be able to access that from anywhere in the world. So when I want to access it, I just type x.test.com/nagios. Now here comes the dilemma: on the Windows R2 box, we recently installed a program that requires us to set up a web server using IIS7. Because of this application, I'm going to be creating a new subdomain called y.test.com, but since we only have one WAN/router, it will still get pointed at our Fedora box. That being said, it wants to use port 80 as well (or whatever port I damn well wish to assign it). So my question is: since our router is pointing at the Fedora 14 box (.101), and I want to make sure I can still access NAGIOS from anywhere in the world, how do I tell Apache (httpd) to redirect port 80 to the other server (.100)? If that's not possible, what are my other options? I have rinetd installed on Fedora and have even tried the line 192.168.1.101 80 192.168.1.100 80, but it didn't seem to work "because port 80 was already bound". Thoughts? And thanks!
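
    One common approach (a sketch; assumes mod_proxy and mod_proxy_http are enabled, name-based virtual hosting is configured, and y.test.com resolves to the Fedora box just like x.test.com): let Apache keep serving x.test.com/nagios as it does now, and reverse-proxy only the new hostname to the Windows/IIS machine.

        <VirtualHost *:80>
            ServerName y.test.com
            ProxyPreserveHost On
            ProxyPass        / http://192.168.1.100/
            ProxyPassReverse / http://192.168.1.100/
        </VirtualHost>

    Because the choice is made on the Host header, both subdomains can share port 80 on the Fedora box without rinetd.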

    Read the article

  • Server configurations for hosting MySQL database

    - by shyam
    I have a web application which uses a MySQL database hosted on a virtual server. I've been using this server since I started the application, when the database was really small. Now it has grown and the server can no longer handle the database, causing frequent db errors. I'm planning to get a new server and I need suggestions for that. As I said, the db is now 9 GB and is growing considerably fast. There are a number of tables with millions of rows which are frequently updated and queried. The most frequent error the db shows is "Lock wait timeout exceeded". Previously there used to be "The total number of locks exceeds the lock table size" errors too, but I could avoid those by increasing the InnoDB buffer pool size. Please suggest what configuration I should look for in the server I buy. I read somewhere that the db should ideally have a buffer pool larger than the size of its data, so in my case I guess I'd need more than 9 GB of memory. What other things should I look for in the server? Just tell me if I should give you more info about the ...
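
    A starting point for the InnoDB side of my.cnf on a dedicated database server (a sketch, not a tuned configuration; the 12G figure assumes a machine with roughly 16 GB of RAM, comfortably above the ~9 GB data set):

        [mysqld]
        innodb_buffer_pool_size        = 12G   # large enough to keep the working set in memory
        innodb_log_file_size           = 256M  # larger redo logs help write-heavy workloads
        innodb_flush_log_at_trx_commit = 2     # trades a little durability for write throughput
        innodb_file_per_table          = 1

    Lock wait timeouts are often as much about long transactions and missing indexes as about hardware, so the slow query log is worth checking alongside any hardware upgrade.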

    Read the article

  • How to migrate a running KVM (with full disk copy) to another node?

    - by klipz
    I'm doing tests on KVM, and I'd like to see whether I can do a hot migration, meaning the virtual machine won't stop running during the migration (a few seconds of freeze is OK). I use a small cluster for my tests: kvm1, kvm2 and kvmnfs. kvm1 and kvm2 run the virtual machines; kvmnfs is an NFS server, and it is mounted on /KVM on both kvm1 and kvm2. To migrate a VM (only its RAM, in fact) from kvm1 to kvm2, I run the same kvm command on kvm2 (with -incoming tcp:0:4444) as on kvm1, then I use "migrate -d tcp:kvm2:4444": it works great, since the VM file is shared by both machines. Now I want to do a full migration (RAM + disk) of a local VM file (no more NFS) from kvm1 to kvm2. I tried to create an empty file with touch on kvm2 and use the same kvm command line plus "-incoming ...". Then on kvm1 I used "migrate -d tcp:kvm2:4444": it copies everything, then... the VM fails (any disk I/O gives an I/O error)! And my VM file on kvm2, the one I created with touch, still has a size of 0 bytes. What am I doing wrong? What is the exact command to use on kvm2? And what is the command to launch, in the monitor, on kvm1?
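
    A sketch of the block-migration variant (paths and the 20G size are placeholders; the destination image has to exist with the same size as the source before the migration starts, and the -b flag asks QEMU to copy the disk contents along with the RAM):

        # on kvm2: pre-create a destination image of the right size, then start the listener
        qemu-img create -f raw /var/lib/kvm/vm1.img 20G
        kvm <same options as on kvm1, pointing at /var/lib/kvm/vm1.img> -incoming tcp:0:4444

        # on kvm1, in the QEMU monitor:
        migrate -d -b tcp:kvm2:4444

    Whether -b is available depends on the QEMU/KVM version, so it is worth checking against "help migrate" in the monitor first.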

    Read the article

  • Accidentally mounted a ReiserFS drive as MBR on my Windows box - how do I recover?

    - by Ryan
    I had a WD NetCenter with a 160 GB drive that kept dropping off the network. I opened up the enclosure and removed the hard drive, then connected it to a Windows box without knowing the drive used ReiserFS. When mounting it on the Windows box, I chose "MBR" as the filesystem. 70 GB of data is corrupted: 90% of the data is Word documents, Excel spreadsheets and JPGs - all mission critical. I attempted recovery on a Linux box (Ubuntu) using TestDisk: I could see the container, but couldn't get anything out - according to TestDisk this was because I chose "none" as the filesystem. I attempted recovery using Nucleus Kernel Recovery for Windows: 98% of what was recovered is incomplete and/or unusable. I need to know if a way exists to recover or rebuild the original ReiserFS MBR, or what tools/techniques might give me the best results in recovering the data. I found a Windows version of TestDisk and ran it yesterday - here are the results:

        TestDisk 6.14-WIP, Data Recovery Utility, May 2012
        Christophe GRENIER <[email protected]>
        http://www.cgsecurity.org

        Disk /dev/sda - 160 GB / 149 GiB - CHS 19457 255 63
        The harddisk (160 GB / 149 GiB) seems too small! (< 519 GB / 483 GiB)
        Check the harddisk size: HD jumpers settings, BIOS detection...

        The following partitions can't be recovered:
             Partition              Start         End       Size in sectors
        >  ReiserFS 3.6          62 241  8    19458  0 18     311581568
           ReiserFS 3.6          62 248 55    19458  8  2     311581568
           ReiserFS 3.6          62 254 37    19458 13 47     311581568
           ReiserFS 3.6          63   6 28    19458 20 38     311581568
           ReiserFS 3.6          63  13 11    19458 27 21     311581568
           ReiserFS 3.6          63  21 43    19458 35 53     311581568
           ReiserFS 3.6          63  27 41    19458 41 51     311581568
           ReiserFS 3.6          63  37 35    19458 51 45     311581568
           ReiserFS 3.6          63  54 20    19458 68 30     311581568
           ReiserFS 3.6          63  76 26    19458 90 36     311581568

    Read the article

  • Which server software and configuration to retrieve mail from multiple POP servers, routing by address to the correct user

    - by rolinger
    I am setting up a small email server on a Debian machine, which needs to pick up mail from a variety of POP servers and figure out who to deliver it to from the address, but I'm not clear what software will do what I need, although it seems like a very simple question! For example, I have 2 users, Alice and Bob. Any email to [email protected] ([email protected], etc.) should go to Alice; all other mail to domain.example.com should go to Bob. Any email to [email protected] should go to Bob, and [email protected] should go to Alice. Anything to *@bobs.place.com should go to Bob. And so on... The idea is to pull together a load of mail addresses that have built up over the years and present them all as a single mailbox for Bob and another one for Alice. I'm expecting something like Postfix + Dovecot + Amavis + SpamAssassin + SquirrelMail to fit the bill, but I'm not sure where the above routing comes in. Can Postfix deal with it as a set of defined regular expressions, or is it a job for Amavis, or something else entirely? Do I need fetchmail in this mix, or is its role now included in one of the other components above? I think of it as content filtering, but everything I read about content filtering is focussed on detecting spam rather than routing email.
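
    A sketch of how the routing half could be expressed in Postfix with a regexp virtual alias map (the patterns below are illustrative and first match wins; fetchmail or getmail would still be needed to pull the mail down from the remote POP accounts and hand it to Postfix for delivery):

        # /etc/postfix/virtual_regexp
        # enable in main.cf with: virtual_alias_maps = regexp:/etc/postfix/virtual_regexp
        /^alice(\..*)?@domain\.example\.com$/   alice
        /^.*@domain\.example\.com$/             bob
        /^.*@bobs\.place\.com$/                 bob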

    Read the article

  • How long will a USB key with an OS installed on it last?

    - by Xananax
    I've heard numerous times that installing an OS on a USB key is a bad thing to do, as USB keys typically support a certain number of writes before dying, and installing an OS on one will wear it out (unless it's used sporadically for rescue purposes). Nonetheless, I am very tempted to install some flavour of Linux (Ubuntu or Arch, I haven't decided yet) on a small, transportable USB key. My problem is that although you read a lot that it's "bad", you are never told how bad. How long would it last (on, say, a PC that is on 24/7)? A month? A year? Five years? Are there recipes to make it last longer? Is there any reason besides wear that should prevent me from attempting this? I mean, if it can be calculated, then I could theoretically shield myself by doing regular backups onto another key when the deadline gets close (for example). Notes: I am not talking about using a USB key as a live CD, but actually installing the OS on it. When I say "USB key", I refer to the little USB sticks with flash memory, not an external USB hard drive. For the curious, my reason is that I work in a lot of different places, on different PCs, and I have a very customized session, with my own WM, my own key bindings, my own scripts, a selection of plugins for Firefox and Chrome, etc., and currently I am synchronizing all this through a mix of Dropbox, git and transporting files on USB keys, and it's becoming a chore. It would be much simpler for me to just plug in the USB key, mount the hard disk of the PC I am using, and use its processing power without actually needing to install any OS on it.

    Read the article

  • Geographically distributed file system with preferred locality

    - by dpb
    Hi All -- I'm building an application that needs to distribute a standard file server across a few sites over a WAN. Basically, each site needs to write a lot of miscellaneous files of varying size (some in the 100s of MB range, but most small), and the application is written such that collisions aren't a problem. I'd like to have a system set up that meets the following qualifications: Each site can store files in a shared "namespace", that is, all the files show up in the same filesystem. Each site does not send data over the WAN unless necessary, i.e. there is local storage on each side of the WAN that is "merged" into the same logical filesystem. Linux & free ($$$) is a must. Basically, something like a central NFS share would meet most of the requirements; however, it would not allow the locally written data to stay local - all data from the remote sides of the WAN would be copied locally all the time. I have looked into Lustre, and have run some successful tests with it; however, it appears to distribute files fairly uniformly across the distributed storage. I have dug through the documentation and have not found anything that will automatically "prefer" local storage over remote storage. Even something that went with the lowest-latency storage would be fine; it would work most of the time, which would meet this application's requirements. Any ideas?

    Read the article

  • Running multiple sites on a LAMP with secure isolation

    - by David C.
    Hi everybody, I have been administering a few LAMP servers with 2-5 sites on each of them. These are basically owned by the same user/client, so there are no security issues apart from attacks through vulnerable daemons or scripts. I am building my own server and would like to start hosting multiple sites. My first concern is... ISOLATION. How can I prevent a c99 shell script from defacing all the virtual hosts? Also, how should I prevent such a script from reading/writing the other sites' directories? (It is easy to cat a config.php from another site and then get into its MySQL database.) My server is a VPS with 512 MB, burstable to 1 GB. Among the free hosting control panels, is there any small one which works on my VPS (and which is maybe compatible with the security approach I would like to take)? Currently I am not planning to host over 10 sites, but I would not accept that a client/hacker could navigate into unwanted directories or, worse, run malicious scripts. FTP management would be fine; I don't want to complicate things with SSH isolation. What is the best practice in this case? Basically, what do hosting companies do to sleep well? :) Thanks very much! David
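
    A minimal sketch of one isolation layer under mod_php (paths are illustrative): confine each vhost's PHP to its own tree with open_basedir, so a shell dropped into one site cannot read another site's config.php. Running each site as a separate system user (e.g. php-fpm pools or apache2-mpm-itk) gives stronger separation on top of this.

        <VirtualHost *:80>
            ServerName site1.example.com
            DocumentRoot /var/www/site1/htdocs
            php_admin_value open_basedir "/var/www/site1/:/tmp/"
            php_admin_flag  allow_url_include Off
        </VirtualHost>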

    Read the article

  • Website memory problem

    - by Toktik
    I have CentOS 5 installed on my server, which is a VPS. I have a site with around 150 users online at any time. At first glance the site looks OK, but when I follow links I sometimes receive a PHP "Out of memory" error. It looks like this:

        Fatal error: Out of memory (allocated 36962304) (tried to allocate 7680 bytes) in /home/host/public_html/sites/all/modules/cck/modules/fieldgroup/fieldgroup.install on line 100

    The amount it fails to allocate is always very small. On average I have 30% CPU load and 25% RAM usage, so I don't think it is a physical memory problem. My PHP memory limit is set to 1500 MB. My Apache error log looks like this:

        [Thu Sep 30 17:48:59 2010] [error] [client 91.204.190.5] Out of memory, referer: http://www.host.com/17402
        [Thu Sep 30 17:48:59 2010] [error] [client 91.204.190.5] Premature end of script headers: index.php, referer: http://www.host.com/17402
        [Thu Sep 30 17:48:59 2010] [error] [client 91.204.190.5] Out of memory, referer: http://www.host.com/17402
        [Thu Sep 30 17:48:59 2010] [error] [client 91.204.190.5] Premature end of script headers: index.php, referer: http://www.host.com/17402
        [Thu Sep 30 17:49:00 2010] [error] [client 91.204.190.5] File does not exist: /home/host/public_html/favicon.ico

    I have not seen this on this server before; the problem appeared by itself. Besides this, I am receiving some server errors by mail:

        cpsrvd failed @ Fri Sep 24 16:45:20 2010. A restart was attempted automagically.
        Service Check Method: [tcp connect]
        Failure Reason: Unable to connect to port 2086

    The same happens for tailwatchd. Support tried, and can't help me...
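
    One hedged observation: the fatal error reports only about 35 MB allocated (36962304 bytes) before failing, far below the configured 1500 MB, which usually means either the limit actually in effect for that request differs from the global php.ini value (a per-vhost, .htaccess or application-level override) or the VPS itself is refusing the allocation. A quick way to see what PHP really gets:

        php -r 'echo ini_get("memory_limit"), PHP_EOL;'   # limit seen by the CLI
        # and, temporarily, a phpinfo() page in the affected vhost's docroot
        # to see the limit Apache/mod_php actually applies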

    Read the article

  • What is the fastest way to move 1 petabyte from one storage system to a new one?

    - by marc.riera
    First of all, thanks for reading, and sorry for asking something related to my job. I understand that this is something I should solve by myself, but as you will see it's a bit difficult. A short description: Now: storage = 1 PB using DDN S2A9900 storage for the OSTs, 4 OSSes, 10 GigE network (Lustre 1.6); 100 compute nodes with 2x InfiniBand; 1 InfiniBand switch with 36 ports. After: storage = the previous storage plus another 1 PB using DDN S2A 990 or LSI E5400 (still to be decided) (Lustre 2.0); 8 OSSes, 10 GigE network; 100 compute nodes with 2x InfiniBand. Previous experience: we transferred 120 TB in less than 3 days using the following command: tar -C /old --record-size 2048 -b 2048 -cf - dir | tar -C /new --record-size 2048 -b 2048 -xvf - 2>&1 | tee /tmp/dir.log. So, the big problem here: doing the math, I conclude that we are going to need a month to transfer the data from one side to the other. During this time the researchers would have to step back, and I'm personally not happy with this. I mention that we have InfiniBand connections because I think there may be a chance to use them to transfer the data, using 18 compute nodes (18 x 2 IB = 36 ports) to move it from one storage system to the other. I'm trying to figure out whether the IB switch will handle all the traffic, but even if it just burns up, it would still go faster than using 10 GigE. Also, having Lustre 1.6 and 2.0 agents on the same server works quite well; with this there is no need to go through 1.8 to upgrade the metadata servers in two steps. Any ideas? Many thanks. Note 1: Zoredache, we can divide it into two blocks, (A) 600 TB and (B) 400 TB. The idea is to move (A) to the new storage, which is formatted with Lustre 2.0, then reformat where (A) was with Lustre 2.0, move (B) to that block, and extend it with the space where (B) was. This way we will end up with (A) and (B) on separate filesystems, with 1 PB each.
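
    A sketch of spreading the proven tar pipe across several subtrees in parallel (paths are hypothetical; each job, or each transfer node, takes a different top-level directory so the aggregate rate is no longer bounded by a single stream):

        for d in /old/projectA /old/projectB /old/projectC; do
            mkdir -p "/new/$(basename "$d")"
            ( tar -C "$d" -b 2048 -cf - . | tar -C "/new/$(basename "$d")" -b 2048 -xf - ) &
        done
        wait

    The same idea scales out by running one such loop per transfer node over the list of directories assigned to it.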

    Read the article

  • KVM and JBoss Java Application Server

    - by Jason
    We have a large Xen deployment running on both RHEL and CentOS and have recently started looking at KVM, since this is where the future of VMs on Linux appears to be. We can load the server and get everything running without an issue. However, when we load up a new one with JBoss (4.2 Community edition, Sun JDK 6) and deploy a large EAR, the server goes a little crazy: %sy jumps to 80-99% and just hangs there for long periods of time, and we see a similar jump in %us on the host machine. We thought at first this might be I/O, as it seems to happen at JBoss startup and then "cool down" after everything gets loaded. We did some tests by extracting some large tar.gz files and using jar -xvf on the EAR, but could not reproduce it. Then we started thinking this might be some type of memory access issue. We ran a C program that generates a lot of memory activity, and sure enough we saw the spikes again - not as high, mind you, but we did see them. We then wrote a small Java program to do the same thing, and again we saw the jump. Any thoughts on what might be causing this? Is this just the way KVM works? As a side note, we do NOT see this behavior on any other setup: Xen, VMware and/or bare metal. The system does seem a bit slower than our Xen / VMware ones.

    Read the article

  • Ubuntu 10.04 freezing and Ctrl + Alt + Backspace does nothing but music keeps playing

    - by Bryce Thomas
    I'm having intermittent problems where the screen freezes in Ubuntu. I've tried using Ctrl + Alt + Backspace to restart the X server, but this does nothing. When the freeze occurs, there's a small square of black dashes around the mouse pointer - maybe 1 inch in size. These dashes look a lot like a 2D barcode. The rest of the screen looks normal, but I can't move the mouse and none of the keyboard shortcuts do anything. However, music that I started playing before the freeze continues to play, which seems to indicate the machine hasn't stalled completely. I've noticed a similar freezing problem when I'm using Windows 7; that is, I see the same barcode-like dashes around the mouse pointer when it freezes up. So I'm guessing it's either a driver or a hardware problem. I thought that if it were a hardware problem, though, the whole computer might stop working (i.e. the music would stop playing)? The video card I am using is an Nvidia, and I believe it's in the 7600 range. In Ubuntu I have the drivers for the card set to the latest available (proprietary). Ideally I'd like to be able to continue using the proprietary drivers. Are there any known issues with the drivers for this model of graphics card, or has anyone experienced the same problem and knows how to fix it?

    Read the article

  • Can any postfix guru help me determine how emails are still being sent via my server from unauthorized sources?

    - by Dave
    Hi all, I'm getting a little concerned as I run a small server hosting a number of websites and manage the email for a few dozen people. Just recently though I've had a couple of notifications from spamcop alerting me that spam has been sent from my server, and when I have a look over the logs from time to time I can indeed see that there are many repeated attempts of mail being sent from my server. Most of the time it gets knocked back from the destination servers but sometimes its getting through. Unfortunately I'm not linux or postfix expert, I can get by but had though I had my machine locked down quite securely, I don't allow relaying, when I check the online DNS/MX tools they tend to report my server as being OK so I'm not sure where to take it now and hoping someone might be able to throw me a few pointers. I get lots of entries like this in my MAIL.INFO log Jan 2 08:39:34 Debian-50-lenny-64-LAMP postfix/qmgr[15993]: 66B88257C12F: from=<>, size=3116, nrcpt=1 (queue active) Jan 2 08:39:34 Debian-50-lenny-64-LAMP postfix/qmgr[15993]: 614C2257C1BC: from=<[email protected]>, size=2490, nrcpt=3 (queue active) and Jan 7 16:09:37 Debian-50-lenny-64-LAMP postfix/error[6471]: 0A316257C204: to=<[email protected]>, relay=none, delay=384387, delays=384384/3/0/0.01, dsn=4.0.0, status=deferred (delivery temporarily suspended: host mx.fakemx.net[46.4.35.23] refused to talk to me: 421 mx.fakemx.net Service Unavailable) Jan 7 16:09:37 Debian-50-lenny-64-LAMP postfix/error[6470]: 5848C257C20D: to=<[email protected]>, relay=none, delay=384373, delays=384370/3/0/0.01, dsn=4.0.0, status=deferred (delivery temporarily suspended: host mx.fakemx.net[46.4.35.23] refused to talk to me: 421 mx.fakemx.net Service Unavailable) then there tends to be connection timeouts, so from what I see even though I had relaying disabled.. something is getting by and trying to send.. So if you can help that will be greatly appreciated, and any further logging/config info I can supply. Thanks

    Read the article

  • Missing MB on a GPT-partitioned SSD

    - by pisswillis
    I recently installed Arch Linux on an Intel 40 GB SSD. I used GPT for partitioning (via GNU parted) and created the following partitions:

        /dev/sda1 :  1 MB,  no FS,  flag=bios_grub
        /dev/sda2 : 30 MB,  /boot,  ext2, flag=boot
        /dev/sda3 : 20 GB,  /home,  ext4
        /dev/sda4 : ~20 GB, /,      ext4

    After struggling to install GRUB 2 from the live CD environment (which I finally did via grub-install /dev/sda --root-directory=/mnt/ --no-floppy --force) I got a working system. However, when I was inspecting disk usage with df I noticed that my home partition had around 170 MB of used space on it. This surprised me because the only things in /home were one user's .bashrc, .bash_history and .lesshst. du confirmed that only a few KB of space were being used in /home. Why does df report approximately 170 MB used when du does not? Is this space "gone forever", or can I regain it by repartitioning and/or reinstalling? When I installed GRUB 2 it said something along the lines of "your embed area is too small", and that I could "use BLOCKLISTS, but BLOCKLISTS are UNRELIABLE". In the end the only way I could get the system booting from the SSD was to use blocklists via the grub-install --force flag. Is this related to the mysterious missing 170 MB? Thanks
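
    A hedged way to see where a fresh ext4 filesystem's apparent usage goes (on a newly created volume the "used" space normally belongs to the journal and other filesystem metadata plus lost+found, which du never counts; the exact fields shown depend on the e2fsprogs version):

        sudo tune2fs -l /dev/sda3 | grep -Ei 'block size|reserved block count|journal'
        sudo dumpe2fs -h /dev/sda3 | grep -i journal    # newer e2fsprogs report the journal size here

    That overhead is a property of the filesystem itself (adjustable with mke2fs/tune2fs options) and is unrelated to the GRUB blocklist warning.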

    Read the article

  • TC hashing filters - single rule deletion

    - by exa
    For traffic shaping I'm currently using a setup that looks exactly like the setup from LARTC on this page: http://lartc.org/howto/lartc.adv-filter.hashing.html I have a simple problem with it - every time I want to modify something in the hash table (like assigning an IP to a different flowid), I need to delete the whole filter table and add it again, filter by filter. (I actually don't do it by hand, I have a nice program that does it for me... but still...) There is a problem - I have roughly 10k filters allocated this way, and deleting and refilling the whole filter table can get pretty lengthy, which is not exactly good for traffic shaping. My program could easily manage to delete only the rules that need to be deleted (thus reducing the whole problem to several commands and milliseconds), but I simply don't know the command that deletes only one hashing rule. My tc filter show:

        filter parent 1: protocol ip pref 1 u32
        filter parent 1: protocol ip pref 1 u32 fh 2: ht divisor 256
        filter parent 1: protocol ip pref 1 u32 fh 2:a:800 order 2048 key ht 2 bkt a flowid 1:101
          match 0a0a0a0a/ffffffff at 16
        filter parent 1: protocol ip pref 1 u32 fh 2:c:800 order 2048 key ht 2 bkt c flowid 1:102
          match 0a0a0a0c/ffffffff at 16
        filter parent 1: protocol ip pref 1 u32 fh 800: ht divisor 1
        filter parent 1: protocol ip pref 1 u32 fh 800::800 order 2048 key ht 800 bkt 0 link 2:
          match 00000000/00000000 at 16
          hash mask 000000ff at 16

    The wish: a 'tc filter del ...' command that removes only one specific filter (for example the 0a0a0a0a IP match, i.e. IP address 10.10.10.10). Removal of some small subgroup would also be good - for example, I could still recreate a bucket (bkt a) pretty fast. My attempts: I tried to number all the filters using prio, but with no luck -- they just create something unusable (but deletable) below, and the bucketed filters remain there after that gets deleted. Any ideas? Edit - a simplified tl;dr description of the problem: I created a hash filter on some interface just like in http://lartc.org/howto/lartc.adv-filter.hashing.html and I want to find a command that deletes one rule (e.g. for 1.2.1.123) from the table, leaving the rest untouched and working.
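
    One thing worth trying (a sketch; the device name is hypothetical and behaviour varies between iproute2 versions): u32 entries can usually be addressed individually through the handle shown as fh in the listing, together with the same parent, protocol and prio.

        tc filter del dev eth0 parent 1: protocol ip prio 1 handle 2:a:800 u32
        # intended to remove only the entry matching 0a0a0a0a (10.10.10.10) in bucket a of hash table 2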

    Read the article

  • How do I forward root's email to an external email address?

    - by ErebusBat
    I have a small server (Ubuntu 10.04) at my house and I would like to forward root's email to my Gmail-hosted domain to get security notifications and whatnot. I ripped everything out and started from scratch, and ran into some other issues. I now have sendmail working in the sense that I can mail [email protected] and get the mail. HOWEVER, adding an address to /root/.forward does not actually forward the message. I get the following in my logs:

        Dec 22 14:04:37 batcave sendmail[4695]: oBML4bAT004695: to=<root@batcave>, ctladdr=aburns (1000/1000), delay=00:00:00, xdelay=00:00:00, mailer=relay, pri=30075, relay=[127.0.0.1] [127.0.0.1], dsn=2.0.0, stat=Sent (oBML4bJ9004696 Message accepted for delivery)
        Dec 22 14:04:39 batcave sm-mta[4698]: STARTTLS=client, relay=[69.145.248.18], version=TLSv1/SSLv3, verify=FAIL, cipher=DES-CBC3-SHA, bits=168/168
        Dec 22 14:04:40 batcave sm-mta[4698]: oBML4bJ9004696: to=<[email protected]>, ctladdr=<[email protected]> (1000/1000), delay=00:00:03, xdelay=00:00:03, mailer=relay, pri=120336, relay=[69.145.248.18] [69.145.248.18], dsn=2.0.0, stat=Sent (OK 01/D4-00853-216621D4)

    You can see where my local sendmail instance accepts it and then hands it off to my ISP, but with the wrong address ([email protected]).
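
    An alternative worth trying (a sketch; the destination address is a placeholder): route root's mail through the system aliases table rather than /root/.forward, which sendmail only reads when it actually performs local delivery for root - and the log above shows the message being relayed out to the ISP instead.

        # /etc/aliases
        root: [email protected]

        # then rebuild the alias database
        sudo newaliases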

    Read the article

  • A brand new FS based on a database, without using FUSE

    - by Devrim
    Hi all, to serve millions of files out of a single directory, to be able to connect to a drive from hundreds of endpoints, and for some other reasons (to avoid Gluster/NFS/all FS-based networking solutions), I want to evaluate the possibility of making a filesystem that is based on MongoDB (or any other database). Basically, it works like FUSE filesystems: every single file is kept in MongoDB GridFS. In theory, I do: mount mongodbfs /mountPoint mongodb://localhost, and then when I say touch /mountPoint/test.txt, this file is inserted into MongoDB. This FS will also store uid/gid and permissions with the file; we can throw hundreds of servers at it, and no useradd will be necessary. I'm not thinking of including all the features of an FS, just the ones we need. My question is: how do I start my quest to find resources, books, links, people and developers who could help me implement this, at least as a proof of concept? Is it feasible? What should I expect as a timeline for such an undertaking? Please only think about a gazillion small files and folders.

    Read the article
