Search Results

Search found 27819 results on 1113 pages for 'linux intel'.

  • scsi and ata entries for same hard drive under /dev/disk/by-id

    - by John Dibling
    I am trying to set up a ZFS pool using 4 bare drives which I have attached to my Ubuntu system via a SATA hot-swap backplane. These are Hitachi SATA drives. When I list the contents of /dev/disk/by-id, I see two entries for each drive:

        root@scorpius:/dev/disk/by-id# ls | grep Hitachi
        ata-Hitachi_HDS5C3030ALA630_MJ1323YNG0ZJ7C
        ata-Hitachi_HDS5C3030ALA630_MJ1323YNG1064C
        ata-Hitachi_HDS5C3030ALA630_MJ1323YNG190AC
        ata-Hitachi_HDS5C3030ALA630_MJ1323YNG1DGPC
        scsi-SATA_Hitachi_HDS5C30_MJ1323YNG0ZJ7C
        scsi-SATA_Hitachi_HDS5C30_MJ1323YNG1064C
        scsi-SATA_Hitachi_HDS5C30_MJ1323YNG190AC
        scsi-SATA_Hitachi_HDS5C30_MJ1323YNG1DGPC

    I know these are the same drives because I wrote down the serial numbers, and all the other drives in this system are either Seagate or WD. The serial number for the first one, for example, is YNG0ZJ7C. Why are there two entries here for each drive? More to the point, when I create my ZFS pool, which one should I use: the scsi- one or the ata- one?
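
    Both names are udev-generated symlinks to the same block device, so ZFS accepts either; what matters is picking one family and using it consistently. A minimal sketch of pool creation using the ata- links (the pool name "tank" and the raidz layout are assumptions, not from the question):

        # Both symlink families resolve to the same disks; the ata- names are
        # used here only because they carry the full model string.
        zpool create tank raidz \
          /dev/disk/by-id/ata-Hitachi_HDS5C3030ALA630_MJ1323YNG0ZJ7C \
          /dev/disk/by-id/ata-Hitachi_HDS5C3030ALA630_MJ1323YNG1064C \
          /dev/disk/by-id/ata-Hitachi_HDS5C3030ALA630_MJ1323YNG190AC \
          /dev/disk/by-id/ata-Hitachi_HDS5C3030ALA630_MJ1323YNG1DGPC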

  • CPU/RAM usage log over a period of time to file on CentOS

    - by joel_gil
    Hi everyone. I'm looking for an app or line of code that could let me observe a process, save the info in a number of variables, and then put the gathered info in a file. I've been trying with variations of top but no luck. I am running several CentOS virtual servers; each VM has 2 GB RAM and 2 processors. Maybe a script that works over a specified amount of time while writing lines with the info to a text file, so at the end I can have a sort of table with the data. The thing is, I'm going to stress test the server and I would like to have the data to make some statistics. Any comments and suggestions are most welcome.
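
    A small polling loop over ps is usually enough for this; a sketch (script name, 5-second interval and output path are assumptions):

        #!/bin/bash
        # watch-proc.sh PID [logfile] — append CPU/MEM samples until the process exits
        PID=$1
        OUT=${2:-usage.log}
        echo "time %cpu %mem rss_kb" > "$OUT"
        while kill -0 "$PID" 2>/dev/null; do
            # ps prints instantaneous stats for just this PID, with no header
            ps -o %cpu=,%mem=,rss= -p "$PID" | \
                awk -v t="$(date +%H:%M:%S)" '{print t, $1, $2, $3}' >> "$OUT"
            sleep 5
        done

    The resulting space-separated table loads directly into a spreadsheet or gnuplot for the stress-test statistics.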

  • Can't ssh from CentOS 6.5 to SUSE LINUX 10.1

    - by Pavel Tankov
    We have a quite old installation of SUSE LINUX 10.1 (i586) in the office. The problem, shortly: I can successfully ssh to it from machines in the same LAN (192.168.1.0) but not from others (that are in 10.23.0.0). The SuSE box has SSH server openssh-4.2p1-18.12. I have ruled out the firewall and the hosts.allow and hosts.deny files. When my ssh login attempt fails, here is what the logs say. On the client:

        $ ssh -vvv 192.168.1.5
        OpenSSH_5.3p1, OpenSSL 1.0.1e-fips 11 Feb 2013
        debug1: Reading configuration data /etc/ssh/ssh_config
        debug1: Applying options for *
        debug2: ssh_connect: needpriv 0
        debug1: Connecting to 192.168.1.5 [192.168.1.5] port 22.
        debug1: Connection established.
        debug1: identity file /home/nbuild/.ssh/identity type -1
        debug1: identity file /home/nbuild/.ssh/identity-cert type -1
        debug1: identity file /home/nbuild/.ssh/id_rsa type -1
        debug1: identity file /home/nbuild/.ssh/id_rsa-cert type -1
        debug1: identity file /home/nbuild/.ssh/id_dsa type -1
        debug1: identity file /home/nbuild/.ssh/id_dsa-cert type -1

    On the server:

        Aug 21 16:34:25 serverhost sshd[20736]: debug3: fd 4 is not O_NONBLOCK
        Aug 21 16:34:25 serverhost sshd[20736]: debug1: Forked child 20739.
        Aug 21 16:34:25 serverhost sshd[20736]: debug3: send_rexec_state: entering fd = 7 config len 403
        Aug 21 16:34:25 serverhost sshd[20736]: debug3: ssh_msg_send: type 0
        Aug 21 16:34:25 serverhost sshd[20736]: debug3: send_rexec_state: done
        Aug 21 16:34:25 serverhost sshd[20739]: debug1: rexec start in 4 out 4 newsock 4 pipe 6 sock 7
        Aug 21 16:34:25 serverhost sshd[20739]: debug1: inetd sockets after dupping: 3, 3
        Aug 21 16:34:25 serverhost sshd[20739]: debug3: Normalising mapped IPv4 in IPv6 address
        Aug 21 16:34:25 serverhost sshd[20739]: Connection from 10.23.1.11 port 44340

    The above server log is with the DEBUG3 log level enabled. However, with the default log level (INFO), the only thing the server logs is this:

        Aug 21 16:38:32 serverhost sshd[20749]: Did not receive identification string from 10.23.1.11

    Any hints? I feel I've tried everything already.
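
    "Did not receive identification string" means the TCP connection opens but the SSH banner exchange never completes, which points at the routed path (MTU/fragmentation, a middlebox) rather than authentication. Two quick checks from the failing 10.23.0.0 side (assuming nc and a standard Linux ping are available):

        # A healthy server immediately prints a banner such as "SSH-1.99-OpenSSH_4.2"
        nc 192.168.1.5 22
        # Probe for an MTU black hole across the router: 1472 + 28 bytes header = 1500
        ping -M do -s 1472 192.168.1.5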

  • Thunderbird doesn't raise or give focus when you click "Write" or "Reply"

    - by Neil
    I'm using Thunderbird 2.0.22, the version that comes with Ubuntu Intrepid 8.10. When I hit "Reply" or "Write", a new email window pops up, but it ends up under the main Thunderbird window and doesn't have focus. Thunderbird is the only application that exhibits this weird behaviour, and it just started happening one day, whereas it worked fine before. I've seen this problem years ago as well, and I can't remember how I fixed it.

  • Running a script at startup as root?

    - by Usman Ajmal
    Hi, I developed a script which I set to run at startup, i.e. when the desktop appears. In the script I mount a partition using:

        sudo mount /dev/sda1 /mnt &> result.txt

    After executing the script, a file named result.txt was created which contained:

        sudo: no tty present and no askpass program specified

    In other words, the mounting failed. If I run the script manually with sudo ./myProgram I don't face this problem and the drive gets mounted successfully. Any suggestions please?
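
    That error means sudo wants to prompt for a password but a startup script has no terminal to prompt on. One common fix is a NOPASSWD sudoers entry restricted to the exact command (the username here is a placeholder; always edit with visudo):

        # /etc/sudoers fragment — lets this one mount run without a password or tty
        usman ALL=(root) NOPASSWD: /bin/mount /dev/sda1 /mnt

    An alternative that avoids sudo entirely is adding the "user" option to the partition's /etc/fstab line, which permits non-root mounts of that one filesystem.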

  • Right solution for /etc/hosts file reset on reboot

    - by user846226
    I've just installed Funtoo, and after setting the FQDN in /etc/conf.d/hostname I noticed that when I set a list of aliases in the /etc/hosts file, it gets overwritten on each reboot. Someone suggested pointing the aliases at the 127.0.0.2 IP address, but that's not a valid solution for me. Could someone point me to the file where I should place entries like:

        127.0.0.1 local.foo
        127.0.0.1 local.bar

    so that they persist in /etc/hosts after rebooting? Thanks! PS: I think openresolv could be the one overwriting the file.
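
    Until the service that rewrites the file is identified (a DHCP client hook is a common culprit), a blunt but effective stopgap is to make the file immutable; this is a workaround sketch, not a fix for the underlying rewriter:

        # Root only: the immutable bit stops every writer, including boot scripts
        chattr +i /etc/hosts
        # Remove it again before making legitimate edits
        chattr -i /etc/hosts

    If dhcpcd turns out to be responsible, its /etc/dhcpcd.conf accepts nohook lines to disable individual hook scripts.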

  • Understanding ulimit -u

    - by tripleee
    I'd like to understand what's going on here.

        linvx$ ( ulimit -u 123; /bin/echo nst )
        nst
        linvx$ ( ulimit -u 122; /bin/echo nst )
        -bash: fork: Resource temporarily unavailable
        Terminated
        linvx$ ( ulimit -u 123; /bin/echo one; /bin/echo two; /bin/echo three )
        one
        two
        three
        linvx$ ( ulimit -u 123; /bin/echo one & /bin/echo two & /bin/echo three )
        -bash: fork: Resource temporarily unavailable
        Terminated
        one

    I speculate that the first 122 processes are consumed by Bash itself, and that the remaining ulimit governs how many concurrent processes I am allowed to have. The documentation is not very clear on this. Am I missing something? More importantly, for a real-world deployment, how can I know what sort of ulimit is realistic? It's a long-running daemon which spawns worker threads on demand, and reaps them when the load decreases. I've had it spin the server to its death a few times. The most important limit is probably memory, which I have now limited to 200M per process, but I'd like to figure out how I can enforce a limit on the number of children (the program does allow me to configure a maximum, but how do I know there are no bugs in that part of the code?)
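
    One detail that helps with the speculation: ulimit -u (RLIMIT_NPROC) counts every process and thread owned by the user across the whole machine, not just children of the current shell. A quick way to see the baseline you are competing against (assuming a Linux ps):

        # Count all processes and threads already owned by the current user
        ps --no-headers -eLo user | grep -c "^$(whoami)"

    Whatever that prints is roughly the headroom the 122/123 boundary is measuring, which is why such a low limit trips fork() almost immediately.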

  • How to configure three IP addresses on a single server

    - by user1363308
    I have a Cisco device for call forwarding and three different systems. I want to configure the .15 and .16 server IPs on the same machine as 192.168.53.197, i.e.:

        eth0 --> 192.168.53.197
        eth1 --> 192.168.16.15
        eth2 --> 192.168.16.16

    The work I previously did on .15 and .16 individually I will do on .197 after configuring eth1 and eth2. In other words, one system will have three IP addresses, with 192.168.53.197 as the base IP address.
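
    On a RHEL/CentOS-style system each interface gets its own ifcfg file; a sketch for eth1 (the netmask, and the assumption that these are physical NICs rather than aliases, are mine):

        # /etc/sysconfig/network-scripts/ifcfg-eth1
        DEVICE=eth1
        BOOTPROTO=static
        IPADDR=192.168.16.15
        NETMASK=255.255.255.0
        ONBOOT=yes

    Repeat for eth2 with 192.168.16.16, then restart networking. If the box has only one physical NIC, the same addresses can ride on alias files such as ifcfg-eth0:1 instead.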

  • php extensions & apache mods gone/not working after server restart?

    - by user1782359
    I was wondering if anyone has ever come across this before, as I'm pretty stumped to be honest, and my server admin knowledge isn't particularly good, so I'm not sure what could even be wrong, let alone how to fix it. Basically, Thursday last week everything was fine on our server. I come in on Friday and it's a mess: PHP extensions are missing/not working, Apache modules are gone. (E.g. oci_* was gone completely, odbc_* not working but still there, the Apache ntlm_auth for single sign-on was gone and so the website wasn't even loading in IE.) I'm ruling out anything deliberate because it's just incredibly unlikely. The only thing that really happened between Thursday and Friday is that on Thursday evening one of the network guys did a RAM upgrade on the server and restarted it. That's it, nothing else. Now I'm wondering if somehow those extensions and such which we installed months ago were somehow only saved in a local memory of sorts, and a restart has wiped them? But we installed them all as root, so I don't see why it should be any different from installing anything else. It makes little/no sense to me. To expand on an example of something that's gone very wrong, the PHP odbc_ extension: it's still on the server, and it doesn't return undefined function or anything, but it just cannot connect to the data source any more. I've tested it through the command line and it works perfectly fine with that data source and those login details, but all of a sudden the PHP odbc_connect() function just can't connect:

        [S1000][unixODBC][FreeTDS][SQL Server]Unable to connect to data source.

    But unixODBC is set up fine. Like I say, I've tested it all through the terminal and it can connect, and we've not changed anything; it's just now all of a sudden not working through the PHP function. Anyone have any ideas whatsoever as to what could be going on? This is on CentOS 5.x by the way.
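
    When the CLI works but mod_php fails, the two SAPIs are often loading different configuration after the reboot; comparing what each one actually sees is a cheap first step (commands assume a standard CentOS layout):

        # Which ini files does the CLI load, and which extensions survive?
        php --ini
        php -m | sort > /tmp/cli_modules.txt
        # Compare against a phpinfo() page served by Apache, and confirm the
        # Apache module list still contains what you expect:
        httpd -M

    On CentOS another classic post-reboot culprit is SELinux: a relabel or policy change can block httpd's outbound database connections while the CLI remains unaffected (getsebool httpd_can_network_connect is worth a look).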

  • How expensive is a hostname in htaccess? Other solutions possible?

    - by Nanne
    For easy allowing or disallowing of dynamic IP addresses you can add them as hostnames in a .htaccess file. As I have read from ".htaccess allow from hostname?", it does a reverse lookup on the connecting IP address, seeing if the response matches the allowed name. (Well, actually Apache is doing a double lookup: first a reverse lookup, and then a forward lookup on the result of the reverse.) This is the reason we are currently not using dynamic-IP hostnames in the .htaccess: it "sounds" quite heavy, with 2 extra lookups for every request. Is this indeed quite heavy, and would a reasonably busy server that would rather have less load than more get away with this? (E.g.: how does this 'load' compare to the rest? If a request is 1000 times more expensive than the lookups it might be negligible; otoh, it could be the final straw.) Are there other solutions? I can write a script that does a lookup of the hostname and puts it in .htaccess files of course, but this feels a bit like a hack.
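
    The scripted approach can be fairly small; a sketch of a cron job that resolves the dynamic name and rewrites the .htaccess fragment (hostname and paths are hypothetical):

        #!/bin/sh
        # update-allow.sh — refresh the allow rule for a dynamic-DNS hostname
        IP=$(dig +short myhome.dyndns.example | tail -n1)
        if [ -n "$IP" ]; then
            printf 'Order deny,allow\nDeny from all\nAllow from %s\n' "$IP" \
                > /var/www/protected/.htaccess
        fi

    This trades the two per-request DNS lookups for one lookup per cron interval.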

  • Do I have a bad SD card?

    - by User1
    I'm trying to copy data from my computer to an SD card. After a few hundred megs, I keep getting the following errors in dmesg:

        [34542.836192] end_request: I/O error, dev mmcblk0, sector 855936
        [34542.836284] FAT: unable to read inode block for updating (i_pos 13694981)
        [34542.836306] MMC: killing requests for dead queue
        [34542.836310] end_request: I/O error, dev mmcblk0, sector 9280
        [34542.837035] FAT: unable to read inode block for updating (i_pos 148486)
        [34542.837062] MMC: killing requests for dead queue
        [34542.837066] end_request: I/O error, dev mmcblk0, sector 1
        [34542.837074] FAT: bread failed in fat_clusters_flush
        [34542.837085] MMC: killing requests for dead queue

    These were all files I copied from a smaller SD card. I just want to transfer them to my new, larger card for my phone. I tried the same experiment with different files on a different machine and the card failed again. Reading data from the old card went fine. My systems are older but the SD card is new (16 GB Class 4). Could it be that my computers are too old? Is there a definitive test to verify whether my SD card is bad?
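
    badblocks gives a reasonably definitive surface test (device name taken from the dmesg output above; unmount the card first):

        # Non-destructive read-only scan of the whole card
        badblocks -v /dev/mmcblk0
        # Destructive read/write test — erases everything on the card
        badblocks -wsv /dev/mmcblk0

    Since the card fails identically on two machines while reads from the old card are fine, a genuinely bad (or counterfeit-capacity) card is a likelier explanation than old hardware.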

  • Using remote station as original

    - by Neka
    I have 2 computers with exactly the same Debian, config, apps and other stuff: one at work and another at home. It's inconvenient to maintain the same configuration on both stations: upgrading the OS, syncing configuration, etc. Is there a way to use my home station as the "host", and the machine at work as a sort of "terminal"? As if I had one HDD shared between 2 computers, but with each one using its own resources, like the video card and so on. It looks like I need some remote tool such as VNC, but this is not an occasional session; I need to use the "terminal" computer as if it were the original, all of the time.

  • Improving sound quality with remote ESD server

    - by cuu508
    Hi, I'm investigating low-budget ways to get audio from my PC (Ubuntu) to the HiFi without wires. I'm currently testing a setup where an Asus WL-500gP wireless router runs the ESD daemon and has a USB sound card attached, which is then plugged into the HiFi. I'm testing playback on the PC with mpg123-esd and Spotify under Wine. The sound is there, and latency is unexpectedly low, but I also hear occasional clicks and some distortion from time to time. I suppose that's because of the low latency and wireless streaming of uncompressed audio: any packet drops, the CPU temporarily being busy, etc. will cause clicks in the sound output. Is there a way around this problem, perhaps by increasing the latency / buffer size somehow? Streaming using the Shoutcast protocol seems to be a way out, but I have a feeling that would be a complex and brittle setup.

  • CentOS server. What does it mean when the total used RAM does not equal the sum of RES?

    - by Michael Green
    I'm having a problem with a virtual hosted server running CentOS. In the past month a process (Java based) that had been running fine started having problems getting memory when the JVM was started. One strange thing I've noticed is that when I start the process, top shows the PID using 470 MB of RAM while the 'used' memory immediately drops by over 1 GB. If I run 'top', the total RES used across all processes falls short of the 'used' listed at the top by almost 700 MB. The support person says this means I have a memory leak in my process. I don't know what to believe, because I would expect a memory leak to simply waste the memory the process is allocated, not to consume additional memory that doesn't show up using 'top'. I'm a developer and not a server guy, so I'm appealing to the experts. To me, if the total RES memory doesn't add up to the total 'used', it indicates that something is wrong with my virtual server set-up. Would you also suspect a memory-leaking Java process in this case? free before:

                     total       used       free     shared    buffers     cached
        Mem:       2097152     149264    1947888          0          0          0
        -/+ buffers/cache:      149264    1947888
        Swap:            0          0          0

    free after:

                     total       used       free     shared    buffers     cached
        Mem:       2097152    1094116    1003036          0          0          0
        -/+ buffers/cache:     1094116    1003036
        Swap:            0          0          0

    So it looks as though the process is using (or causing to be used) nearly 1 GB of RAM. Since the process (based on top) is only using 452 MB, does that mean that the kernel is all of a sudden using an additional 500 MB?
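
    Zero 'buffers' and 'cached' in that free output is the signature of a container-style VPS (OpenVZ/Virtuozzo) rather than a full virtual machine; that is an inference, not something stated in the question. If it is OpenVZ, the accounting that actually gates the JVM lives in the bean counters, where RES from top need not add up to 'used':

        # Inside an OpenVZ container: per-resource limits, usage and fail counts
        cat /proc/user_beancounters
        # Kernel-side allocations that top never attributes to a process
        cat /proc/meminfo

    A nonzero failcnt on privvmpages or kmemsize there would explain the JVM failing to get memory while the per-process totals look innocent.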

  • How to properly edit hosts, hostname and resolv.conf?

    - by Firewall
    I've been searching the internet for a real noob tutorial on the subject, but could not find any direct info on how to edit these files the proper way. I've got a Debian internet server that I use to host some personal domains; it also runs Squid and rTorrent. The server is up and running with no problems, but I am confused about a few things. Let's say that I named my server (foo), my domain is (example.com) and my public IP is 95.211.133.200. Now, should /etc/hostname contain:

        foo.example.com

    or just the server name:

        foo

    Should /etc/hosts contain:

        127.0.0.1      localhost.localdomain localhost
        95.211.133.200 foo.example.com foo

    Should /etc/resolv.conf contain (along with the nameservers) both of these lines:

        domain example.com
        search example.com

    or just the first one? Are there any other files that I should edit in order to make things right? Last thing: the command domainname returns (none). I believe it should return (example.com). What should I do to correct that?
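
    For what it's worth, a conventional Debian arrangement consistent with that naming looks like this (the nameserver address is a placeholder):

        # /etc/hostname — the short name; the FQDN is derived via /etc/hosts
        foo

        # /etc/hosts
        127.0.0.1      localhost.localdomain localhost
        95.211.133.200 foo.example.com foo

        # /etc/resolv.conf
        search example.com
        nameserver 192.0.2.53

    One caveat on the last question: domainname prints the NIS/YP domain, not the DNS domain, so (none) is normal on a machine without NIS. hostname -f and dnsdomainname are the commands that should report foo.example.com and example.com respectively.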

  • haproxy backend default location

    - by magd1
    If you go to www.company.com, I want it to redirect to /something/something on my server, but the URL still shows www.company.com. Is this possible in haproxy?

        backend new_marketing_server
            *** set default URL to /something/something ***
            mode http
            balance roundrobin
            timeout server 10m
            option httpclose
            server server1 10.86.151.142:80 minconn 32000 maxconn 3200 check port 80 inter 2000
            server server2 10.122.13.189:80 minconn 32000 maxconn 3200 check port 80 inter 2000
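
    Since the URL in the browser must not change, this is a rewrite rather than a redirect, and in this generation of HAProxy a reqrep line in the backend can do it. A sketch (the regex deliberately matches only requests for the bare / path; treat it as an assumption to test against your exact version):

        backend new_marketing_server
            mode http
            # Turn "GET / HTTP/1.1" into "GET /something/something HTTP/1.1"
            # before it reaches the servers; the client never sees the change.
            reqrep ^([^\ ]*)\ /\ (.*)$ \1\ /something/something\ \2
            balance roundrobin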

  • What is the meaning of the 'Personalities' feature under /proc/mdstat

    - by drcelus
    On some systems I see this:

        Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4] [multipath] [faulty]
        md1 : active raid1 sdb1[1] sda1[0]
              10485696 blocks [2/2] [UU]
        md2 : active raid1 sdb2[1] sda2[0]
              477371328 blocks [2/2] [UU]

    And other systems show:

        Personalities : [raid1]
        md0 : active raid1 sdb2[1] sda2[0]
              204788 blocks super 1.0 [2/2] [UU]
        md1 : active raid1 sdb1[1] sda1[0]
              4193272 blocks super 1.1 [2/2] [UU]
        md2 : active raid1 sda3[0] sdb3[1]
              483985276 blocks super 1.1 [2/2] [UU]
              bitmap: 0/4 pages [0KB], 65536KB chunk

    I wonder what the meaning of Personalities is, and what the impact of having different values is.
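
    The Personalities line simply lists which md RAID level drivers the running kernel currently has available (built in or loaded as modules); it says nothing about array health. A quick way to see the correspondence (module names assumed to match the standard md drivers):

        # Personalities usually track the loaded md modules one-to-one
        lsmod | egrep 'raid|linear|multipath|faulty'
        # Loading another driver adds its personality on the fly
        modprobe raid0 && grep Personalities /proc/mdstat

    The first system shows more personalities only because more drivers are loaded there; there is no impact beyond which RAID levels each kernel could assemble.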

  • QNAP TS-419p as a VPN Gateway?

    - by heisenberg
    Hello, I am hoping one of you might be able to help. I want to make files stored in shared folders on a QNAP TS-409p available to users over a VPN link. How is this possible? Can someone explain what I need to do? What do I need to do at the router, and what do I need to do on the QNAP NAS? Effectively, what I want to do is use the built-in Windows VPN client to connect to my home network and then be able to browse the shared folders. Thanks in advance.

  • How to troubleshoot this memory usage?

    - by Camran
    I have a classifieds website. I use PHP, MySQL, and Solr. Solr uses a servlet container, in my case Jetty, which is a Java application. I just noticed that something was terribly wrong on my website. I opened the terminal and entered the "top" command and noticed that Java was EATING all the CPU and memory. Now I thought "OK, maybe I need more mem and cpu", so I increased them. But along with the increase, the Java app started eating more. This has never happened before, and it is either a bug or a hack of some kind. Anyway, I need to troubleshoot this now, and so I wonder: how do I do this? Can I somehow pinpoint exactly when the memory usage started to go up from some error log? How does one troubleshoot this? How do I prevent it? Is it possible to prevent too many requests somehow, if they are within a timeline? Thanks
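
    Standard JDK tooling can narrow this down before guessing at bugs or hacks; a sketch, assuming a full JDK (not just a JRE) is installed and <jetty-pid> is the process from top:

        # Which threads inside the JVM are hot? (-H shows individual threads)
        top -H -p <jetty-pid>
        # Snapshot all Java stack traces; match hot thread IDs (printed in hex)
        jstack <jetty-pid> > /tmp/threads.txt
        # Heap occupancy and GC behaviour, sampled every 5 seconds
        jstat -gcutil <jetty-pid> 5000

    A heap stuck near 100% with constant full GCs points at load or a leak in Solr indexing/queries, while one runaway thread in jstack points at a specific request; Jetty's request logs give the timeline you asked about.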

  • can't run binaries or shell scripts

    - by hyperboreean
    I am running Debian testing and I am not able to run any binary or shell script. I keep getting "No such file or directory". The umask is the default one and I haven't fooled around with the paths. Also, I am aware of this question, but it doesn't work out for me: I compiled my code on this machine and am trying to run it on the same machine. Also, all of my shell scripts have the correct shebang. Any advice?
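
    "No such file or directory" for a file that plainly exists usually means the kernel cannot find the interpreter the file asks for: the ELF loader (e.g. a 32-bit ld-linux.so.2 missing on a 64-bit system) or the shebang target. Checking is quick (the 32-bit loader path below is the common case, an assumption here):

        # What does the binary request?
        file ./myprog
        #   e.g. "ELF 32-bit LSB executable ... interpreter /lib/ld-linux.so.2"
        ls -l /lib/ld-linux.so.2     # absent => exactly the error you are seeing
        head -1 ./myscript.sh        # the shebang path must also exist, with no CRLF

    A shebang line ending in a Windows carriage return produces the same misleading message, which is worth ruling out for the shell scripts.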

  • CSF Unresolved issue

    - by josephmarhee
    I began receiving service failures for CSF/LFD once the limit was reached in iptables, preventing the service from working properly. I flushed all iptables rules and redid my rules using CIDR rather than the individual IPs that were listed, but the issue persists.

        Error: The VPS iptables rule limit (numiptent) is too low (1527/1536) - stopping firewall to prevent iptables blocking all connections, at line 1459

    This is after restarting CSF, which gave me:

        You have an unresolved error when starting csf. You need to restart csf successfully to remove this warning

    CSF still seems to be trying to enforce rules that no longer exist (it lists entire chains upon trying to be restarted, only to fail with that error). Any idea what's going on?
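
    numiptent is an OpenVZ bean counter, so it can only be raised from the host node, not from inside the VPS; a sketch (101 is a placeholder container ID):

        # On the OpenVZ host node
        vzctl set 101 --numiptent 2000 --save
        # Inside the VPS: how many rules is CSF actually trying to load?
        iptables-save | wc -l

    If raising the limit is not an option, shrinking CSF's deny lists (DENY_IP_LIMIT in csf.conf) keeps the rule count under the ceiling.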

  • blocking port 80 via iptables

    - by JoyIan Yee-Hernandez
    I'm having problems with iptables. I am trying to block port 80 from the outside; basically the plan is that we just need to tunnel via SSH, then we can get to the GUI etc. on the server. I have this in my rules:

        Chain OUTPUT (policy ACCEPT 28145 packets, 14M bytes)
        pkts bytes target  prot opt in   out   source     destination
           0     0 DROP    tcp  --  *    eth1  0.0.0.0/0  0.0.0.0/0    tcp dpt:80 state NEW,ESTABLISHED

    And:

        Chain INPUT (policy DROP 41 packets, 6041 bytes)
           0     0 DROP    tcp  --  eth1 *     0.0.0.0/0  0.0.0.0/0    tcp dpt:80 state NEW,ESTABLISHED

    Would any of you guys like to share some insights?
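
    One thing worth noticing: the OUTPUT rule matches traffic this box sends to remote port 80 (its own outgoing web requests), not visitors reaching the local web server; blocking outside access to a local service is an INPUT job, which the second rule already does. A minimal sketch of the intent (eth1 as the outward-facing interface is taken from the rules above):

        # Loopback stays open so the SSH-tunnelled GUI keeps working
        iptables -A INPUT -i lo -j ACCEPT
        # Drop outside connections to the local web server
        iptables -A INPUT -i eth1 -p tcp --dport 80 -j DROP

    Also note that with an INPUT policy of DROP, port 80 is already unreachable unless some earlier rule ACCEPTs it; the extra DROP only matters if such a rule exists.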

  • is there a way to run a command before puppet implements a change?

    - by Patrick
    I want to have puppet run a specific command before performing any type of change. I am aware of the prerun_command option in the main puppet.conf, but this is not what I'm looking for. I want the command to only run if something is about to change, not on every puppet run. Here's the scenario. Let's say I have a bunch of web servers behind a load balancer. I then want puppet to update the web site files. But in order to prevent issues where some files have been updated, but other files haven't, and the mixed versions causing problems, I want to take the server out of the load balancer pool. I could write a script which when run will tell the load balancer to remove the box from the pool. Then puppet can do the change, and use postrun_command to put the box back in the pool once complete. But I need a way to run that script to remove the server from the pool. The only solution I can think of is to keep 2 copies of the files on the box. One a staging copy, and when puppet updates that, use a notify action to trigger the removal script, and then copy from staging into the live location. But I was hoping for something a little more generic that would work on any change being performed (upgrading a package, restarting a service, creating a user, anything).
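
    The staging pattern described above can be expressed directly in Puppet with a refreshonly exec, so the drain script fires only when the staged copy actually changes (module source, paths and script names here are hypothetical):

        # Sync the staging copy; any change notifies the deploy exec
        file { '/srv/staging':
          ensure  => directory,
          source  => 'puppet:///modules/site/web',
          recurse => true,
          notify  => Exec['deploy-web'],
        }

        # Runs only on notify: drain from the pool, sync live, rejoin the pool
        exec { 'deploy-web':
          command     => '/usr/local/bin/drain-node && rsync -a --delete /srv/staging/ /var/www/site/ && /usr/local/bin/undrain-node',
          provider    => shell,
          refreshonly => true,
        }

    This stays scoped to file changes, though; a truly generic "run before any change" hook is what prerun_command provides, with the limitation you noted that it runs on every agent run rather than only when a change is pending.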
