Search Results

Search found 100910 results on 4037 pages for 'linux server'.

  • New Linux Mint User Networking questions

    - by nyCecilia
    I have a ReadyNAS that I've been using with XP, Vista, and Win7. Because of weirdness with Vista, it is set up for full read/write guest access. Now I have a Linux Mint netbook. I have set up SMB on it and can read from the ReadyNAS SMB shares, but I can't write. What else can I check? Part 2 (keep in mind my network knowledge is small... or smaller): what is the difference between NFS and SMB, and can a ReadyNAS be set up to allow access to the SMB shares via NFS (if I can figure out NFS)? A link to a guide for beginners would be appreciated; googling "Linux Mint ReadyNAS" doesn't give me anything useful.
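
    A first sanity check is to mount the share explicitly with write options and see whether writes work at all. A minimal sketch of a read/write CIFS mount (the share name "media" and the address 192.168.1.10 are assumptions, substitute your own):

        sudo mkdir -p /mnt/nas
        # guest + rw, with files owned by the desktop user so local writes are permitted
        sudo mount -t cifs //192.168.1.10/media /mnt/nas \
            -o guest,rw,uid=$(id -u),gid=$(id -g),file_mode=0664,dir_mode=0775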

  • Running $ORIGIN linked binaries from setuid scripts on linux

    - by drscroogemcduck
    I'm using suidperl to run some programs that require root permissions. However, the runtime linker won't expand library paths which contain $ORIGIN entries, so the program I want to run (jstack from Java) won't start. More info here:

        There is one exception to the advice to make heavy use of $ORIGIN. The runtime
        linker will not expand tokens like $ORIGIN for secure (setuid) applications.
        This should not be a problem in the vast majority of cases.

    My program looks something like this:

        #!/usr/bin/perl
        $ENV{PATH} = "/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin:/usr/java/jdk1.6.0_12/bin:/root/bin";
        $ENV{JAVA_HOME} = "/usr/java/jdk1.6.0_12";
        open(FILE, '/var/run/kil.pid');
        $pid = <FILE>;
        close(FILE);
        chomp($pid);
        if ($pid =~ /^(\d+)/) {
            $pid = $1;
        } else {
            die 'nopid';
        }
        system("/usr/java/jdk1.6.0_12/bin/jstack", "$pid");

    Is there any way to fork off a child process so that the linker will work correctly?
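
    One hedged workaround: the restriction only applies to the linker's secure mode, so the child can invoke the dynamic loader explicitly and pass the library path that $ORIGIN would have resolved to. The loader and JDK library paths below are assumptions for x86_64 and JDK 1.6, verify them on the target box:

        # run jstack via the loader, supplying its $ORIGIN/../lib/amd64/jli directory by hand
        /lib64/ld-linux-x86-64.so.2 \
            --library-path /usr/java/jdk1.6.0_12/lib/amd64/jli \
            /usr/java/jdk1.6.0_12/bin/jstack "$pid"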

  • linux worker script/queue (php) [closed]

    - by xetrill
    Hi, I need a binary/script (PHP) that does the following: start n processes of X in the background and maintain that number of processes. An example:

      - n = 50, so initially 50 processes are started
      - a process exits, leaving 49 running
      - so 1 should be started again

    Please, this is urgent. Thanks! Michael
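
    A minimal supervisor sketch in shell (the worker path /usr/local/bin/X is a placeholder; a PHP port of the same loop is straightforward):

        #!/bin/bash
        n=50
        worker=/usr/local/bin/X                # hypothetical worker binary
        while true; do
            running=$(pgrep -c -f "$worker")   # count live workers
            for ((i = running; i < n; i++)); do
                "$worker" &                    # top back up to n
            done
            sleep 1
        done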

  • Event ID 17890 (A significant part... paged out.) with SQL Server 2008

    - by Godeke
    I have a machine that has SQL Server 2008 Standard installed. Periodically (about once an hour) I get Event ID 17890 several times in a row. An example:

        6:28:54 "A significant part of sql server process memory has been paged out. This may result in a performance degradation. Duration: 0 seconds. Working set (KB): 10652, committed (KB): 628428, memory utilization: 1%%."
        6:34:27 "A significant part of sql server process memory has been paged out. This may result in a performance degradation. Duration: 332 seconds. Working set (KB): 169780, committed (KB): 546124, memory utilization: 31%%."
        6:38:55 "A significant part of sql server process memory has been paged out. This may result in a performance degradation. Duration: 600 seconds. Working set (KB): 245068, committed (KB): 546124, memory utilization: 44%%."

    This pattern repeated at 7:26-7:37, 8:26-8:36, 9:24-9:35 and so on, with the same increasing working set and memory utilization each time. I don't have any (known) background tasks running at this time; backups run at 2:00. The problem subsided from 11:00 at night until it resumed at 4:00 in the morning, and the intermittent 10-minute glitch periods have continued since.

    As this server has plenty of RAM (the commit charge has peaked at 2,871,564 KB of 4,194,012 KB physical), I disabled the paging files after reading several suggestions I dug up searching Google, none of which changed the situation. The pattern I documented above is from after removing the paging files, so I'm not even sure where the paged-out SQL process memory could be going. I also changed the SQL process memory to a minimum of 500 MB and a maximum of 2 GB of RAM (this is a light-duty database server serving only a small workgroup).

    Has anyone encountered this? Prior to disabling the page files this error would cause 5 minutes of disk thrashing that blocked access to the databases, files, IIS webs and so on. Since disabling the page files it just logs strange things, but at least I'm not seeing a performance drop. Any suggestions would be welcome.
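
    For reference, the min/max memory change described above can also be applied from a command prompt via sp_configure; a hedged sketch (the instance name "." and the values are assumptions, adjust to taste):

        sqlcmd -S . -E -Q "EXEC sp_configure 'show advanced options', 1; RECONFIGURE; EXEC sp_configure 'min server memory (MB)', 500; EXEC sp_configure 'max server memory (MB)', 2048; RECONFIGURE;"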

  • configuring linux console email client to check attachments

    - by Christopher
    I need to configure an IMAP4-capable (console-based) email client to:

      - check and edit the name of an attachment (if it contains umlauts, change the character ä to ae)
      - delete emails that don't fit certain requirements (attachment is not PDF, DOC, ...; sender is not from domain xyz.com)

    Whether the client can do everything by itself or just triggers a script on incoming mail doesn't matter. Does anyone have an idea which mail client would be suitable for such a task?
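
    No stock console client does all of this by itself, so one hedged approach is a small filter script hooked into delivery (e.g. from a procmail rule); this sketch assumes ripmime is installed and the script receives one message file as $1:

        #!/bin/bash
        tmp=$(mktemp -d)
        ripmime -i "$1" -d "$tmp"                      # unpack the MIME attachments
        # rename umlauts in attachment names (ä -> ae)
        for f in "$tmp"/*; do
            [ "$f" != "${f//ä/ae}" ] && mv -- "$f" "${f//ä/ae}"
        done
        # signal rejection if any real attachment is not PDF/DOC
        # (ripmime names decoded message bodies textfile0, textfile1, ...)
        find "$tmp" -type f ! -name 'textfile*' ! -iname '*.pdf' ! -iname '*.doc' \
            | grep -q . && exit 1
        exit 0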

  • SSH: Configure ssh_config to use specific key file for a specific server fingerprint

    - by Penthi
    I have a key-based login for a server. The IP and DNS name of the server can change, because it is hosted on Amazon. Is there a way to configure the SSH client config to use the specific key file for this server only when the fingerprint of the server matches? In other words: normally servers are matched by IP or DNS name in the SSH client config. I want to do this by fingerprint, because IP and DNS can change.
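
    ssh_config has no fingerprint-based matching, but a hedged approximation is to pin the host key and key file to a stable alias of your own; everything below (the alias "myserver", the hostname, the key path) is a made-up example for ~/.ssh/config:

        Host myserver
            HostName ec2-203-0-113-10.compute-1.amazonaws.com  # update this when the address changes
            HostKeyAlias myserver       # known_hosts entry is keyed by the alias, not by IP/DNS
            IdentityFile ~/.ssh/myserver_key
            IdentitiesOnly yes          # offer only this key to this host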

  • oddities in interference of linux extended ACLs and 'regular' permissions

    - by abbot
    I've got some legacy code which checks that some file is read-only and readable only by its owner, i.e. permissions set to 0400. I also need to give read-only access to this file to some other user on the system. I'm trying to set extended ACLs, but this also changes the 'regular' permission bits in a strange way:

        $ ls -l hostkey.pem
        -r-------- 1 root root 0 Jun  7 23:34 hostkey.pem
        $ setfacl -m user:apache:r hostkey.pem
        $ getfacl hostkey.pem
        # file: hostkey.pem
        # owner: root
        # group: root
        user::r--
        user:apache:r--
        group::---
        mask::r--
        other::---
        $ ls -l hostkey.pem
        -r--r-----+ 1 root root 0 Jun  7 23:34 hostkey.pem

    After this the legacy code starts complaining that the file is group-readable (while it is actually not!). Is it possible to set the extended ACLs in such a way that some other user will also have read-only access, while the file appears to have only 0400 'regular' permissions?
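
    What ls shows in the group slot here is the ACL mask, not a real group entry, which is why the legacy check misfires; with POSIX ACLs there is no way to grant apache read access without the mask surfacing in those bits. A quick way to see what each entry really grants (a sketch):

        getfacl -e hostkey.pem    # the "#effective:" annotations show the rights after masking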

  • SQL Server 2008R2 Express: which is the users limit in a real case scenario?

    - by PressPlayOnTape
    I know that SQL Server Express has no user limit, and every application loads/stresses the server differently. But let's take "a typical accounting software", where users input records, retrieve data and from time to time run some big custom queries. Can someone share their own experience and tell me the limit of users that can realistically use a SQL Server Express instance in this scenario? I am looking for an indicative idea, like (as an example): "I had a company with an average of 40 users logged in and the application was working OK on SQL Server Express, but when the users reached 60 the application started to seem unresponsive" (please note this sentence is pure imagination, I just wrote it as an example).

  • Linux: how to force quit a process as root

    - by Mirage
    I ran a command that backs up 7 accounts and I want to quit it while it's running. How can I quit it from the command line? It should stop backing up all accounts, not just the current one; at the moment interrupting it only skips the current account's backup and I have to press it again for every account.
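
    One hedged way to stop everything at once is to signal the whole process group instead of just the foreground child (the script name backup.sh is a placeholder for whatever command was run):

        pgid=$(ps -o pgid= -p "$(pgrep -o -f backup.sh)" | tr -d ' ')
        kill -TERM -- "-$pgid"    # a negative PID delivers the signal to every process in the group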

  • Reinstall linux over ssh.

    - by DoomStone
    Hello. I'm having a large problem with our development server: it has had a program called Webmin on it, plus a total idiot has been administrating the Linux server. This has now resulted in the server being totally trashed; there are so many different installs of the same programs that nothing works. And don't get me started on the users and groups :D

    Well, at last I have been given the responsibility to administrate our development server, but I would like to start from scratch instead of trying to find every single package and config the previous admin **ed up. The problem is that it is a remotely hosted server with SSH access only. The server is running Debian, but I am thinking of reinstalling it with Ubuntu Server. Thanks
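
    The usual remote-reinstall trick is to bootstrap a fresh system into a spare partition over SSH and then pivot into it; a hedged sketch (the spare partition /dev/sda3 and the mirror URL are assumptions):

        mkfs.ext4 /dev/sda3
        mount /dev/sda3 /mnt/newsys
        debootstrap stable /mnt/newsys http://deb.debian.org/debian
        # then chroot into /mnt/newsys, install a kernel and openssh-server,
        # fix /etc/fstab and the bootloader, and reboot into the fresh system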

  • Setting up a linked server to another server which isn't in a domain without using SQL authentication

    - by Telos
    Server A (SQL 2005) is in our primary domain, but server B (SQL 2000) is just in a Windows workgroup. We are not allowed to join it to the domain, or bad things happen... We also can't enable SQL authentication on server B. We've got domain accounts for A and matching local accounts on server B. I can connect to B from my local PC or from A using SSMS and a domain login, but I can't get the linked server to connect. Any ideas how to do this?
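
    With matching local accounts on both machines, pass-through (self-mapped) security is the usual route; a hedged sketch to run against server A (the server names are placeholders):

        sqlcmd -S ServerA -E -Q "EXEC sp_addlinkedserver @server = 'SERVERB'; EXEC sp_addlinkedsrvlogin @rmtsrvname = 'SERVERB', @useself = 'TRUE';"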

  • Run Jar in Background on Linux

    - by Benny
    I have a jar that runs forever (an infinite loop with a socket-listening thread) and need it to run in the background at all times. An example would be: "java -jar test.jar". How do I do this? Thanks in advance!
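
    A common minimal approach is nohup plus backgrounding; a sketch (the log path is an assumption):

        nohup java -jar test.jar > /var/log/test.log 2>&1 &
        disown    # detach the job from the shell so it survives logout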

  • Linux Program Source Management

    - by Blackninja543
    This particular problem has little to do with Subversion repositories and more to do with the management of installed programs. My question revolves around the problem of installing a program from source. If I were to build a distro with no package management system, what possibilities would I have for keeping programs up to date? My only idea would be to keep a record of all the programs installed from source and perform a periodic check to see whether a new version is out.
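
    One conventional answer is to install each source build into its own prefix and manage the live version through symlinks; a sketch using GNU stow (assumed installed; the package name foo and versions are hypothetical):

        ./configure --prefix=/usr/local/stow/foo-1.2 && make && sudo make install
        cd /usr/local/stow
        sudo stow foo-1.2     # symlink the new version into /usr/local
        sudo stow -D foo-1.1  # cleanly retire the old one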

  • Can't work with server after modifying /lib/libc.so.6

    - by Afshin
    I have a CentOS server (a VPS). After running this command I can't work with the server and get the same error for all actions (SSH, login, ls, ...). The command:

        ln -s /lib/libc.so.1 /lib/libc.so.6 -f

    And the error is:

        /sbin/shutdown: error while loading shared libraries: libc.so.6: cannot open shared object file: No such file or directory

    I have VNC access to the server, but because I can't log in, that's unusable. Thanks in advance.
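
    Every dynamically linked binary now fails because libc.so.6 points at a file that doesn't exist. If a root shell is still open anywhere (or via the provider's rescue mode), glibc ships a statically linked sln for exactly this situation; a hedged sketch (the libc version is an assumption, check with ls /lib/libc-*.so):

        /sbin/sln /lib/libc-2.5.so /lib/libc.so.6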

  • Removing a device in "removed" state from Linux software RAID array

    - by Sahasranaman MS
    My workstation has two disks (/dev/sd[ab]), both with similar partitioning. /dev/sdb failed, and cat /proc/mdstat stopped showing the second sdb partition. I ran mdadm --fail and mdadm --remove for all partitions from the failed disk on the arrays that use them, although all such commands failed with:

        mdadm: set device faulty failed for /dev/sdb2: No such device
        mdadm: hot remove failed for /dev/sdb2: No such device or address

    Then I hot-swapped the failed disk, partitioned the new disk and added the partitions to the respective arrays. All arrays were rebuilt properly except one, because in /dev/md2 the failed disk doesn't seem to have been removed from the array properly. Because of this, the new partition keeps getting added as a spare, and the array's status remains degraded. Here's what mdadm --detail /dev/md2 shows:

        [root@ldmohanr ~]# mdadm --detail /dev/md2
        /dev/md2:
                Version : 1.1
          Creation Time : Tue Dec 27 22:55:14 2011
             Raid Level : raid1
             Array Size : 52427708 (50.00 GiB 53.69 GB)
          Used Dev Size : 52427708 (50.00 GiB 53.69 GB)
           Raid Devices : 2
          Total Devices : 2
            Persistence : Superblock is persistent

          Intent Bitmap : Internal

            Update Time : Fri Nov 23 14:59:56 2012
                  State : active, degraded
         Active Devices : 1
        Working Devices : 2
         Failed Devices : 0
          Spare Devices : 1

                   Name : ldmohanr.net:2  (local to host ldmohanr.net)
                   UUID : 4483f95d:e485207a:b43c9af2:c37c6df1
                 Events : 5912611

            Number   Major   Minor   RaidDevice State
               0       8        2        0      active sync   /dev/sda2
               1       0        0        1      removed

               2       8       18        -      spare   /dev/sdb2

    To remove a disk, mdadm needs a device filename, which was /dev/sdb2 originally, but that no longer refers to device number 1. I need help with removing device number 1 with 'removed' status and making /dev/sdb2 active.
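
    One hedged sequence that often clears a stuck slot like this uses mdadm's keyword form of --remove before re-adding the partition:

        mdadm /dev/md2 --remove detached     # drop array entries whose device node is gone
        mdadm /dev/md2 --remove /dev/sdb2    # pull the stuck spare back out
        mdadm /dev/md2 --add /dev/sdb2       # re-add it; it should now rebuild into slot 1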

  • Tool or script to detect moved or renamed files on Linux prior to a backup

    - by Pharaun
    Basically I am searching for a tool or script that can detect moved or renamed files, so that I can get a list of renamed/moved files and apply the same operations on the other end of the network to conserve bandwidth. Disk storage is cheap but bandwidth isn't, and the problem is that the files often get reorganized or moved into a better directory structure. When you use rsync to do the backup, rsync won't notice that a file was renamed or moved and will re-transmit it over the network all over again, despite the same file existing on the other end. So I am wondering if there exists a script or tool that can record where all the files are and their names, then just prior to a backup rescan and detect moved or renamed files, so I can take that list and re-apply the move/rename operations on the other side. The "general" features of the files:

      - Large, unchanging files
      - They can be renamed or moved around

    [Edit:] These are all good answers, and what I ended up doing was looking at all of the answers; I will be writing some code to deal with this. Basically what I am thinking/working on now is:

      - Using something like AIDE for the "initial" scan, which also lets me keep checksums on the files; they are supposed to never change, so checksums aid in detecting corruption.
      - Creating an inotify daemon that monitors these files/directories and records any changes relating to renames and moves to a log file.
      - There are some edge cases where inotify might fail to record that something happened to the file system, so there is a final step of using find to search for files with a change time later than the last backup.

    This has several benefits:

      - Checksums etc. from AIDE make it possible to verify that some media did not get corrupted
      - inotify keeps resource usage low, with no need to re-scan the filesystem over and over
      - No need to patch rsync; if I have to patch things I can, but I would prefer to avoid it to keep the burden lower (i.e. no need to re-patch every time there is an update)

    I've used Unison before and it's really nice, but I could've sworn that Unison keeps copies around on the filesystem and that its "archive" files can grow rather large?
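
    A minimal sketch of the detection idea using inode numbers (a rename or move keeps the inode as long as the file stays on the same filesystem; the /data path and snapshot file names are assumptions):

        # before the reorganisation: snapshot inode -> path
        find /data -type f -printf '%i\t%p\n' | sort > snapshot.old
        # ... files get moved/renamed ...
        find /data -type f -printf '%i\t%p\n' | sort > snapshot.new
        # join on inode and keep rows where the path changed: inode, old path, new path
        join -t "$(printf '\t')" -j 1 snapshot.old snapshot.new | awk -F '\t' '$2 != $3'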

  • NTBackup (on WS2k3) fails to backup remote server (WS2k8R2) with " Error: is not a valid drive, or you do not have access."

    - by Mark A
    We run an NTBackup job on a Windows Server 2003 R2 SP2 machine with all updates (as of Q4 2011). It works well backing up two WS2k3 servers as well as the backup server itself. However, we have been unable to successfully back up our Windows Server 2008 R2 machine ("G5-01"). It often runs for about 2 GB worth of backup and then dies with one of the error messages below; it should be more like 20 GB for the full server. We have tried using the admin share (C$), an explicitly shared drive, UNC paths and mapped drives. The result is the same each time; the only thing that varies is the amount of data backed up before it chokes. We've also run NTBackup from the UI, from the command line and as a scheduled task. We are backing up to 400/800 GB tapes and they have plenty of space available (blank media).

        Error: \\G5-01\c is not a valid drive, or you do not have access.
        Error: \\G5-01\c$ is not a valid drive, or you do not have access.
        Error: Y: is not a valid drive, or you do not have access.
        Error: Could not access or create backup catalog files. Verify that you have full access to the working folder and there is disk space available.

    The job is run as Administrator and we have no problems logging onto the server and transferring files. The Event Log on the WS2k8 machine is not much help, as it has success audits for each login. All of the hardware involved (HP DL360 G3, HP LTO Ultrium 3, Adaptec 39320A) has the latest supported drivers. We've tried a bunch of different options but are wondering where to look next to resolve the backup issue. We've been super happy with our reliable scheduled task for years, but this one is stumping us!

  • adding a route entry to linux routing table

    - by netg
    Hi, I have two systems with IP addresses, say 64.103.56.1 (A, device wlan0) and 64.103.225.18 (B). What I want is: every time I ping B from my system A, it should be routed via a router, say with address 10.0.0.251 (C), which I want to be my next hop to reach B. But this router is on a different subnetwork than the two systems. How do I do this?

    Things I tried:

      - I used route add -host B gw C wlan0 and got an error saying "no such process exists or no such device found".
      - I ran ping C and traceroute and found the gateway address on my side is some 63.103.236.3 (D), so I added another entry: route add -host C gw D wlan0. I was able to do this without any error!
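
    The "no such process" error happens because the kernel only accepts a gateway it can already reach directly. The standard two-step (a sketch using the addresses above) is to add a link route to the off-subnet gateway first, then the host route through it:

        route add -host 10.0.0.251 dev wlan0          # make C directly reachable on wlan0
        route add -host 64.103.225.18 gw 10.0.0.251   # then route B via C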

  • Change Linux Console's Default Monitor

    - by Tim M
    Is there any way to specify which monitor the console is displayed on in Linux? Details: I have a 3 monitor setup with 2 video cards. When I boot the computer, the BIOS displays on the PCI graphics card (which has a small monitor). When starting Linux, the console is displayed on the same monitor. Is there a way to have the console output on a different monitor? I'm using the vesafb framebuffer. I don't see a way in my BIOS to change the default video card.
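
    If both cards expose a framebuffer (check cat /proc/fb), the fbcon=map: kernel parameter chooses which one gets the text console; a hedged example for the GRUB kernel line:

        cat /proc/fb        # lists framebuffers: 0, 1, ...
        # append to the kernel command line, e.g. in /boot/grub/menu.lst:
        #   fbcon=map:1     # put the console on /dev/fb1 instead of /dev/fb0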

  • Linux - quota per directory?

    - by depesz
    I have the following scenario: a single partition mounted as /, with lots of disk space. There is a range of directories (/pg/tbs1, /pg/tbs2, /pg/tbs3 and so on), and I would like to limit the total size of these directories. One option is to make some big files, mkfs them, mount them over loopback, and then set quota on those, but this makes expansion a bit problematic. Is there any other way to make quota work per directory?
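
    If changing the filesystem is an option, XFS project quotas do per-directory limits natively; a hedged sketch (assumes the filesystem is XFS and mounted with the prjquota option, and that project id 1 is unused):

        echo "1:/pg/tbs1" >> /etc/projects             # map project id 1 to the directory
        echo "tbs1:1"     >> /etc/projid               # give the project a name
        xfs_quota -x -c 'project -s tbs1' /            # initialise the project tree
        xfs_quota -x -c 'limit -p bhard=10g tbs1' /    # hard cap at 10 GiB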
