Search Results

Search found 24814 results on 993 pages for 'linux distro'.


  • How do I set up an Alias on Apache with XAMPP on Linux? (Permission problem)

    - by knarf
    XAMPP works fine, but I want http://localhost/f to point to /home/knarf/prog/php/fwyxz. I've run chmod -R 777 /home/knarf/prog/php/fwyxz and added Alias /f /home/knarf/prog/php/fwyxz at the end of httpd.conf. When I try to access it, I get a 403. From the Apache error_log:
    [error] [client 127.0.0.1] (13)Permission denied: access to /f denied.
    I've already tried several solutions (userdir and symlinks), but they both failed with the same error. I've also tried adding this after the Alias:
    <Directory "/home/knarf/prog/php/fwyxz"> Order allow,deny Allow from all </Directory>
    But again, permission denied. Now, if I change the user/group Apache runs as from nobody to knarf, it seems to work (static files are OK), but PHP can't initialize sessions:
    [error] [client 127.0.0.1] PHP Warning: session_start() [function.session-start]: open(/tmp/sess_r5nrmu4ugqguqqe83rs53lq6k0, O_RDWR) failed: Permission denied (13) in /home/knarf/prog/php/fwyxz/index.php on line 3
    [error] [client 127.0.0.1] PHP Warning: Unknown: open(/tmp/sess_r5nrmu4ugqguqqe83rs53lq6k0, O_RDWR) failed: Permission denied (13) in Unknown on line 0
    [error] [client 127.0.0.1] PHP Warning: Unknown: Failed to write session data (files). Please verify that the current setting of session.save_path is correct () in Unknown on line 0
    This is really frustrating.
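
    A minimal diagnostic sketch, assuming Apache is still running as nobody: a 403 with (13)Permission denied on an aliased path usually means a parent directory (often /home/knarf itself) lacks the execute bit for other users, which chmod -R on the target alone does not fix; the session warnings disappear once session.save_path points at a directory the Apache user can write to. The paths are taken from the question, everything else is an assumption.
      # show the permissions of every component of the aliased path
      namei -m /home/knarf/prog/php/fwyxz
      # grant traversal (execute) on the parent directories, or add the Apache user to knarf's group
      chmod o+x /home/knarf /home/knarf/prog /home/knarf/prog/php
      # if Apache is switched to run as knarf instead, give PHP a session dir it can write to,
      # then point session.save_path at it in XAMPP's php.ini
      mkdir -p /home/knarf/phpsessions && chown knarf /home/knarf/phpsessions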

  • How to automatically copy a file uploaded by a user via FTP in Linux (CentOS)?

    - by Buttle Butkus
    An outside contractor says they need read/write/execute permissions on part of the filesystem so they can run a script. I'm OK with that, but I want to know what they're running, in case it turns out there is some nefarious code. I assume they are going to upload the file, run it, and then delete it to prevent me from finding out what they've done. How can I find out exactly what they've done? My question specifically asks for a way of automatically copying the file, which would be one way. But if you have another solution, that's fine. For example, if the file could be automatically copied to /home/root/uploaded_files/, that would be awesome.
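
    One way to get the automatic copy, sketched under the assumption that the inotify-tools package is installed (yum install inotify-tools) and that the contractor works under /home/contractor; every path here is an assumption:
      # copy each file the moment it has finished being written, preserving its path
      inotifywait -m -r -e close_write --format '%w%f' /home/contractor |
      while read -r f; do
          cp --parents -p "$f" /home/root/uploaded_files/
      done
    The other common approach is auditd (e.g. auditctl -w /home/contractor -p wa -k contractor), which also records which user and executable touched each file.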

  • Compiz & Linux compositing: how does it fit into the X architecture?

    - by Latanius
    Not really a "how to solve stuff" question, but I was wondering how the modern X architecture works, with Compiz and all. What I know about it: in the beginning there was the X server; clients connected (presumably over TCP) and sent messages to the server to instruct it to show windows, etc. Because this didn't work (at all? or just not fast enough?) for OpenGL and 3D acceleration, additional APIs were created for direct rendering (DRI? And, in addition to the X server, what do the X clients talk to in order to render things, and through what interfaces?). Finally, enter Compiz: X clients end up (somehow) rendering to OpenGL textures, which are then put together to form a fancy-looking screen with translucent windows and rendered to the screen.
    What I'm especially interested in is: what components does the system have, and how do they connect to each other? If there is a box labelled "compiz" in the system, is it inside the X server? If it's not, how do the rendered images from the apps end up in it? And where does it render to? Another X server? Or DRI? Of course, I'd be equally happy to be pointed to some docs capable of clearing up the confusion described above (conditional on them being significantly shorter than book-sized entities).
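
    Not an architecture diagram, but a couple of commands that expose the pieces being asked about; a sketch assuming an X session on the current display:
      # list the extensions the running X server offers (look for Composite, DAMAGE, GLX, DRI2)
      xdpyinfo | grep -iE 'composite|damage|glx|dri'
      # check whether OpenGL clients get direct rendering (DRI) or go through the X server
      glxinfo | grep -i 'direct rendering'
      # identify the client currently acting as the (compositing) window manager
      xprop -root _NET_SUPPORTING_WM_CHECK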

  • Can I rent exclusive time on a powerful server running linux? [closed]

    - by Mark Borgerding
    My company is involved in a proposal that requires speed estimates of our software on a server with the latest & greatest processors. This is not the first time we've been in this situation. The servers themselves are too expensive to buy a new one every time, so we end up extrapolating from what we have. There are so many variables: processor generation & speed, memory speed, memory channels, cache configurations; it makes extrapolation difficult and error-prone. Is there a business that rents time on the newest servers? At least part of the time we'd need exclusive access to an otherwise quiescent system either via ssh shell access or unattended batch jobs. I am not looking for general cloud computing services. I don't need much time on the server, but it needs to be exclusive. And the server needs to be pretty cutting edge for a solid basis of estimate.

  • How to remove a non-empty directory which is not owned by the user in Linux?

    - by Alex B
    If a directory "foo" is owned by user A and contains a directory "bar" owned by root, user A can simply remove "bar" with rmdir (as long as it is empty), which is logical, because "foo" is writable by user A. But if the directory "bar" contains another root-owned file, the directory can't be removed, because the files in it must be removed first so that it becomes empty. Yet "bar" itself is not writable, so it's not possible to remove the files in it. Is there a way around this? Or convince me why this restriction is necessary.
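
    A small sketch that reproduces the situation described: removing an entry requires write permission on the directory that contains it, so user A is stuck until root empties "bar" (the user name and paths are made up for illustration).
      # as root: a user-owned directory containing a root-owned, non-empty directory
      mkdir -p /home/alex/foo/bar
      chown alex /home/alex /home/alex/foo      # foo belongs to alex, bar stays root's
      touch /home/alex/foo/bar/file
      # as user alex:
      rmdir /home/alex/foo/bar        # fails: Directory not empty
      rm -f /home/alex/foo/bar/file   # fails: Permission denied (no write permission on bar)
      rm -rf /home/alex/foo/bar       # same failure, for the same reason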

  • How to calculate the proper inode and block sizes for a Linux filesystem?

    - by Donatello
    I have an old ReiserFS filesystem which I'm going to convert to ext3. The problem I have is determining the proper block and inode sizes for this partition. The partition is 44 GB and has to hold 3,000,000+ files of sizes between 1 KB and 10 KB. How can I figure out the best ratio of inodes to block size? Below is something I tried which seems OK but makes copying files incredibly slow.
    mkfs.ext3 -t ext3 -c -c -b 1024 -i 4096 -I 128 -v -j -O sparse_super,filetype,has_journal /dev/sdb1
    Thanks.
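
    For comparison, a sketch of what I would try rather than a tested recommendation: with files of 1-10 KB, 4 KiB blocks and one inode per 8 KiB of space give roughly 5.4 million inodes on 44 GB, comfortably above the 3 million needed. The doubled -c in the command above also runs a full read-write badblocks scan at mkfs time, and the 1 KiB block size adds per-file overhead; both cost time. The device name is taken from the question.
      # 4 KiB blocks, one inode per 8 KiB, 256-byte inodes, no badblocks scan
      mkfs.ext3 -b 4096 -i 8192 -I 256 -O sparse_super,filetype,has_journal /dev/sdb1
      # afterwards, check how many inodes were actually created
      tune2fs -l /dev/sdb1 | grep -i 'inode count'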

  • What command line tools for monitoring host network activity on linux do you use?

    - by user27388
    What command line tools are good for reliably monitoring network activity? I have used ifconfig, but an office colleague said that its statistics are not always reliable. Is that true? I have recently used ethtool, but is it reliable? What about just looking at the /proc/net 'files'? Is that any better?
    EDIT: I'm interested in Tx/Rx packets and Tx/Rx bytes, but most importantly in drops or errors and why the drop/error might have occurred.
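
    A few places to read the same counters; a sketch assuming the interface of interest is eth0. All of these ultimately come from the kernel (mostly /proc/net/dev plus driver statistics), so reliability is largely a question of how well the driver maintains them:
      cat /proc/net/dev        # per-interface bytes, packets, errors and drops
      ip -s link show eth0     # the same counters via iproute2 (add a second -s for error detail)
      ethtool -S eth0          # NIC/driver-level statistics, e.g. ring-buffer drops
      netstat -i               # legacy per-interface summary from net-tools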

  • Run shell script on Linux box from a shortcut/app in Android?

    - by melat0nin
    I have an Ubuntu box which runs XBMC, which crashes occasionally. Since I have no keyboard connected, I have to SSH in, kill xinit, then restart it. I was wondering if there's an elegant way of doing this from my Android tablet, so I don't have to go to my desktop PC. I've used ConnectBot and can log in, but typing is laborious, even using the edit keys to scroll back up through the buffer. It seems as though it should be possible to script this so that I can execute a shortcut, or at least select a predefined script to be executed. This would seem to have plenty of applications, and there could be a whole set of scripts: restart web server, reboot, email logs, etc.
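
    One low-tech sketch: put the commands in a script on the Ubuntu box and trigger it with a single ssh invocation (using a key, so nothing has to be typed). The script path, the use of pkill/startx, and how XBMC actually gets relaunched on this box are all assumptions:
      #!/bin/sh
      # /usr/local/bin/restart-xbmc: kill the X session XBMC runs under and start it again
      pkill -x xinit
      sleep 2
      nohup startx >/dev/null 2>&1 &
    From the tablet it is then a single command, ssh user@host restart-xbmc, and ConnectBot can store commands like this in a host's post-login automation so that the shortcut alone does the work.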

  • Most effective way to change Linux command prompt for all users?

    - by incredimike
    I have several machines and the hostnames are really long, i.e. companyname-ux-staging-web1.companyname.com, so my prompt looks something like [root@mycompany-ux-staging-web1 ~]#. I'd like to shorten that up for all users on all machines with the least amount of work. From what I've read I have a couple of options, but they all have their drawbacks. I could change the hostname, but that would likely affect applications. Not a great choice. I could also alter $PS1 at login for all users by editing .bashrc for every existing user and /etc/skel/.bashrc for potential new users, but that's a lot of work across 10 machines. What's my best option, or what have I overlooked?
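
    A sketch of a middle ground: one file per machine under /etc/profile.d, which is sourced for login shells on most distributions, so it covers existing and future users without touching any .bashrc. The sed expression that shortens the name is an assumption about the naming scheme:
      # /etc/profile.d/short-prompt.sh
      SHORT_HOST=$(hostname -s | sed 's/^companyname-//')
      PS1="[\u@${SHORT_HOST} \W]\\$ "
    Pushing that one file to the 10 machines (scp in a for loop) is then the whole rollout.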

  • What characters can safely be used for naming files on Unix/Linux?

    - by Eric DANNIELOU
    Before yesterday, I used only lower-case letters, numbers, dot (.) and underscore (_) for directory and file naming. Today I would like to start using more special characters. Which ones are safe (by safe I mean I will never have any problem)? PS: I can't believe this question hasn't been asked already on this site, but I've searched for the word "naming" and read the canonical questions without success (most are about computer names).
    Edit #1: (By the way, I don't use upper-case letters for file names. I don't remember why, but for a few months I have had production problems with upper-case letters: some OS do not support ASCII!) Here's what happened yesterday at work: as usual, I had to create a self-signed SSL certificate, and as usual I used the name of the website for the files: www2.example.com.key, www2.example.com.crt, www2.example.com.csr. Then came the problem: generate a wildcard self-signed certificate. I did that and named the files example.com.key, example.com.crt, example.com.csr, which is misleading (it's a certificate for *.example.com). I came back home and started putting stars in Apache configuration filenames to see if it works (on a useless home computer, not even staging). Stars in file names really scare me: some coworker/vendor script using rm, find and xargs could lead to http://www.ucs.cam.ac.uk/support/unix-support/misc/horror, and one answer there already talks about disaster.
    Edit #2: Just figured out that ':' does not need to be escaped. Anyone know why it is not used in file names?
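
    For what it's worth, a conservative baseline plus a defensive habit, rather than a guarantee; the character list is the POSIX portable filename character set:
      # POSIX portable filename character set: A-Z a-z 0-9 . _ -
      # ('-' should not be the first character; '/' and the NUL byte are the only
      #  characters that can never appear in a file name at all)
      # scripts survive odd names when they pass them NUL-delimited and quoted:
      find . -type f -name '*.crt' -print0 | xargs -0 ls -l --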

  • Burning Linux ISO to DVD and making it bootable.

    - by toc777
    Hi everyone, I just downloaded the Fedora 14 Live-Desktop ISO and used CDBurnerXP to burn the image to a DVD. For some reason, the first time I burned the image nothing showed up on the DVD when I accessed it, even though CDBurnerXP said it had burned to the disc successfully. I did it again and now the ISO file shows up on the disc (I don't think this is right: should it be the files inside the image that show up on the disc, or the image file?). The problem now is my Dell PC can't find the ISO when I try to boot from it; I get an error saying it can't boot from the CD. I have verified the ISO image as directed by the Fedora website. My question is: how do I make a bootable CD from a Fedora Live-Desktop ISO? How can I verify that the ISO was written to the CD correctly, and has anyone had issues booting from a CD on a Dell desktop? (I'm not at home at the moment so I can't check what model it is, but it's old enough; I've had it for about 5 years.)
    EDIT: All that needed to be done was to burn the image to CD as an image and not as a data file. The first three attempts failed; I'm not sure if this was because of faulty DVDs or if the write speed was too high (16x). I put in a new DVD and changed the write speed to 8x, and the image was then properly burned to the disc without any errors. Thanks.
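
    For the verification part, a sketch of the usual trick once a Linux environment is available (the device and file names are assumptions): read back exactly as many 2048-byte sectors as the image contains and compare checksums.
      size=$(stat -c %s Fedora-14-x86_64-Live-Desktop.iso)
      dd if=/dev/cdrom bs=2048 count=$((size / 2048)) | sha256sum
      sha256sum Fedora-14-x86_64-Live-Desktop.iso   # the two sums should match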

  • How can I undo what I did when I accidentally booted linux host inside itself with VMware?

    - by ThomasGHenry
    Hello, I'm dual-booting XP and Kubuntu. I wanted to boot my existing raw SCSI XP partition inside Kubuntu with VMware, not a virtual XP instance. I accidentally booted Kubuntu inside itself. I know this is a big mistake, so I interrupted the VM, which saved the state and closed. I rebooted the host and now I can't load the Kubuntu partition at boot time: I get a maintenance shell and the Kubuntu partition is read-only. I am able to boot XP as usual. I removed the HDD and tried to mount it on another computer as an external drive, and neither partition (XP or Kubuntu) is recognized; it just appears to be one device that still mounts and appears empty. From the maintenance shell I can see all the files are still on the Kubuntu partition. How can I undo what I did when I accidentally booted Kubuntu inside itself? Is it a matter of unlocking some files somewhere? How can I do that on a read-only filesystem? Thanks!
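
    A sketch of the usual way out of that maintenance shell, assuming the Kubuntu root filesystem is /dev/sda2 and that the nested boot simply left it dirty (both assumptions):
      fsck -f /dev/sda2        # repair the filesystem the interrupted nested boot left dirty
      mount -o remount,rw /    # make the root filesystem writable again
      reboot
    It is also worth discarding the suspended VM's saved state before ever resuming it, since resuming against a filesystem that has changed underneath it is a good way to corrupt the partition for real.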

  • Limiting the network throughput of an already-launched process? (Linux/FreeBSD)

    - by jbdenis
    Hello everybody, is there any utility to limit the network throughput of a process after it has been launched? Simple example: you notice that a user is taking all your upload bandwidth with scp and you'd like to limit the rate or decrease the priority of the transfer. I guess I could use a combination of iptables/tc or pf to achieve that, but I was wondering if there is a "one-shot" tool available (like trickle with a --pid option ^^)? Regards, Jean-Baptiste
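
    I'm not aware of a one-shot per-PID tool either; the usual Linux workaround shapes the traffic of the offending user rather than the process, which is often close enough. A sketch with iptables and tc, where the interface, user name, mark value and rates are all assumptions:
      # mark outgoing packets belonging to that user
      iptables -t mangle -A OUTPUT -m owner --uid-owner scpuser -j MARK --set-mark 10
      # shape marked traffic to 1 Mbit/s, leave everything else in a wide-open default class
      tc qdisc add dev eth0 root handle 1: htb default 20
      tc class add dev eth0 parent 1: classid 1:20 htb rate 1000mbit
      tc class add dev eth0 parent 1: classid 1:10 htb rate 1mbit ceil 1mbit
      tc filter add dev eth0 parent 1: protocol ip handle 10 fw flowid 1:10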

  • How to output a simple network activity plot in console in Linux?

    - by Vi.
    There's tload that plots load average. There's iftop that shows network usage as bars. How to do something like this:
    # tcpdump -i eth0 --plot 'host 1.2.3.4'
    13:45:03 |            |      0 in    0 out
    13:45:04 |O           |      0 in  1MB out
    13:45:05 |OOOI        | 500 KB in  4MB out
    13:45:06 |OIIII       |    6MB in  1MB out
    13:45:07 |            |      0 in    0 out
    13:45:08 |IIIIIIIIIIII|    53M in    0 out
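
    Nothing I know of produces exactly that output from tcpdump (the --plot flag above is the wished-for syntax, not a real option), but slurm, nload and bmon all draw console graphs, and a crude version is a few lines of shell. A sketch, with the interface name and the one-character-per-100-KB/s scale as assumptions:
      #!/bin/bash
      # crude per-second traffic bars from /proc/net/dev (column 2 = rx bytes, column 10 = tx bytes)
      IF=eth0
      read rx tx < <(awk -v i="$IF:" '$1==i {print $2, $10}' /proc/net/dev)
      while sleep 1; do
          read rx2 tx2 < <(awk -v i="$IF:" '$1==i {print $2, $10}' /proc/net/dev)
          in_kb=$(( (rx2 - rx) / 1024 )); out_kb=$(( (tx2 - tx) / 1024 ))
          # one I per 100 KB/s in and one O per 100 KB/s out (always at least one of each)
          bar=$(printf 'I%.0s' $(seq 1 $(( in_kb / 100 + 1 ))))$(printf 'O%.0s' $(seq 1 $(( out_kb / 100 + 1 ))))
          printf '%s |%-20.20s| %6s KB in %6s KB out\n' "$(date +%T)" "$bar" "$in_kb" "$out_kb"
          rx=$rx2; tx=$tx2
      done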

  • Linux SFTP and many local user accounts, limits with mount --bind?

    - by user123428
    I am in the process of building a solution to handle many developers (possibly hundreds) working on their files via SFTP, each one jailed in their home directory. For our particular needs, we have a Samba mount point that contains all of the users' home directories. I have started developing the following solution and hit some walls:
    - I have configured an Ubuntu Lucid server as the SFTP server.
    - In order to jail each user in their home directory (without allowing them to browse a directory up and see all the other users' folders) I am using mount --bind and not a symbolic link (also, some FTP clients don't really work with symlinks).
    - The user accounts are local Unix accounts on the SFTP server (not using a directory service or anything) with an empty home folder created on the local machine; I then use mount --bind to bind the empty folder to the actual user's home directory on the Samba share.
    With this solution I am hitting a couple of problems: after a server reboot, all the bind mounts are lost because they are not written in fstab. I have also read somewhere that the maximum number of entries in fstab is 400 (which does not really help us). I have thought of writing something that stores the mounts in a text file as a backup and, on server reboot, runs a script that re-mounts them. I am just really unsure about this whole process and was wondering if anyone has any insight on a possibly better solution for SFTP (not FTP)?
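
    A sketch of the alternative most setups end up with: OpenSSH's built-in chroot (OpenSSH 4.9 or newer, which Lucid has), which needs no per-user bind mounts at all. The group name and path layout are assumptions, and sshd requires the chroot target itself to be root-owned and not group/other-writable, with the user's writable area in a subdirectory:
      # /etc/ssh/sshd_config
      Subsystem sftp internal-sftp
      Match Group sftponly
          ChrootDirectory /srv/devhomes/%u
          ForceCommand internal-sftp
          AllowTcpForwarding no
          X11Forwarding no
    If the bind mounts are kept instead, they survive reboots as ordinary fstab entries of the form /real/home/dev1 /sftp/dev1 none bind 0 0.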

  • How can I set `less` or `more` max lines (scrollable height) limit/boundary in linux?

    - by Rudie
    (Sorry for the title. Any suggestions?) I've set my command-line PS1 to cover 3 lines:
    - white space
    - user, server and pwd
    - $ or # to input
    I think less (or more?) is configured to break after the window's height minus 1, because when I do a $ git log, the first two lines are invisible at the top of the window and the rest is scrollable. I'm not sure who handles this scrolling and its configuration, but I assume Git uses less/more. Where can I configure that my scrollable window is window height minus 3 lines and not window height minus 1?
    More info: if I cat lines.txt | less with a 23-line file, it shows the entire file and no scrolling. If I do the same with a 24-line file, it doesn't show line 1 (and no scrolling). With 25 lines: doesn't show lines 1 and 2 (and no scrolling). With 26 lines: shows line 1 and scrolling! The less breakpoint is at the wrong height...
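
    I can't say for certain this is the culprit, but when $LESS is unset git starts less with its own defaults (FRSX in this era), and the -F/-X pair is what makes "one screen" of output get printed under a tall prompt instead of opening the pager properly; a sketch of the usual workaround:
      # make git stop passing its -F/-X defaults and always open a real pager
      git config --global core.pager 'less -R'
      # or set the options yourself for everything that respects $LESS
      export LESS=-R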

  • Making a Linux laptop flight-safe; disabling wireless/radio

    - by SpoonMeiser
    I'm going on a long flight tomorrow and would like to be able to use my laptop during the journey. Wireless devices like WiFi and Bluetooth interfere with airplane instruments and shouldn't be used on flights. If my laptop does not have a physical rf-kill switch, is it sufficient to just ensure that the relevant modules do not get loaded? If so, is that always safe, or does it vary between different hardware? My particular situation: a Samsung NC10 netbook, Atheros 5k wireless hardware, Debian sid with kernel 2.6.30-1-686. However, I think it'd be interesting to know the answer to this question for the general case, not just my specific one.
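
    A sketch of the belt-and-braces approach, assuming the rfkill userspace tool is available and that ath5k is the driver in use (both assumptions about this netbook):
      rfkill block all      # soft-block every radio (wifi and bluetooth)
      rfkill list           # each device should now report "Soft blocked: yes"
      # and make sure nothing reloads the wireless driver during the flight
      modprobe -r ath5k
      echo 'blacklist ath5k' >> /etc/modprobe.d/blacklist-flight.conf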

  • How do I pull a backup from a Linux server to my Windows PC using rsync?

    - by Nogwater
    I'm currently using sftp to download nightly backups (.tar.gz) from my web host to my desktop computer. I think I'd like to switch to rsync to minimize the bandwidth (and time). I have cygwin installed on my PC but don't use it for much. I have shell access to my web host via ssh (PuTTY). Let's say my source directory is myserver.com:/home/username/backups/; I want to grab all of the .tar.gz files from there and save them to C:\Backups\ locally.
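
    A sketch of the corresponding command from cygwin (the username is an assumption; C:\Backups appears to cygwin as /cygdrive/c/Backups):
      rsync -avz --progress -e ssh \
          "username@myserver.com:/home/username/backups/*.tar.gz" /cygdrive/c/Backups/
    One caveat: rsync only saves bandwidth on files it can delta against an existing local copy, so brand-new nightly archives still transfer in full; the gains here are compression (-z), restartable transfers and skipping files already downloaded.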

  • Regaining access to Linux server after SSH service dies?

    - by GigaWatt
    I recently ran into an issue with a VPS where the SSH service crashed, leaving me unable to connect to the machine. The other services were up and running; only the SSH service died. I managed to resolve the situation with a reboot from the VPS control panel, but the incident got me thinking. Assuming:
    - I don't have physical access to the machine
    - I have no server control panel access or means of rebooting the server
    - All other system services are still functioning
    Then how could I recover from the SSH service dying?
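
    There is no general way back in after the fact without some other entry point, so the usual answer is to prepare one beforehand. A sketch of a minimal watchdog; the init-script path is an assumption and differs between distributions:
      # /etc/cron.d/sshd-watchdog: restart sshd within five minutes of it dying
      */5 * * * * root pgrep -x sshd >/dev/null || /etc/init.d/ssh restart
    Other common preparations are a second sshd instance on an alternative port with its own config file, or an out-of-band console from the provider.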

  • How do you set the default user in Linux for file creation?

    - by Not a Name
    I want to create a directory, for example /public/all, but I want it so that if you create a file in it, the owner is root, yet anyone with access to the /public/all folder can delete/edit/etc. the file, just not change its permissions. (I will use a self-created "setx" application to change the execute value if needed.) The reason for this: I don't want users to be able to deny other users read/write access to files in /public/all. I heard setuid on directories doesn't work for that.
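
    As far as I know, Linux will not make new files owned by root automatically (and setuid on directories is indeed ignored); the closest approximation is the setgid bit plus default ACLs, so that every new file stays group-writable regardless of the creator's umask. A sketch, assuming a shared group named users and a filesystem mounted with ACL support:
      mkdir -p /public/all
      chgrp users /public/all
      chmod 2777 /public/all               # setgid: new files inherit the "users" group
      setfacl -d -m g::rwX /public/all     # default ACL: the group gets rw on every new file
      setfacl -d -m o::rwX /public/all     # and so does everyone else, if that is really wanted
    The creator still owns their file and can therefore still chmod it; preventing that would take something that changes ownership after the fact, e.g. a root cron job or an inotify watcher.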
