Search Results

Search found 54067 results on 2163 pages for 'ubuntu 10 04'.

Page 509/2163 | < Previous Page | 505 506 507 508 509 510 511 512 513 514 515 516  | Next Page >

  • Graceful logout in dwm

    - by Riateche
    I want dwm to close all windows gracefully when I press the quit hotkey. I like Unity's behaviour: it displays a list of the windows blocking logout (for example, editors with unsaved changes) and does not log out until all issues are resolved and the applications are closed. By default, dwm just ends the X session and all running applications are killed. I was thinking about writing a script that retrieves the list of all windows, closes them gracefully and waits for their processes to finish. But I don't even know how to close the windows. The only way I know is wmctrl, and that utility doesn't work with dwm.

    Read the article

  • Will Ubuntu be releasing an update for Cedar Trail Processors?

    - by Alan
    There is a file available on the Intel web site with the file name "cdv-gfx-drivers-1.0.1_bee.tar.bz2" and a date of July 6, 2012. It can be found by searching the Intel Download Center for the filename or the string "Linux* PowerVR Graphics/Media Drivers". The download page links to the file, the release notes and a link, "Enabling hardware accelerated playback", which leads to a page containing two PDF documents titled "Enabling Hardware Accelerated Playback for Intel® Atom™ Processor N2000/D2000 Series", one for Ubuntu and one for Fedora. The instructions and release notes describe working with kernel 3.1.0, and since I do not feel I have the skills, knowledge or training to do anything other than follow the instructions to the letter, I am very reluctant to try anything on my freshly updated 3.2.0 kernel. I would much rather use an Ubuntu-supported kernel that ships these drivers and doesn't break anything in the process. Is this a case where the drivers are so new that Canonical has not yet included them, but soon will?

    Read the article

  • Map Linux drives to Windows 7 for media streaming over the internet

    - by Ortix92
    I'm trying to map a Linux network drive to my Windows 7 laptop, but the laptop is not on my LAN. At home I simply use Samba, but that obviously won't work over the internet. I'm trying to avoid a VPN, so if there are other solutions I would like to know about them. I ask because my university does this as well: we can simply map folders to our computers without VPN connections, though I'm not sure what they run as servers. The main reason is that I want to be able to access the files stored on my home server wherever I go. They are located in the /home/ folder (the videos, music and pictures folders); I'm trying to keep my websites and media separate from each other. I wouldn't mind accessing them from a web interface either, but I would like to keep the directory structure intact. I remember an app like that coming with Winamp, and I ran it on my Windows PC (as the server); unfortunately it doesn't exist for Linux. Any ideas on what I could use? Would XBMC be able to help me out with this? I did some research but couldn't find any concrete answers.
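
    One approach that works without a VPN (a sketch, not something from the question: the host name, paths and user name below are invented) is a WebDAV share served by Apache, which Windows 7 can map as a network drive:

        # on the Linux server (Debian/Ubuntu-style commands)
        sudo a2enmod dav dav_fs

        # /etc/apache2/conf.d/media-dav.conf  (hypothetical file name)
        Alias /media /home/media
        <Location /media>
            DAV On
            AuthType Basic
            AuthName "media"
            AuthUserFile /etc/apache2/webdav.passwd
            Require valid-user
        </Location>

        # create a user (htpasswd comes with apache2-utils) and restart Apache
        sudo htpasswd -c /etc/apache2/webdav.passwd mediauser
        sudo service apache2 restart

        # on the Windows 7 side, map it as a drive
        net use Z: https://home.example.org/media /user:mediauser

    Windows' built-in WebDAV client (the WebClient service) can be picky about authentication and self-signed certificates, so SFTP with a client such as WinSCP is a simpler fallback if a drive letter isn't essential.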

    Read the article

  • I want to deploy my PHP-based web application with Apache Ant. How can I do that?

    - by codeperl
    I googled it, but unfortunately did not get a specific answer. I am a fan of the command line and typing. So now I want to deploy my PHP-based web application with Apache Ant. How can I do that? I also want to practice the deployment on my local PC; is that possible? There is Phing, and from what I've heard Phing works on top of Apache Ant for PHP application deployment. But I want to deal with the details myself and write it by hand.
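
    A minimal sketch of what such an Ant build file could look like (the project name, paths and target names are made up; a local deploy is just a copy into a directory the web server serves):

        <?xml version="1.0" encoding="UTF-8"?>
        <project name="mysite" default="deploy" basedir=".">
            <!-- where the local web server serves files from (assumption) -->
            <property name="deploy.dir" value="/var/www/mysite"/>

            <target name="clean">
                <delete dir="${deploy.dir}" quiet="true"/>
            </target>

            <target name="deploy" depends="clean">
                <mkdir dir="${deploy.dir}"/>
                <copy todir="${deploy.dir}">
                    <fileset dir="src" includes="**/*" excludes="**/*.tmp"/>
                </copy>
            </target>
        </project>

    Running ant (or ant deploy) from the project directory then copies everything under src/ into the target directory; for a remote deploy, the copy task could be swapped for an exec call to rsync or scp.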

    Read the article

  • sudoer scheme to allow useful access to another web developer yet retain future control of a virtual

    - by Tchalvak
    Background: Virtual Private Server
    I have a virtual private server that I'm looking to host multiple websites on, and provide access to another web developer. I don't care about putting too many constraints on him, though I wouldn't mind isolating the site that he'll be developing from the other sites on the server that I will develop.
    The problem: retain control
    Mainly what I want is to make sure that I retain control over the server in the future. I want to reserve the ability to create/promote/demote and other administrative functions that don't deal with web software. If I make him an admin, he can sudo su -, become root and remove root control from me, for example.
    I need him not to be able to:
    - take away other admin permissions
    - change the root password
    - have control over other security/administrative functions
    I would like him to still be able to:
    - install software (through apt-get)
    - restart apache
    - access mysql
    - configure mysql/apache
    - reboot
    - edit web development configuration type files in /etc/
    Other standard setups would be happily considered. I've never really set up a good sudoers file, so simple example setups would be very useful, even if they're only somewhat similar to the settings that I'm hoping for above.
    Edit: I have not yet finalized permissions, so standard, useful sudo setups are certainly an option; the lists above are more what I'm hoping I can do, and I don't know whether that setup can be done. I'm sure that people have solved this type of problem before somehow, though, and I'd like to go with something somewhat tested as opposed to something I've homegrown.
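
    A rough sudoers sketch along those lines (an illustration only: the account name "webdev" and the exact command paths are assumptions and should be checked with visudo on the actual system):

        # /etc/sudoers.d/webdev  -- edit with: visudo -f /etc/sudoers.d/webdev
        # commands the developer may run as root, full paths only
        Cmnd_Alias WEBDEV_CMDS = /usr/bin/apt-get, \
                                 /usr/sbin/service apache2 *, \
                                 /usr/sbin/service mysql *, \
                                 /sbin/reboot
        webdev ALL = (root) WEBDEV_CMDS

        # deliberately NOT granted: /bin/su, /usr/bin/passwd, /usr/sbin/visudo,
        # /usr/sbin/usermod, or a bare ALL -- those are what would let him lock you out

    The /etc/ editing wish is the risky part: anything that lets him edit sudoers, passwd or shadow indirectly is equivalent to full root, so it may be safer to make the specific web config files group-writable by a group he belongs to rather than granting an editor through sudo.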

    Read the article

  • Network Bridging on Linux for OpenVPN

    - by Coyote
    I've been following all the OpenVPN bridge tutorials I can find, but I'm still missing something. Does anyone know of a really detailed tutorial/explanation of bridging? If anyone has bridging running, could I get a copy of your interfaces file to see how you've set it up? (Obviously change the IP addresses, just please change them consistently.)
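
    For reference, a typical /etc/network/interfaces for a bridged OpenVPN server looks roughly like this (a sketch: the addresses and interface names are invented, and the bridge-utils package is assumed to be installed):

        # /etc/network/interfaces (sketch)
        auto lo
        iface lo inet loopback

        # the physical NIC carries no address of its own; it is enslaved to the bridge
        auto eth0
        iface eth0 inet manual

        auto br0
        iface br0 inet static
            address 192.168.1.10
            netmask 255.255.255.0
            gateway 192.168.1.1
            bridge_ports eth0
            bridge_stp off
            bridge_fd 0
            bridge_maxwait 5

    The OpenVPN config then uses dev tap0 and a server-bridge line matching the bridge's address plus a free address pool, with up/down scripts running brctl addif br0 tap0 and brctl delif br0 tap0; the tap interface is added to the bridge by those scripts rather than being listed in the interfaces file.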

    Read the article

  • Configure FTP Server with two different IP addresses on different subnets and separate NICs

    - by Luke
    I have an FTP server that's on a low-bandwidth connection. We want to set it up with a second IP address on a much higher-bandwidth connection. I set up the second interface with a static IP address on the faster connection, but unfortunately this does not work. I can verify that the second IP address works perfectly when I disable the first IP address. What do I need to do to get two separate interface IP addresses on different subnets working on the same server?
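
    The usual culprit with two interfaces on different subnets is that replies to traffic arriving on the second NIC leave through the first NIC's default gateway. A source-based routing sketch (the addresses, interface name and table name are made up for illustration):

        # give the second link its own routing table (one-time setup)
        echo "200 fastlink" | sudo tee -a /etc/iproute2/rt_tables

        # default route for that table via the fast connection's gateway
        sudo ip route add default via 203.0.113.1 dev eth1 table fastlink

        # replies sourced from the second address use that table
        sudo ip rule add from 203.0.113.5/32 table fastlink
        sudo ip route flush cache

    These rules are not persistent across reboots; on Debian/Ubuntu they can be attached to the interface stanza in /etc/network/interfaces with post-up lines.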

    Read the article

  • How do I run Microsoft Internet Explorer 8 (plus ActiveX) on Ubuntu?

    - by user169487
    First off, you should know I'm an Ubuntu newbie who has become passionate about it since my strangely low-priced Dell computer showed up with Ubuntu loaded on it and it could do almost everything I need from a computer. But now I've got a problem. I know it's like asking how to use a cassette player on your computer, but in South Korea EVERY website is made to work exclusively with MS Internet Explorer plus ActiveX, which Microsoft finally killed with version 9. Hence I have to use 8, as there are some Korean sites I need to access for searching, banking, etc. I see that the way to go is to load PlayOnLinux, but then it tells me to delete these. Do I really have to do that? Should I just hit "Install Anyway"? Or would that be just asking for trouble?
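
    A sketch of the usual route (assumptions throughout: PlayOnLinux offers an Internet Explorer 8 installer in its Internet category, and the equivalent manual route uses the winetricks ie8 recipe inside a 32-bit Wine prefix):

        # install the tooling
        sudo apt-get install playonlinux      # then use its GUI installer for Internet Explorer 8
        # or do it by hand with plain Wine + winetricks
        sudo apt-get install wine winetricks
        WINEARCH=win32 WINEPREFIX="$HOME/.wine-ie8" winetricks ie8
        WINEPREFIX="$HOME/.wine-ie8" wine 'C:\Program Files\Internet Explorer\iexplore.exe'

    ActiveX-heavy Korean banking sites often install extra security plugins that may still fail under Wine, so success tends to vary site by site.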

    Read the article

  • My computer boots up with Ubuntu: How can I tell what's on my hard disk?

    - by Larry Harson
    I've acquired an old laptop (Acer TravelMate 4050 with an Italian keyboard) that boots up with the following options:
    - Ubuntu, with Linux 3.0.0-12-generic
    - Ubuntu, with Linux 3.0.0-12-generic (recovery mode)
    - Memory test (memtest86+)
    - Memory test (memtest86+, serial console 115200)
    When I choose the first option, the screen just goes blank with a flashing cursor, but I can get into grub command mode by typing 'c'. Now I want to know what operating system is installed and what files are stored on the computer. How do I do this? What can I do to maximise the use of this computer in its current state?
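
    Two quick ways to look around without a working desktop, using only what is already on the machine: the grub command line can list the disk directly, and the recovery-mode entry offers a root shell where ordinary commands work. A sketch (the partition numbers are assumptions):

        # at the grub> prompt (press 'c' at the menu)
        grub> ls                      # lists drives/partitions, e.g. (hd0,msdos1)
        grub> ls (hd0,1)/             # lists files on that partition
        grub> cat (hd0,1)/etc/issue   # shows the installed Ubuntu release name

        # or choose the recovery-mode entry, pick the root shell, then:
        lsb_release -a                # which Ubuntu release is installed
        fdisk -l                      # how the disk is partitioned
        df -h                         # how much space is used
        ls /home                      # whose files are on it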

    Read the article

  • deploy git project and permission issue

    - by nixer
    I have a project hosted with gitolite on my own server, and I would like to deploy the whole project from the gitolite bare repository to an Apache-accessible place via a post-receive hook. The hook currently contains:

        echo "starting deploy..."
        WWW_ROOT="/var/www_virt.hosting/domain_name/htdocs/"
        GIT_WORK_TREE=$WWW_ROOT git checkout -f
        exec chmod -R 750 $WWW_ROOT
        exec chown -R www-data:www-data $WWW_ROOT
        echo "finished"

    The hook can't finish without an error message:

        chmod: changing permissions of `/var/www_virt.hosting/domain_name/file_name': Operation not permitted

    which means git does not have enough rights to do it. The git source path is /var/lib/gitolite/project.git/, which is owned by gitolite:gitolite, and with these permissions Redmine (running under the www-data user) can't reach the git repository to fetch changes. The whole project should be placed in /var/www_virt.hosting/domain_name/htdocs/, which is owned by www-data:www-data. What changes should I make so that the post-receive hook works properly and Redmine can use the repository? What I did is:

        # id www-data
        uid=33(www-data) gid=33(www-data) groups=33(www-data),119(gitolite)
        # id gitolite
        uid=110(gitolite) gid=119(gitolite) groups=119(gitolite),33(www-data)

    but it did not help. I want Apache (to view the project), Redmine (to read the project's source files from git) and git (deploying to the www-data-accessible path) to all work without problems. What should I do?
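
    A sketch of a hook that avoids the chown entirely (an assumption about the fix, not the asker's setup: since only root may change a file's owner, the alternative is to lean on the shared group membership shown above and use group permissions instead):

        #!/bin/sh
        # post-receive sketch: deploy as the gitolite user, keep files group-readable by www-data
        WWW_ROOT="/var/www_virt.hosting/domain_name/htdocs/"

        echo "starting deploy..."
        GIT_WORK_TREE="$WWW_ROOT" git checkout -f
        # note: the original used `exec chmod ...`, which replaces the shell and
        # prevents any later line from running; plain commands are intended here
        chmod -R u=rwX,g=rX,o= "$WWW_ROOT"
        echo "finished"

        # one-time setup outside the hook (as root): let www-data read what gitolite writes
        #   chgrp -R www-data /var/www_virt.hosting/domain_name/htdocs
        #   chmod g+s /var/www_virt.hosting/domain_name/htdocs   # new files inherit the group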

    Read the article

  • File permissions to run mysqld in chroot

    - by Neo
    I'm trying to run mysqld inside a chroot environment. Here's the situation: when I run mysqld as root, I can connect to my databases, but when I start mysql using the init.d scripts, mysql gives me an error:

        $ mysql --user=root --password=password
        ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (111)

    So I guess I need to change the file permissions of some files. But which ones? And in case you are wondering, '/var/run/mysqld/mysqld.sock' is owned by the 'mysql' user.
    EDIT: the strace output looks something like this:

        [pid 20599] <... select resumed> ) = 0 (Timeout)
        [pid 20599] time(NULL) = 12982215237
        [pid 20599] select(0, NULL, NULL, NULL, {1, 0} <unfinished ...>
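
    Error 111 is "connection refused", which usually means nothing is listening at that socket path from the client's point of view, rather than a plain permission problem. Some hedged first checks, assuming the chroot lives under a directory such as /srv/mysql-chroot (a made-up path):

        # is the server actually up, and where did it put its socket?
        ps aux | grep [m]ysqld
        sudo ls -l /var/run/mysqld/ /srv/mysql-chroot/var/run/mysqld/ 2>/dev/null

        # inside a chroot the socket is created under the chroot's own /var/run/mysqld,
        # so a client outside it won't find /var/run/mysqld/mysqld.sock; either point the
        # client at the real location or bypass the socket entirely:
        mysql --protocol=TCP --host=127.0.0.1 --user=root -p
        # or
        mysql --socket=/srv/mysql-chroot/var/run/mysqld/mysqld.sock --user=root -p

        # if it really is permissions, the directory (not just the socket) must be
        # accessible to the mysql user:
        sudo chown mysql:mysql /var/run/mysqld && sudo chmod 755 /var/run/mysqld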

    Read the article

  • Creating own Amazon Machine Image - Kernel panic

    - by amra
    I have created my own AMI and registered it on Amazon EC2, but during AMI startup I receive the following error:

        Kernel panic - not syncing: VFS: Unable to mount root fs on unknown-block(8,1)

    The image runs locally without any problems. fstab contains:

        proc       /proc   proc   defaults                     0   0
        /dev/sda1  /       ext3   relatime,errors=remount-ro   0   1

    Thanks for the help.
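
    This particular panic happens before /etc/fstab is ever read: the kernel the instance boots cannot find or mount the device named in its root= parameter. Some hedged things to compare (the file names assume a pv-grub style AMI with a menu.lst; adjust to the actual image):

        # does the boot entry's root= match the device the EC2 kernel will expose?
        cat /boot/grub/menu.lst          # e.g. kernel ... root=/dev/sda1  vs  /dev/xvda1

        # is ext3 support built into (or packed into the initrd of) the kernel the AMI boots?
        grep EXT3 /boot/config-$(uname -r)
        lsinitramfs /boot/initrd.img-$(uname -r) | grep -i ext3   # Debian/Ubuntu images

        # when registering the AMI, the chosen kernel image (AKI) must match what the
        # image expects; a mismatch between the AKI and the root device name is a
        # common cause of unknown-block panics.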

    Read the article

  • Anyone experiencing audio issues with VirtualBox on Linux and has a solution?

    - by DoxaLogos
    I've been using VirtualBox (now at 3.0.2) on Kubuntu (now at 9.04) for a while, and I seem to have a problem when running Windows. Sometimes, after a while, the audio cuts out in Kubuntu. The only way I can get it to recover is to make sure VirtualBox is completely shut down and then either go into Multimedia under System Settings and test the audio, or restart. I'm wondering if anyone else here has experienced similar issues and has come up with a more elegant solution. I can't seem to find a reasonable one at virtualbox.org.

    Read the article

  • Can a non-redundant RAID5 cause any serious problems (compared to RAID0)?

    - by leemes
    I used to have a three-disc RAID5 (mdadm) in my computer for personal media storage (music, videos, photos, programs, games, ...). It had three discs of 750 GB each, resulting in an array capacity of 1.5 TB. One day (one year ago), I needed one of those discs to install another operating system. I thought I didn't need the redundancy anymore, since I back up the most important stuff (personal photos, for example) to an external disc anyway. So I decided to remove one of the three discs without converting the RAID to RAID0 or to two separate discs, because I had no temporary storage (and one cannot simply convert a RAID5 to RAID0, AFAIK). So now, for about one year, I have had a non-redundant RAID5 running with 2 of 3 discs. Sometimes one of the discs has a defective contact at the power cable or something similar, causing the drive to stop working temporarily (I don't know exactly what it is). Since it still works after rebooting the computer, and in most cases after calling some mdadm commands, it wasn't that problematic. Note that the data is not very critical, since I still have a backup of the most important stuff. But in the last few weeks one of the drives has been failing very frequently (every few hours), so it is getting really annoying to manage. My questions are:
    - Is there any disadvantage (apart from the annoying management) of a non-redundant RAID5 (with one drive less than typical) over a RAID0? If I understand it correctly, both have no redundancy and the same capacity. On a temporary drive failure, I can restart the array in both cases, assuming that the drive itself still works after the failure.
    - Can it happen that the drive contents alter on a drive failure, making the array inconsistent? If so, can I tell mdadm to check the array for failures (without a file-system-level checking tool)?
    - Since the drive most probably only has a defective contact causing it to fail for a second, can I tell mdadm to automatically restart the array, so that I won't even notice the failure if no application tried to access the file system during it?
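
    On the mdadm side, these are the relevant knobs (a sketch; md0 and sdb1 are placeholders for the real array and member, and note that a parity check is only meaningful while the array is not degraded, i.e. not in the 2-of-3 state described above):

        # md-level consistency check (compares data against parity), no filesystem tool needed
        echo check | sudo tee /sys/block/md0/md/sync_action
        cat /proc/mdstat                             # progress
        cat /sys/block/md0/md/mismatch_cnt           # non-zero means inconsistencies were found

        # after a transient failure, re-attach the dropped member
        sudo mdadm /dev/md0 --re-add /dev/sdb1
        sudo mdadm --detail /dev/md0

    Stock mdadm has no "restart automatically after a transient failure" switch; that part is usually done with a small cron or udev script wrapping the --re-add above.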

    Read the article

  • Grub hangs intermittently on "Starting Up..."

    - by Griffo
    Hi all, I've had this problem for a while now. My Linux server is set to wake-on-LAN, but occasionally it halts at Grub's "Starting Up..." and goes no further. This is not due to additional hardware being attached, such as a flash drive, as I never plug anything into it. It may boot perfectly 40 times in a row and then hit this issue; sometimes it hits it a couple of times in quick succession and then doesn't happen again for ages. I'm not sure how to diagnose it, since it doesn't seem to be reproducible. Any help much appreciated. Thanks
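
    Intermittent hangs at that stage are hard to pin down, but a couple of cheap checks can rule out the usual suspects (a sketch: /dev/sda is an assumption for the boot disk, and smartmontools may need installing):

        # rule out a marginal boot disk
        sudo smartctl -H /dev/sda
        sudo smartctl -l error /dev/sda

        # rewrite the bootloader and its embedded images in case they are partially corrupt
        sudo grub-install /dev/sda
        sudo update-grub

    If the hang only ever happens on wake-on-LAN starts and never on a cold power-up, BIOS quirks around WOL resets are also worth suspecting.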

    Read the article

  • Unstable wireless connection on Ubuntu only (router config problem?)

    - by crunch
    So I'm using Ubuntu 14.04, and I could connect to every Wi-Fi network I've tried and it worked fine... except for this router that I have at home. The connection seems to be unstable: it just randomly disconnects and reconnects. On the same machine I have Ubuntu 13.10 (same problem) and Windows 8 (no problem). There are also no Wi-Fi problems when I connect with my Android phone. So I figured it must be some config issue. I tried tweaking some settings and restoring the defaults on the router, but I don't really know what I'm doing so, unsurprisingly, it didn't fix the problem.
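
    Since only the Ubuntu installs misbehave, the wireless driver is a more likely suspect than the router. Some first diagnostic steps plus one common workaround (wlan0 is an assumption for the interface name):

        # which driver/chipset is in use, and what it logs when the drop happens
        lspci -nnk | grep -A3 -i network
        dmesg | grep -i -E 'wlan|firmware|deauth'

        # a frequent cause of random drops is the driver's power saving; try turning it off
        sudo iwconfig wlan0 power off

    If that helps, it can be made permanent (for example via a script under /etc/pm/power.d/ or a NetworkManager setting, depending on the driver).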

    Read the article

  • How to diagnose a problem in MySQL Cluster?

    - by maj
    Hi, I have set up a MySQL Cluster following exactly this howto. Page 1 is completed, but the problem is that I can only see the nodes in ndb_mgm for a little while, and then I get:

        ndb_mgm> show;
        Cluster Configuration
        ---------------------
        [ndbd(NDB)]     2 node(s)
        id=2 (not connected, accepting connect from 192.168.0.101)
        id=3 (not connected, accepting connect from 192.168.0.102)

        [ndb_mgmd(MGM)] 1 node(s)
        id=1    @192.168.0.103  (Version: 5.1.45)

        [mysqld(API)]   2 node(s)
        id=4 (not connected, accepting connect from any host)
        id=5 (not connected, accepting connect from any host)

        ndb_mgm>

    So the question is: how do I diagnose this problem? Are there log files I can look in?
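
    There are log files (a sketch; the DataDir shown is MySQL Cluster's common default and may differ in the actual config.ini):

        # on the management node: the cluster log records node connects/disconnects and why
        less /var/lib/mysql-cluster/ndb_1_cluster.log

        # on each data node: per-node stdout and error traces
        less /var/lib/mysql-cluster/ndb_2_out.log
        less /var/lib/mysql-cluster/ndb_2_error.log

        # data nodes dropping out shortly after start is often a firewall or hostname issue;
        # check that the ndbd hosts can actually reach the management node on port 1186
        telnet 192.168.0.103 1186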

    Read the article

  • How can one get involved in the Ubuntu Brainstorm team?

    - by Jay
    I've heard it thrown around on a lot of blogs that Ubuntu Brainstorm is the "graveyard of good ideas". I have a bit of a passion for average end-users suggesting the kinds of software they want and the way they think their applications and computers could work. I'm interested in joining the Ubuntu Brainstorm team, maybe to help keep ideas from getting buried and to make it not seem like a dead platform. Unfortunately I'm not very plugged into the "community", so I'm not sure how to go about it. I was hoping someone might be able to point me in the right direction. Thanks.

    Read the article

  • S3sync not working

    - by user57833
    Hello, I managed to get s3sync to upload my test folder to Amazon S3 and can see it in the AWS Management Console. Downloading the data back to a test folder results in the following error message:

        root@mybucketname:/var/s3sync# ./week_download.sh
        s3Prefix backups/weekly
        localPrefix /var/s3sync/testdown/weekly
        s3TreeRecurse mybucketname backups/weekly
        Creating new connection
        Trying command list_bucket mybucketname prefix backups/weekly max-keys 200 delimiter / with 100 retries left
        Response code: 200
        prefix found: /
        s3TreeRecurse mybucketname backups/weekly /
        Trying command list_bucket mybucketname prefix backups/weekly/ max-keys 200 delimiter / with 100 retries left
        Response code: 200
        S3 item backups/weekly/
        s3 node object init. Name: Path:backups/weekly Size:0 Tag:d41d8cd98f00b204e9800998ecf8427e Date:Fri Oct 29 14:21:53 UTC 2010
        local node object init. Name: Path:/var/s3sync/testdown/weekly/ Size: Tag: Date:
        source:
        dest:
        Update node
        s3sync.rb:638:in `initialize': No such file or directory - /var/s3sync/testdown/weekly/.s3syncTemp (Errno::ENOENT)
                from s3sync.rb:638:in `open'
                from s3sync.rb:638:in `updateFrom'
                from s3sync.rb:393:in `main'
                from s3sync.rb:735

    I am using the following download script:

        #!/bin/bash
        # script to download local directory up to s3
        cd /var/s3sync/
        export AWS_ACCESS_KEY_ID=nothing to see here
        export AWS_SECRET_ACCESS_KEY=nothing to see here
        export SSL_CERT_DIR=/var/s3sync/certs
        ruby s3sync.rb -r -v -d --progress --make-dirs mybucket:backups/weekly /var/s3sync/testdown
        # copy and modify line above for each additional folder to be synced

    Any ideas? Does the download script need to download to the source folder on Amazon S3, i.e. the testup folder? I was hoping that, in the event of a complete failure where the original folders no longer exist, it would just download everything for me. Note: I changed my bucket names to "mybucketname" so that they are not public!
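
    For what it's worth, the ENOENT on .s3syncTemp suggests the local target directory does not exist yet: --make-dirs creates subdirectories below the local prefix, but the prefix itself has to exist before s3sync can write its temp file there. This is an inference from the trace, not something stated in the article:

        # create the local prefix first, then retry the download
        mkdir -p /var/s3sync/testdown/weekly
        ./week_download.sh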

    Read the article

  • How to get Postfix to send/forward/relay to a sub-domain located on another server?

    - by thiesdiggity
    I have a quick question: how do I set up Postfix to send email to another server (an Exchange server) when sending to an email address in a sub-domain of our main server? For example, say our main server is mail.example.com and we have an Exchange server set up to receive email for exchange.example.com. We have the MX records set up in our DNS, and mail is received correctly if we send from a Gmail account. However, when we try to send an email from an @example.com account we get the following error:

        Host or domain name not found. Name service error for name=exchange.example.com type=A: Host not found

    I believe Postfix checks for local mailboxes first and, if it is set up with the domain, delivers to the local account, but in this case the sub-domain's accounts are located on another server. Does anyone have any thoughts on what I need to do within Postfix so it doesn't look locally for the exchange.example.com mailboxes? I found the relay_domains directive within Postfix, but adding the sub-domain there doesn't seem to fix it. Thanks for your help.
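
    A sketch of the usual Postfix arrangement for this (the map path is conventional; the bracketed host and the placeholder IP are assumptions). The key points are that exchange.example.com must not be listed in mydestination, and that a transport map can hand the sub-domain straight to the Exchange box even when the A-record lookup shown above fails:

        # /etc/postfix/main.cf (additions)
        relay_domains = exchange.example.com
        transport_maps = hash:/etc/postfix/transport

        # /etc/postfix/transport
        exchange.example.com    smtp:[exchange.example.com]
        # (the brackets suppress MX lookups; an address such as smtp:[192.0.2.10] also
        #  works if internal DNS can't resolve the name -- 192.0.2.10 is a placeholder)

        # activate
        postmap /etc/postfix/transport
        postfix reload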

    Read the article
