Search Results

Search found 25551 results on 1023 pages for 'linux validated rpm oracl'.


  • How to change font size on display

    - by Tim
    My laptop is a Lenovo T400 with a 14.1-inch screen and a default resolution of 1440 x 900. My main OS is Ubuntu 10.10. The default font size on the display is somewhat small, which may be contributing to my eye fatigue. My previous laptop was an Acer 5000 with a 15.4-inch screen and a default resolution of 1024 x 768, and I preferred reading on it. Is it possible to change the settings on my new laptop so that reading on it feels like reading on the old one? What parameters control the rendered font size? Are screen size and resolution among them? Windows offers a choice of font sizes, while in Ubuntu I haven't found where to change this setting and would like to know if someone here does. I also wonder whether I can attach a separate, bigger display (perhaps a desktop monitor) as the display of my laptop, in case I don't want to enlarge the font at the cost of showing less content, and how I would do that. Thanks and regards!
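
    Assuming GNOME 2 on Ubuntu 10.10, the font settings live in gconf; a minimal sketch (the key names below are the standard GNOME 2 ones, but verify them with gconftool-2 -R /desktop/gnome before relying on this):

        # raise the font-rendering DPI so every font scales up together
        gconftool-2 --type float --set /desktop/gnome/font_rendering/dpi 112

        # or raise the default application font size directly
        gconftool-2 --type string --set /desktop/gnome/interface/font_name "Sans 12"

    System > Preferences > Appearance > Fonts exposes the same settings graphically. As for the external monitor: plugging one in and selecting it under System > Preferences > Monitors generally works without extra configuration.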


  • Linux 3.12 released as stable, with performance gains and lower power consumption

    Linus Torvalds has announced, in a message to the LKML (Linux Kernel Mailing List), the release of the stable version of the Linux 3.12 kernel. Among the improvements is a change in how the processor's operating frequency is managed (a modification of the CPUfreq governor algorithm), bringing significant performance gains and a reduction in power consumption...


  • How can I check for a string match AND an empty file in the same if/then bash script statement?

    - by Mike B
    I'm writing a simple bash script to do the following:

    1) Check two files (foo1 and foo2).
    2) If foo1 is different from foo2 and foo1 is NOT blank, send an email.
    3) If foo1 is the same as foo2, or foo1 is blank, do nothing.

    The blank condition is what's confusing me. Here's what I've got to start with:

        diff --brief <(sort ./foo1) <(sort ./foo2) >/dev/null
        comp_value=$?

        if [ $comp_value -ne 0 ]
        then
            mail -s "Alert" [email protected] < ./alertfoo
        fi

    Obviously this doesn't check for blank contents. Any thoughts on how to do that?
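
    For the blank check, bash's -s test ("file exists and has a size greater than zero") can be folded into the same if; a sketch of that combination:

        # mail only if the files differ AND foo1 exists and is non-empty
        if [ "$comp_value" -ne 0 ] && [ -s ./foo1 ]; then
            mail -s "Alert" [email protected] < ./alertfoo
        fi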


  • Why should I use a puppet parametrized class?

    - by robbyt
    Generally when working with complex puppet modules, I will set variables at the node level or inside a class, e.g.:

        node 'foo.com' {
          $file_owner = "larry"
          include bar
        }

        class bar {
          $file_name = "larry.txt"
          include do_stuff
        }

        class do_stuff {
          file { $file_name:
            ensure => file,
            owner  => $file_owner,
          }
        }

    How/when/why do parametrized classes help in this situation? How are you using parametrized classes to structure your puppet modules?
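
    For comparison, a parametrized version of the same module makes the data flow explicit instead of relying on dynamic scope; a minimal sketch (parameter names and defaults are illustrative):

        class bar ($file_owner = 'root', $file_name = 'bar.txt') {
          file { $file_name:
            ensure => file,
            owner  => $file_owner,
          }
        }

        node 'foo.com' {
          # the node passes values in explicitly; nothing leaks in from outer scopes
          class { 'bar':
            file_owner => 'larry',
            file_name  => 'larry.txt',
          }
        }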


  • Lighttpd based server issues crop up when port forwarding

    - by michael
    I have four host computers running lighttpd web servers. They sit behind an HSPA modem, each occupying an HTTP port between 81 and 84; port 80 is taken by the modem itself. The port forwarding is set up correctly; however, only a portion of any web page I request from any of the hosts comes through (they all fail after about 20% of the page). If I put the host on port 81 into the DMZ, it serves pages fine. The others do not respond to the DMZ treatment. Is it possible the web content on the hosts somehow requires ports besides their respective HTTP port? Or is it possible that even though server.port is set in the lighttpd_ssl.conf file, the individual hosts still expect to serve on port 80? I am not familiar with lighttpd, nor did I set these machines up; they are running on video encoders I purchased. I can grab any files from them required for further information on the problem.
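
    One quick way to narrow this down is to compare what each lighttpd instance is configured for against what it is actually bound to; a sketch (config paths may differ on the encoders):

        # what port does the config ask for?
        grep -R "server.port" /etc/lighttpd/

        # what is lighttpd really listening on?
        netstat -tlnp | grep lighttpd

    If the bound ports match the forwarded ones, the truncation at ~20% points more toward the modem's forwarding or MTU handling than toward lighttpd itself.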


  • How to retry connections with wget?

    - by Andrei
    I have a very unstable internet connection and sometimes have to download files as large as 200 MB. The problem is that the speed frequently drops and sits at --.-K/s while the process remains alive. I thought about sending KILL signals to the process, but from what I read in the wget manual about signals, that won't help. How can I force wget to reinitialize itself and pick the download up where it left off after the connection drops and comes back up again? I would like to leave wget running, and when I come back, I want to see it downloading rather than stalled at --.-K/s.
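
    wget can handle this on its own once a stall is turned into a retryable error; a sketch (the flags are documented wget options, the URL is hypothetical):

        # --read-timeout aborts the transfer if no data arrives for 30 s,
        # --tries=0 retries indefinitely, --waitretry backs off between attempts,
        # and --continue resumes each retry from where the last one stopped
        wget --continue --tries=0 --read-timeout=30 --waitretry=10 \
             http://example.com/big-file.iso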


  • Eject LiveCD + Reboot

    - by JPerkSter
    We use live CDs a lot in my line of work, whether it be fscking file systems, recovering data for a customer who rm'd his server, etc. I'm looking for a quick way to eject the CD-ROM and reboot the server. Does anyone have any one-liners to do this, or any other suggestions? Plain 'eject' doesn't work most of the time, from what I've tested/used. We're using RHEL/CentOS on most of our servers, if that helps :D
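
    A hedged one-liner along those lines, assuming the drive node is /dev/sr0 (eject is usually happier with the device named explicitly):

        sync && eject /dev/sr0 && shutdown -r now

    The && chain stops short if the eject itself fails, so you don't reboot back into the CD.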


  • Using Plesk to setup MySQL

    - by chris
    Having trouble getting MySQL up and running on a new virtual server. The host gave me Plesk, and I think MySQL is installed, but I can't seem to access it. I keep getting this:

        mysql -u admin -p
        Enter password:
        ERROR 1045 (28000): Access denied for user 'admin'@'localhost' (using password: YES)

    How do I make sure it's running properly? How do I reset the root password? (I have root access to the server.)
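
    On older Plesk releases, the MySQL admin password is stored in /etc/psa/.psa.shadow; a commonly cited sketch (verify that the file exists and is plain text on your Plesk version):

        # is mysqld running at all? (service name varies: mysql vs. mysqld)
        service mysqld status

        # log in using the password Plesk stored at install time
        mysql -u admin -p"$(cat /etc/psa/.psa.shadow)"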


  • Apache2 WebServer not allowing me to view website/files in /var/www

    - by CitadelCSAlum
    I used to be able to access websites/files stored in the directory /var/www. I have not used this for a while, but now I need to store media in that directory, or in /var/www/images. I noticed that my Apache web server wasn't running correctly, so I did a complete package removal and reinstall, but I am still unable to access a test page index.html at /var/www/index.html by going to http://myipaddresshere/index.html. Is there some initial configuration I need to do to be able to store HTML and media files in this directory and access them from the browser? I don't remember having to do anything before.
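
    A few checks worth running before editing any config; a sketch for the usual Debian/Ubuntu paths:

        # is Apache running and bound to port 80?
        sudo service apache2 status
        sudo netstat -tlnp | grep ':80'

        # can the apache user actually read the file? (the file and every parent
        # directory need world read/execute, or ownership by www-data)
        ls -l /var/www/index.html

        # watch for the error as you make the request
        sudo tail -f /var/log/apache2/error.log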


  • good books about server architecture?

    - by ajsie
    When the traffic for a website grows, I don't think one Apache server in a VPS is the way to go. I would like to know more about how I should then set up the server-side architecture. I'm not that much into hardware (what kind of cables to use, different CPU architectures, etc.), but I am interested in the software architecture: which servers (Apache, nginx, Squid, Varnish, etc.) to use and how they interact with each other; whether to run one server per machine; how many MySQL servers; how many Apache and nginx servers; and, in short, how the whole "machine court" is laid out. Are there any good books about this area?
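
    As a flavour of what such books cover: the usual first scaling step is a reverse proxy in front of Apache; a minimal nginx sketch (addresses and names are hypothetical):

        server {
            listen 80;
            server_name example.com;

            location / {
                # nginx absorbs slow clients and serves static files cheaply;
                # Apache behind it handles only the dynamic requests
                proxy_pass http://127.0.0.1:8080;
                proxy_set_header Host $host;
            }
        }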


  • apache: lists of all directives for a context?

    - by ajsie
    In the Apache online documentation, each directive belongs to a context, e.g. server config, virtual host, directory, .htaccess, and so on. I wonder if there is a list of all directives belonging to each context, e.g. a list of all directives valid in the virtual host context, so I know exactly which ones I can use? Also, where can I find the directives for Apache modules: on the main documentation page, or does each module have its own page (e.g. mod_rewrite)?
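
    Apache can also report this itself: the control binary lists every directive compiled into the loaded modules, together with the contexts each one is allowed in; a sketch (the binary is apache2ctl on Debian/Ubuntu, apachectl or httpd elsewhere):

        # every available directive, with its allowed contexts
        apache2ctl -L | less

        # the modules currently loaded
        apache2ctl -M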


  • Migration of physical servers to a virtual solution: what do I have to do?

    - by bibarse
    Hello, I'm new to this forum, so please forgive my inexperience and my low level of English. I have been a trainee at a company for one month, and my task is to migrate 3 physical servers to a virtualization technology. The company produces e-learning software, so there is a lot of data: videos, Flash files, and zip archives. Here is some inventory of the servers: OS: one Debian and two Red Hat; Apache, PHP/MySQL, Sendmail/Dovecot, and Webmin with the Virtualmin template to create the web sites dynamically, because there is no sysadmin ... The future provider will be responsible for securing, updating, and creating the virtual machines (outsourced), running Red Hat. So I would like help choosing a virtualization technology (I prefer KVM or Red Hat RHEV; VMware is expensive), evaluating the hardware needs (planning for 4 or 5 years of growth), and drawing up a good plan so that nothing is forgotten. Thank you for your responses.
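
    Since KVM is on the shortlist: KVM needs hardware virtualization extensions, so one concrete first check on the existing machines is (a sketch):

        # a non-zero count means the CPU advertises VT-x (vmx) or AMD-V (svm)
        egrep -c '(vmx|svm)' /proc/cpuinfo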


  • securing communication between 2 Linux servers on local network for ports only they need access to

    - by gkdsp
    I have two Linux servers connected to each other via a cross-connect cable, forming a local network. One of the servers presents a DMZ for the other server (e.g. a database server) that must be very secure. I'm restricting this question to communication between the two servers on ports that only need to be available to these servers (and no one else). Thus, communication between the two servers can be established by: (1) opening the required port(s) on both servers and authenticating according to the applications' rules, or (2) disabling iptables on the NICs the cross-connect cable is attached to (on both servers). Which method is more secure? In the first case, the needed ports are open to the external world but protected by user name and password. In the second case, none of the needed ports are open to the outside world, but since iptables is disabled for the NICs associated with the cross-connect cable, essentially all ports may be considered "open" between the two servers (so if the server providing the DMZ is compromised, an attacker on it could reach every port over the cross-connect cable). Any conventional wisdom on how to secure communication between two servers on ports only these servers need access to?
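
    A middle path is to leave iptables on but scope the rules to the peer's address and the cross-connect interface; a sketch (interface name, peer address, and port are hypothetical):

        # allow the database port only from the peer, only on the cross-connect NIC
        iptables -A INPUT -i eth1 -s 10.0.0.2 -p tcp --dport 3306 -j ACCEPT

        # drop everything else arriving on that NIC
        iptables -A INPUT -i eth1 -j DROP

    That keeps the ports closed to the outside world without treating the cross-connect as implicitly trusted.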


  • Ubuntu's garbage collection cron job for PHP sessions takes 25 minutes to run, why?

    - by Lamah
    Ubuntu has a cron job set up which looks for and deletes old PHP sessions:

        # Look for and purge old sessions every 30 minutes
        09,39 * * * * root [ -x /usr/lib/php5/maxlifetime ] \
            && [ -d /var/lib/php5 ] && find /var/lib/php5/ -depth -mindepth 1 \
            -maxdepth 1 -type f -cmin +$(/usr/lib/php5/maxlifetime) ! -execdir \
            fuser -s {} 2> /dev/null \; -delete

    My problem is that this process is taking a very long time to run, with lots of disk IO. In my CPU usage graph, the cleanup runs show up as teal spikes. At the beginning of the period, PHP's cleanup jobs were scheduled at the default 09- and 39-minute marks. At 15:00 I removed the 39-minute run from cron, so a cleanup job twice the size now runs half as often (the peaks become twice as wide and half as frequent); the IO-time and disk-operations graphs show the same pattern. At the peak, when about 14,000 sessions were active, the cleanup can be seen to run for a full 25 minutes, apparently using 100% of one CPU core and what seems to be 100% of the disk IO for the entire period. Why is it so resource intensive? An ls of the session directory /var/lib/php5 takes just a fraction of a second. So why does it take a full 25 minutes to trim old sessions? Is there anything I can do to speed this up? The filesystem for this device is currently ext4, running on Ubuntu Precise 12.04 64-bit. EDIT: I suspect that the load is due to the unusual "fuser" step (since I expect a simple rm to be a damn sight faster than the performance I'm seeing). I'm going to remove the use of fuser and see what happens.
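
    For reference, the edit being contemplated (dropping fuser) reduces the job to a plain find-and-delete; a sketch of that version (note what is lost: fuser is the guard against deleting session files that are still held open):

        [ -x /usr/lib/php5/maxlifetime ] && [ -d /var/lib/php5 ] && \
            find /var/lib/php5/ -depth -mindepth 1 -maxdepth 1 -type f \
                 -cmin +$(/usr/lib/php5/maxlifetime) -delete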


  • What are secure ways of sharing a server (ssh+LAMP) with friends?

    - by Bran the Blessed
    What is the best way to share a virtual server with friends? More precisely, I have the following assets:

    - A virtual private server (Debian Lenny) with root access for myself, running SSH, apache2, and mysql
    - Some unused disk space
    - Some friends in need of hosting

    I would now like to do the following:

    - Host one or several domains per friend
    - Give my friends full access to their domains, including running PHP scripts, for example
    - Keep my friends from poking around in other directories
    - Keep the security of my server from being compromised by faulty PHP scripts

    To clarify: I do trust my friends in the sense that they are not trying to do something evil with their access; I just do not trust the programs they are going to run. So, what are your recommendations for establishing such a scenario? A partial solution I already came up with: add chrooted SSH users for my friends, and add Apache vhosts per user (pointing the directories to subdirectories of the home directories, i.e. /home/alice/example.com, /home/bob/example.net, etc.). But how can I enforce a chroot-like environment for the scripts they are running within these vhosts? Any pointers would be appreciated.
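
    Under mod_php, the closest stock approximation to a per-vhost chroot is open_basedir, which confines PHP's file access (though not CPU, memory, or network use); a sketch with illustrative paths:

        <VirtualHost *:80>
            ServerName example.com
            DocumentRoot /home/alice/example.com

            # PHP may only touch the user's tree plus a scratch directory
            php_admin_value open_basedir "/home/alice/example.com:/tmp"
        </VirtualHost>

    For stronger isolation, running each user's PHP as their own system user (e.g. via FastCGI with suexec) is the usual next step.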


  • Ubuntu stops auto-mounting flash drive

    - by Brian
    It seems that after being up for a few days, my Ubuntu system refuses to auto-mount hot-plugged USB disks (i.e. flash drives). The output from dmesg shows that the kernel recognizes the device correctly. The only solution I'm aware of at the moment is to reboot (logging out may work as well, but the impact is the same since I have a bunch of stuff open and it takes a few minutes to get everything situated after startup/login). I thought gvfs-fuse-daemon was the thing responsible for managing filesystems in userspace, but killing and restarting that doesn't help. Any other ideas?
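
    Until the cause turns up, mounting by hand avoids the reboot; a sketch (the device name comes from your dmesg output and will vary):

        # see which node the kernel just assigned, e.g. sdb1
        dmesg | tail

        # mount it manually
        sudo mkdir -p /media/usb
        sudo mount /dev/sdb1 /media/usb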


  • Raid1+0: create stripe over two /dev/mdx on partition or not?

    - by Chris
    Given that I hadn't found a way to control how a RAID10 is laid out with mdadm (see "How to display/define Mirror/Stripping pairs with mdadm"), I went with the RAID1+0 solution:

        mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdf1
        mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sdg1 /dev/sdh1
        mdadm --create /dev/md10 --level=0 --raid-devices=2 /dev/md0 /dev/md1

    My question is about the stripe. For the mirrors, I create a primary partition over the full disk and set the partition type to FD. So, should I do the same for the stripe: create a partition on /dev/md0 and /dev/md1 (primary over the full 'disk', partition type set correctly) and then build the stripe on the partitions? Is there a correct way here, or are there advantages/disadvantages to either approach? Thank you
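
    For reference, mdadm can also build a native RAID10 in one step, with the mirror layout chosen explicitly; a sketch using the same four disks (--layout=n2 is the conventional two-way "near" mirroring):

        mdadm --create /dev/md10 --level=10 --raid-devices=4 --layout=n2 \
              /dev/sda1 /dev/sdf1 /dev/sdg1 /dev/sdh1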


  • Distributed filesystem across a slow link

    - by Jeff Ferland
    I have an image in my head where a link is too slow to allow real-time transfer of files, but fast enough to catch up every day. What I'd like to see is a master <- master setup where, when I write a file to Server A, the metadata transfers to Server B immediately, and the file itself transfers when the link is idle, or immediately if Server B's client tries to read the file before Server A has sent it. There seem to be many filesystems that perform well over fast links, but I don't know of any that do well with a big bottleneck and a few hours of latency.
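
    Short of a purpose-built filesystem, the daily catch-up half of that picture can be approximated with rsync over the slow link; a sketch (host and paths are hypothetical, and this gives batch catch-up only, not the immediate-metadata behavior described):

        # resume partial transfers and throttle (KB/s) to leave headroom on the link
        rsync -a --partial --bwlimit=100 serverA:/data/ /data/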


  • Disaster - partitions lost, data seemingly alive, how to recover?

    - by a2h
    I've used TestDisk, and it has written back my old partition structure: a ~20GiB partition for Vista, a ~25GiB partition for 7 (which now shows up as unallocated), and a ~400GiB partition for documents. What it's meant to be is a 30GiB partition for 7, some unallocated space, and a ~400GiB partition for documents. So currently I have access to all my documents, but not to any of the programs I installed on C:, or to AppData, because my boot partition is now supposedly a 20GB Vista partition. I've tried my Windows 7 install disc's repair function, but that did nothing beyond wasting about 10 minutes of my time. I'm currently posting from an Ubuntu live CD. Any help?
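
    Before letting TestDisk write yet another table, it is worth recording what the disk looks like right now from the live CD; a sketch (assuming the disk is /dev/sda):

        # the partition table as currently written
        sudo fdisk -l /dev/sda

        # re-run the analyser and compare its findings against the layout you
        # expect, without choosing Write this time
        sudo testdisk /dev/sda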


  • Proper way to configure ~/.Xsession with a standalone window manager to gracefully end a session

    - by cYrus
    I'm using xdm, and my ~/.Xsession looks like this:

        # <initialization stuff here>
        exec openbox

    It works, but I've noticed that when I log out, Openbox doesn't gracefully kill all the applications; in particular, Google Chrome complains about that. How can I make sure all processes are asked to exit and waited for (just like in other setups: GNOME, KDE, Windows ...)? The only (ugly) solution I've found involves adding sleep and kill to ~/.Xsession.
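
    One approach is to stop exec'ing the window manager, so the script regains control when Openbox exits and can signal the leftover clients; a sketch (it assumes the session script is the process-group leader and that clients, Chrome included, treat SIGTERM as a normal quit):

        #!/bin/sh
        # <initialization stuff here>

        openbox &
        wait $!                        # the session lasts as long as the WM

        trap '' TERM                   # shield this script from the group signal
        kill -TERM -- -$$ 2>/dev/null  # ask everything in the session group to exit
        sleep 5                        # give clients time to shut down cleanly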


  • Redirect non-www ssl traffic to www ssl (apache)

    - by The NinjaSysadmin
    Hello, I'm attempting to get a redirect working, and for some reason I can't figure it out today. I have a vhost file within httpd that listens on standard port 80 and on port 443. I'm attempting to redirect https://domain.com/(.*) to https://www.domain.com/$1 so that the rest of the URL remains intact. My config is as follows:

        ServerName www.domain.com
        ServerAlias tempdomain.testdomain.co.uk
        ServerAlias domain.com

    The rewrite rule I'm using is:

        RewriteCond %{HTTP_HOST} ^domain.com$
        RewriteRule ^(.*)$ https://www.domain.com$1 [R=301,L]

    I've also tried removing the . and $, but nothing changed. When I visit https://domain.com/secure.page?action=comp it doesn't redirect to https://www.domain.com/secure.page?action=comp. I do also have other SSL pages; the above was just an example. Can anyone point out my stupidity?
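
    A sketch of the usual fix: escape the dot, and make sure the rules live in the SSL vhost, since a request for https://domain.com/ is answered by whichever vhost serves port 443:

        <VirtualHost *:443>
            ServerName www.domain.com
            ServerAlias domain.com
            # ... SSL certificate directives ...

            RewriteEngine On
            RewriteCond %{HTTP_HOST} ^domain\.com$ [NC]
            # %{REQUEST_URI} carries the path; the query string is appended automatically
            RewriteRule ^ https://www.domain.com%{REQUEST_URI} [R=301,L]
        </VirtualHost>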

