Search Results

Search found 26263 results on 1051 pages for 'linux guest'.


  • phpmyadmin port change?

    - by Rajat
    How do I change my default phpMyAdmin port to 443 or 9999? Is it possible, or do I have to use port 80 only? If it is possible, how do I do it? Apache is listening on port 9999 for sure. However, going to the URL http://<webserver>:9999/phpmyadmin/ gives the following error (in Firefox): An error occurred during a connection to webserver:9999. SSL received a record that exceeded the maximum permissible length. (Error code: ssl_error_rx_record_too_long) Does anyone have any clue what is going on?
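    The ssl_error_rx_record_too_long message usually means the browser and the vhost disagree about SSL on that port: a plain http:// request is hitting a port owned by an SSL-enabled vhost (or vice versa). A quick way to check, sketched here assuming a Debian/Ubuntu-style Apache layout (adjust paths for your distribution):

      # which ports does Apache listen on, and is 9999 tied to an SSL vhost?
      grep -R "^Listen" /etc/apache2/ports.conf /etc/apache2/sites-enabled/ 2>/dev/null
      grep -Rl "SSLEngine on" /etc/apache2/sites-enabled/ 2>/dev/null | xargs -r grep -l ":9999"
      # if 9999 belongs to an SSL vhost, try https://<webserver>:9999/phpmyadmin/ instead

    phpMyAdmin itself does not care which port it is served on; only the Listen/VirtualHost (and SSL) configuration for that port matters.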

    Read the article

  • How to diagnose causes of oom-killer killing processes

    - by dunxd
    I have a small virtual private server running CentOS and www/mail/db, which has recently had a couple of incidents where the web server and ssh became unresponsive. Looking at the logs, I saw that oom-killer had killed these processes, possibly due to running out of memory and swap. Can anyone give me some pointers on how to diagnose what may have caused the most recent incident? Is the culprit likely to be the first process killed? Where else should I be looking?
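    A possible starting point, assuming CentOS's default syslog location (adjust the log path for your setup):

      # list the oom-killer events
      grep -iE "out of memory|oom-killer" /var/log/messages | tail -n 20
      # the kernel dumps a task table just before each kill; check which processes held the memory
      grep -B 5 -A 40 "invoked oom-killer" /var/log/messages | less
      # going forward, sample memory use regularly so the next incident has data
      free -m; vmstat 5 5

    The process killed first is not necessarily the culprit: the task dump in the log records every process's memory (total_vm/rss) at the time, which points at what was actually eating RAM and swap.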

    Read the article

  • Simultaneous process mysteriously ending

    - by Matt
    I'm trying to run a large air quality model, written in FORTRAN, set up with bash scripts, and run in a work queue (Slurm). The first part of the modeling is to run an "entry" model; this runs with MPI in the work queue, but only on one process. At one point in the logs there's a mysterious FORTRAN STOP, and later the model fails because something wasn't set up properly. This FORTRAN STOP isn't from the main process, which continues running. This is a huge model, but as far as I know there should not be any other processes running at the same time. It consistently fails at the exact same spot. (I can move it by adding debug output, but the debug is in the main process.) How can I determine what this process is? I've tried adding a call to strace -feprocess $SHELL in the run script, but I'm new to this, so if it has offered any info, I haven't been able to use it yet. There is no trace output around the FORTRAN STOP. The whole thing happens so fast that I can't seem to observe it with ps. Is there a way I can monitor all the processes being initiated from the time the work queue starts? Or some other way to figure out what is failing? This is running on CentOS 6.4, with Slurm, compiled with PGI 13.
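    One way to see every process the run script spawns is to let strace follow forks and write one trace file per PID. An untested sketch (run_entry_model.sh stands in for whatever command the batch script actually launches):

      # -ff follows children into separate files trace.<pid>; -tt adds timestamps
      strace -ff -tt -o trace ./run_entry_model.sh
      # afterwards, look for the PID that emitted the FORTRAN STOP and inspect what it did
      grep -l "FORTRAN STOP" trace.*   # works if the message went out via a single write()
      tail -n 50 trace.<pid>

    Alternatively, a background loop logging "ps -ef --forest" every second for the job's duration can catch short-lived helper processes that a single ps invocation misses.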

    Read the article

  • rsync - Exclude files that are over a certain size?

    - by Rory
    I am doing a backup of my desktop to a remote machine. I'm basically doing rsync -a ~ example.com:backup/ However, there are loads of large files, e.g. Wikipedia dumps. Most of the files I care a lot about are small, such as Firefox cookie files or .bashrc. Is there some invocation of rsync that will exclude files that are over a certain size? That way I could copy all files that are less than 10MB first, then do all files. That way I can do a fast backup of the most important files, then a longer backup of everything else.
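    rsync has a --max-size option that does exactly this. A sketch following the command above:

      # first pass: everything under 10 MB (fast, catches the small important files)
      rsync -a --max-size=10m ~ example.com:backup/
      # second pass: the full backup, large files included
      rsync -a ~ example.com:backup/

    There is a matching --min-size option if the large files should instead be copied in their own separate pass.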

    Read the article

  • Concerns about Apache per-Vhost logging setup

    - by etienne
    I'm both senior developer and sysadmin in my company, so I'm trying to deal with the needs of both activities. I've set up our Apache box, which deals with 30-50 domains at the moment (and hopefully will grow larger) and hosts both production and development sites, with this directory structure: domains/ domains/domain.ext/ #FTPS chroot for user domain.ext domains/domain.ext/public #the DocumentRoot of http://domain.ext domains/domain.ext/logs domains/domain.ext/subdomains/sub.domain.ext domains/domain.ext/subdomains/sub.domain.ext/public #DocumentRoot of http://sub.domain.ext Each domain.ext vhost runs with its dedicated user and group via mpm-itk, umask being 027, and the logs are stored via a piped sudo command, like this: ErrorLog "| /usr/bin/sudo -u nobody -g domain.ext tee -a domains/domain.ext/logs/sub.domain.ext_error.log" CustomLog "| /usr/bin/sudo -u nobody -g domain.ext tee -a domains/domain.ext/logs/sub.domain.ext_access.log" combined Now, I've read a lot about not letting the logs out of a very restricted directory, but the developers often need to take a quick look at a particular subdomain error log, and I don't really want to give them admin rights to look into /var/log. Having the logs available in the FTP account is really handy during development. Do you think this setup is viable and safe enough? To me it looks good, but I'm concerned about three security issues: - is the sudo pipe enough to deal with symlink exploits? Any catches I'm missing? - log DoS: logs are on the same partition as all the domains. It has hundreds of gigs, but still, if one site gets disk-space DoS'd, everything will break. Any workaround? Will a short-cycled logrotate suffice? - file descriptor limits: AFAIK the default limit for Apache on Ubuntu Server is currently 8192, which should be plenty to handle 2 log files per subdomain. Is it? Am I missing something? I hope to read some thoughts on the matter!
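    For the disk-space concern, a short-cycled logrotate with a size cap is one option. An untested sketch (the glob assumes the domains/ tree shown above, so adjust the absolute prefix; copytruncate is used because the piped tee keeps its file descriptor open across rotations):

      # /etc/logrotate.d/vhost-logs
      /path/to/domains/*/logs/*.log {
          daily
          maxsize 100M
          rotate 7
          compress
          missingok
          notifempty
          copytruncate
      }

    A separate partition (or a filesystem quota) for the logs tree is the more robust answer to one site filling the disk for everyone.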

    Read the article

  • Bootable GRUB partition

    - by MA1
    I have a customized live Fedora 12 USB which is working fine. What I want to do is make a partition of my hard disk bootable so that my customized Fedora can be run from the hard disk. To accomplish this I did the following steps: Created a primary partition (/dev/sda2), formatted it as ext3 and set it as active. Copied all the files on the live USB to /dev/sda2. The live USB contents are (all directories): a. boot b. EFI c. LiveOS d. syslinux Then I installed GRUB in boot/grub and created grub.conf in boot/grub. These are the contents of each directory on the USB: syslinux/ boot.cat isolinux.bin splash.jpg vesamenu.c32 initrd0.img ldlinux.sys syslinux.cfg vmlinuz0 LiveOS/ livecd-iso-to-disk osmin.img squashfs.img EFI/ boot/ boot.conf grub.conf boot.efi bootia32.conf bootia32.efi splash.jpg splash.xpm.gz vesamenu.c32 initrd0.img isolinux.bin isolinux.cfg vmlinuz0 boot/grub/ core GRUB files grub.conf olpc.fth These are the contents of grub.conf: default=0 splashimage=/EFI/boot/splash.xpm.gz timeout 2 hiddenmenu title funLinux kernel /EFI/boot/vmlinuz0 root=live:LABEL=myFun rootfstype=auto ro liveimg quiet ssb.blacklist=1 selinux=0 vga=normal nomodeset rhgb initrd /EFI/boot/initrd0.img Now when I try to boot from the hard disk it shows the GRUB menu and Fedora starts to load, but during loading it says: No root device found. Boot has failed, sleeping forever. So where is the problem? What am I doing wrong?
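    One likely cause: root=live:LABEL=myFun tells the initramfs to look for a filesystem labelled myFun, and a freshly created ext3 partition normally has no label. A possible fix, assuming the files were copied to /dev/sda2 as described:

      # check the current label, then set it to match the kernel command line
      e2label /dev/sda2
      e2label /dev/sda2 myFun

    Alternatively, change the kernel line to point at the device or UUID instead of the label, e.g. root=live:/dev/sda2 or root=live:UUID=<uuid reported by blkid /dev/sda2> (the exact forms accepted depend on the live initramfs, so treat these as suggestions to verify).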

    Read the article

  • Broken fonts in konsole kde 4.3.4

    - by depesz
    I have a strange situation - after an upgrade a couple of days ago the fonts in KDE Konsole broke. To be more specific - standard characters look more or less OK, but when I use my national characters (like ąćęłńśóźż) they all look broken - as if from another font, or badly scaled. The same problem doesn't exist in gnome-terminal. I usually use the Terminus font, so I used it for the demonstration, but the problem shows up in other fonts as well - if necessary I will provide a list. Konsole screenshot: gnome-terminal screenshot: As for my settings: =$ cat /etc/X11/xorg.conf Section "Device" Identifier "Builtin Default intel Device 0" Driver "intel" EndSection Section "Monitor" Identifier "Monitor0" VendorName "Monitor Vendor" ModelName "Monitor Model" EndSection Section "Screen" Identifier "Builtin Default intel Screen 0" Device "Builtin Default intel Device 0" Monitor "Monitor0" EndSection Section "InputDevice" Identifier "touchpad" Driver "synaptics" Option "CorePointer" EndSection Section "ServerLayout" Identifier "Builtin Default Layout" Screen "Builtin Default intel Screen 0" InputDevice "touchpad" EndSection =$ xdpyinfo | grep -E resolution\|dimensions dimensions: 1680x1050 pixels (444x277 millimeters) resolution: 96x96 dots per inch I have tried forcing the DPI in System Settings (to 120) and adding the monitor size to xorg.conf - so far nothing has helped. Any idea what I should do to make it work sanely again?
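    A few quick checks that may narrow down whether a wrong font is being substituted for the missing glyphs (a sketch; package and font names vary by distribution):

      # which fonts claim to be Terminus, and what does fontconfig actually pick?
      fc-list | grep -i terminus
      fc-match "Terminus"
      # rebuild the font cache in case the upgrade left it stale
      fc-cache -fv

    If fc-match falls back to a different family, the national characters are probably being rendered from that fallback font, which would explain the mismatched look in Konsole.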

    Read the article

  • Using GNU screen with dual monitors

    - by Seamus
    I use GNU screen for all of the work I do in the terminal, and would like to find a way to use it across two screens, but have not found anything that is satisfactory. Most of the time I use Cygwin or PuTTY to access the system that runs screen. Thanks in advance.
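    screen supports multi-display mode, so one option is to open two terminal windows (one per monitor) and attach both to the same session. A sketch:

      # in the first terminal: start (or reattach) a named session
      screen -S work
      # in the second terminal: attach to the same session without detaching the first
      screen -x work

    Each attached display can then switch to a different screen window independently, which effectively gives one session spread across both monitors.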

    Read the article

  • Rsync backup - detect new directory and backup only from that directory

    - by Pracovek
    The new cPanel daily backup creates a separate directory for each day's backup. This creates a problem when I try to use rsync to do an offsite backup, since I would like to rsync only the latest data. E.g. on the backup server I have a directory "backup", and on the server from which we are pulling backups I get directories 2013-11-07, 2013-11-08 etc. in the backup directory. If I back up the whole /backup directory on the server it will use a lot more space, so I would like to back up only the latest directory in the backup directory, e.g. 2013-11-08. Is there a way to detect the latest directory in the backup directory and pass that directory name to rsync for backup?
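    Since the directories are named YYYY-MM-DD, lexical sort order matches date order and the newest one can be picked in the shell. An untested sketch (host names and paths are placeholders):

      #!/bin/bash
      # find the newest dated backup directory on the cPanel server and pull only that one
      latest=$(ssh user@cpanel-server 'ls -1d /backup/20??-??-??' | sort | tail -n 1)
      rsync -a "user@cpanel-server:${latest}/" /backup/latest/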

    Read the article

  • SSL certificates work fine from command line but fail in script

    - by jrallison
    I'm trying to set up email notifications for my continuous integration server. I have a script which uses nail to send the email when the build works: #!/bin/bash echo "Build Worked!" | nail -A myisp -s 'Build Success' [email protected] When I run this from the command line with sh build-worked, it works and I receive the email. However, when I start the continuous integration server which executes the same script, I get the following error: nail: /opt/bitnami/common/lib/libssl.so.0.9.8: no version information available (required by nail) nail: /opt/bitnami/common/lib/libcrypto.so.0.9.8: no version information available (required by nail) Error with certificate at depth: 0 issuer = /C=ZA/ST=Western Cape/L=Cape Town/O=Thawte Consulting cc/OU=Certification Services Division/CN=Thawte Premium Server CA/[email protected] subject = /C=US/ST=California/L=Mountain View/O=Google Inc/CN=smtp.gmail.com err 20: unable to get local issuer certificate Continue (y/n)? could not initiate SSL/TLS connection: error:14090086:SSL routines:SSL3_GET_SERVER_CERTIFICATE:certificate verify failed . . . message not sent. I must be missing some configuration, any ideas?
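    The "no version information available" warnings suggest the CI server's environment points LD_LIBRARY_PATH at Bitnami's bundled OpenSSL, and nail then fails to verify Gmail's certificate chain. Two things worth trying in the script, sketched here (the CA bundle path is distribution-dependent and the heirloom-nail option name should be double-checked):

      #!/bin/bash
      # run nail against the system libraries instead of Bitnami's, and tell it where the CA bundle lives
      echo "Build Worked!" | env -u LD_LIBRARY_PATH \
        nail -A myisp -S ssl-ca-file=/etc/ssl/certs/ca-certificates.crt \
             -s 'Build Success' [email protected]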

    Read the article

  • Amavisd start error

    - by Kristian
    I can't start amavis. It gives an error: Starting amavisd: Error in config file "/etc/amavis/conf.d/05-domain_id": Insecure directory in $ENV{PATH} while running with -T switch at /etc/amavis/conf.d/05-domain_id line 7. On line 7 is: chomp($mydomain = `head -n 1 /etc/mailname`); This problem occurred after restarting my computer. I don't know much about amavis, so any help is appreciated. Regards, Kristian
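    Perl's taint mode (-T) refuses to run backticks while $ENV{PATH} is untrusted, e.g. when it contains a group- or world-writable directory. A common workaround is to set a safe PATH in the config file before the backtick call, or to drop the external command entirely; a sketch (exact placement in 05-domain_id is an assumption):

      # near the top of /etc/amavis/conf.d/05-domain_id, before line 7
      $ENV{PATH} = '/usr/bin:/bin';
      chomp($mydomain = `head -n 1 /etc/mailname`);
      # or avoid the shell altogether:
      # open(my $fh, '<', '/etc/mailname') and chomp($mydomain = <$fh>);

    It is also worth checking whether the reboot changed permissions on a directory in root's PATH (ls -ld on each entry), since that is what taint mode is complaining about.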

    Read the article

  • Can't FTP into server

    - by Roland
    I need to FTP from one server to another. If I FTP from my local PC using Krusader I'm able to FTP into the server, but if I ssh into one server and try to FTP to the other server using the same FTP credentials, I get stuck at the message [Resolving host address...]. I know the address is correct since I can ping it from that server. I use the following command: lftp 'open -u username,password server' If I use the same command to FTP to a different server it works. Any help or advice will be greatly appreciated.
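    Since [Resolving host address...] points at name resolution rather than FTP itself, it is worth checking DNS on that server and bypassing it as a test (a sketch; the name and IP are placeholders):

      # does the server resolve the name the same way your PC does?
      getent hosts ftp.example.com
      cat /etc/resolv.conf
      # rule DNS in or out by connecting to the numeric address directly
      lftp -u username,password 192.0.2.10
      # if slow or failing IPv6 lookups are the problem, prefer IPv4 inside lftp
      lftp -e 'set dns:order "inet"; open -u username,password server'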

    Read the article

  • Upgraded from fc10 to fc12 now I have eth0_rename, how do I get back to plain old eth0?

    - by shank
    I upgraded from Fedora 10 to Fedora 12. Unfortunately, my ethernet interface eth0 is now named eth0_rename. I'd like to get back to having it named plain old eth0. I googled a bit but the solution of removing the eth0 entry from /etc/udev/rules.d/70-persistent-net.rules seems to have no effect (I restarted the network service but didn't reboot). The interface works just fine although I could see a script or two having a problem with the format. So, it's more of an inconvenience thing than anything else. Any ideas? Thanks.
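    The *_rename name usually appears when udev wants to call the NIC eth0 but that name is still taken, often because two rules in 70-persistent-net.rules claim the same MAC; and since the rules are applied when the driver loads, restarting the network service is not enough. A possible cleanup (the MAC below is a placeholder):

      # keep exactly one rule for the card's MAC address, naming it eth0, e.g.:
      #   SUBSYSTEM=="net", ACTION=="add", ATTR{address}=="00:11:22:33:44:55", NAME="eth0"
      vi /etc/udev/rules.d/70-persistent-net.rules
      # make the interface config agree with it
      grep -E "DEVICE|HWADDR" /etc/sysconfig/network-scripts/ifcfg-eth0*
      # then reboot (or unload/reload the NIC driver) so udev re-applies the rule
      reboot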

    Read the article

  • VNC failure on Xen

    - by BCable
    The following config works and creates a good VM in Xen: # Kernel Setup kernel = "/boot/vmlinuz-2.6.18.8-xenU" # Memory memory = "256" # Disk disk = [ "file:/opt/xen/domains/110/sda1.img,sda1,w", "file:/opt/xen/domains/110/swap.img,sda2,w" ] # container name name = "110" hostname = "boo" # Networking vif = ["type=ieomu, bridge=xenbr0"] # VNC vnc = 1 #vfb = [ 'type=vnc,vncdisplay=2,vnclisten=0.0.0.0,vncpasswd=110' ] # Behavior Settings root = "/dev/sda1" extra = "fastboot" But when I uncomment the VFB line, I get the following error after it hangs for at least 30 seconds: [root@customer 110]# xm create boo.cfg Using config file "./boo.cfg". Error: Device 0 (vkbd) could not be connected. Hotplug scripts not working. Any ideas? Part two of this question: Sometimes it actually works, and a port is opened. When this happens, nmap shows the VNC ports open and I can connect via the VNC client, but it just hangs at "Connection established." and no VNC display shows up. I've tried multiple VNC clients (TightVNC, TightVNC Java Console, RealVNC), but they all fail to connect. Does VNC through Xen require X to be started in order to function? I was under the impression that it would show the console screen, so I'm confused as to why all these issues are occurring. Thanks!

    Read the article

  • Postfix tutorial inconsistency

    - by Desmond Hume
    I'm following this tutorial to set up a Postfix/Dovecot mail server with Postfix Admin as a web front end. As regards the directory structure for virtual mail users, the author of the tutorial writes: Virtual mail users are those that do not exist as Unix system users. They thus don't use the standard Unix methods of authentication or mail delivery and don't have home directories. That is how we are managing things here: mail users are defined in the database created by Postfix Admin rather than existing as system users. Mail will be kept in subfolders per domain and account under /var/vmail - e.g. [email protected] will have a mail directory of /var/vmail/example.com/me. But when he gives instructions for configuring Postfix Admin, he suggests that Postfix Admin's config.inc.php should contain this: // Mailboxes // If you want to store the mailboxes per domain set this to 'YES'. // Examples: // YES: /usr/local/virtual/domain.tld/[email protected] // NO: /usr/local/virtual/[email protected] $CONF['domain_path'] = 'NO'; Is there an inconsistency?
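    Yes, as written the two do not match: with $CONF['domain_path'] = 'NO', Postfix Admin records each mailbox directly under the mail root (the NO: example above) rather than in a per-domain subdirectory like example.com/me. To follow the /var/vmail/example.com/me layout described earlier, the settings would look something like this (a sketch; verify the option names against your Postfix Admin version):

      // store mailboxes per domain, e.g. /var/vmail/example.com/me/
      $CONF['domain_path'] = 'YES';
      $CONF['domain_in_mailbox'] = 'NO';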

    Read the article

  • Multithread http downloader with webui [closed]

    - by kiler129
    I am looking for software similar to JDownloader or PyLoad. JD is pretty good but uses heavy Java and for now has a very weak web interface. PyLoad is awesome and includes a simple but powerful web UI, but downloading 10 files (10 threads each, so 100 connections in total, running at around 8 MB/s altogether) consumes a lot of CPU - a whole core for me. Do you know any lightweight alternatives? aria2c is good for the console but I failed to find any good web UI for it; the official one is good but almost crashes Chrome after adding more files :)

    Read the article

  • SQLAuthority Guest Post – Lessons from Life and Work by Srini Chandra (Author of 3 Lives, in search of bliss)

    - by pinaldave
    Work and life are confusing terms to put together. How can one consider work outside of life? Work should be part of life, or are we considering ourselves dead when we are at work? I have often seen developers and DBAs complaining and confused about their job, work and life. Complaining is easy and everyone can do it. I have quite often heard the expression "I do not have any other option." I requested Srini Chandra (renowned author of the Amazon bestseller 3 Lives, in search of bliss (Amazon | Flipkart)) to write a guest post on this subject which developers can read and appreciate. Let us see Srini's thoughts in his own words. Each of us who works in the technology industry carries an especially heavy burden nowadays. For, fate has placed in our hands an awesome power to shape our society and its consciousness. For that reason, we must pay more and more attention to issues of professionalism, social responsibility and ethics. Equally importantly, the responsibility lies in our hands to ensure that we view our work and career as an opportunity to enlighten and lift ourselves up. Story: A Prisoner, 20 years and a Wheel Many years ago, I heard this story from a professor when I was a student at Carnegie Mellon. A man was sentenced to 20 years in prison. During his time in prison, he was asked to turn a wheel every day. So, every day he turned the wheel. At times, when he was tired or puzzled and stopped turning the wheel, he would be flogged with a whip. The man did not know anything about the wheel other than that it was placed outside his jail somewhere. He wondered if the wheel crushed corn or if it ground wheat or something similar. He wondered if turning the wheel was useful to anyone. At the end of his jail term, he rushed out to see what the wheel was doing. To his disappointment, he found that the wheel was not connected to anything. All these years, he had been toiling for nothing. He gave a loud, frustrated shout and dropped dead. How many of us are turning wheels wondering what they are connected to? How many of us have unstated, uncaring attitudes towards our careers? How many of us view work as drudgery, as no more than a way to earn that next paycheck? How many of us have wondered about the spiritually uplifting aspect of work? Can a workforce that views work as merely a chore be ethical? Can it produce truly life-enhancing technology? Can it make positive contributions to the quality of life of a society? I think not. Thanks to Pinal and you, his readers, for giving me this opportunity to share my thoughts in a series of guest posts. I'd like to present, over the next few weeks, a few ways in which we can tap into the liberating potential of work and make our lives better in the process. Now, please allow me to tell you another version of the story that the good professor shared with us in the classroom that day. Story: A Prisoner, 20 years, a Wheel and the LIFE A man was sentenced to 20 years in prison. During his time in prison, he was asked to turn a wheel every day. So, every day he turned the wheel. At first, his whole body and mind rebelled against his predicament. So, his limbs grew weary and his mind became numb and confused. And then, his self-awareness began to grow. He began to wonder how he came to be in the prison in the first place. He looked around and saw all his fellow prisoners also turning the wheel. His wife, his parents, his friends and his children – they were all in the prison too, and turning their own wheels! He began to wonder how this came about.
    As he wondered more and more, he began to focus less on his physical drudgery and boredom. And he began to clearly see his inner spirit, which guided him in ways that allowed him to see the world with a universal view. His inner spirit guided him towards the source of eternal wisdom and happiness. He began to see the source of happiness in everything around him – his prison-bound relationships, his jailers, even his wheel. He became a source of light to those around him. His wheel jokes and humor infected them with joy and happiness. Finally, the day came for his release from jail. He walked calmly outside the jail and laughed aloud when he saw that the wheel was not connected to anything. He knelt down, kissed it and thanked it for the wisdom it taught him. Life is the prison. The wheel is your work. Both are sacred. Both have enormous powers to teach us wisdom and bring us happiness. Whether we allow them to do so is a choice we have to make. Over the next few weeks, I hope to share with you a few lessons that I have learnt at the wheel in the two decades of my career (prison). Thank you for reading, and do let me know what you think. Reference: Srini Chandra (3 Lives, in search of bliss), Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQLAuthority Book Review, T SQL, Technology

    Read the article

  • How to manage mounted partitions (fstab + mount points) from puppet

    - by Cristian Ciupitu
    I want to manage the mounted partitions from Puppet, which includes both modifying /etc/fstab and creating the directories used as mount points. The mount resource type updates fstab just fine, but using file for creating the mount points is a bit tricky. For example, by default the owner of the directory is root, and if the root (/) of the mounted partition has another owner, Puppet will try to change it and I don't want this. I know that I can set the owner of that directory, but why should I care what's on the mounted partition? All I want to do is mount it. Is there a way to make Puppet not care about the permissions of the directory used as the mount point? This is what I'm using right now: define extra_mount_point( $device, $location = "/mnt", $fstype = "xfs", $owner = "root", $group = "root", $mode = 0755, $seltype = "public_content_t", $options = "ro,relatime,nosuid,nodev,noexec", ) { file { "${location}/${name}": ensure => directory, owner => "${owner}", group => "${group}", mode => $mode, seltype => "${seltype}", } mount { "${location}/${name}": atboot => true, ensure => mounted, device => "${device}", fstype => "${fstype}", options => "${options}", dump => 0, pass => 2, require => File["${location}/${name}"], } } extra_mount_point { "sda3": device => "/dev/sda3", fstype => "xfs", owner => "ciupicri", group => "ciupicri", options => "relatime,nosuid,nodev,noexec", } In case it matters, I'm using puppet-0.25.4-1.fc13.noarch.rpm and puppet-server-0.25.4-1.fc13.noarch.rpm.
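    One way to make Puppet stop caring about the mount point's ownership is simply not to declare owner/group/mode on the file resource: Puppet only manages the attributes that are explicitly specified. A trimmed sketch of the define under that assumption:

      define extra_mount_point($device, $location = "/mnt", $fstype = "xfs", $options = "defaults") {
        # only ensure the directory exists; ownership and permissions are left alone
        file { "${location}/${name}":
          ensure => directory,
        }
        mount { "${location}/${name}":
          ensure  => mounted,
          atboot  => true,
          device  => "${device}",
          fstype  => "${fstype}",
          options => "${options}",
          dump    => 0,
          pass    => 2,
          require => File["${location}/${name}"],
        }
      }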

    Read the article

  • Getting VSFTP running on Fedora 14

    - by Louis W
    Having trouble getting vsftpd running on Fedora 14. Here is what I have done so far; please let me know if I am missing something. When I try to connect through FTP it says the connection timed out. Installed vsftpd with yum: yum install vsftpd Edited the config file: vi /etc/vsftpd/vsftpd.conf Started the service and made sure it would always start up: service vsftpd start chkconfig vsftpd on Added and configured a new user: /usr/sbin/useradd upload /usr/bin/passwd upload usermod -c "This user cannot login to a shell" -s /sbin/nologin upload Added firewall rules: iptables -A INPUT -p tcp --dport 21 -j ACCEPT iptables -A OUTPUT -p tcp --sport 20 -j ACCEPT service iptables save service iptables restart Checked netstat (in reply to a comment below): tcp 0 0 0.0.0.0:21 0.0.0.0:* LISTEN 23752/vsftpd
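    A few things worth checking, since the daemon is clearly listening on port 21 (a sketch; rule positions and boolean names may differ by release):

      # -A appends after Fedora's default REJECT rule, so the port 21 ACCEPT may never match;
      # insert the rules near the top instead
      iptables -I INPUT 1 -p tcp --dport 21 -j ACCEPT
      iptables -I INPUT 2 -m state --state RELATED,ESTABLISHED -j ACCEPT
      # let the connection tracker handle passive-mode data connections
      modprobe nf_conntrack_ftp
      service iptables save
      # if SELinux is enforcing, allow ftpd into user home directories
      setsebool -P ftp_home_dir on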

    Read the article

  • How to configure mod_proxy_balancer to gracefully fail under high load

    - by bramp
    We have a system which has one Apache instance in front of multiple Tomcats. These Tomcats then connect to various databases. We balance the load to the Tomcats with mod_proxy_balancer. Currently we are receiving 100 requests a second; the load on the Apache server is quite low, but due to database-heavy operations on the Tomcats, the load there is roughly 25% of what I estimate they can handle. In a few weeks there is an event happening and we estimate that our requests will jump significantly, maybe by a factor of 10. I'm doing everything I can to reduce the load on our Tomcats, but I know we are going to run out of capacity, so I would like to fail gracefully. By this I mean that, instead of trying to deal with too many connections which all time out, I would like Apache to somehow monitor average response time, and as soon as the response time from Tomcat gets above some threshold, I would like an error page displayed. This means that users who are lucky still get a page rendered quickly, and those who are unlucky get an error page quickly, instead of everyone waiting far too long for their page, eventually timing out, and the database being swamped with queries whose results are never used. Hopefully this makes sense; I am looking for suggestions on how I could achieve this. Thanks
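    Apache cannot key an error page off a moving average of response times on its own, but it can bound how long it waits for Tomcat and serve a friendly page when the backends are saturated. A sketch (directive values are illustrative):

      # give up on a backend response after 10 seconds instead of queueing indefinitely
      ProxyTimeout 10
      # use Apache's own error pages for backend failures and serve a static "busy" page
      ProxyErrorOverride On
      ErrorDocument 502 /busy.html
      ErrorDocument 503 /busy.html

    Capping the connection pool per balancer member, so excess requests fail fast instead of stacking up behind the database, is the other half of the same idea.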

    Read the article

  • syntax highlighting in vim?

    - by ajsie
    On my Ubuntu server vim has no syntax highlighting when I open files (configurations, scripts...). I have tried :syntax on and :syntax enable, and vim says that it is enabled, but it doesn't work. Does anyone know how to fix it? Thanks!
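    On Ubuntu this is very often vim-tiny, which ships without the syntax files. Installing the full package and enabling syntax in ~/.vimrc usually fixes it (a sketch):

      sudo apt-get install vim            # replaces vim-tiny with a full build
      echo "syntax on" >> ~/.vimrc
      vim --version | grep -i version     # should report a Normal/Big/Huge build, not Small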

    Read the article

  • Case insensitive bash auto-complete

    - by Vitaly Polonetsky
    Is there a way to make the file/dir auto-complete in bash case insensitive? For example I would like to write: /opt/ibm/whatever/test [TAB] And bash will auto-complete it to: /opt/IBM/Whatever/TESTfile Or at least only the last part of test to TESTfile. I know that filesystems are case-sensitive, I just don't want to remember which parts are UPPER-case, I want auto-complete to fix the path for me. And if I have both TESTfile and testfile, just show me both of them like bash does today with auto-complete conflicts.
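    Completion case-sensitivity is a readline setting, so it can be switched per user (a sketch):

      # permanent: picked up by every readline program at startup
      echo 'set completion-ignore-case on' >> ~/.inputrc
      # optionally treat - and _ as equivalent too
      echo 'set completion-map-case on' >> ~/.inputrc
      # for the current shell only
      bind 'set completion-ignore-case on'

    When a completion matches, readline rewrites the word with the real on-disk case, so /opt/ibm/... becomes /opt/IBM/... automatically.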

    Read the article
