Search Results

Search found 45752 results on 1831 pages for 'ubuntu linux'.

Page 652/1831 | < Previous Page | 648 649 650 651 652 653 654 655 656 657 658 659  | Next Page >

  • Removing all traces of GNU java and openjdk and replacing with Sun JDK

    - by user61766
    I have installed the latest Sun JDK, but when I run java -version I still get the OpenJDK version. So I completely removed OpenJDK. After that, java -version reported an even older GNU Java 1.5 (libgcj), so I removed that too, even though it wanted to pull out a bunch of dependent apps such as OpenOffice.org Writer. Even though I need Writer, I let it go because I never want to see any GNU Java on my Linux again. So everything related to GNU Java is removed. Luckily Eclipse still starts and works fine (apparently using the installed Sun JDK, which is what I want). But now when I run java -version I get: bash: /usr/bin/java: No such file or directory. What do I need to do so that any terminal window running java -version reports the Sun JDK version? Sun JDK is installed in /usr/java/jdk1.6.021, and I also have the symlinks /usr/java/latest and /usr/java/defaults pointing to the Sun JDK.
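    A minimal sketch of one way to point /usr/bin/java at the Sun JDK on Debian/Ubuntu-style systems is the alternatives mechanism; the /usr/java/latest path is taken from the question, while the priority value is an arbitrary assumption:

        # register the Sun JDK as an alternative for the java command
        sudo update-alternatives --install /usr/bin/java java /usr/java/latest/bin/java 100
        # pick it interactively (or rely on the priority above)
        sudo update-alternatives --config java
        java -version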

    Read the article

  • How can I generate filesystem images that are usable on many different virtualization systems?

    - by Mark Longair
    I have written a script that generates a root filesystem image (based on Debian lenny) suitable for User-Mode Linux. (Essentially this script creates a filesystem image, mounts it with a loop device, uses debootstrap to create a lenny install, sets up a static IP for TUN/TAP networking, adds public keys for login by SSH and installs a web application.) These filesystem images work pretty well with UML, but it would be nice to be able to generate similar images that people can use on alternative virtualization software, and I'm not familiar with these options at all. In particular, since the idea is to use this image as a standalone server for testing the web application, it's important that the networking works. I wonder if anyone can suggest what would be involved in customizing such root filesystem images so that they could be used with other virtualization software, such as VMware, Xen or as an Amazon EC2 instance? Two particular concerns are:
      1. If such systems don't use a raw filesystem image (e.g. they need headers with metadata or are compressed in some particular way), do there exist tools to convert between the different formats?
      2. I assume that in the filesystem at least /etc/network/interfaces will have to be customized, but are more involved changes likely to be necessary?
    Many thanks for any suggestions...
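    For the first concern, a minimal sketch assuming the image is a plain raw image and that qemu-img is available (the format names below are the standard qemu-img ones; file names are placeholders):

        # convert a raw image to VMware's VMDK format
        qemu-img convert -f raw -O vmdk root_fs.img root_fs.vmdk
        # or to qcow2 for KVM/Xen HVM setups
        qemu-img convert -f raw -O qcow2 root_fs.img root_fs.qcow2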

    Read the article

  • Explain why .bash_logout won't run commands?

    - by Droogans
    So I've been wondering how to run these two commands every time I close an open instance of Terminal:
        history -c
        cat /dev/null > ~/.bash_history
    I export HISTFILE=5 on startup, but still want to flush the history out when I'm done. I've tried looking around a bit in a couple of places and haven't had much luck. I run Linux Mint, and would also note that I ran into a similar issue with .bash_profile; eventually I discovered I needed to place all startup code in .bashrc, so maybe that has something to do with it. Here's my .bash_logout file:
        #!/bin/bash
        # ~/.bash_logout: executed by bash(1) when login shell exits.

        # when leaving the console clear the screen to increase privacy
        if [ "$SHLVL" = 1 ]; then
            history -c
            cat /dev/null > ~/.bash_history
            [ -x /usr/bin/clear_console ] && /usr/bin/clear_console -q
        fi

        # this does nothing on exit...
        echo 'logout'; sleep 2s
    I've tried rearranging this script many ways; I'm not sure whether I misunderstand how bash works, or whether any of this is running in the first place. Does the fact that I run an X server make bash consider closing a Terminal something that isn't a logout?
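    A possible explanation (an assumption, not something stated in the post): .bash_logout only runs when a login shell exits, and terminal windows under X usually run non-login interactive shells, so closing the window never triggers it. A minimal sketch of a workaround that does not depend on login shells is an EXIT trap added to ~/.bashrc:

        # run the cleanup whenever this bash process exits, login shell or not
        trap 'history -c; cat /dev/null > ~/.bash_history' EXIT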

    Read the article

  • Understanding Netbook Partitions & UNR Installation

    - by Wesley
    Hi all, I have a Samsung N120 netbook (with upgraded 2GB RAM). I'm just looking at the Disk Management right now (in Windows XP) and I'm trying to understand what partition holds what. There is "Local Disk (C:)" which is 40GB, "RECOVERY" (no drive letter) which is 6GB and then "TEMP_PART01 (D:)" which is 103.05GB. XP is installed on Local Disk (C:) and I've only used this hard drive for all my files, etc. Recovery is recovery... probably not removable anyways. Now, what bugs me is the TEMP_PART01 (D:) partition, which contains quite a bit of random junk, such as EULA text documents, an "external installer", UI Wrapper Resource DLLs, a "VC_RED" Windows Installer Package and a few more files. I have no clue what any of it means, but I'm assuming that this was probably stuff that could have been on the Local Disk (C:), along with the WINDOWS, Program Files, and Docs and Settings folder. So, how should I go about this? Should I have kept all my data on D: and left all OS related files/folders on C:? Now, I want to install Ubuntu Netbook Remix. Question is, will this install within Windows, if I want to dual boot it? If not, would I partition D: into two small chunks, one on which I would install UNR? There are basically two questions in here, but it'd be great to get answers for both! Thanks in advance.

    Read the article

  • Fedora Core 6 Migration

    - by Matthew Sprankle
    I am at a loss as to what I should do for this server. I need it to run PHP 5.3 and the corresponding version of MySQL. I received a client today through work that is using Fedora Core 6, running 10 very small websites on a very hodgepodge setup. My original idea was to just upgrade to PHP 5.3. I have yum (version 3.0.8) reconfigured for the Fedora archive, but the latest version of PHP it offers is 5.1.8. I am still relatively new to server setups and am nervous about wiping their server to upgrade it. Since it is about 6-8 years old I'm not sure it will even support the newest version of Fedora. The server specs are:
      Parallels Plesk Panel version 9.5.4
      Operating system: Linux 2.6.9-023stab048.4-smp
      CPU: GenuineIntel, Intel(R) Xeon(R) CPU E5335 @ 2.00GHz
      10 GB disk space and 1 GB of memory
    I use Fedora for my personal server, so I was a little familiar with it, but I haven't done anything too extravagant. Is there a way I can escape this nightmare by installing PHP 5.3, or do I need to migrate these sites to a new server?
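    If migration turns out to be the only sane option, a minimal sketch of moving one site's files and database to a new box (the paths, database and host names below are placeholders, not taken from the question):

        # copy a site's document root to the new server
        rsync -avz /var/www/vhosts/example.com/ newserver:/var/www/vhosts/example.com/
        # dump and restore its database (create example_db on the new server first)
        mysqldump -u root -p example_db > example_db.sql
        scp example_db.sql newserver:
        ssh newserver 'mysql -u root -p example_db < example_db.sql'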

    Read the article

  • Network external hard drive reports not enough free space

    - by mzhang
    I'm running an Ubuntu (10.04) Samba server on a local network. The server has a 50GB internal drive with only 24MB free. I've shared a folder /samba from that drive. I also have a 1TB NTFS external hard drive mounted to the system. There is a symbolic link from the Samba shared folder on the nearly-full internal drive to the plenty-of-free-space external drive (i.e. /samba/external_hd). I wish to copy a 3.25GB folder into the (remote) external hard drive, via a Mac (10.6.8). The Mac reports (correctly) that there's 24MB free on the server, and so will not let me copy the folder on the Mac over to the external drive (dragging the folder into /samba/external_hd), failing with a "server does not have enough free space" error. However, it seems that I can still scp the folder into the external drive, via the symbolic link. Is there a reason why this is happening (and are there any ways to prevent it)? Is this even good practice (to mount a drive and link into the directory)?
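    A minimal sketch of one way around the free-space check, assuming the external drive can simply be exported as its own share (the path comes from the question's layout, the share name is made up): with a separate share, the Mac sees the external drive's free space instead of the internal drive's. Samba also has a "dfree command" smb.conf option for overriding the reported free space, but a dedicated share is usually simpler.

        # /etc/samba/smb.conf
        [external]
            path = /samba/external_hd
            read only = no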

    Read the article

  • Route all traffic of home network through VPN

    - by user436118
    I have a typical semi-advanced home network scenario:
      A cable modem - eth
      A wireless router (Netgear N600) - eth and wlan
      A home server (running Ubuntu 12.04 LTS, connected over wlan)
      A bunch of wireless clients (wlan)
    Lying around I have another cheaper wlan router, and two different USB wlan NICs that are known to work with Linux. ACTA struck. I want to route ALL of my WAN traffic through a remote server over a VPN. For the sake of completeness, let's say there is a remote server running Debian Squeeze where a VPN server is to be installed. The network should then behave so that if the VPN is not operative, it is separated from the outside world. I am familiar with general system/network practices, but lack the specific detailed knowledge to accomplish this. Please suggest the right approach, packages and configurations you'd use to reach this solution. I've also envisioned the following network configuration; please improve it if you see fit:
      ==LAN==
      Client: ip 10.1.1.x, nm 255.0.0.0, gw 10.1.1.1, reached via WLAN
      Wlan router 1: ip 10.1.1.1, nm 255.0.0.0, gw 10.10.10.1, reached via ETH
      Homeserver: <<< VPN is initiated here, and the other endpoint is somewhere on the internet.
        eth0: ip 10.10.10.1, nm 0.0.0.0, gw 192.168.0.1, reached via WLAN
        wlan0: ip 192.168.0.2, nm 255.255.255.0, gw 192.168.0.1, reached via WLAN
      ==WAN==
      Wlan router 2: ip 192.168.0.1, nm 0.0.0.0, gw set via DHCP, uplink connector: cable modem
      Cable modem: remote DHCP. Has an on-board DHCP server for the ethernet device that connects to it, and only works this way.
    All this WLAN fussery is because my home server is located in a part of the house where a cable link isn't possible, unfortunately.
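    A minimal sketch of the "route everything through the VPN, block everything else" idea, assuming OpenVPN is the chosen VPN (the question leaves this open); the server address is a placeholder and the interface names follow the layout above, where eth0 is the LAN-facing side of the home server:

        # client side on the home server, e.g. /etc/openvpn/client.conf
        client
        dev tun
        proto udp
        remote vpn.example.org 1194
        redirect-gateway def1          # send the default route through the tunnel

        # crude "kill switch": LAN traffic may only be forwarded into the tunnel
        iptables -A FORWARD -i eth0 -o tun0 -j ACCEPT
        iptables -A FORWARD -i eth0 ! -o tun0 -j DROP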

    Read the article

  • package manager cannot create users? installation script fails?

    - by RapidWebs
    It seems that when trying to install any package which requires the creation of a system user and group, the installer fails with the error:
        install: invalid user 'username'
    When it happened the first time, I assumed it was related to the package or its installation script. But now I am attempting a different package, and alas, the same issue arises! Note: /etc/passwd and /etc/group have not been made immutable with chattr. Here is the example output from an attempted installation (of varnish):
        Selecting previously unselected package varnish.
        Unpacking varnish (from .../varnish_3.0.2-1ubuntu0.1_amd64.deb) ...
        Processing triggers for man-db ...
        Setting up libvarnishapi1 (3.0.2-1ubuntu0.1) ...
        Setting up varnish (3.0.2-1ubuntu0.1) ...
        install: invalid user `varnish'
        dpkg: error processing varnish (--configure):
         subprocess installed post-installation script returned error exit status 1
        Processing triggers for libc-bin ...
        ldconfig deferred processing now taking place
        Errors were encountered while processing:
         varnish
    Note: running Ubuntu 12.04. For now, I have worked around the issue by manually creating the users and running apt-get install -f, but this does not resolve the actual issue. Any ideas?
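    The manual workaround mentioned above might look like the following sketch (the varnish account name comes from the output; the flags are the usual Debian/Ubuntu ones for system accounts):

        sudo addgroup --system varnish
        sudo adduser --system --no-create-home --ingroup varnish varnish
        sudo apt-get install -f        # let dpkg re-run the failed postinst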

    Read the article

  • Tunnel network requests with Windows 7

    - by mark
    I have a Windows 7 64-bit Pro client in a private LAN behind a Netgear WGR614v7 router, and a remote Debian server machine outside. I'd like to tunnel all traffic (or specified ports/protocols) over this outside server, so that when I'm on the Windows machine and request serverfault.com, the request appears to come not from the WGR614v7's public IP but from the server. It's not only about HTTP traffic; it's basically about everything: other TCP ports, even UDP, etc. It must be transparent to the applications, i.e. they shouldn't be aware of it; all their requests simply appear to come from the server, and the tunnel between the two takes care of the packets. I'm aware of PuTTY and forwarding individual ports or using it as a SOCKS proxy, but not many applications support this and the support in Windows itself looks non-existent to me. I might add it should be "reasonably" easy to set up. I've heard about PPTP, but I'm unsure about its security implications (by design). Should I go for a VPN? There seem to be two common solutions for Linux (Openswan and strongSwan); why would I pick one over the other? I also fear that setting up a VPN might be quite complex; on the other hand, maybe it's the only sane way to do things right? Or is OpenVPN sufficient? I'm looking for open-source solutions. What other options do I have, or which direction should I head?
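    If OpenVPN were chosen (the question only asks whether it is sufficient), a minimal server-side sketch for the Debian box; the tunnel subnet and interface name are assumptions. The server pushes a default route to the Windows client and then NATs the tunneled traffic out of its own public interface, which is what makes all client traffic appear to come from the server.

        # /etc/openvpn/server.conf (sketch; certificate/key/dh directives omitted)
        port 1194
        proto udp
        dev tun
        server 10.8.0.0 255.255.255.0
        push "redirect-gateway def1"

        # on the Debian server: forward and NAT the tunnel subnet
        echo 1 > /proc/sys/net/ipv4/ip_forward
        iptables -t nat -A POSTROUTING -s 10.8.0.0/24 -o eth0 -j MASQUERADE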

    Read the article

  • Why would my network slow down?

    - by monkthemighty
    The network at my work has about 40 computers on it and quite a few printers. When a lot of people are working, the network gets slow. If I ping the router from my computer, the latency keeps rising, sometimes to the point that it times out. The router we are using runs Ubuntu on an Atom processor and has 4 GB of RAM. When the network slows down, the ksoftirqd process uses most, if not all, of the processing power; I have found that ksoftirqd is the kernel thread that handles software interrupt (IRQ) work. Also, when the network slows down, I have captured packets on the router with tshark and looked at them in Wireshark on my laptop. The capture shows a lot of packets flagged as TCP Dup ACK and TCP Retransmission. The destinations of the duplicate ACKs and retransmissions include most of the computers on the network, but some see far more than others. What could this problem be caused by?
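    As a follow-up check, a small sketch (assuming the capture was saved to a file, here called capture.pcap) for ranking which hosts see the most retransmissions:

        # older tshark versions use -R instead of -Y for the display filter
        tshark -r capture.pcap -Y "tcp.analysis.retransmission" -T fields -e ip.dst \
            | sort | uniq -c | sort -rn | head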

    Read the article

  • How to prevent response to who-has requests on virtual eth interface?

    - by user42881
    Hi, we use small embedded X86 linux servers equipped with a single physical ethernet port as a gateway for an IP video surveillance application. Each downstream IP cam is mapped to a separate virtual IP address like this: real eth0 IP address= 192.168.1.1, camera 1 (eth0:1) =192.168.1.61, camera 2 (eth0:2) =192.168.1.62, etc. etc. all on the same eth0 physical port. This approach works well, except that a specific third-party windows video recording application running on a separate PC on the same LAN, automatically pings the virtual IPs looking for unique who-has responses on system startup and, when it gets back the same eth0 MAC address for each virtual interface, freaks out and won't allow us to subsequently manually enter those addresses. The windows app doesn't mind, tho, if it receives no answer to the who-has ping. My question - how can we either (a) shut off the who-has responses just for the virtual eth0:x interfaces while keeping them for the primary physical eth0 port, or, in the alternative, spoof a valid but different MAC address for each virtual interface? Thanks!
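    For the second alternative (a distinct MAC per virtual interface), a minimal sketch using macvlan sub-interfaces instead of eth0:N aliases; this assumes a kernel and iproute2 with macvlan support, and the interface name and MAC address below are made up. Because each macvlan interface answers ARP with its own MAC, the who-has replies for each camera address differ.

        # camera 1 gets its own interface with its own MAC
        ip link add link eth0 name cam1 address 02:ca:fe:00:00:61 type macvlan mode bridge
        ip addr add 192.168.1.61/24 dev cam1
        ip link set cam1 up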

    Read the article

  • Can I use Upstart to start a script which requires the user's X session?

    - by ledneb
    I wrote a script which greps through the output of synclient to determine whether a laptop's touchpad has miraculously turned itself off (Ubuntu seems to /love/ doing this recently) and, if so, turns it back on. The script is something like this:
        #!/bin/bash
        while true; do
            if [ `synclient | grep -e "TouchpadOff[[:space:]]*1" | wc -l` -ge 1 ]; then
                synclient TouchpadOff=0
            fi
            sleep 3
        done
    (I don't have the laptop to hand right now, but you get the point! I will update later when I'm at my laptop if that's incorrect.) So I tried running this as an Upstart job so my touchpad can heal itself without any interaction. But it seems synclient doesn't find the current user's X session when my script is started by Upstart. I tried running it with something like su -c myscript.sh ledneb in my script stanza, but to no avail. Should I be looking in the direction of /etc/X11/xinit/xinitrc rather than Upstart? Is there a proper way to have this script run in the context of the current (or even a hard-coded) user's X session?
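    One alternative to Upstart for per-user scripts that need the X session is an XDG autostart entry, which the desktop session launches with DISPLAY already set; a sketch, with the script path assumed to be ~/bin/touchpad-watch.sh (a made-up name):

        # ~/.config/autostart/touchpad-watch.desktop
        [Desktop Entry]
        Type=Application
        Name=Touchpad watchdog
        Exec=/home/ledneb/bin/touchpad-watch.sh
        X-GNOME-Autostart-enabled=true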

    Read the article

  • How to keep multiple servers in sync file wise?

    - by GForceSys
    I'm currently managing a cluster of PHP-FPM servers, all of which tend to get out of sync with each other. The application that I'm using on top of the app servers (Magento) allows for admins to modify various files on the system, but now that the site is in a clustered set up modifying a file only modifies it on a single instance (on one of the app servers) of the various machines in the cluster. Is there an open-source application for Linux that may allow me to keep all of these servers in sync? I have no problem with creating a small VM instance that can listen for changes from machines to sync. In theory, the perfect application would have small clients that run on each machine to be synced, which would talk to the master server which would then decide how/what to sync from each machine. I have already examined the possibilities of running a centralized file server, but unfortunately my app servers are spread out between EC2 and physical machines, which makes this unfeasible. As there are multiple app servers (some of which are dynamically created depending on the load of the site), simply setting up a rsync cron job is not efficient as the cron job would have to be modified on each machine to send files to every other machine in the cluster, and that would just be a whole bunch of unnecessary data transfers/ssh connections.
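    A minimal sketch of the event-driven idea (tools such as lsyncd and csync2 package up this same pattern); the host names and Magento path below are placeholders, and it pushes from a single admin node rather than syncing peer-to-peer:

        #!/bin/bash
        # push changed files to every app server whenever the tree changes
        # (requires inotify-tools)
        while inotifywait -r -e modify,create,delete,move /var/www/magento; do
            for host in app1.example.com app2.example.com; do
                rsync -az --delete /var/www/magento/ "$host":/var/www/magento/
            done
        done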

    Read the article

  • TGT validation fails, but only for one user

    - by wzzrd
    I'm seeing the weirdest thing here. I have a couple of RHEL 3, 4 and 5 machines that validate user credentials through Kerberos with an Active Directory domain controller as their KDC. This works for all of my users, save one. There is one account that is unable to log into the RHEL 3 Linux machines and generates the following errors there:
        May 31 13:53:19 mybox sshd(pam_unix)[7186]: authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=10.0.0.1 user=user
        May 31 13:53:20 mybox sshd[7186]: pam_krb5: TGT verification failed for `user'
        May 31 13:53:20 mybox sshd[7186]: pam_krb5: authentication fails for `user'
    Other accounts, like my own, are fine:
        May 31 17:25:30 mybox sshd(pam_unix)[12913]: authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=10.0.0.1 user=myuser
        May 31 17:25:31 mybox sshd[12913]: pam_krb5: TGT for myuser successfully verified
        May 31 17:25:31 mybox sshd[12913]: pam_krb5: authentication succeeds for `myuser'
        May 31 17:25:31 mybox sshd(pam_unix)[12915]: session opened for user myuser by (uid=0)
    As you can see, TGT validation fails, and it only happens for this specific account. The failing user account's password has been reset and I inspected both user objects in Active Directory, but I see nothing out of the ordinary. If the failing account logs into a RHEL 4 or 5 box there is no problem, so it must be RHEL 3 specific, but why only this one account suffers from it eludes me. Maybe someone has seen this before?
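    A small sketch of checks that separate a ticket problem from a keytab problem (the realm, host and user names are placeholders). Since pam_krb5 verifies the TGT against the host's keytab, comparing the failing and working users on the same RHEL 3 box can narrow things down:

        kinit baduser@EXAMPLE.COM        # can the failing user get a TGT at all?
        kvno host/mybox.example.com      # fetch a service ticket for the host principal
        klist -ke /etc/krb5.keytab       # key versions and enctypes in the host keytab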

    Read the article

  • How can I recover my data from a damaged hard drive?

    - by krk
    A few days ago, while I was working in Windows, my laptop took a knock on the side where the hard drive is located. As a result the drive was damaged and I couldn't access the Windows partition; I had to boot the Linux one, which works without any trouble. I have two partitions formatted with NTFS: one with Windows on it and the other intended to store data. I mounted the Windows partition from Ubuntu and could see all my files, but when I tried to mount the data partition it was impossible; it threw an error saying it couldn't recognize the NTFS partition. I tried to copy the damaged disk onto an external hard drive using the command:
        dd if=/dev/sda of=/dev/sdb conv=noerror,sync
    The progress stopped at 60% and I was still unable to mount the data partition. Now I'm trying to back up my files using a utility called PhotoRec. The problem is that it recovers my files in a disorderly way; everything is mixed up and I need my original directory structure, otherwise it will be an endless task to organize the files as they were before. Is there any way I can get my partition back?
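    A sketch of an alternative imaging pass with GNU ddrescue, which keeps a log so bad areas can be retried later (the device names are the ones from the question; double-check them before running, since of= overwrites the target disk). TestDisk, PhotoRec's sibling tool, could then be pointed at the copy to try to rebuild the partition and directory structure instead of carving loose files.

        ddrescue -f -n /dev/sda /dev/sdb rescue.log    # quick pass, skip the damaged areas
        ddrescue -f -r3 /dev/sda /dev/sdb rescue.log   # retry bad sectors up to 3 times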

    Read the article

  • Rsync root files between systems without specifying password

    - by xpt
    This seems very tricky to me. I've set up my two systems so that I can rsync files between them as myself without specifying a password. Now the problem is to rsync files that belong to root. On both of my systems there are no root passwords; the only way to become root is via sudo. So I can neither give a password for sudo rsync local root@remote: nor use my ssh-agent to supply a passphrase. I don't want to set up a root password on either system, and I do need the files to be owned by root on both systems.
    EDIT: Files that belong to root are just an example; I need a way for my unprivileged account to read/write system (including root-owned) files easily. One example is copying my configured /root environment into a freshly installed system. The two systems are actually two VMs under a single host, so copying root-owned files between them is not a big concern for me.
    EDIT 2: If I only want to copy my configured /root environment into the freshly installed system, I can use tar:
        sudo tar cvzf - /root | ssh me@remote sudo tar xvzf - -C /
    But I do need rsync to update from time to time. Any easy way to make it happen?
    EDIT 3: To formally formulate the question: how do I rsync files that belong to root between two systems as a normal unprivileged user, without specifying a password, under the conditions that:
      1. The root account is locked on both systems, i.e. there are no root passwords; the only way to become root is via sudo (recommended security practice, see http://help.ubuntu.com/community/RootSudo).
      2. I don't want a completely passwordless sudo, but I don't want to be typing passwords all the time either.
      3. The normal unprivileged user has entered their ssh passphrase into the ssh agent.
    Thanks
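    One pattern that is often suggested for these conditions is a sudoers rule that lets the unprivileged account run only rsync as root on the remote side, combined with rsync's --rsync-path option; a sketch, with "me" and "remote" standing in for the real account and host, and with the caveat that the sudo -E / requiretty details vary between distributions:

        # on the remote system, added via visudo: passwordless sudo for rsync only
        me ALL=(root) NOPASSWD: /usr/bin/rsync

        # on the local system: read local files as root, run the remote end as root too;
        # -E keeps SSH_AUTH_SOCK so the key already loaded in the agent still works
        sudo -E rsync -az -e ssh --rsync-path="sudo rsync" /root/ me@remote:/root/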

    Read the article

  • Very high CPU and low RAM usage - is it possible to shift some of the CPU usage to the RAM (with CloudLinux LVE Manager installed)?

    - by Chriswede
    I had to install CloudLinux so that I could somewhat control the CPU usage and, more importantly, the concurrent connections the websites use. But as you can see the server load is way too high, and that's why some sites take up to 10 seconds to load!
      Server load: 22.46 (8 CPUs) (!)
      Memory used: 36.32% (2,959,188 of 8,146,632) (ok)
      Swap used: 0.01% (132 of 2,104,504) (ok)
      Server: 8 x Intel(R) Xeon(R) CPU E31230 @ 3.20GHz
      Memory: 8143680k/9437184k available (2621k kernel code, 234872k reserved, 1403k data, 244k init)
      Linux
      Yesterday: total of 214,514 page views (Awstats)
    Now my question: can I shift some of the CPU usage to the RAM? Or what else could I do to make the sites run faster? (The websites are dynamic, so SQL heavy.) Thanks.
        top - 06:10:14 up 29 days, 20:37, 1 user, load average: 11.16, 13.19, 12.81
        Tasks: 526 total, 1 running, 524 sleeping, 0 stopped, 1 zombie
        Cpu(s): 42.9%us, 21.4%sy, 0.0%ni, 33.7%id, 1.9%wa, 0.0%hi, 0.0%si, 0.0%st
        Mem:  8146632k total, 7427632k used, 719000k free, 131020k buffers
        Swap: 2104504k total, 132k used, 2104372k free, 4506644k cached
           PID USER     PR  NI  VIRT  RES  SHR S  %CPU %MEM    TIME+ COMMAND
        318421 mysql    15   0 1315m 754m 4964 S 474.9  9.5 95300:17 mysqld
          6928 root     10  -5     0    0    0 S   2.0  0.0  90:42.85 kondemand/3
        476047 headus   17   0  172m  19m  10m S   1.7  0.2   0:00.05 php
        476055 headus   18   0  172m  18m 9.9m S   1.7  0.2   0:00.05 php
        476056 headus   15   0  172m  19m  10m S   1.7  0.2   0:00.05 php
        476061 headus   18   0  172m  19m  10m S   1.7  0.2   0:00.05 php
          6930 root     10  -5     0    0    0 S   1.3  0.0 161:48.12 kondemand/5
          6931 root     10  -5     0    0    0 S   1.3  0.0 193:11.74 kondemand/6
        476049 headus   17   0  172m  19m  10m S   1.3  0.2   0:00.04 php
        476050 headus   15   0  172m  18m 9.9m S   1.3  0.2   0:00.04 php
        476057 headus   17   0  172m  18m 9.9m S   1.3  0.2   0:00.04 php
          6926 root     10  -5     0    0    0 S   1.0  0.0  90:13.88 kondemand/1
          6932 root     10  -5     0    0    0 S   1.0  0.0 247:47.50 kondemand/7
        476064 worldof  18   0  172m  19m  10m S   1.0  0.2   0:00.03 php
          6927 root     10  -5     0    0    0 S   0.7  0.0  93:52.80 kondemand/2
          6929 root     10  -5     0    0    0 S   0.3  0.0 161:54.38 kondemand/4
          8459 root     15   0  103m 5576 1268 S   0.3  0.1  54:45.39 lvest
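    Given that mysqld dominates the CPU while roughly half the RAM sits in cache and swap is untouched, one hedged direction is to let MySQL use more of that memory so queries do less repeated work; a sketch of my.cnf values (the numbers are assumptions to be tuned, not recommendations from the post). Slow-query logging plus EXPLAIN and indexing on the heaviest queries usually buys more than any single buffer setting.

        [mysqld]
        innodb_buffer_pool_size = 2G    # cache more data and indexes in RAM
        tmp_table_size          = 64M   # keep more temporary tables off disk
        max_heap_table_size     = 64M
        query_cache_type        = 1     # MySQL 5.x query cache (removed in 8.0)
        query_cache_size        = 64M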

    Read the article

  • very slow connection to ssh server from client (but not other servers)

    - by AntonOfTheWoods
    I have an Ubuntu 12.04 laptop that takes so long to connect to various servers (in different data centres) that it seems like a bit of a lottery whether I'll actually get a connection. If I connect between the servers themselves it's instantaneous, and I've set
        UseDNS no
        AddressFamily inet
    on the servers I'm connecting to (and rebooted for good measure). I've also added the reverse DNS and IP of the cable connection I'm connecting from. If I connect from the laptop via telnet (telnet my.server 22), the connection is also instantaneous, so it doesn't appear to be a problem with an intervening firewall. I get the same behaviour whether I connect with the IP, a short name in my hosts file, or the FQDN. I'm connecting over a 50 Mbps (cable, sync) connection, so that doesn't appear to be the problem, and when I do finally get a connection it's a good, quick, stable one. I have tried listening on another port (8000) and that makes no difference. Web and other connections from the laptop to the machine are also very good. Does anyone have any ideas?
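    Two quick client-side checks often narrow this down (a sketch; GSSAPI negotiation timing out is one common cause of long client-side pauses, which is an assumption here rather than something the post confirms):

        ssh -vvv me@my.server                          # watch where the handshake stalls
        ssh -o GSSAPIAuthentication=no me@my.server    # retry with GSSAPI/Kerberos negotiation disabled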

    Read the article

  • With Ubuntu 12.04, unlike 11.04, Wine-installed application start menu links are missing

    - by Ron Whites
    With Ubuntu 12.04 and Wine 1.4, unlike Ubuntu 11.04 with Wine 1.2.2, installed applications' start menu links are missing. For instance, on Ubuntu 11.04 with Wine I can install one of our Windows applications and then go to Applications > Wine > Programs > Semantic Designs > TestCoverage Documentation to bring up the documentation for how to run our tool. Unfortunately, with Ubuntu 12.04 the Applications menu is gone, and in the Dash I do see "Recent Apps" and "More Apps", but my installed Wine application and its documentation link are not shown, even though the Wine uninstaller shows the application as present. I found this online suggestion and tried using the GNOME "Main Menu" editor:
      1. Windows key to launch the Dash.
      2. Enter "Main Menu" in the search field and open the old Edit Main Menu app.
      3. Select the category (aka Unity Dash filter) you want the item in.
      4. Name the Dash/Launcher item.
      5. Add the command to launch said app.
    With the Main Menu editor I got down to the TestCoverage Documentation entry and could see a command link in its properties:
        env WINEPREFIX="/home/sdtest/.wine" wine C:\windows\command\start.exe /Unix /home/sdtest/.wine/dosdevices/c:/users/sdtest/Start\ Menu/Programs/Semantic\ Designs/Test\ Coverage/Java\ 1.7\ Documentation.lnk
    BUT I could not execute this link to view the installed documentation. So I copied the link properties into a file, set it as executable, and ran it as a bash script, and the documentation came up! So why can't I use this link under Main Menu?
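    A workaround sketch that builds on the fact that the command already works as a bash script: point a .desktop launcher at that script so the Dash can index it (the launcher file name and script path below are made up; substitute the executable file created above):

        # ~/.local/share/applications/testcoverage-doc.desktop
        [Desktop Entry]
        Type=Application
        Name=TestCoverage Documentation
        Exec=/home/sdtest/bin/testcoverage-doc.sh
        Terminal=false
        Categories=Wine;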

    Read the article

  • Grub Autostart with timeout

    - by BetaRide
    On Ubuntu 10.04 LTS I want GRUB to start the default OS after 5 seconds, and I'd like to see the output of the startup scripts. Currently GRUB waits forever until I hit return, and the output of the startup scripts isn't visible. Can someone tell me how I have to configure /etc/default/grub or any other setup? I tried playing with GRUB_TIMEOUT and GRUB_DEFAULT and ran sudo update-grub afterwards, but nothing changed. Any ideas?
        # If you change this file, run 'update-grub' afterwards to update
        # /boot/grub/grub.cfg.
        GRUB_DEFAULT=0
        GRUB_HIDDEN_TIMEOUT=5
        #GRUB_HIDDEN_TIMEOUT_QUIET=true
        GRUB_TIMEOUT=5
        GRUB_DISTRIBUTOR=`lsb_release -i -s 2> /dev/null || echo Debian`
        GRUB_CMDLINE_LINUX_DEFAULT="quiet splash"
        GRUB_CMDLINE_LINUX=""
        GRUB_SAVEDEFAULT=true

        # Uncomment to disable graphical terminal (grub-pc only)
        #GRUB_TERMINAL=console

        # The resolution used on graphical terminal
        # note that you can use only modes which your graphic card supports via VBE
        # you can see them in real GRUB with the command `vbeinfo'
        #GRUB_GFXMODE=640x480

        # Uncomment if you don't want GRUB to pass "root=UUID=xxx" parameter to Linux
        # GRUB_DISABLE_LINUX_UUID=true

        # Uncomment to disable generation of recovery mode menu entries
        #GRUB_DISABLE_LINUX_RECOVERY="true"

        # Uncomment to get a beep at grub start
        #GRUB_INIT_TUNE="480 440 1"
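    A hedged sketch of changes commonly suggested for exactly these two symptoms (assumptions to try, not a confirmed fix): GRUB_HIDDEN_TIMEOUT and GRUB_SAVEDEFAULT can interact badly with the visible menu timeout, and "quiet splash" is what hides the startup script output.

        # in /etc/default/grub, then run: sudo update-grub
        GRUB_DEFAULT=0
        #GRUB_HIDDEN_TIMEOUT=5          # comment out so the normal menu timeout applies
        GRUB_TIMEOUT=5                  # boot the default entry after 5 seconds
        GRUB_CMDLINE_LINUX_DEFAULT=""   # drop "quiet splash" so boot messages are shown
        #GRUB_SAVEDEFAULT=true          # only meaningful with GRUB_DEFAULT=saved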

    Read the article

  • Disk (EXT4) suddenly empty without any sign of why

    - by Ohnomydisk
    I have an Ubuntu 10.04 server with several disks in it. The disks are set up with a union filesystem, which presents them all as one logical /home. A few days ago, one of the disks appears to have suddenly 'become empty', for lack of a better explanation. The amount of data on the /home mount almost halved within minutes; the disk appears to have had just over 400 GB of data prior to 'becoming empty'. I have absolutely no idea what happened. I was not using the server at the time, but there are half a dozen other users who may have been (without root access and without the ability to hose a whole disk). I've run SMART tests on the disk and it comes back clean. The filesystem checks fine (it has 12 GB used now, as some user software continued downloading after the incident). All I know is that around midnight on October 19 the disk usage changed dramatically. The data points are every 15 minutes, and the full loss occurred between captures:
        2012-10-18 23:58:03.399647 - has 953.97/2059.07 GB [46.33 percent]
        2012-10-19 00:13:15.909010 - has 515.18/2059.07 GB [25.02 percent]
    Other than that, I don't have much to go on :-( I know that:
      - There's nothing interesting in the log files at that time.
      - Nobody appeared to be logged in via SSH at the time it occurred (most users do not even use SSH).
      - The server was online through whatever occurred (3 months uptime).
      - None of the other disks were affected and everything else on the server looks completely normal.
      - I have tried using "extundelete" on the disk and it didn't really find anything (some temporary files, but they looked new anyway).
    I am completely at a loss as to what could have caused this. I was initially thinking maybe a root escalation exploit, but even if someone did maliciously "rm" the disk contents, wouldn't it take more than 15 minutes for 400 GB?

    Read the article

  • Passive mode FTP file download hangs from specific machine

    - by chiptuned
    I have a server, an AWS instance, that just cannot download files from one specific FTP server. I can connect to the FTP server fine and run some commands, but when I request a file it just hangs. Here is the debug output of the basic Linux ftp client after login:
        ---> SYST
        215 UNIX Type: Apache FtpServer
        Remote system type is UNIX.
        ftp> get outgoing/catalog.gz catalog.gz
        local: catalog.gz remote: outgoing/catalog.gz
        ---> PASV
        227 Entering Passive Mode (64,156,167,125,135,191)
        ---> RETR outgoing/catalog.gz
        150 File status okay; about to open data connection.
    That's it. Then it just sits there and nothing transfers. I have verified that a data connection is made, but the client gets no data:
        ? ss -nt dst 64.156.167.125
        State   Recv-Q   Send-Q   Local Address:Port      Peer Address:Port
        ESTAB   0        0        10.185.147.150:41190    64.156.167.125:21
        ESTAB   0        0        10.185.147.150:48871    64.156.167.125:48557
    The FTP server is not under my control, and downloads from other FTP servers in passive mode have worked. Active mode does not work because the system is behind a firewall. Every FTP client I've tried has the same problem. The download works from other systems, even from other AWS instances with the same Security Group, though not necessarily the same distro or config. I understand it may be some issue on the server side, but I want to know what it is about my particular machine that makes the transfer hang here while it works on every other machine I can get my hands on. Please let me know what the culprit on the client side could be, or ideas on what else to look at.
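    A couple of client-side checks worth sketching out (the passive data port changes with every PASV reply, so the filter below just excludes the control port; the path-MTU angle is an assumption, though it is a classic cause of "connection opens, no data arrives"):

        # watch the data connection while the transfer hangs
        sudo tcpdump -ni eth0 host 64.156.167.125 and not port 21
        # probe for path-MTU problems toward the FTP server
        ping -M do -s 1472 64.156.167.125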

    Read the article

  • Multiple SSL certificates on one server

    - by Kyle O'Brien
    We're hosting two websites on our fairly tiny but dedicated production server. Both websites require SSL, so we have virtual hosts set up for both of them. They each reference their own domain.key, domain.crt and domain.intermediate.crt files. Each site's CSR and certificate were set up with its own unique information, and nothing is shared between them (other than the server itself). However, whichever site's symbolic link (set up in /etc/apache2/sites-enabled) is referenced first is the site whose certificate gets served, even when we're visiting the second site. For example, assume our companies are Cadbury and Nestle. We set up both sites with their own certificates, but we create Cadbury's symbolic link in Apache's sites-enabled folder first and then Nestle's. You can visit Nestle perfectly fine, but if you check the certificate, it references Cadbury's certificate. We're hosting these websites on a dedicated Ubuntu 12.04.3 LTS server, and both certificates are provided by Thawte.com. I came across a few potential solutions but had no success with them. I'm hoping someone else has a decent solution? Thanks.
    Edit: The only other solution that seems to have worked for some people is using SNI with Apache. However, those setups didn't seem to match ours at all.
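    For reference, a minimal sketch of what name-based SSL virtual hosts with SNI usually look like on Apache 2.2 (Ubuntu 12.04's Apache supports SNI; the Cadbury/Nestle names come from the example above, the paths are placeholders). Without SNI, or without matching ServerName values, Apache falls back to the first *:443 vhost it loads, which matches the symptom described.

        # /etc/apache2/ports.conf (or equivalent)
        NameVirtualHost *:443

        # sites-available/cadbury
        <VirtualHost *:443>
            ServerName www.cadbury.example
            SSLEngine on
            SSLCertificateFile      /etc/ssl/certs/cadbury.crt
            SSLCertificateKeyFile   /etc/ssl/private/cadbury.key
            SSLCertificateChainFile /etc/ssl/certs/cadbury.intermediate.crt
        </VirtualHost>

        # sites-available/nestle
        <VirtualHost *:443>
            ServerName www.nestle.example
            SSLEngine on
            SSLCertificateFile      /etc/ssl/certs/nestle.crt
            SSLCertificateKeyFile   /etc/ssl/private/nestle.key
            SSLCertificateChainFile /etc/ssl/certs/nestle.intermediate.crt
        </VirtualHost>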

    Read the article

  • Apache Virtualhosts with PHP and custom logs - how to isolate PHP errors?

    - by Repox
    I'm trying to set up a simple hosting environment for my application on an Ubuntu server. I created a virtual host like this:
        <VirtualHost *:80>
            ServerAdmin [email protected]
            ServerName www.example.com
            ServerAlias example.com
            DocumentRoot /home/owner/example.com/docs
            <Directory />
                Options FollowSymLinks
                AllowOverride None
            </Directory>
            <Directory /home/owner/example.com/docs/>
                Options -Indexes FollowSymLinks MultiViews
                AllowOverride None
                Order allow,deny
                allow from all
            </Directory>
            ScriptAlias /cgi-bin/ /usr/lib/cgi-bin/
            <Directory "/usr/lib/cgi-bin">
                AllowOverride None
                Options +ExecCGI -MultiViews +SymLinksIfOwnerMatch
                Order allow,deny
                Allow from all
            </Directory>
            ErrorLog /home/owner/example.com/logs/error.log
            # Possible values include: debug, info, notice, warn, error, crit,
            # alert, emerg.
            LogLevel warn
            CustomLog /home/owner/example.com/logs/access.log combined
            php_flag log_errors on
            php_value error_log /home/owner/example.com/logs/php-error.log
        </VirtualHost>
    Now, my problem is that PHP errors and warnings are thrown into error.log, not into php-error.log as I was hoping. How can I achieve this?
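    A hedged sketch of two things commonly checked here (assumptions, not facts from the post): PHP falls back to the web server's log when its own error_log is missing or not writable by the Apache user, and the php_admin_* variants keep the setting from being overridden later:

        # make sure the PHP log exists and is writable by Apache (www-data on Ubuntu)
        sudo touch /home/owner/example.com/logs/php-error.log
        sudo chown www-data:www-data /home/owner/example.com/logs/php-error.log

        # inside the <VirtualHost> block, prefer the admin-level variants:
        #     php_admin_flag  log_errors on
        #     php_admin_value error_log /home/owner/example.com/logs/php-error.log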

    Read the article

< Previous Page | 648 649 650 651 652 653 654 655 656 657 658 659  | Next Page >