Search Results

Search found 4514 results on 181 pages for 'totally newbie'.


  • What does this example bash startup script do?

    - by Dimitri
    I am trying to set up GNU Octave on my computer (Mac OS X 10.7.4). I am a newbie at using the Terminal and I need help understanding what the following script actually does:

        if [ -f ~/.bashrc ]; then
            . ~/.bashrc
        fi
        PATH=$PATH:/usr/local/bin
        BASH_ENV=~/.bashrc
        export BASH_ENV PATH
        export GNUTERM=aqua
        alias octave="/Applications/Octave.app/Contents/Resources/bin/octave"
        alias gnuplot="/Applications/Gnuplot.app/Contents/Resources/bin/gnuplot"

    (taken from here: http://wikibox.stanford.edu/me112/index.php/Main/OctaveMatlabNotes) So this script begins with a simple conditional if statement. I don't understand the conditional expression - what are -f and .bashrc? What does the statement . ~/.bashrc actually do? Then 2 variables are defined, PATH and BASH_ENV. Why are they exported? Why is GNUTERM=aqua exported even though it's not defined anywhere? All I need is a script that would allow me to run Octave by simply typing octave in the terminal. I don't need the alias for gnuplot. Thanks
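
    For the asker's actual goal - just being able to type octave - a much smaller ~/.bash_profile should do; a minimal sketch, assuming the standard Octave.app install path quoted above:

        # load interactive settings from ~/.bashrc if that file exists
        if [ -f ~/.bashrc ]; then
            . ~/.bashrc
        fi
        # make "octave" launch the binary bundled inside Octave.app
        alias octave="/Applications/Octave.app/Contents/Resources/bin/octave"
        # tell Octave's plotting backend to use the native Mac (Aqua) terminal
        export GNUTERM=aqua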

    Read the article

  • File sharing problem on Windows Server 2003 x64

    - by O. Askari
    Hi, we have a customer that hosts our .NET application server on Windows Server 2003 x64. The problem is that its file sharing gets totally disabled after about 10-30 minutes. The only way to re-enable it is to restart the server, but the same thing happens again after each restart. This server contains SQL Server 2005 Enterprise, .NET Framework 3.5 and our .NET-based application server. We haven't had such a problem with any other customer before, so we asked them to prepare another server to deploy our application on. We installed our application server on the new machine and let SQL Server remain on the old one. Unfortunately the same problem happened on the new machine too. Now the old machine works only as a database server and the new one works as the application server, but both of them have the same file sharing problem. File sharing on the two machines doesn't get disabled at the same time, but it eventually happens to both of them. I wonder why this is happening and how to find the cause of this problem. Any suggestion or solution is much appreciated.

    Read the article

  • Regarding partitions for dual-booting Ubuntu with pre-existing Windows 7

    - by Shasteriskt
    I have zero actual experience with configuring disk partitions, and the stuff I have read for the past few hours has been confusing me a bit, so please bear with me. First of all, I'd like to explain what I'm setting out to achieve:

        Windows 7 with:
            C:\ Windows 7 (pre-existing installation)
            D:\ Data (already exists and has files on it)
        Ubuntu 11 - does not exist yet, but I already have a LiveCD in hand:
            \root directory for Ubuntu
            \home on its own partition, I plan
            \swap on its own partition, with around 8 GB

    Here is the current situation: I have a single 500 GB hard disk with Windows 7 x64 installed, and the current partition scheme is as follows:

        System Reserved: 100 MB (Primary, Active)
        C: 100 GB - where Windows 7 is installed (Primary)
        D: 365 GB - where my files are located, LOTS of free space (Primary)

    Now, I would like to shrink my D: drive and create around 40 GB of unallocated disk space for the Ubuntu installation, but here is what's confusing me a bit: I'm thinking I would create an extended partition and subdivide it into 3 logical partitions for the Ubuntu setup I had in mind. (If you think my setup is a bad idea, please let me know why. I also hope you can suggest a better one...) I am aware that I can have at most 4 primary partitions, or 3 primary partitions plus 1 extended partition. Now, does the System Reserved partition count as one primary partition? I'm really new to these things and it is totally unclear to me. In shrinking my D: drive using Windows 7's Disk Management tool, I would get unallocated free space which I don't know how to make an extended partition from. It seems like I can only create a primary partition from it, not an extended one. How do I go about it? (I'd also like to note, if it is of any importance, that I am trying to avoid the option to install Ubuntu alongside Windows, and would much rather use the custom install where I can specify which drives I wish to use and so on. Somehow I feel it's safer that way.)
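
    One hedged sketch of how the extended partition could be created from the Ubuntu LiveCD instead of from Windows, using parted (assumption: the 40 GB gap sits at roughly the 460-500 GB mark on the disk the LiveCD sees as /dev/sda; the print command shows the real boundaries):

        sudo parted /dev/sda
        (parted) print                           # confirm where the free space starts and ends
        (parted) mkpart extended 460GB 500GB     # turn the unallocated gap into an extended partition
        (parted) quit

    After that, the Ubuntu installer's manual partitioning step can carve /, /home and swap out of the extended partition as logical partitions, leaving all three Windows primaries untouched.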

    Read the article

  • sys.dm_exec_query_stats interaction with recompilation

    - by Sam Saffron
    We use sys.dm_exec_query_stats to track down slow queries and queries that are IO offenders. This works great; we get a lot of very insightful stats. It is clear this is not as accurate as running a profiler trace, as you have no idea when SQL Server will decide to chuck out an execution plan. We have quite a few queries where the wrong execution plan is cached. For example, queries like the following:

        SELECT TOP 30 a.Id
        FROM Posts a
        JOIN Posts q ON q.Id = a.ParentId
        JOIN PostTags pt ON q.Id = pt.PostId
        WHERE a.PostTypeId = 2
          AND a.DeletionDate IS NULL
          AND a.CommunityOwnedDate IS NULL
          AND a.CreationDate > @date
          AND LEN(a.Body) > 300
          AND pt.Tag = @tag
          AND a.Score > 0
        ORDER BY a.Score DESC

    The problem is that the ideal plan really depends on the date selected (screenshot of ideal plan): However, if the wrong plan is cached, it totally chokes when the date range is big: (notice the big fat lines) To overcome this we were recommended to use either OPTION (OPTIMIZE FOR UNKNOWN) or OPTION (RECOMPILE). OPTIMIZE FOR UNKNOWN results in a slightly better plan, which is still far from optimal; executions are tracked in sys.dm_exec_query_stats. RECOMPILE results in the best plan being chosen, but no execution counts or stats are tracked in sys.dm_exec_query_stats. Is there another DMV we could use to track stats on queries with OPTION (RECOMPILE)? Is this behavior by design? Is there another way we can force recompilation while keeping stats tracked in sys.dm_exec_query_stats? Note: the framework will always execute parameterized queries using sp_executesql

    Read the article

  • Connection between Asp.Net and Oracle 10g Express Edition

    - by l3gion
    Hello, I'm struggling to find a way to connect my ASP.NET + C# application to my Oracle 10g Express Edition. Here's my scenario: I'm on Mac OS and I have 2 virtual machines, one with Win 7 (the VS 2010 app) and another with a Parallels Virtual Appliance with Oracle 10g Express Edition 1.1. Which provider (OleDb, ODP.NET, etc.) should I use? How do I make the connection to the server in C#? Right now I have this:

        <appSettings>
          <add key="conn" value="Data Source=10.211.55.11;Persist Security Info=True;User ID=l3gion;Password=l3gion;" />
        </appSettings>

    And at the .cs file:

        SqlCommand cmd = new SqlCommand("insert_thing", new SqlConnection(ConfigurationManager.AppSettings["conn"]));
        cmd.CommandType = CommandType.StoredProcedure;

    (insert_thing is a stored procedure.) Using this I got this error: A network-related or instance-specific error occurred while establishing a connection to SQL Server. The server was not found or was not accessible. Verify that the instance name is correct and that SQL Server is configured to allow remote connections. (provider: Named Pipes Provider, error: 40 - Could not open a connection to SQL Server) I've searched for some possible solutions and tried some, including: firewall disabled, allowing remote connections to Oracle Express Edition using this command line ("EXEC DBMS_XDB.SETLISTENERLOCALACCESS(FALSE);").. The error persists. Can anyone guide me in the right direction? I'm a newbie with this type of thing. Thank you for your patience. regards
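
    A quick way to rule out networking before touching the C# side is to poke the Oracle listener from the Windows VM; a sketch, assuming the defaults for Express Edition (listener port 1521, service name XE) - adjust if yours differ:

        # does anything answer on the listener port at all?
        telnet 10.211.55.11 1521
        # if the Oracle client tools are installed, try an actual logon:
        sqlplus l3gion/l3gion@//10.211.55.11:1521/XE

    One observation on the error itself: SqlConnection/SqlCommand are SQL Server client classes and can only ever talk to SQL Server, which is why the message mentions SQL Server at all; reaching Oracle needs an Oracle provider (for example ODP.NET's OracleConnection) regardless of the networking.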

    Read the article

  • Tables in the SQL Server "master" database, will they cause problems?

    - by pepoluan
    Folks, please be kind to me... I'm just an 'accidental' DBA, since our DBA resigned, so I'm totally a newbie at DBA work... You see, I have this application, "ESET Remote Administration Server" (ERAS), that stores its logs and analysis in (originally) a local Access database. The decision was made to migrate its database to a SQL Server 2008 R2 machine. ESET (the maker of the software) helpfully provided tools to perform such a migration; unfortunately, being the DBA neophyte that I am, I didn't realize that I had to first create my own database (on the SQL Server side) and assign that database as the 'default' database for ERAS' ODBC connection. Now the migration tool has successfully created a whole bunch of tables inside the "master" database. My questions: Should I leave things as they are, or should I re-migrate the ERAS database to a different database? If you suggest I perform a re-migration, my plan is to (1) create a new instance, (2) create a new database within the new instance, (3) create a new ODBC System DSN on the ERAS server pointing to the new DB in step 2, and (4) use ESET's migration tool to migrate from the current DSN to the new DSN. Do you think I missed a step there? Thanks beforehand for any guidance.
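
    Before deciding, it may help to see exactly which tables landed in master; a sketch using sqlcmd (assumption: a default instance reachable locally with Windows authentication):

        sqlcmd -S localhost -E -Q "SELECT name FROM master.sys.tables WHERE is_ms_shipped = 0 ORDER BY name"

    Anything that query returns was created by an application rather than shipped with SQL Server, which gives a concrete inventory of what a re-migration would need to move.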

    Read the article

  • Getting Started in SuSE as an Ubuntu User

    - by Subhamoy Sengupta
    I am not a Linux newbie, but I haven't touched SuSE in a very, very long time (the last time I tried it, it was SuSE 7!). Finally I felt like giving it a try, and many things seem strange or unnecessarily complex. I have a series of questions. How do I ensure that my packages are up to date? It sounds silly, but I tried the obvious methods already. I have disabled the default repositories that show up when you do zypper lr, and added the Tumbleweed and Packman repositories (Essentials, Multimedia, Extra). Then I did sudo zypper ref --force and then sudo zypper dup, and it tells me many dependencies are not met. I have already added solver.allowVendorChange=true to /etc/zypp/zypp.conf, so it should not care which repository the latest versions are in, and just upgrade to them. Even when I chose to skip the packages with unmet dependencies, and it seemed like quite a bit happened in the background, I opened Firefox afterwards and the version was 7! I am guessing things did not go as expected. But of course this is not a problem with SuSE; I am just not understanding the system right. How do I do it right? When I start typing the arguments of a command, for example sudo zypper install, when I type sudo zypper ins and keep hitting TAB, nothing happens! It always worked in Ubuntu and I feel very uneasy without it. Is this how SuSE is supposed to be? When I try to install something, and I start typing its name, even though the package exists and I am sure of it, hitting TAB does not autocomplete it. This is also quite inconvenient. Why is it not happening? There are many things in SuSE that are really great, and I think I will stay with it and not go back to Ubuntu once I settle these very rudimentary issues. But right now they are giving me a lot of grief! Please help!
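
    On the repository question, a sketch of the usual Tumbleweed + Packman dance (assumption: the Packman repo was added under the alias 'packman'; zypper lr shows the actual aliases):

        sudo zypper lr -u                  # list repos and confirm which are enabled
        sudo zypper ref --force            # force-refresh all repository metadata
        sudo zypper dup --from packman     # vendor-change dup against one repo at a time

    Doing the dup against one repository at a time tends to surface which repo combination is actually producing the unresolvable dependencies. On tab completion, openSUSE only completes subcommands and package names once the bash-completion package is installed and sourced, so

        sudo zypper install bash-completion

    followed by a fresh login shell is often all that is missing.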

    Read the article

  • OS X won't boot up unless I hold down option key

    - by Gazzer
    I have a strange issue on an early 2008 Mac Pro running OS X 10.6: if I restart the computer, it restarts normally; if I shut down and boot, it stops at the grey screen just before the boot process; if I shut down and boot but hold down the option key, I can select the boot disk and all is good. I've just cloned the disk, and the same thing happens. The disk is a SAMSUNG HD154UI. The disk is partitioned (the second partition holds a clone of the Snow Leopard install disc). One weird thing on the original disk was that one of the partitions said 'EFI Boot' in a non-aliased font, rather than the name of the disk, when the disks were listed upon holding down option. Solution: it seems that there was a problem with the disk. Part of the difficulty in finding the solution was that you need to remove the disk from the computer completely. For example, a good disk in Bay 3 wouldn't boot up if the bad disk was in Bay 2, so for ages I thought the problem was hardware related in Bay 3. So if you think you have a dodgy disk, remove it totally if you are testing the hardware with a 'clean' disk. Resetting the PRAM helped to get the new disk to work too.
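
    For anyone hitting the same grey screen, an unset startup disk is also worth ruling out, since that produces the same "boots only via the option-key picker" symptom; a sketch from Terminal (assumption: the boot volume is named "Macintosh HD"):

        sudo bless --mount "/Volumes/Macintosh HD" --setBoot

    This re-records the default boot volume in NVRAM, the same thing that choosing it in System Preferences > Startup Disk does.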

    Read the article

  • I need to preserve a tape using Symantec Backup Exec, and I'm having trouble doing so

    - by MrVimes
    Please forgive me if this is the wrong Stack Exchange site; please suggest which one I should post this to if it is. There's an automatic tape machine running in a remote location, with software (Symantec Backup Exec 11d). Recently one of the servers being backed up had problems with its RAID controller, so one of the drives has become invisible. I need to preserve the last good backup of that drive, so I am trying to replace the tape holding the most recent backup of that drive with one of the scratch tapes (blank tapes) present in the machine. I've tried the following:

        1. Associate the blank media with the media set in question (Wednesday).
        2. For the existing media (the tape with the data I want to keep), click 'move to vault' and move it to the offline vault.
        3. Associate it with something other than 'Wednesday' (a media set called 'keep data infinitely...').
        4. Do an inventory on that slot.

    The above steps, I'm led to believe, are supposed to put the fresh tape in the slot that had the tape I want to keep in it. But after the inventory (and after refreshing the device tree), the slot just keeps showing up as containing the tape I want to keep. I am a complete newbie with this software. Can you tell me what I'm doing wrong, and/or tell me how to achieve my desired goal? Edit: Just want to point out that I did try to get help directly from Symantec with this, but having jumped through countless hoops to create an account and create a support ticket, my progress was halted by a requirement for something called a 'technical contact ID' at the final step, with no explanation of what it is or how to get one.

    Read the article

  • postfix test and configuration problem

    - by Woho87
    Hi Guys! I installed postfix using sudo yum install postfix postfix-mysql. I'm newbie to mail systems, but I have one AMAZON EC2 instance with a public DNS. I used the public DNS in most cases, when I configured the file main.cf. The public DNS I have is from amazon and it is a long string(ec2-123-34-234-677.....amazon.com). // I configured this on main.cf. I replaced example.com with ec2-123-.......amazon.com myhostname = mail.example.com mydomain = example.com myorigin = $mydomain mydestination = example.com, $transport_maps local_recipient_maps = $alias_maps $virtual_mailbox_maps unix:passwd.byname home_mailbox = Maildir/ How do I test postfix? I just want it to send emails for my web application. I tried to test it with >telnet localhost 25 after I typed in SSH >sudo postfix start. but I recieve the message that telnet command can not be found. I also use the Amazon linux distribution if you want to know. I use it because it is free. What have I done wrong? Are there anymore configurations required pls help!
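
    A minimal smoke test along these lines should work once the telnet client is installed (a sketch; on Amazon Linux the client lives in the 'telnet' package, and you@example.com is a hypothetical recipient):

        sudo yum install telnet
        telnet localhost 25
        # then type an SMTP dialogue by hand:
        #   HELO localhost
        #   MAIL FROM:<root@localhost>
        #   RCPT TO:<you@example.com>
        #   DATA ... .
        #   QUIT

    Alternatively, Postfix's sendmail wrapper needs no extra packages:

        echo "test body" | /usr/sbin/sendmail you@example.com

    and /var/log/maillog will show what Postfix did with the message.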

    Read the article

  • Configuration of Server root email - Change Address and Name on outgoing email

    - by JTWOOD
    As a newbie Postfix user, I've gotten so far and now I am stuck with a SMALL problem. I would like to configure my local network servers to send alerts and the like using the following: 1) From address: [email protected] 2) From name: Hostname. I can get #1 to work fine using smtp_generic_maps. The problem is that in my email client the name is listed as "root" - as in, the header shows the following:

        Date: Sun, 29 Jul 2012 13:21:01 -0400 (EDT)
        From: [email protected] (root)
        To: undisclosed-recipients:;

    I'd like to change it to "From: [email protected] (Zeus)". I imagine that this can be done with header_checks, but so far I haven't gotten anything to work, and before I waste a ton of time on it I'd like to make sure I am on the right track. My aliasing and generic maps are set up correctly (as far as I can see and know - the results are correct!). I just want to change that last bit in the From field to reflect the hostname. I would also like to add something to the subject of the outgoing messages for easy filtering - something like Subject: [Zeus.domain] - "Original Subject". Any suggestions are much appreciated. Thanks!
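
    header_checks is indeed the right track; a sketch of what the rewrite could look like (assumptions: regexp tables, the hostname Zeus, and root@example.com standing in for the redacted address). In main.cf:

        smtp_header_checks = regexp:/etc/postfix/rewrite_headers

    and in /etc/postfix/rewrite_headers:

        /^From:\s*root@example\.com/   REPLACE From: "Zeus" <root@example.com>
        /^Subject:\s*(.*)$/            REPLACE Subject: [Zeus.domain] - $1

    Using smtp_header_checks (rather than plain header_checks) applies the rewrite only as mail leaves via SMTP, and regexp tables allow the $1 back-reference that keeps the original subject text.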

    Read the article

  • Boot records messed on dual boot (win7 and ubuntu) machine with SSD and HDD

    - by Michael
    I have a Lenovo IdeaPad Y570 with two hard drives, an SSD and a normal HDD, both managed by RapidDrive, with Windows 7 pre-installed. First, I shrank my 500 GB HDD a little bit to make some room for a Linux installation. Then I installed Linux Mint 12 on it, and also installed GRUB onto that drive (/dev/sdb); the installation program did not allow me to install GRUB on sda. Then I replaced Linux Mint with Ubuntu 12.04, but installed GRUB onto the SSD (which is /dev/sda, and was the default option). After that I couldn't boot into Windows; only Ubuntu worked. So I did some research and tried: rewriting the Windows MBR onto sda1, reinstalling GRUB, replacing GRUB 2 with GRUB Legacy, and now I think my partition tables are totally messed up. Here is the fdisk -l output:

        ubuntu@ubuntu:~$ sudo fdisk -l

        Disk /dev/sda: 64.0 GB, 64023257088 bytes
        255 heads, 63 sectors/track, 7783 cylinders, total 125045424 sectors
        Units = sectors of 1 * 512 = 512 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0x00000000

           Device Boot      Start         End      Blocks   Id  System
        /dev/sda1   *        2048      411647      204800    7  HPFS/NTFS/exFAT
        /dev/sda2          411648  1009430959   504509656    7  HPFS/NTFS/exFAT

        Disk /dev/sdb: 500.1 GB, 500107862016 bytes
        255 heads, 63 sectors/track, 60801 cylinders, total 976773168 sectors
        Units = sectors of 1 * 512 = 512 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0x5e5d1cc8

           Device Boot      Start         End      Blocks   Id  System
        /dev/sdb1   *        1979   884389887   442193954+  12  Compaq diagnostics
        /dev/sdb2       884391934   976771071    46189569    5  Extended
        /dev/sdb5       884391936   937705471    26656768   83  Linux
        /dev/sdb6       937707520   967006207    14649344   83  Linux
        /dev/sdb7       967008256   976771071     4881408   82  Linux swap / Solaris

    I also can't mount any Windows partitions to recover data. And when I open GParted, the whole sda disk appears unallocated and it states "can not have a partition outside the disk!"; the end-sector address of /dev/sda2 confuses me too. If I boot from the SSD, it throws some MBR error and won't boot; if I boot from the HDD, I only get the GRUB shell. How do I restore the partition tables? I can only boot from a live CD on this machine. Thanks for any help.
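
    One hedged line of attack from the live CD is TestDisk, which scans a drive for lost partition boundaries and can write a repaired table; a sketch (read each screen carefully before letting it write anything):

        sudo apt-get install testdisk
        sudo testdisk /dev/sda      # Analyse -> Quick Search / Deeper Search

    Note that /dev/sda2's end sector (1009430959) lies far beyond the SSD's 125045424 total sectors, which matches GParted's "partition outside the disk" complaint; it looks as if an entry belonging to the 500 GB drive ended up in the SSD's table. Since RapidDrive setups span both disks, a tool that can reconstruct the original layout (or a backup of both partition tables) is the safer route before writing anything.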

    Read the article

  • Do I need to update some of my Debian Squeeze software?

    - by stan31337
    I have installed Debian 6, and the LAMP stack from the squeeze repository (the default). After upgrading Apache from 2.2.16 to 2.2.22 from the unstable repository, thanks to this post - how to upgrade already installed apache2 on debian (lenny) - I'm thinking of upgrading all the other software packages that I previously installed from the squeeze repository. Should I upgrade them to the ones from the unstable repository? Should I upgrade all of them or just selected ones? Here's the list:

        * arno-iptables-firewall  1.9.2.k-4                >>  2.0.1.c-1
        * bind9                   1:9.7.3.dfsg-1~squeeze6  >>  1:9.8.1.dfsg.P1-4.2
        * php-apc                 3.1.3p1-2                >>  3.1.13-1
        * fail2ban                0.8.4-3+squeeze1         >>  0.8.6-3
        * exim4                   4.72-6+squeeze2          >>  4.80-4
        * altermime               0.3.10-4                 >>  0.3.10-7
        * rrdtool                 1.4.3-1                  >>  1.4.7-2
        * vsftpd                  2.3.2-3+squeeze2         >>  3.0.0-4

    Also I would like to ask how to upgrade PHP from 5.3.3 to 5.3.16; the unstable repository has 5.4.x versions only, and I don't think I'm ready to move from 5.3 to 5.4 yet. Actually I'm a newbie in Linux, and after my Windows experience I have a paranoid urge to update software to the latest release. I'd be glad for any suggestions and recommendations! Thank you very much!
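
    For cherry-picking single packages from unstable without dragging the whole system along, apt pinning is the usual tool; a minimal sketch (assumption: an unstable deb line already exists in /etc/apt/sources.list). In /etc/apt/preferences:

        Package: *
        Pin: release a=unstable
        Pin-Priority: 100

    Then individual packages can be pulled explicitly while everything else stays on squeeze:

        sudo apt-get update
        sudo apt-get install -t unstable fail2ban

    For PHP specifically, the Dotdeb third-party repository carried newer 5.3.x builds for squeeze at the time, which avoids the 5.3-to-5.4 jump entirely.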

    Read the article

  • What does Firefox do when "scanning for viruses" after download?

    - by Joey
    Never mind the fact that Firefox is a browser and not an AV tool, but what exactly does it do after a download? Even on systems that have up-to-date AV, this generates a pause of several seconds after the download (during which I can't open the file from within the download manager), and I have no idea what FF might be trying there. I know I can turn it off (I'm using FF only at work anyway), but I'm wondering. I can think of some things it might be:

        * FF itself is an AV scanner, and it loads signatures in the background and whatnot. Sounds highly unlikely, and it shouldn't need tens of seconds for 20 KiB files.
        * FF talks to the installed AV to munch the file. Sounds unneeded, given that most AV programs feature real-time protection anyway and would therefore have caught a virus already, and also because FF does this on systems without any AV installed too.
        * FF uploads the file to some online virus checker. Unlikely and stupid.
        * FF instructs some online virus checker to download the file and check it. Unlikely, and that would be a nice target for DoSing the service.
        * FF generates a hash of the file and sends that somewhere (presumably Google) to check. They then respond with either "Whoa, that hash is totally a virus" or "Nope, that MD5 doesn't look very virus-y to me".

    I'm running out of better ideas. Anyone have a clue?

    Read the article

  • Assigning cores to VM in vSphere

    - by user114933
    Complete vSphere newbie here... Background: I have a 12-core machine with 24 VMs on it. Currently, all the processing power is shared between these VMs equally. The question: Can I configure one VM to be given two CPUs' worth of processing no matter what's happening on the other machines? My research: I tried two things in vSphere... I set the reservation and limit on one VM to the equivalent of two cores. To test whether my objective was being reached, I measured the time it took to gzip a file when the other VMs were running nothing, and again when the other VMs were running CPU-intensive operations. I expected the time to gzip the file to be the same, because this VM gets priority for some processing. Unfortunately, the time taken to gzip the file when the other VMs were running something was significantly more than when they were not running anything. I also tried setting the Hyperthreaded Core Sharing mode to Internal, hoping that this would mean that my VM would get at least an entire core to itself. That did not work either. Thanks in advance!
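
    For what it's worth, the gzip benchmark is more repeatable if disk caching is taken out of the picture first; a sketch of the measurement (hypothetical file path):

        sync                                    # flush dirty pages before measuring
        time gzip -c /tmp/testfile > /dev/null  # CPU-bound once the file is in the page cache

    Running it several times and discarding the first (cold-cache) run makes the CPU-contention effect easier to separate from disk noise.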

    Read the article

  • Apache Server with memcache, varnish and php slow request times

    - by coolestdude1
    My issue is that these servers are taking rather long per request, about 2 seconds on average just to serve files. When we had just one server doing everything, it was noticeably faster, even with the same web apps (Drupal 6 and Drupal 7). I want to get this number down to a reasonable level, so I need some help getting to the bottom of why the request times are so slow. This can cause the webapp to hang on POST or PUT and generally leads to a bad user experience on my sites. PS: I am more of a server newbie, so this has confounded me for quite some time. The domains: collabornation.net and nptrainingworks.com (they run off the same two webservers using vhost configs). The gear: two Rackspace 4 GB servers running CentOS 6.2 Final. They have a mounted file system (Gluster) that is used to keep files the same on both machines. They are behind a Rackspace load balancer running round robin. MySQL is accessed using php-pdo and php-mysql; as such, MySQL runs on another instance, which also runs memcache and phpMyAdmin. Apache version 2.2.15-15.el6.centos.1 (httpd.x86_64), Varnish version 3.0.2-1.el5 (varnish.x86_64), PHP version 5.3.14-1.el6.remi (php.x86_64). Configs linked below: Apache conf, vhost conf, Varnish backends, Varnish defaults, Varnish ACL, PHP INI. Again, I need some help - much appreciated!
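
    A quick way to see where the two seconds go is to time each hop separately; a sketch with curl (assumptions: Varnish listens on port 80 and Apache on 8080 - adjust to the actual ports in the configs):

        # full path through the load balancer and Varnish:
        curl -o /dev/null -s -w "connect=%{time_connect} ttfb=%{time_starttransfer} total=%{time_total}\n" http://collabornation.net/
        # straight at Apache on one node, bypassing Varnish:
        curl -o /dev/null -s -w "connect=%{time_connect} ttfb=%{time_starttransfer} total=%{time_total}\n" http://localhost:8080/

    If the direct-to-Apache number is also slow, the usual suspect in this layout is PHP reading code or files from the Gluster mount, since every stat() there is a network round trip.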

    Read the article

  • Screen randomly goes blue/black/white

    - by FubsyGamer
    Problem: Randomly, while using my computer, the monitor goes dark grey/almost black, or it goes white with faint grey vertical lines, or it goes blue with black vertical lines. It's as if the computer powers off. People tell me I sign out of Skype, Spotify stops playing when it happens, etc. When I look at the tower, it doesn't seem like it's off at all. Nothing changes: fans are spinning, lights are on, etc. If you were only looking at the tower, you'd never know there was a problem. The only way I can get it to come back up is to push and hold the power button and turn it off, then back on. This never happens while I'm playing video games; I've done 5-6 hour sessions of League of Legends and it doesn't do anything. When I'm just browsing the web, reading email, checking Reddit, etc., it happens all the time. It can happen multiple times in a session; it usually takes only about 5 minutes from the time I start browsing to when the computer crashes. This started happening after I moved to a new apartment (this has to be relevant somehow; it was not happening where I lived before). There is nothing in the crash logs or event logs.

    System specs: i5 2500k CPU, AMD Radeon 6800 GPU, Gigabyte Z68A-D3H-B3 motherboard, WD VelociRaptor 1 TB HDD. Screenshots: Device Manager, About screen.

    Things I have tried: I was getting a WMI error in my event logs, but I fixed it using Microsoft's fix, KB 2545227. I was using Windows 8; I wiped the HDD and downgraded to Windows 7 64-bit. I took out the video card and used a can of air to totally clean out the video card, all fans, and the inside of the computer in general; I made sure all of the video card pins were fine, then reconnected it. I tried to update my motherboard BIOS, but anything I downloaded from Gigabyte was only for 32-bit machines, not 64, and I don't even know how to tell what my motherboard BIOS is at right now. I am using a power strip, and anything else connected to it works just fine. If I re-seat the monitor cable while this is happening, nothing changes. Please, help me. I've been battling this for several weeks now, and it's so frustrating it makes me not even want to use the computer.

    Read the article

  • Connecting SVN from Remote Server

    - by Ashish
    I have hosted my repository at Assembla and it works fine. Now I want to write a script that can automate the build process: (1) take the code from the Assembla repository, (2) make a dump and copy it onto my web server. What I have researched on the net suggests using commands like

        svn co svn+ssh://[email protected]/home/svn/test

    I believe I need to open a shell on my server and type these commands, but shell access has been disabled by my server admin. I tried to run the same from PHP using exec; the admin has disabled that too. (I am using shared hosting and want to do an automated deployment using these simple steps; I don't want to bring my local system into this process.) Now I'm not sure, even if I get shell access to my server opened up, whether commands like svn will work there, as I don't have SVN installed on my server (it's installed at Assembla). Kindly let me know if any more explanation is required, or if I'm going down the wrong track. I'm a newbie, so please be descriptive in answering :) Thanks in advance, Ace
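
    Since Assembla also serves repositories over plain https, a cron-driven sketch that avoids ssh entirely (assumptions: an svn client is available on the host, and the URL below is a hypothetical stand-in for the real repo):

        # export the latest trunk straight into the webroot on a schedule
        svn export --force https://subversion.assembla.com/svn/test/trunk /home/user/public_html/app

    Note the two prerequisites, though: an svn client binary has to exist on the hosting machine, and with both shell access and exec disabled there is no way to run it from a script there, so on this particular shared host the deployment step may have to be driven from some machine that can run svn.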

    Read the article

  • Can I restore one of my user's profiles in Vista?

    - by Rod
    My youngest daughter uses my 4-year-old laptop, which has Windows Vista installed. Somehow she got a Trojan (Vista Internet Security). (I'd love to know how that happened, seeing as she is a standard user and I have VIPRE as my AV.) Anyway, I ran a deep anti-virus scan using VIPRE, which identified it, and I decided to delete everything it identified. Now she cannot use anything in her profile. If she tries to bring up the browser, a dialog box asking which program to use comes up over and over again. If I try to run any program at all, it doesn't know what to do. For example, it is totally lost trying to run the command line. If I bring up Windows Explorer and navigate to Windows\System32 and try to run the command line, or anything at all from there, it goes "Huh?" What in heck has happened?? Is it possible to fix this, and if so, how? As an aside, I can log into my account (my account on that machine is an administrator) and it works fine.

    Read the article

  • ubuntu's average load never below "0.00 0.01 0.05"

    - by Karma Fusebox
    I have several Ubuntu 12.04 VMs running on an Ubuntu 12.04 KVM host. Those of the virtual machines that are totally idle, with no services (except syslog and the other "small" standard stuff of a fresh installation), show a constant load of "0.00 0.01 0.05" in top/htop as the 1/5/15-minute averages. When there are "real" applications running, the load averages behave perfectly normally, but they never fall below the mentioned values. While this doesn't affect performance at all and could easily be ignored, it screws up the monitoring graphs in a very annoying way: (Notice how load15 behaves nicely, dipping below 0.05 for a short time, in the right half of the pic.) Unfortunately I don't know what diagnostic outputs might be helpful for you, so here's some default stuff:

        # top
        top - 16:31:01 up 1:05, 1 user, load average: 0.00, 0.01, 0.05
        Tasks: 62 total, 1 running, 61 sleeping, 0 stopped, 0 zombie
        Cpu(s): 0.2%us, 0.2%sy, 0.0%ni, 99.2%id, 0.5%wa, 0.0%hi, 0.0%si, 0.0%st
        Mem: 1019464k total, 73452k used, 946012k free, 6140k buffers
        Swap: 0k total, 0k used, 0k free, 22504k cached

        # free -m
                       total     used     free   shared  buffers   cached
        Mem:             995       72      923        0        6       21
        -/+ buffers/cache:         43      951
        Swap:              0        0        0

        # iostat -x /dev/vda
        Linux 3.2.0-32-virtual (vm3)   11/15/2012   _x86_64_   (2 CPU)
        avg-cpu:  %user  %nice  %system  %iowait  %steal  %idle
                   0.25   0.00     0.65     0.20    0.24  98.66
        Device:  rrqm/s  wrqm/s   r/s   w/s  rkB/s  wkB/s  avgrq-sz  avgqu-sz  await  r_await  w_await  svctm  %util
        vda        0.14    0.12  0.51  0.22   6.74   1.46     22.50      0.02  23.26    20.64    29.30   7.63   0.56

    Need something else? Has anyone ever seen this behavior? Might this be a bug in KVM/Ubuntu/kernel 3.x in the end? Thanks a lot!

    Read the article

  • SSH agent forwarding on debian squeeze

    - by nfvindaloo
    I'm trying to set up SSH agent forwarding like this: osx -> debianA -> debianB. I can connect to debianA fine, using ssh -A, and it has the following env vars when I do:

        SSH_AGENT_PID=1543
        SSH_AUTH_SOCK=/tmp/ssh-giwdYY1542/agent.1542
        SSH_CLIENT='92.233.199.x 38954 22'
        SSH_CONNECTION='92.233.199.x 38954 108.171.179.x 22'
        SSH_TTY=/dev/pts/0

    When I try to connect to debianB, the agent is not used! The ssh -v output ends with:

        debug1: SSH2_MSG_SERVICE_REQUEST sent
        debug1: SSH2_MSG_SERVICE_ACCEPT received
        debug1: Authentications that can continue: publickey,password
        debug1: Next authentication method: publickey
        debug1: Trying private key: /home/nic/.ssh/id_rsa
        debug1: Trying private key: /home/nic/.ssh/id_dsa
        debug1: Next authentication method: password

    Then I'm asked for a password. I have not set any ForwardAgent no directives in ssh_config, and I don't have a .ssh/config at all. sshd_config does not have AllowAgentForwarding in it. I have also tried setting all of these directives to yes. debianA and debianB have identical ssh_config and sshd_config (verified with diff), so the really weird thing is that connecting osx -> debianB -> debianA works fine!! I'm totally out of ideas! Has anyone come across this before? Cheers! NFV
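
    A hedged check that narrows this down: on debianA, ask the agent socket directly what it holds (hypothetical user/hostname taken from the debug output):

        ssh -A nic@debianA
        ssh-add -l      # lists the keys the agent socket offers, or errors out

    If ssh-add -l errors or lists nothing, the forwarding hop itself is broken. One detail worth noting: a genuinely forwarded agent normally sets only SSH_AUTH_SOCK, so the SSH_AGENT_PID in the environment above suggests a local agent started on debianA may be shadowing the forwarded socket. If ssh-add -l does list the key, the problem is on debianB's side instead, for example the public key missing from ~/.ssh/authorized_keys there.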

    Read the article

  • Simple, centralized user management on a small LAN - NIS or LDAP?

    - by einpoklum
    I'm setting up a small LAN for my team. It will, for all intents and purposes, not be connected to any external networks. I would like it to have centralized control of user accounts (at least, I think I'd like that; I'm also considering using Puppet, so theoretically I could just push /etc/passwd changes, or something). The number of machines is fixed, but not very small. Mostly they're 'attached' to a single user, but sometimes people work remotely on someone else's box, and there are a couple of servers. I've read this question, but my scenario is much simpler (even simpler than in this question) and I'd like to do something (relatively) quick, with not much hassle, but not a dirty, totally insecure hack. Is NIS relevant for my scenario? If not, what's the most hassle-free way to set up LDAP (or LDAP+Kerberos) to achieve the same? Notes: I have no experience with setting up either NIS or LDAP. We use Debian-flavored Linux distributions, mainly Kubuntu 12.04 (not my choice, but that's the way it is).

    Read the article

  • Ubuntu Launcher Items Don't Have Correct Environment Vars under NX

    - by ivarley
    I've got an environment variable issue I'm having trouble resolving. I'm running Ubuntu (Karmic, 9.10) and coming in via NX (NoMachine) on a Mac. I've added several environment variables in my .bashrc file, e.g.:

        export JAVA_HOME=$HOME/dev/tools/Linux/jdk/jdk1.6.0_16/

    Sitting at the machine, this environment variable is available on the command line, as well as for apps I launch from the Main Menu. Coming in over NX, however, the environment variable shows up correctly on the command line, but NOT when I launch things via the launcher. As an example, I created a simple shell script called testpath in my home folder:

        #!/bin/sh
        echo $PATH && sleep 5

    I gave it execute privileges:

        chmod +x testpath

    And then I created a launcher item in my Main Menu that simply runs:

        ./testpath

    When I'm sitting at the computer, this launcher runs and shows all the stuff I put into the $PATH variable in my .bashrc file (e.g. $JAVA_HOME, etc.). But when I come in over NX, it shows a totally different value for the $PATH variable, despite the fact that if I launch a terminal window (still in NX) and echo $PATH, it shows up correctly. I assume this has to do with which files get loaded by the windowing system over NX, and that it's some other file. But I have no idea how to fix it. For the record, I also have a .profile file with the following in it:

        # if running bash
        if [ -n "$BASH_VERSION" ]; then
            # include .bashrc if it exists
            if [ -f "$HOME/.bashrc" ]; then
                . "$HOME/.bashrc"
            fi
        fi
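
    A sketch of a workaround, assuming the NX session starts through something that reads ~/.profile (or ~/.xsessionrc) but never runs an interactive bash: put the exports in ~/.profile itself, outside the bash-only guard shown above:

        # in ~/.profile, before the BASH_VERSION block:
        export JAVA_HOME=$HOME/dev/tools/Linux/jdk/jdk1.6.0_16/
        export PATH=$JAVA_HOME/bin:$PATH

    Launcher items inherit their environment from the session process, not from an interactive shell, so anything that must be visible to them has to be set in a file the session startup actually reads rather than in .bashrc.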

    Read the article

  • Ubuntu 10.04 on virtualbox gives error: Target filesystem doesn't have /sbin/init \ No init found. Try passing init= bootarg

    - by Philip
    I'm a Linux newbie, and the only reason I have it installed is so I can stop having Windows incompatibility issues with Ruby on Rails. Having said that, it sure has been nice, and much faster, and I don't think I'll be doing any Winrails stuff anytime soon. So I created a virtual machine using VirtualBox and have had Ubuntu on it for the last 3 weeks. Recently Ubuntu asked if it could update a few things; I clicked 'ok'. Now it won't boot and I get this error:

        mount: mounting /dev on /root/dev failed: No such file or directory
        mount: mounting /sys on /root/sys failed: No such file or directory
        ...
        Target filesystem doesn't have /sbin/init.
        No init found. Try passing init= bootarg
        BusyBox v1.13.3...
        (initramfs) _

    So I cruised the forums and there are a variety of solutions, but they all have to do with booting from the live CD (which I assume is the ISO image I used to install Ubuntu in the first place). But when I boot from that CD, it just hangs on the Ubuntu screen, and the little dots keep cycling white to red; it hung there for an hour, so I think it was stuck. Not sure what I can do; can I do anything from the BusyBox shell (or whatever that is) to fix things? The thing is, it took about 10 hours to get everything the way I needed with all the gems and whatnot. And I didn't really write down what I tweaked, and I'm middle-aged, so all that information has leaked out by now and I don't want to do it again. I'd really like to repair my existing install. One question you might have is: is there something wrong with the ISO? I don't think so, because I made a new virtual machine and used that same ISO file to install a fresh Ubuntu. Any help much appreciated. Phil
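
    From the initramfs prompt itself, a hedged first step is to mount the root filesystem by hand and check whether /sbin/init really is missing or the filesystem simply failed to mount (assumption: the root partition is /dev/sda1; ls /dev/sd* shows the candidates):

        (initramfs) mkdir -p /root
        (initramfs) mount /dev/sda1 /root
        (initramfs) ls /root/sbin/init
        (initramfs) exit     # if init now exists, the boot may simply continue

    If /sbin/init is there once the disk mounts, the usual culprit is an interrupted update; booting any Ubuntu desktop ISO (a freshly downloaded one, given the hang), chrooting into /root's filesystem, and running dpkg --configure -a tends to finish the half-applied upgrade.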

    Read the article

  • Is there a way to do a sector level copy/clone from one hard drive to another?

    - by irrational John
    Without going into distracting details, I'm attempting to duplicate the contents of the 500GB drive in my MacBook to another 500GB drive. But this is turning out to be an unexpected hassle, because the drive contains both the OS X partition and an NTFS partition with Win 7 via Apple's Boot Camp. With the exception of Clonezilla, the tools I have looked at so far all have some limitation. The Mac tools don't want to deal with the NTFS partition. The Windows tools are totally clueless about the HFS+ partition and/or the hybrid MBR/GPT Boot Camp partitioning. Clonezilla looked like it would do what I want, but apparently I can't figure out how to use it. After doing what I thought was a sector-to-sector copy, I found that only the NTFS partition had been migrated; the others were apparently empty. (And frankly, I'm not positive Clonezilla migrated the partition table correctly either.) Note: It takes over 2 hours using SATA to read/write all sectors on these drives, so I'm not up for using trial and error to narrow in on the right combination of Clonezilla options to use. I'm beginning to think that maybe the answer is to boot Linux (probably Ubuntu) and then use some ancient BSD command. Trouble is, I don't know what command (or what parameters) to use in order to do a sector-level copy from one drive to another. As far as I know the drives have the same number of sectors, so this should be trivial. Sigh.
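
    The "ancient command" in question is dd, which copies raw sectors and therefore carries the hybrid MBR/GPT and every partition along with it; a sketch (assumptions: the source shows up as /dev/sda and the destination as /dev/sdb under the Ubuntu live session - triple-check with sudo fdisk -l first, because reversing them destroys the source):

        sudo dd if=/dev/sda of=/dev/sdb bs=1M conv=noerror,sync

    Here conv=noerror,sync keeps the copy going past read errors, padding unreadable blocks with zeros. For a drive with suspected bad sectors, GNU ddrescue is the more careful tool, but for a healthy same-size drive a plain dd produces an exact sector-level clone.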

    Read the article
