Search Results

Search found 52182 results on 2088 pages for 'something for the weekend'.

  • Borked Ubuntu uninstall - need to delete boot partition (I think)

    - by Max Williams
    I just got a new PC laptop with Windows 7 and wanted to install Ubuntu on it. That part went fine: I downloaded the installer, burned it to DVD, then booted off the DVD and installed. Then I realised that the new Ubuntu 12.04 uses the Unity desktop, which I immediately disliked and, after some research, began to hate. So I decided (after a little googling) to install Linux Mint instead. Thinking I'd better start from scratch, I went to the Windows 7 disk manager and wiped the Ubuntu partition that had been created. Now, when I start up, I get an error from GRUB, the Ubuntu boot manager:

        error: unknown filesystem
        grub rescue> _

    with a blinking cursor where I can enter commands. I suspect that I've deleted the main Ubuntu partition but NOT a separate boot partition, or something like that. Can anyone tell me how I can rescue or unbork this? I'd like to either a) get back to my original Windows-only setup, OR b) install Linux Mint off the DVD I have, into the empty partition, fixing any GRUB confusion in the process. Any suggestions? Thanks, Max. BTW, please don't answer just to tell me to stick with 12.04 or install a different distro. I definitely want Mint and just want to fix this mess - thanks :)
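
    For route a), the standard fix is the Windows 7 install/repair disc rather than anything typed at the grub rescue prompt: boot it, choose "Repair your computer", open a command prompt, and rewrite the boot code (a sketch of the usual commands below). For route b), installing Mint into the empty space will put a fresh, working GRUB on the disk anyway, so either path gets rid of the rescue prompt.

        rem from the Windows 7 recovery command prompt: rewrite the MBR, removing GRUB
        bootrec /fixmbr
        rem repair the boot sector of the system partition
        bootrec /fixboot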

  • Goal setting/tracking packages for software projects

    - by Avi
    I'm a developer working by myself, and I'm looking for a computerized tool to manage my goals and activities. I own Microsoft Project, but I don't like it: I've started many "projects" in it but could never keep using it - too complex and heavyweight for me. I use MS Outlook tasks, but they are not what I need either: no planning capability, and the tracking isn't nice. I'm using the Pomodoro technique and I like it, but I'm looking for something more comprehensive, with better computerized support - something that would let me define goals with dependencies and time estimates, keep daily prioritized lists, etc. So, I'm looking for a solution. One I've found is GoalPro, but I'm uneasy because I could not find a cross-product "top ten"-style review of it. Are you using any goal-setting package such as GoalPro? Which one? Does it help? Pros and cons?

  • Mesh Networked servers via vpn

    - by microspino
    I have a design idea and would like some advice from SF about it. I have 5 customers with small real-estate databases. I've built a desktop app for them, and now they would like to merge their databases to share their data. I don't want to centralize everything in one place, nor do I want to do server maintenance. They have also told me that all of them have small servers and maintenance people available in their offices. Although everything about this seems suited to a web application, I had the idea to experiment with something new: each customer's small server would be connected to the others in a sort of mesh network, over VPNs, with no single point of failure. If one of the servers went down, the customers could still reach their databases from one of the other meshed servers instead of the local one that is down. During normal operation, all the servers sync their databases with the others through the VPNs. I can accept a half-day window of unsynched data; in other words, since I don't need real-time synchronization, the servers don't always have to be in sync. I can migrate my data over to non-SQL technologies like CouchDB or Redis or whatever you suggest. As you can see, I don't have a lot of constraints, and although I could go with a web application, I would like to delegate and decentralize support, data privacy and management to my customers' offices as much as I can. Is this a crazy idea? Do you know if something similar exists? Which technology would you suggest?
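
    If the CouchDB route is taken, its built-in replicator already implements exactly this kind of masterless, eventually-consistent mesh. A sketch (host names and the database name are placeholders), registering one direction of a continuous sync; repeat with source and target swapped, for each pair of servers:

        curl -X POST http://serverA:5984/_replicate \
             -H 'Content-Type: application/json' \
             -d '{"source": "realestate", "target": "http://serverB:5984/realestate", "continuous": true}'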

  • Port knocking via SSH tunnels

    - by j0ker
    I have a server running on my university's internal network. The only service is an SSH daemon, secured by port knocking with knockd. It works fine if I connect from within the internal network. But since the server has no external IP, I have to tunnel into the internal network every time I want to access it from outside. And since a tunnel only carries a single port, I cannot do the port knocking as easily as from an internal client; in fact, I can't get it to work at all. What I'm trying is opening tunnels for all the different ports that have to be knocked, then sending TCP SYN packets into the tunnels. But that doesn't work even for a single port: if I establish the tunnel on the first port in the knock sequence and send a packet through it, it never reaches the server. There is no entry in knockd's log file, while there should be something like "123.45.67.89: openSSH: Stage 1" (as shown for internal knocks). So I guess the problem isn't in my knocking script but is more general. Are there any known problems with what I'm trying to do? Is it even possible, or am I missing something? Thanks in advance!
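
    A likely explanation for the silent log: an SSH -L forward only relays established TCP connections, so a lone SYN aimed at the local forwarded port never crosses the tunnel at all - and even a completed connection would arrive at the server with the gateway's source address, breaking the single-source-IP sequence knockd expects. A workaround sketch, assuming some internal host 'gw' you can SSH to (all names here are placeholders):

        # run the knock sequence from inside the network, then jump through gw
        ssh gw 'for p in 7000 8000 9000; do nc -z -w 1 server $p; done'
        ssh -o ProxyCommand='ssh gw -W %h:%p' user@server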

  • Simple electric DC question. Current consumption

    - by Bobb
    Suppose you have a DC power supply and a consumer connected to it (e.g. a computer PSU and a hard drive). Suppose the PSU that was supplied with the consumer outputs 5V at 1A, so I assume the consumer should not draw more than 1A. Now suppose the original PSU is broken and I want to replace it with one I have that outputs 5V at 10A. My guess is that current draw is something that depends on the consumer: if the consumer normally draws 1A, it will not draw more than that even when connected to a 10A PSU. In other words, am I right in assuming that the consumer will not burn out when connected to a power supply with a higher current rating? P.S. My understanding is that voltage is independent of the consumer: if you feed it a higher voltage, it will burn (voltage is imposed by the PSU on the consumer). Current, however, works the other way around: the consumer draws as much current as it needs, not as much as the PSU can provide (given, of course, that the PSU's maximum current is greater than what the consumer needs).
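
    A worked check of that reasoning (treating the drive as a fixed load; the 5 ohm figure is purely illustrative, not the drive's real impedance):

        I = V / R = 5 V / 5 ohm = 1 A      the load sets the current draw
        P = V x I = 5 V x 1 A  = 5 W       the load dissipates 5 W
        a 5 V 10 A supply offers up to 50 W, but the same load still draws only 1 A / 5 W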

  • Issue with kernel boot [OVH SERVER]

    - by Conner Stephen McCabe
    I'm trying to install the OpenVZ kernel on CentOS 6.3. Yes, the kernel is installed - I can see it in the /boot folder; yes, it is RHEL6-based; and yes, it is all up to date (I checked with yum update). My issue is that when I reboot the server with that kernel set as the default, it doesn't load. Below are copies of my grub.conf and menu.lst files.

    grub.conf:

        default=0
        timeout=5
        title vzkernel (2.6.32-042stab057.1)
                root (hd0,0)
                kernel /boot/vmlinuz-2.6.32-042stab057.1 ro root=/dev/sda1
                initrd /initramfs-2.6.32-042stab057.1.img
        title linux centos6_64
                kernel /boot/bzImage-3.2.13-xxxx-grs-ipv6-64 root=/dev/sda1 ro
                root (hd0,0)

    menu.lst:

        # grub.conf generated by anaconda
        #
        # Note that you do not have to rerun grub after making changes to this file
        # NOTICE:  You have a /boot partition.  This means that
        #          all kernel and initrd paths are relative to /boot/, eg.
        #          root (hd0,0)
        #          kernel /vmlinuz-version ro root=/dev/mapper/vg_stock-lv_root
        #          initrd /initrd-[generic-]version.img
        #boot=/dev/sda
        default=0
        timeout=5
        splashimage=(hd0,0)/grub/splash.xpm.gz
        hiddenmenu
        title Linux OpenVZ (vmlinuz-2.6.32-042stab057.1)
                root (hd0,0)
                kernel /boot/vmlinuz-2.6.32-042stab057.1 ro root=/dev/mapper/vg_stock-lv_root rd_LVM_LV=vg_stock/lv_root rd_LVM_LV=vg_stock/lv_swap rd_NO_LUKS rd_NO_MD rd_NO_DM LANG=en_US.UTF-8 SYSFONT=l$
                initrd /initramfs-2.6.32-042stab057.1.img
        title CentOS (2.6.32-71.el6.x86_64)
                root (hd0,0)
                kernel /boot/bzImage-3.2.13-xxxx-grs-ipv6-64 ro root=/dev/mapper/vg_stock-lv_root rd_LVM_LV=vg_stock/lv_root rd_LVM_LV=vg_stock/lv_swap rd_NO_LUKS rd_NO_MD rd_NO_DM LANG=en_US.UTF-8 SYSFO$
                initrd /initramfs-2.6.32-71.el6.x86_64.img
        # dummy text

    Somebody mentioned something about OVH having added a script that changes the kernel settings, and suggested that we either remove that script or reinstall using a VNC, but we don't know how to go about doing either of those. It would be great if you guys could help. Thanks in advance.
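
    One quick sanity check worth running first (this is the usual RHEL/CentOS convention - verify it on your box): GRUB legacy reads only one of those two files, with the other normally being a symlink to it, so if grub.conf and menu.lst genuinely differ, one of them is being silently ignored.

        ls -l /etc/grub.conf /boot/grub/grub.conf /boot/grub/menu.lst
        # menu.lst is usually just a symlink to grub.conf; if yours are two
        # separate files with different contents, you may be editing the wrong one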

  • Managing rolling deployments in the cloud

    - by Josh Nankin
    Recently I've been experimenting with various cloud management tools - RightScale, Scalr, custom scripts - for managing a variety of servers, each hosting several roles (app, db, load balancer, job queues, etc.). The one thing I find lacking in most solutions is a way to do rolling deployments, i.e. running deployments sequentially across a number of servers with the same role. For instance, I don't want to rebuild all of my webservers at the same time, as that will almost certainly result in some downtime or 500s for my customers. I'd rather have one or two servers rebuild at a time, while the other servers are still available to handle requests. The other alternative is obviously to launch new servers that automatically update themselves on boot, but this isn't as cost-effective and most likely takes longer (it's faster to build on an existing server than to launch a new server and kill old ones). We've all heard of the big companies (Twilio, Etsy, etc.) having the famous "push to build" button, but it seems they all have custom implementations of it. I'm not talking about a simple ssh loop, clusterssh, or even MCollective - I'd prefer something with a nice simple interface that lets me specify something like a RightScript or a Scalr script to run on a set of servers with a specific role, and have it build them sequentially. Does anyone know of easy ways to get this done, or is this a candidate for a new open-source project?
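
    For reference, the sequential pattern itself is tiny - the hard part is the nice interface around it. A bash sketch (the hosts file, deploy script and health URL are all hypothetical placeholders):

        # deploy one host at a time; stop at the first failure
        for host in $(cat webservers.txt); do
            ssh "$host" 'sudo /usr/local/bin/deploy.sh' || { echo "deploy failed on $host"; exit 1; }
            curl -fs "http://$host/healthz" > /dev/null || { echo "$host unhealthy"; exit 1; }
        done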

  • Why is Mac OS X Lion losing login/network credentials?

    - by Larry Kyrala
    (Moved from Stack Overflow.) Symptoms: at work we have OS X 10.7.3 installed, and every once in a while I see the following behaviors: 1) if the screen is locked, multiple tries of the same user/pass are not accepted; 2) if the screen is unlocked, opening a new bash terminal may yield prompts such as:

        I have no name$

    or:

        lkyrala$ ssh lkyrala@ah-lkyrala2u
        You don't exist, go away!

    Even when our Macs are working normally, everyone here has to log in twice. The first attempt after boot always fails, but the second (with the same password, not changing anything, just pressing enter again) succeeds. Weird? Workarounds: there are some workarounds that resolve the immediate problem but don't prevent it from happening again: a) wait (maybe an hour or two) and the problems sometimes go away by themselves; b) kill opendirectoryd and let it restart (from https://discussions.apple.com/thread/3663559); c) hold the power button to reset the computer. Discussion: the evidence above points me to something screwy with opendirectory and login credentials. Some other people report these login problems too, but it's hard to determine where the actual problem is (the Mac, or the network environment?). I should add that most of the network is Windows machines, but we have quite a few Macs and Linux machines as well. I'm not sure of the details of how the network auth is mapped between the various domains; all I know is that our network credentials work for Windows domains as well as Mac and Linux logins - so something is connecting the separate systems, or they use the same global auth system.
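
    Workaround b) as a one-liner, for anyone hitting the same thing (launchd restarts the daemon automatically after the kill):

        sudo killall opendirectoryd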

  • TEMP environment variable occasionally set incorrectly

    - by Roger Lipscombe
    Occasionally, I find my TEMP and TMP environment variables set to C:\Windows\TEMP. They should be set to %USERPROFILE%\AppData\Local\Temp, and are configured correctly in System Properties. This manifests itself as error messages like the following:

        ---> System.InvalidOperationException: Unable to generate a temporary class (result=1).
        error CS2001: Source file 'C:\Windows\TEMP\gb_pz65v.0.cs' could not be found
        error CS2008: No inputs specified

    ...which occur in various .NET applications (in particular Visual Studio 2010 or SQL Server Management Studio). Alternatively, SQL Server Management Studio will report:

        Value cannot be null. Parameter name: viewInfo (Microsoft.SqlServer.Management.SqlStudio.Explorer)

    If I run PowerShell elevated, then $env:TEMP is set correctly. If I run PowerShell non-elevated, it's not. I believe it should be set correctly in both cases; if not, it's the wrong way round. The same is true for CMD.EXE. Rebooting fixes it, temporarily, until something breaks it again. Presumably something loaded into Explorer.exe is messing with its environment variables, but what? The values in the registry are correct, even while this is happening:

        HKLM\SYSTEM\CurrentControlSet\Control\Session Manager\Environment has TEMP = %SYSTEMROOT%\Temp
        HKCU\Environment has TEMP = %USERPROFILE%\AppData\Local\Temp

    By setting a breakpoint on shell32!RegenerateUserEnvironment, I'm able to trap it when it happens, but I still don't know why explorer.exe is reading the wrong environment variables. I can reproduce it consistently by broadcasting a WM_SETTINGCHANGE message (I wrote a one-line C++ program to do this). Watching the activity in Process Monitor shows that explorer.exe doesn't even look at HKCU\Environment. What is going on?
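
    For anyone trying to reproduce this, the three views worth comparing side by side (PowerShell, using the registry paths quoted above):

        Get-ItemProperty HKCU:\Environment -Name TEMP
        Get-ItemProperty 'HKLM:\SYSTEM\CurrentControlSet\Control\Session Manager\Environment' -Name TEMP
        $env:TEMP    # what this particular process actually inherited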

  • Troubleshooting an overheating CPU

    - by Jeff Fry
    My father and I just recently put together a new PC; specs below. From the very beginning, on boot it will often complain that the CPU is too hot. If I sit in the BIOS and watch the CPU, it'll drop back down from red to blue (<72C), at which point I've tended to just boot into Windows... and haven't had any problems. In fact, I've played a couple of hours straight of Skyrim at max settings without any visible issues. That said, I've occasionally walked away and come back to find that it's crashed. Yesterday, it crashed (while idle) twice in 12 hours, which shifted the balance from busy-with-life to nervous-I'm-about-to-melt-something. I just installed Core Temp, which shows my 4 cores fluctuating between 70-98C. I'm guessing at this point that the CPU fan may be incorrectly installed or defective. My first thought is to either (a) add water cooling (which the case supports) and/or (b) replace the CPU fan with an after-market one. That said, I'm very open to suggestions. A note: while I certainly don't want to burn money here, I have a baby coming any day now and am still unpacking from a recent move, so if I have a choice between an option that costs money and another that takes a while... I'll happily spend a bit extra. Side question: should I be nervous to even have this on at this point? Let me know if there's something useful I could add to my report. Otherwise, I'm looking forward to your suggestions! Thanks.

    CPU: Intel i7-2600 w/ stock fan
    Other HW: ASUS P8Z68-V Pro motherboard; 64G SSD boot drive; 4 older SATA HDs; GIGABYTE ATI Radeon HD6950 1 GB DDR5; 8G Kingston T1 Series RAM; Corsair 650W Gold Certified power supply; Antec P280 case

  • Slow manipulation of netfilter rules

    - by Ole Martin Eide
    I have a script maintaining GRE tunnels and firewall rules using the "ip" and "iptables" tools. Setting up hundreds of tunnels and addresses per interface runs just fine, taking less than 0.1 seconds per interface. However, when I get around to the firewall rules, everything slows down, spending 0.5 seconds per insertion. Why is it running so slow? What can I do to improve the speed? It seems like I could try ipset instead, but I really feel there is something wrong with the kernel or somewhere else. The interesting thing is that the first 10 rules run fast, and then it slows down.

        mybox(root) foo# iptables -V
        iptables v1.3.5
        mybox(root) foo# uname -a
        Linux foo 2.6.18-164.el5 #1 SMP Tue Aug 18 15:51:48 EDT 2009 x86_64 x86_64 x86_64 GNU/Linux
        mybox(root) foo# cat test.sh
        #!/bin/sh
        for n in {1..100}
        do
            /sbin/iptables -A OUTPUT -s ${n} -j ACCEPT
            /sbin/iptables -D OUTPUT -s ${n} -j ACCEPT
        done
        mybox(root) foo# time ./test.sh

        real    1m38.839s
        user    0m0.100s
        sys     1m38.724s

    Appreciate any help. Cheers!
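
    One commonly suggested workaround, sketched below: batch the changes through iptables-restore so the kernel loads the table once instead of once per rule. Worth measuring on this particular kernel before relying on it.

        { echo '*filter'
          for n in $(seq 1 100); do echo "-A OUTPUT -s $n -j ACCEPT"; done
          echo 'COMMIT'
        } | /sbin/iptables-restore --noflush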

  • Ubuntu server apt-get says "(-5 - No address associated with hostname)"

    - by Srini
    I have an Ubuntu 12.04 server. Running sudo apt-get update on it produces errors like this:

        W: Failed to fetch http://au.archive.ubuntu.com/ubuntu/dists/precise-backports/main/binary-i386/Packages  Something wicked happened resolving 'au.archive.ubuntu.com:http' (-5 - No address associated with hostname)

    I am able to ping all the other hosts on the network and also Google's DNS at 8.8.8.8, but I am unable to ping www.google.com. So I'm guessing something is wrong with my DNS setup, but I'm not sure what. I use a static IP, and my /etc/network/interfaces looks like this:

        auto eth0
        iface eth0 inet static
            address 192.168.1.50
            netmask 255.255.255.0
            network 192.168.1.0
            broadcast 192.168.0.255
            gateway 192.168.1.1
            #dns-nameserver 203.12.160.35 203.12.160.36
            #nameserver 203.12.160.35 203.12.160.36

    My /etc/resolv.conf and /etc/resolvconf/resolv.conf.d/base are both empty, and my /etc/resolvconf/resolv.conf.d/original says:

        nameserver 192.168.1.1

    Any help would be greatly appreciated. P.S. I've googled a bit, and the common resolution is to switch to DHCP, which I don't want to do since this is my home server. Thanks, Srini
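
    One thing that stands out: with resolvconf in play, the per-interface DNS directive is dns-nameservers (plural), placed under the iface stanza - the commented-out lines above use a keyword that resolvconf won't pick up. A sketch of the fix (assuming the router at 192.168.1.1 forwards DNS; swap in your ISP's servers otherwise):

        iface eth0 inet static
            ...
            dns-nameservers 192.168.1.1 8.8.8.8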

  • How do I prevent a tar pipe from causing swapping?

    - by Jeff Shattock
    I have a rather large filesystem that I need to transfer from one Linux server to another. I figured the best way to do this was via a tar/netcat pipe arrangement, something like:

        tar c . | pv | nc blah blah blah

    And it works great - the network stays fairly saturated, life is good. Until the source machine starts swapping. The files are on a RAID on the source system, so the read speed is much faster than the write speed on the other end. Since the destination machine hasn't picked up the data yet, the source machine needs to stick it somewhere, so into RAM it goes, until there is no more free RAM. It then starts swapping, which is horribly painful since that machine has its OS installed on a somewhat slow CF card. Both machines have 4GB of physical RAM and run 64-bit Ubuntu 9.04 server, with a GigE link between them. How do I prevent this swapping? Can I put a "speed limit" on the tar or netcat process so that the transfer speed doesn't overwhelm the write throughput on the destination end? The man pages didn't list anything, but there might be something I'm overlooking.
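
    pv can do exactly this with its -L flag. A sketch: cap the pipe somewhere below the destination's sustained write speed (30 MB/s here is an arbitrary example; the host and port are placeholders) so data never piles up in RAM on the source:

        tar c . | pv -L 30m | nc desthost 9999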

  • Choosing hosting for a custom ecommerce site: shared or dedicated, what to look for?

    - by spirytus
    Hi, I have (almost) finished developing a website for my client and now need to decide on hosting. Most of the site's users will be located in Australia, and so are my client and I. Now, I want to consider everything before deciding on a host, and a few questions come to mind: 1) I cannot afford the website being down, and all hosts say something like "99% uptime guaranteed". Should that alone be enough, or should I ask hosts for some stats? 2) Does it make any difference whether the servers and the whole hosting company are located in Australia or overseas? I've been hosting a few sites with JustHost.com on shared hosting (cheapest plan, servers in the US I believe) and never noticed any delays, but could that be an issue? I would prefer an Australian company so I can actually go to them and give them a piece of my mind if something goes wrong, but US servers seem cheaper. 3) Would shared hosting do? It's a custom-built PHP/MySQL ecommerce application, and I know there are security issues with sessions etc. on shared hosting. I will take precautions, of course, but could shared hosting be an issue? 4) Would dedicated be a worthy option, considering that my knowledge of servers is very limited? I need to run PHP/MySQL, preferably with unlimited bandwidth, as with my experience I cannot tell what amount of traffic would be sufficient. Please let me know if I haven't provided enough information to answer my questions; I'll gladly explain further. Thanks in advance for any answers :)

  • How to recover my invisible HD again?

    - by pattulus
    I've done this several times now, but this time something bad happened. What I did: I installed Windows 7 on a 32GB partition on the slot-2 HD in my Mac Pro. Windows 7 created a 105MB partition; I knew this beforehand, but what I didn't know was that this partition would end up on my slot-4 HD. My home folder, my private videos and some other stuff are on that 1TB drive. What I've found out so far: I'm currently logged in as another admin, since my OS partition and the two other HDs aren't harmed. Disk Utility only shows the 105MB NTFS partition on this 1TB volume; it isn't showing my old 1TB partition/ex-HD named "storehouse". Only the partition tab tells me that there is now 1TB of empty, unpartitioned free space. Data Rescue II shows the volume as it used to be, with its old name "storehouse". A quick scan and a thorough scan were both done in 1 second, which leads me to the conclusion that nothing was actually deleted (» hope!). Data Rescue doesn't even mention the damn "system reserved" partition. Drive Genius also shows the old partition and doesn't mention the new one, but looking at the info, it tells me under "content": FDisk_partition_scheme (instead of Apple_partition_scheme). Well, d'oh... TechTools doesn't show the volume at all, otherwise I might have been tempted to press rebuild/repair. What to do next? I think the best approach is to buy another 1TB HD and let DiskWarrior clone my old one to it... just to be on the safe side. But what is the best thing to do after this?
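
    Before buying anything, two read-only commands are safe to run and will show what the partition map actually says now (the disk number below is a placeholder; confirm it with the first command before running the second):

        diskutil list
        sudo gpt -r show /dev/disk2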

  • Running a batch file through a service

    - by wallz
    I'm trying to schedule a batch file to run through a third-party application, but the output file doesn't get created in the directory. If I run the .BAT file from the command line, it works and the file gets created; using the Windows scheduler also succeeds. Basically, the third-party software schedules the .BAT file, and it shows success within its own user interface. The difference between running from the command prompt and from the software is that the software uses its Windows service to launch the batch. The third-party software shows success since it was able to call the .BAT file, but it has no control over the other EXEs being called within the script. I am able to run a simple .BAT file in the third-party software, for example a copy command. The .BAT I'm having problems with calls a compiled EXE, which launches Excel to create a file in a location. The .BAT file calls something.exe, which then calls Excel.exe:

        C:\something.exe -o D:\filename.xlsm C:\filename.xlsm refresh_pivot

    Do you think it's a permissions issue? I used Process Monitor to check for any "Access Denied" errors, but everything seems to be working according to the trace. It worked on a non-64-bit OS; I'm currently using Windows Server 2008 64-bit.
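
    Excel automation from a service on 64-bit Windows is a known trouble spot. One widely reported workaround - an assumption for this setup, so test it first - is to create the Desktop folder that Office expects under the service profile:

        rem for 64-bit Windows; create whichever of the two applies, or both
        mkdir C:\Windows\SysWOW64\config\systemprofile\Desktop
        mkdir C:\Windows\System32\config\systemprofile\Desktop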

  • Disappeared graphics card

    - by lenovo user
    I have a Lenovo T520 with two graphics adapters, an NVIDIA Quadro and Intel integrated graphics, running an Ubuntu and Windows 7 dual boot. I can no longer find any trace of the Intel graphics. In my Linux boot:

        > lspci | grep VGA
        01:00.0 VGA compatible controller: nVidia Corporation GF106 [Quadro 2000M] (rev a1)

    In Windows, in Control Panel under Display - Advanced Settings, I only see the NVIDIA Quadro 2000M. In the BIOS there is no mention of the Intel graphics - nowhere I can find to try to turn it on or off. I thought I was going crazy, but then I found a post I made on Ask Ubuntu 3 months ago where I listed the output of lspci on this same machine:

        lspci | grep VGA
        00:02.0 VGA compatible controller: Intel Corporation 2nd Generation Core Processor Family Integrated Graphics Controller (rev 09)
        01:00.0 VGA compatible controller: nVidia Corporation GF106 [Quadro 2000M] (rev a1)

    What is going on? How could my Intel graphics have been disabled or turned off without my knowledge? I've been into the BIOS 3 times now, each time convinced I must have missed something the last time, but I always find nothing. Am I missing something there? Could a thief have opened my computer and stolen my graphics card?

  • LDAP groups not applying to filesystem permissions

    - by BeepDog
    The system is Arch Linux, and I'm using nss-pam-ldapd (0.8.13-4) to connect to LDAP. I've got my users and some groups in LDAP:

        [root@kain tmp]# getent group
        <local groups snipped>
        dkowis:*:10000:
        mp3s:*:15000:rkowis,dkowis
        music:*:15002:rkowis,dkowis
        video:*:15003:transmission,rkowis,dkowis,sickbeard
        software:*:15004:rkowis,dkowis
        pictures:*:15005:rkowis,dkowis
        budget:*:15006:rkowis,dkowis
        rkowis:*:10001:

    And I have some directories that are setgid video, so that the video group sticks, and g=rwx, so that members of the video group can write to them:

        [root@kain video]# ls -ld /srv/video
        drwxrwxr-x 8 root video 208 Oct 19 20:49 /srv/video

    However, members of that group, say dkowis, cannot write into that directory:

        [root@kain video]# groups dkowis
        mp3s music video software pictures dkowis

    (The total number of groups dkowis is in is around 7; I redacted a few here.)

        [dkowis@kain wat]$ cd /srv/video
        [dkowis@kain video]$ touch something
        touch: cannot touch 'something': Permission denied
        [dkowis@kain video]$ groups
        dkowis mp3s music video software pictures

    I'm at a loss as to why my groups show up in getent group but my filesystem permissions are not being respected. I've tried making a new directory in /tmp, setting its group permissions to rwx, and then trying to write a file in there: it doesn't work. The only time it does work is if I open it wide up, allowing o=rwx. That's obviously not what I want, and I'm not able to figure out what piece I'm missing. Thanks in advance.
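
    One detail worth ruling out: supplementary groups are fixed when a session starts, so a shell opened before the LDAP groups were visible keeps its old list even though getent already sees the new one. A quick comparison:

        id            # groups of the current shell's session
        id dkowis     # groups NSS/LDAP resolves right now
        # if the two differ, log out and back in (or restart nslcd) and retest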

  • VPN into multiple LAN Subnets

    - by Rain
    I need to figure out a way to allow access to two LAN subnets on a SonicWall NSA 220 through the built-in SonicWall GlobalVPN server. I've googled and tried everything I can think of, but nothing has worked. The SonicWall NSA management web interface is also very disorganized; I'm probably missing something simple or obvious. There are two networks, called Network A and Network B for simplicity, with two different subnets. A SonicWall NSA 220 is the router/firewall/DHCP server for Network A, which is plugged into the X2 port. Some other router is the router/firewall/DHCP server for Network B. Both of these networks need to be managed through a VPN connection. I set up the X3 interface on the SonicWall with a static IP in the Network B subnet and plugged it in. Network A and Network B should not be able to access each other, which appears to be the default configuration. I then configured and enabled VPN. The SonicWall currently has the X1 interface set up with a subnet of 192.168.1.0/24 and a DHCP server enabled, although it is not plugged in. When I VPN into the SonicWall, I get an IP address supplied by the DHCP server on the X1 interface, and I can access Network A remotely, although I do not have access to Network B. How can I allow VPN clients access to both Network A and Network B, while keeping devices on Network B from accessing Network A and vice versa? Is there some way to create a VPN-only subnet (something like 10.100.0.0/24) on the SonicWall that can access Network A and Network B without changing the current network configuration or letting devices on the two networks "see" each other? How would I go about setting this up? A diagram of the network (hopefully this helps):

        WAN1                                 WAN2
          |                                    |
        [ SonicWall NSA 220 ]-(X3)-----------[ Router 2 ]
          |                                    |
        (X2)                              10.1.1.0/24
        192.168.2.0/24

    Any help would be greatly appreciated!

  • Web based KVM management for Ubuntu

    - by Tim
    We've got a single Ubuntu 9.10 root server on which we want to run multiple KVM virtual machines. To administer these virtual machines I'd like a web-based KVM management tool, but I don't know which one to choose from the list of tools mentioned on linux-kvm.org. I've used virsh and virt-manager on my desktop, but would like a web interface for the server. I tested ConVirt on my desktop, but it failed to pick up KVM machines from virsh/virt-manager, and I could not get KVM virtual machine import to work (only Xen). oVirt looks good, but I can't find out if and how I can install it on Ubuntu 9.10. (And I'd really rather not waste another few days testing stuff that might not work in the end.) Can anyone recommend any good web-based KVM management tools that are easy to install on Ubuntu 9.10? I'm looking for something that will also allow me to run other services like Apache and PostgreSQL besides hosting virtual machines, so preferably fairly lightweight and with no dedicated-OS install. We don't need any professional clustering/migration features, just something that will let us create, start, inspect, administer and stop virtual machines from a web page. Best regards, Tim. Update: anyone have any suggestions? It's awfully quiet here...

  • Systemd Service Start With Dynamic Port Value From Docker

    - by Sheriffen
    Using CoreOS, Docker and systemd to manage my services, I want to properly perform service discovery. Since CoreOS ships etcd (a distributed key-value store), there is a very convenient way to do this: in systemd's ExecStartPost I can just insert the started service into etcd without problems. My use case needs something like this:

        ExecStartPost=/usr/bin/etcdctl set /services/myServiceName '{ \"host\": \"%H\", \"port\": 5555 }'

    which works like a charm. But this is where my idea popped up: Docker can randomly assign a port if I just run docker run -p 5555, which is awesome since I don't have to set it statically in the *.service file, and I could possibly run multiple instances on the same host. What if I could get the randomly assigned port and insert it instead of the static 5555? It turns out I can use the docker port command to get the port, and with some formatting we can extract just the port:

        $(echo $(/usr/bin/docker port my-container-name 5555) | cut -d':' -f2)

    which works if I set it (using bash) like this:

        /usr/bin/etcdctl set /services/myServiceName '{ \"host\": \"%H\", \"port\": '$(echo $(/usr/bin/docker port my-container-name 5555) | cut -d':' -f2)' }'

    But under systemd I just can't get it to work. This is the code I'm using:

        ExecStartPost=/usr/bin/etcdctl set /services/myServiceName '{ \"host\": \"%H\", \"port\": '$(echo $(/usr/bin/docker port my-container-name 5555) | cut -d':' -f2)'}'

    Somehow I'm getting something wrong, but it's hard to debug, since it works when typed into the terminal.
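
    The underlying issue is that systemd executes Exec lines itself, without a shell, so $(...) is never expanded. The usual workaround is to hand the line to an explicit shell; a sketch below (the quoting is indicative and may need adjusting, and note the doubled $$, which systemd collapses to a single literal $ before the shell sees it):

        ExecStartPost=/bin/sh -c '/usr/bin/etcdctl set /services/myServiceName "{ \"host\": \"%H\", \"port\": $$(/usr/bin/docker port my-container-name 5555 | cut -d: -f2) }"'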

  • I need to preserve a tape using Symantec Backup Exec. I'm having trouble doing so

    - by MrVimes
    Please forgive me if this is the wrong Stack Exchange site; please suggest which one I should post this to if it is. There's an automatic tape machine running in a remote location, with software (Symantec Backup Exec 11d). Recently, one of the servers being backed up had problems with its RAID controller, so one of its drives has become invisible. I need to preserve the last good backup of that drive, so I am trying to swap the tape holding the most recent backup of that drive with one of the scratch (blank) tapes present in the machine. I've tried the following: 1) associate the blank media with the media set in question (Wednesday); 2) for the existing media (the tape with the data I want to keep), click 'move to vault' and move it to the offline vault; 3) associate it with something other than 'Wednesday' (a media set called 'keep data infinitely...'); 4) run an inventory on that slot. The above steps, I'm led to believe, are supposed to put the fresh tape in the slot that held the tape I want to keep. But after the inventory (and refreshing the device tree), the slot just keeps showing up as containing the tape I want to keep. I am a complete newbie with this software. Can you tell me what I'm doing wrong, and/or how to achieve my goal? Edit: I just want to point out that I did try to get help directly from Symantec with this, but having jumped through countless hoops to create an account and a support ticket, my progress was halted by the final step requiring something called a 'technical contact ID', with no explanation of what it is or how to get one.

  • Dell Latitude XT touch screen issue

    - by Jake
    Yesterday I installed Windows 7 Ultimate on this machine, after having had the Enterprise version on it for a few months. The installation went smoothly and everything worked fine, except for finger detection on the screen (only the pen worked) and the bottom screen buttons for rotating and so on. So I started to install all the missing drivers Dell recommends for this model on their site, but when I tried to install the N-trig digitizer driver, the installation reported failure, and since then touch has stopped working completely! I tried System Restore, but it didn't help, so I went on and formatted the hard disk completely once again and reinstalled Windows, but that didn't work either. I tried to install the N-trig driver again, but it raised a fatal error and said that no device was detected. Same story with N-trig rollback. So I checked Device Manager and saw an "unknown device" with VID and PID values set to 0000. I figure the N-trig driver might have messed with the device firmware or something, and now it doesn't know its own ID and manufacturer... Is there anything that can be done, like forcing the N-trig driver to install on this device or something? Please help!

  • Sharing music on NAS with Zune and iPod?

    - by osij2is
    After being a long-time iPod owner, I'm switching to the new Zune with its subscription model. I haven't bought a Zune yet, but I'm planning to do so within the next month or so. I have approximately 40GB worth of music, and my girlfriend's iPod library is around 30GB. I've been trying to figure out how to migrate all our music off of our laptops/desktops and centralize everything on my NAS. Sharing iPod music isn't too bad: sharing from one machine to all is fairly easy within the iTunes player. As far as storing all the music on a NAS goes, again, iPods aren't too bad, and I imagine other systems aren't difficult either. But I'm really new to the Zune, and I'm beginning to run into some issues. My questions are: 1) Is it possible to store all the music from our iPods and Zune subscriptions within the same file share on my NAS, and share music between the iPod and Zune? I'm sure it's possible to store music on a share, but I'm not sure how the iTunes player and the Zune software differ. 2) Is there third-party software, maybe something like DoubleTwist, that can sync from the NAS to multiple desktops/laptops? I've never used DoubleTwist, but it's something I found that looks close to what I need. I've never quite done this myself, so I'm trying to find a solution that can: a) store music on a network share; b) sync between different devices (Zune/iPod) seamlessly.
