Search Results

Search found 45505 results on 1821 pages for 'change directory'.

Page 695/1821 | < Previous Page | 691 692 693 694 695 696 697 698 699 700 701 702  | Next Page >

  • Can't untar Testdisk 6.14 using live CD

    - by Orestes
    I'm using an Ubuntu LiveCD right now and need to recover files from a (Windows 7) partition. I've read about TestDisk and tried downloading it and untarring it, but I get:

        testdisk-6.14-WIP.linux26.tar.bz2: Cannot open: No such file or directory
        tar (child): Error is not recoverable: exiting now
        tar: Child returned status 2
        tar: Error is not recoverable: exiting now

    I don't know why it doesn't work (I'm a noob). I also tried sudo apt-get install testdisk, but:

        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        E: Unable to locate package testdisk

    So... HELP :D
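
    The "Cannot open: No such file or directory" message usually just means tar was run from a directory that doesn't contain the downloaded archive. A minimal sketch, assuming the file was saved to ~/Downloads and that the live session's package index simply hasn't been refreshed yet:

        cd ~/Downloads                                # assumption: where the archive was saved
        tar xjf testdisk-6.14-WIP.linux26.tar.bz2     # -j because it is a .tar.bz2
        # alternatively, install from the repositories after refreshing the package index
        sudo apt-get update
        sudo apt-get install testdisk                 # assumption: the universe component is enabled on the live CD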

    Read the article

  • Restoring Windows 2008 Server X86 and X64

    - by rihatum
    Restoring Windows 2008 Server (Domain Controller): we are using Backup Exec System Recovery 2010 to image our DC. This software has a feature to convert the backup into a VMware or Hyper-V VM. I have also used disk2vhd to convert one of our DCs to a VHD, and when I connected it to Hyper-V it booted fine and I can log in - BUT :-) as soon as I log in, I get the activation error: change product key, this product key isn't valid for this machine, and so on.

    My question is: in a real recovery situation, what would be the procedure to restore it, either virtually or onto a physical box, so that I can log in and change the product key? In this scenario it is just locked down and I can't do anything. If that is how it behaves, how would I replicate my production environment with these tools? Any ideas? I would be grateful for some real-world examples here.

    The same thing happens with our Exchange backup / test restore, whether physical or virtual: I can log in but do nothing else. We don't have the keys, as they are OEM keys, so I'm wondering what would happen in a real scenario - would we purchase another key, or use the OEM key on our new server? This is a test environment I am trying to create by restoring our backups into Hyper-V or onto physical test machines.

    Also, if I build up a Server 2008 machine in a Hyper-V VM, how can I restore just the system state backup of my DC into it? Will that give me the activation error too, even though I would use the trial ISOs provided by Microsoft? Kind regards

    Read the article

  • Kickstart: Serve dynamic kickstart images via a CGI or PHP script?

    - by Stefan Lasiewski
    I'd like to kickstart a couple dozen RHEL6/SL6 servers. However, some of these servers are different, and I don't want to create a new ks.cfg file for each class of server. Are there any products which can generate a Kickstart file dynamically, on the fly, from a template? For example, I could append a line like this to the KERNEL entry:

        APPEND ks=http://192.168.1.100/cgi-bin/ks.cgi

    Then the script ks.cgi could determine what host this is (via the MAC address) and print out the Kickstart options which are appropriate for that host. I could optionally override some options by passing parameters to the script, like this:

        APPEND ks=http://192.168.1.100/cgi-bin/ks.cgi?NODETYPE=production&IP=192.168.2.80

    After we kickstart the server, we activate Cfengine/Puppet on the system and manage it with our favorite configuration management product. We're experimenting with xCAT, but it is proving too cumbersome. I've looked into Cobbler, but I'm not sure it does this.

    Update: a roll-your-own solution is discussed in the O'Reilly book Managing RPM-Based Systems with Kickstart and Yum, Chapter 3, "Customizing Your Kickstart Install - Dynamic ks.cfg", which echoes some of the comments in this thread:

        "To implement such a tool is beyond the scope of this Short Cut, but I can walk through the high-level design. Any such solution would mix a data store (the things that change) with a templating solution (the things that don't change). The data store would hold the per-machine data, such as the IP address and hostname. You would also need a unique identifier, perhaps the hostname, such that you could pick up a given machine's data. The data store could be a flat file, XML data, or a relational database such as PostgreSQL or MySQL. In turn, to invoke the system, you pass a machine's unique identifier as a URL parameter. For example:

        boot: linux ks=http://your.kickstart.server/gen_config?host-server25

        In this example, the CGI (or servlet, or whatever) generates a ks.cfg for the machine server25."

    But where, oh where, is the code for ks.cgi?
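
    For what it's worth, a minimal roll-your-own ks.cgi along the lines the book describes can be only a few lines of shell. This is a sketch only - the URL, kickstart directives and partition sizes are illustrative assumptions, and real per-host data would come from a flat file or database as the excerpt suggests:

        #!/bin/sh
        # Minimal dynamic-kickstart CGI sketch (values are placeholders, not a tested config)
        echo "Content-type: text/plain"
        echo ""
        # pull NODETYPE out of the query string, e.g. ?NODETYPE=production&IP=192.168.2.80
        NODETYPE=$(printf '%s' "$QUERY_STRING" | sed -n 's/.*NODETYPE=\([^&]*\).*/\1/p')
        # common directives shared by every host
        echo "install"
        echo "url --url http://192.168.1.100/rhel6"
        echo "lang en_US.UTF-8"
        echo "keyboard us"
        # per-class differences, keyed off the passed parameter
        case "$NODETYPE" in
            production) echo "part / --fstype=ext4 --size=20480" ;;
            *)          echo "part / --fstype=ext4 --size=10240" ;;
        esac

    Dropping something like this in /cgi-bin/ and pointing the APPEND line at it gives each host its own generated ks.cfg; a MAC- or IP-based lookup would replace the case statement in a real deployment.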

    Read the article

  • How do I copy an existing hard disc to a new one so I can boot off the new disc?

    - by Brian Hooper
    I currently have a failing hard drive which is the only hard drive in the machine. I have just bought a new hard drive to replace it, and my plan is to copy the contents of the old drive onto the new one, and then replace the old drive in the machine with the new one. I presumably can't just copy the whole directory structure (or can I)? What do I need to do to manage this, assuming it is possible? Is there a utility to do this for me? (The old drive is hopefully good for a few more hours.) (I hope by this means to keep all the software and configuration files as they are, to avoid having to re-install everything. Can that be done?)
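
    Since the old drive is failing, a sector-level copy tool that tolerates read errors is usually safer than a plain file copy; it carries over the bootloader, partitions, installed software and configuration in one go. A hedged sketch, assuming the old disk is /dev/sda and the new one /dev/sdb (check with sudo fdisk -l first, as copying in the wrong direction destroys the good disk):

        sudo apt-get install gddrescue                       # provides the ddrescue tool
        sudo ddrescue -f -n /dev/sda /dev/sdb rescue.map     # first pass, skip problem areas
        sudo ddrescue -f -r3 /dev/sda /dev/sdb rescue.map    # retry the bad areas a few times

    The new disk must be at least as large as the old one for this to work; if it is larger, the extra space can be claimed afterwards with a partition editor such as GParted.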

    Read the article

  • Games installed from repos won't start

    - by Shauna
    I'm running Natty (upgraded from Maverick) with Gnome 3 (from the Gnome 3 PPA, Unity removed), and have recently found that some of my games from the repos no longer work. When I go to start them, I get the message Failed to launch [app name]. Failed to execute child process [app name] (no such file or directory). I've so far found this on Gweled and PyScrabble. Other games (Mines, Sudoku, Mahjongg), as well as other applications have opened just fine. Gweled used to open fine until recently. Any ideas on how to fix this?
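
    That particular error usually means the command in the launcher's .desktop file points at a binary that no longer exists or is no longer on the PATH. A quick diagnostic sketch (the package names are the usual ones and may differ):

        which gweled pyscrabble                       # is the executable on the PATH at all?
        dpkg -L gweled | grep bin/                    # where did the package put its binaries?
        sudo apt-get install --reinstall gweled pyscrabble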

    Read the article

  • What kernel modules are required for wi-fi to work?

    - by Leonid Shevtsov
    My custom-built 2.6.32 kernel cannot connect to any WPA-protected network. The kernel includes (probably?) everything that should be needed for wifi, including IPv4 network support (IPv6 is disabled), the ath5k wireless driver (which is used in the generic Ubuntu 2.6.31 kernel) and all crypto APIs. The card is detected; however, iwlist scan returns

        wlan0    Failed to read scan data : Network is down

    and the network-manager log says

        <info> (wlan0): driver supports SSID scans (scan_capa 0x01).
        <info> (wlan0): new 802.11 WiFi device (driver: 'ath5k')
        <info> (wlan0): exported as /org/freedesktop/NetworkManager/Devices/1
        <info> (wlan0): now managed
        <info> (wlan0): device state change: 1 -> 2 (reason 2)
        <info> (wlan0): bringing up device.
        <info> (wlan0): preparing device.
        <info> (wlan0): deactivating device (reason: 2).
        supplicant_interface_acquire: assertion `mgr_state == NM_SUPPLICANT_MANAGER_STATE_IDLE' failed
        <info> modem-manager is now available
        <WARN> default_adapter_cb(): bluez error getting default adapter: The name org.bluez was not provided by any .service files
        <info> Trying to start the supplicant...
        <info> (wlan0): supplicant manager state: down -> idle
        <info> (wlan0): device state change: 2 -> 3 (reason 0)
        <WARN> nm_supplicant_interface_add_cb(): Unexpected supplicant error getting interface: wpa_supplicant couldn't grab this interface.

    The exact same configuration works with the generic kernel. Is anything besides wifi and the crypto API needed for wi-fi to work?
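
    Beyond the driver and the crypto API, wpa_supplicant itself needs a few less obvious kernel options - most notably packet sockets (CONFIG_PACKET) and the cfg80211/mac80211 stack that ath5k sits on top of. A quick check against the running kernel's configuration (a sketch; the config file path varies between setups):

        grep -E 'CONFIG_(PACKET|CFG80211|MAC80211|ATH5K|WIRELESS_EXT)=' /boot/config-$(uname -r)

    Comparing that output with the same grep against the working generic kernel's config should show which option the custom build is missing.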

    Read the article

  • I receive email not addressed to me - virus?

    - by Anne
    Every once in a while I receive email (on Gmail) that isn't addressed to me. Gmail puts it in the spam box, because it 'can't verify that it has been sent by [sender]'. The emails, when opened, contain confidential information about deliveries and paid bills (it does look an awful lot like 'real' mail from well-known companies, and it doesn't look like a scam, since the mail is informative - they give information instead of asking for credit card numbers ;-)), and I even got an email from "Facebook" that I requested a password change and that I have to 'click here' to change the password for [email address that isn't mine]. I am not the only addressee, there seems to be a whole list of Gmail addresses beginning with 'a'. The original addressee obviously has some sort of virus, and now I wonder if this could be a risk for me too. Is my email being sent around without my knowing too? I am not the kind of person who randomly clicks on shady links - I am very careful on the internet - but maybe there are other ways of catching viruses? Is there something I should do/check? Thank you for your help!

    Read the article

  • Sub domain on root domain

    - by dror
    I have a site, actually a "portal"/"directory" for service providers. To start with, we gave every service provider their own page on our site, but now we are getting a lot of requests from those providers who want sites of their own. We want to give each service provider their own site, but on a subdomain URL (they don't mind; it's fine with them). So, my site is www.exaple.com and their sites would be provider.exaple.com. Now I have two questions: Can this harm my site's SEO? And if one of those subdomains is penalized by Google because its owner does "black hat SEO", how will that affect the root domain? Can it cause the root domain to be penalized too?

    Read the article

  • How do you name your personal libraries?

    - by Mehrdad
    I'm pretty bad at naming things. The only name I can ever generically come up with is "helper". Say, if I have a header file that contains helper functions for manipulating paths, I tend to put it inside my "helper" directory and call it "path-helper.hpp" or something like that. Obviously, that's a bad naming convention. :) I want a consistent naming scheme for my folder (and namespace) which I can always use to refer to my own headers and libraries, but I have trouble finding names that are easy to type or remember (like boost)... so I end up calling some of them "helper" or "stdext" or whatnot, which isn't a great idea. How do you find names for your libraries that are easy to remember and easy to type, and which aren't too generic (like "helper" or "std" or "stdext" or the like)? Any suggestions on how to go about doing this?

    Read the article

  • Cannot get debconf version after delete /var/lib/dpkg

    - by pije76
    As a new Ubuntu user, I've just made a mistake: I deleted the folder /var/lib/dpkg/ instead of /var/lib/dpkg/lock :) Now when I run apt-get -f install it displays this error:

        ...
        E: Cannot get debconf version. Is debconf installed?
        debconf: apt-extracttemplates failed: No such file or directory
        ...

    I've tried this tutorial: http://people.adams.edu/~cdmiller/posts/Ubuntu-dpkg-recovery/ but still no luck. How can I fix this issue?
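
    The usual recovery is to recreate the /var/lib/dpkg skeleton and restore the status file from the copies Ubuntu keeps in /var/backups. A hedged sketch, assuming those backups survived (the tutorial linked above follows the same idea):

        sudo mkdir -p /var/lib/dpkg/info /var/lib/dpkg/updates /var/lib/dpkg/triggers
        sudo touch /var/lib/dpkg/available
        sudo cp /var/backups/dpkg.status.0 /var/lib/dpkg/status    # most recent status backup
        sudo apt-get update
        sudo apt-get install --reinstall dpkg debconf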

    Read the article

  • ralink rt2860 not working anymore in ubuntu 12.10

    - by greenit
    I just upgraded from Ubuntu 12.04 to 12.10; however, my network card is no longer working... When I had 12.04 installed, I just had to download the driver from Ralink, go into the driver directory and enter these commands:

        make
        sudo make install

    and the network card worked. Now, when I do the same on Ubuntu 12.10, it seems to work, but it doesn't: I can load the module and it recognizes the network card, but I am unable to connect to any wireless network... How can I get my wireless card working again? If anyone knows a solution, please tell me - thanks in advance.
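
    One thing worth checking after a release upgrade is whether the in-kernel rt2800pci driver is now claiming the card and conflicting with the vendor rt2860 module. This is only a guess at the cause, and the module names are assumptions based on the usual Ralink packaging:

        lsmod | grep -E 'rt2800|rt2860'                                   # see which driver has the card
        echo "blacklist rt2800pci" | sudo tee /etc/modprobe.d/blacklist-rt2800.conf
        sudo depmod -a
        sudo modprobe rt2860sta                                           # assumption: the vendor module is rt2860sta

    Also note that an out-of-tree driver has to be rebuilt (make && sudo make install again) for the new kernel the upgrade pulled in.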

    Read the article

  • How to properly set up Sun's JDK?

    - by jurchiks
    I'm trying to manually install the Sun JDK package (I have my reasons, don't bother asking why). I've successfully extracted the .bin file into /usr/lib/jvm/jdk1.6.0_23, but the problem is the PATH variable. I added this line to the /etc/environment file:

        JAVA_HOME="/usr/lib/jvm/jdk1.6.0_23"

    and added JAVA_HOME/bin to the PATH variable, BUT the OS still doesn't recognise the java command; it says it's not installed and offers me gcj and openjdk. There was another way, using java-package to convert the .bin into a .deb installer, but unfortunately that package is not available for maverick, so I can't do it that way. How can I make the PATH variable work, and is there anything else required apart from the environment variables to make it all work? When I try the update-java-alternatives -l command, it says the following:

        awk: cannot open /usr/lib/jvm/*.jinfo (No such file or directory)
        jdk1.6.0_23 /usr/lib/jvm/jdk1.6.0_23

    What should be the name of that file, and what should it contain?
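
    Two hedged observations. First, /etc/environment is read as plain KEY=VALUE pairs, so writing JAVA_HOME/bin inside PATH is not expanded the way it would be in a shell script - the bin path has to be spelled out in full. Second, the alternatives system can register a hand-unpacked JDK without touching /etc/environment at all; a sketch (the priority value 1 is arbitrary):

        sudo update-alternatives --install /usr/bin/java  java  /usr/lib/jvm/jdk1.6.0_23/bin/java  1
        sudo update-alternatives --install /usr/bin/javac javac /usr/lib/jvm/jdk1.6.0_23/bin/javac 1
        sudo update-alternatives --config java

    The missing .jinfo file that update-java-alternatives complains about is normally supplied by packaged JDKs, which is another reason plain update-alternatives is the easier route for a manual install.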

    Read the article

  • Disable suspend / hibernate via the policykit

    - by redonath
    I am trying to run Lubuntu 12.04, but if the computer suspends I am unable to boot up again: instead I see the BIOS POST, the hard disk light flickers once, and I have to reinstall (I have already tried reinstalling GRUB 2). I am new to Linux, and the thing I found that best answered my question was posted by James Henstridge. The instructions say to create disable-shutdown.pkla in /etc/polkit-1/50-local.d/, but this directory does not exist. Do I create a folder called 50-local.d inside polkit-1, or do I have to place this file elsewhere?
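
    On 12.04 the local-authority .pkla files normally live one level deeper, under /etc/polkit-1/localauthority/50-local.d/. A hedged sketch of creating that directory and a minimal disable-suspend/hibernate policy (the action names follow the upower conventions of that release and may differ on other versions):

        sudo mkdir -p /etc/polkit-1/localauthority/50-local.d
        sudo tee /etc/polkit-1/localauthority/50-local.d/disable-shutdown.pkla > /dev/null <<'EOF'
        [Disable suspend and hibernate]
        Identity=unix-user:*
        Action=org.freedesktop.upower.suspend;org.freedesktop.upower.hibernate
        ResultAny=no
        ResultInactive=no
        ResultActive=no
        EOF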

    Read the article

  • Can't get Broadcom 43142 drivers to work after installation on 5420 Inspiron

    - by beckett
    I'm a complete newbie to Ubuntu. I've already installed and reinstalled the required Broadcom driver and followed numerous steps in the terminal, but I can't seem to get wireless working; my wired connection works just fine. I have also tried to follow these instructions, but I get stuck at step 2, where I'm told "No package dpkg is available":

        1) Download the file from the link
        2) sudo yum install dpkg
        3) mkdir BCM43142
        4) dpkg-deb -x Downloads/wireless-bcm43142-dkms-6.20.55.19_amd64.deb BCM43142
        5) cd BCM43142/usr/src/wireless-bcm43142-oneiric-dkms-6.20.55.19~bdcom0602.0400.1000.0400/src/wl/sys
        6) sudo yum install kernel-devel kernel-headers
        7) vi wl_linux.c
        8) around line 43, remove the line include
        9) save the file (:wq)
        10) cd ../../..
        11) make
            Things should work, and you'll have a file called "wl.ko" in the current directory.
        12) sudo yum remove broadcom-wl
        13) sudo mkdir -p /lib/modules/3.5.2-3.fc17.x86_64/extra/wl
        14) sudo cp wl.ko /lib/modules/3.5.2-3.fc17.x86_64/extra/wl
        15) sudo depmod -a
        16) sudo modprobe wl

    I really need help :/
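
    Those instructions appear to be written for Fedora - yum, kernel-devel and the .fc17 module paths - which is why "yum install dpkg" fails on Ubuntu (dpkg is already part of the base system there). A hedged sketch of the Ubuntu equivalents for the yum steps, with the paths adjusted to the running kernel:

        sudo apt-get install build-essential linux-headers-$(uname -r)   # replaces 'yum install kernel-devel kernel-headers'
        dpkg-deb -x ~/Downloads/wireless-bcm43142-dkms-6.20.55.19_amd64.deb BCM43142
        # ... build as in the guide, then install the module for the *running* kernel:
        sudo mkdir -p /lib/modules/$(uname -r)/extra/wl
        sudo cp wl.ko /lib/modules/$(uname -r)/extra/wl
        sudo depmod -a
        sudo modprobe wl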

    Read the article

  • bash script to move files to folders based on name

    - by user289111
    I hope you can help me... I made Perl and bash scripts to back up my firewalls and transfer the configs via TFTP:

        #!/bin/sh
        perl /deploy/scripts/backups/10.160.23.1.pl > /dev/null 2>&1
        perl /deploy/scripts/backups/10.160.23.2.pl > /dev/null 2>&1

    This transfers the files to my TFTP directory /tftpboot/:

        ls -l /tftpboot/
        total 532
        -rw-rw-rw- 1 tftp tftp 209977 jun 6 14:01 10.160.23.1_20140606.cfg
        -rw-rw-rw- 1 tftp tftp 329548 jun 6 14:02 10.160.23.2_20140606.cfg

    My question is how to improve the script so it moves these files dynamically to another folder based on the name (in this case, on the IP address). For example, 10.160.23.1_20140606.cfg should move to /deploy/backups/10.160.23.1/. The answer to this is surely somewhere on Google, but I wanted to know if there is a particular solution to this request, and also to learn how to do it :) Thanks!
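
    A minimal sketch of that move step: it takes everything before the first underscore in the file name as the folder name, creates the folder if needed, and moves the file (paths are the ones from the question; adjust as required):

        #!/bin/sh
        for f in /tftpboot/*_*.cfg; do
            ip=$(basename "$f")
            ip=${ip%%_*}                         # e.g. 10.160.23.1
            mkdir -p "/deploy/backups/$ip"
            mv "$f" "/deploy/backups/$ip/"
        done

    Appending these lines to the existing backup script would make the sorting happen automatically after each run.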

    Read the article

  • Problems, connecting Android ICS to Ubuntu using MTP

    - by ubuntico
    I've followed this tutorial from this blog, which very clearly explains how to connect an Android phone running ICS to Ubuntu so that one can access the phone's sdcard (MTP access). I got through the whole procedure with no errors, and I can even attach my mobile to Ubuntu via

        mtpfs -o allow_other ~/Android/GalaxyS2

    and disconnect via

        fusermount -u ~/Android/GalaxyS2

    The problem comes when I try to access the mounted directory. If I try to do it via Nautilus, the system tries to open the folder for a couple of minutes and then I either see the error, or the folder disappears from Nautilus (it comes back when I disconnect the path). I also get a console error:

        fuse: bad mount point `~/Android/GalaxyS2': Transport endpoint is not connected

    I see many people on the net reporting this error, but no one offers any solution to it. I use Ubuntu 11.10 with GNOME Shell (GNOME 3) and the mobile is a Samsung Galaxy S II. I am in the fuse group, and I have done all the steps in the tutorial dozens of times, all in vain.
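
    One hedged thing to check: the allow_other mount option only works for a non-root user if user_allow_other is enabled in /etc/fuse.conf, and a stale, half-dead mount has to be cleaned up before retrying ("Transport endpoint is not connected" is the classic symptom of one). A sketch:

        grep user_allow_other /etc/fuse.conf                     # should be present and uncommented
        echo user_allow_other | sudo tee -a /etc/fuse.conf       # only if it was missing
        fusermount -u ~/Android/GalaxyS2                         # clear any stale mount, then remount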

    Read the article

  • pfSense router on a LAN with two gateways

    - by JohnCC
    I have a LAN with an ADSL modem/router on it. We have just gained an alternative high-speed internet connection at our location, and I want to connect the LAN to it, eventually dropping the ADSL. I've chosen to use a small PFSense box to connect the LAN to the new WAN connection. Two servers on the LAN run services accessible to the outside via NAT using the single ADSL WAN IP. We have DNS records which point to this IP. I want to do the same via the new connection, using the WAN IP there. That connection permits multiple IPs, so I have configured pfSense using virtual IP's, 1:1 NAT and appropriate firewall rules. When I change the servers' default gateway settings to the pfSense box, I can access the services via the new WAN IPs without a problem. However, I can no longer access them via the old WAN IP. If I set the servers' default gateway back to the ADSL router, then the opposite is true - I can access the services via the ADSL IP, but not via the new one. In the first case, I believe this is because an incoming SYN packet arrives at the ADSL WAN IP, and is NAT'd and sent to the internal IP of the server. The server responds with a SYN/ACK which it sends via its default gateway, the pfSense box. The pfSense box sees a SYN/ACK that it saw no SYN for and drops the packet. Is there any sensible way around this? I would like the services to be accessible via both IPs for a short period at least, since once I change the DNS it will take a while before everyone picks up the new address.

    Read the article

  • Drupal + LDAP + Automatic

    - by WernerCD
    I've got Drupal 6 set up within a XAMPP test area, and I have LDAP authentication, groups and data working against Active Directory. What I want - since I'm on an intranet where users are already logged in with their Windows user names - is automatic authentication, without the need to log in via the website. If it's more difficult than it's worth, it's no major hassle, but I'd like to know if it's possible for my users to be auto-magically authenticated with their already-logged-in Windows session when they visit our intranet. Ultimately I may switch to IIS, but I do like having a portable, easy to backup/copy/test setup, so for now I'm going to see if I can get this working in XAMPP.

    Read the article

  • SQL SERVER – Finding Out What Changed in a Deleted Database – Notes from the Field #041

    - by Pinal Dave
    [Note from Pinal]: This is the 41st episode of the Notes from the Field series. The real world is full of challenges. When we read theory or books, we sometimes do not realize how the real world actually works, and that is why we have this series, which is extremely popular with developers and DBAs. Let us talk about the interesting problem of how to figure out what has changed in a DELETED database. You may think I am just throwing words around, but in reality this kind of problem keeps a DBA's life interesting, and in this blog post we have an amazing story from Brian Kelley on exactly that subject. In this episode, database expert Brian Kelley explains how to find out what has changed in a deleted database. Read Brian's experience in his own words.

    Sometimes, one of the hardest questions to answer is, "What changed?" A similar question is, "Did anything change other than what we expected to change?"

    The First Place to Check - Schema Changes History Report: Pinal has recently written on the Schema Changes History report and its requirement that the Default Trace be enabled. This is always the first place I look when I am trying to answer these questions. There are a couple of obvious limitations with the Schema Changes History report. First, while it reports what changed, when it changed, and who changed it, other than the base DDL operation (CREATE, ALTER, DELETE) it does not present what the changes actually were; this is not something covered by the default trace. Second, the default trace has a fixed size. When it hits that size, the changes begin to be overwritten. As a result, if you wait too long, especially on a busy database server, you may find your changes have rolled off.

    But the Database Has Been Deleted! Pinal cited another issue, and that's the inability to run the Schema Changes History report if the database has been dropped. Thankfully, all is not lost. One thing to remember is that the Schema Changes History report is ultimately driven by the Default Trace. As you may have guessed, it's a trace like any other database trace, and the Default Trace does write to disk. The trace files are written to the defined LOG directory for that SQL Server instance and have a prefix of log_. Therefore, you can read the trace files like any other. Tip: copy the files to a working directory; otherwise, you may occasionally receive a file-in-use error. With the Default Trace files, if you ask the question early enough, you can see the information for a deleted database just the same as for any other database.

    Testing with a Deleted Database: Here's a short script that will create a database, create a schema, create an object, and then drop the database. Without the database, you can't run a standard Schema Changes History report.

        CREATE DATABASE DeleteMe;
        GO
        USE DeleteMe;
        GO
        CREATE SCHEMA Test AUTHORIZATION dbo;
        GO
        CREATE TABLE Test.Foo (FooID INT);
        GO
        USE MASTER;
        GO
        DROP DATABASE DeleteMe;
        GO

    This sets up the perfect situation: we can't retrieve the information using the Schema Changes History report, but it's still available.

    Finding the Information: I've sorted the columns so I can see the Event Subclass, the Start Time, the Database Name, the Object Name, and the Object Type at the front, but otherwise I'm just looking at the trace files using SQL Profiler. As you can see, the information is definitely there. Therefore, even in the case of a dropped/deleted database, you can still determine who did what and when.
    You can even determine who dropped the database (the loginame is captured). The key is to get the default trace files in a timely manner in order to extract the information. If you want to get started with performance tuning and database security with the help of experts, read more over at Fix Your SQL Server.

    Reference: Pinal Dave (http://blog.sqlauthority.com)

    Filed under: Notes from the Field, PostADay, SQL, SQL Authority, SQL Query, SQL Security, SQL Server, SQL Tips and Tricks, T SQL
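
    As a side note, the trace files can also be read entirely in T-SQL rather than in Profiler. A hedged sketch using fn_trace_gettable - the file path below is an assumption; use whichever log_*.trc file was copied out of the instance's LOG directory:

        SELECT te.name       AS EventName,
               t.StartTime,
               t.DatabaseName,
               t.ObjectName,
               t.LoginName
        FROM   sys.fn_trace_gettable(N'C:\Work\log_123.trc', DEFAULT) AS t
        JOIN   sys.trace_events AS te
               ON t.EventClass = te.trace_event_id
        WHERE  t.DatabaseName = N'DeleteMe'
        ORDER BY t.StartTime;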

    Read the article

  • Can't mount windows partition?

    - by C.J.
    When I try to open the Windows partition from Ubuntu I receive the error:

        Unable to mount 55 GB Filesystem
        Error mounting: mount exited without exit code 13:
        ntfs_mst_post_read_fixup_warn: magic: 0x04010400 size: 1024 usa_ofs: 1026 usa_count: 1026: Invalid argument
        Record 6 has no FILE magic (0x4010400)
        Failed to open inode FILE_Bitmap: Input/output error
        Failed to mount '/dev/sda2': Input/output error
        NTFS is either inconsistent, or there is a hardware fault, or it's a SoftRAID/FakeRAID hardware.
        In the first case run chkdsk /f on Windows then reboot into Windows twice.
        The usage of the /f parameter is very important! If the device is a SoftRAID/FakeRAID
        then first activate it and mount a different device under the /dev/mapper/ directory,
        (e.g. /dev/mapper/nvidia_eahaabcc1). Please see the 'dmraid' documentation for more detail.

    Additionally, I can't open the Windows partition. I've tried updating it many times, but it won't show up in GRUB. Does anybody know what all this means, and how I might fix it? I thank you for any help in advance.

    Read the article

  • Need Help Unable to Mount Location

    - by Don't ASk Ubun
    I am not able to start Windows and am using a DVD copy of Ubuntu to start up. I can see my 750 GB hard disk, but if I click it I get this error:

        Error mounting: mount exited with exit code 13:
        ntfs_attr_pread_i: ntfs_pread failed: Input/output error
        Failed to read NTFS $Bitmap: Input/output error
        NTFS is either inconsistent, or there is a hardware fault, or it's a SoftRAID/FakeRAID hardware.
        In the first case run chkdsk /f on Windows then reboot into Windows twice.
        The usage of the /f parameter is very important! If the device is a SoftRAID/FakeRAID
        then first activate it and mount a different device under the /dev/mapper/ directory,
        (e.g. /dev/mapper/nvidia_eahaabcc1). Please see the 'dmraid' documentation for more details.

    After googling for a while I think I need to run sudo apt-get install ntfsprogs, but when I try that I get:

        E: Package 'ntfsprogs' has no installation candidate

    My problem is a lot like this thread.
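
    On recent Ubuntu releases the ntfsprogs package was folded into ntfs-3g, which is why there is no installation candidate. A hedged sketch - and note that, given the Input/output errors above, the safest first step is still running chkdsk /f from a Windows recovery environment, since ntfsfix is not a full chkdsk replacement:

        sudo apt-get install ntfs-3g                 # provides ntfsfix on releases where ntfsprogs was merged
        sudo ntfsfix /dev/sdXN                       # assumption: replace sdXN with the Windows partition, e.g. /dev/sda1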

    Read the article

  • Ubuntu Server 12.10 No GUI Headless Boot and/or Reboot

    - by Ubuntu User
    I have a headless server running the latest Ubuntu Server 12.10, with no GUI at all. I am having the same issue others have reported: when you boot or reboot without a monitor attached (headless), the computer does not boot. The solution presented to others was to edit their xorg.conf file, but since I do not want a GUI installed (and therefore chose not to install one after installing Ubuntu 12.10), I do not have an xorg.conf file in the /etc/X11/ directory. Ubuntu is a widely used Linux distro, especially for server applications, and I absolutely love it. Surely someone has solved this already?
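
    One frequent cause of this on headless Ubuntu servers is GRUB deliberately waiting at its menu forever after an unclean shutdown (the "recordfail" behaviour), which looks exactly like a machine that never boots. A hedged sketch of capping that wait - this relies on Ubuntu's GRUB packaging honouring GRUB_RECORDFAIL_TIMEOUT, which may not be the case on every release:

        echo 'GRUB_RECORDFAIL_TIMEOUT=5' | sudo tee -a /etc/default/grub
        sudo update-grub

    If the variable is not honoured on 12.10, the equivalent effect can be had by adjusting the recordfail timeout logic in /etc/grub.d/00_header directly.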

    Read the article

  • How do you force Ubuntu to unmount a disk when you press the eject button on an optical drive?

    - by Michael Curran
    When upgrading my hardware, I also upgraded to Ubuntu 10.10. On my previous system (with 10.04 and earlier) when I ejected a disk from the optical drive, the subfolder in the /media directory was automatically removed. In my new 10.10 system, if I don't eject the disk using the "eject" command within the system, the disk remains mounted, even after a new disk is installed. The new drive is a Blu Ray drive, but I haven't noticed any other problems from it. Normally, this isn't a problem, but it makes installing applications that are spread over multiple CDs more difficult in many cases (i.e. Wine). Any advice?

    Read the article

  • Save Windows 8 Files and Install Ubuntu

    - by Nika
    I would like to install Ubuntu on my laptop. I am new to Ubuntu and therefore have some questions:
    1. My hardware is an Acer eMachines with an Intel Celeron T3000 at 1.8 GHz, 3 GB of RAM and native Intel integrated graphics. Can I run Ubuntu on this machine without any problems?
    2. I am a Windows user and currently have Windows 8. I want to install Ubuntu but want to keep all my directories on the C:\ and D:\ disks so that I can open my files in Ubuntu. Is it possible to keep my current directory structure and all my files after installing Ubuntu, and will I be able to open them all? I have very important data in those files, so I don't want to lose them at all :)
    Thank You

    Read the article

  • Changing from one file to another

    - by jbander
    I'm told to go to /home/jbander/Downloads - how do I do that? I assume you do it in the terminal, but what do you do next? I can get to home, but that's it. How do I go from one directory (or file, or whatever they are) to another, and once I'm there, what do I do to see what is in the Downloads folder? One more question: if I want to change a name from, e.g., cow to, e.g., duck, how would I do that (they are just arbitrary names)? How do I get rid of cow and put duck in its place?
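
    A short sketch of the commands being asked about (cow and duck are just the arbitrary example names from the question):

        cd /home/jbander/Downloads    # change into that directory
        ls -l                         # list what it contains
        cd ..                         # go back up one level
        mv cow duck                   # rename 'cow' to 'duck' (works for files and directories alike)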

    Read the article
