Search Results

Search found 7890 results on 316 pages for 'amit dev'.


  • ipv6 with KVM on debian

    - by Eliasdx
    I have trouble setting up IPV6 on my Proxmox (KVM) server: My ISP sent me this information(xxx=placeholder): IPs: 2a01:XXX:XXX:301:: /64 Gateway: 2a01:XXX:XXX:300::1 /59 This is the interface setup on the host server: auto vmbr1 iface vmbr1 inet static address 178.XX.XX.4 broadcast 178.XX.XX.63 netmask 255.255.255.192 pointopoint 178.XX.XX.1 gateway 178.XX.XX.1 bridge_ports eth0 bridge_stp off bridge_fd 0 iface vmbr1 inet6 static address 2a01:XXX:XXX:301::2 netmask 64 up ip -6 route add 2a01:XXX:XXX:300::1 dev vmbr1 down ip -6 route del 2a01:XXX:XXX:300::1 dev vmbr1 up ip -6 route add default via 2a01:XXX:XXX:300::1 dev vmbr1 down ip -6 route del default via 2a01:XXX:XXX:300::1 dev vmbr1 On the guest: auto eth0 iface eth0 inet static address 178.xx.xx.47 netmask 255.255.255.255 broadcast 178.xx.xx.63 gateway 178.xx.xx.1 pointopoint 178.xx.xx.1 iface eth0 inet6 static pre-up modprobe ipv6 address 2a01:XXX:XXX:301::2:2 netmask 64 up ip -6 route add 2a01:XXX:XXX:300::1 dev eth0 down ip -6 route del 2a01:XXX:XXX:300::1 dev eth0 up ip -6 route add default via 2a01:XXX:XXX:300::1 dev eth0 down ip -6 route del default via 2a01:XXX:XXX:300::1 dev eth0 Ipv4 works on both host and guest but Ipv6 only works "sometimes". It's up for minutes and then down again until I change something. However I can actually ping the host and the guest from both host and guest. host:~# ip -6 neigh 2a01:XXX:XXX:301::100:2 dev vmbr1 lladdr 00:50:56:00:00:e0 REACHABLE 2a01:XXX:XXX:300::1 dev vmbr1 lladdr 00:26:88:76:18:18 router STALE host:~# ip -6 route 2a01:XXX:XXX:300::1 dev vmbr1 metric 1024 mtu 1500 advmss 1440 hoplimit 4294967295 2a01:XXX:XXX:301::/64 dev vmbr1 proto kernel metric 256 mtu 1500 advmss 1440 hoplimit 4294967295 fe80::/64 dev vmbr0 proto kernel metric 256 mtu 1500 advmss 1440 hoplimit 4294967295 fe80::/64 dev eth0 proto kernel metric 256 mtu 1500 advmss 1440 hoplimit 4294967295 fe80::/64 dev vmbr1 proto kernel metric 256 mtu 1500 advmss 1440 hoplimit 4294967295 fe80::/64 dev tap101i1d0 proto kernel metric 256 mtu 1500 advmss 1440 hoplimit 4294967295 default via 2a01:XXX:XXX:300::1 dev vmbr1 metric 1024 mtu 1500 advmss 1440 hoplimit 4294967295 Does someone know why it isn't working? And is there a way to configure multiple v6 IPs from the same subnet so I can dedicate IPs to websites on a server with multiple virtualhosts?
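
    The intermittent part is hard to pin down from the snippet alone, but the second question has a straightforward answer: additional addresses from the same /64 can simply be stacked on the interface, and each virtual host can then bind to its own one. A minimal sketch, keeping the XXX placeholders from above:

        # Sketch only (placeholder prefix retained): add further addresses from the /64,
        # either at run time or via "up"/"down" lines in the inet6 stanza.
        ip -6 addr add 2a01:XXX:XXX:301::10/64 dev eth0
        ip -6 addr add 2a01:XXX:XXX:301::11/64 dev eth0
        # equivalent /etc/network/interfaces lines under "iface eth0 inet6 static":
        #   up   ip -6 addr add 2a01:XXX:XXX:301::10/64 dev eth0
        #   down ip -6 addr del 2a01:XXX:XXX:301::10/64 dev eth0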

    Read the article

  • Java Google App Engine inconsistent data loss after restarting dev server

    - by user259349
    Hello everyone, I am using Java GAE. So far, I'm just scaffolding my data objects and I'm seeing an interesting issue. The records that I am playing around with are getting updated properly as long as my dev server is running. The second my dev server gets restarted, I lose all of my changes. That would not be alarming if I lost all of my records, but there was a point in time where my data persisted through a server restart. I'm worried that I would lose production data if I launched without fixing this potential bug. Any idea on where I should look?
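
    One thing worth checking (an assumption about the default SDK setup, not a diagnosis): the Java dev server persists its local datastore to a single file under the war directory, so any clean or rebuild step that wipes that directory also wipes the data. A quick check:

        # Sketch, assuming the default Java SDK layout: the dev datastore lives in
        # war/WEB-INF/appengine-generated/local_db.bin. If it disappears between
        # restarts (e.g. a clean build deletes it), so do your records.
        ls -l war/WEB-INF/appengine-generated/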

    Read the article

  • Interacting with system commands using a web dev language

    - by Jamie
    Hi all, first of all, sorry for the vague title. Let me explain. At work we're currently using SunGrid, and I've been assigned a project to create a web interface wrapper for interacting with the engine, i.e. displaying users' jobs, submitting jobs via a nice GUI, etc. (most of the sgrid commands output XML, which is nice). My question for you chaps is the following: what web dev language would you use to interact with the system? I.e. use the language to do a system call and evaluate the response. I'm not after an argument on which language is best; I just would like to know which language is specifically good for interacting with the system and is also good for web dev.
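
    For what it's worth, any language with decent process handling and an XML parser covers the core loop; as an illustration only (the qstat -xml invocation and element names are assumptions, not a vetted SGE schema), a few lines of Python look like this:

        import subprocess
        import xml.etree.ElementTree as ET

        # Sketch: run a grid command that emits XML and pull a couple of fields out.
        output = subprocess.check_output(["qstat", "-xml"])   # needs Python 2.7+
        root = ET.fromstring(output)
        for job in root.iter("job_list"):
            print("%s  %s" % (job.findtext("JB_name", default="?"),
                              job.findtext("state", default="?")))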

    Read the article

  • iPhone expired dev certs - public/private key pair issue

    - by KevinDTimm
    My dev 'license' expired last week, and with it my dev certs. I re-upped my license, but my keys are still expired. I tried to create a new signing certificate via Keychain, etc., but it seems my private key is not enough; it needs my public key to do so. I understand that the public key is stored in the provisioning certificate. The question is, can I retrieve the public key from there? (And if so, how?)
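
    On the narrow question of getting a public key back out of a certificate: a certificate always embeds the public key, so if the .cer can be exported from Keychain Access or re-downloaded from the portal, openssl can extract it (file names below are examples; whether that alone is enough to rebuild the signing identity is a separate question):

        # Sketch: pull the public key out of an exported certificate.
        # DER-encoded .cer as downloaded from the portal:
        openssl x509 -inform der -in developer_identity.cer -pubkey -noout > public_key.pem
        # PEM-encoded certificate exported from Keychain Access:
        openssl x509 -in certificate.pem -pubkey -noout > public_key.pem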

    Read the article

  • MS Dev Studio 2005 Ignores Preprocessor directives during compile

    - by miked
    We just got a new developer and I'm trying to set him up with Dev Studio 2005 (the version we all use at this office), and we're running into a weird problem that I've never seen before. I have some code that works perfectly on my system, and he can't seem to get it compiled. We've tracked the issue down to his copy of Dev Studio ignoring the preprocessor directives. For example, in the project properties under C/C++ | Preprocessor | Preprocessor Directives, I add DEFINE_ME, which should translate to /D"DEFINE_ME" for the compiler. And it does in my development environment, but it doesn't on his. I verified that when he checks out the code from the source repository, he has the same version of the code I do. And if I look in his project properties, all of the directives are there. For some reason they're just not getting passed down to the compiler. Any ideas?
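
    A quick way to prove whether the symbol reaches his compiler at all, independent of project-property archaeology, is a guard in any translation unit (diagnostic sketch only):

        // Fails the build loudly if the definition never makes it to cl.exe.
        #ifndef DEFINE_ME
        #error DEFINE_ME was not passed to the compiler - check the active Configuration/Platform
        #endif

    It is also worth comparing the full cl.exe command line in the build output window on both machines and confirming he is building the same Configuration/Platform combination, since the preprocessor list is stored per configuration.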

    Read the article

  • Sending events to the dev version of a ruleset via HTTP

    - by Steve Nay
    I've been writing an endpoint that sends events to a KRL ruleset via HTTP GET (based on the documentation here), in this format: http://cs.kobj.net/blue/event/{domain}/{eventname}/{appid} That works great when the version of the app I want to test is the same one that's deployed. I don't always want to deploy before testing it, though. Using the stated format for calling the dev version doesn't work. It still calls the deployed version of my ruleset: http://cs.kobj.net/blue/event/{domain}/{eventname}/{appid}:kynetx_app_version=dev What am I doing wrong?

    Read the article

  • Windows Workflow runs very slowly on my DEV machine

    - by Joon
    I am developing an app that uses WF workflows, hosted in IIS as WCF services, as the business layer. This runs quickly on any machine running Windows Server 2008 R2, but very slowly on our dev machines running Windows XP SP3. Yesterday the workflows were as fast on my dev machine as they are on the server, for the whole day. Today they are back to running slowly again (I rebooted overnight). Has anyone else experienced this problem with workflows running slowly on IIS on XP? What did you do to fix it?

    Read the article

  • Deploying a .Net App Source Control (SVN) over 32-bit AND 64-bit dev stations

    - by Mika Jacobi
    Here is the situation: our dev team has heterogeneous operating systems, split between 32-bit and 64-bit. This is not ideal; we are actually planning to homogenize our infrastructure, but in the meantime we have to deal with it. The issue is that when a 32-bit developer checks out a 64-bit solution from SVN, he has to manually change the target platforms all over again to get it compiled (not to mention other side problems). My question is: what clean (though temporary) solution could address such a situation, permitting each developer to keep his default project/platform settings while checking in and out of SVN? I guess that, at least the first time a project/solution is checked out, a dev still has to tweak the settings manually to compile it properly. After that, with the relevant SVN filters, it should be possible to ignore some settings files (which ones, by the way?). I am open to all clever and detailed suggestions. Thanks.
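
    The per-user files (*.suo, *.user) are the usual exclusion candidates, since they carry local state such as the selected solution platform, while the shared .sln/.csproj platform definitions stay under version control. A sketch of the usual svn:ignore setup (names are examples):

        # Keep per-user Visual Studio state out of the repository so local
        # platform choices never collide on commit.
        printf '*.suo\n*.user\nbin\nobj\n' > .svnignore
        svn propset svn:ignore -F .svnignore .
        svn commit -m "Ignore per-user Visual Studio settings" --depth empty .

    For the platform problem itself, the usual compromise is an extra solution configuration (AnyCPU or "Mixed Platforms") that builds on both kinds of machine, rather than trying to filter the shared project files.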

    Read the article

  • As a team should we develop locally and merge into the dev server, or develop on the dev server?

    - by CogitoErgoSum
    Hey, recently I was tasked with writing up formal procedures for a team-based development environment. We have several projects with multiple modules each. Right now there are only two programmers; however, there are plans to expand to 4-6 programmers. Each programmer will be working on the same project and possibly the same pages, which may cause overwriting or error issues. So far the ideal solution I have thought up is: local development (WAMP/VM or some virtual server instance on their own machine). Once a developer has finished their work, they check it into the CVS repository and merge it with other fixes etc. The CVS version is then deployed to the primary dev server for testing by the devs. The MySQL databases are kept on the primary dev server and users may remotely connect to it. Any schema/data alterations are run through a DB admin who will notify all devs of any DB changes (which should be rare). Does anyone see an issue with this or have a better solution?

    Read the article

  • Maven best practice for generating artifacts for multiple environments [prod, test, dev] with CI/Hudson

    - by jaguard
    I have a project that needs to be deployed into multiple environments (prod, test, dev). The differences mainly consist of configuration properties/files. My idea was to use profiles and overlays to copy/configure the specialized output, but I'm stuck on whether I should generate multiple artifacts with specialized classifiers (e.g. "my-app-1.0-prod.zip/jar", "my-app-1.0-dev.zip/jar") or create multiple projects, one for every environment. Should I use maven-assembly-plugin to generate multiple artifacts for every environment? Anyway, I'll need to generate all of them at once, so it seems that profiles alone do not fit ... still puzzled :( Any hints/examples/links will be more than welcome. As a side issue, I'm also wondering how to achieve this in CI (Hudson/Bamboo): generating and deploying these artifacts for all the environments to their proper servers (e.g. using the SCP Hudson plugin)?
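
    For the single-build-per-environment variant, one common pattern (a sketch; directory names are assumptions) is a property per profile feeding an environment-specific resource directory. Producing every environment in one run generally does mean separate invocations, or assembly descriptors producing classified artifacts:

        <!-- Sketch (fragments of a pom.xml): invoked as "mvn -Pdev package",
             "mvn -Pprod package", etc. -->
        <profiles>
          <profile>
            <id>dev</id>
            <properties><env>dev</env></properties>
          </profile>
          <profile>
            <id>prod</id>
            <properties><env>prod</env></properties>
          </profile>
        </profiles>
        <build>
          <resources>
            <resource>
              <directory>src/main/resources</directory>
            </resource>
            <resource>
              <directory>src/main/config/${env}</directory>
            </resource>
          </resources>
        </build>

    On the CI side, the usual arrangement is one Hudson/Bamboo job (or matrix axis) per profile, each running mvn with its own -P flag and its own deploy/SCP step, rather than one build that emits everything.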

    Read the article

  • Multiple calls to /dev/stdin using python subprocess (*nix)

    - by Alex Leach
    Hi, I have a Python subprocess call which I would like to link up to three pipes (two standard in and one standard out). I know that there is only one /dev/stdin, but there are all those other devices in /dev I don't know about, and I don't know of anything in the Python os, sys or subprocess modules that will utilise them in a manner which allows me to give the device path to subprocess.Popen. The reason I ask is that I would like to pipe information from a MySQL database or tar archive rather than the directory structure I currently have, which has 28,000 directories in it. The directory names alone use a LOT of space! The alternative is to tar/gunzip the entire directory structure and manoeuvre through the compressed archive. With either solution, MySQL or tar, I would still like to have two pipes into subprocess.Popen and one out, so that I can bypass the HDD. Any need for an example?
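
    On Linux the extra "devices" are /dev/fd/N, a per-process view of open file descriptors, so the pattern is: create pipes with os.pipe(), keep them open across the Popen call, and hand the child /dev/fd paths. A sketch (Python 3.2+ for pass_fds; diff merely stands in for the real consumer):

        import os
        import subprocess

        # Two extra input pipes plus captured stdout; nothing touches the disk.
        r1, w1 = os.pipe()
        r2, w2 = os.pipe()

        proc = subprocess.Popen(
            ["diff", "/dev/fd/%d" % r1, "/dev/fd/%d" % r2],
            pass_fds=(r1, r2),        # keep the read ends open in the child
            stdout=subprocess.PIPE,
        )
        os.close(r1)                  # the parent only writes
        os.close(r2)

        with os.fdopen(w1, "wb") as f1, os.fdopen(w2, "wb") as f2:
            f1.write(b"first stream\n")
            f2.write(b"second stream\n")

        out, _ = proc.communicate()
        print(out.decode())

    For large volumes the two writes need to be interleaved (select, threads) to avoid filling one pipe while the child is waiting on the other.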

    Read the article

  • Dev Environment Tests Not 100% Compatible with Staging/Production in Rails

    - by aronchick
    We use a bunch of specific apps/APIs that (unfortunately) differ quite a bit from dev to staging/production. We use tests and continuous integration at each stage, but in dev the tests fail annoyingly (throwing dialogs, etc. - thanks, Windows, for the 64-bit notification!). I hate to write custom code, but are there some best practices for allowing a subset of testing in Ruby/Rails - or for patching out specific tests when you're running on Windows? Some specific situations: Identify.exe does not support 64-bit Windows and throws a dialog. Sethostname is not supported, and throws an error (at least it's command line).
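
    One low-ceremony option is to tag the examples that depend on those binaries and exclude the tag on Windows; a sketch in RSpec terms (the tag name is invented, and the RUBY_PLATFORM match is the crude-but-portable platform check):

        # spec_helper.rb -- skip examples that need identify/sethostname on Windows boxes.
        WINDOWS = RUBY_PLATFORM =~ /mswin|mingw|cygwin/

        RSpec.configure do |config|
          config.filter_run_excluding :needs_unix => true if WINDOWS
        end

        # in a spec file:
        describe "thumbnail generation" do
          it "shells out to identify", :needs_unix => true do
            # ...
          end
        end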

    Read the article

  • Should a c# dev switch to VB.net when the team language base is mixed?

    - by jjr2527
    I recently joined a new development team where the language preferences are mixed on the .net platform. Dev 1: knows VB.net, does not know c#. Dev 2: knows VB.net, does not know c#. Dev 3: knows c# and VB.net, prefers c#. Dev 4: knows c# and VB6 (VB.net should be pretty easy to pick up), prefers c#. It seems to me that the thought leaders in the .net space are c# devs almost universally. I also thought that some 3rd-party tools didn't support VB.net, but when I started looking into it I didn't find any good examples. I would prefer to get the whole team on c#, but if there isn't any good reason to force the issue aside from preference, then I don't think that is the right choice. Are there any reasons I should lead folks away from VB.net?

    Read the article

  • Forwarding all mail to a single dev box on IIS via virtual SMTP

    - by Greg R
    I am trying to set up a development environment for our web server. I would like all emails that are relayed by the server to go to a specific mailbox, regardless of who they were sent to. For example, some application on the server sends an email to [email protected]; I want that email to go to [email protected]. Is that possible to do with IIS/Virtual SMTP? Is there some other way of doing this? I don't have Exchange Server running, if that makes a difference. Any help would be greatly appreciated. Thanks a lot!
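
    The virtual SMTP service has no obvious "rewrite every recipient" switch, so two common dev-box workarounds come up: if the senders are .NET applications using System.Net.Mail (an assumption), mail can be dropped into a pickup folder instead of being relayed at all; otherwise a catch-all dummy SMTP server such as smtp4dev accepts everything for inspection. A sketch of the first option:

        <!-- Sketch (web.config / app.config of the sending app): write outgoing mail
             to a folder instead of relaying it. The path is an example. This is an
             alternative to, not a configuration of, the IIS virtual SMTP service. -->
        <system.net>
          <mailSettings>
            <smtp deliveryMethod="SpecifiedPickupDirectory">
              <specifiedPickupDirectory pickupDirectoryLocation="C:\inetpub\maildrop" />
            </smtp>
          </mailSettings>
        </system.net>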

    Read the article

  • OpenCV install problems on Studio 12.04 - broken dependencies

    - by Will
    I'm trying to follow the Ubuntu OpenCV documentation at OpenCV. The provided script has a line which executed for some time, taking away more packages than I expected (such as ubuntu-studio video); sudo apt-get -qq remove ffmpeg x264 libx264-dev When the script gets to the line below, it bombs; sudo apt-get -qq install libopencv-dev build-essential checkinstall cmake pkg-config yasm libtiff4-dev libjpeg-dev libjasper-dev libavcodec-dev libavformat-dev libswscale-dev libdc1394-22-dev libxine-dev libgstreamer0.10-dev libgstreamer-plugins-base0.10-dev libv4l-dev python-dev python-numpy libtbb-dev libqt4-dev libgtk2.0-dev libfaac-dev libmp3lame-dev libopencore-amrnb-dev libopencore-amrwb-dev libtheora-dev libvorbis-dev libxvidcore-dev x264 v4l-utils ffmpeg The error msg is; E: Unable to correct problems, you have held broken packages. I've since run Update-Manager, run sudo apt-get updates, rebooted, tried the above script line manually, and still no change. I've just run sudo apt-get install -f and nothing seemed to change. It did mention that some packages were no longer needed and could be removed by apt-get autoremove, so I ran that. It removed a number of packages, so I reran the install command above. Still same problem of held broken packages. I just ran sudo apt-get -u dist-upgrade Part of the response was; The following packages have been kept back: gstreamer0.10-ffmpeg I'm not sure what that means. I do know that it shows up in my Update-Manager and cannot be checked I then ran sudo dpkg --configure -a and then reran sudo apt-get -f install and the package was still not upgraded, though there was this very interesting comment; Some packages could not be installed. This may mean that you have requested an impossible situation or if you are using the unstable distribution that some required packages have not yet been created or been moved out of Incoming. The following information may help to resolve the situation: The following packages have unmet dependencies: gstreamer0.10-ffmpeg : Depends: libavcodec53 (< 5:0) but it is not going to be installed or libavcodec-extra-53 (< 5:0) but 5:0.7.2-1ubuntu1+codecs1~oneiric2 is to be installed E: Unable to correct problems, you have held broken packages. Then I ran sudo apt-get -u dist-upgrade It showed I had one held package, so I ran; sudo apt-get -o Debug::pkgProblemResolver=yes dist-upgrade It also exited without upgrading the package, so I ran; sudo apt-get remove --dry-run gstreamer0.10-ffmpeg:i386 And it gave me; *The following packages will be REMOVED: arista gstreamer0.10-ffmpeg 0 upgraded, 0 newly installed, 2 to remove and 0 not upgraded. Remv arista [0.9.7-3ubuntu1] Remv gstreamer0.10-ffmpeg [0.10.12-1ubuntu1]* But when I reran sudo apt-get -u dist-upgrade It showed the package was still there. *The following packages have been kept back: gstreamer0.10-ffmpeg 0 upgraded, 0 newly installed, 0 to remove and 1 not upgraded.* Update: Just went into Synaptic PM and completely removed gstreamer0.10-ffmpeg Reran sudo apt-get -u dist-upgrade And was told; 0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded. However, when I ran the original apt-get to install opencv (first code at the top of this question), it still gave me the same broken package errors. 
So I tried $ cat /etc/apt/sources.list # # deb cdrom:[Ubuntu-Studio 11.10 _Oneiric Ocelot_ - Release i386 (20111011.1)]/ oneiric main multiverse restricted universe # deb cdrom:[Ubuntu-Studio 11.10 _Oneiric Ocelot_ - Release i386 (20111011.1)]/ oneiric main multiverse restricted universe # See http://help.ubuntu.com/community/UpgradeNotes for how to upgrade to # newer versions of the distribution. deb http://us.archive.ubuntu.com/ubuntu/ precise main restricted deb-src http://us.archive.ubuntu.com/ubuntu/ precise main restricted ## Major bug fix updates produced after the final release of the ## distribution. deb http://us.archive.ubuntu.com/ubuntu/ precise-updates main restricted deb-src http://us.archive.ubuntu.com/ubuntu/ precise-updates main restricted ## N.B. software from this repository is ENTIRELY UNSUPPORTED by the Ubuntu ## team. Also, please note that software in universe WILL NOT receive any ## review or updates from the Ubuntu security team. deb http://us.archive.ubuntu.com/ubuntu/ precise universe deb-src http://us.archive.ubuntu.com/ubuntu/ precise universe deb http://us.archive.ubuntu.com/ubuntu/ precise-updates universe deb-src http://us.archive.ubuntu.com/ubuntu/ precise-updates universe ## N.B. software from this repository is ENTIRELY UNSUPPORTED by the Ubuntu ## team, and may not be under a free licence. Please satisfy yourself as to ## your rights to use the software. Also, please note that software in ## multiverse WILL NOT receive any review or updates from the Ubuntu ## security team. deb http://us.archive.ubuntu.com/ubuntu/ precise multiverse deb-src http://us.archive.ubuntu.com/ubuntu/ precise multiverse deb http://us.archive.ubuntu.com/ubuntu/ precise-updates multiverse deb-src http://us.archive.ubuntu.com/ubuntu/ precise-updates multiverse ## N.B. software from this repository may not have been tested as ## extensively as that contained in the main release, although it includes ## newer versions of some applications which may provide useful features. ## Also, please note that software in backports WILL NOT receive any review ## or updates from the Ubuntu security team. deb http://security.ubuntu.com/ubuntu precise-security main restricted deb-src http://security.ubuntu.com/ubuntu precise-security main restricted deb http://security.ubuntu.com/ubuntu precise-security universe deb-src http://security.ubuntu.com/ubuntu precise-security universe deb http://security.ubuntu.com/ubuntu precise-security multiverse deb-src http://security.ubuntu.com/ubuntu precise-security multiverse ## Uncomment the following two lines to add software from Canonical's ## 'partner' repository. ## This software is not part of Ubuntu, but is offered by Canonical and the ## respective vendors as a service to Ubuntu users. deb http://archive.canonical.com/ubuntu precise partner # deb-src http://archive.canonical.com/ubuntu oneiric partner ## Uncomment the following two lines to add software from Ubuntu's ## 'extras' repository. ## This software is not part of Ubuntu, but is offered by third-party ## developers who want to ship their latest software. # deb http://extras.ubuntu.com/ubuntu oneiric main # deb-src http://extras.ubuntu.com/ubuntu oneiric main # deb http://download.opensuse.org/repositories/home:/popinet/xUbuntu_11.04 ./ # disabled on upgrade to precise and then; $ cat /etc/apt/sources.list.d/* But I don't have enough reputation to post the results here (it says I need at least 10 reputation points to post more than 2 links), so I don't know how to provide the requested feedback. 
Then tried; $ sudo apt-get check [sudo] password for <abcd>: Reading package lists... Done Building dependency tree Reading state information... Done However, no resolution of the problem yet. What else do I need to do? Will an upgrade to Ubuntu Studio 13.xx solve this problem (or compound it)?
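
    For "held broken packages" knots like this one, two general-purpose steps usually narrow it down (nothing below is OpenCV-specific): apt-cache policy shows which repository versions of the disputed libraries are fighting, and aptitude's resolver proposes concrete keep/downgrade/remove options instead of just refusing:

        # Sketch: inspect the competing candidates, then let aptitude propose a resolution.
        apt-cache policy gstreamer0.10-ffmpeg libavcodec53 libavcodec-extra-53 libavcodec-dev
        sudo apt-get install aptitude
        sudo aptitude install libavcodec-dev libavformat-dev libswscale-dev
        # aptitude prints numbered solutions; accept one explicitly rather than guessing.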

    Read the article

  • SQL 2008 R2 DEV install failed

    - by obulay
    I am trying to install SQL Server 2008 R2 Developer on Windows 2008 STD. It previously had SQL 2008 Enterprise installed. I first uninstalled SQL 2008. I think the uninstall still leaves some crap in the registry and in Program Files. Installation fails in the last step of the process (installation), about a couple of minutes into it. I can't insert a picture for you because serverfault does not allow me to do so, but it basically says: Error message: "Installation failed". I re-installed SQL 2008 Enterprise on this box, and we are not going to go to R2 on older servers that previously had SQL 2005 or SQL 2008 on them. Looking at the Windows log: Activation context generation failed for "C:\Program Files\Microsoft SQL Server\100\Setup Bootstrap\SQLServer2008R2\x64\Microsoft.SqlServer.Configuration.SqlServer_ConfigExtension.dll". Dependent Assembly Microsoft.VC80.CRT,processorArchitecture="amd64",publicKeyToken="1fc8b3b9a1e18e3b",type="win32",version="8.0.50727.4027" could not be found. Please use sxstrace.exe for detailed diagnosis.

    Read the article

  • Grub2 -- Dualboot Ubuntu LTS 12.04 and Windows 7 -- Detects two Windows 7 (loader) entries

    - by DarkIron112
    this is the first question I have ever asked the Ubuntu Community. :D I'm fairly new to Ubuntu, but I understand the basics and know how to navigate the Terminal. I also know how to ask for/research my problems before asking for/ help. I have scoured the internet high and low and learned much of how Grub2 works. But nothing has helped me to solve my problem. My problem is this: I have a computer that has three hard drives. It previously had Windows XP, but I upgraded to Windows 7. I also installed Ubuntu 12.04 LTS (Precise Pangolin). During my installation of Windows 7, there was a failure and I had to restart the installation. Afterwards, I installed Ubuntu. After some trouble removing all traces of the XP OS (Ubuntu auto-detected it, but not Windows 7) I got the two OSes working flawlessly. Or, almost. When booting up, Grub2 used to display Ubuntu, Ubuntu Recovery Mode, Other Versions of Linux, memtest, followed by "Windows 7 (loader) on /dev/sda1" and "Windows 7 (loader) on /dev/sdb1". I eventually removed Recovery Mode, Other Versions, and Memtest. Now, when I run: sudo update-grub I get this print-out: Generating grub.cfg ... Found linux image: /boot/vmlinuz-3.2.0-26-generic Found initrd image: /boot/initrd.img-3.2.0-26-generic Found Windows 7 (loader) on /dev/sda1 Found Windows 7 (loader) on /dev/sdb1 I would like to remove "Windows 7 (loader) on /dev/sda1", as it is a broken entry that shouldn't exist, and must have been installed during my first Windows 7 attempt. I cannot find a Windows 7 entry in /etc/grub.d... And I don't know where to look. Here is a layout of my hard drives: /dev/sda1/ (1.82 TiB), NTFS ("Media") /dev/sdb1/ (100 Mib), NTFS ("System Reserved") /dev/sdb2/ (149 GiB), NTFS ("Windows 7") /dev/sdb3/ (149 GiB), Extended (" ") /dev/sdb4/ (145 GiB), ext4 (" ") /dev/sdb5/ (4 GiB), linux-swap (" ") /dev/sdc1/ (488.28 GiB), NTFS ("Downloads") /dev/sdc2/ (488.28 GiB), NTFS ("AltMedia") /dev/sdc3/ (886.45 GiB), NTFS ("Personal") unallocated (2.09 MiB), unallocated What I think has happened: Windows 7 installed first and badly. I installed it again. First, there was Windows XP to guide where the bootloader went to so it was put on /dev/sdb1/. But, the second time no such guide existed so the machine put another bootloader on /dev/sda1/. sda1, by the way, is the only partition on a 2TB drive. No boot record partition appears to exist according to gedit. I'm not sure where Grub2 is getting this information from. But, there it is. Is there anything somebody can do to help me? Or, is there any more information I should add? Thank you, community!
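
    Those entries are not stored as files under /etc/grub.d; they are regenerated each time by /etc/grub.d/30_os-prober, which calls os-prober, so the place to look is what os-prober reports for /dev/sda1 (quite possibly a leftover boot sector from the failed first install). A quick inspection sketch:

        # Sketch: see what grub's os-prober pass detects, and what ended up in grub.cfg.
        sudo os-prober
        grep -B1 -A3 "Windows 7 (loader)" /boot/grub/grub.cfg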

    Read the article

  • How to repair an external hard drive?

    - by dodohjk
    I would like to reformat my hard disk, and if possible recover the (somewhat unimportant) contents if possible. I have a Western Digital 1TB hard drive which had a NTFS partition. I unplugged the drive without safely removing it first. At first a pop up was asking me to use a Windows OS to run the chkdsk /f command, however, in the effort to keep using a Linux OS I used the ntfsfix command on the ubuntu terminal Now, when I try to access the hard drive, it doesn't show up anymore in Nautilus. I tried reformatting it using Disk Utility, but it gives me an error message, and Gparted would hang on the "Scanning devices" step infinitely. Please comment any output that you would like to see and I will add it to my question. EDIT disk utility tells me is on /dev/sdb the command sudo fdisk -l gives dodohjk@DodosPC:~$ sudo fdisk -l [sudo] password for dodohjk: Disk /dev/sda: 250.1 GB, 250059350016 bytes 255 heads, 63 sectors/track, 30401 cylinders, total 488397168 sectors Units = sectors of 1 * 512 = 512 bytes Sector size (logical/physical): 512 bytes / 512 bytes I/O size (minimum/optimal): 512 bytes / 512 bytes Disk identifier: 0x0006fa8c Device Boot Start End Blocks Id System /dev/sda1 * 4094 482344959 241170433 5 Extended /dev/sda2 482344960 488396799 3025920 82 Linux swap / Solaris /dev/sda5 4096 31461127 15728516 83 Linux /dev/sda6 31463424 52434943 10485760 83 Linux /dev/sda7 52436992 62923320 5243164+ 83 Linux /dev/sda8 62924800 482344959 209710080 83 Linux Disk /dev/sdb: 1000.2 GB, 1000202043392 bytes 255 heads, 63 sectors/track, 121600 cylinders, total 1953519616 sectors Units = sectors of 1 * 512 = 512 bytes Sector size (logical/physical): 512 bytes / 512 bytes I/O size (minimum/optimal): 512 bytes / 512 bytes Disk identifier: 0x6e697373 This doesn't look like a partition table Probably you selected the wrong device. Device Boot Start End Blocks Id System /dev/sdb1 ? 1936269394 3772285809 918008208 4f QNX4.x 3rd part /dev/sdb2 ? 1917848077 2462285169 272218546+ 73 Unknown /dev/sdb3 ? 1818575915 2362751050 272087568 2b Unknown /dev/sdb4 ? 2844524554 2844579527 27487 61 SpeedStor Partition table entries are not in disk order I wrote something wrong here, however here the output of fsck /dev/sbd is dodohjk@DodosPC:~$ sudo fsck /dev/sdb fsck from util-linux 2.20.1 e2fsck 1.42.5 (29-Jul-2012) ext2fs_open2: Bad magic number in super-block fsck.ext2: Superblock invalid, trying backup blocks... fsck.ext2: Bad magic number in super-block while trying to open /dev/sdb The superblock could not be read or does not describe a correct ext2 filesystem. If the device is valid and it really contains an ext2 filesystem (and not swap or ufs or something else), then the superblock is corrupt, and you might try running e2fsck with an alternate superblock: e2fsck -b 8193 <device&gt;
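
    Before reformatting, two hedged notes that may help recovery: ntfsfix (and fsck) expect a partition such as /dev/sdb1 rather than the whole disk, and since the partition table above looks scrambled, testdisk can often find and rewrite a lost NTFS partition non-destructively. A sketch:

        # Sketch: recover the partition first, then repair and mount it -- not /dev/sdb itself.
        sudo apt-get install testdisk ntfs-3g
        sudo testdisk /dev/sdb          # Analyse -> Quick Search -> write the recovered table
        sudo ntfsfix /dev/sdb1          # only once a sane /dev/sdb1 exists again
        sudo mount -t ntfs-3g /dev/sdb1 /mnt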

    Read the article

  • Conditional dev|nfs mount in Linux

    - by o_O Tync
    I have a mount point — let it be /media/question — and two possible devices: a physical HDD and a remote NFS folder. Sometimes I plug the device in physically; in other cases I mount it via NFS. Is there a way to specify both of them in fstab so that executing mount /media/question will prefer the physical volume, and when it's not available, fall back to NFS?
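
    fstab itself has no fallback syntax, so the usual trick is either an autofs map or a small wrapper that tries the disk first and falls back to NFS; a sketch, with the disk label and NFS export as assumptions:

        #!/bin/sh
        # Sketch: prefer the physical disk, fall back to NFS.
        MNT=/media/question
        if [ -b /dev/disk/by-label/question ]; then
            mount /dev/disk/by-label/question "$MNT"
        else
            mount -t nfs fileserver:/export/question "$MNT"
        fi

    Both sources can still be given their own noauto lines in fstab so the options stay in one place; the wrapper then mounts by source rather than by mount point.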

    Read the article

  • CentOS 6.5 as WebServer for Django Dev

    - by Charlesliam
    During CentOS 6.5 installation I chose the web server type for this computer. The server has a static IP address, 192.168.111.100. CentOS was updated, and I managed to install virtualenv with Python 2.7. Within the virtualenv, I'll be using the Django framework. After I run the command as the root user, python manage.py runserver 0.0.0.0:8000, I can't see the website from other computers within the LAN when I type 192.168.111.100:8000/admin in my browser. I already disabled the firewall using service iptables stop. I can ping 192.168.111.100 and I get good feedback from nslookup. What seems to be the problem with my config?
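
    With iptables stopped, the usual remaining suspects are the dev server not actually listening on 0.0.0.0, the root shell not using the virtualenv's Python, or the request never reaching port 8000. A few stock CentOS 6 checks, run on the server:

        # Sketch: confirm the dev server is listening and reachable.
        sudo netstat -tlnp | grep :8000               # expect 0.0.0.0:8000 owned by python
        curl -I http://127.0.0.1:8000/admin/          # reachable locally?
        curl -I http://192.168.111.100:8000/admin/    # reachable on the LAN address?
        sudo service iptables status                  # confirm the firewall really is stopped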

    Read the article

  • IIS - defining a website as a dev site

    - by Lock
    I am new to IIS. Is there a way, during the setup of IIS, to set a variable of some sort that I can use to tell my site that this is the development copy? I am using PHP via IIS 7.5 and would like to have a file with a few lines that define which databases etc. are used by my application. Is this the purpose of web.config? I would love there to be a place in the setup of the website where I can set a few variables that are accessible by my application. That way, when I migrate files to live, I don't need to worry about access details to databases etc.
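
    web.config is IIS's per-site configuration file, but its appSettings section is an ASP.NET convention that plain PHP never reads, so for PHP the simplest dev/live switch is usually a small un-deployed local config file (or a host-name check). A sketch, with file and host names as examples:

        <?php
        // Sketch: env.php exists only on the dev box and is excluded from deployment;
        // it overrides the default below (e.g. it contains: $env = 'development';).
        $env = 'production';
        if (file_exists(__DIR__ . '/env.php')) {
            include __DIR__ . '/env.php';
        }

        $dbHost = ($env === 'development') ? 'localhost' : 'db.live.internal';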

    Read the article
