Search Results

Search found 3489 results on 140 pages for 'summary'.


  • Crash dump analysis

    - by Ryan Ries
    I hope this isn't a stupid question, and if it is, then I want to at least get it over with so I don't feel so dumb in the future. Here we are, loading up a Windows crash dump with Windbg. Here are the first few lines of the debugger output:

        0: kd> .dumpdebug
        ----- 64 bit Kernel Summary Dump Analysis

        DUMP_HEADER64:
        MajorVersion        0000000f
        MinorVersion        00001db1
        ...

    The MinorVersion I mostly understand. It's hexadecimal, and it translates to 7601 in decimal. Windows admins would already be able to tell from that that this must be either a Win7 x64 machine or a 2k8 R2 machine with SP1. But isn't 7601 the build number? It's supposed to be Major.Minor.Build/Revision... right? Also, I don't understand the MajorVersion. It should be 6; this version of Windows is 6. But isn't 0000000f in hexadecimal 15 in decimal? The full version string of this version of Windows, shown when you launch the Command Prompt for instance, is 6.1.7601. If 7601 is the MinorVersion, then what is the 1, and what is the 6? And why does the crash dump say 0F?
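
    The hex arithmetic is easy to verify from any shell, and the field naming is the real trap: in kernel dump headers the MinorVersion field carries the build number, while MajorVersion is commonly documented as a build-flavor marker (0xF for free/retail kernels, 0xC for checked builds) rather than the OS major version. A quick check:

        # converting the two header fields by hand
        $ printf '%d\n' 0x1db1   # MinorVersion -> 7601, i.e. the build number
        7601
        $ printf '%d\n' 0xf      # MajorVersion -> 15 (0xF), the free-build marker
        15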


  • Insufficient storage available to create shadow copy

    - by Bob.at.SBS
    I have used the "Windows 7 File Recovery" tool under Windows 8 to create system image backups to an external USB hard drive. I built a new Windows 8.1 machine, and I want to create my first system image backup of that machine to the same USB hard drive. The "Windows 7 File Recovery" tool is gone in Windows 8.1, but wbAdmin is alive and well:

        wbAdmin start backup -backupTarget:\\?\Volume{2a2b...994f} -allCritical -quiet

    fails with this text displayed:

        wbadmin 1.0 - Backup command-line tool
        (C) Copyright 2013 Microsoft Corporation. All rights reserved.

        Retrieving volume information...
        This will back up (EFI System Partition),(C:),Recovery (300.00 MB) to
        \\?\Volume{2a2b1255-3a86-11e3-be86-b8ca3a83994f}.
        The backup operation to F: is starting.
        Creating a shadow copy of the volumes specified for backup...
        Summary of the backup operation:
        The backup operation stopped before completing.
        The backup operation stopped before completing.
        Detailed error: ERROR - A Volume Shadow Copy Service operation error has occurred:
        (0x8004231f) Insufficient storage available to create either the shadow copy
        storage file or other shadow copy data.

    The EFI System Partition is 100 MB. The Recovery Partition is 300 MB. The C: partition is 1.72 TB, NTFS, 218 GB used, 1.51 TB free. The destination drive is 1.81 TB, NTFS, 678 GB used, 1.15 TB free.

    I've fiddled with vssadmin resize shadowstorage, with no change in the error. vssadmin list shadowstorage displays:

        Shadow Copy Storage association
           For volume: (C:)\\?\Volume{37a0...263}\
           Shadow Copy Storage volume: (C:)\\?\Volume{37a0...263}\
           Used Shadow Copy Storage space: 2.39 GB (0%)
           Allocated Shadow Copy Storage space: 2.81 GB (0%)
           Maximum Shadow Copy Storage space: 531 GB (30%)

        Shadow Copy Storage association
           For volume: (F:)\\?\Volume{2a2...94f}\
           Shadow Copy Storage volume: (F:)\\?\Volume{2a2...94f}\
           Used Shadow Copy Storage space: 334 GB (17%)
           Allocated Shadow Copy Storage space: 337 GB (18%)
           Maximum Shadow Copy Storage space: UNBOUNDED (922154758%)

    (Yeah, the "percent calculation" for UNBOUNDED is seriously bogus.) I've run SFC /verifyonly and it seems happy. I've verified that the "Volume Shadow Copy" service starts when I start the backup operation. Any suggestions?
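
    For reference, a sketch of the resize command being fiddled with; the size here is illustrative, and /On must name the volume that actually hosts the shadow storage:

        REM cap the shadow storage on the backup target instead of leaving it UNBOUNDED
        vssadmin resize shadowstorage /For=F: /On=F: /MaxSize=400GB
        REM then re-check the association
        vssadmin list shadowstorage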


  • Cisco 837 not passing UDP traffic properly (was: DNS query problem)

    - by TessellatingHeckler
    We have a setup of ADSL line - Cisco 837 ADSL router - Zyxel ZyWall 35 firewall/NAT - switch - LAN. It had been fine for years; suddenly DNS resolution stopped working from the LAN to public DNS servers. No changes that I know of, so I can't revert anything. Current behaviour:

    - DNS requests from the LAN using TCP show up in the outbound firewall log, in the Cisco debug log, in the dns-server-firewall, and in tcpdump on the DNS server; the answer comes back; it works fine.
    - DNS requests from the LAN using UDP show up in the outbound firewall log and in the Cisco debug log, but do NOT show up in the dns-server-firewall or in tcpdump on the DNS server; they time out.
    - DNS requests from the Cisco itself using UDP show up in the dns-server-firewall and in tcpdump on the DNS server; the answer is received; it works fine.
    - netcat connections to port 53 or a random port by TCP show up in the dns-server-firewall.
    - netcat connections to port 53 or a random port by UDP do not show up in the dns-server-firewall.

    Summary: TCP seems fine throughout. UDP works from the Cisco over the ADSL, and it works from the LAN to the Cisco, but it doesn't seem to cross the Cisco 837 properly.

    Update: confirmed with netcat that any UDP traffic from the LAN is affected, not just traffic to port 53.

    Update: if I change the firewall's external IP to any other IP in the subnet, this starts working. When I put it back, it stops working. I now suspect it's an ISP issue (does that sound plausible?), and am removing the Cisco config.
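
    The netcat probes mentioned above can be reproduced along these lines (a sketch; the host name and port 5000 are placeholders, and option order differs between OpenBSD and traditional netcat):

        # on a box beyond the Cisco (e.g. the DNS server): listen on an arbitrary UDP port
        nc -u -l 5000            # traditional netcat wants: nc -u -l -p 5000
        # on a LAN client: send a datagram through the router and watch whether it arrives
        echo probe | nc -u dns-server.example.com 5000
        # drop the -u on both ends to confirm the TCP path still passes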


  • Split horizon, route filtering, and having RIPv2 announce a non-attached route to host

    - by Paul
    Routers A, B & C live at 10.1.1.1, 10.1.1.2 and 10.1.1.3 on a /24 metro Ethernet subnet. Each router also has its own private subnet on another interface. Router B's private subnet links thru a firewall to a 10.20.20.0 network at another organization. Router B redistributes to A and C several static routes for hosts on 10.20.20.0. However, a new host 10.20.20.5/32 must be reached via a different path that goes through router C. I know that C can advertise this host-based route with no problem, but I'd like to keep all my 10.20.20.x static routes in one place. So, how can B tell A via RIPv2 to send packets for 10.20.20.5/32 to C?

    So far it looks like I need "no ip split-horizon" on router B's 10.1.1.2 interface, perhaps because B has already learned from C other routes with a next hop of 10.1.1.3. But how does RIPv2 split horizon with "no auto-summary" and "network 10.0.0.0" really work? If B learns a route to ANY 10.x.x.x network or host from A or C, is that enough for split horizon to keep it from redistributing "ip route 10.20.20.5 255.255.255.255 10.1.1.3"? And if I want to suspend split horizon only for this one new host, how do I filter out the mess of regurgitated routes that B advertises when I try "no ip split-horizon"? Thanks much.
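
    For the filtering half of the question, one approach (sketched below under an assumed interface name; Ethernet0 stands in for B's 10.1.1.2 metro interface) is to disable split horizon but pin down what B may advertise with an outbound distribute-list, so only the intended routes leak back onto the shared subnet:

        ! hedged sketch: suppress regurgitated routes once split horizon is off
        interface Ethernet0
         no ip split-horizon
        !
        router rip
         version 2
         no auto-summary
         network 10.0.0.0
         distribute-list 10 out Ethernet0
        !
        access-list 10 permit 10.20.20.5 0.0.0.0
        access-list 10 permit 10.20.20.0 0.0.0.255
        ! also permit B's own private subnet here, then deny the rest
        access-list 10 deny   any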


  • ERROR with rpm_check_debug vs depsolve

    - by Frank Thornton
        Transaction Summary
        ================================================================================
        Install       9 Package(s)
        Upgrade     227 Package(s)
        Remove        1 Package(s)

        Total size: 252 M
        Downloading Packages:
        Running rpm_check_debug
        ERROR with rpm_check_debug vs depsolve:
        libasound.so.2()(64bit) is needed by libgcj-4.4.7-4.el6.x86_64
        libasound.so.2(ALSA_0.9)(64bit) is needed by libgcj-4.4.7-4.el6.x86_64
        ** Found 15 pre-existing rpmdb problem(s), 'yum check' output follows:
        alsa-lib-devel-1.0.22-3.el6.x86_64 has missing requires of alsa-lib = ('0', '1.0.22', '3.el6')
        alsa-lib-devel-1.0.22-3.el6.x86_64 has missing requires of libasound.so.2()(64bit)
        alsa-utils-1.0.22-5.el6.x86_64 has missing requires of libasound.so.2()(64bit)
        alsa-utils-1.0.22-5.el6.x86_64 has missing requires of libasound.so.2(ALSA_0.9)(64bit)
        alsa-utils-1.0.22-5.el6.x86_64 has missing requires of libasound.so.2(ALSA_0.9.0rc4)(64bit)
        alsa-utils-1.0.22-5.el6.x86_64 has missing requires of libasound.so.2(ALSA_0.9.0rc8)(64bit)
        frontpage-2002-SR1.2.i386 has missing requires of libexpat.so.0
        gstreamer-plugins-base-0.10.29-2.el6.x86_64 has missing requires of libasound.so.2()(64bit)
        gstreamer-plugins-base-0.10.29-2.el6.x86_64 has missing requires of libasound.so.2(ALSA_0.9)(64bit)
        gstreamer-plugins-base-0.10.29-2.el6.x86_64 has missing requires of libasound.so.2(ALSA_0.9.0rc4)(64bit)
        libgcj-4.4.7-3.el6.x86_64 has missing requires of libasound.so.2()(64bit)
        libgcj-4.4.7-3.el6.x86_64 has missing requires of libasound.so.2(ALSA_0.9)(64bit)
        1:qt-x11-4.6.2-26.el6_4.x86_64 has missing requires of libasound.so.2()(64bit)
        1:qt-x11-4.6.2-26.el6_4.x86_64 has missing requires of libasound.so.2(ALSA_0.9)(64bit)
        1:qt-x11-4.6.2-26.el6_4.x86_64 has missing requires of libasound.so.2(ALSA_0.9.0rc4)(64bit)
        Your transaction was saved, rerun it with: yum load-transaction /tmp/yum_save_tx-2013-12-23-22-364infzT.yumtx
        root@www1 [~]#

    I did some research, and this is due to a 32-bit binary trying to install itself, or a broken repo?

        root@www1 [~]# yum repolist
        Loaded plugins: fastestmirror, security
        Loading mirror speeds from cached hostfile
         * base: centos.mirror.lstn.net
         * extras: mirror.ash.fastserv.com
         * updates: ftp.usf.edu
        repo id     repo name                                            status
        base        CentOS-6 - Base                                      6,284+83
        dag         Dag RPM Repository for Red Hat Enterprise Linux      4,559+91
        extras      CentOS-6 - Extras                                    14
        updates     CentOS-6 - Updates                                   247+39
        repolist: 11,104

    Now I disabled the epel and rpmforge repos and still ended up with the same issues. Ideas?
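
    A hedged starting point for untangling the pre-existing rpmdb problems, which usually have to be fixed before the big transaction will run (assumes the yum-utils package is available):

        yum install yum-utils        # provides package-cleanup
        package-cleanup --problems   # list broken dependencies recorded in the rpmdb
        package-cleanup --dupes      # look for duplicate package versions
        yum reinstall alsa-lib       # try restoring the libasound.so.2 everything complains about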


  • How to list rpm packages/subpackages sorted by total size

    - by smci
    Looking for an easy way to postprocess rpm -q output so it reports the total size of all subpackages matching a regexp, e.g. see the aspell* example below. (Short of scripting it with Python/Perl/awk, which is the next step.)

    (Motivation: I'm trying to remove a few GB of unnecessary packages from a CentOS install, so I'm trying to track down things that are a) large, b) unnecessary, and c) not dependencies of anything useful like GNOME. Ultimately I want to pipe the output through sort -n to see what the space hogs are, before doing rpm -e.)

    My reporting command looks like [1]:

        cat unwanted | xargs rpm -q --qf '%9{size} %{name}\n' > unwanted.size

    and here's just one example where I'd like to see rpm's total for all aspell* subpackages:

        root# rpm -q --qf '%9{size} %{name}\n' `rpm -qa | grep aspell`
          1040974 aspell
         16417158 aspell-es
          4862676 aspell-sv
          4334067 aspell-en
         23329116 aspell-fr
         13075210 aspell-de
         39342410 aspell-it
          8655094 aspell-ca
         62267635 aspell-cs
         16714477 aspell-da
         17579484 aspell-el
         10625591 aspell-no
         60719347 aspell-pl
         12907088 aspell-pt
          8007946 aspell-nl
          9425163 aspell-cy

    Three extra nice-to-have things:

    1. List the dependencies/depending packages of each group (so I can figure out the uninstall order).
    2. Group them by package group; that would be totally neat.
    3. Human-readable size units like 'M'/'G' (like ls -h does). Can be done with regexp and rounding on the size field.

    Footnote: I'm surprised up2date and yum don't add this sort of intelligence. Ideally you would want to see a tree of group-package-subpackage, with rolled-up sizes.

    Footnote 2: I see yum erase aspell* does actually produce this summary, but not in a query command.

    [1] where unwanted is a text file of unnecessary packages obtained by diffing the output of:

        yum list installed | sed -e 's/\..*//g' > installed.txt
        diff --suppress-common-lines centos4_minimal.txt installed.txt | grep '>'

    and centos4_minimal.txt came from the Google doc given by that helpful blogger.
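
    A sketch of the awk post-processing step the question defers, assuming subpackages share the prefix before the first hyphen (true for aspell-*, not for every package family):

        # sum installed sizes by package-name prefix, biggest hogs last
        rpm -qa --qf '%{size} %{name}\n' \
          | awk '{ n = $2; sub(/-.*/, "", n); sum[n] += $1 }
                 END { for (g in sum) printf "%12d %s\n", sum[g], g }' \
          | sort -n
        # for human-readable sizes, pipe through "numfmt --to=iec" where coreutils
        # is new enough (it is not on a CentOS 4 era system)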


  • WIM2VHD failing with "Cannot derive Volume GUID from mount point."

    - by Jacob
    I'm trying to use WIM2VHD according to the instructions on Scott Hanselman's blog post to create a Sysprepped VHD image to boot from. I've installed the WAIK, and I have my Windows 7 sources mounted as a virtual drive. When I try to run WIM2VHD like this:

        cscript WIM2VHD.wsf /wim:F:\sources\install.wim /sku:Ultimate /vhd:E:\WindowsSeven.vhd /size:30721

    I get the following log:

        Log for WIM2VHD 6.1.7600.0 on 11/2/2009 at 10:51:18.16
        Copyright (C) Microsoft Corporation. All rights reserved.

        MACHINE INFO:
        Build=7600
        Platform=x86fre
        OS=Windows 7 Ultimate
        ServicePack=
        Version=6.1
        BuildLab=win7_rtm
        BuildDate=090713-1255
        Language=en-ZA

        INFO: Looking for IMAGEX.EXE...
        INFO: Looking for BCDBOOT.EXE...
        INFO: Looking for BCDEDIT.EXE...
        INFO: Looking for REG.EXE...
        INFO: Looking for DISKPART.EXE...
        INFO: Session key is E01E1ED7-C197-4814-BDE4-43B73E14FCC4
        INFO: Inspecting the WIM...
        INFO: Configuring and formatting the VHD...
        *******************************************************************************
        Error: 0: Cannot derive Volume GUID from mount point.
        *******************************************************************************
        INFO: Unmounting the VHD due to error...
        WARNING: In order to help resolve the issue, temporary files have not been
        deleted. They are in:
        C:\Users\Jacob\AppData\Local\Temp\WIM2VHD.WSF\E01E1ED7-C197-4814-BDE4-43B73E14FCC4

        Summary: Errors: 1, Warnings: 1, Successes: 0

        INFO: Done.

    Any ideas?
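
    The failure happens right after DISKPART creates and formats the VHD, while the script tries to map the new volume's mount point to a volume GUID. A hedged way to narrow it down is to do those steps by hand and see what the OS reports (paths match the command line above):

        REM list every volume GUID and the mount point it maps to
        mountvol
        REM recreate the VHD manually to see whether DISKPART itself succeeds
        diskpart
        DISKPART> create vdisk file=E:\WindowsSeven.vhd maximum=30721
        DISKPART> attach vdisk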


  • How do I install the pdo_mysql driver on Red Hat Enterprise Linux 6.1?

    - by Will Martin
    I have a RHEL box running PHP 5.3.3, which was installed using the binary packages provided by yum. I have installed the php-pdo package:

        # yum info php-pdo
        Loaded plugins: product-id, rhnplugin, subscription-manager
        Updating Red Hat repositories.
        Installed Packages
        Name        : php-pdo
        Arch        : x86_64
        Version     : 5.3.3
        Release     : 3.el6_1.3
        Size        : 168 k
        Repo        : installed
        From repo   : rhel-x86_64-server-6
        Summary     : A database access abstraction module for PHP applications
        URL         : http://www.php.net/
        License     : PHP
        Description : The php-pdo package contains a dynamic shared object that will add
                    : a database access abstraction layer to PHP. This module provides
                    : a common interface for accessing MySQL, PostgreSQL or other
                    : databases.

    It appears to be working correctly for SQLite databases, but not MySQL. There's no file including pdo_mysql.so in /etc/php.d, and there is no copy of pdo_mysql.so in /usr/lib64/php/modules. I'm pretty sure I just need the driver file and a line in the PHP configuration. A yum search pdo mysql didn't turn up any useful packages, and Google has failed me. If I were on Ubuntu or Debian, I'd apt-get install php5-mysql and be done with it. So... where in Red Hat land do I get a copy of pdo_mysql.so, and install it properly?
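
    For what it's worth, on RHEL and CentOS the PDO MySQL driver ships inside the php-mysql package rather than anything with "pdo" in its name, so a sketch of the usual fix is:

        yum install php-mysql          # provides pdo_mysql.so plus its /etc/php.d config
        php -m | grep -i pdo_mysql     # confirm the module now loads
        service httpd restart          # pick the module up in Apache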


  • If spaces in filenames are possible, why do some of us still avoid using them?

    - by Chris W. Rea
    Somebody I know expressed irritation today regarding those of us who tend not to use spaces in our filenames, e.g. NamingThingsLikeThis.txt, despite most modern operating systems supporting spaces in filenames. Non-technical people must look at filenames created by geeks and wonder where we learned English. So, what are the reasons that spaces in filenames are avoided or discouraged? The most obvious reason I could think of, and why I typically avoid it, is the extra quotes required on the command line when dealing with such files. Are there any other significant reasons, other than the practice being a vestigial preference?

    UPDATE: Thanks for all your answers! I'm surprised how popular this was. So, here's a summary. Six reasons why geeks prefer filenames without spaces in them:

    1. It's irritating to put quotes around them when referenced on the command line (or elsewhere).
    2. Some older operating systems didn't support them, and us old dogs are used to that.
    3. Some tools still don't support spaces in filenames at all, or not very well. (But they should.)
    4. It's irritating to escape spaces when used where spaces must be escaped, such as URLs.
    5. Certain unenlightened services (e.g. file hosting, webmail) remove or replace spaces anyway!
    6. Names without spaces can be shorter, which is sometimes desirable as paths are limited.
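
    Reason 1 is easy to demonstrate; a minimal sketch of the classic failure mode (the file name is a placeholder):

        $ touch "my file.txt"
        $ for f in $(ls *.txt); do echo "got: $f"; done   # unquoted expansion splits on the space
        got: my
        got: file.txt
        $ for f in *.txt; do echo "got: $f"; done         # globbing keeps the name intact
        got: my file.txt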


  • Add bookmarks to Delicious and Google Bookmarks at the same time

    - by BrianH
    I have used delicious.com (or back then, del.icio.us) to store my bookmarks for a long time now, and I love it. I was looking through some of my Google services and realized they have a bookmarking service that integrates with your Google searches (I thought they had a bookmarking service before, but it went away? Maybe not). I like Delicious just fine; I'm not interested in leaving. But I also like how my Google bookmarks are highlighted (and, I'm guessing, brought to the top) in my search results, so I can easily tell if I've bookmarked a site (kind of like the "promote up" feature). I can't even count the number of times I've searched for a site only to find I'd been there months or years ago. If sites I've bookmarked in the past are highlighted in my search results, it makes it easier to pick which search result to go to.

    My question is around bookmarking tools: is there a bookmarklet or Firefox addon that will let me save a bookmark to multiple services at the same time, in this case Google and Delicious? Or maybe a service to sync my Delicious bookmarks to Google Bookmarks on a regular basis? I have used the Delicious addon since the beginning; it would just be nice to add a bookmark to multiple services with one addon. For that matter, it would be nice to add Evernote into the mix: click one button to save the page to Evernote, and bookmark the page in Google and Delicious.

    EDIT on 7/30/2009 - Summary: A proposed solution is to use the Delicious addon and the GMarks addon to keep the 2 services in sync. I was not able to get the 2 addons to keep everything in sync, so it was also suggested to use the Google Toolbar with the Delicious addon to keep everything in sync. Although I personally have reservations about letting Google know about every single site I visit, I believe this solution will work, so I am accepting it as the answer. I still wish there was a solution that would let you post a bookmark/page to multiple services at the same time (Delicious, Google, Evernote, Digg, Diigo, etc.). Thanks!


  • laptop motherboard "shorts" when connected to adapter

    - by Bash
    Disclaimer: I'm sort of a noob, and this is a long post. Thank you all in advance!

    Summary: a completely dead laptop with no signs of life whatsoever (suddenly, for no apparent reason).

    Here's the deal: a Lenovo Y470 (only a few months old, with no water or shock damage). It stopped working suddenly (no lights, no sound, even when connecting the adapter, with or without the battery). I tried a different adapter (same electrical rating), but no luck. I disassembled the thing completely and tried plugging in the adapter and looking for signs of life with all different combinations of components installed (tried all combinations of RAM, CPU, USB power cords, screen, etc. plugged in). No luck.

    Then I noticed (as I was plugging in the adapter to try for the millionth time) that there was a "spark" for an instant when I first connect the adapter to the power jack. The adapter's LED would then flash (indicating it isn't working or charging). So I thought the power jack had a short of some sort (due to bad soldering or something). I scanned virtually every single component on the motherboard and tested the power jack connections with a multimeter. No shorts or damage to anything on the entire motherboard. Now I'm thinking I need to replace the motherboard.

    But, my actual question: what does this "shorting" when connecting the adapter signify? (BTW, the voltage across the power connections and the current through them drop to virtually zero when the adapter is connected and "sparks", and they stay that way.) The bewildering thing is that there are no damaged components, and the voltage across the adapter terminals returns to normal after I disconnect it (so the adapter is not damaged).

    Please take a look at the pictures (of the motherboard's power connection and nearby components) and see if I'm missing something completely obvious... Links to pictures and laptop and motherboard model: pictures on DropBox. Motherboard model: LA-6881P. Laptop model: Lenovo IdeaPad Y470.


  • IIS 7.5 / Windows 7: Error 500.19, error code 0x800700b7

    - by nikhiljoshi
    I have been trying to resolve this issue. I am using Windows 7 and VS2008 + IIS 7.5. My project is stuck because of this error. The error says:

        Error Summary
        HTTP Error 500.19 - Internal Server Error
        The requested page cannot be accessed because the related configuration data for the page is invalid.

        Detailed Error Information
        Module          IIS Web Core
        Notification    BeginRequest
        Handler         Not yet determined
        Error Code      0x800700b7
        Config Error    There is a duplicate 'system.web.extensions/scripting/scriptResourceHandler' section defined
        Config File     \\?\C:\inetpub\wwwroot\test23\web.config
        Requested URL   http://localhost:80/test23
        Physical Path   C:\inetpub\wwwroot\test23
        Logon Method    Not yet determined
        Logon User      Not yet determined

        Config Source
        15: <sectionGroup name="scripting" type="System.Web.Configuration.ScriptingSectionGroup, System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35">
        16:   <section name="scriptResourceHandler" type="System.Web.Configuration.ScriptingScriptResourceHandlerSection, System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35" requirePermission="false" allowDefinition="MachineToApplication"/>
        17:   <sectionGroup name="webServices" type="System.Web.Configuration.ScriptingWebServicesSectionGroup, System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35">

    I followed the instructions in this Microsoft solution document, but it didn't help: http://support.microsoft.com/kb/942055
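
    A hedged note on the usual cause: this duplicate-section error often appears when a .NET 3.5-era web.config (which declares the scripting section group itself, as in the Config Source above) runs in an application pool targeting .NET 4.0, where machine.config already declares it. Two common remedies, sketched (the pool name is a placeholder):

        REM option 1: run the site in an app pool whose runtime matches the 3.5 config
        %windir%\system32\inetsrv\appcmd set apppool "test23Pool" /managedRuntimeVersion:v2.0
        REM option 2: edit web.config instead, deleting the duplicated
        REM <sectionGroup name="scripting"> block from <configSections>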


  • How do I install PHP 5.3 on CentOS?

    - by fivelitresofsoda
    I have to install PHP 5.3 on my CentOS server. If I do yum install php, the base repository installs 5.1.6, which is too old for the applications I need to install. So I've been trying to use the IUS repository, following the official instructions from IUS:

        [root@linuxbox ~]# wget http://dl.iuscommunity.org/pub/ius/stable/Redhat/5/x86_64/ius-release-1-2.ius.el5.noarch.rpm
        [root@linuxbox ~]# wget http://dl.iuscommunity.org/pub/ius/stable/Redhat/5/x86_64/epel-release-1-1.ius.el5.noarch.rpm
        [root@linuxbox ~]# rpm -Uvh ius-release*.rpm epel-release*.rpm

    OK. Now I simply do yum install php53, etc. for all I need... but I get this error:

        Running rpm_check_debug
        Running Transaction Test
        Finished Transaction Test
        Transaction Check Error:
          file /usr/bin/php from install of php53u-cli-5.3.4-3.ius.el5.x86_64 conflicts with file from package php-cli-5.1.6-27.el5_5.3.x86_64
          file /usr/bin/php-cgi from install of php53u-cli-5.3.4-3.ius.el5.x86_64 conflicts with file from package php-cli-5.1.6-27.el5_5.3.x86_64
          file /usr/share/man/man1/php.1.gz from install of php53u-cli-5.3.4-3.ius.el5.x86_64 conflicts with file from package php-cli-5.1.6-27.el5_5.3.x86_64
          file /etc/php.ini from install of php53u-common-5.3.4-3.ius.el5.x86_64 conflicts with file from package php-common-5.1.6-27.el5_5.3.x86_64

        Error Summary
        -------------

    I have no idea how to solve this. I think I have to delete the base packages. However, as someone new to Linux, I don't know how to do that.
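
    The stock 5.1 packages and the IUS php53u packages cannot coexist, so the usual sequence (a sketch; review what yum proposes to remove before confirming, since dependent packages can be dragged along) is:

        yum remove php php-cli php-common      # drop the stock 5.1.6 packages first
        yum install php53u php53u-cli php53u-common php53u-mysql
        php -v                                 # should now report PHP 5.3.x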


  • Generic RPM package for Python 2.x

    - by RaphDG
    I have a python application; it can run on Python >= 2.6 and it's architecture independent. I need the rpm package of this application to be installed on Fedora 14 (Python 2.7) and CentOS 6.2 (Python 2.6). I currently use mock to build one rpm package for each "flavour", and it works well. I apparently can't install the CentOS-compiled rpm on Fedora. It gives me this error message:

        error: Failed dependencies:
                python(abi) = 2.6 is needed by myapp-0.9.el6.noarch

    Here is the relevant part of my .spec file:

        %{!?python_sitelib: %global python_sitelib %(%{__python} -c "from distutils.sysconfig import get_python_lib; print(get_python_lib())")}
        %{!?python_sitearch: %global python_sitearch %(%{__python} -c "from distutils.sysconfig import get_python_lib; print(get_python_lib(1))")}

        Name:           myapp
        Version:        #VERSION#
        Release:        #RELEASE#%{dist}
        Summary:        myapp

        Group:          Development/Languages
        License:        Apache v2
        Source0:        %{name}-%{version}-#RELEASE#.tar.gz
        BuildRoot:      %{_tmppath}/%{name}-%{version}-%{release}-root-%(%{__id_u} -n)
        BuildArch:      noarch

        BuildRequires:  python-devel
        BuildRequires:  python-setuptools

        %description
        myapp

        %prep
        %setup -c

        %build
        %{__python} setup.py build

        %install
        %{__rm} -rf %{buildroot}
        %{__python} setup.py install -O1 --skip-build --root %{buildroot}

    Do I really have to use mock and build 2 rpms, or is there another way to create a single generic 2.x rpm package?
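
    One hedged route to a single noarch package: the python(abi) dependency is generated automatically from the byte-compiled files, so the spec can turn automatic requires off and declare a looser requirement by hand. The trade-off is that every other automatic requirement disappears too, so real dependencies must then be listed manually:

        # added to the spec preamble -- disable rpm's automatic requires generation
        AutoReq:        0
        Requires:       python >= 2.6

    The .pyc files baked into the package still target the build interpreter, but Python falls back to the .py source when the byte-code magic number does not match, so a pure-Python app generally survives the version skew.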


  • Creating mdraid device on top of other existing mdraid devices

    - by Dmitriusan
    I'm considering creating something like a "hierarchical raid" and wondering whether it is possible using pure mdraid. Moreover, I'm going to boot from this device. I'm using Ubuntu Server 12.04 LTS with the Grub2 bootloader.

    The motivation: I have 4 x 1 TB 7200 rpm disks. Two are newer and faster (up to 200 MB/s) and the other two are slower (up to 140 MB/s). I want to create a RAID-0 device from them. When creating such a RAID-0 directly from the 4 hard disks, I get a combined speed of up to ~480 MB/s. That is roughly 4 * 120 MB/s, so the RAID-0 runs at the speed of its slowest device.

    I have an idea to create a separate RAID-0 md0 device from 500 GB partitions of the slower hard disks. Theoretically, this md0 device will have a speed of 2 * 140 = 240~280 MB/s. After that, I'm going to add this md0 device to a RAID-0 with the faster disks, finishing with up to 3 * 200 = 600 MB/s. The stripe width for this raid will be 2x bigger than for the underlying raid with the slow disks.

    Questions are:

    - Is it possible, or am I missing something?
    - Will that work as expected?
    - Can I boot from such a consolidated raid device?
    - Any better ideas? Any pitfalls?

    I don't want to use fakeraid for consolidating the slow disks, for multiple reasons (portability, ability to customize parameters and so on).

    PS: Speed is needed for a home virtualization server, and just for experience/fun. Reliability is provided via regular automatic backups to a separate device.

    PPS: I also considered using a different stripe width for hard disks with different speeds in a single raid, but mdraid does not seem to support that.
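
    mdadm does accept an md device as a member of another array, so the layered layout can at least be expressed. A sketch under assumed device names (sda/sdb fast, sdc/sdd slow), leaving the boot question aside:

        # inner RAID-0 over 500 GB partitions on the two slow disks
        mdadm --create /dev/md0 --level=0 --raid-devices=2 /dev/sdc1 /dev/sdd1
        # outer RAID-0: the two fast disks plus the inner array as the third member
        mdadm --create /dev/md1 --level=0 --raid-devices=3 /dev/sda1 /dev/sdb1 /dev/md0
        cat /proc/mdstat   # confirm both arrays assembled

    Whether Grub2 can assemble the nested set early enough to boot from it is the riskier half; a small conventional /boot partition outside the array sidesteps that.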


  • Toshiba laptop cd drive read causes OS to totally freeze

    - by Fujishiro
    Okay, I'll try to write an understandable summary. Forgive me if I fail in that attempt.

    So: there is a Toshiba Satellite notebook. It has Windows 7 x86 Professional (OEM) installed on it, and everything is fine (okay... somewhat).

    The problem: if you put an audio disc, or any kind of disc, into the drive, something starts to eat the PC. Back when the owner told me about this, he had put an audio disc into the lappy. Winamp caused the IO load: 100%. Tried taskkill, taskkill /T, tried PowerShell, EVERYTHING. You just can NOT kill Winamp, or whatever becomes the blocker at that time. Even if you kill almost everything, the laptop won't do a clean shutdown. I also tried the force switch on 'shutdown' from cmd, but no use. (So: at these times you can use the laptop, but the blocker/explorer/disc becomes gray as a non-responding app. You can try to kill them, but that won't work, nor can you shut down the machine.) (I also tried using the PID, but no use. To find the highest IO I used "select columns" in Task Manager and enabled the IO columns.)

    My first hunch was a problematic disc: autoplay tries to read and read (which still shouldn't kill the PC). Disabled autoplay, removed Winamp. Tried other software, etc. Everything was OK. A few days later the owner put a disc into the machine and it started to reproduce the same symptoms, but with a totally different disc.

    What else to know: a virus is not an option; the machine is protected by BitDefender (valid license) and Spybot. Thanks if you have ANY idea about this strange problem.

    ps.: For now, the owner uses Daemon Tools + Blindwrite as an alternative for those apps which wouldn't start without the disc.


  • can't access SATA card config screen on boot, nor access the disks

    - by Ronald
    We've just upgraded our file server to an ASUS P6T WS Pro board, running FreeBSD 8.2-RELEASE and using ZFS to manage 12 WD20EARS disks. Since our 3ware card has been giving us trouble, we started using the six on-board SATA connectors and got a SuperMicro USAS2-L8i to provide eight more ports. Mechanically, the card is an awkward fit, but electrically it all seems OK.

    Upon boot, the LSI controller shows up and states that pressing Ctrl-C will bring up the LSI Config Utility. When doing that, the message changes to state that the utility will be started after initialization; however, that never happens. There does seem to be an error message that's displayed only too briefly to read, and it seems to be about PCI and "not enough space". (That message is pushed off by a hardware summary, and I've found no way to scroll back at this point.) The disks do not show up in any recognizable way after booting, either.

    I found a hint in another discussion to check the address mapping on either the card or the motherboard BIOS, but have found no way to do that. So what I tried on a hunch is to disable everything that's on-board, including the network adapters, Firewire controller and SATA. In fact, after doing that, I can successfully launch the LSI Config Utility. As far as I can tell, all looks well in there, and when booting in that configuration it also displays a list of the disks connected to it, which looks just fine as well.

    The only problem now is that I can't boot that way, because I need the on-board SATA controller and network adapters. As soon as I re-enable any of them, I'm back to square one. That discussion I mentioned about mapping addresses said to try D000, then D7FF, then DFFF, in order. The LSI Config Utility shows the card address as D000 but offers no way of changing it. Any tips or insights would be appreciated.


  • AD server within another network - DNS issues

    - by Harry Muscle
    Here's a quick summary of the environment I support: we have a domain (domain A) that has about 20 client computers. The domain server for this domain and all the clients sit within the network infrastructure of a larger domain (domain B). All the computers get their network settings via DHCP from domain B's servers. I have no control over domain B and am unable to make changes to anything to do with it.

    The problem I have is that currently, in order for my domain's (domain A's) clients to be able to resolve the domain server and the shares on it, they have their DNS server IP address set to domain A's domain server (via the default GPO). Unfortunately, when a laptop (Windows or Mac) gets taken home, it is still looking for the domain server as its DNS server and obviously can't access the internet correctly outside of our environment.

    Ideally I need a solution where the machines use domain A's domain server as their DNS when inside the office and use whatever DNS server DHCP gives them when they are outside the office. However, since I have no control over the office DHCP server, I'm not sure how this can be accomplished. Any help and advice that anyone can offer is highly appreciated.

    Thanks, Harry

    P.S. The solution I'm trying to find needs to require no involvement from the user.
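
    Half of this is solvable on domain A's DNS server itself: if it forwards everything it is not authoritative for to domain B's resolvers, clients can keep pointing at it while in the office and still resolve the internet. A hedged sketch (the forwarder addresses are placeholders for domain B's DNS servers):

        REM on domain A's DNS server
        dnscmd /ResetForwarders 10.0.0.10 10.0.0.11

    The other half, falling back to DHCP-assigned DNS off-site, is the part that still needs a per-laptop answer.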


  • snort analysis of wireshark capture

    - by Ben Voigt
    I'm trying to identify trouble users on our network. ntop identifies high-traffic and high-connection users, but malware doesn't always need high bandwidth to really mess things up. So I am trying to do offline analysis with snort (I don't want to burden the router with inline analysis of 20 Mbps traffic). Apparently snort provides a -r option for this purpose, but I can't get the analysis to run. The analysis system is Gentoo, amd64, in case that makes any difference. I've already used oinkmaster to download the latest IDS signatures. But when I try to run snort, I keep getting the following error:

        % snort -V

           ,,_     -*> Snort! <*-
          o"  )~   Version 2.9.0.3 IPv6 GRE (Build 98) x86_64-linux
           ''''    By Martin Roesch & The Snort Team: http://www.snort.org/snort/snort-team
                   Copyright (C) 1998-2010 Sourcefire, Inc., et al.
                   Using libpcap version 1.1.1
                   Using PCRE version: 8.11 2010-12-10
                   Using ZLIB version: 1.2.5

        % snort -v -r jan21-for-snort.cap -c /etc/snort/snort.conf -l ~/snortlog/
        (snip)
        273 out of 1024 flowbits in use.
        [ Port Based Pattern Matching Memory ]
        +- [ Aho-Corasick Summary ] -------------------------------------
        | Storage Format    : Full-Q
        | Finite Automaton  : DFA
        | Alphabet Size     : 256 Chars
        | Sizeof State      : Variable (1,2,4 bytes)
        | Instances         : 314
        |     1 byte states : 304
        |     2 byte states : 10
        |     4 byte states : 0
        | Characters        : 69371
        | States            : 58631
        | Transitions       : 3471623
        | State Density     : 23.1%
        | Patterns          : 3020
        | Match States      : 2934
        | Memory (MB)       : 29.66
        |   Patterns        : 0.36
        |   Match Lists     : 0.77
        |   DFA
        |     1 byte states : 1.37
        |     2 byte states : 26.59
        |     4 byte states : 0.00
        +----------------------------------------------------------------
        [ Number of patterns truncated to 20 bytes: 563 ]
        ERROR: Can't find pcap DAQ!
        Fatal Error, Quitting..

    net-libs/daq is installed, but I don't even want to capture traffic; I just want to process the capture file. What configuration options should I be setting/unsetting in order to do offline analysis instead of real-time capture?
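
    A hedged avenue: even for -r read-back, Snort 2.9 routes everything through a DAQ module, so it still has to find the pcap DAQ's shared object. Something along these lines shows what Snort can see and points it at the right directory (the daq directory path is a guess for this layout):

        snort --daq-list                          # list the DAQ modules snort can find
        snort --daq pcap --daq-dir /usr/lib64/daq \
              -r jan21-for-snort.cap -c /etc/snort/snort.conf -l ~/snortlog/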


  • How do I create and conveniently search through Libraries in Windows 8?

    - by mtone
    In Windows 7, I got into the habit of adding most of my frequently accessed disk areas as Libraries; there were about a dozen. Typing a word in the Start menu would then give me a summary of matches by Library. For example, searching for "WPF" would tell me that I've got some results in the Books library, in the Coding library, and a few other PDFs in the Downloads library, one of which I could then expand to see all results within.

    In Windows 8, that functionality appears to be gone. The Search function in the Charms Bar lists tons of results by type (Documents, Pictures, et cetera) but not by Library. This is practically useless, since Documents contains hundreds of .txt and .cs files, a few of which might be Books or Downloads.

    The only option I found is to go into Explorer and use the search bar in the Library section. However, there again, all search results are mixed together, and I can't seem to find a way to know which Library each result came from (in the Details view, I didn't find a Library column I could add). So, if I want to know which Library contains stuff about a given topic, I have to search the Libraries one by one. Very inconvenient.

    Is Microsoft slowly deprecating libraries? Any tips? How else can I search through libraries?


  • (solved) `ssh foo "<command/>"` not loading remote aliases?

    - by TomRoche
    Summary: why does this fail

        $ ssh foo 'R --version | head -n 1'
        bash: R: command not found

    but this succeed?

        $ ssh foo 'grep -nHe "bashrc" ~/.bash_profile'
        /home/me/.bash_profile:3:# source the users .bashrc if it exists
        /home/me/.bash_profile:4:if [ -f "${HOME}/.bashrc" ] ; then
        /home/me/.bash_profile:5:  source "${HOME}/.bashrc"

        $ ssh foo 'grep -nHe "\WR\W" ~/.bashrc'
        /home/me/.bashrc:118:alias R='/share/linux86_64/bin/R'

        $ ssh foo '/share/linux86_64/bin/R --version | head -n 1'
        R version 2.14.1 (2011-12-22)

    Details: I am a (rootless) user on 2 clusters. One uses environment modules, so any given server on that cluster can provide (via module add) pretty much the same resources. The other cluster, on which I must also unfortunately work, has servers managed individually, so I get in the habit of doing, e.g.,

        EXEC_NAME='whatever'
        for SERVER in 'foo' 'bar' 'baz' ; do
          ssh ${SERVER} "${EXEC_NAME} --version"
        done

    This works fine for packages installed normally/consistently, but often (for reasons unknown to me) packages are not, e.g. (compare the alias below to the alias above):

        $ ssh bar 'R --version | head -n 1'
        bash: R: command not found

        $ ssh bar 'grep -nHe "bashrc" ~/.bash_profile'
        /home/me/.bash_profile:3:# source the users .bashrc if it exists
        /home/me/.bash_profile:4:if [ -f "${HOME}/.bashrc" ] ; then
        /home/me/.bash_profile:5:  source "${HOME}/.bashrc"

        $ ssh bar 'grep -nHe "\WR\W" ~/.bashrc'
        /home/me/.bashrc:118:alias R='/share/linux/bin/R'

        $ ssh bar '/share/linux86_64/bin/R --version | head -n 1'
        R version 2.14.1 (2011-12-22)

    Using aliases copes well with these install differences when I interactively shell into the server, but fails when I try to script ssh commands (as above); i.e.,

        # interactively
        $ ssh foo
        ...
        foo> R --version

    calls my alias for R on remote host=foo, but

        # scripting
        $ ssh foo 'R --version'

    doesn't. What do I need to do to make ssh foo "<command/>" load my aliases on the remote host?
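
    The "(solved)" in the title usually resolves this way: ssh runs the remote command in a non-interactive shell, and non-interactive bash does not expand aliases at all. A hedged workaround is to force an interactive remote shell:

        # -i makes the remote bash interactive, so ~/.bashrc is read and aliases expand;
        # -t allocates a tty, which quiets bash's job-control complaints
        ssh -t foo 'bash -ic "R --version"' | head -n 1

    Calling the full path, as the last working example above does, or exporting an adjusted PATH in a file the non-interactive shell does read, are the less fragile alternatives.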


  • How can I get more low memory with the following setup?

    - by user539484
    Modules using memory below 1 MB:

        Name         Total       =   Conventional   +   Upper Memory
        --------  ----------------  ----------------  ----------------
        MSDOS       14 317   (14K)    14 317   (14K)         0    (0K)
        HIMEM        1 120    (1K)     1 120    (1K)         0    (0K)
        EMM386       3 120    (3K)     3 120    (3K)         0    (0K)
        OAKCDROM    36 064   (35K)    36 064   (35K)         0    (0K)
        POWER           80    (0K)        80    (0K)         0    (0K)
        NLSFUNC      2 784    (3K)     2 784    (3K)         0    (0K)
        COMMAND      2 928    (3K)     2 928    (3K)         0    (0K)
        MSCDEX      15 712   (15K)    15 712   (15K)         0    (0K)
        SMARTDRV    30 384   (30K)    13 984   (14K)    16 400   (16K)
        KEYB         6 752    (7K)     6 752    (7K)         0    (0K)
        MOUSE       17 296   (17K)    17 296   (17K)         0    (0K)
        DISPLAY      8 336    (8K)         0    (0K)     8 336    (8K)
        SETVER         512    (1K)         0    (0K)       512    (1K)
        DOSKEY       4 144    (4K)         0    (0K)     4 144    (4K)
        POWER        4 672    (5K)         0    (0K)     4 672    (5K)
        Free       552 944  (540K)   539 088  (526K)    13 856   (14K)

    Memory Summary:

        Type of Memory        Total   =      Used   +       Free
        ----------------  ----------   ----------   ----------
        Conventional         653 312      114 224      539 088
        Upper                 47 920       34 064       13 856
        Reserved                   0            0            0
        Extended (XMS)*   64 898 256    2 671 824   62 226 432
        ----------------  ----------   ----------   ----------
        Total memory      65 599 488    2 820 112   62 779 376

        Total under 1 MB     701 232      148 288      552 944

        Total Expanded (EMS)    33 947 648  (33 152K)
        Free Expanded (EMS)*    33 538 048  (32 752K)

        * EMM386 is using XMS memory to simulate EMS memory as needed.
          Free EMS memory may change as free XMS memory changes.

        Largest executable program size       538 976  (526K)
        Largest free upper memory block         7 488    (7K)
        MS-DOS is resident in the high memory area.

    I'm running MS-DOS 6.22 on VMware virtual hardware. This is the memory state after a MemMaker pass, so I'm looking for optimization beyond MemMaker. Note: the NLS drivers (DISPLAY, KEYB, NLSFUNC) are essential for me.

    Thanks to @mtone for the valuable reminder about MSCDEX /E, which gave me 16 KiB of low memory (see the diff)!
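
    The biggest conventional-memory consumers above are OAKCDROM (35K), MOUSE (17K) and MSCDEX (15K), so the classic next step is loading them into upper memory. A hedged CONFIG.SYS/AUTOEXEC.BAT sketch (whether each one fits depends on the UMBs EMM386 can carve out; the 7K largest free UMB reported above is the real constraint, so the include range below tries to claim the unused mono video area):

        REM CONFIG.SYS
        DEVICE=C:\DOS\HIMEM.SYS
        DEVICE=C:\DOS\EMM386.EXE RAM I=B000-B7FF
        DOS=HIGH,UMB
        DEVICEHIGH=C:\DOS\OAKCDROM.SYS /D:MSCD001

        REM AUTOEXEC.BAT -- load TSRs high where possible
        LH C:\DOS\MSCDEX.EXE /D:MSCD001 /E
        LH C:\DOS\MOUSE.COM
        LH C:\DOS\SMARTDRV.EXE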


  • Can not copy files from NTFS partition

    - by Ali
    I am experiencing a weird problem. I was running Xubuntu on my laptop until yesterday, when I had to delete Xubuntu and install Windows. I had an NTFS partition on my Xubuntu system that I kept some files on. Today, after installing Windows, I wanted to move all the files from that partition to an external HDD. I selected all the files and folders and clicked Copy, then I went to the HDD and clicked Paste, but nothing happened. I cannot do it, and I do not know why. I copy the files, and wherever I click Paste, nothing happens. If I try to copy the files and folders one by one, I can copy some of them, but some of them do not move.

    The other problem I have is that I cannot open some files, in particular PDF files. When I click on PDF files I get this error:

        There was an error opening this document. This file cannot be found.

    Also, I cannot play some MP4 files, and I cannot open some JPG and TXT files. For those I get this error:

        The directory name is invalid.

    So in summary, after removing Xubuntu and installing Windows 7, I have the following problems with one of the NTFS partitions on my internal drive:

    - Cannot copy or cut all folders and files from that partition to any other partition; I also do not get any errors.
    - Can copy some folders and files.
    - Cannot access some PDF, JPEG, TXT and MP4 files, and get the above errors.

    I should also mention I did not change anything on this partition during the installation or while formatting the other partitions.
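
    A hedged first step for exactly these symptoms ("file cannot be found", "directory name is invalid" on a volume last written by another OS) is an NTFS consistency check; the drive letter is a placeholder for the affected partition:

        REM from an elevated Command Prompt
        chkdsk D: /f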


  • .htm pages working but .aspx pages throwing errors

    - by Mike
    Our site has thousands of visitors per day, but we've been receiving reports from some of our members that they are able to hit our main page, presumably because it's a .htm page, but when they click off to a .aspx page, they get an error. I've done as much research as I know how, and here is what I have come up with:

    - We have not made any changes to IIS on our server in months.
    - We have a couple of customers that have been willing to work with us and provide information about their systems. One customer is running Vista, the other is running XP.
    - We had one of the customers test both Firefox and MSIE. She gets the same error in both.
    - One customer said, "We were able to post the profile and searched on available jobs and it worked w/ Firefox for a day, then it just quit working... we did not change any settings to Firefox after we posted the profile."
    - We asked the customer to clear their cache and try again. They responded with, "We just cleared the cache and got the same error; btw, I periodically clear the cache -- almost every day."

    Summary:

    - We have thousands of customers that hit our site with no problem.
    - We can't reproduce these errors.
    - These customers are getting the same error in different browsers.
    - They are only getting the error on .aspx pages.
    - They still get the error after clearing their cache.

    We would appreciate any thoughts on what other questions we could ask these customers, or thoughts on how we can further troubleshoot this problem.

