Search Results

Search found 4243 results on 170 pages for 'spanning tree'.


  • Underbraces in Word math zones and dealing with stretchy parentheses

    - by Johannes Rössel
    Parentheses in Word usually stretch with whatever they're containing. This might be unnoticeable for things like but for stuff like it's definitely nice, especially compared to the fact that naïve LaTeX users often produce uglinesses such as. There is a problem, however, when using under-/overbraces in math: putting parentheses around the complete term becomes ugly. For simple things like the one shown here this can be solved by not letting the parentheses stretch, which looks almost right. However, for more complex things it's certainly not an option: both variants look horrible. So is there a way of letting the parentheses stretch only around the actual term parts, not including the under-/overbraces? Those are frequently used for annotating individual pieces, so simply not using them is a bad idea too. In LaTeX you can get away with guesswork and explicit sizes for the parentheses instead of relying on \left and \right, but I haven't found a comparable option in Word yet. Since the underbrace is (tree-wise) a sibling of the term in parentheses, it probably simply has to stretch, and there probably can't be an algorithm that determines when to stretch and when not, considering that \above and \below are used for annotations as well but also for other things where parentheses have to stretch. Also, since the parenthesized expression is opaque from the outside, one has to put the underbrace inside, from a markup point of view at least. One could probably draw the rest around, but that falls apart when styles change and wouldn't be a good idea either.
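
    A minimal LaTeX sketch of that explicit-sizing workaround (the sizes are picked by eye, which is exactly the guesswork mentioned above; amsmath assumed for \text):

        % \Bigl/\Bigr pin the parenthesis size manually, so the underbrace
        % annotation no longer inflates the parentheses the way \left/\right do.
        \documentclass{article}
        \usepackage{amsmath}
        \begin{document}
        \[
          \Bigl( \underbrace{a+b}_{\text{sum}} \Bigr)^2
          \qquad\text{vs.}\qquad
          \left( \underbrace{a+b}_{\text{sum}} \right)^2
        \]
        \end{document}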

    Read the article

  • Excel 2007 pivot table does not aggregate properly

    - by Patrick
    I am using an Excel pivot table to summarize some data and just found a problem. The problem deals with how aggregate values are calculated. Let's say I have a table of data with three columns (Name, Date, Value) and I create a pivot table where Name and then Date are used as Row Labels and Value is the aggregate value, i.e. Average. The pivot table will look something like this:
        +John        .3450
            5/14/2010    1.234
            5/15/2010    3.450
            5/16/2010   -3.25
    What I think should happen here is that the values for each date are averaged, and then those daily values are averaged to produce the value in the same row as the name, John. But that is not what it does. It takes the average for each date, which it shows across from the date, but then instead of taking the average of those numbers, it goes back to the raw data and computes the average of all of John's values. It should show the average of the daily averages to match the tree hierarchy, but instead it just shows me the average over all of John's values. In essence it only aggregates at one level, while visually creating sub-levels that it is not using. Does anyone know how to change this, or understand by what logic this makes sense? Why would I create any sub-groupings if I cannot compute aggregates on them?
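
    To make the distinction concrete (the row counts here are hypothetical, since the question doesn't show the underlying data): the mean of John's three daily means is (1.234 + 3.450 - 3.25) / 3 ≈ 0.478, but if, say, 5/14 had ten underlying rows and the other two dates one row each, Excel pools all twelve raw rows, which is how it can show .3450 next to John instead of the average of the visible sub-rows.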

    Read the article

  • A* vs. internet routing pathfinding

    - by alan2here
    In many respects pathfinding algorithms like A*, which find the shortest route through a graph, are similar to the pathfinding the internet performs when routing traffic. However, the pathfinding routers perform seems to have remarkable properties. As I understand it: It's very performant. New nodes can be added at any time, taking a free address from a finite (not tree-like) address space. It's real routing, like A*; there's never any doubling back, for example. IP addresses don't have to be geographically nearby. The network reacts quickly to changes in the network's shape, for example when a line is down. Routers share information, and it takes time for new IPs to be registered everywhere, but presumably each router doesn't have to store a list of all the addresses that each of its directions leads most directly to. I can't find this information elsewhere, but I don't know where to look or what search terms to use. I'm looking for a basic, general, high-level description of the algorithm's workings, from the point of view of an individual router.
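
    One piece of the puzzle, for what it's worth: routers don't store per-address lists at all. A forwarding table holds prefixes, and lookup is longest-prefix match, so a single entry like 192.168.0.0/16 -> interface 3 covers 65,536 addresses at once; that aggregation is what keeps tables small even though the address space itself is flat. Useful search terms would be "longest-prefix match", "distance-vector", "link-state", and "BGP".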

    Read the article

  • How to get automatic upgrades to work on Ubuntu Server?

    - by J. Pablo Fernández
    I followed the documentation for enabling automatic upgrades in Ubuntu servers, but it's not really updating anything at all. My /etc/apt/apt.conf.d/50unattended-upgrades looks almost like the default. // Automatically upgrade packages from these (origin, archive) pairs Unattended-Upgrade::Allowed-Origins { "Ubuntu karmic-security"; "Ubuntu karmic-updates"; }; // List of packages to not update Unattended-Upgrade::Package-Blacklist { // "vim"; // "libc6"; // "libc6-dev"; // "libc6-i686"; }; // Send email to this address for problems or packages upgrades // If empty or unset then no email is sent, make sure that you // have a working mail setup on your system. The package 'mailx' // must be installed or anything that provides /usr/bin/mail. Unattended-Upgrade::Mail "[email protected]"; // Automatically reboot *WITHOUT CONFIRMATION* if a // the file /var/run/reboot-required is found after the upgrade //Unattended-Upgrade::Automatic-Reboot "false"; The directory /var/log/unattended-upgrades/ is empty. Running /etc/init.d/unattended-upgrades start is not very nice: root@mozart:~# /etc/init.d/unattended-upgrades start Checking for running unattended-upgrades: root@mozart:~# Something seems to be broken, but I'm not sure why. I have pending updates and they are not being applied: root@mozart:~# aptitude safe-upgrade Reading package lists... Done Building dependency tree Reading state information... Done Reading extended state information Initializing package states... Done The following packages will be upgraded: linux-libc-dev 1 packages upgraded, 0 newly installed, 0 to remove and 0 not upgraded. Need to get 0B/743kB of archives. After unpacking 4096B will be used. Do you want to continue? [Y/n/?] In all the servers I have, unattended upgrades seem to have been disabled: root@mozart:~# apt-config shell UnattendedUpgradeInterval APT::Periodic::Unattended-Upgrade root@mozart:~# Any ideas what I'm missing?
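
    For what it's worth, a way to exercise the machinery by hand and to enable the periodic job (the empty apt-config output above suggests it is switched off) might be:

        # Run one unattended-upgrades pass manually, with verbose output:
        sudo unattended-upgrade --dry-run --debug

        # Enable the daily apt job; this writes something like the following
        # into /etc/apt/apt.conf.d/20auto-upgrades:
        #   APT::Periodic::Update-Package-Lists "1";
        #   APT::Periodic::Unattended-Upgrade "1";
        sudo dpkg-reconfigure -plow unattended-upgrades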

    Read the article

  • installing Conkeror on Ubuntu 12.04

    - by Menelaos Perdikeas
    I am reading the instructions on the conkeror site (and elsewhere) on how to install conkeror on Ubuntu (I am using Ubuntu 12.04 LTS), and it seems that the correct sequence is: sudo apt-add-repository ppa:xtaran/conkeror sudo apt-get update sudo apt-get install conkeror conkeror-spawn-process-helper The first step (apt-add-repository) seems to execute without a problem, giving the following output: You are about to add the following PPA to your system: Conkeror Debian packages for Ubuntu releases without xulrunner (i.e. for 11.04 Natty and later) More info: https://launchpad.net/~xtaran/+archive/conkeror Press [ENTER] to continue or ctrl-c to cancel adding it Executing: gpg --ignore-time-conflict --no-options --no-default-keyring --secret-keyring /tmp/tmp.Re7pWaDEQF --trustdb-name /etc/apt/trustdb.gpg --keyring /etc/apt/trusted.gpg --primary-keyring /etc/apt/trusted.gpg --keyserver hkp://keyserver.ubuntu.com:80/ --recv CB29CBE050EB1F371BAB6FE83BE0F86A6D689050 gpg: requesting key 6D689050 from hkp server keyserver.ubuntu.com gpg: key 6D689050: "Launchpad PPA for Axel Beckert" not changed gpg: Total number processed: 1 gpg: unchanged: 1 However, apt-get update seems unable to fetch packages from the newly added PPA, with its output ending in: Hit http://security.ubuntu.com precise-security/restricted Translation-en Hit http://security.ubuntu.com precise-security/universe Translation-en Err http://ppa.launchpad.net precise/main Sources 404 Not Found Ign http://extras.ubuntu.com precise/main Translation-en_US Err http://ppa.launchpad.net precise/main i386 Packages 404 Not Found Ign http://extras.ubuntu.com precise/main Translation-en Ign http://ppa.launchpad.net precise/main Translation-en_US Ign http://ppa.launchpad.net precise/main Translation-en W: Failed to fetch http://ppa.launchpad.net/xtaran/conkeror/ubuntu/dists/precise/main/source/Sources 404 Not Found W: Failed to fetch http://ppa.launchpad.net/xtaran/conkeror/ubuntu/dists/precise/main/binary-i386/Packages 404 Not Found E: Some index files failed to download. They have been ignored, or old ones used instead. Accordingly, apt-get install conkeror fails with: mperdikeas@mperdikeas:~$ sudo apt-get install conkeror Reading package lists... Done Building dependency tree Reading state information... Done E: Unable to locate package conkeror Any ideas what might be wrong?
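
    Sketching a cleanup, in case the PPA simply publishes nothing for precise (the list filename below is a guess; check the directory for the real name):

        # Drop the PPA's apt source and refresh the indexes:
        sudo rm /etc/apt/sources.list.d/xtaran-conkeror-*.list
        sudo apt-get update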

    Read the article

  • knife on Windows inconsistently reads ~\.chef\knife.rb on Management Workstation

    - by gWaldo
    I am implementing a new instance of Chef (open-source v10.12) in an existing environment. Currently the environment is mostly Windows, but more Linux is being introduced. I have used Chef in a previous gig, however that was a *nix-only environment. Because this is a primarily-Windows environment, my main workstation is Windows 7 (x64), and I use Powershell as my main terminal. I created a ~\.chef directory, populated with a knife.rb and my client.pem file. When I run knife client list from ~, I get the expected results. I keep my work in Dropbox just in case my laptop should fail or be stolen. When I run knife client list from the repo directory (C:\Users\waldo\Dropbox\_company\projects\chef), I get ERROR: Your private key could not be loaded from C:/home/waldo/.chef/waldog.pem Check your configuration file and ensure that your private key is readable (Note that the path is incorrect.) This is the progression as I walk up the tree towards ~, running knife client list: C:\Users\waldo\Dropbox\_company\projects\ => Above error C:\Users\waldo\Dropbox\_company\ => Above error C:\Users\waldo\Dropbox\ => It works! (Expected results) C:\Users\waldo\ => Expected results C:\Users\waldo\Documents\ => Expected Results C:\Users\waldo\Documents\GitHub => Expected Results C:\Users\waldo\Documents\GitHub\aProject\ => Expected Results What. The. Eff! Now, I know that I can add -c path\to\knife.rb, but that's a HUGE PITA. Question is: why is knife inconsistently reading my ~\.chef\knife.rb, and how can I get around that without incurring carpal tunnel?
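
    One thing that might be worth checking: knife walks upward from the current directory looking for a .chef directory before it falls back to ~\.chef, so a stray knife.rb somewhere under C:\Users\waldo\Dropbox\_company\ (note the C:/home/waldo path in the error, which has to come from some config file) would produce exactly this pattern. Something like Get-ChildItem -Recurse -Force -Filter knife.rb from the Dropbox root should flush it out.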

    Read the article

  • uname -a gives wrong version of kernel in gentoo?

    - by freedrull
    Hi, I'm running Gentoo, and uname -a reports the wrong kernel version. tony@P_P-o ~ $ uname -a Linux P_P-o 2.6.27-gentoo-r8 #12 SMP PREEMPT Sun Nov 8 19:46:59 PST 2009 i686 Genuine Intel(R) CPU T2060 @ 1.60GHz GenuineIntel GNU/Linux Running eix gentoo-sources shows that I have a later version than that installed: tony@P_P-o ~ $ eix gentoo-sources [U] sys-kernel/gentoo-sources Available versions: (2.6.16-r13) 2.6.16-r13!b!s (2.6.25-r9) 2.6.25-r9!b!s (2.6.26-r4) 2.6.26-r4!b!s (2.6.27-r8) 2.6.27-r8!b!s (2.6.27-r10) 2.6.27-r10!b!s (2.6.28-r5) 2.6.28-r5!b!s (2.6.28-r6) 2.6.28-r6!b!s (2.6.29-r5) 2.6.29-r5!b!s (2.6.29-r6) 2.6.29-r6!b!s (2.6.30) ~2.6.30!b!s (2.6.30-r3) ~2.6.30-r3!b!s (2.6.30-r4) 2.6.30-r4!b!s (2.6.30-r5) 2.6.30-r5!b!s (2.6.30-r6) 2.6.30-r6!b!s (2.6.30-r7) 2.6.30-r7!b!s (2.6.30-r8) 2.6.30-r8!b!s (2.6.31) ~2.6.31!b!s (2.6.31-r1) ~2.6.31-r1!b!s (2.6.31-r2) ~2.6.31-r2!b!s (2.6.31-r3) ~2.6.31-r3!b!s (2.6.31-r4) ~2.6.31-r4!b!s {build symlink ultra1} Installed versions: 2.6.27-r8(2.6.27-r8)!b!s(07:48:25 PM 06/19/2009)(-build -symlink) 2.6.28-r5(2.6.28-r5)!b!s(12:35:17 PM 06/08/2009)(-build -symlink) 2.6.29-r5(2.6.29-r5)!b!s(07:44:33 PM 06/19/2009)(-build -symlink) 2.6.30-r6(2.6.30-r6)!b!s(11:14:45 PM 10/02/2009)(-build -symlink) Homepage: http://dev.gentoo.org/~dsd/genpatches Description: Full sources including the Gentoo patchset for the 2.6 kernel tree What gives?
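
    A sanity check, on the hunch that the newer sources were merged but a newer kernel was never built and installed:

        # What is running vs. what the source symlink points at:
        uname -r
        eselect kernel list

        # Merging gentoo-sources only installs the source tree; the kernel
        # itself still has to be built and copied into /boot, e.g.:
        #   cd /usr/src/linux && make && make modules_install && make install
        ls /boot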

    Read the article

  • Why does this preseed for gitolite fail?

    - by troutwine
    I'm installing gitolite on a Debian Squeeze box with the following preseed: gitolite gitolite/gituser string git gitolite gitolite/adminkey string ssh-rsa AAAAB3ECT gitolite gitolite/gitdir string /var/lib/git On installation: # debconf-set-selections /var/cache/debconf/gitolite.preseed # apt-get install gitolite Reading package lists... Done Building dependency tree Reading state information... Done Suggested packages: git-daemon-run gitweb The following NEW packages will be installed: gitolite 0 upgraded, 1 newly installed, 0 to remove and 26 not upgraded. Need to get 0 B/114 kB of archives. After this operation, 348 kB of additional disk space will be used. Preconfiguring packages ... Selecting previously deselected package gitolite. (Reading database ... 24715 files and directories currently installed.) Unpacking gitolite (from .../gitolite_1.5.4-2+squeeze1_all.deb) ... Setting up gitolite (1.5.4-2+squeeze1) ... adduser: The home dir must be an absolute path. dpkg: error processing gitolite (--configure): subprocess installed post-installation script returned error exit status 1 configured to not write apport reports Errors were encountered while processing: gitolite E: Sub-process /usr/bin/dpkg returned an error code (1) Why? The preseed was extracted from a manually configured installation, per here, and works without issue on another machine.
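
    One way to check whether the preseed actually took (debconf-get-selections lives in the debconf-utils package); the "home dir must be an absolute path" message comes from adduser, which hints that gitolite/gitdir arrived empty or relative rather than as /var/lib/git:

        sudo apt-get install debconf-utils
        debconf-get-selections | grep gitolite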

    Read the article

  • Why do lighttpd and fastcgi keep sending me the *.fcgi file instead of the website content?

    - by e-satis
    I have the following config: server.modules = ( "mod_compress", "mod_access", "mod_alias", "mod_rewrite", "mod_redirect", "mod_secdownload", "mod_h264_streaming", "mod_flv_streaming", "mod_accesslog", "mod_auth", "mod_status", "mod_expire", "mod_fastcgi" ) [...] fastcgi.server = ( ".php" => (( "bin-path" => "/usr/bin/php-cgi", "socket" => "/var/tmp/lighttpd/php-fastcgi.socket" + var.PID, "max-procs" => 1, "kill-signal" => 9, "idle-timeout" => 10, "bin-environment" => ( "PHP_FCGI_CHILDREN" => "200", "PHP_FCGI_MAX_REQUESTS" => "1000" ), "/pyapps/essai/blondes.fcgi" => ( "main" => ( "socket" => "/var/tmp/lighttpd/django-fastcgi.socket", ), ), "bin-copy-environment" => ( "PATH", "SHELL", "USER" ), "broken-scriptfilename" => "enable" ))) [...] $HTTP["host"] =~ "(^|www\.)cam\.com(\:[0-9]*)?$" { server.document-root = "/home/cam/web/" accesslog.filename = "/home/cam/log/access.log" server.errorlog = "/home/cam/log/error.log" server.follow-symlink = "enable" # files to check for if .../ is requested server.indexfiles = ( "index.php", "index.html", "index.htm", "index.rb") url.rewrite = ( "^(/blondes/.*)$" => "/pyapps/essai/blondes.fcgi$1" ) } I have the following dir tree: /home/tv/web/ `-- pyapps `-- essai `-- __init__.py `-- blondes.fcgi `-- blondes.pid `-- django-fcgi.py `-- manage.py `-- manage.pyo `-- plop `-- settings.py `-- urls.py No error when restarting lighttpd. Then I run: ./manage.py runfcgi method=prefork socket=/var/tmp/lighttpd/django-fastcgi.socket daemonize=false pidfile=blondes.pid No errors either. I then go to http://cam.com/blondes/. It offers me an empty file to download. I checked permissions but everything is set to the same user and group, and they work for the PHP site. The file /var/tmp/lighttpd/django-fastcgi.socket exists. When I reload the page, I get no output in error logs, nor in the manage.py runfcgi command. I probably missed something obvious, but what?
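
    A hunch, sketched rather than tested: in the config above the blondes.fcgi entry sits nested inside the ".php" handler, so lighttpd may never associate the rewritten URL with the Django backend and just serves the .fcgi file verbatim. A separate top-level entry, in the shape the lighttpd/Django docs use, would look something like:

        fastcgi.server = (
            ".php" => ((
                # ... the existing PHP options, unchanged ...
            )),
            "/pyapps/essai/blondes.fcgi" => (
                "main" => (
                    "socket" => "/var/tmp/lighttpd/django-fastcgi.socket",
                    "check-local" => "disable",
                )
            ),
        )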

    Read the article

  • Why is the Volume Shadow Copy service stopping?

    - by David Mackintosh
    I am running Windows 7 Professional, 64-bit. I am running a backup-over-the-internet software client which depends on the Volume Shadow Copy Services running. Since I installed Service Pack 1 (or rather, didn't object when Windows Update forced Service Pack 1 on me) the backup service is failing to back everything up because VSC isn't running. Most of the time it fails to back up such noise as the Security Essentials database or the Messenger Live contact list -- stuff I really don't care about -- but I don't want to fall into the trap of accepting an Error-state backup as "normal". At the recommendation of the backup software, I have set the VSC service startup mode to be Automatic. When I look in the Event Log, System channel I can see at boot time: The Volume Shadow Copy service entered the running state. ...and then two or three minutes later: The Volume Shadow Copy service entered the stopped state. How do I figure out why VSC is stopping? At the suggestion of the backup vendor, I have already followed the suggestions from http://support.microsoft.com/default.aspx/kb/940184 net stop SENS net stop EventSystem net start EventSystem net start SENS net stop COMSysApp net stop SwPrv net stop VSS cd /d C:\Windows\system32 regsvr32 ole32.dll /s regsvr32 oleaut32.dll /s regsvr32 vss_ps.dll /s vssvc /register /s regsvr32 /i swprv.dll /s regsvr32 /i eventcls.dll /s regsvr32 es.dll /s regsvr32 stdprov.dll /s regsvr32 vssui.dll /s regsvr32 msxml.dll /s regsvr32 msxml3.dll /s regsvr32 msxml4.dll /s net start SwPrv net start VSS net start ProtectedStorage ...and per http://support.microsoft.com/kb/940184 I have deleted the key tree HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\EventSystem\{26c409cc-ae86-11d1-b616-00805fc79216}\Subscriptions I have also run chkdsk /F and chkdsk /R on both permanent hard disks. (I had a similar problem with another computer (same OS, same failure, same start point after SP1 install) but the problem went away when I forced Volume Shadow Copy Services to Automatic startup rather than Manual. I did not have to resort to following the Microsoft KB instructions.)
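
    One data point that may defuse part of the worry: on Windows 7 the VSS service is demand-started; it spins up when something requests a shadow copy and stops again after sitting idle, so a started/stopped pair in the event log a few minutes apart is normal even with startup set to Automatic. A more telling check is whether vssadmin list writers (from an elevated prompt) reports every writer as stable, with no errors, at the time the backup actually runs.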

    Read the article

  • Trouble getting latest version of Git

    - by TheMethod
    I am using Ubuntu 10.04 LTS. I'm looking at using git as source control for personal projects and Github as a remote repository. I was having trouble pushing a commit to my remote github repo, getting the following error message: The requested URL returned error: 403 while accessing https://github.com/Jstall/helloworld.git/info/refs When I did some digging I found that the problem could be me not having the latest version of Git. When I ran git --version I found that I have version 1.7.0.4 locally. So I tried to update git using: sudo apt-get install git but get the following error: Reading package lists... Done Building dependency tree Reading state information... Done Package git is not available, but is referred to by another package. This may mean that the package is missing, has been obsoleted, or is only available from another source E: Package git has no installation candidate I've tried running: sudo apt-get update and trying again but it didn't seem to make a difference. I'm not sure if it's relevant but I'm also getting a couple of 404s when I run update: Err http://wine.budgetdedicated.com edgy/main Packages 404 Not Found Fetched 4,117B in 0s (5,142B/s) W: Failed to fetch http://us.archive.ubuntu.com/ubuntu/dists/edgy/universe/binary-i386/Packages.gz 404 Not Found [IP: 91.189.91.15 80] W: Failed to fetch http://wine.budgetdedicated.com/apt/dists/edgy/main/binary-i386/Packages.gz 404 Not Found I'm not sure what to try next. Could anyone suggest a course of action to get this resolved? Any advice would be appreciated. Thanks much!
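
    Two tentative observations: on 10.04 the version-control package is named git-core (the git name belonged to a different package back then), and the edgy-era 404s point at stale entries left in the apt sources. Something like:

        # Install the VCS under its lucid-era name:
        sudo apt-get install git-core

        # Find the stale edgy/wine entries behind the 404s:
        grep -r edgy /etc/apt/sources.list /etc/apt/sources.list.d/

        # For a git newer than lucid ships, the maintainers' PPA is an option:
        sudo add-apt-repository ppa:git-core/ppa
        sudo apt-get update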

    Read the article

  • How to grant read/write to specific user in any existent or future subdirectory of a given directory? [migrated]

    - by Samuel Rossille
    I'm a complete newbie in system administration and I'm doing this as a hobby. I host my own git repository on a VPS. Let's say my user is john. I'm using the ssh protocol to access my git repository, so my url is something like ssh://[email protected]/path/to/git/myrepo/. Root is the owner of everything that's under /path/to/git. I'm attempting to give john read/write access to everything under /path/to/git/myrepo. I've tried both chmod and setfacl to control access, but both fail the same way: they apply rights recursively (with the right options) to all currently existing subdirectories of /path/to/git/myrepo, but as soon as a new directory is created, my user cannot write in the new directory. I know that there are hooks in git that would allow me to reapply the rights after each commit, but I'm starting to think I'm going the wrong way, because this seems too complicated for a very basic purpose. Q: How should I set up my permissions to give john rw access to anything under /path/to/git/myrepo and make it resilient to tree structure changes? Q2: If I should take a step back and change the general approach, please tell me.
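
    For the record, a sketch of the default-ACL route (paths as in the question): setfacl -d sets a default ACL on directories, and defaults are inherited by anything created inside them later, which is exactly the resilience being asked for. The capital X grants execute on directories only.

        # Grant john rw on everything that exists now:
        sudo setfacl -R -m u:john:rwX /path/to/git/myrepo
        # And make it stick for files and directories created later:
        sudo setfacl -R -d -m u:john:rwX /path/to/git/myrepo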

    Read the article

  • Adding a Printer to my Print Server Failing

    - by Rudi Kershaw
    So, on the Windows Server page I read the following. Step 4: Add Network Printers Automatically Print Management (Printmanagement.msc) can automatically detect all the printers that are located on the same subnet as the computer on which you are running Print Management, install the appropriate printer drivers, set up the queues, and share the printers. To automatically add network printers to a printer server Open the Administrative Tools folder, and then double-click Print Management. In the Printer Management tree, right-click the appropriate server, and then click Add Printer. On the Printer Installation page of the Network Printer Installation Wizard, click Search the network for printers, and then click Next. If prompted, specify which driver to install for the printer. So, I have got to this point, made sure the printer (Canon MP620) is on and correctly plugged into the network. However, when I click "Search the network for printers", the wizard doesn't find it. Now, I can't get any further. Is there anything I could be doing wrong? How should I proceed moving forwards?
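
    One hedged suggestion: since, per the quoted documentation, the search only covers the server's own subnet, a printer sitting on any other subnet will never show up that way. In that case the same wizard's "Add a TCP/IP or Web Services Printer by IP address or hostname" option, fed the MP620's address (readable from the printer's own menu), is the usual way around.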

    Read the article

  • SVN hangs on commit - any suggestions for troubleshooting?

    - by Richard Beier
    We're having a problem with SVN... Subversion clients such as TortoiseSVN hang when we commit any more than a few files at a time to our server. Everything appears to actually be committed successfully to the repository; but the client hangs after all the data has been transmitted. We're using version 1.4.4 of the SVN server. We use the svn:// protocol rather than http to connect. We've reproduced this problem with several clients: TortoiseSVN (1.6.10), AnkhSVN (2.1), and the Silk command-line client (1.6.12). This is happening for everyone on the team, though some people seem to be more affected than others. If someone commits only a few files, it often works; but with more than half a dozen files, it usually hangs. Does anyone have troubleshooting suggestions? This has been happening sporadically for a while, but it's become pretty consistent lately. We've been working around the issue by killing the hung SVN client, doing "svn cleanup", and then doing "svn up"; but sometimes that causes tree conflicts. Another workaround is to blow away the workspace and check it out again after every commit; but of course that's pretty annoying. Are there any diagnostics that could help us troubleshoot this? We're considering upgrading to SVN 1.6 server, and installing the server on a new machine; but we're wondering if there's an easier solution. Thanks for your help, Richard
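
    One avenue worth ruling out, since the data clearly finishes transferring: the server runs the repository's post-commit hook synchronously and the client waits for it to exit, so a hook that blocks (mail notification, CI trigger, a script waiting on a lock) produces exactly this hang-after-transfer. Checking the repository's hooks/ directory on the server for a post-commit script, and timing it by hand, would be a cheap first test.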

    Read the article

  • Serve a specific set of error pages for different subdirectories

    - by navitronic
    I am currently trying to set up two different sets of error documents for separate folders within a website. I have 2 folders within the root of a site: demo/ live/ Any requests that return 404s or 403s within the demo folder need to load one set of pages for the Apache errordocuments, e.g. ErrorDocument 404 /statuses/demo-404.html ErrorDocument 403 /statuses/demo-403.html The live folder needs to go to similarly named files. ErrorDocument 404 /statuses/live-404.html ErrorDocument 403 /statuses/live-403.html So far I have tried placing an .htaccess file in both directories with the ErrorDocument directives set up pointing to the specific files; the 404 works fine and references the correct page. However, the 403s do not work and revert to the server default when trying to access forbidden folders within the demo directory. The logs indicate the following: [Wed Jun 16 04:47:44 2010] [crit] [client 115.64.131.144] (13)Permission denied: /home/abstract/public_html/demo/xxx/.htaccess pcfg_openfile: unable to check htaccess file, ensure it is readable Is this correct? Would Apache revert to default because it is trying to look for the htaccess in a folder it doesn't have permission in? Why wouldn't it work its way back through the folder tree? Can I make it do this?
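
    A guess at the mechanism, with a sketch: Apache has to read demo/xxx/.htaccess to learn about the custom 403, and it is the read of that very file that is being denied, so it falls back to the default page. Putting the directives in the vhost config instead of .htaccess sidesteps that, something like:

        <Directory /home/abstract/public_html/demo>
            ErrorDocument 404 /statuses/demo-404.html
            ErrorDocument 403 /statuses/demo-403.html
        </Directory>
        <Directory /home/abstract/public_html/live>
            ErrorDocument 404 /statuses/live-404.html
            ErrorDocument 403 /statuses/live-403.html
        </Directory>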

    Read the article

  • dpkg broken while upgrading Debian Etch to Lenny

    - by artvolk
    Good day! While trying to upgrade a box to lenny, it seems I've broken things. It upgraded libc and glib; after that, dpkg seems to be broken. I can run apt-get, but it gets a segmentation fault from dpkg: # apt-get -f install Reading package lists... Done Building dependency tree... Done 0 upgraded, 0 newly installed, 0 to remove and 316 not upgraded. 9 not fully installed or removed. Need to get 0B of archives. After unpacking 0B of additional disk space will be used. /bin/sh: line 1: 4606 Segmentation fault /usr/sbin/dpkg-preconfigure --apt E: Sub-process /usr/bin/dpkg received a segmentation fault. I can log in via SSH but even ls is not working: # ls Segmentation fault Is there anything I can do remotely via SSH? # ldd /bin/ls linux-gate.so.1 => (0xffffe000) librt.so.1 => /lib/tls/i686/cmov/librt.so.1 (0xb7fc8000) libacl.so.1 => /lib/libacl.so.1 (0xb7fc2000) libselinux.so.1 => /lib/libselinux.so.1 (0xb7fac000) libc.so.6 => /lib/i686/cmov/libc.so.6 (0xb7e51000) libpthread.so.0 => /lib/tls/i686/cmov/libpthread.so.0 (0xb7e3f000) /lib/ld-linux.so.2 (0xb7fd8000) libattr.so.1 => /lib/libattr.so.1 (0xb7e3b000) libdl.so.2 => /lib/i686/cmov/libdl.so.2 (0xb7e37000) libsepol.so.1 => /lib/libsepol.so.1 (0xb7df6000) It seems I've temporarily fixed it with: # touch /etc/ld.so.nohwcap From here: http://saintaardvarkthecarpeted.com/blog/archive/2005/08/_etc_ld_so_nohwcap.html
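
    (For anyone hitting the same thing: /etc/ld.so.nohwcap tells Debian's dynamic linker to ignore the hardware-capability-optimized library variants, such as the i686 builds under /lib/tls and /lib/i686 visible in the ldd output above, so the touch works by routing every binary to the plain generic libraries while the optimized ones are in a half-upgraded state.)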

    Read the article

  • PC power supply & normal range for voltages reported in BIOS hardware monitor?

    - by Chris W. Rea
    I'm trying to diagnose whether my computer has an ample power supply. Sometimes when I play a video-intensive game, both monitors lose the video signal, even though the computer remains on and sound keeps playing. A theory I have is: the video card isn't getting sufficient power. I can't imagine it's overheating because the machine is well-ventilated and the video card isn't hot to touch when this happens. Anyway, in my PC's BIOS there's a Hardware Monitor page, and among other voltages reported (such as CPU, DRAM, South Bridge, etc.) I can see the following values: 3.3V: 3.152V, 5V: 4.944V, 12V: 11.872V. Are those the voltages used by peripherals? What voltage should I be referencing if I want to know what my video card (PCI Express) is consuming? What is the normal range of values reported for those? My values above appear to be under by approximately 4.5%, 1.1%, and 1.1% respectively. Is that cause for concern? How else should I be determining if my power supply is "right-sized" for my PC and video card, or am I perhaps barking up the wrong tree?
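
    For reference, the ATX specification allows a ±5% tolerance on these rails, i.e. roughly 3.135-3.465 V, 4.75-5.25 V and 11.40-12.60 V, so all three readings above sit comfortably in range. The caveat is that the BIOS measures at idle; a reading taken under load (a software monitor logging the 12 V rail while the game runs) says far more about whether the supply sags when the video card draws hard.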

    Read the article

  • Applications randomly alt-tab? (especially full screen games)

    - by Henry Scotts
    I'm not sure when this began, how it happens, or why it happens, but it is quite bothersome and apparently random. Randomly throughout the day my computer will just go to the desktop. I could be in a full screen game and it will immediately alt-tab and present the desktop. Or I could be watching a movie and this happens. Sometimes it happens once every three hours and other times (just today actually) it did it twice in the span of 30 seconds. I am positive I am not pressing a hotkey because I launched a game, sat idle, and noticed it alt-tab while cleaning up around my room after about 20 minutes. Sometimes it goes days without this happening. Specs: Windows 7 Ultimate 64 Bit, 10 gigs of RAM, GeForce GTX 260, Intel Xeon CPU. I also have basically nothing running when it happens other than the game and FireFox. My FireFox add-ons: Adblock Plus, Download Statusbar, Firebug, FirePHP, lazarus form recovery, tree-style-tabs, yslow. I doubt FireFox is causing the issue but I figured I'd include it anyway because it is the only application I have running when it happens. As for user processes I have running: VCDDaemon (context menu for virtual clone drive), razerhid (mouse), OSD, taskhost, dmw (desktop window manager), anyfullscreengame, audiorepeater, netsession_win, explorer, razerofa, tsvncache, firefox, plugin-container, and EKIJ5000MUI (printer). Whew. Okay. That was a lot of information. If someone could diagnose this I would be most grateful, as it has been with me for years. Thanks for reading! PS: I doubt it's a virus because I never download illegal software and pretty much only browse Reddit and Stackexchange and play games. If it was a virus it would be a pretty lame one. Hah.

    Read the article

  • Win7 System folder contains infinitely looping SYSTEM(!) directory

    - by Matt
    My Windows 7 Enterprise computer has been crashing fairly frequently recently, so I decided to boot up in safe mode and run the TrendMicro client I have installed. It froze about 10 minutes into the full system scan, so in the spirit of http://whathaveyoutried.com, I started scanning each folder individually. When I got to ProgramData, the AV failed with an uncaught exception. I then went down a level and tried scanning Application Data, which failed as well. Imagine my surprise when I opened the folder just to see the same folder again! As far as I can tell, this folder loop continues indefinitely. (If you are trying to recreate this, keep in mind that ProgramData is a hidden folder.) I'm actually a bit concerned that these are system folders, as this is a brand-new computer with a clean installation. I guess I have three questions: Has anyone else seen/experienced this before? I'm running Win7 SP1. How do I fix this? I've run CHKDSK /F with no success (although it was incredibly slow). What are the ramifications of an infinitely recursive directory? Theoretically speaking, each link takes up memory, so shouldn't I have no space available on my hard drive? (I've got about 180GB left.) I noticed that the tree view on the left only shows the "linked folder" icon on the deeper folders--does this mean anything special? (I've circled the icons or lack thereof in red.) How can the OS even resolve this aberration? And above all, what would happen if I were to select "Expand all folders"??? :P Matt
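
    On question 1: this looks like the pair of legacy NTFS junctions Windows 7 creates for XP-era compatibility; C:\ProgramData\Application Data is a junction that points right back at C:\ProgramData, which is the loop. The junctions take no space (so question 3 answers itself: the tree is only infinite when followed, not on disk), and dir /aL C:\ProgramData from a command prompt will show them tagged <JUNCTION>. Normal software is denied traversal through them; a scanner that follows junctions chases its own tail exactly as described.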

    Read the article

  • Missing whole disk device in OpenSolaris

    - by Jeff Mc
    I have begun experimenting with Solaris and ZFS as a NAS. All was going very smoothly until I had a drive failure. When I replaced the drive, I no longer have a device file mapped to the whole disk. /dev/dsk/c7t3d0 does not exist but c7t2d0 and c7t4d0 both do. Also the sd@3,0:wd file under the /devices/ tree is non-existent. Do I have to prepare/partition the disk somehow to cause the whole disk device to exist? Here are a few outputs that might be useful. jeffmc@ats-ds2:/dev/dsk$ zpool status pool: datapool state: DEGRADED status: One or more devices could not be opened. Sufficient replicas exist for the pool to continue functioning in a degraded state. action: Attach the missing device and online it using 'zpool online'. see: http://www.sun.com/msg/ZFS-8000-2Q scrub: none requested config: NAME STATE READ WRITE CKSUM datapool DEGRADED 0 0 0 mirror-0 DEGRADED 0 0 0 c7t2d0 ONLINE 0 0 0 c7t3d0 UNAVAIL 0 0 0 cannot open mirror-1 ONLINE 0 0 0 c7t4d0 ONLINE 0 0 0 c7t5d0 ONLINE 0 0 0 jeffmc@ats-ds2:/dev/dsk$ zpool replace datapool c7t3d0 cannot open 'c7t3d0': no such device in /dev/dsk must be a full path or shorthand device name jeffmc@ats-ds2:/dev/dsk$ sudo format Searching for disks...done AVAILABLE DISK SELECTIONS: 0. c7t0d0 /pci@0,0/pci8086,3599@6/pci8086,330@0/pci1014,2cc@7,1/sd@0,0 1. c7t1d0 /pci@0,0/pci8086,3599@6/pci8086,330@0/pci1014,2cc@7,1/sd@1,0 2. c7t2d0 /pci@0,0/pci8086,3599@6/pci8086,330@0/pci1014,2cc@7,1/sd@2,0 3. c7t3d0 /pci@0,0/pci8086,3599@6/pci8086,330@0/pci1014,2cc@7,1/sd@3,0 4. c7t4d0 /pci@0,0/pci8086,3599@6/pci8086,330@0/pci1014,2cc@7,1/sd@4,0 5. c7t5d0 /pci@0,0/pci8086,3599@6/pci8086,330@0/pci1014,2cc@7,1/sd@5,0

    Read the article

  • vmware esxi 5, can't create snapshots and consolidate fails, how to delete old or consolidate redo logs?

    - by Scott Szretter
    I have a VM that seems to be working ok, but when VMWare DR (or I) tries to create a snapshot, it fails, and when I view the summary page of the VM it has a warning at the top showing that the disks need to be consolidated. So I go to snapshot manager for the VM and choose consolidate (in snapshot manager, there are no snapshots actually listed, by the way). It fails with this error: This virtual machine has 255 or more redo logs in a single branch of its snapshot tree. The maximum supported limit has been reached, creating new snapshots will not be allowed. To create new snapshots, please delete old snapshots or consolidate the redo logs. If I browse the data store (which has plenty of free space: 2 TB, and this VM is under 40 GB), in the vm folder, I do in fact see a bunch of files, numbered all the way to 0255: myvm-000255-ctk.vmdk myvm-000255-delta.vmdk myvm-000255.vmdk How can I clean all this up? Is there an SSH command line command, or can I delete some of the files safely? Thanks!
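
    One commonly used escape hatch, sketched with the filenames from the question (power the VM off first, and make sure the datastore has room for a full copy): vmkfstools -i clones a disk and collapses the entire redo-log chain into the clone, after which the VM can be repointed at the consolidated disk and the old chain deleted.

        # From the ESXi shell, inside the VM's datastore directory:
        vmkfstools -i myvm-000255.vmdk myvm-consolidated.vmdk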

    Read the article

  • How can I switch from a custom linux network namespace back to the default one?

    - by Martin
    With ip netns exec you can execute a command in a custom network namespace - but is there also a way to execute a command in the default namespace? For example, after executing these two commands: sudo ip netns add test_ns sudo ip netns exec test_ns bash How can the newly created bash execute programs in the default network namespace? There is no ip netns exec default or anything similar as far as I've found. My scenario is: I want to run a SSH server in a separate network namespace (to keep the rest of the system unaware of the network connection, as the system is used for network testing), but want to be able to execute programs in the default network namespace via the SSH connection. What I've found out so far: Created network namespaces are listed as files under /var/run/netns (but there is no file for the default namespace) The ip netns exec code can be found here: http://git.kernel.org/cgit/linux/kernel/git/shemminger/iproute2.git/tree/ip/ipnetns.c#n132 - I haven't grasped everything that it is doing yet, but it doesn't look very promising. ip netns identify $$ as suggested by Howto query and change network namespace on linux? returns nothing when in the default network namespace
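
    One approach that works, as a sketch (nsenter ships with util-linux, and PID 1 is assumed to live in the default network namespace):

        # From inside test_ns, run a shell back in init's netns:
        sudo nsenter --net=/proc/1/ns/net bash

        # Or give the default namespace a name ip netns understands
        # by bind-mounting init's netns into /var/run/netns:
        sudo mkdir -p /var/run/netns
        sudo touch /var/run/netns/default
        sudo mount --bind /proc/1/ns/net /var/run/netns/default
        sudo ip netns exec default bash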

    Read the article

  • OS X can connect to Windows machine, but can't access shared folders

    - by Bonnie
    I can create new folders on my Windows XP machine, set them to "shared". On my Mac, I pick Finder → Go → Connect to Server → smb://192.168.1.4 → Connect → Name / Password. It even shows me all the names of the newly created shared-folders on my PC, but when I try to actually connect to any of them I get connection failed, there was an error connecting Any idea on what would cause that? The fact that it successfully gets so far—to actually showing me my PC share-names—must mean I have 99% of this working correctly, i.e. the physical connection, the IP address, the user name, the password, etc. Still, I can't seem to access the folders themselves. I've tried this with my Windows XP firewall on/off, and Norton AntiVirus on/off. Same problem. Everything did work fine, 4 months ago. Were there any odd OS X or Windows updates released recently? I always apply them all. smbclient on the Mac does correctly find the XP machine, my XP user name, and accepts my XP password. I get the following from that smbclient command: Doing spnego session setup (blob length=16) server didn't supply a full spnego negprot Got challenge flags: ... Got NTLMSSP flags: ... Got NTLMSP flags: ... Domain=[XPMACHINE] OS=[Windows 5.1] Server=[Windows 2000 LAN Manager] tree connect failed: NT_STATUS_INSUFF_SERVER_RESOURCES I'm not sure why a standard XP box can't "supply a full spnego negprot". Whatever that means. Using XP's RegEdit to change my IRPStackSize from 11... to 13, 15, 20, 22... still gives that "NT_STATUS_INSUFF_SERVER_RESOURCES" error on the Mac.

    Read the article

  • Certificates required for WHQL-certified drivers

    - by Kasius
    The 64-bit Windows 7 image that we deploy to machines at our site does not contain all of the certificates included on a default Windows image. Automatic root certificate installation is also disabled per policy from higher in the organization. We have had a lot of trouble installing many WHQL-certified drivers from reputable companies (ex. HP, Lexmark, Dell, etc.), and I hypothesize that a required certificate is missing from one of the certificate stores on the machine. The error we typically get is: The driver cannot be installed because it is either not digitally signed or not signed in the appropriate manner. I know that it is signed. A .CAT file is included, and it has the following tree from top to bottom: Microsoft Root Authority (thumbprint a4 34 89 15 9a 52 0f 0d 93 d0 32 cc af 37 e7 fe 20 a8 b4 19) Microsoft Windows Hardware Compatibility PCA (thumbprint 93 b8 d8 82 0a 32 db 20 a5 ea b6 8d 86 ad 67 8e fa 14 ea 41) Microsoft Windows Hardware Compatibility Publisher (thumprint b0 50 45 45 42 4e be 2c 16 2f 62 5b bf 5a e6 9b 96 bf 0b 0b) What certificates are required to install WHQL-certified drivers? Is it possibly something other than certificates? Thanks! NOTE: I have posted this question on Technet as well, but honestly, I've never had a lot of luck posting questions on the Technet forums.

    Read the article
