Search Results

Search found 5262 results on 211 pages for 'commands'.


  • Windows Scheduled Startup Task doesn't appear to be fully working, but why?

    - by Devtron
    I originally tried to use Group Policy to enforce a startup script to run at boot. My startup script is a .CMD file, which calls 10 .exe files. Using Group Policy I could never get this to work, so I looked into using Scheduled Tasks. And here I am. I originally thought my syntax could be bad, so I have tried two different versions of my script; neither works. My #1 .CMD approach looks similar to this:

        start "this is my title" /D "C:\Somepathhere\myExecutable.exe" "..\..\published\wc_task.wfc"

    My #2 .CMD approach looks similar to this (it invokes a shortcut file):

        rundll32 shell32.dll,ShellExec_RunDLL "C:\Somepathhere\bin\Virtual Workflow.lnk" ^

    Both of these scripts work fine if I run them manually, either by running the .CMD file directly or by forcing the Scheduled Tasks MSC console to "Run" the task. The manual process works; automated, it does not. My scheduled task is set to run at startup and uses "highest privileges" to execute as Admin. At the end of my .CMD script, I added a line to write to a text file, just to prove that the script was being run:

        echo foo > C:\foo.txt

    When I reboot my server and Scheduled Tasks kicks in, my ten .EXE files never run, but I do get C:\foo.txt on the drive. What gives?
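
    One hedged point worth checking: in cmd's start syntax, /D sets the working directory, and the program path is a separate argument. A minimal sketch of approach #1 in that form, with a log line added for diagnosis (the log path is an illustrative assumption, not part of the original script):

        start "this is my title" /D "C:\Somepathhere" "C:\Somepathhere\myExecutable.exe" "..\..\published\wc_task.wfc"
        echo %date% %time% launched myExecutable >> C:\startup-task.log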


  • How can I explain to dspam that the user "brandon" is the same as "brandon@mydomain"

    - by Brandon Craig Rhodes
    I am using dspam for spam filtering by running the "dspamd" daemon under Ubuntu 9.10 and then setting up a Postfix rule that says:

        smtpd_recipient_restrictions = ... check_client_access pcre:/etc/postfix/dspam_everything ...

    where that PCRE map looks like this:

        /./   FILTER lmtp:[127.0.0.1]:11124

    This works well, and means that all users on my system get all of their email, whether "dspam" thinks it is innocent or not, and have the option of filtering on its decisions or ignoring them. The problem comes when I want to train dspam using my email archives. After reading about the "dspam" command, I tried this on the files in my Inbox and spam boxes (which date from when I was using another filtering solution):

        for file in Mail/Inbox/*; do cat $file | dspam --class=innocent --source=corpus; done
        for file in Mail/spam/*; do cat $file | dspam --class=spam --source=corpus; done

    The symptom I noticed after doing all of this was that dspam was horrible at classifying spam — it couldn't find any! The problem, when I tracked it down, was that these commands were training the user "brandon", but incoming email was being compared against the username "brandon@mydomain", so classification was running against a completely empty training database! So, what can I do to make the above commands train my fully-qualified email address rather than my bare username? I would like to avoid having to run "dspam" as root with a "--user" option. I would have expected the "dspam" configuration files to have an "append_domain" attribute or something with which to decorate local usernames with an appropriate email domain, but I can't find any such thing. When I used the Berkeley DB backend to "dspam", I solved this problem by creating a symlink from one of the databases to the other. :-) But that solution eventually died because the BDB backend is not thread-safe, so now I have moved to the PostgreSQL backend and need a way to solve the problem there. And, no, the table where it keeps usernames has a UNIQUE constraint that prevents me from listing both usernames as mapping to the same ID. :-)
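
    For reference, a hedged sketch of the same training loop aimed at the fully-qualified address; it assumes dspam is invoked as a user it trusts (the very thing the asker hopes to avoid), and drops the unnecessary cat:

        # minimal sketch, assuming --user is honored for a trusted invoking user
        for file in Mail/spam/*; do
            dspam --user brandon@mydomain --class=spam --source=corpus < "$file"
        done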


  • Rebooting access point via SSH with pexpect... hangs. Any ideas?

    - by MiniQuark
    When I want to reboot my D-Link DWL-3200-AP access point from my bash shell, I connect to the AP using ssh and just type reboot in the CLI interface. After about 30 seconds, the AP is rebooted:

        # ssh [email protected]
        [email protected]'s password: ********
        Welcome to Wireless SSH Console!! ['help' or '?' to see commands]
        Wireless Driver Rev 4.0.0.167 D-Link Access Point
        wlan1 -> reboot

    Sounds great, right? Well, unfortunately the ssh client process never exits, for some reason (maybe the AP kills the ssh server a bit too fast, I don't know). My ssh client process is completely blocked (even if I wait for several minutes, nothing happens). I always have to wait for the AP to reboot, then open another shell, find the ssh client process ID (using ps aux | grep ssh), and kill the ssh process with kill <pid>. That's quite annoying. So I decided to write a Python script to reboot the AP. The script connects to the AP's CLI interface via ssh, using python-pexpect, and tries to launch the "reboot" command. Here's what the script looks like:

        #!/usr/bin/python
        # usage: python reboot_ap.py {host} {user} {password}
        import pexpect
        import sys
        import time

        command = "ssh %(user)s@%(host)s" % {"user": sys.argv[2], "host": sys.argv[1]}
        session = pexpect.spawn(command, timeout=30)  # start ssh process
        response = session.expect(r"password:")       # wait for password prompt
        session.sendline(sys.argv[3])                 # send password
        session.expect(" -> ")                        # wait for D-Link CLI prompt
        session.sendline("reboot")                    # send the reboot command
        time.sleep(60)               # make sure the reboot has time to actually take place
        session.close(force=True)    # kill the ssh process

    The script connects properly to the AP (I tried running some commands other than reboot, and they work fine), it sends the reboot command, waits for one minute, then kills the ssh process. The problem is: this time, the AP never reboots! I have no idea why. Any solution, anyone?
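
    A hedged, untested thing to try from plain bash, no pexpect: pipe the command in and bound the wait with timeout, which kills the ssh client if the AP never closes the connection. This assumes the AP's CLI reads commands from stdin, which this firmware may or may not do; "ap-address" is a placeholder:

        printf 'reboot\n' | timeout 35 ssh admin@ap-address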


  • Troubleshooting iptables and configuring it to drop the priority of long-term connections

    - by intuited
    I'm somewhat familiar with the general concepts of iptables and would like to learn it in more detail. I'm hoping that my learning experience can also be useful. The situation: I'm running dd-wrt on my router. Despite its purported QoS skills, I'm still seeing connection latency shoot up hugely whenever there's an ongoing HTTP connection, e.g. some large download. Under such conditions, it can take 10 seconds or more to load a basic webpage; sometimes the connections are dropped entirely. I've tried adjusting the parameters, dropping the allotted bandwidth for upload and download to well under my limit, but nothing seems to work. dd-wrt is configured to use HTB as the QoS algorithm; HFSC, although presented as an option, seems to cause the router to crash, and is rumoured to not actually work on any Linux system. I'd like to be able to troubleshoot this issue and hopefully improve the settings that dd-wrt is using, but I'm finding the learning curve a bit overwhelming. For starters, I am not sure what HTB actually specifies: is this a set of iptables commands, or do some of those commands specify how HTB is to be used? I would like it to prioritize based on protocol the way it is already supposed to, and in addition I'd like it to drop the priority of connections which have a high total byte count, say over 400KB (see the sketch below). Also, tips on utilities that can be run under dd-wrt to get more info on what's going on in there are appreciated. I've tried to get iftop to work, but there were issues running curses. I'm leaning towards replacing dd-wrt with openwrt; comments on this strategy are also welcome. I suspect that I would be well advised to get a second router as a stand-in before trying that. It may be worth noting that my total bandwidth is pretty limited (256Kbit/s).
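
    To untangle the terminology: HTB is a tc queueing discipline, not an iptables construct; iptables can only mark packets, and tc filters then steer marked packets into HTB classes. A hedged sketch of the byte-count idea, assuming dd-wrt's kernel ships the connbytes match and that a low-priority class 1:30 already exists (both assumptions):

        # mark any connection once it has moved more than ~400KB in either direction
        iptables -t mangle -A POSTROUTING -p tcp -m connbytes --connbytes 409600: \
            --connbytes-dir both --connbytes-mode bytes -j MARK --set-mark 3
        # steer mark-3 packets into the low-priority HTB class
        tc filter add dev eth0 parent 1:0 protocol ip prio 5 handle 3 fw flowid 1:30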


  • Idle state detection for server

    - by odinmillion
    Windows OS has a service that detects the idle state. Details (Task Idle Conditions): the computer is considered idle if all the processors and all the disks were idle for more than 90% of the past 15 minutes and if there is no keyboard or mouse input during this period of time. When the Task Scheduler service detects that the computer is idle, it only waits for user input to mark the end of the idle state. This is very useful for ordinary PCs that have a keyboard and mouse: we can use the standard Task Scheduler to start some process like defrag when the PC is in the idle state and stop it when the PC leaves the idle state. But what should we use on a standalone server without keyboard and mouse? The server sometimes receives commands over TCP/IP and starts CPU and HDD activity, but sometimes CPU and HDD activity is at zero. I would like to use these periods of time to start defrag or another process, but such "idle"-state processes should be terminated when other commands appear. Standard idle-state conditions can't help me here, because there is no user input to end the idle state. I need a more customizable idle-state detector: automatically started processes shouldn't influence the idle state, but the PC should leave the idle state when another process appears. What should I use? Maybe an advanced task scheduler already exists? Or should I write a utility in C#? I hope this is a standard task and the useful utilities are already compiled. :)


  • Gittornado with Nginx fails to push and pull

    - by Josh Buell
    I'm making a simple website to host git repositories, much like GitHub. I'm using Gittornado to handle git Smart HTTP requests, and it works perfectly locally; I can clone, push, pull, etc. But when I put it behind Nginx, git commands stop working, giving no errors except: "fatal: The remote end hung up unexpectedly". I know that it's Nginx causing the trouble, because if I open the port that Tornado is running on and try my git commands through that (i.e. "git pull \http://mysite.com:8000/myrepository master" instead of "git pull \http://mysite.com/myrepository master" [backslashes added because Server Fault says I have too many links]), everything works as expected. The Nginx access and error logs don't seem to say anything interesting, so I'm reasonably sure it has something to do with the way Nginx is compressing or chunking the requests/responses, causing git to think there's been an unexpected hangup, but I'm not sure what to do to fix it, since this is my first time with Nginx. My Nginx configuration file is basically a clone of the one found here; I've tried commenting out various likely-seeming options to see if they were causing the problem, but none of them fixed it, so I assume there's some default behavior I need to suppress; I'm just not sure which. Any thoughts on how to fix this? Since it works when not going through Nginx, I'm considering just redirecting git requests to the Tornado port itself, but this feels like a hack rather than a clean solution...
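
    A hedged starting point for the proxy block, written here as a shell heredoc; the directives are real Nginx options, but whether they cure this particular hangup is an assumption (large pushes often trip buffering or body-size limits), and the file path and port are illustrative:

        cat > /etc/nginx/conf.d/git-proxy.conf <<'EOF'
        server {
            listen 80;
            server_name mysite.com;
            location / {
                proxy_pass http://127.0.0.1:8000;
                proxy_buffering off;      # stream pack data instead of spooling it
                gzip off;                 # git compresses its own payloads
                client_max_body_size 0;   # don't reject large pushes outright
            }
        }
        EOF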


  • How to install a private user script in Chrome 21+?

    - by Mathias Bynens
    In Chrome 20 and older versions, you could simply open any .user.js file in Chrome and it would prompt you to install the user script. However, in Chrome 21 and up, it downloads the file instead, and displays a warning at the top saying “Extensions, apps, and user scripts can only be added from the Chrome Web Store”. The “Learn More” link points to http://support.google.com/chrome_webstore/bin/answer.py?hl=en&answer=2664769, but that page doesn’t say anything about user scripts, only about extensions in .crx format, apps, and themes. This part sounded interesting: “Enterprise Administrators: You can specify URLs that are allowed to install extensions, apps, and themes directly through the ExtensionInstallSources policy.” So, I ran the following commands, then restarted Chrome and Chrome Canary:

        defaults write com.google.Chrome ExtensionInstallSources -array "https://gist.github.com/*"
        defaults write com.google.Chrome.canary ExtensionInstallSources -array "https://gist.github.com/*"

    Sadly, these settings only seem to affect extensions, apps, and themes (as it says in the text), not user scripts. (I’ve filed a bug asking to make this setting affect user scripts as well.) Any ideas on how to install a private user script (that I don’t want to add to the Chrome Web Store) in Chrome 21+? Update: the problem was that gist.github.com’s raw URLs redirect to a different domain. So, use these commands instead:

        # Allow installing user scripts via GitHub or Userscripts.org
        defaults write com.google.Chrome ExtensionInstallSources -array "https://*.github.com/*" "http://userscripts.org/*"
        defaults write com.google.Chrome.canary ExtensionInstallSources -array "https://*.github.com/*" "http://userscripts.org/*"

    This works!


  • Delay NTP Initialisation, Cisco 877W, IOS 12.4(24)T1

    - by Mike Insch
    I have a Cisco 877W which I'm using for my home ADSL connection (and as a refresher in Cisco IOS). I've got a working config in place, with my PPPoA connection coming online correctly and VLANs and other settings configured as I want them, but I can't crack the NTP configuration. For NTP, I have the following defined:

        ntp server 0.uk.pool.ntp.org source Dialer0
        ntp server 1.uk.pool.ntp.org source Dialer0
        ntp server 2.uk.pool.ntp.org source Dialer0
        ntp server 3.uk.pool.ntp.org source Dialer0

    This setup works fine when issued in Global Configuration Mode while the Dialer0 interface (ATM0.1) is up. The configuration fails at startup, though:

        Translating "1.uk.pool.ntp.org"...domain server (208.67.222.222) (208.67.220.220)
        ntp server 1.uk.pool.ntp.org source Dialer0
                   ^
        % Invalid input detected at "^" marker.

    This is repeated for the other servers defined. Obviously the DNS lookup for the server(s) fails because the DNS servers cannot be reached while the external interface is not yet online. Is there a way to delay the NTP configuration until after the Dialer0 interface is fully initialised? Can the NTP commands be triggered by the Line Protocol on the Dialer0 interface transitioning to the up state? Alternatively, can the NTP commands be delayed for 5 minutes after the router has finished initialising? Any advice, or pointers to useful documentation or examples, gratefully received...
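
    One hedged avenue (an IOS configuration sketch, assuming the 12.4(24)T image includes Embedded Event Manager): an applet keyed to the Dialer0 up transition that re-enters the ntp commands once DNS can resolve. The applet name and action numbering are illustrative:

        event manager applet DELAYED-NTP
         event syslog pattern "Line protocol on Interface Dialer0, changed state to up"
         action 1.0 cli command "enable"
         action 2.0 cli command "configure terminal"
         action 3.0 cli command "ntp server 0.uk.pool.ntp.org source Dialer0"
         action 4.0 cli command "ntp server 1.uk.pool.ntp.org source Dialer0"
         action 5.0 cli command "end"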


  • Using smartctl to get vendor-specific attributes from an SSD drive behind a SmartArray P410 controller

    - by Lairsdragon
    Recently I have deployed some HP servers with SSDs behind a SmartArray P410 controller. While not officially supported by HP, the servers have worked well so far. Now I'd like to get wear-level info, error statistics, etc. from the drives. The P410 supports passing the SMART command through to a single drive in the array, but I was not able to get the interesting values out of the drive this way. In this case, the wear-level indicator (attribute ID 233) is of particular interest to me, but it is only present when the drive is directly attached to a SATA controller. smartctl on a directly connected SSD:

        # smartctl -A /dev/sda
        smartctl version 5.38 [x86_64-unknown-linux-gnu] Copyright (C) 2002-8 Bruce Allen
        Home page is http://smartmontools.sourceforge.net/

        === START OF READ SMART DATA SECTION ===
        SMART Attributes Data Structure revision number: 5
        Vendor Specific SMART Attributes with Thresholds:
        ID# ATTRIBUTE_NAME          FLAG   VALUE WORST THRESH TYPE     UPDATED WHEN_FAILED RAW_VALUE
          3 Spin_Up_Time            0x0000 100   000   000    Old_age  Offline In_the_past 0
          4 Start_Stop_Count        0x0000 100   000   000    Old_age  Offline In_the_past 0
          5 Reallocated_Sector_Ct   0x0002 100   100   000    Old_age  Always  -           0
          9 Power_On_Hours          0x0002 100   100   000    Old_age  Always  -           8561
         12 Power_Cycle_Count       0x0002 100   100   000    Old_age  Always  -           55
        192 Power-Off_Retract_Count 0x0002 100   100   000    Old_age  Always  -           29
        232 Unknown_Attribute       0x0003 100   100   010    Pre-fail Always  -           0
        233 Unknown_Attribute       0x0002 088   088   000    Old_age  Always  -           0
        225 Load_Cycle_Count        0x0000 198   198   000    Old_age  Offline -           508509
        226 Load-in_Time            0x0002 255   000   000    Old_age  Always  In_the_past 0
        227 Torq-amp_Count          0x0002 000   000   000    Old_age  Always  FAILING_NOW 0
        228 Power-off_Retract_Count 0x0002 000   000   000    Old_age  Always  FAILING_NOW 0

    smartctl on a P410-connected SSD:

        # ./smartctl -A -d cciss,0 /dev/cciss/c1d0
        smartctl 5.39.1 2010-01-28 r3054 [x86_64-unknown-linux-gnu] (local build)
        Copyright (C) 2002-10 by Bruce Allen, http://smartmontools.sourceforge.net

    (Right, it is completely empty.) smartctl on a P410-connected HDD:

        # ./smartctl -A -d cciss,0 /dev/cciss/c0d0
        smartctl 5.39.1 2010-01-28 r3054 [x86_64-unknown-linux-gnu] (local build)
        Copyright (C) 2002-10 by Bruce Allen, http://smartmontools.sourceforge.net

        Current Drive Temperature:     27 C
        Drive Trip Temperature:        68 C
        Vendor (Seagate) cache information
          Blocks sent to initiator = 1871654030
          Blocks received from initiator = 1360012929
          Blocks read from cache and sent to initiator = 2178203797
          Number of read and write commands whose size <= segment size = 46052239
          Number of read and write commands whose size > segment size = 0
        Vendor (Seagate/Hitachi) factory information
          number of hours powered up = 3363.25
          number of minutes until next internal SMART test = 12

    Am I hunting a bug here, or is this a limitation of the P410's SMART command passthrough?


  • How do I fix a custom Event Viewer Log that merges automatically with the Application log?

    - by NightOwl888
    I am trying to create a custom event log for a Windows Service on Windows Server 2003. I would like to name the custom log "(ML) Startup Commands". However, when I add a registry key with that name to HKLM\SYSTEM\CurrentControlSet\Services\Eventlog\, it adds a log, but the Event Viewer shows exactly the same events that are in the Application log. If I instead add a registry key named "(ML) Startup Commands 2", I get a blank event log, as expected. In fact, any other name works correctly except the one I want. I have searched the registry for other keys containing the string "(ML)" and removed all other references to this key name; however, I continue to get merged results in the viewer when I create a key with this name. My question is: how can I fix the server so I can create a custom event log with this name that shows only the events from my application, not the events from the default Application event log that is installed with Windows? Update: I rebooted the server and, wouldn't you know it, the log started acting normally. I did get a strange message in the Application log: "The EventSystem sub system is suppressing duplicate event log entries for a duration of 86400 seconds. The suppression timeout can be controlled by a REG_DWORD value named SuppressDuplicateDuration under the following registry key: HKLM\Software\Microsoft\EventSystem\EventLog. For more information, see Help and Support Center at http://go.microsoft.com/fwlink/events.asp." I can only hope this message doesn't mean the problem will come back after 86400 seconds. I guess I will have to wait and see.


  • How do I do or dp just the lines, not the entire block, in Vim diff?

    - by hobbes3
    I'm currently using MacVim (Snapshot 64) and its "Split Diff by..." menu option. The file is Django's settings.py, diffed from my version 1.3.1 copy against a fresh file from version 1.4. (Open the image in a separate window/tab to enlarge.) I know two basic commands: do to "obtain" (and replace) a block from the other side, and dp to "put" (and replace) a block to the other side. But those two commands write the entire block, which in MacVim is the purple highlight. If you look at the 2nd block, you can see that lines 2 and 3 differ only in two words: mysite and hobbes3. I want to replace per line, not per block. So is there a command to do do and dp per line, as opposed to an entire block, or do I have to type the change out manually? Bonus question: I noticed that once I manually edit a block, I lose the purple highlighting. How do I "refresh" the diff to bring the highlights back without reopening the file? Please try to keep the answers Vim-general as opposed to MacVim-specific. Thanks!
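
    A hedged sketch in Vim's own command language; :diffget and :diffput do accept a range, operating on just those lines (the ranges shown are illustrative):

        :2,3diffget     " pull only lines 2-3 from the other window
        :'<,'>diffput   " push just the visually selected lines
        :diffupdate     " re-scan the files and restore the diff highlighting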


  • How have I locked myself out of my Ubuntu VPS?

    - by Sanoj
    I have an Ubuntu Server as a VPS (OpenVZ), and yesterday I installed php-fpm, but I guess something went wrong with the installation, because since then I cannot log in to my server over SSH with PuTTY or WinSCP. The message I get when connecting is "Network error: Connection timed out". Immediately after the installation I was not able to use emacs either; I had to re-install it with apt-get install emacs. I have tried clearing the firewall and rebooting the server from my web-based "control panel", but it doesn't help. The commands I used for the PHP-fpm installation were from "Installing PHP 5.3, Nginx And PHP-fpm On Ubuntu/Debian", and I guess it has something to do with these:

        cd /tmp
        wget http://us.archive.ubuntu.com/ubuntu/pool/main/k/krb5/libkrb53_1.6.dfsg.4~beta1-5ubuntu2_i386.deb
        wget http://us.archive.ubuntu.com/ubuntu/pool/main/i/icu/libicu38_3.8-6ubuntu0.2_i386.deb
        sudo dpkg -i *.deb
        sudo echo "deb http://php53.dotdeb.org stable all" >> /etc/apt/sources.list
        sudo apt-get update
        sudo apt-get install php5-cli php5-common php5-suhosin
        sudo apt-get install php5-fpm php5-cgi

    The web sites hosted on my server work fine. Has anyone had the same experience, or does anyone know how this could happen? I guess I'll have to re-install Ubuntu Server from the "control panel" now, but I would like to avoid this situation in the future. Finally, I have backups of everything, so nothing is lost if I have to re-install the machine.
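
    A hedged side note on one of the quoted lines: in sudo echo ... >> /etc/apt/sources.list, the redirection is performed by the invoking (non-root) shell, so the append either fails or runs without elevation; the usual form goes through tee. Whether this relates to the lockout is an open question:

        echo "deb http://php53.dotdeb.org stable all" | sudo tee -a /etc/apt/sources.list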


  • Built local glibc, broke system, how do I ssh without parsing the .bashrc?

    - by Mikhail
    The cluster I am on had really old build tools and I needed to use CUDA5. I'm a pretty clever dude and I planned on building the necessary tools myself. So I built local copies of gcc, binutils, and glibc: everything CUDA5 could want. All builds finished without error, and I tested gcc and binutils. Everything was wonderful, and I built and ran a few of the programs. I set up the LD_LIBRARY_PATHs in .bashrc and logged back in, expecting a productive night ahead. To my horror I realized that everything is dynamically linked. Now I can't run simple commands like ls:

        [ex@uid377 ~]$ ls
        ls: error while loading shared libraries: __vdso_time: invalid mode for dlopen(): Invalid argument

    and I can't run the commands I'd need to fix the problem, like rm or vim! Is there a way for me to ssh in while ignoring the .bashrc file? Any suggestions are much appreciated. This machine is obviously under-maintained and I don't know when I could get administrator support.
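
    Two hedged escape hatches, both resting on the assumption that only the environment is broken, not the remote binaries themselves:

        # non-interactive ssh commands usually skip .bashrc entirely:
        ssh ex@uid377 'mv ~/.bashrc ~/.bashrc.broken'
        # if that build of bash sources .bashrc even then, the sftp subsystem
        # (assuming sshd has it enabled) is another path around it:
        sftp ex@uid377
        sftp> rename .bashrc .bashrc.broken
        sftp> quit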


  • "unrecognized options" while installing php

    - by user1692333
    I want to compile PHP 5.4.8 on my Mac (10.8.2), but I get some errors which I can't solve by myself, so I need your help. First I got the default PHP options with php -i | head, and then ran this command:

        ./configure --prefix=/usr --mandir=/usr/share/man --infodir=/usr/share/info --disable-dependency-tracking --sysconfdir=/private/etc --with-apxs2=/usr/sbin/apxs --enable-cli --with-config-file-path=/etc --with-libxml-dir=/usr --with-openssl=/usr --with-kerberos=/usr --with-zlib=/usr --enable-bcmath --with-bz2=/usr --enable-calendar --disable-cgi --with-curl=/usr --enable-dba --enable-ndbm=/usr --enable-exif --enable-fpm --enable-ftp --with-gd --with-freetype-dir=/BinaryCache/apache_mod_php/apache_mod_php-79~4/Root/usr/local --with-jpeg-dir=/BinaryCache/apache_mod_php/apache_mod_php-79~4/Root/usr/local --with-png-dir=/BinaryCache/apache_mod_php/apache_mod_php-79~4/Root/usr/local --enable-gd-native-ttf --with-icu-dir=/usr --with-iodbc=/usr --with-ldap=/usr --with-ldap-sasl=/usr --with-libedit=/usr --enable-mbstring --enable-mbregex --with-mysql=mysqlnd --with-mysqli=mysqlnd --without-pear --with-pdo-mysql=mysqlnd --with-mysql-sock=/var/mysql/mysql.sock --with-readline=/usr --enable-shmop --with-snmp=/usr --enable-soap --enable-sockets --enable-sqlite-utf8 --enable-suhosin --enable-sysvmsg --enable-sysvsem --enable-sysvshm --with-tidy --enable-wddx --with-xmlrpc --with-iconv-dir=/usr --with-xsl=/usr --enable-zend-multibyte --enable-zip --with-pcre-regex --with-pgsql=/usr --with-pdo-pgsql=/usr

    But I get this error:

        config.status: creating Makefile
        config.status: creating jconfig.h
        config.status: jconfig.h is unchanged
        config.status: executing depfiles commands
        config.status: executing libtool commands
        configure: WARNING: unrecognized options: --enable-cli, --with-config-file-path, --with-libxml-dir, --with-openssl, --with-kerberos, --with-zlib, --enable-bcmath, --with-bz2, --enable-calendar, --disable-cgi, --with-curl, --enable-dba, --enable-ndbm, --enable-exif, --enable-fpm, --enable-ftp, --with-gd, --with-freetype-dir, --with-jpeg-dir, --with-png-dir, --enable-gd-native-ttf, --with-icu-dir, --with-iodbc, --with-ldap, --with-ldap-sasl, --with-libedit, --enable-mbstring, --enable-mbregex, --with-mysql, --with-mysqli, --without-pear, --with-pdo-mysql, --with-mysql-sock, --with-readline, --enable-shmop, --with-snmp, --enable-soap, --enable-sockets, --enable-sqlite-utf8, --enable-suhosin, --enable-sysvmsg, --enable-sysvsem, --enable-sysvshm, --with-tidy, --enable-wddx, --with-xmlrpc, --with-iconv-dir, --with-xsl, --enable-zend-multibyte, --enable-zip, --with-pcre-regex, --with-pgsql, --with-pdo-pgsql

    Maybe someone has some suggestions on this?


  • Team Foundation Server 2008 - TF220056 Error during installation

    - by David
    I'm attempting to install Team Foundation Server 2008 on a Windows Server 2003 instance that runs under Hyper-V. The SQL Server database itself is held on the root partition of the Hyper-V server and has Reporting Services installed (so I've solved the TF220059 error already). After hitting "Next" having typed the name of the SQL Server, I get this error:

        Microsoft Visual Studio 2008 Team Foundation Server Setup
        TF220056: An unrecoverable error occurred while trying to check the status of the Team Foundation database.
        Installation cannot continue. Check the install log for more details.

    The error log's stack trace makes it look like a bug in the TFS installer itself:

        [03/22/10,19:14:42] TFSUI: [2] tfsdb.exe: System.IO.IOException: The directory name is invalid.
        [03/22/10,19:14:42] TFSUI: [2] tfsdb.exe:    at System.IO.__Error.WinIOError(Int32 errorCode, String maybeFullPath)
        [03/22/10,19:14:42] TFSUI: [2] tfsdb.exe:    at System.IO.__Error.WinIOError()
        [03/22/10,19:14:43] TFSUI: [2] tfsdb.exe:    at System.IO.Path.GetTempFileName()
        [03/22/10,19:14:43] TFSUI: [2] tfsdb.exe:    at Microsoft.TeamFoundation.DatabaseInstaller.CommandLine.Commands.InstallerCommand.get_Log()
        [03/22/10,19:14:43] TFSUI: [2] tfsdb.exe:    at Microsoft.TeamFoundation.DatabaseInstaller.CommandLine.Commands.InstallerCommand.Run()
        [03/22/10,19:14:43] TFSUI: [2] tfsdb.exe:    at Microsoft.TeamFoundation.DatabaseInstaller.CommandLine.CommandLine.RunCommand(String[] args)
        [03/22/10,19:14:43] TFSUI: [2] tfsdb.exe: The directory name is invalid.
        [03/22/10,19:14:43] TFSUI: [2] tfsdb.exe check failed with error code: 100

    I'm running the installer as the domain Administrator, but the server is a Terminal Server in Application Mode; might that be the cause of the problem?


  • Command prompt hangs/freezes/crashes sporadically

    - by Leonard Challis
    I'm finding it very difficult to Google this; I don't seem to be able to find anyone with the same issue, and I don't know enough about the Windows operating system to troubleshoot. The machines we are seeing the problem on are Windows 7 Professional, both 64-bit and 32-bit. The problem is the command prompt freezing up, seemingly at random. When it freezes, nothing will bring it back to life (i.e. a keypress), and it's nothing to do with Quick Insert mode either. It doesn't seem to happen when I run standard commands such as cd or dir, but when I run different programs from the command line. The annoying thing is that sometimes the prompt will freeze and at other times it won't, using the same program/command. To add to the frustration, one of my colleagues who had the same problem seems not to have experienced it for a few days now (we're pretty heavy on the command line). It's not a VPN/RDP thing, as suggested in other questions and forum posts, as I've seen this both locally and remotely. I thought it was to do with the return code signifying an error or some error state in the program, i.e.:

        C:\Users\leonardc>mysql -u lalala
        ERROR 1045 (28000): Access denied for user 'lalala'@'localhost' (using password: NO)

    but this isn't always the case either; in fact, the above command has never crashed the shell before. Elevating the prompt to run as Administrator doesn't seem to have any bearing on the problem, and neither does disabling my anti-virus. Update: I tried the same commands in PowerShell, but I still get the same problem; it freezes at random times (more often than not, as with the command prompt, but not always). It's not identical to the command prompt, in that one might work while the other doesn't, but the next time I run the same command in both, it will suddenly be different again.


  • Command line solution for removing parts from a binary file?

    - by zsero
    I have a binary file and I would like to remove parts from it. By removing I mean deleting those parts and thus making the file smaller. The parts to remove lie between two ASCII strings. So, for example, the file might look like this:

        ........ start ABCD end ..... start EFGH end ..... start IJKL end ...........

    In this file, I would like to search for the strings "start" and "end" and remove the parts between them. The way I think I can do it is to: look up all the locations of "start" and "end"; calculate ranges from that; delete those parts. Right now I am using a GUI-based hex editor with its "Search All", "Select Range" and "Delete" commands, but I am sure this could be solved with a powerful command-line hex/text editor. Do you know a solution that doesn't require a GUI for the look-up, copy & paste, select-range and delete steps, but is just a few lines of command line? I am interested in both Linux shell scripts and command-line hex editors under Windows; even Python scripts are welcome. Do you think it is possible to solve this with a simple regex replace? Are there any regex-replace utilities which handle binary files well?
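
    A hedged sketch of the regex-replace route: Perl can slurp the whole file as one byte string, so a non-greedy substitution removes every start...end span. This assumes the markers never occur inside the data they delimit, and note that as written it deletes the markers too:

        perl -0777 -pe 's/start.*?end//gs' input.bin > output.bin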


  • Millions of files in php's tmp folder - how to delete them?

    - by Jonatan Littke
    Hey. I've got a tmp folder with 14 million PHP session files in my home directory. At least that's what I think it is; it's not like I could ls it or anything. How can I empty this folder? I've tried using find with the -exec rm {} \; option, but that didn't work. Neither did ls 'sess_0*' | xargs rm. I'm currently running rm -rf tmp, but after two hours the folder appears to be the same size. Reference info: I suddenly encountered an error where sessions could no longer be written to disk:

        [Mon Apr 19 19:58:32 2010] [warn] mod_fcgid: stderr: PHP Warning: Unknown: open(/var/www/clients/client1/web1/tmp/sess_8e12742b62aa68a3f9476ec80222bbfb, O_RDWR) failed: No space left on device (28) in Unknown on line 0
        [Mon Apr 19 19:58:32 2010] [warn] mod_fcgid: stderr: PHP Warning: Unknown: Failed to write session data (files). Please verify that the current setting of session.save_path is correct (/var/www/clients/client1/web1/tmp) in Unknown on line 0

    I ran:

        $ df -h
        Filesystem            Size  Used Avail Use% Mounted on
        /dev/md0              457G  126G  308G  29% /
        tmpfs                 1.8G     0  1.8G   0% /lib/init/rw
        udev                   10M  664K  9.4M   7% /dev
        tmpfs                 1.8G     0  1.8G   0% /dev/shm

    But as you can see, the disk isn't full. So I had a look in the syslog, which says the following 20 times per second:

        kernel: [19570794.361241] EXT3-fs warning (device md0): ext3_dx_add_entry: Directory index full!

    This led me to thinking of a full folder, obviously, but since my web folder only has 60k files (having counted them), I guessed it was the tmp folder (the local one, for this instance of PHP) that messed things up. Some commands I ran:

        $ sudo ls sess_a* | xargs rm -f
        bash: /usr/bin/sudo: Argument list too long

        $ find . -exec rm {} \;
        rm: cannot remove directory '.'
        find: cannot fork: Cannot allocate memory

    I'm running Debian Lenny, PHP5, ISPConfig, SuEXEC and Fast-CGI.
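
    A hedged sketch for this shape of problem: let find do the unlinking itself, so no argument list is built and no rm is forked per file; -maxdepth and -type f also keep it away from "." and subdirectories. GNU find's -delete has been around a long while, but it is worth confirming on Lenny:

        find /var/www/clients/client1/web1/tmp -maxdepth 1 -type f -name 'sess_*' -delete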


  • Password Protect XML-RPC

    - by Terence Eden
    I have a service running on a server which I want to access via XML-RPC. I've installed all the necessary bits. Within /etc/apache2/httpd.conf I have the single line:

        SCGIMount /RPC2 127.0.0.1:5000

    I can run xmlrpc commands from my server, and so can any server which connects to /RPC2. What I want to do is password protect the directory to stop unauthorised usage. Within /etc/apache2/httpd.conf I've added:

        <Location /RPC2>
            AuthName "Private"
            AuthType Basic
            AuthBasicProvider file
            AuthUserFile /home/me/myhtpasswd
            Require user me
        </Location>

    Trying to access /RPC2 brings up the "Authorization Required" box, and it accepts my username and password. However, xmlrpc now doesn't work! If I run xmlrpc localhost some_command on my server, I get the error:

        Failed. Call failed. HTTP response code is 401, not 200. (XML-RPC fault code -504)

    Is there any way I can password protect my /RPC2 directory and still have xmlrpc commands work?
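
    The 401 just means the client never sent credentials. A hedged check: confirm basic auth works with curl, then hand the same credentials to the XML-RPC client. The URL-embedded form below assumes the client's HTTP library honours it; many xmlrpc tools also take explicit username/password options:

        curl -u me:mypassword http://localhost/RPC2
        xmlrpc http://me:mypassword@localhost/RPC2 some_command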


  • Mongrel Cluster on Ubuntu Server Karmic

    - by trobrock
    I am trying to get Mongrel Cluster working on my Ubuntu Server Karmic box in preparation for setting up Capistrano. I've been trying to get the two to work all day, and finally decided to completely remove Capistrano and see if I can just get Mongrel Cluster to work. I ran this to install it:

        gem install mongrel mongrel_cluster

    Everything installed fine, but when I change into my app's directory:

        # mongrel_rails
        -bash: mongrel_rails: command not found

    I can run it from its install location:

        # /var/lib/gems/1.8/bin/mongrel_rails
        Usage: mongrel_rails <command> [options]
        Available commands are: ...

    It lets me build the cluster configuration file fine, but when I run the cluster::start command:

        # /var/lib/gems/1.8/bin/mongrel_rails cluster::start
        starting port 8000
        /usr/lib/ruby/1.8/rubygems/custom_require.rb:31: command not found: mongrel_rails start -d -e production -p 8000 -P tmp/pids/mongrel.8000.pid -l log/mongrel.8000.log
        starting port 8001
        /usr/lib/ruby/1.8/rubygems/custom_require.rb:31: command not found: mongrel_rails start -d -e production -p 8001 -P tmp/pids/mongrel.8001.pid -l log/mongrel.8001.log
        starting port 8002
        /usr/lib/ruby/1.8/rubygems/custom_require.rb:31: command not found: mongrel_rails start -d -e production -p 8002 -P tmp/pids/mongrel.8002.pid -l log/mongrel.8002.log

    It seems it isn't being called from the right location after that command. What can I do to fix this? I tried setting the path previously when trying to set up Capistrano, but the path didn't stay set when Capistrano used ssh to run the commands.
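
    Both symptoms point at PATH: cluster::start shells out to a bare mongrel_rails, which only resolves if the gem bindir is on the PATH of whatever shell runs it. A hedged sketch; which profile file non-interactive (e.g. Capistrano) shells actually read varies by setup, so treat the target file as an assumption:

        export PATH="/var/lib/gems/1.8/bin:$PATH"                  # current shell
        echo 'PATH="/var/lib/gems/1.8/bin:$PATH"' >> ~/.bashrc     # future shells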


  • SSH client not showing prompt after successful login

    - by user431949
    I'm having problems with my SSH client on Ubuntu 10.10. When I switch on my computer, open a Terminal and run ssh user@host, it gives me a password prompt, I enter the right password, and I get a prompt to execute my commands on the remote computer. The problem is that after a little while (around 10 minutes), the terminal window stops accepting commands: no matter what I type, nothing shows. Once this happens, I close the Terminal window and try to start all over again by opening another one. But this time, after entering the right password, I don't get a welcome message or prompt; the cursor just keeps blinking on a new line. I ran the ssh command with the -v parameter, and the message I get after a successful login is:

        debug1: Authentication succeeded (password).
        debug1: channel 0: new [client-session]
        debug1: Entering interactive session.
        debug1: Sending environment.
        debug1: Sending env LANG = en_GB.utf8

    Still the cursor keeps blinking on a new line, without a prompt. However, the PuTTY SSH client works perfectly on the same machine. Thank you very much for your time; your help would be greatly appreciated.
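
    A hedged sketch of the usual first-line fixes, on the assumption that an idle-connection timeout (a NAT or firewall dropping quiet sessions) is at fault; keepalives stop the first session from going dead, and the ~. escape kills a hung one without closing the window:

        ssh -o ServerAliveInterval=30 -o ServerAliveCountMax=4 user@host
        # in a hung session, press Enter, then ~ then . to force-close it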


  • Exclude list of specific files in wget

    - by nanker
    I am trying to download a lot of pages from a website on dial-up, and it can be brutally slow. I have almost got the perfect wget command, but because I'm downloading pages from the same site, wget wastes time downloading the same standard images for each page. If I know the names of the default page images, is there any way to have wget ignore them, and thus avoid downloading them for each and every page? Here is an example of the wget commands that my shell script generates into another shell script to download all of the pages:

        mkdir candy-canes-on-the-flannel-board-in-preschool
        cd candy-canes-on-the-flannel-board-in-preschool
        wget -p -nd -A jpg,html -k http://www.teachpreschool.org/2011/12/candy-canes-on-the-flannel-board-in-preschool/
        wget -c --random-wait --timeout=30 --user-agent="Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.9.0.3) Gecko/2008092416 Firefox/3.0.3" http://www.teachpreschool.org/2011/12/candy-canes-on-the-flannel-board-in-preschool/ -O "candy-canes-on-the-flannel-board-in-preschool"
        rm Baby-and-Toddler.jpg Childrens-Books.jpg Creative-Art.jpg Felt-Fun.jpg Happy_Rainbow-e1338766526528.jpg index.html Language-and-Literacy.jpg Light-table-Button.jpg Math.jpg Outdoor-Play.jpg outer-jacket1-300x153.jpg preschoolspot-button-small.jpg robots.txt Science-and-Nature.jpg Signature-2.jpg Story-Telling.jpg Tags-on-Preschool.jpg Teaching-Two-and-Three-Year-olds.jpg
        cd ../

    Now, I realize the script is not as savvy as it could be, but it is doing what I need at the moment, except that, as you can see from the rm command, I would like to prevent wget from downloading those files in the first place if possible. I almost forgot to mention: there are two wget commands because the first one downloads the page as index.html, and for some reason it does not open in my browser; however, when I open it and look at it in vim, all of the page's content is there, so I am not sure why it does not open. But if I just issue the second wget command as it is, then that page (really the same file with an alternate name) opens up fine. Fixing that would also help to streamline the process.
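
    A hedged sketch: wget's -R/--reject takes a comma-separated list of file names or patterns, so the boilerplate images can be refused up front. One caveat worth knowing: with -p, some wget versions still fetch a rejected requisite and then delete it, which saves disk but not dial-up time:

        wget -p -nd -k -A jpg,html \
            -R "Baby-and-Toddler.jpg,Childrens-Books.jpg,Creative-Art.jpg,Felt-Fun.jpg,Math.jpg" \
            http://www.teachpreschool.org/2011/12/candy-canes-on-the-flannel-board-in-preschool/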


  • Limiting an SSH user account to access only his home directory!

    - by EBAGHAKI
    By reading some tutorials online, I used these commands. Make a local group:

        net localgroup CopsshUsers /ADD

    Deny access to this group at the top level:

        cacls c:\ /c /e /t /d CopsshUsers

    Open access to the copSSH installation directory:

        cacls copssh-inst-dir /c /e /t /r CopsshUsers

    Add the copSSH user to the group above:

        net localgroup CopsshUsers mysshuser /add

    Simply put, these commands try to create a user group that has no permissions on the computer and has access only to the copSSH installation directory. But this doesn't quite hold: since you cannot change the permissions on your Windows directory, the third command won't remove access to the Windows folder (it says "access denied" in its log). Somehow I achieved it by taking ownership of the Windows folder and then executing the third command, so CopsshUsers now has no permissions on the Windows folder. Now when I try to SSH to the server, the user simply can't log in! This is kind of funny, because with permissions on the Windows directory you can log in, and without them you can't!! So if you CAN SSH to the server, you apparently have access to the Windows directory! (Is this really true??) The simple task: limit an SSH user account to his home directory on WINDOWS and nothing else! Guys, please help!


  • Monitoring memcached with plink

    - by kojiro
    I need a telnet client that can take commands from a file or stdin, so I can do some quick-and-dirty automatic monitoring of memcached. I thought plink would be good for this, but it seems to be doing something beyond what I need. If I telnet into localhost 11211 and type stats, I get the memcached stats, like so:

        $ telnet localhost 11211
        Trying 127.0.0.1...
        Connected to localhost.
        Escape character is '^]'.
        stats
        STAT pid 25099
        STAT uptime 91182
        STAT time 1349191864
        STAT version 1.4.5
        STAT pointer_size 64
        STAT rusage_user 3.570000
        STAT rusage_system 2.740000
        STAT curr_connections 5
        STAT total_connections 23
        STAT connection_structures 11
        STAT cmd_get 0
        STAT cmd_set 0
        STAT cmd_flush 0
        STAT get_hits 0
        STAT get_misses 0
        STAT delete_misses 0
        STAT delete_hits 0
        STAT incr_misses 0
        STAT incr_hits 0
        STAT decr_misses 0
        STAT decr_hits 0
        STAT cas_misses 0
        STAT cas_hits 0
        STAT cas_badval 0
        STAT auth_cmds 0
        STAT auth_errors 0
        STAT bytes_read 82184
        STAT bytes_written 7210
        STAT limit_maxbytes 67108864
        STAT accepting_conns 1
        STAT listen_disabled_num 0
        STAT threads 4
        STAT conn_yields 0
        STAT bytes 0
        STAT curr_items 0
        STAT total_items 0
        STAT evictions 0
        STAT reclaimed 0
        END

    But with plink, I get an odd error. I'm using this command:

        watch -n 30 plink -v -telnet -P 11211 127.0.0.1 <<< $'\nstats'

    The first time through I get:

        Looking up host "127.0.0.1"
        Connecting to 127.0.0.1 port 11211
        client: WILL NAWS
        client: WILL TSPEED
        client: WILL TTYPE
        client: WILL NEW_ENVIRON
        client: DO ECHO
        client: WILL SGA
        client: DO SGA
        ERROR
        STAT pid 25099
        STAT uptime 91245
        STAT time 1349191927
        STAT version 1.4.5
        …
        END

    But when watch repeats the command, I just get:

        Looking up host "127.0.0.1"
        Connecting to 127.0.0.1 port 11211
        client: WILL NAWS
        client: WILL TSPEED
        client: WILL TTYPE
        client: WILL NEW_ENVIRON
        client: DO ECHO
        client: WILL SGA
        client: DO SGA
        Failed to connect to 127.0.0.1: Connection reset by peer
        Connection reset by peer
        FATAL ERROR: Connection reset by peer

    What is plink doing here that is different from normal telnet? How should I be going about this? (I'm not married to plink, but I need a way to continuously send simple telnet commands to memcached without writing a full-fledged Perl script.)
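
    A hedged alternative: those WILL/DO lines are plink's telnet option negotiation, and memcached (which speaks a plain text protocol, not telnet) likely reads them as garbage, which would be consistent with the ERROR line and the reset. netcat sends no negotiation at all, and memcached understands quit, so something like this should behave (assuming nc is installed):

        watch -n 30 'printf "stats\r\nquit\r\n" | nc 127.0.0.1 11211'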


  • Adding Netem Filter Rules

    - by fontsix
    I am new to programming and to Linux. My question is: is it possible to add netem filter rules later? I want to create a PHP interface for netem, and I don't know in advance how many filters will be required; this should be dynamic. For example: a user with a static IP starts a netem command (latency) from the PHP interface, which means these five commands are executed by PHP in the first step:

        $classid = 11;
        $handle = 10;
        "sudo tc qdisc add dev eth0 handle 1: root htb";
        "sudo tc class add dev eth0 parent 1: classid 1:1 htb rate 100Mbps";
        "sudo tc class add dev eth0 parent 1:1 classid 1:$classid htb rate 100Mbps";
        "sudo tc qdisc add dev eth0 parent 1:$classid handle $handle: netem delay 100ms";
        "sudo tc filter add dev eth0 protocol ip parent 1:0 prio 3 u32 match ip dst $dest flowid 1:$classid";

    Now, if a second user wants to use netem independently of the first user, I only want to execute the last three commands, like:

        "sudo tc class add dev eth0 parent 1:1 classid 1:$classid htb rate 100Mbps";
        "sudo tc qdisc add dev eth0 parent 1:$classid handle $handle: netem delay 100ms";
        "sudo tc filter add dev eth0 protocol ip parent 1:0 prio 3 u32 match ip dst $dest flowid 1:$classid";

    There is an algorithm for incrementing the variables $classid and $handle, so that should work. Now my question: is it possible to add just these three commands to create a new class with a new qdisc and a new filter rule? Or how can I realize it? The Apache error_log tells me "sh: line 1: flowid: command not found", but I can't find any mistake. I hope you can help. Best regards, fontsix
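
    In principle yes: a root HTB qdisc can have class/qdisc/filter triples added at any time. A hedged sketch of what the second user's three commands should expand to, one shell line each; the "flowid: command not found" error usually means the shell saw a line break before "flowid", i.e. the PHP-built string was split, which is an assumption worth verifying. The values below are hypothetical:

        CLASSID=12; HANDLE=20; DEST=203.0.113.7   # hypothetical values for user 2
        sudo tc class add dev eth0 parent 1:1 classid "1:$CLASSID" htb rate 100Mbps
        sudo tc qdisc add dev eth0 parent "1:$CLASSID" handle "$HANDLE:" netem delay 100ms
        sudo tc filter add dev eth0 protocol ip parent 1:0 prio 3 u32 match ip dst "$DEST" flowid "1:$CLASSID"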

