Search Results

Search found 33882 results on 1356 pages for 'command window'.


  • Megacli is killing me, any help appreciated

    - by Stefan
    I run a server with 2 drives in raid0 configured through BIOS. I just added 2 more drives using hotplug (the server is dell r610 with RHEL 5.4 64bit) and I would like to configure a separate raid0 partition on these drives. I am getting the following error: /opt/MegaRAID/MegaCli/MegaCli64 -CfgLdAdd r0[32:2, 32:3] -a0 The specified physical disk does not have the appropriate attributes to complete the requested command. Exit Code: 0x26 All the parameters are correct and there is just no reason why this command could not work, see this (fujitsu is current raid, seagate is the new one I want to create): /opt/MegaRAID/MegaCli/MegaCli64 -PDList -aALL | egrep 'Adapter|Enclosure|Slot|Inquiry' Adapter #0 Enclosure Device ID: 32 Slot Number: 0 Enclosure position: 0 Inquiry Data: FUJITSU MBD2147RC D807D0A4PA101174 Enclosure Device ID: 32 Slot Number: 1 Enclosure position: 0 Inquiry Data: FUJITSU MBD2147RC D807D0A4PA10115T Enclosure Device ID: 32 Slot Number: 2 Enclosure position: 0 Inquiry Data: SEAGATE ST9300603SS FS033SE0TF5K Enclosure Device ID: 32 Slot Number: 3 Enclosure position: 0 Inquiry Data: SEAGATE ST9300603SS FS023SE070FK I also tried to set up the drive as hotspare, also some strange error: /opt/MegaRAID/MegaCli/MegaCli64 -PDHSP -Set -physdrv[32:3] -a0 Adapter: 0: Set Physical Drive at EnclId-32 SlotId-3 as Hot Spare Failed. FW error description: The specified device is in a state that doesn't support the requested command. Exit Code: 0x32 As you can see the disk is in Unconfigured, Good state: Enclosure Device ID: 32 Slot Number: 3 Enclosure position: 0 Device Id: 3 Sequence Number: 1 Media Error Count: 0 Other Error Count: 0 Predictive Failure Count: 0 Last Predictive Failure Event Seq Number: 0 PD Type: SAS Raw Size: 279.396 GB [0x22ecb25c Sectors] Non Coerced Size: 278.896 GB [0x22dcb25c Sectors] Coerced Size: 278.875 GB [0x22dc0000 Sectors] Firmware state: Unconfigured(good), Spun Up SAS Address(0): 0x5000c50005cd20b1 SAS Address(1): 0x0 Connected Port Number: 3(path0) Inquiry Data: SEAGATE ST9300603SS FS023SE070FK FDE Capable: Not Capable FDE Enable: Disable Secured: Unsecured Locked: Unlocked Needs EKM Attention: No Foreign State: Foreign Foreign Secure: Drive is not secured by a foreign lock key Device Speed: Unknown Link Speed: Unknown Media Type: Hard Disk Device Drive Temperature :30C (86.00 F)
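
    One detail in the PDList output that may explain both errors: the two Seagate drives report "Foreign State: Foreign", and a controller will usually refuse -CfgLdAdd and hot-spare assignment on foreign drives even though they show Unconfigured(good). A sketch of clearing that state first - only do this if nothing on those disks needs to be imported, since it discards the array metadata they carry:

      # show any foreign configurations the controller sees
      /opt/MegaRAID/MegaCli/MegaCli64 -CfgForeign -Scan -a0
      # discard them
      /opt/MegaRAID/MegaCli/MegaCli64 -CfgForeign -Clear -a0
      # then retry the new RAID0 array on slots 2 and 3
      /opt/MegaRAID/MegaCli/MegaCli64 -CfgLdAdd -r0[32:2,32:3] -a0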

    Read the article

  • PHP CLI not respecting memory limit in php.ini

    - by user13743
    I am using drush, which is a command-line php app to manage a drupal website. I am running a command to import a lot of data, which is causing me to hit php's memory limit. PHP Fatal error: Allowed memory size of 536870912 bytes exhausted ... Which is 512MB if I'm doing the math correctly (536870912 / 1024 / 1024 = 512). I've changed the directive in the php.ini that drush uses: $> drush status ... PHP configuration : /etc/php5/cli/php.ini $> grep memory /etc/php5/cli/php.ini ; Maximum amount of memory a script may consume (128MB) ; http://php.net/memory-limit memory_limit = 1024M But I'm still hitting the 512 MB limit! I am running in a virtual machine, whose memory settings I changed from 512 to 1025 MB of RAM to allow drush to run. $> free -m total used free shared buffers cached Mem: 1010 578 431 0 14 392 -/+ buffers/cache: 172 837 Swap: 382 0 382 So it says it has some 431 MB free, now that I've bumped the vm up to 1024. I guess half the memory is being used to run the GUI, but I don't understand how the GUI was running okay when the vm had 512 MB of ram. Why is the PHP cli still hitting a 512 MB memory limit? If it was hitting a system memory limit, shouldn't it die around 431MB, which is what the free command says is available?
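
    It may be worth confirming which limit the CLI binary really ends up with at run time, since drush can load additional ini files or set its own limit, in which case /etc/php5/cli/php.ini is not the last word. A few checks (the drush-specific paths are an assumption):

      # the limit PHP itself reports for a CLI run
      php -r 'echo ini_get("memory_limit"), PHP_EOL;'
      # which ini files the CLI actually loads
      php --ini
      # look for overrides in conf.d or drush's own config
      grep -R "memory_limit" /etc/php5/cli/conf.d/ ~/.drush/ 2>/dev/null
      # force the limit for one run as a test
      php -d memory_limit=1024M "$(which drush)" status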

    Read the article

  • WS2008 subst in Logon script does not "stick"

    - by Frans
    I have a terminal server environment exclusively with Windows Server 2008. My problem is that I need to "map" a drive letter to each user's Temp folder. This is due to a legacy app that requires a separate Temp folder for each user but which does not understand %temp%. So, just add "subst t: %temp%" to the logon script, right? The problem is that, even though the command runs, the subst doesn't "stick" and the user doesn't get a T: drive. Here is what I have tried. The simplest version: 'Mapping a temp drive Set WinShell = WScript.CreateObject("WScript.Shell") WinShell.Run "subst T: %temp%", 2, True That didn't work, so I tried this for more debug information: 'Mapping a temp drive Set WinShell = WScript.CreateObject("WScript.Shell") Set procEnv = WinShell.Environment("Process") wscript.echo(procEnv("TEMP")) tempDir = procEnv("TEMP") WinShell.Run "subst T: " & tempDir, 3, True This shows me the correct temp path when the user logs in - but still no T: Drive. Decided to resort to brute force and put this in my login script: 'Mapping a temp drive Set WinShell = WScript.CreateObject("WScript.Shell") WinShell.Run "\\domain\sysvol\esl.hosted\scripts\tempdir.cmd", 3, True where \\domain\sysvol\esl.hosted\scripts\tempdir.cmd has this content: echo on subst t: %temp% pause When I log in with the above then the command window opens up and I can see the subst command being executed correctly, with the correct path. But still no T: drive. I have tried running all of the above scripts outside of a login script and they always work perfectly - this problem only occurs when doing it from inside a login script. I found a passing reference on an MSFN forum about a similar problem when the user is already logged on to another machine - but I have this problem even without being logged on to another machine. Any suggestion on how to overcome this will be much appreciated.

    Read the article

  • Remote RIB iLO on Proliant via RIBCL

    - by Wudang
    I'm trying to automate a process for our Ops. The process requires that some Windows servers running on blades are shut down, left down for a few hours, then restarted when some other processes complete. This is done by an op logging on to each blade's iLO web interface to stop and start. I've been trying to automate this with HP's cpqlocfg program with partial success. I can issue the GET_POWER, GET_USER_INFO, etc. commands but SET_HOST_POWER fails in a specific way. Using the cpqlocfg GET_EVENTLOG command I can see the events XML login and the power command being issued from the iLO interface but then nothing happens. Some hints from googling suggest ACPI isn't configured properly but I can't find any hits on how to verify this. Am I even using the right command? There are also a few other options like PRESS_PWR_BUTTON etc. The problem is I have nowhere to test this; all I can do at the moment is give a script to ops and ask them to try it at 4am on a Sunday when they try the proc. The shutdown is trivial as I can use the Windows "shutdown" command; it's the power on that I need help on. Anyone done this? I'd tag this "rib ribcl ilo" but lack the rep points, sorry.
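
    For reference, a minimal RIBCL file of the kind cpqlocfg expects might look like the sketch below (the login is a placeholder). If SET_HOST_POWER keeps being accepted but ignored, some iLO firmware revisions respond better to the virtual power-button commands (PRESS_PWR_BUTTON, or HOLD_PWR_BUTTON for a forced off), so they may be worth trying as an alternative - this is a guess based on the symptoms, not something verified on your blades:

      <RIBCL VERSION="2.0">
        <LOGIN USER_LOGIN="opsuser" PASSWORD="password">
          <SERVER_INFO MODE="write">
            <!-- power the host on; use HOST_POWER="No" for off -->
            <SET_HOST_POWER HOST_POWER="Yes"/>
            <!-- alternative: momentary press of the virtual power button -->
            <!-- <PRESS_PWR_BUTTON/> -->
          </SERVER_INFO>
        </LOGIN>
      </RIBCL>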

    Read the article

  • certutil -ping fails with 30 seconds timeout - what to do?

    - by mark
    The certificate store on my Win7 box is constantly hanging. Observe: C:\>1.cmd C:\>certutil -? | findstr /i ping -ping -- Ping Active Directory Certificate Services Request interface -pingadmin -- Ping Active Directory Certificate Services Admin interface C:\>set PROMPT=$P($t)$G C:\(13:04:28.57)>certutil -ping CertUtil: -ping command FAILED: 0x80070002 (WIN32: 2) CertUtil: The system cannot find the file specified. C:\(13:04:58.68)>certutil -pingadmin CertUtil: -pingadmin command FAILED: 0x80070002 (WIN32: 2) CertUtil: The system cannot find the file specified. C:\(13:05:28.79)>set PROMPT=$P$G C:\> Explanations: The first command shows you that there are -ping and -pingadmin parameters to certutil. Trying either ping parameter fails with a 30-second timeout (the current time is seen in the prompt). This is a serious problem. It breaks all the secure communication in my app. If anyone knows how this can be fixed - please share. Thanks. P.S. 1.cmd is simply a batch of these commands: certutil -? | findstr /i ping set PROMPT=$P($t)$G certutil -ping certutil -pingadmin set PROMPT=$P$G EDIT1 I have succeeded in pinning down the single Windows API that causes the problem - DsGetDcName. According to windbg, certutil -ping invokes it like so: PDOMAIN_CONTROLLER_INFO pdci; DWORD ret = ::DsGetDcName(NULL, NULL, NULL, NULL, DS_DIRECTORY_SERVICE_PREFERRED, &pdci); On my workstation it times out for 30 seconds and then returns error code 1355, which is ERROR_NO_SUCH_DOMAIN No domain controller is available for the specified domain or the domain does not exist. On another machine, which happens to be a Windows Server 2003 box, it returns almost immediately with the correct domain controller name inside the returned DOMAIN_CONTROLLER_INFO structure. Now the question is what is missing on my workstation for that API to find the correct domain controller?
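
    Since DsGetDcName failing with ERROR_NO_SUCH_DOMAIN usually points at the DC locator (DNS SRV lookups, domain membership) rather than at certutil itself, a quick sanity check from the same box might look like this sketch - the domain name is a placeholder for whatever the workstation is actually joined to:

      REM ask the DC locator for a domain controller directly
      nltest /dsgetdc:yourdomain.example
      REM check the SRV records the locator relies on
      nslookup -type=SRV _ldap._tcp.dc._msdcs.yourdomain.example
      REM confirm what the machine thinks its domain is
      echo %USERDNSDOMAIN%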

    Read the article

  • Problems configuring an SSH tunnel to a Nexentastor appliance for use with headless Crashplan

    - by Rob Smallshire
    Problem I am attempting to configure an SSH tunnel to a NexentaStor appliance from either a Windows or Linux computer so that I can connect a Crashplan Desktop GUI to a headless Crashplan server running on the Nexenta box, according to these instructions on the Crashplan support site: Connect to a Headless CrashPlan Desktop. So far, I've failed to get a working SSH tunnel from either a Windows client (using PuTTY) or a Linux client (using command line SSH). I'm fairly sure the problem is at the receiving end with NexentaStor. A blog article - CrashPlan for Backup on Nexenta - indicates that it could be made to work only "after enabling TCP forwarding in Nexenta in /etc/ssh/sshd_config" - although I'm not sure how to go about that or specifically what I need to do. Things I have tried Ensuring the Crashplan server on the Nexenta box is listening on port 4243 $ netstat -na | grep LISTEN | grep 42 127.0.0.1.4243 *.* 0 0 131072 0 LISTEN *.4242 *.* 0 0 65928 0 LISTEN Establishing a tunnel from a Linux host: $ ssh -L 4200:localhost:4243 admin@10.0.0.56 and then, from another terminal on the Linux host, using telnet to verify the tunnel: $ telnet localhost 4200 Trying ::1... Connected to localhost. Escape character is '^]'. with nothing more, although the Crashplan server should respond with something. From Windows, using PuTTY, I have followed the instructions on the Crashplan support site to establish an equivalent tunnel, but then telnet on Windows gives me no response at all and the Crashplan GUI can't connect either. The PuTTY log for the tunnelled connection shows reasonable output: ... 2011-11-18 21:09:57 Opened channel for session 2011-11-18 21:09:57 Local port 4200 forwarding to localhost:4243 2011-11-18 21:09:57 Allocated pty (ospeed 38400bps, ispeed 38400bps) 2011-11-18 21:09:57 Started a shell/command 2011-11-18 21:10:09 Opening forwarded connection to localhost:4243 but the telnet localhost 4200 command from Windows does nothing at all - it just waits with a blank terminal. On the NexentaStor server I've examined the /etc/ssh/sshd_config file and everything seems 'normal' - and I've commented out the ListenAddress entries to ensure that I'm listening on all interfaces. How can I establish a tunnel, and how can I verify that it is working?
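
    On that "enabling TCP forwarding" hint: NexentaStor is illumos/Solaris-based, so a plausible sketch (assuming sshd_config really is at /etc/ssh/sshd_config and ssh is managed by SMF on the appliance) would be:

      # on the NexentaStor appliance, as root
      grep -i AllowTcpForwarding /etc/ssh/sshd_config
      # make sure the line reads "AllowTcpForwarding yes" (appliances often ship with "no")
      # then restart sshd so the change takes effect
      svcadm restart svc:/network/ssh:default
      # afterwards, a working tunnel should answer on the forwarded port, e.g.:
      telnet localhost 4200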

    Read the article

  • lftp cannot connect to IIS

    - by ruyrocha
    Hello, I can not connect to IIS using lftp as you can see here: <--- 200 Language is now English, UTF-8 encoding. ---> OPTS UTF8 ON <--- 200 OPTS UTF8 command successful - UTF8 encoding now ON. ---> HOST x.x.x.x <--- 504 Server cannot accept argument. ---> USER bla <--- 331 Password required for hgtrf. ---> PASS blabla <--- 230 User logged in. ---> PWD <--- 257 "/" is current directory. ---> PBSZ 0 <--- 200 PBSZ command successful. ---> PROT P <--- 534 Policy denies SSL. ---> PASV <--- 227 Entering Passive Mode (x.x.x.x,194,118). ---- Connecting data socket to (x.x.x.x) port 49782 **** Socket error (Connection refused) - reconnecting ---> LIST ---> ABOR ---- Closing aborted data socket ---- Closing control socket I could connect, list, retrieve and send files using standard ftp command. Do you have any suggestion?
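
    The trace shows the server rejecting protected data connections (534 Policy denies SSL) and then the passive data connection being refused. A sketch of lftp settings worth trying, either interactively or in ~/.lftprc - these are guesses at the cause, since the refused port 49782 could equally be a firewall not allowing IIS's passive port range:

      set ftp:ssl-allow no          # don't negotiate TLS at all for this server
      # or, if the login should stay encrypted but the data channel can be clear:
      set ftp:ssl-protect-data no
      debug 5                       # keep the protocol trace visible while testing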

    Read the article

  • Cannot cd to parent directory with cd dirname

    - by Sharjeel Sayed
    I have made a bash command which generates a one-liner for restarting all Weblogic (8, 9, 10) instances on a server /usr/ucb/ps auwwx | grep weblogic | tr ' ' '\n' | grep security.policy | grep domain | awk -F'=' '{print $2}' | sed 's/weblogic.policy//' | sed 's/security\///' | sort | sed 's/^/cd /' | sed 's/$/ ; cd .. ; \/recycle_script_directory_path\/recycle/ ;' | tr '\n' ' ' To restart a Weblogic instance, the recycle ( /recycle_script_directory_path/recycle/) script needs to be initiated from within the domain directory as the recycle script pulls some application information from some .ini files in the domain directory. The following part of the script generates a line to cd to the parent directory of the app, i.e. the domain directory: sed 's/$/ ; cd .. ; \/recycle_script_directory\/recycle/ ;' | tr '\n' ' ' I am sure there is a better way to cd to the parent directory, like cd dirname, but every time I run the following cd command, it throws a "Variable syntax" error. cd $(dirname '/domain_directory_path/app_name') How do I incorporate the cd to the directory name in a better way? Also, are there any enhancements for my bash command? Some info on my script 1) The following part lists out the weblogic instances running along with their full path /usr/ucb/ps auwwx | grep weblogic | tr ' ' '\n' | grep security.policy | grep domain | awk -F'=' '{print $2}' | sed 's/weblogic.policy//' | sed 's/security\///' | sort 2) The grep domain part is required since all domain names have domain as the suffix
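
    One observation, offered as a guess: "Variable syntax" is the error a csh/tcsh prints when it meets $( ), which bash and ksh accept, so the command may actually be running under a C shell rather than bash. A sketch of both forms:

      # POSIX sh / bash / ksh: command substitution
      cd "$(dirname /domain_directory_path/app_name)"
      # csh / tcsh have no $( ); backticks work there (and in bash too)
      cd `dirname /domain_directory_path/app_name`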

    Read the article

  • Problems during an update of cPanel / WHM

    - by haron
    I ordered a Master WHM account with the CentOS / cPanel combination. whm-cpanel.eu.pn The installation is a fresh one; an update of the basic services was necessary (it had: WHM 11.15.0, cPanel 11.17.0, WHM X v3.1.0, Apache 1.3.37, PHP 4.4.7, MySQL 4.1.22). 1) I started by updating cPanel / WHM via the command /scripts/upcp. Everything went well until the middle of the installation, when the server stopped responding (no ping, no ssh). The installation appears to have continued on its own to the end, and after some time everything was back to normal (I do not know if there was a reboot) and my interface was updated (cPanel 11.24.4-R36167 - WHM 11.24.2 - X 3.9). 2) Then I updated MySQL via the tweak interface in WHM and then the command /scripts/mysqlup. Here everything went fine, no problem. 3) Finally, I wanted to upgrade to Apache 2.2 / PHP 5 and I used the command /scripts/easyapache. After selecting all the packages and modules the installation started, but the same thing happened as in point 1: the server stopped responding, and this time the installation did not go through. Apache 2.2 went in fine (after the second try) but PHP has remained at 4. I tried the same operation several times without success. I do not think this is a memory problem; a free -m shortly before losing contact showed nothing alarming. On the other hand, CPU time seemed to climb. I reinstalled the machine and tried again; same problem! Whether via the WHM interface or from the shell, the installation stops short: for 15 minutes the machine is not responding and then everything returns to normal, but PHP is never updated. Is there a known bug in this version of cPanel / WHM? Has anyone run into the same problem? If I compile Apache / PHP manually, without using the easyapache script, might I run into problems with cPanel later? Thank you!

    Read the article

  • Troubles installing/starting Redis via Resque

    - by Craig Flannagan
    Trying to complete instructions for Resque/Redis installation here: https://github.com/defunkt/resque/blob/master/README.markdown Am stuck at where I'm trying to start up Redis via Resque at the following command: Craig:/usr/local/src/resque$ rake redis:start (in /usr/local/src/resque) Detach with Ctrl+\ Re-attach with rake redis:attach ../../bin/dtach -A /tmp/redis.dtach ../../bin/redis-server ../../../etc/redis.conf rake aborted! Command failed with status (127): [../../bin/dtach -A /tmp/redis.dtach ../../...] (See full trace by running task with --trace) Rerunning with --trace (showing only part of trace): Craig:/usr/local/src/resque$ rake redis:start --trace (in /usr/local/src/resque) ** Invoke redis:start (first_time) ** Execute redis:start Detach with Ctrl+\ Re-attach with rake redis:attach ../../bin/dtach -A /tmp/redis.dtach ../../bin/redis-server ../../../etc/redis.conf rake aborted! Command failed with status (127): [../../bin/dtach -A /tmp/redis.dtach ../../...] /Users/craigflannagan/.rvm/gems/ruby-1.9.2-head@foo/gems/rake-0.8.7/lib/rake.rb:995:in `block in sh' Not sure what is wrong here - by the way, when I did those instructions $ git clone git://github.com/defunkt/resque.git $ cd resque $ PREFIX=<your_prefix> rake redis:install dtach:install $ rake redis:start I wasn't sure whether or not I was supposed to be doing #1 from within the Rails project, or if I was supposed to have the git clone create a new folder outside the Rails project (in this case, I chose to have folder created outside the project).

    Read the article

  • File transfer problems through VPN when Cisco IPS is enabled

    - by Richard West
    We have a Cisco ASA 5510 firewall with the IPS module installed. We have a customer that we must connect to via VPN to their network to exchange files via FTP. We use the Cisco VPN client (version 5.0.01.0600) on our local workstations, which are behind the firewall and subject to the IPS. The VPN client is successful in connecting to the remote site. However, when we start the FTP file transfer we are able to upload only 150K to 200K of data, then everything stops. A minute later the VPN session is dropped. I think I have isolated this to an IPS issue by temporarily disabling the Service Policy on the ASA for the IPS with the following command: access-list IPS line 1 extended permit ip 0.0.0.0 0.0.0.0 0.0.0.0 0.0.0.0 inactive After this command was issued I then established the VPN to the remote site and was successful in transferring the entire file. While still connected to the VPN and FTP session I issued the command to enable the IPS: access-list IPS line 1 extended permit ip 0.0.0.0 0.0.0.0 0.0.0.0 0.0.0.0 The file transfer was tried again and was once again successful so I closed the FTP session and reopened it, while keeping the same VPN session open. This file transfer was also successful. This told me that nothing with the FTP programs was being filtered or causing the problem. Furthermore, we use FTP to exchange files with many sites every day without issue. I then disconnected the original VPN session, which was established when the access-list was inactive, and reconnected the VPN session, now with the access-list active. After starting the FTP transfer the file stopped after 150K. To me this seems like the IPS is blocking, or somehow interfering with, the initial VPN setup to the remote site. This only started happening last week after the latest IPS signature updates were applied (sig version 407.0). Our previous sig version was 95 days old because the system was not auto updating itself. Any ideas on what could be causing this problem?

    Read the article

  • help Add Any Application Shortcut in Desktop Context Menu

    - by blackjack
    I got the info here but after adding that I didn't get any shortcut on my desktop context menu :( Please help me, I want it only on my desktop context menu. Open regedit and goto: HKEY_CLASSES_ROOT\Directory\Background\shell now under this key create another key with any name and in right-side pane set its value to the label, which you want to show in desktop context menu, like Media Player, Winamp, Firefox, anything else. Now create another key under this newly created key with name command. and in right-side pane set its value to the exact path of application, like: C:\Program Files\Windows Media Player\wmplayer.exe C:\Program Files\Winamp\winamp.exe etc... That's it. Now you can check your favorite application shortcut in desktop context menu. You can create as many shortcuts as you want. Simply create a separate key for all the applications. Following is ready-made code: Windows Registry Editor Version 5.00 [HKEY_CLASSES_ROOT\Directory\Background\shell\WMP] @="Windows Media Player" [HKEY_CLASSES_ROOT\Directory\Background\shell\WMP\command] @="C:\Program Files\Windows Media Player\wmplayer.exe" Just change the label and path to your desired application and save with the name "vishal.reg" (including the quotes) and run it. You can also set the application shortcut to show only when you press the Shift key by adding an "Extended" String value in the right-side pane of the newly created key: Windows Registry Editor Version 5.00 [HKEY_CLASSES_ROOT\Directory\Background\shell\WMP] @="Windows Media Player" "Extended"="" [HKEY_CLASSES_ROOT\Directory\Background\shell\WMP\command] @="C:\Program Files\Windows Media Player\wmplayer.exe"
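
    One detail that may matter if the .reg route was used: in a .reg file, backslashes inside quoted string data have to be doubled (and a path containing spaces is safer quoted), otherwise the command value may not import as written. A corrected sketch of the ready-made file, keeping the WMP entry purely as an illustration:

      Windows Registry Editor Version 5.00

      [HKEY_CLASSES_ROOT\Directory\Background\shell\WMP]
      @="Windows Media Player"

      [HKEY_CLASSES_ROOT\Directory\Background\shell\WMP\command]
      @="\"C:\\Program Files\\Windows Media Player\\wmplayer.exe\""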

    Read the article

  • Open Office crashes, recovers, crashes again

    - by Daniel R Hicks
    After completely reinstalling my laptop due to apparent registry corruption, I've encountered a problem with Open Office: I open a simple Calc spreadsheet, it comes up normally, but then after anywhere from 5 seconds to several minutes (without even touching the Calc window) OO crashes, then comes up through recovery. If I let it "recover" it will do so and bring the spreadsheet up again, only to repeat the crash scenario again. If I kept clicking "OK" it would apparently do this all day. I reinstalled OO once and the problem went away for a while, but it came back. I then attempted to "reset" my profile (i.e., rename the OO user directory in App Data), but OO crashed during the first startup after that, then resumed the original behavior. If I open the same file using Excel it complains of errors in the file, and "recovers" them, but the "error report" it generates contains no details. If I save the "recovered" file then OO Calc will open it, but the problem returns after saving again. Any ideas? (The system is Vista SP2, running OO 3.4.1) How to reproduce: Start Open Office Calc. Save workspace as "CrashTest.ods" From Task Manager kill Open Office (soffice.exe/bin -- one of each) Double click on the saved "CrashTest.ods" in Explorer. OO puts up a message that recovery will occur -- allow it. When the Calc window comes up, don't touch it -- just wait about 10 seconds. Calc window closes and OO puts up a message that recovery will occur -- from now on the sequence will repeat. I suspect this behavior is limited to a few (recent) versions of OO, and very possibly only Calc. Reported as Open Office Bug 1211094. Sigh!! As much as it irritates me, I'm having to switch over to Excel for several things I used to do with Calc. Excel has a miserable UI, but at least it stays up for longer than 10 seconds.

    Read the article

  • frequent "SNMP error" with Cacti

    - by nn4l
    When adding new devices to my Cacti instance, I get frequent "SNMP error" messages in the device screen. But the error is not consistent, not even for the same device. Here's what I already have checked: Sometimes a device shows that "SNMP error" message even when it did not have that error an hour before, and vice versa. I tried this with several different Cacti releases, installed on different OS (Debian squeeze: 0.8.7g-1+squeeze1, Debian Sid: 0.8.7i-3, CentOS 6.0: 0.8.7i-2.el6). I tried both from a local (192.168.1.xy) network and from a different data center, so I don't think it is a network problem. I reinstalled the Cacti database and reran the scripts to install my devices; now different devices have that error. When executing a snmpwalk or snmpgetnext command from the command line, it is always successful. Increasing the timeout to 20000 (20 seconds) and the retry count to 10 does not make a difference. The cacti.log says: 04/14/2012 02:10:19 PM - CMDPHP: Poller[0] WARNING: SNMP GetNext Timeout for Host:'s0026.mydomain.de', and OID:'.1.3.6.1.2.1.1.3.0' 04/14/2012 02:10:20 PM - CMDPHP: Poller[0] WARNING: SNMP GetNext Timeout for Host:'s0026.mydomain.de', and OID:'.1.3' However, when executing snmpget or snmpgetnext with that OID from the command line, a proper response is returned immediately.

    Read the article

  • Remote Scripted Installation of Sun/Oracle JRE

    - by chrisbunney
    I'm attempting to automate the installation of a Debian server (debian 6.0 squeeze 64bit). Part of the installation requires the Sun JRE package to be installed. This package has a licence agreement, which has to be accepted. I have a script which uses the following lines to accept and install the JRE: echo "sun-java6-bin shared/accepted-sun-dlj-v1-1 boolean true" | debconf-set-selections apt-get install -y sun-java6-jre This works fine when executing the script locally. However, I need to execute the script remotely using the ssh command, e.g.: ssh -i keyFile root@hostname './myScript' This doesn't work. In particular, it fails on apt-get install -y sun-java6-jre. It would seem that in spite of me setting the licence agreement to accepted, when run remotely in this manner it is ignored. Despite setting the value to true, I still get prompted to manually accept the agreement when I run this command: ssh -i keyFile root@hostname 'apt-get install -y sun-java6-jre' I suspect it is something to do with environment that is taken care of when running a proper terminal session, but have no idea what to try next to fix it. So, what do I have to do to get this command (and hence my deployment script) to run correctly when executing it remotely? Or is there an alternative way that allows me to install the JRE remotely by another means? Edit 0: I have compared the output of env when executed remotely via ssh and when executed via a local terminal session. The only difference between the outputs is that the local terminal session has the additional value TERM=xterm.
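
    One pattern that tends to work for this kind of debconf-guarded package is to make both the selection and the frontend explicit inside the same non-interactive invocation, so nothing depends on what a normal login shell would set up. A sketch (question names as in your script; the extra sun-java6-jre selection is an assumption that may or may not be needed):

      ssh -i keyFile root@hostname 'set -e
        echo "sun-java6-bin shared/accepted-sun-dlj-v1-1 boolean true" | debconf-set-selections
        echo "sun-java6-jre shared/accepted-sun-dlj-v1-1 boolean true" | debconf-set-selections
        DEBIAN_FRONTEND=noninteractive apt-get install -y sun-java6-jre'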

    Read the article

  • How to remotely open gedit with SFTP URL in Gnome through SSH?

    - by Álvaro Justen
    My setup is weird and I can't change it now. I have two machines: local-machine: it's my desktop running Ubuntu with Gnome remote-machine: it's one virtual machine, also running Ubuntu but without X In both machines I have my private and public SSH keys. I need to run SSH from remote-machine to local-machine and run gedit (in local-machine, under the default $DISPLAY) but opening a file on remote-machine through SFTP. Something like this: myuser@remote-machine:~$ ssh local-machine "DISPLAY=:0.0 gedit sftp://remote-machine/some/file" The command above doesn't work. gedit shows this message: Could not open the file sftp://remote-machine/some/file. gedit cannot handle sftp: locations. Note that: /some/file exists on remote-machine. I can SSH normally from remote-machine to local-machine using my SSH key without any problems! I can run the command DISPLAY=:0.0 gedit sftp://remote-machine/some/file in a terminal on local-machine and gedit opens the file on remote-machine without any problems - but the terminal in which I executed the command is running in DISPLAY :0 (really, it's gnome-terminal). I also tried the -t option of the SSH client (to force pseudo-tty allocation) but it didn't work. If I try to run DISPLAY=:0.0 gedit sftp://remote-machine/some/file on local-machine but under a tty (for example in tty1, by pressing <Ctrl>+<Alt>+<F1>) it does not work - I get the same error as when running from remote-machine. I found that if I pass the environment variable DBUS_SESSION_BUS_ADDRESS with a correct value, it works! So, if I do something like that: myuser@local-machine:~$ env | grep DBUS_SESSION_BUS_ADDRESS > env.txt myuser@local-machine:~$ scp env.txt remote-machine: and then: myuser@remote-machine:~$ ssh local-machine "DISPLAY=:0.0 $(cat env.txt) gedit sftp://remote-machine/some/file" it works! The problem is that I'm not on local-machine so I can't get the correct value for this env variable. Is there any other way to make this work?
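
    Since the only missing piece is DBUS_SESSION_BUS_ADDRESS, one way to avoid the env.txt copy is to have the remote command read the address out of a process that already belongs to the desktop session on local-machine. A sketch, assuming the desktop user is myuser and a gnome-session process exists (wrap it in the ssh local-machine "..." call as before):

      # run on local-machine (e.g. inside the ssh command from remote-machine)
      pid=$(pgrep -u myuser gnome-session | head -n1)
      export $(tr '\0' '\n' < /proc/$pid/environ | grep '^DBUS_SESSION_BUS_ADDRESS=')
      DISPLAY=:0.0 gedit sftp://remote-machine/some/file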

    Read the article

  • QT Creator 64-bit Snow Leopard

    - by quadelirus
    I have a bunch of libraries that I need to link against that I installed via macports. They are 64-bit libraries. I'm working on an application written with QT Creator and the .pro is set up. I downloaded the QT SDK for Mac OS X, but it is 32-bit and so the compiled code won't link against the 64-bit binaries that I got from macports. Ok. So I downloaded the QT SDK source and built from source using -arch x86_64. Now I have a 64-bit version of the SDK (I think) but it didn't build a QT Creator app. So. I need to know one of 4 things: Either, 1.) I'm guessing that a simple make command will convince the QT SDK to build the creator for me. If this is true, then what is the command (make creator?). barring that, I need to know 2.) The easiest way to get MacPorts to redownload the libraries that I installed with a 32-bit version (I keep seeing a "+universal" mentioned, but I haven't seen it on a line, and simply calling ports +universal install XYZ doesn't seem to work--perhaps I need to uninstall and reinstall the package?). Also, is this a stupid idea? or 3.) Someone who actually has a prebuilt 64-bit QT SDK installer so I don't have to mess with this. It is ridiculous that QT doesn't already have this available, in my opinion--SL has been out since, what, last August? 4--and this would take the cake.) I don't understand why I can't simply put a "compile-for-64-bit stupid" command directly into the QT pro file and have it build. There isn't really a reason why a compiler compiled in 32-bits couldn't compile to 64-bits is there? Thanks.

    Read the article

  • Explorer and open file dialog not responding (Vista)

    - by rohancragg
    Any explorer window opened for the first time on my machine causes the explorer window to display the folders tree and folder path in the address bar immediately, but the file/folder list pane is blank and the window displays 'Not Responding' in the title bar; this hangs for up to a minute or more. Any file dialog displays 'Not Responding' in the title bar. The files list is eventually displayed after a few seconds or more. Steps to repro: Close all open instances of explorer Windows Key | Run | [enter a folder path such as 'c:\temp'] Or within any app: use a file open / save dialog Once there is at least one open instance of explorer the performance is still fairly poor but not nearly so bad and file lists are displayed in a timely fashion. What I've tried: Cleaned up registry with CCleaner tool, and uninstalled all other unused software Checked nothing unwanted running at startup with Autoruns Removed any ISO burner/recorder/mount software Still to try Get latest version of everything - especially stuff with shell extension behaviour such as TortoiseSVN Anyone have any other suggestions? Thanks a lot. Update I'm wondering if this is related, I'll try the hotfix when I get home and report back: KB972685 - FIX:Explorer.exe hangs when using a shell extension written using MFC Update 2 Before I got a chance to try the hotfix it seems one of the above actions fixed this for me; either the removal of IsoRecorder or TortoiseHg (which I was no longer using anyway). Update 3 A similar issue with Explorer.exe has come back since installing TortoiseHg 1.01 :-(

    Read the article

  • email output of powershell script

    - by Gordon Carlisle
    I found this wonderful script that outputs the status of the current DFS backlog to the powershell console. This works great, but I need the script to email me so I can schedule it to run nightly. I have tried using the Send-MailMessage command, but can't get it to work. Mainly because my powershell skills are very weak. I believe most of the issue revolve around the script using the Write-Host command. While the coloring is nice I would much rather have it email me the results. I also need the solution to be able to specify a mail server since the dfs servers don't have email capability. Any help or tips are welcome and appreciated. Here is the code. $RGroups = Get-WmiObject -Namespace "root\MicrosoftDFS" -Query "SELECT * FROM DfsrReplicationGroupConfig" $ComputerName=$env:ComputerName $Succ=0 $Warn=0 $Err=0 foreach ($Group in $RGroups) { $RGFoldersWMIQ = "SELECT * FROM DfsrReplicatedFolderConfig WHERE ReplicationGroupGUID='" + $Group.ReplicationGroupGUID + "'" $RGFolders = Get-WmiObject -Namespace "root\MicrosoftDFS" -Query $RGFoldersWMIQ $RGConnectionsWMIQ = "SELECT * FROM DfsrConnectionConfig WHERE ReplicationGroupGUID='"+ $Group.ReplicationGroupGUID + "'" $RGConnections = Get-WmiObject -Namespace "root\MicrosoftDFS" -Query $RGConnectionsWMIQ foreach ($Connection in $RGConnections) { $ConnectionName = $Connection.PartnerName.Trim() if ($Connection.Enabled -eq $True) { if (((New-Object System.Net.NetworkInformation.ping).send("$ConnectionName")).Status -eq "Success") { foreach ($Folder in $RGFolders) { $RGName = $Group.ReplicationGroupName $RFName = $Folder.ReplicatedFolderName if ($Connection.Inbound -eq $True) { $SendingMember = $ConnectionName $ReceivingMember = $ComputerName $Direction="inbound" } else { $SendingMember = $ComputerName $ReceivingMember = $ConnectionName $Direction="outbound" } $BLCommand = "dfsrdiag Backlog /RGName:'" + $RGName + "' /RFName:'" + $RFName + "' /SendingMember:" + $SendingMember + " /ReceivingMember:" + $ReceivingMember $Backlog = Invoke-Expression -Command $BLCommand $BackLogFilecount = 0 foreach ($item in $Backlog) { if ($item -ilike "*Backlog File count*") { $BacklogFileCount = [int]$Item.Split(":")[1].Trim() } } if ($BacklogFileCount -eq 0) { $Color="white" $Succ=$Succ+1 } elseif ($BacklogFilecount -lt 10) { $Color="yellow" $Warn=$Warn+1 } else { $Color="red" $Err=$Err+1 } Write-Host "$BacklogFileCount files in backlog $SendingMember->$ReceivingMember for $RGName" -fore $Color } # Closing iterate through all folders } # Closing If replies to ping } # Closing If Connection enabled } # Closing iteration through all connections } # Closing iteration through all groups Write-Host "$Succ successful, $Warn warnings and $Err errors from $($Succ+$Warn+$Err) replications." Thanks, Gordon
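
    A low-impact way to keep the script as-is but still get mail is to collect each status line into an array next to the Write-Host calls and hand the joined text to Send-MailMessage, which takes the relay host via -SmtpServer (the addresses and server below are placeholders). A sketch of the pieces to add:

      # near the top of the script
      $report = @()

      # wherever a line is written to the console, also record it, e.g.:
      $line = "$BacklogFileCount files in backlog $SendingMember->$ReceivingMember for $RGName"
      Write-Host $line -fore $Color
      $report += $line

      # at the very end, after the summary line, send everything through your relay
      $report += "$Succ successful, $Warn warnings and $Err errors from $($Succ+$Warn+$Err) replications."
      Send-MailMessage -From "dfsr-report@example.com" -To "you@example.com" `
          -Subject "DFSR backlog on $env:ComputerName" `
          -Body ($report -join "`r`n") `
          -SmtpServer "smtp.example.com"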

    Read the article

  • Windows Server 2008 Create Symbolic Link, updated Security Policy still gives privilege error

    - by Matt
    Windows Server 2008, RC2. I am trying to create a symbolic/soft link using the mklink command: mklink /D LinkName TargetDir e.g. c:\temp\>mklink /D foo bar This works fine if I run the command line as Administrator. However, I need it to work for regular users as well, because ultimately I need another program (executing as a user) to be able to do this. So, I updated the Local Security Policy via secpol.msc. Under "Local Policies" "User Rights Management" "Create symbolic links", I added "Users" to the security setting. I rebooted the machine. It still didn't work. So I added "Everyone" to the policy. Rebooted. And STILL it didn't work. What on earth am I doing wrong here? I think my user is even an Administrator on this box, and running plain command line even with this updated policy in place still gives me: You do not have sufficient privilege to perform this operation.
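
    One thing worth checking before changing the policy further: with UAC enabled, the create-symbolic-link right is stripped from an administrator's non-elevated token, so an admin only sees it in an elevated prompt, while a standard user who has been granted it through secpol.msc should get it after a full log off/log on. A quick way to see whether the current token actually holds the privilege:

      whoami /priv | findstr /i SymbolicLink
      REM if the right is present, a line like this appears (state may be Enabled or Disabled):
      REM SeCreateSymbolicLinkPrivilege  Create symbolic links  Disabled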

    Read the article

  • process and memory issue on linux server

    - by zapping
    Need some assistance in analyzing the Apache and PHP processes running on a Linux server. It's an 8-core Intel processor with 4GB RAM. When the website on it runs, top displays this: PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND 23459 username1 16 0 151m 27m 8388 S 11.3 0.7 0:11.71 php5 23730 username1 16 0 151m 28m 8388 S 11.3 0.7 0:03.87 php5 23458 username1 16 0 151m 28m 8388 S 3.0 0.7 0:19.20 php5 16202 mysql 15 0 459m 38m 4624 S 0.7 1.0 62:33.81 mysqld 24141 nobody 15 0 311m 5832 2304 S 0.3 0.1 0:00.03 httpd Why does the command say php5 when the website is accessed? Both Apache and PHP were preconfigured, so I am not sure what was done there. I tried setting up the same site and db on a different server, but on it the process always shows httpd and not php5. The site uses a MySQL db. The problem is that server load seems to go up to about 5.x when the website is accessed by about 16 users. When the free -m command was given the output shows total used free shared buffers cached Mem: 3941 3727 213 0 236 2734 -/+ buffers/cache: 756 3184 Swap: 4095 0 4095 Lots of memory seems to be in cache and free memory is low. Even when the website is not accessed, that is, leaving it very much idle for about 2 days, the free memory showed just 190. When the site is accessed the free memory seems to drop to about 90MB, then it increases to about 150MB. It always seems to remain at just about 200MB. Is it somehow related to the server load showing 5.x? Will adding some more RAM resolve the load issue?

    Read the article

  • How \Deleted flag can be unset for all mails in cyrus-imapd mailbox?

    - by Sachin Divekar
    I have a 5GB mailbox which I moved using imapsync. But somehow I messed up with --delete/--delete2 option and end up with almost all the messages having \Deleted flag set. I do not have delayed expunge enabled, so I can not use unexpunge utility. I am using cyrus-imapd v2.3.7. Using cyrus-imapd's debugging feature I found out that email client(Roundcube in my case) fires following IMAP command to unset it. UID STORE 179 -FLAGS.SILENT (\Deleted) I don't know if somehow I can fire this command for all the mails. Is there any way I can unset \Deleted flag for all the mails in the mailbox? UPDATE: Using @geekosaur's tip of specifying range of message-ids in the above command, I could solve it for one mailbox under INBOX like INBOX.folder1. Is there any way I can do it for multiple mailboxes under INBOX recursively? Now I am working on solving it using/creating some script, maybe using Perl's IMAP related module. But still I need to solve it asap so inputs are welcome.
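
    The STORE you captured can be applied to a whole mailbox at once by giving it the full UID range (1:*), so one option is a small script that walks the folders and clears the flag. A sketch using Perl's Mail::IMAPClient - the server, login and the INBOX filter are placeholders to adapt:

      #!/usr/bin/perl
      use strict;
      use warnings;
      use Mail::IMAPClient;

      # placeholders: point this at the real server and mailbox owner
      my $imap = Mail::IMAPClient->new(
          Server => 'localhost', User => 'someuser', Password => 'secret', Uid => 1,
      ) or die "login failed: $@";

      for my $folder ($imap->folders) {
          next unless $folder =~ /^INBOX/;       # INBOX and everything under it
          $imap->select($folder) or next;
          my @msgs = $imap->search('DELETED');   # only messages carrying \Deleted
          $imap->unset_flag('\Deleted', @msgs) if @msgs;
          print "$folder: cleared \\Deleted on ", scalar(@msgs), " messages\n";
      }
      $imap->logout;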

    Read the article

  • Importing XML into an AWS RDS instance

    - by RoyHB
    I'm trying to load some xml into an AWS RDS (mySql) instance. The xml looks like: (it's an xml dump of the ISO-3661 codes) <?xml version="1.0" encoding="UTF-8"?> <countries> <countries name="Afghanistan" alpha-2="AF" alpha-3="AFG" country-code="004" iso_3166-2="ISO 3166-2:AF" region-code="142" sub-region-code="034"/> <countries name="Åland Islands" alpha-2="AX" alpha-3="ALA" country-code="248" iso_3166-2="ISO 3166-2:AX" region-code="150" sub-region-code="154"/> <countries name="Albania" alpha-2="AL" alpha-3="ALB" country-code="008" iso_3166-2="ISO 3166-2:AL" region-code="150" sub-region-code="039"/> <countries name="Algeria" alpha-2="DZ" alpha-3="DZA" country-code="012" iso_3166-2="ISO 3166-2:DZ" region-code="002" sub-region-code="015"/> The command that I'm running is: LOAD XML LOCAL INFILE '/var/www/ISO-3166_SMS_Country_Codes.xml' INTO TABLE `ISO-3661-codes`(`name`,`alpha-2`,`alpha-3`,`country-code`,`region-code`,`sub-region-code`); The error message I get is: ERROR 1148 (42000): The used command is not allowed with this MySQL version The infile that is referenced exists, I've selected a database before running the command and I have appropriate privileges on the database. The column names in the database table exactly match the xml field names.
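
    ERROR 1148 is the generic response when the LOCAL variant of LOAD DATA/LOAD XML is disabled, and it has to be allowed on both ends: the mysql client must be started with local-infile enabled, and on RDS the server-side local_infile variable is controlled through the instance's DB parameter group rather than an editable my.cnf. A sketch of the checks, with the endpoint and credentials as placeholders:

      # 1) start the client with LOCAL enabled
      mysql --local-infile=1 -h <rds-endpoint> -u <user> -p <database>

      # 2) inside mysql, confirm the server side allows it; on RDS this is changed
      #    via the DB parameter group (followed by applying the parameter group)
      mysql> SHOW VARIABLES LIKE 'local_infile';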

    Read the article

  • executable in path, findable by which, yet cannot execute without fully qualifying path?

    - by Peeter Joot
    I've got a bizarre seeming shell issue, with a command in the $PATH that the shell (ksh, running on Linux) appears to cowardly refuse to invoke. Without fully qualifying the command, I get: # mycommand /bin/ksh: mycommand: not found [No such file or directory] but the file can be found by which: # which mycommand /home/me/admbin/mycommand I also explicitly see that directory in $PATH: # echo $PATH | tr : '\n' | grep adm /home/me/admbin The exe at that location seems normal: # file /home/me/admbin/mycommand /home/me/admbin/mycommand: setuid setgid ELF 64-bit LSB executable, x86-64, version 1 (SYSV), for GNU/Linux 2.6.4, dynamically linked (uses shared libs), not stripped # ls -l mycommand -r-sr-s--- 1 me mygroup 97892 2012-04-11 18:01 mycommand and if I run it explicitly using a fully qualified path: # /home/me/admbin/mycommand I see the expected output. Something is definitely confusing the shell here, but I'm at a loss what it could be? EDIT: finding what looked like a similar question: Binary won't execute when run with a path. Eg >./program won't work but >program works fine I also tested for more than one such command in my $PATH, but find only one: # for i in `echo $PATH | tr : '\n'` ; do test -e $i/mycommand && echo $i/mycommand ; done /home/me/admbin/mycommand
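
    A couple of low-risk checks that narrow this down without guessing at the cause: ask ksh how it resolves the name, make it forget any tracked alias (in ksh, any assignment to PATH does that), and look for invisible characters in the PATH entry or the filename, since a stray carriage return or trailing space can produce exactly this "found by which, but not runnable" confusion. A sketch:

      # how does the shell itself resolve the name?
      whence -v mycommand
      # make ksh drop any tracked aliases and retry
      PATH="$PATH"
      mycommand
      # look for invisible characters (shown by cat -A) in PATH and in the directory listing
      echo "$PATH" | tr ':' '\n' | cat -A | grep adm
      ls /home/me/admbin | cat -A | grep mycommand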

    Read the article

  • Cursor lag when mouse cursor changes?

    - by Mathias Lykkegaard Lorenzen
    Cursor lag issue Introduction I'm experiencing a newly arrived problem lately that frustrates me a lot. The computer I bought is a Clevo 150ERM. Two of my friends bought the same machine, and are experiencing the same issue. The computer came with Windows 7. There, I had no issues. Then, when we all switched to Windows 8, they had the mouse problem and I didn't. That is until after 4 or 5 months when I decided to install the RTM driver of my Intel graphics chip, and the latest Nvidia driver. I also installed the latest version of Skype that just released (Skype 6 and Skype for Metro). This basically leads me to conclude that the issue is not hardware-prone, and is not based on the operating system itself, rather the drivers or components that follow with it. Description of the issue The issue itself (lag with the mouse) happens whenever the cursor icon changes. For instance, if I keep hovering from and to a textfield (and the cursor changes into a caret and then back to a mouse), it stops for 200 milliseconds while it changes the icon. An example is if I follow the mouse in the pattern shown by the arrows below. When crossing the window border, the cursor changes into a "resize window" cursor for a short while, making the cursor lag. This doesn't sound like much, but it happens every time the cursor changes (even if it's to just move the mouse somewhere else, and accidentally make it cross a window border from where the resize cursor shows etc). What do you suggest I try?

    Read the article
