Search Results

Search found 27581 results on 1104 pages for 'execute command'.

Page 341 of 1104

  • How to install nvidia drivers (GT 440) on Xubuntu 12.10, screen res gone to 640x480 :(

    - by shaggyjack
    Hi all, after days of endless googling I finally gave in and decided to ask for help directly. I have just installed Xubuntu and updated to 12.10 on a pretty old (12-year-old) machine, and I am now struggling to install the correct drivers for the Nvidia GT 440 card. I have managed to get "Additional Drivers", but the app does not show in the menu. I went through a few procedures which ended up with my screen going no higher than 640x480, and I tried all the sudo apt-get variations with nvidia-current and nvidia-current-updates. I think I got the right version of the drivers (96.43.07), but they won't install from the terminal because the installer says I am running an X server. So I learnt how to shut down the graphical interface, but when I try to install them from there, after I type the command (sudo home/username/Downloads/NVIDIA-Linux-x86-96.43.07-pkg1.run) nothing happens and the terminal says something like "command not found". I am desperate for help; if anybody could point me in the right direction that would be greatly appreciated. There are lots of similar topics on installing nvidia drivers, but as I understand it, current drivers are no good for my old GPU. So if anybody could show me how to install the right version, that would be excellent. Thanks in advance! Jack
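
    The "command not found" message here is most likely the missing leading slash in the path (sudo home/... is a relative path), and .run installers also need the execute bit. A minimal sketch of the usual sequence, assuming the file really is the 96.43.07 package and sits in /home/username/Downloads:

        sudo service lightdm stop                            # stop the X server first
        cd /home/username/Downloads                          # note the leading slash
        chmod +x NVIDIA-Linux-x86-96.43.07-pkg1.run          # make the installer executable
        sudo ./NVIDIA-Linux-x86-96.43.07-pkg1.run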

    Read the article

  • Filename encoding broken after unzip on windows

    - by flammi88
    I zipped a directory on my Linux server. Many files in the directory have German umlauts in their filenames. The filesystem is ext3 and the system locale is set to de_DE.utf8. I used the following command to create the zip file:

        zip -r somezip.zip somefolder/

    I transferred this file via WinSCP to my Windows laptop and unzipped it. The issue: all filenames with German umlauts are broken. On my Linux server the filenames are displayed correctly. I assume that I made a mistake when I created the zip file. Does anyone have any ideas how I can preserve the correct filename encoding when I zip the files with the zip command on Linux?
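
    The usual culprit: zip stores the names in the local encoding, while the Windows built-in extractor assumes its own DOS codepage, which is what mangles the umlauts. One workaround, as a sketch rather than the only fix: create the archive with Python's zipfile machinery, which flags non-ASCII names as UTF-8, then extract on Windows with a tool that honors that flag (7-Zip, for example):

        python -c "import shutil; shutil.make_archive('somezip', 'zip', 'somefolder')"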

    Read the article

  • Rsync over ssh: "ERROR: module is read only" suddenly appeared

    - by user978548
    I've used rsync over ssh for some time to back up my shared host's contents to my personal Synology NAS (a 212j, for that matter), and it worked quite well. For information, I use a password-less ssh connection. Three days ago I updated my NAS software, and since then (or at least I believe that's the cause) the backup won't work anymore. I get the following error on the host:

        rsync: writefd_unbuffered failed to write 4 bytes to socket [sender]: Broken pipe (32)
        ERROR: module is read only

    ...which I do not understand. Besides that, nothing changed that I know of, in either source or destination, that could be related to rsync or ssh. I did check a few things and all seems to be alright: I can still connect through ssh from the host to my NAS with the right user, so ssh things like keys haven't changed. I also have the correct file permissions on the NAS (I checked, and also tried to create files and directories with the user that rsync uses over ssh). I read here and there that the error means I have to ensure my rsyncd.conf has read only = no in it, but as far as I know I never used rsyncd, nor did I ever configure anything for it, and until now it worked like a charm. I use the following command to do the backup:

        rsync -ab --recursive \
            --files-from="$FILES_FROM" \
            --backup-dir=backup_$SUFFIX \
            --delete \
            --filter='protect backup_*' \
            $WDIRECTORY/ \
            remote_backup:$REMOTE_BACKUP/

    So I'm stuck and really can't figure out what happened.

    Edit: As suggested in comments, I also tried passing commands to ssh (but not from inside an ssh session), which worked as expected, and also tried a single rsync command, which didn't work, failing just like the complete backup command:

        (sharedHost):hostuser:~ > touch test.txt
        (sharedHost):hostuser:~ > rsync test.txt remote_backup:backups/test.txt
        ERROR: module is read only
        rsync error: syntax or usage error (code 1) at main.c(1034) [Receiver=3.0.8]
        rsync: connection unexpectedly closed (9 bytes received so far) [sender]
        rsync error: error in rsync protocol data stream (code 12) at io.c(601) [sender=3.0.7]

    and

        (sharedHost):hostuser:~ > ssh remote_backup 'touch /abs_path_to_backups/backups/test2.txt && echo "ProoF" > /abs_path_to_backups/backups/test2.txt'
        (sharedHost):hostuser:~ > ssh remote_backup 'cat /abs_path_to_backups/backups/test2.txt'
        ProoF
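
    For what it's worth, on Synology boxes a DSM update can regenerate /etc/rsyncd.conf, and DSM routes even ssh-invoked rsync through that daemon config, which is one way a "module is read only" error can appear out of nowhere. A sketch of what to check on the NAS (module names vary, so read the file before editing anything):

        # on the NAS, as root
        cat /etc/rsyncd.conf
        # in the module section the backup is matched against, change
        #     read only = yes
        # to
        #     read only = no
        # or re-enable the backup/rsync service in DSM's control panel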

    Read the article

  • Multi Gateway and Backup Routing on a cisco router

    - by user64880
    Hi all, I have a 2611 Cisco router with only one FastEthernet port, and I now have two internet gateways. I want to configure the router so that when the primary route fails, the secondary route automatically takes over. When I set the two ip route commands on the router, it works well; but when the peer IP of the primary route goes down, the router does not fail over to the second route until I remove the first route command. My current configuration is below. How can I set this up?

        interface FastEthernet0/0
         ip address 81.12.21.100 255.255.255.248 secondary
         ip address 62.220.97.14 255.255.255.252

        ip route 0.0.0.0 0.0.0.0 62.220.97.13
        ip route 0.0.0.0 0.0.0.0 81.12.21.97 100

    Cheers, Kamal
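
    The reason the floating static never kicks in: a static route is only withdrawn when its interface goes down, not when the next hop dies, and with both gateways on the same FastEthernet port the interface never goes down. The usual fix is to tie the primary route to an IP SLA probe of the next hop. A sketch (syntax varies by IOS release; older 12.x trains spell it ip sla monitor):

        ip sla 1
         icmp-echo 62.220.97.13
         frequency 10
        ip sla schedule 1 life forever start-time now
        !
        track 1 ip sla 1 reachability
        !
        ip route 0.0.0.0 0.0.0.0 62.220.97.13 track 1
        ip route 0.0.0.0 0.0.0.0 81.12.21.97 100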

    Read the article

  • Added autossh in rc.local, but the dynamic port forwarding won't work

    - by rankjie
    I am using Raspbian on my newly arrived Raspberry Pi, and decided to make it my own proxy server. Now I need to set up an ssh tunnel from the Pi to my Linode server and make it start automatically with the system. What did I do: I added this line to /etc/rc.local:

        autossh -f theRemoteServer -N -D 5555 -L 1234:localhost:22

    After I reboot, I find that I can't use localhost:5555 as a SOCKS proxy. So I type ps -A | grep ssh, and I can see autossh and ssh both running:

        pi@raspberrypi ~ $ ps -A | grep ssh
        2018 ?        00:00:00 sshd
        2116 ?        00:00:00 autossh
        2119 ?        00:00:00 sshd
        2195 ?        00:00:00 sshd
        3173 ?        00:00:00 ssh

    (I've installed autossh, and the command works if I type it manually. I use passwordless key auth, so I don't have to enter a password.) Much appreciated, and sorry for my poor English.
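
    One classic gotcha with rc.local: it runs as root, so ssh and autossh look for root's keys, known_hosts and .ssh/config rather than pi's; the processes show up in ps but the tunnel never authenticates. A sketch of running the tunnel as the key-owning user instead (assuming that user is pi and that theRemoteServer is an alias defined in /home/pi/.ssh/config):

        # in /etc/rc.local, before "exit 0"
        su - pi -c "autossh -f -N -D 5555 -L 1234:localhost:22 theRemoteServer"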

    Read the article

  • Why do I always get this error when using 'apt-get' commands?

    - by Venki
    I am using Ubuntu 14.04 (with Unity). Today (as of the date of this post) I ran sudo apt-get update && sudo apt-get upgrade, and at the end of the upgrade process I got the following error:

        Setting up crossplatformui (1.0.38) ...
         * Stopping ACPI services...                                   [ OK ]
         * Starting ACPI services...                                   [ OK ]
        package libqtgui4 exist
        QT_VERSION = 4
        make -C /lib/modules/3.13.0-27-generic/build M=/usr/local/bin/ztemtApp/zteusbserial/below2.6.27 modules
        make[1]: Entering directory `/usr/src/linux-headers-3.13.0-27-generic'
          CC [M]  /usr/local/bin/ztemtApp/zteusbserial/below2.6.27/usb-serial.o
        /usr/local/bin/ztemtApp/zteusbserial/below2.6.27/usb-serial.c:34:28: fatal error: linux/smp_lock.h: No such file or directory
         #include <linux/smp_lock.h>
                                    ^
        compilation terminated.
        make[2]: *** [/usr/local/bin/ztemtApp/zteusbserial/below2.6.27/usb-serial.o] Error 1
        make[1]: *** [_module_/usr/local/bin/ztemtApp/zteusbserial/below2.6.27] Error 2
        make[1]: Leaving directory `/usr/src/linux-headers-3.13.0-27-generic'
        make: *** [modules] Error 2
        dpkg: error processing package crossplatformui (--configure):
         subprocess installed post-installation script returned error exit status 2
        Errors were encountered while processing:
         crossplatformui
        E: Sub-process /usr/bin/dpkg returned an error code (1)

    From then on, whatever apt-get command I use (as far as I can tell, everything except apt-get update), I keep getting the above error at the end of the process. Each apt-get command still does what it should without failing; for example, I installed blender with sudo apt-get install blender and it installed fine, though it showed the above error. After this I even got a kernel update (from 3.13.0-27 to 3.13.0-29, via the Software Updater), but the issue persists. How do I solve this issue?
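
    crossplatformui is ZTE's 3G-dongle connection tool, and its post-install script tries to rebuild a kernel module against linux/smp_lock.h, a header removed from the kernel well before 3.13, so the script fails on every dpkg run. If the dongle software isn't needed, removing the package is the usual way out, as a sketch:

        sudo apt-get remove --purge crossplatformui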

    Read the article

  • opennebula 3.4 in debian squeeze

    - by Jin Splif
    Hope I can get some advice and help... I am installing OpenNebula 3.4 on Debian Squeeze. Everything has been successful: I am able to access the OpenNebula Sunstone web page at localhost:9869 and use the one command. But when I try to create a host, its status becomes ERROR. Hope someone can assist me with this, thanks. Sample log:

        Monitoring host abc (0)
        [InM][I]: Command execution fail: 'if [ -x "/var/tmp/one/im/run_probes" ]; then /var/tmp/one/im/run_probes kvm 0 abc; else exit 42; fi'
        [InM][I]: ssh: Could not resolve hostname abc: Name or service not known
        [InM][I]: ExitCode: 255
        [InM][E]: Error monitoring host 0 : MONITOR FAILURE 0 -
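
    The log itself points at the cause: the front-end cannot resolve the hostname abc that the host was registered under. A sketch of the fix, with 192.0.2.10 standing in for the node's real address (alternatively, register the host by IP or FQDN):

        echo "192.0.2.10  abc" | sudo tee -a /etc/hosts
        ssh oneadmin@abc true    # verify passwordless ssh now works from the front-end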

    Read the article

  • how to record mic input and pipe the output to another program

    - by acrs
    Hi everyone, I'm trying to follow a tutorial on generating truly random bits (How To Generate Truly Random Bits). This is the command from the tutorial, but it does not work:

        rec -c 1 -d /dev/dsp -r 8000 -t wav -s w - | ./noise-filter >bits

    I know I can record my mic input using rec -c 1 no.wav, so this is the command I tried:

        rec -c 1 -r 8000 -t wav -s noise.wav - | ./noise-filter >bits

    but I get:

        root@xxc:~/cc# rec -c 1 -r 8000 -t wav -s noise.wav - | ./noise-filter >bits
        rec WARN formats: can't set sample rate 8000; using 48000
        rec FAIL sox: Input files must have the same sample-rate

    I have compiled noise-filter. I think the tutorial is using an older version of SoX and rec; I'm using SoX v14.3.2 on Ubuntu 12.04 Server. Can someone please help me?
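
    Two things changed between SoX versions: the old -s w (signed word) became -b 16 -e signed, and the capture device here refuses 8 kHz outright, so the rate has to be converted by SoX's rate effect instead of being requested from the hardware. A sketch under those assumptions:

        rec -c 1 -b 16 -e signed -t wav - rate 8000 | ./noise-filter >bits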

    Read the article

  • Where does linux look for shared libs?

    - by EsbenP
    I am trying to get the ar command onto an embedded ARM computer running Linux. I want to install Debian and OpenJDK; it is a headless system running a custom Linux distribution provided by the hardware manufacturer. The Debian installer is missing the ar command, so I tried copying the binaries from the Debian package, but when running ar I get:

        error while loading shared libraries: libbfd-2.18.0-multiarch.20080103.so: cannot open shared object file: No such file or directory

    libbfd is also in the package. I tried linking it into /lib and /usr/lib, but I get the same message when running. What is the best way to get Debian and ar onto a custom Linux distro?
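
    For the title question: the dynamic loader searches the binary's rpath, then LD_LIBRARY_PATH, then the ld.so cache built from /etc/ld.so.conf, and finally /lib and /usr/lib; copying a .so into /usr/lib only takes effect once the cache is refreshed. A sketch (the extracted path is illustrative):

        # quick test without touching system directories:
        LD_LIBRARY_PATH=/path/to/extracted/usr/lib ./ar --version
        # after copying the .so into /usr/lib, rebuild the loader cache:
        sudo ldconfig
        # list anything the binary still cannot find:
        ldd ./ar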

    Read the article

  • Is a disk/ata timeout exception dangerous?

    - by j-g-faustus
    I have a few hard drives in mdadm RAID 5 configured to go to standby after a few minutes of inactivity (using the hdparm.conf spindown_time). At irregular intervals I get messages like these in dmesg:

        [  1840.251661] ata4.00: exception Emask 0x0 SAct 0x0 SErr 0x0 action 0x6 frozen
        [  1840.251722] ata4.00: failed command: SMART
        [  1840.251758] ata4.00: cmd b0/d5:01:06:4f:c2/00:00:00:00:00/00 tag 0 pio 512 in
        [  1840.251759]          res 40/00:14:50:2e:04/00:00:02:00:00/40 Emask 0x4 (timeout)
        [  1840.251858] ata4.00: status: { DRDY }
        [  1840.251888] ata4: hard resetting link
        [  1840.600742] ata4: SATA link up 3.0 Gbps (SStatus 123 SControl 300)
        [  1840.601521] ata4.00: configured for UDMA/133
        [  1840.601547] ata4: EH complete
        [337877.713988] ata4.00: exception Emask 0x0 SAct 0x0 SErr 0x0 action 0x6 frozen
        [337877.714019] ata4.00: failed command: SMART
        [337877.714038] ata4.00: cmd b0/d5:01:06:4f:c2/00:00:00:00:00/00 tag 0 pio 512 in
        [337877.714039]          res 40/00:04:90:10:81/00:00:00:00:00/40 Emask 0x4 (timeout)
        [337877.714089] ata4.00: status: { DRDY }
        [337877.714107] ata4: hard resetting link
        [337878.063085] ata4: SATA link up 3.0 Gbps (SStatus 123 SControl 300)
        [337878.063743] ata4.00: configured for UDMA/133
        [337878.063764] ata4: EH complete

    I think the exception is caused by smartd polling a drive that does not wake up quickly enough. There are no issues (that I can tell) in accessing the drives normally through the file system: it takes a few seconds longer than usual when they are asleep, but there are no exceptions. Is this something I should worry about, as a potential symptom of something that could corrupt a drive over time? Or can I safely ignore it as part of normal operation?

    Edit: By request, smartctl -a for sda and sde; both disks are members of the array. If ata4 is the same as scsi-4, then sde is the one that gave the error above, according to /dev/disk/by-path.
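
    If smartd's polling is what wakes (or times out against) the sleeping drives, it can be told to leave spun-down disks alone. A sketch for /etc/smartd.conf, assuming these two devices (-n standby skips the check while the drive is in standby; the trailing q keeps the skip messages out of the log):

        /dev/sda -a -n standby,q
        /dev/sde -a -n standby,q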

    Read the article

  • how to limit upload bandwidth per user in linux?

    - by Gihan Lasita
    Can anyone provide the tc command to limit upload bandwidth per user on Debian Lenny? I found that to mark packets per user with iptables I can use the following command:

        iptables -t mangle -A OUTPUT -p tcp -m owner --uid-owner testuser -j MARK --set-mark 500

    but I had no idea how to use tc.

    Update: by running the following commands, I managed to limit testuser's upload bandwidth to 10Mbit:

        iptables -t mangle -N HTB_OUT
        iptables -t mangle -I POSTROUTING -j HTB_OUT
        iptables -t mangle -A HTB_OUT -j MARK --set-mark 30
        iptables -t mangle -A HTB_OUT -m owner --uid-owner testuser -j MARK --set-mark 10
        tc qdisc replace dev eth0 root handle 1: htb default 30
        tc class replace dev eth0 parent 1: classid 1:1 htb rate 10Mbit burst 5k
        tc class replace dev eth0 parent 1:1 classid 1:10 htb rate 10Mbit ceil 10Mbit
        tc qdisc replace dev eth0 parent 1:10 handle 10: sfq perturb 10
        tc filter add dev eth0 parent 1:0 prio 0 protocol ip handle 10 fw flowid 1:10

    Now the problem is that I do not want to limit testuser's FTP bandwidth, but with the commands above, FTP speed is also limited to 10Mbit. Regards
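
    One way to exempt FTP is to leave the marking chain before the per-user rule, so FTP packets keep the default mark 30 and never land in the 10Mbit class. A sketch, assuming standard ports 20/21 (passive-mode data connections would need a wider match or the conntrack FTP helper):

        # insert at position 2: after the default mark 30, before the owner match
        iptables -t mangle -I HTB_OUT 2 -p tcp -m multiport --sports 20,21 -j RETURN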

    Read the article

  • apxs cannot install mod_cloudflare on centos

    - by Adam
    [ Linux - CentOS - Apache 2.2 - mod_cloudflare - apxs2 ]

    I have changed my nameservers to point to CloudFlare. The problem is that all the IP addresses now come in as CloudFlare's. This is no good, because I have to monitor and block some specific traffic. mod_cloudflare is supposed to resolve this, but I have been unable to get it installed. The command in the documentation uses apxs2. I can't figure out how to install that, or whether it just means "apxs for Apache 2.x". I'm running 2.2.3, and I can use apxs. When I run:

        apxs -aic mod_cloudflare.c

    I get the error:

        apxs:Error: Command failed with rc=65536

    Does this mean I need apxs2 or something else? How do I get mod_cloudflare working on my server? I appreciate any help; the documentation is vague and limited.
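
    apxs2 is simply the Debian/Ubuntu name for the tool; on CentOS the equivalent is apxs, shipped in httpd-devel. An rc=65536 from apxs usually means the compile step itself failed, most often because gcc or the Apache headers are missing. A sketch:

        sudo yum install gcc httpd-devel
        apxs -a -i -c mod_cloudflare.c    # same as -aic, spelled out
        sudo service httpd restart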

    Read the article

  • Notifications for Expiring DBSNMP Passwords

    - by Courtney Llamas
    Most user accounts these days have a password profile that automatically expires the password after a set number of days. Depending on your company's security requirements, this may be as little as 30 days or as long as 365 days, although typically it falls between 60-90 days. For a normal user, this causes a small interruption in your day as you go get your password reset by an admin. When it happens to privileged accounts, such as the DBSNMP account that is responsible for monitoring database availability, it can cause bigger problems. In Oracle Enterprise Manager 12c you may notice the error message "ORA-28002: the password will expire within 5 days" when you connect to a target, or worse, you may get "ORA-28001: the password has expired". If you wait too long, your monitoring will fail because the password is locked out. Wouldn't it be nice if we could get an alert 10 days before our DBSNMP password expired? Thanks to Oracle Enterprise Manager 12c Metric Extensions (ME), you can! See the Oracle Enterprise Manager Cloud Control Administrator's Guide for more information on Metric Extensions.

    To create a metric extension, select Enterprise / Monitoring / Metric Extensions, and then click on Create. On the General Properties screen select either Cluster Database or Database Instance, depending on which target you need to monitor. If you have both RAC and single-instance databases, you may need to create one for each. In this example we will create a Cluster Database metric. Enter a Name for the ME and a Display Name, then select SQL for the Adapter. Adjust the Collection Schedule as desired; for this example we will collect the metric once a day. Notice that for a metric collected daily, we can determine the exact time of collection.

    On the Adapter page, enter the query that you wish to execute. In this example we will use the query below, which specifically checks for the DBSNMP user expiring within 10 days. Of course, you can adjust this query to alert on any user whose expiry could cause an outage, such as an application account or a service account such as RMAN.

        select username, account_status, trunc(expiry_date - sysdate) days_to_expire
        from dba_users
        where username = 'DBSNMP'
        and expiry_date is not null;

    The next step is to create the columns that store the data returned from the query. Click Add and add a column for each of the fields, in the same order that the data is returned. The table below will help you complete the column additions.

        Name           Display Name            Column Type   Value Type   Metric Category   Unit
        Username       User Name               Key           String       Security          -
        AccountStatus  Account Status          Data          String       Security          -
        DaysToExpire   Days Until Expiration   Data          Number       Security          Days

    When creating the DaysToExpire column, you can add default thresholds here for Warning and Critical (say < 10 and < 5). When all columns have been added, click Next.

    On the Credentials page, you can choose to use the default monitoring credentials or specify new credentials. We will use the default credentials established for our target (dbsnmp).

    The next step is to test your Metric Extension. Click on Add to select a target for testing, then click Select. Now click the Run Test button to execute the test against the selected target(s). We can see in the example below that the Metric Extension has executed and returned a value of 68 days to expire. Click Next to proceed. Review the metric extension in the final screen and click Finish.

    The metric will be created in Editable status. Select the metric, click Actions and select Deployable Draft. You can do this once more to move it to Published. Finally, we want to apply this metric to a target. When managing many targets, it's best to add your metric to a template; for details on adding a Metric Extension to a template, see the Administrator's Guide. For this example, we will deploy it to a target directly: select Actions / Deploy to Targets, click Add, select the target you wish to deploy to, and click Submit. Once deployment is complete, we can go to the target and view the Metric & Collection Settings to see the new metric and its thresholds.

    After some time, you will find that the metric has collected, and the days to expiration for the DBSNMP user can be seen in the All Metrics view. For metrics collected once per day, you may have to wait up to 24 hours to see the metric and its current severity. In the example below, the current severity is Clear (green check), as the password is not scheduled to expire within 10 days.

    To test the notification, we can edit the thresholds for the new metric so they trigger an alert. Our password expires in 139 days, so we'll change our Warning to 140 and leave Critical at 5; in our example we also changed the collection time to every 5 minutes. At the next collection, you'll find that the current severity changes to a Warning, and any related Incident Rules would be triggered to create an Incident or Notification as desired.

    Now that you get a notification that your DBSNMP password is about to expire, you can use the OEM Command Line Interface (EM CLI) verb update_db_password to change it at both the database target and the OEM target in one step. The caveat is that you must know the existing password to use the update_db_password command. To learn more about EM CLI, see the Oracle Enterprise Manager Command Line Interface Guide. Below is an example of changing the password with the update_db_password verb:

        $ ./emcli update_db_password -target_name=emrep -target_type=oracle_database \
              -user_name=dbsnmp -change_at_target=yes -change_all_references=yes
        Enter value for old_password :
        Enter value for new_password :
        Enter value for retype_new_password :
        Successfully submitted a job to change the password in Enterprise Manager and on the target database: "emrep"
        Execute "emcli get_jobs -job_id=FA66C1C4D663297FE0437656F20ACC84" to check the status of the job.
        Search for job name "CHANGE_PWD_JOB_FA66C1C4D662297FE0437656F20ACC84" on the Jobs home page to check job execution details.

    The job created will typically run quickly enough that a blackout is not needed; however, if you submit a script that changes many targets, your job may run more slowly, so adding a blackout to the script is recommended.

        $ ./emcli get_jobs -job_id=FA66C1C4D663297FE0437656F20ACC84
        Name         : CHANGE_PWD_JOB_FA66C1C4D662297FE0437656F20ACC84
        Type         : ChangePassword
        Job ID       : FA66C1C4D663297FE0437656F20ACC84
        Execution ID : FA66C1C4D665297FE0437656F20ACC84
        Scheduled    : 2014-05-28 09:39:12
        Completed    : 2014-05-28 09:39:18
        TZ Offset    : GMT-07:00
        Status       : Succeeded
        Status ID    : 5
        Owner        : SYSMAN
        Target Type  : oracle_database
        Target Name  : emrep

    After implementing the above Metric Extension and using the EM CLI update_db_password verb, you will be able to stay on top of your DBSNMP password changes without experiencing an unplanned monitoring outage.
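
    For the multi-target case, the blackout mentioned above could be created from EM CLI right before the password changes. A sketch only; the target list and duration are illustrative, and the exact option syntax should be checked against the EM CLI guide:

        $ ./emcli create_blackout -name="dbsnmp_pwd_change" \
              -add_targets="emrep:oracle_database" \
              -reason="DBSNMP password change" \
              -schedule="duration:0:30"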

    Read the article

  • How to configure sudoers with path wildcards?

    - by C. Lee
    I need sudo for a command on any path under a particular area. Example:

        sudo mycommand /opt/apps/myapp/...

    What is the sudoers syntax to allow this command to run on any path that falls under /opt/apps/myapp? This is Solaris 10 sudo.

    Thank you for your reply, but I don't need wildcards for the path to the commands; I need wildcards for the arguments of the commands. For example, we want to do something like:

        sudo mycmd /opt/userarea/area1
        sudo mycmd /opt/userarea/area1/area2
        sudo mycmd /opt/userarea/area1/area2/area3

    So far, the wildcards for the arguments in sudoers look like this:

        /opt/userarea/*
        /opt/userarea/*/*

    It seems that if we want N levels of directories, we need N lines in sudoers! Is there a better way to include all N levels in one line in sudoers? Thanks.
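
    For what it's worth, sudoers matches command arguments with fnmatch() without FNM_PATHNAME on many sudo versions, so in argument position a single * can also span /, which would make one line cover every depth. A sketch to test with visudo (the user name and command path are illustrative):

        clee ALL = (root) /usr/local/bin/mycmd /opt/userarea/*
        # caveat: the same * also matches "/opt/userarea/../../etc",
        # so this is only as safe as mycmd itself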

    Read the article

  • debian packages version convention

    - by JackWu
    I'm using Debian/Ubuntu, and I'm confused about package versions. When I run the dpkg -l command, I get:

        ii  vim                2:7.3.429-2ubuntu2.1      Vi IMproved - enhanced vi editor
        ii  vim-common         2:7.3.429-2ubuntu2.1      Vi IMproved - Common files
        ii  vim-runtime        2:7.3.429-2ubuntu2.1      Vi IMproved - Runtime files
        ii  vim-tiny           2:7.3.429-2ubuntu2.1      Vi IMproved - enhanced vi editor - compact version
        ii  virt-what          1.11-1                    detect if we are running in a virtual machine
        ii  w3m                0.5.3-5ubuntu1            WWW browsable pager with excellent tables/frames support
        ii  watershed          6                         reduce superfluous executions of idempotent command
        ii  wget               1.13.4-2ubuntu1           retrieves files from the web
        ii  whiptail           0.52.11-2ubuntu10         Displays user-friendly dialog boxes from shell scripts
        ii  whoopsie           0.1.33                    Ubuntu crash database submission daemon
        ii  wimlib9            1.5.0-1~webupd8~precise   Library to extract, create, modify, and mount WIM files
        ii  wimtools           1.5.0-1~webupd8~precise   Tools to extract, create, modify, and mount WIM files
        ii  wireless-tools     30~pre9-5ubuntu2          Tools for manipulating Linux Wireless Extensions
        ii  wpasupplicant      0.7.3-6ubuntu2.1          client support for WPA and WPA2 (IEEE 802.11i)
        ii  x11-common         1:7.6+12ubuntu2           X Window System (X.Org) infrastructure
        ii  x11-utils          7.6+4ubuntu0.1            X11 utilities
        ii  xauth              1:1.0.6-1                 X authentication utility
        ii  xbitmaps           1.1.1-1                   Base X bitmaps
        ii  xclip              0.12-1                    command line interface to X selections
        ii  xfonts-encodings   1:1.0.4-1ubuntu1          Encodings for X.Org fonts
        ii  xfonts-utils       1:7.6+1                   X Window System font utility programs
        ii  xkb-data           2.5-1ubuntu1.3            X Keyboard Extension (XKB) configuration data
        ii  xml-core           0.13                      XML infrastructure and XML catalog file support
        rc  xpdf               3.02-21build1             Portable Document Format (PDF) reader
        ii  xterm              271-1ubuntu2.1            X terminal emulator
        ii  xz-lzma            5.1.1alpha+20110809-3     XZ-format compression utilities - compatibility commands
        ii  xz-utils           5.1.1alpha+20110809-3     XZ-format compression utilities
        ii  zabbix-agent       1:1.8.11-1                network monitoring solution - agent
        ii  zlib1g             1:1.2.3.4.dfsg-3ubuntu4   compression library - runtime
        ii  zlib1g-dev         1:1.2.3.4.dfsg-3ubuntu4   compression library - development
        ii  zsh                4.3.17-1ubuntu1           shell with lots of features

    The third column is the version, but the packages seem to follow completely different naming conventions. Here are my main questions: Why do some versions contain "ubuntu" and some not? What do the special characters -, ~ and + mean? What are alpha, build and dfsg, and can they be used casually? vim and some other packages have a 2: prefix; what does that mean? And how does version comparison work, given that the formats can be so different? Can anyone please explain this to me, or point me to an official document? Thanks in advance.
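
    In short, the format is [epoch:]upstream_version[-debian_revision], defined in the Debian Policy Manual's "Version" section; ubuntu marks an Ubuntu-modified revision, dfsg marks upstream sources repacked to satisfy the Debian Free Software Guidelines, and ~ sorts before anything, even the empty string. Since dpkg can compare any two version strings directly, the rules are easy to probe, as a sketch:

        dpkg --compare-versions "2:7.3.429-2ubuntu2.1" gt "1:999.0" && echo "the epoch outranks everything after it"
        dpkg --compare-versions "1.0~rc1" lt "1.0" && echo "a tilde sorts before the empty string"
        dpkg --compare-versions "1.0-1ubuntu1" gt "1.0-1" && echo "an Ubuntu revision sorts after its Debian base"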

    Read the article

  • How can one send commands to the "inner" ssh session?

    - by iconoclast
    Picture a scenario where I'm logged into a server (which we'll call "Wallace") from my local machine, and from there I ssh into another server (which we'll call "Gromit"):

        laptop ---ssh---> Wallace ---ssh---> Gromit

    Then the ssh session from Wallace to Gromit hangs, and I want to kill it. If I enter ~. to kill ssh, it kills the ssh session from my laptop to Wallace, because the ~ is intercepted by that ssh session, and the . is taken as a command to kill the session. How do I send a command to the ssh session between Wallace and Gromit? How do I kill my "inner" ssh?
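
    Each tilde is consumed by the outermost ssh session, and doubling it forwards a literal tilde one hop inward. So, at the start of a fresh line (press Enter first):

        ~~.    # kills the inner session (Wallace -> Gromit)
        ~.     # kills the outer session (laptop -> Wallace)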

    Read the article

  • Installing Image Magick on Amazon EC2

    - by Kapil Sharma
    Well, I'm a PHP developer who knows a few Linux commands to get the job done. I need to launch a symfony 1.4 website on Amazon EC2. Everything is fine except Imagick. ImageMagick itself is installed through the following command:

        sudo yum install ImageMagick

    but its PHP library is not installed/configured, if that does not happen with the above command. In PHP I'm using Imagick, and my script is failing on it. I know the problem is with the PHP Imagick extension, but I don't know how to fix it. On my dev box it was as simple as turning it on in WAMP. Can someone please suggest where I should look to confirm whether the Imagick PHP extension is installed and configured correctly?
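
    The yum package above only installs the ImageMagick binaries and libraries; the PHP binding is a separate PECL extension. A sketch of where to look and how to add it (package names vary by repo):

        php -m | grep -i imagick              # is the extension loaded at all?
        sudo yum install php-pecl-imagick     # if the repo carries a prebuilt package
        # otherwise build it via PECL:
        sudo yum install gcc php-devel php-pear ImageMagick-devel
        sudo pecl install imagick
        echo "extension=imagick.so" | sudo tee /etc/php.d/imagick.ini
        sudo service httpd restart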

    Read the article

  • Kill Leaking Connections on SQL Server 2005

    - by Thierry Brunet
    We have a legacy ASP application that leaks SQL connections somewhere. In Activity Monitor, I can see a bunch of idle processes whose Last Batch times are over an hour old. When I look at the T-SQL command batch, these are always FETCH API_CURSORXXX, which as I understand it is caused by improperly closed ASP ADO recordsets. While we try to pinpoint the offending code, is there a way for me to monitor which requests open which cursors? I'm assuming Profiler, but I'm not sure exactly what I should be monitoring: I can see a bunch of calls to sp_cursoropen, but I don't see the API_CURSORXXX name anywhere. Second, could anyone suggest a script we could run to kill these processes, based on the Last Batch time being more than 10 minutes old and the Last Batch command being FETCH API_CURSORXXX? For various reasons, we unfortunately don't have any SQL Server DBAs.
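
    As a starting point for the kill script, here is a sketch against the SQL Server 2005 DMVs that generates KILL statements for review rather than executing them; the 10-minute threshold and the LIKE pattern are the assumptions to tune:

        SELECT 'KILL ' + CAST(s.session_id AS varchar(10)) AS kill_cmd,
               s.login_name, s.last_request_end_time, t.text
        FROM sys.dm_exec_sessions s
        JOIN sys.dm_exec_connections c ON c.session_id = s.session_id
        CROSS APPLY sys.dm_exec_sql_text(c.most_recent_sql_handle) t
        WHERE s.is_user_process = 1
          AND s.last_request_end_time < DATEADD(minute, -10, GETDATE())
          AND t.text LIKE 'FETCH API_CURSOR%';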

    Read the article

  • Wireless router not connecting -> AP NOT-ASSOCIATED

    - by candido
    After some months I can no longer connect to the internet through my wireless router under Ubuntu 10.04. I can connect with the same laptop under Windows. My OS is Ubuntu 10.04, Linux 2.6.32-41 SMP i686. The wireless network controller is an Atheros AR9285 chipset (PCI Express), kernel module ath9k. I have tried a command-line connection:

        $ sudo /etc/init.d/network-manager stop    # stop the GUI network manager
        $ iwconfig wlan0 essid WLAN_3C key s:C001D20550B3C
        $ ifconfig
        Access Point: NOT-ASSOCIATED

    dmesg shows "AP 00:1a:2b:08:60:49 associated" during boot. If the OS associated with the router while booting (the "associated" message), why is the connection not possible after boot and login, either via network-manager or the command line (NOT-ASSOCIATED message)? Thanks in advance.
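
    Note that iwconfig's key option only speaks WEP; if WLAN_3C is actually WPA/WPA2 protected, manual association needs wpa_supplicant, and either way a DHCP lease is still required after associating. A sketch under those assumptions:

        wpa_passphrase WLAN_3C 'C001D20550B3C' > /tmp/wpa.conf
        sudo wpa_supplicant -B -i wlan0 -c /tmp/wpa.conf
        sudo dhclient wlan0    # association alone does not give you an IP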

    Read the article

  • In Windows, a batch file with a recursive for loop and a file name including blanks

    - by uvts_cvs
    Hello, I have a folder tree like this (it's only an example; it will be deeper in my real case):

        C:\test
        |
        +---folder1
        |       foo bar.txt
        |       foobar.txt
        |
        +---folder2
        |       foo bar.txt
        |       foobar.txt
        |
        \---folder3
                foo bar.txt
                foobar.txt

    My files have one or more spaces in their names, and I need to perform a command on them, so I am interested in foo bar.txt but not in foobar.txt. I tried (inside a batch file):

        for /r test %%f in (foo bar.txt) do if exist %%f echo %%f

    where the command is a simple echo. It does not work, because the space splits the set and I get no output. This works, but it is not what I need:

        for /r test %%f in (foobar.txt) do if exist %%f echo %%f

    It prints:

        C:\test\folder1\foobar.txt
        C:\test\folder2\foobar.txt
        C:\test\folder3\foobar.txt

    I tried using quotation marks, but that does not work either:

        for /r test %%f in ("foo bar.txt") do if exist %%f echo %%f

    because the quotation marks end up included in the output:

        C:\test\folder1\"foo bar.txt"
        C:\test\folder2\"foo bar.txt"
        C:\test\folder3\"foo bar.txt"
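
    A sketch that sidesteps the quoting problem entirely: let dir /b /s do the recursive matching, and iterate over its output with for /f, where "delims=" keeps a name with spaces in one token:

        @echo off
        rem dir /b /s prints one full path per line; quote "%%f" when
        rem passing it to a real command
        for /f "usebackq delims=" %%f in (`dir /b /s "C:\test\foo bar.txt"`) do echo %%f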

    Read the article

  • merging video and audio with custom panning

    - by cherouvim
    I have:

    1. a video which has mono audio inside
    2. a separate audio file (mono)

    I'd like to merge these into a single video file containing the video from #1, the audio from #1 panned fully left, and the audio from #2 panned fully right. Is this possible in ffmpeg using one command? I've tried the following, which almost does it, but the video/audio gets out of sync:

        ffmpeg -i video.mp4 -filter_complex "amovie=audio.wav [r] ; [r] amerge" output.mp4 -y

    I've managed to do it with multiple commands:

        # 1 create right-panned audio
        ffmpeg -i audio.wav -ac 2 -vbr 5 audio-stereo.mp3 -y
        ffmpeg -i audio-stereo.mp3 -af pan=stereo:c1=c1 audio-right.mp3 -y
        # 2 create left-panned video
        ffmpeg -i video.mp4 -af pan=stereo:c0=c0 video-left.mp4 -y
        # 3 merge the two
        ffmpeg -i video-left.mp4 -i audio-right.mp3 -filter_complex "amix=inputs=2" video-mixed.mp4 -y

    It does the job, but is it possible with one command?
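
    A single-command sketch: pan each audio source to its own side before mixing, and map the video stream through untouched (-c:v copy), which avoids the re-encode and may be what was drifting the sync; the filter labels are arbitrary:

        ffmpeg -i video.mp4 -i audio.wav \
          -filter_complex "[0:a]pan=stereo:c0=c0[l];[1:a]pan=stereo:c1=c0[r];[l][r]amix=inputs=2[a]" \
          -map 0:v -map "[a]" -c:v copy -shortest output.mp4 -y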

    Read the article

  • Installing Ubuntu along with windows 7 on shrunk partition

    - by Thabo
    I am new to Ubuntu and am asking the Ubuntu community. First, this is not a duplicate question; it is a summary of all the solutions and questions posted in this community related to installing Ubuntu along with Windows 7. I have bought a new HP laptop with its original Windows 7. I want to install Ubuntu alongside the 64-bit Windows 7. I ran the Ubuntu 12.04 desktop installation CD, but the installer doesn't show the "along with Windows 7" option; it shows only two options. I read some questions and answers posted in this community, especially the following link:

    Ubuntu 12.04 does not see windows already install on my computer (dual installation)

    I tried the following: I ran the terminal from the live CD and tried the sudo dmraid -rE command and the dmraid remove command, but the terminal says there are no dmraid partitions. So I tried another approach and checked my partitions with GParted. There are partitions labelled C, HP_TOOLS, Recovery and System; C contains the Windows 7 files. I shrank the volume of the C drive, and now I have 50000 MB of unallocated disk. I tried to create a partition on that unallocated space with GParted, but it says I can't create more than four primary partitions, and all four existing partitions created under Windows are indeed primary partitions. So I went back to Windows 7 and tried to create a new volume on the unallocated space, but it says that if I create a new volume it will be a dynamic partition, and that another OS can't be booted from such a partition, so I cancelled that step. Now I have 50000 MB of unallocated space, but how can I install Ubuntu on it without harming the existing Windows 7? Because I still have only two options:

    1. Erase and install Ubuntu.
    2. Something else.

    (I can see my unallocated space by going to the "something else" option.)
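
    The blocker here is the MBR partition table, which allows at most four primary partitions; the usual ways out are to delete one of the four (often HP_TOOLS, after backing it up) or to create the Ubuntu partitions as logical partitions inside an extended partition via the installer's "Something else" option. A quick check from the live CD, assuming the disk is /dev/sda:

        sudo parted /dev/sda print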

    Read the article

  • Procedure or function has too many arguments specified

    - by bullpit
    This error took me a while to figure out. I was using a SqlDataSource with stored procedures for the SELECT and UPDATE commands. The SELECT worked fine; then I added the UPDATE command, and while trying to update I would get this error:

        Procedure or function has too many arguments specified

    Apparently, the good guys at .NET decided it is necessary to send the SELECT parameters with the UPDATE command as well. So while I was sending the required parameters to the UPDATE stored procedure, in reality it was also getting my SELECT parameters, and thus failing. I had to add the extra parameters to the UPDATE stored procedure and make them nullable so that they are not required... phew. Here is a piece of the SP with the unused parameters:

        ALTER PROCEDURE [dbo].[UpdateMaintenanceRecord]
            @RecordID INT
            ,@District VARCHAR(255)
            ,@Location VARCHAR(255)
            -- UNUSED PARAMETERS
            ,@MTBillTo VARCHAR(255) = NULL
            ,@SerialNumber VARCHAR(255) = NULL
            ,@PartNumber VARCHAR(255) = NULL

    Update: I was getting that error because, unknowingly, I had bound the extra fields in my GridView with Bind() calls. I changed all the extra ones, which did not need to be in the UPDATE, to Eval(), and everything works fine.
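
    The Bind()/Eval() distinction is the whole fix: Bind() is two-way and registers the field as an update parameter, while Eval() is one-way, display only. A sketch of a read-only column (the field name is illustrative):

        <asp:TemplateField HeaderText="Serial Number">
            <ItemTemplate>
                <%# Eval("SerialNumber") %>   <%-- one-way: not sent with the UPDATE --%>
            </ItemTemplate>
        </asp:TemplateField>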

    Read the article

  • Pushing complete notifications to client

    - by ton.yeung
    So with CQRS, we accept that consistency is eventual. However, that doesn't mean the user has to poll continually, or that "eventual" means an update has to take more than 500 ms to sync. For the sake of UX, we want to at least give the illusion of consistency, or when that's not possible, be as transparent as possible. With that in mind, I have this setup: an AngularJS web client consumes RESTful WebAPI services, which send commands to NServiceBus command handlers, which save to NEventStore; events are dispatched to NServiceBus event handlers, which send a message to a SignalR hub, which pushes notifications to the AngularJS web client. With that setup, theoretically: someone initiates a request; the server validates the request and sends out the necessary commands; in the meantime the client gets a 200 response and updates the view ("working on it"); some time later it gets a message: "done, here's the updated data". Here's where things get interesting: each command can spawn multiple events. Not sure if this is a serious no-no or not, but that's how it is currently. For example, a new customer spawns CustomerIDCreated, CustomerNameUpdated, CustomerAddressUpdated, etc. Which event handler needs to notify the client? Should all of them, in a progress-bar style update?
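
    One common shape for the last hop is a single notifier that listens for one designated completion event (rather than every fine-grained event) and forwards it to the hub. A sketch using NServiceBus 4-era and SignalR 2-era APIs; all names are illustrative, not the poster's code:

        // forwards a completion event to the browser via SignalR
        public class CustomerCreatedNotifier : IHandleMessages<CustomerIDCreated>
        {
            public void Handle(CustomerIDCreated message)
            {
                var hub = GlobalHost.ConnectionManager.GetHubContext<NotificationsHub>();
                // broadcast for brevity; a real app would target the
                // originating client's connection or group
                hub.Clients.All.customerCreated(message.CustomerId);
            }
        }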

    Read the article
