Search Results

Search found 20360 results on 815 pages for 'capture output'.


  • Couldn't make Angry Birds work in Wine

    - by Ashfame
    I could run Notepad++ easily, but I can't get the Angry Birds exe to run. Whenever I open the exe, one of my screens flickers a bit (as lines, not the whole screen) and nothing happens. Any ideas?

    Edit: Output of wine angrybirds.exe:

        fixme:actctx:parse_depend_manifests Could not find dependent assembly L"Microsoft.VC80.CRT" (8.0.50727.4053)
        fixme:actctx:parse_depend_manifests Could not find dependent assembly L"Microsoft.VC90.CRT" (9.0.21022.8)
        err:module:import_dll Library MSVCP90.dll (which is needed by L"C:\\windows\\system32\\AppUpWrapper.dll") not found
        err:module:import_dll Library AppUpWrapper.dll (which is needed by L"C:\\windows\\system32\\angrybirds.exe") not found
        err:module:LdrInitializeThunk Main exe initialization for L"C:\\windows\\system32\\angrybirds.exe" failed, status c0000135

    I think it didn't even install. I manually dropped those files into the folder, but still no gain.

    Edit (progress): I dropped in MSVCP90.dll manually, and now this is what I get in the output:

        fixme:actctx:parse_depend_manifests Could not find dependent assembly L"Microsoft.VC80.CRT" (8.0.50727.4053)
        fixme:actctx:parse_depend_manifests Could not find dependent assembly L"Microsoft.VC90.CRT" (9.0.21022.8)
        fixme:heap:HeapSetInformation 0x541000 0 0x32fd48 4
        fixme:heap:HeapSetInformation (nil) 1 (nil) 0
        EXCEPTION: Failed to open data/scripts/starLimits.lua
        wine: Unhandled exception 0x40000015 at address 0x7b880023:0x78b271d0 (thread 0009), starting debugger...
        fixme:msvcr90:__clean_type_info_names_internal (0x10267694) stub
        fixme:msvcr90:__clean_type_info_names_internal (0x78506644) stub
        ashfame@ashfame-desktop:~$ Process of pid=0008 has terminated
        No process loaded, cannot execute 'echo Modules:'
        Cannot get info on module while no process is loaded
        No process loaded, cannot execute 'echo Threads:'
        process  tid      prio (all id:s are in hex)
        0000000e services.exe
            00000014    0
            00000010    0
            0000000f    0
        00000011 winedevice.exe
            00000018    0
            00000016    0
            00000013    0
            00000012    0
        00000019 explorer.exe
            0000001a    0
        You must be attached to a process to run this command.
        No process loaded, cannot execute 'detach'

    ...and there the terminal hangs (I have to Ctrl+C to get out). It shows the famous message that the program needs to close.

    Edit: Just to let you know that I am still stuck on this. I don't use Wine for anything else, so I am ready to do a clean install of Wine and everything if anyone is willing to provide instructions.
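
    The missing assemblies named above are the Visual C++ 2005/2008 runtimes (VC80.CRT / VC90.CRT), so one possible way forward is to install them into the Wine prefix with winetricks rather than copying DLLs by hand. A minimal sketch, assuming winetricks is installed and the default prefix is in use; this is not a confirmed fix:

        winetricks vcrun2005 vcrun2008
        wine angrybirds.exe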

    Read the article

  • Camtasia Studio

    - by Andy Morrison
    Have you heard about this tool? Camtasia is a capture tool that records your actions as you perform them on your workstation, remote desktop, etc. It's made by TechSmith, the same company that makes SnagIt. http://www.techsmith.com/camtasia.asp I have used this at several customer sites to record our installs and configurations of BizTalk and more - it can come in very handy for reducing differences between environments (because you can go back and review exactly what you did in the previous environment) as well as for supplementing your installation docs. The product also includes an editing studio and various media types for export. I haven't been paid to write this - I just think the tool is very nice.

    Read the article

  • Unity 3D (with Nvidia driver) becomes very slow and laggy

    - by Graham
    How can I prevent my Unity 3D desktop from becoming slow after a while, given that I have Nvidia Quadro NVS 290 graphics in TwinView mode? The desktop starts out fast on login but becomes slow/laggy/hesitant/high-latency after a while, the symptom being spikes in CPU usage by /usr/bin/X whenever I cause any graphical activity with the mouse or keyboard (e.g. typing, changing tabs, dragging windows). The desktop remains slow even with all windows (except htop in Terminal) and extraneous processes killed.

    Detail: Changing tabs in Terminal takes about a second, and X spikes to 76% CPU. As I type into Firefox, X spikes to 95% CPU. Dragging a Terminal window, X goes to 70% CPU. Basically, every graphical action sends the CPU usage of X through the roof.

        Device: Nvidia Quadro NVS 290
        Driver package: binary driver nvidia-current-updates (280.13-0ubuntu5)
        Dual monitors: pair of DELL UltraSharp 1908FP in TwinView (X screen 2560x1024)
        OS: fresh install of Ubuntu 11.10 amd64 Desktop with all updates
        Hardware: Dell Precision T5400 Workstation

    Pastebin of Xorg.0.log. Pastebin of xorg.conf. Pastebin of nvidia-xconfig -t output (easier to read than xorg.conf). Output of /usr/lib/nux/unity_support_test -p. To obtain the htop screenshot I typed "asdf" several times in this text box, alt-tabbed to Terminal and took a screenshot of the high X CPU usage. This also happens when Firefox is not running.

    The Quadro NVS 290 has no thermal sensor according to sensors-detect:

        Next adapter: NVIDIA i2c adapter 0 at 2:00.0 (i2c-0)
        Do you want to scan it? (YES/no/selectively):
        Client found at address 0x50
        Probing for `Analog Devices ADM1033'... No
        Probing for `Analog Devices ADM1034'... No
        Probing for `SPD EEPROM'... No
        Probing for `EDID EEPROM'... Yes (confidence 8, not a hardware monitoring chip)

    I tried the nouveau driver by disabling nvidia-current-updates under Additional Drivers, but Ubuntu and xrandr -q fail to detect the second monitor. This may be issue 737349. The funny thing is that the Nouveau wiki says XRandR 1.2 dual-monitor is supported, so it should work with a second monitor.
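
    For anyone retrying the nouveau path from a terminal rather than the Additional Drivers GUI, something like the following should work; a sketch, assuming stock Ubuntu 11.10 package names, and back up xorg.conf before removing it:

        sudo apt-get remove --purge nvidia-current-updates
        sudo apt-get install xserver-xorg-video-nouveau
        sudo mv /etc/X11/xorg.conf /etc/X11/xorg.conf.bak   # let X autodetect
        xrandr -q                                           # after restarting X, check both monitors appear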

    Read the article

  • Secure ldap problem

    - by neverland
    Hi there, I have tried to configure my OpenLDAP server for secure connections using OpenSSL on Debian 5, but I run into trouble with the command below:

        ldap:/etc/ldap# slapd -h 'ldap:// ldaps://' -d1
        >>> slap_listener(ldaps://)
        connection_get(15): got connid=7
        connection_read(15): checking for input on id=7
        connection_get(15): got connid=7
        connection_read(15): checking for input on id=7
        connection_get(15): got connid=7
        connection_read(15): checking for input on id=7
        connection_get(15): got connid=7
        connection_read(15): checking for input on id=7
        connection_read(15): unable to get TLS client DN, error=49 id=7
        connection_get(15): got connid=7
        connection_read(15): checking for input on id=7
        ber_get_next
        ber_get_next on fd 15 failed errno=0 (Success)
        connection_closing: readying conn=7 sd=15 for close
        connection_close: conn=7 sd=15

    I have searched for "unable to get TLS client DN, error=49 id=7", but nothing I found offers a good solution. Please help. Thanks.

    Well, I tried to fix a few things to get it to work, but now I get this:

        ldap:~# slapd -d 256 -f /etc/openldap/slapd.conf
        @(#) $OpenLDAP: slapd 2.4.11 (Nov 26 2009 09:17:06) $
            root@SD6-Casa:/tmp/buildd/openldap-2.4.11/debian/build/servers/slapd
        could not stat config file "/etc/openldap/slapd.conf": No such file or directory (2)
        slapd stopped.
        connections_destroy: nothing to destroy.

    What should I do now? Log:

        ldap:~# /etc/init.d/slapd start
        Starting OpenLDAP: slapd - failed.
        The operation failed but no output was produced. For hints on what went wrong
        please refer to the system's logfiles (e.g. /var/log/syslog) or try running
        the daemon in Debug mode like via "slapd -d 16383" (warning: this will create
        copious output).
        Below, you can find the command line options used by this script to run slapd.
        Do not forget to specify those options if you want to look to debugging output:
        slapd -h 'ldaps:///' -g openldap -u openldap -f /etc/ldap/slapd.conf

        ldap:~# tail /var/log/messages
        Feb 8 16:53:27 ldap kernel: [ 123.582757] intel8x0_measure_ac97_clock: measured 57614 usecs
        Feb 8 16:53:27 ldap kernel: [ 123.582801] intel8x0: measured clock 172041 rejected
        Feb 8 16:53:27 ldap kernel: [ 123.582825] intel8x0: clocking to 48000
        Feb 8 16:53:27 ldap kernel: [ 131.469687] Adding 240932k swap on /dev/hda5. Priority:-1 extents:1 across:240932k
        Feb 8 16:53:27 ldap kernel: [ 133.432131] EXT3 FS on hda1, internal journal
        Feb 8 16:53:27 ldap kernel: [ 135.478218] loop: module loaded
        Feb 8 16:53:27 ldap kernel: [ 141.348104] eth0: link up, 100Mbps, full-duplex
        Feb 8 16:53:27 ldap rsyslogd: [origin software="rsyslogd" swVersion="3.18.6" x-pid="1705" x-info="http://www.rsyslog.com"] restart
        Feb 8 16:53:34 ldap kernel: [ 159.217171] NET: Registered protocol family 10
        Feb 8 16:53:34 ldap kernel: [ 159.220083] lo: Disabled Privacy Extensions
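
    One way to isolate whether the TLS handshake itself is the problem is to test the ldaps listener directly; a sketch, assuming slapd is listening on the default port 636 on localhost:

        openssl s_client -connect localhost:636 -showcerts
        ldapsearch -x -H ldaps://localhost -b "" -s base -d 1

    If s_client cannot complete the handshake, the certificate/key settings in slapd.conf (TLSCertificateFile, TLSCertificateKeyFile, TLSCACertificateFile) are the first place to look.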

    Read the article

  • Lots of FIN_WAIT2, CLOSE_WAIT, LAST_ACK and TIME_WAIT in HAProxy

    - by Tux
    We are running HAProxy in production for around 10k+ concurrent users, but we are seeing a lot of FIN_WAIT2, CLOSE_WAIT, LAST_ACK and TIME_WAIT sockets in the netstat output. This output is from an 8G ubuntu-12.04 node:

         8046 CLOSE_WAIT
            1 CLOSING
            1 established)
        40869 ESTABLISHED
         1212 FIN_WAIT1
         7575 FIN_WAIT2
            1 Foreign
         2252 LAST_ACK
            7 LISTEN
          143 SYN_RECV
         4920 TIME_WAIT

    Can someone please tell me what tweaking I need to do? Please note that all these connections are persistent connections.

        tcp_fin_timeout = 30
        tcp_keepalive_time = 1800

    Right now the application is working fine, but I am wondering whether there will be any issues as we add more users to this HAProxy node.
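
    For reference, a per-state summary like the one above is typically produced with a one-liner along these lines (an assumption about how it was generated, which would also explain the stray "established)" and "Foreign" rows - they are netstat's header lines being counted):

        netstat -ant | awk '{print $6}' | sort | uniq -c | sort -n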

    Read the article

  • How to increase video memory in the libvirt/KVM GUI?

    - by Dejan
    In the 'Virtual Hardware details' view, the video model is listed as 'cirrus' with 9 MB of RAM. The RAM field cannot be changed, so how do I increase the video RAM? My host OS is RHEL 6 and the guest OS is Fedora 16.

    EDIT: In the guest OS, running xvinfo displays 'no adaptors present'. I was trying to play a video using GStreamer's xvimagesink plugin (XFree86 video output plugin using the Xv extension). The problem is that xvimagesink relies on hardware acceleration for video performance, hence the error 'Could not initialize Xv output'. I guess I'll have to configure hardware acceleration for the guest.
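
    One possible route is to switch the guest from cirrus to a video model whose memory is configurable in the domain XML; a sketch, assuming a guest named fedora16 and that the guest has QXL drivers available (the values are illustrative, not required):

        virsh edit fedora16

    then change the <video> element to something like:

        <video>
          <model type='qxl' ram='65536' vram='65536' heads='1'/>
        </video>

    The ram/vram attributes here are in KiB; cirrus does not expose an equivalent knob, which matches the greyed-out field in the GUI.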

    Read the article

  • Vim: :silent with makeprg

    - by ash
    I use pylint on .py files for :make in my .vimrc, although any program, pylint or otherwise, applies to this problem:

        set makeprg=pylint\ --reports=n\ --output-format=parseable\ %

    When I run :make, I inevitably get the annoying "Press ENTER or type command to continue" prompt. I know this can be suppressed with :silent, but I can't prepend it to the makeprg variable like this, because it throws an error:

        set makeprg=:silent "pylint\ --reports=n\ --output-format=parseable\ %"

    And if I try to define my own "silent make" command,

        command Smake silent make

    the screen goes black after calling it. How would I do this?
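
    A common workaround (a sketch, not tested against this exact setup) is to keep makeprg as-is and force a redraw after the silent make, since :silent also suppresses the screen updates that normally repaint the display - hence the black screen:

        command! Smake silent make | redraw!

    Note that without the -bar attribute, the trailing "| redraw!" is included in the command's replacement text, which is what we want here.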

    Read the article

  • Redirecting bad links to the correct links via htacess or 301 redirect plugin for WordPress

    - by janoulle
    I'm getting a lot of 404 errors because I recently switched content management systems (Habari to WordPress). I would like to use the 301 redirect plugin for WordPress to capture the offending links and helpfully redirect them to the correct URLs. Here's an example of the type of errors I'm seeing and what they should redirect to:

        http://janetalkstech.com/admin/publish?id=146 should redirect to http://janetalkstech.com/?p=146
        http://janetalkstech.com/admin/publish?slug=post-title should redirect to http://janetalkstech.com/post-title

    I would greatly appreciate any specific pointers on how to perform the redirects with either the 301 redirect plugin for WordPress or via the .htaccess file.

    Edit: The redirection plugin being used is the one by Urban Giraffe: http://urbangiraffe.com/plugins/redirection/
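
    For the .htaccess route, mod_rewrite can match on the query string and issue 301s; a sketch, assuming Apache with mod_rewrite enabled and placed above WordPress's own rules (the slug character class is an assumption):

        RewriteEngine On
        RewriteCond %{QUERY_STRING} ^id=([0-9]+)$
        RewriteRule ^admin/publish$ http://janetalkstech.com/?p=%1 [R=301,L]
        RewriteCond %{QUERY_STRING} ^slug=([a-z0-9-]+)$
        RewriteRule ^admin/publish$ http://janetalkstech.com/%1? [R=301,L]

    The trailing ? on the second rule's target drops the original query string from the redirect.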

    Read the article

  • Changing the mac address in a libvirt xml config file breaks network connectivity for the guest

    - by foob
    I'm using Xen with libvirt and trying to set it up on a bridged interface. I am able to install an OS, and everything works as I would expect. But if I save the XML output from virsh dumpxml guest, edit the MAC address for the interface, and then define the domU with this new XML file, I find that traffic is no longer forwarded from the vif0.0 interface to br0. The ifcfg-eth0 file on the guest was automatically updated to reflect the new MAC address, and the ifconfig output looks the same. Does anyone know why this is happening, or how to properly change the MAC address in a libvirt configuration?
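
    Some quick checks can narrow down where the frames stop; a sketch, reusing the vif0.0 and br0 names from the question:

        brctl show br0          # is vif0.0 still enslaved to the bridge?
        tcpdump -e -i vif0.0    # do the guest's frames carry the new MAC?
        tcpdump -e -i br0
        arp -n                  # stale ARP entries on peers can also blackhole a changed MAC

    If frames with the new MAC appear on vif0.0 but not on br0, the vif hotplug script / bridge membership is the first suspect.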

    Read the article

  • Roll Your Own Hologram with DIY Holography Kit

    - by Jason Fitzpatrick
    If you’re looking for a DIY project with a 1980s theme, this create-your-own hologram kit is your ticket to 3D greatness. Over at Make magazine they’ve put together a tutorial for creating your own holograms using the DIY holographic kit featured in the Maker Shed, Make’s storefront for DIYers. The kit is $99; certainly not pocket change, but on par with other holography kits on the market and even a bit generous with the inclusion of 20 sheets of holographic film. Check out the video above to see how easy it is to capture small objects on the film and create your own holograms. How-To: Holography [Make]

    Read the article

  • Sound & video problem with Toshiba Satellite L35-SP1011

    - by Diego Garcia
    I've installed Ubuntu 10.04 on a Satellite L35-SP1011 and there's no sound. I also have many video problems: I had to disable desktop effects, because with effects enabled the laptop froze many times. I have seen this problem reported for older Ubuntu versions and tried some fixes, without success. Any idea how to solve my audio and video problems? I've tried the instructions at https://help.ubuntu.com/community/RadeonDriver without success. My video card is an ATI Express 200M.

    lspci output:

        00:14.2 Audio device: ATI Technologies Inc IXP SB4x0 High Definition Audio Controller (rev 01)
        00:14.3 ISA bridge: ATI Technologies Inc IXP SB400 PCI-ISA Bridge (rev 80)
        00:14.4 PCI bridge: ATI Technologies Inc IXP SB400 PCI-PCI Bridge (rev 80)
        01:05.0 VGA compatible controller: ATI Technologies Inc RC410 [Radeon Xpress 200M]

    Complete lspci output at http://pastebin.com/AVk1WWQt

    Update #1 - It's the same on Ubuntu 10.10 and Kubuntu 10.10: slow graphics and no sound.

    Update #2 - Sound SOLVED: I edited /etc/modprobe.d/alsa-base.conf in Ubuntu 10.10 and added

        options snd-hda-intel model=asus

    Now I'm working on video. I added the xorg-edgers PPA, updated and upgraded, without much difference... it's working better, but without transparency.

    Read the article

  • LVS TCP connection timeouts - lingering connections

    - by Jon Topper
    I'm using keepalived to load-balance connections between a number of TCP servers. I don't expect it matters, but the service in this case is RabbitMQ. I'm using NAT-type balancing with weighted round-robin. A client connects to the server thus:

        [client]----a----[lvs]----b----[real server]

    If a client connects to the LVS and remains idle, sending nothing on the socket, the connection eventually times out according to the timeouts set with ipvsadm --set. At this point, connection 'a' above correctly disappears from the output of netstat -anp on the client and from the output of ipvsadm -L -n -c on the LVS box. Connection 'b', however, remains ESTABLISHED according to netstat -anp on the real-server box. Why is this? Can I force LVS to properly reset the connection to the real server?
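
    For reference, the relevant knobs look like this; a sketch, assuming a reasonably recent ipvsadm:

        ipvsadm -L --timeout          # show current tcp / tcpfin / udp timeouts
        ipvsadm --set 900 120 300     # set tcp, tcpfin, udp timeouts (seconds)

    These control the LVS connection-table entries only, not the real server's own TCP state, which is exactly the asymmetry described above.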

    Read the article

  • Slow Memcached: Average 10ms memcached `get`

    - by Chris W.
    We're using New Relic to measure our Python/Django application's performance. New Relic reports that across our system "Memcached" takes an average of 12ms to respond to commands. Drilling down into the top dozen or so web views (by number of requests) I can see that some Memcache gets take up to 30ms; I can't find a single Memcache get that returns in less than 10ms.

    More details on the system architecture: Currently we have four application servers, each of which has a memcached member. All four memcached members participate in a memcache cluster. We're running on a cloud hosting provider and all traffic runs across the "internal" network (via "internal" IPs). When I ping from one application server to another, the responses are in ~0.5ms.

    Isn't 10ms a slow response time for Memcached? As far as I understand, if you think "Memcache is too slow" then "you're doing it wrong". So am I doing it wrong?

    Here's the output of the memcache-top command:

        memcache-top v0.7       (default port: 11211, color: on, refresh: 3 seconds)

        INSTANCE        USAGE   HIT %   CONN    TIME    EVICT/s GETS/s  SETS/s  READ/s  WRITE/s
        cache1:11211    37.1%   62.7%   10      5.3ms   0.0     73      9       3958    84.6K
        cache2:11211    42.4%   60.8%   11      4.4ms   0.0     46      12      3848    62.2K
        cache3:11211    37.5%   66.5%   12      4.2ms   0.0     75      17      6056    170.4K

        AVERAGE:        39.0%   63.3%   11      4.6ms   0.0     64      13      4620    105.7K

        TOTAL:          0.1GB/ 0.4GB    33      13.9ms  0.0     193     38      13.5K   317.2K
        (ctrl-c to quit.)

    Here is the output of the top command on one machine (roughly the same on all cluster machines; as you can see there is very low CPU utilization, because these machines only run memcache):

        top - 21:48:56 up 1 day, 4:56, 1 user, load average: 0.01, 0.06, 0.05
        Tasks: 70 total, 1 running, 69 sleeping, 0 stopped, 0 zombie
        Cpu(s): 0.0%us, 0.0%sy, 0.0%ni, 99.7%id, 0.0%wa, 0.0%hi, 0.0%si, 0.3%st
        Mem: 501392k total, 424940k used, 76452k free, 66416k buffers
        Swap: 499996k total, 13064k used, 486932k free, 181168k cached

          PID USER    PR  NI  VIRT  RES SHR S %CPU %MEM   TIME+  COMMAND
         6519 nobody  20   0  384m  74m 880 S  1.0 15.3 18:22.97 memcached
            3 root    20   0     0    0   0 S  0.3  0.0  0:38.03 ksoftirqd/0
            1 root    20   0 24332 1552 776 S  0.0  0.3  0:00.56 init
            2 root    20   0     0    0   0 S  0.0  0.0  0:00.00 kthreadd
            4 root    20   0     0    0   0 S  0.0  0.0  0:00.00 kworker/0:0
            5 root    20   0     0    0   0 S  0.0  0.0  0:00.02 kworker/u:0
            6 root    RT   0     0    0   0 S  0.0  0.0  0:00.00 migration/0
            7 root    RT   0     0    0   0 S  0.0  0.0  0:00.62 watchdog/0
            8 root     0 -20     0    0   0 S  0.0  0.0  0:00.00 cpuset
            9 root     0 -20     0    0   0 S  0.0  0.0  0:00.00 khelper
        ...output truncated...
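
    One way to separate network latency from client overhead is to time a raw get against one memcached node from an application server; a rough sketch, assuming nc is installed and a key named foo exists (host and key are placeholders):

        time printf 'get foo\r\nquit\r\n' | nc cache1 11211

    If the raw round trip comes back in ~1ms while the application reports ~10ms, the time is being spent in the client library (serialization, connection handling) rather than on the wire.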

    Read the article

  • FIX: Visual Studio Post Build Event Returns -1 when it should not.

    - by ChrisD
    I had written a console application that I run as part of my post-build step for other projects. The console application logs a series of messages to the console as it executes. I use the Environment.ExitCode value to specify an error or success condition: when the application executes without issue the ExitCode is 0, and when there is a problem it is -1. As part of my logging, I log the value of the exit code right before the application terminates. When I run this executable from the command line, it behaves as it should; error scenarios return -1 and success scenarios return 0. When I run the same command line as part of the post-build event, however, Visual Studio reports the exit code as -1, even when the application reports the exit code as 0. A snippet of the build output follows:

        Verbose: Exiting with ExitCode=0
        C:\Windows\Microsoft.NET\Framework\v3.5\Microsoft.Common.targets(3397,13): error MSB3073: The command ""MGC.exe" "-TargetPath=C:\TFS\Solutions\Research\Source\Framework\Services\Identity\STS\_STSBuilder\bin\Debug\_STSBuilder.dll"
        C:\Windows\Microsoft.NET\Framework\v3.5\Microsoft.Common.targets(3397,13): error MSB3073:
        C:\Windows\Microsoft.NET\Framework\v3.5\Microsoft.Common.targets(3397,13): error MSB3073: " exited with code -1.

    The application returns a 0 exit code, but Visual Studio reports an error. Why?

    The answer is in the way I format my log messages. Apparently Visual Studio watches the messages that get streamed to the output console. If those messages match a pattern Visual Studio uses to communicate errors, Visual Studio assumes an error has occurred in the executable and returns a -1. This post details the formats used by Visual Studio to determine error conditions.

    In my case, the presence of the colon was tripping up Visual Studio. I replaced all occurrences of the colon with an equal sign, and Visual Studio once again respected the exit code of the application:

        Verbose= Exiting with ExitCode=0
        ========== Build: 3 succeeded or up-to-date, 0 failed, 0 skipped ==========

    Read the article

  • "wrong fs type, bad option, bad superblock" error while mounting FAT Drives

    - by cshubhamrao
    I am unable to mount any FAT32- or FAT16-formatted USB disks under Ubuntu 13.10. The thing to note here is that it happens only with FAT-formatted disks; ntfs- and ext-formatted external USB disks work well (I tried formatting the same disk with ext4 and it worked). Mounting via Nautilus fails, and mounting from the terminal gives:

        root@shubham-pc:~# mount -t vfat /dev/sdc1 /media/shubham/n
        mount: wrong fs type, bad option, bad superblock on /dev/sdc1,
               missing codepage or helper program, or other error
               In some cases useful info is found in syslog - try
               dmesg | tail or so

    As suggested by the error, here is the output of dmesg | tail:

        root@shubham-pc:~# dmesg | tail
        [ 3545.482598] scsi8 : usb-storage 1-1:1.0
        [ 3546.481530] scsi 8:0:0:0: Direct-Access SanDisk Cruzer 1.26 PQ: 0 ANSI: 5
        [ 3546.482373] sd 8:0:0:0: Attached scsi generic sg3 type 0
        [ 3546.483758] sd 8:0:0:0: [sdc] 15633408 512-byte logical blocks: (8.00 GB/7.45 GiB)
        [ 3546.485254] sd 8:0:0:0: [sdc] Write Protect is off
        [ 3546.485262] sd 8:0:0:0: [sdc] Mode Sense: 43 00 00 00
        [ 3546.488314] sd 8:0:0:0: [sdc] Write cache: disabled, read cache: enabled, doesn't support DPO or FUA
        [ 3546.499820] sdc: sdc1
        [ 3546.503388] sd 8:0:0:0: [sdc] Attached SCSI removable disk
        [ 3547.273396] FAT-fs (sdc1): IO charset iso8859-1 not found

    Output from fsck.vfat:

        root@shubham-pc:~# fsck.vfat /dev/sdc1
        dosfsck 3.0.16, 01 Mar 2013, FAT32, LFN
        /dev/sdc1: 1 files, 1/1949978 clusters

    All normal. I tried re-creating the whole partition table and then formatting as FAT32, but to no avail, so the possibility of a corrupted drive is ruled out. I tried the same with around 4 disks, and all show the same behaviour.
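
    The last dmesg line suggests the kernel cannot load its iso8859-1 NLS (codepage) module, which FAT needs but ntfs/ext do not. A possible check and workaround; a sketch rather than a confirmed fix:

        sudo modprobe nls_iso8859-1
        sudo mount -t vfat -o iocharset=iso8859-1 /dev/sdc1 /media/shubham/n

    If the modprobe itself fails, the module may be missing for the running kernel (e.g. a kernel/package mismatch), in which case rebooting into the current kernel or reinstalling the linux-image package would be the next thing to try.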

    Read the article

  • Get Started using Build-Deploy-Test Workflow with TFS 2012

    - by Jakob Ehn
    TFS 2012 introduces a new type of Lab environment called Standard Environment. This allows you to set up a full Build Deploy Test (BDT) workflow that will build your application, deploy it to your target machine(s) and then run a set of tests on that server to verify the deployment. In TFS 2010, you had to use System Center Virtual Machine Manager and involve half of your IT department to get going. Now all you need is a server (virtual or physical) where you want to deploy and test your application. You don't even have to install a test agent on the machine, TFS 2012 will do this for you! Although each step is rather simple, the entire process of setting it up consists of a bunch of steps. So I thought that it could be useful to run through a typical setup. I will also link to some good guidance from MSDN on each topic.

    High Level Steps
    1. Install and configure Visual Studio 2012 Test Controller on Target Server
    2. Create Standard Environment
    3. Create Test Plan with Test Case
    4. Run Test Case
    5. Create Coded UI Test from Test Case
    6. Associate Coded UI Test with Test Case
    7. Create Build Definition using LabDefaultTemplate

    1. Install and Configure Visual Studio 2012 Test Controller on Target Server
    First of all, note that you do not have to have the Test Controller running on the target server. It can be running on another server, as long as the Test Agent can communicate with the test controller and the test controller can communicate with the TFS server. If you have several machines in your environment (web server, database server etc.), the test controller can be installed either on one of those machines or on a dedicated machine. To install the test controller, simply mount the Visual Studio Agents media on the server and browse to the vstf_controller.exe file located in the TestController folder. Run through the installation; you might need to reboot the server since it installs .NET 4.5. When the test controller is installed, the Test Controller configuration tool will launch automatically (if it doesn't, you can start it from the Start menu). Here you will supply the credentials of the account running the test controller service. Note that this account will be given the necessary permissions in TFS during the configuration. Make sure that you have entered a valid account by pressing the Test link. Also, you have to register the test controller with the TFS collection where your test plan is located (and usually the code base, of course). When you press Apply Settings, all the configuration will be done. You might get some warnings at the end that might or might not cause a problem later; be sure to read them carefully. For more information about configuring your test controllers, see Setting Up Test Controllers and Test Agents to Manage Tests with Visual Studio.

    2. Create Standard Environment
    Now you need to create a Lab environment in Microsoft Test Manager. Since we are using an existing physical or virtual machine, we will create a Standard Environment. Open MTM and go to Lab Center. Click New to create a new environment and enter a name for it. Since this environment will only contain one machine, we will use the machine name for the environment (TargetServer in this case). On the next page, click Add to add a machine to the environment. Enter the name of the machine (TargetServer.Domain.Com) and give it the Web Server role. The name must be reachable both from your machine during configuration and from the TFS app tier server. You also need to supply an account that is a local administrator on the target server; this is needed in order to automatically install a test agent later on the machine. On the next page, you can add tags to the machine. This is not needed in this scenario, so go to the next page. Here you will specify which test controller to use and that you want to run UI tests on this environment. This will result in a Test Agent being automatically installed and configured on the target server. The name of the machine where you installed the test controller should be available in the drop-down list (TargetServer in this sample); if you can't see it, you might have selected a different TFS project collection. Press Next twice and then Verify to verify all the settings, then press Finish. This will create and prepare the environment, which means that it will remote-install a test agent on the machine. As part of this installation, the remote server will be restarted.

    3-5. Create Test Plan, Run Test Case, Create Coded UI Test
    I will not cover steps 3-5 here; there is plenty of information on how to create test plans and test cases and automate them using Coded UI Tests. In this example I have a test plan called My Application, and it contains among other things a test suite called Automated Tests where I plan to put test cases that should be automated and executed as part of the BDT workflow. For more information about Coded UI Tests, see Verifying Code by Using Coded User Interface Tests.

    6. Associate Coded UI Test with Test Case
    OK, so now we want to automate our Coded UI Test and have it run as part of the BDT workflow. You might think that your coded UI test already is automated, but the meaning of the term here is that you link your coded UI test to an existing Test Case, thereby making the Test Case automated. And the test case should be part of the test suite that we will run during the BDT. Open the solution that contains the coded UI test method. Open the Test Case work item that you want to automate. Go to the Associated Automation tab and click on the "…" button. Select the coded UI test that corresponds to the test case, press OK and save the test case. For more information about associating an automated test with a test case, see How to: Associate an Automated Test with a Test Case.

    7. Create Build Definition using LabDefaultTemplate
    Now we are ready to create a build definition that will implement the full BDT workflow. For this purpose we will use the LabDefaultTemplate.11.xaml that comes out of the box in TFS 2012. This build process template lets you take the output of another build and deploy it to each target machine. Since the deployment process will be running on the target server, you will have fewer problems with permissions and firewalls than if you were to remote-deploy your solution. So, before creating a BDT workflow build definition, make sure that you have an existing build definition that produces a release build of your application.

    Go to the Builds hub in Team Explorer and select New Build Definition. Give the build definition a meaningful name; here I called it MyApplication.Deploy. Set the trigger to Manual. Define a workspace for the build definition. Note that a BDT build doesn't really need a workspace, since all it does is launch another build definition and deploy the output of that build, but TFS doesn't allow you to save a build definition without adding at least one mapping. On Build Defaults, select the build controller. Since this build won't actually produce any output, you can select the "This build does not copy output files to a drop folder" option. On the Process tab, select the LabDefaultTemplate.11.xaml. This is usually located at $/TeamProject/BuildProcessTemplates/LabDefaultTemplate.11.xaml. To configure it, press the … button on the Lab Process Settings property.

    First, select the environment that you created before. Then select which build you want to deploy and test. The "Select an existing build" option is very useful when developing the BDT workflow, because you do not have to run through the target build every time; instead it basically just runs through the deployment and test steps, which speeds up the process. Here I have selected to queue a new build of the MyApplication.Test build definition. On the Deploy tab, you need to specify how the application should be installed on the target server. You can supply a list of deployment scripts with arguments that will be executed on the target server. In this example I execute the generated web deploy command file to deploy the solution. If you have databases, for example, you can use sqlpackage.exe to deploy the database; if you are producing MSI installers in your build, you can run them using msiexec.exe, and so on. A good practice is to create a batch file that contains the entire deployment, which you can run both locally and on the target server; then you would just execute that deployment batch file here in one single step (a sketch of such a file follows at the end of this walkthrough).

    The workflow defines some variables that are useful when running the deployments:

        $(BuildLocation)                    The full path to where your build files are located
        $(InternalComputerName_<VM Name>)   The computer name for a virtual machine in an SCVMM environment
        $(ComputerName_<VM Name>)           The fully qualified domain name of the virtual machine

    As you can see, I specify the path to the myapplication.deploy.cmd file using the $(BuildLocation) variable, which is the drop folder of the MyApplication.Test build. Note: the test agent account must have read permission in this drop location. You can find more information here on Building your Deployment Scripts. On the last tab, we specify which tests to run after deployment. Here I select the test plan and the Automated Tests test suite that we saw before. Note that I also selected the automated test settings (called TargetServer in this case) that I have defined for my test plan; in these I define what data should be collected as part of the test run. For more information about test settings, see Specifying Test Settings for Microsoft Test Manager Tests.

    We are done! Queue your BDT build and wait for it to finish. If the build succeeds, your build summary should look something like the screenshot in the original post.
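
    As an illustration of the one-step deployment batch file mentioned above, here is a minimal sketch; the installer, dacpac and server names are placeholders, not part of the original walkthrough:

        :: myapplication.deploy.cmd - run locally or via the Lab deploy step
        :: %~dp0 expands to the folder this script lives in (the drop folder)
        msiexec /i "%~dp0MyApp.Installer.msi" /qn
        sqlpackage.exe /Action:Publish /SourceFile:"%~dp0MyApp.Database.dacpac" /TargetServerName:localhost /TargetDatabaseName:MyApp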

    Read the article

  • How do I classify using GLCM and SVM Classifier in Matlab?

    - by Gomathi
    I'm working on a project for liver tumor segmentation and classification. I used region growing and FCM for liver and tumor segmentation respectively. Then I used gray-level co-occurrence matrices (GLCM) for texture feature extraction. I have to use a Support Vector Machine for classification, but I don't know how to normalize the feature vectors. Can anyone tell me how to program it in Matlab? To the GLCM program, I gave the tumor-segmented image as input. Was I correct? If so, I think my output will also be correct. My GLCM coding, as far as I have tried, is:

        I = imread('fzliver3.jpg');
        GLCM = graycomatrix(I,'Offset',[2 0;0 2]);
        stats = graycoprops(GLCM,'all')
        t1 = struct2array(stats)

        I2 = imread('fzliver4.jpg');
        GLCM2 = graycomatrix(I2,'Offset',[2 0;0 2]);
        stats2 = graycoprops(GLCM2,'all')
        t2 = struct2array(stats2)

        I3 = imread('fzliver5.jpg');
        GLCM3 = graycomatrix(I3,'Offset',[2 0;0 2]);
        stats3 = graycoprops(GLCM3,'all')
        t3 = struct2array(stats3)

        t = [t1; t2; t3]
        xmin = min(t);
        xmax = max(t);
        scale = xmax - xmin;
        tf = (x - xmin) / scale

    Was this a correct implementation? Also, I get an error at the last line. My output is:

        stats =
            Contrast: [0.0510 0.0503]
            Correlation: [0.9513 0.9519]
            Energy: [0.8988 0.8988]
            Homogeneity: [0.9930 0.9935]
        t1 =
            0.0510 0.0503 0.9513 0.9519 0.8988 0.8988 0.9930 0.9935
        stats2 =
            Contrast: [0.0345 0.0339]
            Correlation: [0.8223 0.8255]
            Energy: [0.9616 0.9617]
            Homogeneity: [0.9957 0.9957]
        t2 =
            0.0345 0.0339 0.8223 0.8255 0.9616 0.9617 0.9957 0.9957
        stats3 =
            Contrast: [0.0230 0.0246]
            Correlation: [0.7450 0.7270]
            Energy: [0.9815 0.9813]
            Homogeneity: [0.9971 0.9970]
        t3 =
            0.0230 0.0246 0.7450 0.7270 0.9815 0.9813 0.9971 0.9970
        t =
            0.0510 0.0503 0.9513 0.9519 0.8988 0.8988 0.9930 0.9935
            0.0345 0.0339 0.8223 0.8255 0.9616 0.9617 0.9957 0.9957
            0.0230 0.0246 0.7450 0.7270 0.9815 0.9813 0.9971 0.9970

        ??? Error using ==> minus
        Matrix dimensions must agree.

    The images are attached to the original post.
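
    A sketch of a fix for the last line, under the assumption that each row of t is one feature vector: the error comes from subtracting the 1-by-8 row vector xmin from a variable x whose size doesn't match (and which was never built from t). Expanding the row vectors to the size of t gives a per-column min-max normalization:

        % normalize each feature column of t to [0, 1] (hedged sketch)
        xmin  = min(t, [], 1);       % 1-by-8 column minima
        xmax  = max(t, [], 1);
        scale = xmax - xmin;
        scale(scale == 0) = 1;       % guard against zero-range columns
        tf = bsxfun(@rdivide, bsxfun(@minus, t, xmin), scale);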

    Read the article

  • Difference between sh file.sh and file.sh

    - by RAS
    I have two questions: What is the difference between executing sh filename.sh and filename.sh? And how can I make both of them give me the same output?

    I'm asking because I'm facing a problem. I'm trying to run a Java + SWT application from the terminal. When I run filename.sh, it gives me the desired output, but when I run sh filename.sh or bash filename.sh, it throws an error:

        Exception in thread "main" java.lang.NoClassDefFoundError: MainForm/java
        Caused by: java.lang.ClassNotFoundException: MainForm.java
            at java.net.URLClassLoader$1.run(URLClassLoader.java:202)
            at java.security.AccessController.doPrivileged(Native Method)
            at java.net.URLClassLoader.findClass(URLClassLoader.java:190)
            at java.lang.ClassLoader.loadClass(ClassLoader.java:306)
            at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:301)
            at java.lang.ClassLoader.loadClass(ClassLoader.java:247)
        Could not find the main class: MainForm.java. Program will exit.

    I know this question has already been asked here, but I'm still not clear about it. I have gone through the following links:

        What is the difference between ./ and sh to run a script?
        Can scripts run even when they are not set as executable?

    Can anyone help me with this?
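
    A quick way to see one common cause of such differences; a sketch, assuming the script has a shebang line:

        head -1 filename.sh    # e.g. #!/bin/bash

    Running filename.sh directly (as ./filename.sh or via $PATH) starts the interpreter named on that shebang line, whereas sh filename.sh forces /bin/sh (dash on Ubuntu) regardless of the shebang, so bash-specific syntax or environment setup in the script can behave differently between the two invocations.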

    Read the article

  • Why does my audio keep reverting to stereo?

    - by GraemeF
    I have an ASUS P7P55D LE motherboard with onboard sound. I am running Windows 7 64-bit and using the S/PDIF output from my motherboard to my receiver. On the receiver I can see which audio channels are in use (i.e. stereo or 5.1). In the audio interface properties I can test the DTS Audio and Dolby Digital outputs and both work fine, but when I try to play a game with 5.1 sound (I've tried Left 4 Dead and Dragon Age: Origins) it reverts to stereo. I was getting this behaviour with the default Microsoft drivers, so I installed the latest from the ASUS website, but there is no difference. I notice that on the Advanced tab the Default Format only lets me choose from various 2-channel formats, so maybe that has something to do with it? How can I get 5.1 output all of the time?

    Read the article

  • Munin Aggregated Graphs Configuration Error

    - by Sparsh Gupta
    I tried making some Munin aggregated graphs, but somehow I am unable to make the configuration work. I think I have followed the instructions, but since it's not working, I would love some assistance or guidance as to what I am doing wrong. I want to aggregate (sum) the total number of requests per second across all my nginx servers combined. The configuration looks like:

        [TRAFFIC.AGGREGATED]
        update no
        requests.graph_title nGinx requests
        requests.graph_vlabel nGinx requests per second
        requests.draw LINE2
        requests.graph_args --base 1000
        requests.graph_category nginx
        requests.label req/sec
        requests.type DERIVE
        requests.min 0
        requests.graph_order output
        requests.output.sum \
            lb1.visualwebsiteoptimizer.com:nginx_request_lb1.visualwebsiteoptimizer.com_request.request \
            lb3.visualwebsiteoptimizer.com:nginx_request_lb2.visualwebsiteoptimizer.com_request.request \
            lb3.visualwebsiteoptimizer.com:nginx_request_lb3.visualwebsiteoptimizer.com_request.request

    The munin graph I want to aggregate is http://exchange.munin-monitoring.org/plugins/nginx_request/details

    Thanks, Sparsh Gupta

    Read the article

  • Troubleshooting EBS Discovery Issues - Part 2

    - by Kenneth E.
    Part 1 of “Troubleshooting EBS Discovery Issues”, posted on May 17th, focused on the diagnostics associated with the initial discovery of EBS instances (e.g., Forms servers, APPL_TOPs, databases, etc.). Part 2 focuses on verifying parameters of the Change Management features, also called “Pack Diagnostics”, specifically for Customization Manager, Patch Manager, Setup Manager, Automated Cloning, and User Monitoring. As stated in the first post, there can be numerous reasons that discovery fails - credentials, file-level permissions, patch levels - just to name a few.

    The Discovery Wizard can be accessed from the EBS homepage. From the home page, click "Pack Diagnostics", then click "Create" to define the diagnostic process. Provide the required inputs: Name, Module (i.e., Customization Manager, Patch Manager, Setup Manager, Automated Cloning, or User Monitoring), Show Details (typically "All"), and Category (typically check both Generic and User Specific). Add the appropriate targets. Once the diagnostic process has completed, view the results by clicking on "Succeeded" or "Failed" in the Status column. Expand the entire tree and click on each "Succeeded/Failed" status to see the results of each test within that task.

    (Sample output in the original post shows verification of the o/s user name and required patches, and an example of a failed test.) Complete descriptions of, as well as recommended corrective actions for, all of the diagnostic tests run in EM 12c are found in this spreadsheet. Additional information can be found in the Application Management Pack User Guide.

    Read the article

  • Manage SQL Server Connectivity through Windows Azure Virtual Machines Remote PowerShell

    - by SQLOS Team
    Manage SQL Server Connectivity through Windows Azure Virtual Machines Remote PowerShell

    This blog post comes from Khalid Mouss, Senior Program Manager in Microsoft SQL Server.

    Overview
    The goal of this blog is to demonstrate how we can automate, through PowerShell, connecting multiple SQL Server deployments in Windows Azure Virtual Machines. We configure a TCP port that we open (and close) through the Windows firewall from a remote PowerShell session to the Virtual Machine (VM). This demonstrates how to take advantage of the remote PowerShell support in Windows Azure Virtual Machines to automate the steps required to connect SQL Server in the same cloud service and in different cloud services.

    Scenario 1: VMs connected through the same Cloud Service
    Two virtual machines are configured in the same cloud service, each running a different SQL Server instance. Both VMs are configured with remote PowerShell turned on, so that PowerShell and other commands can be run on them remotely in order to re-configure them to allow incoming SQL connections from a remote VM or on-premise machine(s). Note: RDP (Remote Desktop Protocol) is kept configured on both VMs by default to be able to connect to them and check the connections to the SQL instances, for demo purposes only; it is not actually required.

    Step 1 - Provision VMs and configure ports
    Provision VM1 (DemoVM1), then provision VM2 (DemoVM2) with PowerShell remoting enabled and connected to DemoVM1 (see the example screenshots in the original post if using the portal). After provisioning the two VMs, note the default port configuration.

    Step 2 - Verify/confirm the TCP port used by the Database Engine
    By default, the port will be 1433 - this can be changed to a different port number if desired.
    1. RDP to each of the VMs created above - this will also ensure the VMs complete SysPrep(ing) and finish configuration.
    2. Go to SQL Server Configuration Manager -> SQL Server Network Configuration -> Protocols for <SQL instance> -> TCP/IP -> IP Addresses.
    3. Confirm the port number used by the SQL Server engine; in this case 1433.
    4. Switch from Windows Authentication to Mixed mode.
    5. Restart the SQL Server service for the change to take effect.
    6. Repeat steps 3, 4, and 5 for the second VM: DemoVM2.

    Step 3 - Remote PowerShell to DemoVM1

        Enter-PSSession -ComputerName condemo.cloudapp.net -Port 61503 -Credential <username> -UseSSL -SessionOption (New-PSSessionOption -SkipCACheck -SkipCNCheck)

    You will then be prompted to enter the password.

    Step 4 - Open port 1433 in the Windows firewall

        netsh advfirewall firewall add rule name="DemoVM1Port" dir=in localport=1433 protocol=TCP action=allow
        Ok.

        netsh advfirewall firewall show rule name=DemoVM1Port

        Rule Name:      DemoVM1Port
        ----------------------------------------------------------------------
        Enabled:        Yes
        Direction:      In
        Profiles:       Domain,Private,Public
        Grouping:
        LocalIP:        Any
        RemoteIP:       Any
        Protocol:       TCP
        LocalPort:      1433
        RemotePort:     Any
        Edge traversal: No
        Action:         Allow
        Ok.

    Step 5 - Now connect from DemoVM2 to the DB instance in DemoVM1.

    Step 6 - Close port 1433 in the Windows firewall

        netsh advfirewall firewall delete rule name=DemoVM1Port
        Deleted 1 rule(s).
        Ok.

        netsh advfirewall firewall show rule name=DemoVM1Port
        No rules match the specified criteria.

    Step 7 - Try to connect from DemoVM2 to the DB instance in DemoVM1
    Because port 1433 has been closed (in step 6) in the Windows firewall on the VM1 machine, we can no longer connect from VM2 remotely to VM1.

    Scenario 2: VMs provisioned in different Cloud Services
    Two virtual machines are configured in different cloud services, each running a different SQL Server instance. Both VMs are configured with remote PowerShell turned on, as in Scenario 1, and RDP is again kept configured for demo purposes only.

    Step 1 - Provision new VM3
    Provision VM3 (DemoVM3); see the example screenshots in the original post if using the portal. After provisioning is complete, note the default port configuration.

    Step 2 - Add a public port to VM1 to connect to from VM3's DB instance
    Since VM3 and VM1 are not in the same cloud service, we will need to specify the full DNS address when connecting between the machines, which includes the public port. We add public port 57000, linked to private port 1433, which will be used later to connect to the DB instance.

    Step 3 - Remote PowerShell to DemoVM1

        Enter-PSSession -ComputerName condemo.cloudapp.net -Port 61503 -Credential <UserName> -UseSSL -SessionOption (New-PSSessionOption -SkipCACheck -SkipCNCheck)

    You will then be prompted to enter the password.

    Step 4 - Open port 1433 in the Windows firewall (same netsh add/show commands and rule output as in Scenario 1).

    Step 5 - Now connect from DemoVM3 to the DB instance in DemoVM1
    RDP into VM3, launch SSMS and connect to VM1's DB instance. You must specify the full server name using the DNS address and the public port number configured above.

    Step 6 - Close port 1433 in the Windows firewall (same netsh delete command and output as in Scenario 1).

    Step 7 - Try to connect from DemoVM3 to the DB instance in DemoVM1
    Because port 1433 has been closed (in step 6) in the Windows firewall on the VM1 machine, we can no longer connect from VM3 remotely to VM1.

    Conclusion
    Through the new support for remote PowerShell in Windows Azure Virtual Machines, one can script and automate many Virtual Machine and SQL management tasks. In this blog, we have demonstrated how to start a remote PowerShell session and re-configure the Virtual Machine firewall to allow (or disallow) SQL Server connections.

    References
    SQL Server in Windows Azure Virtual Machines

    Originally posted at http://blogs.msdn.com/b/sqlosteam/

    Read the article

  • ghc6 install trouble: hGetContents: invalid argument (invalid UTF-8 byte sequence)

    - by olimay
    Having trouble installing ghc6. Here's what seems to be the relevant error that comes up when I try to (apt-get|aptitude) install ghc6:

        A package failed to install. Trying to recover:
        Setting up ghc6 (6.12.1-13ubuntu1) ...
        ghc-pkg: /home/opm/.ghc/i386-linux-6.12.1/package.conf.d/unix-compat-0.2-edefa7bced91ebe610d455bab466e200.conf: hGetContents: invalid argument (invalid UTF-8 byte sequence)

    (Here's the full output, if you're interested: http://paste.ubuntu.com/566475/ ) This still happens after apt-get clean and apt-get update. My searching around has not really helped me understand what's going on, except that it might be caused by a mismatch in locale. So here's the output of locale too:

        LANG=en_US.utf8
        LANGUAGE=en_US:en
        LC_CTYPE="en_US.utf8"
        LC_NUMERIC="en_US.utf8"
        LC_TIME="en_US.utf8"
        LC_COLLATE="en_US.utf8"
        LC_MONETARY="en_US.utf8"
        LC_MESSAGES="en_US.utf8"
        LC_PAPER="en_US.utf8"
        LC_NAME="en_US.utf8"
        LC_ADDRESS="en_US.utf8"
        LC_TELEPHONE="en_US.utf8"
        LC_MEASUREMENT="en_US.utf8"
        LC_IDENTIFICATION="en_US.utf8"
        LC_ALL=

    Any ideas? Additional background: this all seems very strange to me, because I used to have ghc6 installed correctly - I use XMonad as my main window manager most of the time. I tried to install haskell-platform (through apt), which failed and told me that there was something wrong with ghc6, so I reinstalled ghc6 and began to get the above error message. (Originally posted to SuperUser, until I remembered today that this SE site existed.)
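
    Since the error points at a file in the per-user package database (~/.ghc) rather than the system one, a possible recovery is to move that database aside and let the install re-run; a sketch, not a verified fix:

        mv ~/.ghc ~/.ghc.bak
        sudo apt-get install --reinstall ghc6

    If that works, any user-installed packages under ~/.ghc.bak can be reinstalled with cabal afterwards.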

    Read the article

  • bash one-liner loop over directories throws errors

    - by cori
    I'm trying to build a bash one-liner to loop over the directories within the current directory and tar each one into a unique tarball, using the directory name as the tar file name. I've got the basics working (finding the directory names and tarring them up with those names), but my loop tosses some error messages and I can't understand where it's getting the commands it's trying to run. Here's the mostly-working one-liner:

        for f in `ls -d */`; do `tar -czvvf ${f%/}.tar.gz $f`;done

    The "strange" output is:

        -bash: drwxrwxr-x: command not found
        -bash: drwxr-xr-x: command not found
        -bash: drwxr-xr-x: command not found
        -bash: drwxrwxr-x: command not found

    What portion of the command that I'm running do I not understand that is generating that output?
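
    A likely explanation, offered as a sketch rather than a verified diagnosis: the backticks wrapped around the tar command are command substitution, so bash runs tar and then tries to execute tar's verbose -vv listing (lines beginning with drwxrwxr-x ...) as commands, producing exactly those "command not found" errors. Dropping the backticks, and letting the shell glob directories directly instead of parsing ls, avoids both problems:

        for f in */; do tar -czvf "${f%/}.tar.gz" "$f"; done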

    Read the article

  • Migrate AD DS Server 2003 to Server 2008 R2

    - by user2566483
    I would like to get a couple of opinions. I found this article online and wanted to know if it is good to follow: http://www.msserverpro.com/migrating-active-directory-domain-controller-from-windows-server-2003-sp2-to-windows-server-2008-r2/

    A couple of things need to be done:
    1. Move all Active Directory settings from the old Server 2003 server to the new Server 2008 R2 server.
    2. Set up all users on the new server using csvde:

        csvde -f output.csv        (on the old server)
        csvde -i -f output.csv     (on the new server)

    Any advice would be greatly appreciated.

    Read the article
