Search Results

Search found 29195 results on 1168 pages for 'test explorer'.


  • IE 8 Finishes Last on Google JavaScript Test

    Google last week provided an additional means for users to test JavaScript performance in Web browsers…

    Read the article

  • Google wants to track your purchases outside the internet sphere, a project in its test phase

    In early October, Google announced its intention to better measure how effective its online ads are at driving sales in physical shops. It has to be said that the company is far from alone: many firms have already tried to establish a link between the number of clicks on an ad and an actual purchase. Digiday reports that, to achieve this, Google has launched a program that…

    Read the article

  • Deploying, but without those pesky test files!

    - by Chris Skardon
    Silverlight testing is great, we all know that (don't we??), we're expected to do it as part of the development process, but once we've got an awesome application written and we come to deploy it, we don't want the test files going out with it… You might be like me and have the files in a Web project – let's face it, that's how we're pushed into doing it… So let's stick with it!

    Now, I'm deploying via the wonders of the Web Deployment shizzle, but this also applies to the classic 'installer' project as well. Baaaasically, we're going to use the 'Debug' / 'Release' configurations to include given files.

    OK, you know in the top of your Visual Studio editor, you (usually) have a drop down which predominantly reads 'Debug'? Those are 'configurations'. Mostly we don't bother changing it, primarily due to laziness, but also the fact that we generally don't see 'Release' as actually doing anything other than making it harder to find problems :) Well, today my friends we're going to change that bad boy…

    The next few steps just help you set up a new 'Debug' configuration; you can instead switch to the 'Release' configuration and skip to the end…

    First let's go to the Configuration Manager. There are multiple ways in: through the 'Build' menu (at the bottom), or via the drop down which currently has 'Debug' in it :) Got it? Select 'New' from the 'Active solution configuration' drop down, and create a new configuration (Name: DebugWithNoTests, Copy settings from: 'Debug', ensuring the 'Create new project configurations' checkbox is checked). Press OK. VS will do some shizzle, and in the Configuration Manager you will see pretty much exactly what you had before, only with 'Debug' replaced by 'DebugWithNoTests'. Turn off the build options for the test projects. We won't need them…

    IF you skipped down from the top, this is where you'll be wanting to stop!!!

    Close the Configuration Manager, and now we're one Notepad step away from achieving our goals. Yes, I said Notepad. You can't do what we're going to do in VS. (Pity.) Go to the folder where your web project is, right-click on the '.csproj' file, and open it with Notepad. Head on down to the '<Content Include' bits; they'll look like this:

    <ItemGroup>
      <Content Include="ClientBin\Tests.xap" />
      ...
    </ItemGroup>

    Take each of the files you don't want deployed and change its entry to:

    <Content Include="ClientBin\Tests.xap" Condition="'$(Configuration)' == 'Debug'" />

    Once you've got that sorted, publish your project: once with the Debug configuration selected, and again with any other configuration ('Release', 'DebugWithNoTests', etc.)… No files! Huzzah!
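
    If several test artifacts need excluding, the same Condition can sit on the whole ItemGroup rather than on each entry. A minimal sketch (the file names other than Tests.xap are hypothetical placeholders):

    <!-- Sketch: everything in this group is only treated as Content in Debug builds. -->
    <ItemGroup Condition="'$(Configuration)' == 'Debug'">
      <Content Include="ClientBin\Tests.xap" />
      <Content Include="TestPage.html" />
    </ItemGroup>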

    Read the article

  • New Test Fest Available

    - by Cinzia Mascanzoni
    Test Fest is back by popular demand, and has been included as one of the many partner benefits for attending OPN Exchange this year. Remind your partners to join us from October 1 - 4 in the Marriott Marquis, Juniper Room at Oracle OpenWorld, and be sure to watch and share the new video.

    Read the article

  • Testing the Raspberry Pi 5M camera, a tutorial by Nicolargo

    Hello, we previously published the tutorial Raspberry Pi: unboxing and installation. As a follow-up, we offer this tutorial: Testing the Raspberry Pi 5M camera. Quote: Raspberry has recently begun offering, for less than €25, a camera dedicated to its Pi range. Weighing only a few grams, the camera connects to a Raspberry Pi (model A or B) through a dedicated CSI v2 interface (MIPI camera interface). Thanks to Kubii (the Farnell distributor in France), I was able to obtain one quickly…

    Read the article

  • Quick introduction to the Web Load Test features of Visual Studio 2010

    Many developers are not even aware that you can set up and run some very sophisticated web load tests for an ASP.NET application right from within Visual Studio. This article provides a quick introduction to the Web Load Test features of Visual Studio 2010. By Peter Bromberg

    Read the article

  • Is Zscaler's HTTPS Everywhere for IE affiliated with the EFF?

    - by noloader
    The Electronic Frontier Foundation has a tool called HTTPS Everywhere for Firefox, Chrome and Opera, but they don't list Internet Explorer. There is a company called Zscaler that provides HTTPS Everywhere for Internet Explorer. I can't determine whether the EFF approves of Zscaler's IE extension, or whether the company hijacked the EFF tool's name. The only mention of Zscaler on the EFF website is in the HTTPS Everywhere Atlas. The closest I can find from Zscaler are link-backs on the Zscaler website and a Zscaler PDF that points back to the EFF website. The EFF tool is free, open source and can be audited, but I don't see similar governance for Zscaler's tool. For example, I see no mention of source code at HTTPS Everywhere for Internet Explorer, and it looks like there's a "Free Trial" for HTTPS Everywhere for Internet Explorer (see the link on the right). Is Zscaler's HTTPS Everywhere for IE affiliated with the EFF?

    Read the article

  • Windows 2003 Server can't connect to an SSL site from IE

    - by JL
    I am trying to connect to www.czebox.cz using Internet Explorer on Windows 2003 Server. If you have a server to test from, please do: you'll notice that it does not connect and instead returns "Internet Explorer cannot display the webpage". From Firefox on the server it works fine, and from Windows 7 it works fine in Internet Explorer. How can I get it to work in Internet Explorer on Windows 2003 Server?

    Read the article

  • Registry settings for disabling autocomplete options in IE8

    - by redphoenix
    I have a script that disables all the autocomplete settings in IE6 and IE7, but IE8 has new settings that aren't disabled by the script. The settings are under Tools > Content > AutoComplete > Settings. The current registry values I'm modifying are:

    HKCU\Software\Microsoft\Internet Explorer\Main\FormSuggest Passwords = "no"
    HKCU\Software\Microsoft\Internet Explorer\Main\FormSuggest PW Ask = "no"
    HKCU\Software\Microsoft\Internet Explorer\Main\Use FormSuggest = "no"
    HKCU\Software\Microsoft\Windows\CurrentVersion\Explorer\AutoComplete\AutoSuggest = "no"

    The settings that are not unchecked by these values are Browsing History and Favorites. Does anyone know the registry settings that are associated with these IE settings?
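
    For reference, the values above can be expressed as a .reg file and merged from a script. This is only a sketch restating the asker's existing keys (REG_SZ values assumed); it does not cover the still-unknown Browsing History and Favorites settings:

    Windows Registry Editor Version 5.00

    ; Restatement of the IE6/IE7-era values listed above.
    [HKEY_CURRENT_USER\Software\Microsoft\Internet Explorer\Main]
    "FormSuggest Passwords"="no"
    "FormSuggest PW Ask"="no"
    "Use FormSuggest"="no"

    [HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\Explorer\AutoComplete]
    "AutoSuggest"="no"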

    Read the article

  • Windows XP to remote Server 2008 R2 shares - awful response times

    - by nick3216
    I have a network infrastructure of Windows XP clients (a mix of XP and 64-bit XP) that are accessing a network share on a Windows 2008 R2 server. Whenever users type the address of a folder into the address bar of Windows Explorer, it's as snappy at determining the contents of the current folder and presenting them as if they were working on a local drive. But if they open one of the subfolders, users get the animated red torch and the 'Searching for items...' dialog, typically for 45 seconds. Similarly, when using the open-folder dialog to try to select a subfolder on this share, it takes on average 45 seconds for the dialog to expand each node and show its subfolders. Also, while the Explorer instance accessing the network share is running slowly, users notice that the performance of all other Explorer windows suffers. So while Explorer is searching for files on the network share, they can't switch to another task and navigate around their local drive using Explorer, because it's now as slow as a dead dog at accessing anything. Are there any settings we can change which will improve the performance of accessing network shares?

    Read the article

  • Copied the Reachability test from Apple, but the linker fails

    - by nico
    I have tried to use the Reachability project published by Apple to detect reachability in an example of my own. I copied most of the initialization, but I get this failure from the linker:

    Ld build/switchViews.build/Debug-iphoneos/test.build/Objects-normal/armv6/test normal armv6
    cd /Users/uid04100/Documents/TEST
    setenv IPHONEOS_DEPLOYMENT_TARGET 3.1.3
    setenv PATH "/Developer/Platforms/iPhoneOS.platform/Developer/usr/bin:/Developer/usr/bin:/usr/bin:/bin:/usr/sbin:/sbin"
    /Developer/Platforms/iPhoneOS.platform/Developer/usr/bin/gcc-4.2 -arch armv6 -isysroot /Developer/Platforms/iPhoneOS.platform/Developer/SDKs/iPhoneOS3.1.3.sdk -L/Users/uid04100/Documents/TEST/build/Debug-iphoneos -F/Users/uid04100/Documents/TEST/build/Debug-iphoneos -filelist /Users/uid04100/Documents/TEST/build/switchViews.build/Debug-iphoneos/test.build/Objects-normal/armv6/test.LinkFileList -dead_strip -miphoneos-version-min=3.1.3 -framework Foundation -framework UIKit -framework CoreGraphics -o /Users/uid04100/Documents/TEST/build/switchViews.build/Debug-iphoneos/test.build/Objects-normal/armv6/test

    Undefined symbols:
    "_SCNetworkReachabilitySetCallback", referenced from: -[Reachability startNotifer] in Reachability.o
    "_SCNetworkReachabilityCreateWithAddress", referenced from: +[Reachability reachabilityWithAddress:] in Reachability.o
    "_SCNetworkReachabilityScheduleWithRunLoop", referenced from: -[Reachability startNotifer] in Reachability.o
    "_SCNetworkReachabilityGetFlags", referenced from: -[Reachability connectionRequired] in Reachability.o, -[Reachability currentReachabilityStatus] in Reachability.o
    "_SCNetworkReachabilityUnscheduleFromRunLoop", referenced from: -[Reachability stopNotifer] in Reachability.o
    "_SCNetworkReachabilityCreateWithName", referenced from: +[Reachability reachabilityWithHostName:] in Reachability.o
    ld: symbol(s) not found
    collect2: ld returned 1 exit status

    My delegate.h:

    #import <UIKit/UIKit.h>

    @class Reachability;

    @interface testAppDelegate : NSObject <UIApplicationDelegate> {
        UIWindow *window;
        UINavigationController *navigationController;
        Reachability *hostReach;
        Reachability *internetReach;
        Reachability *wifiReach;
    }

    @property (nonatomic, retain) IBOutlet UIWindow *window;
    @property (nonatomic, retain) IBOutlet UINavigationController *navigationController;

    @end

    My delegate.m:

    #import "testAppDelegate.h"
    #import "SecondViewController.h"
    #import "Reachability.h"

    @implementation testAppDelegate

    @synthesize window;
    @synthesize navigationController;

    - (void)updateInterfaceWithReachability:(Reachability *)curReach {
        if (curReach == hostReach) {
            BOOL connectionRequired = [curReach connectionRequired];
            if (connectionRequired) {
                // in these brackets should be some code with sense, if I'm getting it to run
            } else {
            }
        }
        if (curReach == internetReach) {
        }
        if (curReach == wifiReach) {
        }
    }

    // Called by Reachability whenever status changes.
    - (void)reachabilityChanged:(NSNotification *)note {
        Reachability *curReach = [note object];
        NSParameterAssert([curReach isKindOfClass:[Reachability class]]);
        [self updateInterfaceWithReachability:curReach];
    }

    - (void)applicationDidFinishLaunching:(UIApplication *)application {
        // Override point for customization after application launch.
        // Observe the kNetworkReachabilityChangedNotification. When that notification is posted,
        // the method "reachabilityChanged" will be called.
        // [[NSNotificationCenter defaultCenter] addObserver:self selector:@selector(reachabilityChanged:) name:kReachabilityChangedNotification object:nil];

        // Change the host name here to change the server you're monitoring.
        hostReach = [[Reachability reachabilityWithHostName:@"www.apple.com"] retain];
        [hostReach startNotifer];
        [self updateInterfaceWithReachability:hostReach];

        internetReach = [[Reachability reachabilityForInternetConnection] retain];
        [internetReach startNotifer];
        [self updateInterfaceWithReachability:internetReach];

        wifiReach = [[Reachability reachabilityForLocalWiFi] retain];
        [wifiReach startNotifer];
        [self updateInterfaceWithReachability:wifiReach];

        [window addSubview:[navigationController view]];
        [window makeKeyAndVisible];
    }

    - (void)dealloc {
        [navigationController release];
        [window release];
        [super dealloc];
    }

    @end
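
    The _SCNetworkReachability* symbols live in SystemConfiguration.framework, which the Ld line above never links (only Foundation, UIKit and CoreGraphics appear). A hedged sketch of the relevant tail of the link line with that framework added (an assumption about the fix, since the project's framework list isn't shown):

    ... -framework Foundation -framework UIKit -framework CoreGraphics -framework SystemConfiguration -o ...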

    Read the article

  • How do I get JavaScript-generated image maps to work with Internet Explorer?

    - by schwerwolf
    I'm using JavaScript to generate a high-resolution grid for an image that I generated on a web server. The high-resolution grid is composed of a 'map' element with hundreds of 'area' child elements. Each 'area' element has an onmouseover attribute that causes the display of a popup box. After assigning the map to the img (via the usemap attribute), Internet Explorer ignores the 'onmouseover' attribute of the area elements that I added via JavaScript. The behavior is not caused by syntactical variations between IE and other browsers: a static map behaves correctly. Only the elements that I add dynamically to an existing image map fail to fire their corresponding mouse-over events. How can I get IE to fire the mouse-over event for the added 'area' elements?

    function generate_image_map(img) {
        var tile_width = 8;
        var tile_height = 10;
        var plotarea_left = 40;
        var plotarea_top = 45;
        var column_count = 100;
        var row_count = 120;
        var img_id = YAHOO.util.Dom.getAttribute(img, "id");
        var img_map_id = YAHOO.util.Dom.getAttribute(img, "usemap");
        var original_map = YAHOO.util.Selector.query(img_map_id)[0];
        var area_nodes = YAHOO.util.Selector.query("area", original_map);
        var last_node = area_nodes[area_nodes.length - 1];
        for (var y = 0; y < row_count; y++) {
            var top = Math.round(plotarea_top + (y * tile_height));
            var bottom = Math.round(plotarea_top + (y * tile_height) + tile_height);
            for (var x = 0; x < column_count; x++) {
                var left = Math.round(plotarea_left + (x * tile_width));
                var right = Math.round(plotarea_left + (x * tile_width) + tile_width);
                var area = document.createElement("area");
                YAHOO.util.Dom.setAttribute(area, "shape", "rect");
                YAHOO.util.Dom.setAttribute(area, "onmouseover", "alert('This does not appear in IE')");
                var coords = [left, top, right, bottom];
                YAHOO.util.Dom.setAttribute(area, "coords", coords.join(","));
                YAHOO.util.Dom.insertBefore(area, last_node);
            }
        }
    }
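
    One detail worth noting when reading this code: in IE 8 and earlier, setting an event attribute as a string via setAttribute generally does not wire up a handler on dynamically created elements. A hedged sketch of the usual workaround, assigning the handler as a property instead (this fragment assumes the same YUI-based loop as above; the handler body is hypothetical):

    // Instead of setAttribute("onmouseover", "..."), assign the handler
    // directly; old IE ignores string event attributes set via setAttribute.
    var area = document.createElement("area");
    area.shape = "rect";
    area.coords = [left, top, right, bottom].join(",");
    area.onmouseover = function () {
        alert("This fires in IE too"); // hypothetical handler body
    };
    YAHOO.util.Dom.insertBefore(area, last_node);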

    Read the article

  • Kruskal-Wallis test with details on pairwise comparisons

    - by dalloliogm
    The standard stats::kruskal.test function lets you run the Kruskal-Wallis test on a dataset:

    > data(diamonds)
    > kruskal.test(price ~ carat, data = diamonds)

            Kruskal-Wallis rank sum test

    data:  price by carat
    Kruskal-Wallis chi-squared = 50570.15, df = 272, p-value < 2.2e-16

    This is correct: it gives me a probability that all the groups in the data have the same mean. However, I would like to have the details of each pairwise comparison, e.g. whether diamonds of colors D and E have the same mean price, as some other software (SPSS) provides when you ask for a Kruskal test. I have found kruskalmc from the package pgirmess, which lets me do what I want:

    > kruskalmc(diamonds$price, diamonds$color)
    Multiple comparison test after Kruskal-Wallis
    p.value: 0.05
    Comparisons
          obs.dif critical.dif difference
    D-E  571.7459     747.4962      FALSE
    D-F 2237.4309     751.5684       TRUE
    D-G 2643.1778     726.9854       TRUE
    D-H 4539.4392     774.4809       TRUE
    D-I 6002.6286     862.0150       TRUE
    D-J 8077.2871    1061.7451       TRUE
    E-F 2809.1767     680.4144       TRUE
    E-G 3214.9237     653.1587       TRUE
    E-H 5111.1851     705.6410       TRUE
    E-I 6574.3744     800.7362       TRUE
    E-J 8649.0330    1012.6260       TRUE
    F-G  405.7470     657.8152      FALSE
    F-H 2302.0083     709.9533       TRUE
    F-I 3765.1977     804.5390       TRUE
    F-J 5839.8562    1015.6357       TRUE
    G-H 1896.2614     683.8760       TRUE
    G-I 3359.4507     781.6237       TRUE
    G-J 5434.1093     997.5813       TRUE
    H-I 1463.1894     825.9834       TRUE
    H-J 3537.8479    1032.7058       TRUE
    I-J 2074.6585    1099.8776       TRUE

    However, this package only allows one categorical variable (e.g. I can't study prices grouped by color and by carat, as I can with kruskal.test), and I don't know anything about the pgirmess package: whether it is maintained, or whether it is tested. Can you recommend a package to run the Kruskal-Wallis test that returns details for every comparison? How would you handle the problem?
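
    A base-R option worth knowing for the pairwise detail (a sketch, and not identical in method to kruskalmc's joint critical-difference procedure): stats::pairwise.wilcox.test runs a rank-sum test for every pair of group levels with p-value adjustment for multiplicity.

    # Sketch: pairwise Wilcoxon rank-sum tests between color levels,
    # Holm-adjusted; assumes ggplot2's diamonds data as loaded above.
    library(ggplot2)
    data(diamonds)
    pairwise.wilcox.test(diamonds$price, diamonds$color, p.adjust.method = "holm")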

    Read the article

  • .NET test harness: what should it have?

    - by Conor
    Hi folks, We have a software house developing code for us on a project, a .NET web service (WCF), and we are also paying for a test harness to be built as a separate billable task on a daily rate. I have just joined the company and am reviewing what we are getting from the software house, and I wanted to know what you guys in industry thought about it.

    Basically, what we got was a WinForm that called the web service, with an input area (Web Service Request) to drop our XML into, a Submit button, and a response area for the result of the web response. That's it. Our internal BA has created all the XML request documents, so there was no logic put into the harness around this.

    Looking on the net for a definition of a test harness I got this: http://en.wikipedia.org/wiki/Test_harness. It states it should do these three things:

    - Automate the testing process.
    - Execute test suites of test cases.
    - Generate associated test reports.

    Clearly we have got none of this, apart from a partial "automate the testing process" via a WinForm. From my development background, I would have expected someone to produce a WinForm as a test harness five years ago; today they really should be using some sort of tooling around this. I explicitly told the software house I expected some sort of tooling (NUnit, NBUnit, SoapUI) so we could create a regression test pack for future use. [Didn't get it, but I asked for this after the requirements were signed off, as I wasn't employed then :)]

    Would someone be able to clarify whether my requirement for this is over-realistic? I know if I did this, I would use NUnit and TDD, and then reuse the test harness as a regression test pack in future. I am interested to see what the community thinks. Cheers
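
    For a concrete sense of what such tooling buys, here is a minimal NUnit-style sketch of a regression test around a request/response service call. The OrderServiceClient type, its SubmitOrder method, the request XML and the expected status fragment are all hypothetical placeholders, not the project's real API:

    using NUnit.Framework;

    // Hypothetical stand-in for the generated WCF proxy; a real harness
    // would use the project's actual service reference instead.
    public class OrderServiceClient
    {
        public string SubmitOrder(string requestXml)
        {
            // Placeholder echo so the sketch is runnable on its own.
            return "<OrderResponse><Status>Accepted</Status></OrderResponse>";
        }

        public void Close() { }
    }

    [TestFixture]
    public class OrderServiceRegressionTests
    {
        [Test]
        public void SubmitOrder_WithKnownGoodRequest_ReturnsAcceptedStatus()
        {
            var client = new OrderServiceClient();

            // In the real harness this XML would come from the BA-authored documents.
            string requestXml = "<Order><Id>42</Id></Order>";

            string responseXml = client.SubmitOrder(requestXml);

            StringAssert.Contains("<Status>Accepted</Status>", responseXml);
            client.Close();
        }
    }

    A suite of tests like this runs unattended, executes as a batch, and produces a report, which is exactly the three properties from the Wikipedia definition above.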

    Read the article

  • playframework auto-test with Jenkins CI: how to wait for completion?

    - by notbrain
    I am trying to set up Jenkins CI for a playframework.org application but am having trouble properly launching play after the auto-test command is run. The tests all run fine, but it seems as though my script is launching both play auto-test and play start --%ci at the same time. When the play start --%ci command runs, it gets a pid and everything, but it's not running. FILE: auto-test.sh, Jenkins runs this with Execute shell:

    #!/bin/bash
    # pwd is jenkins workspace dir
    # change into approot dir
    cd customer-portal;

    # kill any previous play launches
    if [ -e "server.pid" ]
    then
        kill `cat server.pid`;
        rm -rf server.pid;
    fi

    # drop and re-create the DB
    mysql --user=USER --password=PASS --host=HOSTNAME < ../setupdb.sql

    # auto-test the most recent build
    /usr/local/lib/play/play auto-test;

    # this is inadequate for waiting for auto-test to complete?
    # how to wait for actual process completion?
    # sleep 60;
    wait;

    # Conditional start based on tests:
    # launch normal on pass, test on fail
    if [ -e "./test-result/result.passed" ]
    then
        /usr/local/lib/play/play start --%ci;
        exit 0;
    else
        /usr/local/lib/play/play test;
        exit 1;
    fi
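
    Since play auto-test apparently detaches before the tests finish, one hedged workaround is to poll for the result marker files instead of relying on wait. This sketch reuses only paths already present in the script above; the 600-second cap is an arbitrary choice:

    # Sketch: block until auto-test writes its result marker, with a crude timeout.
    elapsed=0
    until [ -e "./test-result/result.passed" ] || [ -e "./test-result/result.failed" ]
    do
        sleep 5
        elapsed=$((elapsed + 5))
        [ "$elapsed" -ge 600 ] && break
    done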

    Read the article

  • Test Results window in VS2008 not showing results

    - by TimK
    I have an existing solution that has been working for a long time, containing around 600 tests in a couple of test projects. I recently moved to a new PC - it's Win7-x64, and I installed a fresh copy of VS2008. When I first opened the solution on the new machine, the Test List Editor was completely empty. Trying to create a new test list caused the editor to refresh, and now it shows my test lists, but they're acting funny. I can select tests in the lists, and run them, but the results window doesn't usually update automatically to show the results of the latest test. It has done this when running a single test a couple of times, but even that is not consistent. The only way I can view the results is by manually going to the Test Runs window and connecting to individual test runs. When I do that, the results show up in the results list, but I can't check them to re-run the failed tests - the check boxes are all disabled. I guess I should describe the way it used to work, in case that was unusual - I used to select some tests from the Test Lists window, tell it to run them, and the results window would clear itself, and then display the results from the current run. I could then check any tests that I wanted to re-run, and use the run/debug button in the results window to do so. Any ideas what's going on here?

    Read the article

  • Using TortoiseHg’s Repository explorer

    - by krebstar
    I posted this question on superuser.com but I wasn't sure if it was appropriate there. Anyway: I'm coming from a TortoiseSVN background and decided to give TortoiseHg a try. One thing I got really used to with TortoiseSVN was the SVN Repo-Explorer, which worked quite similarly to Windows Explorer. However, when I tried to use TortoiseHg's Repository Explorer, what I got was something else; it was more like TortoiseSVN's Show Log. It showed me what the recent commits were and what files were changed, and it even had nifty graphs. However, I'm still left wanting TortoiseSVN's Repo-Explorer. Does TortoiseHg have anything like this? How am I supposed to poke around the repository if I can only view changed files? Thanks, kreb

    Read the article

  • PostgreSQL table for storing automation test results

    - by Martin
    I am building an automation test suite which runs on multiple machines, all reporting their status to a PostgreSQL database. We will run a number of automated tests, for each of which we will store the following information:

    - test ID (a GUID)
    - test name
    - test description
    - status (running, done, waiting to be run)
    - progress (%)
    - start time of test
    - end time of test
    - test result
    - latest screenshot of the running test (updated every 30 seconds)

    The number of tests isn't huge (say a few thousand), and each machine (say, 50 of them) has a service which checks the database and figures out if it's time to start a new automated test on that machine. How should I organize my SQL table to store all the information? Is a single table with a column per attribute the way to go? If in the future I need to add attributes but want to keep compatibility with the old database format (i.e. I may not want to delete and create a new table with more columns), how should I proceed? Should the new attributes just be in a different table?

    I'm also thinking of replicating the database. In case of failure, I don't mind if the latest screenshots aren't backed up on the slave database. Should I just store the screenshots in their own table to simplify the replication? Thanks!
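
    A minimal sketch of the single-table layout being discussed, in PostgreSQL terms (all names and types are assumptions; the screenshot sits in its own table purely to illustrate the replication point raised above):

    -- Sketch: one row per test, columns mirroring the attribute list above.
    CREATE TABLE test_run (
        test_id     uuid PRIMARY KEY,
        name        text NOT NULL,
        description text,
        status      text NOT NULL CHECK (status IN ('waiting', 'running', 'done')),
        progress    integer CHECK (progress BETWEEN 0 AND 100),
        started_at  timestamptz,
        ended_at    timestamptz,
        result      text
    );

    -- Screenshots kept separate so a replica can skip or lag on this table.
    CREATE TABLE test_screenshot (
        test_id     uuid PRIMARY KEY REFERENCES test_run (test_id),
        captured_at timestamptz NOT NULL,
        image       bytea
    );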

    Read the article

  • Eclipse RCP File Explorer

    - by yournamehere
    Is there a good Eclipse RCP file explorer out there? I need a platform-independent file explorer which should be extensible through plugins. I only found File Arranger, which seems to be outdated. I'm asking because I want to develop such an explorer, but it wouldn't make sense if there is already a solution out there.

    Read the article

  • A new name for unit tests

    - by Will
    I never used to like unit testing. I always thought it increased the amount of work I had to do. Turns out that's only true in terms of the actual number of lines of code you write, and furthermore, this is completely offset by the increase in the number of lines of useful code that you can write in an hour with tests and test-driven development. Now I love unit tests, as they allow me to write useful code that quite often works first time! (knock on wood)

    I have found that people are reluctant to do unit tests or start a project with test-driven development if they are under strict timelines, or in an environment where others don't do it, so they don't. Kinda like a cultural refusal to even try.

    I think one of the most powerful things about unit testing is the confidence it gives you to undertake refactoring. It also gives newfound hope that I can give my code to someone else to refactor/improve, and if my unit tests still work, I can use the new version of the library they modified, pretty much without fear.

    It's this last aspect of unit testing that I think needs a new name. The unit test is more like a contract of what this code should do, now and in the future. When I hear the word testing, I think of mice in cages, with multiple experiments done on them to see the effectiveness of a compound. This is not what unit testing is; we're not trying out different code to see which is the most effective approach, we're defining what outputs we expect with what inputs. In the mice example, unit tests are more like the definitions of how the universe will work, as opposed to the experiments done on the mice.

    Am I on crack, or does anyone else see this refusal to do testing, and do they think it's for a similar reason that they don't want to do it? What reasons do you / others give for not testing? What do you think their motivations are in not unit testing? And as a new name for unit testing that might get over some of the objections, how about jContract? (A bit Java-centric, I know :), or Unit Contracts?

    Read the article

  • Generic unit test scheduling

    - by Raphink
    Hello, I'm (re)writing a program that does generic unit test scheduling. The current program is a mono-threaded Perl program, but I'm willing to modularize it and parallelize the tests. I'm also considering rewriting it in Python. Here is what I need to do. I have a list of tests, with the following attributes:

    - uri: a URI to test (could be HTTP/HTTPS/SSH/local)
    - depends: an associative array of tests/values that this test depends on
    - join: a list of DB joins to be added when selecting items to process in this test
    - depends_db: additional conditions to add to the DB request when selecting items to process in this test

    The program builds a dependency tree, beginning with the tests that have no dependencies. For each test:

    - a list of items is selected from the database using the conditions (results of depending tests, joins and depends_db)
    - the list of items is sent to the URI (using POST or stdin)
    - the result is retrieved as a YAML file listing the state and comments for the test for each tested item
    - the results are stored in the DB
    - the test returns, allowing depending tests to be performed

    Finally, the program generates reports (CSV, DB, graphviz) of the performed tests.

    The primary use of this program currently is to test a fleet of machines against services such as backup, DNS, etc. The tests can then be:

    - backup: hosted on the backup machine(s), called through HTTP, checks whether the machines' backups went well
    - DNS: hosted on the local machine, called via stdin, checks whether the machines' FQDNs have a valid DNS entry

    Does such a tool/module already exist? What would be the best implementation to achieve this (using Perl or Python)?
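
    As a rough sketch of how the test descriptors and the dependency ordering might look in Python (all names are illustrative assumptions, not an existing module; graphlib requires Python 3.9+):

    from dataclasses import dataclass, field
    from graphlib import TopologicalSorter  # Python 3.9+

    @dataclass
    class TestSpec:
        """One scheduled test, mirroring the attributes listed above."""
        name: str
        uri: str                                       # HTTP/HTTPS/SSH/local endpoint
        depends: dict = field(default_factory=dict)    # test name -> required result
        join: list = field(default_factory=list)       # extra DB joins
        depends_db: str = ""                           # extra SQL conditions

    def run_order(tests):
        """Yield test names so that every test runs after its dependencies."""
        graph = {t.name: set(t.depends) for t in tests}
        return TopologicalSorter(graph).static_order()

    # Usage sketch:
    tests = [
        TestSpec(name="backup", uri="https://backup.example/check",
                 depends={"dns": "OK"}),
        TestSpec(name="dns", uri="local:check_dns"),
    ]
    print(list(run_order(tests)))  # ['dns', 'backup']

    Independent tests appear side by side in the ordering, which is what makes parallelizing them straightforward.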

    Read the article

  • InstallShield 2009: Remove a dll (being used by explorer.exe for context menu) during uninstall with

    - by Samir
    Install Shield Premier 2009: Basic MSI project. I have a DLL that is supposed to be removed during uninstall, but this DLL is being used by explorer.exe: it is used for generating Windows Explorer context menus with their icons (the same kind of menu items and icons we see when we right-click on any file/folder and see the WinRAR items and icons). Manually, I have to unregister the DLL, close explorer.exe from Task Manager, and run explorer.exe again; only then can I delete the DLL. So, from InstallShield, how can I delete this DLL without any problem (as we see while uninstalling WinRAR or other software: no message boxes are shown, and the related context menu icons are also removed)?
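
    For reference, the manual procedure described above maps roughly onto these commands (a sketch only; the DLL path is a hypothetical placeholder, and forcibly restarting explorer.exe from an uninstall sequence has its own caveats):

    rem Sketch of the manual steps: unregister the shell extension,
    rem restart Explorer so it releases the DLL, then delete it.
    rem "MyShellExt.dll" is a placeholder name.
    regsvr32 /u /s "C:\Program Files\MyApp\MyShellExt.dll"
    taskkill /f /im explorer.exe
    start explorer.exe
    del "C:\Program Files\MyApp\MyShellExt.dll"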

    Read the article

  • Android instrumentation test suite

    - by siri
    Hi, I have written two test cases in the package com.app.myapp.test. When I try to run them, not both get executed: only one test case runs, then it stops. I have written the following test suite, AllTests.java, in the same package:

    public class AllTests extends TestSuite {
        public static Test suite() {
            return new TestSuiteBuilder(AllTests.class)
                    .includePackages("./src/com.ni.mypaint.test", "./src/com.ni.mpaint.test")
                    .build();
            /* .includeAllPackagesUnderHere()
               .build(); */
        }
    }

    Is the code and location for this test suite correct?
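
    For comparison, a hedged sketch of the usual shape of such a suite: android.test.suitebuilder.TestSuiteBuilder takes Java package names rather than ./src/... filesystem paths (the package name below is the one from the question; whether it matches the project is an assumption):

    import android.test.suitebuilder.TestSuiteBuilder;
    import junit.framework.Test;
    import junit.framework.TestSuite;

    public class AllTests extends TestSuite {
        public static Test suite() {
            // Package name, not a filesystem path.
            return new TestSuiteBuilder(AllTests.class)
                    .includePackages("com.app.myapp.test")
                    .build();
        }
    }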

    Read the article

  • How to locate a file in Windows Explorer

    - by Yigang Wu
    I have an application that lists all the music files on the user's machine; an "Explorer" button is used to quickly open Windows Explorer and highlight the file there. I tried ShellExecute, but it doesn't work: the API launches the associated application instead. Can any Windows API do that? Thanks in advance.
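
    One well-known route (a sketch; the path shown is a placeholder): Explorer itself accepts a /select switch that opens a window with the given file highlighted, so the same ShellExecute-style launch works if it targets explorer.exe rather than the file:

    using System.Diagnostics;

    class ExplorerLocator
    {
        static void Main()
        {
            // Placeholder path; "/select," tells Explorer to open the parent
            // folder with this file highlighted rather than launching it.
            string path = @"C:\Music\song.mp3";
            Process.Start("explorer.exe", "/select,\"" + path + "\"");
        }
    }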

    Read the article

  • Problems with CloseMainWindow() to close a Windows Explorer window

    - by MorgoZ
    Hello! I'm facing a problem when trying to close a Windows Explorer (not Internet Explorer) window from another application using the Process.CloseMainWindow() method: it doesn't close the Explorer window, it tries to shut down Windows (the operating system) itself; by the way, this is Windows XP. The code is as follows:

    [DllImport("user32.dll")]
    static extern int GetForegroundWindow();

    [DllImport("user32.dll")]
    private static extern UInt32 GetWindowThreadProcessId(Int32 hWnd, out Int32 lpdwProcessId);

    public String[] exeCommand()
    {
        try
        {
            // Get App
            Int32 hwnd = 0;
            hwnd = GetForegroundWindow();
            Process actualProcess = Process.GetProcessById(GetWindowProcessID(hwnd));

            // Close App
            if (!actualProcess.CloseMainWindow())
                actualProcess.Kill();
        }
        catch
        {
            throw;
        }
        return null;
    }

    Suppose that the actualProcess is explorer.exe. Any help will be appreciated!! Salutes!
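
    As background for why CloseMainWindow misbehaves here: explorer.exe's main window is the shell itself (taskbar/desktop), so closing it reads as a shutdown of the shell rather than of one folder window. A hedged alternative sketch using the shell's own COM automation (requires a COM reference to SHDocVw, "Microsoft Internet Controls"; the folder-matching logic is illustrative):

    using SHDocVw;

    class ExplorerWindowCloser
    {
        static void CloseFolderWindow(string pathFragment)
        {
            var shellWindows = new ShellWindows();
            foreach (InternetExplorer window in shellWindows)
            {
                // Explorer folder windows report their location as a file:// URL.
                if (window.LocationURL != null &&
                    window.LocationURL.Contains(pathFragment))
                {
                    window.Quit(); // closes just this window, not the shell
                }
            }
        }
    }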

    Read the article
