Search Results

Search found 5723 results on 229 pages for 'turing machines'.

  • Installing Fabric on Windows (ImportError: No module named readline)

    - by Jon
    I'm trying to use the Fabric 0.1.1 deploy tool (http://docs.fabfile.org/) on Windows and we're running into an issue with the readline module. I've been through various threads but can't seem to solve the issue. It's important because we can't deploy applications from Windows based machines. C:\Documents and Settings\dev\Desktop\deploy>fab Traceback (most recent call last): File "C:\python\Scripts\fab-script.py", line 8, in <module> load_entry_point('fabric==0.1.1', 'console_scripts', 'fab')() File "c:\python\lib\site-packages\setuptools-0.6c9-py2.6.egg\pkg_resources.py" , line 277, in load_entry_point File "c:\python\lib\site-packages\setuptools-0.6c9-py2.6.egg\pkg_resources.py" , line 2180, in load_entry_point File "c:\python\lib\site-packages\setuptools-0.6c9-py2.6.egg\pkg_resources.py" , line 1913, in load File "build\bdist.win32\egg\fabric.py", line 25, in <module> **ImportError: No module named readline** Installing the module results in: **easy_install readline** Searching for readline Reading http://pypi.python.org/simple/readline/ Reading http://www.python.org/ Best match: readline 2.6.4 Downloading http://pypi.python.org/packages/source/r/readline/readline-2.6.4.tar .gz#md5=7568e8b78f383443ba57c9afec6f4285 Processing readline-2.6.4.tar.gz Running readline-2.6.4\setup.py -q bdist_egg --dist-dir c:\docume~1\ji81b9~1.che \locals~1\temp\easy_install-pzkz1a\readline-2.6.4\egg-dist-tmp-szs2ps Traceback (most recent call last): File "C:\python\Scripts\easy_install-script.py", line 8, in <module> load_entry_point('setuptools==0.6c9', 'console_scripts', 'easy_install')() File "c:\python\lib\site-packages\setuptools-0.6c9-py2.6.egg\setuptools\comman d\easy_install.py", line 1671, in main File "c:\python\lib\site-packages\setuptools-0.6c9-py2.6.egg\setuptools\comman d\easy_install.py", line 1659, in with_ei_usage File "c:\python\lib\site-packages\setuptools-0.6c9-py2.6.egg\setuptools\comman d\easy_install.py", line 1675, in <lambda> File "c:\python\lib\distutils\core.py", line 152, in setup dist.run_commands() File "c:\python\lib\distutils\dist.py", line 975, in run_commands self.run_command(cmd) File "c:\python\lib\distutils\dist.py", line 995, in run_command cmd_obj.run() File "c:\python\lib\site-packages\setuptools-0.6c9-py2.6.egg\setuptools\comman d\easy_install.py", line 211, in run File "c:\python\lib\site-packages\setuptools-0.6c9-py2.6.egg\setuptools\comman d\easy_install.py", line 446, in easy_install File "c:\python\lib\site-packages\setuptools-0.6c9-py2.6.egg\setuptools\comman d\easy_install.py", line 476, in install_item File "c:\python\lib\site-packages\setuptools-0.6c9-py2.6.egg\setuptools\comman d\easy_install.py", line 655, in install_eggs File "c:\python\lib\site-packages\setuptools-0.6c9-py2.6.egg\setuptools\comman d\easy_install.py", line 930, in build_and_install File "c:\python\lib\site-packages\setuptools-0.6c9-py2.6.egg\setuptools\comman d\easy_install.py", line 919, in run_setup File "c:\python\lib\site-packages\setuptools-0.6c9-py2.6.egg\setuptools\sandbo x.py", line 27, in run_setup File "c:\python\lib\site-packages\setuptools-0.6c9-py2.6.egg\setuptools\sandbo x.py", line 63, in run File "c:\python\lib\site-packages\setuptools-0.6c9-py2.6.egg\setuptools\sandbo x.py", line 29, in <lambda> File "setup.py", line 93, in <module> AttributeError: 'module' object has no attribute 'symlink' Has anybody solved this issue or can anybody suggest a workaround?

  • How can I reliably check client identity whilst making DCOM calls to a C# .Net 3.5 Server?

    - by pionium
    Hi, I have an old Win32 C++ DCOM Server that I am rewriting to use C# .Net 3.5. The client applications sit on remote XP machines and are also written in C++. These clients must remain unchanged, hence I must implement the interfaces on new .Net objects. This has been done, and is working successfully regarding the implementation of the interfaces, and all of the calls are correctly being made from the old clients to the new .Net objects. However, I'm having problems obtaining the identity of the calling user from the DCOM Client. In order to try to identify the user who instigated the DCOM call, I have the following code on the server... [DllImport("ole32.dll")] static extern int CoImpersonateClient(); [DllImport("ole32.dll")] static extern int CoRevertToSelf(); private string CallingUser { get { string sCallingUser = null; if (CoImpersonateClient() == 0) { WindowsPrincipal wp = System.Threading.Thread.CurrentPrincipal as WindowsPrincipal; if (wp != null) { WindowsIdentity wi = wp.Identity as WindowsIdentity; if (wi != null && !string.IsNullOrEmpty(wi.Name)) sCallingUser = wi.Name; } if (CoRevertToSelf() != 0) ReportWin32Error("CoRevertToSelf"); } else ReportWin32Error("CoImpersonateClient"); return sCallingUser; } } private static void ReportWin32Error(string sFailingCall) { Win32Exception ex = new Win32Exception(); Logger.Write("Call to " + sFailingCall + " FAILED: " + ex.Message); } When I get the CallingUser property, the value returned the first few times is correct and the correct user name is identified, however, after 3 or 4 different users have successfully made calls (and it varies, so I can't be more specific), further users seem to be identified as users who had made earlier calls. What I have noticed is that the first few users have their DCOM calls handled on their own thread (ie all calls from a particular client are handled by a single unique thread), and then subsequent users are being handled by the same threads as the earlier users, and after the call to CoImpersonateClient(), the CurrentPrincipal matches that of the initial user of that thread. To Illustrate: User Tom makes DCOM calls which are handled by thread 1 (CurrentPrincipal correctly identifies Tom) User Dick makes DCOM calls which are handled by thread 2 (CurrentPrincipal correctly identifies Dick) User Harry makes DCOM calls which are handled by thread 3 (CurrentPrincipal correctly identifies Harry) User Bob makes DCOM calls which are handled by thread 3 (CurrentPrincipal incorrectly identifies him as Harry) As you can see in this illustration, calls from clients Harry and Bob are being handled on thread 3, and the server is identifying the calling client as Harry. Is there something that I am doing wrong? Are there any caveats or restrictions on using Impersonations in this way? Is there a better or different way that I can RELIABLY achieve what I am trying to do? All help would be greatly appreciated. Regards Andrew
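
    A minimal sketch of one thing worth trying (an assumption, not a verified fix): take the name from the thread's access token with WindowsIdentity.GetCurrent() inside the impersonation window, instead of Thread.CurrentPrincipal, since CurrentPrincipal is assigned per managed thread and survives when the COM runtime reuses that thread for a different caller.

        // Needs System.Runtime.InteropServices and System.Security.Principal.
        [DllImport("ole32.dll")] static extern int CoImpersonateClient();
        [DllImport("ole32.dll")] static extern int CoRevertToSelf();

        // Returns the calling account name, or null if it cannot be determined.
        private string GetCallingUser()
        {
            string callingUser = null;
            if (CoImpersonateClient() == 0)
            {
                try
                {
                    // GetCurrent(true) returns null when the thread is NOT impersonating,
                    // so a stale identity is never picked up from a reused pool thread.
                    WindowsIdentity wi = WindowsIdentity.GetCurrent(true);
                    if (wi != null && !string.IsNullOrEmpty(wi.Name))
                        callingUser = wi.Name;
                }
                finally
                {
                    if (CoRevertToSelf() != 0)
                        ReportWin32Error("CoRevertToSelf");
                }
            }
            else
            {
                ReportWin32Error("CoImpersonateClient");
            }
            return callingUser;
        }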

  • Managed (.NET) library with HTML Tidy-like functionality?

    - by Eamon Nerbonne
    Does anybody know of an html cleaner for .NET that can parse html and (for instance) convert it to a more machine friendly format such as xhtml? I've tried the HTML Agility Pack, but that fails to correctly parse even fairly simple examples. To give an example of html that should be parsed correctly: <html><body> <ul><li>TestElem1 <li>TestElem2 <li>TestElem3 List: <ul><li>Nested1 <li>Nested2</li> <li>Nested3 </ul> <li>TestElem4 </ul> <p>paragraph 1 <p>paragraph 2 <p>paragraph 3 </body></html> li tags don't need to be closed (see spec), and neither do P tags. In other words, the above sample should be parsed as: <html><body> <ul><li>TestElem1</li> <li>TestElem2</li> <li>TestElem3 List: <ul><li>Nested1</li> <li>Nested2</li> <li>Nested3</li> </ul></li> <li>TestElem4</li> </ul> <p>paragraph 1</p> <p>paragraph 2</p> <p>paragraph 3</p> </body></html> Since the aim is to use the library on various machines, it's a big disadvantage to need to fall back to native code (such as a wrapper around html tidy) which would require extra deployment hassle and sacrifice platform independance. Any suggestions? To recap, I'm looking for: An html cleaner ala HTML tidy Must be able to deal with real world html, not just xhtml, at the very least correctly reading valid html 4 Must be able to convert to a more easily processable xml format Should be a purely managed app.
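
    Not an answer from the thread, only a sketch of one managed-only option, assuming the open-source SgmlReader library is acceptable as a dependency: it reads HTML against the HTML DTD, so implicitly closed elements such as <li> and <p> come out closed in the resulting XML.

        using System.IO;
        using System.Xml;
        using Sgml;   // SgmlReader, a purely managed HTML-to-XML reader (assumed dependency)

        static XmlDocument HtmlToXml(string html)
        {
            SgmlReader reader = new SgmlReader();
            reader.DocType = "HTML";                   // apply the built-in HTML DTD rules
            reader.CaseFolding = CaseFolding.ToLower;  // normalise element names
            reader.InputStream = new StringReader(html);

            // Loading through the reader closes the implicitly-closed elements,
            // so the document can then be processed as ordinary XML/XHTML.
            XmlDocument doc = new XmlDocument();
            doc.PreserveWhitespace = true;
            doc.Load(reader);
            return doc;
        }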

  • Get local groups, not the primary group, for a domain user

    - by user175084
    i have a code to get the groups a user belongs to. try { DirectoryEntry adRoot = new DirectoryEntry(string.Format("WinNT://{0}", Environment.UserDomainName)); DirectoryEntry user = adRoot.Children.Find(completeUserName, "User"); object obGroups = user.Invoke("Groups"); foreach (object ob in (IEnumerable)obGroups) { // Create object for each group. DirectoryEntry obGpEntry = new DirectoryEntry(ob); listOfMyWindowsGroups.Add(obGpEntry.Name); } return true; } catch (Exception ex) { new GUIUtility().LogMessageToFile("Error in getting User MachineGroups = " + ex); return false; } the above code works fine when i have to find the groups of a local user but for a domain user it returns a value "Domain User" which is kind of wierd as it is a part of 2 local groups. Please can some1 help in solving this mystery. thanks Research I did some finding and got that i am being returned the primary group of the domain user called "Domain User" group but what i actually want is the groups of the local machines the domain user is a part of... i cannot get that.. any suggestions another code using LDAP string domain = Environment.UserDomainName; DirectoryEntry DE = new DirectoryEntry("LDAP://" + domain, null, null, AuthenticationTypes.Secure); DirectorySearcher search = new DirectorySearcher(); search.SearchRoot = DE; search.Filter = "(SAMAccountName=" + completeUserName + ")"; //Searches active directory for the login name search.PropertiesToLoad.Add("displayName"); // Once found, get a list of Groups try { SearchResult result = search.FindOne(); // Grab the records and assign them to result if (result != null) { DirectoryEntry theUser = result.GetDirectoryEntry(); theUser.RefreshCache(new string[] { "tokenGroups" }); foreach (byte[] resultBytes in theUser.Properties["tokenGroups"]) { System.Security.Principal.SecurityIdentifier mySID = new System.Security.Principal.SecurityIdentifier(resultBytes, 0); DirectorySearcher sidSearcher = new DirectorySearcher(); sidSearcher.SearchRoot = DE; sidSearcher.Filter = "(objectSid=" + mySID.Value + ")"; sidSearcher.PropertiesToLoad.Add("distinguishedName"); SearchResult sidResult = sidSearcher.FindOne(); if (sidResult != null) { listOfMyWindowsGroups.Add((string)sidResult.Properties["distinguishedName"][0]); } } } else { new GUIUtility().LogMessageToFile("no user found"); } return true; } catch (Exception ex) { new GUIUtility().LogMessageToFile("Error obtaining group names: " + ex.Message + " Please contact your administrator."); // If an error occurs report it to the user. return false; } this works too but i get the same result "Domain Users" . Please can some1 tell me how to get the local machine groups...????
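
    A workaround sketch (untested, and the WinNT path format for the domain account is an assumption): turn the query around and enumerate the local machine's groups, asking each one whether the domain user is a member, instead of asking the user object for its groups.

        using System;
        using System.Collections.Generic;
        using System.DirectoryServices;

        static List<string> GetLocalGroupsFor(string domainName, string userName)
        {
            List<string> localGroups = new List<string>();
            // ADsPath of the domain account, e.g. WinNT://MYDOMAIN/jsmith
            string userPath = string.Format("WinNT://{0}/{1}", domainName, userName);

            using (DirectoryEntry machine = new DirectoryEntry(
                string.Format("WinNT://{0},computer", Environment.MachineName)))
            {
                foreach (DirectoryEntry child in machine.Children)
                {
                    if (child.SchemaClassName != "Group")
                        continue;

                    // IADsGroup.IsMember via late binding
                    bool isMember = (bool)child.Invoke("IsMember", new object[] { userPath });
                    if (isMember)
                        localGroups.Add(child.Name);
                }
            }
            return localGroups;
        }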

  • iPhone SDK 3/4 App will not run on an iPhone 2.x device even with deployment target set to 2.0!

    - by MarqueIV
    Ok... I know about the difference between the base/active SDKs and the deployment target. I have my base SDK set at 4.0 and the deployment target set at 2.0. I am not using any APIs post 2.x, conditional or otherwise. Since I can't debug on a 2.x device, after building it, I use the iPhone Configuration Utility to install the app on the device, which it does just fine. Problem is, it doesn't run! I just get a blank screen. The main window never comes up! Now before you ask... I had this same problem with the iPhone SDK 3.x. I upgraded to the 4.x hoping it would be solved. It wasn't. Yes the provisioning profile is installed. (Couldn't install the app if it wasn't.) This same compiled app works fine on 3.x devices. Same with 4.x devices. Just not 2.x devices. Again, no I am not using any post-2.x SDKs. To prove this I created a brand-new, window-based app from the 'New Project' dialog and the only changes I made was the background color of the window (to prove the XIB loaded) and I set the deployment target to 2.0 (It's still compiled against the 4.x SDK though.) Again, it runs fine on 3.x or 4.x devices, but just a black, blank screen on 2.x devices. I've tried this on three separate 2.x devices included one freshly restored. I've used three separate dev machines (MacBook Pro with the 3.x SDK, MacBook Pro with the 4.x SDK and a Mac Pro with the 3.x SDK.) Same result every time. I am stumped. The fact that even an unmodified project doesn't run really has me confused. Could it be the XIB file? Did they change the format from 2.x to something newer in the 3.x SDK? If so, how do I set it back to 2.x. (Again, this is just a complete guess.) But I'm really stumped! Mark

  • Autoloading Development or Production configs (best practices)

    - by Xeoncross
    When programming sites you usually have one set of config files for the development environment and another set for the production server (or one file with both settings). I am assuming all projects should be handled by version control like git or svn. Manual file transfers (like FTP) is wrong on so many levels. How you enable/disable the correct settings (so that your system knows which ones to use) is a problem for me. Each system I work on just kind of jimmy-rigs a solution. Below are the 3 methods I know of and I am hoping that someone can submit a more elegant solutions. 1) File Based The system loads a folder structure based on the URL requested. /site.com /site.fakeTLD /lib index.php For example, if the url is http://site.com then the system loads the production config files located in the site.com folder. However, if I'm working on the site locally I visit http://site.fakeTLD to work on the local copy of the site. To setup this I edit my hosts file and add site.fakeTLD to point to my own computer (127.0.0.1/localhost) and then create a vhost in apache. So now I can work on the codebase locally and then push to the server without any trouble. The problem is that this is susceptible to a "host" injection attack. So someone loading site.com could set the host to site.fakeTLD and then the system would load my development config files instead of production. 2) Config Based The config files contain on section for development - and one for production. The problem is that each time you go to push your changes to the repo you have to edit the file to specify which set of config options should be used. $use = 'production'; //'development'; This leaves the repo open to human error should one of the developers forget to enable the right setting. 3) File System Check Based All the development machines have an extra empty file called "development.txt" or something. Each time the system loads it checks for this file - if found then it knows it is in development mode - if missing then it knows it is in production mode. Since the file is NEVER ADDED to the repo then it will never be pushed (and checked out) on the production machine. However, this just doesn't feel right and causes a slight slow down since all filesystem checks are slow. Is there anyway that the server can auto-detect wither to use the development or production configs?

  • Oracle JDBC connection exception on Solaris but not Windows?

    - by lupefiasco
    I have some Java code that connects to an Oracle database using DriverManager.getConnection(). It works just fine on my Windows XP machine. However, when running the same code on a Solaris machine, I get the following exception. Both machines can reach the database machine on the network. I have included the Oracle trace logs. Mar 23, 2010 12:12:33 PM org.apache.commons.configuration.ConfigurationUtils locate FINE: ConfigurationUtils.locate(): base is /users/theUser/ADCompare, name is props.txt Mar 23, 2010 12:12:33 PM org.apache.commons.configuration.ConfigurationUtils locate FINE: Loading configuration from the path /users/theUser/ADCompare/props.txt Mar 23, 2010 12:12:33 PM oracle.jdbc.driver.OracleDriver connect FINE: OracleDriver.connect(url=jdbc:oracle:thin:@//theServer:1521/theService, info) Mar 23, 2010 12:12:33 PM oracle.jdbc.driver.OracleDriver connect FINER: OracleDriver.connect() walletLocation:(null) Mar 23, 2010 12:12:33 PM oracle.jdbc.driver.OracleDriver parseUrl FINER: OracleDriver.parseUrl(url=jdbc:oracle:thin:@//theServer:1521/theService) Mar 23, 2010 12:12:33 PM oracle.jdbc.driver.OracleDriver parseUrl FINER: sub_sub_index=12, end=46, next_colon_index=16, user=17, slash=18, at_sign=17 Mar 23, 2010 12:12:33 PM oracle.jdbc.driver.OracleDriver parseUrl FINER: OracleDriver.parseUrl(url):return Mar 23, 2010 12:12:33 PM oracle.jdbc.driver.OracleDriver connect FINER: user=theUser, password=******, database=//theServer:1521/theService, protocol=thin, prefetch=null, batch=null, accumulate batch result =true, remarks=null, synonyms=null Mar 23, 2010 12:12:33 PM oracle.jdbc.driver.PhysicalConnection <init> FINE: PhysicalConnection.PhysicalConnection(ur="jdbc:oracle:thin:@//theServer:1521/theService", us="theUser", p="******", db="//theServer:1521/theService", info) Mar 23, 2010 12:12:33 PM oracle.jdbc.driver.PhysicalConnection <init> FINEST: PhysicalConnection.PhysicalConnection() : connectionProperties={user=theUser, password=******, protocol=thin} Mar 23, 2010 12:12:33 PM oracle.jdbc.driver.PhysicalConnection initialize FINE: PhysicalConnection.initialize(ur="jdbc:oracle:thin:@//theServer:1521/theService", us="theUser", access) Mar 23, 2010 12:12:33 PM oracle.jdbc.driver.PhysicalConnection initialize FINE: PhysicalConnection.initialize(ur, us):return Mar 23, 2010 12:12:33 PM oracle.jdbc.driver.PhysicalConnection needLine FINE: PhysicalConnection.needLine()--no return java.lang.ArrayIndexOutOfBoundsException: 31 at oracle.net.nl.NVTokens.parseTokens(Unknown Source) at oracle.net.nl.NVFactory.createNVPair(Unknown Source) at oracle.net.nl.NLParamParser.addNLPListElement(Unknown Source) at oracle.net.nl.NLParamParser.initializeNlpa(Unknown Source) at oracle.net.nl.NLParamParser.<init>(Unknown Source) at oracle.net.resolver.TNSNamesNamingAdapter.loadFile(Unknown Source) at oracle.net.resolver.TNSNamesNamingAdapter.checkAndReload(Unknown Source) at oracle.net.resolver.TNSNamesNamingAdapter.resolve(Unknown Source) at oracle.net.resolver.NameResolver.resolveName(Unknown Source) at oracle.net.resolver.AddrResolution.resolveAndExecute(Unknown Source) at oracle.net.ns.NSProtocol.establishConnection(Unknown Source) at oracle.net.ns.NSProtocol.connect(Unknown Source) at oracle.jdbc.driver.T4CConnection.connect(T4CConnection.java:1037) at oracle.jdbc.driver.T4CConnection.logon(T4CConnection.java:282) at oracle.jdbc.driver.PhysicalConnection.<init>(PhysicalConnection.java:468) at oracle.jdbc.driver.T4CConnection.<init>(T4CConnection.java:165) at 
oracle.jdbc.driver.T4CDriverExtension.getConnection(T4CDriverExtension.java:35) at oracle.jdbc.driver.OracleDriver.connect(OracleDriver.java:839) at java.sql.DriverManager.getConnection(DriverManager.java:582) at java.sql.DriverManager.getConnection(DriverManager.java:185) The above exception is also thrown if I use OracleDataSource instead of the generic DriverManager.getConnection(). Any ideas on why the behavior is different in the different environments?

  • How to configure maximum number of transport channels in WCF using basicHttpBinding?

    - by Hemant
    Consider following code which is essentially a WCF host: [ServiceContract (Namespace = "http://www.mightycalc.com")] interface ICalculator { [OperationContract] int Add (int aNum1, int aNum2); } [ServiceBehavior (InstanceContextMode = InstanceContextMode.PerCall)] class Calculator: ICalculator { public int Add (int aNum1, int aNum2) { Thread.Sleep (2000); //Simulate a lengthy operation return aNum1 + aNum2; } } class Program { static void Main (string[] args) { try { using (var serviceHost = new ServiceHost (typeof (Calculator))) { var httpBinding = new BasicHttpBinding (BasicHttpSecurityMode.None); serviceHost.AddServiceEndpoint (typeof (ICalculator), httpBinding, "http://172.16.9.191:2221/calc"); serviceHost.Open (); Console.WriteLine ("Service is running. ENJOY!!!"); Console.WriteLine ("Type 'stop' and hit enter to stop the service."); Console.ReadLine (); if (serviceHost.State == CommunicationState.Opened) serviceHost.Close (); } } catch (Exception e) { Console.WriteLine (e); Console.ReadLine (); } } } Also the WCF client program is: class Program { static int COUNT = 0; static Timer timer = null; static void Main (string[] args) { var threads = new Thread[10]; for (int i = 0; i < threads.Length; i++) { threads[i] = new Thread (Calculate); threads[i].Start (null); } timer = new Timer (o => Console.WriteLine ("Count: {0}", COUNT), null, 1000, 1000); Console.ReadLine (); timer.Dispose (); } static void Calculate (object state) { var c = new CalculatorClient ("BasicHttpBinding_ICalculator"); c.Open (); while (true) { try { var sum = c.Add (2, 3); Interlocked.Increment (ref COUNT); } catch (Exception ex) { Console.WriteLine ("Error on thread {0}: {1}", Thread.CurrentThread.Name, ex.GetType ()); break; } } c.Close (); } } Basically, I am creating 10 proxy clients and then repeatedly calling Add service method. Now if I run both applications and observe opened TCP connections using netstat, I find that: If both client and server are running on same machine, number of tcp connections are equal to number of proxy objects. It means all requests are being served in parallel. Which is good. If I run server on a separate machine, I observed that maximum 2 TCP connections are opened regardless of the number of proxy objects I create. Only 2 requests run in parallel. It hurts the processing speed badly. If I switch to net.tcp binding, everything works fine (a separate TCP connection for each proxy object even if they are running on different machines). I am very confused and unable to make the basicHttpBinding use more TCP connections. I know it is a long question, but please help!
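
    One likely explanation, offered as an assumption rather than a confirmed diagnosis: basicHttpBinding rides on System.Net's HTTP stack, which allows only two concurrent connections per remote host by default, which is exactly the cap observed here (and why everything running on one machine, or over net.tcp, behaves differently). Raising the limit on the client before any proxy opens a connection is a small change to test:

        using System.Net;

        static class ConnectionLimitSetup
        {
            // Call once, before the first CalculatorClient proxy opens a connection.
            public static void Apply()
            {
                // System.Net permits only 2 concurrent HTTP connections per remote
                // host by default; 20 here is an arbitrary example value.
                ServicePointManager.DefaultConnectionLimit = 20;
            }
        }

    The app.config equivalent on the client is an <add address="*" maxconnection="20"/> entry under <system.net>/<connectionManagement>.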

  • doofus ASP.Net master page scrollbar question

    - by Stephen Falken
    Like happens to all of us sometimes, I inherited some crappy code I have to fix. We need to center our pages on widescreen machines, so we have a master page layout div like so: .MasterLayout { width:1100px; height: 100%; position:absolute; left:50%; margin-left:-550px; vertical-align:top; } I removed most of the detailed attributes for readability here, but here's how the table for the master page is laid out: <table border="0" cellpadding="0" cellspacing="0" style="width: 100%;"> <tr> <td style="width: 100%" align="center" colspan="2"> </td> </tr> <tr> <td colspan="2" style="height: 20px; background-color: #333;"> <asp:SiteMapPath/> </td> </tr> <tr> <td style="width: 86px; height: 650px; background-color: #B5C7DE; margin: 6px;" valign="top"> <asp:Menu /> <asp:SiteMapDataSource /> </td> <td style="background-color:#ffffff; margin:5px; width:1000px;" valign="top"> <asp:contentplaceholder id="ContentPlaceHolder1" runat="server"/> </td> </tr> </table> When resizing the browser window, the horizontal scrollbar only reaches as far as the left edge of the <asp:contentplaceholder/> control, and the <asp:menu/> that's in the 86px wide <td> is hidden. How can I fix this problem? THANKS IN ADVANCE

  • Connection Refused running multiple environments on Selenium Grid 1.04 via Ubuntu 9.04

    - by ReadyWater
    Hello, I'm writing a selenium grid test suite which is going to be run on a series of different machines. I wrote most of it on my macbook but have recently transfered it over to my work machine, which is running ubuntu 9.04. That's actually my first experience with a linux machine, so I may be missing something very simple (I have disabled the firewall though). I haven't been able to get the multienvironment thing working at all, and I've been trying and manual reviewing for a while. Any recommendations and help would be greatly, greatly appreciated! The error I'm getting when I run the test is: [java] FAILED CONFIGURATION: @BeforeMethod startFirstEnvironment("localhost", 4444, "*safari", "http://remoteURL:8080/tutor") [java] java.lang.RuntimeException: Could not start Selenium session: ERROR: Connection refused I thought it might be the mac refusing the connection, but using wireshark I determined that no connection attempt was made on the mac . Here's the code for setting up the session, which is where it seems to be dying @BeforeMethod(groups = {"default", "example"}, alwaysRun = true) @Parameters({"seleniumHost", "seleniumPort", "firstEnvironment", "webSite"}) protected void startFirstEnvironment(String seleniumHost, int seleniumPort, String firstEnvironment, String webSite) throws Exception { try{ startSeleniumSession(seleniumHost, seleniumPort, firstEnvironment, webSite); session().setTimeout(TIMEOUT); } finally { closeSeleniumSession(); } } @BeforeMethod(groups = {"default", "example"}, alwaysRun = true) @Parameters({"seleniumHost", "seleniumPort", "secondEnvironment", "webSite"}) protected void startSecondEnvironment(String seleniumHost, int seleniumPort, String secondEnvironment, String webSite) throws Exception { try{ startSeleniumSession(seleniumHost, seleniumPort, secondEnvironment, webSite); session().setTimeout(TIMEOUT); } finally { closeSeleniumSession(); } } and the accompanying build script used to run the test <target name="runMulti" depends="compile" description="Run Selenium tests in parallel (20 threads)"> <echo>${seleniumHost}</echo> <java classpathref="runtime.classpath" classname="org.testng.TestNG" failonerror="true"> <sysproperty key="java.security.policy" file="${rootdir}/lib/testng.policy"/> <sysproperty key="webSite" value="${webSite}" /> <sysproperty key="seleniumHost" value="${seleniumHost}" /> <sysproperty key="seleniumPort" value="${seleniumPort}" /> <sysproperty key="firstEnvironment" value="${firstEnvironment}" /> <sysproperty key="secondEnvironment" value="${secondEnvironment}" /> <arg value="-d" /> <arg value="${basedir}/target/reports" /> <arg value="-suitename" /> <arg value="Selenium Grid Java Sample Test Suite" /> <arg value="-parallel"/> <arg value="methods"/> <arg value="-threadcount"/> <arg value="15"/> <arg value="testng.xml"/> </java>

  • LSP packet modification

    - by kellogs
    Hello, anybody care to share some insights on how to use LSP for packet modifying ? I am using the non IFS subtype and I can see how (pseudo?) packets first enter WSPRecv. But how do I modify them ? My inquiry is about one single HTTP response that causes WSPRecv to be called 3 times :((. I need to modify several parts of this response, but since it comes in 3 slices, it is pretty hard to modify it accordingly. And, maybe on other machines or under different conditions (such as high traffic) there would only be one sole WSPRecv call, or maybe 10 calls. What is the best way to work arround this (please no NDIS :D), and how to properly change the buffer (lpBuffers-buf) by increasing it ? int WSPAPI WSPRecv( SOCKET s, LPWSABUF lpBuffers, DWORD dwBufferCount, LPDWORD lpNumberOfBytesRecvd, LPDWORD lpFlags, LPWSAOVERLAPPED lpOverlapped, LPWSAOVERLAPPED_COMPLETION_ROUTINE lpCompletionRoutine, LPWSATHREADID lpThreadId, LPINT lpErrno ) { LPWSAOVERLAPPEDPLUS ProviderOverlapped = NULL; SOCK_INFO *SocketContext = NULL; int ret = SOCKET_ERROR; *lpErrno = NO_ERROR; // // Find our provider socket corresponding to this one // SocketContext = FindAndRefSocketContext(s, lpErrno); if ( NULL == SocketContext ) { dbgprint( "WSPRecv: FindAndRefSocketContext failed!" ); goto cleanup; } // // Check for overlapped I/O // if ( NULL != lpOverlapped ) { /*bla bla .. not interesting in my case*/ } else { ASSERT( SocketContext->Provider->NextProcTable.lpWSPRecv ); SetBlockingProvider(SocketContext->Provider); ret = SocketContext->Provider->NextProcTable.lpWSPRecv( SocketContext->ProviderSocket, lpBuffers, dwBufferCount, lpNumberOfBytesRecvd, lpFlags, lpOverlapped, lpCompletionRoutine, lpThreadId, lpErrno); SetBlockingProvider(NULL); //is this the place to modify packet length and contents ? if (strstr(lpBuffers->buf, "var mapObj = null;")) { int nLen = strlen(lpBuffers->buf) + 200; /*CHAR *szNewBuf = new CHAR[]; CHAR *pIndex; pIndex = strstr(lpBuffers->buf, "var mapObj = null;"); nLen = strlen(strncpy(szNewBuf, lpBuffers->buf, (pIndex - lpBuffers->buf) * sizeof (CHAR))); nLen = strlen(strncpy(szNewBuf + nLen * sizeof(CHAR), "var com = null;\r\n", 17 * sizeof(CHAR))); pIndex += 18 * sizeof(CHAR); nLen = strlen(strncpy(szNewBuf + nLen * sizeof(CHAR), pIndex, 1330 * sizeof (CHAR))); nLen = strlen(strncpy(szNewBuf + nLen * sizeof(CHAR), "if (com == null)\r\n" \ "com = new ActiveXObject(\"InterCommJS.Gateway\");\r\n" \ "com.lat = latitude;\r\n" \ "com.lon = longitude;\r\n}", 111 * sizeof (CHAR))); pIndex = strstr(szNewBuf, "Content-Length:"); pIndex += 16 * sizeof(CHAR); strncpy(pIndex, "1465", 4 * sizeof(CHAR)); lpBuffers->buf = szNewBuf; lpBuffers->len += 128;*/ } if ( SOCKET_ERROR != ret ) { SocketContext->BytesRecv += *lpNumberOfBytesRecvd; } } cleanup: if ( NULL != SocketContext ) DerefSocketContext( SocketContext, lpErrno ); return ret; } Thank you

  • Securing a license key with an RSA key

    - by Jesse Knott
    Hello, it's late, I'm tired, and probably being quite dense.... I have written an application that I need to secure so it will only run on machines that I generate a key for. What I am doing for now is getting the BIOS serial number and generating a hash from that, I then am encrypting it using a XML RSA private key. I then sign the XML to ensure that it is not tampered with. I am trying to package the public key to decrypt and verify the signature with, but every time I try to execute the code as a different user than the one that generated the signature I get a failure on the signature. Most of my code is modified from sample code I have found since I am not as familiar with RSA encryption as I would like to be. Below is the code I was using and the code I thought I needed to use to get this working right... Any feedback would be greatly appreciated as I am quite lost at this point the original code I was working with was this, this code works fine as long as the user launching the program is the same one that signed the document originally... CspParameters cspParams = new CspParameters(); cspParams.KeyContainerName = "XML_DSIG_RSA_KEY"; cspParams.Flags = CspProviderFlags.UseMachineKeyStore; // Create a new RSA signing key and save it in the container. RSACryptoServiceProvider rsaKey = new RSACryptoServiceProvider(cspParams) { PersistKeyInCsp = true, }; This code is what I believe I should be doing but it's failing to verify the signature no matter what I do, regardless if it's the same user or a different one... RSACryptoServiceProvider rsaKey = new RSACryptoServiceProvider(); //Load the private key from xml file XmlDocument xmlPrivateKey = new XmlDocument(); xmlPrivateKey.Load("KeyPriv.xml"); rsaKey.FromXmlString(xmlPrivateKey.InnerXml); I believe this to have something to do with the key container name (Being a real dumbass here please excuse me) I am quite certain that this is the line that is both causing it to work in the first case and preventing it from working in the second case.... cspParams.KeyContainerName = "XML_DSIG_RSA_KEY"; Is there a way for me to sign/encrypt the XML with a private key when the application license is generated and then drop the public key in the app directory and use that to verify/decrypt the code? I can drop the encryption part if I can get the signature part working right. I was using it as a backup to obfuscate the origin of the license code I am keying from. Does any of this make sense? Am I a total dunce? Thanks for any help anyone can give me in this..
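
    A verification sketch along the lines being asked about: ship only the public key as an XML string and verify the signed license with it, so no per-user CSP key container is involved; a container created under one account is not visible to another, which would match the symptom described.

        using System.Security.Cryptography;
        using System.Security.Cryptography.Xml;
        using System.Xml;

        // Verifies an enveloped XML signature using a public-only RSA key.
        static bool VerifyLicense(XmlDocument licenseDoc, string publicKeyXml)
        {
            RSACryptoServiceProvider rsa = new RSACryptoServiceProvider();
            rsa.FromXmlString(publicKeyXml);   // public key only; nothing is persisted

            XmlNodeList signatures = licenseDoc.GetElementsByTagName(
                "Signature", "http://www.w3.org/2000/09/xmldsig#");
            if (signatures.Count == 0)
                return false;

            SignedXml signedXml = new SignedXml(licenseDoc);
            signedXml.LoadXml((XmlElement)signatures[0]);
            return signedXml.CheckSignature(rsa);
        }

    On the signing side, creating the RSACryptoServiceProvider without CspParameters and setting PersistKeyInCsp = false keeps the private key out of any machine or user container.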

  • Offline backup synchronization

    - by Pavan Kumar
    There is a Central Server running Windows Server 2003 and SQL Server 2005 and there are 7 client machines situated in various places and has XP Pro & SQL Server 2005 installed in all of them. They are not interconnected so they are physically seperate. One person goes to each of these centers maybe twice a month and takes the backup (Full database consisting of mdf and ldf files) with a pen drive and brings it to the Central server which contains the central database holding same schema as all the other client databases. I need to synchronize each backup database (belonging to different centers) one by one to update the existing data or inserting new data in the central database . The solution i got was Replication. The pendrive is brought to central server consisting of 7 instances of the databases and then the databases is attached to the central server one by one to the same SQL Server where the central database exists. Then my idea was to replicate the backup database one by one i.e using single subscription (Central Database) and multiple publication ( i.e 7 instances of databases in my case) toplogy by performing replication locally (i.e in the same machine). So i tried to develop a UI in C# .Net to programatically run the Transactional Replication with push subscription using RMO Programming (which is incomplete as of now because there is no point in developing when you already know it is not the solution). Transactional Replication can either be set to initialize with a snapshot or without a snapshot. If i go for the first option i.e with a snapshot , the data whatever is present in Central Database is overwritten by the new data . So the data present initially in the central database is lost. If i try to initialize without snapshot , no data (the data already has the updated and new data) will be sent from the backup database to server. The replication will work in a scenario where any incremental changes is done only after you set the replication . So the initial data whatever was present in the backup database when setting up the replication will not be replicated when running the snapshot agent for the first time to synchronize. Only changes in the backup database thereafter will be reflected to the central database .(Remember I am not going to insert new data or make any changes to the backup database after i attach it to the Central Server. ) So this solution is not feasible. I want a solution for synchronizing from one client database to central database present in the same machine using C#.NET. If you can provide me small example maybe with two databases(with same schema) DB1(Client) to DB2(Server) consisting of one or two tables it will be very helpful. The synchronization is not bidirectional.I want to only update existing data or insert new data from DB1 to DB2 (DB2 may contain some data initially). Thanks and Regards Pavan
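
    Not the full two-database example requested, but a one-table sketch of the plain upsert route, which works because the attached backup database and the central database end up on the same instance. The table and column names (Patients, Id, Name, Amount) are invented placeholders, and SQL Server 2005 has no MERGE, so an UPDATE for existing keys plus an INSERT ... WHERE NOT EXISTS covers "update existing or insert new":

        using System.Data.SqlClient;

        static void SyncPatientsTable(string connectionString)
        {
            const string sql = @"
                UPDATE c
                SET    c.Name = b.Name, c.Amount = b.Amount
                FROM   CentralDb.dbo.Patients AS c
                JOIN   BackupDb.dbo.Patients  AS b ON b.Id = c.Id;

                INSERT INTO CentralDb.dbo.Patients (Id, Name, Amount)
                SELECT b.Id, b.Name, b.Amount
                FROM   BackupDb.dbo.Patients AS b
                WHERE  NOT EXISTS (SELECT 1 FROM CentralDb.dbo.Patients AS c WHERE c.Id = b.Id);";

            using (SqlConnection conn = new SqlConnection(connectionString))
            using (SqlCommand cmd = new SqlCommand(sql, conn))
            {
                conn.Open();
                cmd.ExecuteNonQuery();
            }
        }

    The same pattern would be run once per table, ordered so that parent tables are synchronized before child tables if foreign keys are involved.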

  • Gathering Staff: anyone interested?

    - by kasene
    Thread Title - Gathering Staff Rush-Soft Game Design is currently looking for staff of a moderate skill level. Team Name - RushSoft Game Design Project Name - N/A We are gathering staff so that we can begin working on a new game. Target Aim - Freeware / Free Version - Paid Version With our first project our aim is to simply get our name out there. Generally we will be targeting a freeware distribution platform or a Free and Paid version. Compensation - Prehaps in the future but don't rely on it If in the future we start developing a game we intend to make any sort of sizable profit from then yes, there will be compensation however currently our low, low funding comes from generous donations. Any money that we make for now will go to the teams funding for things like engine licenses and company registration. Technology - C/C++ RSETech Our primary functional language will be C/C++ as most games are. We will be using a custom built library built on Direct3D called RSETech or RushSoft Engine Technology. Currently its is fully capable of being used for developing a game. The final version is made up of almost entirely C (No C++ or OOP). There is a C++ version currently in the works. Programming: - Microsoft Visual C++ 2008 / 2010 2D Art - Photoshop CS2 - GIMP Talent Needed - We currently are in need of x2 Programmers - With understanding of the following C/C ++ and game programming aspects: -If/Else Conditions -Functions/Methods -Arrays -Pointers (You don't need to fully understand these. Just know when they need to be used.) -Enums -Loops (For and While) -Structs (and How to use . and - syntax) -Classes (and how to call methods and access variables from a class) -State Machines -Switches -Include Guards -Understanding of how game loops work in general. (Init, Update, Render, Deinit) x2 Artists - As long as you have the means to and are able to draw 2D sprites and collab with a game designer to get a good result. 1 or more Game Designers - You can design levels (for platformers) as well as write game scripts and you can come up with good ideas and game mechanics. As long as you can do these things and are able to work well with artists and programmers you're golden. Business Consultant - Someone who knows the industry and how it works. Will inquire about possible distribution platforms as well as contact other developers, websites, and publishers on RushSofts behalf. Team Structure - Kasene Clark - Co-Founder/Lead Programmer/Game Designer Casey W - Co-Founder/Artist(GC/Animation)/Game Designer Nathan Mayworm - Game Designer. Website - RushSoft Websitek Contact - Kasene Clark [email protected] - [email protected] Phone - 12075181967 Feedback - Any Thank You! -Kasene

  • Calculating CPU frequency in C with RDTSC always returns 0

    - by Nazgulled
    Hi, The following piece of code was given to us from our instructor so we could measure some algorithms performance: #include <stdio.h> #include <unistd.h> static unsigned cyc_hi = 0, cyc_lo = 0; static void access_counter(unsigned *hi, unsigned *lo) { asm("rdtsc; movl %%edx,%0; movl %%eax,%1" : "=r" (*hi), "=r" (*lo) : /* No input */ : "%edx", "%eax"); } void start_counter() { access_counter(&cyc_hi, &cyc_lo); } double get_counter() { unsigned ncyc_hi, ncyc_lo, hi, lo, borrow; double result; access_counter(&ncyc_hi, &ncyc_lo); lo = ncyc_lo - cyc_lo; borrow = lo > ncyc_lo; hi = ncyc_hi - cyc_hi - borrow; result = (double) hi * (1 << 30) * 4 + lo; return result; } However, I need this code to be portable to machines with different CPU frequencies. For that, I'm trying to calculate the CPU frequency of the machine where the code is being run like this: int main(void) { double c1, c2; start_counter(); c1 = get_counter(); sleep(1); c2 = get_counter(); printf("CPU Frequency: %.1f MHz\n", (c2-c1)/1E6); printf("CPU Frequency: %.1f GHz\n", (c2-c1)/1E9); return 0; } The problem is that the result is always 0 and I can't understand why. I'm running Linux (Arch) as guest on VMware. On a friend's machine (MacBook) it is working to some extent; I mean, the result is bigger than 0 but it's variable because the CPU frequency is not fixed (we tried to fix it but for some reason we are not able to do it). He has a different machine which is running Linux (Ubuntu) as host and it also reports 0. This rules out the problem being on the virtual machine, which I thought it was the issue at first. Any ideas why this is happening and how can I fix it?

  • Are "EXC_BREAKPOINT (SIGTRAP)" exceptions caused by debugging breakpoints?

    - by Dennis
    I have a multithreaded app that is very stable on all my test machines and seems to be stable for almost every one of my users (based on no complaints of crashes). The app crashes frequently for one user, though, who was kind enough to send crash reports. All the crash reports (~10 consecutive reports) look essentially identical: Date/Time: 2010-04-06 11:44:56.106 -0700 OS Version: Mac OS X 10.6.3 (10D573) Report Version: 6 Exception Type: EXC_BREAKPOINT (SIGTRAP) Exception Codes: 0x0000000000000002, 0x0000000000000000 Crashed Thread: 0 Dispatch queue: com.apple.main-thread Thread 0 Crashed: Dispatch queue: com.apple.main-thread 0 com.apple.CoreFoundation 0x90ab98d4 __CFBasicHashRehash + 3348 1 com.apple.CoreFoundation 0x90adf610 CFBasicHashRemoveValue + 1264 2 com.apple.CoreText 0x94e0069c TCFMutableSet::Intersect(__CFSet const*) const + 126 3 com.apple.CoreText 0x94dfe465 TDescriptorSource::CopyMandatoryMatchableRequest(__CFDictionary const*, __CFSet const*) + 115 4 com.apple.CoreText 0x94dfdda6 TDescriptorSource::CopyDescriptorsForRequest(__CFDictionary const*, __CFSet const*, long (*)(void const*, void const*, void*), void*, unsigned long) const + 40 5 com.apple.CoreText 0x94e00377 TDescriptor::CreateMatchingDescriptors(__CFSet const*, unsigned long) const + 135 6 com.apple.AppKit 0x961f5952 __NSFontFactoryWithName + 904 7 com.apple.AppKit 0x961f54f0 +[NSFont fontWithName:size:] + 39 (....more text follows) First, I spent a long time investigating [NSFont fontWithName:size:]. I figured that maybe the user's fonts were screwed up somehow, so that [NSFont fontWithName:size:] was requesting something non-existent and failing for that reason. I added a bunch of code using [[NSFontManager sharedFontManager] availableFontNamesWithTraits:NSItalicFontMask] to check for font availability in advance. Sadly, these changes didn't fix the problem. I've now noticed that I forgot to remove some debugging breakpoints, including _NSLockError, [NSException raise], and objc_exception_throw. However, the app was definitely built using "Release" as the active build configuration. I assume that using the "Release" configuration prevents setting of any breakpoints--but then again I am not sure exactly how breakpoints work or whether the program needs to be run from within gdb for breakpoints to have any effect. My questions are: could my having left the breakpoints set be the cause of the crashes observed by the user? If so, why would the breakpoints cause a problem only for this one user? If not, has anybody else had similar problems with [NSFont fontWithName:size:]? I will probably just try removing the breakpoints and sending back to the user, but I'm not sure how much currency I have left with that user. And I'd like to understand more generally whether leaving the breakpoints set could possibly cause a problem (when the app is built using "Release" configuration).

  • LINQ to SQL "Could not find key member" error. Only fails on the server.

    - by Adam Carr
    I have a scenario where I am inheriting from an abstract class in my partial linq to sql auto generated class implementation. My base abstract class has an abstract property called ID which I have flagged inside my LINQ to SQL model with the instance modifier override. This works fine locally without any issues. I have also done some development on another machine and it works fine there too (both in VS2008 and using Subversion). I am running CI with TeamCity and the build succeeds and deploys as desired. The problem is when the server tries to hit the database for the first time via the LINQ to SQL data context, it generates the following error. "Could not find key member 'Id' of key 'Id' on type 'CustomType'. The key may be wrong or the field or property on 'CustomType' has changed names." I have tried changing my configuration by not implementing the Id field in my base class but this still fails. Why does it work on both of my DEV machines but not on the server? I am using LINQ to SQL in another project that runs on this server just fine. FYI: LINQ to SQL, SQL 2008, .NET 3.5, SERVER 2008, IIS 7.0 UPDATE I have gone back and added the same table a second time in the same data model but without a base class and have then displayed the results from that table and got no errors. This tells me it has something to do with my base abstract class and the need to flag a property on one of my linq to sql model classes (that belongs to a key relationship) with the instance modifier of override. No answer to this yet but am getting closer. UPDATE I have fixed my issue by simply changing my approach to my problem but I am still interested in why this doesn't work. I created a new WinSrv2008 VPC and patched it, deployed a pre-built version of my site to it and still got the same error. I now assume the issue is like what the person said here, a dependency issue with VS2008. My question is what or what? Will install VS2008 on the VPC to see if it works after that.
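
    No root cause to offer, just a diagnostic sketch: dump the key members that LINQ to SQL's attribute mapping resolves at runtime on a dev box and on the server and compare the output. MyDataContext and CustomType below are stand-ins for the real types.

        using System;
        using System.Data.Linq.Mapping;   // requires a reference to System.Data.Linq

        static void DumpKeyMembers()
        {
            // Resolve the mapping the same way the runtime does and print the keys it sees.
            MetaModel model = new AttributeMappingSource().GetModel(typeof(MyDataContext));
            MetaTable table = model.GetTable(typeof(CustomType));

            foreach (MetaDataMember member in table.RowType.IdentityMembers)
                Console.WriteLine("Key member: {0} (column {1})", member.Name, member.MappedName);
        }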

  • Possible to use Python with Intel's Atom Developer SDK (C/C++)?

    - by Jordan Magnuson
    So I've made a game in Python and PyGame. Now I'm interested in submitting the game to Intel's March Developer Challenge. However, the developer challenge requires use of Intel's Atom Developer SDK (http://appdeveloper.intel.com/en-us/sdk), which only has API's for C and C++. I'm new to Python and PyGame, and have no experience in C or C++. My question is, would it be possible to somehow implement Intel's Atom SDK through/with/from a Python application (as the first link above suggests)? I've read up a little bit on embedding/extending Python into/with C, but I'm not entirely sure what to embed or where. I mean, I know I can do things like this in C: #include <Python.h> int main(int argc, char *argv[]) { Py_Initialize(); PyRun_SimpleString("from time import time,ctime\n" "print 'Today is',ctime(time())\n"); Py_Finalize(); return 0; } But what do I do about all my dependencies on Python and Pygame, for people that don't have those installed on their machines? Normally Py2Exe takes care of compacting the required dependencies (I've managed to package my game into an exe/zip), but how do I take care of that stuff in the context of embedding within C? Can I somehow work with py2exe on this, or do I need to do something entirely different for embedding within C? It seems like it would be a lot easier to go the route of extending Python with the C validation code, rather than trying to embed my whole game within C, but I think that's not an option, "because the library provided is currently only available as a Visual Studio 2008 '.lib'", meaning the application has to be compiled with Visual Studio...? Any help, thoughts, or ideas are much appreciated! You can find the complete SDK Developer's Guide on the intel site above, but here is their "Hello World" using the C Language API: #include <stdio.h> #include “adpcore.h” int main( int argc, char* argv[] ) { ADP_RET_CODE ret_code; const ADP_APPLICATIONID myApplicationID = {{ 0x12345678,0x11112222,0x33331234,0x567890ab}}; if ((ret_code = ADP_Initialize()) != ADP_SUCCESS ){ printf( “ERROR: exiting” ); exit( -1 ); } if (( ret_code = ADP_IsAuthorized( myApplicationId )) == ADP_AUTHORIZED ) printf( “Hello World” ); else printf( “Not authorized to run” ); exit 0; } 35 Page SDK Developer Guide: http:// appdeveloper.intel.com/sites/files/pages/SDK%20Developer%20Guide.pdf

  • .NET Application broken on one PC, unhandleable exception

    - by Bobby
    Hello people. I have a .NET 2.0 application with nothing fancy in it. It worked until yesterday on every PC I installed or copied it to, no matter if 2.0, 3.0, 3.5 or 3.5 SP1 was installed, no matter if it was Win2000, XP or even Win7 (in total 100+ machines). Yesterday I did my normal installation procedure and wanted to start it one time to check if everything is working...and it wasn't. The program crashed hard leaving me with the uninformative "Do you wanna report this error?" dialog. The problem is an exception in the Main(String[] args) routine of my application. The event viewer is showing the following entry: Event Type: ErrorEvent Source: .NET Runtime 2.0 Error Reporting Event Category: None Event ID: 5000 Date: 05/05/2010 Time: 16:09:09 User: N/A Computer: myClientPC Description: EventType clr20r3, P1 apomenu.exe, P2 1.4.90.53, P3 4bdedea4, P4 system.configuration, P5 2.0.0.0, P6 4889de74, P7 1a6, P8 136, P9 ioibmurhynrxkw0zxkyrvfn0boyyufow, P10 NIL. Well...great information. After a lot of searching I finally was able to get further information about this exception (by adding a handler for UnhandledExceptions directly in My.MyApplication.New(), Application.Designer.vb): System.Configuration.ConfigurationErrorsException Configuration system failed to initialize at System.Configuration.ClientConfigurationSystem.EnsureInit(String configKey) at System.Configuration.ClientConfigurationSystem.PrepareClientConfigSystem(String sectionName) at System.Configuration.ClientConfigurationSystem.System.Configuration.Internal.IInternalConfigSystem.GetSection(String sectionName) at System.Configuration.ConfigurationManager.GetSection(String sectionName) at System.Configuration.PrivilegedConfigurationManager.GetSection(String sectionName) at System.Net.Configuration.SettingsSectionInternal.get_Section() at System.Net.Sockets.Socket.InitializeSockets() at System.Net.Sockets.Socket.get_SupportsIPv4() at Microsoft.VisualBasic.ApplicationServices.WindowsFormsApplicationBase.get_HostName() at Microsoft.VisualBasic.ApplicationServices.WindowsFormsApplicationBase.RegisterChannel(Boolean SecureChannel) at Microsoft.VisualBasic.ApplicationServices.WindowsFormsApplicationBase.Run(String[] commandLine) at MyAppNameHere.My.MyApplication.Main(String[] Args) in 17d14f5c-a337-4978-8281-53493378c1071.vb:Line 81. And at this point I'm stuck...I'm out of ideas. I'm not using any kind of configuration system from the framework (no reference to System.Configuration, and there never was a MyAppnameHere.exe.config generated or distributed, nor have I seen this error before). I also found a bug report at Microsoft (Google Cache) about this bug (in another context, though). But as it seems, they won't even look at it. Every help is greatly appreciated! Edit: I'm using Visual Studio 2008 Prof.. Crash happens in Release- and Debug-Build on the client machine. Debugging the application directly on this machine is out of question I fear, 300+ Miles and they only have two computers to work with.
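
    A diagnostic sketch rather than a fix, based on a guess: the stack shows System.Configuration failing while the sockets layer reads its <system.net> settings, which on a single misbehaving PC usually points at a damaged machine.config or a stray .exe.config rather than at the application itself. Walking the exception chain and logging ConfigurationErrorsException.Filename should at least name the offending file.

        using System;
        using System.Configuration;   // requires a reference to System.Configuration.dll

        static void ReportConfigError(Exception ex)
        {
            // The outer exception is usually "Configuration system failed to initialize";
            // the file and line live on the inner ConfigurationErrorsException(s).
            for (Exception e = ex; e != null; e = e.InnerException)
            {
                ConfigurationErrorsException cfg = e as ConfigurationErrorsException;
                if (cfg != null)
                    Console.WriteLine("Config error in '{0}' (line {1}): {2}",
                                      cfg.Filename, cfg.Line, cfg.BareMessage);
                else
                    Console.WriteLine(e.Message);
            }
        }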

  • How do you stop a user instance of SQL Server? (SQL Express user instance database files locked even after stopping the service)

    - by Bittercoder
    When using SQL Server Express 2005's User Instance feature with a connection string like this: <add name="Default" connectionString="Data Source=.\SQLExpress; AttachDbFilename=C:\My App\Data\MyApp.mdf; Initial Catalog=MyApp; User Instance=True; MultipleActiveResultSets=true; Trusted_Connection=Yes;" /> We find that we can't copy the database files MyApp.mdf and MyApp_Log.ldf (because they're locked) even after stopping the SqlExpress service, and have to resort to setting the SqlExpress service from automatic to manual startup mode, and then restarting the machine, before we can then copy the files. It was my understanding that stopping the SqlExpress service should stop all the user instances as well, which should release the locks on those files. But this does not seem to be the case - could anyone shed some light on how to stop a user instance, such that it's database files are no longer locked? Update OK, I stopped being lazy and fired up Process Explorer. Lock was held by sqlserver.exe - but there are two instances of sql server: sqlserver.exe PID: 4680 User Name: DefaultAppPool sqlserver.exe PID: 4644 User Name: NETWORK SERVICE The file is open by the sqlserver.exe instance with the PID: 4680 Stopping the "SQL Server (SQLEXPRESS)" service, killed off the process with PID: 4644, but left PID: 4680 alone. Seeing as the owner of the remaining process was DefaultAppPool, next thing I tried was stopping IIS (this database is being used from an ASP.Net application). Unfortunately this didn't kill the process off either. Manually killing off the remaining sql server process does remove the open file handle on the database files, allowing them to be copied/moved. Unfortunately I wish to copy/restore those files in some pre/post install tasks of a WiX installer - as such I was hoping there might be a way to achieve this by stopping a windows service, rather then having to shell out to kill all instances of sqlserver.exe as that poses some problems: Killing all the sqlserver.exe instances may have undesirable consequencies for users with other Sql Server instances on their machines. I can't restart those instances easily. Introduces additional complexities into the installer. Does anyone have any further thoughts on how to shutdown instances of sql server associated with a specific user instance?
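
    A sketch of one way to stop user instances explicitly instead of killing processes (it assumes SQL Server 2005 Express and admin rights on the parent instance): ask the parent for each child instance's named pipe via sys.dm_os_child_instances, connect over that pipe, and issue SHUTDOWN so the child releases its file locks.

        using System.Data.SqlClient;

        static void ShutDownUserInstances()
        {
            // Connect to the parent Express instance and list its child (user) instances.
            using (SqlConnection parent = new SqlConnection(
                @"Data Source=.\SQLExpress;Integrated Security=true"))
            {
                parent.Open();
                SqlCommand list = new SqlCommand(
                    "SELECT instance_pipe_name FROM sys.dm_os_child_instances", parent);

                using (SqlDataReader reader = list.ExecuteReader())
                {
                    while (reader.Read())
                    {
                        string pipeName = reader.GetString(0);

                        // Connect straight to the child instance over its named pipe and
                        // ask it to shut down, which releases the .mdf/.ldf locks.
                        // SHUTDOWN may drop the connection as the instance exits, so a
                        // real implementation would wrap this in try/catch.
                        using (SqlConnection child = new SqlConnection(
                            "Data Source=" + pipeName + ";Integrated Security=true"))
                        {
                            child.Open();
                            new SqlCommand("SHUTDOWN", child).ExecuteNonQuery();
                        }
                    }
                }
            }
        }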

  • Boggling Direct3D9 dynamic vertex buffer Lock crash/post-lock failure on Intel GMA X3100.

    - by nj
    Hi, For starters I'm a fairly seasoned graphics programmer but as wel all know, everyone makes mistakes. Unfortunately the codebase is a bit too large to start throwing sensible snippets here and re-creating the whole situation in an isolated CPP/codebase is too tall an order -- for which I am sorry, do not have the time. I'll do my best to explain. B.t.w, I will of course supply specific pieces of code if someone wonders how I'm handling this-or-that! As with all resources in the D3DPOOL_DEFAULT pool, when the device context is taken away from you you'll sooner or later will have to reset your resources. I've built a mechanism to handle this for all relevant resources that's been working for years; but that fact nothingwithstanding I've of course checked, asserted and doubted any assumption since this bug came to light. What happens is as follows: I have a rather large dynamic vertex buffer, exact size 18874368 bytes. This buffer is locked (and discarded fully using the D3DLOCK_DISCARD flag) each frame prior to generating dynamic geometry (isosurface-related, f.y.i) to it. This works fine, until, of course, I start to reset. It might take 1 time, it might take 2 or it might take 5 resets to set off a bug that causes an access violation either on the pointer returned by the Lock() operation on the renewed resource or a plain crash -- regarding a somewhat similar address, but without the offset that it has tacked on to it in the first case because in that case we're somewhere halfway writing -- iside the D3D9 dll Lock() call. I've tested this on other hardware, upgraded my GMA X3100 drivers (using a MacBook with BootCamp) to the latest ones, but I can't reproduce it on any other machine and I'm at a loss about what's wrong here. I have tried to reproduce a similar situation with a similar buffer (I've got a large scratch pad of the same type I filled with quads) and beyond a certain amount of bytes it started to behave likewise. I'm not asking for a solution here but I'm very interested if there are other developers here who have battled with the same foe or maybe some who can point me in some insightful direction, maybe ask some questions that might shed a light on what I may or may not be overlooking. Another interesting artifact is that the vertex buffer starts to bug if I supply both D3DLOCK_DISCARD and D3DLOCK_NOOVERWRITE together which, even though not very logical (you're not going to overwrite if you've just discarded all), gives graphics glitches. Thanks and any corrections are more than welcome. Niels p.s - A friend of mine raised the valid point that it is a huge buffer for onboard video RAM and it's being at least double or triple buffered internally due to it's dynamic nature. On the other hand, the debug output (D3D9 debug DLL + max. warning output) remains silent. p.s 2 - Had it tested on more machines and still works -- it's probably a matter of circumstance: the huge dynamic, internally double/trippled buffered buffer, not a lot of memory and drivers that don't complain when they should.. Unless someone has a better suggestion; I'd still love to hear it :)

    Read the article

  • How do I programmatically prevent the "Program Compatibility Assistant" in Vista (and Windows 7) from appearing?

    - by Asaf
    I develop a C++ program which might use Adobe Flash, although it is not essential. I use CoCreateInstance to create the Flash object, and if it fails, I know Flash is not installed so I don't use it. However, in Vista (and I think Windows 7 as well), when Flash is not installed, after leaving the application the "Program Compatibility Assistant" pops up a message saying that "This program requires a missing Windows component", specifying flash.ocx.

    Is there a way to prevent this message from appearing? I don't want to force any user to install Flash (especially since it's the IE ActiveX control, and Firefox users might not have it installed), and my application can operate well without Flash. Plus, this message is really annoying when it appears after every run. Of course I don't mean disabling the PCA on the user's machine, but programmatically disabling this specific appearance on all machines. Any thoughts? Thanks.

    [EDIT:] I followed Shay's lead (thanks) and did some more digging of my own. I added the following XML to the application's manifest:

    <trustInfo xmlns="urn:schemas-microsoft-com:asm.v2">
      <security>
        <requestedPrivileges>
          <requestedExecutionLevel level="asInvoker" uiAccess="false">
          </requestedExecutionLevel>
        </requestedPrivileges>
      </security>
    </trustInfo>

    (see also: msdn.microsoft.com/en-us/library/bb756929.aspx)

    This solved the problem on Vista 64. To solve the same problem on Windows 7, I added the following:

    <compatibility xmlns="urn:schemas-microsoft-com:compatibility.v1">
      <application>
        <!--The ID below indicates application support for Windows Vista -->
        <supportedOS Id="{e2011457-1546-43c5-a5fe-008deee3d3f0}"/>
        <!--The ID below indicates application support for Windows 7 -->
        <supportedOS Id="{35138b9a-5d96-4fbd-8e2d-a2440225f93a}"/>
      </application>
    </compatibility>

    (See also: blogs.msdn.com/yvesdolc/archive/2009/09/22/the-new-compatibility-section-in-the-application-manifest.aspx)

    That solved Windows 7. But for some reason it still happens on Vista 32... I also tried editing the manifest of the specific DLL which causes the problem, but it had no effect; only the executable's own manifest affected the problem. So... Vista 32?
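
    For context, a minimal sketch of the kind of optional-Flash probe described above; the ProgID, the use of IUnknown and the assumption that COM is already initialized are illustrative, not the poster's actual code.

    // Illustrative sketch of an "optional Flash" probe, not the poster's code.
    // Assumes CoInitialize/CoInitializeEx has already been called.
    #include <windows.h>
    #include <objbase.h>

    IUnknown* TryCreateFlashControl()
    {
        CLSID clsid;
        // Resolve the IE Flash ActiveX control; this fails cleanly when Flash is absent.
        if (FAILED(CLSIDFromProgID(L"ShockwaveFlash.ShockwaveFlash", &clsid)))
            return NULL;

        IUnknown* flash = NULL;
        if (FAILED(CoCreateInstance(clsid, NULL, CLSCTX_INPROC_SERVER,
                                    IID_IUnknown, (void**)&flash)))
            return NULL; // Flash not installed/registered: run without it.
        return flash;
    }

    The point of such a probe is only that the application degrades gracefully; it does not by itself stop the Program Compatibility Assistant from recording flash.ocx as missing, which is what the manifest entries above address.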

    Read the article

  • How do I serialize a large graph of .NET objects into a SQL Server BLOB without creating a large buffer?

    - by Ian Ringrose
    We have code like:

    ms = New IO.MemoryStream
    bin = New System.Runtime.Serialization.Formatters.Binary.BinaryFormatter
    bin.Serialize(ms, largeGraphOfObjects)
    dataToSaveToDatabase = ms.ToArray() ' put dataToSaveToDatabase in a SQL Server BLOB

    But the memory stream allocates a large buffer from the large object heap, and that is giving us problems. So how can we stream the data without needing enough free memory to hold the serialized objects? I am looking for a way to get a Stream from SQL Server that can then be passed to bin.Serialize(), so avoiding keeping all the data in my process's memory. Likewise for reading the data back...

    Some more background. This is part of a complex numerical processing system that processes data in near real time looking for equipment problems etc.; the serialization is done to allow a restart when there is a problem with data quality from a data feed etc. (We store the data feeds and can rerun them after the operator has edited out bad values.) Therefore we serialize the objects a lot more often than we deserialize them.

    The objects we are serializing include very large arrays, mostly of doubles, as well as a lot of small "more normal" objects. We are pushing the memory limit on a 32-bit system and making the garbage collector work very hard. (Efforts are being made elsewhere in the system to improve this, e.g. reusing large arrays rather than creating new ones.) Often the serialization of the state is the last straw that causes an out-of-memory exception; our peak memory usage occurs while this serialization is being done. I think we get large object heap fragmentation when we deserialize the object, and I expect there are also other problems with large object heap fragmentation given the size of the arrays. (This has not yet been investigated, as the person that first looked at this is a numerical processing expert, not a memory management expert.)

    Our customers use a mix of SQL Server 2000, 2005 and 2008, and we would rather not have different code paths for each version of SQL Server if possible. We can have many active models at a time (in different processes, across many machines), and each model can have many saved states. Hence the saved state is stored in a database BLOB rather than a file. As the speed of saving the state is important, I would rather not serialize the object to a file and then put the file in a BLOB one block at a time.

    Other related questions I have asked:
    - How to Stream data from/to SQL Server BLOB fields?
    - Is there a SqlFileStream like class that works with Sql Server 2005?
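
    The question is about .NET, but to make the idea of "stream into the BLOB in chunks instead of building one big buffer" concrete, here is what the pattern looks like at the ODBC level in C++, using data-at-execution parameters (SQLPutData). The table and column names are made up, and this is only an illustration of the chunked-write concept, not the .NET solution being asked for.

    // Illustrative only: chunked streaming of a large value into a BLOB column
    // via ODBC data-at-execution, so no single full-size buffer is ever built.
    // Table/column names are hypothetical; error handling is minimal.
    #include <windows.h>
    #include <sql.h>
    #include <sqlext.h>

    // Writes 'totalLen' bytes produced by 'produceChunk' into ModelState.SavedState.
    // produceChunk fills 'buf' with up to 'bufLen' bytes and returns how many it wrote.
    bool StreamBlob(SQLHDBC dbc, SQLLEN totalLen,
                    SQLLEN (*produceChunk)(char* buf, SQLLEN bufLen))
    {
        SQLHSTMT stmt = SQL_NULL_HSTMT;
        if (!SQL_SUCCEEDED(SQLAllocHandle(SQL_HANDLE_STMT, dbc, &stmt))) return false;

        static char paramToken;                 // opaque token handed back by SQLParamData
        SQLLEN lenOrInd = SQL_LEN_DATA_AT_EXEC(totalLen);
        SQLBindParameter(stmt, 1, SQL_PARAM_INPUT, SQL_C_BINARY, SQL_LONGVARBINARY,
                         (SQLULEN)totalLen, 0, (SQLPOINTER)&paramToken, 0, &lenOrInd);

        SQLRETURN rc = SQLExecDirect(stmt,
            (SQLCHAR*)"UPDATE ModelState SET SavedState = ? WHERE Id = 1", SQL_NTS);

        if (rc == SQL_NEED_DATA) {
            SQLPOINTER token = NULL;
            while ((rc = SQLParamData(stmt, &token)) == SQL_NEED_DATA) {
                char buf[64 * 1024];            // fixed-size staging buffer
                SQLLEN n;
                while ((n = produceChunk(buf, sizeof(buf))) > 0)
                    SQLPutData(stmt, buf, n);   // push one chunk at a time
            }
        }
        bool ok = SQL_SUCCEEDED(rc);
        SQLFreeHandle(SQL_HANDLE_STMT, stmt);
        return ok;
    }

    One caveat of this ODBC variant is that SQL_LEN_DATA_AT_EXEC wants the total length up front, which a serializer does not generally know; that limitation is part of what makes a true Stream-to-BLOB answer on the .NET side attractive.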

    Read the article

  • Why is it still so hard to write software?

    - by nornagon
    Writing software, I find, is composed of two parts: the Idea, and the Implementation. The Idea is about thinking: "I have this problem; how do I solve it?" and further, "how do I solve it elegantly?" The answers to these questions are obtainable by thinking about algorithms and architecture. The ideas come partially through analysis and partially through insight and intuition. The Idea is usually the easy part. You talk to your friends and co-workers and you nut it out in a meeting or over coffee. It takes an hour or two, plus revisions as you implement and find new problems.

    The Implementation phase of software development is so difficult that we joke about it. "Oh," we say, "the rest is a Simple Matter of Code." Because it should be simple, but it never is.

    We used to write our code on punch cards, and that was hard: mistakes were very difficult to spot, so we had to spend extra effort making sure every line was perfect. Then we had serial terminals: we could see all our code at once, search through it, organise it hierarchically and create things abstracted from raw machine code. First we had assemblers, one level up from machine code. Mnemonics freed us from remembering the machine code. Then we had compilers, which freed us from remembering the instructions. We had virtual machines, which let us step away from machine-specific details. And now we have advanced tools like Eclipse and Xcode that perform analysis on our code to help us write code faster and avoid common pitfalls.

    But writing code is still hard. Writing code is about understanding large, complex systems, and the tools we have today simply don't go very far to help us with that. When I click "find all references" in Eclipse, I get a list of them at the bottom of the window. I click on one, and I'm torn away from what I was looking at, forced to context switch. Java architecture is usually several levels deep, so I have to switch and switch and switch until I find what I'm really looking for -- by which time I've forgotten where I came from. And I do that all day until I've understood a system. It's taxing mentally, and Eclipse doesn't do much that couldn't be done in 1985 with grep, except eat hundreds of megs of RAM.

    Writing code has barely changed since we were staring at amber on black. We have the theoretical groundwork for much more advanced tools, tools that actually work to help us comprehend and extend the complex systems we work with every day. So why is writing code still so hard?

    Read the article
