Search Results

Search found 5279 results on 212 pages for 'execution counter'.


  • SQL 05 full-text query fails "Specified module could not be found."

    - by Dan Bailiff
    I'm running SQL 2005 on Windows XP. I have a database table that has full-text searching enabled. I was able to build and even rebuild the index. However, when I try to query it like this:

        SELECT * FROM fulltext_english WHERE CONTAINS(page_data, 'causes')

    I get this error:

        Msg 7619, Level 16, State 1, Line 1
        The execution of a full-text query failed. "The specified module could not be found."

    Did I miss something on the install? Is this a DLL issue? I've googled and binged for days and can't find anything similar to this message. Thanks!

    Read the article

  • JetBrains dotTrace, is it possible to profile source code line by line? else I need another tool

    - by m3ntat
    I am using JetBrains dotTrace and have profiled my app, which is entirely CPU bound. But as you walk down the call tree, the results don't sum to the level above; I only see method calls, not the individual lines in the body of the node's own method. Is it possible to profile the source code line by line? For example, for one node:

        SimulatePair()     99.04%
        -- nextUniform()   30.12%
        -- IDCF()          24.08%

    The calls to nextUniform and IDCF account for 54% of the time in SimulatePair (or 54% of total execution time; I'm not sure how to read this). Regardless, I have no detail on what is happening in the other 46% of SimulatePair, and I need that on a line-by-line basis. Any help or alternative tools would be much appreciated. Thanks

    Read the article

  • XDebug, how to disable remote debugging for single .php file?

    - by Kirzilla
    Hello, I'm using the Eclipse IDE with remote Xdebug. Eclipse listens on port 9000 for Xdebug connections. Some PHP scripts run on the server via cron, so on every cron execution Xdebug sends debugging information to my workstation and Eclipse tries to find the corresponding file in my project. The file can't be found because the cron-run scripts are not part of the project I'm working on, so on every cron run Eclipse shows this alert: http://img2.pict.com/22/fc/86/3299517/0/screenshot2b142.png I've tried adding the following to the cron-executed PHP scripts: if (function_exists('xdebug_disable')) { xdebug_disable(); } ...but it didn't help. Any ideas? Thank you

    Read the article

  • WCF - Network Cost

    - by Mubashar Ahmad
    I have a WCF service deployed on IIS with basicHttpBinding and aspNetCompatibilityEnabled=true, and a test client that invokes multiple service operations simultaneously. To check performance, I measured the average time it takes to complete a service request on the client (in the proxy code) and on the server. After an 8-hour test (server and client on the same machine), the average response time on the client is around 34 ms, whereas the average execution time on the server is around 3 ms, so the difference is 31 ms. Why does every call carry this 31 ms of overhead, is it justified, and how can I reduce it?
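
    A minimal C# sketch of the kind of client-side measurement described above, assuming a hypothetical generated proxy class ServiceClient with an operation DoWork (both names are placeholders, not from the original post); comparing this figure with the server-side average isolates serialization, HTTP and channel overhead from actual service work:

        using System;
        using System.Diagnostics;

        class LatencyProbe
        {
            static void Main()
            {
                var client = new ServiceClient();   // hypothetical proxy type
                var watch = new Stopwatch();
                long totalMs = 0;
                const int calls = 1000;

                for (int i = 0; i < calls; i++)
                {
                    watch.Restart();
                    client.DoWork();                 // one round trip: serialization + HTTP + server work
                    watch.Stop();
                    totalMs += watch.ElapsedMilliseconds;
                }

                Console.WriteLine("Average client-side latency: {0} ms", totalMs / calls);
            }
        }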

    Read the article

  • router connect configuration in cakephp 1.3

    - by Gaurav Sharma
    Hello everyone, I have defined the following rule in the routes.php file of my CakePHP 1.3 application:

        Router::connect('/tags/*', array('plugin' => 'tags', 'controller' => 'Tags', 'action' => 'index', 'admin' => false));

    I am using the Tags plugin. When I display the tags related to a topic, the URL on each tag looks something like this: http://localhost/testapp/tags/Tags/view/{tag_key_name} I want the URL for any action of the Tags plugin to look like http://localhost/testapp/Tags/{action}/{tag_key_name} for view, admin_view and admin_edit, and http://localhost/testapp/Tags/{action} for index, admin_index and any other action that does not require an id or key name for execution.

    Read the article

  • problem using base64 encoder and InputStreamReader

    - by karoberts
    I have some CLOB columns in a database that I need to put Base64-encoded binary files into. These files can be large, so I need to stream them; I can't read the whole thing in at once. I'm using org.apache.commons.codec.binary.Base64InputStream to do the encoding, and I'm running into a problem. My code is essentially this:

        FileInputStream fis = new FileInputStream(file);
        Base64InputStream b64is = new Base64InputStream(fis, true, -1, null);
        InputStreamReader reader = new InputStreamReader(b64is);
        preparedStatement.setCharacterStream(1, reader);

    When I run the above code, I get the following during execution of the update, thrown deep in the InputStreamReader code:

        java.io.IOException: Underlying input stream returned zero bytes

    Why would this not work? It seems to me that the reader would attempt to read from the Base64 stream, which would read from the file stream, and everything should be happy.

    Read the article

  • Atomic Instructions and Variable Update visibility

    - by dsimcha
    On most common platforms (the most important being x86; I understand that some platforms have extremely weak memory models that provide almost no guarantees useful for multithreading, but I don't care about rare counter-examples), is the following code safe?

        Thread 1:
            someVariable = doStuff();
            atomicSet(stuffDoneFlag, 1);

        Thread 2:
            while(!atomicRead(stuffDoneFlag)) {}  // Wait for stuffDoneFlag to be set.
            doMoreStuff(someVariable);

    Assuming standard, reasonable implementations of atomic ops: Is Thread 1's assignment to someVariable guaranteed to complete before atomicSet() is called? Is Thread 2 guaranteed to see the assignment to someVariable before calling doMoreStuff(), provided it reads stuffDoneFlag atomically? Edits: The implementation of atomic ops I'm using contains the x86 LOCK instruction in each operation, if that helps. Assume stuffDoneFlag is properly cleared somehow; how isn't important. This is a very simplified example, written so that you wouldn't have to understand the whole context of the problem to answer it. I know it's not efficient.
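
    As a hedged illustration only (not from the original post): in C#, the same publish-then-signal pattern is usually written with the Volatile class, whose Write/Read pair provides the release/acquire ordering the question is asking about. A minimal sketch:

        using System;
        using System.Threading;
        using System.Threading.Tasks;

        class FlagPublish
        {
            static int someVariable;
            static int stuffDoneFlag;   // 0 = not done, 1 = done

            static void Main()
            {
                var producer = Task.Run(() =>
                {
                    someVariable = 42;                      // ordinary write
                    Volatile.Write(ref stuffDoneFlag, 1);   // release: earlier writes become visible first
                });

                var consumer = Task.Run(() =>
                {
                    while (Volatile.Read(ref stuffDoneFlag) == 0) { }  // acquire: spin until the flag is set
                    Console.WriteLine(someVariable);                   // prints 42
                });

                Task.WaitAll(producer, consumer);
            }
        }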

    Read the article

  • Question about spring transaction propagation

    - by Yousui
    Hi guys, I have a question about Spring transaction propagation. If I use @Transactional(propagation = Propagation.REQUIRED) to annotate a method m1, then when execution enters m1 and there is already a transaction, m1 will join it. But what about the transaction after m1 returns? Does it end, or does it stay open (for example, if I call m1 from another method and there is still other work to do after the invocation)? In summary, I want to know whether the transaction ends or stays open when an annotated method exits. Great thanks.

    Read the article

  • Import xml to database with high end performance and Audit log- A best Practice

    - by karthik
    Hi, I have to import big XML files into a MS SQL 2005 database using C#, with high performance. Even if a record fails in the middle, I have to move on to the next record, and the failed record needs to be logged for audit. I don't want to put an insert query inside a for loop. Could you please suggest the best way to do this? Using bulk-copy methods or DataAdapter update methods would be very nice, but if any record fails, doesn't the whole statement break and roll back? Any alternatives and best practices, with examples, please? Would multi-threading help me improve performance? Please give me an example. Thanks Karthikeyan
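
    A rough C# sketch of one common compromise, assuming a hypothetical target table Items(Id, Name) and a source file items.xml (all names, columns and the connection string are placeholders): stream the XML, load rows in batches with SqlBulkCopy, and when a batch fails, retry that batch row by row so only the offending records are logged and skipped.

        using System;
        using System.Data;
        using System.Data.SqlClient;
        using System.Xml;

        class XmlBulkImport
        {
            const string ConnStr = "Server=.;Database=Target;Integrated Security=true";

            static void Main()
            {
                var batch = NewBatchTable();
                using (var reader = XmlReader.Create("items.xml"))
                {
                    while (reader.ReadToFollowing("item"))        // hypothetical element name
                    {
                        var row = batch.NewRow();
                        row["Id"] = int.Parse(reader.GetAttribute("id"));
                        row["Name"] = reader.GetAttribute("name");
                        batch.Rows.Add(row);

                        if (batch.Rows.Count == 1000)             // flush in chunks, not one insert per loop pass
                        {
                            Flush(batch);
                            batch.Clear();
                        }
                    }
                }
                if (batch.Rows.Count > 0) Flush(batch);
            }

            static DataTable NewBatchTable()
            {
                var t = new DataTable("Items");
                t.Columns.Add("Id", typeof(int));
                t.Columns.Add("Name", typeof(string));
                return t;
            }

            static void Flush(DataTable batch)
            {
                try
                {
                    using (var bulk = new SqlBulkCopy(ConnStr) { DestinationTableName = "Items" })
                        bulk.WriteToServer(batch);
                }
                catch (Exception)
                {
                    // The whole batch was rejected; retry row by row so only the bad
                    // records are skipped and written to the audit log.
                    foreach (DataRow row in batch.Rows)
                    {
                        try { InsertSingle(row); }
                        catch (Exception ex) { Console.Error.WriteLine("Audit: row {0} failed: {1}", row["Id"], ex.Message); }
                    }
                }
            }

            static void InsertSingle(DataRow row)
            {
                using (var conn = new SqlConnection(ConnStr))
                using (var cmd = new SqlCommand("INSERT INTO Items (Id, Name) VALUES (@id, @name)", conn))
                {
                    cmd.Parameters.AddWithValue("@id", row["Id"]);
                    cmd.Parameters.AddWithValue("@name", row["Name"]);
                    conn.Open();
                    cmd.ExecuteNonQuery();
                }
            }
        }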

    Read the article

  • using fopen on uploaded file with php

    - by Patrick
    I'm uploading a file and then attempting to use fgetcsv to process it. My script is below. Everything was working fine before I added the file-upload portion. Now it shows no errors, it just doesn't display the uploaded data: I get my "file uploaded" notice and the file is saved properly, but the counter for the number of entries shows 0.

        if (isset($_FILES['csvgo'])) {
            $tmp = $_FILES['csvgo']['tmp_name'];
            $name = "temp.csv";
            $tempdir = "csvtemp/";
            if (move_uploaded_file($tmp, $tempdir . $name)) {
                echo "file uploaded<br>";
            }
            $csvfile = $tempdir . $name;
            $fh = fopen($csvfile, 'r');
            $headers = fgetcsv($fh);
            $data = array();
            while (!feof($fh)) {
                $row = fgetcsv($fh);
                if (!empty($row)) {
                    $obj = new stdClass;
                    foreach ($row as $i => $value) {
                        $key = $headers[$i];
                        $obj->$key = $value;
                    }
                    $data[] = $obj;
                }
            }
            fclose($fh);
            $c = 0;
            foreach ($data as $key) {
                $c++;
                $phone = $key->PHONE;
                $fax = $key->Fax;
                $email = $key->Email;
                // do things with the data here
            }
            echo "$c entries <br>";
        }

    Read the article

  • Eclipse takes ages to display breakpoint when running tomcat

    - by Ryan
    Hi, when Tomcat hits a breakpoint in Eclipse, the execution thread stops, but the breakpoint takes absolutely ages to appear in Eclipse. The same is true if I try to inspect a variable; the first time takes about 2 minutes. After that, the debug session is fine. Between that and the constant need to re-publish to Tomcat every time I change something, it's driving me nuts. Does anybody have any idea why it's so slow? Also, how can I stop Tomcat restarting the webapp every time I change something during a debug session? I am sure it never used to do that... Eclipse is 3.3.1.1 with J2EE Standard Tools and Web Standard Tools. Tomcat is 5.5. Thanks a lot for any advice! Ryan

    Read the article

  • Running Sybase ISQL scripts from windows batch file

    - by user1328709
    I have already researched this extensively, both on this site and on Google. I have created a number of batch files that perform certain automated tasks (backups etc.) on our production database. I want to further simplify my end-of-day process by taking the dumps with a script that accepts a few input parameters. The script is able to log in to the isql prompt, but the commands are never executed:

        @ECHO ***Started***
        @ECHO Enter MonthDay(MMDD)
        SET /p md=
        @ECHO %md%
        mkdir \\10.20.1.17\arch\212%md%_banking
        set run=isql -Uuser -SORBITS -Ppass
        %run%
        @echo dump database banking to '/media/newArch/212%md%_banking/212%md%EOD_banking.dmp' with compression=5
        @echo dump database master to '/media/newArch/212%md%_banking/212%md%EOD_master.dmp'
        @echo go
        pause

    I have been unsuccessful at putting the dump commands in a separate script file because the script itself uses a passed parameter. Please give me hints and links. Thanks

    Read the article

  • Using TSQLUnit to test INSTEAD OF triggers

    - by Jeff Jones
    I have an INSTEAD OF trigger on a table in my SQL Server 2005 database that checks several incoming values. If an incoming value is invalid, an error is raised and the transaction is rolled back. Otherwise the record is inserted. I would like to include a TSQLUnit test of this trigger where, if an invalid value is inserted, having the transaction rolled back is the successful outcome of the test. I have created a test procedure to do this, but rolling back the transaction aborts execution of the whole suite of tests. Has anyone had success with this? If so, how did you accomplish it? If this is not possible with TSQLUnit, how do you test your triggers? Or do you test them at all?

    Read the article

  • What's the fastest way to bulk insert a lot of data in SQL Server (C# client)

    - by Andrew
    I am hitting some performance bottlenecks with my C# client inserting bulk data into a SQL Server 2005 database, and I'm looking for ways to speed up the process. I am already using SqlClient.SqlBulkCopy (which is based on TDS) to speed up the data transfer across the wire, which helped a lot, but I'm still looking for more. I have a simple table that looks like this:

        CREATE TABLE [BulkData](
            [ContainerId] [int] NOT NULL,
            [BinId] [smallint] NOT NULL,
            [Sequence] [smallint] NOT NULL,
            [ItemId] [int] NOT NULL,
            [Left] [smallint] NOT NULL,
            [Top] [smallint] NOT NULL,
            [Right] [smallint] NOT NULL,
            [Bottom] [smallint] NOT NULL,
            CONSTRAINT [PKBulkData] PRIMARY KEY CLUSTERED
            (
                [ContainerId] ASC,
                [BinId] ASC,
                [Sequence] ASC
            ))

    I'm inserting data in chunks that average about 300 rows, where ContainerId and BinId are constant in each chunk, the Sequence value is 0-n, and the values are pre-sorted by the primary key. The %Disk Time performance counter spends a lot of time at 100%, so it is clear that disk I/O is the main issue, but the speeds I'm getting are several orders of magnitude below a raw file copy. Does it help at all if I: drop the primary key while I am doing the inserting and recreate it later; do inserts into a temporary table with the same schema and periodically transfer them into the main table to keep the size of the table where insertions are happening small; anything else?

    Based on the responses I have gotten, let me clarify a little bit. Portman: I'm using a clustered index because when the data is all imported I will need to access it sequentially in that order; I don't particularly need the index to be there while importing the data. Is there any advantage to having a nonclustered PK index while doing the inserts, as opposed to dropping the constraint entirely for the import? Chopeen: The data is being generated remotely on many other machines (my SQL server can only handle about 10 currently, but I would love to be able to add more). It's not practical to run the entire process on the local machine, because it would then have to process 50 times as much input data to generate the output. Jason: I am not running any concurrent queries against the table during the import process; I will try dropping the primary key and see if that helps. ~ Andrew
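
    For reference, a hedged C# sketch of how SqlBulkCopy is typically tuned for this kind of sustained load (the connection string and the specific numbers are illustrative placeholders, not from the original post):

        using System.Data;
        using System.Data.SqlClient;

        class BulkLoader
        {
            // Placeholder connection string; substitute the real one.
            const string ConnStr = "Server=.;Database=Target;Integrated Security=true";

            static void Load(DataTable chunk)
            {
                // TableLock takes a bulk-update lock instead of per-row locks, and a
                // larger BatchSize amortizes round trips; both commonly help bulk loads.
                using (var bulk = new SqlBulkCopy(ConnStr, SqlBulkCopyOptions.TableLock))
                {
                    bulk.DestinationTableName = "BulkData";
                    bulk.BatchSize = 5000;
                    bulk.BulkCopyTimeout = 0;   // no timeout for long-running loads
                    bulk.WriteToServer(chunk);
                }
            }
        }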

    Read the article

  • How To Aggregate API Data?

    - by Mindblip
    Hi, I have a system that connects to two popular APIs. I need to aggregate the data from each into a unified result that can then be paginated. The scope of the project means that the system could end up supporting tens of APIs. Each API imposes a maximum of 50 results per request. What is the best way of aggregating this data so that it is reliable, i.e. ordered, with no duplicates, etc.? I am using the CakePHP framework on a LAMP environment; however, I think this question applies to all programming languages. My approach so far is to query the search API of each provider and then populate a MySQL table, from which the results are ordered, paginated, etc. However, my concern is performance: API communication, parsing, inserting and then reading all in one execution. Am I missing something? Does anyone have any other ideas? I'm sure this is a common problem with many alternative solutions. Any help would be greatly appreciated. Thanks, Paul

    Read the article

  • Indy10 Deadlock at TCPServer

    - by user1769184
    To write information about the processing state to the GUI from inside a TCPServer.OnExecute(..) handler, I used the following command sequence:

        ExecuteDUMMYCommand(Global_Send_Record);
        BitMap_PaintImageProcess;
        TThread.Synchronize(nil, BitMap_PaintImageProcess);

    The code works well on some machines, but on a few it fails: execution stops at the TThread.Synchronize call. I guess that on those machines the call is trapped in a deadlock. Any chance to figure out the real problem behind this? In the procedure BitMap_PaintImageProcess I create a bitmap and do a lot of painting, but it seems this code is never executed.

    Read the article

  • C builder RAD 2010 RTL/VCL Application->Terminate() Function NOT TERMINATING THE APPLICATION

    - by ergey
    Hello, I have the problem also described here: http://www.delphigroups.info/3/9/106748.html I have tried almost every way of placing the Application->Terminate() call in the code, with and without 'return 0', 'ExitProcess(0)', 'ExitThread(0)' and exit(0). No variant closes the app; instead, the code after the Application->Terminate() statement keeps running. I have two or more threads in the app, and I have tried calling the terminate function both in the threads created after startup and in the main thread. As far as I can tell this is not related to CodeGuard / madExcept (I have turned them off and on with no effect). What should I do to terminate all the threads in a C++ Builder 2010 application and then terminate the process? Thank you.

    Read the article

  • How do I fix HttpRuntime.get_UsingIntegratedPipeline() method not found exception?

    - by Nick Berardi
    This is the exception that I am getting when I run my application with the Managed Fusion Url Rewriter installed:

        Exception Details: System.MissingMethodException: Method not found:
        'Boolean System.Web.HttpRuntime.get_UsingIntegratedPipeline()'.

        Source Error: An unhandled exception was generated during the execution of the current
        web request. Information regarding the origin and location of the exception can be
        identified using the exception stack trace below.

        Stack Trace:
        [MissingMethodException: Method not found: 'Boolean System.Web.HttpRuntime.get_UsingIntegratedPipeline()'.]
        ManagedFusion.Rewriter.RewriterModule.context_BeginRequest(Object sender, EventArgs e) in C:\Users\Nick\Documents\Projects\Managed Fusion (Open Source)\ManagedFusion.Rewriter\Source\RewriterModule.cs:162
        System.Web.SyncEventExecutionStep.System.Web.HttpApplication.IExecutionStep.Execute() +92
        System.Web.HttpApplication.ExecuteStep(IExecutionStep step, Boolean& completedSynchronously) +64

    I have .NET 2.0 SP1 installed on the server that is throwing this error.
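
    A hedged C# sketch of one way to guard against this kind of MissingMethodException (an illustration, not the rewriter's actual fix): probe for the property via reflection and only read it when the installed System.Web build actually exposes it.

        using System;
        using System.Reflection;
        using System.Web;

        static class PipelineProbe
        {
            // Returns true only when the runtime exposes HttpRuntime.UsingIntegratedPipeline.
            public static bool IsIntegratedPipeline()
            {
                PropertyInfo prop = typeof(HttpRuntime).GetProperty(
                    "UsingIntegratedPipeline",
                    BindingFlags.Public | BindingFlags.Static);

                if (prop == null)
                    return false;   // older framework build: assume the classic pipeline

                return (bool)prop.GetValue(null, null);
            }
        }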

    Read the article

  • Is it safe to lock a static variable in a non-static class?

    - by Dario Solera
    I've got a class that manages a shared resource. Since access to the resource depends on many parameters, this class is instantiated and disposed several times during the normal execution of the program. The shared resource does not support concurrency, so some kind of locking is needed. The first thing that came to mind is having a static lock object in the class and acquiring the lock on it, like this:

        // This thing is static!
        static readonly object MyLock = new object();

        // This thing is NOT static!
        MyResource _resource = ...;

        public void DoSomeWork()
        {
            lock (MyLock)
            {
                _resource.Access();
            }
        }

    Does that make sense, or would you use another approach?

    Read the article

  • Redirect and parse in realtime stdout of an long running process in vb.net

    - by Richard
    Hello there, This code executes "handbrakecli" (a command line application) and places the output into a string: Dim p As Process = New Process p.StartInfo.FileName = "handbrakecli" p.StartInfo.Arguments = "-i [source] -o [destination]" p.StartInfo.UseShellExecute = False p.StartInfo.RedirectStandardOutput = True p.Start Dim output As String = p.StandardOutput.ReadToEnd p.WaitForExit The problem is that this can take up to 20 minutes to complete during which nothing will be reported back to the user. Once it's completed, they'll see all the output from the application which includes progress details. Not very useful. Therefore I'm trying to find a sample that shows the best way to: Start an external application (hidden) Monitor its output periodically as it displays information about it's progress (so I can extract this and present a nice percentage bar to the user) Determine when the external application has finished (so I can't continue with my own applications execution) Kill the external application if necessary and detect when this has happened (so that if the user hits "cancel", I get take the appropriate steps) Does anyone have any recommended code snippets?
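
    A hedged C# sketch of the event-driven approach the .NET Process class supports (the paths and the progress "parsing" are placeholders; the original code is VB.NET, but the same members exist there):

        using System;
        using System.Diagnostics;

        class EncoderRunner
        {
            static void Main()
            {
                var p = new Process();
                p.StartInfo.FileName = "handbrakecli";
                p.StartInfo.Arguments = "-i source.mpg -o destination.mp4";   // placeholder paths
                p.StartInfo.UseShellExecute = false;
                p.StartInfo.RedirectStandardOutput = true;
                p.StartInfo.CreateNoWindow = true;           // run hidden
                p.EnableRaisingEvents = true;                // needed for the Exited event

                // Fires while the tool is still running, once per line of output,
                // so progress can be parsed and shown as it happens.
                p.OutputDataReceived += (sender, e) =>
                {
                    if (e.Data != null)
                        Console.WriteLine("progress line: " + e.Data);
                };
                p.Exited += (sender, e) => Console.WriteLine("encoder finished");

                p.Start();
                p.BeginOutputReadLine();                     // start asynchronous reads

                // If the user cancels:
                // if (!p.HasExited) p.Kill();

                p.WaitForExit();
            }
        }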

    Read the article

  • Some problem running NUnit

    - by prosseek
    I have NUnit installed in this directory: C:\Program Files\NUnit 2.5.5\bin\net-2.0 When I try to run my unit tests (mut.dll) from some other directory, I get the following error, and I have to copy mut.dll under the NUnit directory in order to run it:

        ProcessModel: Default    DomainUsage: Single
        Execution Runtime: net-2.0
        Could not load file or assembly 'nunit.framework, Version=2.5.5.10112, Culture=neutral, PublicKeyToken=96d09a1eb7f44a77' or one of its dependencies. The system cannot find the file specified.

    What's wrong? Is there anything I have to configure to run NUnit from any directory?

    Read the article

  • CommandManager Executed Events don't fire for custom ICommands

    - by Andre Luus
    The WPF CommandManager allows you to do the following (pseudo-ish code):

        <Button Name="SomeButton" Command="{Binding Path=ViewModelCommand}"/>

    And in the code-behind:

        private void InitCommandEvents()
        {
            CommandManager.AddExecutedEventHandler(this.SomeButton, SomeEventHandler);
        }

    SomeEventHandler never gets called. To me this didn't seem like an unreasonable thing to try, but if you consider what happens inside CommandManager.AddExecutedEventHandler, it makes sense why it doesn't work. Add to that the fact that the documentation clearly states that the CommandManager only works with RoutedCommands. Nonetheless, this had me very frustrated for a while and led me to this question: what would you suggest is the best workaround for the fact that the CommandManager does not support custom ICommands, especially if you want to add behavior around the execution of a command? For now, I fire the command manually from the button click event in code-behind.
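
    A hedged sketch of one common workaround (not from the original post): wrap the view model's ICommand in a decorator that runs the extra behavior around Execute, and bind the button to the wrapper instead of relying on CommandManager events. The callback bodies below are placeholders.

        using System;
        using System.Windows.Input;

        // Wraps any ICommand and raises callbacks around its execution.
        public class ObservedCommand : ICommand
        {
            private readonly ICommand _inner;
            private readonly Action _beforeExecute;
            private readonly Action _afterExecute;

            public ObservedCommand(ICommand inner, Action beforeExecute, Action afterExecute)
            {
                _inner = inner;
                _beforeExecute = beforeExecute;
                _afterExecute = afterExecute;
            }

            public bool CanExecute(object parameter) { return _inner.CanExecute(parameter); }

            public event EventHandler CanExecuteChanged
            {
                add { _inner.CanExecuteChanged += value; }
                remove { _inner.CanExecuteChanged -= value; }
            }

            public void Execute(object parameter)
            {
                if (_beforeExecute != null) _beforeExecute();
                _inner.Execute(parameter);
                if (_afterExecute != null) _afterExecute();
            }
        }

        // Usage (hypothetical): expose the wrapper from the view model and bind to it.
        // ViewModelCommand = new ObservedCommand(realCommand,
        //     () => Console.WriteLine("before"), () => Console.WriteLine("after"));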

    Read the article

  • Google Toolbox For Mac with Core Data on iPhone results in error

    - by JaanusSiim
    I have set up my project to use the Google Toolbox for Mac as described on the official wiki, and everything is working as expected. For Core Data I have created a 'database' class: in the final application it uses SQLite storage (based on the Xcode template code), and for unit tests I created a separate init method that uses in-memory storage (the store URL is [NSURL URLWithString:@"memory://store"] and the type is NSInMemoryStoreType). Without adding my model file (*.xcdatamodel) to the unit-test target, the tests fail in the expected place with the message: executeFetchRequest:error: A fetch request must have an entity. If I add the model file to the test target, the tests execute as expected (the Core Data part looks OK), but after the test run I get:

        RunIPhoneUnitTest.sh: line 123: 9487 Segmentation fault "$TARGET_BUILD_DIR/$EXECUTABLE_PATH" -RegisterForSystemEvents
        Command /bin/sh failed with exit code 139

    This problem does not look directly related to Core Data, but it only happens when the model file is added to the target. Any pointers on resolving this issue would be appreciated!

    Read the article

  • Oracle EXECUTE IMMEDIATE changes explain plan of query.

    - by Gunny
    I have a stored procedure that I am calling using EXECUTE IMMEDIATE. The issue I am facing is that the explain plan is different when I call the procedure directly versus when I use EXECUTE IMMEDIATE to call it. This is causing the execution time to increase 5x. The main difference between the plans is that with EXECUTE IMMEDIATE the optimizer isn't unnesting the subquery (I'm using a NOT EXISTS condition). We are using the rule-based optimizer here at work. Example:

    Fast:

        begin
          package.procedure;
        end;
        /

    Slow:

        begin
          execute immediate 'begin package.' || proc_name || '; end;';
        end;
        /

    Read the article
