Search Results

Search found 3969 results on 159 pages for 'differential execution'.

Page 52 of 159

  • Invoking different methods on threads

    - by Kraken
    I have a main process. It creates 10 threads (say) like this:

        while (required) {
            Thread t = new Thread(new ClassImplementingRunnable());
            t.start();
            counter++;
        }

    Now I have the list of these threads, and for each thread I want to perform the same set of operations, so I put that implementation in the run method of ClassImplementingRunnable. After the threads have finished executing, I want to wait for all of them to stop and then invoke them again, but this time serially rather than in parallel. To wait for them to finish I join each thread, but after that I am not sure how to invoke them again and run that piece of code serially. Can I do something like this?

        for (each thread) {
            t.reinvoke();  // how can I do that?
            t.doThis();    // also, where does doThis() go, given that
                           // ClassImplementingRunnable is an inner class?
        }

    Also, I want to reuse the same threads, i.e. I want them to continue from where they left off, but in a serial manner. I am not sure how to go about the last piece of pseudocode. Kindly help. Working with Java.
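    A Thread object cannot be restarted once run() has returned, so a minimal sketch (reusing the question's ClassImplementingRunnable, with InterruptedException passed up for brevity) is to keep the Runnable instances, run them on threads for the parallel pass, join, and then call run() directly for the serial pass:

        import java.util.ArrayList;
        import java.util.List;

        static void runTwice() throws InterruptedException {
            List<ClassImplementingRunnable> tasks = new ArrayList<ClassImplementingRunnable>();
            List<Thread> threads = new ArrayList<Thread>();
            for (int i = 0; i < 10; i++) {
                ClassImplementingRunnable task = new ClassImplementingRunnable();
                tasks.add(task);
                Thread t = new Thread(task);
                threads.add(t);
                t.start();              // parallel pass
            }
            for (Thread t : threads) {
                t.join();               // wait for every thread to finish
            }
            for (ClassImplementingRunnable task : tasks) {
                task.run();             // serial pass, on the current thread
            }
        }

    If each task must continue from where it left off, keep that state in fields of ClassImplementingRunnable so the second run() call can pick it up.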

  • Cost of logic in a query

    - by FrustratedWithFormsDesigner
    I have a query that looks something like this:

        select xmlelement("rootNode",
                 (case when XH.ID is not null
                       then xmlelement("xhID", XH.ID)
                       else xmlelement("xhID", xmlattributes('true' AS "xsi:nil"), XH.ID)
                  end),
                 (case when XH.SER_NUM is not null
                       then xmlelement("serialNumber", XH.SER_NUM)
                       else xmlelement("serialNumber", xmlattributes('true' AS "xsi:nil"), XH.SER_NUM)
                  end)
                 /* repeat this pattern for many more columns from the same table... */
               )
        FROM XH
        WHERE XH.ID = 'SOMETHINGOROTHER'

    It's ugly and I don't like it, and it is also the slowest-executing query (there are others of similar form, but they are much smaller and aren't causing any major problems - yet). Maintenance is relatively easy as this is mostly a generated query, but my concern now is performance: I am wondering how much overhead all of these case expressions add. To see if there was any difference, I wrote another version of this query as:

        select xmlelement("rootNode",
                 xmlforest(XH.ID, XH.SER_NUM, ...

    (I know this query does not produce exactly the same thing; my plan was to move the logic to PL/SQL or XSL.) I tried to get execution plans for both versions, but they are the same, so I'm guessing the logic does not get factored into the execution plan. My gut tells me the second version should execute faster, but I'd like some way to prove that, other than writing a PL/SQL test function with timing statements before and after the query and running it over and over to get a sample. Is it possible to get a good idea of how much the case expressions will cost? Also, I could write the logic using the decode function instead - would that perform better than case expressions?
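    One way to compare the two without a hand-rolled timing loop is to run each version a few times and read the statistics Oracle already accumulates in v$sql (a sketch; it assumes SELECT privilege on the view, and the LIKE literal should be adjusted to match your actual statement text):

        select sql_text,
               executions,
               elapsed_time / greatest(executions, 1) as avg_elapsed_us,
               cpu_time     / greatest(executions, 1) as avg_cpu_us
        from   v$sql
        where  sql_text like 'select xmlelement("rootNode"%';

    CPU time per execution is where per-row case evaluation would show up, even when the two execution plans are identical.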

  • maven ant echoproperties task

    - by user373201
    I am new to Maven; I have written build scripts using Ant. I am trying to display all the environment properties, user-defined properties, system properties, etc. in Maven. In Ant I could just use the echoproperties task. I tried to do the same in Maven with the maven-antrun-plugin, but I get the following error:

        Embedded error: Could not create task or type of type: echoproperties.
        Ant could not find the task or a class this task relies upon.

    How can I see all properties in Maven, with or without using echoproperties? This is my configuration:

        <plugin>
          <groupId>org.apache.maven.plugins</groupId>
          <artifactId>maven-antrun-plugin</artifactId>
          <version>${maven.plugin.antrun.version}</version>
          <executions>
            <execution>
              <phase>validate</phase>
              <goals>
                <goal>run</goal>
              </goals>
              <configuration>
                <tasks>
                  <echo>Displaying value of properties</echo>
                  <echo>[org.junit.version] ${org.junit.version}</echo>
                  <echoproperties prefix="org" />
                </tasks>
              </configuration>
            </execution>
          </executions>
        </plugin>
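    The antrun plugin ships with a trimmed-down Ant, and echoproperties is one of Ant's optional tasks, so one hedged fix is to give the plugin the optional-tasks jar as a plugin dependency (the version shown is illustrative):

        <plugin>
          <groupId>org.apache.maven.plugins</groupId>
          <artifactId>maven-antrun-plugin</artifactId>
          <version>${maven.plugin.antrun.version}</version>
          <dependencies>
            <dependency>
              <groupId>org.apache.ant</groupId>
              <artifactId>ant-nodeps</artifactId>
              <version>1.8.1</version>
            </dependency>
          </dependencies>
        </plugin>

    Alternatively, mvn help:system prints environment variables and system properties without any Ant involvement.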

  • Arduino variable going blank after first pass.

    - by user541597
    I have an Arduino sketch that takes a timet and, when that timet equals the current time, sets the new timet to timet + 2. For example:

        char* convert(char* x, String y) {
          int hour;
          int minute;
          sscanf(x, "%d:%d", &hour, &minute);
          char buf[6];
          if (y == "6") {
            if (hour > 17) {
              hour = (hour + 6) % 24;
              snprintf(buf, 10, "%d:%d", hour, minute);
            } else if (hour < 18) {
              // hour = hour + 6;
              minute = (minute + 2);
              snprintf(buf, 10, "%d:%d", hour, minute);
            }
          }
          if (y == "12") {
            if (hour > 11) {
              hour = (hour + 12) % 24;
              snprintf(buf, 10, "%d:%d", hour, minute);
            } else if (hour < 12) {
              hour = hour + 12;
              snprintf(buf, 10, "%d:%d", hour, minute);
            }
          }
          if (y == "24") {
            hour = (hour + 24) % 24;
            snprintf(buf, 10, "%d:%d", hour, minute);
          }
          return buf;
        }

    The sketch starts, for example, at 1:00 am with timet set to 1:02; at system time 1:02, timet == system time. My loop looks like this:

        if (timet == currenttime) {
          timet = convert(timet);
        }

    From this point on, whenever I check the value of timet it should equal 1:04. I get the correct value on the first run after convert executes, but every time after that my timet value is blank. I tried changing the code so that instead of the if block I only run the convert function when I send, for example, 't' through the serial monitor; this works fine and outputs the correct timet after convert executes. So I figured the problem is in the if block... Any ideas?
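    The "blank after the first pass" symptom is what a dangling pointer looks like here: buf is a local array, destroyed when convert returns, so the returned pointer is only valid by luck on the first call. (The snprintf calls also pass a size of 10 for a 6-byte buffer, which can overflow it.) A minimal sketch of a fix is to have the caller own the buffer:

        // caller-owned buffer: out must have room for "HH:MM" plus the terminator
        void convert(char* x, String y, char* out, size_t outSize) {
          int hour, minute;
          sscanf(x, "%d:%d", &hour, &minute);
          // ... same hour/minute adjustments as in the question ...
          snprintf(out, outSize, "%d:%d", hour, minute);
        }

        char timet[6] = "1:02";
        char next[6];
        convert(timet, "6", next, sizeof(next));

    Declaring buf as static inside convert would also stop the blanking, at the cost of every caller sharing one buffer.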

  • How to configure an index.htm file in IIS?

    - by salvationishere
    I am running IIS 6.0 on Windows XP, using VS 2008 and SQL Server 2008 (full install). I developed two web apps, both of which I can run from IIS by setting them as the default website. However, I then added an index.htm file - real simple, all it has is two hyperlinks to these web apps - and now only the first web app works. The first web app is pure VS; the second modifies an AdventureWorks database table. When I click the hyperlink for the second web app, it gives me the error below. The error doesn't make sense to me, because I have the two web apps configured as two virtual directories beneath C:\inetpub\, the index.htm file is also beneath C:\inetpub, and the default website is set to home directory C:\inetpub\ with Document index.htm on top. Also, why does the first web app work and not the second?

        Server Error in '/AddFileToSQL' Application.
        The path '/AddFileToSQL/App_GlobalResources/' maps to a directory outside this application, which is not supported.
        Description: An unhandled exception occurred during the execution of the current web request. Please review the stack trace for more information about the error and where it originated in the code.
        Exception Details: System.Web.HttpException: The path '/AddFileToSQL/App_GlobalResources/' maps to a directory outside this application, which is not supported.
        Source Error: An unhandled exception was generated during the execution of the current web request. Information regarding the origin and location of the exception can be identified using the exception stack trace below.

  • WCF publish/subscribe service, and ASP.NET MVC client

    - by d3j4vu
    I managed to develop a custom WCF service using the publish/subscribe model, hosted inside a managed Windows service. Everything's working. I developed an interface as the service contract, with a method definition marked as a non-one-way operation contract ([OperationContract(IsOneWay = false)]). This makes it possible to return an instance of a custom class derived from System.Web.Mvc.ActionResult. In the MVC app, the event fires OK and wraps inside an action method (just the one defined in the interface), but - and this is my current problem - I believe that something related to the execution context of the Windows service (and the hosted WCF counterpart) blocks the execution of the action method in the MVC app. This is what I have until now (some pieces ripped off just to be more clear):

        /// Method definition for the contract's service. Maps to an MVC action method.
        [OperationContract(IsOneWay = false)]
        ActionResult Imagen(string data, CustomActionResult result);

    The class to hold an ActionResult-derived class instance:

        public class ServiceEventArgsMvc : ServiceEventArgs
        {
            public CustomActionResult Result { get; set; }
        }

    And the code in the MVC client app:

        public ActionResult Image(string data, CustomActionResult result)
        {
            ViewData["data"] = data;
            return View();
        }

    OK. The action method successfully executes... but when it's done (where I'd usually expect a redirection to a view named Image, like the action method), the WCF service throws a timeout exception, making it clear that it's still waiting for a response from the MVC client. The response never arrives, so the MVC app never finishes its work (redirecting to the "Image" view as expected). Any ideas? I guess I'm missing something very simple, but I don't know what it could be. This is drivin' me nuts.
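    A plausible reading of the timeout: a request/reply callback (IsOneWay = false) keeps the service-side channel blocked until the client returns a value over that same channel, but an MVC action's return value goes to the MVC pipeline, not back to WCF, so the service waits until it times out. A hedged sketch of one way around this is to make the callback one-way and push the outcome back through a separate service operation (the names here are illustrative, not from the original contract):

        [OperationContract(IsOneWay = true)]
        void Imagen(string data, CustomActionResult result);   // fire-and-forget to the subscriber

        [OperationContract]
        void ReportResult(string data, string outcome);        // client calls back when its action completes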

  • C# and Objects/Classes

    - by user1192890
    I have tried to compile code from Deitel's C# 2010 for Programmers. I copied it exactly out of the book, but the compiler still can't find Main, even though I declared it in one of the classes. Here are the two classes.

    GradeBookTest:

        // Fig. 4.2: GradeBookTest.cs
        // Create a GradeBook object and call its DisplayMessage method.
        public class GradeBookTest
        {
            // Main method begins program execution
            public static void Main(string[] args)
            {
                // create a GradeBook object and assign it to myGradeBook
                GradeBook myGradeBook = new GradeBook();

                // call myGradeBook's DisplayMessage method
                myGradeBook.DisplayMessage();
            } // end Main
        } // end class GradeBookTest

    And the GradeBook class:

        // Fig. 4.1: GradeBook.cs
        // Class declaration with one method.
        using System;

        public class GradeBook
        {
            // display a welcome message to the GradeBook user
            public void DisplayMessage()
            {
                Console.WriteLine("Welcome to the Grade Book!");
            } // end method DisplayMessage
        } // end class GradeBook

    That is how I copied them, and they match the book listings exactly apart from the printed line numbers. I don't see why they are not working. Right now I am using Visual Studio Pro 2010. Any thoughts?
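    The code itself is fine; "can't find Main" usually means the file containing Main isn't part of the compilation (for example, it was added outside the project, or the project was created as a class library). A quick way to rule out project setup, assuming both .cs files sit in one folder, is to compile them together from a Visual Studio command prompt:

        csc GradeBookTest.cs GradeBook.cs
        GradeBookTest.exe

    If that runs and prints the welcome message, the problem is the Visual Studio project configuration (project type, startup object, or which files are included), not the code.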

  • Automating Excel through the PIA makes VBA go squiffy.

    - by Jon Artus
    I have absolutely no idea how to start diagnosing this, and just wondered if anyone had any suggestions. I'm generating an Excel spreadsheet by calling some macros from a C# application, and during the generation process something breaks. I've got a VBA class containing all of my logging/error-handling logic, which I instantiate using a singleton-esque accessor, shown here:

        Private mcAppFramework As csys_ApplicationFramework

        Public Function AppFramework() As csys_ApplicationFramework
            If mcAppFramework Is Nothing Then
                Set mcAppFramework = New csys_ApplicationFramework
                Call mcAppFramework.bInitialise
            End If
            Set AppFramework = mcAppFramework
        End Function

    The above code works fine before I've generated the spreadsheet, but fails afterwards. The problem seems to be the following line,

        Set mcAppFramework = New csys_ApplicationFramework

    which I've never seen fail before. If I add a watch to the variable being assigned here, the type shows as csys_ApplicationFramework/wksFoo, where wksFoo is a random worksheet in the same workbook. What seems to be happening is that while the variable is of the right type, rather than filling that slot with a new instance of my framework class, it's making it point to an existing worksheet instead - the equivalent of

        Set mcAppFramework = wksFoo

    which is a compile error, as one might expect. Even more bizarrely, if I put a breakpoint on the offending line, edit the line, and then resume execution, it works. For example, I delete the word 'New', move off the line, move back, re-type 'New', and resume execution. This somehow 'fixes' the workbook and it works happily ever after, with the type of the variable in my watch window showing as csys_ApplicationFramework/csys_ApplicationFramework, as I'd expect. This implies that manipulating the workbook through the PIA is somehow breaking it temporarily. All I'm doing in the PIA is opening the workbook, calling several macros using Excel.Application.Run(), and saving it again. I can post a few more details if anyone thinks they're relevant. I don't know how VBA creates objects behind the scenes or how to debug this, and I also don't know how the way the code executes can change without the code itself changing. As previously mentioned, VBA has frankly gone a bit squiffy on me... Any thoughts?

  • Visual C++ function suddenly 170 ms slower (4x longer)

    - by Mikael
    For the past few months I've been working on a Visual C++ project to take images from cameras and process them. Up until today this has taken about 65 ms to update the data, but now it has suddenly increased significantly. What happens is: I launch my program, and for the first 30 or so iterations it performs as expected; then suddenly the loop time increases from 65 ms to 250 ms. The odd thing is, after timing each function I found that the part of the code causing the slowdown is fairly basic and has not been modified in over a month. The data that goes into it is unchanged and identical every iteration, but its execution time, initially less than 1 ms, suddenly increases to 170 ms, while the rest of the code still performs as expected (time-wise). Basically, I am calling the same function over and over: for the first 30 calls it performs as it should, and after that it slows down for no apparent reason. It might also be worth noting that it is a sudden change in execution time, not a gradual increase. What could be causing this? The code is leaking some memory (~50 KB/s), but not nearly enough to warrant a sudden 4x slowdown. If anyone has any ideas I'd love to hear them!
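    When a function with identical input jumps from under 1 ms to 170 ms after a fixed number of iterations, the usual suspects sit outside the function itself: heap fragmentation from the leak, working-set trimming, CPU frequency or thermal throttling, or the Windows debug heap. A minimal sketch for pinning down where the step change happens, using a monotonic clock around the suspect call (process_frame is a stand-in for the real function):

        #include <chrono>
        #include <cstdio>

        void process_frame();  // the unmodified function under suspicion

        int main() {
            for (int i = 0; i < 100; ++i) {
                auto t0 = std::chrono::steady_clock::now();
                process_frame();
                auto t1 = std::chrono::steady_clock::now();
                auto us = std::chrono::duration_cast<std::chrono::microseconds>(t1 - t0).count();
                std::printf("iteration %3d: %lld us\n", i, static_cast<long long>(us));
            }
        }

    A step change (versus gradual growth) in the per-iteration times is easy to spot this way, and running the same harness as a Release build outside the debugger rules out the debug heap.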

  • How to catch this low level MySQL (?) error in PHP/Magento

    - by andnil
    When I'm executing the following statement in Magento with a really large $sku, execution terminates without any error thrown whatsoever. There are no errors in Magento's, Apache's, or PHP's error logs.

        Mage::getModel('catalog/product')->loadByAttribute('sku', $sku);

    Question: how do I catch the error? I've tried setting custom error handlers, and for testing purposes I've also managed to trigger error situations where each of the handler functions is invoked. But when running the Magento code above with a large $sku, none of the error-handling functions are executed.

        error_reporting( -1 );
        set_error_handler( array( 'Error', 'captureNormal' ) );
        set_exception_handler( array( 'Error', 'captureException' ) );
        register_shutdown_function( array( 'Error', 'captureShutdown' ) );

    For completeness, this is the $sku I'm passing to loadByAttribute() (the sku is invalid, but that is not the issue):

        1- 9685 0102046|1- 9685 1212100|1- 9685 1212092|1- 9685 1212096|1- 9685 1102100|1- 9685 1102108|1- 9685 1102112|1- 9685 1102092|1- 9685 0102048|1- 9685 0102054|1- 9685 0102056|1- 9685 0102058|1- 9685 1212104|1- 9685 1212108|1- 9685 0212058|1- 9685 0104050|1- 9685 0212050|1- 9685 0212056|1- 9685 0212044|1- 9685 0212048|1- 9685 0212052|1- 9685 0212054|1- 9685 1102104|1- 9685 1102124

    Any insight into this matter is much appreciated!

    Update: upon further investigation, this is the exact point in the code where execution terminates. When the foreach is executed, I guess Magento goes into MySQL territory and starts loading data from the database.

        // \Mage\Catalog\Model\Abstract.php
        public function loadByAttribute($attribute, $value, $additionalAttributes = '*')
        {
            $collection = $this->getResourceCollection()
                ->addAttributeToSelect($additionalAttributes)
                ->addAttributeToFilter($attribute, $value)
                ->setPage(1, 1);

            foreach ($collection as $object) { // <--------------- HERE
                return $object;
            }

            return false;
        }

    Note, I'm ONLY interested in finding out how to properly CATCH these kinds of errors, not "fix" the logic, so that I can present a proper error message to the user. The example above with the malformed sku is contrived, and I have no desire to make my Magento app work with those erroneous skus.
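    If nothing reaches the error logs and even a registered shutdown function never runs, the PHP process itself is likely dying (e.g. a segfault or hard kill), and nothing inside that process can catch it. A hedged workaround is to isolate the risky call in a child PHP process and inspect its exit code from the parent (the script name and logging are illustrative):

        $cmd = 'php risky_load.php ' . escapeshellarg($sku) . ' 2>&1';
        exec($cmd, $output, $exitCode);
        if ($exitCode !== 0) {
            // the child died; log it and show a friendly error instead of a blank page
            Mage::log("loadByAttribute crashed (exit $exitCode): " . implode("\n", $output));
        }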

  • How do I exclude a properties file when deploying

    - by Huy
    I want to include this file when running locally, but exclude it when I deploy. I tried the following, but it doesn't seem to work:

        <plugin>
          <groupId>org.apache.maven.plugins</groupId>
          <artifactId>maven-jar-plugin</artifactId>
          <version>2.3</version>
          <executions>
            <execution>
              <phase>package</phase>
              <goals>
                <goal>jar</goal>
              </goals>
              <configuration>
                <excludes>
                  <exclude>filename.properties</exclude>
                </excludes>
              </configuration>
            </execution>
          </executions>
        </plugin>
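    The excludes are Ant-style patterns matched against paths under target/classes, so a bare file name only matches at the root of the jar; binding a second jar execution can also leave the default-jar execution producing an unfiltered jar alongside it. A hedged first step is to anchor the pattern and move the excludes to the plugin-level configuration so the default execution picks them up:

        <plugin>
          <groupId>org.apache.maven.plugins</groupId>
          <artifactId>maven-jar-plugin</artifactId>
          <version>2.3</version>
          <configuration>
            <excludes>
              <exclude>**/filename.properties</exclude>
            </excludes>
          </configuration>
        </plugin>

    For the "local vs deploy" split, a Maven profile that activates this configuration only for deployment builds keeps the file available during local runs.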

  • Why doesn't sed's automatic printing deliver the expected results?

    - by CodeGnome
    What works: this sed script behaves as intended,

        $ echo -e "2\n1\n4\n3" | sed -n 'h; n; G; p'
        1
        2
        3
        4

    taking a pair of input lines at a time and swapping them. So far, so good.

    What doesn't work: what I don't understand is why I can't use sed's automatic printing. Since sed automatically prints the pattern space at the end of each execution cycle (except when it's suppressed), why is this not equivalent?

        $ echo -e "2\n1\n4\n3" | sed 'h; n; G'
        2
        1
        2
        4
        3
        4

    What I think the code says is:

        1. The input line is copied to the hold space.
        2. The next line is read into the pattern space.
        3. The hold space is appended to the pattern space.
        4. The pattern space (line1 + newline + line2) is printed automatically because we've reached the end of the execution cycle.

    Obviously, I'm wrong... but I don't understand why. Can anyone explain why the second example breaks, and why print suppression is needed to yield the correct results?
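    The step this reasoning misses is that n itself prints: with auto-print enabled, sed's n command first prints the current pattern space and then replaces it with the next input line. Tracing the first cycle of the second example: h saves "2"; n prints "2" (the extra line) and loads "1"; G makes "1\n2"; the end-of-cycle auto-print emits that - hence 2, 1, 2. With -n, n's implicit print is suppressed too, which is why the first script needs the explicit p. An equivalent that keeps auto-print is to read the pair with N (which appends without printing) and swap via a substitution:

        # swap each pair of lines, relying on auto-print only
        $ echo -e "2\n1\n4\n3" | sed 'N; s/\(.*\)\n\(.*\)/\2\n\1/'
        1
        2
        3
        4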

  • iPhone app developed with SDK 4.2 requires backward compatibility with iOS 3.1.3 - easy way?

    - by mrd3650
    I have built an iPhone app with SDK 4.2, but now I also want to make it compatible with iOS 3.1.3. The first step was to set the Deployment Target to 3.1.3. It runs fine on the 3.2 simulator, but the app crashes at times because I'm using some methods which are not available in that earlier SDK. So my question is: is there a straightforward way to locate the offending methods/classes in my project which are not available in 3.1.3, without manually going through each method call and consulting the docs for its SDK availability? Thanks.

    UPDATE: I have run the app on 3.1.3 and attempted to manually test each execution path in the hope of locating all the exceptions. This was completed with some level of success. However, what if the application is huge and there are lots of execution paths? There must be some tool for this scenario. Any thoughts are much appreciated.

  • Slowing process creation under Java?

    - by oconnor0
    I have a single, large-heap (up to 240 GB, though in the 20-40 GB range for most of this phase of execution) JVM [1] running under Linux [2] on a server with 24 cores. We have tens of thousands of objects that have to be processed by an external executable, and we then load the data created by those executables back into the JVM. Each executable produces about half a megabyte of data (on disk), which is of course larger once read back in after the process finishes. Our first implementation had each executable handle only a single object. This involved spawning twice as many executables as we had objects (since we called a shell script that called the executable). Our CPU utilization would start off high, but not necessarily at 100%, and slowly worsen. As we began measuring to see what was happening, we noticed that the process creation time [3] continually slows: starting at sub-second times, it would eventually grow to take a minute or more, while the actual processing done by the executable usually takes less than 10 seconds. Next we changed the executable to take a list of objects to process, in an attempt to reduce the number of processes created. With batch sizes of a few hundred (~1% of our current sample size), process creation times start out around 2 seconds and grow to around 5-6 seconds. Basically, why is it taking so long to create these processes as execution continues?

    [1] Oracle JDK 1.6.0_22
    [2] Red Hat Enterprise Linux Advanced Platform 5.3, Linux kernel 2.6.18-194.26.1.el5 #1 SMP
    [3] Creation of the ProcessBuilder object, redirecting the error stream, and starting it.
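    One candidate explanation (an assumption worth testing, not a diagnosis): on this vintage of JDK and kernel, ProcessBuilder spawns children via fork(), and forking a process with a 20-240 GB heap means duplicating ever-larger page tables, so spawn cost grows with the heap even though the child immediately execs. A hedged workaround is to launch all children through one small broker process started early, before the heap grows, and feed it commands over stdin (the broker protocol here is illustrative):

        import java.io.OutputStreamWriter;
        import java.io.Writer;

        public class BrokerSpawn {
            public static void main(String[] args) throws Exception {
                // start one small broker early, while the JVM heap is still small
                ProcessBuilder pb = new ProcessBuilder("/bin/sh");
                pb.redirectErrorStream(true);
                Process broker = pb.start();
                Writer cmds = new OutputStreamWriter(broker.getOutputStream());

                // later: the big JVM never forks again; the tiny shell does
                cmds.write("/path/to/worker batchfile_0001\n");
                cmds.flush();
            }
        }

    (Later JVMs expose this knob directly via -Djdk.lang.Process.launchMechanism=posix_spawn; on 1.6 a broker is the portable equivalent.)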

  • In sync query calls, one query causes the other query to run slower. Why?

    - by Irchi
    Sorry for the long question, but I think this is an interesting situation and I couldn't find any explanation for it. I was involved in optimizing an application that performed a large number of sequential SELECT and INSERT statements against a single dedicated SQL Server database. The process needs to INSERT a large number of records into a table, but for each of them there are some value mappings, performed using SELECT statements on another table in the same database. For a specific run, it took 90 minutes to complete.

    I used a profiler (JProfiler - the application is Java-based) to determine how much time each part of the application takes. It showed that 60% of the time was spent on INSERT method calls and almost 20% on SELECT calls, with the rest distributed among other parts. After some trials, I came to this situation: I commented out the INSERT query that took 60% of the time. I was expecting the total run time to be around 35 minutes, having removed 60% of the 90 minutes. But the whole process took the same 90 minutes (doing only SELECTs and nothing else) - each SELECT just took longer this time!

    Everything was running synchronously; there were no async calls, and there was only a single thread of execution. The SELECT and INSERT queries are very simple, with nothing special about them; they are on different tables, but in the same DB. I tested with the DB both on the application machine and on a remote network machine. I can't think of any explanation for this, as the profiler (the application profiler, not SQL Profiler) reported the changes in method call times, and removing the INSERT statements made the SELECT statements take longer to run. Can anyone give me some kind of explanation of what could have happened? (It can't be a cache or query-optimization effect, because the queries ran synchronously in a single thread, far from affecting the cache this much.) I should note that the speed bottleneck was in SQL Server, which was using most of the CPU time.

  • I don't know how to solve this error

    - by wide
    It works locally; when I load it on the server, I get this error:

        Using themed css files requires a header control on the page. (e.g. <head runat="server" />).
        Description: An unhandled exception occurred during the execution of the current web request. Please review the stack trace for more information about the error and where it originated in the code.
        Exception Details: System.InvalidOperationException: Using themed css files requires a header control on the page. (e.g. <head runat="server" />).
        Source Error: An unhandled exception was generated during the execution of the current web request. Information regarding the origin and location of the exception can be identified using the exception stack trace below.

        Stack Trace:
        [InvalidOperationException: Using themed css files requires a header control on the page. (e.g. <head runat="server" />).]
           System.Web.UI.PageTheme.SetStyleSheet() +2458406
           System.Web.UI.Page.OnInit(EventArgs e) +8699420
           System.Web.UI.Control.InitRecursive(Control namingContainer) +333
           System.Web.UI.Page.ProcessRequestMain(Boolean includeStagesBeforeAsyncPoint, Boolean includeStagesAfterAsyncPoint) +378
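    The theme's CSS has to be injected into a head element that ASP.NET sees as a server control, so the usual fix (assuming the page, or its master page, is missing it) is:

        <head runat="server">
            <title>My page</title>
        </head>

    A common reason it works locally but not on the server is that the server's web.config applies a theme globally (<pages theme="..."> or styleSheetTheme="...">) that the local configuration doesn't, so only the deployed site demands the server-side head.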

  • Automated browser testing: How to test JavaScript in web pages?

    - by Dave
    I am trying to write an application that will test a series of web pages programmatically. The pages being tested have JavaScript embedded within them which alters the structure of the HTML when it completes execution. The goal is then to take the final HTML (after the embedded JavaScript has executed) and compare it against a known output. Essentially, the input/output for the test application is:

        URL ---[retrieve HTML]--> HTML ---[execute JS, then compare]--> PASS/FAIL

    Here is the challenge: I have been unable to find a solution that can take the HTML I retrieve from the URL and process the JavaScript as a browser would, generating the final HTML a user might see from "View Source" on the same page within the browser. It would be very surprising if this sort of approach has not been tried before, so I'm hoping someone out there knows of a fitting solution to this problem. If at all possible, I'm hoping for a solution that integrates with .NET (I've tried using the WebBrowser control, with no luck). However, an existing third-party application that can do exactly this would be quite acceptable. Thanks in advance for the suggestions! Dave
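    For what it's worth, the WinForms WebBrowser control can do this, but only on an STA thread with a running message pump, which is the usual reason it "doesn't work" from a console app or test runner. A hedged sketch of the pattern (the URL is a placeholder):

        using System;
        using System.Windows.Forms;

        class RenderedHtml
        {
            [STAThread]
            static void Main()
            {
                string html = null;
                var browser = new WebBrowser { ScriptErrorsSuppressed = true };
                browser.DocumentCompleted += (s, e) =>
                {
                    // fires once per frame; only capture when the top document is done
                    if (browser.ReadyState == WebBrowserReadyState.Complete)
                    {
                        html = browser.Document.Body.Parent.OuterHtml; // post-JS DOM
                        Application.ExitThread();
                    }
                };
                browser.Navigate("http://example.com/page-under-test");
                Application.Run(); // pump messages so the page's scripts execute
                Console.WriteLine(html);
            }
        }

    One caveat: pages that mutate the DOM on timers need an extra settle delay after DocumentCompleted before snapshotting.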

  • How to delay program for a certain number of milliseconds, or until a key is pressed?

    - by Jack
    I need to delay my program's execution for a specified number of milliseconds, but I also want the user to be able to cut the wait short by pressing a key. If no key is pressed, the program should wait for the full specified time. I have been using Thread.Sleep to halt the program (which in the context of my program I think is OK, as the UI is minimised during the execution of the main method). I have thought about doing something like this:

        while (GetAsyncKeyState(System.Windows.Forms.Keys.Escape) == 0 && waitTime < totalWait)
        {
            Thread.Sleep(100);
            waitTime += 100;
        }

    Since Thread.Sleep waits at least the time specified before waking the thread up, there will obviously be a large unwanted extra delay as the error accumulates in the while loop. Is there some sort of method that will sleep for a specified amount of time, but only while a condition holds true? Or is the above example the "correct" way to do it, just with a more accurate sleep method? If so, what method can I use? Thanks in advance for your help.
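    A tidier primitive for "sleep until timeout or signal" is a wait handle: WaitOne blocks for up to the timeout and returns the moment the event is set, with no polling drift. A minimal sketch, assuming a separate key-watcher thread is acceptable:

        using System;
        using System.Threading;

        class InterruptibleDelay
        {
            static readonly ManualResetEvent Cancel = new ManualResetEvent(false);

            static void Main()
            {
                // watcher: set the event when Escape is pressed
                new Thread(() =>
                {
                    while (Console.ReadKey(true).Key != ConsoleKey.Escape) { }
                    Cancel.Set();
                }) { IsBackground = true }.Start();

                // waits 5000 ms, or returns true immediately once Escape is hit
                bool skipped = Cancel.WaitOne(5000);
                Console.WriteLine(skipped ? "wait skipped" : "wait completed");
            }
        }

    This reads keys from the console; for a minimised WinForms app, the GetAsyncKeyState polling from the question may still be the pragmatic choice, just with a shorter sleep interval to bound the overshoot.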

  • SAS V9.1.3 - Error when combining %INC and CALL EXECUTE

    - by Mark
    Hi, I am getting a resolution error with some SAS v9.1.3 code. Here is the code I want to store in a .txt file (called problem2.txt) and bring into SAS with a %INC:

        %macro email020;
          %if &email = 1 %then %do;
            %put THIS RESOLVED AT 1;
          %end;
          %else %if &email = 2 %then %do;
            %put THIS RESOLVED AT 2;
          %end;
          %put _user_;
        %mend email020;
        %email020;

    Then this is the main code:

        filename problem2 'C:\Documents and Settings\Mark\My Documents\problem2.txt';

        %macro report1;
          %let email = 1;
          %inc problem2;
        %mend report1;

        %macro report2 (inc);
          %let email = 2;
          %inc problem2;
        %mend report2;

        data test;
          run = 'YES';
        run;

        data _null_;
          set test;
          call execute("%report1");
          call execute("%report2");
        run;

    The log shows:

        NOTE: CALL EXECUTE generated line.
        1   + %inc problem2;
        MLOGIC(EMAIL020): Beginning execution.
        WARNING: Apparent symbolic reference EMAIL not resolved.
        ERROR: A character operand was found in the %EVAL function or %IF condition where a
               numeric operand is required. The condition was: &email = 1
        ERROR: The macro EMAIL020 will stop executing.
        MLOGIC(EMAIL020): Ending execution.

    So the question is: why does CALL EXECUTE generate %inc problem2 rather than %report1, causing SAS to miss the assignment, and what can I do about it?
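    Because "%report1" sits in double quotes, the macro executes while CALL EXECUTE is still building its argument string: the %let runs immediately, the leftover %inc is what gets pushed onto the input stack, and &email is out of scope by the time email020 runs there. The commonly documented fix is to wrap the call in %nrstr so the macro executes when the generated text executes:

        data _null_;
          set test;
          call execute('%nrstr(%report1)');
          call execute('%nrstr(%report2)');
        run;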

  • SAN performance issues storing SQL Server tempdb on a SAN that's being backed up

    - by user42724
    I'm afraid I don't know much about SANs, so please forgive my lack of detail or technical terms. As a developer I've just completed and deployed a new application on an existing production system, but it would appear to have tipped the scales regarding the performance of the backups being taken from the SAN. As I understand it, a mirror of the SAN is being taken more or less constantly at the block level. However, there seem to be so many new writes to disk that the SAN mirroring/backup process can no longer keep up. I believe I've narrowed this down to SQL Server's tempdb, which lives on a drive that contributes the largest portion of the problem! In fact, I think tempdb has been contributing the largest portion of the issue all along, regardless of my application! My question therefore is whether tempdb should ever be mirrored or backed up on the SAN, and whether anyone else has gone through this sort of pain already. I'm wondering whether it's best practice to make sure tempdb is never mirrored on a SAN, simply because any writes to it don't need to be saved. This also raises a slightly connected question: is it better to rely on SQL Server's built-in backup tools (database in full-recovery mode with full/differential and transaction-log backups), or, as is the case with our application, to run SQL Server in simple recovery mode and never back it up, since the SAN is mirrored and backed up? Many thanks

  • SQL Server Backup File Significantly Smaller After Table Recreation

    - by userx
    We run automated weekly backups of our SQL Server. The database in question is configured for simple recovery, and we take full (not differential) backups. Recently, we had to re-create one of our tables with its data in it (making two varchar fields a couple of characters longer). This required running a script which created a new table, copied the data over, and then dropped the old one. This worked correctly. Oddly, though, our weekly backup files then SHRANK by over 75%! The tables don't have large indexes. All data was copied over correctly (and verified). I've confirmed that we are doing full, not incremental, backups, and the new files restore just fine. I can't figure out why the backup files shrank so much. I've also noticed that they get about 10 MB larger every week, even though less than that amount of data is being added. I'm guessing that I'm simply not understanding something. Any insight would be appreciated.

  • Slow network file transfer (under 20KB/s) on newly built x64 Win7

    - by Mangoshake
    I am getting <20KB/s for local network file transfers. If I transfer a very small file (less than 100KB), it starts quickly and then slows to <20KB/s; all subsequent network file transfers are then slow, and a reboot is needed to reset this. If I transfer a large file, it is stuck on calculating for a long time and then begins at <20KB/s immediately. This is a newly built desktop running Windows 7 x64 SP1, with Realtek gigabit LAN from the motherboard (an ASRock Extreme3 Gen3). The problematic speed is observed on the private LAN, both over ethernet and WiFi; the router is a D-Link DIR-655. Remote Differential Compression is off, and drivers are up to date from ASRock's website. I have tested network file transfers to and from another Windows 7 laptop and a MacBook Pro, so I am fairly certain it is the desktop's problem. The slow speed also happens in only one direction - outbound from the desktop - regardless of whether I initiate the file transfer from the origin or the destination. Inbound network file transfers and internet speeds are fine, so I don't think this is a hardware issue: I am getting 74.8MB/s internet upload speed from speedtest.net (http://www.speedtest.net/result/1852752479.png), and inbound network file transfers run around 10-15MB/s. I am hoping this community has some insight to help me troubleshoot this. I don't see anything obviously related in the Event Viewer, and beyond that I just don't know where else to look. Any suggestions are greatly appreciated; thank you in advance.

  • Tidy up old Windows Server Backup snapshots

    - by dty
    Hi, I'm running wbadmin from a scheduled job, backing up my C: and D: drives to my E: and (I believe!) including the system state:

        wbadmin start backup -backuptarget:e: -include:c:,d: -allCritical -noVerify -quiet

    I'd like to delete old backups, but I'm concerned because all the information I can find says to use wbadmin to delete old system state backups, and vssadmin to delete other backups. As far as I know, my backups ARE system state backups, but they use VSS on E: for storage, so I'm worried about trying either of these techniques for fear of losing all my backups. This is a home network, so I don't have a spare server to test this on. I'm also happy to simply restrict the space used on E:, but I can't make sense of the difference between the /for and /on parameters of the relevant vssadmin command. For reference, here's the output of vssadmin show shadows:

        Contents of shadow copy set ID: {xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx}
        Contained 1 shadow copies at creation time: 07/01/2011 08:12:05
           Shadow Copy ID: {xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx}
           Original Volume: (E:)\\?\Volume{xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx}\
           Shadow Copy Volume: \\?\GLOBALROOT\Device\HarddiskVolumeShadowCopy83
           Originating Machine: x.y.com
           Service Machine: x.y.com
           Provider: 'Microsoft Software Shadow Copy provider 1.0'
           Type: DataVolumeRollback
           Attributes: Persistent, No auto release, No writers, Differential

        [... repeated a lot ...]

    vssadmin show shadowstorage:

        Shadow Copy Storage association
           For volume: (C:)\\?\Volume{xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx}\
           Shadow Copy Storage volume: (C:)\\?\Volume{xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx}\
           Used Shadow Copy Storage space: 0 B
           Allocated Shadow Copy Storage space: 0 B
           Maximum Shadow Copy Storage space: 5.859 GB

        Shadow Copy Storage association
           For volume: (D:)\\?\Volume{xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx}\
           Shadow Copy Storage volume: (D:)\\?\Volume{xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx}\
           Used Shadow Copy Storage space: 0 B
           Allocated Shadow Copy Storage space: 0 B
           Maximum Shadow Copy Storage space: 40.317 GB

        Shadow Copy Storage association
           For volume: (E:)\\?\Volume{xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx}\
           Shadow Copy Storage volume: (E:)\\?\Volume{xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx}\
           Used Shadow Copy Storage space: 168.284 GB
           Allocated Shadow Copy Storage space: 171.15 GB
           Maximum Shadow Copy Storage space: UNBOUNDED

    wbadmin get versions:

        Backup time: 07/01/2011 03:00
        Backup target: 1394/USB Disk labeled xxxxxxxxx(E:)
        Version identifier: 01/07/2011-03:00
        Can Recover: Volume(s), File(s), Application(s), Bare Metal Recovery, System State

        [... repeated a lot ...]
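    On the /for vs /on question: /for names the volume whose shadow copies are being stored (the one being protected), while /on names the volume that physically holds the diff-area storage. In this output both are E:, since the backups of C: and D: live inside shadow copies of E:. So a hedged way to cap the space (the size is an example) would be:

        vssadmin resize shadowstorage /for=E: /on=E: /maxsize=100GB

    Older shadow copies on E: are then pruned automatically as the diff area hits the cap, which is usually safer than deleting snapshots by hand.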

  • Slow Local Network, Windows 7, Snow Leopard, WiFi/Wired

    - by WerkkreW
    Hello - I am experiencing really poor local network performance in my home. I was recently using a Linksys WRT54G router with DD-WRT on it and a couple of comparable Linksys-G PCI cards for connectivity, but decided to upgrade, hoping it would help with my performance issues. The computers in my house are connected as follows:

        Comcast Business Class Commercial 25mbps/10mbps (verified with SpeakEasy and Speedtest.net)
        D-Link DGL-4500 Wireless N router
        Windows 7 x64 - D-Link DWA-552 Wireless-N
        Windows 7 x64 - D-Link DWA-552 Wireless-N
        Mac Mini 10.6.2 - AirPort Extreme N
        PlayStation 3, hard-wired
        Xbox 360, hard-wired

    Essentially the problem is very specific: web browsing and uploading/downloading files from the internet are fine, more than fine. But if I want to, say, stream a video from one of my Windows 7 computers to my PS3, or copy a large video file between either of the PCs or the Mac, I get a consistent 500-900Kbps throughput at the high end. If I open my network browser or try to browse my homegroup, the response time is horrible. Both of my Windows computers show strong wireless signals with a connection speed of 300Mbps. I know I can never expect to achieve anything near those speeds, but 500Kbps? Here is what I have tried so far:

        - Enabled single-mode N-only and N/G-only on the router
        - WPA2 with AES encryption
        - Disabled "Remote Differential Compression" in Windows 7
        - Disabled TCP "auto-tuning"
        - Used other software for file copies, such as TeraCopy

    I am at the end of my rope. Unfortunately I live in a 75-year-old home with plaster walls, so hard-wiring my entire house isn't really an option I can handle right now. Any ideas to help me get decent speed when transferring files across my network would be greatly appreciated.
