Search Results

Search found 24301 results on 973 pages for 'execution process mfg'.

  • SQL Table stored as a Heap - the dangers within

    - by MikeD
    Nearly all of the time I create a table, I include a primary key, and often that PK is implemented as a clustered index. Those two don't always have to go together, but in my world they almost always do. On a recent project, I was working on a data warehouse and a set of SSIS packages to import data from an OLTP database into the warehouse. The data I was importing from the business database was mostly new rows, sometimes updates to existing rows, and sometimes deletes. I decided to use the MERGE statement to implement the inserts, updates, and deletes in the data warehouse. I found it quite performant to have one stored procedure that extracted all the new, updated, and deleted rows from the source database and dumped them into a working table in the warehouse, and then to run a stored proc in the warehouse whose MERGE statement took the rows from the working table and applied them to the real fact table.

        USE Warehouse

        CREATE TABLE Integration.MergePolicy (
            PolicyId int,
            PolicyTypeKey int,
            Premium money,
            Deductible money,
            EffectiveDate date,
            Operation varchar(5)
        )

        CREATE TABLE fact.Policy (
            PolicyKey int identity primary key,
            PolicyId int,
            PolicyTypeKey int,
            Premium money,
            Deductible money,
            EffectiveDate date
        )

        CREATE PROC Integration.usp_MergePolicy
        as
        begin
            begin tran
            Merge fact.Policy as tgt
            Using Integration.MergePolicy as Src
            On (tgt.PolicyId = Src.PolicyId)
            When not matched by Target then
                Insert (PolicyId, PolicyTypeKey, Premium, Deductible, EffectiveDate)
                values (src.PolicyId, src.PolicyTypeKey, src.Premium, src.Deductible, src.EffectiveDate)
            When matched and src.Operation = 'U' then
                Update set PolicyTypeKey = src.PolicyTypeKey,
                           Premium = src.Premium,
                           Deductible = src.Deductible,
                           EffectiveDate = src.EffectiveDate
            When matched and src.Operation = 'D' then
                Delete
            ;
            delete from Integration.MergePolicy
            commit
        end

    Notice that my worktable (Integration.MergePolicy) doesn't have a primary key or a clustered index. I didn't think this would be a problem, since it was a relatively small table and was empty after each run of the stored proc. For one of the work tables, during the initial loads of the warehouse, about 1.5 million rows were inserted, processed, then deleted. Also, because of a bug in the extraction process, the same 1.5 million rows (plus a few hundred more each time) were being inserted, processed, and deleted repeatedly. This was being done on a fairly hefty server that was otherwise unused, and no one was paying any attention to the time it was taking.

    This week I received a backup of this database and loaded it on my laptop to troubleshoot the problem, and of course it took a good ten minutes or more to run the process. What seemed strange, though, was that after I fixed the problem and happened to run the merge sproc when the work table was completely empty, it still took almost ten minutes to complete. I immediately looked back at the MERGE statement to see if I had some sort of outer join that meant it would be scanning the target table (which had about 2 million rows in it), then turned on the execution plan output to see what was happening under the hood. Running the stored procedure again took a long time, and the plan output didn't show me much - 55% on the MERGE statement, 45% on the DELETE statement, and table scans on the work table in both places. I was surprised at the relative cost of the DELETE statement, because there were really 0 rows to delete, but I was expecting to see the table scans.
    I was beginning to suspect that my problem was because the work table was being stored as a heap. Then I turned on STATISTICS IO and ran the sproc again. The output was quite interesting:

        Table 'Worktable'. Scan count 0, logical reads 0, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
        Table 'Policy'. Scan count 0, logical reads 0, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
        Table 'MergePolicy'. Scan count 1, logical reads 433276, physical reads 60, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.

    I've reproduced the above from memory, so the details aren't exact, but the essential bit was the very high number of logical reads on the table stored as a heap. Even a plain SELECT COUNT(*) FROM Integration.MergePolicy incurred that sort of I/O, even though the result was always 0. I suppose I should research more on the allocation and deallocation of pages for tables stored as a heap, but I haven't, and my original assumption that a heap with no rows would only need to read one page to answer any query was definitely proven wrong. It's likely that some sort of physical defragmentation of the table would have cleaned that up, but it seemed that the easiest answer was to put a clustered index on the table. After doing so, the execution plan showed a clustered index scan, and the IO stats showed only a single page read. (I aborted my first attempt at adding a clustered index to the table because it was taking too long - instead I ran TRUNCATE TABLE Integration.MergePolicy first and then added the clustered index, both of which took very little time.) I suspect I may not have noticed this at all if I had used TRUNCATE TABLE Integration.MergePolicy instead of DELETE FROM Integration.MergePolicy, since I'm guessing the truncate operation releases the pages allocated to the heap rather quickly. In the future, I will be much more careful to have a clustered index on every table I use, even the working tables. Mike
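
    For reference, a minimal sketch of the fix described above - the index name and key column are illustrative, not taken from the original project:

        -- empty the worktable first so the index build has nothing to sort
        TRUNCATE TABLE Integration.MergePolicy;

        -- any reasonable clustering key will do; PolicyId is an assumption here
        CREATE CLUSTERED INDEX CIX_MergePolicy ON Integration.MergePolicy (PolicyId);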

  • Lucid hangs at booting after kernel upgrade

    - by Thomas Deutsch
    This weekend, one of our servers running Lucid installed some upgrades:

        libgcrypt11 1.4.4-5ubuntu2.1
        linux-firmware 1.34.14
        linux-image-2.6.32-41-generic 2.6.32-41.91
        linux-libc-dev 2.6.32-41.91

    Afterwards it rebooted, since this was a kernel upgrade. Now it hangs at boot, after /scripts/init-bottom. init-bottom itself should not be the problem; the last line I can see is "done", so the trouble has to start shortly after that. http://manpages.ubuntu.com/manpages/hardy/man8/initramfs-tools.8.html tells me that in the next step, procfs and sysfs are moved to the real rootfs and execution is turned over to the init binary, which should now be found in the mounted rootfs. But I don't know how and where that goes wrong. The problem exists with older kernels too, and the workaround described here doesn't fix it: http://www.tummy.com/journals/entries/jafo_20111003_160440 Does anyone have an idea?

  • What are the 'must know' GDB commands?

    - by Chris Smith
    I'm starting to get the hang of GDB, but everything still feels much slower than when debugging in Eclipse or Visual Studio. Are there any GDB commands you find particularly useful/productive? My life became dramatically better when I discovered:

        list - display source code near the current instruction

    But that is still pretty basic. (And unnecessary when running GDB from Emacs.) Is there any way to do things like set up a watch window? (Print and update the result of an expression every time execution stops.)
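
    For the "watch window" part specifically, GDB's display command comes close - a small sketch, where the file, line number, and variable names are placeholders:

        (gdb) break main.c:42        # stop somewhere interesting
        (gdb) run
        (gdb) display counter        # re-print 'counter' every time execution stops
        (gdb) display/x flags        # the same, formatted as hex
        (gdb) next                   # each step now shows both expressions
        (gdb) info display           # list the active auto-displays
        (gdb) undisplay 1            # remove auto-display number 1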

  • svn commit is hung at start of commit

    - by jwhitlock
    I'm committing a large changeset, including a large binary file (180 MB), over a slow VPN connection. It looks for all the world like it is stalled. How can I diagnose where it is stuck? The output is:

        $ svn commit -m "My commit message"
        Connecting to deprecated signal QDBusConnectionInterface::serviceOwnerChanged(QString,QString,QString)

    Local subversion is 1.6.9 on Linux, KDE 4.3, and svn status shows:

        ML     .
         L     ws
        M      ws/manage.py
         L     ws/locales
         L     ws/locales/ja_JP
         L     ws/locales/ja_JP/LC_MESSAGES

    The process isn't using much of any resource. The server is Linux, served by Apache and mod_dav_svn, same subversion 1.6.9. I can't see any process that is handling the commit.
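
    One way to see what a hung client is actually doing - a sketch, assuming Linux and a single running svn process:

        # attach to the svn client and watch its system calls
        strace -f -p "$(pgrep -x svn)"

        # or check what kernel function the process is blocked in,
        # and which files/sockets it has open
        cat /proc/$(pgrep -x svn)/wchan; echo
        ls -l /proc/$(pgrep -x svn)/fd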

  • IE browser doesn't close and file download popup needs focus

    - by jkranthi
    Hi all, I am trying to click on a link once it becomes active; clicking it then produces a File Download popup. I have 2 problems here. 1) I start the code and leave it. After a long process it waits for the link to become active. Once the link is active it clicks the link, a download popup opens (if everything goes well), and then it hangs there (flashing yellow in the task bar, which means I have to click on the Internet Explorer window before it will process whatever comes next). Every time the download popup appears I have to click on IE myself. Is there a way to handle this, or am I doing something wrong? 2) Even if I click on IE, it doesn't close even though I call ie.close. My code is below:

        # if the link is active
        ie.link(:text, a).click_no_wait
        prompt_message = "Do you want to open or save this file?"
        window_title = "File Download"
        save_dialog = WIN32OLE.new("AutoItX3.Control")
        save_dialog.WinGetText(window_title)
        save_dialog_obtained = save_dialog.WinWaitActive(window_title)
        save_dialog.WinKill(window_title)
        # end
        # some more code - normal puts statements
        ie.close

    ie is hanging up for some strange reason...?

  • Funny statements, quotes, phrases, and errors found in technical books [closed]

    - by Felipe Fiali
    I found some funny or redundant statements in technical books I've read, and I'd like to share them. And I mean good, serious, technical books. OK, so starting it all:

    The .NET Framework doesn't support teleportation - from the MCTS 70-536 Training Kit book, .NET Framework 2.0 Application Development Foundation: "Teleportation in science fiction is a good example of serialization (though teleportation is not currently supported by the .NET Framework)."

    C# 3 is sexy - from Jon Skeet's C# in Depth, Second Edition: "You may be itching to get on to the sexy stuff from C# 3 by this point, and I don't blame you."

    Instantiating a class - from Introduction to Development II in Microsoft Dynamics AX 2009: "To instantiate a class, is to create a new instance of it."

    Continue or break - from Introduction to Development II in Microsoft Dynamics AX 2009: "Continue and break commands are used within all three loops to tell the execution to break or continue."

    These are just a few. I'll post some more later. Share some that you might have found too.

  • GTK and Glade: need help

    - by shiv garg
    I am using Glade to make a user interface and have successfully generated the .glade file. Now I have to use this file from my C code. I am using the following code:

        #include <stdlib.h>
        #include <gtk/gtk.h>

        int main (int argc, char *argv[])
        {
            GtkBuilder *builder;   /* gtk_builder_new() returns a GtkBuilder, not a GtkWidget */
            GtkWidget *window, *button;

            gtk_init (&argc, &argv);

            builder = gtk_builder_new ();
            gtk_builder_add_from_file (builder, "shiv.glade", NULL);
            window = GTK_WIDGET (gtk_builder_get_object (builder, "window1"));
            button = GTK_WIDGET (gtk_builder_get_object (builder, "button1"));
            g_object_unref (G_OBJECT (builder));

            gtk_widget_show (button);
            gtk_widget_show (window);
            gtk_main ();
            return 0;
        }

    My UI is a simple window containing a button, without any callback functions. I get the following error on execution:

        GTK-CRITICAL **: IA__gtk_widget_show: assertion 'GTK_IS_WIDGET(widget)' failed
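
    That assertion usually means one of the gtk_builder_get_object() lookups returned NULL. A sketch of the checks that typically localize it - the object ids "window1" and "button1" are the ones from the code above:

        GError *error = NULL;
        if (!gtk_builder_add_from_file (builder, "shiv.glade", &error)) {
            g_printerr ("Failed to load UI: %s\n", error->message);   /* bad path or bad XML */
            g_error_free (error);
            return 1;
        }
        if (window == NULL || button == NULL)
            g_printerr ("window1/button1 not found - check the ids in shiv.glade\n");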

  • mysql_connect() causes page to not display (WAMP)

    - by VOIDHand
    I currently have a website running MySQL and PHP on a live server, where all of this works. I've installed WampServer so I can work on the site on my own computer, but I have been having trouble getting the site to work correctly there. Plain HTML and PHP files work (by going to http://localhost/index.php, etc.), but some of the pages display "This webpage is not available" (in Chrome) or "Internet Explorer cannot display this webpage", which leads me to think there is an error in the server: when a page simply doesn't exist, Chrome instead displays "Oops! This link appears to be broken" and IE its standard 404 page. Each page in my site has an include() call that sets up an instance of a class to handle all database transactions, and this class opens the connection to the database in its constructor. I have found that commenting out the contents of the constructor lets the page load, although this understandably causes errors later in the page. This is the contents of the included file:

        class dbAccess {
            private $db;

            function __construct() {
                $this->db = mysql_connect("externalURL", "user", "password")
                    or die("Connection for database not found.");
                mysql_select_db("dbName", $this->db)
                    or die("Database not selected");
            }
            ...
        }

    As a note on what's happening here: I'm attempting to use the database on my live site, rather than a local one. The actual values of externalURL, dbName, user, and password work when I plug them into my database software and in the site's own files, so I don't think the issue is with the values themselves, but rather with some setting in Apache or Wamp. The issue persists if I try a local database as well. This is an excerpt of my Apache error log for one attempt at logging into the site:

        [Wed Apr 14 15:32:54 2010] [notice] Parent: child process exited with status 255 -- Restarting.
        [Wed Apr 14 15:32:54 2010] [notice] Apache/2.2.11 (Win32) PHP/5.3.0 configured -- resuming normal operations
        [Wed Apr 14 15:32:54 2010] [notice] Server built: Dec 10 2008 00:10:06
        [Wed Apr 14 15:32:54 2010] [notice] Parent: Created child process 1756
        [Wed Apr 14 15:32:55 2010] [notice] Child 1756: Child process is running
        [Wed Apr 14 15:32:55 2010] [notice] Child 1756: Acquired the start mutex.
        [Wed Apr 14 15:32:55 2010] [notice] Child 1756: Starting 64 worker threads.
        [Wed Apr 14 15:32:55 2010] [notice] Child 1756: Starting thread to listen on port 80.

    I've tried searching for solutions online but haven't been able to find anything that helps. If you need any further information to help solve this issue, feel free to ask for it. (One more note: I don't even have Skype on this computer, so I can't see it being the issue, even though that conflict seems to be the default answer for any Wamp problem.) [Edit: Removed an entry from the error log as it was solved as an unrelated issue]
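
    When an Apache child dies with status 255 like this, a standalone test script often surfaces the real error instead of a crashed page - a sketch, using the same placeholder connection values as the question:

        <?php
        // connection-test.php - run outside the class to isolate mysql_connect()
        error_reporting(E_ALL);
        ini_set('display_errors', '1');

        $db = mysql_connect("externalURL", "user", "password");
        if (!$db) {
            die("connect failed: " . mysql_error());
        }
        echo "connected as expected";
        ?>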

  • How to delete multiple files with msbuild/web deployment project?

    - by Alex
    I have an odd issue with how msbuild behaves with a VS2008 Web Deployment Project and would like to know why it seems to misbehave at random. I need to remove a number of files from a deployment folder that should only exist in my development environment. The files were generated by the web application during dev/testing and are not included in my Visual Studio project/solution. The configuration I am using is as follows:

        <!-- Partial extract from Microsoft Visual Studio 2008 Web Deployment Project -->
        <ItemGroup>
          <DeleteAfterBuild Include="$(OutputPath)data\errors\*.xml" /> <!-- Folder 1: 36 files -->
          <DeleteAfterBuild Include="$(OutputPath)data\logos\*.*" />    <!-- Folder 2: 2 files -->
          <DeleteAfterBuild Include="$(OutputPath)banners\*.*" />       <!-- Folder 3: 1 file -->
        </ItemGroup>
        <Target Name="AfterBuild">
          <Message Text="------ AfterBuild process starting ------" Importance="high" />
          <Delete Files="@(DeleteAfterBuild)">
            <Output TaskParameter="DeletedFiles" PropertyName="deleted" />
          </Delete>
          <Message Text="DELETED FILES: $(deleted)" Importance="high" />
          <Message Text="------ AfterBuild process complete ------" Importance="high" />
        </Target>

    The problem is that when I do a build/rebuild of the Web Deployment Project, it sometimes removes all the files but other times removes nothing at all! Or it removes only one or two of the three folders in the item group. There seems to be no consistency in when the build process decides to remove the files. When I edited the configuration to include only Folder 1 (for example), it removed all the files correctly. Then, after adding Folders 2 and 3, it kept removing all the files as I wanted. Then, at seemingly random times, I'd rebuild the project and it wouldn't remove any of the files! I have tried moving these items to the ExcludeFromBuild item group (which is probably where they belong), but it gives me the same unpredictable result. Has anyone experienced this? Am I doing something wrong? Why does this happen?
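
    One thing worth checking - a sketch, not a verified fix: top-level item groups are evaluated when MSBuild loads the project file, so the wildcards may be expanded before the build has produced (or re-produced) the files. Creating the items inside the target defers that expansion to the moment the target actually runs; in MSBuild 3.5 the CreateItem task does this:

        <Target Name="AfterBuild">
          <!-- expand the wildcards now, at target execution time, not at project load -->
          <CreateItem Include="$(OutputPath)data\errors\*.xml;$(OutputPath)data\logos\*.*;$(OutputPath)banners\*.*">
            <Output TaskParameter="Include" ItemName="FilesToDelete" />
          </CreateItem>
          <Delete Files="@(FilesToDelete)" />
        </Target>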

  • Integrating Jython with CPython

    - by eric.frederich
    I am about to begin a project where I will likely use PyQt or PySide. I will need to interface with a buggy third-party piece of server software that provides C++ and Java APIs. The Java APIs are a lot easier to use because you get exceptions, whereas with the C++ libraries you get segfaults. Also, Python bindings for the Java APIs come automatically with Jython, whereas Python bindings for the C++ APIs don't exist. So how would a CPython PyQt client application be able to communicate with these Java APIs? How would you go about it? Would you have a separate Java process on the client that serializes/pickles objects and communicates with the PyQt process over a socket? I don't want to re-invent the wheel... is there some sort of standard interface for these types of things? Some technology I should look into? RPC, CORBA, etc? Thanks, ~Eric
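
    One low-friction variant of the socket idea is to run the vendor API under Jython and expose it over XML-RPC, which both interpreters have in their 2.x standard libraries - a sketch, where the port, the wrapper function, and its return value are all placeholders:

        # bridge.py - run under Jython, which can import the vendor's Java classes
        from SimpleXMLRPCServer import SimpleXMLRPCServer

        def get_status(item_id):
            # call the vendor's Java API here; a Java exception becomes an XML-RPC fault
            return "ok: %s" % item_id      # placeholder for the real call

        server = SimpleXMLRPCServer(("127.0.0.1", 8765))
        server.register_function(get_status)
        server.serve_forever()

        # client.py - run under CPython, alongside the PyQt code
        import xmlrpclib
        proxy = xmlrpclib.ServerProxy("http://127.0.0.1:8765")
        print proxy.get_status("widget-42")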

  • Why does mysqldump need to be fully pathed when called from a controller or model?

    - by Kris
    When I call mysqldump from a controller or model I need the full path to the binary; when I call it from a Rake task I don't. If I don't use the full path, I get a zero-byte file... I can confirm both processes run as the same user.

        # Works in a controller, model and Rake task
        system "/usr/local/mysql/bin/mysqldump -u root #{w.database_name} > #{target_file}"

        # Only works in a Rake task
        system "mysqldump -u root #{w.database_name} > #{target_file}"

    If I call the Rake task from the action, it also fails (zero-byte file). OS: Mac. Ruby 1.8.6. EDIT: I use Etc.getpwuid(Process.uid).name to get the user of the current process.
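
    The usual suspect here is that the server process inherits a different PATH than the login shell Rake runs under, so the bare mysqldump is simply not found. A sketch of checking and working around that (the paths and variables are the question's own):

        # print the PATH each environment actually sees
        $stderr.puts "PATH=#{ENV['PATH']}"

        # workaround: prepend the MySQL bin directory for this process only
        ENV['PATH'] = "/usr/local/mysql/bin:#{ENV['PATH']}"
        system "mysqldump -u root #{w.database_name} > #{target_file}"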

  • Impersonation in ASP.NET: confused about implementation when used with Active Directory & SQL Server

    - by AWC
    I have an internal website that uses integrated Windows authentication, and the site queries both SQL Server and Active Directory (via the System.DirectoryServices namespace). To use System.DirectoryServices in ASP.NET I have to run IIS under an account that has the correct privileges and, importantly, set impersonation to true in the web.config. With that done, when I make a query against AD, the credentials of the worker process (IIS) are used instead of the ASPNET account, and the queries succeed. Now, if I am also using SQL Server with a connection string configured for integrated security ('Integrated Security=SSPI'), it interprets the ASP.NET impersonation to mean that I want to access the database as the Windows credentials of the request, not of the worker process. I hope I'm wrong and have simply got the config wrong, but I don't think I have, and this seems inconsistent. It should be noted that I'm using IIS 5.1 for development, which obviously doesn't have the concept of app pools; I believe those would resolve the problem.
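
    For reference, a sketch of the two configuration pieces under discussion - server, database, and account names are placeholders:

        <!-- web.config: impersonate the authenticated caller... -->
        <system.web>
          <authentication mode="Windows" />
          <identity impersonate="true" />
          <!-- ...or pin impersonation to one fixed account, so AD and SQL
               queries both run as the same identity:
               <identity impersonate="true" userName="DOMAIN\svc_web" password="..." /> -->
        </system.web>

        <!-- the connection string in question; SSPI uses the impersonated identity -->
        <connectionStrings>
          <add name="Main" connectionString="Data Source=SQLHOST;Initial Catalog=AppDb;Integrated Security=SSPI" />
        </connectionStrings>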

  • Recreation of a MySQL DB using "mysql mydb < mydb.sql" is really slow when a table has tens of millions of records

    - by Jian Lin
    It seems that when a MySQL database has a table with tens of millions of records, running

        mysqldump some_db > some_db.sql

    to back up the database produces a very big INSERT INTO statement (is it one insert statement that handles all the records?). So when reconstructing the DB using

        mysql some_db < some_db.sql

    the CPU is hardly busy (about 1.8% usage by the mysql process... I don't see a mysqld either?) and the hard disk doesn't seem to be very busy either... Last time, the whole restore process took 5 hours. Is there a way to make it faster? For example, when doing the mysqldump, can it break the INSERT statement into shorter ones, so that mysql doesn't have to parse such enormous lines when restoring the DB?
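
    mysqldump does pack many rows into each extended INSERT by default, and the statement length is tunable - a sketch of the relevant options (the size value is illustrative):

        # cap how large each multi-row INSERT may grow
        mysqldump --net_buffer_length=64k some_db > some_db.sql

        # or emit one INSERT per row (easier to inspect, but restores more slowly)
        mysqldump --skip-extended-insert some_db > some_db.sql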

  • A simple explanation of Naive Bayes Classification

    - by Jaggerjack
    I am finding it hard to understand the process of Naive Bayes, and I was wondering if someone could explain it with a simple step-by-step process in English. I understand it compares by counting occurrences as probabilities, but I have no idea how the training data is related to the actual dataset. Please give me an explanation of what role the training set plays. I am giving a very simple example for fruits here, like banana:

        training set:
        round-red
        round-orange
        oblong-yellow
        round-red

        dataset:
        round-red
        round-orange
        round-red
        round-orange
        oblong-yellow
        round-red
        round-orange
        oblong-yellow
        oblong-yellow
        round-red
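
    A minimal counting sketch of the idea in Python - note that Naive Bayes needs labeled training rows, so hypothetical fruit labels are attached to the question's (shape, color) pairs; no smoothing is applied, so an unseen feature value zeroes a class out:

        from collections import Counter, defaultdict

        # hypothetical labeled training data: (shape, color) -> fruit
        training = [
            (("round", "red"), "apple"),
            (("round", "orange"), "orange"),
            (("oblong", "yellow"), "banana"),
            (("round", "red"), "apple"),
        ]

        label_counts = Counter(label for _, label in training)
        feature_counts = defaultdict(Counter)   # feature_counts[i][(value, label)]
        for features, label in training:
            for i, value in enumerate(features):
                feature_counts[i][(value, label)] += 1

        def predict(features):
            # choose the label maximizing P(label) * product of P(feature_i | label)
            total = sum(label_counts.values())
            best, best_p = None, -1.0
            for label, n in label_counts.items():
                p = n / float(total)
                for i, value in enumerate(features):
                    p *= feature_counts[i][(value, label)] / float(n)
                if p > best_p:
                    best, best_p = label, p
            return best

        print predict(("round", "red"))          # -> apple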

  • Executing Secondary Applications

    - by JooBlow
    I have an application I am attempting to make "portable". The application contains a lot of secondary utility functions that I would like to execute on external files (from the app). I tried adding them to the build process, but I didn't get any executables for them (just the main one and a few others). Is there a way to get these to execute? They are basically command-line utility functions that process some text files, but they use large files in the distribution and are also used by the main application. Thanks

  • Animated GIF in form using C#

    - by Rabin
    In my project, whenever a long process is being executed, a small form is displayed with a small animated GIF. I use this.Show() to open the form and this.Close() to close it. The following is the code I use:

        public partial class PlzWaitMessage : Form
        {
            public PlzWaitMessage()
            {
                InitializeComponent();
            }

            public void ShowSpalshSceen()
            {
                this.Show();
                Application.DoEvents();
            }

            public void CloseSpalshScreen()
            {
                this.Close();
            }
        }

    When the form opens, the image does not start animating right away, and by the time it does animate, the process is usually complete or very near completion, which renders the animation useless. Is there a way I can have the GIF animate as soon as I load the form?
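
    The usual cause is that the long-running work and the splash form share the UI thread, so the GIF never gets a chance to repaint. A sketch of moving the work onto a background thread instead of relying on Application.DoEvents() - the splash class is the one from the question, and RunLongProcess() is a placeholder for the real work:

        // in the main form: show the splash, then run the job off the UI thread
        var splash = new PlzWaitMessage();
        splash.Show();

        var worker = new System.ComponentModel.BackgroundWorker();
        worker.DoWork += (s, e) => RunLongProcess();            // runs off the UI thread
        worker.RunWorkerCompleted += (s, e) => splash.Close();  // marshaled back to the UI thread
        worker.RunWorkerAsync();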

  • Oracle AQ dequeue order

    - by yawn
    A trigger in an Oracle 10g database generates upsert and delete messages for a subset of rows in a regular table. These messages consist of two fields:

        1. A unique row id.
        2. A non-unique id.

    When consuming these messages I want to impose an order on the dequeue process that respects the following constraints:

        1. Messages must be dequeued in insertion order.
        2. Messages belonging to the same id must be dequeued in such a fashion that no other dequeuing process can dequeue a potential successor message (or messages) with that id.

    Since the messages are generated by a trigger, I cannot use message groups for this purpose. I am using the Oracle Java interface for AQ. Any pointers on how this could be achieved?

  • When debugging on WinCE, how can I set Visual Studio to always load symbol files it knows about?

    - by tsellon
    I'm debugging a WinCE C++ program in Visual Studio across an ActiveSync connection. Every time I start the process, it fails to load symbol information. However, if I right-click on the module and hit 'Load Symbols', it correctly locates the symbol information without any further prompting from me. Is there a way I can set Visual Studio to either (a) automatically load this symbol information, or (b) automatically break into the debugger once the module is loaded (similar to what WinDbg does)? I'm guessing there's a setting somewhere, but I've yet to find it. Update: I forgot to mention in the original question that I'm not debugging with the instance of Visual Studio that created the exe.

  • Agile and code release

    - by ring bearer
    Do you know of any agile process that is built around code releases? One of the main themes of agile is frequent releases, yet each company/client has its own test/approval processes that control code releases, and most of the time these slow down the pace of "frequent releases". Currently we have a proprietary tool-based workflow. A team that needs a code promotion creates a promotion request to one of the final UAT servers. Once this is complete, and once tests are done, certain customers and technical/non-technical managers need to approve; then it goes into the production deploy stage. Meanwhile there is no sprint planning meeting or anything of that sort. What code release process (one that is agile) has worked for you?

  • Sun's access manager taken over by former employees of the company: OpenSSO becomes OpenAM thanks to ForgeRock

    Sun's access manager has been taken over by former employees of the company: OpenSSO becomes OpenAM under the leadership of Simon Phipps, now an employee of ForgeRock. Among the Sun technologies whose fate has been in question since the company's acquisition by Oracle is OpenSSO. OpenSSO is an open-source access manager for web services, built on a single sign-on mechanism that provides "essential identity services to simplify, transparently, the execution of single sign-on". Under Oracle's stewardship, this technology seemed headed for a dead end; the software giant already had its own solutions even before the acquisition...

  • Will more CPUs/cores help with VS.NET build times?

    - by LoveMeSomeCode
    I was wondering if anyone knows whether Visual Studio .NET has a parallel build process or not. I have a solution with lots of projects; every project has lots of markup/code, lots of types, etc. Just sitting there with IntelliSense on runs it up to about 700 MB. But the build times are really slow and only seem to max out one of my two CPU cores. Does this mean the build process is single-threaded? My solution's build dependency chain isn't linear, so I don't see why it couldn't build some of the projects in parallel. I remember Joel Spolsky blogging about his new SSD and how it didn't help with compile times, but he didn't mention which compiler he was using. We're using VS 2005. Does anyone know how its compilation works? And is it any different/better in 2008/2010?
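
    For what it's worth, the MSBuild engine that ships with VS 2008 and later accepts a parallelism switch on the command line (the VS 2005-era engine does not) - a sketch, with the solution name as a placeholder:

        rem build up to 2 projects concurrently; match the number to the core count
        msbuild MySolution.sln /m:2 /p:Configuration=Release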

  • First Foray - About timeout

    - by SQLMonger
    It has been quite a while since I signed up for this blog site, and high time that something was posted. I have a list of topics that I will be working through and posting. Some, I am sure, will have been covered by others, but I will be sticking to the technical problems and challenges that I've recently faced, and the solutions that worked for me. My motto when learning something new has always been "My kingdom for an example!", and I plan on delivering useful examples here so others can learn from my efforts, failures and successes.

    A bit of background about me: my name is Clayton Groom. I am a founding partner of a consulting firm in St. Louis, Missouri, Covenant Technology Partners, LLC, and I focus on SQL Server Data Warehouse design, Analysis Services and Enterprise Reporting solutions. I have been working with SQL Server since the early nineties, when it still only ran on OS/2. I love solving puzzles and technical challenges.

    Enough about me, on to a real problem: SSIS connection timeouts versus command timeouts.

    Last week, I was working on automating the processing for a large Analysis Services cube. I had reworked an SSIS package and script task originally posted by Vidas Matelis that automates the process of adding new partitions to an Analysis Services cube and dropping old ones. I had the package working great, tested, and ready for deployment. It basically performs a query against the source system to determine if there is new data in the warehouse that will require a new partition to be added to the cube, and it checks the cube to see if any partitions present are no longer needed within a rolling 60-month window.

    My client uses Tivoli, not SQL Agent, for running all their production jobs, so I had to build a command file for Tivoli to use to run the package. Everything was going great. I had tested the command file from my development workstation, using an XML configuration file to pass server-specific parameters into the package when executed with the DTExec utility. With all the pieces ready, I updated the dtsConfig file to point to the UAT environment and started working with the Tivoli developer to test the job. On the first run, the job failed, and from what I could see in the SSIS log, it had failed because of a timeout. Other errors in the log made me think that perhaps the connection string had not been passed into the package correctly. We bumped the connection manager timeout values from 20 seconds to 120 seconds and tried again. The job still failed. After changing the command line to use the /SET option instead of the /CONFIGFILE option, we tested again, and again it failed. After a number of further failed attempts, and after getting the Teradata DBA involved to monitor whether we were connecting and then failing or simply failing to connect, we determined that the job was indeed connecting to the server and then disconnecting itself after 30 seconds. This seemed odd, as we had the timeout values for the connection manager set to 180 seconds by then.

    At this point one of the DBAs found a post on the Teradata forum that held the clue to the puzzle: there is a separate "CommandTimeout" custom property on the data source object that may need to be adjusted for longer-running queries. I opened up the SSIS package, opened the data flow task that generated the partition list table, and right-clicked on the data source. From the context menu, I selected "Show Advanced Editor" and found the property. Sure enough, it was set to 30 seconds. The CommandTimeout property can also be edited in the SSIS Properties sheet.

    In order to determine how long the timeout needed to be, I ran the query from the task in the development environment and received a response in a matter of seconds. I then tried the same query against the production database and waited several minutes for a response. This did not seem to be a reasonable response time for the query involved, and indeed it wasn't. The Teradata DBAs adjusted the query governor settings for the service account I was testing with, and we were able to get the response back down under a minute. Still, I set the CommandTimeout property to a much higher value in case the job was ever started during a time of high demand on the production server. With this change in place, the job finally completed successfully.

    The lesson learned for me was two-fold:

    1. Always compare query execution times between development and production environments, and don't assume that production will always be faster. With higher user demands, query governors, and a whole lot more data, the execution time of even seemingly simple queries can vary greatly.

    2. SSIS connection timeout settings do not affect command timeouts. Connection timeouts control how long the package will wait for a response from the server before assuming the server is not available or is not responding. Command timeouts control how long a task will wait for results to start being returned before deciding that the server is not responding.

    Both lessons seem pretty straightforward, and I felt pretty sheepish once I finally figured out what the issue was. To be fair, though, in the 5+ years that I have been working with SSIS, I can recall only one other time when I had to set the CommandTimeout property, and that memory only resurfaced while I was penning this post.

  • How can I reuse my javascript code between client and server?

    - by Chris Farmer
    I have some JavaScript code that includes an ANTLR-generated lexer and parser, plus some associated syntax-tree evaluation functionality. This code runs in the browser in my web app to support users who author code snippets that process scientific data. Now I'd like to do some additional background processing on the server using the same generated parser. I would prefer not to have to re-implement this in C# and end up with multiple bits of code that do the exact same thing. Performance isn't as critical to me as eliminating duplication, since this is a background process. So, how can I call into my JavaScript code from C#? And how should I structure my script so that it plays nicely with my .NET web app?
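
    One route is to host a JavaScript engine inside the .NET process instead of re-implementing the parser. A sketch using the open-source Jint interpreter - the script file names and the evaluateSnippet entry point are placeholders for the ANTLR-generated code:

        using System;
        using System.IO;
        using Jint;   // open-source JavaScript interpreter for .NET

        class ScriptHost
        {
            static void Main()
            {
                var engine = new Engine();

                // load the same script files the browser uses
                engine.Execute(File.ReadAllText("lexer.js"));
                engine.Execute(File.ReadAllText("parser.js"));

                // call the shared entry point with a snippet to evaluate
                var result = engine.Invoke("evaluateSnippet", "2 * airTemp + 1");
                Console.WriteLine(result);
            }
        }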

  • Windows Azure worker roles: One big job or many small jobs?

    - by Ryan Elkins
    Is there any inherent advantage to using multiple workers to process pieces of procedural code versus having each worker process the entire load? In other words, if my workflow looks like this:

        1. Get work from queue0 and do A
        2. Store result from A in queue1
        3. Get result from queue1 and do B
        4. Store result from B in queue2
        5. Get result from queue2 and do C

    is there an inherent advantage to using 3 workers that each run the entire process themselves, versus 3 workers that each do a part of the work (worker 1 does steps 1 & 2, worker 2 does 3 & 4, worker 3 does 5)? If we only care about work being completed (finished with step 5), it would seem to scale the same way (once you're using at least 3 workers). Maybe the big job is better because workers with that setup have fewer bottleneck issues?
