Search Results

Search found 24301 results on 973 pages for 'execution process mfg'.


  • How can I get elements out of an array with Template Toolkit?

    - by Przemek
    I have an array of paths which I want to read out with Template Toolkit. How can I access the elements of this array? The situation is this:

        my @dirs;
        opendir(DIR, './directory/') || die $!;
        @dirs = readdir(DIR);
        closedir(DIR);
        $vars->{'Tree'} = @dirs;

    Then I call the template page like this:

        $template->process('create.tmpl', $vars)
            || die "Template process failed: ", $template->error(), "\n";

    In this template I want to build a tree of the directories in the array. How can I access them? My idea was to start with a FOREACH in the template, like this:

        [% FOREACH dir IN Tree.dirs %] $dir [% END %]
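
    A minimal sketch of the usual fix (assuming the goal is simply to list the entries): store a reference to the array, then iterate over the variable itself rather than Tree.dirs. Note that assigning @dirs directly, as above, stores only the element count:

        my @dirs = grep { $_ ne '.' && $_ ne '..' } readdir(DIR);
        $vars->{'Tree'} = \@dirs;    # a reference, not the array itself

        [% FOREACH dir IN Tree %]
        [% dir %]
        [% END %]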

    Read the article

  • Linux ext3 readdir and concurrent updates

    - by Wangnick
    Dear all, we are receiving about 10000 messages per hour. We store them as individual files in hourly directories on an ext3 filesystem. The file name includes a sequence number. We use rsync to mirror these files every 20 seconds to another location (via a SAN, but that doesn't matter). Sometimes an rsync run picks up files n-3, n-2, n-1, n+1, and then the next rsync run continues with n, n+2, n+3, n+4 and so on. Is it possible that, when one process creates files in a certain sequence within a directory, another process using readdir() sees the files appearing in a different sequence? Kind regards, Sebastian

    Read the article

  • Automated testing of future-only-item business rules

    - by Titan
    I currently have a business object with a validation rule: it can only be created for the future (tomorrow onwards), and I cannot create new items for today. I have a process which runs the non-future business objects through some steps. This means I have to set things up today and test tomorrow, and when the test fails, I can only create a new object tomorrow and test again the following day. Are there any easy ways to automate this in a testing framework? I think our testers are using the Visual Studio 2010 Test Manager. How do you manage situations like this? Cheers
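
    One common way to make rules like this testable (a sketch only, and purely an assumption about how the rule is wired up; all names below are hypothetical) is to abstract the clock so a test can pretend it is already tomorrow:

        public interface IClock { DateTime Today { get; } }

        public class SystemClock : IClock
        {
            public DateTime Today { get { return DateTime.Today; } }
        }

        public class FakeClock : IClock
        {
            public DateTime Today { get; set; }   // a test sets this to any date
        }

        // The validation rule compares against the injected clock instead of
        // DateTime.Today, e.g.:
        //   if (item.Date <= clock.Today) throw new ValidationException(...);

    With a FakeClock injected, the "set up today, test tomorrow" cycle collapses into a single test run.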

    Read the article

  • How to salvage SQL server 2008 query from KILLED/ROLLBACK state?

    - by littlegreen
    I have a stored procedure that inserts batches of millions of rows, emerging from a certain query, into an SQL database. It has one parameter selecting the batch; when this parameter is omitted, it gathers a list of batches and recursively calls itself in order to iterate over the batches. In (pseudo-)code, it looks something like this:

        CREATE PROCEDURE spProcedure
        AS
        BEGIN
            IF @code = 0
            BEGIN
                ...
                WHILE @@Fetch_Status = 0
                BEGIN
                    EXEC spProcedure @code
                    FETCH NEXT ... INTO @code
                END
            END
            ELSE
            BEGIN
                -- Disable indexes
                ...
                INSERT INTO table SELECT (...)
                -- Enable indexes
                ...

    Now it can happen that this procedure is slow, for whatever reason: it can't get a lock, or one of the indexes it uses is misdefined or disabled. In that case, I want to be able to kill the procedure, truncate and recreate the resulting table, and try again. However, when I try to kill the procedure, the process frequently oozes into a KILLED/ROLLBACK state from which there seems to be no return. From Google I have learned to do an sp_lock, find the spid, and then kill it with KILL <spid>. But when I try to kill it, it tells me:

        SPID 75: transaction rollback in progress. Estimated rollback completion: 0%. Estimated time remaining: 554 seconds.

    I did find a forum message hinting that another spid should be killed before the first one can start a rollback. But that didn't work for me either, plus I do not understand why that would be the case... could it be because I am recursively calling my own stored procedure? (But it should have the same spid, right?) In any case, my process is just sitting there, dead, not responding to kills, and locking the table. This is very frustrating, as I want to go on developing my queries, not wait for hours while my server sits dead, pretending to finish a supposed rollback. Is there some way I can tell the server not to store any rollback information for my query? Or not to allow any other queries to interfere with the rollback, so that it will not take so long? Or how to rewrite my query in a better way, or how to kill the process successfully without restarting the server?
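
    As an aside, a sketch of how to watch a session that is stuck in rollback (SPID 75 here is just the number from the message quoted above):

        -- Progress of sessions currently rolling back
        SELECT session_id, command, percent_complete, estimated_completion_time
        FROM sys.dm_exec_requests
        WHERE command = 'KILLED/ROLLBACK';

        -- Re-issuing KILL with STATUSONLY reports progress without acting again
        KILL 75 WITH STATUSONLY;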

    Read the article

  • Java: sharing data between threads

    - by ayush
    I have a Java process that reads data from a socket server, so I have a BufferedReader and a PrintWriter object corresponding to that socket. Now, in the same Java process, I have a multithreaded Java server that accepts client connections. I want to achieve a functionality where all the clients that I accept can read data from the BufferedReader object mentioned above (so that they can multiplex the data). How do I make these individual client threads read data from the single BufferedReader object? Sorry for the confusion.
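
    A reader cannot safely be shared raw between threads (each line would go to whichever thread wins the race), so one sketch, with all names invented, is a single thread that reads the socket and fans each line out to a per-client queue:

        import java.io.BufferedReader;
        import java.io.IOException;
        import java.util.List;
        import java.util.concurrent.BlockingQueue;
        import java.util.concurrent.CopyOnWriteArrayList;
        import java.util.concurrent.LinkedBlockingQueue;

        class Multiplexer implements Runnable {
            private final BufferedReader in;
            private final List<BlockingQueue<String>> subscribers =
                new CopyOnWriteArrayList<BlockingQueue<String>>();

            Multiplexer(BufferedReader in) { this.in = in; }

            // Each client thread calls subscribe() once, then takes from its queue.
            BlockingQueue<String> subscribe() {
                BlockingQueue<String> q = new LinkedBlockingQueue<String>();
                subscribers.add(q);
                return q;
            }

            public void run() {
                try {
                    String line;
                    while ((line = in.readLine()) != null) {
                        for (BlockingQueue<String> q : subscribers) {
                            q.offer(line);   // every client gets its own copy
                        }
                    }
                } catch (IOException e) {
                    // socket closed; clients simply see no more lines
                }
            }
        }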

    Read the article

  • Rendering javascript at the server side level. A good or bad idea?

    - by davidhong
    I want to make it clear first: this isn't a question about server-side JavaScript or running JavaScript server-side. This is a question about rendering JavaScript code (which will be executed on the client side) from server-side code. Having said that, take a look at the ASP.NET code below, for example:

        hlRemoveCategory.Attributes.Add("onclick",
            "return confirm('Are you sure you want to delete this?');")

    This prescribes the client-side onclick event on the server side. As opposed to:

        $('a[rel=remove]').bind('click', function(event) {
            return confirm('Are you sure you want to delete this?');
        });

    Now the question I want to ask is: what is the benefit of rendering JavaScript from server-side code, or vice versa? I personally prefer the second way of hooking up client-side UI/behaviour to HTML elements, for the following reasons:

    - The server side already does whatever it needs to, including data validation, event delegation, etc.
    - What the server side sees as an event is not necessarily the same process on the client side; there are plenty more events on the client side (just look at custom events).
    - What happens on the client side and on the server side during an event could be completely unrelated and decoupled.
    - Whatever happens on the client side happens on the client side; there is no need for the server to know. The server should process and run what is given to it; how the process comes to life is not its decision in the case of client-side events.

    These are my thoughts, obviously. I want to know what others think and whether there have been any discussions on this topic. Topics branching from this argument include:

    - Code management: is it easier to render everything from the server side?
    - Separation of concerns: is it easier if client-side logic is separated from server-side logic?
    - Efficiency: which is more efficient, both in terms of coding and running?

    At the end of the day, I am trying to move my team towards the second approach. There are a lot of old-school developers on this team who are afraid of this change. I just wish to convince them with the right facts and stats. Let me know your thoughts.

    Read the article

  • Microsoft Detours

    - by Bruce
    I am new to Microsoft Detours. I have installed it to trace the system calls a process makes. I run the following commands, which I got from the web:

        syelogd.exe /q C:\Users\xxx\Desktop\log.txt
        withdll.exe /d:traceapi.dll "C:\Program Files\Google\Google Talk\googletalk.exe"

    I get the log file. The problem is that I don't fully understand what is happening here. How does Detours work? How does it trace the system calls? Also, I don't know how to read the output in log.txt. Here is one line of log.txt:

        20101221060413329 2912 50.60: traceapi: 001 GetCurrentThreadId()

    Finally, I want to get the stack trace of the process. How can I get that?

    Read the article

  • How can I use Code Contracts in a C++/CLI project?

    - by Daniel Wolf
    I recently stumbled upon Code Contracts and have started using them in my C# projects. However, I also have a number of projects written in C++/CLI. For C# and VB, Code Contracts offer a handy configuration panel in the project properties dialog. For a C++/CLI project, there is no such panel. From the documentation, I got the impression that adding Code Contracts support to a C++/CLI project should be a simple matter of calling some external tools as part of the build process (namely ccrefgen.exe, cccheck.exe, and ccrewrite.exe). However, the number of command line options and restrictions concerning the call sequence have me somewhat intimidated. Can anybody point me to a simple way to run the Code Contracts tools as an automated part of the build process in Visual Studio?

    Read the article

  • What happens when you run out of ram with mlockall set?

    - by James Dean
    I am working on a C++ application that requires a large amount of memory for a batch run (around 20 GB). Some of my customers are running into memory limits where the OS sometimes starts swapping and the total run time doubles or worse. I have read that I can use mlockall to keep the process from being swapped out. What would happen when the process's memory requirements approach or exceed the available physical memory in this way? I guess the answer might be OS-specific, so please list the OS in your answer.
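
    For reference, a minimal sketch of what the call looks like on Linux (an assumption, since the question leaves the platform open; it needs CAP_IPC_LOCK or a sufficient RLIMIT_MEMLOCK):

        #include <stdio.h>
        #include <stdlib.h>
        #include <sys/mman.h>

        int main(void) {
            /* Lock everything mapped now and everything mapped later. */
            if (mlockall(MCL_CURRENT | MCL_FUTURE) != 0) {
                perror("mlockall");   /* typically EPERM or ENOMEM */
                return EXIT_FAILURE;
            }
            /* With MCL_FUTURE set, an allocation that cannot be backed by
               physical memory fails (e.g. malloc returns NULL) rather than
               being pushed out to swap. */
            return EXIT_SUCCESS;
        }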

    Read the article

  • svnlook always returns an error and no output

    - by Pierre-Alain Vigeant
    I'm running this small C# test program, launched from a pre-commit batch file:

        private static int Test(string[] args)
        {
            var processStartInfo = new ProcessStartInfo
            {
                FileName = "svnlook.exe",
                UseShellExecute = false,
                ErrorDialog = false,
                CreateNoWindow = true,
                RedirectStandardOutput = true,
                RedirectStandardError = true,
                Arguments = "help"
            };

            using (var svnlook = Process.Start(processStartInfo))
            {
                string output = svnlook.StandardOutput.ReadToEnd();
                svnlook.WaitForExit();
                Console.Error.WriteLine("svnlook exited with error 0x{0}.", svnlook.ExitCode.ToString("X"));
                Console.Error.WriteLine("Current output is: {0}", string.IsNullOrEmpty(output) ? "empty" : output);
                return 1;
            }
        }

    I am deliberately calling svnlook help and forcing an error so I can see what is going on when committing. When this program runs, SVN displays:

        svnlook exited with error 0xC0000135. Current output is: empty

    I looked up the error 0xC0000135 and it means "the application failed to initialize properly", although that wasn't specific to svnlook. Why is svnlook help not returning anything? Does it fail when executed through another process?
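
    One thing worth knowing here: 0xC0000135 is STATUS_DLL_NOT_FOUND, and Subversion runs hook scripts with an empty environment (no PATH), so svnlook.exe often cannot locate its DLLs. A sketch of the usual workaround (the paths below are assumptions):

        // Use the full path to svnlook.exe and give the child process a PATH
        // pointing at the Subversion bin directory so its DLLs resolve.
        processStartInfo.FileName = @"C:\Program Files\Subversion\bin\svnlook.exe";
        processStartInfo.EnvironmentVariables["PATH"] = @"C:\Program Files\Subversion\bin";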

    Read the article

  • In-Proc SxS opens for shell extension in managed code?

    - by Jens Granlund
    The recommendation used to be "Do not write in-process shell extensions in managed code." But with .NET Framework 4 and In-Process Side-by-Side, the main reason not to write shell extensions in managed code should be resolved. With that said, I have three questions:

    1. Is it now okay to write shell extensions in managed code?
    2. Which problems, if any, might there be with writing shell extensions in managed code?
    3. What reasons might there be to write shell extensions in unmanaged code?

    Read the article

  • Testing install procedure of a program requiring administrative privileges

    - by Lucas Meijer
    I'm trying to write automated tests to ensure that the installer for my program works correctly. The program can be installed for all users (requires admin privileges) or for the current user (does not require admin privileges). The program can also auto-update itself, which in some cases requires admin privileges and in some cases doesn't. I'm looking for a way to have an automated test click "Yes, Allow" on the UAC dialogs, so I can write tests for all the different scenarios on many different operating systems, and be confident that when I make changes to the installer I haven't broken anything. Obviously, the installer process itself cannot do this. However, I control the complete machine, and could easily start some sort of daemon process with administrative rights that the test program could make a socket connection to, to request it to "please click OK on the UAC now".

    Read the article

  • Top Innovations for Sales Managers

    - by divya.malik
    Sales managers are always looking for ways to motivate their troops as well as make themselves more effective and productive. Here is a small X’mas present for those folks who are looking for some effective tips. Our friends at Selling Power magazine recently wrote an interesting blog post with the top 10 best practices for sales managers. Here we go:

    1. Harness social media
    2. Strategically align marketing campaigns with sales efforts
    3. Establish a customer-centric sales process
    4. Realize ROI with CRM
    5. Embrace online collaboration
    6. Improve accuracy in sales forecasting and pipeline metrics
    7. Coach for sales success
    8. Leverage mobile technology
    9. Focus on sales enablement
    10. Improve sales performance and compensation management

    We have a complete suite of sales applications to help increase sales revenues and sales productivity, as well as to improve your sales execution. You can find more details here. For more details on the Selling Power blog post, click here. Happy Holidays to you and your family.

    Read the article

  • Init modules in apache2

    - by user306963
    Hello, I used to write Apache modules for Apache 1.3, but these days I am ready to move to Apache 2. The module I am writing at the moment has its own binary data, not a database, for performance purposes. I need to load this data into shared memory so that every child can access it without making its own copy, and it would be practical to load/create the binary data at startup, as I used to do with Apache 1.3. The problem is that I can't find an init event in Apache 2. In 1.3, in the module struct immediately after STANDARD_MODULE_STUFF, there is a slot for a /** module initializer */, in which you can put a function that will be executed early. The body of the function I used to write is something like:

        if ( getppid() == 1 ) {
            // this is the parent process: load global data here
            void* data = loadGlobalData( someFilePath );
            setGlobalData( config, data );
        } else {
            // this is the init of a child process: do nothing
        }

    I am looking for a place in Apache 2 where I can put a similar function. Can you help? Thanks, Benvenuto
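
    A sketch of the usual Apache 2 replacement (module and function names invented): the one-time initializer slot is gone, and you register a post_config hook instead, with ap_hook_child_init available for per-child setup:

        #include "httpd.h"
        #include "http_config.h"

        static int my_post_config(apr_pool_t *pconf, apr_pool_t *plog,
                                  apr_pool_t *ptemp, server_rec *s)
        {
            /* Runs in the parent after the config is read (note: Apache
               runs this twice at startup by design), so load or create
               the shared binary data here. */
            return OK;
        }

        static void my_register_hooks(apr_pool_t *p)
        {
            ap_hook_post_config(my_post_config, NULL, NULL, APR_HOOK_MIDDLE);
        }

        module AP_MODULE_DECLARE_DATA my_module = {
            STANDARD20_MODULE_STUFF,
            NULL, NULL, NULL, NULL,   /* per-dir and per-server config slots */
            NULL,                     /* command table */
            my_register_hooks
        };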

    Read the article

  • Multiple merchant accounts with Activemerchant gem.

    - by sosborn
    I am developing a Rails site that will allow a group of merchants (5-10) to accept credit card orders online. I plan on using the ActiveMerchant gem to handle the processing. In this case, each merchant will have their own merchant account to handle the payments. Storing banking information like that is not something I am a fan of. This could be solved by queuing orders and allowing each merchant to log in to the site, input their credentials, and process the order. However, if I go that route, then it seems to me that I would have to store the customers' credit card information temporarily until the merchant has the opportunity to log in and process the order, which to me is the greater evil. Has anyone dealt with this situation? If so, what are the options available and what pitfalls should I look out for? In my mind, securing customer credit card information is priority number one, with the merchant account information a close second.
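
    For what it's worth, a sketch of the per-merchant gateway pattern in ActiveMerchant (the gateway class and credential fields here are assumptions; each merchant account would carry its own):

        # Instantiate a gateway per merchant from that merchant's stored
        # (ideally encrypted) credentials, then charge against it.
        gateway = ActiveMerchant::Billing::AuthorizeNetGateway.new(
          :login    => merchant.api_login,
          :password => merchant.api_password
        )
        response = gateway.purchase(order.amount_in_cents, credit_card)
        raise response.message unless response.success?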

    Read the article

  • Firefox stalls on rendering when chrome doesn't

    - by amccormack
    I have a webpage that loads quickly 100% of the time in Chrome, but only about 10% of the time in Firefox. Looking at the Fiddler capture, Firefox loads only 2 of the roughly 100 files being pulled before it hangs. The error does not seem to be on the server or network side, however, because Chrome never encounters a problem. How do I find the root cause of this stall? While I suspect that JavaScript execution is what is causing Firefox to hang, are there any particular methods to narrow down the search for the bad code?

    Read the article

  • Control Windows VM from Linux Host

    - by vy32
    I am looking for a tool that will allow me to monitor and control programs running inside a Windows VM from the Linux host machine. I realize that this is similar to what a rootkit would do, and I am completely happy to use some hacker software if it provides the necessary functionality (and if I can get it in source-code form). If I can't find something, I'll have to write it myself in C, probably as an embedded HTTP server running on an odd port and doing some kind of XML-RPC thing. Here is the basic functionality I need:

    - Get a list of running processes
    - Kill a process
    - Start a process
    - Read/write/create/delete files

    I would also like to:

    - Read the contents of the screen
    - Read all controls on the screen
    - Send an arbitrary click to a Windows control

    Does anything like this exist?

    Read the article

  • Run java thread at specific times

    - by rmarimon
    I have a web application that synchronizes with a central database four times per hour. The process usually takes 2 minutes. I would like to run this process as a thread at X:55, X:10, X:25, and X:40 so that the users know that at X:00, X:15, X:30, and X:45 they have a clean copy of the database. It is just about managing expectations. I have gone through the executors in java.util.concurrent, but the scheduling is done with scheduleAtFixedRate, which I believe provides no guarantee about when the task actually runs in terms of clock time. I could use an initial delay to launch the Runnable so that the first run is close to the intended time, and then schedule it for every 15 minutes, but it seems this would probably diverge over time. Is there an easier way to schedule the thread to run 5 minutes before every quarter hour?
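
    A sketch of the initial-delay approach described above (accurate to the minute only; to stay exactly on the wall clock over long runs, each run could instead reschedule itself as a one-shot task):

        import java.util.Calendar;
        import java.util.concurrent.Executors;
        import java.util.concurrent.ScheduledExecutorService;
        import java.util.concurrent.TimeUnit;

        public class SyncScheduler {
            public static void main(String[] args) {
                ScheduledExecutorService scheduler = Executors.newScheduledThreadPool(1);

                // Minutes until the next of :10, :25, :40, :55
                int minute = Calendar.getInstance().get(Calendar.MINUTE);
                int delay = ((10 - minute) % 15 + 15) % 15;
                if (delay == 0) delay = 15;   // just past a slot: wait for the next one

                scheduler.scheduleAtFixedRate(new Runnable() {
                    public void run() {
                        // synchronize with the central database here
                    }
                }, delay, 15, TimeUnit.MINUTES);
            }
        }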

    Read the article

  • WPF Skin Skinning Security Concerns

    - by Erik Philips
    I'm really new to WPF in the .NET Framework (to get that out of the way). I'm writing an application whose interface is very customizable, simply by loading .xaml files (at the moment, Page elements) into a frame and then mapping the controls by name as needed. The idea is to have a community of people who are interested in making skins skin my application however they want (much like Winamp). Now the question arises, due to my lack of XAML knowledge: is it possible to create malicious XAML pages that, when downloaded and used, could contain embedded frames or other elements that embed HTML or call remote web pages with malicious content? I believe this could be the case. If so, then I have two options: either an automated process that can reject these kinds of XAML files by checking their elements prior to allowing download (which I would assume would be most difficult), or a human review prior to download. Are there alternatives I'm unaware of that could make this whole process a lot easier?

    Read the article

  • White paper on Qt Quick tooling, translated by Thibaut Cuvelier and Louis du Verdier

    Qt Quick is a collection of technologies designed to help developers create intuitive, fluid, modern-looking user interfaces: the kind of graphical interfaces increasingly used on mobile phones, media players, set-top boxes and other portable devices. Qt Quick consists of a rich set of user-interface elements, a declarative language for describing user interfaces, and a language runtime. A collection of C++ APIs is used to integrate these high-level features with classic Qt applications. Version 2.1 of the Qt Creator development environment (IDE) introduces tools useful for developing Qt Quick applications. This white paper...

    Read the article

  • How can I handle template dependencies in Template Toolkit?

    - by Smack my batch up
    My static web pages are built from a huge bunch of templates which are inter-included using Template Toolkit's INCLUDE and IMPORT, so page.html looks like this:

        [% INCLUDE top %]
        [% IMPORT middle %]

    Then top might include even more files. I have very many of these files, and they have to be run through to create the web pages in various languages (English, French, etc., not computer languages). This is a very complicated process, and when one file is updated I would like to be able to automatically remake only the necessary files, using a makefile or something similar. Are there any tools like makedepend for C files which can parse Template Toolkit templates and create a dependency list for use in a makefile? Or are there better ways to automate this process?
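
    In the absence of a ready-made tool, a rough sketch of a makedepend-style scanner (regex-based, so it will miss dynamically computed template names):

        #!/usr/bin/perl
        # Emit "template: dep1 dep2 ..." lines for use in a makefile.
        use strict;
        use warnings;

        for my $tmpl (@ARGV) {
            open my $fh, '<', $tmpl or die "$tmpl: $!";
            my %deps;
            while (my $line = <$fh>) {
                $deps{$2} = 1
                    while $line =~ /\[%-?\s*(INCLUDE|PROCESS|IMPORT|WRAPPER)\s+([\w.\/]+)/g;
            }
            close $fh;
            print "$tmpl: ", join(' ', sort keys %deps), "\n" if %deps;
        }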

    Read the article

  • Compute if a function is pure

    - by Oni
    As per Wikipedia: in computer programming, a function may be described as pure if both of these statements about the function hold: (1) The function always evaluates to the same result value given the same argument value(s). The result cannot depend on any hidden information or state that may change as program execution proceeds or between different executions of the program, nor can it depend on any external input from I/O devices. (2) Evaluation of the result does not cause any semantically observable side effect or output, such as mutation of mutable objects or output to I/O devices. I am wondering whether it is possible to write a function that computes whether a given function is pure. Example code in JavaScript:

        function sum(a, b) {
            return a + b;
        }

        function say(x) {
            console.log(x);
        }

        isPure(sum); // true
        isPure(say); // false

    Read the article

  • Cardinality Estimation Bug with Lookups in SQL Server 2008 onward

    - by Paul White
    Cost-based optimization stands or falls on the quality of cardinality estimates (expected row counts). If the optimizer has incorrect information to start with, it is quite unlikely to produce good quality execution plans except by chance. There are many ways we can provide good starting information to the optimizer, and even more ways for cardinality estimation to go wrong. Good database people know this, and work hard to write optimizer-friendly queries with a schema and metadata (e.g. statistics) that reduce the chances of poor cardinality estimation producing a sub-optimal plan. Today, I am going to look at a case where poor cardinality estimation is Microsoft's fault, and not yours.

    SQL Server 2005

        SELECT th.ProductID, th.TransactionID, th.TransactionDate
        FROM Production.TransactionHistory AS th
        WHERE th.ProductID = 1
        AND th.TransactionDate BETWEEN '20030901' AND '20031231';

    The query plan on SQL Server 2005 is as follows (if you are using a more recent version of AdventureWorks, you will need to change the year in the date range from 2003 to 2007): there is an Index Seek on ProductID = 1, followed by a Key Lookup to find the Transaction Date for each row, and finally a Filter to restrict the results to only those rows where the Transaction Date falls in the range specified. The cardinality estimate of 45 rows at the Index Seek is exactly correct. The table is not very large, and there are up-to-date statistics associated with the index, so this is as expected.

    The estimate for the Key Lookup is also exactly right. Each lookup into the Clustered Index to find the Transaction Date is guaranteed to return exactly one row. The plan shows that the Key Lookup is expected to be executed 45 times. The estimate for the Inner Join output is also correct: 45 rows from the seek joining to one row each time gives 45 rows as output. The Filter estimate is also very good: the optimizer estimates 16.9951 rows will match the specified range of transaction dates. Eleven rows are produced by this query, but that small difference is quite normal and certainly nothing to worry about here. All good so far.

    SQL Server 2008 onward

    The same query executed against an identical copy of AdventureWorks on SQL Server 2008 produces a different execution plan: the optimizer has pushed the Filter conditions seen in the 2005 plan down into the Key Lookup. This is a good optimization: it makes sense to filter rows out as early as possible. Unfortunately, it has made a bit of a mess of the cardinality estimates. The post-Filter estimate of 16.9951 rows seen in the 2005 plan has moved with the predicate on Transaction Date. Instead of estimating one row, the plan now suggests that 16.9951 rows will be produced by each clustered index lookup, which is clearly not right! This misinformation also confuses SQL Sentry Plan Explorer, which shows 765 rows expected from the Key Lookup (it multiplies a rounded estimate of 17 rows by 45 expected executions to give 765 rows in total).

    Workarounds

    One workaround is to provide a covering non-clustered index (avoiding the lookup avoids the problem, of course):

        CREATE INDEX nc1
        ON Production.TransactionHistory (ProductID)
        INCLUDE (TransactionDate);

    With the Transaction Date filter applied as a residual predicate in the same operator as the seek, the estimate is again as expected. We could also force the use of the ultimate covering index (the clustered one):

        SELECT th.ProductID, th.TransactionID, th.TransactionDate
        FROM Production.TransactionHistory AS th WITH (INDEX(1))
        WHERE th.ProductID = 1
        AND th.TransactionDate BETWEEN '20030901' AND '20031231';

    Summary

    Providing a covering non-clustered index for all possible queries is not always practical, and scanning the clustered index will rarely be optimal. Nevertheless, these are the best workarounds we have today. In the meantime, watch out for poor cardinality estimates when a predicate is applied as part of a lookup. The worst thing is that the estimate after the lookup join in the 2008+ plans is wrong. It's not hopelessly wrong in this particular case (45 versus 16.9951 is not the end of the world), but it can easily be much worse, and there's not much you can do about it. Any decisions made by the optimizer after such a lookup could be based on very wrong information, which can only be bad news. If you think this situation should be improved, please vote for this Connect item.

    © 2012 Paul White – All Rights Reserved. twitter: @SQL_Kiwi email: [email protected]

    Read the article

  • Possible to save a PDF with .NET from a form populated by .fdf?

    - by Blair Jones
    I have a process that creates form data in the form of an .fdf file, which then has a reference to the .pdf document that is its "parent". Is it possible, with any .NET process, to save that .pdf file populated with the data that came in from the .fdf? I need this because I need to email the fully populated .pdf documents out. I've been just sending the .fdfs with fully qualified links to the pdfs, but some people are having problems with it, and I'd rather just go full-blown pdf if I can. FYI, my server does have a licensed copy of Acrobat installed, if that matters.
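
    One hedged sketch, using the iTextSharp library rather than automating Acrobat itself (an assumption, and the file paths are invented): merge the .fdf data into the parent .pdf and save a flattened copy for mailing:

        using iTextSharp.text.pdf;

        PdfReader reader = new PdfReader(@"C:\forms\parent.pdf");
        PdfStamper stamper = new PdfStamper(reader,
            new System.IO.FileStream(@"C:\forms\filled.pdf",
                                     System.IO.FileMode.Create));
        stamper.AcroFields.SetFields(new FdfReader(@"C:\forms\data.fdf"));
        stamper.FormFlattening = true;   // bake the values into the page content
        stamper.Close();
        reader.Close();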

    Read the article

  • MySQL event to update another database table

    - by Lee
    Hey all, I have just taken over a project for a client, and the database schema is in a total mess. I would like to rename a load of fields and make it a relational database. But doing this will be a painstaking process, as they also have an API running off it. So the idea would be to create a new database and start rewriting the code to use that instead. But I need a way to keep the two sets of tables in sync during this process. Would you agree that I should use MySQL events to keep updating the new tables on inserts, updates and deletes? Or can you suggest a better way? Hope you can advise! Thanks for any input.
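
    For comparison, a sketch of the trigger-based alternative (all names invented): row-level triggers fire on every write, whereas scheduled events only run periodically:

        -- One trigger per operation (INSERT/UPDATE/DELETE) per table.
        CREATE TRIGGER customer_sync_ai
        AFTER INSERT ON old_db.customer
        FOR EACH ROW
          INSERT INTO new_db.customer (id, full_name)
          VALUES (NEW.id, CONCAT(NEW.first_name, ' ', NEW.last_name));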

    Read the article
