Search Results

Search found 6852 results on 275 pages for 'ascension systems'.

Page 231/275

  • What are some commonly used source code check-in policies?

    - by rwmnau
    I'm curious what code review policies other development shops apply to their source code when it's checked into the source control repository. I'm setting up a TFS (Team Foundation) server, and I'd like to apply some check-in policies to start stamping out bad practices. For example, I was thinking of starting with the following couple, so this is the kind of thing I'm looking for: Prohibit empty "Catch" blocks. This would prevent applications from swallowing any exceptions without at least requiring a comment explaining why it's not necessary to do anything with the exception. Prohibit "Catch ex as Exception" generic exception handling. Instead, require code to catch specific types of exceptions and deal with them appropriately, instead of just building catch-all handling. Require a check-in comment. This one should be self-explanatory, though it seems that TFS (and most other source-control systems) doesn't require a comment by default. While these are just examples, they're where I'm thinking of starting, and while I'd like some additional examples of what's popular, I'm open to feedback on these. Also, though we're a mostly .NET shop, I imagine the popular policies are universal across languages and IDEs (we have some Java development, and a few people who will use the repository develop in Eclipse).
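
    As a rough illustration (hypothetical C# code, not taken from the question), the first two policies would flag handlers like these:

        using System;

        class CheckInPolicyExamples
        {
            static void Main()
            {
                try { DoWork(); }
                catch (Exception) { }            // empty catch block: the exception is silently swallowed (policy 1)

                try { DoWork(); }
                catch (Exception ex)             // generic catch-all instead of a specific exception type (policy 2)
                {
                    Console.WriteLine(ex.Message);
                }
            }

            static void DoWork() { throw new InvalidOperationException("example failure"); }
        }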

    Read the article

  • Connection Pool Strategy: Good, Bad or Ugly?

    - by Drew
    I'm in charge of developing and maintaining a group of web applications that are centered around similar data. The architecture I decided on at the time was that each application would have its own database and web-root application. Each application maintains a connection pool to its own database and to a central database for shared data (logins, etc.). A co-worker has been positing that this strategy will not scale, because having so many different connection pools is not scalable. His suggestion is that we refactor the database so that all of the different applications use a single central database, that any modifications which may be unique to a system be reflected in that one database, and that we then use a single pool powered by Tomcat. He has posited that there is a lot of "meta data" that goes back and forth across the network to maintain a connection pool. My understanding is that, with proper tuning to use only as many connections as necessary across the different pools (low-volume apps getting fewer connections, high-volume apps getting more, etc.), the number of pools doesn't matter compared to the number of connections; or, more formally, that the difference in overhead required to maintain 3 pools of 10 connections is negligible compared to 1 pool of 30 connections. The reasoning behind initially breaking the systems into a one-app-one-database design was that there are likely going to be differences between the apps and that each system could make modifications to the schema as needed. Similarly, it eliminated the possibility of system data bleeding through to other apps. Unfortunately there is no strong leadership in the company to make a hard decision. Although my co-worker is backing up his worries only with vagueness, I want to make sure I understand the ramifications of multiple small databases/connection pools versus one large database/connection pool.
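
    To make the trade-off concrete, here is a hypothetical Tomcat context.xml sketch (driver, URLs, credentials and sizes are all made up) contrasting one of several small per-application pools with a single shared pool of the same total size:

        <!-- One of three per-application pools, each sized at 10 connections -->
        <Resource name="jdbc/app1" auth="Container" type="javax.sql.DataSource"
                  driverClassName="com.mysql.jdbc.Driver"
                  url="jdbc:mysql://dbhost/app1" username="app1" password="secret"
                  maxActive="10" maxIdle="5" maxWait="10000"/>

        <!-- The alternative: one shared pool of 30 connections against a central database -->
        <Resource name="jdbc/central" auth="Container" type="javax.sql.DataSource"
                  driverClassName="com.mysql.jdbc.Driver"
                  url="jdbc:mysql://dbhost/central" username="central" password="secret"
                  maxActive="30" maxIdle="10" maxWait="10000"/>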

    Read the article

  • My project is no longer used - how should I feel?

    - by flybywire
    For the last two years I have been developing and supporting an important project for a big customer. The project included mining data from the customer's existing systems, processing it, and displaying and updating it on the customer's public home page. The project was defined as crucial by the customer; I was paid good money and flown at the customer's expense to meet key employees. Some months ago, when the project was finished and in maintenance mode, I informed the customer that I was no longer interested in doing it, as I had a new opportunity that would not be compatible with my existing customer. I was paid to train one of their employees, flown to meet him, and made sure everything worked and that he could safely be left in charge of the project. We finished on good terms after I complied with all my obligations, and they paid me everything they owed me. Some days ago, just out of curiosity, I went to their website to see how the data continues to be updated, and much to my dismay I discovered that the day after my contract finished my system was "turned off" and it ceased to feed data to the public website. Let's be clear: there is no issue of money or a broken contract here. They are entirely within their rights to do whatever they want with my software. But it is an issue of bruised "programmer's ego". Should I feel bad about it (I do)? Should I care, and check with my customer whether they need some help? Or is it none of my business?

    Read the article

  • IIS doesn't send two responses to the same client at the same time (only for ASP)

    - by dr. evil
    I've got two ASP pages. I request the first page from Firefox (it takes 30 seconds to process on the server side), and during those 30 seconds I make another request from Firefox to the second page (which takes less than 1 second on the server side), but the response only comes back after 31 seconds, because it waits for the first request to finish. When I request the first page from Firefox and then request the second page from IE, the second response is instant. So basically ASP on IIS 6 is somehow limiting every client to one (long-running) request at a time. I need to get around this problem in my .NET client application. This has been tested on 3 different systems. If you want to test it, you can try the ASP scripts at the end. The behaviour is the same for a long SQL execution or just a time-consuming ASP operation. Note: it's not about HTTP keep-alive; it's not about the persistent connection limit (we tried to increase this in Firefox and in .NET with Net.ServicePointManager.DefaultConnectionLimit); it's not about the User-Agent. This doesn't happen in ASP.NET, so I assume it's something to do with asp.dll. I'm trying to solve this on the client, not the server; I don't have direct control over the server, it's a 3rd party solution. Is there any way to get around this? Sample ASP code, first page:

        <%
        Set cnn = Server.CreateObject("Adodb.Connection")
        cnn.Open "Provider=sqloledb;Data Source=.;Initial Catalog=master;User Id=sa;Password=;"
        cnn.Execute("WAITFOR DELAY '0:0:30'")
        cnn.Close
        %>

    Second page:

        <%
        Response.Write "bla bla"
        %>

    Read the article

  • What are the benefits and risks of moving to a Model Driven Architecture approach?

    - by Tone
    I work for a company with about 350 employees and we are in the process of growing. Our current codebase is not structured very well, and we are looking both at how to improve it immediately (by organizing objects into namespaces, separating concerns, etc.) and at moving to a model-driven architecture approach, where we model and design everything first with UML and then generate code from that model. We have been looking heavily at Sparx Systems Enterprise Architect (EA) (which is UML 2.0 capable) and we are also considering the tools in VS 2010. I know there are other tools out there (Rational XDE being one) but I really do not think we can spend $1500+ per license at this point. I'm not looking for answers on which tool is better than another, but more for experiences moving from a cowboy-coding environment (that is, little planning and design, just jump in and start coding) to a model-driven architecture. Looking back, was it helpful to your organization? What are the pain points? What are the risks? What are the benefits?

    Read the article

  • On Disk Substring index

    - by emeryc
    I have a file (a FASTA file, to be specific) that I would like to index, so that I can quickly locate any substring within the file and then find the location within the original FASTA file. This would be easy to do in many cases using a trie or substring array; unfortunately the strings I need to index are 800+ MB, which means that building the index in memory is unacceptable, so I'm looking for a reasonable way to create this index on disk with minimal memory usage. (Edit for clarification:) I am only interested in the headers of proteins, so for the largest database I'm interested in this is about 800 MB of text. I would like to be able to find an exact substring in O(N) time based on the input string. This must be usable on 32-bit machines, as it will be shipped to random people who are not expected to have 64-bit machines. I want to be able to index against any word break within a line, up to the end of the line (though lines can be several MB long). Hopefully this clarifies what is needed and why the current solutions given are not illuminating. I should also add that this needs to be done from within Java, and must run on client computers on various operating systems, so I can't use any OS-specific solution, and it must be a programmatic solution.

    Read the article

  • Java, Massive message processing with queue manager (trading)

    - by Ronny
    Hello, I would like to design a simple application (without J2EE and JMS) that can process a massive amount of messages (as in trading systems). I have created a service that can receive messages and place them in a queue, so that the system won't get stuck when overloaded. Then I created a service (QueueService) that wraps the queue and has a pop method that pops a message off the queue and returns null if there are no messages; this method is marked as "synchronized" for the next step. I have created a class that knows how to process the message (MessageHandler) and another class that can "listen" for messages in a new thread (MessageListener). The thread has a "while(true)" loop and continuously tries to pop a message. If a message was returned, the thread calls the MessageHandler class and, when it's done, asks for another message. Now, I have configured the application to open 10 MessageListeners to allow multi-message processing, so I have 10 threads that are in a loop all the time. Is that a good design? Can anyone point me to some books or sites on how to handle such a scenario? Thanks, Ronny
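
    A minimal sketch of the design described above (class and method names are assumptions, not the poster's actual code); a java.util.concurrent blocking queue could replace the hand-rolled synchronized pop, but the structure below mirrors the text:

        import java.util.LinkedList;
        import java.util.Queue;

        class QueueService {
            private final Queue<String> queue = new LinkedList<String>();

            public synchronized void push(String msg) { queue.add(msg); }

            // Returns null when the queue is empty, as described in the question.
            public synchronized String pop() { return queue.poll(); }
        }

        class MessageHandler {
            public void handle(String msg) { /* process one message */ }
        }

        class MessageListener implements Runnable {
            private final QueueService service;
            private final MessageHandler handler = new MessageHandler();

            MessageListener(QueueService service) { this.service = service; }

            public void run() {
                while (true) {                       // each listener thread loops forever
                    String msg = service.pop();
                    if (msg != null) {
                        handler.handle(msg);
                    }
                    // With no message this loop spins; a BlockingQueue.take() would avoid the busy wait.
                }
            }
        }

        class Bootstrap {
            public static void main(String[] args) {
                QueueService service = new QueueService();
                for (int i = 0; i < 10; i++) {       // the 10 listeners mentioned in the question
                    new Thread(new MessageListener(service)).start();
                }
                service.push("hello");
            }
        }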

    Read the article

  • How do I prevent qFatal() from aborting the application?

    - by Dave
    My Qt application uses Q_ASSERT_X, which calls qFatal(), which (by default) aborts the application. That's great for the application, but I'd like to suppress that behavior when unit testing the application. (I'm using the Google Test Framework.) I have my unit tests in a separate project, statically linking to the class I'm testing. The documentation for qFatal() reads: "Calls the message handler with the fatal message msg. If no message handler has been installed, the message is printed to stderr. Under Windows, the message is sent to the debugger. If you are using the default message handler this function will abort on Unix systems to create a core dump. On Windows, for debug builds, this function will report a _CRT_ERROR enabling you to connect a debugger to the application. ... To suppress the output at runtime, install your own message handler with qInstallMsgHandler()." So here's my main.cpp file:

        #include <gtest/gtest.h>
        #include <QApplication>

        void testMessageOutput(QtMsgType type, const char *msg)
        {
            switch (type) {
            case QtDebugMsg:
                fprintf(stderr, "Debug: %s\n", msg);
                break;
            case QtWarningMsg:
                fprintf(stderr, "Warning: %s\n", msg);
                break;
            case QtCriticalMsg:
                fprintf(stderr, "Critical: %s\n", msg);
                break;
            case QtFatalMsg:
                fprintf(stderr, "My Fatal: %s\n", msg);
                break;
            }
        }

        int main(int argc, char **argv)
        {
            qInstallMsgHandler(testMessageOutput);
            testing::InitGoogleTest(&argc, argv);
            return RUN_ALL_TESTS();
        }

    But my application is still stopping at the assert. I can tell that my custom handler is being called, because the output when running my tests is:

        My Fatal: ASSERT failure in MyClass::doSomething: "doSomething()", file myclass.cpp, line 21
        The program has unexpectedly finished.

    What can I do so that my tests keep running even when an assert fails?

    Read the article

  • System Calls in Windows & Native API?

    - by claws
    Recently I've been using a lot of assembly language on *NIX operating systems, and I was wondering about the Windows domain. The calling convention in Linux:

        mov $SYS_Call_NUM, %eax
        mov $param1, %ebx
        mov $param2, %ecx
        int $0x80

    That's it. That is how we make a system call in Linux. Reference for all system calls in Linux (which $SYS_Call_NUM and which parameters to use): http://docs.cs.up.ac.za/programming/asm/derick_tut/syscalls.html. Official reference: http://kernel.org/doc/man-pages/online/dir_section_2.html.

    Calling convention in Windows: ??? Reference for all system calls in Windows: ??? Unofficial: http://www.metasploit.com/users/opcode/syscalls.html, but how do I use these in assembly unless I know the calling convention? Official: ??? If you say they didn't document it, then how is one going to write libc for Windows without knowing the system calls? How is one going to do Windows assembly programming? At least in driver programming one needs to know these, right?

    Now, what's up with the so-called Native API? Are "Native API" and "system calls" two different terms referring to the same thing on Windows? To check, I compared two unofficial sources: system calls (http://www.metasploit.com/users/opcode/syscalls.html) and the Native API (http://undocumented.ntinternals.net/aindex.html). My observations: all system calls begin with the letters Nt, whereas the Native API contains a lot of functions that do not begin with Nt; the Windows system calls are a subset of the Native API, i.e. system calls are just part of the Native API. Can anyone confirm this and explain?
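
    To make the Linux side fully concrete, here is a complete example of the 32-bit int 0x80 convention shown above (AT&T syntax; this illustrates only the Linux convention and says nothing about how Windows does it):

        # Assemble and link with: as --32 hello.s -o hello.o && ld -m elf_i386 hello.o -o hello
        .data
        msg:    .ascii "hello\n"
        len =   . - msg

        .text
        .globl _start
        _start:
            mov $4, %eax        # SYS_write is 4 on 32-bit Linux
            mov $1, %ebx        # fd 1 = stdout
            mov $msg, %ecx      # buffer
            mov $len, %edx      # length
            int $0x80

            mov $1, %eax        # SYS_exit is 1
            mov $0, %ebx        # exit status
            int $0x80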

    Read the article

  • Logging in "Java Library Code" libs for Android applications

    - by K. Claszen
    I follow the advice to implement Android device-independent (not application-independent!) library code for my Android apps in a separate "Java Library Code" project. This makes testing easier for me, as I can use the standard Maven project layout, Spring test support and continuous build systems the way I am used to. I do not want to mix this inside my Android app project, although this might be possible. I now wonder how to implement logging in this library. As this library will be used on the Android device, I would like to go with android.util.Log. I added the following dependency to my project to get the missing Android packages/classes and dependencies:

        <dependency>
            <groupId>com.google.android</groupId>
            <artifactId>android</artifactId>
            <version>2.2.1</version>
            <scope>provided</scope>
        </dependency>

    But this library just contains stub methods (similar to the android.jar inside the android-sdk), so using android.util.Log results in java.lang.RuntimeException: Stub! when running my unit tests. How do I implement logging which works on the Android device but does not fail outside of it (I do not expect it to work there, but it must not fail)? Thanks for any advice, Klaus. For now I am going with the workaround of catching the exception outside Android, but I hope there is a better solution:

        try {
            Log.d("me", "hello");
        } catch (RuntimeException re) {
            // ignore failure outside android
        }
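
    One hedged sketch of folding that workaround into a single thin wrapper, so library code is not littered with try/catch blocks (the class name and the fallback to System.out are my own assumptions, not from the post):

        import android.util.Log;

        public final class SafeLog {
            private SafeLog() { }

            public static void d(String tag, String msg) {
                try {
                    Log.d(tag, msg);                       // real implementation on a device/emulator
                } catch (RuntimeException e) {
                    // Plain JVM (unit tests): the provided android.jar contains "Stub!" methods,
                    // so fall back to something harmless instead of failing.
                    System.out.println(tag + ": " + msg);
                }
            }
        }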

    Read the article

  • GNU/Linux development n00b needs help porting a C++ application from Windows to GNU/Linux.

    - by AndrejaKo
    Hi! These questions may not be perfectly suited for this site, so I apologize in advance for asking them here. I'm trying to port a computer game from Windows to GNU/Linux. It uses Ogre3D, CEGUI, OgreOgg and OgreNewt. As far as I know, all dependencies work on GNU/Linux, and in the game itself there is no OS-specific code. Here's the questions part: Is there any easy way to port a Visual Studio 2008 project to a GNU/Linux tool-chain? How do I manage dependencies? In Visual Studio I'd just add them in property sheets or default directories; I assume on GNU/Linux autoconf and make take care of that, but in which way? Do I have to add each .cpp and .hpp manually, or is there some way to automate things? How do I solve the problem of dependencies living in different locations on different systems? I'd like to use Eclipse as an IDE under GNU/Linux. I know that the best answer to most of my questions is RTFM, but I'm not sure exactly what I need and where to start looking.

    Read the article

  • Give the mount point of a path

    - by Charles Stewart
    The following, very non-robust shell code will give the mount point of $path:

        (for i in $(df | cut -c 63-99); do case $path in $i*) echo $i;; esac; done) | tail -n 1

    Is there a better way to do this?

    Postscript: this script is really awful, but has the redeeming quality that it Works On My Systems. Note that several mount points may be prefixes of $path. Examples, on a Linux system:

        cas@txtproof:~$ path=/sys/block/hda1
        cas@txtproof:~$ for i in $(df -a | cut -c 57-99); do case $path in $i*) echo $i;; esac; done | tail -1
        /sys

    On a Mac OS X system:

        cas local$ path=/dev/fd/0
        cas local$ for i in $(df -a | cut -c 63-99); do case $path in $i*) echo $i;; esac; done | tail -1
        /dev

    Note the need to vary cut's parameters because of the way df's output differs; indeed, awk is better.

    Answer: it looks like munging tabular output is the only way within the shell, but

        df /dev/fd/impossible | tail -1 | awk '{ print $NF }'

    is a big improvement on what I had. Note two differences in semantics: firstly, df $path insists that $path names an existing file, while the script above doesn't care; secondly, there are no worries about dereferencing symlinks. It's not difficult to write Python code to do the job.
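
    And a sketch of that Python approach, for what it's worth: walk up the path until os.path.ismount() reports a mount point (components that don't exist simply fall through to their parent directory):

        #!/usr/bin/env python
        import os
        import sys

        def mount_point(path):
            path = os.path.abspath(path)
            while not os.path.ismount(path):
                path = os.path.dirname(path)
            return path

        if __name__ == '__main__':
            print(mount_point(sys.argv[1]))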

    Read the article

  • Reduce durability in MySQL for performance

    - by Paul Prescod
    My site occasionally has fairly predictable bursts of traffic that increase the throughput to 100 times more than normal. For example, we are going to be featured on a television show, and I expect that in the hour after the show I'll get more than 100 times my normal traffic. My understanding is that MySQL (InnoDB) generally keeps my data in a bunch of different places: RAM buffers, the commit log, the binary log, the actual tables, and all of the above places on my DB slave. This is too much "durability" given that I'm on an EC2 node and most of the stuff goes across the same network pipe (the file systems are network attached). Plus the drives are just slow. The data is not high value, and I'd rather take a small chance of a few minutes of data loss than a high probability of an outage when the crowd arrives. During these traffic bursts I would like to do all of that I/O only if I can afford it. I'd like to just keep as much in RAM as possible (I have a fair chunk of RAM compared to the data size that would be touched over an hour). If buffers get scarce, or the I/O channel is not too overloaded, then sure, I'd like things to go to the commit log or binary log to be sent to the slave. If, and only if, the I/O channel is not overloaded, I'd like to write back to the actual tables. In other words, I'd like MySQL/InnoDB to use a "write back" cache algorithm rather than a "write through" cache algorithm. Can I convince it to do that? If this is not possible, I am interested in general MySQL write-performance optimization tips. Most of the docs are about optimizing read performance, but when I get a crowd of users I am creating accounts for all of them, so that's a write-heavy workload.
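
    The knobs that come closest to the "write back" behaviour described here are the InnoDB and binlog flush settings; a hedged my.cnf sketch (values are illustrative, and each one trades a bounded amount of potential data loss for write throughput):

        [mysqld]
        # Flush the InnoDB log to disk about once per second instead of at every commit;
        # an OS/host crash can lose roughly the last second of transactions.
        innodb_flush_log_at_trx_commit = 2

        # Let the OS decide when to sync the binary log instead of syncing on every write.
        sync_binlog = 0

        # Keep as much of the working set in RAM as the instance allows (illustrative size).
        innodb_buffer_pool_size = 4G

        # A larger log buffer lets bursts of writes accumulate before hitting disk.
        innodb_log_buffer_size = 16M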

    Read the article

  • How do I programmatically run all the JUnit tests in my Java application?

    - by Andrew McKinlay
    From Eclipse I can easily run all the JUnit tests in my application. I would like to be able to run the tests on target systems from the application jar, without Eclipse (or Ant or Maven or any other development tool). I can see how to run a specific test or suite from the command line. I could manually create a suite listing all the tests in my application, but that seems error-prone: I'm sure at some point I'll create a test and forget to add it to the suite. The Eclipse JUnit plugin has a wizard to create a test suite, but for some reason it doesn't "see" my test classes; it may be looking for JUnit 3 tests, not JUnit 4 annotated tests. I could write a tool that would automatically create the suite by scanning the source files. Or I could write code so the application would scan its own jar file for tests (either by naming convention or by looking for the @Test annotation). It seems like there should be an easier way. What am I missing?
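
    For reference, a minimal sketch of driving JUnit 4 from a main method; the class list is the part that would have to come from a jar scan or a naming convention (MyFirstTest and MySecondTest are placeholders, not real classes):

        import org.junit.runner.JUnitCore;
        import org.junit.runner.Result;
        import org.junit.runner.notification.Failure;

        public class TestRunnerMain {
            public static void main(String[] args) {
                // Placeholder classes -- in practice, discovered by scanning the jar
                // for @Test-annotated classes or a *Test naming convention.
                Result result = JUnitCore.runClasses(MyFirstTest.class, MySecondTest.class);

                for (Failure failure : result.getFailures()) {
                    System.out.println(failure.toString());
                }
                System.exit(result.wasSuccessful() ? 0 : 1);
            }
        }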

    Read the article

  • Can you recommend an easy to use easy to develop CMS?

    - by el_at_yahoo
    We need some easy way to manage web sites at our company, and we are evaluating some CMS tools for this purpose. We do not yet know what features the sites will need (but it will definitely be something with lots of functionality), so we are looking for something with lots of features and, more importantly, something easily extensible (if it does not have some feature, we at least want to be able to build it ourselves). We have no experience with content management systems, but we do with Java, so it has to be something written in Java. We evaluated some tools, and from our perspective the following seem the most promising (in no particular order): OpenCMS, dotCMS (Community Edition vs Enterprise Edition), InfoGlue, Alfresco (EE vs CE), Magnolia (EE vs EE Pro vs CE), and Jahia (CE vs EE). Since we have no experience with any of them, we were wondering if someone who has can share some information about how good they are or how easily they can be used and extended. I know similar questions have been asked on SO, and I also know this is highly subjective and people will vote to close it as soon as it is posted, but for us it is important to know what difficulties other people have faced in using the above tools (we don't want to walk a path that leads nowhere if other people already know it leads nowhere). Others could then vote on the posted answers if they agree or not. From your experience, which of the above-mentioned CMSs is the most easily extensible, the easiest to use, the easiest to learn, etc.? Thank you and happy holidays to all.

    Read the article

  • Google Maps API keys to be set webserver-wide (as env var? inside Apache?)

    - by ~knb
    I have a web site with many virtual hosts, each registered with several domain names (ending in .org, .de): site1.mysite.de, site2.mysite.org, and so on. Then I have different templating systems, based on several programming languages (Perl and PHP), in use on the web server. The Google Maps API requires a unique Google Maps API key for each vhost. I want to have something like a web-server-wide variable $goomapkey that I can call from inside my code. In PHP code, I now have a kludgy case-analysis solution like:

        $domain = substr($_SERVER['SERVER_NAME'], -3);
        if (".de" == $domain){
            //if ("xxxxxx" eq substr($ENV{SERVER_NAME}, 0, 5)){
            //    $gookey = "ABQIAAA...";
            //} else {
            // site1.de
            $gookey = "ABQIAAAA1Js...";
            //}
        } elseif ("dev" == substr($_SERVER['SERVER_NAME'], 0, 3)){
            // dev.mysite.org
            $gookey = "ABQIAAAA1JsSb...";
        } else {
            // www.mysite.org
            $gookey = "ABQIAAAA1JsS...";
            // TODO: Add more keys for each virtual host, for my.machinename.de, IP-address based URL, ...
        }

    ... inside my PHP-based CMS. A non-ideal solution, because it is PHP-only, I still have to set it in several HTML templates inside the CMS, and there are too many cases. I want the Google Maps API key to be set by the Apache web server, which examines the request early in the request loop, before any PHP page template code is constructed and evaluated. Is an environment variable a good solution? Which technology should be used to set the $goomapkey variable? I'd prefer a mod_perl2 Apache request handler, but the documentation is confusing (many API changes in the past). Which Apache module could I use? Is there a built-in Apache module that does the same thing?
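
    One hedged sketch of the environment-variable route: set the key per virtual host with mod_env's SetEnv and read it from whichever language handles the request. The keys below are the truncated placeholders from the question, and the vhost layout is assumed:

        # Apache vhost configuration (mod_env)
        <VirtualHost *:80>
            ServerName site1.mysite.de
            SetEnv GOOGLE_MAPS_API_KEY "ABQIAAAA1Js..."
        </VirtualHost>

        <VirtualHost *:80>
            ServerName www.mysite.org
            SetEnv GOOGLE_MAPS_API_KEY "ABQIAAAA1JsS..."
        </VirtualHost>

    Reading it back is then the same one-liner in each templating system, e.g. in PHP:

        $gookey = getenv('GOOGLE_MAPS_API_KEY');   // also visible as $_SERVER['GOOGLE_MAPS_API_KEY']

    Under mod_perl the same value should be reachable through the request's subprocess environment, though that part is untested here.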

    Read the article

  • Python refuses text.replace() in one environment

    - by gx
    Hi fellow programmers, I've been mucking about with the following bit of dirty support code for a Pylons app, which works fine in a Python shell, in a separate Python file, or when running under paster. Now we've put the application online through mod_wsgi and Apache, and this specific piece of code stopped working completely. First off, the code itself:

        def fixStyle(self, text):
            t = text.replace('<p>', '<p style="%s">' % (STYLEDEF,))
            t = t.replace('class="wide"', 'style="width: 125px; %s"' % (DEFSTYLE,))
            t = t.replace('<td>', '<td style="%s">' % (STYLEDEF,))
            t = t.replace('<a ', '<a style="%s" ' % (LINKSTYLE,))
            return t

    It seems pretty straightforward, and to be honest, it is. So what happens when I put a piece of text through it, for example:

        <table><tr><td>Test!</td></tr></table>

    The output should be:

        <table><tr><td style="stuff-from-styledef">Test!</td></tr></table>

    and it is, on most systems. When we put it through the app on Apache/mod_wsgi, though, the following comes out:

        <table><tr><td>Test!</td></tr></table>

    You guessed it. I'm currently at a loss and have no idea where to go next. Googling doesn't really work out, so I'm hoping you guys can help and perhaps point out a fundamental issue with using whatever-is-causing-this. If anything is missing I'll edit it in.

    Read the article

  • Void* array casting to float, int32, int16, etc.

    - by Griffin
    Hey guys, I've got an array of PCM data; it could be 16-bit, 24-bit packed, 32-bit, etc. It could be signed or unsigned, and it could be 32- or 64-bit floating point. It is currently stored as a "void**" matrix, indexed by channel, then by frame. The goal is to allow my library to take in any PCM format and buffer it, without requiring manipulation of the data to fit a designated structure. If the A/D converter spits out 24-bit packed arrays of interleaved PCM, I need to accept it gracefully. I also need to support 16-bit non-interleaved, as well as any permutation of the above formats. I know the bit depth and other information at runtime, and I'm trying to code efficiently while not duplicating code. What I need is an effective way to cast the matrix, put PCM data into the matrix, and then pull it out later. I can cast the matrix to int32_t or int16_t for the 32- and 16-bit signed PCM respectively; I'll probably have to store the 24-bit PCM in an int32_t for 32-bit, 8-bit-byte systems as well. Can anyone recommend a good way to put data into this array and pull it out later? I'd like to avoid large sections of code which look like:

        switch( mFormat ) {
        case 1: // unsigned 8 bit
            for( int i = 0; i < mChannels; i++ )
                framesArray = (uint8_t*)pcm[i];
            break;
        case 2: // signed 8 bit
            for( int i = 0; i < mChannels; i++ )
                framesArray = (int8_t*)pcm[i];
            break;
        case 3: // unsigned 16 bit
        ...

    Limitations: I'm working in C/C++ with no templates, no RTTI and no STL. Think embedded. Things get trickier when I have to port this to a DSP with 16-bit bytes. Does anybody have any useful macros they might be willing to share? Thanks, -Griff

    Read the article

  • File size monitoring in C#

    - by manemawanna
    Hello, I work in the systems & admin team and have been given the task of creating a quota management application, to try to encourage users to better manage their resources, as we currently have issues with disk space and don't enforce hard quotas. At the moment I'm using the code below to go through all the files in a user's home space to retrieve the overall amount of space they are using, since from what I've seen elsewhere there's no other way to do this in C#. The issue with it is that there's quite a high overhead while it retrieves the size of each file and then builds a total:

        try
        {
            long dirSize = 0;
            FileInfo[] FI = new DirectoryInfo("I:\\").GetFiles("*.*", SearchOption.AllDirectories);

            foreach (FileInfo F1 in FI)
            {
                dirSize += F1.Length;
            }

            return dirSize;
        }

    So I'm looking for a quicker way to do this, or a quick way to monitor changes in the size of files using the options available through FileSystemWatcher. At the moment the only thing I can think of is creating a hashtable containing the location and size of each file, so that when a size-changed event occurs I can compare the old size against the new one and update the total. Any suggestions would be greatly appreciated.
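
    A rough sketch of the FileSystemWatcher idea mentioned above (the event wiring is the real .NET API; the dictionary-based running total, class name and non-thread-safe bookkeeping are my own simplifications):

        using System;
        using System.Collections.Generic;
        using System.IO;

        class QuotaWatcher
        {
            private readonly Dictionary<string, long> sizes = new Dictionary<string, long>();
            private FileSystemWatcher watcher;
            private long total;

            public long Total { get { return total; } }

            public void Start(string root)
            {
                // Seed the totals once with the slow full scan, then keep them current from events.
                foreach (FileInfo fi in new DirectoryInfo(root).GetFiles("*.*", SearchOption.AllDirectories))
                {
                    sizes[fi.FullName] = fi.Length;
                    total += fi.Length;
                }

                watcher = new FileSystemWatcher(root);
                watcher.IncludeSubdirectories = true;
                watcher.NotifyFilter = NotifyFilters.Size | NotifyFilters.FileName;
                watcher.Changed += OnChangedOrCreated;
                watcher.Created += OnChangedOrCreated;
                watcher.Deleted += OnDeleted;
                watcher.EnableRaisingEvents = true;
            }

            // Events arrive on thread-pool threads; a real implementation would need locking.
            private void OnChangedOrCreated(object sender, FileSystemEventArgs e)
            {
                long oldSize;
                sizes.TryGetValue(e.FullPath, out oldSize);

                long newSize = new FileInfo(e.FullPath).Length;   // may race with deletes; fine for a sketch
                total += newSize - oldSize;
                sizes[e.FullPath] = newSize;
            }

            private void OnDeleted(object sender, FileSystemEventArgs e)
            {
                long oldSize;
                if (sizes.TryGetValue(e.FullPath, out oldSize))
                {
                    total -= oldSize;
                    sizes.Remove(e.FullPath);
                }
            }
        }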

    Read the article

  • How can I abstract out the core functionality of several Rails applications?

    - by hornairs
    I'd like to develop a number of non-trivial Rails applications which all implement a core set of functionality but each have certain particular customizations, extensions, and aesthetic differences. How can I pull the core functionality (models, controllers, helpers, support classes, tests) common to all these systems out in such a way that updating the core will benefit every application based upon it? I've seen Rails Engines, but they seem to be too detached, almost too abstracted to be built upon. I can see them being useful for adding one component to an existing app, for example bolting a blog engine onto your existing e-commerce site. Since engines seem to be mostly self-contained, it seems difficult and inconvenient to override their functionality and views while keeping DRY. I've also considered abstracting the code into a gem, but this seems a little odd. Do I make the gem depend on the Rails gems, define models & controllers inside it, and then subclass them in my various applications? Or do I define many modules inside the gem that I include in the different spots inside my various applications? How do I test the gem, and then test the set of customizations and overridden functionality on top of it? I'm also concerned with how I'll develop the gem and the Rails apps in tandem; can I vendor a git repository of the gem into the app and push from that, so I don't have to build a new gem every iteration? Also, are there private gem hosts, or can I set up my own gem source? Any general suggestions for this kind of undertaking? Abstraction paradigms to adhere to? Required reading? Comments from the wise who have done this before? Thanks!

    Read the article

  • Invoke Command When "ENTER" Key Is Pressed In XAML

    - by bitxwise
    I want to invoke a command when ENTER is pressed in a TextBox. Consider the following XAML:

        <UserControl ...
            xmlns:i="clr-namespace:System.Windows.Interactivity;assembly=System.Windows.Interactivity"
            ...>
            ...
            <TextBox>
                <i:Interaction.Triggers>
                    <i:EventTrigger EventName="KeyUp">
                        <i:InvokeCommandAction Command="{Binding MyCommand}" CommandParameter="{Binding Text}" />
                    </i:EventTrigger>
                </i:Interaction.Triggers>
            </TextBox>
            ...
        </UserControl>

    and that MyCommand is as follows:

        public ICommand MyCommand
        {
            get { return new DelegateCommand<string>(MyCommandExecute); }
        }

        private void MyCommandExecute(string s) { ... }

    With the above, my command is invoked for every key press. How can I restrict the command so that it is only invoked when the ENTER key is pressed? I understand that with Expression Blend I can use Conditions, but those seem to be restricted to elements and can't consider event arguments. I have also come across SLEX, which offers its own InvokeCommandAction implementation built on top of the System.Windows.Interactivity implementation and can do what I need. Another consideration is to write my own trigger, but I'm hoping there's a way to do it without using external toolkits.

    Read the article

  • Finding missing files by checksum

    - by grw
    Hi there, I'm doing a large data migration between two file systems (let's call them F1 and F2) on a Linux system, which will necessarily involve copying the data verbatim into a differently-structured hierarchy on F2 and changing the file names. I'd like to write a script to generate a list of files which are in F1 but not in F2, i.e. the ones which weren't copied by the migration script into the new hierarchy, so that I can go back and migrate them manually. Unfortunately, for reasons not worth going into, the migration script can't be modified to list the files that it doesn't migrate. My question differs from a previously answered one because I cannot rely on filenames as a comparison. I know the basic outline of the process would be: generate a list of checksums for all files, recursing through F1; do the same for F2; then compare the lists and generate a negative intersection of the checksums, ignoring the file names, to find files which are in F1 but not in F2. I'm kind of stuck getting past that stage, so I'd appreciate any pointers on which tools to use. I think I need to use the 'comm' command to compare the lists of file checksums, but since md5sum, sha512sum and the like put the file name next to the checksum, I can't see a way to get a useful comparison out of them. Maybe awk is the way to go? I'm using Red Hat Enterprise Linux 5.x. Thanks.
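
    A hedged sketch of that outline using md5sum, awk and comm (the /mnt/F1 and /mnt/F2 paths are placeholders; substitute sha512sum if MD5 collisions are a concern):

        #!/bin/bash
        # Full checksum lists (checksum + path), then checksum-only sorted lists for comm.
        find /mnt/F1 -type f -exec md5sum {} + > /tmp/f1.md5
        find /mnt/F2 -type f -exec md5sum {} + > /tmp/f2.md5

        awk '{print $1}' /tmp/f1.md5 | sort -u > /tmp/f1.sums
        awk '{print $1}' /tmp/f2.md5 | sort -u > /tmp/f2.sums

        # Checksums present in F1 that have no counterpart in F2 ...
        comm -23 /tmp/f1.sums /tmp/f2.sums > /tmp/missing.sums

        # ... mapped back to the original F1 file names.
        grep -Ff /tmp/missing.sums /tmp/f1.md5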

    Read the article

  • Wicket application + Apache + mod_jk - AJP queues are filling up!

    - by nojyarg
    Dear community, we have a Wicket-based Java application deployed in a production server cluster, using Apache (2.2.3) with mod_jk (1.2.30) as the load-balancing component with sticky sessions, and JBoss 5 as the application container for the Java application. We are inconsistently seeing an issue in our production environment where our AJP queues between Apache and JBoss, as shown in the JMX console, fill up with requests to the point where the application server is no longer taking on any new requests. When looking at all the system components involved (overall traffic, DB load, DB process list, load of all clustered application server nodes), nothing points towards a capacity issue which would explain why the calls are being stalled in the AJP queue. Instead, all systems appear sufficiently idle. So far our only remedy for this issue is to restart the app servers and the load balancer, which only occasionally clears the AJP queues. We are trying to figure out why the queues are filling up to the point that no calls get returned to the end user, although the system is not under high load. Has anyone else experienced similar problems? Are there any other system metrics we should monitor that could explain the queuing behavior? Is this potentially a mod_jk issue? If so, is it advisable to swap mod_jk for mod_cluster to resolve the issue? Any advice is highly appreciated. If I can provide additional information for the sake of troubleshooting, I would be more than willing to do so. /Ben

    Read the article

  • Better language or checking tool?

    - by rwallace
    This is primarily aimed at programmers who use unmanaged languages like C and C++ in preference to managed languages, forgoing some forms of error checking to obtain benefits like the ability to work in extremely resource-constrained systems or the last increment of performance, though I would also be interested in answers from those who use managed languages. Which of the following would be of most value? (1) A language that would optionally compile to CLR byte code or to machine code via C, and would provide things like optional array bounds checking, more support for memory management in environments where you can't use garbage collection, and faster compile times than typical C++ projects. (Think e.g. Ada or Eiffel with Python syntax.) (2) A tool that would take existing C code and perform static analysis to look for things like potential null pointer dereferences and array overflows. (Think e.g. an open-source equivalent to Coverity.) (3) Something else I haven't thought of. Or, put another way, when you're using C-family languages, is the top of your wish list more expressiveness, better error checking, or something else? The reason I'm asking is that I have a design and prototype parser for #1, and an outline design for #2, and I'm wondering which would be the better use of resources to work on after my current project is up and running; but I think the answers may be useful for other tools programmers also. (As usual with questions of this nature, if the answer you would give is already there, please upvote it.)

    Read the article

  • Where are mpx386.s and start.c in Minix 3.2?

    - by John Bowlinger
    I'm trying to follow along in Operating Systems: Design and Implementation, 3rd edition, and I'm now at the part in the book where Tanenbaum is discussing bootup and kernel process switching. He keeps referring to two files (mpx386.s, start.c) that are supposedly in a directory called kernel, but I can't seem to find them. When I go to boot/minix/3.2.0/kernel under the root directory, kernel just seems to be a binary file that is illegible in a terminal. There also seem to be a bunch of mod01-mod12 gz binary files in the 3.2.0 directory. Am I in the wrong directory, or is there something I need to install or do to read kernel? I would like to follow along with the book against what's on my screen, instead of constantly flipping back and forth. I realize a lot of files are completely different from this book published in 2006, and I accept that, but this seems to be a critical juncture of the book and the operating system as a whole. If it's any consolation, I'm running the OS in VirtualBox on a 64-bit MacBook.

    Read the article
