Search Results

Search found 24117 results on 965 pages for 'write through'.

Page 536/965 | < Previous Page | 532 533 534 535 536 537 538 539 540 541 542 543  | Next Page >

  • Difference between coder and programmer in common examples, rules

    - by MInner
    A real definition is one based on axioms and rules from outside the subject itself. (Subjective, I know.) It's easy to talk about the difference with someone who is in programming, but it's usually quite hard to show the difference to a person who has never written a program. Which examples, analogies, or logical chains do you think are best for showing this kind of difference? The only example that comes to mind is the economist (coder) versus the mathematician (programmer). How do you feel about it?

    Read the article

  • In Ada how do I initialise an array constant with a repeated number?

    - by mat_geek
    I need an array of 820 zeros for use with a mathematical function. In C I could just write the following and the compiler would fill the array:

        const float EMPTY_NUMBER_A[820] = { 0.0, };

    However, in Ada that isn't possible. I really don't want to hard-code the 820 elements as 0.0. Is there a way to get the compiler to do it?

        type Number_A is array (1 .. 820) of Float;
        EMPTY_NUMBER_A : constant Number_A := something;

    Using Ada 95 and GNAT.
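
    A minimal sketch of the usual Ada answer: an array aggregate with an others choice, which is valid Ada 95 and should work under GNAT:

        type Number_A is array (1 .. 820) of Float;
        --  "others => 0.0" tells the compiler to fill every element
        EMPTY_NUMBER_A : constant Number_A := (others => 0.0);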

    Read the article

  • Code Contracts: Unit testing contracted code

    - by DigiMortal
    Code contracts and unit tests are not replacements for each other. They have different purposes and different natures. Whether or not you use code contracts, you still have to write tests for your code. In this posting I will show you how to unit test code with contracts. In my previous posting about code contracts I showed how to avoid ContractExceptions, which are defined in the code contracts runtime and are not accessible to us at design time. That was one step toward making my randomizer testable. In this posting I will complete the mission.

    Problems with current code

    This is my current code.

        public class Randomizer
        {
            public static int GetRandomFromRangeContracted(int min, int max)
            {
                Contract.Requires<ArgumentOutOfRangeException>(
                    min < max,
                    "Min must be less than max"
                );

                Contract.Ensures(
                    Contract.Result<int>() >= min &&
                    Contract.Result<int>() <= max,
                    "Return value is out of range"
                );

                var rnd = new Random();
                return rnd.Next(min, max);
            }
        }

    As you can see, this code has some problems:

    - The Randomizer class is static and cannot be instantiated, so we cannot move this class between components if we need to.
    - GetRandomFromRangeContracted() is not fully testable, because we cannot currently affect the random number generator's output and therefore cannot test the post-condition.

    Now let's solve these problems.

    Making randomizer testable

    As a first step I made Randomizer a class that must be instantiated. This is a simple thing to do. Now let's solve the problem with the Random class. To make Randomizer testable I define an IRandomGenerator interface and a RandomGenerator class. The public constructor of Randomizer accepts an IRandomGenerator as argument.

        public interface IRandomGenerator
        {
            int Next(int min, int max);
        }

        public class RandomGenerator : IRandomGenerator
        {
            private Random _random = new Random();

            public int Next(int min, int max)
            {
                return _random.Next(min, max);
            }
        }

    And here is our Randomizer after its total make-over.

        public class Randomizer
        {
            private IRandomGenerator _generator;

            private Randomizer()
            {
                _generator = new RandomGenerator();
            }

            public Randomizer(IRandomGenerator generator)
            {
                _generator = generator;
            }

            public int GetRandomFromRangeContracted(int min, int max)
            {
                Contract.Requires<ArgumentOutOfRangeException>(
                    min < max,
                    "Min must be less than max"
                );

                Contract.Ensures(
                    Contract.Result<int>() >= min &&
                    Contract.Result<int>() <= max,
                    "Return value is out of range"
                );

                return _generator.Next(min, max);
            }
        }

    It may seem inconvenient to instantiate Randomizer now, but you can always use DI/IoC containers to break compiled dependencies between the components of your system.

    Writing tests for randomizer

    IRandomGenerator solved the problem of testing the post-condition. Now it is time to write tests for the Randomizer class. Writing tests for contracted code is not easy. The main problem is still the ContractException that we are not able to access, yet it is the main exception we get as soon as a contract fails. Although pre-conditions can throw exceptions of a type we choose, we cannot do much when post-conditions fail. We have to use the Contract.ContractFailed event, which is raised for every contract failure. This way we find ourselves in a situation where supporting the input interface well makes it impossible to support the output interface well, and vice versa. ContractFailed is a nasty hack and it works in a pretty weird way. Although the documentation says that ContractFailed is a good choice for testing contracts, it is still pretty painful. As a last resort I got the tests working almost normally by wrapping the thrown exceptions. Can you remember a similar solution from the times of Visual Studio 2008 unit tests? I cannot understand how Microsoft managed to mess up testing again.

        [TestClass]
        public class RandomizerTest
        {
            private Mock<IRandomGenerator> _randomMock;
            private Randomizer _randomizer;
            private string _lastContractError;

            public TestContext TestContext { get; set; }

            public RandomizerTest()
            {
                Contract.ContractFailed += (sender, e) =>
                {
                    e.SetHandled();
                    e.SetUnwind();

                    throw new Exception(e.FailureKind + ": " + e.Message);
                };
            }

            [TestInitialize()]
            public void RandomizerTestInitialize()
            {
                _randomMock = new Mock<IRandomGenerator>();
                _randomizer = new Randomizer(_randomMock.Object);
                _lastContractError = string.Empty;
            }

            #region InputInterfaceTests
            [TestMethod]
            [ExpectedException(typeof(Exception))]
            public void GetRandomFromRangeContracted_should_throw_exception_when_min_is_not_less_than_max()
            {
                try
                {
                    _randomizer.GetRandomFromRangeContracted(100, 10);
                }
                catch (Exception ex)
                {
                    throw new Exception(string.Empty, ex);
                }
            }

            [TestMethod]
            [ExpectedException(typeof(Exception))]
            public void GetRandomFromRangeContracted_should_throw_exception_when_min_is_equal_to_max()
            {
                try
                {
                    _randomizer.GetRandomFromRangeContracted(10, 10);
                }
                catch (Exception ex)
                {
                    throw new Exception(string.Empty, ex);
                }
            }

            [TestMethod]
            public void GetRandomFromRangeContracted_should_work_when_min_is_less_than_max()
            {
                int minValue = 10;
                int maxValue = 100;
                int returnValue = 50;

                _randomMock.Setup(r => r.Next(minValue, maxValue))
                    .Returns(returnValue)
                    .Verifiable();

                var result = _randomizer.GetRandomFromRangeContracted(minValue, maxValue);

                _randomMock.Verify();
                Assert.AreEqual<int>(returnValue, result);
            }
            #endregion

            #region OutputInterfaceTests
            [TestMethod]
            [ExpectedException(typeof(Exception))]
            public void GetRandomFromRangeContracted_should_throw_exception_when_return_value_is_less_than_min()
            {
                int minValue = 10;
                int maxValue = 100;
                int returnValue = 7;

                _randomMock.Setup(r => r.Next(10, 100))
                    .Returns(returnValue)
                    .Verifiable();

                try
                {
                    _randomizer.GetRandomFromRangeContracted(minValue, maxValue);
                }
                catch (Exception ex)
                {
                    throw new Exception(string.Empty, ex);
                }

                _randomMock.Verify();
            }

            [TestMethod]
            [ExpectedException(typeof(Exception))]
            public void GetRandomFromRangeContracted_should_throw_exception_when_return_value_is_more_than_max()
            {
                int minValue = 10;
                int maxValue = 100;
                int returnValue = 102;

                _randomMock.Setup(r => r.Next(10, 100))
                    .Returns(returnValue)
                    .Verifiable();

                try
                {
                    _randomizer.GetRandomFromRangeContracted(minValue, maxValue);
                }
                catch (Exception ex)
                {
                    throw new Exception(string.Empty, ex);
                }

                _randomMock.Verify();
            }
            #endregion
        }

    Although these tests are pretty awful and contain hacks, we are at least able to make sure that our code works as expected. Here is the test list after running these tests.

    Conclusion

    Code contracts are very new stuff in the Visual Studio world, and like all young technologies they have some problems. As you saw, making contracted code testable is easy only as long as pre-conditions are concerned. When we start dealing with post-conditions we end up with hacked tests. I hope that future versions of code contracts will solve the error handling issues so that testing contracted code is easier than it is right now.

    Read the article

  • Firefox-Addon: Restart and save all current tabs and windows

    - by nokturnal
    Hello guys/gals! First off, this is my first attempt at writing an add-on. That being said, I am attempting to write an add-on that makes some configuration changes and needs to restart Firefox for the changes to take effect. I am currently restarting Firefox using the following code:

        var boot = Components.classes["@mozilla.org/toolkit/app-startup;1"]
                             .getService(Components.interfaces.nsIAppStartup);
        boot.quit(Components.interfaces.nsIAppStartup.eForceQuit |
                  Components.interfaces.nsIAppStartup.eRestart);

    The problem is, it restarts and opens the browser window(s) to whatever the user's homepage is currently set to. I want it to re-open all windows/tabs that were open before the restart (similar to what happens when you install a new add-on). Anyone ever messed with this type of functionality before?
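
    One possible approach, sketched and untested: it assumes the legacy XPCOM preferences service and the browser.sessionstore.resume_session_once pref, which in Firefox of that era told the session store to restore the previous session on the next startup.

        // Ask the session store to restore the current session on next launch
        var prefs = Components.classes["@mozilla.org/preferences-service;1"]
                              .getService(Components.interfaces.nsIPrefBranch);
        prefs.setBoolPref("browser.sessionstore.resume_session_once", true);

        // Then restart as before
        var boot = Components.classes["@mozilla.org/toolkit/app-startup;1"]
                             .getService(Components.interfaces.nsIAppStartup);
        boot.quit(Components.interfaces.nsIAppStartup.eForceQuit |
                  Components.interfaces.nsIAppStartup.eRestart);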

    Read the article

  • Overwrite the Soap Envelope in Suds python

    - by chrissygormley
    Hello, I have a camera and I am trying to connect to it via suds. I have tried sending raw XML and found that the only thing stopping the suds XML from working is an incorrect SOAP envelope namespace. The envelope namespace is: xmlns:SOAP-ENV="http://schemas.xmlsoap.org/soap/envelope/" and I want to rewrite it to: xmlns:SOAP-ENV="http://www.w3.org/2003/05/soap-envelope" In order to add a namespace in Python I try this code: message = Element('Element_name').addPrefix(p='SOAP-ENC', u='www.w3.org/ENC') But when I add the SOAP-ENV to the namespace it doesn't write, as it is hardcoded into the suds bindings. Is there a way to overwrite this in suds? Thanks for any help.
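
    A sketch of one way around this, assuming suds 0.4+ (which added a MessagePlugin hook that can rewrite the outgoing envelope just before it is sent; the camera URL below is a placeholder):

        from suds.client import Client
        from suds.plugin import MessagePlugin

        class Soap12Envelope(MessagePlugin):
            def sending(self, context):
                # Rewrite the hard-coded SOAP 1.1 namespace to SOAP 1.2
                context.envelope = context.envelope.replace(
                    'http://schemas.xmlsoap.org/soap/envelope/',
                    'http://www.w3.org/2003/05/soap-envelope')

        client = Client('http://camera.example/onvif/device_service?wsdl',
                        plugins=[Soap12Envelope()])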

    Read the article

  • Android: How to save a preview frame as jpeg image?

    - by niko
    Hi, I would like to save a preview frame as a JPEG image. I have tried to write the following code:

        public void onPreviewFrame(byte[] _data, Camera _camera) {
            if (settings.isRecording()) {
                Camera.Parameters params = _camera.getParameters();
                params.setPictureFormat(PixelFormat.JPEG);
                _camera.setParameters(params);
                String path = "ImageDir" + frameCount;
                fileRW.setPath(path);
                fileRW.WriteToFile(_data);
                frameCount++;
            }
        }

    but it's not possible to open a saved file as a JPEG image. Does anyone know how to save preview frames as JPEG images? Thanks
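
    The likely catch is that onPreviewFrame() delivers raw YUV data (NV21 by default), not JPEG; setPictureFormat() only affects takePicture(). A minimal sketch of converting a preview frame with YuvImage (android.graphics, available since API level 8; the output path and quality are arbitrary):

        // imports assumed: android.graphics.ImageFormat, android.graphics.Rect,
        // android.graphics.YuvImage, java.io.FileOutputStream, java.io.IOException
        public void onPreviewFrame(byte[] data, Camera camera) {
            Camera.Parameters params = camera.getParameters();
            int w = params.getPreviewSize().width;
            int h = params.getPreviewSize().height;
            try {
                // Wrap the raw NV21 preview bytes and compress them to JPEG
                YuvImage yuv = new YuvImage(data, ImageFormat.NV21, w, h, null);
                FileOutputStream out =
                        new FileOutputStream("/sdcard/frame" + frameCount + ".jpg");
                yuv.compressToJpeg(new Rect(0, 0, w, h), 80, out);
                out.close();
                frameCount++;
            } catch (IOException e) {
                Log.e("Preview", "Failed to write JPEG", e);
            }
        }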

    Read the article

  • Monitoring disk performance with MRTG

    - by Ghostrider
    I use MRTG to monitor vital stats on my servers like disk space, CPU load, memory usage, temperatures, etc. It all works fine and well for parameters that don't change rapidly. By running a small VB script I can also get any performance counter. However, these scripts are called by MRTG every 5 minutes, while performance counters like physical disk idle time return a snapshot value from the previous few seconds, so a lot of data is missed. Surely I could write a service that would poll all required counters in the background and store average values somewhere on disk where MRTG would pick them up. However, before I do so I would like to find out if there is some ready-made solution that would give me the average value of a counter over the last 5 minutes as opposed to an immediate snapshot.

    Read the article

  • Is Visual Studio 2010 WebDev WebServer (Cassini) 64-bit compatible?

    - by David
    I'm now developing on Visual Studio 2008 on a 64-bit OS (Windows Server 2008 64-bit). While the apps I write are 64-bit capable, as is IIS7, the built-in ASP.NET Development Server (aka Cassini, aka WebDev.WebServer.exe) runs as 32-bit. This brings up a plethora of issues, such as:

    - 32-bit and 64-bit applications have separate HKLM\Software registry homes
    - There are 32-bit and 64-bit versions of the SQL Server Client Network Utility
    - Other fun surprises I haven't discovered but I'm sure will spring up

    While I am finding workarounds for most of this, I have to ask... Does anyone who has played with the Visual Studio 2010 preview bits on 64-bit architecture know if the development web servers can handle 64-bit, and if so, are there options for which mode to run it in? (Like a checkbox in the project properties, for instance.)

    Read the article

  • Using separate model methods to manage transactions

    - by DCrystal
    If I've got two (or more) model methods which do (for example, in a billing system) enrolling/withdrawing, and one controller method that calls two (or more) of these model methods: is it a good way (and are there any suggestions on how to do it better) to write/use two model methods like these?

        public function start_transaction(){
            $this->db->trans_start();
        }

        public function end_transaction(){
            $this->db->trans_complete();
        }

    And call them in the controller's method:

        public function smth(){
            //something
            $this->model->start_transaction();
            $this->model->enroll();
            //something else
            $this->model->withdraw();
            $this->model->end_transaction();
        }

    Will the transaction be reversed if the model's withdraw() method fails? Thanks.
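
    For what it's worth, a sketch of how this usually looks with CodeIgniter's transaction API, assuming both model methods run their queries on the same $this->db connection: in the default strict mode, a failed query between trans_start() and trans_complete() triggers an automatic rollback, and trans_status() reports the outcome.

        public function smth(){
            $this->db->trans_start();       // begin transaction (strict mode by default)
            $this->model->enroll();
            $this->model->withdraw();
            $this->db->trans_complete();    // commits, or rolls back if any query failed

            if ($this->db->trans_status() === FALSE) {
                // enroll/withdraw queries were rolled back
                log_message('error', 'Billing transaction failed');
            }
        }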

    Read the article

  • Accessing one process's memory contents from another module

    - by Fangkai Yang
    I am developing a virtual device driver such that the user can write a process's pid and a virtual address to the driver, and the module will use these two values to get the memory contents of the target process. I am wondering if there are any easy functions that can fetch a user page's data at this virtual address. I have tried get_user, but that is not possible because the module executes get_user in another process's context. I also tried to use ptrace_readdata; however, it seems that kernel/ptrace.c relies on a function, access_process_vm, that is not exported to modules, and I also don't know how to compile the source code of my module against this file (the linker searches files in /linux/include by default). I am wondering if there are any other solutions...
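
    A rough sketch of the usual module-side alternative: pin the target page with get_user_pages() and map it. This assumes a 2.6-era kernel (the signature of get_user_pages() and the mmap_sem field have both changed in later kernels, where get_user_pages_remote() is the equivalent), and it skips details such as taking the RCU read lock around the pid lookup.

        #include <linux/sched.h>
        #include <linux/mm.h>
        #include <linux/highmem.h>
        #include <linux/pid.h>

        /* Read one byte from another process's address space. */
        static int read_remote_byte(pid_t pid, unsigned long addr, char *out)
        {
            struct task_struct *task;
            struct mm_struct *mm;
            struct page *page;
            char *kaddr;
            int ret;

            task = pid_task(find_vpid(pid), PIDTYPE_PID);
            if (!task)
                return -ESRCH;

            mm = get_task_mm(task);             /* take a reference on the mm */
            if (!mm)
                return -EINVAL;

            down_read(&mm->mmap_sem);
            ret = get_user_pages(task, mm, addr, 1, 0 /* !write */, 0 /* !force */,
                                 &page, NULL);  /* pin the page */
            if (ret == 1) {
                kaddr = kmap(page);             /* map it into kernel space */
                *out = kaddr[addr & (PAGE_SIZE - 1)];
                kunmap(page);
                put_page(page);
                ret = 0;
            }
            up_read(&mm->mmap_sem);
            mmput(mm);
            return ret;
        }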

    Read the article

  • Brute force characters into a textbox in c#

    - by Fred Dunly
    Hey everyone, I am VERY new to programming and the only language I know is C#, so I will have to stick with that... I want to make a program that "tests passwords" to see how long they would take to break with a basic brute force attack. So what I did was make two text boxes (textBox1 and textBox2) and wrote the program so that if the text boxes had the same input, a "correct password" label would appear. But I want to write the program so that textBox2 will run a brute force algorithm in it, and when it comes across the correct password, it will stop. I REALLY need help, and if you could just post my attached code with the correct additives in it, that would be great. The program so far is extremely simple, but I am very new to this. Thanks in advance.

        private void textBox2_TextChanged(object sender, EventArgs e)
        {
        }

        private void button1_Click(object sender, EventArgs e)
        {
            if (textBox2.Text == textBox1.Text)
            {
                label1.Text = "Password Correct";
            }
            else
            {
                label1.Text = "Password Wrong";
            }
        }

        private void label1_Click(object sender, EventArgs e)
        {
        }
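
    A minimal sketch of the brute-force part (a hypothetical helper, not the poster's code): it enumerates lowercase candidates of increasing length and stops at the first match. You could call it from button1_Click with textBox1.Text as the target and show the result in textBox2.

        // Try every lowercase combination up to maxLen; returns the match or null.
        private static string BruteForce(string target, int maxLen)
        {
            const string charset = "abcdefghijklmnopqrstuvwxyz";
            for (int len = 1; len <= maxLen; len++)
            {
                long total = (long)Math.Pow(charset.Length, len);
                for (long n = 0; n < total; n++)
                {
                    // Decode n as a base-26 number into a candidate string
                    var candidate = new char[len];
                    long v = n;
                    for (int i = 0; i < len; i++)
                    {
                        candidate[i] = charset[(int)(v % charset.Length)];
                        v /= charset.Length;
                    }
                    if (new string(candidate) == target)
                        return new string(candidate);
                }
            }
            return null; // not found within maxLen
        }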

    Read the article

  • VS 2012 Code Review – Before Check In OR After Check In?

    - by Tarun Arora
    "Is Code Review Important and Effective?" There is a consensus across the industry that code review is an effective and practical way to collar code inconsistency and possible defects early in the software development life cycle. Among others, some of the advantages of code reviews are:

    - Bugs are found faster
    - Forces developers to write readable code (code that can be read without explanation or introduction!)
    - Optimization methods/tricks/productive programs spread faster
    - Programmers as specialists "evolve" faster
    - It's fun

    "Code review is systematic examination (often known as peer review) of computer source code. It is intended to find and fix mistakes overlooked in the initial development phase, improving both the overall quality of software and the developers' skills. Reviews are done in various forms such as pair programming, informal walkthroughs, and formal inspections." – Wikipedia

    Nowhere does the definition mention whether it is better to review code before the code has been committed to version control or after the commit has been performed. No matter which side you favour, Visual Studio 2012 allows you to request a code review both before check-in and after check-in. Let's weigh the pros and cons of the approaches independently.

    Code Review Before Check In or Code Review After Check In?

    Approach 1 – Code review before check-in

    The developer completes the code and feels its quality is appropriate for check-in to TFS. The developer raises a code review request to have a second pair of eyes validate whether the code abides by the recommended best practices, will not result in any defects due to common coding mistakes, and whether any optimizations can be made to improve the code quality.

    Image 1 – code review before check-in

    Pros:
    - Everything that gets committed to source control is reviewed.
    - Minimizes the chances of smelly code making its way into the code base.
    - Decreases the cost of fixing bugs; remember, the earlier you find them, the lesser the pain in fixing them.

    Cons:
    - Development code freeze – since the changes aren't in source control yet, further development can only be done offline.
    - The changes have not been through a CI build, so it is hard to say whether the code abides by all build quality standards.
    - Inconsistent! Cumbersome to track the actual code review process.
    - Not every change to the code base is worth reviewing; a lot of effort is invested for very little gain.

    Approach 2 – Code review after check-in

    The developer checks in, and random code reviews are performed on the checked-in code.

    Image 2 – code review after check-in

    Pros:
    - The code has already passed the CI build and run through any code analysis plug-ins you may have running on the build server.
    - Instruct the developer to ensure ZERO FxCop, StyleCop and static code analysis issues before check-in; the code is cleaner and smell-free even before the code review.
    - No offline development; developers can continue to develop against source control.

    Cons:
    - Bad code can easily make its way into the code base.
    - Since the review takes place much later in the cycle, the cost of fixing issues can prove to be much higher.

    Approach 3 – Hybrid approach

    The community advocates a more hybrid approach, a blend of tooling and the human accountability quotient.

    Image 3 – hybrid approach

    1. Code review high-impact check-ins. It is not possible to review everything; by setting up code review check-in policies you can end up slowing your team down. Moreover, the code that you are reviewing before check-in hasn't even been through a green CI build either.
    2. Tooling. Let the tooling work for you. By running static analysis, FxCop, StyleCop and other plug-ins on the build agent, you can identify the real issues that in my opinion can't possibly be identified using human reviews. Configure the tooling to report back the top 10 issues every day. Mandate the manual code review of individuals who make it onto this list of shame most often.
    3. During merge. I would prefer eliminating some of the other code issues during the merge from the Main branch to the release branch. In a scrum project this is easier, because cherry-picking the merges is a possibility and the size of code being reviewed is still limited.

    Let the tooling work for you: if someone breaks the CI build often, put them on a gated check-in build course until you see improvement. If someone appears on the top-10 list of shame generated via the build, ensure that all their code is reviewed until you see improvement. At the end of the day, the goal is to ensure that the code being delivered is top quality. By enforcing a code review before any check-in, you force the developer to work offline or stay put until the review is complete.

    What do the experts say?

    So I asked a few experts what they thought of a "code review quality gate before checking in code".

    Terje Sandstrom | Microsoft ALM MVP

    You mean a review quality gate BEFORE checking in code????? That would mean a lot of code staying either local or in shelvesets, not even having been through a CI build, with a green CI build being the main criterion for going further, e.g. to the review state. I would not like code lying around with no check-ins. Given a requirement that code is checked in in small pieces, 4-8 hours work max, and AT LEAST daily check-ins, a manual code review comes second down the lane. I would expect review quality gates to happen before merging back to main, or before merging to release. But that would all be on checked-in code. Branching is absolutely one way to ease the pain. Another way we are using is automatic quality builds, running metrics, coverage, static code analysis. Unfortunately it takes some time (it would be great to be on CIs, but...), so it's scheduled every night. Based on this we get, among other stuff, top-10 lists of suspicious code, which is then subjected to reviews. If a person seems to be very popular on these top-10 lists, we subject every check-in from that person to a review for a period. That normally helps. None of the clients I have can afford to have every check-in reviewed, so we need to find ways around it. I don't disagree with the nicety of having all the code reviewed, but I find it hard to find those resources in today's enterprises.

    David V. Corbin | Visual Studio ALM Ranger

    I tend to agree with both sides. I hate having code that is not checked in, but at the same time hate having "bad" code in the repository. I have found that branching is one approach to solving this dilemma. Code is checked into the private/feature branch before the review, but is not merged over to the "official" branch until after the review. I advocate both, depending on circumstance (especially team dynamics). The "pre-checkin" review is usually for elements that may impact the project as a whole; think of it as another "gate" along with passing unit tests. The "post-checkin" review may very well not be at the changeset level, but correlates to a review at the "user story" level. Again, this depends on the team dynamics in play.

    Robert MacLean | Microsoft ALM MVP

    I do not think there is a right answer for the industry as a whole. In short, the question is: why do you do reviews? Your question implies risk mitigation, so in low-risk areas you can get away with review after check-in, while in high-risk areas you need to do it before check-in. As an example, those new to a team, or juniors, need it much earlier (maybe that is before check-in, maybe that is soon after) than seniors who have shipped twenty sprints on the team.

    Abhimanyu Singhal | Visual Studio ALM Ranger

    It depends on the scenario. We recommend post-check-in reviews when:
    1. We don't want to block other checks and processes on manual code reviews. Manual reviews take time, and some pieces may not require manual reviews at all.
    2. We need to trace all changes and track history.
    3. We have a code promotion strategy/process in place. For risk mitigation, post-check-in code can be promoted to Accepted branches, or can be rejected.

    Pre-check-in reviews are used when:
    1. There is a high risk factor associated.
    2. Reviewers generally (most of the time) have immediate availability.
    3. The team does not have strict tracking needs.

    Simply speaking, no single process fits all scenarios. You need to select what works best for your team/project.

    Thomas Schissler | Visual Studio ALM Ranger

    This is an interesting discussion; I'm right now discussing details about executing code reviews with my teams. I see and understand the aspects you brought in, but there is another side as well that I'd like to point out.
    1. If you do reviews per check-in, this is not very practical as a hard rule, because it will disturb the flow of the team very often, or it will reduce the check-in frequency of the devs, which I would not accept.
    2. If you do later reviews, for example if you review PBIs, it is not easy to find out which code you should review. Either you review all changesets associated with the PBI, but then you might review code which has been changed by a later check-in and the dev may have already fixed the issue; or you review the diff of the latest changeset of the PBI against the first, but then you might also review changes from other PBIs.

    Jakob Leander | Sr. Director, Avanade

    In my experience, manual code review:
    1. Does not get done, and at the very least does not get redone after changes (regardless of intentions at the start of the project).
    2. When a project actually does do it, they often do not do it right away = errors pile up.
    3. Requires a lot of time discussing/defining the standard and for the team to learn it.

    However, code review is very important, since e.g. even small memory leaks in a high-volume web solution have big consequences. In the last years I have advocated the following approach for code review:
    - Architects up front do "at least one best practice example" of each type of component and tell the team: copy from this one. This should include error handling, logging, security, etc.
    - The dev lead on the project continuously browses code to validate that the best practices are used, especially that patterns etc. are not broken. You can do this formally after each sprint/iteration if you want. Once this is validated, it is unlikely to "go bad" even during later code changes.
    - Agree with the customer to rely on static code analysis from Visual Studio as the one and only coding standard.

    This has HUUGE benefits:
    - You can easily tweak it to reach the level you desire, together with the customer.
    - It is easy to measure for both developers and management.
    - It is 100% consistent across the code base.
    - It gets validated all the time, so you never end up getting hammered by a customer review in the end.
    - It is easy to tell the developer that you do not want code back unless it has zero errors = minimized communication.

    You need to track this at least during nightly builds and make sure the team sees the total number of issues. Do not allow the number of issues to grow uncontrolled. On the projects I run, I require code analysis to have run on code before check-in (a check-in rule). This means:
    - You have to have a clean compile (or CA won't run), so as an extra benefit there are very few broken builds.
    - You can change a few of the rules to compile as errors instead of warnings. I often do this for "missing dispose" issues, which you REALLY do not want in your app.

    Tip: place your custom CA rules files as part of the solution. That way it works when you do branching etc. (the path to the CA file is relative in VS). Some may argue that CA is not as good as manual inspection, but since manual inspection in reality suffers from the three issues above, it is IMO a MUCH better (and much cheaper) approach from a helicopter perspective.

    Tirthankar Dutta | Director, Avanade

    I think code review should be run both before and after check-ins. There are some code metrics that are meant to be run on the entire codebase. Also, especially on multi-site projects, one should strive to architect in a way that lets men manage the framework while boys write the repetitive code; that scales very well with the need to review less, by containment and by imposing architectural restrictions to emphasise the design.

    Bruno Capuano | Microsoft ALM MVP

    For code reviews (meaning peer reviews) in a distributed team I use http://www.vsanywhere.com/default.aspx

    David Jobling | Global Sr. Director, Avanade

    Peer review is the only way to scale, and it's a great practice for all in the team to learn to perform and accept. In my experience you soon learn whose code to watch more than others and tune the attention.

    Mikkel Toudal Kristiansen | Manager, Avanade

    If you have several branches in your code base, you will need to merge often. This requires manual merging when a file has been changed in both branches, and it offers a good opportunity to actually review the changed code. So my advice is: merging between branches should be done as often as possible, it should be done by a senior developer, and he/she should perform a full code review of the code being merged. As for detecting architectural smells and code smells creeping into the code base, one really good third-party tool exists: NDepend (http://www.ndepend.com/), for static code analysis of the current state of the code base. You could also consider adding StyleCop to the solution.

    Jesse Houwing | Visual Studio ALM Ranger

    I gave a presentation on this subject at the TechDays conference in NL last year. See my presentation and slides here (talk in Dutch, but English presentation): http://blog.jessehouwing.nl/2012/03/did-you-miss-my-techdaysnl-talk-on-code.html

    I'd like to add a few more points:
    - Before/after check-in is mostly a trust issue. If you have a team that does diligent peer reviews and regularly talks/sits together or peer reviews, there's no need to enforce a before-check-in policy. Peer programming and regular feedback during development can take care of most of the review requirements, as long as the team isn't under stress.
    - Under stress, enforce pre-check-in reviews. It might sound strange if you're already under time or budgetary constraints, but it is under such conditions that most real issues start to be created or pile up.
    - Use tools to catch the most common errors. Code Analysis/FxCop was already mentioned; HP Fortify, ReSharper, CodeRush etc. can help you there. There are also a lot of third-party rules you can add to Code Analysis. I've written a few myself (http://fccopcontrib.codeplex.com) and various teams from Microsoft have added their own rules (MSOCAF for SharePoint, WSSF for WCF). For common errors that keep cropping up, see if you can define a rule; it's much easier. But more importantly, make sure you have a good help page explaining *WHY* it's wrong.
    - If you have small feature or developer branches/shelvesets, you might want to review pre-merge. It's still better to do peer reviews and peer programming, but the most important thing is that bad-quality code doesn't make it into the important branch.

    So my philosophy:
    - Use tooling as much as possible.
    - Make sure the team understands the tooling and the importance of the things it flags. It's too easy to just click "suppress all" to ignore the warnings.
    - Under stress, tighten the process; it's under stress that the problems of late reviews will really surface.
    - Most importantly, if you do reviews, do them as early as possible, but never later than needed. In other words, pre-check-in vs. post-check-in doesn't really matter, as long as the review is done before the code is released. It will just be much more expensive to fix any review outcomes the later you find them.

    I would love to hear what you think!

    Read the article

  • Capturing network traffic in ruby - pcap related issues

    - by Acidburn2k
    What I need is to write a very simple application which would listen to network traffic, filter out some packets based on various layer 4/5 information and then dump that information into a database. I am quite confused about which pcap gem/plugin I should use. The basic pcap implementation seems to be a bit outdated (no changes since 2001) and doesn't work properly. I also tried pcaprub, but I am not quite sure how to get around with this library; it seems to capture raw packets without the ability to actually get any data out of the pcap dump. Do you have any advice on how I can accomplish this simple task? Thanks in advance. :-)

    Read the article

  • Oracle analytic functions for "the attribute from the row with the max date"

    - by tpdi
    I'm refactoring a colleague's code, and I have several cases where he's using a cursor to get "the latest row that matches some predicate": his technique is to write the join as a cursor, order it by the date field descending, open the cursor, get the first row, and close the cursor. This requires opening a cursor for each row of the result set that drives this, which is costly for many rows. I'd prefer to be able to join, but I want something cheaper than a correlated subquery:

        select a.id_shared_by_several_rows, a.foo
        from audit_trail a
        where a.entry_date = (select max(b.entry_date)
                              from audit_trail b
                              where b.id_shared_by_several_rows = a.id_shared_by_several_rows);

    I'm guessing that since this is a common need, there's an Oracle analytic function that does this?
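
    There is; a common pattern is ROW_NUMBER() over a partition ordered by the date descending. A sketch against the columns shown above:

        select id_shared_by_several_rows, foo
        from (select a.id_shared_by_several_rows, a.foo,
                     row_number() over (partition by a.id_shared_by_several_rows
                                        order by a.entry_date desc) rn
              from audit_trail a)
        where rn = 1;

    Oracle's MAX(...) KEEP (DENSE_RANK LAST ORDER BY entry_date) aggregate is an alternative when you are already grouping.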

    Read the article

  • Handling exception from unmanaged dll in C#

    - by StuffHappens
    Hello. I have the following function written in C#:

        public static string GetNominativeDeclension(string surnameNamePatronimic)
        {
            if (surnameNamePatronimic == null)
                throw new ArgumentNullException("surnameNamePatronimic");

            IntPtr[] ptrs = null;
            try
            {
                ptrs = StringsToIntPtrArray(surnameNamePatronimic);
                int resultLen = MaxResultBufSize;
                int err = decGetNominativePadeg(ptrs[0], ptrs[1], ref resultLen);
                ThrowException(err);
                return IntPtrToString(ptrs, resultLen);
            }
            catch
            {
                return surnameNamePatronimic;
            }
            finally
            {
                FreeIntPtr(ptrs);
            }
        }

    The function decGetNominativePadeg is in an unmanaged DLL:

        [DllImport("Padeg.dll", EntryPoint = "GetNominativePadeg")]
        private static extern Int32 decGetNominativePadeg(IntPtr surnameNamePatronimic, IntPtr result, ref Int32 resultLength);

    and it throws an exception: "Attempted to read or write protected memory. This is often an indication that other memory is corrupt." The catch in the C# code doesn't actually catch it. Why? How can I handle this exception? Thank you for your help!
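
    A likely explanation, assuming this runs on .NET 4 or later: AccessViolationException is treated as a corrupted state exception, which managed catch blocks no longer see by default. A sketch of opting back in for a single method:

        using System;
        using System.Runtime.ExceptionServices;

        [HandleProcessCorruptedStateExceptions]
        public static string GetNominativeDeclensionSafe(string surnameNamePatronimic)
        {
            try
            {
                return GetNominativeDeclension(surnameNamePatronimic);
            }
            catch (AccessViolationException)
            {
                // The native call wrote outside its buffer; fall back to the input.
                return surnameNamePatronimic;
            }
        }

    The process-wide alternative is the legacyCorruptedStateExceptionsPolicy setting in app.config. Either way, the root cause is usually a too-small marshaled result buffer, so checking the size of the IntPtr allocation is worthwhile.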

    Read the article

  • Using GIT Smart HTTP via IIS

    - by Andrew Matthews
    I recently read Scott Chacon's post "Smart HTTP Transport", and I was hoping that it might have become possible via IIS (windows 7) since that post was written. I haven't been able to find anything showing how it can be done, and Apache is not an option in my IIS 7 based environment. So, I'm at a loss (git daemon was foiled for me by a combination of AVG anti-virus and AD). I want to provide LDAP authenticated read/write access for selected users. So this question seems not to be relevant. Do you know of a way to provide access to GIT via IIS?

    Read the article

  • Generic method to create deep copy of all elements in a collection

    - by bwarner
    I have various ObservableCollections of different object types. I'd like to write a single method that will take a collection of any of these object types and return a new collection where each element is a deep copy of an element in the given collection. Here is an example for a specific class:

        private static ObservableCollection<PropertyValueRow> DeepCopy(ObservableCollection<PropertyValueRow> list)
        {
            ObservableCollection<PropertyValueRow> newList = new ObservableCollection<PropertyValueRow>();
            foreach (PropertyValueRow rec in list)
            {
                newList.Add((PropertyValueRow)rec.Clone());
            }
            return newList;
        }

    How can I make this method generic for any class which implements ICloneable?
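
    A minimal sketch: add a type parameter constrained to ICloneable, and the body stays the same.

        private static ObservableCollection<T> DeepCopy<T>(ObservableCollection<T> list)
            where T : ICloneable
        {
            var newList = new ObservableCollection<T>();
            foreach (T rec in list)
            {
                // Clone() returns object, so cast back to T
                newList.Add((T)rec.Clone());
            }
            return newList;
        }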

    Read the article

  • SQL Server recursive query error. The maximum recursion 100 has been exhausted before statement completion

    - by ienax_ridens
    I have a recursive query that returns an error when I run it; in other databases (with more data) I don't have the problem. In my case this query returns two columns (ID_PARENT and ID_CHILD), using recursion because my tree can have more than one level, but I want to have only the "direct" parent. NOTE: I tried to put OPTION (MAXRECURSION 0) at the end of the query, but with no luck. The following query is only a part of the entire query; when I tried to put OPTION only at the end of the "big query", I got a continuously running query, but no errors displayed. The error in SQL Server is: "The statement terminated. The maximum recursion 100 has been exhausted before statement completion." The query is the following:

        WITH q AS
          (SELECT ID_ITEM, ID_ITEM AS ID_ITEM_ANCESTOR
           FROM ITEMS_TABLE i
           JOIN ITEMS_TYPES_TABLE itt ON itt.ID_ITEM_TYPE = i.ID_ITEM_TYPE
           UNION ALL
           SELECT i.ID_ITEM, q.ID_ITEM_ANCESTOR
           FROM q
           JOIN ITEMS_TABLE i ON i.ID_ITEM_PADRE = q.ID_ITEM
           JOIN ITEMS_TYPES_TABLE itt ON itt.ID_ITEM_TYPE = i.ID_ITEM_TYPE)
        SELECT ID_ITEM AS ID_CHILD, ID_ITEM_ANCESTOR AS ID_PARENT
        FROM q

    I need a suggestion on how to re-write this query to avoid the recursion error and see the data, which are few.
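
    Two things usually help here. OPTION (MAXRECURSION n) only works on the outermost statement that consumes the CTE, and raising the limit won't terminate a cycle in the data; adding a depth counter bounds the recursion explicitly. A sketch against the tables above:

        WITH q AS
          (SELECT ID_ITEM, ID_ITEM AS ID_ITEM_ANCESTOR, 0 AS depth
           FROM ITEMS_TABLE i
           JOIN ITEMS_TYPES_TABLE itt ON itt.ID_ITEM_TYPE = i.ID_ITEM_TYPE
           UNION ALL
           SELECT i.ID_ITEM, q.ID_ITEM_ANCESTOR, q.depth + 1
           FROM q
           JOIN ITEMS_TABLE i ON i.ID_ITEM_PADRE = q.ID_ITEM
           JOIN ITEMS_TYPES_TABLE itt ON itt.ID_ITEM_TYPE = i.ID_ITEM_TYPE
           WHERE q.depth < 32)              -- guard against cycles in the data
        SELECT ID_ITEM AS ID_CHILD, ID_ITEM_ANCESTOR AS ID_PARENT
        FROM q
        OPTION (MAXRECURSION 0);            -- must be on the outermost statement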

    Read the article

  • C Programming - Convert an integer to binary

    - by leo
    Hi guys - I was hoping for some tips as opposed to solutions, as this is homework and I want to solve it myself. I am firstly very new to C; in fact I have never done any before, though I have previous Java experience from modules at university. I am trying to write a program that converts a single integer into binary. I am only allowed to use bitwise operations and no library functions. Can anyone possibly suggest some ideas about how I would go about doing this? Obviously I don't want code or anything, just some ideas as to what avenues to explore, as currently I am a little confused and have no plan of attack. Well, make that a lot confused :D Thanks very much
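
    Since the poster asked for hints, here is just the core building block rather than a full program: shifting and masking extract one bit at a time. A minimal C sketch (assumes a 32-bit non-negative int):

        #include <stdio.h>

        int main(void)
        {
            int n = 42;
            /* walk from the most significant bit down to bit 0;
               (n >> i) & 1 isolates bit i */
            for (int i = 31; i >= 0; i--)
                putchar(((n >> i) & 1) ? '1' : '0');
            putchar('\n');
            return 0;
        }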

    Read the article

  • help me with asp.net mvc 2 custom validation attribute

    - by Omu
    I'm trying to write a validation attribute that is going to check that at least one of the specified properties is true:

        [AttributeUsage(AttributeTargets.Class, AllowMultiple = false, Inherited = true)]
        public sealed class AtLeastOneTrueAttribute : ValidationAttribute
        {
            private const string DefaultErrorMessage = "select at least one";

            private readonly string[] props;

            public AtLeastOneTrueAttribute(params string[] props)
                : base(DefaultErrorMessage)
            {
                this.props = props;
            }

            public override string FormatErrorMessage(string name)
            {
                return DefaultErrorMessage;
            }

            public override bool IsValid(object value)
            {
                var properties = TypeDescriptor.GetProperties(value);
                return props.Any(p => (bool)properties.Find(p, true).GetValue(value));
            }
        }

    Now when I'm trying to use it, I can't specify the props after the first one: IntelliSense shows me that I'm entering the ErrorMessage, and only the first string goes into the params string[] props.
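
    For reference, this is how applying the attribute should look once the constructor's params array resolves correctly (the model and property names below are hypothetical):

        [AtLeastOneTrue("AcceptsEmail", "AcceptsSms", "AcceptsPhone")]
        public class NotificationPreferences
        {
            public bool AcceptsEmail { get; set; }
            public bool AcceptsSms { get; set; }
            public bool AcceptsPhone { get; set; }
        }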

    Read the article

  • How to use MFC with ATL

    - by nimo
    Hi, I'm trying to write a COM EXE using ATL. I also have an MFC application. Both of these applications would run on local machines, so I don't need to run the two processes (COM EXE and MFC) separately. Can I create a single application (process) by combining these two applications? Is there any possibility that I can embed my MFC code in the ATL code, or is there a way to initialize the COM EXE within my MFC code? Appreciate your help and concern. Thank you

    Read the article

  • Pre-built UI components for displaying SSRS Local Mode Parameters

    - by namenlos
    BACKGROUND

    - Am writing a WinForms app that uses the SSRS Report Viewer control to render reports in local mode; there is no SSRS server involved at all.
    - I can successfully create and execute reports in local mode.
    - What I want to do is add parameters to the report.

    THE PROBLEM

    When the Report Viewer control executes a report in local mode, it does not provide any UI that allows the user to enter values for parameters defined in the report. This is in contrast to when the Report Viewer control is running a server report, where parameters are shown.

    MY QUESTION

    Are there pre-built components or even sample code I can use that would provide a reasonable parameter user experience for this scenario (scenario = SSRS + Report Viewer control + local mode)? Yes, I could write this UI code myself; what I am looking for is existing code, to avoid having to implement this UI, because I'd rather spend my time making the content of the report work well.

    NOTES

    - Switching to another reporting engine besides SSRS is not an option.
    - Switching to server-side reports is not an option.
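
    I don't know of a drop-in parameter panel, but for context, a sketch of how values collected from your own prompt controls feed the local report (Microsoft.Reporting.WinForms; the parameter names and controls below are hypothetical):

        using Microsoft.Reporting.WinForms;

        // After collecting values from your own prompt controls:
        reportViewer1.LocalReport.SetParameters(new[]
        {
            new ReportParameter("StartDate", startDatePicker.Value.ToShortDateString()),
            new ReportParameter("Region", regionComboBox.Text)
        });
        reportViewer1.RefreshReport();

    LocalReport.GetParameters() returns the report's parameter definitions, which is enough to build a simple prompt panel dynamically.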

    Read the article

  • JPQL cross tab query

    - by Phil
    Hi, can anyone tell me if it's possible to write a cross-tab query in JPQL? (I'm using EclipseLink JPA 2.) An example of a cross-tab query in SQL can be found at http://onlamp.com/pub/a/onlamp/2003/12/04/crosstabs.html

        SELECT dept,
               COUNT(CASE WHEN gender = 'm' THEN id ELSE NULL END) AS m,
               COUNT(CASE WHEN gender = 'f' THEN id ELSE NULL END) AS f,
               COUNT(*) AS total
        FROM person
        GROUP BY dept

    How can I do the same thing as a single query in JPQL? Looking at the spec, it doesn't seem like CASE is valid inside COUNT. Is there any other way?
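
    One avenue worth trying (a sketch, not verified against EclipseLink): JPA 2.0 added CASE expressions, and SUM over a CASE that yields 1 or 0 sidesteps the COUNT restriction. This assumes a Person entity with dept and gender fields.

        SELECT p.dept,
               SUM(CASE WHEN p.gender = 'm' THEN 1 ELSE 0 END),
               SUM(CASE WHEN p.gender = 'f' THEN 1 ELSE 0 END),
               COUNT(p)
        FROM Person p
        GROUP BY p.dept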

    Read the article

  • IBM MQ corrupted messages

    - by Anand
    Hi, I posted the question below in the forum and now I am asking another question in the hope that I get some pointers to my answers: my previous post. OK, let's begin. The problem is like this:

    OS: Linux
    1. I post messages to the IBM MQ.
    2. Some random messages in the queue get randomly corrupted, as posted in the previous Stack Overflow question.

    OS: Windows
    1. I post messages to the IBM MQ.
    2. Some random messages in the queue get randomly corrupted, as posted in the previous Stack Overflow question.

    OS: Windows
    1. I post messages to the IBM MQ.
    2. Now I read the messages and write them to a file, just to observe them.
    3. I also allow the messages to pass through as-is after writing them to the file.
    In this case everything goes through fine.

    How can I resolve this problem?

    Read the article

  • Append an object to a list in R?

    - by Nick
    If I have some R list mylist, you can append an item obj to it like so:

        mylist[[length(mylist) + 1]] <- obj

    But surely there is some more compact way. When I was new to R, I tried writing append() like so:

        append <- function(lst, obj) {
            lst[[length(lst) + 1]] <- obj
            return(lst)
        }

    but of course that doesn't work, due to R's call-by-value semantics (lst is effectively copied upon call, so changes to lst are not visible outside the scope of append()). I know you can do environment hacking in an R function to reach outside the scope of your function and mutate the calling environment, but that seems like a large hammer for a simple append function. Can anyone suggest a more beautiful way of doing this? Bonus points if it works for both vectors and lists.
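
    For reference, a couple of compact idioms in base R (note that base R already ships an append() function, so defining your own shadows it):

        # c() concatenates; wrapping obj in list() keeps it as a single element
        mylist <- c(mylist, list(obj))

        # base::append() also works for both vectors and lists
        mylist <- append(mylist, list(obj))

        # for plain vectors, no wrapping is needed
        myvec <- c(myvec, obj)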

    Read the article
