Search Results

Search found 20155 results on 807 pages for 'things'.

Page 133/807

  • How can I unit test a PHP class method that executes a command-line program?

    - by acoulton
    For a PHP application I'm developing, I need to read the current git revision SHA, which of course I can get easily by using shell_exec or backticks to execute the git command-line client. I have obviously put this call into a method of its very own, so that I can easily isolate and mock it for the rest of my unit tests. So my class looks a bit like this:

        class Task_Bundle
        {
            public function execute()
            {
                // Do things
                $revision = $this->git_sha();
                // Do more things
            }

            protected function git_sha()
            {
                return `git rev-parse --short HEAD`;
            }
        }

    Of course, although I can test most of the class by mocking git_sha(), I'm struggling to see how to test the actual git_sha() method, because I don't see a way to create a known state for it. I don't think there's any real value in a unit test that also calls git rev-parse to compare the results. I was wondering about at least asserting that the command had been run, but I can't see any way to get a history of shell commands executed by PHP - even if I specify that PHP should use bash rather than sh, the history list comes up empty, presumably because the separate backtick executions are separate terminal sessions. I'd love to hear any suggestions for how I might test this - or is it OK to just leave that method untested and be careful with it when the app is maintained in future?
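
    One possible approach, sketched here on my own initiative (not from the original post; ShellRunner and the test names are invented, and PHPUnit is assumed): wrap the shell call in a tiny injectable collaborator, so the backtick line shrinks to a single trivial statement and the test can assert the exact command that would have been run.

        // Hypothetical sketch assuming PHPUnit; git_sha() is made public here for brevity.
        class ShellRunner
        {
            public function run($command)
            {
                return shell_exec($command);
            }
        }

        class Task_Bundle
        {
            private $shell;

            public function __construct(ShellRunner $shell = null)
            {
                $this->shell = $shell ?: new ShellRunner();
            }

            public function git_sha()
            {
                return trim($this->shell->run('git rev-parse --short HEAD'));
            }
        }

        class Task_BundleTest extends PHPUnit_Framework_TestCase
        {
            public function testGitShaRunsExpectedCommand()
            {
                $shell = $this->getMock('ShellRunner');
                $shell->expects($this->once())
                      ->method('run')
                      ->with('git rev-parse --short HEAD')
                      ->will($this->returnValue("abc1234\n"));

                $task = new Task_Bundle($shell);
                $this->assertSame('abc1234', $task->git_sha());
            }
        }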

    Read the article

  • rabbitmq-erlang-client, using rebar-friendly pkg, works on dev env, fails on rebar release

    - by lfurrea
    I am successfully using the rebar-friendly package of rabbitmq-erlang-client for a simple Hello World rebarized and OTP "compliant" app, and things work fine on the dev environment. I am able to fire up an erl console, do application:start(helloworld)., connect to the broker, open up a channel and communicate with queues. However, when I then proceed to do rebar generate it builds the release just fine, but when I try to start the app from the self-contained release package, things suddenly explode. I know rebar releases are known to be an obscure art, but I would like to know what my options are as far as deployment goes for an app using the rabbitmq-erlang-client. Below you will find the output of the console on the crash:

        =INFO REPORT==== 18-Dec-2012::16:41:35 ===
            application: session_record
            exited: {{{badmatch,
                       {error,
                        {'EXIT',
                         {undef,
                          [{amqp_connection_sup,start_link,
                            [{amqp_params_network,<<"guest">>,<<"guest">>,<<"/">>,
                              "127.0.0.1",5672,0,0,0,infinity,none,
                              [#Fun<amqp_auth_mechanisms.plain.3>,
                               #Fun<amqp_auth_mechanisms.amqplain.3>],
                              [],[]}],
                            []},
                           {supervisor2,do_start_child_i,3,
                            [{file,"src/supervisor2.erl"},{line,391}]},
                           {supervisor2,handle_call,3,
                            [{file,"src/supervisor2.erl"},{line,413}]},
                           {gen_server,handle_msg,5,
                            [{file,"gen_server.erl"},{line,588}]},
                           {proc_lib,init_p_do_apply,3,
                            [{file,"proc_lib.erl"},{line,227}]}]}}}},
                     [{amqp_connection,start,1,
                       [{file,"src/amqp_connection.erl"},{line,164}]},
                      {hello_qp,start_link,0,[{file,"src/hello_qp.erl"},{line,10}]},
                      {session_record_sup,init,1,
                       [{file,"src/session_record_sup.erl"},{line,55}]},
                      {supervisor_bridge,init,1,
                       [{file,"supervisor_bridge.erl"},{line,79}]},
                      {gen_server,init_it,6,[{file,"gen_server.erl"},{line,304}]},
                      {proc_lib,init_p_do_apply,3,
                       [{file,"proc_lib.erl"},{line,227}]}]},
                    {session_record_app,start,[normal,[]]}}
            type: permanent
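
    A hedged guess (mine, not from the post): an undef on amqp_connection_sup:start_link inside a generated release usually means the release simply doesn't bundle the client applications, even though they are on the code path in the dev shell. Something along these lines in reltool.config (release and app names assumed to match the crash report) would pull them in:

        {sys, [
            {rel, "session_record", "1",
             [kernel, stdlib, sasl,
              rabbit_common, amqp_client,   %% often the missing pieces
              session_record]},
            {app, rabbit_common, [{incl_cond, include}]},
            {app, amqp_client,   [{incl_cond, include}]}
        ]}.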

    Read the article

  • How to visually reject user input in a table?

    - by FX
    In the programming of a table-based application module (i.e. the user mostly enters tabular data in an already laid-out table), how would you reject user input for a given cell? The scenario is: the user edits the cell, enters something (text, picture, ...) and you want them to notice, when they finish editing (hitting enter, for example), that their entry is not valid for your given "format" (in the wider meaning: it could be that they entered a string instead of a number, that their entry is too long or too short, that they included a picture where one isn't acceptable, ...). I can see two different things happening:

    1. You can rather easily fit their entry into your format, and you do so, but you want them to notice it so they can change it if your guess is not good enough (example: they entered "15.47" in a field that needs to be an integer, so your program makes it "15").
    2. You cannot guess what to do with their entry, and want to inform them that it's not valid.

    My question specifically is: what visual display can you offer to inform the user that their input is invalid? Is it preferable to refuse to leave the editing mode, or not? The two things I can imagine are:

    - using colors (red background if invalid, yellow background for my case 1 above)
    - when you reject an input, doing something like Apple does for password entry on user accounts: you make the cell "shake" (i.e. oscillate left and right) for a second, and keep the focus/editing in there so they don't lose what they've typed.

    Let's hear your suggestions. PS: This question is, at least in my thought process, somehow a continuation and a specialization of my previous question on getting users to read error messages. PPS: Made this community wiki - was that the right thing to do on this kind of question or not?

    Read the article

  • PayPal sandbox anomalies

    - by Christian
    When testing some donations on my local machine, I set various key=value pairs to do various things (return to a specific thank-you page, get POST data from PayPal instead of GET data, and others). I also built my code around the response from the PayPal sandbox. BUT, when my code goes to the production server and we switch on live payments and test with real accounts and money, a few strange things happen:

    - We get a GET response from PayPal - the URL is filled with crap.
    - We get no transaction details. This is the biggie: no name, no txn_id, no dates, nothing. We get a handful of keys etc., so it's not totally empty and the payment has gone through, but nowhere near the verbosity of the sandbox.

    Curious about why this might be? It doesn't really make sense to have a sandbox (or dev environment) that is substantially different from the production environment. Or am I missing something?

    EDIT: Still no response to my question in the PayPal Developer Forums. I don't even get a donation amount back from PayPal. Is this a setting, maybe?

    EDIT #2: Two of you have suggested checking PDT and Auto-Return. The data analytics guy for the project suggested the same only 2 hrs ago. I have asked the client to confirm this. I can't see a setting for it in the Sandbox, so can I assume that it is enabled by default?

    Read the article

  • Best practices for withstanding launch day traffic burst

    - by Sam McAfee
    We are working on a website for a client that (for once) is expected to get a fair amount of traffic on day one. There are press releases, people are blogging about it, etc. I am a little concerned that we're going to fall flat on our face on day one. What are the main things you would look at to ensure (in advance, without real traffic data) that you can stay standing after a big launch? Details: This is a L/A/M/PHP stack, using an internally developed MVC framework. This is currently being launched on one server, with Apache and MySQL both on it, but we can break that up if need be. We are already installing memcached and doing as much PHP-level caching as we can think of. Some of the pages are rather query-intensive, and we are using Smarty as our template engine. Keep in mind there is no time to change any of these major aspects - this is just the setup. What sorts of things should we watch out for?

    Read the article

  • What's a good way to provide additional decoration/metadata for Python function parameters?

    - by Will Dean
    We're considering using Python (IronPython, but I don't think that's relevant) to provide a sort of 'macro' support for another application, which controls a piece of equipment. We'd like to write fairly simple functions in Python which take a few arguments - these would be things like times, temperatures and positions. Different functions would take different arguments, and the main application would contain a user interface (something like a property grid) which allows the users to provide values for the Python function arguments. So, for example, function1 might take a time and a temperature, and function2 might take a position and a couple of times. We'd like to be able to dynamically build the user interface from the Python code. Things which are easy to do are to find a list of functions in a module, and (using inspect.getargspec) to get a list of arguments to each function. However, just a list of argument names is not really enough - ideally we'd like to be able to include some more information about each argument - for instance, its 'type' (a high-level type - time, temperature, etc., not a language-level type), and perhaps a 'friendly name' or description. So, the question is: what are good 'pythonic' ways of adding this sort of information to a function? The two possibilities I have thought of are:

    - Use a strict naming convention for arguments, and then infer stuff about them from their names (fetched using getargspec).
    - Invent our own docstring meta-language (could be little more than CSV) and use the docstring for our metadata.

    Because Python seems pretty popular for building scripting into large apps, I imagine this is a solved problem with some common conventions, but I haven't been able to find them.
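
    One more possibility, sketched below on my own initiative (not from the original question; the decorator name and metadata keys are invented for illustration): a small decorator that attaches per-argument metadata to the function object, which the host application can read back alongside inspect.getargspec().

        import inspect

        def params(**meta):
            # attach a dict of per-argument metadata to the decorated function
            def wrap(fn):
                fn.param_meta = meta
                return fn          # the function itself is returned, so getargspec still works
            return wrap

        @params(temp={'type': 'temperature', 'label': 'Target temperature'},
                hold={'type': 'time', 'label': 'Hold time'})
        def function1(temp, hold):
            pass

        # the host app can now build its UI from names + metadata
        for name in inspect.getargspec(function1).args:   # inspect.signature() on newer Pythons
            print(name, function1.param_meta.get(name, {}))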

    Read the article

  • .next is undefined, jquery plugin problem

    - by ndelangen
    I'm trying to create my own plugin, but I'm having trouble getting things right. It appears that when I'm trying to traverse inside .each, things go wrong. I'm trying to go to the next item every 6 seconds by fading.

        jQuery(function($){
            $.fn.rotator = function(options){
                this.each(function() {
                    var container = $(this);
                    var images = container.children();

                    //Set the opacity of all images to 0
                    images.css({opacity: 0.0});

                    //Get the first image and display it (gets set to full opacity)
                    $('div:first',this).css({opacity: 1.0}).addClass('show');

                    //Call the rotator function to run the slideshow, 6000 = change to next image after 6 seconds
                    var obj = $(this);
                    setInterval(nextimage(obj),6000);
                });
            };

            // rotate function
            function nextimage(obj) {
                var container = $(obj);
                var images = container.children();

                //Get the current image
                var current = (images.hasClass('show')? images.hasClass('show') : images.first());

                //Get next image, when it reaches the end, rotate it back to the first image
                var next = ((current.next().length) ? ((current.next().hasClass('show')) ? images.first() :current.next()) : images.first());

                //Set the fade in effect for the next image, the show class has higher z-index
                next.css({opacity: 0.0})
                    .addClass('show')
                    .animate({opacity: 1.0}, 1000);

                //Hide the current image
                current.animate({opacity: 0.0}, 1000)
                    .removeClass('show');
            };
        });

        $(document).ready(function(){
            $("#bg").rotator({ })
        });

    The error I get is: current.next is not a function, on the line

        var next = ((current.next().length) ? ((current.next().hasClass('show')) ? images.first() :current.next()) : images.first());

    Can someone tell me what I'm doing wrong?
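
    A hedged reading of what is probably going wrong (my own sketch, not from the post): hasClass() returns a boolean, so when an element has the 'show' class, current becomes the value true rather than a jQuery object, and true.next() blows up - filter('.show') returns the element instead. Separately, setInterval(nextimage(obj), 6000) calls nextimage immediately and passes its return value to setInterval; it needs to be wrapped in a function.

        // inside nextimage(): select the element carrying the class, don't test for it
        var shown = images.filter('.show');
        var current = shown.length ? shown : images.first();

        // inside the each(): hand setInterval a function, not the result of a call
        setInterval(function () { nextimage(obj); }, 6000);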

    Read the article

  • Error handling in C++, constructors vs. regular methods

    - by Dennis Ritchie
    I have a cheesesales.txt CSV file with all of my recent cheese sales. I want to create a class CheeseSales that can do things like these:

        CheeseSales sales("cheesesales.txt"); //has no default constructor
        cout << sales.totalSales() << endl;
        sales.outputPieChart("piechart.pdf");

    The above code assumes that no failures will happen. In reality, failures will take place. In this case, two kinds of failures could occur:

    - Failure in the constructor: the file may not exist, may not have read permissions, may contain invalid/unparsable data, etc.
    - Failure in the regular method: the file may already exist, there may not be write access, there may be too little sales data available to create a pie chart, etc.

    My question is simply: how would you design this code to handle failures? One idea: return a bool from the regular method indicating failure. Not sure how to deal with the constructor. How would seasoned C++ coders do these kinds of things?
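
    Not from the original question, but a minimal sketch of the idiom most C++ programmers would reach for: a constructor has no return value, so the usual way to signal "this object could not be built" is to throw; regular methods can throw too, or return a status when failure is an expected, recoverable outcome.

        #include <fstream>
        #include <stdexcept>
        #include <string>

        class CheeseSales {
        public:
            explicit CheeseSales(const std::string& path) {
                std::ifstream in(path.c_str());
                if (!in)
                    throw std::runtime_error("cannot open " + path);
                // ... parse the CSV into members ...
            }

            // Either throw on failure as well, or return bool/an error code if
            // "could not write the chart" is a normal, recoverable condition.
            bool outputPieChart(const std::string& path) const;
        };

        int main() {
            try {
                CheeseSales sales("cheesesales.txt");
                // use sales...
            } catch (const std::exception&) {
                // report the error and bail out -- no half-constructed object exists
            }
        }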

    Read the article

  • Best practices for encrypting continuous/small UDP data

    - by temp
    Hello everyone, I have an application where I have to send several small pieces of data per second over the network using UDP. The application needs to send the data in real time (no waiting). I want to encrypt these data and ensure that what I am doing is as secure as possible. Since I am using UDP, there is no way to use SSL/TLS, so I have to encrypt each packet on its own, since the protocol is connectionless/unreliable/unregulated. Right now, I am using a 128-bit key derived from a passphrase from the user, and AES in CBC mode (PBE using AES-CBC). I decided to use a random salt with the passphrase to derive the 128-bit key (to prevent a dictionary attack on the passphrase), and of course use IVs (to prevent statistical analysis of packets). However, I am concerned about a few things: Each packet contains a small amount of data (like a couple of integer values per packet), which makes the encrypted packets vulnerable to known-plaintext attacks (which makes it easier to crack the key). Also, since the encryption key is derived from a passphrase, the key space is much smaller (I know the salt helps, but I have to send the salt over the network once and anyone can get it). Given these two things, anyone can sniff and store the sent data and try to crack the key. Although this might take some time, once the key is cracked all the stored data can be decrypted, which would be a real problem for my application. So my question is: what are the best practices for sending/encrypting continuous small data using a connectionless protocol (UDP)? Is my way the best way to do it? ...flawed? ...overkill? Please note that I am not asking for a 100% secure solution, as there is no such thing. Cheers
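
    For what it's worth, a sketch of a common modern answer (my own, assuming the third-party Python "cryptography" package; DTLS is the off-the-shelf alternative): use an AEAD mode such as AES-GCM so each datagram gets confidentiality and integrity, with a fresh random nonce sent in the clear alongside the ciphertext, and derive the key from the passphrase with a deliberately slow KDF (PBKDF2/scrypt) to blunt the passphrase weakness.

        import os
        from cryptography.hazmat.primitives.ciphers.aead import AESGCM

        key = AESGCM.generate_key(bit_length=128)   # in practice: derive from passphrase + salt via PBKDF2
        aesgcm = AESGCM(key)

        def encrypt_packet(payload):
            nonce = os.urandom(12)                  # unique per packet, transmitted in the clear
            return nonce + aesgcm.encrypt(nonce, payload, None)

        def decrypt_packet(packet):
            nonce, ciphertext = packet[:12], packet[12:]
            return aesgcm.decrypt(nonce, ciphertext, None)   # raises InvalidTag if tampered with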

    Read the article

  • Hover-to-click on jQuery UI datePicker 'next month' and 'prev month' not working

    - by user316727
    Hi there, I have a calendar which is meant to look much like the calendar in Outlook. There is a big field representing the hours in a day, and there is a date navigator. The navigator is the jQuery UI Datepicker. I want users to be able to navigate to a new day by clicking on a date in the datepicker, but also to be able to drag appointments over the datepicker and drop them on a specific date. I have that working now. I also want users, while they are dragging an appointment, to be able to move to the next or previous month simply by hovering over the datepicker. So I've added a mouseenter and a mouseleave event: one runs a setInterval function which sends a click every 1.5 seconds; the other cancels the interval function. This is where all sorts of things go wrong. As soon as one click has been triggered, the mouseleave function no longer works: in other words, the datepicker keeps flipping over to another month every 1.5 seconds. It seems that the datepicker interferes, or that the click event causes other things to go wrong. What can I do?
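
    A hedged guess at the cause, with a sketch of my own (the selectors are jQuery UI's standard datepicker classes, #datepickerContainer stands in for whatever static wrapper holds the widget, and .on() assumes jQuery 1.7+ - older versions would use .delegate()): the datepicker rebuilds its DOM every time the month flips, so handlers bound directly to its elements - including the mouseleave that is supposed to cancel the timer - vanish after the first flip. Binding the hover handlers to a static ancestor via delegation and keeping the interval id in an outer variable avoids that.

        var flipTimer = null;

        $('#datepickerContainer')
            .on('mouseenter', '.ui-datepicker-header', function () {
                if (!flipTimer) {
                    flipTimer = setInterval(function () {
                        $('.ui-datepicker-next').click();
                    }, 1500);
                }
            })
            .on('mouseleave', '.ui-datepicker-header', function () {
                clearInterval(flipTimer);
                flipTimer = null;
            });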

    Read the article

  • git commit best practices

    - by Ivan Z. Siu
    I am using git to manage a C++ project. When I am working on the project, I find it hard to organize the changes into commits when changing things that are related to many places. For example, I may change a class interface in a .h file, which will affect the corresponding .cpp file, and also other files using it. I am not sure whether it is reasonable to put all the stuff into one big commit. Intuitively, I think the commits should be modular, each of them corresponding to a functional update/change, so that collaborators could pick things accordingly. But it seems that sometimes it is inevitable to include lots of files and changes to make a functional change actually work. Searching did not yield me any good suggestions or tips. Hence I wonder if anyone could give me some best practices for making commits. Thanks! PS. I've been using git for a while and I know how to interactively add/rebase/split/amend/... What I am asking about is the PHILOSOPHY part.

    Read the article

  • IIS7 dynamic content compression and webservices

    - by vandalo
    I am moving an old asmx webservice to a new server with IIS7. This webservice basically sends a big dataset (10MB+) to a WinForms application. The old solution was implemented using a custom SOAP extension which compressed the content before sending the stream to the client. The client, of course, implemented the same custom SOAP extension to decompress the stream into a dataset. Everything has worked pretty well for years. My customer doesn't want to change the code by upgrading to WCF. They just want to put the old app on the new server and use the new dynamic content compression features. We're testing things on a test server (Win Server 2008) and it seems to be working pretty well, even if it seems slow: we can't see any difference in performance (speed) between the uncompressed and compressed streams. Here's the question. Where should I put the settings? Most people say I can't put it in my web.config; others say it can be put there. I am a bit confused. Are there any tricks or things I should know? What about mimeTypes? Should I set some parameters somewhere, considering my stream is XML (a dataset)? Thanks to everyone who would like to help. Alberto
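
    A hedged sketch of the relevant configuration (not from the thread; note that on a default IIS7 install the httpCompression section is locked at server level, so it may have to go in applicationHost.config rather than the site's web.config): dynamic compression only fires for content types listed under dynamicTypes, so the SOAP/XML response type has to be there.

        <system.webServer>
          <urlCompression doDynamicCompression="true" />
          <httpCompression>
            <dynamicTypes>
              <add mimeType="text/xml" enabled="true" />
              <add mimeType="text/xml; charset=utf-8" enabled="true" />
              <add mimeType="application/soap+xml" enabled="true" />
            </dynamicTypes>
          </httpCompression>
        </system.webServer>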

    Read the article

  • To OpenID or not to OpenID? Is it worth it?

    - by Eloff
    Does OpenID improve the user experience?

    Edit: Not to detract from the other comments, but I got one really good reply below that outlined 3 advantages of OpenID in a rational, bottom-line kind of way. I've also heard some whisperings in other comments that you can get access to some details on the user through OpenID (name? email? what?) and that using that might even simplify the registration process by not needing to gather as much information. Things that definitely need to be gathered in a checkout process:

    - Full name
    - Email (I'm pretty sure I'll have to ask for these myself)
    - Billing address
    - Shipping address
    - Credit card info

    There may be a few other things that are interesting from a marketing point of view, but I wouldn't ask the user to manually enter anything not absolutely required during the checkout process. So what's possible in this regard? /Edit

    (You may have noticed stackoverflow uses OpenID.) It seems to me it is easier and faster for the user to simply enter a username and password in a signup form they have to go through anyway. I mean, you don't avoid entering a username and password either with OpenID. But you avoid the confusion of choosing an OpenID provider, and the trip out to and back from an external site. With Microsoft making Live ID an OpenID provider (More Info), bringing several hundred million additional accounts on top of those provided by Google, Yahoo, and others, this question is more important than ever. I have to require new customers to sign up during the checkout process, and it is absolutely critical that the experience be as easy and smooth as possible; every little bit harder it becomes translates into lost sales. No geek factor outweighs cold hard cash at the end of the day :) OpenID seems like a nice idea, but the implementation is of questionable value. What are the advantages of OpenID, and is it really worth it in my scenario described above?

    Read the article

  • are C functions declared in <c____> headers guaranteed to be in the global namespace as well as std?

    - by Evan Teran
    So this is something that I've always wondered about but was never quite sure of, so it is strictly a matter of curiosity, not a real problem. As far as I understand it, when you do something like #include <cstdlib>, everything (except macros, of course) is declared in the std:: namespace. Every implementation that I've ever seen does this by doing something like the following:

        #include <stdlib.h>
        namespace std {
            using ::abort;
            // etc....
        }

    which of course has the effect of things being in both the global namespace and std. Is this behavior guaranteed? Or is it possible that an implementation could put these things in std but not in the global namespace? The only way I can think of to do that would be to have your libstdc++ implement every C function itself, placing them in std directly, instead of just including the existing libc headers (because there is no mechanism to remove something from a namespace) - which is of course a lot of effort with little to no benefit. The essence of my question is: is the following program strictly conforming and guaranteed to work?

        #include <cstdio>

        int main() {
            ::printf("hello world\n");
        }

    Read the article

  • How to re-prompt after a trap return in bash?

    - by verbose
    I have a script that is supposed to trap SIGTERM and SIGTSTP. This is what I have in the main block:

        trap 'killHandling' TERM

    And in the function:

        killHandling () {
            echo received kill signal, ignoring
            return
        }

    ... and similar for SIGINT. The problem is one of user interface. The script prompts the user for some input, and if the SIGTERM or SIGINT occurs when the script is waiting for input, it's confusing. Here is the output in that case:

        Enter something:                 # SIGTERM received
        received kill signal, ignoring
                                         # shell waits at blank line for user input, user gets confused
                                         # user hits "return", which then gets read as blank input from the user
                                         # bad things happen because of the blank input

    I have definitely seen scripts which handle this more elegantly, like so:

        Enter something:                 # SIGTERM received
        received kill signal, ignoring
        Enter something:                 # re-prompts user for user input, user is not confused

    What is the mechanism used to accomplish the latter? Unfortunately I can't simply change my trap code to do the re-prompt, as the script prompts the user for several things and what the prompt says is context-dependent. And there has to be a better way than writing context-dependent trap functions. I'd be very grateful for any pointers. Thanks!
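
    One possible mechanism, sketched here as a guess rather than taken from the thread (and note the exact behaviour of an interrupted read varies a little between bash versions): wrap the prompt-and-read in a small helper that loops until it actually gets input. The trap then only has to print its message; the loop re-issues whatever prompt was active, so nothing context-dependent lives in the trap itself.

        #!/usr/bin/env bash

        killHandling() {
            echo
            echo "received kill signal, ignoring"
        }
        trap killHandling TERM INT

        # prompt_user "Some prompt: " -- the answer ends up in $REPLY
        prompt_user() {
            local answer=""
            while [ -z "$answer" ]; do
                printf '%s' "$1"
                read -r answer || true   # a trapped signal interrupts read; the loop re-prompts
            done
            REPLY=$answer
        }

        prompt_user "Enter something: "
        echo "You entered: $REPLY"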

    Read the article

  • How would I instruct extconf.rb to use additional g++ optimization flags, and which are advisable?

    - by mohawkjohn
    I'm using Rice to write a C++ extension for a Ruby gem. The extension is in the form of a shared object (.so) file. This requires 'mkmf-rice' instead of 'mkmf', but the two (AFAIK) are pretty similar. By default, the compiler uses the flags -g -O2. Personally, I find this kind of silly, since it's hard to debug with any optimization enabled. I've resorted to editing the Makefile to take out the flags I don't like (e.g., removing -fPIC -shared when I need to debug using main() instead of Ruby's hooks). But I figure there's got to be a better way. I know I can just do $CPPFLAGS += " -DRICE" to add additional flags. But how do I remove things without editing the Makefile directly? A secondary question: what optimizations are safe for shared objects loaded by Ruby? Can I do things like -funroll-loops? What do you all recommend? It's a scientific computing project, so the faster the better. Memory is not much of an issue. Many thanks!
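
    For what it's worth, a hedged extconf.rb sketch (variable names follow mkmf conventions; mkmf-rice may differ, and for C++ sources the relevant string can be $CXXFLAGS on newer Rubies): mkmf exposes the flag strings as globals, so they can be edited before create_makefile writes the Makefile, rather than patching the Makefile afterwards.

        require 'mkmf-rice'

        # swap the default optimization level and add extra flags
        $CFLAGS = $CFLAGS.gsub('-O2', '-O3') + ' -funroll-loops'
        $CPPFLAGS += ' -DRICE'

        create_makefile('my_extension')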

    Read the article

  • Efficiency of manually written loops vs operator overloads (C++)

    - by Sagekilla
    Hi all, in the program I'm working on I have 3-element arrays, which I use as mathematical vectors for all intents and purposes. Through the course of writing my code, I was tempted to just roll my own Vector class with simple +, -, *, /, etc. overloads so I can simplify statements like:

        for (int i = 0; i < 3; i++)
            r[i] = r1[i] - r2[i];

        // becomes:
        r = r1 - r2;

    which should be more or less identical in generated code. But when it comes to more complicated things, could this really impact my performance heavily? One example that I have in my code is this. Manually written version:

        for (int j = 0; j < 3; j++) {
            p.vel[j] = p.oldVel[j] + (p.oldAcc[j] + p.acc[j]) * dt2 + (p.oldJerk[j] - p.jerk[j]) * dt12;
            p.pos[j] = p.oldPos[j] + (p.oldVel[j] + p.vel[j]) * dt2 + (p.oldAcc[j] - p.acc[j]) * dt12;
        }

    Using a Vector class with operator overloads:

        p.vel = p.oldVel + (p.oldAcc + p.acc) * dt2 + (p.oldJerk - p.jerk) * dt12;
        p.pos = p.oldPos + (p.oldVel + p.vel) * dt2 + (p.oldAcc - p.acc) * dt12;

    I am compiling my code for maximum possible speed, as it's extremely important that this code runs quickly and calculates accurately. So will relying on my Vectors for these sorts of things really affect me? For those curious, this is part of some numerical integration code which is not trivial to run in my program. Any insight would be appreciated, as would any idioms or tricks I'm unaware of.
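
    A sketch of the kind of class being discussed (mine, not the poster's): a small fixed-size vector with inline operators. With optimization enabled, compilers generally inline these and produce code close to the hand-written loops; the usual caveat is the temporaries created by chained expressions, which is what expression-template libraries (or hand-writing the few hot loops after profiling) exist to avoid.

        struct Vec3 {
            double v[3];

            double&       operator[](int i)       { return v[i]; }
            const double& operator[](int i) const { return v[i]; }
        };

        inline Vec3 operator+(const Vec3& a, const Vec3& b) {
            Vec3 r;
            for (int i = 0; i < 3; ++i) r.v[i] = a.v[i] + b.v[i];
            return r;
        }

        inline Vec3 operator-(const Vec3& a, const Vec3& b) {
            Vec3 r;
            for (int i = 0; i < 3; ++i) r.v[i] = a.v[i] - b.v[i];
            return r;
        }

        inline Vec3 operator*(const Vec3& a, double s) {
            Vec3 r;
            for (int i = 0; i < 3; ++i) r.v[i] = a.v[i] * s;
            return r;
        }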

    Read the article

  • Good ways to earn income as a self employed developer

    - by nullptr
    I was just wondering if people could share their experiences and ideas about generating/earning income from a software product or service they have personally developed. To me this seems like a good way to earn a living while doing what we love (programming) and working on projects and problems which interest us - i.e., NOT boring bank or marketing software etc. 9-5 all week... Some ideas I have are things like:

    - Web 2.0-style sites (Facebook, YouTube, Twitter, Digg) etc. These can be very, very profitable as we all know, but can take years to take off. Are there ways to survive until/if this does happen?
    - Mobile applications: iPhone, Google Android and the upcoming Nintendo DS app store. These have good potential to make it easy to find a market for your application and make selling it easy.
    - Shareware/PC software. A bit 80's and 90's, and you kind of need to be a salesman/marketer to sell it, but it's the only other thing I can think of.

    Also, I'm not talking about doing freelance work. I'm only interested in ideas you can come up with and develop yourself (not other people's ideas, or problems which you are paid to develop). Things that a sole developer, or at most 2 developers, could work on and that have good potential for high returns on investment (in terms of time) would be great. PS: I wish I'd thought of stackoverflow!

    Read the article

  • ASP.NET MVC Unit Testing Controllers - Repositories

    - by Brian McCord
    This is more of an opinion seeking question, so there may not be a "right" answer, but I would welcome arguments as to why your answer is the "right" one. Given an MVC application that is using Entity Framework for the persistence engine, a repository layer, a service layer that basically defers to the repository, and a delete method on a controller that looks like this: public ActionResult Delete(State model) { try { if( model == null ) { return View( model ); } _stateService.Delete( model ); return RedirectToAction("Index"); } catch { return View( model ); } } I am looking for the proper way to Unit Test this. Currently, I have a fake repository that gets used in the service, and my unit test looks like this: [TestMethod] public void Delete_Post_Passes_With_State_4() { //Arrange var stateService = GetService(); var stateController = new StateController( stateService ); ViewResult result = stateController.Delete( 4 ) as ViewResult; var model = (State)result.ViewData.Model; //Act RedirectToRouteResult redirectResult = stateController.Delete( model ) as RedirectToRouteResult; stateController = new StateController( stateService ); var newresult = stateController.Delete( 4 ) as ViewResult; var newmodel = (State)newresult.ViewData.Model; //Assert Assert.AreEqual( redirectResult.RouteValues["action"], "Index" ); Assert.IsNull( newmodel ); } Is this overkill? Do I need to check to see if the record actually got deleted (as I already have Service and Repository tests that verify this)? Should I even use a fake repository here or would it make more sense just to mock the whole thing? The examples I'm looking at used this model of doing things, and I just copied it, but I'm really open to doing things in a "best practices" way. Thanks.

    Read the article

  • Does the pointer to free() have to point to beginning of the memory block, or can it point to the interior?

    - by Lambert
    The question is in the title... I searched but couldn't find anything.

    Edit: I don't really see any need to explain this, but because people think that what I'm saying makes no sense (and that I'm asking the wrong questions), here's the problem. Since people seem to be very interested in the "root" cause of the problem rather than the actual question asked (since that apparently helps things get solved better - let's see if it does), here it is: I'm trying to make a D runtime library based on NTDLL.dll, so that I can use that library for subsystems other than the Win32 subsystem. So that forces me to only link with NTDLL.dll. Yes, I'm aware that the functions are "undocumented" and could change at any time (even though I'd bet a hundred dollars that wcstombs will still do the same exact thing 20 years from now, if it still exists). Yes, I know people (especially Microsoft) don't like developers linking to that library, and that I'll probably get criticized for it right here. And yes, those two points above mean that programs like chkdsk and defragmenters that run before the Win32 subsystem aren't even supposed to be created in the first place, because it's literally impossible to link with anything like kernel32.dll or msvcrt.dll and still have NT-native executables, so we developers should just pretend that those stages are meant to be forever out of our reach. But no, I doubt that anyone here would like me to paste a few thousand lines of code and help me look through them and try to figure out why memory allocations that aren't failing are being rejected by the source code I'm modifying. So that's why I asked about a different problem than the "root" cause, even though that's supposedly known to be the best practice by the community. If things still don't make sense, feel free to post comments below! :)
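
    To answer just the title question with a minimal illustration (mine, not the poster's): free() must be given exactly the pointer that malloc/calloc/realloc returned, or NULL; passing a pointer into the interior of the block is undefined behaviour. The usual pattern is to keep the original pointer and advance a working copy.

        #include <stdlib.h>

        int main(void) {
            char *block = malloc(100);
            if (!block) return 1;

            char *cursor = block + 10;   /* pointing into the interior is fine for reads/writes */
            (void)cursor;

            /* free(cursor);  <-- undefined behaviour: not the pointer malloc returned */
            free(block);                 /* correct: the original pointer */
            return 0;
        }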

    Read the article

  • using spring, hibernate and scala, is there a better way to load test data than dbunit?

    - by egervari
    Here are some things I really dislike about dbunit:

    1) You cannot specify the exact ordering of the inserts, because dbunit likes to group your inserts by table name and not by the order you define them in the XML file. This is a problem when you have records depending on other records in other tables, so you have to disable foreign key constraints during your tests... which actually sucks, because these foreign key constraints will get fired in production while your tests won't be aware of them!
    2) They seem hellbent on forcing you to use an XML namespace to define your XML... and I honestly can't be bothered to do this. I like the data.xml without any namespace. It works. But they are so hellbent on deprecating it.
    3) Creating different XML files on a per-test basis is hard, so it actually encourages creating data for your entire app. Unfortunately, this process gets a little bloated too once the data grows in size and things get entangled. There has got to be a better way to split up your test data into chunks without having to copy/paste a lot of the test data across all of your tests.
    4) Keeping track of id references in a big XML file is just impossible. If you have 130 domain classes, it just gets bewildering. This model simply does not scale.

    Is there something less bloated and better in the Spring/Hibernate space? dbunit has worn out its welcome and I'm really looking for something better.

    Read the article

  • Java: Tracking a user login session - Session EJBs vs HTTPSession

    - by bguiz
    If I want to keep track of conversational state with each client using my web application, which is the better alternative to use - a Session Bean or an HTTP Session?

    Using HTTP Session:

        //request is a variable of the class javax.servlet.http.HttpServletRequest
        //UserState is a POJO
        HttpSession session = request.getSession(true);
        UserState state = (UserState)(session.getAttribute("UserState"));
        if (state == null) {
            //create default value ..
        }
        String uid = state.getUID();
        //now do things with the user id

    Using Session EJB: in the implementation of ServletContextListener registered as a Web Application Listener in WEB-INF/web.xml:

        //UserState is NOT a POJO this time, it is
        //the interface of the UserStateBean Stateful Session EJB
        @EJB
        private UserState userStateBean;

        public void contextInitialized(ServletContextEvent sce) {
            ServletContext servletContext = sce.getServletContext();
            servletContext.setAttribute("UserState", userStateBean);
            ...

    In a JSP:

        public void jspInit() {
            UserState state = (UserState)(getServletContext().getAttribute("UserState"));
            ...
        }

    Elsewhere in the body of the same JSP:

        String uid = state.getUID();
        //now do things with the user id

    It seems to me that they are almost the same, with the main difference being that the UserState instance is transported in the HttpRequest.HttpSession in the former, and in a ServletContext in the case of the latter. Which of the two methods is more robust, and why?

    Read the article

  • Excel 2010 VBA code is stuck when UserForm is shown

    - by Denis
    I've created a UserForm as a progress indicator while a web query (using an InternetExplorer object) runs in the background. The code gets triggered as shown below. The progress indicator form is called 'Progress'.

        Private Sub Worksheet_Change(ByVal Target As Range)
            If Target.Row = Range("B2").Row And Target.Column = Range("B2").Column Then
                Progress.Show vbModeless
                Range("A4:A65535").ClearContents
                GetWebData (Range("B2").Value)
                Progress.Hide
            End If
        End Sub

    What I see with this code is that the progress indicator form pops up when cell B2 changes. I also see that the range of cells in column A gets cleared, which tells me that the vbModeless is doing what I want. But then, somewhere within the GetWebData() procedure, things get hung up. As soon as I manually destroy the progress indicator form, the GetWebData() routine finishes and I see the correct results. But if I leave the progress indicator visible, things just get stuck indefinitely. The code below shows what GetWebData() is doing.

        Private Sub GetWebData(ByVal Symbol As String)
            Dim IE As New InternetExplorer
            'IE.Visible = True
            IE.navigate MyURL
            Do
                DoEvents
            Loop Until IE.readyState = READYSTATE_COMPLETE
            Dim Doc As HTMLDocument
            Set Doc = IE.document
            Dim Rows As IHTMLElementCollection
            Set Rows = Doc.getElementsByClassName("financialTable").Item(0).all.tags("tr")
            Dim r As Long
            r = 0
            For Each Row In Rows
                Sheet1.Range("A4").Offset(r, 0).Value = Row.Children.Item(0).innerText
                r = r + 1
            Next
        End Sub

    Any thoughts?

    Read the article

  • The use of getters and setters for different programming languages [closed]

    - by leonhart88
    So I know there are a lot of questions on getters and setters in general, but I couldn't find something exactly like my question. I was wondering if people change their use of get/set depending on the language. I started learning with C++ and was taught to use getters and setters. This is what I understand: in C++ (and Java?), a variable can either be public or private, but we cannot have a mix. For example, I can't have a read-only variable that can still be changed inside the class. It's either all public (can read and change it), or all private (can't read it and can only change it inside the class). Because of this (and possibly other reasons), we use getters and setters. In MATLAB, I can control the "SetAccess" and "GetAccess" properties of variables, so that I can make things read-only (you can directly access the property, but can't overwrite it). In this case, I don't feel like I need a getter because I can just do class.property. Also, in Python it is considered "Pythonic" not to use getters/setters and to only turn things into properties if needed. I don't really understand why it's OK to have all public variables in Python, because that's the opposite of what I learned when I started with C++. I'm just curious what other people's thoughts are on this. Would you use getters and setters for all languages? Would you only use them for C++/Java and do direct access in MATLAB and Python (which is what I am currently doing)? Is the second option considered bad? For my purposes, I am only referring to simple getters and setters (just return/set the value and do not do anything else). Thanks!
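
    A small illustration of the Python side of the comparison (my own sketch): a read-only attribute doesn't need an explicit getter - expose it as a property with no setter, which is roughly what MATLAB's GetAccess/SetAccess split gives you declaratively.

        class Sample:
            def __init__(self, value):
                self._value = value

            @property
            def value(self):        # readable as sample.value, no get_value() needed
                return self._value  # no setter defined, so assignment is rejected

        s = Sample(42)
        print(s.value)      # 42
        # s.value = 7       # AttributeError: can't set attribute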

    Read the article
