Search Results

Search found 4882 results on 196 pages for 'odd behaviour'.

Page 161/196 | < Previous Page | 157 158 159 160 161 162 163 164 165 166 167 168  | Next Page >

  • PHP weirdness extending IMagick class

    - by Jamie Carl
    This is a really weird one. I have some code that is happily working on version 2.1.1RC1 of the php5-imagick module. It's basically just a class I wrote that extends the Imagick class and manages images stored in a database. Since upgrading to version 3.0.0RC1 (thankfully only on my dev box) things have gone to hell. It seems that object members are writeable but are NOT readable. Take the following sample code: class db_image extends IMagick { private $data; function __construct( $id = null ){ parent::__construct(); $this->data = 'some plain text'; echo $this->data; } This will output absolutely NOTHING. My debugger indicates that the contents of $this->data are the correct string value, but I am unable to read the value back out of the member variable. Seriously. WTF? Does anyone know what is causing this or has seen it before? I don't even know how to replicate this behaviour in my own classes.

    Read the article

  • Forcing Kernel::method_name to be called in Ruby

    - by Peter
    I want to add a foo method to Ruby's Kernel module, so I can write foo(obj) anywhere and have it do something to obj. Sometimes I want a class to override foo, so I do this: module Kernel private # important; this is what Ruby does for commands like 'puts', etc. def foo x if x.respond_to? :foo x.foo # use overwritten method. else # do something to x. end end end This is good, and works. But what if I want to use the default Kernel::foo in some other object that overwrites foo? Since I've got an instance method foo, I've lost the original binding to Kernel::foo. class Bar def foo # override behaviour of Kernel::foo for Bar objects. foo(3) # calls Bar::foo, not the desired call of Kernel::foo. Kernel::foo(3) # can't call Kernel::foo because it's private. # question: how do I call Kernel::foo on 3? end end Is there any clean way to get around this? I'd rather not have two different names, and I definitely don't want to make Kernel::foo public.

    Read the article

  • Semantically correct XHTML markup

    - by Dori
    Hello all. Just trying to get the hang of using semantically correct XHTML markup. Just writing the code for a small navigation item, where each button has effectively a title and a description. I thought a definition list would therefore be great, so I wrote the following <dl> <dt>Import images</dt> <dd>Read in new image names to database</dd> <dt>Exhibition Management</dt> <dd>Create / Delete an exhibition </dd> <dt>Image Management</dt> <dd>Edit name, medium and exhibition data </dd> </dl> But...I want the above to be 3 buttons, each button containing the dt and dd text. How can I do this with the correct code? Normally I would make each button a div and use that for the visual button behaviour (onHover and current page selection stuff). Any advice please? Thanks

    Read the article

  • Multiple dispatching issue

    - by user1440263
    I'll try to be brief: I'm dispatching an event from a MovieClip (a customized symbol in the library) this way: public function _onMouseDown(e:MouseEvent){ var obj = {targetClips:["tondo"],functionString:"testFF"}; dispatchEvent(new BridgeEvent(BridgeEvent.BRIDGE_DATA,obj)); } The BridgeEvent class is the following: package events { import flash.events.EventDispatcher; import flash.events.Event; public class BridgeEvent extends Event { public static const BRIDGE_DATA:String = "BridgeData"; public var data:*; public function BridgeEvent(type:String, data:*) { this.data = data; super(type, true); } } } The document class listens to the event this way: addEventListener(BridgeEvent.BRIDGE_DATA,eventSwitcher); In the eventSwitcher method I have a simple trace("received"). What happens: when I click the MovieClip the trace action gets duplicated and the output window writes many "received" (even though there is only one click). What happens? How do I prevent this behaviour? What is causing this? Any help is appreciated. [SOLVED] I'm sorry, you will not believe this. A colleague, as a joke, converted the MOUSE_DOWN handler to MOUSE_OVER.

    Read the article

  • Test MVC using moq

    - by Raminder
    I am new to Moq and I was trying to test a controller (MVC) behaviour that when the view raises a certain event, the controller calls a certain function on the model; here are the classes - public class Model { public void CalculateAverage() { ... } ... } public class View { public event EventHandler CalculateAverage; private void RaiseCalculateAverage() { if (CalculateAverage != null) { CalculateAverage(this, EventArgs.Empty); } } ... } public class Controller { private Model model; private View view; public Controller(Model model, View view) { this.model = model; this.view = view; view.CalculateAverage += view_CalculateAverage; } private void view_CalculateAverage(object sender, EventArgs args) { model.CalculateAverage(); } } and the test - [Test] public void ModelCalculateAverageCalled() { Mock<Model> modelMock = new Mock<Model>(); Mock<View> viewMock = new Mock<View>(); Controller controller = new Controller(modelMock.Object, viewMock.Object); viewMock.Raise(x => x.CalculateAverage += null, EventArgs.Empty); modelMock.Verify(x => x.CalculateAverage()); //never comes here, test fails in above line and exits Assert.True(true); } The issue is that the test is failing in the second last line with "Invocation was not performed on the mock: x => x.CalculateAverage()". Another thing I noticed is that the test terminates on this second last line and the last line is never executed. Am I doing everything correctly?

    Read the article

  • NSTableView with columns bound to different NSArrayControllers

    - by Vyacheslav Karpukhin
    I have an NSTableView with two columns in it: NSTableColumn *column = [[[NSTableColumn alloc] initWithIdentifier:@"custId"] autorelease]; [column bind:@"value" toObject:arrC2 withKeyPath:@"arrangedObjects.custId" options:nil]; [table addTableColumn:column]; column = [[[NSTableColumn alloc] initWithIdentifier:@"totalGrams"] autorelease]; [column bind:@"value" toObject:valuationArrC withKeyPath:@"arrangedObjects.totalGrams_double" options:nil]; [table addTableColumn:column]; As you can see, the columns are bound to different NSArrayControllers. The first column shows correct values, but the second just shows a "(" symbol. But if I swap the columns like this: NSTableColumn *column = [[[NSTableColumn alloc] initWithIdentifier:@"totalGrams"] autorelease]; [column bind:@"value" toObject:valuationArrC withKeyPath:@"arrangedObjects.totalGrams_double" options:nil]; [table addTableColumn:column]; column = [[[NSTableColumn alloc] initWithIdentifier:@"custId"] autorelease]; [column bind:@"value" toObject:arrC2 withKeyPath:@"arrangedObjects.custId" options:nil]; [table addTableColumn:column]; then I see the values of the first column (which was second in the first example) and again "(" in the second column. I don't understand this behaviour. How can I bind two array controllers to one table?

    Read the article

  • Why would certain browsers request all pages on my ASP.Net Web site twice?

    - by Deane
    Firefox is issuing duplicate requests to my ASP.Net web site. It will request a page, get the response, then immediately issue the same request again (well, almost the same -- see below). This happens on every page of this particular Web site (but not any others). IE does not do this, but Chrome also does this. I have confirmed that there is no Location header in the response, and no Javascript or meta tag in the page which would cause the page to be re-requested (if any of these were true, IE would be re-requesting pages as well). I have confirmed this behavior on multiple Firefox installs on multiple machines. Versions vary, but all are 3.x. The only difference between the two requests is the Accept header. For the first request, it looks like this: Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8 For the second request, it looks like this: Accept: */* The Content-Type response header in all cases is: Content-Type: text/html; charset=utf-8 Something else odd -- even though Firefox requests the page twice, it uses the first response and discards the second. I put a counter on a page that increments with every request. I can watch the responses come back (via the Charles proxy). Firefox will get a "1" the first time, and a "2" the second time. Yet it will display the "1," for some reason. Chrome exhibits this exact same behavior. I suspect it's a protocol-level issue, given the difference in the Accept header, but I've never seen this before.

    Read the article

  • Horizontally-centering an absolute position to match a relative position

    - by Chris Vandevelde
    I'm trying to make a div box, containing various elements, fixed at the top of the page once the page has been scrolled so that the box would normally be out of view, but scroll normally until that point (like the behaviour at http://perldoc.perl.org/perl.html). The conditionally-fixed part is pretty simple to implement (set the position to "fixed" once the user has scrolled past a certain point, and "static" once it's scrolled back up), but I'm having trouble with the positioning and dimensions; it screws up if I'm not specifying an absolute position (if I'm using % or "auto", rather than px, em, cm, etc.) or it, confusingly, left-aligns if the box is less than the width of the page. I can understand why, more or less, I'm just trying to fix it. My strategy right now is to have an invisible DIV hold the place of the box and use Prototype's clonePosition() function to hold its position, but it doesn't seem to work for some reason. Neither does copying the margin from one element to the other. Any ideas? Bonus points and eternal gratitude if you can come up with an idea that will adjust itself with the browser window (like auto margins) without setting an onresize event.

    Read the article

  • Permuting a binary tree without the use of lists

    - by Banang
    I need to find an algorithm for generating every possible permutation of a binary tree, and need to do so without using lists (this is because the tree itself carries semantics and constraints that cannot be translated into lists). I've found an algorithm that works for trees with a height of three or less, but whenever I get to greater heights, I lose one set of possible permutations per height added. Each node carries information about its original state, so that one node can determine if all possible permutations have been tried for that node. Also, the node carries information on whether or not it's been 'swapped', i.e. if it has seen all possible permutations of its subtree. The tree is left-centered, meaning that the right node should always (except in some cases that I don't need to cover for this algorithm) be a leaf node, while the left node is always either a leaf or a branch. The algorithm I'm using at the moment can be described sort of like this:

        if the left child node has been swapped
            swap my right node with the left child node's right node
            set the left child node as 'unswapped'
            if the current node is back to its original state
                swap my right node with the lowest left node's right node
                swap the lowest left node's two child nodes
                set my left node as 'unswapped'
                set my left child node to use this as its original state
                set this node as swapped
                return null
            return this
        else if the left child has not been swapped
            if the result of trying to permute the left child is null
                return the permutation of this node
            else
                return the permutation of the left child node
        if this node has a left node and a right node that are both leaves
            swap them
            set this node to be 'swapped'

    The desired behaviour of the algorithm would be something like this:

              branch
              /    |
          branch   3
          /    |
      branch   2
      /    |
     0      1

              branch
              /    |
          branch   3
          /    |
      branch   2
      /    |
     1      0      <-- first swap

              branch
              /    |
          branch   3
          /    |
      branch   1   <-- second swap
      /    |
     2      0

              branch
              /    |
          branch   3
          /    |
      branch   1
      /    |
     0      2      <-- third swap

              branch
              /    |
          branch   3
          /    |
      branch   0   <-- fourth swap
      /    |
     1      2

    and so on... Sorry for the ridiculously long and rambling explanation, I would really, really appreciate any sort of help you guys could offer me. Thanks a bunch!
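
    To make the swap operations in the pseudocode above a little more concrete, here is a small hypothetical Java sketch of the node structure being described (children, a 'swapped' flag, and a remembered original child order). The field and method names are assumptions, not the asker's code, and the full permutation logic is deliberately omitted.

        // Hypothetical node sketch for the structure described above; names are assumptions.
        public class Node {
            int value;            // leaf label (e.g. 0..3 in the diagrams above)
            Node left;            // branch or leaf
            Node right;           // normally a leaf in this left-centered tree
            boolean swapped;      // has this node seen all permutations of its subtree?
            Node originalLeft;    // remembered original state
            Node originalRight;

            Node(int value) {                 // leaf
                this.value = value;
            }

            Node(Node left, Node right) {     // branch
                this.left = left;
                this.right = right;
                rememberOriginalState();
            }

            void rememberOriginalState() {
                originalLeft = left;
                originalRight = right;
                swapped = false;
            }

            boolean inOriginalState() {
                return left == originalLeft && right == originalRight;
            }

            void swapChildren() {             // "swap them" / "swap the two child nodes"
                Node tmp = left;
                left = right;
                right = tmp;
            }

            boolean isLeaf() {
                return left == null && right == null;
            }
        }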

    Read the article

  • C# IE Proxy Detection not working from a Windows Service

    - by mytwocents
    This is very odd. I've written a windows form tool in C# to return the default IE proxy. It works (See code snippets below). I've created a service to return the proxy using the same code; it works when I step into the debugger, BUT it doesn't work when the service is running as binary code. This must be some sort of permissions or context thing that I cannot recreate. Further, if I call the tool from the windows service, it still doesn't return the proxy information.... this seems isolated to the service somehow. I don't see any permission errors in the event log; it's as if a service has a totally different context than a regular windows form exe. I've tried both: WebRequest.GetSystemWebProxy() and Uri requestUri = new Uri(urlOfDomain); HttpWebRequest request = (HttpWebRequest)WebRequest.Create(requestUri); IWebProxy proxy = request.Proxy; if (!proxy.IsBypassed(requestUri)){ Uri uri2 = proxy.GetProxy(request.RequestUri); } Thanks for any help!

    Read the article

  • MSSQL: Views that use SELECT * need to be recreated if the underlying table changes

    - by cbp
    Is there a way to make views that use SELECT * stay in sync with the underlying table? What I have discovered is that if changes are made to the underlying table, from which all columns are to be selected, the view needs to be 'recreated'. This can be achieved simply by running an ALTER VIEW statement. However this can lead to some pretty dangerous situations. If you forget to recreate the view, it will not be returning the correct data. In fact it can be returning seriously messed up data - with the names of the columns all wrong and out of order. Nothing will pick up that the view is wrong unless you happen to have it covered by a test, or a data integrity check fails. For example, Red Gate SQL Compare doesn't pick up the fact that the view needs to be recreated. To replicate the problem, try these statements: CREATE TABLE Foobar (Bar varchar(20)) CREATE VIEW v_Foobar AS SELECT * FROM Foobar INSERT INTO Foobar (Bar) VALUES ('Hi there') SELECT * FROM v_Foobar ALTER TABLE Foobar ADD Baz varchar(20) SELECT * FROM v_Foobar DROP VIEW v_Foobar DROP TABLE Foobar I am tempted to stop using SELECT * in views, which will be a PITA. Is there a setting somewhere perhaps that could fix this behaviour?

    Read the article

  • Silverlight, Flash, or JavaScript for web app that runs client-side, or just stick with C#?

    - by Sootah
    Silverlight, Flash, and JavaScript, oh my.. I have a couple of applications that I need to develop for one of my business partners that will be distributed to dozens of people. These applications will need to be able to query information from the internet (query via Google, grab feeds from our other sites, just general web access) and save files to their computer. The reason I want to host the application is so that it all can be centrally managed, and any updates would be instantly deployed to everyone that uses the service. There always seem to be headaches with developing a pure desktop app in a language like C# with regards to making sure people use the latest version, don't have some odd problem with the installer, etc. Since we don't want to tie up our server's CPU I want effectively all of the processing done client-side. Meaning that they would log into their account, access the app, and then all the work done within the app is all handled by their machine. Only specific data would be sent back to the server. So - which language is best for this? Microsoft's Silverlight, Adobe's Flash, or JavaScript? I've heard a lot of good things about Silverlight and have wanted to try it for some time. I've only done extremely limited JavaScript programming, and absolutely none with Flash. Or, with my main requirement being that the client does all of its own processing, should I just stick with C#? Also, is there any way to integrate a C# app into a webpage? I've never even considered it (or have any idea if it's even possible) until just now. Thanks in advance! -Sootah

    Read the article

  • C++ operator lookup rules / Koenig lookup

    - by John Bartholomew
    While writing a test suite, I needed to provide an implementation of operator<<(std::ostream&... for Boost unit test to use. This worked: namespace theseus { namespace core { std::ostream& operator<<(std::ostream& ss, const PixelRGB& p) { return (ss << "PixelRGB(" << (int)p.r << "," << (int)p.g << "," << (int)p.b << ")"); } }} This didn't: std::ostream& operator<<(std::ostream& ss, const theseus::core::PixelRGB& p) { return (ss << "PixelRGB(" << (int)p.r << "," << (int)p.g << "," << (int)p.b << ")"); } Apparently, the second wasn't included in the candidate matches when g++ tried to resolve the use of the operator. Why (what rule causes this)? The code calling operator<< is deep within the Boost unit test framework, but here's the test code: BOOST_AUTO_TEST_SUITE(core_image) BOOST_AUTO_TEST_CASE(test_output) { using namespace theseus::core; BOOST_TEST_MESSAGE(PixelRGB(5,5,5)); // only compiles with operator<< definition inside theseus::core std::cout << PixelRGB(5,5,5) << "\n"; // works with either definition BOOST_CHECK(true); // prevent no-assertion error } BOOST_AUTO_TEST_SUITE_END() For reference, I'm using g++ 4.4 (though for the moment I'm assuming this behaviour is standards-conformant).

    Read the article

  • 'C++ object destroyed' in QComboBox descendant editor in delegate

    - by Max
    Hi, all! I have a modified combobox to hold colors, using QtColorCombo (http://qt.nokia.com/products/appdev/add-on-products/catalog/4/Widgets/qtcolorcombobox) as a howto for the 'more...' button implementation details. It works fine in C++ and in PyQt on Linux, but I get 'underlying C++ object was destroyed' when I use this control in PyQt on Windows. It seems like the error happens when: ... # in constructor: self.activated.connect(self._emitActivatedColor) ... def _emitActivatedColor(self, index): if self._colorDialogEnabled and index == self.colorCount(): print '!!!!!!!!! QtGui.QColorDialog.getColor()' c = QtGui.QColorDialog.getColor() # <----- :( delegate fires 'closeEditor' print '!!!!!!!!! ' + c.name() if c.isValid(): self._numUserColors += 1 #at the next line currentColor() tries to access C++ layer and fails self.addColor(c, self.currentColor().name()) self.setCurrentIndex(index) ... Maybe console output will help. I've overridden event() in the editor and got: ... MouseButtonRelease FocusOut Leave Paint Enter Leave FocusIn !!!!!!!!! QtGui.QColorDialog.getColor() WindowBlocked Paint WindowDeactivate !!!!!!!!! 'CloseEditor' fires! Hide HideToParent FocusOut DeferredDelete !!!!!!!!! #6e6eff ... Can someone explain why there is such different behaviour in the different environments, and maybe give a workaround to fix this? Here is a minimal example: http://docs.google.com/Doc?docid=0Aa0otNVdbWrrZDdxYnF3NV80Y20yam1nZHM&hl=en

    Read the article

  • Is it safe to silently catch ClassCastException when searching for a specific value?

    - by finnw
    Suppose I am implementing a sorted collection (simple example - a Set based on a sorted array.) Consider this (incomplete) implementation: import java.util.*; public class SortedArraySet<E> extends AbstractSet<E> { @SuppressWarnings("unchecked") public SortedArraySet(Collection<E> source, Comparator<E> comparator) { this.comparator = (Comparator<Object>) comparator; this.array = source.toArray(); Collections.sort(Arrays.asList(array), this.comparator); } @Override public boolean contains(Object key) { return Collections.binarySearch(Arrays.asList(array), key, comparator) >= 0; } private final Object[] array; private final Comparator<Object> comparator; } Now let's create a set of integers Set<Integer> s = new SortedArraySet<Integer>(Arrays.asList(1, 2, 3), null); And test whether it contains some specific values: System.out.println(s.contains(2)); System.out.println(s.contains(42)); System.out.println(s.contains("42")); The third line above will throw a ClassCastException. Not what I want. I would prefer it to return false (as HashSet does.) I can get this behaviour by catching the exception and returning false: @Override public boolean contains(Object key) { try { return Collections.binarySearch(Arrays.asList(array), key, comparator) >= 0; } catch (ClassCastException e) { return false; } } Assuming the source collection is correctly typed, what could go wrong if I do this?
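
    For comparison, a minimal sketch (not from the original post) of one alternative to silently catching the exception: carry an explicit Class<E> token and reject keys of the wrong type up front. The class name, the extra constructor parameter, and the iterator()/size() bodies are assumptions added only to keep the sketch self-contained.

        import java.util.AbstractSet;
        import java.util.Arrays;
        import java.util.Collection;
        import java.util.Collections;
        import java.util.Comparator;
        import java.util.Iterator;

        // Sketch only: checks the key's type explicitly instead of catching ClassCastException.
        public class TypedSortedArraySet<E> extends AbstractSet<E> {
            private final Object[] array;
            private final Comparator<Object> comparator;
            private final Class<E> elementType;   // assumption: caller supplies the element type

            @SuppressWarnings("unchecked")
            public TypedSortedArraySet(Collection<E> source, Comparator<E> comparator, Class<E> elementType) {
                this.comparator = (Comparator<Object>) comparator;
                this.elementType = elementType;
                this.array = source.toArray();
                Collections.sort(Arrays.asList(array), this.comparator);
            }

            @Override
            public boolean contains(Object key) {
                if (!elementType.isInstance(key)) {
                    return false;                 // a key of the wrong type can never be in the set
                }
                return Collections.binarySearch(Arrays.asList(array), key, comparator) >= 0;
            }

            @Override
            @SuppressWarnings("unchecked")
            public Iterator<E> iterator() {
                return (Iterator<E>) Arrays.asList(array).iterator();
            }

            @Override
            public int size() {
                return array.length;
            }
        }

    With this variant, new TypedSortedArraySet<Integer>(Arrays.asList(1, 2, 3), null, Integer.class).contains("42") simply returns false instead of throwing.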

    Read the article

  • Python calling class methods with the wrong number of parameters

    - by Hussain
    I'm just beginning to learn python. I wrote an example script to test OOP in python, but something very odd has happened. When I call a class method, Python is calling the function with one more parameter than given. Here is the code:

        1.  class Bar:
        2.      num1,num2 = 0,0
        3.      def __init__(num1,num2):
        4.          num1,num2 = num1,num2
        5.      def foo():
        6.          if num1 > num2:
        7.              print num1,'is greater than ',num2,'!'
        8.          elif num1 is num2:
        9.              print num1,' is equal to ',num2,'!'
        10.         else:
        11.             print num1,' is less than ',num2,'!'
        12. a,b,t = 42,84,bar(a,b)
        13. t.foo
        14.
        15. t.num1 = t.num1^t.num2
        16. t.num2 = t.num2^t.num1
        17. t.num1 = t.num1^t.num2
        18.
        19. t.foo
        20.

    And the error message I get:

        python test.py
        Traceback (most recent call last):
          File "test.py", line 12, in <module>
            a,b,t = 42,84,bar(a,b)
        NameError: name 'bar' is not defined

    Can anyone help? Thanks in advance

    Read the article

  • Testing a method used from an abstract class

    - by Bas
    I have to Unit Test a method (runMethod()) that uses a method from an inherited abstract class to create a boolean. The method in the abstract class uses XmlDocuments and nodes to retrieve information. The code looks somewhat like this (and this is extremely simplified, but it states my problem): namespace AbstractTestExample { public abstract class AbstractExample { public string propertyValues; protected XmlNode propertyValuesXML; protected string getProperty(string propertyName) { XmlDocument doc = new XmlDocument(); doc.Load(new System.IO.StringReader(propertyValues)); propertyValuesXML= doc.FirstChild; XmlNode node = propertyValuesXML.SelectSingleNode(String.Format("property[name='{0}']/value", propertyName)); return node.InnerText; } } public class AbstractInheret : AbstractExample { public void runMethod() { bool addIfContains = (getProperty("AddIfContains") == null || getProperty("AddIfContains") == "True"); //Do something with boolean } } } So, the code wants to get a property from a created XmlDocument and uses it to form the result into a boolean. Now my question is, what is the best solution to make sure I have control over the boolean result's behaviour? I'm using Moq for possible mocking. I know this code example is probably a bit fuzzy, but it's the best I could show. Hope you guys can help.

    Read the article

  • Partial template specialization for more than one typename

    - by Matt Joiner
    In the following code, I want functions (Ops) that have a void return to instead be considered to return true. The type Retval and the return value of Op always match. I'm not able to discriminate using the type traits shown here, and attempts to create a partial template specialization based on Retval have failed due to the presence of the other template variables, Op and Args. How do I specialize only some variables in a template specialization without getting errors? Is there any other way to alter behaviour based on the return type of Op? template <typename Retval, typename Op, typename... Args> Retval single_op_wrapper( Retval const failval, char const *const opname, Op const op, Cpfs &cpfs, Args... args) { try { CallContext callctx(cpfs, opname); Retval retval; if (std::is_same<bool, Retval>::value) { (callctx.*op)(args...); retval = true; } else { retval = (callctx.*op)(args...); } assert(retval != failval); callctx.commit(cpfs); return retval; } catch (CpfsError const &exc) { cpfs_errno_set(exc.fserrno); LOGF(Info, "Failed with %s", cpfs_errno_str(exc.fserrno)); } return failval; }

    Read the article

  • Django IntegrityError: foreign key violation upon delete

    - by Lukasz Korzybski
    I have Order and Shipment models. Shipment has a foreign key to Order. class Order(...): ... class Shipment(): order = m.ForeignKey('Order') ... Now in one of my views I want to delete an order object along with all related objects. So I invoke order.delete(). I have Django 1.0.4, PostgreSQL 8.4 and I use transaction middleware, so the whole request is enclosed in a single transaction. The problem is that upon order.delete() I get: ... File "/usr/local/lib/python2.6/dist-packages/django/db/backends/__init__.py", line 28, in _commit return self.connection.commit() IntegrityError: update or delete on table "main_order" violates foreign key constraint "main_shipment_order_id_fkey" on table "main_shipment" DETAIL: Key (id)=(45) is still referenced from table "main_shipment". I checked in connection.queries that the proper queries are executed in the proper order. First the shipment is deleted, after that Django executes the delete on the order row: {'time': '0.000', 'sql': 'DELETE FROM "main_shipment" WHERE "id" IN (17)'}, {'time': '0.000', 'sql': 'DELETE FROM "main_order" WHERE "id" IN (45)'} The foreign key has ON DELETE NO ACTION (default) and is initially deferred. I don't know why I get a foreign key constraint violation. I also tried to register a pre_delete signal and manually delete shipment objects before delete on order is called, but it resulted in the same error. I can change the ON DELETE behaviour for this key in Postgres but it would be just a hack; I wonder if anyone has a better idea what's going on here. There is also a small detail: my Order model inherits from a Cart model, so it actually doesn't have an id field but cart_ptr_id, and after the DELETE on order is executed there is also a DELETE on cart, but it seems unrelated(?) to the shipment-order problem so I simplified it in the example.

    Read the article

  • Expand chaining hashtable. Errors on code.

    - by FILIaS
    While expanding a hashtable that resolves collisions with linked lists (chaining), I get some errors and warnings. I want to make sure that the following code (the expand method) is right, and find out what is causing these warnings/errors: typedef struct { int length; struct List *head; struct List *tail; } HashTable; //resolving collisions using linked lists - chaining typedef struct { char *number; char *name; int time; struct List *next; }List; //on the insert method i wanna check hashtable's size, //if it seems appropriate there is the following code: //Note: hashtable variable is: Hashtable * ...... hashtable = expand(hashtable,number,name,time); /**WARNING**:assignment makes pointer from integer without a cast*/ HashTable* expand( HashTable* h ,char number[10],char* name,int time) /**error**: conflicting types for ‘expand’ previous implicit declaration of ‘expand’ was here*/ { HashTable* new; int n; clientsList *node,*next; PrimesIndex++; int new_size= primes[PrimesIndex]; /* double the size,odd length */ if (!(new=malloc((sizeof( List*))*new_size))) return NULL; for(n=0; n< h->length; ++n) { for(node=h[n].head; node; node=next) { add (&new, node->number, node->name,node->time); next=node->next;//// free(node); } } free(h); return new; }

    Read the article

  • Any useful suggestions to figure out where memory is being free'd in a Win32 process?

    - by LeopardSkinPillBoxHat
    An application I am working with is exhibiting the following behaviour: During a particular high-memory operation, the memory usage of the process under Task Manager (Mem Usage stat) reaches a peak of approximately 2.5GB (Note: A registry key has been set to allow this, as usually there is a maximum of 2GB for a process under 32-bit Windows) After the operation is complete, the process size slowly starts decreasing at a rate of 1MB per second. I am trying to figure out the easiest way to quickly determine who is freeing this memory, and where it is being free'd. I am having trouble attaching a memory profiler to my code, and I don't particularly want to override the new/delete operators to track the allocations/deallocations (IOW, I want to do this without re-compiling my code). Can anyone offer any useful suggestions of how I could do this via the Visual Studio debugger? Update I should also mention that it's a multi-threaded application, so pausing the application and analysing the call stack through the debugger is not the most desirable option. I considered freezing different threads one at a time to see if the memory stops reducing, but I'm fairly certain this will cause the application to crash.

    Read the article

  • Are there any libraries for parsing "number expressions" like 1,2-9,33- in Java

    - by mihi
    Hi, I don't think it is hard, just tedious to write: Some small free (as in beer) library where I can put in a String like 1,2-9,33- and it can tell me whether a given number matches that expression. Just like most programs have in their print range dialogs. Special functions for matching odd or even numbers only, or matching every number that is 2 mod 5 (or something like that) would be nice, but not needed. The only operation I have to perform on this list is whether the range contains a given (nonnegative) integer value; more operations like max/min value (if they exist) or an iterator would be nice, of course. What would be needed is that it does not occupy lots of RAM if anyone enters 1-10000000 but the only number I will ever query is 12345 :-) (To implement it, I would parse a list into several (min/max/value/mod) pairs, like 1,10,0,1 for 1-10 or 11,33,1,2 for 1-33odd, or 12,62,2,10 for 12-62/10 (i. e. 12, 22, 32, ..., 62) and then check each number for all the intervals. Open intervals by using Integer.MaxValue etc. If there are no libs, any ideas to do it better/more efficiently?)
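
    The (min/max) interval idea sketched in the question is small enough to show in code. The class below is a hypothetical illustration, not an existing library: the name NumberExpression, the exact grammar it accepts (comma-separated numbers and ranges, open-ended ranges like "33-", no odd/even or mod keywords), and the use of Integer.MAX_VALUE for open intervals are all assumptions made for the sketch.

        import java.util.ArrayList;
        import java.util.List;

        // Hypothetical sketch of the (min/max) interval approach from the question.
        // An open-ended range like "33-" is stored with max = Integer.MAX_VALUE,
        // so "1-10000000" costs one interval rather than millions of entries.
        public class NumberExpression {
            private static final class Range {
                final int min, max;
                Range(int min, int max) { this.min = min; this.max = max; }
                boolean contains(int n) { return n >= min && n <= max; }
            }

            private final List<Range> ranges = new ArrayList<Range>();

            public NumberExpression(String expr) {
                for (String part : expr.split(",")) {
                    part = part.trim();
                    if (part.isEmpty()) continue;
                    int dash = part.indexOf('-');
                    if (dash < 0) {                              // single number, e.g. "1"
                        int n = Integer.parseInt(part);
                        ranges.add(new Range(n, n));
                    } else {
                        String lo = part.substring(0, dash).trim();
                        String hi = part.substring(dash + 1).trim();
                        int min = lo.isEmpty() ? 0 : Integer.parseInt(lo);
                        int max = hi.isEmpty() ? Integer.MAX_VALUE : Integer.parseInt(hi);
                        ranges.add(new Range(min, max));         // e.g. "2-9" or "33-"
                    }
                }
            }

            public boolean contains(int n) {
                for (Range r : ranges) {
                    if (r.contains(n)) return true;
                }
                return false;
            }

            public static void main(String[] args) {
                NumberExpression e = new NumberExpression("1,2-9,33-");
                System.out.println(e.contains(5));   // true
                System.out.println(e.contains(12));  // false
                System.out.println(e.contains(40));  // true
            }
        }

    A membership query costs one comparison per interval, independent of how wide the intervals are.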

    Read the article

  • Entity Framework and associations between string keys

    - by fredrik
    Hi, I am new to Entity Framework, and ORMs for that matter. In the project that I'm involved in we have a legacy database, with all its keys as strings, case-insensitive. We are converting to MSSQL and want to use EF as ORM, but have run into a problem. Here is an example that illustrates our problem: TableA has a primary string key, TableB has a reference to this primary key. In LINQ we write something like: var result = from t in context.TableB select t.TableA; foreach( var r in result ) Console.WriteLine( r.someFieldInTableA ); Say TableA contains a primary key that reads "A", and TableB contains two rows that reference TableA but with different cases in the referencing field, "a" and "A". In our project we want both of the rows to end up in the result, but only the one with the matching case will end up there. Using the SQL Profiler, I have noticed that both of the rows are selected. Is there a way to tell Entity Framework that the keys are case insensitive? Edit: We have now tested this with NHibernate and come to the conclusion that NHibernate works with case-insensitive keys. So NHibernate might be a better choice for us. I am however still interested in finding out if there is any way to change the behaviour of Entity Framework.

    Read the article

  • Excess errors on model from somewhere

    - by gmile
    I have a User model, and use acts_as_authentic (from authlogic) on it. My User model has 3 validations on username and looks like the following: class User < ActiveRecord::Base acts_as_authentic validates_presence_of :username validates_length_of :username, :within => 4..40 validates_uniqueness_of :username end I'm writing a test to see my validations in action. Somehow, I get 2 errors instead of one when validating the uniqueness of a name. To see the excess error, I run the following test: describe User do before(:each) do @user = Factory.build(:user) end it "should have a username longer than 3 symbols" do @user2 = Factory(:user) @user.username = @user2.username @user.save puts @user.errors.inspect end end I got 2 errors on username: @errors={"username"=>["has already been taken", "has already been taken"]}. Somehow the validation runs twice. I think authlogic causes that, but I don't have a clue on how to avoid it. Another problem case is when I set username to nil. Somehow I get four validation errors instead of three: @errors={"username"=>["is too short (minimum is 3 characters)", "should use only letters, numbers, spaces, and .-_@ please.", "can't be blank", "is too short (minimum is 4 characters)"]} I think authlogic is the one that causes this strange behaviour. But I can't even imagine how to solve it. Any ideas?

    Read the article

  • ExecutorService memory leak on exception

    - by TofuBeer
    I am having a hard time tracking this down since the profiler keeps crashing (hotspot error). Before I go too deep into figuring it out I'd like to know if I really have a problem or not :-) I have a few thread pools created via: Executors.newFixedThreadPool(10); The threads connect to different web sites and, on occasion, I get connection refused and wind up throwing an exception. When I later on call Future.get() to get the result it will then catch the ExecutionException that wraps the exception that was thrown when the connection could not be made. The program uses a fairly constant amount of memory up until the point in time that the exceptions get thrown (they tend to happen in batches when a particular site is overloaded). After that point the memory again remains constant but at a higher level. So my question is along the lines of is the memory behaviour (reported by "top" on Unix) expected because the exceptions just triggered something or do I probably have an actual leak that I'll need to track down? Additionally when Future.get() throws an exception is there anything else I need to do besides catch the exception (such as call Future.cancel() on it)?
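
    For reference, a minimal, self-contained sketch of the pattern being described (a fixed pool of 10 threads, tasks that occasionally fail, Future.get() surfacing the failure as an ExecutionException), with the completed Future references dropped afterwards so their results and stored exceptions can be collected. The task body and class name are illustrative assumptions, not the poster's code, and the sketch by itself doesn't settle whether the observed growth is a real leak.

        import java.net.ConnectException;
        import java.util.ArrayList;
        import java.util.List;
        import java.util.concurrent.Callable;
        import java.util.concurrent.ExecutionException;
        import java.util.concurrent.ExecutorService;
        import java.util.concurrent.Executors;
        import java.util.concurrent.Future;

        // Illustrative sketch only; task contents are made up for the example.
        public class FetchAll {
            public static void main(String[] args) throws InterruptedException {
                ExecutorService pool = Executors.newFixedThreadPool(10);
                List<Future<String>> futures = new ArrayList<Future<String>>();

                for (int i = 0; i < 100; i++) {
                    final int id = i;
                    futures.add(pool.submit(new Callable<String>() {
                        public String call() throws Exception {
                            if (id % 7 == 0) {
                                // simulate the occasional "connection refused"
                                throw new ConnectException("refused");
                            }
                            return "response " + id;
                        }
                    }));
                }

                for (Future<String> f : futures) {
                    try {
                        f.get();                 // failures surface here as ExecutionException
                    } catch (ExecutionException e) {
                        // The wrapped cause stays reachable for as long as the Future is;
                        // Future.cancel() is not needed for a task that has already completed.
                        System.err.println("task failed: " + e.getCause());
                    }
                }

                futures.clear();                 // drop references so completed results/exceptions can be GC'd
                pool.shutdown();
            }
        }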

    Read the article
