Search Results

Search found 369 results on 15 pages for 'assertion'.

Page 13/15

  • Is it possible to change f->SetSensor() 2-3 times?

    - by Asen
    Hello there! I use cocos2d-iphone-0.99.2 with Box2D integrated into it. I have two kinds of sprites, with tags 1 and 2, and I created bodies and shape definitions for them. What I'm trying to do is make the sprite1 kind act as solid or not solid when sprite2 collides with them. I tried this code:

        for (b2Body *b = _world->GetBodyList(); b; b = b->GetNext()) {
            if (b->GetUserData() != NULL) {
                CCSprite *sprite = (CCSprite *)b->GetUserData();
                if (sprite.tag == 1) {
                    b2Fixture *f = b->GetFixtureList();
                    f->SetSensor(solid);
                }
            }
        }

    where solid is a bool. The first time I change the fixture to a sensor everything is just fine, but when I try to revert it back to solid my app crashes with the following error:

        Assertion failed: (manifold->pointCount > 0), function b2ContactSolver,
        file /Documents/myapp/libs/Box2D/Dynamics/Contacts/b2ContactSolver.cpp, line 58.

    Is it possible to change a fixture with SetSensor() several times, and if so... how? Any help is highly appreciated.
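
    A common cause of this exact assertion is mutating fixtures while Box2D is mid-step, e.g. from inside a contact callback. A minimal sketch of the usual workaround (not from the original question): defer the change until after b2World::Step(), where the pendingSensorChange flag and solid member are assumed to be set by a contact listener:

        - (void)tick:(ccTime)dt {
            _world->Step(dt, 10, 10);
            // Apply fixture changes only after the step has finished;
            // calling SetSensor() during contact processing can trip the
            // b2ContactSolver assertion.
            if (pendingSensorChange) {
                for (b2Body *b = _world->GetBodyList(); b; b = b->GetNext()) {
                    CCSprite *sprite = (CCSprite *)b->GetUserData();
                    if (sprite != nil && sprite.tag == 1) {
                        for (b2Fixture *f = b->GetFixtureList(); f; f = f->GetNext()) {
                            f->SetSensor(!solid);
                        }
                    }
                }
                pendingSensorChange = NO;
            }
        }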

  • C++ operator lookup rules / Koenig lookup

    - by John Bartholomew
    While writing a test suite, I needed to provide an implementation of operator<<(std::ostream&, ...) for Boost unit test to use. This worked:

        namespace theseus { namespace core {
            std::ostream& operator<<(std::ostream& ss, const PixelRGB& p) {
                return (ss << "PixelRGB(" << (int)p.r << "," << (int)p.g << "," << (int)p.b << ")");
            }
        }}

    This didn't:

        std::ostream& operator<<(std::ostream& ss, const theseus::core::PixelRGB& p) {
            return (ss << "PixelRGB(" << (int)p.r << "," << (int)p.g << "," << (int)p.b << ")");
        }

    Apparently, the second wasn't included in the candidate matches when g++ tried to resolve the use of the operator. Why (what rule causes this)? The code calling operator<< is deep within the Boost unit test framework, but here's the test code:

        BOOST_AUTO_TEST_SUITE(core_image)

        BOOST_AUTO_TEST_CASE(test_output) {
            using namespace theseus::core;
            BOOST_TEST_MESSAGE(PixelRGB(5,5,5)); // only compiles with operator<< definition inside theseus::core
            std::cout << PixelRGB(5,5,5) << "\n"; // works with either definition
            BOOST_CHECK(true); // prevent no-assertion error
        }

        BOOST_AUTO_TEST_SUITE_END()

    For reference, I'm using g++ 4.4 (though for the moment I'm assuming this behaviour is standards-conformant).
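
    For illustration (not part of the original question): unqualified name lookup stops at the first enclosing scope that declares an operator<< of its own, hiding any global one, and argument-dependent lookup then only adds the namespaces associated with the argument types. A self-contained sketch of the effect:

        #include <iostream>

        namespace lib {
            struct Widget {};
            // Found via ADL: Widget's associated namespace is lib.
            std::ostream& operator<<(std::ostream& os, const Widget&) {
                return os << "Widget";
            }
        }

        namespace client {
            struct Other {};
            // This declaration hides any operator<< at global scope for
            // unqualified calls made inside namespace client...
            std::ostream& operator<<(std::ostream& os, const Other&) {
                return os << "Other";
            }

            void print() {
                // ...but lib::operator<< is still found through ADL, which is
                // why defining the operator in the argument's namespace works.
                std::cout << lib::Widget() << "\n";
            }
        }

        int main() { client::print(); }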

  • Best way to unit test Collection?

    - by limc
    I'm just wondering how folks unit test and assert that the "expected" collection is the same as (or similar to) the "actual" collection, where order is not important. To perform this assertion, I wrote my simple assert API:

        public void assertCollection(Collection<?> expectedCollection, Collection<?> actualCollection) {
            assertNotNull(expectedCollection);
            assertNotNull(actualCollection);
            assertEquals(expectedCollection.size(), actualCollection.size());
            assertTrue(expectedCollection.containsAll(actualCollection));
            assertTrue(actualCollection.containsAll(expectedCollection));
        }

    Well, it works. It's pretty simple if I'm asserting just a bunch of Integers or Strings. It can also be pretty painful if I'm trying to assert a collection of, say, Hibernate domains. The collection.containsAll(..) relies on equals(..) to perform the check, but I always override equals(..) in my Hibernate domains to check only the business keys (which is the best practice stated on the Hibernate website) and not all the fields of that domain. Sure, it makes sense to check just the business keys, but there are times I really want to make sure all the fields are correct, not just the business keys (for example, a new data entry record). So, in this case, I can't mess around with domain.equals(..), and it almost seems like I need to implement some comparators just for unit testing purposes instead of relying on collection.containsAll(..). Are there some testing libraries I could leverage here? How do you test your collections? Thanks.
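
    One library option (not mentioned in the original question) is a matcher framework such as Hamcrest, whose containsInAnyOrder matcher compares collections order-insensitively and reports mismatching elements; a minimal sketch, assuming Hamcrest is on the classpath:

        import static org.hamcrest.MatcherAssert.assertThat;
        import static org.hamcrest.Matchers.containsInAnyOrder;

        import java.util.Arrays;
        import java.util.List;

        public class CollectionAssertExample {
            public static void main(String[] args) {
                List<String> expected = Arrays.asList("a", "b", "c");
                List<String> actual = Arrays.asList("c", "a", "b");
                // Order is ignored, duplicates are counted, and a failure
                // message lists which elements were unexpected or missing.
                assertThat(actual, containsInAnyOrder(expected.toArray()));
            }
        }

    For field-by-field checks that bypass a business-key equals(), containsInAnyOrder also accepts one matcher per expected element instead of the raw objects.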

  • Error copying file from app bundle

    - by Michael Chen
    I used the Firefox add-on SQLite Manager and created a database, which saved to my desktop as "DB.sqlite". I copied the file into the supporting files for my project. But when I run the app, I immediately get the error:

        Assertion failure in -[AppDelegate copyDatabaseIfNeeded], /Users/Mac/Desktop/Note/Note/AppDelegate.m:32
        2014-08-19 23:38:02.830 Note[28309:60b] Terminating app due to uncaught exception 'NSInternalInconsistencyException',
        reason: 'Failed to create writable database file with message 'The operation couldn’t be completed. (Cocoa error 4.)'.'
        First throw call stack: ...

    Here is the app delegate code where the error takes place:

        - (void)copyDatabaseIfNeeded {
            NSFileManager *fileManager = [NSFileManager defaultManager];
            NSError *error;
            NSString *dbPath = [self getDBPath];
            BOOL success = [fileManager fileExistsAtPath:dbPath];
            if (!success) {
                NSString *defaultDBPath = [[[NSBundle mainBundle] resourcePath] stringByAppendingPathComponent:@"DB.sqlite"];
                success = [fileManager copyItemAtPath:defaultDBPath toPath:dbPath error:&error];
                if (!success)
                    NSAssert1(0, @"Failed to create writable database file with message '%@'.", [error localizedDescription]);
            }
        }

    I am very new to SQLite, so maybe I didn't create the database correctly in the Firefox SQLite Manager, or maybe I didn't "properly" copy the .sqlite file in? (I did check the target membership of the sqlite file and it correctly has my project selected. Also, the .sqlite file names all match up perfectly.)
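
    For context (not part of the original question): Cocoa error 4 is NSFileNoSuchFileError, so a copy that fails this way usually means the source file never made it into the built bundle. A minimal fail-fast check, as a sketch:

        // pathForResource:ofType: returns nil when "DB.sqlite" was not
        // actually copied into the bundle (e.g. it is missing from the
        // target's Copy Bundle Resources build phase).
        NSString *defaultDBPath = [[NSBundle mainBundle] pathForResource:@"DB"
                                                                  ofType:@"sqlite"];
        NSAssert(defaultDBPath != nil, @"DB.sqlite is missing from the app bundle");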

  • How to handle 'this' pointer in constructor?

    - by Kyle
    I have objects which create other child objects within their constructors, passing 'this' so the child can save a pointer back to its parent. I use boost::shared_ptr extensively in my programming as a safer alternative to std::auto_ptr or raw pointers. So the child would hold something like shared_ptr<Parent>, and boost provides the shared_from_this() method which the parent can give to the child. My problem is that shared_from_this() cannot be used in a constructor, which isn't really a crime because 'this' should not be used in a constructor anyway unless you know what you're doing and don't mind the limitations. Google's C++ Style Guide states that constructors should merely set member variables to their initial values, and any complex initialization should go in an explicit Init() method. This solves the 'this'-in-constructor problem as well as a few others. What bothers me is that people using your code must now remember to call Init() every time they construct one of your objects. The only way I can think of to enforce this is by having an assertion that Init() has already been called at the top of every member function, but this is tedious to write and cumbersome to execute. Are there any idioms out there that solve this problem at any step along the way?
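
    One common idiom (not from the original question) is a named constructor that is the only way to create the object, so Init() can never be forgotten, and shared_from_this() is already legal inside it. A minimal sketch using the same boost types:

        #include <boost/shared_ptr.hpp>
        #include <boost/enable_shared_from_this.hpp>

        class Parent : public boost::enable_shared_from_this<Parent> {
        public:
            // The factory is the only way to make a Parent, so Init() always
            // runs, and it runs after a shared_ptr already owns the object.
            static boost::shared_ptr<Parent> Create() {
                boost::shared_ptr<Parent> p(new Parent());
                p->Init();   // shared_from_this() is safe from here on
                return p;
            }
        private:
            Parent() {}      // not directly constructible by clients
            void Init() { /* create children, handing them shared_from_this() */ }
        };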

  • Why is Grails Searchable Plugin causing errors on Hibernate AutoFlush?

    - by Mark Rogers
    In the Grails 1.2.5 project that I am trying to troubleshoot, we use the Grails Searchable plugin 0.5.5.1. The problem is that whenever we attempt to index large sets of domain classes, Grails keeps throwing:

        ERROR hibernate.AssertionFailure - an assertion failure occured (this may indicate a bug in Hibernate, but is more likely due to unsafe use of the session)
        org.hibernate.AssertionFailure: collection [domain-class] was not processed by flush()

    But the domain classes involved have been mapped and used by Hibernate without issues outside of the calls to the searchable plugin. The use of the searchable plugin goes as follows:

    1. Create a compass session with compass.openSession()
    2. Begin a compass transaction: compassSession.beginTransaction()
    3. Then compassSession.create(result.get(0)) is called on an important unindexed domain class
    4. Finally compassTransaction.commit() is called to commit the transaction
    5. Go to 2 and process the next domain class

    Between the 3rd and 4th domain class, an autoflush is triggered that throws the error. Can anyone give me any hints about how to solve this problem? Has anyone encountered this problem before? I know that they had a systemic issue with this back in pre-0.5 versions of the searchable plugin. Is it possible those issues weren't totally fixed?

  • Report Viewer Configuration error - In View Source of webpage

    - by sarada
    I found the following error message when I checked the View Source of the web page, although the web page itself works just fine. Our test lead found the error while performing assertion tests.

        Report Viewer Configuration Error
        The Report Viewer Web Control HTTP Handler has not been registered in the application's web.config file.
        Add <add verb="*" path="Reserved.ReportViewerWebControl.axd" type="Microsoft.Reporting.WebForms.HttpHandler, Microsoft.ReportViewer.WebForms, Version=10.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a" />
        to the system.web/httpHandlers section of the web.config file, or add
        <add name="ReportViewerWebControlHandler" preCondition="integratedMode" verb="*" path="Reserved.ReportViewerWebControl.axd" type="Microsoft.Reporting.WebForms.HttpHandler, Microsoft.ReportViewer.WebForms, Version=10.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a" />
        to the system.webServer/handlers section for Internet Information Services 7 or later.

    Why is this error message coming up in the view source? Note: there is a div tag around this error message which has style="display:none". I am trying to find out why, but everybody has only discussed this error message as one which is thrown on the web page itself. The changes suggested for web.config are already present in our config file.

  • App crashes frequently at time when UIActionSheet would be presented

    - by Jim Hankins
    I am getting the following error intermittently when a call is made to display an action sheet:

        Assertion failure in -[UIActionSheet showInView:]
        Terminating app due to uncaught exception 'NSInternalInconsistencyException',
        reason: 'Invalid parameter not satisfying: view != nil'

    Now in this case I've not changed screens. The UIActionSheet is presented when a local notification is fired and I have an observer call a local method on this view, as shown below. I have the property marked as strong, and when the action sheet is dismissed I also set it to nil. I am using a storyboard for the UI. It's fairly repeatable to crash it, perhaps in fewer than 5 tries. (Thankfully I have that going for me.) Any suggestions what to try next? I'm really pulling my hair out on this one. Most of the issues I've seen on this topic point to the crash occurring once the selection is made; in my case it's at presentation, and intermittently. Also, for what it's worth, this particular view is several stacks deep in an embedded navigation controller: Home > tableview > detail select > viewController in question. This same issue occurs so far in testing on iOS 5.1 and iOS 6. I'm presuming it's something to do with how the showInView: is being targeted.

        self.actionSheet = [[UIActionSheet alloc] initWithTitle:@"Select Choice"
                                                       delegate:self
                                              cancelButtonTitle:@"Not Yet"
                                         destructiveButtonTitle:@"Do this Now"
                                              otherButtonTitles:nil];
        [self.actionSheet showInView:self.parentViewController.tabBarController.view];
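
    Since the assertion is literally 'view != nil', one defensive sketch (not from the original question, and the fallback chain is illustrative) is to resolve the target view before presenting:

        UIView *target = self.parentViewController.tabBarController.view;
        if (target == nil) {
            // Messaging nil returns nil, so the whole chain silently collapses
            // whenever the controller is not embedded where we expect.
            target = self.view;
        }
        [self.actionSheet showInView:target];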

  • Does it exist: a smart pointer, owned by one object, allowing access to others?

    - by Noah Roberts
    I'm wondering if anyone's run across anything that exists which would fill this need. Object A contains an object B. It wants to provide access to that B to clients through a pointer (maybe there's the option it could be 0, or maybe the clients need to be copyable and yet hold references... whatever). Clients, let's call them object C, would normally, if we were perfect developers, be written carefully so as to not violate the lifetime semantics of any pointer to B they might have... but we're not perfect; in fact we're pretty dumb half the time. So what we want is for object C to have a pointer to object B that is not "shared" ownership, but that is smart enough to recognize a situation in which the pointer is no longer valid, such as when object A is destroyed or it destroys object B. Accessing this pointer when it's no longer valid would cause an assertion/exception/whatever. In other words, I wish to share access to data in a safe, clear way but retain the original ownership semantics. Currently, because I've not been able to find any shared pointer in which one of the objects owns it, I've been using shared_ptr in place of having such a thing. But I want clear ownership, and a shared/weak pointer doesn't really provide that. It would be nice, further, if this smart pointer could be attached to member variables and not just hold pointers to dynamically allocated memory regions. If it doesn't exist I'm going to make it, so I first want to know if someone's already released something out there that does it. And, BTW, I do realize that things like references and pointers do provide this sort of thing... I'm looking for something smarter.
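
    For illustration (not from the original question), the desired semantics can be approximated with a shared_ptr held only by the owner and weak_ptrs handed to clients, asserting loudly on expired access:

        #include <boost/shared_ptr.hpp>
        #include <boost/weak_ptr.hpp>
        #include <cassert>

        struct B { int value; };

        class A {
            boost::shared_ptr<B> b_;   // sole owner: never handed out as shared_ptr
        public:
            A() : b_(new B()) {}
            boost::weak_ptr<B> GetB() { return b_; }  // clients get weak access only
        };

        class C {
            boost::weak_ptr<B> b_;
        public:
            explicit C(boost::weak_ptr<B> b) : b_(b) {}
            int Read() {
                boost::shared_ptr<B> locked = b_.lock();
                assert(locked && "B was destroyed before this access");
                return locked->value;
            }
        };

    The caveat is that lock() briefly shares ownership for the duration of Read(), so this is an approximation of the asked-for semantics rather than the exact thing.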

  • Why does this ActionFilterAttribute not import data to the ViewModel?

    - by Tomas Lycken
    I have the following attribute:

        public class ImportStatusAttribute : ActionFilterAttribute
        {
            public override void OnActionExecuted(ActionExecutedContext filterContext)
            {
                var model = (IHasStatus)filterContext.Controller.ViewData.Model;
                model.Status = (StatusMessageViewModel)filterContext.Controller.TempData["status"];
                filterContext.Controller.ViewData.Model = model;
            }
        }

    which I test with the following test method (the first of several I'll write when this one passes...):

        [TestMethod]
        public void OnActionExecuted_ImportsStatusFromTempDataToModel()
        {
            // Arrange
            Expect(new
            {
                Status = new StatusMessageViewModel() { Subject = "The test", Predicate = "has been tested" },
                Key = "status"
            });
            var filterContext = new Mock<ActionExecutedContext>();
            var model = new Mock<IHasStatus>();
            var tempData = new TempDataDictionary();
            var viewData = new ViewDataDictionary(model.Object);
            var controller = new FakeController() { ViewData = viewData, TempData = tempData };
            tempData.Add(expected.Key, expected.Status);
            filterContext.Setup(c => c.Controller).Returns(controller);
            var attribute = new ImportStatusAttribute();

            // Act
            attribute.OnActionExecuted(filterContext.Object);

            // Assert
            Assert.IsNotNull(model.Object.Status, "The status was not exported");
            Assert.AreEqual(model.Object.Status.ToString(), ((StatusMessageViewModel)expected.Status).ToString(), "The status was not the expected");
        }

    (Expect() is a method that saves some expectations in the expected object...) When I run the test, it fails on the first assertion, and I can't get my head around why. Debugging, I can see that model is populated correctly, and that (StatusMessageViewModel)filterContext.Controller.TempData["status"] has the correct data. But after the line model.Status = (StatusMessageViewModel)filterContext.Controller.TempData["status"]; executes, model.Status is still null in my watch window. Why can't I do this?
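
    One likely suspect (a guess, not an answer from the original thread): a Moq mock's properties have no backing storage by default, so the assignment to Status inside the filter is silently discarded. A sketch of the usual arrangement:

        var model = new Mock<IHasStatus>();
        // SetupProperty gives Status real get/set backing, so the value
        // assigned inside OnActionExecuted can be read back in the assert.
        model.SetupProperty(m => m.Status);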

  • Need Help with .NET OpenId HttpHandler

    - by Mark E
    I'm trying to use a single HttpHandler to authenticate a user's OpenID and receive a claims response. The initial authentication works, but the claims response does not. The error I receive is "This webpage has a redirect loop." What am I doing wrong?

        public class OpenIdLogin : IHttpHandler
        {
            private HttpContext _context = null;

            public void ProcessRequest(HttpContext context)
            {
                _context = context;
                var openid = new OpenIdRelyingParty();
                var response = openid.GetResponse();
                if (response == null)
                {
                    // Stage 2: user submitting Identifier
                    openid.CreateRequest(context.Request.Form["openid_identifier"]).RedirectToProvider();
                }
                else
                {
                    // Stage 3: OpenID Provider sending assertion response
                    switch (response.Status)
                    {
                        case AuthenticationStatus.Authenticated:
                            //FormsAuthentication.RedirectFromLoginPage(response.ClaimedIdentifier, false);
                            string identifier = response.ClaimedIdentifier;
                            //****** TODO only proceed if we don't have the user's extended info in the database **************
                            ClaimsResponse claim = response.GetExtension<ClaimsResponse>();
                            if (claim == null)
                            {
                                //IAuthenticationRequest req = openid.CreateRequest(identifier);
                                IAuthenticationRequest req = openid.CreateRequest(Identifier.Parse(identifier));
                                var fields = new ClaimsRequest();
                                fields.Email = DemandLevel.Request;
                                req.AddExtension(fields);
                                req.RedirectingResponse.Send(); // Is this correct?
                            }
                            else
                            {
                                context.Response.ContentType = "text/plain";
                                context.Response.Write(claim.Email); // claim.FullName;
                            }
                            break;
                        case AuthenticationStatus.Canceled:
                            // TODO
                            break;
                        case AuthenticationStatus.Failed:
                            // TODO
                            break;
                    }
                }
            }
        }

  • Adding custom UITableViewCell crashes the simulator.

    - by nevva
    I'm trying to build my application using a custom UITableViewCell. This is the code in my UIViewController that adds the view cell to the table:

        - (UITableViewCell *)tableView:(UITableView *)tableView cellForRowAtIndexPath:(NSIndexPath *)indexPath {
            NSLog(@"------- Tableview --------");
            static NSString *MyIdentifier = @"MyIdentifier";
            MyIdentifier = @"aCellIdentifier";

            MyTableCell *cell = (MyTableCell *)[tableView dequeueReusableCellWithIdentifier:MyIdentifier];
            if (cell == nil) {
                [[NSBundle mainBundle] loadNibNamed:@"tblCellView" owner:self options:nil];
                cell = tblCell;
            }
            [cell setLabelText:[NSString stringWithFormat:@"indexpath.row: %d", indexPath.row]];
            //cell = [[[UITableViewCell alloc] initWithStyle:UITableViewCellStyleSubtitle reuseIdentifier:MyIdentifier] autorelease];
            return cell;
        }

    If I uncomment the line above "return cell" it returns a regular UITableViewCell without any errors, but as soon as I try to implement my custom cell it crashes with this error:

        ------- Tableview --------
        2010-04-23 11:17:33.163 SogetGolf[26935:40b] *** Assertion failure in -[UITableView _createPreparedCellForGlobalRow:withIndexPath:], /SourceCache/UIKit_Sim/UIKit-984.38/UITableView.m:4709
        2010-04-23 11:17:33.164 SogetGolf[26935:40b] *** Terminating app due to uncaught exception 'NSInternalInconsistencyException', reason: 'UITableView dataSource must return a cell from tableView:cellForRowAtIndexPath:'
        2010-04-23 11:17:33.165 SogetGolf[26935:40b] Stack: (
        ...

    I have configured the .xib file as one should, with the proper outlets, and the identifier of the UITableViewCell corresponds with the name I'm trying to load from the NSBundle.
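
    A quick way to tell a bad nib name from an unconnected outlet (a debugging sketch, not part of the original question) is to fail fast right where the cell is loaded:

        if (cell == nil) {
            [[NSBundle mainBundle] loadNibNamed:@"tblCellView" owner:self options:nil];
            // If this fires, the nib loaded but its cell is not wired to the
            // tblCell outlet (or the nib name itself is wrong), which is what
            // makes cellForRowAtIndexPath: return nil and trip the assertion.
            NSAssert(tblCell != nil, @"tblCellView.xib did not populate the tblCell outlet");
            cell = tblCell;
        }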

  • Returning C++ objects from Windows DLL

    - by R Samuel Klatchko
    Due to how Microsoft implements the heap in their non-DLL versions of the runtime, returning a C++ object from a DLL can cause problems:

        // dll.h
        DLL_EXPORT std::string somefunc();

    and:

        // app.c - not part of DLL but in the main executable
        void doit()
        {
            std::string str(somefunc());
        }

    The above code runs fine provided both the DLL and the EXE are built with the multi-threaded DLL runtime library. But if the DLL and EXE are built without the DLL runtime library (either the single- or multi-threaded versions), the code above fails (with a debug runtime, the code aborts immediately due to the assertion _CrtIsValidHeapPointer(pUserData) failing; with a non-debug runtime the heap gets corrupted and the program eventually fails elsewhere). Two questions:

    1. Is there a way to solve this other than requiring that all code use the DLL runtime?
    2. For people who distribute their libraries to third parties, how do you handle this? Do you not use C++ objects in your API? Do you require users of your library to use the DLL runtime? Something else?
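
    The usual mitigation (a sketch with illustrative names, not the original API) is to keep every allocation and its matching deallocation on the same side of the DLL boundary, e.g. by flattening the interface to C-style functions:

        // dll.h
        // Option A: the caller owns the buffer; the DLL only fills it.
        DLL_EXPORT size_t somefunc_size();                  // required length
        DLL_EXPORT void   somefunc(char* buf, size_t len);  // fill caller's buffer

        // Option B: the DLL both allocates and frees, so the same CRT heap
        // is used at both ends of the object's lifetime.
        DLL_EXPORT char* somefunc_alloc();
        DLL_EXPORT void  somefunc_free(char* p);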

  • How to support comparisons for QVariant objects containing a custom type?

    - by Tyler McHenry
    According to the Qt documentation, QVariant::operator== does not work as one might expect if the variant contains a custom type:

        bool QVariant::operator== ( const QVariant & v ) const

        Compares this QVariant with v and returns true if they are equal; otherwise returns false. In the case of custom types, their equalness operators are not called. Instead the values' addresses are compared.

    How are you supposed to get this to behave meaningfully for your custom types? In my case, I'm storing an enumerated value in a QVariant. In a header:

        enum MyEnum { Foo, Bar };
        Q_DECLARE_METATYPE(MyEnum);

    Somewhere in a function:

        QVariant var1 = QVariant::fromValue<MyEnum>(Foo);
        QVariant var2 = QVariant::fromValue<MyEnum>(Foo);
        assert(var1 == var2); // Fails!

    What do I need to do differently in order for this assertion to be true? I understand why it's not working -- each variant is storing a separate copy of the enumerated value, so they have different addresses. I want to know how I can change my approach to storing these values in variants so that either this is not an issue, or so that they do both reference the same underlying variable. I don't think it's possible for me to get around needing equality comparisons to work. The context is that I am using this enumeration as the UserData in items in a QComboBox, and I want to be able to use QComboBox::findData to locate the item index corresponding to a particular enumerated value.
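
    A common workaround for the QComboBox case (not from the original question) is to store the enum as a plain int, since QVariant compares built-in types by value rather than by address:

        // Store the enum as int so QVariant's operator== (and therefore
        // QComboBox::findData) compares by value.
        combo->addItem("Foo item", QVariant(static_cast<int>(Foo)));

        int index = combo->findData(QVariant(static_cast<int>(Foo)));
        MyEnum value = static_cast<MyEnum>(combo->itemData(index).toInt());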

  • Cocoa memory management - object going nil on me

    - by SirRatty
    Hi all,

    Mac OS X 10.6, Cocoa project, with retain/release memory management (no garbage collection). I've got a function which iterates over a specific directory, scans it for subfolders (including nested ones), builds an NSMutableArray of strings (one string per found subfolder path), and returns that array, e.g. (error handling removed for brevity):

        NSMutableArray *ListAllSubFoldersForFolderPath(NSString *folderPath) {
            NSMutableArray *a = [NSMutableArray arrayWithCapacity:100];
            NSString *itemName = nil;
            NSFileManager *fm = [NSFileManager defaultManager];
            NSDirectoryEnumerator *e = [fm enumeratorAtPath:folderPath];
            while (itemName = [e nextObject]) {
                NSString *fullPath = [folderPath stringByAppendingPathComponent:itemName];
                BOOL isDirectory;
                if ([fm fileExistsAtPath:fullPath isDirectory:&isDirectory]) {
                    if (isDirectory == YES) {
                        [a addObject:fullPath];
                    }
                }
            }
            return a;
        }

    The calling function takes the array just once per session and keeps it around for later processing:

        static NSMutableArray *gFolderPaths = nil;
        ...
        gFolderPaths = ListAllSubFoldersForFolderPath(myPath);
        [gFolderPaths retain];

    All appears good at this stage. [gFolderPaths count] returns the correct number of paths found, and [gFolderPaths description] prints out all the correct path names.

    The problem: when I go to use gFolderPaths later (say, the next run through my event loop) my assertion code (and gdb in Xcode) tells me that it is nil. I am not modifying gFolderPaths in any way after that initial grab, so I am presuming that my memory management is screwed and that gFolderPaths is being released by the runtime.

    My assumptions/presumptions: I do not have to retain each string as I add it to the mutable array because that is done automatically, but I do have to retain the array once it is handed over to me from the function, because I won't be using it immediately. Is this correct? Any help is appreciated.
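
    For reference (a sketch, not part of the original question): under retain/release rules the returned array is autoreleased, so ownership can be taken in the same expression as the assignment, leaving no window in which the array is observed without being owned:

        gFolderPaths = [ListAllSubFoldersForFolderPath(myPath) retain];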

  • Can a conforming C# compiler optimize away a local (but unused) variable if it is the only strong reference to an object?

    - by stakx
    The title says it all, but let me explain:

        void Case_1()
        {
            var weakRef = new WeakReference(new object());
            GC.Collect(); // <-- doesn't have to be an explicit call; just assume
                          //     that garbage collection would occur at this point.
            if (weakRef.IsAlive) ...
        }

    In this code example, I obviously have to plan for the possibility that the new'ed object is reclaimed by the garbage collector; therefore the if statement. (Note that I'm using weakRef for the sole purpose of checking if the new'ed object is still around.)

        void Case_2()
        {
            var unusedLocalVar = new object();
            var weakRef = new WeakReference(unusedLocalVar);
            GC.Collect(); // <-- doesn't have to be an explicit call; just assume
                          //     that garbage collection would occur at this point.
            Debug.Assert(weakRef.IsAlive);
        }

    The main change in this code example from the previous one is that the new'ed object is strongly referenced by a local variable (unusedLocalVar). However, this variable is never used again after the weak reference (weakRef) has been created. Question: is a conforming C# compiler allowed to optimize the first two lines of Case_2 into those of Case_1 if it sees that unusedLocalVar is only used in one place, namely as an argument to the WeakReference constructor? I.e. is there any possibility that the assertion in Case_2 could ever fail?
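
    For reference (not part of the original question): the runtime is indeed allowed to treat a local as dead after its last use, and GC.KeepAlive() is the documented way to extend a lifetime across such a point. A sketch of Case_2 with the guarantee restored:

        void Case_2_KeptAlive()
        {
            var local = new object();
            var weakRef = new WeakReference(local);
            GC.Collect();
            Debug.Assert(weakRef.IsAlive);   // now guaranteed to pass
            GC.KeepAlive(local);             // 'local' stays reachable until here
        }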

  • Python Webkit browser's inspector is missing a few things

    - by NoBugs
    I'm using a Webkit browser inspector like this. When I run it in Ubuntu 12.10, I'm getting errors when using the inspector. For example:

        ** Message: console message: file:///usr/share/webkitgtk-1.0/webinspector/UIString.js @42: Localized string "Go to line" not found.
        ** Message: console message: file:///usr/share/webkitgtk-1.0/webinspector/UIString.js @42: Localized string "Filter" not found.
        ** Message: console message: file:///usr/share/webkitgtk-1.0/webinspector/UIString.js @42: Localized string "Search Previous" not found.
        ** Message: console message: file:///usr/share/webkitgtk-1.0/webinspector/UIString.js @42: Localized string "Search Next" not found.
        ** Message: console message: file:///usr/share/webkitgtk-1.0/webinspector/UIString.js @42: Localized string "a:" not found.
        ** Message: console message: file:///usr/share/webkitgtk-1.0/webinspector/UIString.js @42: Localized string "%d of %d" not found.
        (geany:2487): Gdk-CRITICAL **: IA__gdk_error_trap_pop: assertion `gdk_error_traps != NULL' failed
        ** Message: console message: file:///usr/share/webkitgtk-1.0/webinspector/UIString.js @42: Localized string "Sources Panel" not found.
        ** Message: console message: file:///usr/share/webkitgtk-1.0/webinspector/UIString.js @42: Localized string "Toggle breakpoint" not found.
        ** Message: console message: file:///usr/share/webkitgtk-1.0/webinspector/UIString.js @42: Localized string "Painting" not found.

    I also noticed the breadcrumb/slider bar doesn't show when you have the console in the lower half. I don't remember this in earlier versions, and when I use the GTK3 version (from gi.repository import WebKit etc.) it has a similar problem, and is even worse: scrollbars don't have arrows at top and bottom. Am I missing a step in initializing the Webkit inspector or the English locale for it? I would like to debug this issue, but since the inspector object isn't a webview object, I'm not sure I can add an inspector to the inspector (like how you can use F12 when the inspector is in its own window in Chrome/Chromium, which lets you debug that inspector). It should be possible, but maybe not with PyGTK?

  • How to use the Zend_Log instance that was created using the Zend_Application_Resource_Log in a model

    - by Alex
    Our Zend_Log is initialized by only adding the following lines to application.ini:

        resources.log.stream.writerName = "Stream"
        resources.log.stream.writerParams.mode = "a"

    So Zend_Application_Resource_Log will create the instance for us. We are already able to access this instance in controllers via the following:

        public function getLog()
        {
            $bootstrap = $this->getInvokeArg('bootstrap');
            //if (is_null($bootstrap)) return false;
            if (!$bootstrap->hasPluginResource('Log')) {
                return false;
            }
            $log = $bootstrap->getResource('Log');
            return $log;
        }

    So far, so good. Now we want to use the same log instance in model classes, where we cannot access the bootstrap. Our first idea was to register the very same Zend_Log instance in Zend_Registry, to be able to use Zend_Registry::get('Zend_Log') everywhere we want. In our Bootstrap class:

        protected function _initLog()
        {
            if (!$this->hasPluginResource('Log')) {
                throw new Zend_Exception('Log not enabled');
            }
            $log = $this->getResource('Log');
            assert($log != null);
            Zend_Registry::set('Zend_Log', $log);
        }

    Unfortunately this assertion fails, i.e. $log IS NULL -- but why? It is clear that we could just initialize the Zend_Log manually during bootstrapping without using the automatism of Zend_Application_Resource_Log, so this kind of answer will not be accepted.
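
    One common gotcha worth ruling out (a guess, not an accepted answer from the thread): in ZF1 a _init method does not implicitly run other plugin resources, so the log resource may simply not have executed yet when _initLog asks for it. A sketch that forces it to run first:

        protected function _initLog()
        {
            $this->bootstrap('log');           // make sure the resource has run
            $log = $this->getResource('log');  // non-null once it has executed
            Zend_Registry::set('Zend_Log', $log);
        }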

  • Multiple calls to realloc() seem to cause heap corruption

    - by Windindeed
    What's the problem with this code? It crashes every time. One time it's a failed assertion "_ASSERTE(_CrtIsValidHeapPointer(pUserData));", other times it is just a "heap corruption" error. Changing the buffer size affects this issue in some strange ways: sometimes it crashes on the realloc, and other times on the free. I have debugged this code many times, and there is nothing abnormal regarding the pointers.

        char buf[2000];
        char *data = (char*)malloc(sizeof(buf));
        unsigned int size = sizeof(buf);
        for (unsigned int i = 0; i < 5; ++i) {
            char *ptr = data + size;
            size += sizeof(buf);
            char *tmp = (char*)realloc(data, size);
            if (!tmp) {
                std::cout << "Oh no..";
                break;
            }
            data = tmp;
            memcpy(ptr, buf, sizeof(buf));
        }
        free(data);

    Thanks!
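
    For reference (not part of the original question): ptr is computed from the old value of data, so when realloc moves the block, the subsequent memcpy writes through a dangling pointer into freed memory. A sketch of the corrected ordering:

        for (unsigned int i = 0; i < 5; ++i) {
            char *tmp = (char*)realloc(data, size + sizeof(buf));
            if (!tmp) {
                std::cout << "Oh no..";
                break;
            }
            data = tmp;
            char *ptr = data + size;   // compute the destination from the NEW block
            size += sizeof(buf);
            memcpy(ptr, buf, sizeof(buf));
        }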

  • C++: getting the address of the start of an std::vector?

    - by shoosh
    Sometimes it is useful to take the starting address of an std::vector and temporarily treat that address as the address of a regularly allocated buffer. For instance, replace this:

        char* buf = new char[size];
        fillTheBuffer(buf, size);
        useTheBuffer(buf, size);
        delete[] buf;

    with this:

        vector<char> buf(size);
        fillTheBuffer(&buf[0], size);
        useTheBuffer(&buf[0], size);

    The advantage of this is of course that the buffer is deallocated automatically and I don't have to worry about the delete[]. The problem I'm having with this is when size == 0. In that case the first version works OK: an empty buffer is "allocated", and the subsequent functions do nothing since they get size == 0. The second version, however, fails if size == 0, since calling buf[0] may rightly contain an assertion that 0 < size. So is there an alternative to the idiom &buf[0] that returns the address of the start of the vector even if the vector is empty? I've also considered using buf.begin(), but according to the standard it isn't even guaranteed to return a pointer.
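
    One workaround (not from the original question) is a small helper that special-cases emptiness; later C++ standards also added vector::data(), which is valid to call even on an empty vector. A sketch of the pre-C++11 helper:

        template <typename T>
        T* vec_data(std::vector<T>& v) {
            // &v[0] is undefined for an empty vector, so hand back a null
            // pointer instead; the callee receives size == 0 anyway.
            return v.empty() ? NULL : &v[0];
        }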

  • What encoding does c32rtomb convert to?

    - by R. Martinho Fernandes
    The functions c32rtomb and mbrtoc32 from <cuchar>/<uchar.h> are described in the C Unicode TR (draft) as performing conversions between UTF-32 [1] and "multibyte characters":

        (...) If s is not a null pointer, the c32rtomb function determines the number of bytes needed to represent the multibyte character that corresponds to the wide character given by c32 (including any shift sequences), and stores the multibyte character representation in the array whose first element is pointed to by s. (...)

    What is this "multibyte character representation"? I'm actually interested in the behaviour of the following program:

        #include <cassert>
        #include <cuchar>
        #include <string>

        int main() {
            std::u32string u32 = U"this is a wide string";
            std::string narrow = "this is a wide string";
            std::string converted(1000, '\0');
            char* ptr = &converted[0];
            std::mbstate_t state {};
            for(auto u : u32) {
                ptr += std::c32rtomb(ptr, u, &state);
            }
            converted.resize(ptr - &converted[0]);
            assert(converted == narrow);
        }

    Is the assertion in it guaranteed to hold [1]?

    [1] Working under the assumption that __STDC_UTF_32__ is defined.

  • PHP 5.3 Not Logging

    - by BHare
    I have set error_log = "/var/log/apache2/php_errors.log" and made sure errors were being logged. I have set the file to be owned by the www-data owner and group, and even set the permissions to 777. I have confirmed with phpinfo() that the error_log is correctly set; however, the logging still only happens in my vhost's Apache error log. The following is my php.ini for PHP 5.3.3-7 on Debian Squeeze with Apache 2. The top is populated with commented-out settings I have been interested in or have changed; I have deleted all other comments to save space. Full version here: http://pastebin.com/AhWLiQBR

        [PHP]
        ;short_open_tag = On
        ;allow_call_time_pass_reference = On
        ;error_reporting = E_ALL & ~E_NOTICE & ~E_DEPRECATED
        ;display_errors = On
        ;display_startup_errors = Off
        ;log_errors = On
        ;html_errors = On
        error_log = "/var/log/apache2/php_errors.log"
        engine = On
        short_open_tag = On
        asp_tags = Off
        precision = 14
        y2k_compliance = On
        output_buffering = 4096
        zlib.output_compression = Off
        implicit_flush = Off
        unserialize_callback_func =
        serialize_precision = 100
        allow_call_time_pass_reference = On
        safe_mode = Off
        safe_mode_gid = Off
        safe_mode_include_dir =
        safe_mode_exec_dir =
        safe_mode_allowed_env_vars = PHP_
        safe_mode_protected_env_vars = LD_LIBRARY_PATH
        disable_functions =
        disable_classes =
        expose_php = On
        max_execution_time = 30
        max_input_time = 60
        memory_limit = 128M
        error_reporting = E_ALL & ~E_NOTICE & ~E_DEPRECATED
        display_errors = On
        display_startup_errors = Off
        log_errors = On
        log_errors_max_len = 1024
        ignore_repeated_errors = Off
        ignore_repeated_source = Off
        report_memleaks = On
        track_errors = Off
        html_errors = On
        variables_order = "GPCS"
        request_order = "GPC"
        register_globals = Off
        register_long_arrays = Off
        register_argc_argv = Off
        auto_globals_jit = On
        post_max_size = 100M
        magic_quotes_gpc = Off
        magic_quotes_runtime = Off
        magic_quotes_sybase = Off
        auto_prepend_file =
        auto_append_file =
        default_mimetype = "text/html"
        doc_root =
        user_dir =
        enable_dl = Off
        file_uploads = On
        upload_tmp_dir = /tmp
        upload_max_filesize = 100M
        max_file_uploads = 20
        allow_url_fopen = On
        allow_url_include = Off
        default_socket_timeout = 60

        [Date]
        [filter]
        [iconv]
        [intl]
        [sqlite]
        [sqlite3]
        [Pcre]
        [Pdo]

        [Pdo_mysql]
        pdo_mysql.cache_size = 2000
        pdo_mysql.default_socket=

        [Phar]

        [Syslog]
        define_syslog_variables = Off

        [mail function]
        SMTP = localhost
        smtp_port = 25
        mail.add_x_header = On

        [SQL]
        sql.safe_mode = Off

        [ODBC]
        odbc.allow_persistent = On
        odbc.check_persistent = On
        odbc.max_persistent = -1
        odbc.max_links = -1
        odbc.defaultlrl = 4096
        odbc.defaultbinmode = 1

        [Interbase]
        ibase.allow_persistent = 1
        ibase.max_persistent = -1
        ibase.max_links = -1
        ibase.timestampformat = "%Y-%m-%d %H:%M:%S"
        ibase.dateformat = "%Y-%m-%d"
        ibase.timeformat = "%H:%M:%S"

        [MySQL]
        mysql.allow_local_infile = On
        mysql.allow_persistent = On
        mysql.cache_size = 2000
        mysql.max_persistent = -1
        mysql.max_links = -1
        mysql.default_port =
        mysql.default_socket =
        mysql.default_host =
        mysql.default_user =
        mysql.default_password =
        mysql.connect_timeout = 60
        mysql.trace_mode = Off

        [MySQLi]
        mysqli.max_persistent = -1
        mysqli.allow_persistent = On
        mysqli.max_links = -1
        mysqli.cache_size = 2000
        mysqli.default_port = 3306
        mysqli.default_socket =
        mysqli.default_host =
        mysqli.default_user =
        mysqli.default_pw =
        mysqli.reconnect = Off

        [mysqlnd]
        mysqlnd.collect_statistics = On
        mysqlnd.collect_memory_statistics = Off

        [OCI8]

        [PostgresSQL]
        pgsql.allow_persistent = On
        pgsql.auto_reset_persistent = Off
        pgsql.max_persistent = -1
        pgsql.max_links = -1
        pgsql.ignore_notice = 0
        pgsql.log_notice = 0

        [Sybase-CT]
        sybct.allow_persistent = On
        sybct.max_persistent = -1
        sybct.max_links = -1
        sybct.min_server_severity = 10
        sybct.min_client_severity = 10

        [bcmath]
        bcmath.scale = 0

        [browscap]

        [Session]
        session.save_handler = files
        session.use_cookies = 1
        session.use_only_cookies = 1
        session.name = PHPSESSID
        session.auto_start = 0
        session.cookie_lifetime = 0
        session.cookie_path = /
        session.cookie_domain =
        session.cookie_httponly =
        session.serialize_handler = php
        session.gc_probability = 0
        session.gc_divisor = 1000
        session.gc_maxlifetime = 1440
        session.bug_compat_42 = Off
        session.bug_compat_warn = Off
        session.referer_check =
        session.entropy_length = 0
        session.cache_limiter = nocache
        session.cache_expire = 180
        session.use_trans_sid = 0
        session.hash_function = 0
        session.hash_bits_per_character = 5
        url_rewriter.tags = "a=href,area=href,frame=src,input=src,form=fakeentry"

        [MSSQL]
        mssql.allow_persistent = On
        mssql.max_persistent = -1
        mssql.max_links = -1
        mssql.min_error_severity = 10
        mssql.min_message_severity = 10
        mssql.compatability_mode = Off
        mssql.secure_connection = Off

        [Assertion]
        [COM]
        [mbstring]
        [gd]
        [exif]

        [Tidy]
        tidy.clean_output = Off

        [soap]
        soap.wsdl_cache_enabled=1
        soap.wsdl_cache_dir="/tmp"
        soap.wsdl_cache_ttl=86400
        soap.wsdl_cache_limit = 5

        [sysvshm]

        [ldap]
        ldap.max_links = -1

        [mcrypt]
        [dba]

  • Seeking (somewhat) better explanations about supporting > 2.1 TB hard drives.

    - by irrational John
    Today while Googling about, I stumbled across posts claiming that Seagate plans to ship a 3TB drive sometime later in 2010. Unfortunately, the stuff I looked at all seemed to contain tidbits of info which I didn't think fit together properly. (I would link to some examples, but I'm only allowed 1 link per post at the moment.)

    Now I really don't have any "need" to better understand the underlying tedious details of this. I am just curious. And confused. So... some questions I'm hoping someone better informed than I might answer.

    The talk about a potential addressing problem in both the hardware and the software confused me. The assertion is that something called Long LBA addressing (LLBA) is needed in the Command Descriptor Block as a way to get around the current limits to access a hard drive bigger than ~2.1 (or ~2.2?) TB. OK, fine. But I thought the last time this problem came up it was solved by extending the length of the LBA field from 28 to 48 bits. (Remember this website? www.48bitlba.com) A 6-byte LBA is clearly large enough, so what's up with this LLBA talk? I thought this was all fixed back in Win XP SP2, if not sooner. And certainly all the hardware should be up to the task, shouldn't it?

    The real problem, as I understand it, with drives much bigger than 2 TB is the 4-byte LBA fields in the Master Boot Record (MBR) used to partition just about all hard drives at the moment. The most likely solution is to migrate to Intel's GUID Partition Table (GPT). A GPT uses 8-byte fields for the LBA. What I don't understand in this context is what the problem is with booting, say, Windows from a 3TB drive that uses a GPT. Granted, the current PC BIOS wouldn't know how to recognize or work with a GPT. But every GPT comes with a so-called "Safety" or "Guarding" MBR in sector 0. Apple already uses a hybrid version of the MBR to allow them to boot Windows on their Intel Macs (aka Boot Camp). Couldn't something similar be done to allow the PC BIOS to recognize and boot from a partition in, say, the first 1 GB of a 3TB or larger drive?

    I've got more questions, such as where 4K sectors fit into all of this. But it's probably time I just shut up and posted this. ;-) -irrational John
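
    For reference, the arithmetic behind those figures: the MBR's 4-byte LBA fields address 2^32 sectors, and 4,294,967,296 sectors x 512 bytes per sector = 2,199,023,255,552 bytes, i.e. about 2.2 TB (decimal) or 2.0 TiB, which is where the slightly different "~2.1" and "~2.2" numbers in various reports come from.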

  • A little primer on using TFS with a small team

    - by johndoucette
    The scenario: a small team of 3 developers, mostly in maintenance mode, with traditional ASP.NET, classic ASP, .NET integration services, and utilities for the company's third-party packages, plus a bunch of Java-based ColdFusion web applications, all under Visual SourceSafe (VSS). They are about to embark on a huge SharePoint 2010 new-construction project and wanted to use Subversion instead of VSS. TFS was a foreign word and smelled of "high cost" and an "over-complicated process". Since they had no preconceptions about the old TFS versions ('05 & '08), it was fun explaining how simple it was to install a TFS server and get the ball rolling, with or without all the heavy stuff one sometimes associates with such a huge and powerful application lifecycle management product.

    So, how does a small team begin using TFS?

    1. Start by using source control and migrate current VSS source trees into TFS. You can take the latest version or migrate the entire version history. It's up to you whether you want a clean start or need quick access to all the version notes and history of the bits.

    2. Since most shops are mainly in maintenance mode with existing applications, begin using bug workitems for everything. When you receive an issue/bug from your current tracking system, manually enter the workitem in TFS right through Visual Studio. You can automate the integration with the current tracking system later, or replace it entirely. Believe me, this thing is powerful and can handle even the largest of help desks.

    3. With new construction, begin work with requirement and task workitems and follow the traditional sprint-based development lifecycle. Obviously, some minor training will be needed, but don't fear: this is very intuitive, and MSDN has a ton of lesson-based labs and videos.

    4. For the Java developers, use the new Team Explorer Everywhere 2010 plugin (previously known as Teamprise). There is a seamless interface in Eclipse, but also a good command-line utility for other environments such as Dreamweaver.

    5. Wait to fully integrate the whole workitem/project management/testing process until your team is familiar with the integrated workitems for bugs and code. After a while, you will see the team wanting more transparency into the work they are all doing, and naturally everyone will want workitems to help them organize the chaos!

    6. Management will be limited in the value of the reports until you have a fully blown implementation of project planning, construction, build, deployment, and testing. However, there are some basic "bug rate" reports and current backlog listings that can provide good information.

    Some notable explanations of TFS:

    Work Item Tracking and Project Management - A workitem represents the unit of work within the system, enabling tracking of all activities produced by a user, whether that user is a developer, business user, project manager, or tester. The properties of a workitem, such as linked changesets (checked-in code), who updated the data and when, and the states and reasons for change, are all transitioned to a data warehouse within TFS for reporting purposes. A workitem can be defined as a "bug", "requirement", "test case", or a "change request". Workitems drive the work effort of the individual assigned to them and also play a key role in defining what needs to be done. Workitems are the things the team needs to do to accomplish a goal.

    Test Case Management - Starting with a workitem known as a "test case", a tester (or developer) can now author and manage test cases within a formal test plan subsystem. Although TFS supports the test case workitem type, there is a new product known as VS Test Professional 2010 which allows a tester to facilitate manual tests, including fast-forwarding steps in the process to arrive at the assertion point quickly. This repeatable process provides quick regression tests and can be conducted by the business user to ensure completeness during UAT. In addition, developers can no longer respond to a bug with the line "cannot reproduce". With every test run, attachments are available, including the recorded session, captured environment configurations and settings, screen shots, IntelliTrace (debugging history), and, in some cases if the lab manager is being used, a snapshot of the tested environment.

    Version Control - A modern system allowing shared check-in/check-out, excellent merge conflict resolution, shelvesets (personal check-ins), branching/merging visualization, public workspaces, gated check-ins, security hierarchy capabilities, and changeset/workitem tracking. Knowing what was done with the code by any developer has become much easier to picture, and issues easier to resolve.

    Team Build - Automate the compilation process, whether you need it to run whenever a developer checks in code, periodically (such as nightly builds for testers in the morning), or as manual builds to be deployed into production. Each build can run through pre-determined tests, perform code analysis to see if the developer conforms to the team standards, and reject the build if either fails.

    Project Portal & Reporting - Provide management with a dashboard with insight into the project(s): "where are we" at each step of the way, including past iterations and the current burndown rate. Enabling this feature is easy, as it seamlessly interfaces with existing SharePoint implementations.

  • OpenSSL Versions in Solaris

    - by darrenm
    Those of you who have installed Solaris 11 or have read some of the blogs by my colleagues will have noticed that Solaris 11 includes OpenSSL 1.0.0, a different version from what we have in Solaris 10. I hope the following explains why that is and how it fits with the expectations of binary compatibility between Solaris releases.

    Solaris 10 was the first release where we included OpenSSL libraries and headers (part of it was actually statically linked into the SSH client/server in Solaris 9). At the time we were building and releasing Solaris 10, the current train of OpenSSL was 0.9.7. The OpenSSL libraries at that time were known to not always be completely API and ABI (binary) compatible between releases (sometimes even in the lettered patch releases), though mostly if you stuck with the documented high-level APIs you would be fine. For this reason OpenSSL was classified as a 'Volatile' interface, and in Solaris 10 Volatile interfaces were not part of the default library search path, which is why the OpenSSL libraries live in /usr/sfw/lib on Solaris 10. Okay, but what does Volatile mean? Quoting from the attributes(5) man page description of Volatile (which was called External in the older taxonomy):

        Volatile interfaces can change at any time and for any reason. The Volatile interface stability level allows Sun products to quickly track a fluid, rapidly evolving specification. In many cases, this is preferred to providing additional stability to the interface, as it may better meet the expectations of the consumer.

        The most common application of this taxonomy level is to interfaces that are controlled by a body other than Sun, but unlike specifications controlled by standards bodies or Free or Open Source Software (FOSS) communities which value interface compatibility, it can not be asserted that an incompatible change to the interface specification would be exceedingly rare. It may also be applied to FOSS controlled software where it is deemed more important to track the community with minimal latency than to provide stability to our customers. It is also common to apply the Volatile classification level to interfaces in the process of being defined by trusted or widely accepted organizations. These are generically referred to as draft standards. An "IETF Internet draft" is a well understood example of a specification under development.

        Volatile can also be applied to experimental interfaces. No assertion is made regarding either source or binary compatibility of Volatile interfaces between any two releases, including patches. Applications containing these interfaces might fail to function properly in any future release.

    Note that last paragraph! OpenSSL is only one example of the many interfaces in Solaris that are classified as Volatile. At the other end of the scale we have Committed (Stable in Solaris 10 terminology) interfaces; these include things like the POSIX APIs or Solaris-specific APIs that we have no intention of changing in an incompatible way. There are also Private interfaces and things we declare as Not-an-Interface (e.g. command output not intended for scripting against, only to be read by humans). Even if we had declared OpenSSL as a Committed/Stable interface in Solaris 10, there are allowed exceptions, again quoting from attributes(5):

        4. An interface specification which isn't controlled by Sun has been changed incompatibly and the vast majority of interface consumers expect the newer interface.

        5. Not making the incompatible change would be incomprehensible to our customers.

    In our opinion, and that of our large and small customers, keeping up with the OpenSSL community is important, and certainly both of the above cases apply.

    Our policy for dealing with OpenSSL on Solaris 10 was to stay at 0.9.7 and add fixes for security vulnerabilities (the version string includes the CVE numbers of fixed vulnerabilities relevant to that release train). The last release of OpenSSL 0.9.7 delivered by the upstream community was more than 4 years ago, in Feb 2007.

    Now let's roll forward to just before the release of Solaris 11 Express in 2010. By that point in time the current OpenSSL release was 0.9.8, with the 1.0.0 release known to be coming soon. Two significant changes to OpenSSL were made between Solaris 10 and Solaris 11 Express. First, in Solaris 11 Express (and Solaris 11) we removed the requirement that Volatile libraries be placed in /usr/sfw/lib; that means OpenSSL is now in /usr/lib. Secondly, we upgraded it to the then-current version stream of OpenSSL (0.9.8), as was expected by our customers.

    Between Solaris 11 Express in 2010 and the release of Solaris 11 in 2011, the OpenSSL community released version 1.0.0. This was a huge milestone for a long-standing and highly respected open source project. It would have been highly negligent of Solaris not to include OpenSSL 1.0.0e in the Solaris 11 release: it is the latest, best supported, and best performing version. In fact Solaris 11 isn't 'just' OpenSSL 1.0.0; we have also added our SPARC T4 engine and the AES-NI engine to support on-chip crypto acceleration. This gives us 4.3x better AES performance than OpenSSL 0.9.8 running on AIX on an IBM POWER7. We are now working with the OpenSSL community to determine how best to integrate the SPARC T4 changes into the mainline OpenSSL. The OpenSSL 'pkcs11' engine we delivered in Solaris 10 to support the CA-6000 card and the SPARC T1/T2/T3 hardware is still included in Solaris 11.

    When OpenSSL 1.0.1 and 1.1.0 come out we will assess what is best for Solaris customers. It might be an upgrade, or it might be parallel delivery of more than one version stream. At this time Solaris 11 still classifies OpenSSL as a Volatile interface; it is our hope that at some point in a future release we will be able to give it a higher interface stability level.

    Happy crypting! And thank-you, OpenSSL community, for all the work you have done that helps Solaris.
