Search Results

Search found 4783 results on 192 pages for 'tests'.


  • Merging two Regular Expressions to Truncate Words in Strings

    - by Alix Axel
    I'm trying to write a function that truncates a string to whole words (if possible, otherwise it should truncate to chars):

        function Text_Truncate($string, $limit, $more = '...') {
            $string = trim(html_entity_decode($string, ENT_QUOTES, 'UTF-8'));
            if (strlen(utf8_decode($string)) > $limit) {
                $string = preg_replace('~^(.{1,' . intval($limit) . '})(?:\s.*|$)~su', '$1', $string);
                if (strlen(utf8_decode($string)) > $limit) {
                    $string = preg_replace('~^(.{' . intval($limit) . '}).*~su', '$1', $string);
                }
                $string .= $more;
            }
            return trim(htmlentities($string, ENT_QUOTES, 'UTF-8', true));
        }

    Here are some tests:

        // Iñtërnâtiônàlizætiøn and then the quick brown fox... (49 + 3 chars)
        echo Text_Truncate('Iñtërnâtiônàlizætiøn and then the quick brown fox jumped overly the lazy dog and one day the lazy dog humped the poor fox down until she died.', 50, '...');

        // Iñtërnâtiônàlizætiøn_and_then_the_quick_brown_fox_... (50 + 3 chars)
        echo Text_Truncate('Iñtërnâtiônàlizætiøn_and_then_the_quick_brown_fox_jumped_overly_the_lazy_dog and one day the lazy dog humped the poor fox down until she died.', 50, '...');

    Both work as is; however, if I drop the second preg_replace() I get the following:

        Iñtërnâtiônàlizætiøn_and_then_the_quick_brown_fox_jumped_overly_the_lazy_dog and one day the lazy dog humped the poor fox down until she died....

    I can't use substr() because it only works at the byte level, and I don't have access to mb_substr() at the moment. I've made several attempts to join the second regex with the first one, but without success. Please help, I've been struggling with this for almost an hour.

    EDIT: I'm sorry, I've been awake for 40 hours and I shamelessly missed this:

        $string = preg_replace('~^(.{1,' . intval($limit) . '})(?:\s.*|$)?~su', '$1', $string);

    Still, if someone has a more optimized regex (or one that ignores the trailing space) please share. At the moment I get:

        "Iñtërnâtiônàlizætiøn and then "
        "Iñtërnâtiônàlizætiøn_and_then_"

    EDIT 2: I still can't get rid of the trailing whitespace; can someone help me out?
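
    A single-pass candidate (a sketch, not exhaustively tested against every edge case): the lookbehind (?<=\S) keeps the captured text from ending in whitespace, and the second alternative is the hard character cut for strings with no usable break point within the limit.

        function Text_Truncate($string, $limit, $more = '...') {
            $string = trim(html_entity_decode($string, ENT_QUOTES, 'UTF-8'));
            if (strlen(utf8_decode($string)) > $limit) {
                $limit = intval($limit);
                // First alternative: break at whitespace within the limit, never
                // capturing a trailing space. Second alternative: hard cut.
                $string = preg_replace(
                    '~^(.{1,' . $limit . '}(?<=\S))(?:\s.*|$)|^(.{' . $limit . '}).*~su',
                    '$1$2',
                    $string
                );
                $string .= $more;
            }
            return trim(htmlentities($string, ENT_QUOTES, 'UTF-8', true));
        }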

    Read the article

  • Using Moq callbacks correctly according to AAA

    - by Hadi Eskandari
    I've created a unit test that tests interactions on my ViewModel class in a Silverlight application. To run this test I mock the service interface injected into the ViewModel, using the Moq framework. To verify that the bound object in the ViewModel is converted properly, I've used a callback:

        [Test]
        public void SaveProposal_Will_Map_Proposal_To_WebService_Parameter()
        {
            var vm = CreateNewCampaignViewModel();
            var proposal = CreateNewProposal(1, "New Proposal");

            Services.Setup(x => x.SaveProposalAsync(It.IsAny<saveProposalParam>()))
                    .Callback((saveProposalParam p) =>
                    {
                        Assert.That(p.plainProposal, Is.Not.Null);
                        Assert.That(p.plainProposal.POrderItem.orderItemId, Is.EqualTo(1));
                        Assert.That(p.plainProposal.POrderItem.orderName, Is.EqualTo("New Proposal"));
                    });

            proposal.State = ObjectStates.Added;
            vm.CurrentProposal = proposal;
            vm.Save();
        }

    It works fine, but as you may have noticed, with this mechanism the Assert and Act parts of the unit test have swapped places (Assert comes before Act). Is there a better way to do this, while preserving the correct AAA order?
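
    A sketch of the usual rearrangement, reusing the names from the question: let the callback only capture the parameter, then act, then assert, so the test reads Arrange, Act, Assert in order.

        // Arrange
        var vm = CreateNewCampaignViewModel();
        var proposal = CreateNewProposal(1, "New Proposal");
        saveProposalParam captured = null;
        Services.Setup(x => x.SaveProposalAsync(It.IsAny<saveProposalParam>()))
                .Callback((saveProposalParam p) => captured = p);  // capture only

        // Act
        proposal.State = ObjectStates.Added;
        vm.CurrentProposal = proposal;
        vm.Save();

        // Assert
        Assert.That(captured, Is.Not.Null);
        Assert.That(captured.plainProposal.POrderItem.orderItemId, Is.EqualTo(1));
        Assert.That(captured.plainProposal.POrderItem.orderName, Is.EqualTo("New Proposal"));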

    Read the article

  • What's a good way of building up a String given specific start and end locations?

    - by Michael Campbell
    (Java 1.5) I need to build up a String in pieces. I'm given a set of (sub)strings, each with a start and end point of where it belongs in the final string, and I was wondering whether there is some canonical way of doing this. This isn't homework, and I can use any licensable OSS, such as Jakarta commons-lang StringUtils etc. My company has a solution using a CharBuffer, and I'm content to leave it as is (and add some unit tests, of which there are none!), but the code is fairly hideous and I would like something easier to read. As I said, this isn't homework, and I don't need a complete solution, just some pointers to libraries or Java classes that might give me some insight. String.format didn't seem quite right: I would have to honor inputs too long and too short, etc. Substrings would be overlaid in the order they appear (in case of overlap). As an example of input (String:start:end), I might have something like:

        FO:0:3      (string shorter than field)
        BAR:4:5     (string larger than field)
        BLEH:5:9    (string overlays previous field)

    I'd want to end up with:

        FO  BBLEH
        0123456789
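
    For illustration, a minimal sketch of the overlay idea in plain Java 1.5 (hypothetical names; it assumes pieces are applied in order, later ones winning, as in the example):

        public class Overlay {
            // Overlay 's' onto 'sb' at 'start', growing 'sb' with spaces as needed.
            static void place(StringBuilder sb, String s, int start) {
                while (sb.length() < start + s.length()) {
                    sb.append(' ');
                }
                sb.replace(start, start + s.length(), s);
            }

            public static void main(String[] args) {
                StringBuilder sb = new StringBuilder();
                place(sb, "FO", 0);     // string shorter than field
                place(sb, "BAR", 4);    // string overruns its field
                place(sb, "BLEH", 5);   // overlays the previous field
                System.out.println(sb); // prints "FO  BBLEH"
            }
        }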

    Read the article

  • AL32UTF8 in Oracle, with SQL Server and DB2 pulling data

    - by Bob
    I have a non-UTF-8 Oracle database running on 11.1.0.7. We need to support Greek characters, so we have two options:

        1. Use nvarchar/nclob fields for those fields that need Greek (it is not all fields). We have tested this and gotten it to work with Java coding.
        2. Convert Oracle to an AL32UTF8 database. I am not asking how to do this; I got this from the Oracle site and Oracle Support. I know what is involved: lossy data, etc., and increasing the size of the database.

    My question: we have users of our system that connect to our database with database links but work on SQL Server and IBM DB2 databases. I do not have access to those databases and I do not have experience with them. If they are not UTF-8 databases, what happens when they pull UTF-8 data? I would assume that English/ASCII characters are fine and the Greek will end up as junk data. I also ran the Oracle Character Set Scanner (the Oracle command-line utility you use to get info about the effects of a character set conversion). It says that my database will increase in size by about 20%. Does this have an effect on users with third-party databases? These are customers of our data and there is a limit to how much access I can have to them to run tests. Any information you have would be welcome.

    Read the article

  • Core Data: Inverse relationship only mirrors when I edit the mutable set. Not sure why.

    - by zorn
    My model is set up so that Business has many Clients, and Client has one Business. The inverse relationship is set up in the mom file. I have a unit test like this:

        - (void)testNewClientFromBusiness
        {
            PTBusiness *business = [modelController newBusiness];
            STAssertTrue([[business clients] count] == 0, @"is actually %d", [[business clients] count]);
            PTClient *client = [business newClient];
            STAssertTrue([business isEqual:[client business]], nil);
            STAssertTrue([[business clients] count] == 1, @"is actually %d", [[business clients] count]);
        }

    I implement -newClient inside PTBusiness like this:

        - (PTClient *)newClient
        {
            PTClient *client = [NSEntityDescription insertNewObjectForEntityForName:@"Client"
                                                            inManagedObjectContext:[self managedObjectContext]];
            [client setBusiness:self];
            [client updateLocalDefaultsBasedOnBusiness];
            return client;
        }

    The test fails because [[business clients] count] is still 0 after -newClient is called. If I implement it like this:

        - (PTClient *)newClient
        {
            PTClient *client = [NSEntityDescription insertNewObjectForEntityForName:@"Client"
                                                            inManagedObjectContext:[self managedObjectContext]];
            NSMutableSet *group = [self mutableSetValueForKey:@"clients"];
            [group addObject:client];
            [client updateLocalDefaultsBasedOnBusiness];
            return client;
        }

    the test passes. My questions: Am I right in thinking the inverse relationship is only updated when I interact with the mutable set? That seems to go against some other Core Data docs I've read. Does the fact that this is running in a unit test without a run loop have anything to do with it? Any other troubleshooting recommendations? I'd really like to figure out why I can't set up the relationship at the client end.
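
    One hedged thing to check (an assumption about the cause, not a confirmed diagnosis): Core Data only fixes up the inverse when the change goes through accessors it controls. If PTClient implements the business property itself (for example with @synthesize or a hand-written setter that just assigns an ivar) instead of leaving it @dynamic, then setBusiness: bypasses Core Data entirely and the inverse never updates:

        @interface PTClient : NSManagedObject
        @property (nonatomic, retain) PTBusiness *business;
        @end

        @implementation PTClient
        @dynamic business;   // let Core Data generate KVO-compliant accessors
        @end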

    Read the article

  • What is the point of the logical operators in C?

    - by reubensammut
    I was just wondering whether there is an XOR logical operator in C (something like && for AND, but for XOR). I know I can split an XOR into ANDs, NOTs and ORs, but a simple XOR would be much better. Then it occurred to me that if I use the normal bitwise XOR operator between two conditions, it might just work, and in my tests it did. Consider:

        int i = 3;
        int j = 7;
        int k = 8;

    Just for the sake of this rather stupid example, if I need k to be either greater than i or greater than j, but not both, XOR would be quite handy:

        if ((k > i) XOR (k > j))
            printf("Valid");
        else
            printf("Invalid");

    or

        printf("%s", ((k > i) XOR (k > j)) ? "Valid" : "Invalid");

    I put in the bitwise XOR ^ and it produced "Invalid". Putting the results of the two comparisons into two integers showed that both integers contained a 1, hence the XOR produced false. I then tried it with the & and | bitwise operators, and both gave the expected results. All this makes sense knowing that true conditions have a non-zero value whilst false conditions have a zero value. I was wondering: is there a reason to use the logical && and || when the bitwise operators &, | and ^ work just the same?

    Thanks, Reuben
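
    One note the question invites: comparison operators in C yield exactly 0 or 1, so ^ behaves as a logical XOR on them; for arbitrary operands the values need normalizing first, and unlike && and ||, neither ^ nor & short-circuits (both operands are always evaluated). A sketch:

        #include <stdio.h>

        /* Normalizing each operand with !! collapses any non-zero value to
           exactly 1, so bitwise ^ then acts as a logical XOR. */
        #define LXOR(a, b) (!!(a) ^ !!(b))

        int main(void) {
            int i = 3, j = 7, k = 8;
            printf("%s\n", LXOR(k > i, k > j) ? "Valid" : "Invalid"); /* Invalid: both true */
            printf("%s\n", LXOR(5, 0) ? "Valid" : "Invalid");         /* Valid */
            printf("%s\n", LXOR(5, 7) ? "Valid" : "Invalid");         /* Invalid: 5 ^ 7 would be 2 */
            return 0;
        }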

    Read the article

  • Correct way to trigger object clone/memento when property changes

    - by Jay
    Hi, I have serious doubts about the correct way to save an object's state (clone the object) when a property changes, in case it becomes necessary to roll back the changes. I know the IEditableObject interface exists for those cases, but during some tests BeginEdit would just fire like crazy (I have a DataGrid whose values can be edited, but I won't need to keep the state of the object in those cases). I'm following the MVP design pattern in my project, and the view's DataContext is a wrapper around my presenter. Let's say I have a CheckBox/TextBox in my UI, and when the textbox's value changes, the property bound in the wrapper gets set. Currently, before setting the new value, I raise an event to the presenter (something like PropertyChanging) that clones my wrapper. I'm doing this because I don't think that job should be done by the wrapper itself, but by the presenter. Is this a correct approach? Is raising the event an acceptable solution? I thought of other possible ideas:

        1. An interface between presenter and wrapper;
        2. Explicit binding, triggering the binding after saving the object's state.

    What is your opinion on the best way to do this? Should I just keep IEditableObject? Is that the best way?

    Best Regards
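
    A minimal sketch (hypothetical names, not a recommendation over IEditableObject) of the event-based flow described above, with the wrapper announcing the imminent change and the presenter deciding whether to snapshot:

        public class ProposalWrapper
        {
            // Raised before a value actually changes; the presenter subscribes
            // and clones the wrapper (or stores a memento) in its handler.
            public event EventHandler PropertyChanging;

            private string name;
            public string Name
            {
                get { return name; }
                set
                {
                    if (name != value)
                    {
                        var handler = PropertyChanging;
                        if (handler != null) handler(this, EventArgs.Empty);
                        name = value;
                    }
                }
            }
        }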

    Read the article

  • Blocking on DBCP connection pool (open and close connection). Is database connection pooling in OpenEJB pluggable?

    - by topchef
    We use OpenEJB on Tomcat (we used to run on JBoss, WebLogic, etc.). While running load tests we experience significant performance problems with handling JMS messages (queues). The problem was localized to blocking on the database connection pool when getting or releasing a connection. The blocking prevented concurrent MDB instances (threads) from running, hence performance suffered 10-fold and worse. The same code used to run on application servers (with their respective connection pool implementations) with no blocking at all. Example of a blocked thread:

        Name: JMS Resource Adapter-worker-23
        State: BLOCKED on org.apache.commons.pool.impl.GenericObjectPool@1ea6b4a
               owned by: JMS Resource Adapter-worker-19
        Total blocked: 18,426  Total waited: 0

        Stack trace:
        org.apache.commons.pool.impl.GenericObjectPool.returnObject(GenericObjectPool.java:916)
        org.apache.commons.dbcp.PoolableConnection.close(PoolableConnection.java:91)
        - locked org.apache.commons.dbcp.PoolableConnection@1bcba8
        org.apache.commons.dbcp.managed.ManagedConnection.close(ManagedConnection.java:147)
        com.xxxxx.persistence.DbHelper.closeConnection(DbHelper.java:290)
        ....

    A couple of questions. I am almost certain that some transactional attributes and properties contribute to this blocking, but the MDBs are defined as non-transactional (we use both annotations and ejb-jar.xml). Some EJBs do use container-managed transactions, though, and we can observe blocking there as well. Are there any DBCP configurations that may fix the blocking? Is the DBCP connection pool implementation replaceable in OpenEJB, and how easy (or difficult) would it be to replace it with another library? Just in case, this is how we define the data source in OpenEJB (openejb.xml):

        <Resource id="MyDataSource" type="DataSource">
            JdbcDriver oracle.jdbc.driver.OracleDriver
            JdbcUrl ${oracle.jdbc}
            UserName ${oracle.user}
            Password ${oracle.password}
            JtaManaged true
            InitialSize 5
            MaxActive 30
            ValidationQuery SELECT 1 FROM DUAL
            TestOnBorrow true
        </Resource>
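
    One pool-level knob worth trying before swapping DBCP out (a hedged guess, and it assumes OpenEJB forwards these properties to DBCP the same way it does InitialSize and MaxActive): commons-pool's maxIdle defaults to 8, so under load any connection beyond the eighth is physically closed on return and reopened later, all while holding the pool's monitor. Aligning MaxIdle with MaxActive, and reconsidering per-borrow validation, might be expressed as:

        <Resource id="MyDataSource" type="DataSource">
            ...
            InitialSize 5
            MaxActive 30
            MaxIdle 30                <!-- assumption: avoids close/reopen churn on return -->
            TestOnBorrow false        <!-- avoids a validation round-trip per checkout -->
            ValidationQuery SELECT 1 FROM DUAL
        </Resource>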

    Read the article

  • When *not* to use prepared statements?

    - by Ben Blank
    I'm re-engineering a PHP-driven web site which uses a minimal database. The original version used "pseudo-prepared-statements" (PHP functions which did quoting and parameter replacement) to prevent injection attacks and to separate database logic from page logic. It seemed natural to replace these ad-hoc functions with an object which uses PDO and real prepared statements, but after doing my reading on them, I'm not so sure. PDO still seems like a great idea, but one of the primary selling points of prepared statements is being able to reuse them... which I never will. Here's my setup:

        - The statements are all trivially simple. Most are in the form SELECT foo,bar FROM baz WHERE quux = ? ORDER BY bar LIMIT 1. The most complex statement in the lot is simply three such selects joined together with UNION ALLs.
        - Each page hit executes at most one statement and executes it only once.
        - I'm in a hosted environment and therefore leery of slamming their servers by doing any "stress tests" personally.

    Given that using prepared statements will, at minimum, double the number of database round-trips I'm making, am I better off avoiding them? Can I use PDO::MYSQL_ATTR_DIRECT_QUERY to avoid the overhead of multiple database trips while retaining the benefit of parametrization and injection defense? Or do the binary calls used by the prepared statement API perform well enough compared to executing non-prepared queries that I shouldn't worry about it?

    EDIT: Thanks for all the good advice, folks. This is one where I wish I could mark more than one answer as "accepted": lots of different perspectives. Ultimately, though, I have to give rick his due; without his answer I would have blissfully gone off and done the completely Wrong Thing even after following everyone's advice. :-) Emulated prepared statements it is!
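
    For reference, a sketch of the emulated-prepares setup alluded to in the edit: PDO substitutes the parameters client-side, so the prepare costs no extra round-trip while parametrization and injection defense are kept.

        $pdo = new PDO($dsn, $user, $password, array(
            PDO::ATTR_EMULATE_PREPARES => true,                 // client-side substitution
            PDO::ATTR_ERRMODE          => PDO::ERRMODE_EXCEPTION,
        ));
        $stmt = $pdo->prepare('SELECT foo, bar FROM baz WHERE quux = ? ORDER BY bar LIMIT 1');
        $stmt->execute(array($quux));
        $row = $stmt->fetch(PDO::FETCH_ASSOC);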

    Read the article

  • Is encodeURIComponent really useful?

    - by Marco Demaio
    Something I still don't understand when performing an HTTP GET request to the server is the advantage of using the JS function encodeURIComponent to encode each component of the query string. Doing some tests, I saw that the server (using PHP) gets the values of the GET request properly even if I don't use encodeURIComponent. Obviously I still need to encode, at the client level, the special characters & ? = / :, otherwise a GET value like "peace&love=virtue" would be considered a new key/value pair of the request instead of a single value. But why does encodeURIComponent also encode many other characters, like 'è' for example, which is translated into %C3%A8 and must be decoded on a PHP server using the utf8_decode function? By using encodeURIComponent, all values of the GET request are UTF-8 encoded, therefore when getting them in PHP I have to call utf8_decode on each $_GET value, which is quite annoying. Why can't we just encode only the & ? = / : characters?

    See also: http://stackoverflow.com/questions/2607946/js-encodeuricomponent-result-different-from-the-one-created-by-form

    It shows that encodeURIComponent does not even encode consistently with the browser, because a simple browser form GET encodes characters like '€' in a different way. So I still wonder: what is this encodeURIComponent for?
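
    For concreteness, what the function actually produces. URLs can only carry a limited ASCII repertoire, so non-ASCII characters have to travel as percent-encoded bytes, and encodeURIComponent uses UTF-8 for that; the PHP-side utf8_decode is only needed when the receiving script assumes ISO-8859-1 rather than UTF-8.

        encodeURIComponent('peace&love=virtue');  // "peace%26love%3Dvirtue"
        encodeURIComponent('è');                  // "%C3%A8" (the UTF-8 bytes of 'è')
        decodeURIComponent('%C3%A8');             // "è" again, symmetric on the client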

    Read the article

  • Unit Testing Error - The unit test adapter failed to connect to the data source or to read the data

    - by michael.lukatchik
    I'm using VSTS 2008 and I've set up a unit test project. In it, I have a test class with a method that does a simple assertion. I'm using an Excel 2007 spreadsheet as my data source. My test method looks like this:

        [DataSource("System.Data.Odbc",
            "Dsn=Excel Files;dbq=|DataDirectory|\\MyTestData.xlsx;defaultdir=C:\\TestData;driverid=1046;maxbuffersize=2048;pagetimeout=5",
            "Sheet1", DataAccessMethod.Sequential)]
        [DeploymentItem("MyTestData.xlsx")]
        [TestMethod()]
        public void State_Value_Is_Set()
        {
            string expected = "MD";
            string actual = TestContext.DataRow["State"] as string;
            Assert.AreEqual(expected, actual);
        }

    As indicated in the method's attributes, my Excel spreadsheet is on my local C: drive. In it, the sheet where all of my data is located is named "Sheet1". I've copied the spreadsheet into my project, set its Build Action to "Content" and its Copy to Output Directory to "Copy if Newer". When trying to run this simple unit test, I receive the following error:

        The unit test adapter failed to connect to the data source or to read the data. For more information on troubleshooting this error, see "Troubleshooting Data-Driven Unit Tests" (http://go.microsoft.com/fwlink/?LinkId=62412) in the MSDN Library. Error details: ERROR [42S02] [Microsoft][ODBC Excel Driver] The Microsoft Office Access database engine could not find the object 'Sheet1'. Make sure the object exists and that you spell its name and the path name correctly.

    I've verified that the sheet name is spelled correctly (i.e. Sheet1) and that my data sources are set correctly. Web searches haven't turned up much at all, and I'm totally stumped. All help or input is appreciated!
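
    One hedged suggestion, based on how the ODBC Excel driver names worksheet objects: address the sheet with a trailing dollar sign, i.e. "Sheet1$", since the driver exposes each worksheet under that suffixed name.

        [DataSource("System.Data.Odbc",
            "Dsn=Excel Files;dbq=|DataDirectory|\\MyTestData.xlsx;defaultdir=C:\\TestData;driverid=1046;maxbuffersize=2048;pagetimeout=5",
            "Sheet1$", DataAccessMethod.Sequential)]   // note the $ suffix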

    Read the article

  • Can Castle.Windsor do automatic resolution of concrete types

    - by Anthony
    We are evaluating IoC containers for C# projects, and both Unity and Castle Windsor are standing out. One thing that I like about Unity (NInject and StructureMap also do this) is that types where it is obvious how to construct them do not have to be registered with the IoC container. Is there a way to do this in Castle Windsor? Am I being fair to Windsor in saying that it does not do this? Is there a design reason to deliberately not do this, is it an oversight, or is it just not seen as important or useful? I am aware of container.Register(AllTypes...) in Windsor, but that's not quite the same thing: it's not entirely automatic, and it's very broad. To illustrate the point, here are two NUnit tests doing the same thing via Unity and Castle Windsor. The Windsor one fails:

        namespace SimpleIocDemo
        {
            using NUnit.Framework;
            using Castle.Windsor;
            using Microsoft.Practices.Unity;

            public interface ISomeService
            {
                string DoSomething();
            }

            public class ServiceImplementation : ISomeService
            {
                public string DoSomething() { return "Hello"; }
            }

            public class RootObject
            {
                public ISomeService SomeService { get; private set; }

                public RootObject(ISomeService service)
                {
                    SomeService = service;
                }
            }

            [TestFixture]
            public class IocTests
            {
                [Test]
                public void UnityResolveTest()
                {
                    UnityContainer container = new UnityContainer();
                    container.RegisterType<ISomeService, ServiceImplementation>();

                    // RootObject needs no registration in Unity
                    RootObject rootObject = container.Resolve<RootObject>();
                    Assert.AreEqual("Hello", rootObject.SomeService.DoSomething());
                }

                [Test]
                public void WindsorResolveTest()
                {
                    WindsorContainer container = new WindsorContainer();
                    container.AddComponent<ISomeService, ServiceImplementation>();

                    // Fails with "Castle.MicroKernel.ComponentNotFoundException:
                    // No component for supporting the service SimpleIocDemo.RootObject was found".
                    // I could add
                    //     container.AddComponent<RootObject>();
                    // but that approach does not scale.
                    RootObject rootObject = container.Resolve<RootObject>();
                    Assert.AreEqual("Hello", rootObject.SomeService.DoSomething());
                }
            }
        }
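
    For what it's worth, a hedged sketch of how Windsor 2.1+ can approximate this with a lazy component loader; the exact interface has shifted between releases, so treat this as an outline rather than version-exact code:

        using System;
        using Castle.MicroKernel;
        using Castle.MicroKernel.Registration;

        // Registers unknown concrete classes on the fly, the first time
        // someone tries to resolve them.
        public class ConcreteTypeLoader : ILazyComponentLoader
        {
            public IRegistration Load(string key, Type service)
            {
                if (service != null && service.IsClass && !service.IsAbstract)
                    return Component.For(service);
                return null;    // let Windsor fail as usual for interfaces
            }
        }

        // container.Register(Component.For<ILazyComponentLoader>()
        //                             .ImplementedBy<ConcreteTypeLoader>());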

    Read the article

  • Calling webservice via server causes java.net.MalformedURLException: no protocol

    - by Thomas
    I am writing a web service which parses an XML file. In the client, I read the whole content of the XML into a String, then I give it to the web service. If I run my web service with main as a Java application (for tests), there is no problem and no error messages. However, when I try to call it via the server, I get the following error:

        java.net.MalformedURLException: no protocol

    I use the same XML file and the same code (without main), and I just cannot figure out what the cause of the error can be. Here is my code:

        DOMParser parser = new DOMParser();
        try {
            parser.setFeature("http://xml.org/sax/features/validation", true);
            parser.setFeature("http://apache.org/xml/features/validation/schema", true);
            parser.setFeature("http://apache.org/xml/features/validation/dynamic", true);
            parser.setErrorHandler(new myErrorHandler());
            parser.parse(new InputSource(new StringReader(xmlFile)));
            document = parser.getDocument();

    xmlFile is constructed in the client like this:

        String myFile = "C:/test.xml";
        File file = new File(myFile);
        String myString = "";
        FileInputStream fis = new FileInputStream(file);
        BufferedInputStream bis = new BufferedInputStream(fis);
        DataInputStream dis = new DataInputStream(bis);
        while (dis.available() != 0) {
            myString = myString + dis.readLine();
        }
        fis.close();
        bis.close();
        dis.close();

    Any suggestions will be appreciated!
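
    A hedged guess at the cause: parsed from a StringReader, the InputSource has no system ID, so any relative URI inside the document (a DTD reference, or an xsi:schemaLocation consulted during schema validation) has no base to resolve against, which can surface exactly as "no protocol". Supplying a base URI is cheap to try:

        InputSource source = new InputSource(new StringReader(xmlFile));
        source.setSystemId("file:///C:/");   // base for resolving relative references
        parser.parse(source);

    Separately, the client's available()/readLine() loop silently drops line terminators and charset information; reading through an InputStreamReader with an explicit encoding would be more robust.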

    Read the article

  • How reliable is Verify() in Moq?

    - by matthewayinde
    I'm new to unit testing and ASP.NET MVC. I've been trying to get my head into both using Steve Sanderson's "Pro ASP.NET MVC Framework". In the book there is this piece of code:

        public class AdminController : Controller
        {
            ...
            [AcceptVerbs(HttpVerbs.Post)]
            public ActionResult Edit(Product product, HttpPostedFileBase image)
            {
                ...
                productsRepository.SaveProduct(product);
                TempData["message"] = product.Name + " has been saved.";
                return RedirectToAction("Index");
            }
        }

    That he tests like so:

        [Test]
        public void Edit_Action_Saves_Product_To_Repository_And_Redirects_To_Index()
        {
            // Arrange
            AdminController controller = new AdminController(mockRepos.Object);
            Product newProduct = new Product();

            // Act
            var result = (RedirectToRouteResult)controller.Edit(newProduct, null);

            // Assert: Saved product to repository and redirected
            mockRepos.Verify(x => x.SaveProduct(newProduct));
            Assert.AreEqual("Index", result.RouteValues["action"]);
        }

    THE TEST PASSES. So I intentionally corrupt the code by adding "productsRepository.DeleteProduct(product);" after the "SaveProduct(product);", as in:

        ...
        productsRepository.SaveProduct(product);
        productsRepository.DeleteProduct(product);
        ...

    THE TEST PASSES. (I.e. it condones a calamitous [hypnosis + intellisense]-induced typo. :) ) Could this test be written better? Or is there something I should know? Thanks a lot.
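
    A sketch of one way to tighten it (Moq 3+ API; the repository interface name is taken from the book's convention): construct the mock with MockBehavior.Strict so any call without a matching Setup, such as the stray DeleteProduct, fails the test, and use Times.Once() to pin down the call count.

        var mockRepos = new Mock<IProductsRepository>(MockBehavior.Strict);
        mockRepos.Setup(x => x.SaveProduct(It.IsAny<Product>()));

        // ... act as before ...

        // Fails if SaveProduct ran more or less than once; the strict mock
        // already failed the test when DeleteProduct was invoked un-set-up.
        mockRepos.Verify(x => x.SaveProduct(newProduct), Times.Once());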

    Read the article

  • How to test that an ASP.NET MVC route redirects to another site?

    - by Matt Lacey
    Due to a printing error in some promotional material, I have a site that is receiving a lot of requests which should be for one site arriving at another. I.e., the valid sites are http://site1.com/abc and http://site2.com/def, but people are being told to go to http://site1.com/def. I have control over site1 but not site2. site1 contains logic, in an action filter, for checking that the first part of the route is valid, like this:

        public override void OnActionExecuting(ActionExecutingContext filterContext)
        {
            base.OnActionExecuting(filterContext);

            if ((!filterContext.ActionParameters.ContainsKey("id"))
                || (!manager.WhiteLabelExists(filterContext.ActionParameters["id"].ToString())))
            {
                if (filterContext.ActionParameters["id"].ToString().ToLowerInvariant().Equals("def"))
                {
                    filterContext.HttpContext.Response.Redirect("http://site2.com/def", true);
                }

                filterContext.Result = new ViewResult { ViewName = "NoWhiteLabel" };
                filterContext.HttpContext.Response.Clear();
            }
        }

    I'm not sure how to test the redirection to the other site, though. I already have tests for redirecting to "NoWhiteLabel" using the MvcContrib Test Helpers, but these aren't able to handle (as far as I can see) this situation. How do I test the redirection to another site?
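
    A sketch of one way to make this testable (not the only option): have the filter set a RedirectResult instead of calling Response.Redirect, so the test can inspect filterContext.Result directly, just as the existing "NoWhiteLabel" tests do.

        // In the filter, replacing the Response.Redirect call:
        filterContext.Result = new RedirectResult("http://site2.com/def");
        return;

        // In the test, after invoking OnActionExecuting on a faked context:
        var redirect = filterContext.Result as RedirectResult;
        Assert.IsNotNull(redirect);
        Assert.AreEqual("http://site2.com/def", redirect.Url);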

    Read the article

  • PHPUnit - multiple stubs of same class

    - by keithjgrant
    I'm building unit tests for class Foo, and I'm fairly new to unit testing. A key component of my class is an instance of BarCollection which contains a number of Bar objects. One method in Foo iterates through the collection and calls a couple of methods on each Bar object in the collection. I want to use stub objects to generate a series of responses for my test class. How do I make the Bar stub class return different values as I iterate? I'm trying to do something along these lines:

        $stubs = array();
        foreach ($array as $value) {
            $barStub->expects($this->any())
                    ->method('GetValue')
                    ->will($this->returnValue($value));
            $stubs[] = $barStub;
        }
        // populate stubs into `Foo`
        // assert results from `Foo->someMethod()`

    So Foo->someMethod() will produce data based on the results it receives from the Bar objects. But this gives me the following error whenever the array is longer than one:

        There was 1 failure:
        1) testMyTest(FooTest) with data set #2 (array(0.5, 0.5))
        Expectation failed for method name is equal to <string:GetValue> when invoked zero or more times.
        Mocked method does not exist.
        /usr/share/php/PHPUnit/Framework/MockObject/Mock.php(193) : eval()'d code:25

    One thought I had was to use ->will($this->returnCallback()) to invoke a callback method, but I don't know how to indicate to the callback which Bar object is making the call (and consequently what response to give). Another idea is to use the onConsecutiveCalls() method, or something like it, to tell my stub to return 1 the first time, 2 the second time, etc., but I'm not sure exactly how to do this. I'm also concerned that if my class ever does anything other than ordered iteration on the collection, I won't have a way to test it.
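
    A hedged reading of the error: if $barStub is created once outside the loop, each iteration re-stubs the same mock object rather than configuring a new one. Creating a fresh stub per value sidesteps both the error and the ordering concern, since each stub carries its own fixed return value (PHPUnit 3.x-era API assumed):

        $stubs = array();
        foreach ($array as $value) {
            $barStub = $this->getMock('Bar');   // a new stub for each value
            $barStub->expects($this->any())
                    ->method('GetValue')
                    ->will($this->returnValue($value));
            $stubs[] = $barStub;
        }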

    Read the article

  • PubSubHubBub Hubs

    - by PartlyCloudy
    Hi, I'm currently building a live web application based upon the PubSubHubbub protocol. However, I have encountered several issues. First, I'm in search of a hub application that I can run on my own server. There are several applications, but most of them are not mature yet, or they don't support the 0.3 spec. The official Google hub runs on the Google App Engine and can even be executed locally. Unfortunately, "Tasks will not run automatically. Push the 'Run' button to execute each task." This behaviour is useful for debugging and understanding the workflow, but in some live tests it would be nice not to have to invoke all tasks manually. Is there a way to tweak the local App Engine so that it runs tasks automatically? Next, I have a question concerning the spec itself. The Google reference implementation provides the initial publish method bound to the endpoint URI + /publish, but this is not reflected in the spec. So, are there any mature hubs that can be run locally for debugging? Or are there ways to configure the official Google App Engine hub to run locally and execute tasks directly? Thanks in advance

    Read the article

  • Why is execution-time method resolution faster than compile-time resolution?

    - by Felix
    At school, we learned about virtual functions in C++, and how they are resolved (or found, or matched; I don't know what the terminology is, we're not studying in English) at execution time instead of compile time. The teacher also told us that compile-time resolution is much faster than execution-time resolution (and it would make sense for it to be so). However, a quick experiment would suggest otherwise. I've built this small program:

        #include <iostream>
        #include <limits.h>
        using namespace std;

        class A
        {
        public:
            void f()
            {
                // do nothing
            }
        };

        class B: public A
        {
        public:
            void f()
            {
                // do nothing
            }
        };

        int main()
        {
            unsigned int i;
            A *a = new B;
            for (i = 0; i < UINT_MAX; i++)
                a->f();
            return 0;
        }

    where I made A::f() once normal, once virtual. Here are my results:

        $ time ./normal
        real 0m25.834s  user 0m25.742s  sys 0m0.000s
        $ time ./virtual
        real 0m24.630s  user 0m24.472s  sys 0m0.003s
        $ time ./normal
        real 0m25.860s  user 0m25.735s  sys 0m0.007s
        $ time ./virtual
        real 0m24.514s  user 0m24.475s  sys 0m0.000s
        $ time ./normal
        real 0m26.022s  user 0m25.795s  sys 0m0.013s
        $ time ./virtual
        real 0m24.503s  user 0m24.468s  sys 0m0.000s

    There seems to be a steady ~1 second difference in favour of the virtual version. Why is this? Relevant or not: dual-core Pentium @ 2.80GHz, no extra applications running between the two tests, Arch Linux with gcc 4.5.0, compiling normally, like:

        $ g++ test.cpp -o normal

    Also, -Wall doesn't spit out any warnings, either.
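
    A hedged observation, not a definitive diagnosis: without optimization both versions pay similar call overhead, so the gap is likely dominated by code layout and measurement noise rather than dispatch cost. With optimization the comparison changes character, since a non-virtual call to an empty, inlineable function can be removed entirely, while a virtual call through a base pointer generally cannot. Worth re-running as:

        $ g++ -O2 test.cpp -o normal     # non-virtual: the loop body is likely optimized away
        $ g++ -O2 test.cpp -o virtual    # virtual: the indirect calls typically remain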

    Read the article

  • iPhone OS: Strategies for high density image work

    - by Jasconius
    I have a project coming around the bend this summer that is going to involve a potentially extremely high volume of image data for display. We are talking hundreds of 640x480-ish images in a given application session (scaled to a smaller resolution when displayed), and handfuls of very large (1280x1024 or higher) images at a time. I've already done some preliminary work, and I've found that the typical 640x480-ish image is just a shade under 1MB in memory when placed into a UIImageView and displayed, but the very large images can be a whopping 5+ MB in some cases. This project is actually targeted at the iPad, which, in my Instruments tests, seems to cap out at about 80-100MB of addressable physical memory. Details aside, I need to start thinking of how to move huge volumes of image data between virtual and physical memory while preserving the fluidity and responsiveness of the application, which will be high visibility. I'm probably on the higher end of intermediate at Objective-C, so I am looking for some solid articles and advice on the following:

        1. Responsible management of UIImage and UIImageView in the name of conserving physical RAM
        2. Merits of using CGImage over UIImage, particularly for the huge images, and whether there will be any performance gain
        3. Anything dealing with memory paging, particularly as it pertains to images

    I will epilogue by saying that the numbers I have above may be off by about 10 or 15%. Images may or may not end up being bundled into the actual app itself, as opposed to being loaded in from an external server.
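
    One standard technique relevant to the first two points (a sketch using ImageIO, public as of iOS 4; the 640-pixel cap is an assumed value): decode a downsampled thumbnail rather than the full bitmap, so a 1280x1024 source doesn't cost 5+ MB of resident memory when it is displayed small.

        #import <ImageIO/ImageIO.h>

        NSURL *url = [NSURL fileURLWithPath:path];
        CGImageSourceRef src = CGImageSourceCreateWithURL((CFURLRef)url, NULL);
        NSDictionary *opts = [NSDictionary dictionaryWithObjectsAndKeys:
            (id)kCFBooleanTrue, (id)kCGImageSourceCreateThumbnailFromImageAlways,
            [NSNumber numberWithInt:640], (id)kCGImageSourceThumbnailMaxPixelSize,
            nil];
        // Decodes at most 640px on the long side, instead of the full bitmap.
        CGImageRef thumb = CGImageSourceCreateThumbnailAtIndex(src, 0, (CFDictionaryRef)opts);
        UIImage *image = [UIImage imageWithCGImage:thumb];
        CGImageRelease(thumb);
        CFRelease(src);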

    Read the article

  • Writing robust and "modern" Fortran code

    - by Blklight
    In some scientific environments you often cannot go without FORTRAN, as most of the developers only know that idiom, and there is a lot of legacy code and related experience. And frankly, there are not many other cross-platform options for high-performance programming (C++ would do the task, but the syntax, zero-based arrays, and pointers are too much for most engineers ;-) ). I'm a C++ guy, but I'm stuck with some F90 projects. So, let's assume a new project must use FORTRAN (F90), but I want to build the most modern software architecture out of it, while staying compatible with most "recent" compilers (Intel ifort, but also Sun/HP/IBM compilers). So I'm thinking of imposing:

        - global variables forbidden, no gotos, no jump labels, "implicit none", etc.
        - "object-oriented programming" (modules with datatypes + related subroutines)
        - modular/reusable functions, well documented, reusable libraries
        - assertions/preconditions/invariants (implemented using preprocessor statements)
        - unit tests for all (most) subroutines and "objects"
        - an intense "debug mode" (#ifdef DEBUG) with more checks and all possible Intel compiler checks enabled (array bounds, subroutine interfaces, etc.)
        - a uniform and enforced legible coding style, using code-processing tools
        - C stubs/wrappers for libpthread, libDL (and eventually GPU kernels, etc.)
        - C/C++ implementations of utility functions (strings, file operations, sockets, memory alloc/dealloc reference counting for debug mode, etc.)

    (This may all seem like "evident" modern programming practice, but in a legacy FORTRAN world most of these are big changes to the typical programmer workflow.) The goal of all this is to have trustworthy, maintainable and modular code, whereas in typical FORTRAN modularity is often not a primary goal, and code is trustworthy only if the original developer was very clever and the code has not been changed since! (I'm only half joking here.) I searched around for references about object-oriented FORTRAN and programming by contract (assertions/preconditions/etc.), and found only ugly and outdated documents, syntaxes and papers written by people with no large-scale project involvement, and dead projects. Any good URLs, advice, or reference papers/books on the subject?
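
    As a concrete sketch of the second bullet, the module-plus-derived-type style in plain F90 (hypothetical names): the module plays the role of a class, the derived type holds the state, and the public procedures are the methods.

        module particle_mod
          implicit none
          private
          public :: particle_t, particle_init, particle_energy

          type particle_t
             real :: mass
             real :: velocity
          end type particle_t

        contains

          subroutine particle_init(self, mass, velocity)
            type(particle_t), intent(out) :: self
            real, intent(in) :: mass, velocity
            self%mass = mass
            self%velocity = velocity
          end subroutine particle_init

          function particle_energy(self) result(e)
            type(particle_t), intent(in) :: self
            real :: e
            e = 0.5 * self%mass * self%velocity**2
          end function particle_energy

        end module particle_mod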

    Read the article

  • How to fix this java.lang.LinkageError?

    - by Péter Török
    I am trying to configure a custom layout class for Log4J, as described in my previous post. The class uses java.util.regex.Matcher to identify potential credit card numbers in log messages. It works perfectly in unit tests (I can also programmatically configure a logger to use it and produce the expected output). However, when I try to deploy it with our app in JBoss, I get the following error:

        --- MBEANS THAT ARE THE ROOT CAUSE OF THE PROBLEM ---
        ObjectName: jboss.web.deployment:war=MyWebApp-2010_02-SNAPSHOT.war,id=476602902
        State: FAILED
        Reason: java.lang.LinkageError: java/util/regex/Matcher

    I couldn't find any info on this form of the error; typically LinkageError seems to show up with a "loader constraint violation" message instead. Technical details: we use JBoss 4.2, Java 5, Log4J 1.2.12. We deploy our app in an .ear, which contains (among others) the above-mentioned .war file, and the custom layout class in a separate jar file. We override the default settings in jboss-log4j.xml with our own log4j.properties located in a different folder, which is added to the classpath at startup and is provided via Carbon. I can only guess: are two different Matcher class versions loaded from somewhere, or is Matcher loaded by two different classloaders when it is used from the jar and the war? What does this error message mean, and how can I fix it?
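
    A hedged possibility rather than a confirmed fix: on JBoss 4.x, LinkageErrors of this kind are often cleared either by removing a stray jar in the deployment that bundles JDK or XML classes, or by scoping the EAR's class loading in META-INF/jboss-app.xml so it stops sharing classes with other deployments (the repository name below is a hypothetical placeholder):

        <jboss-app>
           <loader-repository>
              com.example:loader=MyApp.ear
              <loader-repository-config>java2ParentDelegation=false</loader-repository-config>
           </loader-repository>
        </jboss-app>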

    Read the article

  • Testing a patch to the Rails mysql adapter

    - by Sleepycat
    I wrote a little monkeypatch to the Rails MySQL adapter and want to package it up to use in my other projects. I am trying to write some tests for it, but I am still new to testing and I am not sure how to test this. Can someone help get me started? Here is the code I want to test:

        unless RAILS_ENV == 'production'
          module ActiveRecord
            module ConnectionAdapters
              class MysqlAdapter < AbstractAdapter

                def select_with_explain(sql, name = nil)
                  explanation = execute_with_disable_logging('EXPLAIN ' + sql)
                  e = explanation.all_hashes.first
                  exp = e.collect { |k, v| " | #{k}: #{v} " }.join
                  log(exp, 'Explain')
                  select_without_explain(sql, name)
                end

                # Run a query without logging.
                def execute_with_disable_logging(sql, name = nil) #:nodoc:
                  @connection.query(sql)
                rescue ActiveRecord::StatementInvalid => exception
                  if exception.message.split(":").first =~ /Packets out of order/
                    raise ActiveRecord::StatementInvalid, "'Packets out of order' error was received from the database. Please update your mysql bindings (gem install mysql) and read http://dev.mysql.com/doc/mysql/en/password-hashing.html for more information. If you're on Windows, use the Instant Rails installer to get the updated mysql bindings."
                  else
                    raise
                  end
                end

                alias_method_chain :select, :explain
              end
            end
          end
        end

    Thanks.
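
    One way to get started, sketched under assumptions: a Product model backed by a real table in the test schema, and the adapter writing through ActiveRecord::Base.logger (depending on the Rails version the adapter may cache its logger, in which case re-establishing the connection is needed). The idea is simply to run a SELECT and assert that an Explain line was logged.

        require 'test_helper'
        require 'stringio'

        class SelectWithExplainTest < ActiveSupport::TestCase
          def test_select_logs_explain_line
            log = StringIO.new
            old_logger = ActiveRecord::Base.logger
            ActiveRecord::Base.logger = Logger.new(log)

            Product.find(:first)   # triggers select_with_explain

            assert_match(/Explain/, log.string)
          ensure
            ActiveRecord::Base.logger = old_logger
          end
        end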

    Read the article

  • SSL certificate pre-fetch .NET

    - by Wil P
    I am writing a utility that would allow me to monitor the health of our websites. This consists of a series of validation tasks that I can run against a web application. One of the tests is to anticipate the expiration of a particular SSL certificate. I am looking for a way to pre-fetch the SSL certificate installed on a web site using .NET or a WINAPI, so that I can validate the expiration date of the certificate associated with a particular website. One way I could do this is to cache the certificates when they are validated in the ServicePointManager.ServerCertificateValidationCallback handler and then match them up with the configured web sites, but this seems a bit hackish. Another would be to configure the application with the certificate for each website, but I'd rather avoid this if I can, in order to minimize configuration. What would be the easiest way for me to download the SSL certificate associated with a website using .NET, so that I can inspect the information it contains and validate it?

    EDIT: To extend on the answer below, there is no need to manually create the ServicePoint prior to creating the request. It is generated on the request object as part of executing the request:

        private static string GetSSLExpiration(string url)
        {
            HttpWebRequest request = WebRequest.Create(url) as HttpWebRequest;
            using (WebResponse response = request.GetResponse())
            {
            }
            if (request.ServicePoint.Certificate != null)
            {
                return request.ServicePoint.Certificate.GetExpirationDateString();
            }
            else
            {
                return string.Empty;
            }
        }

    Read the article

  • Restful WCF Service - how to send data with illegal XML characters?

    - by Chris
    I have a RESTful WCF web service. One of my methods takes several input parameters, one of which is a string. Sometimes the data I am passing to this web method includes content with one or more "illegal characters", e.g. "&". So I replace it with &amp; before passing it to the web service, but it still throws an exception. The exception isn't visible, as the data never reaches the web service, but I know that it is this content that is causing the problem: I have done several tests sending data that doesn't contain an illegal XML character, and every time it worked, but any data containing "&" will fail. Am I not supposed to replace "&" with &amp;? Please refer to the web method below:

        [WebGet]
        public MapNode AddMapNode(string nodeText)
        {
            return new inProcessEntities().AddMapNode(nodeText);
        }

    Please help with how I can fix this. Thanks. Chris
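
    A hedged note on the likely culprit: for a [WebGet] string parameter, the escaping that matters is URL escaping, not XML escaping, so "&" should travel as %26 rather than &amp;. For example, from a .NET client (host and path are hypothetical):

        // Uri.EscapeDataString percent-encodes data for safe use in a URL component.
        string nodeText = "A & B";
        string url = "http://host/MyService/AddMapNode?nodeText="
                     + Uri.EscapeDataString(nodeText);   // yields "A%20%26%20B"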

    Read the article

  • Testing whether a constructor is in the constructor chain

    - by Delan Azabani
    I'm attempting to implement a GTK+-inspired widget toolkit for the web in JavaScript. One of the constructor chains goes:

        gtk.widget => gtk.container => gtk.bin => gtk.window

    Every gtk.widget has a showAll method which, if and only if the widget is a gtk.container or derivative (such as gtk.bin or gtk.window), will recursively show the children of that widget. Obviously, if it isn't a gtk.container or derivative, we shouldn't do anything, because the widget in question can't contain anything. For reference, here is my inheritance function; it's probably not the best, but it's a start:

        function inherit(target, parent) {
            target.prototype = new parent;
            target.prototype.constructor = target;
        }

    I know that I can check for the direct constructor like this:

        if (this.constructor === gtk.container) ...

    However, this only tests for direct construction, and not, say, whether the object is a gtk.bin or gtk.window. How can I test whether gtk.container is somewhere up in the constructor chain?
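
    A sketch of one answer: because inherit() wires up the prototype chain, the instanceof operator already walks it, all the way up.

        var w = new gtk.window();

        w instanceof gtk.window;     // true
        w instanceof gtk.container;  // true, found further up the chain
        w instanceof gtk.widget;     // true

        ({}) instanceof gtk.container;  // false, so showAll can guard its recursion:
        if (this instanceof gtk.container) { /* recurse into children */ }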

    Read the article
