Search Results

Search found 2108 results on 85 pages for 'differences'.


  • Dependency Injection for Windows Phone 7

    - by Igor Zevaka
    I was trying to use Unity 2.0 beta 2 for Silverlight in my Windows Phone 7 project and I kept getting this crash:

        Microsoft.Practices.Unity.Silverlight.dll!Microsoft.Practices.ObjectBuilder2.DynamicMethodConstructorStrategy.DynamicMethodConstructorStrategy() + 0x1f bytes
        mscorlib.dll!System.Reflection.RuntimeConstructorInfo.InternalInvoke(System.Reflection.RuntimeConstructorInfo rtci = {System.Reflection.RuntimeConstructorInfo}, System.Reflection.BindingFlags invokeAttr = Default, System.Reflection.Binder binder = null, object parameters = {object[0]}, System.Globalization.CultureInfo culture = null, bool isBinderDefault = false, System.Reflection.Assembly caller = null, bool verifyAccess = true, ref System.Threading.StackCrawlMark stackMark = LookForMyCaller)
        mscorlib.dll!System.Reflection.RuntimeConstructorInfo.InternalInvoke(object obj = null, System.Reflection.BindingFlags invokeAttr = Default, System.Reflection.Binder binder = null, object[] parameters = {object[0]}, System.Globalization.CultureInfo culture = null, ref System.Threading.StackCrawlMark stackMark = LookForMyCaller) + 0x103 bytes
        mscorlib.dll!System.Activator.InternalCreateInstance(System.Type type = {Name = "DynamicMethodConstructorStrategy" FullName = "Microsoft.Practices.ObjectBuilder2.DynamicMethodConstructorStrategy"}, bool nonPublic = false, ref System.Threading.StackCrawlMark stackMark = LookForMyCaller) + 0xf0 bytes
        mscorlib.dll!System.Activator.CreateInstance() + 0xc bytes
        Microsoft.Practices.Unity.Silverlight.dll!Microsoft.Practices.ObjectBuilder2.StagedStrategyChain.AddNew(Microsoft.Practices.Unity.ObjectBuilder.UnityBuildStage stage = Creation) + 0x1d bytes
        Microsoft.Practices.Unity.Silverlight.dll!Microsoft.Practices.Unity.UnityDefaultStrategiesExtension.Initialize() + 0x6c bytes
        Microsoft.Practices.Unity.Silverlight.dll!Microsoft.Practices.Unity.UnityContainerExtension.InitializeExtension(Microsoft.Practices.Unity.ExtensionContext context = {Microsoft.Practices.Unity.UnityContainer.ExtensionContextImpl}) + 0x31 bytes
        Microsoft.Practices.Unity.Silverlight.dll!Microsoft.Practices.Unity.UnityContainer.AddExtension(Microsoft.Practices.Unity.UnityContainerExtension extension = {Microsoft.Practices.Unity.UnityDefaultStrategiesExtension}) + 0x1a bytes
        Microsoft.Practices.Unity.Silverlight.dll!Microsoft.Practices.Unity.UnityContainer.UnityContainer() + 0xf bytes

    Thinking I could resolve it, I tried a few things, but to no avail. It turns out this is a rather fundamental problem: my assumption that Windows Phone 7 is Silverlight 3 plus some other stuff is wrong. This page describes the differences between mobile Silverlight and Silverlight 3. Of particular interest is this:

        The System.Reflection.Emit namespace is not supported in Silverlight for Windows Phone.

    This is precisely why Unity is crashing on the phone: the DynamicMethodConstructorStrategy class uses System.Reflection.Emit quite extensively. So the question is, what alternative to Unity is there for Windows Phone 7?
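    Since the crash is confined to the Reflection.Emit-based construction path, one workaround is a container that sticks to plain reflection, which Silverlight for Windows Phone does support. A minimal sketch of that idea, limited to constructor injection (TinyContainer and its API are an illustration, not part of Unity or any shipped library):

        using System;
        using System.Collections.Generic;
        using System.Linq;
        using System.Reflection;

        public class TinyContainer
        {
            private readonly Dictionary<Type, Type> _map = new Dictionary<Type, Type>();

            // Map an abstraction to a concrete implementation.
            public void Register<TFrom, TTo>() where TTo : TFrom
            {
                _map[typeof(TFrom)] = typeof(TTo);
            }

            public T Resolve<T>()
            {
                return (T)Resolve(typeof(T));
            }

            private object Resolve(Type type)
            {
                Type concrete;
                if (!_map.TryGetValue(type, out concrete))
                    concrete = type; // not registered: assume the requested type is concrete

                // Take the greediest public constructor and satisfy its
                // parameters recursively - no Reflection.Emit anywhere.
                ConstructorInfo ctor = concrete.GetConstructors()
                    .OrderByDescending(c => c.GetParameters().Length)
                    .First();
                object[] args = ctor.GetParameters()
                    .Select(p => Resolve(p.ParameterType))
                    .ToArray();
                return ctor.Invoke(args);
            }
        }

    Reflection-based invocation is slower than Unity's emitted code, but on the phone it at least runs.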

    Read the article

  • Why is Assembly.GetCustomAttributes suddenly throwing TypeLoadException on build machine with Silverlight 4?

    - by andrej351
    A short while back I had to display the current version of our Silverlight app. After some googling, the following code gave me the desired result:

        var fileVersionAttributes = typeof(MyClass).Assembly
            .GetCustomAttributes(typeof(AssemblyFileVersionAttribute), false)
            as AssemblyFileVersionAttribute[];
        var version = fileVersionAttributes[0].Version;

    This worked a treat in our .NET 3.5 Silverlight 3 environment. However, we recently upgraded to .NET 4 and Silverlight 4. We just finished getting our build machine working and found that the unit test for this code was throwing the following exception:

        Exception Message: System.TypeLoadException: Error 0x80131522. Debugging resource strings are unavailable. See http://go.microsoft.com/fwlink/?linkid=106663&Version=3.0.50106.0&File=mscorrc.dll&Key=0x80131522
           at System.ModuleHandle.ResolveType(ModuleHandle module, Int32 typeToken, RuntimeTypeHandle* typeInstArgs, Int32 typeInstCount, RuntimeTypeHandle* methodInstArgs, Int32 methodInstCount)
           at System.ModuleHandle.ResolveTypeHandle(Int32 typeToken, RuntimeTypeHandle[] typeInstantiationContext, RuntimeTypeHandle[] methodInstantiationContext)
           at System.Reflection.Module.ResolveType(Int32 metadataToken, Type[] genericTypeArguments, Type[] genericMethodArguments)
           at System.Reflection.CustomAttribute.FilterCustomAttributeRecord(CustomAttributeRecord caRecord, MetadataImport scope, Assembly& lastAptcaOkAssembly, Module decoratedModule, MetadataToken decoratedToken, RuntimeType attributeFilterType, Boolean mustBeInheritable, Object[] attributes, IList derivedAttributes, RuntimeType& attributeType, RuntimeMethodHandle& ctor, Boolean& ctorHasParameters, Boolean& isVarArg)
           at System.Reflection.CustomAttribute.GetCustomAttributes(Module decoratedModule, Int32 decoratedMetadataToken, Int32 pcaCount, RuntimeType attributeFilterType, Boolean mustBeInheritable, IList derivedAttributes, Boolean isDecoratedTargetSecurityTransparent)
           at System.Reflection.CustomAttribute.GetCustomAttributes(Module decoratedModule, Int32 decoratedMetadataToken, Int32 pcaCount, RuntimeType attributeFilterType, Boolean isDecoratedTargetSecurityTransparent)
           at System.Reflection.CustomAttribute.GetCustomAttributes(Assembly assembly, Type caType)
           at System.Reflection.Assembly.GetCustomAttributes(Type attributeType, Boolean inherit)
           at MyCode.VersionTest()

    I have never seen this exception before, and the link in it points nowhere. It is only thrown on the build machine and not on my development box, so I'm going through a process of trial and error to find any differences between the two. Any idea why this might be happening? Cheers, Andrej.

    Read the article

  • dateByAddingComponents problem and getting difference of dates with NSDateComponents problem!

    - by Rob
    I am having problems with adding values to dates and also with getting differences between dates: the dates and components calculated are incorrect. For adding, if I add 1.5 months I only get 1 month, but if I add any whole number (i.e. 1, 2, 3, etc.) it calculates correctly.

        Float32 addAmount = 1.5;
        NSDateComponents *components = [[[NSDateComponents alloc] init] autorelease];
        [components setMonth:addAmount];
        NSCalendar *gregorian = [[[NSCalendar alloc] initWithCalendarIdentifier:NSGregorianCalendar] autorelease];
        [gregorian setTimeZone:[NSTimeZone timeZoneWithName:@"UTC"]];
        NSDate *newDate2 = [gregorian dateByAddingComponents:components toDate:Date1 options:0];

    Now for the difference: if I have a date that was created by adding exactly one year (almost the same code as above), the addition is correct, but when the difference is calculated I get 0 years, 11 months and 30 days.

        NSDate *startDate = Date1;
        NSDate *endDate = Date2;
        NSCalendar *gregorian = [[NSCalendar alloc] initWithCalendarIdentifier:NSGregorianCalendar];
        [gregorian setTimeZone:[NSTimeZone timeZoneWithName:@"UTC"]];
        NSUInteger unitFlags = NSYearCalendarUnit | NSMonthCalendarUnit | NSDayCalendarUnit;
        NSDateComponents *components = [gregorian components:unitFlags fromDate:startDate toDate:endDate options:0];
        NSInteger years = [components year];
        NSInteger months = [components month];
        NSInteger days = [components day];

    What am I doing wrong? I have also added the kCFCalendarComponentsWrap constant in the options for both the adding and difference calls, but it made no difference. Thanks
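    One likely culprit for the first snippet: -[NSDateComponents setMonth:] takes an NSInteger, so the Float32 value 1.5 is silently truncated to 1 before the calendar ever sees it. A hedged sketch of one workaround - splitting the fractional month into whole months plus days (the 30-day approximation is an assumption, not Apple's; Date1 is the date from the question):

        Float32 addAmount = 1.5;
        NSInteger wholeMonths = (NSInteger)addAmount;
        // Approximate the fractional remainder as days (assumes ~30-day months).
        NSInteger extraDays = (NSInteger)((addAmount - wholeMonths) * 30);

        NSDateComponents *components = [[[NSDateComponents alloc] init] autorelease];
        [components setMonth:wholeMonths];
        [components setDay:extraDays];

        NSCalendar *gregorian = [[[NSCalendar alloc] initWithCalendarIdentifier:NSGregorianCalendar] autorelease];
        [gregorian setTimeZone:[NSTimeZone timeZoneWithName:@"UTC"]];
        NSDate *newDate2 = [gregorian dateByAddingComponents:components toDate:Date1 options:0];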

    Read the article

  • Gridview empty when SelectedIndexChanged called

    - by xan
    I have a GridView which is bound dynamically to a database query. The user enters some search text into a text field and clicks search; the code-behind creates the appropriate database query using LINQ (it searches a table based on the string and returns a limited set of the columns). It then sets the GridView data source to the query and calls DataBind():

        protected void btnSearch_Click(object sender, EventArgs e)
        {
            var query = from record in DB.Table
                        where record.Name.Contains(txtSearch.Text) // extra string checking etc. removed
                        select new { record.ID, record.Name, record.Date };
            gvResults.DataSource = query;
            gvResults.DataBind();
        }

    This works fine. When a user selects a row in the grid, the SelectedIndexChanged event handler gets the ID from the selected row, queries the full record from the DB and then populates a set of editor/details fields with the record's full details:

        protected void gvResults_SelectedIndexChanged(object sender, EventArgs e)
        {
            int id = int.Parse(gvResults.SelectedRow.Cells[1].Text);
            DisplayDetails(id);
        }

    This works fine on my local machine where I'm developing the code. On the production server, however, the handler is called successfully but the row and column count on gvResults, the GridView, is 0 - the table is empty. The GridView's ViewState is enabled and I can't see any obvious differences. Have I made some naive assumptions, or am I relying on something that is likely to be configured differently in production? Locally I am running an empty ASP.NET web project in VS2008 to make development quicker; the production server is running the Sitecore CMS, so it is configured rather differently. Any thoughts or suggestions would be most welcome. Thanks in advance!
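    One thing worth ruling out (an assumption, not a diagnosis): a CMS host page can disable ViewState somewhere up the control tree, in which case the grid forgets its rows between the search postback and the row-select postback. Rebinding on every postback sidesteps that; a sketch, where BindGrid is a hypothetical helper extracted from the click handler:

        protected void Page_Load(object sender, EventArgs e)
        {
            // Rebind on postbacks so SelectedIndexChanged always sees rows,
            // even if ViewState is suppressed by the hosting environment.
            if (IsPostBack && !string.IsNullOrEmpty(txtSearch.Text))
            {
                BindGrid(txtSearch.Text);
            }
        }

        private void BindGrid(string search)
        {
            var query = from record in DB.Table
                        where record.Name.Contains(search)
                        select new { record.ID, record.Name, record.Date };
            gvResults.DataSource = query;
            gvResults.DataBind();
        }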

    Read the article

  • How Do You Get the bufspec While Using Vimdiff Through Git

    - by Elizabeth Buckwalter
    I've read "Vimdiff" and "Viewing differences with Vimdiff", and have done various Google searches for things like "vimdiff multiple", "vimdiff git", "vimdiff commands", etc. When using do or diffg I get the error "More than two buffers in diff mode, don't know which one to use". When using diffg v:fname_in I get "No matching buffer for v:fname_in".

    From the vimdiff documentation:

        :[range]diffg[et] [bufspec]
                Modify the current buffer to undo difference with another
                buffer. If [bufspec] is given, that buffer is used. If
                [bufspec] refers to the current buffer then nothing happens.
                Otherwise this only works if there is one other buffer in
                diff mode.

    and more:

        When 'diffexpr' is not empty, Vim evaluates it to obtain a diff file
        in the format mentioned. These variables are set to the file names
        used:

            v:fname_in     original file
            v:fname_new    new version of the same file
            v:fname_out    resulting diff file

    So I need to get the name of the bufspec, but the default variables (fname_in, fname_new, and fname_out) aren't set. I ran the command git mergetool on a Linux box through a terminal.

    [Edit] A partial solution that bred more questions: I used the "filename" at the bottom of the buffer. It's only half an answer, because occasionally I get a "file does not exist" error. I believe it's consistently the remote version of the file that "does not exist"; I suspect this has something to do with git and indexing.

    How do you get the bufspec value consistently while using vimdiff through git mergetool?
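    A hedged sketch of how [bufspec] is usually supplied in this setup (an assumption about the standard git mergetool layout, worth verifying with :ls): git creates temporary files whose names contain LOCAL, BASE and REMOTE, and [bufspec] accepts either a buffer number or a pattern matching the buffer name, so:

        :ls                  " list the diff buffers with their numbers and names
        :diffget LOCAL       " pull the hunk from the buffer whose name matches LOCAL
        :diffget 3           " or address a buffer by the number :ls reported
        :diffupdate          " re-scan the files for differences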

    Read the article

  • Using kAudioSessionProperty_OtherMixableAudioShouldDuck on iPhone

    - by Cliff
    Hello, I'm trying to get consistent behaviour out of the kAudioSessionProperty_OtherMixableAudioShouldDuck property on the iPhone to allow blending with the iPod music, and I'm having trouble. At the start of my app I set an Ambient category like so:

        - (void)initialize
        {
            [[AVAudioSession sharedInstance] setCategory:AVAudioSessionCategoryAmbient error:nil];
        }

    Later on, when I attempt to play audio, I set the duck property using the following method:

        // this will crossfade the audio with the iPod music
        - (void)toggleCrossfadeOn:(UInt32)onOff
        {
            // duck the iPod music
            AudioSessionSetProperty(kAudioSessionProperty_OtherMixableAudioShouldDuck, sizeof(onOff), &onOff);
            AudioSessionSetActive(onOff);
        }

    I call this passing a numeric 1 just before playing audio:

        [self toggleCrossfadeOn:1];
        [player play];

    I then call the crossfade method again, passing zero, when my app's audio completes, using a "playback is stopping" callback:

        - (void)playbackIsStoppingForPlayer:(MQAudioPlayer *)audioPlayer
        {
            NSLog(@"Releasing player");
            [audioPlayer release];
            [self toggleCrossfadeOn:0];
        }

    In my production app this exact code works as expected, causing the iPod to fade out just before playing my app's audio and to fade back in when the audio finishes. In a new project I recently started, I get different behaviour: the iPod audio fades down and never fades back in. In my production app I use AVAudioPlayer, whereas in my new app I've written a custom audio player that uses audio queues. Could somebody help me understand the differences and how to properly use this API?
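    A hedged guess, not a confirmed fix: a ducking change only takes effect when the session is deactivated and reactivated, and deactivating with the "notify others" flag is what invites the iPod audio to fade back up - a custom audio-queue player may leave the session in a different state than AVAudioPlayer did. A sketch of the toggle written with that flag (requires AudioToolbox; the assumption is that your queue-based player keeps the session active):

        // #import <AudioToolbox/AudioToolbox.h>
        - (void)toggleCrossfadeOn:(UInt32)onOff
        {
            AudioSessionSetProperty(kAudioSessionProperty_OtherMixableAudioShouldDuck,
                                    sizeof(onOff), &onOff);
            if (onOff) {
                AudioSessionSetActive(true);
            } else {
                // Deactivating with this flag tells iOS to resume (un-duck)
                // other audio, such as the iPod player.
                AudioSessionSetActiveWithFlags(false,
                    kAudioSessionSetActiveFlag_NotifyOthersOnDeactivation);
            }
        }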

    Read the article

  • Flex 4 vs JavaScript Options (Cappuccino, JQuery, etc.)

    - by user320681
    Rehashing an older post: http://stackoverflow.com/questions/1570070/jquery-vs-flex-choosing-a-platform-for-saas

    We are preparing to develop an application that is exceptionally dynamic and interactive; it's particularly heavy on the graphics side. We are 85% convinced that Adobe Flash built atop Flex is the right path to take; however, Cappuccino is quite nice and seems as though it may nearly fit the bill. The only pause we have right now is portability to the iPhone. With the lack of blessings from Apple, we will almost certainly have to create a second interface to the site for the iPhone. However, having two interfaces may not be bad, as the iPhone one would likely have to be custom anyway to take advantage of the differences the platform affords.

    Any further thoughts or re-evaluations of the points enumerated in the noted article? Further, Flex 4 adds a lot of strength to the position mentioned previously regarding UI development. Flex 4 is very nice compared to Flex 3, and shaves 90% from the development time when coupled with Flash Catalyst - which is not always fully appropriate, but with some round-trip tricks it seems as though it can cut through things rather well. Please do advise, and many thanks.

    Read the article

  • Entity Framework - Merging 2 physical tables into one "virtual" table problems...

    - by Keith Barrows
    I have been reading up on porting the ASP.NET Membership Provider to .NET 3.5 using LINQ & Entities. However, the DB model that every single sample shows is the newer model, while I've inherited a rather old one. Differences:

        - The User table is split into a pair of User & Membership tables.
        - All of the tables in the DB are prepended with aspnet_
        - I have Lowered versions of some columns (UserName, Email, etc.)

    To work with this I have copied the properties from the Membership table into the User table (in the DB this is a 1<-1 relationship, not a 1<-0,1), and renamed aspnet_Applications to Application, aspnet_Profiles to Profile, aspnet_Users to User and aspnet_Roles to Role. (See image)

    Link to full size image of model

    Now, I am running into one of two problems when I try to compile. Using the model in the image I get this error:

        Problem in Mapping Fragment starting at line 464: EntitySets 'UserSet' and 'aspnet_Membership' are both mapped to table 'aspnet_Membership'. Their Primary Keys may collide.

    If I delete the aspnet_Membership table from my model (to handle the above error) I then get:

        Problem in Mapping Fragment starting at line 384: Column aspnet_Membership.ApplicationId in table aspnet_Membership must be mapped: It has no default value and is not nullable.

    My ability to hand-edit the backing stores is not the best, and I don't want to just hack something in that may break other things. I am looking for suggestions, best practices, etc. to handle this.

    Note: Moving the data tables themselves is not an option, as I cannot replace all the logic in the existing apps. I am building this EF provider for a new app; over the next 6 months the old app(s) will migrate bit by bit to the new structures.

    Note: I added a link just under the image to the full-size image for better viewing.

    Read the article

  • Why is Available Physical Memory (dwAvailPhys) > Available Virtual Memory (dwAvailVirtual) in call GlobalMemoryStatus?

    - by Andrew
    I am playing with an MSDN sample to do memory stress testing (see http://msdn.microsoft.com/en-us/magazine/cc163613.aspx) and an extension of that tool that specifically eats physical memory (see http://www.donationcoder.com/Forums/bb/index.php?topic=14895.0;prev_next=next). I am obviously confused about the differences between virtual and physical memory. I thought each process has 2 GB of virtual memory (although I have also read 1.5 GB because of "overhead"). My understanding was that some, all or none of this virtual memory could be physical memory, and that the amount of physical memory used by a process could change over time (memory could be swapped out to disc, etc.). I further thought that, in general, when you allocate memory, the operating system could use physical memory or virtual memory. From this, I conclude that dwAvailVirtual should always be equal to or greater than dwAvailPhys in the call GlobalMemoryStatus. However, I often (always?) see the opposite. What am I missing? I apologize in advance if my question is not well formed; I'm still trying to get my head around the whole memory management system in Windows. Tutorials/explanations/book recommendations are most welcome! Andrew
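    A minimal sketch for comparing the two counters side by side, using the newer GlobalMemoryStatusEx (which avoids the DWORD truncation that GlobalMemoryStatus suffers on machines with a lot of memory) - note the comments describe what MSDN documents each field to mean, not an answer to the ordering question:

        #include <windows.h>
        #include <cstdio>

        int main()
        {
            MEMORYSTATUSEX status = { sizeof(status) }; // dwLength must be set first
            if (GlobalMemoryStatusEx(&status))
            {
                // ullAvailPhys is free physical memory machine-wide;
                // ullAvailVirtual is unreserved address space in THIS process.
                printf("Available physical:       %llu MB\n", status.ullAvailPhys / (1024 * 1024));
                printf("Available virtual (proc): %llu MB\n", status.ullAvailVirtual / (1024 * 1024));
                printf("Total virtual (proc):     %llu MB\n", status.ullTotalVirtual / (1024 * 1024));
            }
            return 0;
        }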

    Read the article

  • Non-Relational Database Design

    - by Ian Varley
    I'm interested in hearing about design strategies you have used with non-relational "nosql" databases - that is, the (mostly new) class of data stores that don't use traditional relational design or SQL (such as Hypertable, CouchDB, SimpleDB, Google App Engine datastore, Voldemort, Cassandra, SQL Data Services, etc.). They're also often referred to as "key/value stores", and at base they act like giant distributed persistent hash tables. Specifically, I want to learn about the differences in conceptual data design with these new databases. What's easier, what's harder, what can't be done at all? Have you come up with alternate designs that work much better in the non-relational world? Have you hit your head against anything that seems impossible? Have you bridged the gap with any design patterns, e.g. to translate from one to the other? Do you even do explicit data models at all now (e.g. in UML) or have you chucked them entirely in favor of semi-structured / document-oriented data blobs? Do you miss any of the major extra services that RDBMSes provide, like relational integrity, arbitrarily complex transaction support, triggers, etc? I come from a SQL relational DB background, so normalization is in my blood. That said, I get the advantages of non-relational databases for simplicity and scaling, and my gut tells me that there has to be a richer overlap of design capabilities. What have you done?

    FYI, there have been StackOverflow discussions on similar topics here:

        - the next generation of databases
        - changing schemas to work with Google App Engine
        - choosing a document-oriented database

    Read the article

  • C++ thread safety - exchange data between worker and controller

    - by peterchen
    I still feel a bit unsafe about the topic and hope you folks can help me. For passing data (configuration or results) between a worker thread polling something and a controlling thread interested in the most recent data, I've ended up using more or less the following pattern repeatedly:

        Mutex m;
        tData * stage;  // temporary, accessed concurrently

        // send data, gives up ownership, receives old stage if any
        tData * Send(tData * newData)
        {
            ScopedLock lock(m);
            swap(newData, stage);
            return newData;
        }

        // receiving thread fetches latest data here
        tData * Fetch(tData * prev)
        {
            ScopedLock lock(m);
            if (stage != 0)
            {
                // ... release prev
                prev = stage;
                stage = 0;
            }
            return prev;  // now current
        }

    Note: this is not supposed to be a full producer-consumer queue; only the most recent data is relevant. Also, I've skimmed over resource management somewhat here. When necessary I'm using two such stages: one to send config changes to the worker, and one for sending back results.

    Now, my questions, assuming that ScopedLock implements a full memory barrier:

        - Do stage and/or workerData need to be volatile?
        - Is volatile necessary for tData members?
        - Can I use smart pointers instead of the raw pointers - say boost::shared_ptr?
        - Anything else that can go wrong?

    I am basically trying to avoid "volatile infection" spreading into tData, and to minimize lock contention (a lock-free implementation seems possible, too). However, I'm not sure if this is the easiest solution. Since all this is more or less platform dependent, let's say Visual C++ x86 or x64, though differences/notes for other platforms are welcome, too. (A preliminary "thanks, but" for recommending libraries such as Intel TBB - I am trying to understand the platform issues here.)
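    For the smart-pointer question, a hedged sketch of the same exchange written with boost::shared_ptr - this is one possible variant, not a verdict on the volatile questions; tData stands in for the question's own type:

        #include <boost/shared_ptr.hpp>
        #include <boost/thread/mutex.hpp>

        struct tData { /* payload as in the question */ };
        typedef boost::shared_ptr<tData> DataPtr;

        boost::mutex m;
        DataPtr stage;

        // worker: publish the newest data; an unconsumed older value is
        // released automatically when the swapped-out pointer dies.
        void Send(DataPtr newData)
        {
            boost::mutex::scoped_lock lock(m);
            stage.swap(newData);
        }

        // controller: take the latest data, or an empty pointer if nothing new.
        DataPtr Fetch()
        {
            boost::mutex::scoped_lock lock(m);
            DataPtr latest;
            latest.swap(stage); // leaves stage empty
            return latest;
        }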

    Read the article

  • visual studio 2008 vs 2010 Pro unmanaged code

    - by bartek
    Hi, I'm a C++ programmer; I use Visual Studio 2008 Professional, only unmanaged code. I'm thinking of buying VS 2010 Pro, but I'm confused: I don't know what the differences between the two are. I know that, in addition, it has TR1 included. When I started using the 2008 edition I was very pleased to see e.g. unit testing support, but all the new features turned out to be only for managed code. The C++ debugger in 2008 is very good, better than the 2003 edition's. I wouldn't like to buy a new tool and discover that I gained nothing and lost some functionality (because e.g. something was moved to a higher edition). Once upon a time I switched from the very good VS6 to VS 2003 .NET, and imagine what: after some time I discovered that Pro had no support for code optimization. It is wonderful how Microsoft makes money. I wouldn't like to experience something like that again. What do you think, and what can you recommend?

    Read the article

  • Cloud-aware programming and help choosing a good framework

    - by Shoaibi
    How can I write a cloud-aware application, i.e. an application that takes advantage of being deployed on a cloud? Is it the same as an application that runs on a VPS/dedicated server? If not, what are the differences, and are there any design changes? What are the steps I need to take if I am to migrate an application to be cloud-aware?

    Also, I am about to implement a web application idea which would need features like security, performance, caching and, more importantly, being free. I have been comparing some frameworks and found that Django has the lowest RAM/CPU usage and works great in prefork+threaded mode, but I have also read that Django-based sites stop responding under a huge load of connections. Other frameworks that I have seen or know are Zend, CakePHP, Lithium/Cake3, CodeIgniter, Symfony and Ruby on Rails. So I would leave this to your opinion as well: suggest a good free framework based on my needs. Finally, thanks for reading the essay ;)

    Read the article

  • Problem with apostrophes and other special characters when using aspell in windows

    - by Loftx
    Hi there, we seem to be having a problem with the spell checker on our content management system, where it marks the "ve" part of "We've" as a misspelling. The spell checker uses Aspell, which is called from a script on the server that executes cmd.exe and uses it to pipe a file into Aspell (it's a long-winded way, I know, but our server-side programming language (ColdFusion) doesn't support writing to stdin for executables). Aspell is called by executing:

        c:\windows\system32\cmd.exe /c type d:\path_to_file\file.txt | "C:\Program Files\Aspell\bin\aspell" --lang=en -a

    where file.txt contains the text to be spell-checked, e.g.:

        ^Oh have We've

    (the caret is added to prevent piping problems, I believe). Aspell then outputs:

        @(#) International Ispell Version 3.1.20 (but really Aspell 0.50.3)
        *
        *
        *
        & ve 62 12: vie, voe, V, v, veg, vet, Be, Ce, be, Ev, E, e, vex, VA, VI, Va, Vi, vi, we, VD, VF, VG, VJ, VP, VT, Vt, vb, vs, DE, De, Fe, GE, Ge, He, IE, Le, ME, Me, NE, Ne, OE, PE, Re, SE, Se, Te, Xe, he, me, re, ye, Ave, Eve, Ive, ave, eve, VAR, var, veer, vier, view, vow

    However, we have a dev site with the same version of Aspell, and when the same file is used there it outputs no misspellings, just:

        @(#) International Ispell Version 3.1.20 (but really Aspell 0.50.3)

    Both servers are running Aspell 0.50.3 on Windows Server 2003, but there could be other differences in configuration. I'm wondering if the problem is in the piping part of the process or in something different in the Aspell configuration. Does anyone have any ideas? Cheers, Tom
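    One thing worth ruling out (an assumption, not a confirmed diagnosis): the apostrophe in "We've" is the Unicode right single quote (U+2019), so the two servers may simply disagree on the character encoding Aspell assumes for its input. Aspell accepts an explicit --encoding switch, so pinning it on both machines would make the comparison fair, e.g.:

        c:\windows\system32\cmd.exe /c type d:\path_to_file\file.txt | "C:\Program Files\Aspell\bin\aspell" --lang=en --encoding=utf-8 -a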

    Read the article

  • UIImagePickerController shows last picture taken instead of camera input

    - by jules
    I'm seeing some strange behaviour within my app. For taking pictures I'm using the following, pretty standard code for displaying the UIImagePickerController:

        UIImagePickerController *picker = [[UIImagePickerController alloc] init];
        picker.delegate = self;
        picker.allowsEditing = NO;
        picker.sourceType = UIImagePickerControllerSourceTypeCamera;
        [self presentViewController:picker animated:YES completion:nil];

    It works perfectly fine the first time I tap the button which calls this action. The strange behaviour starts when I tap that button again: the UIImagePickerController starts again, BUT it doesn't show the input from the camera anymore - it shows the last picture I took. More details of this state:

        - Tapping on the image shows the yellow square of the autofocus (which it actually uses to focus the camera correctly).
        - When I tap the image-capture button, the correct image is taken and presented on the screen.
        - If I take a picture and press 'Retake', the regular camera input is presented.

    More weirdness: it has nothing to do with the iPad I'm using. Creating a new example app which only has a button calling the code above works perfectly fine. I therefore assume it has something to do with the configuration of the app, but I checked everything and could not find any differences which might cause this issue. Thanks in advance for any advice!

    Update: I do implement the UIImagePickerControllerDelegate in order to dismiss the UIImagePickerController.

    Read the article

  • How is jQuery so fast?

    - by ClarkeyBoy
    Hey, I have a rather large application which, on the admin frontend, takes a few seconds to load a page because of all the pageviews that it has to load into objects before displaying anything. It's a bit complex to explain how the system works, but a few of my other questions explain it in great detail. The main difference between what they say and the current system is that the customer frontend no longer loads all the pageviews into objects when a customer first views the page - it simply adds the pageview to the database and creates an object in an unsynchronised list. To put it simply: when a customer views a page it no longer loads all the pageviews into objects, but the admin frontend still does.

    I have been working on some admin tools on the customer frontend recently, so if an administrator clicks the description of an item in the catalogue, the right-hand column will display statistics and available actions for the selected item. To do this, the page which gets loaded (through $('action-container').load(bla bla bla);) into the right-hand column has to loop through ALL the pageviews - which ultimately means that ALL the pageviews are loaded into objects if they haven't been already. For some reason this loads really, REALLY fast. The difference in speed is only about a second on my dev site, but the live site has thousands of pageviews, so the difference is quite big.

    So my question is: why is it that the admin frontend loads so slowly while using $(bla).load(bla); is so fast? I mean, whatever method jQuery uses, can't browsers use the same method and load pages super-fast? Obviously not, or someone would have done that by now - but I am interested to know why the difference is so big. Is it just my system, or is there a major difference in speed between the browser getting a page and jQuery getting a page? Do other people experience the same kind of differences? Thanks in advance, Regards, Richard

    Read the article

  • mtl, transformers, monads-fd, monadLib, and the paradox of choice

    - by yairchu
    Hackage has several packages for monad transformers:

        - mtl: Monad transformer library
        - transformers: Concrete functor and monad transformers
        - monads-fd: Monad classes, using functional dependencies
        - monads-tf: Monad classes, using type families
        - monadLib: A collection of monad transformers
        - mtl-tf: Monad transformer library using type families
        - mmtl: Modular Monad transformer library
        - mtlx: Monad transformer library with type indexes, providing 'free' copies
        - compose-trans: Composable monad transformers

    (and maybe I missed some)

    Which one shall we use? mtl is the one in the Haskell Platform, but I keep hearing on reddit that it's uncool. But what's bad about choice anyway - isn't it just a good thing? Well, I saw how, for example, the authors of data-accessor had to make all these to cater to just the popular choices:

        - data-accessor-monadLib: Accessor functions for monadLib's monads
        - data-accessor-monads-fd: Use Accessor to access state in monads-fd State monad class
        - data-accessor-monads-tf: Use Accessor to access state in monads-tf State monad type family
        - data-accessor-mtl: Use Accessor to access state in mtl State monad class
        - data-accessor-transformers: Use Accessor to access state in transformers State monad

    I imagine that if this goes on and, for example, several competing Arrow packages evolve, we might see something like: spoonklink-arrows-transformers, spoonklink-arrows-monadLib, spoonklink-tfArrows-transformers, spoonklink-tfArrows-monadLib, ... And then I worry that if spoonklink gets forked, Hackage will run out of disk space. :)

    Questions:

        - Why are there so many monad transformer packages?
        - Why is mtl [considered] uncool?
        - What are the key differences?
        - Most of these seemingly competing packages were written by Andy Gill and are maintained by Ross Paterson. Does this mean that these packages are not competing but rather work together in some way? And do Andy and Ross consider any of their own packages obsolete?
        - Which one should me and you use?
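    To make the mtl-vs-transformers axis concrete, a small hedged sketch (written for this page, not taken from either package's docs) of the same state function against both styles - transformers gives you concrete types, so crossing layers needs explicit lift, while mtl's fundep classes let the constraint float to any depth of the stack:

        import qualified Control.Monad.Trans.State as S   -- transformers: concrete types
        import Control.Monad.State (MonadState, get, put) -- mtl: class-based

        -- transformers style: the monad is literally StateT Int m, so using
        -- tickT under another transformer requires an explicit lift.
        tickT :: Monad m => S.StateT Int m Int
        tickT = do n <- S.get
                   S.put (n + 1)
                   return n

        -- mtl style: any monad with a MonadState Int instance will do, so
        -- the same code runs anywhere in the stack without manual lifting.
        tickM :: MonadState Int m => m Int
        tickM = do n <- get
                   put (n + 1)
                   return n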

    Read the article

  • Relational vs. Dimensional Databases, what's the difference?

    - by grautur
    I'm trying to learn about OLAP and data warehousing, and I'm confused about the difference between relational and dimensional modeling. Is dimensional modeling basically relational modeling, but allowing for redundant/un-normalized data?

    For example, let's say I have historical sales data on (product, city, # sales). I understand that the following would be a relational point-of-view:

        Product | City          | # Sales
        Apples  | San Francisco | 400
        Apples  | Boston        | 700
        Apples  | Seattle      | 600
        Oranges | San Francisco | 550
        Oranges | Boston        | 500
        Oranges | Seattle       | 600

    While the following is a more dimensional point-of-view:

        Product | San Francisco | Boston | Seattle
        Apples  | 400           | 700    | 600
        Oranges | 550           | 500    | 600

    But it seems like both points of view would nonetheless be implemented in an identical star schema:

        Fact table:        Product ID, Region ID, # Sales
        Product dimension: Product ID, Product Name
        City dimension:    City ID, City Name

    And it's not until you start adding some additional details to each dimension that the differences start popping up. For instance, if you wanted to track regions as well, a relational database would tend to have a separate region table, in order to keep everything normalized:

        City dimension:   City ID, City Name, Region ID
        Region dimension: Region ID, Region Name, Region Manager, # Regional Stores

    While a dimensional database would allow for denormalization, keeping the region data inside the city dimension in order to make it easier to slice the data:

        City dimension: City ID, City Name, Region Name, Region Manager, # Regional Stores

    Is this correct?

    Read the article

  • OpenCV: Shift/Align face image relative to reference Image (Image Registration)

    - by Abhischek
    I am new to OpenCV2 and working on a project in emotion recognition; I would like to align a facial image with a reference facial image. I would like to get image translation working before moving on to rotation. The current idea is to run a search within a limited range on both x and y coordinates and use the sum of squared differences as the error metric to select the optimal x/y offset to align the image. I'm using the OpenCV face_cascade function to detect the face images; all images are resized to a fixed size (128x128).

    Question: which parameters of the Mat image do I need to modify to shift the image in a positive/negative direction on both the x and y axes? I believe setImageROI is no longer supported for Mat datatypes? I have the ROIs for both faces available, however I am unsure how to use them.

        void alignImage(vector<Rect> faceROIstore, vector<Mat> faceIMGstore)
        {
            Mat refimg = faceIMGstore[1];   // reference image
            Mat dispimg = faceIMGstore[52]; // "displaced" version of reference image
            //Rect refROI = faceROIstore[1];   // bounding box for face in reference image
            //Rect dispROI = faceROIstore[52]; // bounding box for face in displaced image

            Mat aligned;
            matchTemplate(dispimg, refimg, aligned, CV_TM_SQDIFF_NORMED);
            imshow("Aligned image", aligned);
        }

    The idea for this approach is based on the Image Alignment tutorial by Richard Szeliski. I'm working on Windows with OpenCV 2.4. Any suggestions are much appreciated.
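    On the concrete question: with Mat you no longer need setImageROI - Mat::operator()(Rect) gives a view (e.g. dispimg(dispROI)) - and the translation that matchTemplate found can be read off the result matrix with minMaxLoc (for CV_TM_SQDIFF_NORMED the best alignment is the minimum). A hedged sketch of that step, meant to slot into alignImage after refimg/dispimg are picked; the pad value is an assumed search range, not anything from the tutorial:

        // Search for refimg inside a padded copy of dispimg so the result
        // matrix is larger than 1x1 and actually encodes an (x, y) shift.
        int pad = 16; // assumed search range in pixels - tune as needed
        Mat padded;
        copyMakeBorder(dispimg, padded, pad, pad, pad, pad, BORDER_REPLICATE);

        Mat result;
        matchTemplate(padded, refimg, result, CV_TM_SQDIFF_NORMED);

        double minVal, maxVal;
        Point minLoc, maxLoc;
        minMaxLoc(result, &minVal, &maxVal, &minLoc, &maxLoc);

        // For SQDIFF the minimum is the best match; minLoc - pad is the
        // translation of the face in dispimg relative to refimg.
        Point shift(minLoc.x - pad, minLoc.y - pad);

        // Undo the shift with a translation-only 2x3 affine matrix.
        Mat M = (Mat_<double>(2, 3) << 1, 0, -shift.x, 0, 1, -shift.y);
        Mat alignedImg;
        warpAffine(dispimg, alignedImg, M, dispimg.size());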

    Read the article

  • Java Robot key activity seems to stop working while certain software is running

    - by Mike Turley
    I'm writing a Java application to automate character actions in an online game overnight (specifically, it catches fish in Final Fantasy XI). The app makes heavy use of Java's Robot class, both for emulating user keyboard input and for detecting color changes on certain parts of the screen. It also uses multithreading and a Swing GUI. The application seems to work perfectly when I test it without the game running, just using screenshots to trigger the app's responses into Notepad. But for some reason, when I actually launch FFXI and start the program, all of my keyboard and mouse manipulations just stop working altogether. The program is still running, and the Robot class is still able to read pixel colors, but Robot.keyPress, Robot.keyRelease, Robot.mouseMove, Robot.mousePress and Robot.mouseRelease all do nothing. It's the strangest thing - to test it, I wrote a simple loop that just keeps typing letters, and focused Notepad. I'd then start the game, refocus Notepad, and it would do nothing. Then I'd exit the game, and it'd start working again immediately. Has anyone else come across something like this, where specific software stops certain functions of Java from working?

    Also, to make this more interesting: last year I wrote a very similar program, using the same classes and programming techniques, to automate healing a party in the game as they fight. Last year this program worked perfectly. After running into these problems I dug up that old program, ran it without making any changes, and found that it too was having the same problems. The only differences between now and when it was working: I was running Windows Vista and now I'm running Windows 7, and several new Java versions as well as FFXI versions have been released since. What the hell is going on?

    (If anyone needs to see my source code, email me at [email protected]. I'm trying to keep it to myself.)

    Read the article

  • decimal.TryParse() drops leading "1"

    - by Martin Harris
    Short and sweet version: on one machine out of around a hundred test machines, decimal.TryParse() is converting "1.01" to 0.01.

    Okay, this is going to sound crazy, but bear with me... We have a client application that communicates with a web service through JSON, and that service returns a decimal value as a string, so we store it as a string in our model object:

        [DataMember(Name = "value")]
        public string Value { get; set; }

    When we display that value on screen it is formatted to a specific number of decimal places, so the process we use is string -> decimal, then decimal -> string. The application is currently undergoing final testing and is running on more than 100 machines, where this all works fine. However, on one machine, if the decimal value has a leading '1' then it is replaced by a zero. I added simple logging to the code so it looks like this:

        Log("Original string value: {0}", value);
        decimal val;
        if (decimal.TryParse(value, out val))
        {
            Log("Parsed decimal value: {0}", val);
            string output = val.ToString(format, CultureInfo.InvariantCulture.NumberFormat);
            Log("Formatted string value: {0}", output);
            return output;
        }

    On my machine - and every other client machine - the logfile output is:

        Original string value: 1.010000
        Parsed decimal value: 1.010000
        Formatted string value: 1.01

    On the defective machine the output is:

        Original string value: 1.010000
        Parsed decimal value: 0.010000
        Formatted string value: 0.01

    So it would appear that the decimal.TryParse method is at fault. Things we've tried:

        - Uninstalling and reinstalling the client application
        - Uninstalling and reinstalling .NET 3.5 SP1
        - Comparing the defective machine's regional settings for numbers (English (United Kingdom)) to those of a working machine - no differences

    Has anyone seen anything like this, or does anyone have any suggestions? I'm quickly running out of ideas...

    While I was typing this, some more info came in: passing a string value of "10000" to Convert.ToInt32() returns 0, so that also seems to drop the leading 1.
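    One way to take the machine's regional settings out of the equation entirely (a diagnostic sketch, not a claimed fix - if this parses correctly it points at locale data, and if not, at something lower-level like a corrupt framework install): use the TryParse overload that takes an explicit IFormatProvider:

        // requires: using System.Globalization;
        decimal val;
        bool ok = decimal.TryParse(
            value,
            NumberStyles.Number,
            CultureInfo.InvariantCulture,
            out val);
        Log("Invariant parse: {0} -> {1}", ok, val);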

    Read the article

  • Using Fantom USB Driver from JNI

    - by Starky
    I'm having some difficulty with JNI. I'm using JNI to call some Java methods from a C++ program; this part of the JNI implementation is working fine. The goal of the Java program is to send commands over USB to a LEGO robot using LeJOS. This works fine when running the Java program by itself, but for some reason, when I call the methods from C++, the robot cannot be detected. My only lead so far is that there may be some problem using the Fantom USB driver from a JNI call. This is the driver that's used for the USB connection to the robot. I've had a quick look at the code for the driver and it looks like it makes use of JNI too. So I guess I'm asking the following things:

        1. What differences could there be between calling code from JNI and executing it through the command prompt with the 'java classname args' method that could cause this problem?
        2. Could it be that there is some problem with me using JNI from C++ when the driver that's being used uses JNI as well?

    I won't post any code just now as I don't think it's really relevant, but if anyone thinks they need to see it then I can add it.

    Read the article

  • Are SharePoint site templates really less performant than site definitions?

    - by Jim
    So, it seems in the SharePoint blogosphere that everybody just copies and pastes the same bullet points from other blogs. One bullet point I've seen is that SharePoint site templates are less performant than site definitions because site definitions are stored on the file system. Is that true? It seems odd that site templates would be less performant. It's my understanding that all site content lives in a database, whether you use a site template or a site definition. A site template is applied once to the database, and from then on the site should not care whether the content was created using a site template or not. So, does anybody have an architectural reason why a site template would be less performant than a site definition?

    Edit: links to the blogs that say there is a performance difference:

        - From MSDN: "Because it is slow to store templates in and retrieve them from the database, site templates can result in slower performance."
        - From DevX: "However, user templates in SharePoint can lead to performance problems and may not be the best approach if you're trying to create a set of reusable templates for an entire organization."
        - From IT Footprint: "Because it is slow to store templates in and retrieve them from the database, site templates can result in slower performance. Templates in the database are compiled and executed every time a page is rendered."
        - From Branding SharePoint: "Custom site definitions hold the following advantages over custom templates: Data is stored directly on the Web servers, so performance is typically better."

    At a minimum, I think the above articles are incomplete, and I think several are misleading based on what I know of SharePoint's architecture. I read another blog post that argued against the performance differences, but I can't find the link.

    Read the article

  • Reordering arguments using recursion (pro, cons, alternatives)

    - by polygenelubricants
    I find that I often make a recursive call just to reorder arguments. For example, here's my solution for endOther from codingbat.com:

        Given two strings, return true if either of the strings appears at the very end of the other string, ignoring upper/lower case differences (in other words, the computation should not be "case sensitive"). Note: str.toLowerCase() returns the lowercase version of a string.

        public boolean endOther(String a, String b) {
            return a.length() < b.length()
                ? endOther(b, a)
                : a.toLowerCase().endsWith(b.toLowerCase());
        }

    I'm very comfortable with recursion, but I can certainly understand why some would object to it here. There are two obvious alternatives to this recursion technique.

    Swap a and b traditionally:

        public boolean endOther(String a, String b) {
            if (a.length() < b.length()) {
                String t = a;
                a = b;
                b = t;
            }
            return a.toLowerCase().endsWith(b.toLowerCase());
        }

        - Not convenient in a language like Java that doesn't pass by reference
        - Lots of code just to do a simple operation
        - An extra if statement breaks the "flow"

    Repeat code:

        public boolean endOther(String a, String b) {
            return (a.length() < b.length())
                ? b.toLowerCase().endsWith(a.toLowerCase())
                : a.toLowerCase().endsWith(b.toLowerCase());
        }

        - Explicit symmetry may be a nice thing (or not?)
        - Bad idea unless the repeated code is very simple
        - ...though in this case you can get rid of the ternary and just || the two expressions

    So my questions are:

        - Is there a name for these 3 techniques? (Are there more?)
        - Is there a name for what they achieve? (e.g. "parameter normalization", perhaps?)
        - Are there official recommendations on which technique to use (and when)?
        - What are other pros/cons that I may have missed?

    Read the article

  • rails + compass: advantages vs using haml + blueprint directly

    - by egarcia
    I've got some experience using Haml (+ Sass) on Rails projects. I recently started using them with blueprintcss - the only thing I did was transform blueprint.css into a Sass file and start coding from there. I even have a Rails generator that includes all this by default. It seems that Compass does what I do, and other things. I'm trying to understand what those other things are, but the documentation/tutorials weren't very clear. These are my conclusions:

        - Compass comes with built-in Sass mixins that implement common CSS idioms, such as links with icons or horizontal lists. My solution doesn't provide anything like that. (1 point for Compass)
        - Compass has several command-line options: you can create a Rails project, but you can also "install" it on an existing Rails project. A Rails generator could be personalized to do the same thing, I guess. (Tie)
        - Compass has two modes of working with Blueprint: "basic" and "semantic" usage. I'm not clear about the differences between them. With my Rails generator I only have one mode, but it seems enough. (Tie)
        - Apparently, Compass is prepared to use other frameworks besides Blueprint (e.g. YUI). I could not find much documentation about this, and I'm not interested in it anyway - Blueprint is fine for me. (Tie)
        - Compass' learning curve seems a bit steep and the documentation sparse, so learning it could be difficult. On the other hand, I know the ins and outs of my own system and can use it right away. (1 point for my system)

    With this analysis, I'm hesitant to give Compass a try. Is my analysis correct? Am I missing any key points, or have I evaluated any of these points wrongly?

    Read the article
