Search Results

Search found 3912 results on 157 pages for 'distributed caching'.


  • NonStop ODBC: how are the connections (ODBC servers) assigned to CPUs?

    - by Vladimir Dyuzhev
    We have an ODBC pool running on a NonStop server. The pool is connected to SQL/MX. This pool is used by a few external Java applications, each of which has a JDBC pool connected to the ODBC pool (e.g. 14 connections per application). With time (after a few application recycles) we see an imbalance between CPUs -- some have 8 ODBC processes running, some only 5. That leads to a CPU time imbalance too. Up to this point we had assumed that a CPU is assigned to each ODBC process in round-robin fashion, which would keep the number of ODBC processes more or less equally distributed. That's not the case, though. Is there any information on how the ODBC pool decides which CPU to choose for every newly allocated process? Does it look at CPU load? Available memory? Something else? Sadly, even HP's own people (available to us, that is) couldn't answer those questions with certainty. :-(


  • Splitting MS Access Database - Front End Part Location

    - by kristof
    One of the best practices specified by Microsoft for Access development is splitting an Access application into two parts: a Front End that holds all the objects except tables, and a Back End that holds the tables. The MSDN page links to the article Splitting Microsoft Access Databases to Improve Performance and Simplify Maintainability, which describes the process in detail. It is recommended that in a multi-user environment the Back End is stored on the server/shared folder while the Front End is distributed to each user. That implies that each time any changes are made to the Front End, they need to be deployed to every user's machine.

    My question is: assuming that the users themselves do not have rights to modify the Front End part of the application, what would be the drawbacks/dangers of leaving it on the server as well, next to the Back End copy? I can see the performance issues here, but are there any dangers, like possible corruption etc.? Thank you.

    EDIT: Just to clarify, the scenario specified in the question assumes one Front End stored on the server and shared by users. I understand that the recommendation is to have the FE deployed to each user machine, but my question is more about what the dangers are if that is not done. E.g. when you are given an existing solution that uses the approach of both FE and BE on the server. Assuming the performance is acceptable and the customer is reluctant to change the approach, would you still push the change? And why exactly? For example, the danger of possible data corruption would definitely be a strong enough argument, but is that the case? This is part of a follow-up to my previous question, From SQL Server to MS Access 2007.


  • How is it that json serialization is so much faster than yaml serialization in python?

    - by guidoism
    I have code that relies heavily on yaml for cross-language serialization, and while working on speeding some stuff up I noticed that yaml was insanely slow compared to other serialization methods (e.g. pickle, json). So what really blows my mind is that json is so much faster than yaml when the output is nearly identical.

        >>> import yaml, cjson; d = {'foo': {'bar': 1}}
        >>> yaml.dump(d, Dumper=yaml.SafeDumper)
        'foo: {bar: 1}\n'
        >>> cjson.encode(d)
        '{"foo": {"bar": 1}}'
        >>> from timeit import timeit
        >>> timeit("yaml.dump(d, Dumper=yaml.SafeDumper)", setup="import yaml; d={'foo': {'bar': 1}}", number=10000)
        44.506911039352417
        >>> timeit("yaml.dump(d, Dumper=yaml.CSafeDumper)", setup="import yaml; d={'foo': {'bar': 1}}", number=10000)
        16.852826118469238
        >>> timeit("cjson.encode(d)", setup="import cjson; d={'foo': {'bar': 1}}", number=10000)
        0.073784112930297852

    PyYaml's CSafeDumper and cjson are both written in C, so this is not a C vs. Python speed issue. I've even added some random data to it to see if cjson is doing any caching, but it's still way faster than PyYaml. I realize that yaml is a superset of json, but how could the yaml serializer be two orders of magnitude slower with such simple input?
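
    For what it's worth, profiling the dump loop is a quick way to see which stages dominate; this is a minimal sketch using only the standard library's cProfile plus PyYAML (the loop count mirrors the timeit runs above):

        import cProfile
        import yaml

        d = {'foo': {'bar': 1}}

        def bench():
            # Same dump as the timings above, repeated so the profile
            # has enough samples per stage (representer, serializer, emitter).
            for _ in range(10000):
                yaml.dump(d, Dumper=yaml.SafeDumper)

        # Sort by cumulative time to see which layers the time sinks into.
        cProfile.run('bench()', sort='cumulative')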


  • Efficient AutoSuggest with jQuery?

    - by nobosh
    I'm working to build a jQuery AutoSuggest plugin, inspired by Apple's Spotlight. Here is the general code:

        $(document).ready(function() {
          $('#q').bind('keyup', function() {
            if ($(this).val().length == 0) {
              // Hide the q-suggestions box
              $('#q-suggestions').fadeOut();
            } else {
              // Show the AJAX Spinner
              $("#q").css("background-image", "url(/images/ajax-loader.gif)");
              $.ajax({
                url: '/search/spotlight/',
                data: {"q": $(this).val()},
                success: function(data) {
                  $('#q-suggestions').fadeIn();     // Show the q-suggestions box
                  $('#q-suggestions').html(data);   // Fill the q-suggestions box
                  // Hide the AJAX Spinner
                  $("#q").css("background-image", "url(/images/icon-search.gif)");
                }
              });
            }
          });
        });

    The issue I want to solve well and elegantly is not killing the server. Right now the code above hits the server every time you type a key and does not wait for you to essentially finish typing. What's the best way to solve this?

    A. Kill the previous AJAX request?
    B. Some type of AJAX caching?
    C. Adding some type of delay so .ajax() is only submitted when the person has stopped typing for 300ms or so?
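
    What I've been leaning toward for option C is the classic debounce: restart a short timer on every keystroke and only fire the request when the timer survives. The pattern is language-independent; here is a minimal sketch in Python rather than JavaScript (Debouncer and send_request are made-up illustration names; 0.3s matches the 300ms above):

        import threading

        class Debouncer:
            """Run `action` only after `delay` seconds with no new calls."""
            def __init__(self, delay, action):
                self.delay = delay
                self.action = action
                self._timer = None

            def call(self, *args):
                # Each new keystroke cancels the pending request...
                if self._timer is not None:
                    self._timer.cancel()
                # ...and schedules a fresh one.
                self._timer = threading.Timer(self.delay, self.action, args)
                self._timer.start()

        def send_request(query):               # stand-in for the $.ajax() call
            print("querying server for", query)

        debounced = Debouncer(0.3, send_request)
        for keystroke in ["a", "ap", "app"]:   # simulated typing
            debounced.call(keystroke)
        # Only one request ("app") fires, 300ms after the last keystroke.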


  • NSFetchedResultsController: changing predicate not working?

    - by icerelic
    Hi, I'm writing an app with two tables on one screen. The left table is a list of folders and the right table shows a list of files. When a row is tapped on the left, the right table displays the files belonging to that folder. I'm using Core Data for storage. When the folder selection changes, the fetch predicate of the right table's NSFetchedResultsController changes, a new fetch is performed, and the table data is reloaded. I used the following code snippet:

        NSPredicate *predicate = [NSPredicate predicateWithFormat:@"list = %@", self.list];
        [fetchedResultsController.fetchRequest setPredicate:predicate];
        NSError *error = nil;
        if (![[self fetchedResultsController] performFetch:&error]) {
            NSLog(@"Unresolved error %@, %@", error, [error userInfo]);
            abort();
        }
        [table reloadData];

    However the fetch results are still the same. I've NSLog'ed "predicate" before and after the fetch, and it was correct, with updated information. The fetch results stay the same as the initial fetch (when the view is loaded). I'm not very familiar with the way Core Data fetches objects (is there a caching system?), but I've done similar things before (changing predicates, re-fetching data, and refreshing the table) with single table views and everything went well. If someone could give me a hint I would be very grateful. Thanks in advance.


  • XSL: List divided into columns.

    - by kalininew
    Hello, help me please. There is a list of nodes:

        <list>
          <item>1</item>
          <item>2</item>
          <item>3</item>
          <item>4</item>
          <item>5</item>
          <item>6</item>
          <item>7</item>
          and so on...
        </list>

    I need to divide the list into "n" (an arbitrary number of) equal parts. If the number of nodes does not divide evenly, then the last set of nodes should contain the remainder of the division. For example, if the input list contains 33 elements and the output should have 4 parts with uniformly distributed elements, then the output should be 3 parts of 9 elements each and one part with 6 elements, 33 in total.
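
    To pin down the arithmetic in that example: the part size is ceil(33 / 4) = 9, so the first three parts take 9 elements each and the last part takes the remaining 33 - 27 = 6. A small sketch of the splitting logic, in Python rather than XSLT purely to illustrate the numbers (split_into_parts is a made-up name):

        import math

        def split_into_parts(items, n):
            """Split items into parts of ceil(len/n); the last part gets the remainder."""
            size = math.ceil(len(items) / n)
            return [items[i:i + size] for i in range(0, len(items), size)]

        parts = split_into_parts(list(range(1, 34)), 4)   # 33 elements, 4 parts
        print([len(p) for p in parts])                    # -> [9, 9, 9, 6]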


  • Can per-user randomized salts be replaced with iterative hashing?

    - by Chas Emerick
    In the process of building what I'd like to hope is a properly-architected authentication mechanism, I've come across a lot of materials that specify that:

    - user passwords must be salted
    - the salt used should be sufficiently random and generated per-user
    - ...therefore, the salt must be stored with the user record in order to support verification of the user password

    I wholeheartedly agree with the first and second points, but it seems like there's an easy workaround for the latter. Instead of doing the equivalent of (pseudocode here):

        salt = random();
        hashedPassword = hash(salt . password);
        storeUserRecord(username, hashedPassword, salt);

    why not use the hash of the username as the salt? This yields a domain of salts that is well-distributed, (roughly) random, and each individual salt is as complex as your salt function provides for. Even better, you don't have to store the salt in the database -- just regenerate it at authentication time. More pseudocode:

        salt = hash(username);
        hashedPassword = hash(salt . password);
        storeUserRecord(username, hashedPassword);

    (Of course, hash in the examples above should be something reasonable, like SHA-512, or some other strong hash.) This seems reasonable to me given what (little) I know of crypto, but the fact that it's a simplification over widely-recommended practice makes me wonder whether there's some obvious reason I've gone astray that I'm not aware of.
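
    For concreteness, here are the two schemes restated as runnable Python, with hashlib's SHA-512 standing in for the "strong hash" (the function names and the 16-byte salt length are made up for illustration):

        import hashlib
        import os

        def sha512_hex(data: str) -> str:
            return hashlib.sha512(data.encode()).hexdigest()

        # Widely recommended scheme: random per-user salt, stored alongside the hash.
        def register_with_random_salt(username, password):
            salt = os.urandom(16).hex()
            return (username, sha512_hex(salt + password), salt)   # salt must be stored

        # Proposed scheme: derive the salt from the username at authentication time.
        def register_with_derived_salt(username, password):
            salt = sha512_hex(username)
            return (username, sha512_hex(salt + password))         # salt is recomputable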


  • How to efficiently serve massive sitemaps in django

    - by mlissner
    I have a site with about 150K pages in its sitemap. I'm using the sitemap index generator to make the sitemaps, but really, I need a way of caching them, because building the 150 sitemaps of 1,000 links each is brutal on my server.[1]

    I COULD cache each of these sitemap pages with memcached, which is what I'm using elsewhere on the site... however, this is so many sitemaps that it would completely fill memcached, so that doesn't work.

    What I think I need is a way to use the database as the cache for these, and to only generate them when there are changes to them (which, as a result of the sitemap index, means only changing the latest couple of sitemap pages, since the rest are always the same).[2] But, as near as I can tell, I can only use one cache backend with django.

    How can I have these sitemaps ready for when Google comes-a-crawlin' without killing my database or memcached? Any thoughts?

    [1] I've limited it to 1,000 links per sitemap page because generating the max, 50,000 links, just wasn't happening.

    [2] For example, if I have sitemap.xml?page=1, page=2... sitemap.xml?page=50, I only really need to change sitemap.xml?page=50 until it is full with 1,000 links; then I can cache it pretty much forever, and focus on page 51 until it's full, cache it forever, etc.
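
    One hedged sketch of the database-as-cache idea: even with a single default CACHE_BACKEND, the Django of this era can instantiate an extra backend by hand via get_cache, so the sitemaps could live in a DB cache table while memcached keeps serving everything else (the table name, key format and timeout below are made up; build_page stands in for the actual sitemap rendering):

        from django.core.cache import get_cache

        # Second, sitemap-only backend; the table comes from
        #   ./manage.py createcachetable sitemap_cache
        sitemap_cache = get_cache('db://sitemap_cache')

        def cached_sitemap_page(page, build_page):
            """Return the XML for one sitemap page, generating it only on a miss."""
            key = 'sitemap-page-%d' % page
            xml = sitemap_cache.get(key)
            if xml is None:
                xml = build_page(page)                            # the expensive part
                sitemap_cache.set(key, xml, 60 * 60 * 24 * 365)   # effectively forever
            return xml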


  • Node.js/ v8: How to make my own snapshot to accelerate startup

    - by Anand
    I have a node.js (v0.6.12) application that starts by evaluating a JavaScript file, startup.js. It takes a long time to evaluate startup.js, and I'd like to 'bake it in' to a custom build of Node if possible. The v8 source directory distributed with Node, node/deps/v8/src, contains a SConscript that can almost be used to do this. On line 302, we have:

        LIBRARY_FILES = '''
        runtime.js
        v8natives.js
        array.js
        string.js
        uri.js
        math.js
        messages.js
        apinatives.js
        date.js
        regexp.js
        json.js
        liveedit-debugger.js
        mirror-debugger.js
        debug-debugger.js
        '''.split()

    Those JavaScript files are present in the same directory. Something in the build process apparently evaluates them, takes a snapshot of state, and saves it as a byte string in node/out/Release/obj/release/snapshot.cc (on Mac OS). Some customization of the startup snapshot is possible by altering the SConscript. For example, I can change the definition of the builtin Date.toString by altering date.js. I can even add new global variables by adding startup.js to the list of library files, with contents global.test = 1. However, I can't put just any JavaScript code in startup.js. If it contains Date.toString = 1;, an error results even though the code is valid at the node repl:

        Build failed: -> task failed (err #2):
            {task: libv8.a SConstruct -> libv8.a}
        make: *** [program] Error 1

    And it obviously can't make use of code that depends on libraries Node adds to v8; global.underscore = require('underscore'); causes the same error. I'd ideally like a tool, customSnapshot, where customSnapshot startup.js evaluates startup.js with node and then dumps a snapshot to a file, snapshot.cc, which I can put into the node source directory. I can then build node and tell it not to rebuild the snapshot.


  • Is it possible to read data that has been separately copied to the Android sd card without having root access?

    - by icecream
    I am developing an application that needs to access data on the sd card. When I run on my development device (an Odroid with Android 2.1) I have root access and can construct the path using:

        File sdcard = Environment.getExternalStorageDirectory();
        String path = sdcard.getAbsolutePath() + File.separator + "mydata";
        File data = new File(path);
        File[] files = data.listFiles(new FilenameFilter() {
            @Override
            public boolean accept(File dir, String filename) {
                return filename.toLowerCase().endsWith(".xyz");
            }
        });

    However, when I install this on a phone (2.1) where I do not have root access, I get files == null. I assume this is because I do not have the right permissions to read the data from the sd card. I also get files == null when just trying to list files on /sdcard, so the same applies without my constructed path. Also, this app is not intended to be distributed through the app store and needs to use data copied separately to the sd card, so this is a real use-case. It is too much data to put in res/raw (I have tried; it did not work). I have also tried adding:

        <uses-permission android:name="android.permission.WRITE_EXTERNAL_STORAGE" />

    to the manifest, even though I only want to read the sd card, but it did not help. I have not found a permission type for reading the storage. There is probably a correct way to do this, but I haven't been able to find it. Any hints would be useful.


  • Creating Java classes dynamically and making them accessible across different JVMs on the network (i.e. serialization)

    - by inj.rav
    Hi. I have a requirement to create Java classes dynamically and make them accessible to different JVMs across the network. I tried to use reflection and the javassist tool, but nothing worked. Let me explain the scenario: we are using a Coherence distributed cache, which has the power to do aggregation/filtering in parallel across the cluster. For example, if a class [a dynamic class] has an amount variable and getAmount/setAmount methods, then when we execute Coherence queries, processing starts in parallel across the cluster. I tried to create classes at run time using javassist and reflection. I am able to access the class from a single JVM, but when I try to access the same class from another JVM [through the Coherence cluster], I get a class-not-found exception [as the remote JVM has no idea of this class]. I can overcome this by creating the same class dynamically on the remote JVM as well and accessing the methods, but Coherence's built-in methods/functions are not able to find the class. Could someone help me with this matter?


  • Static variable for communication among like-typed objects

    - by Dan Ray
    I have a method that asynchronously downloads images. If the images are related to an array of objects (a common use-case in the app I'm building), I want to cache them. The idea is, I pass in an index number (based on the indexPath.row of the table I'm making my way through), and I stash the image in a static NSMutableArray, keyed on the row of the table I'm dealing with. Thusly:

        @implementation ImageDownloader
        ...
        @synthesize cacheIndex;

        static NSMutableArray *imageCache;

        - (void)startDownloadWithImageView:(UIImageView *)imageView
                               andImageURL:(NSURL *)url
                            withCacheIndex:(NSInteger)index
        {
            self.theImageView = imageView;
            self.cacheIndex = index;
            NSLog(@"Called to download %@ for imageview %@", url, self.theImageView);
            if ([imageCache objectAtIndex:index]) {
                NSLog(@"We have this image cached--using that instead");
                self.theImageView.image = [imageCache objectAtIndex:index];
                return;
            }
            self.activeDownload = [NSMutableData data];
            NSURLConnection *conn = [[NSURLConnection alloc]
                initWithRequest:[NSURLRequest requestWithURL:url] delegate:self];
            self.imageConnection = conn;
            [conn release];
        }

        // build up the incoming data in self.activeDownload with calls to didReceiveData...

        - (void)connectionDidFinishLoading:(NSURLConnection *)connection
        {
            NSLog(@"Finished downloading.");
            UIImage *image = [[UIImage alloc] initWithData:self.activeDownload];
            self.theImageView.image = image;
            NSLog(@"Caching %@ for %d", self.theImageView.image, self.cacheIndex);
            [imageCache insertObject:image atIndex:self.cacheIndex];
            NSLog(@"Cache now has %d items", [imageCache count]);
            [image release];
        }

    My index is getting through okay; I can see that from my NSLog output. But even after my insertObject:atIndex: call, [imageCache count] never leaves zero. This is my first foray into static variables, so I presume I'm doing something wrong. (The above code is heavily pruned to show only the main thing of what's going on, so bear that in mind as you look at it.)


  • Rails: constraint violation on create but not on update

    - by justinbach
    Note: This is a "railsier" (and more succinct) version of this question, which was getting a little long.

    I'm getting Rails behavior on a production server that I can't replicate on the development server. The codebases are identical save for credentials and caching settings, and both are powered by Oracle 10g databases with identical schema (but different data).

    My Rails application contains a user model, which has_one registration; registration in turn has_and_belongs_to_many company_ownerships through a registration_ownerships table. Upon registering, users fill out data pertinent to all three models, including a series of checkboxes indicating what registration_ownerships might apply to their account.

    On the dev server, the registration process is seamless, no matter what data is entered. On production, however, if users check off any of the company ownership fields before submitting their registration, Oracle complains about a constraint violation on the primary key of the company_ownerships table (which is a two-field key based on company_ownership_id and registration_id) and users get the standard Rails 500 error screen. In every case, I've verified that no conflicting record on these two fields exists in the production database, so I don't know why the constraint is getting violated.

    To further confuse things, if a user registers without listing any ownerships and later goes back and modifies their account to reflect ownership data (which is done through the same interface), the application happily complies with their request and Oracle is well-behaved (this is both on production and dev).

    I've spent the past couple days trying to figure out what might be causing this problem and am reaching the end of my wits. Any advice would be greatly appreciated!


  • Where and how to start with C# and the .NET Framework?

    - by Rachel
    Currently, I have been working as a PHP developer for approximately 1 year now, and I want to learn about C# and the .NET Framework. I do not have any experience with them, and there is no firm basis as to why I am going for C#/.NET vs. Java or any other programming language; the decision is purely from a career and job-opportunities point of view. So my questions are:

    - Is my decision to go down the C#/.NET route after working for some time as a PHP developer a wise one?
    - What are good resources which I can refer to and learn from to get knowledge of C# and the .NET Framework?
    - How should I go about learning C# and the .NET Framework?
    - What technologies should I be learning or have experience with to be considered a C#/.NET developer? I am mentioning some below; please add or suggest more if I am missing any.

    Technologies:

    - C# (the language): GUI application development, Windows Control Library, delegates, data access with ADO.NET, multithreading, assemblies, Windows services
    - VB: introduction to Visual Studio .NET, Windows Control Library, data access with ADO.NET
    - ASP.NET: web technologies, controls, validation controls, state management, caching, ASP.NET configuration, ADO.NET, tracing & security in ASP.NET
    - XML programming and web services
    - Reporting: Crystal Reports, SSRS (SQL Server Reporting Services), MS Reports
    - LINQ: .NET Language-Integrated Query; LINQ to SQL (SQL integration)
    - WCF (Windows Communication Foundation): what it is, fundamental concepts, architecture
    - WPF (Windows Presentation Foundation): getting started, application development, WPF fundamentals

    What are your thoughts and suggestions on this? From a job and market perspective, what areas of C#/.NET development should I put my focus on? I know this is a very subjective and long question, but advice would be highly appreciated. Thanks.


  • Avoiding Service Locator with AutoFac 2

    - by Page Brooks
    I'm building an application which uses AutoFac 2 for DI. I've been reading that using a static IoCHelper (service locator) should be avoided.

    IoCHelper.cs:

        public static class IoCHelper
        {
            private static AutofacDependencyResolver _resolver;

            public static void InitializeWith(AutofacDependencyResolver resolver)
            {
                _resolver = resolver;
            }

            public static T Resolve<T>()
            {
                return _resolver.Resolve<T>();
            }
        }

    From answers to a previous question, I found a way to help reduce the need for using my IoCHelper in my UnitOfWork through the use of auto-generated factories. Continuing down this path, I'm curious if I can completely eliminate my IoCHelper.

    Here is the scenario: I have a static Settings class that serves as a wrapper around my configuration implementation. Since the Settings class is a dependency to a majority of my other classes, the wrapper keeps me from having to inject the settings class all over my application.

    Settings.cs:

        public static class Settings
        {
            public static IAppSettings AppSettings
            {
                get { return IoCHelper.Resolve<IAppSettings>(); }
            }
        }

        public interface IAppSettings
        {
            string Setting1 { get; }
            string Setting2 { get; }
        }

        public class AppSettings : IAppSettings
        {
            public string Setting1
            {
                get { return GetSettings().AppSettings["setting1"]; }
            }

            public string Setting2
            {
                get { return GetSettings().AppSettings["setting2"]; }
            }

            protected static IConfigurationSettings GetSettings()
            {
                return IoCHelper.Resolve<IConfigurationSettings>();
            }
        }

    Is there a way to handle this without using a service locator and without having to resort to injecting AppSettings into each and every class? Listed below are the 3 areas in which I keep leaning on ServiceLocator instead of constructor injection:

    - AppSettings
    - Logging
    - Caching


  • How to fix codesign when it says user cancelled the operation? (And I didn't)

    - by eipipuz
    I'm compiling an iPhone app meant to be distributed. It's my first app, so I followed the "iPhone Provisioning Profiles" instructions. Unfortunately it fails with this:

        CodeSign build/*_*_.app
        cd "/Users/videojuegos/Documents/*_*_"
        setenv IGNORE_CODESIGN_ALLOCATE_RADAR_7181968 /Developer/Platforms/iPhoneOS.platform/Developer/usr/bin/codesign_allocate
        setenv PATH "/Developer/Platforms/iPhoneOS.platform/Developer/usr/bin:/Developer/usr/bin:/usr/bin:/bin:/usr/sbin:/sbin"
        /usr/bin/codesign -f -s "iPhone Distribution: ******" "--resource-rules=/Users/videojuegos/Documents/*_*_/build/*_*_.app/ResourceRules.plist" --entitlements "/Users/videojuegos/Documents/*_*_/build/Unity-iPhone.build/Distribution-iphoneos/Unity-iPhone.build/*_*_.xcent" "/Users/videojuegos/Documents/*_*_/build/*_*_.app"
        /Users/videojuegos/Documents/*_*_/build/*_*_.app: The operation was cancelled by the user.
        Command /usr/bin/codesign failed with exit code 1

    I thought Keychain wasn't allowing codesign to work, but as far as I can tell that isn't the case. I also attempted running these commands from a terminal, and it failed with this message:

        /Users/videojuegos/Documents/*_*_/build/Unity-iPhone.build/Distribution-iphoneos/Unity-iPhone.build/*_*_.xcent: cannot read entitlement data

    I have made the Xcode setting from scratch three times and Googled it, with no results. I don't have any idea what else to try. Any suggestions?


  • System.OutOfMemoryException thrown at Go60505(RegexRunner) in System.Text.RegularExpressions.CompiledRegexRunner

    - by Deanvr
    Hi All, I am a C# dev working on some code for a website in VB.NET. We use a lot of caching on a 32-bit IIS 6 / Windows 2003 box, and in some cases run into OutOfMemoryException exceptions. This is the code I trace it back to, and I would like to know if anyone else has seen this:

        Public Sub CreateQueryStringNodes()
            'Check for nonstandard characters'
            Dim key As String
            Dim keyReplaceSpaces As String
            Dim r As New Regex("^[-a-zA-Z0-9_]+$", RegexOptions.Compiled)

            For Each key In HttpContext.Current.Request.Form
                If Not IsNothing(key) Then
                    keyReplaceSpaces = key.Replace(" ", "_")
                    If r.IsMatch(keyReplaceSpaces) Then
                        CreateNode(keyReplaceSpaces, HttpContext.Current.Request(key))
                    End If
                End If
            Next

            For Each key In HttpContext.Current.Request.QueryString
                If Not IsNothing(key) Then
                    keyReplaceSpaces = key.Replace(" ", "_")
                    If r.IsMatch(keyReplaceSpaces) Then
                        CreateNode(keyReplaceSpaces, HttpContext.Current.Request(key).Replace("--", "-"))
                    End If
                End If
            Next
        End Sub

    .NET Framework Version: 2.0.50727.3053; ASP.NET Version: 2.0.50727.3053

    Error:

        Exception of type 'System.OutOfMemoryException' was thrown.
        at Go60505(RegexRunner )
        at System.Text.RegularExpressions.CompiledRegexRunner.Go()
        at System.Text.RegularExpressions.RegexRunner.Scan(Regex regex, String text, Int32 textbeg, Int32 textend, Int32 textstart, Int32 prevlen, Boolean quick)
        at System.Text.RegularExpressions.Regex.Run(Boolean quick, Int32 prevlen, String input, Int32 beginning, Int32 length, Int32 startat)
        at System.Text.RegularExpressions.Regex.IsMatch(String input)
        at Xcite.Core.XML.Write.CreateQueryStringNodes()
        at Xcite.Core.XML.Write..ctor(String IncludeSessionAndPostedData)
        at mysite._Default.Page_Load(Object sender, EventArgs e)
        at System.Web.UI.Control.OnLoad(EventArgs e)
        at System.Web.UI.Control.LoadRecursive()
        at

    Thanks


  • Rate limiting a ruby file stream

    - by Matthew Savage
    I am working on a project which involves uploading flash video files to an S3 bucket from a number of geographically distributed nodes. The video files are about 2-3MB each, and we are only sending one file (per node) every ten minutes; however, the bandwidth we consume needs to be rate limited to ~20k/s, as these nodes are delivering streaming media to a CDN, and due to the locations we are only able to get 512k max upload.

    I have been looking into the AWS-S3 gem, and while it doesn't offer any kind of rate limiting, I am aware that you can pass in an IO stream. Given this, I am wondering if it might be possible to create a rate-limited stream which overrides the read method, adds in the rate-limiting logic (e.g. in its simplest form, a call to sleep between reads), and then calls out to the super of the overridden method.

    Another option I considered is hacking the code for Net::HTTP and putting the rate limiting into the send_request_with_body_stream method, which uses a while loop, but I'm not entirely sure which would be the best option.

    I have attempted extending the IO class, however that didn't work at all; simply inheriting from the class with class ThrottledIO < IO didn't do anything. Any suggestions will be greatly appreciated.
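
    The wrap-and-sleep idea, sketched out in Python rather than Ruby purely to show the shape of the delegating wrapper (ThrottledReader is a made-up name; 20 KB/s matches the figure above):

        import time

        class ThrottledReader:
            """Wrap a file-like object and cap read throughput at `rate` bytes/sec."""
            def __init__(self, fileobj, rate=20 * 1024):
                self._fileobj = fileobj
                self._rate = rate

            def read(self, size=-1):
                start = time.monotonic()
                chunk = self._fileobj.read(size)
                # Sleep just long enough that bytes/elapsed never exceeds the cap.
                expected = len(chunk) / self._rate
                elapsed = time.monotonic() - start
                if expected > elapsed:
                    time.sleep(expected - elapsed)
                return chunk

            def __getattr__(self, name):
                # Delegate everything else (close, seek, ...) to the wrapped object.
                return getattr(self._fileobj, name)

    Since the uploader only ever calls read on the object it is handed, a delegating wrapper like this avoids subclassing IO entirely, which may be why the ThrottledIO subclass saw none of the reads.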


  • How do I force or add the content length for ajax type POST requests in Firefox?

    - by Jayson
    I'm trying to POST an HTTP request using AJAX, but I'm getting a response from the Apache server's modsec_audit that says: "POST request must have a Content-Length header." I do not want to disable this in modsec_audit. This occurs only in Firefox, not IE. Further, I switched to using a POST rather than a GET to keep IE from caching my results. This is a simplified version of the code I'm using for the request; I'm not using any JavaScript framework.

        function getMyStuff() {
            var SearchString = '';
            /* build search string */
            ...
            /* now do request */
            var xhr = createXMLHttpRequest();
            var RequestString = 'someserverscript.cfm' + SearchString;
            xhr.open("POST", RequestString, true);
            xhr.onreadystatechange = function() {
                processResponse(xhr);
            }
            xhr.send(null);
        }

        function processResponse(xhr) {
            var serverResponse = xhr.responseText;
            var container = document.getElementById('myResultsContainer');
            if (xhr.readyState == 4) {
                container.innerHTML = serverResponse;
            }
        }

        function createXMLHttpRequest() {
            try { return new ActiveXObject("Msxml2.XMLHTTP"); } catch (e) {}
            try { return new ActiveXObject("Microsoft.XMLHTTP"); } catch (e) {}
            try { return new XMLHttpRequest(); } catch (e) {}
            return null;
        }

    How do I force or add the Content-Length header for AJAX-type POST requests in Firefox?


  • Valueurl Binding On Large Arrays Causes Sluggish User Interface

    - by Hooligancat
    I have a large data set (some 3500 objects) that returns from a remote server via HTTP. Currently the data is being presented in an NSCollectionView. One aspect of the data is a path back to the server for a small image that represents the data (think thumbnail for simplicity).

    Bindings work fantastically for the data that is already returned, and binding the image via a valueurl binding is easy to do. However, the user interface is very sluggish when scrolling through the data set, which makes me think that the NSCollectionView is retrieving all the image data instead of just the image data used to display the currently visible images. I was under the impression that Cocoa controls were smart enough to only retrieve data for the information that is actually being output to the user interface through lazy loading. This certainly seems to be the case with NSTableView, but I could be misguided on this thought.

    Should valueurl binding act lazily and, moreover, should it act lazily in an NSCollectionView?

    I could create a caching mechanism (in fact I already have such a thing in place for another application -- see my post here if you are interested: http://stackoverflow.com/questions/1740209/populating-nsimage-with-data-from-an-asynchronous-nsurlconnection), but I really don't want to go this route for this specific implementation if I don't have to, as the user could potentially change data sets often and may only want small subsets of the data. Any suggested approaches? Thanks!


  • Examples of monoids/semigroups in programming

    - by jkff
    It is well known that monoids are stunningly ubiquitous in programming. They are so ubiquitous and so useful that I, as a 'hobby project', am working on a system that is completely based on their properties (distributed data aggregation). To make the system useful I need useful monoids :) I already know of these:

    - Numeric or matrix sum
    - Numeric or matrix product
    - Minimum or maximum under a total order with a top or bottom element (more generally, join or meet in a bounded lattice, or even more generally, product or coproduct in a category)
    - Set union
    - Map union where conflicting values are joined using a monoid
    - Intersection of subsets of a finite set (or just set intersection if we speak about semigroups)
    - Intersection of maps with a bounded key domain (same here)
    - Merge of sorted sequences, perhaps with joining key-equal values in a different monoid/semigroup
    - Bounded merge of sorted lists (same as above, but we take the top N of the result)
    - Cartesian product of two monoids or semigroups
    - List concatenation
    - Endomorphism composition

    Now, let us define a quasi-property of an operation as a property that holds up to an equivalence relation. For example, list concatenation is quasi-commutative if we consider lists of equal length, or with identical contents up to permutation, to be equivalent. Here are some quasi-monoids and quasi-commutative monoids and semigroups:

    - Any (a+b = a or b, if we consider all elements of the carrier set to be equivalent)
    - Any satisfying predicate (a+b = the one of a and b that is non-null and satisfies some predicate P, or null if neither does; if we consider all elements satisfying P equivalent)
    - Bounded mixture of random samples (xs+ys = a random sample of size N from the concatenation of xs and ys; if we consider any two samples with the same distribution as the whole dataset to be equivalent)
    - Bounded mixture of weighted random samples

    Which others exist?
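
    The property that makes these useful for distributed aggregation is that an associative operation with an identity can be folded over arbitrarily chunked data and then re-folded over the partial results. A tiny sketch of that interface in Python (the Monoid class and instance names are made up for illustration):

        from functools import reduce

        class Monoid:
            def __init__(self, empty, combine):
                self.empty = empty        # identity element
                self.combine = combine    # associative binary operation

            def fold(self, items):
                return reduce(self.combine, items, self.empty)

        m_sum    = Monoid(0, lambda a, b: a + b)              # numeric sum
        m_concat = Monoid([], lambda a, b: a + b)             # list concatenation
        m_union  = Monoid(frozenset(), lambda a, b: a | b)    # set union

        # Associativity means chunked folds agree with one big fold:
        chunks = [[1, 2], [3, 4, 5], [6]]
        partials = [m_sum.fold(c) for c in chunks]            # [3, 12, 6]
        assert m_sum.fold(partials) == m_sum.fold([1, 2, 3, 4, 5, 6])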


  • ASP.Net Response Filter Causing SharePoint 2010 "Unexpected Error"

    - by Jason Weber
    Hello everyone, I'm debugging an HttpModule with an ASP.NET response filter that dynamically rewrites portions of rendered SharePoint WCM pages. The publishing pages render fine in SP2007 on both Server 2003 and Server 2008. However, the equivalent pages fail to render in SP2010 B2 on Server 2008 R2: the generic "An unexpected error has occurred" page is displayed.

    This error only happens when the response filter is applied to an .aspx page; other page types, such as .css, render fine on this platform. The error also happens when the response filter does not modify the page at all (pure pass-through). This KB article seems very closely related: http://support.microsoft.com/kb/2014472. However, this same error occurs with caching disabled.

    I see no related entries in any of the following: the SharePoint ULS logs, the Event Log, or Failed Request Tracing (IIS7). Running under the debugger suggests that the custom code is not raising any exceptions. Any help or insight would be greatly appreciated.


  • Can someone help me with my Django localization?

    - by alex
    I have a template which has text in it. It's located in /templates under my project directory. I'm trying to do Japanese now. I created a directory called "locale" in my project directory. Then, I set this up in my settings:

        gettext = lambda s: s
        LANGUAGES = (
            ('de', gettext('German')),
            ('en', gettext('English')),
            ('ja', gettext('Japanese')),
        )

    After that, I run this command:

        django-admin.py makemessages -l ja

    The only problem is, this doesn't work! Isn't it supposed to scan my templates with the .html extension and grab all the strings? In my locale/ja/LC_MESSAGES/django.po:

        # SOME DESCRIPTIVE TITLE.
        # Copyright (C) YEAR THE PACKAGE'S COPYRIGHT HOLDER
        # This file is distributed under the same license as the PACKAGE package.
        # FIRST AUTHOR <EMAIL@ADDRESS>, YEAR.
        #
        #, fuzzy
        msgid ""
        msgstr ""
        "Project-Id-Version: PACKAGE VERSION\n"
        "Report-Msgid-Bugs-To: \n"
        "POT-Creation-Date: 2010-05-20 22:45+0000\n"
        "PO-Revision-Date: YEAR-MO-DA HO:MI+ZONE\n"
        "Last-Translator: FULL NAME <EMAIL@ADDRESS>\n"
        "Language-Team: LANGUAGE <[email protected]>\n"
        "MIME-Version: 1.0\n"
        "Content-Type: text/plain; charset=UTF-8\n"
        "Content-Transfer-Encoding: 8bit\n"

        #: settings.py:101
        msgid "German"
        msgstr ""

        #: settings.py:102
        msgid "English"
        msgstr ""

        #: settings.py:103
        msgid "Japanese"
        msgstr ""
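
    If I understand the docs correctly, makemessages only extracts strings that are explicitly marked for translation, which would explain why the .po file above contains only the LANGUAGES entries marked with gettext in settings.py. In Python code that means the ugettext family; in templates, the i18n tags. A made-up example:

        # views.py -- hypothetical example of a marked string
        from django.utils.translation import ugettext as _

        def greeting(request):
            return _("Hello, world")   # this line would be picked up by makemessages

    and in a template: {% load i18n %} followed by {% trans "Hello, world" %}.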


  • Speeding up a SOAP-powered website

    - by ChrisRamakers
    Hi all, we're currently looking into doing some performance tweaking on a website which relies heavily on a SOAP webservice. But... our servers are located in Belgium and the webservice we connect to is located in San Francisco, so it's a long-distance connection, to say the least.

    Our website is PHP-powered, using PHP's built-in SoapClient class. On average, a call to the webservice takes 0.7 seconds, and we are doing about 3-5 requests per page. All possible request/response caching is already implemented, so we are now looking at other ways to improve the connection speed.

    This is the code which instantiates the SoapClient; what I'm looking for now is other ways/methods to improve the speed of single requests. Anyone have ideas or suggestions?

        private function _createClient()
        {
            try {
                $wsdl = sprintf($this->config->wsUrl.'?wsdl', $this->wsdl);
                $client = new SoapClient($wsdl, array(
                    'soap_version'       => SOAP_1_1,
                    'encoding'           => 'utf-8',
                    'connection_timeout' => 5,
                    'cache_wsdl'         => 1,
                    'trace'              => 1,
                    'features'           => SOAP_SINGLE_ELEMENT_ARRAYS
                ));

                $header_tags = array(
                    'username' => new SOAPVar($this->config->wsUsername, XSD_STRING,
                                              null, null, null, $this->ns),
                    'password' => new SOAPVar(md5($this->config->wsPassword), XSD_STRING,
                                              null, null, null, $this->ns)
                );
                $header_body = new SOAPVar($header_tags, SOAP_ENC_OBJECT);
                $header = new SOAPHeader($this->ns, 'AuthHeaderElement', $header_body);
                $client->__setSoapHeaders($header);
            } catch (SoapFault $e) {
                controller('Error')->error($id.': Webservice connection error '.$e->getCode());
                exit;
            }

            $this->client = $client;
            return $this->client;
        }


  • How to persist objects between requests in PHP

    - by SztupY
    I've used Rails, Merb, Django and ASP.NET MVC applications in the past. What they have in common (that is relevant to the question) is that they have code that sets up the framework. This usually means creating objects and state that is persisted until the web server is recycled (like setting up routing, or checking which controllers are available, etc.).

    As far as I know, PHP is more like a CGI script that gets compiled to some bytecode each time it's run, and after the request it's discarded. Of course you can have sessions, to persist data between requests from the same user, and as I see there are extensions like APC, with which you can persist objects between requests at the server level.

    My question is: how can one create a PHP application that works like Rails and such? I mean an application that on the first request sets up the framework, and then on the second and later requests uses the objects that are already set up. Is there some built-in caching facility in mod_php (for example, one that stores the compiled bytecode of the executed PHP applications)? Or is using APC or some similar extension the only way to solve this problem? How would you do it? Thanks.

