Search Results

Search found 23347 results on 934 pages for 'key storage'.


  • Cassandra/HBase or just MySQL: Potential problems doing the next thing

    - by alexeypro
    Say I have "user". It's the key. And I need to keep "user count". I am planning to have record with key "user" and value "0" to "9999+ ;-)" (as many as I'll have). What problems I will drive in if I use Cassandra, HBase or MySQL for that? Say, I have thousand of new updates to this "user" key, where I need to increment the value. Am I in trouble? Locked for writes? Any other way of doing that? Why this is done -- there will be a lot of "user"-like keys. Different other cases. But the idea is the same. Why keep it this way -- because I'll have more reads, so I can always get "counted value" very fast.

    Read the article

  • Changing data in a django modelform

    - by Matt Hampel
    I get data in from POST and validate it via this standard snippet: entry_formset = EntryFormSet(request.POST, request.FILES, prefix='entries') if entry_formset.is_valid(): .... The EntryFormSet modelform overrides a foreign key field widget to present a text field. That way, the user can enter an existing key (suggested via an Ajax live search), or enter a new key, which will be seamlessly added. I use this try-except block to test if the object exists already, and if it doesn't, I add it. entity_name = request.POST['entries-0-entity'] try: entity = Entity.objects.get(name=entity_name) except Entity.DoesNotExist: entity = Entity(name=entity_name) entity.slug = slugify(entity.name) entity.save() However, I now need to get that entity back into the entry_formset. It thinks that entries-0-entity is a string (that's how it came in); how can I directly access that value of the entry_formset and get it to take the object reference instead?
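
    A possible direction, sketched below only as an illustration: save the formset with commit=False, resolve or create the Entity with get_or_create, and attach the real object to each unsaved instance before saving. This assumes the overridden entity field is declared as a plain forms.CharField on the form (so cleaned_data holds the typed string); the model and field names are taken from the question and the exact wiring will depend on how the ModelForm overrides the field.

        from django.template.defaultfilters import slugify

        entry_formset = EntryFormSet(request.POST, request.FILES, prefix='entries')
        if entry_formset.is_valid():
            for form in entry_formset.forms:
                if not form.has_changed():
                    continue
                entry = form.save(commit=False)          # model instance, not yet saved
                name = form.cleaned_data['entity']       # the raw text the user typed
                entity, _created = Entity.objects.get_or_create(
                    name=name,
                    defaults={'slug': slugify(name)},
                )
                entry.entity = entity                    # hand the real object to the instance
                entry.save()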

    Read the article

  • Keyboard Simulation not working with Keyboard hook for modifier keys

    - by Eduardo Wada
    I have a piece of software that is being used to simulate a certain device on a touchscreen; this device already runs an application that receives keyboard input from the device. My software (referred to as the simulator) displays a virtual keyboard and runs the application. Thus, the simulator sends keys with Input Simulator: http://inputsimulator.codeplex.com/ And the application listens to keys with the following keyboard hook: https://svn.cyberduck.io/tags/release-4-1/source/ch/cyberduck/core/GlobalKeyboardHook.cs My problem is that what some keys on the device's hardware actually do is send a key combination (e.g. left-alt + 1) to the application, and a weird scenario is occurring: The application listens to normal keyboard input. The simulator sends keys to other applications (i.e. Visual Studio responds to the keys sent when debugging). The simulator can send single keys to the application (I can type). The simulator CANNOT send key combinations to the application (alt+1 is received as just 1 in the application). This started happening when we loaded the application's DLL into the same process as the simulator. Could there be any reason why I can't simulate key combinations for a hook in the same process? Is there any easy fix for this?

    Read the article

  • Cloud Computing Architecture Patterns: Don’t Focus on the Client

    - by BuckWoody
    Normally I try to put topics in the positive, in other words "Do this" not "Don't do that". Sometimes it's clearer to focus on what *not* to do. Popular development processes often start with screen mockups or user input descriptions. In a scale-out pattern like Cloud Computing on Windows Azure, that's the wrong place to start.

    Start with the Data
    Instead, I recommend that you start with the data that a process requires. That data might be temporary or persisted, but starting with the data and its requirements helps to define not only the storage engine you need but also drives everything from security to the integrity of the application. For instance, assume the requirements show that the user must enter their phone number, and that this datum is used in a contact management system further down the application chain. For that datum, you can determine what data type you need (U.S. only or international?), the security requirements, whether it needs ACID compliance, how it will be searched and indexed, and so on. From one small data point you can extrapolate your options for storing and processing the data. Here's the interesting part, which begins to break the patterns that we've used for decades: all of the data doesn't have the same requirements. The phone number might be best suited for a list, or an element, or a string, with either BASE or ACID requirements, based on how it is used. That means we don't have to dump everything into XML, an RDBMS, a NoSQL engine, or a flat file exclusively. In fact, one record might use all of those depending on the use-case requirements.

    Next Is Data Management
    With the data defined, we can move on to how to store the data. Again, the requirements now dictate whether we need a full relational calculus or set-based operations, or we can choose another method based on the requirements for the data. And, breaking another pattern, it's OK to store the data more than once, in more than one location. We do this all the time for reporting systems and Business Intelligence systems, so this is a pattern we need to think about even for OLTP data.

    Move to Data Transport
    How does the data get around? We can use a connection-based method, sending the data along a transport to the storage engine, but in some cases we may want to use a cache, a queue, the Service Bus, or Complex Event Processing.

    Finally, Data Processing
    Most RDBMS engines, NoSQL engines, and certainly Big Data engines not only store data but can process and manipulate it as well. It's doubtful that you'll calculate that phone number, right? Well, if you're the phone company, you most certainly will. And so we see that even once we've chosen the data type, storage and engine, the same element can have different computing requirements based on how it is used.

    Sure, We Need A Front-End At Some Point
    Not all data is entered by human hands; in fact most data isn't. We don't really need a Graphical User Interface (GUI); we need some way for a GUI to get data into and out of the systems listed earlier. But when we do need to allow users to enter or examine data, that should be left to the GUI that best fits the device the user has. Ever tried to use an application designed for a web browser on a phone? Or one designed for a tablet on a phone? It's usually quite painful. The siren song of "We'll just write one interface for all devices" is strong, and has beguiled many an unsuspecting architect. But those interfaces just don't work out. Instead, focus on the data, its transport and processing. Create API calls or a message system that allows for resilient transport to the device or interface, and let each interface do what it does best.

    References
    Microsoft Architecture Journal: http://msdn.microsoft.com/en-us/architecture/bb410935.aspx
    Patterns and Practices: http://msdn.microsoft.com/en-us/library/ff921345.aspx
    Windows Azure iOS, Android, Windows 8 Mobile Devices SDK: http://www.windowsazure.com/en-us/develop/mobile/tutorials/get-started-ios/
    Windows Azure Facebook SDK: http://ntotten.com/2013/03/14/using-windows-azure-mobile-services-with-the-facebook-sdk-for-windows-phone/

    Read the article

  • Prevent direct access to a PHP page.

    - by SyaZ
    How do I prevent my users from directly accessing pages meant for AJAX calls only? Passing a key during the AJAX call seems like a solution, where access without the key will not be processed. But it is also easy to fabricate the key, no? Curse of View Source... P.S. I'm using Apache as the web server. EDIT: To answer why: I have jQuery UI tabs in my index.php, and inside those tabs are forms with scripts, which won't work if they're accessed directly. Why a user would want to do that, I don't know; I just figure I'd be more user-friendly by preventing direct access to forms without their validation scripts.
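
    A common lightweight check (not real security, since any HTTP client can forge headers, but enough to stop casual direct browsing) is to test for the X-Requested-With: XMLHttpRequest header that jQuery adds to its same-origin AJAX calls; in PHP that means checking $_SERVER['HTTP_X_REQUESTED_WITH']. The same idea is sketched below as a minimal, self-contained Python/WSGI app purely for illustration.

        from wsgiref.simple_server import make_server

        def app(environ, start_response):
            # jQuery sets this header on its XHR requests; a plain browser visit won't.
            if environ.get("HTTP_X_REQUESTED_WITH") != "XMLHttpRequest":
                start_response("403 Forbidden", [("Content-Type", "text/plain")])
                return [b"AJAX requests only"]
            start_response("200 OK", [("Content-Type", "text/html")])
            return [b"<form>...tab content...</form>"]

        if __name__ == "__main__":
            make_server("", 8000, app).serve_forever()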

    Read the article

  • Google Chrome: JavaScript associative arrays, evaluated out of sequence

    - by Jerry
    Ok, so on a web page, I've got a JavaScript object which I'm using as an associative array. This exists statically in a script block when the page loads: var salesWeeks = { "200911" : ["11 / 2009", "Fiscal 2009"], "200910" : ["10 / 2009", "Fiscal 2009"], "200909" : ["09 / 2009", "Fiscal 2009"], "200908" : ["08 / 2009", "Fiscal 2009"], "200907" : ["07 / 2009", "Fiscal 2009"], "200906" : ["06 / 2009", "Fiscal 2009"], "200905" : ["05 / 2009", "Fiscal 2009"], "200904" : ["04 / 2009", "Fiscal 2009"], "200903" : ["03 / 2009", "Fiscal 2009"], "200902" : ["02 / 2009", "Fiscal 2009"], "200901" : ["01 / 2009", "Fiscal 2009"], "200852" : ["52 / 2008", "Fiscal 2009"], "200851" : ["51 / 2008", "Fiscal 2009"] }; The order of the key/value pairs is intentional, as I'm turning the object into an HTML select box such as this: <select id="ddl_sw" name="ddl_sw"> <option value="">== SELECT WEEK ==</option> <option value="200911">11 / 2009 (Fiscal 2009)</option> <option value="200910">10 / 2009 (Fiscal 2009)</option> <option value="200909">09 / 2009 (Fiscal 2009)</option> <option value="200908">08 / 2009 (Fiscal 2009)</option> <option value="200907">07 / 2009 (Fiscal 2009)</option> <option value="200906">06 / 2009 (Fiscal 2009)</option> <option value="200905">05 / 2009 (Fiscal 2009)</option> <option value="200904">04 / 2009 (Fiscal 2009)</option> <option value="200903">03 / 2009 (Fiscal 2009)</option> <option value="200902">02 / 2009 (Fiscal 2009)</option> <option value="200901">01 / 2009 (Fiscal 2009)</option> <option value="200852">52 / 2008 (Fiscal 2009)</option> <option value="200851">51 / 2008 (Fiscal 2009)</option> </select> ...with code that looks like this (snipped from a function): var arr = []; arr.push( "<select id=\"ddl_sw\" name=\"ddl_sw\">" + "<option value=\"\">== SELECT WEEK ==</option>" ); for(var key in salesWeeks) { arr.push( "<option value=\"" + key + "\">" + salesWeeks[key][0] + " (" + salesWeeks[key][1] + ")" + "<\/option>" ); } arr.push("<\/select>"); return arr.join(""); This all works fine in IE, FireFox and Opera. However in Chrome, the order comes out all weird: <select id="ddl_sw" name="ddl_sw"> <option value="">== SELECT WEEK ==</option> <option value="200852">52 / 2008 (Fiscal 2009)</option> <option value="200908">08 / 2009 (Fiscal 2009)</option> <option value="200906">06 / 2009 (Fiscal 2009)</option> <option value="200902">02 / 2009 (Fiscal 2009)</option> <option value="200907">07 / 2009 (Fiscal 2009)</option> <option value="200904">04 / 2009 (Fiscal 2009)</option> <option value="200909">09 / 2009 (Fiscal 2009)</option> <option value="200903">03 / 2009 (Fiscal 2009)</option> <option value="200905">05 / 2009 (Fiscal 2009)</option> <option value="200901">01 / 2009 (Fiscal 2009)</option> <option value="200910">10 / 2009 (Fiscal 2009)</option> <option value="200911">11 / 2009 (Fiscal 2009)</option> <option value="200851">51 / 2008 (Fiscal 2009)</option> </select> NOTE: This order, though weird, does not change on subsequent refreshes. It's always in this order. So, what is Chrome doing? Some optimization in how it processes the loop? In the first place, am I wrong to rely on the order that the key/value pairs are declared in any associative array? I never questioned it before, I just assumed the order would hold because this technique has always worked for me in the other browsers. But I suppose I've never seen it stated anywhere that the order is guaranteed. Maybe it's not? Any insight would be awesome. Thanks.
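
    For what it's worth, the ECMAScript spec of that era did not guarantee any particular for-in enumeration order, so relying on the declaration order of object keys is not portable; the usual fix is to keep an explicit ordered sequence of [key, label] pairs (an array) and iterate that. The same pattern is illustrated below in Python (chosen for consistency with the other sketches on this page; plain dictionaries before Python 3.7 carried the same ordering caveat), with a shortened copy of the data.

        # Ordered-pairs pattern: a list preserves declaration order, unlike a plain
        # pre-3.7 dict (or a JavaScript object enumerated with for-in).
        sales_weeks = [
            ("200911", ("11 / 2009", "Fiscal 2009")),
            ("200910", ("10 / 2009", "Fiscal 2009")),
            ("200852", ("52 / 2008", "Fiscal 2009")),
        ]

        options = ['<option value="">== SELECT WEEK ==</option>']
        for key, (week, fiscal) in sales_weeks:  # iteration order == declaration order
            options.append('<option value="%s">%s (%s)</option>' % (key, week, fiscal))
        print("<select id=\"ddl_sw\" name=\"ddl_sw\">%s</select>" % "".join(options))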

    Read the article

  • Python: Getting INVALID response from PayPal's Sandbox IPN, slowly going insane...

    - by thepeanut
    Hi All I am trying to implement a simple online payment system using PayPal, however I have tried everything I know and am still getting an INVALID response. I know it's nothing too simple, because I get a VERIFIED response when using the IPN simulator. I have tried putting the items into a dict first, I have tried fixing the encoding, and still nothing. PayPal says the reasons for an INVALID response could be: Sending wrong items or in wrong order (pretty sure it's not this) Sending to the wrong address (definitely not this) Encoding items incorrectly (I dont think it's this, set encoding to UTF-8 on both paypal and my script) The following is the snippet concerned: f = cgi.FieldStorage() newparams = 'cmd=_notify-validate' for key in f.keys(): val = f[key].value newparams += '&' + urlencode({key: val.encode('utf-8')}) req = urllib2.Request(PP_URL, newparams) req.add_header("Content-type", "application/x-www-form-urlencoded") http = urllib2.urlopen(req) ret = http.read() fi.write(ret + '\n') if ret == 'VERIFIED': #*do stuff* Can anyone suggest anything I can do to fix this?! Cheers Sam
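
    One thing worth trying, since PayPal's check is sensitive to parameter order and encoding: echo the raw POST body back verbatim instead of rebuilding it from cgi.FieldStorage. A minimal Python 2 CGI sketch of that idea is below; PP_URL stands in for the sandbox endpoint you are already using, and error handling is omitted.

        import os
        import sys
        import urllib2

        PP_URL = "https://www.sandbox.paypal.com/cgi-bin/webscr"  # substitute your own endpoint

        # Read the POST body exactly as PayPal sent it: same order, same encoding.
        raw = sys.stdin.read(int(os.environ.get("CONTENT_LENGTH", 0) or 0))
        req = urllib2.Request(PP_URL, "cmd=_notify-validate&" + raw)
        req.add_header("Content-type", "application/x-www-form-urlencoded")
        ret = urllib2.urlopen(req).read()

        if ret == "VERIFIED":
            # handle the payment here
            pass
        else:
            # log ret somewhere so the INVALID response can be inspected
            pass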

    Read the article

  • jQuery, PHP, AJAX, "tu" variable being posted for no reason, shows in var_dump()

    - by Mattis
    A jQuery AJAX request .post()s data to page.php, which creates $res and var_dump()s it. $res: $res = array(); foreach ($_REQUEST as $key => $value) { if($key){ $res[$key] = $value; } } var_dump($res): array(4) { ["text1"]=> string(6) "mattis" ["text2"]=> string(4) "test" ["tu"]=> string(32) "deb6adbbff4234b5711cc4368c153bc4" ["PHPSESSID"]=> string(32) "cda24363cb9d3226bd37b2577ed0bc0b" } My JavaScript only sends text1 and text2: $.post("page.php",{ text1:"mattis", text2:"test" } What is the "tu" variable being sent? Apparently it's very similar to the session ID, but I've never seen it before. EDIT: It is sent in IE but not in FF.

    Read the article

  • So how can I control the page contents loading sequence in dojo

    - by David Zhao
    Hi there, I'm using dojo for our UIs and would like to load certain parts of the page contents in sequence. For example, for a certain stock, I'd like to load general stock information, such as ticker, company name, key stats, etc., and a grid with the last 30 days of open/close prices. Different contents will be fetched from the server separately. Now, I'd like to first load the grid so the user has something to look at, then, say, start loading the key stats, which is a large data set that takes longer to load. How do I do this? I tried: dojo.addOnLoad(function() { startGrid(); //mock grid startup function which works fine getKeyStats(); //mock key stat getter function also works fine }); But dojo is running getKeyStats(), then startGrid() here for some reason, and the sequence doesn't seem to matter here. So how can I control the loading sequence at will? Thanks in advance! David

    Read the article

  • Python hashable dicts

    - by TokenMacGuy
    As an exercise, and mostly for my own amusement, I'm implementing a backtracking packrat parser. The inspiration for this is i'd like to have a better idea about how hygenic macros would work in an algol-like language (as apposed to the syntax free lisp dialects you normally find them in). Because of this, different passes through the input might see different grammars, so cached parse results are invalid, unless I also store the current version of the grammar along with the cached parse results. (EDIT: a consequence of this use of key-value collections is that they should be immutable, but I don't intend to expose the interface to allow them to be changed, so either mutable or immutable collections are fine) The problem is that python dicts cannot appear as keys to other dicts. Even using a tuple (as I'd be doing anyways) doesn't help. >>> cache = {} >>> rule = {"foo":"bar"} >>> cache[(rule, "baz")] = "quux" Traceback (most recent call last): File "<stdin>", line 1, in <module> TypeError: unhashable type: 'dict' >>> I guess it has to be tuples all the way down. Now the python standard library provides approximately what i'd need, collections.namedtuple has a very different syntax, but can be used as a key. continuing from above session: >>> from collections import namedtuple >>> Rule = namedtuple("Rule",rule.keys()) >>> cache[(Rule(**rule), "baz")] = "quux" >>> cache {(Rule(foo='bar'), 'baz'): 'quux'} Ok. But I have to make a class for each possible combination of keys in the rule I would want to use, which isn't so bad, because each parse rule knows exactly what parameters it uses, so that class can be defined at the same time as the function that parses the rule. But combining the rules together is much more dynamic. In particular, I'd like a simple way to have rules override other rules, but collections.namedtuple has no analogue to dict.update(). Edit: An additional problem with namedtuples is that they are strictly positional. Two tuples that look like they should be different can in fact be the same: >>> you = namedtuple("foo",["bar","baz"]) >>> me = namedtuple("foo",["bar","quux"]) >>> you(bar=1,baz=2) == me(bar=1,quux=2) True >>> bob = namedtuple("foo",["baz","bar"]) >>> you(bar=1,baz=2) == bob(bar=1,baz=2) False tl'dr: How do I get dicts that can be used as keys to other dicts? Having hacked a bit on the answers, here's the more complete solution I'm using. Note that this does a bit extra work to make the resulting dicts vaguely immutable for practical purposes. Of course it's still quite easy to hack around it by calling dict.__setitem__(instance, key, value) but we're all adults here. class hashdict(dict): """ hashable dict implementation, suitable for use as a key into other dicts. >>> h1 = hashdict({"apples": 1, "bananas":2}) >>> h2 = hashdict({"bananas": 3, "mangoes": 5}) >>> h1+h2 hashdict(apples=1, bananas=3, mangoes=5) >>> d1 = {} >>> d1[h1] = "salad" >>> d1[h1] 'salad' >>> d1[h2] Traceback (most recent call last): ... 
KeyError: hashdict(bananas=3, mangoes=5) based on answers from http://stackoverflow.com/questions/1151658/python-hashable-dicts """ def __key(self): return tuple(sorted(self.items())) def __repr__(self): return "{0}({1})".format(self.__class__.__name__, ", ".join("{0}={1}".format( str(i[0]),repr(i[1])) for i in self.__key())) def __hash__(self): return hash(self.__key()) def __setitem__(self, key, value): raise TypeError("{0} does not support item assignment" .format(self.__class__.__name__)) def __delitem__(self, key): raise TypeError("{0} does not support item assignment" .format(self.__class__.__name__)) def clear(self): raise TypeError("{0} does not support item assignment" .format(self.__class__.__name__)) def pop(self, *args, **kwargs): raise TypeError("{0} does not support item assignment" .format(self.__class__.__name__)) def popitem(self, *args, **kwargs): raise TypeError("{0} does not support item assignment" .format(self.__class__.__name__)) def setdefault(self, *args, **kwargs): raise TypeError("{0} does not support item assignment" .format(self.__class__.__name__)) def update(self, *args, **kwargs): raise TypeError("{0} does not support item assignment" .format(self.__class__.__name__)) def __add__(self, right): result = hashdict(self) dict.update(result, right) return result if __name__ == "__main__": import doctest doctest.testmod()
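
    For the cache-key use alone, a lighter-weight alternative is to skip the wrapper class and key the cache on frozenset(rule.items()), which is hashable and insensitive to insertion order as long as the values themselves are hashable. A minimal sketch:

        # Minimal alternative: use a frozenset of the dict's items as part of the key.
        cache = {}
        rule = {"foo": "bar", "baz": "quux"}

        cache[(frozenset(rule.items()), "baz")] = "parse result"

        lookup = {"baz": "quux", "foo": "bar"}              # same contents, different order
        print(cache[(frozenset(lookup.items()), "baz")])    # -> 'parse result'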

    Read the article

  • Send custom headers with UIWebView loadRequest

    - by Thomas Clayson
    I want to be able to send some extra headers with my UIWebView loadRequest method. I have tried: NSMutableURLRequest *req = [NSMutableURLRequest requestWithURL:[NSURL URLWithString:@"http://www.reliply.org/tools/requestheaders.php"]]; [req addValue:@"hello" forHTTPHeaderField:@"aHeader"]; [self.theWebView loadRequest:req]; I have also tried subclassing the UIWebView and intercepting the - (BOOL)webView:(UIWebView *)webView shouldStartLoadWithRequest:(NSURLRequest *)request navigationType:(UIWebViewNavigationType)navigationType method. In that method I had a block of code which looked like this: NSMutableURLRequest *newRequest = [request mutableCopy]; for(NSString *key in [customHeaders allKeys]) { [newRequest setValue:[customHeaders valueForKey:key] forHTTPHeaderField:key]; } [self loadRequest:newRequest]; But for some unknown reason it was causing the web view to not load anything (blank frame) and the error message NSURLErrorCancelled (-999) comes up (all known fixes don't fix it for me). So I am at a loss as to what to do. How can I send a custom header along with a UIWebView request? Many thanks!

    Read the article

  • LLBLGen: Copy table from one database to another

    - by StreamT
    I have two databases (SQL Server 2005) with the same table schemas. I need to copy data from the source table to the destination with some modification of the data along the way. And if the destination table already contains some data, then rows from the source table should not overwrite it, but be added to the destination table. In our project we use LLBLGen and LINQ to LLBLGen as the ORM solution. Example:

    Table 1:       Table 2:       Table 1 (Result =>):
    Key  Value     Key  Value     Key  Value
    1    One       1    T2_One    1    One
    2    Two       2    T2_Two    2    Two
    3    Three                    3    Three
                                  4    T2_One
                                  5    T2_Two

    Read the article

  • PBKDF2-HMAC-SHA1

    - by Jason
    To generate a valid pairwise master key for a WPA2 network, a router uses the PBKDF2-HMAC-SHA1 algorithm. I understand that the SHA1 function is performed 4096 times to derive the PMK; however, I have two questions about the process. Excuse the pseudo code. 1) How is the input to the first instance of the SHA1 function formatted? SHA1("network_name"+"network_name_length"+"network_password") Is it formatted in that order, and is it the hex value of the network name, length and password, or straight ASCII? Then, from what I gather, the 160-bit digest received is fed straight into another round of hashing without any additional salting. Like this: SHA1("160-bit digest from last round of hashing") Rinse and repeat. 2) Once this occurs 4096 times, 256 bits of the output are used as the pairwise master key. What I don't understand is that if SHA1 produces 160-bit output, how does the algorithm arrive at the 256 bits required for a key? Thanks for the help.
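
    For reference: PBKDF2 derives keys longer than the underlying digest by computing one 4096-iteration HMAC chain per output block and concatenating the blocks (so a 256-bit WPA2 PMK is two SHA1-sized blocks, truncated to 32 bytes), and the salt is the SSID as raw bytes, not hex. The whole derivation is available in Python's standard library, sketched below with made-up network credentials:

        import binascii
        import hashlib

        ssid = b"ExampleNet"           # made-up SSID; the salt is its raw bytes
        passphrase = b"correct horse"  # made-up passphrase

        # WPA2 PMK: PBKDF2-HMAC-SHA1, 4096 iterations, 32-byte (256-bit) output.
        pmk = hashlib.pbkdf2_hmac("sha1", passphrase, ssid, 4096, dklen=32)
        print(binascii.hexlify(pmk).decode())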

    Read the article

  • Announcing the New Windows Azure Web Sites Shared Scaling Tier

    - by Clint Edmonson
    Windows Azure Web Sites has added a new pricing tier that will solve the #1 blocker for the web development community. The shared tier now supports custom domain names mapped to shared-instance web sites. This post will outline the plan changes and elaborate on how the new pricing model makes Windows Azure Web Sites an even richer option for web development shops of all sizes.

                   Free                 Shared                            Reserved
    # of Sites     10                   100                               100
    Egress         165MB/Day            5GB/Month Included                5GB/Month Included
    Storage        1GB                  1GB                               10GB
    Throttling     CPU/Memory/Egress    CPU/Memory                        Unlimited
    Price          Free                 $.02/hr per site, per instance    $.08/hr per core

    Setting the Stage
    In June, we released the first public preview of Windows Azure Web Sites, which gave web developers a great platform on which to get web sites running using their web development framework of choice. PHP, Node.js, classic ASP, and ASP.NET developers can all utilize the Windows Azure platform to create and launch their web sites. Likewise, these developers have a series of data storage options using Windows Azure SQL Databases, MySQL, or Windows Azure Storage. The Windows Azure Web Sites free offer enabled startups to get their site up and running on Windows Azure with a minimal investment, and with multiple deployment and continuous integration features such as Git, Team Foundation Services, FTP, and Web Deploy. The response to the Windows Azure Web Sites offer has been overwhelmingly positive. Since the addition of the service on June 12th, tens of thousands of web sites have been deployed to Windows Azure and the volume of adoption is increasing every week.

    Preview Feedback
    In spite of the growth and success of the product, the community has had questions about features lacking in the free preview offer. The main question web developers asked regarding Windows Azure Web Sites relates to the lack of the free offer's support for domain name mapping. During the preview launch period, customer feedback made it obvious that the lack of domain name mapping support was an area of concern. We're happy to announce that this #1 request has been delivered as a feature of the new shared plan.

    New Shared Tier Portal Features
    In the screen shot below, the "Scale" tab in the portal shows the new tiers – Free, Shared, and Reserved – and gives the user the ability to quickly move any of their free web sites into the shared tier. With a single mouse-click, the user can move their site into the shared tier. Once a site has been moved into the shared tier, a new Manage Domains button appears in the bottom action bar of the Windows Azure Portal, giving site owners the ability to manage their domain names for a shared site. This button brings up the domain-management dialog, which can be used to enter a specific domain name that will be mapped to the Windows Azure Web Site.

    Shared Tier Benefits
    Startups and large web agencies will both benefit from this plan change. Here are a few examples of scenarios which fit the new pricing model:
    - Startups no longer have to select the reserved plan to map domain names to their sites. Instead, they can use the free option to develop their sites and choose on a site-by-site basis which sites they elect to move into the shared plan, paying only for the sites that are finished and ready to be domain-mapped.
    - Agencies who manage dozens of sites will realize a lower cost of ownership over the long term by moving their sites into reserved mode. Once multi-site companies reach a certain price point in the shared tier, it is much more cost-effective to move sites to a reserved tier.

    Long-term, it's easy to see how the new shared pricing tier makes Windows Azure Web Sites a great choice for both startups and agency customers, as it enables rapid growth and upgrades while keeping the cost to a minimum. Large agencies will be able to have all of their sites in their own instances, and startups will have the capability to scale up to multiple shared instances for minimal cost and eventually move to reserved instances without worrying about the need to continually incur additional costs. Customers can feel confident they have the power of the Microsoft Windows Azure brand and our world-class support, at prices competitive in the market. Plus, in addition to realizing the cost savings, they'll have the whole family of Windows Azure features available.

    Continuous Deployment from GitHub and CodePlex
    Along with this new announcement are two other exciting new features. I'm proud to announce that web developers can now publish their web sites directly from CodePlex or GitHub.com repositories. Once connections are established between these services and your web sites, Windows Azure will automatically be notified every time a check-in occurs. This will then trigger Windows Azure to pull the source and compile/deploy the new version of your app to your web site automatically. Walk-through videos on how to perform these functions are below:
    - Publishing to an Azure Web Site from CodePlex
    - Publishing to an Azure Web Site from GitHub.com

    These changes, as well as the enhancements to the reserved plan model, make Windows Azure Web Sites a truly competitive hosting option. It's never been easier or cheaper for a web developer to get up and running. Check out the free Windows Azure web site offering and see for yourself. Stay tuned to my twitter feed for Windows Azure announcements, updates, and links: @clinted

    Read the article

  • ruby rails loop causes server freeze

    - by Darkerstar
    Hi all: I am working on a Ruby on Rails project on Windows. I have Ruby 1.8.6 and Rails 2.3.5 installed. Everything was fine until I tried to implement a comet process. I have the following code written to respond to a long-poll JavaScript request, but every time this function is called it hangs the whole Rails server; no second request can get in until the timeout. (I know there is Juggernaut, but I'd like to implement one myself first.) Is this due to my server setup? The project will be deployed on a Linux server with an Nginx and Passenger setup; will it suffer the same problem? def comet_hook timeout(5) do while true do key = 'station_' + station_id.to_s + '_message_lastwrite' if Rails.cache.exist?(key) @cache_time = DateTime.parse(Rails.cache.read(key)) if @cache_time > hook_start @messages = @station.messages_posted_after(hook_start) hook_start = @cache_time break end end end ... end Also, with the Rails memory store cache I keep getting a "cannot modify frozen object" error, so the above script only worked for me when I switched to the file cache. :(

    Read the article

  • Databinding race condition

    - by Stephen Price
    I have a login form (using ChildWindow) and have implemented a Keyup event handler on the passwordbox. If the key is enter then it sets the ChildWindow ResultDialog to true. What seems to be happening is the databinding on the Passwordbox is not happening before the childwindow is closed so the Password property on my Login control is null. I've tried using KeyUp and Keydown, as well as using a buttonAutoPeer to invoke a click on the Ok button. I've also tried setting the focus to the OKbutton before setting the DialogResult (which closes the window). private void PasswordBox_KeyUp(object sender, KeyEventArgs e) { if (e.Key == Key.Enter) { if (UsernameBox.Text != userPrompt && !string.IsNullOrEmpty(PasswordBox.Password.Trim())) { this.DialogResult = true; } else { UsernameBox.Focus(); } } }

    Read the article

  • IBM Keynote: (hardware,software)–>{IBM.java.patterns}

    - by Janice J. Heiss
    On Sunday evening, September 30, 2012, Jason McGee, IBM Distinguished Engineer and Chief Architect for Cloud Computing, along with John Duimovich, IBM Distinguished Engineer and Java CTO, gave an information- and idea-rich keynote that left Java developers with much to ponder. Their focus was on the challenges of making Java more efficient and productive given the hardware and software environments of 2012. "One idea that is very interesting is the idea of multi-tenancy," said McGee, "and how we can move up the spectrum. In traditional systems, we ran applications on dedicated middleware, operating systems and hardware. A lot of customers still run that way. Now people introduce hardware virtualization and share the hardware. That is good but there is a lot more we can do. We can share middleware and the application itself." McGee challenged developers to better enable the Java language to function in these higher-density models. He spoke about the need to describe patterns that help us grasp the full environment that an application needs, whether it's a web or full enterprise application. Developers need to understand the resources that an application interacts with in a way that is simple and straightforward. The task is then to automate that deployment so that the complexity of infrastructure can be bypassed and developers can live in a simpler world where the cloud can automatically configure the needed environment. McGee argued that the key, something IBM has been working on, is to use a simpler pattern that allows a cloud-based architecture to embrace the entire infrastructure required for an application and make it highly available, scalable and able to recover from failure. The cloud-based architecture would automate the complexity of setting up and managing the infrastructure. IBM has been trying to realize this vision for customers so they can describe their Java application environment simply and allow the cloud to automate the deployment and management of applications. "The point," explained McGee, "is to package the executable used to describe applications, to drop it into a shared system and let that system provide some intelligence about how to deploy and manage those applications."

    John Duimovich on Improvements in Java
    McGee then brought onstage IBM's Distinguished Engineer and CTO for Java, John Duimovich, who showed the audience ways to deploy Java applications more efficiently. Duimovich explained that, "When you run lots of copies of Java in the cloud or any hypervisor virtualized system, there are a lot of duplications of code and jar files. IBM has a facility called 'shared classes' where we put shared code, read-only artefacts in a cache that is sharable across hypervisors." By putting JIT code in ahead of time, he explained, the application server will use 20% less memory and operate 30% faster. He described another example of how the JVM allows for the maximum amount of sharing, managing the tenants and file sockets and memory use through throttling and control. Duimovich touched on the "thin is in" model and IBM's Liberty Profile and lightweight runtime for the cloud, which allows for greater efficiency in interacting with the cloud. Duimovich discussed the confusion Java developers experience when, for example, the hypervisor tells them that they have 8 and then 4 and then 16 cores. "Because hypervisors are virtualized, they can change based on resource needs across the hypervisor layer. You may have 10 instances of an operating system and you may need to reallocate memory," explained Duimovich. He showed how to resize LPARs, reallocate CPUs and migrate applications as needed. He explained how application servers can resize thread pools and better use resources based on information from the hypervisors.

    Java Challenges in Hardware and Software
    McGee ended the keynote with a summary of upcoming hardware and software challenges for the Java platform. He noted that one reason developers love Java is that it allows them to ignore differences in hardware. He stated that the most important things happening in hardware were in network and storage – in developments such as the speed of SSD, the exploitation of high-speed, low-latency networking, and recent developments such as storage-class memory and non-volatile main memory. "So we are challenged to maintain the benefits of Java and the abstraction it provides from hardware while still exploiting the new innovations in hardware," said McGee. McGee discussed transactional messaging applications, where developers who send messages transactionally persist each message to storage, something traditionally done by backing messages on spinning disks, an approach that is mostly outdated. "Now," he pointed out, "we would use SSD and store it in Flash and get 70,000 messages a second. If we stored it using a PCI express-based flash memory device, it is still Flash but put on a PCI express bus on a card closer to the CPU. This way I get 300,000 messages a second and 25% improvement in latency." McGee's central point was that hardware has a huge impact on the performance and scalability of applications. New technologies are enabling developers to build classes of Java applications previously unheard of. "We need to be able to balance these things in Java – we need to maintain the abstraction but also be able to exploit the evolution of hardware technology," said McGee. According to McGee, IBM's current focus is on systems wherein hardware and software are shipped together in what are called Expert Integrated Systems – systems that are pre-optimized and pre-integrated. McGee closed IBM's engaging and thought-provoking keynote by pointing out that the use of Java in complex applications is increasingly being augmented by a host of other languages with strong communities around them – JavaScript, JRuby, Scala, Python and so forth. Java developers now must understand the strengths and weaknesses of such newcomers as applications increasingly involve a complex interconnection of languages.

    Read the article

  • jQuery won't parse my JSON from AJAX query

    - by littlecharva
    Hi, I'm having difficulty parsing some JSON data returned from my server using jQuery.ajax(). To perform the AJAX I'm using: $.ajax({ url: myUrl, cache: false, dataType: "json", success: function(data){ ... }, error: function(e, xhr){ ... } }); And if I return an array of items then it works fine: [ { title: "One", key: "1" }, { title: "Two", key: "2" } ] The success function is called and receives the correct object. However, when I'm trying to return a single object: { title: "One", key: "1" } The error function is called and xhr contains 'parsererror'. I've tried wrapping the JSON in parentheses on the server before sending it down the wire, but it makes no difference. Yet if I paste the content into a string in JavaScript and then use the eval() function, it evaluates it perfectly. Any ideas what I'm doing wrong? Anthony
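
    One thing worth checking: strictly speaking, neither payload is valid JSON, because JSON requires object keys to be double-quoted strings. eval() happily accepts unquoted keys (they are legal JavaScript object literals), but a strict JSON parser rejects them, which matches the 'parsererror' symptom. A quick way to test a payload, using Python's json module purely for illustration:

        import json

        good = '{"title": "One", "key": "1"}'
        bad = '{ title: "One", key: "1" }'   # keys not quoted -> not valid JSON

        print(json.loads(good))              # {'title': 'One', 'key': '1'}
        try:
            json.loads(bad)
        except ValueError as exc:            # json's decode error subclasses ValueError
            print("rejected:", exc)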

    Read the article

  • Visual Studio Load Testing using Windows Azure

    - by Tarun Arora
    In my opinion the biggest adoption barrier in performance testing on smaller projects is not the tooling but the high infrastructure and administration cost that comes with this phase of testing. If a reusable solution were possible and infrastructure management weren't as expensive, adoption would certainly spike. It certainly is possible if you bring Visual Studio and Windows Azure into the equation. It is possible to run your test rig in the cloud without getting tangled in SCVMM or Lab Management. All you need is an active Azure subscription, a Windows Azure endpoint-enabled developer workstation running Visual Studio Ultimate on premise, and Windows Azure endpoint-enabled worker roles on Azure compute instances set up to run as test controllers and test agents. My test rig is running SQL Server 2012 and Visual Studio 2012 RC agents. The beauty is that the solution is reusable: you can open the Azure project, change the subscription and certificate, click publish and *BOOM*, in less than 15 minutes you could have your own test rig running in the cloud. In this blog post I intend to show you how you can use the power of Windows Azure to effectively abstract away the administration cost of infrastructure management and lower the total cost of load and performance testing. As a bonus, I will share a reusable solution that you can use to automate test rig creation for both VS 2010 agents as well as VS 2012 agents.

    Introduction
    The slide show below should help you understand the high-level details of what we are trying to achieve: Leveraging Azure for Performance Testing (slides from Avanade).

    Scenario 1 – Running a Test Rig in Windows Azure
    To start off with the basics, in the first scenario I plan to discuss how to:
    - Automate deployment & configuration of Windows Azure Worker Roles for the Test Controller and Test Agent
    - Automate deployment & configuration of the SQL database on the Test Controller Worker Role
    - Scale Test Agents on demand
    - Create a Web Performance Test and a simple Load Test
    - Manage Test Controllers right from Visual Studio on the on-premise developer workstation
    - View results of the Load Test
    - Clean up
    - Have the above work in the shape of a reusable solution for both a VS2010 and a VS2012 test rig

    Scenario 2 – The Scaled-Out Test Rig and Sharing Data Using SQL Azure
    A scaled-out version of this implementation would involve multiple test rigs running in the cloud; in this scenario I will show you how to sync the load test database from these distributed test rigs into one SQL Azure database using Azure sync. The selling point for this scenario is being able to collate the load test efforts from across the organization into one data store.
    - Deploy multiple test rigs using the reusable solution from scenario 1
    - Set up and configure Windows Azure Sync
    - Test the SQL Azure load test result database created as a result of Windows Azure Sync
    - Clean up
    - Have the above work in the shape of a reusable solution for both a VS2010 and a VS2012 test rig

    The Ingredients
    Though with an active MSDN Ultimate subscription you would already have access to everything and more, you will essentially need the below to try out the scenarios:
    1. Windows Azure Subscription
    2. Windows Azure Storage – Blob Storage
    3. Windows Azure Compute – Worker Role
    4. SQL Azure Database
    5. SQL Data Sync
    6. Windows Azure Connect – Endpoints
    7. SQL 2012 Express or SQL 2008 R2 Express
    8. Visual Studio All Agents 2012 or Visual Studio All Agents 2010
    9. A developer workstation set up with Visual Studio 2012 Ultimate or Visual Studio 2010 Ultimate
    10. Visual Studio Load Test Unlimited Virtual User Pack

    Walkthrough
    To set up the test rig in the cloud, the test controller, test agent and SQL Express installers need to be available when the worker role setup starts; the easiest and most efficient way is to pre-upload the required software into Windows Azure Blob storage. SQL Express, the test controller and the test agent expose various switches which we can take advantage of, including the quiet install switch. Once all three have been installed, the test controller needs to be registered with the test agents and the SQL database needs to be associated with the test controller. By enabling Windows Azure Connect on the machines in the cloud and the developer workstation on premise, we create a virtual network amongst the machines, enabling two-way communication. All of the above can be done programmatically; let's see step by step how…

    Scenario 1: Video Walkthrough – Leveraging Windows Azure for Performance Testing
    Scenario 2: Work in progress, watch this space for more…

    Solution
    If you are still reading and are interested in the solution, drop me an email with your Windows Live ID. I'll add you to my TFS preview project, which has a reusable solution for both VS 2010 and VS 2012 test rigs as well as guidance and demo performance tests.

    Conclusion
    Other posts and resources available here. Possibilities… Endless!

    Read the article

  • Purpose of lua_lock and lua_unlock?

    - by anon
    What is the point of lua_lock and lua_unlock? The following implies it's important: LUA_API void lua_gettable (lua_State *L, int idx) { StkId t; lua_lock(L); t = index2adr(L, idx); api_checkvalidindex(L, t); luaV_gettable(L, t, L->top - 1, L->top - 1); lua_unlock(L); } LUA_API void lua_getfield (lua_State *L, int idx, const char *k) { StkId t; TValue key; lua_lock(L); t = index2adr(L, idx); api_checkvalidindex(L, t); setsvalue(L, &key, luaS_new(L, k)); luaV_gettable(L, t, &key, L->top); api_incr_top(L); lua_unlock(L); } The following implies it does nothing: #define lua_lock(L) ((void) 0) #define lua_unlock(L) ((void) 0) Please enlighten.

    Read the article

  • Linq to SQL Error with Many-to-Many table

    - by Matt Connolly
    I am getting the following error with linq-to-sql when I try to access a many-to-many collection: Members 'Int32 XXX' and 'Int32 YYY' both marked as IsPrimaryKey and IsDbGenerated. The statement is true, in that both of those columns are primary key integers with identity insert. The table I am trying to access has a foreign key to both YYY and ZZZ, and then ZZZ has a foreign key to XXX. There doesn't appear to be anything wrong with my data structure. I tried setting "Child Property" to false on the ZZZ-YYY relationship, but it didn't change anything.

    Read the article

  • Racket: change dotted pair to list

    - by user2963128
    I have a program that recursively calls a hashtable and prints out data from it. Unfortunately my hashtable seems to be saving data as dotted pairs, so when I query the hashtable I get an error saying that there is no data for it, because it's trying to search the hashtable for a dotted pair instead of a list. Is there an easy way to make the dotted pair into a regular list? I.e., I'm getting '("was" . "beginning") instead of '("was" "beginning"). Is there a way to change this without rewriting how my hashtable stores stuff? I'm using the let form to set a variable to this and then calling another function based on this variable: (let ((data (list-ref (hash-ref Ngram-table key) (random (length (hash-ref Ngram-table key)))))) Is there a way to make the value stored in data just a list like '("var1" "var2") instead of a dotted pair? Edit: I'm getting dotted pairs because I'm using let to set data to part of the hashtable's key and one of the elements in that hash.

    Read the article

  • Using sub filters/queries in Google App Engine

    - by fredrik
    Hi, I'm trying to figure out how to sub-query a query that uses a filter. From what I've figured out so far, .filter() changes the original query, which means a second .filter() would also have to match the first filter. I would like to make something like this: modules = data.Modules.all().filter('page = ', page.key()) modules.filter('name = ', 'Test') modules.filter('name = ', 'Test2') I can't get the "Test2" filter to work. The only solution I have at the moment is to make all-new queries: data.Modules.all().filter('page = ', page.key()).filter('name = ', "Test").get() data.Modules.all().filter('page = ', page.key()).filter('name = ', "Test2").get() Or write the same as GQL. But to me that seems like a stupid way to go. I've looked at using ancestors, but I don't quite understand them and honestly don't know if that's the way to go. Any ideas? ..fredrik
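
    One thing worth noting about the old google.appengine.ext.db API: successive equality filters are combined with AND, so name = 'Test' AND name = 'Test2' can never both match. Below is a hedged sketch of the 'IN' operator, which App Engine expands into one sub-query per value and merges for you; data.Modules and page are taken from the question's own code and assumed to be in scope.

        # Sketch only (Python 2, as on App Engine's old runtime).
        modules = (data.Modules.all()
                   .filter('page =', page.key())
                   .filter('name IN', ['Test', 'Test2']))   # matches either name

        for module in modules.fetch(100):
            print module.name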

    Read the article

  • VSTO Outlook project

    - by Chris
    I currently have an Outlook 2007 VSTO plug-in which needs to write certain values into the registry. I am programmatically downloading and installing a new stationery into Outlook by saving an .htm file into the user's App Data folder and then updating the HKEY_CURRENT_USER\Software\Microsoft\Office\12.0\Common\MailSettings\NewTheme registry key, which sets which stationery is currently in use. So far everything is fine for 2007, but I have checked a PC that is running Outlook 2010, and this registry key is in a different spot. Instead of 12.0 as the version it is 14.0, which makes sense. Is there any way I can determine what version of Outlook the plug-in is installed in, so that I can write the key based on the correct version in the correct location?!? I haven't been able to find anything on this so far, but surely there is a way...?!? Thanks in advance. Chris

    Read the article

  • Cookies with urllib

    - by CMC
    This will probably seem like a really simple question, and I am quite confused as to why this is so difficult for me. I would like to write a function that takes three inputs: [url, data, cookies] that will use urllib (not urllib2) to get the contents of the requested url. I figured it'd be simple, so I wrote the following: def fetch(url, data = None, cookies = None): if isinstance(data, dict): data = urllib.urlencode(data) if isinstance(cookies, dict): # TODO: find a better way to do this cookies = "; ".join([str(key) + "=" + str(cookies[key]) for key in cookies]) opener = urllib.FancyURLopener() opener.addheader("Cookie", cookies) obj = opener.open(url, data) result = obj.read() obj.close() return result This doesn't work, as far as I can tell (can anyone confirm that?) and I'm stumped.
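
    The question's function is close; a slightly tightened sketch is below (Python 2 urllib, as in the question). The main adjustments are only adding the Cookie header when a dict is actually supplied (the original adds a Cookie header even when cookies is None) and closing the response even if read() raises.

        import urllib

        def fetch(url, data=None, cookies=None):
            if isinstance(data, dict):
                data = urllib.urlencode(data)
            opener = urllib.FancyURLopener()
            if isinstance(cookies, dict):
                # A Cookie header is just "name=value" pairs joined by "; ".
                header = "; ".join("%s=%s" % (k, v) for k, v in cookies.items())
                opener.addheader("Cookie", header)
            obj = opener.open(url, data)   # POST when data is given, GET otherwise
            try:
                return obj.read()
            finally:
                obj.close()

        # Example (hypothetical URL and cookie):
        # html = fetch("http://example.com/", cookies={"PHPSESSID": "abc123"})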

    Read the article
