Search Results

Search found 3407 results on 137 pages for 'happy'.

Page 103 of 137

  • In a digital photo, how can I detect if a mountain is obscured by clouds?

    - by Gavin Brock
    The problem: I have a collection of digital photos of a mountain in Japan. However, the mountain is often obscured by clouds or fog. What techniques can I use to detect whether the mountain is visible in an image? I am currently using Perl with the Imager module, but am open to alternatives. All the images are taken from the exact same position - these are some samples.

    My naïve solution: I started by taking several horizontal pixel samples of the mountain cone and comparing the brightness values to other samples from the sky. This worked well for differentiating good image 1 from bad image 2. However, in the autumn it snowed and the mountain became brighter than the sky, as in image 3, and my simple brightness test started to fail. Image 4 is an example of an edge case: I would classify it as a good image, since some of the mountain is clearly visible.

    UPDATE 1: Thank you for the suggestions - I am happy you all vastly over-estimated my competence. Based on the answers, I have started trying the ImageMagick edge-detect transform, which gives me a much simpler image to analyze:

        convert sample.jpg -edge 1 edge.jpg

    I assume I should use some kind of masking to get rid of the trees and most of the clouds. Once I have the masked image, what is the best way to compare its similarity to a 'good' image? I guess the "compare" command is suited for this job? How do I get a numeric 'similarity' value from it?
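
    A minimal Python sketch of that last step, assuming ImageMagick's compare command is installed; the file names, and the choice of the RMSE metric, are illustrative rather than taken from the post:

        import subprocess

        def edge_similarity(candidate, reference):
            """Compare two edge images with ImageMagick; lower RMSE means more alike."""
            result = subprocess.run(
                ["compare", "-metric", "RMSE", candidate, reference, "null:"],
                capture_output=True, text=True,
            )
            # compare prints the metric to stderr, e.g. "1234.5 (0.0188)"; the
            # parenthesised value is normalised to 0..1.  A non-zero exit status
            # only means the images differ, so it is deliberately not checked.
            return float(result.stderr.split("(")[1].rstrip(") \n"))

    Thresholding that value against a few known-good and known-bad samples would give a rough visible/obscured classifier to start from.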

  • Differences between iPhone/iPod Simulator and Devices

    - by Allisone
    Hi, since I started iPhone/iPod development I have come across some differences between how the Simulator and real devices behave. Maybe I will come across other differences I will have to figure out as well, maybe other people haven't hit these problems here (yet) and can profit from the knowledge, and maybe you know some problems/differences that you would have been happy to know about before you spent several hours or days figuring out what the heck is going on. So here is what I came across.

    The Simulator is not case sensitive, devices are case sensitive. This means a default.png or icon.png will work in the Simulator but not on a device, where they must be named Default.png and Icon.png (if it's still not working, read this answer).

    The Simulator has different codecs for playing audio and video. If you use e.g. MPMoviePlayerController you might be able to play a certain video in the Simulator while on the device it won't work (use the Handbrake "iPhone & iPod Touch" preset to create videos playable on both the Simulator and devices). If you play audio with AudioServicesPlaySystemSound(&soundID) you might hear the sound in the Simulator but not on a device (use Audacity to open your sound file, export it as WAV and run

        afconvert -f caff -d LEI16@44100 -c 1 audacity.wav output.caf

    in Terminal). Also, there is the flickering-on-second-run problem, which can be resolved with playerViewCtrl.initialPlaybackTime = -1.0; either at the end of playback or before each start.

    The Simulator is mostly much faster, because it doesn't simulate the hardware but uses the Mac's resources. Therefore e.g. sio2 apps (the OpenGL/OpenAL etc. framework) run much better in the Simulator - in general, anything that uses more resources will run visibly better in the Simulator than on a device.

    I hope we can add some more to this.

  • Ideal Multi-Developer LAMP Stack?

    - by devians
    I would like to build an 'ideal' LAMP development stack:

    - Dual server (virtualised, ESX): Apache/PHP on one, databases (MySQL, PgSQL, etc.) on the other.
    - User (developer) manageable mini environments, or instances.
    - Each developer instance shares the top-level config (available modules, default config etc.).
    - A developer should have control over their Apache and PHP version for each project.
    - A developer might be able to change minor settings, e.g. magic quotes on for legacy code.
    - Each project would determine its database provider in its code.

    The idea is that it is one administrable server that I can control, providing globally configured things like APC, Memcached, XDebug etc. Then, by moving into subsets for each project, I can allow my users to quickly control their environments for various projects. Essentially I'm proposing the typical setup of a developer running their own stack on their own machine, but centralised. In this way I'd hope to avoid problems like cross-OS code problems, database inconsistencies, slightly different installs producing bugs, etc.

    I'm happy to manage this with custom builds from source, but if at all possible it would be great to have a large portion of it managed with some sort of package management. We typically use CentOS, so yum? Has anyone ever built anything like this before? Is there something turnkey that is similar to what I have described? Are there any useful guides I should be reading in order to build something like this?

  • How can I coordinate code review tool and RCS (specifically git)

    - by Chris Nelson
    We're committed to git for code management. We're trying to find a tool that will help us systematize code reviews. We're considering Gerrit and Code Collaborator but would welcome other suggestions.

    We're having a problem answering the question, "How do we know every commit was reviewed?" (Or "Which commits have yet to be reviewed?") One answer would be to submit every commit or every push for review and track incomplete reviews in the review tool. I'm not entirely happy with relying on another tool - especially if it's not open source - to tell us this.

    What seems to be a better answer is to rely on sign-offs in git (e.g., "Signed-off-by: Chris Nelson") and use a hook in the review tool to sign off commits on behalf of the reviewer. An advantage of this is that if we use some other review mechanism for some commits, we have just one place to look for results. One problem with this is that we can't require review before push, because the review tool is unlikely to have access to the developer's private repository clone to add the sign-off.

    Any ideas on integrating code review with code management to achieve ease of use and high visibility of unreviewed changes?
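
    As an illustration of the "which commits are still unreviewed" query, here is a small Python sketch that walks a revision range and reports commits whose messages lack a Signed-off-by trailer. The default revision range is an assumption; the sign-off convention is the one described above:

        import subprocess

        def unsigned_commits(rev_range="origin/master..HEAD"):
            """Return the SHAs in rev_range that have no Signed-off-by trailer."""
            shas = subprocess.run(
                ["git", "rev-list", rev_range],
                capture_output=True, text=True, check=True,
            ).stdout.split()
            missing = []
            for sha in shas:
                message = subprocess.run(
                    ["git", "log", "-1", "--format=%B", sha],
                    capture_output=True, text=True, check=True,
                ).stdout
                if "Signed-off-by:" not in message:
                    missing.append(sha)
            return missing

    Run from a cron job or a CI step, something like this gives visibility of unreviewed changes without depending on the review tool's own database.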

  • Normalizing (webdav) unicode paths

    - by Evert
    Hi guys, I'm working on a WebDAV implementation for PHP. In order to make it easier for Windows and other operating systems to work together, I need to jump through some character-encoding hoops. Windows uses ISO-8859-1 in its HTTP requests, while most other clients encode anything beyond ASCII as UTF-8.

    My first approach was to ignore this altogether, but I quickly ran into issues when returning URLs. I then figured it's probably best to normalize all URLs. Using ü as an example:

    - OS X sends this over the wire as u%CC%88 (the letter u followed by codepoint U+0308).
    - Windows sends this as %FC (latin1).
    - But doing a utf8_encode on %FC, I get %C3%BC (this is codepoint U+00FC).

    Should I treat %C3%BC and u%CC%88 as the same thing? If so, how? Not touching it seems to work OK for Windows. It somehow understands that it's a unicode character, but updating the same file throws an error (for no particular reason). I'd be happy to provide more information.
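
    For what it's worth, the two byte sequences are the NFD and NFC forms of the same character. A short Python sketch (Python rather than the post's PHP, so purely illustrative - PHP's intl Normalizer plays the same role) showing that normalization makes the three spellings compare equal:

        import unicodedata
        from urllib.parse import unquote

        decomposed = unquote("u%CC%88")                  # 'u' + U+0308, as sent by OS X (NFD)
        latin1     = unquote("%FC", encoding="latin-1")  # U+00FC, as sent by Windows
        utf8       = unquote("%C3%BC")                   # U+00FC encoded as UTF-8

        # Normalizing everything to NFC (or NFD) before storing or comparing
        # makes all three spellings of the same character identical.
        assert unicodedata.normalize("NFC", decomposed) == latin1 == utf8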

  • Trace PRISM / CAL events (best practice?)

    - by Christian
    Ok, this question is for people with either a deep knowledge of PRISM or some magic skills I just lack (yet). The background is simple: Prism allows the declaration of events to which the user can subscribe or publish. In code this looks like this:

        _eventAggregator.GetEvent<LayoutChangedEvent>().Subscribe(UpdateUi, true);
        _eventAggregator.GetEvent<LayoutChangedEvent>().Publish("Some argument");

    Now this is nice, especially because these events are strongly typed, and the declaration is a piece of cake:

        public class LayoutChangedEvent : CompositePresentationEvent<string> { }

    But now comes the hard part: I want to trace events in some way. I had the idea of subscribing using a lambda expression that calls a simple log message. It worked perfectly in WPF, but in Silverlight there is some method access error (it took me some time to figure out the reason). If you want to see for yourself, try this in Silverlight:

        eA.GetEvent<VideoStartedEvent>().Subscribe(obj => TraceEvent(obj, "vSe", log));

    If this were possible, I would be happy, because I could easily trace all events using a single line to subscribe. But it does not work... The alternative approach is writing a different function for each event, and assigning this function to the events. Why different functions? Well, I need to know WHICH event was published. If I use the same function for two different events, I only get the payload as an argument; I have no way to figure out which event caused the tracing message. I tried:

    - using reflection to get the causing event (not working)
    - using a constructor in the event to enable each event to trace itself (not allowed)

    Any other ideas? Chris

    PS: Writing this text took me most likely longer than writing 20 functions for my 20 events, but I refuse to give up :-) I just had the idea to use PostSharp; that would most likely work (although I am not sure, perhaps I'd end up having only information about the base class). Tricky and so unimportant topic...

  • How to handle missing files on MVC

    - by kaivalya
    What is your preferred way to handle hits to files that do not exist in your MVC app? I have a couple of web apps running with MVC and they are constantly getting hits for files, folders etc. that do not exist in the app structure. The apps are throwing the exception:

        The controller for path could not be found or it does not implement IController

    I am trying to find out the best way to handle this. I have 3 global routes in my global.asax file (see below) and at this point I am happy with that simple definition. I know that if I added route definitions for all controllers, I could then add a definition to ignore the rest and handle these hits, but if it is possible to solve this problem without that, I do not want to add route definitions for each controller, which I believe will flood the route definitions and also add a layer of maintenance which I don't like.

        //Aggregates 2nd level
        routes.MapRoute(
            "AggregateLevel2",
            "{controller}/{action}/{id}/{childid}/{childidlevel2}",
            new { controller = "Home", action = "Index", id = "", childid = "", childidlevel2 = "" }
        );
        //Aggregates 1st level
        routes.MapRoute(
            "AggregateLevel1",
            "{controller}/{action}/{id}/{childid}",
            new { controller = "Home", action = "Index", id = "", childid = "" }
        );
        routes.MapRoute(
            "Default",
            "{controller}/{action}/{id}",
            new { controller = "Home", action = "Index", id = "" }
        );

  • Efficient way to handle multiple HTMLPurifier configs.

    - by anomareh
    I'm using HTMLPurifier in a current project and I'm not sure about the most efficient way to handle multiple configs. For the most part the only major thing changing is the allowed tags.

    Currently I have a private method, in each class using HTMLPurifier, that gets called when a config is needed and creates one from the default. I'm not really happy with this: if 2 classes use the same config, that's duplicated code, and what if a class needs 2 configs? It just feels messy. The one upside to it is that it's lazy, in the sense that it's not creating anything until it's needed. The various classes could be used and never have to purify anything, so I don't want to be creating a bunch of config objects that might not even be used.

    I just recently found out that you can create a new instance of HTMLPurifier and directly set config options like so:

        $purifier = new HTMLPurifier();
        $purifier->config->set('HTML.Allowed', '');

    I'm not sure if that's bad form or not, and if it isn't, I'm not really sure of a good way to put it to use. My latest idea was to create a config-handler class that would just return an HTMLPurifier config for use in subsequent purify calls. At some point it could probably be expanded to allow the setting of configs, but to start with I figured they'd just be hardcoded, with a method parameter run through a switch to grab the requested config. Perhaps I could just pass the stored purifier instance as an argument and have the method directly set the config on it, as shown above? This seems to me the best of the few ways I've thought of, though I'm not sure if such a task warrants creating a class to handle it, and if so, whether I'm handling it in the best way. I'm not too familiar with the inner workings of HTMLPurifier, so I'm not sure if there are better mechanisms in place for handling multiple configs.

    Thanks for any insight anyone can offer.

  • Determining failing sectors on portable flash memory

    - by Faxwell Mingleton
    I'm trying to write a program that will detect signs of failure for portable flash memory devices (thumb drives, etc). I have seen tools in the past that are able to detect failing sectors and other kinds of trouble on conventional mechanical hard drives, but I fear that flash memory does not have the same kind of predictable low-level access to the hardware, due to the internal workings of the storage. Things like wear-leveling and other block-remapping techniques (to skip over 'dead' sectors?) lead me to believe that determining if a flash drive is failing will be difficult at best, if not impossible (short of having constant read failures and device unmounts).

    Flash drives at their end-of-life should be easy to detect (constant CRC discrepancies during reads and all-out failure). But what about drives that might be failing early? Are there any tell-tale signs, like slower throughput speeds, that might indicate a flash drive is going to fail much sooner than normal?

    Along the lines of detecting potentially bad blocks, I had considered attempting random reads/writes to a file close to or exactly the size of the entire volume, but even then is it possible that the drive might report sizes under its maximum capacity to account for 'dead' blocks?

    In short, is there any way to circumvent or at least detect (algorithmically or otherwise) the use of block-remapping or other life-extension techniques for flash memory?

    Let me end this question by expressing my uncertainty as to whether or not this belongs on serverfault.com. This is definitely a hardware-related question, but I also desire a software solution - preferably one that I can program myself. If this question is misplaced, I will be happy to migrate it to serverfault - but I do need a programming solution. Please let me know if you need clarification :) Thanks!
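
    A minimal Python sketch of the random read/write check described above, assuming a scratch file spanning most of the volume that may be overwritten; note that the OS page cache and the drive's own cache can mask failures, so treat it as a rough first pass rather than a verdict on the hardware:

        import hashlib
        import os
        import random

        def spot_check(path, size, block_size=4096, samples=256):
            """Write random blocks at random offsets of an existing scratch file,
            read them back, and return the offsets whose contents don't match."""
            bad_offsets = []
            with open(path, "r+b", buffering=0) as f:
                for _ in range(samples):
                    offset = random.randrange(0, size - block_size)
                    block = os.urandom(block_size)
                    f.seek(offset)
                    f.write(block)
                    os.fsync(f.fileno())   # push the write toward the device
                    f.seek(offset)
                    if hashlib.sha1(f.read(block_size)).digest() != hashlib.sha1(block).digest():
                        bad_offsets.append(offset)
            return bad_offsets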

  • In Ruby, how to implement global behaviour?

    - by Gordon McAllister
    Hi all, I want to implement the concept of a Workspace. This is a global concept - all other code will interact with one instance of this Workspace. The Workspace will be responsible for maintaining the current system state (i.e. interacting with the system model, persisting the system state etc).

    So what's the best design strategy for my Workspace, bearing in mind this will have to be testable (using RSpec for now, but happy to look at alternatives)? Having read through some open source projects out there, I've seen 3 strategies, none of which I can identify as "the best practice". They are:

    1. Include the Singleton class. But how testable is this? Will the global state of Workspace change between tests?
    2. Implement all behaviour as class methods. Again, how do you test this?
    3. Implement all behaviour as module methods. Not sure about this one at all!

    Which is best? Or is there another way? Thanks, Gordon

  • Using jQuery and Prototype in the same page

    - by Don
    Hi, several of my pages use both jQuery and Prototype. Since I upgraded to version 1.3 of jQuery this appears to be causing problems, because both libraries define a function named '$'. jQuery provides a function noConflict() which relinquishes control of $ to other libraries that may be using it. So it seems like I need to go through all my pages that look like this:

        <head>
          <script type="text/javascript" src="/obp/js/prototype.js"></script>
          <script type="text/javascript" src="/obp/js/jquery.js"></script>
        </head>

    and change them to look like this:

        <head>
          <script type="text/javascript" src="/obp/js/prototype.js"></script>
          <script type="text/javascript" src="/obp/js/jquery.js"></script>
          <script type="text/javascript">
            jQuery.noConflict();
            var $j = jQuery;
          </script>
        </head>

    I should then be able to use '$' for Prototype and '$j' (or 'jQuery') for jQuery. I'm not entirely happy about duplicating these 2 lines of code in every relevant page, and expect that at some point somebody is likely to forget to add them to a new page. I'd prefer to be able to do the following:

    1. Create a file jquery-noconflict.js which "includes" jquery.js and the 2 lines of code shown above.
    2. Import jquery-noconflict.js (instead of jquery.js) in all my JSP/HTML pages.

    However, I'm not sure if it's possible to include one JS file in another in the manner I've described? Of course an alternate solution is simply to add the 2 lines of code above to jquery.js directly, but if I do that I'll need to remember to do it every time I upgrade jQuery. Thanks in advance, Don

  • How do I grant a site's applet an AllPermission privilege?

    - by nahsra
    I'd like to specify certain applets to run with java.security.AllPermission on my computer (for debugging and security testing). However, I don't want to enable all applets that I run to have this permission. So, editing my user Java policy file (which I have ensured is the correct policy file through testing), I try to put this value:

        grant codeBase "http://host_where_applet_lives/-" {
          permission java.security.AllPermission;
        };

    This value fails when the applet tries to do something powerful (create a new Thread, in my case). However, when I put the following value:

        grant {
          permission java.security.AllPermission;
        };

    the applet is able to perform the powerful operation. The only difference is the lack of a codeBase attribute.

    An answer to a similar question asked here [1] seemed to suggest (but never show or prove) that AccessController.doPrivileged() calls may be required. To me, this sounds wrong, as I don't need that call when I grant the permissions to all applets (the second example I showed). Even if this is a solution, littering the applets I run with AccessController.doPrivileged() calls is not easy or necessarily possible. To top it off, my tests show that this just doesn't work anyway. But I'm happy to hear more ideas around it.

    [1] http://stackoverflow.com/questions/1751412/cant-get-allpermission-configured-for-intranet-applet-can-anyone-help

  • Windows with Plesk Panel installs ActiveState Python 2.5.0 - any thoughts?

    - by B Mahoney
    I expect to run Pylons on Windows Server 2003 and IIS 6 on a Virtual Private Server (VPS). Most work with the VPS is done through the Plesk 8.6 panel. The Plesk panel has a lot of maintenance advantages for us. However, this Plesk configuration installs ActiveState Python 2.5.0. The Parallels Plesk documents for 8.6 and version 9 insist that only this configuration should be installed. I'm not eager to settle for the baseline 2.5.0, but I don't see any safe upgrade path.

    How has ActiveState Python 2.5.0 been for other users? Can you replace the Parallels\Plesk\Additional\Python with another distribution? I don't want to break Plesk, please.

    Previously, I followed these instructions: Serving a Pylons app with IIS - Pylons Cookbook. Using the default web site IP address, I had Python 2.6.3 install the ISAPI-WSGI dll in IIS, so that I successfully ran Pylons in a virtualenv through IIS using the IP address, no domain name. I would be so happy if I could run this successful configuration for my domains while I must use Plesk. Tips and solutions appreciated.

  • Error 1053: the service did not respond to the start or control request in a timely fashion

    - by deejjaayy
    I know this is very much a "how long is a piece of string" type of question; however, I have recently inherited a couple of applications that run as Windows services, and I am having problems providing a GUI (accessible from a context menu in the system tray) with both of them. Before you ask, the reason we need a GUI for a Windows service is to be able to re-configure the behaviour of the service(s) without resorting to stopping/re-starting.

    My code works fine in debug mode: I get the context menu come up, and everything behaves correctly etc. When I install the service via "installutil" using a named account (i.e., not the Local System account), the service runs fine but doesn't display the icon in the system tray (I know this is normal behaviour because I don't have the "interact with desktop" option). Here is the problem though - when I choose the "Local System account" option and check the "interact with desktop" option, the service takes AGES to start up for no obvious reason, and I just keep getting "Could not start the ... service on Local Computer. Error 1053: the service did not respond to the start or control request in a timely fashion". Incidentally, I increased the Windows service timeout from the default 30 seconds to 2 minutes via a registry hack (see http://support.microsoft.com/kb/824344, search for TimeoutPeriod in section 3), but the service start-up still times out.

    My first question is: why might the "Local System account" login take SOOOOO MUCH LONGER than when the service logs in with the non-Local-System account, causing the service start-up to time out? What could the difference be between these two to cause such different behaviour at start-up?

    Secondly - taking a step back, all I'm trying to achieve is simply a Windows service that provides a GUI for configuration. I'd be quite happy to run using the non-Local-System account (with named user/pwd) if I could get the service to interact with the desktop (that is, have a context menu available from the system tray). Is this possible, and if so, how?

    Any pointers to the above questions would be very much appreciated! Thanks in advance for your help.

  • Headless, scriptable Firefox/WebKit on Linux?

    - by Parand
    I'm looking to automate some web interactions, namely periodic downloads of files from a secure website. This basically involves entering my username/password and navigating to the appropriate URL. I tried simple scripting in Python, followed by more sophisticated scripting, only to discover this particular website is using some obnoxious JavaScript- and Flash-based mechanism for login, rendering my methods useless. I then tried HtmlUnit, but that doesn't seem to want to work either. I suspect the use of Flash is the issue. I don't really want to think about it any more, so I'm leaning towards scripting an actual browser to log in and grab the file I need. Requirements are:

    - Run on a Linux server (i.e. no X running). If I really need to have X I can make that happen, but I won't be happy.
    - Be reliable. I want to start this thing and never think about it again.
    - Be scriptable. Nothing too sophisticated, but I should be able to tell the browser the various steps to take and pages to visit.

    Are there any good toolkits for a headless, X-less scriptable browser? Have you tried something like this, and if so do you have any words of wisdom?
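
    For a sense of what "scripting an actual browser" can look like, here is a hedged Python sketch using Selenium's Firefox driver in headless mode. Headless Firefox and this Selenium API postdate the original question, and the URL and form-field names are placeholders, not details from the site described:

        from selenium import webdriver

        options = webdriver.FirefoxOptions()
        options.add_argument("-headless")        # no X display required

        driver = webdriver.Firefox(options=options)
        try:
            driver.get("https://example.com/login")                     # placeholder URL
            driver.find_element("name", "username").send_keys("me")     # placeholder field names
            driver.find_element("name", "password").send_keys("secret")
            driver.find_element("css selector", "form").submit()
            driver.get("https://example.com/reports/latest.csv")        # the file to fetch
        finally:
            driver.quit()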

  • Disassembling with Python - no easy solution?

    - by Abc4599
    Hi, I'm trying to create a Python script that will disassemble a binary (a Windows exe, to be precise) and analyze its code. I need the ability to take a certain buffer and extract some sort of struct containing information about the instructions in it.

    I've worked with libdisasm in C before, and I found its interface quite intuitive and comfortable. The problem is, its Python interface is available only through SWIG, and I can't get it to compile properly under Windows.

    On the availability front, diStorm provides a nice out-of-the-box interface, but it provides only the mnemonic of each instruction, and not a binary struct with enumerations defining instruction type and whatnot. This is quite uncomfortable for my purpose, and will require a lot of time spent wrapping the interface to make it fit my needs.

    I've also looked at BeaEngine, which does in fact provide the output I need - a struct with binary info concerning each instruction - but its interface is really odd and counter-intuitive, and it crashes pretty much instantly when provided with wrong arguments. The ctypes sort of ultimate-death-to-your-Python crashes.

    So, I'd be happy to hear about other solutions, which are a little less time consuming than messing around with djgcc or mingw to make SWIGed libdisasm, or writing an OOP wrapper for diStorm. If anyone has some guidance as to how to compile SWIGed libdisasm, or better yet, a compiled binary (pyd or dll+py), I'd love to hear/have it. :) Thanks ahead.
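
    For reference, the distorm3 package that grew out of diStorm is available on PyPI and needs no SWIG; a small sketch of its Decode interface is below (the byte string is a made-up x86 fragment). Newer builds also expose a Decompose call that reportedly returns structured instruction objects, which may be closer to what is wanted here:

        from distorm3 import Decode, Decode32Bits

        # A tiny x86 prologue/epilogue: push ebp; mov ebp, esp; pop ebp; ret
        code = b"\x55\x89\xe5\x5d\xc3"

        for offset, size, instruction, hexdump in Decode(0x1000, code, Decode32Bits):
            print("%08x %-12s %s" % (offset, hexdump, instruction))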

  • CSS3 Gradients and border-radius leading to extraneous background in webkit

    - by iamfriendly
    Hello all, after my 1st question with relation to CSS3 gradients, in which I was recreating an 'inner glow', I've now got to the point where I'm not so happy with the way in which WebKit renders the effect. Basically, if you give an element a background colour and apply a border-radius to it, WebKit lets the background colour "bleed" out to fill the surrounding box (making it look a bit awful). To reproduce the undesirable effect, try something like the following:

        section#featured footer p a {
          color: rgb(255,255,255);
          text-shadow: 1px 1px 1px rgba(0,0,0,0.6);
          text-decoration: none;
          padding: 5px 10px;
          border-radius: 15px;
          -moz-border-radius: 15px;
          -webkit-border-radius: 15px;
          background: rgb(98,99,100);
          -moz-box-shadow: inset 0 0 8px rgba(0,0,0, 0.25);
          -webkit-box-shadow: inset 0 0 8px rgba(0,0,0, 0.25);
        }

    You can see an example of this here: http://iamfriendly.clients.friendlygp.com/ (check the 'carry on reading' button). Apparently this is a Windows-only problem, so for those on a Mac, here's a screenshot. You'll notice that in Safari/Chrome (the latest available public downloads as well as the latest nightlies, as far as I can tell) you get a rather ugly background colour bleed. However, in Firefox you should be able to see what I'm after. If you're in Internet Explorer, woe betide you.

    Does anyone know of a technique which will allow me to produce the 'correct' effect? Is there a CSS property which I've missed that tells WebKit to only draw the background within the border-radius'd part of the containing box? I could potentially use an image, but I'm really trying to avoid it. Naturally, as we're dealing with CSS3 and the landscape is continually changing, I might just have to lump it and revert to an image. However, if anyone can suggest an alternative, I would be very much appreciative!

  • ASP.NET Membership: Extending Role membership?

    - by mark smith
    Hi there, I have been taking a look at ASP.NET membership and it seems to provide everything I need, but I need some kind of custom role functionality. Currently I can add a user to a role, great. But I also need to be able to add permissions to roles, i.e.:

        Role: Editor
        Permissions: Can View Editor Menu, Can Write to Editors Table, Can Delete Entries in Editors Table

    Currently it doesn't support this. The idea is to create an admin option in my program to create a role and then assign permissions to that role, to say "allow the user to view a certain part of the application", "allow the user to open a menu item".

    Any ideas how I would implement something like this? I presume a custom role provider, but I was wondering if some kind of framework extension existed already, without rolling my own? Or does anybody know a good tutorial on how to tackle this issue? I am quite happy with what the ASP.NET SQL provider has created in terms of tables etc., but I think I need to extend this by adding another table called RolesPermissions and then, I presume :-), adding some kind of enumeration into the table for each valid permission?

    Thanks in advance

  • What computer science topic am I trying to describe?

    - by ItzWarty
    I've been programming for around... 6-8 years, and I've begun to realize that I don't really know what really happens at the low-ish level when I do something like

        int i = j % 348;

    The thing is, I know what j % 348 does: it divides j by 348 and finds the remainder. What I don't know is HOW the computer does this. Similarly, I know that

        try {
          blah();
        } catch (Exception e) {
          blah2();
        }

    will invoke blah and, if blah throws, it will invoke blah2... however, I have no idea how the computer does this instead of err... crashing or ending execution. And I figure that in order for me to get "better" at programming, I should probably know what my code is really doing. [This would probably also help me optimize and... err... not do stupid things]

    I figure that what I'm asking for is probably something huge taught in universities or something, but to be honest, if I could learn a little, I would be happy. The point of the question is: what topic/computer-science-course am I asking about? Because in all honesty, I don't know. Since I don't know what the topic is called, I'm unable to actually find a book or online resource to learn about the topic, so I'm sort of stuck. I'd be eternally thankful if someone helped me =/
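
    As a tiny illustration of the "what does my line of code actually become" question, here is a Python sketch (Python only because its dis module makes the lower-level form easy to inspect; the original examples are C/Java-like) that disassembles the modulo expression into the interpreter's own instructions:

        import dis

        def remainder(j):
            return j % 348

        # Prints the bytecode the interpreter actually executes for "j % 348",
        # e.g. LOAD_FAST, LOAD_CONST and a binary-modulo instruction.
        dis.dis(remainder)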

  • How to use EffectUpdate?

    - by coma
    So, this is my sample:

        <?xml version="1.0" encoding="utf-8"?>
        <s:Application xmlns:fx="http://ns.adobe.com/mxml/2009"
                       xmlns:mx="library://ns.adobe.com/flex/mx"
                       xmlns:s="library://ns.adobe.com/flex/spark">
            <fx:Style>
                @namespace s "library://ns.adobe.com/flex/spark";
                @namespace mx "library://ns.adobe.com/flex/mx";
                s|Application { background-color: #333333; }
                #info { padding-top: 5; padding-right: 5; padding-bottom: 5; padding-left: 5;
                        font-size: 22; background-color: #ffffff; }
                #plane { corner-radius: 8; background-color: #1c1c1c; }
            </fx:Style>
            <fx:Script>
                import mx.events.*;
                private var steps:uint = 0;
                private function effectUpdateHandler(event:EffectEvent):void {
                    info.text = "rotationY: " + plane.rotationY + " ; steps: " + steps;
                    steps++;
                }
            </fx:Script>
            <fx:Declarations>
                <s:Rotate3D id="spin" target="{plane}" autoCenterTransform="true"
                            angleYFrom="0" angleYTo="360" repeatCount="10"
                            effectUpdate="effectUpdateHandler(event)" />
            </fx:Declarations>
            <s:VGroup horizontalAlign="center" gap="50" width="100%">
                <s:Label id="info" width="100%"/>
                <s:BorderContainer id="plane" width="200" height="200" click="spin.play()"/>
            </s:VGroup>
        </s:Application>

    and it doesn't make me happy.

  • Implementing parts of rfc4226 (HOTP) in mysql

    - by Moose Morals
    Like the title says, I'm trying to implement the programmatic parts of RFC 4226, "HOTP: An HMAC-Based One-Time Password Algorithm", in SQL. I think I've got a version that works (in that, for a small test sample, it produces the same result as the Java version in the code), but it contains a nested pair of hex(unhex()) calls, which I feel can be done better. I am constrained by a) needing to do this algorithm, and b) needing to do it in MySQL; otherwise I'm happy to look at other ways of doing this. What I've got so far:

        -- From the inside out...
        -- Concatenate the user's secret and the number of times it has been used
        -- find the SHA1 hash of that string
        -- Turn a 40 byte hex encoding into a 20 byte binary string
        -- keep the first 4 bytes
        -- turn those back into a hex representation
        -- convert that into an integer
        -- Throw away the most-significant bit (solves signed/unsigned problems)
        -- Truncate to 6 digits
        -- store into otp
        -- from the otpsecrets table
        select (conv(hex(substr(unhex(sha1(concat(secret, uses))), 1, 4)), 16, 10)
                & 0x7fffffff) % 1000000
          into otp
          from otpsecrets;

    Is there a better (more efficient) way of doing this?
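
    To cross-check the SQL, here is a small Python sketch that mirrors the same pipeline (SHA-1 of secret||counter, first 4 bytes, mask the sign bit, modulo 10^6). Note it reproduces the query above, which hashes the plain concatenation, rather than the HMAC plus dynamic truncation specified by RFC 4226; the example secret is made up:

        import hashlib

        def sql_style_otp(secret, uses):
            """Mirror of the MySQL expression: sha1(concat(secret, uses)) -> 6 digits."""
            digest = hashlib.sha1(secret + str(uses).encode()).digest()
            value = int.from_bytes(digest[:4], "big") & 0x7FFFFFFF  # drop the sign bit
            return value % 1000000                                  # truncate to 6 digits

        print("%06d" % sql_style_otp(b"12345678901234567890", 0))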

  • Backing up my locally hosted rails apps in preparation for OS upgrade

    - by stephen murdoch
    I have some apps running on Heroku. I will be upgrading my OS in two weeks. The last time I upgraded, though (6 months ago), I ran into some problems. Here's what I did:

    1. copied all my rails apps onto DVD
    2. upgraded the OS
    3. transferred the rails apps from DVD to the new OS

    Then, after setting up new SSH keys, I tried to push to some of my Heroku apps and, whilst I can't remember the exact error message off-hand, it more or less amounted to "fatal exception: the remote end hung up". So I know that I'm doing something wrong here.

    First of all, is there any need for me to be putting my Heroku-hosted rails apps onto DVD? Would I be better just pulling all my apps from their Heroku repos once I've done the upgrade? What do others do here? The reason I stuck them on DVD is that I tend to push a specific production branch to Heroku and sometimes omit large development files from it...

    Secondly, was this problem caused by SSH keys? Should I have backed up the old keys and transferred them from my old OS to my new one too, or is Heroku perfectly happy to let you change OSes like that?

    My solution in the end was to just create new Heroku apps and reassign the custom domain names in the Heroku add-ons menu... I never actually thought of pulling from the Heroku repos, as I tend to push a specific branch to Heroku and that branch doesn't always have all the development files in it...

    I realise that the error message I mentioned doesn't particularly help anyone, but I didn't think to remember it 6 months ago. Any advice would be appreciated.

    PS - when I say upgrade, I mean a full install of the new version with a full format of the HDD.

  • Delphi - Populate an imagelist with icons at runtime 'destroys' transparency

    - by ben
    Hi again, I've spent hours on this (simple) one and can't find a solution :/ I'm using D7 and TImageList. The ImageList is assigned to a toolbar. When I populate the ImageList at design time, the icons (with partial transparency) look fine. But I need to populate it at runtime, and when I do this the icons look pretty shabby - a complete loss of the partial transparency. I just tried to load the icons from a .res file - with the same result. I've tried third-party image lists as well, without success. I have no clue what I could do :/ Thanks to all ;)

    edit: To be honest I don't know exactly what's going on. Alpha blending is the correct term... Here are 2 screenies:

    Icon added at design time:

    Icon added at runtime:

    Your comment that alpha blending is not supported just brought the solution: I've edited the image in an editor and removed the "alpha blended" pixels - and now it looks fine. But it's still strange that the icons look different when added at runtime instead of design time. If you (or somebody else ;) can explain it, I would be happy ;) Thanks for your support!

  • Q on Python serialization/deserialization

    - by neil
    What chance do I have to instantiate, keep and serialize/deserialize to/from binary data Python classes reflecting this pattern (adapted from RFC 2246 [TLS])?

        enum { apple, orange } VariantTag;

        struct {
            uint16 number;
            opaque string<0..10>;  /* variable length */
        } V1;

        struct {
            uint32 number;
            opaque string[10];     /* fixed length */
        } V2;

        struct {
            select (VariantTag) {   /* value of selector is implicit */
                case apple:  V1;    /* VariantBody, tag = apple */
                case orange: V2;    /* VariantBody, tag = orange */
            } variant_body;         /* optional label on variant */
        } VariantRecord;

    Basically I would have to define a (variant) class VariantRecord, whose shape varies depending on the value of VariantTag. That's not that difficult. The challenge is to find the most generic way to build a class which serializes/deserializes to and from a byte stream... Pickle, Google protocol buffers and marshal are all not an option. I've had some success with an explicit "def serialize" in my class, but I'm not very happy with it, because it's not generic enough. I hope I've expressed the problem clearly. My current solution, for the case VariantTag = apple, would look like this, but I don't like it too much:

        import binascii
        import struct

        class VariantRecord(object):
            def __init__(self, number, opaque):
                self.number = number
                self.opaque = opaque

            def serialize(self):
                out = struct.pack('>HB%ds' % len(self.opaque),
                                  self.number, len(self.opaque), self.opaque)
                return out

        v = VariantRecord(10, 'Hello')
        print binascii.hexlify(v.serialize())
        >> 000a0548656c6c6f

    Regards
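
    For symmetry, a hedged sketch of the matching deserialize step for the same '>HB%ds' layout; the helper below is hypothetical and not part of the original post:

        import struct

        def deserialize_v1(data):
            """Inverse of serialize() above for the tag = apple (V1) case."""
            number, length = struct.unpack('>HB', data[:3])   # uint16 + one-byte length prefix
            (opaque,) = struct.unpack('>%ds' % length, data[3:3 + length])
            return number, opaque

    Feeding it the example output (binascii.unhexlify('000a0548656c6c6f')) should give back 10 and 'Hello'.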

  • How do I copy or move an NSManagedObject from one context to another?

    - by Aeonaut
    I have what I assume is a fairly standard setup, with one scratchpad MOC which is never saved (containing a bunch of objects downloaded from the web) and another permanent MOC which persists objects. When the user selects an object from scratchMOC to add to her library, I want to either 1) remove the object from scratchMOC and insert into permanentMOC, or 2) copy the object into permanentMOC.

    The Core Data FAQ says I can copy an object like this:

        NSManagedObjectID *objectID = [managedObject objectID];
        NSManagedObject *copy = [context2 objectWithID:objectID];

    (In this case, context2 would be permanentMOC.) However, when I do this, the copied object is faulted; the data is initially unresolved. When it does get resolved, later, all of the values are nil; none of the data (attributes or relationships) from the original managedObject are actually copied or referenced. Therefore I can't see any difference between using this objectWithID: method and just inserting an entirely new object into permanentMOC using insertNewObjectForEntityForName:.

    I realize I can create a new object in permanentMOC and manually copy each key-value pair from the old object, but I'm not very happy with that solution. (I have a number of different managed objects for which I have this problem, so I don't want to have to write and update copy: methods for all of them as I continue developing.) Is there a better way?
