Search Results

Search found 2468 results on 99 pages for 'splattered bits'.


  • How to handle alpha in a manual "Overlay" blend operation?

    - by quixoto
    I'm playing with some manual (walk-the-pixels) image processing, and I'm recreating the standard "overlay" blend. I'm looking at the "Photoshop math" macros here: http://www.nathanm.com/photoshop-blending-math/ (see also here for a more readable version of Overlay). Both source images are in fairly standard RGBA (8 bits each) format, as is the destination. When both images are fully opaque (alpha is 1.0), the result is blended correctly as expected. But if my "blend" layer (the top image) has transparency in it, I'm a little flummoxed as to how to factor that alpha into the blending equation correctly. I expect it to work such that transparent pixels in the blend layer have no effect on the result, opaque pixels in the blend layer do the overlay blend as normal, and semitransparent blend layer pixels have some scaled effect on the result. Can someone explain to me the blend equations or the concept behind doing this? Bonus points if you can help me do it such that the resulting image has correctly premultiplied alpha (which only comes into play for pixels that are not opaque in both layers, I think). Thanks!

        // factor in blendLayerA, (1-blendLayerA) somehow?
        resultR = ChannelBlend_Overlay(baseLayerR, blendLayerR);
        resultG = ChannelBlend_Overlay(baseLayerG, blendLayerG);
        resultB = ChannelBlend_Overlay(baseLayerB, blendLayerB);
        resultA = 1.0; // also, what should this be??
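
    The usual approach is to compute the full overlay result first, then interpolate between the base pixel and that result by the blend layer's alpha; the composite alpha follows the standard "over" rule. Here is a minimal C sketch under those assumptions (straight, i.e. non-premultiplied, 8-bit channels; the helper names are made up):

        #include <stdint.h>

        /* Classic overlay on one 8-bit channel. */
        static uint8_t channel_overlay(uint8_t base, uint8_t blend)
        {
            return (base < 128)
                ? (uint8_t)((2 * base * blend) / 255)
                : (uint8_t)(255 - (2 * (255 - base) * (255 - blend)) / 255);
        }

        /* Alpha-aware overlay: the base value where the blend layer is
           transparent, the full overlay where it is opaque, and a linear
           mix in between. */
        static uint8_t overlay_with_alpha(uint8_t base, uint8_t blend, uint8_t blendA)
        {
            uint8_t over = channel_overlay(base, blend);
            return (uint8_t)((base * (255 - blendA) + over * blendA) / 255);
        }

        /* Composite alpha follows the normal "over" rule. */
        static uint8_t alpha_over(uint8_t baseA, uint8_t blendA)
        {
            return (uint8_t)(blendA + (baseA * (255 - blendA)) / 255);
        }

    For premultiplied output, multiply the blended R, G, and B by the composite alpha as a final step.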

    Read the article

  • django: control json serialization

    - by abolotnov
    Is there a way to control JSON serialization in Django? The simple code below will return the serialized object in JSON: co = Collection.objects.all() c = serializers.serialize('json',co) The JSON will look similar to this: [ { "pk": 1, "model": "picviewer.collection", "fields": { "urlName": "architecture", "name": "\u0413\u043e\u0440\u043e\u0434 \u0438 \u0430\u0440\u0445\u0438\u0442\u0435\u043a\u0442\u0443\u0440\u0430", "sortOrder": 0 } }, { "pk": 2, "model": "picviewer.collection", "fields": { "urlName": "nature", "name": "\u041f\u0440\u0438\u0440\u043e\u0434\u0430", "sortOrder": 1 } }, { "pk": 3, "model": "picviewer.collection", "fields": { "urlName": "objects", "name": "\u041e\u0431\u044a\u0435\u043a\u0442\u044b \u0438 \u043d\u0430\u0442\u044e\u0440\u043c\u043e\u0440\u0442", "sortOrder": 2 } } ] You can see it's serializing it in a way that lets you re-create the whole model, should you want to do this at some point - fair enough, but not very handy for simple JS ajax in my case: I want to bring the traffic to a minimum and make the whole thing a little clearer. What I did is create a view that passes the object to a .json template, and the template does something like this to generate "nicer" JSON output: [ {% if collections %} {% for c in collections %} {"id": {{c.id}},"sortOrder": {{c.sortOrder}},"name": "{{c.name}}","urlName": "{{c.urlName}}"}{% if not forloop.last %},{% endif %} {% endfor %} {% endif %} ] This does work and the output is much (?) nicer: [ { "id": 1, "sortOrder": 0, "name": "Город и архитектура", "urlName": "architecture" }, { "id": 2, "sortOrder": 1, "name": "Природа", "urlName": "nature" }, { "id": 3, "sortOrder": 2, "name": "Объекты и натюрморт", "urlName": "objects" } ] However, I'm bothered by the fact that my solution uses templates (an extra step in processing and a possible performance impact) and it will take manual work to maintain should I update the model, for example. I'm thinking JSON generation should be part of the model (correct me if I'm wrong) and done with either the native Python json module or Django's implementation, but I can't figure out how to make it strip the bits that I don't want. One more thing - even when I restrict it to a set of fields to serialize, it always keeps the id outside the fields container, presenting it as "pk" instead.
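
    A minimal sketch of the template-free route, assuming the model lives in picviewer.models: QuerySet.values() keeps only the listed fields (with id inside each element, unlike the serializer's "pk"), and json.dumps turns the result straight into the compact output above.

        import json

        from django.http import HttpResponse
        from picviewer.models import Collection  # assumed app path

        def collections_json(request):
            # values() returns plain dicts limited to the named fields,
            # so there is no "pk"/"fields" wrapper to strip afterwards.
            data = list(Collection.objects.values('id', 'sortOrder', 'name', 'urlName'))
            # ensure_ascii=False emits the Russian names as UTF-8
            # instead of \uXXXX escapes.
            return HttpResponse(json.dumps(data, ensure_ascii=False),
                                content_type='application/json; charset=utf-8')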

    Read the article

  • Are there pitfalls to using static class/event as an application message bus

    - by Doug Clutter
    I have a static generic class that helps me move events around with very little overhead: public static class MessageBus<T> where T : EventArgs { public static event EventHandler<T> MessageReceived; public static void SendMessage(object sender, T message) { if (MessageReceived != null) MessageReceived(sender, message); } } To create a system-wide message bus, I simply need to define an EventArgs class to pass around any arbitrary bits of information: class MyEventArgs : EventArgs { public string Message { get; set; } } Anywhere I'm interested in this event, I just wire up a handler: MessageBus<MyEventArgs>.MessageReceived += (s,e) => DoSomething(); Likewise, triggering the event is just as easy: MessageBus<MyEventArgs>.SendMessage(this, new MyEventArgs() {Message="hi mom"}); Using MessageBus and a custom EventArgs class lets me have an application-wide message sink for a specific type of message. This comes in handy when you have several forms that, for example, display customer information and maybe a couple of forms that update that information. None of the forms know about each other and none of them need to be wired to a static "super class". I have a couple of questions: FxCop complains about using static methods with generics, but this is exactly what I'm after here. I want there to be exactly one MessageBus for each type of message handled. Using a static with a generic saves me from writing all the code that would maintain the list of MessageBus objects. Are the listening objects being kept "alive" via the MessageReceived event? For instance, perhaps I have this code in a Form.Load event: MessageBus<CustomerChangedEventArgs>.MessageReceived += (s,e) => DoReload(); When the Form is Closed, is the Form being retained in memory because MessageReceived has a reference to its DoReload method? Should I be removing the reference when the form closes: MessageBus<CustomerChangedEventArgs>.MessageReceived -= (s,e) => DoReload();
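
    On the second question: yes, a static event holds a strong reference to every subscribed delegate, and a lambda that captures this keeps the form alive. Worse, -= with a fresh lambda removes nothing, because each lambda expression compiles to a new delegate instance. A sketch of the usual fix - keep the delegate in a field so the very same instance is unsubscribed (CustomerChangedEventArgs as in the question):

        private EventHandler<CustomerChangedEventArgs> _customerChangedHandler;

        private void Form_Load(object sender, EventArgs e)
        {
            _customerChangedHandler = (s, args) => DoReload();
            MessageBus<CustomerChangedEventArgs>.MessageReceived += _customerChangedHandler;
        }

        private void Form_FormClosed(object sender, FormClosedEventArgs e)
        {
            // Unsubscribing the *same* delegate instance actually removes it,
            // so the static event no longer pins this form in memory.
            MessageBus<CustomerChangedEventArgs>.MessageReceived -= _customerChangedHandler;
        }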

    Read the article

  • Character Set Issues when Upgrading from Symfony 2.0.* to Symfony 2.1.*?

    - by Adam Stacey
    I have recently upgraded my staging test site to the latest version of Symfony and updated all the vendors using composer as instructed in the upgrade document that comes with the download. Everything updated fine, but I have noticed now that some bits of HTML are not displaying in the Twig templates. I did a comparison with the current live site and it appears to be a character set issue. As an example, I had a drop-down list that had the following value in it: Kitchen Ducting > Ducting Kits > Ducting Kit 4” / 100mm In the updated site the drop-down list item just appeared blank. When I used Twig's raw function it then displayed the item again, but with the dreaded question mark in a black diamond: Kitchen Ducting > Ducting Kits > Ducting Kit 4? / 100mm Things that you should know that may help: The staging test site and live site are both on the same server. In my httpd.conf file I have 'AddDefaultCharset utf-8'. In my php.ini file I have 'default_charset = "utf-8"'. The HTML file served has the Content-Type meta tag 'content="text/html; charset=utf-8"' My database is InnoDB and uses 'utf8' as the default character set and 'utf8_general_ci' as the default collation. All tables in the database also use the defaults. I looked into BOM issues with UTF-8, but could not work out whether that was a problem or not.
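
    Since everything server-side already declares UTF-8, a quick way to narrow this down is to test whether the value read from the database is really valid UTF-8, or arrived as Latin-1/Windows-1252 and got mangled somewhere in the upgrade. A hedged PHP sketch (the sample value stands in for whatever the entity actually returns):

        <?php
        // Is the DB value valid UTF-8, or Latin-1/Windows-1252 in disguise?
        $value = 'Ducting Kit 4” / 100mm'; // value as read through the ORM
        var_dump(mb_check_encoding($value, 'UTF-8'));
        var_dump(mb_detect_encoding($value, ['UTF-8', 'ISO-8859-1', 'Windows-1252'], true));

        // If it detects ISO-8859-1/Windows-1252, convert once at the boundary:
        echo mb_convert_encoding($value, 'UTF-8', 'Windows-1252');

    If the value fails the check here, the "blank until raw" symptom fits: PHP's htmlspecialchars(), which Twig's autoescaping relies on, returns an empty string for byte sequences that are invalid in the declared charset, while raw passes the broken bytes through and the browser shows the replacement-character diamond.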

    Read the article

  • SQL Server getdate() to a string like "2009-12-20"

    - by Adam Kane
    In Microsoft SQL Server 2005 and .NET 2.0, I want to convert the current date to a string of this format: "YYYY-MM-DD". For example, December 20th, 2009 would become "2009-12-20". How do I do this in SQL? The context of this SQL statement is the table definition. In other words, it is the default value, so when a new record is created the current date is stored as a string in the above format. I'm trying: SELECT CONVERT(VARCHAR(10), GETDATE(), 102) AS [YYYY.MM.DD] But SQL Server keeps converting that to: ('SELECT CONVERT(VARCHAR(10), GETDATE(), 102) AS [YYYY.MM.DD]') so the result is just: 'SELECT CONVERT(VARCHAR(10), GETDATE(), 102) AS [YYYY.MM.DD]' In the Visual Studio Server Explorer (table, table definition, properties), these wrapper bits are added automatically, converting it all to a literal string: (N' ') Here's the reason I'm trying to use something other than the basic DATETIME I was using previously. This is the error I get when hooking everything to an ASP.NET GridView and trying to do an update via the grid view: Server Error in '/' Application. The version of SQL Server in use does not support datatype 'date'. Description: An unhandled exception occurred during the execution of the current web request. Please review the stack trace for more information about the error and where it originated in the code. Exception Details: System.ArgumentException: The version of SQL Server in use does not support datatype 'date'. Source Error: An unhandled exception was generated during the execution of the current web request. Information regarding the origin and location of the exception can be identified using the exception stack trace below. Stack Trace: [ArgumentException: The version of SQL Server in use does not support datatype 'date'.] Note: I've added a related question to try to get around the SQL Server in use does not support datatype 'date' error so that I can use a DATETIME as recommended.
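
    Two things are biting here: a DEFAULT must be a scalar expression, not a SELECT statement (which is why the designer quotes the whole thing into a string literal), and style 102 produces yyyy.mm.dd while style 120 produces the wanted yyyy-mm-dd. A T-SQL sketch with made-up table and column names:

        -- Style 120 (ODBC canonical), truncated to 10 chars, gives yyyy-mm-dd.
        CREATE TABLE Example
        (
            Id        INT IDENTITY(1, 1) PRIMARY KEY,
            CreatedOn VARCHAR(10) NOT NULL
                DEFAULT (CONVERT(VARCHAR(10), GETDATE(), 120))
        );

        INSERT INTO Example DEFAULT VALUES;
        SELECT CreatedOn FROM Example;  -- e.g. 2009-12-20

    In the Server Explorer table designer, that means entering (CONVERT(VARCHAR(10), GETDATE(), 120)) as the default value, without any SELECT or AS alias.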

    Read the article

  • Is it possible to "trick" PrintScreen, swap out the contents of my form with something else before capture?

    - by Lasse V. Karlsen
    I have a bit of a challenge. In an earlier version of our product, we had an error message window (last resort, unhandled exception) that showed the exception message, type, stack trace + various bits and pieces of information. This window was printscreen-friendly, in that if the user simply did a printscreen-capture, and emailed us the screenshot, we had almost everything we needed to start diagnosing the problem. However, the form was deemed too technical and "scary" for normal users, so it was toned down to a more friendly one, still showing the error message, but not the stack trace and some of the more gory details that I'd still like to get. In addition, the form gained the ability to email us a text file containing everything we had before + lots of other technical details as well, basically everything we need. However, users still use PrintScreen to capture the contents of the form and email that back to us, which means I now have a less than optimal amount of information to go on. So I was wondering. Would it be possible for me to pre-render a bitmap the same size as my form, with everything I need on it, detect that PrintScreen was hit and quickly swap out the form contents with my bitmap before capture, and then back again afterwards? And before you say "just educate the users", yes, that's not going to work. These are not our users, they're users at our customers' sites, so we really cannot tell them to wisen up all that much. Or, barring this, is there a way for me to detect PrintScreen, tell Windows to ignore it, and instead react to it, by dumping the aforementioned prerendered bitmap onto the clipboard ready for placing into an email? The code is C# 3.0 in .NET 3.5, if it matters, but pointers for something to look at/for are good enough.
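
    For the second variant, one avenue worth exploring is registering PrintScreen as a system hotkey: while the hotkey is registered, the keystroke is delivered to the form as WM_HOTKEY instead of triggering the normal screenshot, and the handler can place the pre-rendered bitmap on the clipboard itself. A C# sketch (whether this reliably suppresses the default capture can vary by Windows version, so treat it as a starting point; technicalReport is the assumed pre-rendered bitmap):

        [DllImport("user32.dll")]
        private static extern bool RegisterHotKey(IntPtr hWnd, int id, uint fsModifiers, uint vk);
        [DllImport("user32.dll")]
        private static extern bool UnregisterHotKey(IntPtr hWnd, int id);

        private const int WM_HOTKEY = 0x0312;
        private const uint VK_SNAPSHOT = 0x2C; // PrintScreen
        private const int HOTKEY_ID = 1;

        private Bitmap technicalReport; // pre-rendered diagnostics image

        protected override void OnLoad(EventArgs e)
        {
            base.OnLoad(e);
            RegisterHotKey(Handle, HOTKEY_ID, 0, VK_SNAPSHOT);
        }

        protected override void WndProc(ref Message m)
        {
            if (m.Msg == WM_HOTKEY && (int)m.WParam == HOTKEY_ID)
            {
                // PrintScreen pressed: put our own bitmap on the clipboard
                // instead of whatever is on screen.
                Clipboard.SetImage(technicalReport);
                return;
            }
            base.WndProc(ref m);
        }

        protected override void OnFormClosed(FormClosedEventArgs e)
        {
            UnregisterHotKey(Handle, HOTKEY_ID);
            base.OnFormClosed(e);
        }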

    Read the article

  • Configuring an offscreen framebuffer fails the completeness test

    - by randallmeadows
    I'm trying to create an offscreen framebuffer into which I can do some OpenGL drawing, and then pull the bits out manually. I'm following the instructions here, but in step 4, status is 0 instead of GL_FRAMEBUFFER_COMPLETE_OES. If I insert a call to glGetError() after every gl call, it returns 0 (GL_NO_ERROR) every time. But the values of variables do not change during the calls. E.g., GLuint framebuffer; glGenFramebuffersOES(1, &framebuffer); glBindFramebufferOES(GL_FRAMEBUFFER_OES, framebuffer); the value of framebuffer does not get altered at all (even when I change it to some arbitrary value and re-execute). It's almost like the gl calls are not actually being made. I'm linking against the OpenGLES framework, and get no compile, link, or run-time errors (or warnings). I'm at a loss as to what to do to fix this. I've tried continuing on with my drawing, but do not see the results I expect, but at this point I can't tell whether it's because of the above error, or the conversion to a UIImage.
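
    Silent no-ops with no GL errors are the classic symptom of having no current context on the calling thread. A sketch of the usual ES1 setup to rule that out (uses the same OES framebuffer functions as the question; width/height are placeholders):

        EAGLContext *context = [[EAGLContext alloc] initWithAPI:kEAGLRenderingAPIOpenGLES1];
        if (!context || ![EAGLContext setCurrentContext:context]) {
            NSLog(@"Could not make the GL context current");
        }

        GLsizei width = 256, height = 256;
        GLuint framebuffer = 0, renderbuffer = 0;
        glGenFramebuffersOES(1, &framebuffer);
        glBindFramebufferOES(GL_FRAMEBUFFER_OES, framebuffer);

        // A framebuffer also needs at least one attachment before it can
        // pass the completeness test.
        glGenRenderbuffersOES(1, &renderbuffer);
        glBindRenderbufferOES(GL_RENDERBUFFER_OES, renderbuffer);
        glRenderbufferStorageOES(GL_RENDERBUFFER_OES, GL_RGBA8_OES, width, height);
        glFramebufferRenderbufferOES(GL_FRAMEBUFFER_OES, GL_COLOR_ATTACHMENT0_OES,
                                     GL_RENDERBUFFER_OES, renderbuffer);

        GLenum status = glCheckFramebufferStatusOES(GL_FRAMEBUFFER_OES);
        NSLog(@"status = 0x%x (complete = 0x%x)", status, GL_FRAMEBUFFER_COMPLETE_OES);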

    Read the article

  • OS X: Terminal output of javac is garbled.

    - by Don Werve
    I've got my computer set up in Japanese (hey, it's good language practice), and everything is all fine and dandy... except javac. It displays localized error messages on the console, but they're in Shift-JIS, not UTF-8: $ javac this-file-doesnt-exist.java javac: ?t?@?C??????????????: this-file-doesnt-exist.java ?g????: javac <options> <source files> ?g?p?\??I?v?V?????~??X?g?????A-help ???g?p???? If I pipe the output through nkf -w, it's readable, but that's not really much of a solution: $ javac this-file-doesnt-exist.java 2>&1 | nkf -w javac: ????????????: this-file-doesnt-exist.java ???: javac <options> <source files> ????????????????????-help ?????? Everything else works fine (with UTF-8) from the command line; I can type filenames in Japanese, tab-completion works fine, vi can edit UTF-8 files, etc. Although java itself spits out all its messages in English (which is fine). Here are the relevant bits of my environment: LC_CTYPE=UTF-8 LANG=ja_JP.UTF-8 From what it looks like, javac isn't picking up the encoding properly, and java isn't picking up the language at all. I've tried -Dfile.encoding=utf8 as well, but that does nada, and documentation on the localization of the JVM toolchain is pretty nonexistent, at least from Google.
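
    One thing worth trying: javac does not treat a bare -D as a JVM option, so -Dfile.encoding=utf8 may never reach the runtime that actually formats those messages. The -J prefix forwards an option to the JVM behind javac, and JAVA_TOOL_OPTIONS does the same for every JDK tool. A sketch:

        # Forward the system property to the JVM that runs the compiler:
        javac -J-Dfile.encoding=UTF-8 this-file-doesnt-exist.java

        # Or set it once for all the JDK tools (picked up automatically):
        export JAVA_TOOL_OPTIONS='-Dfile.encoding=UTF-8'
        javac this-file-doesnt-exist.java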

    Read the article

  • Using Cucumber With Modular Sinatra Apps

    - by Rob Conery
    I'm building out a medium-sized application using Sinatra and all was well when I had a single app.rb file and I followed Aslak's guidance up on GitHub: http://wiki.github.com/aslakhellesoy/cucumber/sinatra As the app grew a bit larger and the app.rb file started to bulge, I refactored a lot of the bits out into "middleware" style modules using Sinatra::Base, mapping things using a rack-up file (config.ru), etc. The app works nicely - but my specs blew up as there was no more app.rb file for webrat to run against (as defined in the link above). I've tried to find examples on how to make this work - and I think I'm just not used to the internal guts of Cuke yet, as I can't find a single way to have it cover all the apps. I tried just pointing to "config.ru" instead of app.rb - but that doesn't work. What I ended up doing - which is completely hackish - is to have a separate app.rb file in my support directory, which has all the requires so I can at least test the model stuff. I can also specify routes in there - but that's not at all what I want to do. So - the question is: how can I get Cucumber to properly work with the modular app approach?
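
    A sketch of one way to do it, assuming webrat is driving Rack::Test under the hood: build the composed app from config.ru itself in features/support/env.rb, so the tests exercise exactly what the rack-up file maps (in Rack 1.x, Rack::Builder.parse_file returns an [app, options] pair, hence the .first):

        # features/support/env.rb
        require 'rack/test'

        APP_UNDER_TEST = Rack::Builder.parse_file('config.ru').first

        module AppWorld
          include Rack::Test::Methods

          # Rack::Test (and webrat's rack adapter) look for a method named `app`.
          def app
            APP_UNDER_TEST
          end
        end

        World(AppWorld)

    This keeps a single source of truth for the middleware mapping: the same config.ru runs in production and under Cucumber.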

    Read the article

  • Android AES and init vector

    - by Donald_W
    I have an issue with AES encryption and decryption: I can change my IV entirely and I'm still able to decode my data. public static final byte[] IV = { 65, 1, 2, 23, 4, 5, 6, 7, 32, 21, 10, 11, 12, 13, 84, 45 }; public static final byte[] IV2 = { 65, 1, 2, 23, 45, 54, 61, 81, 32, 21, 10, 121, 12, 13, 84, 45 }; public static final byte[] KEY = { 0, 42, 2, 54, 4, 45, 6, 7, 65, 9, 54, 11, 12, 13, 60, 15 }; public static final byte[] KEY2 = { 0, 42, 2, 54, 43, 45, 16, 17, 65, 9, 54, 11, 12, 13, 60, 15 }; //public static final int BITS = 256; public static void test() { try { // encryption Cipher c = Cipher.getInstance("AES"); SecretKeySpec keySpec = new SecretKeySpec(KEY, "AES"); c.init(Cipher.ENCRYPT_MODE, keySpec, new IvParameterSpec(IV)); String s = "Secret message"; byte[] data = s.getBytes(); byte[] encrypted = c.doFinal(data); String encryptedStr = ""; for (int i = 0; i < encrypted.length; i++) encryptedStr += (char) encrypted[i]; // decryption Cipher d_c = Cipher.getInstance("AES"); SecretKeySpec d_keySpec = new SecretKeySpec(KEY, "AES"); d_c.init(Cipher.DECRYPT_MODE, d_keySpec, new IvParameterSpec(IV2)); byte[] decrypted = d_c.doFinal(encrypted); String decryptedStr = ""; for (int i = 0; i < decrypted.length; i++) decryptedStr += (char) decrypted[i]; Log.d("", decryptedStr); } catch (Exception ex) { Log.d("", ex.getMessage()); } } Any ideas what I'm doing wrong? How can I get 256-bit AES encryption - is changing the key to a 32-byte array enough? Encryption is a new topic for me, so please keep the answers newbie-friendly.
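
    The transformation string "AES" resolves to "AES/ECB/PKCS5Padding" on most providers, and ECB has no IV, so the IvParameterSpec is effectively ignored - which is exactly why IV2 still decrypts. A sketch of the fix for the inside of the existing try block (CBC named explicitly; the 16-byte KEY above gives AES-128, a 32-byte key gives AES-256):

        // Name mode and padding explicitly; ECB would silently ignore the IV.
        Cipher enc = Cipher.getInstance("AES/CBC/PKCS5Padding");
        SecretKeySpec keySpec = new SecretKeySpec(KEY, "AES"); // 32-byte KEY => AES-256
        enc.init(Cipher.ENCRYPT_MODE, keySpec, new IvParameterSpec(IV));
        byte[] encrypted = enc.doFinal("Secret message".getBytes("UTF-8"));

        Cipher dec = Cipher.getInstance("AES/CBC/PKCS5Padding");
        dec.init(Cipher.DECRYPT_MODE, keySpec, new IvParameterSpec(IV2));
        // With a different IV this now garbles the first block / fails the
        // padding check instead of silently succeeding.
        byte[] decrypted = dec.doFinal(encrypted);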

    Read the article

  • Service Bus / Request Forwarding

    - by codputer
    I'm doing some development with a third party that issues either a GET or POST to a public URL that I specify. What I would like to do is set up a relay service on the Azure Service Bus that my dev machine can listen to. When the request comes in, I want to forward that request as if my web service were taking the request directly from the third-party service. When I'm ready, I'll deploy the application to a public service, change the URL that the third-party service is sending to, and voilà, I should be up and running. What I'm looking for looks exactly like this: Clemens the Master of Service Bus - but it's from the 2009 CTP. I'm working at it, but haven't yet got it working using all the new bits in 2012 (a.k.a. it's over my head at the moment). Somebody want to help? Clemens also helped somebody else create a reverse proxy using the Service Bus, but I can't seem to find it. Yes, I've also tweeted Clemens, but I'm sure he is a busy man! p.s. I know about Application Request Routing, but my dev machine is not on a public URL; I need to rewrite the URL after my client listener on the service bus receives the message that was relayed from the server-side endpoint.
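
    For orientation, the 2012-era shape of this is a WCF relay: host a service locally, bind it with WebHttpRelayBinding, and the Service Bus exposes the public HTTPS endpoint and tunnels requests down to the listener. A rough sketch from the Microsoft.ServiceBus assembly, written from memory - the service/contract names and credentials are placeholders, and the actual forwarding to the local dev URL still has to be written inside the service:

        var address = ServiceBusEnvironment.CreateServiceUri("https", "mynamespace", "hook");
        var host = new WebServiceHost(typeof(ForwardingService), address);

        var endpoint = host.AddServiceEndpoint(
            typeof(IForwardingService), new WebHttpRelayBinding(), address.ToString());
        endpoint.Behaviors.Add(new TransportClientEndpointBehavior
        {
            TokenProvider = TokenProvider.CreateSharedSecretTokenProvider("owner", "<issuer key>")
        });

        // The third party can now hit https://mynamespace.servicebus.windows.net/hook
        // and the request is relayed to this listener on the dev machine.
        host.Open();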

    Read the article

  • How to keep confirmation messages after POST while doing a post-submit redirect?

    - by MicE
    Hello, I'm looking for advice on how to share certain bits of data (i.e. post-submit confirmation messages) between individual requests in a web application. Let me explain: Current approach: user submits an add/edit form for a resource if there were no errors, user is shown a confirmation with links to: submit a new resource (for "add" form) view the submitted/edited resource view all resources (one step above in hierarchy) user then has to click on one of the three links to proceed (i.e. to the page "above") Programmatically, the form and its confirmation page are one set of classes. The page above that is another. They can technically share code, but at the moment they are both independent during processing of individual requests. We would like to amend the above as follows: user submits an add/edit form for a resource if there were no errors, the user is redirected to the page with all resources (one step above in hierarchy) with one or more confirmation messages displayed at the top of the page (i.e. success message, to whom the request was assigned, etc.) This will: save users one click (they have to go through a lot of these add/edit forms) the post-submit redirect will address common problems with browser refresh / back buttons What approach would you recommend for sharing the data needed for the confirmation messages between the two requests, please? I'm not sure if it helps; it's a PHP application backed by a RESTful API, but I think that this is a language-agnostic question. A few simple solutions that come to mind are to share the data via cookies or in the session, but this breaks statelessness and would pose a significant problem for users who work in several tabs (the data could clash). Passing the data as GET parameters is not suitable, as we are talking about several messages which are dynamic (e.g. changing actors, dates). Thanks, M.
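
    One pattern that keeps tabs from clashing is the classic "flash message", but keyed by a one-time token carried in the redirect URL rather than a fixed session slot: the session only stores read-once payloads addressed by token, so two tabs never fight over the same key. A PHP sketch (modern PHP; the function names are made up):

        <?php
        session_start();

        // On the POST side: stash the messages, redirect with a token.
        function flash_redirect(array $messages, string $location): void
        {
            $token = bin2hex(random_bytes(8));
            $_SESSION['flash'][$token] = $messages;
            header('Location: ' . $location . '?flash=' . $token);
            exit;
        }

        // On the GET side: read once, then forget.
        function flash_consume(): array
        {
            $token = $_GET['flash'] ?? '';
            $messages = $_SESSION['flash'][$token] ?? [];
            unset($_SESSION['flash'][$token]);
            return $messages;
        }

    A refresh of the target page re-reads an already-consumed token and simply shows no messages, which is usually the desired behaviour after POST-redirect-GET.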

    Read the article

  • How to produce 64 bit masks?

    - by egiakoum1984
    Based on the following simple program, the bitwise left shift operator seems to work only on 32 bits. Is that true?

        #include <iostream>
        #include <stdlib.h>
        using namespace std;

        int main(void)
        {
            long long currentTrafficTypeValueDec;
            int input;
            cout << "Enter input:" << endl;
            cin >> input;
            currentTrafficTypeValueDec = 1 << (input - 1);
            cout << currentTrafficTypeValueDec << endl;
            cout << (1 << (input - 1)) << endl;
            return 0;
        }

    The output of the program: Enter input: 30 536870912 536870912 Enter input: 62 536870912 536870912 How could I produce 64-bit masks?
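
    The shift happens in the type of the left operand, and the literal 1 is a 32-bit int, so 1 << 61 is undefined behaviour (x86 happens to mask the shift count to 5 bits, which is where 1 << 29 = 536870912 comes from). Making the left operand 64-bit before shifting fixes it - a sketch:

        #include <cstdint>
        #include <iostream>

        int main()
        {
            int input = 62;
            // Shift a 64-bit 1, not a 32-bit 1:
            std::uint64_t mask = std::uint64_t{1} << (input - 1);
            std::cout << mask << '\n';  // 2305843009213693952
            // Equivalently, with a literal suffix: 1ULL << (input - 1)
            return 0;
        }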

    Read the article

  • Compilation errors calling find_if using a functor

    - by Jim Wong
    We are having a bit of trouble using find_if to search a vector of pairs for an entry in which the first element of the pair matches a particular value. To make this work, we have defined a trivial functor whose operator() takes a pair as input and compares the first entry against a string. Unfortunately, when we actually add a call to find_if using an instance of our functor constructed using a temporary string value, the compiler produces a raft of error messages. Oddly (to me, anyway), if we replace the temporary with a string that we've created on the stack, things seem to work. Here's what the code (including both versions) looks like: typedef std::pair<std::string, std::string> MyPair; typedef std::vector<MyPair> MyVector; struct MyFunctor: std::unary_function <const MyPair&, bool> { explicit MyFunctor(const std::string& val) : m_val(val) {} bool operator() (const MyPair& p) { return p.first == m_val; } const std::string m_val; }; bool f(const char* s) { MyFunctor f(std::string(s)); // ERROR // std::string str(s); // MyFunctor f(str); // OK MyVector vec; MyVector::const_iterator i = std::find_if(vec.begin(), vec.end(), f); return i != vec.end(); } And here's what the most interesting error message looks like: /usr/include/c++/4.2.1/bits/stl_algo.h:260: error: conversion from ‘std::pair<std::basic_string<char>, std::basic_string<char> >’ to non-scalar type ‘std::string’ requested Because we have a workaround, we're mostly curious as to why the first form causes problems. I'm sure we're missing something, but we haven't been able to figure out what it is.
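
    This is C++'s "most vexing parse": MyFunctor f(std::string(s)); does not create an object - it declares a function f taking a std::string (named s, with redundant parentheses) and returning MyFunctor. find_if is then instantiated with a function pointer whose parameter is std::string, which is exactly where the pair-to-string conversion error comes from. A sketch of two fixes:

        // Extra parentheses force the argument to be read as an expression:
        MyFunctor f((std::string(s)));

        // Or skip the named variable and pass a temporary functor directly:
        MyVector::const_iterator i =
            std::find_if(vec.begin(), vec.end(), MyFunctor(std::string(s)));

    The stack-variable workaround works for the same reason: MyFunctor f(str); cannot be parsed as a function declaration, so it creates the object as intended.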

    Read the article

  • Remove redundant SQL code

    - by Dave Jarvis
    Code The following code calculates the slope and intercept for a linear regression against a slathering of data. It then applies the equation y = mx + b against the same result set to calculate the value of the regression line for each row. Can the two separate sub-selects be joined so that the data and its slope/intercept are calculated without executing the data gathering part of the query twice? SELECT AVG(D.AMOUNT) as AMOUNT, Y.YEAR * ymxb.SLOPE + ymxb.INTERCEPT as REGRESSION_LINE, Y.YEAR as YEAR, MAKEDATE(Y.YEAR,1) as AMOUNT_DATE FROM CITY C, STATION S, YEAR_REF Y, MONTH_REF M, DAILY D, (SELECT ((avg(t.AMOUNT * t.YEAR)) - avg(t.AMOUNT) * avg(t.YEAR)) / (stddev( t.AMOUNT ) * stddev( t.YEAR )) as CORRELATION, ((sum(t.YEAR) * sum(t.AMOUNT)) - (count(1) * sum(t.YEAR * t.AMOUNT))) / (power(sum(t.YEAR), 2) - count(1) * sum(power(t.YEAR, 2))) as SLOPE, ((sum( t.YEAR ) * sum( t.YEAR * t.AMOUNT )) - (sum( t.AMOUNT ) * sum(power(t.YEAR, 2)))) / (power(sum(t.YEAR), 2) - count(1) * sum(power(t.YEAR, 2))) as INTERCEPT FROM ( SELECT AVG(D.AMOUNT) as AMOUNT, Y.YEAR as YEAR, MAKEDATE(Y.YEAR,1) as AMOUNT_DATE FROM CITY C, STATION S, YEAR_REF Y, MONTH_REF M, DAILY D WHERE $X{ IN, C.ID, CityCode } AND SQRT( POW( C.LATITUDE - S.LATITUDE, 2 ) + POW( C.LONGITUDE - S.LONGITUDE, 2 ) ) < $P{Radius} AND S.STATION_DISTRICT_ID = Y.STATION_DISTRICT_ID AND Y.YEAR BETWEEN 1900 AND 2009 AND M.YEAR_REF_ID = Y.ID AND M.CATEGORY_ID = $P{CategoryCode} AND M.ID = D.MONTH_REF_ID AND D.DAILY_FLAG_ID <> 'M' GROUP BY Y.YEAR ) t ) ymxb WHERE $X{ IN, C.ID, CityCode } AND SQRT( POW( C.LATITUDE - S.LATITUDE, 2 ) + POW( C.LONGITUDE - S.LONGITUDE, 2 ) ) < $P{Radius} AND S.STATION_DISTRICT_ID = Y.STATION_DISTRICT_ID AND Y.YEAR BETWEEN 1900 AND 2009 AND M.YEAR_REF_ID = Y.ID AND M.CATEGORY_ID = $P{CategoryCode} AND M.ID = D.MONTH_REF_ID AND D.DAILY_FLAG_ID <> 'M' GROUP BY Y.YEAR Question How do I execute the duplicate bits only once per query, instead of twice? The duplicate bit is the WHERE clause: $X{ IN, C.ID, CityCode } AND SQRT( POW( C.LATITUDE - S.LATITUDE, 2 ) + POW( C.LONGITUDE - S.LONGITUDE, 2 ) ) < $P{Radius} AND S.STATION_DISTRICT_ID = Y.STATION_DISTRICT_ID AND Y.YEAR BETWEEN 1900 AND 2009 AND M.YEAR_REF_ID = Y.ID AND M.CATEGORY_ID = $P{CategoryCode} AND M.ID = D.MONTH_REF_ID AND D.DAILY_FLAG_ID <> 'M' Related http://stackoverflow.com/questions/1595659/how-to-eliminate-duplicate-calculation-in-sql Thank you!
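
    If the server supports common table expressions (MySQL 8.0+; MAKEDATE suggests MySQL), the shared FROM/WHERE block can be written once and referenced by both the aggregate and the slope/intercept derivation. A sketch that keeps the JasperReports $X{}/$P{} placeholders from the original, so it is a template rather than directly runnable SQL:

        WITH t AS (
            SELECT AVG(D.AMOUNT) AS AMOUNT,
                   Y.YEAR        AS YEAR
            FROM CITY C, STATION S, YEAR_REF Y, MONTH_REF M, DAILY D
            WHERE $X{ IN, C.ID, CityCode }
              AND SQRT( POW( C.LATITUDE - S.LATITUDE, 2 ) +
                        POW( C.LONGITUDE - S.LONGITUDE, 2 ) ) < $P{Radius}
              AND S.STATION_DISTRICT_ID = Y.STATION_DISTRICT_ID
              AND Y.YEAR BETWEEN 1900 AND 2009
              AND M.YEAR_REF_ID = Y.ID
              AND M.CATEGORY_ID = $P{CategoryCode}
              AND M.ID = D.MONTH_REF_ID
              AND D.DAILY_FLAG_ID <> 'M'
            GROUP BY Y.YEAR
        ),
        ymxb AS (
            SELECT ((SUM(YEAR) * SUM(AMOUNT)) - (COUNT(1) * SUM(YEAR * AMOUNT))) /
                   (POW(SUM(YEAR), 2) - COUNT(1) * SUM(POW(YEAR, 2)))  AS SLOPE,
                   ((SUM(YEAR) * SUM(YEAR * AMOUNT)) - (SUM(AMOUNT) * SUM(POW(YEAR, 2)))) /
                   (POW(SUM(YEAR), 2) - COUNT(1) * SUM(POW(YEAR, 2)))  AS INTERCEPT
            FROM t
        )
        SELECT t.AMOUNT,
               t.YEAR * ymxb.SLOPE + ymxb.INTERCEPT AS REGRESSION_LINE,
               t.YEAR,
               MAKEDATE(t.YEAR, 1) AS AMOUNT_DATE
        FROM t
        CROSS JOIN ymxb;

    On older MySQL versions without CTEs, materializing t into a temporary table (or a view) achieves the same single execution of the shared block.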

    Read the article

  • CSRF error in Django

    - by niklasfi
    Hello, I want to implement a login for my site. I basically copied and pasted the following bits together from the Django Book. However, I still get an error (CSRF verification failed. Request aborted.) when submitting my registration form. Can somebody tell me what raised this error and how to fix it? Here is my code: views.py: # Create your views here. from django import forms from django.contrib.auth.forms import UserCreationForm from django.http import HttpResponseRedirect from django.shortcuts import render_to_response def register(request): if request.method == 'POST': form = UserCreationForm(request.POST) if form.is_valid(): new_user = form.save() return HttpResponseRedirect("/books/") else: form = UserCreationForm() return render_to_response("registration/register.html", { 'form': form, }) register.html: <html> <body> {% block title %}Create an account{% endblock %} {% block content %} <h1>Create an account</h1> <form action="" method="post">{% csrf_token %} {{ form.as_p }} <input type="submit" value="Create the account"> </form> {% endblock %} </body> </html>
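
    A likely culprit in this era of Django: {% csrf_token %} only renders a value when the template is rendered with a RequestContext, which runs the CSRF context processor. render_to_response() with a bare dict skips it, so the form posts without a token and the middleware rejects the request. A sketch of the adjusted view:

        from django.contrib.auth.forms import UserCreationForm
        from django.http import HttpResponseRedirect
        from django.shortcuts import render_to_response
        from django.template import RequestContext

        def register(request):
            if request.method == 'POST':
                form = UserCreationForm(request.POST)
                if form.is_valid():
                    form.save()
                    return HttpResponseRedirect("/books/")
            else:
                form = UserCreationForm()
            # RequestContext runs the context processors that give
            # {% csrf_token %} its value.
            return render_to_response("registration/register.html",
                                      {'form': form},
                                      context_instance=RequestContext(request))

    In later Django versions, the render() shortcut does this automatically.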

    Read the article

  • Elegantly determine if more than one boolean is "true"

    - by Ola Tuvesson
    I have a set of five boolean values. If more than one of these are true I want to execute a particular function. What is the most elegant way you can think of that would allow me to check this condition in a single if() statement? Target language is C# but I'm interested in solutions in other languages as well (as long as we're not talking about specific built-in functions). One interesting option is to store the booleans in a byte, do a right shift and compare with the original byte. Something like if (myByte & (myByte >> 1)) But this would require converting the separate booleans to a byte (via a bitArray?) and that seems a bit (pun intended) clumsy... [edit]Sorry, that should have been if (myByte & (myByte - 1)) [/edit] Note: This is of course very close to the classical "population count", "sideways addition" or "Hamming weight" programming problem - but not quite the same. I don't need to know how many of the bits are set, only if it is more than one. My hope is that there is a much simpler way to accomplish this.
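
    Two sketches in C# (the flag names and DoSomething are placeholders). LINQ states the intent directly; the packed-int version is the question's n & (n - 1) trick, which is non-zero exactly when two or more bits are set:

        using System;
        using System.Linq;

        class MultiTrue
        {
            static void DoSomething() { Console.WriteLine("more than one was true"); }

            static void Main()
            {
                bool a = true, b = false, c = true, d = false, e = false;

                // Readable: count the true flags.
                if (new[] { a, b, c, d, e }.Count(x => x) > 1)
                    DoSomething();

                // Bit trick: n & (n - 1) clears the lowest set bit; anything
                // left over means at least two bits were set.
                int n = (a ? 1 : 0) | (b ? 2 : 0) | (c ? 4 : 0)
                      | (d ? 8 : 0) | (e ? 16 : 0);
                if ((n & (n - 1)) != 0)
                    DoSomething();
            }
        }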

    Read the article

  • Xcode "Build and Archive" from command line

    - by Dan Fabulich
    Xcode 3.2 provides an awesome new feature under the Build menu, "Build and Archive" which generates an .ipa file suitable for Ad Hoc distribution. You can also open the Organizer, go to "Archived Applications," and "Submit Application to iTunesConnect." Is there a way to use "Build and Archive" from the command line (as part of a build script)? I'd assume that xcodebuild would be involved somehow, but the man page doesn't seem to say anything about this. UPDATE Michael Grinich requested clarification; here's what exactly you can't do with command-line builds, features you can ONLY do with Xcode's Organizer after you "Build and Archive." You can click "Share Application..." to share your IPA with beta testers. As Guillaume points out below, due to some Xcode magic, this IPA file does not require a separately distributed .mobileprovision file that beta testers need to install; that's magical. No command-line script can do it. For example, Arrix's script (submitted May 1) does not meet that requirement. More importantly, after you've beta tested a build, you can click "Submit Application to iTunes Connect" to submit that EXACT same build to Apple, the very binary you tested, without rebuilding it. That's impossible from the command line, because signing the app is part of the build process; you can sign bits for Ad Hoc beta testing OR you can sign them for submission to the App Store, but not both. No IPA built on the command-line can be beta tested on phones and then submitted directly to Apple. I'd love for someone to come along and prove me wrong: both of these features work great in the Xcode GUI and cannot be replicated from the command line.
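
    For the plain build-an-IPA part (not the Organizer-only magic above), the classic command-line recipe of that era was xcodebuild followed by xcrun PackageApplication; later Xcode 4.x releases also added an "archive" action to xcodebuild itself. A sketch with placeholder target, paths, and signing identity:

        # Build the release configuration:
        xcodebuild -target MyApp -configuration Release clean build

        # Wrap the .app into a signed .ipa with the embedded profile:
        xcrun -sdk iphoneos PackageApplication \
            -v "build/Release-iphoneos/MyApp.app" \
            -o "$PWD/MyApp.ipa" \
            --sign "iPhone Distribution: My Company" \
            --embed "MyProfile.mobileprovision"

    As the question points out, this still cannot reproduce the Organizer's share-then-submit flow, since the signing choice is baked in at package time.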

    Read the article

  • I don't know how or where to add the correct encoding code to this iPhone code...

    - by BC
    OK, I understand that using strings that have special characters is an encoding issue. However, I am not sure how to adjust my code to allow these characters. Below is the code that works great for text that contains no special characters, but can you show me how and where to change the code to allow the special characters to be used? Right now those characters crash the app. - (void)alertView:(UIAlertView *)alertView clickedButtonAtIndex:(NSInteger)buttonIndex{ if (buttonIndex == 1) { //iTunes Audio Search NSString *stringURL = [NSString stringWithFormat:@"http://phobos.apple.com/WebObjects/MZSearch.woa/wa/search?WOURLEncoding=ISO8859_1&lang=1&output=lm&term=\"%@\"",currentSong.title]; stringURL = [stringURL stringByAddingPercentEscapesUsingEncoding:NSASCIIStringEncoding]; NSURL *url = [NSURL URLWithString:stringURL]; [[UIApplication sharedApplication] openURL:url]; } } And this: -(IBAction)launchLyricsSearch:(id)sender{ WebViewController * webView = [[WebViewController alloc] initWithNibName:@"WebViewController" bundle:[NSBundle mainBundle]]; webView.webURL = [NSString stringWithFormat:@"http://www.google.com/m/search?hl=es&q=\"%@\"+letras",currentSong.title]; webView.webTitle = @"Letras"; [self.navigationController pushViewController:webView animated:YES]; } Please show me how and where to do this for these two bits of code.
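
    A sketch of the usual fix for the first snippet: percent-escape with UTF-8 rather than ASCII, and escape the search term itself before splicing it into the URL (stringByAddingPercentEscapesUsingEncoding: leaves characters like & ? / alone, which is wrong for a query value). CFURLCreateStringByAddingPercentEscapes gives that control:

        NSString *term = [NSString stringWithFormat:@"\"%@\"", currentSong.title];
        NSString *escaped = (NSString *)CFURLCreateStringByAddingPercentEscapes(
            kCFAllocatorDefault,
            (CFStringRef)term,
            NULL,                                  // leave nothing unescaped
            (CFStringRef)@"!*'();:@&=+$,/?%#[]",   // do escape these
            kCFStringEncodingUTF8);                // UTF-8, not ASCII

        NSString *stringURL = [NSString stringWithFormat:
            @"http://phobos.apple.com/WebObjects/MZSearch.woa/wa/search?term=%@", escaped];
        [[UIApplication sharedApplication] openURL:[NSURL URLWithString:stringURL]];
        [escaped release]; // Core Foundation "Create" rule (pre-ARC code base)

    The same escaping applies to the Google URL in the second snippet before assigning webView.webURL. Note the original URL also asks for WOURLEncoding=ISO8859_1; dropping that, or changing it to UTF-8, keeps the server-side interpretation consistent - treat the exact parameter value as an assumption.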

    Read the article

  • Javascript CS-PRNG - 64-bit random

    - by Jack
    Hi, I need to generate a cryptographically secure 64-bit unsigned random integer in JavaScript. The first problem is that JavaScript only allows 64-bit signed integers, so 9223372036854775808 is the biggest supported integer without going into floating point use, I think? To fix this I can use a big number library, no problem. My method: var randNum = SHA256( randBigInt(128, 0) ) % 2^64; where SHA256() is a secure hash function and randBigInt() is defined below as a non-crypto PRNG; I'm giving it a 128-bit seed so brute force shouldn't be a problem. randBigInt(n,s) //return an n-bit random BigInt (n>=1). If s=1, then the most significant of those n bits is set to 1. Is this a secure method to generate a cryptographically secure 64-bit random int? And, importantly, does taking the result mod 2^64 guarantee 100% that I have a 64-bit number? An abstract example: say this number is prime (it isn't, I know); I will use it in the Galois field [2^p], where p must be 64 bits so that every possible 1-63 bit number is a field element. In this query, my random int must be larger than any 63-bit number. And I'm not sure I'm correct in taking the mod 2^64 of a 256-bit hash output. Thanks (hope that makes sense)
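
    Two notes, then a sketch. Reducing mod 2^64 guarantees a value in [0, 2^64), i.e. at most 64 bits - it does not guarantee a value larger than every 63-bit number; roughly half the results will fit in 63 bits. And in current environments there is a much more direct route: take 64 bits straight from the platform CSPRNG (this assumes crypto.getRandomValues and BigInt are available):

        // Uniform in [0, 2^64): no hashing, no modulo bias.
        function randomUint64() {
          const words = new Uint32Array(2);
          crypto.getRandomValues(words); // cryptographically secure source
          return (BigInt(words[0]) << 32n) | BigInt(words[1]);
        }

        console.log(randomUint64());

    If the value must have its top bit set (so it exceeds every 63-bit number), OR in 1n << 63n, at the cost of one bit of entropy.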

    Read the article

  • Python: Create nested dictionary from list of paths

    - by sberry2A
    I have a list of tuples that looks similar to this (simplified here; there are over 14,000 of these tuples, with more complicated paths than Obj.part): [ (Obj1.part1, {<SPEC>}), (Obj1.partN, {<SPEC>}), (ObjK.partN, {<SPEC>}) ] where Obj goes from 1 - 1000 and part from 0 - 2000. These "keys" all have a dictionary of specs associated with them which acts as a lookup reference for inspecting another binary file. The specs dict contains information such as the bit offset, bit size, and C type of the data pointed to by the path ObjK.partN. For example: Obj4.part500 might have this spec, {'size':32, 'offset':128, 'type':'int'}, which would let me know that to access Obj4.part500 in the binary file I must unpack 32 bits from offset 128. So, now I want to take my list of strings and create a nested dictionary which in the simplified case will look like this: data = { 'Obj1' : {'part1':{spec}, 'partN':{spec} }, 'ObjK' : {'part1':{spec}, 'partN':{spec} } } To do this I am currently doing two things. First, I am using a dotdict class to be able to use dot notation for dictionary get / set. That class looks like this: class dotdict(dict): def __getattr__(self, attr): return self.get(attr, None) __setattr__ = dict.__setitem__ __delattr__ = dict.__delitem__ The method for creating the nested "dotdicts" looks like this: def addPath(self, spec, parts, base): if len(parts) > 1: item = base.setdefault(parts[0], dotdict()) self.addPath(spec, parts[1:], item) else: item = base.setdefault(parts[0], spec) return base Then I just do something like: for path, spec in paths: self.lookup = dotdict() self.addPath(spec, path.split("."), self.lookup) So, in the end self.lookup.Obj4.part500 points to the spec. Is there a better (more pythonic) way to do this?
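
    A sketch of a flatter approach: walk each dotted path with setdefault and drop the spec in at the leaf, no recursion needed. Plain dicts shown; wrapping in dotdict instead works the same way.

        def build_lookup(paths):
            lookup = {}
            for path, spec in paths:
                *parents, leaf = path.split('.')
                node = lookup
                for part in parents:
                    node = node.setdefault(part, {})
                node[leaf] = spec
            return lookup

        paths = [('Obj4.part500', {'size': 32, 'offset': 128, 'type': 'int'})]
        data = build_lookup(paths)
        assert data['Obj4']['part500']['offset'] == 128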

    Read the article

  • Does unboxing just return a pointer to the value within the boxed object on the heap?

    - by Charles
    In this MSDN Magazine article, the author states (emphasis mine): Note that boxing always creates a new object and copies the unboxed value's bits to the object. On the other hand, unboxing simply returns a pointer to the data within a boxed object: no memory copy occurs. However, it is commonly the case that your code will cause the data pointed to by the unboxed reference to be copied anyway. I'm confused by the sentence I've bolded and the sentence that follows it. From everything else I've read, including this MSDN page, I've never before heard that unboxing just returns a pointer to the value on the heap. I was under the impression that unboxing would result in you having a variable containing a copy of the value on the stack, just as you began with. After all, if my variable contains "a pointer to the value on the heap", then I haven't got a value type, I've got a pointer. Can someone explain what this means? Was the author on crack? (There is at least one other glaring error in the article.) And if this is true, what are the cases where "your code will cause the data pointed to by the unboxed reference to be copied anyway"? I just noticed that the article is nearly 10 years old, so maybe this is something that changed very early on in the life of .NET.
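
    The distinction the author draws is at the IL level: the unbox instruction yields a managed pointer into the box, and it is the subsequent load (which C# emits as part of a cast like (int)o) that copies the bits to the stack. That load is the "copied anyway" case - from C# you essentially always see the copy. A small illustration of the consequence:

        object box = 42;        // boxing: copy 42 into a new heap object
        int copy = (int)box;    // unbox (pointer into the box) + copy out
        copy = 99;              // mutates only the stack copy...
        Console.WriteLine(box); // ...so this still prints 42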

    Read the article

  • C++ class is not being included properly.

    - by ravloony
    Hello all, I have a problem which is either something I have completely failed to understand, or very strange. It's probably the first one, but I have spent the whole afternoon googling with no success, so here goes... I have a class called Schedule, which has as a member a vector of Room. However, when I compile using cmake, or even by hand, I get the following: In file included from schedule.cpp:1: schedule.h:13: error: ‘Room’ was not declared in this scope schedule.h:13: error: template argument 1 is invalid schedule.h:13: error: template argument 2 is invalid schedule.cpp: In constructor ‘Schedule::Schedule(int, int, int)’: schedule.cpp:12: error: ‘Room’ was not declared in this scope schedule.cpp:12: error: expected ‘;’ before ‘r’ schedule.cpp:13: error: request for member ‘push_back’ in ‘((Schedule*)this)->Schedule::_sched’, which is of non-class type ‘int’ schedule.cpp:13: error: ‘r’ was not declared in this scope Here are the relevant bits of code: #include <vector> #include "room.h" class Schedule { private: std::vector<Room> _sched; //line 13 int _ndays; int _nrooms; int _ntslots; public: Schedule(); ~Schedule(); Schedule(int nrooms, int ndays, int ntslots); }; Schedule::Schedule(int nrooms, int ndays, int ntslots):_ndays(ndays), _nrooms(nrooms),_ntslots(ntslots) { for (int i=0; i<nrooms;i++) { Room r(ndays,ntslots); _sched.push_back(r); } } In theory, g++ should compile a class before the one that includes it. There are no circular dependencies here, it's all straightforward stuff. I am completely stumped on this one, which is what leads me to believe that I must be missing something. :-D
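
    Given that room.h is included and the error still says 'Room' was not declared, one classic cause to rule out is a duplicated include guard: if room.h accidentally carries the same #ifndef symbol as another header (easy to do when copy-pasting a header skeleton), the preprocessor skips Room's definition entirely, producing exactly these errors with no warning. A sketch of what room.h should look like, with a guard unique to it:

        // room.h
        #ifndef ROOM_H
        #define ROOM_H

        class Room {
        public:
            Room(int ndays, int ntslots);
            // ...
        };

        #endif // ROOM_H

    Running g++ -E schedule.cpp | grep -A3 "class Room" shows what the preprocessor actually kept, which settles the question quickly.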

    Read the article

  • Delphi fast large bitmap creation (without clearing)

    - by Ritsaert Hornstra
    When using the TBitmap wrapper for a GDI bitmap from the unit Graphics, I noticed it will always clear out the bitmap (using a PatBlt call) when setting up a bitmap with SetSize( w, h ). When I copy in the bits later on (see routine below), it seems ScanLine is the fastest possibility, and not SetDIBits. function ToBitmap: TBitmap; var i, N, x: Integer; S, D: PAnsiChar; begin Result := TBitmap.Create(); Result.PixelFormat := pf32bit; Result.SetSize( width, height ); S := Src; D := Result.ScanLine[ 0 ]; x := Integer( Result.ScanLine[ 1 ] ) - Integer( D ); N := width * sizeof( longword ); for i := 0 to height - 1 do begin Move( S^, D^, N ); Inc( S, N ); Inc( D, x ); end; end; The bitmaps I need to work with are quite large (150MB of RGB memory). With these images it takes 150ms to simply create an empty bitmap and a further 140ms to overwrite its contents. Is there a way of initializing a TBitmap with the correct size WITHOUT initializing the pixels themselves, leaving the memory of the pixels uninitialized (e.g. dirty)? Or is there another way to do such a thing? I know we could work on the pixels in place, but this still leaves the 150ms of unnecessary initialization of the pixels.
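
    One way around the VCL's clearing is to go through the Win32 API yourself: CreateDIBSection returns a bitmap whose pixel memory is handed over uninitialized, so nothing pays for the PatBlt. A Delphi sketch (assumes 32-bit pixels as in the routine above; treat it as a starting point rather than drop-in code):

        function CreateDirtyBitmap(Width, Height: Integer; out Bits: Pointer): HBITMAP;
        var
          Info: TBitmapInfo;
        begin
          FillChar(Info, SizeOf(Info), 0);
          with Info.bmiHeader do
          begin
            biSize := SizeOf(TBitmapInfoHeader);
            biWidth := Width;
            biHeight := -Height; // negative = top-down, like ScanLine[0] first
            biPlanes := 1;
            biBitCount := 32;
            biCompression := BI_RGB;
          end;
          // The DIB section's memory is NOT cleared by the system.
          Result := CreateDIBSection(0, Info, DIB_RGB_COLORS, Bits, 0, 0);
        end;

    Copy the source rows straight into Bits (one Move per row, or a single Move when the source is contiguous), and, if a VCL wrapper is needed afterwards, attach the handle with TBitmap.Handle := CreateDirtyBitmap(...).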

    Read the article

  • Bitwise operators and converting an int to 2 bytes and back again.

    - by aKiwi
    First time user, hi guys! So hopefully someone can help. My background is PHP, so entering the world of low-end stuff (chars are bytes, which are bits, which are binary values, etc.) is taking some time to get the hang of ;) What I'm trying to do here is send some values from an Arduino board to openFrameworks (both are C++). What this script currently does (and works well for one sensor, I might add) when asked for the data to be sent is: int value_01 = analogRead(0); // which outputs between 0-1024 unsigned char val1; unsigned char val2; //some complicated bitshift operation val1 = value_01 & 0xFF; val2 = (value_01 >> 8) & 0xFF; //send both bytes Serial.print(val1, BYTE); Serial.print(val2, BYTE); Apparently this is the most reliable way of getting the data across. So now that it is sent via the serial port, the bytes are added to a char string and converted back by: int num = ( (unsigned char)bytesReadString[1] << 8 | (unsigned char)bytesReadString[0] ); So to recap, I'm trying to get 4 sensors' worth of data (which I'm assuming will be 8 of those serial prints?) and to have int num_01 - num_04... at the end of it all. I'm assuming this (as with most things) might be quite easy for someone with experience in these concepts. Any help would be greatly appreciated. Thanks
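
    Yes - two bytes per sensor, so four sensors means eight Serial.print calls, sent in a fixed order so the receiver knows which pair belongs to which sensor. A sketch of both ends (uses the question's Arduino-era Serial.print(x, BYTE) API; buffer handling on the openFrameworks side is assumed):

        // Arduino side: low byte first, sensors 0 through 3.
        void sendReadings() {
          for (int pin = 0; pin < 4; pin++) {
            int value = analogRead(pin);             // 0..1023, fits in 10 bits
            Serial.print((unsigned char)(value & 0xFF), BYTE);        // low byte
            Serial.print((unsigned char)((value >> 8) & 0xFF), BYTE); // high byte
          }
        }

        // Receiving side: 8 bytes in, 4 ints out, same pairing as before.
        void unpackReadings(const unsigned char* buf, int* num) {
          for (int i = 0; i < 4; i++) {
            num[i] = (buf[2 * i + 1] << 8) | buf[2 * i];
          }
        }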

    Read the article
