Search Results

Search found 22300 results on 892 pages for 'half bit'.

  • Different results when applying function to equal values

    - by Johannes Stiehler
    I'm just digging a bit into Haskell and I started by trying to compute the phi coefficient of two words in a text. However, I ran into some very strange behaviour that I cannot explain. After stripping everything down, I ended up with this code to reproduce the problem:

        let sumTup = (sumTuples . concat) frequencyLists
        let sumFixTup = (138, 136, 17, 204)
        putStrLn (show ((138, 136, 17, 204) == sumTup))
        putStrLn (show (phi sumTup))
        putStrLn (show (phi sumFixTup))

    This outputs:

        True
        NaN
        0.4574206676616167

    So although sumTup and sumFixTup show as equal, they behave differently when passed to phi. The definition of phi is:

        phi (a, b, c, d) =
          let dividend = fromIntegral (a * d - b * c)
              divisor  = sqrt (fromIntegral ((a + b) * (c + d) * (a + c) * (b + d)))
          in  dividend / divisor
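
    A plausible diagnosis (an inference from the numbers shown, not stated in the question): if frequencyLists yields machine-width Ints on a 32-bit build, the product (a + b) * (c + d) * (a + c) * (b + d) = 274 * 221 * 155 * 340 = 3,191,195,800 exceeds 2^31 - 1, wraps negative, and sqrt of a negative Double is NaN. The literal tuple defaults to Integer, which cannot overflow, hence the differing results even though (==) holds. A minimal sketch of a fix, widening before the multiplication:

        -- Hypothetical fix: widen the Int components to Integer so the
        -- product (a+b)*(c+d)*(a+c)*(b+d) cannot overflow.
        phi :: (Int, Int, Int, Int) -> Double
        phi (a, b, c, d) =
          let a' = toInteger a
              b' = toInteger b
              c' = toInteger c
              d' = toInteger d
              dividend = fromIntegral (a' * d' - b' * c')
              divisor  = sqrt (fromIntegral ((a' + b') * (c' + d') * (a' + c') * (b' + d')))
          in  dividend / divisor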

  • Mercurial - revert back to old version and continue from there

    - by Paolo
    I'm using Mercurial locally for a project (it's the only repo; there's no pushing/pulling to/from anywhere else). To date it's got a linear history. However, I've now realised that the current thing I'm working on is a terrible approach, and I want to go back to the version before I started it and implement it a different way. I'm a bit confused by the branch / revert / update -C commands in Mercurial. Basically I want to revert to revision 38 (I'm currently on 45) and have my next commits take 38 as their parent and carry on from there. I don't care if revisions 39-45 are lost forever or end up in a dead-end branch of their own. Which command / set of commands do I need?
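
    A sketch of the usual approach (the commands are standard Mercurial; the revision numbers are from the question): update the working directory back to revision 38 and just keep committing, which leaves 39-45 behind as an anonymous dead-end head:

        # Put the working directory at revision 38, discarding any
        # uncommitted changes (-C = --clean).
        hg update -C 38

        # Edit files, then commit; this creates a new head whose parent
        # is 38, leaving 39-45 as a dead-end branch of their own.
        hg commit -m "Reimplemented the feature a different way"

    Mercurial will report that the repository now has multiple heads; that is expected and harmless here.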

  • "Verbose Dictionary" in C#, 'override new' this[] or implement IDictionary

    - by Benjol
    All I want is a dictionary which tells me which key it couldn't find, rather than just saying "The given key was not present in the dictionary". I briefly considered doing a subclass with override new this[TKey key], but felt it was a bit hacky, so I've gone with implementing the IDictionary interface and passing everything through directly to an inner Dictionary, with the only additional logic being in the indexer:

        public TValue this[TKey key]
        {
            get { ThrowIfKeyNotFound(key); return _dic[key]; }
            set { ThrowIfKeyNotFound(key); _dic[key] = value; }
        }

        private void ThrowIfKeyNotFound(TKey key)
        {
            if (!_dic.ContainsKey(key))
                throw new ArgumentOutOfRangeException("Can't find key [" + key + "] in dictionary");
        }

    Is this the right/only way to go? Would newing over the this[] really be that bad?
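
    For comparison, a minimal sketch (not from the question) that keeps one lookup per access via TryGetValue and reuses the framework's KeyNotFoundException, just with the key baked into the message:

        // Hypothetical alternative indexer for the same wrapper class.
        public TValue this[TKey key]
        {
            get
            {
                TValue value;
                if (!_dic.TryGetValue(key, out value))
                    throw new KeyNotFoundException(
                        "Can't find key [" + key + "] in dictionary");
                return value;
            }
            set { _dic[key] = value; } // plain setter still allows adding new keys
        }

    One side effect of the original setter worth noting: throwing when the key is absent also prevents adding entries through the indexer, which may or may not be intended.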

  • Reinstall TeamCity when Tomcat becomes corrupt

    - by dodegaard
    I've got a TeamCity 4 installation where Tomcat has bit the dust with the following error: "The APR Based Apache Tomcat Native library which allows for optimal performance in production environments was not found in java.library.path". It appears this started happening once the JDK was installed on the server to allow for a compile. The JDK has been removed and the JRE reinstalled, but still no go. My question is: should I reinstall TeamCity completely, or is there a way to simply reinstall Tomcat so I don't hose the configuration? Your help is greatly appreciated.

  • How does operating system software maintain time clocks?

    - by Neeraj
    Hi everyone, This may sound a bit less relevant, but I couldn't think of a better place to ask this question. Now consider this situation: you install an OS on your system, set the timezone and time, do some stuff and turn it off. (Note that there is no power going into the computer.) The next time (say after some hours or days) you turn it on again, you see the updated time. How is this possible when my computer is not connected to the internet and was consuming no power during the period it was down? (Is there some kind of hardware hack?) Please clarify!

  • Keeping a database structure up to date in a project where code is on subversion?

    - by Bruno De Barros
    I have been working with Subversion for a while now, and it's been incredible for the management of my projects, and even to help managing the deployment to several different servers, but there is just the one thing that still annoys me. Whenever I make any changes to the database structure, I need to update every server manually, I have to keep track of any changes I made, and because some of my servers run branches of the project (modifications that are still being worked on, or were made for different purposes), it's a bit awkward. Until now, I've been using a "database.sql" file, which is a dump of the database structure for a specific revision. But it just seems like such a bad way to manage this. And I was wondering, how does everyone else manage their MySQL databases when they're working on a project and using Subversion?
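
    One widely used pattern, sketched here under the assumption of a plain MySQL setup (table and file names are illustrative): keep numbered, forward-only migration scripts in Subversion next to the code, and record the applied version in the database itself, so each server - including branch deployments - can apply exactly the scripts it is missing:

        -- migrations/0001_schema_version.sql (hypothetical layout)
        CREATE TABLE schema_version (
            version    INT       NOT NULL PRIMARY KEY,
            applied_at TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP
        );
        INSERT INTO schema_version (version) VALUES (1);

        -- migrations/0002_add_users_email.sql
        ALTER TABLE users ADD COLUMN email VARCHAR(255);
        INSERT INTO schema_version (version) VALUES (2);

    A small deploy script compares SELECT MAX(version) on each server against the files present and runs the gap in order; branches carry their own migration files, which travel with the code when merged.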

  • UIView Animation won't run transition

    - by dpelletier
    So, I've searched quite a bit for this and can't seem to find a solution. This code works:

        CGContextRef context = UIGraphicsGetCurrentContext();
        [UIView beginAnimations:nil context:context];
        [UIView setAnimationDuration:5];
        [c setCenter:CGPointMake(200, 200)];
        [UIView commitAnimations];

    This code doesn't:

        CGContextRef context = UIGraphicsGetCurrentContext();
        [UIView beginAnimations:nil context:context];
        [UIView setAnimationBeginsFromCurrentState:YES];
        [UIView setAnimationTransition:UIViewAnimationTransitionFlipFromLeft forView:c cache:YES];
        [UIView setAnimationDuration:5];
        [c exchangeSubviewAtIndex:0 withSubviewAtIndex:1];
        [UIView commitAnimations];

    And I know the call to exchangeSubviewAtIndex: is working, because if I remove it from the animation block it functions as expected. Anyone have any insight as to why this transition won't run? Something I need to import?

  • NoSql Crash Course/Tutorial

    - by Chris Thompson
    Hi all, I've seen NoSQL pop up quite a bit on SO and I have a solid understanding of why you would use it (from here, Wikipedia, etc). This could be due to the lack of concrete and uniform definition of what it is (more of a paradigm than concrete implementation), but I'm struggling to wrap my head around how I would go about designing a system that would use it or how I would implement it in my system. I'm really stuck in a relational-db mindset thinking of things in terms of tables and joins... At any rate, does anybody know of a crash course/tutorial on a system that would use it (kind of a "hello world" for a NoSQL-based system) or a tutorial that takes an existing "Hello World" app based on SQL and converts it to NoSQL (not necessarily in code, but just a high-level explanation). I see this having one solid answer, but if you guys feel like it should be community wiki, I'll be happy to change it. Thanks! Chris

  • Injecting an XML fragment into the current document from an external file

    - by makenai
    I'm currently parsing an XML file using REXML and trying to come up with a way of inserting an XML fragment from an external file. Currently, I'm using some logic like the following:

        doc.elements.each('//include') do |element|
          handleInclude( element )
        end

        def handleInclude( element )
          if filename = element.attributes['file']
            data = File.open( filename ).read
            doc = REXML::Document.new( data )
            element.parent.replace_child( element, doc )
          end
        end

    Where my XML looks like the following:

        <include file="test.xml" />

    But this seems a little bit clunky, and I'm worried that REXML might not always parse XML fragments correctly due to the absence of a proper root node in some cases. Is there a better way of doing this? Concern #2: REXML seems not to pick up my changes after I replace elements. For example, after making a change:

        doc.elements.each('rootNode/*') do |element|
        end

    ..picks up neither the original element I replaced, nor the one I replaced it with. Is there some trick to getting REXML to rescan its tree?
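
    One detail worth checking (an assumption based on the snippet, not a confirmed diagnosis): replace_child is being handed the whole REXML::Document rather than its root element, which nests a Document node inside an Element and can confuse later XPath queries. A minimal sketch of the variant:

        require 'rexml/document'

        # Hypothetical variant: splice in the parsed file's root element,
        # not the Document wrapper around it.
        def handle_include(element)
          if (filename = element.attributes['file'])
            fragment = REXML::Document.new(File.read(filename))
            element.parent.replace_child(element, fragment.root)
          end
        end

    Collecting the include elements into an array before mutating the tree (doc.get_elements('//include').each { ... }) also sidesteps modifying the document while iterating over it.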

  • WPF data grid for financial style reporting?

    - by user191254
    Hello, I'm looking for a decent WPF data grid, or a solution involving one, to represent financial data. I've looked at many - the WPF one, Xceed, Infragistics, DevExpress, etc. - but none of them seem to offer the simple requirement I have: I want to be able to display group subtotals in their columns in the group row, e.g.

        GROUP 1          xxxx.xx
          GROUP 2        xxxx.xx
            ROW 1          xx.xx
            ROW 2          xx.xx

    Does anyone know of a grid that does this, or a nice supporting collection that implements aggregate functions (group totals would need to be used in individual line items) so that existing grids with a bit of XAML styling would work? Thanks in advance, Stephen

  • SQL 2008 Encryption Scan

    - by Mike K.
    We recently upgraded a database server from SQL 2005 to SQL 2008 64-bit. CPU utilization is oftentimes running at 100% on all four processors now (this never happened on the SQL 2005 server). When I run sp_lock I see a number of processes waiting on a resource called [ENCRYPTION_SCAN]. I am not using any SQL 2008 encryption features. Does anyone know why I would have tasks waiting on this resource? It appears that whenever I have four processes waiting on this resource, CPU hits 100% on all four processors.
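
    Two hedged diagnostics (standard DMVs, not specific to this symptom) to see what is waiting and whether Transparent Data Encryption is involved:

        -- Which sessions are waiting on the encryption scan?
        SELECT session_id, status, command, wait_type, wait_time
        FROM sys.dm_exec_requests
        WHERE wait_type = 'ENCRYPTION_SCAN';

        -- Is TDE enabled or mid-scan on any database?
        SELECT DB_NAME(database_id) AS database_name,
               encryption_state,   -- 2 = encryption in progress
               percent_complete
        FROM sys.dm_database_encryption_keys;

    If the second query returns no rows, no database has an encryption key, and the waits are more likely incidental than the root cause.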

  • Can PyAMF support service deployment by way of the filesystem?

    - by Chris R
    I'm evaluating PyAMF to replace our current PHP (ugh) AMF services framework, and I'm unable to find the one crucial piece of information that would allow me to provide a compelling use case for changing over: Right now, new PHP AMF services are deployed simply by putting the .php files in the filesystem; the next time they're accessed, the new service is in play. Removal of a service is as simple as deleting the .php file that provided it, and updating it is correspondingly simple. I need that same ease of deployment from PyAMF. If we have to rewrite our installers to deploy these services, it'll be a nonstarter. So, what I need to know is: can PyAMF support new service discovery by way of the filesystem, can it support service upgrading and removal by way of same, and if so, what is the best way to set it up to do this? I'm open to any of the various server options; I can easily have CherryPy, Django, whatever installed and running on its own, and even - with a bit more Sturm und Drang - have mod_python or mod_wsgi made available.

  • PHP or Javascript or other - Draw simple shapes onto images?

    - by Tommo
    I basically have an image of a world map and I would like to place a pin image at a specified pixel co-ordinate on top of this world map image. It's for a website, so ideally the solution should be in PHP or JavaScript (I'm avoiding Java and Flash as I want it to be as simple as possible). I had a look at the processing.js library, but it is way too big and bloated for just performing this simple task. Is there a pre-existing JavaScript function which will allow me to do this? Or a simpler JavaScript library that I can use? (processing.js was a bit too advanced for me; I couldn't get it working.) In terms of a PHP solution, I would prefer taking the load off the server and onto the client for this task, but I would still like to hear methods for doing it in PHP if they are suitable. Thanks!
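
    For the stated goal, a plain HTML/CSS overlay is usually enough; a minimal sketch (file names and coordinates are placeholders):

        <!-- Hypothetical markup: pin absolutely positioned over the map -->
        <div id="map" style="position: relative;">
          <img src="world-map.png" alt="World map">
          <img id="pin" src="pin.png" alt="Pin"
               style="position: absolute; left: 120px; top: 85px;">
        </div>

        <script>
          // Move the pin to a new pixel coordinate at runtime.
          function placePin(x, y) {
            var pin = document.getElementById('pin');
            pin.style.left = x + 'px';
            pin.style.top = y + 'px';
          }
        </script>

    If the pin must end up inside a single downloadable image, PHP's GD functions (imagecreatefrompng, imagecopy) can composite the two server-side instead.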

  • .NET Database Apps: Your Preferred Setup

    - by mdvaldosta
    I'm struggling to settle into a pattern for developing typical database-driven apps in C# and Visual Studio. There are so many ways to set them up: using drag/drop datasets and adapters, or writing the queries manually in ADO.NET, or LINQ to SQL, or LINQ to Entities; to bind or not to data-bind; etc. Where to store the connection string - in app.config, or in a method, or both? So many tutorials, and all of them are different. Every time I write something I start hating the way it looks and works, so I scrap it and start over. It's getting a bit tedious. Maybe it's a little of the OCD in me. Would any of you professional developers out there share your method of setting up and structuring your database logic, and maybe some sample code? It's really how to go about organizing the code and the method(s) of interacting with SQL that I'm trying to get into a routine with - one that works and won't get me laughed at by someone reviewing it.

  • OpenGL ES 1.1 vs 2.0 for 2D Graphics, with rotated sprites?

    - by Lee Olayvar
    I am having trouble finding information related to which I should choose, OpenGL ES 1.1 or 2.0, for 2D graphics. OpenGL ES 1.1 on Android is a bit limited to my knowledge, and based purely on sprite count the only useful renderer is draw_texture() (as far as I know). However, that does not support rotation, and rotation is very important to me. Now with the NDK adding support for OpenGL ES 2.0, I am trying to figure out if there is anything that performs as well as draw_texture() but can handle rotation. Anyone have any information on whether 2.0 can help me in this area?
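
    For what it's worth, ES 1.1 can rotate sprites without draw_texture(): draw a textured quad through the fixed-function matrix stack. A rough sketch against Android's GL10 binding (texture loading and the two FloatBuffers, a unit quad and its UV coordinates, are assumed set up elsewhere):

        // Assumed: textureId names a loaded texture; vertices/texCoords
        // are FloatBuffers describing a unit quad and its UVs.
        gl.glEnable(GL10.GL_TEXTURE_2D);
        gl.glBindTexture(GL10.GL_TEXTURE_2D, textureId);
        gl.glEnableClientState(GL10.GL_VERTEX_ARRAY);
        gl.glEnableClientState(GL10.GL_TEXTURE_COORD_ARRAY);
        gl.glVertexPointer(2, GL10.GL_FLOAT, 0, vertices);
        gl.glTexCoordPointer(2, GL10.GL_FLOAT, 0, texCoords);

        gl.glPushMatrix();
        gl.glTranslatef(spriteX, spriteY, 0f);
        gl.glRotatef(angleDegrees, 0f, 0f, 1f); // rotate about the Z axis
        gl.glDrawArrays(GL10.GL_TRIANGLE_STRIP, 0, 4);
        gl.glPopMatrix();

    Per sprite this is slower than draw_texture(), but batching quads into a single vertex array narrows the gap considerably.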

  • Java bytecode compiler benchmarks

    - by Dave Jarvis
    Q.1. What free compiler produces the fastest executable Java bytecode?
    Q.2. What free virtual machine executes Java bytecode the fastest (on 64-bit multi-core CPUs)?
    Q.3. What other (currently active) compiler projects are missing from this list:
        http://www.ibm.com/developerworks/java/jdk/
        http://gcc.gnu.org/java/
        http://openjdk.java.net/groups/compiler/
        http://java.sun.com/javase/downloads/
        http://download.eclipse.org/eclipse/downloads/
    Q.4. What performance improvements can compilers do that JITs cannot (or do not)?
    Q.5. Where are some recent benchmarks, comparisons, or shoot-outs (for Q1 or Q2)?
    Thank you!

  • How can Request Validation be disabled for HttpHandlers?

    - by Mun
    Is it possible to disable request validation for HttpHandlers? A bit of background - I've got an ASP.NET web application using an HttpHandler to receive the payment response from WorldPay. The IIS logs show that the handler is being called correctly from WorldPay, but the code inside the handler is never called. If I create a physical ASPX page and set ValidateRequest=false in the header, and put the same code in the Page_Load method, the code is called without any problems. This solves the problem, though I'd prefer to stick with using an HttpHandler for this as it's better suited for this type of functionality, rather than having an empty ASPX page, though this is dependent on being able to disable request validation. The web application is using ASP.NET 2.0 and the server is IIS6.

  • Adding HTML to the DOM through the client.

    - by Mantorok
    Hi all Just wanted your views on the most efficient way to render content on demand, the reason this has come to light is that I'm maintaining a AA compliant website and we recently added video to it, and one of the techniques to avoid invalidation was to add the HTML for the video to the DOM after the page has loaded - which is fine and working ok. However, I found 2 possible techniques and I would appreciate some views, the one I'm currently using is an AJAX call that returns the HTML, the other technique is a hidden field that contains the HTML - I'm presently using the former. The main reason I'm asking the question is because there may be times when this particular control is also requested via an AJAX call, so I would end up with back-to-back AJAX calls, which seems a bit inefficient to me. I hope this makes sense, are there any better techniques to achieve this? Am I worrying too much over the consecutive AJAX calls? Thanks

  • Closing an EDM ObjectContext?

    - by David Veeneman
    I am getting started with the ADO.NET Entity Framework 4.0. I have created an EDM and data store for the app, and it successfully retrieves entities. The application holds the EDM's ObjectContext as a member-level variable, which it uses to call ObjectContext.SaveChanges(). So far, so good. I am going to refactor to repositories later. Right now, my question is a bit more basic: When I am finished with the EDM, what do I need to do to release it? Is it as simple as calling Dispose() on the ObjectContext? Thanks for your help.
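
    Disposing it is the documented pattern when the context's unit of work is done; ObjectContext implements IDisposable, so a using block covers it. A sketch (the context and entity names are placeholders, not from the question):

        // Hypothetical EDM-generated context type.
        using (var context = new MyModelEntities())
        {
            var order = context.Orders.First(o => o.Id == 42);
            order.Status = "Shipped";
            context.SaveChanges();
        } // Dispose() runs here, releasing the underlying connection

    A long-lived member-level context also works for simple apps, but a short-lived context per unit of work usually transitions more cleanly to the repository refactoring mentioned.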

  • Theory of computation - Using the pumping lemma for CFLs

    - by Tony
    I'm reviewing my notes for my course on theory of computation and I'm having trouble understanding how to complete a certain proof. Here is the question: A = {0^n 1^m 0^n | n >= 1, m >= 1}. Prove that A is not regular. It's pretty obvious that the pumping lemma has to be used for this. So, we have:

        |vy| >= 1
        |vxy| <= p (p being the pumping length, p >= 1)
        uv^i x y^i z is in A for all i >= 0

    Trying to think of the correct string to choose seems a bit iffy for this. I was thinking 0^p 1^q 0^p, but I don't know if I can obscurely make a q, and since there is no bound on u, this could make things unruly. So, how would one go about this?
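
    A hedged sketch of the standard route (using the pumping lemma for regular languages, s = xyz with |xy| <= p and |y| >= 1, which suffices since the goal is only non-regularity): pick s = 0^p 1 0^p, which is in A since p >= 1 and m = 1 >= 1. Because |xy| <= p, the piece y must lie entirely inside the leading block of 0s, so y = 0^k with k >= 1. Pumping once more gives

        x y^2 z = 0^(p+k) 1 0^p

    whose outer blocks no longer match, so it is not in A, contradicting the lemma. The same string also works with the uvxyz (CFL-style) conditions quoted above, but the regular-language form avoids the case analysis over where v and y fall.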

  • SQL Server Developer Tools – Codename Juneau vs. Red-Gate SQL Source Control

    - by Ajarn Mark Caldwell
    So how do the new SQL Server Developer Tools (previously code-named Juneau) stack up against SQL Source Control? Read on to find out.

    At the PASS Community Summit a couple of weeks ago, it was announced that the previously code-named Juneau software would be released under the name of SQL Server Developer Tools with the release of SQL Server 2012. This replacement for Database Projects in Visual Studio (also known in a former life as Data Dude) has some great new features. I won’t attempt to describe them all here, but I will applaud Microsoft for making major improvements. One of my favorite changes is the way database elements are broken down. Previously every little thing was in its own file. For example, indexes were each in their own file. I always hated that. Now, SSDT uses a pattern similar to Red-Gate’s and puts the indexes and keys into the same file as the overall table definition.

    Of course there are really cool features to keep your database model in sync with the actual source scripts, and the rename refactoring feature is now touted as being more than just a search and replace, but rather a “semantic-aware” search and replace. Funny, it reminds me of SQL Prompt’s Smart Rename feature. But I’m not writing this just to criticize Microsoft and argue that they are late to the party with this feature set. Instead, I do see it as a viable alternative for folks who want all of their source code to be version controlled, but there are a couple of key trade-offs that you need to know about when you choose which tool set to use.

    First, the basics

    Both tool sets integrate with a wide variety of source control systems including the most popular: Subversion, Git, Vault, and Team Foundation Server. Both tools have integrated functionality to produce objects to upgrade your target database when you are ready (DACPACs in SSDT, integration with SQL Compare for SQL Source Control). If you regularly live in Visual Studio or the Business Intelligence Development Studio (BIDS) then SSDT will likely be comfortable for you. Like BIDS, SSDT is a Visual Studio project type that comes with SQL Server, and if you don’t already have Visual Studio installed, it will install the shell for you. If you already have Visual Studio 2010 installed, then it will just add this as an available project type. On the other hand, if you regularly live in SQL Server Management Studio (SSMS) then you will really enjoy the SQL Source Control integration from within SSMS. Both tool sets store their database model in script files. In SSDT, these are on your file system like other source files; in SQL Source Control, these are stored in the folder structure in your source control system, and you can always GET them to your file system if you want to browse them directly.

    For me, the key differentiating factors are 1) a single, unified check-in, and 2) migration scripts. How you value those two features will likely make your decision for you.

    Unified Check-In

    If you do a continuous-integration (CI) style of development that triggers an automated build with unit testing on every check-in of source code, and you use Visual Studio for the rest of your development, then you will want to really consider SSDT. Because it is just another project in Visual Studio, it can be added to your existing Solution, and you can then do a complete, or unified, single check-in of all changes whether they are application or database changes.

    This is simply not possible with SQL Source Control because it is in a different development tool (SSMS instead of Visual Studio) and there is no way to do one unified check-in between the two. You CAN do really fast back-to-back check-ins, but there is the possibility that the automated build triggered by the first check-in will cause your unit tests to fail and the CI tool to report that you broke the build. Of course, the automated build triggered by the second check-in, which contains the “other half” of your changes, should pass, and so the amount of time that the build was broken may be very, very short; but if that is very, very important to you, then SQL Source Control just won’t work; you’ll have to use SSDT.

    Refactoring and Migrations

    If you work on a mature system, or on a not-so-mature but also not-so-well-designed system, where you want to refactor the database schema as you go along, but you can’t have data suddenly disappearing from your target system, then you’ll probably want to go with SQL Source Control. As I wrote previously, there are a number of changes which you can make to your database that the comparison tools (both from Microsoft and Red Gate) simply cannot handle without the possibility (or probability) of data loss. Currently, SSDT only offers you the ability to inject PRE and POST custom deployment scripts. There is no way to insert your own script in the middle to override the default behavior of the tool. In version 3.0 of SQL Source Control (an Early Access version is now available) you have the ability to create your own custom migration script to take the place of the commands that the tool would have done, and ensure the preservation of your data. Or, even if the default tool behavior would have worked, but you simply know a better way, then you can take control and do things your way instead of theirs.

    You Decide

    In the environment I work in, our automated builds are not triggered off of check-ins, but off of the clock (currently once per night), and so there is no point at which the automated build and unit tests will be triggered without having both sides of the development effort already checked in. Therefore having a unified check-in, while handy, is not critical for us. As for migration scripts, these are critically important to us. We do a lot of new development on systems that have already been in production for years, and it is not uncommon for us to need to do a refactoring of the database. Because of the maturity of the existing system, that often involves data migrations or other additional SQL tasks that the comparison tools just can’t detect on their own. Therefore, the ability to create a custom migration script to override the tool’s default behavior is very important to us. And so, you can see why we will continue to use Red Gate SQL Source Control for the foreseeable future.

  • Google app engine - what is the lifecycle of PersistenceManager?

    - by Domchi
    What is the preferred way of using GAE datastore PersistenceManager for web app? GAE instructions are a bit ambiguous on the matter. Do I instantiate PersistenceManagerFactory for each RPC call, or do I use only one factory for all requests? Do I call PMF.get().getPersistenceManager(), or do I call PMF.get().getPersistenceManagerProxy()? Do I close PM after each RPC call, or do I leave it open? What are you guys doing? Furthermore, I'm not certain how GAE handles 30-second-per-request limit. Is it even possible to reference the same PM between requests?
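
    For reference, a sketch of the pattern Google's JDO examples promoted at the time (the names are standard JDO; "transactions-optional" is the default configuration name in jdoconfig.xml): one application-wide PersistenceManagerFactory, one short-lived PersistenceManager per request, closed in finally:

        import javax.jdo.JDOHelper;
        import javax.jdo.PersistenceManager;
        import javax.jdo.PersistenceManagerFactory;

        public final class PMF {
            // Creating the factory is expensive; share a single instance.
            private static final PersistenceManagerFactory INSTANCE =
                JDOHelper.getPersistenceManagerFactory("transactions-optional");

            private PMF() {}

            public static PersistenceManagerFactory get() { return INSTANCE; }
        }

        // In each RPC call:
        PersistenceManager pm = PMF.get().getPersistenceManager();
        try {
            // ... load and persist objects for this request ...
        } finally {
            pm.close(); // do not reuse the PM across requests
        }

    Since the 30-second limit applies per request, holding a PM across requests buys nothing and risks detached or stale state.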

  • Who could ask for more with LESS CSS? (Part 3 of 3 – Clrizr)

    - by ToString(theory);
    Welcome back! In the first two posts in this series, I covered some of the awesome features in CSS precompilers such as SASS and LESS, as well as how to get an initial project set up and running in ASP.Net MVC 4. In this post, I will cover an actual advanced example of using LESS in a project, and show some of the great productivity features we gain from its usage.

    Introduction

    In the first post, I mentioned two subjects that I will be using in this example – constants, and color functions. I've always enjoyed using online color scheme utilities such as Adobe Kuler or Color Scheme Designer to come up with a scheme based off of one primary color. Using these tools, and requesting a complementary scheme, you can get a couple of shades of your primary color, and a couple of shades of a complementary/accent color to display. Because there is no way in regular CSS to do color operations or store variables, there was no way to accomplish something like defining a primary color and having a site theme cascade off of that. However, with tools such as LESS, that impossibility becomes a reality! So, if you haven't guessed it by now, this post is on the creation of a plugin/module/LESS file to drop into your project: plug in one color, and have your primary theme cascade from it. I only went through the trouble of creating a module for getting complementary colors. However, it wouldn't be too much trouble to go through other options such as Triad or Monochromatic to get a module that you could use off of that.

    Step 1 – Analysis

    I decided to mimic Adobe Kuler's Complementary theme algorithm as I liked its simplicity and aesthetics. Color Scheme Designer is great, but I do believe it can give you too many color options, which can lead to chaos and overload. The first thing I had to check was if the complementary values for the color schemes were actually hues rotated by 180 degrees at all times – they aren't. Apparently Adobe applies some variance to the complementary colors to get colors that are actually more aesthetically appealing to users. So, I opened up Excel and began to plot complementary hues based on rotation in increments of 10.

    Long story short, I completed the same calculations for hue, saturation, and lightness. For hue, I only had to record the complementary hue values; however, for saturation and lightness, I had to record the values for ALL of the shades. Since the functions were too complicated to put into LESS – they aren't constant/linear, but rather interval functions – I instead opted to extrapolate the HSL values using the trendline function for each major interval, onto intervals of spacing 1. For example, the hue extraction produced a trendline function for each of the intervals 0-60, 60-140, 140-270, and 270-360 (the plotted chart and the table of trendline functions are omitted here). Saturation and lightness were much worse, but in the end, I finally had functions for all of the intervals, and then went the route of just grabbing each shade's value in intervals of 1.

    Step 2 – Mapping

    I declared variable names for each of these sections as something that shouldn't ever conflict with a variable someone would define in their own file. After I had each of the values, I extracted them and put them into files of their own for hue variables, saturation variables, and lightness variables. Example:

        /* HUE CONVERSIONS */
        @clrizr-hue-source-0deg: 133.43;
        @clrizr-hue-source-1deg: 135.601;
        @clrizr-hue-source-2deg: 137.772;
        @clrizr-hue-source-3deg: 139.943;
        @clrizr-hue-source-4deg: 142.114;
        ...

        /* SATURATION CONVERSIONS */
        @clrizr-saturation-s2SV0px: 0;
        @clrizr-saturation-s2SV1px: 0;
        @clrizr-saturation-s2SV2px: 0;
        @clrizr-saturation-s2SV3px: 0;
        @clrizr-saturation-s2SV4px: 0;
        ...

        /* LIGHTNESS CONVERSIONS */
        @clrizr-lightness-s2LV0px: 30;
        @clrizr-lightness-s2LV1px: 31;
        @clrizr-lightness-s2LV2px: 32;
        @clrizr-lightness-s2LV3px: 33;
        @clrizr-lightness-s2LV4px: 34;
        ...

    In the end, I have 973 lines of mapping/conversion from source HSL to shade HSL for two extra primary shades and two complementary shades. The last bit of the work was the file to compose each of the shades from these mappings.

    Step 3 – Clrizr Mapper

    The final step was the hardest to overcome as I was still trying to understand LESS to its fullest extent.

    Imports

    As mentioned previously, I had separated the HSL mappings into different files, so the first necessary step is to import those for use into the Clrizr plugin:

        @import url("hue.less");
        @import url("saturation.less");
        @import url("lightness.less");

    Extract Component Values For Each Shade

    Next, I extracted the necessary information for each shade HSL before shade composition:

        @clrizr-input-saturation: 1px + floor(saturation(@clrizr-input)) - 1;
        @clrizr-input-lightness:  1px + floor(lightness(@clrizr-input)) - 1;

        @clrizr-complementary-hue: formatstring("clrizr-hue-source-{0}", ceil(hue(@clrizr-input)));

        @clrizr-primary-2-saturation:       formatstring("clrizr-saturation-s2SV{0}", @clrizr-input-saturation);
        @clrizr-primary-1-saturation:       formatstring("clrizr-saturation-s1SV{0}", @clrizr-input-saturation);
        @clrizr-complementary-1-saturation: formatstring("clrizr-saturation-c1SV{0}", @clrizr-input-saturation);

        @clrizr-primary-2-lightness:       formatstring("clrizr-lightness-s2LV{0}", @clrizr-input-lightness);
        @clrizr-primary-1-lightness:       formatstring("clrizr-lightness-s1LV{0}", @clrizr-input-lightness);
        @clrizr-complementary-1-lightness: formatstring("clrizr-lightness-c1LV{0}", @clrizr-input-lightness);

    Here, you can see a couple of odd things. On the first line, I am using operations to add units to the saturation and lightness. This is due to some limitations in the operations that would give me saturation or lightness in %, which can't be in a variable name. So, I first add 1px to it, which casts the result of the following functions as px instead of %, and then at the end, I remove that pixel. You can also see here the formatstring method, which is exactly what it sounds like – something like String.Format(string str, params object[] obj).

    Get Primary & Complementary Shades

    Now that I have components for each of the different shades, I can compose them into each of their pieces. For this, I use the @@ operator, which will look for a variable with the name specified in a string, and then call that variable:

        @clrizr-primary-2:       hsl(hue(@clrizr-input), @@clrizr-primary-2-saturation, @@clrizr-primary-2-lightness);
        @clrizr-primary-1:       hsl(hue(@clrizr-input), @@clrizr-primary-1-saturation, @@clrizr-primary-1-lightness);
        @clrizr-primary:         @clrizr-input;
        @clrizr-complementary-1: hsl(@@clrizr-complementary-hue, @@clrizr-complementary-1-saturation, @@clrizr-complementary-1-lightness);
        @clrizr-complementary-2: hsl(@@clrizr-complementary-hue, saturation(@clrizr-input), lightness(@clrizr-input));

    That is it, for the most part. These variables now hold the theme for the one input color – @clrizr-input. However, I have one last addition...

    Perceptive Luminance

    Well, after I got the colors, I decided I wanted to also get the best font color that would go on top of each: black or white, depending on whether the shade is light or dark. Now, I couldn't just go with checking the lightness, as that is half the story. You see, the human eye doesn't see ALL colors equally well, but rather has more cells for interpreting green light compared to blue or red. So, using that ratio, we can calculate the perceptive luminance of each of the shades and get the font color that best matches it!

        @clrizr-perceptive-luminance-ps2: round(1 - ((0.299 * red(@clrizr-primary-2)) + (0.587 * green(@clrizr-primary-2)) + (0.114 * blue(@clrizr-primary-2))) / 255) * 255;
        @clrizr-perceptive-luminance-ps1: round(1 - ((0.299 * red(@clrizr-primary-1)) + (0.587 * green(@clrizr-primary-1)) + (0.114 * blue(@clrizr-primary-1))) / 255) * 255;
        @clrizr-perceptive-luminance-ps:  round(1 - ((0.299 * red(@clrizr-primary)) + (0.587 * green(@clrizr-primary)) + (0.114 * blue(@clrizr-primary))) / 255) * 255;
        @clrizr-perceptive-luminance-pc1: round(1 - ((0.299 * red(@clrizr-complementary-1)) + (0.587 * green(@clrizr-complementary-1)) + (0.114 * blue(@clrizr-complementary-1))) / 255) * 255;
        @clrizr-perceptive-luminance-pc2: round(1 - ((0.299 * red(@clrizr-complementary-2)) + (0.587 * green(@clrizr-complementary-2)) + (0.114 * blue(@clrizr-complementary-2))) / 255) * 255;

        @clrizr-col-font-on-primary-2:       rgb(@clrizr-perceptive-luminance-ps2, @clrizr-perceptive-luminance-ps2, @clrizr-perceptive-luminance-ps2);
        @clrizr-col-font-on-primary-1:       rgb(@clrizr-perceptive-luminance-ps1, @clrizr-perceptive-luminance-ps1, @clrizr-perceptive-luminance-ps1);
        @clrizr-col-font-on-primary:         rgb(@clrizr-perceptive-luminance-ps, @clrizr-perceptive-luminance-ps, @clrizr-perceptive-luminance-ps);
        @clrizr-col-font-on-complementary-1: rgb(@clrizr-perceptive-luminance-pc1, @clrizr-perceptive-luminance-pc1, @clrizr-perceptive-luminance-pc1);
        @clrizr-col-font-on-complementary-2: rgb(@clrizr-perceptive-luminance-pc2, @clrizr-perceptive-luminance-pc2, @clrizr-perceptive-luminance-pc2);

    Conclusion

    That's it! I have posted a project on clrizr.codePlex.com for this, and included a testing page for you to test out how it works. Feel free to use it in your own project, and if you have any questions, comments or suggestions, please feel free to leave them here as a comment, or on the contact page!

  • Showing edittext obliquely in android

    - by Chandra Sekhar
    I have an EditText, which generally shows parallel to the screen X-axis. I want to show it obliquely (around 45 degrees to the horizontal axis). Is it possible to do this in Android? Please guide me in a direction so that I can try for it. After getting the two links in the answer by pawelzeiba, I proceeded a little bit in solving this, but got stuck again, so I put another question on this; here is the link. So please help me solve this.
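
    One direct route, assuming the app can require API 11 or later (an assumption, not stated in the question): View has a rotation property settable from layout XML or code, so a 45-degree EditText needs no custom drawing:

        <!-- Hypothetical layout snippet; android:rotation needs API 11+ -->
        <EditText
            android:id="@+id/slanted"
            android:layout_width="wrap_content"
            android:layout_height="wrap_content"
            android:rotation="45" />

    Below API 11, the usual workaround is a wrapper ViewGroup that calls canvas.rotate(45) in dispatchDraw() before drawing the child, at the cost of also remapping touch coordinates.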

  • How do I select the number of distinct days in a date range?

    - by isme
    I'm trying to use the T-SQL function DATEDIFF to select the number of distinct dates in a time period. The following query: SELECT DATEDIFF(DAY, '2012-01-01 01:23:45', '2012-01-02 01:23:45') selects 1, which is one less than I want. There are two distinct dates in the range: 2012-01-01 and 2012-01-02. It is not correct to add one to the result in the general case. The following query: SELECT DATEDIFF(DAY, '2012-01-01 00:00:00', '2012-01-02 00:00:00') selects 1, which is correct, because there is only one distinct date in the range. I'm sure there is a simple bit of arithmetic that I'm missing to calculate this. Can someone help me?
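
    A hedged sketch of one reconciliation (this assumes the range is half-open, i.e. the end instant itself is excluded, which is what the two expected answers imply; CONVERT(time, ...) needs SQL Server 2008+): count day boundaries crossed, then add one more day only when the end has a nonzero time of day:

        DECLARE @start datetime = '2012-01-01 01:23:45';
        DECLARE @end   datetime = '2012-01-02 01:23:45';

        -- Distinct calendar dates touched by [@start, @end)
        SELECT DATEDIFF(DAY, @start, @end)
             + CASE WHEN CONVERT(time, @end) = '00:00:00' THEN 0 ELSE 1 END
               AS distinct_days; -- 2 here; 1 when @end falls exactly at midnight

    If instead the end instant should count as inside the range, DATEDIFF(DAY, @start, @end) + 1 is already the distinct-date count.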
