Search Results

Search found 14292 results on 572 pages for 'high integrity systems'.

Page 483 of 572

  • problem creating jpg thumb with php

    - by Ross
    Hi, I am having problems creating a thumbnail from an uploaded image. My problems are (i) the quality and (ii) the crop:

    http://welovethedesign.com.cluster.cwcs.co.uk/phpimages/large.jpg
    http://welovethedesign.com.cluster.cwcs.co.uk/phpimages/thumb.jpg

    If you look, the quality is very poor, and the crop is taken from the top rather than being a resize of the original image, although the dimensions mean it is in proportion. The original is 1600px wide by 1100px high. Any help would be appreciated.

        $thumb = $targetPath."Thumbs/".$fileName;
        $imgsize = getimagesize($targetFile);
        $image = imagecreatefromjpeg($targetFile);
        $width = 200;  // New width of image
        $height = 138; // This maintains proportions
        $src_w = $imgsize[0];
        $src_h = $imgsize[1];
        $thumbWidth = 200;
        $thumbHeight = 138; // Intended dimensions of thumb
        // Beyond this point is simply code.
        $sourceImage = imagecreatefromjpeg($targetFile);
        $sourceWidth = imagesx($sourceImage);
        $sourceHeight = imagesy($sourceImage);
        $targetImage = imagecreate($thumbWidth, $thumbHeight);
        imagecopyresized($targetImage, $sourceImage, 0, 0, 0, 0, $thumbWidth, $thumbWidth, imagesx($sourceImage), imagesy($sourceImage));
        //imagejpeg($targetImage, "$thumbPath/$thumbName");
        imagejpeg($targetImage, $thumb);
        chmod($thumb, 0755);
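
    For comparison, here is a minimal sketch of the same "proportional centre crop plus resample" idea in Python/Pillow; the file names are assumed, and it is offered only as an illustration of the technique rather than a drop-in PHP fix (on the GD side, the usual quality suspects are imagecreate(), which is palette-based, and imagecopyresized(), which does not resample).

        # Sketch: proportional, centre-cropped thumbnail with resampling (Pillow).
        # "large.jpg" and "thumb.jpg" are assumed file names.
        from PIL import Image, ImageOps

        src = Image.open("large.jpg")                          # 1600x1100 source
        thumb = ImageOps.fit(src, (200, 138), Image.LANCZOS)   # crop to 200:138 aspect, then resample
        thumb.save("thumb.jpg", quality=90)                    # keep JPEG quality reasonably high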

    Read the article

  • TextField Covering UIAlertView's Button.

    - by XcodeDev
    Hi, I am using a UIAlertView with three buttons: "Dismiss", "Submit Score" and "View Leaderboard". The UIAlertView also contains a UITextField called username. At the moment the UITextField "username" is covering one of the buttons in the UIAlertView. I just wanted to know how I could stop the UITextField from covering one of the buttons, i.e. move the buttons down. Here is an image of what is happening, and here is my code:

        [username setBackgroundColor:[UIColor whiteColor]];
        [username setBorderStyle:UITextBorderStyleRoundRect];
        username.backgroundColor = [UIColor clearColor];
        username.returnKeyType = UIReturnKeyDone;
        username.keyboardAppearance = UIKeyboardAppearanceAlert;
        username.placeholder = @"Enter your name here";
        username = [[UITextField alloc] initWithFrame:CGRectMake(20.0, 45.0, 245.0, 25.0)];
        username.borderStyle = UITextBorderStyleRoundedRect;
        [username resignFirstResponder];

        UIAlertView *alert = [[UIAlertView alloc] initWithTitle:@"Congratulations"
                              message:[NSString stringWithFormat:@"You tapped %i times in %i seconds!\n", tapAmount, originalCountdownTime]
                              delegate:self
                              cancelButtonTitle:@"Dismiss"
                              otherButtonTitles:@"Submit To High Score Leaderboard", @"View Leaderboard", nil];
        alert.tag = 01;
        [alert addSubview:username];
        [alert show];
        [alert release];

    Read the article

  • Code smells galore. Can this be a good company?

    - by Paperflyer
    I am currently doing some contract work for a company. Now they want to hire me for real. I have been reading on SO about code smells lately. The thing is, I have worked with some of their code and it smells. Badly. They use incredibly old versions of MSVC (2003), they do not seem to use version control systems, most code is completely undocumented, variable names with more than three letters are a rarity, there is commented out code all over the place, some methods take huge amounts of arguments, UI design is seemingly done by blind people... Yet they seem to be quite successful with what they do and their actual algorithms seem to be pretty sound and rather sophisticated. Since they mostly do DSP stuff, I am willing to ignore the UI side of things, but really these code smells are worrying. What would you think of a company that doesn't seem to value readable code? The people are nice enough and payment would be good. How much would you value code smells in this context? You see, this is my first job and SO got me worried, so I turn to you for suggestions ;-)

    Read the article

  • CakePHP Routes: Messing With The MVC

    - by thesunneversets
    So we have a real-estate-related site that has controller/action pairs like "homes/view", "realtors/edit", and so forth. From on high it has been deemed a good idea to refactor the site so that URLs are now in the format "/realtorname/homes/view/id", and perhaps also "/admin/homes/view/id" and/or "/region/...". As a mere CakePHP novice I'm finding it difficult to achieve this in routes.php. I can do the likes of:

        Router::connect('/:filter/h/:id', array('controller'=>'homes','action'=>'view'));
        Router::connect('/admin/:controller/:action/:id');

    But I'm finding that the id is no longer being passed simply and elegantly to the actions, now that controller and action do not directly follow the domain. Therefore, questions: Is it a stupid idea to play fast and loose with the /controller/action format in this way? Is there a better way of stating these routes so that things don't break egregiously? Would we be better off going back to subdomains (the initial method of achieving this type of functionality, shot down on potentially spurious SEO-related grounds)? Many thanks for any advice! I'm sorry that I'm such a newbie that I don't know whether I'm asking stupid questions or not...

    Read the article

  • Multiple inequality conditions (range queries) in NoSQL

    - by pableu
    Hi, I have an application where I'd like to use a NoSQL database, but I still want to do range queries over two different properties, for example: select all entries between times T1 and T2 where the noise level is smaller than X. On the other hand, I would like to use a NoSQL/key-value store because my data is very sparse and diverse, and I do not want to create new tables for every new datatype that I might come across. I know that you cannot use multiple inequality filters with the Google Datastore (source). I also know that this feature is coming (according to this). I know that this is also not possible in CouchDB (source). I think I also more or less understand why this is the case. Now, this makes me wonder: is that the case with all NoSQL databases? Can other NoSQL systems make range queries over two different properties? How about, for example, MongoDB? I've looked in the documentation, but the only thing I've found was the following snippet:

        Note that any of the operators on this page can be combined in the same query document.
        For example, to find all documents where j is not equal to 3 and k is greater than 10,
        you'd query like so:

        db.things.find({j: {$ne: 3}, k: {$gt: 10} });

    So they use greater-than and not-equal on two different properties. They don't say anything about two inequalities ;-) Any input and enlightenment is welcome :-)
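
    As an aside, here is a rough sketch of what the two-inequality query from the question could look like against MongoDB from Python via pymongo; the collection and field names are invented for illustration, and whether such a query can be served efficiently by a single index is exactly the kind of question being asked above.

        # Hypothetical pymongo sketch: two inequality (range) conditions on
        # different fields in one query. Field and collection names are made up.
        from datetime import datetime
        from pymongo import MongoClient

        t1 = datetime(2010, 1, 6, 10)
        t2 = datetime(2010, 1, 6, 18)
        x = 40

        client = MongoClient("mongodb://localhost:27017")
        readings = client.sensors.readings

        docs = readings.find({
            "timestamp": {"$gte": t1, "$lte": t2},   # range on time
            "noise_level": {"$lt": x},               # second inequality on another field
        })
        for doc in docs:
            print(doc)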

    Read the article

  • Neural Networks or Human-computer interaction

    - by Shahin
    I will be entering my third year of university in my next academic year, once I've finished my placement year as a web developer, and I would like to hear some opinions on the two modules in the title. I'm interested in both; however, I want to pick one that will be relevant to my career and that I can apply to systems I develop. I'm doing an Internet Computing degree; it covers web development, networking, database work and programming. Though I have had myself set on becoming a web developer, I'm not so sure about that any more, so I am trying not to limit myself to that area of development. I know HCI would help me as a web developer, but do you think it's worth it? Do you think neural network knowledge could realistically help me in a system I write in the future? Thanks.

    EDIT: Hi guys, I thought it would be useful to follow up with what I decided to do and how it's worked out. I picked Artificial Neural Networks over HCI, and I've really enjoyed it. Having a peek into cognitive science and machine learning has ignited my interest in the subject area, and I will be hoping to take on a postgraduate project a few years from now when I can afford it. I have got a job which I am starting after my final exams (which are in a few days), and I was indeed asked if I had done a module in HCI or similar. It didn't seem to matter, as it isn't a front-end developer position! I would recommend taking the module if you have it as an option, as well as any module consisting of biological computation; it will open up more doors should you want to go on to postgraduate research in the future. Thanks again, Shahin

    Read the article

  • How to index a table with a Type 2 slowly changing dimension for optimal performance

    - by The Lazy DBA
    Suppose you have a table with a Type 2 slowly-changing dimension. Let's express this table as follows, with the following columns:

        [Key]
        [Value1]
        ...
        [ValueN]
        [StartDate]
        [ExpiryDate]

    In this example, let's suppose that [StartDate] is effectively the date in which the values for a given [Key] become known to the system. So our primary key would be composed of both [StartDate] and [Key]. When a new set of values arrives for a given [Key], we assign [ExpiryDate] to some pre-defined high surrogate value such as '12/31/9999'. We then set the existing "most recent" records for that [Key] to have an [ExpiryDate] that is equal to the [StartDate] of the new value. A simple update based on a join. So if we always wanted to get the most recent records for a given [Key], we know we could create a clustered index that is:

        [ExpiryDate] ASC
        [Key] ASC

    Although the keyspace may be very wide (say, a million keys), we can minimize the number of pages between reads by initially ordering them by [ExpiryDate]. And since we know the most recent record for a given key will always have an [ExpiryDate] of '12/31/9999', we can use that to our advantage. However... what if we want to get a point-in-time snapshot of all [Key]s at a given time? Theoretically, the entirety of the keyspace isn't all being updated at the same time. Therefore for a given point-in-time, the window between [StartDate] and [ExpiryDate] is variable, so ordering by either [StartDate] or [ExpiryDate] would never yield a result in which all the records you're looking for are contiguous. Granted, you can immediately throw out all records in which the [StartDate] is greater than your defined point-in-time. In essence, in a typical RDBMS, what indexing strategy affords the best way to minimize the number of reads to retrieve the values for all keys for a given point-in-time? I realize I can at least maximize IO by partitioning the table by [Key], however this certainly isn't ideal. Alternatively, is there a different type of slowly-changing dimension that solves this problem in a more performant manner?

    Read the article

  • Project management and bundling dependencies

    - by Joshua
    I've been looking for ways to learn about the right way to manage a software project, and I've stumbled upon the following blog post. I've learned some of the things mentioned the hard way, others make sense, and yet others are still unclear to me. To sum up, the author lists a bunch of features of a project and how much those features contribute to a project's 'suckiness', for lack of a better term. You can find the full article here: http://spot.livejournal.com/308370.html In particular, I don't understand the author's stance on bundling dependencies with your project. These are:

        == Bundling ==
        Your source only comes with other code projects that it depends on [ +20 points of FAIL ]

    Why is this a problem (especially given the last point)?

        If your source code cannot be built without first building the bundled code bits [ +10 points of FAIL ]

    Doesn't this necessarily have to be the case for software built against 3rd-party libs? Your code needs that other code to be compiled into its library before the linker can work?

        If you have modified those other bundled code bits [ +40 points of FAIL ]

    If this is necessary for your project, then it naturally follows that you've bundled said code with yours. If you want to customize a build of some lib, say WxWidgets, you'll have to edit that project's build scripts to build the library that you want. Subsequently, you'll have to publish those changes to people who wish to build your code, so why not use a high-level make script with the params already written in, and distribute that? Furthermore (especially in a Windows env), if your code base is dependent on a particular version of a lib (that you also need to custom-compile for your project), wouldn't it be easier to give the user the code yourself (because in this case, it is unlikely that the user will already have the correct version installed)? So how would you respond to these comments, and what points may I be failing to take into consideration? Would you agree or disagree with the author's take (or mine), and why?

    Read the article

  • Using jQuery with Windows 8 Metro JavaScript App causes security error

    - by patridge
    Since it sounded like jQuery was an option for Metro JavaScript apps, I was starting to look forward to Windows 8 dev. I installed Visual Studio 2012 Express RC and started a new project (both empty and grid templates have the same problem). I made a local copy of jQuery 1.7.2 and added it as a script reference:

        <!-- SomeTestApp references -->
        <link href="/css/default.css" rel="stylesheet" />
        <script src="/js/jquery-1.7.2.js"></script>
        <script src="/js/default.js"></script>

    Unfortunately, as soon as I ran the resulting app it tosses out a console error:

        HTML1701: Unable to add dynamic content ' a'
        A script attempted to inject dynamic content, or elements previously modified dynamically,
        that might be unsafe. For example, using the innerHTML property to add script or malformed
        HTML will generate this exception. Use the toStaticHTML method to filter dynamic content,
        or explicitly create elements and attributes with a method such as createElement. For more
        information, see http://go.microsoft.com/fwlink/?LinkID=247104.

    I slapped a breakpoint in a non-minified version of jQuery and found the offending line:

        div.innerHTML = " <link/><table></table><a href='/a' style='top:1px;float:left;opacity:.55;'>a</a><input type='checkbox'/>";

    Apparently, the security model for Metro apps forbids creating elements this way. This error doesn't cause any immediate issues for the user, but given its location, I am worried it will cause capability-discovery tests in jQuery to fail that shouldn't. I definitely want jQuery $.Deferred for making just about everything easier. I would prefer to be able to use the selector engine and event handling systems, but I would live without them if I had to. How does one get the latest jQuery to play nicely with Metro development?

    Read the article

  • Examples using Active Directory/LDAP groups for permissions \ roles in Rails App.

    - by Nick Gorbikoff
    Hello. I was wondering how other people have implemented this scenario. I have an internal Rails app (inventory management, label printing, shipping, etc.). I'm rewriting security on the system, because the old way got too cumbersome to maintain (users table, passwords, roles) - I used restful_authentication and roles. It was implemented about 3 years ago. I already implemented AuthLogic with ruby-ldap-net to authenticate users (actually that was surprisingly easy, compared to how I struggled with other frameworks/languages before). The next step is roles. I already have groups defined in Active Directory, so I don't want to run a separate roles system in my Rails app; I just want to reuse Active Directory groups, since that part of the system is already maintained for other purposes (shared drives, backups, PC access, etc.). So I was wondering if others have experience implementing permissions/roles in a Rails app based on groups in Active Directory or LDAP. Also, the roles requirements are pretty complex. Here is an example: I have users that belong to the supervisors group in AD and to the inventory dept, so I want such a user to be able to run "advanced" tasks in inventory - adjust qty, run reports - however other "supervisors" from other departments shouldn't be able to do this. Also, Top Management should be able to use those reports (regardless of whether they belong to inventory or not), but not Middle Management, unless they are in the inventory group. Admins of the system (Domain Admins) should have unrestricted access to the system, except for the HR & Finances part unless they are in HR (like you don't want all system admins (except for one authorized one) to see personal info of other employees). I looked at acl9, cancan, aegis. I was wondering if there are any advantages/cons to using one versus the other for this particular use of system access based on AD. Suggest other systems if you had good experience. Thank you!!!

    Read the article

  • Creating a tar file with checksums included

    - by wazoox
    Here's my problem: I need to archive to tar files a lot (up to 60 TB) of big files (usually 30 to 40 GB each). I would like to make checksums (md5, sha1, whatever) of these files before archiving; however, not reading every file twice (once for checksumming, twice for tar'ing) is more or less a necessity to achieve a very high archiving performance (LTO-4 wants 120 MB/s sustained, and the backup window is limited). So I'd need some way to read a file, feeding a checksumming tool on one side, and building a tar to tape on the other side, something along the lines of:

        tar cf - files | tee tarfile.tar | md5sum -

    Except that I don't want the checksum of the whole archive (this sample shell code does just that) but a checksum for each individual file in the archive. I've studied GNU tar, Pax and Star options. I've looked at the source from Archive::Tar. I see no obvious way to achieve this. It looks like I'll have to hand-build something in C or similar to achieve what I need. Perl/Python/etc. simply won't cut it performance-wise, and the various tar programs miss the necessary "plugin architecture". Does anyone know of any existing solution to this before I start code-churning?
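
    To make the single-pass idea concrete, here is a rough Python sketch (illustration only, and offered despite the poster's performance reservations about scripting languages): a small file-like wrapper hashes every chunk the tar writer reads, so each file is read exactly once while being streamed into the archive. The file list and output path are assumptions.

        # Sketch: write files to a tar stream while computing a per-file MD5 in the
        # same read pass. Input files and output path are hypothetical.
        import hashlib
        import tarfile

        class HashingReader:
            """Wraps a file object; hashes every chunk the tar writer reads."""
            def __init__(self, fileobj):
                self.fileobj = fileobj
                self.md5 = hashlib.md5()
            def read(self, size=-1):
                chunk = self.fileobj.read(size)
                self.md5.update(chunk)
                return chunk

        files = ["bigfile1.bin", "bigfile2.bin"]             # assumed input files
        with tarfile.open("archive.tar", mode="w") as tar:   # could be a tape device instead
            for path in files:
                info = tar.gettarinfo(path)
                with open(path, "rb") as f:
                    reader = HashingReader(f)
                    tar.addfile(info, fileobj=reader)        # streams the file into the archive
                print(path, reader.md5.hexdigest())          # per-file checksum, single read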

    Read the article

  • Suggestion on UPnP presentation

    - by Microkernel
    Hi all, I am working on an embedded device (a bit higher-end in terms of system resources, but still an embedded one) which has a lot of media content on it. I am trying to make it UPnP compliant and want to be able to control this device using a UPnP-compliant control point/companion device like an iPad. The step towards this is to be able to present the playlist content to the user. We thought of using HTML5 as the format. But as I am a noob in web technologies, I am not sure what's the best way to produce and present rich dynamic web pages. The content that's presented is the video/audio listing that the device can play, and I want this listing to be generated using the user's input criteria. So, what would be the best way to generate these dynamic pages which are rich and rendered as HTML5 pages? (I looked at XML & XSLT, but there seem to be some limitations in how well one can use XSLT, from some reviews I saw.) Thanks, Microkernel. PS: This may be silly or very basic, as I am an embedded systems developer and not even a noob in web technologies...

    Read the article

  • Pseudo-quicksort time complexity

    - by Ord
    I know that quicksort has O(n log n) average time complexity. A pseudo-quicksort (which is only a quicksort when you look at it from far enough away, with a suitably high level of abstraction) that is often used to demonstrate the conciseness of functional languages is as follows (given in Haskell):

        quicksort :: Ord a => [a] -> [a]
        quicksort [] = []
        quicksort (p:xs) = quicksort [y | y<-xs, y<p] ++ [p] ++ quicksort [y | y<-xs, y>=p]

    Okay, so I know this thing has problems. The biggest problem with this is that it does not sort in place, which is normally a big advantage of quicksort. Even if that didn't matter, it would still take longer than a typical quicksort because it has to do two passes of the list when it partitions it, and it does costly append operations to splice it back together afterwards. Further, the choice of the first element as the pivot is not the best choice. But even considering all of that, isn't the average time complexity of this quicksort the same as the standard quicksort? Namely, O(n log n)? Because the appends and the partition still have linear time complexity, even if they are inefficient.
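
    For what it's worth, the same reasoning can be made concrete with a direct Python translation of that Haskell definition (a sketch for illustration, not an endorsement of using it in practice): each call still does O(n) work to partition and concatenate, so the average-case recurrence T(n) = T(k) + T(n-k-1) + O(n) is the same as for a standard quicksort.

        # Python analogue of the Haskell pseudo-quicksort above (illustration only).
        # Partitioning makes two passes and concatenation copies lists, but both are
        # still O(n) per call, so the average case remains O(n log n).
        def pseudo_quicksort(xs):
            if not xs:
                return []
            p, rest = xs[0], xs[1:]
            return (pseudo_quicksort([y for y in rest if y < p])
                    + [p]
                    + pseudo_quicksort([y for y in rest if y >= p]))

        print(pseudo_quicksort([3, 1, 4, 1, 5, 9, 2, 6]))  # [1, 1, 2, 3, 4, 5, 6, 9]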

    Read the article

  • Did we always have to register to download the Java 5 JDK, or is this new Oracle fun?

    - by Ukko
    I could swear that just a couple of months ago I downloaded a copy of the Java 1.5 SE JDK and I did not have to give them information on my first born. Today, I had to go through the register-and-we-will-send-you-a-link-someday dance. I have not received the link yet, so I thought I would ask about it here. What is special about the Java 5 JDK? I can get 6 just by clicking, is this a stick to get us to migrate to Java 6? Am I just not remembering doing this before? What marketing genius thought this would be a value add for Java? "If we make them sweat for the JDK they won't just delete it willy-nilly the next time?" Does everyone picture the people designing systems like this as mustache twirling Snidely Whiplash clones like I do? Did I just miss the link for the Secret Squirrel route to the download page? Finally, I am in the U.S. so I should not have to worry about export restrictions. Any thoughts?

    Read the article

  • forcing stack w/i 32bit when -m64 -mcmodel=small

    - by chaosless
    I have C sources that must compile in 32-bit and 64-bit for multiple platforms. A structure takes the address of a buffer and needs to fit that address in a 32-bit value. Obviously, where possible these structures will use natural-sized void * or char * pointers; however, for some parts an API specifies the size of these pointers as 32-bit. On x86_64 Linux with -m64 -mcmodel=small, both static data and malloc()'d data fit within the 2 GB range. Data on the stack, however, still starts in high memory. So given a small utility _to_32() such as:

        int _to_32( long l ) { int i = l & 0xffffffff; assert( i == l ); return i; }

    then:

        char *cp = malloc( 100 );
        int a = _to_32( cp );

    will work reliably, as would:

        static char buff[ 100 ];
        int a = _to_32( buff );

    but:

        char buff[ 100 ];
        int a = _to_32( buff );

    will fail the assert(). Does anyone have a solution for this without writing custom linker scripts? Or any ideas how to arrange the linker section for stack data? It would appear it is being put in this section in the linker script:

        .lbss : { *(.dynlbss) *(.lbss .lbss.* .gnu.linkonce.lb.*) *(LARGE_COMMON) }

    Thanks!

    Read the article

  • Leveraging hobby experience to get a job

    - by Bernard
    Like many others, I began programming at an early age. I started when I was 11 and I learned C when I was 14 (now 26). While most of what I did were games just to entertain myself, I did everything from low-level 2D graphics and binary I/O to interfacing with free APIs, custom file systems, audio, 3D animations, OpenGL, web sites, etc. I worked on a wide variety of things trying to make various games. Because of this experience I have tested out of every college-level C/C++ programming course I have ever been offered. In the classes I took, my classmates would need a week to do what I finished in class with an hour or two of work. I now have my degree, and I have 2 years of experience working full time as a web developer; however, I would like to get back into C++ and hopefully do simulation programming. Unfortunately I have yet to do C++ as a job; I have only done it for testing out of classes and doing my senior project in college. So most of what I have in C++ is still hobby experience, and I don't know how to best convey that so that I don't end up stuck doing something too low-level for me. Right now I see a job offer that requires 2 years of C++ experience, but I have at least 9 (I didn't do C++ every day for the last 14 years). How do I convey my experience? How much is it truly worth? And how do I get its full value? The best thing that I can think of is a demo and a portfolio; however, that only comes into play after an interview has been secured. I used a portfolio to land my current job. All answers and advice are appreciated.

    Read the article

  • Calculate location and number of intersections between multiple date/time ranges?

    - by Patricker
    I need to calculate the location of intersections between multiple date ranges, and the number of overlapping intersections. Then I need to show which date/time ranges overlap each of those intersecting sections. It is slightly more complicated than that, so I'll do my best to explain by providing an example. I am working in VB.Net, but C# examples are acceptable as well, as I work in both. We have several high-risk tasks that involve the same system. Below I have four example jobs named HR1/2/3/4 with start and end date/times.

        HR1 {1/6/2010 10:00 - 1/6/2010 15:00}
        HR2 {1/6/2010 11:00 - 1/6/2010 18:00}
        HR3 {1/6/2010 12:00 - 1/6/2010 14:00}
        HR4 {1/6/2010 18:00 - 1/6/2010 20:00}

    What I want the end result to be is shown below. I am having trouble describing it any way but by example.

        HRE1 {1/6/2010 10:00 - 1/6/2010 11:00} - Intersects 1
        {End Time Split 1, for readability only, not needed in solution}
        HRE1 {1/6/2010 11:00 - 1/6/2010 12:00} - Intersects 2
        HRE2 {1/6/2010 11:00 - 1/6/2010 12:00} - Intersects 2
        {End Time Split 2, for readability only, not needed in solution}
        HRE1 {1/6/2010 12:00 - 1/6/2010 14:00} - Intersects 3
        HRE2 {1/6/2010 12:00 - 1/6/2010 14:00} - Intersects 3
        HRE3 {1/6/2010 12:00 - 1/6/2010 14:00} - Intersects 3
        {End Time Split 3, for readability only, not needed in solution}
        HRE1 {1/6/2010 14:00 - 1/6/2010 15:00} - Intersects 2
        HRE2 {1/6/2010 14:00 - 1/6/2010 15:00} - Intersects 2
        {End Time Split 4, for readability only, not needed in solution}
        HRE2 {1/6/2010 15:00 - 1/6/2010 18:00} - Intersects 1
        {End Time Split 5, for readability only, not needed in solution}
        HR4 {1/6/2010 18:00 - 1/6/2010 20:00} - Intersects 1

    Any help would be greatly appreciated.
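
    One way to compute those segments is a boundary sweep: collect every range start/end, cut the timeline at each boundary, and for each resulting segment count and list the ranges that cover it. The sketch below shows the idea in Python for brevity (the poster works in VB.Net/C#); the job data mirrors the example above.

        # Sweep over the distinct boundary times; every adjacent pair of boundaries is a
        # segment, and each segment is covered by whichever ranges span it entirely.
        from datetime import datetime

        jobs = {
            "HR1": (datetime(2010, 1, 6, 10), datetime(2010, 1, 6, 15)),
            "HR2": (datetime(2010, 1, 6, 11), datetime(2010, 1, 6, 18)),
            "HR3": (datetime(2010, 1, 6, 12), datetime(2010, 1, 6, 14)),
            "HR4": (datetime(2010, 1, 6, 18), datetime(2010, 1, 6, 20)),
        }

        boundaries = sorted({t for start, end in jobs.values() for t in (start, end)})
        for seg_start, seg_end in zip(boundaries, boundaries[1:]):
            covering = [name for name, (start, end) in jobs.items()
                        if start <= seg_start and end >= seg_end]
            if covering:
                print(f"{seg_start:%H:%M} - {seg_end:%H:%M} intersects {len(covering)}: {covering}")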

    Read the article

  • Transform only one axis to log10 scale with ggplot2

    - by daroczig
    I have the following problem: I would like to visualize a discrete and a continuous variable on a boxplot, in which the latter has a few extremely high values. This makes the boxplot meaningless (the points and even the "body" of the chart are too small), which is why I would like to show it on a log10 scale. I am aware that I could leave out the extreme values from the visualization, but I do not intend to. Let's see a simple example with the diamonds data:

        m <- ggplot(diamonds, aes(y = price, x = color))

    The problem is not serious here, but I hope you can imagine why I would like to see the values on a log10 scale. Let's try it:

        m + geom_boxplot() + coord_trans(y = "log10")

    As you can see, the y axis is log10 scaled and looks fine, but there is a problem with the x axis, which makes the plot very strange. The problem does not occur with scale_log, but this is not an option for me, as I cannot use a custom formatter this way. E.g.:

        m + geom_boxplot() + scale_y_log10()

    My question: does anyone know a solution to plot the boxplot with a log10 scale on the y axis whose labels can be freely formatted with a formatter function like in this thread?

    Editing the question to help answerers, based on answers and comments. What I am really after: one log10-transformed axis (y) with non-scientific labels. I would like to label it like dollar (formatter=dollar) or any custom format. If I try @hadley's suggestion I get the following warnings:

        > m + geom_boxplot() + scale_y_log10(formatter=dollar)
        Warning messages:
        1: In max(x) : no non-missing arguments to max; returning -Inf
        2: In max(x) : no non-missing arguments to max; returning -Inf
        3: In max(x) : no non-missing arguments to max; returning -Inf

    With unchanged y axis labels:

    Read the article

  • File descriptor limits and default stack sizes

    - by Charles
    Where I work we build and distribute a library and a couple of complex programs built on that library. All code is written in C and is available on most 'standard' systems like Windows, Linux, AIX, Solaris, Darwin. I started in the QA department, and while running tests recently I have been reminded several times that I need to remember to set the file descriptor limits and default stack sizes higher or bad things will happen. This is particularly the case with Solaris and now Darwin. Now this is very strange to me, because I am a believer in zero required environment fiddling to make a product work. So I am wondering if there are times where this sort of requirement is a necessary evil, or if we are doing something wrong. Edit: Great comments that describe the problem and a little background. However, I do not believe I worded the question well enough. Currently, we require customers, and hence us the testers, to set these limits before running our code. We do not do this programmatically. And this is not a situation where they MIGHT run out; under normal load our programs WILL run out and seg fault. So, rewording the question: is requiring the customer to change these ulimit values to run our software to be expected on some platforms (i.e. Solaris, AIX), or are we as a company making it too difficult for these users to get going? Bounty: I added a bounty to hopefully get a little more information on what other companies are doing to manage these limits. Can you set these programmatically? Should we? Should our programs even be hitting these limits, or could this be a sign that things might be a bit messy under the covers? That is really what I want to know; as a perfectionist, a seemingly dirty program really bugs me.
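
    As an illustration of the "set it programmatically" option raised in the bounty, here is a minimal Python sketch (the product itself is C, so this only shows the idea; a C program would make the equivalent setrlimit(2) call): a process can raise its own soft limits up to the hard limit at startup, but it cannot exceed the hard limit without administrative configuration, which is why some platforms may still require a ulimit or system-level change.

        # Sketch: raise this process's soft limits to the hard limit at startup.
        # Exceeding the hard limit still requires administrative action on the host.
        import resource

        for name in ("RLIMIT_NOFILE", "RLIMIT_STACK"):
            limit = getattr(resource, name)
            soft, hard = resource.getrlimit(limit)
            try:
                resource.setrlimit(limit, (hard, hard))   # soft -> hard
            except (ValueError, OSError) as err:
                print(f"could not raise {name}: {err}")   # e.g. unlimited stack refused on some OSes
            print(name, resource.getrlimit(limit))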

    Read the article

  • Convert a raw string to an array of big-endian words with Ruby

    - by Zag zag..
    Hello, I would like to convert a raw string to an array of big-endian words. As an example, here is a JavaScript function that does it well (by Paul Johnston):

        /*
         * Convert a raw string to an array of big-endian words
         * Characters >255 have their high-byte silently ignored.
         */
        function rstr2binb(input) {
          var output = Array(input.length >> 2);
          for(var i = 0; i < output.length; i++)
            output[i] = 0;
          for(var i = 0; i < input.length * 8; i += 8)
            output[i>>5] |= (input.charCodeAt(i / 8) & 0xFF) << (24 - i % 32);
          return output;
        }

    I believe the Ruby equivalent can be String#unpack(format). However, I don't know what the correct format parameter should be. Thank you for any help. Regards
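
    In Ruby's pack/unpack notation, the directive for a 32-bit big-endian unsigned integer is 'N', so something along the lines of input.unpack('N*') is the usual starting point (note that it only consumes complete 4-byte groups, so a trailing partial word may need padding first). For comparison, here is a small Python sketch of the same "bytes to big-endian 32-bit words" conversion, illustration only, with the trailing partial word zero-padded; the input bytes are an example.

        # Sketch: interpret a byte string as 32-bit big-endian words, zero-padding the tail.
        import struct

        def bytes_to_be_words(data):
            padded = data + b"\x00" * (-len(data) % 4)                 # pad to a multiple of 4 bytes
            return list(struct.unpack(">%dI" % (len(padded) // 4), padded))

        print([hex(w) for w in bytes_to_be_words(b"abcde")])           # ['0x61626364', '0x65000000']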

    Read the article

  • Experience with Take home Programming Test for Interviews

    - by Alan
    Okay, this is not "programming" related per se, but it is a situation that I believe the SO audience would be more familiar with than, say, an ask.yahoo.com audience, so please forgive me. I had a phone screen the other day with a company that I really want to work for. It went pretty well, based on cues from the HR person, such as "Next step we're going to send you a programming test," and "Well, before I get ahead of myself, do you want to continue the interviewing process?" and "We'll send out the test later this afternoon. It doesn't sound like you'll have trouble with it, but I want to be honest: we do have a high failure rate on it." The questions asked weren't technical, just going down my resume and talking about the work I've done and how it relates to the position. Nothing I couldn't talk through. This was last Thursday. It's now Tuesday, and I haven't received the test yet. I sent a follow-up email yesterday to the lady who interviewed me, but haven't gotten a response. Has anyone had a similar experience? Am I reading too much into this? Or was I off the mark in thinking I had moved on to the next step in the interview process? Since this is a company I really want to work for, I'm driving myself insane enumerating all the various what-if scenarios.

    Read the article

  • How to create nested ViewComponents in Monorail and NVelocity?

    - by rob_g
    I have been asked to update the menu on a website we maintain. The website uses Castle Windsor's MonoRail and NVelocity as the template engine. The menu is currently rendered using custom-made subclasses of ViewComponent, which render li elements. At the moment there is only one (horizontal) level, so the current mechanism is fine. I have been asked to add drop-down menus to some of the existing menus. As this is the first time I have seen MonoRail and NVelocity, I'm a little lost. What currently exists:

        <ul>
          #component(MenuComponent with "title=Home" "hover=autoselect" "link=/")
          #component(MenuComponent with "title=Videos" "hover=autoselect")
          #component(MenuComponent with "title=VPS" "hover=autoselect" "link=/vps")
          #component(MenuComponent with "title=Add-Ons" "hover=autoselect" "link=/addons")
          #component(MenuComponent with "title=Hosting" "hover=autoselect" "link=/hosting")
          #component(MenuComponent with "title=Support" "hover=autoselect" "link=/support")
          #component(MenuComponent with "title=News" "hover=autoselect" "link=/news")
          #component(MenuComponent with "title=Contact Us" "hover=autoselect" "link=/contact-us")
        </ul>

    Is it possible to have nested MenuComponents (or a new SubMenuComponent), something like:

        <ul>
          #component(MenuComponent with "title=Home" "hover=autoselect" "link=/")
          #component(MenuComponent with "title=Videos" "hover=autoselect")
          #blockcomponent(MenuComponent with "title=VPS" "hover=autoselect" "link=/vps")
            #component(SubMenuComponent with "title="Plans" "hover=autoselect" "link=/vps/plans")
            #component(SubMenuComponent with "title="Operating Systems" "hover=autoselect" "link=/vps/os")
            #component(SubMenuComponent with "title="Supported Applications" "hover=autoselect" "link=/vps/apps")
          #end
          #component(MenuComponent with "title=Add-Ons" "hover=autoselect" "link=/addons")
          #component(MenuComponent with "title=Hosting" "hover=autoselect" "link=/hosting")
          #component(MenuComponent with "title=Support" "hover=autoselect" "link=/support")
          #component(MenuComponent with "title=News" "hover=autoselect" "link=/news")
          #component(MenuComponent with "title=Contact Us" "hover=autoselect" "link=/contact-us")
        </ul>

    I need to draw the sub-menu (ul and li elements) inside the overridden Render method on MenuComponent, so using nested ViewComponent derivatives may not work. I would like to keep the basically declarative method for creating menus, if at all possible.

    Read the article

  • how to set a fixed color bar for pcolor in python matplotlib?

    - by user248237
    I am using pcolor with a custom color map to plot a matrix of values. I set my color map so that low values are white and high values are red, as shown below. All of my matrices have values between 0 and 20 (inclusive), and I'd like 20 to always be pure red and 0 to always be pure white, even if the matrix has values that don't span the entire range. For example, if my matrix only has values between 2 and 7, I don't want it to plot 2 as white and 7 as red, but rather color it as if the range were still 0 to 20. How can I do this? I tried using the "ticks=" option of colorbar but it did not work. Here is my current code (assume "my_matrix" contains the values to be plotted):

        cdict = {'red':   ((0.0, 1.0, 1.0), (0.5, 1.0, 1.0), (1.0, 1.0, 1.0)),
                 'green': ((0.0, 1.0, 1.0), (0.5, 1.0, 1.0), (1.0, 0.0, 0.0)),
                 'blue':  ((0.0, 1.0, 1.0), (0.5, 1.0, 1.0), (1.0, 0.0, 0.0))}
        my_cmap = matplotlib.colors.LinearSegmentedColormap('my_colormap', cdict, 256)
        colored_matrix = plt.pcolor(my_matrix, cmap=my_cmap)
        plt.colorbar(colored_matrix, ticks=[0, 5, 10, 15, 20])

    Any idea how I can fix this to get the right result? Thanks very much.
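
    One likely fix, sketched below on the assumption that the goal is simply to pin the color scale to the fixed 0-20 range: pcolor accepts vmin and vmax, which fix the mapping from values to colors regardless of what the matrix actually contains, whereas the ticks= argument only controls where the colorbar labels are drawn. The random example data stands in for my_matrix.

        # Sketch: pin the color scale to 0..20 via vmin/vmax so 0 is always white
        # and 20 always pure red, whatever range my_matrix actually spans.
        import numpy as np
        import matplotlib
        import matplotlib.pyplot as plt

        my_matrix = np.random.uniform(2, 7, size=(10, 10))   # example data inside a narrow range

        cdict = {'red':   ((0.0, 1.0, 1.0), (0.5, 1.0, 1.0), (1.0, 1.0, 1.0)),
                 'green': ((0.0, 1.0, 1.0), (0.5, 1.0, 1.0), (1.0, 0.0, 0.0)),
                 'blue':  ((0.0, 1.0, 1.0), (0.5, 1.0, 1.0), (1.0, 0.0, 0.0))}
        my_cmap = matplotlib.colors.LinearSegmentedColormap('my_colormap', cdict, 256)

        colored_matrix = plt.pcolor(my_matrix, cmap=my_cmap, vmin=0, vmax=20)
        plt.colorbar(colored_matrix, ticks=[0, 5, 10, 15, 20])
        plt.show()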

    Read the article

  • Unable to get data from a WCF client

    - by Scott
    I am developing a DLL that will provide synchronized time stamps to multiple applications running on the same machine. The timestamps are altered in a thread that uses a high-performance timer and a scalar to provide the appearance of moving faster than real time. For obvious reasons I want only one instance of this time library, and I thought I could use WCF for the other processes to connect to this and poll for timestamps whenever they want. When I connect, however, I never get a valid timestamp, just an empty DateTime. I should point out that the library does work: the original implementation was a single DLL that each application incorporated, and each one was synced using Windows messages. I'm fairly sure it has something to do with how I'm setting up the WCF stuff, to which I am still pretty new. Here are the contract definitions:

        public interface ITimerCallbacks
        {
            [OperationContract(IsOneWay = true)]
            void TimerElapsed(String id);
        }

        [ServiceContract(SessionMode = SessionMode.Required, CallbackContract = typeof(ITimerCallbacks))]
        public interface ISimTime
        {
            [OperationContract]
            DateTime GetTime();
        }

    Here is my class definition:

        [ServiceBehavior(InstanceContextMode = InstanceContextMode.Single)]
        public class SimTimeServer : ISimTime

    The host setup:

        // set up WCF interprocess comms
        host = new ServiceHost(typeof(SimTimeServer), new Uri[] { new Uri("net.pipe://localhost") });
        host.AddServiceEndpoint(typeof(ISimTime), new NetNamedPipeBinding(), "SimTime");
        host.Open();

    and the implementation of the interface function server-side:

        public DateTime GetTime()
        {
            if (ThreadMutex.WaitOne(20))
            {
                RetTime = CurrentTime;
                ThreadMutex.ReleaseMutex();
            }
            return RetTime;
        }

    Lastly, the client-side implementation:

        Callbacks myCallbacks = new Callbacks();
        DuplexChannelFactory<ISimTime> pipeFactory = new DuplexChannelFactory<ISimTime>(myCallbacks,
            new NetNamedPipeBinding(), new EndpointAddress("net.pipe://localhost/SimTime"));
        ISimTime pipeProxy = pipeFactory.CreateChannel();

        while (true)
        {
            string str = Console.ReadLine();
            if (str.ToLower().Contains("get"))
                Console.WriteLine(pipeProxy.GetTime().ToString());
            else if (str.ToLower().Contains("exit"))
                break;
        }

    Read the article

  • How does mercurial's bisect work when the range includes branching?

    - by Joshua Goldberg
    If the bisect range includes multiple branches, how does hg bisect's search work? Does it effectively bisect each sub-branch (I would think that would be inefficient)? For instance, borrowing, with gratitude, a diagram from an answer to this related question, what if the bisect got to changeset 7 on the "good" right-side branch first?

        @  12:8ae1fff407c8:bad6
        |
        o  11:27edd4ba0a78:bad5
        |
        o    10:312ba3d6eb29:bad4
        |\
        | o  9:68ae20ea0c02:good33
        | |
        | o  8:916e977fa594:good32
        | |
        | o  7:b9d00094223f:good31
        | |
        o |  6:a7cab1800465:bad3
        | |
        o |  5:a84e45045a29:bad2
        | |
        o |  4:d0a381a67072:bad1
        | |
        o |  3:54349a6276cc:good4
        |/
        o  2:4588e394e325:good3
        |
        o  1:de79725cb39a:good2
        |
        o  0:2641cc78ce7a:good1

    Will it then look only between 7 and 12, missing the real first-bad that we care about (thus using "dumb" numerical order), or is it smart enough to use the full topography and know that the first bad could be below 7 on the right-side branch, or could still be anywhere on the left-side branch? The purpose of my question is both (a) just to understand the algorithm better, and (b) to understand whether I can liberally extend my initial bisect range without thinking hard about which branch I go to. I've been in high-branching bisect situations where it kept asking me after every test to extend beyond the next merge, so that the whole procedure was essentially O(n). I'm wondering if I can just throw the first "good" marker way back past some nest of merges without thinking about it much, and whether that would save time and give correct results.
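
    For intuition only, here is a rough Python sketch of a DAG-aware bisection step, under the assumption (not verified against Mercurial's source) that the candidate set is "ancestors of the bad revision that are not ancestors of any good revision" and that the next revision to test is the candidate that splits that set most evenly by ancestor count. On the graph above, such a strategy never restricts itself to 7..12 in numeric order; both the lower left branch and the upper right branch stay in play until test results rule them out.

        # Illustration only: a DAG-aware "pick the next revision to test" step.
        # This is an assumed strategy for intuition, not Mercurial's actual code.
        parents = {0: [], 1: [0], 2: [1], 3: [2], 4: [3], 5: [4], 6: [5],
                   7: [2], 8: [7], 9: [8], 10: [6, 9], 11: [10], 12: [11]}

        def ancestors(rev):
            """All ancestors of rev, including rev itself."""
            seen, stack = set(), [rev]
            while stack:
                r = stack.pop()
                if r not in seen:
                    seen.add(r)
                    stack.extend(parents[r])
            return seen

        good, bad = {7}, 12
        good_anc = set().union(*(ancestors(g) for g in good))
        candidates = ancestors(bad) - good_anc      # revisions that could still be the first bad one

        def balance(rev):
            below = len(ancestors(rev) & candidates)
            return min(below, len(candidates) - below)

        print(sorted(candidates))                   # [3, 4, 5, 6, 8, 9, 10, 11, 12]
        print(max(candidates, key=balance))         # 6 here: testing it splits the 9 candidates 4/5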

    Read the article
