Search Results

Search found 713 results on 29 pages for 'rogue coder'.

Page 21/29 | < Previous Page | 17 18 19 20 21 22 23 24 25 26 27 28  | Next Page >

  • Expected time for lazy evaluation with nested functions?

    - by Matt_JD
    A colleague and I are taking a free R course - although I believe this is a more general lazy evaluation issue - and we have found a scenario that we discussed briefly; I'd like to hear the answer from the wider community. The scenario is as follows (pseudocode):

        wrapper => function(thing) {
            print => function() { write(thing) }
        }

        v = createThing(1, 2, 3)
        w = wrapper(v)
        v = createThing(4, 5, 6)
        w.print() // Will print the 4, 5, 6 thing.
        v = create(7, 8, 9)
        w.print() // Will print 4, 5, 6 because "thing" has now been evaluated.

    Another similar situation is as follows:

        // Using the same function as above
        v = createThing(1, 2, 3)
        v = wrapper(v)
        w.print() // The wrapper function incestuously includes itself.

    Now I understand why this happens, but where my colleague and I differ is on what should happen. My colleague's view is that this is a bug: the evaluation of the passed-in argument should be forced at the point it is passed in, so that the returned "w" function is fixed. My view is that although I would prefer his option myself, what we are encountering comes down to lazy evaluation, and this is just how it works - more a quirk than a bug. I am not actually sure what should be expected, hence this question. I think the function's documentation could state what will happen, or the function could be left very lazy, and if the coder using it wants the argument evaluated then they can force it before passing it in. So, when working with lazy evaluation, what is the accepted practice for when to evaluate an argument that is passed in, and stored, inside a function?
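
    In R itself, the standard way to get the colleague's preferred behaviour is to evaluate the promise inside the wrapper with force(thing) before building the inner function. The same capture-versus-snapshot question shows up with closures in eagerly evaluated languages; here is a minimal C# sketch of the two behaviours (all names invented for illustration - the semantics are analogous rather than identical, since C# closures stay live instead of caching on first use):

        using System;

        class CaptureDemo
        {
            static void Main()
            {
                int[] v = { 1, 2, 3 };

                // Captures the variable v itself, so later reassignments show through --
                // like the unforced lazy argument in the question.
                Action printLive = () => Console.WriteLine(string.Join(", ", v));

                // Copying into a local first "forces" the value at wrap time,
                // which is the behaviour the colleague expected.
                int[] snapshot = v;
                Action printFixed = () => Console.WriteLine(string.Join(", ", snapshot));

                v = new[] { 4, 5, 6 };

                printLive();  // prints 4, 5, 6
                printFixed(); // prints 1, 2, 3
            }
        }

    The usual convention is the one the question already suggests: a function that stores an argument for later use should either document that it evaluates lazily or force the value itself; callers who need a fixed snapshot force it at the call site.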

    Read the article

  • Problems with 3D transformation - (SharpDX)

    - by Morphex
    First of all, I have been trying to get this right for a couple of days already, and I have read so much info and still fail miserably to understand it. So I am going to tell you up front that even though I have done a fair amount of research myself, I have failed to implement this. I am trying to create a generic camera class for a game engine of sorts - for research purposes only - and the thing is I have no idea how to go about it. I have read about quaternions and matrices, but when it comes to the actual implementation I suck at it. SharpDX already has matrices and quaternions implemented, so the math behind them is no big deal. How in the world would I go about creating a camera? I have seen so many camera examples and still can't make one that works as expected. I would like to implement different types too (orbital, 6DoF, FPS). So what is needed for a camera? Up, Forward and Right vectors, I read; also a quaternion for rotations, and View and Projection matrices. I understand that an FPS camera, for instance, only rotates around the world Y axis and the camera's own Right axis; a 6DoF camera always rotates around its own axes; and an orbital camera just translates by a set distance and always looks at a fixed target point. The concepts are there; implementing them is what is not trivial for me. Can anyone point out what I am missing, what I have got wrong? I would really appreciate a tutorial, a piece of code, or just a plain explanation of the concepts. Thank you for reading, from a frustrated coder.
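
    As a starting point, most of a camera class reduces to producing a view matrix from position/orientation and a projection matrix from the lens parameters; the per-style behaviour (FPS, 6DoF, orbital) is just how you update the orientation. A rough sketch using SharpDX's D3DX-style math helpers - hedged, since the field names and rotation order here are illustrative rather than canonical:

        using SharpDX;

        class Camera
        {
            public Vector3 Position = new Vector3(0, 0, -10);
            public Vector3 Forward  = Vector3.UnitZ;  // direction we look along
            public Vector3 Up       = Vector3.UnitY;

            // View matrix: transforms world space into camera space.
            public Matrix View
            {
                get { return Matrix.LookAtLH(Position, Position + Forward, Up); }
            }

            // Projection matrix: camera space into clip space.
            public Matrix Projection(float aspect)
            {
                return Matrix.PerspectiveFovLH(MathUtil.PiOverFour, aspect, 0.1f, 1000f);
            }

            // FPS-style rotation: yaw about the world Y axis, pitch about the camera's
            // right axis. A 6DoF camera would instead rotate about its own axes; an
            // orbital camera would rotate Position about the target and re-aim Forward.
            public void RotateFps(float yaw, float pitch)
            {
                Vector3 right = Vector3.Cross(Up, Forward);
                Quaternion q = Quaternion.RotationAxis(Vector3.UnitY, yaw) *
                               Quaternion.RotationAxis(right, pitch);
                Forward = Vector3.Normalize(Vector3.Transform(Forward, q));
            }
        }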

    Read the article

  • I've failed at PHP several times. Is Ruby the Cure? [closed]

    - by saltcod
    Extremely, extremely subjective question here, but it's something I've been struggling with for quite a while. I've seriously tried to become a reasonable PHP coder for the past several years, but I've really failed every time. I hate to describe myself as a beginner, because I've been designing websites (using WordPress, Drupal, etc.) for years, but I still just can't seem to get better at programming. Could it be that I have some kind of allergy to PHP? I went through Chris Pine's awesome intro to Ruby about a week ago (for about the fifth time), and though it did all seem much clearer to me than PHP, I wondered if I was just switching languages to find an easy way out. The things I struggle with in PHP all seem elementary - when to use a function, how to return database queries in foreach/while statements, when to turn those queries into reusable functions, adding arguments to functions, and so on. And all the OOP stuff that I keep seeing these days just flies over my head. I guess my questions are: Am I going about learning how to program the wrong way? Do I have some aversion to PHP that's preventing me from catching on? If I keep pushing at Ruby/Rails, will it just eventually 'click'? Or - the one I fear - am I just unlikely to ever be a programmer? Honesty appreciated. Terry

    Read the article

  • If the bug is 5+ years old, then is it a feature?

    - by Job
    Allow me to add details: I work at an institutional place with many coders, testers, QA analysts, product owners, etc., and here is something that bugs me: we have been able to sell crappy (albeit pretty functional) software for over a decade. It has many features and the product is competitive, but there are some serious bugs out there, as well as thousands of "paper cuts" - little annoyances that clients need to get used to. It pains me to look at some of these things, because I firmly believe that if computers do not help to make our lives easier, then we should not use them. I have confidence in my colleagues - they are smart and able, and can improve things when the focus is on doing that. But it can be difficult to file bugs against some old functionality without seeing them closed or forgotten. "It has worked like that for eons" is a typical answer. Also, when QA does regression testing, they tend to look for anything that is different as much as for anything that does not seem right, so a fix to an old problem can itself be written up as a bug, because "it has been like that since before even my time". The young coder in me thinks: rewrite this freaking thing! As someone who has had the opportunity to be close to sales and clients, I want to give the benefit of the doubt to the current approach. I am interested in your opinion/experience as well. Please try to consider risk, cost-to-benefit, and other non-technical factors.

    Read the article

  • ASP.NET ReportViewer Google Chrome CPU usage

    - by Phil
    Hello, we have found an interesting issue between ASP.NET 3.5 and the ReportViewer control in Google Chrome. Our set of pages works fine until a ReportViewer control displays a report; Google Chrome then eats up 50% of the CPU while seemingly doing nothing. I've extracted the ReportViewer control to a blank Web Forms project to confirm it's that control and not a rogue bit of my code. I'm using ReportViewer in local mode (RDLC file), so I presume it's the 2005 version? Has anyone seen this before, and do you have a solution? Phil. Edit: Google Chrome 3.0.195.33 on Vista Business x64. Edit 2: Added a bounty for help fixing this.

    Read the article

  • Silverlight RIA Services. Query fails on large table but works with Where clause

    - by FauxReal
    I have a somewhat large table, maybe 2000 rows by 50 columns, and the most basic RIA Services implementation imaginable:

    - Create a one-table model
    - Create a DomainService
    - Drop a datagrid onto MainPage.xaml
    - Drop the datasource onto the datagrid
    - Ctrl-F5

    I get this error:

        System.ServiceModel.DomainServices.Client.DomainOperationException:
        Load operation failed for query. Value cannot be null.

    The error is much larger, but that's the beginning of it. The weird thing is that if I narrow the results down with a Where clause on the GetQuery, it works fine. In fact, six different queries which together return all of the rows also work fine. So basically, I'm sure it's not some sort of rogue row. Why do I get this "Value cannot be null" error if I query the whole table? Thanks
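
    Since restricted queries succeed, one hedged way to corner the failure is to expose a range-limited query from the domain service and binary-search the key space for whatever breaks the full load; the entity, context and column names below are invented for illustration:

        // In the DomainService (a LinqToEntitiesDomainService, names hypothetical).
        public IQueryable<Record> GetRecordsInRange(int firstId, int lastId)
        {
            return this.ObjectContext.Records
                       .Where(r => r.Id >= firstId && r.Id <= lastId)
                       .OrderBy(r => r.Id);
        }

    Halving the range repeatedly isolates a single row out of 2000 in about eleven loads. If every sub-range loads cleanly on its own, suspicion shifts from the data to the size of the single response (serialization or timeout limits) rather than a null in any particular row.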

    Read the article

  • Winforms application hangs when switching to another app

    - by joseluisrod
    Hi, I believe I have a potential threading issue. I have a user control that contains the following code:

        private void btnVerify_Click(object sender, EventArgs e)
        {
            if (!backgroundWorkerVerify.IsBusy)
            {
                backgroundWorkerVerify.RunWorkerAsync();
            }
        }

        private void backgroundWorkerVerify_DoWork(object sender, System.ComponentModel.DoWorkEventArgs e)
        {
            VerifyAppointments();
        }

        private void backgroundWorkerVerify_RunWorkerCompleted(object sender, System.ComponentModel.RunWorkerCompletedEventArgs e)
        {
            MessageBox.Show("Information was Verified.", "Verify", MessageBoxButtons.OK, MessageBoxIcon.Information);
            CloseEvent();
        }

    Vanilla code, but the issue I have is that when the application is running and the user tabs to another application, on returning to mine the application is hung: they get a blank screen and they have to kill it. This started when I put the threading code in. Could I have some rogue threads out there? What is the best way to zero in on a threading problem? The issue can't be recreated on my machine... I know I must be missing something about how to dispose of a BackgroundWorker properly. Any thoughts are appreciated. Thanks, Jose
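
    A common culprit in this exact pattern is VerifyAppointments() touching UI controls from the worker thread, which can wedge the message loop in ways that only surface when the window is deactivated and repainted. Purely as a hedged sketch (the control and method names are invented), the rule to test against is: pool-thread work stays in DoWork, and any control access is marshalled back via Invoke:

        private void backgroundWorkerVerify_DoWork(object sender, System.ComponentModel.DoWorkEventArgs e)
        {
            // DoWork runs on a thread-pool thread: no control access here.
            var results = LoadAppointmentsFromDatabase(); // hypothetical non-UI work

            // Anything that touches a control must hop back to the UI thread.
            this.Invoke((MethodInvoker)delegate
            {
                appointmentsGrid.DataSource = results;    // hypothetical control
            });
        }

    If the hang still reproduces with an empty DoWork body, the worker is probably innocent and the blockage is elsewhere in the message pump - for example a modal dialog raised while the form is deactivated.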

    Read the article

  • Can't pass a single parameter to lambda function in MVVM Light Toolkit's RelayCommand

    - by Dave
    I don't know if there's a difference between Josh Smith's and Laurent Bugnion's implementations of RelayCommand or not, but everywhere I've looked it sounds like the Execute portion of RelayCommand can take 0 or 1 parameters. I've only been able to get it to work with 0. When I try something like:

        public class Test
        {
            public RelayCommand MyCommand { get; set; }

            public Test()
            {
                MyCommand = new RelayCommand((param) => SomeFunc(param));
            }

            private void SomeFunc(object param)
            {
            }
        }

    I get the error: Delegate 'System.Action' does not take '1' arguments. Just to make sure I am not insane, I went to the definition of RelayCommand to make sure I didn't have some rogue implementation in my solution somewhere, but sure enough, it was just Action, and not Action<object>. What on earth am I missing here?
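
    In MVVM Light, the parameterless RelayCommand wraps a plain System.Action, and the parameter-taking flavour is the separate generic class RelayCommand<T>; assuming that toolkit, the fix looks like this:

        using GalaSoft.MvvmLight.Command;

        public class Test
        {
            // RelayCommand<T> wraps an Action<T>, so the lambda may take one argument.
            public RelayCommand<object> MyCommand { get; private set; }

            public Test()
            {
                MyCommand = new RelayCommand<object>(param => SomeFunc(param));
            }

            private void SomeFunc(object param)
            {
                // param arrives from the binding's CommandParameter
            }
        }

    Josh Smith's WPF implementation, by contrast, declares its constructor as RelayCommand(Action<object> execute), which is why examples based on it show a one-argument lambda on the non-generic class.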

    Read the article

  • Script to fix broken lines in a .txt file?

    - by Gravitas
    Hi, I'd love to read books properly on my Kindle. To achieve my dream, I need a script to fix broken lines in a txt file. For example, if the txt file has these lines:

        He watched Kahlan as she walked with her shoulders slumped
        down.

    ...then it should fix them by deleting the newline before the word "down":

        He watched Kahlan as she walked with her shoulders slumped down.

    So, fellow programmers, what's (a) the easiest way to do this, and (b) the best language? P.S. The solution will involve searching for a lowercase letter in column 1 and deleting the newline before it to stitch the lines together. There are 1.2 million occurrences of this "rogue line break" in the novel I am trying to fix.
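
    A hedged sketch of the approach the postscript describes - slurp the whole file and replace every newline that is immediately followed by a lowercase letter with a space (file names invented); any language with a regex engine would do equally well:

        using System.IO;
        using System.Text.RegularExpressions;

        class LineJoiner
        {
            static void Main()
            {
                string text = File.ReadAllText("book.txt");        // hypothetical input

                // A line break whose next character is lowercase is a broken line:
                // swallow the (optional) carriage return and newline, keep one space.
                string joined = Regex.Replace(text, @"\r?\n(?=[a-z])", " ");

                File.WriteAllText("book-fixed.txt", joined);       // hypothetical output
            }
        }

    Reading the file in one go matters: line-oriented tools like sed make "look at the first character of the next line" awkward, whereas a single multiline regex handles all occurrences at once.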

    Read the article

  • MAXDOP in SQL Azure

    - by Herve Roggero
    In my search for a better understanding of the scalability options of SQL Azure, I stumbled on an interesting aspect: query hints in SQL Azure - more specifically, the MAXDOP hint. A few years ago I did a lot of analysis on this query hint (see my article on SQL Server Central: http://www.sqlservercentral.com/articles/Configuring/managingmaxdegreeofparallelism/1029/). Here is a quick synopsis of MAXDOP: it is a query hint you use when issuing a SQL statement that gives you control over how many processors SQL Server will use to execute the query. For complex queries with lots of I/O requirements, more CPUs can mean faster parallel searches. However, the impact can be drastic on other running threads/processes: if your query takes all available processors at 100% for 5 minutes... guess what... nothing else works. The bottom line is that more is not always better. The use of MAXDOP is more art than science - and a whole lot of testing; it depends on two things: the underlying hardware architecture and the application design. So there isn't a magic number that will work for everyone... except 1... :) Let me explain. The rules of engagement are different. SQL Azure is about sharing - yep, you are forced to play nice with your neighbors. To achieve this goal SQL Azure sets the MAXDOP to 1 by default, and ignores the use of the MAXDOP hint altogether. That means that all your queries will use one and only one processor. It really isn't such a bad thing, however. Keep in mind that in some of the largest SQL Server implementations MAXDOP is also set to 1; it is a well-known configuration setting for large-scale implementations. The reason is precisely to prevent rogue statements (like a SELECT * FROM HISTORY issued by a report that should have been running on a different server in the first place) from bringing down your systems, and to avoid the overhead generated by executing too many parallel queries, which can cause internal memory management nightmares for the host operating system. In summary, forcing MAXDOP to 1 in SQL Azure makes sense; it ensures that your database will continue to function normally even if one of the other tenants on the same server is running massive queries that would otherwise bring you down. Last but not least, keep in mind that when you test your database code for performance on-premise, you should set the DOP to 1 on your SQL Server databases to simulate SQL Azure conditions.
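
    For reference, the hint rides on the end of a statement; on a regular SQL Server box you would write something like the line below (table name invented), and SQL Azure will simply ignore the OPTION clause:

        SELECT CustomerId, Total FROM dbo.OrderHistory OPTION (MAXDOP 1);

    The on-premise, server-wide equivalent for the simulation suggested above is the 'max degree of parallelism' option exposed through sp_configure.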

    Read the article

  • Scan Your Thumb Drive for Viruses from the AutoPlay Dialog

    - by Mysticgeek
    It’s always a good idea to scan someone’s flash drive for viruses when you use it on your PC. Today we look at how to use Microsoft Security Essentials to scan thumb drives via the AutoPlay dialog. Editor note: this technique was created by our friend Ramesh Srinivasan from the winhelponline tech blog. If you haven’t done so already, download and install Microsoft Security Essentials (link below), which has earned the How-To Geek official endorsement. Next, download mseautoplay.zip (link below) and unzip the file to view its contents. Then move the msescan.vbs script file into the Windows directory. Next, double-click on the mseautoplay.reg file… Click Yes in the warning dialog asking if you’re sure you want to add the information to the registry. After it’s added you’ll get a confirmation message… click OK. Now when you pop in a thumb drive, the AutoPlay dialog will include the option to scan it with MSE first. MSE then starts the scan of the thumb drive… You can use this to scan any removable media; for example, you can scan a DVD with MSE before opening any of its files. You can also make it a default AutoPlay option: open Control Panel, view by large icons, and click on AutoPlay. Notice that when you go to change the default options for the different types of media, scanning with MSE is now included in the dropdown lists. Remove settings: if you want to remove the MSE AutoPlay handler, Ramesh was kind enough to create an undo registry file. Double-click on undo.reg from the original mseautoplay folder and click Yes to the message to remove the setting. Then you will need to go into the Windows directory and manually delete the msescan.vbs script file. This is an awesome trick which allows you to scan your thumb drives and other removable media from the AutoPlay dialog. We tested it on XP, Vista, and Windows 7, and it works perfectly on each one. Download mseautoplay.zip | Download Microsoft Security Essentials | Read Our Review of MSE

    Read the article

  • Extensible Metadata in Oracle IRM 11g

    - by martin.abrahams
    Another significant change in Oracle IRM 11g is that we now use XML to create the tamperproof header for each sealed document. This article explains what that means and what benefit it offers. Every sealed file has a metadata header that contains information about the document - its classification, its format, the user who sealed it, the name and URL of the IRM server, and much more. The IRM Desktop and other IRM applications use this information to formulate the request for rights, as well as to enhance the user experience by exposing some of the metadata in the user interface. For example, in Windows Explorer you can see some metadata exposed as properties of a sealed file and in the mouse-over tooltip. The following image shows 10g and 11g metadata side by side. As you can see, the 11g metadata is written as XML, as opposed to the simple delimited text format used in 10g. So why does this matter? The key benefit of using XML is that it creates the opportunity for sealing applications to use custom metadata, which in turn creates the opportunity for custom classification models to be defined and enforced. Out of the box, the solution uses the context classification model, in which two particular pieces of metadata form the basis of rights evaluation: the context name and the document's item code. But a custom sealing application could use some other model entirely, enabling rights decisions to be evaluated on some other basis. The integration with Oracle Beehive is a great example of this. When a user adds a document to a Beehive workspace, that document can be automatically sealed with metadata that represents the Beehive security model rather than the context model. As a consequence, IRM can enforce the Beehive security model precisely, and all rights configuration can actually be managed through the Beehive UI rather than the IRM UI. In this scenario, IRM simply supports the Beehive application, seamlessly extending Beehive security to all copies of workspace documents without any additional administration. Finally, I mentioned that the metadata header is tamperproof. This is obviously to stop a rogue user modifying the metadata with a view to gaining unauthorised access - reclassifying a board document to a less sensitive classification, for example. To prevent this, the header is digitally signed and can only be manipulated by a suitably authorised sealing application.

    Read the article

  • Down Tools Week Cometh: Kissing Goodbye to CVs/Resumes and Cover Letters

    - by Bart Read
    I haven't blogged about what I'm doing in my (not so new) temporary role as Red Gate's technical recruiter, mostly because it's been routine, business as usual stuff, and because I've been trying to understand the role by doing it. I think now though the time has come to get a little more radical, so I'm going to tell you why I want to largely eliminate CVs/resumes and cover letters from the application process for some of our technical roles, and why I think that might be a good thing for candidates (and for us). I have a terrible confession to make, or at least it's a terrible confession for a recruiter: I don't really like CV sifting, or reading cover letters, and, unless I've misread the mood around here, neither does anybody else. It's dull, it's time-consuming, and it's somewhat soul destroying because, when all is said and done, you're being paid to be incredibly judgemental about people based on relatively little information. I feel like I've dirtied myself by saying that - I mean, after all, it's a core part of my job - but it sucks, it really does. (And, of course, the truth is I'm still a software engineer at heart, and I'm always looking for ways to do things better.) On the flip side, I've never met anyone who likes writing their CV. It takes hours and hours of faffing around and massaging it into shape, and the whole process is beset by a gnawing anxiety, frustration, and insecurity. All you really want is a chance to demonstrate your skills - not just talk about them - and how do you do that in a CV or cover letter? Often the best candidates will include samples of their work (a portfolio, screenshots, links to websites, product downloads, etc.), but sometimes this isn't possible, or may not be appropriate, or you just don't think you're allowed because of what your school/university careers service has told you (more commonly an issue with grads, obviously). And what are we actually trying to find out about people with all of this? I think the common criteria are actually pretty basic:

    - Smart
    - Gets things done (thanks for these two, Joel)
    - Not an a55hole* (sorry, have to get around Simple Talk's swear filter - and thanks to Professor Robert I. Sutton for this one)

    *Of course, everyone has off days, and I don't honestly think we're too worried about somebody being a bit grumpy every now and again.

    We can do a bit better than this in the context of the roles I'm talking about: we can be more specific about what "gets things done" means, at least in part. For software engineers and interns, the non-exhaustive meaning of "gets things done" is:

    - Excellent coder

    For test engineers, the non-exhaustive meaning of "gets things done" is:

    - Good at finding problems in software
    - Competent coder

    Team player, etc., to me, are covered by "not an a55hole". I don't expect people to be the life and soul of the party, or a wild extrovert - that's not what team player means, and it's not what "not an a55hole" means. Some of our best technical staff are quiet, introverted types, but they're still pleasant to work with. My problem is that I don't think the initial sift really helps us find out whether people are smart and get things done with any great efficacy. It's better than nothing, for sure, but it's not as good as it could be. It's also contentious, and potentially unfair/inequitable - if you want to get an idea of what I mean by this, check out the background information section at the bottom.
    Before I go any further, let's look at the Red Gate recruitment process for technical staff* as it stands now:

    1. (LOTS of) People apply for jobs.
    2. All these applications go through a brutal process of manual sifting, which eliminates between 75 and 90% of them, depending upon the role and the time of year**.
    3. Depending upon the role, those who pass the sift will be sent an assessment or telescreened. For the purposes of this blog post I'm only interested in those that are sent some sort of programming assessment, or bug hunt. This means software engineers, test engineers, and software interns, which are the roles for which I receive the most applications. The telescreen tends to be reserved for project or product managers.
    4. Those that pass the assessment are invited in for first interview. This interview is mostly about assessing their technical skills***, although we're obviously on the look out for cultural fit red flags as well.
    5. If the first interview goes well we'll invite candidates back for a second interview. This is where team/cultural fit is really scoped out. We also use this interview to dive more deeply into certain areas of their skillset, and explore any concerns that may have come out of the first interview (these obviously won't have been serious or obvious enough to cause a rejection at that point, but are things we do need to look into before we'd consider making an offer).
    6. We might subsequently invite them in for lunch before we make them an offer. This tends to happen when we're recruiting somebody for a specific team and we'd like them to meet all the people they'll be working with directly. It's not an interview per se, but can prove pivotal if they don't gel with the team.
    7. Anyone who's made it this far will receive an offer from us.

    *We have a slightly quirky definition of "technical staff" as it relates to the technical recruiter role here. It includes software engineers, test engineers, software interns, user experience specialists, technical authors, project managers, product managers, and development managers, but does not include product support or information systems roles.

    **For example, the quality of graduate applicants overall noticeably drops as the academic year wears on, which is not to say that by now there aren't still stars in there, just that they're fewer and further between.

    ***Some organisations prefer to assess for team fit first, but I think assessing technical skills is a more effective initial filter - if they're the nicest person in the world, but can't cut a line of code, they're not going to work out.

    Now, as I suggested in the title, Red Gate's Down Tools Week is upon us once again - next week in fact - and I had proposed as a project that we refactor and automate the first stage of marking our programming assessments. Marking assessments, and in fact organising the marking of them, is a somewhat time-consuming process, and we receive many assessment solutions that just don't make the cut, for whatever reason. Whilst I don't think it's possible to fully automate marking, I do think it ought to be possible to run a suite of automated tests over each candidate's solution to see whether or not it behaves correctly and, if it does, move on to a manual stage where we examine the code for structure, decomposition, style, readability, maintainability, etc. Obviously it's possible to use tools to generate potentially helpful metrics for some of these indices as well.
    This would obviously reduce the marking workload, and would provide candidates with quicker feedback about whether they've been successful - though I do wonder if waiting a tactful interval before sending a (nicely written) rejection might be wise. I duly scrawled out a picture of my ideal process (diagram not reproduced in this excerpt). The problem is, as soon as I'd roughed it out, I realised that fundamentally it wasn't an ideal process at all, which explained the gnawing feeling of cognitive dissonance I'd been wrestling with all week whilst I'd been trying to find time to do this. Here's what I mean. Automated assessment marking, and the associated infrastructure around it, makes it much easier for us to deal with large numbers of assessments. This means we can be much more permissive about who we send assessments out to or, in other words, we can give more candidates the opportunity to really demonstrate their skills to us. And this leads to a question: why not give everyone the opportunity to demonstrate their skills, to show that they're smart and can get things done? (Two or three of us even discussed this in the Down Tools Week hustings earlier this week.) And isn't this a lot simpler than the alternative we'd been considering? (FYI, that was automated CV/cover letter sifting by some form of textual analysis, to ideally eliminate the worst 50% or so of applications based on an analysis of the 20,000 or so historical applications we've received since 2007 - definitely not the basic keyword analysis beloved of recruitment agencies, since that would eliminate hardly anyone who was awful but would definitely eliminate stellar Oxbridge candidates - #fail - or some nightmarishly complex Google-like system where we profile all our current employees, only to realise that we're never going to get representative results because we don't have a statistically significant sample size in any given role - also #fail.) No, I think the new way is better. We let people self-select. We make them the masters (or mistresses) of their own destiny. We give applicants the power - we put their fate in their hands - by giving them the chance to demonstrate their skills, which is what they really want anyway, instead of requiring that they spend hours and hours creating a CV and cover letter that I'm going to evaluate for suitability, and make a value judgement about, in approximately one minute (give or take). It doesn't matter what university you attended, it doesn't matter if you had a bad year when you took your A-levels - here's your chance to shine, so take it and run with it. (As a side benefit, we cut the number of applications we have to sift by something like two thirds.) WIN! OK, yeah, sounds good, but will it actually work? That's an excellent question. My gut feeling is yes, and I'll justify why below (and hopefully I have gone some way towards doing that above as well), but what I'm proposing here is really that we run an experiment for a period of time - probably a couple of months or so - and measure the outcomes we see:

    - How many people apply? (I wouldn't be surprised or alarmed to see this cut by a factor of ten.)
    - How many of them submit a good assessment? (More or fewer than at present?)
    - How much overhead is there for us in dealing with these assessments compared to now?
    - What are the success and failure rates at each interview stage compared to now?
    - How many people are we hiring at the end of it compared to now?
    I think it'll work because I hypothesize that, amongst other things:

    - It self-selects for people who really want to work at Red Gate which, at the moment, is something I have to try and assess based on their CV and cover letter - but if you're not that bothered about working here, why would you complete the assessment?
    - Candidates who would submit a shoddy application probably won't feel motivated to do the assessment.
    - Candidates who would demonstrate good attention to detail in their CV/cover letter will demonstrate good attention to detail in the assessment.
    - In general, only the better candidates will complete and submit the assessment.
    - Marking assessments is much less work, so we'll be able to deal with any increase that we (hopefully) see.

    There are obviously other questions as well:

    - Is plagiarism going to be a problem?
    - Is there any way we can detect/discourage potential plagiarism?
    - How do we assess candidates' education and experience?
    - What about their ability to communicate in writing?
    - Do we still want them to submit a CV afterwards if they pass the assessment?
    - Do we want to offer them the opportunity to tell us a bit about why they'd like the job when they submit their assessment?
    - How does this affect our relationship with the recruitment agencies we might use to hire for these roles?

    So, what's the objective for next week's Down Tools Week? Pretty simple really: we want to implement this process for the Graduate Software Engineer and Software Engineer positions that you can find on our website. I will be joined by a crack team of our best developers (Kevin Boyle, and new Red-Gater Sam Blackburn), and recruiting hostess with the mostest Laura McQuillen, and hopefully a couple of others as well - if I can successfully twist more arms before Monday.* Hopefully by next Friday our experiment will be up and running, and we may have changed the way Red Gate recruits software engineers for good! Stay tuned and we'll let you know how it goes!

    *I'm going to play dirty by offering them beer and chocolate during meetings.

    Some background information: how agonising over the initial CV/cover letter sift helped lead us to bin it off entirely. The other day I was agonising about the new-university/good-degree-grade versus poor-A-level-results issue, and decided to canvass for other opinions to see if there was something I could do that was fairer than my current approach, which is almost always to reject. This generated quite an involved discussion on our Yammer site (screenshot not reproduced in this excerpt). I'm sure you can glean a pretty good impression of my own educational prejudices from that discussion as well, although I'm very open to changing my opinion - hopefully you've already figured that out from reading the rest of this post. Hopefully you can also trace a logical path from agonising about sifting to, "Uh, hang on, why on earth are we doing this anyway?!?"

    Read the article

  • A CMS based on Yii?

    - by santa_cametotown
    Hi - I've been with Yii for a few months; before that I mainly used CodeIgniter and SilverStripe in my projects. Does anyone know of a good Yii-based CMS, in the way that SilverStripe is based on Sapphire or ExpressionEngine (EE) on CodeIgniter? My experience is that working with Yii is much easier and more straightforward, assuming you are a good OOP coder, but Yii is still young and there aren't a lot of samples that I can put together quickly for a real production project. The couple of Yii-based CMSes I have spotted, such as dotPlant and Web3CMS, do not look really promising, or may be at a very early stage.

    Read the article

  • MP4 plays on Safari 4 (desktop) but not on Safari Mobile (iphone)

    - by deb
    I'm encoding the video with ffmpeg and displaying it using the HTML5 video tag. It works fine in Firefox (I'm also providing an Ogg version) and Safari 4. However, when I try to open it on the iPhone I get a "Cannot Play Movie" error. Here is the ffmpeg command I'm using:

        ffmpeg -y -i movie.mov -acodec libfaac -ar 44100 -ab 96k -vcodec libx264 -vpre hq \
            -level 41 -crf 20 -bufsize 20000k -maxrate 1500k -g 250 -s 320X200 -coder 1 \
            -flags +loop -cmp +chroma -partitions +parti4x4+partp8x8+partb8x8 \
            -flags2 +dct8x8+bpyramid -me_method umh -subq 7 -me_range 16 -keyint_min 25 \
            -sc_threshold 40 -i_qfactor 0.71 -rc_eq 'blurCplx^(1-qComp)' -bf 16 \
            -b_strategy 1 -bidir_refine 1 -refs 6 -deblockalpha 0 -deblockbeta 0 movie.mp4

    I reduced maxrate to 1500k because I read that if the bit rate is too high the iPhone won't play the video, but that still didn't work. I don't know where else to look... any ideas? Thanks in advance
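
    One likely suspect, offered as a guess rather than a verified fix: -coder 1 (CABAC), -bf 16 (B-frames) and -flags2 +dct8x8 are Main/High-profile H.264 features, and iPhones of that generation only decode Baseline profile up to level 3.1, which would explain why desktop Safari copes while the phone refuses the file. A stripped-down, untested variant of the same command that stays within Baseline constraints:

        ffmpeg -y -i movie.mov -acodec libfaac -ar 44100 -ab 96k -vcodec libx264 \
            -level 30 -crf 20 -maxrate 1500k -bufsize 2000k \
            -coder 0 -bf 0 -refs 1 -flags +loop -flags2 -dct8x8 \
            -s 320x200 movie.mp4

    Note that -vpre hq is dropped deliberately, since preset files can quietly re-enable CABAC and B-frames.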

    Read the article

  • Gmail 3-legged OAuth access -- Zend_Mail_Protocol_Exception

    - by tchaymore
    I'm trying to access Gmail using the three-legged OAuth PHP code provided by Google ('google-mail-xoauth-tools'), described here: http://code.google.com/apis/gmail/oauth/code.html. I have my domain registered, and everything seems to go fine with OAuth, but after I authorize access I get this error:

        Fatal error: Uncaught exception 'Zend_Mail_Protocol_Exception' with message
        'cannot connect to host; error = Connection refused (errno = 111)' in
        /home/tchaymor/public_html/gmail/Zend/Mail/Protocol/Imap.php:100
        Stack trace:
        #0 /home/tchaymor/public_html/gmail/Zend/Mail/Protocol/Imap.php(61): Zend_Mail_Protocol_Imap->connect('imap.gmail.com', '993', true)
        #1 /home/tchaymor/public_html/gmail/three-legged.php(170): Zend_Mail_Protocol_Imap->__construct('imap.gmail.com', '993', true)
        #2 {main}
        thrown in /home/tchaymor/public_html/gmail/Zend/Mail/Protocol/Imap.php on line 100

    This is my first time using OAuth with any Google product, so it could be something totally brainless I'm missing. Any suggestions would be most welcome (as would suggestions for easier alternatives). I'm more on the designer end than the coder end, so the simpler the better.
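
    "Connection refused" at the socket level usually points at the web host blocking outbound IMAP/SSL rather than at anything OAuth-related - shared hosts frequently firewall port 993. A hedged first check is to try opening the socket directly from the same server, with no Zend or OAuth involved (requires PHP's OpenSSL extension):

        <?php
        // If this also fails, the host's firewall is refusing outbound port 993
        // and no amount of OAuth configuration will help; ask the host to open it.
        $fp = fsockopen('ssl://imap.gmail.com', 993, $errno, $errstr, 10);
        if ($fp) {
            echo 'Connected OK: ' . fgets($fp); // Gmail greets with "* OK ..."
            fclose($fp);
        } else {
            echo "Failed: $errno $errstr";
        }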

    Read the article

  • Cufon JS is not loading

    - by UXdesigner
    I've developed a website in HTML/CSS and it works perfectly fine. Now I'm working with the coder, integrating it into a .NET framework and changing the website to .aspx instead of HTML, but during the build of the website the only error reported is the loading of Cufon: it simply can't load, even though the structure and syntax of all the commands are the same as on the HTML site that works. There are no path problems so far as I can tell. What do you guys think the problem could be? Thank you for your kind help.
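
    Even when paths look right, .aspx pages often end up served from different virtual depths than the static HTML was (master pages, subfolders, virtual directories), so page-relative script URLs silently 404. A hedged thing to try - the script file names here are hypothetical - is letting the server resolve the URLs from the application root, then confirming in the browser's network tab that cufon-yui.js and the font file both return 200:

        <%-- In the master page or .aspx head: resolve from the app root rather
             than relative to whichever page happens to include this markup. --%>
        <script type="text/javascript" src='<%= ResolveUrl("~/scripts/cufon-yui.js") %>'></script>
        <script type="text/javascript" src='<%= ResolveUrl("~/scripts/MyFont_400.font.js") %>'></script>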

    Read the article

  • My C# and DLL Data Woes

    - by Lynn
    Hey guys, I'm a very beginner C# coder, so if I get some of the terms wrong, please go easy on me. I'm trying to see if it is possible to pull data from a DLL. I did some research and found that you can store application resources within a DLL. What I couldn't find was information telling me how to do that. There is an MSDN article that explains how to access resources within a satellite DLL, but I honestly don't know if that is what I'm looking for: http://msdn.microsoft.com/en-us/library/ms165653.aspx. I did try some of the code involved, but I'm getting "FileNotFoundException" errors. The rest of the DLL's information is showing up: classes, objects, etc. I just added the DLL as a resource in my Visual Studio project and added it with "using". I just don't know how to get at the meat of it, if that is possible. Thanks, Lynn
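
    If the goal is to read data files that were compiled into a DLL, the usual route is embedded resources: the file is added to the library project with its Build Action set to Embedded Resource, and read back by its manifest name. A minimal sketch, with the assembly path and resource name invented for illustration; listing the manifest names first is worthwhile, because a wrong name is a common source of resource-loading failures:

        using System;
        using System.IO;
        using System.Reflection;

        class EmbeddedResourceDemo
        {
            static void Main()
            {
                // Load the DLL that carries the data (path is hypothetical).
                Assembly asm = Assembly.LoadFrom(@"C:\libs\MyData.dll");

                // Manifest names look like "DefaultNamespace.Folder.FileName".
                foreach (string name in asm.GetManifestResourceNames())
                    Console.WriteLine(name);

                using (Stream s = asm.GetManifestResourceStream("MyData.Resources.settings.txt"))
                using (StreamReader reader = new StreamReader(s))
                {
                    Console.WriteLine(reader.ReadToEnd());
                }
            }
        }

    If the MSDN satellite-assembly sample is the part that throws, a FileNotFoundException there typically means the localized .resources.dll is not in the culture-named subfolder the ResourceManager probes - a deployment issue rather than a coding one.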

    Read the article

  • Samsung HMX-H100P camcorder and video encoding with mencoder

    - by jskg
    Hi everyone, my background is totally unrelated to video stuff, so pardon my newbie style. I own a Samsung HMX-H100P camcorder and I'm trying to encode videos to be uploaded to YouTube and Vimeo. First problem: videos generated by the camera with no processing appear like this when I play them with Totem (Linux) or VideoLan: http://www.youtube.com/watch?v=AANbl_DTuzE. Second problem: when I try to encode the videos produced by the camera using mencoder, I get the video at the resolution I chose, but those ugly lines and the lagging are still present. Here's the command I use:

        mencoder $inputFile -aspect 16:9 -of lavf -lavfopts format=psp -oac lavc -ovc lavc \
            -lavcopts aglobal=1:vglobal=1:coder=0:vcodec=libx264:acodec=libfaac:vbitrate=4500:abitrate=128 \
            -vf scale=1280:720 -ofps 25000/1001 -o $outputFile

    Any ideas? Thanks in advance
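
    The "ugly lines" in the sample clip look like interlacing combing - this class of camcorder records interlaced footage - so a plausible (untested) fix is to deinterlace before scaling; the same command with MPlayer's yadif filter inserted ahead of the scaler:

        mencoder $inputFile -aspect 16:9 -of lavf -lavfopts format=psp -oac lavc -ovc lavc \
            -lavcopts aglobal=1:vglobal=1:coder=0:vcodec=libx264:acodec=libfaac:vbitrate=4500:abitrate=128 \
            -vf yadif,scale=1280:720 -ofps 25000/1001 -o $outputFile

    Filters in -vf run left to right, so deinterlacing happens at full resolution before the downscale, which is where it does the most good.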

    Read the article

  • How does COM registration work in Windows

    - by Air Benji
    I'm an application packager trying to make sense of how the COM registry keys (SelfReg) interrelate with a given .dll in Windows. ProgIDs, AppIDs, TypeLibs, extensions and verbs are all tied around the CLSID, right? Do CLSIDs always use Prog/App IDs, or could you just have a file extension class? Which bits are optional? Some of it seems 'like a router', where there are two sides: internal (the .dll) and external (the extension etc.). How does this all fit together? (The SDK documentation doesn't make sense to me.) I ask because this is all pivotal to application 'healing' with Windows Installer (which packagers are all 'big' on, but there are no nitty-gritty breakdowns, since it's really a coder thing). ---Edit: Am I safe in assuming that whatever COM is registered must all link back to the CLSID and cannot be a 'dead end'? Verbs need extensions, which need ProgIDs... What about AppIDs, TypeLibs and interfaces? How do they interrelate?
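
    A rough sketch of how the keys chain together, with GUIDs and names invented (and hedged: some of these entries are named values rather than subkeys, and everything below InprocServer32 is optional - a CLSID with no ProgID, AppID or TypeLib is perfectly legal for a class that is only ever created programmatically):

        HKCR\.frob                                -> names the ProgID, e.g. "FrobApp.Document"
        HKCR\FrobApp.Document\shell\open\command  -> a verb: the command line run for "Open"
        HKCR\FrobApp.Document\CLSID               -> {11111111-2222-3333-4444-555555555555}
        HKCR\CLSID\{1111...}\InprocServer32       -> path to the COM .dll itself
        HKCR\CLSID\{1111...}\AppID                -> links to HKCR\AppID\{...} (DCOM/surrogate settings)
        HKCR\CLSID\{1111...}\TypeLib              -> links to HKCR\TypeLib\{...} (type information)
        HKCR\Interface\{IID}\ProxyStubClsid32     -> marshalling for one interface; points back at a CLSID

    As I understand it, Windows Installer 'healing' hangs off exactly this chain: an activation resolves extension -> ProgID -> CLSID -> server path, and if the component owning any link in that chain is broken, MSI gets the chance to repair it.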

    Read the article

  • iPhone OpenGL ES freezes for no reason

    - by KJ
    Hi, I'm quite new to iPhone OpenGL ES, and I'm really stuck. I was trying to implement shadow mapping on the iPhone, and I allocated two 512x1024x32-bit textures, for the shadow map and the diffuse map respectively. The problem is that my application started to freeze and reboot the device after I added the shadow map allocation to the code (so I guess the shadow map allocation is causing all this mess). It happens randomly, but mostly within 10 minutes (sometimes within a few seconds). And it only happens on the real iPhone device, not on the virtual device. I backtracked the problem by removing irrelevant code line by line, and now my code is really simple, but it's still crashing (I mean, freezing). Could anybody please download my Xcode project linked below and see what on earth is wrong? The code is really simple: http://www.tempfiles.net/download/201004/95922/CrashTest.html I would really appreciate it if someone could help me. My iPhone is a 3GS running OS version 3.1. Again, run the code and it'll take about 5 minutes on average for the device to freeze and reboot. (Don't worry, it does no harm.) It'll just display a cyan screen before it freezes, but you'll be able to notice when it happens because the device will reboot soon after, so please be patient. Just in case you can't reproduce the problem, please let me know. (That could possibly mean it's specifically my device that something's wrong with.) Observation: the problem goes away when I change the size of the shadow map to 512x512 (but with the diffuse map still 512x1024). I'm desperate for help; thanks in advance! For the benefit of people who can't download the link, here is the OpenGL code:

        #import "GLView.h"
        #import <OpenGLES/ES2/glext.h>
        #import <QuartzCore/QuartzCore.h>

        @implementation GLView

        + (Class)layerClass
        {
            return [CAEAGLLayer class];
        }

        - (id)initWithCoder:(NSCoder *)coder
        {
            if ((self = [super initWithCoder:coder])) {
                CAEAGLLayer *layer = (CAEAGLLayer *)self.layer;
                layer.opaque = YES;
                layer.drawableProperties = [NSDictionary dictionaryWithObjectsAndKeys:
                    [NSNumber numberWithBool:NO], kEAGLDrawablePropertyRetainedBacking,
                    kEAGLColorFormatRGBA8, kEAGLDrawablePropertyColorFormat, nil];
                displayLink_ = nil;
                context_ = [[EAGLContext alloc] initWithAPI:kEAGLRenderingAPIOpenGLES2];
                if (!context_ || ![EAGLContext setCurrentContext:context_]) {
                    [self release];
                    return nil;
                }
                glGenFramebuffers(1, &framebuffer_);
                glBindFramebuffer(GL_FRAMEBUFFER, framebuffer_);
                glViewport(0, 0, self.bounds.size.width, self.bounds.size.height);
                glGenRenderbuffers(1, &defaultColorBuffer_);
                glBindRenderbuffer(GL_RENDERBUFFER, defaultColorBuffer_);
                [context_ renderbufferStorage:GL_RENDERBUFFER fromDrawable:layer];
                glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_RENDERBUFFER, defaultColorBuffer_);
                glGenTextures(1, &shadowColorBuffer_);
                glActiveTexture(GL_TEXTURE1);
                glBindTexture(GL_TEXTURE_2D, shadowColorBuffer_);
                glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
                glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
                glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
                glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
                glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, 512, 1024, 0, GL_RGBA, GL_UNSIGNED_BYTE, NULL);
                glGenTextures(1, &texture_);
                glActiveTexture(GL_TEXTURE0);
                glBindTexture(GL_TEXTURE_2D, texture_);
                glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
                glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
                glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
                glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
                glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, 512, 1024, 0, GL_RGBA, GL_UNSIGNED_BYTE, NULL);
            }
            return self;
        }

        - (void)startAnimation
        {
            displayLink_ = [CADisplayLink displayLinkWithTarget:self selector:@selector(drawView:)];
            [displayLink_ setFrameInterval:1];
            [displayLink_ addToRunLoop:[NSRunLoop currentRunLoop] forMode:NSDefaultRunLoopMode];
        }

        - (void)useDefaultBuffers
        {
            glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_RENDERBUFFER, defaultColorBuffer_);
            glClearColor(0.0, 0.8, 0.8, 1);
            glClear(GL_COLOR_BUFFER_BIT);
        }

        - (void)useShadowBuffers
        {
            glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, shadowColorBuffer_, 0);
            glClearColor(0, 0, 0, 0);
            glClear(GL_COLOR_BUFFER_BIT);
        }

        - (void)drawView:(id)sender
        {
            NSTimeInterval startTime = [NSDate timeIntervalSinceReferenceDate];
            [EAGLContext setCurrentContext:context_];
            [self useShadowBuffers];
            [self useDefaultBuffers];
            glBindRenderbuffer(GL_RENDERBUFFER, defaultColorBuffer_);
            [context_ presentRenderbuffer:GL_RENDERBUFFER];
            NSTimeInterval endTime = [NSDate timeIntervalSinceReferenceDate];
            NSLog(@"FPS : %.1f", 1 / (endTime - startTime));
        }

        - (void)stopAnimation
        {
            [displayLink_ invalidate];
            displayLink_ = nil;
        }

        - (void)dealloc
        {
            if (framebuffer_) glDeleteFramebuffers(1, &framebuffer_);
            if (defaultColorBuffer_) glDeleteRenderbuffers(1, &defaultColorBuffer_);
            if (shadowColorBuffer_) glDeleteTextures(1, &shadowColorBuffer_);
            glDeleteTextures(1, &texture_);
            if ([EAGLContext currentContext] == context_)
                [EAGLContext setCurrentContext:nil];
            [context_ release];
            context_ = nil;
            [super dealloc];
        }

        @end

    Read the article

  • Best resource for serious Commodore 64 programming.

    - by postfuturist
    What is the best resource for serious Commodore 64 programming? Assume that serious programming on the Commodore 64 is not done in the BASIC V2 that ships with the machine. I feel like most of the knowledge is tied up in old books and not available on the internet. All that I have found online are either very beginner-style introductions to Commodore 64 programming (hello world) or arcane demo-coder hacks that take advantage of strange parts of the hardware. I haven't found a well-explained list of opcodes, memory locations for system calls, and general mid-level examples and tips.

    Main portals I have found:

    - lemon64
    - C-64 Scene Database
    - c64web (actually hosted on a Commodore 64!)

    Tools I have found:

    - cc65, a C compiler that can target the Commodore 64.

    Read the article

  • How do you keep a balance between working, training, health and family?

    - by Jim Burger
    One trend I see in the awesome developers I've met is that they devote inordinate amounts of time to coding, usually at the expense of their health. Personally, I also find it hard to motivate myself to keep healthy. Every now and again, though, I meet a fantastic coder who has it clocked: they are up to date with the latest dev news, have time to read about good programming practices, and, to finish it off, have happy wives/husbands and families. How do you guys/gals manage it in the short 24 hours a day that we all have?

    Read the article

  • PHP Tag Cloud System

    - by deniz
    Hi, I am implementing a tag cloud system based on this recommendation (however, I am not using foreign keys), and I am allowing up to 25 tags. My question is: how can I handle editing on the items? I have this item adding/editing page:

        Title:
        Description:
        Tags: computer, book, web, design

    If someone edits an item's details, do I need to delete all the tags from the Item2Tag table first, then add the new elements? For instance, say someone changed the data to this:

        Tags: computer, book, web, newspaper

    Do I need to delete all the tags from the Item2Tag table, and then add these elements? It seems inefficient, but I couldn't find a more efficient way. The other problem with this approach is that if someone edits the description but does not change the tags box, I still need to delete all the elements from the Item2Tag table and re-add the same elements. I am not an experienced PHP coder, so could you suggest a better way to handle this? (A pure PHP/MySQL solution is preferable.) Thanks in advance
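
    One way to avoid the wholesale delete-and-reinsert, sketched under stated assumptions (an Item2Tag table with item_id and tag_id columns, and old/new tag-id arrays already looked up - all names invented): diff the two lists in PHP and only touch the rows that actually changed, so an edit that leaves the tags box alone issues no tag queries at all.

        <?php
        // $old and $new are arrays of tag ids for the item, before and after the edit.
        $toRemove = array_diff($old, $new);   // tags dropped in this edit
        $toAdd    = array_diff($new, $old);   // tags introduced in this edit

        if ($toRemove) {
            $in = implode(',', array_map('intval', $toRemove));
            mysql_query("DELETE FROM Item2Tag WHERE item_id = " . (int)$itemId .
                        " AND tag_id IN ($in)");
        }
        foreach ($toAdd as $tagId) {
            mysql_query('INSERT INTO Item2Tag (item_id, tag_id) VALUES (' .
                        (int)$itemId . ', ' . (int)$tagId . ')');
        }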

    Read the article
