Search Results

Search found 69645 results on 2786 pages for 'real time systems'.


  • SQL SERVER – Interview Questions & Answers Needs Your Help

    - by pinaldave
    About a year ago, I had posted SQL Server related Interview Questions and Answers. It was very well received in the community. I have received many comments, suggestions and emails on this subject. I am planning to upgrade the Interview Questions and Answers and take them to the next level. Here, I need your help. Please post your comments, suggestions, expectations or potential interview questions (along with answers) here. Your input will be very valuable. As time goes by we all learn and get better. There were a few things missing at the time when those interview questions and answers were prepared; now is the time to fill that gap and make these interview questions more useful. If you already know it all, these questions and answers are not for you. They are for those who are eager to learn and need help in the area. If you do not want to leave a comment, I suggest you send me an email at pinal “at” SQLAuthority.com. Following is the reproduction of the original consolidation post for quick reference. SQL SERVER – 2008 – Interview Questions and Answers – Part 1 SQL SERVER – 2008 – Interview Questions and Answers – Part 2 SQL SERVER – 2008 – Interview Questions and Answers – Part 3 SQL SERVER – 2008 – Interview Questions and Answers – Part 4 SQL SERVER – 2008 – Interview Questions and Answers – Part 5 SQL SERVER – 2008 – Interview Questions and Answers – Part 6 SQL SERVER – 2008 – Interview Questions and Answers – Part 7 SQL SERVER – 2008 – Interview Questions and Answers – Part 8 Download SQL Server 2008 Interview Questions and Answers Complete List Reference : Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, Readers Contribution, Readers Question, SQL, SQL Authority, SQL Documentation, SQL Download, SQL Interview Questions and Answers, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • Advice on how to move from a .net developer role to a web developer role

    - by dermd
    I've been working primarily as a .net developer for the past 4 years for a financial services company. I've worked on .net 1.1, 2.0, 3.5 and have done the 3.5 enterprise app developer cert (not that that's worth a whole lot!). Before that I worked as a java developer, with a bit of Flex thrown in, for just over a year. My educational background is an Electronic and Computer Engineering degree, a higher diploma in systems analysis as well as one in web development (this was mainly java - JSP, Spring, etc) and a science masters in software design and development. I really feel like a change and would like to move to a different field to experience something different. I've done some courses in RoR and played around with it a bit in my spare time. Similarly I've done various web and mobile courses and done up some mobile webapps along with android and ios equivalents (I haven't tried pushing them up to the app stores yet, but it may be worth tidying them up and doing that). I currently work long hours, so I find it hard to find the time to work on enough side projects to get a decent portfolio together. But when I do work on the web stuff I find it really enjoyable, so I think it's something I'd like to do full time. However, since my experience is pretty much all .net and financial services, I find it very hard to get my foot in the door anywhere or get past a phone screen unless they're specifically looking for someone with .net knowledge. What is the best way to move into a web development role without starting from scratch again? I do think a lot of the skills I have translate over, but I seem to just get paired with .net jobs whenever I look around. Apart from js, jquery, html5 and objective C, are there any other technologies I should be looking into?

    Read the article

  • Mr Flibble: As Seen Through a Lens, Darkly

    - by Phil Factor
    One of the rewarding things about getting involved with Simple-Talk has been in meeting and working with some pretty daunting talents. I’d like to say that Dom Reed’s talents are at the end of the visible spectrum, but then there is Richard, who pops up on national radio occasionally, presenting intellectual programs, Andrew, master of the ukulele, with his pioneering local history work, and Tony with marathon running and his past as a university lecturer. However, Dom, who is Red Gate’s head of creative design and who did the preliminary design work for Simple-Talk, has taken the art of photography to an extreme that was impossible before Photoshop. He’s not the first person to take a photograph of himself every day for two years, but he is definitely the first to weave the results into a frightening narrative that veers from comedy to pathos, using all the arts of Photoshop to create a fictional character, Mr Flibble. Have a look at some of the Flickr pages: Uncle Spike; The B-Men – Woolverine; The 2011 BoyZ iN Sink reunion tour turned out to be their last; Error 404 – Flibble not found. Mr Flibble is not a normal type of alter-ego. We generally prefer to choose bronze age warriors of impossibly magnificent physique and stamina; superheroes who bestride the world, scorning the forces of evil and anarchy in a series of noble and righteous quests. Not so Dom, whose Mr Flibble is vulnerable, and laid low by an addiction to toxic substances. His work has gained an international cult following and is used as course material by several photography courses. Although his work was for a while ignored by the more conventional world of ‘art’ photography, it became famous through the internet. His photos have received well over a million views on Flickr. It was definitely time to turn this work into a book, because the whole sequence of images has its maximum effect when seen in sequence. He has a Kickstarter project page, one of the first following the recent UK launch of the crowdfunding platform. The publication of the book should be a major event and the £45 I shall divvy up will be one of the securest investments I shall ever make. The local news in Cambridge picked up on the project and I can quote from the report by the excellent Cabume website, the source of tech news from the ‘Cambridge cluster’: Put really simply, Mr Flibble likes to dress up and take pictures of himself. One of the benefits of a split personality, however, is that Mr Flibble is supported in his endeavour by Reed’s top notch photography skills, supreme mastery of Photoshop and unflinching dedication to the cause. The duo have collaborated to take a picture every day for the past 730-plus days. It is not a big surprise that neither Mr Flibble nor Reed watches any TV: In addition to his full-time role at Cambridge software house, Red Gate Software, as head of creativity and the two to five hours a day he spends taking the Mr Flibble shots, Reed also helps organise the . And now Reed is using Kickstarter to see if the world is ready for a Mr Flibble coffee table book. Judging by the early response it is. At the time of writing, just a few days after it went live, ‘I Drink Lead Paint: An absurd photography book by Mr Flibble’ had raised £1,545 of the £10,000 target it needs to raise by the Friday 30 November deadline from 37 backers.
Following the standard Kickstarter template, Reed is offering a series of rewards based on the amount pledged, ranging from a Mr Flibble desktop wallpaper for pledges of £5 or more to a signed copy of the book for pledges of £45 or more, right up to a starring role in the book for £1,500. Mr Flibble is unquestionably one of the more deranged Kickstarter hopefuls, but don’t think for a second that he doesn’t have a firm grasp on the challenges he faces on the road to immortalisation on 150 gsm stock. Under the section ‘risks and challenges’ on his Kickstarter page, his statement begins: “An angry horde of telepathic iguanas discover the world’s last remaining stock of vintage lead paint and hold me to ransom. Gosh how I love to guzzle lead paint. Anyway… faced with such brazen bravado, I cower at the thought of taking on their combined might and die a sad and lonely Flibble deprived of my one and only true liquid love.” At which point, Reed manages to wrestle away the keyboard, giving him the opportunity to present slightly more cogent analysis of the obstacles the project must still overcome. We asked Reed a few questions about Mr Flibble’s Kickstarter adventure and felt that his responses were worth publishing in full: Firstly, how did you manage it – holding down a full time job and also conceiving and executing these ideas on a daily basis? I employed a small team of ferocious gerbils to feed me ideas on a daily basis. Whilst most of their ideas were incomprehensible rubbish and usually revolved around food, just occasionally they’d give me an idea like my B-Men series. As a backup plan though, I found that the best way to generate ideas was to actually start taking photos. If I were to stand in front of the camera, pull a silly face, place a vegetable on my head or something else equally stupid, the resulting photo of that would typically spark an idea when I came to look at it. Sitting around idly trying to think of an idea was doomed to result in no ideas. I admit that I really struggled with time. I’m proud that I never missed a day, but it was definitely hard when you were late from work, tired or doing something social on the same day. I don’t watch TV, which I guess really helps, because I’d frequently be spending 2-5 hours taking and processing the photos every day. Are there any overlaps between software development and creative thinking? Software is an inherently creative business and the speed at which it moves ensures you always have to find solutions to new things. Everyone in the team needs to be a problem solver. Has it helped me specifically with my photography? Probably. Working within teams that continually need to figure out new stuff keeps the brain feisty I suppose, and I guess I’m continually exposed to a lot of possible sources of inspiration. How specifically will this Kickstarter project allow you to test the commercial appeal of your work and do you plan to get the book into shops? It’s taken a while to be confident saying it, but I know that people like the work that I do. I’ve had well over a million views of my pictures, many humbling comments and I know I’ve garnered some loyal fans out there who anticipate my next photo. For me, this Kickstarter is about seeing if there’s worth to my work beyond just making people smile. In an online world where there’s an abundance of freely available content, can you hope to receive anything from what you do, or would people just move on to the next piece of content if you happen to ask for some support?
A book has been the single most requested thing that people have asked me to produce and it’s something that I feel would showcase my work well. It’s just hard to convince people in the publishing industry just now to take any kind of risk – they’ve been hit hard. If I can show that people would like my work enough to buy a book, then it sends a pretty clear message that publishers might hear, or it gives me enough confidence to invest in myself a bit more – hard to do when you’re riddled with self-doubt! I’d love to see my work in the shops, yes. I could see it being the thing that someone flips through idly as they’re Christmas shopping and recognizes that it’d be just the perfect gift for that difficult-to-buy-for friend or relative. That said, working in the software industry means I’m clearly aware of how I could use technology to distribute my work, but I can’t deny that there’s something very appealing about having a physical thing to hold in your hands. If the project is successful, is there a chance that it could become a full-time job? At the moment that seems like a distant dream, as should this be successful, there are many more steps I’d need to take to reach any kind of business viability. Kickstarter seems exactly that – a way for people to help kick start me into something that could take off. If people like my work and want me to succeed with it, then taking a look at my Kickstarter page (and hopefully pledging a bit of support) would make my elbows blush considerably. So there it is. An opportunity to open the wallet just a bit to ensure that one of the more unusual talents sees the light of day in the format it deserves.

    Read the article

  • Do’s and Don’ts Building SharePoint Applications

    - by Bil Simser
    SharePoint is a great platform for building quick LOB applications. Simple things from employee time trackers to server and software inventory to full blown Help Desks can be crafted up using SharePoint from just customizing Lists. No programming necessary. However there are a few tricks I’ve painfully learned over the years that you can use for your own solutions.

DO

What’s In A Name? When you create a new list, column, or view you’ll commonly name it something like “Expense Reports”. However this has the ugly effect of creating a url to the list as “Expense%20Reports”. Or worse, an internal field name of “Expense_x0020_Reports” which is not only cryptic but hard to remember when you’re trying to find the column by internal name. While “Expense Reports 2011” is user friendly, “ExpenseReports2011” is not (unless you’re a programmer). So that’s not the solution. Well, not entirely. Instead when you create your column or list or view use the scrunched up name (I can’t think of the technical term for it right now) of “ExpenseReports2011”, “WomenAtTheOfficeThatAreMen” or “KoalaMeatIsGoodWhenBroiled”. After you’ve created it, go back and change the name to the more friendly “Silly Expense Reports That Nobody Reads”. The original internal name will be the url and code friendly one without spaces while the one used on data entry forms and view headers will be the human version. (A short code sketch of this trick appears at the end of this post.)

Smart Columns When building a view include columns that make sense. By default when you add a column the “Add to default view” option is checked. Resist the urge to be lazy and leave it checked. Uncheck that puppy and decide consciously what columns should be included in the view. Pick columns that make sense to what the user is trying to do. This means you have to talk to the user. Yes, I know. That can be trying at times and even painful. Go ahead, talk to them. You might learn something. Find out what’s important to them and why. If they’re doing something repetitively as part of their job, try to make their life easier by including what’s most important to them. Do they really need to see the Created *and* Modified date of a document or do they just need the title and author? You’ll only find out after talking to them (or getting them drunk in a bar and leaving them in the back alley handcuffed to a garbage bin, don’t ask).

Gotta Keep it Separated Hey, views are there for a reason. Use them. While “All Items” is a fine way to present a list of, well, all items, it’s hardly sufficient to present a list of servers built before the Y2K bug hit. You’ll be scrolling the list for hours, finally arriving at Page 387 of 12,591 and cursing that SharePoint guy for convincing you that putting your hardware into a list would be of any use to anyone. Next to collecting the data, presenting it is just as important. Views are often overlooked and many times ignored or misused. They’re the way you can slice and dice the data up so that you’re not trying to consume 3,000 years of human evolution on a single web page. Remember views can be filtered so feel free to create a view for each status or one for each operating system or one for each species of Information Worker you might be putting in that list or document library. Not only will it reduce the number of items someone sees at one time, it’ll also make the information that much more relevant. Also remember that each view is a separate page. Use it in navigation by creating a menu on the Quick Launch to each view. The discoverability of the Views menu isn’t overly obvious and if you violate the rule of columns (see Horizontal Scrolling below) the view menu doesn’t even show up until you shuffle the scroll bar to the left. Navigation links, big giant buttons, a screaming flashing “CLICK ME NOW” will help your users find their way.

Sort It! Views are great so we’re building nice, rich views for the user. Awesomesauce. However sort is not very discoverable by the user. For example, when you’re looking at a view, how do you know if it’s ascending or descending, and what is it sorted on? Maybe it’s sorted using two fields so what’s that all about? Help your users by letting them know the information they’re looking at is sorted. Maybe you name the view something appropriate like “Bogus Expense Claims Sorted By Deadbeats”. If you use the naming strategy just make sure you keep the name consistent with the description. In the previous example there had better be a Deadbeat column so I can see the sort in action. Having a “Loser” column, while equally correct, is a little obtuse to the average Information Worker. Remember, they usually don’t use acronyms and even if they knew how to, it’s not immediately obvious to them that’s what you’re trying to convey. Another option is to simply drop a Content Editor Web Part above the list and explain exactly the view they’re looking at. Each view is its own page so one CEWP won’t be used across the board. Be descriptive in what the user is seeing but try to keep it brief. Dumping the first chapter of I, Claudius might be informative to the data but can gobble up screen real estate and miss the point of having the list.

DO NOT

Useless Attachments The attachments column is, in a word, useless. For the most part. Sure it indicates there’s an attachment on the list item but in the grand scheme of things that’s not overly informative. Maybe it is and by all means, if it makes sense to you include it. Colour it. Make it shine and stand out like the Return of Clippy on every SharePoint list. Without it being functional it can be boring. EndUserSharePoint.com has an article to make the son of Clippy that much more useful so feel free to head over and check out this blog post by Paul Grenier on the task (Warning code ahead! Danger Will Robinson!) In any case, I would suggest you remove it from your views. Again if it’s important then include it but consider the jQuery solution above to make it functional. It’s added by default to views and is one of those things that people forget to clean up.

Horizontal Scrolling Screen real estate is at a premium so building a list that contains 8,000 columns and stretches horizontally across 15 screens probably isn’t the most user friendly experience. Most users can’t figure out how to scroll vertically let alone horizontally so don’t make it even more confusing for them. Take the Steve Krug approach in your view designs and try not to make the user think. Again views are your friend. Consider splitting up the data into views where one view contains 10 columns and another view contains the other 10. Okay, maybe your information doesn’t work that way but humans can only process 7 pieces of data at a time, 10 at most (then their heads explode and you don’t want to clean that mess up, especially on a Friday night before the big dance). It drives me batshit crazy when I see a view with 80 columns of data. I often ask the user “So what do you do with all this information?” The response is usually “With this data [the first 10 columns] I decide if I’m going to fire everyone, and with this data [the next 10 columns] I decide if I’m going to set the building on fire and collect the insurance”. It’s at that point I show them how to create two new views, “People Who Are About To Get The Axe” and “Beach Time For The Executives”. Again, talk to your users and try to reason with them on cutting down the number of columns they see at once.

Vertical Scrolling Another big faux pas I find is the use of multi-line comment fields in views. It’s not so bad when you have a statement like this in your view: “I really like, oh my god, thought I was going to scream when I saw this turtle then I decided what I was going to have for dinner and frankly I hate having to work late so when I was talking to the customer I thought, oh my god, what if the customer has turtles and then it appeared to me that I really was hungry so I'm going to have lunch now.” It’s fine if that’s the only column along with two or three others, but once you slap those 20 columns of data into the list, the comment field wraps and forms a new multi-page novel that takes up your entire screen. Do everyone a favour and just avoid adding the column to views. Train the user to just click through to the item if they need to see the contents.

Duplicate Information Duplication is never good. Views are great as you can group data together. For example create a view of project status reports grouped by author. Then you can see which project manager is being a dip and not submitting their report. However if you group by author do you really need the Created By field as well in the view? Or if the view is grouped by Project then Author do you need both? Horizontal real estate is always at a premium so try not to clutter up the view with duplicate data like this. Oh yeah, if you’re scratching your head saying “But Bil, if I don’t include the Project name in the view and I have a lot of items then how do I know which one I’m looking at?”. That’s a hint that your grouping is too vague or you have too much data in the view based on that criteria. Filter it down a notch, create some views, and try to keep the group down to a single screen where you can see the group header at the top of the page. Again it’s just managing the information you have.

Redundant, See Redundant This partially relates to duplicate information and smart columns but basically remember to not include the obvious in a view. Remember, don’t make me think. If you’ve gone to the trouble (and it was a lot of trouble wasn’t it?) to create separate views of your data by creating a “September Zombie Brain Sales”, “October Zombie Brain Sales”, etc. then please for the love of all that is holy do not include the Month and Product columns in your view. Similarly if you create a “My” view of anything (“My Favourite Brands of Spandex”, “My Co-Workers I Find The Urge To Disinfect”) then again, do not include the owner or author field (or whatever field you use to identify “My”). That’s just silly. Hope that helps! Happy customizing!
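
By way of illustration only, here is a rough C# sketch (SharePoint server object model) of the naming trick and the pick-your-columns advice above. The site URL, list, field and view names, and the 'ExpenseStatus' filter column are all invented; treat this as a sketch rather than production code.

using System.Collections.Specialized;
using Microsoft.SharePoint;

class ListNamingDemo
{
    static void Main()
    {
        using (SPSite site = new SPSite("http://intranet"))      // hypothetical URL
        using (SPWeb web = site.OpenWeb())
        {
            SPList list = web.Lists["ExpenseReports2011"];       // hypothetical list

            // Create the column with a space-free name so the internal name stays clean.
            string internalName = list.Fields.Add("ExpenseCategory", SPFieldType.Text, false);

            // Then switch only the display name to the human-friendly version.
            SPField field = list.Fields.GetFieldByInternalName(internalName);
            field.Title = "Expense Category";
            field.Update();

            // Build a view with just the columns the user actually needs,
            // filtered so they are not scrolling through thousands of rows.
            StringCollection viewFields = new StringCollection();
            viewFields.Add("Title");
            viewFields.Add(internalName);
            string query = "<Where><Eq><FieldRef Name='ExpenseStatus'/>" +   // hypothetical field
                           "<Value Type='Text'>Open</Value></Eq></Where>";
            list.Views.Add("OpenExpenseReports", viewFields, query, 30, true, false);
        }
    }
}

The same rename-after-create approach works from the browser, too: create the list or column without spaces, then edit its title afterwards.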

    Read the article

  • Can notes/to-dos in code comments sent to code-reviews result in an effective refactoring process?

    - by dukeofgaming
    I want to start/improve a culture of collective code ownership at my company, but at a geographically distributed level... I'd say there is some collective code-ownership mentality today, but only at single geographical sites. This is a follow-up to this question: What is the politically correct way of refactoring other's code? I'm just wondering if submitting *just code comments* for code reviews (we have ReviewBoard, possibly upgrading to Crucible) could actually be an effective mechanism to get the conversation started on improving code, without having others feel territorial about their code. For example, if I add: //ToDo: Refactor this code and that code because of reasons X and Y and then submit it for code review and it gets accepted, it could be considered an agreement (which I think is sometimes harder to get with new code up front). At the same time, the author (and others) might have an easier time digesting and accepting the proposal; rejecting a proposal because it might break things will no longer be a valid reason, and therefore the fear of making a change is lost... and at the same time, nobody invests 10 hours optimizing something that no one thinks is worth it, or that people oppose just out of fear. This is all conjecture, but I feel something like this (submitting refactoring notes in code comments through the code-review process) would work. Has anyone done something like this in practice? If so, what have been the results?
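
As a purely hypothetical illustration of the idea (the class, method and reasons below are invented, not taken from any real code base), such a comment-only change set might look like this in C#:

// The only change in this review request is the comment block below;
// the behaviour of the method is untouched.
public class OrderProcessor
{
    // ToDo: Refactor CalculateTotal by extracting the discount rules into a
    // separate DiscountPolicy class, because (X) the same rules are duplicated
    // in InvoiceProcessor and (Y) they cannot currently be unit-tested in
    // isolation. Accepting this review means agreeing the refactoring is worth doing.
    public decimal CalculateTotal(decimal subtotal)
    {
        return subtotal; // unchanged behaviour
    }
}

The refactoring itself would then go out as a separate, later change, with the accepted comment serving as the prior agreement the question describes.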

    Read the article

  • Ada and 'The Book'

    - by Phil Factor
    The long friendship between Charles Babbage and Ada Lovelace created one of the most exciting and mysterious of collaborations ever to have resulted in a technological breakthrough. The fireworks created by the collision of two prodigious mathematical and creative talents resulted in an invention, the Analytical Engine, which went on to change society fundamentally. However, beyond that, we just don't know what the bulk of their collaborative work was about: it was done in strictest secrecy. Even the known outcome of their friendship, the first programmable computer, was shrouded in mystery. At the time, nobody, except close friends and family, had any idea of Ada Byron's contribution to the invention of the ‘Engine’, and how to program it. Her great insight was published in August 1843, under the initials AAL, standing for Ada Augusta Lovelace, her title then being the Countess of Lovelace. It was contained in a lengthy ‘note’ to her translation of a publication that remains the best description of Babbage's amazing Analytical Engine. The secret identity of the person behind those enigmatic initials was finally revealed by Prince de Polignac who, seventy years later, wrote to Ada's daughter to seek confirmation that her mother had, indeed, been the author of the brilliant sentences that described so accurately how Babbage's mechanical computer could be programmed with punch-cards. L.F. Menabrea's paper on the Analytical Engine first appeared in the 'Bibliotheque Universelle de Geneve' in October 1842, and Ada translated it anonymously for Taylor's 'Scientific Memoirs'. Charles Babbage was surprised that she had not written an original paper as she already knew a surprising amount about the way the machine worked. He persuaded her to at least write some explanatory notes. These notes ended up extending to four times the length of the original article and represented the first published account of how a machine could be programmed to perform any calculation. Her example of programming the Bernoulli sequence would have worked on the Analytical Engine had the device’s construction been completed, and gave Ada an unassailable claim to have invented the art of programming. What was the reason for Ada's secrecy? She was the only legitimate child of Lord Byron, who was probably the best known celebrity of the age, so she was already famous. She was a senior aristocrat, with titles, a fortune in money and vast estates in the Midlands. She had political influence, and was the cousin of Lord Melbourne, who was the Prime Minister at that time. She was friendly with the young Queen Victoria. Her mathematical activities were a pastime, and not one that would be considered by others to be in keeping with her roles and responsibilities. You wouldn't dare to dream up a fictional heroine like Ada. She was dazzlingly beautiful and talented. She could speak several languages fluently, and play some musical instruments with professional skill. Contemporary accounts refer to her being 'accomplished in science, art and literature'. On top of that, she was a brilliant mathematician, a talent inherited from her mother, Annabella Milbanke. In her mother's circle of literary and scientific friends was Charles Babbage, and Ada's friendship with him dates from her teenage zest for Mathematics. She was one of the first people he'd ever met who understood what he had attempted to achieve with the 'Difference Engine', and with whom he could converse as intellectual equals.
He arranged for her to have an education from the most talented academics in the country. Ada melted the heart of the cantankerous genius to the point that he became a faithful and loyal father-figure to her. She was one of the very few who could grasp the principles of the later, and very different, ‘Analytical Engine’ which was designed from the start to tackle a variety of tasks. Sadly, Ada Byron's life ended less than a decade after completing the work that assured her long-term fame, in November 1852. She was dying of cancer, her gambling habits had caused her to run up huge debts, she'd had more than one affair, and she was being blackmailed. Her brilliant but unempathic mother was nursing her in her final illness, destroying her personal letters and records, and repaying her debts. Her husband was distraught but helpless. Charles Babbage, however, maintained his steadfast paternalistic friendship to the end. She appointed her loyal friend to be her executor. For years, she and Babbage had been working together on a secret project, known only as 'The Book'. We have a clue to what it was in a letter written by her nine years earlier, on 11th August 1843. It was a joint project by herself and Lord Lovelace, her husband, and was intended to involve Babbage's 'undivided energies'. It involved 'consulting your Engine' (it required Babbage’s computer). The letter gives no hint about the project except for the high-minded nature of its purpose, and that it was highly mathematical. From then on, the surviving correspondence between the two gives only veiled references to 'The Book'. There isn't much, since Babbage later destroyed any letters that could have damaged her reputation within the Establishment. 'I cannot spare the book today, which I am very sorry for. At the moment I want it for constant reference, but I think you can have it tomorrow' (Oct 1844)  And 'I will send you the book directly, and you can say, when you receive it, how long you will want to keep it'. (Nov 1844)  The two of them were obviously intent on the work: She writes, four years later, 'I have an engagement for Wednesday which will prevent me from attending to your wishes about the book' (Dec 1848). This was something that they both needed to work on, but could not do in parallel: 'I will send the book on Tuesday, and it can be left with you till Friday' (11 Feb 1849). After six years' work, it had been so well-handled that it was beginning to fall apart: 'Don't forget the new cover you promised for the book. The poor book is very shabby and wants one' (20 Sept 1849). So what was going on? The word 'book' was not a code-word: it was a real book, probably a 'printer's blank', plain paper, but properly bound so printers and publishers could show off how the published work might look. The hints from the correspondence are of advanced mathematics. It is obvious that the book was travelling between them, back and forth, each one working on it for less than a week before passing it back. Ada and her husband were certainly involved in gambling large sums of money on the horses, and so most biographers have concluded that the three of them were trying to calculate the mathematical odds on the horses. This theory has three large problems. Firstly, Ada's original letter proposing the project refers to its high-minded nature. Babbage was temperamentally opposed to gambling and would scarcely have given so much time to the project, even though he was devoted to Ada.
Secondly, Babbage would very soon have realized the hopelessness of trying to beat the bookies. This sort of betting never attracts someone of his type of intellectual background. The third problem is that any work on calculating the odds on horses would not need a well-thumbed book to pass back and forth between them; they would not have had to work in series. The original project was instigated by Ada, along with her husband, William King-Noel, 1st Earl of Lovelace. Charles Babbage was invited to join the project after the couple had come up with the idea. What could William have contributed? One might assume that William was a Bertie Wooster character, addicted only to the joys of the turf, but this was far from the truth. He was a scientist, a Cambridge graduate who was later elected to be a Fellow of the Royal Society. After Eton, he went to Trinity College, Cambridge. On graduation, he entered the diplomatic service and acted as secretary under Lord Nugent, who was Lord Commissioner of the Ionian Islands. William was very friendly with Babbage too, able to discuss scientific matters on equal terms. He was a capable engineer who invented a process for bending large timbers by the application of steam heat. He delivered a paper to the Institution of Civil Engineers in 1849, and received praise from the great engineer, Isambard Kingdom Brunel. As well as being Lord Lieutenant of the County of Surrey for most of Victoria's reign, he had time for a string of scientific and engineering achievements. Whatever the project was, it is unlikely that William was a junior partner. After Ada's death, the project disappeared. Then, two years later, Babbage, through one of his occasional outbursts of temper, demonstrated that he was able to decrypt one of the most powerful of secret codes, Vigenère's autokey cipher. All contemporary diplomatic and military messages used a variant of this cipher. Babbage had made three important discoveries, namely, the mathematical law of this cipher, the principle of the key periodicity, and the technique of the symmetry of position. The technique is now known as the Kasiski examination, also called the Kasiski test, but Babbage got there first. At one time, he listed amongst his future projects, the writing of a book 'The Philosophy of Decyphering', but it never came to anything. This discovery was going to change the course of history, since it was used to decipher the Russians’ military dispatches in the Crimean war. Babbage himself played a role during the Crimean War as a cryptographical adviser to his friend, Rear-Admiral Sir Francis Beaufort of the Admiralty. This is as much as we can be certain about in trying to make sense of the bulk of the time that Charles Babbage and Ada Lovelace worked together. Nine years of intensive work, involving the 'Engine' and a great deal of mathematics and research seems to have been lost: or has it? I've argued in the past http://www.simple-talk.com/community/blogs/philfactor/archive/2008/06/13/59614.aspx that the cracking of the Vigenère autokey cipher was a fundamental motive behind the British Government's support and funding of the 'Difference Engine'. The Duke of Wellington, who understood the military significance of being able to read enemy dispatches, was the most steadfast advocate of the project.
If the three friends were actually doing the work of cracking codes by mathematical means, using the techniques of key periodicity and symmetry of position (the use of a book being passed quickly to and fro is very suggestive), intending to then use the 'Engine' to do the routine cracking of each dispatch, then this is a rather different story. The project was Ada and William's idea. (William had served in the diplomatic service and would be familiar with the use of codes). This makes Ada Lovelace the initiator of a project which, by giving both Britain, and probably the USA, a diplomatic and military advantage in the second part of the Nineteenth century, changed world history. Ada would never have wanted any credit for cracking the cipher, and developing the method that rendered all contemporary military and diplomatic ciphering techniques nugatory; quite the reverse. And it is clear from the gaps in the record of the letters between the collaborators that the evidence was destroyed, probably at her request, by her irascible but intensely honorable executor, Charles Babbage. Charles Babbage toyed with the idea of going public, but the Crimean war put an end to that. The British Government had a valuable secret, and intended to keep it that way. Ada and Charles had quite often discussed possible moneymaking projects that would fund the development of the Analytical Engine, the first programmable computer, but their secret work was never in the running as a potential cash cow. I suspect that the British Government was, even then, working on the concealment of a discovery whose value to the nation depended on it remaining so. The success of code-breaking in the Crimean war, and the American Civil war, led to the British and Americans subsequently giving much more weight and funding to the science of decryption. Paradoxically, this makes Ada's contribution even closer to the creation of Colossus, the first digital computer, built at Bletchley Park specifically to crack the Nazis’ secret codes.

    Read the article

  • So long and thanks for the fish…

    - by Geoff N. Hiten
    This marks my last post as a SQLPASS Board member. I learned a lot during my year of service and I thank everyone involved for this opportunity. I would especially like to thank the Chapter leaders and Regional Mentors for Virtual Chapters who (mostly) patiently taught me about Virtual Chapters. I hope the changes I put in place will help strengthen and grow VCs and PASS going forward. I would also like to thank everyone who encouraged me to reach beyond my comfort zone and accept a leadership position within the PASS organization. My overall principle was to be a good steward of the PASS community. Could I have done more? Always. Did I do enough? I hope so. But PASS is a volunteer organization and my time, like yours, is limited. I have other obligations in life that supersede PASS. Now I have more time for some of those. I won’t be going away or leaving the SQL Community. I will still contribute to the community and support PASS, just in a different role. Time to let somebody else enjoy the hot seat for a while. Finally, everyone who voted (not just for me) deserves thanks. More voters and more engaged voters, strong candidates, and a vigorous debate were all I wanted out of declaring as a candidate last year. This year the SQL community got exactly that. Thank you.

    Read the article

  • First Stable Version of Opera 15 has been Released

    - by Akemi Iwaya
    Opera has just released the first stable version of their revamped browser and will be proceeding at a rapid pace going forward. There is also news concerning the three development streams they will maintain, along with news of an update for the older 12.x series for those who are not ready to update to 15.x just yet. The day is full of good news for Opera users, whether they have already switched to the new Blink/Webkit Engine version or are still using the older Presto Engine version. First, news of the new development streams… Opera has released details outlining their three new release streams: Opera (Stable) – Released every couple of weeks, this is the most solid version, ready for mission-critical daily use. Opera Next – Updated more frequently than Stable, this is the feature-complete candidate for the Stable version. While it should be ready for daily use, you can expect some bugs there. Opera Developer – A bleeding edge version, you can expect a lot of fancy stuff there; however, some nasty bugs might also appear from time to time. From the Opera Desktop Team blog post: When you install Opera from a particular stream, your installation will stick to it, so Opera Stable will always be updated to Opera Stable, Opera Next to Opera Next and so on. You can choose for yourself which stream is the best for you. You can even follow a couple of them at the same time! Of particular interest is the announcement of continued development for the 12.x series. A new version (12.16) is due to be released soon to help keep the older series up to date and secure while the transition process from 12.x to 15.x continues.

    Read the article

  • Migrating from SQL Trace to Extended Events

    - by extended_events
    In SQL Server codenamed “Denali” we are moving our diagnostic tracing capabilities forward by building a system on top of Extended Events. With every new system you face the specter of migration, which is always a bit of a hassle. I’m obviously motivated to see everyone move their diagnostic tracing systems over to the new extended events based system, so I wanted to make sure we lowered the bar for the migration process to help ease your trials. In my initial post on Denali CTP 1 I described a couple of tables that we created that will help map the existing SQL Trace Event Classes to the equivalent Extended Events events. In this post I’ll describe the tables in a bit more detail, explain the relationship between the SQL Trace objects (Event Class & Column) and Extended Event objects (Events & Actions) and at the end provide some sample code for a managed stored procedure that will take an existing SQL Trace session (e.g. a trace that you can see in sys.traces) and convert it into event session DDL.

Can you relate? In some ways, SQL Trace and Extended Events are kind of like the Standard and Metric measuring systems in the United States. If you spend too much time trying to figure out how to convert between the two it will probably make your head hurt. It’s often better to just use the new system without trying to translate between the two. That said, people like to relate new things to the things they’re comfortable with, so, with some trepidation, I will now explain how these two systems are related to each other.

First, some terms… SQL Trace is made up of Event Classes and Columns. The Event Class occurs as the result of some activity in the database engine, for example, SQL:BatchCompleted fires when a batch has completed executing on the server. Each Event Class can have any number of Columns associated with it and those Columns contain the data that is interesting about the Event Class, such as the duration or database name. In Extended Events we have objects named Events, EventData fields and Actions. The Event (some people call this an xEvent but I’ll stick with Event) is equivalent to the Event Class in SQL Trace since it is the thing that occurs as the result of some activity taking place in the server. An EventData field (from now on I’ll just refer to these as fields) is a piece of information that is highly correlated with the event and is always included as part of the schema of an Event. An Action is something that can be associated with any Event and it will cause some additional “action” to occur whenever the parent Event occurs. Actions can do a number of different things; for example, there are Actions that collect additional data and Actions that take memory dumps. When mapping SQL Trace onto Extended Events, Columns are covered by a combination of both fields and Actions. Knowing exactly where a Column is covered by a field and where it is covered by an Action is a bit of an art, so we created the mapping tables to make you an Artist without the years of practice. Let me draw you a map.

Event Mapping

The table dbo.trace_xe_event_map exists in the master database with the following structure:

Column_name      Type
trace_event_id   smallint
package_name     nvarchar
xe_event_name    nvarchar

By joining this table to sys.trace_events using trace_event_id and to sys.dm_xe_objects using xe_event_name you can get a fair amount of information about how Event Classes are related to Events. The most basic query this lends itself to is to match an Event Class with the corresponding Event.

SELECT
    t.trace_event_id,
    t.name [event_class],
    e.package_name,
    e.xe_event_name
FROM sys.trace_events t
INNER JOIN dbo.trace_xe_event_map e
    ON t.trace_event_id = e.trace_event_id

There are a couple of things you’ll notice as you peruse the output of this query: For the most part, the names of Events are fairly close to the original Event Class; e.g. SP:CacheMiss == sp_cache_miss, and so on. We’ve mostly stuck to a one-to-one mapping between Event Classes and Events, but there are a few cases where we have combined when it made sense. For example, Data File Auto Grow, Log File Auto Grow, Data File Auto Shrink & Log File Auto Shrink are now all covered by a single event named database_file_size_change. This just seemed like a “smarter” implementation for this type of event; you can get all the same information from this single event (grow/shrink, Data/Log, Auto/Manual growth) without having multiple different events. You can use Predicates if you want to limit the output to just one of the original Event Class measures. There are some Event Classes that did not make the cut and were not migrated. These fall into two categories: there were a few Event Classes that had been deprecated, or that just did not make sense, so we didn’t migrate them. (You won’t find an Event related to mounting a tape – sorry.) The second class is bigger; with rare exception, we did not migrate any of the Event Classes that were related to Security Auditing using SQL Trace. We introduced the SQL Audit feature in SQL Server 2008 and that will be the compliance and auditing feature going forward. This was a very deliberate decision to support separation of duties for DBAs. There are separate permissions required for SQL Audit and Extended Events tracing so you can assign these tasks to different people if you choose. (If you’re wondering, the permission for Extended Events is ALTER ANY EVENT SESSION, which is covered by CONTROL SERVER.)

Action Mapping

The table dbo.trace_xe_action_map exists in the master database with the following structure:

Column_name       Type
trace_column_id   smallint
package_name      nvarchar
xe_action_name    nvarchar

You can find more details by joining this to sys.trace_columns on the trace_column_id field.

SELECT
    c.trace_column_id,
    c.name [column_name],
    a.package_name,
    a.xe_action_name
FROM sys.trace_columns c
INNER JOIN dbo.trace_xe_action_map a
    ON c.trace_column_id = a.trace_column_id

If you examine this list, you’ll notice that there are relatively few Actions that map to SQL Trace Columns given the number of Columns that exist. This is not because we forgot to migrate all the Columns, but because much of the data for individual Event Classes is included as part of the EventData fields of the equivalent Events so there is no need to specify them as Actions.

Putting it all together

If you’ve spent a bunch of time figuring out the inner workings of SQL Trace, and who hasn’t, then you probably know that the typical set of Columns you find associated with any given Event Class in SQL Profiler is not fixed, but is determined by the contents of the table sys.trace_event_bindings. We’ve used this table along with the mapping tables to produce a list of Event + Action combinations that duplicate the SQL Profiler Event Class definitions using the following query, which you can also find in the Books Online topic How To: View the Extended Events Equivalents to SQL Trace Event Classes.

USE MASTER;
GO
SELECT DISTINCT
    tb.trace_event_id,
    te.name AS 'Event Class',
    em.package_name AS 'Package',
    em.xe_event_name AS 'XEvent Name',
    tb.trace_column_id,
    tc.name AS 'SQL Trace Column',
    am.xe_action_name as 'Extended Events action'
FROM (sys.trace_events te
    LEFT OUTER JOIN dbo.trace_xe_event_map em
        ON te.trace_event_id = em.trace_event_id)
LEFT OUTER JOIN sys.trace_event_bindings tb
    ON em.trace_event_id = tb.trace_event_id
LEFT OUTER JOIN sys.trace_columns tc
    ON tb.trace_column_id = tc.trace_column_id
LEFT OUTER JOIN dbo.trace_xe_action_map am
    ON tc.trace_column_id = am.trace_column_id
ORDER BY te.name, tc.name

As you might imagine, it’s also possible to map an existing trace definition to the equivalent event session by judicious use of fn_trace_geteventinfo joined with the two mapping tables. This query extracts the list of Events and Actions equivalent to the trace with ID = 1, which is most likely the Default Trace. You can find this query, along with a set of other queries and steps required to migrate your existing traces over to Extended Events, in the Books Online topic How to: Convert an Existing SQL Trace Script to an Extended Events Session.

USE MASTER;
GO
DECLARE @trace_id int
SET @trace_id = 1
SELECT DISTINCT
    el.eventid,
    em.package_name,
    em.xe_event_name AS 'event',
    el.columnid,
    ec.xe_action_name AS 'action'
FROM (sys.fn_trace_geteventinfo(@trace_id) AS el
    LEFT OUTER JOIN dbo.trace_xe_event_map AS em
        ON el.eventid = em.trace_event_id)
LEFT OUTER JOIN dbo.trace_xe_action_map AS ec
    ON el.columnid = ec.trace_column_id
WHERE em.xe_event_name IS NOT NULL AND ec.xe_action_name IS NOT NULL

You’ll notice in the output that the list doesn’t include any of the security audit Event Classes; as I wrote earlier, those were not migrated.

But wait…there’s more! If this were an infomercial there’d be some obnoxious guy next to me blogging “Well Mike…that’s pretty neat, but I’m sure you can do more. Can’t you make it even easier to migrate from SQL Trace?” Needless to say, I’d blog back, in an overly excited way, “You bet I can, obnoxious blogger side-kick!” What I’ve got for you here is an Extended Events Team Blog only special – this tool will not be sold in any store; it’s a special offer for those of you reading the blog. I’ve wrapped all the logic of pulling the configuration information out of an existing trace and building the Extended Events DDL statement into a handy, dandy CLR stored procedure. Once you load the assembly and register the procedure you just supply the trace id (from sys.traces) and provide a name for the event session. Run the procedure and out pops the DDL required to create an equivalent session. Any aspects of the trace that could not be duplicated are included in comments within the DDL output. This procedure does not actually create the event session – you need to copy the DDL out of the message tab and put it into a new query window to do that. It also requires an existing trace (but it doesn’t have to be running) to evaluate; there is no functionality to parse t-sql scripts. I’m not going to spend a bunch of time explaining the code here – the code is pretty well commented and hopefully easy to follow. If not, you can always post comments or hit the feedback button to send us some mail.

Sample code: TraceToExtendedEventDDL

Installing the procedure

Just in case you’re not familiar with installing CLR procedures…once you’ve compiled the assembly you can load it using a script like this:

-- Context to master
USE master
GO
-- Create the assembly from a shared location.
CREATE ASSEMBLY TraceToXESessionConverter
FROM 'C:\Temp\TraceToXEventSessionConverter.dll'
WITH PERMISSION_SET = SAFE
GO
-- Create a stored procedure from the assembly.
CREATE PROCEDURE CreateEventSessionFromTrace
    @trace_id int, @session_name nvarchar(max)
AS
EXTERNAL NAME TraceToXESessionConverter.StoredProcedures.ConvertTraceToExtendedEvent
GO

Enjoy! -Mike
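
For illustration only, a minimal sketch of the general shape such a SQLCLR procedure takes follows. This is not the downloadable sample itself; the class and method names come from the EXTERNAL NAME binding above, but the body is an assumption, and a real version would also emit actions, predicates, targets and comments for anything that could not be converted.

using System.Data.SqlClient;
using System.Text;
using Microsoft.SqlServer.Server;

public partial class StoredProcedures
{
    // Hypothetical skeleton only: the real converter in the sample download
    // builds a much more complete CREATE EVENT SESSION statement.
    [SqlProcedure]
    public static void ConvertTraceToExtendedEvent(int trace_id, string session_name)
    {
        StringBuilder ddl = new StringBuilder();
        ddl.AppendLine("CREATE EVENT SESSION [" + session_name + "] ON SERVER");

        // The context connection lets the procedure query the mapping tables in-process.
        using (SqlConnection conn = new SqlConnection("context connection=true"))
        {
            conn.Open();
            SqlCommand cmd = new SqlCommand(
                "SELECT DISTINCT em.package_name, em.xe_event_name " +
                "FROM sys.fn_trace_geteventinfo(@trace_id) AS el " +
                "INNER JOIN dbo.trace_xe_event_map AS em ON el.eventid = em.trace_event_id",
                conn);
            cmd.Parameters.AddWithValue("@trace_id", trace_id);

            using (SqlDataReader reader = cmd.ExecuteReader())
            {
                while (reader.Read())
                {
                    // A fuller version would also emit ADD TARGET and the ACTION(...)
                    // list derived from dbo.trace_xe_action_map.
                    ddl.AppendLine("ADD EVENT " + reader.GetString(0) + "." + reader.GetString(1) + ",");
                }
            }
        }

        // Print the generated DDL to the Messages tab for copy and paste.
        SqlContext.Pipe.Send(ddl.ToString());
    }
}

Once registered, running EXEC CreateEventSessionFromTrace with a trace id from sys.traces and a session name prints the equivalent CREATE EVENT SESSION statement to the Messages tab.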

    Read the article

  • Implementing fog of war in opengl es 2.0 game

    - by joxnas
    Hi game development community, this is my first question here! ;) I'm developing a tactics/strategy real-time Android game and I've been wondering for some time what the best way is to implement an efficient and somewhat nice-looking fog of war for it. My experience with OpenGL or Android is not vast by any means, but I think it is sufficient for what I'm asking here. So far I have thought of some solutions: 1. Draw white circles onto a dark background, corresponding to the units' visibility, render that to a texture, and then draw a quad with that texture with the blend mode set to multiply. Will this approach be efficient? Will it take too much memory? (I don't know how to render to a texture and then use the texture. Is it too messy?) 2. Have a grid object with a vertex shader which has an array of uniforms holding the coordinates of all units, and another array holding their visibility ranges. The number of units will very probably never be bigger than 100. The vertex shader needs to test, for each considered vertex, whether there is some unit which can see it. In order to do this, it will have to loop over the coordinate array and do some calculations based on distance. The efficiency of this is inversely proportional to the looks of it. A denser grid will result in a more beautiful fog of war... but will require a greater number of vertices to be checked. Is it possible to find a nice compromise, or is this a bad solution from the start? Which solution is the best? Are there better alternatives? Which ones? Thank you for your time.
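
For what it's worth, the per-vertex test that option 2 describes boils down to a loop like the one below. This is a plain C# mirror of the logic purely for illustration (the real thing would live in the GLSL vertex shader); all names and the fog values are invented, and only the ~100-unit cap comes from the question.

public static class FogOfWar
{
    public struct Unit
    {
        public float X, Y, Range;   // world-space position and sight radius
    }

    // Returns the fog darkness for a single grid vertex: 0 = fully visible.
    public static float FogAlpha(float vertexX, float vertexY, Unit[] units)
    {
        foreach (Unit u in units)                 // up to ~100 units per the question
        {
            float dx = vertexX - u.X;
            float dy = vertexY - u.Y;
            if (dx * dx + dy * dy <= u.Range * u.Range)
                return 0.0f;                      // inside some unit's sight radius
        }
        return 0.85f;                             // hidden: mostly dark (arbitrary value)
    }
}

The cost is roughly (grid vertices × units) every frame, which is why the look/performance trade-off scales directly with how dense the fog grid is; the render-to-texture option in 1 pays per pixel of the fog texture instead.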

    Read the article

  • Building the Ultimate SharePoint 2010 Development Environment

    - by Manesh Karunakaran
    It’s been more than a month since SharePoint 2010 RTMed, and a lot of people have downloaded and set up their very own SharePoint 2010 development rigs. Quite a few people have written blog posts about setting up good development environments; there is even an MSDN article on it. Two of the blogs worth noting are from MVPs Sahil Malik and Wictor Wilén, so make sure that you check those out as well. One of the bad side-effects of being a geek is the need to do the technical stuff the best way possible (pragmatic or otherwise), but the problem with this is that what is considered "best" is relative - precisely the reason why you are reading this post now. Most of the posts that I have read are outdated or need updating, use the wrong OSes or virtualization solutions (again, opinions vary), or use them the wrong way. Here's a developer's view of Building the Ultimate SharePoint 2010 Development Rig. If you are a sales guy, it's time to close this window.
Confusion 1: Which Host Operating System and Virtualization Solution to use? This point has been beaten to death in numerous blog posts in the past; if you have time to invest, read this excellent post by our very own SharePoint Joel on this subject. But if you are planning to build the Ultimate Development Rig, then Windows Server 2008 R2 with Hyper-V is the option that you should be looking at. I have been using this as my primary OS for about 6-7 months now, and I haven't had any driver or application compatibility issues. In my experience all the Windows 7 drivers work fine with Windows Server 2008 R2 as well. You can enable Aero for eye candy (and the Windows 7 look and feel), and except for a few things like hibernation support (which can be enabled if you really want it), Windows Server 2008 R2 is the best workstation OS that I have used to date. But frankly, the answer to the question of which OS to use depends primarily on one question: are you willing to change your primary OS? If the answer to that is 'Yes', then Windows Server 2008 R2 with Hyper-V is the best option; if not, look at VMware or VirtualBox - both are equally good. Those coming from a Virtual PC background might prefer Sun VirtualBox. Besides, these provide support for running 64-bit guest machines on 32-bit hosts if the underlying hardware is truly 64-bit. See my earlier post on this. Since we are going to build the ultimate rig, we will use Windows Server 2008 R2 with Hyper-V, for the reasons mentioned above.
Confusion 2: Should I use a multi-(virtual) server set up? A lot of people use multiple servers for their development environments - like Wictor Wilén suggests - one server hosting Active Directory, one hosting SharePoint Server and another one for SQL Server. True, this mimics the production environment in the best possible way, but as somebody who has fallen for this set up before, I can tell you that you don't really gain anything by doing this. Microsoft has done well to ensure that if you can do it on one machine, you can do it in a farm environment as well. Besides, when you run multiple server-class machine instances in parallel, a lot of processor cycles are wasted for no good reason. In my personal experience, as somebody who needs to switch between MOSS 2007 and SharePoint 2010 environments from time to time, the best possible solution is to: make the host Windows Server 2008 R2 machine your Domain Controller (AD server); make all your virtual guest OSes join this domain; and have each individual guest OS image run its own local SQL Server instance. The advantages are that you can reuse the users and groups in each of the guest operating systems, you can manage the users in one place, AD is lightweight and doesn't take up too many resources on your host machine, and having separate SQL instances for each of the development images gives you maximum flexibility in terms of configuration - for example, your SharePoint rigs can have simpler DB configurations compared to your MS BI blast pits.
Confusion 3: Which Operating System should I use to run SharePoint 2010? Now that's a no-brainer: use Windows Server 2008 R2 as your guest OS. When you are building the ultimate rig, why compromise? If you are planning to run Windows Server 2008 as your guest OS instead, there are a few patches that you need to install at different times during the installation; for that, follow the steps mentioned here. Okay, now that we have made our choices, let's get to the interesting part of building the rig.
Step 1: Prepare the host machine – Install Windows Server 2008 R2. Install Windows Server 2008 R2 on your best desktop/laptop. If you have read this far, I am quite sure that you are somebody who can install an OS on your own, so go ahead and do that. Make sure that you run the compatibility wizard before you nuke your current OS. There are plenty of blogs telling you how to make a Windows Server 2008 R2 workstation that feels and behaves like a Windows 7 machine; follow one, and once you are done, head to Step 2.
Step 2: Configure the host machine as a Domain Controller. Before we begin, let me tell you that this step is completely optional - you don't really need to do this and can simply use the local users on the guest machines instead - but this is a much cleaner approach to managing users and groups if you run multiple guest operating systems. This post neatly explains how to configure your Windows Server 2008 R2 host machine as a Domain Controller. Follow those simple steps and you are good to go. If you are not able to get it to work, try this.
Step 3: Prepare the guest machine – Install Windows Server 2008 R2. Open Hyper-V Manager, choose to create a new guest operating system, allocate at least 2 GB of memory to the guest OS, choose the Windows Server 2008 R2 installation media, and start the virtual machine to commence installation. Once the installation is done, activate the OS.
Step 4: Make the guest operating systems join the domain. This step is quite simple - just follow the steps below. Fire up Hyper-V Manager and open your guest OS. Click on Start, right-click on 'Computer' and choose 'Properties'. On the window that pops up, click on 'Change settings'. On the 'System Properties' window that comes up, click the 'Change' button. A window named 'Computer Name/Domain Changes' opens up; in the text box titled Domain, type in the domain name from Step 2. Click OK, and Windows will show you the welcome-to-domain message and ask you to restart the machine; click OK to restart. If joining the domain fails, it means that you have not set up networking in Hyper-V for the guest OS to communicate with the host. To enable it, follow the steps I mentioned in this post earlier.
Step 5: Install SQL Server 2008 R2 on the guest machine. SQL Server 2008 R2 installs without hassle on Windows Server 2008 R2 (plain SQL Server 2008 needs SP2 to work properly on Windows Server 2008 R2). SQL Server 2008 R2 also allows you to add PowerPivot support directly to SharePoint. Choose to install in SharePoint Integrated Mode in the Reporting Services configuration.
Step 6: Install KB971831 and the SharePoint 2010 prerequisites. Now install the WCF hotfix for Microsoft Windows (KB971831) from this location, and the SharePoint 2010 prerequisites from the SP2010 installation media.
Step 7: Install and configure SharePoint 2010. Install SharePoint 2010 from the installation media. After the installation is complete, you are prompted to start the SharePoint Products and Technologies Configuration Wizard. If you are using a local instance of Microsoft SQL Server 2008, install Microsoft SQL Server 2008 KB 970315 x64 before starting the wizard. If your development environment uses a remote instance of Microsoft SQL Server 2008, or if it has a pre-existing installation of Microsoft SQL Server 2008 on which KB 970315 x64 has already been applied, this step is not necessary. With the wizard open, install SQL Server 2008 KB 970315 x64, and once that installation is finished, complete the wizard. Alternatively, you can choose not to run the wizard by clearing the SharePoint Products and Technologies Configuration Wizard check box and closing the completed-installation dialog box; then install SQL Server 2008 KB 970315 x64 and manually start the SharePoint Products and Technologies Configuration Wizard by opening a Command Prompt window and executing the following command: C:\Program Files\Common Files\Microsoft Shared\Web Server Extensions\14\BIN\psconfigui.exe. Note that the SharePoint Products and Technologies Configuration Wizard may fail if you are using a computer that is joined to a domain but that is not connected to a domain controller.
Step 8: Install Visual Studio 2010 and the SharePoint 2010 SDK. Install Visual Studio 2010, then download and install the Microsoft SharePoint 2010 SDK.
Step 9: Install PowerPivot for SharePoint and configure Reporting Services. Pop in the SQL Server 2008 R2 installation media once again and install PowerPivot for SharePoint; this gets added as another instance named POWERPIVOT. Configure Reporting Services by following the steps mentioned here. If you need to get into the details of how the integration between SharePoint 2010 and SQL Server 2008 R2 works, see Working Together: SQL Server 2008 R2 Reporting Services Integration in SharePoint 2010, an excellent article by Alan Le Marquand.
Step 10: Download and install the sample databases for Microsoft SQL Server 2008 R2. SharePoint 2010 comes with a lot of cool stuff like PerformancePoint Services and BCS, and if you want to try these out, you need data in your databases. So, to save yourself the trouble of creating sample data for your PerformancePoint and BCS experiments, download and install the Sample Databases for Microsoft SQL Server 2008 R2 from CodePlex. And you are done! Fire up Visual Studio 2010 and start coding away!!
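One optional extra: if you want a quick sanity check that the rig is wired up end to end, a tiny console application against the server object model does the trick. This is just my own habit rather than part of the official steps - target x64 and .NET 3.5, run it on the SharePoint machine itself, and replace the URL with whatever test site collection you created:

using System;
using Microsoft.SharePoint;

class RigSmokeTest
{
    static void Main()
    {
        // Open the test site collection and print the root web's title.
        using (SPSite site = new SPSite("http://localhost"))
        using (SPWeb web = site.OpenWeb())
        {
            Console.WriteLine("Connected to: " + web.Title);
        }
    }
}

If it prints the web title, then SharePoint, SQL Server and the domain accounts are all talking to each other.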

    Read the article

  • CodePlex Daily Summary for Friday, July 19, 2013

    CodePlex Daily Summary for Friday, July 19, 2013Popular ReleasesuComponents: uComponents v5.5.0: Following on from our 101355 release, we bring you... v5.5.0! Resolved issues The following issues have been resolved in 5.5.0: 14634 14848 DataType Grid 14840 14842 14845 UpgradingThe best way to upgrade uComponents is to re-install via Umbraco's back-office package installer, (either from the package repository or a local package upload). Do not uninstall an existing uComponents package, as this may lead to data loss! Umbraco compatibilityThis release is compatible with Umbraco 4...51Degrees.mobi - Mobile Device Detection and Redirection: 2.1.19.4: One Click Install from NuGet This release introduces the 51Degrees.mobi IIS Vary Header Fix. When Compression and Caching is used in IIS, the Vary header is overwritten, making intelligent caching with dynamic content impossible. Find out more about installing the Vary Header fix. Changes to Version 2.1.19.4Handlers now have a ‘Count’ property. This is an integer value that shows how many devices in the dataset that use that handler. Provider.cs -> GetDeviceInfoByID to address a problem w...SalarDbCodeGenerator: SalarDbCodeGenerator v2.1.2013.0719: Version 2.1.2013.0719 2013/7/19 Pattern Changes: * DapperContext pattern is added. * All patterns are updated to work with one-to-one relations. Changes: * One-to-one relation is supported. * Minor bug fixes.Scryber PDF Generation: Scryber.Installer.v0.8.3: This one has tables! A significant update to the scryber library, with the inclusion of tables. There is no column span, or row span, and nested tables are not fully supported. But - they overflow nicely and are well styled. And they can be data bound. Other updates The Link element has inner components directly as a child, rather than in a Content element. The existing files will still continue to work, but are invalid wrt the schema. Databinding is now supported at the document level, r...Player Framework by Microsoft: Player Framework for Windows and WP (v1.3 beta 2): Includes all changes in v1.3 beta 1 Additional support for Windows 8.1 Preview New API (JS): addTextTrack New API (JS): msKeys New API (JS): msPlayToPreferredSourceUri New API (JS): msSetMediaKeys New API (JS): onmsneedkey New API (Xaml): SetMediaStreamSource method New API (Xaml): Stretch property New API (Xaml): StretchChanged event New API (Xaml): AreTransportControlsEnabled property New API (Xaml): IsFullWindow property New API (Xaml): PlayToPreferredSourceUri proper...Outlook 2013 Add-In: Multiple Calendars: As per popular request, this new version includes: - Support for multiple calendars. This can be enabled in the configuration by choosing which ones to show/hide appointments from. In some cases (public folders) it may time out and crash, and so far it only supports "My Calendars", so not shared ones yet. Also they're currently shown in the same font/color so there are no confusions with color categories, but please drop me a line on any suggestions you'd like to see implemented. - Added fri...GoAgent GUI: GoAgent GUI 1.2.0 ???: *???????,??????????????,?????????????? 
????????* ????????(Beta) Beta????:https://goagent.codeplex.com/workitem/list/basic ??????:uygw@outlook.com ????:https://goagent.codeplex.com/documentation ?????????????????。Circuit Diagram: Circuit Diagram 2.0 Beta 2: New in this release: Show grid in editor Cut/copy/paste support Bug fixesAscend 3D: Ascend (2013-07-17): Ascend 2.1 Animation bugfixes and speed/robustness improvements Added compatibility for multiple top-level bones on armatures Ascend Blender Exporter 2.1 Removed need to create parent-child relationship between armature and skinning mesh Added support for exporting multiple top-level bones on armatures AscendViewer 1.0.4 Updated to use latest version of Ascend libraryCommunity TFS Build Extensions: July 2013: The July 2013 release contains VS2010 Activities(target .NET 4.0) VS2012 Activities (target .NET 4.5) VS2013 Activities (target .NET 4.5.1) Community TFS Build Manager VS2012 The Community TFS Build Manager can also be found in the Visual Studio Gallery here where updates will first become available. A version supporting VS2010 is also available in the Gallery here.smoketestgit1: new release: Wiki Markup Guide bold italics underline Heading 1 Heading 2 Bullet List Bullet List 2 Number List Number List 2 http://www.example.com Do not apply formattingsmoketesthg: new release: Wiki Markup Guide bold italics underline Heading 1 Heading 2 Bullet List Bullet List 2 Number List Number List 2 http://www.example.com Do not apply formattingsmoketesttfs: new release: Wiki Markup Guide bold italics underline Heading 1 Heading 2 Bullet List Bullet List 2 Number List Number List 2 http://www.example.com Do not apply formattingdatajs - JavaScript Library for data-centric web applications: datajs version 1.1.1: datajs is a cross-browser and UI agnostic JavaScript library that enables data-centric web applications with the following features: OData client that enables CRUD operations including batching and metadata support using both ATOM and JSON payloads. Single store abstraction that provides a common API on top of HTML5 local storage technologies. Data cache component that allows reading data ranges from a collection and storing them locally to reduce the number of network requests. Changes ...Orchard Project: Orchard 1.7 RC: Planning releasedTerminals: Version 3.1 - Release: Changes since version 3.0:15992 Unified usage of icons in user interface Added context menu in Organize favorites grid Fixed:34219 34210 34223 33981 34209 Install notes:No changes in database (use database from release 3.0) No upgrade of configuration, passwords, credentials or favorites See also upgrade notes for release 3.0Media Companion: Media Companion MC3.573b: XBMC Link - Let MC update your XBMC Library Fixes in place, Enjoy the XBMC Link function Well, Phil's been busy in the background, and come up with a Great new feature for Media Companion. Currently only implemented for movies. Once we're happy that's working with no issues, we'll extend the functionality to include TV shows. All the help for this is build into the application. Go to General Preferences - XBMC Link for details. Help us make it better* Currently only tested on local and ...Wsus Package Publisher: Release v1.2.1307.15: Fix a bug where WPP crash if 'ShowPendingUpdates' is start with wrong credentials. Fix a bug where WPP crash if ArrivalDateAfter and ArrivalDateBefore is equal in the ComputerView. Add a filter in the ComputerView. 
(Thanks to NorbertFe for this feature request) Add an option, when right-clicking on a computer, you can ask for display the current logon user of the remote computer. Add an option in settings to choose if WPP ping remote computers using IPv4, IPv6 or IPv6 and, if fail, IP...Lab Of Things: vBeta1: Initial release of LoTVidCoder: 1.4.23: New in 1.4.23 Added French translation. Fixed non-x264 video encoders not sticking in video tab. New in 1.4 Updated HandBrake core to 0.9.9 Blu-ray subtitle (PGS) support Additional framerates: 30, 50, 59.94, 60 Additional sample rates: 8, 11.025, 12 and 16 kHz Additional higher bitrates for audio Same as Source Constant Framerate 24-bit FLAC encoding Added Windows Phone 8 and Apple TV 3 presets Introduced process isolation for encodes. Now if HandBrake crashes, VidCoder will ...New Projects[C#]A Complete MySQL Manager Project: An open-source MySQL database manager oriented to multi-threading environments, supporting to create an entire database from scratch and edit it.bCal Javascript Calendar Generator: Using this simple javascript library, you can create simple navigable calendars in your website with just one line of user code.Better Network: Tools to delete network profile and network lists in Windows 8.C# Intellisense for Notepad++: 'CS-Script Intellisense' is a real C# intellisense solution based on CS-Script and ICSharpCode.NRefactory/Mono.Cecil.DARF: Dynamic Application Runtime Framework: A set of tools and libraries which help software developers create fully distributed and component-based applications. FeiXinV5: Just TestFollower Catcher: Game made for the 2012 Windows 8 Megathon. Second place in Madrid's local competition.MeeOS .NET: El primer sistema operativo GNU GPL en español.Music Vault: Music Vault is program to store your favorite music in one place, you can in any time move all your music library by moving only one file.MVC uTorrent: This is an ASP.NET MVC uTorrent front end, that utilises the uTorrent API. The main features: - Single page application - Can be hosted in IISMyTaskManager: This project Related to distribute the task to log in users OO2RDF: OO2RDF is a framework for data triplification. P(y)rotein Subcellular Location Image Database: A library that allows communication with an OMERO database, an open-source software for storage and manipulation of biological images.PE Toolbox: PE Toolbox ? ????? ?? ???????????, ????? ??????? ??? ??????????? ?? ?????. ???? ????? ??????????? ???????: - PE IFE (Importing from Excel) PeGscan: *PeGscan*ScriptZilla: ScriptZilla is an Free Editor for Windows with more Than 23 Languages.Sharepoint : Reconciliate SSRS Reports and Datasets with datasources Powershell: Powershell to link datasources to SSRS reports and datasetsSync-Moped: The idea behind the "Sync-Moped" is to synchronize two directories with backup functionality.System.Time: System.Time provides a type for .NET that allows the managing of a time in System.String, System.DateTime and System.TimeSpan form at the same time.Text Analytics: Project summary will be added later.Tibia Universal MC .NET: This tool was written as an attempt to port the [url:http://code.google.com/p/xenomc/] to the .NET Framework.VRFramework: VR Framework is a system for creating a virtual reality games using your PC, and Android smart phone, and Unity 3D.

    Read the article

  • Webfarm and IIS configuration tips/tricks

    - by steve schofield
    I was recently talking with some good friends about performance tips and what an IIS administrator could do on the server side. I also see this question from time to time in the forums @ http://forums.iis.net. Of course, you should test individual settings in a controlled environment while performing load testing before just implementing them on your production farm.
- Enable IIS compression (both static and dynamic if possible, and set it to 9). If you are running IIS 6, check out this article by Scott Forsyth.
- Run FRT (Failed Request Tracing) for long-running pages.
- Use SQL connection pooling in code.
- Look at using PAL with performance counters ( http://blogs.iis.net/ganekar/archive/2009/08/12/pal-performance-analyzer-with-iis.aspx ).
- Look at load testing using the Visual Studio load testing tools.
- Use Log Parser to find long-running pages.
- Look at CPU, memory and disk counters, and make sure the server has enough resources.
- Use the same machineKey across all nodes in the farm.
- Weigh local content vs. UNC-based content on a single server (see my UNC tag for great posts).
- Set content expiration, and keep ETags the same across all web farm nodes.
- Disable the Scalable Networking Pack.
- Use YSlow or the developer tools in Chrome to help measure the client experience improvements.
Additionally, some basic counters for measuring applications are Concurrent Connections, Requests per Second, and Requests Queued. I would also recommend checking out Chapter 17 in the IIS 7 Resource Kit - it was one of the chapters I authored. :) I strongly suggest testing one change at a time to see how it helps improve your performance. Hopefully this post provides a few options to review in your environment. Cheers, Steve Schofield, Microsoft MVP - IIS
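On the Log Parser point, a query along these lines will surface the slowest URLs by average time-taken - treat it as a sketch rather than a drop-in command, since the log path is a placeholder and the field names assume the default IIS W3C logging fields, so adjust both for your environment:

logparser "SELECT TOP 10 cs-uri-stem, COUNT(*) AS Hits, AVG(time-taken) AS AvgTime FROM C:\inetpub\logs\LogFiles\W3SVC1\*.log GROUP BY cs-uri-stem ORDER BY AvgTime DESC" -i:IISW3C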

    Read the article

  • Recover Data Like a Forensics Expert Using an Ubuntu Live CD

    - by Trevor Bekolay
    There are lots of utilities to recover deleted files, but what if you can't boot up your computer, or the whole drive has been formatted? We'll show you some tools that will dig deep and recover the most elusive deleted files, or even whole hard drive partitions. We've shown you simple ways to recover accidentally deleted files, even a simple method that can be done from an Ubuntu Live CD, but for hard disks that have been heavily corrupted, those methods aren't going to cut it. In this article, we'll examine four tools that can recover data from the most messed up hard drives, regardless of whether they were formatted for a Windows, Linux, or Mac computer, or even if the partition table is wiped out entirely. Note: These tools cannot recover data that has been overwritten on a hard disk. Whether a deleted file has been overwritten depends on many factors - the quicker you realize that you want to recover a file, the more likely you will be able to do so.
Our setup
To show these tools, we've set up a small 1 GB hard drive, with half of the space partitioned as ext2, a file system used in Linux, and half the space partitioned as FAT32, a file system used in older Windows systems. We stored ten random pictures on each partition. We then wiped the partition table from the hard drive by deleting the partitions in GParted. Is our data lost forever?
Installing the tools
All of the tools we're going to use are in Ubuntu's universe repository. To enable the repository, open Synaptic Package Manager by clicking on System in the top-left, then Administration > Synaptic Package Manager. Click on Settings > Repositories and add a check in the box labelled "Community-maintained Open Source software (universe)". Click Close, and then in the main Synaptic Package Manager window, click the Reload button. Once the package list has reloaded, and the search index rebuilt, search for and mark for installation one or all of the following packages: testdisk, foremost, and scalpel. The testdisk package includes TestDisk, which can recover lost partitions and repair boot sectors, and PhotoRec, which can recover many different types of files from tons of different file systems. Foremost, originally developed by the US Air Force Office of Special Investigations, recovers files based on their headers and other internal structures; it operates on hard drives or drive image files generated by various tools. Finally, scalpel performs the same functions as foremost, but is focused on enhanced performance and lower memory usage. Scalpel may run better if you have an older machine with less RAM.
Recover hard drive partitions
If you can't mount your hard drive, then its partition table might be corrupted. Before you start trying to recover your important files, it may be possible to recover one or more partitions on your drive, recovering all of your files with one step. TestDisk is the tool for the job. Start it by opening a terminal (Applications > Accessories > Terminal) and typing in: sudo testdisk
If you'd like, you can create a log file, though it won't affect how much data you recover. Once you make your choice, you're greeted with a list of the storage media on your machine. You should be able to identify the hard drive you want to recover partitions from by its size and label. TestDisk asks you to select the type of partition table to search for. In most cases (ext2/3, NTFS, FAT32, etc.) you should select Intel and press Enter. Highlight Analyse and press Enter.
In our case, our small hard drive has previously been formatted as NTFS. Amazingly, TestDisk finds this partition, though it is unable to recover it. It also finds the two partitions we just deleted. We are able to change their attributes, or add more partitions, but we'll just recover them by pressing Enter. If TestDisk hasn't found all of your partitions, you can try doing a deeper search by selecting that option with the left and right arrow keys. We only had these two partitions, so we'll recover them by selecting Write and pressing Enter. TestDisk informs us that we will have to reboot. Note: If your Ubuntu Live CD is not persistent, then when you reboot you will have to reinstall any tools that you installed earlier. After restarting, both of our partitions are back to their original states, pictures and all.
Recover files of certain types
For the following examples, we deleted the 10 pictures from both partitions and then reformatted them.
PhotoRec
Of the three tools we'll show, PhotoRec is the most user-friendly, despite being a console-based utility. To start recovering files, open a terminal (Applications > Accessories > Terminal) and type in: sudo photorec
To begin, you are asked to select a storage device to search. You should be able to identify the right device by its size and label. Select the right device, and then hit Enter. PhotoRec asks you to select the type of partition to search. In most cases (ext2/3, NTFS, FAT, etc.) you should select Intel and press Enter. You are given a list of the partitions on your selected hard drive. If you want to recover all of the files on a partition, then select Search and hit Enter. However, this process can be very slow, and in our case we only want to search for picture files, so instead we use the right arrow key to select File Opt and press Enter. PhotoRec can recover many different types of files, and deselecting each one would take a long time. Instead, we press "s" to clear all of the selections, and then find the appropriate file types - jpg, gif, and png - and select them by pressing the right arrow key. Once we've selected these three, we press "b" to save these selections. Press Enter to return to the list of hard drive partitions. We want to search both of our partitions, so we highlight "No partition" and "Search" and then press Enter. PhotoRec prompts for a location to store the recovered files. If you have a different healthy hard drive, then we recommend storing the recovered files there. Since we're not recovering very much, we'll store it on the Ubuntu Live CD's desktop. Note: Do not recover files to the hard drive you're recovering from. PhotoRec is able to recover the 20 pictures from the partitions on our hard drive! A quick look in the recup_dir.1 directory that it creates confirms that PhotoRec has recovered all of our pictures, save for the file names.
Foremost
Foremost is a command-line program with no interactive interface like PhotoRec's, but it offers a number of command-line options to get as much data out of your hard drive as possible. For a full list of options that can be tweaked via the command line, open up a terminal (Applications > Accessories > Terminal) and type in: foremost -h
In our case, the command line options that we are going to use are:
-t, a comma-separated list of types of files to search for. In our case, this is "jpeg,png,gif".
-v, enabling verbose mode, giving us more information about what foremost is doing.
-o, the output folder to store recovered files in.
In our case, we created a directory called "foremost" on the desktop.
-i, the input that will be searched for files. This can be a disk image in several different formats; however, we will use a hard disk, /dev/sda.
Our foremost invocation is: sudo foremost -t jpeg,png,gif -o foremost -v -i /dev/sda
Your invocation will differ depending on what you're searching for and where you're searching for it. Foremost is able to recover 17 of the 20 files stored on the hard drive. Looking at the files, we can confirm that these files were recovered relatively well, though we can see some errors in the thumbnail for 00622449.jpg. Part of this may be due to the ext2 filesystem. Foremost recommends using the -d command-line option for Linux file systems like ext2. We'll run foremost again, adding the -d command-line option to our invocation: sudo foremost -t jpeg,png,gif -d -o foremost -v -i /dev/sda
This time, foremost is able to recover all 20 images! A final look at the pictures reveals that they were recovered with no problems.
Scalpel
Scalpel is another powerful program that, like Foremost, is heavily configurable. Unlike Foremost, Scalpel requires you to edit a configuration file before attempting any data recovery. Any text editor will do, but we'll use gedit to change the configuration file. In a terminal window (Applications > Accessories > Terminal), type in: sudo gedit /etc/scalpel/scalpel.conf
scalpel.conf contains information about a number of different file types. Scroll through this file and uncomment lines that start with a file type that you want to recover (i.e. remove the "#" character at the start of those lines). Save the file and close it. Return to the terminal window. Scalpel also has a ton of command-line options that can help you search quickly and effectively; however, we'll just define the input device (/dev/sda) and the output folder (a folder called "scalpel" that we created on the desktop). Our invocation is: sudo scalpel /dev/sda -o scalpel
Scalpel is able to recover 18 of our 20 files. A quick look at the files Scalpel recovered reveals that most of our files were recovered successfully, though there were some problems (e.g. 00000012.jpg).
Conclusion
In our quick toy example, TestDisk was able to recover two deleted partitions, and PhotoRec and Foremost were able to recover all 20 deleted images. Scalpel recovered most of the files, but it's very likely that playing with its command-line options would have enabled us to recover all 20 images. These tools are lifesavers when something goes wrong with your hard drive. If your data is on the hard drive somewhere, then one of these tools will track it down!

    Read the article

  • Optimize Images Using the ASP.NET Sprite and Image Optimization Framework

    The HTML markup of a web page includes the page's textual content, semantic and styling information, and, typically, several references to external resources. External resources are content that is part of a web page but separate from the page's markup - things like images, style sheets, script files, Flash videos, and so on. When a browser requests a web page it starts by downloading its HTML. Next, it scans the downloaded HTML for external resources and starts downloading those. A page with many external resources usually takes longer to completely load than a page with fewer external resources because there is an overhead associated with downloading each external resource. For starters, each external resource requires the browser to make an HTTP request to retrieve the resource. What's more, browsers have a limit as to how many HTTP requests they will make in parallel. For these reasons, a common technique for improving a page's load time is to consolidate external resources in a way that reduces the number of HTTP requests that must be made by the browser to load the page in its entirety. This article examines the free and open-source ASP.NET Sprite and Image Optimization Framework, which is a project developed by Microsoft for improving a web page's load time by consolidating images into a sprite or by using inline, base-64 encoded images. In a nutshell, this framework makes it easy to implement practices that will improve the load time for a web page that displays several images. Read on to learn more! Read More >
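To make the inlining idea concrete, here is a hand-written illustration of a base-64 encoded image rather than the framework's own syntax (the payload below is deliberately truncated to keep it short):

<img src="data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAAAEAAAAB..." alt="icon" />

Because the image bytes travel inside the HTML itself, no extra HTTP request is needed for the image; the trade-off is that the data is not cached as a separate resource, which is why the approach suits small, rarely-changing images such as icons.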

    Read the article

  • Node remains in commissioning status

    - by Vinitha
    I have been trying to set up Ubuntu Cloud 12.04. I'm kind of new to MAAS and Ubuntu. Here is what I followed. I installed the MAAS server using the steps provided in https://wiki.ubuntu.com/ServerTeam/MAAS For the node, I installed the Ubuntu 12.04 Server image on a USB stick. Then I restarted the node and opted to enlist the node via boot media, with PXE. Once the process was done, the node was powered off as expected. I manually powered on the node, as my node is not PXE enabled. Result - no node was visible in the MAAS UI. Since step 2 didn't work, I added the node via the maas-cli command. After executing this command, the node showed up in my MAAS UI, but the status has remained "Commissioning" for a long time. Then I executed "maas-cli maas nodes check-commissioning " and I got "Unrecognised signature: POST check_commissioning". I'm not sure where the error is. Could someone please help me solve this issue? I checked the following log files but found no errors related to commissioning (pserv.log / maas.log / celery.log / celery-region.log). I found this entry in my auth.log: "Nov 16 18:20:34 ubuntuCloud sshd[4222]: Did not receive identification string from xxx.xx.xx.x" - not sure if it indicates anything, as the IP mentioned is neither the node's nor the MAAS server's. I also verified the time on the server and node using the date command (at one instance the times were: Server: Fri Nov 16 18:15:51 IST 2012 and Node: Fri Nov 16 18:15:43 IST 2012). Not sure if 'date' is the right command to check the time. I have also checked maas_local_settings.py for the MAAS URL. I'm not sure which logs need to be verified. Is there any log that can be checked on the node? Thanks Vinitha

    Read the article

  • Advice on career development [closed]

    - by Mike Young
    I am an amateur programmer working at a start-up. I didn't try coding at college. I've been working for 2 months now on web development and I'm satisfied with my progress; my project will go live soon. I work on the front end and my colleague integrates my work into his. So I decided to learn back-end technologies so that I would be able to work on a project from scratch and help my company build up. I recently got to know about the technologies used by Facebook and was fascinated to learn and work on them, and to keep motivating myself. Now I want to build a product from scratch, get good at database concepts and a language like Ruby or Python, and get to know load balancing, dynamic requests from servers, hosting a website, real-time communication, secure login, implementing a sophisticated search feature for the app, and using Git by the end of the project. I would like to become a full-stack developer in due course and learn everything in detail. I've decided not to set myself a time frame and to learn every concept in depth. I would like to use both a relational and a non-relational database for the project. I have no experience except some beginner knowledge of HTML5, CSS and JavaScript. I would like some advice on how to proceed step by step: what flow to follow, what technologies to pick up, and a project idea which includes all of the above.

    Read the article

  • Post build events using ROBOCOPY instead of XCOPY

    - by Vizioz Limited
    I don't know about you, but for a long time I have used XCOPY statements in my Visual Studio post build events to copy my Umbraco files from the project folders to the local version of the website associated with the project. For the last few months we have been building a website framework for a client, who has subsequently sold the site to 5 clients, each with a different skin and some variations in their functional requirements. So, we now have a single source solution that builds and copies the site files into 5 separate local websites, which enables us to easily test them all. What we found was that this was starting to slow down our build process, which was reaching 30-45 seconds on a high spec quad core machine (and slower on others). Today I asked Colin to create separate Solution Configurations within Visual Studio so that while we were developing we could target a single site, and when we wanted to test all sites, we could target "ALL" and the post build script would then copy the files to all sites. This worked well, and with a couple of other optimisations, our build was now taking about 10 seconds for a single site. Then Colin came across ROBOCOPY and suggested that maybe this would be a suitable alternative to XCOPY. Well, I had not heard of it... (shock horror some of you shout; some, I am sure, are like me and also wondering what it is!) ROBOCOPY is new in Windows Vista & Windows 7 (you can also download it for XP & Windows 2003) and it has a lot of additional features. The switches that were most interesting to us were:
- /MIR = Mirror a folder tree
- /XD = Exclude Directories
- /NP = No Progress (i.e. it does not give you a chart of its results, which just fills up your Output window!)
So, we set about implementing ROBOCOPY. We decided to use the /MIR switch on all folders that we knew were always stored in our project folders:
- images
- css
- masterpages
- xslt
And for other files we just used the straight robocopy functionality. We also decided to exclude all the .svn directories using the /XD switch, and finally we added the /NP switch as mentioned above. The beauty of all of this is the /MIR functionality, as it means that only files that have changed will be copied across, which greatly speeds up the process - especially on the images folders, which previously copied across on every build. Now, if we add a new image to the project it will be copied across automatically and then never again, unless we change it of course! The build time now for all sites is approximately 4 seconds, and for a single site, 2 seconds. I would highly recommend taking the time to make the same optimisations to your build process if you have not done so already.
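To give a concrete feel for it, a post build event along these lines mirrors project folders to a local site while skipping Subversion folders - the target path here is a made-up example and your folder names will differ, but the switches are the ones discussed above:

robocopy "$(ProjectDir)images" "C:\inetpub\wwwroot\MySite\images" /MIR /XD .svn /NP
robocopy "$(ProjectDir)css" "C:\inetpub\wwwroot\MySite\css" /MIR /XD .svn /NP

One thing to watch out for: ROBOCOPY returns a non-zero exit code (1) when it successfully copies files, so if Visual Studio starts reporting the post build event as failed even though the copy worked, you may need to map exit codes below 8 to success at the end of the event.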

    Read the article

  • Project Euler 51: Ruby

    - by Ben Griswold
    In my attempt to learn Ruby out in the open, here’s my solution for Project Euler Problem 51.  I know I started back up with Python this week, but I have three more Ruby solutions in the hopper and I wanted to share. For the record, Project Euler 51 was the second hardest Euler problem for me thus far. Yeah. As always, any feedback is welcome.

# Euler 51
# http://projecteuler.net/index.php?section=problems&id=51
# By replacing the 1st digit of *3, it turns out that six
# of the nine possible values: 13, 23, 43, 53, 73, and 83,
# are all prime.
#
# By replacing the 3rd and 4th digits of 56**3 with the
# same digit, this 5-digit number is the first example
# having seven primes among the ten generated numbers,
# yielding the family: 56003, 56113, 56333, 56443,
# 56663, 56773, and 56993. Consequently 56003, being the
# first member of this family, is the smallest prime with
# this property.
#
# Find the smallest prime which, by replacing part of the
# number (not necessarily adjacent digits) with the same
# digit, is part of an eight prime value family.
timer_start = Time.now

require 'mathn'

def eight_prime_family(prime)
  0.upto(9) do |repeating_number|
    # Assume mask of 3 or more repeating numbers
    if prime.count(repeating_number.to_s) >= 3
      ctr = 1
      (repeating_number + 1).upto(9) do |replacement_number|
        family_candidate = prime.gsub(repeating_number.to_s, replacement_number.to_s)
        ctr += 1 if (family_candidate.to_i).prime?
      end
      return true if ctr >= 8
    end
  end
  false
end

# Wanted to loop through primes using Prime.each
# but it took too long to get to the starting value.
n = 9999
while n += 2
  next if !n.prime?
  break if eight_prime_family(n.to_s)
end

puts n
puts "Elapsed Time: #{(Time.now - timer_start)*1000} milliseconds"

    Read the article

  • Project Euler 53: Ruby

    - by Ben Griswold
    In my attempt to learn Ruby out in the open, here’s my solution for Project Euler Problem 53.  I first attempted to solve this problem using the Ruby combinations libraries. That didn’t work out so well. With a second look at the problem, the provided formula ended up being just the thing to solve the problem effectively. As always, any feedback is welcome.

# Euler 53
# http://projecteuler.net/index.php?section=problems&id=53
# There are exactly ten ways of selecting three from five,
# 12345: 123, 124, 125, 134, 135, 145, 234, 235, 245,
# and 345
# In combinatorics, we use the notation, 5C3 = 10.
# In general,
#
# nCr = n! / (r!(n-r)!), where r <= n,
# n! = n x (n-1) x ... x 3 x 2 x 1, and 0! = 1.
#
# It is not until n = 23, that a value exceeds
# one-million: 23C10 = 1144066.
# How many, not necessarily distinct, values of nCr,
# for 1 <= n <= 100, are greater than one-million
timer_start = Time.now

# There's no factorial method in Ruby, I guess.
class Integer
  # http://rosettacode.org/wiki/Factorial#Ruby
  def factorial
    (1..self).reduce(1, :*)
  end
end

def combinations(n, r)
  n.factorial / (r.factorial * (n-r).factorial)
end

answer = 0
100.downto(3) do |c|
  (2).upto(c-1) { |r| answer += 1 if combinations(c, r) > 1_000_000 }
end

puts answer
puts "Elapsed Time: #{(Time.now - timer_start)*1000} milliseconds"

    Read the article

  • Lightning talk: Coderetreat

    - by Michael Williamson
    In the spirit of trying to encourage more deliberate practice amongst coders in Red Gate, Lauri Pesonen had the idea of running a coderetreat in Red Gate. Lauri and I ran the first one a few weeks ago: given that neither of us had even been to a coderetreat before, let alone run one, I think it turned out quite well. The participants gave positive feedback, saying that they enjoyed the day, wrote some thought-provoking code and would do it again. Sam Blackburn was one of the attendees, and gave a lightning talk to the other developers in one of our regular lightning talk sessions. In case you can't watch the video, I've transcribed the talk below, although I'd recommend watching the video if you can - I didn't have much time to do the transcribing!
So, what is a coderetreat? It's not just something in Red Gate: there's a website and everything, although it's not a very big website. It calls itself a community network. The basic ideas behind coderetreat are: you've got one day, and you split it into one-hour sections. You spend three quarters of each hour coding, and do a little retrospective at the end. You're supposed to start fresh each session; we were told to delete our code after every session. We were in pairs, swapping after each session, and we did the same task every time. In fact, Conway's Game of Life is the only task mentioned anywhere that I can find for coderetreat, so I don't know what we'll do next time, or if we're meant to do the same thing again. There are some guiding principles, which felt to us like restrictions: you have to code in crazy ways to encourage better code. The final thing is that it's supposed to be free for outsiders to join. It's meant to be a kind of networking thing, where you link up with people from other companies.
We had a pilot day with Michael and Lauri. Since it was basically the first time any of us had done anything like this, everybody was from Red Gate; we didn't chat to anybody else for the initial one. The task was Conway's Game of Life, which most of you have probably heard of; all but one of us knew about it when we did the coderetreat. I won't go into the details of what it is, but it felt like the right size of task: basically one or two groups actually produced something working by the end of the day, and of course that doesn't mean it's necessarily a day's work to produce that, because we were starting again every hour. The task really drives you more than trying to create good code, I found. It was really tempting to try and get it working rather than stick to the rules. But it's really good to stop and try again, because there are so many what-ifs when you've finished writing something: "what if I'd done it this way?". You can answer all those questions at a coderetreat because it's not about getting a product out the door, it's about learning and playing with ideas.
So we had all these different practices we were trying. I'll try and go through most of these.
Single responsibility is this idea that everything should do just one thing. It was the very first session, and we were still trying to figure out how you go about the Game of Life, so by the end of forty-five minutes we hadn't produced very much for that first session. We were still thinking, "Do we start with a board? How do we represent all these squares? It can be infinitely big, help, this is getting really difficult!". So, most of us didn't really get anywhere on the first one. Although it was interesting that some people started with the board, one group started with the FateDecider class that decides whether things live or die. A sort of god class, but in a good way. They managed to implement all of the rules without even defining how the squares were arranged or anything like that.
Another thing we tried was TDD (test-driven development). I'm sure most of you know what TDD is:
- Write a test, watch it fail for the right reason
- Write code to pass the test, watch it pass
- Refactor, check the test still passes
- Repeat!
It basically worked, we were able to produce code, but we often found the tests defined the direction that code went, which is obviously the idea of TDD. But you tend to find that by the time you've even written your first assertion, which is supposed to be the very first thing you write (because you write your tests backwards from the assertions back to the initial conditions), you've already constrained the logic of the code in some way. You then get to this situation of, "Well, we actually want to go in a slightly different direction. Can we do this?". Can we write tests that don't constrain the architecture?
Wrapping up all primitives: it's kind of turtles all the way down. We had a Size, which has a Width and Height, which both derive from Dimension. You've got pages of code before you've even done anything.
No getters and setters (use tell, don't ask instead): mocks and stubs for tests are required if you want to assert that your results are what you think they should be. You can't just check the internal state of the code. People found that really challenging, and it made them think in a different way, which I think is really good.
Not having mutable state: that was kind of confusing, because we weren't quite sure what fitted within that rule and what didn't, and I think we were trying too hard to follow the rule rather than the guideline.
No if-statements: you're supposed to use polymorphism instead, but polymorphism still requires a factory with conditional behaviour. We did something really crazy to get around this:

public T If(bool condition, Func<T> left, Func<T> right)
{
    var dict = new Dictionary<bool, Func<T>> {{true, left}, {false, right}};
    return dict[condition].Invoke();
}

That is not really polymorphism, is it?
For-loops: you can always replace a for-loop with recursion, but it doesn't tend to make it any more readable unless it's the kind of task that really lends itself to that. So it was interesting, it was good practice, but it wouldn't make things easier unless it's the kind of tree-structure algorithm where that would help.
Having a limit on the number of levels of indentation: again, I think it does produce very nice, clean code, but it wasn't actually a challenge because you just extract methods. That's quite a useful thing, because you can apply it to real code and say, "Okay, should this method really be going crazy like this?"
No talking: we hated that. It's like there's two of you at a computer, and one of you is doing the typing - what does the other guy do if they're not allowed to talk? The answer is TDD ping-pong: one person writes the tests, and then the other person writes the code to pass the test. That creates communication without actually having to have a discussion about things, which is kind of cool.
No code comments: just makes no difference to anything. It's a forty-five minute exercise, so what are you going to put comments in code for?
Finally, this is my fault. I discovered an entertaining way of doing the calculation that was kind of cool (using convolutions over the state of the board). Unfortunately, it turns out to be really hard to implement in C#, so we didn't even manage to work out how to do that convolution in C#. It's trivial in some high-level languages, but you need something matrix-orientated for it to really work.
That's most of it, really. The thoughts that people went away with: we put down our answers to questions like "What have you learnt?", "What surprised you?" and "How are you going to do things differently?", and most people said redoing the problem is really, really good for understanding it properly. People hate having a massive legacy codebase that they can't change, so being able to attack something three different ways in an environment where the end product isn't important is something people really enjoyed. Pair-programming: people also said that they wanted to do more of that, especially with TDD ping-pong, where you write the test and somebody else writes the code. Various people thought different things about immutables, but most people thought they were good - they promote functional programming. And TDD, people found really hard; "tell, don't ask" people found really, really hard and really, really, really hard to do well. And the recursion just made things trickier to debug. But most people agreed that coderetreats are really cool, and we should do more of them.
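To make the contrast concrete, here is a rough sketch of what pushing the live/die decision through actual polymorphism might look like - this is purely an illustration, not code that any pair wrote on the day, and notice that the conditionals simply move inside the state classes, which is exactly the factory problem Sam mentions:

// Each cell state knows its own survival rules, so the caller never branches;
// the virtual call does the "if" for us.
public abstract class CellState
{
    public abstract CellState Next(int liveNeighbours);
}

public class Alive : CellState
{
    public override CellState Next(int liveNeighbours)
    {
        // Survives with 2 or 3 live neighbours, otherwise dies.
        return (liveNeighbours == 2 || liveNeighbours == 3)
            ? (CellState)this
            : new Dead();
    }
}

public class Dead : CellState
{
    public override CellState Next(int liveNeighbours)
    {
        // A dead cell with exactly 3 live neighbours comes to life.
        return liveNeighbours == 3 ? (CellState)new Alive() : this;
    }
}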

    Read the article

  • 2D character controller in Unity (trying to get old-school platformers back)

    - by Notbad
    These days I'm trying to create a 2D character controller with Unity (using physics). I'm fairly new to physics engines and it is really hard to get the control feel I'm looking for. I would be really happy if anyone could suggest a solution for a problem I'm having. This is my FixedUpdate right now:

public void FixedUpdate()
{
    Vector3 v = new Vector3(0, -10000 * Time.fixedDeltaTime, 0);
    _body.AddForce(v);
    v.y = 0;
    if (state(MovementState.Left))
    {
        v.x = -_walkSpeed * Time.fixedDeltaTime + v.x;
        if (Mathf.Abs(v.x) > _maxWalkSpeed)
            v.x = -_maxWalkSpeed;
    }
    else if (state(MovementState.Right))
    {
        v.x = _walkSpeed * Time.fixedDeltaTime + v.x;
        if (Mathf.Abs(v.x) > _maxWalkSpeed)
            v.x = _maxWalkSpeed;
    }
    _body.velocity = v;
    Debug.Log("Velocity: " + _body.velocity);
}

I'm trying here to just move the rigid body by applying gravity and a linear force for left and right. I have set up a physics material with no bounce, 0 friction when moving and 1 friction when standing still. The main problem is that I have colliders with slopes, and the velocity changes between going up the slope (slower), going down the slope (faster) and walking on a flat collider (normal). How could this be fixed? As you can see, I'm always applying the same velocity on the x axis. For the player, I have it set up with a sphere at the feet position, which is the rigidbody I'm applying forces to. Any other tips that could make my life easier with this are welcome :). P.D. While coming home I noticed I could solve this by applying a constant force parallel to the surface the player is walking on, but I don't know if it is the best method.
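Regarding the idea in the P.D., a rough sketch of that approach would be to find the surface normal under the feet sphere and move parallel to the surface instead of straight along the x axis. This reuses _body and _maxWalkSpeed from the code above; the ray length is a guess and there is no layer filtering or air-control handling, so treat it as a starting point rather than a finished controller:

// Move at a constant ground speed by projecting the desired direction
// onto the surface under the feet, so slopes don't change the speed.
private void MoveAlongSurface(float direction) // -1 for left, +1 for right
{
    Vector3 moveDir = new Vector3(direction, 0, 0);

    RaycastHit hit;
    // Short ray straight down from the feet sphere to find the ground normal.
    if (Physics.Raycast(_body.position, Vector3.down, out hit, 0.6f))
    {
        // Remove the component of the movement that points into the surface.
        moveDir -= Vector3.Dot(moveDir, hit.normal) * hit.normal;
        moveDir.Normalize();
    }

    Vector3 v = moveDir * _maxWalkSpeed;
    v.y += _body.velocity.y; // keep whatever vertical speed gravity has produced
    _body.velocity = v;
}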

    Read the article

  • Using Artificial Intelligence (AI) to predict Stock Prices

    - by akaphenom
    Given a set of data very similar to the Motley Fool CAPS system, where individual users enter BUY and SELL recommendations on various equities: what I would like to do is take each recommendation and somehow rate it (1-5) as to whether it was a good predictor (i.e. correlation coefficient = 1) of the future stock price (or EPS or whatever), a horrible predictor (i.e. correlation coefficient = -1), or somewhere in between. Each recommendation is tagged to a particular user, so that can be tracked over time. I can also track market direction (bullish / bearish) based off of something like the S&P 500 price. The components I think would make sense in the model are: user direction (long/short), market direction, and sector of the stock. The thought is that some users are better in bull markets than bear (and vice versa), and some are better at shorts than longs - and then a combination of the above. I can automatically tag the market direction and sector (based off the market at the time and the equity being recommended). The thought is that I could present a series of screens that allow me to rank each individual recommendation by displaying the available data: absolute, market and sector outperformance for a specific time period out. I would follow a detailed list for ranking the stocks so that the ranking is as objective as possible. My assumption is that a single user is right no more than 57% of the time - but who knows. I could load the system and say "Let's rank the recommendation as a predictor of stock value 90 days forward", and that would represent a very explicit set of rankings. NOW here is the crux - I want to create some sort of machine learning algorithm that can identify patterns over time, so that as recommendations stream into the application we maintain a ranking of that stock (i.e. similar to a correlation coefficient) as to the likelihood that that recommendation (in addition to the past series of recommendations) will affect the price. Now here is the super crux: I have never taken an AI class or read an AI book, never mind anything specific to machine learning. So I am looking for guidance - a sample or description of a similar system I could adapt, places to look for info, or any general help. Or even a push in the right direction to get started... My hope is to implement this with F#, be able to impress my friends with a new skillset in F# and an implementation of machine learning, and potentially end up with something (application / source) I can include in a tech portfolio or blog space. Thank you for any advice in advance.
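As one very small starting point for the "correlation coefficient" style of scoring - written in C# rather than F# purely for illustration, and with the arrays, the 90-day window and the +1/-1 encoding of recommendations being my own assumptions rather than part of the question:

using System;
using System.Linq;

static class RecommendationScoring
{
    // Pearson correlation between a user's recommendation directions
    // (+1 for BUY, -1 for SELL) and the equity's subsequent 90-day returns.
    // A value near +1 suggests a good predictor, near -1 a contrarian indicator.
    public static double PearsonCorrelation(double[] directions, double[] returns)
    {
        double meanX = directions.Average();
        double meanY = returns.Average();
        double cov = 0, varX = 0, varY = 0;

        for (int i = 0; i < directions.Length; i++)
        {
            cov  += (directions[i] - meanX) * (returns[i] - meanY);
            varX += (directions[i] - meanX) * (directions[i] - meanX);
            varY += (returns[i] - meanY) * (returns[i] - meanY);
        }

        // Guard against a user who only ever buys (or a flat return series).
        if (varX == 0 || varY == 0) return 0;
        return cov / Math.Sqrt(varX * varY);
    }
}

A per-user score like this, split out by market direction and sector, could then become the input features for whatever learning algorithm ends up being used.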

    Read the article

  • Are today's general purpose languages at the right level of abstraction?

    - by KeesDijk
    Today Uncle Bob Martin, a genuine hero, showed this video. In this video Bob Martin claims that our programming languages are at the right level for our problems at this time. One of the points I take from this video is that Bob Martin sees us as detail managers and our problems as being at the detail level. This is the first time I have to disagree with Bob Martin, and I was wondering what the people at Programmers think about this. First, there is a difference between MDA and MDE. MDA in itself hasn't worked, and I blame way too much formalisation at a level where you can't formalise these kinds of problems. MDE and MDD are still trying to prove themselves and in my mind show great promise - e.g. look at MetaEdit. The detail still needs to be managed, in my mind, but you do so in one place (framework or generators) instead of in multiple places. Right for our kind of problems? I think it depends on what problems you look at. Do the current programming languages keep up with the current demands on time to market? Are they good at bridging the business-IT communication gap? So what do you think?

    Read the article

< Previous Page | 436 437 438 439 440 441 442 443 444 445 446 447  | Next Page >