Search Results

Search found 16499 results on 660 pages for 'off rhoden'.


  • Parallelism in .NET – Part 2, Simple Imperative Data Parallelism

    - by Reed
    In my discussion of Decomposition of the problem space, I mentioned that Data Decomposition is often the simplest abstraction to use when trying to parallelize a routine.  If a problem can be decomposed based on the data, we will often want to use what MSDN refers to as Data Parallelism as our strategy for implementing our routine.  The Task Parallel Library in .NET 4 makes implementing Data Parallelism, for most cases, very simple. Data Parallelism is the main technique we use to parallelize a routine which can be decomposed based on data.  Data Parallelism refers to taking a single collection of data, and having a single operation be performed concurrently on elements in the collection.  One side note here: Data Parallelism is also sometimes referred to as the Loop Parallelism Pattern or Loop-level Parallelism.  In general, for this series, I will try to use the terminology used in the MSDN documentation for the Task Parallel Library.  This should make it easier to investigate these topics in more detail.

    Once we’ve determined we have a problem that, potentially, can be decomposed based on data, implementation using Data Parallelism in the TPL is quite simple.  Let’s take our example from the Data Decomposition discussion – a simple contrast stretching filter.  Here, we have a collection of data (pixels), and we need to run a simple operation on each pixel.  Once we know the minimum and maximum values, we most likely would have some simple code like the following:

        for (int row = 0; row < pixelData.GetUpperBound(0); ++row)
        {
            for (int col = 0; col < pixelData.GetUpperBound(1); ++col)
            {
                pixelData[row, col] = AdjustContrast(pixelData[row, col], minPixel, maxPixel);
            }
        }

    This simple routine loops through a two-dimensional array of pixelData, and calls the AdjustContrast routine on each pixel. As I mentioned, when you’re decomposing a problem space, most iteration statements are potential candidates for data decomposition.  Here, we’re using two for loops – one looping through rows in the image, and a second nested loop iterating through the columns.  We then perform one independent operation on each element based on those loop positions. This is a prime candidate – we have no shared data, no dependencies on anything but the pixel which we want to change.  Since we’re using a for loop, we can easily parallelize this using the Parallel.For method in the TPL:

        Parallel.For(0, pixelData.GetUpperBound(0), row =>
        {
            for (int col = 0; col < pixelData.GetUpperBound(1); ++col)
            {
                pixelData[row, col] = AdjustContrast(pixelData[row, col], minPixel, maxPixel);
            }
        });

    Here, by simply changing our first for loop to a call to Parallel.For, we can parallelize this portion of our routine.  Parallel.For works, as do many methods in the TPL, by creating a delegate and using it as an argument to a method.
    In this case, our for loop iteration block becomes a delegate created via a lambda expression.  This lets you write code that, superficially, looks similar to the familiar for loop, but functions quite differently at runtime. We could easily do this to our second for loop as well, but that may not be a good idea.  There is a balance to be struck when writing parallel code.  We want to have enough work items to keep all of our processors busy, but the more we partition our data, the more overhead we introduce.  In this case, we have an image of data – most likely hundreds of pixels in both dimensions.  By just parallelizing our first loop, each row of pixels can be run as a single task.  With hundreds of rows of data, we are providing fine enough granularity to keep all of our processors busy. If we parallelize both loops, we’re potentially creating millions of independent tasks.  This introduces extra overhead with no extra gain, and will actually reduce our overall performance.  This leads to my first guideline when writing parallel code: Partition your problem into enough tasks to keep each processor busy throughout the operation, but not more than necessary to keep each processor busy.

    Also note that I parallelized the outer loop.  I could have just as easily partitioned the inner loop.  However, partitioning the inner loop would have led to many more discrete work items, each with a smaller amount of work (operate on one pixel instead of one row of pixels).  My second guideline when writing parallel code reflects this: Partition your problem in a way to place the most work possible into each task. This typically means, in practice, that you will want to parallelize the routine at the “highest” point possible in the routine, typically the outermost loop.  If you’re looking at parallelizing methods which call other methods, you’ll want to try to partition your work high up in the stack – as you get into lower-level methods, the performance impact of parallelizing your routines may not overcome the overhead introduced.

    Parallel.For works great for situations where we know the number of elements we’re going to process in advance.  If we’re iterating through an IList<T> or an array, this is a typical approach.  However, there are other iteration statements common in C#.  In many situations, we’ll use foreach instead of a for loop.  This can be more understandable and easier to read, but also has the advantage of working with collections which only implement IEnumerable<T>, where we do not know the number of elements involved in advance.

    As an example, let’s take the following situation.  Say we have a collection of Customers, and we want to iterate through each customer, check some information about the customer, and if a certain case is met, send an email to the customer and update our instance to reflect this change.  Normally, this might look something like:

        foreach (var customer in customers)
        {
            // Run some process that takes some time...
            DateTime lastContact = theStore.GetLastContact(customer);
            TimeSpan timeSinceContact = DateTime.Now - lastContact;

            // If it's been more than two weeks, send an email, and update...
            if (timeSinceContact.Days > 14)
            {
                theStore.EmailCustomer(customer);
                customer.LastEmailContact = DateTime.Now;
            }
        }

    Here, we’re doing a fair amount of work for each customer in our collection, but we don’t know how many customers exist.
    If we assume that theStore.GetLastContact(customer) and theStore.EmailCustomer(customer) are both side-effect free, thread-safe operations, we could parallelize this using Parallel.ForEach:

        Parallel.ForEach(customers, customer =>
        {
            // Run some process that takes some time...
            DateTime lastContact = theStore.GetLastContact(customer);
            TimeSpan timeSinceContact = DateTime.Now - lastContact;

            // If it's been more than two weeks, send an email, and update...
            if (timeSinceContact.Days > 14)
            {
                theStore.EmailCustomer(customer);
                customer.LastEmailContact = DateTime.Now;
            }
        });

    Just like Parallel.For, we rework our loop into a method call accepting a delegate created via a lambda expression.  This keeps our new code very similar to our original iteration statement; however, it will now execute in parallel.  The same guidelines apply with Parallel.ForEach as with Parallel.For.

    The other iteration statements, do and while, do not have direct equivalents in the Task Parallel Library.  These, however, are very easy to implement using Parallel.ForEach and the yield keyword (see the sketch below).

    Most applications can benefit from implementing some form of Data Parallelism.  Iterating through collections and performing “work” is a very common pattern in nearly every application.  When the problem can be decomposed by data, we can often parallelize the workload by merely changing foreach statements to Parallel.ForEach method calls, and for loops to Parallel.For method calls.  Any time your program operates on a collection, and does a set of work on each item in the collection where that work is not dependent on other information, you very likely have an opportunity to parallelize your routine.
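    As a quick illustration of that point about do and while loops (this sketch is not from the original article, and the ReadLines/ProcessLine names are just placeholders), a while loop can be wrapped in an iterator with the yield keyword and then handed to Parallel.ForEach:

        using System;
        using System.Collections.Generic;
        using System.IO;
        using System.Threading.Tasks;

        class WhileLoopExample
        {
            // Expose a classic while loop as a lazy sequence, one item per iteration.
            static IEnumerable<string> ReadLines(TextReader reader)
            {
                string line;
                while ((line = reader.ReadLine()) != null)
                {
                    yield return line;
                }
            }

            // Placeholder for whatever independent, thread-safe work is done per item.
            static void ProcessLine(string line)
            {
                Console.WriteLine(line.Length);
            }

            static void Main()
            {
                using (TextReader reader = new StringReader("first\nsecond\nthird"))
                {
                    // The while loop still advances one step at a time inside the
                    // iterator; only the per-line work is spread across worker threads.
                    Parallel.ForEach(ReadLines(reader), ProcessLine);
                }
            }
        }

    Because the default partitioner synchronizes access to the enumerator, this is safe even though the underlying reader is not thread-safe; the parallelism comes entirely from running ProcessLine concurrently.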

    Read the article

  • Dual NIC Networking CentOS 5.3 (losing connection)

    - by Toymakerii
    I have a CentOS system with two NICs on it. One is Ethernet, the other is wireless. The Ethernet has an address of 192.168.1.110 and is tied to a LAN. (The LAN provides DHCP and DNS for its domain of 192.168.1.*) The wireless card should have an address of 131.238.. and is tied to the WAN. For some evil reason, I can get the WAN to work for a short amount of time, but then it suddenly dies. I have to turn the connection off and on to get it to work again. Any suggestions?

    Read the article

  • Hard disk with bad clusters

    - by Dan
    I have been trying to back up some files to DVD recently, and the burn process failed saying the CRC check failed for certain files. I then tried to browse to these files in Windows Explorer and my whole machine locks up and I have to reboot. I ran chkdsk without the '/F /R' arguments and it told me I had bad sectors. So I re-ran it with the arguments, and chkdsk fails during the 'Chkdsk is verifying usn journal' stage with this error: Insufficient disk space to fix the usn journal $j data stream. The hard disk is a 300 GB partition on a 400 GB disk, and there is 160 GB of free space on the partition. My OS (Windows 7) is installed on the other partition and is running fine. Any idea how I can fix this, or repair it enough to copy my files off it?

    Read the article

  • Migration of VM from Hyper-V to Hyper-V R2 - Pass through disks

    - by Andrew Gillen
    I am trying to migrate a VM which is using two pass-through disks from a legacy Hyper-V cluster to a new R2 cluster. The migrated VM cannot use the pass-through disks, though. The guest OS (2008 R2) doesn't seem to like the disk and eventually tries to format it instead of mounting it. The migration process I have been using for all my VMs is to export the VM to a new LUN, then add that new LUN to the new cluster, import the VM from it in the Hyper-V console, and then make it highly available. I assumed I could do the same thing and just add the two pass-through disks to the new cluster and then attach them inside Hyper-V. Is there a process I need to follow to migrate pass-through disks that does not involve setting up new LUNs and robocopying the data over?

    Read the article

  • Can the Dell PowerEdge T320 boot from a USB stick in hard drive emulation mode?

    - by Icydog
    I am trying to install FreeNAS on a Dell PowerEdge T320. I've followed the instructions here (http://doc.freenas.org/index.php/Burning_an_IMG_File) to write the .img file to four different USB sticks with dd if=FreeNAS-9.2.1.5-RELEASE-x64.img of=/dev/sde, but I can't boot off of any of them. When I try to boot, I get the following screen: F1 FreeBSD F2 FreeBSD F3 Drive 0 Then it auto-selects F1 and either prints # about once a second forever, or it sits with a blinking cursor doing nothing. Forum posts and the wiki page linked above say to set the USB boot emulation mode to hard disk (as opposed to auto or CD/DVD/floppy), and older Dell PowerEdge models including the T310 have this option in the BIOS settings, but I cannot find it on the T320 with the latest BIOS 2.1.2. I have even tried writing the USB image to a USB hard disk, but with the same lack of success. Has anyone been able to find this USB boot setting on a T320, or been able to boot FreeNAS/FreeBSD from USB in some other way on a T320?

    Read the article

  • Which cloud hosting should I use? [closed]

    - by Alyssa Marie Isk
    Possible Duplicate: How to find web hosting that meets my requirements? If anyone wants to get some real-life karma by giving a tiny non-profit pointers, please advise! I posted a thread about our website with highly variable traffic (www.WorldOceansDay.org). The event is on June 8th, and the traffic goes from 100-400/day in the off-season to about 200,000 trying to access the site at any one time on June 8th. It's a WordPress site hosted on GoDaddy shared hosting, and it predictably crashed horribly. From the internet's feedback, we've decided to move to a cloud server to handle the traffic, but I'm a huge newbie and I don't have very reliable mentorship, so I'm turning to crowdsourcing. We're trying to decide between Amazon Web Services and RackSpace Cloud servers. Our sys admin consultant also suggested GoDaddy's new 4GH but I have had such incredibly bad experiences with GoDaddy thus far that I am hesitant. From what I've read on the internet, RackSpace might be cheaper? Would AWS totally break the bank? We don't have a ton of money to spend on hosting. We'll also be using CloudFlare to cache and serve the pages since they're dynamic. I've found a few AWS & RackSpace calculators but I am not 100% sure how to find those numbers... GoDaddy? Google Analytics? AWS calc is here: http://calculator.s3.amazonaws.com/calc5.html Rackspace is on the right: http://www.rackspace.com/cloud/cloud_hosting_products/servers/pricing/?0a313380 If anyone can help, or through some miracle feels like walking me through this, I would be incredibly appreciative.

    Read the article

  • Windows Server 2003 Terminal Server with installed licenses will not go beyond 2 connections

    - by Erwin Blonk
    I installed the Terminal Server role in Windows Server 2003 Standard 64-bit. Still, only 2 connections are allowed. The License Manager says that there are 10 Device CALs available, which is correct, and that none are given out. For good measure I let the server reboot, to no effect. Before this, there was another server (same Windows, except that it is 32-bit) active as a licensing server. I removed the role first and then added it to the new server. I then removed the Terminal Server Licensing Server component from the old one and added it to the new one. After that, I added the licenses. When that didn't give the required result, I rebooted the new server. Still, the new server, with licenses and all, acts as if it still has the 2-connection RDP limit. The servers are all stand-alone; there is no Active Directory set up. Both servers are in different workgroups.

    Read the article

  • Convert OpenGL code to DirectX

    - by Fredrik Boston Westman
    First of all, this is kind of a follow-up question to @byte56's excellent answer on this question concerning picking algorithms. I'm trying to convert one of his code examples to DirectX 11; however, I have run into some problems (I can pick, but the picking is way off), and I wanted to make sure I had done it right before moving on and checking the rest of my code. I am not that familiar with OpenGL, but I can imagine OpenGL has different coordinate systems and functions that alter how you must implement the code a bit. The getPickRay function in the answer linked is what I'm trying to convert. This is the part of my code that I think is giving me trouble when converting from OpenGL to DirectX, because I'm unsure how their coordinate systems differ from one another:

        PRVecX = ((( 2.0f * mouseX) / ClientWidth ) - 1 ) * tan((viewAngle)/2);
        PRVecY = (1-(( 2.0f * mouseY) / ClientHeight)) * tan((viewAngle)/2);

    Another thing that I am unsure about is this part:

        XMVECTOR worldSpaceNear = XMVector3TransformCoord(cameraSpaceNear, invMat);
        XMVECTOR worldSpaceFar = XMVector3TransformCoord(cameraSpaceFar, invMat);

    A couple of notes: The mouse coordinates are already converted so that the top left corner of the client window would be (0,0) and the bottom right (800,600) (or whatever resolution you would have). The viewAngle is the same angle that I used when setting the camera view with XMMatrixPerspectiveFovLH. I removed the variables aspectRatio and zoomFactor because I assumed that they were related to some specific function of his game.

    To summarize, my questions are: Does the OpenGL coordinate system differ in such a way that the equation in the first of my code examples wouldn't be valid when used in DirectX 11 (with its respective screen coordinate system)? Is the OpenGL method Matrix4f.transform(a, b, c) equivalent to the DirectX method c = XMVector3TransformCoord(b, a) (where a is a matrix and b, c are vectors)? I know that when it comes to matrices, order is important.

    Read the article

  • What is JavaScript, really?

    - by Lord Loh.
    All this started when I was looking for a way to test my webpage for JavaScript conformance, like the W3C HTML Validator. I have not found one yet, so let me know if you know of any... I looked for the official JavaScript page and found ECMAScript. These people have standardized a scripting language (I do not feel like calling it JavaScript anymore!) and called it ECMA-262 (Wikipedia). Their latest work is Edition 5.1. JavaScript was developed by Mozilla Corporation and their last stable version is 1.8.5 (see this), which is based on ECMA's edition 5.1. The Wikipedia page linked mentions dialects. Mozilla's JavaScript 1.8.5 is listed as a dialect along with JScript 9 (IE) and JavaScript (Chrome's V8 [Wiki]) and a lot of others. Am I to understand that JavaScript 1.8.5 is a derivative of ECMA-262 and that SpiderMonkey [Wiki] is an engine that runs it? And that Chrome has its own dialect, and the V8 engine is the program that runs it? With all these dialects based on ECMA-262, what I can no longer understand is "What is JavaScript"? Are there any truly cross-browser scripting languages? Do the various implementers come together to agree on dialect cross-compatibility? Is this effort ECMA?

    Read the article

  • "Not enough memory" error when running Peachtree Accounting 2010 with Logmein

    - by Justin Bowers
    I set up one of my clients with a LogMeIn Pro account recently and everything has been running fine so far. Today, however, my client received an error when they were logged into their work computer remotely and tried to run a report from Peachtree Accounting 2010. The error states: "Not enough memory to run" and she can't print the report. I looked online and found another user that had this same problem with Peachtree 2010 when connected via LogMeIn here. The error doesn't happen if the user runs the report while sitting at her work computer, but it always happens when connected via LogMeIn. I know most of you will push it off as just a LogMeIn issue, which I'm sure it is - but I would like any constructive input from anyone else who may have had the same or similar problem. Thanks, Justin

    Read the article

  • Wired PS/2 Keyboard and Mouse do not work in 12.04 Live CD or after fresh 12.04 install - Foxconn D270S Atom Motherboard

    - by david krajewski
    My wired PS/2 keyboard and mouse do not work in 12.04, either in a fresh install or from the Live CD. A USB keyboard and mouse will work. The PS/2 keyboard and mouse will work using the same hardware and a fresh 10.04 Ubuntu install or a fresh Windows 7 install. The motherboard is a Foxconn D270S Atom-based motherboard. This problem is specific to this motherboard and Ubuntu 12.04. So far I've tried running the following in the fresh 12.04 install:

        sudo apt-get install
        sudo apt-get upgrade
        sudo apt-get dist-upgrade

    After the apt-get upgraded the first time, I now get to the point where there are 0 items to be upgraded. Rebooted and still no PS/2 keyboard/mouse. I've also tried adding the following lines to GRUB at boot time (the PS/2 keyboard works in GRUB):

        acpi=noirq
        acpi=off

    Neither setting made any difference. Even more interestingly, 12.04 recognizes the same PS/2 keyboard and mouse on a separate Gigabyte AM3+ based motherboard. I only have this problem on this particular motherboard with Ubuntu 12.04. Any help would be appreciated. I bought this low-power motherboard specifically for the task of running Ubuntu 12.04 for the next five years...

    Update: I dug out an old PS/2 mouse that does not use a PS/2 to USB adapter but is directly wired for PS/2. Still no PS/2 keyboard or mouse after reboot. Again, I only have this problem with this motherboard and 12.04 Ubuntu. Other motherboards work fine with 12.04 and this motherboard works fine with 10.04.

    Update 2: I installed the 12.04 Server version. The text-based installer recognized the keyboard without issue, but after the first boot into the installed OS, the PS/2 keyboard would no longer respond. I tried installing Gnome on the Server install and the PS/2 mouse and keyboard don't work in Gnome either. I opened bug #995570 for this issue.

    Read the article

  • The Beginner’s Guide to Nano, the Linux Command-Line Text Editor

    - by YatriTrivedi
    New to the Linux command-line? Confused by all of the other advanced text editors? How-To Geek’s got your back with this tutorial to Nano, a simple text-editor that’s very newbie-friendly. When getting used to the command-line, Linux novices are often put off by other, more advanced text editors such as vim and emacs. While they are excellent programs, they do have a bit of a learning curve. Enter Nano, an easy-to-use text editor that proves itself versatile and simple. Nano is installed by default in Ubuntu and many other Linux distros and works well in conjunction with sudo, which is why we love it so much.

    Read the article

  • Best Practices for Renaming, Refactoring, and Breaking Changes with Teams

    - by David in Dakota
    What are some Best Practices for refactoring and renaming in team environments? I bring this up with a few scenarios in mind:

    - If a library that is commonly referenced is refactored to introduce a breaking change to any library or project that references it, e.g. arbitrarily changing the name of a method.
    - If projects are renamed and solutions must be rebuilt with updated references to them.
    - If project structure is changed to be “more organized” by introducing folders and moving existing projects or solutions to new locations.

    Some additional thoughts/questions:

    - Should changes like this matter, or is the resulting pain an indication of structure gone awry?
    - Who should take responsibility for fixing errors related to a breaking change? If a developer makes a breaking change, should they be responsible for going into affected projects and updating them, or should they alert other developers and prompt them to change things?
    - Is this something that can be done on a scheduled basis, or is it something that should be done as frequently as possible? If a refactoring is put off for too long it is increasingly difficult to reconcile, but at the same time you can end up spending your day in one-hour increments fixing the build because of changes happening elsewhere.
    - Is this a matter of a formal communication process, or can it be organic?

    Read the article

  • Sonicwall SSL VPN Login : I need help with a NetExtender initialization error.

    - by jacke672
    I receive the error message: "Server is busy now, please try it later!" after logging into our Sonicwall successfully and attempting to initialize NetExtender for the "virtual office" function. It was set up yesterday and I am able to log in without any issues, but I keep getting hung up on the installation and/or initialization of NetExtender. I have attempted to connect remotely on XP and 7 using both FireFox and IE. I am using a Sonicwall NSA-240 with load balancing active (1 ISP and 2 different connections)- I have tried turning off load balancing and disabling the secondary connection but still receive the same error. I've been in contact with SonicWall support but I haven't heard from them as of yet so I'm asking the Server Fault community in the meantime... Does anyone have any ideas as per what could be the issue? Thanks -Jack

    Read the article

  • when using exchange 2010 it complains of not talking to Information Store service on exchange 2007

    - by ChrisMuench
    I am attempting to do an upgrade in my org from Exchange 2007 to Exchange 2010. I moved all the mailboxes and made sure my certificates were set up in Exchange 2010. I then changed my IP forwarding rules to forward mail to my Exchange 2010 box. I can receive email. I then powered off my Exchange 2007 server. However, now when I open my Exchange 2010 console it complains of not being able to talk to the Information Store service on my Exchange 2007 server. What is up? Do I have to tell Exchange somewhere "who" the new Exchange server is? I'm confused. I hate it when software does it automatically. I want to force it to do something.

    Read the article

  • Parsing flat files using SSIS : SSIS Nugget

    - by jamiet
    Often when using SQL Server Integration Services (SSIS) you will find there is more than one way of accomplishing a task and that the most obvious method of doing so might not be the optimal one. In the video below I demonstrate this by way of an experiment using SSIS’s Flat File Source component; I show different ways that you can pull data from a flat file into the SSIS dataflow and also how the nature of the data itself can influence your choice as to how this task should be accomplished. If you are having trouble viewing the video in your blog reader then head to http://sqlblog.com/blogs/jamie_thomson/archive/2010/03/25/parsing-flat-files-using-ssis-ssis-nugget.aspx to see it as it is hosted on my blog!

    The main point I want to get across from this video is that a little bit of creative thinking when building your dataflows can sometimes be very beneficial for performance; quite often building a solution that isn’t the most obvious might actually turn out to be the best one. You’ll notice, if you have watched the video, that my editing skills weren’t quite up to snuff and I cut off the final few words; however, all I was saying was that if you have any feedback on this video then I would love to hear it either via email or, preferably, the comments section below. I hope this turns out to be useful to some of you.

    @Jamiet

    P.S. Incidentally the parsing that we do using SSIS expressions in the video would be much easier if we had a TOKENISE function in SSIS’s expression language and I have asked for the introduction of such a function on Connect at [SSIS] TOKEN(string, tokeniser_string, occurence) function. Feel free to go and vote that up if you think this feature would be useful!

    Read the article

  • Kernel Panic on VMware Workstation 7.1.3

    - by i.h4d35
    I've been trying to install either Arch Linux or Fedora 17 on VMware Workstation (7.1.3). After I point to the right ISO image, I get the following error:

        Booting the kernel
        PANIC: early exception 0d rip:ffffffff81042dc4 error 0 cr2 0

    I am trying to install it on a machine which has a 3rd generation i5 processor. After checking "A VMWare panic early exception fix for ivy bridge i3, i5, i7", I tried to turn off SMEP with the nosmep option. This time around, I get the same error, but at a different address. Apparently, others have faced this issue before. Thanks in advance.

    Read the article

  • linksys wrt54g router to a Cisco router?

    - by jasondavis
    This may be a strange question, but I have no clue. I currently have a basic Linksys WRT54G router for my home network. I am considering getting a rack/cabinet and running a home server or two and hooking up my home network to it. If I were to do this, I could pick up a Cisco rack-mounted router and switch off eBay to use. So if I were to do this, would I just plug in the cables for the Cisco router from my DSL modem, or is there more to it to get these working?

    Read the article

  • Pumpktris: The Tetris-Enabled Jack-o’-Lantern [Video]

    - by Jason Fitzpatrick
    You can carve a pumpkin, you might even go high-tech and wire it up with a few LEDs, but can you play Tetris on it? Check out this fully functional Tetris clone built into a jack-o’-lantern. The build comes to us courtesy of tinker Nathan at HaHaBird, who writes: One of my habits is to write down all the crazy, fleeting ideas I have, then go back to review later rather than judging right off the bat, or even worse, forgetting them. Earlier in the month I was looking through that idea notepad and found “Make Tetris Pumpkins” from sometime last year. My original plan had been to make forms to shape pumpkins into Tetris pieces as they grew, then stack them together for Halloween. Since Halloween was only a few weeks away and it was too late to start growing pumpkins, I thought “Why not make a pumpkin you can play Tetris on instead?” Watch the Pumpktris in action via the video above or hit up the link below to see exactly how he went about building it. Pumpktris [via Geek News Central]

    Read the article

  • Pluralsight Meet the Author Podcast on Structuring JavaScript Code

    - by dwahlin
    I had the opportunity to talk with Fritz Onion from Pluralsight about one of my recent courses titled Structuring JavaScript Code for one of their Meet the Author podcasts. We talked about why JavaScript patterns are important for building more reusable and maintainable apps, pros and cons of different patterns, and how to go about picking a pattern as a project is started. The course provides a solid walk-through of converting what I call “Function Spaghetti Code” into more modular code that’s easier to maintain, more reusable, and less susceptible to naming conflicts. Patterns covered in the course include the Prototype Pattern, Revealing Module Pattern, and Revealing Prototype Pattern along with several other tips and techniques that can be used.

    Meet the Author: Dan Wahlin on Structuring JavaScript Code

    The transcript from the podcast is shown below:

    [Fritz]  Hello, this is Fritz Onion with another Pluralsight author interview. Today we’re talking with Dan Wahlin about his new course, Structuring JavaScript Code. Hi, Dan, it’s good to have you with us today.

    [Dan]  Thanks for having me, Fritz.

    [Fritz]  So, Dan, your new course, which came out in December of 2011 called Structuring JavaScript Code, goes into several patterns of usage in JavaScript as well as ways of organizing your code and what struck me about it was all the different techniques you described for encapsulating your code. I was wondering if you could give us just a little insight into what your motivation was for creating this course and sort of why you decided to write it and record it.

    [Dan]  Sure. So, I got started with JavaScript back in the mid 90s. In fact, back in the days when browsers that most people haven’t heard of were out and we had JavaScript but it wasn’t great. I was on a project in the late 90s that was heavy, heavy JavaScript and we pretty much did what I call in the course function spaghetti code where you just have function after function, there’s no rhyme or reason to how those functions are structured, they just kind of flow and it’s a little bit hard to do maintenance on it, you really don’t get a lot of reuse as far as from an object perspective. And so coming from an object-oriented background in Java and C#, I wanted to put something together that highlighted kind of the new way if you will of writing JavaScript because most people start out just writing functions and there’s nothing wrong with that, it works, but it’s definitely not a real reusable solution. So the course is really all about how to move from just kind of function after function after function to the world of more encapsulated code and more reusable and hopefully better maintenance in the process.

    [Fritz]  So I am sure a lot of people have had similar experiences with their JavaScript code and will be looking forward to seeing what types of patterns you’ve put forth. Now, a couple I noticed in your course one is you start off with the prototype pattern. Do you want to describe sort of what problem that solves and how you go about using it within JavaScript?

    [Dan]  Sure. So, the patterns that are covered such as the prototype pattern and the revealing module pattern just as two examples, you know, show these kind of three things that I harp on throughout the course: encapsulation, better maintenance, reuse, those types of things.
    The prototype pattern specifically though has a couple kind of pros over some of the other patterns and that is the ability to extend your code without touching source code and what I mean by that is let’s say you’re writing a library that you know either other teammates or other people just out there on the Internet in general are going to be using. With the prototype pattern, you can actually write your code in such a way that we’re leveraging the JavaScript prototype property and by doing that now you can extend my code that I wrote without touching my source code script or you can even override my code and perform some new functionality. Again, without touching my code. And so you get kind of the benefit of the almost like inheritance or overriding in object-oriented languages with this prototype pattern and it makes it kind of attractive that way definitely from a maintenance standpoint because, you know, you don’t want to modify a script I wrote because I might roll out version 2 and now you’d have to track where you change things and it gets a little tricky. So with this you just override those pieces or extend them and get that functionality and that’s kind of some of the benefits that that pattern offers out of the box.

    [Fritz]  And then the revealing module pattern, how does that differ from the prototype pattern and what problem does that solve differently?

    [Dan]  Yeah, so the prototype pattern and there’s another one that’s kind of really closely aligned with the revealing module pattern called the revealing prototype pattern and it also uses the prototype keyword but it’s very similar to the one you just asked about the revealing module pattern.

    [Fritz]  Okay.

    [Dan]  This is a really popular one out there. In fact, we did a project for Microsoft that was very, very heavy JavaScript. It was an HTML5 jQuery type app and we use this pattern for most of the structure if you will for the JavaScript code and what it does in a nutshell is allows you to get that encapsulation so you have really a single function wrapper that wraps all your other child functions but it gives you the ability to do public versus private members and this is kind of a sort of debate out there on the web. Some people feel that all JavaScript code should just be directly accessible and others kind of like to be able to hide their, truly their private stuff and a lot of people do that. You just put an underscore in front of your field or your variable name or your function name and that kind of is the de facto way to say hey, this is private. With the revealing module pattern you can do the equivalent of what object-oriented languages do and actually have private members that you literally can’t get to as an external consumer of the JavaScript code and then you can expose only those members that you want to be public. Now, you don’t get the benefit though of the prototype feature, which is: I can’t easily extend the revealing module pattern type code. If you don’t like something I’m doing, chances are you’re probably going to have to tweak my code to fix that because we’re not leveraging prototyping but in situations where you’re writing apps that are very specific to a given target app, you know, it’s not a library, it’s not going to be used in other apps all over the place, it’s a pattern I actually like a lot, it’s very simple to get going and then if you do like that public/private feature, it’s available to you.

    [Fritz]  Yeah, that’s interesting.
    So it’s almost, you can either go private by convention just by using a standard naming convention or you can actually enforce it by using the prototype pattern.

    [Dan]  Yeah, that’s exactly right.

    [Fritz]  So one of the things that I know I run across in JavaScript and I’m curious to get your take on is we do have all these different techniques of encapsulation and each one is really quite different when you’re using closures versus simply, you know, referencing member variables and adding them to your objects that the syntax changes with each pattern and the usage changes. So what would you recommend for people starting out in a brand new JavaScript project? Should they all sort of decide beforehand on what patterns they’re going to stick to or do you change it based on what part of the library you’re working on? I know that’s one of the points of confusion in this space.

    [Dan]  Yeah, it’s a great question. In fact, I just had a company ask me about that. So which one do I pick and, of course, there’s not one answer fits all.

    [Fritz]  Right.

    [Dan]  So it really depends what you just said is absolutely in my opinion correct, which is I think as a, especially if you’re on a team or even if you’re just an individual a team of one, you should go through and pick out which pattern for this particular project you think is best. Now if it were me, here’s kind of the way I think of it. If I were writing a let’s say base library that several web apps are going to use or even one, but I know that there’s going to be some pieces that I’m not really sure on right now as I’m writing it and I know people might want to hook in that and have some better extension points, then I would look at either the prototype pattern or the revealing prototype. Now, really just a real quick summation between the two: the revealing prototype also gives you that public/private stuff like the revealing module pattern does whereas the prototype pattern does not but both of the prototype patterns do give you the benefit of that extension or that hook capability. So, if I were writing a library that I need people to override things or I’m not even sure what I need them to override, I want them to have that option, I’d probably pick a prototype, one of the prototype patterns. If I’m writing some code that is very unique to the app and it’s kind of a one off for this app which is what I think a lot of people are kind of in that mode as writing custom apps for customers, then my personal preference is the revealing module pattern you could always go with the module pattern as well which is very close but I think the revealing module pattern’s a little bit cleaner and we go through that in the course and explain kind of the syntax there and the differences.

    [Fritz]  Great, that makes a lot of sense.

    [Fritz]  I appreciate you taking the time, Dan, and I hope everyone takes a chance to look at your course and sort of make these decisions for themselves in their next JavaScript project. Dan’s course is Structuring JavaScript Code and it’s available now in the Pluralsight Library. So, thank you very much, Dan.

    [Dan]  Thanks for having me again.

    Read the article

  • How about a new platform for your next API… a CMS?

    - by Elton Stoneman
    Originally posted on: http://geekswithblogs.net/EltonStoneman/archive/2014/05/22/how-about-a-new-platform-for-your-next-apihellip-a.aspx

    Say what? I’m seeing a type of API emerge which serves static or long-lived resources, which are mostly read-only and have a controlled process to update the data that gets served. Think of something like an app configuration API, where you want a central location for changeable settings. You could use this server side to store database connection strings and keep all your instances in sync, or it could be used client side to push changes out to all users (and potentially driving A/B or MVT testing). That’s a good candidate for a RESTful API which makes proper use of HTTP expiration and validation caching to minimise traffic, but really you want a front end UI where you can edit the current config that the API returns and publish your changes. Sounds like a Content Management System would be a good fit?

    I’ve been looking at that and it’s a great fit for this scenario. You get a lot of what you need out of the box, the amount of custom code you need to write is minimal, and you get a whole lot of extra stuff from using CMS which is very useful, but probably not something you’d build if you had to put together a quick UI over your API content (like a publish workflow, fine-grained security and an audit trail). You typically use a CMS for HTML resources, but it’s simple to expose JSON instead – or to do content negotiation to support both, so you can open a resource in a browser and see a nice visual representation, or request it with Accept=application/json and get the same content rendered as JSON for the app to use.

    Enter Umbraco

    Umbraco is an open source .NET CMS that’s been around for a while. It has very good adoption, a lively community and a good release cycle. It’s easy to use, has all the functionality you need for a CMS-driven API, and it’s scalable (although you won’t necessarily put much scale on the CMS layer). In the rest of this post, I’ll build out a simple app config API using Umbraco. We’ll define the structure of the configuration resource by creating a new Document Type and setting custom properties; then we’ll build a very simple Razor template to return configuration documents as JSON; then create a resource and see how it looks. And we’ll look at how you could build this into a wider solution.

    If you want to try this for yourself, it’s ultra easy – there’s an Umbraco image in the Azure Website gallery, so all you need to do is create a new Website, select Umbraco from the image and complete the installation. It will create a SQL Azure database to store all the content, as well as a Website instance for editing and accessing content. They’re standard Azure resources, so you can scale them as you need. The default install creates a starter site for some HTML content, which you can use to learn your way around (or just delete).

    1. Create Configuration Document Type

    In Umbraco you manage content by creating and modifying documents, and every document has a known type, defining what properties it holds. We’ll create a new Document Type to describe some basic config settings. In the Settings section from the left navigation (spanner icon), expand Document Types and Master, hit the ellipsis and select to create a new Document Type. This will base your new type off the Master type, which gives you some existing properties that we’ll use – like the Page Title which will be the resource URL.
    In the Generic Properties tab for the new Document Type, you set the properties you’ll be able to edit and return for the resource. Here I’ve added a text string where I’ll set a default cache lifespan, an image which I can use for a banner display, and a date which could show the user when the next release is due. This is the sort of thing that sits nicely in an app config API. It’s likely to change during the life of the product, but not very often, so it’s good to have a centralised place where you can make and publish changes easily and safely. It also enables A/B and MVT testing, as you can change the response each client gets based on your set logic, and their apps will behave differently without needing a release.

    2. Define the response template

    Now we’ve defined the structure of the resource (as a document), in Umbraco we can define a C# Razor template to say how that resource gets rendered to the client. If you only want to provide JSON, it’s easy to render the content of the document by building each property in the response (Umbraco uses dynamic objects so you can specify document properties as object properties), or you can support content negotiation with very little effort. Here’s a template to render the document as HTML or JSON depending on the Accept header, using JSON.NET for the API rendering:

        @inherits Umbraco.Web.Mvc.UmbracoTemplatePage
        @using Newtonsoft.Json
        @{ Layout = null; }
        @if(UmbracoContext.HttpContext.Request.Headers["accept"] != null && UmbracoContext.HttpContext.Request.Headers["accept"] == "application/json")
        {
            Response.ContentType = "application/json";
            @Html.Raw(JsonConvert.SerializeObject(new {
                cacheLifespan = CurrentPage.cacheLifespan,
                bannerImageUrl = CurrentPage.bannerImage,
                nextReleaseDate = CurrentPage.nextReleaseDate
            }))
        }
        else
        {
            <h1>App configuration</h1>
            <p>Cache lifespan: <b>@CurrentPage.cacheLifespan</b></p>
            <p>Banner Image: </p>
            <img src="@CurrentPage.bannerImage">
            <p>Next Release Date: <b>@CurrentPage.nextReleaseDate</b></p>
        }

    That’s a rough-and-ready example of what you can do. You could make it completely generic and just render all the document’s properties as JSON, but having a specific template for each resource gives you control over what gets sent out. And the templates are evaluated at run-time, so if you need to change the output – or extend it, say to add caching response headers – you just edit the template and save, and the next client request gets rendered from the new template. No code to build and ship.

    3. Create the content

    With your document type created, in the Content pane you can create a new instance of that document, where Umbraco gives you a nice UI to input values for the properties we set up on the Document Type. Here I’ve set the cache lifespan to an xs:duration value, uploaded an image for the banner and specified a release date. Each property gets the appropriate input control – text box, file upload and date picker. At the top of the page is the name of the resource – myapp in this example. That specifies the URL for the resource, so if I had a DNS entry pointing to my Umbraco instance, I could access the config with a URL like http://static.x.y.z.com/config/myapp. The setup is all done now, so when we publish this resource it’ll be available to access.

    4. Access the resource

    Now if you open that URL in the browser, you’ll see the HTML version rendered, complete with the image and formatted date.
    Umbraco lets you save changes and preview them before publishing, so the HTML view could be a good way of showing editors their changes in a usable view, before they confirm them. If you browse the same URL from a REST client, specifying the Accept=application/json request header, you get the JSON rendering instead. That’s the exact same resource, with a managed UI to publish it, being accessed as HTML or JSON with a tiny amount of effort.

    5. The wider landscape

    If you have fairly stable content to expose as an API, I think this approach is really worth considering. Umbraco scales very nicely, but in a typical solution you probably wouldn’t need it to. When you have additional requirements, like logging API access requests (but doing it out-of-band so clients aren’t impacted), you can put a very thin API layer on top of Umbraco, and cache the CMS responses in your API layer (a rough sketch of such a layer appears below). Here the API does a passthrough to the CMS, so the CMS still controls the content, but it caches the response. If the response is cached for 1 minute, then Umbraco only needs to handle 1 request per minute (multiplied by the number of API instances), so if you need to support thousands of requests per second, you’re scaling a thin, simple API layer rather than having to scale the more complex CMS infrastructure (including the database). This design also includes an approach to logging, by asynchronously publishing a message to a queue (Redis in this case), which can be picked up later and persisted by a different process.

    Does it work? Beautifully. Using Azure, I spiked the solution above (including the Redis logging framework which I’ll blog about later) in half a day. That included setting up different roles in Umbraco to demonstrate a managed workflow for publishing changes, and a couple of document types representing different resources.

    Is it maintainable? We have three moving parts, which are all managed resources in Azure – an Azure Website for Umbraco which may need a couple of instances for HA (or may not, depending on how long the content can be cached), a message queue (Redis is in preview in Azure, but you can easily use Service Bus Queues if performance is less of a concern), and the Web Role for the API. Two of the components are off-the-shelf, from open source projects, and the only custom code is the API which is very simple.

    Does it scale? Pretty nicely. With a single Umbraco instance running as an Azure Website, and with 4x instances for my API layer (Standard sized Web Roles), I got just under 4,000 requests per second served reliably, with a Worker Role in the background saving the access logs. So we had a nice UI to publish app config changes, with a friendly Web preview and a publishing workflow, capable of supporting 14 million requests in an hour, with less than a day’s effort. Worth considering if you’re publishing long-lived resources through your API.
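    As a rough sketch of that thin passthrough-and-cache layer (this is not code from the original post; the ConfigController name, the Umbraco base address and the one-minute cache window are all assumptions), an ASP.NET Web API controller could look something like this:

        using System;
        using System.Net;
        using System.Net.Http;
        using System.Runtime.Caching;
        using System.Text;
        using System.Threading.Tasks;
        using System.Web.Http;

        public class ConfigController : ApiController
        {
            // Shared client pointing at the Umbraco instance (assumed URL).
            private static readonly HttpClient Cms =
                new HttpClient { BaseAddress = new Uri("http://my-umbraco-site.azurewebsites.net/") };

            private static readonly MemoryCache Cache = MemoryCache.Default;

            // GET api/config/{id}, where id is the resource name (e.g. "myapp").
            // Serve the cached JSON if we have it; otherwise fetch it from the CMS
            // and cache it for one minute.
            public async Task<HttpResponseMessage> Get(string id)
            {
                var json = Cache.Get(id) as string;
                if (json == null)
                {
                    var request = new HttpRequestMessage(HttpMethod.Get, "config/" + id);
                    request.Headers.Accept.ParseAdd("application/json");

                    var cmsResponse = await Cms.SendAsync(request);
                    cmsResponse.EnsureSuccessStatusCode();

                    json = await cmsResponse.Content.ReadAsStringAsync();
                    Cache.Set(id, json, DateTimeOffset.UtcNow.AddMinutes(1));
                }

                var response = Request.CreateResponse(HttpStatusCode.OK);
                response.Content = new StringContent(json, Encoding.UTF8, "application/json");
                return response;
            }
        }

    With a one-minute cache, each API instance asks Umbraco for a given resource at most once per minute, which is what lets the CMS tier stay small while the API tier scales out; the out-of-band access logging would be a fire-and-forget publish to the queue from inside Get.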

    Read the article

  • Upgrading log shipping from 2005 to 2008 or 2008R2

    - by DavidWimbush
    If you're using log shipping you need to be aware of some small print. The general idea is to upgrade the secondary server first and then the primary server because you can continue to log ship from 2005 to 2008R2. But this won't work if you're keeping your secondary databases in STANDBY mode rather than IN RECOVERY. If you're using native log shipping you'll have some work to do. If you've rolled your own log shipping (ahem) you can convert a STANDBY database to IN RECOVERY like this:   restore database [dw]   with norecovery; and then change your restore code to use WITH NORECOVERY instead of WITH STANDBY. (Finally all that aggravation pays off!) You can either upgrade the secondary server in place or rebuild it. A secondary database doesn't actually get upgraded until you recover it so the log sequence chain is not broken and you can continue shipping from the primary. Just remember that it can take quite some time to upgrade a database so you need to factor that into the expectations you give people about how long it will take to fail over. For more details, check this out: http://msdn.microsoft.com/en-us/library/cc645954(SQL.105).aspx

    Read the article

  • Box Selection and Multi-Line Editing with VS 2010

    - by ScottGu
    This is the twenty-second in a series of blog posts I’m doing on the VS 2010 and .NET 4 release.  I’ve already covered some of the code editor improvements in the VS 2010 release.  In particular, I’ve blogged about the Code Intellisense Improvements, new Code Searching and Navigating Features, HTML, ASP.NET and JavaScript Snippet Support, and improved JavaScript Intellisense.  Today’s blog post covers a small, but nice, editor improvement with VS 2010 – the ability to use “Box Selection” when performing multi-line editing.  This can eliminate keystrokes and enables some slick editing scenarios. [In addition to blogging, I am also now using Twitter for quick updates and to share links. Follow me at: twitter.com/scottgu]

    Box Selection

    Box selection is a feature that has been in Visual Studio for a while (although not many people knew about it).  It allows you to select a rectangular region of text within the code editor by holding down the Alt key while selecting the text region with the mouse.  With VS 2008 you could then copy or delete the selected text. VS 2010 now enables several more capabilities with box selection including:

    - Text Insertion: Typing with box selection now allows you to insert new text into every selected line
    - Paste/Replace: You can now paste the contents of one box selection into another and have the content flow correctly
    - Zero-Length Boxes: You can now make a vertical selection zero characters wide to create a multi-line insert point for new or copied text

    These capabilities can be very useful in a variety of scenarios.  Some example scenarios: change access modifiers (private->public), adding comments to multiple lines, setting fields, or grouping multiple statements together.

    Great 3 Minute Box-Selection Video Demo

    Brittany Behrens from the Visual Studio Editor Team has an excellent 3 minute video that shows off a few cool VS 2010 multi-line code editing scenarios with box selection.  Watch it to learn a few ways you can use this new box selection capability to optimize your typing in VS 2010 even further.

    Hope this helps,

    Scott

    P.S. You can learn more about the VS Editor by following the Visual Studio Team Blog or by following @VSEditor on Twitter.

    Read the article

  • MacBook Pro - 7200 vs 5400 rpm drives - Heat and Noise

    - by webworm
    I am looking at a new 15" MacBook Pro for development purposes. I am planning to run a Virtual Machine for about 50% of my work (Windows 7 x64, IIS, SQL Server, and VS 2010). The upgrade from a 5400 rpm drive to a 7200 rpm is only $45. From what I understand the faster rotational speed of the 7200 rpm drive will help virtual machine performance. However, I am concerned that additional heat and fan noise might be an issue. I will be running mostly on A/C power so decreased battery life is not a major concern for me. Since I would be running with a Core i7 processor which gives off a fair amount of heat already I was wondering if it might be best to stay at 5400 rpm for the hard drive. What do you all think? Thanks!

    Read the article

  • Java Champion Stephen Chin on New Features and Functionality in JavaFX

    - by janice.heiss(at)oracle.com
    In an Oracle Technology Network interview, Java Champion Stephen Chin, Chief Agile Methodologist for GXS, and one of the most prolific and innovative JavaFX developers, provides an update on the rapidly developing changes in JavaFX.Chin expressed enthusiasm about recent JavaFX developments:"There is a lot to be excited about -- JavaFX has a new API face. All the JavaFX 2.0 APIs will be exposed via Java classes that will make it much easier to integrate Java server and client code. This also opens up some huge possibilities for JVM language integration with JavaFX." Chin also spoke about developments in Visage, the new language project created to fill the gap left by JavaFX Script:"It's a domain-specific language for writing user interfaces, which addresses the needs of UI developers. Visage takes over where JavaFX Script left off, providing a statically typed, declarative language with lots of features to make UI development a pleasure.""My favorite language features from Visage are the object literal syntax for quickly building scene graphs and the bind keyword for connecting your UI to the backend model. However, the language is built for UI development from the top down, including subtle details like null-safe dereferencing for exception-less code."Read the entire article.

    Read the article
