Search Results

Search found 13573 results on 543 pages for 'the great widi'.


  • counting unique values based on multiple columns

    - by gooogalizer
    I am working in Google Spreadsheets and I am trying to do some counting that takes into consideration cell values across multiple cells in each row. Here's my table (Google doc here):

    AUTHOR  | ARTICLE     | VERSION | PRE-SELECTED
    ANDREW  | GOLF STREAM | 1       | X
    ANDREW  | GOLF STREAM | 2       | X
    ANDREW  | HURRICANES  | 1       |
    JOHN    | CAPE COD    | 1       | X
    JOHN    | GOLF STREAM | 1       |

    Each person can submit multiple articles as well as multiple versions of the same article. Sometimes different people submit different articles that happen to be identically named (Andrew and John both submitted different articles called "Golf Stream"). Multiple versions written by the same person do not count as unique, but articles with the same title written by different people do count as unique. So, I am looking for a formula that counts the number of unique articles that have been submitted [4] (without having to manually create extra columns for doing CONCATs, if possible). It would also be great to find formulas that:
    - Count the number of unique articles that have been pre-selected (marked "X" in the "PRE-SELECTED" column) [2]
    - Count the number of unique articles that have only 1 version [4]
    - Count the number of unique articles that have more than 1 of their versions pre-selected [1]
    Thank you so much! Nikita
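    One way to count unique author/article pairs without a helper column (a sketch only, assuming the table above sits in A2:D6 with authors in column A, articles in column B and the "X" marks in column D) is to concatenate the two key columns inside an ARRAYFORMULA and count the unique results:

        =COUNTUNIQUE(ARRAYFORMULA(A2:A6 & "|" & B2:B6))

    The same idea restricted to pre-selected rows:

        =COUNTUNIQUE(FILTER(ARRAYFORMULA(A2:A6 & "|" & B2:B6), D2:D6 = "X"))

    The "|" separator is only there to keep different author/article combinations from colliding when joined; any character that never appears in the data will do.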

    Read the article

  • Rouen Business School builds its entire back office UI with Visual WebGui

    - by Webgui
    Two years ago, Rouen Business School (an AMBA-accredited institution located in Rouen, Normandy, France) decided to develop and implement a proprietary information system in-house. The objective was to administer all the data encompassed by a classic 3,500-student business school: from online application forms to the registration system, including financial information, scheduling, grades management, etc. The development team at Rouen Business School chose Visual WebGui for the UI. “When we tested the Visual WebGui solution we were really amazed and enthusiastic. It was exactly the kind of solution we were looking for… The great performance of the solution allows us to manage a large amount of information with no delay and very positive feedback at the user end,” said Stéphane Henry, the school's IT Project Manager. As a result of the fast development, easy deployment, performance, and professional design that the team experienced with Visual WebGui, the entire back office of the Rouen Business School information system was chosen to be developed with the Visual WebGui framework, “and after two years we do not see any reason to change this,” commented Stéphane Henry, who added that “all the original requirements were satisfied using Visual WebGui.” You can read the full case study here.

    Read the article

  • More Retro Games

    - by Matt Christian
    Last week I made 2 stops to my local game stores and spent a load of cash on a bunch of new retro games for my collection.  Here are the recent additions:
    NES - Mega Man 2, The Adventures of Bayou Billy, Ducktales, Metal Gear, Super Mario Bros / Duck Hunt, Firestorm, Dragon's Lair, Bartman Meets Radioactive Man
    N64 - Superman 64, Zelda: Ocarina of Time (in original box, box is in poor condition)
    Atari - Superman, Adventure, Donkey Kong, Raiders of the Lost Ark
    Dreamcast - Memory card with view screen, Space Channel 5
    Genesis (all in case) - Jurassic Park, Sonic Spinball, Sonic the Hedgehog 3 (missing manual), Spiderman (also called Spiderman vs. The Kingpin)
    GameGear - Bart vs The Space Mutants
    Quite a large haul given it was all purchased in 2 days.  I got Metal Gear for a great deal and almost considered buying their other copy simply to resell for more, though I decided against it to let another lucky soul find it.  I may need to run over there again because I think they had TMNT 2 (NES) for around $6 and it usually sells for more than that.  I could have sworn I grabbed it and bought it, but my receipt tells me differently. I also found my copy of Super Mario 3 and added that to my collection.  Unfortunately one of the corners of the label has begun to peel up pretty badly, which sucks, although it's still a good item for the collection. In other retro news, this weekend was Easter, and while at my grandparents' the cousins wanted to play on their NES, which was not working.  Me being the retro NES nerd I am, I grabbed a screwdriver, some Windex, a few toothpicks, and a few cotton swabs and had it up and running in under an hour (that includes eating dinner!).  The NES holds the games tighter, has a better connection, and works almost instantly.  I should do THAT for a living!

    Read the article

  • pros and cons of taking an ABAP job

    - by sJhonny
    I'm a programmer with 3 years of .NET experience under my belt, and am currently looking for a new job. One of the options I'm considering is an OO ABAP developer position with SAP. However, I have several concerns about taking an ABAP job: as ABAP is used exclusively by SAP, any experience in ABAP that I have would be irrelevant in the outside world. I'm also worried that I wouldn't be exposed to new technologies while working in ABAP, and ultimately I would lose touch with what's going on in the world. This is a real sore point, since I really enjoy exploring and learning new & cool stuff. (*Note: yes, I could experiment with other technologies & trends on my own time, but this is much harder to do, and isn't really the same as working full-time with them.) One of the nicest things about programming, for me, is finding a great OO architecture / design (I'm really into object-oriented :)). I know that ABAP is a procedural language, and I'm not certain how 'OO' its OO version is. This leads me to the conclusion that, unless I stay with SAP to the end of my career, any time spent there would not benefit me professionally. Is there anyone who can shed some light on these opinions? Are my concerns founded? Are there any advantages (career- and technology-wise) to ABAP that I'm missing?

    Read the article

  • Emulate "Go to Dekstop/Home/etc." behavior in OS X via AppleScript

    - by pattulus
    OS X has built-in support for going to certain folders (Home, Utilities, Desktop, etc.) via a shortcut. I wanted to emulate this behavior for the Downloads folder. The only thing missing from the script below is that it won't succeed when no window is open in the Finder (see the error message).

        tell application "Finder"
            activate
            set target of Finder window 1 to folder "Downloads" of folder "username" of disk "Macintosh HD"
        end tell

    Error message:

        error "Finder got an error: Can’t set Finder window 1 to folder \"Downloads\" of folder \"username\" of disk \"Macintosh HD\"." number -10006 from Finder window 1

    It would be great if you know about some kind of 'if' clause that triggers opening the Downloads folder in case there is no window 1 open in the Finder. Thanks in advance.
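    One way to handle that case (a sketch only, keeping the placeholder "username" and disk name from the question) is to count the open Finder windows first and fall back to opening the folder in a new window:

        tell application "Finder"
            activate
            if (count of Finder windows) is 0 then
                -- no window to retarget, so open the folder in a new one
                open folder "Downloads" of folder "username" of disk "Macintosh HD"
            else
                set target of Finder window 1 to folder "Downloads" of folder "username" of disk "Macintosh HD"
            end if
        end tell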

    Read the article

  • Failed to load viewstate. The control tree into which viewstate is being loaded...etc

    - by alaa9jo
    Two days ago, a colleague of mine tried to publish an ASP.NET website (which is built in VS2008 using framework 3.5) to our server. He configured everything in IIS (he made sure that the selected ASP.NET version is 2.0) and launched the website. At first it was working great, but when he tried to click on a specific treeview... BOOM: "Failed to load viewstate. The control tree into which viewstate is being loaded must match the control tree that was used to save viewstate during the previous request. For example, when adding controls dynamically, the controls added during a post-back must match the type and position of the controls added during the initial request." On that page there were these controls: a TreeView and a PlaceHolder; when the user selects any node, its controls are created dynamically inside that placeholder. The first time it works fine, but when (s)he selects another node the issue appears. He called me to help him with this issue; for me it was the first time I had seen such an issue. I scratched my head, then decided to eliminate the possible causes one by one. On the development machine it works perfectly; he published the website to the local IIS and again it worked perfectly; I took a copy of the website and published it onto my laptop with no issues at all, so this means it's not an issue in the code. So there was something missing/wrong on our server [it has Windows Server 2003]. We went to the server and checked the web.config and the configuration in IIS... nothing wrong so far, so I decided to check whether framework 3.5 was installed or not, and the answer: it wasn't installed. Of course he had assumed that it was installed, and there was nothing to tell him it wasn't from the "ASP.Net version" in IIS, because frameworks 3.0 and 3.5 will not be listed there [2.0 will be listed there instead]. The only way to check if it is installed or not is to search for the framework in this path: [WINDOWS Folder]\Microsoft.NET\Framework, or check whether it is listed in Add or Remove Programs. The obvious solution for his case: we installed Framework 3.5 SP1 on our server, restarted the machine and it worked! If anyone faced the same issue and solved it with the same solution or a different one, please post it here to share the experience.
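    A quick way to do that check from a command prompt (a minimal sketch; the path is the one mentioned above) is to list the installed framework folders and look for a v3.5 entry:

        rem Lists the installed .NET Framework version folders; expect a "v3.5" entry if 3.5 is present
        dir "%WINDIR%\Microsoft.NET\Framework"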

    Read the article

  • WPB .Net User Group 11/29 Meeting - Kinect SDK with Joe Healy - New Meeting Location

    - by Sam Abraham
    We are excited to share great news and updates regarding the West Palm Beach .Net User Group. Our upcoming meeting will feature Joe Healy from Microsoft as speaker for the November 29th, 2011 6:30 PM meeting. He will be covering the Kinect SDK and answering all our questions regarding the latest Windows Phone 7 release. We will also be raffling many valuable items as part of our usual free raffle and hope each of our members leaves with a freebie. We are also honored to share that we will be hosting our special meeting at a new location: PC Professor, 6080 Okeechobee Blvd., #200, West Palm Beach, FL 33417. Phone: 561-684-3333. This is right by the Florida Turnpike entrance on Okeechobee Blvd. PC Professor will also be providing our free pizza/soda and some additional surprise items for this meeting to mark the debut of our meetings at their location! We would like to use this opportunity to thank our current host, CompTec, for its generous support and for hosting us for the past 2 years, and we look forward to their continued support and sponsorship. A lot of work and effort goes into hosting a meeting, and we hope it translates into added value and benefit for our membership. We always welcome your feedback and participation as we strive to continuously improve the group. Special thanks to our group member, Zack Weiner, for helping us find this new location. For more details and to register please visit: http://www.fladotnet.com/Reg.aspx?EventID=536   Hope to see you all there.   --Sam Abraham & Venkat Subramanian Site Directors – West Palm Beach .Net User Group

    Read the article

  • Visual WebGui launches a CompanionKit for an enhanced developer experience

    - by Webgui
    Visual WebGui launched a new major live demo of the platform's concepts, features and controls and the code behind them. The new Developer CompanionKit is a huge leap forward in the developer experience, allowing developers a hands-on exploration of Visual WebGui which should provide a better understanding of the system and the ability to utilize the great advantages of Visual WebGui in order to develop better performing rich web applications. The CompanionKit is available online at companionkit.visualwebgui.com/main.wgx. We invite you to explore Visual WebGui via the new CompanionKit and to watch the CompanionKit intro video. Below is a screenshot taken from the live CompanionKit which shows how an alternate style is applied to the appearance of a DataGridView, how it looks running live, and its code (C# or VB.NET). You can access the different controls (within the Controls section) from the left navigation bar or perform a free text search which shows the relevant results from all the sections - additional sections such as a Concepts section are expected to be added in the near future. In addition, the new Developer CompanionKit, which was built with Visual WebGui, showcases the enhanced UI design capabilities for building more engaging, modern Web 2.0 applications. The CompanionKit will also be available for download in the next few days as part of the media for the 6.4 beta 2 SDK (.NET 2.0 or .NET 3.5) under "Help and Documentation".

    Read the article

  • Screen shots and documentation on the cheap

    - by Kyle Burns
    Occasionally I am surprised to open up my toolbox and find a great tool that I've had for years and never noticed.  The other day I had just such an experience with Windows Server 2008.  A co-worker of mine was squinting to read the screenshots that he had taken using the "Print Screen, paste" method in WordPad and asked me if there was a better tool available at a reasonable cost.  My first instinct was to take a look at CamStudio for him, but I also knew that he had an immediate need to take some more screenshots, so I decided to check and see if the Snipping Tool found in Windows 7 is also available in Windows Server 2008.  I clicked the Start button and typed “snip” into the search bar and, while the Snipping Tool did not come up, a Control Panel item labeled “Record steps to reproduce a problem” did. The application behind the Control Panel entry was “Problem Steps Recorder” (PSR.exe); I have confirmed that it is available in Windows 7 and Windows Server 2008 R2 but have not checked other platforms.  It presents a pretty minimal and intuitive interface, providing “Start Record”, “Stop Record”, and “Add Comment” buttons.  The “Start Record” button shockingly starts recording and, sure enough, the “Stop Record” button stops recording.  The “Add Comment” button prompts for a comment and for you to highlight the area of the screen to which your comment is related.  Once you’re done recording, the tool outputs an MHT file packaged in a ZIP archive.  This file contains a series of screenshots depicting the user’s interactions, giving timestamps and descriptive text (such as “User left click on “Test” in “My Page – Windows Internet Explorer”) as well as the comments they made along the way and some diagnostics about the applications captured. The Problem Steps Recorder looks like a simple solution to the most common of my needs for documentation, one that can turn “I can’t understand how to make it do what you’re reporting” into “Oh, I see what you’re talking about and will fix it right away”.  If you’re like me and haven’t yet discovered this tool, give it a whirl and see for yourself.

    Read the article

  • Need to make a scheduled task run as another user but keep the current user’s environment

    - by Chad Marmon
    I need to back up users' .pst files. The current method I am trying is making a shadow copy using Diskshadow. My script works great, but Diskshadow needs to be run as administrator while also retaining the logged-on user's environment variables; specifically, the %USERNAME% and %HOMESHARE% variables, so the right user’s files get copied up to the right network location. I have for the most part got this to work, but there’s no straightforward (or secure, at least) way to pass the password. If I set up a scheduled task to run the script as a domain user with local admin privs, the environment variables get lost. I need to run this script automagically so that there is no user interaction. If I could figure out how to make a scheduled task run as another user but keep the current user’s environment, I think this would work, but I’ve been beating my head against that for a while now, without any luck.
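    One workaround to consider (a sketch only; the file path and task name below are hypothetical) is to let a small script in the logged-on user's session write the two variables to a known file and then trigger a pre-created admin task, which reads them back instead of relying on its own environment:

        rem Runs in the logged-on user's session (e.g. from a logon script)
        echo %USERNAME%> "C:\Scripts\pst-backup-user.txt"
        echo %HOMESHARE%>> "C:\Scripts\pst-backup-user.txt"
        rem "PstShadowCopy" is a scheduled task created beforehand to run as the admin account
        schtasks /run /tn "PstShadowCopy"

    The admin-side script then reads the username and home share from C:\Scripts\pst-backup-user.txt before calling Diskshadow, so no password ever has to be passed around.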

    Read the article

  • JCP Party at JavaOne and other JCP events

    - by heathervc
    Don't miss all of these great opportunities to get involved with the JCP program at JavaOne next week. The details are listed below and on the JCP at JavaOne page as well. Join us for the annual JCP community party on Tuesday evening, 2 October, to be held at the Infusion Lounge. Drop by starting at 6:30 pm to meet fellow Java Community members, JCP members and EC representatives, enjoy appetizers/beer, pick up a door prize, enter a raffle and congratulate the winners and nominees (newly updated nominee information available now) of the 10th annual awards in three categories: JCP Member of the Year, Outstanding Spec Lead, and Most Significant JSR. The day-by-day breakdown is as follows...
    Sunday 9/30/12
    - JCP and OpenJDK: Using the JUGs' "Adopt" Programs in Your Group. Session ID: UGF10434. Location: Moscone West - 2002. Date and Time: 9/30/12, 12:15 PM - 1:00 PM
    - JCP Public Executive Committee Face-to-Face Meeting. Open to Executive Committee Members and the Java Developer Community. Location: Clift Hotel, 495 Geary Street, San Francisco - Rita Room (downstairs from Lobby). Date and Time: 9/30/12, 2:00 PM - 3:30 PM; agenda includes open Q&A, JCP.Next, EC Elections - no JavaOne pass required!
    Monday 10/1/12
    - JCP in the OTN Java DEMOgrounds. Location: Hilton Hotel Grand Ballroom. Date and Time: 10/1/12, 4:00 PM - 4:30 PM
    - JCP.Next: Reinvigorating Java Standards. Session ID: BOF6272. Location: Hilton San Francisco - Plaza A/B. Date and Time: 10/1/12, 4:30 PM - 5:15 PM
    - 101 Ways to Improve Java: Why Developer Participation Matters. Session ID: BOF6283. Location: Hilton San Francisco - Continental Ballroom 4. Date and Time: 10/1/12, 5:30 PM - 6:15 PM
    Tuesday 10/2/12
    - JCP in the OTN Java DEMOgrounds. Location: Hilton Hotel Grand Ballroom. Date and Time: 10/2/12, 12:00 PM - 1:30 PM
    - Spec Leads Meeting with the JCP PMO. Location: Hilton San Francisco - Van Ness Room. Date and Time: 10/2/12, 3:00 PM - 4:00 PM. Come learn how you benefit from the changes.
    - Meet the JCP Executive Committee Candidates. Session ID: BOF6307. Location: Hilton San Francisco - Golden Gate 3/4/5. Date and Time: 10/2/12, 4:30 PM - 5:15 PM
    - The 10th Annual JCP Awards Presentation and Party. Enjoy an evening with this year's JCP Award nominees and watch as we announce the winners - no JavaOne pass required! Location: Infusion Lounge - 124 Ellis Street, San Francisco. Date and Time: 10/2/12, 6:30 PM - 9:00 PM
    Hope to see you there!

    Read the article

  • Copying Chinese Characters from PDF

    - by Kevin
    I am on a Windows 7 laptop, which to my knowledge comes preinstalled with all the language packs. I can see Chinese characters with no issues and can normally copy them fine (from browser to Microsoft Office works great). I have many PDFs with Chinese characters in them; whenever I try to copy them and paste them into another program such as a browser, Microsoft Office, etc., I just get strange foreign characters. For example, copying a line that is in this order: Chinese characters, pinyin (Chinese in roman letters), and then the English translation, gives me: ,ô t¯ing w?o shu¯o listen to me. The pinyin is also getting messed up, as the tonal marks (accents) above each letter are moving to a space of their own. Any ideas on how to fix this? Thank you very much!

    Read the article

  • Choice of open source license for some components, closed source for others

    - by Peter Serwylo
    G'day, I am working on a set of multiplayer games, where different games play against each other (e.g. you play a Tetris clone, I play an Asteroids clone, but we are both competing against each other). All the games would be based on the same underlying framework written specifically for this project. I am struggling to comprehend how I would license this so that:
    - The underlying framework is open source, so other people can create new games based on it.
    - Some games built on the framework are open source.
    - Other games are closed source.
    The goal is to have two bundles on something like the Android market:
    - One free and open source package which has a collection of games.
    - Another "premium" (although I dislike that word) paid package which has a different collection of games.
    Usually I am fond of permissive licenses such as MIT/BSD; however, I would prefer something more in the vein of the GPL for this. This is because for software such as the snes-9x SNES emulator, which is a great piece of software, there are a ton of poor-quality versions being sold, whereas it would be preferable if there were just one authoritative version which was always kept up to date and distributed for free. If the underlying framework were GPL'd, would I be able to build closed-source games on top of it? Thanks for your input.

    Read the article

  • About HDD enclosure

    - by kmitnick
    Hey guys, how are you doing? I have this 3.5" IDE enclosure and it works great. I love the idea of enclosures, though not the power-feeding thing :) (by the way, can't I just insert a rechargeable battery to feed the power when I am unable to find a power outlet?). Anyway, my question is: when I finish using the enclosure I safely remove it when using Windows, or umount it when working with Linux, and after that I get confused about whether to turn it off or not. When I turn it off, the HDD suddenly stops spinning as if there were a power failure, not like when it was an internal drive and I normally shut down the PC. So is it OK to turn it off the way I've just said??? Regards, ~Abed

    Read the article

  • The fastest way to resize images from ASP.NET. And it’s (more) supported-ish.

    - by Bertrand Le Roy
    I’ve shown before how to resize images using GDI, which is fairly common but is explicitly unsupported because we know of very real problems that this can cause. Still, many sites use that method because those problems are fairly rare, and because most people assume it’s the only way to get the job done. Plus, it works in medium trust. More recently, I’ve shown how you can use WPF APIs to do the same thing and get JPEG thumbnails, only 2.5 times faster than GDI (even now that GDI really ultimately uses WIC to read and write images). The boost in performance is great, but it comes at a cost that you may or may not care about: it won’t work in medium trust. It’s also just as unsupported as the GDI option. What I want to show today is how to use the Windows Imaging Component APIs directly from ASP.NET, without going through WPF. The approach has the great advantage that it’s been tested and proven to scale very well. The WIC team tells me you should be able to call support and get answers if you hit problems. Caveats exist though. First, this is using interop, so until a signed wrapper sits in the GAC, it will require full trust. Second, the APIs have a very strong smell of native code and are definitely not .NET-friendly. And finally, the most serious problem is that older versions of Windows don’t offer MTA support for image decoding. MTA support is only available on Windows 7, Vista and Windows Server 2008. But on 2003 and XP, you’ll only get STA support. That means that the thread safety that we so badly need for server applications is not guaranteed on those operating systems. To make it work, you’d have to spin specialized threads yourself and manage the lifetime of your objects, which is outside the scope of this article. We’ll assume that we’re fine with all this and that we’re running on 7 or 2008 under full trust. Be warned that the code that follows is not simple or very readable. This is definitely not the easiest way to resize an image in .NET. Wrapping native APIs such as WIC in a managed wrapper is never easy, but fortunately we won’t have to: the WIC team already did it for us and released the results under MS-PL. The InteropServices folder, which contains the wrappers we need, is in the WicCop project but I’ve also included it in the sample that you can download from the link at the end of the article.
    In order to produce a thumbnail, we first have to obtain a decoding frame object that WIC can use. Like with WPF, that object will contain the command to decode a frame from the source image but won’t do the actual decoding until necessary. Getting the frame is done by reading the image bytes through a special WIC stream that you can obtain from a factory object that we’re going to reuse for lots of other tasks:
        var photo = File.ReadAllBytes(photoPath);
        var factory = (IWICComponentFactory)new WICImagingFactory();
        var inputStream = factory.CreateStream();
        inputStream.InitializeFromMemory(photo, (uint)photo.Length);
        var decoder = factory.CreateDecoderFromStream(inputStream, null, WICDecodeOptions.WICDecodeMetadataCacheOnLoad);
        var frame = decoder.GetFrame(0);
    We can read the dimensions of the frame using the following (somewhat ugly) code:
        uint width, height;
        frame.GetSize(out width, out height);
    This enables us to compute the dimensions of the thumbnail, as I’ve shown in previous articles. We now need to prepare the output stream for the thumbnail. WIC requires a special kind of stream, IStream (not implemented by System.IO.Stream), and doesn’t directly understand .NET streams. It does provide a number of implementations, but not exactly what we need here. We need to output to memory because we’ll want to persist the same bytes to the response stream and to a local file for caching. The memory-bound version of IStream requires a fixed-length buffer, but we won’t know the length of the buffer before we resize. To solve that problem, I’ve built a derived class from MemoryStream that also implements IStream. The implementation is not very complicated; it just delegates the IStream methods to the base class, but it involves some native pointer manipulation. Once we have a stream, we need to build the encoder for the output format, which could be anything that WIC supports. For web thumbnails, our only reasonable options are PNG and JPEG. I explored PNG because it’s a lossless format, and because WIC does support PNG compression. That compression is not very efficient though, and JPEG offers good quality with much smaller file sizes. On the web, it matters. I found the best PNG compression option (adaptive) to give files that are about twice as big as 100%-quality JPEG (an absurd setting), 4.5 times bigger than 95%-quality JPEG and 7 times larger than 85%-quality JPEG, which is more than acceptable quality. As a consequence, we’ll use JPEG. The JPEG encoder can be prepared as follows:
        var encoder = factory.CreateEncoder(Consts.GUID_ContainerFormatJpeg, null);
        encoder.Initialize(outputStream, WICBitmapEncoderCacheOption.WICBitmapEncoderNoCache);
    The next operation is to create the output frame:
        IWICBitmapFrameEncode outputFrame;
        var arg = new IPropertyBag2[1];
        encoder.CreateNewFrame(out outputFrame, arg);
    Notice that we are passing in a property bag. This is where we’re going to specify our only parameter for encoding, the JPEG quality setting:
        var propBag = arg[0];
        var propertyBagOption = new PROPBAG2[1];
        propertyBagOption[0].pstrName = "ImageQuality";
        propBag.Write(1, propertyBagOption, new object[] { 0.85F });
        outputFrame.Initialize(propBag);
    We can then set the resolution for the thumbnail to be 96, something we weren’t able to do with WPF and had to hack around:
        outputFrame.SetResolution(96, 96);
    Next, we set the size of the output frame and create a scaler from the input frame and the computed dimensions of the target thumbnail:
        outputFrame.SetSize(thumbWidth, thumbHeight);
        var scaler = factory.CreateBitmapScaler();
        scaler.Initialize(frame, thumbWidth, thumbHeight, WICBitmapInterpolationMode.WICBitmapInterpolationModeFant);
    The scaler is using the Fant method, which I think is the best-looking one even if it seems a little softer than cubic (zoomed here to better show the defects). [Comparison crops: Cubic, Fant, Linear, Nearest neighbor.] We can write the source image to the output frame through the scaler:
        outputFrame.WriteSource(scaler, new WICRect { X = 0, Y = 0, Width = (int)thumbWidth, Height = (int)thumbHeight });
    And finally we commit the pipeline that we built and get the byte array for the thumbnail out of our memory stream:
        outputFrame.Commit();
        encoder.Commit();
        var outputArray = outputStream.ToArray();
        outputStream.Close();
    That byte array can then be sent to the output stream and to the cache file. Once we’ve gone through this exercise, it’s only natural to wonder whether it was worth the trouble. I ran this method, as well as GDI and WPF resizing, over thirty twelve-megapixel images for JPEG qualities between 70% and 100% and measured the file size and time to resize. Here are the results. [Charts: size of resized images; time to resize thirty 12-megapixel images.] Not much to see on the size graph: sizes from WPF and WIC are equivalent, which is hardly surprising as WPF calls into WIC. There is just an anomaly for 75% for WPF that I noted in my previous article and that disappears when using WIC directly. But overall, using WPF or WIC over GDI represents a slight win in file size. The time to resize is more interesting. WPF and WIC get similar times, although WIC seems to always be a little faster. Not surprising considering WPF is using WIC. The margin of error on these results is probably fairly close to the time difference. As we already knew, the time to resize does not depend on the quality level, only the size does. This means that the only decision you have to make here is size versus visual quality. This third approach to server-side image resizing on ASP.NET seems to converge on the fastest possible one. We have marginally better performance than WPF, but with some additional peace of mind that this approach is sanctioned for server-side usage by the Windows Imaging team. It still doesn’t work in medium trust. That is a problem and shows the way for future server-friendly managed wrappers around WIC. The sample code for this article can be downloaded from: http://weblogs.asp.net/blogs/bleroy/Samples/WicResize.zip The benchmark code can be found here (you’ll need to add your own images to the Images directory and then add those to the project, with content and copy if newer in the properties of the files in the solution explorer): http://weblogs.asp.net/blogs/bleroy/Samples/WicWpfGdiImageResizeBenchmark.zip WIC tools can be downloaded from: http://code.msdn.microsoft.com/wictools To conclude, here are some of the resized thumbnails at 85% Fant.

    Read the article

  • #1 O’Reilly eBook for 2010

    - by Jan Goyvaerts
    The year-end issue of O’Reilly’s author newsletter discussed the trends O’Reilly has been seeing the past few years, and their predictions for 2011. The key trend is that digital is now more than ever poised to take over print: Our digitally distributed products have grown from 18.36% of our publishing mix in 2009 to 28.09% of our mix in 2010. What is more impressive is that our digitally distributed products have produced more than double the revenue that has been lost with the decline of print. I think this is important because some say that digital cannibalizes print products. Our data indicates the contrary, as print is declining much more slowly than digital is growing. I think we may be seeing developers purchasing a print book, and then purchasing the electronic editions to search and copying code from, as the incremental cost for digital is more than reasonable. My own book seems to be leading this trend. Thanks to everyone who purchased it! And the five bestselling O’Reilly ebook products for 2010: 1) Regular Expressions Cookbook, 2) jQuery Cookbook, 3) Learning Python, 4) HTML5: Up and Running, and 5) JavaScript Cookbook. I think it’s interesting that the top five ebooks are code-intensive books. They’re great products for search and code reuse. It’s also interesting that none of the top 5 ebooks made the top 5 of print books.

    Read the article

  • That Escalated Quickly

    - by Jesse Taber
    Originally posted on: http://geekswithblogs.net/GruffCode/archive/2014/05/17/that-escalated-quickly.aspx
    I have been working remotely out of my home for over 4 years now. All of my coworkers during that time have also worked remotely. Lots of folks have written about the challenges inherent in facilitating communication on remote teams and strategies for overcoming them. A popular theme around this topic is the notion of “escalating communication”. In this context “escalating” means taking a conversation from one mode of communication to a different, higher fidelity mode of communication. Here are the five modes of communication I use at work in order of increasing fidelity:
    1. Email – This is the “lowest fidelity” mode of communication that I use. I usually only check it a few times a day (and I’m trying to check it even less frequently than that) and I only keep items in my inbox if they represent an item I need to take action on that I haven’t tracked anywhere else.
    2. Forums / Message boards – Being a developer, I’ve gotten into the habit of having other people look over my code before it becomes part of the product I’m working on. These code reviews often happen in “real time” via screen sharing, but I also always have someone else give all of the changes another look using pull requests. A pull request takes my code and lets someone else see the changes I’ve made side-by-side with the existing code so they can see if I did anything dumb. Pull requests can facilitate a conversation about the code changes in an online-forum-like style. Some teams I’ve worked on also liked using tools like Trello or Google Groups to have on-going conversations about a topic or task that was being worked on.
    3. Chat & Instant Messaging – Chat and instant messaging are the real workhorses for communication on the remote teams I’ve been a part of. I know some teams that are co-located that also use it pretty extensively for quick messages that don’t warrant walking across the office to talk with someone but require more immediacy than an e-mail. For the purposes of this post I think it’s important to note that the terms “chat” and “instant messaging” might insinuate that the conversation is happening in real time, but that’s not always true. Modern chat and IM applications maintain a searchable history so people can easily see what might have been discussed while they were away from their computers.
    4. Voice, Video and Screen sharing – Everyone’s got a camera and microphone on their computers now, and there is an abundance of services that will let you use them to talk to other people who have cameras and microphones on their computers. I’m including screen sharing here as well because, in my experience, these discussions typically involve one or more people showing the other participants something that’s happening on their screen. Obviously, this mode of communication is much higher-fidelity than any of the ones listed above. Scheduled meetings are typically conducted using this mode of communication.
    5. In Person – No matter how great communication tools become, there’s no substitute for meeting with someone face-to-face. However, opportunities for this kind of communication are few and far between when you work on a remote team.
    When a conversation gets escalated that usually means it moves up one or more positions on this list. A lot of people advocate jumping to #4 sooner than later. Like them, I used to believe that, if it was possible, organizing a call with voice and video was automatically better than any kind of text-based communication could be. Lately, however, I’m becoming less convinced that escalating is always the right move.
    Working Asynchronously
    Last year I attended a talk at our local code camp given by Drew Miller. Drew works at GitHub and was talking about how they use GitHub internally. Many of the folks at GitHub work remotely, so communication was one of the main themes in Drew’s talk. During the talk Drew used the phrase “asynchronous communication” to describe their use of chat and pull request comments. That phrase stuck in my head because I hadn’t heard it before, but I think it perfectly describes the way in which remote teams often need to communicate. You don’t always know when your co-workers are at their computers or what hours (if any) they are working that day. In order to work this way you need to assume that the person you’re talking to might not respond right away. You can’t always afford to wait until everyone required is online and available to join a voice call, so you need to use text-based, persistent forms of communication so that people can receive and respond to messages when they are available. Going back to my list from the beginning of this post for a second, I characterize items #1-3 as being “asynchronous” modes of communication while we could call items #4 and #5 “synchronous”. When communication gets escalated it’s almost always moving from an asynchronous mode of communication to a synchronous one. Now, to the point of this post: I’ve become increasingly reluctant to escalate from asynchronous to synchronous communication for two primary reasons:
    1 – You can often find a higher fidelity way to convey your message without holding a synchronous conversation
    2 – Asynchronous modes of communication are (usually) persistent and searchable.
    You Don’t Have to Broadcast Live
    Let’s start with the first reason I’ve listed. A lot of times you feel like you need to escalate to synchronous communication because you’re having difficulty describing something that you’re seeing in words. You want to provide the people you’re conversing with some audio-visual aids to help them understand the point that you’re trying to make and you think that getting on Skype and sharing your screen with them is the best way to do that. Firing up a screen sharing session does work well, but you can usually accomplish the same thing in an asynchronous manner. For example, you could take a screenshot and annotate it with some text and drawings to illustrate what it is you’re seeing. If a screenshot won’t work, taking a short screen recording while you narrate over it and posting the video to your forum or chat system along with a text-based description of what’s in the recording that can be searched for later can be a great way to effectively communicate with your team asynchronously.
    I Said What?!?
    Now for the second reason I listed: most asynchronous modes of communication provide a transcript of what was said and what decisions might have been made during the conversation. There have been many occasions where I’ve used the search feature of my team’s chat application to find a conversation that happened several weeks or months ago to remember what was decided. Unfortunately, I think the benefits associated with the persistence of communicating asynchronously often get overlooked when people decide to escalate to an in-person meeting or voice/video call. I’m becoming much more reluctant to suggest a voice or video call if I suspect that it might lead to codifying some kind of design decision, because everyone involved is going to hang up the call and immediately forget what was decided. I recognize that you can record and archive these types of interactions, but without being able to search them the recordings aren’t terribly useful.
    When and How To Escalate
    I don’t mean to imply that communicating via voice/video or in person is never a good idea. I probably jump on a Skype call with a co-worker at least once a day to quickly hash something out or show them a bit of code that I’m working on. Also, meeting in person periodically is really important for remote teams. There’s no way around the fact that sometimes it’s easier to jump on a call and show someone my screen so they can see what I’m seeing. So when is it right to escalate? I think the simplest way to answer that is when the communication starts to feel painful. Everyone’s tolerance for that pain is different, but I think you need to let it hurt a little bit before jumping to synchronous communication. When you do escalate from asynchronous to synchronous communication, there are a couple of things you can do to maximize the effectiveness of the communication:
    Take notes – This is huge and yet I’ve found that a lot of teams don’t do this. If you’re holding a meeting with more than 2 people you should have someone taking notes. Taking notes while participating in a meeting can be difficult, but there are a few strategies to deal with this challenge that probably deserve a short post of their own. After the meeting, make sure the notes are posted to a place where all concerned parties (including those that might not have attended the meeting) can review and search them.
    Persist decisions made ASAP – If any decisions were made during the meeting, persist those decisions to a searchable medium as soon as possible following the conversation. All the teams I’ve worked on used a web-based system for tracking the on-going work and a backlog of work to be done in the future. I always try to make sure that all of the cards/stories/tasks/whatever in these systems reflect the latest decisions that were made as the work was being planned and executed. If you held a quick call with your team lead and decided that it wasn’t worth the effort to build real-time validation into that new UI you were working on, go and codify that decision in the story associated with that work immediately after you hang up. Even better, write it up in the story while you are both still on the phone. That way when the folks from your QA team pick up the story to test a few days later, they’ll know why the real-time validation isn’t there without having to invoke yet another conversation about the work.
    Communicating Well is Hard
    At this point you might be thinking that communicating asynchronously is more difficult than having a live conversation. You’re right: it is more difficult. In order to communicate effectively this way you need to very carefully think about the message that you’re trying to convey and craft it in a way that’s easy for your audience to understand. This is almost always harder than just talking through a problem in real time with someone; this is why escalating communication is such a popular idea.
Why wouldn’t we want to do the thing that’s easier? Easier isn’t always better. If you and your team can get in the habit of communicating effectively in an asynchronous manner you’ll find that, over time, all of your communications get less painful because you don’t need to re-iterate previously made points over and over again. If you communicate right the first time, you often don’t need to rehash old conversations because you can go back and find the decisions that were made laid out in plain language. You’ll also find that you get better at doing things like writing useful comments in your code, creating written documentation about how the feature that you just built works, or persuading your team to do things in a certain way.

    Read the article

  • Bugzilla - How to set up an MTA that will receive Gmail to create bugs

    - by JRock
    I have been looking for a while at setting up an MTA for Bugzilla to receive bugs via email and am not really seeing any detailed guides. Currently I am using Gmail as the outbound SMTP for messages, but I do not have a solution for receiving emails as bugs. I am assuming I would set up an MTA, it would grab down the emails, and then Bugzilla would read them somehow. I am unsure of this process and of a solution for it; any detailed help or direction would be great. Distro: Ubuntu 11.10
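    One common pattern (a sketch only, not tested here; the script path is a guess for an Ubuntu package install, and the Gmail account must have POP access enabled) is to skip running a full inbound MTA and instead poll the Gmail mailbox with fetchmail, handing each message to Bugzilla's inbound mail script, email_in.pl, which creates or updates bugs from email:

        # ~/.fetchmailrc -- poll Gmail over POP3/SSL and pipe messages into Bugzilla
        poll pop.gmail.com
            protocol pop3
            user "bugs@yourdomain.com" password "your-password"
            ssl
            mda "/usr/share/bugzilla3/email_in.pl"

    Run fetchmail from cron every few minutes; note that the sender address of each mail has to match an existing Bugzilla account for email_in.pl to accept it.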

    Read the article

  • Adaptec 2420SA RAID 1 rebuild 66% after 1 week

    - by moss
    We're running a 500GB RAID 1 on an Adaptec 2420SA with 2x500GB Hitachi drives. It suddenly crashed and required a rebuild -- it was not even booting. The rebuild is only at 66% after about a week. Very frustrated. This is the second box with the same card and drives that has had issues with the RAID/drives. Another box had this happen twice. Both machines are Linux boxes, CentOS and Fedora. I don't know if it's the firmware, which currently needs an update (any help doing this over PXE would be great; I have used UDA to do PXE boots in the past). Anyway, I would love to hear about experiences with this card and firmware. I'm thinking of going the Google route and just using cheap boxes with no RAID, relying on server redundancy instead.

    Read the article

  • Photoshop - Turning white photo to dark photo

    - by K.M
    I took the original photo, and I was playing around in Photoshop and got to the amended photo. I was stupid enough not to save the file as a PSD or remember how I did it. However, I definitely remember I pressed Invert (Command+I / Control+I) and my white photo turned into a proportioned dark photo. Does anybody know how? It was a very simple step. It was an accidental discovery. It would be great if someone knows the answer. See the original photo. See the amended photo.

    Read the article

  • ASP.NET 4.0 - Compatibility settings for rendering controls

    - by Jalpesh P. Vadgama
    With ASP.NET 4.0 Microsoft has taken a great step for rendering controls: it now produces much cleaner HTML. There are lots of enhancements for rendering HTML controls in ASP.NET 4.0; all controls like Menu, ListView and others now render cleaner HTML. But recently I faced a strange problem in rendering controls. I have my site in ASP.NET 3.5 and I want to convert it to ASP.NET 4.0. I had applied my styles as per the 3.5 rendering, and some of the items are obsolete in ASP.NET 4.0. Modifying the style sheet would have been a tedious job; here the ASP.NET 4.0 compatibility setting comes to help. The ASP.NET 4.0 compatibility setting provides full backward compatibility in terms of rendering controls. You can assign it in your web.config like the following:

        <system.web>
          <pages controlRenderingCompatibilityVersion="3.5|4.0"/>
        </system.web>

    Here the value of controlRenderingCompatibilityVersion is a string which indicates which way controls should render in the browser: if you provide 4.0 then it will render controls with cleaner HTML, while if you want to go with the old legacy rendering you can put 3.5 and it will render the same way as in ASP.NET 3.5. Hope this helps you!!! Technorati Tags: ASP.NET 4.0, controlRenderingCompatibility

    Read the article

  • configuration transfer over scp on commit not working on Juniper EX-2200 switch

    - by liv2hak
    I am making a series of configuration changes on a Juniper EX-2200 switch. I have this switch connected to another PC via an ethernet cable. The IP address of the switch is 192.168.1.1. I am able to ping from 192.168.1.1 to 192.168.1.0 and vice versa. After the changes I make, I run the following commands:

        set system archival configuration transfer-on-commit
        set system archival configuration archive-sites "scp://[email protected]:/home/karthik/ws_karthik/sw1_config_1.txt" password godfather
        commit

    There is a user with username "karthik" and password "godfather", and the path shown above also exists on the system. However, I don't see the configuration file sw1_config_1.txt created at the path specified. I have also verified that sshd is running on the PC (192.168.1.10). Am I doing something wrong here? It would be great if anyone could help me out.

    Read the article

  • New Year's Resolutions and Keeping in Touch in 2011

    - by Brian Dayton
    The run-up to Oracle OpenWorld 2010 San Francisco--and the launch of Fusion Applications--was a busy time for many of us working on the applications business at Oracle. The great news was that the Oracle Applications general sessions, sessions, demogrounds and other programs were very well attended and well received. Unfortunately for this blog, the work wasn't done there. Yes, there haven't been many additional blog entries since the previous one, about which one industry analyst told us, "That's a good post!" That being said, our New Year's Resolution is to blog more frequently about what's been keeping us busy since Oracle OpenWorld San Francisco. A quick summary:
    - A 4-part webcast series covering major elements of Oracle's Applications strategy
    - Oracle OpenWorld Brazil
    - Oracle OpenWorld China
    - A stellar fiscal Q2 for Oracle and our applications business
    - Engagement with many Oracle Fusion Applications Early Adopter customers (more on this in the coming year)
    Objectives for the Coming Year
    Looking forward at 2011, there are many ways in which we hope to continue making connections with our valued customers and partners, sharing information about where Oracle Applications are headed, and answering questions about how to manage your Oracle Applications roadmap. Things to look for in 2011:
    - Stay connected with Oracle Applications on a daily basis via our Facebook page. You don't have to be a member of Facebook---but if you are and "like" the page, you'll have daily insights and updates delivered to your account: http://www.facebook.com/OracleApps
    - Coming soon, an Oracle Applications strategy update World Tour---a global program that takes key updates and information to cities around the globe
    - Save the date: on February 3rd, Oracle will be hosting a global, online conference for Oracle Applications customers, partners and interested parties
    Happy New Year and look for us in 2011.

    Read the article

  • Podcasting vs Stack Overflow vs Geekswithblogs

    - by MarkPearl
    For a few years now I have been looking for effective ways to be involved in the “community”. While there are a few community programming events in my area (Johannesburg), there isn’t too much face to face stuff – which has caused me to turn to the internet. My internet attempts have been varied – at first I took the passive approach of listening to tech podcasts. This was great for a while, but soon the content became semi-repetitive and a little boring. It seemed that the podcasts I was listening to all went round the same themes and speakers, and while I am still a keen listener to several tech podcasts – it didn’t quench my thirst. So I began to be a bit more active – starting with Stack Overflow – where I would scan the site for questions that were in the realm of my ability to answer. It worked for a while, but soon it began to be discouraging – there seem to be so many people who know so much more than me and are quicker at typing that I felt fairly ineffective. So while I still use Stack Overflow when I am in a pickle and need some help – it feels more like me taking from the community than giving anything. Which brought me to Geeks with Blogs. Till I found GWB I hadn’t felt like I was an active part of a community. I had blogged before on Blogspot and Wordpress but hadn’t felt associated with the community. Now when I get a comment from someone on one of my GWB posts either thanking me, adding a bit more or correcting me, it makes me feel like I am contributing to a community. So well done GWB. Thanks for making a spot that makes me feel at home!

    Read the article

  • On contract work, obligations to said contract, and looking out for yourself…

    - by jlnorsworthy
    Without boring you all with details, my last two contract assignments were cut short; I was given 3 days' notice on one, and 4 weeks' notice on the other. Neither of these was due to performance – they both basically came down to budget issues. On my second contract, I got the feeling that it may not have been a great place to stay for the duration of my contract. Because of the money/time spent getting me in the door, and the possible negative effect on my employer/recruiter, I decided to stay at least for a few months (and start looking several weeks before the end of my supposedly “extendable” contract). These experiences have left me a little wary of contract work. It seems that if I land a bad contract, my recruiter would take a hit (reputation or otherwise) if I quickly found another job. But on the other hand, the client company won’t think twice about ending the contract early for any reason. I know that the counter-argument to this is “maybe your recruiter shouldn’t have put you into a crappy assignment”… either way, it seems that since I am relying on him to provide me with work, I should try not to damage his reputation with client companies. I’m basically brand new to contracting (these were my first two contracts) so these concerns are new to me. TLDR: Is contract work, by its very nature, largely unstable? Am I worrying too much about my recruiter? Should I be quicker to start looking for a new job even after just weeks at a new company (when the environment seems unstable)? If so, do I look through my recruiter or just find another position by any means necessary?

    Read the article
