Search Results

Search found 2750 results on 110 pages for 'progress 4gl'.


  • Motion - can't get streaming working from a webcam

    - by Emmanuel Brunet
    I'm trying to record a video stream from my Tenvis IP camera with motion 3.2.12 on Debian 7.5. I installed the standard Debian package with sudo apt-get install motion. Assume my IP cam's DNS name is webcam, user: admin, password: password.

    /etc/motion/motion.conf – below are my configuration file settings:

        netcam_url http://webcam/videostream.cgi
        netcam_userpass admin:password
        target_dir /media/videos/log/motion
        # The mini-http server listens to this port for requests (default: 0 = disabled)
        webcam_port 1234
        ffmpeg_cap_new on
        ffmpeg_video_codec mpeg4
        output_motion off
        snapshot_interval 0
        # Quality of the jpeg (in percent) images produced (default: 50)
        webcam_quality 50
        # Output frames at 1 fps when no motion is detected and increase to the
        # rate given by webcam_maxrate when motion is detected (default: off)
        webcam_motion on
        # Maximum framerate for webcam streams (default: 1)
        webcam_maxrate 15
        # Restrict webcam connections to localhost only (default: on)
        webcam_localhost on
        # Limits the number of images per connection (default: 0 = unlimited)
        # Number can be defined by multiplying actual webcam rate by desired number of seconds
        # Actual webcam rate is the smallest of the numbers framerate and webcam_maxrate
        webcam_limit 0
        control_port 8080
        control_authentication admin:password

    Issue #1: when I try to display http://localhost:1234 the browser looks frozen; no HTTP 404 is received, but it seems to keep waiting for data.

    Issue #2: in the output directory motion writes a lot of JPEG snapshots, although I just want several sequenced video files.

    Log – I run motion in interactive mode in a terminal; here is the output:

        root@mercure:/etc/motion# motion -c motion-1.0.conf
        [0] Processing thread 0 - config file motion-1.0.conf
        [0] Motion 3.2.12 Started
        [0] ffmpeg LIBAVCODEC_BUILD 3482368 LIBAVFORMAT_BUILD 3478785
        [0] Thread 1 is from motion-1.0.conf
        [0] motion-httpd/3.2.12 running, accepting connections
        [0] motion-httpd: waiting for data on port TCP 8080
        [1] Thread 1 started
        [1] Resizing pre_capture buffer to 1 items
        [1] Started stream webcam server in port 1234
        [1] avcodec_open - could not open codec: Operation now in progress
        [1] ffopen_open error creating (new) file [~/tmp/motion/01-20140603165303.avi]: Operation now in progress
        [1] File of type 1 saved to: ~/tmp/motion/01-20140603165303-01.jpg
        [1] Thread exiting
        [1] Calling vid_close() from motion_cleanup
        [1] vid_close: calling netcam_cleanup
        [1] netcam camera handler: finish set, exiting
        [0] Motion thread 1 restart
        ... (the same avcodec_open / ffopen_open failure then repeats on every restart, with the error
        ... alternating between "Resource temporarily unavailable" and "Connection reset by peer",
        ... and one more JPEG saved each time)
        [0] httpd - Finishing
        [0] httpd Closing
        [0] httpd thread exit

    It doesn't find the codec:

        avcodec_open - could not open codec: Operation now in progress

    I've changed ffmpeg_video_codec from mpeg4 to swf; the result is the same. When using the flv format, motion writes a lot of .jpg images but I can't see anything at http://localhost:1234:

        [1] File of type 1 saved to: ~/tmp/motion/01-20140603171035-00.jpg
        [1] File of type 1 saved to: ~/tmp/motion/01-20140603171035-01.jpg
        [1] File of type 1 saved to: ~/tmp/motion/01-20140603171035-02.jpg
        ... (several per second, and so on)

    Any idea how to just get the video stream recorded on my local disk?
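    A possible direction, sketched from the motion 3.2 documentation rather than from this particular log, so treat it as an assumption to verify: the unwanted JPEGs are what motion calls "normal" image output, which is on by default and independent of the ffmpeg movie output, and the codec failure may be specific to the mpeg4 encoder in Debian 7's old ffmpeg build.

        # hedged sketch of /etc/motion/motion.conf changes (assumptions, see above)
        output_normal off          # stop saving a JPEG for every detected-motion frame
        ffmpeg_cap_new on          # keep writing one movie file per motion event
        ffmpeg_video_codec msmpeg4 # try an alternative encoder if mpeg4 fails to open

    As for the frozen browser: the stream on webcam_port is served as multipart motion-JPEG, and with webcam_motion on it idles at 1 fps until motion is detected (per the config comments above), so a browser that renders nothing until frames arrive can look hung even when the server is fine.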

    Read the article

  • West Wind WebSurge - an easy way to Load Test Web Applications

    - by Rick Strahl
    A few months ago on a project the subject of load testing came up. We were having some serious issues with a Web application that would start spewing SQL lock errors under somewhat heavy load. These sorts of errors can be tough to catch, precisely because they only occur under load and not during typical development testing. To replicate this error more reliably we needed to put a load on the application and run it for a while before these SQL errors would flare up.

    It's been a while since I'd looked at load testing tools, so I spent a bit of time looking at different tools and frankly didn't really find anything that was a good fit. A lot of tools were either a pain to use, didn't have the basic features I needed, or were extravagantly expensive. In the end I got frustrated enough to build an initially small custom load test solution that then morphed into a more generic library, then gained a console front end and eventually turned into a full-blown Web load testing tool that is now called West Wind WebSurge.

    I got seriously frustrated looking for tools every time I needed some quick and dirty load testing for an application. If my aim is just to put an application under heavy enough load to find a scalability problem in code, or to simply try and push an application to its limits on the hardware it's running on, I shouldn't have to struggle to set up tests. It should be easy enough to get going in a few minutes, so that testing can be done on a regular basis without a lot of hassle. And that was the goal when I started to build out my initial custom load tester into a more widely usable tool.

    If you're in a hurry and you want to check it out, you can find more information and download links here: West Wind WebSurge Product Page, Walk-through Video, Download link (zip), Install from Chocolatey, Source on GitHub. For a more detailed discussion of the whys and hows and some background, continue reading.

    How did I get here?

    When I started out on this path, I wasn't planning on building a tool like this myself – but I got frustrated enough looking at what's out there to think that I could do better than what's available for the most common simple load testing scenarios. When we ran into the SQL lock problems I mentioned, I started looking around at what's available in Web load testing solutions that would work for our whole team, which consisted of a few developers and a couple of IT guys, both of whom needed to be able to run the tests. It had been a while since I'd looked at tools and I figured that by now there should be some good solutions out there, but as it turns out I didn't really find anything that fit our relatively simple needs without costing an arm and a leg… I spent the better part of a day installing and trying various load testing tools, and to be frank most of them were either terrible at what they do, incredibly unfriendly to use, used terminology I couldn't even parse, or were extremely expensive (and I mean in the 'sell your liver' range of expensive). Pick your poison. There are also a number of online solutions for load testing, and they actually looked more promising, but those wouldn't work well for our scenario, as the application runs inside of a private VPN with no outside access into the VPN. Most of those online solutions also ended up being very pricey – presumably because the bandwidth required to test over the open Web can be enormous.
    When I asked around on Twitter about what people were using, I got mostly… crickets. Several people mentioned Visual Studio Load Test, and most other suggestions pointed to online solutions. I did get a bunch of responses, though, with people asking to let them know what I found – apparently I'm not alone when it comes to finding load testing tools that are effective and easy to use.

    As to Visual Studio: the higher-end SKUs of Visual Studio and the test edition include a Web load testing tool, which is quite powerful, but there are a number of issues with it. First, it's tied to Visual Studio, so it's not very portable – you need a VS install. I also find the test setup and terminology used by the VS test runner extremely confusing. Heck, it's complicated enough that there's even a Pluralsight course from Steve Smith on using the Visual Studio Web test. And of course you need one of the high-end Visual Studio SKUs, and those are mucho dinero ($$$) – just for load testing that's rarely an option. Some of the tools are ultra-extensive and let you run analysis tools on the target servers, which is useful, but in most cases just plain overkill that distracts from what I tend to be ultimately interested in: reproducing problems that occur at high load, and finding the upper limits and 'what if' scenarios as load is ramped up increasingly against a site. Yes, it's useful to have Web app instrumentation, but often that's not what you're interested in.

    I still fondly remember the early days of Web testing when Microsoft had the WAST (Web Application Stress Tool), which was rather simple – and also somewhat limited – but easily allowed you to create stress tests very quickly. It had some serious limitations (mainly that it didn't work with SSL), but the idea behind it was excellent: create tests quickly and easily and provide a decent engine to run them locally with minimal setup. You could get set up and run tests within a few minutes. Unfortunately, that tool died a quiet death, as have so many of Microsoft's tools that were probably built by an intern and then abandoned, even though there was a lot of potential and it was actually fairly widely used. Eventually the tool was no longer downloadable, and now it simply doesn't work anymore on higher-end hardware.

    West Wind WebSurge – Making Load Testing Quick and Easy

    So I ended up creating West Wind WebSurge out of rebellious frustration… The goal of WebSurge is to make it drop-dead simple to create load tests. It's super easy to capture sessions, either using the built-in capture tool (big props to Eric Lawrence, Telerik and FiddlerCore, which made that piece a snap), using the full version of Fiddler and exporting sessions, or by manually or programmatically creating text files based on plain HTTP headers.

    I've been using this tool for four months now on a regular basis on various projects as a reality check for performance and scalability, and it's worked extremely well for finding small performance issues. I also use it regularly as a simple URL tester, as it allows me to quickly enter a URL plus headers and content, test that URL and its results, and easily save one or more of those URLs. A few weeks back I made a walk-through video that goes over most of the features of WebSurge in some detail. Note that the UI has changed slightly since then, so there are some UI improvements you won't see in the video.
    Most notably, the test results screen has recently been updated to a different layout that provides more information about each URL in a session at a glance. The video and the main WebSurge site have a lot of info on basic operations, so for the rest of this post I'll talk about a few deeper aspects that may be of interest, while also giving a glance at how WebSurge works.

    Session Capturing

    As you would expect, WebSurge works with Sessions of URLs that are played back under load. You can create session entries manually by individually adding URLs to test (on the Request tab on the right) and saving them, or you can capture output from Web browsers, Windows desktop applications that call services, or your own applications, using the built-in Capture tool. With this tool you can capture anything HTTP – SSL requests and content from Web pages, AJAX calls, SOAP or REST services – anything that uses Windows or .NET HTTP APIs. Behind the scenes the capture tool uses FiddlerCore, so basically anything you can capture with Fiddler you can also capture with the WebSurge Session capture tool.

    Alternately you can use Fiddler itself, and then export the captured Fiddler trace to a file, which can then be imported into WebSurge. This is a nice way to let somebody capture a session without having to install WebSurge, or for your customers to provide an exact playback scenario for a given set of URLs that perhaps cause a problem. Note that not all applications work with Fiddler's proxy unless you configure a proxy explicitly. For example, .NET Web applications that make HTTP calls usually don't show up in Fiddler by default; for those .NET applications you can explicitly override proxy settings to capture the requests to service calls.

    The capture tool also has handy optional filters that allow you to filter by domain, to help block out noise that you typically don't want to include in your requests. For example, if your pages include links to CDNs, Google Analytics or social links, you typically don't want those in your load test; by capturing just from a specific domain you are guaranteed content from only that one domain. Additionally, you can provide URL filters in the configuration file – filter strings that, if contained in a URL, cause requests to be ignored. Again, this is useful if you don't filter by domain but want to filter out things like static image, CSS and script files. Often you're not interested in the load characteristics of these static, usually cached resources, as they just add noise to tests and often skew the overall URL performance results. In my testing I tend to care only about my dynamic requests.

    SSL Captures require Fiddler

    Note that in order to capture SSL requests you'll have to install Fiddler's SSL certificate. The easiest way to do this is to install Fiddler and use its SSL configuration options to get the certificate into the local certificate store. There's a document on the Telerik site that provides the exact steps to get SSL captures to work with Fiddler and therefore with WebSurge.

    Session Storage

    A group of URLs entered or captured makes up a Session. Sessions can be saved and restored easily, as they use a very simple text format stored on disk: slightly customized HTTP header traces separated by a separator line. A sketch of what such a file might look like follows below.
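    To make that format concrete, here is a hand-written sketch of what a two-URL session file might look like. The dashed separator and all header values are my assumptions; the post itself only specifies that the first header line carries the full URL and that a separator line isolates each request:

        GET http://localhost/store/ HTTP/1.1
        Accept: text/html
        User-Agent: WebSurge-Session

        ------------------------------------------------------------------

        POST http://localhost/store/api/cart HTTP/1.1
        Accept: application/json
        Content-Type: application/json

        {"sku": "WWSURGE", "qty": 1}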
    The headers are standard HTTP headers, except that the full URL, instead of just the domain-relative path, is stored as part of the first HTTP header line for easier parsing. Because it's just text and uses the same format that Fiddler uses for exports, it's super easy to create Sessions by hand or under program control by writing out a simple text file. The raw captured format shown in the Capture window is also what's stored to disk and what WebSurge parses from. The only 'custom' part of these headers is that the first line contains the full URL instead of the domain-relative path and Host: header; the rest of each header block is just plain standard HTTP headers, with each individual URL isolated by a separator line. Since the format matches what Fiddler produces for exports, it's easy to exchange or view data in either Fiddler or WebSurge. URLs can also be edited interactively, so you can modify the headers easily as well – again, it's just plain HTTP headers, so anything you can do with HTTP can be added here.

    Use it for single URL Testing

    Incidentally, I've also found this form an excellent way to test and replay individual URLs for simple non-load-testing purposes. Because you can capture a single URL or many, and store them on disk, this also provides a nice HTTP playground where you can record URLs with their headers, fire them one at a time or as a session, and see results immediately. It's actually an easy setup for REST presentations, and I find the simple UI flow easier than using Fiddler natively. Finally, you can save one or more URLs as a session for later retrieval. I'm using this more and more for simple URL checks.

    Overriding Cookies and Domains

    Speaking of HTTP headers – you can also overwrite cookies as part of the options. One thing that happens with modern Web applications is that you have session cookies in use for authorization; these cookies tend to expire at some point, which would invalidate a test. Using the Options dialog you can override the cookie, which replaces the cookie for all requests with the value specified. You can capture a valid cookie from a manual HTTP request in your browser and paste it into the cookie field, replacing the existing cookie with one that is now valid. Likewise, you can easily replace the domain – so if you captured URLs on west-wind.com and now want to test on localhost, you can do that easily as well. You could even do something like capture on store.west-wind.com and then test on localhost/store, which would also work.

    Running Load Tests

    Once you've created a Session, you can specify the length of the test in seconds and the number of simultaneous threads to run the session on. Sessions run through each of their URLs sequentially by default. One option in the options list lets you randomize the URLs so each thread runs requests in a different order; this avoids bunching up URLs when tests start, since otherwise all threads run the same requests simultaneously, which can sometimes skew the results of the first few minutes of a test. While sessions run, progress information is displayed: by default there's a live view of requests in a console-like window, and at the bottom of the window a running summary shows where you are in the test, how many requests have been processed, and the current requests-per-second count across all requests.
    Note that for tests that run over a thousand requests a second, it's a good idea to turn off the console display. While the console display is nice for seeing that something is happening, and gives you a rough idea of what's going on with actual requests, once a lot of requests are being processed this UI updating adds a lot of CPU overhead to the application, which may reduce the actual load generated. If you are running a thousand requests a second there's not much to see anyway, as requests roll by far too fast to read individual lines. On the options panel there is a NoProgressEvents option that disables the console display; the summary display is still updated approximately once a second, so you can always tell that the test is still running.

    Test Results

    When the test is done you get a simple results display. On the right you get an overall summary as well as a breakdown by each URL in the session. Both successes and failures are highlighted, so it's easy to see what's breaking in your load test. The report can be printed, or you can open the HTML document in your default Web browser for printing to PDF or saving to disk. The list on the right shows a partial list of the URLs that were fired, so you can look in detail at the request and response data; the list can be filtered by success and failure, and is (at the moment) limited to a maximum of 1,000 items in order to render reasonably quickly. Each item in the list can be clicked to see the full request and response data. This is particularly useful for errors, so you can quickly see and copy what request data was used; in the case of a GET request you can also just click the link to jump to the page. For non-GET requests you can find the URL in the Session list and use the context menu to test the URL as configured, including any HTTP content data to send. You get to see the full HTTP request and response, as well as a link in the request header to visit the actual page – not so useful for a POST, but definitely useful for GET requests.

    Finally, you can also get a few charts. The most useful one is probably the requests-per-second chart, which can be accessed from the Charts menu or shortcut. Results can also be exported to JSON, XML and HTML; keep in mind that these files can get very large rather quickly, so exports can take a while to complete.

    Command Line Interface

    WebSurge runs with a small core load engine, and this engine is plugged into the front-end application I've shown so far. There's also a command line interface available to run WebSurge from the Windows command prompt. Using the command line you can run tests for either an individual URL (similar to ab.exe, for example) or a full session file. By default, WebSurgeCli shows progress every second, displaying the total request count, failures, and the requests per second for the entire test; a silent option turns off this progress display and shows only the results. The command line interface can be useful for build integration, allowing you to check for failures or verify that a specific requests-per-second count is reached. It's also nice as a quick-and-dirty URL test facility, similar to the way you'd use Apache Bench (ab.exe). Unlike ab.exe, though, WebSurgeCli supports SSL and makes it much easier to create multi-URL tests, using either manual editing or the WebSurge UI.
    Current Status

    Currently West Wind WebSurge is still in beta. I'm still adding small new features and tweaking the UI in an attempt to make it as easy and self-explanatory as possible to run. Documentation for the UI and specialty features is also still a work in progress.

    I plan on open-sourcing this product, but it won't be free. There's a free version available that provides a limited number of threads and request URLs to run; a relatively low-cost license removes the thread and request limitations. Pricing info can be found on the Web site – there's an introductory price of $99 at the moment, which I think is reasonable compared to most other paid solutions out there, which are exorbitant by comparison… The reason the code is not available yet is – well, the UI portion of the app is a bit embarrassing in its current monolithic state. The UI started as a very simple interface that later got a lot more complex – yeah, that never happens, right? Unless there's a lot of interest I don't foresee rewriting the UI entirely (which would be ideal), but in the meantime at least some cleanup is required before I dare to publish it :-). The code will likely be released with version 1.0.

    I'm very interested in feedback. Do you think this could be useful to you and provide value over other tools you may or may not have used before? I hope so – it has already provided a ton of value for me, and the work I do with it made the development worthwhile at this point. You can leave a comment below, or for more extensive discussions you can post a message on the West Wind Message Board in the WebSurge section.

    Microsoft MVPs and Insiders get a free License

    If you're a Microsoft MVP or a Microsoft Insider you can get a full license for free. Send me a link to your current, official Microsoft profile and I'll send you a not-for-resale license. Send any messages to [email protected].

    Resources

    For more info on WebSurge and to download it to try it out, use the following links: West Wind WebSurge Home, Download West Wind WebSurge, Getting Started with West Wind WebSurge Video.

    © Rick Strahl, West Wind Technologies, 2005-2014. Posted in ASP.NET.

    Read the article

  • SSAS 2008 R2– Little Gems

    - by ACALVETT
    I have spent the last few days working with SSAS 2008 R2 and noticed a few small enhancements which many people probably won't notice, but I will list them here and explain why they are important to me. New profiler events – Commit: this is a new subclass event for "Progress Report End". It represents the elapsed time taken for the server to commit your data. It is important because for the duration of this event a server-level lock is in place, blocking all incoming connections and causing time out...(read more)

    Read the article

  • ParallelWork: Feature rich multithreaded fluent task execution library for WPF

    - by oazabir
    ParallelWork is a free, open source helper class that lets you run multiple pieces of work on parallel threads, get success, failure and progress updates on the WPF UI thread, wait for work to complete, abort all work (in case of shutdown), queue work to run after a certain time, and chain parallel work one after another. It's more convenient than using .NET's BackgroundWorker because you don't have to declare one component per work item, nor do you need to declare event handlers to receive notifications and carry additional data through private variables. You can safely pass objects produced on a different thread to the success callback. Moreover, you can wait for work to complete before you do a certain operation, and you can abort all parallel work while it is in flight. If you are building a highly responsive WPF UI where you have to carry out multiple jobs in parallel yet want full control over those parallel jobs' completion and cancellation, then the ParallelWork library is the right solution for you.

    I am using the ParallelWork library in my PlantUmlEditor project, which is a free open source UML editor built on WPF; you can see some realistic use of the library there. Moreover, the test project comes with 400 lines of Behavior Driven Development flavored tests that confirm it really does what it says it does. The source code of the library is part of the "Utilities" project in the PlantUmlEditor source code hosted at Google Code.

    The library comes in two flavors: one is the ParallelWork static class, which has a collection of static methods that you can call; the other is the Start class, a fluent wrapper over the ParallelWork class that makes for more readable and aesthetically pleasing code.

    ParallelWork allows you to start work immediately on a separate thread, or to queue work to start after some duration. You can start immediate work on a new thread using the following methods:

        void StartNow(Action doWork, Action onComplete)
        void StartNow(Action doWork, Action onComplete, Action<Exception> failed)

    For example:

        ParallelWork.StartNow(() =>
        {
            workStartedAt = DateTime.Now;
            Thread.Sleep(howLongWorkTakes);
        },
        () =>
        {
            workEndedAt = DateTime.Now;
        });

    Or you can use the fluent way, Start.Work:

        Start.Work(() =>
        {
            workStartedAt = DateTime.Now;
            Thread.Sleep(howLongWorkTakes);
        })
        .OnComplete(() =>
        {
            workCompletedAt = DateTime.Now;
        })
        .Run();

    Besides simple execution of work on a parallel thread, you can have the parallel thread produce some object and then pass it to the success callback by using these overloads:

        void StartNow<T>(Func<T> doWork, Action<T> onComplete)
        void StartNow<T>(Func<T> doWork, Action<T> onComplete, Action<Exception> fail)

    For example:

        ParallelWork.StartNow<Dictionary<string, string>>(
            () =>
            {
                test = new Dictionary<string, string>();
                test.Add("test", "test");
                return test;
            },
            (result) =>
            {
                Assert.True(result.ContainsKey("test"));
            });

    Or, the fluent way:

        Start<Dictionary<string, string>>.Work(() =>
        {
            test = new Dictionary<string, string>();
            test.Add("test", "test");
            return test;
        })
        .OnComplete((result) =>
        {
            Assert.True(result.ContainsKey("test"));
        })
        .Run();

    You can also start work after some delay using these methods:

        DispatcherTimer StartAfter(Action onComplete, TimeSpan duration)
        DispatcherTimer StartAfter(Action doWork, Action onComplete, TimeSpan duration)

    You can use this to perform a timed operation on the UI thread, as well as to perform an operation on a separate thread after some time.
        ParallelWork.StartAfter(
            () =>
            {
                workStartedAt = DateTime.Now;
                Thread.Sleep(howLongWorkTakes);
            },
            () =>
            {
                workCompletedAt = DateTime.Now;
            },
            waitDuration);

    Or, the fluent way:

        Start.Work(() =>
        {
            workStartedAt = DateTime.Now;
            Thread.Sleep(howLongWorkTakes);
        })
        .OnComplete(() =>
        {
            workCompletedAt = DateTime.Now;
        })
        .RunAfter(waitDuration);

    There are several overloads of these functions that take an exception callback for handling exceptions, or that deliver progress updates from the background thread while work is in progress. For example, I use the library in PlantUmlEditor to perform a background update of the application:

        // Check if there's a newer version of the app
        Start<bool>.Work(() =>
        {
            return UpdateChecker.HasUpdate(Settings.Default.DownloadUrl);
        })
        .OnComplete((hasUpdate) =>
        {
            if (hasUpdate)
            {
                if (MessageBox.Show(Window.GetWindow(me),
                    "There's a newer version available. Do you want to download and install?",
                    "New version available",
                    MessageBoxButton.YesNo,
                    MessageBoxImage.Information) == MessageBoxResult.Yes)
                {
                    ParallelWork.StartNow(() =>
                    {
                        var tempPath = System.IO.Path.Combine(
                            Environment.GetFolderPath(Environment.SpecialFolder.ApplicationData),
                            Settings.Default.SetupExeName);
                        UpdateChecker.DownloadLatestUpdate(Settings.Default.DownloadUrl, tempPath);
                    },
                    () => { },
                    (x) =>
                    {
                        MessageBox.Show(Window.GetWindow(me),
                            "Download failed. When you run next time, it will try downloading again.",
                            "Download failed",
                            MessageBoxButton.OK,
                            MessageBoxImage.Warning);
                    });
                }
            }
        })
        .OnException((x) =>
        {
            MessageBox.Show(Window.GetWindow(me), x.Message, "Download failed",
                MessageBoxButton.OK, MessageBoxImage.Exclamation);
        });

    The above code shows how to get exception callbacks on the UI thread so that you can take the necessary actions there. It also shows how you can chain two parallel works to happen one after another.

    Sometimes you want to do some parallel work when the user does some activity on the UI. For example, you might want to save the file in an editor every 10 seconds while the user is typing. In such a case, you need to make sure you don't queue another parallel work every 10 seconds while one is already queued; you should start new work only when no other background work is going on. Here's how you can do it:

        private void ContentEditor_TextChanged(object sender, EventArgs e)
        {
            if (!ParallelWork.IsAnyWorkRunning())
            {
                ParallelWork.StartAfter(SaveAndRefreshDiagram, TimeSpan.FromSeconds(10));
            }
        }

    If you want to shut down your application and make sure no parallel work is going on, you can call the StopAll() method:

        ParallelWork.StopAll();

    If you want to wait for parallel work to complete with a timeout, you can call WaitForAllWork(TimeSpan timeout). It blocks the current thread until all parallel work completes or the timeout period elapses:

        result = ParallelWork.WaitForAllWork(TimeSpan.FromSeconds(1));

    The result is true if all parallel work completed; if it's false, then the timeout period elapsed before all parallel work finished. For details on how this library is built and how it works, please read the following CodeProject article: ParallelWork: Feature rich multithreaded fluent task execution library for WPF – http://www.codeproject.com/KB/WPF/parallelwork.aspx. If you like the article, please vote for me.
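    Putting those last two calls together, here is a minimal shutdown sketch; the Window_Closing handler and the five-second budget are my own assumptions for illustration, not taken from the library's documentation:

        // hedged sketch: drain background work before a WPF window closes,
        // using the StopAll/WaitForAllWork calls described above
        private void Window_Closing(object sender, System.ComponentModel.CancelEventArgs e)
        {
            ParallelWork.StopAll(); // ask all in-flight parallel work to abort

            // give the aborted work up to 5 seconds to wind down (assumed budget)
            bool allDone = ParallelWork.WaitForAllWork(TimeSpan.FromSeconds(5));
            if (!allDone)
            {
                // the timeout elapsed; some work may still be winding down
            }
        }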

    Read the article

  • Scrum for Team Foundation Server 2010

    - by Martin Hinshelwood
    I will be presenting a session on "Scrum for TFS 2010" not once, but twice! If you are going to be at the Aberdeen Partner Group meeting on 27th April, or DDD Scotland on 8th May, then you may be able to catch my session. Credit: I want to give special thanks to Aaron Bjork from Microsoft, who provided me with most of my material. He is a Scrum and PowerPoint genius.

    Updated 9th May 2010 – I have now presented at both of these sessions and posted about it.

    Scrum for Team Foundation Server 2010 – Synopsis

    Visual Studio ALM (formerly Visual Studio Team System (VSTS)) and Team Foundation Server (TFS) are the cornerstones of development on the Microsoft .NET platform. These are the best tools for a team to have successful projects and for the developers to have a focused and smooth software development process. For TFS 2010, Microsoft is investing heavily in Scrum and has already started moving some teams across to using it. Martin will not be going in depth with Scrum, but you can find out more about Scrum by reading the Scrum Guide, and you can even assess your Scrum knowledge by having a go at the Scrum Open Assessment. You can also read SSW's Rules to Better Scrum using TFS, which were developed during our own Scrum implementations.

    Come and see Martin Hinshelwood, Visual Studio ALM MVP and Solution Architect from SSW, show you:

    - How to successfully gather requirements with user stories
    - How to plan a project using TFS 2010 and Scrum
    - How to work with a product backlog in TFS 2010
    - The right way to plan a sprint with TFS 2010
    - Tracking your progress
    - The right way to use work items
    - What you can use from the built-in reporting, as well as the project portals available from the SharePoint dashboard
    - The important reports to give your Product Owner / Project Manager

    Walk away knowing how to see project health and progress. Visual Studio ALM is designed to help address many of the traditional problems faced by teams. It does so by providing a set of integrated tools to help teams improve their software development activities and to help managers better support the software development processes. During this session we will cover the lifecycle of creating work items and how this fits into Scrum using Visual Studio ALM and Team Foundation Server.

    If you want to know more about how to do Scrum with TFS, there is a new course, created in collaboration with Microsoft and Scrum.org, that is going to be the official course for working with TFS 2010. SSW has Professional Scrum Developer Trainers who specialise in training your developers in implementing Scrum with Microsoft's Visual Studio ALM tools. Ken Schwaber and Sam Guckenheimer: Professional Scrum Development.

    Technorati Tags: Scrum, VS ALM, VS 2010, TFS 2010

    Read the article

  • How Can I Track the Modifications a Program’s Installer Makes?

    - by Jason Fitzpatrick
    What exactly are those installation apps doing as the progress bar whizzes by? If you want to keep a close eye on things, you'll need the right tools. Today's Question & Answer session comes to us courtesy of SuperUser – a subdivision of Stack Exchange, a community-driven grouping of Q&A web sites.

    Read the article

  • New DMV… not yet

    - by Michael Zilberstein
    Downloaded and installed the new toy. And while reading BOL, I stumbled upon a new, extremely useful DMV: sys.dm_exec_query_profiles. This DMV enables a DBA to monitor query progress while the query is being executed. Counters in the DMV are per operation per thread, so we'll be able to monitor in real time which thread (even for parallel processing) processes which node in the plan, or find heavy operations "post mortem". We all know the uncomfortable feeling when some heavy query runs and the boss starts asking...(read more)
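    As an illustration only – not from the post – here is a minimal C# sketch that polls the DMV for a session's per-node progress. The column names are from the SQL Server 2014 documentation; the connection string and the watched session id are assumptions, and note that in SQL Server 2014 the monitored query must run with SET STATISTICS PROFILE/XML ON (or the equivalent extended event enabled) for rows to appear:

        using System;
        using System.Data.SqlClient;

        class QueryProfilesPoller
        {
            static void Main()
            {
                // assumed connection string; point it at the server running the heavy query
                const string connStr = "Server=.;Database=master;Integrated Security=true";

                const string sql = @"
                    SELECT node_id, physical_operator_name, thread_id,
                           row_count, estimate_row_count
                    FROM sys.dm_exec_query_profiles
                    WHERE session_id = @spid
                    ORDER BY node_id, thread_id;";

                using (var conn = new SqlConnection(connStr))
                using (var cmd = new SqlCommand(sql, conn))
                {
                    cmd.Parameters.AddWithValue("@spid", 55); // assumed session id to watch
                    conn.Open();
                    using (var reader = cmd.ExecuteReader())
                    {
                        while (reader.Read())
                        {
                            // rows produced so far vs. the optimizer's estimate, per plan node and thread
                            Console.WriteLine("node {0} ({1}) thread {2}: {3}/{4} rows",
                                reader["node_id"], reader["physical_operator_name"],
                                reader["thread_id"], reader["row_count"], reader["estimate_row_count"]);
                        }
                    }
                }
            }
        }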

    Read the article

  • The kernel column by Jon Masters #87

    Linux User and Developer: "The past month saw steady progress toward the final 2.6.34 kernel release, including the announcement of initial Release Candidate kernels 2.6.34-rc1 through 2.6.34-rc4. The latter had an interesting virtual memory bug that added a week of delay..."

    Read the article

  • Why is Star Craft II lagging with these specs? [closed]

    - by Kev
    17.3" Full HD (1920x1080) LED LCD with nVIDIA GeForce® GT 425M w/ 1GB GDDR3 + Intel GMA HD4500. I also got the 640M and a Core i7, and I have 8 GB of RAM, but for some damn reason, when there is a battle in progress it appears as if my graphics card or something is not powerful enough. Shouldn't these specs be more than enough to handle StarCraft II? What could I do to improve my laptop? I also have the 64-bit OS.

    Read the article

  • SOCharts: Charts by Tags

    - by abhin4v
    Screenshot: I created this small app as a weekend hack. It shows the reputation, upvotes, downvotes and accepted answers for a user, charted against the tags of the user's answers.

    About: I wanted to know how many upvotes I was away from getting the bronze badge for the clojure tag, but I could not find any straightforward way of doing that. So I wrote this app (in Clojure, of course). The SO API is used for the data and the charts are created using the Google Chart API; the charts are opened in the default browser.

    License: Licensed under EPL 1.0.

    Download: If you have Clojure and Leiningen installed, you can simply get the code from https://gist.github.com/725331, save it as socharts.clj, and then launch the Swing UI by running

        lein repl -e "(load \"socharts\")(refer 'socharts.socharts)(-main)"

    If you don't have Clojure installed but have Java, then download the standalone jar from http://dl.dropbox.com/u/5247/socharts-1.0.0-standalone.jar and run it as

        javaw -jar socharts-1.0.0-standalone.jar

    Once the UI is launched, just type your user id in the input box and press <ENTER>. It will take some time to download the data from the SO API (the progress bar shows the download progress) and then it will open the charts in your default browser. You can also run it as a command line app by running

        lein repl -e "(load \"socharts\")(refer 'socharts.socharts)(-main <userid>)"

    or

        java -jar socharts-1.0.0-standalone.jar <userid>

    where you replace <userid> with your user id. Be warned that, because of a missing feature in the SO API, it has to fetch the data for each question you have answered, so the maximum limit is 10000 answers (the SO API call limit).

    Platform: All platforms with Java 1.6.

    Contact: You can reach me at abhinav [at] abhinavsarkar [dot] net. Please report bugs/comments/suggestions as answers to this post.

    Code: The code was written in Clojure with the UI in Swing. It is available at https://gist.github.com/725331. It's a public gist, so you can fork it if you'd like to make some changes.

    Read the article

  • Time Tracking on an Agile Team

    - by Stephen.Walther
    What's the best way to handle time-tracking on an Agile team? Your gut reaction to this question might be to resist any type of time-tracking at all. After all, one of the values of the Agile Manifesto is "Individuals and interactions over processes and tools". Forcing the developers on your team to track the amount of time that they devote to completing stories or tasks might seem like useless bureaucratic red tape: an impediment to getting real work done. I completely understand this reaction. I've been required to use time-tracking software in the past to account for each hour of my workday. It made me feel like Fred Flintstone punching in at the quarry mine, not like a professional.

    Why You Really Do Need Time-Tracking

    There are, however, legitimate reasons to track time spent on stories even when you are a member of an Agile team. First, if you are working with an outside client, you might need to track the number of hours spent on different stories for the purposes of billing; there might be no way to avoid time-tracking if you want to get paid. Second, the Product Owner needs to know when the work on a story has gone over the time originally estimated for it. The Product Owner is concerned with return on investment: if the team has gone massively over time on a story, then the Product Owner has a legitimate reason to halt work on the story and reconsider its business value. Finally, you might want to track how much time your team spends on different types of stories or tasks. For example, if your team is spending 75% of its time doing testing, then you might need to bring in more testers; or if 10% of your team's time goes into performing a software build at the end of each iteration, then it is time to consider better ways of automating the build process.

    Time-Tracking in SonicAgile

    For these reasons, we added time-tracking as a feature to SonicAgile, our free Agile project management tool. We were heavily influenced by Jeff Sutherland (one of the founders of Scrum) in the way that we implemented time-tracking (see his article http://scrum.jeffsutherland.com/2007/03/time-tracking-is-anti-scrum-what-do-you.html).

    In SonicAgile, time-tracking is disabled by default. If you want to use this feature, the project owner must enable time-tracking in Project Settings. You can choose to estimate in either days or hours: if you are estimating at the level of stories, it makes more sense to choose days; if you are estimating at the level of tasks, it makes more sense to use hours. After you enable time-tracking, you can assign three estimates to a story:

    - Original Estimate – the estimate that you enter when you first create a story. You don't change this estimate.
    - Time Spent – the amount of time that you have already devoted to the story. You update the time spent on each story during your daily standup meeting.
    - Time Left – the amount of time remaining to complete the story. Again, you update the time left during your daily standup meeting.

    So when you first create a story, you enter an original estimate that becomes the time left. During each daily standup meeting, you update the time spent and time left for each story on the Kanban. If you had perfect predictive power, the original estimate would always equal the sum of the time spent and the time left. For example, if you predict that a story will take 5 days to complete, then on day 3 the story should have 3 days spent and 2 days left.
    Unfortunately, never in the history of mankind has anyone accurately predicted the exact amount of time that it takes to complete a story. For this reason, SonicAgile does not update the time spent and time left automatically. Each day, during the daily standup, your team should update the time spent and time left for each story. For example, the following table shows the history of the time estimates for a story that was originally estimated to take 3 days but eventually takes 5 days to complete:

        Day     Original Estimate    Time Spent    Time Left
        Day 1   3 days               0 days        3 days
        Day 2   3 days               1 day         2 days
        Day 3   3 days               2 days        2 days
        Day 4   3 days               3 days        2 days
        Day 5   3 days               4 days        0 days

    In the table above, everything goes as predicted until day 3, when the team realizes that the work will require an additional two days. The situation does not improve on day 4. Then, all of a sudden, on day 5 all of the remaining work gets done. Real work often follows this pattern: long periods when nothing gets done, punctuated by occasional and unpredictable bursts of progress. We designed SonicAgile to make it as easy as possible to track the time spent and time left on a story.

    Detecting when a Story Goes Over the Original Estimate

    Sometimes stories take much longer than originally estimated. There's a surprise: for example, you discover that a new software component is incompatible with existing software components, or that you have to go through a month-long certification process to finish a story. In those cases, the Product Owner has a legitimate reason to halt work on the story and re-evaluate its business value; if a story will require weeks to implement instead of days, it might not be worth the expense. SonicAgile displays a warning – an icon of a clock – on both the Backlog and the Kanban when the time spent on a story goes over the original estimate.

    Time-Tracking and Tasks

    Another optional feature of SonicAgile is tasks. If you enable tasks in Project Settings, you can break stories into one or more tasks, and you can perform time-tracking at the level of a story or at the level of a task. If you don't break a story into tasks, you enter the time left and time spent for the story itself. As soon as you break a story into tasks, you can no longer enter the time left and time spent at the level of the story; instead, they are rolled up from its tasks. On the Kanban, you can see how the time left and time spent for each task roll up into each story, and the progress bar for the story is rolled up from the progress bars of its tasks. The original estimate is never rolled up – even when you break a story into tasks, a story's original estimate is entered separately from the original estimates of each of the story's tasks.

    Summary

    Not every Agile team can avoid time-tracking. You might be forced to track time to get paid, to detect when you are spending too much time on a particular story, or to track the amount of time you are devoting to different types of tasks. We designed time-tracking in SonicAgile to require the least amount of work to track the information that you need. Time-tracking is an optional feature: if you enable it, you can track the original estimate, time left, and time spent for each story and task. You can use time-tracking with SonicAgile for free. Register at http://SonicAgile.com.
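    The rollup rule is simple enough to state in code. Here is a minimal sketch, with hypothetical types of my own – SonicAgile's actual model is not shown in the post:

        using System.Collections.Generic;
        using System.Linq;

        // hypothetical types for illustration only
        class StoryTask { public double TimeSpent; public double TimeLeft; }

        class Story
        {
            public double OriginalEstimate;                       // never rolled up
            public List<StoryTask> Tasks = new List<StoryTask>();

            // once a story has tasks, spent/left are simple sums over those tasks
            public double TimeSpent { get { return Tasks.Sum(t => t.TimeSpent); } }
            public double TimeLeft  { get { return Tasks.Sum(t => t.TimeLeft);  } }
        }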

    Read the article

  • Ubuntu software center not opening [closed]

    - by I'll sudeepdino008
    Ubuntu Software Center is not opening. When I type software-center on the command line, the following errors are generated:

        (software-center:8570): Gtk-WARNING **: Theme parsing error: gtk.css:227:31: Failed to import: Error opening file: No such file or directory
        2012-09-30 17:00:58,068 - softwarecenter.ui.gtk3.app - INFO - setting up proxy 'http://10.3.100.212:8080/'
        2012-09-30 17:00:58,071 - softwarecenter.db.database - INFO - open() database: path=None use_axi=True use_agent=True
        2012-09-30 17:00:58,327 - softwarecenter.backend.reviews - WARNING - Could not get usefulness from server, no username in config file
        2012-09-30 17:00:58,428 - softwarecenter.ui.gtk3.app - INFO - show_available_packages: search_text is '', app is None.
        2012-09-30 17:00:58,433 - softwarecenter.db.pkginfo_impl.aptcache - INFO - aptcache.open()
        Traceback (most recent call last):
          File "/usr/share/software-center/softwarecenter/db/pkginfo_impl/aptcache.py", line 243, in open
            self._cache = apt.Cache(GtkMainIterationProgress())
          File "/usr/lib/python2.7/dist-packages/apt/cache.py", line 102, in __init__
            self.open(progress)
          File "/usr/lib/python2.7/dist-packages/apt/cache.py", line 145, in open
            self._cache = apt_pkg.Cache(progress)
        SystemError: E:Encountered a section with no Package: header,
        E:Problem with MergeList /var/lib/apt/lists/ppa.launchpad.net_webupd8team_themes_ubuntu_dists_precise_main_binary-i386_Packages,
        E:The package lists or status file could not be parsed or opened.
        2012-09-30 17:01:00,130 - softwarecenter.db.enquire - ERROR - _get_estimate_nr_apps_and_nr_pkgs failed
        Traceback (most recent call last):
          File "/usr/share/software-center/softwarecenter/db/enquire.py", line 115, in _get_estimate_nr_apps_and_nr_pkgs
            tmp_matches = enquire.get_mset(0, len(self.db), None, xfilter)
          File "/usr/share/software-center/softwarecenter/db/appfilter.py", line 89, in __call__
            if (not pkgname in self.cache and
          File "/usr/share/software-center/softwarecenter/db/pkginfo_impl/aptcache.py", line 263, in __contains__
            return self._cache.__contains__(k)
        AttributeError: 'NoneType' object has no attribute '__contains__'
        Traceback (most recent call last):
          File "/usr/bin/software-center", line 176, in <module>
            app.run(args)
          File "/usr/share/software-center/softwarecenter/ui/gtk3/app.py", line 1422, in run
            self.show_available_packages(args)
          File "/usr/share/software-center/softwarecenter/ui/gtk3/app.py", line 1352, in show_available_packages
            self.view_manager.set_active_view(ViewPages.AVAILABLE)
          File "/usr/share/software-center/softwarecenter/ui/gtk3/session/viewmanager.py", line 154, in set_active_view
            view_widget.init_view()
          File "/usr/share/software-center/softwarecenter/ui/gtk3/panes/availablepane.py", line 171, in init_view
            self.apps_filter)
          File "/usr/share/software-center/softwarecenter/ui/gtk3/views/catview_gtk.py", line 238, in __init__
            self.build(desktopdir)
          File "/usr/share/software-center/softwarecenter/ui/gtk3/views/catview_gtk.py", line 511, in build
            self._build_homepage_view()
          File "/usr/share/software-center/softwarecenter/ui/gtk3/views/catview_gtk.py", line 271, in _build_homepage_view
            self._append_whats_new()
          File "/usr/share/software-center/softwarecenter/ui/gtk3/views/catview_gtk.py", line 450, in _append_whats_new
            whats_new_cat = self._update_whats_new_content()
          File "/usr/share/software-center/softwarecenter/ui/gtk3/views/catview_gtk.py", line 439, in _update_whats_new_content
            docs = whats_new_cat.get_documents(self.db)
          File "/usr/share/software-center/softwarecenter/db/categories.py", line 124, in get_documents
            nonblocking_load=False)
          File "/usr/share/software-center/softwarecenter/db/enquire.py", line 317, in set_query
            self._blocking_perform_search()
          File "/usr/share/software-center/softwarecenter/db/enquire.py", line 212, in _blocking_perform_search
            matches = enquire.get_mset(0, self.limit, None, xfilter)
          File "/usr/share/software-center/softwarecenter/db/appfilter.py", line 89, in __call__
            if (not pkgname in self.cache and
          File "/usr/share/software-center/softwarecenter/db/pkginfo_impl/aptcache.py", line 263, in __contains__
            return self._cache.__contains__(k)
        AttributeError: 'NoneType' object has no attribute '__contains__'
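    The SystemError in the first traceback points at a corrupt apt list file from the webupd8team/themes PPA; the apt cache then never opens (self._cache stays None), which is what makes every later cache lookup crash. A commonly suggested recovery – my suggestion, not part of the original question – is to clear the downloaded package lists and let apt rebuild them:

        # assumption: the MergeList error means the cached list files are corrupt
        sudo rm -vf /var/lib/apt/lists/*
        sudo apt-get update    # re-downloads the package lists from scratch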

    Read the article

  • Is There a Center of the Universe?

    - by Akemi Iwaya
    From our earliest days mankind has debated what should be defined as the center of the universe, a ‘definition’ that continues to evolve as our technology and knowledge of the universe improves. Follow along with mankind’s historical progress on defining the center of the universe and learn some great facts about the debate with this terrific TEDEducation video. Is there a center of the universe? – Marjee Chmiel and Trevor Owens [YouTube]     

    Read the article

  • When done is not done

    - by Tony Davis
    Most developers and DBAs will know what it's like to be asked to do "a quick tidy up" on a project that, on closer inspection, turns out to be a barely working prototype: as the cynical programmer says, "when you're told that a project is 90% done, prepare for the next 90%". It is easy to convince a layperson that an application is complete just by using test data and sticking to the workflow that the development team has implemented and tested. The application is 'done' only in the sense that the anticipated paths through the software's features, using known data, are fully supported. Reality often strikes only when testers reveal its strange and erratic behavior in response to behavior from the end user that strays from the "ideal".

    The problem is this: how do we measure progress, accurately and objectively? Development methods such as Scrum or Kanban, when implemented rigorously, can mitigate these problems for developers, to some extent. They force a team to progress one small but complete feature at a time, to find out how long it really takes for this feature to be "done done"; in other words, done to the point where its performance and scalability are understood, it is tested for all conceivable edge cases and doesn't break… it is ready for prime time. At that point, the team has a much more realistic idea of how long it will take them to really complete all the remaining features, and so how far away the end is.

    However, it is when software crosses team boundaries that we feel the limitations of such techniques. No matter how well drilled the development team is, problems will still arise if they don't deploy frequently to a production environment. If they work feverishly for months on end before finally tossing the finished piece of software over the fence for the DBA to deploy to the "real world", then once again will dawn the realization that "done done" is still out of reach, as the DBA uncovers poorly coded transactions, un-scalable queries, inefficient caching, and so on. By deploying regularly, end users will also have a much earlier opportunity to tell you how far what you implemented strayed from what they wanted.

    If you have a tale to tell, anonymized of course, of a "quick polish" project that turned out to be anything but, and what the major problems were, please do share it. Cheers, Tony.

    Read the article

  • LINQ to Twitter v2.0.8 Released

    - by Joe Mayo
    Today, I released LINQ to Twitter v2.0.8. Besides normal maintenance, this release includes the Twitter Geo API and the Suggested Users API. LINQ to Twitter is hosted on CodePlex.com: http://linqtotwitter.codeplex.com/ In addition to new functionality, I've made much progress toward LINQ to Twitter documentation; primarily in the Making API Calls area: http://linqtotwitter.codeplex.com/wikipage?title=Making%20API%20Calls&referringTitle=Documentation There's also a discussion forum where you can ask and view questions: http://linqtotwitter.codeplex.com/Thread/List.aspx As always, constructive feedback is welcome. Joe
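    For readers who haven't used LINQ to Twitter before, here is a minimal sketch of the library's basic query pattern, based on the project's published examples from this era; it deliberately doesn't touch the new Geo or Suggested Users APIs, and the exact v2.0.8 type names should be checked against the documentation on CodePlex:

        using System;
        using System.Linq;
        using LinqToTwitter;

        class Program
        {
            static void Main()
            {
                // the canonical LINQ to Twitter pattern: a TwitterContext queried with LINQ
                var twitterCtx = new TwitterContext();

                var publicTweets =
                    (from tweet in twitterCtx.Status
                     where tweet.Type == StatusType.Public
                     select tweet)
                    .ToList();

                foreach (var tweet in publicTweets)
                {
                    Console.WriteLine("{0}: {1}", tweet.User.Name, tweet.Text);
                }
            }
        }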

    Read the article

  • How to Create a Realistic Timeline for your Projects

    - by Aditi
    Developing a realistic project timeline is one of the biggest and most challenging tasks for any team. We here at JustSkins have learned over time that developing and adhering to a timeline isn't easy, but it is not impossible. From technical glitches to human resource issues, unexpected complications can come up at any time during the project life cycle; however, there are many things you can do to keep the project from going off track. A specific timeline is a very important statistic for time management planning and for keeping your client informed of progress. Rigid time tracking assures the client that you are committed to achieving specific project milestones on time. The more you work on varied IT projects, the more you learn about the aspects of a project, and the better you get at developing future estimates and timelines.

    Make a Structure: When estimating the time required to accomplish each task, consider which team members will be involved, and assign the amount of time each individual must put into the project. Define scope and dependencies, and set deadlines for accomplishing them. Sometimes working in phases or modules helps you do more in less time. Use a project management tool to systematize the collaboration between team members.

    Realistic Goal Setting: One approach is to keep a buffer of a few days to deal with the delays, errors and incorrect-coding issues you are likely to have along the way. It is very realistic to keep the delivery date quoted to the client different from the internal delivery timeline. If your resource is having a hard time finishing a task in the time specified, keep some room to give him a day or two extra to accomplish it. This does not upset client delivery and is the safe way of doing projects.

    Keep an Insightful Approach: Identify potential problems before they delay your project. To be a great IT manager you have to be honest and diplomatic at the same time; it is essential to give your clients early notice of potential delays or scope changes. In situations where delay is inevitable, you should be in a position to provide immediate, on-demand progress reports. Learning from past experience is very important: keep track of the actual time spent on all aspects of your projects, and you will create better future estimates and timelines.

    Read the article

  • Radio Shack Cell Phone Commercial from 1989 [Video]

    - by Asian Angel
    Cell phone technology has come a long way since the early days and it is progress that we can all be thankful for. This commercial from 1989 features a positively gigantic model when compared to today's small and sleek cell phones. 1989 Radio Shack Cellular Phone Commercial [via MUO]

    Read the article

  • Filtering Your Content

    - by rickramsey
    Watch it directly on YouTube. You can't always get what you want, but we do try to get you what you need. Use these OTN Systems Collections to see what's been published lately in your area of interest: Sysadmin Collection, Developer Collection, OTN Systems Collection; see all collections (work in progress). If you prefer to use your RSS feeder, try this page: RSS Feeds for OTN Systems Content. - Rick. System Admin and Developer Community of OTN | OTN Garage Blog | OTN Garage on Facebook | OTN Garage on Twitter

    Read the article

  • Good resource for business development Techniques

    - by Morons
    I work for an IT consulting firm… As I progress in my career I (like most who work for IT firms) am spending more and more time participating in business development, usually as a technical expert. Can anyone recommend a good resource (or book) on business development, preferably targeting technology businesses? (I am NOT looking for "how to get leads"… I'm looking for "how to conduct a solid sales pitch / demo software" type stuff.)

    Read the article

  • MS Certifications - Useful to your career ?

    - by NeilHambly
    Now I admit I've had mixed feelings on the certification subject previously, and as a result I've not looked at going down the MS Certification route; however, with my previous experience this really hasn't hindered my progress any (thankfully). But as I now have a different perspective, for a number of varying reasons which I will not bore you with, I will be undertaking some exams (6 of them) for accreditation, so right now I'm just formulating my study plans, with my...(read more)

    Read the article

  • Why do business analysts and project managers get higher salaries than programmers?

    - by jpartogi
    We have to admit that programming is much more difficult than creating documentation or even creating Gantt charts and asking programmers for progress reports. So, for those of us who are naive, knowing that programming is generally more difficult: why do business analysts and project managers get higher salaries than programmers? What is it that makes their jobs high-paying, when most of the time programmers are the ones who go home late?

    Read the article
