Search Results

Search found 20052 results on 803 pages for 'software prerequisites'.


  • Failed to download repository information Check your Internet connection

    - by Luca Brazza
    I need to check if I have updates for Ubuntu; I think it is 11.05. When I check for updates, this is what it says:

    Failed to download repository information. Check your Internet connection.

    Details:
      W:Failed to fetch cdrom://Ubuntu 12.04 LTS _Precise Pangolin_ - Release amd64 (20120425)/dists/precise/main/binary-amd64/Packages  Please use apt-cdrom to make this CD-ROM recognized by APT. apt-get update cannot be used to add new CD-ROMs
      W:Failed to fetch cdrom://Ubuntu 12.04 LTS _Precise Pangolin_ - Release amd64 (20120425)/dists/precise/restricted/binary-amd64/Packages  Please use apt-cdrom to make this CD-ROM recognized by APT. apt-get update cannot be used to add new CD-ROMs
      W:Failed to fetch cdrom://Ubuntu 12.04 LTS _Precise Pangolin_ - Release amd64 (20120425)/dists/precise/main/binary-i386/Packages  Please use apt-cdrom to make this CD-ROM recognized by APT. apt-get update cannot be used to add new CD-ROMs
      W:Failed to fetch cdrom://Ubuntu 12.04 LTS _Precise Pangolin_ - Release amd64 (20120425)/dists/precise/restricted/binary-i386/Packages  Please use apt-cdrom to make this CD-ROM recognized by APT. apt-get update cannot be used to add new CD-ROMs
      W:Failed to fetch http://ppa.launchpad.net/ferramroberto/java/ubuntu/dists/precise/main/source/Sources  404 Not Found
      W:Failed to fetch http://ppa.launchpad.net/ferramroberto/java/ubuntu/dists/precise/main/binary-i386/Packages  404 Not Found
      E:Some index files failed to download. They have been ignored, or old ones used instead.

    Read the article

  • Microsoft Sql Server driver for Nodejs - Part 2

    - by chanderdhall
    Node.js, SQL Server and a JSON response over REST. This post is part 2 of the Microsoft SQL Server driver for Node.js series. In this post we will look at the JSON responses from the Microsoft SQL Server driver for Node.js.

    Pre-requisites: If you have read Part 1 of the series, you should be good. We will be using a REST framework for Node.js, Restify, but it needs no prior learning.

    Restify: Restify is a simple Node module for building RESTful services. It is slimmer than Express. Express is a complete module that has everything you need to create a full-blown browser app, whereas Restify does not carry the additional overhead of templating, rendering and so on that you would need if your app had views. So, as the name suggests, it is a very light-weight framework for building RESTful services.

    Set up: You can continue with the same directory or project structure we had in the previous post, or start a new one. Install Restify using npm and you are good to go:

        npm install restify

    Go to server.js and include Restify in your solution, then create the server object using restify.createServer():

        var restify = require('restify');
        var server = restify.createServer();
        server.listen(8080, function () {
            console.log('%s listening at %s', server.name, server.url);
        });

    Make sure you provide a port for the server to listen on. The callback function is optional but helps for debugging. Once you are done, save the file, go to the command prompt and run 'node server.js'. To test the server, go to your browser and type the address 'http://localhost:8080/' and you will see an error. Why is that? Because we haven't defined any routes yet. Let's go ahead and create one. To begin with, I'd like to return whatever is typed in the URL after my name, and the following code should do it:

        server.get('/ChanderDhall/:status', function respond(req, res, next) {
            res.end("hello " + req.params.status);
        });

    You can also avoid writing callbacks inline, something like this:

        function respond(req, res, next) {
            res.end("Chander Dhall " + req.params.name);
        }
        server.get('/hello/:name', respond);

    Now if you go ahead and type http://localhost:8080/hello/LovesNode you will get the response 'Chander Dhall LovesNode'. NOTE: Make sure your URL has the right case, as routes are case-sensitive; you could also have registered the route as 'server.get('/chanderdhall/:name', respond);'.

    Stored procedure: We've talked a lot about Restify now, but keep in mind this post is about using SQL Server with Node and returning JSON. To see this in action, let's create another route that returns a list of employees from a stored procedure:

        server.get('/Employees', Employees);

    The following code returns a JSON response:

        // 'sql' (the SQL Server driver module) and 'conn_str' (the connection string)
        // come from the setup covered in Part 1 of this series.
        function Employees(req, res, next) {
            res.header("Content-Type", "application/json"); // need to specify the Content-Type, which is JSON in our case
            sql.open(conn_str, function (err, conn) {
                if (err) {
                    // Logs an error
                    console.log("Error opening the database connection!");
                    return;
                }
                console.log("before query!");
                conn.queryRaw("exec sp_GetEmployees", function (err, results) {
                    if (err) {
                        // Connection is open but an error occurs while running the query
                        console.log("Error running the query!");
                        return;
                    }
                    // ... (the remainder of the handler, which builds the JSON from the
                    // raw results and writes it out with res.end, is truncated in this excerpt;
                    // see the full article)
                });
            });
        }

    What else can be done? Maybe create a formatter, or maybe even come up with a hypermedia type, but that may upset some pragmatists. Well, that's going to be a totally different discussion and is really not part of this series.

    Summary: We've discussed how to execute a stored procedure using the Microsoft SQL Server driver for Node, and how to format and send out clean JSON to the app calling this API.

    Read the article

  • Object oriented design importance

    - by user5507
    I started studying Object Oriented Design and Modelling using this book by James Rumbaugh. It uses a technique called the Object Modeling Technique (OMT). I have some newbie questions that I couldn't get answered by searching the net:

    The book is pretty old, and I don't know why the school told me to learn from it. I know OMT is a predecessor of the Unified Modeling Language (UML), so is it a waste of time? Do the concepts change very much when moving from OMT to UML?

    I know OMT has Object, Dynamic and Functional models. Wikipedia says UML is compatible with OMT and that UML is a model too. As per Wikipedia, the UML models are Static and Dynamic and are represented by different diagrams like class, object, activity, sequence..... I couldn't find the equivalent of this in OMT.

    I read that there are many object oriented development methods like OMT, Booch,.... Which one is used by industry? Where could I get a comparison of different object oriented development methods?

    Read the article

  • April Omnibus

    - by KKline
    I freely admit it - I'm a sluggard. I should be blogging a couple of times per week and tweeting in between. But, for some unknown reason, April has been a tough month to get this in gear. Hence, I'm putting out an omnibus post to cover all of the stuff I've been up to, instead of the one-offs I usually post when I've got something new to mention. Isn't it funny how life gets in the way of the stuff we want and intend to do? As they say - "The road to hell is paved with good intentions", or was that...(read more)

    Read the article

  • VG.net 8.5 Released

    - by Frank Hileman
    We have released version 8.5 of the VG.net vector graphics system. This release supports Visual Studio 2013. Companies that purchased a VG.net license after October 1, 2013, are eligible for a free upgrade; we will be sending you an email. There is one cosmetic problem which wasted our time, as we could not find a workaround. It occurs when your display is set to a high DPI. You can see the problem in the image of the toolbox below, which uses a DPI scaling of 125% on Windows 7: The ToolboxItem class accepts only Bitmaps with a size of 16x16. We tried many sizes and many bitmap formats. As you can see, this tiny Bitmap is then scaled by the toolbox, and the scaling algorithm adds artifacts. This is an "improvement" Microsoft recently added to Visual Studio 2013.

    Read the article

  • Native, FOSS GUI prototyping tools?

    - by Oli
    As part of my job as a web developer, I spend a fair amount of time doing UI prototypes to show the client. It's a pain in the behind, but sometimes it has to be done. I've seen Shuttleworth (and the design team) pump out images like the one shown in the original post. That's made by something called Balsamiq Mockups... something that balances on top of Adobe Air (yack!) and costs $79. I've tried it, but it kept falling over; I think it had something to do with Air, not the app itself. My point is, if I'm paying for something, I want it to be native.

    Read the article

  • Rescue overdue offshore projects and convince management to use automated tests

    - by oazabir
    I have published two articles on CodeProject recently. The first is the story of an offshore project that was two months overdue: my friend who runs it was paying the team from his own pocket, he was drowning in an ever increasing number of change requests, and we brainstormed together on how to come out of that situation. Tips and Tricks to rescue overdue projects. The second is about convincing management to go for automated tests and give developers extra time per sprint, at the cost of reduced productivity for a couple of sprints. It's hard to negotiate this even with dev leads, let alone managers. Whenever you tell them there are going to be fewer features/bug fixes delivered for the next 3 or 4 sprints because we want to automate the tests and reduce manual QA effort, everyone gets furious and kicks you out of the meeting. Especially in a startup, where every sprint is jam packed with new features and priority bug fixes to satisfy various stakeholders, including the VCs, it's very hard to communicate the benefits of automated tests across the board. Let me tell you a story from one of my startups where I had the pleasure of arguing this and came out victorious. How to convince developers and management to use automated test instead of manual test. If you like these, please vote for me!

    Read the article

  • Where can I find alternatives to...?

    - by jumpnett
    There have been a couple of questions here regarding alternatives to certain programs, and I'm sure as more people start using Ubuntu, and join this site, there will be more people looking for alternatives to programs they used in their previous operating system. Therefore I figured I'd start a thread to list different sites that list alternatives to programs. (Please just post one link per answer.) Here is my favorite site to look for alternatives: alternativeTo

    Read the article

  • New games for 2011

    - by Oli
    I know there have been other questions like "What native games are available?" and they often have issues because they turn into a never-ending list of every game ever released for Linux. But I'd like to know what's coming out this year. Good answers can include:

    A game that's coming out in 2011
    A Linux port being released for an older game (e.g. Trine)
    As much information and as many screenshots and links as possible
    Few old games, unless they're getting a major update that changes the game very significantly

    One game per answer; add as much information as possible and work with each other to build a catalogue of awesome things to look forward to this year.

    Read the article

  • How to name a bug?

    - by Pieter
    Bugs usually receive a descriptive name: "That X-Y synchronization issue", "That crash after actions A, B and D but not C", "Yesterday's update problem". Even the JIRA issue tracker has a field "Summary" instead of "Name". When discussing "big" bugs, I actually use JIRA IDs to prevent confusion. There are a few restrictions to take into account:

    When reporting a bug, only the consequence of the bug is known; the root cause might never even be found.
    Several reported bugs might turn out to be duplicates, or might be completely different consequences of the same bug.
    In large projects, bugs will come at you by the dozens every month.

    Now, how would you name a bug? Name them like hurricanes, perhaps?

    Read the article

  • F# in ASP.NET, mathematics and testing

    - by DigiMortal
    Starting from Visual Studio 2010, F# is a full member of the .NET Framework languages family. It is a functional language with syntax specific to functional languages, and I think it is time for us to notice and study functional languages too. In this posting I will show you some examples of cool things other people have done using F#.

    F# and ASP.NET: As an ASP/ASP.NET MVP I am, of course, interested in how people use different languages and technologies with ASP.NET. C# MVP Tomáš Petrícek writes about developing ASP.NET MVC applications using F#. He also shows how to use LINQ to SQL in F# (using the F# PowerPack) and provides a sample solution and Visual Studio 2010 template for F# MVC web applications. You may also find it interesting how you can create controllers in F#. Excellent work, Tomáš! Vladimir Matveev has an interesting example of how to use F# and the ApplicationHost class to process ASP.NET requests outside of IIS. This is a simple and very straightforward example and I strongly suggest you take a look at it. Another very cool example is the Storm project on CodePlex. Storm is a web services testing tool that is written entirely in F#. Take a look at the site, because CodePlex offers the source code as well as binaries.

    Math: Functional languages are strong in fields like mathematics and physics. When I wrote my C# example about the BigInteger class I found out that the recursive version of the Fibonacci algorithm in C# does not perform well. At the same time I ran the same experiment in F#, and in F# there were no performance problems with the recursive version. You can find the F# version of the Fibonacci algorithm in Bob Palmer's blog posting Fibonacci numbers in F#. Although the golden spiral is useful for solving many problems, I looked for a practical code example and found one: Kean Walmsley published a very interesting posting, Creating Fibonacci spirals in AutoCAD using F#, on his Through the Interface blog. There are also other cool examples you may be interested in. Using the numerical components by Extreme Optimization it is possible to do numerical integration (the quadrature method) using F# (a C# example is also available). fsharp.it introduces factorial calculation in F#. Robert Pickering has done very good work on programming The Game of Life in Silverlight and F#; I definitely suggest you try out this example as it is very illustrative too. Those who want something more complex may take a look at the Newton basin fractal example in F# by Jonathan Birge.

    Testing: After some searching and surfing I found out that almost everything is available for F# to write tests and test your F# code.

    FsCheck - FsCheck is a port of Haskell's QuickCheck. Important parts of the manual for using FsCheck are almost literally "adapted" from the QuickCheck manual and paper. Any errors and omissions are entirely my responsibility.
    FsTest - This project is designed to provide Language Oriented Programming constructs around unit testing and behavior testing in F#. The goal of this project is to create a Domain Specific Language for testing F# code in a way that makes sense for functional programming.
    FsUnit - FsUnit makes unit-testing with F# more enjoyable. It adds a special syntax to your favorite .NET testing framework.
    xUnit.NET - xUnit.net is a developer testing framework, built to support Test Driven Development, with a design goal of extreme simplicity and alignment with framework features. It is compatible with .NET Framework 2.0 and later, and offers several runners: console, GUI, MSBuild, and Visual Studio integration via TestDriven.net, CodeRush Test Runner and ReSharper. It also offers test project integration for ASP.NET MVC.

    Getting started: Well, the first thing you need is Visual Studio 2010. Then take a look at these resources: F# samples @ MSDN, Microsoft F# Developer Center @ MSDN, F# Language Reference @ MSDN, the F# blog, the F# forums, and Real World Functional Programming: With Examples in F# and C# (Amazon). Happy F#-ing! :)

    Read the article

  • GNU Smalltalk package

    - by Peter
    I've installed the GNU Smalltalk package and can get to the SmallTalk command line with the command 'gst'. However, I can't start the visual gst browser using the command: $ gst-browser When I try, this is what I get: peter@peredur:~$ gst-browser Object: CFunctionDescriptor new: 1 "<0x40488720>" error: Invalid C call-out gdk_colormap_get_type SystemExceptions.CInterfaceError(Smalltalk.Exception)>>signal (ExcHandling.st:254) SystemExceptions.CInterfaceError class(Smalltalk.Exception class)>>signal: (ExcHandling.st:161) Smalltalk.CFunctionDescriptor(Smalltalk.CCallable)>>callInto: (CCallable.st:165) GdkColormap class>>getType (GTK.star#VFS.ZipFile/Funcs.st:1) optimized [] in GLib class>>registerAllTypes (GTK.star#VFS.ZipFile/GtkDecl.st:78) Smalltalk.OrderedCollection>>do: (OrderColl.st:68) GLib class>>registerAllTypes (GTK.star#VFS.ZipFile/GtkDecl.st:78) Smalltalk.UndefinedObject>>executeStatements (GTK.star#VFS.ZipFile/GtkImpl.st:1078) Object: CFunctionDescriptor new: 1 "<0x404a7c28>" error: Invalid C call-out gtk_window_new SystemExceptions.CInterfaceError(Exception)>>signal (ExcHandling.st:254) SystemExceptions.CInterfaceError class(Exception class)>>signal: (ExcHandling.st:161) CFunctionDescriptor(CCallable)>>callInto: (CCallable.st:165) GTK.GtkWindow class>>new: (GTK.star#VFS.ZipFile/Funcs.st:1) VisualGST.GtkDebugger(VisualGST.GtkMainWindow)>>initialize (VisualGST.star#VFS.ZipFile/GtkMainWindow.st:131) VisualGST.GtkDebugger class(VisualGST.GtkMainWindow class)>>openSized: (VisualGST.star#VFS.ZipFile/GtkMainWindow.st:19) [] in VisualGST.GtkDebugger class>>open: (VisualGST.star#VFS.ZipFile/Debugger/GtkDebugger.st:16) [] in BlockClosure>>forkDebugger (DebugTools.star#VFS.ZipFile/DebugTools.st:380) [] in Process>>onBlock:at:suspend: (Process.st:392) BlockClosure>>on:do: (BlkClosure.st:193) [] in Process>>onBlock:at:suspend: (Process.st:393) BlockClosure>>ensure: (BlkClosure.st:269) [] in Process>>onBlock:at:suspend: (Process.st:370) [] in BlockClosure>>asContext: (BlkClosure.st:179) BlockContext class>>fromClosure:parent: (BlkContext.st:68) Everything hangs at this point until I hit ^C, after which, I get: Object: CFunctionDescriptor new: 1 "<0x404a7c28>" error: Invalid C call-out gtk_window_new SystemExceptions.CInterfaceError(Exception)>>signal (ExcHandling.st:254) SystemExceptions.CInterfaceError class(Exception class)>>signal: (ExcHandling.st:161) CFunctionDescriptor(CCallable)>>callInto: (CCallable.st:165) GTK.GtkWindow class>>new: (GTK.star#VFS.ZipFile/Funcs.st:1) VisualGST.GtkDebugger(VisualGST.GtkMainWindow)>>initialize (VisualGST.star#VFS.ZipFile/GtkMainWindow.st:131) VisualGST.GtkDebugger class(VisualGST.GtkMainWindow class)>>openSized: (VisualGST.star#VFS.ZipFile/GtkMainWindow.st:19) [] in VisualGST.GtkDebugger class>>open: (VisualGST.star#VFS.ZipFile/Debugger/GtkDebugger.st:16) [] in BlockClosure>>forkDebugger (DebugTools.star#VFS.ZipFile/DebugTools.st:380) [] in Process>>onBlock:at:suspend: (Process.st:392) BlockClosure>>on:do: (BlkClosure.st:193) [] in Process>>onBlock:at:suspend: (Process.st:393) BlockClosure>>ensure: (BlkClosure.st:269) [] in Process>>onBlock:at:suspend: (Process.st:370) [] in BlockClosure>>asContext: (BlkClosure.st:179) BlockContext class>>fromClosure:parent: (BlkContext.st:68) peter@peredur:~$ Is there a problem with this package?

    Read the article

  • Part 5, Moving Forum threads from CommunityServer to DotNetNuke

    - by Chris Hammond
    This is the fifth post in a series of blog posts about converting from CommunityServer to DotNetNuke. A brief background: I had a number of websites running on CommunityServer 2.1, I decided it was finally time to ditch CommunityServer due to the change in their licensing model and pricing that made it not good for the small guy. This series of blog posts is about how to convert your CommunityServer based sites to DotNetNuke . Previous Posts: Part 1: An Introduction Part 2: DotNetNuke Installation...(read more)

    Read the article

  • Expected sprint completion rate and load in scrum?

    - by bjarkef
    Recently at work there have been increased focus on completion rate and load on the developers in our sprints. With completion rate I mean, if we plan 20 user stories for a sprint, what percentage of these user stories are closed at the end of the sprint. And with load I mean, if we have a sprint with 3 developers of 60 hours each, i.e. 180 hours for the sprint, how many hours worth of user stories do we schedule for the sprint. So I am really interested in others experience with this, I guess this is something everybody working with scrum deals with. My question is, what completion rate and load are expected/usual, and how are your team doing with respect to these parameters?

    Read the article

  • Working with Legacy code #4 : Remove the hard dependencies

    - by andrewstopford
    During your refactoring cycle you should be seeking out the hard dependencies that the code may have. Examples of these include:

    File System
    Database
    Network (HTTP)
    Application Server (Crystal)

    Classes that service these kinds of dependencies (or code that can be abstracted into a class) should be wrapped in an interface for easier mocking, as in the sketch below. If your team starts referring to the interface version of these classes, the hard dependency will over time work itself free.
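    As a minimal illustration of the idea (not from the original post; the interface and class names here are invented for the example), a file-system dependency might be wrapped like this:

        public interface IFileStore
        {
            bool Exists(string path);
            string ReadAllText(string path);
        }

        // Production implementation delegates straight to System.IO.
        public class DiskFileStore : IFileStore
        {
            public bool Exists(string path) { return System.IO.File.Exists(path); }
            public string ReadAllText(string path) { return System.IO.File.ReadAllText(path); }
        }

        // Consumers depend on the interface, so tests can pass a hand-rolled fake
        // instead of touching the real file system.
        public class ReportLoader
        {
            private readonly IFileStore _files;
            public ReportLoader(IFileStore files) { _files = files; }

            public string Load(string path)
            {
                return _files.Exists(path) ? _files.ReadAllText(path) : string.Empty;
            }
        }

    A test double then only has to implement IFileStore, and the hard dependency on the disk is gone from the class under test.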

    Read the article

  • Change default language settings in Visual Studio 2012

    - by sreejukg
    The first thing you need to do after installing Visual Studio 2012 is to choose the IDE preferences. Once you select your preferred collection of settings, the IDE will always choose dialogs and other options according to your selection. Nowadays developers need to work with different programming environments, and because of this they might need to reset the default settings. In this article, I am going to demonstrate how you can change the default settings in Visual Studio 2012. For the purpose of this demonstration, I have installed Visual Studio 2012 and selected C++ as my default environment settings, so when I go to File -> New Project it gives me C++ templates by default. If you want to select another language, you need to expand the Other Languages section and select C# or VB. Now I am going to change these default settings, setting the default language preference to C#. In Visual Studio 2012, go to the Tools menu and select Import and Export Settings. Here you have three options: export the current settings so they are saved for future use, import previously saved settings, or reset everything to the defaults. It is a good idea to export your settings so you can import them again at a later stage. To reset the settings to the defaults, select the Reset option and click Next. Visual Studio will then ask whether you would like to save your current settings so they can be restored in the future; select either option and click Next (for this demo I chose not to save them). Visual Studio now shows the same dialog that appears just after installation for selecting your IDE settings. Select the required settings collection from the list and click Finish. If everything is OK, you will see a success message. Now go to File -> New Project and you will see the selected language appear by default; since I selected C# in the previous step, the New Project dialog now shows C# templates. Changing IDE settings in Visual Studio 2012 is very easy and straightforward.

    Read the article

  • Lessons from rewriting POP Forums for MVC, open source-like

    - by Jeff
    It has been a ton of work, interrupted over the last two years by unemployment, moving, a baby, failing to sell houses and other life events, but it's really exciting to see POP Forums v9 coming together. I'm not even sure when I decided to really commit to it as an open source project, but working on the same team as the CodePlex folks probably had something to do with it. Moving along the roadmap I set for myself, the app is now running on a quasi-production site... we launched MouseZoom last weekend. (That's a post-beta 1 build of the forum. There's also some nifty Silverlight DeepZoom goodness on that site.)I have to make a point to illustrate just how important starting over was for me. I started this forum thing for my sites in old ASP more than ten years ago. What a mess that stuff was, including SQL injection vulnerabilities and all kinds of crap. It went to ASP.NET in 2002, but even then, it felt a little too much like script. More than a year later, in 2003, I did an honest to goodness rewrite. If you've been in this business of writing code for any amount of time, you know how much you hate what you wrote a month ago, so just imagine that with seven years in between. The subsequent versions still carried a fair amount of crap, and that's why I had to start over, to make a clean break. Mind you, much of that crap is still running on some of my production sites in a stable manner, but it's a pain in the ass to maintain.So with that clean break, there is much that I have learned. These are a few of those lessons, in no particular order...Avoid shiny object syndromeOver the years, I've embraced new things without bothering to ask myself why. I remember spending the better part of a year trying to adapt this app to use the membership and profile API's in ASP.NET, just because they were there. They didn't solve any known problem. Early on in this version, I dabbled in exotic ORM's, even though I already had the fundamental SQL that I knew worked. I bloated up the client side code with all kinds of jQuery UI and plugins just because, and it got in the way. All the new shiny can be distracting, and I've come to realize that I've allowed it to be a distraction most of my professional life.Just query what you needI've spent a lot of time over-thinking how to query data. In the SQL world, this means exotic joins, special caches, the read-update-commit loop of ORM's, etc. There are times when you have to remind yourself that you aren't Facebook, you'll never be Facebook, and that databases are in fact intended to serve data. In a lot of projects, back in the day, I used to have these big, rich data objects and pass them all over the place, through various application tiers, when in reality, all I needed was some ID from the entity. I try to be mindful of how many queries hit the database on a given request, but I don't obsess over it. I just get what I need.Don't spend too much time worrying about your unit testsIf you've looked at any of the tests for POP Forums, you might offer an audible WTF. That's OK. There's a whole lot of mocking going on. In some cases, it points out where you're doing too much, and that's good for improving your design. In other cases it shows where your design sucks. But the biggest trap of unit testing is that you worry it should be prettier. That's a waste of time. When you write a test, in many cases before the production code, the important part is that you're testing the right thing. 
If you have to mock up a bunch of stuff to test the outcome, so be it, but it's not wasted time. You're still doing up the typical arrange-action-assert deal, and you'll be able to read that later if you need to.Get back to your HTTP rootsASP.NET Webforms did a reasonably decent job at abstracting us away from the stateless nature of the Web. A lot of people criticize it, but I think it all worked pretty well. These days, with MVC, jQuery, REST services, and what not, we've gone back to thinking about the wire. The nuts and bolts passing between our Web browser and server matters. This doesn't make things harder, in my opinion, it makes them easier. There is something incredibly freeing about how we approach development of Web apps now. HTTP is a really simple protocol, and the stuff we push through it, in particular HTML and JSON, are pretty simple too. The debugging points are really easy to trap and trace.Premature optimization is prematureI'll go back to the data thing for a moment. I've been known to look at a particular action or use case and stress about the number of calls that are made to the database. I'm not suggesting that it's a bad thing to keep these in mind, but if you worry about it outside of the context of the actual impact, you're wasting time. For example, I query the database for last read times in a forum separately of the user and the list of forums. The impact on performance barely exists. If I put it under load, exceeding the kind of load I expect, it still barely has an impact. Then consider it only counts for logged in users. The context of this "inefficient" action is that it doesn't matter. Did I mention I won't be Facebook?Solve your own problems firstThis is another trap I've fallen into. I've often thought about what other people might need for some feature or aspect of the app. In other words, I was willing to make design decisions based on non-existent data. How stupid is that? When I decided to truly open source this thing, building for myself first was a stated design goal. This app has to server the audiences of CoasterBuzz, MouseZoom and other sites first. In this development scenario, you don't have access to mountains of usability studies or user focus groups. You have to start with what you know.I'm sure there are other points I could make too. It has been a lot of fun to work on, and I look forward to evolving the UI as time goes on. That's where I hope to see more magic in the future.
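    To make the "arrange-action-assert" point above concrete, here is a minimal sketch, not taken from POP Forums itself: the type names are invented, a hand-rolled fake stands in for a mocking library, and xUnit.net is used only as an example test framework.

        using Xunit; // any .NET test framework would do; xUnit.net is assumed here purely for illustration

        public interface IPostRepository { int CountForUser(int userId); }

        // Hand-rolled fake: "arrange" sets up exactly the state the test needs.
        public class FakePostRepository : IPostRepository
        {
            public int Count { get; set; }
            public int CountForUser(int userId) { return Count; }
        }

        public class PostCounter
        {
            private readonly IPostRepository _repo;
            public PostCounter(IPostRepository repo) { _repo = repo; }
            public int CountForUser(int userId) { return _repo.CountForUser(userId); }
        }

        public class PostCounterTests
        {
            [Fact]
            public void ReturnsCountFromRepository()
            {
                // Arrange
                var repo = new FakePostRepository { Count = 42 };
                var counter = new PostCounter(repo);

                // Act
                var result = counter.CountForUser(7);

                // Assert
                Assert.Equal(42, result);
            }
        }

    The mocking may look heavy next to the three lines being tested, but as the post argues, the value is in testing the right thing, not in how pretty the test reads.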

    Read the article

  • Logkeys fragile?

    - by Ahmed Nematallah
    The program logkeys (which seems to be the only keylogger for Linux out there that I can run) has some problems: it stops logging after some time and never comes back, and I don't know how to trigger that bug. If the log file is edited while logging, it just stops. If the file exists before logging starts, it doesn't try to append to it or delete it or anything. The first issue is the most important, but the rest are quite annoying. Can anyone help? I'm not a Linux programmer (I don't really know anything about the Linux API, though I am a beginner C++ programmer), so I won't be able to make my own keylogger. Thanks for the interest. BTW, I'm sure I got the right input device, because it starts logging and then stops, and I use the command "logkeys -s -u -d /dev/input/event3 -o '/home/ahmed/Documents/log.txt'"

    Read the article

  • Third-Grade Math Class

    - by andyleonard
    An Odd Thing Happened... ... when I was in third grade math class: I was handed a sheet of arithmetic problems to solve. There were maybe 20 problems on the page and we were given the remainder of the class to complete them. I don't remember how much time remained in the class, but I do remember I finished working on the problems before my classmates. That wasn't the odd part. The odd part was that I started working on the first problem, concentrating pretty hard. I worked the sum and moved to the next...(read more)

    Read the article

  • 4th Annual Hartford Code Camp - The Code Camp Manifesto lives on!

    - by SB Chatterjee
    It is amazing that Thom Robbins' blog posting back in December 2004 laid the foundation of the Code Camps that have grown world-wide; there is at least one every weekend in some country (an unscientific sampling of tweet stats). This weekend, we at the Connecticut .NET Developers Group held the 4th Annual Hartford Code Camp, and it was well attended, with 120+ attendees and ~30 sessions. Our thanks to the speakers from near and far who made our event a success.

    Read the article

  • Unable install cmake and ccmake?

    - by user159618
    So the thing is, I'm trying to install CMake and cmake-curses-gui. I have updated the system with apt-get update.

    sudo apt-get install cmake
    Reading package lists... Done
    Building dependency tree
    Reading state information... Done
    Package cmake is not available, but is referred to by another package.
    This may mean that the package is missing, has been obsoleted, or is only available from another source
    E: Package 'cmake' has no installation candidate

    sudo apt-get install cmake-curses-gui
    Reading package lists... Done
    Building dependency tree
    Reading state information... Done
    E: Unable to locate package cmake-curses-gui

    That's strange. Can anyone give some pointers? Pastebin of my sources.list: http://pastebin.com/DufycYfZ

    Read the article

  • Secret Agent Man

    - by Bil Simser
    Just a quick one this morning as we all get started in the week. Something that comes into play (sometimes in a big way) is the user agent string your browser gives off. For example, using the User-Agent field in the request header, you can determine what browser the user is running and act accordingly. Internet Explorer 9 modified the UA string slightly, so just in case you're looking for it, here are the user agent strings for IE9 (in various modes):

    Internet Explorer 9 Mode: Mozilla/5.0 (compatible; MSIE 9.0; Windows NT 6.1; WOW64; Trident/5.0)
    Internet Explorer 8 Mode: Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 6.1; WOW64; Trident/4.0; SLCC2; .NET CLR 2.0.50727; .NET CLR 3.5.30729; .NET CLR 3.0.30729; MS-RTC LM 8; InfoPath.3; .NET4.0C; .NET4.0E; Zune 4.7)
    Internet Explorer 7 Mode: Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 6.1; WOW64; SLCC2; .NET CLR 2.0.50727; .NET CLR 3.5.30729; .NET CLR 3.0.30729; MS-RTC LM 8; InfoPath.3; .NET4.0C; .NET4.0E; Zune 4.7)
    Internet Explorer 9 (Compatibility Mode): Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 6.1; WOW64; Trident/5.0; SLCC2; .NET CLR 2.0.50727; .NET CLR 3.5.30729; .NET CLR 3.0.30729; MS-RTC LM 8; InfoPath.3; .NET4.0C; .NET4.0E; Zune 4.7)

    A couple of things to note here:

    This was from a 64-bit Windows 7 client, so that might account for the WOW64 in the agent string (I don't have a 32-bit client to test from).
    Various applications and platforms add to the UA string just like they do in previous IE releases. For example, you can see I have various .NET versions installed as well as Zune. You can take advantage of this by querying the UA string for compatibilities and presenting options to the end user accordingly.

    As applications will continue to add to and modify this string, you'll want to query the string for parts, not the entire string. For example, if you want to detect whether the request is coming from IE running on Windows Phone 7, just look for "iemobile" in the user agent string. Happy hacking!
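    Building on that last point, here is a small sketch (not from the original post; the helper names are invented) of how an ASP.NET handler might branch on fragments of the UA string rather than matching the whole thing:

        using System;
        using System.Web;

        public static class UserAgentHelper
        {
            // Look for stable fragments ("iemobile", "Trident/5.0") instead of
            // matching the whole, ever-changing User-Agent string.
            public static bool IsWindowsPhoneIe(HttpRequest request)
            {
                var ua = request.UserAgent ?? string.Empty;
                return ua.IndexOf("iemobile", StringComparison.OrdinalIgnoreCase) >= 0;
            }

            public static bool IsIe9Engine(HttpRequest request)
            {
                // IE9's layout engine reports itself as Trident/5.0.
                var ua = request.UserAgent ?? string.Empty;
                return ua.IndexOf("Trident/5.0", StringComparison.OrdinalIgnoreCase) >= 0;
            }
        }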

    Read the article

  • Part 4, Getting the conversion tables ready for CS to DNN

    - by Chris Hammond
    This is the fourth post in a series of blog posts about converting from CommunityServer to DotNetNuke. A brief background: I had a number of websites running on CommunityServer 2.1, I decided it was finally time to ditch CommunityServer due to the change in their licensing model and pricing that made it not good for the small guy. This series of blog posts is about how to convert your CommunityServer based sites to DotNetNuke . Previous Posts: Part 1: An Introduction Part 2: DotNetNuke Installation...(read more)

    Read the article

  • Using TPL and PLINQ to raise performance of feed aggregator

    - by DigiMortal
    In this posting I will show you how to use Task Parallel Library (TPL) and PLINQ features to boost performance of simple RSS-feed aggregator. I will use here only very basic .NET classes that almost every developer starts from when learning parallel programming. Of course, we will also measure how every optimization affects performance of feed aggregator. Feed aggregator Our feed aggregator works as follows: Load list of blogs Download RSS-feed Parse feed XML Add new posts to database Our feed aggregator is run by task scheduler after every 15 minutes by example. We will start our journey with serial implementation of feed aggregator. Second step is to use task parallelism and parallelize feeds downloading and parsing. And our last step is to use data parallelism to parallelize database operations. We will use Stopwatch class to measure how much time it takes for aggregator to download and insert all posts from all registered blogs. After every run we empty posts table in database. Serial aggregation Before doing parallel stuff let’s take a look at serial implementation of feed aggregator. All tasks happen one after other. internal class FeedClient {     private readonly INewsService _newsService;     private const int FeedItemContentMaxLength = 255;       public FeedClient()     {          ObjectFactory.Initialize(container =>          {              container.PullConfigurationFromAppConfig = true;          });           _newsService = ObjectFactory.GetInstance<INewsService>();     }       public void Execute()     {         var blogs = _newsService.ListPublishedBlogs();           for (var index = 0; index <blogs.Count; index++)         {              ImportFeed(blogs[index]);         }     }       private void ImportFeed(BlogDto blog)     {         if(blog == null)             return;         if (string.IsNullOrEmpty(blog.RssUrl))             return;           var uri = new Uri(blog.RssUrl);         SyndicationContentFormat feedFormat;           feedFormat = SyndicationDiscoveryUtility.SyndicationContentFormatGet(uri);           if (feedFormat == SyndicationContentFormat.Rss)             ImportRssFeed(blog);         if (feedFormat == SyndicationContentFormat.Atom)             ImportAtomFeed(blog);                 }       private void ImportRssFeed(BlogDto blog)     {         var uri = new Uri(blog.RssUrl);         var feed = RssFeed.Create(uri);           foreach (var item in feed.Channel.Items)         {             SaveRssFeedItem(item, blog.Id, blog.CreatedById);         }     }       private void ImportAtomFeed(BlogDto blog)     {         var uri = new Uri(blog.RssUrl);         var feed = AtomFeed.Create(uri);           foreach (var item in feed.Entries)         {             SaveAtomFeedEntry(item, blog.Id, blog.CreatedById);         }     } } Serial implementation of feed aggregator downloads and inserts all posts with 25.46 seconds. Task parallelism Task parallelism means that separate tasks are run in parallel. You can find out more about task parallelism from MSDN page Task Parallelism (Task Parallel Library) and Wikipedia page Task parallelism. Although finding parts of code that can run safely in parallel without synchronization issues is not easy task we are lucky this time. Feeds import and parsing is perfect candidate for parallel tasks. We can safely parallelize feeds import because importing tasks doesn’t share any resources and therefore they don’t also need any synchronization. 
After getting the list of blogs we iterate through the collection and start new TPL task for each blog feed aggregation. internal class FeedClient {     private readonly INewsService _newsService;     private const int FeedItemContentMaxLength = 255;       public FeedClient()     {          ObjectFactory.Initialize(container =>          {              container.PullConfigurationFromAppConfig = true;          });           _newsService = ObjectFactory.GetInstance<INewsService>();     }       public void Execute()     {         var blogs = _newsService.ListPublishedBlogs();                var tasks = new Task[blogs.Count];           for (var index = 0; index <blogs.Count; index++)         {             tasks[index] = new Task(ImportFeed, blogs[index]);             tasks[index].Start();         }           Task.WaitAll(tasks);     }       private void ImportFeed(object blogObject)     {         if(blogObject == null)             return;         var blog = (BlogDto)blogObject;         if (string.IsNullOrEmpty(blog.RssUrl))             return;           var uri = new Uri(blog.RssUrl);         SyndicationContentFormat feedFormat;           feedFormat = SyndicationDiscoveryUtility.SyndicationContentFormatGet(uri);           if (feedFormat == SyndicationContentFormat.Rss)             ImportRssFeed(blog);         if (feedFormat == SyndicationContentFormat.Atom)             ImportAtomFeed(blog);                }       private void ImportRssFeed(BlogDto blog)     {          var uri = new Uri(blog.RssUrl);          var feed = RssFeed.Create(uri);           foreach (var item in feed.Channel.Items)          {              SaveRssFeedItem(item, blog.Id, blog.CreatedById);          }     }     private void ImportAtomFeed(BlogDto blog)     {         var uri = new Uri(blog.RssUrl);         var feed = AtomFeed.Create(uri);           foreach (var item in feed.Entries)         {             SaveAtomFeedEntry(item, blog.Id, blog.CreatedById);         }     } } You should notice first signs of the power of TPL. We made only minor changes to our code to parallelize blog feeds aggregating. On my machine this modification gives some performance boost – time is now 17.57 seconds. Data parallelism There is one more way how to parallelize activities. Previous section introduced task or operation based parallelism, this section introduces data based parallelism. By MSDN page Data Parallelism (Task Parallel Library) data parallelism refers to scenario in which the same operation is performed concurrently on elements in a source collection or array. In our code we have independent collections we can process in parallel – imported feed entries. As checking for feed entry existence and inserting it if it is missing from database doesn’t affect other entries the imported feed entries collection is ideal candidate for parallelization. 
internal class FeedClient {     private readonly INewsService _newsService;     private const int FeedItemContentMaxLength = 255;       public FeedClient()     {          ObjectFactory.Initialize(container =>          {              container.PullConfigurationFromAppConfig = true;          });           _newsService = ObjectFactory.GetInstance<INewsService>();     }       public void Execute()     {         var blogs = _newsService.ListPublishedBlogs();                var tasks = new Task[blogs.Count];           for (var index = 0; index <blogs.Count; index++)         {             tasks[index] = new Task(ImportFeed, blogs[index]);             tasks[index].Start();         }           Task.WaitAll(tasks);     }       private void ImportFeed(object blogObject)     {         if(blogObject == null)             return;         var blog = (BlogDto)blogObject;         if (string.IsNullOrEmpty(blog.RssUrl))             return;           var uri = new Uri(blog.RssUrl);         SyndicationContentFormat feedFormat;           feedFormat = SyndicationDiscoveryUtility.SyndicationContentFormatGet(uri);           if (feedFormat == SyndicationContentFormat.Rss)             ImportRssFeed(blog);         if (feedFormat == SyndicationContentFormat.Atom)             ImportAtomFeed(blog);                }       private void ImportRssFeed(BlogDto blog)     {         var uri = new Uri(blog.RssUrl);         var feed = RssFeed.Create(uri);           feed.Channel.Items.AsParallel().ForAll(a =>         {             SaveRssFeedItem(a, blog.Id, blog.CreatedById);         });      }        private void ImportAtomFeed(BlogDto blog)      {         var uri = new Uri(blog.RssUrl);         var feed = AtomFeed.Create(uri);           feed.Entries.AsParallel().ForAll(a =>         {              SaveAtomFeedEntry(a, blog.Id, blog.CreatedById);         });      } } We did small change again and as the result we parallelized checking and saving of feed items. This change was data centric as we applied same operation to all elements in collection. On my machine I got better performance again. Time is now 11.22 seconds. Results Let’s visualize our measurement results (numbers are given in seconds). As we can see then with task parallelism feed aggregation takes about 25% less time than in original case. When adding data parallelism to task parallelism our aggregation takes about 2.3 times less time than in original case. More about TPL and PLINQ Adding parallelism to your application can be very challenging task. You have to carefully find out parts of your code where you can safely go to parallel processing and even then you have to measure the effects of parallel processing to find out if parallel code performs better. If you are not careful then troubles you will face later are worse than ones you have seen before (imagine error that occurs by average only once per 10000 code runs). Parallel programming is something that is hard to ignore. Effective programs are able to use multiple cores of processors. Using TPL you can also set degree of parallelism so your application doesn’t use all computing cores and leaves one or more of them free for host system and other processes. And there are many more things in TPL that make it easier for you to start and go on with parallel programming. In next major version all .NET languages will have built-in support for parallel programming. There will be also new language constructs that support parallel programming. 
Currently you can download Visual Studio Async to get some idea about what is coming. Conclusion Parallel programming is very challenging but good tools offered by Visual Studio and .NET Framework make it way easier for us. In this posting we started with feed aggregator that imports feed items on serial mode. With two steps we parallelized feed importing and entries inserting gaining 2.3 times raise in performance. Although this number is specific to my test environment it shows clearly that parallel programming may raise the performance of your application significantly.
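    As a footnote to the degree-of-parallelism remark above, here is a minimal sketch (not part of the original aggregator code; the collection and method names are placeholders) showing how both PLINQ and Parallel.ForEach let you cap the number of cores used so one stays free for the host system:

        using System;
        using System.Linq;
        using System.Threading.Tasks;

        public static class DegreeOfParallelismDemo
        {
            public static void Run(int[] feedIds)
            {
                // PLINQ: WithDegreeOfParallelism caps the number of concurrently
                // executing tasks, leaving the remaining cores free.
                feedIds.AsParallel()
                       .WithDegreeOfParallelism(Math.Max(1, Environment.ProcessorCount - 1))
                       .ForAll(id => Console.WriteLine("importing feed {0}", id));

                // Parallel.ForEach: the same idea expressed via ParallelOptions.
                var options = new ParallelOptions
                {
                    MaxDegreeOfParallelism = Math.Max(1, Environment.ProcessorCount - 1)
                };
                Parallel.ForEach(feedIds, options, id => Console.WriteLine("importing feed {0}", id));
            }
        }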

    Read the article

  • C# with keyword equivalent

    - by oazabir
    There’s no with keyword in C#, like Visual Basic. So you end up writing code like this: this.StatusProgressBar.IsIndeterminate = false; this.StatusProgressBar.Visibility = Visibility.Visible; this.StatusProgressBar.Minimum = 0; this.StatusProgressBar.Maximum = 100; this.StatusProgressBar.Value = percentage; Here’s a work around to this: With.A<ProgressBar>(this.StatusProgressBar, (p) => { p.IsIndeterminate = false; p.Visibility = Visibility.Visible; p.Minimum = 0; p.Maximum = 100; p.Value = percentage; }); Saves you repeatedly typing the same class instance or control name over and over again. It also makes code more readable since it clearly says that you are working with a progress bar control within the block. It you are setting properties of several controls one after another, it’s easier to read such code this way since you will have dedicated block for each control. It’s a very simple one line function that does it: public static class With { public static void A<T>(T item, Action<T> work) { work(item); } } You could argue that you can just do this: var p = this.StatusProgressBar; p.IsIndeterminate = false; p.Visibility = Visibility.Visible; p.Minimum = 0; p.Maximum = 100; p.Value = percentage; But it’s not elegant. You are introducing a variable “p” in the local scope of the whole function. This goes against naming conventions. Morever, you can’t limit the scope of “p” within a certain place in the function.

    Read the article
