Search Results

Search found 19838 results on 794 pages for 'software'.

Page 98/794 | < Previous Page | 94 95 96 97 98 99 100 101 102 103 104 105  | Next Page >

  • Writing C# Code Using SOLID Principles

    - by bipinjoshi
    Most modern programming languages, including C#, support object-oriented programming. Features such as encapsulation, inheritance, overloading and polymorphism are code-level features. Using these features is just one part of the story. Equally important is applying object-oriented design principles while writing your C# code. SOLID is a set of five such principles--namely the Single Responsibility Principle, Open/Closed Principle, Liskov Substitution Principle, Interface Segregation Principle and Dependency Inversion Principle. Applying these time-proven principles makes your code structured, neat and easy to maintain. This article discusses the SOLID principles and also illustrates how they can be applied to your C# code. http://www.binaryintellect.net/articles/7f857089-68f5-4d76-a3b7-57b898b6f4a8.aspx
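
    To make one of the five principles concrete, here is a minimal sketch of the Dependency Inversion Principle in C#. The types below (INotifier, EmailNotifier, OrderService) are illustrative names, not code from the linked article:

        using System;

        // High-level code depends on an abstraction...
        public interface INotifier
        {
            void Notify(string message);
        }

        // ...and the low-level detail implements that abstraction.
        public class EmailNotifier : INotifier
        {
            public void Notify(string message)
            {
                Console.WriteLine("Email sent: " + message);
            }
        }

        public class OrderService
        {
            private readonly INotifier _notifier;

            // The dependency is injected, so OrderService never constructs a concrete notifier itself.
            public OrderService(INotifier notifier)
            {
                _notifier = notifier;
            }

            public void PlaceOrder(int orderId)
            {
                // ...persist the order...
                _notifier.Notify("Order " + orderId + " placed.");
            }
        }

    Swapping EmailNotifier for, say, an SmsNotifier then requires no change to OrderService, which is the kind of structured, easy-to-maintain code the article is about.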

    Read the article

  • The ugly evolution of running a background operation in the context of an ASP.NET app

    - by Jeff
    If you’re one of the two people who have followed my blog for many years, you know that I’ve been going at POP Forums for almost 15 years now. Publishing it as an open source app has been a big help because it helps me understand how people want to use it, and having it translated to six languages is pretty sweet. Despite this warm and fuzzy group hug, there has been an ugly hack hiding in there for years.

    One of the things we find ourselves wanting to do is hide some kind of regular process inside of an ASP.NET application that runs periodically. The motivation for this has always been that a lot of people simply don’t have a choice, because they’re running the app on shared hosting, or don’t otherwise have access to a box that can run some kind of regular background service. In POP Forums, I “solved” this problem years ago by hiding some static timers in an HttpModule. Truthfully, this works well as long as you don’t run multiple instances of the app, which in the cloud world is always a possibility. With the arrival of WebJobs in Azure, I’m going to solve this problem. This post isn’t about that.

    The other little hacky problem that I “solved” was spawning a background thread to queue emails to subscribed users of the forum. This evolved quite a bit over the years, starting with a long-running page to mail users in real time, when I had only a few hundred. By the time it got into the thousands, or tens of thousands, I needed a better way. What I did was launch a new thread that read all of the user data in, then wrote a queued email to the database (as in, the entire body of the email, every time), with the properly formatted opt-out link. It was super inefficient, but it worked. Then I moved my biggest site using it, CoasterBuzz, to an Azure Website, and it stopped working.

    So let’s start with the first stupid thing I was doing. The new thread was simply created with delegate code inline. As best I can tell, Azure Websites are more aggressive about garbage collection, because that thread didn’t queue even one message. When the calling server response went out of scope, so went the magic background thread. Duh, all I had to do was move the thread to a private static variable in the class. That’s the way I was able to keep stuff running from the HttpModule. (And yes, I know this is still prone to failure, particularly if the app recycles. For as infrequently as it’s used, I have not, however, experienced this.)

    It was still failing, but this time I wasn’t sure why. It would queue a few dozen messages, then die. Running in Azure, I had to turn on the application logging and FTP in to see what was going on. That led me to a helper method I was using as a delegate to build the unsubscribe links. The idea here is that I didn’t want yet another config entry to describe the base URL, appended with the right path that would match the routing table. No, I wanted the app to figure it out for you, so I came up with this little thing:

        public static string FullUrlHelper(this Controller controller, string actionName, string controllerName, object routeValues = null)
        {
            var helper = new UrlHelper(controller.Request.RequestContext);
            var requestUrl = controller.Request.Url;
            if (requestUrl == null)
                return String.Empty;
            var url = requestUrl.Scheme + "://";
            url += requestUrl.Host;
            url += (requestUrl.Port != 80 ? ":" + requestUrl.Port : "");
            url += helper.Action(actionName, controllerName, routeValues);
            return url;
        }

    And yes, that should have been done with a string builder.
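
    For comparison, a StringBuilder version of that helper might look roughly like this; it is only a sketch of the suggested cleanup, not the code that shipped in POP Forums:

        // Requires: using System.Text;
        public static string FullUrlHelper(this Controller controller, string actionName, string controllerName, object routeValues = null)
        {
            var helper = new UrlHelper(controller.Request.RequestContext);
            var requestUrl = controller.Request.Url;
            if (requestUrl == null)
                return String.Empty;

            // Same logic as above, built with a StringBuilder instead of repeated string concatenation.
            var url = new StringBuilder();
            url.Append(requestUrl.Scheme).Append("://").Append(requestUrl.Host);
            if (requestUrl.Port != 80)
                url.Append(":").Append(requestUrl.Port);
            url.Append(helper.Action(actionName, controllerName, routeValues));
            return url.ToString();
        }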
    This helper is useful for sending out the email verification messages, too. As clever as I thought I was with this, I was using a delegate in the admin controller to format these unsubscribe links for tens of thousands of users. I passed that delegate into a service class that did the email work:

        Func<User, string> unsubscribeLinkGenerator =
            user => this.FullUrlHelper("Unsubscribe", AccountController.Name, new { id = user.UserID, key = _profileService.GetUnsubscribeHash(user) });
        _mailingListService.MailUsers(subject, body, htmlBody, unsubscribeLinkGenerator);

    Cool, right? Actually, not so much. If you look back at the helper, this delegate will then depend on the controller context to learn the routing and format for the URL. As you might have guessed, those things were turning null after a few dozen formatted links, when the original request to the admin controller went away. That this wasn’t already happening on my dedicated server is surprising, but again, I understand why the Azure environment might be eager to reclaim a thread after servicing the request.

    It’s already inefficient that I’m building the entire email for every user, but going back to check the routing table for the right link every time isn’t a win either. I put together a little hack to look up one generic URL, and use that as the basis for a string format. If you’re wondering why I didn’t just use the curly braces up front, it’s because they get URL encoded:

        var baseString = this.FullUrlHelper("Unsubscribe", AccountController.Name, new { id = "--id--", key = "--key--" });
        baseString = baseString.Replace("--id--", "{0}").Replace("--key--", "{1}");
        Func<User, string> unsubscribeLinkGenerator =
            user => String.Format(baseString, user.UserID, _profileService.GetUnsubscribeHash(user));
        _mailingListService.MailUsers(subject, body, htmlBody, unsubscribeLinkGenerator);

    And wouldn’t you know it, the new solution works just fine. It’s still kind of hacky and inefficient, but it will work until this somehow breaks too.
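
    For reference, the “move the thread to a private static variable” fix mentioned above might look roughly like the following. The class and method bodies here are hypothetical; this is a sketch of the pattern, not the actual POP Forums code:

        // Requires: using System; using System.Threading;
        public class MailingListService
        {
            // Rooting the thread in a static field keeps it alive after the request
            // that started it has completed. (As noted above, this is still vulnerable
            // to app pool recycles, so it is a pragmatic hack rather than a real fix.)
            private static Thread _mailWorker;

            public void MailUsers(string subject, string body, string htmlBody, Func<User, string> unsubscribeLinkGenerator)
            {
                _mailWorker = new Thread(() => QueueEmails(subject, body, htmlBody, unsubscribeLinkGenerator));
                _mailWorker.IsBackground = true;
                _mailWorker.Start();
            }

            private void QueueEmails(string subject, string body, string htmlBody, Func<User, string> unsubscribeLinkGenerator)
            {
                // Read the subscribed users and write one queued email per user to the database here.
            }
        }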

    Read the article

  • Errors while updating Ubuntu 13.10

    - by santiago
    When I execute the updater it always throws the same error saying: Failed to download repository information. Check your internet connection. And when I run sudo apt-get update I get these lines at the end:

        W: Failed to fetch cdrom://Ubuntu 13.10 Saucy Salamander - Release i386 (20131016.1)/dists/saucy/main/binary-i386/Packages
           Please use apt-cdrom to make this CD-ROM recognized by APT. apt-get update cannot be used to add new CD-ROMs
        W: Failed to fetch cdrom://Ubuntu 13.10 Saucy Salamander - Release i386 (20131016.1)/dists/saucy/restricted/binary-i386/Packages
           Please use apt-cdrom to make this CD-ROM recognized by APT. apt-get update cannot be used to add new CD-ROMs
        W: Failed to fetch http://ppa.launchpad.net/tiheum/equinox/ubuntu/dists/saucy/main/source/Sources  404 Not Found
        W: Failed to fetch http://ppa.launchpad.net/tiheum/equinox/ubuntu/dists/saucy/main/binary-i386/Packages  404 Not Found
        E: Some index files failed to download. They have been ignored, or old ones used instead

    Read the article

  • apt sources.list disabled on upgrade to 12.04

    - by user101089
    After a do-release-upgrade, I'm now running ubuntu 12.04 LTS, as indicated below:

        > lsb_release -a
        No LSB modules are available.
        Distributor ID: Ubuntu
        Description:    Ubuntu 12.04 LTS
        Release:        12.04
        Codename:       precise

    However, I find that all the entries in my /etc/apt/sources.list were commented out except for one. QUESTION: Is it safe for me to edit these, replacing the old 'lucid' with 'precise' in what is shown below?

        ## unixteam source list
        # deb http://debian.yorku.ca/ubuntu/ precise main main/debian-installer restricted restricted/debian-installer # disabled on upgrade to precise
        # deb-src http://debian.yorku.ca/ubuntu/ precise main restricted # disabled on upgrade to precise
        # deb http://debian.yorku.ca/ubuntu/ lucid-updates main restricted # disabled on upgrade to precise
        # deb-src http://debian.yorku.ca/ubuntu/ lucid-updates main restricted # disabled on upgrade to precise
        # deb http://debian.yorku.ca/ubuntu/ precise universe # disabled on upgrade to precise
        # deb-src http://debian.yorku.ca/ubuntu/ precise universe # disabled on upgrade to precise
        # deb http://debian.yorku.ca/ubuntu/ precise multiverse # disabled on upgrade to precise
        # deb-src http://debian.yorku.ca/ubuntu/ precise multiverse # disabled on upgrade to precise
        # deb http://debian.yorku.ca/ubuntu lucid-security main restricted # disabled on upgrade to precise
        # deb-src http://debian.yorku.ca/ubuntu lucid-security main restricted # disabled on upgrade to precise
        # deb http://debian.yorku.ca/ubuntu lucid-security universe # disabled on upgrade to precise
        # deb-src http://debian.yorku.ca/ubuntu lucid-security universe # disabled on upgrade to precise
        # deb http://debian.yorku.ca/ubuntu lucid-security multiverse # disabled on upgrade to precise
        # deb-src http://debian.yorku.ca/ubuntu lucid-security multiverse # disabled on upgrade to precise
        # R sources
        # see http://cran.us.r-project.org/bin/linux/ubuntu/ for details
        # deb http://probability.ca/cran/bin/linux/ubuntu lucid/ # disabled on upgrade to precise
        deb http://archive.ubuntu.com/ubuntu precise main multiverse universe

    Read the article

  • Microsoft OneNote alternative?

    - by YSN
    Hi folks! Is there any program for Linux that has about the same functionality and usability as Microsoft OneNote? At the moment I am checking out Basket (for KDE), which seems to be a step in the right direction, but it still lacks much of the functionality of OneNote and is unfortunately very buggy. For those of you who do not know OneNote and want to get an idea of what I am talking about, please see the following video: http://www.youtube.com/watch?v=xdi67tnx6nA Any alternative suggestions (including a combination of different tools or apps running in Wine) are very much appreciated. Thanks, YSN

    Read the article

  • Using old code on new version of Visual Studio [migrated]

    - by Tu Tran
    I have a project which was started in the 90s in C/C++. Therefore, it contains many old coding styles such as K&R-style function declarations, obsolete functions, ... The project works fine in Visual Studio 2008, but now I want to use it in a newer version of Visual Studio (specifically VS 2010) because we have other projects in Visual Studio 2010/2012. I don't want to have too many versions of Visual Studio on my machine. When I try to compile the old project, Visual Studio throws too many errors. I can fix all of them, but I am scared to edit the source code, and I want other people to be able to open it in the old version of VS too. I want the project to remain backwards compatible with VS. My question is how to use the old code in Visual Studio 2010/2012 without changing the code. Or, if necessary, how do I fix just a few lines of code while making sure it won't cause an error if someone else opens that code in the older version of VS? Is there a way to tell newer Visual Studio versions to use older compiler flags or something like that?
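
    One commonly used option, if the VS 2008 toolchain is still installed alongside the newer IDE, is to open the project in VS 2010/2012 but keep building with the old compiler via the platform toolset setting. This is only a sketch for a C++ .vcxproj and should be verified against the actual project type:

        <!-- In the .vcxproj, per configuration; v90 selects the Visual Studio 2008 compiler and libraries -->
        <PropertyGroup Label="Configuration">
          <PlatformToolset>v90</PlatformToolset>
        </PropertyGroup>

    The newer IDE then drives the 2008 compiler, so the old code typically keeps building as before, and the original .vcproj can usually be kept in the repository for teammates still on VS 2008.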

    Read the article

  • Software Engineering Practices &ndash; Different Projects should have different maturity levels

    - by Dylan Smith
    I’ve had a lot of discussions at the office lately about the drastically different sets of software engineering practices used on our various projects, whether what we are doing is appropriate, and what factors should be considered when determining which practices are most appropriate in a given context. I wanted to write up my thoughts in a little more detail on this subject, so here we go:

    If you compare any two software projects (specifically comparing their codebases) you’ll often see very different levels of maturity in the software engineering practices employed. By software engineering practices, I’m specifically referring to the quality of the code and the amount of technical debt present in the project. Things such as Test Driven Development, Domain Driven Design, Behavior Driven Development, proper adherence to the SOLID principles, etc. are all practices that you would expect at the mature end of the spectrum. At the other end of the spectrum would be the quick-and-dirty solutions that are done using something like an Access Database, Excel Spreadsheet, or maybe some quick “drag-and-drop coding”. For this blog post I’m going to refer to this as the Software Engineering Maturity Spectrum (SEMS).

    I believe there is a time and a place for projects at every part of that SEMS. The risks and costs associated with under-engineering solutions have been written about a million times over, so I won’t bother going into them again here, but there are also (unnecessary) costs to over-engineering a solution. Sometimes putting in multiple layers, and IoC containers, and abstracting out the persistence, etc. is complete overkill if a one-time-use Access database could solve the problem perfectly well.

    A lot of software developers I talk to seem to automatically jump to the very right-hand side of this SEMS in everything they do. A common rationalization I hear is that it may seem like a small trivial application today, but these things always grow and stick around for many years, and then you’re stuck maintaining a big ball of mud. I think this is a cop-out. Sure, you can’t always anticipate how an application will be used or grow over its lifetime (can you ever??), but that doesn’t mean you can’t manage it and evolve the underlying software architecture as necessary (even if that means having to toss the code out and re-write it at some point…maybe even multiple times).

    My thought is that we should make a conscious decision, around the start of each project, about approximately where on the SEMS we want the project to exist. I believe this decision should be based on 3 factors:

    1. Importance - How important to the business is this application? What is the impact if the application were to suddenly stop working?
    2. Complexity - How complex is the application functionality?
    3. Life-Expectancy - How long is this application expected to be in use? Is this a one-time use application, does it fill a short-term need, or is it more strategic and expected to be in use for many years to come?

    Of course this isn’t an exact science. You can’t say that Project X should be at the 73% mark on the SEMS and expect that to be helpful. My point is not that you need to precisely figure out what point on the SEMS the project should be at and then translate that into some prescriptive set of practices and techniques you should be using. Rather, my point is that we need to be aware that there is a spectrum, and that not everything is going to be (or should be) at the edges of that spectrum; indeed, a large number of projects should probably fall somewhere within the middle, and different projects should adopt a different level of software engineering practices and maturity based on the needs of that project.

    To give an example of this way of thinking from my day job: every couple of years my company plans and hosts a large event where ~400 of our customers all fly in to one location for a multi-day event with various activities. We have some staff whose job it is to organize the logistics of this event, which includes tracking which flights everybody is booked on, arranging for transportation to/from airports, arranging for hotel rooms, name tags, etc. The last time we arranged this event, all these various pieces of data were tracked in separate spreadsheets, and reconciliation and cross-referencing of all the data was literally done by hand using printed copies of the spreadsheets and several people sitting around a table going down each list row by row. Obviously there is some room for improvement in how we are using software to manage the event’s logistics.

    The next time this event occurs we plan to provide the event planning staff with a more intelligent tool (either an Excel spreadsheet or probably an Access database) that can track all the information in one location and make sure that the various pieces of data are properly linked together (so, for example, if a person cancels you only need to delete them from one place, and not a dozen separate lists). This solution would fall at or near the very left end of the SEMS, meaning that we will just quickly create something with very little attention paid to using mature software engineering practices. If we examine this project against the 3 criteria I listed above for determining its place within the SEMS we can see why:

    Importance – If this application were to stop working the business doesn’t grind to a halt, revenue doesn’t stop, and in fact our customers wouldn’t even notice since it isn’t a customer-facing application. The impact would simply be more work for our event planning staff as they revert back to the previous way of doing things (assuming we don’t have any data loss).
    Complexity – The use cases for this project are pretty straightforward. It simply needs to manage several lists of data and link them together appropriately. Precisely the task that Access (and/or Excel) can do with minimal custom development required.
    Life-Expectancy – For this specific project we’re only planning to create something to be used for the one event (we only hold these events every 2 years). If it works well this may change (see below).

    Let’s assume we hack something out quickly and it works great when we plan the next event. We may decide that we want to make some tweaks to the tool and adopt it for planning all future events of this nature. In that case we should examine where the current application is on the SEMS, and make a conscious decision whether something needs to be done to move it further to the right based on the new objectives and goals for this application. This may mean scrapping the Access database and re-writing it as an actual web or Windows application. In this case the life-expectancy changed, but let’s assume the importance and complexity didn’t change all that much. We can still probably get away with not adopting a lot of the so-called “best practices”. For example, we can probably still use some of the RAD tooling available and might have an Autonomous View style design that connects directly to the database and binds to typed datasets (we might even choose to simply leave it as an Access database and continue using it; this is a decision that needs to be made on a case-by-case basis).

    At Anvil Digital we have aspirations to become a primarily product-based company. So let’s say we use this tool to plan a handful of events internally, and everybody loves it. Maybe a couple years down the road we decide we want to package the tool up and sell it as a product to some of our customers. In this case the project objectives/goals change quite drastically. Now the tool becomes a source of revenue, and the impact of it suddenly stopping working is significantly less acceptable. Also, as we hold focus groups and gather feedback from customers and potential customers, there’s a pretty good chance the feature-set and complexity will have to grow considerably from when we were using it only internally for planning a small handful of events for one company.

    In this fictional scenario I would expect the target on the SEMS to jump to the far right. Depending on how we implemented the previous release, we may be able to refactor and evolve the existing codebase to introduce a more layered architecture, a robust set of automated tests, a proper ORM and IoC container, etc. More likely in this example the jump along the SEMS would be so large we’d probably end up scrapping the current code and re-writing. Although, if it was a slow phased roll-out to only a handful of customers, where we collected feedback, made some tweaks, and then rolled out to a couple more customers, we may be able to slowly refactor and evolve the code over time rather than tossing it out and starting from scratch.

    The key point I’m trying to get across is not that you should be throwing out your code and starting from scratch all the time, but rather that you should be aware of when and how the context and objectives around a project change, and periodically re-assess where the project currently falls on the SEMS and whether that needs to be adjusted based on changing needs.

    Note: There is also the idea of “spectrum decay”. Since our industry is rapidly evolving, what we currently accept as mature software engineering practices (the right end of the SEMS) probably won’t be the same 3 years from now. If you have a project that you were to assess at somewhere around the 80% mark on the SEMS today, but don’t touch the code for 3 years and come back and re-assess its position, it will almost certainly have changed, since the right end of the SEMS will have moved farther out (maybe the project is now only around 60% due to decay).

    Developer Skills
    Another important aspect to this whole discussion is the skill sets of your architects and lead developers. When talking about the progression of a developer’s skills from junior->intermediate->senior->… they generally start by only being able to write code that belongs on the left side of the SEMS, and as they gain more knowledge and skill they become capable of working at a higher and higher level along the SEMS. We all realize that the learning never stops, but eventually you’ll get to the point where you can comfortably develop at the right end of the SEMS (the exact practices and techniques that translates to are constantly changing, but that’s not the point here).

    A critical skill that I’d love to see more evidence of in our industry is the most senior guys not only being able to work at the right end of the SEMS, but more importantly being able to consciously work at any point along the SEMS as project needs dictate. An even more valuable skill would be being able to make the conscious decision to move a project’s code further right on the SEMS (based on changing needs) and to do so in an incremental manner without having to start from scratch.

    An exercise that I’m planning to go through with all of our projects here at Anvil in the near future is to map out where I believe each project currently falls within this SEMS, where I believe the project *should* be on the SEMS based on the business needs, and for those that don’t match up (i.e. most of them) come up with a plan to improve the situation.

    Read the article

  • Show #14 DotNetNuke 5.6.1, Razor/Webmatrix and WebCamps

    - by Chris Hammond
    Once again, it’s been far too long since the last show, this time just over 4 months. For Show #14 I am joined by Joe Brinkman. Take a listen and see what has been going on in the DNN world. Length: 47:56 | Size: 43.8mb | MP3 Download

    - Welcome back to DNNVoice
    - Welcome to guest Joe Brinkman ( http://blog.theaccidentalgeek.com/ )
    - Introduction to Joe
    - Welcome back from Chris Hammond ( http://www.chrishammond.com ) and what he's been doing
    - DotNetNuke Training
    - Free Extensions
    - Module Development Templates...(read more)

    Read the article

  • A Quick Primer on SharePoint Customization

    - by PeterBrunone
    This one goes out to all the people who have been asked to change the way a SharePoint site looks. Management wants to know how long it will take, and you can whip that out by tomorrow, right? If you don't have time to prepare a treatise on what's involved, or if you just want to lend some extra weight to your case by quoting a blogger who was an MVP for seven years, then dive right in; this post is for you.

    There are three main components of SharePoint visual customization:

    1) Theme – A theme encompasses all the standardized text formatting and coloring (borders, fonts, etc.), including the background images of various sections. All told, there could be around 50 images involved, and a few hundred CSS (style) classes. Installing a theme once it’s been created is no great feat. Given the number of pieces, of course, creating a new theme could take anywhere from a day to a week… once decisions have been made about the desired appearance.

    2) Master Page – A master page provides the framework for page layout. This includes all the top and side menus, where content shows up, et cetera. Master pages have been around for a long time in ASP.NET (Microsoft’s web development platform), and they do require some .NET programming knowledge. Beyond that, in SharePoint, there are a few dozen controls which the system expects to find on a given page. They’re not all used at once, but if they’re not there when they’re needed, chaos ensues. Estimating a custom master page is difficult, as it depends on the level of customization. I’ve been on projects where I was brought in simply to fix some problems and add a few finishing touches, and it took 2-3 weeks. Master page customization requires a large amount of testing time to make sure that the HTML, JavaScript, CSS, and control placement all work well together.

    3) Individual page layout – Each page (ideally) uses a master page for its template, but within the content areas defined by the master page, web parts can be added, removed, and configured from within the browser. The wireframe that Brent provided could most likely be completed simply by manipulating the content on the home page in this fashion, and we had allowed about a day of effort for the task. If needed, further functionality can be provided by an experienced ASP.NET developer; custom forms are a common example. This of course is a bit more in-depth than simple content manipulation and could take several days per page (or more; there’s really no way to quantify this without a set of requirements).

    That’s basically it. To recap: fonts and coloring are done with themes, and can take anywhere from a day to a week to create (not counting creative time); required technical skills include HTML, CSS, and image manipulation. Templated layout is done with master pages, and generally requires a developer familiar with both ASP.NET and SharePoint in particular; it can have far-reaching consequences depending on the complexity of the changes, and could add weeks or months to a project. Page layout can be as simple as content manipulation in the web browser, taking a few hours per page, or it can involve more detail, like custom forms, and can require programming expertise and significantly more development time.

    Read the article

  • Adding a website link to the Member Directory in DotNetNuke 6.2

    - by Chris Hammond
    In case you missed it, DotNetNuke 6.2 was released today; check out Will Morgenweck’s blog post for more details on the release. With some of the new features, DotNetNuke 6.2 makes it easier to start customizing the listing of members on your site, and also the Profile display for users on the website. I started implementing DotNetNuke 6.2 on one of my racing websites last night (yeah, so I upgraded before the release happened, a benefit of working for the corp). In doing so I configured the profile...(read more)

    Read the article

  • .NET development on a “Retina” MacBook Pro

    - by Jeff
    The rumor that Apple would release a super high resolution version of its 15” laptop has been around for quite a while, and it’s one I watched closely. After more than three years with a 17” MacBook Pro, and all of the screen real estate it offered, I was ready to replace it with something much lighter. It was a fantastic machine, still doing 6 or 7 hours after 460 charge cycles, but I wanted lighter and faster. With the SSD I put in it, I was able to sell it for $750. The appeal of higher resolution goes way back, to when I would plug into a projector and scale up. Consolas, as it turns out, is a nice looking font for code when it’s bigger. While I’m mostly indifferent to iOS, I have to admit that a higher dot pitch on the iPhone and iPad is pretty to look at. So I ordered the new 15” “Retina” model as soon as the Apple Store went live with it, and got it seven days later.

    I’ve been primarily using Parallels as my VM of choice from OS X for about five years. They recently put out an update for compatibility with the display, though I’m not entirely sure what that means. I figured there would have to be some messing around to get the VM to look right. The combination that seems to work best is this: set the display in OS X to “more room,” which is roughly the equivalent of the 1920x1200 that my 17” did. It’s not as stunning as the text at the default 1440x900 equivalent (in OS X), but it’s still quite readable. Parallels still doesn’t entirely know what to do with the high resolution, though what it should do is somehow treat it as native. That flaw aside, I set the Windows 7 scaling to 125%, and it generally looks pretty good. It’s not really taking advantage of the display for sharpness, but hopefully that’s something that Parallels will figure out.

    Screen tweaking aside, I got the base model with 16 gigs of RAM, so I give the VM 8. I can boot a Windows 7 VM in 9 seconds. Nine seconds! The Windows Experience Index scores are all 7 and above, except for graphics, which are both at 6. Again, that’s in a VM. It’s hard to believe there’s something so fast in a little slim package like that. Hopefully this one gets me at least three years, like the last one.

    Read the article

  • Hang In There

    - by andyleonard
    Introduction This post is about persistence in the face of adversity. Losing Everything Isn't Losing When I was in Army Basic Training, I heard the senior drill sergeant tell a soldier "This is just a thing, and things can't hurt you." It seemed an odd thing to say. So odd that it stuck with me all these years since boot camp. I believe part of the reason was the truth in that statement. Things can't hurt you. Does fear of losing everything paralyze you? Have you ever lost everything? I have. Well,...(read more)

    Read the article

  • Solution to Jira web service getWorklogs method error: Object of type System.Xml.XmlNode[] cannot be stored in an array of this type

    - by DigiMortal
    When using Jira web service methods that operate on work logs you may get the following error when running your .NET application: Object of type System.Xml.XmlNode[] cannot be stored in an array of this type. In this posting I will show you a solution to this problem. I don’t want to go too deep into the details; I think it’s enough for this posting to mention that this problem is related to one small conflict between .NET web service support and Axis. Of course, the Jira team is trying to solve it, but until this problem is solved you can use the solution provided here. There is a good solution to this problem given by Jira forum user Kostadin. You can find it in the Jira forum thread RemoteWorkLog serialization from Soap Service in C#. The solution is simple – you have to use a SOAP extension class to replace the new class names with the old ones that .NET found in the WSDL. Here is the code by Kostadin.

        public class JiraSoapExtensions : SoapExtension
        {
            private Stream _streamIn;
            private Stream _streamOut;

            public override void ProcessMessage(SoapMessage message)
            {
                string messageAsString;
                StreamReader reader;
                StreamWriter writer;

                switch (message.Stage)
                {
                    case SoapMessageStage.BeforeSerialize:
                        break;
                    case SoapMessageStage.AfterDeserialize:
                        break;
                    case SoapMessageStage.BeforeDeserialize:
                        reader = new StreamReader(_streamOut);
                        writer = new StreamWriter(_streamIn);
                        messageAsString = reader.ReadToEnd();
                        switch (message.MethodInfo.Name)
                        {
                            case "getWorklogs":
                            case "addWorklogWithNewRemainingEstimate":
                            case "addWorklogAndAutoAdjustRemainingEstimate":
                            case "addWorklogAndRetainRemainingEstimate":
                                messageAsString = messageAsString
                                    .Replace("RemoteWorklogImpl", "RemoteWorklog")
                                    .Replace("service", "beans");
                                break;
                        }
                        writer.Write(messageAsString);
                        writer.Flush();
                        _streamIn.Position = 0;
                        break;
                    case SoapMessageStage.AfterSerialize:
                        _streamIn.Position = 0;
                        reader = new StreamReader(_streamIn);
                        writer = new StreamWriter(_streamOut);
                        messageAsString = reader.ReadToEnd();
                        writer.Write(messageAsString);
                        writer.Flush();
                        break;
                }
            }

            public override Stream ChainStream(Stream stream)
            {
                _streamOut = stream;
                _streamIn = new MemoryStream();
                return _streamIn;
            }

            public override object GetInitializer(Type type)
            {
                return GetType();
            }

            public override object GetInitializer(LogicalMethodInfo info, SoapExtensionAttribute attribute)
            {
                return null;
            }

            public override void Initialize(object initializer)
            {
            }
        }

    To get this extension to work with the Jira web service you have to add the following block to your application configuration file (under the system.web section).

        <webServices>
          <soapExtensionTypes>
            <add type="JiraStudioExperiments.JiraSoapExtensions,JiraStudioExperiments" priority="1"/>
          </soapExtensionTypes>
        </webServices>

    The weird thing is that after successfully using this extension and disabling it, everything still works.
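
    Once the extension is registered, it applies transparently to calls made through the generated web service proxy. A minimal usage sketch follows; the proxy class name (JiraSoapServiceService) and the credentials are assumptions based on the names typically generated from the Jira WSDL, so they may differ in your project:

        // Hypothetical usage - the proxy type is whatever your WSDL import generated.
        var jira = new JiraSoapServiceService();
        string token = jira.login("someuser", "somepassword");
        RemoteWorklog[] worklogs = jira.getWorklogs(token, "PROJ-123");  // no longer fails with the XmlNode[] error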

    Read the article

  • How can I back up my ubuntu system?

    - by Eloff
    I'm sure there's a lot of questions on here similar to this, and I've been reading them, but I still feel this warrants a new question. I want nightly, incremental backups (full disk images would waste a lot of space - unless compressed somehow.) Preferably rotating or deleting old backups when running out of space or after a fixed number of backups. I want to be able to quickly and painlessly restore my system from these backups. This is my first time running ubuntu as my main development machine and I know from my experience with it as a server and in virtual machines that I regularly manage to make it unbootable or damage it to the point of being unable to rescue it. So how would you recommend I do this? There are so many options out there I really don't know where to start. There seems to be a vocal school of thought that it's sufficient to backup your home directory and the list of installed packages from the package manager. I've already installed lots of things from source, or outside of the package manager (development tools, ides, compilers, graphics drivers, etc.) So at the very least, if I do not back up the operating system itself I need to grab all config files, all program binaries, all created but required files, etc. I'd rather backup too much than too little - an ubuntu install is tiny anyway. Also this drastically reduces the restore time, which would cost me more in my time than the extra storage space. I tried using Deja Dup to backup the root partition, excluding some things like /mnt /media /dev /proc etc. Although many websites assured me you can backup a running linux system this way - that seems to be false as it complained that it could not backup the following files: /boot/System.map-3.0.0-17-generic /boot/System.map-3.2.0-22-generic /boot/vmcoreinfo-3.0.0-17-generic /boot/vmlinuz-3.0.0-17-generic /boot/vmlinuz-3.2.0-22-generic /etc/.pwd.lock /etc/NetworkManager/system-connections/LAN Connection /etc/apparmor.d/cache/lightdm-guest-session /etc/apparmor.d/cache/sbin.dhclient /etc/apparmor.d/cache/usr.bin.evince /etc/apparmor.d/cache/usr.lib.telepathy /etc/apparmor.d/cache/usr.sbin.cupsd /etc/apparmor.d/cache/usr.sbin.tcpdump /etc/apt/trustdb.gpg /etc/at.deny /etc/ati/inst_path_default /etc/ati/inst_path_override /etc/chatscripts /etc/cups/ssl /etc/cups/subscriptions.conf /etc/cups/subscriptions.conf.O /etc/default/cacerts /etc/fuse.conf /etc/group- /etc/gshadow /etc/gshadow- /etc/mtab.fuselock /etc/passwd- /etc/ppp/chap-secrets /etc/ppp/pap-secrets /etc/ppp/peers /etc/security/opasswd /etc/shadow /etc/shadow- /etc/ssl/private /etc/sudoers /etc/sudoers.d/README /etc/ufw/after.rules /etc/ufw/after6.rules /etc/ufw/before.rules /etc/ufw/before6.rules /lib/ufw/user.rules /lib/ufw/user6.rules /lost+found /root /run/crond.reboot /run/cups/certs /run/lightdm /run/lock/whoopsie/lock /run/udisks /var/backups/group.bak /var/backups/gshadow.bak /var/backups/passwd.bak /var/backups/shadow.bak /var/cache/apt/archives/lock /var/cache/cups/job.cache /var/cache/cups/job.cache.O /var/cache/cups/ppds.dat /var/cache/debconf/passwords.dat /var/cache/ldconfig /var/cache/lightdm/dmrc /var/crash/_usr_lib_x86_64-linux-gnu_colord_colord.102.crash /var/lib/apt/lists/lock /var/lib/dpkg/lock /var/lib/dpkg/triggers/Lock /var/lib/lightdm /var/lib/mlocate/mlocate.db /var/lib/polkit-1 /var/lib/sudo /var/lib/urandom/random-seed /var/lib/ureadahead/pack /var/lib/ureadahead/run.pack /var/log/btmp /var/log/installer/casper.log /var/log/installer/debug /var/log/installer/partman 
/var/log/installer/syslog /var/log/installer/version /var/log/lightdm/lightdm.log /var/log/lightdm/x-0-greeter.log /var/log/lightdm/x-0.log /var/log/speech-dispatcher /var/log/upstart/alsa-restore.log /var/log/upstart/alsa-restore.log.1.gz /var/log/upstart/console-setup.log /var/log/upstart/console-setup.log.1.gz /var/log/upstart/container-detect.log /var/log/upstart/container-detect.log.1.gz /var/log/upstart/hybrid-gfx.log /var/log/upstart/hybrid-gfx.log.1.gz /var/log/upstart/modemmanager.log /var/log/upstart/modemmanager.log.1.gz /var/log/upstart/module-init-tools.log /var/log/upstart/module-init-tools.log.1.gz /var/log/upstart/procps-static-network-up.log /var/log/upstart/procps-static-network-up.log.1.gz /var/log/upstart/procps-virtual-filesystems.log /var/log/upstart/procps-virtual-filesystems.log.1.gz /var/log/upstart/rsyslog.log /var/log/upstart/rsyslog.log.1.gz /var/log/upstart/ureadahead.log /var/log/upstart/ureadahead.log.1.gz /var/spool/anacron/cron.daily /var/spool/anacron/cron.monthly /var/spool/anacron/cron.weekly /var/spool/cron/atjobs /var/spool/cron/atspool /var/spool/cron/crontabs /var/spool/cups

    Read the article

  • How can a non-programmer become a developer?

    - by Sarang
    Every year, different types of freshers are recruited. But our IT field is not limited to IT engineers and computer engineers; it is full of all different types of engineers. How can an engineer from another discipline become a proper developer? I am asking this because, whatever branch of engineering a student comes from, he or she can shift to IT development given certain qualities. What are the qualities someone needs to have, or needs to develop, in order to become a developer?

    Read the article

  • A simple DotNetNuke article module with C# and VB.NET Source

    - by Chris Hammond
    For the DotNetNuke Connections conference last month I provided an advanced DotNetNuke module development course as a pre-conference training session. That training covered details on how to implement some of the newer features in the DotNetNuke platform within custom modules, mainly ContentItem integration and Taxonomy features. For the course I created a very basic Article module for DotNetNuke, ultimately naming it DNNSimpleArticle. For the course I created both a C# and a VB.NET version of the...(read more)

    Read the article

  • How to sync tasks between Ubuntu and Android?

    - by andrewsomething
    I was using first Tasque and then GTG on Ubuntu and Astrid on Android with Remember The Milk as the backend. Recently, Astrid has dropped support for Remember The Milk, and both Tasque and GTG's support for syncing with RTM has always left something to be desired. So I'm looking for a new solution, and I'm open to leaving RTM especially if it is for something that isn't proprietary. What have you found to keep your tasks lists in-sync between Ubuntu and Android? In particular, I'm looking for a desktop based program on the Ubuntu side rather than something in the browser. Astrid has treated me well, but I wouldn't mind trying something else. Currently, it can sync with Astrid.com, Google Tasks, and Producteev. Anyone know of a desktop app that supports any of those?

    Read the article

  • Apple IIGS emulator?

    - by xiaohouzi79
    What is the best quality Apple IIGS emulator for Ubuntu that is relatively easy to install? I have tried KEGS, but get the following (working without probs on my Windows partition):

        Preparing X Windows graphics system
        Visual 0 id: 00000021, screen: 0, depth: 24, class: 4
        red: 00ff0000, green: 0000ff00, blue: 000000ff
        cmap size: 256, bits_per_rgb: 8
        Chose visual: 0, max_colors: -1
        Will use shared memory for X
        pipes: pipe_fd = 4, 5 pipe2_fd: 6,7
        open /dev/dsp failed, ret: -1, errno:2
        parent dying, could not get sample rate from child ret: 0, fd: 6 errno:11

    Read the article

  • Failed to download repository information Check your Internet connection

    - by Luca Brazza
    I need to check if I have updates for Ubuntu. I think it is 11.05 As you can see this is what it says: Failed to download repository information Check your Internet connection. Details:

        W:Failed to fetch cdrom://Ubuntu 12.04 LTS _Precise Pangolin_ - Release amd64 (20120425)/dists/precise/main/binary-amd64/Packages
          Please use apt-cdrom to make this CD-ROM recognized by APT. apt-get update cannot be used to add new CD-ROMs
        W:Failed to fetch cdrom://Ubuntu 12.04 LTS _Precise Pangolin_ - Release amd64 (20120425)/dists/precise/restricted/binary-amd64/Packages
          Please use apt-cdrom to make this CD-ROM recognized by APT. apt-get update cannot be used to add new CD-ROMs
        W:Failed to fetch cdrom://Ubuntu 12.04 LTS _Precise Pangolin_ - Release amd64 (20120425)/dists/precise/main/binary-i386/Packages
          Please use apt-cdrom to make this CD-ROM recognized by APT. apt-get update cannot be used to add new CD-ROMs
        W:Failed to fetch cdrom://Ubuntu 12.04 LTS _Precise Pangolin_ - Release amd64 (20120425)/dists/precise/restricted/binary-i386/Packages
          Please use apt-cdrom to make this CD-ROM recognized by APT. apt-get update cannot be used to add new CD-ROMs
        W:Failed to fetch http://ppa.launchpad.net/ferramroberto/java/ubuntu/dists/precise/main/source/Sources  404 Not Found
        W:Failed to fetch http://ppa.launchpad.net/ferramroberto/java/ubuntu/dists/precise/main/binary-i386/Packages  404 Not Found
        E:Some index files failed to download. They have been ignored, or old ones used instead.

    Read the article

  • Is Information Technology really Engineering?

    - by RPK
    While travelling I met a mathematician who was sitting near me. In a discussion he said: "...there is nothing like engineering in IT or rather programming". In his view, true engineering is what Architecture is, and what Electrical and Mechanical are. It made me think, and I was puzzled. Part of my brain agreed, also because in the Indian Army there is no subject like Computer Engineering in the Engineering Corps; they don't consider programming to be engineering. This is what I heard a few years back; I don't know what the Indian Army thinks now. What are your views?

    Read the article

  • What blogging clients are available?

    - by jokerdino
    I regularly blog on both the Wordpress and Blogger platforms, and for me desktop clients are far more convenient than browser-based clients. When I used Windows, I was using a desktop blogging client called Windows Live Writer. Are there any blogging clients available for Ubuntu as alternatives?

    Features expected:
    - Multiple blog support
    - Post drafts to the blog
    - Save drafts locally
    - Add tags / categories
    - Upload media

    Read the article

  • Microsoft Sql Server driver for Nodejs - Part 2

    - by chanderdhall
    Nodejs, Sql server and Json response with Rest

    This post is part 2 of Microsoft Sql Server driver for Node.js. In this post we will look at the JSON responses from the Microsoft Sql Server driver for Node.js.

    Pre-requisites: If you have read Part 1 of the series, you should be good. We will be using a framework for REST within Node.js - Restify - but that needs no prior learning.

    Restify: Restify is a simple node module for building RESTful services. It is slimmer than Express. Express is a complete module that has everything you need to create a full-blown browser app. However, Restify does not have the additional overhead of templating, rendering, etc. that would be needed if your app has views. So, as the name suggests, it's an awesome framework for building RESTful services and is very light-weight.

    Set up - You can continue with the same directory or project structure we had in the previous post, or start a new one. Install restify using npm and you are good to go.

        npm install restify

    Go to Server.js and include Restify in your solution. Then create the server object using restify.createServer() - SLICK - ha?

        var restify = require('restify');
        var server = restify.createServer();
        server.listen(8080, function () {
            console.log('%s listening at %s', server.name, server.url);
        });

    Then make sure you provide a port for the server to listen at. The callback function is optional but helps with debugging. Once you are done, save the file, go to the command prompt, hit 'node server.js', and you should see the listening message logged by that callback. To test the server, go to your browser and type the address 'http://localhost:8080/' and oops, you will see an error. Why is that? Well, because we haven't defined any routes. Let's go ahead and create a route. To begin with I'd like to return whatever is typed in the url after my name, and the following code should do it:

        server.get('/ChanderDhall/:name', function respond(req, res, next) {
            res.end("hello " + req.params.name + "");
        });

    You can also avoid writing callbacks inline. Something like this:

        function respond(req, res, next) {
            res.end("Chander Dhall " + req.params.name + "");
        }
        server.get('/hello/:name', respond);

    Now if you go ahead and type http://localhost:8080/ChanderDhall/LovesNode you will get the response 'Chander Dhall loves node'. NOTE: Make sure your url has the right case as it's case-sensitive. You could have also typed it in as 'server.get('/chanderdhall/:name', respond);'

    Stored procedure: We've talked a lot about Restify now, but keep in mind the post is about being able to use Sql server with Node and return JSON. To see this in action, let's go ahead and create another route that returns a list of Employees from a stored procedure.

        server.get('/Employees', Employees);

    The following code will return a JSON response.

        function Employees(req, res, next) {
            res.header("Content-Type: application/json"); // Need to specify the Content-Type, which is JSON in our case.
            sql.open(conn_str, function (err, conn) {
                if (err) {
                    // Logs an error
                    console.log("Error opening the database connection!");
                    return;
                }
                console.log("before query!");
                conn.queryRaw("exec sp_GetEmployees", function (err, results) {
                    if (err) {
                        // Connection is open but an error occurs while executing the query

    What else can be done? Maybe create a formatter, or maybe even come up with a hypermedia type, but that may upset some pragmatists. Well, that's going to be a totally different discussion and is really not part of this series.

    Summary: We've discussed how to execute a stored procedure using the Microsoft Sql Server driver for Node. Also, we have discussed how to format and send out clean JSON to the app calling this API.

    Read the article

  • April Omnibus

    - by KKline
    I freely admit it - I'm a sluggard. I should be blogging a couple times per week and tweeting in between. But, for some unknown reason, April has been a tough month to get this in gear. Hence, I'm putting out an omnibus post to cover all of the stuff I've been up to, instead of the one-off's I usually post when I've got something new to mention. Isn't it funny how life gets in the way of the stuff we want and intend to do? As they say - "The road to hell is paved with good intentions", or was that...(read more)

    Read the article

  • Object oriented design importance

    - by user5507
    I started studying Object Oriented Design and Modelling using this book by James Rumbaugh. It uses a tool called the Object Modeling Technique (OMT). I have certain newbie questions. I searched the net, but couldn't get answers.

    1. The book is pretty old. I don't know why the school told me to learn this. I know OMT is a predecessor of the Unified Modeling Language (UML). So is it a waste?
    2. Do the concepts change very much when we move from OMT to UML?
    3. I know OMT has Object, Dynamic and Functional models. Wikipedia says UML is compatible with OMT and that UML is a model too. As per Wikipedia, the UML models are Static and Dynamic and they are represented by different diagrams like class, object, activity, sequence..... I couldn't find the equivalent of this in OMT.
    4. I read that there are many object oriented development methods like OMT, Booch, .... Which one is used by industry? Where could I get a comparison of different object oriented development methods?

    Read the article
