Search Results

Search found 9988 results on 400 pages for 'tv less in jersey'.


  • Count function on tree structure (non-binary)

    - by Spevy
    I am implementing a tree data structure in C# (based largely on Dan Vanderboom's generic implementation). I am now considering an approach for handling a Count property, which Dan does not implement. The obvious and easy way would be a recursive call that traverses the tree, happily adding up nodes (or, if you prefer, an iterative traversal with a queue). It just seems expensive. (I also may want to lazy-load some of my nodes down the road.) Alternatively, I could maintain a count at the root node: all children would traverse up to and/or hold a reference to the root, and update an internally settable count property on changes. This pushes the iteration problem to whenever I want to break off a branch or clear all children below a given node. That is generally less expensive, and puts the heavy lifting in what I think will be the less frequently called functions. It seems a little brute force, though, and that usually means exception cases I haven't thought of yet (or bugs, if you prefer). Does anyone have an example of an implementation that keeps a count for an unbalanced and/or non-binary tree structure, rather than counting on the fly? Don't worry about the lazy loading or the language; I am sure I can adjust the example to fit my specific needs. EDIT: I am curious about an example, rather than instructions or discussion. I know this is not technically difficult...
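
    A minimal sketch of the cached-count idea (my own illustration, not Dan Vanderboom's code): each node stores the size of its own subtree and pushes a delta up the parent chain whenever a child is attached or detached, so reading Count is O(1) and structural changes cost O(depth). It deliberately omits reparenting checks and lazy loading.

        using System.Collections.Generic;

        public class TreeNode<T>
        {
            private readonly List<TreeNode<T>> _children = new List<TreeNode<T>>();
            private int _count = 1; // this node plus all descendants

            public T Value { get; set; }
            public TreeNode<T> Parent { get; private set; }
            public int Count { get { return _count; } }
            public IReadOnlyList<TreeNode<T>> Children { get { return _children; } }

            public TreeNode(T value) { Value = value; }

            public TreeNode<T> AddChild(TreeNode<T> child)
            {
                // The child may already own a subtree, so propagate its whole size.
                _children.Add(child);
                child.Parent = this;
                PropagateDelta(child._count);
                return child;
            }

            public void RemoveChild(TreeNode<T> child)
            {
                if (_children.Remove(child))
                {
                    child.Parent = null;
                    PropagateDelta(-child._count);
                }
            }

            // Walk up to the root, adjusting every ancestor's cached count.
            private void PropagateDelta(int delta)
            {
                for (TreeNode<T> node = this; node != null; node = node.Parent)
                    node._count += delta;
            }
        }

    With this shape, breaking off a branch is a single RemoveChild call: the subtraction happens once at detach time instead of via a full traversal, which matches the "push the work to the less frequent operations" trade-off described above.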


  • SharePoint Saturday Huntsville Wrap Up

    - by Mark Rackley
    So, Cathy Dew (@catpaint1) and company put on a great SharePoint Saturday event this past weekend. I got to hang out with some old friends and meet some new ones. I’d list you all, but I’d undoubtedly miss someone and don’t want to offend anyone. Although I find it odd that I see @MossLover more now, since she moved to New Jersey, than when she lived next door in Kansas City… what’s up with that? Anyway, Cathy did a tremendous job organizing the event. Everything went smoothly and everyone had a great time. Maybe I can talk her into organizing the rest of SharePoint Saturday Ozarks on June 12th… you know that’s coming up, right? While you’re here, why not go ahead and register right now at http://spsozarks.eventbrite.com/. Yes, that was a shameless plug… I did my default presentation on “Wrapping Your Head Around the SharePoint Beast”. This continues to be my most popular presentation. I try to tweak it every time and I always have fun doing it. I get to pick on people and they pick on me back, but I always manage to learn something new when I present it. I had a great interactive crowd and they didn’t throw anything at me, so all in all I consider it a success. Thanks for coming if you attended! You can get the slides here: SharePoint Saturday Huntsville - Wrapping Your Head Around the SharePoint Beast. Next up for me is SharePoint Saturday DC on May 15th. Wow, this is going to be a huge event, with space for 1,500 attendees… no, that is not a typo! Stop me and say hi if you are able to make it!


  • How to determine the amount to spend per phrase on Adwords research?

    - by Anonymous -
    My company would like to start a PPC advertising campaign. Whilst I understand the concept and how to set everything up from a technical point of view, this is something I've never done before. Logically, we'd like to test out a wide range of keywords that we think would lead to conversions, which we've put together through brainstorming and with some help from Google's External Keyword Tool. A sub-question whilst I remember: am I correct in thinking that, in Google's keyword tool, keywords we expect to perform well that have low competition yet high monthly searches are good, since there will be fewer advertisers, meaning our bid per click will be lower? Is there a common benchmark or process for doing a round of tests with keywords? Should we wait for 100 clicks on each keyword, see which ones have led to the most sales (or rather, sales that are sustainable given the cost per click of that keyword), then drop the ones which aren't converting and put that budget onto the converting keywords? We realistically have a few hundred keywords/phrases we would like to test, but spending $100 per keyword/phrase is going to work out as quite an expensive test. It would be nice to be able to spend $5-10 per phrase, but I don't think the sample size would be great enough to determine anything usefully reliable. Another approach might be to set up all the keywords, and those that bring the most sales within x hours/days would be the ones we use. What is the common procedure with things like this? I know there are a plethora of companies that specialize in exactly this, but this is something we anticipate doing a lot in the future, so it would make sense to do it in house if at all possible.
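
    To put numbers on it, here is a rough back-of-envelope sketch in C# (the CPC, conversion rate and keyword count are assumptions for illustration, not benchmarks). It shows the total cost of a 100-clicks-per-keyword test and the statistical noise on a conversion rate measured from only 100 clicks:

        using System;

        class PpcTestMath
        {
            static void Main()
            {
                int keywords = 300;
                double avgCostPerClick = 1.00;   // assumed average CPC
                int clicksPerKeyword = 100;

                double totalBudget = keywords * avgCostPerClick * clicksPerKeyword;
                Console.WriteLine($"Total test budget: ${totalBudget:N0}");   // $30,000

                // Standard error of a proportion: sqrt(p(1-p)/n).
                // At a true conversion rate of ~2%, 100 clicks measures 2% +/- ~1.4%,
                // which is far too noisy to rank keywords against each other.
                double p = 0.02;
                double se = Math.Sqrt(p * (1 - p) / clicksPerKeyword);
                Console.WriteLine($"Std. error at {clicksPerKeyword} clicks: {se:P1}");
            }
        }

    The arithmetic backs up the intuition above: at $5-10 per phrase you would buy only a handful of clicks under these assumptions, so the noise on the measured conversion rate would swamp any real difference between keywords.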


  • Responsive Design: Which Framework Should I Use? CSS3 & HTML5

    - by Jayhal
    I've been looking for a suitable set of HTML5/CSS3 foundation files on which to start new projects. I started off piecing together my own files, but I believe I might be better served by finding a solid and fairly compatible (with me) CSS3/HTML5 framework and then tweaking the things that don't suit my own process. I'd love to find something responsive that includes aspects focusing on layout, type (horizontal and vertical baselines), form and interface components, and cross-browser issues; preferably built on something more than just a simple CSS reset, but one that does include rebuilding elements consistently across browsers for a clean working slate. Extra features like polyfills are great, as are good documentation and examples. So far, off the top of my head, I know of:

    - Skeleton
    - 1140 Grid
    - 320 & Up (plus BP)
    - HTML5 Boilerplate 2.0 and Mobile
    - Inuit.css
    - Less Framework
    - Fluir
    - Perkins.Less
    - A few WP themes

    Are there any great ones I don't know about? I work a lot in WP, and something that is easily incorporated (but also stands alone) is ideal. Plugins and a wide feature set, while maintaining the ability to cut things down when needed (flexibility), are also a big plus, as is a fast learning curve, since I want to start using whatever I find immediately. What are some of the better options you might be able to recommend? Systems or scripts, plugins, and other related tools are also welcome. Thanks!


  • Will an online degree get you a job that requires "CS or equivalent 4-year degree"? [on hold]

    - by qel
    I'm a nerdy slacker type who didn't get my life together till I was 30. I've had a real job for a couple of years doing C#/SQL. I've gotten several raises, but I'm making less than most developers, and the atmosphere is... not positive. Looking for a new job, I think my applications get thrown out because I don't have a degree. And I want to finish a Bachelor's just to feel like less of a loser. I have a lot of college credits from 1996-2003 and a low GPA, so I don't know if that's worth much. An online degree looks like a good option, but I just don't know what I should be looking at for online schools, because they all look like fake degrees. If they had programs equivalent to a real Comp Sci degree, I don't think they would have the weird-sounding names they do. University of Phoenix has a B.S./Information Technology-Software Engineering. DeVry has a B.S./Computer Engineering Technology program. But that's not CS, and most other things I see have even more fake-sounding names. Are these useless degrees? Some people say DeVry and UoP are acceptable; some people say they're a joke. I have enough experience now, though, that maybe all I'm missing is being able to check the box that I have a 4-year degree. Harvard Extension seems like a real degree, even if it isn't a real Harvard degree, but I'd have to live there at least 3 months, which kinda defeats the purpose of an online degree fitting around work.


  • Basic is Best

    - by Eric A. Stephens
    Fellow foodies will recognize the recent movement towards "farm-to-table" restaurants. These venues attempt to simplify their menus and source ingredients as close to the source as possible. I had the opportunity to dine at such a restaurant the other evening. I was gushing about the appetizer to my server when she described the preparation for the item and then punctuated her comments with "basic is best". I reminded my fellow enterprise architect diners that there was an architecture lesson in that statement. They rolled their eyes and chuckled, but they also knew I was right. I'm reminded of Frederick Brooks' book The Mythical Man-Month and his latest, The Design of Design. The former, a must-read, talks about complexity, but he refrains from damning all complexity. The world we live in, and the enterprises we strive to transform with enterprise architecture, are complicated organisms, much like the human body. But sometimes a simple solution is the best approach. Fewer applications (think: portfolio rationalization). Fewer components. Fewer lines of code. Whatever level of abstraction you are working at, less is more. I'm reminded of the enterprise architecture principle "Control Technical Diversity". At one firm I created pithy catchphrases for each principle; I named this one "Less is More". But perhaps another variation is what my server said the other night: "Basic is Best".


  • Filtering dates in a data view web part when using a web services data source

    - by Patrick Olurotimi Ige
    I was working on a data view web part recently and I had to filter the data based on dates. Since the data source was web services, I couldn't use the Offset trick which I blogged about earlier. When using web services to pull data in SharePoint Designer, you have to use XPath. So, for example, this is the variable that populates the rows:

        <xsl:variable name="Rows" select="/soap:Envelope/soap:Body/ddw1:GetListItemsResponse/ddw1:GetListItemsResult/ddw1:listitems/rs:data/z:row"/>

    To filter the date nodes, you need to add a predicate []. So you can do something like this:

        <xsl:variable name="Rows" select="/soap:Envelope/soap:Body/ddw1:GetListItemsResponse/ddw1:GetListItemsResult/ddw1:listitems/rs:data/z:row[ddwrt:FormatDateTime(string(@ows_Created),1033,'yyyyMMdd') &gt;= ddwrt:FormatDateTime(string(substring-after($fd,'#')),1033,'yyyyMMdd')]"/>

    For the filtering to work, the dates need to be formatted as yyyyMMdd, as above. One more thing you must have noticed is the $fd variable. I created this by adding a calculated column to the list, something like [Created]-2. So basically what the XPath does is: give me data only where the Created date is greater than or equal to $fd, i.e. 2 days less than the created date. Also note that when using web services in SharePoint Designer, if you try to use the default filtering you won't see "greater than" or "less than" in the comparison option list. :( Hope this helps.


  • Working as a contractor--questions part II

    - by universe_hacker
    This is a follow-up to this discussion: Working as a contractor--questions. I would still like more information on the following points, when working as a contractor as opposed to a direct employee:

    Housing: for short-term contracts (let's say 6 months or less) that are not in your home area, where and how do you search for short-term, flexible housing? Especially since an employer would typically want you to start immediately, so you don't have time to go out and explore. Also, would you typically look for furnished apartments, because the cost of transporting your own furniture for a few months is not justified?

    Work hours and pay: are contractors more strictly supervised (as far as getting specified work done) because they get paid by the hour? They are also supposed to get overtime pay (at a higher rate) if they work more than 40 hours per week; does this really happen? Or do they work unpaid extra hours just like many regular employees?

    Per diem: some potential employers have mentioned paying a "per diem", which is essentially a non-taxable daily allowance that is supposed to cover living expenses. This money gets subtracted from your per-hour rate, but the advantage is that you pay less tax. However, from the information I have seen, the per diem can only be paid if you maintain a "permanent" residence you intend to return to. Is this checked in practice, and if yes, how?


  • Pair Programming, for or against? [on hold]

    - by user1037729
    I believe it has many advantages over individual programming:

    Pros
    - By pairing senior with relatively junior staff, the more junior can get up to speed with both project and computing experience, and the senior will re-think the problem in order to communicate with the junior, thus re-checking his own thinking (rubber duck principle!).
    - At least 2 people will know about any single piece of work; if one person is away the other can cover, or if someone leaves the project, knowledge transfer is easier.
    - Two brains on a complex task are more effective: communication keeps the work flowing freely and provides redundancy in decision making.
    - Code is effectively reviewed as it is being written, with no need for a separate reviewing phase, which requires a context switch, since someone who has not been working on the piece in question would be required to understand and review the related code. Reviewing code on your own which you haven't written or architected is not fun, hence counter-productive.

    Cons
    - Less bandwidth for performing tasks: let's say we have 4 devs; pair programming requires 2 devs per task, so we would be doing 2 tasks concurrently as opposed to 4. I believe this "con" does not stand up, as the pair-programmed task would complete sooner and comes with a review built in for free! I.e. the pair-programmed task would be more efficient and thus free up resources earlier.
    - Less flexibility to chop and change tasks, as two developers are tied into a task; when flexibility is required this could be a problem.


  • When does a programmer know that a new job is not right?

    - by Mysterion
    I believe that the interview process is a selling of both parties: what can the employee offer the employer, and vice versa. Assume an individual has been careful in selecting their new employer (via thorough questioning in the interview process), but when they arrive at the job they find the employer has not been honest about certain aspects of it. Examples of this dishonesty could include:

    - The employee made it clear that technical excellence is an important factor, which was promised by the employer, but this is not fully delivered, or a good technical structure does not exist.
    - The employee stated they want to work on well-architected, short (let's say less than 1 year) projects, yet when they start they find they are placed on a poorly architected older project.
    - The employee was told of a pair-programming environment to get him up to speed on the project, but is left to his own devices/questioning on arrival.
    - The employee was promised a culture that encourages innovation and technical excellence, but finds that this is not the case (e.g. using technology for knowledge retention is laughed at).

    I know that a lot of famous developers feel that you make the place you work at. Is it realistic for a new employee with limited experience in the industry (say less than 5 years) to be able to join the company and change attitudes, or even challenge the employer on the perceived dishonesty? Should they stay in this job or cut their losses?


  • What does SVN do better than git?

    - by doug
    No question that the majority of debates over programmer tools distill to either personal choice (by the user) or design emphasis, i.e., optimizing the design for particular use cases (by the tool builder). Text editors are probably the most prominent example: a coder who works on Windows at work and codes in Haskell on a Mac at home values cross-platform support and compiler integration, and so chooses Emacs over TextMate, etc. It's less common that a newly introduced technology is genuinely, demonstrably superior to the extant options. I wonder if this is in fact the case with version-control systems, in particular centralized VCS (CVS, SVN) versus distributed VCS (git, hg). I used SVN for about five years, and SVN is currently used where I work. A little less than three years ago, I switched to git (and GitHub) for all of my personal projects. I can think of a number of advantages of git over Subversion (which for the most part abstract to advantages of distributed over centralized VCS), but I cannot think of one contra example: some task (that's relevant and arises in a programmer's usual workflow) that Subversion does better than git. The only conclusion I have drawn from this is that I don't have any data, not that git is better, etc. My guess is that such counter-examples exist, hence this question.


  • Learning a new concept - write from scratch or use frameworks?

    - by Stu
    I have recently been trying to learn about MVVM and all of the associated concepts such as repositories, mediators, and data access. I decided that I would not use any frameworks for this, so that I could gain a better understanding of how everything worked. I'm beginning to wonder if that was the best idea, because I have hit some problems which I am not able to solve, even with the help of Stack Overflow!

    Writing from scratch: I still feel that you have a much better understanding of something when you have been in the guts of it than if you stay at a higher level. The other side of that coin is that you are in the guts of something you don't fully understand, which will lead to bad design decisions. This then makes it hard to get help, because you will create unusual scenarios which are less likely to occur when you are working within the confines of a framework. I have found that there are plenty of tutorials on the basics of a concept, but very few that take you all the way from novice to expert. Maybe I should be looking at a book for this?

    Using frameworks: the biggest motivation for me to use frameworks is that they are much more likely to be used in the workplace than a custom-rolled solution. This can be quite a benefit when starting a new job if it's one less thing you have to learn. I feel that there is much better support for a framework than for a custom solution, which makes sense; many more people are using the framework than the solution that you created. The level of help is much wider as well, from basic questions to really specific, detailed questions.

    I would be interested to hear other people's views on this. When you are learning something new, should you/do you use frameworks or not? Why? If it's a combination of both, when do you stop one and move on to the other?


  • Xubuntu 12.04: screen regularly stops refreshing; refreshing resumes after un-/re-maximizing a window

    - by user68477
    My screen frequently stops refreshing completely. I can make it resume by un-maximizing/re-maximizing a window or by switching workspace (the un-/re-maximizing works every time; switching workspace sometimes has to be done a couple of times). The immediate impression is that the system is frozen: there is apparently no reaction to anything I do, but interestingly the window title bar will change if I switch application (e.g. alt+tab) or browse through folders. I saw an identical issue in Ubuntu 10.04, though a lot less frequently; I never saw this in Ubuntu 12.04 (which I had been using for the previous 4-5 months). After switching to Xubuntu I'm seeing this again, and more frequently. The specific reason I'm not sure this is a bug: I installed gnome-control-center, which dragged in tons of packages, while trying to fix a dual-screen setup. I believe the issue surfaced after this. I later meticulously removed (purged) every package from this batch, in the hope that every setting would also be removed, but the issue has persisted. Another issue appeared at the same time; it may be totally unrelated, but it feels as if it is the same basic issue: the screen resolution of the greeter became less than the expected 1680x1050, and often after login there is just a blank and totally unresponsive wallpaper without a panel, so I have to force a reboot. When the login is successful, it is very clear that the system works hard to determine the correct resolution, which is achieved after a few blinks to black screens.

    My questions:
    1) Is this a settings issue or a bug?
    2) How do I begin to research the issue? Could I perhaps somehow reset Xubuntu/Xfce to defaults?
    3) If this is a bug, where would be the most appropriate place to report it?

    System: ThinkPad T500, ATI Radeon HD 3650

    $ fglrxinfo
    display: :0.0  screen: 0
    OpenGL vendor string: Advanced Micro Devices, Inc.
    OpenGL renderer string: ATI Mobility Radeon HD 3650
    OpenGL version string: 3.3.11627 Compatibility Profile Context

    $ uname -a
    Linux srvname 3.2.0-32-generic #51-Ubuntu SMP Wed Sep 26 21:33:09 UTC 2012 x86_64 x86_64 x86_64 GNU/Linux

    Xfce 4.10, Compiz 0.9.7.8


  • JetBrains release WebStorm 5.0

    - by TATWORTH
    At http://www.jetbrains.com/webstorm/whatsnew/index.html?WS50ROW, JetBrains have announced the release of WebStorm 5.0, an IDE that brings to JavaScript, CSS and LESS the ease of code writing that ReSharper gives you in VB.NET and C#. (There are some more details at http://blog.jetbrains.com/webide/2012/08/liveedit-plugin-features-in-detail/.) Code completion in JavaScript, CSS and LESS is a very welcome feature, and I look forward to trying out WebStorm. The download at http://www.jetbrains.com/webstorm/download/index.html comes with a free 30-day trial. Price information is at http://www.jetbrains.com/webstorm/buy/index.jsp; you should note that if you are an open-source developer, you can apply for a free license. The price of a personal license at £23 + VAT is a no-brainer, and a commercial license would pay for itself in a few days of the increased productivity that this tool brings. WebStorm's LiveEdit feature currently requires Google Chrome to run. Like ReSharper, it appears to be a very able tool. It includes features such as:

    - XSLT debugging
    - JSLint for checking for JavaScript errors
    - JavaScript debugging
    - JavaScript unit testing (including code coverage)
    - JavaScript folding regions
    - CoffeeScript support

    Well, I suggest that you try WebStorm 5.0.


  • jQuery validation of dates [migrated]

    - by james
    I'm trying to get my form to validate... basically it's working, but a little bit too well. I have two text boxes: one is a start date, the other an end date, in the format mm/dd/yyyy.

    - If the start date is greater than the end date... there is an error.
    - If the end date is less than the start date... there is an error.
    - If the start date is less than today's date... there is an error.

    The only thing is, when I correct the error, the error warning is still there. Here is my code:

        // Validate date ranges
        // (Note: this handler only ever adds the error state; nothing below
        // clears it once the dates become valid again, which is why the
        // warning persists after a correction.)
        if ($(this).val() != '' && dates.not(this).val() != '') {
            if ($(this).hasClass("FromCal")) {
                if (new Date(testDate) > new Date(otherDate)) {
                    addError($(this));
                    $('.flightDateError').text('* Start date must be earlier than end date.');
                    isValid = false;
                    return;
                }
            } else {
                if (new Date(testDate) < new Date(otherDate)) {
                    addError($(this));
                    $('.flightDateError').text('* End date must be later than start date.');
                    return;
                }
            }
        }

    and here are the two text boxes:

        <div id="campaign_start" style="display: inline-block">
            <label class="date_range_label">from:</label>
            <asp:TextBox ID="FromCalTbx" runat="server" Width="100px" CssClass="FromCal editable float_left required" />
        </div>
        <div id="campaign_end" style="display: inline-block">
            <label class="date_range_label">to:</label>
            <asp:TextBox ID="ToCalTbx" runat="server" Width="100px" CssClass="float_left optional"/>
        </div>

    PS - testDate is the start date, otherDate is the end date.


  • Make GNOME 3 work as I expect! [closed]

    - by gnome
    Possible Duplicate: How to revert to GNOME Classic? I don't like Unity, so I tried Ubuntu GNOME Remix 12.10, but I feel disappointed with it too:

    - No right-click on the desktop (I know there is a way to enable it).
    - No icons on the desktop.
    - I want to have "Applications", "Places" and "System" on the top panel as before. It just takes many more steps to get where I want to go with the default GNOME 3 settings.
    - It seems to me that for some applications, when a window is maximized, there is no close button or "restore to previous size" button. In fact, it has no title bar in this case, so I can't drag the title bar to restore its previous size and move the window.

    So I have 3 questions here:

    1. (For those who like the way GNOME worked before.) What differences between GNOME 2 and GNOME 3 make you prefer the way things worked in GNOME 2, and how do you change GNOME 3 to work the GNOME 2 way?
    2. I don't like the appearance of GNOME 3. For those who have tried to make GNOME 3 look good, could you share how you customized its appearance?
    3. (This is for chatting only, not a real question.) Why do all the main OSes (Windows, Mac, some Linux) want to integrate desktop and tablet with the focus on tablet? (Which means that when the OS is installed on a desktop, it still has a tablet appearance, which makes things strange.) Do they think the personal desktop PC will become less and less common?


  • Can anyone point me to some open-source DirectX rendering engines or frameworks? [on hold]

    - by Jim
    I'm completely new to graphics API programming, but not at all new to the theory and principles of operation of game engines and rendering engines. That being said, I want to do some experiments rendering very dense geometry scenes in a basic rendering engine or game engine. I don't need a lot of bells and whistles. What I need is enough control that I can implement my own scene graph algorithms and control the rendering pipeline very specifically. My ideal candidate would be either a rendering engine or a game engine with a modular design that might be ready to go out of the box, but would be simple enough that I can rip out some of the guts of the rendering management and implement my own if needed. It's a tough call, because I'm right at the level where it's almost better to go from scratch, but there's no sense in having to build every single basic thing such as hierarchical transforms, etc. I just want to work on rendering optimization to push dense geometry for maximum FPS. Does anyone have a suggestion for an engine or basic framework to use? I requested DirectX in my title because I figured it would likely be better supported and less likely for me to run into some obscure, less-documented problem, but OpenGL might be acceptable if the recommended framework is definitely better than my other options. EDIT: I should add that I really want GPU tessellation support (to add to the density of geometry detail).


  • One site being in a subdirectory of another. Does Google count this against you?

    - by Mick
    I have created two similar websites (relating to monetary systems). So far, one appears to be loved by Google and the other hated, and I'm struggling to work out why. This is a mystery to me because both sites were created by me with the same design philosophy, both in pure HTML, and both are packed to the rafters with references to, and information about, their respective subjects. One issue I'm worried may be the cause is the location of the sites. I got a web hosting package from hostmonster.com for the successful one, but the less liked one is just an "add-on" which sits in a subdirectory of the successful one. I wonder if Google somehow detects this and treats it as a less significant website? EDIT: Just to clarify, even though one site is an add-on that sits in a subdirectory of the other, the URL is arranged to look like a root; i.e. the unpopular site can be accessed directly with a simple www.myunpopularsite.com name, without specifying any subdirectory.


  • The Freemium-Premium Puzzle

    The more time I spend thinking about the value of information, the more I find that digitalizing information has significantly changed the 'information markets', potentially in an irreversible manner. The graph at the bottom outlines my current view. The existing business models tend to be the same in the digital and analogue information worlds, i.e. revenue is derived from a combination of consumers' payments and advertisement. Even monetizing 'meta-information' such as search engines isn't new; just think of the once popular 'Who's Who'. What really changed is the price-value ratio: the curve is pushed down, closer to the axis. You pay less for the same, or often even get more for less. If you recall the capabilities I described in 'relevance of information', you will see that there are many additional features available for digital content compared to analogue content. I think this is a good 'blue ocean strategy', combining existing capabilities in a new way (Kim, W.C. & Mauborgne, R. (2005) Blue Ocean Strategy. Boston: Harvard Business School Publishing). In addition, the different channels of digital information distribution significantly change the value of information; I will touch on this in one of my next blogs. Right now, many information providers have started to offer 'freemium' content through digital channels, hoping to get a premium for the 'full' content. Offering no freemium content seems to take them out of business, because they are then apparently no longer visible in today's most relevant channels of information consumption. But the more freemium content is provided, the lower the premium gets; a truly puzzling situation. To make it worse, channel providers increasingly regard information as a value-adding and differentiating activity. Maybe new types of exclusive, strategic alliances will solve the puzzle, introducing new types of 'gate-keepers', which - to me - somehow does not match the spirit of the WWW and generation Y's perception of information consumption and exchange.


  • Domain Specific Software Engineering (DSSE)

    Domain Specific Software Engineering (DSSE) holds that creating every application from nothing is not advantageous when existing systems can be leveraged to create the same application in less time and at less cost. This belief is founded in the idea that forcing applications to recreate existing functionality is unnecessary. Why would we build a better wheel when we already have four really good and proven wheels? DSSE suggests that we take an existing wheel and just modify it to fit an existing need of a system. This allows developers to leverage existing codebases so that more time and expense are focused on creating more usable functionality, rather than just creating more functionality. As an example, how many functions do we need to create to send an email, when one can be created and used by all other applications within the existing domain?

    The key factors of DSSE are Domain, Technology, and Business.

    A Domain in DSSE is used to control the problem space for a project. This control allows applications to be developed within specific constraints that focus development in a specific direction.

    Technology in DSSE offers a variety of technological solutions to be applied within a domain. Examples: tools, patterns, architectures & styles, legacy systems.

    Business is the motivator for any organization to use DSSE in its software development process. Business reasons to use DSSE: minimize costs, maximize market and profits.

    When these factors are used in combination, additional factors and benefits can be found:

    - Domain + Business = Corporate core competencies: domain expertise improved by market and business expertise.
    - Domain + Technology = Application family architectures: all possible technological solutions to problems in a domain, without any business constraints.
    - Business + Technology = Domain-independent infrastructure: tools and techniques for building systems independent of all domains.
    - Domain + Business + Technology = Domain-specific software engineering: applies technology to domain-related goals in the context of business and market expertise.


  • How do you plan your asynchronous code?

    - by NullOrEmpty
    I created a library that is an invoker for a web service somewhere else. The library exposes asynchronous methods, since web service calls are a good candidate for that. At the beginning everything was just fine; I had methods with easy-to-understand operations in a CRUD fashion, since the library is a kind of repository. But then the business logic started to become complex, and some of the procedures involve chaining many of these asynchronous operations, sometimes with different paths depending on the result value, etc., etc. Suddenly, everything is very messy: stopping the execution at a breakpoint is not very helpful, and finding out what is going on, or where in the process timeline you have stopped, becomes a pain... Development becomes less quick, less agile, and catching those bugs that happen once in 1000 times becomes hell. From the technical point of view, a repository that exposes asynchronous methods looked like a good idea, because some persistence layers could have delays, and you can use the async approach to make the most of your hardware. But from the functional point of view, things became very complex, and considering those procedures where a dozen different calls were needed... I don't know the real value of the improvement. After reading about the TPL for a while, it looked like a good idea for managing tasks, but the moment you have to combine them and start to reuse existing functionality, things become very messy. I have had a good experience using it for very concrete scenarios, but a bad experience using it broadly. How do you work asynchronously? Do you use it always? Or just for long-running processes? Thanks.
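
    For what it's worth, one way to keep such chains debuggable is to compose the TPL tasks with async/await instead of nested continuations, so branching on intermediate results stays ordinary control flow and a breakpoint lands at the exact step in the timeline. A minimal sketch (the repository interface and types here are hypothetical stand-ins, not the poster's library):

        using System.Collections.Generic;
        using System.Threading.Tasks;

        // Hypothetical stand-in for the web-service-backed repository.
        public interface IOrderRepository
        {
            Task<Customer> GetCustomerAsync(int id);
            Task<List<Order>> GetOrdersAsync(int customerId);
        }

        public class Customer { public int Id; }
        public class Order { public decimal Total; }

        public class OrderService
        {
            private readonly IOrderRepository _repo;
            public OrderService(IOrderRepository repo) { _repo = repo; }

            // Reads top to bottom; stepping through in the debugger shows
            // exactly where in the process timeline execution has stopped.
            public async Task<decimal> TotalSpendAsync(int customerId)
            {
                var customer = await _repo.GetCustomerAsync(customerId);
                if (customer == null)
                    return 0m;                  // branching on an intermediate result

                var orders = await _repo.GetOrdersAsync(customer.Id);
                decimal total = 0m;
                foreach (var order in orders)
                    total += order.Total;
                return total;
            }
        }

    With ContinueWith chains, each step is a separate lambda and the call stack at a breakpoint tells you little; with await, the same tasks compose into code that reads sequentially.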


  • Content-light website and Google - telling Google it's a listings site (as opposed to a shop, reviews or restaurants)

    - by Doug Firr
    I have a listings-style website. Due to the nature of a listings site, it is content light: each page is typically less than 50 words, but there are many pages. The site in question has had a ton of media coverage and so has some great inbound links from places like Wired, Fast Company, the Canadian Broadcasting Corporation and many, many other bloggers, media websites and recycling-related niche authors (it's a recycling site). But Google really ignores it: traffic from search is very, very low - less than 5% of all traffic. I know that using markup you can tell Google whether your site is a restaurant, article, review, shop, local business or one of a few other categories (https://www.google.com/webmasters/markup-helper/u/0/). Is there a way to tell Google that my site is a listings site? I suspect, but do not know for sure, that part of the problem is that Google simply does not know what my site is. It's a crowdmap where people post curb alerts. The information is useful to people, but it is presented in a short, concise way: a pin on a map, a picture and a short description. Adding anything further is not necessary for the site's intended purpose. First question: how best to tell the search engines what my site is - listings, and not some spammy website? And any recommendations for improving the site's search presence? You can take a look here if interested: http://tinyurl.com/lxg4hn7


  • New Workstation – Lenovo W530 Core i7 32GB 256GB SSD Win8Pro

    - by Brian Lanham
    So I pretty much have my new machine up and running full-time. I will still have to hit my old workstation for some things, but I am more or less working on my new machine. It's really fast. And Bret was right: I'm not so far using all the RAM. 16 would have been enough, but as @CodeMonkeyJava says, "go big or go home". Windows 8 is... interesting. So far I still seem to do most of my work in the "Desktop". However, I like the Store concept and I like the Metro UX. Live tiles are also nice, and I really like how I can switch between Desktop and Metro easily. Overall I think Microsoft has done a great job of combining the needed experience for touch and mouse. My overall Windows 8 rating is 5.9 because of the video card; otherwise I'm hitting 7.8. The system boots from cold in about 11 seconds, performs a complete shutdown in 4.7 seconds, and wakes from sleep in less than 1 second. VS 2012 starts and restarts almost instantly; in fact, I find myself staring at the start page without realizing it. Build time doesn't seem to be significantly reduced, but it is faster. I already feel reinvigorated for work with this new machine, and I'm looking forward to the performance.


  • An algorithm for finding a subset matching criteria?

    - by Macin
    I recently came up with a problem which I would like to share some thoughts about with someone on this forum. It relates to finding a subset. In reality it is more complicated, but I have tried to present it here using some simpler concepts. To make things easier, I created a conceptual DB model: let's assume it is a DB for storing recipes. A recipe can have many instruction steps and many ingredients. Ingredients are stored in a cupboard, and we know how much of each ingredient we have. Now, when we create a recipe, we have to define how much of each ingredient we need. When we want to use a recipe, we would just check whether the required amount is less than the available amount for each product, and then decide if we can cook the dinner: if the amount required for at least one ingredient is greater than the available amount, the recipe cannot be cooked. A simple SQL query gets the result. This is straightforward, but I'm wondering how to work the problem when it is stated the other way round, i.e. how to find the recipes which can be cooked using only the ingredients that are available? I hope my explanation is clear, but if you need any more clarification, please ask.
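
    For the reverse direction, the check inverts cleanly: a recipe is cookable when every one of its ingredient requirements is covered by the cupboard. A minimal LINQ sketch (class and property names are assumptions, not the asker's schema):

        using System.Collections.Generic;
        using System.Linq;

        public class RecipeIngredient { public int ProductId; public double AmountRequired; }
        public class Recipe { public string Name; public List<RecipeIngredient> Ingredients; }

        public static class RecipeFinder
        {
            // cupboard maps ProductId -> amount available.
            public static IEnumerable<Recipe> Cookable(
                IEnumerable<Recipe> recipes, IDictionary<int, double> cupboard)
            {
                return recipes.Where(r => r.Ingredients.All(i =>
                    cupboard.TryGetValue(i.ProductId, out var available)
                    && available >= i.AmountRequired));
            }
        }

    The same shape works as SQL: select the recipes for which NOT EXISTS an ingredient row whose required amount exceeds the matching cupboard amount (treating a missing cupboard row as unavailable).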


  • Series On Embedded Development (Part 1)

    - by user12612705
    This is the first in a series of entries on developing applications for the embedded environment. Most of this information is relevant to any type of embedded development (and even to desktop and server development too), not just Java. This information is based on a talk Hinkmond Wong and I gave at JavaOne 2012 entitled Reducing Dynamic Memory in Java Embedded Applications. One thing to remember when developing embedded applications is that memory matters. Yes, memory matters in desktop and server environments as well, but there's just plain less of it in embedded devices. So I'm going to be talking about saving this precious resource as well as another precious resource, CPU cycles... and a bit about power too. CPU matters too, and again, in embedded devices, there's just plain less of it. What you'll find, no surprise, is that there's a trade-off between performance and memory: to get better performance, you need to use more memory, and to save more memory, you need to use more CPU cycles. I'll be discussing three memory reduction categories:

    - Optionality, both build-time and runtime. Optionality is about providing options so you can get rid of the stuff you don't need and include the stuff you do need.
    - Tunability, which is about providing options so you can tune your application by trading performance for size, and vice versa.
    - Efficiency, which is about balancing size savings with performance.

