Search Results

Search found 2492 results on 100 pages for 'dash tom bang'.


  • Convert Date MMDDYY to Date for database Insert

    - by lesponce
    I've got an array with a date set as 071712: no slash characters in the date, no dashes, nothing, just plain 071712 (coming from a text file). I need to convert the date so I can include it in a SQL Server insert statement; I'm calling a stored procedure for the insert. So far I have this, which is not working: DateTime date = Convert.ToDateTime(fileLines[4]); (date will be used as a parameter for the stored procedure).
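
    One way to handle this (a sketch, not code from the original question) is to tell the framework exactly what format the text is in, rather than relying on Convert.ToDateTime to guess; the "MMddyy" format string below is an assumption based on the 071712 sample:

        using System;
        using System.Globalization;

        // fileLines[4] is assumed to hold a date such as "071712" (MMddyy).
        string raw = fileLines[4].Trim();
        DateTime date = DateTime.ParseExact(raw, "MMddyy", CultureInfo.InvariantCulture);

        // The resulting DateTime can then be passed as the stored procedure parameter,
        // e.g. command.Parameters.AddWithValue("@SomeDate", date);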

    Read the article

  • add tag at specific div index

    - by stackover
    var element = document.getElementById("divname"); var pagerHtml = '<a href="#" class="prev">&lt; Prev</a>'; for (var page = 1; page <= this.pages; page++) { pagerHtml += '<a href="#" class="' + page + '">' + page + '</a> '; } pagerHtml += '<a class="dotline" style="display:none;">........</a>'; pagerHtml += '<a href="#" class="next">Next &gt;</a>'; element.innerHTML = pagerHtml; if (this.pages > 9) { for (var i = this.buffer; i < (this.pages - this.buffer); i++) { $("." + (i + 1)).hide(); } } I want to add the "dotline" class so that when the middle pages are hidden I can call $(".dotline").show() and the pager reads 1, 2, ....., 8, 9, 10. Any ideas would be appreciated.

    Read the article

  • Sneak peek at next generation Three MiFi unit – Huawei E585

    - by Liam Westley
    Last Wednesday I was fortunate to be invited to a sneak preview of the next generation Three MiFi unit, the Huawei E585. Many thanks to all those who posted questions both via this blog or via @westleyl on Twitter. I think I made sure I asked every question posed to the MiFi product manager from Three UK, and so here are the answers you were after. What is a MiFi? For those who are wondering, a MiFi unit is a 3G broadband modem combined with a WiFi access point, providing 3G broadband data access to up to five devices simultaneously via standard WiFi connections. What is different? It appears the prime task of enhancing the MiFi was to improve the user experience and user interface, both in terms of the device hardware and within the management software to configure the device.  I think this was a very sensible decision as these areas had substantial room for improvement. Single button operation to switch on, enable WiFi and connect to 3G. Improved OLED display (see below), replacing the multi coloured LEDs; including signal strength, SMS notifications, the number of connected clients and data usage. Management is via a web based dashboard accessible from any web browser; this is a big win for those running Linux or Mac OS/X, for iPad users and, for me, as I can now configure the device from Windows 7 64-bit. Charging is via micro USB, the new standard for small USB devices; you cannot use your old charger for the new MiFi unit. Automatic reconnection when regaining a signal. Improved charging time, which should allow recharging of the device when in use. Although subjective, the black and silver design does look more classy than the silver and white plastic of the original MiFi. What is the same? Virtually the same size and weight. The battery is the same unit as the original MiFi, so you’ll have a handy spare if you upgrade. Data plans remain the same as the current MiFi, so the cheapest price for upgraders will be £49 pay as you go. It still only works on 3G networks, with no fallback to GPRS or EDGE. There is no specific upgrade path for existing Three customers, either from the dongle or from the original MiFi. My opinion: I think Three have concentrated on the correct areas of usability and user experience rather than trying to add new whizz bang technology features which aren’t of interest to mainstream users. The one button operation and the improved device display will make it much easier to use when out and about. If the automatic reconnection proves reliable that will remove a major bugbear that I experienced the previous evening when travelling on the First Great Western line from Paddington to Didcot Parkway.  The signal was repeatedly lost as we sped through tunnels and cuttings, and without automatic reconnection it was a real pain to keep pressing the data button on the MiFi to re-establish my data connection. And finally, the web based dashboard will mean I no longer need to resort to my XP based netbook to configure the SSID and password. My everyday laptop runs Windows 7 64-bit, which appears to confuse the older 3 WiFi manager which cannot locate the MiFi when connected.
Links to other sites, and other images of the device: good first impressions from Ben Smith, http://thereallymobileproject.com/2010/06/3uk-announce-a-new-mifi-with-a-screen/ and a round-up of other sneak preview posts, http://www.3mobilebuzz.com/2010/06/11/mifi-round-two-your-view/ Pictures: a comparison of the old MiFi device next to the new device, complete with OLED display and the Huawei logo now a prominent feature on the front of the device; one of my fellow bloggers' Linux based netbook, showing off the web based dashboard complete with the Text messages panel to manage SMS; and, finally, my blog sub title printed onto a cup cake.

    Read the article

  • Subscribable World Cup 2010 Calendar

    - by jamiet
    I bang on quite a lot on this blog about ways in which data can get published over the web, and one of the most interesting ways, in my opinion, of publishing data in a structured manner that is well understood is to use the iCalendar specification. There isn’t much information in the world that doesn’t have some concept of “when”, so iCalendar is a great way of distributing that information. You have probably used iCalendar at some point without even knowing about it. All files with a .ics suffix are iCalendar format files and that is why you can happily import them into Outlook, Hotmail Calendar, Google Calendar etc… where they can be parsed and have the semantic data (when, where and who) extracted from them. Importing of iCalendar format data is really only half the trick though; in my opinion the real value of iCalendar-formatted calendars is the ability to subscribe to them. Subscribing has a simple benefit over importing but that single benefit is of massive importance: a subscriber to an iCalendar calendar can periodically check to see if any updates have been made and, if they have, automatically update the local copy. The real benefit to the user is the productivity gain – a single update to an iCalendar means that all subscribers are automatically made aware of the change and there is zero effort on the part of the subscriber; as my former colleague Howard van Rooijen is fond of saying, “work smarter not harder” – nowhere is this edict more ably demonstrated than subscribing versus importing of calendars. If you want to read some more thoughts about iCalendar then go and read my past blog post Calendar syndication - My big hope for 2009's breakthrough technology, or better still go and seek out Jon Udell, who speaks very authoritatively on the issue of iCalendar. With this subject of iCalendar on my mind I was interested to discover (via Steve Clayton’s blog post Download the world cup fixtures) that the BBC had made a .ics file available containing all of the matches in the upcoming World Cup. As you can probably guess, this was a file that was made available so that it could be imported into your calendar of choice. It had one obvious downside though: right now nobody knows who is going to be playing in the knock-out stages, so the calendar has no teams named after 25th June. How much more useful would this calendar have been if the BBC had made it possible to subscribe to the calendar instead? The calendar could then be updated with the teams for the knock-out stages when they are known, and every subscriber would have a permanently up-to-date record of all the fixtures in their calendar. Better still, the calendar could be updated with match results as well, or perhaps even a match report from the BBC sport pages; when calendars are made subscribable a sea of opportunity opens up for distribution of information. So with that in mind I have decided to go one better than the BBC. I have imported their .ics into a brand new Hotmail calendar and made it publicly available at the following URLs: HTML http://cid-dc1ed121af0476be.calendar.live.com/calendar/World+Cup+2010/index.html iCalendar webcal://cid-dc1ed121af0476be.calendar.live.com/calendar/World+Cup+2010/calendar.ics The link you’re really interested in is the second one - click on that and it should open up in your calendar software of choice. Or, if you want to view it in an online calendar such as Hotmail Calendar or Google Calendar, copy and paste that URL into the appropriate place.
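
    For anyone who has never looked inside a .ics file, each fixture boils down to a VEVENT block like the hand-written example below (illustrative only, not copied from the BBC file); the when/where/what fields are the semantic data that calendar clients parse out:

        BEGIN:VCALENDAR
        VERSION:2.0
        PRODID:-//Example//World Cup 2010//EN
        BEGIN:VEVENT
        UID:groupa-match1@example.com
        DTSTART:20100611T140000Z
        DTEND:20100611T160000Z
        SUMMARY:South Africa v Mexico (Group A)
        LOCATION:Johannesburg
        END:VEVENT
        END:VCALENDAR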
I shall endeavour to keep the calendar updated throughout the World Cup, and even if I don’t you’re no worse off than if you had imported the BBC’s .ics file, so why not give it a try? If I do keep it up to date then you will have a permanent record of the 2010 World Cup available in your calendar. Forever. If you have your calendar synced to your smartphone then you’ll be carrying match reports around with you without having to do a single thing. Surely that’s worth a quick click, isn’t it? If you have any thoughts let me have them in the comments below. Thanks for reading. @Jamiet

    Read the article

  • maintaining a growing, diverse codebase with continuous integration

    - by Nate
    I am in need of some help with the philosophy and design of a continuous integration setup. Our current CI setup uses buildbot. When I started out designing it, I inherited (well, not strictly, as I was involved in its design a year earlier) a bespoke CI builder that was tailored to run the entire build at once, overnight. After a while, we decided that this was insufficient, and started exploring different CI frameworks, eventually choosing buildbot. One of my goals in transitioning to buildbot (besides getting to enjoy all the whiz-bang extras) was to overcome some of the inadequacies of our bespoke nightly builder. Humor me for a moment, and let me explain what I have inherited. The codebase for my company is almost 150 unique C++ Windows applications, each of which has dependencies on one or more of a dozen internal libraries (and many on 3rd party libraries as well). Some of these libraries are interdependent, and have depending applications that (while they have nothing to do with each other) have to be built with the same build of that library. Half of these applications and libraries are considered "legacy" and unportable, and must be built with several distinct configurations of the IBM compiler (for which I have written unique subclasses of Compile), and the other half are built with Visual Studio. The code for each compiler is stored in two separate Visual SourceSafe repositories (which I am simply handling using a bunch of ShellCommands, as there is no support for VSS). Our original nightly builder simply took down the source for everything, and built stuff in a certain order. There was no way to build only a single application, or pick a revision, or to group things. It would launch virtual machines to build a number of the applications. It wasn't very robust, it wasn't distributable, and it wasn't terribly extensible. I wanted to be able to overcome all of these limitations in buildbot. The way I did this originally was to create entries for each of the applications we wanted to build (all 150ish of them), then create triggered schedulers that could build various applications as groups, and then subsume those groups under an overall nightly build scheduler. These could run on dedicated slaves (no more virtual machine chicanery), and if I wanted I could simply add new slaves. Now, if we want to do a full build out of schedule, it's one click, but we can also build just one application should we so desire. There are four weaknesses of this approach, however. One is our source tree's complex web of dependencies. In order to simplify config maintenance, all builders are generated from a large dictionary. The dependencies are retrieved and built in a not-terribly-robust fashion (namely, keying off of certain things in my build-target dictionary). The second is that each build has between 15 and 21 build steps, which is hard to browse and look at in the web interface, and since there are around 150 columns, takes forever to load (think from 30 seconds to multiple minutes). Thirdly, we no longer have autodiscovery of build targets (although, as much as one of my coworkers harps on me about this, I don't see what it got us in the first place). Finally, the aforementioned coworker likes to constantly bring up the fact that we can no longer perform a full build on our local machine (though I never saw what that got us, either, considering that it took three times as long as the distributed build; I think he is just paranoically phobic of ever breaking the build).
Now, moving to new development, we are starting to use g++ and subversion (not porting the old repository, mind you - just for the new stuff). Also, we are starting to do more unit testing ("more" might give the wrong picture... it's more like any), and integration testing (using python). I'm having a hard time figuring out how to fit these into my existing configuration. So, where have I gone wrong philosophically here? How can I best proceed forward (with buildbot - it's the only piece of the puzzle I have license to work on) so that my configuration is actually maintainable? How do I address some of my design's weaknesses? What really works in terms of CI strategies for large, (possibly over-)complex codebases?

    Read the article

  • Top 5 Mobile Apps To Keep Track Of Cricket Scores [ICC World Cup]

    - by Gopinath
    The ICC World Cup 2011 started with a bang today, and the first match between India and Bangladesh was a cracker. India thrashed Bangladesh by a huge margin, thanks to Sehwag scoring an entertaining 175 runs off 140 balls. At the moment it's very clear that the whole of India is gripped by cricket fever, and so are the rest of the fans across the globe. A couple of days ago we blogged about how to watch live streaming of the ICC cricket world cup online for free, as well as the top 10 websites to keep track of live scores on your computer. What about tracking live cricket scores on mobile phones? Here is our guide to the top mobile apps available for Symbian (Nokia), Android, iOS and Windows mobiles. By the way, we are covering only free apps in this post. Why waste money when free apps are available? SnapTu – Symbian Mobile App SnapTu is a multi-feature application that lets you track live cricket scores, read the latest news and check stats published on CricInfo. SnapTu has a tie-up with CricInfo, and accessing all of the CricInfo website on your mobile is very easy. Along with live scores, SnapTu also lets you access your Facebook, Twitter and Picasa on your mobile. This is my favourite application to track cricket on Symbian mobiles. Download SnapTu for your mobiles here Yahoo! Cricket – Symbian & iOS App Yahoo! Cricket Scores is another dedicated application to catch up with live scores and news on your Nokia mobiles and iPhones. This application is developed by Yahoo!, the web giant as well as the official partner of the ICC. Features of the app at a glance Cricket: Get a summary page with latest scores, upcoming matches and details of the recent matches News: View sections devoted to the latest news, interviews and photos Statistics: Find the latest team and player stats Download Yahoo! Cricket For Symbian Phones   Download Yahoo! Cricket For iOS ESPN CricInfo – Android and iOS App Is there any site better than CricInfo to catch up with the latest cricket news and live scores? I say no. ESPN CricInfo is the best website available on the web to get up-to-the-minute cricket information with in-depth analysis from cricket experts. The live commentary provided by the CricInfo site is as enjoyable as watching live cricket on TV. The CricInfo guys have official applications for Android mobiles and iOS devices, and accessing ball-by-ball updates on these applications is a joy. Download ESPN CricInfo App: Android Version, iPhone Version NDTV Cricket – Android, iOS and Blackberry App NDTV Cricket App is developed by NDTV, the most popular English TV news channel in India. This application provides live coverage of international and domestic cricket (Test, ODI & T20) along with the latest news, photos, videos and stats. This application is available for iOS devices (iPhones, iPads, iPod Touch), Android mobiles and Blackberry devices. Download NDTV Cricket for iOS here & here    Download NDTV Apps For Rest of OSs ECB Cricket – Symbian, iOS & Android App If you are a UK citizen then this may be the right application to download for live cricket score updates as well as the latest news about the England Cricket Board. ECB Cricket is an official application of the England Cricket Board. Download ECB Cricket: Android Version, iPhone Version, Symbian Version Are there any better apps that we missed in this list? This article, titled Top 5 Mobile Apps To Keep Track Of Cricket Scores [ICC World Cup], was originally published at Tech Dreams. Grab our RSS feed or fan us on Facebook to get updates from us.

    Read the article

  • Windows 8 Camp – Ways to Prepare

    - by Lori Lalonde
    When Windows 8 was announced at the BUILD conference back in September, it created quite a buzz among the developer community. By the spring of 2012,  Windows 8 Developer Camps started popping up everywhere imaginable. I received a lot of questions from CTTDNUG members about whether or not we would be hosting one locally. If you recall my post about the Windows Phone/Azure Developer Workshop that CTTDNUG hosted back in March, you’ll remember that the biggest hurdle to overcome when planning this type of event was finding the right venue. It took some time, but I finally found a venue that was available and provided the prerequisites needed to ensure this camp is a success. I am very excited that CTTDNUG will be hosting a Windows 8 Camp this summer in the Kitchener/Waterloo area. In fact, it’s coming up in less than 2 weeks. Clearly other developers are excited as well, because our registration numbers show that the event is already 70% full! On top of that, I was fortunate enough to also book two well-known evangelists to present and teach at this full day developer camp: Andrei Marukovich and Atley Hunter. This was the icing on the cake. With the content provided by Microsoft, and two local experts that live and breathe Windows 8 development, I know that I, along with other developers that attend this event, will have the opportunity to maximize our learning potential and hit the ground running. If you plan on attending a Windows 8 Developer Camp soon, and want to ensure you get the most “bang for your buck” (figuratively speaking, since these camps are free), there are some things you can do to prepare before the big day: 1) Install the prerequisites on your own device before the big day I can’t stress this enough. Otherwise, you will be spending valuable time during the hands-on period downloading and installing what is needed, rather than digging into the development and using that time to ask the experts on-hand about programming challenges, issues, questions you may have with respect to your development. Prerequisites: Windows 8 Release Preview Visual Studio 2012 RC Download the Windows 8 SDK Samples 2) Purchase, download, and read Charles Petzold’s newest book:  Programming Windows 6th Edition This is a great introduction to the type of content you will be learning about during the camp. Doing some light reading beforehand might raise some questions about the concepts discussed in the book, which will give you the opportunity to write them down and bring them with you to the camp. The experts on hand will be able to answer them for you. 3) Make use of the freebies that are available Telerik has recently released a preview of their RadControls for Metro. You can sign up to receive a license code to give you access to install the preview for free and start playing around with it. Syncfusion also offers a free download of their Metro Studio package, which is a collection of metro style icons that you can customize and use in your own applications. Last but not least, once you’ve installed the Windows 8 Release Preview on your own device, go to the Windows 8 Store and download a handful of the free apps that are available. Testing out other Metro apps may give you ideas of what you can do in your own apps and analyze what features you like: application flow, type of animations used, concepts that were leveraged, how live tiles were used, etc. I hope you found these tips to be useful as you embark on a new development journey! 
Although this post focused on how to prepare for a Windows 8 camp, the same ideas are there whichever developer camp/workshop/event you attend. Learning does not begin and end on the day of the event. Attending a developer camp is just one step of many to master whatever technology you are interested in. It is a continuous process, which is fully maximized when you do your homework beforehand, actively participate during,  and follow up by putting what you learned to practice afterwards. Happy coding!

    Read the article

  • Too Clever for My Own Good

    - by AjarnMark
    Yesterday I caught myself being a little too clever for my own good with some ASP.NET code.  It seems that I have forgotten some of my good old classic HTML and JavaScript skills, and become too dependent on the .NET Framework and WebControls to do the work for me.  Here’s the scenario… In order to improve the User Interface and better communicate to the user when something is happening that they need to wait for, we have started to modify some of our larger (slower) pages to display messages like Processing… or Reloading… while they are cycling through a postback.  (Yes, I understand this could be improved by using AJAX / Callbacks and so on, but even then, you need to let your user know that they need to wait for that section to be re-rendered, so for the moment these pages will continue to use good ol’ Postbacks.)  It’s a very simple trick, really.  All I want to do is when some control triggers a postback, first run a little client-side JavaScript to hide the main contents of the page (such as a GridView) and display the appropriate message.  This lets the user know, “Hey, we’re doing something, don’t click another link or scroll and try to take action right now.” The first places I hooked this up were easy.  Most common cause of a postback:  Buttons.  And when you’re writing the markup or declarative code for an ASP:Button control, there is the handy OnClientClick property which is designed for just this purpose…to run client-side JavaScript before the postback occurs.  This is distinguished from the OnClick property which tells the control what Server-side code to run.  Great!  Done!  Easy! But then there are other controls like DropDownLists and CheckBoxes that we use on our pages with the AutoPostback=True setting which cause postbacks.  And these don’t have OnClientClick or OnClientSelectedIndexChanged events.  So I started getting creative, using an ASP:CustomValidator control in conjunction with setting the CausesValidation and ValidationGroup settings on these controls, which basically caused the action on the control to fire the Custom Validator, which was defined with a Client Side validation function which then did the hide content/show message code (and return a meaningless IsValid setting).  This also caused me to define a different ValidationGroup setting for my real data entry validator controls so that I could control them separately and only have them fire when I really wanted validation, and not just my show/hide trick. For a little while I was pretty proud of myself for coming up with this clever approach to get around what I considered to be a serious oversight on the DropDownList and CheckBox controls declarative syntax.  Then, in the midst of my smugness, just as I was about to commit my changes to the source code repository, it dawned on me that there is a much simpler and much more appropriate way to accomplish this.  All that I really needed to do was to put in my server-side code (I used the Page_Init section) a call to MyControl.Attributes.Add(“onClick”, “myJavaScriptFunctionName()”) for the checkboxes, and for the DropDownLists (which become select tags) use “onChange” instead of “onClick”.  This is exactly the type of thing that the Attributes collection is there for…so you can add attributes to be rendered with the control that you would have otherwise stuck right into the HTML markup if you had been doing this by hand in the first place. Ugh!  
A few hours wasted on clever tricks that I ended up completely removing, but I did learn a lot more about custom validators and validation groups in the process.  And got a good reminder that all that stuff (HTML, JavaScript, and CSS) I learned back when I wrote classic ASP pages is still valuable today.  Oh, and one more thing…don’t get lulled into too much reliance on the whiz-bang tool to do it for you.  After all, WebControls are just another layer of abstraction, and sometimes you need to dig down through the layers and get a little closer to the native language.
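
As a rough sketch of the Attributes.Add approach described above (showWaitMessage is a made-up JavaScript function name standing in for whatever client-side code hides the grid and shows the "Processing…" message), the wiring looks something like this:

        protected void Page_Init(object sender, EventArgs e)
        {
            // Run the client-side script before the AutoPostBack fires.
            MyCheckBox.Attributes.Add("onclick", "showWaitMessage();");

            // DropDownLists render as <select> tags, so hook onchange instead.
            MyDropDownList.Attributes.Add("onchange", "showWaitMessage();");
        }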

    Read the article

  • Open source adventures with... wait for it... Microsoft

    - by Jeff
    Last week, Microsoft announced that it was going to open source the rest of the ASP.NET MVC Web stack. The core MVC framework has been open source for a long time now, but the other pieces around it are also now out in the wild. Not only that, but it's not what I call "big bang" open source, where you release the source with each version. No, they're actually committing in real time to a public repository. They're also taking contributions where it makes sense. If that weren't exciting enough, CodePlex, which used to be a part of the team I was on, has been re-org'd to a different part of the company where it is getting the love and attention (and apparently money) that it deserves. For a period of several months, I lobbied to get a PM gig with that product, but got nowhere. A year and a half later, I'm happy to see it finally treated right. In any case, I found a bug in Razor, the rendering engine, before the beta came out. I informally sent the bug info to some people, but it wasn't fixed for the beta. Now, with the project being developed in the open, I was able to submit the issue, and went back and forth with the developer who wrote the code (I met him once at a meet up in Bellevue, I think), and he committed a fix. I tried it a day later, and the bug was gone. There's a lot to learn from all of this. That open source software is surprisingly efficient and often of high quality is one part of it. For me the win is that it demonstrates how open and collaborative processes, as light as possible, lead to better software. In other words, even if this were a project being developed internally, at a bank or something, getting stakeholders involved early and giving people the ability to respond leads to awesomeness. While there is always a place for big thinking, experience has shown time and time again that trying to figure everything out up front takes too long, and rarely meets expectations. This is a lesson that probably half of Microsoft has yet to learn, including the team I was on before I split. It's the reason that team still hasn't shipped anything to general availability. But I've seen what an open and iterative development style can do for teams, at Microsoft and other places that I've worked. When you can have a conversation with people, and take ideas and turn them into code quickly, you're winning. So why don't people like winning? I think there are a lot of reasons, and they can generally be categorized into fear, skepticism and bad experiences. I can't give the Web stack teams enough credit. Not only did they dream big, but they changed a culture that often seems immovable and hopelessly stuck. This is a very public example of this culture change, but it's starting to happen at every scale in Microsoft. It's really interesting to see in a company that has been written off as dead the last decade.

    Read the article

  • Hijax == sneaky Javascript redirects? Will I get banned from Google?

    - by Chris Jacob
    Question Will I get penalised for "sneaky JavaScript redirects" by Google if I have the following Hijax setup (which requires a JavaScript redirect on the page indexed by Google)? Goal I want to implement Hijax to enable AJAX content to be accessible to non-JavaScript users and search engine crawlers. Background I'm working on a static file server (GitHub Pages). No server side tricks allowed (so Google's #! "hash bang" solution is not an option). I'm trying to keep my files DRY. I don't want to repeat the common OUTER template in all my files, i.e. header, navigation menu, footer, etc. They will live in the main index.html Setup the Hijax index.html page contains all OUTER html/css/js... the site's template. index.html has a <div id="content"> which defaults to containing the "homepage" html. index.html has a navigation menu, with a Hijax link to an "about" page. With JavaScript disabled (e.g. a crawler) it follows the link to /about.html. With JavaScript enabled (e.g. most people) the link updates the url hash fragment to /#about and jQuery replaces the <div id="content"> innerHTML with $("#content").load("about.html #inner-container");. AJAX content about.html does not contain anything extra to try and cloak content for crawlers. The about.html file contains enough HTML / CSS / JavaScript to display /about.html as a standalone page with its own META data... e.g. <html><head><title>About</title>...</head><body></body></html>. about.html has NO OUTER HTML template (i.e. header, navigation menu, footer, etc). The about.html <body> contains a <div id="inner-container"> which holds the content that is injected into index.html. about.html has a <noscript> tag as the first child of <body> which explains to non-JavaScript users that they are viewing the about page "inner content" - with a link to navigate to the index.html page to get the full page layout with menu. The (Sneaky?) Redirect Google indexes the /about.html page. However when a person with JavaScript enabled visits that page there is no OUTER html template (e.g. header, navigation menu, footer, etc). So I need to do a JavaScript redirect to get the person over to the /#about page (deeplinking to the "about" page "state" in index.html). I'm thinking of doing a "redirect on click or after 10 seconds". The end result is that the user ends up on an "enhanced" page back on index.html with all its OUTER template - but the core "page" content is practically identical. Known issue with inbound links e.g. Share / Bookmarking It seems that if a user shares the URL /#about on their blog, when allocating inbound links to my site Google ignores everything after the # ... it allocates value to the / page - See: http://stackoverflow.com/questions/5028405/hashbang-vs-hijax/5166665#5166665. I can only try and minimise this issue by offering "share" buttons on the page with the appropriate URLs, i.e. /about.html. Duplicate Sorry. I posted this same question over on http://stackoverflow.com/questions/5561686/hijax-sneaky-javascript-redirects-will-i-get-banned-from-google ... then realised it probably belongs more on this Stack Exchange site... Not sure if I should delete the Stack Overflow question? Or just leave it on both sites? Please leave a comment.

    Read the article

  • Is This Your Idea of Disaster Recovery?

    - by rickramsey
    Don't just make do with less. Protect what you've got. By, for instance, deploying Oracle Solaris 10 inside a zone cluster. "Wait," you say, "what is a zone cluster?" It is a zone deployed across different physical servers. "Who would do that!" you ask in a mild panic. Why, an upstanding sysadmin citizen interested in protecting his or her employer's investment with appropriate high availability and disaster recovery. If one server gets wiped out by Hurricane Sandy along with pretty much the entire East Coast of the USA, your zone continues to run on the other server(s). Provided you set them up in Edinburgh. This white paper (pdf) explains what a zone cluster is and how to use it. If a white paper reminds you of having to read War and Peace in school, just use this Oracle RAC and Solaris Cluster Cheat Sheet, instead. "But wait!" you exclaim. "I didn't realize Solaris 10 offered zone clusters!" I didn't, either! And in an earlier version of this blog post I said that zone clusters were only available with Oracle Solaris 11. But Karoly Vegh pointed me to the documentation for Oracle Solaris Cluster 3.3, which explains how to manage zone clusters in Oracle Solaris 10. Bite my fist! So, the point I was trying to make is not just that you can run Oracle Solaris 10 zone clusters, but that you can run them in an Oracle Solaris 11 environment. Now let's return to our conversation and pick up where we left off ... "Oh no! Whatever shall I do?" Fear not. Remember how Oracle Solaris 11 lets you create a Solaris 10 branded zone inside a system running Oracle Solaris 11? Well, the Solaris Cluster engineers thought that was a bang-up idea, and decided to extend Oracle Solaris Cluster so that you could run your Solaris 10 applications inside the protective cocoon of an Oracle Solaris 11 zone cluster. Take advantage of the installation improvements and network virtualization capabilities of Oracle Solaris 11 while still running your application on Oracle Solaris 10. You Luddite, you. That capability is in the latest release of Oracle Solaris Cluster, version 4.1, which became available last Friday. "Last Friday! Is it too late to get a copy?" You can still get a free copy from our download center (see below). And, if you'd like to know what other goodies the 4.1 release of Oracle Solaris Cluster provides, see: What's New In Oracle Solaris Cluster 4.1 (pdf) Free download Oracle Solaris Cluster 4.1 (SPARC or x86) Tech Article: How to Upgrade to Oracle Solaris Cluster 4.0, by Tim Read. As always, you can get the latest information about Oracle Solaris Cluster, plus technical how-to articles, documentation, and more from Oracle Solaris Cluster Resource Page for Sysadmins and Developers. And don't forget about the online launch of Oracle Solaris 11.1 and Oracle Solaris Cluster 4.1, scheduled for Nov 7. "I feel so much better, now!" Think nothing of it. That's what we're here for. - Rick Website Newsletter Facebook Twitter

    Read the article

  • Investigating Strategies For Functional Decomposition

    - by Liam McLennan
    Introducing Functional Decomposition Before I begin I must apologise. I think I am using the term ‘functional decomposition’ loosely, and probably incorrectly. For the purpose of this article I use functional decomposition to mean the recursive splitting of a large problem into increasingly smaller ones, so that the one large problem may be solved by solving a set of smaller problems. The justification for functional decomposition is that the decomposed problem is more easily solved. As software developers we recognise that the smaller pieces are more easily tested, since they do less and are more cohesive. Functional decomposition is important to all scientific pursuits. Once we understand natural selection we can start to look for humanity's ancestral species; once we understand the big bang we can trace our expanding universe back to its origin. Isaac Newton acknowledged the compositional nature of his scientific achievements: If I have seen further than others, it is by standing upon the shoulders of giants   The Two Strategies For Functional Decomposition of Computer Programs Private Methods When I was working on my undergraduate degree I was taught to functionally decompose problems by using private methods. Consider the problem of painting a house. The obvious solution is to solve the problem as a single unit: public void PaintAHouse() { // all the things required to paint a house ... } We decompose the problem by breaking it into parts: public void PaintAHouse() { PaintUndercoat(); PaintTopcoat(); } private void PaintUndercoat() { // everything required to paint the undercoat } private void PaintTopcoat() { // everything required to paint the topcoat } The problem can be recursively decomposed until a sufficiently granular level of detail is reached: public void PaintAHouse() { PaintUndercoat(); PaintTopcoat(); } private void PaintUndercoat() { prepareSurface(); fetchUndercoat(); paintUndercoat(); } private void PaintTopcoat() { fetchPaint(); paintTopcoat(); } According to Wikipedia, at least one computer programmer has referred to this process as “the art of subroutining”. The practical issues that I have encountered when using private methods for decomposition are: To preserve the top level API all of the steps must be private. This means that they can’t easily be tested. The private methods often have little cohesion except that they form part of the same solution. Decomposing to Classes The alternative is to decompose large problems into multiple classes, effectively using a class instead of each private method. The API delegates to related classes, so the API is not polluted by the sub-steps of the problem, and the steps can be easily tested because they are each in their own highly cohesive class. Additionally, I think that this technique facilitates better adherence to the Single Responsibility Principle, since each class can be decomposed until it has precisely one responsibility. Revisiting my previous example using class composition: public class HousePainter { private UndercoatPainter undercoatPainter = new UndercoatPainter(); private TopcoatPainter topcoatPainter = new TopcoatPainter(); public void PaintAHouse() { undercoatPainter.Paint(); topcoatPainter.Paint(); } } Summary When decomposing a problem there is more than one way to represent the sub-problems.
Using private methods keeps the logic in one place and prevents a proliferation of classes (thereby following the four rules of simple design) but the class decomposition is more easily testable and more compatible with the Single Responsibility Principle.

    Read the article

  • What do you need to know to be a world-class master software developer? [closed]

    - by glitch
    I wanted to bring up this question to you folks and see what you think, hopefully advise me on the matter: let's say you had 30 years of learning and practicing software development in front of you, how would you dedicate your time so that you'd get the biggest bang for your buck. What would you both learn and work on to be a world-class software developer that would make a large impact on the industry and leave behind a legacy? I think that most great developers end up being both broad generalists and specialists in one-two areas of interest. I'm thinking Bill Joy, John Carmack, Linus Torvalds, K&R and so on. I'm thinking that perhaps one approach would be to break things down by categories and establish a base minimum of "software development" greatness. I'm thinking: Operating Systems: completely internalize the core concepts of OS, perhaps gain a lot of familiarity with an OSS one such as Linux. Anything from memory management to device drivers has to be complete second nature. Programming Languages: this is one of those topics that imho has to be fully grokked even if it might take many years. I don't think there's quite anything like going through the process of developing your own compiler, understanding language design trade-offs and so on. Programming Language Pragmatics is one of my favorite books actually, I think you want to have that internalized back to back, and that's just the start. You could go significantly deeper, but I think it's time well spent, because it's such a crucial building block. As a subset of that, you want to really understand the different programming paradigms out there. Imperative, declarative, logic, functional and so on. Anything from assembly to LISP should be at the very least comfortable to write in. Contexts: I believe one should have experience working in different contexts to truly be able to appreciate the trade-offs that are being made every day. Embedded, web development, mobile development, UX development, distributed, cloud computing and so on. Hardware: I'm somewhat conflicted about this one. I think you want some understanding of computer architecture at a low level, but I feel like the concepts that will truly matter will be slightly higher level, such as CPU caching / memory hierarchy, ILP, and so on. Networking: we live in a completely network-dependent era. Having a good understanding of the OSI model, knowing how the Web works, how HTTP works and so on is pretty much a pre-requisite these days. Distributed systems: once again, everything's distributed these days, it's getting progressively harder to ignore this reality. Slightly related, perhaps add solid understanding of how browsers work to that, since the world seems to be moving so much to interfacing with everything through a browser. Tools: Have a really broad toolset that you're familiar with, one that continuously expands throughout the years. Communication: I think being a great writer, effective communicator and a phenomenal team player is pretty much a prerequisite for a lot of a software developer's greatness. It can't be overstated. Software engineering: understanding the process of building software, team dynamics, the requirements of the business-side, all the pitfalls. You want to deeply understand where what you're writing fits from the market perspective. The better you understand all of this, the more of your work will actually see the daylight. This is really just a starting list, I'm confident that there's a ton of other material that you need to master. 
As I mentioned, you most likely end up specializing in a bunch of these areas as you go along, but I was trying to come up with a baseline. Any thoughts, suggestions and words of wisdom from the grizzled veterans out there who would like to share their thoughts and experiences with this? I'd really love to know what you think!

    Read the article

  • DataTableReader is invalid for current DataTable 'TempTable'

    - by Sk93
    Hi, I'm getting the following error whenever my code creates a DataTableReader from a valid DataTable object: "DataTableReader is invalid for current DataTable 'TempTable'." The thing is, if I reboot my machine, it works fine for an undetermined amount of time, then dies with the above. The code that throws this error could have been working fine for hours and then: bang, you get this error. It's not limited to one line either; it's every single location where a DataTableReader is used. Also, this error does NOT occur on the production web server - ever. In the example line I'm looking at, stepping over the statement that creates the reader raises the error, yet evaluating the very same expression in the immediate window gives no problems, and the same goes if I actually use that line in the code. This has been driving me nuts for the best part of a week, and I've failed to find anything on Google that could help (as I'm pretty positive this isn't a coding issue). Some technical info: DEV Box: Vista 32bit (with all current windows updates) Visual Studio 2008 v9.0.30729.1 SP dotNet Framework 3.5 SP1 SQL Server: Microsoft SQL Server 2005 Standard Edition - 9.00.4035.00 (X64) Windows 2003 64bit (with all current windows updates) Web Server: Windows 2003 64bit (with all current windows updates) Any help, ideas, or advice would be greatly appreciated! Cheers, Ian
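
    Without seeing the hidden code it is hard to be definitive, but one thing worth checking is whether the reader is created once and cached, or whether the underlying DataTable is modified while the reader is open; a minimal pattern that avoids both (shown purely as an assumption about the failing code) is to create the reader immediately before iterating and dispose of it straight after:

        // Requires: using System.Data;
        // tempTable is assumed to be a fully populated DataTable named "TempTable".
        using (DataTableReader reader = tempTable.CreateDataReader())
        {
            while (reader.Read())
            {
                // Read whatever columns are needed while the reader is open;
                // changing tempTable at this point would invalidate the reader.
                string value = reader["SomeColumn"].ToString();
            }
        }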

    Read the article

  • What are the best open-source software non-profits for making financial contributions and/or facilitating useful work?

    - by Jason S
    I'm not a great programmer myself (my main job is more electrical engineering) and have never really helped out with any open source projects, but I've benefited greatly from free and/or open-source software (MySQL, OpenOffice, Firefox, Apache, PHP, Java, etc.) and at some point would like to make some modest financial contributions to help keep this stuff going. I'm wondering, what are the best non-profits to make financial contributions? I'm aware of: Open Source Initiative (founded 10 years ago by several prominent figures including programmer and "The Cathedral and the Bazaar" author Eric S. Raymond) Free Software Foundation Mozilla Foundation Apache Foundation Anyone have a particular favorite? Ideally I'd like to give money to a non-profit that would foster some of the smaller but promising open-source and/or free software projects. The big projects like Firefox and Apache are already well-established. There are a few small individual shareware programs I've already paid for directly. But it's those middle-ground projects that I would really like my contributions to support. (one that comes to mind is a good GUI for Subversion or Mercurial.) It's one thing for a single person to donate a little $$ to a small project. It's another for a foundation or something to give larger grants to projects that give a good bang for the buck. Conservation organizations like The Nature Conservancy, or the Trust for Public Lands, have really honed this approach, but I'm not really sure if there's an equivalent model in software-land.

    Read the article

  • WCF JSON Service returns XML on Fault

    - by Anthony Johnston
    I am running a ServiceHost to test one of my services and all works fine until I throw a FaultException - bang I get XML not JSON my service contract - lovely /// <summary> /// <para>Get category by id</para> /// </summary> [OperationContract(AsyncPattern = true)] [FaultContract(typeof(CategoryNotFound))] [FaultContract(typeof(UnexpectedExceptionDetail))] IAsyncResult BeginCategoryById( CategoryByIdRequest request, AsyncCallback callback, object state); CategoryByIdResponse EndCategoryById(IAsyncResult result); Host Set-up - scrummy yum var host = new ServiceHost(serviceType, new Uri(serviceUrl)); host.AddServiceEndpoint( serviceContract, new WebHttpBinding(), "") .Behaviors.Add( new WebHttpBehavior { DefaultBodyStyle = WebMessageBodyStyle.Bare, DefaultOutgoingResponseFormat = WebMessageFormat.Json, FaultExceptionEnabled = true }); host.Open(); Here's the call - oo belly ache var request = WebRequest.Create(serviceUrl + "/" + serviceName); request.Method = "POST"; request.ContentType = "application/json; charset=utf-8"; request.ContentLength = 0; try { // receive response using (var response = request.GetResponse()) { var responseStream = response.GetResponseStream(); // convert back into referenced object for verification var deserialiser = new DataContractJsonSerializer(typeof (TResponseData)); return (TResponseData) deserialiser.ReadObject(responseStream); } } catch (WebException wex) { var response = wex.Response; using (var responseStream = response.GetResponseStream()) { // convert back into fault //var deserialiser = new DataContractJsonSerializer(typeof(FaultException<CategoryNotFound>)); //var fex = (FaultException<CategoryNotFound>)deserialiser.ReadObject(responseStream); var text = new StreamReader(responseStream).ReadToEnd(); var fex = new Exception(text, wex); Logger.Error(fex); throw fex; } } the text var contains the correct fault, but serialized as Xml What have I done wrong here?
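
    Nothing in the contract or host setup above looks obviously wrong, so this is only a hedged suggestion: if the project can target .NET 4, one commonly used alternative for webHttpBinding services is to throw WebFaultException<T> with the existing fault detail type, which lets the detail come back in the configured response format rather than as a SOAP-style XML fault. A sketch (the null check and variable names are made up for illustration):

        using System.Net;
        using System.ServiceModel.Web;

        // Inside the service operation, instead of throwing FaultException<CategoryNotFound>:
        if (category == null)
        {
            var detail = new CategoryNotFound();   // the existing fault detail type
            throw new WebFaultException<CategoryNotFound>(detail, HttpStatusCode.NotFound);
        }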

    Read the article

  • Need guidelines for optimizing WebGL performance by minimizing shader changes

    - by brainjam
    I'm trying to get an idea of the practicality of WebGL for rendering large architectural interior scenes, consisting of 100K's of triangles. These triangles are distributed over many objects, and there are many materials in the scene. On the other hand, there are no moving parts. And the materials tend to be fairly simple, mostly based on texture maps. There is a lot of texture map sharing .. for example all the chairs in scene will share a common map. There is also some multitexturing - up to three textures overlaid in a material. I've been doing a little experimentation and reading, and gather that frequently switching materials during a rendering pass will slow things down. For example, a scene with 200K triangles will have significant performance differences, depending on whether there are 10 or 1000 objects, assuming that each time an object is displayed a new material is set up. So it seems that if performance is important the scene should be sorted by materials so as to minimize material switching. What I'm looking for is guidelines on how to think of the overhead of various state changes, and where do I get the biggest bang for the buck. For example, what are the relative performance costs of, say, gl.useProgram(), gl.uniformMatrix4fv(), gl.drawElements() should I try to write ubershaders to minimize shader switching? should I try to aggregate geometry to minimize the number of gl.drawElements() calls I realize that mileage may vary depending on browser, OS, and graphics hardware. And I'm also not looking for heroic measures. Just some guidelines from people who have already had some experience in making scenes fast. I'll add that while I've had some experience with fixed-pipeline OpenGL programming in the past, I'm rather new to the WebGL/OpenGL ES 2.0 way of doing things.

    Read the article

  • How did your team customize Stylecop (and perhaps other tools) for .Net for a good result?

    - by Hamish Grubijan
    Our team is still in a love / hate relationship with it. I am hoping to put an end to the debate by having an internal vote on which rules should be excluded and which rules should be added. Before doing so, I wanted to ask other SO users. To standardize (but not limit) the responses: What is your current StyleCop version? What .Net version do you currently target? Which default rules did you turn off? Which non-default rules have you turned on? Have you coded your own rules? Please describe. Do you have any other StyleCop tricks worth sharing? Do you use Resharper? What version? Is it a good bang for the buck? Do you use any other tools for .Net / C++ which integrate with Visual Studio and aid development? Did you get your money's worth? Anything else you'd like to add? ... Thank you!

    Read the article

  • PHP Installer Script

    - by totallymike
    I'm looking to create a fully functional installer for a slightly complicated web application using PHP. Basically, you go to URL/setup.php and you are asked a series of questions, and during the process a database is set up and a config file is populated according to the answers given. A good example of this is how WordPress handles its install procedure, as well as any number of PHP applications. My question to you, oh great internet, is this: the technique is so prevalent that surely not everyone is re-inventing the wheel, so is there a place I can go to find a basic walkthrough (not code, just a description) of the technique? Either that, or a good book which outlines, if not this specific technique, then the fundamentals relevant to it? I can bang out something ugly and probably make it work myself, but if there's a best known method I'd like to find it. Far more important than having a working function is to have a description of what's going on from step to step. I'd like to have the installer, but I really want to learn how it's done. Thanks, Mike

    Read the article

  • Generic ASP.NET MVC Route Conflict

    - by Donn Felker
    I'm working on a legacy ASP.NET system. I say legacy because there are NO tests around 90% of the system. I'm trying to fix the routes in this project and I'm running into an issue I wish to solve with generic routes. I have the following routes: routes.MapRoute( "DefaultWithPdn", "{controller}/{action}/{pdn}", new { controller = "", action = "Index", pdn = "" }, null ); routes.MapRoute( "DefaultWithClientId", "{controller}/{action}/{clientId}", new { controller = "", action = "index", clientid = "" }, null ); The problem is that the first route is catching all of the traffic for what I need to be routed to the second route. The routes are generic (no controller is defined in the constraint in either route definition) because multiple controllers throughout the entire app share this same premise (sometimes we need a "pdn", sometimes we need a "clientId"). How can I map these generic routes so that they go to the proper controller and action, yet not have one be too greedy? Or can I at all? Are these routes too generic (which is what I'm starting to believe is the case)? My only option at this point (AFAIK) is one of the following: In the constraints, apply a regex to match the action values like: (foo|bar|biz|bang) and the same for the controller: (home|customer|products) for each controller. However, this has a problem in the fact that I may need to do this: ~/Foo/Home/123 // Should map to "DefaultWithPdn" ~/Foo/Home/abc // Should map to "DefaultWithClientId" Which means that if the Foo controller has an action that takes a pdn and another action that takes a clientId (which happens all the time in this app), the wrong route is chosen. To hardcode these constraints into each possible controller/action combo seems like a lot of duplication to me, and I have the feeling I've been looking at the problem for too long, so I need another pair of eyes to help out. Can I have generic routes to handle this scenario? Or do I need to have custom routes for each controller with constraints applied to the actions on those routes? Thanks
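
    One way to stop the first route being greedy, assuming pdns are always numeric and client ids always start with a letter (an assumption, since the question doesn't say), is to constrain the parameter values themselves with regular expressions rather than enumerating controllers and actions; the "Home"/"Index" defaults below are placeholders:

        routes.MapRoute(
            "DefaultWithPdn",
            "{controller}/{action}/{pdn}",
            new { controller = "Home", action = "Index" },
            new { pdn = @"\d+" });                    // digits only, e.g. ~/Foo/Home/123

        routes.MapRoute(
            "DefaultWithClientId",
            "{controller}/{action}/{clientId}",
            new { controller = "Home", action = "Index" },
            new { clientId = @"[A-Za-z]\w*" });       // starts with a letter, e.g. ~/Foo/Home/abc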

    Read the article

  • Hashtable is that fast

    - by Costa
    Hi. s[0]*31^(n-1) + s[1]*31^(n-2) + ... + s[n-1] is the hash function of the Java string; I assume the rest of the languages are similar or close to this implementation. Suppose we have a hashtable and a list of 50 elements, where each element is 7 chars: ABCDEF1, ABCDEF2, ABCDEF3..... ABCDEFn. If each bucket of the hashtable contains 5 strings (I think this function will make it one string per bucket, but let us assume it is 5), then a call to col.Contains("ABCDEFn") will do 6 comparisons and discover the difference on the 7th. The hashtable will take around 70 operations (multiplications and additions) to get the hashcode and to compare with the 5 strings in the bucket, and bang, it's found. For the list it will take around 300 comparisons to find it. For the case where there are only 10 elements, the list will take around 70 operations but the hashtable will take around 50 operations; and note that hashtable operations are more time consuming (they are multiplications). I conclude that HybridDictionary in .NET is probably the best choice for most cases that require a hashtable of unknown size, because it will let me use a list till the list grows beyond 10 elements. I still need something like a HashSet rather than a Dictionary of keys and values; I wonder why there is no HybridSet! So what do you think? Thanks
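
    To make the formula concrete, here is a small C# sketch of the same polynomial hash written with Horner's rule (the question is about Java's String.hashCode, so this is just an illustration, not the .NET string hash):

        static int JavaStyleStringHash(string s)
        {
            unchecked
            {
                int h = 0;
                foreach (char c in s)
                {
                    // Equivalent to s[0]*31^(n-1) + s[1]*31^(n-2) + ... + s[n-1]
                    h = 31 * h + c;
                }
                return h;
            }
        }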

    Read the article

  • A GUID as the MySQL table's Primary Key or as a separate column

    - by Ben
    I have a multi-process program that performs, in a 2 hour period, 5-10 million inserts to a 34GB table within a single Master/Slave MySQL setup (plus an equal number of reads in that period). The table in question has only 5 fields and 3 (single field) indexes. The primary key is auto-incrementing. I am far from a DBA, but the database appears to be crippled during this two hour period. So, I have a couple of general questions. 1) How much bang will I get out of batching these writes into units of 10? Currently, I am writing each insert serially because, after writing, I immediately need to know, in my program, the resulting primary key of each insert. The PK is the only unique field presently, and approximating the order of insertion with something like a Datetime field or a multi-column value is not acceptable. If I perform a bulk insert, I won't know these IDs, which is a problem. So, I've been thinking about turning the auto-increment primary key into a GUID and enforcing uniqueness. I've also been kicking around the idea of creating a new column just for the purposes of the GUID. I don't really see what that achieves, though, that the PK approach doesn't already offer. As far as I can tell, the big downside to making the PK a randomly generated number is that the index would take a long time to update on each insert (since insertion order would not be sequential). Is that an acceptable approach for a table that is taking this number of writes? Thanks, Ben

    Read the article

  • How to track the touch vector?

    - by mystify
    I need to calculate the direction of a dragging touch, to determine if the user is dragging up the screen or down the screen. Actually pretty simple, right? But: 1) Finger goes down, you get -touchesBegan:withEvent: called 2) Must wait until the finger moves, and -touchesMoved:withEvent: gets called 3) Problem: At this point it's dangerous to tell if the user dragged up or down. My thoughts: Check the time and accumulate the calculated vectors until it's safe to tell the direction of the touch. Easy? No. Think about it: What if the user holds the finger down for 5 minutes on the same spot, but THEN decides to move up or down? BANG! Your code would fail, because it tried to determine the direction of the touch when the finger didn't really move. Problem 2: When the finger goes down and stays at the same spot for a few seconds because the user is a bit in the wind and thinks about what to do now, you'll very likely get a lot of -touchesMoved:withEvent: calls, but with very minor changes in touch location. So my next thought: Do the accumulation in -touchesMoved:withEvent:, but only if a certain threshold of movement has been exceeded. I bet you have some better concepts in place?

    Read the article

  • iPhone: How to Determine Average Light/Dark of an Area of an UIImage

    - by TechZen
    I need to place labels with a transparent background over a variable-content UIImage. Readability will vary significantly depending on the relationship between the color of the label's text and the color/luminosity of the area of the image displayed under the label. Since the image will be constantly changing, the color of the label's text needs to change in sync. I have found several techniques for determining the color, perceived luminosity etc. of a single pixel. However, I need to rather quickly (while a view loads) determine the rough perceived color/luminosity of an area of the UIImage under the frame of the UILabel. I presume I will also need to measure the alpha because the same color/luminosity looks different at different alpha values. Is there a way to calculate such a value for an area? Will I be reduced to simply summing pixels? If it comes to that, is there an algorithm to accomplish this? I've thought of two possible approaches: Perform some "folding" operations, i.e. combining pixels from one half of the area with the other half, then repeat until I get a single value. Would this be practical? How would you logically combine pixels to average their perceived color/luminosity? Or sample a statistically significant number of pixels in the area and then combine them (somehow) to get a rough measure. I think this problem comes up a lot these days with people being so fond of customizing backgrounds. Seems like something that would be worth my time to bang out a category or class to handle this and then share it around.

    Read the article

  • What problem did MS solve by creating PowerShell? [closed]

    - by Fred
    I'm asking because PowerShell confuses me. I've been trying to write some deployment scripts using PowerShell and I've been less than enthused by the result. I have a co-worker who loves PowerShell and defends it at every turn. Said co-worker claims PowerShell was never written to be a strong shell, but instead was written to: a) Allow you to peek and poke at .NET assemblies on the command-line (why is this a reason for PowerShell to exist?) b) To be hosted in .NET applications for automation, similar to DCOP in KDE and how Gnome is using CORBA. c) to be treated as ".NET script" rather than as an actual shell (related to b). I've always felt like Windows was missing a decent way to bang out automation scripts. cmd is too simplistic in many cases, and WSH is too obtuse (although the combination can be used successfully, I'm not a fan). When I first heard about PowerShell I felt like Windows was finally getting a decent shell that would be able to help with automation of many tasks, but recent experiences, and my co-worker, tell me otherwise. To be clear, I don't take issue with the fact that it's built on .NET, or that it passes objects around rather than text (despite my Unix background :]), and I'm not arguing that PowerShell is useless, but from what I can see, it doesn't solve the problem I was hoping it would solve very well. As soon as you step outside of the .NET/Powershell world, things quit being nice and cozy for you. So with all that out of the way, what problem did MS solve by creating PowerShell, or is it some political bastard child as I suspect? I've googled and haven't hit upon anything that sufficiently answered that for me, but the more citations the better.

    Read the article
