Search Results

Search found 4879 results on 196 pages for 'geeks'.


  • On Reflector Pricing

    - by Nick Harrison
    I have heard a lot of outrage over Red Gate's decision to charge for Reflector. In the interest of full disclosure, I am a fan of Red Gate. I have worked with them on several usability tests. They also sponsor Simple Talk, where I publish articles. They are a good company. I am also a BIG fan of Reflector. I have used it since Lutz originally released it. I have written my own add-ins. I have written code to host Reflector and use its object model in my own code. Reflector is a beautiful tool. The care that Lutz took to incorporate extensibility is amazing. I have never had difficulty convincing my fellow developers that it is a wonderful tool. Almost always, once anyone sees it in action, it becomes their favorite tool. This widespread adoption and usability have made it an icon and pivotal pillar in the DotNet community. Even folks with the attitude that if it did not come out of Redmond then it must not be any good still love it.

    It is ironic to hear everyone clamoring for it to be released as open source. Reflector was never open source; it was free, but you were never able to peruse the source code and contribute your own changes. You could not even use Reflector to view the source code. From the very beginning, it was never anyone's intention for just anyone to examine the source code and make their own contributions aside from the add-in model. Lutz chose to hand over the reins to Red Gate because he believed that they would be able to build on his original vision and keep the product viable and effective. He did not choose to make it open source, hoping that the community would be up to the challenge. The simplicity and elegance may well have been lost with the "design by committee" nature of open source.

    Despite being a wonderful and beloved tool, Reflector cannot be an easy tool to maintain. Maybe because it is so wonderful and beloved, it is even more difficult to maintain. At any rate, we have high expectations. Reflector must continue to be able to reasonably disassemble every language construct that the framework and core languages dream up. We want it to be fast, and we also want it to continue to be simple to use. No small order. Red Gate tried to keep the core product free. Sadly, there was not enough interest in the Pro version to subsidize the rest of the expenses. $35 is a reasonable cost; more than reasonable.

    I have read the blog posts and forum posts complaining about the time associated with getting the expense approved. I have heard people complain about the cost being unreasonable if you are a developer from certain countries. Let's do the math. How much of a productivity boost is Reflector? How many hours do you think it saves you in a typical project? The next question is a little easier if you are a contractor or a consultant, but what is your hourly rate? If you are not a contractor, you can probably figure out an hourly rate. How long does it take to get a return on your investment? The value-added proposition is not a difficult one to make.

    I have read people clamoring that Red Gate sucks and is evil. They complain about broken promises and conflicts of interest. Relax! Red Gate is not evil. The world is not coming to an end. The sun will come up tomorrow. I am sure that Red Gate will come up with options for volume licensing or site licensing for companies that want to get a licensed copy for their entire team. Don't panic; I am sure that many great improvements are on the horizon. Switching the UI to WPF and including a tabbed interface opens up lots of possibilities.

    Read the article

  • Rant - Why is Windows Azure not available in Africa?

    - by Allan Rwakatungu
    Yesterday at the .NET user group meeting in Kampala, Uganda, I gave a talk on cloud computing with Windows Azure (details will be in my next blog post). The guys were excited. Without owning their own infrastructure, and at low cost, they can build scalable, highly available applications. Not quite. Azure accounts are only available to people in particular countries - none from Africa. I attended PDC in 2008 when Microsoft unleashed Windows Azure. One of the case studies to show the benefits of cloud computing was a project in Africa for an education service in Ethiopia. The point they were making was that the cloud was perfect for scenarios where computing infrastructure is not sophisticated, like Ethiopia. Perfect, I thought. So I got my beta account from PDC and started playing around in the cloud. Then Azure goes live, my beta account does not work any more, and I can't pay because I am from Uganda. Microsoft, this sucks. I don't know the reasons for Microsoft doing this, but I am sure we can work something out. We in Africa need the cloud more than anybody else in the world. Setting up data centers that are highly scalable and available for our startups is not an option we have. But we also can't pay for cloud computing with Microsoft. Microsoft, we know we are a tiny, insignificant market for a company your size, but your excluding us only continues to widen the digital divide. Microsoft, how about a reseller model for cloud computing? Instead of trying to deal directly with each client, you have local partners who help you sell and bill your cloud services. I think that would lead to Windows Azure being available in Africa. I can help you resell in Uganda.

    Read the article

  • Do you have a contract between the Product Owner and the Team?

    - by Martin Hinshelwood
    Working in Scrum it is useful to define a Sprint Contract between the Product Owner (PO) and the implementation Team. Doing this helps to improve common understanding in, and sometimes to enforce, the relationship between the PO and the Team. This is simply an agreement between the PO and the Team for one Sprint; it is not really a commercial contract and should be confirmed via an e-mail at the beginning of every Sprint. “The implementation team agrees to do its best to deliver an agreed on set of features (scope) to a defined quality standard by the end of the sprint. (Ideally they deliver what they promised, or even a bit more.) The Product Owner agrees not to change his instructions before the end of the Sprint.” - Agile Project Management (http://agilesoftwaredevelopment.com/blog/peterstev/10-agile-contracts#Sprint) Each of the Sprints in a Scrum project can be considered a mini-project that has Time (Sprint Length), Scope (Sprint Backlog), Quality (Definition of Done) and Cost (Team Size * Sprint Length). Only the Scope can vary, and this is measured every Sprint. Figure: Good example - the Product Owner should reply to the team and commit to the contract. This Rule has been added to SSW’s Rules to better Scrum with TFS. Technorati Tags: SSW, Scrum, SSW Rules

    Read the article

  • Digineer Is Hiring

    - by Christopher House
    My employer, Digineer is looking for BizTalk and SharePoint people.  If you or someone you know is looking for a great opportunity in the Minneapolis/St. Paul area, get in touch via the Contact link on this page.

    Read the article

  • Welcome

    - by 13DaysaWeek
    Greetings! Let me start with an introduction. My name is Chris House. I'm a consultant with Digineer, a management and technology consulting firm in the Twin Cities of Minneapolis/St. Paul, Minnesota. My particular area of focus is middle-tier technologies such as BizTalk, WCF, etc. I'm looking forward to sharing some of the interesting tidbits I come across during the course of my work.

    Read the article

  • Handy Javascript array Extensions – distinct()

    - by Liam McLennan
    The following code adds a method to javascript arrays that returns a distinct list of values.

        Array.prototype.distinct = function() {
            var derivedArray = [];
            for (var i = 0; i < this.length; i += 1) {
                if (!derivedArray.contains(this[i])) {
                    derivedArray.push(this[i]);
                }
            }
            return derivedArray;
        };

    and to demonstrate:

        alert([1,1,1,2,2,22,3,4,5,6,7,5,4].distinct().join(','));

    This produces 1,2,22,3,4,5,6,7
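    Note that contains() is not a standard JavaScript array method; it is presumably another extension from this same series. A minimal sketch of such a helper, assuming it simply wraps indexOf, might look like this (hypothetical, not from the original post):

        // Hypothetical helper assumed by distinct() above; not part of standard JavaScript.
        Array.prototype.contains = function(item) {
            // indexOf returns -1 when the item is not present
            return this.indexOf(item) !== -1;
        };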

    Read the article

  • Thread.Interrupt Is Evil

    - by Alois Kraus
    Recently I found an interesting issue with Thread.Interrupt during application shutdown. An application was crashing about once a week and we did not really have a clue what the issue was. Since it did not happen very often it was left as is until we got some memory dumps during the crash. A memory dump usually means WinDbg, which I really like to use (I know I am one of the very few fans of it). After a quick analysis I found that the main thread had already exited and the thread with the crash was stuck in a Monitor.Wait. Strange indeed. Running the application a few thousand times under the debugger would potentially not have shown me what the reason was, so I decided to do what I call constructive debugging. I created a simple console application project and tried to simulate the exact circumstances of the crash from the information I had via the memory dump and source code reading. The thread that was crashing was actually MS code from an old version of the Microsoft Caching Application Block. From reading the code I could conclude that the main thread called the Dispose method on the CacheManager class, which called Thread.Interrupt on the cache scavenger thread, which was just waiting for work to do. My first version of the repro looked like this:

        static void Main(string[] args)
        {
            Thread t = new Thread(ThreadFunc) { IsBackground = true, Name = "Test Thread" };
            t.Start();
            Console.WriteLine("Interrupt Thread");
            t.Interrupt();
        }

        static void ThreadFunc()
        {
            while (true)
            {
                object value = Dequeue(); // block until unblocked or awakened via ThreadInterruptedException
            }
        }

        static object WaitObject = new object();

        static object Dequeue()
        {
            object lret = "got value";
            try
            {
                lock (WaitObject)
                {
                }
            }
            catch (ThreadInterruptedException)
            {
                Console.WriteLine("Got ThreadInterruptedException");
                lret = null;
            }
            return lret;
        }

    I start a background thread, call Thread.Interrupt on it and then directly let the application terminate. The thread in the meantime does plenty of Monitor.Enter/Leave calls to simulate work on it. This first version did not crash. So I needed to dig deeper. From the memory dump I knew that the finalizer thread was running just some critical finalizers which were closing file handles. OK, let's add some long-running finalizers to the sample.

        class FinalizableObject : CriticalFinalizerObject
        {
            ~FinalizableObject()
            {
                Console.WriteLine("Hi we are waiting to finalize now and block the finalizer thread for 5s.");
                Thread.Sleep(5000);
            }
        }

        class Program
        {
            static void Main(string[] args)
            {
                FinalizableObject fin = new FinalizableObject();
                Thread t = new Thread(ThreadFunc) { IsBackground = true, Name = "Test Thread" };
                t.Start();
                Console.WriteLine("Interrupt Thread");
                t.Interrupt();
                GC.KeepAlive(fin); // prevent finalizing it too early
                // After leaving Main the other thread is woken up via Thread.Abort
                // while we are finalizing. This causes a stack overflow in the CLR
                // ThreadAbortException handling at this time.
            }
        }

    With this changed Main method and a blocking critical finalizer I got my crash, just like the real application. The funny thing is that this is actually a CLR bug. When the Main method is left, the CLR suspends all threads except the finalizer thread and declares all objects as garbage. After the normal finalizers have been called, the critical finalizers are executed, e.g. to free OS handles (usually). Remember that I called Thread.Interrupt as one of the last methods in the Main method.
    The Interrupt method is actually asynchronous: it wakes a thread up and throws a ThreadInterruptedException only once, unlike Thread.Abort, which rethrows the exception when an exception handling clause is left. It seems that the CLR does not expect a frozen thread to wake up again while the critical finalizers are executed. While trying to raise a ThreadInterruptedException, the CLR goes down with a stack overflow. Oops, not so nice. Why has nobody noticed this for years is my next question. As it turned out, this error only happens on the CLR for .NET 4.0 (x86 and x64). It does not show up in earlier or later versions of the CLR. I have reported this issue on Connect here, but so far it has not been confirmed as a CLR bug. But I would be surprised if my console application were to blame for a stack overflow in my test thread in a Monitor.Wait call. What is the moral of this story? Thread.Abort is evil, but Thread.Interrupt is too. It is so evil that even the CLR of .NET 4.0 contains a race condition during CLR shutdown. When the CLR gurus can get it wrong, the chances are high that you will get it wrong too when you use these constructs. If you do not believe me, see what Patrick Smacchia blogs about Thread.Abort and List.Sort. It is not only the CLR creators who can get it wrong; the BCL writers sometimes have a hard time with correct exception handling as well. If you tell me that you use Thread.Abort frequently and have never had problems with it, I suspect that you have not looked deeply enough into your application to find such sporadic errors.

    Read the article

  • Two new features in November 2009 CTP

    - by kaleidoscope
    Windows Azure Diagnostics Managed Library: The new Diagnostics API enables logging using standard .NET APIs. The Diagnostics API provides built-in support for collecting standard logs and diagnostic information, including the Windows Azure logs, IIS 7.0 logs, Failed Request logs, crash dumps, Windows Event logs, performance counters, and custom logs.

    Variable-size Virtual Machines (VMs): Developers may now specify the size of the virtual machine to which they wish to deploy a role instance, based on the role's resource requirements. The size of the VM determines the number of CPU cores, the memory capacity, and the local file system size allocated to a running instance. e.g.: <WebRole name="WebRole1" vmsize="ExtraLarge">
    Supported values for 'vmsize' are: Small, Medium, Large, and ExtraLarge.

    More information for Diagnostics Managed Library can be found at: http://msdn.microsoft.com/en-us/library/ee758705.aspx

    Girish, A
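    The Diagnostics API is only described in prose above; as a rough, hypothetical sketch of how a role might wire it up with that CTP-era SDK (the type and member names below are recalled from the 1.x Microsoft.WindowsAzure.Diagnostics library and should be treated as assumptions to verify):

        // Sketch only - assumes the CTP-era diagnostics assembly and a
        // "DiagnosticsConnectionString" setting pointing at a storage account.
        public override bool OnStart()
        {
            var config = DiagnosticMonitor.GetDefaultInitialConfiguration();
            config.Logs.ScheduledTransferPeriod = TimeSpan.FromMinutes(1); // push trace logs to storage every minute
            DiagnosticMonitor.Start("DiagnosticsConnectionString", config);
            return base.OnStart();
        }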

    Read the article

  • Google Bot trying to access my web app's sitemap

    - by geekrutherford
    Interesting find today...
    I was perusing the event log on our web server today for any unexpected ASP.NET exceptions/errors. Found the following:

        Exception information:
            Exception type: HttpException
            Exception message: Path '/builder/builder.sitemap' is forbidden.
        Request information:
            Request URL: https://www.bondwave.com:443/builder/builder.sitemap
            Request path: /builder/builder.sitemap
            User host address: 66.249.71.247
            User:
            Is authenticated: False
            Authentication Type:
            Thread account name: NT AUTHORITY\NETWORK SERVICE

    At first I thought this was maybe an attempt by a hacker to mess with the sitemap. Using a handy web site (www.network-tools.com) I did a lookup on the IP address and found it was a Google bot trying to crawl the application. In this case, I would expect an exception or 403 since the site requires authentication anyway.

    Read the article

  • Launching "Script of the Day" - Learn an amazing IT script sample every 24 hours

    - by Jialiang
    Every day is an opportunity to learn something or discover something new. Learn one IT script sample every day; be an IT master in a year! Microsoft All-In-One Script Framework offers "Script of the Day". "Script of the Day" introduces one amazing script sample every 24 hours that demonstrates a frequently asked IT task. If you are curious and passionate about learning something new, follow the "Script of the Day" RSS feed or visit the "Script of the Day" homepage, and share your feedback with us at [email protected].
    Subscribe to the RSS Feed: http://blogs.technet.com/b/onescript/rss.aspx?tags=ScriptOfTheDay

    Read the article

  • Configuring Full-Text Search for pdf and docx files

    - by Lukasz Kurylo
    I think it was in May that I created a little filters module based on Full-Text Search. I configured my dev machine, then did the same for two testing servers: one in our company for internal testing before we deployed to the client, and then the client's own testing server. Until last week this build was still on the testing server, and we finally got feedback that we could deploy it to the production one. I only mention this because I lost half a day: I had not correctly remembered what I did to configure the FTS on the previous servers and I had no notes for it. I foolishly trusted my memory. Lesson learned.

    For future reference, a bunch of steps to configure FTS for searching in *.pdf and *.docx files (and, by the way, in other Office files like *.xlsx):

    1. From the page (link) download and install the *.pdf IFilter for FTS.
    2. Add the folder where you installed the plugin to the PATH global system variable. The default for this version is: C:\Program Files\Adobe\Adobe PDF iFilter 9 for 64-bit platforms\bin
    3. From the page (link) download FilterPackx64.exe and install it.
    4. Now from SSMS execute the following procedures:
       - sp_fulltext_service 'load_os_resources', 1
       - sp_fulltext_service 'verify_signature', 0
    5. Restart the server.
    6. Now we must check if the plugins are visible:
       - select document_type, path from sys.fulltext_document_types where document_type = '.pdf'
       - select document_type, path from sys.fulltext_document_types where document_type = '.docx'
    7. If we see a result, then we can assume that everything is OK*.
    8. Now we can create a catalog for FTS and indexes on the appropriate columns.

    *I lost a lot of hours finding out why the plugin for *.pdf files wasn't indexing any files in the database, even though the sys.fulltext_document_types table contained a row for this plugin. After deeper investigation I found that the *.pdf files actually were indexed; at least the EOF sign was added to the indexes, and nothing more, for each file. In the end the problem was that I forgot to include the \bin part of the plugin path in the PATH variable.
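    For step 8, the actual catalog and index creation is not shown above; a minimal T-SQL sketch (assuming a hypothetical dbo.Documents table with a primary key index named PK_Documents, a varbinary(max) Content column, and an Extension column holding values such as '.pdf' or '.docx') might look like this:

        CREATE FULLTEXT CATALOG DocumentsCatalog;

        CREATE FULLTEXT INDEX ON dbo.Documents
        (
            Content TYPE COLUMN Extension LANGUAGE 1033  -- filter is chosen by the value in Extension
        )
        KEY INDEX PK_Documents ON DocumentsCatalog
        WITH CHANGE_TRACKING AUTO;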

    Read the article

  • TFS 2008 to TFS 2010 moves and some issues

    - by Enrique Lima
    There have been many things going on this year around TFS. Most of them had to do with migrations (I don’t call them upgrades for the most part, since they involved new hardware and such). Many were implementations using the Conchango SfTS template (now EMC). But there were others that were CMMI or Agile 4.0. Everything would move just fine, no issues. That was until you attempted to run Test Case Management or run the last configuration steps for Lab Management. There is an error that states a project is not ready to run or integrate with Test or Lab Management. And while there was some documentation on how to adjust and update the Agile WITs to work with it, there was still some disconnect in making it work with CMMI. Now there is a great post on how to run the “fix” from end to end. Check the post here: TFS 2010: Enable Test Case Management for upgraded Team Projects

    Read the article

  • Of transactions and Mongo

    - by Nuri Halperin
    Originally posted on: http://geekswithblogs.net/nuri/archive/2014/05/20/of-transactions-and-mongo-again.aspx

    What's the first thing you hear about NoSQL databases? That they lose your data? That there are no transactions? No joins? No hope for "real" applications? Well, you *should* be wondering whether a certain kind of database is the right one for your job. But if you do so, you should be wondering that about "traditional" databases as well!

    In the spirit of exploration let's take a look at a common challenge: You are a bank. You have customers with accounts. Customer A wants to pay B. You want to allow that only if A can cover the amount being transferred. Let's look at the problem without the context of any database engine in mind. What would you do? How would you ensure that the amount transfer is done "properly"? Would you prevent a "transaction" from taking place unless A can cover the amount? There are several options:

    1. Prevent any change to A's account while the transfer is taking place. That boils down to locking.
    2. Apply the change, and allow A's balance to go below zero. Charge person A some interest on the negative balance. Not friendly, but certainly a choice.
    3. Don't do either.

    Options 1 and 2 are difficult to attain in the NoSQL world. Mongo won't save you headaches here either. Option 3 looks a bit harsh. But here's where this can go: a ledger. See, an account doesn't need to be represented by a single row in a table of all accounts with only the current balance on it. More often than not, accounting systems use ledgers. And entries in ledgers - as it turns out - don't actually get updated. Once a ledger entry is written, it is not removed or altered. A transaction is represented by an entry in the ledger stating an amount withdrawn from A's account and an entry in the ledger stating an addition of said amount to B's account. For the sake of space-saving, that entry in the ledger can happen using one entry. Think {Timestamp, FromAccountId, ToAccountId, Amount}.

    The implication of the original question - "how do you enforce the non-negative balance rule" - then boils down to:

    1. Insert an entry in the ledger.
    2. Run validation of recent entries.
    3. Insert a reverse entry to roll back the transaction if validation failed.

    What is validation? Sum up the transactions that A's account has (all deposits and debits), and ensure the balance is positive. For the sake of efficiency, one can roll up transactions and "close the book" on transactions with a pseudo entry stating the balance as of midnight or something. This lets you avoid doing math on the fly over too many transactions. You simply run from the latest "approved balance" marker to date. But that's an optimization, and premature optimizations are the root of (some? most?) evil.

    Back to some nagging questions though: "But mongo is only eventually consistent!" Well, yes, kind of. It's not actually true that Mongo has no transactions. It would be more descriptive to say that Mongo's transaction scope is a single document in a single collection. A write to a Mongo document happens completely or not at all. So although it is true that you can't update more than one document "at the same time" under a "transaction" umbrella as an atomic update, it is NOT true that there is no isolation. So a competition between two concurrent updates is completely coherent and the writes will be serialized. They will not scribble on the same document at the same time.
    In our case - in choosing a ledger approach - we're not even trying to "update" a document, we're simply adding a document to a collection. So there goes the "no transaction" issue.

    Now let's turn our attention to consistency. What you should know about mongo is that at any given moment, only one member of a replica set is writable. This means that the writable instance in a set of replicated instances always has "the truth". There could be a replication lag such that a reader going to one of the replicas still sees an "old" state of a collection or document. But in our ledger case, things fall nicely into place: run your validation against the writable instance. It is guaranteed to have a ledger either with (after) or without (before) the ledger entry got written. No funky states. Again, the ledger writing *adds* a document, so there's no inconsistent document state to be had either way.

    Next, we might worry about data loss. Here, mongo offers several write-concerns. Write-concern in Mongo is a mode that marshals how uptight you want the db engine to be about actually persisting a document write to disk before it reports to the application that it is "done". The most volatile is to say you don't care. In that case, mongo would just accept your write command and say back "thanks" with no guarantee of persistence. If the server loses power at the wrong moment, it may have said "ok" but actually not written the data to disk. That's kind of bad. Don't do that with data you care about. It may be good for votes on a poll regarding how cute a furry animal is, but not so good for business. There are several other write-concerns, varying from flushing the write to the disk of the writable instance, flushing to disk on several members of the replica set, a majority of the replica set, or all of the members of a replica set. The former choice is the quickest, as no network coordination is required besides the main writable instance. The others impose extra network and time cost. Depending on your tolerance for latency and read-lag, you will face a choice of what works for you. It's really important to understand that no data loss occurs once a document is flushed to an instance. The record is on disk at that point. From that point on, backup strategies and disaster recovery are your worry, not loss of power to the writable machine. This scenario is not different from a relational database at that point.

    Where does this leave us? Oh, yes. Eventual consistency. By now, we ensured that the "source of truth" instance has the correct data, persisted and coherent. But because of lag, the app may have gone to the writable instance, performed the update and then gone to a replica and looked at the ledger there before the transaction replicated. Here are 2 options to deal with this. Similar to write-concerns, mongo supports read preferences. An app may choose to read only from the writable instance. This is not an awesome choice to make for every read, because it just burdens the one instance and doesn't make use of the other read-only servers. But this choice can be made on a query-by-query basis. So for the app that our person A is using, we can have person A issue the transfer command to B, and then if that same app is going to immediately ask "are we there yet?" we'll query that same writable instance. But B and anyone else in the world can just chill and read from the read-only instance. They have no basis to expect that the ledger has just been written to.
    So as far as they know, the transaction hasn't happened until they see it appear later. We can further relax the demand by creating application UI that reacts to a write command with "thank you, we will post it shortly" instead of "thank you, we just did everything and here's the new balance". This is a very powerful thing. UI design for highly scalable systems can't insist that all the databases be locked just to paint an "all done" on screen. People understand. They were trained by many online businesses already that your placing of an order does not mean that your product is already outside your door waiting (yes, I know, large retailers are working on it... but we're not there yet).

    The second thing we can do is add some artificial delay to a transaction's visibility on the ledger. The way that works is simply adding some logic such that the query against the ledger never nets a transaction for customers newer than, say, 15 minutes and whose validation flag is not set. This buys us time in 2 ways: replication can catch up to all instances by then, and validation rules can run and determine if this transaction should be "negated" with a compensating transaction. In case we do need to "roll back" the transaction, the backend system can place the timestamp of the compensating transaction at the exact same time or 1ms after the original one. Effectively, once A or B visits their ledger, both transactions would be visible and the overall balance "as of now" would reflect no change. The 2 transactions (attempted/reverted) would be visible, since we do actually account for the attempt.

    Hold on a second. There's a hole in the story: what if several transfers from A to some accounts are registered, and 2 independent validators attempt to compute the balance concurrently? Is there a chance that both would conclude non-sufficient-funds even though rolling back transaction 100 would free up enough for transaction 117 (some random later transaction)? Yes, there is that chance. But the integrity of the business rule is not compromised, since the prime rule is: don't dispense money you don't have. To minimize or eliminate this scenario, we can also assign a single validation process per origin account. This may seem non-scalable, but it can easily be done as a "sharded" distribution. Say we have 11 validation threads (or processing nodes etc.). We divide the account number space such that each validator is exclusively responsible for a certain range of account numbers. Sounds cunningly similar to Mongo's sharding strategy, doesn't it? Each validator then works in isolation. More capacity needed? Chop the account space into more chunks.

    So where are we now with the nagging questions? "No joins": Huh? What are those for? "No transactions": You mean no cross-collection and no cross-document transactions? Granted - but you don't always need them either. "No hope for real applications": well... There are more issues and edge cases to slog through, I'm sure. But hopefully this gives you some ideas of how to solve common problems without distributed locking and relational databases. But then again, you can choose relational databases if they suit your problem.
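    As a rough illustration of the ledger flow described above (a sketch only, with hypothetical collection and field names; this is not code from the original post), the insert-then-validate steps might look like this in the mongo shell:

        // Hypothetical collection and field names: {ts, from, to, amount}
        // 1. Insert the ledger entry for the transfer.
        db.ledger.insert({ ts: new Date(), from: "A", to: "B", amount: 100 });

        // 2. Validate: sum A's credits and debits from the ledger.
        var balance = 0;
        db.ledger.find({ $or: [{ from: "A" }, { to: "A" }] }).forEach(function (e) {
            balance += (e.to === "A" ? e.amount : -e.amount);
        });

        // 3. If A is overdrawn, insert a compensating entry rather than altering the original.
        if (balance < 0) {
            db.ledger.insert({ ts: new Date(), from: "B", to: "A", amount: 100 });
        }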

    Read the article

  • Do you know about the Visual Studio 2010 Database Projects Guidance?

    - by Martin Hinshelwood
    Early on in the Team System (now Visual Studio ALM) cycle a new product surfaced within Team System that was affectionately called “Data Dude”, but had the more formal name of “Visual Studio 2005 Team Edition for Database Professionals”. The purpose of this product was to try and make the database a “first class citizen” in the development world. Those that started using Visual Studio 2005 Team Edition for Database Professionals (Data Dude) loved it, but everyone else did not get it. The capabilities were a little patchy, but the one thing it did bring to the party was the ability to put your database schema under source control. This was revolutionary, as previously your DBA sat as far away from the team as possible, and usually in a dark cupboard; now they could partake of all the goodness of Version Control, Work Item Tracking and automated builds. The problem was that the understanding required to manage these projects was very different to that needed previously. Then the Visual Studio ALM Rangers got a hold of it… and produced some of the best guidance available.

    Figure: Download the guidance from http://vsdatabaseguide.codeplex.com/

    This guidance discusses scenarios and approaches for using the Database Projects in Visual Studio 2010 to help you use the tools more effectively and maximize their value to your organization. This guidance is focused on these five areas:

    - Solution and Project Management
    - Source Code Control and Configuration Management
    - Integrating External Changes with the Project System
    - Build and Deployment Automation with Visual Studio Database Projects
    - Database Testing and Deployment Verification

    Each of these areas has common guidance, usage scenarios, hands-on labs, and lessons learned from real world engagements and the community discussions.

    The guidance is broken down into three packages: guidance documentation, hands-on-lab (HOL) documentation, and a HOL package (note: the documentation is available in XPS-only format packages or complete XPS, PDF, DOCX format packages).

    If you need assistance and no one else can help, then you may need to call the Visual Studio ALM Rangers. The Visual Studio ALM Rangers have the mission to provide out-of-band solutions for missing features or guidance. They are supported by the Microsoft Product Group, Microsoft Consulting Services, Microsoft Most Valued Professionals (MVPs) and technical specialists from technology communities around the globe, giving you a real-world view from the field, where the technology has been tested and used. For more information on the Rangers please visit http://msdn.microsoft.com/en-us/vstudio/ee358786.aspx and for a list of other Rangers projects please see http://msdn.microsoft.com/en-us/vstudio/ee358787.aspx.

    Read the article

  • WP7 – Samsung Owners Should Hold Off on the Update

    - by D'Arcy Lussier
    Microsoft released an update for WP7 that’s meant to improve updating the devices going forward. They’ve identified a glitch though, specifically with Samsung phones. From Winrumors: A number of Windows Phone 7 users applied the patch on Monday and some Samsung Omnia owners devices have been left in a “bricked” state. Devices simply instruct users to connect them to a PC, hard resetting the device or connecting it to a PC does not appear to solve the issue. Microsoft has also been advising users with broken devices to return them to stores for exchange. The scary thing about this is the resolution: return the “broken” devices to stores for exchange. Many Samsung Focus owners in Canada purchased unlocked phones from the US, and supposedly that act alone voided the warranty. So applying the update has very dire implications for those who might be left with a very pretty brick. I’m going to wait until there are successful installs of the patched update before going ahead with it on my device.

    Read the article

  • A Basic Thread

    - by Joe Mayo
    Most of the programs written are single-threaded, meaning that they run on the main execution thread. For various reasons such as performance, scalability, and/or responsiveness, additional threads can be useful. .NET has extensive threading support, from the basic threads introduced in v1.0 to the Task Parallel Library (TPL) introduced in v4.0. To get started with threads, it's helpful to begin with the basics: starting a Thread.

    Why Do I Care?
    The scenario I'll use for needing to use a thread is writing to a file. Sometimes, writing to a file takes a while and you don't want your user interface to lock up until the file write is done. In other words, you want the application to be responsive to the user.

    How Would I Go About It?
    The solution is to launch a new thread that performs the file write, allowing the main thread to return to the user right away. Whenever the file writing thread completes, it will let the user know. In the meantime, the user is free to interact with the program for other tasks. The following examples demonstrate how to do this.

    Show Me the Code?
    The code we'll use to work with threads is in the System.Threading namespace, so you'll need the following using directive at the top of the file:

        using System.Threading;

    When you run code on a thread, the code is specified via a method. Here's the code that will execute on the thread:

        private static void WriteFile()
        {
            Thread.Sleep(1000);
            Console.WriteLine("File Written.");
        }

    The call to Thread.Sleep(1000) delays thread execution. The parameter is specified in milliseconds, and 1000 means that this will cause the program to sleep for approximately 1 second. This method happens to be static, but that's just part of this example, which you'll see is launched from the static Main method. A thread could be instance or static. Notice that the method does not have parameters and does not have a return type.

    As you know, the way to refer to a method is via a delegate. There is a delegate named ThreadStart in System.Threading that refers to a method without parameters or return type, shown below:

        ThreadStart fileWriterHandlerDelegate = new ThreadStart(WriteFile);

    I'll show you the whole program below, but the ThreadStart instance above goes in the Main method. The thread uses the ThreadStart instance, fileWriterHandlerDelegate, to specify the method to execute on the thread:

        Thread fileWriter = new Thread(fileWriterHandlerDelegate);

    As shown above, the argument type for the Thread constructor is the ThreadStart delegate type. The fileWriterHandlerDelegate argument is an instance of the ThreadStart delegate type. This creates an instance of a thread and what code will execute, but the new thread instance, fileWriter, isn't running yet. You have to explicitly start it, like this:

        fileWriter.Start();

    Now, the code in the WriteFile method is executing on a separate thread. Meanwhile, the main thread that started the fileWriter thread continues on its own. You have two threads running at the same time.

    Okay, I'm Starting to Get Glassy Eyed. How Does it All Fit Together?
    The example below is the whole program, pulling all the previous bits together. It's followed by its output and an explanation.
        using System;
        using System.Threading;

        namespace BasicThread
        {
            class Program
            {
                static void Main()
                {
                    ThreadStart fileWriterHandlerDelegate = new ThreadStart(WriteFile);
                    Thread fileWriter = new Thread(fileWriterHandlerDelegate);

                    Console.WriteLine("Starting FileWriter");
                    fileWriter.Start();
                    Console.WriteLine("Called FileWriter");
                    Console.ReadKey();
                }

                private static void WriteFile()
                {
                    Thread.Sleep(1000);
                    Console.WriteLine("File Written");
                }
            }
        }

    And here's the output:

        Starting FileWriter
        Called FileWriter
        File Written

    So, Why are the Printouts Backwards?
    The output above corresponds to Console.WriteLine statements in the program, with the second and third seemingly reversed. In a single-threaded program, "File Written" would print before "Called FileWriter". However, this is a multi-threaded (2 or more threads) program. In multi-threading, you can't make any assumptions about when a given thread will run. In this case, I added the Sleep statement to the WriteFile method to greatly increase the chances that the message from the main thread will print first. Without the Thread.Sleep, you could run this on a system with multiple cores and/or multiple processors and potentially get different results each time.

    Interesting Tangent but What Should I Get Out of All This?
    Going back to the main point, launching the WriteFile method on a separate thread made the program more responsive. The file writing logic ran for a while, but the main thread returned to the user, as demonstrated by the print out of "Called FileWriter". When the file write finished, it let the user know via another print statement. This was a very efficient use of CPU resources that made for a more pleasant user experience.

    Joe
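    Since the post mentions the Task Parallel Library introduced in .NET 4.0, here is a rough, hypothetical sketch (not from the original article) of the same fire-and-forget file write using a Task instead of a hand-created Thread; it assumes .NET 4.0 or later and a using directive for System.Threading.Tasks:

        // Sketch only - drop-in replacement for the Thread/ThreadStart pair in Main above.
        Task fileWriterTask = Task.Factory.StartNew(() =>
        {
            Thread.Sleep(1000);                // simulate a slow file write
            Console.WriteLine("File Written");
        });
        Console.WriteLine("Called FileWriter");
        fileWriterTask.Wait();                 // or keep working and observe completion later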

    Read the article

  • Null Values And The T-SQL IN Operator

    - by Jesse
    I came across some unexpected behavior while troubleshooting a failing test the other day that took me long enough to figure out that I thought it was worth sharing here. I finally traced the failing test back to a SELECT statement in a stored procedure that was using the IN T-SQL operator to exclude a certain set of values. Here’s a very simple example table to illustrate the issue:

        Customers
            CustomerId     INT, NOT NULL, Primary Key
            CustomerName   nvarchar(100) NOT NULL
            SalesRegionId  INT NULL

    The ‘SalesRegionId’ column contains a number representing the sales region that the customer belongs to. This column is nullable because new customers get created all the time but assigning them to sales regions is a process that is handled by a regional manager on a periodic basis. For the purposes of this example, the Customers table currently has the following rows:

        CustomerId  CustomerName  SalesRegionId
        1           Customer A    1
        2           Customer B    NULL
        3           Customer C    4
        4           Customer D    2
        5           Customer E    3

    How could we write a query against this table for all customers that are NOT in sales regions 2 or 4? You might try something like this:

        SELECT
            CustomerId,
            CustomerName,
            SalesRegionId
        FROM Customers
        WHERE SalesRegionId NOT IN (2,4)

    Will this work? In short, no; at least not in the way that you might expect. Here’s what this query will return given the example data we’re working with:

        CustomerId  CustomerName  SalesRegionId
        1           Customer A    1
        5           Customer E    3

    I was expecting that this query would also return ‘Customer B’, since that customer has a NULL SalesRegionId. In my mind, having a customer with no sales region should be included in a set of customers that are not in sales regions 2 or 4. When I first started troubleshooting my issue I made note of the fact that this query should probably be re-written without the NOT IN clause, but I didn’t suspect that the NOT IN clause was actually the source of the issue. This particular query was only one minor piece in a much larger process that was being exercised via an automated integration test and I simply made a poor assumption that the NOT IN would work the way that I thought it should.

    So why doesn’t this work the way that I thought it should? From the MSDN documentation on the T-SQL IN operator:

        If the value of test_expression is equal to any value returned by subquery or is equal to any expression from the comma-separated list, the result value is TRUE; otherwise, the result value is FALSE. Using NOT IN negates the subquery value or expression.

    The key phrase out of that quote is, “… is equal to any expression from the comma-separated list…”. The NULL SalesRegionId isn’t included in the NOT IN because of how NULL values are handled in equality comparisons. From the MSDN documentation on ANSI_NULLS:

        The SQL-92 standard requires that an equals (=) or not equal to (<>) comparison against a null value evaluates to FALSE. When SET ANSI_NULLS is ON, a SELECT statement using WHERE column_name = NULL returns zero rows even if there are null values in column_name. A SELECT statement using WHERE column_name <> NULL returns zero rows even if there are nonnull values in column_name.

    In fact, the MSDN documentation on the IN operator includes the following blurb about using NULL values in IN sub-queries or expressions that are used with the IN operator:

        Any null values returned by subquery or expression that are compared to test_expression using IN or NOT IN return UNKNOWN. Using null values in together with IN or NOT IN can produce unexpected results.
    If I were to include a ‘SET ANSI_NULLS OFF’ command right above my SELECT statement I would get ‘Customer B’ returned in the results, but that’s definitely not the right way to deal with this. We could re-write the query to explicitly include the NULL value in the WHERE clause:

        SELECT
            CustomerId,
            CustomerName,
            SalesRegionId
        FROM Customers
        WHERE (SalesRegionId NOT IN (2,4) OR SalesRegionId IS NULL)

    This query works and properly includes ‘Customer B’ in the results, but I ultimately opted to re-write the query using a LEFT OUTER JOIN against a table variable containing all of the values that I wanted to exclude because, in my case, there could potentially be several hundred values to be excluded. If we were to apply the same refactoring to our simple sales region example we’d end up with:

        DECLARE @regionsToIgnore TABLE (IgnoredRegionId INT)
        INSERT @regionsToIgnore VALUES (2),(4)

        SELECT
            c.CustomerId,
            c.CustomerName,
            c.SalesRegionId
        FROM Customers c
        LEFT OUTER JOIN @regionsToIgnore r ON r.IgnoredRegionId = c.SalesRegionId
        WHERE r.IgnoredRegionId IS NULL

    By performing a LEFT OUTER JOIN from Customers to the @regionsToIgnore table variable we can simply keep only the rows where the IgnoredRegionId is null, as those represent customers that DO NOT appear in the ignored regions list. This approach will likely perform better if the number of sales regions to ignore gets very large, and it also will correctly include any customers that do not yet have a sales region.
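    One more rewrite worth knowing about (not covered in the original post; shown here only as an alternative sketch) is NOT EXISTS, which sidesteps the NULL comparison problem because the correlated subquery simply finds no match for a NULL SalesRegionId:

        SELECT
            c.CustomerId,
            c.CustomerName,
            c.SalesRegionId
        FROM Customers c
        WHERE NOT EXISTS (
            SELECT 1
            FROM (VALUES (2),(4)) AS r(IgnoredRegionId)  -- table value constructor needs SQL Server 2008 or later
            WHERE r.IgnoredRegionId = c.SalesRegionId
        )

    With this form, ‘Customer B’ is returned because NULL = 2 and NULL = 4 both evaluate to UNKNOWN, so the subquery returns no rows and NOT EXISTS is true.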

    Read the article

  • Better Embedded 2012

    - by Valter Minute
    On 24 and 25 September 2012 the “Better Embedded 2012” conference will take place in Florence. The goal of the conference is to cover embedded systems from every angle, embracing firmware development as well as the operating systems and toolkits dedicated to building embedded devices. It is a great opportunity to compare notes and, in just two days, get a broad overview spanning hardware to Linux, Android to Windows CE, and the .NET Micro Framework to Qt, without too many commercial messages and with excellent openness both to commercial technologies and to free and open source ones. I will take part as a speaker, talking about Windows CE, but also as a spectator interested in many of the tracks of a program that keeps filling up and becomes more interesting as it goes. If you want to take the opportunity to visit Florence (which would already be reason enough!) and talk about embedded, contact me: as a speaker I can provide a discount code that will save you 20% on the conference price (which is already very good value, given the amount of embedded content). See you in Florence!

    Read the article

  • Team Foundation Service Preview now open for all!

    - by Tarun Arora
    The concept of TFS in the cloud was first presented back in early 2010; the product team worked hard to preview a constantly evolving solution at the BUILD conference last year, and after having completed 31 sprints, today the preview service has been opened to all. No more invitation codes required: TfsPreview has been made public! “Since we announced the Team Foundation Service Preview at the BUILD conference last year, we’ve limited the on boarding of new customers by requiring invitation codes to create accounts.  The main reason for this has been to control the growth of the service to make sure it didn’t run away from us and end up with a bad user experience.  In this time period, we’ve continued to work on our infrastructure, performance, scale, monitoring, management and, of course, some cool new features like cloud build. ”   - Brian Harry Since the service is still in preview, it is free for all… If you haven’t, now is the best time to try out the offering. There is no fixed timeline on how long before the service becomes chargeable, but the terms of service support production use, the service is reliable and the product team has committed to carrying all of your data forward into production. “The service will remain in “preview” for a while longer while we work through additional features like data portability, commercial terms, etc but the terms of service support production use, the service is reliable and we expect to carry all of your data forward into production. ”  - Brian Harry As of today it’s possible to use TFS Preview with VS 2012 RC, VS 2010 SP1 and VS 2008 SP1; the service currently does not work with VS 2005, which is something the product team is actively working on. You can refer to Brian’s announcement blog post here: http://blogs.msdn.com/b/bharry/archive/2012/06/11/team-foundation-service-preview-is-public.aspx

    Read the article

  • Self Welcoming Post on geekswithblogs.net

    - by OscarRibbeck
    Hello! As you may notice :), this is my first post on geekswithblogs.net. I have been using the .NET Framework mainly to develop ASP.NET web apps for some years now, and I am moving from the .NET Framework 2.0 to the latest features of the 4.x Frameworks. I am planning to document, whenever possible, some of the stuff I learn using this space kindly given by the staff of the site. The feedback I get will also be very important for my progress, and my plan is to learn a lot from what you guys can teach me with your comments on here. I also found myself needing somewhere to put code samples, because sometimes when you post on forums the entries get locked and you can't do anything to add relevant details to them. The code will either be explained in its entirety or will be posted via a link to an explained, working sample for you guys to test and learn from. My posts will be in English, and I am an intermediate English speaker/writer, so bear with me if it's not perfect sometimes; I am always learning something new though. I hope this gets to be a useful resource for anyone interested. Cheers and Happy Coding for everyone! Oscar

    Read the article

  • AJI Software is now a Microsoft Gold Application Lifecycle Management (ALM) Partner

    - by Jeff Julian
    Our team at AJI Software has been hard at work over the past year on certifications and projects that have allowed us to reach Gold Partner status in the Microsoft Partner Program.  We have focused on providing services that not only assist in custom software development, but also in process analysis and mentoring.  I definitely want to thank each one of our team members for all their work.  We are currently the only Microsoft Gold ALM Partner within a 500-mile radius of Kansas City. If you or your team is in need of assistance with Team Foundation Server, Agile processes, Scrum mentoring, or just a process/team assessment, please feel free to give us a call.  We also have practices focused on SharePoint, mobile development (iOS, Android, Windows Mobile), and custom software development with .NET.  Technorati Tags: Gold Partner, ALM, Scrum, TFS, AJI Software

    Read the article

  • VS 2010 snippet manager and quickCode

    - by Michael Freidgeim
    During the last few years I've used QuickCode and it has been very helpful. VS 2010 has a Code Snippets Manager that MS finally made quite convenient to use, so I will use it in the future. I will need to convert my QuickCode commands to VS snippets. I am sure I will miss the Alt-Q hotkey for some time.

    Read the article

  • Upgrading visual studio with Crystal Reports

    - by jkrebsbach
    I'm in the process of updating an app from Visual Studio 2003 to VS 2008.  It happens to have a couple dozen Crystal Reports that it executes regularly. I upgraded Visual Studio to 2008, and when attempting to generate the reports an exception was thrown. A significant portion of the rendering engine for Crystal Reports does not come from Crystal; it comes from Visual Studio, and those methods and properties have changed over the years.  I needed to upgrade the report-generating methods from the VS 2003 way of doing things to the VS 2008 way for the reports to generate successfully. Not only that, but this means that while we were previously rendering with Crystal 9 in VS 2003, Visual Studio 2008 will render per Crystal 10, which treats things like column widths in Excel differently (by default, at least), so now we have to go through all of our reports and compare outputs for Crystal just to upgrade the Visual Studio environment that I foolishly believed would not be affected.

    Read the article

  • MySQL - Configuration

    - by Stuart Brierley
    Having previously detailed how to install MySQL Server, the next step is configuring MySQL. The MySQL configuration wizard can either be run immediately following installation from the MySQL installation wizard or manually from the Start Menu. Following the splash screen you can then choose whether to run a detailed or standard configuration. The detailed configuration allows you to create the optimal configuration for your specific machine, whereas the standard configuration creates a general configuration that can then be manually tuned. I chose detailed.

    You are then asked to choose the type of server instance that is being configured. In this case it is a developer machine. Following this you are asked to choose the type of database usage that you expect on the server. I opted for multifunctional. You must then specify the location of the InnoDB tablespace. Next, specify the number of concurrent connections to the server.

    Now you must configure the networking options. I left Strict mode enabled as this is the recommended option, but I disabled TCP/IP networking as I wanted to restrict this MySQL installation to the local machine only. Set the character set that is best suited to your use - for me this was the default standard character set.

    Next up is the option to run MySQL as a service and whether or not to include the MySQL directories in the Windows PATH. I kept the "install as a Windows service" option enabled, but unchecked the "Launch MySQL server automatically" option. This is because I only want MySQL running when I specifically want to use it. I also enabled the "include in Windows PATH" option.

    You can then change the security settings for the MySQL installation. I opted to change the root password, disable root access from remote machines and disable anonymous access.

    You are now ready to execute the configuration. Once completed you should hopefully see the completed screen with lots of nice ticks against the various configuration tasks.
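    Once the wizard has finished, a quick sanity check (just a sketch; it assumes the service is running and that the include-in-Windows-PATH option was selected, so you can connect with mysql -u root -p) is to confirm that the chosen settings actually took effect:

        -- Character set chosen in the wizard
        SHOW VARIABLES LIKE 'character_set_server';
        -- Concurrent connection limit
        SHOW VARIABLES LIKE 'max_connections';
        -- InnoDB tablespace location
        SHOW VARIABLES LIKE 'innodb_data_home_dir';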

    Read the article
