Search Results



  • Using ASP.NET Membership Provider with an ACL

    - by geekrutherford
    Up until recently one of my applications has used the membership provider within ASP.NET exclusively. However, it has been proposed that while the currently defined roles are beneficial, security needs to be more granular, restricting both access to certain pages and functionality within a given page.

    Unfortunately, the role-based security ASP.NET gives you out of the box falls down in this area. This is not due to a lack of foresight by Microsoft; it was simply not designed for implementing both role-based security and any ACL you may define within those roles. Mind you, some would say an ACL is independent of the role to which a user belongs and is assigned to the user directly.

    The application mentioned here has its own User object (which encapsulates the membership provider user object as a property) and a SQL Server table to store extended information not present in the aspnet_users table. While I could have modified the aspnet membership schema to suit the application's needs, it seemed smarter to simply create a separate table with a foreign key back to the aspnet_users table. Since I have a separate object to store extended user information, I simply created an ACL object and exposed it as a property of my User object.

    This is all well and good, but it does not help with the SiteMapProvider and restricting access at the page level based on the user's ACL.

    The straightforward answer would be to develop some code within the databound event for the menu that checks the page title and has hardcoded logic dictating which permissions a user must have turned on. The problem with this approach is that it's HARDCODED!!! If you need to change access to a page you'd need to do a build and go through your normal deployment process... ugh!!!

    An alternative method, albeit not perfect, is to utilize the resourceKey property on the SiteMapNodes in the SiteMap file to hold the name of the permission required to view the page. Within the databound event for your menu you iterate the SiteMapNodes in the menu's SiteMapProvider, looking for a match at the page level based on title. When a match is detected, you switch on the SiteMapNode's resourceKey (the name of the ACL permission required). The case for that resourceKey ensures the user's ACL permission is turned on and voila!!!

    This is notably not perfect in that it uses the resourceKey in a manner other than intended. Since the application is not localized, using it in the manner described is not an issue.

    Below is a sample SiteMap file with the resourceKey used as the ACL permission identifier, followed by the ItemDataBound event. This application uses the Telerik Menu control.
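    The post's original samples did not survive this page's extraction, so what follows is a minimal reconstruction of the idea. The URLs, titles, permission names, and the CurrentUser/Acl members are hypothetical; only the pattern (match the node by title, switch on its resourceKey, toggle visibility from the user's ACL) is taken from the text above.

        <?xml version="1.0" encoding="utf-8" ?>
        <siteMap xmlns="http://schemas.microsoft.com/AspNet/SiteMap-File-1.0">
          <siteMapNode url="~/Default.aspx" title="Home">
            <!-- resourceKey repurposed as the name of the required ACL permission -->
            <siteMapNode url="~/Reports.aspx" title="Reports" resourceKey="CanViewReports" />
            <siteMapNode url="~/Admin.aspx" title="Admin" resourceKey="CanAdministerSite" />
          </siteMapNode>
        </siteMap>

    And a sketch of the ItemDataBound handler for the Telerik menu:

        protected void RadMenu1_ItemDataBound(object sender, Telerik.Web.UI.RadMenuEventArgs e)
        {
            // Find the SiteMapNode whose title matches the menu item being bound.
            SiteMapNode node = FindNodeByTitle(SiteMap.RootNode, e.Item.Text);
            if (node == null || string.IsNullOrEmpty(node.ResourceKey)) return;

            // The resourceKey names the ACL permission required to see this page.
            switch (node.ResourceKey)
            {
                case "CanViewReports":
                    e.Item.Visible = CurrentUser.Acl.CanViewReports;
                    break;
                case "CanAdministerSite":
                    e.Item.Visible = CurrentUser.Acl.CanAdministerSite;
                    break;
            }
        }

        private SiteMapNode FindNodeByTitle(SiteMapNode root, string title)
        {
            if (root.Title == title) return root;
            foreach (SiteMapNode child in root.ChildNodes)
            {
                SiteMapNode match = FindNodeByTitle(child, title);
                if (match != null) return match;
            }
            return null;
        }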

    Read the article

  • Oracle Sequences

    - by jkrebsbach
    Reminder to myself - SQL Server has nice identity columns directly tied to their tables; Oracle has sequences that are islands unto themselves.

        select seq_name.currval from dual;
        select seq_name.nextval from dual;

    currval - returns the current number at the top of the sequence
    nextval - increments the sequence by 1 and returns the new number

    Therefore, to create functionality in Oracle similar to an identity column:

    OPTION A) Create an insert trigger:

        CREATE OR REPLACE TRIGGER dept_bir
        BEFORE INSERT ON departments
        FOR EACH ROW
        WHEN (new.id IS NULL)
        BEGIN
          SELECT dept_seq.NEXTVAL INTO :new.id FROM dual;
        END;

    This will handle creating a unique identity, but will not necessarily inform the process flow of that identity without additional logic.

    OPTION B) Select the identity into a temp variable, then insert the whole item into the table.

    Note - when attempting to query currval, the below error was being thrown:

        SELECT seq_name.currval from dual;
        ERROR: TABLE OR VIEW DOES NOT EXIST

    Although Oracle sys tables may have access to the sequences, that isn't to say the Oracle user has access to those sequences - verify permissions when the system can't see objects that are reported in the object explorer.
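    A minimal sketch of Option B (the table and sequence names reuse those from Option A; the column list is made up):

        DECLARE
          v_id departments.id%TYPE;
        BEGIN
          -- Grab the next identity first so the rest of the process flow can use it.
          SELECT dept_seq.NEXTVAL INTO v_id FROM dual;
          INSERT INTO departments (id, name) VALUES (v_id, 'Accounting');
        END;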

    Read the article

  • JustCode Provides Reflector Alternative

    - by Joe Mayo
    If you've been a loyal Reflector user, you've probably been exposed to the debacle surrounding Red Gate's decision to no longer offer a free version.  Since then, the race has begun for a replacement from a provider that would stand by their promises to the community.  Mono has an ongoing free alternative, which has been available for a long time.  However, other vendors are stepping up to the plate with their own offerings.

    If Not Reflector, Then What?

    One of these vendors is Telerik.  In their recent Q1 2011 release of JustCode, Telerik offers a decompilation utility rivaling what we've become accustomed to in Reflector.  Not only does Telerik offer a usable replacement, but they've (in my opinion) produced a product that integrates more naturally with Visual Studio than any other product ever has.  Telerik's decompilation process is so easy that the accompanying demo in this post is blindingly short (except for the presence of verbose narrative). If you want to follow along with this demo, you'll need to have Telerik JustCode installed.  If you don't have JustCode yet, you can buy it or download a trial at the Telerik Web site.

    A Tall Tale; Prove It!

    With JustCode, you can view code in the .NET Framework or any other 3rd-party library (that isn't well obfuscated).  This demo depends on LINQ to Twitter, which you can download from CodePlex.com and reference, or install the package online as described in my previous post on NuGet.  Regardless of the method, you'll have a project with a reference to LINQ to Twitter.  Use a Console project if you want to follow along with this demo.

    Note:  If you've created a Console project, remember to ensure that the Target Framework is set to .NET Framework 4.  The default is .NET Framework 4 Client Profile, which doesn't work with LINQ to Twitter.  You can check by double-clicking the Properties folder on the project and inspecting the Target Framework setting.

    Next, you'll need to add some code to your program that you want to inspect. Here, I add code to instantiate a TwitterContext, which is like a LINQ to SQL DataContext, but works with Twitter:

        var l2tCtx = new TwitterContext();

    If you're following along, add the code above to the Main method, which will look similar to this:

        using LinqToTwitter;

        namespace NuGetInstall
        {
            class Program
            {
                static void Main(string[] args)
                {
                    var l2tCtx = new TwitterContext();
                }
            }
        }

    The code above doesn't really do anything, but it does give me something to show in demonstrating how JustCode decompilation works. Once the code is in place, click on TwitterContext and press the F12 (Go to Definition) key.  As expected, Visual Studio opens a metadata file with prototypes for the TwitterContext class.

    Opening a metadata file is the normal way that Visual Studio works when navigating to the definition of a type where you don't have the code.  This happens with TwitterContext because you don't have its source code in the project.  Visual Studio has always done this, and you can experiment by selecting any .NET type, i.e. a string, and observing that Visual Studio opens a metadata file for the .NET String type. The point I'm making here is that JustCode works the way Visual Studio works, and you'll see how this can make your job easier.

    In the metadata file, you only see prototypes associated with the code, i.e. notice that the default constructor is empty.  Again, this is normal because Visual Studio doesn't have the ability to decompile code.
    However, that's the purpose of this post: showing you how JustCode fills that gap. To decompile code, right-click on TwitterContext in the metadata file and select JustCode Navigate -> Decompile from the context menu.  The shortcut keys are Ctrl+1.  After a brief pause, accompanied by a progress window, you'll see the metadata expand into full decompiled code. The default constructor now has code, as opposed to the empty member prototype in the original metadata.

    And Why is This So Different?

    Again, the big deal is that Telerik JustCode decompilation works in harmony with the way that Visual Studio works.  The navigate-to functionality already exists, and you can use that, along with a simple context menu option (or shortcut key), to transform prototypes into decompiled code. Telerik is filling the Reflector/Red Gate gap by providing a supported alternative for decompiling code.  Many people, including myself, used Reflector to decompile code when we were stuck with buggy libraries or insufficient documentation.  Now we have an alternative that's officially supported by a company with an excellent track record for customer (developer) service, Telerik.  Not only that, JustCode has several other IDE productivity tools that make the deal even sweeter.

    Joe

    Read the article

  • Visual Studio 2012 Launch Winnipeg&ndash;Slides

    - by Dylan Smith
    The Winnipeg .NET User Group hosted a VS 2012 launch event at the IMAX in Winnipeg on Thursday, Dec 6.  Doing presentations on the giant IMAX screen is always fun, and I did the first 2 sessions on:

    End-To-End Application Lifecycle Management with TFS 2012
    Improving Developer Productivity with Visual Studio 2012

    Thanks to everybody that came out, and if anybody is interested my slide decks can be downloaded here: TFS 2012 Slides | VS 2012 Slides. Also, the virtual machine that I used to do my demos can be downloaded from Brian Keller's blog here: VS 2012 ALM Virtual Machine

    Read the article

  • KnockoutJS 2.3.0 - Uncaught Error: You cannot apply bindings multiple times to the same element.

    - by Aligned
    Originally posted on: http://geekswithblogs.net/Aligned/archive/2013/07/25/knockoutjs-2.3.0---uncaught-error-you-cannot-apply-bindings-multiple.aspx

    I upgraded KnockoutJS through NuGet and started getting the error 'Uncaught Error: You cannot apply bindings multiple times to the same element.' when I used applyBindings after the main page load. I had some dynamically added DOM elements, and re-applying bindings had worked before. It always seemed like a workaround/hack, but now Knockout is telling me that I shouldn't do it. The quick way to fix this is to call ko.cleanNode($('#id')[0]) before re-applying bindings, and this works. A different, possibly better, way, as suggested by x0n, might be to use templates and Knockout's template binding (<script type='text/html'>…</script>).

    Thanks again to the StackOverflow community for quickly providing me with the solution. Check out my question for all the details.
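    In code, the quick fix looks something like this (the element id and view model name are placeholders):

        // ko.cleanNode wants a raw DOM element, not a jQuery object.
        var node = $('#dynamicSection')[0];
        ko.cleanNode(node);
        ko.applyBindings(viewModel, node);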

    Read the article

  • FREE Windows Azure Platform Compute and Storage through the Cloud Essentials Pack for Partners

    - by Eric Nelson
    It can be difficult to find something to look forward to in January – but this year it was a little easier as a) I got lots of great Xbox 360 games and b) the Windows Azure Platform element of the Cloud Essentials Pack for Microsoft Partner Network partners went live. I have previously explained what the Cloud Essentials Pack is and how you can access it – but at the time I couldn't share the details of the Windows Azure Platform element.

    The Windows Azure Platform element is now available. It gives you each month, for FREE:

    Windows Azure: 750 hours of extra small compute instance, 25 hours of small compute instance, 3GB of storage and 250,000 storage transactions
    SQL Azure: 1 SQL Azure Web Edition database (5GB)
    Windows Azure AppFabric: 100,000 Access Control transactions and 2 Service Bus connections
    Data Transfer: 3GB in and 6GB out

    (More details of the offer)

    To activate this offer you need to:

    Sign your company up to Microsoft Platform Ready (NB: there are other routes to get this benefit – but I know about MPR)
    Read about Microsoft Platform Ready
    Visit http://www.microsoftcloudpartner.com/ and sign up.

    Read the article

  • Hyper-V Live Migration across Sites!

    - by Ryan Roussel
    One of the great sessions I sat in on at Tech Ed this week was stretching a Windows 2008 R2 Hyper-V failover cluster across sites.  With this ability, you could implement a Hyper-V cluster where you could migrate or even Live Migrate VMs across sites.  With this area's propensity for hurricanes, this will be a very popular topic for me over the next few months.

    While this technology is possible today, it's also very complicated and can be very expensive to implement.  First, your WAN connection has to support the ability to trunk your VLAN across both sites in order to Live Migrate.  This means you can't use a Layer 3 routed connection like MPLS.  It has to be a Metro Ethernet connection or "dark fiber" – unused fiber already in the ground that can be leased from various providers.  Both of these connections would allow you to trunk Layer 2 across your WAN.  Cisco does have the ability to trunk Layer 2 across a routed connection by muxing the traffic, but this is only available in their Nexus product line, which has a very steep price tag.

    If you are stuck with MPLS or the like and Nexus switching is not a realistic possibility, you will have to implement a multi-subnet cluster, in which case Live Migration won't be possible.  However, you can still fail over VMs to the remote site with some planning and manual intervention.  The consideration here is that the VMs will be on a different subnet once migrated, so you will have to change the IP addressing of your VMs.  This also has ramifications for DNS and name resolution to control your downtime.  DHCP with reservations for your VMs is the preferred method to achieve the IP changes, as this will automate that part of the process.

    Secondly, you will have to have a mechanism to replicate your storage across both sites.  Many SAN vendors natively support hardware-based synchronous and asynchronous replication.  Some even support Cluster Shared Volumes, which were introduced in 2008 R2.  If your SANs do not support this natively, there are alternative file-based replication products, either software-based like Double Take or hardware appliances like EMC's.  Be sure to check with your vendor on the support of Disk Majority if you're replicating your quorum disk between SANs.

    The last consideration is the ability to maintain quorum for your cluster.  If your replication provider does not support Disk Majority through replication, you will have to explore Node Majority with File Share Witness.  This will affect your design: a 3-node cluster with 1 node at the remote site and the FSW at the production site would not be able to maintain quorum if the production site was lost. The MS best practice for this would be to implement an even-node cluster with 2 nodes at each site and the FSW at a third site.

    And there you have it.  While some considerations and research go into implementing this solution, even a multi-subnet solution would be invaluable to organizations in the implementation of "warm" DR sites.

    Read the article

  • The path to MCSE:SharePoint. The Overview.

    - by Enrique Lima
    There have been some changes to certifications recently, and with that, new challenges and requirements.  In the past we had an MCTS and an MCITP or MCPD on a specific product, and that was it; now the story is somewhat different. You will need to not only know the product (yes, I am one that still focuses on knowing a product, not the test) but also the environment on which it sits or lives (therefore Windows Server and supporting services).

    The requirements for MCSE: SharePoint now take you through the MCSA: Windows Server 2012.  Many have questioned this; I don't. Why? I have seen plenty of "accidental SharePoint farm administrators" that have no background with the server OS, much less with the services (like DNS and IIS).  Again, I am not saying this will guarantee knowledge, but it does in some way require exposure to it.

    So, again, the next number of posts will provide guidance for the knowledge the test requirements call for.  If you have seen the way I go about this, you then know I don't focus on exam questions, but rather on providing guidance to the TechNet and MSDN documentation to get to know the product.  I will also, in this case, go through the process of setting up your virtual environment to play with the products and get to know them.

    Now, the requirements themselves are:

    MCSA: Windows Server 2012
    Exam 70-410: Installing and Configuring Windows Server 2012
    Exam 70-411: Administering Windows Server 2012
    Exam 70-412: Configuring Advanced Windows Server 2012 Services

    SharePoint-specific exams
    Exam 70-331: Core Solutions of Microsoft SharePoint Server 2013
    Exam 70-332: Advanced Solutions of Microsoft SharePoint Server 2013

    Passing the 5 exams will grant you the MCSE: SharePoint credential.

    Read the article

  • Announcing the Winnipeg Visual Studio.NET 2010 Launch Event!

    - by D'Arcy Lussier
    That’s right Winnipeg, we’re having our own Visual Studio.NET 2010 launch event on May 11th, brought to you by your local Winnipeg .NET User Group, Anvil Digital, Imaginet, Microsoft, and Protegra! We’re excited to bring a day of sessions highlighting developer productivity, application lifecycle management, and web development using these new technologies! We’re also thrilled to have this event at the IMAX Theatre at Portage Place!

    The event is FREE and we’re providing a continental breakfast for attendees. To register for the event, visit our registration site here. If you have any questions, please contact me through comments on this post or via email through my blog.

    D’Arcy

    Read the article

  • Virtualized data centre&ndash;Part four: The design

    - by marc dekeyser
    Welcome back to the fourth post in this series! Today we will have a look at what Microsoft recommends as a “private cloud design” and what I will make of it. Whilst my own solution is based on the reference architecture, it is quite different indeed! An important thing to know is that, whilst I am using the private cloud as a reference, I am skipping most of the steps in designing a private cloud. If that is why you are here, please read the links at the end of the article and skim through my own content. A private cloud is much more process-driven than just building a virtual infrastructure…

    The architecture of it all…

    So imagine for a minute that you have unlimited funds to build this lab of yours… You’d want redundancy on all levels and separation of each network where possible! Unfortunately we don’t have that luxury and, as you saw me hinting at in the previous article, our own design will be more limited but still quite capable!

    Networking

    From the networking perspective I will not have a fully redundant network; after all, this is but a lab environment! Thanks to Server 2012 I will be able to use bonding on my NICs and use LACP to improve the performance on that part.

    Storage

    As I mentioned in the previous article, a Synology DS1218+ will be used for iSCSI provisioning. This device has 2 NICs on board which can be bonded into one 2 Gbps interface, giving me a decent throughput and making the disks the most limiting factor in the storage design.

    Domain controllers and extra infrastructure

    Server 2012 completely supports running domain controllers virtualized and has no need to actually have a reachable DC when booting… That being said, I need a remote access machine to power on the hosts (I have no need for them running 24/7) and possibly a System Center VMM 2012 box (although Server 2012 is not supported until SP1 :( ). I am undecided on whether to install those boxes separately or as virtual machines… Which amounts to… something like the diagram in the original post!

    Sources

    Microsoft Private Cloud Solutions Repository (en-US): http://social.technet.microsoft.com/wiki/contents/articles/12131.microsoft-private-cloud-solutions-repository-en-us.aspx
    Reference Architecture: http://social.technet.microsoft.com/wiki/contents/articles/3819.reference-architecture-for-private-cloud.aspx
    Private Cloud Reference Model: http://social.technet.microsoft.com/wiki/contents/articles/4399.private-cloud-reference-model.aspx

    Read the article

  • Get to Know a Candidate (6 of 25): Jill Stein&ndash;Green Party

    - by Brian Lanham
    DISCLAIMER: This is not a post about “Romney” or “Obama”. This is not a post about whom I am voting for. Information sourced from Wikipedia.

    Stein is a physician with degrees from Harvard College and Harvard Medical School.  She serves on the boards of Greater Boston Physicians for Social Responsibility and MassVoters for Fair Elections, and has been active with the Massachusetts Coalition for Healthy Communities.

    Jill Stein advocates a "Green New Deal" in which renewable energy jobs would be created to address climate change and environmental issues, with the objective of employing "every American willing and able to work". Citing the research of Dr. Phillip Harvey, Professor of Law & Economics at Rutgers University, as evidence of the successful economic effects of the 1930s' New Deal projects, Stein would fund the plan with a 30% reduction in the U.S. military budget, returning US troops home, and increasing taxes on areas such as capital gains, offshore tax havens and multimillion-dollar real estate. Stein plans on impacting what she sees as a growing convergence of environmental crises in water, soil, fisheries and forests through the creation of sustainable infrastructure based in clean renewable energy generation and sustainable-communities principles, such as increasing intra-city mass transit and inter-city railroads, creating 'complete streets' that safely encourage bike and pedestrian traffic, and regional food systems based on sustainable organic agriculture.

    The Green Party of the United States was founded in 1991 as a voluntary association of state green parties. With its founding, the Green Party of the United States became the primary national Green organization in the United States, eclipsing the Greens/Green Party USA, which emphasized non-electoral movement building. The Green Party of the United States of America emphasizes environmentalism, non-hierarchical participatory democracy, social justice, respect for diversity, peace and nonviolence. Their "Ten Key Values," which are described as non-authoritative guiding principles, are as follows:

    Grassroots democracy
    Social justice and equal opportunity
    Ecological wisdom
    Nonviolence
    Decentralization
    Community-based economics
    Feminism and gender equality
    Respect for diversity
    Personal and global responsibility
    Future focus and sustainability

    The Green Party does not accept donations from corporations. Thus, the party's platforms and rhetoric critique any corporate influence and control over government, media, and American society at large.

    Stein has access to 403 electoral votes and is a write-in candidate in GA, IN, and MS.

    Learn more about Jill Stein and the Green Party on Wikipedia.

    Read the article

  • How To Validate an Email Address

    - by Richard Jones
    I’m using my blog as a bookmark service today. I just found this article on how to really validate whether an email address exists. I’ll have a go at wrapping it into a WCF web service.

    http://www.webdigi.co.uk/blog/2009/how-to-check-if-an-email-address-exists-without-sending-an-email/
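    The linked article's technique is a raw SMTP handshake that stops just short of sending a message. As a rough C# sketch of the idea (the MX host and addresses are placeholders; a real implementation would first resolve the recipient domain's MX record, which the BCL does not do out of the box):

        using System;
        using System.IO;
        using System.Net.Sockets;

        class SmtpProbe
        {
            static void Main()
            {
                using (var client = new TcpClient("mx.example.com", 25))   // the domain's MX host
                using (var stream = client.GetStream())
                using (var reader = new StreamReader(stream))
                using (var writer = new StreamWriter(stream) { AutoFlush = true, NewLine = "\r\n" })
                {
                    Console.WriteLine(reader.ReadLine());      // 220 banner
                    writer.WriteLine("HELO mydomain.com");
                    Console.WriteLine(reader.ReadLine());      // 250 OK
                    writer.WriteLine("MAIL FROM:<probe@mydomain.com>");
                    Console.WriteLine(reader.ReadLine());      // 250 OK
                    writer.WriteLine("RCPT TO:<someone@example.com>");
                    Console.WriteLine(reader.ReadLine());      // 250 = mailbox exists, 550 = it doesn't
                    writer.WriteLine("QUIT");                  // disconnect without sending anything
                }
            }
        }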

    Read the article

  • Register now for the UK Windows Azure Self-paced Interactive Learning Course starting May 10th

    - by Eric Nelson
    [Suggested twitter tag #selfpacedazure]

    We (myself and David Gristwood) have been working in the UK to create a fantastic opportunity to get yourself up to speed on the Windows Azure Platform over a 6-week period starting May 10th – without ever needing to leave the comfort of your home/office.  The course is derived from the internal training Microsoft gives on Azure, which is both fun and challenging in equal parts – and we felt it was just too good to keep to ourselves! We will be releasing more details nearer the date, but hopefully the following is enough to convince you to register and… recommend it to a colleague or three :-)

    What we have produced is the “Microsoft Azure Self-paced Learning Course”. This is a free, interactive, self-paced, technical training course covering the Windows Azure platform – Windows Azure, SQL Azure and the Azure AppFabric. The course takes place over a six-week period finishing on June 18th. During the course you will work from your own home or workplace, get involved via interactive Live Meeting sessions, watch on-line videos, work through hands-on labs, and research and complete weekly coursework assignments. The mentors and other attendees on the course will help you in your research and learning, and there are weekly Live Meetings where you can raise questions and interact with them.

    This is a technical course aimed at programmers, system designers, and architects who want a solid understanding of the Microsoft Windows Azure platform; hence a prerequisite for this course is at least six months programming in the .NET framework and Visual Studio. Check out the full details of the event or go straight to registration.

    The course outline is:

    Week 1 - Windows Azure Platform
    Week 2 - Windows Azure Storage
    Week 3 - Windows Azure Deep Dive and Codename "Dallas"
    Week 4 - SQL Azure
    Week 5 - Windows Azure Platform AppFabric Access Control
    Week 6 - Windows Azure Platform AppFabric Service Bus

    If you have any questions about the course and its suitability, please email [email protected].

    Read the article

  • Maps for Africa

    - by samkea
    My friends and some people from i-network Uganda have been asking me to help them with some international maps. I have uploaded these ones here; you can download and re-use them. They are not fully up to date, but they are the ones I use. I will continue posting any updates here and will remove the older ones if need be. Enjoy!

    Africa Boundaries: http://cid-5e33c7b2ca3aa2a9.skydrive.live.com/self.aspx/Maps/0412010AdminBdries^_Africa.rar

    I will be posting others soon, keep checking....

    Read the article

  • The True Cost of a Solution

    - by D'Arcy Lussier
    I had a Twitter chat recently with someone suggesting Oracle and SQL Server were losing out to OSS (Open Source Software) in the enterprise due to their issues with scaling or being too generic (one size fits all). I challenged that a bit, as my experience with enterprise-sized clients has been different – averse to OSS but receptive to an established vendor. The response I got was:

    "Found it easier to influence change by showing how X can’t solve our problems or X is extremely costly to scale. Money talks."

    I think this is definitely the right approach for anyone pitching an alternate or alien technology as part of a solution: identify the issue, identify the solution, then present pros and cons including a cost/benefit analysis. What can happen though is we get tunnel vision and don’t present a full view of the costs associated with a solution.

    An “Acura”te Example (I’m so clever…)

    This is my dream vehicle: a Crystal Black Pearl coloured Acura MDX with the SH-AWD package! We’re a family of 4 (5 if my daughters ever get their wish of adding a dog), and I’ve always wanted a luxury type of vehicle, so this is a perfect replacement in a few years when our Rav 4 has hit the 8 – 10 year mark.

    MSRP – $62,890

    But as we all know, that’s not *really* the cost of the vehicle. There are taxes and fees added on, there’s the extended warranty if I choose to purchase it, there’s the finance rate that needs to be factored in…

    MSRP – $62,890
    Taxes – $7,546
    Warranty – $2,500
    SubTotal – $72,936
    Finance Charge – $1,094.04
    Grand Total – $74,030

    Well! Glad we did that exercise – we discovered an extra $11k added on to the MSRP! Well, now we have our true price… or do we?

    Lifetime of the Vehicle

    I’m expecting to have this vehicle for 7 – 10 years. While the hard cost of the vehicle is known and dealt with, the costs to run and maintain the vehicle are on top of this. I did some research, and here’s what I’ve found:

    Fuel and Mileage

    Gas prices are high as it is for regular fuel, but getting into an MDX will require that I *only* purchase premium fuel, which comes at a premium price. I need to expect my bill at the pump to be higher. Comparing the MDX to my 2007 Rav4 also shows I’ll be gassing up more often. The Rav4 has a city MPG of 21, while the MDX plummets to 16! The MDX does have a bigger fuel tank though, so all in all the number of times I hit the pumps might even out. Still, I estimate I’ll be spending approximately $8,000 – $10,000 more on gas over a 10-year period than with my current Rav4.

    Service Options Limited

    Although I have options with my Toyota here in Winnipeg (we have 4 Toyota dealerships), I do go to my original dealer for any service work. Still, I like the fact that I have options. However, there’s only one Acura dealership in all of Winnipeg! So if, for whatever reason, I’m not satisfied with the level of service, I’m stuck.

    Non-Warranty Service Work

    Also, let’s not forget that there’s a bulk of work required every year that is *not* covered under warranty – oil changes, tire rotations, brake pads, etc. I expect I’ll need to get new tires at the 5-year mark as well, which can easily be $1,200 – $1,500 (I just paid $1,000 for new tires for the Rav4, and we’re at the 5-year mark). Now these aren’t going to be *new* costs that I’m not used to from our existing vehicles, but they should still be factored in. I’d budget $500/year, or $5,000 over the 10 years I’ll own the vehicle.
    Final Assessment

    So let’s re-assess the true cost of my dream MDX:

    MSRP – $62,890
    Taxes – $7,546
    Warranty – $2,500
    Finance Charge – $1,094
    Gas – $10,000
    Service Work – $5,000
    Grand Total – $89,030

    So now I have a better idea of the 10-year cost overall, and I’ve identified some concerns with local service availability. And there’s now much more to consider over the original $62,890 price tag.

    Tying This Back to Technology Solutions

    The process that we just went through is no different from what organizations do when considering implementing a new system, technology, or technology-based solution within their environments. It’s easy to tout the short-term cost savings of a particular product/platform/technology in a vacuum. But it’s when you consider the wider impact that the true cost comes into play.

    Let’s create a scenario: a company is not happy with its current data reporting suite. An employee suggests moving to an open source solution. The selling points are:

    - Because it’s open source, it’s free
    - The organization would have access to the source code, so they could alter it however they wished
    - It provides features not available in the current reporting suite

    At first this sounds great to the management and executive, but then they start asking some questions and uncover more information:

    - The OSS product is built on a technology not used anywhere within the organization
    - There are no vendors offering product support for the OSS product
    - The OSS product requires a specific server platform to operate on, one that’s not standard in the organization

    All of a sudden, the true cost of implementing this solution is starting to become clearer. The company might save money on licensing costs, but their training costs would increase significantly – developers would need to learn how to develop in the technology the OSS solution was built on, IT staff must learn how to set up and maintain a new server platform within their existing infrastructure, and if a problem was found there would be no vendor to contact for support.

    The true cost of implementing a “free” OSS solution is actually spinning up a project to implement it within the organization – no small cost. And that’s just the short-term cost. Now the organization must ensure they maintain trained staff who can make changes to the OSS reporting solution and IT staff that will stay knowledgeable in the new server platform. If those skills are very niche, then higher labour costs could be incurred if those people are hard to find or if trained employees use that knowledge as leverage for higher pay. Maybe a vendor exists that will contract out support, but then there are those costs to consider as well. And let’s not forget end-user training – in our example, anyone that runs reports will need to be trained on how to use the new system.

    Here’s the Point

    We still tend to look at software in an “off the shelf” kind of way. It’s very easy to say “oh, this product is better than vendor X’s product – and it’s free because it’s OSS!” but the reality is that implementing any new technology within an organization has a cost regardless of the retail price of the product. Training, integration, support – these are real costs that impact an organization and span multiple departments. Whether you’re pitching an improved business process, a new system, or a new technology, you need to consider the bigger-picture costs of implementation.
What you define as success (in our example, having better reporting functionality) might not be what others define as success if implementing your solution causes them issues. A true enterprise solution needs to consider the entire enterprise.

    Read the article

  • Save password in WCF adapter binding file

    - by Edmund Zhao
    The binding file for a WCF adapter doesn't save the password, whether it is generated by the "Add Generated Items..." wizard in Visual Studio or by "Export Bindings..." in the administration console. This is by design, due to security considerations, but it is very annoying, especially when you import bindings which contain multiple WCF send ports. The way to avoid retyping the password every time after an import is to edit the binding file before importing. Here is what needs to be done.

    1. Find the following string:

        &lt;Password vt="1" /&gt;

    "&lt;" means "<", "&gt;" means ">", and "vt" means "Variable Type". Variable type 1 is "NULL", so the above string can be translated to "<Password/>".

    2. Replace it with:

        &lt;Password vt="8"&gt;MyPassword&lt;/Password&gt;

    Variable type 8 is "string", so the above string can be translated to "<Password>MyPassword</Password>".

    Binding files use a lot of character entity references for XML character encoding purposes. For a list of the special character entity references, you can check from here.

    ...Edmund Zhao
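    If you import often, the replacement can be scripted; a minimal PowerShell sketch (the file name and password are placeholders):

        (Get-Content MySendPorts.BindingInfo.xml) `
            -replace '&lt;Password vt="1" /&gt;', '&lt;Password vt="8"&gt;MyPassword&lt;/Password&gt;' |
            Set-Content MySendPorts.BindingInfo.xml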

    Read the article

  • How to remove Visual J# .NET from installation package (MSI)

    - by Narendra Tiwari
    While creating a Web Setup, Visual J# .NET is automatically included in the MSI package. When we install this MSI on a server machine which does not have Visual J# .NET installed, the installer prompts a message to install Visual J# .NET. Usually we don't need to install Visual J# .NET, and adding it to the installer can be avoided. To do this:

    - Open the setup project (.vdproj) file in a text editor.
    - Find the LaunchCondition section below and remove the Visual J# .NET block from it (keep the IIS condition):

        "LaunchCondition"
        {
            "{836E08B8-0285-4809-BA42-01DB6754A45D}:_237E8F40F1A4464FBD27D8992CFDD623"
            {
                "Name" = "8:Visual J# .NET"
                "Condition" = "8:REQ_VJSLIB_VER_PRESENT = \"TRUE\""
                "Message" = "8:[VSDVJSMSG]"
                "InstallUrl" = "8:http://msdn.microsoft.com/vjsharp"
            }
            "{836E08B8-0285-4809-BA42-01DB6754A45D}:_DF1CA2119CD64D4B94CE993CF1624ACE"
            {
                "Name" = "8:IIS Condition"
                "Condition" = "8:IISVERSION >= \"#4\""
                "Message" = "8:[VSDIISMSG]"
                "InstallUrl" = "8:"
            }
        }

    - Save the .vdproj file and build again to generate a new MSI installer.
    - Install the MSI again on a new machine where J# does not exist; it should no longer prompt to install J#.

    Read the article

  • HTML5 data-* (custom data attribute)

    - by Renso
    Goal: store custom data with the data attribute on any DOM element and retrieve it.

    Previously, under HTML4, we used to use classes to store custom data, something to the effect of <input class="account void limit-5000 over-4999" />, and then had to parse the data out of the class. In a book published by Peter-Paul Koch in 2007, ppk on JavaScript, he explains why and how to use custom attributes to make data more accessible to JavaScript, using name-value pairs. Accessing a custom attribute account-limit=5000 is much easier and more intuitive than trying to parse it out of a class. Plus, what if the class name, for example "color-5", has a representative class definition in a CSS stylesheet that hides it away, or worse, some JavaScript plugin that automatically adds 5000 to it, or something crazy like that, just because it is a valid class name. As you can see, there are quite a few reasons why using classes is a bad design and why it was important to define custom data attributes in HTML5.

    Syntax: you define a data attribute by prefixing the name of any data item you want to store on an HTML element with "data-". For example, to store our customer's account data with a hidden input element:

        <input type="hidden" data-account="void" data-limit="5000" data-over="4999" />

    How to access the data:

        account - element.dataset.account
        limit   - element.dataset.limit

    You can also access it by using the more traditional get/setAttribute methods or, if using jQuery, $('#element').attr('data-account','void').

    Browser support: all except IE. There is an IE hack around this at http://gist.github.com/362081.

    Special note: be AWARE, do not use upper-case when defining your data elements, as it is all converted to lower-case when reading it. So data-myAccount="A1234" will not be found when you read it with element.dataset.myAccount. Use only lowercase when reading, so this will work: element.dataset.myaccount
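    Putting the pieces above together, a minimal sketch (the element id and values are made up):

        <input type="hidden" id="acct" data-account="void" data-limit="5000" data-over="4999" />
        <script>
          var el = document.getElementById('acct');
          console.log(el.dataset.account);            // "void"
          console.log(el.dataset.limit);              // "5000" (dataset values are always strings)
          console.log(el.getAttribute('data-over'));  // "4999" - the traditional route
        </script>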

    Read the article

  • Take advantage of the stimulus plan by hiring someone!

    - by Randy Walker
    In case you didn’t know, businesses can take advantage of the stimulus package by hiring an unemployed worker.  The Hiring Incentives to Restore Employment (HIRE) Act can pay the business portion of the Social Security taxes as well as give the business a $1,000 general business tax credit. If you’re unemployed, make sure to mention this to a potential employer! You can find out more information here on Intuit’s website:  http://www.qbenews.com/QB_Payroll/1003_qbpb/landing_01.html

    Read the article

  • Clear list of recent repositories in Git Extensions

    - by Marko Apfel
    Orphaned and wrongly specified repositories in the recent list are annoying, and at first I did not find an option to clean these entries, nor the place where they are persisted. So it was time for Process Explorer. The storage happens under:

        HKEY_CURRENT_USER\Software\GitExtensions\GitExtensions\1.0.0.0

    in the string value “history”. You could edit the content of the string value or delete it – then, after restarting Git Extensions, the string value will be recreated with a default skeleton.
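    For convenience, the same cleanup can be scripted; a minimal PowerShell sketch using the registry path above:

        # Deletes the recent-repositories list; Git Extensions recreates it on next start.
        Remove-ItemProperty -Path 'HKCU:\Software\GitExtensions\GitExtensions\1.0.0.0' -Name 'history'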

    Read the article

  • A good news for China and Japan developers: Sample Browser is localized to Chinese and Japanese

    - by Jialiang
    The Sample Browser Local Language Support feature is released today!  It supports a Simplified Chinese and a Japanese UI, and is optimized for Chinese and Japanese sample searches.  This should be good news for developers in China and Japan :)  We will add more languages in the near future.

    Install:  http://aka.ms/samplebrowser

    If you have already installed the previous version of Sample Browser, you can simply reopen it to get the auto-update. For example, in the Chinese UI, you can directly search samples in Chinese.  The Sample Browser is optimized to surface Chinese samples that match your query first.  This gives China and Japan developers a completely localized experience with code samples!

    Particular thanks to the Japan MVPs and Satoru Kitabata – Japan MVP Lead.  Our team learned of the strong need for a localized Sample Browser from them.  They also devoted much time to translating the UI elements to Japanese and making it available to Japan developers.

    Read the article

  • I Blame SNMP!

    - by brendonpage
    Anyone who has been reading my blog will have noticed that I have deviated slightly from my original post plan! This post was meant to be about uploading files in Silverlight, so what happened, you may ask? Well, last weekend I had some friends over for a LAN and one of them brought a managed switch which had just been purchased for work. He proceeded to show me how cool it was, how he planned on improving his work network, and how it could be monitored remotely via SNMP. After this explanation he started to google for a free SNMP graphing tool. After a few hours of hearing disgruntled mutterings from him, I asked what was wrong, and he proceeded to rant about how he couldn’t find any tools that suited his needs. It was at this point I thought the most dangerous thing a programmer can ever think: “I wonder how hard it would be to make one”. Of course the answer at the time is always “It can’t be that hard”, and so started my journey into SNMP.

    I am still in the early stages of this journey so I don’t have too much to report yet, but once I have finished the first version of my SNMP graphing tool I will definitely be posting about it! For now, if any of you are interested in doing SNMP development in C#, I would recommend looking at the #SNMP project on CodePlex (http://sharpsnmplib.codeplex.com/); it is the SNMP library I have decided to use, and thus far it works beautifully.
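    To give a flavour of the library, here is a short sketch of a simple SNMP GET based on the #SNMP library's Messenger helper (the agent address and community string are placeholders, and the exact signature may differ between library versions):

        using System;
        using System.Collections.Generic;
        using System.Net;
        using Lextm.SharpSnmpLib;
        using Lextm.SharpSnmpLib.Messaging;

        class SnmpGetDemo
        {
            static void Main()
            {
                // Query sysDescr (1.3.6.1.2.1.1.1.0) from a hypothetical agent.
                IList<Variable> result = Messenger.Get(
                    VersionCode.V1,
                    new IPEndPoint(IPAddress.Parse("192.168.1.1"), 161),
                    new OctetString("public"),
                    new List<Variable> { new Variable(new ObjectIdentifier("1.3.6.1.2.1.1.1.0")) },
                    5000);                                   // timeout in milliseconds

                Console.WriteLine(result[0].Data);
            }
        }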

    Read the article
