Search Results

Search found 48340 results on 1934 pages for 'microsoft test manager'.

Page 434/1934

  • Which relational database management system would a company adopt (for migration), if any: MS Access, MS SQL Server, or MySQL?

    - by Hassan Hagi
    Dear programmers, as part of my final-year university project, I am conducting research into relational database management systems such as Microsoft Office Access 2007, Microsoft SQL Server 2008 and MySQL 5.1. The description does not need to be detailed, however; I am trying to find empirical evidence and professional opinion to determine which of the three databases is best suited to a company of a given size (stated or unstated). OS: Microsoft Windows (XP or newer). Please consider the following, though full details are not necessary: memory management, migration, design constraints, integrity (data and other), triggers, user constraints, ease of use, performance, crash recovery (not of the operating system), and total cost of ownership (TCO). Any information on open source (as it relates to the three RDBMSs) is also welcome. Thank you for your time and help. Hassan Hagi

    Read the article

  • Welcome to new blog!! Agile.NAV

    - by ssmantha
    I am quite ecstatic to announce a new blog, of which I am a co-author: http://agilenav.wordpress.com. Agile.NAV brings a vast amount of information about the work I did together with my colleague on bringing Microsoft Dynamics NAV under the hood of Team Foundation Server. For the past couple of years we have been working on creating development tools (more on the integration side) for Microsoft Dynamics NAV, which include version control, an automated build system, and our new automated testing integration with Dynamics NAV 2013. To start off, we got very good initial responses from distinguished community members like Luc van Vugt (see here). The idea is to drive a shift in mind-set for the Microsoft Dynamics NAV developer community. We share the same passion as people like Luc about creating software in a professional manner.

    Read the article

  • grub-probe: error: cannot find a GRUB drive for /dev/sda1

    - by Thilina
    I'm trying to add Windows 7 to the GRUB bootloader of my new 12.10 install. Nothing I tried worked, such as the methods based on copying bootx64.efi. I'm getting this output:

        grub-probe --target=fs_uuid /boot/efi/EFI/Microsoft/Boot/bootmgfw.efi
        grub-probe: error: cannot find a GRUB drive for /dev/sda1.  Check your device.map.

    My device.map:

        (hd0) /dev/disk/by-id/ata-WDC_WD6400BPVT-55HXZT3_WD-WXD1EA1MSVR4

    My 40_custom entry:

        menuentry "Microsoft Windows x86_64 UEFI-GPT" {
            insmod part_gpt
            insmod fat
            insmod search_fs_uuid
            insmod chain
            search --fs-uuid --no-floppy --set=root 80BD-E086
            chainloader (${root})/efi/Microsoft/Boot/bootmgfw.efi
        }

    Booting into Windows 7 gives me a blank black screen with a cursor that blinks for 2 seconds, then the machine reboots. I've tried boot-repair too. I think I'm missing the Windows UEFI bootloader files.

    Read the article

  • Odd company release cycle: Go Distributed Source Control?

    - by MrLane
    Sorry about this long post, but I think it is worth it! I have just started with a small .NET shop that operates quite differently from other places where I have worked. Unlike any of my previous positions, the software written here is targeted at multiple customers, and not every customer gets the latest release of the software at the same time. As such, there is no "current production version." When a customer does get an update, they also get all of the features added to the software since their last update, which could have been a long time ago. The software is highly configurable and features can be turned on and off: so-called "feature toggles." Release cycles are very tight here; in fact they are not on a schedule: when a feature is complete, the software is deployed to the relevant customer.

    The team only last year moved from Visual SourceSafe to Team Foundation Server. The problem is they still use TFS as if it were VSS and enforce checkout locks on a single code branch. Whenever a bug fix goes out into the field (even for a single customer), they simply build whatever is in TFS, test that the bug was fixed, and deploy to the customer! (Coming from a pharma and medical-devices software background, I find this unbelievable.) The result is that half-baked dev code gets put into production without even being tested. Bugs are always slipping into release builds, but often a customer who just got a build will not see these bugs if they don't use the feature the bug is in. The director knows this is a problem, as the company is starting to grow all of a sudden, with some big clients coming on board and more smaller ones.

    I have been asked to look at source control options in order to eliminate deploying buggy or unfinished code, but without sacrificing the somewhat asynchronous nature of the team's releases. I have used VSS, TFS, SVN and Bazaar in my career, but TFS is where most of my experience has been. Previously, most teams I have worked with used a two- or three-branch solution of Dev-Test-Prod, where developers work directly in Dev for a month and changes are then merged to Test, then Prod, or promoted "when it's done" rather than on a fixed cycle. Automated builds were used, using either CruiseControl or Team Build. In my previous job, Bazaar was used sitting on top of SVN: devs worked in their own small feature branches, then pushed their changes to SVN (which was tied into TeamCity). This was nice in that it was easy to isolate changes and share them with other people's branches. With both of these models there was a central dev and prod (and sometimes test) branch through which code was pushed, and labels were used to mark builds in prod from which releases were made (and these were made into branches for bug fixes to releases and merged back to dev).

    This doesn't really suit the way of working here, however: there is no order to when various features will be released; they get pushed when they are complete. With this requirement, the "continuous integration" approach as I see it breaks down. To get a new feature out with continuous integration, it has to be pushed via dev-test-prod, and that will capture any unfinished work in dev. I am thinking that to overcome this we should go down a heavily feature-branched model with NO dev-test-prod branches; rather, the source should exist as a series of feature branches which, when development work is complete, are locked, tested, fixed, locked, tested and then released. Other feature branches can grab changes from other branches when they need or want them, so eventually all changes get absorbed into everyone else's. This fits very much with the pure Bazaar model I experienced at my last job. As flexible as this sounds, it just seems odd not to have a dev trunk or prod branch somewhere, and I am worried about branches forking never to re-integrate, or small late changes being made that never get pulled across to other branches, and developers complaining about merge disasters... What are people's thoughts on this?

    A second, final question: I am somewhat confused about the exact definition of distributed source control. Some people seem to suggest it is just about not having a central repository like TFS or SVN, some say it is about being disconnected (SVN is 90% disconnected and TFS has a perfectly functional offline mode), and others say it is about feature branching and ease of merging between branches with no parent-child relationship (TFS also has baseless merging!). Perhaps this is a second question!

    Read the article

  • Release build vs nightly build

    - by Tuomas Hietanen
    Hi! A typical solution is to have a CI (continuous integration) build running on a build server: it analyzes the source code, makes a build (in debug), runs tests, measures test coverage, etc. Another commonly known build type is the "nightly build": do the slow stuff, like generating code documentation, creating a setup package, deploying to a test environment, and running automatic (smoke or acceptance) tests against that environment. Now, the question: is it better to have a third, separate "release build"? Or should the nightly build be done in release mode and used as the release? What are you using in your company? (The release build should also add some kind of tag to source control for the potential product version.)

    Read the article

  • Sarg Daily Report not generated

    - by Suleman
    Ubuntu 12.04 LTS, Squid 3, Sarg 2.3.2. The following is my crontab configuration:

        SHELL=/bin/sh
        PATH=/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin
        # m h dom mon dow user  command
        25 6 * * * root test -x /usr/sbin/anacron || ( cd / && run-parts --report /etc/cron.daily )
        47 6 * * 7 root test -x /usr/sbin/anacron || ( cd / && run-parts --report /etc/cron.weekly )
        52 6 1 * * root test -x /usr/sbin/anacron || ( cd / && run-parts --report /etc/cron.monthly )

    This does not generate the daily HTML report.

    Read the article

  • Books are Dead! Long Live the Books!

    - by smisner
    We live in interesting times with regard to the availability of technical material. We have lots of free written material online in the form of vendor documentation, forums, blogs, and Twitter. And we have written material that we can buy in the form of books, magazines, and training materials. Online videos and training – some free and some not – are also an option. All of these formats are useful for one need or another.

    As an author, I pay particular attention to the demand for books, and for now I see no reason to stop authoring them. I assure you that I don’t get rich from the effort, and fortunately that is not my motivation. As someone who likes to refer to books frequently, I am still a big believer in books and have evidence from book sales that there are others like me. If I can do my part to help others learn about the technologies I work with, I will continue to produce content in a variety of formats, including books. (You can view a list of all of my books on the Publications page of my site and my online training videos at Pluralsight.)

    As a consumer of technical information, I prefer books because a book typically can get into a topic much more deeply than a blog post, and can provide more context than vendor documentation. It comes with a table of contents and a (hopefully accurate) index that helps me zero in on a topic of interest, and of course I can use the Search feature in digital form. Some people suggest that technology books are outdated as soon as they get published. I guess it depends on where you are with technology. Not everyone is able to upgrade to the latest and greatest version at release. I do assume, however, that the SQL Server 7.0 titles in my library have little value for me now – but I’m certain that the minute I discard a book, I’m going to want it for some reason! Meanwhile, as electronic books overtake physical books in sales, my husband is grateful that I can continue to build my collection digitally rather than physically, as the books have a way of taking over significant square footage in our house!

    Blog posts, on the other hand, are useful for describing the scenarios that come up in real-life implementations that wouldn’t fit neatly into a book. As many years as I have been working with the Microsoft BI stack, I still run into new problems that require creative thinking. Likewise, people who work with BI and other technologies that I use share what they learn through their blogs. Internet search engines help us find information in blogs that simply isn’t available anywhere else. Another great thing about blogs is the connection to community and the dialog that can ensue between people with common interests.

    With the trend towards electronic formats for books, I imagine that we’ll see books continue to adapt to incorporate different forms of media and better ways to keep the information current. At the moment, I wish I had a better way to help readers with my last two Reporting Services books. In the case of the Microsoft® SQL Server™ 2005 Reporting Services Step by Step book, I have heard of many cases of readers having problems with the sample database that shipped on CD – either the database was missing or it was corrupt. So I’ve provided a copy of the database on my site for download from http://datainspirations.com/uploads/rs2005sbsDW.zip.

    Then for the Microsoft® SQL Server™ 2008 Reporting Services Step by Step book, we decided to avoid the database problem by using the AdventureWorks2008 samples that Microsoft published on Codeplex (although code samples are still available on CD). We had this silly idea that the URL for the download would remain constant, but it seems that expectation was ill-founded. Currently, the sample database is found at http://msftdbprodsamples.codeplex.com/releases/view/37109, but I have no idea how long that will remain valid.

    My latest books (#9 and #10 – milestones I never anticipated), Building Integrated Business Intelligence Solutions with SQL Server 2008 R2 and Office 2010 (McGraw-Hill, 2011) and Business Intelligence in Microsoft SharePoint 2010 (Microsoft Press, 2011), will not ship with a CD, but will provide all code samples for download at a site maintained by the respective publishers. I expect that the URLs for the book downloads will remain valid, but there are lots of references to other sites that can change or disappear over time. Does that mean authors shouldn’t make reference to such sites? Personally, I think the benefits gained from including links are greater than the risk of the links becoming invalid at some point.

    Do you think the time for technology books has come to an end? Is the delivery of books in electronic format enough to keep them alive? If technological barriers were no object, what would make a book more valuable to you than other formats through which you can obtain information?

    Read the article

  • How to encrypt php folder under /var/www?

    - by sirchaos
    I need to encrypt the folder /var/www/test, which contains PHP files. The goal is to prevent any user from reading the PHP content; if the HD is mounted on another computer, /var/www/test should stay encrypted; and if the computer boots up without any user logged in, the system should still be able to serve the data in /var/www/test. What is the correct approach for this? I've tried "ecryptfs-setup-private" as advised in How to encrypt /var/www? yet it didn't work for me – I might have missed something. I tested the folders by booting with an Ubuntu 12.04 installation disk and mounting the drive, and I was able to access the /var/www/test content, which is exactly what I want to prevent. gnome-encfs isn't the way to go, since its decryption happens when a user logs on to the system, and I would like the system to keep working after a power failure etc. without anyone logged in. Please advise.

    Read the article

  • Azure November CTP updates

    - by kaleidoscope
    Below are some modifications to note that shipped in the latest Nov CTP. 1. The StorageClient class has been deprecated; the StorageClient methods are now found in Microsoft.WindowsAzure.StorageClient, and CloudStorageAccount replaces the StorageAccountInfo from the July CTP. 2. The basic interface for RoleEntryPoint (from which we inherit our WebRole and WorkerRole) has been changed in the Nov CTP. We now have three new methods called OnStart(), OnStop() and Run(); the methods that have been discontinued are Start() and Stop(). You can find more information on RoleEntryPoint at: http://msdn.microsoft.com/en-us/library/microsoft.windowsazure.serviceruntime.roleentrypoint.aspx Lokesh, M
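    As a rough illustration of the reshaped entry point, here is a minimal worker-role sketch against the OnStart()/Run()/OnStop() overrides named above (the class name and the body of the work loop are illustrative assumptions, not code from the CTP samples):

        using System.Threading;
        using Microsoft.WindowsAzure.ServiceRuntime;

        public class WorkerRole : RoleEntryPoint
        {
            // OnStart replaces the old Start(): return true once initialization succeeds.
            public override bool OnStart()
            {
                return base.OnStart();
            }

            // Run is new in the Nov CTP: the role instance stays up while this method runs.
            public override void Run()
            {
                while (true)
                {
                    // hypothetical work loop
                    Thread.Sleep(10000);
                }
            }

            // OnStop replaces the old Stop(): clean up before shutdown.
            public override void OnStop()
            {
                base.OnStop();
            }
        }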

    Read the article

  • Creating symlink for Postgres

    - by Edwin
    As a developer, I often ssh right into my local database, just to test my application before pushing my code. However, I find that every time I want to access Postgres, I have to type:

        postgres@ubuntu:~$ /usr/local/pgsql/bin/psql test

    whereas on my work machine, all I have to do is type:

        postgres@ubuntu:~$ psql --dbname=test --username=user

    I tried creating a symlink, which was successful, but whenever I try connecting through this shortcut, I get the following error message:

        psql: could not connect to server: No such file or directory
                Is the server running locally and accepting
                connections on Unix domain socket "/var/run/postgresql/.s.PGSQL.5432"?

    How do I get this to work? In case it makes any difference, I'm using a self-compiled version of the 9.1.x series.

    Read the article

  • Why doesn't the Visual Studio C compiler like this? [migrated]

    - by justin
    The following code compiles fine on Linux using gcc -std=c99, but gets the following errors from the Visual Studio 2010 C compiler:

        Microsoft (R) 32-bit C/C++ Optimizing Compiler Version 16.00.40219.01 for 80x86
        Copyright (C) Microsoft Corporation.  All rights reserved.

        fib.c
        fib.c(42) : error C2057: expected constant expression
        fib.c(42) : error C2466: cannot allocate an array of constant size 0
        fib.c(42) : error C2133: 'num' : unknown size

    The user inputs the number of Fibonacci numbers to generate. I'm curious as to why the Microsoft compiler doesn't like this code. http://pastebin.com/z0uEa2zw

    Read the article

  • Sometimes you have to brag on your employer

    - by Mickey Gousset
    A lot of you know me as an Application Lifecycle Management MVP and a huge proponent of ALM, TFS, and Visual Studio. For my day job, however, I work for Infront Consulting Group, a System Center consulting and training organization. I love what I do there, and work closely with Operations Manager, Service Manager, and Orchestrator – and, believe it or not, I use a lot of ALM best practices around all of those. Infront was just recognized as a 2012 Microsoft Corporate Account Virtualization Data Center Services Partner of the Year. This award recognizes a solution partner that has demonstrated leadership and commitment in driving Microsoft virtualization solutions in the Microsoft Corporate Account segment. I’m very proud of Infront and all the hard work that everyone here has put into the incredible services we provide, which led to us winning this award.

    Read the article

  • Unit testing - getting started

    - by higgenkreuz
    I am just getting started with unit testing, but I am not sure I really understand the point of it all. I have read tutorials and books on the subject, but I have two quick questions: 1. I thought the purpose of unit testing was to test code we actually wrote. However, it seems that in order to just be able to run the test, we have to alter the original code – at which point we are not really testing the code we wrote, but rather the code we wrote for testing. 2. Most of our code relies on external sources. Upon refactoring our code, however, even if it would break the original code, our tests would still run just fine, since the external sources are just mock-ups inside our test cases. Doesn't that defeat the purpose of unit testing? Sorry if I sound dumb here, but I thought someone could enlighten me a bit. Thanks in advance.
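    To make the poster's second point concrete, here is a minimal C# sketch of the pattern being described (all names here are hypothetical): the production class is reworked to take its external source through an interface – the "altering the original code" the poster mentions – and the test substitutes a fake with canned values, so the test exercises the calculation logic rather than the external source:

        // Hypothetical production code: the dependency is expressed as an interface
        // precisely so that a test can substitute its own implementation.
        public interface IRateSource
        {
            decimal GetRate(string currency);
        }

        public class PriceCalculator
        {
            private readonly IRateSource _rates;
            public PriceCalculator(IRateSource rates) { _rates = rates; }

            public decimal ToEuro(decimal amount, string currency)
            {
                return amount * _rates.GetRate(currency);
            }
        }

        // Hypothetical test code: the external source is a mock-up with a canned value.
        public class FakeRateSource : IRateSource
        {
            public decimal GetRate(string currency) { return 0.5m; }
        }

        public static class PriceCalculatorTest
        {
            public static void Main()
            {
                var calc = new PriceCalculator(new FakeRateSource());
                System.Console.WriteLine(calc.ToEuro(10m, "USD") == 5m ? "PASS" : "FAIL");
            }
        }

    Note that this tests only the arithmetic in ToEuro; whether the real rate source behaves as assumed is deliberately out of scope, which is exactly the trade-off the question is probing.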

    Read the article

  • yield – Just yet another sexy c# keyword?

    - by George Mamaladze
    yield (see the MSDN C# reference) came, I guess, with .NET 2.0, and my feeling is that it’s not as widely used as it could (or should) be. I am not going to talk here about the necessity and advantages of using the iterator pattern when accessing custom sequences (just google it). Let’s look at it from the clean-code point of view, and see if it really helps us keep our code understandable, reusable and testable.

    Let’s say we want to iterate a tree and do something with its nodes – for instance, calculate the sum of their values. The most elegant way would be to build a recursive method performing a classic depth traversal and returning the sum:

        private int CalculateTreeSum(Node top)
        {
            int sumOfChildNodes = 0;
            foreach (Node childNode in top.ChildNodes)
            {
                sumOfChildNodes += CalculateTreeSum(childNode);
            }
            return top.Value + sumOfChildNodes;
        }

    “Do One Thing”

    Nevertheless, it violates one of the most important rules: “Do One Thing”. Our method CalculateTreeSum does two things at the same time: it travels inside the tree and performs some computation – in this case, calculating a sum. Doing two things in one method is definitely a bad thing, for several reasons:

    - Understandability: readability / refactoring.
    - Reusability: when overriding, there is no chance to override the computation without copying the iteration code, and vice versa.
    - Testability: you cannot test the computation without constructing the tree, and you cannot test the correctness of the tree iteration on its own.

    I want to spend some more words on this last issue. How do you test the method CalculateTreeSum when it contains two in one: computation and iteration? The only chance is to construct a test tree and assert the result of the method call – in our case the sum – against our expectation. And if the test fails, you do not know whether the computation algorithm was wrong or the iteration. To top it all off: according to Murphy’s Law, the iteration will have a bug as well as the calculation, and both bugs in combination will cause the sum to be accidentally exactly the one you expect, and the test will PASS. :)

    Ok, let’s use yield!

    That’s why it is generally a very good idea not to mix but to isolate “things”. Ok, let’s use yield!

        private int CalculateTreeSumClean(Node top)
        {
            IEnumerable<Node> treeNodes = GetTreeNodes(top);
            return CalculateSum(treeNodes);
        }

        private int CalculateSum(IEnumerable<Node> nodes)
        {
            int sumOfNodes = 0;
            foreach (Node node in nodes)
            {
                sumOfNodes += node.Value;
            }
            return sumOfNodes;
        }

        private IEnumerable<Node> GetTreeNodes(Node top)
        {
            yield return top;
            foreach (Node childNode in top.ChildNodes)
            {
                foreach (Node currentNode in GetTreeNodes(childNode))
                {
                    yield return currentNode;
                }
            }
        }

    The two methods do not know anything about each other. One contains the calculation logic, the other just the iteration logic. You can replace the tree iteration algorithm – switch from depth traversal to breadth traversal, or use a stack or the visitor pattern instead of recursion – and this will not influence your calculation logic. And vice versa: you can replace the sum with a product, or do whatever you want with the node values; the calculation algorithm is not aware of working on some tree or graph.

    How about not using yield?

    Now let’s ask the question: what if we did not have the yield operator? A brief look at the generated code gives us an answer. The compiler generates a roughly 150-line class to implement the iteration logic:

        [CompilerGenerated]
        private sealed class <GetTreeNodes>d__0 : IEnumerable<Node>, IEnumerable, IEnumerator<Node>, IEnumerator, IDisposable
        {
            ...
            150 lines of generated code
            ...
        }

    Often we compromise code readability, cleanness, testability, etc. to reduce the number of classes, code lines, keystrokes and mouse clicks. This is human nature – we are lazy. Knowing and using a sexy construct like yield allows us to be lazy, write very few lines of code and at the same time stay clean and do one thing per method. That’s why I generally welcome using stuff like that.

    Note: The recursive depth traversal algorithm used above is possibly the most compact one, but not the best from the performance and memory-utilization point of view. It was chosen to emphasize the other primary aspects of this post.

    Read the article

  • VirtualBox appliance for the Oracle Communications Service Delivery Platform (SDP) Products

    - by chlander
    It's been quite a while since we last blogged. This post is written by Leif Lourie, a curriculum developer for the Oracle Communications Service Delivery Platform (SDP) products. For the last 8 years, Leif has worked as a curriculum developer for many of the telecom-oriented products that Oracle offers. He has been working in the telecom industry for about 25 years and has also worked as a software developer, project manager, and solutions architect. He is currently working on courseware for an upcoming release of one of the Service Delivery Platform products. Thanks to Leif, not only for this blog, but for making the VM described in it available. – Cheryl Lander, Oracle Communications InfoDev Senior Director

    Being able to download, install and test a product within a day is often very important for the people doing the primary evaluation of a software product. If it takes longer, it will require a bigger effort, like a proof-of-concept project with many people involved. Of course, if the product is chosen for a more thorough test, that will probably happen anyway, but then maybe with a focus on integration instead of product features. We have a long tradition of creating complex software that is easy to install and test, and we have often been praised for the ease of getting our products up and running. One key to this has been that there has always been an installer for Windows as well as for the production environments, which are usually Unix and Linux. And the Windows installer has, in most cases, been released for development and testing purposes.

    Lately, this has changed. Our products are very seldom released for the Windows platform at all, and even the Linux versions are almost always released for 64-bit systems. This is creating problems for many of the people who want to try out our products, since few have access to a 64-bit Linux system of the right platform. Most of us are using a laptop with Windows or Mac OS; some of us are using Linux or Solaris, but probably a distribution that is not certified for the product you want to test.

    My job, among other things, is to develop hands-on practices for our products. For me, it is crucial to have access to environments for installing and using our products, and for this reason I have been using virtual machines for many years. I have a ready-made base system with the necessary tools installed for all the products I create hands-on practices for. Whenever I start working on practices for a new product or a new version, I just copy the base system and start working with a clean slate. This saves me a lot of time! Now, I would like to start saving time for my favorite student: you!

    If you are using our products and regularly test new versions, you might benefit from the virtual machine that is now available on Oracle Technology Network: the Virtual Machine for the Oracle Communications Service Delivery Platform (SDP) Products. This virtual machine contains an installation of the 64-bit version of Oracle Enterprise Linux, version 6. It also has Oracle Database Express Edition (XE), Oracle Java and Oracle Enterprise Pack for Eclipse installed. By using Oracle VM VirtualBox you may use Windows, OS X, Linux or Solaris on your laptop; VirtualBox can be installed on top of any of these platforms and gives you the ability to run virtual machines on your laptop.

    After downloading and starting the virtual machine, you will also need to download the installation files for the product you want to test, for example Oracle Communications Services Gatekeeper or Oracle Communications Online Mediation Controller. In some cases there are lessons and practices available for the products. The freely available courses are listed in the Oracle Learning Library as a Collection of Oracle Communications Service Delivery Platform Courses. As time goes by, we will make this collection bigger. Also, the goal is to update the virtual machine about one to two times per year, so you will always be able to get a well-maintained virtual machine for the Service Delivery Platform products from us.

    We Value Your Feedback. If you would like to suggest improvements or report issues on any of the product documentation, curriculum, or training produced by the Oracle Communications Information Development team, you can use these channels: email [email protected], or post a comment on this blog. Thanks for reading!

    Read the article

  • How can I associate .doc files to MS Word 2010 using the same .desktop file as launcher?

    - by nastys
    I'm trying to associate .doc and .docx files with MS Word 2010 using the same .desktop file as the Unity dash and launcher, so I can use the Word icon in the launcher. I tried:

        [Desktop Entry]
        Name=Microsoft Word 2010
        Exec=env WINEPREFIX="/home/nastys/.mso2010" wine "C:/Program Files/Microsoft Office/Office14/WINWORD.exe" %f
        Type=Application
        StartupNotify=true
        Comment=Create and edit professional-looking documents such as letters, papers, reports, and booklets by using Microsoft Word.
        Icon=29F5_WINWORD.0
        StartupWMClass=WINWORD.EXE
        MimeType=application/msword; application/vnd.openxmlformats-officedocument.wordprocessingml.document;

    Using this .desktop file I can launch Word with its icon in the Unity launcher, but if I associate .doc files with this same file, Word will launch but won't open the .doc file. If I associate .doc files with any .desktop file generated by Wine, it will launch Word, but with the Wine icon.

    Read the article

  • Why isn't cron running my script?

    - by Jingqiang Zhang
    I want to use the Backup and Whenever gems to automatically back up my database. When I connect to the server by ssh as an added user and run backup perform -t my_backup, it works well. But the cron entry:

        0 22 * * * /bin/bash -l -c 'backup perform -t my_backup'

    doesn't run at 22:00. When I use cat /etc/crontab to check cron's config file, it contains:

        SHELL=/bin/sh
        PATH=/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin
        # m h dom mon dow user  command
        17 * * * * root cd / && run-parts --report /etc/cron.hourly
        25 6 * * * root test -x /usr/sbin/anacron || ( cd / && run-parts --report /etc/cron.daily )
        47 6 * * 7 root test -x /usr/sbin/anacron || ( cd / && run-parts --report /etc/cron.weekly )
        52 6 1 * * root test -x /usr/sbin/anacron || ( cd / && run-parts --report /etc/cron.monthly )

    /bin/bash and /bin/sh are different. Is that the reason? What should I do?

    Read the article

  • Windows Azure Platform TCO/ROI Analysis Tool

    - by kaleidoscope
    Microsoft has released a tool to help you figure out how much money you can save by switching to Windows Azure from your on-premises solution. The tool provides a customized estimate of the potential cost savings you (or your company or organization) may achieve by building on the Windows Azure Platform. Upon completion of the TCO and ROI Calculator profile analysis, you will be presented with a detailed report showing estimated line-item costs for an accurate TCO, and a 1-to-3-year ROI analysis for you or your company or organization. You should not interpret the analysis report you receive as a commitment on the part of Microsoft, and Microsoft makes no guarantees regarding the accuracy of any information presented in it. Nor should you view the results of this report as a substitute for engaging a third-party expert to independently evaluate your or your company’s specific computing needs. The analysis report you will receive is for informational purposes only. For more information check this link. Geeta, G

    Read the article

  • FREE Windows Azure Platform Compute and Storage through the Cloud Essentials Pack for Partners

    - by Eric Nelson
    It can be difficult to find something to look forward to in January – but this year it was a little easier, as a) I got lots of great Xbox 360 games and b) the Windows Azure Platform element of the Cloud Essentials Pack for Microsoft Partner Network partners went live. I have previously explained what the Cloud Essentials Pack is and how you can access it – but at the time I couldn’t share the details of the Windows Azure Platform element. That element is now available. It gives you, each month, for FREE:

    Windows Azure:
    - 750 hours of extra small compute instance
    - 25 hours of small compute instance
    - 3GB of storage and 250,000 storage transactions

    SQL Azure:
    - 1 SQL Azure Web Edition database (5GB)

    Windows Azure AppFabric:
    - AppFabric with 100,000 Access Control transactions and 2 Service Bus connections

    Plus data transfer: 3GB in and 6GB out. (More details of the offer.)

    To activate this offer you need to: sign your company up to Microsoft Platform Ready (NB: there are other routes to get this benefit – but I know about MPR), read about Microsoft Platform Ready, then visit http://www.microsoftcloudpartner.com/ and sign up.

    Read the article

  • TFS 2008 to TFS 2010 moves and some issues

    - by Enrique Lima
    There have been many things going on this year around TFS. Most of them had to do with migrations (I don’t call them upgrades, for the most part, since they involved new hardware and such). Many were implementations using the Conchango SfTS template (now EMC), but there were others that were CMMI or Agile 4.0. Everything would move just fine, no issues – until you attempted to run Test Case Management or the last configuration steps for Lab Management. There is an error stating that a project is not ready to run or integrate with Test or Lab Management. And while there was some documentation on how to adjust and update the Agile WITs to work with it, there was still some disconnect in making it work with CMMI. Now there is a great post on how to run the “fix” from end to end. Check the post here: TFS 2010: Enable Test Case Management for upgraded Team Projects

    Read the article

  • Registration free hosting for ASP.NET web service

    - by Andrew
    I've built a simple ASP.NET web service, tested it locally, and would like to test it when externally hosted. Are there free hosting services available where I can just upload the assembly and service description file and test it straight away, without registering an account, etc.? My service does not do anything malicious, and I am OK with running it in a restricted (security sandbox, bandwidth, calls per second, etc.) environment. I have heard about appharbor.com, but it looks like overkill for testing a simple web service.

    Read the article

  • Ubuntu 13.04 on Lenovo G580 > built-in (web)cam not recognized

    - by user159295
    I've installed Ubuntu 13.04 on a Lenovo G580, and I noticed that the built-in webcam is not being recognized. Does anybody know how to fix this and get the webcam running? Any help is appreciated! Thanks.

    Update: I've downloaded/installed the suggested program, and the webcam works within that program. However, when running the Camera Test from System Testing, here's what happens: when clicking on "Test", NO image shows up; a few seconds after clicking "Test", an error message pops up saying "Sorry, Ubuntu 13.04 has experienced an internal error". In Skype, video is enabled, and under "select webcam" there is "Lenovo EasyCamera (/dev/video0)" as the only option selected. But Skype doesn't recognize the camera, as there is no picture. Any ideas on how to get this straightened out?

    Read the article

  • Q&A: How do I cancel my Windows Azure Platform Introductory Special? (or any Subscription)

    - by Eric Nelson
    Short answer: Don’t! Just kidding :-) Long answer: I believe it is the same process as for other Microsoft Online Services – but I have never tried it, hence please post a comment if you follow this successfully (or not) and I will amend. Start from http://www.microsoft.com/online/help/en-us/mocp/ and search for “cancel”. What I am not clear about is whether an Introductory Special is classed as a trial. Either way, the answer is to contact support and ask to cancel. I would suggest you arm yourself with the details of your subscription, which you can get by signing in to https://mocp.microsoftonline.com. You can contact support via an online web form at https://mocp-support.custhelp.com/, or you can call them; the details are again on the support page http://www.microsoft.com/online/help/en-us/mocp/. In the UK you can call 0800 731 8457 or (0) 20 3027 6039, Monday–Friday 09:00–17:00 GMT (UTC). I hope that helps.

    Read the article

  • How come verification does not include actual testing?

    - by user970696
    Having read a lot about this topic, I still do not get it. Verification should prove that you are building the product right, while with validation you prove you are building the right product. But only static techniques are ever mentioned as verification methods (code reviews, requirements checks...). How can you say something is implemented correctly if you do not test it? It is said that verification checks, for example, the code for its correctness. Verification: ensure that the product meets the specified requirements. Again, if a function is specified to work in a certain way, only by testing can I say that it does. Could anyone explain this to me, please? EDIT: As Wikipedia says – Verification: preparing the test cases (based on analysis of the requirements). Validation: running the test cases.

    Read the article

  • Powershell – script all objects on all databases to files

    - by Nigel Rivett
    <#
    This simple PowerShell routine scripts out all the user-defined functions, stored
    procedures, tables and views in all the databases on the server that you specify,
    to the path that you specify.

    SMO must be installed on the machine (it is if SSMS is installed).

    To run: set the server name and path, open a command window, run powershell, paste
    the script below into the window and press Enter. It will create the subfolders
    for the databases and objects if necessary.
    #>
    $path = "C:\Test\Script\"
    $ServerName = "MyServerNameOrIpAddress"
    [System.Reflection.Assembly]::LoadWithPartialName('Microsoft.SqlServer.SMO')
    $serverInstance = New-Object ('Microsoft.SqlServer.Management.Smo.Server') $ServerName
    $IncludeTypes = @("Tables","StoredProcedures","Views","UserDefinedFunctions")
    $ExcludeSchemas = @("sys","Information_Schema")
    $so = New-Object ('Microsoft.SqlServer.Management.Smo.ScriptingOptions')
    $so.IncludeIfNotExists = 0
    $so.SchemaQualify = 1
    $so.AllowSystemObjects = 0
    $so.ScriptDrops = 0   # 0 = script CREATE statements, not DROPs

    $dbs = $serverInstance.Databases
    foreach ($db in $dbs)
    {
        $dbname = "$db".replace("[","").replace("]","")
        $dbpath = "$path" + "$dbname" + "\"
        if (!(Test-Path $dbpath)) { $null = New-Item -Type Directory -Name "$dbname" -Path "$path" }
        foreach ($Type in $IncludeTypes)
        {
            $objpath = "$dbpath" + "$Type" + "\"
            if (!(Test-Path $objpath)) { $null = New-Item -Type Directory -Name "$Type" -Path "$dbpath" }
            foreach ($objs in $db.$Type)
            {
                if ($ExcludeSchemas -notcontains $objs.Schema)
                {
                    $ObjName = "$objs".replace("[","").replace("]","")
                    $OutFile = "$objpath" + "$ObjName" + ".sql"
                    $objs.Script($so) + "GO" | Out-File $OutFile  # add -Append to accumulate
                }
            }
        }
    }

    Read the article
