Search Results

Search found 101927 results on 4078 pages for 'ms sql server'.


  • Reading an embedded file from an ASP.NET Custom Server Control and rendering it

    - by Andreas Grech
    I currently have a file "abc.htm" in my Custom Server Control project, and its Build Action is set to Embedded Resource. Now, in the RenderContents(HtmlTextWriter output) method, I need to read that file and render it on the website. I am trying the following, but to no avail:

        protected override void RenderContents(HtmlTextWriter output)
        {
            var providersURL = Page.ClientScript.GetWebResourceUrl(typeof(OpenIDSel), "OpenIDSelector.Providers.htm");
            var fi = new FileInfo(providersURL); // <- exception here
            // the remaining code is to possibly render the file
        }

    This is an example of what providersURL looks like: /WebResource.axd?d=kyU2OiYu6lwshLH4pRUCUmG-pzI4xDC1ii9u032IPWwUzMsFzFHzL3veInwslz8Y0&t=634056587753507131 and FileInfo throws System.ArgumentException: Illegal characters in path.
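
    A likely fix: GetWebResourceUrl returns a virtual URL (WebResource.axd) for the browser to request, not a filesystem path, so FileInfo cannot open it. A minimal sketch that instead reads the embedded resource straight from the assembly's manifest (reusing the resource name from the snippet above, which must be the fully qualified manifest name):

        protected override void RenderContents(HtmlTextWriter output)
        {
            // The embedded file lives inside the control's assembly, not on disk.
            var assembly = typeof(OpenIDSel).Assembly;
            using (var stream = assembly.GetManifestResourceStream("OpenIDSelector.Providers.htm"))
            using (var reader = new System.IO.StreamReader(stream))
            {
                output.Write(reader.ReadToEnd());
            }
        }

    If GetManifestResourceStream returns null, the resource name is wrong; assembly.GetManifestResourceNames() lists the exact names to use.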

    Read the article

  • VisualSVN Server + Trac Authentication Problems

    - by danscott
    I have Trac set up on my VisualSVN Server (using Subversion authentication); however, every time I navigate to the Trac home page after opening the browser, I get the basic authentication dialog asking me for my username/password. What I would like is a login form in Trac that lets me stay logged in indefinitely using cookies. I have tried installing the AccountManagerPlugin, but I am completely unsure of how to set it up correctly. (I am used to working with IIS on corporate intranets, so this is kind of alien to me.) I have managed to bypass the basic authentication dialog by setting this in my httpd-custom.conf:

        AuthName "Trac"
        AuthType Basic
        AuthBasicProvider file
        AuthUserFile "E:/Repositories/htpasswd"
        #Require valid-user

    I have tried using SvnServePasswordStore as my password store, but I do not know which of the files in the repository directory to point it at. Help would be appreciated!
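
    For what it's worth, svnserve-style password files conventionally live at <repository>/conf/passwd, which is probably the file SvnServePasswordStore wants. A trac.ini sketch along these lines may be a starting point; the [account-manager] option names are assumptions here and should be checked against the AccountManagerPlugin documentation:

        [components]
        acct_mgr.* = enabled

        [account-manager]
        password_store = SvnServePasswordStore
        password_file = E:/Repositories/MyRepo/conf/passwd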

    Read the article

  • 64-bit COM (ActiveX) server

    - by Velja Radenkovic
    Hello, I have an ActiveX server exe that was building and registering fine on a 32-bit OS. I wanted to make a 64-bit version of that exe by upgrading the project to Visual Studio 2010 and changing the platform to x64, which apparently doesn't work. The application itself works, but I don't see it registered after running That.exe /RegServer. I would appreciate any usable advice on migrating ActiveX from 32-bit to x64. The code that processes the /RegServer param is below:

        if (lstrcmpi(lpszToken, _T("RegServer")) == 0)
        {
            _Module.UpdateRegistryFromResource(IDR_OUTDISKSARG, TRUE);
            nRet = _Module.RegisterServer(TRUE);
            bRun = false;
            break;
        }

    The 32-bit ActiveX is unusable for me since I have to load it in an x64 .NET process.
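
    One thing worth checking: 32-bit COM servers register their classes under the WOW64 view of the registry, so if the exe that actually ran was still a 32-bit build, the entries land in the Wow6432Node hive where a 64-bit client never looks. A quick way to see where (if anywhere) the class got registered, with {your-clsid} as a placeholder:

        reg query "HKCR\CLSID\{your-clsid}" /s
        reg query "HKCR\Wow6432Node\CLSID\{your-clsid}" /s

    If only the second query returns results, the x64 build either failed to register or the 32-bit binary is still the one being launched.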

    Read the article

  • Traffic consumed by Team Foundation Server 2010

    - by micha12
    We are currently selecting source control and issue tracking software, and are looking towards Team Foundation Server 2010. Some participants in our project often have slow Internet connections (for example, during travel), and therefore it is important for us to have a source control system that does not consume too much traffic. I was unable to find information on traffic consumption when using TFS 2010. Does anyone have such info? Does TFS 2010 support traffic compression? Do other source control systems (like SVN, for example) produce less or more traffic than TFS 2010?

    Read the article

  • Using Remote Web Server to Initialize iPhone App

    - by Chris_K
    My iPhone app relies on a vendor's XML feed to provide data. But that feed is not locked down. The vendor could change the format of the XML at any time, although so far they've promised not to. Since I might want to tell my app to use a different URL for its data source, I'd like to set up a single "Command Central" Web page, on my own server, to direct the app to the correct data source. In other words, each time my app starts, in the background and unseen by the user, it would visit "http://www.myserver.com/iphoneapp_data_sources.xml" to retrieve the URL for retrieving data from my vendor. That way, if my vendor suddenly changes the exact URL or the XML feed that the app needs, I can update that Web page and ensure that all installations of the app are using the correct XML feed. Does anyone have any advice or examples showing this kind of approach? It seems as if this must be a common problem, but so far I haven't found a well-established design pattern that fits it.
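
    As one possible shape for such a file (purely illustrative; the element names and URLs are made up), the "Command Central" document could be as simple as:

        <?xml version="1.0" encoding="utf-8"?>
        <data-sources>
            <!-- The app reads this on launch and uses the URL below as its feed. -->
            <feed name="vendor-xml" url="http://vendor.example.com/feed.xml" format="v1" />
        </data-sources>

    The app then needs only one hard-coded URL (the config file's), and every data-source change becomes a server-side edit rather than an app update.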

    Read the article

  • Anyone Using the Abyss Web Server?

    - by infocyde
    Just curious to see if anyone is using the Abyss Web Server for any projects. http://www.aprelium.com/ I've checked it out a few times and had it running a few ASP.NET demo sites, but haven't gotten too far with it. I like the ease of use, but I'm thinking both IIS and Apache outclass Abyss for the most part. Has anyone used it? If so, what is your experience? I ask because I'm tempted to use it for some projects, but if it isn't worth the investment I probably won't. Thanks for your time.

    Read the article

  • system() not working in PHP on Windows Server 2003

    - by jazzy
    Hi, I have to extract a cab file (.cab) on the server. I searched for a script that extracts cab files but didn't find one, so now I am trying to extract it using cabarc.exe. The problem is that when I run the command from the command line it works fine, but when I pass the same command to PHP's system() or exec() function, it does not work. The code is as follows:

        $command = "c:\\exe\\cabarc X c:\\cab\\data.cab c:\\data\\";
        $output = system($command, $return);
        if ($output !== false) {
            echo $return;
        }

    It is not working, yet the same string works fine on the command line. Can anybody help me with why it is not working and what to do to make it work? Is there any rights issue? I have given the site execute permission. Thanks
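
    A hedged first debugging step, keeping the paths from the example: system() and exec() on Windows run the command through cmd.exe, so stderr can be folded into the captured output with 2>&1, and the exit code shows whether cabarc ran at all. A non-zero code or an "Access is denied" line usually points at the account the web server runs PHP under lacking rights to cabarc.exe or the target folders:

        $command = '"c:\\exe\\cabarc.exe" X c:\\cab\\data.cab c:\\data\\ 2>&1';
        exec($command, $outputLines, $return);
        echo "exit code: $return\n";
        print_r($outputLines); // cabarc's own messages, including any permission errors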

    Read the article

  • Hiding part of a page from Search Server 2010 Express

    - by Jonathan
    I'm working on a soon-to-be-public-facing site, and we want to have our search live on day 1 but searchable only internally during testing, so we're planning to use something whose crawling we can control: Search Server 2010 Express. However, if I search for something that appears in my top navigation bar, I get nearly every page as a hit. It kind of makes sense, as every page has that content, but it's completely irrelevant on most pages. I want it to crawl through my navigation but ignore the text within the navigation for search results. I was hoping that it'd just figure that out on its own (the HTML for the top nav is static), but apparently it doesn't. Is there some standard thing I can put in my HTML that will achieve the effect I'm going for? On a side note: when I go live, will I have the same problem with public search engines, or do they tend to be smarter?
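
    One approach that has been reported to work with the SharePoint 2010 crawler (which Search Server 2010 Express shares) is wrapping boilerplate markup in an element with the class "noindex". Treat this as something to verify rather than a guarantee; it is not a general web standard, and public engines ignore it (they rely on their own boilerplate detection instead):

        <div class="noindex">
            <ul id="top-nav">
                <li><a href="/">Home</a></li>
                <li><a href="/products">Products</a></li>
            </ul>
        </div>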

    Read the article

  • Game Server Language Selection

    - by mr.LiKaShing
    I am planning to make an online multiplayer game with my friends. The game is a browser card game (so players act in turns), and players can host rooms in a lobby. Flex + ActionScript will be used for the client side. We are discussing what should be used for the server side. I suggested C#/Java and my friend suggested PHP. I know there are a couple of questions asking what language to use, but I think it should depend on specific conditions. Is there any suggestion for us? Thanks.

    Read the article

  • Open Source .NET embedded web/http server

    - by Daniel Mošmondor
    I am working on a project where I need to embed a web server into my C# application so the application can display its status via HTTP. I suppose I'll want to configure it through HTTP also. I am looking for an open-source library written in C# and with a licensing scheme that will allow me to link it into my existing closed-source code (LGPL). Any suggestions of specific products, or where to look first? It would be great if that product had some kind of scripting, at least templates. All HTML output would come from the application; only resources (images, icons, ...) would be stored on disk. EDIT: I would like it to run under .NET 2.0, however.
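
    Depending on how small the need really is, the framework's own System.Net.HttpListener (present since .NET 2.0) may cover a status page without any third-party library. A minimal sketch, with the prefix URL as a placeholder:

        using System;
        using System.Net;
        using System.Text;

        class StatusServer
        {
            static void Main()
            {
                HttpListener listener = new HttpListener();
                listener.Prefixes.Add("http://localhost:8080/status/"); // placeholder; non-admin users may need a URL reservation
                listener.Start();
                while (true)
                {
                    HttpListenerContext context = listener.GetContext(); // blocks until a request arrives
                    byte[] body = Encoding.UTF8.GetBytes("<html><body>Status: OK</body></html>");
                    context.Response.ContentType = "text/html";
                    context.Response.OutputStream.Write(body, 0, body.Length);
                    context.Response.Close();
                }
            }
        }

    It has no scripting or templates, so it fits the "status over HTTP" part of the requirement rather than the whole wish list.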

    Read the article

  • Zend_Soap with attachments (server)

    - by Tom
    I'm trying to build a SOAP service with Zend_Soap. Everything is working great, but the client needs the ability to send attachments to the service (not base64-encoded strings; this service will be called multiple times a day with various file sizes, so processing all of that in memory is not possible). So I'd like to handle a normal SOAP attachment (DIME/MIME) with the SOAP server in Zend Framework; however, I'm unable to find documentation about it. Can I access it with $_FILES[] or any other way? Is it even possible in Zend_Soap (as there's not much info available)? SOAP is a must, so thanks for the advice, but it has to be SOAP, not REST.

    Read the article

  • Choosing an Open Source Application Server for J2EE

    - by Rafael
    Hello, I know this may be a recurring topic, but I have read a lot of articles and I still have doubts. Also, I would like to hear more recent opinions about this. The main requirements for my application server are: flexible configuration, and support for an extremely high number of concurrent users. It will be a system for the mobile communications industry, so it must have high availability as well. I am going to develop a J2EE application, and open-source application servers are my only option. I have used GlassFish for a very small project and I really liked it. Thank you very much for your advice.

    Read the article

  • Firewall configuration (Windows Server 2008)

    - by Jon
    Hello. I'm having a little problem configuring the firewall on my server. I only want a specific range of IPs to be able to access specific ports. For example, I'm seeing a lot of password attempts against some of my servers, so I want to make things safer by only allowing incoming connections from a specific range of domains. Example: my IP is usually adsl-324-4.somecompany.com, so I want to allow *.somecompany.com to connect, as my IP is dynamic. That would get rid of a lot of attempts to hack into my servers. But I have no idea how to mask a domain like that for the firewall. How could I, for example, allow all incoming connections from *.is? Thanks.
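
    For what it's worth, the Windows Server 2008 firewall scopes rules by IP address, subnet or range, not by DNS name, so wildcard-domain rules like *.somecompany.com aren't directly possible; the usual workaround is to scope the rule to the provider's published address block instead. A sketch of such a rule (the port and address range are placeholders):

        netsh advfirewall firewall add rule name="RDP from ISP range" dir=in action=allow protocol=TCP localport=3389 remoteip=203.0.113.0/24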

    Read the article

  • Uploading to a remote server periodically?

    - by user1048138
    I have been working on an app that takes screenshots, kind of like http://puush.me/; however, I would like to be able to upload the screenshots to a remote server. What protocols can I use to do so? It needs to be cross-platform and secure. I know that SSH, SFTP and FTP are options; however, they all require logins that I don't want to provide to the end user. Nor do I want to sign a key for them, as it would still allow their machines to remotely log in.
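
    One common pattern that avoids handing end users shell-capable credentials is a plain HTTPS POST to a small upload endpoint, authenticated with an app-level token that can do nothing except create uploads. A rough C# sketch (the URL, header name and token are all made up for illustration):

        using System;
        using System.Net;
        using System.Text;

        class Uploader
        {
            static void Upload(string path)
            {
                using (WebClient client = new WebClient())
                {
                    // App-level token: grants upload rights only, no machine login.
                    client.Headers.Add("X-Api-Token", "app-token-here");
                    byte[] response = client.UploadFile("https://upload.example.com/screenshots", path);
                    Console.WriteLine(Encoding.UTF8.GetString(response));
                }
            }
        }

    HTTPS provides the transport security, and revoking a leaked token never exposes a real account on the server.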

    Read the article

  • Clone virtual machine with Server 2008 R2 and Hyper-V?

    - by bwerks
    Hi all, I've recently started working with Hyper-V, and so far it's quite nice. However, I've been running into problems with what seems like it should be the most basic of workflows. I've set up a baseline Server 2008 R2 configuration and exported it with the intention of using the export for cloning. I entered "C:\Exports\" as the export folder. However, I run into problems when I try to import the image. From the Hyper-V manager, I select "Import Virtual Machine", and in the resulting window I entered "C:\Exports\BuildServer\" as the folder, set the radio button to "Copy the virtual machine (create a new unique ID)" and checked the checkbox for "Duplicate all files so the same virtual machine can be imported again." Doing so results in the following error:

        "Import failed. Import task failed to copy file from 'H:\Exports\BuildServer\Virtual Hard Disks\BuildServer.vhd' to 'C:\Hyper-V\Virtual Hard Disks\BuildServer.vhd': The file exists. (0x80070050)"

    Have I somehow messed something up in configuration? Or is this a known thing? I've read that it should be possible to clone VMs by copying them in the filesystem, but I'd prefer to keep things in the management UI if possible.

    Read the article

  • Trying to back up system state on Server 2003 SP2, getting "Faulting application vssvc.exe - system state backup failed" in application log

    - by IT_Fixr
    Trying to back up the system state on Windows Server 2003 (SP2), getting "Faulting application vssvc.exe - system state backup failed" in the application log:

        Volume shadow copy creation: Attempt 1.
        "MSDEWriter" has reported an error 0x800423f2. This is part of System State. The volume shadow copy operation can be retried.
        "Event Log Writer" has reported an error 0x800423f2. This is part of System State. The volume shadow copy operation can be retried.
        "Registry Writer" has reported an error 0x800423f2. This is part of System State. The volume shadow copy operation can be retried.
        "COM+ REGDB Writer" has reported an error 0x800423f2. This is part of System State. The volume shadow copy operation can be retried.
        "Removable Storage Manager" has reported an error 0x0. This is part of System State. The backup cannot continue.
        Error returned while creating the volume shadow copy: 800423f2
        Aborting Backup.
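
    A reasonable first diagnostic is to list the VSS writers and their current states; writers stuck in a failed state often recover after restarting their owning services (or a reboot), which is worth trying before digging deeper:

        vssadmin list writers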

    Read the article

  • Sync clock on Windows XP machine to external (non-domain, non-workgroup) Windows Server 2008 R2 machine

    - by Eric
    I have two machines and I'd like their clocks to be in sync for various reasons. Machine 1 is an XP machine located in the office. Machine 2 is a VPS hosted by a third party running Windows Server 2008 R2. These machines are not in any kind of workgroup or domain together; they are completely separate machines. Machine 2 currently syncs once a week to time.windows.com, and its clock does seem to wander a bit within that week. What I would like is for Machine 1 to set its clock from Machine 2's. I have tried configuring w32tm on the XP machine with:

        w32tm /config /syncfromflags:manual /manualpeerlist:"<ip address of machine 2>"

    However, whenever I issue the /resync command I get "The computer did not resync because no time data was available". I have made sure to start the Windows Time service on Machine 2, and I have added firewall exceptions for UDP port 123. Is there something I need to configure on Machine 2 (other than just starting the time service) in order to get it to respond? Edit: I have also run w32tm /config /reliable:YES /update on Machine 2. I am still getting "The computer did not resync because no time data was available". Is there something else I'm missing?
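
    The answer to that last question is usually yes: starting the Windows Time service does not by itself make a machine answer NTP queries; the NtpServer provider has to be enabled. A sketch of what that looks like on Machine 2 (the registry path is the standard W32Time location):

        reg add "HKLM\SYSTEM\CurrentControlSet\Services\W32Time\TimeProviders\NtpServer" /v Enabled /t REG_DWORD /d 1 /f
        net stop w32time && net start w32time

    and then on Machine 1:

        w32tm /config /syncfromflags:manual /manualpeerlist:"<ip of machine 2>" /update
        w32tm /resync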

    Read the article

  • Accessing resources on localhost using domain credentials

    - by jas
    I'm trying to set up Team Foundation Server 2010, SharePoint Server 2010 and Report Server 2008R2. I apologize for how long my question/problem is, but I'm really lost on where to even look, so I am being as descriptive as possible in hopes that I'm making sense.

    The goal: since developers can be inside or outside the firewall, there needs to be a single HTTP point of entry to TFS that works regardless of which side of the firewall you are on, and it needs to work with external access to SharePoint and Report Server. Meaning: we have it set up in DNS so buildserver.mydomain.com points to the build-service box, which contains all of the services listed at the top of this post, and specific services are defined/located by port number. This is working great on every machine inside and out, except from the build server itself. All services must be able to work using external URLs.

    If I use http://buildserver.mydomain.com:4800/tfs (the external URL) from my notebook, which is behind the firewall, I'm able to log in with my domain credentials as expected. If the other developer points to the same URL from their home, which isn't on the domain, they are also able to log in using their domain credentials. However, if I am directly on buildserver and call SharePoint, TFS or Reporting Server from the server itself using the external URL (i.e. http://buildserver.mydomain.com:4800), I am prompted for a username and password. Entering my domain credentials results in another prompt to enter my credentials again. It will prompt three times regardless of which credentials are used (I have rights as a domain admin) and then, after the third prompt, directs me to a blank white page as though access was denied. There are no errors displayed on the page and nothing ends up in the event viewer. From buildserver, if I use just the host name (the internal URL), I'm prompted a single time for credentials and it works; i.e. http://buildserver:4800/tfs works from the server itself. The behavior is identical for any service requiring authentication: from the box itself, SharePoint Central Admin, the SharePoint web app, TFS, TFS Web Access, Report Server and Report Manager all fail using the external URL but succeed with the internal URL.

    So the problem comes into play when configuring all of the services to work together. The only way to configure TFS is locally from the server, which means I must point to the internal Report Server URLs (http://buildserver:4800/reports and /reportserver respectively, instead of http://buildserver.mydomain.com:4800 like they need to be), since external URLs aren't working from the box itself. If I configure TFS to use the internal URL for Report Server, then creating team projects or working in the SharePoint site for a team project fails for anyone not inside the domain, since their machines have no idea who http://buildserver:4800/reports even is or how to resolve it. I have configured SharePoint with Alternate Access Mappings as well as set up Report Server to listen for external URLs. The external URLs simply aren't working when called from the server itself. I hope this makes sense. Thanks for taking the time to read this rather verbose plea for help.
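
    The symptoms (the external FQDN fails only from the server itself, with repeated credential prompts, while the plain hostname works) match Windows' loopback check, which blocks Windows authentication against a local site accessed by a name that doesn't match the machine name. If that is what's happening here, the usual fixes from KB896861 are either registering the FQDN under BackConnectionHostNames or, on a test box, disabling the check entirely:

        reg add HKLM\SYSTEM\CurrentControlSet\Control\Lsa /v DisableLoopbackCheck /t REG_DWORD /d 1 /f

    followed by a reboot (or at least restarting IIS) so the change takes effect.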

    Read the article

  • Constant CMS Session Expiry On 1&1 Cloud Server?

    - by leen3o
    I have a couple of 1&1's 'Dynamic Cloud Servers' running Windows 2008 R2, set up as web servers with a number of Umbraco CMS installs on them, and they have been running fine for over a year. On Saturday, on BOTH servers, a very strange thing happened: as soon as I log in to the CMS/Umbraco admin, I am logged out within about 5 seconds. It's as if my session expires the moment I log in. I have checked everything I can (I'm not really a server admin), and everything seems to be exactly as it was last week. I'm just looking for ideas of what I should be looking for. Also, the front end of the sites seems fine; it's only the back end when I log in. I have gone to 1&1 about this and, as usual, they have washed their hands of it, saying it's nothing to do with them, when I am certain it is. How can this happen on two different servers and affect the same sites in exactly the same way? Any help, tips, or things to try would be greatly appreciated.

    Read the article

  • Munin server monitoring problem: graphs not being generated

    - by geerlingguy
    When I run munin-cron (munin-cron --debug), I get the following error:

        2010/05/10 13:39:01 [WARNING] Call to accept timed out. Remaining workers: archstl.org;archstl.archstl.org
        2010/05/10 13:39:01 [DEBUG] Active workers: 1/8

    These errors simply keep repeating until I quit munin-cron. I've followed the directions for debugging munin on the 'Debugging Munin plugins' wiki page, but I get the following results when going through them: after telnetting to localhost 4949, I can see a list of plugins and a node at archstl.archstl.org, but I can't fetch anything. The output is as follows:

        > fetch cpu
        .

    However, on the same machine (which is both the node and the master munin server), I can run munin-run cpu, and it prints the results correctly to the command line, like so:

        user.value 100829130
        nice.value 3479880
        system.value 13969362
        idle.value 664312639
        iowait.value 12180168
        irq.value 14242
        softirq.value 199526
        steal.value 0

    Looking at the wiki page mentioned above, it looks like it might be a plugin environment problem, but I can't figure out how to fix/change this:

        If the plugin does run with munin-run but not through telnet, you probably have a PATH problem. Tip: Set env.PATH for the plugin in the plugin's environment file.
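
    A sketch of that tip, assuming a Debian-style layout where plugin environment files live under /etc/munin/plugin-conf.d/ (paths may differ by distribution):

        # /etc/munin/plugin-conf.d/munin-node
        [cpu]
        env.PATH /usr/local/bin:/usr/bin:/bin

    followed by restarting munin-node so the new environment is picked up.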

    Read the article

  • Server 2008 R2 boot is at 2 hours and counting. What now?

    - by Jesse
    This morning, we rebooted our Server 2008 R2 box. No problem, it came right back up. Then we shut it down and let it install Windows updates. While it was off, we added some RAM. Then we turned it back on. The system came right back up to the "press ctrl-alt-delete" screen; so far, so good. I logged in. The system got as far as "Applying Group Policy", then spent almost an hour applying drive mappings. It finally finished that, and has now spent 30 minutes waiting on the Event Notification Service. I still haven't been able to log in. The Remote Desktop service doesn't appear to be running yet. I tried viewing the event log from another machine. I see that the box is writing to the Security log, but there are no events in System or Application in the last 45 minutes. Digging through the System log of events from 45 minutes ago, I see a bunch of timeouts:

        A timeout (30000 milliseconds) was reached while waiting for a transaction response from the ShellHWDetection service. [lots of these]
        A timeout (30000 milliseconds) was reached while waiting for a transaction response from the wuauserv service.
        A timeout (30000 milliseconds) was reached while waiting for a transaction response from the SessionEnv service.
        A timeout (30000 milliseconds) was reached while waiting for a transaction response from the Schedule service.
        A timeout (30000 milliseconds) was reached while waiting for a transaction response from the CertPropSvc service.

    What can I do? Should I try shutting it down remotely, or will that do more damage?

    Read the article

  • Noob with git repository on Windows Storage Server 2008?

    - by HibbyHoo
    I have a Western Digital Sentinel at home running Windows Storage Server 2008 R2 Essentials. I have several git repositories on it for my own personal projects, and have no problem pushing and pulling over my local network. I want to be able to access those repos remotely from anywhere. I am able to log in and remotely access folders and files on it, but I cannot clone repos using the same address. It hangs for a REALLY long time before finally failing with an error:

        git.exe clone --progress -v "https://myIpAddressHere/Remote/fs/files.aspx?path=%5C%5Cmydevicename%5Cmyreposfolder%5Cmyrepo.git" "D:\repo"
        Cloning into 'D:\repo'...
        error: Failed connect to myIpAddress:443; No error
        while accessing https://myIpAddress/Remote/fs/files.aspx?path=%5C%5Cmydevicename%5Cmyreposfolder%5Cmyrepo.git/info/refs
        fatal: HTTP request failed
        git did not exit cleanly (exit code 128)

    I'm not too privy to networking or web development, and I have only a rudimentary understanding of how to use git (with TortoiseGit). I'm having a hard time finding search results for this specific problem, and a hard time interpreting generic tutorials for the general scope of this problem. TortoiseGit version: 1.7.13.0. git version: 1.7.10.msysgit.1.

    Read the article

  • Databases in Source Control

    - by Grant Fritchey
    I’ve been working as a database professional for quite a long time. But originally, I was a developer. And I loved being a developer. There was this constant feedback loop of a job well done, your code compiled and it ran. Every time this happened successfully, you’d check it into source control. These days you have to add another step; the code passed all the tests, unit, line, regression, qa, whatever, then into source control it goes.

    As a matter of fact, when I first made the jump from developer to DBA/database developer/database professional, source control was the one thing I couldn’t believe was missing from the DBA toolbox. Come to find out, source control was only the beginning of what was missing from your standard DBA's set of skills.

    Don’t get me wrong. I’m not disrespecting the DBA. They’re focused where they should be, on your production data. But there has to be a method for developing applications that include databases, and the database side of that development and deployment process has long been lacking. This lack of development and deployment methodologies is a part of what has given rise to some of the wackier implementations of Object Relational Mapping tools, the NoSQL movement, and some of the other foul cursing that is directed towards databases, DBAs, and database development by application developers. Some of that is well earned. A lot isn’t. But it is a fact that database professionals, in general, do not have as sophisticated a model for managing development and deployment as application developers do.

    We could charge out and start trying to come up with our own standards and methods. I’m sure people have done exactly that. However, I’m lazy, and not terribly bright. Rather than try to invent a whole new process, I’m going to look to my developer roots and choose instead to emulate the developers. They’re sitting over there across the hall from me working with SCRUM/Agile/Waterfall/Object Driven/Feature Driven/Test Driven development processes that they’ve been polishing for years. What if I just started working on database development the same way they work on code development? Win!

    Ah, but now I have to have a mechanism for treating my database like application code. First, I need a method for getting it into source control. That’s where Red Gate’s SQL Source Control comes into the picture. SQL Source Control works within SQL Server Management Studio to connect your database objects up to the source control system of your choice. Right out of the box SQL Source Control can link to TFS, SVN or Vault. With a little work you can connect it to Git or just about any other source control system. With the ability to get my database into source control, a lot of possibilities for more direct integration with the application development teams open up.

    Read the article

  • Open Your Windows - 4/May/10

    - by Claudia Costa
    This FREE technical briefing is designed to show ISVs/SIs how to leverage Oracle 11g technology, especially in the small-to-medium business. The briefing focuses on Oracle's 11g platform on Windows & Linux and gives a very comprehensive technical competitive overview of the products offered by Microsoft. The technical part covers integration and migration aspects of various Microsoft products such as SQL Server, .NET and Active Directory. Register Today! With Oracle 11g, Oracle introduced various products (Application Express, Oracle Express Edition, ADF, BPEL) and licenses (Oracle Database Standard Edition One, Application Server Java Edition) specifically targeting the small-to-medium business market, to show that Oracle Database and Application Server are as easy to use and cost less than Microsoft products in terms of purchase price and ongoing support & maintenance, and much less again when considering the Linux platform. For those ISVs that have already adopted the Microsoft .NET framework and use SQL Server as their database layer, we will demonstrate that Oracle 11g Database is as easy as SQL Server to install, configure, and manage. In addition, their .NET application development platform does not require dramatic changes to enable it to run on the Oracle database. Besides the standard functionality, Oracle has enhanced some of the advanced features, such as interMedia, Security, Ref Cursor, etc., tightly integrated with the .NET framework, so that .NET developers can take full advantage of the Oracle technology without worrying about or programming the complex components.

    Objectives
    · Understand Oracle's strategy and commitment on Windows & Linux
    · Learn how to migrate from SQL Server to Oracle on Windows AND Linux
    · Understand that Oracle 11g is easy to manage and to install on Windows & Linux
    · Learn how to integrate Windows products with the Oracle 11g platform
    · Learn how Oracle products interoperate & integrate with Microsoft .NET
    · Learn how an Oracle database on Windows can easily be ported to a lower-cost Linux database platform and interoperate with a .NET application

    Prerequisites
    General operating system expertise, including MS Windows and Linux.

    Agenda
    · Welcome and intro
    · Oracle at a glance
    · Strategy: small-to-medium business, Microsoft and Linux
    · Oracle 11g architecture on Linux & Windows
    · Managing Oracle 11g on Linux & Windows
    · Application development
    · Migration
    · Value propositions for ISVs & wrap-up

    For more information/registration, contact: [email protected].

    Read the article

  • Windows Azure Use Case: Infrastructure Limits

    - by BuckWoody
    This is one in a series of posts on when and where to use a distributed architecture design in your organization's computing needs. You can find the main post here: http://blogs.msdn.com/b/buckwoody/archive/2011/01/18/windows-azure-and-sql-azure-use-cases.aspx

    Description: Physical hardware components take up room, use electricity, create heat and therefore need cooling, and require wiring and special storage units. All of these requirements cost money to rent at a data center or to build out at a local facility. In some cases, this can be a catalyst for evaluating options to remove this infrastructure requirement entirely by moving to a distributed computing environment.

    Implementation: There are three main options for moving to a distributed computing environment.

    Infrastructure as a Service (IaaS): The first option is simply to virtualize the current hardware and move the VMs to a provider. You can do this with Microsoft's Hyper-V product or other software, build the systems and host them locally on fewer physical machines. This is a good option for canned applications (where you have to type setup.exe) but not as useful for custom applications, as you still have to license and patch those servers, and there are hard limits on the VM sizes.

    Software as a Service (SaaS): If there is already software available that does what you need, it may make sense to purchase not only the software license but the use of it on the vendor's servers. Microsoft's Exchange Online is an example of simply using an offering from a vendor on their servers. If you do not need a great deal of customization, have no interest in owning or extending the source code, and need to implement a solution quickly, this is a good choice.

    Platform as a Service (PaaS): If you do need to write software for your environment, your next choice is a Platform as a Service such as Windows Azure. In this case you no longer manage physical or even virtual servers. You start at the code and data level of control and responsibility, and your focus is more on the design and maintenance of the application itself. In this case you own the source code and can extend or change it as you see fit. An interesting side benefit of using Windows Azure as a PaaS is that the Application Fabric component allows a hybrid approach, which gives you a basis to allow on-premise applications to leverage distributed computing paradigms.

    No one solution fits every situation. It's common to see organizations pick a mixture of on-premise, IaaS, SaaS and PaaS components. In fact, that's a great advantage of this form of computing: choice.

    References:
    5 Enterprise steps for adopting a Platform as a Service: http://blogs.msdn.com/b/davidmcg/archive/2010/12/02/5-enterprise-steps-for-adopting-a-platform-as-a-service.aspx?wa=wsignin1.0
    Application Patterns for the Cloud: http://blogs.msdn.com/b/kashif/archive/2010/08/07/application-patterns-for-the-cloud.aspx

    Read the article
