Search Results

Search found 3558 results on 143 pages for 'hosted'.

  • Transfer .com domain to GoDaddy - websites running on same domain - 3 weeks until expiration, 2 days of web hosting left

    - by Eric Nguyen
    Our company purchased the abc.com domain from a local registrar. The domain will expire in about 3 weeks. Our main websites run on this abc.com domain and cannot be down for long. The web hosting service will end in 2 days, but our websites are already hosted and running on Amazon EC2. We would like to transfer the domain to GoDaddy now or as soon as possible, since we have many other domains there and we believe GoDaddy will be better in the long term considering the prices and features it offers. The decision to transfer the domain to GoDaddy raises several questions:
    1) What cost and time are required to move away from our local registrar? This is currently unknown, as I'm still trying to retrieve the agreement we have with them.
    2) How do the 3 weeks left until the domain expires matter here? Should we wait until the domain expires and then purchase it through GoDaddy? How long would that process take, given that I suppose our websites will be down during that time? Are there any other drawbacks?
    3) What can I do to ensure our websites continue functioning regardless of the domain transfer process? It seems the actual registrar here is enom.com and the local registrar just partners with it, so I suppose I should park the abc.com domain with enom.com and change the DNS settings so that our websites can continue to be hosted on EC2 as normal. How long does it normally take for a domain to be transferred to GoDaddy completely? Is it even possible to keep our websites up and running during the whole transfer process?
    Apologies for throwing so many questions out at once. It's rather last minute and I suddenly realised there are too many unknown risks.

  • Economical DNS hosting separate from local registrar for country specific TLDs - email or web hosting not required

    - by Eric Nguyen
    Our company owns many country-specific top-level domains (TLDs: .sg, .my), and we will purchase more for other countries across South East Asia. These domains are associated with our websites hosted on Amazon EC2. The DNS records are currently hosted on a dedicated server that will shut down tomorrow (the name servers are set to those of a web hosting company), so I need to host the DNS records somewhere else. Hosting the DNS records with the local registrar costs SGD 18 a year per domain on top of the domain price (which is already very expensive, but we have no choice). It would be convenient to host the DNS records for all the country-specific TLDs we own with a single service, separate from the local registrars from which we bought the domains. A few searches turned up examples like Amazon Route 53, dnsmadeeasy.com and the like. However, since I'm only concerned about the country-specific TLDs, not .com:
    1) Is it really economical to host the DNS records of all domains in one place as described above? (Have the relevant countries and/or local registrars done something to keep their monopoly, so that they can always charge ridiculous prices for their country-specific TLDs?)
    2) I imagine I will need to tell the local registrars to update the name servers to those of the DNS hosting provider, e.g. dnsmadeeasy.com. Am I correct about how this works?
    3) Will I be able to point the TLDs themselves to the IP addresses I want (the EC2 instances where my websites are), or will I only be able to do so with subdomains?
    4) Are there any drawbacks I should know about?
    Background about our needs: we need the websites associated with the country TLDs to be up and running at all times; we'll need to be able to add/edit A and CNAME records; and we use Google Apps for Business for internal email, so I will need to be able to add/edit MX and TXT records.

  • WebSocket@QCon NY

    - by reza_rahman
    QCon NY was held on June 10-14 at the New York Marriott/Brooklyn Bridge. Part of the QCon franchise, this is one of the most significant IT conferences in the greater NYC area. It was an honor to do a WebSocket (JSR 356) talk at the conference. Unfortunately, my schedule was such that I could only attend one day of the conference and did not really get a chance to attend many sessions or do much networking. I did get a chance to talk to fellow Oracle speakers Doug Clarke, Stephen Chin and Frederic Desbiens, which was great. My session, titled Building Java HTML5/WebSocket Applications with JSR 356, was very well attended and I had some excellent Q & A. The talk introduces HTML 5 WebSocket, overviews JSR 356, tours the API and ends with a small WebSocket demo on GlassFish 4. The slide deck for the talk (Building Java HTML5/WebSocket Applications with JSR 356, from Reza Rahman) is posted below. The demo code is posted on GitHub: https://github.com/m-reza-rahman/hello-websocket. Oracle hosted a reception in the evening which was very well attended. Later in the evening the QCon organizers hosted a very nice speakers' dinner at a local boutique restaurant with excellent atmosphere and good food.

  • Hosting WCF over the internet

    - by user1876804
    I am pretty new to exposing WCF services hosted on IIS over the internet. I will be deploying a WCF service on IIS (6 or 7) and would like to expose this service over the internet. It will be hosted in a corporate network behind a firewall, and I want the service to be accessible from the internet (it should be able to pass through the firewall). I did some research and these are some of the pointers I got:
    1. I could use wsHttpBinding or netTcpBinding (the client is intended to be a .NET client). Which of these bindings is preferable?
    2. To get through the corporate firewall I came across the idea of a DMZ server. What is its purpose, and do I really need one?
    3. I will be passing files between the client and server, and the client needs to know the progress of the processing on the server and the end result.
    I know this is a very broad question, but could anyone give me pointers on where to start and what approach to take?
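
    For reference, here is a minimal sketch (not from the original post) of a .NET client calling an IIS-hosted WCF service through wsHttpBinding over HTTPS; HTTP-based bindings tend to pass corporate firewalls more easily than net.tcp. The contract name and URL are hypothetical placeholders.

    // Minimal sketch: calling an IIS-hosted WCF service over the internet from a
    // .NET client using wsHttpBinding with transport (HTTPS) security.
    // The contract and address below are hypothetical placeholders.
    using System;
    using System.ServiceModel;

    [ServiceContract]
    public interface IFileProcessingService          // hypothetical contract
    {
        [OperationContract]
        string GetProcessingStatus(string jobId);
    }

    class Program
    {
        static void Main()
        {
            // Transport security over port 443 usually gets through firewalls
            // without opening extra ports, unlike netTcpBinding.
            var binding = new WSHttpBinding(SecurityMode.Transport);
            var address = new EndpointAddress("https://example.com/FileService.svc");

            var factory = new ChannelFactory<IFileProcessingService>(binding, address);
            IFileProcessingService client = factory.CreateChannel();

            Console.WriteLine(client.GetProcessingStatus("job-42"));

            ((IClientChannel)client).Close();
            factory.Close();
        }
    }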

  • iFrame content pageviews not matching parent page pageviews

    - by surfbird0713
    I have a page with content hosted in an iFrame, both using the same GA account ID. When I look at the pages report, the parent page has about 9000 unique views, but the iFrame content only has 3700. Anyone have an idea what could cause that kind of discrepancy? My only guess is that it would be caused by people moving on before the iFrame content has a chance to load, but the average time on page for the host page is 56 seconds, so that doesn't seem possible. This is the page in question: http://cookware.lecreuset.com/cookware/content_le-creuset-lid_10151_-1_20002 The flipbook is hosted in the iFrame on a separate domain. I have each page of the flipbook triggering a virtual pageview to try to evaluate engagement with the book - when the flipbook loads, it fires a pageview for the page it is on, so that is the page I'm using for the 3700 number. I also looked at the source of the iFrame in the pages report, and that number just about matches the virtual pageviews so that piece is consistent. Any ideas on this are much appreciated. Thanks!

  • Is there a visual web application builder or rapid webapp prototyping framework?

    - by Jesper Mortensen
    Question: Is there such a thing as a self-hosted framework or CMS especially tailored towards the creation of interactive web applications without -- or with an absolute minimum of -- programming? (Substantially less programming than, say, a simple Rails app or a plugin for Wordpress, Joomla etc would require.) As for desired features I'd settle for whatever is available, but some ideas could be:
    - A user authentication and permissions system.
    - A GUI-driven input form builder.
    - A GUI-driven template / visual site design builder.
    - A simple scripting language (think AppleScript-like simplicity).
    - A highly modular architecture, with high-level business objects (users, form data, etc) exposed for easy re-use.
    If something like the above doesn't exist, then what comes near it?
    Need: This is for self-hosted rapid prototyping of web applications, and limited user testing of webapp user interface designs in a closed user test.
    Notes: I know about Ruby on Rails, Django, Pyramid etc.; I'm looking for something much faster to work in for making prototypes. I know about CMSs in general, but find that most of them are tailored towards displaying information to end users. If there is an exceptionally easy-to-master CMS with easy scripting (let's say much more so than, for example, Wordpress) then I'd be interested.

  • Site migration and SEO impact

    - by John Smith
    I'd greatly appreciate a response to the following question about site migration and SEO impact. Here's some background on how my domain name and site are currently configured.
    My domain name provider has the following settings:
    - host name @ is an A record and points to IP address x.x.x.x
    - host name www is an A record and points to IP address x.x.x.x
    - sub-domain host name new.example.com is an A record and points to IP address x.x.x.x
    My hosting provider has the following settings:
    - host record @ is an A record and points to IP address x.x.x.x, folder home/public_html/old
    - host record www is a CNAME record and points to example.com
    - sub-domain host record new.example.com points to home/public_html/new
    I want to:
    - point the domain (example.com AND www.example.com) to the content hosted under folder home/public_html/new, which is currently the content directory for new.example.com
    - retire the content hosted under folder home/public_html/old
    - retire the sub-domain host record new.example.com
    I believe the easiest method of doing this is removing the sub-domain host record new.example.com, and changing the following line in the .htaccess file in home/public_html from
    # Change 'subdirectory' to be the directory you will use for your main domain.
    RewriteCond %{REQUEST_URI} !^/old/
    to
    # Change 'subdirectory' to be the directory you will use for your main domain.
    RewriteCond %{REQUEST_URI} !^/new/
    But I don't understand how this will impact my SERP - ideally, I'd like it to remain the same. Research on this topic turned up the following Google page, which was no help, and this related StackExchange question, which suggests that this should not affect my SERP (at least, not permanently). But I wanted to make certain with a more specific example, and hopefully contribute to the community at the same time. I'd appreciate any feedback on this. Is there a better/recommended method to migrate sites this way? Is there an SEO impact?

  • PCI compliance when using third-party processing

    - by Moses
    My company is outsourcing the development of our new e-commerce site to a third-party web development company. The way they have set up our site to handle transactions is by having the user enter the necessary payment info, passing that data to a third-party merchant that processes the payment, and then completing the transaction if everything is good. When the issue of PCI DSS compliance was raised, they said: "You won't need PCI certification because the client's browser will send the sensitive information directly to the third-party merchant when the transaction is processed. However, the process will be transparent to the user because all interface and displays are controlled by us. The only server required to be compliant is the third-party merchant's, because no sensitive card data ever touches your server or web app." Even though I very much trust and respect the knowledge of our web developers, what they are saying raises some serious red flags for me. The way the site is described, I am sure we will not be using a hosted payment page like PayPal or Google Checkout offers (how could we maintain control over the UI if we were?). And while my knowledge of e-commerce is laughable at best, it seems like the only other option for us would be to use XML Direct to communicate with our third-party merchant for processing. My two questions are as follows:
    1) Based on everything you've read, is XML Direct the only option they could conceivably be using, or is there another method I don't know of which they could be implementing?
    2) Most importantly, is it true our site does not need PCI certification? As I understand it, using the XML Direct method means that we do have to be PCI DSS certified, and the only way around getting certified is through a hosted payment page (i.e. PayPal).

  • Forwarding a subdomain to a main domain using GoDaddy

    - by Ryan Hayes
    I have a current blog, which was hosted on Tumblr at http://blog.ryanhayes.net. I'm moving it over to http://ryanhayes.net, and have all the 301 redirects set up for the blog entries to map to my new blog, which is hosted using GoDaddy (domain included). When I try to set up a subdomain forward, I'm greeted with a nice 403 Forbidden response (as of this writing, you can see it at http://blog.ryanhayes.net). When I ping both the subdomain and the domain, they point to the same IP address, so I know the blog subdomain has at least switched over to point to the same content. I don't really understand why I would get a 403 Forbidden on the same content that I can see perfectly fine via another domain. Currently, I have a CNAME of blog pointing to @, which is how "www" is set up to forward, so I'm assuming it would do the same thing. My question is: what is the proper way to set up my DNS to make the blog subdomain forward to my main domain (301) using the GoDaddy DNS manager?
    Bonus: What is the background on why I am getting a 403 error the current way?
    Forbidden. You don't have permission to access / on this server. Additionally, a 403 Forbidden error was encountered while trying to use an ErrorDocument to handle the request.
    UPDATE 12/7/2010: The error on the site has been fixed, so you can no longer view it from my site.

  • Google results show .info domain instead of .com

    - by user481913
    I am currently on shared hosting and I registered this account with a .info domain as the main domain, say MyDomain.info. However, the site runs from MyDomain.com. This is a cPanel-based shared hosting account. MyDomain.info has nothing hosted at all, i.e. no content files. MyDomain.com is set up as an Add-On Domain and runs from /public_html/MyDomain under MyDomain.info. The problem is that when I type MyDomain as the search keyword in Google, it shows result(s) for MyDomain.info, although this is not the intended site and has no content hosted on it. I tried to solve the issue by issuing a 301 permanent redirect from MyDomain.info to MyDomain.com; however, Google keeps displaying MyDomain.info as the main site even 1 month after the redirect. I want Google to index MyDomain.com as the main site and remove MyDomain.info from the results. Also, is this harmful from an SEO point of view? How can I improve the SEO if it is?

  • Storing data for use on Android and Windows Applications

    - by Andy Mepham
    I posted this last night on StackOverflow and was advised to move it over to StackExchange; thank you for taking a moment to look at my question. I'm developing a project proposal for my final year project at university, and as I aim to use programming languages I am currently not too familiar with, I'm looking for some guidance. I can't include details of my project, but hopefully you will understand what I'm after. I'm going to be creating an Android application (in Java) and a Windows application (in C#) that will ideally access, query and update a remotely hosted database or set of XML files (most likely over the Internet). I've done some looking around the internet and SQLite seems like a safe bet for cross-platform manipulation of the database; however, I would like to keep the system as lightweight as possible and I'm wondering whether XML files might provide a better alternative. Is there anyone out there with experience using SQLite and/or remotely hosted XML for Android and/or C# development who could point me in the right direction? If there is an alternative solution other than those I have mentioned, I would be interested to hear about it too. Thank you for taking the time to read my question. Edit: This application is for a small-scale business; the data source would not need to be updated by more than one source, but may be viewed from multiple sources (i.e. through multiple phones and a desktop PC). The database wouldn't be updating masses of data at a time (most likely single rows of a few tables at most).
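
    One common pattern for this kind of setup - offered only as an illustrative sketch, not something from the original question - is to put a small HTTP/JSON service in front of the remotely hosted database so the Android and C# clients both talk to the same endpoint instead of the database directly. In the C# sketch below, the URL and the Order type are made-up placeholders.

    // Illustrative sketch: a C# desktop client reading and updating rows through a
    // hypothetical JSON endpoint that fronts the remotely hosted database.
    using System;
    using System.Net.Http;
    using System.Text;
    using System.Threading.Tasks;
    using Newtonsoft.Json;   // Json.NET

    public class Order
    {
        public int Id { get; set; }
        public string Customer { get; set; }
        public decimal Total { get; set; }
    }

    public static class OrdersApi
    {
        private static readonly HttpClient http = new HttpClient
        {
            BaseAddress = new Uri("https://example.com/api/")   // hypothetical endpoint
        };

        // Fetch all orders as JSON and deserialize them.
        public static async Task<Order[]> GetOrdersAsync()
        {
            string json = await http.GetStringAsync("orders");
            return JsonConvert.DeserializeObject<Order[]>(json);
        }

        // Update a single order row via an HTTP PUT.
        public static async Task UpdateOrderAsync(Order order)
        {
            var content = new StringContent(
                JsonConvert.SerializeObject(order), Encoding.UTF8, "application/json");
            HttpResponseMessage response = await http.PutAsync($"orders/{order.Id}", content);
            response.EnsureSuccessStatusCode();
        }
    }

    An Android client can hit the same endpoint from Java, so neither app needs direct access to the database server or to a shared SQLite file.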

  • How to combat negative SEO?

    - by Perturbed
    Someone has decided to create a hate blog on a hosted blogging service (wordpress.com) that bashes my company. The blog contains posts that completely flame myself, my service, and contains complete falsehoods about how I run my business. Without going into details, I'm pretty sure the author of this blog is an owner of a competing service (although it is authored completely anonymously). Frankly, I'm not sure if the content would qualify for defamation or not, but I really don't like the idea of spending money on a lawyer to even attempt to prove this. I also have no interest in retorting or even replying to the blog in any sort of way -- I feel this would justify the ludicrous claims that have been posted. Unfortunately, whoever wrote the blog was pretty smart about using key words that people commonly use to search for my service. Because my customer base is relatively small and local, our PageRank is not incredibly high. As a result, when someone Google's our business name, this blog is usually within the top five results (thankfully, it's never above the business' actual website, but it's usually within eyeshot). It's incredibly frustrating to hear from customers who have seen the link (luckily, most of the time they think the author is crazy). Is there anything I can do to combat this? Would it be worthwhile to setup my own hosted wordpress.com branded blog, in an effort to trump this wordpress.com with a blog that is more active of my own? TL;DR: Someone made a hate blog using wordpress.com and is now on the first page of my business' search results. What are my options?

  • For a large website developed in PHP, is it necessary to have a framework?

    - by Martin
    I am wondering whether a framework is necessary, or a must-have, if I plan to build a large website. "Large website" could mean a lot of things; in this case it means multiple dynamic web pages (40-50 dynamic pages with MySQL content) and a lot of visitors (roughly a million hits per month). The site will be hosted in a dedicated server environment. I know that a framework could simplify coding for a developer team, and that it includes libraries and brings a lot of advantages. But I just feel that I don't need that. I think that learning how it works, installing it and managing it would take more time, and I could use that time to code. I write PHP in the simplest way I can (with performance in mind), I try to reuse my code/functions/classes most of the time, and I make sure that if another developer joins the team, he won't be lost in the code. I am also planning to use Memcached or another cache for PHP. As I said, the site will be hosted in a dedicated server environment but will be entirely managed by the hosting company; I am pretty sure the control panel for the basic stuff will be cPanel. For a developer like me who only knows PHP, JavaScript, HTML, CSS, MySQL and really basic server management, a framework seems too complicated. Am I wrong? Is it worth the time to learn all about it? Thank you for your opinions and suggestions.

  • Moving domain currently on Google Apps to my own machines

    - by mag
    I own a domain which currently uses Google Apps (it was free back then). Due to recent events I have decided I want to move everything to where I can control the data, which basically means either using a dyndns solution and running my own mail server at home or, better, using a hosted machine to receive the mail and then moving it via a MUA to a local machine. My problem is that I know next to nothing about DNS, although I have installed mail servers before. :) [I simply never had the need to work with DNS...] I'd really appreciate it if someone could point me in the right direction. What do I need to do to:
    - terminate the Google Apps management of the domain
    - set up DNS entries so that mail goes to the new server instead of Google Apps
    Additionally: if I decide to go the "hosted" route and put up a machine with a super-cheap hosting provider (say, the cheapest OVH one at a few euros a month), how can I tell the world that that machine is the mail server? Of course I would be wise to do a full backup of the mail on Google Apps first, and that's easy enough to do via IMAP (right?).

  • Speaking at Sinergija12

    - by DigiMortal
    Next week I will be speaking at Sinergija12, the biggest Microsoft conference held in Serbia. The first time I visited Sinergija it was clear to me that this is an event I should come back to. Why? Because the technical level of the sessions was very well pitched, and the sessions I attended were actually pretty hardcore. Now, two years later, I will be back, but this time as a speaker.
    My sessions at Sinergija12: here are my three almost-finished sessions.
    ASP.NET MVC 4 Overview: This session focuses on the new features of ASP.NET MVC 4 and gives the audience a good overview of what is coming. Demos cover all the important new features - agent-based output, new application templates, Web API and Single Page Applications. This session is for everybody who plans to move to ASP.NET MVC 4 or to start building modern web sites.
    Building SharePoint Online applications using Napa Office 365: The next version of Office 365 allows you to build SharePoint applications using a browser-based IDE hosted in the cloud. This session introduces the new tools and shows through practical examples how to build online applications for SharePoint 2013.
    Cloud-enabling ASP.NET MVC applications: The cloud era is here, and over the next few years more and more web applications will be hosted in cloud environments; some of our current web applications will also be moved to the cloud. This session shows the audience how to change the architecture of an ASP.NET web application so it runs on shared hosting and on Windows Azure from the same code base. The audience will also see how to debug and deploy web applications to Windows Azure.
    All developers coming to Sinergija12 are welcome at my sessions. See you there! :)

  • Template syntax for users - is there a right way to do it?

    - by RickM
    Ok, I'm in the middle of building a SaaS system, and as part of that the hosted clients need to be able to edit certain layout templates, basically just HTML, CSS and JavaScript files. I'm obviously going to want to use a template syntax here, as it would be dumb to let people execute PHP code, so in this instance a template syntax does need to be used. I know that in the grand scale of things this is a very minor thing, but what template syntax do you use, and why? Is there one that's considered better than others? I've seen all sorts being used with no real consistency, for example:
    Smarty style:
    {$someVar}
    {foreach from="foo" item="bar"} {$bar.food} {/foreach}
    ASP style:
    {% someVar %}
    {% foreach foo as bar %} {% bar.food %} {% endforeach %}
    HTML style:
    <someVar>
    <foreach from="foo" item="bar"> <bar:food> </foreach>
    PyroCMS/FuelPHP "LEX" style:
    {{ someVar }}
    {{ foreach from="foo" item="bar" }} {{ bar:food }} {{ endforeach }}
    Obviously these aren't 100% accurate (for example, LEX is used alongside PHP for loops); they are only there to give you an example of what I mean. What, in your opinion, would be the best one (if any) to go with? I ask this bearing in mind that the people using this are likely to be novice users. I did look around at a bunch of hosted CMS and e-commerce systems, as these seem to make use of user-editable templates, and most seem to use some form of their own syntax. I should note that whatever style I end up going with, it will be with a custom template handler, due to the complexity of the system and how template files are stored. Plus I wouldn't want to touch the likes of Smarty with a barge pole!

  • NFS mount of /var/www to OS X

    - by ploughguy
    I have spent 2 hours trying to create an NFS mount from my Ubuntu 10.04 LTS server to my OS X desktop system. Objective: a three-way file compare between the code base on the Mac, the development system on the local Linux test box, and the hosted website. The hosted service uses cPanel, so I can mount a web disk - easy as pie - took 10 seconds. The local Ubuntu box, on the other hand - nothing but pain and frustration. Here is what I have tried:
    1. In File Browser, navigate to /var/www/site and right-click. Select "Share this folder". Enter the share name wwwsite and a comment. Click the "Create Share" button. A message says you can only share file systems you own. There is a message on how to fix this, but the killer is that this shares over SMB, which will change the LFs to CR-LFs and affect the file comparison. So forget this option.
    2. In a terminal window, run shares-admin (I have not been able to convince it to give me the "Shared Folders" option in the System Administration window - maybe it is somewhere else in the menu, but I cannot find it) and define an NFS export. Enter the path /var/www/site, select NFS, enter the IP address of the iMac and save.
    3. On the Mac, try to mount the file system using the usual methods - Finder, the command-line "mount" command - not found. Nothing.
    4. Tried restarting the Linux box in case there is a daemon that needs restarting - nothing.
    So I have run out of stuff to do. I have tried searching the documentation - it is pretty basic. The man page documentation is as opaque as ever. Please, oh please, will someone help me to get this @38&@^# thing to work! Thanks for reading this far... PG.

  • Is it possible to keep only one Database for both web and desktop applications?

    - by B4NZ41
    I'm having trouble with my business model; let me explain. I have been developing a piece of software for a year and a few months. It's for the food industry, more exactly software for: delivery, take-away, table reservations, POS, accounts payable and receivable, receipt printing, kitchen order monitors, customer order control and the fiscal area. I have separated the software into two main areas: a web area, and a desktop area (used by admins only) that is installed locally.
    1 - Web Area (basically does the following:)
    - Show the catalogue with the products
    - Customers make orders
    - Customers pay for the orders
    - etc ... as mentioned above
    2 - Desktop Area
    - Manage orders
    - Manage customers
    - Manage suppliers
    - Manage accounts payable and receivable
    - etc ... as mentioned above
    The web area is hosted on an online web server (scripts and database are online). The desktop area is hosted locally on a Linux machine with a local database and local script files. My question is: is it possible to keep only one database for both applications? If YES, what is the best approach? Here is my technical environment:
    - Database: currently I have two databases working and I would love to keep only one.
    - Operating system: Linux (kernel 2.6.x and above) or Windows (XP and above).
    - Web area: Apache, PHP, Python, JavaScript, shell script and MySQL.
    - Desktop area: PHP-GTK2, Apache, PHP, MySQL and shell script.

  • What service or software should I use to serve advertising on a site with about 120k monthly page views?

    - by JasonBirch
    I have a site that is generating about 120k monthly page views and is being hosted on a shared FreeBSD server where I have access to PHP and MySQL. I am using some custom PHP server-side scripts that give each of my ad networks (AdSense, Tribal Fusion, etc) an adjustable percentage of impressions in each of the ad positions on my pages. I am looking for a better way of managing and measuring the delivery of these ads, and would also like to be able to take direct placements and provide statistics to the clients. I am looking at options including OpenX self-hosted, OpenX community, and Google DoubleClick for Publishers Small Business (DFP), but am having difficulty determining which one will best meet my needs. They all seem to have pretty steep learning curves compared to my simple scripts. What I have taken away so far as the benefit of self-hosting is that I don't have to pay for the service if I exceed a maximum number of ad impressions, while both OpenX Community and DFP have free impression limits. Of course, if I was doing those kind of numbers I'd need to upgrade my hosting account, but I'm not sure even at that point whether it would be cheaper to serve the ads myself than pay for a premium service. Apart from this, I really need insights into what features differentiate these services, why I might want to choose one over another, and if there are any other competing products or service of the same quality that I should look into. Answers from webmasters who have used both (or all three) services and can talk to usability and ease of ad management would be highly appreciated.

  • How do web servers enforce the same-origin policy?

    - by BBnyc
    I'm diving deeper into developing RESTful APIs and have so far worked with a few different frameworks to achieve this. Of course I've run into the same-origin policy, and now I'm wondering how web servers (rather than web browsers) enforce it. From what I understand, some enforcement seems to happen on the browser's end (e.g., honoring an Access-Control-Allow-Origin header received from a server). But what about the server? For example, let's say a web server is hosting a JavaScript web app that accesses an API, also hosted on that server. I assume that server would enforce the same-origin policy - so that only the JavaScript hosted on that server would be allowed to access the API. This would prevent someone else from writing a JavaScript client for that API and hosting it on another site, right? So how would a web server be able to stop a malicious client that tries to make AJAX requests to its API endpoints while claiming to be running JavaScript that originated from that same web server? What's the way most popular servers (Apache, nginx) protect against this kind of attack? Or is my understanding of this somehow off the mark? Or is the cross-origin policy only enforced on the client end?
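
    As a point of reference (a sketch, not part of the original question): servers generally do not block cross-origin callers themselves; they only declare, via response headers, which origins a browser should allow to read the response, and the browser does the enforcing. Keeping genuinely malicious clients out requires authentication (tokens or session cookies), not the same-origin policy. The hypothetical C# listener below illustrates the header side of that; the origin and port are placeholders.

    // Minimal sketch: a server emitting a CORS header. The browser, not this
    // server, decides whether a cross-origin script may read the response;
    // non-browser clients simply ignore the header.
    using System;
    using System.Net;
    using System.Text;

    class CorsDemo
    {
        static void Main()
        {
            var listener = new HttpListener();
            listener.Prefixes.Add("http://localhost:8080/api/");   // hypothetical endpoint
            listener.Start();

            while (true)
            {
                HttpListenerContext ctx = listener.GetContext();

                // Only the trusted origin is named here; any other origin's scripts
                // will be refused access to the response by the browser.
                ctx.Response.AddHeader("Access-Control-Allow-Origin", "https://app.example.com");

                byte[] body = Encoding.UTF8.GetBytes("{\"status\":\"ok\"}");
                ctx.Response.ContentType = "application/json";
                ctx.Response.OutputStream.Write(body, 0, body.Length);
                ctx.Response.Close();
            }
        }
    }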

  • Part 1 - Load Testing In The Cloud

    - by Tarun Arora
    Azure is fascinating, but even more fascinating is the marriage of Azure and TFS! Introduction Recently a client I worked for had 2 major business critical applications being delivered, with very little time budgeted for Performance testing, we immediately hit a bottleneck when the performance testing phase started, the in house infrastructure team could not support the hardware requirements in the short notice. It was suggested that the performance testing be performed on one of the QA environments which was a fraction of the production environment. This didn’t seem right, the team decided to turn to the cloud. The team took advantage of the elasticity offered by Azure, starting with a single test agent which was provisioned and ready for use with in 30 minutes the team scaled up to 17 test agents to perform a very comprehensive performance testing cycle. Issues were identified and resolved but the highlight was that the cost of running the ‘test rig’ proved to be less than if hosted on premise by the infrastructure team. Thank you for taking the time out to read this blog post, in the series of posts, I’ll try and cover the start to end of everything you need to know to use Azure to build your Test Rig in the cloud. But Why Azure? I have my own Data Centre… If the environment is provisioned in your own datacentre, - No matter what level of service agreement you may have with your infrastructure team there will be down time when the environment is patched - How fast can you scale up or down the environments (keeping the enterprise processes in mind) Administration, Cost, Flexibility and Scalability are the areas you would want to think around when taking the decision between your own Data Centre and Azure! How is Microsoft's Public Cloud Offering different from Amazon’s Public Cloud Offering? Microsoft's offering of the Cloud is a hybrid of Platform as a Service (PaaS) and Infrastructure as a Service (IaaS) which distinguishes Microsoft's offering from other providers such as Amazon (Amazon only offers IaaS). PaaS – Platform as a Service IaaS – Infrastructure as a Service Fills the needs of those who want to build and run custom applications as services. Similar to traditional hosting, where a business will use the hosted environment as a logical extension of the on-premises datacentre. A service provider offers a pre-configured, virtualized application server environment to which applications can be deployed by the development staff. Since the service providers manage the hardware (patching, upgrades and so forth), as well as application server uptime, the involvement of IT pros is minimized. On-demand scalability combined with hardware and application server management relieves developers from infrastructure concerns and allows them to focus on building applications. The servers (physical and virtual) are rented on an as-needed basis, and the IT professionals who manage the infrastructure have full control of the software configuration. This kind of flexibility increases the complexity of the IT environment, as customer IT professionals need to maintain the servers as though they are on-premises. The maintenance activities may include patching and upgrades of the OS and the application server, load balancing, failover clustering of database servers, backup and restoration, and any other activities that mitigate the risks of hardware and software failures.   
The biggest advantage with PaaS is that you do not have to worry about maintaining the environment, you can focus all your time in solving the business problems with your solution rather than worrying about maintaining the environment. If you decide to use a VM Role on Azure, you are asking for IaaS, more on this later. A nice blog post here on the difference between Saas, PaaS and IaaS. Now that we are convinced why we should be turning to the cloud and why in specific Azure, let’s discuss about the Test Rig. The Load Test Rig – Topology Now the moment of truth, Of course a big part of getting value from cloud computing is identifying the most adequate workloads to take to the cloud, so I’ve decided to try to make a Load Testing rig where the Agents are running on Windows Azure.   I’ll talk you through the above Topology, - User: User kick starts the load test run from the developer workstation on premise. This passes the request to the Test Controller. - Test Controller: The Test Controller is on premise connected to the same domain as the developer workstation. As soon as the Test Controller receives the request it makes use of the Windows Azure Connect service to orchestrate the test responsibilities to all the Test Agents. The Windows Azure Connect endpoint software must be active on all Azure instances and on the Controller machine as well. This allows IP connectivity between them and, given that the firewall is properly configured, allows the Controller to send work loads to the agents. In parallel, the Controller will collect the performance data from the agents, using the traditional WMI mechanisms. - Test Agents: The Test Agents are on the Windows Azure Public Cloud, as soon as the test controller issues instructions to the test agents, the test agents start executing the load tests. The HTTP requests are issued against the web server on premise, the results are captured by the test agents. And finally the results are passed over to the controller. - Servers: The Web Server and DB Server are hosted on premise in the datacentre, this is usually the case with business critical applications, you probably want to manage them your self. Recap and What’s next? So, in the introduction in the series of blog posts on Load Testing in the cloud I highlighted why creating a test rig in the cloud is a good idea, what advantages does Windows Azure offer and the Test Rig topology that I will be using. I would also like to mention that i stumbled upon this [Video] on Azure in a nutshell, great watch if you are new to Windows Azure. In the next post I intend to start setting up the Load Test Environment and discuss pricing with respect to test agent machine types that will be used in the test rig. Hope you enjoyed this post, If you have any recommendations on things that I should consider or any questions or feedback, feel free to add to this blog post. Remember to subscribe to http://feeds.feedburner.com/TarunArora.  See you in Part II.   Share this post : CodeProject

  • Windows Azure: Announcing release of Windows Azure SDK 2.2 (with lots of goodies)

    - by ScottGu
    Earlier today I blogged about a big update we made today to Windows Azure, and some of the great new features it provides. Today I’m also excited to also announce the release of the Windows Azure SDK 2.2. Today’s SDK release adds even more great features including: Visual Studio 2013 Support Integrated Windows Azure Sign-In support within Visual Studio Remote Debugging Cloud Services with Visual Studio Firewall Management support within Visual Studio for SQL Databases Visual Studio 2013 RTM VM Images for MSDN Subscribers Windows Azure Management Libraries for .NET Updated Windows Azure PowerShell Cmdlets and ScriptCenter The below post has more details on what’s available in today’s Windows Azure SDK 2.2 release.  Also head over to Channel 9 to see the new episode of the Visual Studio Toolbox show that will be available shortly, and which highlights these features in a video demonstration. Visual Studio 2013 Support Version 2.2 of the Window Azure SDK is the first official version of the SDK to support the final RTM release of Visual Studio 2013. If you installed the 2.1 SDK with the Preview of Visual Studio 2013 we recommend that you upgrade your projects to SDK 2.2.  SDK 2.2 also works side by side with the SDK 2.0 and SDK 2.1 releases on Visual Studio 2012: Integrated Windows Azure Sign In within Visual Studio Integrated Windows Azure Sign-In support within Visual Studio is one of the big improvements added with this Windows Azure SDK release.  Integrated sign-in support enables developers to develop/test/manage Windows Azure resources within Visual Studio without having to download or use management certificates.  You can now just right-click on the “Windows Azure” icon within the Server Explorer inside Visual Studio and choose the “Connect to Windows Azure” context menu option to connect to Windows Azure: Doing this will prompt you to enter the email address of the account you wish to sign-in with: You can use either a Microsoft Account (e.g. Windows Live ID) or an Organizational account (e.g. Active Directory) as the email.  The dialog will update with an appropriate login prompt depending on which type of email address you enter: Once you sign-in you’ll see the Windows Azure resources that you have permissions to manage show up automatically within the Visual Studio Server Explorer (and you can start using them): With this new integrated sign in experience you are now able to publish web apps, deploy VMs and cloud services, use Windows Azure diagnostics, and fully interact with your Windows Azure services within Visual Studio without the need for a management certificate.  All of the authentication is handled using the Windows Azure Active Directory associated with your Windows Azure account (details on this can be found in my earlier blog post). Integrating authentication this way end-to-end across the Service Management APIs + Dev Tools + Management Portal + PowerShell automation scripts enables a much more secure and flexible security model within Windows Azure, and makes it much more convenient to securely manage multiple developers + administrators working on a project.  It also allows organizations and enterprises to use the same authentication model that they use for their developers on-premises in the cloud.  It also ensures that employees who leave an organization immediately lose access to their company’s cloud based resources once their Active Directory account is suspended. 
Filtering/Subscription Management Once you login within Visual Studio, you can filter which Windows Azure subscriptions/regions are visible within the Server Explorer by right-clicking the “Filter Services” context menu within the Server Explorer.  You can also use the “Manage Subscriptions” context menu to mange your Windows Azure Subscriptions: Bringing up the “Manage Subscriptions” dialog allows you to see which accounts you are currently using, as well as which subscriptions are within them: The “Certificates” tab allows you to continue to import and use management certificates to manage Windows Azure resources as well.  We have not removed any functionality with today’s update – all of the existing scenarios that previously supported management certificates within Visual Studio continue to work just fine.  The new integrated sign-in support provided with today’s release is purely additive. Note: the SQL Database node and the Mobile Service node in Server Explorer do not support integrated sign-in at this time. Therefore, you will only see databases and mobile services under those nodes if you have a management certificate to authorize access to them.  We will enable them with integrated sign-in in a future update. Remote Debugging Cloud Resources within Visual Studio Today’s Windows Azure SDK 2.2 release adds support for remote debugging many types of Windows Azure resources. With live, remote debugging support from within Visual Studio, you are now able to have more visibility than ever before into how your code is operating live in Windows Azure.  Let’s walkthrough how to enable remote debugging for a Cloud Service: Remote Debugging of Cloud Services To enable remote debugging for your cloud service, select Debug as the Build Configuration on the Common Settings tab of your Cloud Service’s publish dialog wizard: Then click the Advanced Settings tab and check the Enable Remote Debugging for all roles checkbox: Once your cloud service is published and running live in the cloud, simply set a breakpoint in your local source code: Then use Visual Studio’s Server Explorer to select the Cloud Service instance deployed in the cloud, and then use the Attach Debugger context menu on the role or to a specific VM instance of it: Once the debugger attaches to the Cloud Service, and a breakpoint is hit, you’ll be able to use the rich debugging capabilities of Visual Studio to debug the cloud instance remotely, in real-time, and see exactly how your app is running in the cloud. Today’s remote debugging support is super powerful, and makes it much easier to develop and test applications for the cloud.  Support for remote debugging Cloud Services is available as of today, and we’ll also enable support for remote debugging Web Sites shortly. Firewall Management Support with SQL Databases By default we enable a security firewall around SQL Databases hosted within Windows Azure.  This ensures that only your application (or IP addresses you approve) can connect to them and helps make your infrastructure secure by default.  This is great for protection at runtime, but can sometimes be a pain at development time (since by default you can’t connect/manage the database remotely within Visual Studio if the security firewall blocks your instance of VS from connecting to it). One of the cool features we’ve added with today’s release is support that makes it easy to enable and configure the security firewall directly within Visual Studio.  
Now with the SDK 2.2 release, when you try and connect to a SQL Database using the Visual Studio Server Explorer, and a firewall rule prevents access to the database from your machine, you will be prompted to add a firewall rule to enable access from your local IP address: You can simply click Add Firewall Rule and a new rule will be automatically added for you. In some cases, the logic to detect your local IP may not be sufficient (for example: you are behind a corporate firewall that uses a range of IP addresses) and you may need to set up a firewall rule for a range of IP addresses in order to gain access. The new Add Firewall Rule dialog also makes this easy to do.  Once connected you’ll be able to manage your SQL Database directly within the Visual Studio Server Explorer: This makes it much easier to work with databases in the cloud. Visual Studio 2013 RTM Virtual Machine Images Available for MSDN Subscribers Last week we released the General Availability Release of Visual Studio 2013 to the web.  This is an awesome release with a ton of new features. With today’s Windows Azure update we now have a set of pre-configured VM images of VS 2013 available within the Windows Azure Management Portal for use by MSDN customers.  This enables you to create a VM in the cloud with VS 2013 pre-installed on it in with only a few clicks: Windows Azure now provides the fastest and easiest way to get started doing development with Visual Studio 2013. Windows Azure Management Libraries for .NET (Preview) Having the ability to automate the creation, deployment, and tear down of resources is a key requirement for applications running in the cloud.  It also helps immensely when running dev/test scenarios and coded UI tests against pre-production environments. Today we are releasing a preview of a new set of Windows Azure Management Libraries for .NET.  These new libraries make it easy to automate tasks using any .NET language (e.g. C#, VB, F#, etc).  Previously this automation capability was only available through the Windows Azure PowerShell Cmdlets or to developers who were willing to write their own wrappers for the Windows Azure Service Management REST API. Modern .NET Developer Experience We’ve worked to design easy-to-understand .NET APIs that still map well to the underlying REST endpoints, making sure to use and expose the modern .NET functionality that developers expect today: Portable Class Library (PCL) support targeting applications built for any .NET Platform (no platform restriction) Shipped as a set of focused NuGet packages with minimal dependencies to simplify versioning Support async/await task based asynchrony (with easy sync overloads) Shared infrastructure for common error handling, tracing, configuration, HTTP pipeline manipulation, etc. 
Factored for easy testability and mocking Built on top of popular libraries like HttpClient and Json.NET Below is a list of a few of the management client classes that are shipping with today’s initial preview release: .NET Class Name Supports Operations for these Assets (and potentially more) ManagementClient Locations Credentials Subscriptions Certificates ComputeManagementClient Hosted Services Deployments Virtual Machines Virtual Machine Images & Disks StorageManagementClient Storage Accounts WebSiteManagementClient Web Sites Web Site Publish Profiles Usage Metrics Repositories VirtualNetworkManagementClient Networks Gateways Automating Creating a Virtual Machine using .NET Let’s walkthrough an example of how we can use the new Windows Azure Management Libraries for .NET to fully automate creating a Virtual Machine. I’m deliberately showing a scenario with a lot of custom options configured – including VHD image gallery enumeration, attaching data drives, network endpoints + firewall rules setup - to show off the full power and richness of what the new library provides. We’ll begin with some code that demonstrates how to enumerate through the built-in Windows images within the standard Windows Azure VM Gallery.  We’ll search for the first VM image that has the word “Windows” in it and use that as our base image to build the VM from.  We’ll then create a cloud service container in the West US region to host it within: We can then customize some options on it such as setting up a computer name, admin username/password, and hostname.  We’ll also open up a remote desktop (RDP) endpoint through its security firewall: We’ll then specify the VHD host and data drives that we want to mount on the Virtual Machine, and specify the size of the VM we want to run it in: Once everything has been set up the call to create the virtual machine is executed asynchronously In a few minutes we’ll then have a completely deployed VM running on Windows Azure with all of the settings (hard drives, VM size, machine name, username/password, network endpoints + firewall settings) fully configured and ready for us to use: Preview Availability via NuGet The Windows Azure Management Libraries for .NET are now available via NuGet. Because they are still in preview form, you’ll need to add the –IncludePrerelease switch when you go to retrieve the packages. The Package Manager Console screen shot below demonstrates how to get the entire set of libraries to manage your Windows Azure assets: You can also install them within your .NET projects by right clicking on the VS Solution Explorer and using the Manage NuGet Packages context menu command.  Make sure to select the “Include Prerelease” drop-down for them to show up, and then you can install the specific management libraries you need for your particular scenarios: Open Source License The new Windows Azure Management Libraries for .NET make it super easy to automate management operations within Windows Azure – whether they are for Virtual Machines, Cloud Services, Storage Accounts, Web Sites, and more.  Like the rest of the Windows Azure SDK, we are releasing the source code under an open source (Apache 2) license and it is hosted at https://github.com/WindowsAzure/azure-sdk-for-net/tree/master/libraries if you wish to contribute. PowerShell Enhancements and our New Script Center Today, we are also shipping Windows Azure PowerShell 0.7.0 (which is a separate download). You can find the full change log here. 
Here are some of the improvements provided with it: Windows Azure Active Directory authentication support Script Center providing many sample scripts to automate common tasks on Windows Azure New cmdlets for Media Services and SQL Database Script Center Windows Azure enables you to script and automate a lot of tasks using PowerShell.  People often ask for more pre-built samples of common scenarios so that they can use them to learn and tweak/customize. With this in mind, we are excited to introduce a new Script Center that we are launching for Windows Azure. You can learn about how to scripting with Windows Azure with a get started article. You can then find many sample scripts across different solutions, including infrastructure, data management, web, and more: All of the sample scripts are hosted on TechNet with links from the Windows Azure Script Center. Each script is complete with good code comments, detailed descriptions, and examples of usage. Summary Visual Studio 2013 and the Windows Azure SDK 2.2 make it easier than ever to get started developing rich cloud applications. Along with the Windows Azure Developer Center’s growing set of .NET developer resources to guide your development efforts, today’s Windows Azure SDK 2.2 release should make your development experience more enjoyable and efficient. If you don’t already have a Windows Azure account, you can sign-up for a free trial and start using all of the above features today.  Then visit the Windows Azure Developer Center to learn more about how to build apps with it. Hope this helps, Scott P.S. In addition to blogging, I am also now using Twitter for quick updates and to share links. Follow me at: twitter.com/scottgu
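
    To make the management-libraries section above concrete, here is a hedged sketch of what a call through the preview packages might look like. The CertificateCloudCredentials and ComputeManagementClient class names come from the post itself; the exact operation-group and method names, and the response shape, are assumptions that may differ slightly in the preview NuGet packages, and the subscription id and certificate path are placeholders.

    // Hedged sketch: listing the cloud (hosted) services in a subscription with the
    // preview Windows Azure Management Libraries. Member names are approximations.
    using System;
    using System.Security.Cryptography.X509Certificates;
    using System.Threading.Tasks;
    using Microsoft.WindowsAzure;
    using Microsoft.WindowsAzure.Management.Compute;

    class ListCloudServices
    {
        static void Main()
        {
            RunAsync().GetAwaiter().GetResult();
        }

        static async Task RunAsync()
        {
            // The libraries authenticate with a management certificate; the
            // integrated Active Directory sign-in described above is a Visual
            // Studio feature. Subscription id and .pfx path are placeholders.
            var credentials = new CertificateCloudCredentials(
                "your-subscription-id",
                new X509Certificate2(@"C:\keys\management.pfx", "password"));

            using (var compute = new ComputeManagementClient(credentials))
            {
                // Enumerate the hosted services in the subscription (assumed API shape).
                var services = await compute.HostedServices.ListAsync();
                foreach (var service in services)
                {
                    Console.WriteLine("{0} ({1})", service.ServiceName, service.Properties.Location);
                }
            }
        }
    }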

  • Host new WF4 workflows in AppFabric

    - by racingcow
    Hello, I am new to using AppFabric to host WF services. I am trying to write a workflow admin application that will allow users to create XAML workflow definitions using the hosted WF4 designer, and then somehow allow those workflow definitions to be automatically deployed and hosted in AppFabric with the click of a button. I have the designer working, and I have read a couple of tutorials on how to host workflow services in AppFabric, such as http://msdn.microsoft.com/en-us/library/ee677238.aspx, but my problem is how to deploy and host the workflow services from code. Does anyone know if this sort of "auto-deploy/host" thing can be done with AppFabric? If so, could you point me in the right direction? -David

  • Classic ASP vs. ASP.NET encryption options

    - by harrije
    I'm working on a web site where the new pages are ASP.NET and the legacy pages are Classic ASP. Being new to development in the Windows environment, I've been studying the latest technology, i.e. .NET, and I become like a deer in headlights whenever legacy issues come up regarding COM objects. Security on the website is an abomination, but I've easily encrypted the connectionStrings in the web.config file per http://www.4guysfromrolla.com/articles/021506-1.aspx, based on DPAPI machine mode. I understand this approach is not the most secure, but it's better than nothing, which is what it was for the ASP.NET pages. Now I am wondering how to do similar encryption for the connection strings used by the Classic ASP pages. A complicating factor is that the web site is hosted where I do not have admin permissions or even command-line access, just FTP. Moreover, I want to avoid managing the key. My research has found:
    - DPAPI with COM interop. It seems like this should already be available, but the only thing I could find discussing it is CryptoUtility (see http://msdn.microsoft.com/en-us/magazine/cc163884.aspx), which is not installed on the hosting server.
    - There are plenty of other third-party COM objects, e.g. Crypto from Dalun Software (http://www.dalun.com), but these aren't on the hosted server either, and they look to me to require some kind of key management.
    - There is CAPICOM on the hosted server, but M$ has deprecated it and many report it is not the easiest to use. It is not clear to me whether I can avoid key management with CAPICOM the way I can with DPAPI for ASP.NET. If anyone happens to know, please clue me in.
    - I could write a web service in ASP.NET and have the Classic ASP pages use it to get the decrypted connection strings and then store those in an application variable. I would not need to use SSL, since I could use localhost and nothing would be sent over the internet. In the simplest form I could implement what someone termed a poor man's version based on a simple XML stream; however, I was really looking to avoid any development, since I find it hard to believe there is not a simple solution for Classic ASP like there is for ASP.NET.
    Maybe I'm missing some options... Recommendations are requested...
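
    For illustration of the "DPAPI with COM interop" option mentioned above (a sketch, not from the original post): machine-scope DPAPI is exposed in .NET through System.Security.Cryptography.ProtectedData, and a small COM-visible wrapper would let Classic ASP call it via Server.CreateObject. The class name and ProgId below are hypothetical, and registering the assembly (regasm) needs more than FTP access, which may rule this out on a locked-down host.

    // Rough sketch: machine-scope DPAPI wrapped in a COM-visible class so Classic
    // ASP pages could call it. Names are hypothetical; registration requires
    // server access the original poster may not have.
    using System;
    using System.Runtime.InteropServices;
    using System.Security.Cryptography;
    using System.Text;

    [ComVisible(true)]
    [ProgId("Acme.DpapiHelper")]              // hypothetical ProgId
    public class DpapiHelper
    {
        // Encrypt a connection string with the machine key (no key management needed).
        public string Protect(string plainText)
        {
            byte[] cipher = ProtectedData.Protect(
                Encoding.UTF8.GetBytes(plainText), null, DataProtectionScope.LocalMachine);
            return Convert.ToBase64String(cipher);
        }

        // Decrypt it again; only works on the same machine that encrypted it.
        public string Unprotect(string base64Cipher)
        {
            byte[] plain = ProtectedData.Unprotect(
                Convert.FromBase64String(base64Cipher), null, DataProtectionScope.LocalMachine);
            return Encoding.UTF8.GetString(plain);
        }
    }

    From a Classic ASP page the wrapper would then be reachable with Server.CreateObject("Acme.DpapiHelper"), keeping the decrypted connection string out of the .asp files themselves.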
