Search Results

Search found 8849 results on 354 pages for 'cloud hosting'.


  • Win 2 years free web hosting for your site!!!

    - by mcp111
    EggHeadCafe is giving away a free 2-year Personal Class Account for Arvixe ASP.NET web hosting! In fact, all members who enter the drawing below win a 20% discount off a Personal Class Account. The nice thing about Arvixe is that they also accept Google Checkout and PayPal. http://www.eggheadcafe.com/tutorials/aspnet/828f2029-b7be-4d15-877c-0d9e9ab74fc5/review-of-arvixecom-web-site-hosting.aspx

    Read the article

  • Will YouTube (or any video hosting service) provide a mp4 link to an uploaded video? [closed]

    - by DoubleJ
    I've looked into a number of video hosting options, but none seem to provide a .mp4 link to your uploaded video, which I require for use in a project using BigVideo.js. The only service I've found to provide this functionality so far is VimeoPro, which I can't afford. Is there any way to do this for free in YouTube? Or, otherwise, are there any other (preferably low- or no-cost) video hosting providers that will provide a .mp4 link?

    Read the article

  • Rackspace Cloud Server DNS Add SPF Records

    - by user625435
    I've set up my new LAMP server on Rackspace Cloud, and the basic A, CNAME and MX DNS setup is no problem. I need to add an SPF record for a project I am migrating over to this new server that allows emails from a 3rd-party server, and I can't figure out how to do this. There doesn't seem to be an option to add a TXT record in my Rackspace Cloud Server interface. I installed BIND DNS on my Apache server, but I am not sure how to get that to be seen, etc.
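
    For reference, SPF data is published as a TXT record on the domain itself. A minimal sketch of such a record in a BIND zone file, where the IP address and the included domain are placeholders for the third-party sender rather than values from the question:

        ; Authorize this host's own A/MX records plus a third-party sender for example.com
        example.com.    IN  TXT  "v=spf1 a mx ip4:192.0.2.10 include:_spf.thirdparty.example ~all"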

    Read the article

  • Backup the Windows user folder in the cloud?

    - by Benjamin
    As I understand it, Google Drive and Dropbox, the two cloud storage providers I happen to know, can only sync a predefined folder that is created upon installation. I'd be happy to have an automated synchronisation of my folders in the cloud, but I'm not ready to change my habits, and start saving all my documents in the folder imposed by the provider. Is it possible with one of these, or any other you might know, to sync the full Windows user folder instead?

    Read the article

  • How much bandwidth does my server require?

    - by Jagira
    Hello, I have a blog and website hosted on a GoDaddy server. I get somewhere around 50k hits a day. I was thinking of hosting it on my own, and I want to know whether a 2 Mbps connection is sufficient or not. Could somebody with self-hosting experience please guide me?
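
    As a rough back-of-the-envelope check (the average page weight here is an assumption, not a figure from the question): 50,000 hits/day × 500 KB per page ≈ 25 GB/day, which averages out to 25 GB × 8 bits/byte ÷ 86,400 s ≈ 2.3 Mbps. Traffic is rarely spread evenly across the day, so peak-hour demand can easily be several times that average, meaning a 2 Mbps uplink would likely be saturated unless the pages are much lighter or heavily cached.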

    Read the article

  • Can I host my web application in the cloud?

    - by Lakshmanan
    Hi, I have made a small web application in J2EE which I want to develop into a business. Can I host it on one of the cloud services? Please advise me on this issue as well. Will the cloud service be reliable over the long term? Thanks in advance.

    Read the article

  • Personal cloud storage options

    - by rhaddan
    I'm looking for some personal cloud storage options. My biggest concern about moving to a hosted storage solution is the long-term viability of the provider. Has anyone used a cloud service that you're crazy about? I'm a Mac user, so I need to have something that will work on the Mac OS and ideally the iPhone as well.

    Read the article

  • Reach self hosted server from LAN

    - by Freefri
    I have a self-hosted server with Apache2, pointed to by the domain example.com. I also have some virtual hosts: www.example.com, cloud.example.com, etc. This server is in my LAN, and when I try to access my server from within the LAN through www.example.com I get my router's configuration page. From outside the LAN, www.example.com and cloud.example.com work properly. From inside the LAN, 192.168.1.33 (the server's internal IP) shows the default web page (www.example.com), but I cannot reach cloud.example.com. I also have a LAN name server on 192.168.1.33 with bind9, and I set up my gateway 192.168.1.1 with my LAN NS as primary NS. I solved this problem by creating a new DNS zone in the NS. These are my config files:

        ;ZONE-1
        $ORIGIN .
        $TTL 86400      ; 1 day
        home.lan.   IN SOA server.home.lan. hostmaster.home.lan. (
                        2008080901 ; serial
                        8H         ; refresh
                        4H         ; retry
                        4W         ; expire
                        1D         ; minimum
                        )
        home.lan.   IN NS server.home.lan.
        $ORIGIN home.lan.
        ; Set the address for localhost.home.lan
        localhost   IN A 127.0.0.1
        router      IN A 192.168.1.1
        server      IN A 192.168.1.33
        mypc        IN A 192.168.1.132

        ;ZONE-2
        $ORIGIN .
        $TTL 86400      ; 1 day
        example.com.  IN SOA www.example.com hostmaster.home.lan. (
                        2008080902 ; serial
                        8H         ; refresh
                        4H         ; retry
                        4W         ; expire
                        1D         ; minimum
                        )
        example.com.  IN NS 192.168.1.33
        $ORIGIN example.com.
        localhost   IN A 127.0.0.1
        www         IN A 192.168.1.33
        cloud       IN A 192.168.1.33

    My DNS and my names are working properly now. My questions are: What do you think about my solution? Can I change the A records to CNAME records pointing to server.home.lan (the server's domain name on the LAN)? How can I set a default IP for any whatever.example.com?
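
    For the last question, a wildcard record is the usual mechanism; a minimal sketch against the same zone as above (any name under example.com without an explicit record would then resolve to the server):

        $ORIGIN example.com.
        *       IN A 192.168.1.33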

    Read the article

  • Open Different Types of New Google Documents Directly with These 7 New Chrome Apps

    - by Asian Angel
    Every time you want to open a new document of one kind or another in Google Drive you have to go through the whole ‘menu’ and ‘type selection’ process to do so. Now you can open the desired type directly from the New Tab Page using these terrific new Chrome apps from Google! The best part about this new set of apps is the ability to choose only the ones you want and/or need, then be able to start working on those new documents quickly without all the ‘selection’ hassle.

    Read the article

  • Windows Azure Use Case: Agility

    - by BuckWoody
    This is one in a series of posts on when and where to use a distributed architecture design in your organization's computing needs. You can find the main post here: http://blogs.msdn.com/b/buckwoody/archive/2011/01/18/windows-azure-and-sql-azure-use-cases.aspx

    Description: Agility in this context is defined as the ability to quickly develop and deploy an application. In theory, the speed at which your organization can develop and deploy an application on available hardware is identical to what you could deploy in a distributed environment. But in practice, this is not always the case. Having the option to use a distributed environment can make the deployment and even the development process much faster.

    Implementation: When an organization designs code, it is essentially becoming a Software-as-a-Service (SaaS) provider to its own organization. To do that, the IT operations team becomes the Infrastructure-as-a-Service (IaaS) provider to the development teams. From there, the software is developed and deployed using an Application Lifecycle Management (ALM) process. A simplified view of an ALM process is: Requirements, Analysis, Design and Development, Implementation, Testing, Deployment to Production, Maintenance.

    In an on-premise environment, this often equates to the following process map:
    Requirements: Business requirements formed by Business Analysts, Developers and Data Professionals.
    Analysis: Feasibility studies, including physical plant, security, manpower and other resources. The request is placed on the work task list if approved.
    Design and Development: Code written according to the organization's chosen methodology, either on-premise or by multiple development teams on and off premise.
    Implementation: Code checked into the main branch. Code forked as needed.
    Testing: Code deployed to on-premise testing servers. If no server capacity is available, more resources are procured through standard budgeting and ordering processes. Manual and automated functional, load, security and other tests are performed.
    Deployment to Production: The server team is involved to select platforms and environments with available capacity. If no server capacity is available, the standard budgeting and procurement process is followed, and systems are built, configured and put under standard organizational IT control. Systems are configured for proper operating systems, patches, security and virus scans. System maintenance, HA/DR, backups and recovery plans are configured and put into place.
    Maintenance: Code changes evaluated and altered according to need.

    In a distributed computing environment like Windows Azure, the process maps a bit differently:
    Requirements: Business requirements formed by Business Analysts, Developers and Data Professionals.
    Analysis: Feasibility studies, including budget, security, manpower and other resources. The request is placed on the work task list if approved.
    Design and Development: Code written according to the organization's chosen methodology, either on-premise or by multiple development teams on and off premise.
    Implementation: Code checked into the main branch. Code forked as needed.
    Testing: Code deployed to Azure. Manual and automated functional, load, security and other tests are performed.
    Deployment to Production: Code deployed to Azure. Point-in-time backup and recovery plans are configured and put into place (HA/DR and automated backups are already present in the Azure fabric).
    Maintenance: Code changes evaluated and altered according to need.

    This means that several steps can be removed or expedited. It also means that the business function requesting the application can be held directly responsible for the funding of that request, speeding the process further since the IT budgeting process may not be involved in the Azure scenario. An additional benefit is the “Azure Marketplace”: in effect, this becomes an app store for enterprises to select pre-defined code and data applications to mesh or bolt into their current code, possibly saving development time.

    Resources:
    Whitepaper download - What is ALM? http://go.microsoft.com/?linkid=9743693
    Whitepaper download - ALM and Business Strategy: http://go.microsoft.com/?linkid=9743690
    LiveMeeting recording on ALM and Windows Azure (registration required, but free): http://www.microsoft.com/uk/msdn/visualstudio/contact-us.aspx?sbj=Developing with Windows Azure (ALM perspective) - 10:00-11:00 - 19th Jan 2011

    Read the article

  • The Windows Azure Software Development Kit (SDK) and the Windows Azure Training Kit (WATK)

    - by BuckWoody
    Windows Azure is a platform that allows you to write software, run software, or use software that we've already written. We provide lots of resources to help you do that - many can be found right here in this blog series. There are two primary resources you can use, and it's important to understand what they are and what they do.

    The Windows Azure Software Development Kit (SDK): Actually, this isn't one resource. We have SDKs for multiple development environments, such as Visual Studio and also Eclipse, along with SDKs for iOS, Android and other environments. Windows Azure is a "back end", so almost any technology or front-end system can use it to solve a problem. The SDKs are primarily for development. In the case of Visual Studio, you'll get a runtime environment for Windows Azure which allows you to develop, test and even run code all locally - you do not have to be connected to Windows Azure at all until you're ready to deploy. You'll also get a few samples and code blocks, along with all of the libraries you need to code with Windows Azure in .NET, PHP, Ruby, Java and more. The SDK is updated frequently, so check this location to find the latest for your environment and language - just click the bar that corresponds to what you want: http://www.windowsazure.com/en-us/develop/downloads/

    The Windows Azure Training Kit (WATK): Whether you're writing code, using Windows Azure Virtual Machines (VMs) or working with Hadoop, you can use the WATK to get examples, code, PowerShell scripts, PowerPoint decks, training videos and much more. This should be your second download after the SDK. This is all of the training you need to get started, and even beyond. The WATK is updated frequently - you can find the latest one here: http://www.windowsazure.com/en-us/develop/net/other-resources/training-kit/

    There are many other resources - again, check the http://windowsazure.com site, the community newsletter (which introduces the latest features), and my blog for more.

    Read the article

  • Windows Azure Use Case: New Development

    - by BuckWoody
    This is one in a series of posts on when and where to use a distributed architecture design in your organization's computing needs. You can find the main post here: http://blogs.msdn.com/b/buckwoody/archive/2011/01/18/windows-azure-and-sql-azure-use-cases.aspx

    Description: Computing platforms evolve over time. Originally, computers were directed by hardware wiring - that is, the “code” was the path of the wiring that directed an electrical signal from one component to another, or in some cases a physical switch controlled the path. From there software was developed, first in a very low-level machine language; then, when compilers were created, computer languages could more closely mimic written statements. These language statements can be compiled into the lower-level machine language still used by computers today. Microprocessors replaced logic circuits, sometimes with fewer instructions (Reduced Instruction Set Computing, RISC) and sometimes with more instructions (Complex Instruction Set Computing, CISC). The reason this history is important is that along each technology advancement, computer code has adapted. Writing software for a RISC architecture is significantly different from developing for a CISC architecture. And moving to a distributed architecture like Windows Azure also has specific implementation details that our code must follow.

    But why make a change? As I’ve described, we need to change our code to follow advances in technology. There’s no point in change for its own sake, but as a new paradigm offers benefits to our users, it’s important for us to leverage those benefits where it makes sense. That’s most often done in new development projects. It’s a far simpler task to take a new project and adapt it to Windows Azure than to try to retrofit older code designed in a previous computing environment. We can still use the same coding languages (.NET, Java, C++) to write code for Windows Azure, but we need to think about the architecture of that code on a new project so that it runs in the most efficient, cost-effective way in a distributed architecture. As we receive new requests from the organization for new projects, a distributed architecture paradigm belongs in the decision matrix for the platform target.

    Implementation: When you are designing new applications for Windows Azure (or any distributed architecture) there are many important details to consider. But at the risk of over-simplification, there are three main concepts to learn and architect within the new code:

    Stateless Programming - Stateless programming is a prime concept within distributed architectures. Rather than each server owning the complete processing cycle, the information from an operation that needs to be retained (the “state”) should be persisted to another location (like storage) common to all machines involved in the process. An interesting way to learn Stateless Programming (although it's not unique to this type of language) is to learn Functional Programming.

    Server-Side Processing - Along with developing using a stateless design, the closer you can locate the code processing to the data, the less expensive and faster the code will run. When you control the network layer, this is less important, since you can send vast amounts of data between the server and client, allowing the client to perform processing. In a distributed architecture, you don’t always own the network, so its performance is unpredictable. Also, you may not be able to control the platform the user is on (such as a smartphone, PC or tablet), so it’s imperative to deliver only results and graphical elements where possible.

    Token-Based Authentication - Also called “Claims-Based Authorization”, this code practice means that instead of allowing a user to log on once and then running code in that context, a more granular level of security is used. A “token” or “claim”, often represented as a certificate, is sent along with a series of requests, or even a single request. In other words, every call to the code is authenticated against the token, rather than allowing a user free rein within the code call. While this is more work initially, it can bring a greater level of security, and it is far more resilient to disconnections.

    Resources:
    See the references for “Nondistributed Deployment” and “Distributed Deployment” at the top of this article for more information with graphics: http://msdn.microsoft.com/en-us/library/ee658120.aspx
    Stack Overflow has a good thread on functional programming: http://stackoverflow.com/questions/844536/advantages-of-stateless-programming
    Another good discussion on Stack Overflow on server-side processing is here: http://stackoverflow.com/questions/3064018/client-side-or-server-side-processing
    Claims-Based Authorization is described here: http://msdn.microsoft.com/en-us/magazine/ee335707.aspx
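
    A minimal sketch of the stateless idea in C# (the IStateStore interface and its implementation are hypothetical placeholders, not part of any Azure SDK): the handler below keeps nothing in memory between calls, so any Role Instance can service the next request.

        // Hypothetical abstraction over shared storage (for example Azure Tables or Blobs).
        public interface IStateStore
        {
            OrderState Load(string orderId);
            void Save(string orderId, OrderState state);
        }

        public class OrderState
        {
            public string OrderId { get; set; }
            public int ItemCount { get; set; }
        }

        public class OrderHandler
        {
            private readonly IStateStore _store;

            public OrderHandler(IStateStore store) { _store = store; }

            // Every call loads state from shared storage and writes it back;
            // nothing lives only in this instance's memory, so the instance is disposable.
            public void AddItem(string orderId)
            {
                var state = _store.Load(orderId) ?? new OrderState { OrderId = orderId };
                state.ItemCount++;
                _store.Save(orderId, state);
            }
        }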

    Read the article

  • Moved DNS and Email Hosting, Now Can't Send/Receive To/From Domains Hosted on Previous Host

    - by maxfinis
    Our company had 4 domains whose email and DNS were hosted by one company, and then we moved the email and DNS hosting for 3 of the 4 domains to a new company. Now, the 3 domains that were moved can't send or receive emails to and from the one domain still left on the old server. All other email functions work fine for all 4 domains. There are no bouncebacks, error messages, or emails stuck in a queue, and no evidence of these missing emails hitting the new servers. The new hosting company confirms that everything is fine on their end, and assures me that it's most likely an old zone file still remaining on the old nameserver, so the emails sent from the old host are routed to what it believes is still the authoritative nameserver. Because the old zone file's MX records still contain the old resource, the requests never leave the old nameserver to go online and do a fresh search for the real (new) authoritative nameserver. The compounding problem is that the old company is rather inept and doesn't seem to have the technical expertise to identify the problem, much less fix it. (I know, I know.)

    Is the problem truly that this old zone file just needs to be deleted from the old company's nameserver? If so, what's the best way for me to describe this to them? If not, what do you think could be the issue? Any help is much appreciated. I'm not in IT, so all this is new to me. I know it seems weird for me (the client) to have to do this legwork, but I just want to get this resolved. Here's what I've done:

    Ran dig to verify that the old server's MX records still point to the old authoritative server, instead of going online to do a fresh search:

        ~$ dig @old.nameserver.com domainthatwasmoved.com mx

        ; <<>> DiG 9.6.0-APPLE-P2 <<>> @old.nameserver.com domainthatwasmoved.com mx
        ; (1 server found)
        ;; global options: +cmd
        ;; Got answer:
        ;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 61227
        ;; flags: qr aa rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 1

        ;; QUESTION SECTION:
        ;domainthatwasmoved.com.          IN   MX

        ;; ANSWER SECTION:
        domainthatwasmoved.com.   3600    IN   MX   10 mail.oldmailserver.com.

        ;; ADDITIONAL SECTION:
        mail.oldmailserver.com.   3600    IN   A    65.198.191.5

        ;; Query time: 29 msec
        ;; SERVER: 65.198.191.5#53(65.198.191.5)
        ;; WHEN: Sun Dec 26 16:59:22 2010
        ;; MSG SIZE  rcvd: 88

    Ran dig to try to see where the new hosting company's servers look when emails are sent from the 3 domains that were moved, and got refused:

        ~$ dig @new.nameserver.net domainStillAtOldHost.com mx

        ; <<>> DiG 9.6.0-APPLE-P2 <<>> @new.nameserver.net domainStillAtOldHost.com mx
        ; (1 server found)
        ;; global options: +cmd
        ;; Got answer:
        ;; ->>HEADER<<- opcode: QUERY, status: REFUSED, id: 31599
        ;; flags: qr rd; QUERY: 1, ANSWER: 0, AUTHORITY: 0, ADDITIONAL: 0
        ;; WARNING: recursion requested but not available

        ;; QUESTION SECTION:
        ;domainStillAtOldHost.com.        IN   MX

        ;; Query time: 31 msec
        ;; SERVER: 216.201.128.10#53(216.201.128.10)
        ;; WHEN: Sun Dec 26 17:00:14 2010
        ;; MSG SIZE  rcvd: 34
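
    A generic way to cross-check which nameservers the rest of the world considers authoritative (standard dig commands, not output from the question) is to trace the delegation from the root and compare it with what the old host's nameserver still answers:

        ~$ dig NS domainthatwasmoved.com +trace
        ~$ dig @old.nameserver.com NS domainthatwasmoved.com

    If the second query still lists the old nameservers while the trace shows the new ones, the stale zone on the old server is what keeps its mail routing pointed at the old resource.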

    Read the article

  • Pay in the future should make you think in the present

    - by BuckWoody
    Distributed computing - and more importantly “-as-a-Service” models of computing - have a different cost model. This sounds obvious on the surface, but it’s often forgotten during the design and coding phase of a project. In on-premises computing, we’re used to purchasing a server and all of the hardware infrastructure and software licenses needed not only for one project, but several. This is an up-front or “sunk” cost that we consume by running code the organization needs to perform its function. Using a direct connection over wires you’ve already paid for, we don’t often have to think about bandwidth, hits on the data store or the amount of compute we use - we just know more is better.

    In a pay-as-you-go model, however, each of these architecture decisions has a potential cost impact. The amount of data you store, the number of times you access it, and the amount you send back all come with a charge. The offset is that you don’t buy anything at all up-front, so that sunk cost is freed up. And financial professionals know that money now is worth more than money later. Saving that up-front cost allows you to invest it in other things. It’s not just that you’re using things that now cost money - it’s that the design itself in distributed computing has a cost impact. That can be a really good thing, such as when you dynamically add capacity for paying customers. If you can tie back the cost of a series of clicks to what a user will pay to do so, you can set a profit margin that is easy to track.

    Here’s a case in point: assume you are using a large instance in Windows Azure to compute some data that you retrieve from a SQL Azure database. If you don’t monitor the path of the application, you may not know what you are really using. Since you’re paying by the size of the instance, it’s best to maximize it all the time. Recently I evaluated just this situation, and found that downsizing the instance and adding another one where needed, adding a caching function to the application, and moving part of the data into Windows Azure tables not only increased the speed of the application, but also reduced the cost and more closely tied the cost to the profit.

    The key is this: from the very outset - the design - make sure you include metrics to measure the cost and performance (sometimes these are the same) of your application. Windows Azure opens up awesome new ways of doing things, so make sure you study distributed systems architecture before you try to force the application design you have on-premises into your new application structure.

    Read the article

  • Why do I need two Instances in Windows Azure?

    - by BuckWoody
    Windows Azure as a Platform as a Service (PaaS) means that there are various components you can use in it to solve a problem:
    Compute “Roles” - computers running an OS and optionally IIS; you can have more than one "Instance" of a given Role.
    Storage - Blobs, Tables and Queues for storage.
    Other Services - things like the Service Bus, Azure Connection Services, SQL Azure and Caching.

    It’s important to understand that some of these services are Stateless and others maintain State. Stateless means (at least in this case) that a system might disappear from one physical location and appear elsewhere. You can think of this as a cashier at the front of a store. If you’re in line, a cashier might take his break, and another person might replace him. As long as the order proceeds, you as the customer aren’t really affected except for the few seconds it takes to change them out. The cashier function in this example is stateless. The Compute Role Instances in Windows Azure are Stateless. To upgrade hardware, because of a fault, or for many other reasons, a Compute Role's Instance might stop on one physical server, and another will pick it up. This is done through the controlling fabric that Windows Azure uses to manage the systems. It’s important to note that storage in Azure does maintain State. Your data will not simply disappear - it is maintained - in fact, it’s maintained three times in a single datacenter and all those copies are replicated to another for safety. Going back to our example, storage is similar to the cash register itself. Even though a cashier leaves, the record of your payment is maintained.

    So if a Compute Role Instance can disappear and re-appear, the things running on that first Instance would stop working. If you wrote your code in a Stateless way, then another Role Instance simply re-starts that transaction and keeps working, just like the other cashier in the example. But if you only have one Instance of a Role, then when the Role Instance is re-started, or when you need to upgrade your own code, you can face downtime, since there’s only one. That means you should deploy at least two of each Role Instance, not only for scale to handle load, but so that the first “cashier” has someone to replace them when they disappear. It’s not just a good idea - to gain the Service Level Agreement (SLA) for uptime in Azure, it’s a requirement. We point this out right in the Management Portal when you deploy the application.

    When you deploy a Role Instance you can also set the “Upgrade Domain”. Placing Roles on separate Upgrade Domains means that you have a continuous service whenever you upgrade (more on upgrades in another post). The upgrade scenario means you have four Roles total - one Web and one Worker running the "older" code, and one of each running the new code. In all those Roles you want at least two Instances, so that you're covered for both High Availability and upgrade paths.

    The take-away is this - always plan for forward-facing Roles to have at least two copies. For Worker Roles that do background processing, there are ways to architect around this number, but it does affect the SLA if you have only one.
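
    A minimal sketch of where the instance count lives, assuming the classic ServiceConfiguration.cscfg format for a cloud service with a single web role (the service and role names are placeholders):

        <?xml version="1.0" encoding="utf-8"?>
        <ServiceConfiguration serviceName="MyCloudService"
                              xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceConfiguration">
          <Role name="WebRole1">
            <!-- Two instances: one can be recycled or upgraded while the other keeps serving. -->
            <Instances count="2" />
          </Role>
        </ServiceConfiguration>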

    Read the article

  • Using the @ in SQL Azure Connections

    - by BuckWoody
    The other day I was working with a client on an application they were changing to a hybrid architecture – some data on-premise and other data in SQL Azure and Windows Azure Blob storage. I had them make a couple of corrections - the first was that all communications to SQL Azure need to be encrypted. It’s a simple addition to the connection string, depending on the library you use. Which brought up another interesting point. They had been using something that looked like this, using the .NET provider: Server=tcp:[serverName].database.windows.net;Database=myDataBase; User ID=LoginName;Password=myPassword; Trusted_Connection=False;Encrypt=True; This includes most of the formatting needed for SQL Azure. It specifies TCP as the transport mechanism, the database name is included, Trusted_Connection is off, and encryption is on. But it needed one more change: Server=tcp:[serverName].database.windows.net;Database=myDataBase; User ID=[LoginName]@[serverName];Password=myPassword; Trusted_Connection=False;Encrypt=True; Notice the difference? It’s the User ID parameter. It includes the @ symbol and the name of the server – not the whole DNS name, just the server name itself. The developers were a bit surprised, since it had been working with the first format that just used the user name. Why did both work, and why is one better than the other? It has to do with the connection library you use. For most libraries, the user name is enough. But for some libraries (subject to change so I don’t list them here) the server name parameter isn’t sent in the way the load balancer understands, so you need to include the server name right in the login, so the system can parse it correctly. Keep in mind, the string limit for that is 128 characters – so take the @ symbol and the server name into consideration for user names. The user connection info is detailed here: http://msdn.microsoft.com/en-us/library/ee336268.aspx Upshot? Include the @servername on your connection string just to be safe. And plan for that extra space…  
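
    A minimal sketch of using the corrected string from C# with the standard ADO.NET provider (server, database and credentials are placeholders):

        using System;
        using System.Data.SqlClient;

        class SqlAzureConnectDemo
        {
            static void Main()
            {
                // Note the @servername suffix on User ID, Encrypt=True and Trusted_Connection=False.
                var connectionString =
                    "Server=tcp:myserver.database.windows.net;Database=myDataBase;" +
                    "User ID=LoginName@myserver;Password=myPassword;" +
                    "Trusted_Connection=False;Encrypt=True;";

                using (var connection = new SqlConnection(connectionString))
                {
                    connection.Open();
                    Console.WriteLine("Connected to: " + connection.DataSource);
                }
            }
        }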

    Read the article

  • Creating a Corporate Data Hub

    - by BuckWoody
    The Windows Azure Marketplace has a rich assortment of data and software offerings for you to use – a type of Software as a Service (SaaS) for IT workers, not necessarily for end-users. Among those offerings is the “Data Hub” – a codename for a project that, ironically, actually does what the codename says. In many of our organizations, we have multiple data quality issues. Finding data is one problem, but finding it just once is often a bigger problem. Lots of departments and even individuals have stored the same data more than once, and in some cases, made changes to one of the copies. It’s difficult to know which location or version of the data is authoritative. Then there’s the problem of accessing the data. It’s fairly straightforward to publish a database, share or other location internally to store the data. But then you have to figure out who owns it, how it is controlled, and pass out the various connection strings to those who want to use it. And then you need to figure out how to let folks access the internal data externally – bringing up all kinds of security issues. Finally, in many cases our user community wants us to combine data from the internal sources with external data, bringing the security, strings, and exploration features up all over again.

    Enter the Data Hub. This is an online offering, where you assign an administrator and data stewards. You import the data into the service, and it’s available to you - and only you and your organization if you wish. The basic steps for this service are to set up the portal for your company, assign administrators and permissions, and then assign data areas and import data into them. From there you make them discoverable, and then you have multiple options for how you or your users can access that data. You’re then able, if you wish, to combine that data with other data in one location.

    So how does all that work? What about security? Is it really that easy? And can you really move the data definition off to the Subject Matter Experts (SMEs) that know the particular data stack better than the IT team does? Well, nothing good is easy – but using the Data Hub is actually pretty simple. I’ll give you a link in a moment where you can sign up and try this yourself. Once you sign up, you assign an administrator. From there you’ll create data areas, and then use a simple interface to bring the data in. All of this is done in a portal interface – nothing to install, configure, update or manage. After the data is entered in, and you’ve assigned metadata to describe it, your users have multiple options to access it. They can simply use the portal – which actually has powerful visualizations you can use on any platform, even mobile phones or tablets. Your users can also hit the data with Excel – which gives them ultimate flexibility for display, all while using an authoritative, single reference for the data. Since the service is online, they can do this wherever they are – given the proper authentication and permissions. You can also hit the service with simple API calls, like this one from C#: http://msdn.microsoft.com/en-us/library/hh921924. You can make HTTP calls instead of code, and the data can even be exposed as an OData feed. As you can see, there are a lot of options.

    You can check out the offering here: http://www.microsoft.com/en-us/sqlazurelabs/labs/data-hub.aspx and you can read the documentation here: http://msdn.microsoft.com/en-us/library/hh921938
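
    For readers who want to see the shape of such a call, here is a minimal sketch in C# of pulling an OData feed over HTTP (the feed URL and credentials are hypothetical placeholders, not a real Data Hub endpoint - see the MSDN links above for the actual API):

        using System;
        using System.Net;

        class DataFeedDemo
        {
            static void Main()
            {
                // Placeholder OData feed URL and account credentials.
                var feedUrl = "https://example.clouddatahub.net/v1/YourOrg/DataSets/Sales?$top=10";

                var client = new WebClient();
                client.Credentials = new NetworkCredential("accountName", "accountKey");

                // The feed comes back as an Atom/OData XML document that any HTTP client can parse.
                string atomXml = client.DownloadString(feedUrl);
                Console.WriteLine(atomXml);
            }
        }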

    Read the article

  • Convert C# Silverlight App To AZURE CLOUD Platform?!?!

    - by Goober
    The Scenario: I've been following Brad Abrams' Silverlight tutorial on his blog. I have tried following Brad's "How to deploy your app to the Cloud" tutorial, however I'm struggling with it, even though it is in the same context as the first tutorial.

    The Question: Is the application structure essentially the same as the original non-cloud-based version? If not, which parts are different? (I get that there is a Cloud Service project added to the solution) - but what else?

    Connection String Issue: In my non-cloud-based application, I make use of the ADO.NET Entity Framework to communicate with my database. The connection string in my web.config file looks like:

        <add name="InmZenEntities" connectionString="metadata=res://*/InmZenModel.csdl|res://*/InmZenModel.ssdl|res://*/InmZenModel.msl;provider=System.Data.SqlClient;provider connection string=&quot;Data Source=CHASEDIGITALWS3;Initial Catalog=InmarsatZenith;Integrated Security=True;MultipleActiveResultSets=True&quot;" providerName="System.Data.EntityClient" />

    However, the connection string that I get from SQL Azure looks like:

        Server=tcp:k12ioy1rsi.ctp.database.windows.net;Database=master;User ID=simongilbert;Password=myPassword;Trusted_Connection=False;

    So how do I go about merging the two when I move the non-cloud-based application to the cloud? Any help regarding converting a Silverlight application to a cloud service and deploying it would be greatly appreciated.
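
    For the connection-string part, a hedged sketch of what the merged Entity Framework entry might look like once the inner provider connection string points at SQL Azure instead of the local server (the @servername suffix, database name and credentials are placeholders to adapt):

        <add name="InmZenEntities"
             connectionString="metadata=res://*/InmZenModel.csdl|res://*/InmZenModel.ssdl|res://*/InmZenModel.msl;provider=System.Data.SqlClient;provider connection string=&quot;Server=tcp:k12ioy1rsi.ctp.database.windows.net;Database=InmarsatZenith;User ID=simongilbert@k12ioy1rsi;Password=myPassword;Trusted_Connection=False;Encrypt=True&quot;"
             providerName="System.Data.EntityClient" />

    The outer metadata and provider parts stay as they are; only the inner SQL Server connection string changes to the Azure server, with Encrypt=True added, which is recommended for SQL Azure connections.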

    Read the article

  • Asp.net hosting equivalent of Dreamhost (pricing, features and support)

    - by Cherian
    Disclaimer: I have browsed http://stackoverflow.com/questions/tagged/asp.net+hosting and didn't find anything quite similar in value to Dreamhost. One of the biggest impediments, IMHO, for developing web applications on ASP.NET is the cost of deployment. I am not talking about building sites like Stackoverflow.com or plentyoffish.com. This is about sites that are bigger than brochureware and smaller than ones that require dedicated servers. Let me give you an example. xmec.org is an ASP.NET site I maintain for my college alumni. On average it's slated to hit around 1000-1100 views per day. At present it's hosted on GoDaddy. The service is so damn pathetic; I am using it only because of the lack of options. The site doesn't scale (no, it's not the code) and the web control panels are extremely slow. The money I pay doesn't justify the service or the performance. Every deployment push is a visit to the infuriating web control panel to set the permissions and the root directories. Had I developed it in Python, this would have been deployed on Dreamhost.com with $10/year hosting fees (they have offers running all throughout):
    50 GB space
    5 MySQL databases
    Shell / FTP users
    POP / SMTP access
    Unlimited domains hosting
    Unlimited subdomains hosting
    Unlimited domains forwarded/mirrored
    Custom DNS
    (These are the only ones I could think of. More at the feature page.) With a Dreamhost shell, I even have an svn checked-out version of WordPress for my blog. Now, that's control!

    To my question: is there any ASP.NET hosting company (preferably .NET 3.5; Dreamhost keeps on updating versions every fortnight) providing remotely similar feature sets and pricing to Dreamhost? My requirements are:
    Less than $15-25/year
    Typical WISP minus PHP
    .NET 3.5 SP1
    Full Trust mode (I can live with Medium Trust, if not for the IL-emitting libraries)
    Isolated application pool
    5-10 MySQL DBs
    Unlimited domain hosting
    MS SQL 2005 or 2008
    FTP support
    At least 5 GB space
    SMTP
    IIS 7
    Log file accessibility
    Moderately good control panel
    Scripting/shell support
    Nominal bandwidth

    Another case in point: recently I've been contemplating building a tool website to find duplicates and weird characters in my Google contacts and fix them. With ASP.NET, the best part is that I can do this with LINQ to XML in less than 100 lines of code. What's bad is the hosting part. I don't think I stand to make any money out of this and therefore can't afford to host it on GoGrid or DiscountAsp.net. GoDaddy is not an option either. If I do this in Python, I can push this to my existing $10 Dreamhost account with another domain pointed. No extra cost. Svn exported with scripts (capability) to change the connection string! Looking at the problem holistically, I think I represent a large breed of programmers playing it cheap and experimenting with different things on a regular basis, one of which will become the next Twitter/Digg.

    Read the article

  • Time drift in Cloud Server - need to manipulate GRUB config

    - by Aditya Advani
    We are hosting a VPS on a popular host and are experiencing a regular forward time drift of several minutes a day (approx. 7).
    Linux kernel: 2.6.18-164.11.1.el5 GNU/Linux
    Distro: CentOS release 5.4 (Final)

    We reached out to our hosting provider and their support advised us: "This is a known issue with Cloud Servers. To fix this you will need to add one line to your grub config located at /boot/grub/menu.lst. The line you need to add is: noapic nolapic divider=10 nolapic_timer. This should correct this issue. You will need to restart after this is added in."

    Because I am wary of manipulating GRUB - mostly I'm terrified that our server may fail to restart - I ask you guys, the pro *nix admins: where exactly in this file does the recommended insertion below go?

        # line from 1&1 for time syncing issue (Case 5163)
        noapic nolapic divider=10 nolapic_timer

    Please specify where exactly, and whether the order of the parameters matters. Why is the block below "title CentOS ..." indented? If someone could give me an overview of how this works or point me to a resource that's easy to follow, that's what I'm looking for immediately: a light overview or basic understanding of what I'm doing. If GRUB and bootloaders are a deep, dark treasure trove of kernel hacking or something, that's fine; well-recommended in-depth resources are also very welcome.

    This is my current /boot/grub/menu.lst:

        # grub.conf generated by anaconda
        #
        # Note that you do not have to rerun grub after making changes to this file
        #boot=/dev/sda
        # serial --unit=0 --speed=57600
        terminal --timeout=5 serial console
        timeout=5
        title CentOS (2.6.18-164.11.1.el5)
                root (hd0,0)
                kernel /boot/vmlinuz-2.6.18-164.11.1.el5 ro root=/dev/hda1 console=tty0 console=tty
                initrd /boot/initrd-2.6.18-164.11.1.el5.img

    MOST IMPORTANT: I need to know where in the file above it is appropriate to paste the suggested line, so I can confidently restart my VPS after manipulating the GRUB config.
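
    Going by the host's own instructions, kernel parameters belong at the end of the existing kernel line inside the title block; a sketch of just that stanza with the suggested parameters appended (the rest of menu.lst stays untouched, and the relative order of these appended parameters does not matter):

        title CentOS (2.6.18-164.11.1.el5)
                root (hd0,0)
                # line from 1&1 for time syncing issue (Case 5163): appended to the kernel line below
                kernel /boot/vmlinuz-2.6.18-164.11.1.el5 ro root=/dev/hda1 console=tty0 console=tty noapic nolapic divider=10 nolapic_timer
                initrd /boot/initrd-2.6.18-164.11.1.el5.img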

    Read the article

  • Recommendations for hosting large videos

    - by Clinton Blackmore
    I recently created and put a 45-minute, 300 MB video file on my website and told a mailing list about it. Checking my site stats, I see that I've used 20% of my "unlimited" bandwidth for the month. As I want to be able to have several videos like this, clearly I need to consider other options. The appeal of hosting files on my own site (aside from the supposedly unlimited disk space and bandwidth) is being able to control the format, resolution, and quality of the video(s), as well as to make it clear that I'm the copyright holder (although the videos will be under a Creative Commons license). I find that for the screencasts I'm making, having a high resolution (say 3/4 of 1024 * 768) really makes seeing what is going on on the screen easier. It is also always a plus to not have the experience marred by advertisements. One more wrench to throw in is that while the videos are non-commercial, they do promote a club, and it seems that falls afoul of some terms of service (especially for free services; while free is very nice, I will certainly consider putting up some money). What recommendations do you have for (fairly) long, high-resolution videos? Should I look in depth at sites like YouTube and Vimeo, should I be considering a filesharing site [I have no qualms with someone downloading the entire video first -- I wouldn't want to watch 45 minutes in my browser!], hosting files with BitTorrent (ugh -- I think that'd reduce my audience), or should I be looking into other web hosts (and if so, who?)

    Read the article
