Search Results

Search found 4459 results on 179 pages for 'rackspace cloud'.

Page 133/179 | < Previous Page | 129 130 131 132 133 134 135 136 137 138 139 140  | Next Page >

  • ASP.NET MVC multi-instance session management on Amazon EC2

    - by gandil
    I have a web application written in ASP.NET MVC 2, currently hosted on Amazon EC2. Because of growing traffic we want to move to a multi-instance environment. I have a custom session class which is currently initialized at session start (Global.asax) and which I access via a getter/setter class throughout the application. Because of the move to multiple instances I have to rethink the whole security architecture. I am looking for a better way to handle this problem: a good implementation of session state and how to apply it in an Amazon EC2 multi-instance environment. What are the roadblocks for the system architecture?

    Read the article

  • Fast, reliable data transfers from/to China

    - by Nils
    We are a small company and we will need to transfer rather large amounts of data (10GB+ each time) between Europe and China in the near future. As many may have experienced, Internet connections to or from China can be rather unreliable and slow at times without any apparent reason. For example, while sending data to China via FTP generally works well, it can be painfully slow in the other direction. Currently, we are investigating new ways to have high transfer rates in both directions. So far we have tried:
    - FTP (see above)
    - FTP over VPN services (generally slower than direct connections)
    - F2F (like Retroshare or Freenet - slow!!)
    - Aspera (fast but expensive!)
    - BitTorrent (unreachable end nodes, b/c of firewalls which we must not configure)
    We would like to try:
    - Cloud storage (e.g. Amazon S3, Google Storage) - are those services always and reliably reachable from inside China?
    - Point-to-point VPN (currently not possible, b/c of the network, see above)
    I'd be especially grateful to hear from people who have already dealt with this kind of problem before.
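
    One way to put numbers on the cloud-storage option would be a crude throughput probe run from a machine inside China, for example the rough Python sketch below (assuming boto3 is installed and a test bucket already exists; the bucket name and region are placeholders):

        import os
        import time

        import boto3  # assumption: boto3 installed and AWS credentials configured

        BUCKET = "my-transfer-test-bucket"   # placeholder bucket name
        KEY = "speedtest/sample.bin"
        LOCAL_FILE = "sample.bin"            # any reasonably large local file

        s3 = boto3.client("s3", region_name="ap-southeast-1")  # region is only an example

        size_mb = os.path.getsize(LOCAL_FILE) / (1024.0 * 1024.0)

        start = time.time()
        s3.upload_file(LOCAL_FILE, BUCKET, KEY)       # boto3 handles multipart upload for large files
        up = time.time() - start

        start = time.time()
        s3.download_file(BUCKET, KEY, LOCAL_FILE + ".down")
        down = time.time() - start

        print("upload:   %.1f MB in %.0fs -> %.2f Mbit/s" % (size_mb, up, size_mb * 8 / up))
        print("download: %.1f MB in %.0fs -> %.2f Mbit/s" % (size_mb, down, size_mb * 8 / down))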

    Read the article

  • How are my DNS entries safe in a shared hosting environment?

    - by Jake
    I'm trying to understand how DNS works in a shared hosting environment. I went to my registrar and set my name servers to my host's ns1.foo.com and ns2.foo.com. I'm using a cloud hosting provider who has a web portal where I can set my DNS entries. However, I am confused by the lack of security: when I entered the entries for my domain there was never any step to prove that I actually own that domain. What is to stop somebody else on the same hosting service (a nasty neighbor) from writing over my DNS entries and pointing my traffic to their server instead?

    Read the article

  • FTP script download from Linux to Windows

    - by user53864
    I'm using the following FTP script on Windows XP to download zip files from Ubuntu cloud servers. A zip file is created every day on the Ubuntu servers and I download it to Windows via this FTP script. I currently run the script manually every day because I have to edit its last line (mget /usr/backup_02-11-2010.zip) to match today's date. I want to change the script so that, when scheduled, it downloads only today's zip file without needing to be edited every day. The date is appended to the zip file names in the format dd-mm-yyyy. Need help...
        open server-ip-here
        username-here
        user-password-here
        lcd C:\Backup\files
        bin
        hash
        prompt
        mget /usr/backup_02-11-2010.zip
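
    One way around the daily edit would be to build the file name from today's date at run time, for example with a small Python script run by the Windows scheduler instead of the FTP script (a rough sketch, assuming Python is installed on the XP machine; the placeholders match the script above):

        from ftplib import FTP
        from datetime import date
        import os

        HOST = "server-ip-here"          # same placeholders as the script above
        USER = "username-here"
        PASSWORD = "user-password-here"
        LOCAL_DIR = r"C:\Backup\files"

        # Build today's file name using the dd-mm-yyyy pattern from the existing script
        remote_file = "/usr/backup_%s.zip" % date.today().strftime("%d-%m-%Y")
        local_file = os.path.join(LOCAL_DIR, os.path.basename(remote_file))

        ftp = FTP(HOST)
        ftp.login(USER, PASSWORD)
        with open(local_file, "wb") as f:
            ftp.retrbinary("RETR " + remote_file, f.write)   # binary mode, equivalent to "bin"
        ftp.quit()
        print("Downloaded %s to %s" % (remote_file, local_file))

    The script could then be scheduled with the Windows Task Scheduler rather than edited by hand.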

    Read the article

  • Single hardware unit to protect web servers and implement smart publishing

    - by Maxim V. Pavlov
    Thus far we've been using the combination of Forefront TMG 2010 as an edge firewall + intrusion prevention system + web site publishing mechanism in the data center to work with a few web server machines. Since we develop on ASP.NET, we are an IIS and, in general, Microsoft crowd. Since TMG is being deprecated, we need to come up with a hardware alternative to protect and serve our data center web cloud. Could you please advise a hardware or virtual appliance solution that can provide routing, flood prevention and smart web site publishing (one IP serving many web sites, selected by a domain-name filter) all in one? Even if it is hard to configure, as long as it covers all these features, we will invest the time to learn it and replace TMG eventually.

    Read the article

  • MySQL: calculate the number of connections needed

    - by Udi I
    I am trying to work out my needs regarding web service hosting. After trying Azure I have realized that the default MySQL they provide (through a third party) limits the account to 4 connections. You can then upgrade the account to 15, 30 or 40 connections (which is quite expensive). Their 15-connection plan is described as: "Excellent choice for light test and staging apps that need a reliable MySQL database". I have 2 questions: if my application is a web service which needs to perform ~120k queries a day (normal/bell distribution) and each query takes ~150ms (duration) / ~400ms (fetch), how many connections do I need? And if, instead of using cloud computing, I choose a VPS, how many connections will I be able to handle on a 1GB, 2-core VPS? Thank you!
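
    As a back-of-the-envelope illustration of the first question, the number of busy connections can be estimated from the query rate and query duration (a rough Python sketch; the peak-to-average factor of 3 is only a guess for a bell-shaped daily curve):

        # Estimate how many connections stay busy at peak.
        queries_per_day = 120000
        avg_query_seconds = 0.4        # ~400 ms per query including fetch
        peak_factor = 3.0              # assumed ratio of peak-hour rate to the daily average

        avg_qps = queries_per_day / (24 * 3600.0)          # average queries per second
        peak_qps = avg_qps * peak_factor
        busy_connections = peak_qps * avg_query_seconds    # Little's law: L = lambda * W

        print("average qps: %.2f" % avg_qps)                         # ~1.39
        print("peak qps:    %.2f" % peak_qps)                        # ~4.17
        print("connections busy at peak: %.2f" % busy_connections)   # ~1.7

    By that estimate only a couple of connections are busy even at peak, so the real question is how much headroom is needed for bursts and slow queries.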

    Read the article

  • Recommend a mail server setup for multiple domains

    - by Greg
    Hi all, I've just set up a new Debian web server, which I have done plenty of times before, but I want to add a mail server, which I have never done before. I am aware of this question, but I would like someone to recommend packages and briefly explain how to use them for providing POP/IMAP access on multiple domains, a concept that has confused me for a while. I'm planning for this server to grow slowly but surely, from serving an initial 5 or 6 domains to about 20 in the first year, continuing at this rate (yes, I've jumped on the cloud bandwagon). At the moment, I have a DNS A record pointing to my server's IP and nothing else. I'm assuming that I need a DNS MX record pointing there too, but I haven't read up about it yet, so today that's what I'll be doing. Hopefully reading up on the subject and the help that I get here will get my server up and running in no time. Thanks!

    Read the article

  • Cannot access the new cloned server even after new IP address assignment

    - by tough
    I was able to clone a Ubuntu 10.04 server residing in the cloud. It appeared that the new VM was not getting an IP, so I followed part of this procedure:
        # cd /etc/udev/rules.d
        # cp 70-persistent-net.rules /root/
        # rm 70-persistent-net.rules
        # reboot
    I didn't follow the later commands because I was unable to see the two eth MACs mentioned in the referenced site. After this I can see an IP for the new VM, and it is different from the original IP; I have added the new IP to the DNS server. Now when I try to access the clone via its assigned (new) domain, I am directed to the old server. I can see both VMs running with different IPs. Where might I have gone wrong? I am new to this admin thing.

    Read the article

  • How do I force a specific MTU for only certain TCP ports?

    - by Dave S.
    Background: I have a set of embedded hardware deployed in the field. These remote machines connect back to my servers at AWS running Ubuntu, and I use the iptables mangle chain to lower the MTU to 500 so these devices are happy. For reference, this is the iptables rule I am using:
        -A POSTROUTING -p tcp --sport 12345 --tcp-flags SYN,RST SYN -o eth0 -j TCPMSS --set-mss 500
    Current problem: I'm trying to spin up some servers on the Joyent Cloud using SmartOS, but I can't find any information on selectively changing the MTU like I can on Linux (e.g. all info I've found is on changing it globally, which is not what I want). How would I do it so that all connections on TCP port 12345 get the MTU I want?

    Read the article

  • Providing high availability and failover using MySQL on EC2

    - by crb
    I would like to have a highly available MySQL system, with automatic failover, running on Amazon EC2 instances. The standard approach to solving this problem is Heartbeat + DRBD, but I've found a lot of posts suggesting DRBD doesn't work on EC2, though none saying exactly why. Obviously, a serial heartbeat or a distinct network is out of the question in the virtualised environment. It would also be good to have the different servers be in different availability zones, but we're getting into a much harder problem there. What are people's opinions on having a high-uptime solution in "the cloud"? Note: This question was asked before RDS with multi-AZ was announced, which is the nice automatic answer for today's modern IT professional. :)

    Read the article

  • Storing Cards and PCI Compliance

    - by Nimbuz
    I'm developing a SaaS service and will be managing payments as a merchant for customers. Since we'll be using multiple payment processors depending on user location, amount and other factors, it's important to store card details. I did some research, and from what I understood all you need is a PCI-compliant host (VPS, dedicated or private cloud) and to get it validated and certified through some provider like TrustWave. Is that correct, or am I missing something? Also, it would be great if you could suggest a few (not necessarily cheap, but affordable) PCI-compliant hosts. Many thanks

    Read the article

  • Appropriate Network switch for small server cluster

    - by Chris Dutrow
    I need to build a small business server cluster for the purpose of crunching data. It will not host a web site that needs to be available 24/7, but it does need to support servers that host Redis, a Cassandra database cluster, and a Python web server. The operating system will most likely be CentOS 6.4. Other servers in the cluster should be able to communicate very fast with each other, especially the Redis server; this will probably require the use of internal IP addresses. We will need to use multi-data-center replication to synchronize the Cassandra cluster with the one that we currently have hosted in the cloud. I was looking into network switches and we are unsure of the appropriate specifications that we should be looking for. Does the switch need to be "managed" or can it be "unmanaged"? Does the switch need to support IPv6 or just IPv4? Do we need an enterprise-level Cisco switch, or can we go with something like a $200 D-Link managed (or unmanaged) small business switch? Thanks so much!

    Read the article

  • How do I negotiate for colo space?

    - by randy melder
    I guess this isn't a technical question, but it definitely is something IT teams deal with, so here goes: I'm looking at getting a rack at a local colocation facility. I'm weighing the options versus building out in a cloud platform. We are REALLY low bandwidth and power. There's a total of six hosts for the total operation. You can assume we use <= 10 amps of power and <= 2Mbps 95th percentile. Do you have any advice for getting the best deal?

    Read the article

  • Serve my website from a different server during downtime

    - by nfedyashev.mp
    I have a VPS server running in the cloud, with fully automated server image upgrades/downgrades (by RAM/HDD plan). The problem is that a server upgrade/downgrade takes time and involves total unavailability during this period (up to 30 minutes). Goal: during this downtime, serve my website (http://mydomain.here) from a different server with some message like "Under construction". How can I do this? mydomain.here is hosted on GoDaddy and uses its DNS (if I call it right); it's pointing to my VPS's IP address with an A record now. A change in these DNS settings would take more than 30 minutes, so that's not an option. How can I find a more "dynamic" DNS? What should I learn?

    Read the article

  • Best practice for ONLY allowing MySQL access to a server?

    - by Calvin Froedge
    Here's the use case: I have a SaaS system that was built (dev environment) on a single box. I've moved everything to a cloud environment running Ubuntu 10.10. One server runs the application, the other runs the database. The basic idea is that the server that runs the database should only be accessible by the application and the administrator's machine, both of which have the correct RSA keys. My question: would it be better practice to use a firewall to block access to ALL ports except MySQL, or to skip the firewall / iptables and just disable all other services / ports completely? Furthermore, should I run MySQL on a non-standard port? This database will hold quite sensitive information and I want to make sure I'm doing everything possible to properly safeguard it. Thanks in advance. I've been reading here for a while but this is the first question that I've asked. I'll try to answer some as well = )

    Read the article

  • Backing up a Linux VPS with RSync to Vista

    - by Frank
    I've been working to set up a Linux VPS to host a couple of WordPress sites and eventually a Mercurial server. I've set up one site and things have gone well. However, before I start moving other things to the VPS, I need to set up a backup solution. My provider, Linode, suggests rsync (among a couple of other options) for backups. I've seen a few posts on this site that suggest other backup solutions, including going to the Amazon cloud, but that costs money and the VPS is all the money I want to spend on this for the time being. So, to help solve that, I want my backup computer to be my home desktop. Assuming I'm using rsync, is it possible to use my Vista-based home computer as the destination for the backup? And if it is possible, what type of command or connection would I need to configure on the Vista machine? Any insight would be helpful. It's probably obvious, but I've never used rsync.

    Read the article

  • Developing and implementing a testing plan for a software app deployed on a web server

    - by Abhzoo
    A company in the USA is building a new web app that will be offered as SaaS to customers, and the development is being done by a software development team located in a different country (India). They are about to take delivery of a first demo to provide live feedback to the team in India. The overseas team requires a cloud server (Windows + SQL Standard, 8GB RAM, 8 vCPUs, 40GB SSD system disk, 80GB SSD data disk, 1600Mb/s network bandwidth) to serve as a test server. When the test server is set up, the team will install the app on it to get live feedback. Q: Explain in detail how you will develop and implement a testing plan for the software app. Be sure to explain the specifics. PLEASE HELP, NEED ANSWER ASAP

    Read the article

  • Virtual MS SQL Server not consuming enough CPU

    - by rocketman
    We have a Windows 2008 server (32-bit) running as a virtual machine under ESX Server. It has 6 CPU cores of 2 GHz each and 4GB RAM, and it runs only MS SQL Server 2008 R2. Problem: the server is heavily loaded and responds slowly. From the Windows Task Manager's point of view it really looks overloaded, CPU-wise. However, our external "cloud manager" says it's only using 2.5 GHz worth of CPU cycles in the cluster, and I/O times look good. We have already tried setting the SQL Server's number of worker threads from 0 (auto) to 256, to no effect. How do we tune the VM host, guest or SQL Server to use all of its allotted resources? Does it sound possible at all?

    Read the article

  • Azure VM : Connection refused by host

    - by Simon Kérouack
    I recently stopped a subscription with 14 VMs in it and restarted it a few days later. Now all my VMs are working just fine with the exception of 6 used for MongoDB. They respond to ping, so they show as online in the Azure dashboard, but they do not answer anything else. I tried, from different locations in and out of the Azure cloud:
    - ssh: connect to host * port *: Connection refused
    - telnet: Unable to connect to remote host: Connection refused
    - mongo: exception: connect failed
    The ports for ssh and mongo are open in Azure. I tried restarting the VMs a few times through the Azure dashboard; they seem to restart successfully but still refuse all connections. I already looked for similar issues and the best solution I found was to wait... the issue has been happening for 7 days and waiting is no longer an option.

    Read the article

  • Tracking costs within one AWS account

    - by caius howcroft
    I have what I'm sure is a very common problem. Our company has many projects and groups working for different clients. We do a lot of our development work in the cloud and deploy our solutions there. We have a VPC set up that isolates projects from each other in their own subnets, and that VPC has a hardware VPN connection back to HQ. We need to keep track of the cost run up by every project. The way I currently implement this is by providing my own tools for starting and stopping instances, which log which user (and thus which project) to bill the instance to. This works okay for BoxUsage costs but not for other costs. I could create a separate account for each project and use consolidated billing; I think this would let me pay once but track costs per "project", but I would then not be able to share common resources (like bringing account B's running instances inside the same VPC). Does anyone have any suggestions? Cheers, C
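
    One refinement of that start/stop tooling would be to tag each instance with its project at launch, so that usage can at least be grouped afterwards. A rough sketch, assuming boto3 (the tag keys, region, AMI and instance type are placeholders):

        import boto3

        ec2 = boto3.client("ec2", region_name="us-east-1")   # region is a placeholder

        def start_instance_for_project(project, ami_id, instance_type="m1.small"):
            """Launch an instance and tag it so its usage can be grouped by project later."""
            reservation = ec2.run_instances(ImageId=ami_id, InstanceType=instance_type,
                                            MinCount=1, MaxCount=1)
            instance_id = reservation["Instances"][0]["InstanceId"]
            ec2.create_tags(Resources=[instance_id],
                            Tags=[{"Key": "Project", "Value": project},
                                  {"Key": "StartedBy", "Value": "internal-tooling"}])
            return instance_id

    A periodic report could then group instance hours by the Project tag, although that still leaves the non-BoxUsage costs unaccounted for.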

    Read the article

  • Slow Inserts SQL Server 2005

    - by Achilles
    I'm researching an issue with the following information: we had a logging table with about 90k records in it where inserts were taking several seconds (approximately 10 to 20s) in extreme cases. One of the columns of the table stores XML as the XML datatype; the XML isn't being parsed during the insert, just stored. We tried truncating the table, assuming that the issue was related to the number of records (although 90k seemed "normal"), and the inserts still perform poorly. While I know there are other factors that can cloud the issue, what would be some "check this first" ideas that could help me debug it? Thanks for any suggestions and help in advance.

    Read the article

  • Does Apache 2.2 (Windows) have any default bandwidth limit?

    - by igino manfre'
    I'm running Apache on a server in the cloud (Windows Server 2008 R2 on VMware, 1 Gbps of bandwidth, http://95.110.164.61 ). I'm streaming many live DVB MPEG transport streams (not Flash), precompressed in a loop and generated by VLC on ports 640xx, then reverse-proxied by Apache on port 80. The server's firewall is open for VLC and Apache on all ports. Above 1.5 Mbps the playback is affected by continuous stop and go. Please note that if you request a stream generated by VLC directly at http://95.110.164.61:64087/mpg2_6.4 you see a correct stream, while if you request http://95.110.164.61/mpg2_6.4 you do not. I know that the Flash streaming server uses Apache to stream on port 80 (and it works). I'm not an expert with Apache; can anyone tell me if any "special" module is required to increase the bandwidth?

    Read the article

  • Our server hosting provider asked for our root password

    - by Andreas Larsson
    I work at a company that develops and hosts a small business critical system. We have an "Elastic cloud server" from a professional hosting provider. I recently got an email from them saying that they've had some problems with their backup solution and that they needed to install a new kernel. And they wanted us to send them the root password so they could do this work. I know that the email came from them. It's not [email protected] or anything like that. I called them and asked them about this, and they were like "yep, we need the password to do this". It just seems odd to send the root password over email like this. Do I have any reason to be concerned?

    Read the article

  • Can I associate my spare Elastic IP addresses to an Amazon EC2 instance started in an Autoscale group and Monitoring?

    - by undefined
    I want to know if I can reserve a number of Amazon Elastic IP addresses and assign them to instances started by Auto Scaling. So basically, when a new instance is started because a trigger has fired, can I also have the API look for a spare IP address and allocate it to the instance? I need to do this because the started instance will need to communicate with a server outside the cloud and get through a firewall which will only allow remote access from a predefined set of IP addresses. So I think I need to reserve some IPs, add them to my firewall settings, then allocate them (automatically) when a new instance is started. Any ideas?
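
    Something along these lines, run at boot on the newly started instance, is roughly the behaviour described above (a sketch assuming boto3, VPC-style Elastic IPs, and that the instance can query its own ID from the EC2 metadata service; error handling omitted):

        import boto3
        import urllib.request

        ec2 = boto3.client("ec2", region_name="us-east-1")   # region is a placeholder

        # The instance can discover its own ID from the EC2 metadata service
        instance_id = urllib.request.urlopen(
            "http://169.254.169.254/latest/meta-data/instance-id", timeout=2).read().decode()

        # Find a reserved Elastic IP that is not associated with anything yet
        addresses = ec2.describe_addresses()["Addresses"]
        spare = next(a for a in addresses
                     if "AssociationId" not in a and "InstanceId" not in a)

        # Attach it so the external firewall sees a source address from the approved set
        ec2.associate_address(InstanceId=instance_id, AllocationId=spare["AllocationId"])
        print("associated %s with %s" % (spare["PublicIp"], instance_id))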

    Read the article

  • Replicate portion of an LDAP directory to external server

    - by colemanm
    We're in the process of setting up a Jabber server on Amazon EC2 right now, and we'd like to have our internal users authenticate via LDAP so we don't have to create/manage a separate set of user accounts than the master directory in the office. My question is: is there a way to copy, unidirectionally, a segment of our internal LDAP directory (the user accounts OU) to an external LDAP server and authenticate Jabber against that? We're trying to work around having our externally hosted machines out in the cloud accessing our internal network directly... If we can replicate in one direction only a subset of the user accounts, then if that gets compromised we don't necessarily have a critical security breach into our internal network.

    Read the article
