Search Results

Search found 1941 results on 78 pages for 'infrastructure'.

Page 37/78 | < Previous Page | 33 34 35 36 37 38 39 40 41 42 43 44  | Next Page >

  • What are the challenges when my enterprise desires to move the processing component of an application to the cloud?

    - by Berkay
    Assume that I have an enterprise accounting application that consists of a front-end interface, a processing tier, and a back-end database. This is an application that contains private business data, and thus is traditionally run in a secure private network environment within the enterprise. What are the challenges that appear when my enterprise decides to move the processing component of this application to a cloud computing data center in order to achieve greater scalability or to reduce IT costs? Please note: Do I have to make significant changes to my own infrastructure to enable external access to formerly private resources? Do I have to modify the application code to handle the new network topology? Thanks; answers in simple terms would be really appreciated.

    Read the article

  • What amount of physical RAM would a typical "commodity class" server have, as of late 2013?

    - by marathon
    I'm trying to spec out servers for my company's infrastructure group to build. They tell me anything more than 2 GB is too much, which I find ridiculous considering cheap DRAM is about 15 bucks a DIMM in bulk and our particular software runs better with more memory. I tried to find out how much RAM Google's servers use, and pinning down a number is hard. The best I could find in a Google research paper was that in 2008 their commodity servers were using 2 GB and 4 GB DIMMs, but the paper never said how many. I realize "commodity server" is a vague term, but I'm just looking for a rough range of RAM used. I suspect at least 16 GB is going to be the norm.

    Read the article

  • Implementing Variable Envelope Return Path (VERP) using Exchange

    - by iammichael
    We're looking into implementing Variable Envelope Return Path (VERP) for improved bounce processing for our application. Our current mail infrastructure is MS Exchange 2007, but we are in the process of upgrading to 2010. We're also implementing Postini for spam filtering. Exchange doesn't support sub-addressing (see also this question on disposable addresses), and VERP is somewhat of a specialized application of sub-addressing. Are there any options for implementing VERP in Exchange without putting another non-Exchange SMTP relay in front of Exchange to pre-process incoming messages? Specifically, could a transport rule be created that matches against the target (non-existent) recipient, stores that recipient address in a special header added to the message, and redirects the message to a pre-created mailbox? Note: we have developer resources available if custom code could be used somehow.
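
    For reference, a rough sketch of the kind of transport rule I have in mind, in Exchange 2010 Management Shell syntax (the address pattern and mailbox are made up, and this assumes recipient filtering doesn't reject the unknown VERP address before the rule fires). One caveat: -SetHeaderValue only takes a static string, so stamping the original recipient into the header dynamically may still need a custom transport agent.

        New-TransportRule -Name "VERP bounce capture" `
            -RecipientAddressMatchesPatterns 'bounces\+.*@example\.com' `
            -SetHeaderName "X-VERP-Capture" -SetHeaderValue "true" `
            -RedirectMessageTo "bounces@example.com"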

    Read the article

  • Windows Server 2008 Active Directory DNS setup

    - by Mister IT Guru
    I have to set up a small Windows network inside my bigger Linux/Mac infrastructure. In order to get the Windows clients logging onto the domain, I have had to make the DC their primary DNS server, which seems to have worked. I would much prefer to have one DNS server running on my network, or at least one authoritative server running on the network. I have a USG 200 router/firewall and I can configure some static records for DNS, but I am not sure what I need to put in to get DNS and AD working together; any hints and tips appreciated.
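
    The pattern I'm leaning towards, sketched here to make the question concrete: leave the Windows clients pointing at the DC for DNS, have the DC forward everything it is not authoritative for to the existing resolver, and delegate (or conditionally forward) the AD domain from the main DNS back to the DC. On the DC, the forwarder half would look roughly like this (192.0.2.53 is a placeholder for the existing DNS server):

        rem run on the Server 2008 domain controller
        dnscmd /ResetForwarders 192.0.2.53 /TimeOut 5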

    Read the article

  • Using a standard e-mail address as the system-wide user name

    - by PeterMmm
    I'm going to rebuild a very old Lotus Notes infrastructure, coming from 4.x towards 8.5. I'm trying to set up Domino so that all user names appear as a single short string or as the internet e-mail address. For example, the user "John Smith/ACME" should appear throughout the system as jsmith or [email protected]. I still get jsmith/ACME all around. Where it is most annoying is in the NAB when creating a new message. Is there a way to get all addresses in a uniform, standard e-mail address format, at least in mail? The mixup in the destination field, like "John Smith/ACME, [email protected]", confuses the users.

    Read the article

  • Steps to take when technical staff leave

    - by Tom O'Connor
    How do you handle the departure process when privileged or technical staff resign or get fired? Do you have a checklist of things to do to ensure the continuing operation and security of the company's infrastructure? I'm trying to come up with a nice canonical list of things that my colleagues should do when I leave (I resigned a week ago, so I've got a month to tidy up and GTFO). So far I've got:
      - Escort them off the premises
      - Delete their email inbox (set all mail to forward to a catch-all)
      - Delete their SSH keys on the server(s)
      - Delete their MySQL user account(s)
      - ...
    So, what's next? What have I forgotten to mention, or might be similarly useful? A rough sketch of a few of these steps as commands is below. (Endnote: why is this off-topic? I'm a systems administrator, and this concerns continuing business security, so this is definitely on-topic.)
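
    A sketch of what some of those steps look like on a Linux box (the username and database account are placeholders):

        # lock the departing admin's local account and expire it immediately
        usermod --lock jdoe && chage --expiredate 0 jdoe

        # find any authorized_keys entries they may have left behind
        grep -rl 'jdoe@' /root/.ssh/authorized_keys /home/*/.ssh/authorized_keys 2>/dev/null

        # drop their MySQL account(s)
        mysql -e "DROP USER 'jdoe'@'localhost'; FLUSH PRIVILEGES;"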

    Read the article

  • Recommended configuration to set up Tomcat 7 on a Windows OS

    - by yashbinani
    I have created a small web application using Java/JEE which will be deployed in a LAN environment. I want to know the recommended hardware configuration. Details are as follows:
      1) Expected number of hits: 20 per hour
      2) Number of clients: 5-7
      3) Application server: Tomcat 7
      4) Database server: MySQL (app and DB shall be deployed on the same machine)
      5) OS: Windows XP or any Unix flavour?
    Will a simple P4/Celeron machine with 1 GB RAM and an 8-10 GB hard disk be sufficient to serve these client requests? The server will not be storing many files/images/videos, and the client does not want to spend too much on infrastructure.

    Read the article

  • Sharing storage on Linux and Solaris

    - by devlearn
    I'm looking for a solution to share a SAN-mounted volume between several hosts running Linux (RHEL) and/or Solaris (SPARC). Note that I basically need to share a set of directories containing large binary files that are accessed in random R/W mode. I have the following requirements:
      - keep the data on the SAN
      - suitable I/O performance, as the software is pretty demanding on IOPS
      - stick to a shared file system, as I can't afford a cluster FS (lack of MDS/OSS infrastructure)
      - compression could be really useful
    For now I've found only the following candidates:
      - GFS2: supports Linux only, no compression
      - VxFS: supports Linux and Solaris, compression supported
    So if you have some suggestions for this list, I'll really welcome them. Thanks in advance.

    Read the article

  • Multi-Application Server Environment and Memcached Security

    - by jocull
    We are looking to integrate Memcached into our infrastructure, but have a security concern before we do. We run several platforms including ASP.NET and ColdFusion and have many app developers working on many little applications across the different platforms. The concern is this: App A places item "dog" into the cache. App B reads item "dog" (or worse: App B updates item "dog"). After this happens, App A either retrieves bad information, or has already had its information viewed, aka "stolen". What we would like to do is make it so that each app can only interact with its own sandbox, and may not interfere with or read another application's data. Is this possible? Thanks.

    Read the article

  • Disk / system configuration for log collection / syslog server

    - by Konrads
    I am looking into building a syslog/logging infrastructure and am pondering some architecture best practices. Essentially, I see that a syslog system needs to support two conflicting workloads:
      - Log collection: potentially massive streams of data need to be written quickly to disk and indexed.
      - Log querying: logs will be queried both by fixed fields such as date and source and by text search.
    What is the best disk/system setup, assuming I'd like to keep it to a single server for now? Should I use SSDs or a ramdisk to offload some processing? Some disks in a stripe and some in RAID 5? I am particularly eyeing Graylog2 with ElasticSearch/MongoDB.

    Read the article

  • Anyone interested in obtaining a cable list from a Visio diagram? [closed]

    - by Alex
    It's my first post here. Just wondering if anyone has had to deal with the following problem:
      - you have to install, say, over 100 network elements and servers
      - cabling is done via contractors, so you need to provide them with an accurate, error-free cable list
      - your inputs are a set of detailed, port-by-port Visio diagrams
    The problem is to obtain the cable list and get the cabling started while you're busy crafting the switch/router configs. I coded a Visio plugin, which I plan to release under the GNU license, that returns a cable list from a diagram, and I tested it on an intermediate-size infrastructure, 2K+ cables. It works well. The tool needs a little work to be user friendly, so before getting started, I wanted to know if that was worth the effort. Questions are welcome, let me know. -A
    PS: the tool is targeted at those who need a port-by-port description of their network, in the form Source/slot/port/Destination/slot/port.

    Read the article

  • Keeping packages on a large number of openSUSE servers updated

    - by Kamil Kisiel
    Question for anyone out there managing a network of openSUSE machines: how do you keep track of and apply updates? I know about YaST Online Update (YOU), but it seems more geared towards keeping a single machine up to date; it doesn't seem to scale well to a larger number of machines. How do you keep your machines updated? Our network is fairly heterogeneous in terms of package installation, as the servers are mostly infrastructure machines with varying roles. I know that SUSE Linux Enterprise has tools to manage updates network-wide, but moving to that is currently not an option for budget reasons.
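
    For reference, the simplest do-it-yourself baseline would presumably be zypper driven over SSH from one box; a rough sketch, assuming key-based root SSH and a hosts.txt listing the servers:

        for host in $(cat hosts.txt); do
            echo "== $host =="
            ssh "root@$host" 'zypper --non-interactive refresh && zypper --non-interactive update'
        done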

    Read the article

  • Normalize Accept-Encoding via HAProxy for optimized Squid hit rate

    - by Matt Beckman
    Our website infrastructure uses HAProxy for load balancing, a Squid cluster for caching, and application data is on an IIS cluster. We load balance HAProxy by URI to optimize the Squid hit-rate, but we know that Squid is holding different copies of each page based on the Accept-Encoding header passed to it by the browser, and so IE (gzip, deflate) will have a different copy of a cached page than Firefox (gzip,deflate) or Chrome (gzip,deflate,sdch). We want to normalize the Accept-Encoding headers and I think the best place to do so would be in HAProxy. I'd appreciate it if someone could offer some ideas on how to accomplish this without breaking support for clients without gzip or deflate support.
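
    For reference, the kind of normalization I have in mind, in HAProxy 1.4-style config (newer releases would use http-request del-header / set-header instead): collapse anything that accepts gzip down to a single canonical header, and strip the header for everything else, so Squid only ever sees two variants per URL.

        # in the frontend (or backend) handling these requests
        acl accepts_gzip hdr_sub(Accept-Encoding) gzip
        reqidel ^Accept-Encoding:.*
        reqadd  Accept-Encoding:\ gzip if accepts_gzip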

    Read the article

  • Experiences with Google TiSP?

    - by Zypher
    I got an email from Google a couple of hours ago (around 12 AM EST today) saying that Google's TiSP service is now available in my area. This seems like a great deal compared to my current 16 Mbps cable connection at work; however, I'm a little nervous about the fact that Linux support is "coming soon". I was wondering if anyone had successfully installed this system and gotten it working with their Linux infrastructure? I'm assuming that there shouldn't be any issues since we have an ASA in front of our internet connection; TiSP shouldn't care what is behind that. Any insight would be greatly appreciated!

    Read the article

  • Cannot find the 2nd datastore after upgrading ESXi 3.5

    - by aXqd
    I have an ESXi server (version 3.5) with about 60 VMs. It has two hard disks, each of which is regarded as a datastore. After upgrading through the 'VMware Infrastructure Update' tool (still staying with 3.5 instead of 4.0) and a reboot, I can only see the 1st datastore, hence many VMs are now inaccessible. I wonder how I can get the 2nd datastore back. I am sorry, but I didn't have the 2nd datastore backed up beforehand. BTW, I am still thinking of upgrading directly to version 4.0 to see if it can fix the driver problem. How about that?

    Read the article

  • Incident Management-Monitoring Ideas

    - by sprsr
    Hello all, what we are trying to do at our company (banking industry) is to apply some ITIL (Information Technology Infrastructure Library) principles, and I need some ideas to develop our company's incident management system. For those who have experience with incident management: what are the things that help you most? What are the things that you can't live without while managing incidents? Do you have some good screenshots of such monitoring software? Since we chose to develop our own system instead of buying a big one, there are lots of things we may miss, and we are brainstorming here. I need the key points that are most crucial in incident management and monitoring. Thanks.

    Read the article

  • Best configuration and deployment strategies for Rails on EC2

    - by Micah
    I'm getting ready to deploy an application, and I'd like to make sure I'm using the latest and greatest tools. The plan is to host on EC2, as Heroku will be cost prohibitive for this application. In the recent past, I used Chef and the Opscode platform for building and managing the server infrastructure, then Capistrano for deploying. Is this still considered a best (or at least "good") practice? The Chef setup is great once done, but pretty laborious to set up. Likewise, Capistrano has been good to me over the past several years, but I thought I'd take some time to look around and see if there have been any landscape shifts that I missed.

    Read the article

  • Is it still "wrong" to require TLS on incoming SMTP messages?

    - by jackweirdy
    According to the STARTTLS Spec Section 5: A publicly-referenced SMTP server MUST NOT require use of the STARTTLS extension in order to deliver mail locally. This rule prevents the STARTTLS extension from damaging the interoperability of the Internet's SMTP infrastructure. A publicly-referenced SMTP server is an SMTP server which runs on port 25 of an Internet host listed in the MX record (or A record if an MX record is not present) for the domain name on the right hand side of an Internet mail address. However, this spec was written in 1999, and considering it's 2014, I'd expect most SMTP clients, servers, and relays to have some kind of implementation of STARTTLS. How much email can I expect to lose if I require TLS for incoming messages?
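
    For concreteness, if the MTA in question happened to be Postfix (the spec quote above is MTA-agnostic), the switch being debated is a single setting; "may" is the opportunistic, RFC-friendly behaviour for a public MX, and "encrypt" is the mandatory mode that starts rejecting senders that never issue STARTTLS:

        # main.cf sketch (Postfix 2.3 or later)
        smtpd_tls_security_level = may        # opportunistic STARTTLS: offer it, don't require it
        #smtpd_tls_security_level = encrypt   # mandatory TLS: clients that won't STARTTLS are rejected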

    Read the article

  • What is the cheapest non-colocation way to serve about 10 static files at a rate of 100 megabits per second?

    - by Mark Maunder
    I've looked at Amazon S3 and it costs roughly $4746 per month for 100 megabits/s (which translates into 31,640 Gigabytes of data transferred. That's at a rate of $0.15 per gig.) I haven't found a cheaper "cloud" option. I'm curious if there's any other cloud hosting option out there cheaper than S3. Uptime is not an issue because I can build failover for most things into the browser. e.g. I can use javascript to say "if the image didn't load then go to this other URL instead." FYI I'm currently using a colocation facility which is about 30% cheaper than S3 and I'm familiar with colo prices - so this question is really about "cloud" services and by that I mean services where I don't have to worry about the infrastructure.
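
    The browser-side failover mentioned above can be as small as an onerror handler on each asset (the hostnames here are placeholders):

        <img src="http://cdn.example.com/logo.png"
             onerror="this.onerror=null; this.src='http://backup.example.com/logo.png';">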

    Read the article

  • Serving static web files off a non-standard port

    - by Nimmy Lebby
    I'm close to deploying a Django project to production and I'm looking over some infrastructure decisions. Something that came up was serving static files with a different server such as lighttpd. However, we're starting off with a single dedicated server, so our only option would be to use a non-standard port for the static file webserver. Is there precedent for this? I.e. does anyone "big" do this? Any particular port I should use or shy away from using? Can anyone think of some downsides of going this route?
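
    For what it's worth, pointing lighttpd at a second port is only a couple of lines of config (the port and path are examples):

        # lighttpd.conf sketch: static files on a non-standard port
        server.port          = 8080
        server.document-root = "/srv/static"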

    Read the article

  • Squid - Selective reverse proxy and forward proxy

    - by Dean Smith
    I'd like to set up a Squid instance to do selective reverse proxying for a configured list of URLs whilst acting as a normal forward proxy for everything else. We are building new infrastructure, parallel live as it were, and I want to have a proxy that people can use that will force selected traffic onto the new platform whilst just acting as a forward proxy for anything else. This makes it very easy for people/systems to test the portions of the new platform we want without having to change too much; they just use a proxy address. Is such a setup possible?
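
    A sketch of what I imagine the squid.conf might need (the domain and peer address are placeholders for the new platform):

        # send selected URLs to the new platform as an accelerated (origin-server) peer
        acl new_platform dstdomain app.newplatform.example.com
        cache_peer 10.0.0.10 parent 80 0 no-query originserver name=newfarm
        cache_peer_access newfarm allow new_platform
        cache_peer_access newfarm deny all
        never_direct allow new_platform      # force these requests through the peer
        always_direct allow !new_platform    # everything else stays a plain forward proxy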

    Read the article

  • How do I configure NTLM authentication in Firefox on Linux?

    - by tolomea
    Our IT department has NTLM deployed on the intranet servers. I've set the network.automatic-ntlm-auth.trusted-uris value in Firefox on some of the Windows machines and that works fine. However, setting it in Firefox on the Linux machines is not working. This doesn't surprise me at all; I've no notion of where Firefox on Linux is supposed to get the authentication details from. So how is this process supposed to work? What bits of config / infrastructure am I missing?
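
    For reference, the relevant prefs as I understand them, in user.js form (the hostname is a placeholder); my understanding is that on Linux there are no domain credentials for Firefox to pick up from the OS, so its built-in NTLM code prompts for a username and password rather than doing silent single sign-on, which may be part of what I'm missing:

        // user.js sketch
        user_pref("network.automatic-ntlm-auth.trusted-uris", "intranet.example.com");
        user_pref("network.negotiate-auth.trusted-uris",      "intranet.example.com");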

    Read the article

  • Locking down a server for shared internet hosting.

    - by Wil
    Basically I control several servers and I only host either static websites or scripts which I have designed, so I trust them up to a point. However, I have a few customers who want to start using scripts such as WordPress or many others, and they want full control over their account. I have started with the basics: in php.ini, for example, I have locked things down and restricted commands such as proc; however, there is obviously a lot more I can do. Right now, using NTFS permissions, I am trying to lock down the server by running application pools and individual sites as their own users, but I feel like I am hitting brick walls... (see my old question on Server Fault). At the moment, the only route I can think of is either to implement an off-the-shelf control panel, which will be expensive and quite frankly over the top, or look at the Microsoft guide, which is really for an entire infrastructure, not for someone who just wants to lock down a few servers. Does anyone have any guides that can put me on the correct path?
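
    For concreteness, the per-site isolation I've been attempting looks roughly like this (IIS 7.5 syntax; names and paths are examples only):

        rem give the site its own pool running as the pool's virtual identity
        appcmd set apppool "CustomerA" /processModel.identityType:ApplicationPoolIdentity

        rem grant only that identity modify rights on the site root
        icacls "D:\sites\CustomerA" /grant "IIS APPPOOL\CustomerA:(OI)(CI)M"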

    Read the article

  • RDS Replication across regions

    - by Bryan Migliorisi
    We are using Amazon AWS for our web services, but given the recent instabilities in their infrastructure, we are trying to figure out how to run our application across multiple regions for additional redundancy. Ideally, we would run our entire app in an active-active configuration in multiple regions, but our main concern is that we are using RDS, which I understand cannot replicate across regions. One possible solution (though we have not tried or proven it would work) would be to do a mysqldump or EBS snapshot every hour or so, but this would mean that we would be forced to run in an active-passive configuration, with our data at most an hour behind. This carries its own issues around data synchronization when we fail over and the master comes back up, so it's not the best solution. Are there any proven solutions for replicating RDS across regions?
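
    If we ended up going the dump route, the hourly job would be something along these lines (the endpoint and credentials are placeholders), with the dumps shipped to or restored in the other region:

        # hourly cron sketch, run from an instance in the standby region
        mysqldump -h mydb.xxxxxx.us-east-1.rds.amazonaws.com -u admin -p"$DB_PASS" \
            --single-transaction --routines --all-databases \
            | gzip > /backups/rds-$(date +%Y%m%d%H).sql.gz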

    Read the article

  • Interface to collect the status of remote backups

    - by Aseques
    I would like to deploy in our infrastructure a web interface that can register when backups have finished, and flag when for some reason they haven't. The current situation is that we do on-site backups for customers, and for each backup a mail is sent at the end of the run; the problem is that sometimes the mail isn't sent, for a variety of reasons:
      - the system doesn't have internet access
      - the backup system crashed before sending the mail
      - etc.
    What I'd like is a web interface that the backup software can visit after doing the backup (whether it succeeded or failed) to acknowledge that the backup has finished; after some time, I'd like to receive a report of the machines that haven't checked in. Is there anything remotely similar to this that I could use or adapt to our environment? UPDATE: I just found this (paessler.com), which seems to be a proprietary solution for what I intended.
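
    The shape of what I'm after, to make it concrete: each backup job ends with a tiny check-in call, and the server side alerts on the clients it has not heard from within the expected window. The check-in itself would be trivial (the URL is made up):

        # last line of the backup script on each client
        curl -fsS "https://backupmon.example.com/ping?client=$(hostname)&status=ok" || true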

    Read the article
