Search Results

Search found 8369 results on 335 pages for 'company'.

Page 274/335 | < Previous Page | 270 271 272 273 274 275 276 277 278 279 280 281  | Next Page >

  • Looking for a recommendation on measuring a high availability app that is using a CDN.

    - by T Reddy
    I work for a Fortune 500 company that struggles with accurately measuring performance and availability for high-availability applications (i.e., apps that are up 99.5% of the time with 5-second page-to-page navigation). We factor both scheduled and unscheduled downtime into this availability number. However, we recently added a CDN into the mix, which complicates our metrics a bit. The CDN now handles about 75% of our traffic, while sending the remainder to our own servers.

    We attempt to measure what we call a "true user experience" (i.e., our testing scripts emulate a typical user clicking through the application). These monitoring scripts sit outside our network, which means we're hitting the CDN about 75% of the time. Management has decided that we take the worst-case scenario to measure availability: if our origin servers are having problems but the CDN is serving content just fine, we still take a hit on availability, and the same is true the other way around. My thought is that as long as the "user experience" is successful, we should not unnecessarily punish ourselves. After all, a CDN is there to improve performance and availability!

    I'm just wondering if anyone has any knowledge of how other Fortune 500 companies calculate their availability numbers? I look at apple.com, for instance, as a storefront that uses a CDN and never seems to be down (unless there is about to be a major product announcement). It would be great to have some hard, factual data, because I don't believe we need to unnecessarily hurt ourselves on these metrics; we are making business decisions based on these numbers. I can say, however, that since these metrics are visible to management, issues get addressed and resolved pretty fast (read: we cut through the red tape pretty quickly). Unfortunately, as a developer, I don't want management to think that the application is up or down because some external factor (i.e., the CDN) is influencing the numbers. Thoughts? (I mistakenly posted this question on Stack Overflow; sorry in advance for the cross-post.)
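    For illustration, here is the back-of-envelope difference between the two policies (the figures below are made-up assumptions, not our real measurements):

        # origin availability = 99.0%, CDN availability = 99.9%, CDN carries 75% of traffic
        # worst-case policy:       min(99.0, 99.9)         = 99.0%
        # traffic-weighted policy: 0.75*99.9 + 0.25*99.0   = 99.675%

    The worst-case policy charges us for every origin problem, even though only about a quarter of real users could have noticed it.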

    Read the article

  • How to configure a shortcut for an SSH connection through an SSH tunnel

    - by Simone Carletti
    My company's production servers (FOO, BAR, ...) are located behind two gateway servers (A, B). In order to connect to server FOO, I have to open an SSH connection to server A or B with my username JOHNDOE; then from A (or B) I can access any production server by opening an SSH connection with a standard username (let's call it WEBBY). So, each time, I have to do something like:

        ssh johndoe@a
        ...
        ssh webby@foo
        ...
        # now I can work on the server

    As you can imagine, this is a hassle when I need to use scp or when I need to quickly open multiple connections. I have configured an SSH key, and I'm also using .ssh/config for some shortcuts. I was wondering if I can create some kind of SSH configuration so that I can type ssh foo and let SSH open/forward all the connections for me. Is it possible?

    Edit: womble's answer is exactly what I was looking for, but it seems right now I can't use netcat because it's not installed on the gateway server.

        weppos:~ weppos$ ssh foo -vv
        OpenSSH_5.1p1, OpenSSL 0.9.7l 28 Sep 2006
        debug1: Reading configuration data /Users/xyz/.ssh/config
        debug1: Applying options for foo
        debug1: Reading configuration data /etc/ssh_config
        debug2: ssh_connect: needpriv 0
        debug1: Executing proxy command: exec ssh a nc -w 3 foo 22
        debug1: permanently_drop_suid: 501
        debug1: identity file /Users/xyz/.ssh/identity type -1
        debug2: key_type_from_name: unknown key type '-----BEGIN'
        debug2: key_type_from_name: unknown key type 'Proc-Type:'
        debug2: key_type_from_name: unknown key type 'DEK-Info:'
        debug2: key_type_from_name: unknown key type '-----END'
        debug1: identity file /Users/xyz/.ssh/id_rsa type 1
        debug2: key_type_from_name: unknown key type '-----BEGIN'
        debug2: key_type_from_name: unknown key type 'Proc-Type:'
        debug2: key_type_from_name: unknown key type 'DEK-Info:'
        debug2: key_type_from_name: unknown key type '-----END'
        debug1: identity file /Users/xyz/.ssh/id_dsa type 2
        bash: nc: command not found
        ssh_exchange_identification: Connection closed by remote host
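    For reference, a minimal ~/.ssh/config sketch of what I'm after (hostnames as above; note that -W needs OpenSSH 5.4 or newer on the client, while the nc form requires netcat on the gateway):

        # route connections to foo through gateway a in a single step
        Host foo
            User webby
            ProxyCommand ssh johndoe@a -W %h:%p
            # with an older client, the netcat variant instead:
            # ProxyCommand ssh johndoe@a nc -w 3 %h 22

    With this in place, ssh foo and scp against foo both work in one step.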

    Read the article

  • XP SP2 Event log not logging events

    - by Weedfreer
    I have a problem whereby a terminal appears not to be logging events correctly and occasionally appears to have problems communicating across the network. The terminal was previously infected with a virus which appears to have 'played' with the default group policy in the standard user profile. Although, outwardly, the terminal appears to be working normally, I still have a nagging feeling that it isn't quite back to the way it was. It was infected by a user plugging in a USB stick while the company was using the older version of the AV software, typically a week or so before it was updated.

    I have configured the event logs to overwrite as required and to be 5056 KB in maximum size. I have also attempted:

    - disabling the Event Log service and restarting
    - renewing the EVT files in the Windows\system32\config directory
    - restarting the Event Log service and restarting
    - clearing the event log in the Services MMC
    - resetting the filters to default in the Services MMC
    - using the EVENTCREATE command remotely from a CMD window on the server to force an event creation event.

    So far the only operation to have any sort of success is the remote EVENTCREATE command from a CMD window on the server. As it stands, the only other time that the computer has managed to create events is while it is being restarted.

    Has anyone got any ideas on how to proceed? I'm thinking possibly a refresh of the Windows\system32\config\SystemProfile folder. I'm also thinking about running a tool such as Malwarebytes, but this could be slightly controversial as the system needs to stay up for as long as possible. I'm also wondering whether anyone knows of any Windows admin tools that would let me control the event logging options or default security options so that I could get it back to some sort of standard. What I'm trying to avoid is a complete re-imaging of the terminal. Although this is an option, I don't really want to take it if I don't need to. Many thanks in advance for any suggestions anyone may be able to provide.
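    For the record, the remote test I mean is along these lines (the terminal name is a placeholder):

        EVENTCREATE /S TERMINAL01 /T INFORMATION /ID 100 /L APPLICATION /SO EventLogTest /D "Remote event log write test"

    If that succeeds while locally generated events still never appear, it points at the local event reporting path (policy/registry) rather than at the log files themselves.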

    Read the article

  • Is there the equivalent of cloud computing for modems?

    - by morpheous
    I asked this question on SF and someone recommended that I ask it here (I don't think I have enough points to move a question from SF to SO, and in any case I don't know how to do it), so here is the question again:

    I am interested in the concept of PaaS (platform as a service). However, all talk about SaaS/PaaS seems to focus only on the computer itself, not its peripherals. Is it possible to 'outsource' modems as a resource, so that an app running remotely can pump data to a modem in the cloud?

    As a bit of background to the question, a group of us are thinking of starting a company that offers services similar to companies like Twilio etc., but I want to 'outsource' both the computing hardware (that's PaaS, the easy bit) and the modems (that's what I can't seem to find any info on). Does anyone know if modems can be bundled as part of a PaaS service? Alternatively, is there a way that an application running on one computer can communicate (i.e. pump data) with a remote modem residing on another machine? I assume I can come up with some protocol over UDP or TCP, but there is no point reinventing the wheel if such a protocol already exists (or if some open-source software allows one to do this). Any suggestions on how to solve this problem?
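    One possible existing building block (an assumption on my part; I haven't verified it against modem workloads) is ser2net, which exposes a local serial device over TCP so that a remote application can drive it as a byte stream:

        # /etc/ser2net.conf sketch: expose a modem assumed to be on /dev/ttyS0 via TCP port 2000
        2000:raw:600:/dev/ttyS0:115200 8DATABITS NONE 1STOPBIT

    The remote application would then open a plain TCP connection to port 2000 and read/write the modem directly.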

    Read the article

  • Search inside of text files

    - by Matt
    So here is the situation. I currently run a mail server for my small non-profit company. My mail server (Merak Mail Server) keeps logs in .log files and mail in .tmp files. Essentially these are just text files that are kept on the server. The problem is that when I put text into the "Containing text" field in Windows Explorer, it always misses the files and tells me no results were returned. Then when I search the files one by one (painful at best), I find the files I need. Do I not understand the search feature well enough, or maybe do I have the indexing configured wrong?

    I really don't care what I need to use to search the files; even a third-party app is fine with me. I just want to type an email address into a box, search all of my log files or email files, and find the one I am looking for. It can be Windows Search or something else; as long as I can find a way to get the job done I will be happy. Paid solutions are fine as well. Thanks everyone in advance.
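    As a sketch of the kind of one-liner that sidesteps the indexer entirely (the paths and address are placeholders), the built-in findstr searches file contents regardless of extension:

        findstr /S /I /M "someone@example.com" "C:\MerakMail\*.log" "C:\MerakMail\*.tmp"

    /S recurses into subdirectories, /I ignores case, and /M prints only the names of the matching files.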

    Read the article

  • Active FTP client blocked on Windows XP

    - by Ian Hannah
    Summary: We have an FTP server (running in active mode) and an FTP client which connects to the server, carries out a task and then closes the connection. The FTP client can perform this operation on multiple threads.

    Problem: We have a situation where customers are experiencing occasional failures to carry out operations on an FTP connection. The actual connection has been made to the server, but when the server attempts to return data on the data port it fails.

    Observations: We have a simple test FTP client which runs two separate threads, each performing a recursive listing of files from a root directory. With the firewall running on the client machine, the hang happens within a few minutes. If the firewall is turned off on the client machine, the test application seems to run correctly. This does point to a potential firewall issue. However, with the firewall on we can list files on our company FTP server without any issues. If the simple test FTP client runs a single thread, then we do not experience any problems whether or not the firewall is turned on. We have another simple test FTP client which ran 4 threads (with each opening a new FTP connection, doing a directory listing and closing the FTP connection as fast as possible) overnight with the firewall turned off; with the firewall turned on, it fails in a short space of time. The confusing thing is that if the test FTP client and the FTP server are run on the same machine, the failure occurs even though the firewall is turned off. This means that the problem may not be firewall related.

    Any help with what this could be would be much appreciated. Thanks, Ian
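    One thing worth checking (a sketch, not a confirmed fix; the path is a placeholder): in active mode the server opens the data connection back to the client, so the XP firewall has to allow the client program to accept inbound connections on whatever ephemeral port each thread listens on. A per-program exception covers that:

        netsh firewall add allowedprogram program="C:\Program Files\OurApp\ftpclient.exe" name="FTP Client" mode=ENABLE

    If the multi-threaded failures stop with the exception in place, the firewall was dropping some of the concurrent inbound data connections.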

    Read the article

  • ESXi 4.1 host not recognising existing VMFS datastore

    - by ThatGraemeGuy
    Existing setup: host1 and host2, ESX 4.0, 2 HBAs each; lun1 and lun2, 2 LUNs belonging to the same RAID set (my terminology might be sketchy here). This has been working just fine all along. I added host3, ESXi 4.1, 2 HBAs. If I view Configuration / Storage Adapters, I can see that both HBAs see both LUNs, but if I view Configuration / Storage, I only see 1 datastore. host1/2 can see both LUNs and I have VMs running on both too. I have rescanned, refreshed and even rebooted, but host3 refuses to acknowledge 1 of the datastores. Does anyone know what's going on?

    Update: I re-installed the host with ESX (not i) 4.0, the same version as the existing hosts, and it's still not recognising the VMFS. I think I'm going to Storage vMotion everything off that datastore and then format it.

    Update 2: I've created the LUN from scratch and the problem gets even weirder. I've presented the LUN to all 3 hosts, and I can see the LUN in the vSphere client's Configuration / Storage Adapters section on all 3 hosts. If I create a datastore on the LUN via the Configuration / Storage section on host1, it works fine and I can create an empty folder via the datastore browser, but the datastore is not seen by host2 and host3. I can use the Add Storage wizard on host2 and it will see the LUN. At this point the "VMFS Label" column has the label I gave with "(head)" appended. If I try the Add Storage wizard's "Keep the existing signature" option, it fails with an error "Cannot change the host configuration." and a dialog box that says 'Call "HostStorageSystem.ResolveMultipleUnresolvedVmfsVolumes" for object "storageSystem-17" on vCenter Server "vcenter.company.local" failed.' If I try the Add Storage wizard's "Assign a new signature" option on host2, it will complete and the VMFS label will have "snap-(hexnumber)-" prepended. At this point it's also visible on host3, but not host1. I have a similar setup in a different datacenter which didn't give me all this trouble.
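    For what it's worth, the same keep-signature/resignature decision can be driven from the service console, which sometimes gives a more useful error than the wizard (use the label or UUID as reported by the first command):

        esxcfg-volume -l                  # list LUNs detected as snapshot/unresolved VMFS volumes
        esxcfg-volume -M <label-or-UUID>  # persistently mount, keeping the existing signature
        esxcfg-volume -r <label-or-UUID>  # or resignature the volume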

    Read the article

  • Access keystore on Sun ONE Webserver 6.1 for 2048 bit key length SSL

    - by George Bailey
    We want to generate 2048-bit CSRs. The browser-based GUI provides us with a 1024-bit CSR and I don't know how to change that. It seems that 1024-bit key lengths will no longer be supported by SSL companies. (Lower-cost options only support 2048-bit; Thawte, which is much more expensive, says it accepts 1024-bit for one or two year certificates only, but not three.) The legacy systems in question are running Sun ONE Webserver 6.1. Upgrading would be time consuming and we would rather not have to do that right now. We will be phasing these out, but it will take a while, so...

    Got it!! http://middlewarekb.wordpress.com/2010/06/30/how-to-generate-2048-bit-keypair-using-sun-one-or-iplanet-6-1-servers/ covers the same version of the webserver I am using:

        /opt/SUNWwbsvr/bin/https/admin/bin/certutil -R -s "CN=sub.domain.ext,OU=org unit,O=company name,L=city,ST=spelled state,C=US,E=email" -a -k rsa -g 2048 -v 12 -d /opt/SUNWwbsvr/alias -P https-sub.domain.ext-hostname- -Z SHA1

    Previous efforts edited out.

    Read the article

  • "Server Unavailable" and removed permissions on .NET sites after Windows Update

    - by tags2k
    Our company has five almost identical Windows 2003 servers with the same host, and all but one performed an automatic Windows Update last night without issue. The one that had problems, of course, was the one which hosts the majority of our sites. What the update appeared to do was cause the NETWORK user to stop having access to the .NET Framework 2.0 files, as the event log was complaining about not being able to open System.Web. This resulted in every .NET site on the server returning "Server Unavailable" as the app domains failed to initialise.

    I ran aspnet_regiis, which didn't appear to fix the problem, so I ran FileMon, which revealed that nobody but the Administrators group had access to any files in any of the website folders! After resetting the permissions, things appear to be fine. I was wondering if anyone had an idea of what could have caused this to go wrong? As I say, the four other servers updated without a problem. Are there any known issues involved with any of the following updates? My main suspect at the moment is the 3.5 update, as all of the sites on the server are running in 3.5.

    - Windows Server 2003 Update Rollup for ActiveX Killbits for Windows Server 2003 (KB960715)
    - Windows Server 2003 Security Update for Internet Explorer 7 for Windows Server 2003 (KB960714)
    - Windows Server 2003 Microsoft .NET Framework 3.5 Family Update (KB959209) x86
    - Windows Server 2003 Security Update for Windows Server 2003 (KB958687)

    Thanks for any light you can shed on this.
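    In case it helps the next person, a hedged sketch of the permission repair (the site path is a placeholder; -ga re-grants the specified account the standard ASP.NET rights):

        rem re-grant the worker process account its standard ASP.NET rights
        aspnet_regiis -ga "NT AUTHORITY\NETWORK SERVICE"
        rem restore read access on a website folder (placeholder path)
        cacls "D:\websites\example-site" /E /T /G "NETWORK SERVICE":R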

    Read the article

  • Is the sysadmin/netadmin the defacto project planner at your organization?

    - by user31459
    At my company it has somehow, over the past few years, slowly become my job to come up with a project plan, milestones and timelines for deployment of developer applications. A typical scenario: my team receives a request for a new website/db combo and a date for deployment. I send back a questionnaire for the developer to fill out on all the requirements for the site (SSL? db? growth projections, etc.). After I get back all the information, the head of development wants a well-developed document covering:

    - what servers it will live on
    - why those servers
    - what the timeline for creating the resources is
    - a step-by-step SOP for getting the application onto the server, with all related resources created (DNS, firewall, load balancer, etc.)

    I may just be whining, but it feels like this is something better suited to our project management staff (which we have) or to the developer. I understand that I need to give them a timeline for creating the resources, but this still feels like overkill. We already produce documentation on where everything lives and track configuration changes to equipment. How do other sysadmin folks handle this?

    Read the article

  • Unable to remove Read-Only attribute from folder in Windows XP

    - by elcuco
    I have a directory from which I cannot remove the read-only attribute. The computer is running XP SP2 (or SP3, not sure) and the directory sits on an NTFS file system. Looking on the web I found http://support.microsoft.com/kb/256614 which says that if the directory is "customized", it is treated as a system folder and thus "read only". I don't think that is the scenario in my case, but in any case it's not helping; their recommendation is more or less:

        attrib -r -s d:\data /s /d

    and this is not working for me. Any other ideas?

    More info: the directory is served by an HTTP server (WAMP) and the directory is an SVN checkout. What happens is that the web server cannot write files into the directory (imagecache from Drupal, if you are really interested).

    Edit 2: The original post claimed that the directory sits on a VFAT FS; however, I booted Fedora 11 from a live CD and the partition is marked as NTFS.

    Edit 3: I have since left the company where this situation happened, so I cannot fully close this question. But things got even worse: I tested the "attrib -r" answer I posted and it did not work for me, and now the developer says that it worked for her. A nice WTF moment. Probably a reboot helped... Sorry for losing the details. If anyone has the same problem and one of the answers helps, please comment.
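    A hedged aside: on NTFS, the read-only attribute on a folder rarely blocks writes by itself, so the web server's failures are more likely an ACL problem. Worth checking both before anything drastic:

        rem clear the read-only/system attributes recursively (per KB 256614)
        attrib -r -s d:\data /s /d
        rem inspect the NTFS permissions; the account running WAMP needs write access here
        cacls d:\data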

    Read the article

  • Being a more attractive job candidate - Certs XOR Degree

    - by Zephyr Pellerin
    I'm currently working in an IT position where I do helpdesk stuff and predominantly security-related issues/consulting (in the loosest sense of the term), in-house and for service-contract clients (as the only/acting CCSP; I guess I should say the only person with Cisco experience in my organization). I've professionally written kernel-mode drivers for a gaming company, among other things that I'm proud to put on a resume. I think of myself as very reasonably qualified as a system administrator, with excellent Cisco experience, among other things I think would make a good addition to almost any IT staff in need of a new employee.

    However, something has always tripped me up: human resources. Let me explain. I decided to skip the university route, and I'm immensely glad that I did. The computer science graduates that I've met and work with rarely know much of anything about computers (until they gain some 'real' experience); even when asked about theoretical computing fundamentals, they can rattle something off about Turing completeness but rarely do they understand the mathematical underpinnings. In short, instead of going to college, I'd rather pick up real-world experience.

    However, employers rarely seem to think the same way. A quick perusal of jobs through the standard job search engines yields nothing short of a conspiracy to exclude anyone without 'a Bachelor's Degree in Computer Science or equivalent'. Interviews I've had in the past have almost always been entangled with 1. my age (which I can't really change) and 2. lack of a degree. Employers frequently disregard the CCNA/CCSP, the experience I've gained through internships, and my extensive experience in x86 assembly and C, among so many other things I like to think are valuable to employers, simply because I don't have a piece of paper.

    So, AS AN EMPLOYER: is it even worth working on my CCIE? Or should I pad my resume with certifications that are easier to acquire (like CISSP, MCSE, Network+, etc.)? Or should I ditch the whole idea and head back to get a mathematics or CS degree?

    Read the article

  • Using virtualization infrastructure for J2EE application distribution - a viable alternative?

    - by Dan
    Our company builds custom J2EE web solutions. At the moment, we use standard J2EE distribution mechanisms (ear/war archives). Application servers are generally administered by our clients' IT departments, and since we do not have complete control over the environment, a lot of entropy can be introduced into the solution. For example:

    - latest app server patch not applied
    - conflicting third-party libraries inside the app server root
    - server runtime and tuning parameters not configured (for example, the number of connections in the database pool)

    We are looking into using virtualization infrastructure for J2EE application distribution. Instead of sending the ear/war archive, we'd send an image with an application server node and our application preinstalled. Some of the benefits are the same as for virtualization infrastructure in general, namely better use of hardware resources. For us, it also reduces the entropy of the hosting infrastructure: distributing a VM should make the solution less affected by the hosting environment.

    So far, the downside I see is in application server licenses: clients will have to use dedicated servers for our solution, but this is generally already done that way. Also, there is the complexity of maintaining virtualization infrastructure, but this is often something IT departments have more experience with than administering and fine-tuning J2EE solutions. Does anyone have experience with this model? What are the downsides? Will we not just replace one type of complexity with another?

    Read the article

  • I am looking for a tool to measure or detect "unresponsiveness" of a desktop PC

    - by Tom H
    I have a client that provides some server systems to a hospital, and a support ticket was raised that the desktop application was hanging waiting for the server. We did some extensive testing and it's pretty clear that the server is responsive and the network is fine, and that the problem is on the client end (no requests are received during the hang, etc.). We take a look at the desktop machines and they should be fine, so we raise tickets with the software vendor, who says that it must be the hardware; the hardware company says that it is the software; and so on.

    Anyway, talking to the nurses, they say that these machines often "hang" for 30 seconds at a time, sometimes during important moments where they need to get data for a patient who is unwell, such as charts and status. So I want to put a client on these machines that would be able to detect arbitrary "unresponsiveness" of the keyboard/mouse and log it for later analysis. Obviously I am wary of suggesting an application that takes resources and makes the problem even worse, so I would be interested to see any tools that would detect these scenarios (is it correct to say that the keyboard interrupts are being discarded?) by looking for the OS discarding the interrupts, or whatever is appropriate here.

    So go on then, Server Fault, here is your chance to save a life... ;-)

    Edit: I am starting to think that some of the tools associated with real-time systems might be appropriate, at least as a diagnostic.
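    One crude sketch of a diagnostic (my own assumption of an approach, not a known tool): a heartbeat logger. Any gap between timestamps much larger than the ~2-second sleep means the whole machine stalled, not just the input path:

        @echo off
        rem heartbeat.cmd: append a timestamp roughly every 2 seconds
        :loop
        echo %date% %time% >> C:\heartbeat.log
        rem "ping -n 3 localhost" is the classic XP-era stand-in for a 2-second sleep
        ping -n 3 127.0.0.1 >nul
        goto loop

    Conversely, if the log shows no gaps during a reported hang, the stall was probably limited to the UI/input side, which is useful data in itself.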

    Read the article

  • How can I identify which process is making UDP traffic on Linux?

    - by boos
    My machine is continuously making UDP DNS requests. What I need to know is the PID of the process generating this traffic. The normal way with a TCP connection is to use netstat/lsof and get the process associated with the PID. But UDP is stateless, so when I call netstat/lsof I can see the socket only if it is open and sending traffic at that exact moment. I have tried lsof -i UDP and netstat -anpue, but I can't find which process is making the requests, because I would have to call lsof/netstat at exactly the moment the traffic is sent; calling them before or after the UDP datagram goes out shows nothing. Catching netstat/lsof at the exact moment three or four UDP packets are sent is impossible.

    How can I identify the infamous process? I have already inspected the traffic to try to identify the sending PID from the packet contents, but that is not possible. Can anyone help me? I'm root on this machine: Fedora 12, Linux noise.company.lan 2.6.32.16-141.fc12.x86_64 #1 SMP Wed Jul 7 04:49:59 UTC 2010 x86_64 x86_64 x86_64 GNU/Linux
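    A sketch of the audit-subsystem approach, which logs the PID at send time instead of requiring you to catch it live (assumes the stock auditd on Fedora 12; the rule below covers 64-bit syscalls and may need a matching arch=b32 rule):

        # log every sendto/sendmsg syscall under a searchable key
        auditctl -a exit,always -F arch=b64 -S sendto -S sendmsg -k udp-dns
        # after some traffic has fired, pull pid/comm/exe for each send
        ausearch -k udp-dns -ts recent

    Remove the rule afterwards (auditctl -D deletes all rules), since logging every datagram send is noisy.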

    Read the article

  • Looking to replace Ghost with FSArchiver or Clonezilla, few questions about capabilities

    - by Daniel Wright
    I work for a PC repair company and we are looking into setting up a dedicated machine with externally accessible SATA bays to clone hard drives as a safety net in case something goes wrong during a repair. We currently use a SATA/PATA-to-USB bridge called MagicBridge and Norton Ghost on any workstation, but we're looking to move away from Ghost. We have a computer with a large RAID5 array with Windows Server 2008 Standard currently installed, but this can be replaced with a flavour of *nix. I have some experience with Clonezilla, but FSArchiver also seems like a suitable replacement.

    My head technician wants to know if my chosen solution (probably Clonezilla or FSArchiver, but I'm open to free suggestions) is capable of:

    - cloning a degraded RAID, such as a single drive from a RAID1 mirror, without complaining
    - producing images that are easily mountable (he'd prefer them to be mountable in Windows, but if there is no other easy way, *nix should be fine), akin to Ghost Explorer, so that individual files can be restored as well as being able to do bare-metal restores

    My apologies for wordiness, but I wanted to be thorough in my explanation. Thanks for any suggestions or tips :)

    EDIT: I've just found out that Clonezilla has a workaround for cloning RAID1 drives.

    EDIT 2: Found the answer to both of my questions; apparently I wasn't phrasing my searches right. Could this question be deleted please?
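    For anyone comparing the two, a sketch of the basic FSArchiver round trip (the device names are placeholders); as far as I know, .fsa archives can be listed and restored from, but not loop-mounted the way the question hopes:

        fsarchiver savefs /backup/sda1.fsa /dev/sda1              # image one filesystem
        fsarchiver archinfo /backup/sda1.fsa                      # list what the archive contains
        fsarchiver restfs /backup/sda1.fsa id=0,dest=/dev/sdb1    # restore to another partition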

    Read the article

  • How would it be possible to discover a cable modem's MAC remotely?

    - by amateurenthusiast
    I was reading the back archives of a Canadian privacy law blog, and it linked to a judicial decision. Apparently, as part of an investigation which involved Yahoo Chat and Google's old 'Hello' image trading program, the officer was able to determine a suspect's modem's MAC address:

    "In order to determine who STEPHTOSH was, the officer did a trace on a programme called WHO IS in an effort to learn from where STEPHTOSH was coming. WHO IS is a command program available to the public. The officer was able to ascertain that the person using the name STEPHTOSH was a Rogers Internet customer. The officer was able to obtain the Internet Protocol address, also known as the I.P. There is only one location for an I.P., which is unique to that subscriber. By use of the website known as DNS STUFF.com, one is able to find with which company this I.P. is registered. It was ascertained that the I.P. address used by STEPHTOSH was registered to Rogers Cable, from the Toronto area. The officer also learned the Cable Modem MAC address used by STEPHTOSH. This was all the information the officer was able to amass."

    Now, it was my understanding that the MAC address of any given device can only be seen if you're one network 'hop' away. The suspect in question was in Markham and the officer was part of the Toronto Police, so it's conceivable that they both used Rogers Internet, but would that still put them only one 'hop' away from each other? I thought the first hop after the modem was usually the ISP. And if he'd used a NetBIOS query against this guy's machine, it would return the Ethernet card's MAC, not the modem's.

    So: is this officer on the same Rogers subnet as the suspect's cable modem? Is that functionality part of Google's Hello (I could only imagine that being possible if Hello operated as a virtual LAN or something)? Does the officer have remote access to the ARP caches of the routers at Rogers? Or is he just full of crap and lying to make his case stronger?

    Read the article

  • What's next for all of these Microsoft "overlapping" and "enhanced" products?

    - by indyvoyage
    Recently I attended a road show organised by an MS Gold Partner company in the UK. The products discussed were: SharePoint Server (2010 and 2007), Exchange Server, Office Communications Server 2007, Exchange Hosted Services, Office Live Meeting, Office Communicator, System Center Configuration Manager and Operations Manager, VMware, Windows 7, etc.

    As Microsoft touts the enhancements in each product over the previous version, I felt that clients are not much interested in all these details. For example, Office Communicator: they have surely improved the product a lot, and at first sight everyone said 'WOW, great product', but nobody wishes to pay money for all these extra features. Some argued that they are bogged down by the increased number of menus; they don't need a soft-call feature integrated with mobile calls. The same applies to all the other products, such as MS Office (what next, two ribbons?), the Windows OS and many more. Indeed there must be good features in all these products, but is it worth spending the money and time to update the older systems? Sometimes these features will decrease productivity instead of increasing it.

    So do you think that whatever enhancements MS makes to its products are only for selling purposes, not for real use, and also serve to keep developers busy learning the new tools and features? I am sure some people here will argue that somebody needs this sort of feature, but I am not talking about NASA or MI5 types; I am talking about usual businesses and Joe Public. Any ideas welcome.

    Read the article

  • How do I fix Internet Explorer / Firefox differences in Oracle JD Edwards Enterprise?

    - by CT
    Our company uses Oracle's JD Edwards Enterprise to do accounting work. We have an application consultant working for us who deals just with JDE, but he is from a non-technical background and so only helps with application-specific support. All of our users should use IE 7 for JDE. About 15-20% of those users have problems which prevent them from using IE 7, so those problematic users use Firefox. For the most part Firefox works fine, but there are a few issues: some menus/options within JDE that are present in IE 7 are not present in Firefox.

    If I could get all of our users working under IE 7, all the Firefox issues would be moot points. I have cross-referenced the Internet Options settings of a working IE 7 user and a non-working IE 7 user, and all settings seem to be mirrored. I'm not sure how to continue troubleshooting this issue, so not just answers but also suggestions for lines of attack would be appreciated. Thanks.

    Read the article

  • Which is more secure: Tomcat standalone or Tomcat behind Apache?

    - by NoozNooz42
    This question is not about performance, nor about load balancing, etc. Which would be more secure: running Tomcat in standalone mode, or running Tomcat behind Apache?

    The thing is, Tomcat is written in Java and hence it is pretty much immune to buffer overruns/overflows (unless a buffer overrun can be triggered in a C-written lib used by Tomcat, but those are rare; the last I remember was in zlib, many many moons ago, and one heck of a hack to actually exploit), which gets rid of a lot of potential exploits. This page, http://wiki.apache.org/tomcat/FAQ/Security, has this to say:

    "There have been no public cases of damage done to a company, organization, or individual due to a Tomcat security issue... there have been only theoretical vulnerabilities found. All of those were addressed even though there were no documented cases of actual exploitation of these vulnerabilities."

    This, combined with the fact that buffer overruns/overflows are pretty much non-existent in Java, makes me believe that Tomcat in standalone mode is pretty secure. In addition, I can install both Java and Tomcat on Linux without needing to be root. The only moment I need to be root is to set up transparent port 8080 to port 80 forwarding (and 8443 to 443): two iptables lines as root, and that's all root is needed for. (I don't know what Apache requires.)

    Apache is much more widely used than Tomcat and definitely does not have a security track record as good as Tomcat's. What would make Tomcat + Apache more secure? What would make Tomcat + Apache less secure? In short: which is more secure, Tomcat standalone or Tomcat with Apache? (Remember that performance isn't an issue here.)
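    For completeness, the two iptables lines I have in mind are roughly these (a sketch; rule placement and interfaces depend on the box):

        iptables -t nat -A PREROUTING -p tcp --dport 80  -j REDIRECT --to-port 8080
        iptables -t nat -A PREROUTING -p tcp --dport 443 -j REDIRECT --to-port 8443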

    Read the article

  • Grant HTTP access based on unix user group

    - by Sander Marechal
    Is it possible to grant network access or HTTP access based on a user's group? At my company we want to set up an internal Composer server using Satis to manage packages for the projects we write (e.g. on repository.mycompany.com), with the packages themselves in our SVN server (svn.mycompany.com). We have several webservers with many different users on them. Some users should be able to reach the Composer and SVN servers; some should not. The users that should be able to reach these servers all belong to the same group.

    How can I set up Apache on the Composer and SVN servers to only grant access to the users in that group? Alternatively, can I set up the webservers in such a way that only users from that group are able to make a connection to our Composer and SVN servers?

    The best thing we have come up with so far is using SSL client certificates. We would simply place a client certificate on all servers which can be used to access Composer and SVN, and only the right user group would have read access to the certificate. A bit clunky, but it may work. I'm looking for something better though.
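    A minimal sketch of the server side of that client-certificate idea (the mod_ssl directives are standard; the paths and the internal CA are assumptions):

        # inside the VirtualHost for repository.mycompany.com
        SSLVerifyClient require
        SSLVerifyDepth 1
        SSLCACertificateFile /etc/apache2/ssl/internal-ca.pem

    Only clients presenting a certificate signed by that internal CA can connect, so the actual gatekeeping reduces to giving the right unix group read access to the key and certificate files on each webserver.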

    Read the article

  • Diagnosing Microsoft SQL Server error 9001: The log for the database is not available.

    - by Scott Mitchell
    Over the weekend a website I run stopped functioning, recording the following error in the Event Viewer each time a request is made to the website:

        Event ID: 9001
        The log for database 'database name' is not available. Check the event log for related error messages. Resolve any errors and restart the database.

    The website is hosted on a dedicated server, so I am able to RDP into the server and poke around. The LDF file for the database exists in the C:\Program Files\Microsoft SQL Server\MSSQL10.MSSQLSERVER\MSSQL\DATA folder, but attempting to do any work with the database from Management Studio results in a dialog box reporting the same error: 9001, the log for the database is not available...

    This is the first time I've received this error, and I've been hosting this site (and others) on this dedicated web server for over two years now. It is my understanding that this error indicates a corrupt log file. I was able to get the website back online by detaching the database and then restoring a backup from a couple of days ago, but my concern is that this error is indicative of a more sinister problem, namely a hard drive failure. I emailed support at the web hosting company and this was their reply: "There doesn't appear to be any other indications of the cause in the Event Log, so it's possible that the log was corrupted. Currently the memory's resources is at 87%, which also may have an impact but is unlikely."

    Can the log just "become corrupted"? My question: what are the next steps I should take to diagnose this problem? How can I determine if this is, indeed, a hardware problem? And if it is, are there any options beyond replacing the disk? Thanks
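    As a first diagnostic, a sketch of an integrity check against the restored database (sqlcmd against the default local instance; swap in the real database name):

        sqlcmd -S . -E -Q "DBCC CHECKDB('database_name') WITH NO_INFOMSGS, ALL_ERRORMSGS"

    A clean CHECKDB, together with clean SMART/RAID-controller logs, would point at the log file itself rather than at the disk.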

    Read the article

  • Outlook won't re-connect to Exchange after network is re-connected

    - by stan503
    I have a setup at my desk where I connect my computer to an RJ45 switch that switches between two networks. One network is the corporate network, which is maintained by my company's IT, and the other is my own private network where I do testing (the two networks have to be separated). The corporate network hosts the Exchange server where I get e-mail.

    When I switch from the private network to the corporate network, I expect Outlook to re-connect to the Exchange server. However, I have found that sometimes when I come back, Outlook takes an extremely long time to re-connect. Send/Receive gives me back the error 'The server is not available' (0x8004011D). It will sit there for 10 minutes to a few hours before it finally re-connects. The only other option is to reboot my computer, which is a huge pain for me since I run multiple VMs on it. This usually happens when I've been connected to the private network for a significant amount of time, so I'm thinking it's because Outlook has cached the network status.

    Is there a way to force Outlook to do a 'hard' re-connect to the Exchange server? I'm using Windows XP SP3 with Outlook 2007.

    Read the article

  • Setting Outlook Web Access as default mail client in Firefox

    - by Barton Chittenden
    The company that I work for is nice enough to let me use Linux for work, but they use Outlook. I've been using Outlook Web Access (OWA) as my mail client, which is more or less acceptable. The only problem is that whenever I click on a mailto link or use the "Send Link" menu option in Firefox, I'm prompted to use Evolution. Since connecting to an Exchange server through Evolution seems to be sketchy at best, I would like to set OWA as my default mail client. I'm using Firefox 3.6.13.

    Here's what I've found so far:

    - The default mail client can be set at Edit Menu -> Preferences -> Applications Tab -> mailto.
    - When I click on the drop-down menu, one of the options is "Application Details".
    - This shows two options by default, Google and Yahoo! Mail, and each shows how it launches the service. For Gmail it is https://mail.google.com/mail/?extsrc=mailto&url=%s and for Yahoo! it is http://compose.mail.yahoo.com/?To=%s

    I presume that Outlook Web Access has something similar. Based on the googling that I've done so far, I think it should look something like https://<server name>/owa/?cmd=compose... A little experimentation on my part shows that the following will compose a message:

        https://<email server>/owa/?ae=Item&a=New&t=IPM.Note

    but I still don't know how to specify the recipient, subject or body of the email to be composed. What I want to know is: a) does anyone know the URL parameters to compose a mailto in Outlook Web Access, including subject, recipient and body? Or b) can someone give me a decent pointer to where to get this information?
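    By analogy with the Gmail and Yahoo! handlers, the pattern I'd try first is the following (entirely unverified on my part; the to parameter is an assumption, and Firefox substitutes the full mailto: URI into %s, so OWA may show a literal 'mailto:' prefix in the To field):

        https://<email server>/owa/?ae=Item&a=New&t=IPM.Note&to=%s

    Whether OWA 2007 also honours subject or body parameters in the same query string is something I haven't been able to confirm.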

    Read the article

  • How/when do you study?

    - by Sergei
    It would be interesting to find out how other sysadmins are educating themselves. I find myself needing to learn new things constantly. I prefer to spend more time on a subject and know it thoroughly, knowing that not doing so will kick me back in the future. This can be frustrating sometimes, as it feels that I move too slowly.

    Our company has an account on safari.oreilly.com, and I am reading a book or two at any given time. I also read sysadmin-related blogs for ideas and tips and to keep myself in tune with the trends. I cannot do any study at home, as I would rather spend my out-of-work hours with my family; plus I find it hard to impossible to study at home due to an inability to concentrate there. So I mostly study on the train; luckily my commute takes up to 2 hours a day. I also read a lot at work and don't feel guilty about it: to fix, implement or plan, I need solid knowledge, and if that requires time then it is part of my job as a sysadmin.

    There is a joke that says "a sysadmin is a person that knows a lot about everything and as a result knows nothing"; I think there is a grain of truth here...

    Read the article
