Search Results

Search found 8369 results on 335 pages for 'company'.


  • Webapp in Jetty can't find properties file after running a couple days

    - by Cuga
    I have a webapp running in Jetty on Mac OS 10.6. After a few days of running, without the server losing power or rebooting, it stops working, saying it can't find a properties file. This properties file is included inside the .war file deployed to the /webapps directory. If I restart Jetty as the superuser, the web service works again just fine. Can anyone lend any advice on what's going on and how I can fix it? The error shown when it isn't working is:

        Problem accessing /my-web-service. Reason: INTERNAL_SERVER_ERROR
        Caused by: java.lang.NullPointerException
            at com.company.service.Dao.readFromPropertiesFile(BwDao.java:35)
            at com.company.service.ServletHandler.doGet(ProxyClass.java:66)
            ...
            at org.mortbay.jetty.HttpConnection.handle(HttpConnection.java:404)
            at org.mortbay.io.nio.SelectChannelEndPoint.run(SelectChannelEndPoint.java:410)
            at org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:582)

    Here's where the properties files exist that it's trying to read from the .war file: (screenshot omitted from the excerpt) And this is how the properties are being read from the classpath:

        Properties properties = new Properties();
        properties.load(Thread.currentThread().getContextClassLoader()
                .getResourceAsStream("app.properties"));

    Again, this does work just fine right after a restart of the server, but it seems to fail after running for a few days.
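
    One pattern worth trying while debugging (a minimal sketch, not the question's actual servlet; class and method names are illustrative): load the properties once at servlet startup and fail loudly if the resource is missing, so the real failure surfaces instead of a NullPointerException deep in a request handler. A classic cause of this "works until it has run for a few days" symptom is the webapp being extracted into a temp directory (java.io.tmpdir) that the OS cleans out periodically; pointing Jetty's work directory at a persistent location is worth checking, though that's an assumption the question doesn't confirm.

        import java.io.IOException;
        import java.io.InputStream;
        import java.util.Properties;
        import javax.servlet.ServletException;
        import javax.servlet.http.HttpServlet;

        public class BwServlet extends HttpServlet {
            private Properties props;

            @Override
            public void init() throws ServletException {
                // Resolve against this class's loader once, at deploy time
                InputStream in = getClass().getClassLoader()
                        .getResourceAsStream("app.properties");
                if (in == null) {
                    // Fail at startup with a clear message instead of an NPE per request
                    throw new ServletException("app.properties not found on classpath");
                }
                props = new Properties();
                try {
                    props.load(in);
                } catch (IOException e) {
                    throw new ServletException("Could not read app.properties", e);
                } finally {
                    try { in.close(); } catch (IOException ignored) { }
                }
            }
        }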


  • Monitoring AWS Systems Behind Elastic Beanstalk

    - by A. Avadis
    So I'm getting a company set up in the Amazon Cloud -- creating IaaS protocols/solutions/standardized implementations, etc., while also being the SysAdmin for individual systems, app environments, and day-to-day uptime. One of the biggest issues I'm having is tracking various system/application logs, as well as logging/monitoring/archiving system metrics like memory usage, CPU usage, etc., in a centralized fashion -- e.g., Nagios + Urchin. The BIGGEST impediment to my endeavors is the following: the company application is deployed as a Java .war file, uploaded to an Elastic Beanstalk application environment, load balancing and auto-scaling between 3 (min) and 10 (max) servers, and the EC2 instances that run the application are fired up and disposed of ad hoc. That is to say, I can't monitor the individual EC2 instances for very long, because so many are terminated and then auto-provisioned/auto-scaled on the fly -- I'd constantly be "monitoring what I'm monitoring", continuously removing/adding EC2 machine addresses to my monitoring lists. Is there some way to use monitoring tools like Zabbix or Nagios to monitor the Elastic Beanstalk environment and have them automatically add new EC2 instances and remove terminated/failed ones from the monitoring list? Furthermore, is there anything I can do with Graylog to achieve similar results with the aggregation/centralization of my application logs from multiple EC2 instances into ONE consolidated set of logs/events? If not Graylog, is there ANYTHING LIKE Graylog that can automatically detect which EC2 members are being added to or removed from the environment, and collect their logs automatically? Any and all advice or direction is appreciated. Thanks much, and cheers!!
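
    Not a full answer, but one workable pattern: rebuild the monitoring host list from the EC2 API on a cron schedule, filtering on the tag Elastic Beanstalk puts on the instances it launches. A minimal sketch with the AWS SDK for Java (the credentials, tag key, and environment name are assumptions to adapt), whose output could feed a script that regenerates Nagios/Zabbix host definitions:

        import com.amazonaws.auth.BasicAWSCredentials;
        import com.amazonaws.services.ec2.AmazonEC2Client;
        import com.amazonaws.services.ec2.model.DescribeInstancesRequest;
        import com.amazonaws.services.ec2.model.Filter;
        import com.amazonaws.services.ec2.model.Instance;
        import com.amazonaws.services.ec2.model.Reservation;

        public class BeanstalkHosts {
            public static void main(String[] args) {
                AmazonEC2Client ec2 = new AmazonEC2Client(
                        new BasicAWSCredentials("ACCESS_KEY", "SECRET_KEY"));
                // Beanstalk tags its instances with the environment name
                DescribeInstancesRequest req = new DescribeInstancesRequest().withFilters(
                        new Filter("tag:elasticbeanstalk:environment-name").withValues("my-env"),
                        new Filter("instance-state-name").withValues("running"));
                for (Reservation r : ec2.describeInstances(req).getReservations()) {
                    for (Instance i : r.getInstances()) {
                        // Emit "instance-id address" pairs for the config generator
                        System.out.println(i.getInstanceId() + " " + i.getPrivateIpAddress());
                    }
                }
            }
        }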


  • Sending bulk mail from a different server?

    - by Omer Gencay
    I want to send bulk mail from my VPS for a domain (cagetur.com) hosted with another company. The company (cagetur) will keep using the old hosting account for its regular mailing operations; the VPS will just be used for SMTP once a week. I created an A record "vps.cagetur.com" directed at the IP address of the VPS, then created an MX record with a higher preference number (lower priority), "50 vps.cagetur.com.", in the domain control panel. When I trace "vps.cagetur.com" I can now reach my VPS. I installed hMail on the VPS and configured it (created the domain and accounts). I have no information about the "system" side, so I couldn't get further than this point. I can connect to the mail server with Outlook without errors, and I can send an email from the account on the VPS, but it never arrives. No errors, no emails. What do I have to do to get this working? Thank you.
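
    One thing worth checking first: an extra MX record only affects inbound delivery, not sending, so it won't help the VPS send mail. What usually decides whether mail from a second server arrives is whether receivers consider the VPS authorized to send for the domain. A sketch of the DNS records that typically matter (the IP address is a placeholder, and the SPF policy is an assumption about this setup):

        vps.cagetur.com.   IN A     203.0.113.10
        cagetur.com.       IN TXT   "v=spf1 a mx a:vps.cagetur.com ~all"

    A matching reverse (PTR) record for the VPS IP, set through the VPS provider, is usually needed as well; without SPF/PTR, many receivers silently drop such mail, which would match the "no errors, no emails" symptom.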


  • Small store infrastructure - where to begin?

    - by KevinM1
    It looks like my older brother is about to change jobs - from lawyer to shooting range proprietor - and since I'm the family 'computer guy' I have the task of coming up with and setting up the in-store equipment. Only problem: I don't know how to start or where to look. I'm a web programmer, not an IT specialist. To that end, I figured I should ask the pros.

        Users: 3 (myself, my brother, and his business partner)
        Equipment: 1 Windows (likely 7) desktop for POS software, 1 Windows desktop/laptop for backroom use (bookkeeping, etc.)
        Other: ??

    I'm looking for a reliable and, well, idiot-proof way to handle backups. Neither my brother nor his business partner is tech savvy (a web browser, email, MS Word and Excel are about the extent of their knowledge), so I need something they can handle. On-site would be preferable to off-site, given my brother's hesitance to have sensitive business data handled by an outside source. I'm also looking for a small on-site server. I estimate that, at most, only 2-3 users will need access. A Linux solution would keep costs down, but I'm concerned about Windows/Linux interoperability. Would the store security cameras' storage be handled by the security company, or would we have to stream that data to our own server? I know from my own experience with personal security that the company gives/loans a recording device to the homeowner, but I'm not sure about business security. I know this sounds like a shopping list, and it's pretty vague. I wish I could give more detail, but between my own ignorance and things not being 100% nailed down on the business end, I'm a bit stuck. At the very least I'd like a nudge - links to a place to start, what to look for, things I need to think about, etc. - for this endeavor. Thanks.


  • Why can't I see all of the client certificates available when I visit my web site locally on Windows 7 IIS 7?

    - by Jay
    My team has recently moved to Windows 7 for our developer machines. We are attempting to configure IIS for application testing. Our application requires SSL and client certificates in order to authenticate.

    What I've done: I have configured IIS to require SSL and require (I also tried accept) certificates under SSL Settings. I have created the https binding and set it to the proper server certificate. I've installed all the root and intermediate chain certificates for the soft certificates properly in the Current User and Local Machine stores.

    The problem: When I browse to the web site, the SSL connection is established and I am prompted to choose a certificate. The issue is that the only certificate offered is one created by my company that would be invalid for use in the application. I am not offered the soft certificates that I have installed using MMC and IE. We are able to use the soft certs from our development machines against the Windows 2008 servers that host the application.

    What I did: I have attempted to copy the root CA to every folder location in the Current User and Local Machine stores that the company certificate's root is in.

    My questions: Could I be mishandling the certs anywhere else? Could there be a local/group policy that could be blocking the other certs from use? What (if anything) should be done differently on Windows 7 versus 2008 in regards to IIS? Thanks for your help.
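
    A couple of checks that may help narrow this down (both are standard Windows tools; the file name is a placeholder): the certificate prompt only lists client certs whose issuing chain matches the CA list the server advertises during the TLS handshake, so confirming that the soft cert's chain builds cleanly on the Windows 7 box is a good first step.

        rem List client certificates in the current user's Personal store
        certutil -store -user My

        rem Verify that a soft certificate's chain builds, fetching revocation data
        certutil -verify -urlfetch softcert.cer

    If the chain check fails or shows an unexpected root, that would point at the store placement rather than IIS itself.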


  • Correct use of SMTP "Sender" header?

    - by Eric Rath
    Our web application sends email messages to people when someone posts new content. Both sender and recipient have opted into receiving email messages from our application. When preparing such a message, we set the following SMTP headers:

        FROM: [email protected]
        TO: [email protected]
        SENDER: [email protected]

    We chose to use the author's email address in the FROM header in an attempt to provide the best experience for the recipient; when they see the message in their mail client, the author is clear. To avoid the appearance of spoofing, we added the SENDER header (with our own company email address) to make it clear that we sent the message on the author's behalf. After reading RFCs 822 and 2822, this seems to be an intended use of the sender header. Most receiving mail servers seem to handle this well; the email message is delivered normally (assuming the recipient mailbox exists, is not over quota, etc). However, when sending a message FROM an address in a domain TO an address in the same domain, some receiving domains reject the messages with a response like:

        571 incorrect IP - psmtp (in reply to RCPT TO command)

    I think this means the receiving server only saw that the FROM header address was in its own domain, and that the message originated from a server it didn't consider authorized to send messages for that domain. In other words, the receiving server ignored the SENDER header. We have a workaround in place: the webapp keeps a list of such domains that seem to ignore the SENDER header, and when the FROM and TO headers are both in such a domain, it sets the FROM header to our own email address instead. But this list requires maintenance. Is there a better way to achieve the desired experience? We'd like to be a "good citizen" of the net, and all parties involved -- senders and recipients -- want to participate and receive these messages. One alternative is to always use our company email address in the FROM header and prepend the author's name/address to the subject, but this seems a little clumsy.
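
    A detail that may explain the rejection: a response issued at RCPT TO time can only have been based on the connecting IP and the envelope sender (MAIL FROM), since the FROM and SENDER headers haven't been transmitted yet at that point in the dialogue. A sketch of the session, assuming the webapp also uses the author's address as the envelope sender (addresses are the question's own examples):

        MAIL FROM:<[email protected]>     <- what psmtp evaluates before DATA
        RCPT TO:<[email protected]>         <- the 571 rejection happens here
        DATA
        From: [email protected]
        Sender: [email protected]
        To: [email protected]
        ...

    If that assumption holds, setting the envelope sender to the company address (and keeping the author only in the FROM header) may avoid the rejection without maintaining a per-domain workaround list.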


  • Should I keep my ex-employer's data?

    - by Jurily
    Following my brief reign as System Monkey, I am now faced with a dilemma: I did successfully create a backup and a test VM, both on my laptop, as no computer at work had enough free disk space. I haven't deleted the backup yet, as it's still the only one of its kind in the company's history. The original is running on a hard drive in continuous use since 2006. There is now only one person left at the company who knows what a backup is, and they're unlikely to hire someone else, for reasons very closely related to my departure. Last time I tried to talk to them about the importance of backups, they thought I was threatening them. Should I keep it?

    Pros:
        I get to save people from their own stupidity (the unofficial sysadmin motto, as far as I know)
        I get to say "I told you so" when they come begging for help, and feel good about it
        I get to say nice things about myself at my next job interview
        Nice clean conscience
        Bonus rep with the appropriate deities

    Cons:
        Legal problems: even if I do help them out with it, they might just sue me for keeping it anyway, although given the circumstances I think I have a good case
        Legal problems: given the nature of the job and their security, if something leaks, I'm a likely target for retaliation
        Legal problems: whatever else I didn't think about
        I need more space for porn.
        Legal problems.

    What would you do?


  • Does a VPN requirement kill the concept of having a Web Application in the Cloud?

    - by Christian
    Recently I posted a question on SO, but so far I've gotten no answers. I wonder if I'm asking the wrong question. This is the problem: we need to design an application which offers a public HTTP web service, but at the same time it must consume some services through a VPN connection from another existing company. There is no alternative but to use a VPN connection to access those services. We want to host our application on cloud infrastructure like Heroku or Amazon EC2, but there is no direct way to access the other company's VPN services from there. The solution I'm considering, but don't like, is to have a separate server to expose the services from that VPN; this would require setting up another server, which I'd prefer to avoid. If that is the solution, can I use an Amazon EC2 instance to connect to a VPN? Is what I'm thinking correct? I don't have experience with VPNs, tunnels, or that kind of networking stuff. I would really appreciate it if you could propose an alternative solution, or just leave a comment.
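
    For what it's worth, an EC2 instance can usually act as a VPN client without trouble; it's inbound connections that are constrained, not outbound tunnels. Assuming the other company can issue an OpenVPN client config (the file name below is a placeholder), the shape of it on the instance would be roughly:

        sudo apt-get install openvpn           # or yum, depending on the AMI
        sudo openvpn --config company.ovpn --daemon
        ip route                               # confirm a route to the partner network via tun0

    The application process on the same instance then reaches the partner services through the tunnel while still serving public HTTP, so no separate gateway server is needed. Whether the other company's VPN endpoint accepts clients connecting from arbitrary IPs is a question for them.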


  • Best ASP.NET Hosting

    - by dotnetguts
    There are many ASP.NET web hosting companies which spend a lot on advertising and offer very cheap rates, as low as $5, but when it comes to support they are simply hopeless. Could everyone please share their experience with past hosting companies and suggest a good ASP.NET hosting company? Please consider the following requirements:

        1) ASP.NET 3.5 or 4.0 supported
        2) URL rewriter support
        3) GZip support (dynamic, through code)
        4) Initial setup support (if required)
        5) SQL Server 2005 or 2008
        6) Access to the SQL Server DB using SQL Mgmt Studio
        7) Environment supporting backup and restore of the DB on my own, without involving the tech support team
        8) Full-text search support
        9) FTP support
        10) Ability to send at least 500 emails daily
        11) 99.9% uptime (no matter that all web hosts claim 99.9% uptime, it's not true)
        12) Alert email to be sent when they do any maintenance or during downtime
        13) Hosting price should be reasonable

    In case you feel I am missing something, please add to the list. Can anyone suggest a good web hosting company based on the above factors?


  • All application passwords lost on Windows 7

    - by Rynardt
    A couple of days ago I changed my Windows 7 login password. My laptop is on my company's domain, so password changes are done over the internal network. Since changing the password I've noticed that all my saved Chrome passwords are missing. Skype, Windows Live, Internet Explorer and Outlook have also lost their saved passwords. I guess there could be more applications with lost passwords, but I have not opened them yet. This makes me think that most applications save their passwords to a general password vault on the Windows system, and this vault somehow got corrupted when I changed my domain login password for Windows. Does anyone have any idea how to fix this and prevent it from happening again?

    EDIT: More info. I do development work at the office, so most of the time I bypass the firewall and connect directly to the internet gateway. Now and then I connect to the company wifi network to do printing and access files on a NAS, so by default my laptop does not connect to the wifi hotspot. On this occasion, to update the password, I had to connect to the wifi. So, referring to the comment by OmnipotentEntity below, could this have happened when the system rebooted without a connection to the network, since the laptop does not auto-connect to the wifi hotspot?


  • Why do I need a managed switch and which one should I buy?

    - by ascanio1
    I bought a second router and I want both routers to have direct WAN access to the modem. One of the two routers directs VoIP traffic to a telephone line port. This VoIP service is provided by the cable carrier, which also leases the modem and the router. The cable company technician told me that this VoIP line uses IPv6 addressing and that I must therefore employ an IPv6-capable/compliant giga hub/switch or my telephone line won't work anymore. Please advise me on a 2-port, IPv6-compliant switch (brand/model) to purchase. Please also educate me:

        By reading this forum I gathered that hubs broadcast traffic to all ports, regardless of which input/output is being used, and so, theoretically, they have nothing to do with IP. Correct?
        Same story for unmanaged switches, where the only difference is that these devices forward traffic only to those ports which are detected to be in use. Correct?
        I also understood that unmanaged switches forward traffic simply by detecting hardware use, not by selecting specific IP traffic. Correct?
        Finally, there are managed switches which DO select traffic based on IP, and therefore only these managed switches are involved with IPv6... Why would my cable company explicitly tell me, over and over, that I must use an IPv6-compliant switch? Why would they need a managed switch instead of an unmanaged one?

    Thanks in advance for helping me understand!


  • Multiple Remote Desktop Connections in Windows Server 2003?

    - by Joel Bradley
    My company is transitioning all user PCs to Windows 7 64-bit in anticipation of the 2014 cutoff for Windows XP support. So far everything has been going great except for one specific piece of software that will not run in Windows 7. The current plan is to give everyone a cheap secondary PC to run this software, but I feel that's a little much for software that's not even used all the time, although it is essential. I've suggested we install virtual machines, but the company does not want to pay for the XP licences. I have access to a copy of Windows Server 2003 that is no longer being used, and I was wondering if it was possible to create a remote desktop server. I know it can be done on a one-to-one basis, but this is a 15-person helpdesk. I'd like to be able to support multiple remote desktop sessions, each with their own logins and desktops. Is this possible? Are there any other alternatives to my issue? FYI, I've been told that XP Mode is only free for consumers; there are costs when it's used in a corporate environment.


  • Print over remote CUPS server, but just show a subset of the printers.

    - by jdm
    I'd like to print from my Ubuntu laptop (karmic) to some networked printers. Our organisation uses a CUPS server with several hundred printers. What I know I can do is:

        CUPS_SERVER=printers.company.com acroread document.pdf

    and then Adobe Reader shows me all available printers to select from. However, it takes a couple of minutes to display the large list, which is really annoying. (The desktop PCs here suffer from this, too.) The other option is to add a new printer with an address like ipp://printers.company.com/printer/bldg1_hp8150 to the Ubuntu printer configuration (= local CUPS server). However, it asks me for a driver. I don't want to / can't always specify a driver, since some printers don't appear in the list. I'd like to let the remote CUPS server handle the driver part (like it does when I set CUPS_SERVER), and do no more preprocessing/"driver stuff" on my side. The ideal thing would be if I could somehow add the remote printer list to my local CUPS server and apply a filter, so that it would just display printers a la bldg1_*. This feature was available in KDE 3.x, but I can't find something similar in Ubuntu/Gnome. Any suggestions?
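
    A possible workaround (a sketch, using the question's own printer URI): add just the queues you need as local raw queues pointing at the remote CUPS server. A raw queue forwards the job untouched, so the remote server keeps doing the driver work, and the local printer list contains only the bldg1_* printers you added.

        lpadmin -p bldg1_hp8150 -E -m raw \
            -v ipp://printers.company.com/printer/bldg1_hp8150

    Repeating this per printer (or in a small shell loop over the names you care about) approximates the KDE3-style filtered view.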


  • torrent downloads not showing on Squid log

    - by noobroot
    Hello, I've been working as a sysadmin for just a few months, hence I still have lots to learn. The first thing I'd like to do is as follows: we have an OpenBSD 4.5 box acting as firewall, DNS, cache, etc. The box has two network cards, one connected directly to the internet and the other to our switch. I used to work with sarg for log analysis but then changed to the much faster free-sa. I use a daily free-sa report to check the bandwidth usage and report our top 5 bandwidth consumers (3 days a week being #1 and you will be buying the pizzas :D, we are a small company, ~20 people, so we are very familiar). This was working really well until recently. One of us needed to download some stuff via torrent (~3 GB), and since the pizza rule applies to non-work-related downloads, he told me (verified) that his download was indeed work related, so I would dismiss that 3 GB from his quota. But to my surprise the log didn't show that 3 GB; his IP's consumption was only around 290 MB. More recently, since the FIFA World Cup started, we know that some of the employees are watching the matches' streaming. We know it and we don't care, since, as already stated, we are a small company and don't have restrictive policies; we can all chat, watch YouTube, download anything we want, BUT we are only allowed 300 MB a day, otherwise you end up on the top-5 pizza board. Anyway, that streaming consumption is also not showing up in the free-sa reports. So my question is: why is this data being excluded from the reports? I'm thinking the free-sa reports list only certain types of traffic, but I'm also wondering whether the Squid logs are the ones that are not, erm, logging these connections. Any help, guide, advice or clarification is appreciated.
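
    One likely explanation: Squid's access.log only records traffic that actually goes through the proxy. BitTorrent and most video streams open direct TCP/UDP connections that are NATed straight out by pf, so free-sa (which reads the Squid logs) never sees them. A hedged sketch of a pf.conf rule to capture that bypassing traffic (the interface and network macros are placeholders for your setup):

        # log outbound traffic that bypasses the proxy
        pass out log on $ext_if inet proto { tcp, udp } from $int_net to any

    The logged packets land in /var/log/pflog, which can then be inspected with tcpdump -n -r /var/log/pflog or fed into a flow-accounting tool to recover the per-IP totals the pizza board needs.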


  • choosing hosting for custom ecommerce site, shared, dedicated, what to look for?

    - by spirytus
    Hi, I have (almost) finished developing a website for my client and now need to decide on hosting. Most of the users of the site will be located in Australia, and so am I and my client. Now, I want to consider everything before deciding on a host, and a few questions come to mind:

        I cannot afford the website being down, and all hosts say something like "99% uptime guaranteed". Should just that be enough, or shall I ask hosts for some stats?
        Does it make any difference if the servers and the whole hosting company are located in Australia or outside? I've been hosting a few sites with JustHost.com on shared hosting (cheapest plan, servers in the US I believe) and never seen any delays, but could that be an issue? I would prefer an Australian company so I can actually go to them and give them a piece of my mind if something goes wrong, but US servers seem cheaper.
        Would shared hosting do? It's a custom-built PHP ecommerce application, and I know there are security issues with sessions etc. on shared hosting. I will take precautions, of course, but could shared hosting be an issue?
        Would dedicated be a worthy option, considering that my knowledge of servers is very limited?

    I need to run PHP/MySQL, preferably with unlimited bandwidth, as with my experience I cannot tell what amount of traffic would be sufficient. Please let me know if I didn't provide enough information to answer my questions; I will gladly explain further. Thanks in advance for any answers :)


  • Nginx flv audio pseudo stream works but video is not loading

    - by sarah
    I am working on a development server for a company and they want the nginx webserver to work with it. Their requirements are that it should be capable of the following: hotlink protection, MP4 and FLV pseudo-streaming, and secure streaming. nginx fulfills these requirements, and I have been configuring their server for the past 2 days; as I am new to this field, I've only achieved hotlinking prevention in those 2 days. The problem I am stuck on is FLV pseudo-streaming: getting MP4 pseudo-streaming to work was a piece of cake, but I am really struggling with FLV. I have converted my FLV videos with the flvmdi tool to insert many keyframes, but when I try to seek the video to one of the keyframes generated by flvmdi, e.g. test.flv?start=2681223, the video does not load, though the audio pseudo-stream works fine. So it seems there is no problem with my FLV configuration in nginx.conf. The guide I used to compile my nginx-1.2.1 is http://h264.code-shop.com/trac/wiki/Mod-H264-Streaming-Nginx-Version2, with the additional module --with-http_flv_module. This forum is really active; I hope I will resolve my problem as soon as you guys provide me some guidance.
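
    For reference, a minimal sketch of the location block the stock flv module needs (the path is a placeholder):

        location ~ \.flv$ {
            flv;
            root /var/www/videos;
        }

    One hedged guess about the symptom: with the flv module, the start value must be the exact byte offset of a keyframe, taken from the keyframes.filepositions array that flvmdi writes into the metadata. An offset that lands between keyframes typically yields audio with no picture until the next keyframe, which matches "audio works, video doesn't". Verifying that 2681223 really appears in the file's filepositions list would be my first check.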


  • G4 server running slow

    - by Abby Kach
    I have HP ProLiant ML 350 servers. We have 8 remote locations where users connect and log on to our server through DynDNS to access our company ERPs for day-to-day work. The base of our company ERPs is Oracle, for which we have a separate server. Now the problem is that day by day the load on the server is increasing, the speed is getting slower and slower, and users are facing a lot of issues. So I am planning to implement SonicWall VPN. I conducted a demo of SonicWall, but it was slower than the current speed over DynDNS. The configuration of my servers is as follows:

        Linux: HP ProLiant 370, Intel Xeon 3.20 GHz, 150 GB (72 x 2), 3 GB RAM, SUSE
        Omega: HP ProLiant 370, Intel Xeon 3.20 GHz, 300 GB (72.8 x 4) RAID 5, 4 GB RAM, Windows Server 2K3 Enterprise Edition
        Storage Box: HP StorageWorks 1400, Intel Xeon 2.00 GHz, 4 TB (1 TB x 4) RAID 5, 2 GB RAM, Windows Server 2K8 Enterprise Edition
        Domain & Terminal: HP ProLiant 350, Intel Xeon 3.20 GHz, 250 GB (72.8 x 3) RAID 5, 4 GB RAM, Windows Server 2K3 Enterprise Edition

    Can someone help me with how I can speed up my network at the remote locations and reduce the problems of speed etc.?


  • Managing SharePoint permissions via Active Directory?

    - by rgmatthes
    My company has thousands of employees organized thoroughly via Active Directory. I have confidence in the accuracy of the Department and Title information displayed in the user profiles. I'm helping to put up a brand new SharePoint 2007 site, and I contacted IT about managing the site's permissions through AD Groups. The goal is to have the site automatically assign read/write/contribute/whatever permissions based on the information in AD. For example, we could create an AD Group called "Managers" that would contain anyone with the "Manager" title in their AD user profile. I would have SharePoint tap into this AD Group to mass-assign permissions if I knew all managers would need a certain level of access (read/write/contribute/whatever). Then if a manager joins the company or leaves it, the group is automatically updated (provided AD gets updated, of course). My IT rep called back and said it couldn't be done. This seems like a pretty straightforward business requirement, and one of the huge benefits of having Active Directory, but maybe I'm mistaken. Could anyone shed some light on this?

        A) Is it possible to use dynamically-updated AD Groups when assigning permissions via SharePoint? (Does anyone know of a guide I could show my doubtful IT rep?)
        B) Is there a "best practice" way to go about this? I've read some debate on whether SharePoint Groups or AD Groups are the way to go. My main concern is dynamic updating.
        C) If this isn't available out of the box, can someone recommend third-party software that will provide the functionality I'm looking for?

    A big thanks to anyone who can help me out!!


  • Cisco configuration for public library internet

    - by AlternateZ
    I'm a C/C++ computer programmer turned IT support guy working for a public library. My day is usually spent helping random grandparents learn how to use email, so my networking knowledge is limited to what I can glean from Google. Here's the situation: we have a public library with 20 PCs on a LAN, plus public wifi access. Previously we were running all of this on one ADSL connection and people complained about low speeds. We hired a networking company to set up a Cisco dual-WAN router for us and purchased an additional ADSL connection. The intention was to give the LAN PCs a guaranteed amount of bandwidth each, and then let the wifi users split the rest. The results were far worse than what we expected, and all we got from the company was excuses; they've since washed their hands of us. During busy periods, net performance on the LAN PCs is so poor that attaching files to Gmail etc. often times out and fails - far from the "guaranteed amount of bandwidth each" that we hoped for! Sometimes it feels like performance is worse than before, when we had one ADSL link and an unconfigured router. Anyway, surely this is a problem encountered a million times over across the world? (Sharing internet across many users effectively.) What are the standard solutions for something like this? I admit to not even knowing the right jargon to google for (load balancing?). I'd appreciate any links to resources/guides that might help me get a better understanding of the problem and its solutions, and perhaps some stories of your own experience in solving similar problems. This will help us evaluate and negotiate with network consultants in the future. If it's relevant, our router config contains a section "policy-map" with "bandwidth percent" for each class of user (LAN, wifi), and "fair-queue".
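
    "Load balancing" is part of the jargon, but the guaranteed-bandwidth piece is usually called QoS (quality of service) with class-based weighted fair queueing (CBWFQ), which matches the policy-map / bandwidth percent / fair-queue fragments quoted above. A generic sketch of that style of IOS config, offered only to help decode what the consultants left behind (names, numbers, and addresses are placeholders, not the router's actual config):

        access-list 110 permit ip 192.168.1.0 0.0.0.255 any
        class-map match-any LAN-PCS
         match access-group 110
        policy-map WAN-OUT
         class LAN-PCS
          bandwidth percent 60
         class class-default
          fair-queue
        interface FastEthernet0/0
         service-policy output WAN-OUT

    Searching for "CBWFQ" plus "dual-WAN load balancing" should turn up the standard Cisco QoS configuration guides, which also explain why a policy like this only shapes traffic in the direction it is applied.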


  • Extracting information from active directory

    - by Nop at NaDa
    I work in the IT support department of a branch of a huge company. I have to take care of a database with all the users, computers, etc. I'm trying to find a way to update the database automatically as much as possible, but the IT infrastructure guys don't give me enough privileges to use Active Directory to dump the users, nor do they have the time to give me the information I need. Some days ago I found Active Directory Explorer from Sysinternals, which allows me to browse through Active Directory, and I found all the information I need there (username, real name, date when the account was created, privileges, company, etc.). Unfortunately, I'm unable to export the data to a human-readable format; I'm only able to take a snapshot of the whole database in a machine-readable format. Taking the snapshot takes hours, and I'm afraid the infrastructure guys won't like me doing entire snapshots on a regular basis. Do you know of any tool (command-line is preferable) that would allow me to retrieve the values of the keys or export them to XML, CSV, etc.?
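
    One command-line option that avoids snapshots entirely: csvde, which ships with Windows and exports matching directory objects straight to CSV over an ordinary LDAP bind (directory reads normally don't require elevated privileges). The base DN and attribute list below are placeholders to adapt:

        csvde -f users.csv -d "DC=branch,DC=example,DC=com" ^
              -r "(&(objectCategory=person)(objectClass=user))" ^
              -l "sAMAccountName,displayName,whenCreated,title,department,company"

    The built-in dsquery/dsget pair is an alternative if csvde's output needs trimming.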


  • KVM virtual machine unable to access internet

    - by peachykeen
    I have KVM set up to run a virtual machine (Windows Home Server 2011 acting as a build agent) on a dedicated server (CentOS 6.3). Recently, I ran updates on the host, and the virtual machine is now unable to connect to the internet. The virtual network is running through NAT; the host has an interface (eth0:0) set up with a static IP (virt-manager shows the network and its IP correctly), and all connections to that IP should be sent to the guest.

    The host and guest can ping one another, but the guest cannot ping anything above the host, nor can I ping the guest from anywhere else (I can ping the host). Pings from the guest to another server under my control, and from an external system to the guest, both return "Destination port unreachable". Running tcpdump on the host and on the destination shows the host replying to the ping, but the destination never sees it (it doesn't even look like the host is bothering to forward it at all, which leads me to suspect iptables). The ping output matches that, listing replies from 192.168.100.1. The guest can resolve DNS, however, which I find rather odd. The guest's network settings (connection TCP/IPv4 properties) are set up with a static local IP (192.168.100.128), a mask of 255.255.255.0, and gateway and DNS at 192.168.100.1.

    When originally setting up the VM and network, I had set up some iptables rules to enable bridging, but after my hosting company complained about the bridge, I set up a new virtual network using NAT and believe I removed all the rules. The VM's network was working perfectly fine for the last few months, until yesterday. I haven't heard anything from the hosting company and didn't change anything on the guest, so as far as I know nothing else has changed (unfortunately the list of updated packages has since fallen off scrollback and I didn't note it down).
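
    A hedged diagnosis that fits the symptoms: the guest resolving DNS but not forwarding traffic is consistent with libvirt's dnsmasq still answering on 192.168.100.1 while the iptables FORWARD/MASQUERADE rules for the NAT network were flushed by the update. Something along these lines may confirm or fix it (the network name "default" and the subnet are assumptions based on the question's addressing):

        # check forwarding and the NAT rules libvirt normally installs
        sysctl net.ipv4.ip_forward
        iptables -t nat -L POSTROUTING -n -v | grep 192.168.100

        # recreate the rules by bouncing the libvirt network
        virsh net-destroy default
        virsh net-start default

    If the rules reappear and the guest can reach the internet again, the update (e.g., an iptables service restart) was the culprit, and letting libvirt re-manage its rules after such events prevents a repeat.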


  • when to upgrade server to include more cores, versus more processors, versus additional server?

    - by gkdsp
    The server hosting market is separated into single, double, quad, etc. processors, where each processor has several cores, or CPUs. My company will offer a Linux-based web application that relies on an Apache web server and a middle tier for business logic. The middle tier is used to crunch math and return results to a client. Many clients may access the application simultaneously. The company will start with one processor having 4 cores. I'm trying to understand how the app uses the cores, and then how to scale the application as business grows, in terms of servers/processors/cores. For example, I'd assume initially one core would be used for Apache and the other 3 used to process clients' requests for math crunching...

    Question 1: Does that mean that, with the 3 cores available, I can handle 3 separate client requests simultaneously (e.g. 1 for each of 3 cores)? I mean, except for the shared RAM, is this effectively like having 3 individual machines (from the point of view of processing client requests simultaneously)? Or may only one client's request be processed at any one time, with that request divided up among up to 3 cores, depending on the type of process doing the math crunching and whether or not it can take advantage of multithreading (so the number of cores impacts how fast any one client request completes)? I'm confused about what the cores mean to the application here.

    Question 2: As the business grows and more client requests need to be processed, should the server be upgraded to (A) a new machine with more cores, (B) a new machine with two processors, 4 cores each, or (C) should we keep the original server and add another server with a single processor? Which route provides the most efficient way to scale the application, in terms of processing more client requests per time interval? Is the choice, for example, limited by RAM (when you need more RAM than the box can handle, it's time to add another server), or something else?

    Question 3: Is the total number of client requests processed simultaneously equal to the number of cores times the number of servers (minus the one core for Apache)?



  • Is there any method of backing up Google Drive files in some sort of versioning system?

    - by VictorKilo
    Backstory: My company is using Google Drive for our shared files. Each user has their own Drive account. In addition, we have a corporate Drive account which holds documents that are shared with each user. Each folder is shared with different users depending on their permissions and positions in the company. Many users are able to add files and update folders within this shared Drive account. This is fine. What is not fine is when someone deletes something that they shouldn't. I have little to no way of knowing when a file is deleted wrongfully. Furthermore, anything that gets deleted goes into the trash bin of the file's creator, so I can't just restore it from the trash.

    Question: Is there any method of backing up Google Drive files in some sort of versioning system that would allow me to revert files back to defined points in time?

    What I have tried: I currently have this corporate Drive account synced to my personal computer through the Google Drive application. Each night, I run a backup of the files using Windows "Backup and Restore". This at least lets me get back files that are lost, but I'd like a cleaner method than this. It's very possible that I may not have the very latest version of a document on my computer when the utility runs.
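
    As an interim measure while looking for a proper versioning tool, timestamped snapshots of the synced folder at least make deletions recoverable to a point in time. A sketch of the nightly job (paths are placeholders; run from PowerShell via Task Scheduler):

        $stamp = Get-Date -Format "yyyyMMdd-HHmm"
        robocopy "C:\Users\me\Google Drive" "D:\DriveSnapshots\$stamp" /E /R:2 /W:5

    Old snapshot folders can be pruned on a schedule; each remaining one is a restorable view of the Drive as of that night.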


  • Sending same email through two different accounts on different domains using Outlook 2010

    - by bot
    I am a programmer and don't have experience with Outlook configurations. Our company has two email domains, namely xyz.com and xyz.biz. Each employee has an email ID on one of these domains, but not both, depending on the project they are working on. The problem we are facing is that when a communication email is sent from the Accounts, HR, Admin, etc. departments, it needs to be sent twice: once through an xyz.com account to all employees with an email address on xyz.com, and once through xyz.biz to all employees with an email address on xyz.biz. I am not sure why they have to send two separate emails, but the IT team has directed all departments to do so, as there is no other solution according to them. Even though two different groups have been created, sending an email from xyz.com to employees in an xyz.biz group does not seem to work. I want to know if Outlook provides a feature such that we can configure some kind of rule to send an email through an ID on xyz.com to all users on xyz.com, and have the same email sent automatically to users on xyz.biz through an ID on xyz.biz. The only technical details I know are that we are using Exchange 2003 and that the IT team claims this is a limitation causing the problem.

    Edit: Our company is split into two main divisions depending on the type of projects. I am pretty sure I use domain XYZ, whereas the employees in the other division use domain ABC to log into their Windows machines or Outlook itself. Also, employees in domain XYZ can access the machines on the network in domain ABC, but not the other way around.

