Search Results

Search found 8692 results on 348 pages for 'patterns and practices'.

Page 86/348

  • What kind of eye wear can I use to protect my eyes from staring at a screen all day?

    - by dr dork
    Many of us stare at computer screens all day. Lately, my eyes have been irritated from prolonged staring at my computer screens. Does anyone use or know of any eye wear technology that helps with this? About five years back, I bought a pair of non-prescription eye glasses that had a no-glare layer put on them by an optometrist. It slightly helped, so I'm considering getting another pair. Is this the best option I have at this point? Thanks so much in advance for your wisdom!

    Read the article

  • Best practice for ONLY allowing MySQL access to a server?

    - by Calvin Froedge
    Here's the use case: I have a SaaS system that was built (dev environment) on a single box. I've moved everything to a cloud environment running Ubuntu 10.10. One server runs the application, the other runs the database. The basic idea is that the server that runs the database should only be accessible by the application and the administrator's machine, who both have correct RSA keys. My question: Would it be better practice to use a firewall to block access to ALL ports except MySQL, or skip firewall / iptables and just disable all other services / ports completely? Furthermore, should I run MySQL on a non-standard port? This database will hold quite sensitive information and I want to make sure I'm doing everything possible to properly safeguard it. Thanks in advance. I've been reading here for a while but this is the first question that I've asked. I'll try to answer some as well = )

    Read the article
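
    A minimal sketch of the firewall approach asked about above, assuming the application server sits at 10.0.0.2 and the administrator's machine at 203.0.113.10 (both addresses are placeholders); disabling unused services still helps, but the firewall gives you an explicit, auditable policy:

    ```bash
    #!/bin/bash
    # Default-deny inbound; always allow loopback and established connections.
    iptables -P INPUT DROP
    iptables -A INPUT -i lo -j ACCEPT
    iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT

    # SSH only from the administrator's machine (placeholder address).
    iptables -A INPUT -p tcp -s 203.0.113.10 --dport 22 -j ACCEPT

    # MySQL (default port 3306) only from the application server (placeholder address).
    iptables -A INPUT -p tcp -s 10.0.0.2 --dport 3306 -j ACCEPT
    ```

    Binding mysqld to the private interface (bind-address in my.cnf) adds a second layer; once only known hosts can reach the port, moving MySQL to a non-standard port adds very little.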

  • Evaluate a vendor laptop before deployment to user?

    - by NetWarrior
    I get numerous requests from executives and users for new, smaller laptops for travel purposes. Most of my evaluation is based upon whether or not the machine can run certain applications, mainly Lotus Notes, Office, and video. Most of the laptops include the Windows 7 OS and are fully loaded with RAM, a high-end processor, and an integrated graphics card. My boss wants me to document the usefulness and performance of each laptop. I'm just a little confused about how to set up a document that can be used by members of the IT department for future evaluations.

    Read the article

  • ESX guest machine floppy drives

    - by warren
    What purpose does having a virtual floppy drive on a guest in ESX serve? Is there a way to configure ESX by default to NOT include such a device? It's annoying to have to remove it by hand once a new VM is ready to be provisioned.

    Read the article

  • How to go about rotating logs which are arbitrary named and placed in deeply nested directories?

    - by Roman Grazhdan
    I have a couple of hosts which are basically a playground for developers. On these hosts, each developer has a directory under /tmp where he is free to do whatever he wants - store files, write logs, etc. Of course, the logs have to be rotated, or else the disk will be 100% full within a week. There can be plenty of files, but I've dealt with that using paths like /tmp/[a-e]*/* and so on and lived happily for a while; as they try new cool stuff on the machine, though, the logrotate rules grow ugly and unmanageable, and it's getting more difficult to understand which files a glob hits. Also, logrotate will segfault if asked to rotate a socket. I don't feel like trying to enforce naming policies in that environment; I think it would take quite a lot of time, annoy people, and still fail at some point. And I still need to manage the logs, not just rm the dirs at night. So is it a good idea, in circumstances like these, to write a script which would handle these temporary files? I prefer sticking with standard utilities whenever possible, but here logrotate is getting less and less manageable. Has anyone heard of logrotate alternatives which would work well in such an environment? I don't need emailed logs or other advanced features, so theoretically some well commented find | xargs would do. P.S. I do have a log aggregator, but this stuff is not going to touch my little cute logstash machine.

    Read the article
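
    Since the question already allows for "some well commented find | xargs", here is a minimal sketch of that approach; the /tmp/[a-e]*/ glob, the *.log name pattern, and the one-day/seven-day thresholds are assumptions to adjust:

    ```bash
    #!/bin/bash
    # Rough cleanup for developer scratch logs. Only regular files are touched,
    # so sockets and FIFOs (the logrotate segfault case) are never considered.
    BASE=/tmp

    # Compress logs not modified in the last 24 hours (skip already-compressed files).
    # Note: a file a process still holds open will keep growing on the old inode,
    # so this is only safe for logs that are reopened or short-lived.
    find "$BASE"/[a-e]*/ -type f -name '*.log' -mtime +0 ! -name '*.gz' -print0 \
        | xargs -0 -r gzip

    # Delete compressed logs older than a week.
    find "$BASE"/[a-e]*/ -type f -name '*.log.gz' -mtime +7 -print0 \
        | xargs -0 -r rm -f
    ```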

  • How can I decrease the time spent reformatting / restoring user's workstations?

    - by CT
    I've just started working for a medium-sized company (approx. 150 users). When users' workstations need to be reformatted for any of a variety of reasons, we reformat, reinstall Windows from an OEM disk, install drivers, install the software our shop requires, and restore the user's documents from the latest backup. While the process isn't very difficult, it is very time consuming. What are some options to simplify / speed up this process? We're mostly a complete Windows shop, with most servers running Win2k3 Enterprise and workstations running a variety of XP, Vista, and 7. Workstations are purchased through a variety of OEMs, mostly Dell.

    Read the article

  • Network update solutions for a company of ~20 (5 local, 15 remote)?

    - by Margaret
    Hi all. This is probably going to be a bit up in the air, because we're still in the "reaching towards solutions" phase, but I figured I'd see what you guys had to say. Plus I honestly know very little about systems and what is good and bad practice. My organisation has always more or less worked on the concept of local machines; since it primarily employed contractors who were working from home, each of those people was largely responsible for their own machine, backup procedures and the like. We're now expanding, though we're still reasonably small (we're up to about 20 staff members). Most people still work remotely, but we have a central office where about five people work. We're getting large enough that we're starting to think it would be a good idea to have a central file server and the like; if someone gets hit by a bus, we want someone else to know where to look for the files to continue their work. A lot of the people who work for us remotely also work on projects for other companies, so I don't want to force them to log in to our server whenever they're on a network, but I do want to make connecting as painless as possible, to improve utilisation. The other thing is that we're getting more people who would like to remote into the office server and do their work there. Our current remote connection application is an SSH install that allows people access to the network; the problem is, it's a black box to me, and I've never understood how to even connect to it (despite supposedly being de facto sysadmin). Thus far I've been able to bounce questions about how to get it working to the guy who does know it well, but he's leaving the company soon, so we probably need a solution for this that I actually understand. We were knocking around the idea of implementing a VPN with some form of remote desktop, and someone mentioned that this was largely a matter of purchasing a router capable of it; I'm not sure of the truth of that statement. This is what we have in the office:

    - Two shiny new i7 servers, each running Windows Server 2008. The precise eventual layout is still being debated a little, but the current suggestion is that one does the primary database crunching while the other is a warm backup of the databases and also runs Reporting Services. They currently have SQL Server 2008 installed and are being connected to via the 'sa' account. We're hoping to make each person use their own account (preferably one tied to the 'central' password we set up, so we can use Windows Authentication).
    - An older server, running XP Pro, that we are currently using as a test bed for a project that requires access to older versions of software. This machine is also being used to take backups, but I'm thinking of moving that functionality elsewhere.
    - A spare desktop from a guy who left the company (XP Pro). We're thinking of bumping up the hard disk space and using it as the magical file server that's going to solve everything.
    - Assorted desktops, laptops, etc., at least one for each person in the office (a mix of Win XP and Win 7; occasionally someone who normally works remotely drops in to the office with a laptop running Vista, but it's pretty rare). All are set up with local user accounts at the moment; I don't know if that's the best arrangement.

    Purchasing more hardware is not a big problem, but we figure we might as well make use of what we've got first. Is Active Directory a big magic wand that's going to solve all the world's problems? Is there some other arrangement we should be looking at instead?

    Read the article

  • What should a hosting company do to prepare for IPv6?

    - by Josh
    At the time of writing, The IPv4 Depletion Site estimates there are 300 days remaining before all IPv4 addresses have been allocated. I've been following the depletion of IPv4 addresses for some time and realize the "crisis" has been going on for many years, and IPv4 addresses have lasted longer than expected, however... As the systems administrator for a small SaaS / website hosting company, what steps should I be taking to prepare for IPv6? We run a handful of CentOS and Ubuntu Linux systems on managed hardware in a remote datacenter. All our servers have IPv6 addresses, but they appear to be link-local addresses. Our primary business function is website hosting on a proprietary website CMS system. One of my concerns is SSL certificates; at the moment every customer with an SSL certificate gets a dedicated IPv4 address. What else should I be concerned about / what action should I take to be prepared for IPv4 depletion?

    Read the article
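
    As a small, concrete starting point for the "they appear to be link-local addresses" observation, a sketch of telling link-local from routable addresses and testing v6 connectivity; eth0 is a placeholder interface and 2001:db8::/32 is the documentation prefix, so the real address has to come from your datacenter's allocation:

    ```bash
    #!/bin/bash
    # Link-local addresses (fe80::/10) are assigned automatically and are not
    # reachable from the internet; only "scope global" addresses are routable.
    ip -6 addr show dev eth0

    # Adding a routed address (placeholder prefix - replace with your allocation).
    ip -6 addr add 2001:db8::10/64 dev eth0

    # Verify outbound IPv6 once a global address and default route exist.
    ping6 -c 3 ipv6.google.com
    ```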

  • What is the best way to make a test duplicate of an AD DC?

    - by Puddingfox
    I have a production Active Directory server running on a Windows Server 2008 R2 machine. I would like to make a duplicate of this machine with all settings the same, except that the domain would be a slight variation of the current domain (think winnet to winnet2). Would it be easier to try to clone the hard drive while the machine is running and change the domain on the clone in a different machine, or to export the data and import it on a different box?

    Read the article

  • Expertise Location [closed]

    - by Alexey Shatygin
    Do any really working implementations of expertise-location systems exist? It is very hard to find anything about them. What sources should I use, what should I read, where should I look? I've started by reading David W. McDonald's and Mark S. Ackerman's overview "Just talk to me" and now need something more detailed. For those voting to close this question as off-topic: it is connected to the IT sphere; if you don't know what it is, why vote at all? http://en.wikipedia.org/wiki/Expertise_finding http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.40.4654&rep=rep1&type=pdf

    Read the article

  • What to do before trying to benchmark

    - by user23950
    What are the things I should do before trying to benchmark my computer? I've got these tools for benchmarking: 3DMark, Cinebench, Geekbench, Juarez DX10, Open Source Mark. Do I need to run a full spyware and virus scan before proceeding? What else should I do in order to get accurate readings?

    Read the article

  • Security considerations in providing VPN access to non-company issued computers [migrated]

    - by DKNUCKLES
    There have been a few people at my office who have requested the installation of DropBox on their computers to synchronize files so they can work on them at home. I have always been wary of cloud computing, mainly because we are a Canadian company and enjoy the privacy of being outside the reach of the Patriot Act. The policy before I started was that employees with company-issued notebooks could be issued a VPN account, and everyone else had to use a remote desktop connection. The theory behind this logic (as I understand it) was that we had the potential to lock down the notebooks, whereas the employees' home computers were outside our grasp. We had no ability to ensure they weren't running as administrator all the time or that they were running AV, so they were at a higher risk of being infected with malware and could compromise network security. With the increase in people wanting DropBox, I'm curious as to whether this policy is too restrictive and overly paranoid. Is it generally safe to provide VPN access to an employee without knowing what their computing environment looks like?

    Read the article

  • How to give write permissions to multiple users?

    - by Daniel Rikowski
    I have a web server and I'm uploading files using an FTP client. Because of that, the owner and the group of each file are taken from the user used during the upload. Now I have to make such a file writable by the web server (apache/apache). One way would be to just change the owner and the group of the uploaded file to apache/apache, but that way I cannot modify the file using the FTP account. Another way would be to give the file 777 permissions. Neither approach seems very professional, and both are a little bit risky. Are there any other options? In Windows I can just add another user to the file; can something similar be done with Linux?

    Read the article
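
    A sketch of the two usual Linux answers here: a shared group, or a POSIX ACL that adds the web server user without changing the owner. The user ftpuser, the group webedit, and the file path are placeholders, and setfacl needs ACL support on the filesystem:

    ```bash
    #!/bin/bash
    # Option 1: shared group. Both the FTP user and apache join the group,
    # the file is group-writable, and nobody needs 777.
    groupadd webedit
    usermod -a -G webedit apache
    usermod -a -G webedit ftpuser
    chgrp webedit /var/www/html/upload.php
    chmod 664 /var/www/html/upload.php

    # Option 2: POSIX ACLs - the closest thing to "just add another user to the file".
    setfacl -m u:apache:rw /var/www/html/upload.php
    getfacl /var/www/html/upload.php    # confirm the extra entry
    ```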

  • Getting a Cross-Section from Two CSV Files

    - by Jonathan Sampson
    I have two CSV files that I am working with. One is massive, with about 200,000 rows. The other is much smaller, with about 12,000 rows. Both fit the same format of names and email addresses (everything is legit here, no worries). Basically I'm trying to get only a subset of the second list by removing all values that presently exist in the larger file. So, List A has ~200k rows, and List B has ~12k. These lists overlap a bit, and I'd like to remove all entries from List B if they also exist in List A, leaving me with new and unique values only in List B. I've got a few tools at my disposal that I can use. Open Office is loaded on this machine, along with MySQL (queries are alright). What's the easiest way to create a third CSV with the intersection of the data?

    Read the article
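
    Since both files share the same name/email layout, a command-line sketch of "List B minus List A"; the file names are placeholders, and the awk variant keys only on the email column (assumed here to be column 2) in case the name fields don't match exactly:

    ```bash
    #!/bin/bash
    # Whole-line comparison: keep only the List B rows that never appear in List A.
    grep -v -x -F -f list_a.csv list_b.csv > list_b_unique.csv

    # Keyed on the email column (column 2 of a comma-separated file) instead.
    awk -F',' 'NR==FNR {seen[$2]=1; next} !($2 in seen)' list_a.csv list_b.csv \
        > list_b_unique_by_email.csv
    ```

    The same result is easy to get in MySQL with a LEFT JOIN ... WHERE a.email IS NULL once both files are loaded, but for a one-off job of this size the shell version avoids the import step.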

  • RAIDs with a lot of spindles - how to safely put to use the "wasted" space

    - by kubanczyk
    I have a fairly large number of RAID arrays (server controllers as well as midrange SAN storage) that all suffer from the same problem: barely enough spindles to keep up with peak I/O performance, and tons of unused disk space. I guess it's a universal issue, since vendors offer the smallest drives at 300 GB capacity but random I/O performance hasn't really grown much since the time when the smallest drives were 36 GB. One example is a database that takes 300 GB and needs random performance of 3200 IOPS, so it gets 16 disks (4800 GB minus 300 GB, and we have 4.5 TB of wasted space). Another common example is redo logs for an OLTP database that is sensitive in terms of response time. The redo logs get their own 300 GB mirror but take 30 GB: 270 GB wasted. What I would like to see is a systematic approach for both Linux and Windows environments. How should the space be set up so the sysadmin team would be reminded about the risk of hindering the performance of the main db/app? Or, even better, be protected from that risk? The typical situation that comes to mind is "oh, I have this very large zip file, where do I uncompress it? Umm, let's see the df -h and we'll figure something out in no time..." I don't put emphasis on strictness of the security (sysadmins are trusted to act in good faith), but on overall simplicity of the approach. For Linux, it would be great to have a filesystem customized to cap the I/O rate at a very low level - is this possible?

    Read the article
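
    On the capping question at the end: the closest Linux equivalent to "a filesystem that caps I/O" is throttling at the block layer. Below is a sketch using the cgroup v1 blkio controller (kernel 2.6.37 or later), assuming it is mounted at /sys/fs/cgroup/blkio and that the scratch volume is device 8:16 (check with lsblk); anything started inside the group is limited no matter which directory it touches:

    ```bash
    #!/bin/bash
    # Create a low-priority group capped at ~5 MB/s reads and writes on device 8:16.
    mkdir -p /sys/fs/cgroup/blkio/scratch
    echo "8:16 5242880" > /sys/fs/cgroup/blkio/scratch/blkio.throttle.read_bps_device
    echo "8:16 5242880" > /sys/fs/cgroup/blkio/scratch/blkio.throttle.write_bps_device

    # Move the current shell into the group, then run the ad-hoc work there.
    echo $$ > /sys/fs/cgroup/blkio/scratch/cgroup.procs
    unzip /tmp/very-large.zip -d /scratch/
    ```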

  • What UDP port number(s) is/are most likely to be unblocked at a client? [closed]

    - by mike
    For a custom UDP server servicing a wide variety of client machines sending custom UDP packets, what's the best port to choose as the standard listening port for the server (in that the port is not likely to be disabled at the client by a firewall or router)? My first inclination is to use port 80, since almost everyone is using HTTP, but that's TCP, and maybe blocking of UDP on port 80 has become common. What's the best port to choose?

    Read the article

  • What command line tools for monitoring host network activity on linux do you use?

    - by user27388
    What command line tools are good for reliably monitoring network activity? I have used ifconfig, but an office colleague said that its statistics are not always reliable. Is that true? I have recently used ethtool, but is it reliable? What about just looking at the /proc/net 'files'? Is that any better? EDIT: I'm interested in Tx/Rx packets and Tx/Rx bytes, but most importantly in drops or errors and why the drop/error might have occurred.

    Read the article
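
    For the counters mentioned in the edit (Tx/Rx packets and bytes, and especially drops and errors), a quick sketch of reading them straight from the kernel rather than from ifconfig's formatted output; eth0 is a placeholder interface name:

    ```bash
    #!/bin/bash
    # Raw per-interface counters: bytes, packets, errs, drop for both RX and TX.
    cat /proc/net/dev

    # The same counters with labels, plus link state, via iproute2.
    ip -s link show dev eth0

    # Crude polling loop: snapshot the eth0 line every five seconds
    # (columns are bytes, packets, errs, drop, ... for RX and then TX).
    while sleep 5; do
        date +%T
        grep 'eth0:' /proc/net/dev
    done
    ```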

  • Suspicious activity in access logs - someone trying to find phpmyadmin dir - should I worry?

    - by undefined
    I was looking over the access logs for a server that we are running on Amazon Web Services. I noticed that someone was obviously trying to find the phpMyAdmin directory - they (or a bot) were trying different paths, e.g. admin/phpmyadmin/, db_admin, ... and the list goes on. Actually there isn't a database on this server, so this was not a problem; they were never going to find it. But should I be worried about such snooping? Is this just a really basic attempt at getting into our system? Our actual database is held on another managed server, which I assume is protected from such intrusions. What are your views on such sneaky activity?

    Read the article
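
    Not required, but if you want to quantify the background noise before deciding whether to act on it, a quick sketch that pulls the probing source IPs out of an Apache-style access log; the log path is a placeholder:

    ```bash
    #!/bin/bash
    # Count requests for common admin-panel paths, grouped by source IP.
    grep -Ei 'phpmyadmin|db_admin|pma' /var/log/apache2/access.log \
        | awk '{print $1}' | sort | uniq -c | sort -rn | head
    ```

    Seeing the same handful of IPs probing hundreds of paths is the usual signature of automated scanners; a tool like fail2ban can ban them automatically if the noise ever becomes a problem.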

  • How to use HttpContext.Current on asynchronous threads?

    - by Eran Betzalel
    I have a scheduled-tasks mechanism (very similar to DotNetNuke's) in my business logic library (a DLL that is used by an ASP.NET website). When I use HttpContext.Current inside one of these tasks, it returns null, because the current async thread (or task) was not initiated from a user's request. How can I use HttpContext.Current in these asynchronous threads? P.S. - I think my question is more best-practices related than technical.

    Read the article

  • DDD with Grails

    - by Paul
    I cannot find any info about doing Domain Driven Design (DDD) with Grails. I'm looking for any best practices, experience notes or even open source projects that are good examples of DDD with Grails.

    Read the article

  • CSS style guide question

    - by Vasu
    What are the best practices with respect to styling HTML elements using CSS? What's the preferred granularity for styling HTML elements? I.e., do you have lots of

        div.searchbox input
        div.searchbox p
        div.searchbox p.help

    or

        input.searchbox
        p.searchbox
        p.searchboxhelp

    Which CSS is considered easier to maintain? Is using grid frameworks and resets considered best practice? Thanks

    Read the article
