Search Results

Search found 1614 results on 65 pages for 'emps (expensive managemen'.

Page 47/65

  • Anyone have real world experience with Rackspace Cloud Sites at high scale?

    - by Allara
    I have a pure web service application layer using .NET. I was originally planning to use Amazon EC2, but rolling my own autoscaling procedures is a bit intimidating, and the scaling isn't very granular from a cost perspective. If the app is successful, we could be looking at relatively high scale (millions of requests per month). The app uses Amazon SimpleDB as the database layer.

    As a test, I have the app running successfully on Rackspace Cloud Sites. Performance seems to be equal to (if not better than) a standard EC2 instance, even with the added latency of the SimpleDB requests travelling to the Rackspace network. However, testing at this stage is at a very low scale.

    My question is this: has anyone had real-world experience running a high-scale application on Rackspace Cloud Sites? Moreover, once you pass the "included" 10,000 compute cycles per month, does the overall cost end up lower than running lots of EC2 instances? My assumption is that with completely smooth scaling (i.e. only adding compute resources as needed), the cost could be lower on average. However, their stated calibration of 10,000 CCs to a single 1.2 GHz CPU suggests it would, on average, be much more expensive than EC2. I like the idea of no-touch scaling, but is it too good to be true?

    Read the article
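
    A minimal back-of-the-envelope comparison of the two pricing models for the question above. Every number in it (per-request compute cycles, overage price, EC2 hourly rate, instance capacity) is a hypothetical placeholder, not a published rate, so substitute current pricing from both providers before drawing any conclusion.

        # Hypothetical cost comparison: Rackspace Cloud Sites compute cycles vs. EC2 instances.
        # Every number below is an assumed placeholder, not a quoted price.

        def cloud_sites_cost(requests_per_month,
                             cycles_per_request=0.002,      # assumed CCs consumed per request
                             included_cycles=10_000,
                             price_per_extra_cycle=0.01):   # assumed $ per CC over the included amount
            cycles = requests_per_month * cycles_per_request
            extra = max(0, cycles - included_cycles)
            return extra * price_per_extra_cycle

        def ec2_cost(requests_per_month,
                     requests_per_instance_hour=50_000,     # assumed capacity of one instance
                     price_per_instance_hour=0.10):         # assumed $ per instance-hour
            hours_needed = requests_per_month / requests_per_instance_hour
            return hours_needed * price_per_instance_hour

        if __name__ == "__main__":
            for requests in (1_000_000, 5_000_000, 20_000_000):
                print(f"{requests:>12,} req/mo  "
                      f"Cloud Sites ~${cloud_sites_cost(requests):,.2f}  "
                      f"EC2 ~${ec2_cost(requests):,.2f}")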

  • What to do about old MacBook

    - by John
    I have a white MacBook running OS X 10.5.8. I accidentally dropped it, and about an inch of the right-most edge of the screen is blacked out; I can't see the clock or the Trash because of it. I checked how much it would cost to repair, and the repair is a flat rate of $300. That seemed kind of expensive, about a third of the cost of the MacBook itself, and accidentally dropping a notebook is something that will happen sooner or later, so I figured getting a Lenovo would be the sensible choice. I ended up getting a 15-inch MacBook Pro instead, but do you think it's worth the money to fix my old MacBook? I have even seen used MacBooks going for about $400 on eBay. Anyway, I don't know what to do with it. I do like its screen and keyboard better than the MacBook Pro's: the screen has less glare, the keyboard is crisper and more fun to type on, and it's more wieldy when I'm using it while lying down.

    Read the article

  • Upgrade to Q9550 or i7 920 on a budget?

    - by evan
    I'm planning to upgrade my computer and am torn between maxing out the system I have or investing in the X58 architecture. I'm currently using an E6600 Core 2 Duo with 4GB of RAM (800 MHz) on an Asus P5K-E motherboard which I built two years ago. My original plan was that one day I'd upgrade the machine to 8GB (1066 MHz, the max the P5K-E allows) and to the Core 2 Quad Q9550 to give it a good four years of life. However, that was before the i7 came out. I use my computer mainly for software development, which I do inside virtual machines, and the i7 seems ideal for that because it is no longer limited by the speed of the FSB. And when I looked into it, 8GB of DDR3 RAM isn't much more expensive than 8GB of DDR2, and the i7 920 is comparable in price to the Q9550, which doesn't make much sense to me.

    So the question is: is it worth swapping the motherboard out for around $250 and upgrading all three components, or should I spend that money on an SSD or 10,000 RPM drive for the existing system's OS/apps/virtual machine drive? Or just put the $250 towards a completely new machine in a year or two? Would the i7 really give that much of a boost over the Q9550 for what I'd be using it for? Thanks in advance for your input!

    Read the article

  • Recover data from hard drive with partitions (but not most data) overwritten

    - by Macha
    I have a 500GB hard drive I've been keeping around to recover data from; it came out of a failing NAS that got sort of... erratic at the end. I finally got rid of the NAS when a firmware update removed the partition table. Fast forward to a week ago, when I was building a new PC: a mix-up resulted in me putting the hard drive in question into the new PC and installing Windows XP on the first 100GB. I'm presuming any data on that first 100GB is now gone, but for the rest of it, is there any way I can recover it at home, since professional data recovery is currently too expensive? I have a blank 1TB HDD that I can store an image of that hard drive on. The problem was definitely with the NAS and not the hard drive, as the hard drive took a successful install of Windows when mistakenly placed in the new PC, and there were capacitors in the NAS's circuitry that were clearly broken.

    The data I want to recover (in order of priority) is:

    - High: some JPGs of family photos.
    - Medium: some RAW files (there are also JPG versions of all of these).
    - Low: some MP3s, AVIs and ISOs; I can re-rip most of these if need be, but it'd be handy not to have to.

    (I don't need a backup lecture, and if you can hold it in from nagging Jeff Atwood for it, you can hold it in from nagging me for it.)

    In short: the partition tables are gone and overwritten. The data is not overwritten, except for an amount equal to the size of a Windows XP SP3 installation.

    Read the article
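
    Since the highest-priority files in the question above are JPEGs, a naive file-carving pass over a raw image of the drive can often recover them even with the partition table gone. This is only a sketch: it assumes the disk has already been imaged to a file (for example with ddrescue), it looks for JPEG start/end markers, and it will produce some false positives and truncated files.

        # Naive JPEG carver: scan a raw disk image for JPEG start (FF D8 FF) and
        # end (FF D9) markers and write each candidate out as a separate file.
        # Assumes the disk has already been imaged to 'drive.img' with a tool like ddrescue.

        import os

        SOI = b"\xff\xd8\xff"        # JPEG start-of-image marker plus first marker byte
        EOI = b"\xff\xd9"            # JPEG end-of-image marker
        MAX_SIZE = 30 * 1024 * 1024  # skip candidates larger than 30 MB (likely false positives)

        def carve(image_path, out_dir):
            os.makedirs(out_dir, exist_ok=True)
            count = 0
            with open(image_path, "rb") as f:
                data = f.read()      # fine for a test slice; mmap the file for a full 500 GB image
            pos = 0
            while True:
                start = data.find(SOI, pos)
                if start == -1:
                    break
                end = data.find(EOI, start)
                if end == -1:
                    break
                end += len(EOI)
                if end - start <= MAX_SIZE:
                    with open(os.path.join(out_dir, f"recovered_{count:05d}.jpg"), "wb") as out:
                        out.write(data[start:end])
                    count += 1
                pos = start + 1      # keep scanning past this start marker
            return count

        if __name__ == "__main__":
            print(carve("drive.img", "carved_jpgs"), "candidate JPEGs written")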

  • Drowning in documents - recommend doc management solutions?

    - by Martin Day
    I've been researching document management lately. I want to organise my docs at home and also at the office. Finding affordable solutions one can actually test-drive is quite hard; some that I've downloaded just don't seem to work (testing on a brand new Vista PC). I've seen some software on Amazon like PaperPort but I'm not really sure what it's like.

    For home I'd like something to organise files, with full-text search, good scanner integration, a nice interface, etc. But the office seems harder. I need something that does proper workflow and keeps versions, and it should have an audit trail. Documents can be approved, checked in/out, etc. I know a few clients who would like something similar. It would be great just to import thousands of documents from a shared drive and get them indexed with duplicates killed. I'd also like to be super clear about how and where the documents are stored, so that maintenance and backups are straightforward.

    My Google/Twitter searches lead back to the same tired and vague web pages pushing what look like expensive, custom-made solutions. Some might be very good, I suppose, but it's darn hard to tell. I don't mind a hosted package, but all in all I don't think something like Google Docs, as good as it is now, will work; there are too many quirks and missing features compared to Office. Being able to work directly with the common Office file formats is important. I've noted a similar-sounding question asked here back in August, but it didn't seem to turn up many solutions that I could easily and quickly apply, and there could have been some changes since then, so I feel it's worth asking.

    Read the article

  • Would a PHP application benefit from being served from a RAM drive?

    - by Tom Marthenal
    I am in charge of hosting a PHP application that is large and slow, but easy to scale. The application is entirely static, with no writable disk storage needed. We've profiled the application, and the main bottleneck appears to be loading the application rather than the work the application does. The application is not CPU-intensive, although it does use a fair amount of memory (think Magento).

    Currently we distribute it by having a series of servers with the same PHP files on their hard drives and a load balancer in front of them. Easy but expensive. I've been reading about RAM disks and the I/O benefits they offer, and was wondering if they would be well suited to PHP applications. Since PHP applications are loaded from disk for every request and often involve lots of different files (as opposed to being kept in memory like a Java application), I would figure that disk performance can be a severe bottleneck.

    Would placing the PHP files on a RAM disk and using the mount point as Apache's document root offer performance benefits? A startup script could create the RAM drive and then copy the files (which are plain text and small) from a permanent location to the temporary RAM drive. Does this make sense, or should I just trust the Linux kernel to cache the appropriate files in memory by itself?

    Read the article
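
    A rough illustration of the startup script described in the question above: mount a tmpfs and copy the PHP tree into it so Apache can serve from the mount point. Paths, the size limit and the document root are placeholders, it must run as root, and (as the question itself suggests) the kernel page cache may already provide most of this benefit.

        # Minimal sketch of a boot-time script: mount a RAM-backed tmpfs and copy the
        # PHP application into it so Apache can serve from the mount point.
        # All paths and sizes are placeholders; run as root.

        import shutil
        import subprocess

        SOURCE = "/var/www/app-release"      # permanent copy of the PHP files (placeholder)
        RAMDISK = "/mnt/app-ramdisk"         # tmpfs mount point used as Apache's DocumentRoot
        SIZE = "512m"                        # tmpfs size limit

        def build_ramdisk():
            subprocess.run(["mkdir", "-p", RAMDISK], check=True)
            subprocess.run(
                ["mount", "-t", "tmpfs", "-o", f"size={SIZE}", "tmpfs", RAMDISK],
                check=True,
            )
            # Copy the application tree into RAM; dirs_exist_ok in case the script is re-run.
            shutil.copytree(SOURCE, RAMDISK, dirs_exist_ok=True)

        if __name__ == "__main__":
            build_ramdisk()
            print(f"Application copied to {RAMDISK}; point Apache's DocumentRoot there.")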

  • WAN Optimization for Small Office/Home Office

    - by TiernanO
    I have been reading up on WAN optimization for the last while, mostly out of interest in speeding up my own internet connections, but also to speed up the office internet connection. At home, I have 2 cable modems plugged into a RouterBoard RB750, which load-balances the connections. In the office, we have a single connection into a NetGear router. Most of the WAN optimization products I have seen seem to be prohibitively expensive, and also seem to be based on the idea of having multiple branches around the world.

    What I am looking for, ideally, is as follows:

    - Software install: I am "guessing" I need to install it in two places: one in the office or house, and one in "the cloud".
    - Any connections going to, say, the US (we are in Europe, but our backups currently live in the US, which would be something important to speed up) would be "tunnelled" through the optimizer.
    - If downloading or uploading large files, open multiple connections between "the cloud" and the optimizer; this is where a lot of speed could be gained.
    - Finally, items not already compressed would be compressed on the cloud side, and items already on the optimizer would not be sent again, kind of like rsync or proxy servers.

    So, is there something that can be done? Is it available using off-the-shelf components (some magic script with SSH, Squid, Linux and duct tape), or is it something that needs to be purchased? Or even an open source project that does 90% of what I am asking?

    Read the article

  • Write once, read many (WORM) using Linux file system

    - by phil_ayres
    I have a requirement to write files to a Linux file system that cannot subsequently be overwritten, appended to, updated in any way, or deleted. Not by a sudoer, root, or anybody. I am attempting to meet the requirements of the financial services recordkeeping regulation FINRA 17a-4, which basically requires that electronic documents are written to WORM (write once, read many) devices. I would very much like to avoid having to use DVDs or expensive EMC Centera devices.

    Is there a Linux file system, or can SELinux support the requirement, for files to be made completely immutable immediately (or at least soon) after write? Or is anybody aware of a way I could enforce this on an existing file system using Linux permissions, etc.? I understand that I can set read-only permissions and the immutable attribute, but of course I expect that a root user would be able to unset those. I considered storing data on small volumes that are unmounted and then remounted read-only, but then I think root could still unmount and remount them as writable again.

    I'm looking for any smart ideas, and in the worst case I'm willing to do a little coding to 'enhance' an existing file system to provide this, assuming there is a file system that is a good starting point, and to put in place a carefully configured Linux server to act as this type of network storage device, doing nothing else. After all of that, encryption on the files would be useful too!

    Read the article
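
    The immutable attribute mentioned in the question above can be applied from a script like the sketch below. As the question already notes, root can clear the flag again with chattr -i, so on its own this does not satisfy a strict WORM requirement; paths are placeholders and the filesystem must support the flag (e.g. ext3/ext4, XFS).

        # Sketch: mark a newly written file read-only and immutable after the write completes.
        # chattr +i requires root and a filesystem that supports the immutable flag.
        # NOTE: root can still remove the flag with 'chattr -i', so this alone is not true WORM.

        import os
        import stat
        import subprocess

        def seal(path):
            # 1. Drop write permission for everyone.
            os.chmod(path, stat.S_IRUSR | stat.S_IRGRP | stat.S_IROTH)
            # 2. Set the immutable attribute so the file cannot be changed, renamed or
            #    deleted without first clearing the flag.
            subprocess.run(["chattr", "+i", path], check=True)

        if __name__ == "__main__":
            record = "/srv/worm/records/example-record.pdf"   # placeholder path
            seal(record)
            print(f"sealed {record}")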

  • Servers - Buying New vs Buying Second-hand

    - by Django Reinhardt
    We're currently in the process of adding additional servers to our website. We have a pretty simple topology planned: a Firewall/Router server in front of a Web Application server and a Database server. (I used a simple, and technically incorrect, diagram in a previous question to illustrate what I mean.) We're now wondering about the specs of our two new machines (the Web App and Firewall servers) and whether we can get away with buying a couple of old servers. (Note: both machines will be running Windows Server 2008 R2.) We're not too concerned about our Firewall/Router server, as we're pretty sure it won't be taxed too heavily, but we are interested in our Web App server.

    I realise that answering this type of question is really difficult without a ton of specifics on users, bandwidth, concurrent sessions, etc., so I just want to focus on the general wisdom of buying old versus new. I had originally specced a new Dell PowerEdge R300 (1U rack) for our company. In short, because we're going to be caching as much data as possible, I focused on processor speed and memory:

    - Quad-Core Intel Xeon X3323 2.5 GHz (2x3M cache), 1333 MHz FSB
    - 16GB DDR2 667 MHz

    But when I was looking for a cheap second-hand machine for our Firewall/Router, I came across several machines that made our engineer ask a very reasonable question: if we stuck a boat-load of RAM in this thing, wouldn't it do for the Web App server and save us a ton of money in the process? For example, what about a second-hand machine with the following specs?

    - 2x Dual-Core AMD Opteron 2218 2.6 GHz (2MB cache), 1000 MHz HT
    - 16GB DDR2 667 MHz

    Would it really be comparable with the more expensive (new) server above? Our engineer postulated that the reason companies upgrade their servers to newer processors is often that they want to reduce their power costs, and that a 2.6 GHz processor is still a 2.6 GHz processor, no matter when it was made. Benchmarks on various sites don't really support this theory, but I was wondering what server admins thought. Thanks for any advice.

    Read the article

  • VOIP and internet connection speeds [cable vs. fiber]

    - by microchasm
    Our office is migrating to IP telephony. We have fewer than 10 employees who will be using the phones. We currently have cable internet, and the provider just bumped the speeds. There is a data center that was recently built in our building, and we were considering co-locating there in the near future. As a result, they offered us access to their triple-redundant internet, but it's quite expensive: 3 Mbps committed with up to 10 Mbps burst for $250/month (discounted). We pay about $120 for our cable (which we planned to keep, at least for TV).

    I want the phone system and the LAN to be as separate as possible. I was thinking about keeping the cable for the LAN and using the other connection for the phones (until I saw the price). Now I'm thinking it might make more sense to add on to our existing cable setup and change our phone service to have DSL only as a backup for the cable. Is there any real benefit to the fiber, especially at that price? Any other suggestions or ideas? Thanks.

    Read the article
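
    For scale, a rough bandwidth estimate for the question above: even with every employee on a call at once, the committed 3 Mbps link is nowhere near saturated. The per-call figure is an assumption for the G.711 codec with typical RTP/UDP/IP overhead (roughly 90-100 kbps per direction); other codecs such as G.729 use considerably less.

        # Rough VoIP bandwidth estimate. The per-call bandwidth is an assumption for
        # G.711 with 20 ms packetization plus RTP/UDP/IP overhead (~90-100 kbps each way).

        CONCURRENT_CALLS = 10
        KBPS_PER_CALL = 100          # assumed per-direction bandwidth per G.711 call
        LINK_KBPS = 3_000            # the 3 Mbps committed rate quoted by the data center

        needed = CONCURRENT_CALLS * KBPS_PER_CALL
        print(f"{CONCURRENT_CALLS} calls ~ {needed} kbps "
              f"({needed / LINK_KBPS:.0%} of a {LINK_KBPS / 1000:.0f} Mbps committed link)")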

  • What sound cards are there besides Creative's that offer benefits for gaming?

    - by Vilx-
    Many years (6? 7?) ago I bought an Audigy sound card to replace the onboard sound and was astonished at the improvement in games. It was a completely different sound; the whole experience became far more immersive. As time has passed, however, the card has become old: support for the latest Windows versions is declining and newer technologies have certainly been developed since. So I was starting to wonder what newer hardware exists. Sure, there is the Sound Blaster X-Fi, but that's quite expensive, and I'm not entirely thrilled by Creative's past policies either (like the whole affair with Daniel_K). But are there any alternatives? EAX is Creative's proprietary technology, so it's doubtful that any other manufacturer has implemented it, and I haven't heard of any competing standards either.

    To clarify, what I would like is something like a "sound accelerator": a sound card that would offload sound processing from my CPU while at the same time giving astounding effects that would be impractical to do on the CPU in the first place. I'm not interested in absurd sampling rates (most of the time I can't tell an MP3 and a CD apart) or countless channels (I'm using stereo headphones), but I am interested in special effects in games. Are there any alternatives, or does Creative have a monopoly in this market?

    Read the article

  • Intel Motherboard Lightning Victim Dies Hard

    - by Stetson RDT
    Today, I have a more hardware-related question. I have an Intel board, and I really do not know which board it is; I built the machine for a relative, but he forgot to keep the documentation. Long story short, the computer was disconnected during a lightning storm, but a lightning strike travelled in via the Ethernet cable (it was directly connected to a power brick of the kind commonly seen on long-distance ISP wireless transmitters), and the motherboard was shocked. I am attempting to get this PC going again.

    The problem is as follows: the computer will randomly reboot, right in the middle of anything, as it pleases. It may load to EFI (or whatever the BIOS is nowadays), may load to the bootloader, and may even get to the OS, but before 5 minutes are up the system will always die. Out of curiosity, I plugged my voltmeter into a Molex connector. On the 5V side it reads a good, consistent +5.13V. On the 12V side it fluctuates, as follows: upon startup it soars to 12.11-12.13V; it will then do one of two things: immediately drop to 12.04-12.05V, or hover for about a minute at 12.11-12.13V and then drop. It seems the longer the voltage stays at 12.11-12.13V, the shorter the machine will stay running. Also, the POST codes, whenever the machine locks up but does not die hard, seem to be between "AA" and "AC".

    Does this make any sense to anybody? Do you all think this motherboard is salvageable? It was an expensive bugger, and I'd prefer not to replace it.

    Read the article

  • Embedded Windows XP

    - by Kyle
    My company acquired a brake press for bending large structural steel plates. We received it second-hand, and it came with an embedded copy of Windows XP. Now for the part that's driving me nuts: Plug and Play has been turned off, and Accessibility Options have been disabled. What does this mean for me? Keyboards will not work! Nothing that plugs into a USB port will work, and it does not have a CD-ROM drive. I have tried to turn Plug and Play back on using the on-screen keyboard, but that isn't there either, since Accessibility Options is disabled. I would just get an updated copy of the embedded OS, but they come from Sweden and are extremely expensive. I assume there has got to be a way to get USB devices to work. We need to get a Wi-Fi adapter on it so I can use TeamViewer and remotely configure it for our needs.

    Things to keep in mind:

    - There is no keyboard, so everything has to be done with the mouse.
    - There is no PS/2 port for a keyboard, just one for a mouse. Odd.
    - I am 5 states away from this location and have been working with a tech who is physically installing the machine.
    - System32 seems to be missing A LOT of files; the tech told me there are only 8 folders in there and no other files (I don't even understand how Windows is running like this).

    If anyone has ANY ideas I would appreciate it; I am unsure where to go from here.

    Read the article

  • Combat server downtime by duplicating server and re-routing when main server is down

    - by Wasim
    I have a CentOS server which at times either crashes or gets hit by DDoS attacks. At the moment I have an off-site backup which is filled with 1.7TB of data. I'm currently paying as much for the backup as I am for the server, and I was looking for advice from experienced people on the best way to proceed from here.

    Would it be a viable solution to ditch the off-site backup and instead purchase an additional server which is an exact duplicate of the first server? Then, if the first server goes down, users are re-routed to the second server without noticing the first server is even down. This would give an automatic backup of the first server (albeit not off-site) and remove the need for the expensive off-site backup.

    Is the above a real alternative to pricey backup, or is off-site backup absolutely necessary? And how would I go about doing this (it's obviously fairly complex, so links to some reading material or even just the terminology for the procedure would be great)? Appreciate the help and advice.

    Read the article
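
    As a starting point for the re-routing part of the question above, a minimal health check that polls the primary server and flags when traffic should move to the secondary. The URL, interval and threshold are placeholders, and the actual switch (a low-TTL DNS update, a floating IP, or a load balancer change) is deliberately left as a stub.

        # Minimal health-check loop: poll the primary server and call a failover hook
        # after several consecutive failures. URL, interval and threshold are placeholders;
        # the failover itself (DNS update, floating IP, load balancer API) is left as a stub.

        import time
        import urllib.request

        PRIMARY_URL = "http://primary.example.com/health"   # placeholder endpoint
        CHECK_INTERVAL = 30      # seconds between checks
        FAIL_THRESHOLD = 3       # consecutive failures before failing over

        def primary_is_up():
            try:
                with urllib.request.urlopen(PRIMARY_URL, timeout=10) as resp:
                    return resp.status == 200
            except Exception:
                return False

        def fail_over_to_secondary():
            # Stub: update low-TTL DNS, move a floating IP, or reconfigure the load balancer.
            print("Primary considered down; switch traffic to the secondary server")

        def main():
            failures = 0
            while True:
                if primary_is_up():
                    failures = 0
                else:
                    failures += 1
                    if failures >= FAIL_THRESHOLD:
                        fail_over_to_secondary()
                        failures = 0
                time.sleep(CHECK_INTERVAL)

        if __name__ == "__main__":
            main()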

  • Server Hosting + AWS

    - by ledy
    Since my dedicated servers are hosted at a "normal" hosting service, I wonder if there is a really cheap way to extend the server farm with AWS instances. It seems like an efficient and flexible solution for data storage and for resources for occasional data processing. However, it might be very inefficient to mix two data centres and transfer data from the current web hoster to Amazon and vice versa. In my case, the traffic for this continuous data exchange seems to be expensive, and the delay in moving the data back to the hoster introduces a lag.

    What are best practices for mixing non-AWS and AWS systems? For example: how to move the hoster's data to AWS as log file storage to run Urchin analysis, and/or port the log file data into a big table for more exhaustive analysis there. And after working with the data: how to bring it back to the hoster and use it with the web servers there?

    I am not going to move the whole server farm to Amazon, only "separate" parts or tasks, provided the transfer/exchange does not lead to increased cost.

    Read the article
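
    For the log-file half of the workflow in the question above, shipping the hoster's rotated logs into S3 is straightforward. A minimal sketch using the boto3 library follows; the bucket name, key prefix and log directory are placeholders, and credentials are assumed to come from the standard AWS configuration.

        # Sketch: upload rotated log files from the external hoster to an S3 bucket so they
        # can be processed inside AWS. Bucket, prefix and directory are placeholders;
        # credentials come from the standard AWS config/environment.

        import os
        import boto3

        BUCKET = "example-log-archive"        # placeholder bucket name
        PREFIX = "webserver-logs/"            # key prefix inside the bucket
        LOG_DIR = "/var/log/apache2/rotated"  # placeholder local directory of rotated logs

        def upload_logs():
            s3 = boto3.client("s3")
            for name in sorted(os.listdir(LOG_DIR)):
                if not name.endswith(".gz"):
                    continue                  # only ship compressed, rotated logs
                local_path = os.path.join(LOG_DIR, name)
                s3.upload_file(local_path, BUCKET, PREFIX + name)
                print(f"uploaded {name}")

        if __name__ == "__main__":
            upload_logs()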

  • Concurrent users with Quickbooks?

    - by airietis
    I work at a company with 3 people who regularly use the same QuickBooks file. However, they work remotely on different networks. I need to implement a solution that allows all three of us to access QuickBooks at the same time remotely (and each make changes at the same time). We have a spare desktop PC that can be utilized as a server.

    So, my question is: what is the cheapest and most hassle-free way to solve this problem? I've considered cloud application hosting; however, it is very expensive ($40 per user per month) and we are on a tight budget. Is it possible to install QuickBooks on my own server and have them connect to it remotely? If so, what is the best way to accomplish this: Remote Desktop Protocol, or is there a built-in feature for this in QuickBooks Premier 2013?

    EDIT: As MDMarra mentioned, I am looking for a solution that offers true simultaneous access. Would using a dedicated server and having users connect over a VPN be a viable solution?

    Read the article

  • Are there tools available for trimming PDF margins?

    - by Charles Duffy
    I have an ebook I'm trying to read in PDF format on a Kindle. Unfortunately, the page headers and footers have some content (the page number and copyright info, respectively) that prevents the device from scaling the actual text to match its usable viewing area, leaving the content too small to read. Various tools are available which will trim off whitespace, but the Kindle already does this; my goal, by contrast, is to remove printed matter outside of a defined bounding box, and the only tool I've found for the purpose is moderately expensive commercial software.

    I could probably generate a mask in Inkscape, split out the individual pages using pdftk, apply the mask to each page individually (outputting to PostScript), and recombine the numerous PostScript files into a single PDF. However, these decode/re-encode steps would be pretty unfortunate in terms of document size; something able to operate with a bit more finesse would be ideal. I have all major operating systems handy (Windows, several modern Linux distros, a Mac, etc.), so solutions don't need to be constrained by platform. Suggestions?

    (I've reported the issue to the author, who mentioned it to his editor, who hasn't done anything about it over the course of more than a month, making the zero-work approach evidently unproductive.)

    Read the article
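
    One lighter-weight route than masking in Inkscape is to shrink each page's crop box so the header and footer fall outside it, which leaves the content streams untouched (no decode/re-encode). A sketch using the PyPDF2 library follows; the margin values are placeholder guesses and assume all pages share the same layout.

        # Sketch: trim a fixed margin from every page by shrinking the crop box, leaving the
        # underlying content untouched (no rasterizing or re-encoding). Margins are placeholders.

        from PyPDF2 import PdfFileReader, PdfFileWriter

        TRIM_TOP = 40      # points to cut from the top (header)
        TRIM_BOTTOM = 30   # points to cut from the bottom (footer)

        def trim(in_path, out_path):
            reader = PdfFileReader(open(in_path, "rb"))
            writer = PdfFileWriter()
            for i in range(reader.getNumPages()):
                page = reader.getPage(i)
                lower_left = page.mediaBox.lowerLeft
                upper_right = page.mediaBox.upperRight
                page.cropBox.lowerLeft = (lower_left[0], lower_left[1] + TRIM_BOTTOM)
                page.cropBox.upperRight = (upper_right[0], upper_right[1] - TRIM_TOP)
                writer.addPage(page)
            with open(out_path, "wb") as f:
                writer.write(f)

        if __name__ == "__main__":
            trim("book.pdf", "book-trimmed.pdf")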

  • Server 2012 - transparent SMB failover without shared discs, possible?

    - by TomTom
    Here is the scenario: there is a small set of data (around 200GB) that I HAVE to keep available. These are basically shared VHD images that serve as master images for a lot of our VMs, which then run on differencing disks off those masters. The whole set is "mostly read-only". In more detail: a file that IS there and IS used will NEVER change. I may delete files (when absolutely not in use) and add new files, but a file that is there gets read protection set once and keeps it until it is retired. Obviously, I need as much uptime as possible.

    So far we have run this by keeping the directory locally on every Hyper-V server. Now I am thinking of moving it into our storage fabric. Given the "it HAS to be there" requirement, I pretty much want a share-nothing architecture. DFS would be perfect for this: a file never changes, so replication would work nicely. Folders could be replicated to a number of servers, and all hosts would reference them from there. Now that Hyper-V supports SMB, it could be a good idea to isolate these on a number of servers; we are trying to move towards a scenario where storage is more centralized.

    Server 2012 supports continuously available ("always-on") shares, but it seems that this only works with a clustered disk behind it. Is there any way around that for read-only file stores? All the documentation points to things like a shared JBOD, but that would leave me open to file system corruption. I really plan to go quite separate here, vertically: 2 servers, both with SSDs only for this, both with their own separate 2000W UPS, both with enough bandwidth to handle everything thrown at them (note to everyone thinking this is 10G: that would be SLOW and EXPENSIVE compared to a nice InfiniBand backbone). The real crux is that this is obviously an edge case, as the files are read-only once in use.

    Read the article

  • What is the correct approach i should use for an application that requires amazon S3 uploads and SimpleDB data management?

    - by Luis Oscar
    I am developing an application for iOS, and that is going smoothly; the problem is that I am very new to server-side things. I am totally confused about how to correctly use Amazon Web Services for this purpose. What I want to do is very simple: I want my application to be able to query a servlet hosted on EC2 to retrieve pictures and data, based on some criteria, from S3 and SimpleDB respectively. The application should also be able to upload pictures into an S3 bucket and register the information in SimpleDB.

    My main concerns are security and costs. So far I was using the Amazon Token Vending Machine, but I haven't been successful in customizing it, and while researching I discovered that in the long run it is very expensive. The ultimate goal is to run a "social" picture service for my iOS application: being able to register new users, authenticate those users, and see what permissions they have to which pictures in the bucket, all without having to worry about third parties accessing my users' private pictures.

    Sorry for this question, but I am really clueless about how to handle this. I have tried reading many articles, but all this server stuff looks very scary.

    Read the article
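
    A common pattern for the permission concern in the question above is to keep the S3 bucket private and have the EC2 servlet hand out short-lived pre-signed URLs only for pictures the authenticated user is allowed to see. A minimal sketch with boto3 follows; the bucket, key and permission check are placeholders.

        # Sketch: the server-side component checks the user's permission and, if allowed,
        # returns a short-lived pre-signed URL for a private S3 object. Bucket, key and the
        # permission check are placeholders.

        import boto3

        BUCKET = "example-user-photos"   # placeholder private bucket

        def user_may_view(user_id, photo_key):
            # Stub: look up the photo's owner/sharing rules in SimpleDB (or any datastore).
            return True

        def presigned_photo_url(user_id, photo_key, expires_seconds=300):
            if not user_may_view(user_id, photo_key):
                raise PermissionError("user is not allowed to view this photo")
            s3 = boto3.client("s3")
            return s3.generate_presigned_url(
                "get_object",
                Params={"Bucket": BUCKET, "Key": photo_key},
                ExpiresIn=expires_seconds,
            )

        if __name__ == "__main__":
            print(presigned_photo_url("user-123", "user-123/photos/beach.jpg"))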

  • Workaround for Dell "Power Supply Not Recognised" issue

    - by Haedrian
    So, I have a Dell Inspiron, and the power supply port appears to be damaged. Basically, when I plug the adapter in I get a nice popup telling me that the machine couldn't detect that it's a Dell power supply, so it won't charge the battery and it underclocks the system. It still works for other purposes (that is, providing power). I thought it was the actual power supply cable, so I bought a new one; that worked for a while, provided I inserted it at JUST THE RIGHT angle. But now that's not working any more, so I assume it's the part which connects to the computer. The battery charging I can live without; the underclocking I can't. I'd like a way around this issue.

    Things I've tried:

    - Updating the BIOS
    - Replacing the power supply cable
    - Inserting it at different angles
    - Turning it off and on again
    - Swearing at it
    - Twisting it while inserting it

    So, is there a workaround somehow? I'd like to avoid taking out my soldering kit and risking permanent damage to expensive equipment, if that's all right. I'm hoping for a software solution.

    Added: the exact model is a Dell Inspiron N5010.

    Read the article

  • Efficient mirroring of directories using hardlinks

    - by zoqaeski
    I'm backing up my music collection on to a number of NTFS-formatted external hard drives; however, as I store my main collection in FLAC and keep the library on my laptop as MP3s to save space, I want to be able to back up both sets, because mass conversion between formats is time-consuming. The "music" directory can contain any format; the "mp3s" directory contains only MP3s converted from files in the "music" directory. The music collection on the laptop contains only MP3s, but they come from both sources. When I back up my laptop's library to the "mp3s" directory, I want to copy across only the MP3 files that don't exist in the "music" directory; those that do should be hard-linked to the "music" directory. All directories have an identical hierarchy, sorted by artist, album, date, disc number if applicable, etc., and I use a tagging editor to ensure consistency across all these locations. I'm using a Linux computer, but keep the music collections on NTFS-formatted partitions so that they are readable by both Linux and Windows.

    At the moment, I use the following command to perform the backups, but it is time-consuming due to the expensive nature of finding hard links:

        rsync -avu --progress --relative --ignore-existing --link-dest=../music/ **/*.mp3 /media/ntfspocket/mp3s

    Is there a way to perform this backup more efficiently, taking advantage of the directory hierarchy?

    Read the article
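
    If the rsync --link-dest pass above stays too slow, the same policy can be expressed directly, since the hierarchies are identical: walk the laptop library, hard-link any MP3 whose relative path already exists under "music", and copy the rest. A minimal sketch follows; the three paths are placeholders, and hard links require that "mp3s" and "music" live on the same filesystem (ntfs-3g generally supports creating them).

        # Sketch: mirror the laptop's MP3 library into 'mp3s/', hard-linking any file whose
        # relative path already exists under 'music/' and copying the rest. Paths are
        # placeholders; 'mp3s' and 'music' must be on the same filesystem for hard links.

        import os
        import shutil

        LAPTOP = "/media/laptop/Music"            # source: MP3-only library
        MUSIC = "/media/ntfspocket/music"         # master collection (FLAC + MP3)
        MP3S = "/media/ntfspocket/mp3s"           # destination mirror

        def mirror():
            for root, _dirs, files in os.walk(LAPTOP):
                rel_dir = os.path.relpath(root, LAPTOP)
                os.makedirs(os.path.join(MP3S, rel_dir), exist_ok=True)
                for name in files:
                    if not name.lower().endswith(".mp3"):
                        continue
                    rel_path = os.path.join(rel_dir, name)
                    dest = os.path.join(MP3S, rel_path)
                    if os.path.exists(dest):
                        continue                              # already mirrored
                    master = os.path.join(MUSIC, rel_path)
                    if os.path.exists(master):
                        os.link(master, dest)                 # same file exists in music/: hard link
                    else:
                        shutil.copy2(os.path.join(root, name), dest)  # laptop-only file: copy

        if __name__ == "__main__":
            mirror()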

  • Why still use JPG compression? [closed]

    - by Torben Gundtofte-Bruun
    Back when the JPG image format was introduced, it made a lot of sense to reduce the file size, even accepting a loss in image quality, because files were being downloaded over slow and expensive modem connections. In today's world, file size is no longer a concern, at least not regarding JPG, where it seems silly to save 45kB on a photo. But my image editing apps still prompt me for the desired compression level when I save a file. Does it still make sense to go with the default of 85? Why should I not crank it up to 100 for all files?

    Update based on comments: for web work, I might use PNG instead. But every smartphone and camera produces JPG files; the question arises when I save my edits. The audience is my own hard disk. We're talking photos, 2-5MB apiece. Chroma subsampling, DCT: sorry, never heard of them. I'm a home user, not a Photoshop guru. For the record, I use Paint Shop Pro on Windows and GIMP on Linux.

    Read the article
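
    To see what the quality setting in the question above actually costs in bytes, a quick experiment like the following (using the Pillow library) re-saves one photo at several quality levels; file size typically grows steeply above about 90 while the visible difference stays small. The input filename is a placeholder.

        # Sketch: re-save one photo at several JPEG quality levels and compare file sizes.
        # 'photo.jpg' is a placeholder; requires the Pillow library.

        import os
        from PIL import Image

        SOURCE = "photo.jpg"

        img = Image.open(SOURCE)
        for quality in (75, 85, 95, 100):
            out = f"photo_q{quality}.jpg"
            img.save(out, "JPEG", quality=quality)
            print(f"quality {quality:3d}: {os.path.getsize(out) / 1024:8.1f} kB")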

  • Best way to restrict access to a folder in Dropbox

    - by Joe S
    I currently run a business with around 10 staff members, and we currently use Dropbox Pro 100GB to share all of our files. It works very well and is inexpensive; however, I am taking on a number of new staff and would like to move the more sensitive documents into their own, protected folder. Currently, we all share one Dropbox account. I am aware that Dropbox for Teams supports this, but it is far too expensive for us as a small company. I have researched a number of solutions:

    1) Set up a new standard Dropbox account just for use by management, which will contain all of the sensitive documents, and join the shared folder of the rest of my team to access the rest of the documents. As I understand it, this is not possible with a free account, as any Dropbox shared folder added to your account counts against your quota.

    2) Set up some sort of TrueCrypt container, install TrueCrypt on each trusted staff member's machine, and store the documents inside that. Would this be difficult to use? I'd imagine the syncing would not work so well, as the disk would technically be mounted at the time of use and any change would be a change to the whole container rather than to individual files.

    I was just wondering if anyone knows a way to do this without the drawbacks outlined above? Thanks!

    Read the article

  • Two cisco gigabit switches refuse to link to each other. Any idea why?

    - by Prody
    I have 3 Linksys SR2024 switches, which are basically unmanaged 24-port Gbit switches with 2 mini-GBIC slots. I now had to add another switch to the network, and my provider didn't have the SR2024 any more, so I got the Cisco SLM2024, which was a bit more expensive. It's pretty much the same thing but with management (which I don't need).

    So I've connected the SLM2024 to an SR2024 via a Cat6 cable, and for some strange reason I get no link. If I connect any machine with a Gbit NIC to either switch, it links at 1Gbit, autonegotiated. If I connect the SLM2024 to a non-Gbit switch (I have a cheap 4-port ASUS switch), it links just fine at 100Mbit full duplex. Since the SLM2024 has management, I've checked whether something is misconfigured on its side, but it's not: it advertises 1Gbit and lower (hence the machines connecting successfully at 1Gbit). Since the SR2024 that I'm trying to connect it to also links successfully with another SR2024 and with other Gbit machines, it must advertise Gbit too. But for some reason, when I link the SR2024 to the SLM2024, I get no link. Please note that I've properly tested the cable. Does anyone have any idea what's wrong?

    Read the article

  • Is there such a thing as a file hosted container which deduplicates data held within?

    - by Mallow
    Background: I have backups of a website which stores all of its data in a single file. This file is several gigabytes large, and I have many different backups of it. Most of the data within is the same from backup to backup, plus whatever was added or changed. I want to keep all the backups I've made through the years in case I find a horrible surprise of data corruption along the line. However, storing a 10GB file every month gets expensive.

    Seeking a solution: I've often thought about different ways of alleviating this problem. One thought that comes up very often combines the idea of a deduplicating file system with a container that doesn't require its own partitioned volume on a hard drive, something like what TrueCrypt does with what it calls "file-hosted containers", which the TrueCrypt program lets you mount and dismount as if they were regular hard drives.

    Question: is there a virtual hard drive mounter which uses a file-hosted container backed by a deduplicating file system? (This question is a little awkward to put into words; if you have a better idea of how to ask it, please feel free to help out.)

    Read the article
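
    To make the idea in the question above concrete: deduplicating stores typically split the data into chunks, hash each chunk, and keep only one copy per hash, so each backup reduces to an ordered list of chunk hashes. The sketch below shows the principle with fixed-size chunks and SHA-256; it is an illustration of the technique, not a mountable container.

        # Sketch of the deduplication idea: split a backup file into fixed-size chunks,
        # hash each chunk, and store every unique chunk only once. A backup is then just
        # an ordered list of chunk hashes. Illustration only, not a mountable container.

        import hashlib
        import os

        CHUNK_SIZE = 1024 * 1024   # 1 MiB fixed-size chunks

        def store_backup(path, chunk_dir):
            os.makedirs(chunk_dir, exist_ok=True)
            manifest = []                      # ordered chunk hashes describing this backup
            new_bytes = 0
            with open(path, "rb") as f:
                while True:
                    chunk = f.read(CHUNK_SIZE)
                    if not chunk:
                        break
                    digest = hashlib.sha256(chunk).hexdigest()
                    chunk_path = os.path.join(chunk_dir, digest)
                    if not os.path.exists(chunk_path):      # only unique chunks hit the disk
                        with open(chunk_path, "wb") as out:
                            out.write(chunk)
                        new_bytes += len(chunk)
                    manifest.append(digest)
            return manifest, new_bytes

        if __name__ == "__main__":
            manifest, new_bytes = store_backup("site-backup.dat", "chunks")
            print(f"{len(manifest)} chunks referenced, {new_bytes / 1e6:.1f} MB of new data stored")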
