Search Results

Search found 34580 results on 1384 pages for 'technology is good'.


  • Should I use /etc/bind/zones/ or /var/cache/bind/?

    - by nbolton
    Each tutorial seems to have a different opinion on this. For my ISC BIND zones, should I use /etc/bind/zones/ or /var/cache/bind/? In my last install I used /var/cache/bind/, but only because I was guided to do so; however, I just spotted a pid file in there on this new Debian install, so I figured that using the "working directory" to store zone files probably wasn't the best idea.

    It seems that many admins store zone files in the working directory so they don't have to type the full path when declaring a new zone: file "db.foobar.com"; is obviously easier to type than file "/etc/bind/zones/db.foobar.com"; but is it good or bad practice? Some may also suggest setting the working directory to /etc/bind/zones: options { // directory "/var/cache/bind"; directory "/etc/bind/zones"; } ... but something tells me this isn't good practice either, since I assume the pid file would then be created there (unless it just ends up in /var/cache/bind by coincidence). I took a look at the manpage but it didn't seem to say what the directory option is for. Any idea exactly what it was designed for?
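
    For what it's worth, a common Debian-style arrangement keeps named's working directory at /var/cache/bind (for runtime data such as journal files and slave zone transfers) while master zone files live under /etc/bind/zones and are referenced by full path. A minimal sketch, with the zone name as a placeholder:

        options {
            directory "/var/cache/bind";              // working directory for runtime files
        };

        zone "foobar.com" {
            type master;
            file "/etc/bind/zones/db.foobar.com";     // explicit path, independent of 'directory'
        };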

    Read the article

  • Hosting a JavaScript API file for third-party sites the way ShareThis, UserVoice and Analytics do it

    - by Dayson
    I'm preparing to launch a service soon which will provide third-party websites with a widget. The widget requires my JavaScript file in the website's code, exactly the same way services like Analytics, UserVoice, ShareThis, GetClicky, etc. provide you with a JavaScript snippet to add to your page. Therefore, my JavaScript file is going to be hotlinked by tons of websites, which possibly receive a lot of requests too. I need advice/opinions on the following aspects:

    1. What's the right location for hosting this file? Should I use a sub-domain for it? I was thinking of something like http://api.myservice.com/js/foo.js . Remember, once websites start embedding this file, its location CANNOT change under any circumstances.
    2. Right now we can afford just one dedicated server, so I have minified the file, enabled gzip and plan to use some good cache-control headers through Apache (a rough Apache sketch follows below). Also, in the near future when the requests pick up, I will use an HTTP proxy like Varnish. Is this a good plan for the near future?
    3. Should I be considering a CDN in the future (since we can't afford it now)? If so, how do I make sure we're prepared to migrate to it without breaking services? What are the pros/cons of moving just this file to a CDN? Also, since it's just one JavaScript file (50kb), is there an affordable CDN we could consider from the beginning?
    4. Any other word of advice I could use? Anything I shouldn't overlook at this stage which I would regret later? (both in terms of the server and JavaScript/AJAX limitations) Thanks in advance.
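
    A hedged Apache sketch of the caching/compression part, assuming mod_headers and mod_deflate are enabled; the filename and max-age are placeholders to tune:

        <FilesMatch "^foo\.js$">
            # let clients and proxies cache the widget script for a day
            Header set Cache-Control "public, max-age=86400"
        </FilesMatch>
        # compress on the fly; the MIME type may be text/javascript on older setups
        AddOutputFilterByType DEFLATE application/javascript text/javascript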

    Read the article

  • Managing Many External Hosts Using EC2 and Route 53

    - by futureal
    Looking for a "best practice" answer to managing externally-addressable hosts using the combination of Amazon EC2 and Amazon Route 53, without using Elastic IPs for each host. In my scenario I will have 30+ hosts that need to be accessible from outside EC2, so directly using internal DNS will not work. In the past, I have addressed hosts by assigning an Elastic IP to each host (let's say, 55.55.55.55) and then creating an associated A record. For example, to create "ec2-corp01.mydomain.com" I might do: ec2-corp01.mydomain.com. A 55.55.55.55 300. Then on that EC2 instance, I would assign the Elastic IP of 55.55.55.55, and everything works fine. Of course, to make this work, I need one Elastic IP per instance, which is something I'd like to avoid if possible; I'd like the infrastructure to be more dynamic. So my thought is to try something like this (a rough sketch follows below):

    - Create a script that queries the internal EC2 tools to determine the instance's private hostname.
    - On instance boot, call that script to determine its hostname, and then use the command-line Route 53 interface to find and update that record to the current hostname.
    - Since the record will have a relatively low TTL (let's say 300 seconds as above, or 5 minutes), the change should take effect pretty quickly.

    Is this a good idea? Is there a better or more widely accepted way to handle it? If it IS a good idea, what type of record should I be creating? A CNAME that points to the internal host, like ec2-55-55-55-55.compute-1.amazonaws.com? Is an A record better or worse? Thanks!
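
    A rough sketch of that boot-time update, assuming the aws CLI (any Route 53 command-line tool would do) and a hypothetical hosted-zone ID; it queries public-hostname since the name must resolve from outside EC2 (swap in local-hostname if the private name is really what you want):

        #!/bin/sh
        # sketch only: point ec2-corp01.mydomain.com at this instance's current hostname
        ZONE_ID="ZEXAMPLE12345"         # hypothetical hosted-zone ID
        NAME="ec2-corp01.mydomain.com."
        TARGET=$(curl -s http://169.254.169.254/latest/meta-data/public-hostname)

        # build the change batch and push it to Route 53
        printf '{"Changes":[{"Action":"UPSERT","ResourceRecordSet":{"Name":"%s","Type":"CNAME","TTL":300,"ResourceRecords":[{"Value":"%s"}]}}]}' \
            "$NAME" "$TARGET" > /tmp/r53-change.json

        aws route53 change-resource-record-sets --hosted-zone-id "$ZONE_ID" \
            --change-batch file:///tmp/r53-change.json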

    Read the article

  • SSD - Tweaks/Recommended Configuration

    - by Miky D
    I've just purchased my first SSD drive (a 32GB MLC from Imation) without doing enough research ahead of time, in the spirit of giving the new technology a shot and getting myself up to speed by empirical research rather than reading countless reviews, and I'm now at a crossroads. I've built a new server to test the new drive. At first I wanted to test it with Windows Server 2003 R2 x86, but after I loaded the OS it had problems loading the motherboard drivers, so I went to the internet and did more research, and the more I read the more confused I got. Finally I decided to try Windows Server 2008 R2 x64, since it supposedly has certain support for SSD drives inherent in the NT 6.1 core. Indeed I've had much better luck with the new OS and got all the drivers installed, but now I still have some questions:

    1. Should I set the drive to IDE emulation or AHCI in the BIOS?
    2. Should I make any other changes in the BIOS? (I've read on the internet that Write Through should be changed to Write Back.)
    3. Should I make any other adjustments in Windows (i.e. tweaks such as disabling prefetching or disabling the Last Accessed timestamp on the filesystem), and if so, is there a good/reliable online resource with instructions? (A short sketch follows below.)

    I'm so tired of reading through countless online posts which spend 80% of their coverage on the history of SSDs, benchmarks, and explanations of how SSDs work. I got that; now I'd like to know if there's anything I should actually do to make sure Windows Server 2008 R2 makes good use of the SSD.
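
    On the Windows side specifically, a couple of commonly cited tweaks can be applied from an elevated command prompt; this is only a hedged sketch, so verify the effect on your own workload before keeping either change:

        REM stop updating the NTFS last-access timestamp on every read
        fsutil behavior set disablelastaccess 1

        REM disable the SuperFetch service (SysMain), often recommended for SSDs
        sc config SysMain start= disabled
        net stop SysMain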

    Read the article

  • Free or Open Solution for Storing and Charting CSV data

    - by rrrfusco
    I'm presently storing CSV files, combining them, opening them in OpenOffice, creating pivot tables and then generating charts from the spreadsheet. I've looked at OOBase, but appending CSV files to Base is clunky for some reason. SQLite seems like a good database solution, but I haven't found a good charting program that connects to it with ease. Although OpenOffice (or LibreOffice) maintains the references and allows you to update the information, this process is far from efficient. There are too many steps, and it seems one program should handle all of these tasks. A better program would be more intuitive, allow you to simply add inserts into a database, and include an interface for standard charting settings.

    EDIT: The question "Simplest Automated Analysis and Chart Generation Tool?" has an answer that references Spotfire and Tableau, which have free 14- and 30-day trials respectively. Each program is nicely streamlined and designed. I'm looking for a program between this quality and LibreOffice. Can you recommend a better open or free desktop solution for Windows?
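
    If SQLite does end up as the store, the append/import step at least can be scripted without a spreadsheet in the loop; a small hedged sketch using the sqlite3 command-line shell, with file and table names as placeholders:

        # import (or append) a CSV file into a table; the table is created from the
        # first row's column names if it does not exist yet
        printf '.mode csv\n.import measurements.csv measurements\n' | sqlite3 data.db

    Charting would still need a separate front end that can read SQLite, which is really the gap being described here.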

    Read the article

  • NIS: which mechanism hides shadow.byname from unprivileged users?

    - by Mark Salzer
    On a Linux box (SLES 11.1) which is a NIS client, I can run, as root: ypcat shadow.byname and get output, i.e. some lines with the encrypted passwords, among other information. On the same Linux box, if I run the same command as an unprivileged user, I get "No such map shadow.byname. Reason: No such map in server's domain".

    Now I am surprised. My good old knowledge says that shadow passwords in NIS are absurd, because there is no access control or authentication in the protocol, and thus every (unprivileged) user can access the shadow map and thereby obtain the encrypted passwords. Obviously we have a different picture here. Unfortunately I don't have access to the NIS server to figure out what is happening. My only guess is that the NIS master gives the map only to clients connecting from a privileged port (<1024), but this is only an uneducated guess.

    What mechanisms are there in current NIS implementations that lead to behavior like the above? How "secure" are they? Can they be circumvented easily? Or are shadow passwords in NIS as secure as the good old shadow files?
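
    For what it's worth, on Linux ypserv the usual mechanism is a per-map access rule in /etc/ypserv.conf, and stock configurations commonly ship with exactly this restriction; a hedged sketch of the relevant lines (format: host:domain:map:security, where "port" means the request must come from a privileged port, so an ordinary user's ypcat is refused while root's succeeds):

        # /etc/ypserv.conf
        *        : *       : shadow.byname         : port
        *        : *       : passwd.adjunct.byname : port

    It is only a modest protection: anyone who is root on any host that can reach the server, or who can bind low ports from their own machine on the same network, can still pull the map.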

    Read the article

  • Laptops with easy heat sink service?

    - by Niten
    Can you recommend a current laptop model with easy heat sink access – or better yet, a removable air intake filter – making it easy to periodically clean out the dust and lint that always packs up in these things? Every laptop I've owned has eventually overheated on account of a clogged heat sink. (I suppose it doesn't help that I have a cat who loves to hang out where I'm working, or that my laptop is almost always running.) One of the things I really love about my current system, a Dell Inspiron 1420n, is how easy it is to service its cooling system: whenever I notice the fan starting to work harder and the CPU temperature climbing higher than it should be, I merely have to unscrew a single panel from the bottom of the machine, clean out the heat sink, and then I'm good for another few months. Which current models of the "business laptop" variety offer similar easy cooling system service? I'm looking for something roughly along the lines of:

    - 14- or 15-inch display
    - Nehalem-based CPU
    - Solid construction – magnesium chassis or better (like the Inspiron)
    - TPM (for BitLocker) ideal, but not mandatory
    - Docking adapter ideal, but not mandatory
    - Good battery life

    For example, the ThinkPad T410 would have been my top choice, but it seems like it would be a serious chore to service its heat sink. For the current MacBook Pros it looks downright impossible. No matter how nice the laptop is in other respects, it'll be of no use to me when it's overheating. So, any suggestions? Thanks in advance... (I'm constantly surprised that customers and manufacturers don't pay more attention to this feature, at least in the business laptop subcategory. In the last couple months I've fixed two friends' laptops which were also overheating due to clogged cooling systems; clearly I'm not the only one affected by this.)

    Read the article

  • INFORMIX - listener thread err 25582

    - by Samuel Lao
    I've been digging through different forums over the last 7 days looking for a possible solution. Our database is Informix, running on a Linux server (SUSE Linux 11). Suddenly, last Saturday, Informix began to show an error message: "listener-thread err=-25582 oserr=0, network connection is broken". End users started to call, reporting slow network performance to this server and moments where the database application lost its connection with the server, so we pinged the DB server and got good responses (1ms) without losing packets. I tried telnet (ipserver) 1526, which is Informix's port for the application, and it works. We had to take the server offline and bring up a backup DB server located at another branch. It has been working, but only in a limited way, because the backup server doesn't have good specs (it is an old Dell server model). So, I scanned the main server for viruses using Trend Micro ServerProtect and it didn't find anything (no viruses or spyware). I also checked the switches and routers, but I haven't found anything strange. What else could it be? Thanks in advance for your time and help with this issue. I would really appreciate any advice.

    Read the article

  • Dell PE2950 - slow IO rates for writing and reading locally

    - by OrenM
    I'm having a serious issue with a Dell PE2950 server. The server has really slow IO rates, so slow that I'm not able to use it anymore. I tried a few things to solve this:

    - changing the disks to new disks (configured as RAID 1)
    - changing the PERC card and PERC cables
    - reinstalling the OS (of course, I had to because of the disk change), CentOS 5.5 x64
    - firmware updates to everything
    - virtual disk policy: No Read Ahead, Write Back, disk cache policy disabled

    OpenManage doesn't alert about anything. I also ran Dell's diagnostic tests and everything passed, and Dell didn't see anything in the DSET log. Dell offered to reseat everything, including the CPU; we did that as well, and the IO rates are still slow. I have several PE2950 servers, and I have never had such a thing with any of those. All have similar or identical hardware to this one, all configured the same, with the same OS (CentOS 5.5 x64), same disks, same RAID, same policy. Just for comparison, the problematic PE2950 server:

    [root@bad ~]# time sh -c "dd if=/dev/zero of=/tmp/ddfile bs=8k count=200000 && sync"
    200000+0 records in
    200000+0 records out
    1638400000 bytes (1.6 GB) copied, 27.7946 seconds, 58.9 MB/s
    real    0m33.968s
    user    0m0.531s
    sys     0m26.000s

    A good PE2950 server (with the exact same hardware):

    [root@good ~]# time sh -c "dd if=/dev/zero of=/tmp/ddfile bs=8k count=200000 && sync"
    200000+0 records in
    200000+0 records out
    1638400000 bytes (1.6 GB) copied, 3.19999 seconds, 512 MB/s
    real    0m7.694s
    user    0m0.053s
    sys     0m4.057s

    Hopefully you will have an idea of what can cause the problem.
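
    Since the symptoms look a lot like the Write Back policy silently falling back to Write Through (which controllers typically do when the RAID battery is failed or still charging), it may be worth asking the controller what policy is actually in effect. A hedged sketch using LSI's MegaCli tool; the binary name and path vary by package, and Dell's omreport can show the same information:

        # cache policy currently in effect for each virtual disk
        /opt/MegaRAID/MegaCli/MegaCli64 -LDInfo -Lall -aALL | grep -i 'cache policy'

        # battery backup unit state; a bad BBU usually forces Write Through
        /opt/MegaRAID/MegaCli/MegaCli64 -AdpBbuCmd -GetBbuStatus -aALL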

    Read the article

  • Is it possible to be a professional studying on your own?

    - by Marc Jr
    I read economics at university (nothing to do with Linux, is it? :P). I have some basic knowledge about the boot process, compiling the Linux kernel from source, and stuff like that, but of course I still have much to learn; sometimes errors appear and, "voila", I am lost. I have had Ubuntu, Fedora, openSUSE and Arch, and I'm using Gentoo now. I'd like to know what you Linux users, professionals, administrators... think is the best way to learn Linux in a professional way. Is studying it on my own and passing the LPIC tests enough to work in the Linux world, or do I need to go to an IT university? I've heard LFS is a good way of learning about Linux; is that true? I've been thinking about trying LFS to learn more deeply about how Linux works and to learn scripting. Is it possible to do it this way? If anyone has a tip, a good way of doing it, or has done it themselves, any tip is very welcome. Words from a person in love with Linux. :D The best, Marc

    Read the article

  • Centralized Windows/Mac Patch Management that is easy to use

    - by BiggsTRC
    I'm looking for advice on which patch management solutions you would recommend based upon your experience, and also which ones you would not recommend. We have a mixed network of Windows and Mac clients. Our central servers are all Windows servers, although I have considered putting in a Mac server to better handle our Mac clients. The issue we are facing currently is that we need to maintain the patches on all of our third-party applications. Right now we use WSUS, which handles patching of Windows and some Microsoft products, but that is about it. I need something to cover the other applications, specifically things like Adobe products (Reader, Flash, Dreamweaver, etc.). Our network isn't that big (maybe 200 clients) and I don't have a person to dedicate just to patching and maintaining a patch management solution. Thus very large and complicated solutions like System Center are most likely out. I have recently been looking at Dell's KACE K1000 solution (http://www.kace.com/products/systems-management-appliance/). It seems simple and it provides a lot of tools in one package that I would like/need as well. I like the fact that it is self-contained in an appliance and that it is designed for environments like mine. However, I'm not sure if this is the best solution. I've also looked some at Shavlik's NetChk solution (http://www.shavlik.com/netchk-protect.aspx), but I don't need an anti-virus product. However, it looks like they might have a very good patch database. My question is this: What are your thoughts on these two products? Are there better products out there? Are there issues that I'm not considering? I want something that is very good at patching a broad range of products, that is simple to use, that takes a minimal amount of management (like WSUS), and that (hopefully) works with Mac and Windows.

    Read the article

  • New virtualization project and old SAN

    - by Chris
    Hi, we'll shortly start a partial virtualization of our infrastructure and consolidate a dozen servers into virtual instances. We'll also add some client application virtualization into the mix for good measure. Two HP DL380s with the new Xeon 56xx CPUs and 96 GB of memory each, running XenServer + XenApp, will then take charge of most of our IT needs. So far, so good. One element that is missing from the picture is the storage part. We need some sort of shared storage to enable live motion and other HA features. We have an IBM DS4300 SAN that we can use for that, but since it has been in production since 2005, I'm not sure about giving such a critical role to a 5-year-old part. So my question is: what is the reliability of this kind of equipment after 5 years? Can it last 10 years with no or few problems? Since our budget is tight, not buying another SAN would be a big plus. This leads me to another question: FC disks cost an arm and a leg from IBM. When I type the replacement part number into Google (for example IBM 300GB 15K 4GBPS FC HDD 42D0410), I can find it at a fraction of the price at various sites. So am I stupid to buy from IBM, or naive to trust a 3rd-party reseller? Thanks, Chris

    Read the article

  • Is it possible to detect nearby Wi-Fi enabled devices, not necessarily on the same network? [closed]

    - by Sky
    First question on StackExchange ever; I hope I got the right board. I'm trying to create a device (either from a standard AP or some other unconventional means) that will be able to detect nearby Wi-Fi enabled devices. For example, if a cellular phone (an iPhone, for instance) were carried into the secured area, its MAC address would be logged. A cellular phone is a good example because it's the most common threat that should be detected. Some important points:

    - The detection can be either active or passive, it doesn't matter.
    - The detected device might be connected to a different network, or might not be connected to anything at all. I assume most cellular phones are actively probing when not connected, but I'm not sure.
    - It is important not only to identify the breach, but also to identify the device (MAC address).
    - Conventional hardware is only optional.
    - Detection distance should be at least 6 meters (20 feet).
    - Handling one device at a time is good enough.
    - Speed of detection is important; under 5 seconds is ideal.

    So my question is, is this even possible? If so, what can I use in order to make this a reality? Thank you for reading!
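
    As a feasibility check: most Wi-Fi clients broadcast probe-request frames even when unassociated, and a card in monitor mode can log the source MAC of every such frame. A rough sketch with the aircrack-ng tools and tcpdump; interface names are assumptions, and range and detection speed would need testing with your own antennas:

        # put the card into monitor mode (creates an interface such as mon0)
        airmon-ng start wlan0

        # option 1: airodump-ng lists unassociated clients and the networks they probe for
        airodump-ng mon0

        # option 2: capture only probe-request frames, with their source MAC addresses
        tcpdump -i mon0 -e -s 256 type mgt subtype probe-req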

    Read the article

  • bind9 DLZ/MySQL on Ubuntu segfaults in libmysqlclient.so

    - by Theos
    I have a big problem. I installed the BIND 9 nameserver on three different computers: two Ubuntu 10.04.4 LTS, and one Ubuntu 11.10. I compiled versions 9.7.0, 9.7.3 and 9.9.0 with this method:

    ./configure --prefix=/usr --sysconfdir=/etc/bind --localstatedir=/var \
        --mandir=/usr/share/man --infodir=/usr/share/info \
        --enable-threads --enable-largefile --with-libtool --enable-shared --enable-static \
        --with-openssl=/usr --with-gssapi=/usr --with-gnu-ld \
        --with-dlz-mysql=yes --with-dlz-bdb=no \
        --with-dlz-filesystem=yes --with-geoip=/usr
    make
    make install

    After setting up DLZ/MySQL, the BIND server works perfectly for 5-30 minutes, and then I get a segfault. I have temporarily worked around the problem with a simple process watchdog that restarts named when it stops, but this is not a good long-term solution. My log output is:

    messages:
    Apr 13 19:33:51 dnsvm kernel: [    8.088696] eth0: link up
    Apr 13 19:33:58 WATCHDOG: named not running. Restarting
    Apr 13 19:35:08 dnsvm kernel: [   87.082572] named[1027]: segfault at 88 ip b71c4291 sp b5adfe30 error 4 in libmysqlclient.so.16.0.0[b714e000+1aa000]
    Apr 13 19:35:08 WATCHDOG: named not running. Restarting
    Apr 13 19:35:08 dnsvm kernel: [   87.457510] named[1423]: segfault at 68 ip b71d6122 sp b52f0a40 error 4 in libmysqlclient.so.16.0.0[b7160000+1aa000]
    Apr 13 19:35:09 WATCHDOG: named not running. Restarting
    Apr 13 19:41:56 dnsvm kernel: [  494.838206] named[1448]: segfault at 88 ip b731c291 sp b5436e30 error 4 in libmysqlclient.so.16.0.0[b72a6000+1aa000]
    Apr 13 19:41:57 WATCHDOG: named not running. Restarting
    Apr 13 19:57:26 dnsvm kernel: [ 1424.023409] named[2976]: segfault at 88 ip b72d1291 sp b6beee30 error 4 in libmysqlclient.so.16.0.0[b725b000+1aa000]
    Apr 13 19:57:26 WATCHDOG: named not running. Restarting
    Apr 13 20:11:56 dnsvm kernel: [ 2294.324663] named[6441]: segfault at 88 ip b7357291 sp b6473e30 error 4 in libmysqlclient.so.16.0.0[b72e1000+1aa000]
    Apr 13 20:11:57 WATCHDOG: named not running. Restarting

    syslog: http://pastebin.com/hjUyt8gN

    The first server is a native, normal x64 server (Ubuntu 10.04 LTS), the second is virtualised (Ubuntu 11.10), and the third is also virtualised (10.04 LTS). These servers only provide DNS, backed by a MySQL database. The problem occurs on all servers and with all BIND versions. named.conf: http://pastebin.com/zwm1yP7V Can anybody help me, or offer any good ideas?
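
    One thing that may be worth checking: the BIND DLZ MySQL driver's documentation has long warned that libmysqlclient is not thread-safe the way the driver uses it, and that BIND should be built without threads when the MySQL driver is enabled; a crash inside libmysqlclient.so after a few minutes of queries would fit that. A hedged rebuild sketch, same options as above but with threading disabled:

        ./configure --prefix=/usr --sysconfdir=/etc/bind --localstatedir=/var \
            --mandir=/usr/share/man --infodir=/usr/share/info \
            --disable-threads --enable-largefile --with-libtool --enable-shared --enable-static \
            --with-openssl=/usr --with-gssapi=/usr --with-gnu-ld \
            --with-dlz-mysql=yes --with-dlz-bdb=no \
            --with-dlz-filesystem=yes --with-geoip=/usr
        make && make install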

    Read the article

  • mpirun -np N, what if N is larger than my core number?

    - by Daniel
    Say I have a 4-core workstation; what would Linux (Ubuntu) do if I execute mpirun -np 9 XXX?

    Q1. Will all 9 processes run immediately together, or will they run 4 after 4?
    Q2. I suppose that using 9 is not good because of the leftover 1. Will it confuse the machine at all, or does the "head" of the computer decide which of the 4 cores the extra process runs on? Or is it picked randomly? Who decides which core to use?
    Q3. If I feel my CPU is not bad, my RAM is okay and large enough, and my case is not very big, is it a good idea, in order to fully use my CPU and RAM, to run mpirun -np 8 XXX, or even mpirun -np 12 XXX?
    Q4. Who decides all of this efficiency optimization: Ubuntu, Linux, the motherboard, or the CPU?

    Your enlightenment would be really appreciated.
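
    A hedged sketch of what this looks like in practice, assuming Open MPI (the program name is a placeholder). By default the kernel scheduler simply time-slices the 9 ranks across the 4 cores, which normally just makes each rank slower for CPU-bound codes:

        # how many logical cores the OS sees
        nproc

        # launch one rank per core
        mpirun -np "$(nproc)" ./my_mpi_program

        # oversubscribing is possible (newer Open MPI versions may want an explicit
        # --oversubscribe flag), but it rarely helps unless the ranks spend time waiting on I/O
        mpirun -np 9 ./my_mpi_program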

    Read the article

  • Http-Only cookies in WebLogic: what versions support them/how and why are they supported?

    - by John
    We want to make all cookies set by our webapp HTTP-only. I only have a basic understanding of the benefits of doing this, but I'm told by security people that it's a Good Thing (tm). Our app is running under JDK 1.6.05 and WebLogic 10.3.0. After way too much digging around Oracle's website for documentation, I've found good evidence that the first version of WebLogic to support HTTP-only cookies is 10.3.1. By "support," I mean the cookie-http-only deployment-descriptor element. Before we go about upgrading, it'd be nice to have these questions answered:

    1a) Is it accurate that WL 10.3.1 is the first version to support HTTP-only cookies and that we're out of luck with 10.3.0?
    1b) If we do indeed need to upgrade, is there an easy way to do so under Windows? I've heard people mention an "upgrade jar" that you just stick in the classpath, but I can't find any mention of this by Oracle. Does an easy way exist, or do we need to do a full install of the new version?
    2) What does the cookie-http-only deployment-descriptor element do when enabled? Will it ensure all cookies set by the application have an http-only=true attribute? Will it do more or less? Is there anything I'll have to do programmatically?
    3) Is there anything in general I should know about HTTP-only cookies, getting my web app to take advantage of them, or other security concerns?
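
    For reference, on a version that has the element, the descriptor change itself is small; a hedged weblogic.xml sketch (verify the element name and placement against the schema for your exact release):

        <!-- weblogic.xml: ask the container to mark its session cookie HttpOnly -->
        <session-descriptor>
            <cookie-http-only>true</cookie-http-only>
        </session-descriptor>

    Note that HttpOnly is just an attribute appended to the Set-Cookie header, so cookies your own code sets can always carry it by writing that header explicitly, independent of the container version.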

    Read the article

  • LSI 9260-8i w/ 6 256gb SSDs - RAID 5, 6, 10, or bad idea overall?

    - by Michael Pearson
    We're provisioning a new production server for our reasonably busy website. Our chosen host offers a six-drive configuration with an LSI 9260-8i card. My initial thought was to fill all six bays with SSDs (Intel 520 256GB) and set them up in RAID. Good, bad, or terrible idea? Can the card handle it? Should we be using RAID 5, 6 or 10? This would be the first time the provider has filled all six slots of this rackmount with SSDs, so they're a bit hesitant. I'm wondering if somebody else with this card has done something similar in a production environment. We do about 43GB of writes per day and currently use about 300GB of storage. The server acts as webserver, database, and image store for approximately 1 million files. The plan is to underprovision the SSDs by approximately 10% to 20% to increase their overall lifespan and performance. The fallback option is 2x 480GB SSDs in RAID 1 plus another 2x 1TB HDDs in RAID 1. The motivation behind this is that the server rental cost difference between 2 SSDs and 6 SSDs is minimal compared to the overall cost of the rental. We do not have any special high-IOPS requirements. However, if the configuration is known to work, I don't see a good reason not to use it, and then not have to worry about having separate "fast and small" and "slow and large" disks.

    Read the article

  • What are some alternatives to word processing with Markdown?

    - by Hassan
    I've used MS Word-style editors for a long time, but I never got used to how unintuitive and cumbersome they are. I'm not talking specifically about MS Word, but also other editors that seem to mimic Word, like OpenOffice, NeoOffice, etc. I've found myself preferring to write in Markdown (much like on this site). I've found a few good Markdown editors, and I like using them a lot more than Word-style editors. Here is what they generally look like: [screenshot]. As you can see, it works much differently than a Word-style editor. This is a generally cleaner way of writing, since formatting is done right in the text, and it is extremely simple to use (no highlighting some text, then clicking a button in some menu you have to find). Although editing text this way is great, I've realized that the syntax only covers very specific needs (bullets, numbered lists, headings and sub-headings, bold, italic, and some other common ones). However, many features are missing. Here are some features that would be nice in a word processor:

    - Tables.
    - Indenting paragraphs.
    - Good image support (you can link to images, but not add them, since Markdown is just text).
    - Simpler to use than Word and its cronies.
    - Cross-platform.

    Some of these can be fixed with inline HTML, but nobody wants to do that. It seems Markdown was designed for editing text on the internet. Is there a similar setup that works better for desktop word processors?
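
    One hedged way to keep the Markdown workflow while covering tables and embedded images is to write in an extended dialect and convert when a word-processor file is needed; for example, a fragment of a hypothetical notes.md (pipe tables are a pandoc extension, images are standard Markdown):

        ![Quarterly results](chart.png)

        | Quarter | Revenue |
        |---------|---------|
        | Q1      | 10,000  |
        | Q2      | 12,500  |

    Then, assuming pandoc is installed, something like "pandoc notes.md -o notes.odt" (or .docx) produces a file that word-processor users can open, so the source stays plain text.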

    Read the article

  • How do I find funny pictures?

    - by Hanno Fietz
    No, not lolcats. And I'm not really looking for a specific site, either. I have often wished that I had some funny picture to illustrate a presentation, a website, a post, an email, or something else. Google image search and stock photo services have hardly ever helped me, although that may be because I'm doing something wrong. Jeff Atwood seems to have no problem finding funny pictures for his codinghorror and stackoverflow blogs, as well as for the error messages on the trilogy sites. One of my favourites was this elephant. Other bloggers also seem to be quite good at it. I'm wondering if I simply lack the creativity or if there are sources or methods I don't know about. I can think of the following ways to get pictures, but I'm not sure whether this is really "how they do it":

    - Keep a collection of pictures that you stumbled upon and liked (takes quite some time to build up to a proper library); when you need a picture, there's one in there.
    - Maybe have pictures on paper, too, like from magazines or ads.
    - When you are looking for a picture, search online (Flickr, Google, stock photos). This has never really worked for me, I don't know why.
    - Produce the pictures yourself, i.e. have a good library of source material or find some online, and apply some creativity and suitable software. I could imagine that this could work well once you have the practice.

    Read the article

  • Webserver optimization

    - by f-aminov
    Hi guys! I have a website hosted on a VPS (512MB minimum guaranteed memory, 510MHz processor, Debian 5.0 Lenny, Apache 2.2.9 with nginx 0.7.65 as a frontend to serve static content, MySQL 5.1.44, PHP 5.3.2 with APC caching). I'm a web developer, so I'm not very good at optimizing servers, but I've managed to install and set up all those necessary components (LAMP, nginx, etc.). After that I decided to stress test my website (which uses Drupal 6.16 with caching and all possible optimizations enabled) using a utility called "Webserver Stress Tool 7". And it seems to me that the results aren't any good; here is a graph (sorry, as a new user I'm not allowed to post images). As you can see, the response time increases very quickly with the number of simultaneous users. With 10 simultaneous users the time is about 1000ms; with 100 simultaneous users it's about 15000ms (15s!). The question is: do you think this is normal behavior for such a server, or is something wrong with the settings and optimization? If you think something is wrong, what in particular could be wrong? Any other suggestions on how to speed this up a little bit?
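
    On a 512 MB VPS the usual first suspect for response times that blow up with concurrency is Apache's prefork MaxClients being set far higher than the RAM can hold, so the box starts swapping under load. A hedged starting point; the numbers are guesses, so size MaxClients from your actual per-process memory as reported by ps or top:

        # Apache 2.2 prefork settings, e.g. in /etc/apache2/apache2.conf
        <IfModule mpm_prefork_module>
            StartServers          2
            MinSpareServers       2
            MaxSpareServers       5
            MaxClients           15
            MaxRequestsPerChild 500
        </IfModule>

        KeepAlive On
        KeepAliveTimeout 2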

    Read the article

  • Why are my SOCKS proxies slow?

    - by vps_newcomer
    I have a Linux VPS, and I have tried a few SOCKS proxy setups to test their performance (all tests used speedtest.net):

    - standard ssh tunnel proxy: 0.8 Mbit/s download and 0.1-0.2 Mbit/s upload
    - dante-server proxy: 1.3 Mbit/s download and 0.4-0.5 Mbit/s upload

    I am wondering why these speeds are so slow. Is anything shaping them? Is it just the nature of SOCKS proxies? I know that the ssh tunnel has to do encryption and whatnot, so that is why it's slow, but I was surprised to see that the second setup was also quite slow. On the VPS I have seen download speeds of 25 MB/s (that's about 200 Mbit/s) and upload speeds of at least 5 MB/s (I haven't got a good enough pipe to test anything faster). The other option I was going to try is to set up OpenVPN and see how that goes; however, I need to find a good tutorial, as it's fairly complicated to set up. So why is it so slow? How can I test to see where the bottleneck is? How can I make it faster? :D
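
    One hedged way to find the bottleneck is to measure the raw client-to-VPS throughput first and compare it with the figure you get through each proxy (the hostname is a placeholder):

        # on the VPS
        iperf -s

        # on the client: raw TCP throughput to the VPS, no proxy involved
        iperf -c your.vps.example.com -t 30

    If the raw figure is already close to 1 Mbit/s, the limit is the path between you and the VPS rather than the proxy software; if it is much higher, the proxy setup itself (encryption, single-threaded forwarding, small socket buffers) deserves a closer look.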

    Read the article

  • Password Manager that allows syncing across platforms

    - by lexu
    I use OS X, Linux, Solaris and Windows for work and from home. There are good tools that allow me to manage the many logins/passwords required, platform-independently. But mostly they expect me to carry a thumb drive around or require direct access to a central location (a sky drive in the cloud). The thumb drive is too easily lost (= synchronized backup needed), and the central location is not always reachable/mountable; besides, company policy rightly prevents this in many cases. Is there a tool that allows me to add passwords locally and then syncs its DB with the "mother ship" later? Or is there another approach that you use that solves my problem?

    EDIT: My question is more about "synchronize" than cross-platform. I've evaluated (= read the feature lists of) some good cross-platform tools, but I need one that does the synchronizing for me. By synchronize I mean "merge two versions", not "replace the (hopefully) old file with the new one". I'm not sure I'm always disciplined/awake enough to prevent data loss.

    UPDATE: Lifehacker just posted that AgileSolutions now have a beta version of 1Password for Windows.

    Read the article

  • It seems Windows 8.1 killed my two T60 laptop batteries

    - by rstock
    I upgraded Windows 7 to Win8 earlier this year, and last week upgraded to Windows 8.1 (Lenovo T60). I had no problem with battery usage on Win7 or Win8. After about a week of Win8.1 on my system, the battery stopped working while the system was on. The orange battery indicator just keeps flashing and the system does not charge the battery (even though I know there was life in it). I installed a known-good, fully charged battery from another T60; it worked for about 40 minutes, then it instantly died in front of my eyes. The system now shows the same flashing orange battery light, but it is not charging. I know both these batteries were still good; they just appear to be dead now. My research suggests that the Win8.1 upgrade may not have updated the battery driver, so I have since done that myself. Same problem. Research is also pointing me to some 'smart chip' on the batteries that needs a reset. Is this possible? Does anyone know a process to reset the 'smart chip' on these batteries (fru# 92P1139)?

    Read the article

  • scp No such file or directory

    - by Joe
    I've got a confusing question for which Super User doesn't seem to have a good answer, and neither does Google. I'm trying to scp a file from a remote server to my local machine. The command is this: scp user@server:/path/to/source/file.gz /path/to/destination The error I get is: scp: /path/to/source/file.gz: No such file or directory. user is my username on the server. The command syntax appears fine to me, ssh works fine, I can cd to the file's directory, and it doesn't seem to be an access control issue. Thanks.

    Edit: Thank you John, I spotted the issue. ls returned this: -r--r--r-- 1 nobody users 168967171 Mar 10 2009 /path/to/source/file.gz So, the file was on a read-only file system, and user is able to read it but not scp it from there. I just copied the file to a different directory, chowned it, and it worked fine. It would be good if someone could explain why this is the case, though.

    Read the article

  • Alternatives to ntbackup

    - by Chris J
    Is anyone aware of any good (for a given value of "good") alternatives to ntbackup? There are a few quirks with ntbackup that I still find odd, and occasionally (and for no obvious reason) ntbackup just doesn't back up, usually complaining that the wrong tape is inserted even though it's been told to just use whatever tape is in the drive. I've experimented with cygwin's tar and cpio and with Win32 ports of these utilities, but I've not had any luck getting them to see the tape device (so if someone does use these utilities to write to a tape device, I'd also be interested to know how). Essentially all I'm looking for is a reliable program that I can tell to back up a list of locations to the tape, and not to care about anything like formatting tapes or sticking volume labels on them (and which, conversely, makes it fairly straightforward to restore from as well). On the flip side, I don't want something that will attempt to manage our tapes. Don't get me wrong here: ntbackup can do the job, but its quirks are making me look at possible alternatives. Any suggestions? If it's open source, even better.
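
    On the Cygwin question specifically, the stumbling block is usually the device name rather than tar itself: Cygwin maps the first Windows tape drive (\\.\Tape0) to /dev/st0 (rewinding) and /dev/nst0 (non-rewinding). A hedged sketch, assuming that mapping holds on your install and the drive shows up in Device Manager:

        # write a set of directories to the tape currently in the drive
        tar -cvf /dev/st0 /cygdrive/d/data /cygdrive/e/shares

        # verify by listing the archive straight off the tape
        tar -tvf /dev/st0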

    Read the article
