Search Results

Search found 13293 results on 532 pages for 'small ticket'.


  • How can I install iTunes in such a way that it can't put any "hooks" or helper programs on my computer?

    - by Joshua Carmody
    I'm buying a new iPad, which means I must once again install iTunes. I've not used iTunes in more than 6 months, since I bought a new computer. I don't like iTunes, but I can live with using it to buy/manage media and sync my Apple devices when the program is open. What I would like to do, though, is find a way to install iTunes in such a way that it has absolutely no effect on my system when it is closed. iTunes normally installs several helper programs such as iTunesHelper.exe and the Bonjour service. These programs run in the background when iTunes is closed. You can force-close them, or remove them from your setup files, but iTunes will often put them right back when you run it. I know these programs are mostly harmless, but they have at times caused issues such as iTunes spending system resources trying to catalog media files or drives connected to VPN, or other issues. At best they're just one more small background process eating up a small piece of my CPU time and RAM. How can I run iTunes without letting it get its "hooks" into my system? One thought I had is that I could create a Windows user account just for iTunes, and deny it admin privileges. Then if I installed iTunes using that account, maybe anything it installed wouldn't affect the "main" account on my PC? But I'm not sure if that would work. Failing that, maybe some kind of virtualization software or sandbox I could install it in? I'm open to any suggestions. My system is an Intel-based PC running Windows 7 Professional 64-bit. Thanks!
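
    A rough sketch of one way to keep the background pieces quiet between sessions, assuming the stock service and Run-key names Apple's installer typically uses (check your own system before relying on them, and note iTunes updates may put them back):

        :: Stop and disable the Bonjour service
        sc stop "Bonjour Service"
        sc config "Bonjour Service" start= disabled

        :: Remove the iTunesHelper autostart entry from the Run key
        reg delete "HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Run" /v iTunesHelper /f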

    Read the article

  • Nginx 502 Bad Gateway: It just won't stop

    - by David
    I have the same problem that most people seem to have with Nginx: 502 Bad Gateway errors. They are intermittent but typically happen more than once per session, which means my users are probably running into them nearly every time they use the app. I've tried adjusting fastcgi_buffers and fastcgi_buffer_size (in both directions) to no avail. I've tried various other things with the configuration file but nothing seems to work. Here's my config (note that I've stripped away most of the things I've tried, since they didn't work and I didn't want to bloat the file with a bunch of unrelated directives):

        server {
            root /usr/share/nginx/www/;
            index index.php;

            # Make site accessible from http://localhost/
            server_name localhost;

            # Pass PHP scripts to PHP-FPM
            location ~ \.php {
                include /etc/nginx/fastcgi_params;
                fastcgi_pass 127.0.0.1:9000;
            }

            # Lock the site
            location / {
                auth_basic "Administrator Login";
                auth_basic_user_file /usr/share/nginx/.htpasswd;
            }

            # Hide the password file
            location ~ /\. {
                deny all;
            }

            client_max_body_size 8M;
        }

    I'm running a small Rackspace cloud server, which should be plenty for handling an app with a small user base...
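
    As a rough diagnostic sketch (the paths and service names below are Debian/Ubuntu PHP 5 defaults and may differ on this box), intermittent 502s from a setup like this often mean PHP-FPM refused or dropped the connection, so its side is worth checking before more Nginx buffer tuning:

        # Watch both error logs while reproducing the 502
        tail -f /var/log/php5-fpm.log /var/log/nginx/error.log

        # In the FPM pool config (e.g. /etc/php5/fpm/pool.d/www.conf), consider raising
        #   pm.max_children and pm.max_requests so workers aren't exhausted or recycled mid-request,
        # and in the Nginx PHP location block, let slow scripts finish:
        #   fastcgi_read_timeout 300;

        sudo service php5-fpm restart && sudo service nginx reload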

    Read the article

  • Mesh-networked servers via VPN

    - by microspino
    I have a design idea and I would like some advice from SF about it. I have 5 customers with small real-estate databases. I've built a desktop app for them and now they would like to merge their databases to share their data. I don't want to centralize everything in one place, nor do I want to do server maintenance. They also told me that all of them have little servers and maintenance guys available in their offices. Although everything seems suitable for a web application, I had the idea to experiment with something new: each customer's small server would be connected to the others in a sort of mesh network, without a single point of failure, through VPNs. If one of the servers went down, the customers could still connect to their databases from one of the other mesh-networked servers instead of from the local one that is down. During normal operations all the servers sync the db with the others through VPNs. I can accept a half-day timing window of non-synched data; in other words, since I don't need real-time synchronization, the servers don't always have to stay in sync. I can migrate my data over to other non-SQL technologies like CouchDB or Redis or whatever you suggest. As you can see I don't have a lot of constraints, and although I could go with a web application I would like to delegate and decentralize support, data privacy and management, as much as I can, to my customers' offices. Is that a crazy idea? Do you know if something similar exists? Which technology would you suggest?
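
    A rough sketch of how the sync part could look with CouchDB's built-in replicator, assuming a database named realestate and a peer reachable over the VPN (names and addresses are illustrative); each node would post one pair of these per peer to get continuous, multi-master replication:

        # On server A: continuously push local changes to peer B over the VPN
        curl -X POST http://localhost:5984/_replicate \
             -H "Content-Type: application/json" \
             -d '{"source": "realestate", "target": "http://10.8.0.2:5984/realestate", "continuous": true}'

        # And pull B's changes back (repeat both directions for each peer in the mesh)
        curl -X POST http://localhost:5984/_replicate \
             -H "Content-Type: application/json" \
             -d '{"source": "http://10.8.0.2:5984/realestate", "target": "realestate", "continuous": true}'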

    Read the article

  • Remote Desktop Zooming

    - by codeulike
    Using Remote Desktop from a device with a hi-res screen (say, a Surface Pro) is decidedly tricky, as everything displays at 1:1 scale and so looks tiny. If the machine you are remoting into runs Server 2008 R2 or later, you can change the dpi zooming setting (see here). But for older hosts, that doesn't work. Using normal Remote Desktop, you can connect with a lower resolution, say 1280x768, and turn on smart sizing. However, smart sizing can scale down (to display a huge desktop in a small area) but does not seem to scale up (to display a small desktop in a big area). Using the Windows 8 Remote Desktop App, you can zoom, but you cannot set the default resolution of the host. What I want is a lower resolution in the host, scaled up to fit my screen. So both of those are close to what I want, but don't quite work. So the question is: Does the Remote Desktop App allow screen resolution to be set somehow? Is there some other Remote Desktop client that can handle zooming better?
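
    For reference, in the classic mstsc client the relevant knobs live in the saved .rdp file; a sketch of the settings involved (values are just examples), keeping in mind that, as noted above, smart sizing in older clients only scales down:

        screen mode id:i:2
        desktopwidth:i:1280
        desktopheight:i:768
        smart sizing:i:1

    Here "screen mode id:i:2" requests full screen, the desktopwidth/desktopheight pair fixes the host resolution, and "smart sizing:i:1" asks the client to scale the session to the window.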

    Read the article

  • Torrent downloads not showing in the Squid log

    - by noobroot
    Hello, I have just a few months working as a sysadmin, hence I still have lots to learn. The first thing I'd like to do is as follows: we have an OpenBSD 4.5 box acting as firewall, DNS, cache, etc. The box has 2 network cards, one connected directly to the internet and the other to our switch. I used to work with sarg for the log analysis but then changed to the much faster free-sa. I use a daily free-sa report to check the bandwidth usage and report our top 5 bandwidth consumers (3 days a week being #1 and you will be buying the pizzas :D, we are a small company of ~20 so we are very familiar). This was working really well until recently, when one of us needed to download some stuff via torrent (~3GB). Since the pizza rule applies only to non-work-related downloads, and he told me (verified) that his download was indeed work related, I was going to dismiss that 3GB from his quota. But to my surprise the log didn't show that 3GB: his IP's consumption was only around 290MB. More recently, since the FIFA World Cup started, we know that some of the employees are watching the match streams. We know it and we don't care, since, as already stated, we are a small company and don't have restrictive policies; we can all chat, watch YouTube, and download anything we want, BUT we are only allowed 300MB a day, otherwise you end up on the top5-pizza-board. Anyway, that streaming consumption is also not showing in the free-sa reports. So my question is, why are these data excluded from the reports? I'm thinking that the free-sa reports list only certain types of traffic, but I'm also wondering whether the Squid logs are the ones that are not, erm... logging these connections. Any help, guide, advice or clarification is appreciated.
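
    One likely explanation is that Squid only logs traffic that actually passes through the proxy, so BitTorrent and most video streams never appear in access.log. A rough sketch of doing the accounting at the firewall instead, with pf on the OpenBSD box (interface name and addresses are illustrative, and you would generate one labelled rule per LAN host you want to meter):

        # /etc/pf.conf -- per-host accounting via rule labels
        int_if = "em1"
        pass in on $int_if from 192.168.1.10 to any label "user-192.168.1.10"
        pass in on $int_if from 192.168.1.11 to any label "user-192.168.1.11"

        # Reload the ruleset and read the per-label packet/byte counters
        pfctl -f /etc/pf.conf
        pfctl -s labels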

    Read the article

  • Ubuntu: Network connection seems to fail after some time

    - by chrischu
    I just bought a Shuttle XS-35 barebone mini-PC, put a 1 TB WD hard drive and 2 GB of RAM into it, and installed Ubuntu on it. The machine will act as a media server (streaming videos to my PS3) and as a webserver for some small private projects. Now I wanted to copy my videos from my Windows 7 machine to the Ubuntu machine and therefore created a Samba share on the Ubuntu machine. I tried copying the files with the standard Windows copy function and with SyncToy, but after some time (sometimes 5 copied files, sometimes 120 copied files) the Samba share just disappears. When that happens I can't reach the internet from the Ubuntu machine, although the network connection still seems to be fine (IP still there etc.). Between the machines lies a Linksys router. When I try to ping my router (after the connection doesn't work anymore) from the Ubuntu machine, only a very small subset of the packets actually get there (something around 20%). When I restart the Ubuntu machine everything seems to work normally again. I have no idea where the problem lies here. Does anybody have a clue? Thanks in advance!
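
    A rough diagnostic sketch, assuming the onboard NIC shows up as eth0 (adjust to your interface); flaky onboard adapters stalling under sustained load are a common culprit for this pattern, dmesg often shows the driver resetting, and disabling hardware offloads is a cheap, reversible experiment:

        # Watch for NIC driver errors or resets while a copy is running
        dmesg | tail -n 50

        # Temporarily turn off TCP segmentation / receive offloads (reverts on reboot)
        sudo ethtool -K eth0 tso off gso off gro off

        # Check for packet errors and drops on the interface
        ifconfig eth0 | grep -E "errors|dropped"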

    Read the article

  • Looking for an application that scrolls or pans netbook screens running Windows

    - by therobyouknow
    I'm looking for a Windows 7 and XP compatible Windows desktop panning/scrolling tool. This is to solve a problem where some applications, for example MSN, have settings/preferences windows that are not resizeable. I have a netbook with a small maximum screen resolution, e.g. 1024x600. The fixed, non-resizeable windows are too large for this display size, so I cannot see all of the items on these windows, particularly the OK button to save settings. What I would like is a desktop scrolling/panning tool where, if I move my mouse pointer to any edge of the display, it pans to show the region of the too-large fixed window that I could not see. I use Samsung N110 and Toshiba NB100 netbooks. I'm looking for: a general program that provides desktop panning/scrolling/expanded resolution to reach all regions of a non-resizeable fixed window; preferably a non-graphics-hardware-specific program, though I will accept a solution that works with both of the above machines. I'm NOT looking for (i.e. unsatisfactory answers others have received that I've already searched and found): advice on what programs to use that DON'T have the problem of fixed windows; alternative operating system solutions; plugging in an external monitor with a larger resolution - I use this option but I need a solution when one is not available, e.g. while travelling; advice about not using small-screen netbooks - I enjoy their compact convenience; advice about changing the dpi settings in the Control Panel Display settings; advice about guesswork with the tab key to move the focus to the off-screen item I cannot see. Thank you in advance.

    Read the article

  • Why does BitLocker need a minimum volume size of 64 MB?

    - by Iszi
    Since the future of TrueCrypt still appears to be unclear, I figured I'd try to get my stuff migrated into BitLocker, at least for the time being. I nearly never have to access my encrypted data from anything that's not BitLocker-capable, so cross-platform compatibility isn't a big deal to me at this time. However, I am having a bit of an issue understanding the minimum requirement of a 64 MB volume. With TrueCrypt, I was able to protect small files (and most of my protected files are fairly small) in containers down to 300 KB or even less. When I finally created a VHD of an appropriate size last night (100 MB), it seemed the file system itself only took up about 3 MB, and encrypting it with BitLocker didn't appear to take up any more. While 3 MB is still an order of magnitude larger than the smallest volume I could make with TrueCrypt, it's still relatively reasonable in comparison to 64 MB. This is an especially large amount of overhead (and largely wasted at that, since it's mostly empty space for now) when I consider that some of these volumes will be stored and synced in the cloud. What possible reasons could BitLocker have for needing volumes to be 64 MB large, when it doesn't even appear to use that space? (Reference: the BitLocker FAQ on TechNet.)
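
    For reference, a rough sketch of the VHD-plus-BitLocker workflow described above, using diskpart and manage-bde (the path, size, label and drive letter are illustrative, and the exact manage-bde switches vary slightly between Windows versions):

        rem In an elevated diskpart session: create, attach and format a 100 MB container
        create vdisk file="C:\vault.vhd" maximum=100 type=fixed
        attach vdisk
        create partition primary
        format fs=ntfs quick label=Vault
        assign letter=V

        rem Then from an elevated command prompt: add a password protector and turn BitLocker on
        manage-bde -protectors -add V: -Password
        manage-bde -on V: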

    Read the article

  • WSUS performance for unneeded updates

    - by mhouston100
    We have a WSUS server serving around 300 PCs and a couple of dozen servers, and a discussion came up at work as to what products to include. We have a single SQL 2005 instance on one of the servers and it has NEVER been updated. My first thought was to just tick the box for SQL 2005 and let WSUS do its thing to upgrade to the latest service pack at least. One of the other guys here has the opinion that having updates that are relevant to only a small selection of hosts would affect the performance of WSUS as a whole, claiming that each update does a 'check' against all the hosts or something similar. My argument is that manually updating these servers is obviously not working, as the admins are not paying attention to what is needed. So my question is: Do updates that only affect a subset of the hosts affect the overall performance of the WSUS server in relation to ALL the hosts? (Disk space is not an issue at this point.) Is there any performance justification for or against manually updating a small number of products? Basically I'm needing a rebuttal against his argument and I'm unable to find any concrete documentation to prove him wrong.

    Read the article

  • Recover Windows cached domain password

    - by theguy
    I have a computer from another small organization that works with our school. It was previously joined to another domain elsewhere. The organization doesn't have an IT person, so they didn't think about what they needed to do with the information on the computer before they moved it to our school. The previous user of the computer is no longer with the organization, so there is no information about the password. The computer has information and programs that need to be accessed, so putting the hard drive in another computer and grabbing the information is a no-go, as I need the computer itself to be working as well. The computer is running Windows Vista Business Edition and is joined to a domain with a cached profile. The admin accounts are disabled by GPO. I've been asked to see if I could recover the password, but running ophcrack gave me no hits on the cached profile. I'm not too familiar with password recovery tools that would work on a cached profile from a domain, so I'm looking for answers here. Any other suggestions? Preferably something free, as we're a small school, and an easy-to-use liveCD solution like ophcrack would be appreciated.

    Read the article

  • Why is only one Excel spreadsheet crippled, but others are fine?

    - by Dallas
    I have an inherited spreadsheet that I really don't want to rebuild at the moment. It's a simple workbook that is small (fewer than 200 rows that don't even reach column AA) and does nothing more than calculate some totals within the same worksheets. No macros, no external data sources, nothing beyond basic formatting of dates, numbers and strings. I see importing data from CSV/text has created many, many workbook connections over time, but even if I delete them all (there were hundreds) it makes no difference in performance. Even clicking to simply change focus from cell to cell takes 10+ seconds, adorned by the spinning cursor, (Not Responding) appended to the title bar, and the application locking up. The program seems to "recover" every time, but the efficiency of editing this file is obviously seriously handicapped. All other files seem fine in Excel, and other programs have no apparent performance issues. I see Excel is chewing up CPU but I'm not sure how to narrow down what process or service is "clashing" with Excel. I tried the same file on other computers and performance is fine. If I turn off all start-up services and run only Excel, performance is restored... until I start using other programs and then it bogs down again. At this point, I would entertain almost any idea, theory or suggestion that helps pinpoint, solve or work around the issue.
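
    A cheap first experiment, given that the file behaves fine on other machines, is to rule out COM add-ins and startup interactions by opening the workbook in Excel's safe mode (the path is illustrative; run it from Start > Run or a prompt where excel.exe is on the PATH):

        rem Launch Excel with no add-ins, startup files or toolbar customisations, opening only this workbook
        excel.exe /safe "C:\path\to\inherited-workbook.xlsx"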

    Read the article

  • Managed LAMP platform for maximizing availability and global reach, not scalability

    - by user66819
    Assume a Linux/Apache/MySQL/PHP application for a small base of registered users. With a small userbase there are no traffic peaks, so the scalability that cloud platforms offer is not imperative. But the system is mission-critical, so availability is the primary goal. Users are also distributed across Asia, Europe, and the US, so multiple server locations that minimize users' network hops would be highly desirable. The dream: a managed VPS platform where we would configure a single server (uploading PHP and other files, manipulating the database, etc.), and the platform would automatically mirror the server in a handful of key places around the world (say one on each US coast, one in Europe, one in east Asia). File system synchronization and MySQL replication would happen automatically. The core operating system is managed, so we don't need to do full system administration and security, and low-level backups are also done by the service provider, though we do our own backups as well. Couple this with some sort of DNS geo-detection, so users are routed to the nearest operational server... with support for https, of course. Does such a dream exist? If not, what are some approaches to accomplish the same end with minimal time investment and minimal monthly hosting costs?

    Read the article

  • What LPR arguments do I need to print a 1400x800 pixel image on a 4x6 label?

    - by Nick
    This is driving me nuts. UPS sends our system a 1400x800 GIF image of a shipping label, which is supposed to fit nicely on a 4x6 page. Unfortunately, I can't seem to get the command line options right to make it happen. We're using an Eltron/Zebra 2844 with a network adapter, and printing from our Ubuntu 8.04 server using CUPS. We're using the correct drivers, and test pages print correctly. No matter what I try, though, it insists on printing the UPS labels across 6 pages, with a little bit of the label on each page, or way too small. I've tried a bazillion different lpr settings, most of them producing garbage. The closest I've gotten is this:

        lpr -P Eltron2844 -o natural-scaling=55 -o page-right=0 -o page-left=0 -o landscape -o media="4x6" ./1ZY437560399620027.gif

    but it causes the image to be too small on the page. It's about an inch too short, and there's a 1/2" margin on both sides. If I bump the scale up to 56, it explodes the image onto two pages and squashes it. Any ideas?
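
    A rough sketch of one alternative approach, assuming ImageMagick is available on the server (filenames and the exact CUPS media name are illustrative and depend on the printer's PPD): pre-render the GIF at the label's aspect ratio, then let CUPS fit it to the page instead of scaling by percentage:

        # Rotate the 1400x800 label to portrait and pad it to a 2:3 (4x6) aspect ratio
        convert 1ZY437560399620027.gif -rotate 90 -resize 800x1200 \
                -gravity center -extent 800x1200 label.png

        # Print it, asking CUPS to scale the image to fill the 4x6 media
        lpr -P Eltron2844 -o media=Custom.4x6in -o fit-to-page label.png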

    Read the article

  • How do you keep server documentation from getting out of sync with the actual setup?

    - by Frerich Raabe
    I'm a hobbyist maintaining a small FreeBSD server serving mail via IMAP - it's an exercise in server administration. The setup does have reasonably good documentation (in AsciiDoc format), which recently allowed another person to recreate the entire setup from scratch in less than 30 minutes. However, I noticed that after the initial setup, it easily happens that small changes made to the system (say: inetd gets disabled, my IMAP server listens on an additional port for ManageSieve connections, a new router is added to the exim configuration) don't end up in the documentation immediately (if at all). My idea was to avoid this problem by (partially?) generating the documentation out of the configuration files and the comments therein - one way to implement this may be to put /etc and /usr/local/etc into some source code management system (say, git) and then run a script which regenerates the documentation on every commit. However, I'm not sure whether that would be overkill and/or too difficult to get right (after all, I don't want complete copies of the source files in my documentation but rather just the diffs). How do other people keep the server documentation from getting outdated - is there a good way to keep them in sync automatically, or do you just have the discipline to update the documentation at the same time you modify the system?
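
    A minimal sketch of the version-control half of that idea (the hook contents, output path and asciidoc invocation are illustrative; a wrapper tool such as etckeeper automates much of the /etc tracking on some platforms):

        # Put the configuration trees under git
        cd /etc && git init && git add -A && git commit -m "baseline"

        # Rebuild the docs after every commit via a post-commit hook
        cat > /etc/.git/hooks/post-commit <<'EOF'
        #!/bin/sh
        asciidoc -o /usr/local/www/docs/server-setup.html /root/docs/server-setup.txt
        EOF
        chmod +x /etc/.git/hooks/post-commit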

    Read the article

  • Joomla performance problems on AWS

    - by Bobby Jack
    I'm running a site on AWS with the following setup: a single m1.small instance (web server), a single RDS m1.small DB, and Joomla 1.5. Generally, the site is performant, but it is fairly low-traffic - say around 50-100 visits per hour. However, at peak time we see about double that traffic. During peak time, pretty much every day: CPU usage on the web server slowly climbs to 100%; CPU usage on the RDS server climbs quite quickly to about 30%, from an average of about 15; database connections shoot up to about 140, from a normal average of about 2 or 3; and the site is then occasionally unreachable, certainly according to Pingdom monitoring. Does anyone recognise this behaviour? Can you point me in the right direction to begin investigating? Of course, RDS makes it difficult to do things like slow query logging, so I've started by regularly dumping the MySQL process list into a file to see if there's anything I can spot there, but it would be good to have something more concrete to investigate. UPDATE: At least, can someone confirm that I'm definitely right in saying that the level of traffic implies the problem must be a specific type of query taking way longer than it should to execute? This would happen if a table gets locked and many queries need to write to it, right? For this very reason, I've already changed the __session table type to InnoDB.
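
    For reference, a rough sketch of getting slow-query data on RDS without file access, assuming a custom MySQL parameter group is attached to the instance (the endpoint, user, and thresholds are illustrative): switch the log output to a table, then query it during the peak period:

        # Parameter group settings to apply via the RDS console/CLI first (illustrative values):
        #   slow_query_log = 1, long_query_time = 2, log_output = TABLE

        # Inspect the slowest statements captured during the peak period
        mysql -h your-instance.rds.amazonaws.com -u admin -p -e \
          "SELECT start_time, query_time, rows_examined, LEFT(sql_text,120)
             FROM mysql.slow_log ORDER BY query_time DESC LIMIT 20;"

        # Watch for lock waits and connection pile-ups as they happen
        mysql -h your-instance.rds.amazonaws.com -u admin -p -e "SHOW FULL PROCESSLIST;"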

    Read the article

  • No partition on USB Flash Drive?

    - by Skytunnel
    A friend gave me a corrupted USB memory stick to try to recover data from, but I've had some unusual results, so I thought I'd share to see if anyone is familiar with this problem... First off I just tried opening it from my own PC. Windows prompted to format the drive, which I of course declined. I downloaded TestDisk to analyse the drive, and right away I noticed something strange: in the list of drives it comes up as "Disk /dev/sdc - 6144 B - USB Flash Drive". That's right, the first USB flash drive smaller than a floppy disk!? Moving on anyway... the first analysis comes up with: "Partition sector doesn't have the endmark 0xAA55". TestDisk's Quick Search gave no results, so I moved on to Deeper Search: "No partition found or selected for recovery". This left me stumped. I tried a couple of other programs with no success. I did manage to get a backup image, but it was just as small as TestDisk indicated, so there was nothing of use on it. After a few hours trying various suggestions from other sources, I gave in and just tried formatting the drive, but it returned the message "Windows was unable to complete the format." From googling that, the suggestion was to delete the partition. But there is no partition to delete in this case. Most recently I've tried formatting from cmd, and got this result:

        Format D: /FS:FAT32
        The type of the file system is RAW
        The new file system is FAT32
        Verifying 0M
        11 bad sectors were encountered during the format. These sectors cannot be guaranteed to have been cleaned
        The volume is too small for FAT32

    Anyone got any suggestions? UPDATE: As per a suggestion from @Karen, I tried running a CLEAN from DISKPART, with the result: "DiskPart has encountered an error: The request could not be performed because of an I/O device error."
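
    Before any more write attempts, it may be worth pulling whatever the device will still give up with GNU ddrescue and working from the image instead of the flaky hardware (the package name is the Debian/Ubuntu one and the device node is illustrative; double-check the target device before running):

        sudo apt-get install gddrescue        # package providing the ddrescue binary on Debian/Ubuntu
        # Image the raw device, logging progress so the copy can be interrupted and resumed
        sudo ddrescue -d -r3 /dev/sdc usb-stick.img usb-stick.log
        # Then point recovery tools at the image rather than the failing stick
        testdisk usb-stick.img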

    Read the article

  • What are the typical methods used to scale up/out email storage servers?

    - by nareshov
    Hi, here is what I've tried. I have two email storage architectures, old and new. Old: courier-imapds on several (18+) 1TB-storage servers. If one of them shows signs of running out of disk space, we migrate a few email accounts to another server. The servers don't have replicas, and there are no backups either. New: dovecot2 on a single huge server with 16TB (SATA) storage and a few SSDs. We store fresh mails on the SSDs and run a doveadm purge to move mails older than a day to the SATA disks. There is an identical server which has a max-15-minute-old rsync backup from the primary server. Higher-ups/management wanted to pack in as much storage as possible per server in order to minimise the cost of SSDs per server. The rsync'ing is done because GlusterFS wasn't replicating well under that high small/random IO. Scaling out was expected to be done by provisioning another pair of such huge servers. On facing disk-crunch issues like in the old architecture, email accounts would be moved manually. Concerns/doubts: I'm not convinced the synchronously-replicated filesystem idea works well for heavy random/small IO. GlusterFS isn't working for us yet, and I'm not sure if there's another filesystem out there for this use case. The idea was to keep identical pairs and use DNS round-robin for email delivery and IMAP/POP3 access, and if one of the servers went down for whatever reason (planned/unplanned), we'd move the IP to the other server in the pair. In filesystems like Lustre, I get the advantage of a single namespace, whereby I do not have to worry about manually migrating accounts around and updating MAILHOME paths and other metadata/data. Questions: What are the typical methods used to scale up/out with the traditional software (courier-imapd / dovecot)? Does traditional software that stores on a locally mounted filesystem pose a roadblock to scaling out with minimal "problems"? Does one have to re-write (parts of) these to work with an object storage of some sort, such as OpenStack object storage?

    Read the article

  • Is there anything like Heroku for PHP and/or .NET?

    - by Wayne M
    In my area PHP is very widespread, and so is .NET. Ruby, not so much; most places have never heard of it. For some personal things I am "forced" to choose Rails because I want to take advantage of Heroku - the ability to deploy and scale on the cloud very easily is the main reason. Also, they offer a small FREE plan, with no ads or strings attached, that I can use for demo sites or, in this case, for my business' static page; as a totally bootstrapped startup I have maybe $50 or so in initial capital and cannot afford to pay monthly fees while I'm getting started. Are there any similar offerings for other languages? Specifically, I really like the small, 5MB site for free that Heroku offers - is there anything like that for PHP and/or .NET? I'm not even that concerned about the "cloud" part, but that would be a nice bonus. If there is, I might be able to kill two birds with one stone and pick up a useful skill as I'm doing my own thing, instead of using something that nobody else knows or cares about. I should add that I'm specifically interested in something that offers a free plan. As I said, Heroku has a 5MB plan, and you can have as many of those as you want for free; I have yet to find anything similar for any other platform (most of the "free" sites require you to have ugly banners on your page, or don't allow you to use your own domain name), and to be honest I'm not too thrilled about using Ruby on Rails for everything simply to take advantage of this. I'm asking this here because I already asked it on StackOverflow and someone suggested it would be better suited here.

    Read the article

  • Cisco VoIP stuck as Unregistered?

    - by Shifty
    Question: Why is one VoIP phone stuck as Unregistered? Background: We have a Cisco UC540 Small Business switch/router/VoIP combo. This phone was working until I powered everything down to install a larger UPS unit. The phone originally had a status of "Deceased". I removed the registration and tried to add it again. Now it just sits as "Unregistered". I even tried giving it another extension. I am stuck using the Cisco Communication Assistant since this is small business hardware. There is very limited CLI access. Also, from what I've heard, if you access the CLI without Cisco's permission, you will void any warranty. The phone in question is a Cisco SPA501G. It is connected to an SG300-28P. There are 5 other phones on this switch working just fine. I have tried other ports with no luck. Both the link and PoE lights are lit up. Any ideas?

    Read the article

  • Exchange Full Access issue

    - by Benjamin Jones
    I was just hired as a System Admin for a small company. They use Exchange 2010 for their mail server. I've never had a permission issue like this with Exchange because I previously worked for a larger firm with less responsibility. Their old system admin is LONG GONE, so I can't ask him what he did. The issue: right now ANYONE can gain access to a mailbox and view the mail in it. This is disabled by default, you say, and you have to grant them Full Access? You are right, but the old system admin, I guess, didn't know what he was doing. So right now user A can open up user B's mailbox without being granted permission. Here is what I found out: in the EMC, every mailbox has the Exchange Servers group granted Full Access Permission. Within the Exchange Servers group, Domain Users is listed as a member, and within Domain Users all users are listed as members. So my guess is that because of this all users can access ANY mailbox? Well, GOOD news: the company is small (35 people) and they are not computer savvy, so hopefully no one has figured out they can open anyone's mailbox (from what I can tell, no one has). The next thing I did, with my domain user in the EMC, was delete the Exchange Servers group from Full Access Permissions and grant access to my own user. I made sure that my user was a member of the Exchange Servers group. I went to our OWA site and now I don't have permission to my own mailbox. I re-did everything the way it was with my user and now I'm stuck. Any help? I would think granting a single user that is in the Exchange Servers group Full Access to a mailbox would enable them to open that mailbox? I guess I am wrong.
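
    A rough sketch of auditing and trimming those grants from the Exchange Management Shell rather than the GUI (the group, mailbox and user names below should be replaced with whatever actually appears in your permission entries):

        # List explicit (non-inherited) Full Access grants on every mailbox
        Get-Mailbox -ResultSize Unlimited | Get-MailboxPermission |
            Where-Object { $_.AccessRights -like "*FullAccess*" -and -not $_.IsInherited }

        # Remove an over-broad explicit grant (inherited entries have to be corrected
        # at the database/AD level instead), then grant one specific user where needed
        Get-Mailbox -ResultSize Unlimited |
            Remove-MailboxPermission -User "Exchange Servers" -AccessRights FullAccess -Confirm:$false
        Add-MailboxPermission "SomeSharedMailbox" -User "DOMAIN\alice" -AccessRights FullAccess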

    Read the article

  • Best solution to keep data secure

    - by mrwooster
    What is the simplest and most elegant way of storing a small amount of data in a reasonably secure way? I am not looking for ridiculous levels of advanced encryption (AES-256 is more than enough) and I am only looking to encrypt a small number of files. The files I wish to encrypt are mostly password lists and SSH keys for servers. Unfortunately it is impossible to keep track of the ever-changing passwords for my servers (and SSH keys), and so I need to keep a list of the passwords. Obviously this list needs to be secure, and also portable (I work from multiple locations). At the moment, I use a 10MB encrypted disk image on my Mac (standard .dmg, AES-256) and just mount it whenever I need access to the data. To my knowledge this is very secure and I am very happy using it. However, the data is not very portable. I would like to be able to access my data from other machines (especially ones running Linux), and I am aware that there are quite a few issues trying to mount an encrypted .dmg on Linux. An alternative I have considered is to create a tar archive containing the files and use gpg --symmetric to encrypt it, but this is not a very elegant solution as it requires gpg to be installed on every system. So, what other solutions exist, and which ones would you consider to be the most elegant? Ty
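
    For reference, a minimal sketch of the tar-plus-gpg approach mentioned above (file and directory names are illustrative); it gives symmetric AES-256, needs no container mounting, and works on any box with GnuPG installed:

        # Encrypt: pack the secrets and encrypt with a passphrase
        tar czf - passwords/ ssh-keys/ | \
          gpg --symmetric --cipher-algo AES256 -o secrets-$(date +%Y%m%d).tar.gz.gpg

        # Decrypt on any machine with gpg installed
        gpg -d secrets-20240101.tar.gz.gpg | tar xzf -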

    Read the article

  • SBS 2008: a good memory limit for the SharePoint database

    - by ldelgado
    I manage a small network running on Small Business Server 2008. Lately, the SharePoint embedded database is getting out of control with its memory usage. I've got a total of 16 GB of RAM on this server, and the SharePoint database sometimes uses almost 8 GB of RAM. This never happened before, and it started after I installed Backup Exec 2010; it happens after a backup is performed, so I suspect there is a memory leak involved. I am working on that issue, but this question isn't about that. I would like to limit the amount of memory the embedded database uses, and I know how to do it. My question is: what would be the ideal amount of memory to allocate to SharePoint? There are only 4 users on my network. One of the users uses two computers, but not at the same time. They use SharePoint for a company calendar, and sometimes they share files that way also. Let me know if you need to know anything else. Thanks,
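
    For reference, a rough sketch of capping the Windows Internal Database instance (which hosts the SBS SharePoint content database) via sqlcmd; the 512 MB figure is only there to illustrate the syntax, not as a recommendation:

        rem Cap the Windows Internal Database (SSEE) instance at 512 MB, connecting over its named pipe
        sqlcmd -E -S \\.\pipe\MSSQL$MICROSOFT##SSEE\sql\query ^
          -Q "EXEC sp_configure 'show advanced options', 1; RECONFIGURE WITH OVERRIDE; EXEC sp_configure 'max server memory', 512; RECONFIGURE WITH OVERRIDE;"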

    Read the article

  • XFS: No space left on device

    - by beketa
    I am using XFS on a small HDD (/dev/sdb1, less than 1TB) and storing many small files (-32KB). df -h and -i show that it has available space.

        # df -hv
        Filesystem            Size  Used Avail Use% Mounted on
        /dev/sda3             127G   19G  102G  16% /
        tmpfs                  16G     0   16G   0% /lib/init/rw
        udev                   16G  168K   16G   1% /dev
        tmpfs                  16G     0   16G   0% /dev/shm
        /dev/sda1              99M   20M   75M  21% /boot
        /dev/sdb1             136G  123G   14G  91% /mnt/sdb1

        # df -iv
        Filesystem            Inodes    IUsed     IFree IUse% Mounted on
        /dev/sda3            8421376    36199   8385177    1% /
        tmpfs                4126158        5   4126153    1% /lib/init/rw
        udev                 4124934      671   4124263    1% /dev
        tmpfs                4126158        1   4126157    1% /dev/shm
        /dev/sda1              26112      222     25890    1% /boot
        /dev/sdb1           24905120 11076608  13828512   45% /mnt/sdb1

    However, I get a "No space left on device" error.

        # touch /mnt/sdb1/test
        touch: cannot touch `/mnt/sdb1/test': No space left on device

    I think the inode64 issue is not related to this case because the drive is less than 1TB and df -i shows that there are free inodes. I unmounted and remounted with -o inode64 but got the same error. xfs_repair does not report any problem. xfs_info shows the drive information as follows.

        # xfs_info /dev/sdb1
        meta-data=/dev/sdb1    isize=1024   agcount=16, agsize=2227764 blks
                 =             sectsz=512   attr=2
        data     =             bsize=4096   blocks=35644210, imaxpct=25
                 =             sunit=0      swidth=0 blks
        naming   =version 2    bsize=4096   ascii-ci=0
        log      =internal     bsize=4096   blocks=17404, version=2
                 =             sectsz=512   sunit=0 blks, lazy-count=1
        realtime =none         extsz=4096   blocks=0, rtextents=0

    Any ideas? Thanks!
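
    A rough diagnostic sketch: with isize=1024 and millions of tiny files, ENOSPC despite free blocks is often free-space fragmentation, i.e. no contiguous extents left that are large enough for new inode allocations, which the free-space histogram will show (xfs_db is run read-only here; ideally run it with the filesystem unmounted or quiesced):

        # Summarised free-space histogram, per allocation group
        xfs_db -r -c "freesp -s" /dev/sdb1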

    Read the article

  • HP DL380 G3 2U For Basic Web Server in 2012

    - by ryandlf
    I have an opportunity to pick up a used HP DL380 G3 2U for $100. I'm looking for a basic entry-level web server that I can host a small-to-medium-size website on and more or less learn the ins and outs of running my own web server before I bite the bullet and spend a couple grand on a server. The specs are: dual (2) Intel Xeon 2.4GHz, 400MHz FSB, 512KB cache; 4GB PC2100 ECC registered memory; 6 x 72GB 10K U320 SCSI hard drives; Smart Array 5i RAID controller; redundant power supplies; DVD/floppy; dual Intel GB NICs; USB. Or would I be better off spending a couple hundred bucks on something like this new HP? It seems like the only major difference is SATA and a bit of storage, but I will likely be implementing a separate storage system of some sort anyway. I guess it also wouldn't hurt to mention that I plan on running a Linux server distro, so would the hardware be likely to support Linux on a system that is 4 generations old? I don't mind spending a couple hundred extra dollars if it's a better solution, but as mentioned previously I am simply looking for a server to learn on and probably use for a year or so while I put together a small-to-medium-size website.

    Read the article

  • How to back up servers to an SSH host with low traffic, versioning, and encryption?

    - by leto
    Hello, I've not run backups of my personal stuff for more years than I can remember, until waking up lately and realising, contrary to my prior belief: actually, I care! :) Now I have a central data server at home to which I want to attach external media, and to which I want to save backups of my most important stuff, like years of self-written scripts, database dumps, you name it. I've tinkered with rsync+ssh over the last two years, and also tried tar over ssh, but I don't yet know the simplest and easiest-to-maintain way to do it. Here's my workload: a typical LAMP server (<5GB data) which I'd like to back up fully, so lots of small files, connected via 10Mbit; my personal stuff (<750GB data) from a Mac connected via GE; my passwords in an encrypted container (100MB) from OpenBSD connected via serial PPP; my e-mail from the last ten years (<25GB) as Maildir, which I need to keep in a readable format; and some archives (tar.*) which I need to back up only once and keep in a readable format. (Deleted my ideas, as I'm here for suggestions.) What I need: 1. Use an ssh tunnel for data transfer 2. Be quick with lots of small files 3. Keep revisions 4. Be sure the data I save is not corrupted 5. Intelligent resume functions and the ability to deal with network congestion :) 6. Compressed and optionally encrypted storage 7. Be able to extract data from the backup easily (filesystem-like usage would be nice). How, and with what software, would you back up this stuff? Hints to tools that can solve only part of my problem (like encryption) are also greatly appreciated. Greets
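
    A minimal sketch of one way to cover several of those points with plain rsync over SSH and hard-linked snapshots (host names and paths are illustrative); each run stores what looks like a full tree, but unchanged files are hard links to the previous snapshot, which also gives cheap filesystem-like access to old revisions:

        #!/bin/sh
        # Illustrative snapshot backup: rsync over ssh, hard-linking against the previous run
        DEST=backup@datahome:/backups/lampserver
        STAMP=$(date +%Y-%m-%d)

        rsync -az --delete --numeric-ids \
              --link-dest=../latest \
              -e ssh /etc /var/www /home "$DEST/$STAMP/"

        # Point 'latest' at the newest snapshot for the next run
        ssh backup@datahome "ln -nsf $STAMP /backups/lampserver/latest"

    Encryption at rest and corruption checks would still need something on top (for example an encrypted filesystem on the external media and periodic checksums), so treat this only as the transfer-and-versioning piece.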

    Read the article
