Search Results

Search found 18191 results on 728 pages for 'single board'.

  • In Icinga (Nagios), how do I configure hosts with multiple IPs?

    - by gertvdijk
    I'm setting up Icinga (a Nagios fork) and I have some machines with multiple interfaces. Some services listen on only one of them, and to check them correctly I'd like to know whether it's possible to configure multiple IP addresses for a single host in Icinga. Here's a minimal example:

    Remote server:
      eth0: 1.2.3.4 (public IP)
      eth1: 10.1.2.3 (private IP, secure tunnel)
      Apache listening on 1.2.3.4:80 (public only)
      OpenSSH listening on 10.1.2.3:22 (internal network only)
      Postfix SMTP listening on 0.0.0.0:25 (all interfaces)

    Icinga server:
      eth0: 10.2.3.4 (private IP, internet access)

    Now if I define a host:

      define host {
          use        generic-host
          host_name  server1
          alias      server1.gertvandijk.net
          address    10.1.2.3
      }

    this will not check the HTTP status correctly. And defining an additional host:

      define host {
          use        generic-host
          host_name  server1-public
          alias      server1.gertvandijk.net
          address    1.2.3.4
      }

    will check everything, but shows up as two independent hosts. I want to 'aggregate' these two hosts so they show up as a single host, while keeping an easy way to check each service on its proper address. What is the most elegant, configuration-line-saving solution to this? I have read about several plugins that work around this, but I can't figure out what the current way to address it is. The solutions I found go back to 2003, while I'm running Icinga 1.7.1, which already supports the address6 option - but using that triggers IPv6-only resolving on the hostname. Ideally, I would like to configure Icinga to be intelligent enough to know that the Postfix instance running on 10.1.2.3:25 is the same one as on 1.2.3.4:25 and thus not raise two alarms. I guess this has been tackled before and sysadmins have it set up by now. Please share your solution. Thanks! :)
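
    One possible approach, sketched here only as an illustration and not taken from the post: keep a single host object, store the second IP in a custom host variable, and have the services that need the other interface pass it to the plugin explicitly. The variable and command names are illustrative.

      define host {
          use              generic-host
          host_name        server1
          address          10.1.2.3         ; address used for the host check
          _public_address  1.2.3.4          ; custom variable (illustrative name)
      }

      define command {
          command_name  check_http_on_ip
          command_line  $USER1$/check_http -I $ARG1$
      }

      define service {
          use                  generic-service
          host_name            server1
          service_description  HTTP
          check_command        check_http_on_ip!$_HOSTPUBLIC_ADDRESS$
      }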

  • Wireless router setup for 1-1 NAT

    - by Carlos
    What I have:
      - A Linksys WAG160N router with firmware version 2
      - A "pool" of 5 external static IPs provided by my ISP (213.xx.xxx.n)
      - All the required configuration values for the static IPs, such as subnet mask, gateway and static DNS 1, 2, 3

    Current WAN configuration:
      - Encapsulation: RFC 2364 PPPoA
      - Multiplexing: VC
      - QoS type: UBR
      - DSL modulation: MultiMode

    What's connected to the network:
      - 1 x server (that I want to make available to the outside)
      - 5 x desktops with static internal IPs, such as 192.168.0.xx
      - 2 x network printers, also with internal static IPs
      - 2 x laptops
      - 1 x NAS (network attached storage), also on a static IP

    What I want to do: I would like to make the server available from outside the network, for example from your house. The problem is that I'm not really sure how to do this. I have tried following the steps in the Linksys instruction manual, but they do not seem to work; once I set it up as shown below, I lose internet access and all hell breaks loose. Going into further detail, I would prefer the network to change as little as possible: all the computers should stay networked with each other and only the server should be accessible from outside the network.

    What I need help with: I have read that it is possible to set up 1-1 NAT (I know where it is in the menu but have no clue what it does) so that I can NAT a single public IP directly to a single private IP (in our case the server). But how do I do that? Or is there an alternative?
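
    For reference, 1-1 NAT simply maps one public address to one private address in both directions. A hedged sketch of the same idea expressed as Linux iptables rules, just to illustrate what the feature does (the addresses are placeholders, not from the post; on the WAG160N the web UI form takes the two addresses instead):

      # map one public IP to the internal server (placeholder addresses)
      iptables -t nat -A PREROUTING  -d 203.0.113.10  -j DNAT --to-destination 192.168.0.10
      iptables -t nat -A POSTROUTING -s 192.168.0.10  -j SNAT --to-source 203.0.113.10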

  • Creative software leftovers - ASIO error message

    - by Tony Patriarche
    I temporarily installed an old Creative Audigy 2 sound card on my Vista x64 Home Premium computer. Bad idea! I uninstalled the board and all the software visible in Control Panel. Now, with one particular app (Sibelius), I keep getting a start-up message: "CTASIO Warning: Creative ASIO: there are no Creative audio products installed on the system that support ASIO". I offer this as a candidate for "most useless message", but that's beside the point. I used a commercial registry cleaner (PC Tools Registry Mechanic) and then edited the registry looking for "creative", "audigy" and "ASIO". After removing everything I could find, I still get the message. Any suggestions?

  • Any dangers in using DDR memory with a higher frequency than the FSB?

    - by raw_noob
    I'm looking to upgrade the memory in an older motherboard. The processor is an AMD Sempron 2500+ with a maximum speed of 333/166 MHz. The motherboard is an MSI MS-7061 (KV3M-V), which accepts up to 2 GB of DDR memory (PC2700 maximum) in 2 slots and has a maximum FSB of 333 MHz. The board does not have dual-channel support. Existing memory includes a stick of 512 MB PC3200, which seems to be running OK (presumably at PC2700) but is rated at 200 MHz, which is below the FSB speed. The other stick is 256 MB PC2100/133 MHz, again below the FSB speed. (All figures from CPU-Z.)

    I have a chance to acquire a single used stick of PC3200/400 MHz memory very cheaply. Crucial's system scanner seems to suggest that this will be OK with my system, but other sites have suggested that running memory at a higher frequency than the FSB can cause instability. Is this true? Would I be better off waiting until I can buy the correct PC2700/333 MHz stick?

    I'm assuming that the mixed memory I have at present is running as 768 MB at 133 MHz. Is this a reasonable assumption? If so, would you expect the performance difference between 768 MB/133 MHz and 1 GB/333 MHz to be very noticeable? If I install the new 1 GB 400 or 333 MHz stick in slot 1, am I right in thinking that adding back the existing 512 MB/200 MHz stick in slot 2 would pull the whole 1.5 GB of system memory down to 200 MHz? If so, which would be better: 1.5 GB at 200 MHz, or the single 1 GB stick at the full 333 MHz that the FSB permits? Is more headroom more important than extra speed?

    Any help - or even opinions - gratefully received. I can't find reliable information, and I can't afford to make expensive mistakes.

  • High speed network configuration

    - by Peter M
    Sorry if this seems to be a stupid question; I'm not sure how to specify what I want to know when checking Google. I will have 2 or 3 devices pumping out data on a 100Base-T port. The combined data rate of all devices is about 15 MB/s, which exceeds the optimal 100Base-T channel capacity (about 12 MB/s) but is well within the realms of a 1000Base-T connection. Each device will be sending a burst of data in the form of an FTP transfer to a common, single host computer in a sequential manner, i.e.:

      Device A establishes an FTP connection and transfers data
      Device B establishes an FTP connection and transfers data
      Device C establishes an FTP connection and transfers data

    It may be that the A&B, B&C and C&A transfers overlap in the time domain to some extent. There will be minimal traffic going back from the computer to each device (in general, whatever is needed to support the FTP transfers), and the network will be dedicated to transferring data between these devices and the host computer.

    Is it possible to use a switch to combine the multiple incoming 100Base-T streams into a single outgoing 1000Base-T stream? If so, what features should I be looking for in a switch? Or would it be better to have 3 physical point-to-point 100Base-T dedicated connections between each device and the host computer (thus having at least 3 physical Ethernet interfaces on that computer)?

    Note that I can't change the interface on the devices, but I am free to choose the network and host computer configuration. Thanks for your help. Peter

  • Vacation scheduler/viewer

    - by Norfeldt
    I'm looking for a solution that allows multiple people to plan and announce their vacation by putting it in their electronic calendar and inviting a dedicated "robot" email address. On the other side I should be able to get a quick overview of each person's vacation and make a printout that I can put on a board.

    Example: John puts his winter vacation for week 7 into his calendar and invites [email protected]. Ben does the same thing for weeks 4 and 5 and invites [email protected]. Dilbert hosts the [email protected] account and prints out an overview for the next 3 months. Each person's vacation is indicated by name and/or colour on the printout.

    I would like to do this with standard business software like Outlook 2010, without installing too much extra software. At the same time it should be easy and quick to make the printouts without too much fiddling. Am I dreaming?

  • Volume randomly turning itself down on Windows 7 64-bit

    - by Arda Xi
    This is the weirdest issue I've ever encountered with my PC. Every so often, my sound will start playing back at a lower volume. This happens when watching video or listening to music, all independently. It usually lasts up to a minute, after which the volume goes back up again. The weird thing about it is that the volume control in Windows remains at 100%, even though the volume is audibly a lot lower. (No, I'm not going deaf, it's just my PC. I checked.) I just have no idea where to even start troubleshooting this. I'm using Windows 7 64-bit with an on-board Realtek sound card.

    Oh, just in case someone finds this question on Google or whatever: making sure the "Do nothing" option (under the Communications settings in the Windows Sound control panel) is selected may fix your problem. Unfortunately, it did not work for me; my settings all seem fine. This is my audio slider (screenshot taken while the issue showed).

  • PHPMyAdmin - Error 500

    - by christian.thomas
    I have scoured the board but can't seem to find anything that's helped yet. If I go to http://localhost/ it's fine; if I go to http://localhost/phpmyadmin I get an 'Error 500: Internal Server Error'. There doesn't seem to be anything that shows up in the log files either. I've tried the RewriteLog as mentioned in "PHPMyAdmin 500 Internal Server Error", but that doesn't really seem to help either; nothing gets written to it when I've got:

      # Logfiles
      ErrorLog /home/www/beta.**.com/logs/error.log
      CustomLog /home/www/beta.**.com/logs/access.log combined
      RewriteLog /home/www/beta.**.com/logs/rewrite.log
      RewriteLogLevel 9

    I've tried uninstalling the package and re-installing it, but that hasn't helped either. Anyone got any other suggestions? I'm running Debian and Apache 2.
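
    A few generic first steps that might surface the actual error, sketched here under the assumption of the Debian phpMyAdmin package and default Apache log locations:

      # watch Apache's error log while reproducing the request
      tail -f /var/log/apache2/error.log &
      curl -sI http://localhost/phpmyadmin/ | head -n 1

      # rule out a PHP parse error in the packaged config
      php -l /etc/phpmyadmin/config.inc.php

      # confirm the phpmyadmin Apache snippet is actually enabled
      apache2ctl -S
      ls /etc/apache2/conf.d/ | grep -i phpmyadmin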

  • Do most front and rear USB connections deliver the same power and performance?

    - by Bratch
    I was reading this article, Three Monitors For Every User, and there were some comments about rear USB ports being able to deliver more power than front USB ports because they are directly connected to the motherboard and closer to the power supply (via circuit-board traces). Even though the front USB ports may have connectors farther from the power supply, and there are cables from the motherboard to the front ports, I think the difference in power would be negligible (unless the case is over 5 meters long). Does anyone know for sure whether they are the same or different? Note that I'm not talking about an older case where the front might have been USB 1.1 and the rear USB 2.0; a modern case would have USB 2.0 on all ports. And of course using a powered hub would deliver plenty of power.

  • What are the best possible ways to benchmark RAM (no-ECC) under linux / arm?

    - by moul
    I want to test the integrity and overall performance of non-ECC memory chips on a custom board. Are there tools that run under Linux so I can monitor the system and the temperature at the same time? Are there any non-ECC-specific tests to do in general?

    EDIT 1: I already know how to monitor the temperature (I use a platform-specific feature, /sys/devices/platform/......../temp1_input). Where I am so far:

      wazoox: it works, but I have to code my own tests
      Jason Huntley:
        ramspeed: does not work on ARM
        stream benchmark: it works and is very fast, so I'll check whether it's accurate and complete
        memtest: I'll try later, since it does not run directly from Linux
        stress for Fedora: I'll try later too; it's too problematic for me to install Fedora right now

    I also found this distribution: http://www.stresslinux.org/sl/

    I'll continue to check tools that run directly under Linux without too many dependencies; after that I may give a try to solutions like stresslinux, memtest, or stress for Fedora. Thanks for your answers, I'll continue to investigate.
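
    A small sketch of the kind of thing that runs entirely from userspace: memtester (packaged by most distributions) exercising RAM while the platform's temperature node is logged. The sysfs path is a placeholder for whatever node the board actually exposes.

      # log the temperature once a second while memtester exercises 256 MB for 5 passes
      ( while true; do
            date +%s | tr '\n' ' '
            cat /sys/devices/platform/EXAMPLE/temp1_input   # placeholder path
            sleep 1
        done ) > temp.log &
      LOGGER=$!
      memtester 256M 5
      kill $LOGGER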

  • Creating a bootable USB drive from a distro split over two DVD ISOs

    - by Kev
    I am searching and not finding the right way to do this. Please note, I don't think I'm trying for anything strange here. I just want to make a bootable USB stick of a single OS that happens to be larger than one DVD and happens to be larger than FAT32 will allow for in a single file. On our slow connection I spent a long time downloading CentOS 5.9's two DVD ISOs:

      CentOS-5.9-x86_64-bin-DVD-1of2.iso (4.4 GB)
      CentOS-5.9-x86_64-bin-DVD-2of2.iso (718 MB)

    I have a USB stick that I want to somehow get these two ISOs on. Since the first one is 4.4 GB, I can't use ISO2USB because it insists on FAT32. I cannot find an alternative that lets you specify more than one ISO image--of the same distro, I'm not trying for some fancy multi-boot thing--to put on the same stick. I guess I should have downloaded the CD ISOs, but I thought I was "saving time" because then I wouldn't have as many files to run through the md5 checker. There's no IMG file of the whole thing (only a net install version, which I don't want--I want to pre-download everything), otherwise I would've gone for that.

    So, given that I have these two DVD ISOs, how can I get them onto a stick that will boot and make use of both of them properly to install CentOS somewhere? Again, I don't think this is anything out of the ordinary, yet I can't find software/docs that seem to support this. Am I stuck re-downloading everything in CD-sized ISOs just to do this? I found this, but it doesn't run on Windows. I am using Windows to prepare the stick.

  • Multiple servers vs 1 big server performance

    - by pistacchio
    Hi to all! My team of developers has suggested a server structure for an upcoming project we are developing. Our structure is "logical", meaning that the various logical components of the application (it is a distributed one) rely on different servers. Some components are more critical than others and will be subjected to more load. Our proposal was to have 1 server per component, but the hardware guys suggested replacing the various machines with a single, bigger one running virtual servers. They're going to use blade servers.

    Now, I'm not an expert at all, but my question to them was: if we need, for example, 3 machines with a 2 GHz CPU and 2 GB of RAM each, and you give me 1 machine with 3 2 GHz CPUs and 6 GB of RAM, is it the same? They told me it is. Is this accurate? What are the advantages and disadvantages of the two solutions? What are the generally accepted best practices? Could you point me to some reference dealing with the problem? Thank you in advance!

    EDIT: Some more info. The (internet/intranet) application is already layered. We have some servers on the DMZ that will expose pages to the internet, and the databases are on their own machines. What we want to split (and they want to join) are some web servers that mainly expose web services. One is a DAL that communicates with the database layer, one is our Single Sign On / User Profile application that gets called once per page, and one is a clone of what is seen on the internet, to be used on our LAN.

  • Port knocking via SSH tunnels

    - by j0ker
    I have a server running in my university's internal network. There is only one SSH daemon running on it, secured by port knocking with knockd. This works fine if I connect from within the internal network. But since the server has no external IP, I have to tunnel into the internal network every time I want to access the server from outside. And since a tunnel only carries a single port, I cannot do the port knocking as easily as from an internal client. In fact, I can't get it to work at all.

    What I'm trying is to open tunnels for all the different ports that have to be knocked, and then send TCP SYN packets into the tunnels. But that doesn't work, even for a single port. If I establish the tunnel on the first port in the knock sequence and send a packet through it, it doesn't reach the server: there is no entry in the knockd log file, while there should be something like "123.45.67.89: openSSH: Stage 1" (as shown with internal knocks). So I guess the problem isn't in my knocking script but is a more general one. Are there any known problems with what I'm trying to do? Is it even possible, or am I missing something? Thanks in advance!
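
    A hedged sketch of one common workaround: a forwarded port (ssh -L) only relays data over an already-established local TCP connection, so raw SYN packets aimed at the local end are answered by the local TCP stack and never show up as knocks on the server's interface. Running the knock from the gateway you already tunnel through sidesteps this. Host names and the knock sequence below are placeholders.

      # knock from the gateway, then hop to the server, in one command from outside
      ssh -t user@gateway.example.edu '
          for p in 7000 8000 9000; do      # placeholder knock sequence
              nc -z -w 1 server.internal "$p"
          done
          ssh user@server.internal
      '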

  • Linux: disable USB without disabling power

    - by Ergot
    TLDR: I want to toggle between the following uses of a USB port via the terminal:

      use it like a normal USB port
      only supply power to charge

    Story: I recently got something like a Magna Doodle that can save your drawings to PDF, which can then be moved to your computer via USB. The thing is that you can't save anything while it's plugged in. Because USB is the only way to charge it, because it bugs me that I can't find a software solution, and out of laziness, I want to keep it plugged in and toggle the data connection to the computer only when needed. I noticed that it charges and is usable when it is plugged in while the computer is shut down or suspended, so I guess there's a way to do it.

    Tech info:
      Computer: ThinkPad X201
      Linux kernel: 3.14.5-1-ARCH
      "Magna Doodle": Boogie Board Sync
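
    A hedged sketch of one mechanism that may do this: the kernel's USB authorization knob detaches a device from the host while the port usually keeps supplying VBUS power (whether the device still charges depends on its own charging logic). The device path is a placeholder; the real one can be found with lsusb -t or by watching dmesg when plugging in.

      DEV=1-1.2                                        # placeholder bus path under /sys/bus/usb/devices/
      echo 0 > /sys/bus/usb/devices/$DEV/authorized    # device disappears from the host, power usually stays on
      # ... let it charge / save drawings ...
      echo 1 > /sys/bus/usb/devices/$DEV/authorized    # re-attach to sync files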

  • How can I create a simple Exchange 2010 backup solution?

    - by bduncanj
    I'm sure this question's been asked a dozen times in one form or another, but after much searching there doesn't appear to be an obvious, simple recovery solution for a single Exchange box. We're using Exchange 2010 on a single server; the server hosts the AD and nothing else on the network uses the AD. The intent is to run this server as you would an externally hosted Exchange server - access only via HTTP (RPC mode or OWA), all other ports blocked. I have a daily backup running, using the Windows Server 2008 volume shadow copy service to back up the Exchange data to an external hard disk. My question is, how do I perform a bare-metal recovery of this server?

      1) Do I need to explicitly include the Active Directory information in this nightly backup, or will it be there by virtue of the fact that this system is the primary AD server and the Windows backup service knows this?
      2) I understand I can re-install Server 2008 onto new hardware (in the case of hardware failure) and then run the Exchange 2010 setup.exe with a /recover argument, referencing the backup volume.
      3) It is acceptable to have some downtime during this recovery process. But is there anything else I should be aware of?

    Thanks! Duncan
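
    For reference, a hedged sketch of the kind of nightly job Windows Server Backup can run from the command line (drive letters are placeholders; an Exchange-consistent backup relies on the Exchange VSS writer being registered, and -allCritical pulls in the system state needed to rebuild the domain controller):

      rem back up all critical volumes (including system state) plus the data volume to drive E:
      wbadmin start backup -backupTarget:E: -include:C:,D: -allCritical -vssFull -quiet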

  • Where should I plug in my monitor -- Motherboard or Graphics card?

    - by Jeremy White
    Assuming I am using the following equipment...

      motherboard with HDMI/DVI and no embedded graphics
      discrete graphics card (nVidia or ATI) in a PCI-E slot
      Intel CPU with integrated graphics

    ...where should I plug my monitor into the computer? Presumably I'll get the fastest speed in games connected directly to the graphics card, but there are also power savings when connecting to the motherboard and using the Intel on-board graphics. I've read that some motherboards can switch automatically between the Intel graphics and the discrete graphics. Is that something that works well, and where do I connect the monitor to enable that?

  • RAID on ICH9R chipset

    - by user500982
    Hi, I'm looking at buying this motherboard: http://www.supermicro.com/products/motherboard/ATOM/ICH9/X7SPE-HF-D525.cfm

    I'm wondering, though, whether the chipset will support the RAID configuration I need. I'm looking to configure the following arrays:

      RAID array 1: 2 x 2 TB disks in RAID 0
      RAID array 2: 2 x 2 TB disks in RAID 0
      RAID array 3 (not actually an array): 1 x 300 GB disk not in RAID, to be used for the OS and boot

    So in total there would be 5 drives, and the board supports 6, so I'm good when it comes to connections. However, I have heard that some chipsets only support one RAID array (volume), so either all drives are individual or all are in the array. I must have 2 separate RAID arrays independent of each other, and a 5th drive not in any array. Does anybody know if this setup will work? Thanks, -Stewart

  • Copying files between linux machines with strong authentication but without encryption

    - by Zizzencs
    I'm looking for a suitable program to copy files from one Linux machine to another. The program should be able to do authentication, but it should not do encryption. The reason behind the latter is the lack of CPU power to do the encryption. I copy backups from ~70 machines to a single backup server simultaneously. The single server is an HP ProLiant DL360 G7, with a 10 Gbps Ethernet connection and an FC storage backend that can do 4 Gbps. Through FTP I can write ~400 MB/sec to the storage (that's about what I want), but through ssh with arcfour I can only do ~100 MB/sec while having 100% CPU usage. That's why I want file transfers not to be encrypted.

    The alternatives that I found are not really suitable:

      rcp: no authentication, forget it
      FTP: making the authentication "secure" (at least preventing plain-text password exchange) is possible but not really easy, and I haven't found a way to force any FTP daemon to encrypt the control channel (for the authentication) but not the data channel (for data transfers)
      SCP/SFTP: in fairly recent ssh(d) implementations you can't turn off encryption; the best you can do is use the arcfour cipher, but it still uses too much CPU power for my needs
      rsync over ssh: same problems as with SCP/SFTP
      plain rsync: from the rsyncd documentation: "The authentication protocol used in rsync is a 128 bit MD4 based challenge response system. This is fairly weak protection, though (with at least one brute-force hash-finding algorithm publicly available), so if you want really top-quality security, then I recommend that you run rsync over ssh." It's a no-go.

    Is there a protocol/program that can do exactly what I want? (A big plus would be if it could work on Windows as well and/or if it supported rsync-style copying/synchronization, e.g. copying only the differences.)
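
    If the rsync daemon's challenge-response plus source-address filtering turns out to be acceptable after all, a minimal sketch looks like this (module name, path, user and network are placeholders, not from the post):

      # /etc/rsyncd.conf on the backup server
      [backups]
          path = /srv/backups
          read only = false
          auth users = backupuser
          secrets file = /etc/rsyncd.secrets     # lines of the form user:password, mode 600
          hosts allow = 10.0.0.0/24

      # on each client: unencrypted transfer over rsync's own protocol (TCP 873)
      rsync -a --password-file=/etc/rsync.pass /data/ backupuser@backupserver::backups/host01/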

  • Correct CPU Frequency in BIOS

    - by akula
    One of my machines is a 10-year-old one. I can't discard it due to some sentiment (Mom).

      Legacy board: Mercury 810e, 133 MHz FSB
      Processor: Pentium III Tualatin 1.2 GHz

    Observation: I see an entry in the BIOS for the CPU frequency, from 6.0 to 11.0; the last value is "Safe Mode". I don't know what value to choose for my CPU, so I'm running the CPU in "Safe Mode". What is the correct value for this CPU? Is Safe Mode really safe to run this CPU?
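
    A hedged back-of-the-envelope check, assuming the 6.0-11.0 entry is the CPU clock multiplier: core clock = FSB x multiplier, so for a 1.2 GHz Tualatin on a 133 MHz bus the nominal setting would be 1200 MHz / 133 MHz, or roughly 9.0. Intel CPUs of that era normally have the multiplier locked, so other values would typically be ignored or fail to boot.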

  • Disable RAID Controller

    - by B.Mr.W.
    I have some decent HP ProLiant servers that come with the "HP Smart Array P410i Controller" enabled. I am using these boxes to set up a Hadoop cluster, and RAID is for sure a no-no for Hadoop, since the application itself takes care of data redundancy and the extra intelligence provided by RAID won't be helpful and might hurt performance. I tried to disable the device in the BIOS, and the box cannot even access the disks afterwards. So I am assuming the controller sits between the disks and the motherboard, and we have to turn it on and configure it to "level 0" or something like that. I am wondering what I should do to "disable" the RAID functionality so it will fit the Hadoop environment.
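
    A hedged sketch of the usual workaround on Smart Array controllers without a JBOD/HBA mode: expose each physical disk as its own single-drive RAID 0 logical drive, for example with hpacucli. The slot number and drive IDs below are placeholders; 'ctrl all show config' lists the real ones.

      hpacucli ctrl all show config                 # find the slot and physical drive IDs
      hpacucli ctrl slot=0 create type=ld drives=1I:1:1 raid=0
      hpacucli ctrl slot=0 create type=ld drives=1I:1:2 raid=0
      # ...one logical drive per physical disk...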

  • CloudFront with Custom Origin and ELB

    - by kmfk
    We are using CloudFront for our static assets but also wanted to allow for gzip. We set up a new distribution with a custom origin pointing back to our application servers, which are behind an elastic load balancer. We manually keep the files in sync across the cluster and update them when we publish. However, with this setup we get nothing but Miss and RefreshHits from CloudFront, which so far has defeated the purpose. Are there any additional settings needed in order to use an ELB as your custom origin? The docs reference this as a viable solution. When we point the distribution at a single server in our production cluster, CloudFront caches our assets properly. Is it possible that the sticky-sessions cookie, and the subsequent header that gets added by it, could be the issue?

      Cache-Control: no-cache="set-cookie"   // added by the load balancer

    Any ideas? FYI - currently we have our custom origin pointing to a single EC2 instance, so caching is working correctly, in case you try to curl the file below. Example headers:

      curl -I http://static.quick-cdn.com/css/9850999.css
      HTTP/1.0 200 OK
      Accept-Ranges: bytes
      Cache-Control: max-age=3700
      Cache-Control: no-cache="set-cookie"
      Content-Length: 23038
      Content-Type: text/css
      Date: Thu, 12 Apr 2012 23:03:52 GMT
      Last-Modified: Thu, 12 Apr 2012 23:00:14 GMT
      Server: Apache/2.2.17 (Ubuntu)
      Vary: Accept-Encoding
      X-Cache: RefreshHit from cloudfront
      X-Amz-Cf-Id: K_q7Zy3_jdzlEJ85ukELVtdx1GmuXqApAbZZ7G0fPt0mxRMqPKX5pQ==,RzJmPku-rEIO9WlvuSoKa8hiAaR3dLk5KC4cQMWWrf_MDhmjWe8n6A==
      Via: 1.0 28c34f9fbf559a21ee16594849e4fc9c.cloudfront.net (CloudFront)
      Connection: close
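
    A hedged way to confirm whether the ELB-added headers are what CloudFront is reacting to: fetch the same object directly from the ELB and from a single instance, and compare the responses (both hostnames below are placeholders):

      curl -sI http://my-elb-1234567890.us-east-1.elb.amazonaws.com/css/9850999.css > elb.txt
      curl -sI http://ec2-203-0-113-10.compute-1.amazonaws.com/css/9850999.css      > instance.txt
      diff elb.txt instance.txt     # look for Set-Cookie and Cache-Control: no-cache="set-cookie"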

  • Can't reliably ping 6224 router from directly-attached system

    - by David Mackintosh
    OK, here's my situation. This is on the internet. The 6224 is the router in this picture and physically resides in Kanata. Both VLAN 1697 and VLAN 3994 are provided by an internet service provider, delivered over a single 1 Gb Ethernet wire. The Kanata hosts are directly attached to the 6224; the other two sites are remote. VLAN 3994 is a single IP address space, so theoretically it shouldn't matter physically where the hosts on that subnet are.

    Here's the problem. I have a monitoring system which is connected further into the internet, so probes from the monitor come into this diagram on the 1697 VLAN. When I ping hosts at Albert or Bells Corners from the internet, there is 0 loss; the connection looks perfect. When I ping hosts at Kanata, I lose anywhere from 10 to 40% of the pings. The loss is not predictable, but when I do lose pings, I always lose at least 3, usually 4, rarely more, in a bunch.

    I have attached a monitor directly to the 6224 in Kanata on VLAN 3994. When this monitor pings the 6224's routing interface, I see exactly the same loss pattern - but not at the same time as the loss from the remote system. Ping time is around 1 ms. When the monitor pings another system directly attached to the 6224, there is 0 loss, and the ping time is about 0.1 ms, one-tenth of the time to ping the router. Does anyone know what is going on here?

  • a brand new FS based on a database without using fuse

    - by Devrim
    Hi all, to serve millions of files out of a single directory, to be able to connect to a drive from hundreds of endpoints, and for some other reasons (to avoid gluster/NFS/all FS-based networking solutions), I want to evaluate the possibility of making a filesystem that's based on MongoDB (or any other database). Basically, it works like fusefs: every single file is kept in Mongo GridFS. In theory, I do:

      mount mongodbfs /mountPoint mongodb://localhost

    then when I say

      touch /mountPoint/test.txt

    this file is inserted into MongoDB. This FS will also store uid/gid and permissions with the file; we can throw hundreds of servers at it, and no useradd will be necessary. I'm not planning to include all the features of a regular FS, just the ones we need. My question is: how do I start my quest for resources, books, links, people and developers who'd help me implement this, at least as a proof of concept? Is it feasible? What should I expect as a timeline for such an undertaking? Please keep in mind we're talking about a gazillion small files and folders.

  • How to force 640*480@60Hz screen resolution on xubuntu 12.04

    - by c2h2
    It seems xubuntu won't correctly set the resolution to 640x480@60Hz in its Display settings, and so I am unable to drive my very small 6.4 inch Mitsubishi VGA panel properly over the VGA cable. I have tried to hack both the X11 config (/etc/X11/xorg.conf) and the xfce4 config, but all the documentation I can find is outdated and the config files have moved to other locations. Can someone give me a hand? I'll mark the correct answer for other people to use. Thanks!

    EDIT: The board is an Intel Atom D2700; the GPU is an SGX545. I tried:

      xrandr --output default --mode 640x480

    It seems to work fine, but the refresh rate is 75 Hz and the screen only supports 60 Hz. So I used:

      xrandr --output default --mode 640x480 --rate 60

    but it gives the error:

      xrandr: Failed to get size of gamma for output default

    Can anyone point me in any direction?
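
    A hedged sketch of the usual way to force a mode the driver doesn't advertise: generate a 60 Hz modeline with cvt, register it, and switch to it. The output name "default" is taken from the post; the modeline numbers below are what cvt 640 480 60 typically prints, so regenerate them on the actual machine.

      cvt 640 480 60
      xrandr --newmode "640x480_60.00"  23.75  640 664 720 800  480 483 487 500 -hsync +vsync
      xrandr --addmode default "640x480_60.00"
      xrandr --output default --mode "640x480_60.00"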

  • JFFS2 poor mount performance

    - by Marcin Polkowski
    I run multiple ARM boards with Debian Linux installed. Each board is equipped with 512 MB of NAND memory. I've observed that after ~3 months of continuous running, the boot time has increased significantly - it takes over 3 minutes to mount the filesystem (JFFS2). The system was using about 35% of the available storage, so I removed unnecessary files (getting down to ~18%), but this didn't change anything. Then I realized that my software produces directories that are left empty, so I removed ~500 empty and unnecessary dirs. This didn't help either. After the system has started, I see the JFFS2 garbage collector (jffs2_gcd_mtd4) running and occupying over 90% of the CPU.

    Now my question: is there a way to "optimize" the JFFS2 filesystem for better performance - faster booting? (My system has a limited time to boot up.) It would be great if this optimization could be done remotely - I have no physical access to the boards.
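
    A hedged sketch of one commonly used lever: JFFS2's summary support (CONFIG_JFFS2_SUMMARY in the kernel) lets the mount skip most of the media scan, provided the image carries summary nodes. With mtd-utils that means building the image and then running sumtool over it; the erase-block size below is a placeholder and must match the actual NAND, and -n (no cleanmarkers) is the usual choice for NAND.

      mkfs.jffs2 -n -r rootfs/ -e 128KiB -o rootfs.jffs2
      sumtool -n -e 128KiB -i rootfs.jffs2 -o rootfs.summed.jffs2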
