Search Results

Search found 2273 results on 91 pages for 'smart metering'.

Page 69/91 | < Previous Page | 65 66 67 68 69 70 71 72 73 74 75 76  | Next Page >

  • Replacing DropBox with: Amazon S3 + SSL + GPG/TrueCrypt + Mounting on OSX ??

    - by Matt Rogish
    So, right now we're using DropBox to share various data files between approximately 10 Mac OS X systems. However, we already have an S3 account, and putting everyone on the lowest DropBox plan at $10/mo seems too expensive. So, I am contemplating something that would allow us to replace DropBox with our own home-grown solution. We are all fairly technical people and/or smart enough to follow some steps, so if it's not as "user friendly" as DropBox, we're all comfortable with that. There are plenty of docs out there that have bits and pieces of what I want, but some of the tools don't seem to fit the requirements:

    1. Transport security via SSL to the bucket
    2. Encryption of bucket contents
    3. Bi-directional syncing

    Most of the scripts I can find on the internet use "duplicity", which appears to fail #1 (it doesn't look like duplicity supports SSL to S3 - the docs don't say, but the protocol looks like plain old http: http://www.nongnu.org/duplicity/duplicity.1.html#sect6 ). Many scripts use gpg to encrypt files. This seems like it could work; however, I have to make sure that each OSX client is able to use the same key to encrypt and decrypt files (key management is left to me to manage). Finally, most of the scripts use one-way replication, e.g. using Amazon S3 as a simple backup store. As we'd be using Amazon S3 as the "repository", they fail this one. Whew. So, I'd love a single tool that does this, but after an exhaustive search I don't think one exists. I'd be happy just knowing which tools out there can fulfill my 3 requirements; after that I can stitch together the rest. Any thoughts? THANKS!
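
    A minimal sketch of the encrypt-and-upload half of such a home-grown setup, assuming the AWS CLI (which talks to S3 over HTTPS by default, covering requirement #1) and a single shared symmetric GPG passphrase distributed to every client (covering #2, with key management reduced to protecting one file). Bucket name and paths are hypothetical:

        #!/bin/sh
        # Encrypt files with a shared symmetric key, then push to S3 over HTTPS.
        SRC="$HOME/shared"
        BUCKET="s3://example-team-bucket"     # hypothetical bucket
        PASS="$HOME/.share-passphrase"        # same passphrase file on every client
        STAGE="$HOME/.shared-staged"

        mkdir -p "$STAGE"
        cd "$SRC" || exit 1
        for f in *; do
            # --symmetric sidesteps per-client keypair management entirely
            gpg --batch --yes --symmetric --passphrase-file "$PASS" \
                -o "$STAGE/$f.gpg" "$f"
        done
        # sync only uploads files whose size/mtime changed; transport is SSL
        aws s3 sync "$STAGE/" "$BUCKET/encrypted/"

    The reverse direction is an aws s3 sync down plus gpg --decrypt; the genuinely hard part remains requirement #3, deciding which side wins when both ends have changed a file.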

    Read the article

  • Barriers to IPv6 deployment: addressing

    - by sysadmin1138
    There are several things that are keeping IPv6 deployment from being a topic of active discussion here at my work. There are the usual technical issues, but one non-technical one appears to be a major stumbling block on the path to actually getting a deployment project going.

    Addresses, memorizing of. Specifically, IPv4 addresses are comprehensible, and IPv6 addresses just look like a big long string of hex. The human mind has real trouble memorizing lists of more than 7-8 items, and an IPv4 address (192.168.231.148) has four items in it, which makes it easy for us to memorize. A fully populated IPv6 address has not only 8 sections, but each section has 4 hex digits in it. IPv6 addresses were not designed for memorization. To the technician who knows that the DNS server is at 192.168.42.42 (or more likely "42.42", since the company prefix is likely memorized), the idea of memorizing an IPv6 address fills them with dread. Which in turn makes them much less enthusiastic about participating in an IPv6 deployment project.

    Because of how our network works, we're not fully dynamic in terms of v4 addressing. We have several to many subnets that are entirely statically assigned, for a variety of reasons, chief among them being that the overhead of static DHCP assignments is perceived as being too great. Also, some devices still aren't smart enough to pull DNS addresses out of DHCP while also having a static assignment, and therefore require manually configured DNS settings. Therefore, some v6 address memorization will have to be done.

    We're not under any mandate to get v6 out the door, so we don't have pressure from the top. However, it is time to start prepping our infrastructure to handle IPv6 even if we don't convert wholesale. For those of you who have been in IPv6-land for a while, what short-cut methods do you use to discuss or keep track of subnets and specific/critical IP addresses? If I can help reduce some of the dread surrounding IPv6, we might get the project going.
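
    One widely used trick is to make the last hextet (or two) carry all the meaning and let DNS do the rest, so only a short, deliberately chosen suffix ever needs memorizing. A sketch of such a plan, using the documentation prefix 2001:db8::/32 in place of a real allocation:

        # Hypothetical scheme: 2001:db8:<site>:<vlan>::<service>
        2001:db8:1:10::53      # site 1, VLAN 10, DNS   (service port as mnemonic)
        2001:db8:1:10::25      # site 1, VLAN 10, SMTP
        2001:db8:2:20::80      # site 2, VLAN 20, web

        # Zero compression hides everything in between, so verifying one is short:
        ping6 2001:db8:1:10::53

    With a scheme like this, the technician memorizes "::53 on VLAN 10" - not much worse than "42.42".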

    Read the article

  • Accidentally dd'ed an image to wrong drive / overwrote partition table + NTFS partition start

    - by Kento Locatelli
    I screwed up and set the wrong output for dd when trying to copy a FreeNAS iso, overwriting the wrong external hard drive. Ironically, I was trying to set up a FreeNAS server for data backup...

    - The external drive is only used for data storage; the system is entirely intact
    - The drive had a single NTFS partition filling the entire device (2TB WD Elements)
    - The drive originally had an MBR partition table; it now shows as having a GPT, presumably from the FreeNAS image
    - The drive was mounted at the time, with maybe a couple of kB of data written/read after running dd
    - The drive is just a few months old and healthy (regular SMART / fs checks)
    - I have not rebooted the OS (CrunchBang); /proc/partitions still holds the correct information (and has been stored)
    - I have dd's output (records in / out / bytes)
    - testdisk did not find any partitions on a quick or deep search
    - I'm running photorec to recover the more important data (a couple of recent plaintext files that hadn't been backed up yet); the vast majority of the disk content (~80%) is unnecessary media files

    My current plan is to let photorec do its thing, then recreate the MBR with gparted and use cfdisk to create another NTFS partition using the sector information from /sys/block/.../. Is that a good course of action (that is, does it have a chance of success)? Or is there anything else I should try first? Possibly relevant information:

        dd if=FreeNAS-8.0.4-RELEASE-p3-x86.iso of=/dev/sdc:
        194568+0 records in
        194568+0 records out
        99618816 bytes (100 MB) copied

        grep . /sys/block/sdc/sdc*/{start,size}:
        /sys/block/sdc/sdc1/start:2048
        /sys/block/sdc/sdc1/size:3907022848

        cat /proc/partitions:
        major minor  #blocks  name
        ** Snipped **
        8       32   1953512448 sdc
        8       33   1953511424 sdc1

        current fdisk -l output:
        WARNING: GPT (GUID Partition Table) detected on '/dev/sdc'! The util fdisk doesn't support GPT. Use GNU Parted.
        Disk /dev/sdc: 2000.4 GB, 2000396746752 bytes
        255 heads, 63 sectors/track, 243201 cylinders
        Units = cylinders of 16065 * 512 = 8225280 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0x00000000
        Disk /dev/sdc doesn't contain a valid partition table
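
    Since the original start sector (2048) and size survive in /sys, recreating the MBR entry non-destructively is plausible; a sketch, assuming sgdisk and sfdisk are available and /dev/sdc is still the damaged drive (flag spellings vary a little between sfdisk versions - check the man page before writing anything):

        # Remove the leftover GPT structures the FreeNAS image wrote
        # (zaps partition tables only, not partition contents)
        sgdisk --zap-all /dev/sdc

        # Recreate the original MBR entry: start 2048, 3907022848 sectors, type 7 (NTFS)
        echo '2048,3907022848,7' | sfdisk -uS --force /dev/sdc

        # Then try a read-only mount or an NTFS check before anything read-write
        mount -o ro /dev/sdc1 /mnt

    One caveat on expectations: dd wrote ~100 MB starting at sector 0, and the partition starts at 1 MiB, so roughly the first 94 MiB of the NTFS partition itself (boot sector, start of the MFT) are gone. Recreating the table helps recovery tools find the partition, but it will not by itself make the filesystem mountable - the NTFS backup boot sector at the end of the partition is what tools like testdisk would lean on.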

    Read the article

  • Defragging Host OS of VMware

    - by JackLocke
    Hi All, I want to ask something that has been puzzling me for the last few days. I will try to explain my problem as clearly as I can... I have VMware Workstation installed on my machine, and I use one separate 100GB drive which stores all of my virtual machines, nothing else. Now, last week I was playing with a defragmentation tool called "Smart Defrag", which showed me in its analysis report that the drive where I am currently storing all of my virtual machines has more than 80% fragmentation!

    Now my question is: what will be the effect on my guest / VM performance if I defrag my host machine? I mean, the host machine is essentially storing those virtual machines, but it still doesn't have any direct access to whatever is stored inside them, so defragging the host should not cause any problem. But before proceeding, I want to hear from other people who may have met the same problem. I will really appreciate any help. BTW, I am using Windows 7 as the host, and the guest machines I am using are Windows 2008, Windows 2003 and Ubuntu 10.04. Thanks, Jack

    Read the article

  • I've just set up FreeBSD 8.0 and can't log in with ssh

    - by Matt
    /etc/hosts.allow is set to allow any protocol from anywhere. I can "ssh localhost" and it works. I simply get "connection refused" from PuTTY on another machine. Any ideas? I'll try to get a copy of the sshd_config file as soon as I can find a flash disk to copy it to, but I thought someone might know what you need to set initially to permit login.

    EDIT: I think I can see why it's not working now. If I telnet to the IP address of the server, I'm seeing:

        MGE UPS SYSTEMS SNMP Web/Agent configuration menu.
        Enter Password:

    Doh. OK, so the IP address is assigned by DHCP, but it seems there is already a device statically assigned to that address. I'll put in a reservation and try again.

    EDIT: OK, sorted now. It was an IP address conflict. Windows DHCP isn't smart enough to check whether there is something listening on an address before first assigning it.
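
    For future reference, a conflict like this can be spotted from any Linux box on the segment before handing out an address (a sketch; the interface and address are examples). arping's duplicate-address-detection mode reports whether anything already answers for the IP:

        # -D: duplicate address detection; exits 0 only if nobody replies
        arping -D -I eth0 -c 3 192.168.1.50 \
            && echo "address free" || echo "address already in use"

    Windows Server's DHCP role also has a "conflict detection attempts" setting that pings an address before leasing it; it defaults to 0, which matches the behaviour seen here.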

    Read the article

  • organizing my music and my iTunes

    - by Cawas
    What can we do to organize our music? I've got over 20k items in my iTunes Library, at least 5k with ratings and play counts, and apparently just 12k actual music files, and I can't understand how this question has not been properly answered yet. Maybe there is no answer. I have too many duplicates, broken links, bad music, corrupted files... well, a big mess with no tags! Probably there's no single piece of software capable of organizing everything, though I'd love one. Hopefully some time in the near future we will all be able to just sync the cloud of our automagically selected music to a newly created offline copy. But meanwhile...

    Please do consider that I've at least given a shot (even if not a full test drive) to every single answer linked here already, plus a few more. I'm fine with using other software (Mac too, please) to organize, but I'd need it to sync (retrieve and put back) at least iTunes ratings, because of my iPhone and smart playlists. I'm not looking for an iTunes replacement. I'm hoping to hear what you hardcore music organizers out there are using as your own solutions! :) I myself am using way too many tools, getting way too little done, and end up going song by song.

    Read the article

  • Moving server room to another part of the building

    - by PHLiGHT
    This question is a bit different from the typical "we are moving our server room to an off-site location" or "we are moving the whole office to a new building". Management wants to add some more office space, and to do so they want to move the server room to another location. The server room has Verizon smart jacks, a few servers, a PBX, and all the office network drops come into this room. I'm going to go over there to scout out an alternate location for the equipment, because that is still TBD.

    This sounds like quite a pain, since the Verizon equipment for our MPLS will need to be moved (never done that) and the office jacks will need to be re-run. How do you handle the jacks? I was thinking of keeping them in the same location and having new wall plates put in, with half the ports going to the current location and the other half to the new location. Or do you think that 40 drops could just be done over a weekend, so the old stuff would be ripped out and replaced with the new? Currently the wiring is a mess, so this could be a blessing in the long run.

    Read the article

  • Odd log entries when starting up PostgreSQL

    - by Shadow
    When restarting PostgreSQL, I get the following log entries:

        2010-02-10 16:08:05 EST LOG: received smart shutdown request
        2010-02-10 16:08:05 EST LOG: autovacuum launcher shutting down
        2010-02-10 16:08:05 EST LOG: shutting down
        2010-02-10 16:08:05 EST LOG: database system is shut down
        2010-02-10 16:08:07 EST LOG: database system was shut down at 2010-02-10 16:08:05 EST
        2010-02-10 16:08:07 EST LOG: autovacuum launcher started
        2010-02-10 16:08:07 EST LOG: database system is ready to accept connections
        2010-02-10 16:08:07 EST LOG: connection received: host=[local]
        2010-02-10 16:08:07 EST LOG: incomplete startup packet
        2010-02-10 16:08:07 EST LOG: connection received: host=[local]
        2010-02-10 16:08:07 EST FATAL: password authentication failed for user "postgres"
        2010-02-10 16:08:08 EST LOG: connection received: host=[local]
        2010-02-10 16:08:08 EST FATAL: password authentication failed for user "postgres"
        2010-02-10 16:08:08 EST LOG: connection received: host=[local]
        2010-02-10 16:08:08 EST FATAL: password authentication failed for user "postgres"
        2010-02-10 16:08:09 EST LOG: connection received: host=[local]
        2010-02-10 16:08:09 EST FATAL: password authentication failed for user "postgres"
        2010-02-10 16:08:09 EST LOG: connection received: host=[local]
        2010-02-10 16:08:09 EST FATAL: password authentication failed for user "postgres"
        2010-02-10 16:08:10 EST LOG: connection received: host=[local]
        2010-02-10 16:08:10 EST FATAL: password authentication failed for user "postgres"
        2010-02-10 16:08:10 EST LOG: connection received: host=[local]
        2010-02-10 16:08:10 EST FATAL: password authentication failed for user "postgres"
        2010-02-10 16:08:11 EST LOG: connection received: host=[local]
        2010-02-10 16:08:11 EST FATAL: password authentication failed for user "postgres"
        2010-02-10 16:08:11 EST LOG: connection received: host=[local]
        2010-02-10 16:08:11 EST FATAL: password authentication failed for user "postgres"
        2010-02-10 16:08:12 EST LOG: connection received: host=[local]
        2010-02-10 16:08:12 EST FATAL: password authentication failed for user "postgres"
        2010-02-10 16:08:12 EST LOG: connection received: host=[local]
        2010-02-10 16:08:12 EST FATAL: password authentication failed for user "postgres"
        2010-02-10 16:08:12 EST LOG: connection received: host=[local]
        2010-02-10 16:08:12 EST LOG: incomplete startup packet

    My question regarding a potential consequence of this is posted here: http://stackoverflow.com/questions/2238954/mdb2-says-connection-failed-db-logs-say-otherwise - but I didn't realize this was happening when I asked that question, and I figured this [part of the] problem is for SF.

    Edit: I can connect to the database and manipulate things normally with the psql CLI and the postgres user.
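
    The repeated failures on host=[local] suggest something on the box itself (a monitoring script, a connection-pooling web app restarting, etc.) retrying as postgres over the Unix socket during startup. Two quick ways to narrow it down (a sketch; config paths vary by distro and version):

        # Who has the PostgreSQL Unix socket open right now?
        lsof -U | grep -i postgres

        # Which pg_hba.conf rule governs local connections? The first match wins:
        grep -vE '^(#|$)' /etc/postgresql/8.4/main/pg_hba.conf
        #   local  all  postgres  ident   <- ident would succeed where a password fails
        #   local  all  all       md5     <- md5 demands the password the client isn't sending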

    Read the article

  • Why is writing to my external hard drive slow, while benchmarks show fast writing?

    - by matix2267
    I have an iOmega eGo 320GB portable drive connected through USB 2.0 to my laptop running Windows Vista. It had been working fine for quite some time, until recently it became very slow when writing. E.g. when copying a ~300MB movie over to the drive, at first it is extremely fast, but it doesn't actually write it - it only puts it in the cache - and then it hangs on the last 10-20MB for about a minute. When copying larger files it's the same story: it starts fast but then slows down to ~5MB/s (sometimes even slower, down to 2MB/s). The strange thing is that I have always had caching disabled for this drive (it was disabled by default and I never bothered changing it).

    At first I thought the disk was dying, so I checked the S.M.A.R.T. values, and everything is fine there. I also ran chkdsk, and it seemed to fix the problem - it worked fast for a few minutes but then slowed down again. I also tried plugging it into another USB port - no difference. Additionally, I noticed that reading under certain circumstances is sometimes slower, e.g. loading times for some games are ~10 times longer, whereas simply copying files from this drive to my internal HDD is fast.

    I ran a speed benchmark using CrystalDiskMark with a 5x100MB run and, strangely, got these results:

                  read    write   (MB/s)
        Seq       33.05   28.25
        512k      17.30   15.27
        4k         0.267   0.372
        4kQD32     0.510   0.260

    This is different from what most other people report (I've found many threads about slow disk writes while googling, but all of them were slow on benchmarks too), which is why I decided to post this problem here. BTW, most of the time when writing (or sometimes reading) the activity LED is mostly idle (it blinks a while and then stops for longer; sometimes it has slower blinks of ~1 sec, sometimes it goes off for a few seconds - an extremely long blink :) ), but when benchmarking, defragmenting or just reading (copying from this drive, installing apps from installers there, watching HD videos) it blinks really fast (like it should) and there are no slowdowns. It shouldn't be a driver issue, unless the stock Windows drivers have some issues I'm not aware of.

    Read the article

  • Fast extraction of a time range from syslog logfile?

    - by mike
    I've got a logfile in the standard syslog format. It looks like this, except with hundreds of lines per second:

        Jan 11 07:48:46 blahblahblah...
        Jan 11 07:49:00 blahblahblah...
        Jan 11 07:50:13 blahblahblah...
        Jan 11 07:51:22 blahblahblah...
        Jan 11 07:58:04 blahblahblah...

    It doesn't roll at exactly midnight, but it'll never have more than two days in it. I often have to extract a timeslice from this file. I'd like to write a general-purpose script for this, that I can call like:

        $ timegrep 22:30-02:00 /logs/something.log

    ...and have it pull out the lines from 22:30, onward across the midnight boundary, until 2am the next day. There are a few caveats:

    - I don't want to have to bother typing the date(s) on the command line, just the times. The program should be smart enough to figure them out.
    - The log date format doesn't include the year, so it should guess based on the current year, but nonetheless do the right thing around New Year's Day.
    - I want it to be fast - it should use the fact that the lines are in order to seek around in the file and use a binary search.

    Before I spend a bunch of time writing this, does it already exist?
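
    For the simple same-day case, a linear awk pass makes a serviceable stopgap while the real tool gets written (a sketch - no binary search, and the midnight-crossing and year-guessing logic still has to be layered on top):

        #!/bin/sh
        # Usage: ./timeslice 22:30:00 23:59:59 /logs/something.log
        # Prints lines whose HH:MM:SS field falls within [START, END].
        START="$1"; END="$2"; FILE="$3"
        awk -v s="$START" -v e="$END" '
            # Field 3 of a syslog line is HH:MM:SS; plain string comparison
            # works because the format is fixed-width and zero-padded.
            $3 >= s && $3 <= e
        ' "$FILE"

    A true binary search needs random access to the file at arbitrary byte offsets, which awk alone does not offer - that part wants dd (or a small Perl/Python helper) to seek, read a line, and bisect.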

    Read the article

  • Getting access to physical drives in ESXi v5.5 installation on Dell PowerEdge R710 with PERC 6/i

    - by Big-Blue
    I acquired a Dell PowerEdge R710 server a few days ago, which includes a PERC 6/i RAID controller. The server is now fitted with a SATA SSD, one SAS drive and four SATA HDDs, all of which I would like to be passed through to ESXi in an "as-is" state, without creating any logical drives in the RAID controller. Now, the ESXi v5.5 installation image I grabbed from the Dell homepage starts just fine, but it only lists the logical drives and connected flash drives as possible installation targets, not any of the physical drives. If I create a small logical drive on my SSD (which the PERC 6/i detects as SATA-SSD type), the ESXi install wizard lists the SSD value on that drive as false, which is far from optimal. I have also tried disabling the RAID controller entirely in the setup, but that did not help either. Everything that should enable passthrough is enabled in the BIOS, but that shouldn't be a concern at this early stage of the ESXi installation.

    How would I be able to install ESXi v5.5 to a part of my SSD that is connected to the storage controller, while giving it entire physical access to the disk (to allow SMART values to be read, etc.)?
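
    For the data disks, note that the PERC 6/i has no true JBOD/passthrough mode, so single-drive RAID 0 volumes are the usual workaround for drives behind it; but a disk ESXi can see can still be handed to a VM whole as a raw device mapping. A sketch from the ESXi shell (the naa identifier is a placeholder, and SMART visibility through an RDM is still limited):

        # List the physical device identifiers ESXi can see
        ls -l /vmfs/devices/disks/

        # Create a physical-compatibility RDM pointer file for one disk
        vmkfstools -z /vmfs/devices/disks/naa.xxxxxxxxxxxxxxxx \
            /vmfs/volumes/datastore1/rdms/sata1-rdm.vmdk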

    Read the article

  • IIS7 default document for urlMapped url throws 403 error

    - by MorningZ
    Hopefully this all makes sense: I have a Web Application project against an IIS7 server that is "theme-able" using different master pages. As a result of what I am trying to do, the root of the project has no aspx files, so I am using the web.config's ability to rewrite "~/default.aspx" to "~/themes/a/default.aspx". This works great as long as I type in "http://www.mysite.com/default.aspx", but typing just "http://www.mysite.com" results in a "403 - Forbidden: Access is denied" error. I was hoping that the combination of urlMapping and default document would be smart enough to handle this, but it's not:

        <system.webServer>
            <defaultDocument enabled="true">
                <files>
                    <clear />
                    <add value="default.aspx"/>
                </files>
            </defaultDocument>
        </system.webServer>

    I also tried:

        <system.webServer>
            <defaultDocument enabled="true">
                <files>
                    <clear />
                    <add value="~/themes/a/default.aspx"/>
                </files>
            </defaultDocument>
        </system.webServer>

    ...to no avail. I was hoping a request would come in without a document defined, IIS7 would assume it was default.aspx, and then the urlMapping would map it accordingly, but nope. Any pointers? I've read a ton of posts here with similar issues, but not this exact issue.
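
    One avenue worth testing (a sketch, not verified against this exact setup): the 403 suggests IIS's default-document handling gives up before ASP.NET's urlMappings ever run, so mapping the bare application root explicitly in system.web may sidestep it:

        <system.web>
            <urlMappings enabled="true">
                <!-- map both the bare root and the default document name -->
                <add url="~/" mappedUrl="~/themes/a/default.aspx" />
                <add url="~/default.aspx" mappedUrl="~/themes/a/default.aspx" />
            </urlMappings>
        </system.web>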

    Read the article

  • Automated Syslog Error Solution Finder

    - by Dru
    Are there any automated syslog solution-finding frameworks? I want my central syslog server to email a list of problems, their severity, and suggested solutions. There have been several questions about centralising system logs and alternative log analysis systems, but I don't get the impression that any of them help with issue resolution.

    A little background: at work I am now literally doing the work of two people, and both jobs have expanded beyond their initial frameworks. It is not so bad, as I have helpers, but they are little more than smart monkeys. While one of my predecessors [I have two; that is how I know I have the jobs of two people] set up logwatch to email its results out, my monkeys don't have the skills necessary to identify unimportant data. This has caused all of them, and myself sadly, to set up email filters and ignore the whole thing until something goes "bang". It would be handy to have someone else tell them what is important, what is connected, and to suggest a few ways to resolve the issue (I could train them to research the solution first, ha!). My reading of the Splunk and Octopussy sites indicates that I still need to bring my own highly trained monkey to the party. Which I am several years from having.

    Read the article

  • Growing a Linux software RAID5 array

    - by chrismetcalf
    On my home file server, I've got a 1.5TB software RAID5 array, built from four 500GB Western Digital drives. I've got a fifth drive that I usually run as a hot spare (but have out of the array at the moment). If I can, I'd like to add that to the array and grow it to 2TB, since I'm running out of space. I googled for guidance, but there seem to be a lot of differing opinions out there (many of them probably now out of date) as to whether or not that is possible and/or smart. What's the right way to go about this, or should I start looking into building a new array with more space?

    Version details:

        %> cat /etc/issue
        Debian GNU/Linux 5.0 \n \l
        %> uname -a
        Linux magrathea 2.6.26-1-686-bigmem #1 SMP Sat Jan 10 19:13:22 UTC 2009 i686 GNU/Linux
        %> /sbin/mdadm --version
        mdadm - v2.6.7.2 - 14th November 2008
        %> cat /proc/mdstat
        Personalities : [raid1] [raid6] [raid5] [raid4]
        md1 : active raid1 hdc1[0] hdd1[1]
              293033536 blocks [2/2] [UU]
        md0 : active raid5 sde1[3] sda1[0] sdc1[2] sdb1[1]
              1465151808 blocks level 5, 64k chunk, algorithm 2 [4/4] [UUUU]
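
    mdadm has supported growing a RAID 5 by an extra device since v2.6, so the v2.6.7.2 shown above can do this in place; a sketch, assuming the fifth drive appears as /dev/sdf with a partition matching the others (and that anything irreplaceable is backed up before the reshape):

        # Add the fifth disk as a spare, then reshape the array onto it
        mdadm --add /dev/md0 /dev/sdf1
        mdadm --grow /dev/md0 --raid-devices=5 \
              --backup-file=/root/md0-grow.backup   # critical-section copy kept off the array

        # The reshape runs in the background and can take many hours; watch it with:
        cat /proc/mdstat

        # When it finishes, enlarge the filesystem to fill the new space (ext3 example):
        resize2fs /dev/md0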

    Read the article

  • Ubuntu network card problem.

    - by Steve Greene
    Hello folks, several days ago I installed Ubuntu 9.10 onto my Acer Aspire 3100 laptop, running it alongside Windows Vista as a dual-boot system. Creation of the Ubuntu boot CD went fine, and the installation onto my hard drive was flawless. Ubuntu opens and behaves as I would expect, except for one little problem: for reasons unknown to me, Ubuntu is not communicating with my laptop's networking hardware, and I have no internet connectivity. It works fine under Windows Vista. Up in the right side of the Ubuntu desktop, I click on the network icon and it does not show a wireless connection at all. At home, where I use a dialup modem, I also see no means of getting online. My modem is an HDAUDIO Soft Data Fax Modem with SmartCP, manufactured by CXT (Conexant Systems Inc., file version 4.0.13.0, driver version 7.58.0.0).

    I am an advanced computer user, but I am not a programmer. I seek a solution that is user-friendly for normal people, something equivalent to a driver that I can easily install or activate that will allow Ubuntu to see my hardware and get me connected. Can anyone help me over this hopefully-little glitch? My machine has a Mobile AMD Sempron 3500+ processor at 1.80 GHz, 1.50 GB RAM, and a 32-bit operating system.
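
    Before hunting for drivers, it is worth confirming what the kernel actually sees; a sketch of the usual first checks from a terminal (on laptops of this era the wireless is often a Broadcom chip whose non-free firmware is offered through System > Administration > Hardware Drivers):

        # Does the kernel see the network hardware at all?
        lspci | grep -i -E 'network|ethernet'

        # Is a driver bound to each device, and is the radio switched off in software?
        sudo lshw -C network
        rfkill list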

    Read the article

  • What is the difference between a PDU and a power strip (both 120V, 15A)?

    - by rob
    I just chatted with an APC rep about upgrading the UPSes at our office. She recommended a single higher-capacity 6-outlet Smart-UPS to replace the four Back-UPS units we currently have. When I asked how she recommended plugging in all the current devices, she recommended using APC's AP9567 PDU, but said not to use a power strip. At first she said I had to use an APC-brand PDU, but after I inquired about using a Tripp-Lite PDU, she said any brand of PDU would be fine. The APC PDU referenced above looks like a standard 120V power strip with overload protection but no surge protection. Other than overload protection (which seems redundant when plugging into the UPS), is there something else I'm missing, or should any power strip (without surge protection) be fine?

    Edit: I didn't mention it earlier, but we don't have a proper rack - though I did still plan to mount the PDU or power strip to something. I guess I'm wondering if there's any special reason I should pay as much as $180 for the low-end APC PDU (which just looks like a power strip to me) vs. $20-$30 for a workbench power strip.

    Read the article

  • Unable to connect to server after a certain amount of time

    - by Troy
    I am a business FIOS subscriber with 5 static IPs. I have the following network setup:

    - Verizon-provided ONT: Alcatel-Lucent I-211M-H
    - Switch: D-Link Web Smart Switch DES-1228P
    - Server: Dell Optiplex 755 running Ubuntu 12.04 with iptables enabled and a static IP address

    I have iptables running on the server with the http, https and ssh ports open. I can connect to a website on the server from an external computer, but after a certain amount of time (minutes to hours), I can no longer connect. All I have to do to re-enable connectivity is connect to the server via SSH from a computer INSIDE the network. I don't have to actually log in; I just have to establish a connection. I can then access the website externally again.

    I did some googling, and it seems some of Verizon's equipment had an ARP bug where the ARP entries would expire after a certain time period, but those issues all seem to be from back in 2009-2010. I know the switch has an "auto learning MAC address" feature, but I'm not sure whether that could be the problem or not. Does anyone have any ideas or advice on how I can troubleshoot this? Thanks!
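
    This pattern - dead until the server itself transmits, then fine for a while - smells like the server's entry aging out of an upstream ARP/MAC table until inbound traffic forces a refresh. A common stopgap while troubleshooting (a sketch; the gateway address is an example) is to keep the server talking so the entry never expires:

        # /etc/cron.d/arp-keepalive  (hypothetical file)
        # Ping the upstream gateway once a minute so the ONT and switch
        # always hold a fresh entry for this server's MAC/IP.
        * * * * *  root  ping -c 1 -q 203.0.113.1 >/dev/null 2>&1

    Watching arp -an on the server, and the switch's MAC table, at the moment an outage starts would confirm which device is doing the forgetting.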

    Read the article

  • Emails not sending from Outlook / OWA - not even hitting the mail queue in Exchange

    - by webnoob
    We are having an issue this morning where we can receive external emails but cannot send internal or external ones from Outlook or OWA. If I use:

        Send-MailMessage -From <[email protected]> -To <[email protected]> -Subject "Test #01" -Body "Just a test message." -SmtpServer <Server-Name> -Credential <domain\user>

    ...the email is sent correctly, which makes me think there is a connection issue with OWA and Outlook. However, Outlook reports itself as connected to Exchange. I have checked the message tracking in the Exchange tools, and emails sent via Outlook and OWA do not appear. Nothing changed on the server over the weekend, so I don't really know where to start debugging this issue. We are using Windows SBS 2011. We only have one send connector, which isn't using smart hosts and is set to use DNS MX records. "Use external DNS" is not checked, and I can ping google.com etc., so it doesn't appear to be a DNS issue (plus the email sends from the console anyway).

    EDIT: It appears that users using IMAP can send emails correctly; it's only the ones that rely on the normal Exchange connection type that don't work.

    EDIT: Emails from IMAP are hitting the email queues, whereas emails from the normal Exchange accounts aren't.

    EDIT: It seems that some of the emails we tried to send yesterday went out at about 1am, but now it won't work again...
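
    Given that console submission works but Outlook/OWA mail never reaches the queues, the suspect is the submission path between the mailbox store and transport rather than the send connector. A few first checks from the Exchange Management Shell (a sketch; these are standard Exchange 2010 cmdlets, which is what SBS 2011 ships):

        # Are the transport and mailbox services all reporting healthy?
        Test-ServiceHealth

        # Anything stuck or in retry?
        Get-Queue | Format-Table Identity, Status, MessageCount

        # Exercise local mail submission end-to-end
        Test-MailFlow

        # The Mail Submission service is what hands Outlook/OWA mail to transport;
        # restarting it is a common first remedy when MAPI mail silently queues nowhere
        Restart-Service MSExchangeMailSubmission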

    Read the article

  • Incredibly low disk performance on HP DL385 G7

    - by 3molo
    Hi, as a test of the Opteron processor family, I bought an HP DL385 G7 6128 with the HP Smart Array P410i controller - no cache memory. The machine has 20GB RAM and 2x146GB 15k rpm SAS + 2x250GB SATA2 drives, both pairs in RAID 1 configurations. I run VMware ESXi 4.1.

    Problem: even with only one virtual machine (I tried Linux 2.6, Windows Server 2008 and Windows 7), the VMs feel really sluggish. With Windows 7, the VMware Converter installation even timed out. I tried both the SATA and SAS disks; the SATA disks are nearly unusable, while the SAS disks feel extremely slow. I can't see a lot of disk activity in the Infrastructure Client, but I haven't been looking for causes or even tried diagnostics, because I have a feeling that it's either the cheap RAID controller - or simply its lack of memory. Despite the problems, I continued and installed a virtual machine that serves a key function, so it's not easy to take it down and run diagnostics.

    I would very much like to know what you guys make of it: is it more likely to be a problem with the controller/disks, or is it low performance because of budget components? Thanks in advance,
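
    The behaviour described is exactly what a P410i without its cache module does: with no cache, the controller disables write-back caching, so every write (and especially every RAID 1 write) completes synchronously to disk. A quick way to confirm (a sketch; hpacucli ships on HP's offline ACU media or in HP's ESXi bundle, not in a stock ESXi install):

        # Dump controller, cache and logical drive status
        hpacucli ctrl all show config detail | grep -i -E 'cache|status'

        # With no cache module fitted there is no "Total Cache Size" line and
        # write cache shows as disabled - matching the symptoms above.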

    Read the article

  • Ignore non-unicode programs language when installing software

    - by mitya
    This is something that has been driving me nuts for a while, and I haven't been able to find a solution for this problem anywhere. I am running Windows 7, and my "Language for non-Unicode programs" setting is set to Russian. I need it for some non-Unicode software that has a Russian UI. However, for most of my software I prefer to use the English UI.

    A lot of software out there is multilingual and is too smart for my liking. When installing, it switches the UI to Russian, and the software UI stays in Russian after the installation, without an option to change that besides setting the "non-Unicode language" to English. It switches back to Russian once I revert the setting and reboot. Most of the time it is driver software, i.e. Intel, HP, etc. How can I force the installation to run in English and stay that way after the install, ignoring the "Language for non-Unicode programs" setting?

    Now, I understand this might be specific to the installer: MSI, InstallShield, etc. But any solution will be good, even if I have to apply it for every software installation. Thanks in advance for any helpful information!
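
    For MSI-based installers specifically, the UI language can often be forced per-install from the command line (a sketch; 1033 is the LCID for en-US, the package path is an example, and not every package honours either approach):

        :: If the package embeds language transforms, select the en-US one:
        msiexec /i "C:\Temp\driver-package.msi" TRANSFORMS=:1033

        :: Some packages instead respect the ProductLanguage property:
        msiexec /i "C:\Temp\driver-package.msi" ProductLanguage=1033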

    Read the article

  • How do I permanently delete /var/log/lastlog?

    - by GregB
    My /var/log/lastlog file is huge. I know it's really only a few kilobytes, but tar isn't smart enough to know that, so when I image a virtual machine, my restore fails because it thinks I'm trying to load more data than I have capacity for on my disk. I want to delete /var/log/lastlog and stop any and all logging to the file. I'm aware of the security implications. This logging needs to stop to preserve my backup strategy.

    I've made a change to /etc/pam.d/login which I was told would disable logging to /var/log/lastlog, but it does not appear to work, as /var/log/lastlog keeps growing:

        # Prints the last login info upon succesful login
        # (Replaces the `LASTLOG_ENAB' option from login.defs)
        #session optional pam_lastlog.so

    Any ideas?

    EDIT: For anyone interested, I use Centrify Express to authenticate my users via LDAP. Centrify Express is "free", but one of the drawbacks is that I can't manage user UIDs via LDAP, so they are given a dynamic UID when they log in to a server. Centrify picks some crazy high UID values (so they don't conflict with local users on the server, presumably). /var/log/lastlog is indexed by UID, and grows to accommodate the largest UID on the system. This means that when a Centrify user logs in, they get a UID at the upper end of the UID range, which causes lastlog to allocate an obscene amount of space, according to the file system:

        ~$ ll /var/log/lastlog
        -rw-rw-r-- 1 root root 291487675780 Apr 10 16:37 /var/log/lastlog
        ~$ du -h /var/log/lastlog
        20K     /var/log/lastlog

    More info: sparse files.
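
    Since the file is sparse (an apparent size of ~271 GiB but 20K on disk, as the du output shows), an alternative to suppressing the logging is to make the backup sparse-aware; GNU tar can do this (a sketch):

        # -S / --sparse makes tar detect holes and archive only the real data,
        # so the restore recreates a 20K-on-disk file instead of ~271 GiB of zeros
        tar -cSzf /backup/image.tar.gz /var/log/lastlog ...

        # Truncating (rather than deleting) the file also resets it without
        # breaking programs that expect it to exist; it regrows on the next
        # high-UID login
        : > /var/log/lastlog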

    Read the article

  • Flash Backed Write Cache (FBWC) without capacitor pack

    - by Martyn
    I bought an HP Smart Array P410 controller, and it is installed and working fine in an HP ProLiant MicroServer with 4 drives in two RAID 1 arrays. I didn't realise, however, that it came without any cache, so it would only work by writing straight through to the disk, and the performance was horrible. So I then bought the 512MB Flash Backed Write Cache (FBWC) memory module, as I was under the impression that with FBWC I would not need a battery. I got this idea from a forum post:

        "What do you guys think of the choice between 'BBWC' (battery backed write cache) and 'FBWC' (flash backed write cache)? The flash-based ones use non-volatile memory so need no battery."

    After installing the cache module, however, the server pretty much won't boot. The P410 has a flashing amber light on it, and from the manual that doesn't sound good. I've managed to get to the onboard BIOS once, and even managed to boot to the HP Array Configuration Utility (ACU) CD once, but every other time the server continually reboots once it gets to the POST screen and reads "ARRAY INITILIZING %%%". The one time I reached the ACU, it reported a problem with the cache module.

    To me, it seems like the cache module is faulty. However, the supplier tells me: "Do you have an FBWC battery pack, p/n 587324-001? Because that is required for the cache to work. If you have it, please complete an RMA form and we'll send a replacement / credit." Does this sound right to you? I've been ordering the parts from the US, and I don't want to spend $77 + $40 p&p on a battery and wait a week for the shipping only to find the card is faulty - but I also don't want to send back a working card.

    Read the article

  • How to configure default text selection behavior in Windows XP, 7? (eg. mouse click selects entire word vs. mouse click inserts an active cursor)

    - by Mouse of Fury
    I find the mouse-click behavior of Windows XP and Windows 7 annoying and intrusive. I don't remember Windows NT being quite this bad, or Mac OS 7-10, which I used in the nineties. When I'm using a browser and I click on a text field - for example, the address bar, or a search box - the first thing that happens is that the entire field is selected. Subsequent clicks seem to select parts of words, often deciding arbitrarily to exclude or include adjacent punctuation. The same happens in Excel and other apps, and when trying to rename files, so I'm assuming this behavior comes from a system-wide text-handling routine. I frequently want to edit text, cutting out or replacing odd parts of the insides of words or chunks of sentences, and often find that to get a simple insertion cursor I have to click the mouse up to 4 times in succession. I've had to do a lot of this recently and it has been driving me insane.

    Is there a place at the system level where this can be configured? In a perfect world, I'd like a single click on a new text area to insert a cursor point, and a rapid double click to select the entire area. Words or text within the area could be selected by inserting a cursor, holding down the mouse button, and dragging to the exact point where I want the selection to end - even if that's in the middle of a word. No, I don't need or want Windows to "smart select" a word or sentence for me. I've looked in the Mouse and Accessibility Options control panels (Windows XP) and haven't found anything even close. Thanks.

    Read the article

  • Managing Internal Yum Repository Groups

    - by elmt
    What is the best method for handling yum group dependencies? For example, take this comps.xml file:

        <comps>
          <group>
            <id>production</id>
            <name>Production</name>
            <default>true</default>
            <description>Packages required to run</description>
            <uservisible>true</uservisible>
            <packagelist>
              <packagereq type="default">ssh</packagereq>
            </packagelist>
          </group>
          <group>
            <id>development</id>
            <name>Development</name>
            <default>false</default>
            <description>Packages required to develop</description>
            <uservisible>true</uservisible>
            <packagelist>
              <packagereq type="default">gcc</packagereq>
            </packagelist>
          </group>
        </comps>

    ...which is packaged with createrepo -g comps.xml x86_64. The ssh and gcc RPMs are not present in the x86_64 directory. If I run yum groupinstall development, yum is smart enough to pull the gcc package from the RHEL repo, even though the groups are defined in my internal repository. However, is this the proper way of doing this, or should I copy the RPMs to my local repository and recreate the repo?
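
    For what it's worth, this split is by design: group metadata only lists package names, and yum resolves those names against every enabled repository, so a group defined in one repo pulling packages from another is normal. A quick way to watch it happen (a sketch):

        # Show which packages the group names, regardless of which repo carries them
        yum groupinfo development

        # Start the install and answer 'N' at the prompt - the transaction
        # summary column shows which repository each package would come from
        yum groupinstall development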

    Read the article
