Search Results



  • Mesh networked servers via VPN

    - by microspino
    I've got a design idea and I would like some advice from SF about it. I have 5 customers with small real-estate databases. I've built a desktop app for them, and now they would like to merge their databases to share their data. I don't want to centralize everything in one place, nor do I want to do server maintenance. They also told me that all of them have small servers and maintenance guys available in their offices. Although everything seems suitable for a web application, I had the idea to experiment with something new: each customer's small server would be connected to the others in a sort of mesh network, without a single point of failure, through VPNs. If one of the servers went down, its customers could still connect to their databases through one of the other mesh-networked servers instead of the local one that is down. During normal operations, all the servers sync the DB with the others through the VPNs. I can accept a half-day window of non-synced data; in other words, since I don't need real-time synchronization, the servers don't always have to stay in sync. I can migrate my data over to non-SQL technologies like CouchDB or Redis or whatever you suggest. As you can see I don't have a lot of constraints, and although I could go with a web application, I would like to delegate and decentralize support, data privacy and management to my customers' offices as much as I can. Is that a crazy idea? Do you know if something similar exists? Which technology would you suggest?
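    A master-less, eventually consistent sync like this maps naturally onto CouchDB's built-in replication. As a minimal sketch (hostnames and the database name are placeholders), each office server could run a continuous pull replication from every peer:

        # on office-a, pull changes from office-b (repeat for each peer)
        curl -X POST http://localhost:5984/_replicate \
             -H 'Content-Type: application/json' \
             -d '{"source": "http://office-b.vpn:5984/realestate",
                  "target": "realestate",
                  "continuous": true}'

    Because replication is incremental and resumable, a node that drops off the VPN simply catches up when it comes back, which fits the half-day consistency window.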


  • Poor write performance on Debian server running NFS with 22TB exported JFS filesystem

    - by user143546
    I am currently running a Debian server that is exporting a large JFS filesystem (22TB) over NFS (nfs-kernel-server). When attempting to write to the NFS share, the performance is very poor. The 22TB disk is sitting on a NAS mounted using iSCSI. Writes burst for a moment near expected line speed, then sit idle for several seconds with very little traffic measured (low KB/sec); I/O wait peaks on write. When reading from the NFS mount, the system operates at expected speeds (11MB/sec). The issue does not occur when using SFTP, rsync, or local copying (non-NFS), and it persists between stable and testing releases. On the same machine I have a 14TB ext4 filesystem using the exact same export configuration that does not share the issue; that share is not in regular use and thus not consuming resources.

    NFS server:

        cat /etc/exports
        /data2 10.1.20.86(rw,no_subtree_check,async,all_squash)

        cat /sys/block/sdb/queue/scheduler
        noop [deadline] cfq

        cat /etc/default/nfs-kernel-server
        RPCNFSDCOUNT=8
        RPCNFSDPRIORITY=0
        RPCMOUNTDOPTS=--manage-gids
        NEED_SVCGSSD=
        RPCSVCGSSDOPTS=

    NFS client:

        cat /etc/fstab
        10.1.20.100:/data2 /root/incoming nfs rw,noatime,soft,intr,noacl 0 2

        cat /sys/block/sdb/queue/scheduler
        noop [deadline] cfq

        cat /proc/mounts
        10.1.20.100:/data2/ /root/incoming nfs4 rw,noatime,vers=4,rsize=262144,wsize=262144,namlen=255,soft,proto=tcp,port=0,timeo=600,retrans=2,sec=sys,clientaddr=10.1.20.86,minorversion=0,addr=10.1.20.100 0 0

    This problem has me pretty stumped. Any help would be greatly welcomed. Thanks.
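    To separate the NFS layer from the iSCSI/JFS layer underneath it, it can help to benchmark a synchronous write on each side of the stack. A rough sketch (test file paths are assumptions):

        # on the server, straight to the JFS filesystem over iSCSI
        dd if=/dev/zero of=/data2/ddtest bs=1M count=500 conv=fdatasync

        # on the client, through the NFS mount
        dd if=/dev/zero of=/root/incoming/ddtest bs=1M count=500 conv=fdatasync

    If the local run also stalls, the problem is below NFS (iSCSI or JFS); if only the NFS run stalls, the export and mount options are the place to look.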


  • Random server lag, no CPU/mem/pagefile usage

    - by Kev
    We have a fairly new server running Windows 2003 SP2, and the past few days we've noticed random slowdowns. When I'm logged into the server over Remote Desktop while this is happening, or if I'm physically sitting at the server logged in, suddenly everything becomes extremely laggy. Any UI element I try to interact with takes upwards of ten seconds to react, and then responds very slowly. Then a minute later everything is quite snappy again. During this, I have Task Manager minimized to the tray, and there's no CPU usage. I open it up right after this happens, and there's very little CPU usage on the graph, and no memory or pagefile usage above normal (normal being 1.5GB free in the case of memory). This is what I see logged into the server; then users start calling saying things are slow, timing out, and failing, anything to do with our server. There are no events in the Event Viewer around the times this happens. The context I'm working in (last thing I clicked, etc.) seems different every time: different programs active, different combinations of programs open. Never anything particularly stressful (like adding an event entry to a Cobian Backup configuration, or editing text in TextPad, which has been exceptionally stable in my extensive usage of it). I would've thought it was just the server, but a family member's home PC (entirely separate) running WinXP SP3 had the same thing happen to it last night a few times. Is this some new behaviour introduced by the latest Windows updates? Either way, where do I even start to look when nothing seems to be chewing up resources?
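    When Task Manager shows nothing, a background counter log can catch what a snapshot misses. One way to record CPU, disk queue and paging over the whole day with the built-in typeperf tool (the counter choice and 5-second interval are just an illustrative starting point):

        typeperf "\Processor(_Total)\% Processor Time" ^
                 "\PhysicalDisk(_Total)\Avg. Disk Queue Length" ^
                 "\Memory\Pages/sec" -si 5 -o C:\perflog.csv

    Disk latency in particular would not show up as CPU, memory, or pagefile usage, yet it produces exactly this kind of whole-machine lag.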


  • Why can't I get out of display mirror mode?

    - by Roy Smith
    I've been running Ubuntu (10.04.1 LTS, 64-bit) for a while and just replaced my hardware with a faster machine with an ATI Radeon HD 5700 video card. I've got twin 1920 x 1080 displays. I downloaded the latest driver (ati-driver-installer-10-9-x86.x86_64.run) from the ATI web site and installed that. I've gone through a few rounds of playing with /etc/X11/xorg.conf, and can't get things right. At the moment, it's in display mirroring mode, and I can't figure out how to get it out of mirror mode. If I run Monitor Preferences, there's a "Same image in all monitors" checkbox. If I uncheck that, the little preview window switches to show two monitors. When I click Apply, it asks me to log out and log back in again. When I do that, I'm right back to mirrored mode. What's really weird is that I'm currently running a copy of xorg.conf from a coworker's machine. He's got identical hardware, and his display works fine. So, I'm inclined to think there's something else going on other than the conf file. Any ideas what might be wrong?
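    If the driver exposes RandR, mirroring can also be switched off from a terminal, sidestepping the preferences dialog entirely. A sketch, assuming the outputs are called DFP1 and DFP2 (check the real names in the output of plain xrandr; the fglrx driver may instead want its own aticonfig tool):

        xrandr                      # list the output names first
        xrandr --output DFP1 --auto --output DFP2 --auto --right-of DFP1

    If the setting reverts on login, that points at something regenerating or overriding the configuration, which would fit the symptom of a known-good xorg.conf not taking effect.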


  • Flickering dual screens in Virtual Box Ubuntu 13.10 Guest

    - by alexleonard
    I have Ubuntu 13.10 x64 installed as a guest in VirtualBox (under a Windows 8.1 host), and the virtual machine is set up to run with a monitor count of 2, 128MB video memory and 3D acceleration enabled. In the guest I have the VirtualBox Guest Additions installed (which allowed me to have two 1920x1080 screens). Here's a screenshot of my VM settings. My laptop is an Asus N550JV, which has both Intel's HD Graphics 4600 GPU and Nvidia's GeForce GT 750M; by default I believe the Intel graphics card is being used to render the VM. When I boot up the VM it loads perfectly on dual screens, but whenever I move the mouse from one screen to the other (I have a Dell S2340L running over an HDMI connection as the second screen) the screen flickers. I've tried a variety of settings changes in both Ubuntu and the VM settings, but cannot seem to stop this screen flicker. I also used the Nvidia control panel in Windows to force the dedicated graphics card to always be used, but found that the display driver sometimes crashed while working in the VM, destroying my VM session, so I figured it's better to stick with the Intel graphics as that appears to be more stable. I also tried without 3D acceleration, but that was much worse, and if I ran the VM with a low amount of graphics memory it really struggled. Here's my dmesg output: http://pastebin.com/1LJuYWMj (not sure if this is helpful in this situation). I read some posts suggesting changes to /etc/X11/xorg.conf, but I don't appear to have an xorg.conf file. There were also a few posts (though related to Synergy) suggesting running xset -dpms, but this command doesn't appear to have had any effect for me. As an additional note, I'm finding that window drawing in the guest is a little laggy/glitchy. For example, quickly scrolling through a web page may leave parts of the viewport displaying stale content. I notice the drawing issues most in the web browser, but they also affect other software, with parts of the window not being drawn when, say, switching between accounts in Thunderbird. Any suggestions greatly appreciated!
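    When the Additions and settings look right but the flicker persists, it sometimes helps to pin the display settings explicitly from the host with the VM powered off, then reinstall the Guest Additions inside the guest. A hedged sketch from the host's command prompt ("Ubuntu 13.10" is a placeholder for the actual VM name):

        REM find the exact VM name first
        VBoxManage list vms
        REM with the VM powered off, set the display configuration explicitly
        VBoxManage modifyvm "Ubuntu 13.10" --vram 128 --monitorcount 2 --accelerate3d on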


  • Linux CUPS on a Raspberry Pi: offload processing to a server

    - by jaredmsaul
    I am interested in setting up a Raspberry Pi as the local end of a printing solution. In my testing, the pi chokes on acting as a complete CUPS-based print server; it seems a little underpowered for some of the Ghostscript processing and other filtering that occurs. Particularly on larger or complex documents, the processing time can be 5 or more minutes. My question is: can the processing be largely done elsewhere, and the prepared end product of the processing chain be fed to the pi for output on the connected printer? So in this scenario, any arbitrary document (HTML, PDF, text) is initially 'printed' on a relatively powerful machine, but the output is stored in a file. This file is then grabbed by the pi and, with all the heavy work out of the way, easily printed using CUPS. I know files can be pushed through CUPS in raw mode, but I am fuzzy on the pros and cons and the applicability to what I describe. I have tested this with pdftops creating a PS file and then feeding that raw to CUPS, and I think it works, but it seems like there may be a cleaner solution. This scenario would ideally work for any number of printer types.
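    That is essentially what raw printing is for: the filter chain runs on the big machine and the pi only spools bytes to the printer. A minimal sketch, assuming a PostScript-capable queue named office on the pi (names and paths are placeholders):

        # on the powerful machine: render to printer-ready PostScript
        pdftops document.pdf document.ps

        # ship it to the pi and print raw, skipping the pi's filter chain
        scp document.ps pi@raspberrypi:/tmp/
        ssh pi@raspberrypi lp -d office -o raw /tmp/document.ps

    The main trade-off with -o raw is that the sending side must produce data in the printer's own language (PostScript here, or whatever the driver emits), since the pi no longer converts anything; that is also why it works for any printer type only as long as the rendering machine has the right driver.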


  • DPM 2010 server attach agent error: administrator privileges missing?

    - by Michael
    I'm hoping you can help me out with this little problem I'm having. I installed DPM 2010 in our test environment to test backups of Exchange 2010 servers. The environment includes:

    1x DC
    2x Exchange Server 2010
    1x DPM 2010 server

    All of these are running as Windows Server 2008 R2 virtual machines; the host machines are using Hyper-V. The problem goes like this:

    1. I tried to install the agents from the DPM server GUI, which failed saying I didn't have the correct permissions.
    2. So then I tried the manual installation using the commands from the Microsoft site: http://technet.microsoft.com/en-us/library/bb870935.aspx
    3. The agent installation worked, but when I get to attaching the agents to the DPM server it still gives me an error saying that the specified account does not have administrator rights.
    4. I tried the domain admin, users who are domain admin + local admin, and single local admins.
    5. I have turned off the Windows firewall and made sure all the services are running.

    So now I'm out of ideas and really need help; the agent attach to the DPM server is the last thing holding me back from deploying everything to the production site. Any help would be really appreciated.
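    For reference, the manual attach in that TechNet article is a two-step handshake, and this error often means the first step was run with (or as) an account the protected server does not consider a local administrator. A sketch of the sequence as I understand the article (server names, user, password and domain are placeholders; verify the exact arguments against the article itself):

        REM on each protected server, from an elevated prompt in the DPM agent bin folder:
        SetDpmServer.exe -dpmServerName DPM01

        REM then on the DPM server, in the DPM Management Shell:
        REM Attach-ProductionServer.ps1 <DPMServer> <ProtectedServer> <User> <Password> <Domain>
        Attach-ProductionServer.ps1 DPM01 EXCH01.test.local administrator P@ssw0rd TEST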


  • What mobile phones are viable for a "nerdy" person? [closed]

    - by Blixt
    This community wiki is for determining a list of good mobile phone choices if you are "nerdy" (definition follows). As a point of reference, the almost-two-year-old N95 8GB will be used, mostly because that's what I can most relate to, since I've had it since it came out. Today, the iPhone and other modern mobile phones really outshine it in usability and interface. However, it still does everything I want it to do (and here's the definition of "nerdy"; modify as needed):

    - Syncs contacts, calendar, tasks and mail in the background
    - Can run multiple installable applications in the background (Google Maps with Latitude, for example)
    - Good amount of space for music etc.
    - Lets you develop your own little toy applications (Python; not to mention it can also run an Apache server with a public URL!)
    - Tethering!
    - Supports Flash (maybe not very important, but it has its uses)
    - Has a manufacturer that believes in the nerdiness! (The people at Nokia Labs make lots of cool stuff, share early versions with the community, and are generally open)
    - A high-resolution screen (at least 320x240)
    - Special hardware features such as an accelerometer and GPS

    What's missing from the N95 8GB but still qualifies as good, "nerdy" qualities:

    - 7.2 Mbit/s (or faster) internet through HSDPA or HSPA+
    - A good web browser that can do most of the stuff a desktop browser can (especially render sites properly and run JavaScript correctly)
    - Touch (especially multi-touch)
    - More special hardware features, such as a compass
    - Intuitive and fluent user interface (shiny stuff)
    - Ability to configure it to trust root certificates of my own choice
    - A processor fast enough to run Quake 3! =)


  • Is there any way to tweak / rotate mouse orientation? Any applications? Registry edits?

    - by calbar
    I've got a very frustrating issue with my new Logitech Marathon Mouse M705. It is absolutely perfect for what I need, with the exception that it tracks at an angle for some reason. What I mean is: when you slide the cursor to the left, it trends upward; when you slide to the right, it trends down. Moving the cursor along a flat horizontal line is no longer a natural motion; you need to fight what I suspect is a mechanical error of some kind. Unfortunately, I've already exchanged this mouse once and tested both units on different Windows 7 and Mac OS X machines; the problem continues to occur. Is there a software solution for me? I'm incredibly surprised there is no simple way to adjust the orientation... how can every mouse manufacturer possibly tune their hardware to everyone's tastes? What about those who need to flip orientation a full 90 or 180 degrees? I only need to adjust mine a few degrees, but I'm sure that need has arisen as well. Anyway, I'm running the latest SetPoint drivers (6.00) on Windows 7 and there are no orientation options available. I've checked out uberOptions, and the M705 isn't supported yet (the last version update was over 6 months ago). MAF Mouse requires a very strange series of mouse clicks to activate; it also seems a little overkill and costs $$ (which I'm willing to pay as a last resort). Is there no universal registry value for mouse orientation? How about for SetPoint drivers specifically? I've done a simple search in regedit without any luck. An XML file somewhere? Anything?


  • Windows thinks outgoing connections are incoming connections?

    - by Slayer537
    I have a rather weird issue. I'm trying to configure Windows Firewall to block all outgoing connections for a certain app, but allow all incoming. This app is used to transfer files across a network. The reason for this type of setup is to only allow certain users (IP addresses) access to the files I have, but to still allow others to see what's available. Since Windows Firewall defaults to allowing all outgoing connections, I made a rule to deny all outgoing connections that were not in the IP ranges I specified. For the incoming connections, I'd like to leave it at allow all, but at the moment it is set to only allow the connections that also have outgoing permissions set. If I blanket allow all incoming connections, I observe that unauthorized IP addresses are actually able to download files, even though they were blocked in the outgoing rules. To shed a little more light on this, I used NetLimiter to see what was going on. NetLimiter showed me that the connection was an incoming connection. Shouldn't this be an outgoing connection, as I am uploading files to them, not the other way around? Is there a way to make the connection type be correct and show up as outgoing instead of incoming?
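    A rule's direction follows who opened the TCP connection, not which way the file bytes flow: a remote client connecting to the transfer app is inbound even while the app uploads to them. So one way to express "everyone may see, only these ranges may transfer" is to scope the inbound allow rule itself by remote address instead of relying on outbound rules. A hedged sketch (the program path and address range are placeholders):

        netsh advfirewall firewall add rule name="FileApp restricted" ^
            dir=in action=allow program="C:\Apps\fileapp.exe" ^
            remoteip=192.168.10.0/24 protocol=TCP

    With the default inbound policy set to block, addresses outside the listed range are then refused without any outbound rule at all.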


  • In APC+PHP, how much RAM is too much? Is it okay to set apc.shm_size to many GB?

    - by Jeremy Clarke
    On our server we have a LOT of RAM for our traffic levels (16GB). The HTTP processes regularly eat up all CPU and need to be restarted without even getting close to using swap memory, so I'm looking for ways to spend RAM to ease the load on Apache (and/or help the separate MySQL server, which may be what is breaking Apache). I have many WordPress installs on the HTTPD instance, so APC sometimes uses as much as 900MB of RAM (according to the apc.php charts). Just in case, I have apc.shm_size set to 1600MB, which is more than it needs but not more than I can spare. This means there is usually lots of extra RAM available to APC, but also very little turnover, and fragmentation is never more than 1%. Is this dangerous? Should I be slimming down APC to less than 1GB just on principle? Should I be expecting some turnover within APC in the name of bringing its overall footprint down? Having so much memory devoted to APC means that in top/htop every single httpd process shows ~1.9GB in the VIRT memory column. Obviously this is shared memory and not used per-process, but could it be hurting our server? NOTE: The root problem with the server remains unclear, but the effect is that about 60 times a day all 8 CPUs fill up to 100% and everything stops working until Monit sees that Apache is broken and restarts it (Monit also saves the MySQL server). I'm not sure if APC is even part of the problem, but I'm trying to optimize everything just in case.
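    For comparison, a middle-ground APC configuration might look like this in apc.ini (the values are illustrative, not a recommendation):

        apc.shm_size=1024M      ; headroom above the ~900MB working set
        apc.ttl=7200            ; allow stale-slot reuse instead of full-cache dumps
        apc.stat=1              ; keep checking file mtimes (the safe default)

    On the VIRT question: the shared memory segment is mapped once and merely shown in every child's VIRT column, so the ~1.9GB figure is counted many times by top but allocated only once; on its own it should not be hurting the server.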


  • How do I fix a wrong UUID in grub.cfg?

    - by mozerella
    I run Debian Wheezy alone on my PC, and I recently copied the root partition to another partition with rsync, as I found that worked well (I also know about dd and ddrescue, but they leave unusable space on the new partition). I generated a new random UUID for the new partition with

        sudo tune2fs -U random /dev/sda9

    and also updated the / and /home entries in fstab. Then, as I know so little about GRUB, I used a GUI (GRUB Customizer) to probe for the new OS and add an entry to GRUB and the MBR; it makes an /etc/grub.d entry, then updates GRUB. On startup, the GRUB list contains the new OS (on sda9), but it boots the first OS (the one I copied from, sda5). /boot/grub/grub.cfg contains the new Debian OS, but the entry looks like this:

        set root='(hd0,msdos9)'
        search --no-floppy --fs-uuid --set=root 64662470-0e58-4dfd-90ac-43227d773556
        linux /boot/vmlinuz-3.2.0-2-amd64 root=UUID=cc3bca0d-aee4-4b9c-95c2-57212cc36d4d ro quiet
        initrd /boot/initrd.img-3.2.0-2-amd64

    The first UUID is sda9's, but the second UUID there is sda5's. I can change the second UUID at startup (with E) and it then boots sda9. So how can I get grub.cfg corrected so that the sda9 GRUB list entry boots from sda9 permanently?
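    Since grub.cfg is regenerated on every update, the durable fix is to correct whatever generates that stanza rather than grub.cfg itself. A sketch of the usual sequence, assuming the entry came from a GRUB Customizer file under /etc/grub.d/ and that /etc/fstab on sda9 already carries the new UUID:

        # find which generator file carries the stale sda5 UUID
        grep -rn cc3bca0d-aee4-4b9c-95c2-57212cc36d4d /etc/grub.d/

        # edit that file so the kernel line reads
        #   root=UUID=64662470-0e58-4dfd-90ac-43227d773556
        # then rebuild grub.cfg:
        sudo update-grub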


  • How to verify the system is using the right GPU after a system reset [duplicate]

    - by Antoros
    This question already has an answer here: Is my mobile AMD card being used? (2 answers)

    OS: Windows 8
    CPU: Intel Core i7 3635QM
    GPU 1: Intel HD Graphics 4000
    GPU 2: AMD Radeon HD 8870M
    Other info: system specs

    Problem: I'm unsure whether CCC is using the AMD card instead of Intel's. I have encountered several issues since updating to 8.1, and I don't know what to do.

    What happened:

    - Installed the 8.1 update on day one.
    - After 1 minute of use, BSOD; Windows never loaded again.
    - System Restore wouldn't recognize the 8.0 restore points, so I did a system reset to Windows 8, since the laptop was only 3 weeks old.
    - The reset broke things: it did restore to factory, BUT kept the registry almost intact. I had to install almost everything again, since the factory drivers were running against the updated drivers' registry entries, which caused several problems. CCC broke too (see below).

    What I've already done:

    - Installing new drivers on top of the old ones didn't work, so I used the AMD uninstaller first.
    - Uninstalled and re-installed Intel's HD Graphics driver.
    - Tried to install the mobility driver package, but AMD told me it wasn't compatible (even though that's the only driver they provide on their page, as seen here).
    - Tried to use Auto-Detect, but it couldn't install the driver because the card was disabled... because it didn't have the drivers (see what they did there?).
    - Had to use a workaround with Samsung Update; the driver didn't appear as a download, so I had to search for it and download it manually.

    Now the graphics card appears in Device Manager and Catalyst, but as "8800 series" (not the exact model), and I can't check the card with dxdiag, GPU-Z or HWMonitor. Launching a game set to "high performance" does speed it up a little, but I can't be sure it's using the AMD card.

    How do I verify it's working properly? HWMonitor won't show the AMD card even when set to high performance, and the latest GPU-Z won't run because of a problem with the Intel card (legacy versions won't either). What can I do now? I don't even know if I've fixed my problem or not. I also want to use Adobe Premiere with it, but the option to run it on the AMD card instead of Intel's is locked.

    Edit: it now seems to work, but I still can't change that setting for Adobe Premiere and the other programs that need it.


  • How to set up a Hyper-V domain with Internet access

    - by fynnbob
    First off, let me say that I'm not a network admin or server guy; I know very little about that stuff. What I'm trying to do is set up a virtualized domain using Hyper-V. Here is the configuration:

    Physical server: 4GB RAM, Windows Server 2008 R2 running Hyper-V

    Virtual environment:
    - One domain controller running Windows Server 2008 R2
    - One client running Windows Server 2008 R2

    I have been successful in setting up the virtual domain controller and adding the virtual client to it, but I'm stuck at trying to give the virtual environment Internet access. I can give the client VM Internet access if I remove it from the virtual domain, but once I add it back to the domain, Internet access is gone. I've read articles describing many different ways this can be done (using RRAS with NAT, using a wireless connection, etc.), but all of those articles only cover a small piece of the setup, and they also seem to be geared towards people who know their way around networking and servers, which I don't. I'd like to know more, but my thing is software development, and I have my hands full trying to keep up with everything in that realm. I simply want to set up a virtual domain with Internet access for testing. Can anyone point me to any "for Dummies" type information on how to set up this type of environment, or provide this kind of step-by-step help? Any help would be very much appreciated.
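    One detail worth checking first, because it produces exactly this symptom: domain-joined clients must use the DC as their DNS server, so the DC's DNS service needs forwarders (or working root hints) to resolve Internet names on the clients' behalf. A hedged sketch (the forwarder addresses are just examples):

        REM on the DC: give its DNS service somewhere to send Internet lookups
        dnscmd . /ResetForwarders 8.8.8.8 8.8.4.4

        REM on the client VM: confirm DNS points at the DC, then test resolution
        nslookup www.google.com

    If name resolution works but traffic still can't get out, then the missing piece is routing/NAT between the virtual switch and the physical network (the RRAS part the articles describe).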


  • Can a USB/IDE/SATA adapter be flaky?

    - by Ward
    I use USB/IDE/SATA converters a lot, and on the two that I have now I sometimes get errors copying files to drives. It only happens when I'm copying big files to the drive (big can mean as little as 100MB; I think it happens more often with bigger files, 300MB or more). Basically the copy will fail and I'll get one or more error messages about "Delayed write failed." But if I disconnect the drive and re-connect it, I'll usually be able to continue. (The file that was being copied will be corrupt, but otherwise the drive is fine.) I just noticed a new type of flakiness: the data transfer rate can vary widely. I copied one set of files (5 x 300MB files) and it took 10+ minutes, then I copied another set (approximately the same sizes) and it took less than a minute. I haven't done systematic testing, the other things I'm doing on my laptop at the same time might have some impact, and I haven't cross-checked the two adapters I have and the 3 hard drives I'm working with to see if there's a pattern. I'm mostly wondering if anyone else has seen anything like this.


  • Why does my microwave kill the Wi-Fi?

    - by Ohlin
    Every time I start the microwave in the kitchen, our home Wi-Fi stops working and all devices lose their connection with our router! The kitchen and the Wi-Fi router are at opposite ends of the apartment, but devices are used a little here and there. We'd been annoyed by the instability of the Wi-Fi for some time, and it wasn't until recently that we realized it was correlated with microwave usage. After some testing with the microwave on and off, we could narrow the problem down to occurring only when the router is in b/g/n mode and uses a fixed channel. If I change to b/g mode, or set the channel to auto, then there is no problem any more... but still! The router is a Zyxel P-661HNU ("802.11n Wireless ADSL2+ 4-port Security Gateway", with the latest firmware) and the microwave is made by Neff, rated at 1000W (if this information is useful to anyone). There is an "internet connection" light on the router and it doesn't go out when the interruption occurs, so I think this is purely an internal Wi-Fi issue. Now to my questions: What parts of the Wi-Fi can possibly be affected by microwave usage? Frequency? Disturbances in the electrical system? How can setting the channel to Auto make a difference? I thought the different channels were just some kind of separation system within the same frequency spectrum? Could this be a sign that the microwave is malfunctioning and slowly roasting us all at home? Is there any need to be worried? Since we were able to find router settings that cooperate well with our microwave's demand for attention, this question is mainly out of curiosity. But like most people out there... I just can't help the fact that I need to know how it's possible :-)


  • SQL Server 2000, large transaction log, almost empty, performance issue?

    - by Mafu Josh
    This is for a company whose database I have been helping troubleshoot. In SQL Server 2000, the database is about 120 gig. Something caused the transaction log to grow MUCH larger than normal, to over 100 gig: some hung transaction that didn't commit or roll back for a few days. That has been resolved, and the log now stays around 1% full or less, thanks to its hourly transaction log backups. It IS my understanding that a GROWING transaction log file can cause performance issues. But what I am a little paranoid about is the size. Although mainly empty, MIGHT it be having a negative effect on performance? I haven't found any documentation that suggests this is true. I did find this link: http://www.bigresource.com/MS_SQL-Large-Transaction-Log-dramatically-Slows-down-processing-any-idea-why--2ahzP5wK.html but in that post I can't tell if their log was full or empty, and there aren't any replies to it. So I am guessing it is not a problem; does anyone know for sure?
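    If the goal is simply to reclaim the space now that the runaway transaction is resolved, SQL Server 2000 can shrink the log file in place. A hedged sketch (file id 2 is the conventional log file, but verify with sp_helpfile; the 4096MB target is illustrative):

        USE MyDatabase
        GO
        DBCC SQLPERF(LOGSPACE)      -- confirm the log really is ~1% used
        DBCC SHRINKFILE (2, 4096)   -- shrink the log file to ~4GB
        GO

    For what it's worth, the commonly cited concern with oversized logs is the number of virtual log files (VLFs) slowing recovery, startup and log backups, rather than any effect on steady-state query performance.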


  • Nginx + WordPress + HHVM: Why isn't Batcache working? Would Varnish help even more?

    - by javipas
    I've heard great things about HHVM, so I've set up a copy of a WordPress blog (on another domain) with Nginx (with the Pagespeed module) and HHVM. Right now the benefits are obvious: on the same config, load times are between two and three times faster. I'm trying to speed things up a little bit more, so I've also installed Memcached and Batcache. I've installed the memcached package, copied object-cache.php (Pastebin) into the root folder of the WordPress blog, and after that I've installed the Batcache plugin and copied the advanced-cache.php (Pastebin) file into the wp-content folder. Also, I've included the line define('WP_CACHE', true); in the wp-config.php file. It seems it doesn't work, though. If I quickly reload the page several times, Batcache should serve the cached page, but it doesn't. It's easy to check by reloading (Cmd+R in Chrome on OS X) the page several times and then viewing the page's source: under the <head> section I should see some Batcache stats, but they aren't there. I wonder if someone could give me a hint on this. On a side note, I don't know if I should add some other component to push performance even further. I'm thinking about Varnish, but I'm not sure whether it would be useless, just another way of doing what I'm already doing. Any other component worth adding? (I'll test a CDN for images, minifying JS, etc. and some other tricks as well, but I'm talking from the server perspective.)
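    One hedged first check: Batcache sits on top of the persistent object cache, and the usual memcached object-cache.php backend needs the Memcache (or Memcached) PHP class; if the HHVM build doesn't ship it, both Batcache and the object cache fail silently while everything else keeps working. A throwaway probe dropped in the blog root (hypothetical filename; delete it afterwards) would confirm this:

        <?php
        // probe.php - checks whether the classes object-cache.php
        // depends on are actually present under HHVM
        var_dump(class_exists('Memcache'));
        var_dump(class_exists('Memcached'));

    On the Varnish question: it would overlap heavily with Batcache, since both cache whole pages, so it usually makes sense to get one of them demonstrably working before stacking the other on top.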


  • How to have LiveJournal delegate my OpenID to something else?

    - by T-Boy
    I understand that if I have full control over my domain, I can set it up to delegate the task of authenticating to another OpenID service provider. The problem is, what I'd like to do is get the LiveJournal server to pass the authentication to someone else, instead of having LJ do it. Preferably, I'd like LiveJournal, when asked by a website, to say: "No, I don't do it anymore; go to this address." The plan was that this address would be in a domain I fully control, which would then pass it on to whichever service provider I choose. I don't even know if I've understood OpenID correctly, if all these shenanigans are necessary, if my question makes sense, or if it's even possible with a service provider like LiveJournal. ETA: Doing a little more reading, and examining the source of my LiveJournal user page, I note this particular line in the page's <head> area:

        <link rel="openid.server" href="http://www.livejournal.com/openid/server.bml" />

    I suspect that changing this would let me forward OpenID requests to whomever I wish; so far so good. Now comes the hard part: figuring out how to change it using LiveJournal's customization options, if that is at all possible (here's hoping I don't need to pay to get that functionality).
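    For what it's worth, OpenID delegation is designed to run in the other direction: the page at the URL you type as your identity carries the pointers, so a domain you control can delegate to any provider without LiveJournal's cooperation. A sketch of the classic delegation tags placed in that page's <head> (the provider URLs are placeholders):

        <link rel="openid.server"    href="https://provider.example.com/openid/server" />
        <link rel="openid.delegate"  href="https://yourname.provider.example.com/" />
        <!-- OpenID 2.0 equivalents -->
        <link rel="openid2.provider" href="https://provider.example.com/openid/server" />
        <link rel="openid2.local_id" href="https://yourname.provider.example.com/" />

    Getting LiveJournal to forward requests for an LJ identity URL would require editing the <head> that LiveJournal itself serves, which is exactly the part a user can't touch; using your own domain as the identity URL avoids that dead end entirely.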


  • How to go about rotating logs which are arbitrarily named and placed in deeply nested directories?

    - by Roman Grazhdan
    I have a couple of hosts which are basically a playground for developers. On these hosts, each developer has a directory under /tmp where he is free to do all he wants: store files, write logs, etc. Of course, the logs have to be rotated, or else the disk will be 100% full in a week. The files can be plenty, but I've dealt with that using paths like /tmp/[a-e]*/* and so on, and lived happily for a while. But as they try new cool stuff on the machine, the logrotate rules grow ugly and unmanageable, and it's getting more difficult to understand which files the glob hits. Also, logrotate will segfault if asked to rotate a socket. I don't feel like trying to enforce naming policies in that environment; I think it would take quite a lot of time, get people annoyed, and still fail at some point. And I still need to manage the logs, not just rm the dirs at night. So is it a good idea, in circumstances like these, to write a script to handle these temporary files? I prefer sticking with standard utilities whenever possible, but here logrotate is getting less and less manageable. Has anyone perhaps heard of logrotate alternatives which would work well in such an environment? I don't need emailed logs or other advanced features, so theoretically some well-commented find | xargs would do. P.S. I do have a log aggregator, but this stuff is not going to touch my little cute logstash machine.
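    Since nothing fancy is needed, the find-based sweep just described could look roughly like this (the size threshold, retention period and the assumption that "big, regular, uncompressed" is a good enough proxy for "log" are all judgment calls to adjust):

        #!/bin/sh
        # rotate-tmp-logs.sh - a minimal sweep for the /tmp/*/ playgrounds.
        # -type f skips the sockets that make logrotate segfault.
        # Caveat: a file still held open by a writer keeps growing on the old
        # inode after gzip replaces it; such apps need copytruncate-style care.

        find /tmp/*/ -type f -size +10M ! -name '*.gz' \
            -exec gzip -S ".$(date +%Y%m%d).gz" {} \;

        # prune rotations older than 30 days
        find /tmp/*/ -type f -name '*.gz' -mtime +30 -delete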


  • Fill down in Excel, but based on multiple values

    - by Jenn D.
    I have spreadsheets (not created by me) that have blank entries in one column where they should really have data. I want to take every empty cell and fill it with the nearest value above it. I'm looking for as little manual intervention as possible, because I'll have to do it repeatedly. I thought some previous version of Excel, or maybe another spreadsheet from the distant past, would do this by default; that is, if you selected the column with foo and bar and chose the equivalent of "fill down", you would get what's in the WANT column. What I actually get in Excel is the GET column.

        HAVE:     WANT:     GET:
        foo 1     foo 1     foo 1
            2     foo 2     foo 2
        bar 1     bar 1     foo 1
            2     bar 2     foo 2
            3     bar 3     foo 3

    I'm worried that this might need a macro to be done properly. I used to be a whiz with Excel macros, and then suddenly they were all in VB. My fallback position will be to dump the whole thing to CSV and write a Python script, but if there's any way to do it in Excel, that would be much preferable. Even if it involves a couple of different manual steps, that's fine; just not one step per group of lines. That is, a process of "copy the column, do X to it, cut and paste it back" would work, but "do X for each occurrence of foo or bar" won't; the files are too big for that. Any thoughts are appreciated!
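    A minimal sketch of the standard no-macro technique for exactly this, assuming the sparse column is column A (it's one manual pass per sheet, not one per group of lines):

        1. Select the sparse column (e.g. A2:A100).
        2. F5 -> Special... -> Blanks -> OK    (selects only the empty cells)
        3. Type =  then press the Up arrow     (builds a reference to the cell above)
        4. Ctrl+Enter                          (fills every selected blank at once)
        5. Copy the column, Paste Special -> Values, to freeze the results.

    Each blank gets a relative reference to its nearest value above, which is what produces the WANT column; step 5 matters if the sheet will later be sorted.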


  • Windows 2008 R2 file share - any way to "lock it down" outside of a 3rd party app?

    - by TheCleaner
    I have a 3rd party app that "makes a call" to write files to a file share on our network using the currently logged-in Windows domain user's credentials. Meaning: the 3rd party app doesn't pass its own credentials, but simply issues a behind-the-scenes copy command that takes a specified source file and copies/moves it to the destination "repository" on the file share. The basic premise is that it keeps revisions/approvals for document control (think svn/git, I guess; similar to this question: Lock down Windows folder to only be updatable by SVN). This all works fine... but here's my issue: I need a way to lock down the file share from being accessed/modified outside of the 3rd party app (meaning prevent Explorer/Word/Excel/etc. from getting to that share). I know I can do the following:

    - Make the share a hidden share ($). This definitely helps; most users would have zero clue how to get to such a share. It solves probably 95% of my issue.
    - Go one step further and set the "Hidden" attribute on the folders in the hidden share. Even if a user knows the path to the hidden share, like \\server\hidden$, they still won't see the folders in that share without changing their Explorer options to show hidden files/folders.

    Any other ideas on how I can lock this down? The users still need modify rights to the share/folders, since the 3rd party app relies on their Windows permissions when copying files into it. I can't really use 3rd party tools to password-protect the folder/share without breaking the 3rd party app's functions.
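    A sketch of the two measures described above as commands (share name, path and group are placeholders):

        REM create the hidden share; the trailing $ keeps it out of browse lists
        net share Repo$=D:\DocRepo /GRANT:"Domain Users",CHANGE

        REM hide the folders inside it from casual Explorer browsing
        attrib +h D:\DocRepo\* /S /D

    The honest limitation: since the app uses the users' own Windows credentials, NTFS cannot distinguish the app from Explorer, so anyone with modify rights and the path can still get in. Hiding raises the bar, but it stays obscurity rather than enforcement.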


  • Move EFI System Partition to another drive

    - by Pincopallino
    I had a Windows 8 installation on an HDD, using UEFI to boot. The HDD has the following GPT table:

        DISKPART> list partition

        Partizione ###  Tipo              Dim.    Offset
        --------------- ----------------  ------- -------
        Partizione 1    Ripristino        300 Mb  1024 Kb
        Partizione 2    Sistema           100 Mb  301 Mb
        Partizione 3    Riservato         128 Mb  401 Mb
        Partizione 4    Primario          390 Gb  529 Mb
        Partizione 5    Primario          540 Gb  390 Gb

    (I apologize that it's in Italian, but the translation is quite straightforward.) I recently bought an SSD drive, connected it and installed a fresh Windows 8. Now I have a working dual boot, but the EFI System Partition is on the HDD instead of the SSD. Here's the SSD partition list:

        Partizione ###  Tipo              Dim.    Offset
        --------------- ----------------  ------- -------
        Partizione 1    Riservato         128 Mb  1024 Kb
        Partizione 2    Primario          221 Gb  129 Mb

    I think the best solution would be to have it on the SSD, for two reasons. The first is performance (I guess it would be a little bit faster on the SSD due to the spin-up time of an HDD, but I may be wrong about that). The second is consistency: as I plan to use only the Windows 8 installation on the SSD, and I'm probably going to erase the system partition on the HDD to use it as a data storage device, I think the boot partition should be on the same drive as the OS. So the question is: how do I move the EFI System Partition to the SSD?
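    The usual approach is not to move the old ESP, but to create a fresh one on the SSD and re-create the boot files there. A hedged sketch from an elevated prompt (disk and partition numbers, and the assumption that unallocated space can be made available, must be checked against your own layout first):

        diskpart
        list disk
        rem select the SSD; verify the disk number from the listing first
        select disk 0
        rem if there is no unallocated space, shrink the Windows partition first
        create partition efi size=100
        format fs=fat32 quick
        assign letter=S
        exit

        rem back in cmd: rebuild the UEFI boot files onto the new ESP
        bcdboot C:\Windows /s S: /f UEFI

    Once the firmware boots the SSD's entry reliably, the old ESP on the HDD can be removed along with the rest of that installation.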


  • Faster, secure protocol/code required for long-distance transfer

    - by Chopper3
    I've run into a problem, and I'm looking for a new secure protocol/client/server that's faster over a 1Gb/s fibre link. Let me tell you the story... I have a pair of redundant, diversely-routed, 1Gb/s links over a distance of around 250 miles or so (not dark fibre, but a dedicated point-to-point link, not a mesh). At the 'client' end I have an HP DL380 G5 (2 x dual-core 2.66GHz Xeons, 4GB, Windows 2003EE 32-bit); at the 'server' end I have an HP BL460c G6 (2 x quad-core 2.53GHz Xeons, 48GB, Oracle Linux 5.3 64-bit). I need to transfer around 500 x 2GB files per week from the client to the server machine, but the transfer NEEDS to be secure. Using either iperf or regular FTP I can get ~80MB/s of transfer pretty consistently, which is great. Using WinSCP or Windows SFTP I can't seem to get more than ~3-4MB/s; at this point the server's CPU is 3% busy while CPU0 of the client goes to ~30% utilised. We've tried editing various TCP window sizes with little success. Both ends are connected to quite low-usage Cisco Cat6509s with Sup720s. I can replace the client machine with a newer machine and/or move it to Linux, but this will take time. Clearly these single-threaded secure Windows clients are introducing too much latency doing their encryption. So a few questions/thoughts:

    - Are there any higher-performing secure protocols or client software for Windows that I could try? I'm pretty protocol-agnostic, so long as it'll work between Windows and Linux.
    - Should I be using hardware to do the encryption, either in the client or the network? If so, what would you recommend?
    - I'm not convinced that just swapping the server would be much faster; the CPU was only at 30%, but then again that's higher than I'd have expected given the load. Moving the client end to Linux may be a better idea, but would be quite disruptive.
    - Am I missing a trick?

    Thanks in advance.
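    Before changing hardware, it may be worth testing whether the bottleneck is the cipher rather than SSH itself; OpenSSH lets the client pick a cheaper cipher per connection. A hedged sketch from a Unix-like client (Cygwin's OpenSSH on the Windows box, or any scratch Linux machine on the same link; arcfour trades cryptographic strength for speed, so only if that's acceptable on a private circuit):

        # single stream, lighter cipher
        scp -c arcfour bigfile.dat user@server:/data/

        # or several transfers in parallel to spread across cores
        rsync -e "ssh -c arcfour" --partial bigfile1.dat user@server:/data/ &
        rsync -e "ssh -c arcfour" --partial bigfile2.dat user@server:/data/ &
        wait

    Since each SSH stream is encrypted by a single thread, running several in parallel is often the simplest way to put the client's other cores to work.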


  • Ways to parse log files based on the NCSA combined format

    - by Kyle
    I've done a bit of site: searching with Google on Server Fault, Super User and Stack Overflow. I also checked non-site-specific results and didn't really see a question like this, so here goes... I did spot this question, related to grep and awk, which has some great knowledge, but I don't feel it addresses the text-qualification challenge. This question also broadens the scope to any platform and any program. I've got squid or apache logs based on the NCSA combined format. When I say based, I mean the first n columns in the file follow the NCSA combined standard; there might be more columns with custom stuff. Here is an example line from a squid combined log:

        1.1.1.1 - - [11/Dec/2010:03:41:46 -0500] "GET http://yourdomain.com:8080/en/some-page.html HTTP/1.1" 200 2142 "-" "Mozilla/5.0 (Windows; U; Windows NT 6.1; C) AppleWebKit/532.4 (KHTML, like Gecko)" TCP_MEM_HIT:NONE

    I'd like to be able to parse n logs and output specific columns, for sorting, counting, finding unique values, etc. The main challenge, what makes it a little tricky, and why I feel this question hasn't yet been asked or answered, is the text-qualification conundrum: the bracketed timestamp and the quoted fields contain the space delimiter. When I spotted asql in the grep/awk question, I was very excited, but then realised that it didn't support combined out of the box; something I'll look at extending, I guess. Looking forward to answers, and learning new stuff! Answers don't have to be limited to a platform or program/language. For the context of this question, the platforms I use most are Linux and OS X. Cheers
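    For the qualification problem specifically, gawk (4.0+) can be told what a field looks like rather than what separates fields, which handles the quoted request/user agent and the bracketed timestamp directly. A sketch (field numbers follow the combined layout shown above, so $5 is the request and $6 the status; extra custom columns simply become $10, $11, ...):

        # count requests per status code, combined-format-aware
        gawk 'BEGIN { FPAT = "([^ ]+)|(\"[^\"]*\")|(\\[[^]]*\\])" }
              { count[$6]++ }
              END { for (s in count) print s, count[s] }' access.log

    The same FPAT line works for sorting and unique-value jobs by swapping the body, e.g. print $9 | "sort | uniq -c" for user agents.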

