Search Results

Search found 2703 results on 109 pages for 'just curious'.

  • Referencing groups/classes from Puppet dashboard in my site manifest

    - by Banjer
    I'm using Puppet Dashboard as my ENC and I'm not sure how to reference or use class and group classifications from /etc/puppet/manifests/site.pp. I have two groups defined in the dashboard: CentOS6 and SLES11. What should my site.pp look like if I want to include a certain list of modules in the CentOS6 group and a certain list of modules in the SLES11 group? I'm trying to do something like this:

        # /etc/puppet/manifests/site.pp
        node basenode {
          include hosts
          include ssh::server
          include ssh::client
          include authentication
          include sudo
          include syslog
          include mail
        }

        node 'CentOS6' inherits basenode {
          include profile
        }

        node 'SLES11' inherits basenode {
          include usrmounts
        }

    I have OS-specific case statements within my modules, but there are some modules that will only be applied to a certain distro. So I suppose I have two questions:

    1. Is this the best way to apply modules/resources in an OS-specific manner? Or does the above make you want to vomit?
    2. Regardless of #1, I'm still curious as to how to reference classes, groups, and nodes from Dashboard within my manifests. I've read the External Nodes doc, but I'm not seeing how they correspond to manifests.

    Thanks all.

    Read the article

  • CPU-adaptive compression

    - by liori
    Hello, let's assume I need to send some data from one computer to another over a pretty fast network... for example a standard 100Mbit connection (~10MB/s). My disk drives are standard HDDs, so their speed is somewhere between 30MB/s and 100MB/s. So I guess that compressing the data on the fly could help. But... I don't want to be limited by the CPU. If I choose an algorithm that is CPU-intensive, the transfer will actually go slower than without compression. This is difficult with compressors like GZIP and BZIP2 because you usually set the compression strength once for the whole transfer, and my data streams are sometimes easy, sometimes hard to compress--this makes the process suboptimal because sometimes I do not use the full CPU, and sometimes the bandwidth is underutilized. Is there a compression program that would adapt to the current CPU/bandwidth and hit the sweet spot so that the transfer is optimal? Ideally for Linux, but I am still curious about all solutions. I'd love to see something compatible with GZIP/BZIP2 decompressors, but this is not necessary. So I'd like to optimize total transfer time, not simply the number of bytes to send. Also, I don't need real-time decompression... real-time compression is enough. The destination host can process the data later in its spare time. I know this doesn't change much (compression is usually much more CPU-intensive than decompression), but if there's a solution that could use this fact, all the better. Each time I am transferring different data, and I really want to make these one-time transfers as quick as possible. So I won't benefit from getting multiple transfers faster due to stronger compression. Thanks,
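
    To make the idea concrete, here is a rough Python sketch of such an adaptive loop - not an existing tool, just an illustration. It assumes a connected socket, a simple length-prefixed framing that the receiver understands, and that comparing per-chunk compress time with send time is a good enough proxy for which resource is the bottleneck:

        import time
        import zlib

        def send_adaptive(sock, stream, chunk_size=1 << 20):
            level = 6  # start in the middle of zlib's 1..9 range
            for chunk in iter(lambda: stream.read(chunk_size), b""):
                t0 = time.monotonic()
                comp = zlib.compressobj(level)
                data = comp.compress(chunk) + comp.flush()
                t1 = time.monotonic()
                sock.sendall(len(data).to_bytes(4, "big") + data)
                t2 = time.monotonic()
                # Compression took longer than sending: CPU is the bottleneck, back off.
                if (t1 - t0) > (t2 - t1) and level > 1:
                    level -= 1
                # Sending dominated: there is CPU headroom, so compress harder.
                elif (t2 - t1) > 2 * (t1 - t0) and level < 9:
                    level += 1

    The receiver would read each 4-byte length prefix and pass the payload to zlib.decompress(). For what it's worth, recent zstd releases ship an --adapt option built around the same idea.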

    Read the article

  • 13" MacBook Pro with Win 7 and External VGA gets 640x480

    - by Jim McKeeth
    I have a brand new 13" MacBook Pro - 2.26 GHz and the NVIDIA 9400M Video card. I installed Windows 7 (final) in boot camp and booted up to Windows 7. Installed all the drivers from the Apple disk and it was working great. Then I attached the external VGA adapter (from apple) to connect to a projector and it dropped down at 640x480 resolution. No matter what I did it wouldn't let me change to a higher resolution if the external VGA was connected. Once it disconnects then it goes back to the normal resolution. If I am booted into Snow Leopard it works fine. I tried updating the NVIDIA drivers and it behaved exactly the same. Ultimately I want to get 1024x768 or better resolution when connected to an external display. If it isn't fixable then I am curious if anyone else has seen this, if it is a known issue, and who to contact for support (Apple, Microsoft or NVIDIA?) Update: Just attaching the Mini-DVI to VGA adapter kicks it into 640x480, no projector is required. I tried forcing the display driver from Generic PnP Monitor to one that supported 1024x768 and that didn't work either.

    Read the article

  • Windows Server 2008 backup VHD's - is it possible to mount/open in Windows 7?

    - by Simon
    Hi all, is it possible to mount the VHD files created by the Windows Server 2008 backup utility onto a Windows 7 (release) client? Following an array failure I was very worried that there was a problem with both the backup sets on different USB drives, as attaching the VHD to a Win 7 box did not show the expected structure (instead they behaved like unformatted disk space). Subsequently, I've attached the backup drive to a 2008 R2 machine that I'd intended to be the replacement, and the backup set can be browsed without issue (seemingly). When the new disks arrive I'll go through the recovery process and see where we are, but it looks promising so far. Is it simply the case that you can't take server-created VHDs and mount them on desktop machines? (Rather than hyperventilating at the thought of years of lost photos and email, I'm now just mildly curious.) Edit: One thing that has confused things is that the backup utility on Win 7 is more restrictive about restoring from external devices than the equivalent on 2008 R2. With R2, I can restore files 'from another server' and browse to external storage. Win 7 only allows the backup to be located on a network share. Once my box of new disks arrives and I've got something to restore onto, I'll move the smaller of the backup VHDs onto network storage reachable by Win 7 and see if the VHD is readable. I haven't read up on the VHD process used by the backup app - I'm assuming it's a base VHD plus differencing files used for incremental backups, and that the restore app understands this. Finally: in retrospect the question should have been, 'Can I restore a 2008 R2 backup set via a Win 7 client?' Thanks

    Read the article

  • Performance experiences for running Windows 7 on a Thin-Client?

    - by Peter Bernier
    Has anyone else tried installing Windows 7 on thin-client hardware? I'd be very interested to hear about other people's experiences and what sort of hardware tweaks they had to do to get it to work. (Yes, I realize this is completely unsupported... half the fun of playing with machines and beta/RC versions is trying out unsupported scenarios. :) ) I managed to get Windows 7 installed on a modified Wyse 9450 thin client and while the performance isn't great, it is usable, particularly as an RDP workstation. Before installing 7, I added another 256MB of RAM (512MB total), a 60GB laptop hard drive and a PCI video card to the 9450 (the latter in order to increase the supported screen resolution). I basically did this to see whether or not it was possible to get 7 installed on such minimal hardware, and to see what the performance would be. For a 550MHz processor, I was reasonably impressed. I've been using the machine for RDP for the last couple of days and it actually seems slightly snappier than the default Windows XP Embedded install (although this is more likely the result of the extra hardware). I'll be running some more tests later on as I'm curious to see in particular whether the streaming video performance will improve. I'd love to hear about anyone's experiences getting 7 to work on extremely low-powered hardware, particularly any sort of tweaks that you've discovered to increase performance.

    Read the article

  • Does NetworkSolutions have a good DNS service?

    - by joxl
    I'm recovering from a DNS disaster and I need some good advice on an alternate solution. My company owns a domain name through NetworkSolutions. Our website is hosted by another company who also maintains our DNS records. Our email is hosted by Google Apps, and the MX records are maintained through the afore-mentioned website/DNS host. Yesterday our website/DNS host had a serious hiccup in some software and completely overwrote all of our DNS records with invalid values, successfully pointing our domain and MX records at the wrong servers. Unfortunately it wasn't caught until it had time to significantly propagate. On top of that, it wasn't fixed until several hours later; combine that with a long TTL on the records and we have customers who are still bouncing emails. Anyhow, I am now completely terrified of this company's ability to do a good job, so I am considering switching to NetworkSolutions for our DNS service. I need the ability to configure A, CNAME, MX, and TXT records, preferably with a nice user interface (our current provider has a poor UI and doesn't support TXT records). Is NetworkSolutions a recommended DNS host? I am a little biased in their direction because the service will be free since we already pay them for our domain name. However, I'm curious what others have experienced with their service.
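
    Whichever provider ends up hosting the zone, a small external check makes the next "hiccup" visible in minutes rather than hours. A rough sketch using the dnspython package (the domain is a placeholder, and older dnspython versions spell resolve() as query()); run it from cron and alert on changes:

        import dns.resolver  # pip install dnspython

        def show_records(domain, rtypes=("A", "CNAME", "MX", "TXT")):
            # Print what the rest of the world currently sees for each record type,
            # so a botched update or stale data behind a long TTL is easy to spot.
            for rtype in rtypes:
                try:
                    answers = dns.resolver.resolve(domain, rtype)
                except (dns.resolver.NoAnswer, dns.resolver.NXDOMAIN):
                    print(f"{rtype:5} (no records)")
                    continue
                for rdata in answers:
                    print(f"{rtype:5} {rdata.to_text()}")

        show_records("example.com")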

    Read the article

  • Lenovo S10 Ideapad will not boot while original hard drive is installed, neither from hard drive or

    - by aki
    Hello, first time posting here so I'll try to be very clear. I have a Lenovo S10 Ideapad netbook which fails to boot to an OS. It shows the Lenovo splash screen and can get to the BIOS, but it doesn't get to GRUB (it was dual booting Ubuntu 9.10 and Win 7 and had been working fine for months, i.e. this isn't a new dual boot gone bad). After the splash screen it displays a flashing cursor in the upper left corner. Power cycling was of no avail. Here is what I have done trying to narrow the problem down: The machine will boot to Ubuntu using an install/live USB drive, but only if ANOTHER hard drive is installed or NO hard drive is installed. The boot order always lists USB first. Also, there is a 2GB RAM upgrade, but I think that's fine; the Ubuntu USB drive boots fine with it, and "free" sees the whole 2GB of memory. So it seems like the hard drive is bad. I was able to put the bad drive in a different laptop and mount it to recover files. I'm ready to replace the bad hard drive, but I would like to know if this situation makes any sense. If the hard drive is bad, shouldn't I still be able to boot with the Ubuntu USB drive while the bad drive is installed? I would have expected the machine to boot into Ubuntu anyway, even with a bad drive, since the boot order lists USB first. But it seems that when the bad drive is installed, the machine ignores the USB drive and hangs with the flashing cursor. Thanks for any ideas! Sorry for the long post, I just want to put all the info I have up front! Basically I'm going to buy a new drive, but I am mostly curious whether this is a typical or at least not unusual situation.

    Read the article

  • FTP FileWatcher

    - by Meiscooldude
    So, I am in this little predicament where I am stuck watching a few FTP folders to see if they have new files added to them. If they do, it needs to throw an event with the file name, thereby telling something else to download that file. This is a pretty simple object to make; I was just curious if anyone knew how expensive this operation would be. I plan on using the command NLIST because I don't need file size information, and there will be no sub-directories in the folder. Each file in the folder will have exactly 25 characters in its name. There could be anywhere from 10 to 'maybe' a couple thousand (max around 2000) files per folder (usually on the lower end, 100-300, but currently growing). The files are anywhere from 250KB to a very VERY unlikely 10MB (usually within the 250KB to 4MB range). There possibly could be up to a few hundred folders (in which case I could change the watch frequency depending on the number of folders), but currently there are only a few (6-10ish). There also would be multiple logins for the FTP server; different logins would have access to different folders. I am not asking for an implementation, just whether anyone has some first- or second-hand knowledge about FTP and how this could affect my network. I am not opposed to putting in file retention times or changing the frequency in which I check for new files.
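
    To put the cost in perspective, the whole watch boils down to one short-lived connection and one NLST per folder per poll. A rough Python sketch of the loop using the standard-library ftplib (host, credentials, folder and callback are placeholders, not a recommendation of any particular stack):

        import time
        from ftplib import FTP

        def watch_folder(host, user, password, folder, on_new_file, interval=60):
            seen = set()
            while True:
                with FTP(host) as ftp:          # one short-lived connection per poll
                    ftp.login(user, password)
                    ftp.cwd(folder)
                    names = set(ftp.nlst())     # NLST: names only, no sizes
                for name in sorted(names - seen):
                    on_new_file(name)           # "throw the event" for each new file
                seen = names
                time.sleep(interval)

        watch_folder("ftp.example.com", "user", "secret", "/incoming",
                     on_new_file=lambda n: print("new file:", n))

    With 25-character names, even 2000 files is only roughly 50KB of listing data per poll, so the polling interval and the number of folders matter far more than the listing itself.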

    Read the article

  • Can't connect to MySQL on CentOS 5 Error 13 - Permission Denied

    - by abszero
    OK, I have an install of CentOS 5 running as a guest OS in VirtualBox. The network card for the CentOS box is bridged with that of my host OS so that the boxes can see each other. CentOS has an IP of 192.168.1.108 and my host box has an IP of .104. Everything, with regard to networking, seems to be working properly, as I can access the Drupal install that is on the CentOS box from a web browser on my host box by navigating to http://192.168.1.108. However, when I try to configure the database for Drupal through the Drupal install interface I am getting the "Can't connect to MySQL" error. First I thought this might have been a firewall issue, so I stopped iptables, but that had no effect. I thought maybe the user I had set up did not have access to the server, so I tried root and that did not work. Searching on the net said that I needed to provide a bind-address parameter in my.cnf, so I did that with no change. (As a side note, the length of my my.cnf file was MUCH shorter than the ones presented online. In fact, under mysqld all I have are datadir, socket, user, and bind-address. Is this normal or should the file be more verbose?) After a few hours of messing with permissions and such I tried using 'localhost' as the value for the database server, from my HOST OS, and the Drupal install kicked off without a problem. So while my issue is resolved, I am curious as to why 'localhost' works and why 192.168.1.108 did not. Is there something I need to do to specifically access the MySQL box via the aforementioned IP? Thanks.
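
    One way to narrow this kind of thing down is to test both connection paths outside of Drupal. A minimal sketch with the PyMySQL package (user, password and database are placeholders): error 1045 usually means the grants don't allow that user from that host, while a 2003-style "can't connect" points at the network path. If the script succeeds where Drupal fails, the block is specific to the web-server process (on CentOS that is often SELinux confining httpd):

        import pymysql  # pip install pymysql

        for host in ("localhost", "192.168.1.108"):
            try:
                conn = pymysql.connect(host=host, user="drupal", password="secret",
                                       database="drupal", connect_timeout=5)
                print(host, "-> connected, server", conn.get_server_info())
                conn.close()
            except pymysql.err.OperationalError as exc:
                # (1045, ...) = credentials/grants for this user@host
                # (2003, ...) = could not reach the server at all
                print(host, "->", exc)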

    Read the article

  • Network card very slow, only on Windows

    - by J Penguin
    This only happens to one of my machines, and only when booting into Windows 7. No matter what network card I put in, Windows defaults its mode to 10Mbps full duplex. Transfer speed is approximately 1 MB/s. If I set it to 100Mbps, the transfer drops to 100-200KB/s. If I set it to 1000Mbps, the connection is lost completely. I've tried swapping in different cards, both PCI-E and PCI. I've tried updating Windows, I've tried reinstalling the drivers... On this very same machine, if I boot into Fedora, it can use the card at its full capacity of 1000Mbps, transferring 80+ MB/s. And all the cards work just fine when plugged into other machines on the same network. I'm very curious. What could be the reason for this? The only different software that this machine has is VirtualBox with a VPN emulator, but disabling that VPN doesn't seem to do anything. I would like to get this fixed, hopefully without reinstalling Windows >_< Will that be possible?

    Read the article

  • Msg 10054, Level 20, State 0, Line 0 Error when altering a stored procedure to add a couple of curso

    - by doug_w
    We have a home-rolled backup stored procedure that uses xp_cmdshell to create and clean up database backups. We have an instance that is 2005 SP3 that we are trying to deploy this script to. I am at a bit of a loss for why it is not working. When I execute the CREATE it runs for about 30 seconds and yields the following error:

        Msg 10054, Level 20, State 0, Line 0
        A transport-level error has occurred when sending the request to the server.
        (provider: TCP Provider, error: 0 - An existing connection was forcibly closed by the remote host.)

    In my tinkering I discovered that removing the cursors that actually do the work allows me to create the stored procedure (not very helpful for me, though). If I add the cursors back in using an ALTER, the error returns. I would be curious if someone has experienced this problem and knows of a solution or workaround. I am not opposed to posting the source; it is just lengthy. Things I have checked:

    - Error logs
    - No dump files in the log directory

    Thanks in advance for the help.

    Read the article

  • Paid antivirus solutions for Windows

    - by AP Erebus
    NOTE: If you're looking for recommendations on free antivirus, check this question: http://superuser.com/questions/2/free-antivirus-solutions-for-windows Much like the above, I'm curious about opinions on the best PAID antivirus solution, personal or commercial. Enterprise solutions are welcome, and as much detail regarding costs as possible is welcome too. Personally I'm looking for a licence that will grant me more than one computer install and quality technical support, for personal use. As in the free antivirus question: see if your antivirus of choice is already listed. Chances are it is. If you spot an answer that mentions one you already use, vote that up if you think it's a good solution. If you know of a feature or drawback not listed, or can include experiences in dealing with it, please edit the answer accordingly. If you know of any that can also be used at work, please point this out. This covers all Windows platforms from XP, Vista and Windows 7. If you see an existing entry that needs an update, or you want to add your testimonial, please do.

    Read the article

  • Unexplained cache RAM drops on Linux machine

    - by FunkyChicken
    I run a CentOS 5.7 64-bit machine with 24GB of RAM, running kernel 2.6.18-274.12.1.el5. This machine runs only Nginx, php-fpm and XCache as extra applications. For about 3 weeks the memory behavior on this machine has changed and I cannot explain why. There are no crons running which flush anything like this. There are also no large numbers of files being deleted/changed during these drops. The 'cached' memory gets dropped about every few hours, but it's never a set gap between flushes, which indicates to me that some bottleneck gets reached instead. It also always seems to be when total memory usage gets to about 18GB, but again, not always exactly 18GB. This is a graph of my memory usage: as you can see in the graph, the 'buffers' always stay more or less the same; it is mainly the 'cache' that gets dropped. Running vmstat -m, I have outputted the memory usage just before and just after a memory drop. The output is here: http://pastebin.com/diff.php?i=hJqZqztm ('old version' being before, 'new version' being after a drop). About 3 weeks ago my server crashed during a heavy DDoS attack; after I rebooted the machine this odd behavior started. I have checked a bunch of logs, restarted the machine again, and cannot find any indication of what changed. During these 'cache' memory drops, my inode usage drops at the same time. Does anyone have any idea what might be causing this behavior? Clearly my RAM isn't full, so I am curious why this could be happening.
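
    A lightweight way to pin down the exact moment and size of each drop (so it can be lined up with cron entries, service restarts or traffic spikes) is to sample /proc/meminfo on a short interval. A rough sketch, assuming the usual field names on a 2.6.18 kernel:

        import time

        def meminfo(fields=("MemFree", "Buffers", "Cached")):
            out = {}
            with open("/proc/meminfo") as f:
                for line in f:
                    key, value = line.split(":", 1)
                    if key in fields:
                        out[key] = int(value.split()[0])  # value is in kB
            return out

        prev = meminfo()
        while True:
            time.sleep(60)
            cur = meminfo()
            dropped = prev["Cached"] - cur["Cached"]
            if dropped > 1024 * 1024:  # only log drops bigger than ~1GB
                print(time.strftime("%F %T"), f"cache dropped by {dropped} kB", cur)
            prev = cur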

    Read the article

  • Wireless disconnects every 30 minutes

    - by Kez
    I have had a look through all the related questions and I get the feeling my problem is unique. My wireless connection disconnects every 30 minutes, for maybe 1 to 3 seconds. If I am browsing the web while it happens, I get the "page cannot be displayed" error message. I checked the event logs as I was curious to know if there was anything in there. There is:

        Event 8033: BROWSER - The browser has forced an election on network
        \Device\NetBT_Tcpip_{B919CC30-25A9-45DD-A09F-549A6262FC9E}
        because a master browser was stopped.

    It is reported exactly every 30 minutes, which coincides with my wireless problem. I am running Windows 7 Ultimate, 32-bit. My wireless is a Realtek RTL8187 integrated into an ASUS P5K-E/WiFi motherboard. It is in a workgroup and has never been on a domain. This problem does not affect any other computers. Wireless reception is great, and I have ensured that the wireless unit is transmitting on a frequency not used by any nearby wireless base stations. How can I fix this pesky problem?

    Read the article

  • How does USB device recognition work?

    - by GorillaSandwich
    I'm curious how USB device recognition works in Windows. I imagine it's something like this:

    - When you plug in a device, it tells Windows "here's my device ID to tell you what I am"
    - Windows looks to see if any drivers have been installed that match that device ID. The driver probably tells Windows what the device should be called - like "BlackBerry Curve" or "Canon Printer"
    - If so, it somehow associates that device with that driver
    - Otherwise, it looks for a matching driver online (if you let it)

    Am I right? If so, that still leaves some questions:

    - When you install drivers, where do they go? Are they files in a folder, or do they get added to the registry?
    - What is Windows doing when it first recognizes the device, thinks, and finally says "your new device is installed and ready to use"?
    - Where does Windows look for missing drivers? Is it in their own database? Do device manufacturers submit drivers to Microsoft for inclusion there?

    Can anybody explain how this process really works? Also, do other OSes do this differently?
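
    One place where the result of this bookkeeping is visible is the registry: Windows records each USB device it has enumerated under HKLM\SYSTEM\CurrentControlSet\Enum\USB, keyed by vendor/product ID, along with the name supplied by the device or its driver. A rough read-only sketch with Python's standard winreg module (Windows only; some subkeys may need elevation to read):

        import winreg

        ENUM = r"SYSTEM\CurrentControlSet\Enum\USB"

        with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, ENUM) as usb:
            for i in range(winreg.QueryInfoKey(usb)[0]):
                vid_pid = winreg.EnumKey(usb, i)            # e.g. VID_0FCA&PID_8004
                with winreg.OpenKey(usb, vid_pid) as dev:
                    for j in range(winreg.QueryInfoKey(dev)[0]):
                        inst = winreg.EnumKey(dev, j)       # device instance ID
                        with winreg.OpenKey(dev, inst) as inst_key:
                            try:
                                name, _ = winreg.QueryValueEx(inst_key, "FriendlyName")
                            except FileNotFoundError:
                                name, _ = winreg.QueryValueEx(inst_key, "DeviceDesc")
                            print(vid_pid, inst, name)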

    Read the article

  • Local references to old server name remain after Windows 2003 server rename

    - by imagodei
    I have a standalone Win 2003 server with Windows SharePoint Services (WSS3) running on it. I had to rename the server and had a bunch of problems resulting from this. Note that the server is not in an AD environment. The most obvious problems were with SharePoint, which didn't work. I was somewhat naive to think it would work in the first place, but OK - I've solved this using steps 1 & 3 from this site (TNX). Other curious behavior/problems remain. Most disturbing is that SharePoint isn't able to send email notifications to participants. I noticed there are several references to the old server name everywhere I look: in the registry and in the Windows Internal Database (MICROSOFT##SSEE). I see instances of the old server name in SharePoint Central Administration - Operations - Servers in Farm. There are references to the servers oldname.domain.local and oldname.local. On one of those servers there is also a Windows SharePoint Services Outgoing E-Mail Service (Stopped). Also, when I try to telnet locally to the mail server (Simple Mail Transfer Protocol (SMTP) service), I get the response:

        220 oldname.domain.local Microsoft ESMTP MAIL Service, Version: 6.0.3790.4675 ready at Tue, 15 Jun 2010 13:56:19 +0200

    IMO these strange naming problems are also the reason why email notifications from within SharePoint don't work. Can anyone tell me how to correct/replace those references to the old server name? Why is the email service insisting on the old name? Of course I would like to try it without reinstalling the server. TNX!

    Read the article

  • Router not connecting to the internet

    - by Peter
    I had a weird problem this morning with my router - it's an Alice Gate VoIP 2 Plus WI-FI - not connecting to the internet after more or less 1 hour online (I'm always connected with the ethernet cable). The LED status lights for Power, WI-FI, ADSL, Internet and Service should be on in order to be connected and navigate online. The problem was that the ADSL and Internet LEDs were off and I did not know why, because it had never happened before. I looked at the stats in the settings and the numbers for bytes/packets both sent and received were there and increasing, but I couldn't connect to the internet. I called tech support; they checked and told me to keep the router on for 48 hours because they were checking it. I reset it twice, before and after I called tech support, and it still did not work, so after about 2 hours of waiting I tried connecting using WI-FI and the LEDs 'magically' turned on, first ADSL and then Internet (the Internet LED always turns on last). At this point I'm curious what could have caused this, and I suspect the tech-support guys did something. What could have been the problem with the ethernet connection not working in the first place? It always works. What do the tech support guys normally do when they tell me to keep the router on so they can 'check it'? PS: I'm using Ubuntu 32-bit.

    Read the article

  • Correctly setting up UFW on Ubuntu Server 10 LTS which has Nginx, FastCGI and MySQL?

    - by littlejim84
    Hello. I want the firewall on my new webserver to be as secure as it needs to be. After doing research on iptables, I came across UFW (Uncomplicated FireWall). This looks like a better way for me to set up a firewall on Ubuntu Server 10 LTS, and seeing that it's part of the install, it seems to make sense. My server will have Nginx, FastCGI and MySQL on it. I also want to allow SSH access (obviously). So I'm curious to know exactly how I should set up UFW and whether there is anything else I need to take into consideration. After more research, I found an article that explains it this way:

        # turn on ufw
        ufw enable
        # log all activity (you'll be glad you have this later)
        ufw logging on
        # allow port 80 for tcp (web stuff)
        ufw allow 80/tcp
        # allow our ssh port
        ufw allow 5555
        # deny everything else
        ufw default deny
        # open the ssh config file and edit the port number from 22 to 5555, ctrl-x to exit
        nano /etc/ssh/sshd_config
        # restart ssh (don't forget to ssh with port 5555, not 22 from now on)
        /etc/init.d/ssh reload

    This all seems to make sense to me. But is it all correct? I want to back this up with any other opinions or advice to ensure I do this right on my server. Many thanks!

    Read the article

  • Multiple *NIX Accounts with Identical UID

    - by Tim
    I am curious whether there is a standard expected behavior, and whether it is considered bad practice, when creating more than one account on Linux/Unix with the same UID. I've done some testing on RHEL5 with this and it behaved as I expected, but I don't know if I'm tempting fate using this trick. As an example, let's say I have two accounts with the same IDs:

        a1:$1$4zIl1:5000:5000::/home/a1:/bin/bash
        a2:$1$bmh92:5000:5000::/home/a2:/bin/bash

    What this means is:

    - I can log in to each account using its own password.
    - Files I create will have the same UID.
    - Tools such as "ls -l" will list the UID as the first entry in the file (a1 in this case).
    - I avoid any permissions or ownership problems between the two accounts because they are really the same user.
    - I get login auditing for each account, so I have better granularity for tracking what is happening on the system.

    So my questions are:

    - Is this ability designed or is it just the way it happens to work?
    - Is this going to be consistent across *nix variants?
    - Is this accepted practice?
    - Are there unintended consequences to this practice?

    Note, the idea here is to use this for system accounts and not normal user accounts.
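
    The aliasing is easy to watch from Python's standard library on a box where the two example accounts above exist (names and UID follow the /etc/passwd lines shown; treat them as illustrative):

        import os
        import pwd

        a1 = pwd.getpwnam("a1")
        a2 = pwd.getpwnam("a2")
        print(a1.pw_uid == a2.pw_uid)   # True: one numeric identity, two names

        # Ownership is stored on disk as the number, so reverse lookups return
        # whichever entry appears first in /etc/passwd (a1 here), which is why
        # "ls -l" shows a1 even for files created by a2. Assumes /home/a2 is
        # owned by the shared UID.
        st = os.stat(a2.pw_dir)
        print(st.st_uid, pwd.getpwuid(st.st_uid).pw_name)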

    Read the article

  • How can I safely close this window and forever avoid seeing similar pop-ups from Mackeeper Zeobit's malware and spyware?

    - by Michael Prescott
    The attached image shows a window that just popped up and the only button available is the OK button. I could force-quit Safari, but I've got several sites open right now and don't want to try and find my place again. Besides, I've seen similar hacks in the past and I'd like to learn how to handle them in a way better than just a brute force-quit. I've never heard of MacKeeper or Zeobit, so I opened Firefox and did a few searches while Safari is obviously still stuck, waiting for me to click the sneaky OK button in the dialog window. Anyhow, at least the first few pages of most search results contain lots of blabbering from questionable witnesses about how MacKeeper saved them from some malware or spyware. However, any company that is hacking the browser to maliciously install their product is itself the criminal and not providing a true security application. So, there are three questions here:

    1. How can I close this window?
    2. Can I do something to Safari to avoid these hacks in the future?
    3. (Just curious) Is MacKeeper or Zeobit somehow loading the search results so that no information about their application being malware or spyware is listed? (I can't be the only person in the world that is offended by their tactics, even though it appears I am.)

    Read the article

  • Subdocument in Word won't save

    - by ChrisW
    Because I know Word has a history of not liking very large documents (my supervisor specifically told me not to use LaTeX... grr), I decided to learn the master document / subdocument feature of Word when writing my PhD thesis. I have the title page / table of contents etc. in the master document, and each chapter as a separate document. However, when I save the master document, it appears to save all the chapter documents apart from one (Chapter 4), for which it brings up the Save Document dialog box, helpfully with "Chapter4.docx" in the "Save as" box (n.b. Chapter4.docx is not open). Clicking Save does nothing, and doesn't make the dialog box go away. Saving as a different document means that my changes aren't reflected in the original document. There must be some reason Word doesn't like this particular document but I've got no idea why - there's nothing special in it that isn't in any of the other chapters. I have tried closing all documents, renaming Chapter4.docx, opening the master document, expanding all documents, OKing the warning that Chapter4.docx does not exist, and inserting the 'new' document, but even when I save the master document it still won't save the new Chapter4 document. If anyone knows any reason why Word is acting like this (or if I'm doing anything stupid), I'll be eternally grateful. (P.S. sorry for the long rambling message. It's late; I've been working on my PhD 4.5 years, I really really want to throw this computer out the window, and I hope people are kind enough not to downvote this question because of its rambling nature!)

    Update: With Word closed, I've tried to delete Chapter4.docx (having made a backup!) - but I get a warning that it can't be deleted because it's open in Microsoft Word... These files are on a network drive and the same problems are happening on 2 different computers. I could log in to the filestore through ssh and force the file to be deleted, but I'm curious to know why this is happening!

    Read the article

  • Applying ACLs to a Dovecot public namespace

    - by larsks
    I have a public namespace defined in my Dovecot (dovecot-2.0.9) configuration that looks like this:

        namespace {
          type = public
          separator = .
          prefix = news.
          location = maildir:/var/spool/news
          subscriptions = no
        }

    I would like to make all the mailboxes in this namespace read-only. I've got the following configuration for the ACL plugin:

        plugin {
          acl = vfile:/etc/dovecot/acls:cache_secs=300
        }

    After perusing the documentation, it seemed as if, given a mail folder /var/spool/news/.foo.bar, I could place the following into /var/spool/news/.foo.bar/dovecot-acl:

        anyone rl

    But that doesn't have any effect. I also tried creating a file /usr/local/etc/dovecot/acls/news.foo.bar with the same contents, but that didn't do anything either. I've turned on mail debugging:

        mail_debug = yes

    But the log doesn't produce anything that appears to be relevant to ACL processing. I'm curious to know if anyone has gotten this to work correctly and, if so, if you could provide some configuration examples. Also, if there's any way to do this that doesn't involve per-mailbox configuration (e.g., the ability to apply an ACL to news.* or something), that would be awesome. Getting the documented behavior for default ACLs working would be a step in the right direction.

    Read the article

  • MongoDB on 128mb 32-bit VPS (plus Tornado and Redis)

    - by apito
    I am curious about how MongoDB will perform on a limited VPS. Specifically, I'll deploy this configuration on a 32-bit Ubuntu 9.04 server with 128MB of memory (UPDATE: now I'm considering 360MB too):

    - nginx and redis
    - three instances of Tornado apps (one is for a mobile site; a limited app, not my primary audience); around 8 collections. A social webapp for my community.
    - mongodb

    Everything besides MongoDB seems to have a small footprint. Memory-mapping-wise, I don't know how MongoDB will behave. I know it's a little bit of a stretch to use this kind of config on a tiny VPS, but that's what I can afford for now. I expect to have.. hmm.. maybe ~50 15rps. I did my homework doing a lot of frontend optimizations, and YSlow says grade A 91 (ruleset V2) :-) Anyone willing to share experiences? E.g. how big the data set is when Mongo hits the ceiling, performance when Mongo does a lot of disk IO, etc. Thanks.

    UPDATE: this is my pet project. I'll get back to you when I next have spare time to do some httperf runs in a VirtualBox VM with the exact spec. Suggestions on how to do stress testing are welcome; I'm new to this kind of stuff.
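
    Once it is up, the memory-mapping question can be watched directly with pymongo (the connection string is a placeholder; the mem fields shown are the usual ones on the 32-bit builds of that era, where total mapped data is capped at roughly 2GB per mongod):

        from pymongo import MongoClient

        client = MongoClient("mongodb://localhost:27017")
        mem = client.admin.command("serverStatus")["mem"]

        # On 32-bit builds "mapped" is the number to watch: the total size of all
        # memory-mapped data files, which is what hits the ~2GB ceiling first.
        print("resident MB:", mem.get("resident"))
        print("virtual  MB:", mem.get("virtual"))
        print("mapped   MB:", mem.get("mapped"))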

    Read the article

  • VMware "boot screen" - add info like contact, phone number, etc.?

    - by TheCleaner
    I've tried searching Google and VMware's KB but maybe I'm not typing the right search criteria... I'm only finding ways to fix problems with booting or screen issues. On the default boot screen of a host it looks similar to this picture from GIS: I'm curious if it is possible somehow to make it look like this instead (adding custom details): I know for the most part the info is "useless" since the host is administered remotely, etc. But when I deploy standalone hosts to branch offices, it'd be nice for the local contact to see this type of info on the boot screen. I may also include the VMs hosted on it (again, on standalone hosts). Normal monitoring, etc. will be done remotely. This is strictly for odd times when the branch contact may say "the electricians are saying they need to turn off this circuit but I have no idea who to call in IT to tell them this box needs to be shut down" or similar. Anyone who has dealt with small branch offices can tell you that if it isn't labeled they easily forget what it is for and will simply say after the incident "I didn't know what it was or who to call." Possible?

    Read the article

  • How does Tunlr work?

    - by gravyface
    For those of you not in the US, Tunlr uses DNS witchcraft to allow you to access US-only (and UK-only stuff like BBC radio online) services and websites like Hulu.com, etc., without using traditional methods like a VPN or web proxy. From their FAQ:

        Tunlr does not provide a virtual private network (VPN). Tunlr is a DNS (domain name system) unblocking service. We're using sophisticated technologies (a.k.a. the Tunlr Secret Sauce ©) to re-adress certain data envelopes, tricking the receiver into thinking the envelope originated from within the U.S. For these data envelopes, Tunlr is transparently creating a network tunnel from your location to our U.S.-based servers. Any data that's not directly related to the video or music content providers which Tunlr supports is not only left untouched, it's also not even routed through Tunlr. In order to use Tunlr, you will have to change the DNS address. See Get started for more information.

    I can't really wrap my head around how this works; I have always assumed that these services performed a geolocation lookup via your client IP. I'm just really curious as to how this works.

    EDIT 2: I believe they're only proxying the initial geo check and then modifying the data stream request to include your real IP address so that the streaming is direct, not proxied.
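
    A quick experiment makes the mechanism visible: resolve the same names against your normal resolver and against the unblocking service's resolver and compare the answers. The idea, consistent with the FAQ quoted above, is that only the handful of hostnames involved in the provider's geo/entitlement check come back pointing at the service's own proxies, while everything else resolves normally. A sketch with dnspython (the resolver IP and hostnames are placeholders, not Tunlr's real addresses):

        import dns.resolver  # pip install dnspython

        def lookup(name, server=None):
            r = dns.resolver.Resolver(configure=(server is None))
            if server:
                r.nameservers = [server]
            return sorted(rd.to_text() for rd in r.resolve(name, "A"))

        for name in ("www.hulu.com", "www.example.com"):
            print(name)
            print("  system resolver:", lookup(name))
            print("  unblocking DNS :", lookup(name, "198.51.100.53"))  # placeholder IP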

    Read the article
