Search Results

Search found 4426 results on 178 pages for 'bunch'.


  • Are there any Microsoft Exchange Clients for iOS and Android that store their local data in an encrypted manner?

    - by Zac B
    I don't feel like this is a product recommendation question, more of a "does this tech even exist and is it feasible" question, but if I'm wrong, feel free to give this question the boot. Context: Our company has a bunch of traveling employees who access the company's Exchange server via their iDevices or Android phones, but because of the data protection laws in the state where our company is based (and the nature of the data our company works with), a recent security audit found that all mobile devices (laptops, phones, etc.) operated by our company need to have all company correspondence and related data encrypted all the time. For laptops, that was easy: BitLocker or TrueCrypt, problem solved. For phones and tablets, however, I'm stumped. Sure, you can put lock screens/passwords on the phones, but the data is still accessible via external extraction, as law enforcement authorities already know. Question: Are there any clients for Microsoft Exchange that run on iOS or Android which store local data encrypted? The people using our mobile devices do a lot of their work while offline, so just giving them OWA access with SSL connection security isn't enough. Are there apps/technologies that present an additional login credential prompt to decrypt locally stored data in the app's storage area on the phone? My gut reaction when I started looking into this was "that doesn't sound like something Apple would allow into the App Store", but I've been wrong before...

    Read the article

  • Running multiple services on different servers with IPv6 and a FQDN

    - by Mark Henderson
    One of the things NAT has permitted us to do in the past decade is split physical services onto different servers whilst hiding behind a single interface. For example, I have example.com behind a NAT on 192.0.2.10. I port-forward :80 and :443 to my web server. I also port-forward :25 to my mail server, :3389 to a terminal server, and :8080 to the web interface of my computer that downloads torrents, and the story goes on. So I have 5 port forwardings going to 4 different computers behind example.com. Then, I go and get me some neat IPv6. I assign example.com an IPv6 address of 2001:db8:88:200::10. That's great for my websites, but I want to go to example.com:8080 to get to my torrents, or example.com:3389 to log on to my terminal server. How can I do this with IPv6, given that there is no NAT? Sure, I could create a bunch of new DNS entries for each new service, but then I have to update all my clients who are used to just typing example.com to get to either the website or the terminal server. My users are dumber than two bricks, so they won't remember to connect to rdp.example.com. What options do I have for keeping NAT-style functionality with IPv6? In case you haven't figured it out, the above scenario is not a real scenario for me, or perhaps anyone yet, but it's bound to happen eventually. You know, with devops and all.
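    For context, the "bunch of new DNS entries" option mentioned above would look roughly like the zone-file fragment below. All names and addresses here are made up; it only illustrates the one-hostname-per-service approach, not a NAT replacement.

        $ORIGIN example.com.
        ; one AAAA per service host, since there is no NAT to hide the split behind
        @         IN AAAA 2001:db8:88:200::10   ; web server (:80/:443)
        mail      IN AAAA 2001:db8:88:200::11   ; mail server (:25)
        rdp       IN AAAA 2001:db8:88:200::12   ; terminal server (:3389)
        torrents  IN AAAA 2001:db8:88:200::13   ; torrent web UI (:8080)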

    Read the article

  • Postfix: change sender in queued messages

    - by ring0
    Following a complete re-installation we got a problem with the configuration: the sender address was wrong and some recipients (mail servers) rejected the messages. So there is a bunch of mails stuck in the Postfix queue. Ideally, I would change the sender address directly in the queued mails and then flush the queue. I tried this answer that addresses this very problem, but messages don't seem to be easily modifiable in the version I have (2.11.0). For instance there is no /var/spool/mqueue dir, but, instead, /var/spool/postfix/ containing active bounce corrupt defer deferred dev etc flush hold incoming lib maildrop pid private public saved trace usr, and the dir of interest is deferred. I tried to modify a few files there, changing the wrong domain to the correct one (and was careful to ensure only those strings were changed). But then those mails were moved to corrupt, meaning that a simple text change doesn't seem to work (done with vi). Is there any other, cleaner way to change the sender in queued mails?
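    For what it's worth, the usual workaround when queue files can't be edited in place is to put the affected mails on hold, extract them, fix the sender, and re-inject them. A rough sketch follows; the queue ID and both addresses are placeholders, not values from the post.

        # stop Postfix from retrying the message while we work on it
        postsuper -h QUEUEID

        # dump headers and body to a plain file and fix the sender there
        postcat -bh -q QUEUEID > /tmp/QUEUEID.eml
        vi /tmp/QUEUEID.eml

        # re-inject with the corrected envelope sender, then delete the original
        sendmail -f correct-sender@example.com recipient@example.net < /tmp/QUEUEID.eml
        postsuper -d QUEUEID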

    Read the article

  • iSCSI SAN Implementation with several ESXi hosts and two Equallogic SANs

    - by Sergey
    I work for a small state college. We currently have 4 ESXi hosts (all made by Dell), 2 EqualLogic SANs (PS4000 and PS4100) and a bunch of old HP ProCurve switches. The current setup is very far from being redundant and fast, so we want to improve it. I read several threads but got even more confused. The ProCurve switches are 2824s. I know they don't support jumbo frames and flow control at the same time, but we have plans to upgrade to something like the ProCurve 3500yl. Any suggestions? I heard the Dell PowerConnect 6xxx series is pretty good, but I'm not sure how they compare to the HPs. There will be a 4-port EtherChannel (link aggregation) between the switches, and all control modules on the SANs will be connected to different switches. Is there anything that will make the setup better? Are there better switches than the ProCurve 3500yl that cost less than 5k? What kind of bandwidth can I expect between the ESXi hosts (they will also be connected to the 2824s with multiple cables) and the SANs?

    Read the article

  • What would keep a Microsoft Word AutoNew() macro from running?

    - by Chris Nelson
    I'm using Microsoft Office 2003 and creating a bunch of template documents to standardize some tasks. I know it's standard practice to put the templates in a certain place Office expects to find them, but that won't work for me. What I want is to have "My Template Foo.dot", "My Template Bar.dot", etc. in the "My Foo Bar Stuff" folder on a shared drive, and users will double click on the template to create a new Foo or Bar. What I'd really like is for the user to double click on the Foo template, be prompted for a couple of items related to their task (e.g., a project number), and have a script in the template change the name that Save will default to, to something like "Foo for Project 1234.doc". I asked on Google Groups and got an answer that worked... for a while. Then my AutoNew macro stopped kicking in when I created a new document by double clicking on the template. I have no idea why or how to debug it. I'm a software engineer with 25+ years of experience but a complete Office automation noob. Specific solutions and pointers to "this is how to automate Word" FAQs are welcome. Thanks.
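    For reference, the kind of AutoNew macro described here is only a few lines of VBA. The sketch below is a guess at its shape, with the prompt text and naming scheme invented, not the asker's actual code; if a macro like this silently stops firing, macro security settings or the template being opened rather than double-clicked are common suspects, but that is speculation.

        Sub AutoNew()
            ' Runs when a new document is created from the template (assuming macros are allowed to run)
            Dim projectNumber As String
            projectNumber = InputBox("Project number?")
            If Len(projectNumber) > 0 Then
                ' Saving once here fixes the file name that later Saves will use
                ActiveDocument.SaveAs FileName:="Foo for Project " & projectNumber & ".doc"
            End If
        End Sub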

    Read the article

  • Multiple VM environment for developing/testing

    - by Hippo
    I was asked to create a setup for automated deployment, configuration, and installation/updates of websites. A bunch of small websites will be bundled on one server; if more websites come along, a new server will be created... I decided to use Chef for this task. All servers will be running Ubuntu at the same version and configuration. The actual question: everything needs to be tested properly before starting live deployment, so what is the best virtualisation tool to run multiple (5-10) virtual machines on an Ubuntu laptop? Requirements: easy setup; fast cloning/snapshotting of VMs; all VMs should be easily connected to the internet and able to communicate with each other; open source/free would be great. So far I have looked into VirtualBox, which seems more geared towards desktop virtualisation (cloning not possible, every new machine needs to be installed), and VMware Player. Any suggestions? If there are any questions about what I am doing, please comment on this question and I will answer as soon as possible. This question is not about the actual setup, it is about a nice working environment.

    Read the article

  • How to configure Apache to let PHP handle OPTIONS HTTP requests?

    - by Robin Berjon
    In order to set up a proper test suite for CORS (cross-domain requests) I need to be able to handle the HTTP OPTIONS method directly from script. I therefore have a simple PHP script that detects the OPTIONS method, and reacts accordingly by outputting some specific headers. The PHP side is not a problem. If I use curl to issue GET/POST/HEAD/PUT/etc. requests they all go to the script and it clearly handles them fine. If I issue an OPTIONS request however, it never reaches the script: Apache immediately replies listing a set of methods that it believes to be appropriate for this resource. I can tell that the script isn't run (no logging, none of its output makes it to the response, etc.). I've been going through the Apache configuration, have made sure no applicable .htaccess is in the way, I've tweaked a bunch of things such as Limit/LimitExcept directives, but I can't get it to change its behaviour. I've also tried to find information on a technique from my youth that could have helped here: NPH (non-parsed headers) scripts; but apparently that has now disappeared (at least, I can't find any recent information about it that works). So the question is: how do I tweak Apache's configuration so that it will let my script handle OPTIONS?
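    One approach people use to push OPTIONS requests at a script is mod_rewrite keyed on the request method. A minimal sketch, assuming mod_rewrite is loaded and a handler at /cors-test.php (both the vhost name and the file name are invented); whether this wins over Apache's built-in OPTIONS response depends on the rest of the configuration, so treat it as a starting point rather than a fix.

        <VirtualHost *:80>
            ServerName cors.example.test
            DocumentRoot /var/www/cors

            RewriteEngine On
            # route every OPTIONS request to the PHP handler instead of letting Apache answer it
            RewriteCond %{REQUEST_METHOD} ^OPTIONS$
            RewriteRule .* /cors-test.php [L]
        </VirtualHost>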

    Read the article

  • Start daemon after specific samba share is mounted

    - by getack
    I asked this question on AskUbuntu, but it's not getting any traction there... so I'll try here as well: I have a homebrew headless NAS running 12.04. In it I have a bunch of disks that are presented as a Samba share thanks to Greyhole. If I want to do anything to the files within this share, I must do it through Greyhole so that everything is updated properly. Thus, the share must be mounted locally and then accessed from there if I want to work on the files from the local machine. I do this mounting automatically thanks to these instructions. I also have Deluge installed, which takes care of all my torrenting needs. Deluge's default download location is in this share, so that all the downloads are immediately available to the rest of the network. Obviously, for everything to work the share must be mounted, otherwise Deluge is going to have a problem downloading to it. The problem is, it seems like Deluge is starting before the shares are mounted when the system boots, so downloading/seeding does not continue automatically after boot. I have to log in and force a manual rescan and start on each torrent, otherwise all the torrents just hang on the error. Is there a way I can make Deluge start after the shares have been properly mounted? I looked into Upstart's emits functionality but I cannot seem to get it to work properly. Any advice?
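    For reference, on 12.04 mountall emits a mounted event per mount point, so an Upstart job along the lines of the sketch below is a usual starting point. The job name, mount point and Deluge user are all guesses for this setup, and it assumes the share is mounted via fstab/mountall rather than by a login script.

        # /etc/init/deluged.conf -- hypothetical job definition
        description "Start the Deluge daemon only once the Greyhole share is mounted"

        start on mounted MOUNTPOINT=/mnt/greyhole
        stop on runlevel [016]

        setuid deluge
        # -d keeps deluged in the foreground so Upstart can track it
        exec /usr/bin/deluged -d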

    Read the article

  • Using bind (named) as a public proxy server

    - by TrentDavis
    We have a Python DNS server that does a bunch of stuff to figure out the values it should return for various DNS records. This works nicely; however, as it is Python, the performance under high load won't be great. What I would like to do is have a "proxy" BIND server sit in front of it to return results to the public internet. This will cache the results (typically for 15 minutes, a few seconds for some records), so the load on the Python server will be greatly reduced, as it will only see one lookup per domain (only about 100 domains) every 15 minutes. The data in these domains changes a lot, so using a master zone won't work, as it would constantly be changing. I had something set up that looked like it would work great (using a forwarder for the zone), and tested it with dig etc., all going great. However, when we went to go live with it, things weren't working, and we figured out that named is not setting the "Authoritative" bit (fair enough, it is a forwarder). So my question is: can we tell BIND to set the Authoritative bit for forwarded domains? I have looked at all the doco I can find, and can't find anything about doing things this way. Most of the doco about using it as a proxy is for a LAN to the internet. Ideally I would like to use BIND as it is already there and installed (CentOS 5 servers). But at a pinch we could look at a different name server to do the work if it just can't be done with BIND. Thanks.
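    For reference, the forward-only setup being described looks something like the named.conf fragment below; the backend address and port are placeholders. Answers served this way come out of BIND's cache, so the AA bit stays clear, which is exactly the symptom in question.

        options {
            recursion yes;   // needed so named will chase the forwarders and cache the answers
        };

        zone "example.com" {
            type forward;
            forward only;
            forwarders { 127.0.0.1 port 5300; };   // the Python DNS server
        };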

    Read the article

  • Why do I get this message from chrome when navigating to https://www.amazon.com?

    - by Denis
    "This is probably not the site you are looking for! You attempted to reach www.amazon.com, but instead you actually reached a server identifying itself as *.voxcdn.com. This may be caused by a misconfiguration on the server or by something more serious. An attacker on your network could be trying to get you to visit a fake (and potentially harmful) version of www.amazon.com." Intermittently, I get a blank page when going to http://www.amazon.com. So I stuck an 's' in the URL, making it https://www.amazon.com, and got that message above (with the nice red screen) from Chrome indicating there might be some monkey business going on. After hammering on the URL a bunch of times and pulling it up in Chrome's developer tools to look at the network traffic, the URL (without the s) started behaving. The URL with the s just hangs, but the red screen no longer comes up. Some specs: I've got a MacBook Pro, Snow Leopard, Time Warner cable. I've had enough strange stuff happening over the past couple of months (google.com, youtube.com, amazon.com not coming up, or loading strange error messages with random reference numbers) that I finally decided to switch to OpenDNS. Still having problems, though.
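    Two quick checks that help separate a DNS/CDN issue from a local certificate problem; the commands are standard dig/OpenSSL calls and the output will obviously vary per network.

        # which address is this machine actually resolving for the site?
        dig +short www.amazon.com

        # who answers on 443, and what name is on the certificate it presents?
        openssl s_client -connect www.amazon.com:443 -servername www.amazon.com </dev/null 2>/dev/null \
          | openssl x509 -noout -subject -issuer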

    Read the article

  • Make Nginx fail when SSL certificate not present, instead of hopping to only available certificate

    - by Oli
    I've got a bunch of websites on a server, all hosted through nginx. One site has a certificate, the others do not. Here's an example of two sites, using (fairly accurate) representations of the real configuration:

        server {
            listen 80;
            server_name ssl.example.com;
            return 301 https://ssl.example.com$request_uri;
        }

        server {
            listen 443 ssl;
            server_name ssl.example.com;
        }

        server {
            listen 80;
            server_name nossl.example.com;
        }

    SSL works great on ssl.example.com. If I visit http://nossl.example.com, that works great too, but if I try to visit https://nossl.example.com (note the SSL), I get ugly warnings about the certificate being for ssl.example.com. By the sounds of it, because ssl.example.com is the only site listening on port 443, all requests are being sent to it, regardless of domain name. Is there anything I can do to make sure an nginx server directive only responds to domains it's responsible for?
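    A common way to stop nginx from funnelling every port-443 request into the only certificate-bearing vhost is an explicit default server for 443 that simply drops the connection. The sketch below assumes a self-signed placeholder certificate at a made-up path, which older nginx versions need just to load the block (newer versions can use ssl_reject_handshake instead); clients will still see a certificate warning before the connection is dropped, since the handshake completes before nginx can read the requested name.

        server {
            listen 443 ssl default_server;
            server_name _;

            # placeholder self-signed cert so nginx accepts this listener (hypothetical paths)
            ssl_certificate     /etc/nginx/ssl/placeholder.crt;
            ssl_certificate_key /etc/nginx/ssl/placeholder.key;

            # 444 closes the connection without sending a response
            return 444;
        }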

    Read the article

  • Communicating via Command Mode with IBM HS22 IMM via AMM

    - by MikeyB
    On previous-model blades that contained a BMC, I was able to communicate from our external management station via pass-through commands to the BMC to do things such as power blades on/off, set VPD parameters, reboot the BMC, etc. Now on the HS22, a bunch of things happen differently. For example, we can no longer use the same pass-through commands to write VPD information pages and have them persist across reboots of the IMM; it looks as though those VPD pages are populated from information contained in the IMM. How do we use the Advanced Settings Utility from an external host to communicate with HS22 IMMs? Alternatively, what TCP Command Mode commands do we need to send to the AMM to communicate with the IMM? For our purposes, we specifically cannot communicate with the IMM from the blade itself. Specific example: when I send a pass-thru IPMI command via the AMM to the blade BMC to write information (such as MTM, Serial) into VPD page 0x10, it persists on blades with a BMC (HS21 for example). I can send the same IPMI command to write data to the VPD page on the HS22, however it does not persist across reboots of the IMM. What IPMI commands do I need to send to the IMM? What IPMI commands is asu sending when it sets the MTM & Serial?

    Read the article

  • Exchange users moved mailbox now can't open some calendars

    - by Kip
    OK, so the environment is Exchange sp on a Windows Server 2003 server. This weekend we had to move a bunch of users off of one information store that was corrupt and onto a temp store, delete the original dodgy store, and then move the users back from the temp store to one of the three other stores under the same original storage group. Since then we are having some weird access issues relating to calendars. I am assuming it is all related, but it might not be. The problem is that users are unable to see calendars that they have previously had access to. The weird thing is that some of the users in question are not ones who have been moved, nor are they trying to access calendars that belong to people whose accounts have been moved; hence my assumption that it's related, but possibly not. The message received is "Unable to display the folder. The calendar folder could not be found." Here is the kicker: if I move someone who is trying to access other calendars to a different mailbox store (thereby creating a new email account and sending stuff over), things start to work again. This to me indicates a permissions problem, however I am unsure in what way. Looking for help out there please guys :) Cheers

    Read the article

  • Installing FIREFOX with extensions/addons manually? (not really auto install)

    - by BrownChiLD
    I've been reading around with regards to creating Firefox installers, bundling them with addons, using scripts and CLI options and a whole bunch of other stuff... but it seems that going that route is just too complicated and time consuming. Since I don't mind a bit of manual copying of files, I was planning to do the following on my test machine: 1) install Firefox and configure it the way I want it; 2) install addons and set the configurations for them; 3) set advanced configurations for Firefox (about:config). Then, once I'm all set, I simply copy the contents of the Firefox profile folder (for this particular test it's ....\AppData\Local\Mozilla\Firefox\Profiles\6m0mef0s.default). For deployment, all I have to do is: 1) install the same version (offline installer) of Firefox I used; 2) overwrite the contents of the new profile folder (randomly named by the Firefox installer as usual). This should set all my configs and addons, right? Or what other folders do I have to back up and copy manually into the new profile folder? I don't think I need to tinker with any registry entries, right? Anyway, if this works, though it's a bit manual, it's a whole lot simpler and more straightforward than fiddling with installers and packages, etc. PS: I do this a lot with other simple (and some complex) software that I use and it seems to work fine for years; I'm just not sure about Firefox and how it's structured.
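    The manual copy step itself is a single robocopy call; the sketch below uses the profile path quoted above on the source side and an invented randomly-named folder on the destination side, so both paths are illustrative only.

        :: copy the configured profile over the freshly created one (/E = include subfolders)
        robocopy "%LOCALAPPDATA%\Mozilla\Firefox\Profiles\6m0mef0s.default" ^
                 "%LOCALAPPDATA%\Mozilla\Firefox\Profiles\k3x9q1rz.default" /E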

    Read the article

  • Recommendations for good FTP server for Win 2008 x64

    - by sfhtimssf1970
    I spent a bunch of time learning/configuring the "all new and better" FTP feature for IIS7. In my opinion, it still fails hard: In order to have multiple FTP sites on the same machine, you have to use host|user usernames (like domain.com|jason) for every account. Using IIS Manager auth doesn't seem to work at all. I'm sure I'm doing something wrong, but I can't figure out what the hell it is; I've read all the official articles on it and configured it a hundred different ways. It doesn't play well with passive connection types; passive mode has to be disabled on the client in order for it to work. There's no way to allow one user to see multiple sites no matter what binding they are connected to. For instance, if "jason" connects to ftp.domain.com, he should be able to see domain2.com and domain3.com without seeing domain4.com and domain5.com. It takes an act of God to set this up with IIS7. So I'm wanting to install a third-party FTP server instead. I've looked at both FileZilla Server and zFTPServer. Does anyone know of any pros/cons for these? Any other recommendations?

    Read the article

  • What do different patterns mean in Windows 8 file copy dialog

    - by MainMa
    When copying or extracting files, Windows 8 shows a chart with the speed of the operation. I noticed several patterns: 1. Randomness ("nice mountains"). 2. High speed at the beginning, then low speed during most of the operation. 3. Low speed at the beginning, then high speed during most of the operation (similar to the previous pattern, but inverted). 4. Mostly constant speed (same as the previous pattern, but without the fast start). I'm curious: what does each of those patterns mean? Do some indicate that there may be a problem with hard disk performance? Why is the nearly constant speed so rare, even when copying a single large file from and to a spinning drive, or when copying a single large file or a bunch of small files from and to an SSD?

    Read the article

  • How to write re-usable puppet definitions?

    - by Oliver Probst
    I'd like to write a Puppet manifest to install and configure an application on target servers. Parts of this manifest shall be re-usable, so I used define for my re-usable functionality. Doing so, I always have the problem that there are parts of the definition which are not re-usable. A simple example is a bunch of configuration files to be created. These files must be placed in the same directory, and this directory must be created only once. Example:

        # nodes.pp
        node 'myNode.in.a.domain' {
          mymodule::addconfig { 'configfile1.xml':
            param => 'somevalue',
          }
          mymodule::addconfig { 'configfile2.xml':
            param => 'someothervalue',
          }
        }

        # mymodule.pp
        define mymodule::addconfig ($param) {
          $config_dir = "/the/directory/"

          # ensure that the directory exists:
          file { $config_dir:
            ensure => directory,
          }

          # create the configuration file:
          file { $name:
            path    => "${config_dir}/${name}",
            content => template('a_template.erb'),
            require => File[$config_dir],
          }
        }

    This example will fail, because the resource file { $config_dir: ... } is now defined twice, once per addconfig instance. As far as I understood, it is required to extract these parts into a class. Then it looks like this:

        # nodes.pp
        node 'myNode.in.a.domain' {
          class { 'mymodule::createConfigurationDirectory': }

          mymodule::addconfig { 'configfile1.xml':
            param   => 'somevalue',
            require => Class['mymodule::createConfigurationDirectory'],
          }
          mymodule::addconfig { 'configfile2.xml':
            param   => 'someothervalue',
            require => Class['mymodule::createConfigurationDirectory'],
          }
        }

    But this makes my interface hard to use. Every user of my module has to know that there is an additional class which is required. For this simple use case the additional class might be acceptable, but with growing module complexity (lots of definitions) I'm a bit afraid of confusing the module's users. So I'd like to know: is there a better way to handle these dependencies? Ideally, classes like createConfigurationDirectory would be hidden from users of the module's API. Or are there some other "best practices"/patterns for handling such dependencies?
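    A hedged sketch of one common pattern (class and path names mirror the example above and are otherwise made up): move the shared directory resource into a small class and include it from inside the define. include is idempotent, so every addconfig instance can pull in its own prerequisite without the node having to know about the helper class.

        # sketch only -- not a drop-in replacement for the module above
        class mymodule::config_dir {
          file { '/the/directory':
            ensure => directory,
          }
        }

        define mymodule::addconfig ($param) {
          include mymodule::config_dir    # idempotent: safe to evaluate from every instance

          file { $name:
            path    => "/the/directory/${name}",
            content => template('a_template.erb'),
            require => Class['mymodule::config_dir'],
          }
        }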

    Read the article

  • All browsers refusing to load a specific image on a webpage?

    - by Johnson
    Out of nowhere today, all 3 of my browsers (FF/Chrome/IE, OS = Win7 x64) are refusing to load the homepage of interfacelift.com correctly. It works fine on other PCs in the house (on the same network), so it is definitely related to this one PC. The browser won't load the main image on the page correctly (even though the source code looks good); however, if I direct the browser to the exact location of that image, it displays fine. So obviously I can get the HTML index (which locates the resource) and I can get to the resource. So why the heck isn't it displaying properly on the index page? It's almost as if the HTML rendering engine has gone bad in all 3 browsers at once. I've browsed to a bunch of other sites (including sites very heavy on JS, with HTML much more complex than the one in question here) and am seeing nothing funny. The only thing wonky I've done with my PC in the past several hours was replacing the system file Magnifier.exe with a copy of cmd.exe while playing around with some of the ideas mentioned in this guide. However, I've since restored the files to their previous state, and I don't know how Magnifier would be related to this even if I hadn't restored it. Any ideas? I'm stumped! EDIT: Here is what the broken page looks like in Chrome. And here is the image loaded correctly by itself.

    Read the article

  • Which Firefox add-on is responsible for a rendering bug?

    - by Gilles
    I've found a page that isn't rendered correctly by Firefox with my usual profile. It is rendered correctly with a blank profile. I have quite a few add-ons. One of them is surely the culprit. How can I find out which? Userscripts often affect the rendering. But I turned off Greasemonkey, and it didn't help. So it's something else, presumably an extension (what else could it be? I have no chrome/userChrome.css.). I'm looking for an easy way to find out which one, easier than disabling a bunch of extensions and restarting umpteen times. Related: Create a tool to help users identify a problematic add-on by bisecting the list of installed add-ons — a similar problem which would admit a similar solution. I want to automate this as much as possible; something like git bisect, that doesn't require me to change my actual profile, would be ideal. A Linux-specific solution is fine with me.
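    One way to approximate a git-bisect workflow without touching the real profile is to clone it and knock half the extensions out per run. A rough sketch, assuming a Linux box; the profile path, the test URL, and the "first half" selection are all placeholders to adjust by hand each round.

        #!/bin/sh
        # work on a throwaway copy of the profile, never the real one
        SRC="$HOME/.mozilla/firefox/xxxxxxxx.default"
        WORK="$(mktemp -d)"
        cp -a "$SRC" "$WORK/profile"

        # disable half of the installed extensions by moving them aside, then test the page;
        # repeat, narrowing the half that still shows the bug, until one extension is left
        mkdir "$WORK/disabled"
        ls "$WORK/profile/extensions" | head -n 5 | while read -r ext; do
            mv "$WORK/profile/extensions/$ext" "$WORK/disabled/"
        done

        firefox -no-remote -profile "$WORK/profile" "http://example.com/broken-page"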

    Read the article

  • "The zone can be scavenged after" keeps incrementing

    - by kce
    What are you trying to do? I'm trying to enable DNS scavenging on a DNS zone that has about a hundred stale DNS records. What have you tried in order to make it happen? I set up DNS scavenging per everyone's favorite TechNet blog post: Don't be afraid of DNS Scavenging. Just be patient. I first disabled scavenging on all of our domain controllers: DNSCmd . /ZoneResetScavengeServers contoso.com 192.168.1.1 192.168.1.2 I then enabled automatic scavenging on the DNS zone. I then enabled DNS scavenging on one of the domain controllers. I then found a few records that I expected to get deleted, with timestamps from a few years ago, and ensured that "Delete this record when it becomes stale" was checked and that the time stamp was actually set. Finally, I reloaded the zone and waited 14 days (the sum of the Refresh + No-Refresh periods). What results did you expect? I expected to see a 2501 event in the DNS server logs noting the deletion of a bunch of DNS records. What actually happened? Nothing happened. The Zone Aging/Scavenging Properties showed last week that the zone could be scavenged after 6/12/2014 10:00:00 AM. No 2501/2502 events were recorded. All of the records with "aged" time stamps are still present. The date at which the zone can be scavenged after has since incremented another seven days to 6/18/2014 10:00:00 AM. As I understand it, until that date stays at least 14 days in the past, nothing will ever even be eligible for scavenging, let alone actually be scavenged. The only 2501 events recorded in the event logs are ones that I have triggered by right clicking and selecting "Scavenge Stale Resource Records". They note that scavenging will try to run again in 168 hours, which was this morning. I have had DNS scavenging enabled for a few months and have waited patiently for something to happen. I have reloaded the zone multiple times (which resets this timestamp). What am I missing here?
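    A few dnscmd calls show the same aging data as the GUI dialogs referred to above, plus the switch that forces a pass instead of waiting out the interval; the zone name is the one used in the post.

        rem server-wide settings, including ScavengingInterval and default aging behaviour
        dnscmd . /Info

        rem per-zone aging flags, refresh/no-refresh windows and the "scavenge available" time
        dnscmd . /ZoneInfo contoso.com

        rem force an immediate scavenging pass on this server
        dnscmd . /StartScavenging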

    Read the article

  • Providing access to a Samba server for VPN clients

    - by Kamil Kisiel
    We have some Windows users that connect to our network via VPN from home. They need to be able to connect to our Samba server and access a mapped network drive just as they do as when they are on our LAN. The complication is that VPN clients are placed on a subnet other than our office LAN, and behind a firewall. What's the easiest way for me to allow them to still connect to the network share? The solutions I've currently seen involve setting up a WINS server for name resolution purposes and then tunnelling a bunch of the NetBIOS stuff through the firewall. However that means I'd have to set up the VPN DHCP server to hand out the WINS address, something I'm not even sure is possible on the Cisco hardware we have. I'm thinking there must be an easier way. Should I use an LMHOSTS file? Or just map by IP address? Also, I'm not terribly familiar with Windows networking, so which ports would I need to pass through my firewall in order to get the file sharing through?
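    As a point of reference, plain SMB over TCP from the VPN pool to the Samba host needs very few ports; 445/tcp alone is enough if NetBIOS names and browsing are skipped (e.g. mapping by IP or DNS name). A hypothetical IOS-style ACL fragment, with every address made up:

        ! SMB over TCP (no NetBIOS) -- usually sufficient for a mapped drive
        permit tcp 10.10.20.0 0.0.0.255 host 192.168.1.5 eq 445
        ! only needed if clients must use NetBIOS names / network browsing
        permit tcp 10.10.20.0 0.0.0.255 host 192.168.1.5 eq 139
        permit udp 10.10.20.0 0.0.0.255 host 192.168.1.5 eq 137
        permit udp 10.10.20.0 0.0.0.255 host 192.168.1.5 eq 138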

    Read the article

  • Apache debugging: where to find error logs?

    - by AP257
    I'm new to Apache and web serving generally, so apologies if this is a very stupid question. I want to configure a new sub-domain on a working site and install a forum there. I'm using a Debian server that already has Apache, mod_wsgi and a bunch of virtual hosts successfully running on it. I first installed my forum app (Django's OSQA). Following the OSQA instructions, I then created an Apache config file that specified ServerName as the new sub-domain. I also created a .wsgi file for the app, and pointed WSGIScriptAlias at it. I then restarted Apache. However, when I go to the new sub-domain, I get a 404 error message. Two questions: Is there a step missing above? Or is simply creating a new Apache config file in sites-available enough to 'tell' Apache about a new sub-domain? If there's something else going wrong, how can I debug it? The ErrorLog and CustomLog specified in the config file are both blank. apache2.conf, which I guess is Apache-wide configuration, specifies ErrorLog /var/log/apache2/error.log, but this is yet another blank file.
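    One step that is easy to miss on Debian: a file in sites-available does nothing until it is linked into sites-enabled (a2ensite does that) and Apache is reloaded. A small sketch, with the vhost file name invented for illustration:

        # enable the new vhost and reload Apache (site file name is hypothetical)
        a2ensite forum.example.org
        apache2ctl configtest && /etc/init.d/apache2 reload

        # list the vhosts Apache actually knows about, including their ServerNames
        apache2ctl -S

    If the new ServerName shows up in that list but the sub-domain still 404s with empty logs, the request is probably being answered by a different vhost, whose logs are then the ones worth checking.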

    Read the article

  • Two large, linked Excel files take 30 minutes to save, except in VMWare environment

    - by Gerald L
    I support some tax consultants who love to use Excel when they should probably be using Access. Anyway, they have created two Excel files, A and B. File B has cells linked to file A. File A is 27 MB and file B is 16 MB. One worksheet has roughly 1 million rows, and another worksheet does a whole bunch of SUMIFs over those 1 million rows. Not the best idea, but whatever. Both Excel files open and recalculate within a reasonable amount of time (1-2 minutes). For files that large, this is acceptable. Here is the problem: once you change a cell and save file B, it takes a solid 30 minutes to save the file, and the processors are going at full speed. I've tried this on 6 different machines, all running Windows XP SP3 with Office 2007 SP2 and all patches. The specs vary from one machine with 512 MB of RAM to a machine with 4 GB of RAM and quad processors. Same result every time. Here is the clincher: if I do the same save operation on a VMware virtual machine, the file gets saved in 1 minute. I've tried this with my ESX servers at the office, my Mac running VMware Fusion at home, and VMware Workstation at the office. It does not matter how much RAM the virtual machine has... it saves in about 1 minute every time. Does anybody have any idea why this is happening and how to fix it?

    Read the article

  • Keyboards for kiosk/outdoor/abusive environments?

    - by Justin Scott
    We have a bunch of kiosks deployed into, let's just say... abusive environments. The enclosures we had built are tough as nails, and the HP thin client computers are working great. The keyboards that were purchased for the project have been nothing but problems. They're a generic brand direct from a Chinese manufacturer: stainless steel with keys mounted from the inside and a trackball. But they've been deployed for only a month and nearly 20% of them are already out of service due to keys sticking, keys not working, trackball problems, water damage, and a variety of other issues. Are there any kiosk keyboards that can take a beating without breaking so easily? Ideally they should be tamper-proof (keys can't be removed) and waterproof, with lettering engraved into the keys and a trackball; an option for a single mouse button would be nice, plus some protection to keep debris out of the keys so they don't stick (sticky cleaners, food debris, etc.). Does such a beast exist? Everything we've looked at is susceptible to easy damage. We need the M1 Abrams tank of keyboards. Any suggestions?

    Read the article

  • Nginx try_files or else continue matching against locations?

    - by Yang
    I'm wondering whether this is possible with Nginx: I just added a directory with a bunch of HTML files (foo.html, bar.html) that I'd like to serve with /foo, /bar, etc. If the URL doesn't match up with a file name I'd like to fall back to whatever the next best matching location would be. So I have:

        # This block is newly added.
        location ~ ^/([^/]+)$ {
            default_type text/html;
            alias /blah/$1.html;
        }

        # Our long list of existing subsystems below....
        location /subscribe {
            proxy_pass http://127.0.0.1:5000;
        }

        location /upload {
            proxy_pass http://127.0.0.1:8090;
            proxy_read_timeout 99999;
        }

        location ~ /(data|garbage|blargh).* {
            proxy_pass http://127.0.0.1:8090;
            proxy_read_timeout 99999;
            auth_basic text;
            auth_basic_user_file /etc/nginx/htpasswd;
        }
        ....

    The problem is that the first regex now eats up the URLs that would've gone to other locations, as per the documented behavior of location. One approach is to maintain the full explicit list of files in the first location block, but this list is quite large and is always changing. Is there a way to check to see if the file exists first, and if not, then continue with what would've been the next-best location match? I took stabs using try_files (including using a @fallback and nesting locations in there) but I don't think it's capable of doing this. However I thought I'd ask here in case I'm missing something. (Or maybe there's another better approach altogether.)
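    For what it's worth, the try_files route mentioned above does work when the fallback can be a single named location, which is exactly its limitation here. A sketch along those lines, with the backend address copied from the upload block purely as an example:

        location ~ ^/([^/]+)$ {
            root /blah;
            default_type text/html;
            # serve /blah/<name>.html if it exists, otherwise hand off to ONE fallback
            try_files /$1.html @fallback;
        }

        location @fallback {
            proxy_pass http://127.0.0.1:8090;
            proxy_read_timeout 99999;
        }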

    Read the article
